Security usability

This is a quick page on my feelings towards security and how most security software fails to be usable.

I want to note that while the background and final sections talk about Android, you don't need to follow them to get the main point of this article; Android is just used as a concrete example.

== Background ==

Recently I read the article [https://wonderfall.dev/fdroid-issues/ F-Droid: how is it weakening the Android security model?], which provides a critique of F-Droid's security model and recommends people use the Google Play Store instead.

The GrapheneOS developers provided a similar critique, but it contains numerous uncorrected errors. Instead of correcting this information they have chosen to [https://twitter.com/SylvieLorxu/status/1497624955705565188 threaten SylvieLorxu with legal action] for pointing out these mistakes. I strongly recommend reconsidering any trust towards GrapheneOS and its developers given this behaviour.

== Usability ==

Security software almost always asks people to do some of the following:

* Verify authenticity of some data
* Remember sensitive data
* Store sensitive data securely

Unfortunately people are imperfect and fail to do these things, not for lack of trying.

Security developers take three approaches to deal with this:

* Train people to make fewer mistakes
* Design software to catch mistakes
* Lessen the impact of mistakes

Together all three of these are used to make security software usable.

== Key management ==

It's hard to discuss any security solution without discussing key management, so allow me to sidetrack for a minute.

Keys are private tokens used in almost all modern security software to gain some useful security property such as confidentiality or authenticity. Unfortunately almost all modern security software requires manual key management. This dumps a few tasks on people.

The first task is verifying keys. There are a few ways to do this:

* Skip verifying the key
* Send the key using another communication service or method
* Ask for the key from someone you trust
* Meet the person in real life and exchange the key directly
* Verify the key incorrectly

If I had to guess which method is the most common, it's skipping verification. This is the option I pick all the time now for two simple reasons: It's easy, and it's reliable.
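
To make the tedium concrete, here is a minimal sketch of what out-of-band verification looks like in practice. This is an illustration in Python, not the behaviour of any particular tool; the key value is made up, and the fingerprint is assumed to arrive through a channel you already trust (a phone call, a business card, a website).

<syntaxhighlight lang="python">
import hashlib

def fingerprint(public_key: bytes) -> str:
    """Derive a short, human-comparable fingerprint from a public key."""
    digest = hashlib.sha256(public_key).hexdigest()[:32]
    # Group into 4-character chunks so it can be read out loud.
    return " ".join(digest[i:i + 4] for i in range(0, 32, 4))

# The key that arrived over the untrusted channel (illustrative value).
received_key = bytes.fromhex("9f2b" * 16)

# The fingerprint the other person read to you over the phone,
# printed on a business card, or posted somewhere you trust.
spoken = input("Fingerprint they gave you: ").strip().lower()

if fingerprint(received_key) == spoken:
    print("Match: the key you received is probably authentic.")
else:
    print("MISMATCH: someone may be tampering with the exchange.")
</syntaxhighlight>

Every step here is a chance to get it wrong: mistyping the fingerprint, comparing only the first few characters, or receiving the fingerprint over the same channel the key came from.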

The second task is backing up keys. People have to:

# Create a secure storage location
# Copy the keys to the location
# Back up the secure storage location as well

Unless keys are used for something very important like signing packages or cryptocurrencies, people don't put much effort into this task. Skipping it can result in wasted time, loss of data, or even financial loss.

People who do take steps to back things up must have enough knowledge to do it securely and to create redundant backups. Doing this wrong (such as by backing up a key to cloud storage) can result in compromised keys.
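
As a sketch of what "doing it securely" involves, here is one way to passphrase-encrypt a key before copying it to a backup location, assuming Python and the third-party cryptography package. Note the catch: the passphrase becomes one more secret you have to remember, and losing it loses the backup.

<syntaxhighlight lang="python">
import base64
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

def wrap_key(secret_key: bytes, passphrase: bytes) -> bytes:
    """Encrypt a key under a passphrase so the backup location
    never sees the plaintext key."""
    salt = os.urandom(16)
    kdf = Scrypt(salt=salt, length=32, n=2**15, r=8, p=1)
    wrapping_key = base64.urlsafe_b64encode(kdf.derive(passphrase))
    # Store the salt alongside the ciphertext for later re-derivation.
    return salt + Fernet(wrapping_key).encrypt(secret_key)

def unwrap_key(blob: bytes, passphrase: bytes) -> bytes:
    salt, token = blob[:16], blob[16:]
    kdf = Scrypt(salt=salt, length=32, n=2**15, r=8, p=1)
    wrapping_key = base64.urlsafe_b64encode(kdf.derive(passphrase))
    return Fernet(wrapping_key).decrypt(token)

backup = wrap_key(b"my signing key", b"a long random passphrase")
assert unwrap_key(backup, b"a long random passphrase") == b"my signing key"
</syntaxhighlight>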

The third task is revoking and rotating keys. People have to:

* Replace keys regularly in case of unknown compromise
* Revoke keys in case of known compromise

As far as I know almost no security software supports doing these tasks in the first place. That means if someone steals your key they can impersonate you or access your resources for an unlimited amount of time. The only way around this is to inform people through social networks and other insecure communication methods that your old key is compromised and you have a new one, then go through the steps of verifying and backing up the keys all over again. Yikes.
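
For illustration, here is roughly what planned rotation could look like if software supported it, assuming Python with the PyNaCl package: the old key signs a statement endorsing the new one, so anyone who verified the old key can carry their trust over. It does nothing for a key that has already been stolen, which is exactly the gap described above.

<syntaxhighlight lang="python">
from nacl.signing import SigningKey

# Assumption: rotation happens while the old key is still uncompromised.
old_key = SigningKey.generate()
new_key = SigningKey.generate()

# The old key endorses the new verify key.
statement = b"rotation: my new verify key is " + bytes(new_key.verify_key)
signed = old_key.sign(statement)

# A peer who already trusts old_key.verify_key checks the handover.
old_key.verify_key.verify(signed)  # raises BadSignatureError if forged
print("Rotation statement verified; trust can move to the new key.")
</syntaxhighlight>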

== Trust ==

Requiring people to manage keys themselves is asking for a lot of trouble and mistakes. So why do it?

The answer is simple: Trust. Ask yourself:

* Who do you trust to verify keys for you?
* Who do you trust to back up your keys?
* Who do you trust to revoke and rotate your keys?

Whoever or whatever you trust to accomplish these tasks becomes another link in the chain of security, and if that link is compromised then so are you. Security software that uses manual key management tries to avoid adding links to this chain of trust and instead acts as a tool: a tool that's only as secure as the person using it. If you're diligent then the software won't betray you, but if you're sloppy then the software won't protect you.

My problem with this answer is that it brings up another question: Why doesn't the software mimic the trust I already have as a person?

* I trust most social media services I use not to lie to me about keys. Why can't I ask software to check various websites and verify a key that way? This is how I would verify keys anyway if people posted their keys online.
* I trust services to hold my keys in portions so that if I lose them I can recombine them. Why can't I ask software to distribute pieces of a key to my friends and have them give the pieces back if I lose it? This is already how distributed cloud storage works, and the technique is well understood (see the sketch after this list).
* I trust my social media services or instant messaging services to inform me if someone has lost a key or had one compromised. Why can't I ask software to handle that for me? Again, I already do this, just manually.
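
That second point describes a real, well-understood technique: threshold secret sharing. Here is a toy sketch of Shamir's scheme in Python, just to show the idea is simple; an actual deployment should use a vetted library rather than code like this.

<syntaxhighlight lang="python">
import secrets

# 2**521 - 1 is a Mersenne prime, comfortably larger than a 256-bit key.
PRIME = 2**521 - 1

def split(secret: int, total: int, threshold: int):
    """Split a secret so any `threshold` of `total` shares recover it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, total + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 reconstructs the secret."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = int.from_bytes(secrets.token_bytes(32), "big")  # a 256-bit key
shares = split(key, total=5, threshold=3)  # one share per friend
assert recover(shares[2:]) == key          # any three shares suffice
</syntaxhighlight>

The hard part was never the math; it's the social plumbing of handing shares to friends and asking for them back, which is exactly what usable software could automate.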

The only answer I can really come to is that there's a difference in worldview between security developers and me. After all, security software is a technical solution to a social problem. Instead of working on building trustable systems, security software seems to be built for people who trust nobody but themselves. Which isn't how humans work.

A lot of this makes sense once you look into the people who actually develop security software: they're almost always knee-deep in crypto-anarchism and other libertarian ideologies that hold the individual as the sole authority over their own life, rejecting things like mutual aid and social structures. These developers have an explicit distrust of authorities, big or small.

== Should you use F-Droid? ==

Now that I've explained how I feel about security and usability, I'm going to circle back to the article I mentioned at the start of this page. To summarize, the article spends its time explaining how F-Droid as a project is technically inferior to Google Play and how its process of curating and building applications has no advantages over Google Play.

It proposes that people:

* Assume applications might be malicious or exploitable
* Pay close attention to the permissions they grant applications
* Download applications from GitHub or the Play Store
* Verify signatures using apksigner upon install
* Sandbox Play services using GrapheneOS

This conclusion demonstrates a strange detachment from reality that security developers tend to have: it's all based on a theory where an individual manages their own security process and vets everything manually.

Reality is a different story. In reality:

* People install whatever applications solve their issues
* People grant whatever permissions those applications ask for
* Google Play is filled with malware; F-Droid is not

For an actual person, suggesting they use Google Play seems like a disaster waiting to happen.

F-Droid may have worse security technologically, but it has much better security socially.