Security usability

This is a quick page on my feelings towards security and how most security software fails to be usable.

I want to note that while the background and final sections talk about Android, you don't need to follow them to get the main point of the article; Android is just used as a concrete example.

== Background ==

Recently I read the article [https://wonderfall.dev/fdroid-issues/ F-Droid: how is it weakening the Android security model?], which provides a critique of F-Droid's security model and recommends people use the Google Play Store.

The GrapheneOS developers provided a similar critique, but it contains numerous uncorrected errors. Instead of correcting this information they have chosen to [https://twitter.com/SylvieLorxu/status/1497624955705565188 threaten SylvieLorxu with legal action] for pointing out these mistakes. I strongly recommend reconsidering any trust towards GrapheneOS and its developers given this behaviour.

== Usability ==

Security software almost always asks people to do some of the following:

* Verify authenticity of some data
* Remember sensitive data
* Store sensitive data securely

Unfortunately people are imperfect and fail at these tasks, and not for lack of trying.

Security developers take three approaches to deal with this:

* Train people to make fewer mistakes
* Design software to catch mistakes
* Lessen the impact of mistakes

Together all three of these are used to make security software usable.

== Key management ==

It's hard to discuss any security solution without discussing key management, so allow me to sidetrack for a minute.

Keys are private tokens used by almost all modern security software to provide some useful security property such as confidentiality or authenticity. Unfortunately, almost all of this software requires manual key management, which dumps a few tasks on people.

The first task is verifying keys. There are a few ways people handle this:

* Skip verifying the key
* Send the key using another communication service or method
* Ask for the key from someone you trust
* Meet the person in real life and exchange the key directly
* Verify the key incorrectly

If I had to guess which method is the most common, it's skipping verification. This is the option I pick all the time now for two simple reasons: It's easy, and it's reliable.
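
To make this concrete, here's a minimal sketch of what key verification usually amounts to: comparing a fingerprint (a hash) of the key you received against one obtained over a separate channel. The function names and the usage example are hypothetical, not any particular tool's scheme.

<syntaxhighlight lang="python">
import hashlib
import hmac

def fingerprint(public_key: bytes) -> str:
    """Derive a short, human-comparable fingerprint from a public key."""
    return hashlib.sha256(public_key).hexdigest()

def verify_key(received_key: bytes, trusted_fingerprint: str) -> bool:
    """Compare the received key's fingerprint against one obtained
    out-of-band: another service, a trusted friend, or in person."""
    # compare_digest avoids leaking the match position through timing
    return hmac.compare_digest(fingerprint(received_key), trusted_fingerprint)

# Hypothetical usage: the trusted fingerprint would be read to you
# over a phone call, printed on a business card, and so on.
alice_key = b"public key bytes received over the network"
spoken_fingerprint = fingerprint(alice_key)  # stand-in for the phone call
print(verify_key(alice_key, spoken_fingerprint))  # True
</syntaxhighlight>

Every option in the list above is really just a different way of getting <code>trusted_fingerprint</code> to you without an attacker substituting their own.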

The second task is backing up keys. People have to:

# Create a secure storage location
# Copy the keys to the location
# Back up the secure storage location as well

Unless keys are used for something very important like signing packages or cryptocurrencies, people don't put much effort into this task. Skipping it can result in wasted time, loss of data, or even loss of money.

People who do take steps to back up their keys must have enough knowledge to do it securely and create redundant backups. Doing this wrong (such as by backing up a key to cloud storage) can result in compromised keys.
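
As an illustration of the "do it securely" part, here's a sketch using the third-party <code>cryptography</code> package: deriving a wrapping key from a passphrase and encrypting the secret key before it ever reaches a backup location. The parameters are plausible defaults I've picked for the example, not a vetted design.

<syntaxhighlight lang="python">
import base64
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def _wrapping_key(passphrase: bytes, salt: bytes) -> bytes:
    """Stretch a passphrase into a 32-byte key Fernet can use."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase))

def encrypt_for_backup(secret_key: bytes, passphrase: bytes) -> bytes:
    """Encrypt a key so the backup location (e.g. cloud storage)
    never sees the plaintext."""
    salt = os.urandom(16)  # stored with the ciphertext; not secret
    return salt + Fernet(_wrapping_key(passphrase, salt)).encrypt(secret_key)

def decrypt_from_backup(blob: bytes, passphrase: bytes) -> bytes:
    salt, ciphertext = blob[:16], blob[16:]
    return Fernet(_wrapping_key(passphrase, salt)).decrypt(ciphertext)
</syntaxhighlight>

Note that this just moves the problem: the passphrase is now one more piece of sensitive data to remember, which is exactly the pattern this page is complaining about.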

The third task is managing key revocation and rotation. People have to:

* Replace keys regularly in case of unknown compromise
* Revoke keys in case of known compromise

As far as I know almost no security software supports doing these tasks in the first place. That means if someone steals your key they can impersonate you or access some resource of yours for an unlimited amount of time. The only way around this is to inform people through social networks and other insecure communication methods that your old key is compromised and you have a new one, then go through the steps of verifying and backing up the keys all over again. Yikes.
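
To show what support for this could even look like, here's a hypothetical sketch using the <code>cryptography</code> package: signing a revocation statement with the old key so that anyone who trusts the old public key can check the statement is authentic. As far as I know no mainstream tool standardizes anything like this; it's purely illustrative.

<syntaxhighlight lang="python">
from datetime import datetime, timezone
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

old_key = Ed25519PrivateKey.generate()
new_key = Ed25519PrivateKey.generate()

# A revocation statement: "this key is dead, use the successor instead",
# signed by the old key.
statement = (b"REVOKED " + datetime.now(timezone.utc).isoformat().encode()
             + b" successor:" + new_key.public_key().public_bytes_raw())
signature = old_key.sign(statement)

# A peer holding the old public key can verify the revocation:
try:
    old_key.public_key().verify(signature, statement)
    print("revocation is authentic; switch to the successor key")
except InvalidSignature:
    print("ignore: not signed by the old key")
</syntaxhighlight>

Even this sketch inherits the problem above: a thief holding the old key can sign statements too, so a real design would need timestamps, counters, or some trusted registry to arbitrate.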

== Trust ==

Requiring people to manage keys themselves is asking for a lot of trouble and mistakes. So why do it?

The answer is simple: Trust. Ask yourself:

* Who do you trust to verify keys for you?
* Who do you trust to back up your keys?
* Who do you trust to revoke and rotate your keys?

Whoever or whatever you trust to accomplish these tasks becomes another link in the chain of security, and if this link is compromised then so are you. Security software that uses manual key management tries to avoid adding links to this chain of trust and instead acts as a tool: a tool that's as secure as the person using it. If you're diligent then the software won't betray you, but if you're sloppy then the software won't protect you.

My problem with this answer is that it brings up another question: Why doesn't the software mimic the trust I already have as a person?

* I trust most social media services I use not to lie to me about keys. Why can't I ask software to check various websites and verify a key that way? This is how I would verify keys anyway if people posted their keys online.
* I trust services to hold my keys in portions so that if I lose them I can recombine them. Why can't I ask software to distribute keys to my friends and give them back to me if I lose them? This is already how distributed cloud storage works (see the sketch after this list).
* I trust my social media and instant messaging services to inform me if someone has lost a key or had one compromised. Why can't I ask software to handle that for me? Again, I already do this, just manually.
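
As a toy version of the second point: splitting a key into shares is simple in principle. The sketch below does an all-of-n XOR split, where every friend must cooperate to restore the key; real systems use threshold schemes such as Shamir's secret sharing so that only some of the shares are needed.

<syntaxhighlight lang="python">
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, holders: int) -> list[bytes]:
    """Each share alone is indistinguishable from random noise,
    but XOR-ing all of them together restores the key."""
    shares = [secrets.token_bytes(len(key)) for _ in range(holders - 1)]
    last = key
    for share in shares:
        last = xor_bytes(last, share)
    return shares + [last]

def recombine(shares: list[bytes]) -> bytes:
    key = bytes(len(shares[0]))  # all zeros
    for share in shares:
        key = xor_bytes(key, share)
    return key

key = secrets.token_bytes(32)
shares = split_key(key, 3)       # hand one share to each of three friends
assert recombine(shares) == key  # all three must cooperate to restore it
</syntaxhighlight>

The hard part isn't the math; it's the social machinery of asking friends for their shares, which is exactly the kind of thing software could automate.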

The only answer I can really come to is that there's a difference in worldview between security developers and me. After all, security software is a technical solution to a social problem. Instead of working on building trustable systems, security software seems to be built for people who trust nobody but themselves. Which isn't how humans work.

A lot of this really makes sense once you look into the people who actually develop security software: They're almost always knee-deep in crypto-anarchism and other libertarian ideologies that hold the individual as the sole authority over one's life, with a rejection of things like mutual aid and social structures. These developers have an explicit distrust of authorities, big or small.

== Should you use F-Droid? ==

Now that I've explained how I feel about security and usability, I'm going to circle back to the article I mentioned at the start of this page. As a summary, the article spends its time explaining how F-Droid as a project is technically inferior to Google Play and how its process of curating and building applications has no advantages over Google Play.

It proposes that people:

* Assume applications might be malicious or exploitable
* Pay close attention to the permissions they grant applications
* Download applications from GitHub or the Play Store
* Verify signatures using apksigner upon install (see the sketch after this list)
* Sandbox Play services using GrapheneOS
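
For what it's worth, the apksigner step would look something like the sketch below: pinning the SHA-256 digest of the developer's signing certificate and checking every downloaded APK against it. The digest value is a placeholder, and the output parsing is an assumption about the format <code>apksigner verify --print-certs</code> prints; treat this as illustrative, not a vetted verifier.

<syntaxhighlight lang="python">
import subprocess

# Placeholder: the certificate digest you recorded when you first
# vetted the app, obtained over some channel you trust.
EXPECTED_DIGEST = "0f1e2d..."

def apk_matches_pin(apk_path: str) -> bool:
    """Run apksigner (from the Android build-tools) and compare the
    signer's certificate SHA-256 digest against the pinned value."""
    result = subprocess.run(
        ["apksigner", "verify", "--print-certs", apk_path],
        capture_output=True, text=True)
    if result.returncode != 0:
        return False  # the signature itself failed to verify
    # Assumed output format:
    # "Signer #1 certificate SHA-256 digest: <hex>"
    for line in result.stdout.splitlines():
        if "SHA-256 digest" in line:
            return line.rsplit(":", 1)[1].strip() == EXPECTED_DIGEST
    return False
</syntaxhighlight>

Doing this for every install, for every app, is exactly the kind of manual diligence the rest of this page argues real people skip.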

This conclusion kind of demonstrates the strange detachment from reality that security developers tend to have: it's all based in a theory where an individual manages their own security process and vets everything manually.

Reality is a different story. In reality:

* People install whatever applications solve their issues
* People grant whatever permissions those applications ask for
* Google Play is filled with malware, F-Droid is not

For an actual person, suggesting they use Google Play seems like a disaster waiting to happen.

F-Droid may have worse security technologically, but it has much better security socially.

[[Category:Research]]