Security usability

This is a WIP page, come back later.

This is a quick page on my feelings towards security and how most security software fails to be usable.

==Background==
Recently I read the article F-Droid: how is it weakening the Android security model?, which provides a critique of F-Droid's security model and recommends people use the Google Play Store instead.

The GrapheneOS developers provided a similar critique, but it contains numerous uncorrected errors. Instead of correcting this information they have chosen to threaten SylvieLorxu with legal action for pointing out these mistakes. I strongly recommend reconsidering any trust towards GrapheneOS and its developers given the priorities they have shown here.

==Usability==
Security software almost always asks people to do some of the following:

*Verify the authenticity of some data
*Remember sensitive data
*Store sensitive data securely

Unfortunately people are imperfect and fail at these tasks, and not for lack of trying.


People have predictable patterns when it comes to usability:

*Pick the easiest way to accomplish a task
*Become complacent and skip tasks
*Do things wrong
*Fail at impossible tasks

Any process that humans interact with has to account for these patterns and lower the risk to an acceptable level.

Security developers take three approaches to dealing with this:

*Train people to make fewer mistakes
*Design software to catch mistakes
*Lessen the impact of mistakes

Together, all three of these are used to make security software usable.
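
To make the second approach concrete, here's a rough sketch of what "catch mistakes" software looks like, in the spirit of OpenSSH's known_hosts behaviour. It's my own illustration, not code from any real project, and the pin file format is made up:

<syntaxhighlight lang="python">
import hashlib
import json
import pathlib

# Hypothetical pin store for this sketch, not any real project's format.
PIN_FILE = pathlib.Path("known_keys.json")


def check_key(server: str, public_key: bytes) -> bool:
    """Trust-on-first-use: remember a server's key, flag any later change."""
    fingerprint = hashlib.sha256(public_key).hexdigest()
    pins = json.loads(PIN_FILE.read_text()) if PIN_FILE.exists() else {}

    if server not in pins:
        # First contact: pin the key so a future mistake can be caught.
        pins[server] = fingerprint
        PIN_FILE.write_text(json.dumps(pins))
        return True

    # Later contact: a changed key is either a harmless reinstall or an
    # attack, so the software refuses and makes the person decide.
    return pins[server] == fingerprint
</syntaxhighlight>

This doesn't remove the work of verification, it just turns a silent mistake into a visible decision.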


==Case study: Key verification==
I've used, and still use, a lot of open source security software. Here's a quick list of the best examples of modern security I can think of:

*OpenSSH
*Tor and its hidden services
*Matrix

All of these rely on keys to give any sane security guarantee, which dumps a few tasks on the people using the software.


The first is verifying keys. There are a few ways to do this:

*Skip verifying the key
*Send the key using another communication service or method
*Ask for the key from someone you trust
*Meet the person in real life and exchange the key directly
*Verify the key incorrectly

If I had to guess which method is the most common, it's skipping verification. This is the option I pick all the time now for two simple reasons: it's easy, and it's reliable.
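
For the people who do verify, the usual mechanic is comparing key fingerprints rather than whole keys. Here's a rough sketch of what that involves; the chunked format is made up for readability, it's not any project's real fingerprint format:

<syntaxhighlight lang="python">
import hashlib


def fingerprint(public_key: bytes) -> str:
    """Derive a short, human-comparable fingerprint from a public key."""
    digest = hashlib.sha256(public_key).hexdigest()
    # Chunk the digest so two people can read it out over a call or
    # compare it on paper without their eyes glazing over.
    return " ".join(digest[i:i + 4] for i in range(0, 32, 4))


# Both sides compute this locally, then compare the result over some
# other channel: in person, over the phone, or via someone they trust.
# A mismatch means at least one side has the wrong key.
print(fingerprint(b"example public key bytes"))
</syntaxhighlight>

Even here the security rests on people doing the comparison carefully, which is exactly the kind of task the Usability section says people fail at.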


==Case study: Key loss==
The second is managing keys without losing them. The effect of losing your key varies between applications. Some of the impacts it can have are:

*Needing to do verifications again (a few or many)
*Loss of service, for example with signing keys
*Loss of data, for example with disk encryption
*Loss of money, for example with cryptocurrencies

Unfortunately humans lose keys a lot, mainly because it takes effort to avoid losing them.
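
The disk encryption case is the easiest one to demonstrate. Here's a small sketch using the third-party Python cryptography package's Fernet recipe, standing in for a real disk encryption system: once the original key is gone, no replacement key will get the data back.

<syntaxhighlight lang="python">
from cryptography.fernet import Fernet, InvalidToken

# Encrypt some data with a key, then pretend the key was lost.
original_key = Fernet.generate_key()
ciphertext = Fernet(original_key).encrypt(b"family photos, tax records")

replacement_key = Fernet.generate_key()  # a brand new key is no help
try:
    Fernet(replacement_key).decrypt(ciphertext)
except InvalidToken:
    print("Wrong key: the data is unrecoverable.")
</syntaxhighlight>

That property is the whole point of the encryption, which is also why key loss hurts so much here.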


==Case study: Key compromise==
TODO

==Trust==
security is a software problem to a social issue

libertarian threat model

not how reality works

==F-Droid vs Google Play==
