Security usability: Difference between revisions

From JookWiki


==Is F-Droid worse than Google Play?==
Now that I've explained how I feel about security and usability, I'm going to circle back to the blog post I mentioned at the start of this page. It focused on how F-Droid doesn't abide by the Android way of doing things. It's a good read but arrives at a strange conclusion: Use Play Store for top-notch security.


=== F-Droid is a trusted party ===
Unlike Google Play, F-Droid builds and signs applications with its own keys. This means you have to trust both the developer and F-Droid not to tamper with the binary.


My immediate question here is: how do people verify these signatures? The answer is that the process is tedious, error prone, and beyond the ability of almost everyone using the application store.
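To give a sense of what's being asked of people, here's a minimal sketch of just the comparison step, assuming you've already extracted the DER-encoded signing certificate from the APK with a separate tool. The function names are mine, not F-Droid's:

```python
import hashlib

def cert_fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded signing certificate,
    formatted the way app stores usually publish it."""
    digest = hashlib.sha256(cert_der).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def fingerprint_matches(cert_der: bytes, published: str) -> bool:
    """Compare against a fingerprint copied from a developer's website."""
    # Strip the formatting people inevitably mangle when copying by hand.
    canon = published.replace(":", "").replace(" ", "").upper()
    return hashlib.sha256(cert_der).hexdigest().upper() == canon
```

Even this step assumes the person found the developer's real fingerprint somewhere trustworthy, copied it without error, and understands what a mismatch means. That's a lot to ask of someone who just wants to install an app.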


=== F-Droid is not free of malicious applications ===
F-Droid and Google Play can both contain malicious applications.


F-Droid has a hard requirement that source code be available for an application, which in theory weeds out a lot of obvious malware. But in practice nobody is really doing much more than a cursory review.
 
It's quite easy to find a malicious app on Google Play: Download anything that's free and watch it place ads all over your screen that clearly don't have your best interest in mind.
 
But where are the malicious applications in F-Droid? In practice there don't seem to be any at all.
 
=== F-Droid updates are slow and irregular ===
I agree with this criticism, to the point that I'd like to extend it to Linux distributions.
 
Software development usually targets only the newest version of a project, unless dedicated stable branches are maintained by the same developers. Shipping old versions of software effectively means shipping unmaintained and unaudited software.
 
There are many distributions that ship old versions and backport fixes, but for security this doesn't work very well: most security bugs are never publicly reported or even privately noticed. This applies especially to complex, high-risk software like your operating system kernel, where any bug could theoretically be turned into an exploit.
 
=== F-Droid supports older Android versions ===
F-Droid supports older Android versions and builds applications targeting older API levels. This causes newer systems to run those applications in weaker sandboxes than they otherwise could.
 
Naturally this isn't ideal, but the solution the article suggests is to drop support for old systems and push vendors to provide longer support cycles for devices.
 
In practice this doesn't work, and users end up with old, insecure software. That seems like a bigger problem to me than losing some sandbox improvements.
 
=== F-Droid doesn't pin TLS certificates ===
F-Droid communicates with its servers using TLS, like most modern software. It trusts the certificate authorities pre-installed on the device, any of which could be used to intercept the communication.
 
My response is that if someone is intercepting TLS with a compromised certificate authority, you likely have much bigger problems than F-Droid updates: it would affect basically every application that connects to arbitrary servers over TLS, such as your chat and email applications.
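For contrast, pinning itself isn't much code. Here's a hedged sketch using Python's standard library, with the expected digest supplied by the caller; this is an illustration of the technique, not F-Droid's actual update code:

```python
import hashlib
import socket
import ssl

def cert_matches_pin(cert_der: bytes, pinned_hex: str) -> bool:
    """Compare a server's DER certificate against a pinned SHA-256 digest."""
    return hashlib.sha256(cert_der).hexdigest() == pinned_hex.lower()

def connect_pinned(host: str, pinned_hex: str, port: int = 443) -> ssl.SSLSocket:
    """Open a TLS connection and refuse it if the certificate isn't the pinned one."""
    ctx = ssl.create_default_context()  # normal CA validation still applies
    sock = ctx.wrap_socket(socket.create_connection((host, port)),
                           server_hostname=host)
    if not cert_matches_pin(sock.getpeercert(binary_form=True), pinned_hex):
        sock.close()
        raise ssl.SSLError(f"certificate for {host} does not match pin")
    return sock
```

The hard part isn't the code: it's shipping a new pin to every user before the server's certificate rotates, which is exactly the kind of key-management burden this page keeps running into.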
 
=== F-Droid used old signature schemes ===
F-Droid held out on the APK v1 signature scheme, which was ineffective and compromised, allowing attackers to trick Android into installing APKs with forged signatures.
 
Given that this issue is fixed, and was never exploitable because F-Droid layered its own signature scheme that worked on any version of Android, it's mainly brought up to demonstrate a pattern of F-Droid not keeping up with Android security practices.
 
While this is true, the article doesn't explain that this pattern exists mainly because nobody is working on these issues: they provide little or no benefit and take time away from other parts of the project. To be clear, I'm not defending slowness to adopt security practices. But the article paints a picture of carelessness that I don't think is accurate.
 
=== F-Droid misleads people about permissions ===
F-Droid's website lists the classic static permissions from very old Android versions, with wrong descriptions, and F-Droid doesn't curate packages based on the permissions they require.
 
I'm not sure how big a deal this is, given that Android's permission system is a mess. People are taught to grant permissions unconditionally, lest the app stop working or keep nagging them.
 
=== F-Droid uses the same application ID as developers ===
F-Droid publishes its builds under the same application ID that the developer uses for their own builds. This causes a few problems:
 
* Installing third party APKs might give errors
* Users are misled about permissions at install time
* Unattended updates aren't supported
 
=== Conclusion ===

Revision as of 10:02, 3 March 2022

This is a WIP page, come back later.

This is a quick page on my feelings towards security and how most security software fails to be usable.

==Background==

Recently I read the article "F-Droid: how is it weakening the Android security model?", which critiques F-Droid's security model and recommends people use the Google Play Store instead.

The GrapheneOS developers provided a similar critique, but it contains numerous uncorrected errors. Instead of correcting this information they have chosen to threaten SylvieLorxu with legal action for pointing out these mistakes. Given the priorities shown here, I strongly recommend reconsidering any trust you place in GrapheneOS and its developers.

==Usability==

Security software almost always asks people to do some of the following:

  • Verify authenticity of some data
  • Remember sensitive data
  • Store sensitive data securely

Unfortunately people are imperfect and fail at these tasks, and not for lack of trying.

Security developers take three approaches to deal with this:

  • Train people to make fewer mistakes
  • Design software to catch mistakes
  • Lessen the impact of mistakes

Together all three of these are used to make security software usable.

==Key management==

It's hard to discuss any security solution without discussing keys, so allow me to sidetrack for a minute.

Keys are private tokens used in almost all modern security software to gain some useful security property such as confidentiality or authenticity. Unfortunately almost all modern security software requires manual key management. This dumps a few tasks on people.

The first task is verifying keys. There are a few ways to do this:

  • Skip verifying the key
  • Send the key using another communication service or method
  • Ask for the key from someone you trust
  • Meet the person in real life and exchange the key directly
  • Verify the key incorrectly

If I had to guess which method is the most common, it's skipping verification. This is the option I pick all the time now for two simple reasons: It's easy, and it's reliable.
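Skipping verification doesn't have to mean no protection at all. Software can quietly record the first key it sees and only interrupt the person when that key later changes, the way SSH's known_hosts file works. A minimal sketch of this trust-on-first-use idea, with an in-memory dict standing in for a persistent store:

```python
# Trust-on-first-use: remember a peer's key fingerprint the first time
# we see it, and refuse to proceed silently if it later changes.
known_keys: dict[str, str] = {}  # peer id -> fingerprint seen on first contact

def check_key(peer: str, fingerprint: str) -> str:
    if peer not in known_keys:
        known_keys[peer] = fingerprint       # first contact: trust and record
        return "trusted-on-first-use"
    if known_keys[peer] == fingerprint:
        return "ok"                          # same key as before
    return "KEY CHANGED - possible attack"   # now make the user decide
```

This is "design software to catch mistakes" in practice: the person does nothing extra in the common case, and only gets interrupted in the rare case that actually matters.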

The second task is backing up keys. People have to:

  • Create a secure storage location
  • Copy the keys to the location
  • Back up the secure storage location as well

Unless keys are used for something very important like signing packages or cryptocurrencies, people don't put much effort into this step. Skipping it can result in wasted time, lost data, or even lost money.

When they do take steps to back things up, they need enough knowledge to do it securely and to create redundant backups. Doing this step wrong (such as by backing up a key to cloud storage) can result in compromised keys.

The third step is to manage revoking and rotating keys. People have to:

  • Replace keys regularly in case of unknown compromise
  • Revoke keys in case of known compromise

As far as I know, almost no security software supports these tasks in the first place. That means that if someone steals your key, they can impersonate you or access your resources for an unlimited amount of time. The only way around this is to inform people through social networks and other insecure channels that your old key is compromised and that you have a new one, then go through the steps of verifying and backing up keys all over again. Yikes.

==Trust==

Requiring people to manage keys themselves is asking for a lot of trouble and mistakes. So why do it?

The answer is simple: Trust. Who do you trust to verify keys for you? Who do you trust to backup your keys? Who do you trust to revoke and rotate your keys? Whoever or whatever you trust to accomplish these tasks becomes another link in the chain of security, and if this link is compromised then so are you. Security software that uses manual key management tries to avoid adding links to this chain of trust and instead act as a tool. A tool that's as secure as the person using the software. If you're diligent then the software won't betray you, but if you're sloppy then the software won't protect you.

My problem with this answer is that it brings up another question: Why doesn't the software mimic the trust I already have as a person?

  • I trust most social media services I use not to lie to me about keys. Why can't I ask software to check various websites and verify a key that way? This is how I would verify keys anyway if people posted their keys online.
  • I trust services to hold my keys in portions so that if I lose them I can recombine them. Why can't I ask software to distribute key shares to my friends and have them give the shares back if I lose my key? This is already how distributed cloud storage works.
  • I trust my social media services or instant messaging services to inform me if someone has lost or had a key compromised. Why can't I ask software to handle that for me? Again, I already do this, just manually.
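The second idea above, holding keys in portions, can be sketched with plain XOR splitting, where all shares are required to rebuild the key. Real systems use Shamir's secret sharing so that any k of n friends suffice; this is an illustration, not a production scheme:

```python
import secrets

def split_key(secret: bytes, n: int) -> list[bytes]:
    """Split a key into n shares; ALL n shares are needed to rebuild it.
    Any subset smaller than n reveals nothing about the key."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = bytearray(secret)
    for share in shares:            # fold each random share into the last one
        last = bytearray(a ^ b for a, b in zip(last, share))
    return shares + [bytes(last)]

def combine_key(shares: list[bytes]) -> bytes:
    """XOR all shares back together to recover the key."""
    out = bytearray(len(shares[0]))
    for share in shares:
        out = bytearray(a ^ b for a, b in zip(out, share))
    return bytes(out)
```

Each friend's share looks like random noise on its own, which is exactly the property that makes handing them out socially acceptable: no single friend has to be trusted with the key itself.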

The only answer I can really come to is that there's a difference in worldview between security developers and me. After all, security is a technical solution to a social problem. Instead of working on building trustable systems, security software seems to be built for people who trust nobody but themselves. Which isn't how humans work.

A lot of this really makes sense once you look into the people who actually develop security software: they're almost always knee-deep in crypto-anarchism and other libertarian ideologies that hold the individual as the sole authority over their own life, rejecting things like mutual aid and social structures. These developers have an explicit distrust of authorities, big or small.

