The “FBI–Apple encryption dispute” has come to a – preliminary, at least – end with the FBI announcing yesterday that it is no longer interested in Apple’s assistance in recovering user data from a seized iPhone that was once used by a terrorist. The FBI says it has been able to recover the data without Apple’s help. Apple has received much public support for its refusal to provide software to recover the data from the phone. I believe that Apple doesn’t deserve this sympathy and is the wrong superhero to adore. Contrary to what it says in its press releases, Apple is not protecting its users’ freedom. Even though it may have put security measures in place that distinguish its products from those of its competitors, Apple’s products mistreat their users just like any other product based on proprietary software. Sadly, there is no smart phone available today that runs exclusively on free software and gives control to the user instead of the vendor, which is why I don’t use or even own a smart phone.
The whole talk about the “FBI–Apple encryption dispute” is highly disturbing. If anything, it is not about cryptography. Cryptography is about securing information against unauthorized access by means of math. Math is not controlled by anybody. (Although some people might withhold interesting findings from the public.) If the FBI wanted to break the cryptography of the phone in question, it wouldn’t need Apple’s help at all. The FBI can run the math as well as anybody else. The reason the FBI “had to” (more on that later) ask Apple for assistance is that the iPhone does not rely on math to keep its users’ data protected but rather on the authority of Apple. This is wrong. Nobody’s freedom should depend on the mercy of any authority, be it Apple or anybody else. As a matter of fact, freedom means having to obey no authority in the first place.
Regardless of the many bad things I have to say about Apple, at least they have published a decent document about the iOS security model, and I’m referring to the information disclosed therein for my further discussion. It might be inadvertently or deliberately incorrect – and given that the software is proprietary and cannot be inspected publicly, we would never know – but I have no reason to assume so at this point. Even if it is accurate, Apple is not to be praised.
The persistent storage of user data on a recent iPhone is encrypted with a strong symmetric block cipher (AES-256) that is believed by today’s public cryptographers to be impossible to break even with the most powerful computers available. Unlike the block-level encryption commonly used with Linux, Apple’s encryption seems to work on the file-system level. Each file is encrypted individually with a so-called file key which, along with the file’s meta-data, is itself encrypted with a key derived from the so-called file-system key and a so-called class key. If I understand correctly, the purpose of the class key is to create different classes of files, maybe for use by different kinds of applications. The file-system key is stored in plain text in a special region of the flash device that can be wiped at the transistor level. Once this key is effectively wiped, no data on the flash device can be accessed any more. The class key is derived from the user’s “passcode” and a hardware key. The passcode is, ideally, only known by the user and entered when unlocking the phone. The hardware key is a random key generated at the time the phone is fabricated and permanently stored in a dedicated cryptography co-processor chip. This chip is designed such that it won’t output the key directly and such that the key is difficult to extract even given physical access to the chip. Apple claims that it doesn’t know or record the hardware keys of the phones it sells.
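The key hierarchy described above can be sketched in a few lines of Python. To be clear, this is a toy illustration under my own assumptions: the real iOS implementation performs AES-256 key wrapping inside the hardware crypto engine, and the derivation functions, iteration count and XOR “wrapping” below are stand-ins I chose to keep the example self-contained, not Apple’s actual algorithms.

```python
import hashlib
import hmac
import os

def derive_class_key(passcode: bytes, hardware_key: bytes) -> bytes:
    # The class key is entangled with both the user's passcode and the
    # device-unique hardware key, so guesses must run on the device itself.
    return hashlib.pbkdf2_hmac("sha256", passcode, hardware_key, 100_000)

def wrap_file_key(file_key: bytes, fs_key: bytes, class_key: bytes) -> bytes:
    # Each per-file key is protected with a key derived from the
    # file-system key and the class key (toy XOR "wrapping" here, so
    # wrapping and unwrapping are the same operation).
    wrapping_key = hmac.new(fs_key, class_key, hashlib.sha256).digest()
    return bytes(a ^ b for a, b in zip(file_key, wrapping_key))

hardware_key = os.urandom(32)  # burned into the co-processor at fabrication
fs_key = os.urandom(32)        # stored in the wipeable region of flash
file_key = os.urandom(32)      # random key for one individual file

class_key = derive_class_key(b"1234", hardware_key)
wrapped = wrap_file_key(file_key, fs_key, class_key)

# Unwrapping with the same keys recovers the file key; wiping fs_key
# (or losing the hardware key) leaves every wrapped file key unrecoverable.
assert wrap_file_key(wrapped, fs_key, class_key) == file_key
```

The design point this illustrates is that wiping the single file-system key severs access to all file keys at once, which is what makes the fast remote wipe possible.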
As can be seen, all data can be recovered if the hardware key and the user’s passcode are known. The passcode is a four-digit PIN that is extremely easy to guess. Even the stupidest attacker could sit down for a weekend with the phone and try entering all 10 000 possible combinations. In order to thwart this attack, Apple has programmed the operating system of the iPhone to do two things. First, it will introduce artificial delays after failed attempts to enter the passcode, growing exponentially with the number of failed attempts up to an hour. Second, it will trigger a wipe of the stored data after 10 failed attempts. As I understand it, this means that only the file-system key is wiped from the flash storage, which I believe is a security flaw. I shall discuss this soon.
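The two countermeasures can be modeled as follows. This is a sketch under stated assumptions: the class name, the exact back-off schedule (doubling from one second, capped at an hour) and the ten-attempt threshold are my reconstruction of the publicly described behavior, not code from iOS.

```python
MAX_ATTEMPTS = 10   # wipe threshold described in Apple's documentation
MAX_DELAY = 3600    # delays are said to grow up to an hour

def delay_after(failures: int) -> int:
    # Exponential back-off: 0 s, 1 s, 2 s, 4 s, ... capped at one hour.
    if failures == 0:
        return 0
    return min(2 ** (failures - 1), MAX_DELAY)

class PasscodeGuard:
    """Toy model of the passcode-entry logic (hypothetical name)."""

    def __init__(self, correct_passcode: str):
        self._passcode = correct_passcode
        self._failures = 0
        self._wiped = False

    def try_unlock(self, guess: str) -> bool:
        if self._wiped:
            return False  # the file-system key is gone; nothing to unlock
        # A real device would sleep delay_after(self._failures) here.
        if guess == self._passcode:
            self._failures = 0
            return True
        self._failures += 1
        if self._failures >= MAX_ATTEMPTS:
            self._wiped = True  # i.e. wipe the file-system key from flash
        return False

guard = PasscodeGuard("1234")
assert guard.try_unlock("1234")
for _ in range(MAX_ATTEMPTS):
    guard.try_unlock("0000")
assert not guard.try_unlock("1234")  # wiped: even the right code fails now
```

Note that both protections live purely in software, which is exactly why a signed “update” removing them was worth asking for.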
Since trying to brute-force the passcode through regular user input is not an option, and because the hardware key is said to be difficult to extract from the chip, what the FBI wanted was an “update” of iOS that would not introduce artificial delays, would not trigger a wipe after a fixed number of failed attempts, and would presumably also allow them to enter passcodes programmatically (e.g. via the USB connection) instead of having to pay an official to type their fingers sore. Patching iOS to behave like this should be a trivial task. One would simply comment out a few lines of code, re-build iOS and deploy this insecure patch to the seized phone. As I understand it, the iPhone will automatically install updates without asking for user confirmation, which one might or might not consider a flaw. However, it will only do so if the patch bears a valid cryptographic signature by Apple. Apple’s public key is burned onto the iPhone’s chip during fabrication. Even if the phone could be tricked into installing the malicious patch, the firmware will refuse to boot it unless it bears a valid signature.
This mechanism is marketed as “secure boot” and could indeed be a useful feature. Unfortunately, if the owner of the device cannot change the list of trusted public keys, it does more harm than good. It prevents not only attackers from installing software that the device vendor has not approved of, but also the legitimate owner of the device. This is why we prefer to call it restricted boot. It doesn’t have to be like this, but Apple prefers it that way.
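The boot-time check described above can be sketched like this. Apple verifies asymmetric signatures against a public key fused into the chip; as a stand-in, this toy uses an HMAC tag so the example needs only the standard library. The key values and function names are my own illustrative inventions. What matters is the control flow: only images endorsed by the baked-in key boot, and no code path lets the owner extend the trusted set.

```python
import hashlib
import hmac

BURNED_IN_KEY = b"vendor-key-fused-at-fabrication"  # hypothetical value

def sign(image: bytes, signing_key: bytes) -> bytes:
    # Only the vendor holds the matching key, so only the vendor can
    # produce tags the device will accept.
    return hmac.new(signing_key, image, hashlib.sha256).digest()

def firmware_boot(image: bytes, tag: bytes) -> str:
    # The device verifies against the key fused into its silicon; the
    # owner has no way to add their own key to this check.
    expected = hmac.new(BURNED_IN_KEY, image, hashlib.sha256).digest()
    if hmac.compare_digest(tag, expected):
        return "booted"
    return "refused"

official = b"iOS kernel image"
patched = b"iOS kernel image without passcode delays"

assert firmware_boot(official, sign(official, BURNED_IN_KEY)) == "booted"
# The owner's own build, however legitimate, is rejected:
assert firmware_boot(patched, sign(patched, b"owner-key")) == "refused"
```

A design that deserved the name “secure boot” would let the owner enroll their own key alongside, or instead of, the vendor’s.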
So, while the FBI could easily – assuming they have access to the source code of iOS – prepare the malicious patch, they would need Apple to sign it with their private key. Apple is, presumably rightly, claiming that doing so would unsettle the faith of their users and that the very existence of the signed malicious patch would pose a threat to all of its users, as it could be deployed to any iPhone.
I have the suspicion that the entire “FBI–Apple encryption dispute” could have been made up. It seems to me that there are numerous attack vectors that, with help from their friends at the NSA, would be well within reach for the FBI and wouldn’t require any collaboration on Apple’s part.
- Physically recover the hardware key from the iPhone’s chip by removing and opening the chip and inspecting it under a microscope. This possibility has been publicly discussed and seems feasible, if risky, as they would only have a single chance to either recover the key or else destroy all data permanently.
- Copy the data from the flash device to a backup location, try entering the passcode until the phone wipes itself, copy the data back, continue. Without circumventing the delay mechanism, this would be extremely time-consuming, though. (In a more secure design, the hardware key, which, unlike the flash storage, is not readily accessible, would be wiped, too.)
- Use a hardware-level backdoor in the cryptography chip to unveil the hardware key, if one exists.
- Flash their own public key onto the phone’s ROM, or replace Apple’s key with their own, so the phone will boot a compromised kernel. Depending on how well separated the ROM is from the cryptography co-processor, this seems to be a fairly low-risk operation.
- Deploy any of the publicly known iOS “jailbreaks” to safely boot their corrupted kernel. Isn’t Apple’s restricted boot known to be broken?
- Steal the private key Apple is using to sign official releases.
- Use a ready-made backdoor in iOS, if one exists.
It seems unlikely to me that none of these attacks would be within their capabilities. On the other hand, publicly fighting with Apple had desirable implications for both parties. Apple has created the impression that it deeply cares about the privacy of its users and that its products are secure even against the most advanced attackers, which is a growing concern of many customers and will be beneficial for its sales. The FBI, in turn, will be interested in making the general public think that…
- …they cannot break the cryptography used in the iPhone for encrypting the user data and signing the software.
- …iOS has no backdoor accessible to security officials.
- …there is no backdoor in the iPhone’s hardware that can be used by the state.
- …the state does not collaborate with vendors of proprietary operating systems.
While security agencies generally like people to believe they are powerful, at the same time they try to keep them as ill-informed as possible so they will continue using weak technology. Maybe they decided that, after the standoff had served its purpose for a while, the United States would look too silly if they kept pretending they were unable to recover the data, so they let a friendly forensic company “help” them recover the bits from the chip, which doesn’t invalidate any of the above impressions.
I hate speculation, and I’m in no way claiming that any of these possibilities is true. I do think, however, that they are too plausible to ignore completely. The fact that Edward Snowden has claimed that the “FBI saying it can’t unlock [the] iPhone is bullshit” might be relevant, too, although I would have preferred a technical explanation over a vulgar expression.
Apple’s security model is technically and politically broken and ethically wrong.
Despite all the effort Apple has apparently put into making the iPhone seem secure, it is not. At least not in a cryptographic sense. As far as math is concerned, given all the information that is permanently stored on the phone, only the four digits of the passcode are unknown, and those are trivial to brute-force. All security comes from the assumption that parts of the storage on the phone are extremely difficult to access even for somebody with physical access to the device and advanced forensic equipment. Security in a cryptographic sense can only be achieved by using a strong pass-phrase that is not persistently stored on the device itself. And even then, security is only guaranteed while the device is powered off. For example, on my laptop running GNU/Linux, I have encrypted the hard disk with LUKS, using a strong pass-phrase that I enter from memory each time I boot the machine. While the machine is running, I ensure that nobody gets unauthorized physical access to it. When I have to leave it alone, I make sure to power it down, which wipes the decryption key from the computer’s memory. Unless there is a flaw in the implementation of LUKS or a major break-through in cryptanalysis, the data on my disk can only be recovered if I tell somebody the pass-phrase. If I’m dead, nobody will ever get to know what’s on the disk. It doesn’t matter whether they take the LUKS maintainers to court, threaten them or torture them. Even if they wanted to, they couldn’t possibly help anyone access the data. That’s the kind of security that supports freedom!
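To make the claim that four unknown digits are trivial to brute-force concrete, here is a back-of-the-envelope sketch. The key-derivation function, iteration count and salt are illustrative assumptions of mine, not Apple’s actual scheme; the point is only the size of the search space.

```python
import hashlib

def derive_key(pin: str, salt: bytes) -> bytes:
    # Stand-in KDF; the real derivation also involves the hardware key.
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 1_000)

salt = b"device-unique-salt"       # hypothetical per-device salt
target = derive_key("7365", salt)  # stands in for the unknown passcode

# Sweep all 10 000 possible PINs; with the delay, wipe and hardware-key
# protections out of the way, this finishes in seconds on a laptop.
found = next(pin for pin in (f"{i:04d}" for i in range(10_000))
             if derive_key(pin, salt) == target)

assert found == "7365"
```

Compare that with a strong pass-phrase of, say, 80 bits of entropy, where the same sweep would take longer than the age of the universe.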
Of course, while I’m still alive and installing updates of my operating system – which, by the way, allows me to decide whose public keys I’d like to trust – there is a risk that the maintainers of LUKS turn evil and release an insecure version of the software that will leak the key. As no group of people is perfect, this could happen to any software product. However, it is very unlikely to happen to LUKS or any other free software project. The reason is that – and this brings us to the core of my argument – unlike Apple’s proprietary operating system, free software grants everybody who obtains a copy of it four essential freedoms.
- The freedom to run the program as you wish, for any purpose.
- The freedom to study how the program works, and change it so it does your computing as you wish.
- The freedom to redistribute copies.
- The freedom to distribute copies of your modified versions to others.
These freedoms guarantee that the source code of free software is, or can be made, available to the public. This means that anybody can inspect it and audit it for inadvertent flaws or intentionally added malicious “features”. If anybody detects such defects, they can immediately publish them and ask the maintainers of the software to fix them. If the maintainers refuse to do so (maybe they’ve turned evil or were forced to turn evil), anybody with the skill can publish a corrected version of the software, and word will spread that this is the version to continue using.
When I tell people this, they usually respond that they are not knowledgeable enough to read the source code of computer programs, and that even if they were, they wouldn’t have the time to study the code of all the software they are using. That’s obvious. Nobody can read the source code of all the software they’re using. But fortunately, you’re not alone. I didn’t read the source code of all the software I am using, but I did read enough of it to gain confidence that it is generally trustworthy. More importantly, I have seen many smart people who are passionate about maintaining an infrastructure of high-quality free software. I don’t overly trust any of them individually, but I do trust the free software community as a whole to have enough people who behave with integrity to keep our software clean. This concept is not new. It is, in fact, the same idea that forms the foundation of democracy. No single entity is trustworthy enough to depend on it exclusively. This is true for kings as much as it is for Apple. But by acknowledging that the majority of people, while not perfect, generally prefer good over evil, and given a sufficient number of observant citizens, our society can go pretty far and ensure the utmost freedom for everybody. People in a free society don’t need the mercy of a wrong superhero to grant their freedom to them.
If watching the “FBI–Apple encryption dispute” has led you to believe that your privacy is well protected when you use Apple products, please reconsider your conclusion and ask yourself whether the money you would have spent on your next iPhone wouldn’t serve your freedom better if donated to the Free Software Foundation, the Electronic Frontier Foundation, Wikipedia or any free software project that builds something you care about.