Barr, Apple, Google And You
2020-01-16 08:12 by Karl Denninger
in Technology

So the alleged "encrypted phones suck" thing has come up once again, with AG Barr claiming that Apple is refusing to "help" unlock the Pensacola shooter's iPhone.

This shouldn't BE a conversation -- because what's being discussed shouldn't be something the authorities can do at all, if you as a user choose to protect your data in a reasonable fashion.  Further, these devices are intentionally designed so that such protection is not actually possible.

A quick primer -- there is available today, for anyone who cares to use it (and a lot of people do, including banks, other financial institutions, individuals, corporations of all sorts, and governments), very high-quality encryption.  It is effectively unbreakable using today's technology.  The symmetric encryption used for the actual payload data on modern systems has never been demonstrably broken.  If it ever is broken, then not only criminals but also governments, military organizations, banks, your brokerage and everyone else will be unable to protect what they do.
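
For the payload itself, "modern symmetric encryption" means something like AES.  A minimal sketch using the third-party Python "cryptography" package (illustrative only; as discussed below, the whole game is who holds the key):

```python
# Requires: pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # the session key everything hinges on
nonce = os.urandom(12)                     # never reuse a nonce with the same key
ciphertext = AESGCM(key).encrypt(nonce, b"payload data", None)
assert AESGCM(key).decrypt(nonce, ciphertext, None) == b"payload data"
# Without `key`, recovering the payload means searching 2**256 possibilities.
```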

The best means of deriving those session keys use either asymmetric encryption (e.g. RSA) or a multi-part derivation function that is "one way" -- that is, you put in an input and get a key out, but the inputs cannot be recovered from the key.  Multi-part key derivation has significant advantages, including some degree of protecting you from yourself.  That is, if you use a weak password then it can obviously be guessed, but if you ALSO need (for example) a strongly-generated piece on a smart-card or USB stick, then without that piece even the ability to guess the password doesn't help.
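
To make the composite idea concrete, here's a minimal sketch of such a derivation.  It is illustrative only -- not GELI's or Apple's actual construction -- and the salt, iteration count and combining step are all assumptions:

```python
import hashlib

def derive_storage_key(password: str, keyfile_bytes: bytes, salt: bytes) -> bytes:
    # Stretch the (possibly weak) password with an expensive one-way KDF...
    stretched = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    # ...then bind it to the strong random component (e.g. from a USB stick).
    # Guessing the password without the keyfile yields nothing, and vice versa.
    return hashlib.sha256(stretched + keyfile_bytes).digest()

key = derive_storage_key("hunter2", b"64 random bytes from the stick", b"per-volume salt")
```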

If a storage volume is encrypted using one of these systems it is effectively impenetrable except by obtaining the keying.  If part of the keying is in your head, then the 5th Amendment prevents the government from acquiring it without your consent, which they cannot compel.  Further, if it is only in your head and your head is no longer functional for some reason, whether by your own hand or someone else's, then obviously it's gone.  Note that if keying is derived from biometric data, such as a fingerprint or retinal scan, current court decisions allow you to be compelled to provide it!  For this reason, while it's OK for that type of information to be part of the key, security demands that it never be the entire key.

If, for example, I encrypt a disk volume using something like "GELI" (on FreeBSD) and use a composite key -- that is, part on a USB stick and part password -- then without both, that disk cannot be decrypted.  Further, if the machine in question is tamper-aware it can, upon detection of tampering (e.g. removal of the lid of the case), almost instantly erase the keying blocks at the front of the volume containing the metadata needed to derive the session key from the provided components.  Without all three of those items it is not possible to determine the session key.  If the metadata blocks are destroyed (and there is no backup copy anywhere) you have a disk indistinguishable from one filled with random ones and zeros.
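
The reason erasing a few metadata blocks kills the entire disk is that the real data key is random and exists only in "wrapped" form inside those blocks.  A toy illustration (XOR stands in for real key-wrapping, and this is not GELI's actual on-disk format):

```python
import hashlib, os

def wrap(master_key: bytes, composite_key: bytes) -> bytes:
    pad = hashlib.sha256(composite_key).digest()
    return bytes(a ^ b for a, b in zip(master_key, pad))

composite_key = b"derived from password + USB piece"  # see the sketch above
master_key = os.urandom(32)              # the actual data-encryption key: random
metadata_block = wrap(master_key, composite_key)      # stored at the volume front

# Normal attach: unwrap (XOR is its own inverse here) and mount.
assert wrap(metadata_block, composite_key) == master_key

# Tamper response: overwrite the metadata block.  The master key now exists
# nowhere, so the volume is permanently indistinguishable from random noise.
metadata_block = os.urandom(len(metadata_block))
```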

Now let's think about cellphones.  When the phone is running the entire storage volume is mounted.  This implies that any decryption keys have been provided and are in use.  Apple claims to have a multi-level "keybag" approach that is essentially file-by-file and, supposedly, can't be bypassed.  But how is it that a firm like Cellebrite can break into a locked iPhone if that is truly the case?  And why can Google remotely unlock your device -- a capability they do not deny?

Let's cut the crap: if the session key has been destroyed by the operating system due to a timeout that allegedly "requires" you to re-enter the components to regenerate it, or if it was never entered since the device was powered on, then -- unless the key is stored somewhere on the device, or your credentials were stored on the device or on the provider's infrastructure -- breaking in by anything other than brute-force guessing is impossible.  And brute-force guessing against a reasonably-decent password will take thousands of years, even with supercomputer (e.g. NSA) assistance.
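
Want to check the "thousands of years" claim yourself?  Here's the back-of-envelope arithmetic; the password length, symbol set and guess rate are all illustrative assumptions:

```python
# A 12-character password drawn from 72 symbols, against an attacker doing a
# (very generous) one billion key derivations per second.
keyspace = 72 ** 12                            # ~1.9e22 candidate passwords
guesses_per_second = 1e9
seconds = keyspace / guesses_per_second / 2    # expect success at half the space
years = seconds / (3600 * 24 * 365)
print(f"{years:,.0f} years")                   # roughly 300,000 years
```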

At least Google is honest about it -- the storage encryption on your mobile device is not uniquely derived from components that include your entered password.  Further, these firms have intentionally designed their phones to be tough to "quick-hardlock", and they don't "time out" on a user-desired basis in that regard either.  Whether there is any actual protection if the device is off at the time of interception, or out of power, is an open question -- but I would not bet on it.  More on that in a minute.

Let's return to said FreeBSD machine (e.g. my primary server.)  When it boots there is a small loader that has to be unencrypted.  That loader knows just enough to look at the installed disks and figure out if any of them are bootable with a FreeBSD operating system -- and if so, whether the components of that volume appear to be encrypted.  You tell the system this, incidentally, by setting a simple flag on the partition in question.

The loader doesn't know if the allegedly encrypted volume is really encrypted or full of trash; it has no way to know.  It asks for a password and, if the flags so specify, reads the alleged other piece of the key derivation components from its location (e.g. a USB stick.)  Once it has what it thinks is a valid set of keying it runs the derivation, sets the keying for GELI on that space, and then probes the disk to see whether what's there is an actual volume or not.  If it's not a valid volume then either you specified a disk that wasn't really encrypted or you provided a wrong key component (password, bad USB stick, etc) -- it doesn't know which, just that the attempt failed.

If the loader actually sees a valid disk when it has done this then it knows the keying is good (because there's no possible way for the volume to be valid if it isn't) and it proceeds to load the operating system.  It then passes the derivation information it used to the kernel, which uses it to mount the disk, and startup commences normally.
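
Here's the shape of that probe-and-verify logic as a runnable toy.  This is not the actual gptzfsboot/GELI code -- just the principle that the loader can only test whether the decrypted bytes look like a valid volume:

```python
import hashlib, itertools

MAGIC = b"FSVOL"   # stand-in for a real filesystem's magic number

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # Toy cipher for illustration only -- real GELI uses AES.
    stream = itertools.cycle(hashlib.sha256(key).digest())
    return bytes(a ^ b for a, b in zip(data, stream))

def derive(password: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), b"salt", 10_000)

disk = xor_crypt(MAGIC + b"...filesystem data...", derive("hunter2"))

def probe(volume: bytes, password: str) -> bool:
    # The loader's only test: does the result start with a valid magic number?
    return xor_crypt(volume, derive(password)).startswith(MAGIC)

print(probe(disk, "hunter2"))  # True  -> keying good, load kernel, hand off key
print(probe(disk, "wrong"))    # False -> bad key OR garbage disk; can't tell which
```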

Note the risk here -- that loader, if it's tampered with, could get you to enter the password and stash it somewhere.  Now it's not a secret anymore!  Worse, it could steal the contents of any auxiliary keying device too.  So it is really, really important that this not happen, which is why you have things like "secure boot" and signed bootloaders on phones and some modern PCs.  But, of course, that requires you to trust whoever signed those boot components absolutely.

This is what Barr is talking about -- he wants Apple to provide him with a signed-but-tampered-with bootloader that will start the phone.  Apple has refused.  But Apple is being disingenuous; that loader will not unlock the device by itself unless the user's password isn't really required to unlock the storage in the first place.

Remember that in this case specifically the shooter is dead; his password, in his head, died with him.  Therefore if a compromised bootloader would unlock the device the password isn't actually required!

Let's say you wanted to steal the data off my system (whether with a warrant or not.)  One way to do it would be to tamper with the "gptzfsboot" file on my system somehow (theoretically you could break in, pull the cord, change that small unencrypted part of the disks involved, put them back in and turn the power back on.)  I might well think that was a random crash or power-loss event -- and not that someone was screwing with me.  It is of course imperative that I not detect the tampering, because if I do detect it before you steal the data I can put the good code back and change the keying (e.g. password) -- never mind that now I know you're trying to break in!  Assuming you can pull this off, all you need to do is force me to reboot so I have to put the password in again (e.g. you kill the power for long enough that my battery backup system is exhausted) and the next time I boot the machine.... Bob's your uncle!  Now you serve your warrant and... heh, look what we have here!

But in the context of a mobile phone the manufacturer can send down a "software update" that you have no control over and no ability to inspect, nor can you in most cases replace it with an older or different version on your own, because mobile phones have what is called an "anti-rollback" register that prohibits you from loading an earlier version of the software.  This means you're 100% at the sufferance of the company, since (1) you can't compile the software yourself after looking at it to see if it's doing something evil like storing and sending your password, and (2) if the manufacturer does have some skullduggery -- or just a bug -- in the code, you can't roll it back either without bricking the device.
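
The anti-rollback mechanism is simple to picture.  A conceptual toy (real phones burn a monotonic counter into eFuses or secure storage, not a Python variable):

```python
class BootRom:
    def __init__(self):
        self.min_version = 7            # monotonic: can only ever increase

    def flash(self, image_version: int, vendor_signed: bool):
        if not vendor_signed:
            raise PermissionError("unsigned image refused")
        if image_version < self.min_version:
            raise PermissionError("rollback refused: older than eFuse counter")
        self.min_version = max(self.min_version, image_version)

rom = BootRom()
rom.flash(8, vendor_signed=True)   # update OK; counter now 8
rom.flash(7, vendor_signed=True)   # raises: you can't go back, even to stock
```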

But it gets worse.  Is your password really required to "start" the phone in the first place?  

No.

Let me explain.  I have a Pixel 3a, and if I turn it off and then back on it says "unlock for all features and data."  Uh huh.  If I get an SMS message and I haven't unlocked it, the phone still bings at me.  How did it manage to access the operating storage of the device without my password to unlock the volume?

The answer of course is that it didn't need my password to generate the storage key; it was in the device.  The phone couldn't have booted without it, but of course it clearly did boot.

Now what the manufacturers could do is recognize that there is a significant difference between types of data on your device.  Specifically, a phone call or text message isn't private, because your service provider has the source and destination, time, and "size" (duration of the call), and in the case of a SMS it has the contents too.  Thus the manufacturer could have a "not really locked" store (equivalent to what all of the storage on your device is now) that is accessible on boot, just like it is now, and which a restarted device -- or one that has timed out or been force-locked -- could still access.

All the rest of the data, however, including all the application data, your photos and similar would be on a partition that is encrypted using key derivation that includes your manually-entered password.  On a boot none of this would be accessible without that, and on either expiration of a user-selected timeout or a "duress" action (e.g. long press on the power key) that keying would be destroyed in RAM.  That data would simply never be accessible to anyone without your personal act of unlocking -- period.  If you choose to use only a fingerprint or other biometric for that it's on you, but if you wanted to you could use a long alphanumeric password -- effectively impossible to guess, even if some firm can bypass any anti-guessing algorithm designed to slow down such a process.
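
A minimal sketch of what that second tier could look like.  This is a hypothetical design, not how Android or iOS actually behave, and the KDF parameters, toy cipher and lock semantics are all assumptions:

```python
import hashlib, itertools

def _crypt(data: bytes, key: bytes) -> bytes:
    # Toy cipher for illustration; a real implementation would use AES.
    stream = itertools.cycle(hashlib.sha256(key).digest())
    return bytes(a ^ b for a, b in zip(data, stream))

def _derive(password: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), b"per-device-salt", 600_000)

class UserDataTier:
    """Second-tier storage: its key exists only in RAM, and only while unlocked."""
    def __init__(self, password: str):
        key = _derive(password)
        self._flash = {"photo.jpg": _crypt(b"private bytes", key)}  # data at rest
        self._key = None                   # device boots locked

    def unlock(self, password: str):      # the user's personal act of unlocking
        self._key = _derive(password)

    def lock(self):                        # timeout expiry or duress long-press
        self._key = None                   # zeroized in RAM; partition unreadable

    def read(self, name: str) -> bytes:
        if self._key is None:
            raise PermissionError("user tier locked; re-enter password")
        return _crypt(self._flash[name], self._key)
```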

Google tries to pretend they are doing this with fingerprint-unlocked devices in that about once a day it will demand your password for "extra security."  But that's a false premise.  Even though it is demanding my password, claiming it "needs" it, a text message that comes in still echoes to my Garmin watch, which means that (1) the phone can receive and store the text, (2) it can correlate that with my contact list which is run by an app and (3) it can also communicate that to a second app (Garmin Connect) which talks to the watch over Bluetooth.  None of this could happen if the storage keys had been destroyed and the volume was inaccessible.

Why isn't this done by the manufacturers?

It has nothing to do with terrorism.  It has to do with one and only one thing: Money.

Simply put, none of these companies give a wet crap about your privacy, and doing that would compromise their primary business model, which is not selling you phones -- it's selling your personal data, directly and indirectly, via their "ecosystem" and app developers.  Since consumer fraud -- that is, intentionally concealing the true purpose and implications of what you allegedly "agree" to -- is no longer prosecuted, ever, and nobody at a large firm ever goes to prison for screwing consumers, they do exactly that.

See, if this were implemented then any process that had, or wanted to open, a file handle on the encrypted volume would have to be blocked as soon as the keying was removed.  This means that as soon as your timeout expired, any app that wanted to retrieve background information couldn't, until and unless you re-entered the password.  Your much-vaunted "encrypted message app" could tell you something was waiting for you, but not what or from whom, since it couldn't get to its storage until you unlocked the device.  You'd probably find that acceptable, by the way.
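
Continuing the sketch above, that blocking behavior falls straight out of the design:

```python
tier = UserDataTier("correct horse battery staple")
tier.unlock("correct horse battery staple")
print(tier.read("photo.jpg"))  # b'private bytes' -- apps work while unlocked
tier.lock()                    # timeout fires, or the user long-presses power
tier.read("photo.jpg")         # PermissionError -- background apps get nothing
```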

But Facebook would find it completely unacceptable that it couldn't get to your location all the time, because its app couldn't look up whatever sort of "user key" was associated with your login information, or anything else in storage, when the device had timed out.  Google couldn't tell you that the store you just walked by takes Google Pay, and Apple likewise couldn't tell you that the store takes Apple Pay.  Various other apps couldn't siphon off location or other data (e.g. Walmart saying "hey, there's a Supercenter right over there!") because they couldn't get to their local storage either.

In other words now you'd have to have the phone unlocked and in use, or within the active "quick unlock" (e.g. fingerprint only) window for any background app to run that needs access to local storage -- because that local storage could implicate something personal and private.

There's utterly nothing preventing the Android and iOS folks from having their OS work this way.  In fact it wouldn't be difficult at all to change their code to do so.  They have just refused to, on purpose, and it's not because they want to help the cops catch (dead) terrorists.

It's simply because their entire business model relies on that storage being accessible any time the device is on and has any sort of external connection, whether to WiFi or a mobile network.

The implication of this, however, is that nothing on your cellular device is ever secure.  Period.  This has profound implications for things like personal banking and other financial data, never mind any sort of business-sensitive information and, for many people, photos.

These firms are not selling you phones.

They're selling you to the companies that make apps for phones, including themselves.

And by the way, while you can hate on Google for this at least they're honest about it.

Comments.......
Whitehat
Posts: 1503
Incept: 2017-06-27

Gone West
the whole problem here is the absence of a phone that allows the user to install his own operating system.

such a device can have the necessary "open" data and software functionality to be a phone when locked, but have a complete OS and data accessed as the user sees fit with or without encryption, multi-boot and most of the ways that are already possible on personal computers.

what strikes me as odd is that Microsucks did not offer a phone like this. it could actually have a full Windows 10 installation that functions based upon the environment, being optimized as a phone. the real magic would occur when such a phone was docked with a PC and became what we expect from personal computers today. the ability to do things would be enhanced by hardware sharing, in that the PC has more room for chips and fewer power restrictions.

Karl discussed this concept a long time ago, the phone being a person's computer. the docking concept solves this easily and would allow MS to dominate and bring back the user customization of alt OS and security.

seems like two things are at play. MS is not a dynamic company, and further broke itself by internally losing this core competency. there is also the possibility that the powers that be prevent such an innovative phone, as lots of interests make money from Stupidus Americanus.

----------
There are two ways to be rich: One is by acquiring much, and the other is by desiring little.
snow, seasons, distance and dirt roads: SSDD
"Be not deceived; G-d is not mocked; for whatsoever a man soweth, that shall he also reap" (Gal. 6:7)
Emg
Posts: 467
Incept: 2012-11-20

Canada
"But how is it that a firm like Cellbrite can break into a locked iPhone if that is truly the case?"

As I understand it, they're brute-forcing the password. From what I've read, they have a way to backup the phone, brute-force the password until the phone refuses to let them try again or wipes itself, then reflash the phone from the backup and continue the brute-forcing. This means even the built-in 'wipe the phone after ten failed passwords' has no effect, because they just reload it from the backup.
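
Roughly, the loop would look like this -- a toy sketch of the idea, not a claim about Cellebrite's actual tooling:

```python
class FakePhone:
    def __init__(self, pin):
        self._pin, self._fails = pin, 0
    def backup(self):
        return self._fails                      # image the flash, counter included
    def restore(self, snapshot):
        self._fails = snapshot                  # reflash: the counter winds back
    def try_pin(self, guess):
        if self._fails >= 10:
            raise RuntimeError("device wiped")  # the 'ten strikes' defense
        self._fails += 1
        return guess == self._pin

def crack(phone):
    snapshot = phone.backup()
    for pin in (f"{i:04d}" for i in range(10_000)):  # 4-digit space: trivial
        try:
            if phone.try_pin(pin):
                return pin
        except RuntimeError:
            phone.restore(snapshot)             # sidestep the wipe, keep guessing
            if phone.try_pin(pin):
                return pin

print(crack(FakePhone("4821")))  # found quickly; a long password would still win
```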
Tickerguy
Posts: 161139
Incept: 2007-06-26
A True American Patriot!
@Emg - That's pointless if (1) the person used an alpha, long password and (2) the key derivation is actually any good.

Now I agree that if the person in question used a 4-digit password they'll get in within a reasonable amount of time. But that's not their claim -- they claim they can break ANY of these devices. That implies that either the key derivation components aren't really necessary or Apple chimped it on purpose.

----------
Winding it down.
Redjack
Posts: 219
Incept: 2018-01-29

Iowa
I find this interesting.

Buddy went to China, A LOT, over the years.

It is "common knowledge" that the ChiComs can access all Iphones. My buddy refused to use an I phone while traveling.
Now, he may be wrong. Such things are not his wheel house. This whole thing is a bit odd though.
Tickerguy
Posts: 161139
Incept: 2007-06-26
A True American Patriot!
Apple allegedly has a "burnt-in" part of the keying in the processor that -- supposedly -- nobody has.

Supposedly.

It is my suspicion that this is sufficient STANDING ALONE to unlock the storage. It is also my suspicion that since Apple has no foundry in the US they exclusively control, well.... guess who also has the keying list that correlates with a given CPU serial number?

Uh huh.

----------
Winding it down.
Emg
Posts: 467
Incept: 2012-11-20

Canada
If they can break any device, the Feds don't need Apple to do anything.

Apple do have a fairly thick PDF on how they claim to do security on their phones. It looked pretty secure when I went through it, so long as the hardware key store is secure and the passcode is long enough.

Of course, they might be lying, because no-one else can prove it's working the way it says it's working.
Franco
Posts: 91
Incept: 2009-10-06

So nobody has produced a mobile phone that can be loaded with an open-source operating system?
Tickerguy
Posts: 161139
Incept: 2007-06-26
A True American Patriot!
@Emg - I read their keybag document. If it's accurate then there's no known way to break it that also doesn't break every military and civilian encryption system out there, provided the user actually uses a strong password.

There's a lot of "ifs" in there, however, among them:

1. The user key is actually the entire thing, or a large enough component standing alone that having the processor's burnt-in key is worthless.

2. If #1 is true what purpose does that key in the CPU have, other than as a serial number, and if it is just a serial number why not just make it a serial number? That is, there's no value to its secrecy in that case.

That you cannot forcibly extract a key out of a TPM (no matter if it's a separate chip or is effectively something like this and is burnt into the CPU) is worthless if an authorized party can tell the TPM to release the keying. This is where I have a problem with the premise of a TPM -- it only protects one thing, and that is the physical separation of storage from a married device. If the married device can be updated with a signed, authorized piece of firmware then the TPM does not consider that to be "tampering" and the key will be released. Unless I am the only one who can update said married device because I am the only one with the signing key there is no security; I now must trust that Apple (for example) will NEVER flash my device either on purpose or otherwise with something that tells the TPM to let the key go!

Now if the TPM only has PART of the key then that's OK, but now we're wondering why it's there in the first place.  If its only purpose is to authenticate signed code updates I'm OK with that, but as soon as it gets involved in the data storage side I'm back to trusting someone else.  If it is only there to "help", for example, against pedestrian attacks (e.g. someone steals my laptop and removes the disk to try to steal the data on it) I'm OK with that.  But against a conspirator that either has the company's cooperation or can compel it, that's worthless.
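
Here's that trust problem in miniature.  A toy, obviously -- a real TPM seals keys to measured PCR state -- but the principle is the same:

```python
class ToyTPM:
    def __init__(self):
        self._sealed = None
    def seal(self, key: bytes):
        self._sealed = key
    def unseal(self, firmware_is_vendor_signed: bool) -> bytes:
        # The only test is "does the requesting firmware carry an authorized
        # signature?" -- NOT "did the OWNER approve this release?"
        if firmware_is_vendor_signed:
            return self._sealed
        raise PermissionError("tampered/unsigned firmware: key withheld")

tpm = ToyTPM()
tpm.seal(b"disk master key")
print(tpm.unseal(True))  # ANY vendor-signed update, benign or evil, gets the key
```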

For example my laptop has a TPM and its disk encryption BY DEFAULT uses only that. Well, bull****! If you edit the config, however, it can also demand either a USB keying device or a password as well. I'm still not convinced, however, because guess what -- the password length is quite limited! Probably enough to eventually be brute-force broken. That's on purpose. So, if I were the paranoid type or involved in something nasty, nothing important would be on said machine and OS, because its implementation (Bitlocker, by the way) is simply not secure enough. Never mind that I have no trust whatsoever that Microsoft isn't transmitting and retaining recovery keys even when told you only want to print or locally store them.

Now for the pedestrian "someone steals the laptop" problem the default is enough because if you remove the disk you have nothing. But for actual security it's worth zero.

So what's the real model here? Hmmmm.....

Android 10 tries to play this cute "keybag" game too, and most modern hardware has a TPM in the phone. OK. Now show me evidence that without my password the storage cannot be unlocked, that the key derivation is secure without my password's component (in other words it provides enough entropy that even with what Google has if you don't have that you have nothing) AND that the device destroys that keying data on demand or interval (which I know it does not, and Apple does not either, other than MAYBE -- and I do mean MAYBE -- on a full restart.)

Point being, if Cellebrite and others can break the device then either the keying is insecure -- in that the parts they can compel release of are enough to brute-force the rest no matter what you do -- or worse, the keying is being escrowed somewhere that survives a lock or even a reset and thus can be popped back out, given appropriate "code authorization."

Incidentally this is turning into a problem with Android for sure, because I've heard that in the very near future Android phones will be able to update the OS and restart without requiring the user's passcode. That of course means a reset isn't really a reset, and as soon as you allow that data into the TPM the manufacturer can send an "authorized" firmware update that causes it to be spat back out.

----------
Winding it down.

Draakken
Posts: 2
Incept: 2019-12-04

have you had a chance to look at the librem phone?
https://puri.sm/
Tinman
Posts: 636
Incept: 2008-02-16

People's Republic Of Maryland