The Market Ticker
Commentary on The Capital Markets - Category [Technology]


2020-01-16 08:12 by Karl Denninger
in Technology , 145 references

So the alleged "encrypted phones suck" thing has come up once again, with AG Barr claiming that Apple is refusing to "help" unlock the Pensacola shooter's iPhone.

This shouldn't BE a conversation -- because what's being discussed shouldn't be something the authorities can do at all, provided you as a user choose to protect your data in a reasonable fashion.  Further, these devices are intentionally designed to make protecting it that way impossible.

A quick primer -- very high-quality encryption is available today to anyone who cares to use it (and a lot of people do, including banks, other financial institutions, individuals, corporations of all sorts, and governments).  It is effectively unbreakable using today's technology.  The symmetric encryption used for the actual payload data on modern systems has never been demonstrably broken.  If it ever is broken then not only will criminals be unable to protect what they do, but neither will governments, military organizations, banks, your brokerage, etc.

The best means of deriving those session keys use either asymmetric encryption (e.g. RSA) or a multi-part derivation function that is "one way" -- that is, you put in an input and get a key out, but the process cannot be reversed to recover the inputs.  Multi-part key derivation has significant advantages, including some degree of protecting you from yourself.  That is, a weak password can obviously be guessed, but if you ALSO need (for example) a strongly-generated piece stored on a smart card or USB stick, then without that piece even a correct guess of the password doesn't help.
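
A minimal sketch of that kind of composite derivation, in Python, assuming a PBKDF2 password stretch plus a key file on removable media -- the function name, iteration count, and paths are illustrative, not any vendor's actual scheme:

import hashlib

def derive_key(password: str, keyfile_path: str, salt: bytes) -> bytes:
    # Slow, one-way stretch of the password (deliberately expensive to guess against).
    pw_part = hashlib.pbkdf2_hmac("sha512", password.encode(), salt, 500_000)
    # Second component: a strongly-random blob kept on a USB stick or smart card.
    with open(keyfile_path, "rb") as f:
        kf_part = hashlib.sha512(f.read()).digest()
    # Combine both; without BOTH pieces (and the salt) the key cannot be recovered.
    return hashlib.sha256(pw_part + kf_part).digest()

# Usage (paths and passphrase are placeholders):
#   salt = os.urandom(16)   # stored alongside the volume metadata
#   key = derive_key("long passphrase here", "/mnt/usb/key.bin", salt)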

If a storage volume is encrypted using one of these systems it is effectively impenetrable except by obtaining the keying.  If part of the keying is in your head, then the 5th Amendment prevents the government from acquiring it without your consent, which they cannot compel.  Further, if it is only in your head and your head is no longer functional for some reason, whether by your own hand or someone else's, then obviously it's gone.  Note that if it is derived from biometric data, such as a fingerprint or retinal scan, current court decisions allow you to be compelled to provide those!  For this reason, while it's ok for that type of information to be part of the key, security demands that it never be the entire key.

If, for example, I encrypt a disk volume using something like "GELI" (on FreeBSD) and use a composite key -- that is, part on a USB stick and part password -- then without both that disk cannot be decrypted.  Further, if the machine in question is tamper-aware it can, upon detection of tampering (e.g. removal of the case lid), almost instantly erase the keying blocks at the front of the volume containing the metadata needed to derive the session key from the provided components.  Without all three of those items it is not possible to determine the session key.  If the metadata blocks are destroyed (and there is no backup copy anywhere) you have a disk indistinguishable from one filled with random ones and zeros.
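
To make the "destroy the metadata and the disk is gone" point concrete, here is a sketch of the idea: the volume's real (random) master key exists only in a metadata block, wrapped under the derived key, so overwriting that block destroys the volume permanently.  This illustrates the concept only -- it is not GELI's actual on-disk format -- and it assumes the third-party Python "cryptography" package:

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

derived_key = os.urandom(32)    # stand-in for the output of derive_key() above
master_key  = os.urandom(32)    # the key the payload data is actually encrypted with

# "Metadata block" at the front of the volume: nonce + wrapped master key.
nonce = os.urandom(12)
metadata = nonce + AESGCM(derived_key).encrypt(nonce, master_key, b"volume-v1")

# Normal unlock: unwrap the master key from the metadata block.
recovered = AESGCM(derived_key).decrypt(metadata[:12], metadata[12:], b"volume-v1")
assert recovered == master_key

# Tamper response: overwrite the metadata block.  The master key is now
# unrecoverable even by someone holding the password AND the USB stick.
metadata = os.urandom(len(metadata))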

Now let's think about cellphones.  When the phone is running the entire storage volume is mounted.  This implies that any decryption keys have been provided and are in use.  Apple claims to have a multi-level "keybag" approach that is essentially file-by-file and, supposedly, can't be bypassed.  But how is it that a firm like Cellebrite can break into a locked iPhone if that is truly the case?  And why can Google remotely unlock your device -- a capability they do not deny?

Let's cut the crap: If the session key has been destroyed by the operating system due to a timeout that allegedly "requires" you to re-enter the components to re-generate it, or if it was never entered since the device was powered on, then breaking in by anything other than brute-force guessing is impossible -- unless the key is stored somewhere on the device, or your credentials were stored either on the device or on the provider's infrastructure.  And brute-force guessing against a reasonably-decent password will take thousands of years, even with supercomputer (e.g. NSA) assistance.
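
To put rough numbers on that claim (the guess rate below is an assumption chosen purely for illustration; a deliberately slow key-derivation function is what keeps the real rate low):

alphabet = 62          # upper + lower case letters plus digits
length = 12            # a "reasonably decent" random password
keyspace = alphabet ** length

guesses_per_second = 1e9    # assumed: a large cluster grinding against a slow KDF
seconds = keyspace / guesses_per_second
years = seconds / (3600 * 24 * 365)
print(f"{keyspace:.2e} candidates -> roughly {years:,.0f} years to exhaust")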

At least Google is honest about it -- the storage encryption on your mobile device is not uniquely derived from, among other things, your entered password.  Further, these firms have intentionally designed their phones to be tough to "quick-hardlock" and they don't "time out" on a user-desired basis in that regard either.  Whether there is any actual protection if the device is off at the time of interception, or out of power, is an open question - but I would not bet on it.  More on that in a minute.

Let's return to said FreeBSD machine (e.g. my primary server).  When it boots there is a small loader that has to be unencrypted.  That loader knows just enough to be able to look at the installed disks and figure out if any of them are bootable with a FreeBSD operating system -- and if so, whether the components of that volume appear to be encrypted.  You tell the system this, incidentally, by setting a simple flag on the partition in question.

The loader doesn't know if the allegedly encrypted volume is really encrypted or full of trash; it has no way to know.  It asks for a password and then tries to use both it and, if the flags so specify, another location holding an alleged additional piece of the key derivation components (e.g. a USB stick).  Once it has what it thinks is a valid set of keying it runs through the derivation, sets the keying for GELI on that space, and then probes the disk to see whether what's there is an actual volume or not.  If it's not a valid volume then you either specified a disk that wasn't really encrypted or you provided a wrong key component (password, bad USB stick, etc) -- it doesn't know which, just that the attempt failed.

If the loader actually sees a valid disk when it has done this then it knows the keying is good (because there's no possible way for the volume to appear valid if it isn't) and it proceeds to load the operating system.  It then passes the derivation information it used to the kernel, which uses it to mount the disk, and startup commences normally.
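
In rough pseudocode (Python here purely for readability -- this is not the real FreeBSD loader; derive_key() is the earlier sketch and the volume/kernel objects are hypothetical stand-ins used only to show the logic), the flow looks something like this:

FS_MAGIC = b"FSOK"   # stand-in for a real filesystem signature

def try_unlock(volume, password, keyfile):
    key = derive_key(password, keyfile, volume.salt)
    first_block = volume.decrypt_block(0, key)     # attempt decryption with that keying
    if not first_block.startswith(FS_MAGIC):
        # Wrong password, wrong/missing USB stick, or not an encrypted volume
        # at all -- the loader cannot tell which, only that the attempt failed.
        return None
    return key

def boot(volume):
    key = None
    while key is None:
        key = try_unlock(volume, input("GELI passphrase: "), "/boot/key.bin")
    kernel = volume.load_kernel(key)
    kernel.start(keying=key)    # the kernel re-uses the same keying to mount root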

Note the risk here -- that loader, if it's tampered with, could get you to enter the password and stash it somewhere.  Now it's not a secret anymore!  Worse, it could steal the contents of any auxiliary keying device too.  So it is really, really important that this not happen, which is why you have things like "secure boot" and signed bootloaders on phones and some modern PCs.  But, of course, that requires you to trust whoever signed those boot components absolutely.

This is what Barr is talking about -- he wants Apple to provide him with a signed but tampered-with bootloader that will start the phone.  Apple has refused.  But Apple is being disingenuous; that loader will not unlock the device by itself unless the user's password isn't really required to unlock the storage in the first place.

Remember that in this case specifically the shooter is dead; his password, in his head, died with him.  Therefore if a compromised bootloader would unlock the device the password isn't actually required!

Let's say you wanted to steal my data off my system (whether with a warrant or not).  One way to do it would be to tamper with the "gptzfsboot" file on my system somehow (theoretically you could break in, pull the cord, change that small unencrypted part of the disks involved, put them back in and turn the power back on).  I might well think that was a random crash or power-loss event -- and not that someone was screwing with me.  It is of course imperative that I not detect you did it, because if I do detect the tampering before you steal the data I can put the good code back and change the keying (e.g. password), never mind that now I know you're trying to break in!  Assuming you can pull this off, all you need to do is force me to reboot so I have to put the password in again (e.g. you kill the power for long enough that my battery backup system is exhausted) and the next time I boot the machine.... Bob's your uncle!  Now you serve your warrant and... heh, look what we have here!

But in the context of a mobile phone the manufacturer can send down a "software update" you have no control over and no ability to inspect, nor can you in most cases replace it with an older or different version on your own, because mobile phones have what is called an "anti-rollback" register that prohibits loading an earlier version of the software.  This means you're 100% at the sufferance of the company, since (1) you can't compile the software yourself after looking at it to see if it's doing something evil like storing and sending your password, and (2) if the manufacturer does have some skulduggery -- or just a bug -- in the code, you can't roll it back either without bricking the device.

But it gets worse.  Is your password really required to "start" the phone in the first place?  

No.

Let me explain.  I have a Pixel 3a and if I turn it off and then back on it says "unlock for all features and data."  Uh huh.  If I get an SMS message and I haven't unlocked it, the phone still bings at me.  How did it manage to access the operating storage of the device without my password to unlock the volume?

The answer of course is that it didn't need my password to generate the storage key; it was in the device.  The phone couldn't have booted without it, but of course it clearly did boot.

Now what the manufacturers could do is recognize that there is a significant difference between types of data on your device.  Specifically, a phone call or text message isn't private because your service provider has the source and destination, time, and "size" (duration of the call), and in the case of an SMS it has the contents too.  Thus the manufacturer could keep a "not really locked" area (equivalent to how all of the storage on your device is treated now) that is accessible on boot, just as it is now, and which a freshly-restarted device -- or one that has timed out or been force-locked -- could still access.

All the rest of the data, however, including all the application data, your photos and similar would be on a partition that is encrypted using key derivation that includes your manually-entered password.  On a boot none of this would be accessible without that, and on either expiration of a user-selected timeout or a "duress" action (e.g. long press on the power key) that keying would be destroyed in RAM.  That data would simply never be accessible to anyone without your personal act of unlocking -- period.  If you choose to use only a fingerprint or other biometric for that it's on you, but if you wanted to you could use a long alphanumeric password -- effectively impossible to guess, even if some firm can bypass any anti-guessing algorithm designed to slow down such a process.
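
Here is a sketch of that split, in Python purely for illustration; load_device_key() and decrypt_file() are hypothetical helpers, and none of this is how Android or iOS actually structure their keystores:

import hashlib

class CredentialStore:
    def __init__(self, salt: bytes):
        self.salt = salt
        self.device_key = load_device_key()   # hypothetical: always available at boot
        self.user_key = None                  # exists only after the user unlocks

    def unlock(self, password: str):
        # Key for the private partition, derived from the user's password.
        self.user_key = hashlib.pbkdf2_hmac(
            "sha512", password.encode(), self.salt, 500_000)[:32]

    def lock(self):
        # Timeout or duress action (e.g. long press on the power key): wipe it from RAM.
        self.user_key = None

    def open_private(self, path: str) -> bytes:
        if self.user_key is None:
            raise PermissionError("private volume locked; password required")
        return decrypt_file(path, self.user_key)      # hypothetical helper

    def open_public(self, path: str) -> bytes:
        # Call metadata, incoming SMS, etc. -- things the carrier has anyway.
        return decrypt_file(path, self.device_key)

Note that open_private() refusing while the keying is gone is exactly the blocking behavior discussed below: nothing on the private partition can be touched by any app, background or otherwise, until you unlock.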

Google tries to pretend they are doing this with fingerprint-unlocked devices, in that about once a day the phone will demand your password for "extra security."  But that's a false premise.  Even though it is demanding my password, claiming it "needs" it, a text message that comes in still echoes to my Garmin watch, which means that (1) the phone can receive and store the text, (2) it can correlate that with my contact list, which is managed by an app, and (3) it can also communicate that to a second app (Garmin Connect), which talks to the watch over Bluetooth.  None of this could happen if the storage keys had been destroyed and the volume was inaccessible.

Why isn't this done by the manufacturers?

It has nothing to do with terrorism.  It has to do with one and only one thing: Money.

Simply put, it's because none of these companies give a wet crap about your privacy, and doing that would compromise their primary business model, which is not selling you phones -- it's selling your personal data, directly and indirectly, via their "ecosystem" and app developers.  Since consumer fraud -- that is, intentionally concealing the true purpose and implications of what you allegedly "agree" to -- is no longer prosecuted, ever, and nobody in a large firm ever goes to prison for screwing consumers, they do exactly that.

See, if this was implemented then any running process that had or wanted to open a file handle on the encrypted volume would have to be blocked as soon as the keying was removed.  This means that as soon as your timeout expired, any app that wanted to retrieve background information couldn't, until and unless you re-entered the password.  Your much-vaunted "encrypted message app" could tell you something was waiting for you, but not what or from whom, since it couldn't get to the storage until you unlocked the device.  You'd probably find that acceptable, by the way.

But Facebook would find it completely unacceptable that it couldn't get to your location all the time, because its app couldn't look up whatever sort of "user key" was associated with your login information, or anything else in storage, when the device had timed out.  Google couldn't tell you that the store you just walked by takes Google Pay, and Apple likewise couldn't tell you that the store takes Apple Pay.  Various other apps couldn't siphon off location or other data (e.g. Walmart saying "heh, there's a Supercenter right over there!") because they couldn't get to their local storage either.

In other words you'd now have to have the phone unlocked and in use, or within the active "quick unlock" (e.g. fingerprint-only) window, for any background app that needs access to local storage to run -- because that local storage could contain something personal and private.

There's utterly nothing preventing the Android and iOS folks from having their OS work this way.  In fact it wouldn't be difficult at all to change their code to do so.  They have just refused, on purpose, and it's not because they want to help the cops catch (dead) terrorists.

It's simply because their entire business model relies on that storage being accessible any time the device is on and has any sort of external connection, whether to WiFi or a mobile network.

The implication of this, however, is that nothing on your cellular device is ever secure.  Period.  This has profound implications for things like personal banking and other financial data, never mind any sort of business-sensitive information and, for many people, photos.

These firms are not selling you phones.

They're selling you to the companies that make apps for phones, including themselves.

And by the way, while you can hate on Google for this at least they're honest about it.

 



2020-01-14 07:00 by Karl Denninger
in Technology , 188 references

Remember when we were told that you'd have fully autonomous driving by 2020?

Tesla was going to be first, but not the only offering for long.

Well, it's 2020.

Where is it?

It turns out that, well, Musk lied.  Like every good carnival barker he also ran the pricing scam to a "T": buy it now or you will pay more later -- a ludicrous claim for software that has never, ever been true in the history of computerized anything.

Now could he get away with that if there was no competition?  Sure.  But there will be competition.

As Tesla's stock flies over $500 a share exactly nobody is holding anyone accountable for that bull****.  Just like nobody is holding Trump accountable for his claim that by the end of his first term the deficit would be zero; it has instead skyrocketed.

Calling something "full self-driving" ought to be good for an instant cement-shoe sentence if you cannot, in fact, punch in a destination, get in the back seat with a bottle of rum and arrive smashed out of your mind -- without breaking any laws.  Since the car by definition doesn't require you, it also doesn't require that you be sober.  Oh, and by the way, it should carry no liability premium on your insurance either, because you can never be personally (and thus legally) responsible for a crash.

Marketing pablum is nothing new; it dates to, well, the beginning of marketing.

But specific claims for which you charge money are a different matter.  You've collected money under a false pretense.

We all know what that word is.....

PS: I'll bet I won't be able to buy an actual full self-driving car within the next 10 years.  But since Tesla said "by 2020," nobody should take that bet -- it has already been lost.

 

2020-01-10 13:17 by Karl Denninger
in Technology , 107 references

Jeff Bezos ought to be in prison.

Now.

Amazon's home security company, Ring, admitted to firing four employees for abusing their ability to view customers' video feeds in a Jan. 6 letter to five Democratic U.S. senators.

The January letter came in response to a Nov. 2 letter from the five senators requesting Amazon founder Jeff Bezos to disclose information regarding Ring's privacy practices given its ability to upload "video footage detailing the lives of millions of Americans in and near their homes" to its servers.

Now take a look at the excuse:

"Over the last four years, Ring has received four complaints or inquiries regarding a team member’s access to Ring video data," Amazon Vice President of Public Policy Brian Huseman wrote in the letter. "Although each of the individuals involved in these incidents was authorized to view video data, the attempted access to that data exceeded what was necessary for their job functions,"

Why is anyone at Amazon able to look at any of that data?

Physically able, not "well, they don't have a password."

And this, friends, is also a lie:

Additionally, no Ring employee has complete access to a customer's video footage. Ring only has three employees who currently "have the ability to access stored customer videos for the purpose of maintaining Ring’s AWS infrastructure," Huseman said.

Bull****.

Every AWS employee who has hypervisor access can get at any of the guest instances -- all of them.  In addition any unencrypted data is accessible to anyone with administrative access on that cloud infrastructure.  The number of people with that access, should they decide to try to use it, likely numbers in the thousands if not tens of thousands.

While it would be nice to believe that there is never a "bad guy," the facts are a different matter.  And further, it was a conscious decision in the design of that system to transmit and store that data unencrypted, or effectively so (e.g. where the keys are on the infrastructure itself and thus an administrator can get at them).

It is entirely possible to choose not to do that -- to design the system so that the transmission never happens in unencrypted form and the only person with the key is the customer.

But exactly none of these systems are designed this way.

HomeDaemon-MCP is -- on purpose.  Now if you, as the end user, decide to store the data on an unencrypted volume that's on you.  But that decision is yours, and the transmission to your device (e.g. phone) is encrypted 100% of the time.  Not just the authentication credentials (e.g. via a digest, etc) -- the entire video and audio stream.
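
As a sketch of the principle (not HomeDaemon-MCP's actual code, and assuming the third-party Python "cryptography" package), encrypting every chunk on the customer's own hardware with a key only the customer holds looks roughly like this:

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

customer_key = os.urandom(32)   # generated and kept on the customer's own controller

def seal_chunk(seq: int, chunk: bytes) -> bytes:
    # Encrypt one audio/video chunk; the sequence number is bound in as
    # authenticated data so chunks cannot be reordered or replayed.
    nonce = os.urandom(12)
    ct = AESGCM(customer_key).encrypt(nonce, chunk, seq.to_bytes(8, "big"))
    return seq.to_bytes(8, "big") + nonce + ct     # safe to hand to any relay or storage

def open_chunk(blob: bytes) -> bytes:
    seq, nonce, ct = blob[:8], blob[8:20], blob[20:]
    return AESGCM(customer_key).decrypt(nonce, ct, seq)

# The viewing phone holds the same customer_key; anything in the middle sees
# only seal_chunk() output and can neither view nor alter the stream.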

Why would you be so crazy as to use a system that, by design, makes possible the interception, viewing and disclosure of the inside of your home, plus the front door, plus whatever else that camera can see from there, to a nameless, faceless list of people whose identity and vetting you have exactly no ability to discover?

Are you out of your damn minds?

Oh, and does this tell you exactly why none of these firms are interested in an actually secure system?  Do you still believe this is about helping you and securing your property when an available alternative hasn't been snapped up and distributed in the market?

 

2019-12-31 09:24 by Karl Denninger
in Technology , 211 references

Gee, what did I point out years ago?

WASHINGTON/LONDON/SAN FRANCISCO (Reuters) - Hackers working on behalf of China’s Ministry of State Security breached the networks of Hewlett Packard Enterprise Co and IBM, then used the access to hack into their clients’ computers, according to five sources familiar with the attacks.

This is why the so-called "cloud" model is a crock when it comes to security.

You are trusting your data to people who you cannot screen, you did not hire, you cannot fire, and you have zero ability to even know if the provider is being straight with you.

In this case the allegation is that they weren't.

Businesses and governments are increasingly looking to technology companies known as managed service providers (MSPs) to remotely manage their information technology operations, including servers, storage, networking and help-desk support.

Right.  Because then you can "offshore" work to places like India and be (at least somewhat) insulated from the blowback when you fire your $100,000+ a year salaried people who used to do the work.  This makes your stock price go up.  What nobody talks about is that it destroys both your data security and that of your customers and clients.  Instead of a dozen people having administrative access, all of whom you vetted, there are now hundreds if not thousands of people at said "cloud company" with hypervisor access, which can get into any of the client processes -- and you have no idea who those people are, where they are, how good they are at their jobs, or whether they give a wet crap about security, whether through active indifference or simple incompetence.

And oh, by the way, if they're not in the US and/or not US nationals then you can forget about US law enforcement being able to arrest them too.  So much for "accountability" when what occurred wasn't a mistake but rather a deliberate act.

What's even worse is that if the hypervisor is breached, any audit infrastructure within the client environment, no matter how robust and tamper-resistant, is never touched -- and thus you have absolutely no way to know that an incursion has taken place.

This cannot be resolved in any so-called "cloud" environment.  Ever.

Period.

It is not a matter of "more diligence"; it is a function of the fact that the best security occurs when only one person, who is highly competent, has the ability to get into things and do bad stuff.  If the security model is sound then you have to compromise the one dude somehow -- either by finding something he missed or through active interference of some sort.  If he's competent that's hard.

As soon as there are two people it's half as hard, because the attack surface is twice the size.  When it's thousands of people it only takes one who drinks too much, *****s around with God-knows-who, or is behind on his house payment and thus can easily be bribed.  Never mind that it also only takes one bad coding mistake, and the more fingers in the pie the greater the risk of that mistake.
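
The arithmetic is straightforward; the per-person probability below is a made-up number purely to illustrate how fast the odds compound as the access list grows:

p = 0.001       # assumed chance any one admin gets bribed or compromised in a year
for n in (1, 2, 100, 5000):
    at_least_one = 1 - (1 - p) ** n
    print(f"{n:>5} people with access -> {at_least_one:.1%} chance of at least one bad actor")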

The worst news in these "multi-client" (all cloud) environments is that if the bad guys get not just into one client but into the infrastructure itself, they're quite a bit harder to find and they can jump from client to client.  That's what appears to be the case here, although without details -- which nobody wants to talk about, of course (gee, that wouldn't make stock prices go up, would it?) -- who knows what was infiltrated or exactly how.

Cloud resources are fine for public distribution of information where the entire point is to publish for freely-available access. There is no security issue there because there's nothing to protect; the entire point is for lots of people to be able to see it without constraint.

As soon as the data being held doesn't fit that description you're a five-alarm idiot to use any "cloud" resource for it, and the more interesting the data -- and the more secure it needs to be -- the more stupid it is to put it in the hands of others where you cannot personally vet and control the access list.


 

2019-12-20 09:29 by Karl Denninger
in Technology , 245 references

I've pointed this out since The Market Ticker began publication.

I raised hell -- lots of it -- even before then.

Nobody cares.

The NY Times wrote an article talking about it and found it "shocking." 

What is it?

The fact that your portable spying device tracks you, that it does so all the time, and that the claim that the data is "anonymized" -- and thus used only for the "legitimate" purpose of displaying "relevant ads" -- is a lie.

In point of fact it is trivially easy not only to figure out exactly who a given ID is; it is also very easy to target an individual person and, having done so, to follow them.  What's even worse is that it's also trivially easy to target "potentially interesting people" (e.g. anyone who goes into the White House, the Pentagon, a military base, etc), retrospectively identify them, and then -- having done so -- track them both retrospectively and prospectively.
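
A sketch of how that re-identification works in practice -- the coordinates, thresholds, and data layout below are assumptions for illustration only:

from collections import defaultdict

HOME = (30.4213, -87.2169)      # hypothetical home coordinates
WORK = (30.4520, -87.2310)      # hypothetical office coordinates

def near(a, b, eps=0.0005):     # roughly 50 meters at these latitudes
    return abs(a[0] - b[0]) < eps and abs(a[1] - b[1]) < eps

def identify(pings):            # pings: iterable of (ad_id, lat, lon, hour_of_day)
    score = defaultdict(lambda: [0, 0])
    for ad_id, lat, lon, hour in pings:
        if near((lat, lon), HOME) and (hour >= 22 or hour <= 6):
            score[ad_id][0] += 1            # sleeps at the house
        if near((lat, lon), WORK) and 9 <= hour <= 17:
            score[ad_id][1] += 1            # spends the workday at the office
    # The "anonymous" ID that satisfies both patterns is, for all practical
    # purposes, the person -- and now every other ping it ever emits is theirs.
    return max(score, key=lambda k: min(score[k]), default=None)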

Your car and health insurance company can, if it wishes to (and it probably does) track that you're frequently in a bar.  Not only in a bar, but which bar.  It can then buy your credit card data stream (nothing prevents the firm from selling it) to determine if you're drinking in there -- or just watching a football game.

I have a novel that I've been writing for a couple of years -- I work on it here and there -- that features this exact sort of malevolent abuse, but not to steal money.  To kill.

It would be trivially easy to do, and impossible to prevent under today's legal structures.

People say we should "change laws."  What laws would you change?  What changes would be effective?  Remember, today nobody goes to jail for anything they do so long as it's a corporation.  The mobile phone companies themselves have been caught selling your location -- to people like process servers and bounty hunters, both of which are private entities -- not the government itself.

And by the way, that's not, according to our "current law", a search and thus no warrant is required.

But heh, you think it's great to post your cute selfies all over social media.

Yeah, ok.

They -- where "they" is anyone interested, for any purpose whatsoever -- can and do buy, sell and trade this information.  If you're lucky all they want to do is steal some of your money by disadvantaging you in some way in the marketplace.  You don't think advertisers actually spend all the money these publicly-traded firms report as revenue without expecting to make back ten times as much from their "ad buys", do you?  Guess whose pocket that comes out of?

Just wait until someone wants to use it to find and kill you.
