The Market Ticker
Commentary on The Capital Markets - Category [Technology]

This isn't good at all....

When creators of the state-sponsored Stuxnet worm used a USB stick to infect air-gapped computers inside Iran's heavily fortified Natanz nuclear facility, trust in the ubiquitous storage medium suffered a devastating blow. Now, white-hat hackers have devised a feat even more seminal—an exploit that transforms keyboards, Web cams, and other types of USB-connected devices into highly programmable attack platforms that can't be detected by today's defenses.

This just plain sucks.

What they've done here is figure out that (unfortunately) many of the common USB controller chips are reprogrammable in the field, and there is no verification of what's loaded onto them.  Apparently there is also enough storage (or, in the case of a pen drive, lots of storage!) to do some fairly evil things.

At the core of this problem is the fact that a USB device presents an identifying "class" and vendor ID.  If the "class" is one the computer knows, it will attach the device, usually without prompting of any sort.  This is especially bad if the "class" presented is what is known as a "HID", or "Human Input Device" -- like a mouse or, worse, a keyboard.
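To make that concrete, here's a minimal sketch (assuming Linux with the pyusb package installed; enumeration typically needs root) that lists what each attached device claims to be.  A "pen drive" that also reports a HID interface is exactly the red flag in question.

```python
# Minimal sketch using pyusb (assumed installed; usually needs root on Linux):
# list what each attached USB device claims to be, and flag anything that
# presents a HID interface (class 0x03) the way a keyboard or mouse does.
import usb.core

for dev in usb.core.find(find_all=True):
    for cfg in dev:
        for intf in cfg:
            flag = "  <-- HID (input-device class)" if intf.bInterfaceClass == 0x03 else ""
            print(f"vendor={dev.idVendor:04x} product={dev.idProduct:04x} "
                  f"interface class={intf.bInterfaceClass:#04x}{flag}")
```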

Yes, you can have more than one keyboard connected, and all are active at once.  And yes, this is as bad as you think it might be.

The worst part of it is that virus and anti-spyware programs can't detect it, because the code doesn't run on the host machine; it runs on the device.  All the computer sees is a "keyboard" -- but it's not really a keyboard, it's your USB pen drive sending a key sequence that invokes something (e.g. a browser pointed at a specific bad place).

This can be detected if you're paying attention, but most people aren't.  You can see which classes a particular device attached as, but few people will look, and current operating systems don't prompt -- with good cause.  How do you answer such a prompt when the thing you're plugging in is a keyboard that isn't yet allowed to attach?  Ah, there's a chicken-and-egg problem, eh?

In any event there ARE defenses against this, but they will require significant operating system patches and a change in how USB attachment is handled -- which will help, but not entirely prevent, these sorts of exploits.  As it sits right now, unfortunately, mainstream operating systems are wide open to this sort of abuse.

For example, if my keyboard is plugged into USB Port 2, and it has a Vendor ID of "X" and a device type of HID/Keyboard, then any other port (or this port) that sees a different vendor ID and/or ANY HID/Keyboard device would bring up a warning that a user input device, specifically a keyboard, was attempting to attach.  You could then say "Yes" or "No" -- and if the device that popped up the prompt was a webcam or USB data stick, go looking for your sledgehammer to get a bit of an upper-body workout taking care of the problem.
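A rough sketch of that policy -- the names, IDs and prompt are purely illustrative, not anything an operating system ships today -- would look something like this:

```python
# Illustrative sketch of the whitelist-and-prompt policy described above:
# remember which (port, vendor, class) tuples have been approved, and refuse
# any new HID-class device until a human explicitly says yes.
approved = {("port2", 0x045E, "HID/Keyboard")}  # hypothetical known-good keyboard

def on_attach(port: str, vendor_id: int, device_class: str) -> bool:
    key = (port, vendor_id, device_class)
    if key in approved:
        return True                       # exact match: attach silently
    if device_class.startswith("HID"):
        ok = input(f"New input device {key} wants to attach. Allow? [y/N] ") == "y"
        if ok:
            approved.add(key)
        return ok
    return True                           # non-HID classes handled as they are today

# e.g. a "pen drive" on port 5 suddenly claiming to be a keyboard:
print(on_attach("port5", 0xDEAD, "HID/Keyboard"))
```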

But as it sits right now the only way you'll catch it is if the vendor and device ID don't match a loaded set of drivers and the system has to go looking for them -- in which case you will get a warning.  Sadly, for the common abuses of this (keyboards and mice in particular) you almost certainly already have such a driver on the system, and thus you're unlikely to catch it.

Yeah, this is a problem.....  and a pretty nasty problem at that.


My view: If this is how Ford views security and the iPhone, short Ford to zero.

“We are going to get everyone on iPhones,” Tatchio said. “It meets the overall needs of the employees because it is able to serve both our business needs in a secure way and the needs we have in our personal lives with a single device.”

Given what is publicly known -- that any IOS device connected to another data-bearing device transfers its entire trust envelope to that second device -- an IOS device in a corporate environment is only as secure as the personal computer in the employee's home, which is not under the control of the corporate IT department.

Read this again.

Now contemplate this -- said Ford employee, with a device that Ford (the company) believes is "secure", connects said phone to their personal computer at home to transfer some music.  Said computer at home has a virus on it that it picked up when that person, on their own time and in the privacy of their own home, surfed to some porn site on the Internet.

That virus sends the trust records for the iPhone back to a hacker in China!

The device's security has now been permanently compromised; said hacker can now, any time the device is on a network where he also has a presence (say, a public WiFi point), access huge amounts of data off said device, including contact lists, messages, pictures and similar items, along with (gulp!) OAUTH tokens. The latter, by the way, are identical in effect to having someone's password for his social media accounts; they allow the impersonation of that individual on those accounts.

Secure my ass.

That Ford published such nonsense tells me exactly how Ford the company looks at data security issues at an enterprise level.  The company has publicly declared that fellating employee egos takes precedence over enterprise data security.

A company that takes this position deserves what befalls them as a consequence.


Time for this one...


I've known this for quite a while, and in fact alerted readers (in an oblique way) to the risk when I wrote on 10/4/2013 about Android apps and the permission screen -- and how Android intentionally hides certain very dangerous security permissions on a secondary screen when asking if it's ok to install something.

Now that the nasty is out in the open there's no reason to mince my words any more.

The majority of devices running Google's Android operating system are susceptible to hacks that allow malicious apps to bypass a key security sandbox so they can steal user credentials, read e-mail, and access payment histories and other sensitive data, researchers have warned.

Not the majority.  Basically all.

All Android apps have to be "signed" with a cryptographic key.  That's good.  I have one that I self-generated and use for testing and development purposes.

The problem comes with the fact that Android has a (relatively short) list of "super signatures" that allow access to places you should not be able to go.  Those signatures are there for what could be argued are legitimate purposes, such as an MDM component that needs to be able to run around your system with effective impunity.

Think of it as a "SUID" bit of sorts -- there are certain applications that have "super user" capability even on a non-rooted device, and when they use it you're not prompted.  This is different from when you root your device; in that case you can tell your phone to warn you that privilege escalation is being requested and give you the option to say "No!"

This particular back-door privilege escalation is of the same sort as Apple's IOS one, in that it doesn't require you to grant permission for it.  Adobe Flash uses it as a means to shim itself into an app so the app can display Flash content, but it never has to ask you if this is ok first -- it's given the permission by virtue of what the app is.

That's very dangerous -- if there's a bug in that application your data can be left at risk.

In this case what Google did, however, is much worse: Android fails to verify that the cryptographic signature that claims to be for one of those trusted apps really came from the actual legitimate source of said application.

How would you know this, in general, if it was being done correctly?  Each of these "real" apps with this set of super privileges has a certificate signed by a certifying authority.  Just as your driver's license has certain validating components, such as an embedded holographic picture, a cryptographic signature has validation features as well.

If you go to tickerforum.org via https, you see the little lock.  If you click that lock and drill down, you will see a certification path down to that certificate; each of the certificates up the chain has signed the one under it.  So long as all of those signatures are good, the certificate claiming that I am "tickerforum.org" is known (as long as the cryptography is not compromised) to be valid.

This assumes you actually check the chain all the way back up to the root in the certificate store, and Android does not!
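For illustration, here is roughly the check that gets skipped, as a minimal sketch using Python's cryptography package and assuming RSA-signed certificates; the file handling and names are placeholders:

```python
# Minimal sketch of chain verification: walk from the leaf up to the trusted
# root, checking each certificate's signature against its issuer's public key.
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

def load_cert(path: str) -> x509.Certificate:
    with open(path, "rb") as f:
        return x509.load_pem_x509_certificate(f.read())

def chain_is_valid(chain: list[x509.Certificate]) -> bool:
    """chain[0] is the leaf, chain[-1] is the trusted root from the cert store."""
    for cert, issuer in zip(chain, chain[1:]):
        try:
            issuer.public_key().verify(
                cert.signature,
                cert.tbs_certificate_bytes,
                padding.PKCS1v15(),
                cert.signature_hash_algorithm,
            )
        except Exception:
            return False  # one forged link anywhere invalidates the whole chain
    return True

# Fake ID, in essence: Android read the claimed issuer name off the certificate
# but never performed this verify step, so a home-made certificate could
# "claim" to descend from a whitelisted vendor.
```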

What that allows me to do is present a trivial forgery of any application, and since certain app names and sources are "white listed" and automatically allow that app super permissions, if you are tricked into installing one of those with a forged certificate you're ****ed.

How do you get tricked into installing it?  Google Play has historically been full of fake apps!  For example, when BBM was being introduced for Android there were dozens of fake BBM applications released on Google Play.  What was in them?  I don't know exactly, but that they were fakes was easily determined -- if you looked.  Yes, Google claims to be looking for such forgeries -- but if they are looking, and catching them before the public can get at them, how did all the fake BBM apps wind up in the Play Store?  Google could check certificate validation (and might) in the Play Store before releasing an app, but given their clear and very publicly-discernible record with the fake BBM apps.....

In short I don't believe Google's claims in regard to their curation of apps prior to their visibility to the world.

So what happens if you install one of these trojan horses, either through an "App Store" or otherwise?

Your phone's security is instantly and permanently compromised.  Once installed, the app can rename and hide itself, preventing you from removing it.  Worse, it's entirely possible for an application running with elevated permissions of this sort to remount your root partition on the phone read/write and then insert itself into the NVRAM of the device.

For most Android devices this means that the malware will survive even if you hard reset the phone in the future because Android, in general, does not keep a separate copy of the original firmware from which it can reload itself -- a hard reset simply formats the data partition.  Anything that manages to get into the system partition stays and will remain active, even across a hard reset!  The only way to get rid of the spyware if you get bit by this is to use something like ODIN (if your device is a Samsung) to re-flash the entire device from scratch -- which is not software that a user typically has access to.

There is one defense against this sort of persistent risk -- the base load can be engineered to have zero free space (in which case nothing more can be written there, as there's no room for it).  There are a few stock device loads I'm aware of that are built this way -- but not many.
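If you're curious whether your own device's system partition leaves that kind of room, a quick sketch (assuming adb is installed, USB debugging is enabled and the device is attached) is simply to ask it:

```python
# Dump free space on the system partition via adb. A non-zero free figure
# means there is room for something to write itself in there.
import subprocess

out = subprocess.run(
    ["adb", "shell", "df", "/system"],
    capture_output=True, text=True, check=True,
)
print(out.stdout)
```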

If you get bit by this, and your device's load isn't protected against this risk, it takes only seconds for a rogue app to permanently destroy your device's security.

I think you can understand why I didn't let loose everything I discovered at the time I started talking about this.... but now that it's in the open, well.....


A follow-up was posted by the forensic security guy who uncovered Apple's spookware on IOS devices on the 23rd.  Let me point out a few salient points from it (and Apple's attempt at a "response"):

Additionally, this claim that your data is respected with data-protection encryption. The pairing record that is used to access all of this data is sent an escrow bag, which contains a backup copy of your key bag keys for unlocking data protection encryption. So again, we’re back to the fact that with any valid pairing, you have access to all of this personal data – whether it was Apple’s intention or not.

This is the 900lb gorilla in the room.

The keyring is part of the pairing data.

You have to understand how all of this works.  When you turn on your phone it has to load the encryption keys (if it, or any data on it, is encrypted).  Those keys have to be on the device, but you have a "master key" that unlocks the "safe" in which those keys are stored.

If you use a password management tool like KeePass you know how this works on your PC.  You have a password for your KeePass file, and in that file are lots of passwords.  With the key to unlock the safe, you can get to the actual keys (passwords) for each of the sites you're storing credentials for.

Disk (and file) encryption works more or less the same way.  The key itself is a random (you hope actually random!) series of bits.  That of course would be damned near impossible to memorize, so the computer conveniently "envelops" it in what amounts to a safe, and your password (or passcode, or whatever) unlocks that safe.

The problem here is that your passcode isn't the actual key itself -- and that key is in the pairing record!  So with a pairing record everything on your device is exposed.
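If you want to see the shape of the problem in miniature, here's a sketch of that "safe" using AES-GCM and PBKDF2 from Python's cryptography package -- illustrative only, not Apple's actual key bag format:

```python
# Minimal sketch of key "enveloping": a random data key is what actually
# protects your files; your passcode only unlocks the safe around it.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

data_key = AESGCM.generate_key(bit_length=256)   # the real (random) encryption key

# The passcode derives a wrapping key, which encrypts data_key -- that wrapped
# blob is the "safe".
salt, nonce = os.urandom(16), os.urandom(12)
wrap_key = PBKDF2HMAC(hashes.SHA256(), 32, salt, 200_000).derive(b"my-passcode")
wrapped_data_key = AESGCM(wrap_key).encrypt(nonce, data_key, None)

# An "escrow bag", per the quote above, amounts to keeping a usable copy of the
# key bag keys outside that safe -- so whoever holds the pairing record never
# needs the passcode at all.
escrow_copy = data_key
```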

Like Jonathan, my issue here isn't whether the NSA was involved.  I frankly don't care if the NSA was involved in designing and implementing this, if the FBI was involved, or who was involved.  That's not the point.

The point, as I've repeatedly made, is that it is the height of arrogance to believe that you're smarter than everyone else.  To include this sort of back door facility in a piece of software under the belief that nobody but the intended persons will use it (no matter who that is, and under what pretense) is ridiculous, it is grossly negligent and it is unacceptable.

Indeed, this is exactly the same crap the NSA allegedly pulled when it weakened certain public-key generation routines on purpose so it could break them.  To believe nobody else would figure that out and exploit it is ****ing stupid beyond words.  The NSA is not the smartest bunch of guys on the planet, now and forevermore, and neither is Apple.  Someone is always smarter and always able to figure it out.  As soon as you intentionally put in the means to break security, someone other than the intended people will find it and use that capability to screw you.

The real problem with this revelation, as with the NSA's key-generation games, is the myriad of entities that would love those pairing records and that are not governments with a legitimate (or even illegitimate) beef against you.  Consider the value such capability has to criminals, especially when people talk about letting your phone work as if it were your VISA card, for instance.  Or, for that matter, how many of you have an app on your phone that accesses your bank?  Note that with access to protected storage, while I might not be able to get your password, I almost certainly can get your account and routing numbers -- and with those I can drain your checking account.

Do you still think you shouldn't care about things like this because you're not doing anything wrong?

Second, there is the claim that you must "pair" the device to expose the risk.  That's true.  It's also true that as of IOS7 you get asked whether you want to trust a newly connected device.  But how many seconds does your eye have to be off your device for a person you don't implicitly trust to jab a connector into the socket and accept the prompt?  Are you sure that's never going to happen?  Remember -- if it does, even once, the person who does it gains the ability to break back into your phone any time they're on the same network you are, forever, unless you hard-reset the phone back to factory defaults!

At its core, the problem this revelation exposes -- and the length of time this has been "in the wild" underlines -- is that there is zero accountability for the so-called security promises that companies make about their products.  You simply cannot claim to have "encryption" as a feature when you give the ******ned keys away without telling the user you're doing it and getting his explicit permission to permanently void his security!

WAKE THE HELL UP AMERICA.


Oh boy....  Got a Chromecast?

Petro’s 20-minute YouTube video breaks down how the Rickmote works, but to briefly summarize, the device employs an unencrypted command called “deauth,” which basically deauthorizes the device from the network. As TechCrunch points out, this isn’t a Chromecast bug, but actually a relatively common quirk among WiFi devices.

Uh, yep.  And now that it's "clean" the hacking device simply attaches to it (since the Chromecast is looking for a network to join) and.... now it has control of it.

The worse news?  If the person who hacked you walks off (drives off, powers off, etc.) you're screwed, since there's no clean way to reset the device back to unconfigured mode and the network it was authorized on is no longer around!

Got a $35 Raspberry Pi?  The code for this one is public..... And maybe you need to grab a copy of your own for defensive purposes lest your Chromecast unit be rendered worthless!  At least if you have a copy you can reset your own device if someone hacks it...
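On the defensive side, spotting the attack in progress is easy precisely because the deauth frames fly in the clear.  A minimal sketch, assuming scapy is installed and a wireless interface is already in monitor mode ("wlan0mon" is a placeholder):

```python
# Report the unauthenticated 802.11 deauth frames this attack relies on.
from scapy.all import sniff
from scapy.layers.dot11 import Dot11, Dot11Deauth

def on_frame(pkt):
    if pkt.haslayer(Dot11Deauth):
        d = pkt[Dot11]
        print(f"deauth frame: {d.addr2} kicking {d.addr1} off the air")

sniff(iface="wlan0mon", prn=on_frame, store=False)
```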

How much fun could you have with this in an apartment building parking lot, or simply driving down a residential street?

SECURITY IS A PROCESS, NOT A PRODUCT, GOOGLE, AND NOT HAVING A PHYSICAL RESET/CLEAR NVRAM BUTTON ON A DEVICE, ALONG WITH LEAVING OPEN UNAUTHENTICATED "RESET" COMMANDS, IS ****ING CRIMINALLY STUPID!

h/t Mr. Ford

Update: There is a claim in the user manual that there is indeed a "hard reset" capability built into the device, which would at least allow you to get out of this state once it happens.  I don't have one to test with.... and given that unauthenticated reset commands can be issued (over wireless, at that!), I'm not about to have one any time soon either.

