The Market Ticker
Commentary on The Capital Markets- Category [Technology]

Legal Disclaimer

The content on this site is provided without any warranty, express or implied. All opinions expressed on this site are those of the author and may contain errors or omissions.

NO MATERIAL HERE CONSTITUTES "INVESTMENT ADVICE" NOR IS IT A RECOMMENDATION TO BUY OR SELL ANY FINANCIAL INSTRUMENT, INCLUDING BUT NOT LIMITED TO STOCKS, OPTIONS, BONDS OR FUTURES.

The author may have a position in any company or security mentioned herein. Actions you undertake as a consequence of any analysis, opinion or advertisement on this site are your sole responsibility.

Market charts, when present, are used with permission of TD Ameritrade/ThinkOrSwim Inc.  Neither TD Ameritrade nor ThinkOrSwim has reviewed, approved or disapproved any content herein.

The Market Ticker content may be sent unmodified to lawmakers via print or electronic means or excerpted online for non-commercial purposes provided full attribution is given and the original article source is linked to. Please contact Karl Denninger for reprint permission in other media, to republish full articles, or for any commercial use (which includes any site where advertising is displayed.)

Submissions or tips on matters of economic or political interest may be sent "over the transom" to The Editor at any time. To be considered for publication your submission must include full and correct contact information and be related to an economic or political matter of the day. All submissions become the property of The Market Ticker.

Considering sending spam? Read this first.

2019-07-18 11:10 by Karl Denninger
in Technology , 96 references
[Comments enabled]  

This story should not surprise:

Permissions on Android apps are intended to be gatekeepers for how much data your device gives up. If you don't want a flashlight app to be able to read through your call logs, you should be able to deny that access. But even when you say no, many apps find a way around: Researchers discovered more than 1,000 apps that skirted restrictions, allowing them to gather precise geolocation data and phone identifiers behind your back.

Let's be clear: Google has never given a wet crap about permissions.  They've been dragged kicking and screaming into the world of "granular" permissions, and even now they play cute and claim "well, in the next release.  Maybe."

Back in the earlier days of Android you were presented with permissions an app wanted when you installed it.  It was take it or leave it.  If your favored music app wanted location permission you either took that or didn't get the app, even though knowing where you were had nothing to do with playing music.

There has never been an "Internet" permission you can see, for example.  Why not?  Because if you could shut that off (say, for a music app that plays music from your local SD card) then the app couldn't serve ads.  Which Google sells.  Duh.  Oh, by the way, yes, an Internet permission does exist.  You have to declare it in the manifest.  The user is simply forbidden the ability to change its setting.

Here is the manifest section for HomeDaemon-MCP's Android app:

<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<uses-permission android:name="android.permission.VIBRATE" />
<uses-permission android:name="android.permission.REQUEST_IGNORE_BATTERY_OPTIMIZATIONS" />
<uses-permission android:name="android.permission.FOREGROUND_SERVICE" />
<uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED" />
<uses-permission android:name="android.permission.WAKE_LOCK" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.NFC" />

Note that it indeed has an Internet permission declared.  Most of the others make sense as they stand, but with the exception of location, ignoring battery optimization and external storage there's no user prompt for any of them, nor any way for a user to shut them off.  Never mind that there are two forms of location and you as a user cannot choose between them; location is either on or off, so you can't allow network location but deny access to the GPS chip, for example.  RECEIVE_BOOT_COMPLETED is the one that allows an app to be started when the phone boots -- but there's no way for you, as a user, to shut that off.  For my app it's exposed in the settings and optional, but for most apps, if that's in there you can't disable it.  Ditto for NFC: what if you want to revoke an app's ability to access the NFC hardware?  You can't; it's not exposed.  Just as is true for Internet access.
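Those declarations are plain text in the manifest and trivially machine-readable.  As a quick sketch (assuming the manifest has already been decoded from the APK's binary XML into ordinary text), Python's standard library can list every permission an app asks for:

```python
import xml.etree.ElementTree as ET

# Namespace Android uses for the android:name attribute.
ANDROID_NS = "http://schemas.android.com/apk/res/android"

def declared_permissions(manifest_xml: str) -> list:
    """Return the android:name of every <uses-permission> element."""
    root = ET.fromstring(manifest_xml)
    return [elem.get("{%s}name" % ANDROID_NS)
            for elem in root.iter("uses-permission")]

# Abbreviated stand-in for a real decoded AndroidManifest.xml.
sample = """
<manifest xmlns:android="http://schemas.android.com/apk/res/android">
  <uses-permission android:name="android.permission.INTERNET" />
  <uses-permission android:name="android.permission.NFC" />
</manifest>
"""
perms = declared_permissions(sample)
```

Nothing in that list distinguishes the permissions a user can later revoke from the ones (Internet, NFC, boot receive) that are declared once and locked in.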

Four of those permissions, incidentally, are there specifically so the software can run in the background and notify you of things without Google's "optimization" interfering with it (or stopping it entirely.)  WiFi and location are there only so the software can know if you're at home (and can be denied in the permission screen.)  And write-external-storage is only necessary if you wish to save video clips; if you turn that off, everything else works normally, but attempting to "grab" an AV clip from a security camera will fail (you can still watch it in real time, just not save it to your device.)

A number of years ago coders found that there was in fact a hidden system screen to revoke app permissions in Android V4 -- at least the ones users "see" -- after installation.  Quickly a handful of apps showed up that could get to it easily, since it was just an intent call into the software already present on the phone.  Google banned those apps from the store, then removed the hidden page in the next Android update.

Marshmallow (Android V6) finally allowed you to choose which permissions an app had and change them after installation.  Google claimed the delay was because "some apps would crash if they couldn't get what they wanted" -- but so what?  Back in 4.x you could get to the hidden screen if you knew how, and if an app developer's software blew up when it asked for something and got an error back, that's on him!  After it was discovered that Google had left this exposed they intentionally removed it in the next release, so if you had or have an Android V5 phone (Lollipop) you got the lollipop all right -- up your chute.

Whose phone was (and is) it anyway -- yours or Google's?

Today, if you have a modern enough phone, we're on "Pie" (Android V9).  You still have to jump through hoops to tell an app that it cannot run when it's not in the foreground -- that is, when you're not looking at it.  Many apps severely misbehave if you don't do this; Twatter is one of them, as are a number of "news" apps.  Maybe they're just being aggressive about "downloading" data, even though they've been told not to on their settings pages.  Or maybe they're spying and sending data to "Momma".  Who knows -- but what I do know is that a lot of them chew power like crazy in the background, which serves exactly no VALID purpose if I've told them not to refresh automatically on a timed basis.

You still cannot tell an app, by any means, that it can't access the Internet when you're not looking at it.  And supposedly "Q" (the next release) will let you revoke location access when an app is not in the foreground while still allowing it when it is -- but it won't allow revoking Internet access in the background, or at all.  And yes, that is a declared permission.

To be fair, Google has tightened certain things up.  Among other things, you can't request access to the phone and SMS logs and data stream anymore unless your app is the primary handler.  For example, you can write an SMS application, but it can't ask for the SMS permission unless the user has set it as the handler.  In addition Google requires that you never transmit the contents of anything you gain from those permissions off-device for any purpose, including advertising.  Is that honored?  Who knows.  This has bitten some developers with legitimate needs, however -- for example, if you have an app that behaves differently for the user when he's on the phone, it may want to read the current phone state.  That, however, can expose the IMEI, which is forbidden, so..... unless you're a primary phone app, no dice.

The apps that are cheating today are being sneaky about it.  For example, one of them scrapes position data from photographs; it has to have storage access to run, so there's no way to stop it, because it must have permission to access the SD card in order to save and manipulate pictures.  Others grab WiFi data if allowed to "see" the WiFi state; if you have a WiFi network's BSSID (its station ID) then you can often figure out exactly where it is.  They're not only using this data for internal purposes in the app (possibly legitimate), they're sending it home to Momma too, who is doing "whatever they want" with it.
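The BSSID trick works because wardriving databases map millions of access-point IDs to coordinates.  A sketch with a hypothetical, hardcoded stand-in for such a database (the BSSID and coordinates below are made up):

```python
# Hypothetical hardcoded table standing in for the large commercial
# BSSID-geolocation databases an app would actually query over the network.
BSSID_DB = {
    "a4:2b:b0:12:34:56": (30.3960, -86.4958),  # made-up entry
}

def locate_by_bssid(bssid: str):
    """Return (lat, lon) for a known access point, else None."""
    return BSSID_DB.get(bssid.lower())

loc = locate_by_bssid("A4:2B:B0:12:34:56")
```

Note that nothing here touches the GPS chip or the location permission at all; "see WiFi state" is enough.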

One of the more-egregious examples of this was "remote control" apps; they have no reason to know where you are -- "in my living room" isn't necessary to talk to my TV.  But because they had access to the WiFi network in order to control the device, they were able to get the router's MAC address and send it home.  There is utterly no legitimate reason for them to do that.

Google was apparently told about this abuse in September, nearly a year ago.  Did they ban the companies and publicly announce who they were?  Nope.

Why not?

Does not the customer own the phone and the data on it?  Is not intentionally circumventing a permission a direct violation of the agreement that you enter into as an Android Developer when you publish apps on the Play Store?

You would think so.

But apparently Google does not.

It gets even worse.  You might expect some third party to do this through sneaky means in order to sell your data and make money off it.  How about a phone OEM?

Samsung was tagged by this in their Health and Browser apps, which is especially egregious given the size of their installed base (over 500 million devices) and the fact that they include these apps in their base software load.  In some cases you can disable base-loaded apps, but you can't remove them, since they're part of the ROM the phone is loaded with at the factory.

This all comes back to the reality of it -- who owns your device?

Did you buy it or did you lease it on undisclosed (and therefore illegal, under US law and FTC regulation) terms?

Were you fairly told that Samsung devices would scrape data from you without permission?  Did you buy that phone knowing this, and do you still know it today?  Was there a fair disclosure, negotiation and then a decision to buy -- and, had you known they were going to do that, would you have still given them your money?

All good questions..... but no answers, and, of course, no revocation of access via the Play Store, as an OEM, or for that matter, jail.

Still love that Samsung phone, do you?




2018-12-03 09:43 by Karl Denninger
in Technology , 232 references
[Comments enabled]  

Someone -- or more like a few someones -- have screwed the pooch.

IPv6, which is the "new" generation of Internet protocol, is an undeniable good thing.  Among other things it almost-certainly resolves any issues about address exhaustion, since it's a 128 bit space, with 64 bits being "local" and the other 64 bits (by convention, but not necessity) being "global."

This literally collapses the routing table for the Internet to "one entry per internet provider" in terms of address space, which is an undeniable good thing.
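The 64/64 split described above is easy to see with Python's standard ipaddress module (the address below is drawn from the reserved documentation prefix 2001:db8::/32):

```python
import ipaddress

# An example address from the IPv6 documentation range.
addr = ipaddress.IPv6Address("2001:db8:abcd:12:a1b2:c3d4:e5f6:1")

# By convention the top 64 bits identify the network ("global" half)
# and the bottom 64 bits identify the host on that link ("local" half).
routing_prefix = int(addr) >> 64
interface_id = int(addr) & ((1 << 64) - 1)

# The /64 an ISP's routing actually cares about:
network = ipaddress.IPv6Network((routing_prefix << 64, 64))
```

Everything below the /64 boundary is the customer's business; the provider's routing table only needs the prefix.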

However, this presumes it all works as designed.  And it doesn't.

About a month ago there began an intermittent issue where connections over IPv6, but not IPv4, to the same place would often wind up extremely slow or time out entirely.  My first-blush belief was that I had uncovered a bug somewhere in the routing stack of my gateway or local gear, and I spent quite a bit of time chasing that premise.  I got nowhere.

The issue was persistent with both Windows 10 and Unix clients -- and indeed, also with Android phones.  That's three operating systems of varying vintages and patch levels.  Hmmmm.....

Having more or less eliminated that I thought perhaps my ISP at home was responsible -- Cox.

But then, just today, I ran into the exact same connection lockup on ToS's "Trader TV" streaming video while on XFinity in Michigan.  Different provider, different brand cable modem, different brand and model of WiFi gateway.

Uhhhhhh.....

Now I'm starting to think there's something else afoot -- maybe some intentional pollution in the ICMP space, along with inadequate (or no!) filtering in the provider space and inter-provider space to control malicious nonsense.

See, IPv6 requires a whole host of ICMP messages that flow between points in the normal course of operation.  Filter them all out at your gateway and bad things happen -- like terrible performance or, worse, no addressing at all.  But one has to wonder whether the ISP folks have appropriately filtered their networks at the edges to prevent malicious injection of these frames by hackers.
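For reference, RFC 4890 spells out which ICMPv6 types a gateway must not drop.  A toy filter built on that list looks like this (the table is the essential subset, not RFC 4890's full recommendations):

```python
# ICMPv6 message types IPv6 cannot function without (per RFC 4890's
# "must not drop" guidance); filtering these breaks path-MTU discovery
# and neighbor/router discovery.
ESSENTIAL_ICMPV6 = {
    1:   "Destination Unreachable",
    2:   "Packet Too Big",        # required for path-MTU discovery
    3:   "Time Exceeded",
    4:   "Parameter Problem",
    133: "Router Solicitation",
    134: "Router Advertisement",
    135: "Neighbor Solicitation",
    136: "Neighbor Advertisement",
}

def filter_verdict(icmp_type: int) -> str:
    """Toy firewall rule: pass essential control traffic, drop the rest."""
    return "pass" if icmp_type in ESSENTIAL_ICMPV6 else "drop"

# Packet Too Big passes, Echo Request (128) is dropped, NDP passes.
verdicts = [filter_verdict(t) for t in (2, 128, 135)]
```

The bind is exactly the one described above: these types have to be allowed through for IPv6 to work, which also makes them an attractive injection vector if a provider's edge filtering is sloppy.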

If not you could quite-easily "target" exchange points and routers inside an ISP infrastructure and severely constrict the pipes on an intermittent and damn hard to isolate basis.  

Which, incidentally, matches exactly the behavior I've been seeing.

I can't prove this is what's going on because I have no means to see "inside" a provider's network and the frames in question don't appear to be getting all the way to my end on either end.  But the lockups that it produces, specifically on ToS' "Trader TV", are nasty -- you not only lose the video but if you try to close and re-open the stream you lose the entire application streaming data feed too and are forced to go to the OS, kill the process and restart it.

The latter behavior may be a Windows 10 thing, as when I run into this on my Unix machines it tends to produce an aborted connection eventually, and my software retries that and recovers.  Slowly.

In any event on IPv4 it never happens, but then again IPv4 doesn't use ICMP for the sort of control functionality that IPv6 does.  One therefore has to wonder..... is there a little global game going on here and there that amounts to moderately low-level harassment in the ISP infrastructure -- but which has as its root a lack of appropriate edge-level -- and interchange level -- filtering to prevent it?

Years ago ports 138 and 139 were abused mightily to hack into people's Windows machines, since SMB and NetBIOS run on them and the original protocol -- which, incidentally, even modern Windows machines will answer to unless it's turned off -- was notoriously insecure.  Microsoft, for its part, dumped a deuce in the upper tank on this, in that turning off V1 also turns off the "network browse" functionality, which they never reimplemented "cleanly" on V2 and V3 (both of which are more secure.)  Thus many home users, and more than a few business ones, leave it on because it's nice to be able to "see" resources like file storage in a "browser" format.

But in turn nearly all consumer ISPs block those ports for end users, because if they're open it can be trivially easy to break into users' computers.

One has to wonder -- is something similar in the IPv6 space going on now, but instead of stealing things the outcome is basically harassment and severe degradation of performance?

Hmmmm....


2018-06-06 16:23 by Karl Denninger
in Technology , 103 references
[Comments enabled]  

Nope, nope and nope.

Quick demo of the lock support in the HomeDaemon-MCP app including immediate notification of all changes (and why/how) along with a demonstration of the 100% effective prevention of the so-called Z-Shave hack from working.

Simply put, it is entirely the controller's choice whether to permit high-power keying for S0 nodes.  For controllers that have no batteries and no detachable RF stick -- a design choice -- there aren't a lot of options.

But for those who follow best practice that has been in place since the very first Z-Wave networks you're 100% immune to this attack unless you insist and intentionally shut off the protection -- even in a world where S2 adoption becomes commonplace (which certainly isn't today but will become more-so over time.)

HomeDaemon-MCP is available for the entity that wishes to make a huge dent in the market with a highly-secure, very fast and fully-capable automation, security and monitoring appliance, whether for embedded sale (e.g. in the homebuilding industry) or as a stand-alone offering.  Look to the right and email me for more information.


2018-05-31 13:27 by Karl Denninger
in Technology , 148 references
[Comments enabled]  

There's a story making the rounds that appears to have some corroboration at this point, but my sourcing is too thin (and specific to people) to document.

Apparently if you bought an "Alexa" and activated it, you can wind up with an unasked-for Prime subscription, and it can wind up linked to some other card you have out there that Amazon managed to get their claws on.

Of course some people won't care because their entire point of buying one of these "Smart speaker" things is to link it with Prime for their "shopping" purposes.  Well, ok, but whatever happened to informed consent?

There might well be, somewhere, one of those "buying this will subscribe you to X at price Y" deals somewhere in the fine print on the startup or registration page.  In fact I wouldn't doubt it if it's there somewhere, maybe in the "click-through" terms and conditions that nobody actually clicks through and reads the entirety of.

My question is why is this sort of thing happening at all?

Let's be real here: These so-called "smart speakers" are anything but.  They aren't "smart"; they're pattern-recognition devices, and you're the pattern.  They're linked to "the cloud" because the CPU, RAM and similar requirements to run voice recognition are quite high but extremely bursty, since you only give the unit a command once in a long while; the rest of the time it is either idle or (you hope it's not!) simply recording what it hears.  Putting the capability for fast, decently-accurate response in the unit itself, when it would be active 0.1% of the time at most, is why these devices are all "cloud-powered"; they would be stupid-expensive otherwise.

But these things don't exist for your benefit, they exist for someone else's benefit.  If you want to know what sort of imagery gets conjured in my mind when I hear of people installing and using them it's from the first part of WALL-E..... you know, this one.

Yeah.

That looks appealing.

Not.

Heh, I get it.  You like convenience.  So do I.  I like being able to see what's going on in my house, even if I'm not there, especially if I get alerted to something sketchy going on.  After all that video evidence is useful for the cops to prosecute someone with if they try stealing my stereo.  I like sitting in the bar, pushing a button, and having the hottub ready for me when I get home a half-hour later.  That's convenient.  And I like knowing with hard confirmation that I really did remember to close the damned garage door on the way out.  Peace of mind and all that.

But all of this nonsense in today's world seems to be centered around not your convenience and security, but rather someone else mining your data for profit, not telling you what they're doing with it, or even lying about when they collect it, for what purpose they use it, and who gets access to it.

In our world of today we don't jail executives for that sort of crap.  We should, but we don't.

I get the limitations as well. But what I don't get is the insane price ripoffs that come with it, never mind the privacy and data security implications, especially when you bring something like this into your house or, even worse, your bedroom.

For an example, price out a "NEST" thermostat.  You'll blanch.  For half the price I can buy a Z-Wave enabled thermostat from Trane.  You've probably heard of them -- they make air conditioners and heating systems and have a decades-long history of building high-quality, reliable gear.  It doesn't need "connectivity" to work; it's a thermostat.  Indeed, the one on the left at that link is the one I have in my house.  Oh, and it monitors service intervals too (e.g. for your filters), which is nice -- and you can set them to suit the level of general dust and such in your environment.  But you can talk to it over Z-Wave and both see what's going on and control it if you want to.

Like, for example, right here:

 

That's real-time, right now, and if I tap it I can change the temperature it's set for.  HomeDaemon-MCP has an outdoor temperature sensor and switches its mode automatically; there's no need to be in "auto" or "heat" mode around here for half the year or more; if it's 70+F outside you won't want heat!  But in the "middle seasons" it's nice to have it automatically switch between the two because there might actually be a reason for that, and in many other parts of the country (especially at higher elevations) where temperature swings of 30-40F are not uncommon during a single 24 hour period it's very useful.

Someone who buys HomeDaemon-MCP and stands up the business to retail it could easily sell the entire package including the controller, a software license and the thermostat for the same sort of money as one "Nest."  But what you'd get is not just a thermostat in that case -- it can run your entire house at the cost of simply adding more modules that are reasonably priced.

Want a camera too?  Nest wants $200 for them.  What?

Amcrest wants $81 for an indoor camera with double the resolution!  If you're happy with the same 1080p that Nest offers and shop around you can get 'em for about $60, or less than a third of the price.

Instead of demanding you use a "cloud" service -- which inherently means no security, since the data is not yours and is being stored and transmitted to a big company that might use it for "whatever" (good luck proving it if they do, and you'll need an act of God to hold them accountable if you catch them doing so, or if someone hacks it and uses it to target your house for a break-in) -- with HomeDaemon-MCP only you ever have the data.  Your cameras can be 100% firewalled from the outside so they cannot speak in or out beyond the perimeter of your network directly, and yet you can have access to both snapshots (which you can have it take when it sees movement, etc) and real-time streaming video any time you'd like, over a high-grade encrypted connection, from anywhere.

Oh, and the second camera isn't another $200+ either -- or $300 if you want one in an outside-rated enclosure!

With a couple of motion sensors and a garage door sensor (magnetic) you can set it up so that the camera automatically points at the wall when you're home (for the paranoid), when you leave it "arms" itself and points at the room, and if there's motion seen without the "authorized" path being taken (e.g. opening your legitimate garage door with the button in your car) you get alerted immediately so you can grab a video or screen shot for the police. 
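That kind of rule is just a small piece of state logic.  A toy sketch (hypothetical event and action names, not HomeDaemon-MCP's actual API):

```python
def camera_action(home: bool, motion: bool, authorized_entry: bool) -> str:
    """Toy version of the armed-camera rule described above."""
    if home:
        return "point-at-wall"       # privacy position while occupied
    if motion and not authorized_entry:
        return "alert-and-record"    # unexplained motion while away
    return "armed"                   # away, quietly watching the room

away_intruder = camera_action(home=False, motion=True, authorized_entry=False)
away_owner    = camera_action(home=False, motion=True, authorized_entry=True)
```

The point is that the decision runs locally, on your premises; nothing has to leave the house for the rule to work.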

What the hell is wrong with people?  Do you really want a copy of video of your house to be in someone's cloud machine ever?  Think about it folks -- we're talking about data that if some malefactor gets ahold of it and pattern-matches it they can figure out if you're home, when you're home, when you go to work and when you're on vacation!

Why the hell would you want that data anywhere except on your premise and on your personal device on demand only and delivered only over a secure connection if it ever leaves your home at all?

Never mind that it's better, faster and cheaper to do it that way.

So who wants to make a billion dollars?  The ask for the entire package will never be lower than it is now; there is exactly one thing needed to deploy it commercially, and that's a customer-facing web interface to automate the certificate keying the license system uses.  The code to actually use those certificates and enforce them is already in the package, as is the server side, which can hit a Postgres instance (in other words, nearly-infinitely scalable and easily extended as you may wish.)

Is there actually a desire to sell products and services to people any longer that are theirs, that deliver value to the customer, or has everything turned into a scheme to data-mine you, get you to pay two, three, five or ten times as much for less functionality and try to stick you with a recurring bill you can't opt out of without turning your investment into dust?  Adobe anyone?

If you want to be that guy or gal that disrupts this space, look to the right and email me.

The answer to the problem is ready to go -- right here, right now.


2018-05-29 21:20 by Karl Denninger
in Technology , 311 references
[Comments enabled]  

So this blew up on Twatter today, after the author of an article I went after on his blog figured out who I was.

Here's the article that I went ape-**** over.

TL;DR: Stronger S2 Z-Wave pairing security process can be downgraded to weak S0, exposing smart devices to compromise.

Z-Wave uses a shared network key to secure traffic. This key is exchanged between the controller and the client devices (‘nodes’) when the devices are paired. The keys are used to protect the communications and prevent attackers exploiting joined devices.

The earlier pairing process (‘S0’) had a vulnerability – the network key was transmitted between the nodes using a key of all zeroes, and could be sniffed by an attacker within RF range. This issue was documented by Sensepost in 2013. We have shown that the improved, more secure pairing process (‘S2’) can be downgraded back to S0, negating all improvements.

Once you’ve got the network key, you have access to control the Z-Wave devices on the network. 2,400 vendors and over 100 million Z-wave chips are out there in smart devices, from door locks to lighting to heating to home alarms. The range is usually better than Bluetooth too: over 100 metres.

Ok, so the claims are basically:

1. S2 is better than S0 (true; it's faster mostly.)  S2 also allows for user-initiated keying exchange with a shared secret of sorts (e.g. a pin code, etc.)

2. The latter is important because during the setup of an encrypted device you have to get the key into the device somehow.  Of course if that key is shared and not hashed with each use by something unique to each endpoint then if you get the key you have it for every secure thing that speaks with the other end!

Oh, and "100 million devices and 2,400 vendors!"  My God, it's full of stars!

Except..... 90+% of those devices do not support encryption at all.  Your common light controls, thermostats, PIRs (motion detectors), etc -- nearly all of them run without any encryption.  They don't get turned on by the neighbors only because their network ID is different, but that's not actual security.  Newer devices support decent (CRC16) integrity checking, but older ones don't.  Don't write that older stuff off though -- despite some misbehavior the old Intermatic CA9000 PIRs are arguably the most-rugged on offer and one of the best options if you don't need pet-proofing, the older Leviton Vizia series of switches and controllers are extraordinarily reliable, etc.

Encryption support is not "free"; it requires "nonces" to be sent around, which consume network traffic, and of course there's a CPU requirement to encrypt and decrypt.  All this means response time is impacted.  You choose the trade-off.  And be careful how you choose it -- for example, if you have a motion detector outside and it's running encrypted, the key is in the unit, and I could just steal the unit and extract the key from the NVRAM at my leisure!  Theoretically the SOCs in these units should prevent that.  Theoretically.

Typically you find encrypted mode support where it matters, which is in devices like locks.  For obvious reasons anything that operates as a lock (e.g. a garage door opening device) without encryption is no lock at all and the part containing the key is in the protected space (in other words to steal it you must first break in, at which point the discussion is academic.)

One of the reasons "universal" S0 is not supported is that it is fairly "heavy" in terms of network and processor (battery, etc) load.  S2 does address this to a material degree so when it is available on a "nearly-universal" basis for devices it'll be a "good thing."  But that day is not today, and probably won't be tomorrow either.  In fact I'll bet less than 1% of all Z-wave units in use today support any encryption whatsoever.

Now let's talk about how S0, which is the "default" secure implementation (the one that's actually in units today), works for a new device.

When you pair a new device the exchange goes like this:

The controller {C} is put into pairing mode (MANUALLY!)

The device {D} is poked (when "clean" pretty-much anything pokes it -- a button press, etc.)

{D} Hi, I am a device of type X and I like you!  --> {C}

{D} <<< Ok, tell me what you are, here's your node number and network id {C}

{D} Thanks, I'm a device type Z here's a few things you should know --> {C}

Now the node talks a bit, along with the controller, and figures out what's in range so it knows how to build its idea of the mesh of the nodes around it.  It can do this later too, but you REALLY want this to be right or the performance of the network goes to hell FAST.

Then the controller looks up that "few things you should know" set of data (is the unit always listening, what bitrate does it run at, etc) and looks for a flag that says "I know how to talk encrypted."  If it finds that specific flag set to "on", this happens during a very short window of time (100ms or so):

{D} -----> I want scheme X ----------> {C}

{D} <<< Ok, that's cool, give me a Nonce so I can send encrypted {C}

{D} ----> Here it is ------> {C}

{D} << Here's your network key {C}

Now, at this point, if we're doing "S0" the potential issue arises.  The node has provided a "nonce", which permutes the encryption so you can't repeat a packet and have it work twice.  ("Number once" is what "nonce" stands for.)  But the node doesn't have a key yet.  So there's a hard-coded "pairing" set of data which the folks say has a "zero key" -- accurate as far as it goes, but not really, because there are three components to the IV (what you initialize the encryption algorithm with) and only one of them is zeros.  Not that it matters in practice, because they all have to be hardcoded, so figuring out what they are is a matter of disassembling someone's controller code or any device's microcode.  But it is absolutely inaccurate to say that the encryption is initialized with all zeros -- it most-certainly is not!

(Remember, the device has to have the same hardcoded initialization value set in it... it does, so it can decrypt that packet and does so, then immediately replaces the working key with the network key it just received) 

{D}  Here's a reply confirming the key set operation --------> {C}

The problem is that during that little bit of time, specifically, that last bolded line, that specific packet can be picked off and since the keying is known you can theoretically steal the key.

If you do then you can proactively read (or send) traffic because the nonces are sent across and thus you have access to them.

In short, if you can steal the key, well, you stole the key!

The gist of the article from PenTest deals with the S2 scheme, which is more secure because the user can be prompted to seed the exchange from the console (e.g. "punch in a 5-digit code on the lock, and the same code on the controller"), and this makes it a lot harder to rip off the keying.  Further, S2 uses a formal key exchange mechanism, so stealing the key isn't a bit harder, it's a lot harder; provided there's a shared secret it's basically impossible.  This is great.
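As a sketch of why a shared secret changes the game, here's a toy authenticated key exchange in Python.  The group, the PIN handling and the confirmation step are all invented stand-ins; real S2 uses Curve25519 and its own derivation and authentication functions.

```python
import hashlib
import hmac
import secrets

# Toy finite-field Diffie-Hellman plus PIN-based confirmation.  Invented
# scheme for illustration; NOT the S2 protocol and NOT a safe DH group.
P = 2**127 - 1   # a Mersenne prime; fine for a demo, never for production
G = 5

def dh_keypair():
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)

pin = b"12345"   # entered by the user on both the lock and the controller

a_priv, a_pub = dh_keypair()   # controller side
b_priv, b_pub = dh_keypair()   # node side

shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
assert shared_a == shared_b    # both ends derive the same secret

# Each side proves knowledge of the PIN over the exchange transcript.  A
# man-in-the-middle who substituted his own public keys cannot forge this
# confirmation without knowing the PIN.
def confirm(shared, pub1, pub2):
    transcript = b"%d|%d|%d" % (shared, pub1, pub2)
    return hmac.new(pin, transcript, hashlib.sha256).digest()

assert hmac.compare_digest(confirm(shared_a, a_pub, b_pub),
                           confirm(shared_b, a_pub, b_pub))
```

An eavesdropper sees only the public values; without the PIN (or one of the private keys) the derived key is out of reach.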

So where's the problem?

Right now there are no commercial controllers that do S2, largely because there are damn near no devices that do S2!  I haven't implemented it yet in HomeDaemon simply because I don't have anything here that can run it, and while I could certainly implement against a single device I'd like to have a few of them to make sure the code is actually stable and that I'm not implementing and testing against a buggy implementation that some random manufacturer put out.

Which is quite possible, by the way -- don't get me started on that or I'll talk for hours..... 

The "attack" put forward by the original article is an intentional downgrade attack.  In other words, by jamming the device's communications or otherwise tampering with them (remember, this is RF, so you CAN tamper with the transmission by jamming or other means) you can corrupt the reply packet from the node that says it can do S2.

This will cause the controller to fall back and request S0, which the devices also support, since it thinks the node cannot do S2.

Now, during that immediately-forthcoming forced S0 exchange you steal the key.

Note that this is exactly the same risk that exists for any S0 device -- originally, now and forevermore.  It is not unique to the newly-minted S2-capable units.  In fact for an S0 unit there's no need to jam anything.
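The controller-side fallback can be sketched like this.  The code and bit assignments are invented (the real negotiation lives in the controller stack); it simply shows why smashing one bit in the node's scheme-report frame is enough to force the S0 path:

```python
# Invented placeholder bit assignments for the node's "schemes I support"
# reply; not the actual Z-Wave frame layout.
SCHEME_S0 = 0x01
SCHEME_S2 = 0x02

def pick_scheme(scheme_report):
    """scheme_report: first byte of the node's scheme reply, or None if the
    frame never arrived inside the ~100ms validity window."""
    if scheme_report is None:
        return "insecure"      # no agreement: node comes up unsecured
    if scheme_report & SCHEME_S2:
        return "S2"
    if scheme_report & SCHEME_S0:
        return "S0"            # key then goes out under the hardcoded S0 keying
    return "insecure"

healthy = SCHEME_S2 | SCHEME_S0
assert pick_scheme(healthy) == "S2"                   # normal include
assert pick_scheme(healthy & ~SCHEME_S2) == "S0"      # S2 bit jammed/tampered
assert pick_scheme(None) == "insecure"                # reply lost entirely
```

Note the asymmetry: losing the frame entirely yields an insecure include with no key sent, while clearing just the S2 bit yields the interesting case, an S0 key exchange the attacker can try to sniff.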

That sounds ugly, and it would be, except for some realities that get in the way in actual practice.

First, the window of exposure is very small and cannot be triggered from the outside.  The controller has to be told to pair, the node has to be told to pair, and then you have to be able to both jam and intercept during a very specific and small window of time.  The frame time for the scheme reply to be valid is about 100ms, and if you're off then the node comes up entirely unsecured.  And... you don't get the key, because the controller never sends it, as there's no agreement on the scheme to be used.  Oh, by the way, if the node is a lock (I have one sitting here on my desk, bolted to two small pieces of wood, that I use for testing) and it includes insecure, then the "lock" functions are missing.  Good, because that will force you to exclude and re-include it so you actually get secure mode (you do want a lock to be able to be locked, right?)

Second, and probably the best defense: best practice is to pair in low-power mode.  In other words, you remove the RF stick from the controller, physically take it to the new device and push a button on it to initiate the operation -- you do not do it from the command console.  In that case the range of the pairing transmission from the controller, which is the only one you care about (since it contains the key), is inches, not 100' or more.

Now, for convenience, newer nodes (and controllers) can initiate pairing from a distance.  In fact all the 500-series chipset stuff supports doing so.  However, there's a lot of older gear out there that works perfectly well but can't handle high-power include, including roughly half or more of the devices in my house.  In fact the standard for setting up Z-Wave devices, and best practice, has always been to do so in the final installed location and to pair with the controller at the device.  This is easy when the master is a handheld controller roughly the size of a small remote control, as was the case with the original Leviton master controllers years ago.  Of course this is sort of hard to do when your controller is this thing that has a wall-wart, doesn't have a separate RF interface and is running your user interface at the same time!  In other words convenience and poor design of some controllers (essentially all of the mass-market stuff, I might add) means you get to bring the device to the controller, pair it there, and then deal with an inevitable network reorganization to get good performance.

That, by the way, is another sore spot in that many controllers try to do it on their own which is flat-out stupid.  The scope of why is beyond this article (although it's covered in some depth in HomeDaemon's user manual as a caution to people who would try to use those commands without understanding them) but it has to do with the fact that most battery-powered devices are not listening all the time and in order to get a good network map every device has to be on and able to receive and transmit.  Good luck with that on an automated basis where you have anything that runs on batteries in the network.  And if you think this is a "theoretical" pain in the ass there are 53 active Z-wave units in my house right now.  There's nothing theoretical about running around removing the covers, sending configuration commands and similar on over a dozen battery-powered devices so they're all "awake" and can properly participate in a network rediscovery process!  "Best practice" exists for a reason and especially with complex installations it's important to follow it for reasons other than security -- that's a nice side effect.

So the long and short of it is that these guys consider this a protocol problem and severe vulnerability.

I called bull**** on that and they didn't like it.

Here are my reasons; you decide who's right (their full source article is linked up above):

1. You cannot initiate pairing from a remote, nor over RF.  You have to put the controller in that mode deliberately, and if it is not then it will ignore a unit that tries to perform pairing, never responding to the request at all.  Since it never responds it cannot divulge a key.  Therefore you need a deliberate act by the owner or system installer to first open the potential vulnerability in the first place.

2. You must then initiate pairing on the new unit.  Now this is where things could get sort of ugly; a malefactor could put a "bare" (uninitialized) unit outside your house but within range and pair that.  Then again, if they can manage that they can steal the key no matter what, because they can then retrieve the physical unit.  If you can do that you can build a confederate unit that is designed to capture keys and then display them for you.  Bingo -- Bob's Your Uncle.  Note that if you pair in high power the only defense against that is #1, because the unit cannot initiate pairing on its own.

3. Best practice is to perform pairing with the unit in the installed final location using a controller that is operating in low-power mode to pair.  This reduces the potential interception range to inches from ~100' or so whether the intruder is using custom-designed equipment or a simple sniffer.  If a confederate can get a listening device within a foot of you when you are doing this then he can also put a fake node in the same place, trigger it to include whenever it sees some other node trying to do so, and steal the key the hard way -- by retrieving the device later and extracting it from the device's NVRAM.  S2 mitigates this if there's a user-controlled PIN or similar used, obviously, since the confederate would not know what it is nor have a way to enter it and he needs that for the initial keying exchange to be decodable.  Note that it does not matter whether the node runs in low-power during inclusion or not, since the node doesn't send the key -- the controller does, and by the time the node sends an encrypted message it has the actual network key in it and the risk window is closed.  If you have a controller that is fixed-location and doesn't have a removable stick that's an implementation problem and stupid design of the controller, not the protocol, but even that can be overcome -- keep reading.

4. Once keyed it doesn't matter.  In other words the risk is only in forcing a fallback from S2 -> S0.  Further, the standards say that if you do that you're supposed to warn the user.  In fact the fallback chain is S2 -> S0 -> Insecure, and that happens sometimes when including S0 and you get to start over because RF noise or similar corrupts one of the packets; they pass checksum (1 byte and thus not much for integrity; a 1-in-256 chance the packet is smashed but the checksum is good) but fail MAC validation (VERY solid on integrity) and the other end cannot possibly discern what was being said, since the packet doesn't decrypt.  Indeed if the MAC fails you don't even know what the transmission was and the underlying protocol does not have a "repeat last message please" request either.  This happens fairly regularly by the way in ordinary operation; I get MAC/NONCE errors (one of them is bad and thus the decrypt fails, but which?  No way to know) quite regularly on one of my units that's installed in a metal box and thus the RF is sort of nasty-attenuated.  It still works fine and I leave it that way intentionally as a code-robustness test but a fairly decent number of decrypt failures get logged.  And yes, HomeDaemon-MCP does tell you explicitly whether a node is included secure or not, both initially and permanently on the main node display in that secure units have a "*" after their name.

Incidentally this is a severe weakness in the Z-Wave spec but it's not a security concern, it's an operational one.  If you send a packet and it passes checksum the underlying RF protocol considers it perfectly fine and the originating device (which you don't control if it's not your controller where you wrote the code!) can and will overwrite its buffer; that is, it considers the message "delivered" and all is good.  Well, it might not be.  If the CRC16 or MAC computation fails you know you have trash instead of a valid packet but no way to ask for a retransmit. There's a fair bit of code in HomeDaemon that does its level best to prevent that from being operationally significant but sadly if the original event was an asynchronous report (that is, you didn't solicit it so you don't know what it was supposed to be) there's really not much you can do other than log the fact that you got something you can't successfully (or safely) process.
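The arithmetic here is easy to demonstrate.  This toy Python sketch (invented framing, not actual Z-Wave packets) corrupts random frames and counts how many slip past a 1-byte XOR checksum versus a keyed MAC:

```python
import hashlib
import hmac
import random

def checksum(body):
    # 1-byte XOR checksum in the style the article describes (illustrative).
    c = 0xFF
    for b in body:
        c ^= b
    return c

key = b"\x10" * 16                  # placeholder network key
msg = bytes(range(20))
good_mac = hmac.new(key, msg, hashlib.sha256).digest()
frame = msg + bytes([checksum(msg)])

random.seed(1234)
TRIALS = 50_000
passed_checksum = 0
passed_mac = 0
for _ in range(TRIALS):
    corrupted = bytearray(frame)
    # smash two random bytes to random values
    for i in random.sample(range(len(corrupted)), 2):
        corrupted[i] = random.randrange(256)
    if bytes(corrupted) == frame:
        continue                    # not actually corrupted; skip
    body, ck = bytes(corrupted[:-1]), corrupted[-1]
    if checksum(body) == ck:
        passed_checksum += 1        # trash that looks "perfectly fine"
        if hmac.compare_digest(
                hmac.new(key, body, hashlib.sha256).digest(), good_mac):
            passed_mac += 1

print(passed_checksum, "of", TRIALS, "corrupted frames passed the checksum")
print(passed_mac, "passed the MAC")
```

Roughly 1 in 256 corrupted frames sails through the checksum; none get past the MAC.  Which is exactly the author's point: the MAC tells you the packet is trash, but with no retransmit request in the underlying protocol, that's all it tells you.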

So what would have been a more responsible way to look at this issue from a standpoint of what you could reasonably ask the Z-wave people to do as a means of immediately mitigating this (modest, but real) risk?

There's a very simple mitigation that could be made without breaking backward compatibility in any way: Change the specification for the controller code so that any transmission of Class 98 (Security), Subclass 06 (Keying) always goes out at low power.

That's the end of the problem in real terms; now the interception range of that frame, which is the specific one at risk, is measured in inches instead of tens of feet or even 100'.  And, pleasantly, that's a fix manufacturers can ship as a controller firmware update.
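As a sketch, the proposed rule is a one-line check in the controller's transmit path.  The function and flag names below are invented; only the class and command identifiers (0x98 / 0x06) come from the article:

```python
# Hypothetical controller TX-power selection implementing the proposed
# mitigation: the one frame that carries keying material always goes out
# at low power, everything else is unchanged.
SECURITY_CLASS = 0x98   # Class 98 (Security), per the article
KEY_SET = 0x06          # Subclass 06 (Keying): "here's your network key"

LOW_POWER = "low"
NORMAL_POWER = "normal"

def tx_power_for(frame):
    """frame: command-class byte, command byte, then payload."""
    if len(frame) >= 2 and frame[0] == SECURITY_CLASS and frame[1] == KEY_SET:
        return LOW_POWER
    return NORMAL_POWER

# The key-transport frame is range-limited to inches...
assert tx_power_for(bytes([0x98, 0x06]) + bytes(16)) == LOW_POWER
# ...while ordinary traffic (e.g. a switch command) is unaffected.
assert tx_power_for(bytes([0x25, 0x01])) == NORMAL_POWER
```

Because the rule keys off the frame contents rather than the pairing mode, it protects high-power and network-wide includes too, with no change to any device in the field.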

What did I do?

Well, I can't get into the firmware of the RF chip that is used in the Aeotec stick, so I can't fix it the way I'd like, although I can certainly recommend that Aeotec do so, and will.

But what I can do and did is set HomeDaemon's code to explicitly not ask for either high-power or network include unless you tell it to include non-secure only.

HomeDaemon has always had two commands to add nodes: "add-node" and "add-node-nosecure"; the latter intentionally ignores a security scheme request.  In both cases the "add node" command stanza to the controller includes options that control (1) the type of node it will accept and whether inclusion is NWI (network-wide) or direct only, and (2) whether high-power transmission is used.  So the simple delta to the code was to remove those two flags from the "add node" request unless you've blocked secure inclusion for that device.

In other words if you force insecure include mode, that is, you won't answer a request for keying even if the node sends one, then there's no harm in a high-power inclusion since there's no keying sent.  But if you do a "regular" include which allows for secure mode negotiation then high power mode is not requested, nor is network-include.
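That delta can be sketched like this.  The flag names are modeled on the Serial API style but should be treated as placeholders rather than HomeDaemon's actual code:

```python
# Hypothetical "add node" mode-byte builder.  Flag values are placeholders
# in the style of the Z-Wave Serial API, not verified constants.
ADD_NODE_ANY   = 0x01   # accept any node type
OPTION_NWI     = 0x40   # allow network-wide inclusion
OPTION_HIPOWER = 0x80   # transmit at high power

def add_node_mode(secure_allowed):
    """Only request NWI and high-power transmission when secure inclusion
    is blocked for this device: with no keying ever sent, there is nothing
    for an eavesdropper to steal."""
    mode = ADD_NODE_ANY
    if not secure_allowed:
        mode |= OPTION_NWI | OPTION_HIPOWER
    return mode

# Regular (possibly-secure) include: low power, direct only.
assert add_node_mode(secure_allowed=True) == ADD_NODE_ANY
# Forced-insecure include: convenience flags are safe to set.
assert add_node_mode(secure_allowed=False) & OPTION_HIPOWER
assert add_node_mode(secure_allowed=False) & OPTION_NWI
```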

Never mind that you shouldn't do the potentially-risky thing in the first place; the best and proper way to include a node is to pull the stick out, take it to the node and use the button on the stick to include the new device.  That's how I do all my node includes in testing and recommend it generally.  The range of that operation is inches by my direct experience just like it was with my old Leviton handheld master (which I still have, by the way, although I don't use it anymore) and again, if someone can get a device that can pick off the signal from that close when you're doing it they can just put a fake node out there, pair that, and then retrieve it and steal the key directly.

Incidentally, S2 is not a panacea.  In order for it to work, the seed for the ephemeral key has to be known somehow.  Ephemeral elliptic-curve Diffie-Hellman is fabulous, and the curve they chose (Curve25519) is the same one used by modern ssh clients and servers; it has the attractive property of reasonably short keying, so you can rationally print or barcode it on a unit somewhere.  Since a unit (wall switch, etc.) has a very limited amount of storage and CPU power, that choice was also driven in part, I'm sure, by those constraints.  Of course if the pairing code is printed on a wall switch and you need to re-pair it you might find it at least moderately annoying to have to remove the switch from the wall to get the pairing code again, and God Help You if the barcode or printing is damaged, since there's absolutely no way to guess it.  S2 has a second attractive property in that there are multiple "levels" of trust (three), and it does forward nonce computation (assuming there aren't RF errors, which can happen and force a nonce resync), which reduces both traffic and power consumption.  Those are both good, and that's enough reason to support it standing alone once there is a decent selection of S2-capable units available for purchase.
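For the curious, the ephemeral exchange S2 builds on fits in a few dozen lines.  This is a straight transcription of the RFC 7748 X25519 ladder into Python, for illustration only: it is not constant-time, and S2's key-derivation and authentication layers on top of it are omitted.

```python
# X25519 scalar multiplication per RFC 7748 (Montgomery ladder).
# Educational transcription; NOT hardened for production use.
import secrets

P = 2**255 - 19
A24 = 121665

def _decode_scalar(k):
    k = bytearray(k)
    k[0] &= 248; k[31] &= 127; k[31] |= 64   # RFC 7748 clamping
    return int.from_bytes(k, "little")

def x25519(k, u):
    x1 = int.from_bytes(u[:31] + bytes([u[31] & 127]), "little")
    n = _decode_scalar(k)
    x2, z2, x3, z3, swap = 1, 0, x1, 1, 0
    for t in reversed(range(255)):
        k_t = (n >> t) & 1
        swap ^= k_t
        if swap:
            x2, x3, z2, z3 = x3, x2, z3, z2
        swap = k_t
        A = (x2 + z2) % P; AA = A * A % P
        B = (x2 - z2) % P; BB = B * B % P
        E = (AA - BB) % P
        C = (x3 + z3) % P; D = (x3 - z3) % P
        DA = D * A % P; CB = C * B % P
        x3 = (DA + CB) % P; x3 = x3 * x3 % P
        z3 = (DA - CB) % P; z3 = z3 * z3 % P * x1 % P
        x2 = AA * BB % P
        z2 = E * (AA + A24 * E % P) % P
    if swap:
        x2, z2 = x3, z3
    return (x2 * pow(z2, P - 2, P) % P).to_bytes(32, "little")

BASE = bytes([9]) + bytes(31)   # the standard base point, u = 9

# Ephemeral Diffie-Hellman: both ends derive the same 32-byte secret, and
# the public keys are short enough to print or barcode on a device.
a = secrets.token_bytes(32)
b = secrets.token_bytes(32)
shared_1 = x25519(a, x25519(b, BASE))
shared_2 = x25519(b, x25519(a, BASE))
assert shared_1 == shared_2
```

The 32-byte key size is the point of the curve choice: small enough for a constrained wall-switch MCU to compute with, and for the out-of-band secret to fit on a printed label.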

So that's my read on this alleged "terrible" vulnerability.

I'll support S2 if and when I can get my hands on some devices to proof the code against.  It shouldn't be that hard to add; the methods are decently documented.

But until then, and especially if you have devices that use S0 (the "usual" Zwave security) now, take a chill pill instead of the clickbait.  Understand the threat and attack surface -- it's small but real, and you have to do dumb things in order for it to be a problem.  I don't consider this a protocol fault by any means; S2 is faster and, while the keying is better, you typically key a device exactly once in a given installation, so the actual attack surface is likewise very small.

In short, be aware of best practices, follow them, and if very odd, unexplainable things start happening, ask questions before you just start wildly doing something that some malefactor may want you to do.

You wouldn't answer the phone and give the caller your social security number or garage code; treat this the same way.
