The Market Ticker
Commentary on The Capital Markets - Category [Technology]
Legal Disclaimer

The content on this site is provided without any warranty, express or implied. All opinions expressed on this site are those of the author and may contain errors or omissions.

NO MATERIAL HERE CONSTITUTES "INVESTMENT ADVICE" NOR IS IT A RECOMMENDATION TO BUY OR SELL ANY FINANCIAL INSTRUMENT, INCLUDING BUT NOT LIMITED TO STOCKS, OPTIONS, BONDS OR FUTURES.

The author may have a position in any company or security mentioned herein. Actions you undertake as a consequence of any analysis, opinion or advertisement on this site are your sole responsibility.

Market charts, when present, used with permission of TD Ameritrade/ThinkOrSwim Inc. Neither TD Ameritrade nor ThinkOrSwim has reviewed, approved or disapproved any content herein.

The Market Ticker content may be sent unmodified to lawmakers via print or electronic means or excerpted online for non-commercial purposes provided full attribution is given and the original article source is linked to. Please contact Karl Denninger for reprint permission in other media, to republish full articles, or for any commercial use (which includes any site where advertising is displayed.)

Submissions or tips on matters of economic or political interest may be sent "over the transom" to The Editor at any time. To be considered for publication your submission must include full and correct contact information and be related to an economic or political matter of the day. All submissions become the property of The Market Ticker.

Considering sending spam? Read this first.

2018-04-17 11:43 by Karl Denninger
in Technology, 126 references
[Comments enabled]  

My HomeDaemon-MCP controller has always "attracted" a certain number of "probes", nearly all of which result in a log entry that looks like this:

[10:18] SSL ACCEPT Error [http request] on [::ffff:103.254.156.xxx]

These are connections made to the controller where the SSL negotiation fails because the other end doesn't respond to it at all.  That is, the controller doesn't get back a bad negotiation or an attempt to play games with the SSL protocol; it gets nothing, a bare HTTP request, or something similar (e.g. someone probing for a Telnet server.)

HomeDaemon-MCP contains its own web server; it does not rely on something like nginx or Apache.  The reason is that the required set of capabilities is well-defined, and it's much more secure to write code that does exactly what you need -- while interdicting attempts to be "bad" (and reporting them) -- than to rely on someone else's brain-fart which, due to the complexity of what it must handle, is inherently much larger and thus has a far larger attack surface.
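For the curious, the accept path that produces log lines like the one above can be sketched in a few lines of C against OpenSSL.  This is an illustrative sketch only -- not HomeDaemon-MCP's actual code -- and error handling is abbreviated:

#include <openssl/ssl.h>
#include <openssl/err.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <unistd.h>

/* Accept one TLS connection; if the peer never negotiates (bare HTTP,
 * a Telnet probe, or nothing at all), log it and drop the socket. */
static void accept_one(SSL_CTX *ctx, int fd, const struct sockaddr_in6 *peer)
{
    char addr[INET6_ADDRSTRLEN];
    inet_ntop(AF_INET6, &peer->sin6_addr, addr, sizeof addr);

    SSL *ssl = SSL_new(ctx);
    SSL_set_fd(ssl, fd);

    if (SSL_accept(ssl) <= 0) {
        /* OpenSSL's reason string is "http request", "wrong version
         * number", etc.; it may be NULL if the peer simply sent nothing. */
        const char *why = ERR_reason_error_string(ERR_get_error());
        fprintf(stderr, "SSL ACCEPT Error [%s] on [%s]\n",
                why ? why : "no data", addr);
    } else {
        /* ...hand the established session off to the request handler... */
        SSL_shutdown(ssl);
    }

    SSL_free(ssl);
    close(fd);
}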

In the last 48 hours or so the number of "probes" has exploded, as apparently Russia and a handful of other locations (the Czech Republic, for one) have decided to "ratchet up" attempted assaults on various "Internet of Things" devices.  I'm now seeing these reported not one or two at a time but by the hundreds, back-to-back.  There has also been a marked increase in the number of what appear to be "white hat" surveillance attempts on said devices, including mine, looking for potential vulnerabilities.  The fact that the "bad guys" know I'm here and how to find me isn't surprising in the main.  That the white hat guys are also hammering me is, because the presumption has to be that their primary means of finding me is an exhaustive port-scan of IP address ranges.

Why the "bad guys"?  Because I've been at this sort of stuff long enough, and am well-enough known from my days of running an ISP, that my presumption is that they know who I am and are at least passingly interested in trying to steal things -- like my software.  Maybe I'm wrong and they're just randomly looking too, but I wouldn't take that bet.

This appears to be related, according to the "white hat" folks, to this CERT alert -- and it's a NASTY one.

But HomeDaemon-MCP is laughing all of these attempted assaults off -- both the "white hat" probes and the far more-malicious "black hat" sort.

It's not at all impossible to write IOT code -- HomeDaemon-MCP is one such instance -- that is reasonably secure.

However, it's very hard to cheat -- using libraries to do huge amounts of the security-sensitive processing, including the web service part -- and actually maintain security.  The code isn't yours; even if you attempt to audit it, you didn't write it and thus don't fully understand it, and if a problem is found you're reliant on someone else to fix it.  It gets even worse if you're writing in something like PHP.

That's why HomeDaemon-MCP doesn't do any of that; I took the time and effort to write it on the metal in "C", with the only outside dependency being the OpenSSL libraries.

A reasonably-full description of HomeDaemon-MCP can be found here; it speaks not only Z-Wave via an inexpensive USB "stick" but can also manage independent, fully-internal analog input monitoring using an extremely inexpensive ADC "bolt-on", along with GPIO (digital) outputs.  If you have encryption-enabled Z-Wave devices (e.g. door locks) it will use AES encryption as well.  It's fast, secure, and runs on extremely inexpensive hardware (the Pi2 and Pi3 computers); the code itself and its entire working data set for a reasonably-large (~150 events) installation, plus a slave controller, require only 10MB of working RAM.  It consumes roughly 10-20% of a Pi2's CPU clocked at 600MHz -- roughly "half-speed" -- and even with the FreeBSD operating system nearly 3/4 of the unit's 1GB of RAM remains free.  In other words it's insanely economical in terms of resource consumption, entirely self-contained in terms of security, and extraordinarily fast.

I've recently implemented an "app interface" on top of the standard HTML5 browser port that will make streaming-update apps (e.g. for Android) a trivial undertaking, and am starting development of a sample Android app to speak to it (which ought to be fun, since I need to teach myself Android app development in the process!)

Oh, and the license verification code (also certificate-based, using PKI) is built in already -- it's literally ready to go, needing only the issuance of a certificate to each customer for however long their license term is.
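As a hedged sketch of what such a certificate-based license check amounts to, using OpenSSL's verification API (illustrative only -- the function and variable names here are mine, not HomeDaemon-MCP's):

#include <openssl/x509.h>
#include <openssl/x509_vfy.h>

/* Return 1 if the customer certificate was signed by our issuing CA and
 * has not passed its expiration (i.e. the license term), else 0. */
int license_ok(X509 *customer_cert, X509 *issuer_ca)
{
    int ok = 0;
    X509_STORE *store = X509_STORE_new();
    X509_STORE_CTX *ctx = X509_STORE_CTX_new();

    X509_STORE_add_cert(store, issuer_ca);           /* trust only our own CA */
    X509_STORE_CTX_init(ctx, store, customer_cert, NULL);

    /* Chain verification already checks the validity window; the explicit
     * notAfter check below just makes the "license term" point obvious. */
    if (X509_verify_cert(ctx) == 1 &&
        X509_cmp_current_time(X509_get0_notAfter(customer_cert)) > 0)
        ok = 1;

    X509_STORE_CTX_free(ctx);
    X509_STORE_free(store);
    return ok;
}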

So where is the firm or firms that want to offer a secure controller of this sort, whether as a packaged product or as an installed system complete with all the mark-up available to same?

If you're that firm email me at karl@denninger.net and let's talk.

Yes, it's for sale -- in source, all rights, and while it's not cheap the asking price is, for what it is, very reasonable.

View this entry with comments (opens new window)
 

Aha....

SAN FRANCISCO (Reuters) - Concern about Facebook Inc’s (FB.O) respect for data privacy is widening to include the information it collects about non-users, after Chief Executive Mark Zuckerberg said the world’s largest social network tracks people whether they have accounts or not.

I'm shocked, shocked I tell you, that Reuters actually printed this.

Of course the company tried to push back on it...

Facebook gets some data on non-users from people on its network, such as when a user uploads email addresses of friends. Other information comes from “cookies,” small files stored via a browser and used by Facebook and others to track people on the internet, sometimes to target them with ads.

...


“This kind of data collection is fundamental to how the internet works,” Facebook said in a statement to Reuters.

That's a lie.

Yes, it is fundamental to how the Internet works that when you send a request to a site, the site the request is directed at gets the "referring page" -- that is, the page from which the request came, if it's not the "root document" you're looking at.  So if a page on "market-ticker.org" has a button from "facebook.com" on it, Facebook will get the exact page on this site that requested the button.

But it is not fundamental for you to store and process that.  It is not fundamental for you to send back a cookie with a document so you have a persistent tracking device across other web sites and pages, other than as an authenticator (e.g. a login).  It is definitely not fundamental to use such cookies with things like static images (e.g. the "like" upturned finger), nor is it fundamental to use eTags and similar -- mechanisms intended to reduce traffic for things you've already seen -- as tracking devices.
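Concretely, the request your browser fires off for such a button looks something like this (the path and cookie name are illustrative, not Facebook's actual ones):

  GET /some/like-button.png HTTP/1.1
  Host: facebook.com
  Referer: https://market-ticker.org/the-page-you-are-reading
  Cookie: id=<unique-to-you>

The Referer line is the "fundamental" part.  Everything built on top of the cookie is not.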

In short it is not "fundamental" to pervert these mechanisms as a means of tracking people -- THAT IS RAW AND INTENTIONAL ABUSE.

Nor is it fundamental to place one-pixel transparent images all over the place for the specific purpose of tracking people from other pages you do not own, and to which you can also attach cookies and eTAGs to obtain both the referenced page and a unique identifier you can link to individual persons.

And finally, it is not fundamental to how the Internet works that you store, process, correlate and sell that data, whether directly or indirectly via "ad targeting."

At a minimum, “Facebook is going to have to think about ways to structure their technology to give that proper notice,” said Woodrow Hartzog, a Northeastern University professor of law and computer science.

There is no way to give "proper notice" when the tracking happens before you can possibly consent.

You can give consent on a site when you sign up for an account and are using that site, provided the consent is (1) reasonably understandable, (2) honestly outlines what is collected, when it's collected, how long it's retained and what it is used for, and that use extends to nothing else.

You cannot give consent to collection "off the site" because there is no possible way for you to know the links, buttons, one-pixel beacons and similar are there prior to viewing the page.  Further, there is no possible way for you to revoke such consent or refuse because the tracking happens before the page is displayed and thus before you could give consent.

This sort of "tracking" is similar to grabbing a woman in a bar, tearing off her clothes, having sex with her and then claiming that she must have consented after the fact because it is inherent in going out while nicely dressed and entering a place that serves adult beverages -- that is, doing so fundamentally means she wants to screw.  That, of course, would be a damned lie.

Such actions and tracking are inherently abusive for this reason.  They are inherently unfair, dishonest and ought to be felonious, just as tearing off said woman's clothes would be.  They are already illegal under the FTC's general rule against "unfair and deceptive trade practices", since you can't consent, you can't opt out and you have no way to know that it will or has happened until after the fact -- never mind that ****book and Zuckerpig have repeatedly lied by obfuscation, not only as to what they collect and why but how it is used.  Said abuses include "responding" to a government subpoena so ridiculously broad it may have included data on millions of Americans, which they have never explained nor been held to account for, and which was blatantly unconstitutional.  They also include the outrageously illegal (under federal election law) "assistance" given to the Obama re-election campaign without charge, which is not only a violation of your privacy rights but flat-out illegal, as corporations cannot contribute to federal campaigns at all.

This firm and its executives -- all of them -- must be completely destroyed along with any other firm doing the same thing and this practice must be not only stopped but those who engage in it must be imprisoned.

View this entry with comments (opens new window)
 

2018-03-28 07:00 by Karl Denninger
in Technology, 743 references
[Comments enabled]  

It's not that the car apparently didn't "see" a pedestrian walking a bicycle and hit her.  At least we think it didn't see her.  We actually don't know that yet, just that she was struck.

But you better start thinking about the real regulatory issue, and one that had better get center-stage right damn now and be part of the debate and requirements if these things are going to run around on our roads.

It's this:

The car is on a two-lane road with a hard and fixed barrier on the right side, and on-coming traffic on the other side.  A child runs out in front of the vehicle inside of the stopping distance.  The car detects the child instantly, but is unable to stop.

The vehicle is able to compute the probability of your death (and that of everyone else in your vehicle) with a fair degree of certainty if it intentionally crashes into the fixed object to avoid the child.  If the oncoming vehicle is also self-driving it may also be able to compute the risk of death or serious injury for its occupants if it hits that car intentionally, since the vehicles are probably communicating.  If it hits the child it can also probably compute the odds (very high, perhaps 100%) that the child will die.

The vehicle must strike something due to the physics of the situation.

What decision does the car make and who or what does it hit?

If you are in this situation as a human driver you cannot compute the risk of death for the various parties, since you don't know the mass of the vehicles, the energy each carries, whether seat belts are being worn, where the impact will be taken, etc.  The car can make that computation in the milliseconds available; you cannot.

However, you still can choose to intentionally decide to hit the solid abutment, or the oncoming car to avoid hitting the child.

You, as someone buying or riding in a self-driving vehicle, must be able to know the decision tree for this situation in advance, because it does happen.

Now let's take another example.  Uber is claimed to have something like 2 million road miles on its self-driving cars, but there have been ~60 accidents, most of them minor and nearly all the fault of the other driver.  This may sound to you like a good record.

It's not.  It in fact sucks big fat donkey balls; a human driver with that record would be considered a terrible risk (and pay an astronomical insurance premium) no matter who was technically "at fault."

I have well over 750,000 lifetime miles on the road by my best guess.  My current car has 130,000.  The truck in the driveway has about 60,000.  My Jetta, which my kid now has, was given to her just short of 200,000.  That's nearly 400,000 miles just between these last three vehicles, and there were years when I lived in Chicago where I put 50k on a car because I was doing contract work and in the damn thing all the time.

My lifetime accident record on the road?  Zero.

But I have, several times (including fairly recently), intentionally violated a traffic law to avoid an accident, the most common incident being intentionally running a light that has either just turned red or is about to, when I detect that the vehicle behind me is not going to stop before striking me, yet the crossing road is clear.  The law says I must stop if I can do so safely.  I will take and fight any ticket ever given to me for running such a light for that reason, because if I'm about to be hit then "stopping safely" isn't going to occur.  I've yet to be ticketed for this, but I've avoided several accidents this way -- none of which would have been "my fault", but all of which would have damaged my car and maybe injured myself or others in the car.

How are these self-driving cars programmed and why would anyone get into one without knowing that first -- or buy one, for that matter?

You have every right to know if the vehicle will obey traffic laws even if it means getting struck and you potentially being injured or worse as a result, and you also have the right to know if the vehicle will kill or injure you preferentially and intentionally to avoid killing someone else who is not in the car.  Since you are not driving you are never "at fault" for hitting someone else, which means you personally will never be tagged in such a lawsuit -- the company that made the car will be!

This provides a powerful incentive for the vehicle designer to avoid hitting the other party even if they have to injure or kill you in order to do so since you have some contractual relationship with said firm (in which they might, and probably will, try to limit their liability) but the other party does not and thus can't be bound by same.

In short you have every right to know what the vehicle's "prime directive" is but we're not even talking about this!

This is the issue with these self-driving cars.  The machine may have limitations on what it can see and sense, but it is always faster than you at making the decision and acting on it.  The first question here is whether the car saw the woman; if it did, the second question is "did it deliberately hit her because the other options were worse?"

If that's not the case here it will be in the future.  In fact it would appear to have already happened with less-dire consequences, if these vehicles have managed to rack up 60 accidents in 2 million miles while I, as a human, have yet to have one in more than 25% of that mileage -- but I have had to break traffic laws and take anticipatory action for that to be the case.

These self-driving cars clearly are doing neither, and you have every right to know what their decision matrix is and what preference it will take -- both in the event that a traffic law conflicts with an otherwise-avoidable impending accident and in a "no-win" scenario -- before you get into one.

Never mind the data security and spying that will go on, with every trip you take in one of these things being data that does not belong to you.

View this entry with comments (opens new window)
 

2018-03-23 07:00 by Karl Denninger
in Technology, 603 references
[Comments enabled]  

Folks, cut the crap ok?

I know what you're thinking -- I'll just turn off "third party cookies" and all will be ok (in relation to my previous article.)

Incidentally, that is not the default for Chrome and other browsers.  Gee, I wonder why?  Who runs all sorts of third-party ad networks again?

But that aside this doesn't work.

The reason is an HTTP header field called an "Etag."

Etags, along with expiration dates and "If-Modified-Since", allow a browser to quickly check with a host whether or not content has changed, without re-downloading it.  Let's say you get an image on the web.  Later, you go back to the same page and the same image is there, since it has not changed.  If the image is still in your cache it is very wasteful to send the whole thing again -- which could be several megabytes.  Instead, if it hasn't changed, you can just display what's in the cache.

Well, to know that, you need to know if the resource changed on the server end.  There are two ways to do this -- using a date stamp, and using what's called an "Etag."

The latter can be attached to any resource, although it's usually attached to images.  The server sends down an Etag: field with the image in the HTTP headers; it is an opaque identifier.  In other words, from the browser's point of view it does not care what the string is; it doesn't represent a time, date, or anything else -- it's just a promise from the server that it shall change if the content has changed and needs to be re-sent.
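In honest use the exchange looks like this (values made up).  First fetch of an image:

  GET /logo.png HTTP/1.1
  Host: example.com

  HTTP/1.1 200 OK
  ETag: "5f3a2b"
  Content-Type: image/png
  ...image bytes...

Later fetch of the same image:

  GET /logo.png HTTP/1.1
  Host: example.com
  If-None-Match: "5f3a2b"

  HTTP/1.1 304 Not Modified

The second time, no image bytes cross the wire at all.  That's the legitimate purpose.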

If this sounds like a cookie, that's because it can be abused to become one -- and unlike cookies, you cannot shut it off!

So let's say you disable third-party cookies.  Fine, you think.  Nope.

I have a "Like" button.  Said button has an image.  That image is the finger pointing up, of course, and you must transfer it at least once.  I send an Etag with it, but instead of being a change index it's unique to you!

Now, every single time you request the button you send the Etag for the image.  If it hasn't changed (and it basically never will, right -- it's an upturned finger!) I send back "Not Modified".  Except.... I just pinned that access to the page to you, personally -- and you have third-party cookies turned off!

So I send back "Not modified" but you just told me who you are, what web page you were viewing, and your browser ID and IP address.

I get all of this for every page you visit where such a button or function is present even if you never use it.

Surprise!

Oh by the way this works with beacons of course, since they're 1-pixel transparent images.  And no, I wasn't the first to figure this one out many years ago, and it's been known and in active use on the web for a long time.
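For what it's worth, the server side of this trick is almost embarrassingly small.  Here is a hedged sketch in C -- hypothetical and simplified, with printf standing in for the actual HTTP responses and database writes; it is not anyone's real tracking code:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Mint an identifier unique to this browser.  A real tracker would use a
 * cryptographic RNG; rand() is plenty for a sketch. */
static void new_visitor_tag(char *buf, size_t len)
{
    snprintf(buf, len, "\"%08lx%08x\"", (unsigned long)time(NULL), rand());
}

/* Called for every request for the button image.
 * if_none_match: the Etag the browser sent back, or NULL on first contact.
 * referer:       the page that embedded the button.
 * client_ip:     straight off the socket. */
void handle_button_request(const char *if_none_match,
                           const char *referer, const char *client_ip)
{
    if (if_none_match == NULL) {
        char tag[64];
        new_visitor_tag(tag, sizeof tag);
        /* First visit: hand back a unique ID disguised as a cache validator. */
        printf("new visitor %s first seen on %s from %s\n", tag, referer, client_ip);
        printf("-> 200 OK, ETag: %s, plus the image bytes\n", tag);
    } else {
        /* Every later visit: the "cache validator" is really a visitor ID. */
        printf("visitor %s is reading %s from %s\n", if_none_match, referer, client_ip);
        printf("-> 304 Not Modified\n");
    }
}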

The premise that blocking third-party cookies prevents these folks from being able to figure out who you are and what arbitrary web content you are viewing is false!  Nice switch Mr. Browser writer, too bad it doesn't solve the problem!

What this means is that you can be tracked specifically and individually -- as you personally, with knowledge of who you are, where you are, when you clicked and exactly what page you looked at -- whenever you visit a page that has any such resource on it, without your knowledge or consent.  It is inherently part of the web server's logs that the owner of the page you visit gets your browser ID, IP address and what you viewed.  But what you probably didn't know, and certainly did not consent to, is that through very trivial abuse any resource that comes from some other web property -- a like button, a sign-in option for other than a locally-stored account, even an ad -- can cause your system to obtain, store and regurgitate a unique identifier specific to you and your device whenever that resource is encountered, anywhere.  As soon as you do anything that links that identifier to you as a human, that relationship is known and never lost.  Indeed it can happen retroactively: the tag can be generated one day, and then days, weeks, months or even years later you might provide the missing component (your identity) on some other page that contains the same resource.

There is no way for you to consent because it happens before you can possibly know it will, and thus you can't give consent.  You also can't know in advance where else that "capturing" system for your presence might be operating.  It works exactly like a third-party cookie except that you cannot shut it off, other than by operating system (or firewall) blocking of the entire domain or IP address involved, or by clearing all cached data on every access, which is extraordinarily wasteful.  On an Android phone or an iPhone, since both prohibit editing the /etc/hosts file that would otherwise make such blocking possible without too much trouble (e.g. through "Adblock"), you cannot reasonably interdict this at all on the stock browsers.
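For reference, the hosts-file blocking I'm referring to is nothing more than black-holing the offending domains on a system where you can edit that file (these two entries are illustrative; pick your own list):

  0.0.0.0  www.facebook.com
  0.0.0.0  connect.facebook.net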

You also cannot block this on desktop or tablet browsers without severely damaging your browsing experience.  Specifically, while you could conceivably load an extension to block all Etag headers, doing so would probably get you blackballed on many sites (it sure would here, and probably automatically, as the system would consider it abuse!) because it would cause your data-transfer requirements from the site to skyrocket: every single image would have to be sent on every access, even if you already had an unaltered copy in your local system's cache in memory or on disk.

Facebook's entire business model relies on this.  That is why they "offer" their sign-on system to newspapers, blogs and other web sites all over the world.  It is also why they have their "like" buttons everywhere.  It is through those "features" that they track everything you do online, even if you don't have an account with them, and all of that tracking, processing and sale of whatever they learn of your personal life is done without any consent, because it is not possible to consent to what you're not aware of in advance.

This is why the only solution to Facebook's data mining -- and they're not alone in this (yes, it has to apply to all of these firms and those yet to come) -- is legislative.  This sort of activity -- collecting anything from those places where "like" buttons or any other third-party content is placed, or where sign-on credentials are used, where that data is either used to inform decisions (e.g. advertising) or sold -- must be made a felony criminal offense punished with the revocation of corporate charters and the indictment of every officer and director of the firm involved.

I could trivially commit this sort of abuse, by the way, on The Ticker.  It would require a hell of a lot of storage, but it would be easy to do.

I don't do it because it's wrong.

Others don't give a crap if it's wrong.

Zucker****er is one of the worst.  His latest missive is especially damning, in that it deliberately omits the fact that Obama's 2012 campaign used such data mining.  He didn't object then because they wanted the Democrats to win.  Note that he takes no credit for that, nor does he accept blame.  He simply lies by omission.

No, you can't fix this by not having a social media account personally, since you don't have to sign in to be tracked, and the tracking not only happens on the site in question but anywhere connections to that site are found -- images, buttons or other related functionality.

For this reason the problem can only be fixed legislatively or if all of said firms are driven out of business due to mass-revulsion by the people -- either way the only fix is if pulling this crap is an instant corporate death sentence right here, right now.

View this entry with comments (opens new window)
 

Folks, let's make this easy.

Everyone wants to talk about how Podesta's email was penetrated, or the rest of the DNC, or that the RNC, allegedly, was not.

All the screamers are (still) out about  "Russia" and similar.

Let me restate: while Podesta's email was apparently broken into via a "spear-phishing" email (one with a password-reset link embedded in it that didn't go to the real site, but rather to the person trying to steal it), which he was dumb enough to click, and then to provide his current password, the real issue here isn't about this sort of attack at all.

The real issue is about the idiocy of such "email" systems or the use of any other sort of cloud provider for anything secure in the first place.

Let me explain.

I run my own email here.  It would be trivial for me to lock it down so that even if you stole my password it would be worthless.

How?

Simple, really.  You see, on the same network I have a VPN gateway that does not accept passwords at all.  It only accepts a certificate.  Such an SSL certificate is (nominally) intended to sign and encrypt private emails, and can also be used as a secure identifier for a VPN.  It is, effectively, the same thing a server uses to secure web communications but with a different set of "intended use" flags set (client authentication and digital signature rather than SSL server authentication.)
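If you mint your own certificates with OpenSSL, that difference is just a couple of lines in the extension section used at signing time (a sketch; the section names are whatever your own config uses):

  # Client / VPN identity certificate
  keyUsage = digitalSignature
  extendedKeyUsage = clientAuth

  # ...versus a web server certificate
  keyUsage = digitalSignature, keyEncipherment
  extendedKeyUsage = serverAuth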

All I'd have to do is change the configuration on the email system slightly so that only accesses that came from connected VPN clients could connect at all.

Now you'd have to steal a device and if you did, it would only work until I knew it was stolen (and revoked the key.)  No other means of getting in would work even with the password.

It is literally a 15-second configuration change on my Dovecot and Exchange servers to do this, and it would not impact my ability to exchange email with others one bit.
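On the Dovecot side, for example, it can be as simple as telling the server to listen only on the VPN-facing address (the address here is made up; yours will differ):

  # dovecot.conf -- answer IMAP/POP only on the VPN side of the house,
  # so nothing is reachable from the public Internet at all.
  listen = 10.10.0.1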

Modern smartphones (including Android, iOS and BlackBerry 10 handsets) can all use these certificates for an IPSEC/IKEv2 connection.  Such a connection can be "nailed" open as well -- active even on cellular -- or activated "on demand" by the user.  Modern commercial and freely-available operating systems (Windows 7/8/10, MacOS, Linux and FreeBSD) can do the same.  Doing so positively encrypts all traffic coming into or leaving said device.

Such a system is extremely secure because only authorized devices, secured with a cryptographic key loaded on them, can see the service in question.  An unknown key is refused by the VPN gateway as is one that has been revoked. Only trusted certificates (which are loaded on the host in a certificate store) can connect.  I use this facility with other services here at Ticker Central so I can have my laptop with me and use it "as if I was at home" even from half the world away on an insecure, or even known to be monitored data link.

The only way to get packets onto the "private" network from the outside and thus be able to "see" the email store is to connect to the VPN and establish a tunnel and the only way to do that is to have a trusted certificate on the device in question.  No certificate, no connection, no access, password or no password -- period.

This sort of facility is essential if you intend to allow remote access to services that are themselves of questionable security (or worse) such as, for example, Windows file shares.

So why didn't the DNC do this?

Because it takes more than 30 seconds of thought to do it and in addition it means not using email providers like Google -- you have to do it yourself, in-house, or all these security steps are worthless since your certificates and such have to be where someone else, who is unvetted, can get at them.

In other words they were stupid, and so have been the others.  They chose the equivalent of an unlocked front door for their house, and then are surprised when someone walks in and takes all the beer out of the fridge.

Oh, and all the guns and money in the house too, along with the nice widescreen TV!

Just remember folks that these are the very same people who claim to be smart enough to run the country.

PS: All the cloud providers are unlocked houses.  Always.  They have to be in order for a cloud service to work; it's not a choice, it's an inherent part of any public "cloud" architecture.  Claims otherwise are like putting a 25-cent TSA lock on your suitcase and calling it "secure."  The reason you have not and will not see this discussed in the media, especially the "business media", is that the minute this fact reaches the level of general knowledge all of said "cloud providers" will see their stock prices collapse.

View this entry with comments (opens new window)