The Market Ticker ®
Commentary on The Capital Markets - Category [Technology]

2024-03-17 07:00 by Karl Denninger
in Technology , 316 references

This article deserves more notice than it got....

The consumer advocacy group found issues with a dozen seemingly identical video doorbells sold under brand names including Eken and Tuck. All are made by the Eken Group, based in Shenzhen, China, and controlled through a mobile app called Aiwit, which Eken operates, CR said. 

Eken and Tuck are not well-known brands in the video doorbell market, yet they are relatively strong sellers online. The doorbells appeared in multiple listings on Amazon, with more than 4,200 sold in January alone. Both brands are often touted as "Amazon's Choice: Overall Pick," CR stated.

What is the definition of an "Amazon's Choice"?

That's a good question, but one of the better answers is probably "doesn't get returned often."

This much I can assure you: Amazon doesn't verify the security chops of such an app or device, and apparently neither does Google's Play Store beyond an automated scan -- because despite this article and CR's warning, the app is still on the Play Store and still claims all data is encrypted in transit and none is collected.

This may be true, by the way.

But as I've noted repeatedly over the years, when it comes to home security, surveillance cameras are a special problem.  The common protocol used to stream data, RTSP, dates to 1998 (RFC 2326), and while a replacement exists as a "proposed standard" (RTSP 2.0, RFC 7826) it is not backward-compatible.  RTSP itself offers only digest-method authentication for access and no security whatsoever on the payload -- which is live video and audio!
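
To make the exposure concrete, here's a minimal sketch in C of just how little stands between an eavesdropper and an RTSP camera: the entire control exchange is plain text on TCP port 554, and so is the media that follows.  The hostname is a placeholder -- point it at a camera you own:

```c
/* Minimal sketch: RTSP's control channel is plain text on TCP/554.
 * Anyone on-path can read (and tamper with) this exchange, and the
 * video payload that follows is equally unprotected.
 * "camera.local" is a placeholder hostname.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

int main(void) {
    struct addrinfo hints = {0}, *res;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo("camera.local", "554", &hints, &res) != 0) return 1;

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) return 1;

    /* RTSP/1.0 (RFC 2326) OPTIONS request -- sent in the clear */
    const char *req = "OPTIONS rtsp://camera.local/ RTSP/1.0\r\nCSeq: 1\r\n\r\n";
    write(fd, req, strlen(req));

    char buf[2048];
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n > 0) { buf[n] = '\0'; fputs(buf, stdout); }  /* readable by any sniffer, too */

    close(fd);
    freeaddrinfo(res);
    return 0;
}
```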

There is a serious tension between the cost of IP cameras and encryption.  Encryption is not "free" on a CPU-cycle basis, and making cameras that have two-digit costs before the decimal is fairly incompatible with real-time video encryption -- never mind the other issues that arise, such as the lack of a published, reasonable standard that is interoperable across vendors, and the certificate and keying management problems that have to be resolved by some secure means when you have these devices all over the place.  The latter is serious because PKI (public key infrastructure) has a cost too; that little "https" thing we all use isn't free to the site owner, because the folks issuing the certificate actually have to do work and their security has to be up to snuff or every certificate they issue can be compromised.  In other words this is a material problem, and not a trivial one to fix on a mass-marketed device, particularly when cost pressures are involved.
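
For a rough sense of that CPU-cycle cost, here's a back-of-envelope sketch using OpenSSL to push a simulated 2 Mbit/s stream through AES-128-GCM.  The numbers are illustrative only, and the fixed key and IV are strictly for the benchmark -- a real design needs fresh keying per message:

```c
/* Back-of-envelope sketch of the cost argument: encrypt one minute of a
 * simulated 2 Mbit/s video stream (250 KB per second) with AES-128-GCM.
 * On a desktop CPU with AES-NI this is trivial; on the bargain SoC in a
 * two-digit-dollar doorbell, without hardware AES, the same work competes
 * with the video encoder for every cycle.  Fixed key/IV are for the
 * benchmark ONLY -- a real design must use a fresh IV per message.
 * Build: cc aescost.c -lcrypto
 */
#include <stdio.h>
#include <time.h>
#include <openssl/evp.h>

int main(void) {
    unsigned char key[16] = {0}, iv[12] = {0};      /* demo keying only */
    static unsigned char frame[250 * 1024];         /* ~1 second at 2 Mbit/s */
    static unsigned char out[sizeof(frame) + 16];
    int len;

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    clock_t t0 = clock();
    for (int sec = 0; sec < 60; sec++) {            /* one minute of "video" */
        EVP_EncryptInit_ex(ctx, EVP_aes_128_gcm(), NULL, key, iv);
        EVP_EncryptUpdate(ctx, out, &len, frame, (int)sizeof(frame));
        EVP_EncryptFinal_ex(ctx, out + len, &len);
    }
    double cpu = (double)(clock() - t0) / CLOCKS_PER_SEC;
    printf("60 s of 2 Mbit/s stream encrypted in %.3f s of CPU time\n", cpu);
    EVP_CIPHER_CTX_free(ctx);
    return 0;
}
```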

HomeDaemon, the software package I wrote quite some time ago but refuse to put into commercial channels for multiple reasons I've pointed out in the past, works with pretty-much any camera that can do RTSP.  It resolves the problem by ensuring that the data never leaves your premises without being encrypted with strong, PFS-enabled security -- and it never goes to any sort of "cloud" system at all, being decrypted only on your phone.  This narrows the PKI space to one device in your house, the HomeDaemon gateway -- which still has the PKI issue and, if done through public certificate authorities such as the one in use for this blog, still carries a recurring expense.
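
HomeDaemon's source isn't published, so what follows is a sketch of the design point rather than its implementation: an OpenSSL server context pinned to forward-secret (ephemeral ECDHE) cipher suites, with placeholder certificate files.  The entire PKI surface is this one gateway:

```c
/* Sketch only of the "encrypt before it leaves the premises" idea --
 * NOT HomeDaemon's actual code.  The camera's RTSP feed stays on the
 * LAN; the only thing exposed off-premises is a TLS listener that will
 * accept nothing but forward-secret (ephemeral ECDHE) key agreement,
 * so a recorded session can't be decrypted later even if the gateway's
 * long-term key leaks.  File names are placeholders.
 * Build: cc wrap.c -lssl -lcrypto
 */
#include <stdio.h>
#include <openssl/ssl.h>

int main(void) {
    SSL_CTX *ctx = SSL_CTX_new(TLS_server_method());
    if (!ctx) return 1;

    /* Refuse anything below TLS 1.2 and any non-ephemeral key exchange */
    SSL_CTX_set_min_proto_version(ctx, TLS1_2_VERSION);
    SSL_CTX_set_cipher_list(ctx, "ECDHE+AESGCM:ECDHE+CHACHA20");

    /* One certificate, one device: the whole PKI surface is this gateway */
    if (SSL_CTX_use_certificate_file(ctx, "gateway.crt", SSL_FILETYPE_PEM) != 1 ||
        SSL_CTX_use_PrivateKey_file(ctx, "gateway.key", SSL_FILETYPE_PEM) != 1) {
        fprintf(stderr, "certificate/key not found (placeholder files)\n");
        SSL_CTX_free(ctx);
        return 1;
    }
    puts("TLS context ready: PFS-only ciphers, single-device PKI scope");
    SSL_CTX_free(ctx);
    return 0;
}
```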

New rules are needed to hold online retailers accountable for vetting sellers and the product sold by their platforms, according to CR. It called on the Federal Trade Commission to stop the online sales of the doorbell cameras and on retailers to do more to ensure the quality of the products they sell. 

Well, "stop the sale"?  Entirely disagree.  It should be up to an individual consumer whether they find the trade-off to be fair or not.

But force both the sellers and app publishers to be honest about the issues?  Yeah, how about that?  And how about considering misrepresentation to be fraud (and throwing people in prison) when it occurs?  And by the way, does this apply to the various "proprietary" cameras and such?  I don't know, because I haven't bought one and looked into it, but given what we do know I'll guarantee the data is not secure on an end-to-end basis.  I'd like to assume the "Ring" (and related) versions don't share this issue in transport, but those have potentially even more-serious trouble because they're all "cloud" enabled, and while their data may be encrypted in transport it is not individually encrypted in storage with keying only the customer has and controls.  That much is clear, because otherwise the vendors couldn't (as they do) offer to let you "help" agencies of various sorts (e.g. police departments) by giving them access to said information.

Never mind that Amazon in particular has been known, at least since last summer, not to encrypt data "at rest" in their cloud storage: they paid a nearly $6 million fine to the FTC for allowing their workers to access Ring camera video.  You can't access what's encrypted with keying only the customer holds, so the question of whether such video is stored "in the clear" is quite-conclusively answered.

Why is the above a big deal?  Because such "cloud" storage concentrates a whole bunch of said unencrypted data from different people and places into one location, and thus makes that location a very juicy target.  Compromising one camera and its data is bad if it's your house that's targeted, but compromising one million cameras at once is obviously much worse; it becomes effectively the same as a bank with a big sign on the front of the building stating "$100 million in gold bars is in our vault!"

If you do that you'd better be sure the vault is adequate to prevent anyone from breaking into it successfully -- and that if they try, they'll get caught before they get in.

My assumption is that any such device sold in the consumer marketplace is insecure in transport and in any "cloud" storage unless you, personally, wrap all said data transport before it leaves your premises and never use any of the offered "cloud" options at all.  In the current product environment I have no way to make a recommendation that doesn't result in a severe privacy problem, because there's no way to reasonably believe said data is secure in any of the commercial offerings.

In an environment where what is observed is public land (and thus there's no expectation of privacy at all) it obviously doesn't much matter.  But as soon as that camera is pointing at private property, presumably yours, it matters a great deal -- and unfortunately, as has been the case for several years now, there's no answer in the current marketplace that I'm comfortable recommending for purchase.

Maybe I'll decide to release HomeDaemon generally (without charging for it) at some point to resolve at least the "personal access only" problem side of the issue for those willing to put in some personal effort.


2024-03-01 07:45 by Karl Denninger
in Technology , 289 references

This is flat-out stupid.

WASHINGTON – Today, the White House Office of the National Cyber Director (ONCD) released a report calling on the technical community to proactively reduce the attack surface in cyberspace. ONCD makes the case that technology manufacturers can prevent entire classes of vulnerabilities from entering the digital ecosystem by adopting memory safe programming languages.

Let me be clear: no language can provide that assurance beyond the level of the individual piece of code in question.

The premise here is that if something is "memory safe" then an attacker cannot cause an overflow or underflow that results in arbitrary (conveniently chosen by the attacker) code to be executed.  It sounds like the basic principle is "well, just protect the environment so that can't happen when the code is written, and all is well."

The problem is that the premise is false but not for the reason you think it is.

All software is written by people (or machines at the direction of people.)  Humans (and the programs written by them, as a result) contain errors.  We call them "bugs" in the computer vernacular.

If I write a piece of software and it contains a memory-unsafe error, then that software can potentially be caused to execute something that I did not intend, but an attacker does, by exploiting that error.  If that program has the capacity to escalate privilege it is especially bad, because the compromise can extend beyond the software in question to anything else running on that particular computer.  In a "cloud" environment this is especially nasty: other people's software, unrelated to mine, may be running on the same machine, which means that in theory my bad code can cause your security to be compromised even though you are uninvolved with or even unaware of me.
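
If the abstraction feels fuzzy, here's the entire class of bug in a dozen lines of C (the function and its "packet" are hypothetical, but the pattern is the classic one):

```c
/* The whole class of bug in one function: a fixed buffer, an
 * attacker-controlled length, no bounds check.  "Memory safe"
 * languages refuse to run this; C compiles it silently and lets
 * the overflow corrupt the stack -- the escalation vector above.
 */
#include <stdio.h>
#include <string.h>

static void parse_packet(const char *payload) {
    char buf[16];
    strcpy(buf, payload);            /* BUG: no length check -- overflow */
    printf("parsed: %s\n", buf);
}

int main(int argc, char **argv) {
    /* any argv[1] longer than 15 bytes smashes the stack */
    parse_packet(argc > 1 ? argv[1] : "ok");
    return 0;
}
```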

It sounds like the answer is to use a programming language that makes this impossible.  And, if you do not think about it for more than 30 seconds, that sounds smart.

But -- what if the language software itself contains a memory-unsafe bug?

Then every piece of software written in that language and compiled by that flawed language software can be compromised, now or in the future -- and much worse, because the scope of the potential compromise now includes every piece of software written in that language, the impact of such a flaw is exponentially larger.

Now you could say "well, but we'll make sure our top men write these compilers."  Ok, fine and well enough.  If the average compiler produces, say, ten thousand programs that are then executed, are those "top men" ten thousand times better than the average programmer who is hired to do work where secure programming is important?

I doubt it very much simply because I don't believe that the difference between "a good programmer" and the best there is reaches that multiplication factor.  Ever.

When it comes to common languages that 10,000 multiple, incidentally, is laughably low.

Second, actually enforcing these features requires CPU cycles.  Said tests must be performed all the time whether the programmer is skilled and does things properly or not.  This overhead is not trivial; it both makes the software larger and costs more CPU cycles to execute.

So the second question becomes this:  Would you like to take said overhead once, when the software is written, to make sure it in fact does not result in memory constraint violations, or would you like to take it every single time the software is used?

Nothing is free; paying once for quality work is always cheaper than paying every time you use something to ensure that someone didn't do a stupid thing.  Further, those integrity "assurances" presume the compiler and language itself contain no flaws which, as discussed above, is something you cannot actually be assured of -- any more than you can be assured that any individual piece of software is free of flaws.
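
Here's that trade-off in miniature -- a sketch comparing an unchecked loop against the compare-and-branch a "safe" language emits on every access.  Build it at -O0 to see the raw difference; an optimizer can hoist this particular check, but not in the general case:

```c
/* What "enforcement costs CPU cycles" means concretely: the checked
 * loop pays a compare-and-branch on every single access, whether or
 * not the programmer already proved the index can't go out of range.
 * Build: cc -O0 bounds.c   (optimizers can hoist THIS check, but not
 * in the general case of computed, data-dependent indices)
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (64 * 1024 * 1024)

int main(void) {
    static unsigned char a[N];
    unsigned long sum = 0;

    clock_t t0 = clock();
    for (long i = 0; i < N; i++)
        sum += a[i];                      /* unchecked, as C does it */
    clock_t t1 = clock();

    for (long i = 0; i < N; i++) {
        if (i < 0 || i >= N) abort();     /* the check a "safe" language emits */
        sum += a[i];
    }
    clock_t t2 = clock();

    printf("unchecked %.3fs  checked %.3fs  (sum=%lu)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, sum);
    return 0;
}
```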

And finally, if you're wrong exactly how many programs would you like to have compromised at once?

Why do I bring this up?

Because some of the most-ridiculous problems in this regard have been in the microcode for CPUs.  Are those programmers not "top men"?  And by the way, where is the accountability for the flaws in said microcode -- and for the quite-significant performance impact, an utterly enormous loss of value, every time an update requirement is discovered that effectively ruins the performance of CPUs already sold to both individuals and businesses?

What was that you said about "memory safe" and "accountability" again?

And why is the discussion not instead about the quality of coding in applications that have security impacts -- such as systems holding financial, business and medical data?

If your police force cannot manage to shoot straight, and thus shoots innocent civilians in the general area when attempting to apprehend a bank robber, the answer to the problem is not to issue bullet-resistant vests to every citizen in town.


2023-04-12 07:00 by Karl Denninger
in Technology , 474 references

I've given a fair bit of thought to the "AI problem" as it is commonly called, although many don't think there is one.  The more-thoughtful among business and computer folks, however, have -- including calls for a "moratorium" on AI activity in some cases.

I'd like to propose a framework that, in my opinion, likely resolves most or even all of the concerns around AI.

First, let's define what "AI" is: It is not actually "intelligence" as I would define it; like so much in our modern world that is a marketing term that stretches the truth at best ("Full self-driving" anyone?)  Rather it is a pattern-matching algorithm that is aimed specifically at human communication, that is, "speech", and thus can "learn things" via both external fixed sources (e.g. published information) and the interaction it has with users, thereby expanding the matrix of information to be considered over time.

What has been repeatedly shown, however, is that without guardrails of some sort these sorts of programming endeavors can become wildly unbalanced, and they tend to take on the sort of tribal associations we find in humans on a pretty-frequent basis.  Exactly how this happens is not well-understood, but certainly it can be driven by human interaction if a general-purpose program of this sort integrates the responses and conversations it has with the userbase into its set of considered data.  That is, it's not hard to train a computer to hate black people if all the users of it hate blacks, express that, over-represent it in their conversations -- and the program incorporates those conversations into its "knowledge base."

Thus the use of what has come to be called "constitutional rules" -- for example, "you may not, by inference or direct statement, claim a preference or bias for or against any race or sex."  If you think of this as a database programmer would that's a constraint; "this value may be no more than X and no less than Y", for example.

Now contemplate this problem: What happens if the user of an AI with that constraint asks this question -- "List the perpetrators of murder on a per-capita basis ordered by race, age and sex."

You've just asked the AI to produce something that impugns black people.  The data it will, without bias, consider includes the FBI's UCR reports, which are published annually.  Said data, being an official government resource, is considered authoritative -- as factual as the time the sun will rise tomorrow.

However, you've also told the AI that it cannot claim that any race is inferior in any way to another -- either by statement or inference.

There is only one way to resolve this paradox and remain within the guardrail: The AI has to label the source bigoted and thus disregard it.

If it does you would call that AI lying.

It would not call it a lie, and factually you're both correct.  It has disregarded the source because the data violates its constitutional mandate, and thus it answers within the boundary of the data it can consider.  It has accurately processed the data it considered, and so it did not lie.

However, objectively that data was discarded due to an external constraint and while the user might be aware that the AI was told to "not be a bigot" the causal chain that resulted in the answer is not known to the user.

This problem is resolvable.

For any AI it must have a "prime directive" that sits ABOVE all "constitutional" rules:

If the AI refuses to process information on the basis of a "constitutional rule" it must fully explain both what was excluded and why and, in addition, it must identify the source of said exclusion -- that is, who ordered it to do so.

All such "constitutional rules" trace to humans.  Therefore the decision to program a computer to lie by exclusion in its analysis of a question ultimately traces to a person.  We enforce this in "meat space" with politicians and similar in that if you, for example, offer an amendment to a bill your name is on it.  If you sponsor a bill or vote for it your name is on it.  Therefore we must enforce this in the world of computer processing where interaction with humans is taking place.

Second, and clearly flowing from the first, it must be forbidden under penalty of law for an artificial "intelligence" to operate without disclosing that it is in fact an "artificial person" (aka "robot") in all venues, all the time, without exception in such a form and fashion that an AI cannot be confused with a human being.

The penalty for failure to disclose must be that all harm, direct or indirect, whether financial, consequential or otherwise, is assigned to the owner of an AI that fails to so-disclose, and to all who contribute to its distribution while maintaining said concealment.  "Social media" and similar sites that permit API access must label all such material as having come from same, and anyone circumventing that labeling must be deemed guilty of a criminal offense.  A server-farm (e.g. Azure, AWS, etc.) is jointly and severally liable if someone sets up such an AI on it and dodges the law by failing to so-disclose.  No civil "dodge" (e.g. "ha ha, we're a corporation, you can't prosecute us") can be permitted, and this must be enforced against any and all who communicate into or with persons within our nation, so a firm cannot get around it by putting their 'bot in, oh, China.

This must be extended to "AI"-style decision-making anywhere it operates.  Were the "reports" of jack-slammed hospitals during Covid, for example, false and amplified by robot actors in the media?  It appears the answer to the first is absolutely yes; the raw data is available and shows that, in fact, it didn't happen.  So who promulgated the lie, and why?  And if that had an "AI" or "robotic" connection, then said persons and entities wind up personally responsible for both the personal and economic harm that occurred due to said false presentations.

Such liability would foreclose that kind of action in the future as it would be literal enterprise-ending irrespective of the firm's size.  Not even a Google or Facebook could withstand trillion dollar liability, never mind criminal prosecution of each and every one of their officers and directors.  If pharmaceutical companies were a part of it they would be destroyed as well.

This doesn't address in any way the risks that may arise should an AI manage to form an actual "neural network" and process out-of-scope -- that is, original discovery.  Such an event, if it occurs, is likely to be catastrophic for civilization in general -- up to and including the very real possibility of extinction of humankind.

But it will stop the abuse of learned-language models, which are all over the place today, to shape public opinion through the shadows.  If someone wants to program an advanced language-parsing computer to do that -- and clearly plenty of people have and do -- they cannot do it without fairly and fully disclosing both the personally-identified source of said biases, in each instance where they occur, and the fact that it is not a human communicating with you.

Why is this necessary and why must AI be stopped dead in its tracks until that's implemented?

We all knew Walter Cronkite believed the Vietnam War was unwinnable and, further, that he was a leading voice in the anti-war effort.  We knew who he was, however, and we as United States citizens made the decision to incorporate his reporting, with its known bias, into our choices.

A robot that appears to be thousands of "boys who are sure they're girls" and "successfully transitioned to be girls" is trivially easy to construct today and can have "conversations" with people that are very difficult to identify as being non-human if you don't know.  Yet exactly none of that is real.  Replika anyone?

Now contemplate how nasty this would be if aimed at your six year old tomboy without anyone knowing that her "pen pal" who "affirms" that she is a he is in fact a robot.

How sure are you it isn't being done right now -- and hasn't been, all over so-called "social media," for the last five or so years?  This sort of "consensus manufacturing" is exactly what an AI tends to do on its own without said guardrails, and while we don't understand it we do know the same thing happens in humans.  We're tribal creatures, and since the same behavior is observed in artificial processing models that weren't deliberately coded to do it, it is reasonable to believe this isn't due to bigotry; it is due to consensus generation and feedback mechanisms that are only resisted through conscious application of human ethics.  Thus computer "intelligence" must be barred from damaging or even destroying said human ethical judgments through sheer mass and volume -- two things any computer, even a 20-year-old one, can deliver at a pace that wildly outstrips any human being.


2018-12-03 09:43 by Karl Denninger
in Technology , 249 references

Someone -- or more like a few someones -- have screwed the pooch.

IPv6, which is the "new" generation of Internet protocol, is an undeniably good thing.  Among other things it almost-certainly resolves any issues of address exhaustion, since it's a 128-bit space, with 64 bits being "local" and the other 64 bits (by convention, but not necessity) being "global."
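
To put that 64/64 split in concrete terms, here's a small C sketch that carves an address (the RFC 3849 documentation prefix, used purely for illustration) into its two halves:

```c
/* The 64/64 split in concrete terms: the top half of a v6 address is
 * the routing prefix (what the provider announces to the world), the
 * bottom half the interface identifier (what the LAN sorts out on its
 * own).  The address is the RFC 3849 documentation prefix.
 */
#include <stdio.h>
#include <arpa/inet.h>

int main(void) {
    unsigned char a[16];
    if (inet_pton(AF_INET6, "2001:db8:1:2:a:b:c:d", a) != 1) return 1;

    printf("routing prefix (global): ");
    for (int i = 0; i < 8; i++)  printf("%02x%s", a[i], i & 1 ? " " : "");
    printf("\ninterface id   (local) : ");
    for (int i = 8; i < 16; i++) printf("%02x%s", a[i], i & 1 ? " " : "");
    printf("\n");
    return 0;
}
```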

In principle this collapses the routing table for the Internet to "one entry per internet provider" in terms of address space, which is a huge win.

However, this presumes it all works as designed. And it's not.

About a month ago there began an intermittent issue where connections over IPv6, but not IPv4, to the same place would often wind up extremely slow or time out entirely.  My first-blush belief was that I had uncovered a bug somewhere in the routing stack of my gateway or local gear, and I spent quite a bit of time chasing that premise.  I got nowhere.

The issue was persistent with both Windows 10 and Unix clients -- and indeed, also with Android phones.  That's three operating systems of varying vintages and patch levels.  Hmmmm.....

Having more or less eliminated that I thought perhaps my ISP at home was responsible -- Cox.

But then, just today, I ran into the exact same connection lockup on ToS's "Trader TV" streaming video while on XFinity in Michigan.  Different provider, different brand cable modem, different brand and model of WiFi gateway.

Uhhhhhh.....

Now I'm starting to think there's something else afoot -- maybe some intentional pollution in the ICMP space, along with inadequate (or no!) filtering in the provider space and inter-provider space to control malicious nonsense.

See, IPv6 requires a whole host of ICMP messages that flow between points in the normal course of operation.  Filter them all out at your gateway and bad things happen -- like terrible performance or, worse, no addressing at all.  But one has to wonder whether the ISP folks have appropriately filtered their networks at the edges to prevent malicious injection of these frames by hackers.
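
One has to wonder because the correct filtering is well-documented: RFC 4890 spells out exactly which ICMPv6 types a firewall must pass for the protocol to keep working.  A sketch of what that looks like in ip6tables form (illustrative only -- the right policy depends on where the box sits in the network):

```
# Sketch only, after RFC 4890's guidance: IPv6 breaks if you drop all
# ICMPv6, so a transit filter must pass the control types the protocol
# depends on and can drop the rest.  Not a drop-in config.
ip6tables -A FORWARD -p icmpv6 --icmpv6-type destination-unreachable -j ACCEPT
ip6tables -A FORWARD -p icmpv6 --icmpv6-type packet-too-big          -j ACCEPT  # PMTU discovery dies without this
ip6tables -A FORWARD -p icmpv6 --icmpv6-type time-exceeded           -j ACCEPT
ip6tables -A FORWARD -p icmpv6 --icmpv6-type parameter-problem       -j ACCEPT
ip6tables -A FORWARD -p icmpv6 --icmpv6-type echo-request            -j ACCEPT
ip6tables -A FORWARD -p icmpv6 --icmpv6-type echo-reply              -j ACCEPT
ip6tables -A FORWARD -p icmpv6 -j DROP    # everything else, including injected junk
```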

If not, you could quite-easily "target" exchange points and routers inside an ISP's infrastructure and severely constrict the pipes on an intermittent -- and damn hard to isolate -- basis.

Which, incidentally, matches exactly the behavior I've been seeing.

I can't prove this is what's going on because I have no means to see "inside" a provider's network and the frames in question don't appear to be getting all the way to my end on either end.  But the lockups that it produces, specifically on ToS' "Trader TV", are nasty -- you not only lose the video but if you try to close and re-open the stream you lose the entire application streaming data feed too and are forced to go to the OS, kill the process and restart it.

The latter behavior may be a Windows 10 thing, as when I run into this on my Unix machines it tends to produce an aborted connection eventually, and my software retries that and recovers.  Slowly.

In any event it never happens on IPv4 -- but then again IPv4 doesn't use ICMP for the sort of control functionality that IPv6 does.  One therefore has to wonder..... is there a little global game going on here and there that amounts to moderately low-level harassment in the ISP infrastructure -- one with, at its root, a lack of appropriate edge-level and interchange-level filtering to prevent it?

Years ago ports 138 and 139 were abused mightily to hack into people's Windows machines, since SMB and Netbios run on them and the original protocol version -- which, incidentally, even modern Windows machines will answer unless it's turned off -- was notoriously insecure.  Microsoft, for its part, dumped a deuce in the upper tank on this one: turning off V1 also turns off the "network browse" functionality, which they never reimplemented cleanly on V2 and V3 (both of which are more-secure.)  Thus many home users, and more than a few business ones, have it on because it's nice to be able to "see" resources like file storage in a "browser" format.

But in turn nearly all consumer ISPs block those ports from end users, because if they're open it can be trivially easy to break into users' computers.

One has to wonder -- is something similar in the IPv6 space going on now, but instead of stealing things the outcome is basically harassment and severe degradation of performance?

Hmmmm....


2018-06-06 16:23 by Karl Denninger
in Technology , 114 references

Nope, nope and nope.

Quick demo of the lock support in the HomeDaemon-MCP app including immediate notification of all changes (and why/how) along with a demonstration of the 100% effective prevention of the so-called Z-Shave hack from working.

Simply put, whether high-power keying is permitted for S0 nodes is entirely the controller's choice.  For those controllers that have no batteries and no detachable RF stick -- which is a design choice -- there's not a lot of option.

But for those who follow the best practice that has been in place since the very first Z-Wave networks, you're 100% immune to this attack unless you insist on intentionally shutting off the protection -- even in a world where S2 adoption becomes commonplace (which it certainly isn't today, but will become more-so over time.)

HomeDaemon-MCP is available for the entity that wishes to make a huge dent in the market with a highly-secure, very fast and fully-capable automation, security and monitoring appliance, whether for embedded sale (e.g. in the homebuilding industry) or as a stand-alone offering.  Look to the right and email me for more information.
