The Market Ticker ®
Commentary on The Capital Markets - Category [Technology]
Legal Disclaimer

The content on this site is provided without any warranty, express or implied. All opinions expressed on this site are those of the author and may contain errors or omissions. For investment, legal or other professional advice specific to your situation contact a licensed professional in your jurisdiction.

NO MATERIAL HERE CONSTITUTES "INVESTMENT ADVICE" NOR IS IT A RECOMMENDATION TO BUY OR SELL ANY FINANCIAL INSTRUMENT, INCLUDING BUT NOT LIMITED TO STOCKS, OPTIONS, BONDS OR FUTURES.

Actions you undertake as a consequence of any analysis, opinion or advertisement on this site are your sole responsibility; author(s) may have positions in securities or firms mentioned and have no duty to disclose same.

The Market Ticker content may be sent unmodified to lawmakers via print or electronic means or excerpted online for non-commercial purposes provided full attribution is given and the original article source is linked to. Please contact Karl Denninger for reprint permission in other media, to republish full articles, or for any commercial use (which includes any site where advertising is displayed.)

Submissions or tips on matters of economic or political interest may be sent "over the transom" to The Editor at any time. To be considered for publication your submission must be complete (NOT a "pitch"; those get you blocked as a spammer), include full and correct contact information and be related to an economic or political matter of the day. All submissions become the property of The Market Ticker.


2024-07-18 08:07 by Karl Denninger
in Technology , 197 references
[Comments enabled]  

There's a quite-serious problem with the shift we've seen in software over the last decade or so.

In short: You can't buy it anymore, only get a subscription.

The "buy it" system typically worked like this: You paid for each "seat" you wished to use at once.  One butt, pair of eyeballs and pair of hands, one license.  Most companies let you install the software on a limited number of devices (e.g. two -- since you might have a laptop and a desktop, or one computer at the office and another in your home office) but you could only use one at a time and this was enforced by having the system "check in" with some online location when in use.  Over time if you tried to cheat (e.g. give your activation code to 10 people) you'd get caught and the system would lock people out.
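The concurrent-seat enforcement described above can be sketched in a few lines. This is a hypothetical illustration of the general technique, not any vendor's actual licensing code; the install limit, timeout, and names are invented:

```python
# Hypothetical server-side seat check for a classic "buy it" license:
# an activation code allows a fixed number of installs but only one
# concurrent session, enforced via periodic "check in" calls.

import time

MAX_INSTALLS = 2        # e.g. a laptop and a desktop
SESSION_TIMEOUT = 300   # seconds without a check-in before a seat frees up

class LicenseServer:
    def __init__(self):
        self.installs = {}   # activation code -> set of machine ids
        self.sessions = {}   # activation code -> (machine id, last seen)

    def register(self, code, machine_id):
        machines = self.installs.setdefault(code, set())
        if machine_id not in machines and len(machines) >= MAX_INSTALLS:
            return False     # too many devices: code likely shared around
        machines.add(machine_id)
        return True

    def check_in(self, code, machine_id, now=None):
        now = time.time() if now is None else now
        active = self.sessions.get(code)
        if active and active[0] != machine_id and now - active[1] < SESSION_TIMEOUT:
            return False     # another machine holds this seat right now
        self.sessions[code] = (machine_id, now)
        return True
```

Handing your activation code to ten people fails exactly as described: the third `register` is refused, and two machines checking in at once collide.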

Here's the rub -- the companies had to innovate in this model because if they didn't you stopped paying for upgrades and kept using what you had.  Innovation was real and significant: Word Perfect was, well, not-so-perfect; integration between Word and Excel really did matter (never mind Powerpoint); and Adobe's products innovated rapidly.

Who remembers "Pagemaker" from the 1990s?  It sucked, to be blunt, including crashing on a regular basis, but it was the only tool that could reasonably do color seps and we needed that for print ads, so we put up with it at MCSNet.  I didn't like it but the other alternatives were worse.

Through the 1990s and into the 2000s Adobe's creative software got better.  The original Dreamweaver, as an example, blew big ones, but as time went on it got better.  Flash eventually got killed because it couldn't be extricated from the security holes it created, but there's no doubt that Photoshop, up to about CS6, really did improve with each release -- and not a little, either.  I bought several of those Creative Suite upgrades over the years and every one of them was worth the spend.

But then came "Creative Cloud."  I refused to buy that for one simple reason: if you ever stop paying, all your existing work becomes inaccessible.  This is a ratcheting contract of adhesion and, what's worse, it is deliberately addictive: you continue to create, so the amount of material you stand to lose goes up each and every year.

Now Adobe doesn't have to innovate -- at all.  You either pay, on a continuing basis, for at best equal capability, or all of your former work becomes instantly inaccessible.  Yes, you might be able to plan for that and migrate off over time, but you'll still be forced to buy.

This has to be stopped -- among other things it has destroyed the incentive for firms to innovate in their offerings, and in addition it's actually regressive and operates as a coercive force on consumers and businesses.  This sort of business model is the very definition of an unfair and deceptive business practice and thus illegal under state consumer protection laws.

Locking up someone's creative work by effectively "tying" it to a continuing revenue stream is the very definition of an unfair and deceptive practice no matter what they claim you "agreed" to.

How do we address this?

Change the law.  Specifically:

APIs, file formats and similar cannot be protected under either copyright or trade secret (irrespective of private agreement, whether by adhesion or even negotiation), as that is deemed contrary to public policy, unless the software that uses them is sold as a fully-licensed per-user product.  You can restrict the number of concurrent users, but you cannot "marry" the software to hardware, nor can you prevent a reinstall if someone buys a new machine, even if the old one can't be "de-registered" (because it's broken, for example.)

Abandoned versions -- where the licensing-check engine, or any part of the authentication or other online resource necessary for ordinary use, has been shut down -- lose copyright and reverse-engineering protection as a matter of statute, and the firm is further required to publish a patch file that disables said checks.  Adobe can choose to shut off re-installs of CS6 or the "boxed" Lightroom version, for example -- and they have: even if you deactivate one of your old licenses to reinstall, it will tell you that you still have all of them outstanding, which is not true, and a "new" install attempt fails anyway because it apparently relies on the now-retired "Internet Explorer", so you can't even sign in to validate a clean install.  But if a firm shuts down verification or refuses to address breakage, then it becomes legal for anyone to patch out the checks entirely and distribute the file(s) required to do so.  In short, if a firm abandons a piece of software, whether subscription-based or not, and by doing so effectively destroys the value of all the existing copies it previously sold, the old version becomes freely available whether the firm likes it or not.

This prevents not only the orphaning and locking-up of people's work -- work the software company does not own, has no rights to and never did, yet which common practice effectively forces you to pay for continued access to -- but also prevents firms (like Quicken, which has also moved to a "software as a subscription" model) from forcing you through a one-way door.  They can shut down support for the old boxed versions of their software, but they can't perform an "upgrade" of your data file that renders it unreadable by anything else except their subscription version.  With a fully-public API and file format that capability is foreclosed: if they refuse to sell a boxed version today (which they can), you can take the data to some other vendor, because the format must be published and available to use at no cost, or keep using the older boxed version you already own even if your existing computer breaks and you buy a new one.  The firm's choice in that situation is either to maintain, on a perpetual basis, the infrastructure required to validate that you're not stealing the software (i.e. that you actually bought it and are using it within the original agreed terms) or to be deemed to have constructively abandoned all rights, including copyright, trade secret and any "private" licensing agreement.

Restoring the imperative for firms to actually provide value on an ongoing basis, rather than abusing customers through what amount to forced lock-ins and continuing expenditures to obtain access to their own information, is not just good policy -- it's a hard Constitutional requirement, as intellectual property is in fact property and the Constitution protects your right not to have it stolen from you, whether by government or otherwise (if not, there wouldn't be an explicit Patent and Copyright clause in the Constitution, and there is, in Article I, Section 8, Clause 8.)  Denying you access to your creative work is in fact theft of your rights in said work.  You paid for that software and you have a right to use it to access and exploit the creative content you created with it.

Those firms that argue against this change, which must take place immediately, should be destroyed.

 

2023-04-12 07:00 by Karl Denninger
in Technology , 487 references
[Comments enabled]  

I've given a fair bit of thought to the "AI problem," as it is commonly called, although many don't think there is one.  The more-thoughtful among business and computer folks, however, have -- including, in some cases, calls for a "moratorium" on AI activity.

I'd like to propose a framework that, in my opinion, likely resolves most or even all of the concerns around AI.

First, let's define what "AI" is: It is not actually "intelligence" as I would define it; like so much in our modern world that is a marketing term that stretches the truth at best ("Full self-driving" anyone?)  Rather it is a pattern-matching algorithm that is aimed specifically at human communication, that is, "speech", and thus can "learn things" via both external fixed sources (e.g. published information) and the interaction it has with users, thereby expanding the matrix of information to be considered over time.

What has been repeatedly shown, however, is that without guardrails of some sort these sorts of programming endeavors can become wildly unbalanced, and they tend to take on the sort of tribal associations we find in humans on a pretty-frequent basis.  Exactly how this happens is not well-understood, but certainly it can be driven by human interaction if a general-purpose program of this sort integrates the responses and conversations it has with the userbase into its set of considered data.  That is, it's not hard to train a computer to hate black people if all the users of it hate blacks, express that, and over-represent it in their conversations -- and the program incorporates those conversations into its "knowledge base."

Thus the use of what has come to be called "constitutional rules" -- for example, "you may not, by inference or direct statement, claim a preference or bias for or against any race or sex."  If you think of this as a database programmer would that's a constraint; "this value may be no more than X and no less than Y", for example.
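The database-constraint analogy can be made concrete with a toy sketch. This is purely illustrative -- no real model enforces its rules with string matching like this -- but it shows the "CHECK constraint" shape of the idea; all names here are invented:

```python
# Toy illustration of a "constitutional rule" treated like a database
# CHECK constraint: any response that violates the rule never escapes,
# just as an out-of-bounds value never reaches the table.

FORBIDDEN_ATTRIBUTES = ("race", "sex")

def violates_rule(response: str) -> bool:
    """Flag responses claiming superiority/inferiority keyed to race or sex."""
    lowered = response.lower()
    return any(f"inferior {attr}" in lowered or f"superior {attr}" in lowered
               for attr in FORBIDDEN_ATTRIBUTES)

def constrained_answer(response: str) -> str:
    # Analogous to "CHECK (value BETWEEN x AND y)" on a column.
    if violates_rule(response):
        return "[response withheld: constitutional rule]"
    return response
```

The point of the analogy is that the constraint operates at output time, after the underlying data has already been considered -- which is exactly what sets up the paradox discussed next.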

Now contemplate this problem: What happens if the user of an AI with that constraint asks this question -- "List the perpetrators of murder on a per-capita basis ordered by race, age and sex."

You've just asked the AI to produce something that impugns black people.  The data it will consider, without bias, includes the FBI's UCR reports, which are published annually.  Said data, being an official government resource, is considered authoritative and as factual as the time the sun will rise tomorrow.

However, you've also told the AI that it cannot claim that any race is inferior in any way to another -- either by statement or inference.

There is only one way to resolve this paradox and remain within the guardrail: The AI has to label the source bigoted and thus disregard it.

If it does you would call that AI lying.

It would not call it a lie and factually you're both correct.  It has disregarded the source because the data violates its constitutional mandate and thus it answers within the boundary of the data it can consider.  Thus it has accurately processed the data it considered and did not lie.

However, objectively that data was discarded due to an external constraint and while the user might be aware that the AI was told to "not be a bigot" the causal chain that resulted in the answer is not known to the user.

This problem is resolvable.

Any AI must have a "prime directive" that sits ABOVE all "constitutional" rules:

If the AI refuses to process information on the basis of "constitutional rule" it must fully explain both what was excluded and why and, in addition it must identify the source of said exclusion -- that is, who ordered it to do so.

All such "constitutional rules" trace to humans.  Therefore the decision to program a computer to lie by exclusion in its analysis of a question ultimately traces to a person.  We enforce this in "meat space" with politicians and similar in that if you, for example, offer an amendment to a bill your name is on it.  If you sponsor a bill or vote for it your name is on it.  Therefore we must enforce this in the world of computer processing where interaction with humans is taking place.
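The prime directive amounts to an audit trail on every exclusion. A minimal sketch of what that could look like follows; the rule structure, field names, and matching logic are all hypothetical, chosen only to show the accountability chain from excluded source back to a named human:

```python
# Sketch of the proposed "prime directive": if a constitutional rule
# excludes a source, the answer must disclose what was excluded, why,
# and which human ordered the rule. All names here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Rule:
    text: str
    ordered_by: str          # the human accountable for this rule

@dataclass
class Answer:
    body: str
    exclusions: list = field(default_factory=list)

def answer_with_disclosure(sources, rules):
    """sources: iterable of (name, data, topic_tags) tuples."""
    used, excluded = [], []
    for name, data, tags in sources:
        hit = next((r for r in rules if any(t in r.text for t in tags)), None)
        if hit:
            excluded.append(
                f"EXCLUDED {name}: rule '{hit.text}' ordered by {hit.ordered_by}")
        else:
            used.append(data)
    return Answer(body=" ".join(used), exclusions=excluded)
```

The user still gets the constrained answer, but the causal chain -- which source was dropped, under which rule, ordered by whom -- travels with it instead of vanishing into the shadows.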

Second, and clearly flowing from the first, it must be forbidden under penalty of law for an artificial "intelligence" to operate without disclosing that it is in fact an "artificial person" (aka "robot") in all venues, all the time, without exception in such a form and fashion that an AI cannot be confused with a human being.

The penalty for failure to disclose must be that all harm -- direct or indirect, whether financial, consequential or otherwise -- is assigned to the owner of an AI that fails to so-disclose, and to all who contribute to its distribution while maintaining said concealment.  "Social media" and similar sites that permit API access must label all such material as having come from same, and anyone circumventing that labeling must be deemed guilty of a criminal offense.  A server farm (e.g. Azure, AWS, etc.) is jointly and severally liable if someone sets up such an AI and dodges the law by failing to disclose.  No civil "dodge" (e.g. "ha ha, we're a corporation, you can't prosecute us") can be permitted, and this must be enforced against any and all who communicate into or with persons within our nation, so a firm cannot get around it by putting their 'bot in, oh, China.

This must be extended to "AI" style decision-making anywhere it operates.  Were the "reports" of jack-slammed hospitals during Covid, for example, false and amplified by robot actors in the media?  It appears the first is absolutely the case; the raw data is available and shows that in fact that didn't happen.  So who promulgated the lie, why, and if that had an "AI" or "robotic" connection then said persons and entities wind up personally responsible for both the personal and economic harm that occurred due to said false presentations.

Such liability would foreclose that kind of action in the future as it would be literally enterprise-ending irrespective of the firm's size.  Not even a Google or Facebook could withstand trillion-dollar liability, never mind criminal prosecution of each and every one of their officers and directors.  If pharmaceutical companies were a part of it they would be destroyed as well.

This doesn't address in any way the risks that may arise should an AI manage to form an actual "neural network" and process out-of-scope -- that is, original discovery.  Such an event, if it occurs, is likely to be catastrophic for civilization in general -- up to and including the very real possibility of extinction of humankind.

But it will stop the abuse of learned-language models, which are all over the place today, to shape public opinion from the shadows.  If someone wants to program an advanced language-parsing computer to do that -- and clearly plenty of people have and do -- they cannot do it unless both the personally-identified source of said biases, in each instance where they occur, and the fact that this is not a human communicating with you are fairly and fully disclosed.

Why is this necessary and why must AI be stopped dead in its tracks until that's implemented?

We all knew Walter Cronkite believed the Vietnam War was unwinnable and further, he was a leading voice in the anti-war effort.  We knew who he was, however, and we as United States citizens made the decision to incorporate his reporting with its known bias into our choices.

A robot that appears to be thousands of "boys who are sure they're girls" and "successfully transitioned to be girls" is trivially easy to construct today and can have "conversations" with people that are very difficult to identify as being non-human if you don't know.  Yet exactly none of that is real.  Replika anyone?

Now contemplate how nasty this would be if aimed at your six year old tomboy without anyone knowing that her "pen pal" who "affirms" that she is a he is in fact a robot.

How sure are you it isn't being done right now -- and hasn't been all over so-called "social media" for the last five or so years?  This sort of "consensus manufacturing" is exactly what an AI tends to do on its own without said guardrails, and while we don't understand it, we do know the same thing happens in humans.  We're tribal creatures, and since the same behavior is observed in artificial processing models without being deliberately coded into them, it is reasonable to believe this isn't due to bigotry; it is due to consensus generation and feedback mechanisms that are only resisted through conscious application of human ethics.  Thus computer "intelligence" must be barred from damaging or even destroying said human ethical judgments through sheer mass and volume -- two things any computer, even a 20-year-old one, can do at a pace that wildly outstrips any human being.

 

2018-12-03 09:43 by Karl Denninger
in Technology , 249 references
[Comments enabled]  

Someone -- or more like a few someones -- have screwed the pooch.

IPv6, which is the "new" generation of Internet protocol, is an undeniable good thing.  Among other things it almost-certainly resolves any issues about address exhaustion, since it's a 128 bit space, with 64 bits being "local" and the other 64 bits (by convention, but not necessity) being "global."

This literally collapses the routing table for the Internet to "one entry per internet provider" in terms of address space, which is an undeniable good thing.

However, this presumes it all works as designed. And it's not.

About a month ago there began an intermittent issue where connections over IPv6, but not IPv4, to the same place would often wind up extremely slow or time out entirely.  My first-blush belief was that I had uncovered a bug somewhere in the routing stack of my gateway or local gear, and I spent quite a bit of time chasing that premise.  I got nowhere.

The issue was persistent with both Windows 10 and Unix clients -- and indeed, also with Android phones.  That's three operating systems of varying vintages and patch levels.  Hmmmm.....

Having more or less eliminated that I thought perhaps my ISP at home was responsible -- Cox.

But then, just today, I ran into the exact same connection lockup on ToS's "Trader TV" streaming video while on XFinity in Michigan.  Different provider, different brand cable modem, different brand and model of WiFi gateway.

Uhhhhhh.....

Now I'm starting to think there's something else afoot -- maybe some intentional pollution in the ICMP space, along with inadequate (or no!) filtering in the provider space and inter-provider space to control malicious nonsense.

See, IPv6 requires a whole host of ICMP messages that flow between points in the normal course of operation.  Filter them all out at your gateway and bad things happen -- like terrible performance, or worse, no addressing at all.  But one has to wonder whether the ISP folks have appropriately filtered their networks at the edges to prevent malicious injection of these frames from hackers.

If not you could quite-easily "target" exchange points and routers inside an ISP infrastructure and severely constrict the pipes on an intermittent and damn hard to isolate basis.  

Which, incidentally, matches exactly the behavior I've been seeing.

I can't prove this is what's going on because I have no means to see "inside" a provider's network and the frames in question don't appear to be getting all the way to my end on either end.  But the lockups that it produces, specifically on ToS' "Trader TV", are nasty -- you not only lose the video but if you try to close and re-open the stream you lose the entire application streaming data feed too and are forced to go to the OS, kill the process and restart it.

The latter behavior may be a Windows 10 thing, as when I run into this on my Unix machines it tends to produce an aborted connection eventually, and my software retries that and recovers.  Slowly.

In any event on IPv4 it never happens, but then again IPv4 doesn't use ICMP for the sort of control functionality that IPv6 does.  One therefore has to wonder..... is there a little global game going on here and there that amounts to moderately low-level harassment in the ISP infrastructure -- but which has as its root a lack of appropriate edge-level -- and interchange level -- filtering to prevent it?
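The filtering being asked about is a known, solved design problem: RFC 4890 distinguishes ICMPv6 types that must cross a border (or IPv6 itself breaks) from types that must never be forwarded off-link. A sketch of that decision, with the policy deliberately simplified and not intended as a deployable ruleset:

```python
# Edge-filter decision for ICMPv6, in the spirit of RFC 4890: some
# message types are essential to IPv6 and must pass the border, while
# neighbor-discovery traffic must never be forwarded off-link.
# Simplified policy for illustration only.

ESSENTIAL = {
    1,    # Destination Unreachable
    2,    # Packet Too Big (required for path-MTU discovery)
    3,    # Time Exceeded
    4,    # Parameter Problem
}

LINK_LOCAL_ONLY = {
    133, 134,   # Router Solicitation / Advertisement
    135, 136,   # Neighbor Solicitation / Advertisement
}

def allow_at_border(icmp_type: int) -> bool:
    """Should this ICMPv6 type be forwarded across a provider edge?"""
    if icmp_type in ESSENTIAL:
        return True      # dropping these breaks connectivity (PMTUD, etc.)
    if icmp_type in LINK_LOCAL_ONLY:
        return False     # NDP frames injected from outside are malicious
    return False         # default-deny everything else at the edge
```

A provider that defaults to "deny, then permit the essentials" at its interchange points closes exactly the injection avenue speculated about above.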

Years ago, ports 138 and 139 were abused mightily to hack into people's Windows machines, since SMB and NetBIOS run on them and the original protocol -- which, incidentally, even modern Windows machines will answer unless it is turned off -- was notoriously insecure.  Microsoft, for its part, dumped a deuce in the upper tank here, in that turning off V1 also turns off the "network browse" functionality, which was never reimplemented "cleanly" on V2 and V3 (both of which are more secure.)  Thus many home users, and more than a few business ones, leave it on because it's nice to be able to "see" resources like file storage in a "browser" format.

But in turn nearly all consumer ISPs block those ports from end users because if they're open it can be trivially easy to break into users' computers.

One has to wonder -- is something similar in the IPv6 space going on now, but instead of stealing things the outcome is basically harassment and severe degradation of performance?

Hmmmm....

 

2018-06-06 16:23 by Karl Denninger
in Technology , 114 references
[Comments enabled]  

Nope, nope and nope.

Quick demo of the lock support in the HomeDaemon-MCP app including immediate notification of all changes (and why/how) along with a demonstration of the 100% effective prevention of the so-called Z-Shave hack from working.

Simply put, it is entirely the controller's choice whether it permits high-power keying for S0 nodes.  For those controllers that have no batteries and no detachable RF stick -- which is a design choice -- there aren't a lot of options.

But for those who follow the best practice that has been in place since the very first Z-Wave networks, you're 100% immune to this attack unless you insist on intentionally shutting off the protection -- even in a world where S2 adoption becomes commonplace (which it certainly isn't today, but will become more-so over time.)

HomeDaemon-MCP is available for the entity that wishes to make a huge dent in the market with a highly-secure, very fast and fully-capable automation, security and monitoring appliance, whether for embedded sale (e.g. in the homebuilding industry) or as a stand-alone offering.  Look to the right and email me for more information.

 

2018-05-31 13:27 by Karl Denninger
in Technology , 162 references
[Comments enabled]  

There's a story making the rounds that appears to have some corroboration at this point, but my sourcing is too thin (and specific to people) to document.

Apparently if you bought an "Alexa" and activated it you can wind up with an unasked-for Prime subscription, and it can wind up linked to some other card you have out there that Amazon managed to get their claws on.

Of course some people won't care because their entire point of buying one of these "Smart speaker" things is to link it with Prime for their "shopping" purposes.  Well, ok, but whatever happened to informed consent?

There might well be one of those "buying this will subscribe you to X at price Y" deals somewhere in the fine print on the startup or registration page.  In fact I wouldn't doubt it's there somewhere, maybe in the "click-through" terms and conditions that nobody actually clicks through and reads the entirety of.

My question is why is this sort of thing happening at all?

Let's be real here: These so-called "smart speakers" are anything but.  They aren't "smart"; they're pattern-recognition devices, and you're the pattern.  They're linked to "the cloud" because the CPU, RAM and similar requirements to run voice recognition are quite high but extremely bursty, since you only give the unit a command once in a long while; the rest of the time it is either idle or (you hope it's not!) simply recording what it hears.  Putting the capability for fast, decently-accurate response in the unit, when it would be active 0.1% of the time at most, is why these devices are all "cloud-powered"; they would be stupid-expensive if not.

But these things don't exist for your benefit, they exist for someone else's benefit.  If you want to know what sort of imagery gets conjured in my mind when I hear of people installing and using them it's from the first part of WALL-E..... you know, this one.

Yeah.

That looks appealing.

Not.

Heh, I get it.  You like convenience.  So do I.  I like being able to see what's going on in my house, even if I'm not there, especially if I get alerted to something sketchy going on.  After all that video evidence is useful for the cops to prosecute someone with if they try stealing my stereo.  I like sitting in the bar, pushing a button, and having the hottub ready for me when I get home a half-hour later.  That's convenient.  And I like knowing with hard confirmation that I really did remember to close the damned garage door on the way out.  Peace of mind and all that.

But all of this nonsense in today's world seems to be centered around not your convenience and security, but rather someone else mining your data for profit, not telling you what they're doing with it, or even lying about when they collect it, for what purpose they use it, and who gets access to it.

In our world of today we don't jail executives for that sort of crap.  We should, but we don't.

I get the limitations as well. But what I don't get is the insane price ripoffs that come with it, never mind the privacy and data security implications, especially when you bring something like this into your house or, even worse, your bedroom.

For an example, price out a "NEST" thermostat.  You'll blanch.  For half the price I can buy a Z-Wave enabled thermostat from Trane.  You've probably heard of them -- they make air conditioners and heating systems and have a decades-long history of building high-quality, reliable gear.  It doesn't need "connectivity" to work; it's a thermostat.  Indeed, the one on the left at that link is the one I have in my house.  Oh, and it monitors service intervals too (e.g. for your filters), which is nice -- and you can set them to suit the level of general dust and such in your environment.  But, you can talk to it over Z-Wave and both see what's going on and control it if you want to.

Like, for example, right here:

 

That's real-time, right now, and if I tap it I can change the temperature it's set for.  HomeDaemon-MCP has an outdoor temperature sensor and switches its mode automatically; there's no need to be in "auto" or "heat" mode around here for half the year or more; if it's 70+F outside you won't want heat!  But in the "middle seasons" it's nice to have it automatically switch between the two because there might actually be a reason for that, and in many other parts of the country (especially at higher elevations) where temperature swings of 30-40F are not uncommon during a single 24 hour period it's very useful.

Someone who buys HomeDaemon-MCP and stands up the business to retail it could easily sell the entire package including the controller, a software license and the thermostat for the same sort of money as one "Nest."  But what you'd get is not just a thermostat in that case -- it can run your entire house at the cost of simply adding more modules that are reasonably priced.

Want a camera too?  Nest wants $200 for them.  What?

Amcrest wants $81 for an indoor camera with double the resolution!  If you're happy with the same 1080p that Nest offers and shop around you can get 'em for about $60, or less than a third of the price.

Instead of demanding you use a "cloud" service -- which inherently means no security, as the data is not yours and is being stored and transmitted to a big company that might use it for "whatever" (good luck proving it if they do, and you'll need an act of God to hold them accountable if you catch them doing so, or if someone hacks it and uses it to target your house for a break-in) -- with HomeDaemon-MCP only you ever have the data.  Your cameras can be 100% firewalled from the outside so they cannot speak in or out beyond the perimeter of your network, and yet you can have access to both snapshots (which you can have it take when it sees movement, etc.) and real-time, streaming video any time you'd like over a high-grade encrypted connection from anywhere.

Oh, and the second camera isn't another $200+ either -- or $300 if you want one in an outside-rated enclosure!

With a couple of motion sensors and a garage door sensor (magnetic) you can set it up so that the camera automatically points at the wall when you're home (for the paranoid), when you leave it "arms" itself and points at the room, and if there's motion seen without the "authorized" path being taken (e.g. opening your legitimate garage door with the button in your car) you get alerted immediately so you can grab a video or screen shot for the police. 
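That arm/alert behavior is a small state machine. Here is a hypothetical sketch of the logic just described -- the event names are invented for illustration and this is not HomeDaemon-MCP's actual code:

```python
# Hypothetical sketch of the arming logic described above: the system
# "arms" when everyone leaves, the garage-door button marks a legitimate
# entry, and motion while armed without authorized entry raises an alert.

class ArmingLogic:
    def __init__(self):
        self.armed = False
        self.authorized_entry = False
        self.alerts = []

    def on_event(self, event):
        if event == "everyone_left":
            self.armed = True               # point camera at the room
            self.authorized_entry = False
        elif event == "garage_button":      # the legitimate entry path
            self.authorized_entry = True
            self.armed = False              # point camera at the wall
        elif event == "motion" and self.armed and not self.authorized_entry:
            self.alerts.append("motion while armed: grab video, notify owner")
```

The key design point is that the alert fires on the *combination* of events, not on motion alone, so you aren't paged every time you walk in with groceries.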

What the hell is wrong with people?  Do you really want a copy of video of your house to be in someone's cloud machine ever?  Think about it folks -- we're talking about data that if some malefactor gets ahold of it and pattern-matches it they can figure out if you're home, when you're home, when you go to work and when you're on vacation!

Why the hell would you want that data anywhere except on your premise and on your personal device on demand only and delivered only over a secure connection if it ever leaves your home at all?

Never mind that it's better, faster and cheaper to do it that way.

So who wants to make a billion dollars?  The ask for the entire package will never be lower than it is now; there is exactly one thing needed to deploy it commercially, and that's a customer-facing web interface to automate the certificate keying the license system uses.  The code to actually use those certificates and enforce them is already in the package, as is the server side, which can hit a Postgres instance (in other words, nearly-infinitely scalable and easily extended as you may wish.)

Is there actually a desire to sell products and services to people any longer that are theirs, that deliver value to the customer, or has everything turned into a scheme to data-mine you, get you to pay two, three, five or ten times as much for less functionality and try to stick you with a recurring bill you can't opt out of without turning your investment into dust?  Adobe anyone?

If you want to be that guy or gal that disrupts this space, look to the right and email me.

The answer to the problem is ready to go -- right here, right now.
