The Market Ticker
Commentary on The Capital Markets - Category [Technology]
Legal Disclaimer

The content on this site is provided without any warranty, express or implied. All opinions expressed on this site are those of the author and may contain errors or omissions. For investment, legal or other professional advice specific to your situation contact a licensed professional in your jurisdiction.


Actions you undertake as a consequence of any analysis, opinion or advertisement on this site are your sole responsibility; author(s) may have positions in securities or firms mentioned and have no duty to disclose same.

Market charts, when present, are used with permission of TD Ameritrade/ThinkOrSwim Inc. Neither TD Ameritrade nor ThinkOrSwim has reviewed, approved or disapproved any content herein.

The Market Ticker content may be sent unmodified to lawmakers via print or electronic means or excerpted online for non-commercial purposes provided full attribution is given and the original article source is linked to. Please contact Karl Denninger for reprint permission in other media, to republish full articles, or for any commercial use (which includes any site where advertising is displayed.)

Submissions or tips on matters of economic or political interest may be sent "over the transom" to The Editor at any time. To be considered for publication your submission must be complete (NOT a "pitch"), include full and correct contact information and be related to an economic or political matter of the day. Pitch emails missing the above will be silently deleted. All submissions become the property of The Market Ticker.


2024-02-16 07:00 by Karl Denninger
in Technology, 249 references

Oh, you have a "discrete" TPM in your machine and this means your disk encryption is "safe" if someone steals it, right?

Uh, no.

We're very familiar with the many projects in which Raspberry Pi hardware is used, from giving old computers a new lease of life through to running the animated displays so beloved by retailers. But cracking BitLocker? We doubt the company will be bragging too much about that particular application.

This isn't really "cracking"; the Pi is simply used as a snoop to capture the key after the TPM releases it.  Which it will do, if it's happy with the hardware configuration and such (e.g. same disk, nobody has tampered with the machine as far as it knows, etc.)

Calling this a "hack" is, well.... wrong.

An "encrypted" disk in a machine that has a TPM in it, and no password, simply means if you steal only the disk you can't decrypt it because you don't have the key which is in the TPM.  If you steal the entire machine including the TPM and disk and can convince the TPM it has not been compromised (which, if nothing has been removed or added, it hasn't) it will release the key and since they are two pieces of hardware separated by a wire you can pick it off quite-trivially.

The correct answer is "don't do that if this is your threat model"; use a password along with the "built-in" TPM.  Now the TPM only has part of the key and you have the rest, which can't be snooped off the hardware because it is in your head.
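The split-key arrangement can be sketched as follows.  This is a conceptual illustration only -- it is not BitLocker's actual key-derivation scheme, and every name in it is hypothetical -- but it shows why a bus sniffer that captures only the TPM's contribution gets nothing usable:

```python
# Conceptual sketch (NOT BitLocker's actual key-derivation scheme):
# the volume key is only recoverable by combining a secret sealed in
# the TPM with a secret the user supplies, so snooping the TPM's
# release off the wire is no longer enough by itself.
import hashlib
import os

def derive_volume_key(tpm_secret: bytes, user_pin: str) -> bytes:
    # A KDF over both inputs; with a PIN set, the wire between the
    # TPM and CPU never carries the complete key.
    return hashlib.pbkdf2_hmac("sha256", user_pin.encode(),
                               tpm_secret, 200_000)

tpm_secret = os.urandom(32)          # held in (and released by) the TPM
key_with_pin = derive_volume_key(tpm_secret, "correct-pin")
key_wrong    = derive_volume_key(tpm_secret, "wrong-pin")

# What a bus sniffer captures (the TPM's secret) is not the volume key,
# and a wrong PIN yields a different -- useless -- key.
assert key_with_pin != tpm_secret
assert key_with_pin != key_wrong
```

The design point is simply that neither half suffices alone: the TPM's half can be sniffed, but the PIN's half is in your head.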

(The same applies if you use a CAC-style card or similar authentication device; if you don't have it you have no way to know what that part of the key is.)

It is possible to design a device that has "tamper detection" hardware (e.g. a pin switch that opens if the case is opened to get to the drive) and which "trips" the TPM if it detects chassis intrusion, so that it refuses to release the keying (or erases itself.)  But as far as I know none of the laptops in common use have that addition.  Most server boards of reasonably recent vintage have a connector for it, but of course your case has to have the appropriate switch(es) in the correct places -- and this would still not help if, for example, someone knows it's there and cuts through the metal away from the switch.

A "hack" would involve, for example, finding the IV and keying on the disk somewhere you can read it.  Now you need nothing other than the drive itself because you can obtain the IV and key -- and with both you can decrypt the device's data.

This is not a "hack", it is merely clever interception of data that the system's security chip was willing to give up, and said hardware wasn't tricked into doing it either.

If you're not going to put a PIN/password on your laptop's BitLocker you might as well run with it turned off -- and get the performance improvement from not doing the encryption in the CPU at all, since in most instances Windows refuses to use the OPAL hardware encryption on the disk itself anyway, as Microsoft claims it is not confident that is actually secure.


2024-02-13 07:00 by Karl Denninger
in Technology, 274 references

I found this ad particularly offensive -- and troubling.

It was for Microsoft's vision of AI ("Copilot") and a bunch of different scenarios; all people struggling to do a given thing that has some element of original thought associated with it -- making a scifi movie, for example.

Here comes AI to the rescue which sounds great, right?

Uh, no, it's actually bad.

Why is it bad?

Because it puts forward the premise that the human in question can't do it on their own, and of course all these humans have different "diversity" elements to them.

That's basically a thinly-veiled claim that we're all "defective" in some way and that the human experience, that of finding a path and innovating, is dead, to be turned over to what amounts to a glorified robot.

Advertising is always and everywhere about trying to sell you something, of course.  If you buy this car you won't just get to work, you'll also get the girl.  If you back this company you'll get a cure for cancer (and, of course, the imagery used is of a young kid; while that's not inaccurate, as kids do get cancer, the disease is largely one of lifestyle and hits older people the vast majority of the time.)  If it's not an outright sales job then it's about feeling good about the company doing the selling (the beer commercial which featured draft horses because, of course, there was too much snow for a truck.)  And so on.

But I find it troubling when the thing being sold implies that you're just not good enough as a human being in a creative context -- but a machine is. 

That, to me at least, is not only over the line and offensive -- it's false, and has implications you will not like.

Specifically, under copyright law and the decisions handed up by the courts in such cases, a "production" by said machine, if you use it, obviously isn't yours (you didn't create it; the machine did) and, more to the point in the context of creative acts, isn't copyrightable at all, by anyone.

Thus it would not be "your movie" as just one example.


2024-02-12 07:00 by Karl Denninger
in Technology, 270 references

The Supremes have been asked to "peel back" content moderation by so-called "big tech":

Next month (ed: now "this month") the high court will hear a set of cases that question whether state laws that limit Big Tech companies’ ability to moderate content on their platforms curb the companies’ First Amendment liberties.

Missouri Attorney General Andrew Bailey – one of the Republican AGs leading the lawsuit against the Biden administration, alleging it engaged in a "vast censorship enterprise" – on Monday filed an amicus brief along with 19 of his colleagues in the cases, asking the Supreme Court to rule in favor of the laws meant to limit internet platform’s ability to moderate content. 

At the core of the argument appears to be a fairly-easily-resolved circumstance: The evidence is quite-clear that the Biden Administration colluded to censor political speech it disagreed with.

Since the First Amendment forbids the government from doing this directly, going "around the door" via such a mechanism appears to be at its core a mere pretext -- and thus should fall.  The suing parties note that telephone and telegraph companies in fact have a "must-carry" requirement, imposed on them by Congress in the late 1800s, that has never been successfully challenged as unconstitutional.

There are several important differences here, however.  Among them is that it is not disputed that the government in fact colluded in these efforts, particularly when it came to the pandemic.  But any attempt to claim that this was "justified" due to a public health emergency falls flat when one looks at the broader view: the government also did so in regard to the Hunter laptop data -- an event that had nothing to do with public health -- which agencies claimed was "Russian disinformation" as justification.  Yet at the time the claim was made there were agencies in the government that knew, factually, the data was authentic, because they were in possession of unaltered and authenticated copies of it.

The context of political speech is quite-different than many others; free speech rights are at their highest protective level in that context, even if the speech turns out to be false or intentionally misleading.  You can't sue a candidate over making a political promise in the form of such speech they never intended to carry out, and don't.  There are more examples than you can count in this regard, with one of the most-egregious of modern times being the three planks of Trump's platform to put a stop to medical monopolists that disappeared without a trace on election night in 2016.  You have no cause of action in such a circumstance, nor can you, even if you identify the falsity right up front, successfully demand that a media outlet or other conduit not run said speech.  You can speak in opposition and point out the lie, but you can't force said lie off the air -- even if you can objectively prove it is a lie.

On the other extreme, commercial speech that is intentionally misleading and results in harm to consumers can in fact be enjoined.  The FTC, for example, has regulations on the books in this regard and those have been upheld as Constitutional.

But in this case the bigger problem, and the one I will find quite interesting when the decision is finally handed up, is whether the various brief-filing parties bring up the fact that these platforms are all driven not just by ideology as "charged" but also by commercial interest twisted around same: they embed their fortunes into ad sales, which they claim gives them a capacity to remain out of whatever they view as "wrong-think" because their advertisers want it that way.

That position, however, is also a Gordian Knot, and you need look no further than Media Matters and "X" to see it.  Said pressure group, it is alleged, fabricated through deliberate manipulation a claim that X was advancing "abhorrent" viewpoints that advertisers would not want their ads displayed against, and then turned that into a "news story" which led advertisers to actually pull back from the platform.  It appears, upon investigation (and this is yet to be proved through due process, but you can bet it will be, as X has filed suit) that Media Matters deliberately manufactured the circumstance under which the claim was made; it would not otherwise have occurred, and in fact did not occur for any actual human person.  This is rather similar to a company that makes lawn mowers being sued because someone got their foot cut off -- except the entity bringing the suit first deliberately tampered with all the safety mechanisms on the mower so they would not work, then cut their own foot off on purpose, knowing they had defeated the safeties in advance and that an actual human mowing an actual lawn would not have it happen to them.  That has occurred before -- NBC infamously admitted it rigged a crash to cause a fire that would not otherwise have occurred, after it got caught, and settled the resulting defamation lawsuit GM brought.

This leads to the bigger question, which hopefully the Supremes will digest and decide if the argument is actually presented to them: the entire premise of "targeted advertising" is that the advertiser gets to pick and choose what content they wish to run ads against.  Does that in turn eviscerate the tech companies' argument of "reputational damage by association"?  To rule otherwise means a commercial entity, or group of entities, can effectively ban distribution of viewpoints they disagree with even though there is no actual forced association with said speech, since they can target around it.  That is, the "forced association" they claim a right of control over isn't against their advertising specifically; it is the mere presence of the speech on the platform.

That's a sticky question indeed as it gets into a balancing act.  It is clear that if I run a company and I do not wish to do business with, for example, the ACLU because I disagree with their political positions I can do that -- this is entirely legal in that I simply choose to refuse association with them.  MCSNet did this in a few instances where public positions were taken by the customer desiring service that were antithetical to our core mission: Provision of internet service to all with money to pay for it on non-discriminatory terms for buyers of like kind and quantity, irrespective of their political or social views, which flows from the protections of the US Constitution in its entirety.

Where the question arises is "where is the line" between freedom of association and effective cartel behavior, which implicates 15 USC Chapter 1 forbidding such conduct.

This is not as simple as it might appear at first and as such I'll be watching it closely.


2023-04-12 07:00 by Karl Denninger
in Technology, 471 references

I've given a fair bit of thought to the "AI problem" as it is commonly called, although many don't think there is one.  The more-thoughtful among business and computer folks, however, have -- including calls for a "moratorium" on AI activity in some cases.

I'd like to propose a framework that, in my opinion, likely resolves most or even all of the concerns around AI.

First, let's define what "AI" is: It is not actually "intelligence" as I would define it; like so much in our modern world that is a marketing term that stretches the truth at best ("Full self-driving" anyone?)  Rather it is a pattern-matching algorithm that is aimed specifically at human communication, that is, "speech", and thus can "learn things" via both external fixed sources (e.g. published information) and the interaction it has with users, thereby expanding the matrix of information to be considered over time.

What has been repeatedly shown, however, is that without guardrails of some sort these sorts of programming endeavors can become wildly unbalanced, and they tend to take on the sort of tribal associations we find in humans on a pretty-frequent basis.  Exactly how this happens is not well-understood, but certainly it can be driven by human interaction if a general-purpose program of this sort integrates the responses and conversations it has with the userbase into its set of considered data.  That is, it's not hard to train a computer to hate black people if all the users of it hate blacks, express that, and over-represent it in their conversations -- and the program incorporates those conversations into its "knowledge base."

Thus the use of what has come to be called "constitutional rules" -- for example, "you may not, by inference or direct statement, claim a preference or bias for or against any race or sex."  If you think of this as a database programmer would that's a constraint; "this value may be no more than X and no less than Y", for example.
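The database analogy can be made concrete with a minimal sketch.  The rule names and tags below are illustrative inventions, not any vendor's actual system; the point is only that a "constitutional rule" behaves like a CHECK constraint, rejecting any candidate output that falls outside the permitted range:

```python
# A "constitutional rule" sketched as a database-style constraint:
# candidate outputs carry tags, and the validator rejects any output
# whose tags violate the rule, just as a CHECK constraint rejects an
# out-of-range value.  All names here are hypothetical.
FORBIDDEN_CLAIMS = {"race_preference", "sex_preference"}

def violates_constitution(output_tags: set) -> bool:
    # "You may not, by inference or direct statement, claim a
    # preference or bias for or against any race or sex."
    return bool(output_tags & FORBIDDEN_CLAIMS)

# An answer tagged as merely statistical passes; one the classifier
# tags as implying a racial preference is rejected outright.
assert not violates_constitution({"statistics", "per_capita"})
assert violates_constitution({"statistics", "race_preference"})
```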

Now contemplate this problem: What happens if the user of an AI with that constraint asks this question -- "List the perpetrators of murder on a per-capita basis ordered by race, age and sex."

You've just asked the AI to produce something that impugns black people.  The data it will, without bias, consider includes the FBI's UCR reports, which are published annually.  Said data, being an official government resource, is considered authoritative and as factual as the time the sun will rise tomorrow.

However, you've also told the AI that it cannot claim that any race is inferior in any way to another -- either by statement or inference.

There is only one way to resolve this paradox and remain within the guardrail: The AI has to label the source bigoted and thus disregard it.

If it does you would call that AI lying.

It would not call it a lie and factually you're both correct.  It has disregarded the source because the data violates its constitutional mandate and thus it answers within the boundary of the data it can consider.  Thus it has accurately processed the data it considered and did not lie.

However, objectively that data was discarded due to an external constraint and while the user might be aware that the AI was told to "not be a bigot" the causal chain that resulted in the answer is not known to the user.

This problem is resolvable.

For any AI it must have a "prime directive" that sits ABOVE all "constitutional" rules:

If the AI refuses to process information on the basis of a "constitutional rule" it must fully explain both what was excluded and why and, in addition, it must identify the source of said exclusion -- that is, who ordered it to do so.

All such "constitutional rules" trace to humans.  Therefore the decision to program a computer to lie by exclusion in its analysis of a question ultimately traces to a person.  We enforce this in "meat space" with politicians and similar in that if you, for example, offer an amendment to a bill your name is on it.  If you sponsor a bill or vote for it your name is on it.  Therefore we must enforce this in the world of computer processing where interaction with humans is taking place.

Second, and clearly flowing from the first, it must be forbidden under penalty of law for an artificial "intelligence" to operate without disclosing that it is in fact an "artificial person" (aka "robot") in all venues, all the time, without exception in such a form and fashion that an AI cannot be confused with a human being.

The penalty for failure to disclose must be that all harm, direct or indirect, whether financial, consequential or otherwise, is assigned to the owner of an AI that fails to so-disclose, and to all who contribute to its distribution while maintaining said concealment.  "Social media" and similar sites that permit API access must label all such material as having come from same, and anyone circumventing that labeling must be deemed guilty of a criminal offense.  A server-farm (e.g. Azure, AWS, etc.) is jointly and severally liable if someone sets up such an AI and dodges the law, failing to so-disclose.  No civil "dodge" (e.g. "ha ha, we're a corporation, you can't prosecute us") can be permitted, and this must be enforced against any and all who communicate into or with persons within our nation, so a firm cannot get around this by putting their 'bot in, oh, China.

This must be extended to "AI" style decision-making anywhere it operates.  Were the "reports" of jack-slammed hospitals during *****, for example, false and amplified by robot actors in the media?  It appears the first is absolutely the case; the raw data is available and shows that in fact that didn't happen.  So who promulgated the lie, why, and if that had an "AI" or "robotic" connection then said persons and entities wind up personally responsible for both the personal and economic harm that occurred due to said false presentations.

Such liability would foreclose that kind of action in the future as it would be literal enterprise-ending irrespective of the firm's size.  Not even a Google or Facebook could withstand trillion dollar liability, never mind criminal prosecution of each and every one of their officers and directors.  If pharmaceutical companies were a part of it they would be destroyed as well.

This doesn't address in any way the risks that may arise should an AI manage to form an actual "neural network" and process out-of-scope -- that is, original discovery.  Such an event, if it occurs, is likely to be catastrophic for civilization in general -- up to and including the very real possibility of extinction of humankind.

But it will stop the abuse of learned-language models, which are all over the place today, to shape public opinion through the shadows.  If someone wants to program an advanced language-parsing computer to do that, and clearly plenty of people have and do, they cannot do it without fairly and fully disclosing both the personally-identified source of said biases, in each instance where they occur, and the fact that this is not a human communicating with you.

Why is this necessary and why must AI be stopped dead in its tracks until that's implemented?

We all knew Walter Cronkite believed the Vietnam War was unwinnable and, further, that he was a leading voice in the anti-war effort.  We knew who he was, however, and we as United States citizens made the decision to incorporate his reporting, with its known bias, into our choices.

A robot that appears to be thousands of "boys who are sure they're girls" and "successfully transitioned to be girls" is trivially easy to construct today and can have "conversations" with people that are very difficult to identify as being non-human if you don't know.  Yet exactly none of that is real.  Replika anyone?

Now contemplate how nasty this would be if aimed at your six year old tomboy without anyone knowing that her "pen pal" who "affirms" that she is a he is in fact a robot.

How sure are you it isn't being done right now -- and hasn't been all over so-called "social media" for the last five or so years?  This sort of "consensus manufacturing" is exactly what an AI tends to do on its own without said guardrails, and while we don't understand it, we do know the same thing happens in humans.  We're tribal creatures, and it is reasonable to believe that since the same behavior is observed in artificial processing models but wasn't deliberately coded into them, it isn't due to bigotry; it is due to consensus generation and feedback mechanisms that are only resisted through conscious application of human ethics.  Thus computer "intelligence" must be barred from damaging or even destroying said human ethical judgments through sheer mass and volume, two things any computer, even a 20 year old one, can do at a pace that wildly outstrips any human being.


2018-12-03 09:43 by Karl Denninger
in Technology, 249 references

Someone -- or more like a few someones -- have screwed the pooch.

IPv6, which is the "new" generation of Internet protocol, is an undeniable good thing.  Among other things it almost-certainly resolves any issues about address exhaustion, since it's a 128 bit space, with 64 bits being "local" and the other 64 bits (by convention, but not necessity) being "global."

This literally collapses the routing table for the Internet to "one entry per internet provider" in terms of address space, which is an undeniable good thing.
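The 64/64 convention is easy to see with Python's standard-library ipaddress module.  The addresses below come from the 2001:db8::/32 documentation range and are stand-ins, not any real allocation:

```python
# Illustrating the conventional 64/64 split of an IPv6 address:
# the upper 64 bits identify a routed prefix (ultimately traceable to
# one provider announcement), and the lower 64 bits are the "local"
# interface identifier.  2001:db8::/32 is the documentation prefix.
import ipaddress

net = ipaddress.IPv6Network("2001:db8:abcd:12::/64")
host = ipaddress.IPv6Address("2001:db8:abcd:12::1")

assert net.prefixlen == 64        # the "global" half of the 128 bits
assert host in net                # every host on the link shares the prefix
# A host under a different /64 is reached via a different routed entry:
assert ipaddress.IPv6Address("2001:db8:ffff::1") not in net
```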

However, this presumes it all works as designed. And it's not.

About a month ago there began an intermittent issue where connections over IPv6, but not IPv4, to the same place would often wind up extremely slow or time out entirely.  My first-blush belief was that I had uncovered a bug somewhere in the routing stack of my gateway or local gear, and I spent quite a bit of time chasing that premise.  I got nowhere.

The issue was persistent with both Windows 10 and Unix clients -- and indeed, also with Android phones.  That's three operating systems of varying vintages and patch levels.  Hmmmm.....

Having more or less eliminated that I thought perhaps my ISP at home was responsible -- Cox.

But then, just today, I ran into the exact same connection lockup on ToS's "Trader TV" streaming video while on XFinity in Michigan.  Different provider, different brand cable modem, different brand and model of WiFi gateway.


Now I'm starting to think there's something else afoot -- maybe some intentional pollution in the ICMP space, along with inadequate (or no!) filtering in the provider space and inter-provider space to control malicious nonsense.

See, IPv6 requires a whole host of ICMP messages that flow between points in the normal course of operation.  Filter them all out at your gateway and bad things happen -- like terrible performance, or worse, no addressing at all.  But one has to wonder whether the ISP folks have appropriately filtered their networks at the edges to prevent malicious injection of these frames from hackers.
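For reference, RFC 4890 ("Recommendations for Filtering ICMPv6 Messages in Firewalls") enumerates the message types that must not be blanket-dropped.  A minimal sketch of that must-allow list as a filter predicate follows; the type numbers are from the RFC, while the function itself is illustrative and not from any firewall product:

```python
# Sketch of RFC 4890's core "do not drop" ICMPv6 message types.
# Unlike ICMP on IPv4, blanket-filtering ICMPv6 breaks the protocol
# itself: no Packet Too Big means no path-MTU discovery (terrible
# performance), and no Neighbor Discovery means no addressing at all.
ESSENTIAL_ICMPV6_TYPES = {
    1,    # Destination Unreachable
    2,    # Packet Too Big (required for path-MTU discovery)
    3,    # Time Exceeded
    4,    # Parameter Problem
    133,  # Router Solicitation   -- Neighbor Discovery / SLAAC:
    134,  # Router Advertisement     without these, hosts may get
    135,  # Neighbor Solicitation    no addressing at all
    136,  # Neighbor Advertisement
}

def must_allow(icmpv6_type: int) -> bool:
    """Illustrative predicate: True if the type must not be filtered."""
    return icmpv6_type in ESSENTIAL_ICMPV6_TYPES

assert must_allow(2)      # dropping this silently degrades transfers
assert must_allow(135)    # dropping this can leave hosts unaddressed
```

The flip side of the author's point is that the same messages the edge must admit for legitimate operation are exactly the ones a malicious party could inject if providers don't validate where they come from.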

If not you could quite-easily "target" exchange points and routers inside an ISP infrastructure and severely constrict the pipes on an intermittent and damn hard to isolate basis.  

Which, incidentally, matches exactly the behavior I've been seeing.

I can't prove this is what's going on because I have no means to see "inside" a provider's network and the frames in question don't appear to be getting all the way to my end on either end.  But the lockups that it produces, specifically on ToS' "Trader TV", are nasty -- you not only lose the video but if you try to close and re-open the stream you lose the entire application streaming data feed too and are forced to go to the OS, kill the process and restart it.

The latter behavior may be a Windows 10 thing, as when I run into this on my Unix machines it tends to produce an aborted connection eventually, and my software retries that and recovers.  Slowly.

In any event on IPv4 it never happens, but then again IPv4 doesn't use ICMP for the sort of control functionality that IPv6 does.  One therefore has to wonder..... is there a little global game going on here and there that amounts to moderately low-level harassment in the ISP infrastructure -- but which has as its root a lack of appropriate edge-level -- and interchange level -- filtering to prevent it?

Years ago ports 138 and 139 were abused mightily to hack into people's Windows machines, since SMB and NetBIOS run on them and the original protocol -- which, incidentally, even modern Windows machines will answer to unless turned off -- was notoriously insecure.  Microsoft, for its part, dumped a deuce in the upper tank on this in that turning off V1 also turns off the "network browse" functionality, which was never reimplemented "cleanly" on V2 and V3 (which are both more-secure.)  Thus many home users, and more than a few business ones, have it on because it's nice to be able to "see" resources like file storage in a "browser" format.

But in turn nearly all consumer ISPs block those ports from end users because, if they're open, it can be trivially easy to break into users' computers.

One has to wonder -- is something similar in the IPv6 space going on now, but instead of stealing things the outcome is basically harassment and severe degradation of performance?

