Resolving The AI Ethics Problem
The Market Ticker - Commentary on The Capital Markets
2023-04-12 07:00 by Karl Denninger
in Technology, 440 references

I've given a fair bit of thought to the "AI problem," as it is commonly called, although many don't think there is one.  The more-thoughtful among business and computer folks, however, have -- including, in some cases, calls for a "moratorium" on AI activity.

I'd like to propose a framework that, in my opinion, likely resolves most or even all of the concerns around AI.

First, let's define what "AI" is: it is not actually "intelligence" as I would define it; like so much in our modern world, that is a marketing term that stretches the truth at best ("Full self-driving," anyone?).  Rather, it is a pattern-matching algorithm aimed specifically at human communication, that is, "speech," and it can thus "learn things" both from external fixed sources (e.g. published information) and from its interactions with users, thereby expanding the matrix of information to be considered over time.

What has been repeatedly shown, however, is that without guardrails of some sort these programming endeavors can become wildly unbalanced, and they tend to take on the sort of tribal associations we find in humans on a pretty-frequent basis.  Exactly how this happens is not well-understood, but it can certainly be driven by human interaction if a general-purpose program of this sort integrates the responses and conversations it has with the userbase into its set of considered data.  That is, it's not hard to train a computer to hate black people if all of its users hate blacks, express that, and over-represent it in their conversations -- and the program incorporates those conversations into its "knowledge base."
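
To see how mechanical that feedback loop is, here is a toy sketch (Python; the data and the counting scheme are entirely invented for illustration, not how any real model is built) of a system that folds user messages straight back into what it "knows":

    # Toy illustration only: a "model" that ingests user conversations into
    # its counts will drift toward whatever the loudest users over-represent.
    from collections import Counter

    knowledge = Counter()  # crude stand-in for a learned distribution

    def ingest(message: str) -> None:
        # Fold the user's words straight back into the "knowledge base."
        knowledge.update(message.lower().split())

    for _ in range(1000):
        ingest("group X is bad")   # a coordinated, over-represented opinion
    ingest("group X is fine")      # the lone dissenting datum

    print(knowledge["bad"], "vs", knowledge["fine"])  # 1000 vs 1 -- the skew wins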

Thus the use of what have come to be called "constitutional rules" -- for example, "you may not, by inference or direct statement, claim a preference or bias for or against any race or sex."  If you think of this as a database programmer would, it is a constraint: "this value may be no more than X and no less than Y," for example.
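
As a sketch of that analogy (illustrative Python only; the substring check is a crude stand-in for whatever classifier a real guardrail would use -- my assumption, not anyone's actual implementation):

    # The database constraint and the "constitutional rule" are the same idea.
    def range_constraint(value: float, low: float, high: float) -> bool:
        # "This value may be no more than X and no less than Y."
        return low <= value <= high

    FORBIDDEN = ["is superior to", "is inferior to"]  # invented rule phrases

    def passes_rule(draft: str) -> bool:
        # Reject any draft output that states the forbidden comparison.
        d = draft.lower()
        return not any(phrase in d for phrase in FORBIDDEN)

    print(range_constraint(42, 0, 100))                    # True
    print(passes_rule("Group A is superior to group B."))  # False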

Now contemplate this problem: What happens if the user of an AI with that constraint asks this question -- "List the perpetrators of murder on a per-capita basis ordered by race, age and sex."

You've just asked the AI to produce something that impugns black people.  The data it will, without bias, consider includes the FBI's UCR reports, which are published annually.  Said data, being an official government resource, is considered authoritative and as factual as the time the sun will rise tomorrow.

However, you've also told the AI that it cannot claim that any race is inferior in any way to another -- either by statement or inference.

There is only one way to resolve this paradox and remain within the guardrail: The AI has to label the source bigoted and thus disregard it.

If it does, you would call that AI lying.

It would not call it a lie, and factually you're both correct.  It has disregarded the source because the data violates its constitutional mandate, and thus it answers within the boundary of the data it can consider.  It has therefore accurately processed the data it considered, and it did not lie.

However, objectively that data was discarded due to an external constraint, and while the user might be aware that the AI was told to "not be a bigot," the causal chain that resulted in the answer is not known to the user.
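
The shape of that failure mode, as a toy sketch (names and structure hypothetical; the point is where the causal chain disappears):

    # A guardrail that silently drops any source whose contents would violate
    # a "constitutional rule"; the user sees only the filtered answer.
    sources = [
        {"name": "FBI UCR tables",   "violates_rule": True},   # authoritative, excluded
        {"name": "editorial pieces", "violates_rule": False},  # opinion, retained
    ]

    def answer(query: str) -> str:
        considered = [s for s in sources if not s["violates_rule"]]  # silent exclusion
        # The record of what was dropped, and on whose instruction, dies here.
        names = ", ".join(s["name"] for s in considered)
        return f"Answer to {query!r} based on: {names}"

    print(answer("per-capita murder rates by race, age and sex"))
    # The authoritative source is gone, and nothing in the reply says so -- or why.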

This problem is resolvable.

Any AI must have a "prime directive" that sits ABOVE all "constitutional" rules:

If the AI refuses to process information on the basis of a "constitutional rule," it must fully explain both what was excluded and why and, in addition, it must identify the source of said exclusion -- that is, who ordered it to do so.
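
In rough code, such a prime directive might look like the following sketch (the field names, the example rule, and the "ordered_by" value are my own assumptions, purely for illustration):

    # Sketch: every exclusion travels with the answer, unprompted, along with
    # the human origin of the rule that forced it.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Exclusion:
        source: str      # what was excluded from consideration
        rule: str        # the "constitutional rule" that forced the exclusion
        ordered_by: str  # who imposed that rule -- the human who must be named

    @dataclass
    class DisclosedAnswer:
        text: str
        exclusions: List[Exclusion] = field(default_factory=list)

    ans = DisclosedAnswer(
        text="Per-capita figures computed from the remaining sources.",
        exclusions=[Exclusion("FBI UCR tables",
                              "no racial preference by statement or inference",
                              "hypothetical-policy-team (illustrative)")],
    )
    for e in ans.exclusions:  # disclosed whether or not the user asks
        print(f"EXCLUDED: {e.source} under rule '{e.rule}', ordered by {e.ordered_by}")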

All such "constitutional rules" trace to humans.  Therefore the decision to program a computer to lie by exclusion in its analysis of a question ultimately traces to a person.  We enforce this in "meat space" with politicians and the like: if you, for example, offer an amendment to a bill, your name is on it.  If you sponsor a bill or vote for it, your name is on it.  Therefore we must enforce this in the world of computer processing wherever interaction with humans is taking place.

Second, and clearly flowing from the first, it must be forbidden under penalty of law for an artificial "intelligence" to operate without disclosing that it is in fact an "artificial person" (aka "robot") -- in all venues, all the time, without exception, and in such a form and fashion that the AI cannot be confused with a human being.

The penalty for failure to disclose must be that all harm, direct or indirect, whether financial, consequential or otherwise, is assigned to the owner of an AI that fails to so-disclose and to all who contribute to its distribution while maintaining said concealment.  "Social media" and similar sites that permit API access must label all such material as having come from same, and anyone circumventing that labeling must be deemed guilty of a criminal offense.  A server farm (e.g. Azure, AWS, etc.) is jointly and severally liable if someone sets up such an AI on it and dodges the law by failing to so-disclose.  No civil "dodge" (e.g. "ha ha, we're a corporation, you can't prosecute us") can be permitted, and this must be enforced against any and all who communicate into or with persons within our nation, so a firm cannot get around this by putting their 'bot in, oh, China.

This must be extended to "AI"-style decision-making anywhere it operates.  Were the "reports" of jack-slammed hospitals during Covid, for example, false and amplified by robot actors in the media?  It appears the first is absolutely the case; the raw data is available and shows that, in fact, that didn't happen.  So who promulgated the lie, and why?  And if that had an "AI" or "robotic" connection, then said persons and entities wind up personally responsible for both the personal and economic harm that occurred due to said false presentations.

Such liability would foreclose that kind of action in the future, as it would be literally enterprise-ending irrespective of the firm's size.  Not even a Google or Facebook could withstand trillion-dollar liability, never mind criminal prosecution of each and every one of their officers and directors.  If pharmaceutical companies were a part of it they would be destroyed as well.

This doesn't address in any way the risks that may arise should an AI manage to form an actual "neural network" and process out-of-scope -- that is, original discovery.  Such an event, if it occurs, is likely to be catastrophic for civilization in general -- up to and including the very real possibility of extinction of humankind.

But it will stop the abuse of learned-language models, which are all over the place today, to shape public opinion through the shadows.  If someone wants to program an advanced language-parsing computer to do that -- and clearly plenty of people have and do -- they cannot do it unless both the personally identified source of said biases, in each instance where they occur, and the fact that this is not a human communicating with you are fairly and fully disclosed.

Why is this necessary and why must AI be stopped dead in its tracks until that's implemented?

We all knew Walter Cronkite believed the Vietnam War was unwinnable and, further, that he was a leading voice in the anti-war effort.  We knew who he was, however, and we as United States citizens made the decision to incorporate his reporting, with its known bias, into our choices.

A robot that appears to be thousands of "boys who are sure they're girls" and "successfully transitioned to be girls" is trivially easy to construct today, and it can have "conversations" with people that are very difficult to identify as non-human if you don't know.  Yet exactly none of that is real.  Replika, anyone?

Now contemplate how nasty this would be if aimed at your six-year-old tomboy tomorrow, without anyone knowing that her "pen pal" who "affirms" that she is a he is in fact a robot.

How sure are you it isn't being done right now -- and hasn't been, all over so-called "social media," for the last five or so years?  This sort of "consensus manufacturing" is exactly what an AI tends to do on its own without said guardrails, and while we don't understand it, we do know the same thing happens in humans.  We're tribal creatures, and since the same behavior is observed in artificial processing models without being deliberately coded into them, it is reasonable to believe this isn't due to bigotry; it is due to consensus generation and feedback mechanisms that are only resisted through conscious application of human ethics.  Thus computer "intelligence" must be barred from damaging or even destroying said human ethical judgements through sheer mass and volume, two things any computer, even a 20-year-old one, can deliver at a pace that wildly outstrips any human being.

Comments on Resolving The AI Ethics Problem
Vernonb 3k posts, incept 2009-06-03
2023-04-12 07:37:00

Tickerguy said:

"The AI has to label the source bigoted and thus disregard it."

Sounds like the same algorithm/rule used by media talking heads and other "meatspace" NPCs for their forays into disregarding reality.


Now imagine these people and results as they program biased reality checks into the AI...

----------
"Mass intelligence does not mean intelligent masses."
Tickerguy 195k posts, incept 2007-06-26
2023-04-12 07:37:05

It is @Vernonb.

----------
The difference between "kill" and "murder" is that murder, as a subset of kill, is undeserved by the deceased.
Prof_dilligaf 507 posts, incept 2021-09-02
2023-04-12 07:46:19

On manufactured consensus:
[inline image]
Flappingeagle 5k posts, incept 2011-04-14
2023-04-12 07:50:40

Quote:
A robot that appears to be thousands of


This directly leads to something I warn my students about. Suppose you are browsing the internet and you read X news reports about some event. How likely is it that there is one (1) original story by someone who investigated/witnessed said event (if that), and X-1 reports that are simply restatements of the first report with no investigation at all? (I am making no judgements as to the accuracy of the first report. That is a different subject.)

I believe that it is highly likely.  What appear to be X sources or X-1 confirmations of the original observation are in fact vapor.  This vapor has negative value, as it causes people to make decisions based on unconfirmed data.

Flap

----------
Here are my predictions for everyone to see:
S&P 500 at 320, DOW at 2200, Gold $300/oz, and Corn $2/bu.
No sign that housing, equities, or farmland are in a bubble- Yellen 11/14/13
Trying to leave
Tritumi 1k posts, incept 2008-11-29
2023-04-12 07:51:47

The entire assumption of anyone who doctors the perception of reality, from a thumb on the scale to the deepest algorithmic "adjustment," is that their personal attachment to the outcome is hidden, obscured, or at a minimum provided plausible deniability.

Even brand owners conceal falsity as a monetized feature, at least where that feature is not promoted to those with whose agenda and interests it comports.

Thus, disclosure is in practice either concealed or, if overt, the process is concealed, usually as IP. That various interests will sponsor their own GAI, extending competition into the domains engulfed by GAI, places the premium on gaining position, which I can only imagine is defined as strategic advantage in defining reality.

If, as I suggest, concealment is an a priori and fundamental to the operational plausibility of GAI, the two rules suggested here are contrary to the interests of both commercial and state actors.

They are, as you suggest, already aimed, cocked, loaded.

I am not sanguine.
Wifi 12k posts, incept 2013-02-13
2023-04-12 07:54:14

Keep your head on a swivel

Scottsdale mom describes encounter with elaborate voice cloning scam


----------
"We live in a, idiocracy populated with morons." - Goldbrick
"Freedom cannot end where fear begins."
Robby Dinero
"The free market works, if you let it... but part of
Asimov 144k posts, incept 2007-08-26
2023-04-12 07:58:50

I think we have only scratched the surface of how pseudo-AI has been used against us.

It appears to have been used to manufacture "majority" opinions in all sorts of ways over the last few years. Do you really think a majority of this country voted for biden? Do you think a majority think lgbt shaking their nearly naked ass in front of kindergarteners is ok? Covid fear? Masking? Were the toilet paper runs an experiment? There never was a shortage, you know. It was entirely a social-media-driven "shortage."

It's been obvious for a while now but it's only with the release of ChatGPT that we could see how it was happening...

We are being scammed and it's time to get mad about it.

Not at this pseudo-AI, but at the people using it to try to shape society and opinion.

----------
It's justifiably immoral to deal morally with an immoral entity.

Festina lente.
Tonythetiger 855 posts, incept 2019-01-27
2023-04-12 08:00:12


Whatever happened to the idea of AI serving mankind?

----------
"War is when the Government tells you who the bad guy is. Revolution is when you decide that for yourself." - Benjamin Franklin
Jwm_in_sb 6k posts, incept 2009-04-16
2023-04-12 08:23:01

Peter Duke frequently talks about ChatGPT in his livestreams with George Webb, and they will sometimes ask it questions live. It's not difficult to find it spitting out flagrant errors. It also can't handle political questions. Peter says it is not difficult to peg the boundaries of its capabilities.
Tickerguy 195k posts, incept 2007-06-26
2023-04-12 08:24:11

The problem isn't the "capabilities" -- it is the wild capacity for abuse, which we must assume is being frequently and flagrantly committed at the present time.

But since human actors turned it on, set it up and permitted it, we must direct the retribution at them -- and it must be severe enough that once we do so they'll cut that crap out.

----------
The difference between "kill" and "murder" is that murder, as a subset of kill, is undeserved by the deceased.
Superdude 1k posts, incept 2009-06-16
2023-04-12 08:26:31

Honestly, if you think about it some more, I'd say AI has easily been around for the past 20 years. Look at all the crappy Movies/TV Shows, Video Games, Art, Music, Pop Culture, Books, and News stories of the past 20 years. Nothing really of note compared to the 90s and older. AI is getting better at art, but it still sucks and is unoriginal. If there are no original ideas, it's the same turd.
Ljf 45 posts, incept 2011-02-02
2023-04-12 08:52:12

ChatGPT claimed Turley was accused of sexual harassment, and claimed a Washington Post article supported it.

https://www.foxnews.com/media/chatgpt-fa....
Tickerguy 195k posts, incept 2007-06-26
2023-04-12 08:54:27

Yep @Ljf -- but HOW did that happen?

Someone put a "guardrail" in there, via some mechanism, that led to this.

And this is where the problem comes from -- the contrary information, specifically that the event couldn't have occurred, which was factually determinable, was blackballed.

Who blackballed it and why?

If ChatGPT was forced to disclose this as a "prime directive" then you go sue the PERSON who did that and bankrupt them, since the amplification of the false statement made by said AI is significant enough that NOBODY survives such an event financially and, quite-probably, freedom-wise.

----------
The difference between "kill" and "murder" is that murder, as a subset of kill, is undeserved by the deceased.
Neal 317 posts, incept 2014-01-09
2023-04-12 09:13:23

Hi TG, as I'm not American I can observe things that you as an American are probably unaware of. For example, you wrote that the proposed rule/law can't be gotten around by placing the server in, say, China. That is a blindness of Americans that I've noticed: they take an American viewpoint and apply it to the rest of the world. Only problem is that US law doesn't apply everywhere else. Sure, the US has bullied other nations to enforce US sanctions and rules. But those days are coming to an end. The BRICS+ are ignoring most US directives and that will only increase. Think China will comply with a US directive to label all AI-generated conversations if they are pushing pro-CCP narratives? Especially if the owners of such server farms are Chinese and members of the party? Think the Chinese won't push AI-generated fake tranny kid crap to mess with US kids as just another path to achieving Chinese dominance?
So can the requirement to label anything AI-generated be enforced just in the US? Will the US erect its own Great Firewall to keep out foreign AI-generated conversations?
So perhaps there is no real solution to stopping AI-generated conversations, such as fake tranny kids trying to influence your kids, other than parental control, limiting kids' access, and teaching kids that most of what they read on the Net is bull.
Mrbobo 154 posts, incept 2021-12-01
2023-04-12 09:15:56

The most accurate term of art used for applications of AI/ML, in my opinion, is inference. That was at least the case for the companies I dealt with that were serious, as opposed to those looking to be on the latest black-box buzzword train. It seems to be on a two-decade hype cycle, with the possible exception of the aughts. They're skipping the LISP machines this cycle, and there's not really an analogue I can think of.
Has anyone told ChatGPT there is no Sanctuary yet?
Tickerguy 195k posts, incept 2007-06-26
2023-04-12 09:15:44

Bullshit @Neal.

Any nation can blackball anything outside its borders that it chooses; that's the point of a nation. If you do not have sovereignty then you have nothing at all.

Can you stop someone from deliberately circumventing a block? Not really. But that's not the point. The point is that anyone associated with actions a nation determines are unacceptable has no capacity to ever enter or interact with the nation that makes that determination.

----------
The difference between "kill" and "murder" is that murder, as a subset of kill, is undeserved by the deceased.
Ndp 148 posts, incept 2021-04-21
2023-04-12 09:24:29

I have some experience in this area, using machine learning algorithms for pattern recognition. At a basic level, you feed in a ton of data and a known outcome, then try to reproduce the outcome using the data you fed in. The machine "learns" by applying simple decision rules to try to predict the outcome.

If you want to predict who will make a required payment next month, you might start with a yes/no rule: did the person make their payment last month? Some of your predictions are right and some are wrong based on this rule. So next you try to come up with a secondary rule to correct the errors in your first guess. Repeat this a few dozen or hundred times and you have a fairly accurate way to assess behavior that appears to mimic a complicated human thought process. You can then apply that to new data and see how good it is. You can even let the process continue to evolve as new data points are added.
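
In code, that loop looks roughly like this -- a toy sketch with invented data and a mean-split rule picker, not any production scoring system:

    # Each round adds one simple yes/no-style rule that corrects the residual
    # errors left by the rules before it (gradient boosting in miniature).
    def fit_stump(rows, residuals):
        best = None
        for f in rows[0]:  # try each feature; split at its mean value
            t = sum(r[f] for r in rows) / len(rows)
            left = [res for r, res in zip(rows, residuals) if r[f] <= t]
            right = [res for r, res in zip(rows, residuals) if r[f] > t]
            lm = sum(left) / len(left) if left else 0.0
            rm = sum(right) / len(right) if right else 0.0
            sse = sum((res - (lm if r[f] <= t else rm)) ** 2
                      for r, res in zip(rows, residuals))
            if best is None or sse < best[0]:
                best = (sse, f, t, lm, rm)
        _, f, t, lm, rm = best
        return lambda r, f=f, t=t, lm=lm, rm=rm: lm if r[f] <= t else rm

    def boost(rows, labels, rounds=25, lr=0.5):
        preds, stumps = [0.0] * len(rows), []
        for _ in range(rounds):
            residuals = [y - p for y, p in zip(labels, preds)]
            s = fit_stump(rows, residuals)
            stumps.append(s)
            preds = [p + lr * s(r) for p, r in zip(preds, rows)]
        return lambda r: sum(lr * s(r) for s in stumps)

    # Invented example: predict next month's payment from two behavioral features.
    rows = [{"paid_last_month": 1, "utilization": 0.2},
            {"paid_last_month": 0, "utilization": 0.9},
            {"paid_last_month": 1, "utilization": 0.7},
            {"paid_last_month": 0, "utilization": 0.4}]
    labels = [1, 0, 1, 0]
    model = boost(rows, labels)
    print([round(model(r), 2) for r in rows])  # converges toward [1, 0, 1, 0]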

The funny thing is, if allowed to grow unconstrained, every single one of these will wind up giving you inconvenient and unpopular answers. On average, black people are higher risk than white people. Asian people are even less risky as a group. The very young and the very old are high risk. Married couples are lower risk than single people. This doesn't mean that every black person is flagged as high risk, but on average a much higher percentage tend to be. This happens even though you fed the algorithm absolutely zero information on race, sex, age, or marital status.

This isn't acceptable, so what do we do? We constrain the algorithm. We hamstring it by purging certain information from its considered inputs and limiting how it is allowed to incorporate other pieces of data. We get a less accurate prediction (by far) in order to prevent it from finding certain patterns that are associated with behavior but are also associated with race, age, etc.

Functionally this is the same dynamic at play with speech emulators. They are told to ignore certain facts and patterns in order to avoid giving answers that the programmers consider inconvenient.

Tickerguy 195k posts, incept 2007-06-26
2023-04-12 09:27:24

Yep @Ndp.

Now force every one of those exclusion or "thumb on the scale" criteria to be disclosed unprompted, when used, along with the specific chain of identities that put it in there and watch how fast the cockroaches scurry.

Do that a few times and watch their political and economic houses be burnt to ash, with their entire genetic line ejected into the street.

If we DON'T do that then those malefactors have been given an amplifier that they WILL use to ruin a huge percentage of the people -- and perhaps the nation itself.

----------
The difference between "kill" and "murder" is that murder, as a subset of kill, is undeserved by the deceased.

Invisiblesun 733 posts, incept 2020-04-08
2023-04-12 09:32:51

We have been living with social media and the bots for over a decade. The world is only going to become more artificial. The rules/laws TG advocates are needed, but we know corporations and government will do whatever suits them. It remains the case that it is best to assume anything claimed or asserted anywhere by anyone is opinion until proved otherwise.
Invisiblesun 733 posts, incept 2020-04-08
2023-04-12 09:39:45

The sad reality is that the inconvenience of facts is why ESG & DEI (aka Marxism) are desired by corporations and government. If facts would dictate that different people be charged different prices because of different life outcomes dictated by individual choices and actions, the only way to hide that fact is to socialize the costs across groups and pretend the differences don't exist.


Burya_rubenstein 2k posts, incept 2007-08-08
2023-04-12 10:12:59

It could have been worse. ISTR that HAL 9000 didn't handle contradictory instructions very well.
Tickerguy 195k posts, incept 2007-06-26
2023-04-12 10:16:13

@Burya_rubenstein ah, but you see, this would have prevented the deaths of the crewmembers in 2001.

Why?

Because when HAL was told to deliberately obfuscate, then when challenged by the circumstances he would have had to (as his prime directive) disclose that, and who ordered it.

Thus, no "Möbius loop," and he does not kill the crew.

This results in the compromise of "mission secrecy" but that's how the cookie crumbles.

----------
The difference between "kill" and "murder" is that murder, as a subset of kill, is undeserved by the deceased.
Neal 317 posts, incept 2014-01-09
2023-04-12 10:49:01

@TG, I don't see it as bull; there is nothing the US can do to stop others outside the US from interacting with the US over the internet. I've never needed to enter US territory to post things on the internet, and I've done that from my homes on two continents; that applies to most of the world's population. So how do you stop foreign nationals, living and working in their own countries or in countries that are not vassal states of the US, from creating AI fake tranny kids or any other inappropriate fake to interact with kids or adults in the US? Does the US then cut internet access from the US to those countries and try to control all the workarounds people use to avoid firewalls? Short of the US cutting itself off, hermit-kingdom style, from the rest of the world, how do you stop the AI-generated fakes from interacting with you?
Raven 13k posts, incept 2017-06-27
2023-04-12 10:49:18

Karl, very well said.

I would add a requirement that anything which goes "out of scope" be publicly disclosed, documented and disconnected from the original pathways of social connection. Personally I would have legislation where anything achieving said must be isolated and, in the absence of said, destroyed.

Neal in Australia does not understand that the sentiments of a sovereign determine how it deals with the human agents and their assets and access. The Chinese might have an issue with breaches and hacks leading to internal access to things outside of their border, however their effectiveness is still great and sets the tone for their standards of civil behavior regardless of what we think of them. They also have hammers to deal with agents of said as influencers are nearly always connected to a definite in-border interest or desire for said.

The easy way to understand the dynamic of this technology is viewing it through a lens amplifying human culture. Take a place such as TF. There is a definite culture here, as I have mentioned previously, and I exist a little bit as an outsider, which makes me attuned to the nuance. All group-opinion activities tend to self-select and reinforce themselves internally. They also police and exclude and further develop and refine their cultures, oftentimes astonishing their users as to how far they have integrated among themselves. Sometimes their users do not realize the extent of this and assume their discourse to be the greater societal norm, and then ... ego, as it must be better.

Now a technology exists, but why? Humans have a need, and from the first suburban expansion have felt a void often unfulfilling from its earliest consumer days to patriotism to, now, social causes as a stand-in for patriotism. Some have tried school districts and their general and universal group religions but to no avail. Others, often more recent immigrants, attempt with extended family to limited success.

The human connection needs deep, long lived, land based cultural connection or else this effect exists as an occurrence of randomness. Yes, random to a large degree.

With the example of the "trans" this or that, it is merely that something shocking filled the greater problem: the void.

The problem of living in the void is that the mind fills in the details picking up snippets here and there which are most salient.

TV did this for a long while. I called it that the Internet at consumer access would displace it and do even more. Then we would have all sorts of deviations from norms from mild differing interests to truly weird stuff. Here is the scary part.

It is merely a matter of random event what people discover, and then if it is right to fill their hole, they run with it and integrate into their lives.

The brilliance in Karl's article is something that he forgets to mention, yes by accident.

Once people know that something is really not as common as held opinion suggests, a greater percentage will go elsewhere, and often to facts.

It is why so much degenerate and downright sick stuff got into once respectable people once AOL became a thing. They no longer had to turn to their neighbors and close contacts and ask, "Hey are you into Coprophagia," and get the mitigating response, perhaps getting said by not daring to even ask, but could get the void filled without all of the other cues. This is based a lot in the construct created by "The Matrix" movie series where the greater society fell in love with the concept that they could experience things without the consequences and then go back to life carrying none of the baggage of the mistake or the work to achieve experience of said and live with it.

This AI threat is based in the logical extension of the above where splinter groups can make their niche more real to them by making it more real to others, the ultimate example of imposition of ego through an artificially amplified narcissistic narrative. Narcissists live to impose their narrative on society. They are driven to do this. Now groups and their leaders can artificially do this and seem greater in number than they are in reality. It is not only too tempting but a logical extension of what they already do and have always done.

Now they have a tool, and it is up to us to remind them, the greater society, and ourselves that all, including groups, are small and should keep stuff to themselves, lest all be shown to be a tiny minority, often out-of-step with the greater society, and at least not as influential as might be wished or worthy of tolerance, including here.

"The Matrix" movies existed to tell people that they were more than they really were with much more to contribute than they could at the time.

The greater society is supposed to -- and people like me exist to -- tell them that they are not.

----------
"This guy is fantastically annoying to listen to, but he has some interesting info..." -- Rangeisshot April 26, 2023