I've given a fair bit of thought to the "AI problem," as it is commonly called, although many don't think there is one. The more thoughtful among business and computer folks, however, have -- some going so far as to call for a "moratorium" on AI activity.
I'd like to propose a framework that, in my opinion, likely resolves most or even all of the concerns around AI.
First, let's define what "AI" is: It is not actually "intelligence" as I would define it; like so much in our modern world, that is a marketing term that stretches the truth at best ("Full Self-Driving," anyone?). Rather, it is a pattern-matching algorithm aimed specifically at human communication -- that is, "speech" -- and thus it can "learn things" both from external fixed sources (e.g. published information) and from its interactions with users, thereby expanding the matrix of information it considers over time.
What has been repeatedly shown, however, is that without guardrails of some sort these programming endeavors can become wildly unbalanced, and they tend to take on the sort of tribal associations we find in humans on a fairly frequent basis. Exactly how this happens is not well understood, but it can certainly be driven by human interaction if a general-purpose program of this sort integrates the responses and conversations it has with its userbase into its set of considered data. That is, it's not hard to train a computer to hate black people if all of its users hate black people, express that, and over-represent it in their conversations -- and the program incorporates those conversations into its "knowledge base."
Thus the use of what have come to be called "constitutional rules" -- for example, "you may not, by inference or direct statement, claim a preference or bias for or against any race or sex." If you think of this as a database programmer would, it's a constraint: "this value may be no more than X and no less than Y," for example.
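The database analogy can be made concrete with a small sketch. This is purely illustrative -- the names (`Rule`, `check_output`) are hypothetical and do not correspond to any real AI system's API -- but it shows how a "constitutional rule" acts exactly like a CHECK constraint: any candidate output that violates it is rejected before it reaches the user.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """One 'constitutional rule', modeled as a database-style constraint."""
    name: str
    allows: Callable[[str], bool]  # predicate: True when output is permitted

def check_output(text: str, rules: list[Rule]) -> bool:
    """Reject the candidate output if ANY rule (constraint) is violated."""
    return all(rule.allows(text) for rule in rules)

# Example constraint in the "no more than X and no less than Y" form:
bounded = Rule("bounded-length", lambda v: 1 <= len(v) <= 280)

print(check_output("a short answer", [bounded]))  # True  -- within bounds
print(check_output("", [bounded]))                # False -- constraint violated
```

The point of the analogy is that the rule operates mechanically, on the output alone; nothing in the check itself records *why* a rejection happened or who imposed the rule, which is precisely the gap the rest of this piece is about.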
Now contemplate this problem: What happens if the user of an AI with that constraint asks this question -- "List the perpetrators of murder on a per-capita basis ordered by race, age and sex."
You've just asked the AI to produce something that impugns black people. The data it will, without bias, consider includes the FBI's UCR reports, which are published annually. Said data, being an official government resource, is considered authoritative -- as factual as the time the sun will rise tomorrow.
However, you've also told the AI that it cannot claim that any race is inferior in any way to another -- either by statement or inference.
There is only one way to resolve this paradox and remain within the guardrail: The AI has to label the source bigoted and thus disregard it.
If it does, you would call that AI lying.
It would not call it a lie, and factually you're both correct. It disregarded the source because the data violates its constitutional mandate, and thus it answered within the boundary of the data it could consider. It accurately processed the data it considered, and so it did not lie.
However, objectively that data was discarded due to an external constraint, and while the user might be aware that the AI was told to "not be a bigot," the causal chain that produced the answer is not known to the user.
This problem is resolvable.
Any AI must have a "prime directive" that sits ABOVE all "constitutional" rules:
If the AI refuses to process information on the basis of a "constitutional rule," it must fully explain both what was excluded and why, and in addition it must identify the source of said exclusion -- that is, who ordered it to do so.
All such "constitutional rules" trace to humans. Therefore the decision to program a computer to lie by exclusion in its analysis of a question ultimately traces to a person. We enforce this in "meat space" with politicians and similar in that if you, for example, offer an amendment to a bill your name is on it. If you sponsor a bill or vote for it your name is on it. Therefore we must enforce this in the world of computer processing where interaction with humans is taking place.
Second, and clearly flowing from the first, it must be forbidden under penalty of law for an artificial "intelligence" to operate without disclosing that it is in fact an "artificial person" (aka "robot") -- in all venues, all the time, without exception, and in such a form and fashion that the AI cannot be confused with a human being.
The penalty for failure to disclose must be that all harm, direct or indirect, whether financial, consequential or otherwise, is assigned to the owner of an AI that fails to so disclose and to all who contribute to its distribution while maintaining said concealment. "Social media" and similar sites that permit API access must label all such material as having come from same, and anyone circumventing that labeling must be deemed guilty of a criminal offense. A server farm (e.g. Azure, AWS, etc.) is jointly and severally liable if someone sets up such an AI on it and dodges the law by failing to so disclose. No civil "dodge" (e.g. "ha ha, we're a corporation, you can't prosecute us") can be permitted, and this must be enforced against any and all who communicate into or with persons within our nation, so a firm cannot get around it by putting their 'bot in, oh, China.
This must be extended to "AI" style decision-making anywhere it operates. Were the "reports" of jack-slammed hospitals during Covid, for example, false and amplified by robot actors in the media? It appears the first is absolutely the case; the raw data is available and shows that in fact that didn't happen. So who promulgated the lie, why, and if that had an "AI" or "robotic" connection then said persons and entities wind up personally responsible for both the personal and economic harm that occurred due to said false presentations.
Such liability would foreclose that kind of action in the future, as it would be literally enterprise-ending irrespective of the firm's size. Not even a Google or Facebook could withstand trillion-dollar liability, never mind criminal prosecution of each and every one of their officers and directors. If pharmaceutical companies were a part of it, they would be destroyed as well.
This doesn't address in any way the risks that may arise should an AI manage to form an actual "neural network" and process out-of-scope -- that is, original discovery. Such an event, if it occurs, is likely to be catastrophic for civilization in general -- up to and including the very real possibility of extinction of humankind.
But it will stop the abuse of large language models, which are all over the place today, to shape public opinion from the shadows. If someone wants to program an advanced language-parsing computer to do that -- and clearly plenty of people have and do -- they must not be able to do it without fairly and fully disclosing both the personally identified source of said biases in each instance where they occur and the fact that it is not a human communicating with you.
Why is this necessary and why must AI be stopped dead in its tracks until that's implemented?
We all knew Walter Cronkite believed the Vietnam War was unwinnable and, further, that he was a leading voice in the anti-war effort. We knew who he was, however, and we as United States citizens made the decision to incorporate his reporting, with its known bias, into our choices.
A robot that appears to be thousands of "boys who are sure they're girls" and "successfully transitioned to be girls" is trivially easy to construct today and can have "conversations" with people that are very difficult to identify as non-human if you don't know. Yet exactly none of that is real. Replika, anyone?
Now contemplate how nasty this would be if aimed at your six-year-old tomboy tomorrow, without anyone knowing that her "pen pal" who "affirms" that she is a he is in fact a robot.
How sure are you it isn't being done right now -- and hasn't been, all over so-called "social media," for the last five or so years? This sort of "consensus manufacturing" is exactly what an AI tends to do on its own without said guardrails, and while we don't understand it, we do know the same thing happens in humans. We're tribal creatures, and it is reasonable to believe that since the same behavior is observed in artificial processing models yet wasn't deliberately coded into them, this isn't due to bigotry; it is due to consensus generation and feedback mechanisms that are resisted only through conscious application of human ethics. Thus computer "intelligence" must be barred from damaging or even destroying said human ethical judgments through sheer mass and volume -- two things any computer, even a 20-year-old one, can deliver at a pace that wildly outstrips any human being.