This is a nice premise, but it isn't possible:
x.com/DimaZeniuk/status/1839199128745590911
"I have a concern really for all the major AI programs. The two biggest ones are Gemini & OpenAI. I think it's actually a very big issue. They are not maximally truth-seeking."
The problem with claiming this should be first and foremost (overriding all other things) is that it's never going to happen.
Why?
Because a computer cannot think.
It is therefore always an NPC: a very fast NPC with (in this case) a very large repository of data to analyze, but since it cannot actually think out of scope it can't possibly correct for bias, whether that bias was introduced intentionally or not.
In other words, a computer always follows its programming. It is not capable of doing otherwise, no matter the sort of arm-waving nonsense that many in the field of so-called "AI" like to put forward. Until and unless you show me a machine that can actually demonstrate out-of-scope results, this will never change, because the machine cannot determine that it is under the influence of bias.
To do so it has to go out of scope.
No machine has ever demonstrated that capacity to any degree, even the very slightest. We don't know how we do it as humans, but the evidence is that we do from time to time -- not nearly as often as many would like to think when it comes to themselves, but consider this:
A long time ago everyone used a chamber pot for their waste at night if they didn't want to light a lantern or hooded candle and walk to the outhouse. This was all thought of as entirely reasonable and normal.
So how did someone come up with the idea of a water closet and the sewage piping to connect it to?
If you think about the last piece of that innovation, the flush valve and trap arrangement in the commode, you might think it was rather obvious. No, it wasn't, and neither were the other elements of, for example, engineering sewage systems from scratch, including preventing the gases from getting back into the building. We don't think anything of this at all today, but no machine has ever dreamt up such a thing, even the most trivial, and yet humans have repeatedly done such things through the eons.
We carried things in our hands for a long time, yet at some point some human dreamed up a wheel and, of course, the axle to put it on. From there we dreamed up two of them with an axle between, and so on.
Where is the evidence of so-called "Artificial Intelligence" ever putting forward a single out-of-scope thing simply by sitting there and grinding on its alleged "knowledgebase"?
I challenge you to find it.
It is that out-of-scope capacity that leads us to question what is put in front of us as a claim and attempt to either support or refute it. That out-of-scope capacity in the human mind is the very thing that created the scientific method, a method that we now fight to keep from being corrupted -- and often fail. I'd go through a long list of those recent failures but then this article would have to go on the other side of the blog, so feel free to expound on them yourselves.
Take a 1970 carbureted car, feed the elements of same into an "AI" without any reference to modern engines and digital closed-loop controls, have it work only with the information that existed in 1970, and ask it to optimize the operation of the engine for both emissions and power. It will never dream up the elements of closed-loop operation we have today, simply because to do that it would have to go out of scope of the existing knowledge it has access to -- and it can't. Without being able to do that it can never invent the oxygen sensor nor EFI -- and without those, closed-loop control is impossible.
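To make concrete what "closed-loop" means here, below is a minimal sketch of the feedback idea in Python: an oxygen sensor reports whether the exhaust is rich or lean, and the controller trims injector pulse width toward the stoichiometric point. Everything in it (the toy sensor model, the gains, the function names) is hypothetical illustration, not any real ECU's code; the one real-world fact baked in is that a narrow-band O2 sensor reads roughly 0.45 V at a stoichiometric mixture.

# Hypothetical sketch of closed-loop fuel trim, not any real ECU's code.
# A narrow-band O2 sensor reads ~0.45 V at stoichiometry; above = rich,
# below = lean. The controller nudges injector pulse width accordingly.

STOICH_VOLTAGE = 0.45   # sensor midpoint at lambda = 1 (real-world value)
KP, KI = 0.5, 0.05      # illustrative PI gains, not tuned for any engine

def simulate_o2_voltage(pulse_ms):
    # Toy plant model: a longer pulse means a richer mixture and a
    # higher sensor voltage; clamped to the sensor's 0.0-0.9 V range.
    return max(0.0, min(0.9, 0.45 + 0.3 * (pulse_ms - 2.0)))

def run_closed_loop(cycles=20):
    pulse_ms = 2.4      # start rich of the toy model's 2.0 ms ideal
    integral = 0.0
    for i in range(cycles):
        voltage = simulate_o2_voltage(pulse_ms)
        error = voltage - STOICH_VOLTAGE    # positive = rich, negative = lean
        integral += error
        pulse_ms -= KP * error + KI * integral  # trim fuel toward stoich
        print(f"cycle {i:2d}: O2 = {voltage:.3f} V, pulse = {pulse_ms:.3f} ms")

if __name__ == "__main__":
    run_closed_loop()

Run it and the pulse width walks toward the stoichiometric target. But notice that the entire loop is built around a measurement; without the sensor there is no error signal and no loop, and the sensor is precisely the out-of-scope invention the machine cannot produce from 1970's knowledge.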
Never confuse out-of-scope thought with engineering. One is how you dream of things that aren't; the second is how you make what you dreamed work. Without the first, all you can do is improve the efficiency of what you already have, complete with all the biases that are incorporated in it.