So, having now been part of a keynote panel discussion on AI Ethics, which I found quite interesting and thought-provoking (and I hope I widened some perspectives among attendees), I'd like to put forward and amplify a few points I've made in the past.
There are a lot of people predicting "the end of humanity" with AI. Meh.
There is a fundamental misunderstanding of how a brain works versus how a computer works: they do not map onto one another, and the latter is likely unable to map to the former.
Contemplate this next time you're in your car driving.
Look at the vehicle in front of you and read its license plate. Easy, right? Now, without moving your head or eyes, read the license plate -- or even the make and model -- of the vehicle to your left or right. You can't.
That's maybe 10-15 degrees off-axis, but while you know something is there, you have no central, focused vision of it.
The "trees" you "see" in detail while driving you do not actually see. Your brain extrapolates what it knows is a tree into the "details" you "see," but you are not actually seeing them at the time. If you have a dashcam in that car, every frame captures the entire field of view in focus and instantly readable.
Yet despite visual acuity that is clearly and objectively wildly superior to a human's, if you hook a computer into that camera (plus one pointing out the back, or in place of your mirrors), no computer has demonstrated the ability to safely operate said vehicle, and none is sold as safe to do so; thus you can't climb into the back seat with a six-pack and drink it while the car drives itself to your destination.
Musk promised he could. He was wrong.
Again: The machine has much better visual data available to it than you do, yet you outperform it easily.
Why?
Because what you think is easy is almost (or maybe entirely) impossible for a machine, yet what you think you "know" but are almost always wrong about is trivially easy for a computer. For example, your odds of being killed in a car wreck in any given year are about 1 in 8,000. If you were in good health, your odds of a recent viral outbreak killing you were much lower than that, yet you freaked out about the virus while driving to the grocery store every week without a care in the world. (On the other hand, if you were already ill, your risk was wildly higher.) If you had fed your medical conditions, if any, into a computer over the last three years, it could have spit out an exact comparison -- and an accurate one -- within seconds as to whether the virus or getting groceries was more dangerous.
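As a concrete illustration, here's a minimal sketch of that comparison. The crash figure is the rough annual one cited above; the infection and fatality rates are made-up placeholders standing in for whatever your actual health profile would yield:

```python
# Minimal sketch: comparing two annualized risks the way a machine would.
# The infection and fatality figures below are illustrative assumptions,
# not real epidemiology.

ANNUAL_CRASH_DEATH_ODDS = 1 / 8_000   # rough US per-year figure cited above

def viral_death_odds(infection_prob: float, fatality_rate: float) -> float:
    """Annual odds of the virus killing you: chance of catching it in a
    year times the infection-fatality rate for your health profile."""
    return infection_prob * fatality_rate

# Hypothetical healthy-adult profile: 20% chance of infection in a year,
# 0.01% infection-fatality rate.
healthy = viral_death_odds(0.20, 0.0001)

# Hypothetical already-ill profile: same exposure, 5% fatality rate.
high_risk = viral_death_odds(0.20, 0.05)

for label, risk in [("healthy adult", healthy), ("high-risk patient", high_risk)]:
    ratio = risk / ANNUAL_CRASH_DEATH_ODDS
    print(f"{label}: virus is {ratio:.1f}x the annual driving risk")
```

With those placeholder numbers the healthy adult's viral risk comes out at a fraction of the driving risk while the ill patient's is many times it -- exactly the split described above, computed in milliseconds.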
This in turn means that trying to apply "human" views of how constraints will work to machines is very likely to be at best ineffective and at worst backward, doing the opposite of what you think it will do!
How about impacts on the job market?
Well, if you're a "coder" and your skill set is picking up pieces from StackOverflow and assembling them -- you're screwed. The machine is faster than you are, so your value is now whatever the machine costs to build and run. But if you're someone like me -- someone who writes code I have no fear an AI can replicate -- then it likely boosts my capacity to demand a higher hourly rate!
Why?
Because when I need to check a set of API calls, it takes me a couple of minutes to scroll through the relevant documentation. The AI can do that in 10 seconds, which means I write more code in less time, and that in turn means that where I might have charged someone $500/hr to do work for them, it's now $600 because I produce more in less time.
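To put rough numbers on that, here's a back-of-envelope sketch; the one assumption (mine, for illustration) is that about a fifth of a working hour goes to documentation lookups:

```python
# Back-of-envelope sketch of the rate math above. The 20%/80% split between
# documentation lookups and actual coding is an assumed figure.

lookup_fraction = 0.20          # assumed share of an hour spent in the docs
speedup = 120 / 10              # a 2-minute lookup shrinks to ~10 seconds

# Time needed per unit of output, before and after the AI assist.
before = lookup_fraction + (1 - lookup_fraction)
after = lookup_fraction / speedup + (1 - lookup_fraction)

productivity_gain = before / after
print(f"output per hour rises ~{(productivity_gain - 1) * 100:.0f}%")

# The hourly rate that holds the client's cost per unit of output constant:
print(f"equivalent hourly rate: ${500 * productivity_gain:.0f}/hr")
```

That lands right around the $600 figure above: the client pays more per hour but no more per unit of delivered work.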
If we can pry open the data sets, then democratization of AI -- which will happen as the chipsets get better and the storage required for the training set shrinks as a consequence -- will make it increasingly impossible for various bad actors to hide the evidence of their actions. Someone -- basically any someone, no matter their personal skill level -- can suck that data in and analyze it whether the bad guy likes it or not, in seconds or minutes. Oops.
Contemplate the last three years. Now contemplate what happens if AI is on millions of people's desktop computers. I found papers from a nursing home in Spain and a palliative care hospital in North Carolina. Both were extraordinarily important, and both got zero attention from those in the "formal establishment." Never mind the pre-prints on the spike protein in September and December of 2020. All were out there in time to make a difference, but until people found them they were irrelevant, and not enough people found them and could raise a stink. With thousands of AIs tasked with scanning the medical paper databases on a daily basis, those papers would have been found instantly when published and analyzed rapidly by thousands of people, rather than being manually analyzed by a few such as myself, in some cases months later. The outcome would have been wildly different -- and better -- than it was.
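What would such a watcher look like? Here's a minimal sketch using arXiv's public Atom API as a stand-in, since its interface is well known (the medical preprint servers expose similar feeds); the keyword list and the flagging logic are placeholders for whatever model does the real analysis:

```python
# Minimal sketch of the "thousands of AIs watching the preprint feeds" idea.
# arXiv's Atom API stands in for a medical preprint feed; the keywords and
# the "flag" step are placeholders for an actual analysis model.
import urllib.request
import urllib.parse
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def fetch_recent(query: str, max_results: int = 20):
    """Yield (title, abstract) for the newest papers matching the query."""
    params = urllib.parse.urlencode({
        "search_query": f"all:{query}",
        "sortBy": "submittedDate",
        "sortOrder": "descending",
        "max_results": max_results,
    })
    with urllib.request.urlopen(f"http://export.arxiv.org/api/query?{params}") as r:
        root = ET.parse(r).getroot()
    for entry in root.iter(f"{ATOM}entry"):
        yield (entry.findtext(f"{ATOM}title", "").strip(),
               entry.findtext(f"{ATOM}summary", "").strip())

KEYWORDS = ("spike protein", "palliative", "nursing home")  # placeholder triggers

# Run this on a timer (cron or similar) and nothing relevant slips by unseen.
for title, abstract in fetch_recent("spike protein"):
    text = f"{title} {abstract}".lower()
    if any(k in text for k in KEYWORDS):
        print("FLAGGED:", title)
```

A few dozen lines on a timer is the entire cost of never missing a publication again; the hard part was never the watching, it was that nobody was watching.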
Once democratization happens that sort of "hide the football" game ends whether the so-called "powers that be" like it or not.
This is true even if data access doesn't get easier -- and it probably won't, at least not voluntarily. For example, how come de-identified data from CMS isn't public? We pay for it via our taxes, so why can't we see it? No names, and there's a high enough density that identifying individual people would be basically impossible, particularly if you remove facility names and addresses, or even general locations. But we sure could figure out whether there was a correlation between "things one is given medically" and "bad things that happen, as evidenced by diagnoses after the things are given," couldn't we? Indeed. This would be a large task for a human, sucking said data into a traditional database and then manipulating it. For an AI it would take minutes. When said AI costs nothing and runs on everyone's desktop, the battle becomes singular: get the data and you've got the answer.
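For the flavor of it, here's a sketch of that first pass against a hypothetical de-identified extract; the file name and every column name are invented, since the real CMS layout isn't public:

```python
# Sketch of the drug-vs-later-diagnosis correlation pass described above,
# run against a hypothetical de-identified claims extract. Columns
# ("patient_id", "event_type", "code", "event_date") are invented.
import pandas as pd

claims = pd.read_csv("deidentified_claims.csv",   # hypothetical file
                     parse_dates=["event_date"])

drugs = claims[claims.event_type == "drug"]
diagnoses = claims[claims.event_type == "diagnosis"]

# Pair each administered drug with every diagnosis the same patient
# received afterward.
pairs = drugs.merge(diagnoses, on="patient_id", suffixes=("_drug", "_dx"))
pairs = pairs[pairs.event_date_dx > pairs.event_date_drug]

# Count how often each (drug, later-diagnosis) combination shows up.
counts = (pairs.groupby(["code_drug", "code_dx"])
               .size()
               .sort_values(ascending=False))
print(counts.head(20))
```

A naive co-occurrence count like this is not a controlled study -- base rates and confounders would need handling before drawing conclusions -- but it shows how little machinery the first look requires once you have the data.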
Democratization will happen simply because that's what technology does over time. Cars used to be an "elite" thing. Then Henry Ford showed up and... oops. Now every Jack could have and drive one. How about calculators? When I was young, my father had one of the Monroe "portable" ones -- the first common one with a battery. It wasn't long before you could get them for a couple of bucks. How about computers? They were expensive. Today they're laughably cheap on a comparable-capability basis. This will happen with AI too, and as it does, hiding analytical processes from ordinary people will become increasingly impossible.
So what was that about AI being "bad" in the general sense?
Remember, folks: for every "yin" there is almost always a "yang."