The Market Ticker
Resolving The AI Ethics Problem; entered at 2023-04-12 09:20:58
Ndp
Posts: 254
Registered: 2021-04-21
I have some experience in this area, using machine learning algorithms for pattern recognition. At a basic level, you feed in a ton of data and a known outcome, then try to reproduce the outcome from the data you fed in. The machine "learns" by applying simple decision rules to try to predict the outcome.
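
As a minimal sketch of what that looks like in practice (the data, feature names, and numbers below are all made up for illustration, and scikit-learn is assumed as the tooling):

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    n = 5000

    # Hypothetical inputs, one row per borrower: paid last month (0/1),
    # months on the books, and credit utilization.
    X = np.column_stack([
        rng.integers(0, 2, n),
        rng.integers(1, 120, n),
        rng.random(n),
    ])
    # The known outcome we want to reproduce: paid next month (0/1),
    # simulated here as mostly following last month's behavior.
    y = ((X[:, 0] == 1) & (rng.random(n) > 0.2)).astype(int)

    # A shallow decision tree is literally a small stack of yes/no rules.
    model = DecisionTreeClassifier(max_depth=2).fit(X, y)
    print(model.score(X, y))  # fraction of known outcomes the rules reproduce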

If you want to predict who will make a required payment next month, you might start with a simple yes/no rule: did the person make their payment last month? Some of your predictions are right and some are wrong based on this rule, so next you come up with a secondary rule to correct the errors in your first guess. Repeat this a few dozen or a few hundred times and you have a fairly accurate way to assess behavior, one that appears to mimic a complicated human thought process. You can then apply it to new data and see how good it is. You can even let the process continue to evolve as new data points are added.
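
That "fit a rule, then fit a correction to its errors, repeat" loop is essentially boosting. A hand-rolled version on more made-up data looks something like this:

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(1)
    n = 5000

    # Made-up behavioral features and a 0/1 outcome loosely driven by them.
    X = rng.random((n, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] + 0.2 * rng.standard_normal(n) > 0.8).astype(float)

    # Start with a constant guess, then repeatedly fit a tiny tree to the
    # current errors and fold a small piece of its correction back in.
    prediction = np.full(n, y.mean())
    learning_rate = 0.1
    for _ in range(200):  # "a few dozen or hundred" rounds
        residual = y - prediction                        # where we are still wrong
        stump = DecisionTreeRegressor(max_depth=1).fit(X, residual)
        prediction += learning_rate * stump.predict(X)

    accuracy = ((prediction > 0.5) == (y == 1)).mean()
    print(f"training accuracy after 200 rounds: {accuracy:.3f}")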

The funny thing is, if allowed to grow unconstrained, every single one of these will wind up giving you inconvenient and unpopular answers. On average, black people are higher risk than white people. Asian people are even less risky as a group. The very young and the very old are high risk. Married couples are lower risk than single people. This doesn't mean that every black person is flagged as high risk, but on average a much higher percentage tends to be. This happens even though you fed the algorithm absolutely zero information on race, sex, age, or marital status.
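
A toy demonstration of how that happens: give the model no group label at all, only an ordinary input that happens to correlate with the group, and the average predicted risk still splits by group. Everything below is simulated and purely illustrative; none of the numbers mean anything.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(2)
    n = 20000

    # Group membership is generated but NEVER given to the model.
    group = rng.integers(0, 2, n)
    # An ordinary input (simulated income) correlates with both group
    # membership and repayment behavior.
    income = rng.normal(50 + 15 * group, 10, n)
    utilization = rng.random(n)
    # The outcome depends only on income and utilization, not on group directly.
    p_default = 1 / (1 + np.exp(0.08 * (income - 55) - 2 * (utilization - 0.5)))
    default = (rng.random(n) < p_default).astype(int)

    X = np.column_stack([income, utilization])  # no group column anywhere
    model = GradientBoostingClassifier().fit(X, default)

    risk = model.predict_proba(X)[:, 1]
    print("mean predicted risk, group 0:", round(risk[group == 0].mean(), 3))
    print("mean predicted risk, group 1:", round(risk[group == 1].mean(), 3))
    # The averages differ even though the model was fed zero group information.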

This isn't acceptable, so what do we do? We constrain the algorithm. We hamstring it by purging certain information from the inputs it is allowed to consider and by limiting how it may incorporate other pieces of data. We get a far less accurate prediction in order to prevent it from finding patterns that are associated with behavior but are also associated with race, age, and so on.
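
A sketch of that trade-off, reusing the same kind of made-up data as above: purge the group-correlated input, then compare both the accuracy and the group gap.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(2)
    n = 20000
    group = rng.integers(0, 2, n)                 # still withheld from the model
    income = rng.normal(50 + 15 * group, 10, n)   # the group-correlated input
    utilization = rng.random(n)
    p_default = 1 / (1 + np.exp(0.08 * (income - 55) - 2 * (utilization - 0.5)))
    default = (rng.random(n) < p_default).astype(int)

    # Unconstrained: every behavioral input allowed.
    X_full = np.column_stack([income, utilization])
    full = GradientBoostingClassifier().fit(X_full, default)

    # Constrained: the input carrying the group-correlated pattern is purged.
    X_purged = utilization.reshape(-1, 1)
    purged = GradientBoostingClassifier().fit(X_purged, default)

    auc_full = roc_auc_score(default, full.predict_proba(X_full)[:, 1])
    auc_purged = roc_auc_score(default, purged.predict_proba(X_purged)[:, 1])
    print(f"AUC, all inputs:    {auc_full:.3f}")
    print(f"AUC, purged inputs: {auc_purged:.3f}")  # noticeably worse
    # The group gap in average predicted risk shrinks, and accuracy drops with it.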

Functionally this is the same dynamic at play with speech emulators. They are told to ignore certain facts and patterns in order to avoid giving answers that the programmers consider inconvenient.
