Could Bias Make AI Dangerous to Our Future Decisions?

AI has been all the rage lately, with much of the attention driven by the parabolic rise in Nvidia’s stock price. But this isn’t pure hype: Nvidia’s earnings have also increased significantly. In other words, the interest in AI is real, as evidenced by firms investing in it. The hopes and benefits have been discussed for years, but now companies are putting their money where their mouths are. The technology shows a lot of promise, but it is still in its infancy. And one must consider not just the potential and upside, but what could go wrong. AI could be dangerous to our future.

AI – What Could Go Wrong

When people talk about AI, they often tout machine learning and the fact that it can take in information, analyze it, and produce output far faster than any human could. And since it is a program and not a human, it supposedly does all this without bias. At least that is the claim.

What we must always remember is that every AI program has a creator. It has a developer. And that developer, who is human, has biases. Some of those biases are conscious, some are unconscious. But if a programmer writes code that carries a bias (because it is the truth to that programmer), that AI program will never be unbiased. And if the current examples of AI are any indication, we should be very concerned about the role of bias in shaping AI, and about how people respond to AI output.
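To make this concrete, here is a minimal, hypothetical sketch of how a single line can bake a developer’s leaning into a system. The function, weights, and topic list are all invented for illustration; no real product’s code is being quoted.

    # Hypothetical sketch: one hard-coded line quietly skews a ranking model.
    PREFERRED_TOPICS = {"topic_a"}  # the developer's personal leaning

    def rank_article(relevance: float, topic: str) -> float:
        """Score an article for display; higher scores surface first."""
        score = relevance
        if topic in PREFERRED_TOPICS:
            score *= 1.25  # a quiet 25% boost the user never sees
        return score

A user sees only the rankings, never the multiplier, so the output looks like neutral “machine learning” even though one line tilted it.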

Gemini is a Joke

The latest example of bias influencing AI output, though not the only one, is Google’s Gemini. The racial bias was obvious and sickening. No matter where you stand on the political and philosophical spectrum, we must all agree that any AI built with bias (Woke, Right-Wing, etc.) is doomed to provide spurious outputs and therefore lead the “blind” down misguided paths. By the “blind” I mean those who will unquestioningly accept any AI output as truth simply because it is “AI,” “machine learning,” etc. Gemini/Google can no longer be trusted to create a durable and reliable AI program.

The biases in Gemini and other current-day AI programs are obvious. But what about the non-obvious ones? That is the real danger.

The Biggest Threat to AI

The biggest threat to AI is bias being coded into the program. Gemini and others allowed that bias to be seen quickly and easily. But what about intentional bias, coded deep within the program by people who want others to think their way, designed to reveal itself slowly and imperceptibly over time? Such a diabolical program could guide many people to draw incorrect conclusions based upon AI’s “unbiased, machine-learning output.”
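As a thought experiment, here is a hypothetical sketch of what such delayed bias could look like in code. The date, function, and nudge factor are all invented; the point is only that a skew can lie dormant until long after anyone is scrutinizing the code.

    from datetime import date

    # Hypothetical: the skew stays off until a future date, then switches on.
    ACTIVATION_DATE = date(2030, 1, 1)

    def score_answer(base_score: float, favors_agenda: bool) -> float:
        """Return a confidence score for a candidate answer."""
        if favors_agenda and date.today() >= ACTIVATION_DATE:
            return base_score * 1.10  # a small nudge, invisible per answer
        return base_score

A 10% nudge on any single answer is imperceptible, but compounded across millions of answers it can steadily steer what people believe.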

Put a few years between the programmer, the initial AI launch, and the output, and people will say that AI is smarter than us and is showing us the right way. They will say it is real truth, or true truth. When in reality it is simply revealing the delayed bias that was programmed deep within it years ago.

And what about unconscious bias? What happens when programmers unintentionally build bias into AI, simply because we humans carry many unconscious biases? It could have the same result: guiding us down an incorrect path, albeit not as diabolically as the AI programs created with intentional, hidden bias.
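Unconscious bias needs no hidden switch; it can ride in on ordinary choices, such as which data to train on. A minimal, hypothetical sketch, with all source names invented:

    # Hypothetical: the developer samples training data only from sources
    # they already read, with no intent to skew anything.
    FAMILIAR_SOURCES = {"site_i_read_daily.com", "my_favorite_forum.com"}

    def collect_training_data(all_documents: list[dict]) -> list[dict]:
        """Keep only documents from sources the developer already trusts."""
        return [doc for doc in all_documents
                if doc["source"] in FAMILIAR_SOURCES]
        # Everything outside the developer's bubble is silently dropped,
        # so the model inherits that bubble as its picture of the world.

No one decided to mislead anyone; the bias simply flowed from habit into the data, and from the data into the output.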

What Are We To Do?

As with any new technology or product, we should be skeptical. We should ask questions. We need transparency into how the AI was programmed. That may be difficult to obtain, as many developers will hide behind “proprietary coding.” Common sense may be our greatest asset as we determine whether a particular AI is helpful or harmful to our needs.

If the output confirms existing, immutable truths, then we may accept the results with some degree of confidence. However, if the output challenges past truths in the name of “new or more correct truth,” our skeptical sensors should be on high alert. We should consider the bias, both intentional and unintentional, that may be programmed within it. And we should adopt the viewpoint that we can choose to accept one AI and reject another. Not all AI will be created equal.

– JAY

(c)2024 Behavioral Finance Network