Are AI Systems Infallible? Absolutely not!
By Betsy Burton
In recent weeks, there have been several news stories about AI systems that generated incorrect or biased results. This has led to a flurry of articles discussing mistakes and the fallibility of AI systems.
To those who have been following AI systems for years, the fact that there are issues and challenges with AI systems is not surprising at all.
However, for all the people who have rushed to adopt generative AI, the fact that these systems can make mistakes is catching them off guard.
New Technologies Evolve
This evolution is not surprising to those who follow any new technology.
As the Aragon Research Technology Arc illustrates, any new technology goes through an initial period of excitement, rapid adoption, and interest. That is, until people realize that their expectations are not in line with the reality of the technology.
This is exactly the reaction we are seeing with respect to AI systems.
AI Systems Are Intelligent… But Not Infallible.
Let’s remember what intelligence means; intelligence is “the ability to acquire, understand, and use knowledge” (The American Heritage® Dictionary of the English Language).
AI systems are “taught” or trained based on information that is fed into the AI system. They also acquire more information as they interact with humans and other systems.
AI systems are intelligent, but they are only as good as the information that they are being taught with or are learning through interactions.
AI Systems Will Learn Bias
AI systems are fed information that reflects the knowledge, beliefs, values, and biases of the person or company training them. They will also continue to learn over time from the humans and other systems they interact with.
There was a famous case of an early AI system, Microsoft's Tay chatbot, learning sexist and racist language while interacting with humans on social media.
Intelligence, by definition, is acquiring and using knowledge. AI systems will always be as fallible and imperfect as the information they are trained on and learn from. And, just like a human, an AI system will hold biases based on the information it is trained on or acquires.
And the person/company providing the AI system will likely train it to reflect the perspective that meets their market need.
AI Systems Are Not Fact Generators.
We have a hard enough time as humans agreeing on what a fact is. Anyone who has listened to a Supreme Court oral argument or followed a scientific/medical debate knows that even in the most scientific and legal fields there are disagreements on what a fact is.
And we certainly know in everyday life people can have wildly different views on what a fact is. If we as humans can’t agree on what a fact is, how do we expect an AI system to determine what a fact is?
The best that an AI system could do is to explain the positions and beliefs of different parties based on the context of their belief systems. AI systems may be able to generate a logical argument based on the information they have.
But just like a human, it cannot definitively say one moral argument wins over another. That’s why we have a judicial system, religious beliefs, and philosophies.
AI Systems Are New and Evolving.
Last but certainly not least, AI systems are, for all intents and purposes, brand new, certainly at the massive scale at which they are now being used.
Yes, we’ve been talking about AI systems for years. However, it’s only been within the last two years that AI systems have been available for people to really test on a larger scale.
And the reality is, OpenAI’s release of ChatGPT has forced the other major AI players in the marketplace to rush to release products, likely before they were completely ready.
And many of the organizations that are releasing these generative AI systems have not worked out all the implications or impacts of these systems.
CIOs or business leaders should consider an AI system today in their organization to be largely a beta release. This means you must be very careful about where it is deployed and how much dependency your organization is putting on AI systems.
Bottom Line
Business and IT leaders must define clear governance guardrails and restrictions on how AI systems should be used in their organization based on their tolerance for risk.
The results from AI systems should not be blindly accepted as fact; they must be reviewed like any other generated information. And leaders must determine which AI systems their users, and their customers, should use, choosing systems that reflect their brand, markets, and ideals.
It is not a surprise that some are becoming reactive to the issues and challenges with AI systems because they had, very likely, inflated expectations. Examples such as Google having to pause its AI image generation are part of an evolving technology; and it will continue evolving.
Just as human intelligence is constantly learning, evolving, and making mistakes, so too will AI systems. The risk is that we humans believe they are infallible and act solely or primarily on their outputs.
Get Ready to Transform With Us for Our First Transform Tour Stop of 2024!
Going LIVE on Thursday, March 14th, 2024 at 10 AM PT | 1 PM ET
Since the launch of OpenAI’s GPT service, organizations have been scrambling to put out their own copilot service.
In our March Transform Tour, taking place on Thursday, March 14, 2024, we will cover “The Battle of Edge vs. SaaS Computing” and “Putting CoPilots to Work in 2024 and 2025.”