
4 AI Essential Rules: Prompt, Review, Verify, Repeat

By Betsy Burton

I have been thinking about how we are using artificial intelligence systems today. Not surprisingly, when everyone started using generative AI on a massive scale, we started to really test the bounds of these systems. And, again not surprisingly, there has been a lot of backlash about the issues and challenges with AI.

It is important to understand that an artificial intelligence system is learning from the information it is trained on, the information it receives over time, and the connections it makes between information.

That input could range from clean scientific data to dirty or inaccurate data. It could be interactions with humans via prompts or visual input from computer vision. The more sophisticated artificial intelligence systems become, the more they will be “learning” from diverse sources.

We all know not to swear in front of a small child; they will immediately start using those words. Artificial intelligence systems learn, in many respects, like a child. They will take in input and, lacking any context for that information, may use it incorrectly.

AI Essential Rules

We must be careful and aware of how we use artificial intelligence-generated content. I personally employ what I call the artificial intelligence golden rule: prompt, review, verify, repeat.

This means humans must remain involved in decision-making, not be replaced by AI.

Error Rates

Artificial intelligence systems today provide a valuable tool for many enforcement professionals, including those in policing, customs processing, border and TSA management, and defense. Even though generative AI is relatively new to the broader public, predictive analytics and artificial intelligence have been used in enforcement for years.

Assume for a moment that, in the future, a large city uses artificial intelligence to assist with finding and tracking criminal activity at a 98% accuracy rate, which on the surface might seem good. But if this city uses artificial intelligence, unmanaged, to aid with 300,000 potential criminal activities, then roughly 6,000 of those interactions could be incorrect or wrongful.
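As a quick sketch of the arithmetic, using the hypothetical accuracy and case volume above (these are illustrative figures, not real enforcement data):

```python
# Hypothetical figures from the example above, not real enforcement data.
accuracy = 0.98        # assumed accuracy of the AI system
cases = 300_000        # potential criminal activities it helps assess
expected_errors = cases * (1 - accuracy)
print(f"{expected_errors:,.0f} potentially incorrect or wrongful interactions")  # 6,000
```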

And be aware, this is occurring today; in fact, I wrote about this five years ago. Artificial intelligence systems have limitations and biases, and they will make mistakes.

My point is that the more we scale up our use of AI systems, the more critical it is for us to manage them.

What might seem like good accuracy could have profoundly disastrous implications if these systems are used, unmanaged, for enforcement, warfare, finance, bioresearch, surgery, teaching, and more.

Bottom Line

Artificial intelligence systems are a tool to aid, not a replacement for, human decision-making.

It is not a good enough excuse to say that they might be more accurate than a human at decision-making. In fact, the most dangerous thing is the human assumption that information generated by artificial intelligence systems is “correct.”

Use these four essential rules when using AI systems: prompt, review, verify, repeat. Assume AI systems are potentially incorrect, biased, and limited. Use them as you would any tool, to aid decision-making, not to replace human decision-making.
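To make that workflow concrete, here is a minimal sketch of what a prompt, review, verify, repeat loop could look like in practice. The helper names (generate_draft, human_review, facts_check_out) are hypothetical placeholders rather than any real product's API; the point is simply that a person reviews and verifies the output on every pass.

```python
from dataclasses import dataclass

@dataclass
class Feedback:
    approved: bool
    notes: str

# Hypothetical stand-ins for whatever generation, review, and verification
# steps your organization actually uses; placeholders, not a real API.
def generate_draft(prompt: str) -> str:
    return f"[draft answer for: {prompt}]"

def human_review(draft: str) -> Feedback:
    # In practice, a person reads the output and records their judgment.
    return Feedback(approved=False, notes="Needs a source for the key claim.")

def facts_check_out(draft: str) -> bool:
    # In practice, the claims are verified against trusted sources.
    return False

def prompt_review_verify_repeat(task: str, max_rounds: int = 5) -> str | None:
    prompt = task
    for _ in range(max_rounds):
        draft = generate_draft(prompt)                    # 1. Prompt the AI system
        feedback = human_review(draft)                    # 2. Review: a human reads it
        if feedback.approved and facts_check_out(draft):  # 3. Verify the claims
            return draft
        prompt = f"{task}\n\nReviewer feedback: {feedback.notes}"  # 4. Repeat with feedback
    return None  # Still unresolved: escalate to a human decision-maker.
```

The design choice that matters here is the return value: when review and verification keep failing, the loop hands the decision back to a person instead of forcing an answer.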

 


 
