Can AI Help to Close the Dream Gap?
By Betsy Burton
For years, Aragon Research has promoted and advocated for women in technology through awards, panels, and speakers. During Women’s History Month, we are issuing a call for nominations for our 2024 Women in Technology and Innovation awards.
I would encourage everyone to nominate women and companies for Aragon Research’s awards, not just because many of you know amazing women and companies, but because doing so highlights heroes and role models for the next generation.
The Dream Gap
Mattel recently launched a project to raise awareness of the “dream gap.” The dream gap is the difference between what we expect from one group versus another, and thus what we teach and pass on to the next generation.
The National Down Syndrome Society recently released a powerful advertisement focused on implicit bias against people with Down syndrome. Their point is that we can unconsciously limit someone’s abilities by how we speak to and treat them.
The Dream Gap Applies to Different Groups
We can apply this dream gap concept to any religious, ethnic, social, or educational group, with varying degrees of harm. What unconscious or implicit biases are we applying toward people of color, people of different religions, genders, and social groups? Are we limiting or overemphasizing one group over another based on our implicit biases?
Yes, we are.
The reality is that, as humans, we all build up expectations and biases; it’s biology. Implicit bias is a result of the brain’s tendency to simplify the world: it is a faster, easier way for the brain to sort through all the data it takes in over a lifetime.
Does AI Help or Add to the Dream Gap?
Computers have traditionally been a neutral way of learning and interacting: a program either worked as intended or it didn’t, and an answer was either right or wrong.
However, AI systems are fed information that reflects the beliefs, values, and biases of the people and companies training them. They also learn over time from the humans and other systems they interact with. AI systems are going to pick up implicit and explicit bias both from the humans feeding them information and from the information they gather through interactions.
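To make that mechanism concrete, here is a minimal sketch, assuming a deliberately skewed toy dataset. The sentences, roles, and “model” below are invented for illustration and are not any vendor’s system; the point is that a simple association model reproduces exactly the skew in its training data.

```python
# A minimal, hypothetical sketch: a toy "model" that learns word
# associations from skewed example sentences, then reproduces that skew.
from collections import Counter

# Hypothetical training data with a built-in occupational skew.
training_sentences = [
    "the engineer fixed the server he wrote",
    "the engineer said he would deploy",
    "the nurse said she would help",
    "the nurse checked the chart she kept",
]

def pronoun_counts(sentences, role):
    """Count which pronouns co-occur with a role word in the training data."""
    counts = Counter()
    for sentence in sentences:
        words = sentence.split()
        if role in words:
            counts.update(w for w in words if w in ("he", "she"))
    return counts

# The "model" simply predicts the most frequent co-occurring pronoun.
for role in ("engineer", "nurse"):
    counts = pronoun_counts(training_sentences, role)
    prediction = counts.most_common(1)[0][0]
    print(f"{role}: learned association -> {prediction} (counts: {dict(counts)})")

# Output mirrors the skew in the data: engineer -> he, nurse -> she.
# Nothing in the code is "biased"; the bias arrives with the training data.
```

The same dynamic plays out, far less visibly, at the scale of real training corpora.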
In fact, as AI systems begin to use computer vision in edge devices and systems, they are likely to reflect those biases even more directly. An AI system will “see” who is asking for assistance and will likely learn to respond differently based on taught and learned implicit bias.
Google, for example, faced significant backlash when it overcorrected in trying to keep its AI system from producing biased results.
AI systems will be as correct, open, fallible, and imperfect as the humans and the information they are trained on. And, just like a human, an AI system will hold biases based on the information it is trained on or acquires over time.
Bottom Line
So, what can we do? As humans, we all have explicit and implicit biases. And just as we can pass these on to children, we will likely pass them on to AI systems.
First, recognize that we all have biases; that is part of our wiring. Then identify our specific biases and the issues that result from them. Next, educate ourselves and our teams about those biases, not with blame or shame, but by acknowledging our instincts. Then we can start to learn and train different behaviors, such as empathy, acceptance, and listening, for both humans and AI systems.
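One concrete starting point for the “identify our specific biases” step, applied to an AI system, is to audit its outputs. Below is a minimal sketch of a demographic-parity check; the decisions, group labels, and threshold are all hypothetical, and a real audit would require domain judgment about which metrics and cutoffs matter.

```python
# A minimal, hypothetical sketch of one way to surface bias in a system's
# outputs: compare outcome rates across groups (a demographic-parity check).
# The data and threshold below are invented for illustration.

# Hypothetical model decisions: (group, approved?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rates(records):
    """Compute the approval rate for each group."""
    totals, approvals = {}, {}
    for group, approved in records:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(f"Approval rates by group: {rates}")
print(f"Demographic-parity gap: {gap:.2f}")

# An arbitrary threshold for illustration; real audits need domain judgment.
if gap > 0.2:
    print("Gap exceeds threshold: review training data and decision process.")
```

Checks like this do not remove bias, but they make it visible, which is the precondition for doing anything about it.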
There will be mistakes; that is a given for both humans and AI systems. The key is recognizing that we all have biases and then making a conscious effort to avoid passing them on, or at least to limit the degree to which we do, to the next generation of humans and AI systems.