The Rise of AI Tests Our Economic and Ethical Boundaries
By Betsy Burton
Every week, we hear some new announcement from a major technology provider or research organization about emerging AI-empowered technology.
Azure AI Speech allows users to choose from a predefined avatar or upload footage of a real person they want to replicate. The software can then generate a video of that person seemingly speaking whatever text is fed to the system.
Criticism of Microsoft Azure AI Speech
After its announcement, Microsoft immediately came under criticism for this tool, particularly given its potential impact on political, social, and economic unrest. The company responded with attempts to clarify its approach toward what it calls ‘fairness’.
‘Regarding representational harms, such as stereotyping, demeaning, or erasing outputs, we acknowledge the risks associated with these issues. While our evaluation process aims to mitigate such risks, we encourage users to consider their specific use cases carefully and implement additional mitigations as appropriate. Having a human in the loop can provide an extra layer of oversight to address any potential biases or unintended consequences.’
It is critical to note that the company’s ‘fairness’ statement, and its proposed human involvement in content creation with Azure AI Speech, focus on limiting stereotyping, bias, and demeaning content created with its deepfake software. That is important, but it is certainly not the most significant issue.
Microsoft is not addressing the major short- and long-term issues relating to ethics, security, privacy, and humanism that will potentially have significant political, social, cultural, and economic implications.
It is important to note that Microsoft is not alone in this position; all the major AI platform providers (e.g., Google, IBM, Amazon, OpenAI, Meta, Salesforce, Oracle) face, and will increasingly face, the challenge of balancing ethics against the drive for greater profits.
AI Ethics and Capitalism
I recently read a searing article by Robert Reich, former US secretary of labor and professor of public policy at the University of California, Berkeley, titled ‘The frantic battle over OpenAI shows that money triumphs in the end’.
In his article, he uses the recent upheavals at OpenAI to highlight the tension or disconnect between organizations seeking to do research on AI and the drive toward increasing profits by releasing AI capabilities, with or without the means to manage, secure, control, and govern their usage.
Robert Reich’s bottom line is that in the hands of businesses, using AI to increase revenue will be the driving business model.
And while businesses may offer some tools and techniques to manage and govern unethical and nefarious uses of their products, we have already opened the AI Pandora’s box, and we continue to pry it open quickly, because of AI’s overwhelming potential to increase profits and competitiveness.
The Role of Government
The question then is, can governments help? The answer could be yes, but the execution of this is much more complicated.
There was an unrealistic proposal some months back calling for an AI ‘pause’. But challenges abound. AI development and deployment are worldwide; which governments would govern AI? What about the countries that don’t agree?
Global governments have enough difficulty agreeing on carbon emission levels and trade agreements; governing AI is exponentially more complex.
In addition, many in government will be reluctant to regulate AI usage for political purposes. Lastly, the ethical, security, and social impacts of AI are difficult enough for researchers who have studied AI for years to assess, let alone for government representatives and leaders with no training in AI or ethics.
The US administration recently released an executive order titled ‘Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence’.
However, this executive order does not solve all the ethical, privacy, security, and governance issues related to AI. It is the beginning of a broad conversation on the needed principles, policies, guidance, and management of AI systems.
Government action will be slow, especially at an international level, while the business drivers for AI will continue to be overwhelmingly rapid.
What Can Businesses Do?
Business executives and leaders must not subordinate their organization’s management, governance, and use of AI to the AI platform and technology providers.
The AI platform and technology providers are primarily focused on competition and revenue; each organization must therefore weigh its own business model, strategy, culture, competitive market, and brand when making decisions.
Business and technology leaders must also define security, privacy, and information usage policies and principles.
Business and technology leaders must determine their organization’s AI future.
What Can Individuals Do?
Individuals must understand the benefits and risks of leveraging AI technologies in terms of their potential impact on their privacy, security, and intellectual well-being. This will become increasingly difficult as generation after generation adopts AI technologies.
It is a slippery slope; how much are we as people willing to give up for the convenience and assistance AI technology can potentially provide?
Individuals must understand that technology and service providers are delivering AI as part of their business model and are not focused on protecting their users’ privacy, security, and well-being.
It will be very difficult, if not impossible, for Microsoft or any other AI platform provider (e.g., Google, IBM, Amazon, OpenAI, Meta, Salesforce, Oracle, etc.) to provide adequate tools, capabilities, and governance to manage the use of AI. Furthermore, it is not their primary focus or business model.
Governments may provide some guidance, as the US government has, but they will face enormous hurdles in establishing security, ethical, privacy, and governance regulations before AI use becomes pervasive, especially at an international level.
It is largely up to individual leaders within businesses, governments, communities, and homes to determine their use of AI technologies.
AI technologies are going to continue to evolve rapidly with significant benefits and risks. There will be some efforts to manage, watermark, control, or secure AI systems. However, for now, individuals must become better informed regarding the benefits, costs, and risks of AI to define their own risk/security/privacy tolerance.
Get Ready for 2024 with Aragon’s 2024 Q1 Research Agenda!
Aragon Research provides the strategic insights and advice you need to help your business navigate disruption and outperform your goals. Our research is designed to help you understand the technologies that will impact your business, using a number of trusted research methodologies proven to help organizations like yours get to business outcomes faster.
On Wednesday, January 17th, 2024, join Aragon Research CEO and Lead Analyst Jim Lundy for a complimentary webinar as he walks you through Aragon’s Q1 2024 research agenda.
This webinar will cover:
- Aragon’s coverage areas, analysts, and services
- Research offered by Aragon, including Visual Research
- The research agenda for Q1 2024
- Sneak peek at what’s coming in Q2 2024