
EU Reaches a Provisional Agreement on AI Laws

By Betsy Burton

On December 8th, 2023, the European Union Parliament and member states reached a provisional agreement on comprehensive laws to regulate the use of artificial intelligence in commercial products and services.

While the details of this agreement are still being worked out, the laws would initially impact AI models trained using more than 10 septillion (10 trillion trillion, or 10^25) computing operations. However, the law could be extended to AI services/solutions that support over 10,000 business users.
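To put that compute threshold in perspective, the sketch below is a rough, purely illustrative calculation, not anything defined in the agreement itself. It assumes the commonly cited heuristic that training a large model costs roughly 6 × parameters × training tokens operations; the heuristic, the function name, and the example model size are illustrative assumptions, not figures from the EU text.

```python
# Illustrative only: rough check of a model's training compute against the
# 10^25-operation threshold cited in the provisional agreement.
# Assumes the common heuristic: training operations ~ 6 * parameters * training tokens.

EU_COMPUTE_THRESHOLD = 10**25  # operations cited for the initial scope


def estimated_training_ops(num_parameters: float, num_training_tokens: float) -> float:
    """Very rough estimate of total training operations (6*N*D heuristic)."""
    return 6 * num_parameters * num_training_tokens


# Hypothetical example: a 70-billion-parameter model trained on 2 trillion tokens.
ops = estimated_training_ops(70e9, 2e12)
print(f"Estimated training compute: {ops:.2e} operations")
print("Above EU threshold" if ops > EU_COMPUTE_THRESHOLD else "Below EU threshold")
```

By this rough measure, a model of that hypothetical size would still sit well below the threshold, which suggests the initial scope is aimed at only the very largest models.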

How is the European Union Approaching Regulating AI?

The European Union is taking a “risk analysis” approach toward regulating AI. This risk-based approach reflects the EU Parliament’s specific goals of protecting people’s safety and rights while respecting existing EU laws.

To support this approach, EU legislators have defined four levels of risk: unacceptable risk, which is prohibited; high risk, which is regulated; limited risk, which must demonstrate transparency; and minimal risk, which carries no obligations.

This is the first step by a major governing body toward regulating AI. Many implications of these laws are yet to be determined because the details have not been clearly defined.

This preliminary agreement still needs formal approval by the European Parliament and all of its member states. Several member states, including Germany and France, have voiced concerns about excessive regulation of AI and its potential impact on high-tech companies within the EU.

What are the Implications of the New European Union Laws on AI?

It is ambitious and commendable that the European Union is undertaking this effort. However, there are many challenges and issues, particularly in working within a global market and across the diversity of the European Union. 

While the new laws focus on banning harmful AI practices, namely those considered a clear threat to people’s safety, livelihoods, and rights (the unacceptable-risk category), how this will be implemented is less clear. For example, is it the responsibility of the AI service provider or of the user of AI to ensure that people’s safety, privacy, and rights are protected?

For instance, if someone uses a service such as Microsoft’s Azure Speech to create advertisements that are biased against or threatening to a particular group, is Microsoft held responsible, or is the user who created that advertisement?

Additionally, any solution that falls into the high-risk category and is supported by a company within the European Union must be registered in an EU-wide database, with compliance managed by a human within the European Commission. However, a company outside the European Union will only need to perform a self-assessment to use the official EU “CE” marking.

This could result in multiple scenarios where it becomes harder for EU-governed companies to compete in the global market and easier for non-EU governed companies to introduce higher-risk products into the European Union.

Why is the European Union Taking on AI?

There are various reasons why the EU is taking this aggressive risk-focused approach with respect to AI. 

One reason is the dramatic increase in the visibility, evolution, and adoption of AI technologies, including OpenAI’s ChatGPT, Microsoft’s Bing, and Google’s Bard.

Another equally important reason is that many EU parliament members and member states felt they were behind the curve in regulating social media and didn’t want to be in that position again with AI.

How Are the EU Laws Different from What the US Administration Recently Released?

The US administration released specific guidelines for how AI should be used within government agencies and by their partners. It also defined specific deliverables that government agencies must produce describing how AI is being used in different government-supported domains, such as education, transportation, and energy.

As we mentioned in previous blogs and research notes, the efforts by the US administration are focused on providing a framework for making decisions about AI.

Critics do not believe that the US executive order goes far enough in terms of regulating AI. The US has not defined specific laws regarding the use of AI beyond existing laws.

Bottom Line

It is commendable that the EU Parliament and member states have taken this first step to define some regulatory laws regarding the distribution and use of AI. However, these laws must be viewed as a starting point. 

The EU’s risk approach is more reactive (inside-out) than proactive (outside-in). The EU has defined walls or barriers to try to protect businesses from AI risks rather than defining opportunities and rewards for businesses that support AI in an ethical, secure, and low-risk way.

Rather than making it harder for organizations to deliver the valuable aspects of AI by focusing solely on reducing risk, governments might drive more innovation by giving businesses incentives to build AI solutions that genuinely prevent misuse or nefarious use.

The difference lies in addressing security, privacy, and ethical issues related to AI at the source rather than the endpoint.


Get Ready for 2024 with Aragon’s 2024 Q1 Research Agenda!

Wednesday, January 17th, 2024 at 10 AM PT | 1 PM ET

Aragon Research’s 2024 Q1 Agenda

Aragon Research provides the strategic insights and advice you need to help your business navigate disruption and outperform your goals. Our research is designed to help you understand the technologies that will impact your business–using a number of trusted research methodologies that have been proven to help organizations like yours get to business outcomes faster.

On Wednesday, January 17th, 2024, join Aragon Research CEO and Lead Analyst Jim Lundy for a complimentary webinar as he walks you through Aragon’s Q1 2024 research agenda.

This webinar will cover:

Register Here


 

>>Blog 1: What is AI Architecture?<<

>>Blog 2: AI Accelerates the Need for Integrative Architecture<<

>>Blog 3: Think Generative AI Adoption Is In The Future? Think Again!<<

>>Blog 4: How Leaders Effectively Govern AI with Principles<<

>>Blog 5: U.S. Executive Order on AI: A Framework We All Can Use<<

>>Blog 6: Will Generative AI Fundamentally Change How We (Humans) Interact with Information?<<

>>Blog 7: The Rise of AI Tests Our Economic and Ethical Boundaries<<
