
4 Ethics Challenges for AI Governance

by Adam Pease and Betsy Burton

While the classic “I, Robot”-style horror story of an artificial intelligence gone berserk has been a media fixture for decades, the ethical challenges that surround AI are often more ordinary and subtle than the popular fear that machines will rise up against their masters.

To be sure, such concerns are important to keep in mind when dealing with technology that may eventually autonomously alter its own functioning, but the most pressing ethical concerns surrounding AI have to do with more immediate issues like prejudice, privacy, and security.

Enterprises must consider the implications of AI in their organizations. Factors such as cultural sensitivity and security are challenging to manage, and this is where digital ethics comes into play.

Four Ethical Challenges for Governing AI

Here are some of the problem areas that digital ethicists may face when adapting governance strategies to handle AI:

1. Learned Prejudice

As Safiya Noble argues in her book Algorithms of Oppression, the mathematical structures that AI-enabled systems depend upon can absorb and reproduce human prejudices. If machine learning systems are trained to regard different categories of people differently, they may become another engine of social inequality. The risk is that systems intended to treat human beings with neutrality and uniformity will begin to replicate the same identity-based biases that hold back and divide human beings. Digital ethicists should strive to understand how the data AI is trained on can embed prejudice, and then develop governance strategies to mitigate the harmful impact this can have on the humans working alongside the AI system.

2. Cultural Sensitivity

According to World Economic Forum research, the way workers and citizens respond to the inclusion of AI systems in their lives can depend on cultural context. The populations of China and England, for instance, are much more open to AI-enabled surveillance technologies than the populations of Germany and America. Digital ethicists will need to understand not only the technical details of AI-enabled systems, but also the cultural particularities of how workers will respond to them.

3. Privacy

One of the most immediate areas of ethical concern surrounding AI, especially with emerging machine vision technologies, is the human right to privacy. Workers may be uncomfortable having their faces scanned into a machine learning database, or having their biometric data measured by management. Digital ethicists will need to craft governance strategies that recognize the way their cultural and political communities have articulated this right—what its limits are and what kinds of protections it necessitates.

4. Security

As a recent Forbes article astutely illustrated, the very nature of contemporary artificial intelligence poses new security risks. Artificial intelligence systems could become self-modifying, introducing unexpected behaviors that might produce new vulnerabilities.

Additionally, many of the algorithms and code bases that power AI systems are open source, meaning they can be freely altered and appropriated by anyone. As a result, it can be difficult for developers to control for quality, and it is possible for malicious code to creep into an enterprise’s operation. An ethicist might address these issues by taking careful stock of an enterprise’s security standards and ensuring that new systems do not depart from them.

Integrate Ethics Into Governance

As the modern workplace continues to incorporate intelligent systems to augment the capabilities of the traditional workforce, firms will find themselves turning to experts such as digital ethicists to define their governance strategies. Governance must be understood as an integrated discipline that encompasses business, corporate, and IT processes.

While the apparent power of new AI-enabled systems provides a compelling case for installing them as soon as possible, it is important to consider the ethical ramifications of such technologies alongside the practical concerns that come with integrating them into an existing workplace context.

Bottom Line

Aragon recommends that firms integrate digital ethics as a key component of their business governance strategy when it comes to artificial intelligence. Not only does this represent a moral imperative, but it promises to set apart the firms that make privacy, security, and cultural sensitivity clearly held values.

A digital ethicist can help you get started. To learn more, download your complimentary ebook.
