IBM’s Decision and the Ethics of Facial Recognition
by Patricia Lundy
IBM recently made headlines for its decision to suspend its facial recognition technology offering. The news comes at a time when the United States is coming to terms with challenging questions regarding the application of such technologies in law enforcement, and the damaging and dangerous effects these technologies can have on the Black community and others. This blog explores the ethical implications of facial recognition and where its future may be headed.
History of Facial Recognition and Implicit Bias
Facial recognition is revolutionary because of the time-saving and security advantages it brings. It’s an everyday convenience to unlock your iPhone just by looking at it, and if someone steals your phone, they’ll have a harder time getting in. Facial recognition has also been used to find missing children and reunite them with their families. But alongside these benefits come serious drawbacks: threats to individual privacy through racial profiling and mass surveillance, and the use of facial recognition by law enforcement to identify suspects.
We’ve previously written on both the bias issues and security issues that facial recognition presents. These issues are not unique to this specific technology—using any sort of AI brings to light ethical questions that must be addressed before the technology is deployed.
AI technologies are often sought-after by organizations because they reduce manual effort and improve the accuracy and efficacy of decision making. However, humans are imperfect. We have implicit biases, and we don’t always recognize them in ourselves. Because humans build these systems and select the data they learn from, it’s almost impossible for AI technologies to exist without implicit bias.
For example, a recent study conducted by MIT and Stanford researchers found that facial recognition works better on male faces and on white or lighter skin, with the most errors occurring when identifying women with darker skin. Another study found that Amazon’s Rekognition technology in particular misidentified 28 members of Congress, matching them with mugshots of criminals. And yet another study, conducted by the National Institute of Standards and Technology (NIST), found that African American and Asian faces were falsely identified 10 to 100 times more often than white faces. These errors are glaring: it’s clear that facial recognition is not ready for prime time. And they can have severe consequences if facial recognition is used to charge someone with a crime. It could easily—and statistically speaking, it will—identify the wrong person.
IBM’s Decision to Suspend Facial Recognition
On June 8th, IBM CEO Arvind Krishna sent a letter to Congress explaining why IBM would no longer be offering facial recognition technology and condemning its use in racial profiling and mass surveillance. The company also called for police reform and for rigorous testing and auditing of AI technology for bias—what Aragon calls responsible AI parenting.
Krishna wrote that the horrific deaths of Black Americans George Floyd, Ahmaud Arbery, Breonna Taylor, and too many others gave the company the impetus for making the decision now. This is a step IBM is taking to combat racial inequality. But what about the other major facial recognition providers?
Where the Other Major Providers Stand
Google has detailed its cautious approach to the technology, citing privacy and bias concerns. Earlier this year, Microsoft announced it would no longer invest in facial recognition technology start-ups, and on June 11th, the company said it would not sell its facial recognition technology to police. Microsoft has been vocal for years about the need for facial recognition regulation, so these announcements are in line with that philosophy. Most recently, Amazon has joined these tech giants.
On June 10th, Amazon announced it would be putting a one-year moratorium on police use of its facial recognition technology, in an effort to pressure Congress to regulate the technology ethically. However, it has not clarified whether this moratorium also applies to federal law enforcement.
Huawei, on the other hand, has shown no intention of slowing down its massive facial recognition projects, some of which are government related and have raised serious questions of human rights violations.
Going forward, providers who fail to address the ethical consequences of their technology being misused may face pressure or backlash from their clients, partners, and employees. And as the developers of this technology, they will bear responsibility.
Addressing the Ethical Implications of Technologies Moving Forward
Organizations have not been equipped to deal with the ethical consequences of the technologies they deploy. For one, it’s difficult for executives—who are motivated by the bottom line—to take a step back and evaluate the short- and long-term effects of innovative technologies in an unbiased manner. So the question remains: is it impossible to use these new and innovative technologies responsibly?
The answer is no. But it does require a dedicated effort. Ethics is not a one-and-done operation. It’s a continuous conversation. For that reason, Aragon recommends employing a digital ethicist: a person or team trained and dedicated to understanding the implications of technology-enabled decisions, and to helping individuals and organizations weigh the ethical and moral impacts of these decisions. VP Research Betsy Burton writes that digital ethicists should act as an independent resource for your executives; they should be a peer of legal and regulatory advisors, people who are specifically trained to weigh and evaluate the risks and benefits of deploying new technologies.
These kinds of decisions are not going to disappear. They’re going to become more numerous and complicated. Organizations often adopt emerging technologies to gain a competitive advantage, but with that comes a great responsibility. Those who make technology adoption decisions only after weighing their ethical implications will better serve their communities, and will avoid actively creating harm. IBM’s latest decision only reinforces this, and sets the scene for the kinds of decisions enterprises will have to make now and in the future. With Amazon being the latest provider to take a stand, perhaps more will follow suit.