Aragon Research

Computer Vision Provides New Tools to Accurately Detect Deepfakes

by Adam Pease

Recent advances in computer science have paved the way for a detection system that can reliably identify images in which human faces have been manipulated to appear like different people—colloquially known as deepfakes. This blog discusses the news and what it means for the AI market overall.

What Are Deepfakes?

Deepfakes are images or videos in which AI has been used to manipulate preexisting footage, replacing the face or body of one person with the face or body of another. In some cases, the manipulations can appear very convincing, potentially opening the door for people to produce believable videos of events that never actually happened. 

Some examples of deepfaking, such as a Russian company paying Bruce Willis to license a deepfake of his likeness for an advertisement, seem harmless. However, there are many scenarios where deepfakes could lead to dangerous misunderstandings. One can imagine how destructive the wrong deepfake could be in a military confrontation—a leader could be convincingly made to appear as if they were declaring war when no such footage really existed. The risk of this kind of miscalculation, along with other uses by bad actors, means that there needs to be a system in place for reliably detecting deepfakes if we hope to keep our information ecosystems honest and reliable.

How AI Can Help Fix the Problems It Created

Ultimately, the proliferation of deepfakes is a result of the deliberate misuse of rapid, open-source advancements in AI that make it possible for everyday users to create false content. And while some have seized upon deepfakes to warn against the rise of AI, it turns out AI will also be the tool that stops deepfakes from getting out of control.

Researchers at UC Riverside recently published work on using a computer vision algorithm to detect deepfake images. While existing systems have already shown promise in discerning deepfaked videos, the technology for creating false content is constantly improving, and one blind spot among current countermeasures is reliably detecting when facial expressions have been changed even though the underlying face is left the same. The Riverside team found that their deep neural network could detect a changed expression with 99% accuracy, making it the most effective tool for detecting deepfake images to date.
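At its core, the approach described above frames deepfake detection as a binary classification problem: a model looks at features extracted from a face and outputs a probability that the expression has been manipulated. The sketch below illustrates only that framing, not the researchers' actual method—their system is a deep neural network trained on real facial imagery, whereas this toy uses a hand-rolled logistic-regression classifier on synthetic "expression consistency" feature vectors invented purely for demonstration.

```python
import math
import random

random.seed(0)

# Synthetic stand-in data: authentic faces cluster near 0 and
# expression-manipulated faces near 1.5 along four hypothetical
# feature dimensions. These features are invented for illustration.
D = 4

def make_point(mean):
    return [random.gauss(mean, 1.0) for _ in range(D)]

X = [make_point(0.0) for _ in range(200)] + [make_point(1.5) for _ in range(200)]
y = [0.0] * 200 + [1.0] * 200  # 0 = authentic, 1 = manipulated

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, xi):
    """P(manipulated) for one feature vector."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)

# Train logistic regression with plain batch gradient descent.
w, b, lr = [0.0] * D, 0.0, 0.1
for _ in range(500):
    grad_w, grad_b = [0.0] * D, 0.0
    for xi, yi in zip(X, y):
        err = predict(w, b, xi) - yi
        for j in range(D):
            grad_w[j] += err * xi[j]
        grad_b += err
    for j in range(D):
        w[j] -= lr * grad_w[j] / len(y)
    b -= lr * grad_b / len(y)

# Classify each sample: probability above 0.5 means "manipulated".
correct = sum(
    (predict(w, b, xi) > 0.5) == (yi == 1.0)
    for xi, yi in zip(X, y)
)
accuracy = correct / len(y)
print(f"training accuracy: {accuracy:.2f}")
```

On this deliberately easy synthetic data the classifier separates the two clusters well; the real difficulty in deepfake detection lies in learning features that survive the constantly improving generation techniques, which is where the deep network comes in.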

Bottom Line

While AI naysayers have been quick to point to deepfakes as a terrifying consequence of technological innovation, it seems AI may hold the solution to the very problems it creates. Stories like this show that technological developments are two-sided, and that for every tool bad actors get hold of, there are countermeasures that stem from the same seedbed of innovation.
