By JJ

Article: Deepfake Explainer

Updated: Jan 19, 2021


Art courtesy of Zoé Pruitt, graphic designer for IEOMD newspaper

What is a deepfake?

Deepfake (a combination of "deep learning" and "fake") is a technique for creating simulated images and videos of people using artificial intelligence. It relies on machine learning, a method that allows computers to "learn" by example: an algorithm is trained on a large dataset (images or videos of the two people whose faces will be swapped in the final deepfake) until it builds a model of each face. With these models, the AI can start to reliably swap one face for the other.
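The "model for each of the two faces" is commonly built as an autoencoder with one shared encoder and a separate decoder per person, though the article does not name a specific architecture. The toy sketch below (in numpy, with random matrices standing in for the trained neural networks; all names are illustrative) shows only the data flow, not real learning:

```python
import numpy as np

rng = np.random.default_rng(0)

# In a real deepfake system these are deep convolutional networks trained
# on thousands of face crops; here they are just random linear maps so the
# swap step is visible.
FACE_DIM, LATENT_DIM = 64, 16

W_enc = rng.standard_normal((LATENT_DIM, FACE_DIM))    # shared encoder
W_dec_a = rng.standard_normal((FACE_DIM, LATENT_DIM))  # decoder: person A's face
W_dec_b = rng.standard_normal((FACE_DIM, LATENT_DIM))  # decoder: person B's face

def encode(face):
    """Compress a face into a shared code capturing pose and expression."""
    return W_enc @ face

def decode_as_b(latent):
    """Render that code with person B's facial features."""
    return W_dec_b @ latent

def swap_face(face_of_a):
    """The core swap: encode A's expression, decode it as B's face."""
    return decode_as_b(encode(face_of_a))

frame_face = rng.standard_normal(FACE_DIM)  # a face crop from one video frame
swapped = swap_face(frame_face)
print(swapped.shape)  # same size as the input face crop
```

Because the encoder is shared between both faces, the code it produces describes expression and pose rather than identity, which is what makes decoding it with the other person's decoder meaningful.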


To create the new deepfake video, the creator selects or films a source video containing the expressions and gestures needed for the final product. The AI then processes this source material frame by frame, using the model it built from the training data to extract the face in each frame and replace it with the intended target's face. In essence, the model reconstructs the source video "in the context" of the new face, producing a video with the facial features of the target but the expressions and words of the source video's face.
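The frame-by-frame pass described above can be sketched as a simple loop. Here `detect_face`, `swap_face`, and `paste_back` are hypothetical stand-ins for the face detector, the trained swap model, and the blending step; the toy demo uses strings in place of image frames:

```python
def convert_video(frames, detect_face, swap_face, paste_back):
    """Process source material frame by frame, swapping any detected face."""
    out = []
    for frame in frames:
        face = detect_face(frame)   # locate the face in this frame
        if face is None:            # no face found: keep the frame as-is
            out.append(frame)
            continue
        new_face = swap_face(face)  # replace it with the target's face
        out.append(paste_back(frame, new_face))  # blend back into the frame
    return out

# Toy demo: frames are strings, and "A" marks the source person's face.
frames = ["A smiles", "empty room", "A waves"]
result = convert_video(
    frames,
    detect_face=lambda f: "A" if "A" in f else None,
    swap_face=lambda face: "B",
    paste_back=lambda frame, face: frame.replace("A", face),
)
print(result)  # ['B smiles', 'empty room', 'B waves']
```

A real pipeline would also align each detected face and blend the swapped result at its edges, but the per-frame structure is the same.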


Politicians and celebrities are common targets, and not just because they are recognizable to most people: because they are such public figures, there is plenty of "training material" available for an AI to build its model of them.


Why are deepfakes a problem, and why should we care?

Most people are aware that images can be "photoshopped" and will usually regard a dubious photo with a critical eye, but that same skepticism is rarely applied to video. Until recently, video was difficult to edit without a team of experts, and even then the manipulated parts were usually easy to spot, so there was little reason to doubt the truth of a video that looked real.


With the advent of deepfakes, though, videos of politicians, celebrities, and others can be created or edited to say just about whatever someone wants. Many are still less than convincing and display obvious tells, but researchers are concerned that the systems used to make deepfakes are becoming quicker and easier to use. Publicly available software like FakeApp, Faceswap, and DeepFaceLab puts the technology in the hands of anyone interested in making a deepfake. The technology itself is also evolving at a rapid pace, and the videos are becoming more convincing over time, leading to a wary, almost fearful view of deepfakes in society.


The rise of deepfakes has revealed gaps in our knowledge of this technology, and some deeper divides surrounding it. There is a gap in how prepared we are to handle deepfakes: several states have introduced legislation to limit or ban their use in certain contexts, but there is not yet a widespread legal stance on the matter.

There is also a gap in awareness: while one person derides a deepfake video for its obvious tells, another eagerly shares the same video, pleased to have caught a certain politician in a compromising situation. Even an imperfect deepfake can fool someone who does not know that video can be manipulated in this way.

This is exacerbated by a divide in media trust: with the surge of "fake news" claims and politicians calling the validity of news sources into question, there is concern that deepfake technology could become ammunition for those who deny events even when video proof exists. With the 2020 election looming in the United States, there is growing fear that deepfakes could be weaponized politically and shift what we regard as "proof" and "reality".


In the age of deepfake, is “seeing” really “believing” anymore?
