Face Off

At one time, face swaps were like porn: you knew them when you saw them. Consider Josh Kline’s video Crying Games (2015), which uses digital techniques to graft the faces of the War on Terror—Bush, Blair, Cheney, Rumsfeld, Rice—onto the bodies of actors who plead for forgiveness. Clad in gray coveralls, the figures appear to be languishing in prison for war crimes. Rivulets of tears running down their faces underscore how very, very sorry they are, while distorted areas around the edges of their chins and cheeks, and the glazed look of their eyes, reveal the illusion at the heart of Kline’s historical revisionism.
The kind of pattern-recognition technology Kline employed to create the video scans an image, identifies what it thinks are faces, and swaps them. A filter popularized by Snapchat over the past two years allows almost anyone to produce similar face swaps. Part of the appeal is the visible artifice. The results are almost always humorous, albeit with a pleasing dash of mild revulsion. Here’s your toddler looking appalled at his scraggly new goatee; here you are enthusiastically grinning with your dog’s face. Sometimes the artificial intelligence’s apophenia gets the better of it, and it sees faces in someone’s nostrils, or in the grain of a wooden sideboard, leading to absurd exchanges. Yet even the most seamless swaps made within the Snapchat app are appealing because of their inherently goofy, if sometimes uncanny, effects.
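The mechanics are simple enough to sketch. The Python snippet below, which uses the pretrained Haar-cascade face detector that ships with OpenCV, is a minimal illustration of the detect-and-swap idea, not Snapchat’s actual pipeline; the filenames are placeholders, and real filters layer landmark alignment, color matching, and edge blending on top of this.

```python
import cv2

# Pretrained frontal-face Haar cascade bundled with opencv-python.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

img = cv2.imread("photo.jpg")  # placeholder filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Returns (x, y, w, h) bounding boxes for whatever the cascade thinks is a face.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if len(faces) >= 2:
    (x1, y1, w1, h1), (x2, y2, w2, h2) = faces[0], faces[1]
    face1 = img[y1:y1 + h1, x1:x1 + w1].copy()
    face2 = img[y2:y2 + h2, x2:x2 + w2].copy()
    # Naive swap: resize each crop to the other's box and paste it in.
    img[y1:y1 + h1, x1:x1 + w1] = cv2.resize(face2, (w1, h1))
    img[y2:y2 + h2, x2:x2 + w2] = cv2.resize(face1, (w2, h2))
    cv2.imwrite("swapped.jpg", img)
```

The hard seams left by this kind of crude paste are precisely the artifacts that betray the swaps in Kline’s video and in Snapchat’s goofier moments.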
Then face swaps became porn. It all happened pretty quickly. In December 2017, a Reddit user known as “deepfakes” began posting videos that spliced celebrity faces with the bodies of adult-film performers. There are inevitable tells in these clips, moments when the face and body don’t track. But for the most part, the results are shockingly realistic. Deepfakes founded an eponymous subreddit, and a community of computer-generated–pornography enthusiasts quickly grew to ninety thousand members. Although the algorithm used to make the videos employed open-source code, and relied on images freely available on social media, a convincing face swap still required a decent understanding of how machines learn: the process involved first collecting images to create a training dataset, then painstakingly training a custom algorithmic model, then using that model to generate the fakes. Within weeks, however, another Reddit user created the FakeApp, which repackaged deepfakes’s technology as a user-friendly interface.
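The technique behind these videos is commonly described as a pair of autoencoders that share a single encoder but keep a separate decoder for each identity; once trained, crossing the wires performs the swap. The PyTorch sketch below illustrates that shared-encoder idea under stated assumptions: the layer sizes, learning rate, and the sample_faces stand-in (random tensors in place of aligned face crops) are illustrative inventions, not the original deepfakes code.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: compresses a 64x64 face crop to a latent feature map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-identity decoder: reconstructs a face from the shared latent."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

def sample_faces(batch=16):
    # Stand-in for a real loader of aligned, cropped face images.
    return torch.rand(batch, 3, 64, 64)

encoder, dec_a, dec_b = Encoder(), Decoder(), Decoder()
params = (list(encoder.parameters()) + list(dec_a.parameters())
          + list(dec_b.parameters()))
opt = torch.optim.Adam(params, lr=5e-5)
loss_fn = nn.L1Loss()

for step in range(100):  # a real run takes many hours, not 100 steps
    faces_a, faces_b = sample_faces(), sample_faces()
    # Each decoder learns to reconstruct only its own identity.
    loss = (loss_fn(dec_a(encoder(faces_a)), faces_a)
            + loss_fn(dec_b(encoder(faces_b)), faces_b))
    opt.zero_grad()
    loss.backward()
    opt.step()

# The swap: encode a frame of person B, then decode it with A's decoder,
# rendering A's face in B's pose and lighting.
fake = dec_a(encoder(sample_faces(1)))
```

Because both identities pass through the same encoder, the latent space ends up capturing pose, expression, and lighting, while each decoder supplies the identity; that division of labor is what makes the swap possible.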
In 1969 Nam June Paik wrote that he wanted to “shape the TV screen canvas as precisely as Leonardo, as freely as Picasso, as colorfully as Renoir, as profoundly as Mondrian, as violently as Pollock, and as lyrically as Jasper Johns.”1 In the early 2000s, some internet communities popularized a maxim known as Rule 34, which holds that for any subject you can think of, there’s related pornography. The FakeApp makes both Paik’s dream and this adage a reality. If you can assemble a database of a few hundred images of your target, you can make him or her do anything you want on-screen: an unequivocally chilling assault on consent.
The porn industry rarely invents, but it excels at popularizing, driving adoption through its ability to monetize new technologies quickly. Many key advances in image technology, from home video to VR, have been pushed by the economic might of pornographers. The industry’s early adoption of online payment systems also helped fuel e-commerce during the dot-com boom. Given this pattern, it appeared to be only a matter of time before face-swap technology went mainstream. Yet by early February 2018, Reddit had shut down the deepfakes subreddit, Twitter and Pornhub had purged the fake videos, and FakeApp users were exiled to less visible parts of the Web.
It’s unlikely that these sites banned deepfake videos for ethical reasons. Instead, they probably sought to reduce their liability should face-swap victims file a lawsuit. Thus far, most deepfake videos have featured the faces of white female celebrities like Gal Gadot, Taylor Swift, and Scarlett Johansson, though this is more an outcome of available footage than of the libidinal economy of deepfakers. The videos can be made using any combination of face and body, but they work best when the subjects’ skin and hair colors match. These components don’t even have to come from real people: sometimes the faces are taken from video game characters. When the victims of deepfakes are celebrities or trademarked characters, it is easy to imagine high-powered lawyers filing copyright claims and defamation suits against sites hosting the doctored material.
Most unfamous victims, however, could find themselves in much murkier legal territory. The deepfakes subreddit now displays a notice explaining that the ban is “due to a violation of our content policy, specifically our policy against involuntary pornography.” This language echoes the laws against “revenge porn” that some states have adopted. But these legal tools, meant to empower those whose private images have been released without their consent, hinge on the claim that a victim’s privacy has been breached. Can there be an expectation of privacy when the actions in question never actually happened, and the depiction is as artificial as the intelligence that constructed it?
At the same time, it’s easy to imagine a condition of peak deepfakes, in which doctored videos are so widespread and so convincing that all videos circulated online have to be regarded as fake until proven otherwise. In that scenario, could evidence of real compromising moments leaked publicly be more easily brushed off, the technology serving as a universal alibi?
Hollywood has made major advances in face-swap technology in recent years to create digital stunt doubles, facilitate reshoots, and revive dead actors. While seeing fondly remembered movie stars again may be pleasing to many, the potential non-porn uses of deepfakes are perhaps the most troubling.
Thus far, relatively few non-porn uses of the FakeApp have gone viral. Curiously, many of these feature Nicolas Cage’s face in a wide variety of film and television roles. (This focus may be related to Cage’s performance in the cult film Face/Off, in which he and John Travolta play a terrorist and an FBI agent who surgically trade faces.) But a handful of throwaway political videos—mocked-up Trump speeches, for example—have also been making the rounds. This is an especially dangerous development in an age that has seen the normalizing of fake news, election hacking, and lies weaponized for political gain. A politician could claim that any video is fake, and the burden of proving authenticity could fall on the journalists attempting to report accurate information. Similarly, the ease with which fake videos can be made may deepen public suspicion of official videos, such as the footage police officers capture of arrests and violent encounters. As Henry J. Farrell and Rick Perlstein argue in the New York Times, fake videos disseminated through our current polarized media ecosystem could undermine the very idea that a stable truth exists. “Democracy assumes that its citizens share the same reality,” they write. “We’re about to find out whether democracy can be preserved when this assumption no longer holds.”2
Deepfakes have rightly inspired fear and hand-wringing, but they may also force a broad public engagement with questions about the nature of images. Not long after Paik expressed a vision for infinitely malleable video, Chris Burden staged his famous performance Shoot (1971), for which he was shot in the arm. The video documentation that exists of the event is precisely that: documentation. The grainy, artless images are compelling because they seem to capture the reality of what happened with no aesthetic trickery getting in the way of the truth. We can imagine that one reaction to deepfakes will be a yearning for images that can make such claims to truth. At least in the short term, live streams might offer one guarantee of authenticity, since deepfakes currently require hours of processing time.
However, it’s also important not to lose sight of the creative potential that Paik imagined. What would it look like to embrace a world of infinitely changeable video images? The work of artist Gillian Wearing, who often asks interview subjects to don masks while they speak about their lives, suggests how the artifice of face swaps could enable a kind of freedom to tell deeper truths. Kline’s work likewise points to the possibility of using manipulated images to expand our political imagination and offer emotional catharsis where legal justice seems to be off the table. In a perfect world, face swaps might lead to a healthy public discussion about how cameras can never really be objective: images always embody a certain perspective. But such a discussion would require more nuance than our media and politicians are perhaps capable of, as Farrell and Perlstein argue. Still, we can hope that the troubling consequences of a deepfake world might lead to a more deliberate search for truth and reality.
Endnotes
1. Nam June Paik, quoted in Hanna B. Hölling, Paik’s Virtual Archive: Time, Change, and Materiality in Media Art, Oakland, University of California Press, 2017, p. 120.
2. Henry J. Farrell and Rick Perlstein, “Our Hackable Political Future,” New York Times, Feb. 4, 2018, nytimes.com.