A Picture Worth A Thousand Lies: Free Speech in the Age of Deep Fakes

EDITOR’S NOTE: This article was originally published in the Spring 2019 Magazine.

“Pics or it didn’t happen.” This adage has served as an integrity check for the digital age, driving the social media documentation of every fun and furtive moment alike. Images and video have rapidly become foundational to modern news reporting; outlets highlight moments of political blunder and boldness with carefully captured visuals.

Yet the proliferation of visual media in the news is precisely what now threatens truth in the United States. Indeed, images and video grounded the conspiratorial claims that the Parkland shooting survivors were merely “crisis actors.” We hold sacred the conviction that “seeing is believing,” and so photographic evidence has historically counted as proof, even when it is wrong.

On one hand, information leaks have been exceedingly meaningful in holding our government accountable. But what happens when that information is fake? One study out of The Ohio State University suggests that Democrats who believed one or more fake articles about presidential candidate Hillary Clinton were 40% less likely to vote for her. Doctored documents play to our deepest conspiratorial intuitions and move us as no truth could.

The spotlight is on fake news, and recent technology has only intensified the trend: just as photo editing once substantiated conspiracy theories about UFOs, new “deep fakes” now enable the complete dissolution of the line between fiction and reality. Deep fakes are ultra-realistic videos that splice voice and image to make people appear to say things they never said or do things they never did, and new machine learning techniques allow anybody with basic hardware and the appropriate open source software to make fakes of their own. From their inception, deep fakes have caused controversy over their use in creating fictitious porn. Celebrities and ordinary citizens alike are targets of such videos, as our ever-expanding digital footprints give models the material to reconstruct facial expressions and voices and layer them over other footage.

However, deep fakes are dangerous for more than their ability to create salacious videos. They are the sinister older cousin of the fake news already propagating across the internet. Deep fakes are especially easy to make of prominent public figures, because the wealth of legitimate video of them online offers a larger pool of material on which to train networks. Fake videos of political figures are cropping up around the internet; one popular (and parodically upfront) video of comedian Jordan Peele as a fake Obama demonstrates deepfake technology’s powerful potential to defame political figures. Although the videos are created from existing video and sound, this is not mere misrepresentation; it is outright fabrication.

Interestingly enough, the very fact that deep fakes are transformative works shields them from copyright claims. In civil and criminal court, a wide array of protections helps creators skirt legal consequences; a creator’s blatant admission that a deep fake is not real can exempt them from defamation liability. In particular, many pornographic deep fakes are clearly labeled “fakes,” and this disclaimer lends them impunity from false light claims.

Deep fakes can be attacked through several different legal lenses: potential claims have included copyright infringement, tort claims of intentional (or negligent) infliction of emotional distress, and false light claims. But on the side of the defendant, free speech proves one of the strongest defenses, as US courts give First Amendment protections a wide berth. Current laws do not explicitly address this emerging form of cyber-crime, so it is up to courts to interpret existing policies beyond their originally intended scope.

Perhaps the most striking issue here is the ease with which deep fakes can be produced. Even after an arduous legal battle strikes down one creator, the video can still float around in offline formats once it is taken down from the initial publication platform. Deep fake production is boringly formulaic: starting from the original film, re-creation is as simple as feeding photographs and a base video into open source software that automatically maps one face onto the other. The videos are ridiculously easy to reproduce, and the best way to stop their online dissemination would be to prevent their upload to third-party platforms.
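
To make that formula concrete, here is a minimal sketch of the shared-encoder, two-decoder autoencoder idea behind early open source face-swap tools. Every detail in it (layer sizes, names, the training step) is an illustrative assumption for this article, not any particular tool’s actual code:

```python
# A minimal sketch of the classic face-swap autoencoder: one shared encoder,
# one decoder per identity. All dimensions and names are illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compress a 64x64 RGB face crop into a latent vector."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),    # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),   # 32 -> 16
            nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 16 -> 8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class Decoder(nn.Module):
    """Reconstruct a 64x64 face from the shared latent space."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),    # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person

# Training: each decoder learns to rebuild its own person's faces from the
# shared latent code (random tensors stand in for real cropped faces here).
faces_a = torch.rand(8, 3, 64, 64)
loss_a = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a)

# Inference: encode person A's face, decode with person B's decoder.
swapped = decoder_b(encoder(faces_a))
```

The trick is the shared latent space: the encoder learns pose and expression common to both people, each decoder learns one identity, and decoding person A’s faces with person B’s decoder produces the swap.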

But that approach is dangerous. Section 230 of the 1996 Communications Decency Act solidified the division between interactive computer service providers (like YouTube, Yelp, and Facebook) and the individuals posting material. This legal separation enabled a peer-to-peer intellectual revolution by allowing robust networks of internet connections to form. But as rapidly as knowledge traverses the web, so too does misinformation. Third-party platforms set their own acceptable-use policies in their user agreements, but they are accountable only for removing content that violates federal criminal law or intellectual property rights. Transferring legal responsibility for other content from poster to platform would unfathomably fetter American free speech; the burden of blocking deep fakes would push platforms to shut down or maintain strict censorship.
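
To see why that burden is heavy, consider a hypothetical sketch of the simplest screening a platform might run: hash each uploaded frame and compare it against a blocklist of perceptual hashes taken from videos already ruled fake. The hash scheme and threshold below are illustrative assumptions, not any platform’s actual filter:

```python
# A hypothetical upload filter: compare a perceptual "average hash" of each
# uploaded frame against a blocklist of hashes from videos already ruled fake.
# The hash scheme and threshold are illustrative assumptions only.
from PIL import Image

def average_hash(image: Image.Image, size: int = 8) -> int:
    """Shrink to size x size grayscale; bit i is 1 if pixel i is above the mean."""
    pixels = list(image.convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count the bits on which two hashes disagree."""
    return bin(a ^ b).count("1")

def is_blocked(frame: Image.Image, blocklist: set[int], threshold: int = 5) -> bool:
    """Reject the upload if any known-fake hash is within the distance threshold."""
    h = average_hash(frame)
    return any(hamming(h, known) <= threshold for known in blocklist)
```

Even this toy version hints at the cost: crops and re-encodings shift the hash, someone must adjudicate what enters the blocklist, and every false positive silences legitimate speech.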

It is a sad reality that the US laws that could regulate deep fakes largely center on financial impact, and damages are often measured in those terms. Regulating deep fakes is an intricate task with many underlying implications, and even as legislation is updated to address these new cyber developments, the technology will continue to evolve at a pace outstripping the debates of Court and Congress. To remain vigilant in a time of muddled truth, we need only remind ourselves: “pics and it didn’t happen.”