The real consequences of fake porn and news

There is a movement underway to ban involuntary pornography: videos that use machine learning libraries like TensorFlow to superimpose the faces of unwilling participants onto porn actors’ bodies. Yesterday, as TC’s Taylor Hatmaker noted, Reddit published a content policy banning such images and videos, while also shutting down a series of subreddits devoted to the practice. Pornhub and other sites have published similar policies in recent weeks.

Porn, though, is merely the harbinger of a world of involuntary, fake content. Earlier this week, political scientist Henry J. Farrell and Nixonland author Rick Perlstein wrote a column in The New York Times titled, “Our Hackable Political Future.” In their near-dystopian world, partisan saboteurs could undermine a politician’s candidacy for office by creating fake videos of, say, the candidate having sexual relations with a teenager. Projecting out, the authors worry that “Fake video and audio may become so convincing that it can’t be distinguished from real recordings, rendering audio and video evidence inadmissible in court.”

In a democratic society, the presumption of truth is generally the default response to most content. Quite soon, though, we may live in a world where everything must be considered fake absent evidence to the contrary. It’s as if we had suddenly moved to an authoritarian country and needed to constantly dismiss the propaganda we see every day. Among the policy problems facing startups, tech companies, political parties and governments together, this challenge is about as thorny as they come.

Take a situation Reddit might confront sooner rather than later: a grainy video is posted of a politician having sex with a teenager (this, for some reason, is the canonical example). The video’s authenticity is nebulous: there are no obvious signs of tampering, yet it could be fake, and may even be likely to be. What does Reddit do? Does it ban what might be the political scandal of the century just to be on the “safe side”?

That’s just the first layer of the challenge, though. To see the thorniness in all of its detail, let’s pick a more intricate example than a fake porn video of a politician or a celebrity. Instead, imagine that a dissident in an authoritarian regime uses their smartphone to record an atrocity, say the widespread murder of protesting civilians by the regime’s security forces. The video is authentic: it was recorded live and posted anonymously to the internet.

In a world where fakes predominate, there is an immediate question about what the video even is. It’s entirely plausible that a dissident group would create a fake video to bring attention to its plight and garner media coverage. The regime publishes its own video showing that the streets are entirely safe and clear. Those who believe the dissidents will go on believing them, and those who trust the regime will continue to do so. The persuasive power of the original video is lost.

This brings us to the most obvious solution discussed in my circles: building a cryptographically verifiable “chain of custody” for videos (I’m sensing the “b” word that starts with a block and ends with a chain). The idea is that if all content is assumed fake in this coming world, we can use cryptographic hashes and signatures to create metadata that proves the provenance of a particular piece of content. Indeed, there are already startups targeting this market, like Prover.io, which is building a video authentication service to handle exactly the challenge I laid out in the example above.
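To make the idea concrete, here is a minimal sketch of what one link in such a chain might look like, assuming a device-held Ed25519 key and Python’s cryptography library. The record layout, field names and file name are all illustrative, and this is not Prover.io’s actual design:

```python
# A sketch of a signed "chain of custody" record: hash the raw video bytes,
# point at the previous record, and sign the result. Illustrative only.
import hashlib
import json
import time

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def sha256_file(path: str) -> str:
    """Hash the raw bytes; any re-encoding or edit changes this value."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def make_record(path: str, prev_record: str, key: Ed25519PrivateKey) -> dict:
    """Build one link in the chain: content hash plus a pointer to the last link."""
    body = {
        "content_sha256": sha256_file(path),
        "prev_record": prev_record,        # chains the records together
        "timestamp": int(time.time()),
    }
    payload = json.dumps(body, sort_keys=True).encode()
    return {"body": body, "signature": key.sign(payload).hex()}


# The device would sign at capture time; anyone holding the public key can
# later verify that the bytes they have are the bytes that were signed.
key = Ed25519PrivateKey.generate()
record = make_record("protest.mp4", prev_record="genesis", key=key)
```

Verification requires only the public half of the key, so the chain itself can be public. The trouble is what the chain has to know in order to be convincing.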

There is just one problem: identity, or at least device verification, is typically a prerequisite for these chains to function. After all, we are trying to prove the chain of custody, and the chain needs to know which device or person originated the video. That information is extraordinarily valuable to the authoritarian regime, which would love to know which dissident was shooting that video and shoot right back, and not with a camera.

Maybe the video creator will have granular access controls to manage who can see the identity metadata, so they could prove the video’s provenance to The New York Times without revealing it to the regime. Maybe the blockchain can be anonymous, protecting the dissident. But remember, we are trying to prove that this is a real video, and every attempt at obscuring the creator’s identity is one more notch against its authenticity.
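One way to square that circle, at least partially, is to keep the signed chain public while sealing the identity metadata and handing the decryption key only to parties the creator chooses. A toy sketch using a Fernet symmetric key (a stand-in for a fuller public-key scheme; the field contents are placeholders):

```python
# The provenance record stays public, but the identity metadata inside it is
# encrypted; only chosen parties (say, a newsroom) receive the key.
from cryptography.fernet import Fernet

identity_metadata = b'{"device_id": "...", "operator": "..."}'  # sensitive

disclosure_key = Fernet.generate_key()   # shared with the Times, not the regime
sealed_identity = Fernet(disclosure_key).encrypt(identity_metadata)

# `sealed_identity` can sit in the public chain; only key holders can open it.
assert Fernet(disclosure_key).decrypt(sealed_identity) == identity_metadata
```

Note that this only relocates the trust problem: everyone without the key sees an opaque blob, which is exactly the kind of obfuscation that erodes the video’s credibility.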

Another solution that might become widely available is authenticator software, which can scan a video and determine whether it was forged. Machine learning might do a fantastic job of fabricating footage, but it may not get every single pixel right, and those residual artifacts are detectable. Much as photo analysts today have techniques to determine whether a particular image has been Photoshopped, our world might soon have algorithmic tools to evaluate images and videos for machine-learned editing.
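For still images, one classic forensic trick is error-level analysis: recompress a JPEG at a known quality and look at where the result diverges from the original, since edited regions often recompress differently. A simplistic illustration with Pillow (the file names are placeholders, and this is nowhere near a production forgery detector):

```python
# Toy error-level analysis (ELA): bright regions in the difference image
# recompress anomalously and may warrant a closer look.
import io

from PIL import Image, ImageChops


def error_level(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)      # recompress at known quality
    resaved = Image.open(buf)
    return ImageChops.difference(original, resaved)  # per-pixel difference


ela = error_level("frame.jpg")
ela.save("frame_ela.png")  # inspect or threshold for suspicious regions
```

A detector aimed at machine-learned edits would be far more sophisticated, but the workflow is the same: look for statistical fingerprints the forger failed to erase.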

I accept that this technology will exist, but will it even matter? PolitiFact’s Truth-O-Meter is a recognized, reasonably objective third-party service for verifying the facts used (and abused) by politicians. It’s a good service, but it hasn’t been able to stop politicians from misusing facts and evidence, or from getting elected on falsehoods. Why would we suddenly expect authenticator software to have any more impact on people’s beliefs?

Finally, political options to stop the spread of these fake videos are similarly limited. The technology behind these videos has plenty of legitimate uses, including Hollywood special effects, so it hardly makes sense to ban it. Regulating their use similarly seems like an exercise in futility, given that many of these videos are produced anonymously and potentially across national borders, making enforcement impossible. And free speech protections, at least in the United States, would likely protect the creation of these videos. Just swap “libel” for “art” or “satire” and there is a reasonable argument to be made for allowing at least some of these videos into our discourse.

Maybe better education will inoculate us against these videos. Maybe whether something is fake will be more obvious than we think today. Maybe there will be a resuscitation of our trust in the media to carefully filter out garbage content for us. Maybe.

Far more likely is a world in which we will simply have to accept that every story, image and video we consume has been doctored, and might be entirely invented. We, startup founders and others in the ecosystem, are going to have to build new tools if democracy is to continue functioning as we expect. The world of fake porn and fake news is already upon us, and the tools for fighting back are not yet ready.