We Need to Stop Freaking Out About AI Deepfakes

Photo Illustration by Elizabeth Brockway/The Daily Beast/Reuters

The photos are evocative. Former President Donald Trump is yelling, writhing, fighting as he’s detained by police. A swarm of officers surrounds him. His youngest wife and eldest son scream in protest. He’s in a mist—is that pepper spray?—as he charges across the pavement.

The photos are also… off. The pepper spray emerges, ex nihilo, from behind Trump’s head and in front of his chest. Behind him, a storefront sign says “WORTRKE.” In one image, a cop’s arm is outside its empty sleeve. In another, Trump has only half a torso. The officers’ badges are all gibberish. “PIULIECE” reads a cop’s hat behind a grotesque Melania Trump-like creature from the uncanny valley.


All of this, you see, is fake. The photos are not photos at all but deepfakes, the work of generative AI. They’re a digital unreality created by Midjourney, a program similar to the better-known DALL-E 2 image generator and the GPT-4 chatbot. And, for American politics, they’re a portent of things to come.

That’s not necessarily as scary as it may sound. There will be an adjustment period, and for the next few years political discourse online will be uniquely vulnerable to AI-linked confusion and manipulation. But in the longer term, while generative AI almost certainly won’t make our politics any better, it probably won’t make things meaningfully worse, because humans have already made them thoroughly bad.

The near-term risk is twofold. Part of it is about a single man: Trump. His behavior is uniquely outlandish; he has a long record of proven deception around matters large and small; he generates an immediate emotive response in tens of millions of Americans; and he is very difficult to ignore.

That combination makes Trump unmatched as a target for plausible deepfakes. Take these arrest images: They don’t stand up to a second’s serious scrutiny. The garbled words are a giveaway even if you somehow fail to notice the Gumby poses and not-quite-human faces.

But the concept itself isn’t immediately dismissible, is it? Trump is reportedly fixated on the possibility of doing a perp walk in cuffs, and if he wants to make a scene, a few anguished expressions from Your Favorite Martyr would be a good start. The same concept doesn’t and can’t work as well for any other figure of remotely similar prominence, including Trump’s own imitators and would-be successors in the GOP.

The other near-term risk is generational. The vaunted savvy of “digital natives” is routinely overblown (plenty of young people believe plenty of internet nonsense), but research suggests age is a real factor in the spread of misinformation online. In fact, per a 2019 study published in Science Advances, it’s among the most important factors.

During the 2016 election, “[m]ore than one in 10, or 11.3 percent, of people over age 65 shared links [on Facebook] from a fake news site, while only 3 percent of those age 18 to 29 did so,” the researchers wrote at The Washington Post.

“These gaps between old and young hold up even after controlling for partisanship and ideology,” they found. “No other demographic characteristic we examined—gender, income, education—had any consistent relationship with the likelihood of sharing fake news.” (Incidentally, though institutional distrust and brokenism are relevant factors, too, Republicans are a bit older than Democrats, and studies have found a higher rate of misinformation sharing on the right.)

This difference isn’t something inherent to older or younger generations. It’s just a matter of familiarity with internet culture—an accident of birth. The longer generative AI is with us, then, even as the technology improves, the more we’ll develop that familiarity with its output. We’ll become more accustomed to noticing signs of deception, to subconsciously realizing a piece of content is somehow artificial and untrustworthy.

Or, at least, we’ll develop those instincts of skepticism if we want them. Many won’t.

Ironically, that unfortunate reality is why I don’t share the fears expressed in a New York Times report this week on the prospect of politically biased AI. The risk of partisan “chatbots [making] ‘information bubbles on steroids’ because people might come to trust them as the ‘ultimate sources of truth’” strikes me as overblown.

Our political information environment already offers an enormous quantity of content of wildly varying quality. AI content generation will lower the barrier of effort it takes to add lies to that mix, but only marginally. People are gullible and tribalistic already. Misinformation can even spread by accident. It doesn’t need intelligence, let alone artificial intelligence, to get going.

Moreover, acceptance of fabricated content isn’t typically tied to how well-written or well-designed it is. The pixelated Minions memes propagating garbage “facts” on Facebook aren’t exactly a high-effort product. If anything, it might be easier to realize you were fooled by a fake Trump arrest image than by whatever lie or half-truth those memes tell. After all, Trump will soon appear in public unscathed by the violent arrest that never happened. Untold millions of old-fashioned memes will be shared, believed, and never debunked.

So it’s not that chatbots won’t be biased and image generators won’t be used to deceive. They will, on both counts. But we don’t need AI to lie to each other. We don’t need politicized chatbots to have information bubbles on steroids. And anyone who thinks a chatbot is the ultimate source of truth wouldn’t have been a discerning political thinker even in a pre-digital age.
