Inside Google’s and Meta’s Arms Race to Build the Most Deceptive AI

Illustration by Luis G. Rendon/The Daily Beast

If you’ve never played the game Diplomacy before, I don’t recommend starting because it will consume your life. The game is typically a seven-player affair that involves a lot of negotiation, persuasion, and alliances—not to mention a healthy dose of deception—in order to control and gain territory on a map of Europe in the run-up to WWI.

But there are countless other versions of the game out there, some of which feature dozens of players playing on a map the size of the world. Each player vies for power with the ultimate goal of conquering enough territory to win outright, or simply surviving long enough to negotiate a draw. These matches can get very messy, very quickly—which makes it the perfect game for the sick and depraved.

And, as it turns out, it’s also a great game to train AI how to negotiate, cooperate, and even deceive. The most recent effort comes from researchers at Google’s AI research lab DeepMind who published a study on Dec. 6 in the journal Nature Communications about a new approach for teaching bots to play Diplomacy. The authors say that this method allows for better communications between AI “players” while also encouraging cooperation and honesty.

“We view our results as a step towards evolving flexible communication mechanisms in artificial agents, and enabling agents to mix and adapt their strategies to their environment and peers,” the authors wrote.

One of the high-level insights the researchers gained from the experiment was that the AI players fostered more honesty in negotiations by punishing agents that broke agreements and lied about what they would do. They found that “negatively responding to broken contracts allows agents to benefit from increased cooperation while resisting deviations.”
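To make that idea concrete, here is a minimal, hypothetical sketch in Python. It is not DeepMind’s actual algorithm, and every name and number in it is invented; it just shows, in a simple repeated prisoner’s-dilemma-style negotiation, why a partner that punishes broken agreements for a few rounds makes cheating unprofitable, so honesty ends up scoring better.

```python
# Hypothetical toy model (not DeepMind's method): does breaking "contracts" pay
# off against a partner that sanctions betrayal versus one that ignores it?
import random

COOPERATE, DEFECT = "cooperate", "defect"

# Classic prisoner's-dilemma payoffs: (points for agent A, points for agent B).
PAYOFFS = {
    (COOPERATE, COOPERATE): (3, 3),
    (COOPERATE, DEFECT): (0, 5),
    (DEFECT, COOPERATE): (5, 0),
    (DEFECT, DEFECT): (1, 1),
}

def simulate(a_defect_prob, b_sanctions, rounds=1000, seed=0):
    """Agent A breaks its agreement with probability a_defect_prob each round.
    Agent B either sanctions a broken agreement by defecting for the next
    three rounds, or keeps cooperating no matter what."""
    rng = random.Random(seed)
    a_total = 0
    sanction_timer = 0
    for _ in range(rounds):
        a_move = DEFECT if rng.random() < a_defect_prob else COOPERATE
        b_move = DEFECT if (b_sanctions and sanction_timer > 0) else COOPERATE
        a_points, _ = PAYOFFS[(a_move, b_move)]
        a_total += a_points
        if b_sanctions:
            # B observes A's move and punishes a broken agreement for 3 rounds.
            sanction_timer = 3 if a_move == DEFECT else max(0, sanction_timer - 1)
    return a_total

for b_sanctions in (False, True):
    honest = simulate(a_defect_prob=0.0, b_sanctions=b_sanctions)
    deviant = simulate(a_defect_prob=0.3, b_sanctions=b_sanctions)
    label = "sanctioning partner" if b_sanctions else "forgiving partner"
    print(f"{label}: honest agent scores {honest}, deviating agent scores {deviant}")
```

In this toy setup, the deviating agent out-earns the honest one only when its partner never retaliates; once broken agreements get punished, honesty becomes the better strategy, which is the flavor of the result the researchers report.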

So, as it is with history and poetry, the deepest circle of AI hell is still reserved for traitors.

Beyond being able to dominate us in a heated game of Diplomacy, AI trained in this way can potentially be used to help us solve complex issues. After all, bots are already used to do everything from automating manufacturing to creating efficient shipping routes for the transportation industry. But if AI can also figure out solutions for less black-and-white issues like negotiations and compromises, it could help do things like draft contracts or even negotiate political deals.

DeepMind’s AI is just the latest in a long line of strategy gaming bots, including Meta’s own Diplomacy-playing AI announced in November and a recently unveiled Stratego-playing bot from DeepMind. However, AI has had a long history with gaming dating back to Deep Blue, the famous IBM supercomputer that defeated chess grandmaster Garry Kasparov in 1997, a year after losing its first match against him. Bots have only become more sophisticated, learning how to best humans in a variety of different games that require strategy and deception.

“AI deceiving humans is not a new phenomenon,” Vincent Conitzer, an AI ethics researcher at Carnegie Mellon University, told The Daily Beast. “AI became superhuman at the game of poker before Diplomacy.”

Conitzer explained that perhaps the most significant thing about Diplomacy-playing bots is that they have to use natural language. Unlike in chess or poker, there’s often no clear-cut solution or goal. Just like in real life, you have to make deals and compromises with other players. That presents a much more complex set of considerations for a system to work through before it can settle on a decision.

It also means that the AI models need to take into account whether or not someone is lying—and if it should be deceptive, too.

A bot can’t lie in the way we typically define lying; it won’t spout a wrong answer to a question unless it’s glitching. Lying, by definition, requires an intent to deceive. But bots can have intentions of a sort: they’re designed by humans to perform specific functions, and deception may be part of that functionality.

“It doesn’t understand the full social context of lying, and it understands what it’s saying in, at best, a limited way,” Conitzer said. “But to us, AI systems using language strategically may appear more worrisome.”

He isn’t alone in this thinking, either. “The introduction of an explicitly deceptive model might not introduce as much new ethical territory as you might think, simply because there isn’t much in the way of intentionality to begin with,” Alexis Elder, an AI ethicist at the University of Minnesota Duluth, told The Daily Beast. However, she echoed Conitzer’s sentiment about how a convincing and deceptive AI “seems potentially quite worrisome.”

On top of all the ethical concerns surrounding lying AI is the fact that it’s being funded, researched, and pushed by some of the most powerful and wealthy tech companies in the world, namely Meta and Alphabet. Both have a sordid track record with AI, particularly when it comes to biased and racist behavior: Meta has repeatedly shipped bots that turned out to be racist, sexist, or otherwise biased, and Alphabet came under fire in 2015 after Google Photos labeled dozens of photos of Black people as gorillas.

It’s no surprise that those concerns spring up again when it comes to developing a bot capable of using language to deceive and coerce. What happens when a bot is used to negotiate an unfair contract between a boss and their workers, or a landlord and their tenants? Or if it were weaponized by a political party to disenfranchise people of color by drawing voting districts that don’t accurately reflect the population? Sure, it might not be a reality yet, but unless there’s defined regulation about what these bots can and can’t do, the pathway is there.

It all reinforces a lesson we keep learning time and again: take anything an AI tells you with a big grain of salt.

“If nothing else, it’s an important reminder that the text that AI systems produce isn’t necessarily true,” Conitzer said. “This is true even if the system is not intended to mislead. Large language models such as OpenAI’s GPT-3 and even Meta’s science-focused Galactica produce text full of falsehoods all the time, not because they’re designed to mislead, but rather because they are just producing likely-seeming text without deep understanding of what the text is about.”

For now, though, we simply have bots that are getting better at gaming. While they might not be able to go full HAL-9000 and totally manipulate us (yet), they might be able to dominate us over a game of Diplomacy—and honestly, that might be just as bad.
