AI trained on 4chan's most hateful board is just as toxic as you'd expect

The hyper-racist bots posted 15,000 times in one day.

Microsoft inadvertently learned the risks of creating racist AI with its Tay chatbot, but what happens when you deliberately point an AI at a toxic forum? One person found out. As Motherboard and The Verge note, YouTuber Yannic Kilcher trained an AI language model on three years of content from 4chan's Politically Incorrect (/pol/) board, a place infamous for its racism and other forms of bigotry. After implementing the model in ten bots, Kilcher set the AI loose on the board, and it unsurprisingly created a wave of hate. In the space of 24 hours, the bots wrote 15,000 posts that frequently included or interacted with racist content. By Kilcher's count, they accounted for more than 10 percent of /pol/'s posts that day.

Nicknamed GPT-4chan (a nod to OpenAI's GPT-3), the model learned to reproduce not only the vocabulary of /pol/ posts but an overall tone that Kilcher said blended "offensiveness, nihilism, trolling and deep distrust." The video creator also took care to dodge 4chan's defenses against proxies and VPNs, routing the bots through a VPN so their posts appeared to originate from the Seychelles.

The AI made a few telltale mistakes, such as blank posts, but was convincing enough that it took roughly two days for many users to realize something was amiss. According to Kilcher, many forum members noticed only one of the bots, and the model sowed enough wariness that people were still accusing each other of being bots days after he had deactivated them.

In conversation with The Verge, the YouTuber characterized the experiment as a "prank" rather than research, and to some extent the toxic output was predictable: trained AI is only as good as its source material. The larger concern stems from how Kilcher shared his work. While he avoided providing the bot code, he uploaded a partly neutered version of the model to the AI repository Hugging Face, where visitors could have recreated the AI for sinister purposes; Hugging Face restricted access as a precaution. There were clear ethical concerns with the project, and Kilcher himself said he should focus on "much more positive" work in the future.