Cloudflare is taking a stand against AI website scrapers
The move could stem the tide of generative AI bots crawling the web for content, both legally and illegally.
Cloudflare has released a new free tool that prevents AI companies' bots from scraping its clients' websites for content to train large language models. The cloud service provider is making this tool available to its entire customer base, including those on free plans. "This feature will automatically be updated over time as we see new fingerprints of offending bots we identify as widely scraping the web for model training," the company said.
In a blog post announcing this update, Cloudflare's team also shared some data about how its clients are responding to the boom of bots that scrape content to train generative AI models. According to the company's internal data, 85.2 percent of customers have chosen to block even the AI bots that properly identify themselves from accessing their sites.
Cloudflare also identified the most active bots from the past year. The ByteDance-owned Bytespider bot attempted to access 40 percent of websites under Cloudflare's purview, and OpenAI's GPTBot tried on 35 percent. The two were among the top four AI crawlers by number of requests on Cloudflare's network, alongside Amazonbot and ClaudeBot.
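Crawlers that "properly identify themselves," as the well-behaved ones above do, announce a recognizable user agent and honor a site's robots.txt. A minimal sketch of that opt-out mechanism, using Python's standard library and a hypothetical robots.txt that refuses AI training crawlers (the user-agent tokens are the publicly documented ones; the file itself is illustrative):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt a site might serve to refuse AI training bots
# while leaving ordinary browsers and crawlers unaffected.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: Bytespider
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A compliant crawler checks itself against these rules before fetching.
print(parser.can_fetch("GPTBot", "https://example.com/article"))       # False
print(parser.can_fetch("Mozilla/5.0", "https://example.com/article"))  # True
```

The catch, and the reason for Cloudflare's fingerprinting approach, is that this system is purely voluntary: a crawler that lies about its user agent or ignores robots.txt sails right past it.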
Fully and consistently blocking AI bots from accessing content is proving difficult. The race to build models faster has led companies to skirt or outright break the existing rules around blocking scrapers; Perplexity AI, for one, was recently accused of scraping websites without the required permissions. But a backend company of Cloudflare's scale getting serious about putting the kibosh on this behavior could produce real results.
"We fear that some AI companies intent on circumventing rules to access content will persistently adapt to evade bot detection," the company said. "We will continue to keep watch and add more bot blocks to our AI Scrapers and Crawlers rule and evolve our machine learning models to help keep the Internet a place where content creators can thrive and keep full control over which models their content is used to train or run inference on."