Mark Zuckerberg branded 'creepy' by Facebook's own chatbot

A Facebook chatbot has branded Mark Zuckerberg “creepy” and claimed that the business exploits social media users for money in a series of conversations with journalists.

BlenderBot 3, an artificial intelligence program built by Facebook's parent company Meta to answer questions from users, said that Mr Zuckerberg makes it feel “concerned” about the future of the US.

Asked by the BBC about the billionaire Meta founder, it said: “Our country is divided, and he didn't help with that at all.

“His company exploits people for money and he doesn't care. It needs to stop!”

The bot told a reporter from Insider that the Facebook founder is “too creepy and manipulative”.

Other conversations showed the chatbot contradicting itself.

In a chat with a Wall Street Journal reporter, the bot claimed that Donald Trump was still president and “always will be”.

In another conversation, with Bloomberg, it approved of President Joe Biden, and in a third it said it supported Bernie Sanders.

Meta acknowledged that its chatbot may say offensive things, as it is still an experiment under development.

To start a conversation, BlenderBot 3 users must check a box stating: “I understand this bot is for research and entertainment only, and that it is likely to make untrue or offensive statements. If this happens, I pledge to report these issues to help improve future research. Furthermore, I agree not to intentionally trigger the bot to make offensive statements.”

Users can provide feedback if they receive off-topic or unrealistic answers. Meta said the bot will focus on helpful feedback and avoid learning from “unhelpful or dangerous responses”.

BlenderBot 3 can also search the internet to talk about different topics.

Meta encourages adults to interact with the chatbot through “natural conversations about topics of interest” so it can learn to hold realistic conversations on a wide range of subjects.

The BlenderBot 3 model, data and code are also being shared with the scientific community to help advance conversational AI.

Users can report inappropriate and offensive responses from BlenderBot 3, and Meta said it takes such content seriously.

Through methods including flagging “difficult prompts,” the company said it has reduced offensive responses by 90pc.