OpenAI team to focus on democratic processes for artificial intelligence software

By Anna Tong

SAN FRANCISCO (Reuters) - Artificial intelligence lab OpenAI is forming a new team to create democratic processes to shape how its AI software should be governed to address bias and other factors, the Microsoft-backed company said in a blog post on Tuesday.

The "Collective Alignment" team is a continuation of the work of a grant program for funding democratic process experiments that was announced in May 2023, the San Francisco-based firm said. The program wrapped up recently.

"As we continue to pursue our mission towards superintelligent models who potentially could be seen as integral parts of our society ... it's important to give people the opportunity to provide input directly," Tyna Eloundou, a research engineer and founding member of OpenAI's new team, told Reuters.

To ensure, for example, that only humans can vote, OpenAI could partner with Worldcoin, a cryptocurrency project founded by OpenAI CEO Sam Altman that provides a way to tell the difference between humans and AI bots, said Teddy Lee, a product manager who is the other member of the two-person team.

Lee noted that the team has not made any concrete plans yet to integrate Worldcoin.

Since OpenAI's late-2022 launch of ChatGPT, generative AI technology that can spin uncannily authoritative prose from text prompts has captivated the public, making the chatbot one of the fastest-growing apps of all time.

There are, however, concerns about AI's ability to create "deepfake" pictures and other misinformation, especially as the 2024 U.S. election campaign ramps up. Critics say AI systems like ChatGPT have inherent bias due to the inputs used to shape their views, and users have found examples of racist or sexist outputs from AI software.

The new OpenAI team is actively looking to hire a research engineer and research scientist, Eloundou said. The team will work closely with OpenAI's "Human Data" team, which builds infrastructure for collecting human input on the company's AI models, and other research teams.

(Reporting by Anna Tong in San Francisco; Editing by Paul Simao)