OpenAI says it is investigating complaints about ChatGPT having become “lazy”.
In recent days, a growing number of users of the latest version of ChatGPT – built on OpenAI’s GPT-4 model – have complained that the chatbot refuses to do as they ask, or that it does not seem interested in answering their queries.
If a user asks for a piece of code, for instance, it might supply only a little of it and instruct them to fill in the rest. Some complained that it did so in a particularly sassy way, telling people, for instance, that they are perfectly able to do the work themselves.
In numerous Reddit threads, and even in posts on OpenAI’s own developer forums, users complained that the system had become less useful. They also speculated that OpenAI had made the change intentionally to run ChatGPT more efficiently and keep it from returning long answers.
AI systems such as ChatGPT are notoriously costly for the companies that run them, since giving detailed answers to questions requires considerable processing power and computing time.
Now OpenAI has said that it is aware of the complaints about the system. But the company said that no changes had been made to the model that would explain why it was behaving differently.
“We’ve heard all your feedback about GPT4 getting lazier!” the company said on Twitter/X. “We haven’t updated the model since Nov 11th, and this certainly isn’t intentional. model behavior can be unpredictable, and we’re looking into fixing it.”
OpenAI gave no indication of whether it was convinced by the complaints, or whether it believed ChatGPT had actually changed the way it responded to queries.
The company has had a tumultuous few weeks, after chief executive Sam Altman was forced out and then rejoined a few days later.