
Users Have Reported That ChatGPT Has Been Getting Dumber Lately

Users have been voicing disappointment with OpenAI’s GPT-4, claiming that the AI model has been performing poorly and showing diminished reasoning ability. On social media platforms like Twitter and on OpenAI’s developer forum, users have reported weakened logic, more erroneous responses, difficulty following instructions, and even forgetting basic programming syntax.

Developers who rely on GPT-4 to help write code have compared the model’s current performance to driving a high-performance car that suddenly turns into a beat-up old pickup truck. Users have also noticed a decline in writing quality, with outputs becoming less clear and concise.

Some users have even seen GPT-4 loop, repeating the same output and failing to deliver the intelligence and comprehension it showed before. This is a significant departure from earlier this year, when OpenAI was impressing the world with ChatGPT and anticipation was building for the launch of GPT-4.

Rumors within the AI community suggest that OpenAI may be planning a major redesign of the system. One possible approach is a Mixture of Experts (MoE) model, in which several smaller GPT-4 models each specialize in a subject area, such as biology, physics, or chemistry. When a user poses a question, a routing step decides which expert model(s) to consult and combines their results, as sketched below.
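
To make the routing idea concrete, here is a minimal, purely illustrative Python sketch of MoE-style gating. The expert names, the keyword-based gate, and the top-k blending are all assumptions made for this example; production MoE systems use a learned gating network inside the model itself, and nothing here reflects OpenAI’s actual design.

    # Toy Mixture of Experts: score each expert against the query,
    # route to the top-k, and blend their answers by gate weight.
    # Everything below is hypothetical and for illustration only.

    EXPERTS = {
        "biology":   lambda q: f"[biology answer to: {q}]",
        "physics":   lambda q: f"[physics answer to: {q}]",
        "chemistry": lambda q: f"[chemistry answer to: {q}]",
    }

    # Stand-in for a learned gating network: crude keyword affinity.
    KEYWORDS = {
        "biology":   ["cell", "gene", "protein"],
        "physics":   ["force", "quantum", "energy"],
        "chemistry": ["acid", "bond", "molecule"],
    }

    def gate_scores(query: str) -> dict:
        q = query.lower()
        # Small constant keeps weights well-defined when nothing matches.
        return {name: sum(w in q for w in words) + 1e-6
                for name, words in KEYWORDS.items()}

    def moe_answer(query: str, top_k: int = 2) -> str:
        scores = gate_scores(query)
        # Select the top-k experts, then normalize their gate weights.
        chosen = sorted(scores, key=scores.get, reverse=True)[:top_k]
        total = sum(scores[name] for name in chosen)
        # Weighted combination of the selected experts' outputs.
        return " + ".join(f"{scores[n] / total:.2f} * {EXPERTS[n](query)}"
                          for n in chosen)

    print(moe_answer("How much energy is stored in a chemical bond?"))

In a real MoE transformer the gate is a small neural network applied inside the model and the experts are learned sub-networks rather than standalone chatbots, but the select-then-blend pattern is the same.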

Shifting to MoE models could reduce costs while maintaining or even improving response quality. Some experts believe that GPT-4’s recent performance decline might be related to the training and rollout of these smaller expert models.

OpenAI has not yet responded to inquiries about the reported issues with GPT-4. However, leaked details from AI experts on social media suggest that OpenAI may indeed be using an MoE approach with 16 expert models in GPT-4’s architecture.

While some experts acknowledge the potential trade-off between cost and quality with MoE models, they emphasize that evaluating these models is challenging and that the observations made thus far are anecdotal.

It remains to be seen how OpenAI will address the reported shortcomings of GPT-4 and whether a fleet of smaller expert models will resolve the performance issues.
