Several users have reported that OpenAI's ChatGPT stopped fully responding to their requests, raising concerns that its capabilities have degraded.
Since late November 2023, some users have reported issues with the GPT-4 version that cause it to refuse certain requests or return only partial results. For instance, a Reddit user named Acceptable-Amount-14 complained that when asked to fill out a CSV file with many entries, the paid version of ChatGPT replied: "Due to the large nature of the data file, extracting the entire file would be difficult. However, I can provide a sample to you upon request, and then you can complete the remaining items similarly."
Other users have reported that ChatGPT's answers have become noticeably more concise. Some jokingly suggest the model is experiencing "depression," while others propose a "winter holiday hypothesis": because people tend to work more slowly at the end of the year and the start of the new one, a model trained on human output might have absorbed that seasonal slowdown.
On X in early January, an account named Martian asked whether major language models were experiencing "seasonal depression." The Polish developer Mike Swoopskee likewise wondered whether people's habit of slowing down in December and putting off big projects until the new year might explain why ChatGPT had become lazier.
In December, the official ChatGPT account on X acknowledged user feedback that the model was becoming lazy and said the company was considering a fix.
Catherine Breslin, a UK-based AI scientist and consultant, recently suggested that ChatGPT's apparent decline could stem from a change to the model itself. According to her, when AI companies retrain or modify a model by adding new data, they can unintentionally alter its behavior in other parts of the system, producing disruptions. Her comments were reported by The Guardian over the weekend.
One observer suggested there may be a connection between shifting user behavior and how people perceive AI systems. Because ChatGPT performs one task excellently, users may assume it can perform a similar task equally well; in reality, it often cannot. That mismatch can lead users to judge the system as bad or incompetent, an effect the observer said is more pronounced with complex AI models.
Users' expectations also shape how they perceive AI. Research firm Gartner describes a "Hype Cycle" for new technologies, in which inflated expectations are followed by disillusionment and, eventually, a plateau of productivity. Users may therefore feel let down when an AI system fails to live up to their initial expectations.
"Perhaps some complaints about ChatGPT's laziness reflect unrealistic expectations," The Guardian commented.