Forget hallucinations, ChatGPT has developed full-blown dementia

Cal Jeffrey

WTF?! ChatGPT started glitching out last night, and people lost their minds. Some are calling it the "singularity." Others think it's an elaborate collective hoax that even OpenAI is in on. Some think the LLM is struggling with words as it becomes sentient. My guess: It's just a misplaced comma in the code.

Any software under ongoing development is likely to experience sudden bugs. About a year ago, Stanford's Alpaca, a model fine-tuned from Meta's LLaMA, started responding to queries with clearly false answers while insisting they were true. Large language model (LLM) developers like OpenAI refer to this phenomenon as a "hallucination."

If we are to stick with anthropomorphizing LLM algorithms when glitches occur, last night ChatGPT began experiencing full-blown dementia. Sometime before 6:40 pm EST on February 20, users began reporting that the OpenAI chatbot's responses to queries had become unintelligible.

The bot's behavior varied widely between users. In multiple instances, it started speaking in Spanglish, a combination of English and Spanish spoken in many American Latino households. However, the users reporting the behavior only used English.

In another example, ChatGPT responded with the phrase, "It is – and it is," repeated for more than a page of text. Someone pointed out that it was reminiscent of the scene in Stephen King's The Shining where struggling author Jack Torrance starts to lose his mind while working on his novel and types the proverb, "All work and no play makes Jack a dull boy," page after page. The only difference was that at one point ChatGPT inserted the modified phrase, "It is – and it always is," but just once, leaving it a curious oddity.

OpenAI became aware of the issue and began an investigation. It reported that it identified the problem and was working on a fix. The developer's last status update, at 8 pm EST last night, said it was "monitoring the situation." The company reported that systems were fully operational just before publication and closed the case.

OpenAI has not provided a comment or an explanation for the odd behavior, but that has not stopped the collective internet from wildly speculating. Most of the public's reactions are half-jests, but behind every joke is a grain of truth.

"Who knew that the first evidence of AGI [artificial general intelligence] would be emergent Spanglish?" read one tweet.

"It's breaking into singularity. I'm extremely scared," said another user, referring to the concept of a point in time when artificial intelligence progresses beyond human control. This would indeed be scary, but considering that chatbots, even those as advanced as ChatGPT, are nowhere near artificial general intelligence, we don't have anything to worry about.

Others are skeptical of those posting the nonsensical ChatGPT responses, claiming users are causing the madness with their prompts, which are not shown in some posts.

"You prompted that response. Try harder," one person accused.

"Why don't they post the whole chat convo from the beginning (including the custom instructions) instead of only the parts where the AI seems to get weird? That's sus," said another.

However, the commenter who seems to have won over the X thread, with over 12,000 likes and 305 retweets, finds the whole thing hilariously 420:


I love AI when it does this. Who said AI wasn't sentient? It's already lost its mind!

Of course, I'm sure that AI will drive my car any day now.
Yup, this is it. That's how AI or AGI is going to exterminate humanity. Just like that.

Everyone (especially fiction script writers) thought AI was gonna get sapient, make up a perfect, evil plan, and execute it in a flash.
Damn thing will go bananas and kill us all like a lunatic with a knife.
Using OpenAI last night was quite a trip!
Trying to plan a series of destinations over a few months, I asked it to count the exact days between dates. It couldn't correctly tell me how many days February had (32) or June (20). It gave a bizarre answer, claiming that there are 32 days in February and 30 in March and 30 in April and …

What made me pause was that it corrected itself, poorly. I got a second response with a "pardon me, there are…"
And it was still totally wrong.

Whaaaaa? Wow, AI buddy, you need some rest.
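For what it's worth, this kind of date arithmetic is exactly the sort of thing better left to deterministic code than a chatbot. A minimal Python sketch using only the standard library (the example dates are hypothetical, not from the commenter's chat):

```python
from datetime import date
import calendar

# Days in a given month; monthrange returns (weekday of day 1, number of days),
# and it handles leap years correctly.
feb_days = calendar.monthrange(2024, 2)[1]  # 29 (2024 is a leap year)
jun_days = calendar.monthrange(2024, 6)[1]  # 30

# Exact number of days between two dates: subtracting date objects
# yields a timedelta whose .days is the precise count.
trip_length = (date(2024, 6, 15) - date(2024, 2, 20)).days

print(feb_days, jun_days, trip_length)
```

Unlike an LLM predicting plausible-sounding numbers, `datetime` subtraction is exact, which is why "count the days between these dates" prompts are a reliable way to catch a model confabulating.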