(San Francisco) ChatGPT, the wildly popular generative artificial intelligence (AI) interface that brought the technology into the mainstream, went haywire for several hours on Tuesday, answering users' questions with nonsensical sentences, a reminder that these systems are still in their infancy.
OpenAI, the startup that launched the program in late 2022, said on its website Wednesday morning that ChatGPT was working “normally” again.
On Tuesday afternoon (San Francisco time, where the company is based), OpenAI announced that it was "investigating reports of unexpected responses from ChatGPT." A few minutes later, the Silicon Valley star assured users that it had "identified the problem" and was "working on a fix."
Many users posted screenshots showing erratic or incomprehensible responses from the generative AI model. The cutting-edge technology can produce all kinds of content (text, audio, video) from simple requests in everyday language, usually of impressive quality.
On the forum for developers using OpenAI tools, a user named “IYAnepo” noticed ChatGPT’s “strange” behavior.
"It generates completely nonexistent words, omits words and, among other anomalies, produces sequences of short keywords that are incomprehensible to me," he wrote. "One might think I had given it such instructions, but that is not the case. I feel like my GPT is haunted […]."
Another user, “scott.eskridge,” complained on the same forum that all of his conversations with the language model had “quickly turned into nonsense over the past three hours.”
He copied an excerpt from one of the interface's responses: "Money for the bit and the list is one of the strangers and the Internet, where the currency and the person of the cost is one of the friends and the currency. Next time look at the system, the exchange and the fact, remember to give something."
OpenAI did not provide further details about the nature of the incident, which serves as a reminder that AI, even generative AI, has no awareness or understanding of what it "says."
AI specialist Gary Marcus hopes the incident will be seen as a “wake-up call.”
"These systems have never been stable. No one has ever been able to engineer safety guarantees around these systems," he wrote in his newsletter on Tuesday. "The need for completely different technologies that are less opaque, more interpretable, easier to maintain and debug, and therefore easier to implement, remains paramount," he added.