What Makes Chatbots ‘Hallucinate’ or Say the Wrong Thing?

We’re already seeing real-world consequences of A.I. hallucination. Stack Overflow, a question-and-answer site for programmers, temporarily barred users from submitting answers generated with ChatGPT, because the chatbot made it far too easy to submit plausible but incorrect responses.

“These systems live in a world of language,” said Melanie Mitchell, an A.I. researcher at the Santa Fe Institute. “That world gives them some clues about what is true and what is not true, but the language they learn from is not grounded in reality. They do not necessarily know if what they are generating is true or false.”

(When we asked Bing for examples of chatbots hallucinating, it actually hallucinated the answer.)

Think of the chatbots as jazz musicians. They can digest huge amounts of information — like, say, every song that has ever been written — and then riff on the results. They have the ability to stitch together ideas in surprising and creative ways. But they also play wrong notes with absolute confidence.

Sometimes the wild card isn’t the software. It’s the humans.

We are prone to seeing patterns that aren’t really there, and assuming humanlike traits and emotions in nonhuman entities. This is known as anthropomorphism. When a dog makes eye contact with us, we tend to assume it’s smarter than it really is. That’s just how our minds work.

And when a computer starts putting words together like we do, we get the mistaken impression that it can reason, understand and express emotions. We can also behave in unpredictable ways. (Last year, Google placed an engineer on paid leave after dismissing his claim that its A.I. was sentient. He was later fired.)

The longer the conversation runs, the more influence you have on what a large language model is saying. Kevin Roose's infamous conversation with Bing is a particularly good example. After a while, a chatbot can begin to reflect your thoughts and aims, according to researchers like the A.I. pioneer Terry Sejnowski. If you prompt it to get creepy, it gets creepy.

He compared the technology to the Mirror of Erised, a mystical artifact in the Harry Potter novels and movies. “It provides whatever you are looking for — whatever you want or expect or desire,” Dr. Sejnowski said. “Because the human and the L.L.M.s are both mirroring each other, over time they will tend toward a common conceptual state.”
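Mechanically, that mirroring is straightforward to picture: a chatbot has no memory apart from the transcript itself, so each new reply is generated from everything said so far, and the user's wording steadily becomes a larger share of the input the model conditions on. Below is a minimal sketch of that feedback loop; the `next_reply` function is a hypothetical stand-in for a language model, not any real chatbot's API.

```python
# A minimal sketch of why long chats drift toward the user's framing.
# `next_reply` is a stand-in for a language model, not a real chatbot
# API: the point is only that each reply is generated from the entire
# transcript so far, so the user's turns make up a growing share of
# the input that shapes what comes next.

def next_reply(context: str) -> str:
    # A real model would generate text conditioned on `context`;
    # here we just show that the full history is what it sees.
    return f"[reply conditioned on {len(context)} characters of chat]"

history: list[str] = []
for user_turn in ["Tell me about yourself.",
                  "Drop the formal tone.",
                  "Now get a little creepy."]:
    history.append(f"User: {user_turn}")
    context = "\n".join(history)      # the whole conversation so far
    history.append(f"Bot: {next_reply(context)}")

print("\n".join(history))
```

Each pass through the loop feeds the entire history back in, which is why a tone set early by the user keeps compounding rather than fading out.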
