AI hallucinations can influence search results and other AI, creating a dangerous feedback loop

Daniel Sims

Why it matters: Since the emergence of generative AI and large language models, some have warned that AI-generated output could eventually influence subsequent AI-generated output, creating a dangerous feedback loop. We now have a documented case of such an occurrence, further highlighting the risk to the emerging technology field.

While attempting to cite examples of false information from hallucinating AI chatbots, a researcher inadvertently caused another chatbot to hallucinate by influencing ranked search results. The incident reveals the need for further safeguards as AI-enhanced search engines proliferate.

Information science researcher Daniel S. Griffin posted two examples of misinformation from chatbots on his blog earlier this year concerning influential computer scientist Claude E. Shannon. Griffin also included a disclaimer noting that the chatbots' information was untrue to dissuade machine scrapers from indexing it, but it wasn't enough.
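For context, a prose disclaimer carries no weight with web crawlers; the standard machine-readable way to keep a page out of search indexes is a robots "noindex" directive. The snippet below is a generic Python sketch of that idea (not Griffin's actual setup, and the page markup is hypothetical), though as this incident shows, even explicit labeling is no guarantee that LLM-backed search will not repeat the content.

```python
# Generic illustration (not Griffin's actual setup): a human-readable
# disclaimer means nothing to a crawler, but a robots "noindex" meta tag
# is the standard machine-readable way to ask search engines to skip a page.
NOINDEX_TAG = '<meta name="robots" content="noindex">'

def mark_noindex(html: str) -> str:
    """Insert a robots noindex directive at the start of the <head> element."""
    return html.replace("<head>", f"<head>{NOINDEX_TAG}", 1)

# Hypothetical page containing examples of chatbot hallucinations.
page = "<html><head><title>Hallucination examples</title></head><body>...</body></html>"
print(mark_noindex(page))
```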

Griffin eventually discovered that multiple chatbots, including Microsoft's Bing and Google's Bard, had referenced the hallucinations he'd posted as if they were true, ranking them at the top of their search results. When asked specific questions about Shannon, the bots used Griffin's warning as the basis for a consistent but false narrative, attributing a paper to Shannon that he never wrote. More worryingly, the Bing and Bard results offered no indication that their sources originated from LLMs.

The situation is similar to cases where people paraphrase or quote sources out of context, leading to misinformed research. Griffin's case shows that generative AI models can automate that mistake at a frightening scale.

Microsoft has since corrected the error in Bing and hypothesized that the problem is more likely to occur when dealing with subjects where relatively little human-written material exists online. Another reason the precedent is dangerous is that it presents a theoretical blueprint for bad actors to intentionally weaponize LLMs to spread misinformation by influencing search results. Hackers have been known to deliver malware by tuning fraudulent websites to attain top search result rankings.

The vulnerability echoes a warning from June suggesting that as more LLM-generated content fills the web, it will be used to train future LLMs. The resulting feedback loop could dramatically erode AI models' quality and trustworthiness in a phenomenon called "Model Collapse."
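The mechanism is easy to demonstrate with a toy simulation. The sketch below is a deliberately simplified illustration, not how real LLM training works: it fits a simple Gaussian model to data, then lets each new "generation" train only on samples produced by the previous one.

```python
# Toy illustration of the feedback loop behind "model collapse": each
# generation is trained only on the previous generation's synthetic output,
# so estimation noise accumulates and the learned distribution drifts away
# from the original human-made data.
import numpy as np

rng = np.random.default_rng(seed=0)

# Generation 0: genuine "human-made" data.
data = rng.normal(loc=0.0, scale=1.0, size=500)

for generation in range(1, 16):
    mu, sigma = data.mean(), data.std()  # "train" a model on the current data
    print(f"gen {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")
    # The next generation sees only synthetic samples from the fitted model.
    data = rng.normal(loc=mu, scale=sigma, size=500)
```

Run for enough generations, the estimated mean wanders and the variance tends to drift, even though every step looked like legitimate training data at the time.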

Companies working with AI should ensure training continually prioritizes human-made content. Preserving less well-known information and material made by minority groups could help combat the problem.
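One practical form that could take is tracking provenance in the training pipeline. The sketch below assumes a hypothetical corpus format in which each record carries a "source_type" label and an "ai_generated" flag (both invented for illustration); it simply filters out suspected machine-generated text before training, one crude way to keep human-made content in the mix.

```python
# Minimal sketch, assuming a hypothetical corpus where each record carries
# provenance metadata: keep human-written text and drop suspected AI output.
from typing import Iterable

def select_human_made(records: Iterable[dict]) -> list[dict]:
    """Return only records labeled as human-made and not flagged as AI-generated."""
    return [
        rec for rec in records
        if rec.get("source_type") == "human" and not rec.get("ai_generated", False)
    ]

corpus = [
    {"text": "Hand-written encyclopedia entry.", "source_type": "human"},
    {"text": "Synthetic blog spam.", "source_type": "unknown", "ai_generated": True},
]
print(select_human_made(corpus))  # keeps only the first record
```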


 
I asked Bard to make a $500 PC, just to test its ability. It posted some parts, but the i3-12100F was not compatible with the mobo, which supported 10th and 11th gen Intel only. Then it apologized. Wow!
 
"AI hallucinations and can influence search results and other AI, creating a dangerous feedback loop"

So... was the typo an intentional in-joke there?
 
I asked Bard to make a $500 PC, just to test its ability. It posted some parts, but the i3-12100F was not compatible with the mobo, which supported 10th and 11th gen Intel only. Then it apologized. Wow!

It does learn if you tell it how it was wrong and it corrects itself using external sources.
 
VaRmeNsI, I agree that it's learning. I noticed it with Google Bard. But it's as if Microsoft Bing doesn't accept knowledge from random people, even when they're right. I asked Bing how much memory Win 11 really uses; it said 4.5GB, but Task Manager was showing 0.5GB (from user-started apps, apparently)!!! It cited a question posted on the MS Community forum by someone who had little knowledge and needed assistance. I told it that it's between 1.6GB and over 3GB depending on whether the system was debloated, then repeated the question, and it plopped out the same old answer!
 
This AI Kessler Syndrome was the very first thing I predicted when AI image synthesizers and GPT-3 started getting big in 2021. I figured it would start with image search, but it looks like it's gonna go straight for the jugular and make the entire internet even less searchable than it already is.
 
While AI will eventually be a worthy addition to computers and the internet, I think it is still too young and untested, resulting in a number of serious flaws. It's understandable they want to test it in the real world, but the real world deserves much better protection from some of its serious mistakes...
 