AI search engines fail accuracy test, study finds 60% error rate

My experience is that AI is garbage.

I downloaded DeepSeek and played with it for a while. What irritated me was the inability to tailor the responses. I asked for a list of ALL configuration options, including programming options, and it listed ten vague ones. It was not possible to restrict the responses to just Yes/No.

A good tool should be configurable.
I usually provide it with a sample JSON output containing several example responses, and that seems to help.
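A minimal sketch of that approach, assuming a generic chat API: the prompt embeds the exact JSON shape you expect plus a couple of worked examples, which tends to keep answers constrained to yes/no. The example questions are hypothetical, and the actual API call is left as a placeholder.

```python
import json

# Target shape: the model should emit exactly this JSON structure,
# with "answer" restricted to "yes" or "no".
SCHEMA_EXAMPLE = {"question": "...", "answer": "yes or no"}

# Hypothetical worked examples showing the constrained format.
FEW_SHOT = [
    {"question": "Does the CLI accept a config file?", "answer": "yes"},
    {"question": "Can the web UI export chats as PDF?", "answer": "no"},
]

def build_prompt(question: str) -> str:
    """Assemble a few-shot prompt that pins the output to the JSON shape."""
    return (
        "Answer ONLY with a JSON object of this shape, no extra prose:\n"
        + json.dumps(SCHEMA_EXAMPLE)
        + "\n\nExamples:\n"
        + "\n".join(json.dumps(ex) for ex in FEW_SHOT)
        + "\n\nNow answer:\n"
        + json.dumps({"question": question})
    )

# Pass the result to whatever chat endpoint you actually use, e.g.:
# reply = my_chat_client.send(build_prompt("Is there a flag to disable telemetry?"))
print(build_prompt("Is there a flag to disable telemetry?"))
```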
 
I use DuckDuckGo, which uses Bing, BTW, but today I noticed an AI summary at the top of my search page! I have altered my Google search string so it doesn't use AI for search (how long before that stops working?), but now Bing is doing it too. I wonder if I can alter the DDG search string to remove the AI result.
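For the Google side, a small sketch: appending the documented udm=14 parameter requests the plain "Web" results view, which currently omits the AI Overview. Whether DuckDuckGo exposes an equivalent URL parameter (rather than a settings toggle) is something I can't confirm, so it's left out here.

```python
from urllib.parse import urlencode

def google_web_url(query: str) -> str:
    # udm=14 selects Google's "Web" results filter (no AI Overview).
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": "14"})

print(google_web_url("ai search engine accuracy study"))
```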
 
People using LLMs to search for truth rather than facts will always come up short, as these researchers have highlighted.
 
Grok-3 hitting 94% inaccuracy is genuinely impressive. At that point, it’s basically an anti-search engine—just assume the opposite of whatever it tells you, and you might end up with the truth.
 
In a scientific context, the source is just as important as the information.
Oh, that's just sophistry. When Copilot includes superscript links in its answers, the links always accurately go to the site where they got the presented information.

If you Google a quote from, say, an article, Google will usually find the right article, because its job is to search verbatim. AI isn't like that; its objective is to synthesize a conversational answer from numerous sources. If it comes up with an out-in-left-field false answer, you can validly criticize it, but not being able to find the source of a specific quote the way a search engine does is not its job.
 
Oh, that's just sophistry. When Copilot includes superscript links in its answers, the links always accurately go to the site where they got the presented information.

If you Google a quote from, say, an article, Google will usually find the right article, because its job is to search verbatim. AI isn't like that; its objective is to synthesize a conversational answer from numerous sources. If it comes up with an out-in-left-field false answer, you can validly criticize it, but not being able to find the source of a specific quote the way a search engine does is not its job.
For example, I asked an AI how a company handles licensing for a specific product. It summarized: "it's licensed per enabled account in policy." Then it provided a link to the site where it supposedly got this information, and nowhere in that link does it say anything about being licensed per enabled account in policy.

This is why I hate AI. I prefer regular searches because I want an EXACT hit, not made-up BS.
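That failure mode is at least cheap to catch. A rough sanity check, sketched below: fetch the cited page and see whether the key phrase from the AI's summary appears anywhere in it. This is naive substring matching over raw HTML, so a paraphrased claim would slip through, and the URL and phrase are placeholders.

```python
import urllib.request

def citation_mentions(url: str, phrase: str) -> bool:
    """Return True if the cited page contains the claimed phrase verbatim."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return phrase.lower() in html.lower()

# Hypothetical usage with the licensing example above:
# citation_mentions("https://example.com/product/licensing", "per enabled account")
```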
 
That's not what you use AI for. The researchers were using it to generate, essentially, footnotes. AI projects seem to be capable of taking a complex query that requires amalgamating a number of "established" facts and forming one coherent answer, thus saving you a lot of time. I've found it to be quite good at that -- amazing, actually.

Knowing when and how to use AI is the key to deriving benefit from it. Knowing how to design a research study that reflects the realities of a given situation is the key to producing an actually useful research study. The study being written about here fails.
You mean knowing how to twist the truth. Who do you work for?
When I ask it to write code, it can't, because it can't think. LLMs are proof of the Chinese Room argument. Don't know what I mean? Well, you might have some trouble finding out from a chatbot; they can't even tell you who invented the computer you're using right now. Garbage in, garbage out.
 
Grok-3 hitting 94% inaccuracy is genuinely impressive. At that point, it’s basically an anti-search engine—just assume the opposite of whatever it tells you, and you might end up with the truth.
And the same with any garbage that Musk and Altman spout.
 
Altman maybe. Musk is a proven genius and a visionary. Whether you agree with his motivations or not is not relevant.
So where did he get the notion of a singularity? Musk is a technological ignoramus, and America really needs to start teaching computer science in schools. It's not that difficult, and it opens up all sorts of other subjects.
 
America really needs to start teaching computer science in schools.
Both the USA and Canada have been teaching computer science in grade schools since the 1980s and advanced computer science in high schools since the 1990s. Your insults and your information are as meaningless as they are inaccurate.
 
This article is completely wrong. The inaccuracy lies in how well the chatbots cited their sources: they returned incorrect results 60% of the time when asked what their sources were. That doesn't mean AI is wrong 60% of the time.
 