Google says users should confirm its Bard AI chatbot's responses are correct


Facepalm: Not for the first time, Google has given a warning about the limitations of its chatbot, Bard. The company says that anyone using the generative AI should also use Google search to confirm that its responses are actually correct.

Chatbots such as Bard and ChatGPT are known for their hallucinations, spitting out incorrect answers from time to time. Google, Bard's creator, is well aware of this and advises people to confirm any information the chatbot produces.

Google's UK boss Debbie Weinstein told the BBC's Today program that Bard was "not really the place that you go to search for specific information."

Weinstein added that Bard should be considered an "experiment" best suited for "collaboration around problem-solving" and "creating new ideas."

"We're encouraging people to actually use Google as the search engine to actually reference information they found," she said.

AI proponents have claimed that generative AIs could potentially kill off traditional search engines like Google's, so maybe it's in the company's best interest to remind people to check Bard's answers.

The generative AI tools themselves also warn about their tendency to make up "facts." ChatGPT's homepage has a disclaimer at the bottom that states it might produce inaccurate information about people, places, or facts. Bard, meanwhile, reminds users that it has limitations and won't always get it right.

This isn't the first warning Google has issued about chatbots. Parent company Alphabet last month told its employees to be cautious when using the tools, even Bard, and not enter confidential information into the generative AIs. The company also told its engineers to avoid directly using code generated by these services.

Bard started life by generating the wrong answer in its first demo in February. A few months later, we heard that Google employees reportedly told the company not to launch the chatbot, calling it a "pathological liar," "cringe-worthy," and "worse than useless."

One of the most famous cases of an AI hallucination involved two attorneys who submitted fake legal research generated by ChatGPT in a personal injury case. One of the lawyers said he had no idea that content created by generative AIs could be false; his attempt at verifying the authenticity of the citations was to ask ChatGPT whether the cases were real.


"Google says users should confirm its Bard AI chatbot's responses are correct".
Rhetorical question: if Alphabet-Google is an evil company, what kind of AI will they produce?
BTW, users can already confirm that Alphabet-Google is an evil company.
Use their software and then research whether the response you were given is correct? GTFO, Google, do your own damn work.

I wonder if people can "train" these "AI" chatbots to "learn" all the wrong stuff: keep telling them that the info they provided was wrong, then feed them incorrect data and tell them it's correct. I, for one, don't like these glorified search engines... I'm guessing I'm just part of the minority that feels this way.
I.e., Google wants users to train its AI for free because they know it's quite stupid, but can't be bothered to do an OpenAI and pay people to verify and adjust responses as humans do, because it's Google, and they are as cheap as they can get about everything.

Also, it seems like they're still worried about other AIs eventually eliminating the need for a search engine in a lot of cases, which Google/Alphabet of course cares about because of ad revenue, their one true source of actual income. It's why I always feel like the company is walking on a very wobbly bridge right now, massive or not.