Facepalm: Not for the first time, Google has warned about the limitations of its chatbot, Bard. The company says that anyone using the generative AI should also use Google search to confirm that its responses are actually correct.
Chatbots such as Bard and ChatGPT are known for their hallucinations, spitting out incorrect answers from time to time. It's something Bard creator Google is well aware of, which is why it advises people to verify any information the chatbot produces.
Google's UK boss Debbie Weinstein told the BBC's Today program that Bard was "not really the place that you go to search for specific information."
Weinstein added that Bard should be considered an "experiment" best suited for "collaboration around problem-solving" and "creating new ideas."
"We're encouraging people to actually use Google as the search engine to actually reference information they found," she said.
AI proponents have claimed that generative AIs could eventually kill off traditional search engines like Google's, so it may well be in the company's best interest to remind people to check Bard's answers against its search engine.
The generative AI tools themselves also warn about their tendency to make up "facts." ChatGPT's homepage has a disclaimer at the bottom that states it might produce inaccurate information about people, places, or facts. Bard, meanwhile, reminds users that it has limitations and won't always get it right.
This isn't the first warning Google has issued about chatbots. Parent company Alphabet last month told its employees to be cautious when using the tools, including Bard, and not to enter confidential information into the generative AIs. The company also told its engineers to avoid directly using code generated by these services.
Bard started life by generating the wrong answer in its first demo in February. A few months later, we heard that Google employees reportedly told the company not to launch the chatbot, calling it a "pathological liar," "cringe-worthy," and "worse than useless."
One of the most famous cases of an AI hallucination involved two attorneys who submitted fake legal research generated by ChatGPT in a personal injury case. One of the lawyers said he had no idea that content created by generative AIs could be false. His attempt at verifying the authenticity of the citations was simply to ask ChatGPT whether the cases were real.