Google AI chatbot Bard dishes up wrong answer in first demo

Shawn Knight

Editor's take: It's been a big week for artificial intelligence, but a misstep out of the gate highlights the danger of moving too quickly and pushing tech to the masses before it is fully vetted. This is especially true of AI systems that dole out information some could interpret as fact.

Microsoft announced an AI-powered Bing search engine and Edge browser while Google introduced the world to Bard, an experimental conversational AI service powered by its Language Model for Dialogue Applications (or LaMDA for short). Chinese tech company Baidu is also working on a ChatGPT-like service called Ernie.

They're all in the early stages of development and will need more time for their respective creators to iron out the wrinkles, as evidenced by an embarrassing Bard misstep.

In a short video demonstrating how Bard works, the AI was asked about discoveries from the James Webb Space Telescope that could be shared with a 9-year-old. Bard supplied several answers, including that the telescope "took the very first pictures of a planet outside of our own solar system." The problem is, that's not accurate.

According to NASA, the first image of an exoplanet (2M1207b) was captured by the European Southern Observatory's Very Large Telescope (VLT) in 2004. Webb took its first photo of an exoplanet last year, but it wasn't the very first photo of an exoplanet ever captured.

The tweet featuring the incorrect response was published on February 6 and has amassed over a million views. It remains live as of this writing and is still featured on Google's blog post announcing Bard.

In its announcement earlier this week, Google said it was making Bard available to trusted testers ahead of a wider rollout to the public in the coming weeks.

In a FAQ for its AI-generated responses, Microsoft warned that Bing will sometimes misrepresent the information it finds and you could see responses that sound convincing but are inaccurate, incomplete or inappropriate. Redmond encourages people to use their own judgement and double check facts before making decisions or taking action based on Bing's responses.

Image credit: George Becker


 
There may come a time when we are no longer able to distinguish between AI and non-AI responses. Based on what I've seen, we are getting really, really close to that point, and that is what I'm worried about. Corporations, special interests, and governments will be able to manipulate AI to suit their needs.
 
"Redmond encourages people to use their own judgement and double check facts before making decisions or taking action based on Bing's responses."
This defeats the whole purpose of having an AI chatbot sort through all the sources and extract the needed information in the first place. If you have to look up trusted sources to verify the information you got from the chatbot, or use your own judgement to decide what's correct and what's false, then the AI adds no value, just an extra step in your discovery process that ultimately will not pay off.

But it gets worse than that. With the help of Bing and ChatGPT, people will be able to generate original-looking content that is in reality just cobbled together from mixed information, and post it to the web in seconds. That content will then be fed back to those same AI agents, which will repeat the possibly partial or total nonsense even further, allowing even more of it to be posted at virtually zero cost, at least partially for financial gain.

And don't even get me started on how spin doctors and propaganda outlets will be able to generate their own original-looking material. Even if the AI's output is entirely correct at first, they will be able to edit and change it quickly to fit their propaganda goals, and again post it to the web.

The web and search as we know it (and even AI-based search) is doomed and dead (we just don't know it yet), and will have to be torn down and rebuilt on verified credentials and traceability of sources and information.
 
What happens if webpages and documents created by AI chatbots become prevalent, and then, as Google and Bing scrape the web for more info to train on, they end up finding more info that was generated by other AI chatbots and not actual humans? A chatbot learning from a human has value. A chatbot learning from other chatbots ends badly. It's the blind leading the blind.

Personally, I think AI chatbots will be more of a boon to spammers (email, text, and voice) than to anyone else. Sounds like the perfect tool to scam people and fill our lives with even more BS advertising.
 
Good old Google - still sh1t at everything since the user-privacy-invading search engine from 20 years ago
 
"What happens if webpages/documents created by AI chatbots become prevalent and then as Google/Bing scrape the web for more info to train on they end up finding more info that is generated by other AI chatbots and not actual humans? ... A chatbot learning from other chatbots ends badly."
I think this is exactly what is happening, and we'll be stuck adding date restrictions to our search queries so we only search data posted before AI started polluting the internet.

The internet itself is on its way to AI-generated articles with slight inaccuracies being indexed by AI to give results with slight inaccuracies, in a never-ending loop.
This will probably result in governments forcing sites to add a meta tag to pages that have AI generated content.

Since it's still up to the creator of the article to add this tag, and there's no surefire way to verify whether something was AI-generated, this will have almost no impact.

And that's likely how it's going to go down: the internet will be ruined for quite a few decades by auto-generated nonsense taken for truth. At least we're here to see it happen, exciting times!

Considering how a good number of people were anti-vaxxers and believed the earth is flat before there was AI to make it sound credible, we're in for a real **** storm now.

Add to this the TikTok generation, which already takes content from there, narrated by automatically generated voices, as gospel, and herding the masses is easier than ever if you have enough resources. TikTok is just a preview: AI will generate more varied voices and realistic-looking faces lip-synced to them, backed by AI-generated photos and, hell, even videos at some point.

What religion used to be (ab)used for (guiding public opinion) will be done with AI instead. But this time around you don't have to be all that rich, since you don't need to bribe lots of people. Just the knowledge and enough computers will do. A lucrative future business for the dark web.
 
There is a point when these AIs will start to be trained on outputs from other AIs, either knowingly or unknowingly. If that happens, there will be a massive feedback loop of incorrect and false information that gets embedded into these systems.

Mark my words, actual, high quality, factually correct human written content will become rare.
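That feedback loop can be sketched with a toy simulation (my own illustration, not anything from the article): imagine each "generation" of a model is fit only to samples drawn from the previous generation's output. Estimation error then compounds like a random walk, so later generations drift away from the original ground truth even though no single step looks badly wrong.

```python
import random

def generations(n_gens=50, n_samples=20, noise=1.0, seed=42):
    """Toy model-collapse sketch: each generation's 'belief' is the
    sample mean of data drawn from the previous generation's output
    distribution. Small sampling errors accumulate generation after
    generation, drifting away from the ground truth (mean 0)."""
    rng = random.Random(seed)
    mean = 0.0  # ground truth that generation 0 starts from
    drift = [mean]
    for _ in range(n_gens):
        # training data comes only from the previous generation's outputs
        samples = [rng.gauss(mean, noise) for _ in range(n_samples)]
        # the new generation's belief is just the sample mean
        mean = sum(samples) / n_samples
        drift.append(mean)
    return drift

drift = generations()
print(f"generation 0: {drift[0]:.3f}, generation 50: {drift[-1]:.3f}")
```

The values `n_samples` and `noise` are arbitrary; the point is only that the final belief is no longer anchored to the ground truth once humans drop out of the loop.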
 
"...with the help of Bing and ChatGPT people will be able to generate original looking content ... which then will be fed back again to said AI agents, which then will repeat the possibly partial or total nonsense even further..."
You reminded me of an old game I saw on TV many moons ago:
They lined up ten first-graders, whispered something to the first one, who whispered it to the second, and so on. Then they asked the tenth one what was whispered. It was always wrong, and usually amusingly so.

Now we have AI to do that!
 