Google employees call its Bard AI chatbot "a liar," "useless," and "cringe-worthy"

midian182

Facepalm: It's no secret that Google rushed out its Bard chatbot last month as it tried to keep up in the generative-AI race. But the tech giant should have taken more time with the project. According to a new report, employees told Google not to launch Bard, calling it a "pathological liar," "cringe-worthy," and "worse than useless."

We heard back in February that Google was rushing to launch its own ChatGPT-like technology over fears of being left behind in the generative-AI revolution sparked by OpenAI's tech. Days later, Bard was shown off to an unimpressed public in a demo that saw the chatbot give a wrong answer. Nevertheless, Google went ahead and launched Bard in March.

According to a new report from Bloomberg, citing internal documentation and 18 current and former employees, it might have been more prudent for Google to keep polishing Bard before allowing early access to the "experimental" AI.

Among the criticisms: one employee wrote, "Bard is worse than useless: please do not launch," in an internal message group viewed by 7,000 people, many of whom agreed with the assessment. Another employee asked Bard for suggestions on how to land a plane, and it regularly gave answers that would cause a crash. Bard also gave answers on scuba diving that "would likely result in serious injury or death."

Despite workers' pleas, Google "overruled a risk evaluation" submitted by an internal safety team warning that Bard wasn't ready for release.

Bloomberg's sources say Google, in a bid to keep up with rivals, is offering low-quality information while giving less priority to ethical commitments. It's claimed that staff responsible for the safety and ethical implications of new products have been told to stay away from generative-AI tools in development.

Google, which removed its "Don't be evil" motto from its code of conduct in 2018, fired AI researcher Timnit Gebru in 2020 after she co-authored a research paper on the risks of large AI language models. Margaret Mitchell, the co-lead of the company's Ethical AI team and another author of the paper, was fired a few months later over misconduct allegations.

Meredith Whittaker, a former manager at Google, told Bloomberg that "AI ethics has taken a back seat" at the company.

Google is falling behind in generative AI at the same pace that Microsoft is pushing ahead with its own efforts. The integration of AI features into Bing has not only helped the search engine pass 100 million daily active users for the first time in its history but also led Samsung to consider switching from Google to Bing as the default search engine on its devices.


 
Sounds about right to me. Push crap out the door because "Fad" and then have the crap fall directly into the flushing toilet. IMO, it's not unlike what everyone did when jumping on the streaming Fad. One would think Execs would learn from past mistakes, but are execs ever wrong in their own eyes? ;)
 
Google has always been a deeply dishonest company, built around stealing as much of your data with as little consent as they could get away with. I'm relieved to see the world waking up to them these days; for years they had this weird 'we're the good guys' reputation which they absolutely did not merit. They made a good search engine with a fantastic data-scraper behind it 20 years ago, and since then it's been one turd after another.
 
Well, "pathological liar" and "cringe-worthy" is quite in line with Google's agenda. Which means their AI is progressing exactly as planned.
 