Facepalm: In what should probably be a red flag for the rest of us, Google parent Alphabet is warning all its employees to be cautious when using AI chatbots, even its own Bard. The company has also told its engineers to avoid directly using code generated by these services.

A Reuters report citing four people familiar with the matter states that Alphabet has advised its workers not to enter confidential information into AI chatbots.

There have been warnings against oversharing with generative AIs ever since ChatGPT rocketed into the public eye earlier this year. The National Cyber Security Centre (NCSC), part of the UK's GCHQ intelligence agency, has warned that sensitive user queries, such as health questions or confidential company information, are visible to the provider and may be used to train future versions of the chatbot.

Samsung banned the use of ChatGPT in May following three incidents in which its semiconductor fab engineers entered sensitive data into the service. Amazon has barred employees from sharing code or confidential information with the chatbot, and Apple has banned it outright. Even ChatGPT's creator, OpenAI, advises users to be careful about what they type into the prompt.

In addition to the risk of human reviewers reading sensitive data that users have entered, there's also the chance of this information being exposed in a data leak or hack. Back in March, OpenAI temporarily took ChatGPT's chat history feature offline after a bug caused the titles of other users' conversations to appear in the history sidebar on the left side of the page.

Despite the restrictions, a recent survey of 12,000 professionals found that 43% use AI tools such as ChatGPT for work-related tasks, and one-third of those do so without telling their boss.

Alphabet has also told its engineers to avoid directly using code generated by chatbots. When asked why, the company said that while Bard can make undesired code suggestions, it still helps programmers. Google added that it aims to be transparent about the limitations of its technology.

Bard hasn't had a smooth life so far. It got off to a rocky start by giving a wrong answer in its first public demo in February. A few months later, Google employees reportedly urged the company not to launch the chatbot, calling it a "pathological liar," "cringe-worthy," and "worse than useless."

There was more bad news for Bard this week when it was revealed that Google won't be launching the chatbot in Europe yet due to privacy concerns.