What just happened? Organizations have warned ChatGPT users for years never to share personal information with OpenAI's chatbot. A recent incident involving a now-removed feature reveals that potentially thousands of people disclosed deeply intimate information to ChatGPT and inadvertently made it discoverable through Google search.

OpenAI recently confirmed that it has deactivated an opt-in feature that shared chat histories on the open web. Although the functionality required users' explicit permission, its description might have been too vague, as users expressed shock after personal information from their chats appeared in Google search results.

Users often share ChatGPT logs with friends, family members, and associates, assuming that only the intended recipients will see the links. However, OpenAI tested an additional option to make chats discoverable, with fine print noting that they would then appear in search engine results.

The company's warning appears to have been too easy to overlook. Fast Company discovered almost 4,500 conversations by entering portions of ChatGPT share-link URLs into Google, and many of the logs contained information that nobody would likely publish on the open web intentionally.

Although the search results didn't reveal users' full identities, many of the logged conversations included names, locations, and other identifying details. The logs also showed that numerous people discuss personal struggles with ChatGPT, including anxiety, addiction, abuse, and other sensitive topics.

OpenAI quickly removed the discoverability feature, describing it as an experiment to help spread "useful" conversations. The company is currently working to de-index everything that users shared.

ChatGPT users should also be aware that a US court order requires OpenAI to retain all chat logs indefinitely. The company would normally delete them periodically, but publishers, including The New York Times, are currently suing OpenAI and want to investigate whether ChatGPT can reproduce copyrighted material when prompted. Until the case is resolved, designated legal teams can view any information users input.

Prior incidents have shown that this can also include trade secrets. In 2023, Samsung fab employees were caught using ChatGPT in ways that inadvertently handed confidential company information to OpenAI. Asking the chatbot to optimize code or generate meeting minutes, for example, requires submitting data that might contain trade secrets.

Proton recently launched a rival chatbot amid the controversy. Lumo is the latest part of the company's push to position its product lineup as a privacy-focused alternative to Google and Microsoft. The Swiss company promises to encrypt users' communications, never retain personal information, maintain an ad-free business model, and keep its code open source.