ChatGPT temporarily shut down after bug exposed users' chat histories
Titles, not contents, were exposed

By Rob Thubron
What just happened? ChatGPT could have run into its first significant privacy issue after a bug in the service exposed the titles of other users' chat histories. Creator OpenAI has taken the chat history feature offline as it attempts to rectify the problem.
An OpenAI spokesperson confirmed to Bloomberg that the titles of other users' conversations were visible in the user-history sidebar that's found on the left side of the ChatGPT webpage. They emphasized that only these short descriptions could be seen, not the contents of the conversations.
While OpenAI is still investigating the precise cause of the problem, the company confirmed that it originated in a bug in an unnamed piece of open-source software. Reddit and Twitter users (via The Verge) shared screenshots of the bug, which left users wondering whether their accounts had been hacked or ChatGPT itself had been breached.
If you use #ChatGPT be careful! There's a risk of your chats being shared to other users!

Today I was presented another user's chat history.

I couldn't see contents, but could see their recent chats' titles. #security #privacy #openAI #AI pic.twitter.com/DLX3CZntao

— Jordan L Wheeler (@JordanLWheeler) March 20, 2023
OpenAI took the entire ChatGPT service offline on Monday, bringing it back later that evening without user chat histories, which remain unavailable as of this morning. OpenAI's status page states that it is continuing to work on restoring past conversation history to users.
OpenAI itself advises users not to share any sensitive information in conversations with ChatGPT, noting that conversations may be used for training purposes.
The incident is a reminder of recent warnings about generative AI tools such as ChatGPT from the UK's National Cyber Security Centre (NCSC). The agency raised concerns about sensitive user queries being available to the providers (OpenAI, in this case) and being used to train future versions of the chatbot. The examples it gave included a CEO asking about the best way to fire an employee, or someone asking a personal medical question.
The NCSC also proved prescient in its warning that stored queries could accidentally be made publicly available. Luckily, there was no personally identifiable information revealed alongside the conversation titles in this instance.
Ultimately, the bug illustrates why companies like Amazon and JPMorgan have advised their employees not to use ChatGPT over concerns that sensitive information could be leaked.
Masthead: Focal Foto