What just happened? ChatGPT's ability to streamline processes and chew through mundane tasks is well-documented, which is why so many companies now use the generative AI. But there have been repeated warnings not to hand the chatbot sensitive information, as it could be exposed - a warning some Samsung employees failed to heed.

A UK intelligence agency and Europol have both advised people to be careful about what they share when interacting with ChatGPT. Even its creator, OpenAI, advises users not to share any sensitive information in conversations with the chatbot, noting that conversations may be used for training purposes. Amazon and JPMorgan are just two of the companies that have advised their employees not to use ChatGPT for these reasons.

Samsung Semiconductor's fab engineers, however, have been using ChatGPT, and it has led to three sensitive data leaks in just 20 days, writes The Economist.

One incident saw an employee use ChatGPT to check the source code of the semiconductor facility's measurement database program for errors. Another, more serious case involved a worker entering program code he had written to identify yield and defective facilities, and asking ChatGPT to optimize it.

The final incident happened when an employee used the Naver Clova application to transcribe a meeting he had recorded on his smartphone, then submitted the resulting document to ChatGPT and asked it to prepare meeting minutes.

By using ChatGPT in this way, the Samsung employees essentially put some of the company's proprietary information and trade secrets into the hands of OpenAI. The leaks prompted Samsung Electronics to warn its employees that once information is entered into ChatGPT and sent, it is transmitted to and stored on an external server, making it impossible for Samsung to retrieve or delete it. That sensitive content could then be exposed to an unknown number of people.

Samsung Electronics is preparing measures to prevent any more sensitive information from leaking through ChatGPT, including limiting submitted questions to 1,024 bytes, but if these incidents continue, it will block the tool on the company's network. It is also planning to develop its own ChatGPT-like generative AI for internal use.
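Samsung has not said how the size limit would be enforced, but the idea is simple: reject any outbound prompt whose encoded size exceeds the byte budget before it ever reaches the external service. A minimal Python sketch of such a check (the names here are hypothetical, not Samsung's implementation) might look like this:

```python
MAX_PROMPT_BYTES = 1024  # the reported per-question limit

def is_prompt_allowed(prompt: str) -> bool:
    """Return True if the prompt fits the byte budget once UTF-8 encoded.

    Measuring bytes rather than characters matters because multi-byte
    characters (e.g. Korean text) take up more than one byte each.
    """
    return len(prompt.encode("utf-8")) <= MAX_PROMPT_BYTES

# A short question passes; a large source-code dump does not.
print(is_prompt_allowed("Why does this loop never terminate?"))  # True
print(is_prompt_allowed("x" * 2000))                             # False
```

A limit this small would effectively stop employees from pasting whole source files or meeting transcripts, while still allowing short questions.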