In a nutshell: OpenAI is testing a new "memory" feature for ChatGPT that will enable the chatbot to retain and recall information from previous conversations, so users don't have to repeat themselves. OpenAI emphasizes that users have complete control over ChatGPT's memory.

The memory feature is currently being tested with a limited group of free and paid ChatGPT users, with a wider rollout planned for the future. Users can prompt the chatbot to remember specific details or let it pick up information on its own during a conversation, improving the chatting experience over time.

OpenAI outlines some potential use cases for the memory feature. For instance, a user could specify a preferred format for meeting notes, and ChatGPT would remember it and provide recaps in that format for future meetings.

In another scenario, if a parent mentions their toddler's fondness for jellyfish, the chatbot could suggest a jellyfish-themed birthday card, showcasing the versatility of the memory feature.

ChatGPT's memory feature can be disabled at any time, and users can instruct the chatbot to forget specific information. Individual memories can also be deleted, though memories are designed to accumulate and shape the chatbot over time rather than being tied to any single conversation.

Notably, deleting a chat does not delete its associated memories; users must remove those manually.

OpenAI notes that, like any other information given to ChatGPT, memories will be used to improve the underlying machine learning models "for everyone." Users can opt out of model training in the Data Controls settings, and enterprise customers are guaranteed that their content will not be used for training.

Furthermore, the new memory feature will be extended to GPTs, the recently introduced customized versions of ChatGPT that users can transform into personalized AI assistants. OpenAI acknowledges additional privacy and safety considerations with memories, as users may share highly personal information with the for-profit chatbot.

The company commits to addressing potential issues and mitigating biases so that ChatGPT will not "proactively" recall sensitive details such as health information.