If you’re wondering about your data privacy when using ChatGPT, then you’re in the right place – and hats off to you. It’s always best to be aware of the data privacy of the software you use, particularly when it involves personal information.
In this article, I’ll explain:
- What type of data OpenAI saves from your ChatGPT interactions
- How OpenAI uses your data
- How to opt out of using your data to train OpenAI models
- How to delete your ChatGPT data
- OpenAI’s previous data leaks
Ready for your ChatGPT privacy check-up? Let’s get started.
Does ChatGPT save your data?
Yes, ChatGPT saves a lot of your data, including all prompts, questions, and queries that you ask the chatbot.
Like other SaaS applications, ChatGPT saves a variety of user data, including device data, usage data, log data, account information, and user content.
So not only will ChatGPT save your prompts, it will save information like your location and what kind of device you’re using.
The information that OpenAI may collect includes:
- All text input to ChatGPT (e.g. prompts, questions)
- Geolocation data
- Commercial information (e.g. transaction history)
- Contact details
- Device and browser cookies
- Log data (e.g. IP address)
- Account information (e.g. name, email, and contact information)
Does ChatGPT sell data?
No, ChatGPT doesn’t sell your data. ChatGPT doesn’t share user data with third parties without consent. The data collected is used only to improve the chatbot's performance and provide a better user experience.
How does ChatGPT use my data?
OpenAI uses data from its users to train its large language models (LLMs), like the upcoming GPT-5.
This doesn’t necessarily mean that employees will look at your data, but OpenAI’s AI trainers are permitted to use ChatGPT conversations when training models.
OpenAI does NOT use your data for:
- Marketing: OpenAI has been clear that user conversations with ChatGPT are not used for marketing purposes.
- Money: OpenAI doesn’t sell your data to third parties without your consent.
How to opt out of training
ChatGPT may use your information to train its models, but users can opt out. By navigating to OpenAI’s privacy portal, you can submit a request to opt out of training.
If you have an Enterprise ChatGPT account, the default is that your input will not be used for training purposes.
The OpenAI Privacy Portal
If you’re interested in what data OpenAI holds, or you want to change your data privacy setting, OpenAI offers a privacy portal. It allows you to:
- Request a copy of your data
- Ask OpenAI to stop training models on your data
- Delete your ChatGPT account and all the data associated with it
- Ask OpenAI to remove your personal data from ChatGPT model outputs
If you’ve made a request, you can also log into the privacy portal to check its status.
Where does ChatGPT store your data?
OpenAI stores user information on secure servers located in the U.S., though the exact locations of those servers are not publicly disclosed.
Can ChatGPT see the data stored in a knowledge base?
Whether OpenAI can access the information stored in a knowledge base depends on what platform you use to customize your chatbot.
Most of the LLMs you use to power an AI chatbot will need to access your linked knowledge bases in order to source accurate information for a user’s prompt.
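To see why the model ends up reading your knowledge base, here is a minimal, hypothetical sketch of how a retrieval-style chatbot typically works. The function names, the keyword-overlap scoring, and the sample knowledge base are all illustrative assumptions, not any vendor's actual implementation:

```python
# Hypothetical sketch: an LLM-powered chatbot pulling answers from a linked
# knowledge base. Names and the naive scoring are illustrative only.

def retrieve(query: str, knowledge_base: list[str], top_k: int = 2) -> list[str]:
    """Rank knowledge-base entries by simple keyword overlap with the query."""
    query_words = set(query.lower().split())
    scored = sorted(
        knowledge_base,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, knowledge_base: list[str]) -> str:
    """Retrieved passages are inlined into the prompt sent to the LLM --
    which is why the model provider can end up seeing knowledge-base content."""
    context = "\n".join(retrieve(query, knowledge_base))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

kb = [
    "Refunds are processed within 14 days of a return request.",
    "Support hours are 9am to 5pm, Monday through Friday.",
    "Shipping is free on orders over $50.",
]
print(build_prompt("When are support hours?", kb))
```

The key point is in `build_prompt`: whatever the retriever pulls from your knowledge base becomes part of the text sent to the model, so it is subject to whatever data policy governs that model.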
How do I delete my ChatGPT data?
You can delete the data stored by ChatGPT by deleting your account. OpenAI will delete all of your data within 30 days.
But bear in mind: if you want to create a new account, you’ll need to do so with a new email address. You cannot delete your account and then open a new account with the same email.
You can still use ChatGPT without an account, but it will only support one conversation at a time.
Where does ChatGPT get its data?
ChatGPT was trained on a wide variety of information – a combination of licensed data, publicly available data, and data created by human trainers.
The individual datasets used by OpenAI AI trainers are undisclosed, but they include broad swathes of publicly available information on the internet.
Is ChatGPT confidential?
OpenAI employees will only use your data for training purposes. However, if you have hesitations about sharing certain data with ChatGPT, it’s always better to withhold it.
Samsung employees discovered this the hard way when several of them shared source code with ChatGPT to check for errors. This leak of proprietary information led Samsung to ban the use of ChatGPT in its workplace.
Since this incident, OpenAI has introduced more data security features to protect users against sharing private information. However, many organizations still prevent employees from sharing data with ChatGPT.
Has ChatGPT had a data leak?
Yes, ChatGPT had a data breach on March 20, 2023. ChatGPT was taken offline after employees found a bug in an open-source library.
The bug allowed users to see titles from other users’ chat history. But the bigger problem was that it may have exposed payment information from ChatGPT Plus subscribers who were active during a specific 9-hour window (about 1.2% of ChatGPT Plus subscribers).
Users’ credit card numbers were not leaked, but the following information was:
- First and last name
- Email address
- Payment address
- Credit card type
- Last 4 digits of a credit card
- Credit card expiration date
OpenAI believed the number of users whose data was actually revealed to others was very low – the steps needed to access this information were uncommon.
OpenAI has published a full report on the bug.
The secure way to use LLMs
If you want to harness the power of LLMs, you can customize your own AI chatbot.
An LLM-powered chatbot offers guardrails that prevent information from entering the cloud or being released to third parties.
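One common form of guardrail is redacting sensitive data before a prompt ever leaves your environment. The sketch below is a hypothetical, minimal example of that idea; the patterns shown catch only obvious emails and card numbers and are nowhere near a complete PII detector:

```python
# Hypothetical guardrail sketch: redact obvious PII before a prompt is sent
# to any cloud LLM. The patterns are illustrative, not exhaustive.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each PII pattern with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Contact jane.doe@example.com, card 4111 1111 1111 1111."
print(redact(prompt))
```

Production guardrails go further (entity recognition, allowlists, audit logs), but the principle is the same: sensitive values are stripped or masked client-side, so they never reach a third-party model.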
Our chatbot platform provides enterprise-grade security. We’re used by some of the biggest companies worldwide to deploy safe, on-brand chatbots and AI agents.
Start building today. It’s free.
Or contact our sales team to learn more.