The long-awaited GPT-5 model from OpenAI has been released – and it's called OpenAI o1.
If you're interested in learning more about the o1-preview and o1-mini versions, you can check out our overview of the o1 model here.
What is GPT-5?
OpenAI o1 is the latest series of large language models released by OpenAI on September 12, 2024, currently comprising two models: o1-preview and o1-mini.
The biggest difference between o1 and the company's previous models is its chain-of-thought reasoning. While it’s not yet released in full, the preview and mini models already blow GPT-4o out of the water on tests of math, science, and coding.
The new model is the first of its kind, able to reason in real time (just like a human).
What does its reasoning ability mean for users? "It's really good, like materially better," said one CEO with advanced access.
When is the GPT-5 release date?
OpenAI's latest LLM was released to the public on September 12, 2024. The release included the o1-preview and the o1-mini models.
Up until release, predictions were wide-ranging, with users and journalists alike estimating a launch anywhere from summer 2024 to as late as 2026.
How smart is GPT-5?
OpenAI has touted a list of STEM benchmarks that show off o1’s reasoning abilities, including:
- Performance similar to that of PhD students on benchmark tests in physics, chemistry, and biology.
- Placing in the top 500 students in the US qualifier for the USA Math Olympiad.
- Ranking in the 89th percentile on Codeforces, a competitive coding platform.
You can read more about o1's reasoning abilities in OpenAI's research release.
Project Strawberry
OpenAI o1 was previously code-named Strawberry, with a heavy side of mystique and intrigue. "How Strawberry works is a tightly kept secret even within OpenAI," an anonymous source shared with Reuters.
The smaller version of this new AI was launched September 12, 2024 as part of an update to ChatGPT. The larger version is likely in use by OpenAI to generate training data for its LLMs, potentially replacing the need for large swathes of real-world data.
An internal all-hands OpenAI meeting on July 9 included a demo of what may have been Project Strawberry, which was said to display human-like reasoning skills.
What’s the difference between GPT-4 and GPT-5?
OpenAI CEO Sam Altman believes the world has only scratched the surface of AI. At the World Government Summit in January 2024, Altman compared the current models from OpenAI to the early days of cell phones:
While it will take time to get from the flip phone version of GPT to the iPhone version, the o1 model brings us one step closer.
1) Enhanced reasoning abilities
At the center of its general intelligence is o1’s new ability to reason. “Maybe the most important areas of progress will be around reasoning ability,” Altman shared with Gates. “Right now, GPT-4 can reason in only extremely limited ways.”
Reasoning is notoriously difficult, even for humans, and OpenAI o1 is the first of the company's models to claim it.
There’s no shortage of users posting their GPT-4 fails on Reddit and Medium, from group roasts of its problem-solving, to formal explanations of its limited reasoning capabilities.
2) New naming convention
While its name isn't the most exciting thing about the new OpenAI LLM, it is an intentionally meaningful change.
OpenAI o1 is the first model to cast off the 'GPT' moniker, and that's because the company claims it's the first phase of a brand new 'reasoning paradigm', whereas the older models were part of a 'pre-training paradigm'.
The new model spends time reasoning in real time, rather than relying on its pre-training data.
3) Longer wait time
Reasoning in real time takes longer than referencing training data and generating a response. Ask OpenAI o1-preview a question and you'll wait significantly longer than you would with other models.
However, with the ability to outsource reasoning, it's a small price to pay. The speed of the o1 models will likely improve as the next models in the series are released.
4) Identical context windows
While many speculated an increase in context window from GPT-4 to the next model, the current o1 series remains identical to GPT-4o's context window of 128,000 tokens.
Context windows represent how many tokens (words or subwords) a model can process at once. A larger context window lets the model absorb more information from the input text, leading to more accurate answers.
One of the GPT-4 flaws has been its comparatively limited ability to process large amounts of text. For example, GPT-4 Turbo and GPT-4o have a context window of 128,000 tokens. But Google’s Gemini model has a context window of up to 1 million tokens.
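As a rough illustration of what a 128,000-token limit means in practice, here's a minimal sketch that estimates whether a document fits the window. It uses an approximate tokens-per-word heuristic purely for illustration; a real application should count tokens with the model's actual tokenizer (e.g. OpenAI's tiktoken library) instead.

```python
# Rough sketch: estimate whether a text fits a model's context window.
# Assumes ~1.3 tokens per English word on average, which is only a
# heuristic -- use the model's real tokenizer for production code.

CONTEXT_WINDOW = 128_000  # tokens, for GPT-4o and the o1 series

def estimate_tokens(text: str, tokens_per_word: float = 1.3) -> int:
    """Approximate the token count of a text from its word count."""
    return int(len(text.split()) * tokens_per_word)

def fits_in_context(text: str, window: int = CONTEXT_WINDOW) -> bool:
    """Check whether the estimated token count fits in the window."""
    return estimate_tokens(text) <= window

short_doc = "OpenAI o1 reasons step by step before answering."
print(estimate_tokens(short_doc))   # → 10
print(fits_in_context(short_doc))   # → True
```

Note that a long report of a few hundred thousand words would fail this check against a 128,000-token window but pass against Gemini's 1-million-token window, which is the practical difference the paragraph above describes.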
Right now, if your only concern is a large language model that can absorb large amounts of information, the OpenAI LLMs might not be your top choice. If you're curious about which LLM chatbot is right for you, check out our piece on the best LLM chatbots.
What training data does GPT-5 use?
If there’s been any reckoning for OpenAI on its climb to the top of the industry, it’s the series of lawsuits over how its models were trained.
GPT models are trained on massive datasets taken from the internet, much of it copyrighted. This unauthorized use of data has led to widespread complaints and legal action: a lawsuit from The New York Times, a lawsuit from a series of U.S. news agencies, and claims that the model’s training process violates the EU's General Data Protection Regulation.
A California judge has already dismissed one of the OpenAI copyright lawsuits filed by a group of writers, including celebrities Sarah Silverman and Ta-Nehisi Coates. There is no indication yet that these complaints will substantially hold OpenAI back as it continues testing.
The latest model has been trained on a combination of publicly available data and data purchased from companies. OpenAI solicited a wider variety of datasets to better train the model.
It's also likely that o1 was used to create datasets to further train the model. OpenAI explained that Strawberry would be used to train future LLMs.
How much does GPT-5 cost?
The new OpenAI o1 models are available in ChatGPT at no extra cost to subscribers, but with strict usage limits for the time being.
For API usage, the OpenAI o1-preview model costs $15 per 1 million input tokens and $60 per 1 million output tokens.
The o1-mini model costs $3 per 1 million input tokens and $12 per 1 million output tokens, making it a far more accessible model for day-to-day use.
However, these models are more costly than OpenAI's previous options. The GPT-4o model is priced at $5 per 1 million input tokens and $15 per 1 million output tokens. GPT-4o mini is priced at $0.15 per 1 million input tokens and $0.60 per 1 million output tokens.
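To get a sense of what these per-million-token rates mean per request, here's a small sketch that computes the dollar cost of a single API call for each model. The rates are the ones quoted above and are subject to change; the token counts in the example are arbitrary.

```python
# Sketch: compute API cost per request from per-million-token rates.
# Prices ($ per 1M tokens) are those quoted in the article and may change.

PRICING = {  # model: (input rate, output rate), both $ per 1M tokens
    "o1-preview":  (15.00, 60.00),
    "o1-mini":     (3.00,  12.00),
    "gpt-4o":      (5.00,  15.00),
    "gpt-4o-mini": (0.15,  0.60),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one request for the given model."""
    in_rate, out_rate = PRICING[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# A request with 2,000 input tokens and 1,000 output tokens:
print(f"{request_cost('o1-preview', 2_000, 1_000):.4f}")   # → 0.0900
print(f"{request_cost('gpt-4o-mini', 2_000, 1_000):.4f}")  # → 0.0009
```

The same request is roughly 100x cheaper on GPT-4o mini than on o1-preview, which is why the article calls o1-mini the more accessible option for day-to-day use.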
Pre-release insights from OpenAI
Leading up to the launch of o1 (previously known as Strawberry and Q*), OpenAI execs and insiders increasingly dropped tidbits of information on the next-gen model. Here's a trail of what the company stated before its release:
- OpenAI Japan's CEO announced a 2024 release date, as well as partnerships between the new product and Apple, Spotify, and Coca-Cola.
- CEO Sam Altman claimed that the next model would be able to process emails and calendar details, and that it will be more customizable.
- CTO Mira Murati explained in a Dartmouth Engineering interview that GPT-3 had the intelligence of a toddler, GPT-4 was more similar to a smart high-schooler, and that OpenAI o1 has PhD-level intelligence (in certain tasks).
- Microsoft AI CEO Mustafa Suleyman shared that it won't be until GPT-6 in two years' time that the models will be able to 'take action' in novel environments.
- Caution is paramount: CEO Sam Altman was cagey about the release date of the o1 model, explaining that OpenAI had "a lot of other important things to release first." He stated the company would release the model only when they had confidence that they could do it safely and responsibly.
- Altman joked that GPT-5 will make GPT-4 seem "mildly embarrassing" by comparison, in his Stanford interview.
- The US AI Safety Institute received early access to OpenAI's next model, so that the two organizations can "push forward the science of AI evaluations."
- It will have an extended dataset, trained on a combination of publicly available data and data purchased from companies; OpenAI solicited a wider variety of datasets to better train the model.
The Future of ChatGPT
The next generation of large language models will revolutionize how we interact with AI in our day-to-day lives. At Bloomberg’s Tech conference, OpenAI COO Brad Lightcap hinted at how the company plans to revolutionize human-computer interaction, taking GPT from an LLM to a model with agent-like capabilities.
“Will there be such a thing as a prompt engineer in 2026?” Lightcap said. “You don’t prompt engineer your friend.”
A more capable and personalized model with more multimodal capabilities promises just what Altman and OpenAI expect: the unimaginable. The o1 series brings us one step closer.
Increased customization
GPT-4 is often used as a one-size-fits-all tool. But future iterations will become more personalized. On Gates’ podcast, Altman reiterated that customizability and personalization will be key to future OpenAI models. “People want very different things out of GPT-4: different styles, different sets of assumptions.”
OpenAI has already introduced Custom GPTs, enabling users to personalize a GPT to a specific task, from teaching a board game to helping kids complete their homework. While customization wasn't at the forefront of OpenAI o1, it’s expected to become a major trend going forward.
In the meantime, you can personalize an AI chatbot equipped with the power of GPT-4o for free. It’s what we do best. Get started here.
More multimodal
Multimodality has been central to the past few iterations of GPT. OpenAI shows no signs of slowing it down.
OpenAI introduced GPT-4o in May 2024, bringing with it increased text, voice, and vision skills. A far cry from GPT-4 Turbo, it’s able to engage in natural conversations, analyze image inputs, describe visuals, and process complex audio.
Changes in multimodality create huge shifts in how we engage with GPT. Natural conversation flow – when the model can accurately interpret tonal changes and follow human-like speech patterns, like GPT-4o – is a giant leap in AI natural language processing.
And it’s not just heightened voice and text. OpenAI hasn’t been shy to tease their upcoming text-to-video model Sora. The AI model was developed to imitate complex camera motions and create detailed characters and scenery in clips up to 60 seconds.
If their history of multimodality isn’t enough, take it from the OpenAI CEO. Altman confirmed to Gates that video processing, along with reasoning, is a top priority for future GPT models.
The Power of GPT, Customized
What if your AI chatbot automatically synchronized with every GPT update?
Botpress has provided customizable AI chatbot solutions since 2017, providing developers with the tools they need to easily build chatbots with the power of the latest LLMs. Botpress chatbots can be trained on custom knowledge sources – like your website or product catalog – and seamlessly integrate with business systems.
The only platform that ranges from no code set-up to endless customizability and extendability, Botpress allows you to automatically get the power of the latest GPT version on your chatbot – no effort required.
Start building today. It’s free.