OpenAI's new flagship model, GPT-5, is almost here. Will it be a game changer or just another overhyped update?
Stay tuned: OpenAI's GPT-5 is expected to launch in the coming months. Will it be an AI blockbuster on the scale of ChatGPT?
Sam Altman confirmed the plan in June, during the first episode of the company's podcast, casually mentioning that the model, which he claims will bring together the capabilities of previous models, would arrive "probably sometime this summer."
Some OpenAI observers predict it will be launched in the coming weeks. The company's release history supports a quickening cadence: GPT-4 launched in March 2023, GPT-4 Turbo (which powered ChatGPT at the time) arrived in November 2023, and GPT-4o, a faster multimodal model, followed in May 2024. OpenAI has been refining and iterating its models more quickly.
But not fast enough for the AI market, which moves at a brutally accelerated, competitive pace. In February, when asked on X when GPT-5 would be released, Altman responded "weeks/months." The weeks turned into months, and in the meantime competitors have rapidly closed the gap, with Meta spending billions in the last 10 days to hire away some of OpenAI's top scientists.
According to Menlo Ventures, OpenAI's market share in the corporate sector has dropped from 50% to 34%, while Anthropic has doubled from 12% to 24%.
Google's Gemini 2.5 Pro has outperformed competitors in mathematical reasoning, DeepSeek-R1 has become synonymous with "revolutionary," surpassing closed-source alternatives, and even xAI's Grok (previously known mainly for its "fun mode") has started to be taken seriously among programmers.
What to expect from GPT-5
The next GPT model, according to Altman, will effectively be "a model to rule them all."
GPT-5 is expected to unify OpenAI's various models and tools into a single system, eliminating the need to choose between specialized models: one system that can handle text, images, audio, and possibly video.
So far, these tasks are distributed among GPT-4.1, DALL-E, GPT-4o, o3, Advanced Voice, Vision, and Sora. Centralizing everything into a single, truly multimodal model would be a significant achievement.
The technical specifications are also ambitious. The model is expected to feature a significantly expanded context window, possibly exceeding 1 million tokens, with some speculating it could reach 2 million. For comparison, GPT-4o maxes out at 128,000 tokens. This is akin to the difference between processing a chapter and digesting an entire book.
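To put those numbers in perspective, here is a rough back-of-the-envelope sketch using OpenAI's open-source tiktoken tokenizer. The words-per-novel figure is an illustrative assumption, not a measurement.

```python
# Rough illustration of what context-window sizes mean in practice.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer family used by GPT-4-era models

# Estimate tokens-per-word from a sample of plain English text.
sample = "The quick brown fox jumps over the lazy dog. " * 1000
tokens_per_word = len(enc.encode(sample)) / len(sample.split())

for label, window in [("GPT-4o", 128_000), ("rumored GPT-5", 2_000_000)]:
    words = int(window / tokens_per_word)
    novels = words / 90_000  # assumption: an average novel is ~90,000 words
    print(f"{label}: {window:,} tokens ~ {words:,} words ~ {novels:.1f} novels")
```

On these rough numbers, a 2-million-token window would hold not one book but a small shelf of them in a single pass.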
OpenAI began implementing experimental memory features in GPT-4 Turbo in 2024, allowing the assistant to remember details like the user's name, tone preferences, and ongoing projects. Users can view, update, or delete these memories, which are built up gradually over time rather than from isolated interactions.
In GPT-5, memory is expected to be even more integrated and fluid. After all, with a potential 2 million tokens instead of 128,000, the model could draw on roughly 15 times more information about the user. This would allow it to remember conversations from weeks earlier, build contextual knowledge over time, and offer continuity closer to that of a personalized digital assistant.
The expected improvements in reasoning are also ambitious. A breakthrough is anticipated in "structured chain-of-thought" processing, allowing the model to break complex problems down into logical multi-step sequences, mirroring the deliberative process of human thought.
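OpenAI has not said how GPT-5 will structure its reasoning internally, but the underlying prompting technique is easy to demonstrate with today's API. The sketch below uses the current openai Python SDK with a placeholder model name and prompt; nothing in it is GPT-5-specific.

```python
# Minimal chain-of-thought prompting sketch with the current OpenAI SDK.
# It simply instructs a model to externalize intermediate reasoning steps.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model is available to you
    messages=[
        {
            "role": "system",
            "content": (
                "Solve the problem in numbered steps. State each intermediate "
                "result before moving on, then give the final answer on its own line."
            ),
        },
        {
            "role": "user",
            "content": (
                "A train departs at 14:10 and arrives at 17:45, including a "
                "20-minute stop. How long was it actually moving?"
            ),
        },
    ],
)

print(response.choices[0].message.content)
```

The speculation around GPT-5 is that this kind of multi-step decomposition would happen natively inside the model, rather than being coaxed out by the prompt.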
As for parameters, rumors range from 10 to 50 trillion, with some touting as much as 1 quadrillion. However, as Altman himself has put it, "the era of scalability by parameters is over": AI training techniques now prioritize quality over quantity, and better learning approaches are making smaller models extremely powerful.
And this is another major challenge for OpenAI: it is running out of internet data to train its models. The solution? Have the AI itself generate its training data — which could mark a new era in AI development.
What experts are saying
"The next leap will be the generation of synthetic data in verifiable domains," stated Andrew Hill, CEO of the on-chain AI agent platform Recall, to Decrypt.
"We are bumping into the limits of internet-scale data, but advancements in reasoning show that models can generate high-quality training data when there are verification mechanisms. The simplest examples are mathematical problems, where it is possible to check if the answer is correct, and code, where unit tests can be run."
Hill sees this as something transformative: "The leap is in creating new data that is actually better than that generated by humans because it is iteratively refined through verification cycles — and created much, much faster."
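Hill's two examples, math and code, share one property: a cheap, mechanical check. The sketch below is a hypothetical toy illustration of that generate-verify-keep loop; the generate_candidate function stands in for a model call and is not a real API.

```python
# Hypothetical sketch of verification-filtered synthetic data generation:
# a model proposes answers, and only candidates that pass a mechanical
# check are kept as training examples.
import random

def generate_candidate(a: int, b: int) -> int:
    """Stand-in for a model proposing an answer to 'a + b' (sometimes wrong)."""
    return a + b + random.choice([0, 0, 0, 1, -1])  # occasionally off by one

def verify(a: int, b: int, answer: int) -> bool:
    """The verification mechanism: for arithmetic, correctness is computable."""
    return answer == a + b

dataset = []
while len(dataset) < 100:
    a, b = random.randint(0, 999), random.randint(0, 999)
    answer = generate_candidate(a, b)
    if verify(a, b, answer):  # discard anything that fails the check
        dataset.append({"prompt": f"What is {a} + {b}?", "completion": str(answer)})

print(f"Kept {len(dataset)} verified examples, e.g. {dataset[0]}")
```

For code, the verify step would run each candidate against unit tests instead of comparing a sum; the structure of the loop is the same.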
Benchmarks are also a battleground: AI expert and educator David Shapiro expects the model to reach 95% on MMLU and jump from 32% to 82% on SWE-bench, numbers that would amount to a god-tier model. If even half of that holds true, GPT-5 will certainly make headlines. Internally, there is real confidence, with some OpenAI researchers promoting the model even before its release.
Don't believe the hype
Experts consulted by Decrypt warned that those hoping GPT-5 will reach AGI (artificial general intelligence) levels should temper their enthusiasm. Hill stated he expects "an incremental step disguised as a revolution."
Wyatt Mayham, CEO of Northwest AI Consulting, took a more optimistic view, predicting that GPT-5 will likely be "a significant leap, not just incremental," adding: "I expect larger context windows, more native multimodality, and changes in how agents can act and reason. I don't bet on a silver bullet, but I think GPT-5 should expand the types of tools we can confidently offer users."
Still, every two steps forward come with a step back, Mayham said: "Each major release addresses the most obvious limitations of the previous generation, but introduces new ones."
GPT-4 resolved the reasoning gaps of GPT-3 but ran into data barriers. Reasoning models like o3 improved logical thinking but are costly and slow.
Tony Tong, CTO of Intellectia AI, a platform that provides AI insights for investors, is similarly cautious, expecting a better model but not the revolution many AI enthusiasts anticipate.
"My bet is that GPT-5 will combine deeper multimodal reasoning, better integration with tools or memory, and significant advances in alignment and agent behavior control," Tong told Decrypt. "Think of something more controllable, more reliable, and more adaptable."
Patrice Williams-Lindo, CEO of Career Nomad, predicted that GPT-5 will not be much more than an "incremental revolution." However, she suspects that it might be particularly useful for everyday AI users, rather than for corporate applications.
"The compounded effects of reliability, contextual memory, multimodality, and lower error rates can change the game in how people actually trust and use these systems in their daily lives. This alone would be a great victory," Williams-Lindo said.
Some experts are simply skeptical that GPT-5 — or any other LLM — will be remembered for long.
AI researcher Gary Marcus, a critic of purely scale-based approaches (the idea that better models simply need more parameters), wrote in his predictions for 2025: "We may not see any model of 'GPT-5 level' (i.e., a generalized quantum leap, according to community consensus) throughout 2025."
Marcus is betting on incremental updates rather than fundamentally new models, though he classifies this as one of his low-confidence predictions.
The billion-dollar brain drain
Whether Mark Zuckerberg's raid on OpenAI talent will delay the release of GPT-5 remains an open question.
"It's definitely slowing down their efforts," David A. Johnston, lead code maintainer of the decentralized AI network Morpheus, told Decrypt. Beyond the money, Johnston believes top talent is morally motivated to work on open-source initiatives like Llama rather than closed alternatives like ChatGPT or Claude.
Still, some experts believe the project is already so advanced that the loss of talent will not affect it.
Mayham stated that "the July 2025 launch seems realistic. Even with some key talents going to Meta, I think OpenAI remains on the right track. They have maintained core leadership and adjusted compensation, so it seems they are stabilizing."
Williams-Lindo added: "The momentum and capital flow from OpenAI are strong. The most impactful thing is not who left, but how those who stayed will recalibrate priorities — especially if they decide to focus on product transformation or take a pause to deal with safety or legal pressures."
If history is any guide, the world will see GPT-5 launch soon, accompanied by a flood of headlines, hot takes, and "was that it?" moments. And then, as always, the entire industry will move on to the next big question: when is GPT-6 coming?