A Week with ChatGPT-5. What Now?
Is GPT-5 proof that the scaling laws have finally hit a wall? Or is OpenAI courting the mainstream?
This piece was originally published on Kludder (a Norwegian tech newsletter). Check them out here.
A week ago, OpenAI finally launched GPT-5! Or rather, they launched a whole family of GPT-5 models: fast, thinking, and pro (the names are more intuitive than before, though still not intuitive enough for everyone to keep up).
After months of speculation and high expectations, we got a launch best described as solid, but not revolutionary. Fans are disappointed, while AI sceptics are rejoicing. I'll try to summarise what we can learn from this week and look at the road ahead.
GPT-5 is better than GPT-4
Let's start by establishing one thing: GPT-5 is a better tool than GPT-4. It hallucinates less (i.e. invents false information less often), writes better, and has a smarter router that automatically selects the best model to answer your questions. For those who use ChatGPT as their go-to model, these are improvements you'll notice. Admittedly, many users disliked GPT-5's new 'personality', which led OpenAI to bring back GPT-4o as an option.
More fundamentally, GPT-5 is not the enormous leap people expected. As American AI blogger Zvi Mowshowitz points out in his thorough review, GPT-5 is best understood as an upgrade of existing technology rather than a breakthrough in fundamental intelligence. OpenAI has focused on what he calls 'mundane utility' — i.e. making AI more useful for everyday tasks — rather than pushing the boundaries of what's possible.
So what should we make of GPT-5? I think there are two important things to watch.
The end of scaling laws?
For years, the AI field has lived by a simple rule: more computing power + more data = smarter models. This is what's called scaling, and the scaling laws predicted that this recipe would yield steadily and predictably more powerful models. For those particularly interested, I've written a long note in Langsikt (the author's newsletter) on precisely this topic.
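For readers who want the recipe in symbols: as an illustrative sketch (not part of the original piece), the widely cited 'Chinchilla' scaling law from Hoffmann et al. (2022) models a language model's loss L as a power law in parameter count N and training tokens D:

```latex
% Chinchilla-style scaling law (Hoffmann et al., 2022).
% Loss falls as a power law, not exponentially, in
% parameters N and training tokens D; E is the irreducible loss.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

Because the fitted exponents are small (roughly α ≈ 0.34 and β ≈ 0.28 in the original paper), each further gain in capability demands a multiplicative increase in compute and data, which is why labs keep announcing ever-larger training clusters.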
But GPT-5 does not appear to be the result of massive scaling of the base model, as previous models were. Instead, it seems GPT-5 is the result of better post-training, strategic use of synthetic data, and smarter ways of using existing models.
For those following the debate about artificial general intelligence (AGI) — AI that matches or exceeds human intelligence — the GPT-5 launch sends mixed signals.
On the one hand, the launch shows that models are still 'right on track' according to the projections from METR, a well-respected research organisation that tests how long AI models can work on complex tasks without making errors.
On the other hand, GPT-5's underwhelming performance is good reason to think that the most dramatic scenarios, often championed by those who believe we can achieve AGI as early as 2027, will not materialise so soon.
Whether the scaling laws are dead or will rise again remains to be seen. It's worth noting that OpenAI still plans to build out Project Stargate in Texas (not the one they're building in Narvik, for those wondering — that's a separate data centre project in northern Norway) for $500 billion by 2029. This would represent scaling at a level the world has never seen. So while traditional scaling may have temporarily hit a wall, massive new investments in compute infrastructure could give it new life.
In short: GPT-5 confirms that we're still heading towards smarter AI, but the journey there may take different paths than the straight motorway we thought we were on.
OpenAI wants to win over ordinary people
Another interesting perspective on what GPT-5 represents comes from blogger Peter Wildeford. He writes:
GPT-5 is a small step for intelligence, a giant leap for normal people.
According to him, GPT-5 is not about pushing the boundaries of intelligence as far as possible, but rather about winning the race to become the model that ordinary people, or the general public if you prefer, actually use.
Wildeford highlights several practical improvements that mean more for ordinary users than for the AI elite. The model automatically selects the right 'thinking level' for your task. You no longer need to be an AI guru or nerd who knows the difference between GPT-4o and o1. Now GPT-5 should just choose correctly for you and your needs.
AI professor Ethan Mollick describes the same thing, namely that GPT-5 'just does things.' GPT-5 suggests next steps, creates documents, and even builds complex applications without you needing to guide it through every detail. We're beginning to approach a kind of iPhone moment where AI models become so user-friendly that even your great-grandmother could use them.
Those who crack this first stand to become filthy rich. As Wildeford observes, revenues for AI companies have skyrocketed. OpenAI, Anthropic, and xAI are expected to have a combined annual revenue of $32 billion by the end of 2025 — nearly double the $17 billion at the start of the year. The market valuations of these companies are even more astronomical.
The road ahead
Perhaps the most striking thing about the GPT-5 launch is how quickly we've become jaded. We forget that o3 was launched less than half a year ago (!). Yet it already feels old and outdated. We live in a time when revolutionary technology becomes mundane in weeks, not years.
This speed blindness is dangerous, I believe. We lose perspective on just how insanely fast this is actually moving. A model that three years ago would have been seen as magic is now dismissed as merely an incremental improvement.
GPT-5 may not be the great leap people were waiting for, but when we zoom out, we see that the pace of development should make us dizzy. Just because we don't get monthly revolutions doesn't mean the revolution has stopped.
It has simply become so normal that we no longer notice it.


