AI Helped Me Beat Silicon Valley's Best Minds at a Hackathon. It Shouldn't Have Been Possible
In Silicon Valley, more and more people are talking about their 'AGI moment.' In Norway, we dismiss it as hype. One of us is wrong.
We don't see what's about to hit us | Hokusai

This article was originally published in Altinget (a Nordic political news outlet). Read more from Altinget here.
In recent months, the mood has shifted dramatically in Silicon Valley, where I'm currently studying. More and more people I talk to now speak of a new 'AGI moment'. That is, they truly feel that AI models have made another noticeable leap in intelligence. For me, this feeling came last weekend.
I had signed up for a hackathon organised by the newspaper The Atlantic. Hackathons are practically a sport here on the West Coast. You get locked inside a building for a limited time — in my case, eight hours — and have to code an app. In the sixth hour, the collaboration with my team fell apart and I was left alone. Two and a half hours to go.
I have no experience with coding or programming. Yet within two and a half hours, I was to go on stage and present a product that existed nowhere but in my head. An impossible task by any normal standard.
With the help of Anthropic's AI model Claude Code, I was able to materialise an idea into software. The idea was to build an algorithm that could surface newspapers' timeless content — so-called 'evergreen content' — from The Atlantic's archives and recommend it to readers based on their personal interests.
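The article doesn't show any of the prototype's code, but the core idea — scoring archive articles for timelessness and matching them to a reader's interests — could be sketched roughly like this. Everything here (the article data, the scoring heuristics, the function names) is invented for illustration; the actual hackathon prototype's internals are not described in the article.

```python
# A minimal, illustrative sketch of an "evergreen content" recommender.
# All data and heuristics are invented; this is not the hackathon code.

def evergreen_score(article):
    """Heuristically rate how timeless an article is:
    penalise dated words in the title."""
    dated_words = {"today", "yesterday", "breaking", "2024"}
    title_words = set(article["title"].lower().split())
    penalty = len(title_words & dated_words)
    return max(0.0, 1.0 - 0.5 * penalty)

def interest_match(article, interests):
    """Fraction of the reader's interests covered by the article's tags."""
    if not interests:
        return 0.0
    return len(set(article["tags"]) & interests) / len(interests)

def recommend(archive, interests, k=2):
    """Rank articles by evergreen-ness weighted by interest match."""
    scored = [(evergreen_score(a) * interest_match(a, interests), a["title"])
              for a in archive]
    scored.sort(reverse=True)
    return [title for score, title in scored[:k] if score > 0]

archive = [
    {"title": "How to Think Clearly", "tags": ["psychology", "writing"]},
    {"title": "Breaking: Election Results Today", "tags": ["politics"]},
    {"title": "Why We Sleep", "tags": ["science", "psychology"]},
]
print(recommend(archive, {"psychology", "science"}))
# → ['Why We Sleep', 'How to Think Clearly']
```

A production version would presumably use text embeddings rather than keyword overlap, but the shape of the problem — one timelessness signal multiplied by one personalisation signal — stays the same.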
To my infinite surprise, Claude manages to do what teams of up to eight experienced Silicon Valley engineers are doing. With minimal input from me, I suddenly watch thousands of lines of code being written on my screen. It builds the backend, creates the design, and connects all the pieces. Within a few hours, I have a fully functioning prototype.
In the end, the jury selects the product I present as one of three winners out of 20 teams.
It shouldn't have been possible.
A long way to Norway
While people in Silicon Valley talk about 'AGI moments', I find that the discussion in Norway is on a completely different track. The public debate centres on how one might perhaps get a bit more AI into the public sector, or whether we need a few more courses on prompt engineering.
And to the extent anyone mentions artificial general intelligence (AGI), it is almost without exception to assure us that this is just hype from the US.
Take, for example, Inga Strümke, Norway's most influential AI expert. Recently, researchers from OsloMet posed the question 'Why is there so little discussion in Norway about the danger of losing control of AI development?' in the newspaper Klassekampen (a Norwegian daily newspaper).
Strümke's response is that we talk little about superintelligence because we have a 'relatively enlightened and digitally competent' population and that 'people understand that computers are tools.' Strümke, who also represents the majority view among Norwegian AI researchers, considers concerns about existential risk to be 'futuristic alarmism'.
But how good do the models have to get before we realise that tomorrow's AI systems will surpass most humans in most domains?
You won't believe it until you see it
I believe the newest AI models are a clear step in that direction. Claude Code is no longer just a chatbot that answers questions back and forth, but an agent that can take control of your computer and write code.
For people without a coding background, this may not sound all that impressive. It's easy to think that 'AI that can code' is a development that only affects IT engineers. But as Transformer editor Shakeel Hashim explains well, everything we do on computers is fundamentally code. Code, after all, is just the language by which we instruct computers to do things. So the question to ask is not 'is this code?' but rather 'can this be solved digitally?'
If the answer is yes, an AI agent can do it. The implications of this are enormous. There are hardly any limits to what one can accomplish on a computer. With the right setup, AI agents can carry out research, fill in forms, build websites, write reports, manage finances, create content — in short, any knowledge work that's done behind a screen.
Listen to the Americans
It's no longer just top executives at AI companies who talk about AGI, but also top politicians, security experts, economists, and independent AI experts. The list of experts who take the problem seriously is growing, and they are not fringe figures: they include the directors of American intelligence agencies.
This week saw the release of the second version of 'The International AI Safety Report'. The report is written by 100 independent experts and led by Yoshua Bengio, one of the world's foremost AI experts, who recently visited Oslo. The report identifies many threats, including the danger of losing control of the AI systems we build.
At the major AI companies, the majority of code is now written by the AI models themselves. Software engineers' new role is to review that code and give feedback on its structure. Almost nobody writes code from scratch anymore.
February 2020
I've been using Claude for over a year now, and I notice that with each model they release, the number of tasks that only a human can solve shrinks. The latest version of Claude is no longer just a sparring partner, but a genuine productivity tool that handles everything from writing to analysis to coding.
And it does so with an elegance and humour that I can only describe as highly sophisticated. At times, it's simply like having an infinitely knowledgeable assistant built into my computer, available at all times and never in a bad mood.
I believe we're facing a new ChatGPT moment. I believe that within a short time, people will understand that the newest AI models are no longer just conversation partners, but autonomous agents capable of performing complex work. And when that moment arrives, the demand for both regulation and adaptation will explode.
New York Times journalist Kevin Roose compares the moment we find ourselves in now to February 2020, just before Covid hit. Then too, only a small group warned about what was about to strike us, and society was completely unprepared.
In Silicon Valley, I'm sensing a shift in mood. Even the most technology-optimistic people around me are surprised by how fast things are moving. We're facing AI models that neither sleep nor eat, that learn from their mistakes, and that are becoming better at an accelerating pace.
I genuinely don't know where all this is heading. But what I do know is that we're not prepared for what's coming.


