
Inga Strümke Is Doing Us All a Disservice

It is a democratic problem that Strümke uses her very strong position and influence in Norwegian public life to stifle a debate about a very serious societal issue.

Anders Eidesvik · 6 min read

This article was first published by Altinget. Read more from Altinget here.

How afraid should we be of AI? That question was posed by Morgenbladet last week in a very good and thorough feature article about different perspectives on risks posed by artificial intelligence. The starting point for the piece was that Norway's very own "AI queen" Inga Strümke launched a fierce attack against the think tank Langsikt, accusing them of contributing to "PR for the AI industry."

Specifically, she argues that Langsikt's warnings about more powerful AI are merely "futurism" and "fearmongering" that serve the tech companies, and that the debate should instead focus on today's AI problems.

Langsikt is both my former and upcoming workplace, so I consider myself entitled to respond.

Yesterday's futurism is today's reality

Strümke's argument is essentially that we don't know how powerful the models will become, and that it is therefore unscientific to speculate about how AI will develop. She believes all attempts to predict imminent superintelligence "are based on intuition, analogies, extrapolation, or technological optimism."

But if the future is uncertain, that is all the more reason to prepare for a range of different outcomes. Especially because AI development has shown a tendency to surprise the experts time and again.

For what was considered futurism just three years ago? Had someone said in October 2022 that we would soon have language models capable of writing scientific papers, solving unsolved mathematical problems, composing songs that top the charts, writing nearly all the code at the leading AI companies, and being used to bomb Iran and Venezuela, most people would have said you'd been reading too much sci-fi and needed some fresh air.

Very, very few researchers, or anyone else, saw language models coming or foresaw the consequences they would have for society as we see them today. The few technologists who did anticipate the rise of language models and scaling laws – such as Ilya Sutskever, Daniel Kokotajlo, Demis Hassabis and others – did so precisely because they dared to think ahead, extrapolate trends, and follow intuition.

This was no coincidence. As the mathematician Olle Häggström points out in this thorough response to Strümke, intuition, analogies, and extrapolation are in fact an essential part of the scientific method.

Now many of these same technologists are warning that we are heading toward even more powerful AI systems. We should take this seriously, not dismiss it as "futurism."

Bad PR

Strümke also claims that warnings about existential threats are a form of PR for the tech companies. I understand the point: a technology that could become dangerous is also a technology that can make enormous amounts of money.

There are good reasons to distrust the leaders of AI companies. It is, for example, illustrative that OpenAI CEO Sam Altman no longer talks about these threats from AI. Is it because he has learned more, or because he no longer considers it beneficial for the company's growth?

I would, however, not rule out that Anthropic's Dario Amodei, who still speaks about the greatest risks of AI, does so because he has a genuine fear of what they are creating.

The companies are trapped in a competitive dynamic that makes it impossible for any single company to prioritize safety over speed. Amodei's warnings can therefore be seen as a cry for help – a call for collective action to hit the brakes before the companies race each other off a cliff.

Even if this were the case, it is not as though I or others at Langsikt take Amodei or other AI leaders' words at face value. We place far more weight on statements from Nobel laureate Geoffrey Hinton, who left a well-paid job at Google to warn about AI development. Or Yoshua Bengio, another heavyweight in the AI field, who spends his time educating policymakers about the catastrophic consequences AI could bring.

Bought and paid for by Silicon Valley. Really?

The most frustrating thing about Strümke's criticism is that she tries to discredit Langsikt by casting doubt on who we really serve. She tells Morgenbladet: "When other think tanks, such as Civita or Agenda, come with a message, politicians know what they're dealing with (...) With Langsikt, I'm less sure," and adds that it "sounds like they've gotten several talking points from American technology companies."

By this she refers to the fact that Langsikt receives funding through organizations such as Coefficient Giving and Founders Pledge, funded by people including Dustin Moskovitz and Jaan Tallinn. This is something Langsikt has always been open about, and you can read more about it on our website and in Langsikt's response in Morgenbladet.

Funding from people who have made their money in technology may sound alarming. But the purpose of these philanthropic organizations is, among other things, to work for a safer future with AI. To believe that everyone who actively works for a safer AI future is actually working to promote the AI companies' interests is a conspiratorial claim.

If Langsikt and these organizations were in the pocket of the tech giants, one would expect them to advocate for the same policies. But where the tech giants want an absence of regulation, Langsikt actively works for more regulation of leading AI companies.

A democratic problem

Contrary to what Strümke believes, there is room for discussions about both present-day and future challenges. Precisely because we don't know the future, it is absolutely necessary to have a vibrant debate about whether the claims from leading AI experts, that we may soon have very capable and dangerous AI systems, hold up.

If there is a chance that we face extremely intelligent AI within the next decade, we need to start thinking about questions such as universal basic income, mass surveillance, and how to control AI systems already today.

It is therefore a democratic problem that Strümke uses her very strong position and influence in Norwegian public life to stifle a debate about a very serious societal issue.

If we can discuss both short-term and long-term challenges, we can also find common ground on policy measures. For example, both Strümke and Langsikt fully agree that Norway and Europe must secure their digital sovereignty, that the tech giants have too much power, and that more AI research is needed.

That an authority like Strümke dismisses the dangers of powerful AI technology as "futurism" may seem reassuring. But in reality, it does us all a disservice.

