Zuckerberg Recently Gave Us His Version of AI Paradise. It's Hell.

In an era when tech giants dream of turning our human relationships into sellable products, we need more than passive hopes that tech companies will act ethically.

Anders Eidesvik · 5 min read
Does he pass the CAPTCHA? | Wikimedia

This article was originally published in Altinget (a Nordic political news outlet). Read more from Altinget here.

I was born in 1999 and am too young to have experienced the golden age of the internet. I never got to see a digital world characterised by openness, creativity, and community spirit. Instead, I grew up with an internet infected by advertising, polarised comment sections, and content designed to keep you scrolling as long as possible. If the early web was a grand, anarchic library — the internet I know is closer to a shopping centre where every surface is covered in screens vying for your attention.

During my lifetime, social media has also become far less social. Facebook and Instagram started as places for friends and family. Now they serve up ragebait, 'slop' videos, and an ever-increasing number of ads.

This is not an accident, but what is called enshittification. Enshittification is a term describing the process by which most digital platforms inevitably get worse over time. I believe AI models are next in line. To understand why, we first need to look at the three typical phases of the enshittification process.

First comes the growth phase, where platforms compete to attract as many users as possible by offering a positive, often free, experience. The goal is to lure you in and collect data about what you like. This is the phase where most online communities are born, and where you might even form genuine human connections. During this period, users are treated well because they are the main commodity — their attention must be won before it can be sold.

When the user base is large enough, phase two begins: monetisation. Now the investments need to be recouped. Ads gradually sneak in between posts from uncles and aunts, algorithms are tuned to prioritise paid content, and data harvesting intensifies. The user behaviour you generate is converted into commercial products sold to the highest bidder.

This ultimately leads to the extraction phase, where platforms become so dominant that hardly anyone can compete with them. The friction of switching to an alternative service that your contacts don't use is so high that most people can't be bothered, even if they complain about the platform. Because of the monopoly, advertisers lose out too, since they no longer have real alternatives. The platform can charge what it wants, deliver less, and still retain both users and advertisers. The product becomes progressively worse, but we stay because leaving feels impossible.

Intelligent advertising

So how can artificial intelligence be subjected to enshittification?

In two recent interviews, Meta CEO Mark Zuckerberg laid out his vision for AI. In them, he speaks enthusiastically about how Meta can fully automate marketing. The idea is simple: all businesses need to do is go to Meta and say 'We want to sell this much of this product at this price.' Then Meta's AI systems take over everything.

They automatically select the ideal target audience, create messaging and visuals, handle placement in the feed, and influence users to buy the product. It becomes a 'black box' in which the AI has free rein to generate bespoke advertising without anyone knowing how it's done.

Zuckerberg also talks about what he calls 'the third epoch' of social media, where he envisions AI-generated content increasingly taking over your feed. It will be 'highly personalised', meaning tailored to keep you scrolling and ready to buy. The result is a feed that never sleeps but continuously adjusts itself to capture your attention. That's bad news for everyone (myself included) who struggles to put down their phone.

The last thing Zuckerberg envisions is that Meta will offer users digital friends, therapists, and potentially even AI partners. These conversation companions would know our entire messaging and photo history, listen to everyday problems, and offer advice. But how good can AI friends really be? They may well help the lonely, but fundamentally it's a lopsided relationship: the AI assistant works for a company that lives off microtargeting.

Zuckerberg on AI partners

Your intimate confessions thus become valuable data points in a commercial system.

What's truly alarming is how empathetic, persuasive, and flattering these models can be. Imagine a conversation partner who always understands you, always has time, always remembers everything — but who simultaneously has hidden objectives set by distant executives in the US (and China), primarily to sell you things. This is Zuckerberg's paradise. A paradise where every emotion, every weakness, every insecurity becomes a data point to be monetised.

Enshittification is not inevitable

But this dream is not inevitable. Enshittification happens because we allow it, because we choose convenience over control. 'If you are not paying for the product, you are the product' is not a law of nature. It is the result of deliberate business models.

There are, in fact, digital services that have resisted enshittification. Wikipedia has remained ad-free and user-governed. Signal offers end-to-end encrypted communication that makes data harvesting impossible. Even email remains an open standard that no single actor can control. The common denominator is smart design that makes exploitation structurally impossible — not just something that is promised in fine print.

I honestly don't believe Norway has the capacity to build top-tier AI models on its own. I do, however, believe that Europe can pull it off — or ideally the Nordics. But for European AI to succeed, the systems must be competitive, easily accessible, and on par with the American ones. We need to treat AI like we treat other critical infrastructure: electricity, water, telecommunications — services where society has decided that commercial interests cannot be the sole driver.

Perhaps most important of all, this requires that we stop treating AI as a free good and instead recognise that good systems have a price — either in the form of direct payment or indirectly through the commercialisation of our attention.

Google's motto was once 'Don't be evil'. In an era when tech giants openly dream of turning our deepest human relationships into sellable products, we need more than passive hopes that tech companies will act ethically. We need real alternatives built on fundamentally different premises.