Warning Shots from Meta
Meta has opened up to using content from private messages to 'improve AI.' That's just the beginning of what's to come.
Anthony Quintano/Wikimedia
It has long been known that Meta likes to push the boundaries of law, privacy, and basic decency. But now the company has also opened the door to using content from private messages on Messenger and WhatsApp to "improve" its AI models. This happens if you use the Meta AI feature in a chat, for example to summarize a message or draft a sentence for you. Meta won't say whether those conversations can be used to train new AI models.

Anders Eidesvik. Photo: Private

The Norwegian Data Protection Authority says it was "blindsided" by Meta's latest move. This shows that we haven't understood the game. This isn't about one company's bad behavior – it's about a fundamental shift in how data is valued. Two lessons follow:
1. We will see increasingly aggressive data harvesting going forward. For the tech giants, access to data is the very fuel of AI development. And now that they've started running out of publicly available data – the entire internet has been scraped – we should expect the companies to turn their gaze toward private user data.
Meta is the most ruthless, but it's naive to think the other tech giants won't follow suit. We should expect Google to soon "update" Gmail's terms of service, Microsoft to find creative ways to harvest Teams conversations, and every app on your phone to become a potential data collector.
2. The data we give away today trains the AI models that may replace us tomorrow. Most people don't think about the fact that every email they write, every spreadsheet they fill out, or every report they deliver is stored in the cloud. But for the tech companies, this is a gold mine.
Emails don't just reveal private and sensitive information. They also constitute a library of how humans solve work tasks. This can be used to train models that gradually replace more of our jobs.
There is an intense AI race underway, with the tech giants investing hundreds of billions of dollars to develop the most powerful language models. The goal is to create AI models that can perform all economically valuable work that humans do today. As the debate in DN has shown, nobody knows exactly which jobs will be replaced, but the path to transformative AI runs through our data.
So how do we protect ourselves? Start by turning off the AI feature in Messenger and WhatsApp for sensitive conversations. Then switch to Signal for all important conversations. And we need to stop believing that free services exist: if you're not paying with money, you're paying with data.
But this isn't something individuals can solve. To protect ourselves, we need decisive regulation at the European level. GDPR, even with its many weaknesses, ensures that Europeans have better privacy protections than Americans.
Regulation only takes us so far, however. For Norway and Europe to truly free themselves from the tech giants, we need to develop solutions that are both competitive and uphold our values. This is a difficult balance to strike. At Langsikt, we have assembled a panel of 15 experts who will help give Norway direction for how we should approach AI and data going forward.
It was a huge mistake to leave social media to a small gang in Silicon Valley. Now we stand at a new crossroads with AI – a technology that will change society far more than social media ever did. This time we must ensure that development happens on our terms.
...
This piece was first published in DN.