Meta wants to train AI with your posts, privacy watchdog AP calls for objections


If you don't want Meta, the parent company of Facebook and Instagram, to use your posts on those platforms to develop artificial intelligence, you must file an objection. If you don't do so before May 27, Meta will automatically use the data, warns privacy watchdog Autoriteit Persoonsgegevens (AP). "You simply don't know what happens to it."

"The risk is that you as a user lose control over your personal data. You once posted something on Instagram or Facebook and that data will soon be in that AI model. Without you knowing exactly what happens to it," says Monique Verdier, vice-chair of the AP.

According to the AP, Meta wants to use all public data from adult users, such as posts, photos and comments, to train Meta AI. Meta will start training on May 27, 2025. "And once your data is in the AI model, you can't just get it out again," says Verdier.

You can file an objection via the objection forms on Instagram and Facebook. Meta has indicated that it will not use the data of anyone who objects to train its artificial intelligence.


It is striking that users who have already filed an objection have received a message saying that their data will no longer be used for Meta's AI development. The words 'no longer' suggest that data collection is already underway.

The AP contests this. "If you object before May 27, your data will NOT be used to train AI, Meta has informed the regulators. If you object after May 27, data will be used from May 27 until the moment you object, and not afterwards (to the extent possible)," says a spokesperson for the regulator.

'Silence is consent'

He emphasizes that this is not a matter of granting permission. "Meta will do it unless you object yourself. So it's the other way around. Meta already has that data." According to the AP, it is questionable whether the company is allowed to use this 'silence means consent' approach.

RTL Nieuws has asked the American tech giant for a response, but it has not yet replied.

Hallucinating bots

The uncertainty about what happens to the data is the AP's biggest concern. "The point is that you simply don't know what happens to it. And whether your personal data won't be spat out in some form or another when someone asks Meta AI a question. And we all know the stories about 'hallucinating' AI chatbots. You lose control over your personal data," says the spokesperson.

By 'hallucinating' AI chatbots, he means chatbots that present incorrect information with great confidence.


Although the AP says that responsibility for complying with privacy legislation 'naturally' lies with Meta itself, the European privacy regulators are keeping a close eye on the matter, according to the Dutch watchdog.

The Irish regulator takes the lead, because Meta and many other tech companies have their European headquarters in Ireland. According to the AP, the European regulators are in close consultation with the Irish. "After discussions with the Irish regulator, Meta postponed its earlier plans to train AI with data from European users. That was in the summer of 2024."

WhatsApp?

Meta also owns WhatsApp. For some Dutch users of the chat app, it is already possible to chat one-on-one with Meta AI or to add Meta AI to a chat with someone else. The messages shared there are also used to train Meta AI.

"It does not seem possible to object to this," the AP states. If Meta AI does not participate in the conversation, your messages will not be used to train the AI ​​model.
