According to Cointelegraph, Meta has received approval from the European Union's data regulator to use publicly shared content from its social media platforms to train its artificial intelligence models. The decision allows Meta to draw on posts and comments from adult users across its platforms, including Facebook, Instagram, WhatsApp, and Messenger, as well as users' interactions with its AI assistant, to enhance its AI capabilities. In a blog post dated April 14, Meta emphasized the importance of training its generative AI models on diverse data so they can grasp the nuances and complexities of European communities.
Meta highlighted the importance of understanding dialects, colloquialisms, hyper-local knowledge, and the distinct ways different countries use humor and sarcasm on its products. The company clarified, however, that private messages between friends and family, as well as public data from EU account holders under the age of 18, remain excluded from AI training. Users can opt out of having their data used for AI training through a form that Meta will provide via in-app notifications and email, which the company says will be easy to find and use.
The approval follows a temporary halt to Meta's AI training plans last July, prompted by complaints that privacy advocacy group None of Your Business filed in 11 European countries. Those complaints led the Irish Data Protection Commission (IDPC) to request a pause until a review was completed. The concerns centered on Meta's privacy policy changes, which allegedly allowed personal posts, private images, and online tracking data to be used for AI training. Meta has now received confirmation from the European Data Protection Board that its AI training approach complies with its legal obligations, and the company continues to engage constructively with the IDPC.
Meta's approach mirrors practices already adopted by other tech giants such as Google and OpenAI, which have used data from European users to train their AI models. Meanwhile, Ireland's data regulator opened a cross-border investigation into Google Ireland Limited last September to assess its compliance with EU data protection law during AI model development. Similarly, X faced scrutiny and agreed to stop using personal data from EU and European Economic Area users to train its AI chatbot Grok. The EU's AI Act, which entered into force in August 2024, established a legal framework for AI technology that addresses data quality, security, and privacy concerns.