Meta Oversight Board Calls Out Company for Failing to Address Ronaldo Nazário Deepfake

Meta’s Oversight Board has mandated the removal of a Facebook post featuring an AI-manipulated video of Brazilian football legend Ronaldo Nazário promoting an online game.

The board ruled that the post violated Meta’s Community Standards on fraud and spam, criticising the company for allowing the deceptive video to remain accessible.

The Oversight Board said in a statement Thursday:

“Taking the post down is consistent with Meta’s Community Standards on fraud and spam. Meta should also have rejected the content for advertisement, as its rules prohibit using the image of a famous person to bait people into engaging with an ad.”

The Oversight Board says Meta isn't doing enough to fight celeb deepfake scams https://t.co/kPwK3YavBx

— Engadget (@engadget) June 5, 2025

As an independent body overseeing content moderation at Meta, the Oversight Board has the authority to uphold or overturn the company's content decisions, which are binding on Meta, and to issue policy recommendations to which the company must publicly respond.

Established in 2020, it aims to increase transparency and accountability around Meta’s enforcement actions.

This case underscores mounting concerns over AI-generated media that falsely portrays individuals, often misused for scams, fraud, and misinformation.

In the video, an AI-generated voiceover, poorly synchronised with footage of Ronaldo Nazário, urged viewers to play a game called Plinko through its app, falsely claiming users could earn more than they would in typical Brazilian jobs.

Scams using AI deepfakes of celebrities have become a major issue for Meta. The Oversight Board has confirmed what critics have said: Meta doesn’t enforce its rules enough and makes it easy for scammers to get away with these schemes.

Meta likely allows much scam content on its… pic.twitter.com/nVCxJh7X8E

— Nova Pov (@NovaPovNP) June 6, 2025

The post amassed over 600,000 views before being flagged.

Although a user reported the post, Meta did not prioritise it for human review, and the content remained online.

When the user appealed, the appeal again failed to receive timely human review, prompting the user to escalate the matter to the Oversight Board.

Rise of Deepfake Technology Sparks Concern

Meta has faced repeated criticism over its management of celebrity deepfakes.

Just last month, actress Jamie Lee Curtis publicly challenged CEO Mark Zuckerberg on Instagram after an AI-generated ad using her likeness appeared on the platform.

While Meta disabled the ad, it left the original post intact.

The Oversight Board noted that only specialised teams within Meta have the authority to remove such content, highlighting a broader issue of inconsistent enforcement.

The Board urged Meta to apply its anti-fraud policies more rigorously and uniformly across its services.

This scrutiny comes amidst increasing legislative efforts to combat deepfake abuse.

In May, President Donald Trump signed the bipartisan Take It Down Act, which requires platforms to remove non-consensual intimate images, including AI-generated ones, within 48 hours of a victim's request.

The law addresses a surge in deepfake pornography and image-based abuse targeting celebrities and minors.

Notably, Trump himself was recently the subject of a viral deepfake depicting him advocating for dinosaurs to guard the US southern border, a reminder of how the technology continues to blur the line between truth and misinformation.

How prepared are platforms and lawmakers to keep pace with this rapidly evolving threat?