Oversight Board calls on Meta to rewrite ‘incoherent’ rules against faked videos

The Oversight Board, the external advisory group that Meta created to review its moderation decisions on Facebook and Instagram, issued a decision on Monday concerning a doctored seven-second video of President Biden that made the rounds on social media last year.

The original video showed the president accompanying his granddaughter Natalie Biden to cast her ballot during early voting in the 2022 midterm elections. In the video, President Biden pins an “I Voted” sticker on his granddaughter and kisses her on the cheek.

A short, edited version of the video removes visual evidence of the sticker, sets the clip to a song with sexual lyrics and loops it to depict Biden inappropriately touching the young woman. The seven-second clip was uploaded to Facebook in May 2023 with a caption describing Biden as a “sick pedophile.”

In its decision, the Oversight Board agrees with Meta’s choice to leave the video online but calls the relevant policy “incoherent.” The board announced that it would take on the case last October, after a Facebook user reported the video and then escalated the complaint when the platform declined to remove it.

“As it stands, the policy makes little sense,” Oversight Board co-chair Michael McConnell said. “It bans altered videos that show people saying things they do not say, but does not prohibit posts depicting an individual doing something they did not do. It only applies to video created through AI, but lets other fake content off the hook.”

McConnell also pointed to the policy’s failure to address manipulated audio, calling it “one of the most potent forms of electoral disinformation.”

The Oversight Board’s decision argues that instead of focusing on how a particular piece of content was created, Meta’s rules should be guided by the harms they are designed to prevent. Any changes should be implemented “urgently” in light of global elections, according to the decision, though only the board’s rulings on whether individual pieces of content stay up or come down are technically binding.

Meta created the Oversight Board in 2020. By then, the company still known as Facebook had endured years of scrutiny over the proliferation of disinformation, extremism and other dangerous content on its platforms. While the board can issue final decisions about the individual content moderation cases it reviews — assuming that Meta continues to agree to implement them — the company has only committed to “consider” deeper recommendations to change Facebook and Instagram’s rules.

Beyond expanding its manipulated media policy, the Oversight Board suggested that Meta add labels to altered videos flagging them as such instead of relying on content takedowns initiated by fact-checkers, a process the group criticizes as “asymmetric depending on language and market.” By labeling more content rather than taking it down, the Oversight Board believes that Meta can maximize freedom of expression, mitigate potential harm and provide more context and information for users.

In a statement to TechCrunch, a Meta spokesperson confirmed that the company is “reviewing the Oversight Board’s guidance” and will issue a public response within 60 days.

As the Oversight Board noted when it accepted the Biden “cheap fake” case, Meta stood by its decision to leave the altered video online because its policy on manipulated media — misleadingly altered photos and videos — only applies when AI is used or when the subject of a video is portrayed saying something they didn’t say.

The manipulated media policy, designed with deepfakes in mind, applies only to “videos that have been edited or synthesized… in ways that are not apparent to an average person, and would likely mislead an average person to believe…”

Meanwhile, the altered video continues to circulate on X, formerly Twitter. Last month, a verified X account with 267,000 followers shared the clip with the caption “The media just pretend this isn’t happening.” The video has more than 611,000 views.

The Biden video isn’t the first time that the Oversight Board has instructed Meta to go back to the drawing board for its rules. When the group weighed in on Facebook’s decision to ban former President Trump, it decried the “vague, standardless” nature of the indefinite punishment while ultimately agreeing with the choice to suspend his account. The Oversight Board has generally urged Meta to provide more detail, consistency and transparency in its platform policies, across cases.

The Oversight Board first lumbered into action in late 2020, missing an opportunity for relevance during that year’s U.S. election and attracting skepticism over its unavoidable ties to the company it was created to audit. Ultimately, Meta gets to decide whether it listens to the Oversight Board or not, even as the entity works as a useful shield against political criticism of its content moderation processes.

Critics of Meta’s policy-making experiment have dismissed the self-designed review board as too little, far too late.

Meta may have a standardized content moderation review system in place now, but misinformation and other dangerous content moves more quickly than that appeals process — and much more quickly than the world could have imagined just two U.S. general election cycles ago.

Researchers and watchdog groups are bracing for an onslaught of misleading claims and AI-generated fakes as the 2024 presidential race ramps up. But even as new technologies enable dangerous falsehoods to scale, social media companies have quietly slashed their investments in trust and safety and turned away from what once appeared to be a concerted effort to stamp out misinformation.

“The volume of misleading content is rising, and the quality of tools to create it is rapidly increasing,” McConnell said.