On Wednesday, a coalition of a dozen state attorneys general called on Facebook and Twitter to step up enforcement of their community guidelines to curtail the spread of COVID-19 vaccine misinformation on their platforms. Their letter specifically identified 12 “anti-vaxxer” accounts responsible for a sizable 65% of public anti-vaccine content on Facebook, Instagram and Twitter. In today’s House hearing on disinformation and extremism, Twitter and Facebook’s CEOs, along with Google CEO Sundar Pichai, were directly asked whether they would be willing to take down these 12 accounts.
Their answers were a mixed bag, demonstrating the social media executives’ unwillingness to take a simple action, removing a handful of disinformation sources, that could have a significant impact on Americans’ willingness to get vaccinated and end the pandemic.
Over the course of the hearing, Congressman Mike Doyle (D-PA) pointed out that nearly 550,000 Americans had lost their lives to COVID-19, and that an independent study had found Facebook users in five countries, including the U.S., had been exposed to COVID-19 disinformation 3.8 billion times. Even as the U.S. rushes to get shots into people’s arms to reduce the spread of the deadly virus, it is still contending with social media sites that continue to promote and recommend content fueling vaccine hesitancy.
“My staff found content on YouTube telling people not to get vaccines, and was recommended to similar videos. The same was true on Instagram, where it was not only easy to find vaccine disinformation, but platforms recommended similar posts,” said Doyle. “The same thing happened on Facebook, except they also had anti-vax groups to suggest, as well. And Twitter was no different.”
“You can take this content down,” Doyle said. “You can reduce the visibility. You can fix this, but you choose not to,” he told the CEOs.
He later directly asked the CEOs if they would be willing to take down the 12 accounts the attorneys general had identified in their letter as the so-called “super-spreaders” of misinformation.
The coalition had written that both Facebook and Twitter had yet to remove the accounts of 12 prominent anti-vaxxers who repeatedly violated the companies’ terms of service. These users’ accounts, along with their associated organizations, groups and websites, were responsible for 65% of public anti-vaccine content across Facebook, Twitter and Instagram as of March 10, the letter noted.
In response to the question of taking down these dozen accounts, Zuckerberg hedged. He said that Facebook’s team would have to first look at the exact examples being referenced, leading to Doyle cutting him off.
Pichai tried to open his answer by noting that YouTube had removed more than 850,000 videos with misleading coronavirus information, but was also cut off as Doyle re-asked whether YouTube would take down the accounts of the 12 super-spreaders.
“We have policies to take down content,” Pichai said, but added that “some of the content is allowed, if it’s people’s personal experiences.”
When Twitter CEO Jack Dorsey was posed the same question, he said, “yes, we remove everything against our policy” — a better answer, but also one that’s not necessarily a confirmation that Twitter would, indeed, remove those specific 12 accounts.
Earlier in the hearing, Dorsey had also spoken broadly about Twitter’s long-term approach to misinformation: “Bluesky,” its vision for a decentralized future. Bluesky would leverage a shared, open-source base protocol, allowing for “increased innovation around business models, recommendation algorithms, and moderation controls which are placed in the hands of individuals, rather than private companies,” Dorsey said. The answer indicated that Twitter’s vision for moderation was ultimately about handing off the responsibility to others — something Facebook has also done in recent months with its Oversight Board, an external body that will weigh in on the hardest moderation decisions.
These moves suggest that the social networks have decided for themselves that they are not capable of handling the responsibilities of content moderation on their own. But whether the U.S. government will actually step in to regulate them as a result remains to be seen.