The pandemic is already reshaping tech’s misinformation crisis

Since 2016, social media companies have faced an endless barrage of bad press and public criticism for failing to anticipate how their platforms could be used for dark purposes at the scale of populations — undermining democracies around the world, say, or sowing social division and even fueling genocide.

As COVID-19 plunges the world into chaos and social isolation, those same companies may see a respite from focused criticism. The industry is leveraging its extraordinary resources to pitch in with COVID-19 relief efforts, and the world is looking to tech upstarts, adept at cutting through red tape and fast-forwarding scientific progress even in normal times, while government bureaucracies lag. But the same old problems are rearing their ugly heads just the same, even if fewer of us are paying attention.

On YouTube, a new report from The Guardian and watchdog group Tech Transparency Project found that a batch of videos promoting fake coronavirus cures are making the company ad dollars. The videos, which promoted unscientific methods including “home remedies, meditative music, and potentially unsafe levels of over-the-counter supplements like vitamin C” as potential treatments for the virus, ran ads from unwitting advertisers including Liberty Mutual, Quibi, Trump’s 2020 reelection campaign and Facebook. In Facebook’s case, a banner ad for the company ran on a video suggesting music that promotes “cognitive positivity by using subtle yet powerful theta waves” could ward off the virus.

In the early days of the pandemic, YouTube prohibited ads on any videos related to the coronavirus. In mid-March, as the real scope of the event became clear, the company walked that policy back, allowing some channels to run ads. On Thursday, the company expanded that policy to allow ads for any videos that adhere to the company’s guidelines. One of the major tenets in those guidelines forbids the promotion of medical misinformation, including “promotion of dangerous remedies or cures.” Most of the videos in the new report were removed after being flagged by a journalist.

This example, and the many others like it, calls into question how to judge major tech platforms during these exceedingly strange times. Social media companies have been uncharacteristically transparent about the shifts the pandemic is creating within their own workflows. On a call in March, Facebook founder Mark Zuckerberg admitted that, with its army of 15,000 contract moderators sent home on paid leave, users can expect more “false positives” as the company shifts to rely more heavily on artificial intelligence to filter what belongs on the platform and what does not. The work of sorting through a platform’s most unsavory content — child pornography, extreme violence, hate speech and the like — is not particularly portable, given its potential psychological and legal ramifications.

YouTube similarly warned that it will “temporarily start relying more on technology” to fill in for human reviewers, warning that the automated processes will likely mean more video removals, “including some videos that may not violate policies.” Twitter noted the same new reliance on machine learning “to take a wide range of actions on potentially abusive and manipulative content,” though the company will offer an appeals process that loops in a human reviewer. Companies offered fewer warnings about what might fall through the cracks in the interim.

What will become of moderation once things return to normal, or, more likely, settle on a new normal? Will artificial intelligence have mastered the task, obviating the need for human reviewers once and for all? (Unlikely.) Will social media companies have a fresh appreciation for the value of human efforts and bring more of those jobs in-house, where they can perform their bleak work with more of the sunny perks afforded to their full-time counterparts? Like most things viewed through the pandemic's nightmarish haze, the outcomes are murky at best.

If the approach to holding platforms to account was already piecemeal, an uneven mix of investigative reporting, anecdotal tweets and official corporate post-mortems, the truth will be even more difficult to get at now, even as the coronavirus pandemic provides countless new deadly opportunities for price-gougers and myriad bad actors to create chaos within chaos.

We’ve seen deadly consequences already in Iran, where hundreds died after drinking industrial alcohol — an idea they got “in messages forwarded and forwarded again” amplifying a tabloid story that suggested the act could protect them from the virus. Most consequences will likely go unnoticed beyond the lives they impact and unreported due to tightened newsroom resources and perhaps even more constricted attention spans.

Much has been written about the coronavirus and the fog of war, most of it rightly focused on scientific research pressing on as the virus threatens the globe and the devastating on-the-ground reality in hospitals and health facilities overwhelmed with COVID-19 patients while life-saving supplies dwindle. But the crisis of viral misinformation — and deliberately sown disinformation — is its own fog, now intermixing with an unprecedented global crisis that has entirely upended business and relentlessly dominated the news cycle. This as the world’s foremost power heads into a completely upended presidential election cycle — its first since 2016, when an unexpected election outcome, coupled with deep U.S.-centrism in tech circles, revealed nefarious forces at play just under the surface of social networks we hadn’t thought all that much about.

In the present, it will be difficult for outsiders to determine where new systems implemented during the pandemic have failed and what bad outcomes would have happened anyway. To sort those causes out, we’ll have to take a company’s word for it, a risky kind of credulity that already offered mixed results in normal times. Even as we rely on them now more than ever to forge and nurture connections, the virtual portals we immerse ourselves in daily remain black boxes, inscrutable as ever. And as with so many aspects of life in these norm-shattering times, the only thing to expect is change.