Facebook announces plan to fight misinformation campaigns

In a report released today by its security team, Facebook made its most direct statements yet about how the platform has been used to spread misinformation. The report acknowledges that actors mounted a coordinated campaign on the platform to spread misinformation during the 2016 U.S. election, and it explains the measures Facebook is taking to combat such campaigns.

“Our mission is to give people the power to share and make the world more open and connected,” the report’s authors — Facebook CSO Alex Stamos and Threat Intelligence team members Jen Weedon and William Nuland — wrote. “The reality is that not everyone shares our vision, and some will seek to undermine it — but we are in a position to help constructively shape the emerging information ecosystem by ensuring our platform remains a safe and secure environment for authentic civic engagement.”

Facebook calls these campaigns “information operations” and says their goal is usually to distort or manipulate political sentiment. Ordinary users can get caught up in the operations and unwittingly help spread misinformation, Facebook said.

The company’s response includes collaborating with other organizations to educate users, undermining campaigns that have a financial motivation, building new products that slow the spread of fake news, and warning users when they encounter untrustworthy information.

Facebook explains that information operations on the platform often manifest in three ways: targeted data collection, content creation, and false amplification. Stealing and publishing data allows actors to control public discourse, the company said, and that data can then be amplified across fake Facebook profiles.

These tactics allow operations to sway public opinion about specific issues, sow distrust in political institutions, and spread confusion. This kind of behavior is often attributed to bots, but Facebook claims that most of the activity it sees on its network isn’t automated.

“In the case of Facebook, we have observed that most false amplification in the context of information operations is not driven by automated processes, but by coordinated people who are dedicated to operating inauthentic accounts,” Facebook said. The company added that specific language skills and knowledge of regional political context indicated that those involved in the misinformation campaigns were humans, not bots.

To fight back, Facebook is ramping up its efforts to detect false amplification, blocking the creation of fake accounts and using machine learning to flag abuse. The company says the new measures are already proving effective in France, where an election is currently underway.

“In France, for example, as of April 13, these improvements recently enabled us to take action against over 30,000 fake accounts,” the report says.
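The report doesn’t describe how this detection works under the hood. Purely as an illustration of the general approach (scoring accounts on behavioral signals with a learned model), here is a toy sketch in Python; the features, training data, and model choice are invented for this example and are not Facebook’s.

```python
# Toy sketch of ML-based fake-account scoring. Everything here is
# hypothetical: Facebook's report does not disclose its features or models.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative behavioral features per account:
# [posts_per_day, share_of_reshares, account_age_days, distinct_ips_last_week]
X_train = np.array([
    [2.0, 0.30, 900, 2],    # typical authentic accounts
    [1.0, 0.10, 1500, 1],
    [40.0, 0.95, 12, 9],    # coordinated inauthentic accounts
    [55.0, 0.90, 5, 14],
])
y_train = np.array([0, 0, 1, 1])  # 1 = inauthentic

# Fit a simple classifier and score a new, unseen account.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
candidate = np.array([[48.0, 0.97, 8, 11]])
print("p(inauthentic) = %.2f" % model.predict_proba(candidate)[0, 1])
```

In practice, a system like the one the report alludes to would combine many more signals at much larger scale; the sketch only shows the shape of the technique.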

Facebook used the recent U.S. presidential election as a case study in misinformation on its platform. The company concluded that a coordinated campaign existed, “with the intent of harming the reputation of specific political targets.” The campaign included inauthentic Facebook accounts that were used to amplify certain themes and information, the report notes, adding:

These incidents employed a relatively straightforward yet deliberate series of actions:

  • Private and/or proprietary information was accessed and stolen from systems and services (outside of Facebook);

  • Dedicated sites hosting this data were registered;

  • Fake personas were created on Facebook and elsewhere to point to and amplify awareness of this data;

  • Social media accounts and pages were created to amplify news accounts of and direct people to the stolen data;

  • From there, organic proliferation of the messaging and data through authentic peer groups and networks was inevitable.

Although Facebook admitted it was the unwitting host of a disinformation campaign during the election, the company said that the reach of this operation was “statistically very small” in comparison with overall political activity and engagement.

Facebook also said it did not have enough data to definitively attribute the campaign to its creators. It nodded, however, to a report published by the Director of National Intelligence that attributed hacking campaigns during the election season to Russian operatives, and said its own data does not contradict those findings.

Putting the responsibility for fighting misinformation under the purview of its security team is an interesting move for Facebook, indicating that the company views the problem as a security risk similar to hacking or fraud.

The company said it would continue to work directly with politicians and campaigns to make sure they use the social network securely.

“Our dedicated teams focus daily on account integrity, user safety, and security, and we have implemented additional measures to protect vulnerable people in times of heightened cyber activity such as election periods, times of conflict or political turmoil, and other high profile events,” the report says.