Europe dials up pressure on tech giants over election security

The European Union has announced a package of measures intended to step up efforts and pressure on tech giants to combat democracy-denting disinformation ahead of the EU parliament elections next May.

The European Commission Action Plan, which was presented at a press briefing earlier today, has four areas of focus: 1) Improving detection of disinformation; 2) Greater co-ordination across EU Member States, including by sharing alerts about threats; 3) Increased pressure on online platforms, including demands for more transparency around political ads and the purging of fake accounts; and 4) Raising awareness and critical thinking among EU citizens.

The Commission says 67% of EU citizens are worried about their personal data being used for political targeting, and 80% want improved transparency around how much political parties spend to run campaigns on social media.

And it warned today that it wants to see rapid action from online platforms to deliver on pledges they’ve already made to fight fake news and election interference.

The EC’s plan follows a voluntary Code of Practice launched two months ago, which signed up tech giants including Facebook, Google and Twitter, along with some ad industry players, to some fairly fuzzy commitments to combat the spread of so-called ‘fake news’.

They also agreed to hike transparency around political advertising. But efforts so far remain piecemeal, with — for example — no EU-wide rollout of Facebook’s political ads disclosure system.

Facebook has only launched political ad identification checks plus an archive library of ads in the US, Brazil and the UK so far, leaving the rest of the world to rely on the more limited ‘view ads’ functionality that it has rolled out globally.

The EC said it will be stepping up its monitoring of platforms’ efforts to combat election interference — with the new plan including “continuous” monitoring.

This will take the form of monthly progress reports, starting with a Commission progress report in January (against what it slated as “very specific targets”), to ensure signatories are actually purging and disincentivizing bad actors and inauthentic content from their platforms, not just saying they’re going to.

As we reported in September, the Code of Practice looked to be a pretty dilute first effort. But ongoing progress reports could at least help concentrate minds — coupled with the ongoing threat of EU-wide legislation if platforms fail to effectively self-regulate.

Digital economy and society commissioner Mariya Gabriel said the EC would have “measurable and visible results very soon”, warning platforms: “We need greater transparency, greater responsibility both on the content, as well as the political approach.”

Security union commissioner, Julian King, came in even harder on tech firms — warning that the EC wants to see “real progress” from here on in.

“We need to see the Internet platforms step up and make some real progress on their commitments. This is stuff that we believe the platforms can and need to do now,” he said, accusing them of “excuses” and “foot-dragging”.

“The risks are real. We need to see urgent improvement in how adverts are placed,” he continued. “Greater transparency around sponsored content. Fake accounts rapidly and effectively identified and deleted.”

King pointed out that Facebook admits between 3% and 4% of its entire user base is fake.

“That is somewhere between 60M and 90M fake accounts,” he continued. “And some of those accounts are the most active accounts. A recent study found that 80% of the Twitter accounts that spread disinformation during the 2016 US election are still active today — publishing more than a million tweets a day. So we’ve got to get serious about this stuff.”
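For a sense of the denominator behind that range, take Facebook’s then-reported base of roughly 2.2 billion monthly active users (our assumption; King did not spell out the figure he was working from). The arithmetic then runs:

$$0.03 \times 2.2\ \text{billion} \approx 66\ \text{million}, \qquad 0.04 \times 2.2\ \text{billion} \approx 88\ \text{million},$$

which sits comfortably inside the 60M–90M span King cited.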

Twitter declined to comment directly on today’s developments but a spokesperson told us its “number one priority is improving the health of the public conversation”.

“Tackling co-ordinated disinformation campaigns is a key component of this. Disinformation is a complex, societal issue which merits a societal response,” Twitter’s statement said. “For our part, we are already working with our industry partners, Governments, academics and a range of civil society actors to develop collaborative solutions that have a meaningful impact for citizens. For example, Twitter recently announced a global partnership with UNESCO on media and information literacy to help equip citizens with the skills they need to critically analyse content they are engaging with online.”

We’ve also reached out to Facebook and Google for comment on the Commission plan.

King went on to press for “clearer rules around bots”, saying he would personally favor a ban on political content being “disseminated by machines”.

The Code of Practice does include a commitment to address both fake accounts and online bots, and “establish clear marking systems and rules for bots to ensure their activities cannot be confused with human interactions”. And Twitter has previously said it’s considering labelling bots, albeit with the caveat “as far as we can detect them”.

But action is still lacking.

“We need rapid corrections, which are given the same prominence and circulation as the original fake news. We need more effective promotion of alternative narratives. And we need to see overall greater clarity around how the algorithms are working,” King continued, banging the drum for algorithmic accountability.

“All of this should be subject to independent oversight and audit,” he added, suggesting the self-regulation leash here will be a very short one.

He said the Commission will make a “comprehensive assessment” of how the Code is working next year, warning: “If the necessary progress is not made we will not hesitate to reconsider our options — including, eventually, regulation.”

“We need to be honest about the risks, we need to be ready to act. We can’t afford an Internet that is the wild west where anything goes, so we won’t allow it,” he concluded.

Commissioner Vera Jourova also attended the briefing and used her time at the podium to press platforms to “immediately guarantee the transparency of political advertising”.

“This is a quick fix that is necessary and urgent,” she said. “It includes properly checking and clearly indicating who is behind online advertisement and who paid for it.”

In Spain, regional elections took place in Andalusia on Sunday and — as noted above — while Facebook has launched a political ad authentication process and ad archive library in the US, Brazil and the UK, the company confirmed to us that no such system was up and running in Spain in time for that regional vote.

In the Andalusia vote a tiny far-right party, Vox, broke pollsters’ predictions to take twelve seats in the regional parliament — a first for the far right since the country’s return to democracy after the death of the dictator Francisco Franco in 1975.

Zooming in on election security risks, Jourova warned that “large-scale organized disinformation campaigns” have become “extremely efficient and spread with the speed of light” online. She also warned that non-transparent ads “will be massively used to influence opinions” in the run up to the EU elections.

Hence the pressing need for a transparency guarantee.

“When we allow the machines to massively influence free decisions of democracy I think that we have appeared in a bad science fiction,” she added. “The electoral campaign should be the competition of ideas, not the competition of dirty money, dirty methods, and hidden advertising where the people are not informed and don’t have a clue that they are influenced by some hidden powers.”

Jourova urged Member States to update their election laws so existing requirements on traditional media to observe a pre-election period also apply online.

“We all have roles to play, not only Member States, also social media platforms, but also traditional political parties. [They] need to make public the information on their expenditure for online activities as well as information on any targeting criteria used,” she concluded.

A report by the UK’s Digital, Culture, Media and Sport (DCMS) committee, which has been running an inquiry into online disinformation for the best part of this year, made similar recommendations in its preliminary report this summer.

Though the committee also went further, calling for a levy on social media to defend democracy. The UK government, however, did not leap to act on the recommendation.

Also speaking at today’s presser, EC VP, Andrus Ansip, warned of the ongoing disinformation threat from Russia but said the EU does not intend to respond to the threat from propaganda outlets like RT, Sputnik and IRA troll farms by creating its own pro-EU propaganda machine.

Rather, he said, the plan is to focus efforts on accelerating collaboration and knowledge-sharing to improve the detection and debunking of disinformation campaigns.

“We need to work together and co-ordinate our efforts — in a European way, protecting our freedoms,” he said, adding that the plan sets out “how to fight back against the relentless propaganda and information weaponizing used against our democracies”.

Under the action plan, the European External Action Service (EEAS) — which bills itself as the EU’s diplomatic service — will see its strategic communications budget more than double next year, to €5M, with the additional funds intended to “address disinformation and raise awareness about its adverse impact”, including by beefing up headcount.

“This will help them to use new tools and technologies to fight disinformation,” Ansip suggested.

Another new measure announced today is a dedicated Rapid Alert System which the EC says will facilitate “the sharing of data and assessments of disinformation campaigns and to provide alerts on disinformation threats in real time”, with knowledge-sharing flowing between EU institutions and Member States.

The EC also says it will boost resources for national multidisciplinary teams of independent fact-checkers and researchers to detect and expose disinformation campaigns across social networks — working towards establishing a European network of fact-checkers.

“Their work is absolutely vital in order to combat disinformation,” said Gabriel, adding: “This is very much in line with our principles of pluralism of the media and freedom of expression.”

Investments will also go towards supporting media education and critical awareness, with Gabriel noting that the Commission will run a European media education week next March to draw attention to the issue and gather ideas.

She said the overarching aim is to “give our citizens a whole array of tools that they can use to make a free choice”.

“It’s high time we give greater visibility to this problem because we face this on a day to day basis. We want to provide solutions — so we really need a bottom up approach,” she added. “It’s not up to the Commission to say what sort of initiatives should be adopted; we need to give stakeholders and citizens their possibility to share best practices.”

This report was updated with a correction to include Brazil in the list of countries where Facebook had launched a system for confirming the identity of political advertisers. The other two countries were correctly stated as the US and the UK. Facebook has also since launched an identity check for political advertisers in India.