After holding a series of hearings in the wake of the Facebook-Cambridge Analytica data misuse scandal this summer, and attending a meeting with Mark Zuckerberg himself in May, the European Union parliament’s civil liberties committee has called for an update to competition rules to reflect what it dubs “the digital reality”, urging EU institutions to look into the “possible monopoly” of big tech social media platforms.
Top-level EU competition law has yet to touch the social media side of big tech, with the Commission concentrating its recent attention on mobile chips (Qualcomm); mobile and ecommerce platforms (mostly Google, though Amazon’s use of merchant data is in its sights too); and Apple’s tax structure in Ireland.
But last week Europe’s data protection supervisor, Giovanni Buttarelli, told us that closer working between privacy regulators and the EU’s Competition Commission is on the cards, as regional lawmakers look to evolve their oversight frameworks to respond to growing ethical concerns about use and abuse of big data, and indeed to be better positioned to respond to fast-paced technology-fuelled change.
Local EU antitrust regulators, including in Germany and France, have also been investigating the Google-Facebook adtech duopoly on several fronts in recent years.
The Libe committee’s resolution is the latest political push to spin up and scale up antitrust effort and attention around social media.
The committee also says it wants to see much greater accountability and transparency on “algorithmic-processed data by any actor, be it private or public” — signalling a belief that GDPR does not go far enough on that front.
Libe committee chair and rapporteur, MEP Claude Moraes, has previously suggested the Facebook Cambridge Analytica scandal could help inform and shape an update to Europe’s ePrivacy rules, which remain at the negotiation stage with disagreements over scope and proportionality.
But every big tech data breach and security scandal lends weight to the argument that stronger privacy rules are indeed required.
In yesterday’s resolution, the Libe committee also called for an audit of the advertising industry on social media — echoing a call made by the UK’s data protection watchdog, the ICO, this summer for an ‘ethical pause’ on the use of online ads for political purposes.
The ICO made that call right after announcing it planned to issue Facebook with the maximum fine possible under UK data protection law — again for the Cambridge Analytica breach.
While the Cambridge Analytica scandal — in which the personal information of as many as 87 million Facebook users was extracted from the platform without their knowledge or consent, and passed to the now defunct political consultancy (which used it to create psychographic profiles of US voters for election campaigning purposes) — has triggered this latest round of political scrutiny of the social media behemoth, last month Facebook revealed another major data breach, affecting at least 50 million users — underlining its ongoing struggle to live up to claims of having ‘locked the platform down’.
In light of both breaches, the Libe committee has now called for EU bodies to be allowed to fully audit Facebook — to independently assess its data protection and security practices.
Buttarelli also told us last week that it’s his belief none of the tech giants are directing adequate resources at keeping user data safe.
And with Facebook having already revealed a second breach that’s potentially even larger than Cambridge Analytica, fresh focus and political attention are falling on the substance of its security practices, not just its claims.
While the Libe committee’s MEPs say they have taken note of steps Facebook made in the wake of the Cambridge Analytica scandal to try to improve user privacy, they point out it has still not carried out the promised full internal audit.
Facebook has never said how long this historical app audit will take, though it has given some progress reports — such as detailing additional suspicious activity found to date, with 400 apps suspended at the last count. (One app, called myPersonality, was also banned outright for improper data controls.)
The Libe committee is now urging Facebook to allow the EU Agency for Network and Information Security (ENISA) and the European Data Protection Board, which plays a key role in applying the region’s data protection rules, to carry out “a full and independent audit” — and present the findings to the European Commission and Parliament and national parliaments.
It has also recommended that Facebook make “substantial modifications to its platform” to comply with EU data protection law.
Commenting on the resolution in a statement, Libe chair Moraes said: “This resolution makes clear that we expect measures to be taken to protect citizens’ right to private life, data protection and freedom of expression. Improvements have been made since the scandal, but, as the Facebook data breach of 50 million accounts showed just last month, these do not go far enough.”
We’ve reached out to Facebook for comment on the recommendations — including specifically asking the company whether it’s open to an external audit of its platform. Update: The company declined to provide an on-the-record comment in response to our question, but a spokesperson emailed the following statement:
We are grateful to the European Parliament for the number of opportunities to come and explain the changes we have made to our platform. We are working relentlessly to ensure the transparency, safety and security of people who use Facebook. Over the last months we have developed sophisticated systems that combine technology and people to prevent election interference on our services. This is part of a broader challenge for us at Facebook to be more proactive about protecting our community from harm and taking a broader view of our responsibility overall.
The company added that its internal audit of apps with access to a large amount of information prior to policy changes made in 2014 to tighten its APIs is continuing.
The Libe committee has also made a series of proposals for reducing the risk of social media being used as an attack vector for election interference — including:
- applying conventional “off-line” electoral safeguards, such as rules on transparency and limits to spending, respect for silence periods and equal treatment of candidates;
- making it easy to recognize online political paid advertisements and the organisation behind them;
- banning profiling for electoral purposes, including use of online behaviour that may reveal political preferences;
- requiring social media platforms to label content shared by bots and to speed up the removal of fake accounts;
- compulsory post-campaign audits to ensure personal data are deleted;
- investigations by member states with the support of Eurojust if necessary, into alleged misuse of the online political space by foreign forces.
A couple of weeks ago, the Commission outed a voluntary industry Code of Practice aimed at tackling online disinformation which several tech platforms and adtech companies had agreed to sign up to, and which also presses for action in some of the same areas — including fake accounts and bots.
However, the code is not only voluntary but also fails to bind signatories to any specific policy steps or processes — so its effectiveness looks likely to be as hard to quantify as its accountability is to enforce.
A UK parliamentary committee that has been probing political disinformation this year also put out a report this summer with a package of proposed measures — including some similar ideas, but also suggesting a levy on social media to ‘defend democracy’.
Meanwhile Facebook itself has been working on increasing transparency around advertisers on its platform, and putting in place some authorization requirements for political advertisers (though starting in the US first).
But few politicians appear ready to trust that the steps Facebook is taking will be enough to avoid a repeat of, for example, the mass Kremlin propaganda smear campaign that targeted the 2016 US presidential election.
The Libe committee has also urged all EU institutions, agencies and bodies to verify that their social media pages, and any analytical and marketing tools they use, “should not by any means put at risk the personal data of citizens”.
And it goes as far as suggesting that EU bodies could even “consider closing their Facebook accounts” — as a measure to protect the personal data of every individual contacting them.
The committee’s full resolution was passed by 41 votes to 10, with one abstention, and will be put to a vote by the full EU Parliament during the next plenary session later this month.
In it, the Libe also renews its call for the suspension of the EU-US Privacy Shield.
The data transfer arrangement, which is used by thousands of businesses to authorize transfers of EU users’ personal data across the Atlantic, is under growing pressure ahead of an annual review this month — with the Trump administration having entirely failed to respond as EU lawmakers had hoped their US counterparts would when the agreement was inked in the Obama era, back in 2016.
The EU parliament also called for Privacy Shield to be suspended this summer. And while the Commission did not act on those calls, pressure has continued to mount from MEPs and EU consumer and digital and civil rights bodies.
During the Privacy Shield review process this month, the Commission will press its US counterparts for concessions that it can sell back home as ‘compliance’.
But without very major concessions — and who would bank on that, given the priorities of the current US administration — the future of the precariously placed mechanism looks increasingly uncertain.
Even as more oversight of social media platforms looks all but inevitable in Europe.