What does ‘regulating Facebook’ mean? Here’s an example

Many officials claim that governments should regulate Facebook and other social platforms, but few describe what regulation would actually look like. A few days ago, France released a report that outlines what the country — and maybe the European Union — plans to do when it comes to content moderation.

It’s an insightful 34-page document with a nuanced take on toxic content and how to deal with it, and it contains some brand new ideas worth exploring. Instead of moderating content directly, the regulator in charge of social networks would give Facebook and other social networks a list of objectives. For instance, if a racist photo goes viral and is distributed to 5 percent of monthly active users in France, the regulator could consider that the social network has failed to fulfill its obligations.

This isn’t just wishful thinking: the regulator would be able to fine a company up to 4 percent of its global annual turnover in case of a systemic failure to moderate toxic content.
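To make that 5 percent example concrete, here is a minimal sketch of the kind of check a regulator could run. The function name, the threshold and the audience figures are illustrative assumptions, not something the report prescribes.

```python
# Hypothetical reach check -- illustrative only, the report doesn't define a formula.
def breaches_reach_objective(users_reached: int,
                             monthly_active_users: int,
                             threshold: float = 0.05) -> bool:
    """True if a piece of toxic content reached more than `threshold`
    (here 5 percent) of the platform's monthly active users in France."""
    return users_reached / monthly_active_users > threshold

# Example with made-up numbers: a racist photo seen by 2 million users
# on a platform with roughly 35 million French monthly active users.
print(breaches_reach_objective(2_000_000, 35_000_000))  # True (~5.7 percent)
```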

The government plans to turn the report into new pieces of regulation in the coming months. And France doesn’t plan to stop there. It is already lobbying other countries (in Europe, the Group of 7 nations and beyond) so that they can all come up with cross-border regulation that has a real impact on moderation processes. So let’s dive into the future of social network regulation.

Facebook first opened its doors

When Facebook CEO Mark Zuckerberg testified before Congress in April 2018, it felt like regulation was inevitable. And the company itself has been aware of this for a while.

That’s why the French government used this opportunity to open a conversation with the company. Regulators could introduce sweeping regulation without consulting the company. Or Facebook could choose to cooperate with the French government, help it understand how Facebook works and end up with more fine-grained regulation.

In November 2018, Facebook and the French government announced that they were going to cooperate. French regulators launched an informal investigation into Facebook’s algorithmic and human moderation processes for toxic content.

Regulators from multiple administrations focused on different areas — how flagging works, how Facebook identifies problematic content, how Facebook decides if it’s problematic or not and what happens when Facebook takes down a post, a video or an image.

Emmanuel Macron and Nick Clegg of Facebook unveiled the initiative at a lunch reception at the Élysée

Ten people working for multiple ministries and regulators (Arcep, CSA, DILCRAH, DINSIC and other complicated acronyms…) spent a few months asking questions and requesting information from Facebook so that they could come up with recommendations for future regulation.

“The report that involved multiple administrations represents the overall philosophy of what the government is going to do over the coming months,” France’s Digital Minister Cédric O told me last week.

That’s why it’s worth exploring that document page by page to understand how it’s going to affect social networks.

Diving into the report

Methodology

French regulators first visited Facebook’s offices in Paris, Dublin (Facebook’s headquarters in Europe) and Barcelona (one of Facebook’s moderation centers). They discussed the company’s policies on hate speech, resources and internal processes. They met Facebook again to focus on algorithms that automatically detect hateful content as well as basic principles behind the news feed.

The report

Right from the start of the report, it doesn’t sound like the regulators could access everything they asked for:

“Although the mission received a very open welcome from Facebook, it did not have access to particularly detailed, let alone truly confidential information. This was due to the speed of the work, the lack of a formal legal framework and the limits of Facebook’s transparency policy.”

While Facebook first partnered with the French government for this report, four social networks are named directly — Facebook, YouTube, Twitter and Snapchat. That’s why French regulators also had meetings with Google, Twitter and Snap, as well as German and British regulators, various administrations and nonprofits.

Definitions

But what is a social network exactly? According to the report, a social network is a service with user-generated content that you can broadcast to some or all other users of the service.

The authors insist that social networks vary a lot, from thousands of users to billions of users. For instance, the comment section of a news website could be considered a social network. But they all have one thing in common — there’s no ex ante moderation or editorial selection.

Anybody can post something — and this is the big difference between social networks and traditional media websites with an editorial staff (like TechCrunch!).

But that doesn’t mean social networks aren’t media companies of some sort — people often say ‘social media’ instead of ‘social networks’ after all. Due to the volume of content, social networks select content and define a hierarchy.

When you type facebook.com in your browser, you only see a dozen posts. Unlike at a traditional media company, that selection is personalized for each user and generated by algorithms based on multiple criteria that are mostly kept secret. That feature alone explains why some posts go viral (a key issue according to the report).

While social networks all have terms of service, the biggest ones just can’t moderate every single post that breaks those terms. According to the report, too many of those posts fall through the cracks, and that has an impact on the fabric of society.

The report names content inciting hatred, terrorist content, child pornography, online harassment, identity theft, fake news and attempts to manipulate public opinion as problematic content.

It’s interesting to see that they don’t spend a lot of time defining problematic content. Social networks and the French government probably don’t agree on what constitutes content inciting hatred or online harassment.

For instance, Holocaust denial is a crime in France. But US-based companies could consider it free speech, as Mark Zuckerberg has already suggested. Facebook says in its community standards that it does not allow hate speech, but it’s unclear what that actually means in practice.

I just searched “Holocaust happen” on Facebook to see what results would come up. In the video tab, the third result was a video called “HOLOCAUST NEVER HAPPENED”.

Should a Facebook user based in the U.S. be able to deny that the Holocaust happened? And, in light of today’s report, should a user based in France be able to see this video?

French authorities currently have no say on this issue. If the user who posted that video isn’t based in France, they haven’t committed a crime in France.

That’s why the report argues that social networks should be held partly responsible for the content they host and distribute around the world.

“Even if the abuses are committed by users, social networks’ role in the presentation and selective promotion of content, the inadequacy of their moderation systems and the lack of transparency of their platforms’ operation justify intervention by the public authorities, notwithstanding the efforts made by certain operators.”

Philosophy

While the authors of the report noticed that Facebook and YouTube have ramped up their self-regulation efforts, they say the companies still aren’t doing enough.

Here’s what they expect from Facebook and social networks:

  • More transparency with public authorities. Governments currently have as much information as a normal user reading public statements.
  • More checks and balances, as social networks currently write their own terms of service, modify them without any outside input, interpret them and report on their effectiveness.
  • More cooperation, as the authors of the report think social networks have been promoting self-regulation in order to avoid proper regulation and a public outcry.

This is where the report gets interesting. Regulators don’t think they should review content directly. Instead, they believe it would be more effective to empower social networks.

Here’s how the relationship would work:

  • Authorities would give them public interest objectives.
  • Social networks would figure out how to implement processes to meet those objectives.
  • A French regulator could sanction companies that don’t meet their objectives.

Social networks wouldn’t have to block all hate speech because that’s just impossible. But they’d have to prove that they’re doing a good enough job to limit hate speech to a tiny portion of total content.

“It’s just like banking regulators. They check that banks have implemented systems that are efficient, and they audit those systems. I think that’s how we should think about it,” Cédric O told me in a recent interview.

While I don’t know much about banking regulation, I talked with someone working in the banking industry who confirmed that it’s an accurate description of banking regulation.
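In concrete terms, “doing a good enough job” would presumably be measured with something like a prevalence metric, i.e. the share of content views that land on hate speech, rather than a raw count of deleted posts. Here is a rough sketch with invented numbers; the metric and the figures are assumptions, not something taken from the report.

```python
# Hypothetical prevalence metric -- not a metric defined in the report.
def hate_speech_prevalence(violating_views: int, total_views: int) -> float:
    """Fraction of content views over a period that involved hate speech."""
    return violating_views / total_views

# Made-up quarterly numbers for illustration.
prevalence = hate_speech_prevalence(violating_views=12_000_000,
                                    total_views=50_000_000_000)
print(f"{prevalence:.4%}")  # 0.0240% -- the regulator would audit how this is measured
```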

Eventually, the European Union could transpose this model into European regulation. Just like with GDPR, social networks would fall under those new rules if they distribute posts in EU Member States.

For instance, even if Facebook’s European headquarters are in Ireland, regulation would apply in other EU countries and follow national law.

If EU Member States can’t agree, disparities between national regulations could complicate things too much. Big players would be the only ones with enough resources to adapt their product to each country, creating a barrier to entry.

Implementation

Regulation wouldn’t apply to all social networks — only the biggest ones. If 10 to 20 percent of France’s population has an account on a particular social network, regulation would kick in.

If that social network fails to comply with the regulation and problematic posts regularly go viral, reaching somewhere between 0 and 5 percent of monthly active users, the regulator could intervene.

As you can see, it’s hard to put an exact number on those thresholds. If those percentages are too low, social networks could end up deleting a ton of content, even legitimate content. If they are too high, then the regulation is useless.
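For the first threshold, the one that decides which platforms fall under the regulation at all, the check itself is trivial; the hard part is where the cursor is placed. Here is a quick sketch in which the population figure and the cut-offs are placeholders, since the report only gives the 10-to-20-percent range.

```python
# Illustrative scope test: does the regulation apply to this platform at all?
# The 10-20% range comes from the report; the exact cut-off is a placeholder.
FRANCE_POPULATION = 67_000_000

def falls_under_regulation(accounts_in_france: int,
                           coverage_threshold: float = 0.15) -> bool:
    """True if enough of France's population has an account on the platform."""
    return accounts_in_france / FRANCE_POPULATION >= coverage_threshold

# A platform with 9 million French accounts (~13%) would be in or out of scope
# depending on whether the threshold is set at 10 or 20 percent.
print(falls_under_regulation(9_000_000, coverage_threshold=0.10))  # True
print(falls_under_regulation(9_000_000, coverage_threshold=0.20))  # False
```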


When it comes to transparency, the regulator should be able to get detailed information about how the news feed works — this is a big ask. Similarly, the regulator should be able to audit moderation processes, internal rules, a detailed list of trusted flaggers and media partners, as well as various statistics on the virality of flagged content, false positives, etc.

With this effort, regulation would reduce the information asymmetry between the government and the regulator on one side, and social networks on the other side.

Also worth noting: every time something is deleted, the platform should notify both the author of the post and the person who reported it, and give a valid reason.

All of this sounds great, but who would be in charge of implementing such a piece of regulation?

The report doesn’t name the regulator on purpose. It’s still unclear whether France wants to create a new regulator, work with an existing regulator or merge multiple regulators to form a sort of super-regulator.

“On a personal level, I’m extremely torn on this choice,” Cédric O said.

Here’s the role and scope of that mythical regulator:

  • The regulator would use fake identities to log in and browse social networks. It would also have access to moderation algorithms through API calls. This way, it could check whether social networks are giving fake numbers in their public statements and reports.
  • Laws on trade secrets and data privacy wouldn’t apply here. The regulator would be able to access a lot of stuff.
  • If a social network infringes the law, the regulator would fine the company up to 4 percent of the company’s global annual turnover (just like GDPR). Social networks would also have to announce to their users that they failed to comply with the regulation.

Yes, you read that correctly. 4 percent of the company’s global annual turnover.
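To give a sense of scale, here is a quick back-of-the-envelope calculation. The revenue figure is approximate and only there for illustration.

```python
# 4% of global annual turnover, using Facebook's 2018 revenue (~$55.8 billion).
facebook_2018_revenue_usd = 55.8e9
max_fine_usd = 0.04 * facebook_2018_revenue_usd
print(f"Maximum fine: ${max_fine_usd / 1e9:.1f} billion")  # roughly $2.2 billion
```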

The report sums all of this up in a single chart, but it mostly looks like a bunch of indecipherable arrows and bullet points.

Limits

The main criticism against GDPR is that it creates a barrier to entry. Smaller players are faced with a ton of requirements and waste time implementing policies that comply with GDPR. Some people even go as far as to say that GDPR has helped Google, Facebook and other data giants.

The same thing could be said for regulation on social networks. What is going to happen to smaller companies once they reach a certain size and have to comply with social network regulation?

Another limit: while most of this report focuses on public content, or content that you broadcast to a group of people in a semi-public way, you can’t use the same framework for group messaging apps. Rumors spread on WhatsApp have led to multiple deaths in India, for instance.

Finally, regulating all social networks seems like an impossible task. Facebook, YouTube, Snap and Twitter seem like a good start. But, as the report suggests, it’s going to be much harder to convince 4chan or 8chan to comply with French and European regulation.

A small part of Facebook’s user base could also move to fringe social networks. That would balkanize the issue of hate speech across many different platforms.

Christchurch, G7, the EU and France

Now that the French government has developed a regulation model, the government wants to convince as many countries as possible that they should follow in France’s footsteps. This is when diplomacy kicks in.

There are multiple groups of countries that could potentially be interested in social network regulation. And if France wants to have a significant influence on the moderation processes of social networks, the government needs allies.

That’s why last week’s announcements were significant. World leaders and tech giants signed a pledge called the Christchurch Call. Named after the recent terrorist attack in Christchurch, the pledge asks tech platforms to increase their efforts when it comes to blocking toxic content.

By putting the spotlight on a specific issue, it was easier to convince both governments and private companies to sign this non-binding pledge. While the U.S. hasn’t signed the pledge, 17 countries and 8 companies signed it.

New Zealand’s Prime Minister Jacinda Ardern and French President Emmanuel Macron announce the Christchurch Call (Photo credit: Yoan Valat/AFP/Getty Images)

The pledge most likely won’t lead to anything substantial. But digital ministers of the Group of 7 nations also met last week in order to discuss an upcoming charter on toxic content and tech regulation at large.

Those countries plan to sign the charter during the annual G7 meeting in Biarritz, France in August. Longer negotiations combined with a smaller group of countries could lead to more concrete results.

Digital ministers of G7 countries (Photo credit: Eric Piermont/AFP/Getty Images)

Discussions with other European countries are also key. While it’s often hard to find common ground when it comes to European regulation, some European countries will likely side with France on this issue as they’ve already been working on similar regulation on their own. Regulation at the EU-level is the most likely outcome of France’s diplomatic efforts.

Co-regulation or smart regulation?

Discussions between Facebook and the French government show that wording matters. When Facebook first announced that French regulators would have a look at the company’s internal processes, Facebook said it would lead to “co-regulation”.

“It is in that context significant and welcome that the French government and Facebook are going to announce a new initiative. That model of co-regulation of the public tech sector is absolutely key,” former British Deputy Prime Minister and Facebook VP for Global Affairs and Communications Nick Clegg said.

Emmanuel Macron talking about smart regulation at VivaTech (Photo credit: Philippe Lopez/AFP/Getty Images)

But the French government doesn’t use the phrase co-regulation. At a tech conference in Paris, France’s President Emmanuel Macron talked about “smart regulation”.

“What we want to do is to increase regulation against hate speech. It is sometimes very complicated,” he said. “If you pass regulation on your own, sometimes it is non-feasible. Sometimes it’s not adaptable and you can block everything, the dynamic of the system. And you can have side effects you didn’t see as a regulator. So what we decided to do with some platforms is to send the regulators, embed them with the company and the tech guys in order precisely to work together during months. And we’re building smart regulation against hate speech.”

It might seem like a small difference in wording, but it’s quite telling. Facebook wants you to think that it is already doing a lot to protect you from hate speech and that it is still in control, co-regulating to improve its processes. And the French government doesn’t want you to think that Facebook is writing the law with it.

Given that Mark Zuckerberg has a bigger audience than Emmanuel Macron, chances are we’ll talk about co-regulation in the coming years.