What does ‘regulating Facebook’ mean? Here’s an example

Many officials say that governments should regulate Facebook and other social platforms, but few describe what that actually means. A few days ago, France released a report that outlines what France — and maybe the European Union — plans to do about content moderation.

It’s an insightful 34-page document with a nuanced take on toxic content and how to deal with it. There are some brand new ideas in the report that are worth exploring. Instead of moderating content directly, the regulator in charge of social networks would give Facebook and other social networks a list of objectives. For instance, if a racist photo went viral and reached 5 percent of monthly active users in France, the regulator could consider that the social network had failed to fulfill its obligations.

This isn’t just wishful thinking, as the regulator would be able to fine the company up to 4 percent of its global annual turnover in the event of a systemic failure to moderate toxic content.

The government plans to turn the report into new regulation in the coming months. And France doesn’t plan to stop there. It is already lobbying other countries (in Europe, among the Group of 7 nations and beyond) so that they can come up with cross-border regulation that has a real impact on moderation processes. So let’s dive into the future of social network regulation.

Facebook first opened its doors

When Facebook CEO Mark Zuckerberg testified before Congress in April 2018, it felt like regulation was inevitable. And the company itself has been aware of this for a while.

That’s why the French government used this opportunity to open a conversation with the company. Regulators could have introduced sweeping regulation without consulting the company. Or Facebook could cooperate with the French government, helping regulators understand how the platform works so they could introduce fine-grained regulation.

In November 2018, Facebook and the French government announced that they were going to cooperate. French regulators launched an informal investigation into Facebook’s algorithmic and human moderation processes for toxic content.

Regulators from multiple administrations focused on different areas — how flagging works, how Facebook identifies problematic content, how it decides whether that content is actually problematic and what happens when it takes down a post, a video or an image.

Emmanuel Macron and Nick Clegg of Facebook unveiled the initiative at a lunch reception at the Élysée

Ten people working for multiple ministries and regulators (Arcep, CSA, DILCRAH, DINSIC and other complicated acronyms…) spent a few months asking questions and requesting information from Facebook so that they could come up with recommendations for future regulation.

“The report that involved multiple administrations represents the overall philosophy of what the government is going to do over the coming months,” France’s Digital Minister Cédric O told me last week.

That’s why it’s worth exploring that document page by page to understand how it’s going to affect social networks.

Diving into the report

Methodology

French regulators first visited Facebook’s offices in Paris, Dublin (Facebook’s headquarters in Europe) and Barcelona (one of Facebook’s moderation centers). They discussed the company’s policies on hate speech, resources and internal processes. They met Facebook again to focus on algorithms that automatically detect hateful content as well as basic principles behind the news feed.

The report

Right from the start, the report makes it clear that the mission couldn’t access everything it asked for:

“Although the mission received a very open welcome from Facebook, it did not have access to particularly detailed, let alone truly confidential information. This was due to the speed of the work, the lack of a formal legal framework and the limits of Facebook’s transparency policy.”

While Facebook first partnered with the French government for this report, four social networks are named directly — Facebook, YouTube, Twitter and Snapchat. That’s why French regulators also had meetings with Google, Twitter, Snap, as well as German and British regulators, various administrations and nonprofits.

Definitions

But what is a social network exactly? According to the report, a social network is a service with user-generated content that you can broadcast to some or all other users of the service.

The authors note that social networks vary a lot in size, from thousands of users to billions. The comment section of a news website, for instance, could be considered a social network. But they all have one thing in common — there’s no ex ante moderation or editorial selection.

Anybody can post something — and this is the big difference between social networks and traditional media websites with an editorial staff (like TechCrunch!).

But that doesn’t mean social networks aren’t media companies of some sort — people often say ‘social media’ instead of ‘social networks’, after all. Given the sheer volume of posts, social networks have to select content and define a hierarchy.

When you type facebook.com in your browser, you only see a dozen posts. Unlike a traditional media company’s front page, that selection is personalized for each user and generated by algorithms based on multiple criteria that are mostly kept secret. That feature alone explains why some posts go viral (a key issue, according to the report).

While social networks all have terms of service, the biggest ones just can’t moderate every single post that breaks them. According to the report, too many violations fall through the cracks, and that has an impact on the fabric of society.

The report names content inciting hatred, terrorist content, child pornography, online harassment, identity theft, fake news and attempts to manipulate public opinion as problematic content.

It’s interesting to see that they don’t spend a lot of time defining problematic content. Social networks and the French government probably don’t agree on what constitutes content inciting hatred or online harassment.

For instance, Holocaust denial is a crime in France. But US-based companies could consider it free speech, as Mark Zuckerberg has already suggested. Facebook says in its community standards that it does not allow hate speech, but it’s unclear what that means in practice.

I just searched “Holocaust happen” on Facebook to see what results would come up. In the video tab, the third result was a video titled “HOLOCAUST NEVER HAPPENED”.

Should a Facebook user based in the U.S. be able to deny that the Holocaust happened? And, in the light of today’s report, should a user based in France be able to see this video?

French authorities currently have no say on this issue. If the user who posted that video isn’t based in France, they haven’t committed a crime in France.

That’s why the report thinks social networks should be partly responsible for content that they host and distribute around the world.

“Even if the abuses are committed by users, social networks’ role in the presentation and selective promotion of content, the inadequacy of their moderation systems and the lack of transparency of their platforms’ operation justify intervention by the public authorities, notwithstanding the efforts made by certain operators.”

Philosophy

While the authors of the report note that Facebook and YouTube have ramped up their self-regulation efforts, they say the companies still aren’t doing enough.

Here’s what they expect from Facebook and social networks:

  • More transparency with public authorities. Governments currently have as much information as a normal user reading public statements.
  • More checks and balances as social networks write their own terms of service, modify them without any outside input, interpret them and report on their effectiveness.
  • More cooperation as authors of the report think social networks have been promoting self-regulation in order to avoid proper regulation and a public outcry.

This is where the report gets interesting. Regulators don’t think they should review content directly. Instead, they believe it would be more effective to empower social networks.

Here’s how the relationship would work:

  • Authorities would give them public interest objectives.
  • Social networks would figure out how to implement processes to meet those objectives.
  • A French regulator could sanction companies that don’t meet their objectives.

Social networks wouldn’t have to block all hate speech because that’s just impossible. But they’d have to prove that they’re doing a good enough job to limit hate speech to a tiny portion of total content.

“It’s just like banking regulators. They check that banks have implemented systems that are efficient, and they audit those systems. I think that’s how we should think about it,” Cédric O told me in a recent interview.

While I don’t know much about banking regulation, I talked with someone working in the banking industry who confirmed that it’s an accurate description of banking regulation.

Eventually, the European Union could transpose this model into European regulation. Just like with GDPR, social networks would fall under those new rules if they distribute posts in EU Member States.

For instance, even if Facebook’s European headquarters are in Ireland, regulation would apply in other EU countries and follow national law.

If EU Member States can’t agree, disparities between national regulations could complicate things too much. Big players would be the only ones with enough resources to adapt their product to each country, creating a barrier to entry.

Implementation

Regulation wouldn’t apply to all social networks — only the biggest ones. If 10 to 20 percent of France’s population has an account on a particular social network, regulation would kick in.

If that social network fails to comply with regulation and posts regularly go viral, reaching 0 to 5 percent of monthly active users, the regulator could intervene.

As you can see, it’s hard to put an exact number on those thresholds. If the percentages are too low, social networks could end up deleting a ton of content, even legitimate content. If they’re too high, the regulation is useless.
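The report’s two-threshold logic can be sketched in a few lines of code. This is a hypothetical illustration only — the exact cutoffs and figures below are assumptions picked from the ranges mentioned above, not values the report settles on:

```python
# Hypothetical sketch of the report's two thresholds.
# All numbers here are illustrative assumptions, not figures from the report.

ADOPTION_THRESHOLD = 0.15   # share of France's population on the network (report suggests 10-20%)
VIRALITY_THRESHOLD = 0.05   # share of monthly active users reached by a flagged post (0-5%)

def is_regulated(users_in_france: int, population: int) -> bool:
    """Regulation only kicks in for networks above a national adoption threshold."""
    return users_in_france / population >= ADOPTION_THRESHOLD

def breaches_objective(post_reach: int, monthly_active_users: int) -> bool:
    """A network fails its objective if a toxic post reaches too many of its users."""
    return post_reach / monthly_active_users >= VIRALITY_THRESHOLD

# Example: a network with 12M French users (population ~67M) where a flagged
# post reached 700,000 of its 12M monthly active users in France.
if is_regulated(12_000_000, 67_000_000) and breaches_objective(700_000, 12_000_000):
    print("regulator can intervene")
```

The point of this structure is that the regulator never rules on individual posts; it only measures outcomes against the two thresholds.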

When it comes to transparency, the regulator should be able to get detailed information about how the news feed works — this is a big ask. Similarly, the regulator should be able to audit moderation processes, internal rules, a detailed list of trusted flaggers and media partners, as well as various statistics on the virality of flagged content, false positives, etc.

With this effort, regulation would reduce the information asymmetry between the government and the regulator on one side, and social networks on the other side.

Also worth noting: every time something is deleted, the platform should notify both the author of the post and the person who reported it, along with a valid reason.

All of this sounds great but who would be in charge of implementing such a piece of regulation?

The report doesn’t name the regulator on purpose. It’s still unclear whether France wants to create a new regulator, work with an existing regulator or merge multiple regulators to form a sort of super-regulator.

“On a personal level, I’m extremely torn on this choice,” Cédric O said.

Here’s the role and scope of that mythical regulator:

  • The regulator would use fake identities to log in and browse social networks. It would also have access to moderation algorithms through API calls. This way, it could check whether social networks are giving fake numbers in their public statements and reports.
  • Laws on trade secrets and data privacy wouldn’t apply there. The regulator would be able to access a lot of stuff.
  • If a social network infringes the law, the regulator would fine the company up to 4 percent of the company’s global annual turnover (just like GDPR). Social networks would have to announce that they didn’t comply with regulation to their users.

Yes, you read that correctly. 4 percent of the company’s global annual turnover.
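To get a sense of scale, here’s a back-of-the-envelope calculation. The turnover figure is a hypothetical assumption for illustration, not a number from the report:

```python
# Scale of the maximum fine under the proposed regime.
# The turnover figure is a hypothetical assumption, not from the report.
turnover = 55_000_000_000         # assumed global annual turnover in USD
max_fine = 0.04 * turnover        # 4 percent cap, as in the report (and GDPR)
print(f"${max_fine / 1e9:.1f}B")  # a maximum fine in the low billions
```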

The report sums all of this up in one chart, though the chart itself is a bunch of indecipherable arrows and bullet points.

Limits

The main criticism against GDPR is that it creates a barrier to entry. Smaller players are faced with a ton of requirements and waste time implementing policies that comply with GDPR. Some people even go as far as to say that GDPR has helped Google, Facebook and other data giants.

The same thing could be said for regulation on social networks. What is going to happen to smaller companies once they reach a certain size and have to comply with social network regulation?

Another limit: while most of the report focuses on public content, or content broadcast to a group of people in a semi-public way, the same framework can’t be applied to group messaging apps. Rumors spread on WhatsApp have led to multiple deaths in India, for instance.

Finally, regulating all social networks seems like an impossible task. Facebook, YouTube, Snap and Twitter seem like a good start. But, as the report suggests, it’s going to be much harder to convince 4chan or 8chan to comply with French and European regulation.

A small part of Facebook’s user base could also move to fringe social networks. It would balkanize the issue of hate speech across many different platforms.

Christchurch, G7, the EU and France

Now that the French government has developed a regulation model, the government wants to convince as many countries as possible that they should follow in France’s footsteps. This is when diplomacy kicks in.

There are multiple groups of countries that could potentially be interested in social network regulation. And if France wants to have a significant influence on the moderation processes of social networks, the government needs allies.

That’s why last week’s announcements were significant. World leaders and tech giants signed a pledge called the Christchurch Call. Named after the recent terrorist attack in Christchurch, the pledge asks tech platforms to increase their efforts to block toxic content.

By putting the spotlight on a specific issue, it was easier to convince both governments and private companies to sign this non-binding pledge. While the U.S. hasn’t signed, 17 countries and eight companies did.

New Zealand’s Prime Minister Jacinda Ardern and French President Emmanuel Macron announce the Christchurch Call (Photo credit: Yoan Valat/AFP/Getty Images)

The pledge most likely won’t lead to anything substantial. But digital ministers of the Group of 7 nations also met last week in order to discuss an upcoming charter on toxic content and tech regulation at large.

Those countries plan to sign the charter during the annual G7 meeting in Biarritz, France in August. Longer negotiations combined with a smaller group of countries could lead to more concrete results.

Digital ministers of G7 countries (Photo credit: Eric Piermont/AFP/Getty Images)

Discussions with other European countries are also key. While it’s often hard to find common ground when it comes to European regulation, some European countries will likely side with France on this issue as they’ve already been working on similar regulation on their own. Regulation at the EU-level is the most likely outcome of France’s diplomatic efforts.

Co-regulation or smart regulation?

Discussions between Facebook and the French government show that word choice matters. When Facebook first announced that French regulators would take a look at the company’s internal processes, it said the effort would lead to “co-regulation”.

“It is in that context significant and welcome that the French government and Facebook are going to announce a new initiative. That model of co-regulation of the public tech sector is absolutely key,” former British Deputy Prime Minister and Facebook VP for Global Affairs and Communications Nick Clegg said.

Emmanuel Macron talking about smart regulation at VivaTech (Photo credit: Philippe Lopez/AFP/Getty Images)

But the French government doesn’t use the phrase co-regulation. At a tech conference in Paris, France’s President Emmanuel Macron talked about “smart regulation”.

“What we want to do is to increase regulation against hate speech. It is sometimes very complicated,” he said. “If you pass regulation on your own, sometimes it is non-feasible. Sometimes it’s not adaptable and you can block everything, the dynamic of the system. And you can have side effects you didn’t see as a regulator. So what we decided to do with some platforms is to send the regulators, embed them with the company and the tech guys in order precisely to work together during months. And we’re building smart regulation against hate speech.”

It might seem like a small difference in wording, but it’s quite telling. Facebook wants you to think that it is already doing a lot to protect you from hate speech and that it is still in control, co-regulating to improve its processes. The French government, meanwhile, doesn’t want you to think that Facebook is writing the law with it.

Given that Mark Zuckerberg has a bigger audience than Emmanuel Macron, chances are we’ll talk about co-regulation in the coming years.
