Tech CEOs to face faster criminal liability under UK online safety law

The U.K. is speeding up the application of powers that could see tech CEOs sent to prison if their businesses fail to comply with incoming safety-focused internet content legislation, the government confirmed today.

The latest revisions to the draft legislation include a radically reduced time frame for being able to apply criminal liability powers against senior tech execs who fail to cooperate with information requests from the regulator — down to just two months after the legislation gets passed. (And since the government enjoys a large majority in the House of Commons, the incoming Online Safety regulation — already years in the making — could become law this year.)

While the draft bill, which was published in May 2021, has already seen a string of revisions — with more being announced today — the core plan has remained fairly constant: The government is introducing a dedicated framework to control how social media companies and other content-focused platforms must respond to certain types of problematic content (not only illegal content), which will include a regime of Codes of Practice overseen by the media and communications regulator, Ofcom, in a vastly expanded role, and hefty powers to fine rule-breakers up to 10% of their global annual turnover.

As the bill’s name suggests, the government’s focus is on a very broad “de-risking” of internet platforms — which means the bill aims to tackle not just explicitly illegal stuff (such as terrorism or child sexual abuse material, or CSAM) but aims to set rules for how the largest internet platforms need to approach “legal but harmful” online content, such as trolling.

Child safety campaigners especially have been pressing for years for tech firms to be forced to purge toxic content.

The government gradually, then quickly, embraced this populist cause; its stated aim for the bill is to make the U.K. the safest place in the world to go online, and it has been loudly banging a child-protection drum.

But it has also conceded that there are huge challenges to effective regulation of such a sprawling arena.

The revised draft bill will be introduced in parliament Thursday — kicking off a wider, cross-party debate of what remains a controversial yet populist plan to introduce a “duty of care” on social media companies and other user-generated-content-carrying platforms. The plan enjoys broad (but not universal) support among U.K. lawmakers.

Update: The revised bill (pdf) has now been published and can be downloaded from the government’s website.

Commenting on the introduction of the bill to parliament in a statement, Digital Secretary Nadine Dorries said:

“The internet has transformed our lives for the better. It’s connected us and empowered us. But on the other side, tech firms haven’t been held to account when harm, abuse and criminal behaviour have run riot on their platforms. Instead they have been left to mark their own homework.

“We don’t give it a second’s thought when we buckle our seat belts to protect ourselves when driving. Given all the risks online, it’s only sensible we ensure similar basic protections for the digital age. If we fail to act, we risk sacrificing the well-being and innocence of countless generations of children to the power of unchecked algorithms.

“Since taking on the job I have listened to people in politics, wider society and industry and strengthened the bill, so that we can achieve our central aim: to make the U.K. the safest place to go online.”

It’s fair to say there is broad backing inside the U.K. parliament for cracking the whip over tech platforms when it comes to content rules (MPs surely haven’t forgotten how Facebook’s founder snubbed earlier content questions).

There is, however, a diversity of opinion and dispute on the detail of how best to do that. So it will be interesting to see how parliamentarians respond to the draft as it goes through the legislative scrutiny process in the coming months.

Much of the U.K.’s Online Safety proposal still remains unclear, though — not least how well (or poorly) the regime will work in practice and what its multifaceted requirements will mean for in-scope digital businesses, large and small.

The detail of what exactly will fall into the fuzzier “legal but harmful” content bucket, for example, will be set out in secondary legislation to be agreed by MPs — the latter being another new stipulation the government announced today, arguing this will avoid the risk of tech giants becoming de facto speech police, which was one early criticism of the plan.

In what looks like a bid to play down further potential for controversy, the government’s press release couches the aims of the bill in very vanilla terms — saying it’s intended to ensure platforms “uphold their stated terms and conditions” as well as arguing these are merely “balanced and proportionate” measures that will finally force tech giants to take notice and effectively tackle illegal and abusive speech. (Or, else, well, their CEOs might find themselves behind bars.)

Unsurprisingly, digital rights groups have been quick to seize on this implicitly contradictory messaging — reiterating warnings that the legislation represents a massively chilling attack on freedom of expression. The Open Rights Group (ORG) wasted no time in likening the threat of prison for social media execs to powers being exercised by Vladimir Putin in Russia.

“Powers to imprison social media executives should be compared with Putin’s similar threats a matter of weeks ago,” ORG’s executive director, Jim Killock, said in a statement responding to the Department for Digital, Culture, Media and Sport’s latest revisions.

“The fact that the bill keeps changing its content after four years of debate should tell everyone that it is a mess, and likely to be a bitter disappointment in practice,” he added.

“The bill still contains powers for ministers to decide what legal content platforms must try to remove. Parliamentary rubber stamps for ministerial say-so’s will still compromise the independence of the regulator. It would mean state-sanctioned censorship of legal content.”

The government’s response to criticism of the potential impact on freedom of speech includes touting requirements in the bill for social media firms to “protect journalism” and “democratic political debate,” as its press release puts it — although it’s less clear how (or whether) platforms will/can actually do that.

Instead, DCMS reiterates that “news content” has been given a carve out — emphasizing that this particular definition-stretching category is “completely exempt from any regulation under the bill.” It’s easy to see how “compliance” already sounds awfully messy — does that cover anyone online who claims to be a journalist? A line in DCMS’ press release appears to concede at least one looming mess — and/or the need for even more revisions/measures to be added — noting: “Ministers will also continue to consider how to ensure platforms do not remove content from recognised media outlets.”

On the headline-grabbing criminal liability risk for senior tech execs — likely a populist measure that the government hopes will drum up public support to drown out objecting expert voices like ORG’s — Dorries had already signaled during parliamentary committee hearings last fall that she wanted to accelerate the application of criminal liability powers. (Memorably, she wasted no time brandishing the threat of faster jail time at Meta’s senior execs — saying they should focus on safety and forget about the metaverse.)

The original draft of the bill, which predated Dorries’ tenure heading up the digital brief, had deferred the power for at least two years. But that time frame was criticized by child safety campaigners — who warned that unless the law has real teeth it would be ineffective as platforms will just be able to ignore it. (A pressing risk of jail time for senior tech executives, such as Meta’s Nick Clegg, a former deputy prime minister of the U.K., could certainly concentrate certain C-suite minds on compliance.)

The speedier jail time power is by no means the first substantial revision of the draft bill. As Killock points out, there has been a whole banquet of “revisions” at this point — manifested, in recent weeks, by DCMS putting out a running drip-feed of announcements that it’s further expanding the scope of the bill and amping up its power.

This has included bringing scam ads and porn websites into scope (in the latter case to force them to use age verification technologies); expanding the list of criminal content added to the face of the bill and introducing new criminal offenses, including cyberflashing; and setting out measures to tackle anonymous trolling by leaning on platforms to squeeze freedom of reach.

Two parliamentary committees that scrutinized the original proposal last year went on to warn of major flaws — and urged a series of changes — recommendations that DCMS said it considered in making these revisions.

There are even more extras today, including making in-scope companies’ senior managers criminally liable for destroying evidence, failing to attend interviews with Ofcom or providing false information in them, and obstructing the regulator when it enters company offices.

DCMS notes that it’s committing these offenses that could lead senior execs of major platforms to be sentenced to up to two years in prison or fined.

Another addition, related to what the government describes as “proactive technology” — tools for content moderation, user profiling and behavior identification intended to “protect users” — would allow Ofcom to “set expectations for the use of these proactive technologies in codes of practice and force companies to use better and more effective tools, should this be necessary.”

“Companies will need to demonstrate they are using the right tools to address harms, they are transparent, and any technologies they develop meet standards of accuracy and effectiveness required by the regulator,” it adds, also stipulating that Ofcom will not be able to recommend these tools are applied on private messaging or legal but harmful content.

Platforms will also now be required to report CSAM content they detect on their platforms directly to the National Crime Agency, in another change that replaces an existing voluntary reporting regime and that DCMS said “reflects the government’s commitment to tackling this horrific crime.”

“Reports to the National Crime Agency will need to meet a set of clear standards to ensure law enforcement receives the high-quality information it needs to safeguard children, pursue offenders and limit lifelong re-victimization by preventing the ongoing recirculation of illegal content,” it specified. “In-scope companies will need to demonstrate existing reporting obligations outside of the U.K. to be exempt from this requirement, which will avoid duplication of companies’ efforts.”

That the government has made so many revisions to what it likes to brand “world-leading” legislation, even before formal parliamentary debate kicks off, suggests accusations that the proposal is both overblown and half-baked will be hard to shake.

MPs may also spot a lack of coherence dressed up as populist conviction, and spy an opportunity to grandstand and press for their own personal pet hates to be rolled into the mix (as one former minister of state has warned), with the risk that the bill ends up even more unwieldy and laden with impossible asks.