Government & Policy

UK’s online safety regulator puts out draft guidance on illegal content, saying child safety is priority

Image Credits: Sally Anscombe / Getty Images

The U.K.’s newly empowered Internet content regulator has published the first set of draft Codes of Practice under the Online Safety Act (OSA), which became law late last month.

More codes will follow but this first set — which is focused on how user-to-user (U2U) services will be expected to respond to different types of illegal content — offers a steer on how Ofcom is minded to shape and enforce the U.K.’s sweeping new Internet rulebook in a key area.

Ofcom says its first priority as the “online safety regulator” will be protecting children.

The draft recommendations on illegal content include suggestions that larger and higher risk platforms should avoid presenting kids with lists of suggested friends; should not have child users appear in others’ connection lists; and should not make children’s connection lists visible to others.

It’s also proposing that accounts outside a child’s connection list should not be able to send them direct messages; and kids’ location information should not be visible to other users, among a number of recommended risk mitigations aimed at keeping kids safe online.

“Regulation is here, and we’re wasting no time in setting out how we expect tech firms to protect people from illegal harm online, while upholding freedom of expression. Children have told us about the dangers they face, and we’re determined to create a safer life online for young people in particular,” said Dame Melanie Dawes, Ofcom’s chief executive, in a statement.

“Our figures show that most secondary-school children have been contacted online in a way that potentially makes them feel uncomfortable. For many, it happens repeatedly. If these unwanted approaches occurred so often in the outside world, most parents would hardly want their children to leave the house. Yet somehow, in the online space, they have become almost routine. That cannot continue.”

The OSA puts a legal duty on digital services, large and small, to protect users from risks posed by illegal content, such as CSAM (child sexual abuse material), terrorism and fraud. The list of priority offences in the legislation is long, though — also including intimate image abuse, stalking and harassment, and cyberflashing, to name a few more.

The exact steps in-scope services and platforms need to take to comply are not set out in the legislation. Nor is Ofcom prescribing how digital businesses should act on every type of illegal content risk. But the detailed Codes of Practice it’s developing are intended to provide recommendations that help companies decide how to adapt their services to avoid being found in breach of a regime that empowers the regulator to levy fines of up to 10% of global annual turnover for violations.

Ofcom is avoiding a one-size-fits-all approach — with some more costly recommendations in the draft code being proposed for only larger and/or riskier services.

It also writes that it is “likely to have the closest supervisory relationships” with “the largest and riskiest services” — a line that should bring a degree of relief to startups (which generally won’t be expected to implement as many of the recommended mitigations as more established services). It’s defining “large” services in the context of the OSA as those that have more than 7 million monthly users (or around 10% of the U.K. population).

“Firms will be required to assess the risk of users being harmed by illegal content on their platform, and take appropriate steps to protect them from it. There is a particular focus on ‘priority offences’ set out in the legislation, such as child abuse, grooming and encouraging suicide; but it could be any illegal content,” it writes in a press release, adding: “Given the range and diversity of services in scope of the new laws, we are not taking a ‘one size fits all’ approach. We are proposing some measures for all services in scope, and other measures that depend on the risks the service has identified in its illegal content risk assessment and the size of the service.”

The regulator appears to be moving relatively cautiously in taking up its new responsibilities, with the draft code on illegal content frequently citing a lack of data or evidence to justify preliminary decisions to not recommend certain types of risk mitigations — such as Ofcom not proposing hash matching for detecting terrorism content; nor recommending the use of AI to detect previously unknown illegal content.

It notes, though, that such decisions could change in future as it gathers more evidence (and, doubtless, as available technologies change).

It also acknowledges the novelty of the endeavour, i.e. attempting to regulate something as sweeping and subjective as online safety/harm, saying it wants its first codes to be a foundation it builds on, including via a regular process of review — suggesting the guidance will shift and develop as the oversight process matures.

“Recognising that we are developing a new and novel set of regulations for a sector without previous direct regulation of this kind, and that our existing evidence base is currently limited in some areas, these first Codes represent a basis on which to build, through both subsequent iterations of our Codes and our upcoming consultation on the Protection of Children,” Ofcom writes. “In this vein, our first proposed Codes include measures aimed at proper governance and accountability for online safety, which are aimed at embedding a culture of safety into organisational design and iterating and improving upon safety systems and processes over time.”

Overall, this first set of recommendations looks reasonably uncontroversial — with, for example, Ofcom leaning towards recommending that all U2U services should have “systems or processes designed to swiftly take down illegal content of which it is aware” (note the caveats); whereas “multi-risk” and/or “large” U2U services are presented with a more comprehensive and specific list of requirements aimed at ensuring they have a functioning, and well enough resourced, content moderation system.

Another proposal it’s consulting on is that all general search services should ensure URLs identified as hosting CSAM are deindexed. But it’s not yet making it a formal recommendation that users who share CSAM be blocked — citing a lack of evidence (and inconsistent existing platform policies on user blocking) for not suggesting that at this point. The draft does say it’s “aiming to explore a recommendation around user blocking related to CSAM early next year”, though.

Ofcom also suggests services that identify as medium or high risk should provide users with tools to let them block or mute other accounts on the service. (Which should be uncontroversial to pretty much everyone — except maybe X-owner, Elon Musk.)

It is also steering away from recommending certain more experimental and/or inaccurate (and/or intrusive) technologies — so while it recommends that larger and/or higher CSAM-risk services perform URL detection to pick up and block links to known CSAM sites, it is not suggesting they do keyword detection for CSAM, for example.
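
To give a rough sense of the kind of URL detection the draft describes, here is a minimal sketch in Python: it normalizes a submitted link and checks it against a block list of known-CSAM URLs of the sort maintained by child-safety bodies such as the Internet Watch Foundation. The block list contents, function names and normalization rules are illustrative assumptions, not anything Ofcom has specified.

```python
from urllib.parse import urlsplit

# Hypothetical block list of known-CSAM URLs, e.g. sourced from a child-safety
# body. The entry below is a placeholder purely for demonstration.
BLOCKED_URLS = {
    "bad.example.com/abuse-page",
}

def normalize(url: str) -> str:
    """Lower-case the host and strip the scheme and trailing slashes."""
    parts = urlsplit(url if "//" in url else "//" + url)
    host = (parts.hostname or "").lower()
    path = parts.path.rstrip("/")
    return f"{host}{path}"

def should_block(url: str) -> bool:
    """Return True if a user-submitted link matches the block list."""
    return normalize(url) in BLOCKED_URLS

assert should_block("https://bad.example.com/abuse-page/")
assert not should_block("https://example.org/article")
```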

Other preliminary recommendations include that major search engines display predictive warnings on searches that could be associated with CSAM; and serve crisis prevention information for suicide-related searches.

Ofcom is also proposing services use automated keyword detection to find and remove posts linked to the sale of stolen credentials, like credit cards — targeting the myriad harms flowing from online fraud. However it is recommending against using the same tech for detecting financial promotion scams specifically, as it’s worried this would pick up a lot of legitimate content (like promotional content for genuine financial investments).
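
As a rough illustration of that kind of automated keyword detection, the sketch below flags posts matching a handful of terms associated with credential sales and queues them for human review rather than removing them automatically. The patterns and function names are assumptions made for illustration; real deployments would rely on much richer signals, and — as Ofcom’s reasoning on financial promotions suggests — naive keyword matching risks catching legitimate content.

```python
import re

# Illustrative keyword patterns associated with the sale of stolen credentials.
# These terms are assumptions; production systems would use far richer signals.
PATTERNS = [
    re.compile(r"\bfullz\b", re.IGNORECASE),            # slang for full identity records
    re.compile(r"\bcvv\s*(dumps?|for sale)\b", re.IGNORECASE),
    re.compile(r"\bcarding\b", re.IGNORECASE),
]

def flag_for_review(post_text: str) -> bool:
    """Return True if a post matches any keyword pattern and should be
    queued for human moderation (detection, not automatic removal)."""
    return any(p.search(post_text) for p in PATTERNS)

print(flag_for_review("Fresh CVV dumps available, DM me"))          # True
print(flag_for_review("Tips for getting a good credit card rate"))  # False
```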

Privacy and security watchers should breathe a particular sigh of relief on reading the draft guidance as Ofcom appears to be stepping away from the most controversial element of the OSA — namely its potential impact on end-to-end encryption (E2EE).

This has been a key bone of contention with the U.K.’s online safety legislation, with major pushback — including from a number of tech giants and secure messaging firms. But despite loud public criticism, the government did not amend the bill to remove E2EE from the scope of CSAM detection measures — instead a minister offered a verbal assurance, towards the end of the bill’s passage through parliament, saying Ofcom could not be required to order scanning unless “appropriate technology” exists.

In the draft code, Ofcom’s recommendation that larger and riskier services use a technique called hash matching to detect CSAM sidesteps the controversy as it only applies “in relation to content communicated publicly on U2U [user-to-user] services, where it is technically feasible to implement them” (its emphasis).

“Consistent with the restrictions in the Act, they do not apply to private communications or end-to-end encrypted communications,” it also stipulates.
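
Hash matching, in broad strokes, means comparing a fingerprint of an uploaded file against a database of fingerprints of known illegal images. Production systems typically use perceptual hashes (such as PhotoDNA) so that resized or re-encoded copies still match; the minimal sketch below uses an ordinary cryptographic hash purely to illustrate the lookup flow, and mirrors the draft’s limitation to publicly communicated content. The hash set, flags and function names are hypothetical.

```python
import hashlib

# Stand-in for a database of hashes of known illegal images, as maintained by
# child-safety organisations. The single entry is simply the SHA-256 of b"foo"
# so the demo below matches. Real systems use perceptual hashes (e.g. PhotoDNA)
# so that re-encoded copies of an image still match.
KNOWN_HASHES = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def file_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def matches_known_content(upload: bytes, is_public: bool) -> bool:
    """Check a publicly communicated upload against the hash database.
    Private and end-to-end encrypted communications are left unscanned,
    mirroring the limitation in the draft code."""
    if not is_public:
        return False
    return file_hash(upload) in KNOWN_HASHES

print(matches_known_content(b"foo", is_public=True))   # True: hash is in the set
print(matches_known_content(b"foo", is_public=False))  # False: private content not scanned
```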

Ofcom will now consult on the draft codes it’s released today, inviting feedback on its proposals.

Its guidance for digital businesses on how to mitigate illegal content risks won’t be finalized until next fall — and compliance on these elements isn’t expected until at least three months after that. So there’s a fairly generous lead-in period to give digital services and platforms time to adapt to the new regime.

It’s also clear that the law’s impact will be staggered as Ofcom does more of this ‘shading in’ of specific detail (and as any required secondary legislation is introduced).

Some elements of the OSA — such as the information notices Ofcom can issue on in-scope services — are already enforceable duties, though. And services that fail to comply with Ofcom’s information notices can face sanction.

There is also a set timeframe in the OSA for in-scope services to carry out their first children’s risk assessment, a key step which will help determine what sort of mitigations they may need to put in place. So there’s plenty of work digital businesses should already be doing to prepare the ground for the full regime coming down the pipe.

“We want to see services taking action to protect people as soon as possible, and see no reason why they should delay taking action,” an Ofcom spokesperson told TechCrunch. “We think that our proposals today are a good set of practical steps that services could take to improve user safety. Nonetheless, we are consulting on these proposals and we note that it is possible that some elements of them could change in response to evidence provided during the consultation process.”

Asked about how the risk of a service will be determined, the spokesperson said: “Ofcom will determine which services we supervise, based on our own view on the size of their user base and the potential risks associated with their functionalities and business model. We have said that we will inform these services within the first 100 days after Royal Assent, and we will also keep this under review as our understanding of the industry evolves and new evidence becomes available.”

On the timeline of the illegal content code, the regulator also told us: “After we have finalised our codes in our regulatory statement (currently planned for next autumn, subject to consultation responses), we will submit them to the Secretary of State to be laid in parliament. They will come into force 21 days after they have passed through parliament and we will be able to take enforcement action from then and would expect services to start taking action to come into compliance no later than then. Nonetheless, some of the mitigations may take time to put in place. We will take a reasonable and proportionate approach to decisions about when to take enforcement action having regard to practical constraints in putting mitigations into place.”

“We will take a reasonable and proportionate approach to the exercise of our enforcement powers, in line with our general approach to enforcement and recognising the challenges facing services as they adapt to their new duties,” Ofcom also writes in the consultation.

“For the illegal content and child safety duties, we would expect to prioritise only serious breaches for enforcement action in the very early stages of the regime, to allow services a reasonable opportunity to come into compliance. For example, this might include where there appears to be a very significant risk of serious and ongoing harm to UK users, and to children in particular. While we will consider what is reasonable on a case-by-case basis, all services should expect to be held to full compliance within six months of the relevant safety duty coming into effect.”
