EU publishes election security guidance for social media giants and others in scope of DSA


The European Union published draft election security guidelines Tuesday aimed at the roughly two dozen larger platforms, those with more than 45 million regional monthly active users, that are regulated under the Digital Services Act (DSA) and, consequently, have a legal duty to mitigate systemic risks such as political deepfakes while safeguarding fundamental rights like freedom of expression and privacy.

In-scope platforms include the likes of Facebook, Google Search, Instagram, LinkedIn, TikTok, YouTube and X.

The Commission has named elections as one of a handful of priority areas for its enforcement of the DSA on very large online platforms (VLOPs) and very large online search engines (VLOSEs). This subset of DSA-regulated companies is required to identify and mitigate systemic risks, such as information manipulation targeting democratic processes in the region, in addition to complying with the full online governance regime.

Per the EU’s election security guidance, the bloc expects regulated tech giants to up their game on protecting democratic votes and deploy capable content moderation resources in the multiple official languages spoken across the bloc — ensuring they have enough staff on hand to respond effectively to risks arising from the flow of information on their platforms and act on reports by third-party fact-checkers — with the risk of big fines for dropping the ball.

This will require platforms to pull off a precision balancing act on political content moderation: distinguishing between, for example, political satire, which should remain online as protected free speech, and malicious political disinformation, whose creators could be hoping to influence voters and skew elections.

In the latter case, the content falls under the DSA categorization of systemic risk that platforms are expected to swiftly spot and mitigate. The EU standard here requires that they put in place “reasonable, proportionate, and effective” mitigation measures for risks related to electoral processes, as well as respecting other relevant provisions of the wide-ranging content moderation and governance regulation.

The Commission has been working on the election guidelines at pace, launching a consultation on a draft version just last month. The sense of urgency in Brussels flows from upcoming European Parliament elections in June. Officials have said they will stress-test platforms’ preparedness next month. So the EU doesn’t appear ready to leave platforms’ compliance to chance, even with a hard law in place that means tech giants are risking big fines if they fail to meet Commission expectations this time around.

User controls for algorithmic feeds

Key among the EU’s recommendations for mainstream social media firms and other major platforms is that they should give their users a meaningful choice over algorithmic and AI-powered recommender systems, so users are able to exert some control over the kind of content they see.

“Recommender systems can play a significant role in shaping the information landscape and public opinion,” the guidance notes. “To mitigate the risk that such systems may pose in relation to electoral processes, [platform] providers … should consider: (i.) Ensuring that recommender systems are designed and adjusted in a way that gives users meaningful choices and controls over their feeds, with due regard to media diversity and pluralism.”

Platforms’ recommender systems should also have measures to downrank disinformation targeted at elections, based on what the guidance couches as “clear and transparent methods,” such as deceptive content that’s been fact-checked as false and/or posts coming from accounts repeatedly found to spread disinformation.

Platforms must also deploy mitigations to avoid the risk of their recommender systems spreading generative AI-based disinformation (aka political deepfakes). They should also be proactively assessing their recommender engines for risks related to electoral processes and rolling out updates to shrink risks. The EU also recommends transparency around the design and functioning of AI-driven feeds and urges platforms to engage in adversarial testing, red-teaming, etc., to amp up their ability to spot and quash risks.

On GenAI, the EU’s advice also urges watermarking of synthetic media, while noting the limits of technical feasibility here.


The 25 pages of draft guidance published today, which recommend mitigating measures and best practices for larger platforms, also lay out an expectation that platforms will dial up internal resourcing to focus on specific election threats, such as upcoming election events, and put in place processes for sharing relevant information and risk analysis.

Resourcing should have local expertise

The guidance emphasizes the need for analysis of “local context-specific risks,” in addition to member state-specific and regional information gathering to feed the work of entities responsible for the design and calibration of risk mitigation measures. It also calls for “adequate content moderation resources” with local language capacity and knowledge of the national and/or regional contexts and specificities, a long-running gripe of the EU when it comes to platforms’ efforts to shrink disinformation risks.

Another recommendation is for platforms to reinforce internal processes and resources around each election event by setting up “a dedicated, clearly identifiable internal team” ahead of the electoral period — with resourcing proportionate to the risks identified for the election in question.

The EU guidance also explicitly recommends hiring staffers with local expertise, including language knowledge. Platforms have often sought to repurpose a centralized resource — without always seeking out dedicated local expertise.

“The team should cover all relevant expertise including in areas such as content moderation, fact-checking, threat disruption, hybrid threats, cybersecurity, disinformation and FIMI [foreign information manipulation and interference], fundamental rights and public participation and cooperate with relevant external experts, for example with the European Digital Media Observatory (EDMO) hubs and independent factchecking organisations,” the EU also writes.

The guidance allows for platforms to potentially ramp up resourcing around particular election events and de-mobilize teams once a vote is over.

It notes that the periods when extra risk mitigation measures may be needed are likely to vary, depending on the level of risk and any specific EU member state rules around elections. But the Commission recommends that platforms have mitigations deployed and up and running at least one to six months before an electoral period, and continue them for at least one month after the elections.

Unsurprisingly, the greatest intensity for mitigations is expected in the period prior to the date of elections, to address risks like disinformation targeting voting procedures.

Hate speech in the frame

The EU is generally advising platforms to draw on other existing guidelines, including the Code of Practice on Disinformation and Code of Conduct on Countering Hate Speech, to identify best practices for mitigation measures. But it stipulates they must ensure users are provided with access to official information on electoral processes, such as banners, links and pop-ups designed to steer users to authoritative info sources for elections.

“When mitigating systemic risks for electoral integrity, the Commission recommends that due regard is also given to the impact of measures to tackle illegal content such as public incitement to violence and hatred to the extent that such illegal content may inhibit or silence voices in the democratic debate, in particular those representing vulnerable groups or minorities,” the Commission writes.

“For example, forms of racism, or gendered disinformation and gender-based violence online including in the context of violent extremist or terrorist ideology or FIMI targeting the LGBTIQ+ community can undermine open, democratic dialogue and debate, and further increase social division and polarization. In this respect, the Code of conduct on countering illegal hate speech online can be used as inspiration when considering appropriate action.”

It also recommends they run media literacy campaigns and deploy measures aimed at providing users with more contextual info — such as fact-checking labels; prompts and nudges; clear indications of official accounts; clear and non-deceptive labeling of accounts run by member states, third countries and entities controlled or financed by third countries; tools and info to help users assess the trustworthiness of information sources; tools to assess provenance; and processes to counter misuse of any of these procedures and tools. The list reads like an inventory of features Elon Musk has dismantled since taking over Twitter (now X).

Notably, Musk has also been accused of letting hate speech flourish on the platform on his watch. And at the time of writing, X remains under investigation by the EU for a range of suspected DSA breaches, including in relation to content moderation requirements.

Transparency to amp up accountability

On political advertising, the guidance points platforms to incoming transparency rules in this area — advising they prepare for the legally binding regulation by taking steps to align themselves with the requirements now. (For example, by clearly labeling political ads, providing information on the sponsor behind these paid political messages, maintaining a public repository of political ads, and having systems in place to verify the identity of political advertisers.)

Elsewhere, the guidance also sets out how to deal with election risks related to influencers.

Platforms should also have systems in place enabling them to demonetize disinformation, per the guidance, and are urged to provide “stable and reliable” data access to third parties undertaking scrutiny and research of election risks. Data access for studying election risks should also be provided for free, the advice stipulates.

More generally the guidance encourages platforms to cooperate with oversight bodies, civil society experts and each other when it comes to sharing information about election security risks — urging them to establish comms channels for tips and risk reporting during elections.

For handling high-risk incidents, the advice recommends platforms establish an internal incident response mechanism that involves senior leadership and maps other relevant stakeholders within the organization to drive accountability around their election event responses and avoid the risk of buck passing.

Post-election, the EU suggests platforms conduct and publish a review of how they fared, factoring in third-party assessments (i.e., rather than just marking their own homework, as they have historically preferred, putting a PR gloss atop ongoing platform manipulation risks).

The election security guidelines aren’t mandatory as such, but if platforms opt for an approach other than what’s being recommended for tackling threats in this area, they have to be able to demonstrate that their alternative meets the bloc’s standard, per the Commission.

If they fail to do that, they’re risking being found in breach of the DSA, which allows for penalties of up to 6% of global annual turnover for confirmed violations. So there’s an incentive for platforms to get with the bloc’s program on ramping up resources to address political disinformation and other info risks to elections as a way to shrink their regulatory risk. But they will still need to execute on the advice.

Further specific recommendations for the upcoming European Parliament elections, which will run June 6–9, are also set out in the EU guidance.

On a technical note, the election security guidelines remain in draft at this stage. But the Commission said formal adoption is expected in April once all language versions of the guidance are available.
