Understanding Europe’s big push to rewrite the digital rulebook

European Union lawmakers have set out the biggest update to digital regulations in around two decades — likening it to the introduction of traffic lights on highways to bring order to the chaos wrought by increased mobility. Just switch cars for packets of data.

The proposals for a Digital Services Act (DSA) to standardize safety rules for online business, and a Digital Markets Act (DMA), which will put limits on tech giants aimed at boosting competition in the digital markets they dominate, are intended to shape the future of online business for the next two decades — both in Europe and beyond.

The bloc is far ahead of the U.S. on internet regulation. So while the tech giants of today are (mostly) made in the USA, rules that determine how they can and can’t operate in the future are being shaped in Brussels.

The latter part of this year has seen Ursula von der Leyen’s European Commission, which took up its five-year mandate last December, unleash a flotilla of digital proposals — and tease more coming in 2021. The Commission has proposed a Data Governance Act to encourage reuse of industrial (and other) data, with another data regulation and a proposal on political ads transparency slated for next year. European-flavored guardrails for the use of AI will also be presented next year.

But it’s the DSA and DMA that are core to understanding how the EU executive body hopes to reshape internet business practices to increase accountability and fairness — and in so doing promote the region’s interests for years to come.

These are themes being seen elsewhere in the world at a national level. The U.K., for example, is bringing forward an “Online Safety Bill” next year in response to public concern about the societal impacts of big tech, while rising interest in tech antitrust has led to Google and Facebook facing charges of abusive business practices on home turf.

Which will come first — a U.S. breakup of a tech empire or effective enforcement of EU rules on internet gatekeepers — is an interesting question to ponder. Both are now live possibilities — so entrepreneurs can dare to dream of a different, freer and fairer digital playground. One that’s not ruled over by a handful of abusive giants. Though we’re certainly not there yet.

With the DSA and DMA the EU is proposing an e-commerce and digital markets framework that, once adopted, will apply for its 27 Member States — and the ~445 million people who live there — exerting both a sizable regional pull and seeking to punch up and out at global internet giants.

While there are many challenges ahead in turning the planned framework into pan-EU law, it looks like a savvy move by the Commission to separate the DSA and DMA — making it harder for big tech to co-opt the wider industry into lobbying against measures, buried in the 160+ pages of proposed legislation now on the table, that will only affect them.

It’s also notable that the DSA contains a sliding scale of requirements, with audits, risk assessments and the deepest algorithmic accountability provisions reserved for larger players.

Tech sovereignty — by scaling up Europe’s tech capacity and businesses — is a strategic priority for the Commission. And rule-setting is a key part of how it intends to get there — building on data protection rules that have already been updated, with the GDPR being applied from 2018.

Though what the two new major policy packages will mean for tech companies, startup-sized or market-dominating, won’t be clear for months — or even years. The DSA and DMA have to go through the EU’s typically bruising co-legislative process, looping in representatives of Member States’ governments and directly elected MEPs in the European parliament (which often come to the process with different policy priorities and agendas).

The draft presented this month is thus a starting point. Plenty could shift — or even change radically — through the coming debates and amendments. Which means the lobbying starts in earnest now. The coming months will be crucial to determining who will be the future winners and losers under the new regime, so startups will need to work hard to make their voices heard.

While tech giants have been pouring increasing amounts of money into Brussels “whispering” for years, the EU is keen to champion homegrown tech — and most of big tech isn’t that.

A fight is almost certainly brewing to influence the world’s most ambitious digital rulebook — including in key areas like the surveillance-based adtech business models that currently dominate the web (to the detriment of individual rights and pro-privacy innovation). So for those dreaming of a better web there’s plenty to play for.

Early responses to the DSA and DMA show the two warring sides, with U.S.-based tech lobbies blasting the plan to expand internet regulation as “anti-innovation” (and anti-U.S.), while EU rights groups are making positive noises over the draft — albeit, with an ambition to go further and ensure stronger protections for web users.

On the startup side, there’s early relief that key tenets of the EU’s existing e-commerce framework look set to remain untouched, mingled with concern that plans to rein in tech giants may have knock-on impacts — such as on startup exits (and valuations). European founders, whose ability to scale is being directly throttled by big tech’s market muscle, have their own reasons to be cheerful about the direction of policy travel.

In short, major shifts are coming and businesses and entrepreneurs would do well to prepare for changing requirements — and to seize new opportunities.

Read on for a breakdown of the key aims and requirements of the DSA and the DMA, and additional discussion on how the policy plan could shape the future of the startup business.

Digital Services Act

The DSA aims to standardize rules for digital services that act as intermediaries by connecting consumers to goods, services and content. It will apply to various types of digital services, including network infrastructure providers (like ISPs); hosting services (like cloud storage providers); and online platforms (like social media and marketplaces) — applying to all that offer services in the EU, regardless of where they’re based.

The existing EU e-Commerce Directive was adopted in 2000, so revisiting it to see if core principles are still fit for purpose is important. And the Commission has essentially decided that they are. But it also wants to improve consumer protections and dial up transparency and accountability for digital services businesses by setting new due diligence obligations — responding to a smorgasbord of concerns around the impact of what’s now being hawked and monetized online (whether hateful content or dangerous/illegal products).

Some EU Member States have also been drafting their own laws (in areas like hate speech) that threaten regulatory fragmentation of the bloc’s single market, giving lawmakers added impetus to come forward with harmonized pan-EU rules (hence the DSA being a regulation, not a directive).

The package will introduce obligations aimed at setting rules for how internet businesses respond to illegal stuff (content, services, goods and so on) — including standardized notice and response procedures for swiftly tackling illegal content (an area that’s been managed by a voluntary EU code of conduct on illegal hate speech until now); and a “Know Your Customer” principle for online marketplaces (already a familiar feature in more heavily regulated sectors like fintech) that’s aimed at making it harder for sellers of illegal products to simply respawn within a marketplace under a new name.

There’s also a big push around transparency obligations — with requirements in the proposal for platforms to disclose “meaningful” information about the criteria used to target ads (Article 24); and to explain the “main parameters” of recommender algorithms (Article 29), as well as requirements to foreground user controls (including at least one “nonprofiling” option).

Here the overarching aim is to increase accountability by ensuring European users can get the information needed to be able to exercise their rights.

There’s some skepticism over how viable the planned transparency provisions will prove in practice, though. Dr. Leif-Nissen Lundbæk, co-founder of Germany-based Xayn, a search engine doing privacy-safe personalized search, voices this concern.

“It is about time that the EU fights back against the problematic practices of some of the big tech companies (invading data privacy, being nontransparent about their algorithms and how they influence their users). We all have a right to know more about what’s happening behind the curtains of those companies that know so much about us,” he tells us. “The issue that I of course see here is that for most of these companies it won’t be possible to show transparently how their algorithms work and give users the possibility to influence it. So I’m really curious about what happens then.”

There are also early concerns that the Commission has dropped the ball on the chance to regulate surveillance-based business models in the DSA — opting for a level of disclosure about targeted ads, rather than stronger user controls. (Suggesting that former MEP Nick Clegg, who heads up Facebook’s regional lobbying efforts, has had some success in flogging the self-serving claim that Facebook’s “personalized” advertising business model is vital to Europe’s economic prosperity.)

“There is a little bit of a lack of ambition in the proposal concerning targeted advertisement. And also the type of algorithms that the platforms are allowed to use — where there is no real option of having a more neutral algorithm, allowing you to not be encouraged to spend more time on the platform than necessary,” said MEP Karen Melchior, raising this concern during an online panel organized by industry trade association, Dot Europe (formerly EDiMA).

“I think this is a one-time opportunity for us to regulate the platforms and regulate the internet of today and hopefully the internet of tomorrow and we shouldn’t lose this,” she added.

On the content side, the Commission has chosen to limit the DSA’s regulation to speech that’s illegal (e.g., hate speech, terrorism propaganda, child sexual exploitation, etc.) — rather than trying to directly tackle fuzzier “legal but harmful” content (e.g., disinformation), as it seeks to avoid inflaming concerns about impacts on freedom of expression.

A beefed-up self-regulatory code on disinformation is coming next year, though, as part of a wider European Democracy Action Plan. And that (voluntary) code sounds like it will be heavily pushed by the Commission as a mitigation measure platforms can put toward fulfilling the DSA’s risk-related compliance requirements.

EU lawmakers do also plan on regulating online political ads in time for the next pan-EU elections, under a separate instrument (to be proposed next year) and are continuing to push the Council and European parliament to adopt a 2018 terrorism content takedown proposal (which will bring specific requirements in that specific area).

But the aim with the DSA is to keep the rulebook broad (or “horizontal,” in EU legislator speak) — complementing issue-specific instruments such as the Audiovisual Media Services Directive and the digital copyright reforms that were passed in 2019.

The proposal also doesn’t define what’s illegal — that’s a matter of Member State law. The DSA is about streamlining reporting and establishing a system of oversight to generally boost safety and reduce uncertainty by setting standardized mechanisms and processes.

Again on the content moderation side, there’s a big push around platform cooperation with so-called “trusted flaggers” aimed at speeding up takedowns. Some of the early pushback to the Commission’s proposal relates to concerns about who exactly can be designated a trusted flagger, as well as wider worry over how much emphasis is being put on takedowns, with the concern that platforms will be pushed to remove more than they should.

Kinzen, a Dublin-based startup that offers content moderation tools that combine human expertise and machine learning, unsurprisingly has concerns here.

Co-founder Mark Little told us: “The concern is that this will encourage the tech platforms to engage in the worst kind of content moderation: Fully automated content takedowns that are neither fully effective nor consistent with free speech. The EU needs to make sure the act encourages more human oversight of content moderation.”

He added that his preference is for regulation to make Europe “a test bed for a better form of content moderation that is transparent and effective,” i.e., rather than have laws that encourage platforms to mindlessly automate takedowns to shrink their risk of being found fencing illegal content.

The Commission is alive to the risk of ratcheting up regulatory pressure that chills free expression — claiming the proposed rules are “carefully calibrated and accompanied by robust safeguards” to strike a balance between underremoval and overremoval of content on the grounds of illegality.

Notably, though, the current proposal leaves it up to platforms to make assessments on whether speech is illegal — another bone of contention for some (albeit the alternative scenario, of a dedicated army of judges making decisions on each piece of disputed content, would raise major questions over cost and speed).

Another worry for those concerned about the impact on speech is that the proposal allows for automated takedowns (although there’s no mandate for — nor ban on — the use of content filters in the draft). The Commission says it doesn’t want to ban such tools, as that could have a disproportionate impact on smaller businesses that can’t afford to hire thousands of content moderators.

It also points to another key requirement in the proposal — that decisions must be explained to users who must also be given a mechanism to complain if content is removed — which it’s intending as a bulwark against poor applications of automation.

The need to protect freedom of expression is a constant refrain in the proposal. The Commission’s clear hope is that requirements on platforms to explain decisions and provide the means to challenge them will work to counterbalance any incentive to overremove content.

It is also noteworthy that EU lawmakers have chosen to maintain the ban on a general obligation on content monitoring — although any move to delete that provision would have been highly controversial. (The DSA does make reference to a key ruling by the CJEU last year that opened the door to targeted monitoring of specific illegal speech that was judged compatible with wider EU law.)

The Commission has also decided to retain other key principles of the e-Commerce Directive — namely: The country of origin principle (which simplifies compliance for cross-border EU business); and the limited liability regime for intermediaries — with some clarifications that the Commission says are to remove existing disincentives for platforms to carry out “voluntary own-initiative investigations” to detect illegal activity.

“The proposal keeps many of the fundamental principles around the safe harbors for internet intermediaries that we know from the e-Commerce Directive and that have served well for the last 20 years,” said Sebastian Felix Schwemer, a researcher in algorithmic content regulation and intermediary liability at the University of Copenhagen who wrote one of the DSA background studies for the Commission.

“Importantly, it also continues the horizontal approach of the e-Commerce Directive and focuses on illegal information, leaving the blurrier category of ‘harmful’ information untouched.”

“Given that the DSA would replace the current European safe harbors regime in the e-Commerce Directive, the framework will be highly relevant [for startups],” he also told us. “The proposed instrument is a Regulation. This should, compared to the current landscape, make it easier for startups to know which rules to follow.

“In addition to the proposed new obligations applying to all intermediary services, relating e.g. to terms and conditions, transparency reporting and notice-and-action mechanisms, there are a range of additional obligations for online platforms (in Articles 17 to 24). The latter, however, do not apply to micro and small enterprises.”

Given how much is being retained of the current e-commerce regime there does look to be a lot for startups to like here, while harmonized requirements for handling illegal content should help cross-border operations gain greater certainty about moderation requirements.

In a first-response statement, startup association, Allied for Startups, gave a broad thumbs-up to the Commission’s decision to maintain the fundamentals of the e-Commerce Directive, writing: “The intermediary liability exemption, the country of origin principle and the prohibition of general monitoring are three key principles of a fully functioning platform economy. The reaffirmation of these principles in the DSA is paramount to the scalability of startups in the European Union.”

VLOPs: With great power, more responsibilities

Requirements in the DSA are also graduated on the basis of digital services’ size and impact — with certain additional requirements for the largest platforms (which are given the unfortunate acronym VLOPs: aka, very large online platforms).

The highest penalty for noncompliance with the DSA — up to 6% of global annual turnover — is reserved for these larger players.

This suggests startups aren’t going to be exposed to the same level of risk as businesses that have already scaled usage considerably. Entrepreneurs and early-stage startups should also face lighter compliance demands and a freer hand to experiment with novel products without being obliged to think about (and mitigate) potential risks as VLOPs must, per the current proposal. They will also be able to avoid the deepest level of oversight (audits) — at least until they scale up.

Who exactly qualifies as a VLOP? The Commission has proposed the size threshold be set at 45 million+ regional users (or 10% of the EU’s population) — which means that a platform like Twitter may not qualify while Snap likely would, for example (based on their current levels of usage). But expect a fight over exactly where that usage line gets drawn.

As well as having greater transparency requirements than startups and smaller platforms, VLOPs are required to provide researchers with access to key data to aid public interest oversight into the societal impacts of their services. And, as noted above, all of this compliance is subject to independent auditing.

Another VLOP-specific requirement is that they must perform risk assessments ahead of launching new services — with an accompanying obligation to mitigate identified risks — including by involving relevant representatives from their own user base and external experts, such as from civic society groups.

It’s an interesting idea to try to force major platforms to be proactive about societal risks versus only caring about scaling. It looks inspired, at least in part, by elements of the GDPR — such as the requirement for data protection impact assessments (as a preemptive check on misuse/abuse of people’s information before the privacy horse bolts). Though plenty of devil will be in the detail — and how effective oversight proves.

The Commission is trying to find a way to put meaningful limits on platforms — addressing wide-ranging concerns about individual and societal impacts without sledgehammering content and users in the process. So it talks about needing to wield a scalpel, saying there’s a fine line — which it absolutely does not want to cross — between mitigating risk and asking platforms to monitor content (with all the speech-chilling effects that would result). But there will surely be plenty of debate about how to get this balance right.

Another key detail for VLOPs: Oversight is not being left solely to Member States. The Commission has recognized the need to beef up enforcement in the case of digital services that scale across the EU (a current weakness of the GDPR). And a lot of attention is likely to be paid to the exact choice/structure of enforcement in the DSA, again because of how weak GDPR enforcement has been in cross-border cases.

There have been early concerns raised from some quarters about the Commission proposing to manage VLOP oversight itself — i.e., rather than creating a dedicated, independent body to hold and wield what will be major new powers. So, again, this aspect looks set to get a lot of scrutiny in the coming months.

A common critique of existing EU regulations (such as GDPR) is they create a moat for tech giants that have the resources to plough into compliance and/or endless legal wheezes to dodge requirements.

For the DSA to deliver the best results it therefore needs enforcement to be as firmly felt at the top of the market as elsewhere — and even, potentially, more firmly, given asymmetrical requirements for VLOPs.

Discussing the proposed governance structure of the DSA in a public webinar this week, Irene Roche Laguna, of DG-Connect — which is responsible for Commission policymaking for the digital single market — acknowledged these oversight challenges, saying: “Believe me this has been difficult. And this is going to be difficult during the negotiations.”

One challenge is that, as a horizontal regulation, the DSA has no universal candidate regulator that already exists across all Member States to take on its oversight: the rulebook spans some fairly distinct issues (content/speech moderation versus e-commerce/marketplace requirements like KYC) and applies to both tiny and giant-sized businesses, so it doesn’t obviously suit any single existing national body.

Given varying competencies within Member States’ media/telecoms regulators, Roche Laguna suggested the DSA may merit more than one regulator being responsible for enforcing different elements per country — which isn’t perhaps as much streamlining as some digital businesses might be hoping for. (Though it’s still a lot better than dealing simultaneously with regulators in each of the 27 EU Member States).

The other major DSA enforcement challenge is VLOPs themselves. (Or “how to treat platforms equally when they merit being treated differently?”, as Roche Laguna put it.)

Here the Commission is proposing to separate out enforcement of the subset of obligations that only apply to these larger platforms — and take on this role itself (but still with the help of the Member State of origin, i.e., where the business has legally established itself in the EU).

Roche Laguna argued that VLOPs “merit different treatment and treatment at the European level,” suggesting also that it will be difficult for the Member State of origin to tackle such cases — and “might be unfair that one single Member State tackles that for the whole of Europe.”

That sounds like another Commission admission that the GDPR has been hamstrung by the bottlenecks that have built up in a few key Member State agencies, such as Ireland’s Data Protection Commission.

“The Commission does not want to become a ‘Federal Bureau of Intelligence’ — this is not our role,” she added. “But it is true that there are some problems that need to be solved at the European level because any solution at national level first might be insufficient and second we will have a clash between different Member States who don’t agree on what is the right way to proceed.”

The DSA proposal also includes a new oversight entity, called the European Board for Digital Services (EBDS) — which is envisaged (initially) in an advisory/support role (i.e., rather than as a body with legal powers), with the relevant Member State agencies represented, and the board helping to coordinate joint investigations and work on standard setting. Here, the Commission is also suggesting itself as the EBDS chair — so it’s clearly pushing for a central role in major digital enforcement (which would be a major change for Europe’s digital rulebook).

The enforcement structure of both the DSA and the DMA will certainly be crucial to whether these frameworks deliver as intended. The GDPR is an ongoing reminder of what weak enforcement looks like. Though it’s less clear what alternative enforcement structure might work best and be politically feasible within the EU project — to achieve the sought-for safety, fairness and market efficiency across the digital realm.

“Whether DSA/DMA package reshapes anything, depends on how they are implemented and whether the Member States authorities will be able to bring the necessary technology-wise competencies to the regulators,” Dr. Lukasz Olejnik, an independent researcher and consultant based in Europe, told us. “Currently this is not always the case, so this dimension of enforcement challenge will be key.

“The DSA says in particular, that ‘the Board should be able to rely on the expertise and human resources of the Commission and of the competent national authorities’. It does not even recognize the problem of lacking technical resources. We have seen a similar resources challenge with GDPR. In fact we still see it, two years after entering into force.”

Digital Markets Act 

The second major piece of EU digital policymaking (the DMA) is by no means secondary.

It intends to define a subset of platform giants as “gatekeepers” — a status that will lock in additional requirements (which the Commission colloquially refers to as a list of “do’s and don’ts”) for these most powerful “top of the VLOPs” (if I can put it that way) in a slew of areas — like self-preferencing, interoperability and data use.

Who are gatekeepers exactly? The Commission has proposed three main criteria for determining whether a tech giant falls under the scope of the DMA: the size of the business (annual regional turnover equal to or above €6.5 billion in the last three financial years, or average market cap/equivalent market value of at least €65 billion in the last financial year); acting as an important gateway (i.e., intermediating via a platform service that has 45 million+ monthly active end users locally, or 10,000+ yearly active business users in the last financial year); and market dominance — which can mean being already entrenched or being expected to become entrenched.
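As a rough illustration (not legal analysis), the three cumulative tests described above can be sketched in code. The thresholds are taken from the draft as summarized here; the function name, parameter names and the treatment of the criteria as a simple boolean check are my own simplifications.

```python
# Illustrative sketch of the DMA's proposed gatekeeper criteria, as
# summarized in the article. Not a legal test; threshold values are from
# the draft proposal, everything else is hypothetical scaffolding.

def is_gatekeeper(
    annual_eu_turnover_eur: float,      # avg over last three financial years
    market_cap_eur: float,              # last financial year
    monthly_active_end_users: int,      # in the EU
    yearly_active_business_users: int,  # in the EU, last financial year
    entrenched_or_expected: bool,       # entrenched position, actual or foreseeable
) -> bool:
    # Criterion 1: size of the business
    size = (annual_eu_turnover_eur >= 6.5e9) or (market_cap_eur >= 65e9)
    # Criterion 2: acting as an important gateway between businesses and consumers
    gateway = (monthly_active_end_users >= 45_000_000) or (
        yearly_active_business_users >= 10_000
    )
    # Criterion 3: entrenched and durable market position (or expected to become so)
    # All three criteria must hold together under the draft.
    return size and gateway and entrenched_or_expected
```

Per the draft, the criteria apply cumulatively — a company clearing only the size bar, or only the usage bar, would not be designated under this simple reading.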

So — at least as proposed initially — the usual tech giant suspects look destined to be classified and regulated as “gatekeepers” in Europe.

The list of obligations a gatekeeper must abide by is set out in the DMA (Articles 5 and 6). Currently, it boils down to a laundry list of practices that rivals of tech giants like Google, Apple, Amazon and Facebook have been complaining about for years.

Among the listed prohibitions are a ban on unfair self-preferencing (hi Google); and on putting blocks on users from uninstalling preloaded apps or restricting access to services outside the walled garden (hi Apple).

There’s a ban on gatekeepers using data obtained from third parties to compete with those other businesses (hi Amazon). Advertising platforms must also give advertisers and publishers free access to internal performance measuring tools and data so they can independently verify metrics of ads hosted by a gatekeeper (hi Facebook). There’s even a requirement (with some caveats) that platforms allow third-party app stores to run inside their walled gardens (hi Apple).

On the interoperability and data portability side of the DMA, there’s a requirement to provide “effective” portability of data and the tools to facilitate it — “including by the provision of continuous and real-time access,” which opens some interesting potential for third-party developers to build services atop dominant platforms, without having to out-compete the giants first.

Gatekeepers must also provide business users with a real-time firehose of the data they generate on the platform. (For end users wanting to let other businesses use their walled garden (i.e., personal) data, there’s an emphasis on doing this only with proper consent in place, to avoid clashing with the GDPR.)

Platform giants will get six months after being designated a gatekeeper to come into compliance with the various requirements while platforms that have not yet reached an entrenched market position (but would otherwise qualify as gatekeepers) will have to comply with a subset of requirements to try to prevent them from unfairly winning the market.

The list of gatekeeper obligations can also be added to in the future (based on the Commission running its own market investigations) — with the aim of preventing giants from innovating new forms of market abuse to work around the pre-written rules.

The proposed penalty regime for breaching the “do’s and don’ts” is substantial (at least at its theoretical maximum): The DMA allows for fines of up to 10% of global annual turnover (and “periodic” penalty payments of up to 5% of the same).
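For a sense of scale, the theoretical maximums are simple percentages of global annual turnover. A rough sketch, using hypothetical figures (the helper names are my own):

```python
# Back-of-envelope arithmetic for the draft DMA's penalty caps: fines of
# up to 10% of global annual turnover, plus "periodic" penalty payments
# of up to 5% of the same. Illustrative only.

def max_dma_fine(global_annual_turnover_eur: float) -> float:
    # One-off fine cap: 10% of global annual turnover
    return 0.10 * global_annual_turnover_eur

def max_periodic_penalty(global_annual_turnover_eur: float) -> float:
    # Periodic penalty payment cap: 5% of global annual turnover
    return 0.05 * global_annual_turnover_eur
```

So a hypothetical gatekeeper with €100 billion in global annual turnover would face a theoretical fine ceiling of €10 billion, with periodic penalties capped at €5 billion on top.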

But of course even very large fines haven’t put any checks on big tech yet.

It will also be interesting to see what happens if gatekeepers claim they were unable to develop the necessary capabilities in that timeframe, or if they simply put out horribly buggy APIs. (The Commission seems to think that’s a risk that can be mitigated via commitments, as with those it obtained from Google to clear its acquisition of Fitbit this month.)

There are certainly plenty of question marks over how viable the envisaged regime will be as a tactic to rebalance tech market power — even as the DMA is clearly a Commission confession that EU competition law has failed digital markets.

Yet the very same regulator just chose not to block Google from further entrenching its market dominance — greenlighting its purchase of fitness wearable maker Fitbit — which heavily implies that the Commission’s intention with the DMA is not to torpedo tech giants. Simply put: It doesn’t feel legally capable of doing that (nor politically comfortable with it).

A key architect of the DMA within the Commission, EVP and competition commissioner Margrethe Vestager, has been very public in expressing no appetite to break up big tech. Her stated preference is to regulate access to data and with the DMA she’s getting the framework to test her theory.

But, in a way, this regulatory framework is an enabler for mega VLOPs. Because the Commission is making it clear it’s okay with monopoly levels of market dominance. It just wants a toolbox to compel big tech to play by European “e-commerce fairness” rules — plus twist its arm over a few extra requirements in areas like openness and interoperability.

If tech giants do exactly as they’re told, the Commission believes it’ll create conditions for flourishing competition. The $900 billion question is whether this bet on long-term regulation — rather than blocking further market consolidation and breaking up tech empires — is the right one.

One inconvenient truth for the Commission is the giants’ playbook has, to date, been to ignore inconvenient EU rules (like GDPR’s consent requirements — which include the threat of fines of up to 4% of global turnover) in order to keep on scaling usage. Expecting a radical shift by threatening slightly larger fines through centralized enforcement almost feels like an article of faith.

There’s also the shamefully widespread use of dark patterns across the top of the tech industry to consider — which underlines how manipulation has become a systemic component of the tech giant service model.

Playing unfair is how they make the game work for them. And it’s not clear how unethical business strategy can be reformed with a few “do’s and don’ts.” Plus the Commission hasn’t shown any appetite to stand in the way of unethical, rights-hostile business models, which rely on using people’s private information to manipulate their behavior — suggesting such issues should be handled via the GDPR (even as it admits GDPR enforcement has failed to deliver). This is, to put it politely, weak sauce.

The Commission’s thesis in the DMA draws on multiple EU antitrust cases against big tech — including three Google cases and a number of other investigations into Amazon and Apple. The “gatekeeper” status looks very obviously intended to apply to all three giants at a minimum, as the listed obligations are geared toward long-standing antitrust complaints associated with their business practices.

Practices such as Amazon using merchant data to gain an advantage over third-party sellers on its platform; Google applying self-preferencing to quickly grab marketshare in new verticals like travel; and mandatory requirements Apple applies to developers selling wares in its App Store, to name a few of Europe’s open antitrust cases against big tech. (Though the Commission has carefully avoided naming names of likely contenders when presenting the policy package.)

So, again, the bloc’s failure to roll back digital dominance through competition enforcement — despite issuing billions of dollars in fines — is what the DMA is all about.

But this also means Facebook’s rights-hostile, surveillance-based business model is getting a lighter pass in the current proposal — perhaps as a result of the Commission having spent a lot less time and energy digging into how its surveillance-based ad targeting business works (it only relatively recently opened a preliminary Facebook probe).

Nor is it clear whether high level micromanagement of tech giants will, ultimately, lead to a fairer distribution of marketshare for homegrown startups. Or — indeed — greater choice for internet users and less abuse of their rights, especially from surveillance-based business models that underpin a major chunk of big tech.

So while the Commission appears to believe it’s cracked the unfair formula that big tech has been using to crush competition — and also thinks a set of strict operational requirements with the threat of major fines will be enough to reverse engineer systematic abuse and return the market to “fair functioning” — this is, to put it mildly, a very big bet.

At the same time, the DMA won’t replace competition enforcement in the EU. It’s intended to supplement specific competition investigations.

Open probes of big tech will continue and can keep feeding the regulation’s list of obligations. The Commission’s hope is that the DMA’s parallel track will help speed up antitrust interventions, addressing complaints that such interventions haven’t kept pace with big tech’s market-abusing playbook.

Once gatekeeper status has been conferred on a tech giant, all the DMA’s pre-set obligations kick in within six months. This is a major change from EU regulators having to first investigate a specific practice to prove abuse and then issue an enforcement decision — allowing time for tech giants to shape self-serving remedies or micropivot their processes to route around regulatory incursions. (Related: Three Commission decisions against Google have led to no tangible decrease in its regional marketshare.)

But the DMA proposal doesn’t look set to change the Commission’s broader “don’t rock the boat” digital markets ethos. The risk is therefore that it could simply enable big tech to entrench its dominance for the next two decades — leaving it up to the U.S. to actually tackle digital monopolies by unwinding key acquisitions.

As noted above, the EU hasn’t shown any appetite to block big tech from consolidating its dominance through buying up rivals. To date it hasn’t blocked a single tech merger — most recently greenlighting Google-Fitbit by applying some time-limited conditions, ignoring a massive civic society clamor to reject the deal entirely.

It’s also not clear how the Commission will be able to prevent tech giants from pushing an indefinite “pause” on DMA obligations by making recourse to their usual army of lawyers — such as to fight gatekeeper classification and/or dispute the detail of specific obligations ad infinitum, as big tech continues to do in EU privacy law enforcement.

Mathias Vermeulen, a director at the AWO digital data rights agency, agreed this looks like an “obvious risk” when we asked — one for which he suggested there is no immediately obvious mitigation.

Arguably, then, vigorous or even aggressive “tone-setting” enforcement of the DMA seems a prerequisite for making big tech sit up and take notice — given how much hay they’ve made from underresourced, risk-averse “oversight” via cripplingly inactive, decentralized EU enforcement that’s robbed the GDPR of its promise on paper.

At the very least, centralized enforcement looks essential for the DMA to avoid the pitfalls of forum shopping and major caseload backlogs that have marred the GDPR. (Albeit, even highly centralized, efficient EU regulatory oversight is likely a lot more comfortable for tech giants than the more “radical” alternatives being suggested on their home turf.)

Again, the Commission has proposed taking on responsibility itself for assessing whether a platform service is a gatekeeper and for enforcing the DMA — much like it wants to be in charge of DSA enforcement for VLOPs.

In the event of repeat breaches, the DMA does also allow for structural remedies — such as breaking up a business or forcing the sale of a unit.

But Vestager has described such a step as a last resort. So, again, the Commission looks more than happy to pursue a policy of “America first” — leaving any actual breakups of tech empires to the U.S.

The DMA does not appear to put a blocker on further market consolidation by gatekeepers, either. The Commission has only said it will monitor acquisition plans, including by expanding notification requirements to cases where gatekeepers want to buy smaller companies or startups that wouldn’t normally trigger regulatory oversight.

The lack of any pre-set limits on acquisitions will likely please investors concerned about the regulation making it harder for startups to exit to Google, et al. (and potentially impacting valuations). Hence the Commission may be seeking to balance wider tech industry concerns by not putting hard limits on consolidation. But not taking a stronger line here risks undermining the core aim of rebalancing tipped digital markets — which would be bad for European startups trying to scale a business in markets already owned by tech giants.

Setting competition issues aside, the Commission suggests an added boon of the DMA will be reduced regulatory fragmentation for online businesses and greater operational clarity for cross-border digital services. “Common rules across the single market will foster innovation, growth and competitiveness, and facilitate the scaling up of smaller platforms, small and medium-sized enterprises and startups who will have a single, clear framework at EU level,” it suggests.

It also touts reduced compliance costs for companies operating in the internal market. Albeit, reducing compliance costs for tech giants is a funny sort of “plus” if the aim is leveling the playing field for smaller digital players — who will likely need to increase their investments in order to take up DMA-enabled opportunities to better compete with gatekeepers (whether by analyzing new data they get back from gatekeepers or by building on new, mandatory APIs). Obviously, though, the motivating idea for legislators is that such investments will pay marketshare dividends for smaller players down the line — as they’re able to grab a larger share of big tech’s pie.

It’ll be years before any startups are in a position to say for sure whether the Commission’s gamble has paid off. But in the meantime businesses that feel the long arm of big tech have reasons to feel cheerful.

Johannes Reck, co-founder of vacation experience startup GetYourGuide — one of several in the sector that have urged stronger EU competition enforcement against Google this year — sounds pleased with the direction of travel. (Though he admitted to being too busy at the end of an exceptionally difficult year for the travel sector to have had time to parse the Commission’s proposals in detail.)

“Overall I am super positive about the developments and strongly support the view of Vestager to create a more level playing field,” he told us. “I am personally not worried about ‘overregulation’ at this point. We saw with GDPR that the regulation was less painful than originally anticipated.”