Antitrust enforcers admit they’re in a race to understand how to tackle AI

Antitrust enforcers on both sides of the Atlantic are grappling with how to get a handle on AI, a conference in Brussels heard yesterday. It’s a moment that demands “extraordinary vigilance” and a clear-sighted focus on how the market works, suggested top U.S. competition law enforcers.

From the European side, antitrust enforcers sounded more hesitant over how to respond to the rise of generative AI — with a clear risk of the bloc’s shiny new ex ante regime for digital gatekeepers missing a shifting tech target.

The event — organized by the economist Cristina Caffarra and entitled Antitrust, Regulation and the New World Order — hosted heavy-hitting competition enforcers from the U.S. and European Union, including FTC chair Lina Khan and the DoJ’s assistant attorney general Jonathan Kanter, along with the director general of the EU’s competition division, Olivier Guersent, and Roberto Viola, who heads up the bloc’s digital division which will start enforcing the Digital Markets Act (DMA) on gatekeeping tech giants from early next month.

While conference chatter ranged beyond the digital economy, much of the discussion was squarely focused here — and, specifically, on the phenomenon of big-ness (Big Tech plus big data- and compute-fuelled AI) and what to do about it.

U.S. enforcers take aim at AI

“Once markets have consolidated, cases take a long time. Getting corrective action is really, really challenging. So what we need to do is be thinking in a future-looking way about how markets can be built competitively to begin with, rather than just taking corrective action once a problem has condensed,” warned FTC commissioner Rebecca Slaughter. “So that is why you’re going to hear — and you do hear from competition agencies — a lot of conversation about AI right now.”

Speaking via videolink from the U.S., Khan, the FTC’s chair, further fleshed out the point — describing the expansion and adoption of AI tools as a “key opportunity” for her agency to put into practice some of the lessons of the Web 2.0 era, when, she said, opportunities were missed for regulators to step in and shape the rules of the game.

“There was a sense that these markets are so fast moving it’s better for government just to step back and get out of the way. And two decades on, we’re still reeling from the ramifications of that,” she suggested. “We saw the solidification and acceptance of exploitative business models that have catastrophic effects for our citizenry. We saw dominant firms be able to buy out a whole set of nascent threats to them in ways that solidified their moats for a long time coming.

“The FTC has a case ongoing against Meta, of course, that’s alleging that the acquisitions of WhatsApp and Instagram were unlawful. And so we just want to make sure that we are learning from those experiences and not repeating some of those missteps, which just requires being extraordinarily vigilant.”

The U.S. Department of Justice’s antitrust division has “a lot” of work underway with respect to AI and competition, including “numerous” active investigations, per Kanter, who suggested the DoJ will not hesitate to act if it identifies violations of the law — saying it wants to engage “quickly enough to make a difference”.

“We’re a law enforcement agency and our focus is on making sure that we are enforcing the law in this important space,” he told the conference. “To do that, we need to understand it. We also need to have the expertise. But we need to start demystifying AI. I think it’s talked about in these very grand terms almost as if it’s this fictional technology — but the fact of the matter is these are markets and we need to think about it from the chip to the end user.

“And so where is there accumulation? Where is there concentration? Where are there monopolistic practices? It could be in the chips. It could be in the datasets. It can be in the development and innovation on the algorithms. It can be in the distribution platforms and how you get them to end users. It can be in the platform technologies and the APIs that are used to help make some of that technology extensible. These are real issues that have real consequences.”

Kanter said the DoJ is “investing heavily”, including in its own technology and technologists, to “make sure we understand these issues at the appropriate level of sophistication and depth” — not only to have the firepower to enforce the law on AI giants but also, he implied, as a sort of shock therapy against the trap of thinking about the market as a single, “almost inaccessible”, technology. He likened the use of AI to how a factory may be used in lots of different parts of a business and across different industries.

“There’s going to be lots of different flavours and implementations. And it’s extremely important that we start digging in and having a sophisticated, hands-on approach to how we think about these issues,” he said. “Because the fact of the matter is one of the realities about these kinds of markets is that they have massive feedback effects. And so the danger of these markets tipping, the danger of these markets becoming the dominant choke points, is perhaps even greater than in other types of markets, more traditional markets. And the impact on society here is so massive, and so we have to make sure that we are doing the work now, at the front end, to get out in front of these issues to make sure that we are preserving competition.”

Asked how the FTC is dealing with AI, Khan flagged how the agency has also built up a team of in-house technologists — which she said is enabling it to go “layer by layer”, from chips, cloud and compute to foundation models and apps, to get a handle on key economic properties and look for emerging bottlenecks.

“What is the source of that bottleneck? Is it, you know, supply issues and supply constraints? Is it market power? Is it self-reinforcing advantages of data that are risking locking in some of the existing dominant players? And so it’s a moment of diagnosis and wanting to make sure that our analysis and understanding across the stack is accurate, so that we can then be using any policy or enforcement tools as appropriate to try to get ahead where we can. Or at least not be decades and decades behind.”

“There’s no doubt that these tools could provide enormous opportunity that could really catalyse growth and innovation. But, historically, we’ve seen that these moments of technological inflection points and disruption can either open up markets or they can be used to close off markets and double down on existing monopoly power. And so we are taking a holistic look across the AI stack,” she added.

Khan pointed to the 6(b) inquiry the FTC launched last month, focused on generative AI and investments, which she said would look to understand whether there are expectations of exclusivity or forms of privileged access that might be giving some dominant firms the ability to “exercise influence or control over business strategy in ways that can be undermining competition”.

She also flagged the agency’s consumer protection and privacy mandate as top of mind. “We’re very aware of the ways in which you see both shapeshifting by players but also the ways in which conglomerate entities can sometimes get a further advantage in the market if they’re collecting data from one arm and then able to endlessly use it throughout the business operations. So those are just some of the issues that are top of mind,” she said.

“We want to make sure that the hunger to vacuum up people’s data that’s going to be stemming from the incentive to constantly be refining and improving your models, that that’s not leading to wholesale violations of people’s privacy. That’s not baking in, now, a whole other set of reasons to be engaging in surveillance of citizens. And so that those are some issues that we’re thinking about as well.”

“We have huge mindfulness about the lessons learned from the hands-off approach to the social media era,” added Slaughter. “And not wanting to repeat that. There are real questions about whether we have already missed a moment, given the dominance of large incumbents in the critical inputs for AI, whether it’s chips or compute. But I think we are not willing to take a step back and say this has already happened so we need to let it go.

“I think we’re saying how can we make sure we understand these things and move forward? It’s why, again, we’re trying to use all the different statutory tools that Congress gave us to move forward, not just ex post enforcement cases or merger challenges.”

Former FTC commissioner Rohit Chopra, now director of the Consumer Financial Protection Bureau, also used the conference platform to deliver a pithy call to action on AI, warning: “It is incumbent upon us, as we see big tech firms and others continue to expand their empires, that it is not for regulators to worship them but for regulators to act.”

“I think actually the private sector should want the government to be involved to make sure it is a race to the top and not a race to the bottom; that it is meaningful innovation, not fake, fraudulent innovation; that it’s human-improving and not just beneficial to a clique at the top,” he added.

EU takes stock of Big Tech

On the European side, enforcers taking to the conference stage faced questions about shifting attitudes to Big Tech M&A, with the recent example of Amazon abandoning its attempt to buy iRobot in the face of Commission opposition. And how — or whether — AI will fall in scope of the new pan-EU DMA.

Caffarra wondered whether Amazon ditching its iRobot purchase is a signal from the EU that some tech deals should just not be attempted, asking whether there has been a shift in the bloc’s attitude to Big Tech M&A. DG Comp’s Guersent replied by suggesting regional regulators have been getting less comfortable with such mergers for a while.

“I think the signal was given some time ago,” he argued. “I mean, think of Adobe Figma. Think of Nvidia Arm. Think of Meta Kustomer, and even think — just to give the church in the middle of the village, as we say in France — think about Microsoft Activision. So I do not think we are changing our policy. I think that it is clear that the platforms, to take a vocabulary of the 20th century, in many ways acquired a lot of characteristics of what we used to call essential facilities.”

“I don’t know if we would have prohibited [Amazon iRobot] but certainly DG Comp and EVP [Margrethe] Vestager would have proposed to the college to do it, and I’ve no indication that the college would have had a problem with that,” he added. “So the safe assumption is probably good with that. But, for me, it’s a relatively classical case, even if it’s a bit more subtle — we will never know because we will never publish the decision we have drafted — of self-preferencing. We think we have a very good case for this. A lot of evidence. And we actually think that this is why Amazon decided to drop the case, rather than take a negative decision and challenge it in court.”

He suggested the bloc has evolved its thinking on Big Tech M&A — saying it’s been “a learning curve” and pointing back to the 2014 Facebook WhatsApp merger as something of a penny-dropping moment.

The EU waved the deal through at the time, after Meta (then Facebook) told it that user accounts could not automatically be matched between the two platforms. A couple of years later it did exactly what it had claimed it couldn’t. And a few years further on, Facebook was fined $122 million by the EU for a misleading filing. But the damage to user privacy — and further entrenchment of market power — was done.

“I don’t know whether we would accept it today,” said Guersent of the Facebook WhatsApp acquisition. “But that was [about] eight years ago. And this is where we started to say we were lacking the depths of reflection. We had never thought enough about it. We didn’t have the empirical work… Like everything, it’s not that you wake up one morning and decide I will change my policy. It takes time.”

“It’s about entrenchment. And of course the sophistication of the practices, the sophistication of what they could do, or they actually do, is increasing and therefore the sophistication of the analysis has to be increasing as well. And that is a real challenge, as well as the amount of data we have to crunch,” he added.

If Guersent was willing to confess to some past missteps, there was little sense from him the EU is in a hurry to course correct — even now it has its shiny new ex ante regime in place.

“There is and will be a learning curve,” he predicted of the DMA. “You shouldn’t expect us to have bright ideas about what to do on everything under the sun. Certainly not with 40 people — a slight message to whoever has a say on the staffing.”

He went on to cast doubt on whether AI should fall in direct scope of the regulation, suggesting issues arising around artificial intelligence and competition may be best tackled by a wider team effort that loops in national competition regulators across the EU, rather than falling just to the Commission’s own (small) staff of gatekeeper enforcers.

“Going forward we have the cloud. We have AI. AI is a divisive issue in basically all the fields. We have… all sorts of bundling, tying and nothing really new but should it be designated? Is it a DMA issue? Is it an [Article] 101 or 102 or national equivalent issue?” he said. “I think the only way to effectively tackle these issues — for me, I know, for my colleagues — is within the ECN [European Competition Network], because we need to have a critical mass of brains and manpower that the Commission doesn’t have and will not have in the near future.”

Guersent also ruffled a few feathers at the conference by dubbing competition a mere “side dish” when it comes to fixing what he suggested are complex global issues — a remark which earned him some pushback from Slaughter during her own turn on the conference stage.

“I don’t agree with that. I think competition underlies and is implicated by all the work of government. And we’re either going to do that with open eyes thinking about the competition effect of different government policies and choices or we’re gonna do that with our eyes closed. But either way we’re gonna affect competition,” she argued.

Another EU enforcer, DG Connect’s Roberto Viola, sounded a little more positive that the bloc’s newest tool might be handy for addressing AI-powered market abuse by tech giants. But asked directly during a fireside chat with Caffarra whether (and when) the Commission will look at the issue of powerful market actors extending their power into AI — “because they own critical infrastructure, critical inputs” — he danced around an answer.

“Take a voice assistant, take a search engine, take the cloud and whatever. You immediately understand that AI can come in scope of the DMA quite quickly,” he responded. “Same for the DSA [the Digital Services Act, which, for larger platforms, brings in transparency and accountability requirements on algorithms that may produce systemic risks] — if toward the more kind of societal risk end. I mean, if a search engine which is in scope of the DSA is fuelled by AI they are in scope.”

Pressed on the process that would be required — at least in the case of the DMA — to bring generative AI tools in scope of the ex ante rules, he conceded there probably wouldn’t be any overnight designations. Though he suggested some applications of AI might fall in scope of the regime indirectly, by virtue of where and how they’re being applied.

“Look, if it walks like a duck and quacks like a duck, it’s a duck. So take… a search engine. I mean, if the search function is performed through an algorithm it’s clearly in scope. I mean, there’s no doubt. I’m sure when we go to the finesse of it there will be an army of legal experts that will argue all sorts of things about the fine distinction between one or the other. In any case, the DMA can also look at other services, can look at tipping markets, can look at an expansion of the definition. So in any case, if necessary, we can go that way,” he said.

“But, largely, when we see how generative AI is used in enhancing the offering of web services — such as [in search functions]… the difference between one or the other becomes very subtle. So I’m not saying that tomorrow we’ll jump to the conclusion that those providing generative AI fall straight into the DMA. But, clearly, we are looking at all the similarities or the blending of those services. And the same applies for the DSA.”

Speaking during another panel, Benoit Coeure, president of France’s competition authority, had a warning for the Commission over the risks of strategic indecision — or, indeed, dither and delay — on AI.

“The cardinal sin in politics is jumping from one priority to another without delivering and without evaluating. So that means not only DMA implementation but DMA enforcement. And there the Commission will have to make difficult choices on whether they want to keep the DMA narrow and limited — or whether they want to make the DMA a dynamic tool to approach cloud services, AI and so on and so forth. And if they don’t, it will come back to antitrust — which I will love because that will bring lots of fantastic cases to me. But that might not be the most efficient. So there’s a very important strategic choice to be made here on the future of the DMA.”

Much of the Commission’s mindshare is clearly taken up by the demand to get the DMA’s engine started and the car into first gear — as it kicks off its new role enforcing the regulation on the six designated gatekeepers, beginning March 7.

Also speaking at the one-day conference and giving a hint of what’s to come here in the near term, Alberto Bacchiega, a director of platforms at DG Comp, suggested some of the DMA compliance proposals presented by gatekeepers so far don’t comply with the law. “We will need to take action on those relatively quickly,” he added, without offering details of which proposals (or gatekeepers) are in the frame there.

At the same time, and also with an air of managing expectations against any big bang enforcement moment dropping on Big Tech in a little over a month’s time, Bacchiega emphasized that the DMA is intended to steer gatekeepers into an ongoing dialogue with platform stakeholders — where, the hope will be, complaints can be aired and concessions extracted — noting that all the gatekeepers have been invited to explain their solutions at a public workshop that will take place a few weeks after March 7 (i.e. in addition to handing in their compliance reports to the Commission for formal assessment).

“We hope to have good conversations,” he said. “If a gatekeeper proposes certain solutions they must be convinced that these are good solutions — and they cannot be in a vacuum. They must be convinced and convincing. So that’s the only way to be convincing. I think it’s an opportunity.”

How quickly could the Commission arrive at a non-compliance decision under the DMA? Again, there was no straight answer from the EU side. But Bacchiega said if there are “elements” of gatekeeper actions the EU thinks are not complying “with the letter and the spirit of the DMA” then action “needs to be very quick”. That said, an actual non-compliance investigation of a gatekeeper could take the EU up to 12 months to establish a finding, or six months for preliminary findings, he added.