Early this month Google quietly began trials of “Privacy Sandbox”, its planned replacement adtech for tracking cookies, as it works toward phasing out support for third-party cookies in the Chrome browser. The system it is testing would reconfigure the dominant web architecture by replacing individual ad targeting with ads that target groups of users (aka Federated Learning of Cohorts, or FLoCs), and will still, Google loudly contends, generate a fat upside for advertisers.
There are a number of gigantic questions about this plan. Not least whether targeting groups of people who are opaquely sorted into algorithmically computed, interest-based buckets derived from their browsing history will actually reduce the harms that have come to be widely associated with behavioral advertising.
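For a sense of how that bucketing works mechanically: Chrome's initial origin trial is reported to compute cohort IDs with SimHash over the domains a user has visited. Here's a minimal, illustrative Python sketch of SimHash-style bucketing (the dimensions, hashing choices and example domains are our own assumptions, not Google's implementation):

```python
import hashlib

def domain_vector(domain, dims=8):
    """Map a domain to a deterministic pseudo-random vector of +1/-1 entries."""
    digest = hashlib.sha256(domain.encode()).digest()
    return [1 if (digest[i // 8] >> (i % 8)) & 1 else -1 for i in range(dims)]

def simhash_cohort(history, dims=8):
    """Sum the per-domain vectors and keep only the sign bits as a cohort ID.

    Users with similar sets of visited domains tend to land in the same
    bucket, without any single user's history leaving the browser.
    """
    totals = [0] * dims
    for domain in history:
        totals = [t + v for t, v in zip(totals, domain_vector(domain, dims))]
    bits = "".join("1" if t > 0 else "0" for t in totals)
    return int(bits, 2)

# Identical browsing histories always map to the same cohort ID:
cohort = simhash_cohort(["news.example", "video.example", "shop.example"])
```

With 8 sign bits there are only 256 possible buckets, so each cohort ID is shared by many users; the real system uses a much larger history signal and further clustering to keep cohorts both useful to advertisers and large enough to blur individuals.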
If your concern is online ads that discriminate against protected groups or seek to exploit vulnerable people (e.g. those with a gambling addiction), FLoCs may very well just serve up more of the abusive same. The EFF has, for example, called FLoCs a “terrible idea”, warning the system may amplify problems like discrimination and predatory targeting.
Advertisers also query whether FLoCs will really generate like-for-like revenue, as Google claims.
Competition concerns are also closely dogging Google’s Privacy Sandbox, which is under investigation by U.K. antitrust regulators — and has drawn scrutiny from the U.S. Department of Justice too, as Reuters reported recently.
Adtech players complain the shift will merely increase Google’s gatekeeper power over them by blocking their access to web users’ data even as Google can continue to track its own users, leveraging that first-party data while erecting a new moat that, they claim, will keep rivals in the dark about what individuals are doing online. (Though whether it will actually do that is not at all clear.)
Antitrust is of course a convenient argument for the adtech industry to use to strategically counter the prospect of privacy protections for individuals. But competition regulators on both sides of the pond are concerned enough over the power dynamics of Google ending support for tracking cookies that they’re taking a closer look.
And then there’s the question of privacy itself — which obviously merits close scrutiny too.
Google’s sales pitch for the “Privacy Sandbox” is evident in its choice of brand name, which suggests it’s keen to push the perception of a technology that protects privacy.
This is Google’s response to the rising store of value being placed on protecting personal data — after years of data breach and data misuse scandals.
A terrible reputation now dogs the tracking industry (or the “data industrial complex”, as Apple likes to denounce it) — a result of high-profile scandals like Kremlin-fuelled voter manipulation in the U.S., but also of the demonstrable dislike web users have of being ad-stalked around the internet. (Very evident in the ever-increasing use of tracker- and ad-blockers, and in the response of other web browsers, which adopted a number of anti-tracking measures years ahead of Google-owned Chrome.)
Given Google’s hunger for its Privacy Sandbox to be perceived as pro-privacy, it’s perhaps no small irony, then, that it’s not actually running these origin trials of FLoC in Europe — where the world’s most stringent and comprehensive online privacy laws apply.
AdExchanger reported yesterday on comments made by a Google engineer during a meeting of the Improving Web Advertising Business Group at the World Wide Web Consortium on Tuesday. “For countries in Europe, we will not be turning on origin trials [of FLoC] for users in EEA [European Economic Area] countries,” Michael Kleber is reported to have said.
TechCrunch had a confirmation from Google in early March that this is the case. “Initially, we plan to begin origin trials in the U.S. and plan to carry this out internationally (including in the U.K. / EEA) at a later date,” a spokesman told us earlier this month.
“As we’ve shared, we are in active discussions with independent authorities — including privacy regulators and the U.K.’s Competition and Markets Authority — as with other matters they are critical to identifying and shaping the best approach for us, for online privacy, for the industry and world as a whole,” he added then.
At issue here is the fact that Google has chosen to auto-enroll sites in the FLoC origin trials — rather than getting manual sign ups which would have offered a path for it to implement a consent flow.
And lack of consent to process personal data seems to be the legal area of concern for conducting such online tests in Europe where legislation like the ePrivacy Directive (which covers tracking cookies) and the more recent General Data Protection Regulation (GDPR), which further strengthens requirements for consent as a legal basis, both apply.
Asked how consent is being handled for the trials Google’s spokesman told us that some controls will be coming in April: “With the Chrome 90 release in April, we’ll be releasing the first controls for the Privacy Sandbox (first, a simple on/off), and we plan to expand on these controls in future Chrome releases, as more proposals reach the origin trial stage, and we receive more feedback from end users and industry.”
It’s not clear why Google is auto-enrolling sites into the trial rather than asking for opt-ins, beyond the obvious: such a step would add friction and another layer of complexity, limiting the size of the test pool to only those who consented. Google presumably doesn’t want to be so straitjacketed during product development.
“During the origin trial, we are defaulting to supporting all sites that already contain ads to determine what FLoC a profile is assigned to,” its spokesman told us when we asked why it’s auto-enrolling sites. “Once FLoC’s final proposal is implemented, we expect the FLoC calculation will only draw on sites that opt into participating.”
He also specified that any user who has blocked third-party cookies won’t be included in the Origin Trial — so the trial is not a full “free-for-all”, even in the U.S.
There are reasons for Google to tread carefully. Its Privacy Sandbox tests were quickly shown to be leaking data about incognito browsing mode — revealing a piece of information that could be used to aid user fingerprinting. Which obviously isn’t good for privacy.
“If FloC is unavailable in incognito mode by design then this allows the detection of users browsing in private browsing mode,” wrote security and privacy researcher, Dr Lukasz Olejnik, in an initial privacy analysis of the Sandbox this month in which he discussed the implications of the bug.
“While indeed, the private data about the FloC ID is not provided (and for a good reason), this is still an information leak,” he went on. “Apparently it is a design bug because the behavior seems to be foreseen to the feature authors. It allows differentiating between incognito and normal web browsing modes. Such behavior should be avoided.”
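Olejnik’s point can be made concrete with a short sketch. The real check would be a few lines of page JavaScript against Chrome’s `document.interestCohort()` call; the Python below simulates that API’s observable behavior, and the function names and the `FlocUnavailable` stand-in are illustrative assumptions, not real browser interfaces:

```python
class FlocUnavailable(Exception):
    """Stand-in for the rejection Chrome's FLoC call produces in incognito."""

def make_interest_cohort(incognito):
    """Simulate document.interestCohort(): returns a cohort object in normal
    mode, but raises in private browsing mode -- the asymmetry Olejnik flags."""
    def interest_cohort():
        if incognito:
            raise FlocUnavailable("FLoC disabled in private browsing")
        return {"id": "12345", "version": "chrome.2.1"}  # simulated payload
    return interest_cohort

def detect_mode(interest_cohort):
    """A page script can infer the browsing mode from whether the call fails."""
    try:
        interest_cohort()
        return "normal"
    except FlocUnavailable:
        return "likely-incognito"  # the error itself is the information leak
```

One way to close the leak, per Olejnik’s analysis, is to make the call behave identically in both modes, so a page script observes nothing that distinguishes incognito from normal browsing.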
Google’s Privacy Sandbox tests automating a new form of browser fingerprinting is not “on message” with the claimed boost for user privacy. But Google is presumably hoping to iron out such problems via testing and as development of the system continues.
(Indeed, Google’s spokesman also told us that “countering fingerprinting is an important goal of the Privacy Sandbox”, adding: “The group is developing technology to protect people from opaque or hidden techniques that share data about individual users and allow individuals to be tracked in a covert manner. One of these techniques, for example, involves using a device’s IP address to try and identify someone without their knowledge or ability to opt out.”)
At the same time it’s not clear whether or not Google needs to obtain user consent to run the tests legally in Europe. Other legal bases do exist, although it would take careful legal analysis to ascertain whether they could be used here. But it’s certainly interesting that Google has decided it doesn’t want to risk testing whether it can legally trial this tech in Europe without consent.
Likely relevant is the fact that the ePrivacy Directive is not like the harmonized GDPR — which funnels cross border complaints via a lead data supervisor, shrinking regulatory exposure at least in the first instance.
Any EU DPA may have competence to investigate matters related to ePrivacy in their national markets. To wit: At the end of last year France’s CNIL skewered Google with a $120 million fine related to dropping tracking cookies without consent — underlining the risks of getting EU law on consent wrong. And a privacy-related fine for Privacy Sandbox would be terrible PR. So Google may have calculated it’s simply less risky to wait.
Under EU law, certain types of personal data are also considered highly sensitive (aka “special category data”) and require an even higher bar of explicit consent to process. Such data couldn’t be bundled into a site-level consent — but would require specific consent for each instance. So, in other words, there would be even more friction involved in testing with such data.
That may explain why Google plans to do regional testing later — if it can figure out how to avoid processing such sensitive data. (Relevant: Analysis of Google’s proposal suggests the final version intends to avoid processing sensitive data in the computation of the FLoC ID — to avoid exactly that scenario.)
If/when Google does implement Privacy Sandbox tests in Europe “later”, as it has said it will (having also professed itself “100% committed to the Privacy Sandbox in Europe”), it will presumably do so when it has added the aforementioned controls to Chrome — meaning it would be in a position to offer some kind of prompt asking users if they wish to turn the tech off (or, better still, on).
Though, again, it’s not clear how exactly this will be implemented — and whether a consent flow will be part of the tests.
Google has also not provided a timeline for when tests will start in Europe. Nor would it specify the other countries it’s running tests in besides the U.S. when we asked about that.
At the time of writing it had not responded to a number of follow-up questions either, but we’ll update this report if we get more detail. Update: Google said it can’t currently offer any more detail on questions including how consent will be handled once FLoCs are deployed (i.e. post-trial, post-launch); and whether it believes it will be unnecessary to obtain individual consent to do cohort-based targeting once the system is fully developed. It also declined to specify the legal basis it will be relying upon for running tests in Europe “later”.
“We’re very engaged on this topic and thinking carefully about it — but answers to questions about compliance with specific laws and obligations will ultimately turn on the technical operation of the Sandbox proposals, which are still being developed,” said its spokesman.
The (current) lack of regional tests raises questions about the suitability of Privacy Sandbox for European users — as The New York Times’ Robin Berjon has pointed out, noting via Twitter that “the market works differently”.
“Not doing origin tests is already a problem… but not even knowing if it could eventually have a legal basis on which to run seems like a strange position to take?” he also wrote.
Google is surely going to need to test FLoCs in Europe at some point. Because the alternative — implementing regionally untested adtech — is unlikely to be a strong sell to advertisers who are already crying foul over Privacy Sandbox on competition and revenue risk grounds.
Ireland’s Data Protection Commission (DPC), meanwhile — which, under GDPR, is Google’s lead data supervisor in the region — confirmed to us that Google has been consulting with it about the Privacy Sandbox plan.
“Google has been consulting the DPC on this matter and we were aware of the roll-out of the trial,” deputy commissioner Graham Doyle told us today. “As you are aware, this has not yet been rolled-out in the EU/EEA. If, and when, Google present us with detail plans, outlining their intention to start using this technology within the EU/EEA, we will examine all of the issues further at that point.”
The DPC has a number of investigations into Google’s business triggered by GDPR complaints — including a May 2019 probe into its adtech and a February 2020 investigation into its processing of users’ location data — all of which are ongoing.
But — in one legacy example of the risks of getting EU data protection compliance wrong — Google was fined $57 million by France’s CNIL back in January 2019 (under GDPR as its EU users hadn’t yet come under the jurisdiction of Ireland’s DPC) for, in that case, not making it clear enough to Android users how it processes their personal information.