FTC may consider rule curbing algorithmic discrimination and ‘commercial surveillance’

The Federal Trade Commission appears to be preparing to consider a rule aimed at digital platforms that either invasively track their users or allow others to do so. The “Trade Regulation Rule on Commercial Surveillance” is at a very early stage but could be the first major anti-Big-Tech action by new FTC Chair Lina Khan.

The rule is currently described only in this Office of Management and Budget summary, filed after the FTC submitted information to that agency on potential upcoming regulatory actions. According to the listing, the rule would “curb lax security practices, limit privacy abuses, and ensure that algorithmic decision-making does not result in unlawful discrimination.” There is no public draft of the rule and no indication of whether the rule-making is more than notional at this point.

An FTC spokesperson offered the following statement, but declined to comment further: “The FTC is prepared to use all of our tools to combat harmful commercial surveillance practices and protect Americans’ privacy.”

Senator Brian Schatz called attention to the filing, lauding the agency (perhaps a bit prematurely) for taking this on: “I’m glad to see Chair Khan and the agency take this step to crack down on companies using discriminatory algorithms. It’s crucial that we address this,” he said in a statement.

The rule-making would use the FTC’s Section 18 authority to regulate “unfair or deceptive acts or practices,” a long-standing source of authority for creating a variety of requirements and prohibitions.

This rule could be along the lines of the proposed (and discarded) Broadband Privacy Act, among other attempts to shield consumers from the hidden depredations of everything from social media companies to internet providers.

Of course, the FTC cannot simply invent rules out of thin air. It does establish rules, but usually when a new or changing industry has managed to circumvent rules that already exist for good reason.

For instance, if a yogurt label says it’s fat-free and testing shows it has fat, that’s obviously false advertising. But if a social media company says (as many do) that you “own your data” or some such, yet you can’t download it, prevent the company from selling it or delete it completely from the platform, is that false advertising? It might not be until the FTC updates its rules and guidance to include the practice.

“Unlawful discrimination” might also occur when an algorithm fed badly configured data ends up favoring or disfavoring a group of people based on a protected status like religion, race or medical status. At present there is little in the way of formal requirements for vetting algorithms, as they and their training data are often trade secrets or otherwise shielded from public view and inquiry. An FTC rule could help make such vetting a requirement rather than a voluntary action.
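To give a sense of what such vetting might involve, here is a minimal sketch of one common audit check: comparing an algorithm’s approval rates across groups and flagging a large gap. The data, group labels and the 0.8 threshold (the “four-fifths rule” long used in employment-discrimination analysis) are illustrative assumptions, not anything specified in the FTC filing.

```python
# Illustrative sketch only: a simple disparate-impact check of the kind an
# algorithm audit might include. Groups, outcomes and the 0.8 benchmark are
# hypothetical, chosen for demonstration.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group approval rate to the highest."""
    return min(rates.values()) / max(rates.values())

# Hypothetical decisions labeled by a protected attribute.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

rates = selection_rates(decisions)
ratio = disparate_impact_ratio(rates)
print(rates)                   # {'group_a': 0.75, 'group_b': 0.25}
print(f"ratio = {ratio:.2f}")  # 0.33, well below the 0.8 benchmark
if ratio < 0.8:
    print("Potential disparate impact; warrants further review.")
```

A rule along the lines the OMB summary describes could, in principle, require companies to run and document checks like this before deploying algorithmic decision-making, though the filing itself does not say how vetting would work.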

Sometimes these efforts are backed up by legislation or executive action, and Khan certainly has something of a mandate from the White House to rein in Big Tech (her rapid ascendance from legal adviser to top of the heap is a strong endorsement of actions like this potential rule).

To be clear, at this point this is more or less a twinkle in the FTC’s eye, but the filing with the OMB does establish it as a likely priority for 2022, an election year in which the current administration would like to appear effective. Taking on Big Tech is one of the very few bipartisan efforts extant, so work along these lines could be a plank in a few political platforms.