FTC finally proposes ban on fake reviews

The FTC has proposed a new rule banning numerous forms of fake reviews online, from outright fabricated ones to those that are sketchily repurposed or secretly manipulated. It may not totally rehabilitate the notoriously unreliable online review ecosystem, but it could help make things a bit more bearable.

This rule has been a long time in the making, which is par for the course for any federal regulator. The FTC’s first case of this type came in 2019, against a merchant that was making misleading claims and paying for fake reviews. Before that, it had taken on “influencer marketing” in which a person didn’t disclose that they were being paid to promote a product.

Now the agency is ready to take comprehensive action with rules it first previewed last October and has now put in near-final form. The proposed rule is the result of extensive research and consultation with businesses, consumers and even advertising trade organizations, which predictably advised the FTC not to bother cracking down on this lucrative business.

The Association of National Advertisers, for instance, said the agency “has not demonstrated evidence of prevalence” and worried that new rules would be “burdensome.” But consumer advocacy organizations, major online companies and common sense argue otherwise: publicly reported takedowns of fake reviews already number in the billions, and anyone who has tried to buy a product on Amazon knows the system is thoroughly compromised. The regulators also note “the widespread emergence of generative AI, which is likely to make it easier for bad actors to write fake reviews.”

Even so, the FTC has no doubt carefully tailored the rules it is proposing so that legitimate commerce and acceptable review solicitation (like providing a product in exchange for an honest review) are not affected.

You can read the full notice of proposed rulemaking here, but, as NPRMs tend to be, it’s quite long and mostly concerned with establishing the need for and legality of the rule. The agency summarizes what is newly prohibited in a news release, though, which I have further condensed below:

  • No selling or soliciting fake reviews. This covers reviews written from fake profiles, generated by AI or posted by anyone who has not actually used the product, and businesses can face penalties if they do so knowingly.
  • No review hijacking, such as repurposing reviews written for one product to sell another; one company recently had to pay $600,000 for doing this.
  • No buying positive or negative reviews for your own or other products.
  • No reviews from company leadership or related persons (family, employees) without disclosure.
  • No running a review site for your own products and pretending it’s “independent.”
  • No suppressing reviews via legal threats or intimidation, like saying a bad review is defamation.
  • No selling fake engagement like followers and video views.

The rule is now open for public comment; after 60 days, the FTC will weigh any new information, adjust the rule if needed, and then put the finalized version to a vote.

In response to my questions, the FTC acknowledged the difficulty of going after companies abroad that engage in these practices, but of course it can hit the companies in the U.S. that pay for the fake reviews. On the definition and detection of AI-generated content and fake engagement, the agency offered no further details.