In an interesting twist, Facebook is being sued in the UK for defamation by consumer advice personality Martin Lewis, who says his face and name have been repeatedly used on fake adverts distributed on the social media giant’s platform.
Lewis, who founded the popular MoneySavingExpert.com tips website, says Facebook has failed to stop the fake ads despite repeat complaints and action on his part, thereby — he contends — tarnishing his reputation and causing victims to be lured into costly scams.
“It is consistent, it is repeated. Other companies such as Outbrain who have run these adverts have taken them down. What is particularly pernicious about Facebook is that it says the onus is on me, so I have spent time and effort and stress repeatedly to have them taken down,” Lewis told The Guardian.
“It is facilitating scams on a constant basis in a morally repugnant way. If Mark Zuckerberg wants to be the champion of moral causes, then he needs to stop his company doing this.”
In a blog post Lewis also argues it should not be difficult for Facebook — “a leader in face and text recognition” — to prevent scammers from misappropriating his image.
“I don’t do adverts. I’ve told Facebook that. Any ad with my picture or name in is without my permission. I’ve asked it not to publish them, or at least to check their legitimacy with me before publishing. This shouldn’t be difficult,” he writes. “Yet it simply continues to repeatedly publish these adverts and then relies on me to report them, once the damage has been done.”
“Enough is enough. I’ve been fighting for over a year to stop Facebook letting scammers use my name and face to rip off vulnerable people – yet it continues. I feel sick each time I hear of another victim being conned because of trust they wrongly thought they were placing in me. One lady had over £100,000 taken from her,” he adds.
Some of the fake ads appear to be related to cryptocurrency scams — linking through to fake news articles promising “revolutionary Bitcoin home-based opportunity”.
So the scammers look to be using the same playbook as the Macedonian teens who, in 2016, concocted fake news stories about US politics to generate a mint in ad clicks — also relying on Facebook’s platform to distribute their fakes and scale the scam.
In January Facebook revised its ads policy to specifically ban cryptocurrency, binary options and initial coin offerings. But as Lewis’ samples show, the scammers are circumventing this prohibition with ease — using Lewis’ image to drive unwitting clicks to a secondary offsite layer of fake news articles that directly push people towards crypto scams.
It would appear that Facebook does nothing to verify the sites to which ads on its platform are directing its users, just as it does not appear to proactively police whether ad creative is legal — at least unless nudity is involved.
Here’s one sample fake ad that Lewis highlights:
And here’s the fake news article it links to — touting a “revolutionary” Bitcoin opportunity, in a news article style mocked up to look like the Daily Mirror newspaper…
The lawsuit is a personal action by Lewis, who is seeking exemplary damages in the High Court. He says he’s not looking to profit personally and would donate any winnings to charities that aim to combat fraud; rather, he’s taking the action in the hope that the publicity will spotlight the problem and force Facebook to stamp out fake ads.
In a statement, Mark Lewis of the law firm Seddons, which Lewis has engaged for the action, said: “Facebook is not above the law – it cannot hide outside the UK and think that it is untouchable. Exemplary damages are being sought. This means we will ask the court to ensure they are substantial enough that Facebook can’t simply see paying out damages as just the ‘cost of business’ and carry on regardless. It needs to be shown that the price of causing misery is very high.”
In a response statement to the suit, a Facebook spokesperson told us: “We do not allow adverts which are misleading or false on Facebook and have explained to Martin Lewis that he should report any adverts that infringe his rights and they will be removed. We are in direct contact with his team, offering to help and promptly investigating their requests, and only last week confirmed that several adverts and accounts that violated our Advertising Policies had been taken down.”
Facebook’s ad guidelines do indeed prohibit ads that contain “deceptive, false, or misleading content, including deceptive claims, offers, or business practices” — and, as noted above, they also specifically prohibit cryptocurrency-related ads.
But, as is increasingly evident where big tech platforms are concerned, meaningful enforcement of existing policies is what’s sorely lacking.
The social behemoth claims to have invested significant resources in its ad review program, which includes both automated and manual review of ads. Yet it also relies on users reporting problem content, thereby shifting the burden of actively policing content its systems are algorithmically distributing and monetizing (at massive scale) onto individual users (who are, by the by, not being paid for all this content review labor… hmmm…).
In Lewis’ case the burden is clearly also highly personal, given the fake ads are not just dodgy content but are directly misappropriating his image and name in an attempt to sell a scam.
“On a personal note, as well as the huge amount of time, stress and effort it takes to continually combat these scams, this whole episode has been extremely depressing – to see my reputation besmirched by such a big company, out of an unending greed to keep raking in its ad cash,” he also writes.
The sheer scale of Facebook’s platform — which now has more than 2BN active users globally — contrasts awkwardly with the far smaller number of people the company employs for content moderation tasks.
And unsurprisingly, given that huge discrepancy, Facebook has been facing increasing pressure over various types of problem content in recent years — from Kremlin propaganda to hate speech in Myanmar.
Last year it told US lawmakers it would be increasing the number of staff working on safety and security issues from 10,000 to 20,000 by the end of this year. Which is still a tiny drop in the ocean of content distributed daily on its platform. We’ve asked how many people work in Facebook’s ad review team specifically and will update this post with any response.
Given the sheer scale of content continuously generated by a 2BN+ user base, combined with a platform structure that typically allows instant uploads, truly robust enforcement of Facebook’s own policies is likely going to require legislative intervention.
And in the meanwhile, Facebook operating a policy that’s essentially unenforceable risks looking intentional — given how much profit the company continues to generate by being able to claim it’s just a platform, rather than being regulated like a publisher.