Facebook has denied it offers tools to advertisers to target users when they are in vulnerable emotional states.
Yesterday The Australian published a report claiming Facebook uses algorithms to target advertising when users of its service are feeling insecure — including targeting children as young as 14 when they are feeling vulnerable.
The report, based on a confidential 2017 internal document seen by the newspaper, claimed that by monitoring posts, pictures, interactions and Internet activity in real time, Facebook’s algorithms can identify “moments when young people need a confidence boost”.
Facebook can determine when young people feel “stressed”, “defeated”, “overwhelmed”, “anxious”, “nervous”, “stupid”, “silly”, “useless”, and a “failure”, the report claimed, suggesting the company allows advertisers to target users based on its algorithmic understanding of their well-being.
Detailed information on mood shifts among young people is “shareable under non-disclosure agreement only”, the document added.
Contacted by TechCrunch about the newspaper’s claims, a Facebook spokesperson described the report as “misleading”, and denied the company provides tools for advertisers to target Facebook users based on their emotional state.
“The premise of this article is misleading. We do not offer tools to target people based on their emotional state. The analysis done by an Australian researcher was intended to help marketers understand how people express themselves on Facebook. It was never used to target ads and was based on data that was anonymous and aggregated,” the spokesperson said.
However, Facebook is not denying the existence of the document, nor its apparent ability to assess users’ psychological states based on ongoing tracking of their online activity.
In earlier comments to the newspaper, Facebook also apologized, and said it had opened an investigation to “understand the process failure and improve our oversight”.
“We will undertake disciplinary and other processes as appropriate,” it added.
Facebook has an established process for reviewing internal research, and TechCrunch understands the research in question did not follow that process — hence the internal review.
It’s by no means the first time questions have been raised about Facebook’s approach to the emotional well-being of its users. The company got into ethical hot water back in 2014 when it emerged it had experimented on some 700,000 users without their consent or knowledge, to see whether it could influence how they felt.
Facebook’s exercise in mass emotional manipulation, which was actually carried out in 2012, involved it showing users either more positive or more negative content to see if it could influence their mood. When details of the internal experiment emerged it was roundly condemned as unethical.
Last fall the company also garnered criticism for its ad targeting practices after it emerged it was allowing ads to be targeted based on users’ ethnic affinity — with concerns this could enable discriminatory advertising. Facebook subsequently disabled ethnic affinity targeting for housing, employment and credit-related ads.
And while the social media giant has distanced itself from these latest suggestions that it’s tracking users’ emotions for marketing-related sales opportunities, a longitudinal study of Facebook usage, conducted by external researchers and published in the American Journal of Epidemiology in February, claims to have found an association between activity on the social network and negative feelings of well-being.
“The negative associations of Facebook use were comparable to or greater in magnitude than the positive impact of offline interactions, which suggests a possible tradeoff between offline and online relationships,” the researchers wrote in the abstract to the paper.
More recently, Facebook has also been accused of allowing data on millions of its users to be covertly harvested by a Trump campaign affiliate — and potentially used to target political advertising at Americans ahead of the 2016 presidential election. It has denied these claims, telling the Intercept in March: “Our investigation to date has not uncovered anything that suggests wrongdoing.”