Facebook’s attempt to learn whether the type of content its users are exposed to affects their mood has landed the company in plenty of hot water this week, generating much debate about the ethics of user manipulation.
The latest development in the unfortunate saga for Zuck & co is that the fallout from the 2012 emotion study — which only came to light recently — has caught the eye of the UK’s data protection watchdog, the ICO, which is seeking to determine whether any laws were broken by the tech giant.
An ICO spokesperson confirmed to TechCrunch it has started making enquiries. “We’re aware of this issue, and will be speaking to Facebook, as well as liaising with the Irish data protection authority, to learn more about the circumstances,” it said in an emailed statement.
The Irish DP authority is being looped in because Facebook — like many a big tech brand — makes its European base in Ireland where the corporate tax climate is favorable.
The ICO’s move comes after it emerged (via Forbes) that Facebook changed the conditions of its data use policy to specify that users might be subject to research — some four months after the study was conducted. At the time of the study, in January 2012, there was no mention of user data being fodder for research.
It has also emerged there was no age filter on the study, meaning Facebook users as young as 13 may have been unwitting participants. Data protection laws regarding minors tend to be more stringent, so that may be one area the ICO intends to investigate, although it said its enquiries are at too early a stage to comment beyond the above statement as yet.
The controversial study was carried out by Facebook data scientist Adam Kramer. The aim was to discover whether the emotional tone of a user’s News Feed content had an impact on their own emotional makeup, as measured by the tone of the material they then posted to Facebook after viewing the positively or negatively skewed content.
Nearly 700,000 Facebook users were shown either more positive or more negative content during one week in 2012. The study found that users who were shown more positive News Feeds did indeed post more positive things to Facebook, and vice versa.
However, the 689,003 study participants were not informed that Facebook was attempting to manipulate their emotions in such a specific fashion — leading to accusations of unethical behaviour by the company.
The ethics of performing direct negative emotional manipulation on a group of your users without consent seems pretty clearly problematic. However, it remains to be seen whether Facebook has also acted illegally by using its platform in this way.
We’ve reached out to Facebook to ask for comment and will update this story with any response.