Media & Entertainment

Facebook And The Ethics Of User Manipulation [Updated]

Comment

[This post has been updated to include comments from Adam Kramer, the Facebook employee who coauthored the study, and new information from Cornell indicating that the study was funded internally. ]

A recent study conscripted Facebook users as unwitting participants in a weeklong experiment in direct emotional manipulation. The study set out to discover whether the emotional tone of a user's News Feed content had an impact on their own emotional state, measured through the tone of what they posted to the social service after viewing the skewed material.

Nearly 700,000 Facebook users were shown either more positive or more negative content. The study found that users who were given more positive news feeds posted more positive things, and users who were given more negative news feeds posted more negative things.

Surprising? Doubtful. Unethical? Yes.

Bear in mind that the impact of the study wasn't contained merely to those it directly manipulated. It notes that around 155,000 users from each of the positive and negative groups "posted at least one status update during the experimental period." So, hundreds of thousands of status updates were posted by the negatively induced user group. Those negative posts likely caused more posts of similar ilk.

Contagion, after all, doesn’t end at the doorstep.

We won’t know if the experiment did any more than darken the days of a few hundred thousand users for a week in 2012. But it could have. And that’s enough to make a call on this: Allowing your users to be unwitting test subjects of emotional manipulation is beyond creepy. It’s a damn disrespectful and dangerous choice.

Not everyone is in a good emotional spot. At any given moment, a decent chunk of Facebook's users are emotionally fragile. We know that because at any given moment, a decent chunk of humanity is emotionally fragile, and Facebook has a massive number of active users. That means that among the negatively influenced were the weak, the vulnerable, and potentially the young. I've reached out to Facebook asking if the study excluded users between the ages of 13 and 18, but haven't yet heard back.

Adding extraneous, unneeded emotional strain to a person of good mental health is an unkindness. Doing so to a person who needs encouragement and support is cruel.

The average Facebook user has something akin to an unwritten social contract with the company: I use your product, and you serve ads against the data I've shared. Implicit in that is expected polite behavior, the idea that Facebook won't abuse your data, or your trust. In this case, Facebook did both, using a user's social graph against them, with intent to cause emotional duress.

We're all manipulated by corporations. Advertising is among the more blatant examples of it. There's far more of it out there than we realize. The pervasiveness of the manipulation undoubtedly makes us slightly inured to it. But that doesn't mean we can't point out things that are over the line when we are shown what's going on behind the curtain. If Facebook was willing to allow this experiment — the lead author of which, according to the study itself, is a Facebook employee working on its Core Data Science Team — what else might it allow in the future?

I am not arguing that Facebook has a moral imperative to make news feed content more positive on average. That would render the service intolerable — not all life events are positive, and the ability to commiserate with friends and loved ones digitally is now part of the human experience. And Facebook certainly tweaks its news feed over time for myriad reasons to improve its experience.

That's all perfectly reasonable. Deliberately looking to skew the emotional makeup of its users, spreading negativity for no purpose other than curiosity, without user assent or practical safeguards, is different. It's irresponsible.

For more on the topic, read our follow-up on the ethics of the experiment: The Morality Of A/B Testing

Here’s the response from Facebook’s Kramer:

OK so. A lot of people have asked me about my and Jamie and Jeff’s recent study published in PNAS, and I wanted to give a brief public explanation. The reason we did this research is because we care about the emotional impact of Facebook and the people that use our product. We felt that it was important to investigate the common worry that seeing friends post positive content leads to people feeling negative or left out. At the same time, we were concerned that exposure to friends’ negativity might lead people to avoid visiting Facebook. We didn’t clearly state our motivations in the paper.

Regarding methodology, our research sought to investigate the above claim by very minimally deprioritizing a small percentage of content in News Feed (based on whether there was an emotional word in the post) for a group of people (about 0.04% of users, or 1 in 2500) for a short period (one week, in early 2012). Nobody's posts were "hidden," they just didn't show up on some loads of Feed. Those posts were always visible on friends' timelines, and could have shown up on subsequent News Feed loads. And we found the exact opposite of what was then the conventional wisdom: Seeing a certain kind of emotion (positive) encourages it rather than suppresses it.

And at the end of the day, the actual impact on people in the experiment was the minimal amount to statistically detect it — the result was that people produced an average of one fewer emotional word, per thousand words, over the following week.

The goal of all of our research at Facebook is to learn how to provide a better service. Having written and designed this experiment myself, I can tell you that our goal was never to upset anyone. I can understand why some people have concerns about it, and my coauthors and I are very sorry for the way the paper described the research and any anxiety it caused. In hindsight, the research benefits of the paper may not have justified all of this anxiety.

While we've always considered what research we do carefully, we (not just me, several other researchers at Facebook) have been working on improving our internal review practices. The experiment in question was run in early 2012, and we have come a long way since then. Those review practices will also incorporate what we've learned from the reaction to this paper.
