I recently published an essay to help contextualize the data from the famous 2014 Facebook Emotion Contagion study– a study in which Facebook researchers removed between 10% and 90% of either positive or negative emotional content from users' news feeds to see whether it affected their emotions.

The study sparked massive outrage and articles in The Atlantic, Wired, Forbes, The NY Times, NPR, the BBC, and just about every other major news organization.  It launched an ethical sub-genre of books and papers about Big Tech (1, 2)– most notably, a team of 27 bio-ethicists wrote an op-ed in Nature defending the work.  It was the most shared academic research of 2014, and it's been cited almost 3000 times.

As you'll see in my earlier essay, for quite a drastic intervention, the effect on user behavior is both incredibly small and completely unrelated to underlying affective state.  Here I will address the ethics of the study, and how it has been (and continues to be) misused by the popular press.

The Study Was Ethical

Prior to the Emotion Contagion Study, the prevailing theory about Facebook was one of social comparison– most psychologists would have hypothesized that the overwhelmingly positive nature of Facebook posts was making people depressed and envious as they compared their own lives to others' curated identities.  Removing positive posts and removing negative posts were both expected to increase user well-being; it's only in retrospect, with the insights from the study, that we've come to believe that removing positive content might negatively affect users.

Given this theory, the right thing for Facebook researchers to do was to (1) check whether it was true, and if so, (2) change how content is curated so as not to depress everyone.  This research answers a fundamental, extremely important question about how we should design social media, and sharing it with the world was a generous thing to do.

The key to this kind of study is to implement it in such a way that there isn't a real risk of causing depression or significant emotional harm to users.  Such a risk– one beyond the presumed, 'common-man' risk assumed by users of the platform– requires explicit consent.

For this study, there is no evidence that any meaningful risk was incurred.  We can see that the intervention had no practical effect on people's emotional states; once again, leading psychologists would, if anything, have predicted positive effects on user well-being.  In the end, no one even noticed that the intervention was taking place– that's not an indication that something powerful, mysterious, and surreptitious was at work; it's an indication that the changes were not a big deal.

Few people care that Facebook A/B tests button color or layout– interventions with subtle psychological and behavioral implications.  Every design choice carries with it some risk; it's our job to make an a priori best guess at the possible implications, minimize uncertainty, and consent people when risks meaningfully exceed expectations.  We tacitly agree that testing UI design changes is okay, even though it might marginally affect user behavior, and we should accept continuous testing because it allows the company to improve the user experience.

This is a brand of 'rule utilitarianism'– this kind of continuous improvement is highly valuable, and requiring consent for every small change would make the experience terrible and degrade the quality of the service.  I should only be consented when the risk to me is real and unexpected.  There are presumed, reasonable, 'common-man' risks in consuming any media.

In using Facebook, you've already willingly subjected yourself to the psychological impact of a certain kind of media.  There is no evidence that this intervention meaningfully deviated from that core experience.  Consent in this case is like asking an action-movie fanatic to consent to a specific scene being made slightly more or less violent; the implicit, presumed, accepted risk subsumes the effect of the intervention.

Improper Interpretations

This study is still frequently miscited in two egregious ways– (1) it's used as an example of the callous indifference of Big Tech to user well-being, and (2) it's used as a damning example of 'the power of AI to surreptitiously and powerfully manipulate people' (see, e.g., Shoshana Zuboff's famous 2019 book, The Age of Surveillance Capitalism).

I hope that we've adequately dismissed the first of these misconceptions above.  This was research for the common good, not something antagonistic to users.  Defining the ethical line for consent is a nuanced process, but to suggest this research was done in bad faith with nefarious intentions is disingenuous.  Facebook had no incentive to publish this result publicly if they had truly poor intentions, and it's obvious that the researchers didn't see their work as unethical.  No one would've willingly subjected themselves to the mudslinging PR nightmare that ensued from this publication.

The second misconception– that this study is evidence of Big Tech's ability to powerfully and subliminally manipulate users– is also clearly wrong.  It follows from conflating 'a statistically significant effect' (which there is) with 'a powerful and important effect' (which there clearly is not), and from making an invalid leap from a behavioral measurement to an underlying emotional state.  It's pushed along by a widely held, blatantly wrong view that subtle design choices exert a powerful influence on human behavior– a view with roots in discredited and inaccurate social priming research.
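
To make the distinction concrete, here is a minimal sketch of how a trivially small difference in behavior becomes 'statistically significant' once each group contains hundreds of thousands of users.  The numbers below are made up for illustration; they are not the study's data.

```python
# Illustrative simulation: two groups whose mean "percent positive words per post"
# differs by a trivially small amount, compared with a two-sample t-test.
# All values are invented for illustration; they are not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 350_000                                            # users per group
control   = rng.normal(loc=5.00, scale=2.0, size=n)    # % positive words
treatment = rng.normal(loc=4.97, scale=2.0, size=n)    # shifted by 0.03 points

t_stat, p_value = stats.ttest_ind(control, treatment)
cohens_d = (control.mean() - treatment.mean()) / np.sqrt(
    (control.var(ddof=1) + treatment.var(ddof=1)) / 2
)

print(f"p-value   = {p_value:.2e}")   # minuscule p-value: "statistically significant"
print(f"Cohen's d = {cohens_d:.3f}")  # on the order of 0.015: a negligible effect size
```

With samples this large, almost any nonzero difference clears the significance bar; it's the effect size, not the p-value, that tells you whether anything practically important happened.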

To reiterate, this incredibly invasive intervention on emotional Facebook content had a nearly negligible effect on user behavior, and no known effect on user well-being or affect.

This paper is not evidence of either a Big Tech conspiracy or a new breed of sophisticated coercion.  Its proper interpretation makes the opposite point.  With this research, Facebook took a step to understand, contextualize, and share the emotional impact they have on their users.  They demonstrated a very tiny behavioral effect with a dramatic intervention.

It seems that if Facebook were to delete most of the positive posts on your newsfeed, you probably wouldn't notice, and you probably wouldn't care.