
Is Outrage Over the Facebook Mood Manipulation Study Anti-Science or Ignorance?

2014 July 2
by Richard N. Landers

By now, you’ve probably heard about the latest controversy coming from Facebook – a researcher internal to Facebook, along with two university collaborators, recently published a paper in PNAS[1] that included an experimental manipulation of mood. Specifically, the researchers randomly assigned about 700,000 Facebook users to conditions designed to test an interesting causal question: does the emotional content of posts read on Facebook alter people’s moods?

This is an interesting and compelling way to study that question. Research to this point has focused on scouring pre-existing news feeds and inferring emotion from posted content. This has led to some other well-known Facebook research, such as the Facebook envy spiral, in which people post exaggerated accounts of the positive events in their lives, which leads reading users to become jealous and post even more exaggerated accounts of their own lives. In the end, on Facebook, everyone else looks like they’re doing better than you. The downside of this focus on pre-existing content is that it’s impossible to demonstrate causality – we don’t know that viewing content on Facebook actually causes you to think or behave differently. To demonstrate causation (not just correlation), you need an experiment, and that’s exactly what these researchers did. The result? Manipulating the mood of viewed posts did affect the apparent mood of posts made by the study participants – but the effect was tiny (d = 0.02 was the largest observed effect). So it makes a difference – but perhaps not as big a difference as previous research has suggested.
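
For readers unfamiliar with effect sizes: Cohen’s d expresses the difference between group means in standard-deviation units. Here’s a minimal sketch in Python of how d is computed – the numbers are purely illustrative and are not the paper’s data:

    import math
    from statistics import mean, variance

    def cohens_d(a, b):
        """Cohen's d: standardized mean difference using the pooled SD."""
        na, nb = len(a), len(b)
        pooled_var = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
        return (mean(a) - mean(b)) / math.sqrt(pooled_var)

    # Hypothetical groups whose means differ by 0.1 units against an SD
    # near 8 - the result, roughly 0.013, is the same order of magnitude
    # as the paper's largest effect.
    group_a = [5.1, 10.2, 15.3, 20.1, 25.0]
    group_b = [5.0, 10.1, 15.2, 20.0, 24.9]
    print(round(cohens_d(group_a, group_b), 3))  # 0.013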

Reactions have been mixed. Psychologists and science writers have tended to focus on explaining the study to non-psychologists, whereas popular sources have focused (as usual) on the outrage.

Facebook-controversy-related comic from cad-comic.com

With that, we come to the heart of my title question: Is the outrage over the Facebook mood manipulation study mere ignorance or anti-science? Let me explain with three points.

First, we know that web media companies, especially those that have hired marketing consultants, run internal experiments on our data and our moods all the time. They do this to improve the attractiveness of their products. By randomly assigning people viewing web content to two different experiences – an approach commonly called A/B testing – they can observe the difference between the two in terms of time spent on the site, information shared, and a variety of other metrics. Data in hand, web developers can then change their entire site to the format that was most successful in experimental testing. Websites like Amazon, Yelp, and Facebook all do this with user-contributed data, altering the algorithms that shape what you see. Even Wikipedia’s article on this approach uses the example of randomly assigning people to receive different marketing emails. For people who didn’t know this approach was so common – despite agreeing to Facebook’s terms of service, which grant Facebook permission to experiment on users freely – the outrage is likely driven by ignorance. I suggest these people go delete their Facebook accounts right now, or at least read the terms of service more carefully.
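
To make that concrete, here is a minimal sketch of an A/B test in Python. Everything in it – the user IDs, the engagement metric, the numbers – is illustrative, not any company’s actual pipeline:

    import hashlib
    import random
    from statistics import mean

    def assign_variant(user_id):
        """Deterministically bucket a user into experience A or B."""
        digest = hashlib.sha256(user_id.encode()).hexdigest()
        return "A" if int(digest, 16) % 2 == 0 else "B"

    # Simulated engagement metric (e.g., minutes on site) for 1,000 users.
    random.seed(42)
    minutes = {f"user{i}": random.gauss(15, 5) for i in range(1000)}

    group_a = [m for u, m in minutes.items() if assign_variant(u) == "A"]
    group_b = [m for u, m in minutes.items() if assign_variant(u) == "B"]

    print(f"A: n={len(group_a)}, mean={mean(group_a):.1f} min")
    print(f"B: n={len(group_b)}, mean={mean(group_b):.1f} min")

Hashing the user ID, rather than flipping a coin on each visit, is a common design choice: it keeps each user in the same condition every time they return, without storing an assignment table.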

Second, if you are aware that Facebook is collecting your data and manipulating you but don’t find the Facebook experience to be worth that loss of control over your data, you should have deleted your account by now. As Facebook is a private company with a product it is trying to sell you, you are perfectly free to give up on that product at will. You can’t be mood-manipulated if you don’t use the service. If you’re angry and haven’t deleted your account, you have already demonstrated, through your actions, that access to Facebook is more important to you than avoiding being manipulated by a massive technology company.

Third, if you already knew that Facebook was collecting your data and manipulating you, the major difference between this study and the hundreds of studies Facebook has undoubtedly already run is that this one was published in the scientific research literature. Most research like this goes on behind closed doors. You know that Big Data is processing all of the status updates you provide to Facebook and that experimental manipulations are being used to compel you to stare at Facebook longer. But this time, they went public with the results – and only now is there anger.

From xkcd.com

Having said all that, could the researchers have handled this better to minimize negative public reactions? Certainly yes.

First, informing participants ahead of time that an experiment was going on would undoubtedly have biased the results, so obtaining informed consent in advance was not a realistic option – a justifiable risk, given the exceptionally low probability of harm (in IRB terms, “minimal risk”) to participants in this study. However, it would have been courteous to have either debriefed participants (even now, we don’t know exactly who was manipulated) or given participants the opportunity to withhold their data from the final dataset, even if users have explicitly agreed to be experimented on in their day-to-day interactions on Facebook.

Second, the ethical review process for this study was business-as-usual for many IRBs – in other words, orchestrated by the researchers to minimize the hassle they needed to go through to meet the minimum possible institutional requirements for that review. Facebook apparently conducted an internal review of the study protocol proposed by the first author (a Facebook employee), presumably in consultation with his two Cornell faculty co-authors. No surprises there – Facebook manipulates its users all the time, and this manipulation was within its terms of service, so why would it care? Once the data were collected, the Cornell faculty authors applied to their own IRB to use the data collected by Facebook as “pre-existing data.” Given the sheer scope of this study – the researchers knew they’d be trying to alter the interactions of three-quarters of a million people – the faculty would have been better off sending the study for full IRB review. Having said that, a full IRB probably would have approved the protocol anyway, because the manipulation was well in line with what Facebook already does on a regular basis – the researchers just redirected Facebook’s business-as-usual to conduct science instead of, or perhaps in addition to, generating profit.

Footnotes:
  1. Kramer, A. D. I., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 111(24), 8788–8790. doi: 10.1073/pnas.1320040111

SIOP 2014: Reflections on SIOP Honolulu

2014 June 19
by Richard N. Landers

SIOP 2014 Coverage: Schedule Planning | Live-Blog | Reflections

Normally, SIOP is held during the final weeks of April. That’s probably the worst conceivable timing for the academic community – immediately before, and sometimes during, finals of the Spring semester. This year, SIOP was scheduled unusually due to its unusual location: Honolulu, Hawaii. On the wise assumption that many SIOP attendees would dual-purpose their trip to Honolulu as a vacation, the conference was moved later, to mid-May (and post-semester). The daily conference schedule was also shifted earlier, starting at 7AM and ending around 3PM, to allow people to enjoy their afternoons in Hawaii. That resulted in a very different SIOP feel than usual. Not really better or worse – just different.

My personal conference schedule was packed full, with 8 separate events (9 total presentations) spread over all three days of the conference. That didn’t leave a whole lot of time to attend other people’s sessions, to listen and reflect, which is really the best part of SIOP for me. My overall goals in attending SIOP are to be inspired in my own research, to incorporate new ideas into my thinking, and to remind myself of the excitement of research and why I took a research career in the first place. Even with my schedule, I think these goals were met.

In terms of specific content, I also use SIOP as a barometer of the tech sophistication of the practitioner HR consultant crowd. I used to think the Academy of Management conference would be the best place for this – they do study management, after all – but after attending AOM for a few years, I realized that the Academy crowd is not generally very concerned with what people are actually doing in real organizations. So now I stick with SIOP to learn what I need to know.

In general, things are looking good for technology in IO. At a minimum, people are paying attention and heeding the calls I’ve been making for years – namely, that technology is becoming an overwhelming marketing force in the HR consulting community, with both good and bad implications, and we need to pay attention.

On the good side, technology enables us to do a lot we couldn’t do before. One of the most fascinating examples I saw was a manufacturing simulation in which consultants could simulate an entire car-building process, recording every mouse click, every physical movement, and every eye glance of the operators for the purposes of either developmental or administrative assessment. For softer skills, assessment is improving but isn’t yet great. Leadership assessment, for example, usually takes place online and is video-based, in the tradition of situational judgment tests (SJTs). That’s more convenient than an in-person role-play, but we can do better. The line between SJTs and assessment centers is blurring, and I think this will ultimately force purveyors of both to think hard about whether they are really distinct concepts and what unique value each brings, if any.

On the bad side, technology is edging out the more important part of the equation – the underlying psychological mechanisms that enable these technologies to work in the first place. In many cases, assessment technology is more about technology than assessment; learning technology is more about technology than learning. That is not the way things should be, and the IO crowd is starting to take notice. We are the ones to make that stand, and we need more people fighting the good fight!

A great example of this is the recent, highly visible gamification effort by Knack called Wasabi Waiter. It is essentially a casual web browser game (it reminds me of the classic casual game Diner Dash), but it purportedly can be used to select high performers on “social intelligence,” among other constructs. The media has gone crazy over this thing. Yet there is no published psychometric evidence – not even a white paper – describing the construct validity of whatever it is this game is supposed to assess. Without that, we should not trust this game – it may be all flash and no substance – yet the press is impressed with flashy, and that’s what people remember. IO should bring the science!

Another assessment gamification effort, called Insanely Driven, provides a second case study I learned about at SIOP. This game supposedly assesses candidates on… something. I honestly don’t know, and I’m not convinced the creators do either! It’s an entertaining experience, and personality profiles are spit out at the end depending upon your performance, but is it valid? Again – nothing published, nothing shared, so who knows?

In any case, next year’s SIOP is somewhere not quite so exciting (Philadelphia!), so we’ll see then whether the tech enthusiasm gets even stronger!


SIOP 2014: Live-Blog

2014 May 24
by Richard N. Landers

SIOP 2014 Coverage: Schedule Planning | Live-Blog | Reflections

One of the side effects of a Hawaii conference is a week of relaxation after the conference ends.  Ahhh.  Reflections on the conference will be forthcoming, but for now, I’m archiving my live coverage of the conference from Twitter here.

  • May 11 – Final presentations and posters for #SIOP14 Hawaii all complete… 9 presentations! Not much time to listen to others! #academicproblems
  • May 15 – 21 hours of traveling, 3 planes, and STILL not at #siop14 … Next year, how about we have it where I live?
  • May 15 – 28h of travel from Virginia, incl one extra plane reroute, all bags lost, hotel room wasn’t ready at 11PM… #siop14 not a stellar start!
  • May 15 – @josuepht Thanks! I think I’m skipping the plenary after all this travel madness.
  • May 15 – After sleep crash, up at 7 but feel surprisingly normal. Still no luggage or clothes here, so presenting’s going to be interesting. #siop14
  • May 15 – @TRPoeppelman so what is a #siop prize? Journal subscriptions? ;) #killingtimeontheSIOPshuttle
  • May 15 – @ericknud As I learned this morning, it was actually in the airport with me yesterday… But no one there knew that!
  • May 15 – @SIOPtweets thanks but a late night run to the hotel store for detergent and some bathroom sink laundry has me set! #commited2siop
  • May 15 – Tammy Allen has a tough intro act to follow! #siop14
  • May 15 – Some reminders that all press is good press from Tammy Allen #siop14 #governmenttherapist
  • May 15 – Nice to see some #bigdata approaches to quantifying our discipline #siop14
  • May 15 – I really appreciate the branding efforts for #siop14, but this video is painful :)
  • May 15 – Talking about mobile assessment in Room 310! #siop14
  • May 15 – Learning about new MTurk research and ready to see one of my students present at #siop14 !
  • May 15 – Feels like a lot of MTurk criticisms are common to all online research #siop14
  • May 15 – My day of back-to-back #siop14 presenting finally over… Time for back-to-back social events!
  • May 16 – So if #siop14 starts at 7AM due to the time change, why are evening events going to midnight?? #exhausted
  • May 16 – As predicted, midnight #siop14 events != 7:30am sessions – managing 9 though!
  • May 16 – #SIOP14 come by my symposium w/ @rnlanders Social Media in Selection: Validity, Applicant Reactions, + Legality today 11 am Room 314
  • May 16 – Ready for some master collaborating #siop14
  • May 16 – Automated tech-driven assessment is further blurring line between assessment centers and SJTs #siop14
  • May 16 – About to be wowed by @workpsy at #siop14
  • May 16 – Multimedia SJTs looking quite fancy these days… Better validity, lower adverse impact, but still tech challenges. #siop14
  • May 16 – @workpsy faces challenges turning IO psychologists into scriptwriters for video SJTs – sounds tricky! #siop14
  • May 16 – Nancy Tippins arguing human judgment shouldn’t be removed from assessment of leadership. That may be true…but only for now #siop14
  • May 16 – Pretty nice crowd for social media in selection! #siop14 pic.twitter.com/CssF4Vnr5O

  • May 16 – Still reeling from an interesting but at times frustrating #gamification roundtable at #siop14
  • May 16 – @iopsychology @IOpsychgossip Mmm, need more data. Please submit your password immediately.
  • May 16 – @Bizarre_HR @iopsychology Glad to have you!
  • May 17 – I always feel on #siop14 day 3 like we’ve only just begun and also like I’m going to collapse in a psychology haze.
  • May 17 – I wonder if @djgeoffe was warned about #siop14 being a tough crowd? :)
  • May 17 – Interesting to hear the marketing perspective on what the new logo is intended to convey, emphasis on O instead of I (uhoh) #siop14
  • May 17 – I for one welcome our new Cortina overlord #siop14
  • May 17 – Cortina proposes a reviewer bootcamp to improve reviewer quality… Bold plan, and many will hate it #siop14