
Torture, the APA, and I/O Psychology

2015 August 12
by Richard N. Landers

If you haven’t heard by now, based upon several independent investigations and public accusations of wrongdoing, the American Psychological Association commissioned a law firm to produce what is referred to in the media as the Hoffman report, an independent investigation of the degree to which the APA was involved in supporting the CIA’s torture program in the wake of 9/11. The report found that numerous APA officials, including the ethics director, colluded with the CIA to ensure that the stance of the APA, as conveyed in public policy statements, provided broad support to its policies of torture. In reaction, the APA has apologized, some psychologists have resigned from the APA, and a federal investigation is looming. In general, this seems to be a failure of individuals within the APA, and not of the organization’s ethics code – instead, that code was interpreted as “loosely” as possible to create those statements. Regardless, without a doubt, public trust in the APA – and in psychologists in general – has been diminished.

SIOP has also responded, and the gist is: “torture is bad, and we are all harmed by this.” I was personally not surprised by the lack of a more specific response. The relationship between the military and psychology is a complicated one, and there is a long history of collaboration with I/O psychologists specifically. We even credit the development of assessment center methodology to the selection of spies in World War II, and the classic Army Alpha and Army Beta tests, created to better place soldiers in World War I, were a major force driving early selection research. We’ve done a lot of good via military research, and a lot of that good has then been applied to other, much smaller organizations. Our most influential and longest-lasting framework for defining job performance across all jobs comes from a massive military project called Project A.

So where should the line be drawn? This is a question I’ve been personally struggling with as I ponder pursuing military funding for some of my own projects. I don’t think the military is inherently bad – if it could be considered an evil, it is, to some degree, a necessary one – but in the back of my mind, I must now consider whether the outcomes of my research could ever be used to help kill people. As of five years ago, that’s not a question I ever thought I’d ask myself.

On one hand, the military is an organization much like any other, albeit huge. It is a highly complex network of hierarchical command relationships, cultures and subcultures, many of which are dysfunctional or at least non-optimized. By bringing psychological research to the military, as I/Os have for over a century, those relationships can be improved. Soldiers can live better lives. People can be placed into positions where they can display their talents. Training and development can waste less time. Skills needed for civilian careers can be developed through military service. And given the scope of the military, even small gains can have significant organization-wide and even nationwide benefits.

On the other hand, there is a much more straightforward problem: the military quite literally kills people to fulfill its core organizational mission. In many cases, this is done without the targets even being given an opportunity to fight back, and many of those targets are likely innocent civilians. To an extent, taking money from the military or conducting research with it implicitly endorses its tactics.

There is also the practical side. Although this is a nuanced issue in reality, a researcher who has taken military funding at any point, for any reason, instantly becomes branded as biased, lacking credibility, and bought and paid for among a fair portion of the public. The public, in general, does not care for nuance. To what extent any particular I/O psychologist cares about that is a matter of individual differences.

What the Hoffman report should highlight for us is that the question the APA leadership struggled with – whether to bend our morality for the sake of our organization’s growth, prosperity, and influence – is something I/Os have dealt with regarding the military for as long as I/O has existed. I imagine our per-member connections to the military are much higher in SIOP than in the APA broadly. So I’d like to think we’re pretty good at managing it, but do we really know?

Have any I/Os ever sacrificed their ethics to get access to data, to increase their influence, or to make their careers? Do we only not know the names of these people because I/O has, to this point, been pretty small and insignificant in the grand scheme of things? As we continue to grow, perhaps time will tell.

Did Home-Owning Millennials All Get Help from Their Parents?

2015 July 16
by Richard N. Landers

Occasionally, I see an article in the press that is just so ridiculously misleading in how it treats statistics that I think: “Wow, what a teachable moment.” This is one such moment.

The Atlantic recently ran an article entitled, “Millennials Who Are Thriving Financially Have One Thing in Common…Rich Parents.” What a showstopper, huh? If you’re a Millennial, as I am by most definitions, the phrasing alone just drives you to click and comment, right? If you have a home, you want to say, “But I did this without any rich parents!” If you don’t have a home, you want to say, “This is my parents’ fault!” And if you’re not a Millennial, you want to say, “Stop blaming your parents and get off my lawn!” No one wins, except perhaps the Atlantic, since they get all sorts of ad revenue off of our outrage and perhaps a few extra readers due to social shares caused by that outrage.

All of this would be acceptable, if a bit annoying, if the Atlantic were presenting its statistics in a clear, forthright manner – that is, if the thing that financially thriving Millennials had in common was, indeed, rich parents. Unfortunately, like most news outlets these days, there’s a lot of cherry-picking going on in this article to present a particular point of view. For your convenience, I’ve extracted the three key statistics reported in that article that inform this view and pasted them here, so that you don’t need to wade through all the filler to find them yourself. Importantly, these were themselves borrowed from a Zillow.com analysis of Federal data.

  1. “of the 46 percent of Millennials who pursued post-secondary education (that’s everything from associates degrees to doctorates), about 61 percent received some financial help with their educational expenses from their parents.”
  2. “43 percent of Millennials who got help from their parents in paying for school were also able to become homeowners”
  3. “These are the select few whose families had enough money to not only help them with college, but to then also assist them with a down payment on a home. This group accounts for more than half of the Millennial homeowners in the Zillow’s data, though they account for only 3 percent of the total Millennial population.”

So, let’s treat this like a word problem. What percentage of Millennials are we talking about here?

First, notice that we’re talking only about “Millennials who pursued post-secondary education.” 61% of 46% – which is 28% – of all Millennials received some financial help from their parents to afford such an education. That also means 100% – 61% = 39% of those pursuing post-secondary education received no support. And let’s also remember that “some help with educational expenses” could mean “my parents paid for my books one semester.”

Second, notice that 43% of those people became homeowners. It’s not that 43% of homeowners received help from their parents – rather, 43% of those who received help became homeowners. That means 43% of 28%: so 12% of all Millennials are homeowners who received support from their parents for education. But that includes neither homeowners who didn’t pursue post-secondary education nor those who did but didn’t get any help. The homeownership rate among Millennials in general is closer to 36%, meaning 36% – 12% = 24% of Millennials are homeowners who received no educational help (by definition, the non-rich-parent group) plus those who didn’t pursue post-secondary education.

Third, finally turning to our “thriving” group, “more than half” (such precision!) of Millennial homeowners had parents who helped them with both college and a down payment, and this group accounts for 3% of all Millennials. Those are some interesting numbers to choose, since none of them speak to the thesis of this piece – that “thriving” Millennials, in general, all have rich parents in common. In the narrative of the article, “rich parents” is essentially synonymous with “gave money for both college and a house,” whereas “thriving” is essentially synonymous with “homeowner.” If you’ve been following along, you’ll notice we actually have the numbers to solve our word problem. If 36% of all Millennials are homeowners and 3% of all Millennials are homeowners with rich parents, what percentage of homeowning Millennials have rich parents? 3% / 36% = 8%.
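
For anyone who wants to check the arithmetic rather than take my word for it, here is a minimal Python sketch that reproduces the word problem. The input percentages are the ones reported in the article, and the 36% homeownership rate is the approximate figure cited above, so treat the outputs as rough estimates rather than precise population values.

    # Sanity-checking the word problem with the article's percentages.
    # The 36% homeownership rate is the approximate figure cited above.
    pursued_college = 0.46     # Millennials who pursued post-secondary education
    helped_share = 0.61        # of those, the share who got parental help with school
    owner_share = 0.43         # of the helped, the share who became homeowners
    owners_overall = 0.36      # homeownership rate among all Millennials
    rich_parent_owners = 0.03  # helped with both college and a down payment

    helped = pursued_college * helped_share                  # ~28% of all Millennials
    helped_owners = helped * owner_share                     # ~12% of all Millennials
    other_owners = owners_overall - helped_owners            # ~24% of all Millennials
    rich_among_owners = rich_parent_owners / owners_overall  # ~8% of homeowners

    print(f"Got parental help with school:  {helped:.0%} of all Millennials")
    print(f"Homeowners among the helped:    {helped_owners:.0%} of all Millennials")
    print(f"Homeowners without that help:   {other_owners:.0%} of all Millennials")
    print(f"Homeowners with 'rich parents': {rich_among_owners:.0%}")

Running it prints 28%, 12%, 24%, and 8%, the same numbers as above: nothing here requires more than multiplication and division.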

I’d say it’s pretty hard for members of a group to all have something “in common” when only 8% of them share it. This article, like so many before it, throws out a deluge of percentages with the hope that you won’t take a moment to think: does this actually make any sense or is someone trying to give a statistical impression rather than statistical truth?

So what’s the real conclusion here? People with parents willing to fund higher education and home payments are more likely to get higher education and buy a home, but plenty of people without rich parents still manage to do the same thing. It is just a bit harder. SURPRISE.

Unfortunately, this article is also just the most recent example of what I’m beginning to think of as a vast media victimization of Millennials – we poor, poor souls, lost in a deluge of horrible economic conditions from which we cannot escape. I can’t even decide whether this is better or worse than the “Millennials are the root of all evil in the world” articles. Economic suckitude certainly has done its damage – but it has done that damage to everyone. Why pretend otherwise?

To their credit, Zillow.com focused upon the 3%, and never made any claims about “one thing in common.” The most likely reason: unlike the Atlantic‘s report, Zillow.com’s report was written by an economist.  It also contains this telling footnote: “The sample size associated with these five conditions – age, education, financing of education, current housing tenure and source of funding for a down payment – is small and should be interpreted with caution.”  Indeed it should!

How Will #MTurkGate Affect the Validity of Research?

2015 July 1
by Richard N. Landers

If you haven’t heard by now, #MTurkGate refers to a sudden fee increase of up to 400% that Amazon imposed on MTurk Requesters who hire MTurk Workers on Amazon Mechanical Turk. Or more specifically, MTurkGate refers to the online backlash resulting from that fee increase.

There are a few odd features of the MTurkGate fee increase. Most notably for academic researchers, the price hike is largest for people doing a high volume of small requests – for example, 200 people completing a 30-minute survey. This has, understandably, led many academic researchers to believe that Amazon is targeting them, perhaps with the mistaken impression that social scientists have lots of money to spare. Some researchers have even gone to Facebook and Twitter to demand an academic pricing tier.

Most of the commentary to this point, predictably, has focused on the increased cost. A few have discussed the potential decrease in already-low Worker pay given tight budgets. Undoubtedly, using MTurk will now be more expensive. But no commentary that I have seen has covered the impact on the validity of research conducted on it. Will this help, harm, or have no impact on the quality of research conclusions drawn from MTurk samples?

The short answer is: it depends.

I’ve written on the use of MTurk by researchers before, and in that discussion, I noted that MTurk is an acceptable source of data for only some research questions – specifically, those where the specific advantages and disadvantages of MTurk don’t harm the conclusions of the particular study you’re trying to do. For example, it’s generally not a good idea to conduct research on MTurk that relies on a surprised, naive reaction, because many MTurk users complete a huge number of studies and are likely to have seen your novel stimulus several times before. If you’re testing something stable and unchanging – like personality or attitudes – then this is less of a problem.

Whether this price hike will change that is an interesting question. In my view, several things could happen here:

  • MTurk Requesters cannot afford to run on MTurk anymore, so they stop posting tasks. Fewer tasks potentially means more naive participants. This is potentially a positive for our research conclusions.
  • If fewer tasks are posted, veteran MTurk Workers (such as Master Workers) may head elsewhere. Again, this is a surprising positive. If the veterans on MTurk leave for other services, there will be a larger proportion of naive Workers available – although there will potentially be fewer Workers in general. Studies may take a little longer, but that’s not a problem in and of itself.
  • If veteran Workers head elsewhere, the popularity of MTurk may follow. This is where things get dicey. If veteran Workers lead a charge to a different, competing website with better pricing, those naive Workers may leave too. Fewer Workers in general means that the population of MTurk becomes highly specialized, which is a distinct problem for researchers. Right now, the characteristics of MTurk Workers are pretty broad – people with all sorts of jobs, backgrounds, areas of expertise, and so on. It is a melting pot of random people. If specific people are encouraged to leave, that diversity could be lost, and if that diversity is important to your research questions, you should look elsewhere for data.
  • If the highest-performing Workers leave MTurk, remaining Workers may be more desperate for work. Those Workers who stick around may not be the Workers you want. With an even lower pay rate – and MTurk pay is already quite low – we may end up with a pool of desperate people, quite unlike any other population (in a statistical sense) in whom we might be interested.
  • If the proportion of experienced Workers to inexperienced Workers doesn’t change, we probably have nothing to worry about. If experienced and inexperienced Workers leave in equal proportions, we effectively maintain the status quo. Data collection may go a bit more slowly if fewer people are on the service in general, but our results will not be biased by MTurkGate – at least, not any more than they already are.

Importantly, none of these are issues with crowdsourced data in general. They are problems with MTurk specifically. Remember to carefully match your data collection strategy to your research question. If another crowdsourced data source would be better, use it. If it would be better to abandon crowdsourcing altogether – remember that you have that option. There are even research questions for which Psychology department undergraduates are more generalizable, more predictable, and in general preferable to MTurk Workers. But that’s a question you need to evaluate for yourself, for your own research questions. In convenience sampling, there are no easy answers.