
Can I Use Mechanical Turk (MTurk) for a Research Study?

2014 November 13
by Richard N. Landers

Amazon Mechanical Turk (MTurk) has quickly become a highly visible source of participants for human subjects research. Psychologists, in particular, have begun to use MTurk as a major source of quick, cheap data. Studies with hundreds or thousands of participants can be completed in mere days, or sometimes even a few hours. When it takes a full semester to get a couple hundred undergraduate research participants, the attractiveness of MTurk is obvious. But is it safe to use MTurk this way? Won’t our research be biased toward MTurk workers?

New work by Landers and Behrend[1] explores the suitability of MTurk and other convenience sampling techniques for research purposes within the field of industrial/organizational psychology (I/O). I/O is a particularly interesting area in which to examine this problem because I/O research focuses on employee behavior, and the field has long grappled with questions of sampling, such as: under what conditions does sampling undergraduates lead to valid conclusions about employees?

Traditionally, there are some very extreme opinions here. Because I/O research is concerned with organizations, some researchers say the only valid research is research conducted within real organizations. Unfortunately, this preference is rooted largely in tradition and a superficial understanding of sampling.

Sampling in the social sciences works by choosing a population you’re interested in and then choosing people at random from that population.  For example, you might say, “I’m interested in Walmart employees” (that’s your population), so you send letters to every Walmart employee asking them to respond to a survey.  This is called probability sampling.
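
To make the idea concrete: a probability sample is just a random draw from a complete list of the population, called a sampling frame. Here is a minimal sketch in Python; the frame, seed, and sample size are made up for illustration and are not from the article.

    import random

    # Toy sampling frame: in practice this would be a complete roster of the
    # population (e.g., every Walmart employee), which is rarely available.
    sampling_frame = [f"employee_{i}" for i in range(1, 10001)]

    random.seed(42)  # fixed seed so this illustration is reproducible
    sample = random.sample(sampling_frame, k=300)  # every member has an equal chance
    print(sample[:5])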

The key issue is that probability sampling is effectively impossible in the real world for most psychological questions (including those in I/O). Modern social science research is generally concerned with global relationships between variables. For example, I might want to know, “In general, are people that are highly satisfied with their jobs also high performers?”  To sample appropriately, I would need to randomly select employees from every organization on Earth.

Having access to a convenient organization does not solve this problem. Employees of a particular company are a convenience sample just as college students and MTurk workers are. What researchers should pay attention to is not the simple fact that these are convenience samples, but what that convenience does to the sample.

For example, if we use a convenient organization, we’re also pulling with it all the hiring procedures that the company has ever used. We’re grabbing organizational culture. We’re grabbing attrition practices. We’re grabbing all sorts of other sample characteristics that are part of this organization. As long as none of our research questions have anything to do with all these extra characteristics we’ve grabbed, there’s no problem. The use of convenience sampling in such a case will introduce unsystematic error – in other words, it doesn’t bias our results and instead just adds noise.

The problems occur only when what we grab is related to our research questions. For example, what if we want to know the relationship between personality and job performance? If our target organization hired on the basis of personality, any statistics we calculate from that organization’s data will potentially be biased. Fortunately, there are ways to address this statistically (for the statistically inclined: corrections for range restriction), but you must consider all of this before you conduct your study.
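
For readers who want to see what such a correction looks like, here is a minimal sketch of one standard approach, Thorndike’s Case II correction for direct range restriction. The formula is textbook psychometrics, but the function name and the example numbers are hypothetical and not taken from the article.

    import math

    def correct_for_range_restriction(r_restricted, sd_unrestricted, sd_restricted):
        """Thorndike Case II correction for direct range restriction.

        r_restricted    : correlation observed in the restricted (hired) sample
        sd_unrestricted : SD of the predictor in the target population
        sd_restricted   : SD of the predictor in the restricted sample
        """
        u = sd_unrestricted / sd_restricted  # degree of range restriction
        return (u * r_restricted) / math.sqrt(1 + r_restricted ** 2 * (u ** 2 - 1))

    # Hypothetical numbers: personality scores in the hiring organization have
    # SD = 6 versus a population SD = 10, and we observe r = .20.
    print(round(correct_for_range_restriction(0.20, 10, 6), 3))  # about 0.322

The more the organization restricted the range of the predictor when it hired, the more the observed correlation understates the population relationship, and the larger the correction.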

MTurk brings the same concerns. People use MTurk for many reasons. Maybe they need a little extra money. Maybe they’re just bored. As long as the reasons people are on MTurk and the differences between MTurkers and your target population aren’t related to your research question, there’s no problem. MTurk away! But you need to explicitly set aside some time to reason through it.  If you’re interested in investigating how people respond to cash payments, MTurk probably isn’t a good choice (at least as long as MTurk workers aren’t your population!).

As the article states:

  1. Don’t assume organizational sampling is a gold standard sampling strategy.  This applies to whatever your field’s “favorite” sampling strategy happens to be.
  2. Don’t automatically condemn college students, online panels, or crowdsourced samples.  Each of these potentially brings value.
  3. Don’t assume that “difficult to collect” is synonymous with “good data.”  It’s so natural to assume “difficult = good” but that’s a lazy mental shortcut.
  4. Do consider the specific merits and drawbacks of each convenience sampling approach.  Unless you’ve managed to get a national or worldwide probability sample, whatever you’re using is probably a convenience sample. Think through it carefully: why are people in this sample? Does it have anything to do with what I’m doing in my study?
  5. Do incorporate recommended data integrity practices regardless of sampling strategy. Poor survey-takers and lazy respondents exist in all samples. Deal with these people before your study even begins by incorporating appropriate detection questions and planning your statistical analyses a priori (a sketch follows this list).
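
As a concrete illustration of point 5, here is a minimal a priori screening sketch in Python using pandas. The file name, column names, and cutoffs are hypothetical; the point is that the screening rules are written down before the data arrive.

    import pandas as pd

    # Hypothetical survey export: one row per respondent.
    df = pd.read_csv("survey_responses.csv")

    # Rule 1: instructed-response item ("Please select '4' for this question").
    passed_attention_check = df["attention_check_item"] == 4

    # Rule 2: minimum plausible completion time, decided before data collection.
    took_enough_time = df["completion_seconds"] >= 120

    # Keep only respondents who meet the rules planned a priori.
    clean = df[passed_attention_check & took_enough_time]
    print(f"Retained {len(clean)} of {len(df)} respondents")
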
Footnotes:
  1. Landers, R. N., & Behrend, T. S. (2015). An inconvenient truth: Arbitrary distinctions between organizational, Mechanical Turk, and other convenience samples. Industrial and Organizational Psychology, 8(2).

How to Improve Internet Comments

2014 October 29
by Richard N. Landers

The most promising and yet most disappointing aspect of the Internet is the written comments left by the general public. On one hand, comment sections are a great democratization of personal opinion. With public commenting, anyone can make their opinion known to the world on whatever topic interests them. On the other hand, comment sections give voice to absolutely any nutjob with Internet access. As it turns out, and as is evident to anyone who has ever scrolled down on any video anywhere on YouTube, comment sections often devolve into base attacks, non sequiturs, and general insanity.

So how do we deal with that problem? In an upcoming paper in the Journal of Computer-Mediated Communication, Stroud, Scacco, Muddiman, and Curry[1] explore this issue on news websites by randomly assigning 70 political posts made by a local TV station to one of three conditions:

  • Condition 1: An unidentified staff member from the TV station participated in the discussion.
  • Condition 2: A political reporter participated in the discussion.
  • Condition 3: The discussion was permitted to run unmonitored.

A total of 2,703 comments were made on these posts. And as it turns out, participation in your comment section can change the tone. Findings included:

  • Reporter participation improved the deliberative tone of discussion, decreased uncivil comments (17% reduction in probability), and increased the degree to which commenters provided supporting evidence for their points (15% increase in probability).
  • Reporter participation also appeared to improve the probability commenters left comments relevant to the post and asked genuine questions, but these effects were much smaller (and not statistically significant, given the sample size).
  • Presence of the unidentified staff member had no beneficial effect compared to unmonitored discussion. Troublingly, uncivil comments actually went up and genuine questions went down when staff members were present.
  • The specific type of prompt posted right before the discussion began had a smaller effect on discussion quality. Specifically, open-ended questions (e.g., “What do you think about x?”) produced slightly more genuine questions and supporting evidence than closed questions, but the changes in probability were only about 10%.

The take-home here is that an “official”, knowledgeable, and active participant in the comment section did improve the quality of discussion.  Considering the link between discussion quality and time spent on websites, this has important implications for the use of discussion forums in contexts other than that of TV stations.

And although I don’t think we’re going to fix YouTube any time soon, this is definitely a step in the right direction.

Footnotes:
  1. Stroud, N., Scacco, J., Muddiman, A., & Curry, A. (2014). Changing deliberative norms on news organizations’ Facebook sites. Journal of Computer-Mediated Communication. DOI: 10.1111/jcc4.12104

How to Write an APA Style Research Paper Introduction [INFOGRAPHIC]

2014 October 23
by Richard N. Landers

Writing APA-style papers is a tricky business. So to complement my discussion of writing publishable scientific articles, I’ve created an infographic showing some of the major ideas you should consider when writing the introduction to an APA-style research paper. This approach will work well in most social scientific fields, especially Psychology. If you’re writing a short paper, you might only have the first section (the “intro to the intro”) and something like either Subheading 2 or 3. It all depends on the particular research question you’re asking!

The key to writing scientific papers is that you throw out most of what you know about writing stories. Scientific research papers are based upon logical, organized arguments, and scientists expect to see a certain structure of ideas when they read papers. So if you want your paper to be read, you need to meet those expectations.

[Infographic: How to Write an APA Style Research Paper Introduction]
