
Stats and Methods Urban Legend 2: Control Variables Improve Your Study

2011 April 27

In what I can only assume is a special issue of Organizational Research Methods, several researchers discuss statistical and methodological myths and urban legends commonly seen in the organizational sciences (for more introduction, see the first article in the series). Second in the exploration: Spector and Brannick1 write “Methodological Urban Legends: The Misuse of Statistical Control Variables.”

Spector and Brannick criticize the tendency of researchers conducting correlational research to blindly include “control variables” in an attempt to get better estimates of population correlations, regression slopes, and other statistics. Such effort is typically an attempt to improve methodological rigor when true experimentation isn’t possible, feasible, or convenient. Unfortunately, the belief that controls accomplish this is a methodological urban legend. And yet, shockingly, the authors report a study finding a mean of 7.7 control variables in macro-org research and 3.7 in micro-org research.

I will let the authors explain the problem:

Rather than being included on the basis of theory, control variables are often entered with limited (or even no) comment, as if the controls have somehow, almost magically, purified the results, revealing the true relationships among underlying constructs of interest that were distorted by the action of the control variables. This is assumed with often little concern about the existence and nature of mechanisms linking control variables and the variables of interest. Unfortunately, the nature of such mechanisms is critical to determining what inclusion of controls actually does to an analysis and to conclusions based on that analysis.

The authors call the belief that blindly including control variables yields more accurate results the purification principle. The problem with the purification principle is that it is false; the inclusion of statistical controls does not purify measurement. Instead, it simply removes the covariance between the control variable and the other variables from later analyses, even though that covariance may be meaningful to the researcher’s hypotheses.
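
To see concretely what that removal does, here is a minimal sketch in Python (my own illustration, not from the article; the variables and effect sizes are hypothetical). A control variable c contributes to both x and y, and “controlling” for c amounts to correlating the residuals of x and y after regressing each on c:

    # Minimal sketch (hypothetical data): statistical "control" removes the
    # covariance the control variable shares with the focal variables.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    c = rng.normal(size=n)            # hypothetical control variable
    x = 0.6 * c + rng.normal(size=n)  # focal predictor, partly driven by c
    y = 0.6 * c + rng.normal(size=n)  # focal outcome, partly driven by c

    def residualize(v, covariate):
        """Return the part of v that is linearly unrelated to the covariate."""
        slope, intercept = np.polyfit(covariate, v, 1)
        return v - (slope * covariate + intercept)

    r_xy = np.corrcoef(x, y)[0, 1]
    r_xy_given_c = np.corrcoef(residualize(x, c), residualize(y, c))[0, 1]

    print(f"zero-order r(x, y)  = {r_xy:.2f}")          # about .26, all of it shared via c
    print(f"partial r(x, y | c) = {r_xy_given_c:.2f}")  # about .00, that covariance is gone

In this toy case c is a genuine common cause, so partialling it out is defensible; the trouble starts when the causal story runs the other way. The authors give this illustrative example of how the same mechanics can mislead: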

A supervisor’s liking for a person might inflate the supervisor’s rating of that person’s job performance across multiple dimensions. Correlations among those dimensions might well be influenced by liking, which in effect has contaminated ratings of performance. Thus, researchers might be tempted to control for liking when evaluating relationships among rating dimensions. Note, however, that whether it is reasonable to control liking in this instance depends on whether liking is in fact distorting observed relationships. If it is not (perhaps, liking is the result of good performance), treating liking as a control will lead to erroneous conclusions. This is because removing variance attributable to a control variable (liking) that is caused by a variable of interest (performance) will remove the effect you wish to study (relationships among performance dimensions) before testing the effect you wish to study, or “throwing out the baby with the bathwater.”
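
A rough simulation of that scenario (again my own sketch, not the authors’ data; names and numbers are hypothetical): both rating dimensions reflect the same true performance, and liking is itself a result of that performance, so partialling out liking strips away much of the real relationship among the dimensions:

    # Sketch of the "baby with the bathwater" case (hypothetical data): liking is
    # caused by performance, so controlling for it removes the effect under study.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000

    true_perf = rng.normal(size=n)                 # latent true performance
    dim1 = true_perf + 0.5 * rng.normal(size=n)    # rating, dimension 1
    dim2 = true_perf + 0.5 * rng.normal(size=n)    # rating, dimension 2
    liking = true_perf + 0.5 * rng.normal(size=n)  # liking, a *result* of performance

    def residualize(v, covariate):
        slope, intercept = np.polyfit(covariate, v, 1)
        return v - (slope * covariate + intercept)

    r = np.corrcoef(dim1, dim2)[0, 1]
    r_ctrl = np.corrcoef(residualize(dim1, liking), residualize(dim2, liking))[0, 1]

    print(f"r(dim1, dim2)          = {r:.2f}")       # about .80, the real relationship
    print(f"r(dim1, dim2 | liking) = {r_ctrl:.2f}")  # about .44, much of it thrown out

Nothing in the data tells you which way the causal arrow points; only theory about how liking and performance are related can justify (or rule out) the control.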

So how should one actually use control variables?  Two recommendations are given:

  1. Use specific, well-explored theory to drive the inclusion of controls, which goes beyond simple statements like, “previous researchers used this control” or “this variable is correlated with my outcomes.” If you believe that a specific relationship may be contaminating your results, this may be justification for a control, but you should explicitly state why and defend this decision when describing your methods. Follow up on this discussion; test hypotheses about control variables.
  2. Don’t reflexively control for demographic variables, e.g., race, gender, sex, age. For example, if you find a gender difference in your outcome of interest, controlling for that variable may hide real variance in the outcome that could be explained by whatever real phenomenon is causing that difference. In my own research area, it is not uncommon to control for age when examining the effects of technology on outcomes of interest (e.g., learning). But age does not itself cause trouble with technology; instead, underlying differences like familiarity or comfort with technology (or other characteristics) may be driving those differences. Simply controlling for age not only removes “real” variance that should remain in the equation but also camouflages a real relationship of interest (the sketch after this list illustrates the point).
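
The age example can be sketched the same way (a hypothetical simulation, not data from any study): suppose comfort with technology, not age itself, drives the learning outcome, and age is merely correlated with comfort. Age then predicts learning only as a proxy, and partialling it out discards variance that belongs to the real driver:

    # Sketch of the demographics point (hypothetical data): age predicts learning
    # only because it is a proxy for technology comfort.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 10_000

    age = rng.normal(size=n)                   # standardized age
    comfort = -age + 0.5 * rng.normal(size=n)  # tech comfort, strongly tied to age
    learning = comfort + rng.normal(size=n)    # outcome driven by comfort, not age

    def residualize(v, covariate):
        slope, intercept = np.polyfit(covariate, v, 1)
        return v - (slope * covariate + intercept)

    def partial_r(a, b, covariate):
        return np.corrcoef(residualize(a, covariate), residualize(b, covariate))[0, 1]

    print(f"r(age, learning)           = {np.corrcoef(age, learning)[0, 1]:.2f}")     # about -.67
    print(f"r(age, learning | comfort) = {partial_r(age, learning, comfort):.2f}")    # about .00: age is only a proxy
    print(f"r(comfort, learning)       = {np.corrcoef(comfort, learning)[0, 1]:.2f}") # about .75
    print(f"r(comfort, learning | age) = {partial_r(comfort, learning, age):.2f}")    # about .45: real effect camouflaged

If comfort were unmeasured (the usual situation), controlling for age would quietly remove most of the variance that comfort explains, which is exactly the “camouflage” problem described above.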

So generally, Spector and Brannick are calling for an organizational science based on iterative theory building, progressively testing alternative hypotheses and narrowing in on answers bit by bit. This approach is closer to what is employed in the natural sciences; instead of testing one-off theories, researchers build on prior work, approaching a problem from as many perspectives as possible to narrow in on real results.

My only concern is this: since one-off studies with revealing and/or controversial results are the ones most often rewarded with recognition, is this an approach that organizational researchers will really take?

  1. Spector, P., & Brannick, M. (2010). Methodological urban legends: The misuse of statistical control variables. Organizational Research Methods, 14(2), 287-305. DOI: 10.1177/1094428110369842
One Response
  1. April 27, 2011

    I’ve just been reading this special issue of ORM. It’s a good one. Thanks for summarising the article on covariates for the blogosphere.

    I often encounter this problem of inappropriate use of covariates.
    I’d add a few other issues:
    * not justifying causal mechanisms in a mediational model,
    * not justifying causal mechanisms in a moderating hypothesis, and
    * not justifying causal claims for the direction of a bivariate relationship in an observational study.

    A common thread across all these issues, including the covariate one, seems to be inadequate reflection on causal mechanisms.
