
Scientific Research in Organizations

2009 June 12

“Scientific research” is a term thrown around for all sorts of reasons.  It conveys a sense of truth and authority, which makes it a useful tool for bending others to a particular viewpoint or selling them a particular product, be it for good or evil.

But what is scientific research?  If you try to pin down a specific definition, you’ll find it surprisingly difficult.

Is it, most simply, “research conducted by a scientist”?  But then, who is a scientist?  One who conducts research?  One who has been trained in research methods?  Or simply someone with a few initials (Ph.D., M.D.) after their name?

In the statistics classes that I taught at the University of Minnesota, as a wrap-up on the last day of class, I brought in several examples of how statistical concepts are misused and abused in advertising and popular media.  Perhaps the best-received of these examples was ExtenZe, a “male enhancement” medication.  ExtenZe uses the “scientific research” paradigm to sell its product – “No gimmick… just real science!” is repeated several times in their late-night commercials and is the second most-prominent text on their website (second only to the product name).

But what does this mean?  Presumably, “scientific research” has been used to validate the claims of this product.  But those scientific findings are never reported.  So all we have to go on are 1) the M.D. behind the creator’s name, and 2) the claims of “real science.”1 Is either enough?  We know nothing about the specific research methods employed: experimental design, reference population, specific sample characteristics.  Can we trust such “scientific research”?

But that’s a silly example, right?  Of course we can’t trust a company advertising its own products, you say.  But what other “scientific research” do we base decisions upon?  In real organizations, how is scientific research identified, consumed, and acted upon?  Is the wheat sorted from the chaff?

Probably not.  Gimmicks abound in OBHRM (organizational behavior and human resource management), and generally, the desire to create new competitive advantages makes new scientific research (quality or otherwise) less valuable to organizations.  The reason for this is simple: time.  Well-designed scientific research takes time, and time is what organizations don’t have.  If you want a leg up on your competitors, you need to find a unique advantage and act on it before they do.  Delays are costly.  What this ultimately means is that innovation in the ivory tower usually takes a back seat to innovation in the field – by the time ivory tower scientists discover, validate, and deploy a new HR practice, the organizations they serve have implemented two or three others sans research, and they all become difficult to differentiate.  The ivory tower invention might also be the best HR practice ever, but if all of an organization’s competitors already know about it (since ivory tower research is usually public domain), it won’t do the organization much good, financially speaking.

This means that most organizations experience a stream of new programs and technologies, one after the other.  Some come from within, some from the ivory tower, and it’s virtually impossible to tell if any particular new program actually has an effect on the organization.  There often simply isn’t time to conduct scientific research the way it should be conducted: implementing a new program, keeping other aspects of the organization as consistent as possible during data collection, waiting to collect a sufficient sample, and then analyzing the results. Scientific research is simply judged “not worth it.”
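To make that last step concrete, here is a minimal sketch (in Python, with entirely made-up numbers) of the simplest version of such an analysis: comparing an outcome metric between employees in a new program and a comparison group kept under the old conditions, using a two-sample t-test.  The data and variable names are hypothetical, and a real evaluation would involve far more design work than this.

```python
# A toy program evaluation: did a new HR program move an outcome metric?
# All numbers here are hypothetical.  A real study would also need a
# deliberate design: random (or at least defensible) assignment to
# groups, a sample size planned in advance, and a stable organization
# during data collection.
from scipy import stats

# Outcome metric (say, a 1-5 performance rating) for employees in the
# new program versus a comparison group kept under the old conditions.
program_group = [3.8, 4.1, 3.9, 4.4, 4.0, 4.2, 3.7, 4.3]
comparison_group = [3.6, 3.9, 3.5, 4.0, 3.8, 3.7, 3.4, 3.9]

# Welch's two-sample t-test: is the difference in mean outcomes larger
# than we would expect from chance alone?
t_stat, p_value = stats.ttest_ind(program_group, comparison_group,
                                  equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

Even this toy version makes the point: without a comparison group and a reasonably stable organization while the data accumulate, there is nothing for the test to say.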

And while this appears logical, it is still poor practice.  Without conducting research on the techniques an organization is implementing, there is no way to know whether they did any good or, worse, whether they were harmful.  That knowledge is worth a little inconvenience and expense.  In a small business, this isn’t as necessary, because a new-program-gone-awry can be easily identified by the owners and eliminated.  But when you have a hundred employees or more (about the point where dedicated HR staff are needed), it’s suddenly much harder to keep track of what effect any given program is actually having.

Most importantly, I think this is one of the areas where I/O psychologists can make an impact.  We have been trained in the creation and interpretation of scientific research and understand its value in a way that is unique among the various areas of organizational science.  The trick is finding a way to communicate that expertise and its value.  As a whole, our profession is good for organizations.  They just need to know that.

  1. Please note that I am unaware of any publicly available scientific validation, such as double-blind clinical trials, of ExtenZe, but that doesn’t mean they do not exist.  This post should be taken as a recommendation neither for nor against this specific product – only a commentary on their advertising practices.
2 Responses
  1. June 15, 2009

    This is absolutely true. My own experience demonstrates that people have no problem using the terms “research” or “scientific study” when nothing of the sort has ever taken place.

    In my own industry, I often hear that some scale has been subjected to “validation” when, in fact, the people stating this don’t actually know what that means. Sadly, in the absence of regulation, it is a buyer-beware situation. The only way to fix this is to make sure someone on the purchasing team has a background in organizational science. Otherwise, we might just be purchasing snake oil with a clever branding campaign.

  2. June 22, 2009

    This made me think… wouldn’t a national regulatory body run by I/O psychologists 1) serve to protect organizations from such snake oil while 2) increasing visibility of I/O dramatically?
