
Do Interactive Experiences Aid Employee Recruitment?

2014 October 15
by Richard N. Landers

Many modern organizations try to compete for top talent by adding fancy, interactive experiences to their recruitment process – think of something like a virtual tour.  Such interactive experiences are expensive, but their creators hope that they will attract a higher class of recruit.  New research from Badger, Kaminsky and Behrend[1] in the Journal of Managerial Psychology explores the impact of such media on some recruitment-related outcomes, concluding that highly demanding interactive recruitment activities are likely to harm how well recruits remember important information and do not provide other benefits.

To understand why this might be the case, you must first understand that there are two competing arguments about interactive experiences in recruitment.  On one hand, media richness theory suggests that richer communication techniques lead to more accurate beliefs – recruits understand and remember more about organizations when recruited through richer means.  Often, this is tested by comparing face-to-face recruitment with computer-mediated recruitment; in general, face-to-face is better.  If media richness is really the mechanism behind such gains, we would also expect richer media to outperform leaner media – for example, 3D virtual worlds should produce better recruitment outcomes than an ordinary website.

On the other hand, cognitive load theory suggests that humans have a finite amount of cognitive resources from which they must draw.  If overdrawn – for example by having too many mental demands simultaneously – then we’d expect recruitment outcomes to be worse with richer environments.  If cognitive load theory is correct, that same 3D virtual world should produce worse recruitment outcomes than an ordinary website.

The researchers furthermore distinguished between the types of outcomes each theory speaks best to.  Media richness theory speaks more to affective reactions – richer media lead to a feeling of greater affiliation and understanding of company culture.  Cognitive load theory speaks more to cognitive outcomes – higher loads lead to less information remembered.

To test these ideas, the researchers conducted a quasi-experiment in which 471 MTurk workers experienced either a traditional website or an interactive recruitment experience in the 3D virtual world Second Life.

Using path analysis, the researchers concluded from their quasi-experiment that the richer media environment did indeed reduce how well participants remembered the recruitment message.  They also found that this effect was mediated by the experience of increased cognitive load, supporting cognitive load theory.  There was no effect on culture-related information.
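Mediation of this kind can be illustrated with a toy simulation. The sketch below is not the authors' model or data – the numbers and effect sizes are made up – but it shows how an indirect effect (condition → cognitive load → recall) is estimated from standardized path coefficients:

```python
import random
from statistics import mean, pstdev

random.seed(1)

def pearson(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

# Simulated data: the richer medium raises cognitive load, which lowers recall.
n = 2000
condition = [random.randint(0, 1) for _ in range(n)]      # 0 = website, 1 = virtual world
load = [0.6 * c + random.gauss(0, 1) for c in condition]  # a path
recall = [-0.5 * m + random.gauss(0, 1) for m in load]    # b path

r_xm = pearson(condition, load)
r_xy = pearson(condition, recall)
r_my = pearson(load, recall)

# Standardized path coefficients: regression of recall on condition and load,
# expressed in terms of the pairwise correlations.
a = r_xm                                           # condition -> load
b = (r_my - r_xy * r_xm) / (1 - r_xm ** 2)         # load -> recall, controlling condition
indirect = a * b                                   # the mediated effect

print(f"a = {a:.2f}, b = {b:.2f}, indirect (a*b) = {indirect:.2f}")
```

The negative indirect effect mirrors the paper's finding: the richer condition increases load (positive a), and load in turn depresses recall (negative b).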

Overall, this supports cognitive load theory as the more relevant theory for studying the role of technology in recruitment outcomes, at least in regards to Second Life.  Although the experience of Second Life was certainly richer, it was also much more demanding, which overrode any potential benefits.  What this study does not speak to are innovative interactive experiences that are a little less demanding – like online recruitment games.  Such games, or other such experiences, may strike a better balance between cognitive load and media richness, but are left for future research.

  1. Badger, J.M., Kaminsky, S.E., & Behrend, T.S. (2014). Media richness and information acquisition in internet recruitment. Journal of Managerial Psychology, 29(7), 866-883. DOI: 10.1108/JMP-05-2012-0155

Don’t Take Hiring Advice from Google

2014 October 8
by Richard N. Landers

A number of articles have been appearing in my news feeds lately from representatives of Google talking about hiring interviews. Specifically, Mashable picked up a Q&A exchange posted on Quora in which someone asked: “Is there a link between job interview performance and job performance?”

As any industrial/organizational (I/O) psychologist knows, the answer is, “It depends.” However, an “ex-Googler” and author of The Google Resume provided a different perspective: “No one really knows, but it’s very reasonable to assume that there is a link. However, the link might not be as strong as many employers would like to believe.”

The New York Times also recently posted a provocatively titled article, In Head-Hunting, Big Data May Not Be Such a Big Deal, in which Google's Senior Vice President of People Operations was interviewed about interviews. He noted:

Years ago, we did a study to determine whether anyone at Google is particularly good at hiring. We looked at tens of thousands of interviews, and everyone who had done the interviews and what they scored the candidate, and how that person ultimately performed in their job. We found zero relationship. It’s a complete random mess, except for one guy who was highly predictive because he only interviewed people for a very specialized area, where he happened to be the world’s leading expert.

You might ask from this, quite reasonably, “If the evidence is so strong that interviews are poor, why not get rid of interviews?” My question is different: “Why should we let Google drive hiring practices across industries?”

I/O psychology, if you aren’t familiar with it, involves studying and applying psychological science to create smarter workplaces. I/O systematically examines questions like the one asked by this Quora user, in order to provide recommendations to organizations around the world. This science is conducted by a combination of academic and organizational researchers, working in concert to help organizations and their employees. It thus includes far more perspective than internal studies within an international technology corporation can provide, which is important to remember here.

Current research in I/O, conducted by scores of researchers across many organizations and academic departments all working together, in fact contradicts several of the points made by both Google and the ex-Googler.  Let me walk you through a few, starting with that longer quote above.

“It’s a complete random mess” makes for a great out-of-context snippet, but the reality is much more complex than that, for two reasons.

First, unstructured interviews – those interviews where interviewers ask whatever questions they want to ask – are in fact terrible. If your organization uses these, it should stop right now. But structured interviews, in which every applicant receives the same questions as every other applicant, fare far better. There are many reasons that unstructured interviews are terrible, but they generally involve realizing that interviewers – and people in general – are highly biased. When we can ask any question we want to evaluate someone, we ask questions that get us the outcome we want. If we like the person, we ask easy, friendly questions. If we don’t like the person, we ask difficult, intimidating questions. That reduces the effectiveness of interviews dramatically. But standardizing the questions helps fix this problem.

Second, we’re talking about Google. By the time that Google interviews a job applicant, they have already been through several rounds of evaluations: test screens, resume screens, educational screens, and so on. Google does not interview every candidate that walks through the front door. If you’re running a small business, are you getting a steady stream of Stanford grads with 4.0 GPAs? I doubt you are, and if you aren’t, interviews might still help your selection system. Google’s advice is not good advice to follow.

Some interviewers are harsher graders than others. A hiring committee can see that your 3.2 was a very good score from the guy who gives an average score of 2.3, but a mediocre score from the guy who gives an average score of 3.0. The study likely doesn’t take this into account.

That’s true; however, it doesn’t change anything. Reducing the impact of inter-interviewer differences is part of standardization. In an organization as large as Google, standardizing your interviewers completely is impossible. In your business, where you’re hiring fewer than 100 people per year, you can have the same two interviewers talk to every person that gets an interview. Because you’re not Google, you don’t face this problem.
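The harsh-versus-lenient-grader problem also has a simple statistical fix that large organizations routinely use: convert each interviewer's raw ratings to z-scores within that interviewer's own distribution. The sketch below uses made-up rating histories to show how the same raw 3.2 means very different things from each interviewer:

```python
from statistics import mean, stdev

# Hypothetical rating histories for two interviewers with different baselines.
ratings = {
    "harsh_grader":   [2.0, 2.3, 2.1, 2.5, 2.6, 2.4],   # averages about 2.3
    "lenient_grader": [2.8, 3.0, 3.2, 2.9, 3.1, 3.0],   # averages about 3.0
}

def z_score(interviewer, raw):
    """Express a raw rating relative to that interviewer's own distribution."""
    history = ratings[interviewer]
    return (raw - mean(history)) / stdev(history)

# The same raw 3.2 is exceptional from one interviewer and routine from the other.
print(f"harsh grader's 3.2:   z = {z_score('harsh_grader', 3.2):.2f}")
print(f"lenient grader's 3.2: z = {z_score('lenient_grader', 3.2):.2f}")
```

A hiring committee comparing z-scores instead of raw scores is, in effect, doing the adjustment the quoted objection worries the study ignored.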

The interview score is merely a quantitative measure of your performance, but it’s not the whole picture of your performance. It’s possible that your qualitative interview performance (you know, all those words your interviewer says about you) is quite correlated with job performance, but the quantitative measure is less correlated. The people actually deciding to hire you or not are judging you based on the qualitative measure.

For the reasons I outline above, people are terrible at making judgments. But translating those judgments into numbers actually helps people be more fair. When judging “qualitative performance”, an interviewer is more likely to convince him/herself of things that just aren’t true. “This candidate didn’t answer the questions very well, but I just have a good feeling about him.” “This candidate had all the right answers, but there was something I just didn’t like.” Numbers are your friends. Don’t be afraid.

People who are hired basically all have average scores in a very small range. I would suspect 90% of hired candidates have between a 3.1 to 3.3 average. Thus, when you say there’s no correlation between interview performance and job performance, we’re really saying that a 3.1 candidate doesn’t do much worse in their job than a 3.3 candidate. That’s a tiny range.

There are two possible problems here. One, this might be a problem with the system used to determine interview scores, not interviews in general. There are a variety of techniques to improve judgment related to numeric scales. One approach is anchoring – giving specific behavioral examples related to each point on the scale. For example, the interview designer might assign the behavior “Made poor eye contact” to a 3 so that everyone has the same general idea of what a “3” looks like. If your scale is bad, the scores from that scale won’t be predictive of performance.

Two, this may be a symptom of the same “Google problem” I described before. People that make it to the interview stage at Google are already highly qualified. Maybe Google really doesn’t need interviews. But that doesn’t mean interviews are useless for everyone.
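There is also a purely statistical angle on the “tiny range” in the quoted objection: selection researchers call it range restriction, and it alone can flatten an observed correlation even when a real relationship exists. A hedged simulation with made-up numbers (not Google's data):

```python
import random
from statistics import mean, pstdev

random.seed(42)

def pearson(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

# Simulate an applicant pool whose interview score genuinely predicts performance.
n = 20000
interview = [random.gauss(3.0, 0.6) for _ in range(n)]
performance = [0.5 * s + random.gauss(0, 0.5) for s in interview]

# Full applicant pool: healthy correlation.
r_full = pearson(interview, performance)

# Hired subset only: scores squeezed into the narrow 3.1-3.3 band from the quote.
hired = [(s, p) for s, p in zip(interview, performance) if 3.1 <= s <= 3.3]
r_hired = pearson([s for s, _ in hired], [p for _, p in hired])

print(f"full pool r = {r_full:.2f}, hired-only r = {r_hired:.2f}")
```

Looking only at people who were hired (a narrow score band) can make a genuinely predictive interview look like “a complete random mess.”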

Ultimately, it’s almost impossible to truly understand if there’s a correlation between interview performance and job performance. No company is willing to hire at random to test this.

I’ll try to avoid getting too technical here, but 1) hiring at random is not necessary to make such a conclusion and 2) this is a red herring. In no single organization do we need to answer, “do interviews predict performance in general, regardless of context?” Instead, we only need to know if interviews add more to the hiring process versus other parts of the hiring system currently in use. For example, right now, you use resume screens, interviews, and some personality tests. With the right data collected, you can easily determine how well these parts work together (or don’t). That’s what organizations need to know, and Google’s hiring practices don’t help you answer that question for your organization.
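Statistically, “do interviews add more?” is a question of incremental validity: how much does R² improve when interview scores join the predictors already in use? Here is a minimal sketch with simulated data and two predictors, using closed-form correlation formulas (a real analysis would use proper regression software and your own organization's data):

```python
import random
from statistics import mean, pstdev

random.seed(7)

def pearson(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

# Simulated hiring data: performance depends on both a test score and an
# interview score, and the two predictors overlap somewhat.
n = 5000
test = [random.gauss(0, 1) for _ in range(n)]
interview = [0.4 * t + random.gauss(0, 1) for t in test]
performance = [0.5 * t + 0.3 * i + random.gauss(0, 1)
               for t, i in zip(test, interview)]

r_yt = pearson(performance, test)
r_yi = pearson(performance, interview)
r_ti = pearson(test, interview)

# R-squared using the test alone, then test + interview together
# (two-predictor R-squared from pairwise correlations).
r2_test_only = r_yt ** 2
r2_both = (r_yt ** 2 + r_yi ** 2 - 2 * r_yt * r_yi * r_ti) / (1 - r_ti ** 2)
delta_r2 = r2_both - r2_test_only

print(f"R2 test only = {r2_test_only:.3f}, with interview = {r2_both:.3f}, "
      f"increment = {delta_r2:.3f}")
```

A positive increment means the interview explains performance variance the existing predictors miss – which is exactly the question each organization needs to answer for itself, with its own data.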

Your organization is not Google. The problems you face, and the appropriate solutions to those problems, are not the same as Google’s. Although Google’s stories are interesting, interesting stories are not something on which to build your employee selection system. You need science for a smarter workplace.

The key to all of this is that you need to make the decisions that are right for your organization. That involves conducting research within your own organization to answer the questions you find important and meaningful. If you are in charge of or have influence over human resources at your organization, you have the power to answer such questions yourself. You can develop, deploy and test a new hiring system yourself, and that’s the only way to “really know.” Don’t take hiring advice from Google, because Google’s success or lack of success using interviews, or anything else, often has little to do with how successful it will be for you.


Gamifying Surveys to Increase Completion Rate and Data Quality

2014 October 2
by Richard N. Landers

One of the biggest challenges for research involving surveys is maintaining a high rate of completion and compliance with survey requirements. First, we want a reasonably representative sample of whomever we send the survey to. Second, we want those that do complete the survey to do so honestly and thoughtfully. One approach that researchers have taken to improve these outcomes is to gamify their surveys. But does gamification actually improve data quality?

In an empirical study on this topic, Mavletova[1] examined the impact of gamification on survey completion rate and data quality among 1050 Russian children and adolescents aged 7 to 15. In her study, Mavletova compared three conditions: a traditional text-only survey; a visually interesting survey incorporating graphics, background colors, interactive slider bars, and Adobe Flash-based responses; and a gamified survey.

To gamify the survey, Mavletova first developed four guidelines for effective gamified assessment: 1) narrative, 2) rules and goals, 3) challenges, and 4) rewards. To realize this vision, students in the gamified condition set their name, specified an avatar, and followed a story about traveling in the Antarctic among friendly penguins. In the story, respondents were asked to tell the penguins about themselves in order to travel home. They also played mini-games between sections of the survey. Regular feedback was also provided indicating progress through the survey.

So did all this extra effort pay off? Here’s what happened:

  • Total respondents (N = 1050) to the text, visual, and gamified surveys on desktops and laptops did not differ in the total number of drop-outs.
  • On mobile devices (N = 136), fewer respondents to the gamified surveys dropped out than those completing the visual surveys, who dropped out less than those completing the text-based surveys.
  • Participants took the least amount of time on the text version (13.9 min), more on the visual (15.2 min), and the most on the gamified version (19.4 min). However, the gamified version was also substantially longer, due to the extra content.
  • When asked about the amount of time they spent, participants in all three conditions reported approximately the same subjective experience of time.
  • Respondents found the gamified survey easier to complete than either the visual or text surveys (however, this was analyzed by comparing response rates to “Strongly Agree” on the scale across conditions, which is a strange way to do it).
  • More respondents were interested in further surveys when completing either the visual or gamified surveys in comparison to the text-based surveys (although this was determined with the strange analytic approach described above).
  • The highest non-response rate was found in the gamified survey, lower in the visual survey, and the lowest in the text-based survey.
  • When omitting Flash-based questions (which required additional technology to view), there was no difference in non-response rate between conditions.
  • There were no differences in socially desirable responding by condition.
  • Straight-line responses (answering all “c” or all “b”, for example) were more common in the text survey (11.4%) than in both the visual (2.8%) and gamified (3.2%) survey.
  • Extreme responses (answering “a” or “e” on a five point scale) did not differ by condition.
  • Middle responses (answering “c”s) were more common in the text survey than in either of the other two conditions.
  • There were no differences on open-ended questions in regards to length of responses, number of examples provided, or the distribution of that number.
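Several of the data-quality indices above – straight-lining, extreme responding, and middle responding – are easy to compute from a raw response matrix. A small illustration with made-up 5-point Likert data (this is not Mavletova's data or code, just a sketch of what the indices measure):

```python
# Made-up 5-point Likert responses (1-5), one row per respondent.
responses = [
    [3, 3, 3, 3, 3, 3],  # straight-liner (all middle answers)
    [1, 5, 2, 4, 3, 5],  # varied responder
    [5, 5, 5, 5, 5, 5],  # straight-liner (all extreme answers)
    [2, 3, 4, 2, 3, 1],  # varied responder
]

def is_straight_line(row):
    """All answers identical, e.g. answering 'c' to every item."""
    return len(set(row)) == 1

def extreme_rate(row):
    """Share of answers at the scale endpoints (1 or 5)."""
    return sum(r in (1, 5) for r in row) / len(row)

def middle_rate(row):
    """Share of answers at the scale midpoint (3)."""
    return sum(r == 3 for r in row) / len(row)

straight_liners = sum(is_straight_line(row) for row in responses)
print(f"straight-liners: {straight_liners} of {len(responses)}")
print(f"extreme rates:  {[round(extreme_rate(r), 2) for r in responses]}")
print(f"middle rates:   {[round(middle_rate(r), 2) for r in responses]}")
```

Counting respondents flagged by indices like these, by condition, is how the comparisons in the list above are made.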

So is the effort to create a gamified survey with a narrative worthwhile? Gamified surveys had better drop-off numbers for some respondents, respondents found the survey a little easier, and some types of lazy responding decreased. However, they also had a higher non-response rate. Most benefits were small and also realized by the visually interesting survey.

Overall, this is not very good news for gamified surveys. It appears that the (rather extreme) time and cost investment to develop gamified surveys did not really help much. However, because this study design did not isolate particular gamification elements, it’s difficult to say what did or did not lead to this result. Were the gains seen because of the mini-games? Perhaps the gains were seen simply because the respondents were able to take breaks between sections of the survey? Did the narrative help? We don’t really know.

In fact, the cleanest comparison available here is between the text-based survey and the visual survey, since the visual survey was essentially the text-based survey plus graphics and interactive item responses.  This is a relatively small investment (relative to gamification) and thus I can recommend it more easily.

This doesn’t mean that gamifying surveys is a bad idea – it just means that gamification designed like this – with mini-games and Flash-based response scales – is unlikely to do much.  This type of gamification may simply be too simple; fancy graphics may not convince anyone, even kids, that your survey is more interesting than it really is.  More transformative gamification – approaches that integrate the story into the questions themselves (which wasn’t done here) or take more innovative approaches to data collection (rather than dressing up normal Likert-type scales) – is an area of much greater promise.

  1. Mavletova, A. (2014). Web surveys among children and adolescents: Is there a gamification effect? Social Science Computer Review. DOI: 10.1177/0894439314545316