
Don’t Take Hiring Advice from Google

2014 October 8
by Richard N. Landers

A number of articles have been appearing in my news feeds lately from representatives of Google talking about hiring interviews. Specifically, Mashable picked up a Q&A exchange posted on Quora in which someone asked: “Is there a link between job interview performance and job performance?”

As any industrial/organizational (I/O) psychologist knows, the answer is, “It depends.” However, an “ex-Googler” and author of The Google Resume provided a different perspective: “No one really knows, but it’s very reasonable to assume that there is a link. However, the link might not be as strong as many employers would like to believe.”

The New York Times also recently posted a provocatively titled article, In Head-Hunting, Big Data May Not Be Such a Big Deal, in which Google’s Senior Vice President of People Operations was interviewed about interviews. He noted:

Years ago, we did a study to determine whether anyone at Google is particularly good at hiring. We looked at tens of thousands of interviews, and everyone who had done the interviews and what they scored the candidate, and how that person ultimately performed in their job. We found zero relationship. It’s a complete random mess, except for one guy who was highly predictive because he only interviewed people for a very specialized area, where he happened to be the world’s leading expert.

You might ask from this, quite reasonably, “If the evidence is so strong that interviews are poor, why not get rid of interviews?” My question is different: “Why should we let Google drive hiring practices across industries?”

I/O psychology, if you aren’t familiar with it, involves studying and applying psychological science to create smarter workplaces. I/O systematically examines questions like the one asked by this Quora user, in order to provide recommendations to organizations around the world. This science is conducted by a combination of academic and organizational researchers, working in concert to help organizations and their employees. It thus includes far more perspective than internal studies within an international technology corporation can provide, which is important to remember here.

Current research in I/O, conducted by scores of researchers across many organizations and academic departments all working together, in fact contradicts several of the points made by both Google and the ex-Googler.  Let me walk you through a few, starting with that longer quote above.

“It’s a complete random mess” makes for a great out-of-context snippet, but the reality is much more complex than that, for two reasons.

First, unstructured interviews – those interviews where interviewers ask whatever questions they want to ask – are in fact terrible. If your organization uses these, it should stop right now. But structured interviews, in which every applicant receives the same questions as every other applicant, fare far better. There are many reasons that unstructured interviews are terrible, but they generally involve realizing that interviewers – and people in general – are highly biased. When we can ask any question we want to evaluate someone, we ask questions that get us the outcome we want. If we like the person, we ask easy, friendly questions. If we don’t like the person, we ask difficult, intimidating questions. That reduces the effectiveness of interviews dramatically. But standardizing the questions helps fix this problem.

Second, we’re talking about Google. By the time that Google interviews a job applicant, they have already been through several rounds of evaluations: test screens, resume screens, educational screens, and so on. Google does not interview every candidate that walks through the front door. If you’re running a small business, are you getting a steady stream of Stanford grads with 4.0 GPAs? I doubt you are, and if you aren’t, interviews might still help your selection system. Google’s advice is not good advice to follow.

Some interviewers are harsher graders than others. A hiring committee can see that your 3.2 was a very good score from the guy whose average is a 2.3, but a mediocre score from the guy whose average is a 3.0. The study likely doesn’t take this into account.

That’s true; however, it doesn’t change anything. Reducing the impact of inter-interviewer differences is part of standardization. In an organization as large as Google, standardizing your interviewers completely is impossible. In your business, where you’re hiring fewer than 100 people per year, you can have the same two interviewers talk to every person who gets an interview. Because you’re not Google, you don’t face this problem.
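
To make that concrete, here is a minimal sketch (in Python, with invented ratings) of one common way to reduce the impact of interviewer leniency: re-expressing each rating relative to that interviewer’s own mean and spread. The data and the z-score approach are my illustration of the general idea, not anything Google or the quoted ex-Googler describes.

```python
from statistics import mean, stdev

# Hypothetical interview ratings, keyed by interviewer. A "harsh" grader and a
# "lenient" grader can give the same raw score to very different candidates.
ratings = {
    "Interviewer A": {"Pat": 1.9, "Sam": 2.3, "Lee": 3.2},   # harsh grader
    "Interviewer B": {"Kim": 2.8, "Ona": 3.4, "Raj": 3.2},   # lenient grader
}

def standardize(ratings_by_interviewer):
    """Convert each raw rating to a z-score within its own interviewer."""
    standardized = {}
    for interviewer, scores in ratings_by_interviewer.items():
        m, s = mean(scores.values()), stdev(scores.values())
        for candidate, raw in scores.items():
            standardized[(interviewer, candidate)] = (raw - m) / s
    return standardized

for (interviewer, candidate), z in standardize(ratings).items():
    print(f"{candidate} ({interviewer}): z = {z:+.2f}")
# The same raw 3.2 is well above the harsh grader's norm (Lee, z ≈ +1.1)
# but roughly average for the lenient grader (Raj, z ≈ +0.2).
```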

The interview score is merely a quantitative measure of your performance, but it’s not the whole picture of your performance. It’s possible that your qualitative interview performance (you know, all those words your interviewer says about you) is quite correlated with job performance, but the quantitative measure is less correlated. The people actually deciding to hire you or not are judging you based on the qualitative measure.

For the reasons I outline above, people are terrible at making judgments. But translating those judgments into numbers actually helps people be more fair. When judging “qualitative performance”, an interviewer is more likely to convince him/herself of things that just aren’t true. “This candidate didn’t answer the questions very well, but I just have a good feeling about him.” “This candidate had all the right answers, but there was something I just didn’t like.” Numbers are your friends. Don’t be afraid.

People who are hired basically all have average scores in a very small range. I would suspect 90% of hired candidates have an average between 3.1 and 3.3. Thus, when you say there’s no correlation between interview performance and job performance, we’re really saying that a 3.1 candidate doesn’t do much worse in their job than a 3.3 candidate. That’s a tiny range.

There are two possible problems here. One, this might be a problem with the system used to determine interview scores, not with interviews in general. There are a variety of techniques to improve judgment related to numeric scales. One approach is anchoring – giving specific behavioral examples for each point on the scale. For example, the interview designer might assign the behavior “Made poor eye contact” to a 3 so that everyone has the same general idea of what a “3” looks like. If your scale is bad, the scores from that scale won’t be predictive of performance.
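
To illustrate the anchoring idea, here is a minimal sketch of one anchored question from a structured interview. The question text, dimension, and anchors are invented for illustration; a real scale would be built from a job analysis of the position being hired for.

```python
# One anchored item from a hypothetical structured interview protocol.
bad_news_item = {
    "question": "Tell me about a time you had to deliver bad news to a client.",
    "dimension": "Communication",
    "anchors": {
        1: "Rambled; never addressed the situation asked about",
        2: "Described a situation but gave no concrete actions",
        3: "Gave a relevant situation and actions, but made poor eye contact "
           "and never mentioned the outcome",
        4: "Clear situation, actions, and outcome, with minor gaps",
        5: "Complete, organized answer linking actions to a measurable outcome",
    },
}

def show_anchors(item):
    """Show every interviewer the same anchors before they assign a score."""
    print(item["question"])
    for point, anchor in sorted(item["anchors"].items()):
        print(f"  {point}: {anchor}")

show_anchors(bad_news_item)
```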

Two, this may be a symptom of the same “Google problem” I described before. People that make it to the interview stage at Google are already highly qualified. Maybe Google really doesn’t need interviews. But that doesn’t mean interviews are useless for everyone.

Ultimately, it’s almost impossible to truly understand if there’s a correlation between interview performance and job performance. No company is willing to hire at random to test this.

I’ll try to avoid getting too technical here, but 1) hiring at random is not necessary to make such a conclusion and 2) this is a red herring. No single organization needs to answer, “Do interviews predict performance in general, regardless of context?” Instead, we only need to know whether interviews add predictive value beyond the other parts of the hiring system already in use. For example, suppose that right now you use resume screens, interviews, and some personality tests. With the right data collected, you can easily determine how well these parts work together (or don’t). That’s what organizations need to know, and Google’s hiring practices don’t help you answer that question for your organization.
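
As a sketch of what “the right data” lets you do, the following Python example simulates that comparison: fit a model predicting job performance from the predictors you already use, add interview scores, and see how much the variance explained improves. The simulated numbers are purely illustrative; a real study would use your own applicants’ scores and later performance ratings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data for illustration only: resume screen scores, personality
# test scores, structured interview scores, and later job performance.
n = 200
resume = rng.normal(size=n)
personality = rng.normal(size=n)
interview = 0.5 * resume + rng.normal(size=n)   # partly redundant with resume
performance = 0.4 * resume + 0.3 * personality + 0.2 * interview + rng.normal(size=n)

def r_squared(predictors, y):
    """R^2 from ordinary least squares with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_without = r_squared([resume, personality], performance)
r2_with = r_squared([resume, personality, interview], performance)

print(f"R^2 without interviews: {r2_without:.3f}")
print(f"R^2 with interviews:    {r2_with:.3f}")
print(f"Gain from adding interviews (delta R^2): {r2_with - r2_without:.3f}")
```

If the gain is negligible, interviews may add little beyond what you already collect; if it is meaningful, they are earning their keep in your system, whatever Google’s internal studies say.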

Your organization is not Google. The problems you face, and the appropriate solutions to those problems, are not the same as Google’s. Although Google’s stories are interesting, interesting stories are not something on which to build your employee selection system. You need science for a smarter workplace.

The key to all of this is that you need to make the decisions that are right for your organization. That involves conducting research within your own organization to answer the questions you find important and meaningful. If you are in charge of or have influence over human resources at your organization, you have the power to answer such questions yourself. You can develop, deploy and test a new hiring system yourself, and that’s the only way to “really know.” Don’t take hiring advice from Google, because Google’s success or lack of success using interviews, or anything else, often has little to do with how successful it will be for you.


Gamifying Surveys to Increase Completion Rate and Data Quality

2014 October 2
by Richard N. Landers

One of the biggest challenges for research involving surveys is maintaining a high rate of completion and compliance with survey requirements. First, we want a reasonably representative sample of whomever we send the survey to. Second, we want those that do complete the survey to do so honestly and thoughtfully. One approach that researchers have taken to improve these outcomes is to gamify their surveys. But does gamification actually improve data quality?

In an empirical study on this topic, Mavletova[1] examined the impact of gamification on survey completion rate and data quality among 1050 Russian children and adolescents aged 7 to 15. She compared three conditions: a traditional text-only survey; a visually enhanced survey incorporating graphics, background colors, interactive slider bars, and Adobe Flash-based responses; and a gamified survey.

To gamify the survey, Mavletova first developed four guidelines for effective gamified assessment: 1) narrative, 2) rules and goals, 3) challenges, and 4) rewards. To realize this vision, students in the gamified condition entered their name, chose an avatar, and followed a story in which they traveled in the Antarctic among friendly penguins. In the story, respondents were asked to tell the penguins about themselves in order to travel home. They also played mini-games between sections of the survey, and regular feedback indicated their progress through it.

So did all this extra effort pay off? Here’s what happened:

  • Total respondents (N = 1050) completing the text, visual, and gamified surveys on desktops and laptops did not differ in the number of drop-outs across conditions.
  • On mobile devices (N = 136), fewer respondents to the gamified surveys dropped out than those completing the visual surveys, who in turn dropped out less often than those completing the text-based surveys.
  • Participants took the least amount of time on the text version (13.9 min), more on the visual (15.2 min), and the most on the gamified version (19.4 min). However, the gamified version was also substantially longer, due to the extra content.
  • When asked about the amount of time they spent, participants in all three conditions reported approximately the same subjective experience of time.
  • Respondents found the gamified survey easier to complete than either the visual or text surveys (however, this was analyzed by comparing response rates to “Strongly Agree” on the scale across conditions, which is a strange way to do it).
  • More respondents were interested in further surveys when completing either the visual or gamified surveys in comparison to the text-based surveys (although this was determined with the strange analytic approach described above).
  • The highest non-response rate was found in the gamified survey, a lower rate in the visual survey, and the lowest in the text-based survey.
  • When omitting Flash-based questions (which required additional technology to view), there was no difference in non-response rate between conditions.
  • There were no differences in socially desirable responding by condition.
  • Straight-line responses (answering all “c” or all “b”, for example; see the sketch after this list for one simple way to flag these) were more common in the text survey (11.4%) than in both the visual (2.8%) and gamified (3.2%) surveys.
  • Extreme responses (answering “a” or “e” on a five point scale) did not differ by condition.
  • Middle responses (answering “c”s) were more common in the text survey than in either of the other two conditions.
  • There were no differences on open-ended questions in regards to length of responses, number of examples provided, or the distribution of that number.
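
One of the data-quality checks reported above, straight-lining, is easy to compute on your own survey data. The sketch below uses invented responses and a deliberately strict definition (every answer in an item block identical); the study’s exact operationalization may differ.

```python
# A minimal sketch of flagging straight-line responding in a block of
# Likert-type items. The responses below are invented for illustration.
responses = {
    "R001": ["c", "c", "c", "c", "c"],   # straight-liner
    "R002": ["b", "d", "a", "c", "e"],
    "R003": ["d", "d", "c", "d", "b"],
}

def is_straight_liner(answers):
    """True if every answer in the item block is identical."""
    return len(set(answers)) == 1

flagged = [rid for rid, answers in responses.items() if is_straight_liner(answers)]
rate = len(flagged) / len(responses)
print(f"Straight-liners: {flagged} ({rate:.1%} of respondents)")
```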

So is the effort to create a gamified survey with a narrative worthwhile? Gamified surveys reduced drop-out for some respondents (those on mobile devices), respondents found the survey a little easier, and some types of lazy responding decreased. However, they also had a higher non-response rate. Most benefits were small and were also realized by the visually interesting survey.

Overall, this is not very good news for gamified surveys. It appears that the (rather extreme) time and cost investment to develop gamified surveys did not really help much. However, because this study design did not isolate particular gamification elements, it’s difficult to say what did or did not lead to this result. Were the gains seen because of the mini-games? Were the gains seen simply because the respondents were able to take breaks between sections of the survey? Did the narrative help? We don’t really know.

In fact, the cleanest comparison available here is between the text-based survey and the visual survey, since the visual survey was essentially the text-based survey plus graphics and interactive item responses. This is a much smaller investment than full gamification, and thus I can recommend it more easily.

This doesn’t mean that gamifying surveys is a bad idea – it just means that gamification designed like this – with mini-games and Flash-based response scales – is unlikely to do much.  This type of gamification may simply be too simple; fancy graphics may not convince anyone, even kids, that your survey is more interesting than it really is.  More transformative gamification, such as those approaches integrating the story into the questions (which wasn’t done here) or taking more innovative approaches to data collection (rather than dressing up normal Likert-type scales), is an area of much greater promise.

Footnotes:
  1. Mavletova, A. (2014). Web Surveys Among Children and Adolescents: Is There a Gamification Effect? Social Science Computer Review. DOI: 10.1177/0894439314545316

Grad School: Online I/O Psychology Master’s and PhD Program List

2014 September 25
by Richard N. Landers

Grad School Series: Applying to Graduate School in Industrial/Organizational Psychology
Starting Sophomore Year: Should I get a Ph.D. or Master’s? | How to Get Research Experience
Starting Junior Year: Preparing for the GRE | Getting Recommendations
Starting Senior Year: Where to Apply | Traditional vs. Online Degrees | Personal Statements
Interviews/Visits: Preparing for Interviews | Going to Interviews
In Graduate School: What to Expect First Year
Rankings/Listings: PhD Program Rankings | Online Programs Listing

A common question I get is, “What online I/O programs are worthwhile for graduate study?” As I’ve discussed elsewhere, you are generally best served these days by a brick-and-mortar program. Employability, salary of first job, and a host of other outcomes are better. If you decide that you just can’t manage brick-and-mortar though, the available choices are not equal. Some programs are better than others. When making such comparisons, most people are ultimately concerned about employability, and the best way to answer that question is to contact some current and former students in that program, asking if they are currently employed in an I/O-related field.

But before you get to that point, you might want to just get a broad overview of which online I/O programs are available and what they offer. I found this a surprisingly difficult task when I tried to figure it out myself, so I decided to compile what information I could find into one list. Please note that I am unaffiliated with any of these programs, so these figures are “unofficial.” They are based entirely upon what I could find on university websites and via Google.

How to navigate this list? Here’s what I’d look at, in order:

  1. Check if the program has a long-standing in-person PhD program.  You’ll have the best job prospects in an online I/O program associated with a long-standing brick-and-mortar I/O program, because the reputation of the in-person program will generally carry over to the online program.
  2. Check if the program is not-for-profit (public or private in the list below).  Even without a long-standing brick-and-mortar program, not-for-profit programs have no motivation to be a diploma mill, so you have a better chance at a quality education.  That’s not always true – some private not-for-profits may still be in it for the money, and some for-profits may legitimately care about their students – but this is a good general rule.
  3. Check how stringent the admission requirements are.  Generally, more-difficult-to-get-into programs are going to have better training – they only accept qualified students because they challenge those students – and overcoming educational challenges is what gives you the skills to get a job.


University | Degree | Degree Area | Type | Cost/Credit-hour | Ranked PhD Program? | Requirements
Colorado State University | Masters of Applied I/O (MAIOP) | Applied I/O Psychology | Public | $665 | Yes | 3.0 GPA, GRE, B or higher in I/O course, B or higher in stats course
Kansas State University | Master of Science (MS) | I/O Psychology | Public | $304 | Yes | 3.0 GPA or GRE, 2 years managerial experience, coursework in Psych, HR, Management, and/or Statistics
Austin Peay State University | Master of Arts (MA) | I/O Psychology | Public | $462 | No | 2.5 GPA (above 3.0 recommended), GRE
Birkbeck University of London | Master of Science (MSc) | Org Psychology | Public | £12,570 / program | No | Bachelor’s or work experience
University of Leicester | Master of Science (MSc) | Occupational Psychology | Public | £9,220 / program | No | 2.2 UK degree or international equivalent
Adler School of Professional Psychology | Master of Arts (MA) | I/O Psychology | Private | $1040 | No | 3.0 GPA, C or higher average in Psychology, Org experience
Baker College | Master of Science (MS) | I/O Psychology | Private | Unlisted | No | Unlisted
Carlos Albizu University | Master of Science (MS) | I/O Psychology | Private | $505 | No | Unlisted
Chicago School of Professional Psychology | Master of Arts (MA) | I/O Psychology | Private | Unlisted | No | C or higher in Intro Psych, C or better in stats course, C or better in methods course
Grand Canyon University | Doctor of Philosophy (PhD) | I/O Psychology | Private | $495 | No | Unlisted
Southern New Hampshire University | Master of Science (MS) | I/O Psychology | Private | $627 | No | Successful completion of a stats course and a methods course
Touro University | Doctor of Psychology (PsyD) | Human and Org Psychology | Private | $700 | No | Master’s degree, 3.4 GPA
University of Southern California | Master of Science (MS) | Applied Psych, I/O conc. | Private | Unlisted | No | GRE and transcript (specific grades not specified)
Argosy University | Master of Arts (MA) | I/O Psychology | For-profit | Unlisted | No | 2.7 GPA or 3.0 GPA on last 60 hours
California Southern University | Master of Science (MS) | I/O Psychology | For-profit | Unlisted | No | Unlisted
Capella University | Master of Science (MS) | I/O Psychology | For-profit | $458 | No | 2.3 GPA
Capella University | Doctor of Philosophy (PhD) | I/O Psychology | For-profit | $510 | No | 3.0 GPA
Northcentral University | Master of Science (MS) | I/O Psychology | For-profit | $752 | No | Unlisted
Northcentral University | Doctor of Philosophy (PhD) | I/O Psychology | For-profit | $930 | No | Unlisted
University of Phoenix | Master of Science (MS) | I/O Psychology | For-profit | $740 | No | 2.5 GPA
University of Phoenix | Doctor of Philosophy (PhD) | I/O Psychology | For-profit | $740 | No | Master’s degree, 3.0 GPA, C or better in stats or methods course, work experience, access to own research library
University of the Rockies | Master of Arts (MA) | Business Psychology | For-profit | $824 | No | Unlisted
Walden University | Doctor of Philosophy (PhD) | I/O Psychology | For-profit | $555 | No | Unlisted