Hiring Managers Fear Being Replaced by Technology
Just a few days ago, the new and very promising open-access I/O psychology journal Personnel Assessment and Decisions released its second issue. And it’s full of interesting work, just as I thought it would be. This issue, in fact, is so full of interesting papers that I’ve decided to review/report on a few of them. The first of these, a paper by Nolan, Carter and Dalal, seeks to understand a very common problem in practice: despite overwhelmingly positive and convincing research supporting various I/O psychology practices, hiring managers often resist actually implementing them. A commonly discussed concern from these managers is that they will be recognized less for their work, or perhaps even be replaced, as a result of new technologies being introduced into the hiring process. But this evidence, to date, has been largely anecdotal. Does this really happen? And do managers really resist I/O practices the way many I/Os intuitively believe they do? The short answer suggested by this paper: yes and yes!
In a pair of studies, Nolan and colleagues explore each of these questions specifically in the context of structured interviews. In this case, “structure” is the technology that threatens these hiring managers. But do managers who use structured interviews really get less credit? And are practitioner fears about this really associated with reduced intentions to adopt new practices, despite their evident value?
In the first study, 468 MTurk workers across 35 occupations were sampled. Each was randomly assigned to one cell of a 2 x 2 x 3 design, crossing interview structure (high or none), decision structure (mechanical or intuitive combination), and hiring outcome (successful, unsuccessful, or unknown). Each participant then read a standard introduction:
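For concreteness, fully crossing those three factors yields the 12 unique condition descriptions mentioned below; here is a minimal sketch (factor labels are paraphrased from the study, not the verbatim stimuli):

```python
from itertools import product

# The three manipulated factors (labels paraphrased, not verbatim stimuli)
interview_structure = ["high", "none"]                # 2 levels
decision_structure = ["mechanical", "intuitive"]      # 2 levels
outcome = ["successful", "unsuccessful", "unknown"]   # 3 levels

# Fully crossing the factors gives every cell of the 2 x 2 x 3 design
conditions = list(product(interview_structure, decision_structure, outcome))
print(len(conditions))  # 12 condition descriptions, one per cell
```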
Imagine yourself in the following situation…The human resource (HR) manager at your company just hired a new employee to fill an open position. Please read the description of how this decision was made and answer the questions that follow.
After that prompt, the descriptions varied by condition (one of 12 descriptions followed), but the text for each manipulation was held constant across conditions. Specifically, “high interview structure” always indicated the same block of text, regardless of the other conditions. This was done to carefully standardize the experience. Afterward, participants were asked a) if they believed the hiring manager had control over and was the cause of the hiring decision and b) if the decision process was consistent (a sample question for this scale: “Using this approach, the same candidate would always be hired regardless of the person who was making the hiring decision.”). I/Os familiar with applicant reactions theory may recognize this as a perceived procedural justice rule.
So what happened? First, hiring outcome didn’t matter much for either dependent variable. Regardless of the actual decision made, outcome and all of its interactions accounted for only 1.9% and 3.3% of the total variance in the two DVs, respectively. Of course, in some areas of I/O, 3.3% of the variance is enough to justify something as extreme as theory revision, but in the reactions context, this is a pitifully small effect. So the researchers did not consider it further.
Second, the interview structure and decision structure manipulations created huge main effects. Each manipulation explained 14% of the variance in causality/control, and the total model accounted for 27%, which is a huge effect! For consistency, the effects were smaller but still present: 9% and 7% for each manipulation respectively, and 17% for the full model. People perceived managers as having less influence on the process as a result of either type of structure, and because the interactions did not add much predictive power to the model, these effects were essentially independent.
One issue with this study is that these are “paper people.” Decisions about and reactions to paper people can be good indicators of the same situations involving real people, but there’s an inferential leap required. So if you don’t believe people would react the same way to paper people as to real people, then perhaps the results of this study are not generalizable. My suspicion is that the use of paper people may strengthen the effect, so real-world effects are probably a little smaller than this – but at 27% of the variance explained (roughly equivalent to r = .52), there’s a long way down before the effect would disappear. So I’m pretty confident it exists, at the least.
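As a sanity check on that rule of thumb, the multiple correlation is just the square root of the variance explained:

```python
import math

# Converting variance explained (R-squared) back to a correlation-scale
# effect size: R = sqrt(R^2)
r_squared = 0.27
r = math.sqrt(r_squared)
print(round(r, 2))  # ~0.52 on the correlation scale
```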
Ok – so people really do seem to judge hiring managers negatively for adopting interview structure. But does that influence hiring manager behavior? Do they really fear being replaced?
In the second study, MTurk was used again, but this time anyone with no hiring experience was screened out of the final sample. This resulted in 150 people with such experience, 70% of whom were currently in a position involving supervision of direct reports. Thus, participants in this second study had current or former hiring responsibilities: people who could realistically be replaced by I/O technology.
The design was a bit different. Because outcome type showed no effects in the first study, it was eliminated from the research design, crossing only interview and decision structure (a 2 x 2, only 4 conditions). Perceptions of causality/control and consistency were assessed again. But additionally, perceived threat of unemployment by technology was examined (“Consistently using this approach to make hiring decisions would lessen others’ beliefs about the value I provide to my employing organization.”), as well as intentions to use (“I would use this approach to make the hiring decision.”).
The short version: this time, it was personal. The survey questions asked participants, in effect: if I used these techniques as a hiring manager, what would happen? Could I be replaced?
As you might expect in a move to more real-world processes, the effects were a bit smaller this time, but still quite large: 22% of causality/control explained, and 9% of consistency. You can see the effect on causality in this figure.
Nolan et al., 2016, Figure 3: Perceptions of causality/control by condition.
Interestingly, the consistency effect turned into an interaction: apparently having both kinds of structure is worse than just one.
Nolan and colleagues also tested a path model confirming what we all expected:
- Managers who believe others will judge them negatively for using new technologies are less likely to use those technologies.
- This effect occurs indirectly via perceived threat of unemployment by technology.
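The indirect-effect logic of that path model can be sketched with simulated data (hypothetical numbers, not the authors’ dataset): expected negative judgment raises perceived threat, which in turn lowers intentions to use, so the product of the two paths is negative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 150  # same sample size as Study 2

# Hypothetical simulation of the path model's logic (not the authors' data):
# expected negative judgment -> perceived threat -> lower intent to adopt
judgment = rng.normal(size=n)                     # X: expected negative judgment
threat = 0.6 * judgment + rng.normal(size=n)      # M: perceived threat of replacement
intent = -0.5 * threat + rng.normal(size=n)       # Y: intention to use the practice

def slopes(y, X):
    """OLS slope coefficients of y on the columns of X (with intercept)."""
    design = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(design, y, rcond=None)[0][1:]

a = slopes(threat, judgment.reshape(-1, 1))[0]             # X -> M path
b = slopes(intent, np.column_stack([threat, judgment]))[0]  # M -> Y path, controlling X
indirect = a * b  # product-of-coefficients estimate of the indirect effect
print(f"indirect effect = {indirect:.2f}")  # negative: threat carries the effect
```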
In summary, some people responsible for hiring fear being replaced by technology, and this decreases their willingness to adopt those technologies. This explains cases I/Os often hear about: a new practice is implemented in an organization, only for everyone to discover later that nothing has changed because the managers never actually changed anything. In an era where robots are replacing many entry-level jobs, this is a legitimate concern!
The key, I think, is to design systems for practice that take advantage of human input. There are many things that algorithms and robots can’t do well (yet) – like making ratings for structured interviews! Emphasizing this and ensuring it is understood up and down the chain of command could reduce this effect.
So for now, we know that being replaced by our products is something managers worry about. This suggests that simply selling a structured interview to a client and leaving it at that is probably not the best approach. Meet with managers, meet with their supervisors, and explain why people are still critical to the process. Only with that can you have some confidence that your structured interview will actually be used!
Always remember the human component to organizations, especially when adding new technology to a process that didn’t have it before. People make the place!

Footnotes:
- Nolan, K.P., Carter, N.T., & Dalal, D.K. (2016). Threat of technological unemployment: Are hiring managers discounted for using standardized employee selection practices? Personnel Assessment and Decisions, 2(1).