
Hiring Managers Fear Being Replaced by Technology

2016 July 27
by Richard N. Landers

Just a few days ago, the new and very promising open-access I/O psychology journal Personnel Assessment and Decisions released its second issue. And it’s full of interesting work, just as I thought it would be. This issue, in fact, is so full of interesting papers that I’ve decided to review/report on a few of them. The first of these, a paper by Nolan, Carter and Dalal[1], seeks to understand a very common problem in practice: despite overwhelmingly positive and convincing research supporting various I/O psychology practices, hiring managers often resist actually implementing them. A commonly discussed concern from these managers is that they will be recognized less for their work, or perhaps even be replaced, as a result of new technologies being introduced into the hiring process. But this evidence, to date, has been largely anecdotal. Does this really happen? And do managers really resist I/O practices the way many I/Os intuitively believe they do? The short answer suggested by this paper: yes and yes!

In a pair of studies, Nolan and colleagues explore each of these questions specifically in the context of structured interviews. In this case, “structure” is the technology that threatens these hiring managers. But do managers that use structured interviews really get less credit? And are practitioner fears about this really associated with reduced intentions to adopt new practices, despite their evident value?

In the first study, 468 MTurk workers across 35 occupations were sampled. Each was randomly assigned to one cell of a 2 x 2 x 3 design, crossing interview structure (high structure or no structure), decision structure (mechanical or intuitive combination), and outcome (successful, unsuccessful, or unknown). Each participant then read a standard introduction:

Imagine yourself in the following situation…The human resource (HR) manager at your company just hired a new employee to fill an open position. Please read the description of how this decision was made and answer the questions that follow.

After that prompt, the descriptions varied by condition (one of 12 descriptions followed) but were consistent within each variable. Specifically, “high interview structure” was always described with the same block of text, regardless of the other conditions, which carefully standardized the experience. Afterward, participants were asked (a) whether they believed the hiring manager had control over and was the cause of the hiring decision and (b) whether the decision process was consistent (a sample question for this scale: “Using this approach, the same candidate would always be hired regardless of the person who was making the hiring decision.”). I/Os familiar with applicant reactions theory may recognize this as a perceived procedural justice rule.

So what happened? First, outcome didn’t matter much for either dependent variable. Regardless of the actual decision made, outcome and all of its interactions accounted for only 1.9% and 3.3% of the total variance in the two DVs. Of course, in some areas of I/O, 3.3% of the variance is enough to justify something as extreme as theory revision, but in the reactions context, this is a pitifully small effect. So the researchers did not consider it further.

Second, the interview structure and decision structure manipulations produced huge main effects: each manipulation explained 14% of the variance in causality/control, and the total model accounted for 27%, which is an enormous effect! For consistency, the effects were smaller but still present: 9% and 7% for the two manipulations respectively, and 17% for the full model. People perceived managers as having less influence on the process as a result of either type of structure, and because the interactions did not add much predictive power to the model, these effects were essentially independent.

One issue with this study is that these are “paper people.” Decisions about and reactions to paper people can be good indicators of what happens in the same situations with real people, but there’s an inferential leap required. So if you don’t believe people react to paper people the same way they would to real people, then perhaps the results of this study don’t generalize. My suspicion is that the use of paper people may strengthen the effect, so real-world effects are probably a little smaller than this – but at 27% of the variance explained (roughly equivalent to r = .51), there’s a long way down before the effect would disappear. I’m pretty confident it exists, at the least.
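For readers who want to see where that correlation-metric figure comes from, variance explained converts to the correlation metric by taking a square root. A quick back-of-the-envelope sketch (my own illustration, not a calculation from the paper):

```python
# Back-of-the-envelope conversion from variance explained (R^2) to the
# correlation metric: r is approximately the square root of R^2.
import math

r_squared = 0.27
r = math.sqrt(r_squared)
print(f"R^2 = {r_squared:.2f} is roughly r = {r:.2f}")  # roughly r = .5
```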

Ok – so people really do seem to judge hiring managers negatively for adopting interview structure. But does that influence hiring manager behavior? Do they really fear being replaced?

In the second study, MTurk was used again, but this time anyone without hiring experience was screened out of the final sample. This resulted in 150 people with such experience, 70% of whom were currently in a position involving supervision of direct reports. Thus, people with current or former hiring responsibilities participated in this second part. People who could realistically be replaced by I/O technology.

The design was a bit different. Because outcome type had shown no effects, it was dropped from the design, leaving only interview structure and decision structure crossed (a 2 x 2, just four conditions). Perceptions of causality/control and consistency were assessed again. Additionally, perceived threat of technological unemployment was measured (“Consistently using this approach to make hiring decisions would lessen others’ beliefs about the value I provide to my employing organization.”), as well as intentions to use the approach (“I would use this approach to make the hiring decision.”).

The short version: this time, it was personal. The survey effectively asked each hiring manager: if I used these techniques, what would happen? Could I be replaced?

As you might expect in a move toward more real-world processes, the effects were a bit smaller this time but still quite large: 22% of the variance in causality/control was explained, and 9% of consistency. You can see the effect on causality in the figure below.

Nolan et al., 2016, Figure 3: Perceptions of causality/control by condition.

Interestingly, the consistency effect turned into an interaction: apparently having both kinds of structure is worse than just one.

Nolan et al., 2016, Figure 4: Perceptions of consistency by condition.

Nolan and colleagues also tested a path model confirming what we all expected:

  1. Managers who believe others will judge them negatively for using new technologies are less likely to use those technologies.
  2. This effect occurs indirectly via perceived threat of unemployment by technology.

In summary, some people responsible for hiring fear being replaced by technology, and this decreases their willingness to adopt those technologies. This helps explain the cases I/Os often hear about, where a new practice is implemented in an organization only for nothing to change, because the managers never actually changed what they were doing. In an era where robots are replacing many entry-level jobs, this is a legitimate concern!

The key, I think, is to design systems for practice that take advantage of human input. There are many things that algorithms and robots can’t do well (yet) – like making ratings for structured interviews! Emphasizing this and ensuring it is understood up and down the chain of command could reduce this effect.

So for now, we know that being replaced by our products is something managers worry about.  This suggests that simply selling a structured interview to a client and leaving it at that is probably not the best approach.  Meet with managers, meet with their supervisors, and explain why people are still critical to the process. Only with that can you have some confidence that your structured interview will actually be used!

Always remember the human component to organizations, especially when adding new technology to a process that didn’t have it before. People make the place!

Footnotes:
  1. Nolan, K. P., Carter, N. T., & Dalal, D. K. (2016). Threat of technological unemployment: Are hiring managers discounted for using standardized employee selection practices? Personnel Assessment and Decisions, 2(1).

Internet Scraping for Research: A Python Tutorial for Psychologists

2016 June 15
by Richard N. Landers

One of the biggest challenges for psychologists is gaining access to research participants. We go to great lengths, from elaborate and creative sampling strategies to spending hard-earned grant money, to get random(ish) people to complete our surveys and experiments. Despite all of this effort, we often lack perhaps the most important variable of all: actual, observable behavior. Psychology is fundamentally about predicting what people do and experience, yet most of our research starts and stops with how people think and feel.

That’s not all bad, of course. There are many insights to be gained from how people talk about how they think and feel. But the Internet has opened up a grand new opportunity to actually observe what people do – to observe their Internet behaviors. We can see how they communicate with others on social media, tracking the progress of their relationships and social interactions in real time, as they are built.

Despite all of this potential, surprisingly few psychologists actually do this sort of research. Part of the problem is that psychologists are simply not trained in what the Internet is or how it works. As a result, research studies involving internet behaviors typically rely on qualitative coding, a process by which a team of undergraduate or graduate students reads individual posts on discussion boards, writing down a number or set of numbers for each thing they read. It is tedious and slow, which delays research for some and makes the entire area too time-consuming for others to even consider.

Fortunately, the field of data science has produced a solution to this problem called internet scraping. Internet scraping (also “web scraping”) involves creating a computer algorithm that automatically travels across the Internet, or a select piece of it, collecting data of the type you’re looking for and depositing it into a dataset. The data can be just about anything you want but typically consist of posts, activity logs, or other metadata from social media.
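To make that concrete, here is a minimal sketch of what a scraper can look like in Python, using the widely available requests and BeautifulSoup libraries; the URL and HTML class names below are placeholders for illustration, not the targets of any particular study:

```python
# Minimal web-scraping sketch: download a page, parse its HTML, and extract
# the elements of interest. The URL and CSS class are illustrative placeholders.
import requests
from bs4 import BeautifulSoup

url = "https://www.example.com/forum/page/1"      # placeholder page
response = requests.get(url, timeout=30)          # fetch the raw HTML
soup = BeautifulSoup(response.text, "html.parser")

# Pull the text out of every element marked up as a forum post (hypothetical markup)
posts = [div.get_text(strip=True) for div in soup.find_all("div", class_="post-body")]

for post in posts:
    print(post)
```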

In a paper recently published in Psychological Methods by my research team and me[1], our goal is to teach psychologists how to use these tools to create and analyze their own datasets. We do this with a programming language called Python.

Now if you’re a psychologist and this sounds a little intimidating, I totally understand. Programming is not something most research psychologists learned in graduate school, although it is becoming increasingly common.

R, a statistical programming language, is becoming a standard part of graduate statistical training, so if you’ve learned any programming, that’s probably the kind you learned. If you’re one of these folks, you’re lucky – if you learned R successfully, you can definitely learn Python. If you haven’t learned R, don’t worry – the level of programming expertise you need to successfully conduct an Internet scraping project is not as high as you might think. It’s similar to the level of expertise you’d need to develop to successfully use structural equation modeling, hierarchical linear modeling, or any other advanced statistical technique. The difference is simply that now you need to learn technical skills to employ a methodological technique. But I promise it’s worth it.

In our paper, we demonstrated this by investigating a question that is relatively difficult to answer: are there gender differences in the way that people use social coping to self-treat depression? This is a difficult question to assess with surveys because you only ever see reality through the eyes of the people who are depressed. But on the Internet, we have access to huge records of people with depression trying to self-treat, in the form of online discussion forums intended for them. So to investigate our demonstration research question, we collected over 100,000 examples of people engaging in social coping, along with self-reported gender from their profiles. We wanted to know whether women use social coping more often than men, whether women are more likely to try to support women, whether men are more likely to try to support men, and a few other questions.
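As a rough sketch of how that kind of collection can be structured (the forum URL, page layout, and class names here are hypothetical, not those from our paper), the heart of such a scraper is a loop over forum pages that records each post along with the poster’s self-reported profile information:

```python
# Hypothetical sketch: loop over forum pages, recording each post's text and the
# poster's self-reported gender. URL structure and HTML class names are invented.
import csv

import requests
from bs4 import BeautifulSoup

BASE_URL = "https://www.example-forum.com/depression/page/{page}"  # placeholder

with open("social_coping_posts.csv", "w", newline="", encoding="utf-8") as outfile:
    writer = csv.writer(outfile)
    writer.writerow(["page", "author_gender", "post_text"])

    for page in range(1, 101):  # however many pages you need
        html = requests.get(BASE_URL.format(page=page), timeout=30).text
        soup = BeautifulSoup(html, "html.parser")

        for post in soup.find_all("div", class_="post"):          # hypothetical markup
            gender = post.find("span", class_="profile-gender")   # hypothetical markup
            body = post.find("div", class_="post-body")
            if gender and body:
                writer.writerow([page,
                                 gender.get_text(strip=True),
                                 body.get_text(strip=True)])
```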

The total time to collect those 100,000 cases? About 20 hours, a solid 8 of which I spent sleeping. One of the advantages of having internet scraping algorithms collect your data is that they don’t require much attention.
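Part of what lets a scraper run unattended overnight is that the patience and error handling a human monitor would otherwise provide are built into the loop itself. A sketch of that pattern, again with placeholder URLs, might look like this:

```python
# Sketch of the "runs while you sleep" part: pause between requests to be polite
# to the server, and skip over individual failures rather than halting the job.
import time

import requests

urls = [f"https://www.example-forum.com/depression/page/{i}" for i in range(1, 1001)]

for url in urls:
    try:
        response = requests.get(url, timeout=30)
        response.raise_for_status()          # treat HTTP errors as failures
        # ... parse and store response.text here ...
    except requests.RequestException as error:
        print(f"Skipping {url}: {error}")    # note the failure and keep going
    time.sleep(2)                            # polite delay between requests
```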

Imagine the research questions we could address if more people adopted this technique! I can hear the research barriers shattering already.

So I have you convinced, right? You want to learn internet scraping in Python but don’t know where to start? Fortunately, this is where my paper comes in. In the Psychological Methods article, which you can download here with a free ResearchGate account, we have laid out everything you need to know to start web scraping, including the technical know-how, the practical concerns, and even the ethics of it. All the software you need is free to use, and I have even created an online tutorial that will take you step by step through learning how to program a web scraper in Python! Give it a try and share what you’ve scraped!

Footnotes:
  1. Landers, R. N., Brusso, R. C., Cavanaugh, K. J., & Collmus, A. B. (2016). A primer on theory-driven web scraping: Automatic extraction of big data from the Internet for use in psychological research. Psychological Methods. PMID: 27213980.

I’m Presenting at SIOP’s Leading Edge Consortium on Big Data Talent Analytics!

2016 May 18
by Richard N. Landers

This year’s Leading Edge Consortium is on Talent Analytics, with a focus on Big Data and data science. If you haven’t heard of the LEC, it’s SIOP’s specialty conference intended to train practicing I/O psychologists on bleeding edge practice and science for solving real, current problems. Each year has a different theme, depending upon precisely what SIOP believes to be at the bleeding edge that year. 2014’s conference was on Succession Strategies, whereas 2015’s was on High Performance Organizations.


This topic is near and dear to both my researcher heart and my practitioner heart. It is officially titled Talent Analytics: Data Science to Drive People Decisions and Business Impact. It is being held October 21 and 22, 2016, in the home of Turner Field, Coca-Cola, and a beautiful airport: Atlanta, Georgia.

If you can’t tell from the meeting title, the idea is to share the newest of the new in talent analytics, the stuff that our journals haven’t even been able to publish yet, the stuff that’s being used in the highest-performing I/O teams in the country. There’s quite the all-star cast for me to stand alongside, which is great, since a benefit of presenting is that I get to attend the conference!

Presenters (so far):

  • Leslie DeChurch, Georgia Tech
  • Eric Dunleavy, DCI Consulting
  • Alexis Fink, Intel
  • Ed Freeman, University of Virginia
  • Rick Guzzo, Mercer
  • Hailey Herleman, IBM
  • Allen Kamin, GE
  • Eden King, George Mason University
  • Richard Landers, Old Dominion University
  • Nathan Mondragon, HireVue
  • Adam Myer, Johnson & Johnson
  • Fred Oswald, Rice University
  • Dan Putka, HumRRO
  • Sara Roberts, Category One Consulting
  • Evan Sinar, DDI
  • Rich Tonowski, EEOC
  • Paul Tsagaroulis, U.S. General Services Administration
  • Alan Wild, IBM

My particular topic will be the Science of Data Visualization! There’s still time to register, but you’d better get to it quick! Hope to see you there!