
Unpacking the Top 10 I/O Psychology Trends for 2014

2014 March 5
by Richard N. Landers

Each month, the Society for Industrial & Organizational Psychology releases a newsletter describing current events in I/O Psychology. In the February issue, I noticed a brief article on an intriguing little study conducted by the SIOP Media Subcommittee, part of the SIOP Visibility Committee. Via the SIOP newsletter, the SIOP website, and social media, the committee asked I/O psychologists what they considered to be the top workplace trends for 2014. Since most of the trends they identified are within my research area, I thought I’d explore my take on what these items mean and how much progress I/O psychology has made so far in exploring them. Here’s the list of trends, in order from hot to sizzling.

  1. Alternatives to Full Time Work. Temporary, part-time, and contractor work is becoming an increasingly common model for modern businesses. As employees shift to part-time work, do their motivations for performing “good work” change? I/O certainly has models of job performance and its predictors, but most of the empirical work on these constructs was developed and tested on permanent positions. To what extent do these findings hold with temporary workers? Of all areas of I/O, I expect discussion of culture and climate to turn to this problem first.
  2. Telework. As technology improves, it is becoming more obvious that completing the technical requirements of many jobs does not require workers to be physically present in the office. So why not let them work from home, saving money on office space and other resources? The research literature on virtual teams has grown quite a bit in the last few years, but our understanding of completely remote workers is still quite limited. Much to be done here!
  3. Social Media for Employment-related Decisions. Social media is one of my personal areas of interest, in terms of both organizational learning and employee selection (evidenced by my lab’s work on social media in selection at SIOP this year). On both fronts, we know very little so far. It seems like we can get some degree of job-relevant information from social media, but it remains unclear what the best way to go about this is, or what the legality of that information ultimately is. Should we trust hiring managers to ignore information about protected class membership (sex, race, national origin, skin color, religion, disability, and others) when scanning social media for information about job applicants? Even if we trust hiring managers to ignore this information, would the courts believe us? I suspect not.
  4. Work-Life Balance. Of the list of top trends, this is probably the most well-explored on the list within I/O. Work-family conflict (and family-work conflict) are both linked with a variety of negative outcomes for workers.  But as technology becomes omnipresent in people’s lives, the line between work and home continues to blur. For many (like myself!), there essentially is no line. With Twitter, our private lives and public lives are often the same. What effect does this have on well-being and job performance? Will people burn out faster than ever before?
  5. Integration of Technology into the Workplace. Another of my research areas. Employees are now able to reach out via social media to others in their organizations when they need help – via email, via instant messaging, via teleconference – but are more tempted than ever by the siren song of easy, quick social interaction. How can social media be leveraged within organizations to bring the potential benefits without the drawbacks? We don’t know yet – and that’s what I aim to learn.  Technology is increasingly being used to monitor employees in ways not even conceivable 10 years ago – including tracking specific employee movements throughout the workday. What impact does all this dehumanization have upon productivity and retention?
  6. Gamification. Another research area! Gamification has taken the business world by storm, with gamification “gurus” and “experts” rising up all over the place. The dirty truth, of course, is that no one yet really knows what makes it work. Nearly zero empirical research is available to explore these effects, although there are three such studies at SIOP 2014 (two of which are from my lab). The big problem is that many gamification efforts do fail – and fail badly – with a wide variety of unintended negative consequences. Does gamification need to crash before people use it selectively and carefully, when there is a clear reason to do so? I hope the crash can be avoided, but it looks like we’re headed that direction.
  7. New Ways to Test. Yet another research area, with more SIOP presentations! Many people want to complete assessments on whatever device they have handy. That means tablets and mobile phones. The problem with BYOD (bring-your-own-device) is that your assessment needs to be reliable and valid on all of them – and this is notoriously difficult to test because of the sheer variety of such devices. Even desktop and laptop computers vary a bit in how they present the web – but this variation is nothing compared to the incredible range of mobile device interfaces (from 2″ to 7″ screens!). Fortunately, early research indicates that mobile and traditional devices don’t differ much in terms of reliability and validity. But more work is needed.
  8. The Talent Question. Now that organizations compete for talent on the Internet (and thus on a global level), how do you identify and attract the best of the best? If Google and Apple can snap up all the best IT talent from every school in the world, where does that leave everyone else? And as specialized skills that require deep levels of training become more common needs for organizations, how can organizations find these people at all? Unfortunately, online recruiting is probably the oldest yet most understudied area on this list.
  9. Increasing Efficiency. The traditional I/O approach to helping organizations run lean has been to improve selection systems (you don’t need as many people if they are better people), to improve training (if people know their jobs well, you don’t need as many backups), and to solve any specific personnel problems (troublemakers harming group performance, etc.). But post-recession, many organizations are already running lean – and need to do even more. At what point is there nowhere left to go? When is the only remaining solution to hire more personnel?
  10. Big Data. Big Data sits at #1 on the list. This is a bit of a hard concept for the I/Os I’ve talked to, because many believe we already “do” Big Data. “But I collected a 500-person validation study! That’s the sort of skill Big Data requires!” they say. It isn’t. I’ve only worked on one Big Data project myself – a 13.5 million case dataset. SPSS would not even open it, and that’s not even all that “big” in terms of Big Data, which starts at the point where traditional data analysis programs can’t handle the analyses you want to run. Big Data is not synonymous with data mining, but that’s mostly how it is used, because the people with data mining skills happen to be the people with the ability to process these kinds of datasets – computer and information scientists. I/O could theoretically do a lot of good here with its focus on rigorous construct development, but graduate training (in both I/O and management) is not generally sufficiently grounded in computer programming to get us there just yet. Big Data analysis is not done by clicking buttons in a statistics program. It is done by writing algorithms to process massive data sources and letting those programs run for days at a time. The closest we get are Monte Carlo simulations. I teach computer programming to our graduate students, but I’m an outlier – it is not (yet) a common skill in I/O. If we want to be on top of Big Data – and of this list, I suspect this is the trend that will stick around the longest – this needs to change.
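The “writing algorithms, not clicking buttons” point above can be made concrete. A minimal sketch in Python (an illustration only; the score field and record counts are invented): a one-pass streaming mean that processes records as they arrive and never holds the dataset in memory – the basic pattern behind analyzing cases a desktop statistics package cannot even open.

```python
def streaming_mean(records):
    """One-pass mean: consumes a stream of values without storing them."""
    count, total = 0, 0.0
    for value in records:
        count += 1
        total += value
    return total / count if count else None

def simulated_cases(n):
    """Stand-in for reading millions of records from disk, one at a time."""
    for i in range(n):
        yield i % 100  # a hypothetical numeric score field

# Nothing is ever materialized in memory, so n can grow arbitrarily large;
# runtime increases, memory use does not.
result = streaming_mean(simulated_cases(1_000_000))
```

The same pattern scales to the 13.5-million-case range mentioned above: the program runs longer, but memory stays flat, which is exactly what button-driven packages cannot offer.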

There you have it. Technology, and the impact of technology, is what current I/Os are worrying about. It’s a lot of change very quickly, but that just means it’s an especially exciting time to be an I/O psychologist!


When You Are Popular on Facebook, Strangers Think You’re Attractive

2014 February 26
by Richard N. Landers

From psychology, we’ve known for a while that people create near-instant impressions of others based upon all sorts of cues. Visual cues (like unkempt hair or clothing), auditory cues (like a high- or low-pitched voice), and even olfactory cues (what’s that smell!?!) all combine rapidly to create our initial impressions of a person. Where things get interesting is when one set of these cues is eliminated. For example, if we’ve never met a person in real life, do we form impressions of them when all we know about them is their Facebook profile? And if so, what do we learn from those profiles?

As it turns out, it can be quite a lot. In an upcoming issue of the Journal of Computer-Mediated Communication, Scott[1] experimentally examined the impact of viewer gender, Facebook profile gender and number of Facebook friends on impression formation, finding that people with lots of friends appear more socially attractive, more physically attractive, more approachable, and more extroverted.

To determine this, the researcher first conducted a pilot study of 600 existing Facebook profiles (although the source of these profiles is not revealed). From that study, it appears that the researcher extracted wall posts at random to create new profiles, replacing each photo with one of moderate attractiveness drawn from a database of photographs with known attractiveness ratings (all four within 0.05 SD of each other), updating the number of friends to match the needed condition (90-99 for unpopular and 330-340 for popular), and updating the number of photos to match the needed condition (60-80 for unpopular and 200-250 for popular). This created a database of essentially random Facebook content, although the reason for these particular numbers (or their ranges) was never given.

In the main study, each of 102 undergraduate research participants saw all 4 conditions, in a random order.  All participants viewed the profiles on the same computers in the same research laboratory and completed a survey afterwards.

In analyses, neither research participant gender nor profile gender affected any of the five outcomes: social attractiveness, physical attractiveness, approachability, extroversion, and trustworthiness.  The only effects found were those from the experimentally controlled popularity manipulation, and these effects were noteworthy.


Figure 1 from Scott (in press), p. 9

This figure is a little misleading, since the y-axis doesn’t go down to the bottom of the scale (which was 1), but the effects are still fairly substantial: standard deviations were around 1.2, so these effects range from roughly 0.3 to 1.0 standard deviations – moderate to large in magnitude. It is also a little misleading that the trustworthiness outcome, which was assessed but not statistically significant, does not appear in the graph.
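The back-of-the-envelope conversion above is just a standardized mean difference (Cohen’s d with a common SD). A minimal sketch in Python – the condition means below are hypothetical, chosen only to reproduce the upper end of the 0.3–1.0 range; the SD of about 1.2 comes from the study:

```python
def effect_size_d(mean_popular, mean_unpopular, sd):
    """Raw mean difference expressed in standard deviation units."""
    return (mean_popular - mean_unpopular) / sd

# Hypothetical condition means on the rating scale; SD ~1.2 per the study.
# A raw difference of 1.2 scale points works out to a d of 1.0 (a large effect).
d = effect_size_d(4.8, 3.6, 1.2)
```

This is why a truncated y-axis matters: the raw gap in the figure looks enormous, but only dividing by the SD tells you whether the effect is actually large.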

Despite these very small limitations, this study quite cleanly demonstrates that people form a halo of impressions from relatively small cues, just as they do in “real life.”  Manipulating only the number of friends and number of posted photos led research participants to view the people behind the profiles as more physically attractive, among other outcomes.  This is a critical finding for research examining the value of Facebook profiles and other social media for real-life processes, like hiring or background checks – even relatively small cues can dramatically influence seemingly unrelated judgments made about a person, for better or worse.

  1. Scott, G. G. (2014). More than friends: Popularity on Facebook and its role in impression formation. Journal of Computer-Mediated Communication. DOI: 10.1111/jcc4.12067

Do Recommendation Letters Actually Tell Us Anything Useful?

2014 February 21
by Richard N. Landers

Recommendation letters are one of the most face valid predictors of academic and job performance; it is certainly intuitive that someone writing about a person they know well should be able to provide an honest and objective assessment of that person’s capabilities. But despite their ubiquity, little research is available on the actual validity of recommendation letters in predicting academic and job performance. They look like they predict performance; but do they really?

There is certainly reason to be concerned. Of the small research literature available on recommendation letters, the results don’t look good. Selection of writers is biased; usually, we don’t ask people who hate us to write letters for us. Writers themselves are then biased; many won’t agree to write recommendation letters if the only letter they could write would be a weak one. Among those who do write letters, the personality of the letter-writer may play a larger role in the content than the ability level of the recommendee. So given all that, are letters still worth considering?

In a recent issue of the International Journal of Selection and Assessment, Kuncel, Kochevar and Ones[1] examine the predictive value of recommendation letters for college and graduate school admissions, both in terms of raw relationships with various outcomes of interest and incrementally beyond standardized test scores and GPA. The short answer: letters do weakly predict outcomes, but generally don’t add much beyond test scores and GPA.  For graduate students, the outcome for which letters do add some incremental predictive value is degree attainment (which the researchers argue is a more motivation-oriented outcome than either test scores or GPA) – but even then, not by much.

Kuncel and colleagues came to this conclusion by conducting a meta-analysis of the existing literature on recommendation letters, which unfortunately was not terribly extensive. The largest number of studies appearing in any particular analysis was 16 – most analyses only summarized 5 or 6 studies. Thus the confidence intervals surrounding their estimates are likely quite wide, leaving a lot of uncertainty in the precise estimates they identified. That doesn’t necessarily threaten the validity of their conclusions – since these are certainly the best estimates of recommendation letter validity that are available right now – but it does highlight the somewhat desperate need for more research in this area.

Another caveat to these findings – the studies included in any meta-analysis must have reported enough information to obtain correlation estimates of the relationships of interest. In this case, that means the included studies needed to have quantified recommendation letter quality. I suspect many people reading recommendation letters instead interpret those letters holistically – for example, reading the entire letter and forming a general judgment about how strong it was. That holistic judgment is probably then combined with other holistic judgments to make an actual selection decision. Given what we know about statistical versus holistic combination (i.e., there is basically no good reason to use holistic combination), any particular incremental value gained by using recommendation letters may be lost in such very human, very flawed judgments.

So the conclusion? At the very least, it doesn’t look like using recommendation letters hurts the validity of selection. If you want to use such letters, you will likely get the most impact by coming up with a reasonable numerical scale (e.g., 1 to 10), assigning each letter you receive a value on that scale to indicate how strong the endorsement is, and then combining that rating with the other components of your statistically derived selection system (e.g., GPA and standardized test scores).
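The advice above is statistical (mechanical) combination, as opposed to holistic judgment. A minimal Python sketch under loudly hypothetical assumptions – the component names, pool norms, and weights below are invented for illustration, not drawn from the Kuncel et al. paper: standardize each component against the applicant pool, then take a weighted sum.

```python
def zscore(x, mean, sd):
    """Standardize a raw score against the applicant pool."""
    return (x - mean) / sd

def composite(applicant, norms, weights):
    """Statistical combination: weighted sum of standardized components."""
    return sum(weights[k] * zscore(applicant[k], *norms[k]) for k in weights)

# Entirely hypothetical applicant, pool norms (mean, SD), and weights.
applicant = {"gpa": 3.6, "test": 320, "letter": 8}   # letter rated 1-10
norms = {"gpa": (3.2, 0.4), "test": (300, 15), "letter": (6, 2)}
weights = {"gpa": 0.4, "test": 0.4, "letter": 0.2}
score = composite(applicant, norms, weights)
```

Whatever weights you choose, the point of the mechanical step is that the same rule is applied identically to every applicant – which is exactly what holistic judgment cannot guarantee.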

  1. Kuncel, N. R., Kochevar, R. J., & Ones, D. S. (2014). A meta-analysis of letters of recommendation in college and graduate admissions: Reasons for hope. International Journal of Selection and Assessment, 22(1), 101-107. DOI: 10.1111/ijsa.12060