In the I-O Psychology Credibility Crisis, Talk is Cheap
Against the backdrop of science's replication and now credibility crisis, triggered perhaps by psychology but now much broader than that, I/O psychology is finally beginning to reflect. This is great news. Deniz Ones and colleagues recently published a paper in our field's professional journal, The Industrial-Organizational Psychologist (TIP), providing some perspective on the unique version of this crisis that I/O now faces: a crisis not only of external credibility but also of internal credibility. In short, I/O practitioners, and the world more broadly, increasingly do not believe that I/O academics have much of use to say about, or contribute to, real workplaces. Ones and colleagues outline six indicators and causes of this crisis:
- an overemphasis on theory
- a proliferation of, and fixation on, trivial methodological minutiae
- a suppression of exploration and a repression of innovation
- an unhealthy obsession with publication while ignoring practical issues
- a tendency to be distracted by fads
- a growing habit of losing real-world influence to other fields.
I will let you read the authors' descriptions of each of these in their own words, but let me just say that I agree with about 95% of their comments and would even go a step further. Our field is just past the front edge of a credibility crisis, one that has been bubbling beneath the surface for the last couple of decades. Things aren't irreversible yet, but they could be soon. The only valid response at this point is to ask, "What should we do about it?" Ones and colleagues offer an unsatisfying recommendation:
To the old guard, we say: Be inclusive, train and mentor scientist–practitioners. Do not stand in the way of the scientist–practitioner model. To the rising new generation, we say: Learn from both science and practice, and chart your own path of practical science, discovery, and innovation. You can keep I-O psychology alive and relevant. Its future depends on you.
To be clear, I completely agree with this in principle. The training model needs to be changed. Researchers need to follow the best path of investigation for their own scientific research questions. We must integrate science and practice into a coherent whole. But if the replication crisis has taught me anything, it's that waiting for the old guard to become levers of change is too slow and too damaging to the rest of us to accept. When charting your own path of practical science can lead to denial of tenure, the system is fundamentally broken. So if you want change, you must be the lever. I have tried to be such a lever, in my own small way, since I earned my PhD eight years ago. But I am eager for more to join me.
Few of you know this, but I named this blog "NeoAcademic" when I graduated in 2009 not only because I was a newly minted PhD about to set off into academia but also because I was already frustrated then with "the way we do things." There seemed to be arbitrary "rules" for publishing in top journals that had relatively little to do with "making a contribution," at least as I understood the traditional meaning of the word "contribution." I heard grumblings from my adviser and the rest of the Minnesota faculty about how publishing expectations for the Journal of Applied Psychology were "changing," and not necessarily for the better. But it wasn't a "crisis." Not yet. I just thought something was up, although at the time I wasn't quite sure what.
Focused on my own path to tenure, I saw technology as a pressing need for I/O psychology, so I naively decided to innovate through research, hearkening back to the I/O psychologists of old who actually discovered things useful to the world of work. That is, after all, why I wanted to go into academia: to trailblaze at the forefront of knowledge. As anyone who has done innovative research knows, and something I underappreciated at the time, the risk/reward ratio becomes quite a bit poorer when you try to innovate through research. Several folks here told me I was crazy for trying such a thing, especially pre-tenure. Why would you spend two years investigating a hypothesis that might not turn out to be statistically significant? Because it's important to discover the answer, regardless of what happens; that's why.
Unfortunately, I soon discovered that our top journals did not (and do not) particularly appreciate this sort of approach. True innovation, when you operate at the boundaries of what is known, is messy and mixes inductive and deductive elements. It involves creating new paradigms and new methodologies, and operating in a space in the research literature that is not well explored. As anyone who has published in the Journal of Applied Psychology knows, that's not "the formula" that will get you published there. True innovation is treated as a "nice to have," not a driving reason to conduct research. That felt backwards to me. It felt like our journals were publishing information of relatively dubious worth to any real, live human being other than other people trying to publish in those journals. It was definitely not a path toward building a "practical science." So with that thought, still pre-tenure, I found myself a bit lost.
My initial reaction was to think that business schools would save me, that surely business schools would be focused on helping businesses and not just on writing papers that other academics will read. I even joined the executive committee of the Organizational Behavior division of the Academy of Management as a way to expose myself to that side of the "organizational sciences." Unfortunately, I discovered that the problem there is even worse than it is here. For example, most business schools actually name a short list of "A journals" that one must publish in to be tenured and/or promoted. That seemed absolutely backwards to me. It is the very worst possible realization of goal setting. It stifles innovation and limits the real-world impact of our work. It explicitly creates an "in club," establishing pedigree, rather than scientific merit, as the most important driver of high-quality publishing. And perhaps worst of all, it encourages people to fudge their results, or at least to take desirable forking paths, for fame and glory. This sort of thinking is now deeply ingrained in much of the Academy; it has become cultural.
So with that observation, I decided business schools were a cure worse than the disease, at least for me. I would find little practical, useful science there, at least in OB. (To be fair, when I raised this issue with an HR colleague, they just rolled their eyes at me and said, “Yeah, well, that’s OB.”) In response to this discovery, I turned to colleagues in disciplines outside of both psychology and the organizational sciences. Still later, I turned to interdisciplinary publication outlets. My eyes were opened.
In short, if all you know is I/O psychology, you have an incredibly narrow and limited view of both academic research and the organizational context in which we attempt to operate. I/O psychology is tiny. It often doesn't feel that way when you're sitting at a plenary among the thousands at SIOP, or even at Academy, but the ratio of I/O psychologists to people performing HR-related tasks in the world is essentially zero. We are a small voice in one corner of one room of a very large building. I/O psychologists often complain that we don't get a "seat at the big table," a voice on the world or even national stage, but that's essentially impossible when we won't let anyone sit at ours either, imposing strange and arbitrary rules on "the way things are done." We've become a clique, and that's dangerous.
Just as in business schools, I/O psychologists use an intense focus on methodological rigor and "rigorous" theory as a way to exclude people who don't have the "right" training. I vividly recall once describing the expectations for publication in I/O journals to a computer science colleague. I told him offhandedly that a paper could be rejected because its scales were not sufficiently validated in prior studies. He literally laughed in my face. What about situations where previously validated scales were unavailable, or where scale length was a concern? What if the scale just didn't work out like you'd expected, despite good reason to trust it? What if data were available in a context where they usually were not? Apparently no study at all is better than one with an alpha of .6.
Now, the bright side. Unlike business schools, I/O psychology now graduates a huge number of Master's and Ph.D. practitioners who see academia's sleight of hand for what it really is and then call us out on it. Yet what many academic I/Os still don't realize, I think, is that many of our students go into practice not because they can't "hack it" in academia, but because they see the hoops and hurdles of getting anything published, which is presented as the sole coin of the realm, as unpleasant, unnecessary, and corrosive to whatever real-world impact that research might have. This growing chorus of voices, I think, is our field's saving grace, if only we listen and change in response.
That brings me all the way back around to the core question I asked at the beginning of this article: what do we do now? The short answer is (1) change your behavior and (2) get loud. Here are some recommendations.
- If your paper is rejected from a journal because you purposefully and knowingly traded away some internal validity for external validity, complain loudly. If the only way you can reasonably address a research question is by looking at correlational data inside a single convenient organization, the editor should not use criticisms of your sampling strategy as a reason to reject your paper. If Mechanical Turk is the best way, same thing. Do not let journal gatekeepers get away with applying their own personal morality to your paper. Many (although not all) of these people in positions of power are part of the "old guard" because they exploited the system themselves and expect those who follow behind them to do the same. You need to let them know that this sort of rejection is unacceptable. Do not do this with the expectation or even hope that you will get your manuscript un-rejected. Do this because it's right. Change one mind at a time.
- Prioritize submissions to journals that engage in ethical editorial practices. Even better, submit to these journals exclusively. Journals live and die by citation and attention. If you ignore journals engaging in poor practices, they will either adapt or disappear. I've talked about this elsewhere on this blog, suggesting some specific journals you should pay attention to, but the list is growing. Can you imagine the effect of a journal's 8% acceptance rate suddenly ballooning to 30% because so many people stopped submitting there? Like night and day.
- Casually engage in the broader online community. There are many social media sites now devoted to discussing the failings of psychology, the credibility crisis more broadly, and what to do about these problems. Be a part of these discussions. Don't let other fields, other people, or the news media dominate these conversations. I/O psychology is strangely quiet online, limited to Reddit, a handful of Facebook and LinkedIn groups, and an even smaller number of blogs. In the modern era, this lack of representation diminishes our influence. Speak up. You don't need to be sharing the results of a research study to have something worthwhile to say.
- It's time for some difficult self-reflection: Is your research useless? In another TIP article in this issue, Alyssa Perez and colleagues went through the "top 15" journals in I/O psychology, concluding that of 955 articles published in 2016, only 47 met standards suggesting potentially "significant practical utility." Would your publications be among the 47 or among the 908? Importantly, Perez and colleagues developed only some potential criteria for making a utility judgment, and perhaps quite stringent ones, but I would call your attention to the following questions, derived from their list, to ask of each of your papers. If you answer "no" to any of them, it's time for a change. And if you think any of these aren't worthwhile goals, you're part of the problem:
- Does it tackle a pressing problem in organizations, either directly or indirectly, including but not limited to SIOP’s list of Top 10 Workplace Trends?
- Can the results of the study be applied across multiple types of workplace settings (as opposed to only advancing theoretical understanding)?
- Are the effect sizes of the study both statistically significant and practically meaningful?
- Share your research and perspective beyond the borders of I/O psychology. Research is not the sole domain of academics; we are all scientist-practitioners, and we all have a shared responsibility to conduct research and spread it not only among each other but also among all decision-makers in the world. And if "the world" is scary, feel free to start with "HR." Don't hold your nose while you're over there. Share your passion for scientific rigor and engage in good faith, because such rigor is truly a rare and significant strength of our field, but don't use that expertise as a tool to brush aside the hoi polloi.
- Innovate by seeking out what I/O-adjacent fields are doing and learning all about them as if you were in graduate school again (or still are). This is a personal pet peeve of mine; a lot of I/O psychologists, both practitioner and academic, want to operate in a bubble. They want to apply their own little sphere of knowledge, whether it's what they learned in grad school or what they once took a workshop on, and be done with it. It's not that easy anymore. It shouldn't be. Self-directed learning is becoming increasingly important as the speed of innovation increases in the tech sector. Many bread-and-butter I/O psychologists can already be replaced with algorithms, and this will only get worse in the coming years. Make sure you have a skill set that can't be automated, and if you're at risk of being automated, go get more skill sets. I'm trying to do my part over at TIP, but this is ultimately something I/Os need to do for themselves. If you need motivation, imagine the day when an algorithm can develop, administer, and psychometrically validate its own scales and simulations. That day is coming sooner than you think.
- Accept that sometimes, engineering can be more important than science. When doing science, we aim to use the experiences and circumstances of a single sample, or group of samples, to draw conclusions about the state of the world. When doing engineering, we try to use science to solve a specific problem for one population that is staring us in the face. I/O practitioners, most often, are doing engineering. They are using the research literature and their own judgment to create solutions to specific problems that specific organizations are facing. These are learning opportunities, all. They need to have a place in our literature and conversations. They are also the primary avenue by which fields outside of I/O learn about who we are and what we do. You must actively try to be both a scientist of psychology and an engineer of organizational solutions. And when your engineering project is successful, remember to trumpet the value of I/O psychology for getting you there.
Ultimately, I think these actions and others like them are not only the way to make academic I/O psychology relevant again but also the way to distinguish ourselves from business schools. Let them have their hundred boxes and arrows, four-way interactions, and tiny effect sizes. Leave solving the real problems to I/O psychology.
Hear, hear, Richard.
Great post.
I of course like this area:
“Speak up. You don’t need to be sharing the results of a research study to have something worthwhile to say.”
🙂
Anyway, I found this post of yours a breath of fresh air.
Inspirational read. Thank you.
Paul
Richard, do you think part of why you haven’t heard much yet on this blog is that there aren’t many academics actively using LinkedIn?
There are a few causes, I think. I posted it on both Twitter and on LinkedIn. There aren't many (any?) dedicated I/O Facebook groups, so no options there. I think the lack of response is first because there aren't many senior I/O academics on social media in general. I know several senior I/O folks who don't even use Facebook. Beyond that, junior I/O academics without the warm blanket of tenure are, I think, more hesitant to put their name near something like this – I've received several backchannel emails and social media pokes from folks coming from that perspective. Fear of angering the powers that be is pretty strong in that group. You never know who you'll get for your tenure review letters, and you really never know what imagined slight they'll try to sink you with. More things that are broken about academia, but now we're getting beyond the scope of this post. 🙂
What advice would you give to PhD students? I’ll be starting my PhD at MSU in the fall and one thing I can tell you is that my generation of students is very skeptical of the practices that have been put in place. At the same time, it feels difficult for us to do anything about it, as failing to publish means that we will fail to get academic appointments and tenure. It feels like all we can really do at the moment is paint smiles on our faces and contribute to the problem for 10 or 15 years until we get tenure and can actually work towards fixing things. I am encouraged that my age group will one day become the establishment of I/O, so to speak, but I also worry about the damage that could occur in the meantime.
Unless you were a child prodigy, you and I are both Millennials, so I know the feeling you're talking about. I only went into academia because I wanted to make a positive difference in the world. That decision was very clearly not about the money, and I even turned down a much more lucrative job offer in industry because I valued "impact" over compensation. So that is a large part of what motivated me even then and definitely what is motivating me to speak out now. I think academia has a certain responsibility to the world, and we are currently at risk of failing.
So, all of that is to say that what you describe is the perspective I was coming from back in 2009. The truth is that you actually do have control over the issues I listed above. They are all really grassroots sorts of change. The changes we need higher up the chain are very slowly rolling in, so things may be a little different by the time you are looking for a job and/or tenure. For example, the two journals I mentioned in this article are edited by Steven Rogelberg and Scott Highhouse, who I think are at the forefront of taking a practical stance on issues of replicability and the science-practice gap. So support them. Submit to them as your "first choice," and don't cite articles in "top tier" journals just because they're in "top tier" journals. That makes a bigger difference than you think. There's good work in our top tier journals, but we too often get in the habit of citing things in such journals that are only peripherally relevant just because it makes our reference list look more impressive and grounded. Don't fall into that trap.
I am not saying not to publish. Publishing is the primary communication medium for rigorous empirical work. I don’t think there’s anything wrong with that. I am just saying not to sacrifice your scruples when making publishing and research stream decisions. It is possible. It’s just a bit harder and a bit less glorious. But you don’t want your glory to turn to this anyway, believe me. Don’t contribute to the problem for 10 to 15 years before you do anything about it. You can still do good publishable work immediately.
And remember that there are more student members of SIOP than full members of SIOP right now. You have more influence than you think.
This is a good post. A very good post, IMHO. I just pessimistically wonder if some of the publishing issues might be intractable. Even if recommendation #4 were addressed and people started to publish more useful research, would practitioners read it? I would guess "no," because most articles are written in "Academic-ese" and, even more importantly, journal access for anyone outside of the university setting is prohibitively expensive (I'm sure companies are just DYING to set money aside for an Elsevier subscription).
Things like ResearchGate and open-access help in this regard, but ultimately someone has to pay somewhere. It is my understanding that to make an article in a major journal open-access, either the university has to pay (on top of their subscription) or the researchers themselves have to foot the bill (sad!).
I also think continued technological progress will exacerbate the science-practice divide, as a lot of practical solutions to current problems will no longer be relevant by the time they go through the peer-review process and are published several years after the research was first conducted (not just an I/O issue).
In terms of a solution, recommendation #5 kind of gets at the importance of going beyond traditional avenues of dissemination (e.g., publications, conferences, etc.); however, universities don't formally reward professors for these efforts, and everyone is operating under time constraints.
Ultimately, I’d love to see an open-access “journal” that utilizes some form of rapid peer-review, is limited to 1000 words maximum, and focuses entirely on practical solutions. Who knows, if it was actually useful and had a large and diversified viewership, it might even be able to support itself through ad revenue. Of course, the problem of journal “prestige” would still remain.
Anyways, count me as a supporter of your cause…
Adding onto Anthony's comment above, what would you suggest for master's students while in graduate school and after? What do you consider to be the important theories and technologies we should master in an applied setting, even if we will need to learn them outside of graduate school (especially if we are not being informed by new and useful academic I/O research)? Given that master's students are not likely to conduct their own research throughout their careers, how can they advance themselves and the field in meaningful ways in the real world while also staying competitive against other similar fields?
I’m also interested in your thoughts on a future intersection of I/O and AI and any future impact to the I/O field (this probably belongs in a different post).
I think data science skills are critical for new I/Os, and you are mostly on your own to learn them except in a very small number of programs. That is why I am releasing a free online self-paced course that contextualizes datacamp.com lessons for I/O psychology. You can find it here: http://datascience.tntlab.org
Beyond that, self-directed learning is very much what differentiates you in the market. The key to being successful as a Master's student (and later as an I/O in the field) is identifying a few general areas of specialization (e.g., selection, leadership development) and then doing independent reading, joining online discussion groups, etc., on those topics.
AI as it exists now is essentially predictive modeling. There are things that computer scientists call AI that we are already doing, e.g., logistic regression. So there is a lot of marketing double-speak going on right now. However, things are moving quickly, and the predictive modeling techniques now being developed are pushing the state of the art further and further away from what mainstream I/O is doing. So I think we either need to join that effort or admit that we are not as useful as we used to be when prediction is the goal (e.g., predicting turnover, leadership potential, or injury). There are efforts underway to try to course-correct, but this is not widely believed among I/Os just yet. The first wave will be at this coming SIOP, though; if you're curious, I recommend attending the session given by the four top scorers in the first SIOP machine learning competition, which is underway now. It should be enlightening!
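To make that point concrete, here is a minimal sketch, using entirely synthetic data and hypothetical variable names, of what much of today's "AI" amounts to in an HR context: a logistic regression predicting turnover, the same predictive modeling I/Os have done for decades.

```python
# Minimal sketch: the "AI" in many HR tech products is predictive modeling
# I/Os already know. All data below are synthetic; the predictors and
# outcome are hypothetical, chosen only for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500

# Hypothetical predictors: an engagement score and tenure in years
engagement = rng.normal(3.5, 0.8, n)
tenure = rng.exponential(4.0, n)

# Hypothetical outcome: turnover, more likely at low engagement/tenure
true_logit = 1.5 - 0.8 * engagement - 0.1 * tenure
turnover = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

X = np.column_stack([engagement, tenure])
X_train, X_test, y_train, y_test = train_test_split(
    X, turnover, random_state=0)

# Fit the "AI": plain logistic regression, then score a holdout sample
model = LogisticRegression().fit(X_train, y_train)
print("Holdout accuracy:", model.score(X_test, y_test))
print("P(turnover) for a disengaged new hire:",
      model.predict_proba([[2.0, 0.5]])[0, 1])
```

Swap in a more modern predictive technique and the basic workflow stays the same; the point is simply that "AI" and the prediction problems I/Os already work on sit on the same continuum.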