We’re currently in the midst of the Society for Industrial and Organizational Psychology’s annual conference, which, as appropriate to a pandemic, is mostly via Zoom. Over three thousand people are participating.
Another side effect of the pandemic, one I did not expect but noticed because of the online conference, is that there has been an explosion of I-O psychology podcasts. I also realized 1) there is no centralized record-keeping for I-O podcasts and 2) I-O podcasts are hard to search for, because most avoid the words “industrial-organizational” or “I-O” entirely; you obviously don’t want to scare away your audience.
Classifying podcasts as “I-O podcasts” is trickier than you might imagine. Although anything about “workplace behavior” is potentially I-O-relevant, my goal here is to present podcasts hosted by people who identify to the world at large as I-Os, especially in their podcasts. That’s a much smaller list.
With all that in mind, here’s a list of the I-O podcasts I’m currently aware of that have released a new episode within the last 60 days, along with some short commentary for the first few, followed by descriptions (happy to add yours if you leave a comment!).
The Great I-O Get-together with Richard Landers and Tara Behrend | This first one is obviously the best, and it’s obviously the best because Tara is involved. It’s available as both a YouTube Live variety show and as a podcast for those without the time to watch live YouTube variety shows. Our goal is to bring the I-O psychology community together, whether you’re an academic, practitioner, student, or all three at the same time. We usually play a game, like trivia or would-you-rather, in the first half, and have interviews in the second half. We also have a Discord community where you can connect with your fellow I-Os. And just to be clear, this is a show for I-O, not a show about I-O. The rest of these will be alphabetical.
Bias Check-In with Susana Gómez Ornelas and Claudia Della Pona | “What biases do we carry with ourselves in our daily lives, classrooms, and careers? A podcast hosted by IO psychologists and international students Susana Gómez Ornelas & Claudia Della Pona.”
The Business of Behavior with Becca Tagg | “Getting clear about our values will help us to make changes in our business that lead to greater fulfillment, productivity, and abundance.”
Coaching for Leaders with Dave Stachowiak | “Leaders aren’t born, they’re made. This Monday show helps you discover leadership wisdom through insightful conversations. Independently produced weekly since 2011, Dr. Dave Stachowiak brings perspective from a thriving, global leadership academy, plus more than 15 years of leadership at Dale Carnegie.”
Department 12 with Ben Butina | One of the longest-running I-O podcasts, this is “a podcast for industrial-organizational psychologists, researchers, practitioners, and students. We cover anything and everything related to the research and practice of IO psych.”
Healthy Work with Keaton Fletcher and Maryana Arvan | Keaton may need to work on his sales pitch, although the show is great: “We are two Industrial-Organizational psychologists who care about how to make work a healthier experience for everyone. We run a bi-weekly podcast to bring the science directly to your ears.”
HRchat with Bill Banham | “…get insights, strategies, and anecdotes to help you get more from the world of work. Show topics covered include HR Tech, Leadership, Talent, Recruitment, Employee Engagement, Recognition and Company Culture.”
The Indigo Podcast with Ben Baran and Chris Everett | “Take control of your life, business, and career once and for all. The Indigo Podcast offers raw, unfiltered, and science-based talk about flourishing in life. Join CEOs, executives, and regular folks in learning how to build a better future at work and beyond.”
Mind Your Work with Nicholas Bremner and Jose Espinoza | “Mind Your Work is a podcast about understanding the human aspects of work through science. We live in an interesting time. Employers are paying more attention to their people than ever. Employee wellness is now considered important. What makes a good career has changed from “a stable job” to “inspiring work”. At the same time, research on people at work is growing faster than ever. Employers are working on how to best use science to attract, retain, and engage employees. But what difference does all this make?”
TyePod with Tianna Tye | “Thanks for tuning into TyePod, the go-to podcast for entrepreneurs building and leading teams. Your host, Tianna Tye, aims to bring you research-backed and practice-validated tips, tools, and techniques to build a team that you can trust.”
WorkLife with Adam Grant | Adam Grant is a somewhat polarizing figure in I-O. He’s arguably the most visible I-O psychologist on the planet, and has done a lot of good with that platform, but also falls pretty firmly in the category of “populist” these days, for better and/or worse. Here’s the sell for his podcast: “You spend a quarter of your life at work. You should enjoy it! Organizational psychologist Adam Grant takes you inside the minds of some of the world’s most unusual professionals to discover the keys to a better work life. From learning how to love your rivals to harnessing the power of frustration, one thing’s for sure: You’ll never see your job the same way again.”
The Workr Beeing Podcast with Patricia Grabarek and Katina Sawyer | “Do you want a happier and healthier work life but don’t know where to start? Patricia and Katina, two bff organizational psychologists, share research and tips about workplace wellness and interview other leading experts in the field.”
Direct link to the application and formal job ad: https://hr.myu.umn.edu/jobs/ext/339848
As part of a recently funded US National Science Foundation project in which we will be building an online virtual interviewing platform, my laboratory will be hiring a part-time project manager with work to do for roughly the next two years. Actual workload will vary week-to-week between 10 and 25 hours (less in the beginning and more later). The position requires a Bachelor’s degree in a scientific field (psychology and computer science are targeted in the ad, but a degree from a business school in which you completed a research methods course would also qualify).
The position would be ideal for someone taking a few years off after graduation while considering applying to graduate school, as it offers a great opportunity to get some funded research experience. Pay will likely be in the $15-19/hr range, depending upon qualifications, and the position can be entirely remote, even post-pandemic.
Applicants local to Minneapolis are preferred for some post-COVID role responsibilities, but this is not required. Because the Twin Cities metro is almost 80% white, we particularly hope to encourage remote applications from people in under-represented and systemically disadvantaged groups, so remoteness will not be held against anyone. All required role responsibilities can be performed with home access to a high-speed internet connection (being remote would rule out a few hours a week of job duties in late 2021/early 2022, but that’s the extent of it, and we’ve already identified potential workarounds).
If you recently graduated and were thinking about grad school but weren’t quite convinced, this would be a great way to participate in a functioning lab and make a final decision. You’ll also have access to University of Minnesota infrastructure (such as database access, inter-library loan, Qualtrics, etc.) and would be able to conduct supervised but mostly-independent research using those resources, if you were so inclined.
Here is a direct link for interested applicants, and excerpts from the formal job ad follow: https://hr.myu.umn.edu/jobs/ext/339848
About the Position
The TNTLAB (Testing New Technologies in Learning, Assessment, and Behavior) in the Department of Psychology seeks to hire a part-time “Research Project Manager” (8352P1: Research Professional 1) responsible for providing research, office, and technical support within activities funded by a federal National Science Foundation (NSF) grant. The project involves the creation of a web-based virtual interviewing platform and the execution of two data collection efforts between 2020 and 2022. The position principally requires responsibilities in the management and administration of project personnel and secondarily in the completion of scientific research tasks, such as assigned literature review or data analysis. Specific job tasks vary over the phases of the project, and training on all such tasks will be provided as required. Candidate expectations are: (1) willingness to participate with and independently manage the time of a team of PhDs and graduate students; (2) demonstrated ability to communicate well via multiple modalities (i.e., phone, e-mail, Zoom); (3) track record of achieving assigned goals in independent work projects; and (4) must be able to work with a diverse participant pool. Position works closely with the Principal Investigator of TNTLAB in the Department of Psychology and the Principal Investigator of the Illusioneering Lab in the Department of Computer Science & Engineering, as well as graduate and undergraduate students working within each of those labs.
Major Responsibilities
- Research Team Coordination (35%)
  - Maintain meeting minutes while attending project and laboratory meetings
  - Assign tasks and follow up on them with team members within online project management software (e.g., Asana), based upon team meetings
  - Keep project teams on top of project goals and set timelines using appropriate modalities, to include e-mail, online project management software, and web conferencing (i.e., Zoom)
  - Train and manage undergraduate research assistants to complete study coding tasks
- Administration, Documentation, and Reporting (20%)
  - Maintain records and documentation associated with the project using cloud-based software (e.g., Google Docs and Sheets)
  - Maintain integrity of confidential data (e.g., Google Drive, Box Secure Storage)
  - Assist with IRB processes, including the preparation of submissions and responses to IRB when requested
- Study Participant Management, Communication, and Support (20%)
  - Enroll participants into focus groups and other data collection efforts
  - Run online (e.g., with Zoom) and, if feasible, in-person focus groups
  - Maintain participant records
  - Manage payments to research participants
  - Provide technical support to research participants facing difficulties using study software
  - Provide opinions and input to technical team on the usability of developed study software
- Office and Financial Support (10%)
  - Keep online document storage organized and well-documented
  - Keep track of project expenses
  - Conduct research on the internet to answer technical and process questions from the project team as needed
- Research Support (10%)
  - Various short-term research tasks as needed, including conducting literature reviews under the supervision of project team members
  - Create materials for submission to the Open Science Framework to preregister study hypotheses and research questions
- Data Analyses and Presentation (5%)
  - Create statistical, graphical, and narrative summaries of data
  - Regularly (e.g., weekly) present data collection progress with such data summaries to the research team, generally via e-mail or web conferencing (i.e., Zoom)
Essential Qualifications
- BA/BS in a scientific field of study, such as Psychology or Computer Science, or a combination of education and work experience equal to four years;
- Demonstrated ability to work independently in a research environment and assume responsibility for project performance;
- Willingness to work evenings and weekends during some project phases;
- Comfortable communicating with people and organizing the work of others;
- Able to allocate 10-25 hours per week through at least the end of August 2022.
Preferred Qualifications
- BA/BS coursework in both Psychology and Computer Science;
- Experience working with both Psychology and Computer Science faculty as a research assistant;
- At least one year of experience working as at least a half-time research study coordinator or project manager;
- At least one year of experience recruiting research study participants, collecting data, managing purchases/expenditures, providing documentation for IRB audits, and managing study logistics such as creating study manuals;
- Ability to empathically connect with participants, and understand their needs and concerns;
- Knowledge of research ethics and IRB rules and policies concerning the recruitment of research subjects;
- Knowledge of research design and methods commonly used in psychology;
- Organizational, time-management, decision-making and problem-solving skills;
- Leadership skills.
Diversity
The University recognizes and values the importance of diversity and inclusion in enriching the employment experience of its employees and in supporting the academic mission. The University is committed to attracting and retaining employees with varying identities and backgrounds.
The University of Minnesota provides equal access to and opportunity in its programs, facilities, and employment without regard to race, color, creed, religion, national origin, gender, age, marital status, disability, public assistance status, veteran status, sexual orientation, gender identity, or gender expression. To learn more about diversity at the U: http://diversity.umn.edu.
Although I’m now an industrial-organizational psychologist, that was not always the dream. As early as age 5, I wanted to do something involving computers. At that time, this mostly involved making my name appear in pretty flashing colors using BASICA. But that interest eventually evolved into a longer-term interest in computer programming and electronics tinkering that worked its way into a college Computer Science major. Around that same early age, I also decided I wanted to be a “professor,” mostly because I found the idea of knowing more than everyone else about something very specific quite compelling. I even combined these interests by designing a “course” in BASICA – on paper – around age 8 and then trying to charge my father $0.10 to register for each class (a truer academic calling I’ve never heard of). My obsession with both professordom and Computer Science persisted until college, when a helpful CS professor told me, “the pay is terrible and everyone with any talent just ends up in industry anyway.”
That advice pointed me rather firmly away from CS as a career path, yet my interest in technology continued to grow, and my drive toward professorship remained. Through the end of college and all through grad school, I continued to tinker and continued to teach myself programming, trying to find ways to integrate my interest-but-not-career in technology with my career in psychology research.
I tell you all of this to emphasize one point: I came at psychology a bit sideways, and that perspective has led me to notice things about the field that others don’t seem to notice. My goal was not and has never been to learn how psychology typically approaches research and to master that approach; it has been to better understand the things I find interesting, using whatever the best methods are to do so. The things I find interesting tend to be about the psychology of technology and the use of technology in psychological methods. Yet the best approaches for studying either, I realized, were not commonly used in psychological research.
This is partially a philosophical problem. Psychologists using traditional psychological methods are typically focused on psychological problems – which constructs affect which constructs, what the structure of mental processes is, etc., etc. And there’s nothing really wrong with that. Where I’ve noticed psychologists get in trouble is when they try to study technology, something decidedly not psychological, and treat it as if it were just like any other psychological thing they’ve studied. The obvious problem, of course, is that the two are not at all the same.
Let me walk through an example from the study that started me down this path – my dissertation. It was a meta-analysis quantifying the differences between traditional, face-to-face instruction and web-based instruction. I naively asked, “Is there a difference between the two in terms of instructional effectiveness, on average?”
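Mechanically, answering that question means treating every study as an estimate of one underlying effect and pooling those estimates, weighted by their precision. Here’s a minimal sketch of that pooling step (a fixed-effect average in Python; the effect sizes and sample sizes are invented for illustration, not my actual dissertation data):

```python
# Minimal fixed-effect meta-analysis sketch. All numbers are made up
# for illustration; they are not dissertation data.
import math

# (Cohen's d, n in web-based group, n in face-to-face group) per study
studies = [
    (0.21, 45, 50),
    (-0.10, 120, 115),
    (0.35, 30, 28),
]

weights, weighted_ds = [], []
for d, n1, n2 in studies:
    # Large-sample variance of d for two independent groups
    v = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    w = 1 / v  # inverse-variance weight: precise studies count for more
    weights.append(w)
    weighted_ds.append(w * d)

d_bar = sum(weighted_ds) / sum(weights)  # the pooled "average difference"
se = math.sqrt(1 / sum(weights))         # standard error of that average
print(f"Pooled d = {d_bar:+.3f}, SE = {se:.3f}")
```

Note what the weighting machinery quietly assumes: that every study’s “web-based instruction” condition is an interchangeable instance of one single thing.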
In hindsight, I now recognize this as a poor question. It presupposes a few ideas that are unjustifiable. First, it presupposes that both “face-to-face instruction” and “web-based instruction” are distinct entities. It assumes those labels can be used to meaningfully describe something consistent. Or put another way, it presupposes the existence of these ideas as constructs, i.e., that same applying-psychology-to-not-psychology problem I described above.
The second presupposition builds on the first. Assuming those two classes of technology can be treated as constructs, i.e., that any particular version of a technology is simply an instantiation or expression of the construct, those constructs must each therefore have effects, and those effects can be compared.
Do you see the problem? In traditional psychological research, we assume that each person has consistent traits that we can only get brief, incomplete glimpses of through measurement. When I refer to someone’s “happiness,” I say so understanding that a person’s happiness is an abstraction, the effects of neural activity that we cannot currently measure and probably will be unable to for decades, or perhaps much longer. We instead recognize that in lived human experience, “happiness” means something, that we as a species have a more-or-less joint understanding of what happiness is, that we make this joint understanding explicit by defining exactly what we mean by the term, and that indicators of happiness, whether through questionnaire responses or facial expressions or Kunin’s Faces scale or whatever else, provide glimpses into this construct. By looking at many of these weak signals of happiness together, by applying statistical models to those signals, we can create a number that we believe, that we can argue, is correlated with a person’s “true” happiness, i.e., a “construct-valid” measure.
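To make that measurement logic concrete, here is a minimal simulation sketch (entirely fabricated data; `statistics.correlation` requires Python 3.10+). Three noisy indicators of a simulated “true” happiness are standardized and averaged, and the resulting composite tracks the construct better than any single indicator does:

```python
# Minimal sketch of composite measurement with simulated data: weak,
# noisy indicators of a latent construct are combined into one score.
import random
import statistics as stats

random.seed(1)
n = 500
true_happiness = [random.gauss(0, 1) for _ in range(n)]

# Three weak signals: the construct plus heavy noise (stand-ins for a
# survey item, a coded facial expression, a Faces-scale rating, etc.)
indicators = [[h + random.gauss(0, 1.5) for h in true_happiness]
              for _ in range(3)]

def standardize(xs):
    m, s = stats.mean(xs), stats.stdev(xs)
    return [(x - m) / s for x in xs]

z_scores = [standardize(ind) for ind in indicators]
composite = [stats.mean(person) for person in zip(*z_scores)]

# statistics.correlation requires Python 3.10+
for i, ind in enumerate(indicators, 1):
    print(f"r(indicator {i}, truth) = {stats.correlation(ind, true_happiness):.2f}")
print(f"r(composite, truth) = {stats.correlation(composite, true_happiness):.2f}")
```

The whole exercise works precisely because there is a stable underlying “something” for the indicators to reflect.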
Psychologists often try to do the same thing with technology, yet none of this reasoning holds. There is no abstraction. There is no construct-level technology that exists, causing indicators of itself. There is no fundamental, root-level technology embedded deep within us. Technology is literal. It exists as it appears. It has purposes and capabilities. And those purposes and capabilities were created and engineered by other humans. These ideas are so overwhelmingly obvious and non-provocative in other fields that no one even bothers writing them down. Yet in psychology, researchers often ignore them from the get-go.
These realizations served as the basis of a paper I recently published in Annual Review of Organizational Psychology and Organizational Behavior along with one of my PhD students, Sebastian Marin. The paper challenges the psychologization of technology, and it tracks how this psychologization has affected the validity of the theory the field produces, usually for the worse. We map out three paradigmatic approaches describing how psychologists typically approach technology – technology-as-context, technology-as-causal, and technology-as-instrumental. We explain how these three approaches either artificially limit what the field of psychology can contribute in terms of meaningful, practical theory or are outright misleading. And there is a lot of misleading technology research in psychology, field-wide (don’t get me started on “the effects of violent video games”!).
The bright side? We identify a path forward: technology-as-designed. Researchers within this paradigm understand that it’s not the technology itself that’s important – it’s the capabilities those technologies provide and the ways people choose to interact with them. It’s about humans driving what problems can be addressed with technologies, and how those humans and other humans redesign their technologies to better address those problems. It discourages research on “what is the effect of technology?” and redirects attention toward “why?” and “how could this be better?” It turns us away from hitting every technology nail with the psychology hammer. We need newer, better, and more integrative approaches that consider the interaction between design by humans and human experience.
Human-computer interaction (HCI) researchers try to do this, with varying levels of success. I think we see such variance because many HCI researchers are deeply embedded in computer science, which appears to cause the opposite of the problem psychologists have: because technology is concrete and literal, they often assume humans are too. When humans turn out to be squishy, such research becomes much less useful. Both fields need to do better.
Some psychology researchers have already turned toward a more integrative perspective, but they are sparse. This is a shame, because psychology research within the causal and instrumental paradigms we identify is nigh uninterpretable, yet those paradigms appear to be the default.
In their research, a lot of people ask, “What is the effect of [broad class of technology] on [group of people]?” and do not probe further. This technology-as-causal perspective is simply, plainly not useful. It rarely does, and rarely should, inform decision-making in any meaningful context. It is often completely uninterpretable and almost always lacks generalizability. Such work is a waste of resources. Unlike a fine wine, this kind of research rots with age. It benefits no one, and we should stop doing it immediately.
If these ideas are of interest to you, or if you want to be part of the design revolution (!), check out our Annual Review paper directly. It’s available for free download as either text or PDF through this link.