Psychology, Technology, and Incomplete Theory
Although I’m now an industrial-organizational psychologist, that was not always the dream. As early as 5 years old, I wanted to do something involving computers. At that time, this mostly involved making my name appear in pretty flashing colors using BASICA. But that early fascination eventually evolved into a longer-term interest in computer programming and electronics tinkering that worked its way into a college Computer Science major. Around that same age, I also decided I wanted to be a “professor,” mostly because I found the idea of knowing more than everyone else about something very specific quite compelling. I even combined these interests by designing a “course” in BASICA – on paper – around age 8 and then trying to charge my father $0.10 to register for each class (a truer academic calling I’ve never heard). My obsession with both professordom and Computer Science persisted until college, when a helpful CS professor told me, “the pay is terrible and everyone with any talent just ends up in industry anyway.”
Those events in combination rather firmly pointed me away from CS as a career path, yet my interest in technology kept growing, and my drive toward professorship remained. Through the end of college and all through grad school, I continued to tinker and to teach myself programming, trying to find ways to integrate my interest-but-not-career in technology with my career in psychology research.
I tell you all of this to emphasize one point: I came at psychology a bit sideways, and that perspective has led me to notice things about the field that others don’t seem to notice. My goal has never been to learn how psychology typically approaches research and to master that approach; it has been to better understand the things I find interesting, using whatever methods are best suited to doing so. The things I find interesting tend to be the psychology of technology and the use of technology in psychological methods. Yet the best approaches for studying either, I realized, were not commonly used in psychological research.
This is partially a philosophical problem. Psychologists using traditional psychological methods are typically focused on psychological problems – which constructs affect which constructs, what the structure of mental processes is, and so on. And there’s nothing really wrong with that. Where I noticed psychologists get into trouble is when they try to study technology, something decidedly not psychological, and treat it as if it were just like any other psychological thing they’ve studied. The obvious problem, of course, is that the two are not at all the same.
Let me walk through an example from the study that started me down this path – my dissertation. It was a meta-analysis quantifying the differences between traditional, face-to-face instruction and web-based instruction. I naively asked, “Is there a difference between the two in terms of instructional effectiveness, on average?”
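For readers unfamiliar with the mechanics, here is a minimal sketch of the kind of aggregation that question implies – standardized mean differences per study, combined into one weighted average. All study numbers here are invented purely for illustration; my actual dissertation involved far more studies and far more careful corrections.

```python
import numpy as np

# Hypothetical per-study summaries comparing web-based (w) and
# face-to-face (f) instruction on some learning outcome.
# Columns: mean_w, sd_w, n_w, mean_f, sd_f, n_f -- all invented numbers.
studies = np.array([
    [78.0, 10.0, 40, 75.0, 11.0, 42],
    [70.0, 12.0, 55, 72.0, 12.5, 50],
    [82.0,  9.0, 30, 79.0, 10.0, 35],
])

m_w, sd_w, n_w, m_f, sd_f, n_f = studies.T

# Standardized mean difference (Cohen's d) per study,
# using the pooled standard deviation.
sd_pooled = np.sqrt(((n_w - 1) * sd_w**2 + (n_f - 1) * sd_f**2)
                    / (n_w + n_f - 2))
d = (m_w - m_f) / sd_pooled

# Approximate sampling variance of d, then an inverse-variance
# weighted mean -- the "on average" in my naive question.
var_d = (n_w + n_f) / (n_w * n_f) + d**2 / (2 * (n_w + n_f))
w = 1 / var_d
d_bar = np.sum(w * d) / np.sum(w)
print(f"Weighted mean effect size: {d_bar:.3f}")
```

The catch is that this single averaged number is only meaningful if “web-based instruction” names one coherent thing across all of those studies – which is exactly the presupposition I was about to run into.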
In hindsight, I now recognize this as a poor question. It presupposes a few ideas that are unjustifiable. First, it presupposes that both “face-to-face instruction” and “web-based instruction” are distinct entities. It assumes those labels can be used to meaningfully describe something consistent. Or put another way, it presupposes the existence of these ideas as constructs, i.e., that same applying-psychology-to-not-psychology problem I described above.
The second presupposition builds on the first: if those two classes of technology can be treated as constructs – that is, if any particular version of a technology is simply an instantiation or expression of the construct – then those constructs must each have effects, and those effects can be compared.
Do you see the problem? In traditional psychological research, we assume that each person has consistent traits that we can only get brief, incomplete glimpses of through measurement. When I refer to someone’s “happiness,” I say so understanding that a person’s happiness is an abstraction, the effect of neural activity that we cannot currently measure and probably will be unable to measure for decades, or perhaps much longer. We instead recognize that in lived human experience, “happiness” means something; that we as a species have a more-or-less shared understanding of what happiness is; that we make this shared understanding explicit by defining exactly what we mean by the term; and that indicators of happiness, whether questionnaire responses or facial expressions or Kunin’s Faces scale or whatever else, provide glimpses into this construct. By looking at many of these weak signals of happiness together and applying statistical models to them, we can create a number that we believe, that we can argue, is correlated with a person’s “true” happiness – a construct-valid measure.
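To make that aggregation step concrete, here is a minimal sketch using invented item responses; real psychometric work demands reliability and validity evidence far beyond this toy example.

```python
import numpy as np

# Invented responses from 5 people to 4 happiness items (1-7 Likert),
# each item a weak, noisy indicator of the same underlying construct.
items = np.array([
    [6, 5, 6, 7],
    [3, 4, 2, 3],
    [5, 5, 6, 5],
    [2, 1, 2, 2],
    [7, 6, 7, 6],
])

# Standardize each item, then average across items: the simplest
# statistical model for pooling weak signals into one score that
# we can argue correlates with "true" happiness.
z = (items - items.mean(axis=0)) / items.std(axis=0, ddof=1)
happiness_score = z.mean(axis=1)

# Cronbach's alpha: one piece of evidence that the items cohere
# enough to be treated as indicators of a single construct.
k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)
total_var = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(happiness_score.round(2), f"alpha = {alpha:.2f}")
```

Every step here leans on the assumption that a latent “happiness” exists and causes the item responses – precisely the assumption that falls apart for technology.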
Psychologists often try to do the same thing with technology, yet none of this reasoning holds. There is no abstraction. There is no construct-level technology that exists, causing indicators of itself. There is no fundamental, root-level technology embedded deep within us. Technology is literal. It exists as it appears. It has purposes and capabilities. And those purposes and capabilities were created and engineered by other humans. These ideas are so overwhelmingly obvious and non-provocative in other fields that no one even bothers writing them down. Yet in psychology, researchers often assume otherwise from the get-go.
These realizations served as the basis of a paper I recently published in Annual Review of Organizational Psychology and Organizational Behavior, along with one of my PhD students, Sebastian Marin. The paper challenges the psychologization of technology, and it tracks how this psychologization has affected the validity of theory that the field produces, usually for the worse. We map out three paradigmatic approaches describing how psychologists typically approach technology – technology-as-context, technology-as-causal, and technology-as-instrumental. We explain how these three approaches either artificially limit what the field of psychology can contribute in terms of meaningful, practical theory or are outright misleading. And there is a lot of misleading technology research in psychology, field-wide (don’t get me started on “the effects of violent video games”!).
The bright side? We identify a path forward: technology-as-designed. Researchers within this paradigm understand that it’s not the technology itself that’s important – it’s the capabilities those technologies provide and the ways people choose to interact with those technologies. It’s about humans driving what problems can be addressed with technologies, and how people redesign their technologies to better address those problems. It discourages research on “what is the effect of technology?” and redirects attention toward “why?” and “how could this be better?” It turns us away from hitting every technology nail with the psychology hammer. We need newer, better, and integrative approaches that consider the interaction between design by humans and human experience.
Human-computer interaction researchers try to do this, with varying levels of success. I think we see such variance because many HCI researchers are deeply embedded in computer science, which creates the opposite of the problem psychologists have. Because technology is concrete and literal, they often assume humans are too. When humans turn out to be squishy, such research becomes much less useful. Both fields need to do better.
Some psychology researchers have already turned toward a more integrative perspective, but they are rare. This is a shame, because psychology research within the causal and instrumental paradigms we identify is nigh uninterpretable, and those paradigms appear to be the default.
A lot of researchers ask, “What is the effect of [broad class of technology] on [group of people]?” and do not probe further. This technology-as-causal perspective is simply, plainly not useful. It rarely does – and rarely should – inform decision-making in any meaningful context. It is often completely uninterpretable and almost always lacks generalizability. Such work is a waste of resources. Unlike a fine wine, this kind of research rots with age. It benefits no one, and we should stop doing it immediately.
If these ideas are of interest to you, or if you want to be part of the design revolution (!), check out our Annual Review paper directly. It’s available for free download as either text or PDF through this link.