
Hiring an NSF Research Project Manager to Start Immediately

2021 March 25
by Richard N. Landers

Direct link to the application and formal job ad: https://hr.myu.umn.edu/jobs/ext/339848

As part of a recently funded US National Science Foundation project in which we will be building an online virtual interviewing platform, my laboratory will be hiring a part-time project manager for roughly the next two years. Actual workload will vary week to week between 10 and 25 hours (less in the beginning and more later). The position requires a Bachelor’s degree in a scientific field (psychology and computer science are targeted in the ad, but a degree from a business school in which you completed a research methods course would also qualify).

The position would be ideal for someone taking a few years off after graduation while considering applying to graduate school, and it would be a great opportunity for such a person to get some funded research experience. Pay will likely be in the $15-19/hr range, depending upon qualifications, and the position can be entirely remote, even post-pandemic.

Applicants local to Minneapolis are preferred for some post-COVID role responsibilities, but this is not required. As the Twin Cities metro is almost 80% white, we are particularly hoping to encourage remote applications from people in under-represented and systemically disadvantaged groups, so remoteness will not be held against anyone. All required role responsibilities can be performed with home access to a high-speed internet connection (being remote could rule out a few hours a week of job duties in late 2021/early 2022, but that’s the extent of it, and we’ve already identified potential workarounds).

If you recently graduated and were thinking about grad school but weren’t quite convinced, this would be a great way to participate in a functioning lab and make a final decision. You’ll also have access to University of Minnesota infrastructure (such as database access, inter-library loan, Qualtrics, etc.) and would be able to conduct supervised but mostly-independent research using those resources, if you were so inclined.

Here is a direct link for interested applicants, and excerpts from the formal job ad follow: https://hr.myu.umn.edu/jobs/ext/339848

About the Position

The TNTLAB (Testing New Technologies in Learning, Assessment, and Behavior) in the Department of Psychology seeks to hire a part-time “Research Project Manager” (8352P1: Research Professional 1) responsible for providing research, office, and technical support within activities funded by a federal National Science Foundation (NSF) grant. The project involves the creation of a web-based virtual interviewing platform and the execution of two data collection efforts between 2020 and 2022. The position principally involves managing and administering project personnel and secondarily completing scientific research tasks, such as assigned literature review or data analysis. Specific job tasks vary over the phases of the project, and training on all such tasks will be provided as required. Candidate expectations are: (1) willingness to participate with and independently manage the time of a team of PhDs and graduate students; (2) demonstrated ability to communicate well via multiple modalities (i.e., phone, e-mail, Zoom); (3) a track record of achieving assigned goals in independent work projects; and (4) the ability to work with a diverse participant pool. The position works closely with the Principal Investigator of TNTLAB in the Department of Psychology and the Principal Investigator of the Illusioneering Lab in the Department of Computer Science & Engineering, as well as graduate and undergraduate students working within each of those labs.

Major Responsibilities

  • Research Team Coordination (35%)
    • Maintain meeting minutes while attending project and laboratory meetings
    • Assign tasks to team members and follow up on them within online project management software (e.g., Asana), based upon team meetings
    • Keep project teams on top of project goals and set timelines using appropriate modalities, including e-mail, online project management software, and web conferencing (i.e., Zoom)
    • Train and manage undergraduate research assistants to complete study coding tasks
  • Administration, Documentation, and Reporting (20%)
    • Maintain records and documentation associated with the project using cloud-based software (e.g., Google Docs and Sheets)
    • Maintain integrity of confidential data (e.g., Google Drive, Box Secure Storage)
    • Assist with IRB processes, including the preparation of submissions and responses to IRB when requested
  • Study Participant Management, Communication, and Support (20%)
    • Enroll participants into focus groups and other data collection efforts
    • Run online (e.g., with Zoom) and, if feasible, in-person focus groups
    • Maintain participant records
    • Manage payments to research participants
    • Provide technical support to research participants facing difficulties using study software
    • Provide opinions and input to technical team on the usability of developed study software
  • Office and Financial Support (10%)
    • Keep online document storage organized and well-documented
    • Keep track of project expenses
    • Conduct research on the internet to answer technical and process questions from the project team as needed
  • Research Support (10%)
    • Various short-term research tasks as needed, including conducting literature reviews under the supervision of project team members
    • Create submission materials for the Open Science Framework to preregister study hypotheses and research questions
  • Data Analyses and Presentation (5%)
    • Create statistical, graphical, and narrative summaries of data
    • Regularly (e.g., weekly) present data collection progress, using such data summaries, to the research team, generally via e-mail or web conferencing (i.e., Zoom)

Essential Qualifications

  • BA/BS in a scientific field of study, such as Psychology or Computer Science, or a combination of education and work experience equal to four years;
  • Demonstrated ability to work independently in a research environment and assume responsibility for project performance;
  • Availability to work evenings and weekends during some project phases;
  • Comfortable communicating with people and organizing the work of others;
  • Able to allocate 10-25 hours per week through at least the end of August 2022.

Preferred Qualifications

  • BA/BS coursework in both Psychology and Computer Science;
  • Experience working with both Psychology and Computer Science faculty as a research assistant;
  • At least one year of experience working half-time or more as a research study coordinator or project manager;
  • At least one year of experience recruiting research study participants, collecting data, managing purchases/expenditures, providing documentation for IRB audits, and managing study logistics such as creating study manuals;
  • Ability to empathically connect with participants, and understand their needs and concerns;
  • Knowledge of research ethics and IRB rules and policies concerning the recruitment of research subjects;
  • Knowledge of research design and methods commonly used in psychology;
  • Organizational, time-management, decision-making and problem-solving skills;
  • Leadership skills.

Diversity

The University recognizes and values the importance of diversity and inclusion in enriching the employment experience of its employees and in supporting the academic mission.  The University is committed to attracting and retaining employees with varying identities and backgrounds.

The University of Minnesota provides equal access to and opportunity in its programs, facilities, and employment without regard to race, color, creed, religion, national origin, gender, age, marital status, disability, public assistance status, veteran status, sexual orientation, gender identity, or gender expression.  To learn more about diversity at the U:  http://diversity.umn.edu.

Psychology, Technology, and Incomplete Theory

2021 January 27
by Richard N. Landers

Although I’m now an industrial-organizational psychologist, that was not always the dream. As early as 5 years old, I wanted to do something involving computers. At that time, this mostly involved making my name appear in pretty flashing colors using BASICA. But that interest eventually evolved into a longer-term interest in computer programming and electronics tinkering that worked its way into a college Computer Science major. Starting around that same early age, I decided I wanted to be a “professor,” mostly because I found the idea of knowing more than everyone else about something very specific quite compelling. I even combined these interests by designing a “course” in BASICA – on paper – around age 8 and then trying to charge my father $0.10 to register for each class (a truer academic calling I’ve never heard). My obsession with both professordom and Computer Science persisted until college, when a helpful CS professor told me, “the pay is terrible and everyone with any talent just ends up in industry anyway.”

Those events in combination rather firmly pointed me away from CS as a career path, yet my interest in technology still grew, and my drive toward professorship remained. Throughout the end of college and all during grad school, I continued to tinker and continued to teach myself programming, trying to find ways to integrate my interest-but-not-career in technology with my career in psychology research.

I tell you all of this to emphasize one point: I came at psychology a bit sideways, and that perspective has led me to notice things about the field that others don’t seem to notice. My goal was not and has never been to learn how psychology typically approaches research and to master that approach; it has been to better understand the things I find interesting, using whatever the best methods are to do so. The things I find interesting tend to be about the psychology of technology and the use of technology in psychological methods. Yet the best approaches for studying either, I realized, were not commonly used in psychological research.

This is partially a philosophical problem. Psychologists using traditional psychological methods are typically focused on psychological problems – which constructs affect which constructs, what is the structure of mental processes, etc., etc. And there’s nothing really wrong with that. Where I noticed psychologists get in trouble is when they try to study technology, something decidedly not psychological, and treat it as if it were just like any other psychological thing they’ve studied. The obvious problem, of course, is that those are not at all the same.

Let me walk through an example from the study that started me down this path – my dissertation. It was a meta-analysis quantifying the differences between traditional, face-to-face instruction and web-based instruction. I naively asked, “Is there a difference between the two in terms of instructional effectiveness, on average?”
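
To make concrete what “on average” meant there: a meta-analysis of this kind extracts a standardized effect size from each study – how much better or worse web-based instruction performed than face-to-face instruction in that particular study – and pools those estimates into a weighted mean. A minimal fixed-effect sketch in standard meta-analytic notation (not necessarily the exact model my dissertation used):

  \bar{d} = \frac{\sum_{i=1}^{k} w_i d_i}{\sum_{i=1}^{k} w_i}, \qquad w_i = \frac{1}{v_i}

where d_i is the standardized mean difference from study i, v_i is its sampling variance, and k is the number of studies. The arithmetic is unobjectionable; the trouble, as you’ll see, is the assumption baked into \bar{d} that every d_i estimates the same underlying quantity.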

In hindsight, I now recognize this as a poor question. It presupposes a few ideas that are unjustifiable. First, it presupposes that both “face-to-face instruction” and “web-based instruction” are distinct entities – that those labels can be used to meaningfully describe something consistent. Put another way, it presupposes the existence of these ideas as constructs, which is the same applying-psychology-to-not-psychology problem I described above.

The second presupposition builds on the first. Assuming those two classes of technology can be treated as constructs – i.e., that any particular version of a technology is simply an instantiation or expression of the construct – those constructs must therefore each have effects, and those effects can be compared.

Do you see the problem? In traditional psychological research, we assume that each person has consistent traits that we can only get brief, incomplete glimpses of through measurement. When I refer to someone’s “happiness,” I do so understanding that a person’s happiness is an abstraction, the effect of neural activity that we cannot currently measure and probably will be unable to for decades, or perhaps much longer. We instead recognize that in lived human experience, “happiness” means something, that we as a species have a more-or-less joint understanding of what happiness is, that we make this joint understanding explicit by defining exactly what we mean by the term, and that indicators of happiness, whether through questionnaire responses or facial expressions or Kunin’s Faces scale or whatever else, provide glimpses into this construct. By looking at many of these weak signals of happiness together, and by applying statistical models to those signals, we can create a number that we believe, that we can argue, is correlated with a person’s “true” happiness – i.e., a construct-valid measure.
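
In classical test theory terms – my gloss here, standard psychometrics rather than anything unique to this argument – each indicator is modeled as the shared construct plus noise, and aggregation averages the noise away:

  x_j = \tau + \varepsilon_j, \qquad \hat{\tau} = \frac{1}{m} \sum_{j=1}^{m} x_j

where x_j is the j-th indicator (a questionnaire item, a coded facial expression), \tau is the person’s true standing on the construct, and \varepsilon_j is measurement error. If the errors are independent, the composite \hat{\tau} tracks \tau more and more closely as m grows, which is the statistical sense in which such a score can be defended as construct valid.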

Psychologists often try to do the same thing with technology, yet none of this reasoning holds. There is no abstraction. There is no construct-level technology that exists, causing indicators of itself. There is no fundamental, root-level technology embedded deep within us. Technology is literal. It exists as it appears. It has purposes and capabilities. And those purposes and capabilities were created and engineered by other humans. These ideas are so overwhelmingly obvious and non-provocative in other fields that no one even bothers writing them down. Yet in psychology, researchers often ignore them from the get-go.

These realizations served as the basis of a paper I recently published in the Annual Review of Organizational Psychology and Organizational Behavior with one of my PhD students, Sebastian Marin. The paper challenges the psychologization of technology, and it tracks how this psychologization has affected the validity of the theory the field produces, usually for the worse. We map out three paradigmatic approaches describing how psychologists typically approach technology – technology-as-context, technology-as-causal, and technology-as-instrumental. We explain how these three approaches either artificially limit what the field of psychology can contribute in terms of meaningful, practical theory or are outright misleading. And there is a lot of misleading technology research in psychology, field-wide (don’t get me started on “the effects of violent video games”!).

The bright side? We identify a path forward: technology-as-designed. Researchers within this paradigm understand that it’s not the technology itself that’s important – it’s the capabilities those technologies provide and the ways people choose to interact with them. It’s about humans driving which problems can be addressed with technologies, and how those humans and other humans redesign their technologies to better address those problems. It discourages research on “what is the effect of technology?” and redirects attention toward “why?” and “how could this be better?” It turns us away from hitting every technology nail with the psychology hammer. We need newer, better, and integrative approaches that consider the interaction between design by humans and human experience.

The field of human-computer interaction tries to do this, with varying levels of success. I think we see such variance because many HCI researchers are deeply embedded in computer science, which appears to cause the opposite of the problem that psychologists have. Because technology is concrete and literal, they often assume humans are too. When humans turn out to be squishy, such research becomes much less useful. Both fields need to do better.

Some psychology researchers have already turned toward a more integrative perspective, but they are rare. This is a shame, because psychology research within the causal and instrumental paradigms we identify is nigh uninterpretable, and yet those approaches appear to be the default.

In their research, a lot of people ask, “What is the effect of [broad class of technology] on [group of people]?” and do not probe further. This technology-as-causal perspective is simply, plainly not useful. It rarely informs decision-making in any meaningful context, nor should it. It is often completely uninterpretable and almost always lacks generalizability. Such work is a waste of resources. Unlike a fine wine, this kind of research rots with age. It benefits no one, and we should stop doing it immediately.

If these ideas are of interest to you, or if you want to be part of the design revolution (!), check out our Annual Review paper directly. It’s available for free download as either text or PDF through this link.

Trustworthy I-O Master’s and Ph.D. Program Rankings

2020 December 1
by Richard N. Landers

Grad School Series: Applying to Graduate School in Industrial/Organizational Psychology
Starting Sophomore Year: Should I get a Ph.D. or Master’s? | How to Get Research Experience
Starting Junior Year: Preparing for the GRE | Getting Recommendations
Starting Senior Year: Where to Apply | Traditional vs. Online Degrees | Personal Statements
Alternative Path: Managing a Career Change to I/O | Pursuing a PhD Post-Master’s
Interviews/Visits: Preparing for Interviews | Going to Interviews
In Graduate School: What to Expect First Year
Rankings/Listings: PhD Rankings | Online Programs | Trustworthy Master’s and Ph.D. Rankings


Since writing my grad school series, one of the most common questions I get is, “Which graduate programs should I apply to?” What people generally want to know is some variation of one of these questions:

  • What are the best IO psychology graduate programs?
  • What is the best IO psychology master’s program?
  • What is the best IO psychology PhD program?
  • Which IO psychology programs will get me a good job?

As I’ve explained on this blog before, evaluating program quality is a complicated problem. It is difficult even for those of us in the field to agree on what “best” means; it is a worse problem for you as a prospective student. Fortunately, the advice is more straightforward: in the end, you should evaluate which schools offer what you want out of them as a student. Don’t look for “the best”; look for the best for you. That means compiling information from multiple sources, including multiple rankings, and keeping track of how each program matches you as an individual.

Because trustworthy rankings tend to get buried on various websites over time, whereas spam rankings seem to live forever, I decided to start a running list of all current trustworthy rankings of IO programs, splitting them by the year they were released. This way, you can get the most recent information available but also see historic rankings and how they’ve changed. Again, you should not rely upon any single ranking system to determine where you apply, but consulting multiple ranking systems, seeing how they agree and don’t, and then weighing that against your own priorities is a great way to narrow down the field of options. “What is the best IO psychology master’s program” or “what is the best IO psychology PhD program” has a different answer for every prospective student.

By the way, I’m defining “trustworthy” to mean that a ranking is based upon empirical data, reports a transparent ranking methodology, and supports a reasonable argument for the construct validity of “program quality.”

Without further ado, here is the list.

As I alluded to earlier, there are many rankings on the internet that are… let’s say “stupid,” which I have chosen not to include here (I’m looking at you, thinly veiled attempts to earn commissions by sending people to online programs at Capella, Walden, and/or Phoenix). Most of these are based on nonsensical metrics (e.g., one Ph.D. ranking I found was based on tuition cost, which is misleading in several different harmful ways).

If you know of other rankings you think I should include or would like my opinion on a ranking you found, drop it in the comments!