An upcoming article in Academy of Management Perspectives by Aguinis, Suarez-Gonzalez, Lannelongue and Joo investigates the accuracy of citation counts as a measure of how impactful an academic’s work is, both in and out of academia. Short answer: while citation counts reflect the extent to which an academic’s work affects the work of his/her peers, they do not reflect the extent to which that work affects the world at large.
Business schools in particular place a great deal of importance on citation counts, often equating them with the importance of the scholar. A researcher with a high citation count might be headhunted from another business school in order to increase the prestige of the hiring institution.
The problem is that citation counts do not necessarily capture the actual importance of a person’s work in anything beyond scholarly circles. If the purpose of science is to help the world (and I’d argue that it is), then citation counts capture something altogether different: they reflect how valuable other scientists view your work to be in supporting their own ideas. This is not really what we want to know when we ask, “how important is this scholar’s work?”.
Many fields (especially business-related fields) worry that their research is not adopted by those who could benefit from it most. Often, research never makes it beyond journal articles and into practice. So if we are really concerned with identifying the most “impactful” scholars, do citation counts capture that? Do highly cited authors have a bigger impact on the world than less-cited authors?
To determine exactly how much citation counts reflect larger impact, the authors collected citation counts for the 550 most-cited authors in the Academy of Management. They searched Google for these authors, using each full name in quotation marks to pull up a list of Internet references to that author. They then reviewed the first 50 pages of results to see how many actually referred to the author of interest. If more than 5% referred to someone else, they dropped that author from the analysis. This resulted in a final database of 391 scholars.
In that database, the number of Google listings did not correlate highly with citation counts: correlations ranged from .152 to .260, depending on whether or not you include .edu domains. In a multiple regression analysis controlling for years since earning the doctorate and number of articles published, citation counts did not predict substantial incremental variance in Google listings among non-.edu domains (a delta-R² of about one half of one percent).
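For readers unfamiliar with this kind of analysis, here is a minimal sketch of the hierarchical regression logic in Python using statsmodels; the DataFrame and column names are hypothetical stand-ins, not the authors’ actual variables.

```python
# A minimal sketch of an incremental variance (delta-R2) analysis.
# Assumes a pandas DataFrame `df` with hypothetical columns:
# google_listings, years_since_phd, n_articles, citations.
import pandas as pd
import statsmodels.formula.api as smf

def delta_r_squared(df: pd.DataFrame) -> float:
    """Incremental variance in Google listings explained by citations,
    over and above career length and publication count."""
    # Step 1: control variables only
    base = smf.ols("google_listings ~ years_since_phd + n_articles",
                   data=df).fit()
    # Step 2: controls plus citation counts
    full = smf.ols("google_listings ~ years_since_phd + n_articles + citations",
                   data=df).fit()
    # The difference in R-squared is the incremental variance
    return full.rsquared - base.rsquared
```

A delta-R² around .005 from a function like this would correspond to the “one half of one percent” figure above.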
In summary, if we believe the number of references in Google to be an accurate metric for capturing impact on the world at large, citation counts do not reflect this value. Impact is clearly a more complicated construct than we typically consider it to be; future work should investigate better ways to capture it. It’s also worth noting that while this approach works for business, where research results should be directly adopted by managers, it would not work so well for fields where there are several steps between research and adoption. For example, just because a nuclear physicist does not appear much on Google doesn’t mean that his work didn’t help build a nuclear power plant.
At the least, my 17,000 results in Google put me around #300 of the 391 most-cited authors in the Academy of Management. Not too bad for 3 years out!
Grad School Series: Applying to Graduate School in Industrial/Organizational Psychology
Starting Sophomore Year: Should I get a Ph.D. or Master’s? | How to Get Research Experience
Starting Junior Year: Preparing for the GRE | Getting Recommendations
Starting Senior Year: Where to Apply for Grad School | Value of Traditional vs. Online Degrees
Interviews/Visits: Preparing for Interviews | Going to Interviews
In Graduate School: What to Expect First Year
So you want to go to graduate school in industrial/organizational (I/O) psychology? Lots of decisions, not much direction. I bet I can help!
While my undergraduate students are lucky to be at a school with I/O psychologists, many students interested in I/O psychology aren’t at schools with people they can talk to. I/O psychology is still fairly uncommon among psychologists; there are around 7,000 members of SIOP, the dominant professional organization of I/O, compared to the 150,000 members of the American Psychological Association. As a result, many schools simply don’t have faculty with expertise in this area, leading many promising graduate students to apply elsewhere. That’s great from the perspective of I/O psychologists – lots of jobs – but not so great for grad-students-to-be or the field as a whole.
As a faculty member at ODU with a small army of undergraduate research assistants, I often find myself answering the same questions over and over again about graduate school. So why not share this advice with everyone?
I’ve decided to return to this series because a few questions have come up from students that I realized I didn’t cover here. This week, I’d like to cover another important decision: should I go to graduate school at an online institution or a more traditional on-campus program?
This is actually part of one of the continuing “big arguments” in the field of education, so there aren’t many clear answers just yet. There is some evidence from the Department of Education that web-based courses are no less effective than in-person courses. That is, there’s reason to believe that two courses, similarly designed, one online and one in-person, will be essentially the same in their ability to teach you content. But the studies that the DOE summarizes are generally all undergraduate or laboratory studies, offering little insight into a) graduate courses or b) complete programs (not individual courses).
The experience of graduate school is certainly going to be different at these two categories of institution. One of the major benefits from brick-and-mortar graduate school is literally immersing yourself in the academic environment: being a part of a cohort of graduate students with similar experiences that you socialize with, interacting intensively face-to-face with professors about your academic achievement and career goals, gaining networking contacts that you will call on for the rest of your career, and generally learning about the culture of the profession.
Most of that is lost in an online environment. You’re not going to go out for drinks after a difficult exam with your classmates. As you likely already learned as an undergraduate in psychology, frequency of interaction and shared traumatic experiences are some of the best ways to ensure relationships form between people. This simply doesn’t really exist in an online program. While you might get to know people on discussion boards, it’s not quite the same. Think of it like the difference between your in-person friends and your “Facebook friends.”
The casual interaction with others in an academic environment is also beneficial developmentally. Completing a graduate program at home, you almost always have time to sit and think about your answers, to carefully consider your responses, and to put a lot of time and effort into producing the best answer possible. And this is certainly a valuable skill in an I/O career – but it’s not everything. If you ever plan to use your I/O degree in the “real world,” you’ll need experience coming up with answers on the fly and responding to/interacting with other experts. Many graduate students find that their first academic conference presentation, where they must respond to random questions from interested parties about their research results, is eye-opening in terms of the sudden pressure to think on their feet. Most students have already practiced this skill in their courses, and still find it challenging. For example, in my first-year Master’s-level Personnel Psychology course, I have students lead discussion for over an hour on a set of several journal articles. Without that kind of practice, I’d be a bit worried – and this directly translates into the kind of work you’d need to do.
I often find that students are considering an online program because they want to balance graduate school against a job. Let me be absolutely clear: this is a terrible idea. I fully expect my graduate students to be studying and working on research 40-60 hours per week on top of any teaching responsibilities. Teaching, at its most intense, should be a commitment of 10 hours per week. That is the maximally permissible distraction. If you plan to hold an outside job to support yourself during graduate school instead of teaching, you should be working less than 10 hours per week. Most part-time jobs don’t permit this and “strongly encourage” employees to increase their hours, so it’s generally not a good idea to have such a job while in graduate school. Remember, you’re in graduate school to prepare for your career. Every class you take, every bit of research you conduct, is now precisely targeted at giving you better opportunities later. Distracting yourself from that goal in any way will only hurt you in the long run.
More practically, there is some question as to the quality of online programs. One 2010 report by SIOP found several disturbing features of online I/O programs. For example, most online I/O programs don’t report who their faculty are. Of the PhD programs identified offering online I/O graduate degrees, only one program (Walden University) did report this, and of its 22 faculty, only two (2!) held I/O PhDs. That raises many questions about the I/O expertise of those offering these degrees. Master’s programs had, on average, 1.5 I/O faculty. Only one online I/O Master’s program required a written thesis, which is necessary for anyone hoping to progress into a PhD program.
In the annual SIOP survey reported here, most employers additionally had negative or neutral opinions about students coming from online programs. For example, respondents tended to respond positively to “I tend to negatively evaluate a résumé if I notice that the applicant earned his or her graduate degree online” and “I feel that there IS a meaningful difference in the quality of training that one receives in an online graduate degree program in I-O Psychology versus a traditional, in-person program in I-O Psychology.”
In a more recent 2012 experimental study of I/O consulting firms, Rechlin and Kraiger found that applicants from online programs tend to be evaluated more negatively by those making I/O hiring decisions than applicants from brick-and-mortar institutions. They discovered this by presenting résumés of effectively identical candidates (but with different names, distracting information, etc.) while crossing several degree characteristics. They found that those with online degrees were less likely to be asked for an interview, less likely to be hired, and likely to receive a lower starting salary offer.
So what this really comes down to is priorities. Are you just trying to get the degree/credentials as a stepping stone for some other career goal, or are you trying to gain experiences that will help you create an I/O career? If you just want the degree, either type of program is probably fine. But if you’re trying to build a career within I/O psychology, at least for now, a brick-and-mortar institution is likely to put you on a superior trajectory, with better training, better opportunities, and better earning potential.
Valve Software is one of the videogame industry’s most prominent and successful companies. It is responsible for some of the industry’s best-known games and game platforms, including Half-Life, Team Fortress, Counter-Strike, Half-Life 2, Portal, and Left 4 Dead. It also runs the enormously popular Steam digital game distribution system, which allows PC gamers to purchase and download traditional retail and indie games. The company has a reputation for innovation and creativity, which is quite far from the reputation of other major players in the videogame industry, like publishing behemoth Electronic Arts.
Last week, someone leaked the New Employee Handbook from Valve, and it provides a fascinating glimpse into the structure of this enormously popular company. Contrary to how many videogame developers are structured, Valve is almost a 100% flat organization, despite boasting nearly 300 employees. Here’s what the handbook says:
That’s why Valve is flat. It’s our shorthand way of saying that we don’t have any management, and nobody “reports to” anybody else. We do have a founder/president, but even he isn’t your manager. This company is yours to steer—toward opportunities and away from risks. You have the power to green-light projects. You have the power to ship products.
In many ways, it’s the job characteristics model at its logical conclusion: complete and absolute autonomy, variety, identity, significance, and feedback for every employee. Each employee decides the projects s/he will work on (autonomy). If they get bored with one project, they can instantly switch to another with no job-related consequences (variety). Each employee tries to gain support from coworkers to work on their personal projects (feedback), which then can be brought to fruition with sufficient effort (identity, significance).
Commentary among videogame blogs on this approach has been mixed, with some suggesting that this approach could never work with other companies. Outside of videogaming and other creative enterprises, that’s probably true. Videogame development is itself essentially a creative process. If a product doesn’t ship on time, the company may lose money, but it is unlikely to lose clients. Players will buy the game regardless of when it comes out, as long as it is of high quality. And quality is clearly a major objective at Valve. The attitude toward how employees spend their time is quite clear in this statement on what happens when employees make mistakes:
Screwing up is a great way to find out that your assumptions were wrong or that your model of the world was a little bit off. As long as you update your model and move forward with a better picture, you’re doing it right. Look for ways to test your beliefs. Never be afraid to run an experiment or to collect more data.
For Valve, spending more time is always justified if it means you’ll do a better job. A management system like this in a service organization could falter easily; imagine a project due for a client tomorrow only for your coworker to say, “I’ve decided to drop this project; you’ll need to find someone else.” In videogame development, this is still a roadblock, but it does not grind business to a halt.
One might think that this everyone-runs-the-organization kind of approach would be risky. And to some extent, that’s true – Valve even admits it:
So if every employee is autonomously making his or her own decisions, how is that not chaos? How does Valve make sure that the company is heading in the right direction? When everyone is sharing the steering wheel, it seems natural to fear that one of us is going to veer Valve’s car off the road.
The key to solving this problem is finding excellent people to work at your company. And this is clearly objective #1 at Valve:
Hiring well is the most important thing in the universe. Nothing else comes close. It’s more important than breathing. So when you’re working on hiring—participating in an interview loop or innovating in the general area of recruiting—everything else you could be doing is stupid and should be ignored!
Valve goes so far as to encourage new employees to take part in the interview process immediately after they themselves were hired, so that they can get a better sense of the kinds of traits valuable to success at Valve. The interview process is not explained in much detail, but the discussion makes it clear that it is structured and carefully thought out. The hiring process actually echoes some sentiments well-explored in I/O psychology:
Hiring is fundamentally the same across all disciplines. There are not different sets of rules or criteria for engineers, artists, animators, and accountants. Some details are different—like, artists and writers show us some of their work before coming in for an interview. But the actual interview process is fundamentally the same no matter who we’re talking to.
The key at Valve, as in all good selection processes, is to identify qualified individuals who fit with the company. It also only hires the very best:
In these conditions, hiring someone who is at least capable seems (in the short term) to be smarter than not hiring anyone at all. But that’s actually a huge mistake. We can always bring on temporary/contract help to get us through tough spots, but we should never lower the hiring bar.
The company is 100% focused on recruiting and selecting a committed, high performing workforce. It keeps its selection ratio as low as possible – even if there is more work to do than there are employees – to ensure that the permanent employees it retains are excellent and unlikely to bring the company crashing down around them.
The third day had a good amount of coverage on technology-driven assessment. Our social media symposium was in the middle of the day, so there’s a little gap there. Wrap-up incoming soon!
| Time | Tweet |
| --- | --- |
| 7:59 AM | NeoAcademic: SIOP 2012: Day 3 Live-Blog (http://t.co/2xu5rnwM) #siop12 |
| 8:23 AM | Off to an 8:30AM session at #siop12 – really curious what attendance will look like! |
| 8:42 AM | At Assessing Video Resumes at #siop12 |
| 8:50 AM | Extraversion and narcissism predict attitudes toward video resumes, but not behavior #siop12 |
| 9:04 AM | Ethnic minorities perceive video resumes as being more fair #siop12 |
| 9:05 AM | Video resumes seen as being more fair than paper, but this disappears when controlling for education (more education = paper more fair) #siop12 |
| 9:07 AM | When applicant accent is strong in vid resumes, low prejudice reviewers rate job suitability much higher, not clear why #siop12 |
| 9:12 AM | Self promotion in vid resumes may result in lower evaluation of applicant (less credible, more manipulative and annoying) #siop12 |
| 9:18 AM | Men negatively view women in vid resumes when they engage in self promotion (violating gender norms) #siop12 |
| 9:33 AM | Our #siop12 symposium on social media is coming up at 10:30 in Elizabeth H – come on by! |
| 9:36 AM | @K_Anderson_Eval all vid resume presenters seem to indicate they are at the cutting edge, not much if any published work #siop12 #siop |
| 10:38 AM | Nearly 100 people at our social media symposium in Elizabeth H! #siop12 http://t.co/MC6rYL8Y |
| 12:13 PM | Ended up with over 125 attendees and a flood of questions! Running to next session on future of IO as an academic discipline #siop12 |
| 12:24 PM | @budworth several touched on the idea in symposium 229 – I think you might want to contact Annemarie Hiemstra or Marie Waung #siop12 |
| 12:31 PM | Some concern from panel about losing IO people to business schools, more career opps in business schools so more attractive #siop12 |
| 1:36 PM | Saw fascinating poster on progress bars on web surveys – short answer: USE THEM; increases focus and enjoyment #siop12 |
| 1:47 PM | At presentation on how to create homegrown tech for accomplishing IO objectives – sounds familiar! #siop12 |
| 1:53 PM | People of any level of tech expertise can start learning tech skills useful to IO; Excel is good entry point #siop12 |
| 2:16 PM | MTurk workers useful for small practitioner jobs – pay $.25 per response for pilot study work #siop12 |
| 4:34 PM | Met with collaborators and now with several thousand psychologists to listen to talk by Albert Bandura #psychology #siop12 |
| 4:38 PM | Bandura speaks! #siop12 #psychology http://t.co/ZXa6H2BL |
| 4:49 PM | Interesting idea by Bandura: individuals as coauthors of action with environment and behavior #siop12 #psychology |
| 7:35 PM | So ends #siop12 – safe travels everyone! |
Today was busy, with our learner control symposium taking up most of the afternoon (hard to tweet while talking!).
| Time | Tweet |
| --- | --- |
| 5:36 AM | At 5AM, it’s much harder to see why registering for the #siop12 5K was a good idea…. Blegh |
| 7:49 AM | Did not beat Paul Sackett’s time (my goal) but still a respectable 31 mins for the #siop12 5K. Not bad for my 1st 5K! |
| 8:38 AM | NeoAcademic: SIOP 2012: Day 2 Live-Blog (http://t.co/oP3fAbly) #siop12 |
| 10:19 AM | Whew… Recovered from Fun Run, met with potential collaborators and finally SITTING to hear about virtual org effectiveness #siop12 |
| 10:38 AM | Apparently virtual teams perform better when members are isolated (fewer distractions) #siop12 |
| 11:09 AM | Media richness changes how leadership is expressed in virtual teams #siop12 |
| 11:34 AM | At Chasing the Tortoise: Zeno’s Paradox in Technology-based Assessment #siop12 |
| 11:40 AM | Tech moves forward as research moves forward to understand it; never ending cycle #siop12 |
| 11:46 AM | Org context/purpose is inflexible while specific user experience is flexible; so when implementing new tech, consider context first #siop12 |
| 12:08 PM | No practical diffs between mobile and non mobile assessment – unless candidates took it on Nintendo Wii #siop12 (weird) |
| 12:11 PM | Less than 1% of applicants used mobile devices to complete assessment in sample of 1 million #siop12 |
| 12:17 PM | Older adults more likely to take assessments on mobile devices; younguns prefer computers #siop12 |
| 12:20 PM | Personality tests don’t have diff scores, but cog ability scores do – mobile scores lower d=-.5 (non experimental study) #siop12 |
| 12:54 PM | Zickar on cyber vetting and looking up job applicants on Facebook: just don’t do it. #siop12 |
| 9:08 PM | Just had a great #siop12 dinner with our awesome ODU alums! |
Day 1 was busy, but still got some tweets out. I’ve noticed that every year, I randomly run into people. Which is great, but it takes so much time from presentations! Archive of my Twitter activity is below:
| Time | Tweet |
| --- | --- |
| 7:58 AM | On the way to pick up my badge at #siop12… Then opening plenary! |
| 8:27 AM | I wonder if the background music pre-plenary is designed to put everyone back to sleep… #siop12 |
| 8:31 AM | So who knows the wifi password? #siop12 |
| 8:32 AM | About 4000ish registered for #siop12… Not a high year, but not bad |
| 8:43 AM | I don’t think I ever thought before about the ridiculous number of awards given by #siop – that’s a lot of committees! #siop12 |
| 9:18 AM | Congrats to all 23 new #siop fellows at #siop12, especially @OldDominionUniv’s own Debra Major!! http://t.co/yUCGYdY0 |
| 10:11 AM | RT @emilyalice21: the thought of seeing bandura @ the conclusion of #siop12 makes me feeel like ill be seeing elvis. |
| 10:27 AM | At Do You Speak Technology? at #siop12 |
| 10:51 AM | Good to see growing awareness among I/Os about the need for translation between IO and IT people #siop12 |
| 11:24 AM | Got some interesting advice on how to structure my technology class for IO PhD students to better communicate with IT #siop12 |
| 12:07 PM | At presentation on personality driven by many meta-analyses, it’s like a Minnesota explosion #siop12 |
| 12:18 PM | Seems self efficacy is quite complex, even in its relationships with itself #siop12 |
| 12:39 PM | Warmth seems to be a compound agreeableness/extraversion personality trait, predicts many customer reactions to service emps #siop12 |
| 1:39 PM | At new investigations in applicant reactions to websites research… Meta up first #siop12 |
| 1:59 PM | interviewing advice on website interacts with gender to predict org attractiveness (women want support, men want no support) #siop12 |
| 2:04 PM | Blogs potentially serve as a recruitment channel by portraying leaders as relationship-oriented #siop12 |
| 2:19 PM | Organizational website quality matters much more for lesser known companies; Fortune 100 can ride their reputations #siop12 |
| 2:36 PM | @lsinger it’s: Beeco & Raymark, Effectiveness of CEO blogs as a recruiting tool, which is presentation 59 at #siop12 http://t.co/leOs0Hon |
| 2:45 PM | Phew! Time for a break at #siop12 |
| 6:56 PM | Ended up chatting with a colleague and missed some sessions! Whoops!! But that is what #siop12 is all about |
As in 2010 and 2011, I’ll be live-blogging the SIOP conference, which begins Thursday, April 26 and runs through Saturday, April 28. This post contains a list of all the sessions that I am interested in attending, which are generally focused on technology, training, and assessment. My live blogging will likely occur on Twitter, with a permanent record stored here. The biggest enemy in such efforts is battery life!
In the graphic below, symposia that I am chairing and presenting in are colored red (plus the doctoral program chair’s meeting), and poster sessions are marked as “free” (little white bar on the left). As you can see, it’s a bit packed, so I won’t be attending all of these – and midday Friday is particularly bad – but this is everything I’m interested in. If you’d like to meet up at any of these events, or if you think I missed something that I should definitely attend, please let me know!
The first list contains symposia and interactive poster sessions. My sessions are bolded and highlighted in grey.
| No. | Day | Start | End | Session Title | Room | Session Type |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 4/26/2012 | 8:30 AM | 10:00 AM | Opening Plenary Session | Elizabeth C | Special Events |
| 8 | 4/26/2012 | 10:30 AM | 12:00 PM | I-O Bilingualism: Do you Speak Technology? | Edward AB | Panel Discussion |
| 27 | 4/26/2012 | 12:00 PM | 1:30 PM | Personality in I-O: New Meta-Analytic Contributions to Unexamined, Neglected Issues | Annie AB | Symposium/Forum |
| 48-3 | 4/26/2012 | 1:30 PM | 2:30 PM | Investigating Conflict Escalation in FTF and Virtual Teamwork Over Time | America’s Cup AB | Poster |
| 59 | 4/26/2012 | 1:30 PM | 3:00 PM | Back Into the Web: New Directions in Applicant Attraction Research | Madeline CD | Symposium/Forum |
| 92 | 4/26/2012 | 5:00 PM | 6:00 PM | Theme Track: Scholarly Reflections on the Past, Present, and Future of Discrimination | Elizabeth H | Special Events |
| 98-16 | 4/26/2012 | 6:00 PM | 7:00 PM | Unproctored Cognitive Ability Internet Testing: Does Cheating Pay Off? | Elizabeth D | Poster |
| 102 | 4/27/2012 | 8:00 AM | 10:00 AM | Addressing Unproctored Internet Testing Claims and Fears: Founded or Unfounded? | Elizabeth C | Symposium/Forum |
| 134 | 4/27/2012 | 10:30 AM | 12:00 PM | Virtual Organizational Effectiveness | Gregory AB | Symposium/Forum |
| 138-4 | 4/27/2012 | 11:30 AM | 12:30 PM | Emoticons at Work: Does Gender Affect Their Acceptability? | America’s Cup AB | Poster |
| 138-3 | 4/27/2012 | 11:30 AM | 12:30 PM | Applicants’ and Recruiters’ Perceptions of Social-Networking Web Sites in Selection | America’s Cup AB | Poster |
| 140 | 4/27/2012 | 11:30 AM | 1:30 PM | Chasing the Tortoise: Zeno’s Paradox in Technology-Based Assessment | Elizabeth C | Symposium/Forum |
| 143 | 4/27/2012 | 12:00 PM | 1:30 PM | Virtual Teams: Exploring New Directions in Research and Practice | Betsy BC | Symposium/Forum |
| 153 | 4/27/2012 | 12:00 PM | 2:00 PM | Current Research in Advanced Assessment Technologies | Ford AB | Symposium/Forum |
| 158-4 | 4/27/2012 | 12:30 PM | 1:30 PM | For Your Eyes Only? Reactions to Internet-Based Multimedia SJTs | America’s Cup AB | Poster |
| 170 | 4/27/2012 | 1:30 PM | 3:00 PM | Computerized Adaptive Testing: A Primer on Benefits, Design, and Implementation | Elizabeth C | Master Tutorial |
| 181-3 | 4/27/2012 | 3:30 PM | 4:30 PM | Reactions to Using Social Networking Web Sites in Preemployment Screening | America’s Cup AB | Poster |
| 199 | 4/27/2012 | 3:30 PM | 5:00 PM | Building a Science of Learner Control in Training: Current Perspectives | Madeline CD | Symposium/Forum |
| 203 | 4/27/2012 | 4:30 PM | 6:00 PM | Variations in Unproctored Internet Testing: The Good, Bad, and Ideal | Edward AB | Panel Discussion |
| 229 | 4/28/2012 | 8:30 AM | 10:00 AM | Assessing Video Resumés: Valuable and/or Vulnerable to Biased Decision Making? | Elizabeth C | Symposium/Forum |
| 249 | 4/28/2012 | 10:30 AM | 12:00 PM | The Impact of Social Media on Work | Elizabeth H | Symposium/Forum |
| 294 | 4/28/2012 | 1:30 PM | 3:00 PM | Applied Technology: The I-O Psychologist as Customer | Ford AB | Symposium/Forum |
| 316 | 4/28/2012 | 4:30 PM | 5:30 PM | Closing Plenary Session | Elizabeth C | Special Events |
This second list contains posters that I will drop by if I have time.
| No. | Day | Start | End | Session Title |
| --- | --- | --- | --- | --- |
| 23-21 | 4/26/2012 | 11:30 AM | 12:30 PM | Trust Development in Computer-Mediated Teams |
| 72-1 | 4/26/2012 | 3:30 PM | 4:30 PM | Work Environment Factors and Cyberloafing: A Follow-Up to Askew |
| 87-8 | 4/26/2012 | 4:30 PM | 5:30 PM | Communication in Virtual Teams: The Role of Emotional Intelligence |
| 87-13 | 4/26/2012 | 4:30 PM | 5:30 PM | Personality Predicts Acceptance of Electronic Performance Monitoring at Work |
| 123-3 | 4/27/2012 | 10:30 AM | 11:30 AM | Toward a Theory of Technology Embeddedness |
| 160-2 | 4/27/2012 | 1:00 PM | 2:00 PM | Perceptions of Internet Threats: Behavioral Intent to Click Again |
| 176-24 | 4/27/2012 | 2:00 PM | 3:00 PM | Commitment and Regulation in Web-Based Instruction |
| 234-29 | 4/28/2012 | 9:00 AM | 10:00 AM | Social Media’s Influence on Social Support, Efficacy, and Life Satisfaction |
| 258-12 | 4/28/2012 | 11:30 AM | 12:30 PM | The Effectiveness of Three Techniques for Detecting Faking |
| 278-32 | 4/28/2012 | 12:30 PM | 1:30 PM | Effects of Survey Progress Bars on Data Quality and Enjoyment |
| 284-1 | 4/28/2012 | 1:30 PM | 2:30 PM | A Validation Study of Tablet Use in a Medical Setting |
A recent study by Giumetti et al examines cyber incivility, which is defined as low intensity “rude and discourteous” behavior that takes place through an internet or intranet-based communications system (e.g. e-mail, chat, or Facebook). They found that those reporting having experienced cyber incivility were more likely to skip work, burn out, and report their intention to look for a new job.
Incivility differs from traditional interpersonal organizational deviance in that it is less severe and more casual. While a person engaging in interpersonal deviance might steal a coworker’s stapler in order to annoy them, incivility might be expressed simply by making seemingly off-hand negative comments about that coworker’s workspace. Deviance is much easier to detect because it is generally obvious and clearly purposeful; it is immediately clear that someone was acting inappropriately. Incivility, on the other hand, might be purposeful or it might not – it is difficult to know for sure, even for the victim.
As an example of cyber incivility, consider this scenario: a supervisor assigns work over the weekend to a subordinate. On Monday morning, the supervisor shoots off a quick e-mail that only says, “I hope you had a good weekend.” The subordinate is unclear on how to interpret this, as it might 1) be a half-hearted apology for assigning work over the weekend, 2) indicate that the supervisor didn’t remember or didn’t care about the extra work, or 3) represent the supervisor sarcastically rubbing in that s/he ruined the subordinate’s weekend. Thus, this is incivility; the supervisor should not have sent the e-mail at all, or, if it was not ill-intentioned, should have been clearer about the apology.
To examine their hypotheses, the authors collected surveys from two samples: 1) 407 university staff members and 2) 207 business school alumni. They modified a standard incivility questionnaire (the Workplace Incivility Scale) by adding the word “online” to the end of each statement. In addition to the effects of incivility on absenteeism, burnout, and intention to quit, the researchers also found these relationships to be moderated by neuroticism, i.e. the relationship between experienced incivility by a supervisor and burnout/intention to quit was stronger for people higher in neuroticism.
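The authors’ exact models aren’t reproduced in this post, but a moderator hypothesis like this is typically tested with an interaction term in regression. Here is a minimal sketch in Python with simulated stand-in data; all variable names and coefficients are hypothetical, not values from the study.

```python
# Moderation sketch: does neuroticism strengthen the incivility-burnout link?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy data standing in for the real survey responses (hypothetical).
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "incivility": rng.normal(size=n),
    "neuroticism": rng.normal(size=n),
})
df["burnout"] = (
    0.3 * df["incivility"]
    + 0.2 * df["neuroticism"]
    + 0.2 * df["incivility"] * df["neuroticism"]  # the moderation effect
    + rng.normal(size=n)
)

# 'incivility * neuroticism' expands to both main effects plus their
# product term; a significant product coefficient indicates moderation.
model = smf.ols("burnout ~ incivility * neuroticism", data=df).fit()
print(model.params)
```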
As a survey study, we can’t use this to conclude that supervisors acting rudely to their employees online causes these negative outcomes (it is equally plausible that there is a third variable predicting both), but it does explicitly link negative employee outcomes with the experience of a boss acting rudely online.
A few weeks ago, a new cheating scandal erupted at the University of Florida as 97 students in a 250-person course (39%) in Computer Science & Engineering were caught cheating on an online exam. Students were given the option to come clean before the presentation of evidence, in which case they faced reduced penalties. One student complained because she cheated in a way that students in the past had cheated, but she was caught while they were not.
The method of cheating detection, according to the course instructor, was perfect. Hidden markers were included from old exams so that any copy/pasting would result in those hidden markers being included in the student’s exam. These markers could not appear any way other than by copy/pasting. Thus, there may have been more cheaters than were detected, but there were definitely at least this many.
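The article doesn’t say what the hidden markers actually were, so the sketch below is only one plausible implementation of the idea: embed invisible zero-width characters in each exam’s text, so that copy/pasted answers carry them along while retyped answers cannot.

```python
# Hypothetical illustration of the "hidden marker" idea. The actual
# markers used in the UF course are not described in the article.
ZERO_WIDTH_MARKER = "\u200b"  # zero-width space: invisible when rendered

def watermark(question: str) -> str:
    """Insert the invisible marker after the first word of a question."""
    words = question.split(" ")
    words[0] += ZERO_WIDTH_MARKER
    return " ".join(words)

def copied_from_exam(submission: str) -> bool:
    """A pasted answer carries the marker; a freshly typed one cannot."""
    return ZERO_WIDTH_MARKER in submission

# Example: text copied from a watermarked question is flagged
q = watermark("Explain the difference between a stack and a queue.")
print(copied_from_exam(q))                                  # True
print(copied_from_exam("A stack is LIFO; a queue is FIFO."))  # False
```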
What caught my eye in this story was a comment by a student found by the reporter:
Julie Rothe, an 18-year-old finance and information systems freshman, said she plans to accept responsibility. But she will challenge the penalty, she said, because students cheated in years past.
“I’m really angry at the fact that students got away with this in earlier semesters,” she said. “We are taking the hit, and I believe that is unfair.”
According to a commenter, the online format was only adopted last semester, so this student’s premise is probably false anyway. But it still offers an intriguing anecdote in which to explore how people perceive “fairness” in decision-making.
In I/O Psychology, we talk about fairness in terms of organizational justice theory. This theory poses three types of justice:
- Distributive justice: rewards/punishments are distributed fairly
- Procedural justice: the process by which rewards/punishments are distributed is fair
- Interactional justice: adequate information about reward/punishment distribution is provided respectfully
As instructors, we think primarily about distributive justice. In this case, the students were caught cheating and they should be punished accordingly – end of story. To an academic, cheating is a breach of the most sacred trust between students and faculty: that students represent only their own work as their own. Violation of this trust should result in substantial penalties, up to and including expulsion, because the transgression is so extreme.
But to the student interviewed, this is not the primary concern. Instead, she is concerned with how the decision was made in the past. Although her facts may not be correct, we can summarize her thinking with, “In the past, students weren’t penalized for doing what I did. Therefore, this is unfair.” This is a judgment about procedural justice. Although she accepts responsibility for being caught, she believes that because cheater detection in the past was not like this, she should not be penalized severely.
Which is more valid? Ultimately, the punishment itself is what should be judged as fair or unfair, so the student’s position is not tenable. But we can still appreciate the “logic” of her position. It highlights that in organizational settings, it is perceptions of fairness that ultimately affect employee behavior and attitudes more than the actual fairness of decisions.
New research by Crede, Harms, Niehorster and Gaye-Valentine in the Journal of Personality and Social Psychology investigates the impact of using abbreviated personality measures. Short answer: don’t do it.
In their study, the researchers surveyed 437 employed people (collected via StudyResponse) and 395 undergraduates. Personality was assessed with common 1-item, 2-item, 4-item, 6-item, and 8-item measures, along with a variety of outcomes, including job performance, GPA, stress, and health behaviors.
Almost universally, longer measures resulted in higher correlations with outcomes. For example, Conscientiousness predicted job task performance at r = .6 with the 8-item measure (Saucier’s, in this case), with correlations as low as .2 for 1-item measures.
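This pattern is exactly what classical psychometrics predicts (my gloss, not an analysis from the paper): cutting items lowers reliability according to the Spearman-Brown formula, and lower reliability attenuates observed correlations.

```latex
% Spearman-Brown: reliability of a scale whose length is changed by factor k
\rho_{kk'} = \frac{k\,\rho_{11'}}{1 + (k - 1)\,\rho_{11'}}

% Correction for attenuation: the observed correlation shrinks with
% the unreliability of either measure
r_{xy}^{\mathrm{obs}} = r_{xy}^{\mathrm{true}}\,\sqrt{\rho_{xx'}\,\rho_{yy'}}
```

With a 1-item scale, reliability (and thus the observed correlation) can drop dramatically even when the true relationship is unchanged.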
This is most critical for personality researchers, who often rely on incremental variance over the Big Five to provide evidence of a new, unique personality trait. For example, they use a single-item Big Five measure and their new measure in a survey study, find incremental variance in job performance or life satisfaction for their new measure over the Big Five, and declare that they have found a new, useful personality trait. This study suggests that evidence for many such traits may be fallacious; instead, the incremental variance found is better explained by the lack of a reasonable multi-item Big Five measure for comparison.
The authors also find that the use of abbreviated personality measures increases both Type I and Type II statistical conclusion errors. Single-item measures are especially risky. Although the shortened length of the scales is attractive from a practical perspective, it is empirically unjustified – the loss of validity is simply not worth it.
All of this together suggests a very simple decision rule: don’t use abbreviated personality measures.
In a recent study appearing in the International Journal of Training and Development, researchers Saks and Burke discovered that the frequencies of behavioral and results-based training evaluation were related to actual transfer of training material. Or in other words, organizations that evaluated behavior changes and monetary benefits resulting from training tended to have better results from that training. In their words:
Overall, the results of this study suggest a training evaluation-transfer of training paradox: organizations are most likely to evaluate trainee reactions and learning, and yet only behavior and results criteria are significantly related to higher rates of training transfer.
The authors investigated this by surveying 150 members of a T&D association in Canada. They asked respondents to identify the extent to which employees actually made good use of organizational training initiatives and to report the degree to which their organization evaluated its training programs. Kirkpatrick’s 4-level training evaluation model (reactions, learning, behavior, and results) was used as a framework for these questions. On a scale of 1 (never) to 5 (always), and consistent with prior studies, the researchers found that evaluation of trainee reactions was by far the most common (M = 4.08), followed by learning (M = 2.85), behavior (M = 2.59) and results (M = 2.22).
Frequency of training evaluation predicted immediate transfer and far transfer (6 months and 1 year later). Correlations generally decreased with later evaluations, as would be expected. When looking at specific types of evaluative criteria, only behavior and results were statistically significantly related to transfer variables, with moderate effects (r = .28 to .43). Relationships with reactions and learning were much lower (r = .02 to .18).
This indicates that organizations that evaluate behavior and results tend to have better transfer, at least as rated by those responsible for implementing those training programs. As a survey study, these results do not speak to causation. While the act of evaluating may improve transfer, organizations that evaluate may simply pay more attention to their training programs in general than organizations that do not. The researchers controlled for organizational age and firm size, but this does not address the issue; more in-depth consideration of the source of variance in transfer is needed. This study also relied solely on T&D professionals’ impressions of their organizations, which may not reflect actual transfer. Research with real organizational transfer data is needed to address this limitation.
A recent article by Landers, Sackett and Tuzinski investigated how 32,311 applicants to managerial positions at a nationwide retailer responded to a personality test used for promotion and selection decisions. Up to 6% of the sample (nearly 2,000 applicants) distorted their responses on the personality test by responding with only the extreme ends of the scale, a phenomenon the authors labeled “blatant extreme responding” (BER). The percentage of applicants engaging in BER dropped about 2% after an interactive warning was implemented.
The logic of the applicants’ response strategy is easy to appreciate. On a multiple-choice personality survey, one approach to maximizing your score might be to respond only “strongly agree” or “strongly disagree” – in effect, the extreme responses. By identifying which answer was “best,” an applicant could get the maximum possible score on such a test. It’s important to note, though, that not all tests are scored like this; there are many where BER would result in a very low score.
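The paper’s precise operationalization of BER isn’t detailed in this post, so the sketch below is a hypothetical illustration of the general idea: flag respondents whose answers sit almost entirely at the scale endpoints.

```python
# Hypothetical sketch of flagging blatant extreme responding (BER).
# The scale endpoints and all-or-nothing threshold are stand-ins;
# the actual study's decision rule may differ.
def is_blatant_extreme(responses: list[int], scale_min: int = 1,
                       scale_max: int = 5, threshold: float = 1.0) -> bool:
    """Flag a respondent whose rate of endpoint responses meets the threshold."""
    extreme = sum(r in (scale_min, scale_max) for r in responses)
    return extreme / len(responses) >= threshold

# Example: every response at an endpoint -> flagged
print(is_blatant_extreme([1, 5, 5, 1, 5]))   # True
print(is_blatant_extreme([1, 3, 4, 2, 5]))   # False
```

An interactive warning like the one described below could then be triggered whenever a check of this kind fires on the first page of items.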
In this study, applicants were both internal (lower-level managers applying for promotions) and external (applicants from outside the organization). Among the internal applicants, a rumor spread among managers that BER would help them beat the test. This resulted in an increase in BER over the first 12 months that the test was available.
At the 12-month mark, the organization implemented an interactive warning – a process only possible in online testing. If BER was detected on the first page of the personality test, a warning popped up. This was successful in reducing the rate of BER at test completion (see the difference between the dashed and solid lines in the figure above).
So, the take-home? The authors describe a real-world setting in which applicants lied on a personality test. This lying was detectable, and warnings reduced the effect. If you’re implementing online personality testing, maybe you should use warnings too!
A recent study by Fang, Wen and Pavur investigated the extent to which the reputation of survey sponsors (e.g. corporations) and technology providers (e.g. SurveyMonkey) impact response rates. They discovered an interaction between the two and concluded, “A sponsoring corporation with a weak reputation who contracts with a survey provider having a strong reputation results in increased participation willingness from potential respondents if the identity of the sponsoring corporation is disguised in a survey.”
They discovered this with a series of 3 fairly small experiments (N=100, 100, 200) with Chinese participants. In each experiment, “strong reputation” providers were picked from well-known Chinese firms while the “weak reputation” providers were fictitious. So “strong” and “weak” might be better described as “having a strong reputation” versus “not having any reputation”.
In each experiment, participants were exposed to both strong and weak providers, so all results here are within-subject. The surveys were counterbalanced so as to neutralize order effects. Participants looked at several surveys and then rated their willingness to take each. So what did they do in each?
- Experiment 1 examined the effect of sponsoring program corporation reputation and found a small positive effect of corporate reputation (d = .17).
- Experiment 2 examined the effect of technology provider reputation and found a moderate positive effect of provider reputation (d = .30).
- Experiment 3 examined both simultaneously and found both main effects and an interaction between the two. When the sponsoring corporation reputation was weak, technology provider did not matter. When the sponsoring corporation was strong, a strong technology provider reputation was even more beneficial.
So as far as I can tell, the broad conclusion stated above was not actually tested anywhere. Instead, what we can safely conclude is that technology provider reputation is most important, while sponsor reputation also plays a role, and a good reputation on both counts is better still. As the study operationalized low reputation as “no reputation,” the effect may be even larger in comparison to institutions with genuinely poor reputations. And finally, it is unclear to what extent a population of Chinese respondents is similar to those of other nationalities. In the United States, for example, corporate sponsorship might be met with more skepticism.