
Cheating Base Rates

2010 April 20

One of the most difficult aspects of studying cheating on selection/promotion/training tests and other dishonest behavior in work or school is figuring out what percentage of employees/students actually engage in such behavior.

Statistical power is often low in research on organizations because sample sizes are restricted to whatever number of employees the researcher has access to.  Now imagine that only a small percentage of those already small samples are the people you are really interested in.  Would it be worth it to conduct research on cheating with a sample of 400?  How many people engaging in cheating activities would you really get?  This is where base rates come in.
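
To make that concrete, here is a minimal sketch in Python (purely illustrative – the sample of 400 comes from the question above, and the 1% to 3% base rates are the figures discussed below), assuming a simple binomial model of who cheats:

    # Expected number of cheaters in a small organizational sample,
    # assuming cheating follows a simple Binomial(n, p) model.
    sample_size = 400  # illustrative sample from the question above

    for base_rate in (0.01, 0.02, 0.03):
        expected = sample_size * base_rate  # mean of the binomial
        sd = (sample_size * base_rate * (1 - base_rate)) ** 0.5  # its standard deviation
        print(f"base rate {base_rate:.0%}: expect ~{expected:.0f} cheaters (SD ~{sd:.1f})")

At a 1% base rate that is only about 4 cheaters out of 400 – far too few to learn anything about them.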

In a recent article from Network World, Professor Ed Lazowska, the Computer Science and Engineering department chair at the University of Washington, says that roughly 1% – 2% of submitted computer science assignments involve cheating.  Computer science is a particularly interesting case because of the nature of the training material; computer science students submit programs and algorithms via computer, where they can be easily examined for cheating through automated methods.  This makes tracking cheating, even with such a low base rate, much easier – in this case, by automatically examining the assignments of roughly 2,750 students per year.
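
The article doesn't describe the actual tooling used, so the following is only an illustrative sketch of the general idea – a crude token-overlap comparison between pairs of submissions (real plagiarism detectors use far more robust fingerprinting):

    # Illustrative only: flag pairs of submissions whose token overlap is suspiciously high.
    def token_similarity(code_a: str, code_b: str) -> float:
        """Jaccard similarity over the sets of whitespace-separated tokens."""
        tokens_a, tokens_b = set(code_a.split()), set(code_b.split())
        if not tokens_a or not tokens_b:
            return 0.0
        return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)

    submissions = {
        "student_a": "total = 0\nfor i in range(10):\n    total += i",
        "student_b": "total = 0\nfor i in range(10):\n    total += i",
    }
    names = list(submissions)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            score = token_similarity(submissions[a], submissions[b])
            if score > 0.9:  # arbitrary illustrative threshold
                print(f"review {a} and {b}: similarity {score:.2f}")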

The 1% – 2% figure is portrayed in the article as a high estimate, but is actually lower than the estimate that two colleagues and I found just a few years ago (Retesting after initial failure, coaching rumors, and warnings against faking in online personality measures for selection).  There, we found evidence that roughly 3% of people completing online personality tests for selection purposes engaged in a particular purposeful attempt to increase their scores (and thus their probability of getting hired).  But it’s impossible at this point to determine why those two estimates differ, to what extent these base rates vary by environment, how test-taker motivation factors in, and so on.

So how many cases do we need to investigate what psychological attributes might be relevant to cheating?  Well, the depressing thing is that this research gives us a fairly small range of low base rates for cheating behavior.  If you want 100 cheaters to study, it looks like you need an overall sample of somewhere between 3,000 and 10,000 people.  Good luck!
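
For anyone who wants to check that arithmetic, here is the same calculation spelled out (the target of 100 cheaters and the 1% to 3% base rates are the figures from this post):

    # Rough sample size needed to capture a target number of cheaters in expectation.
    target_cheaters = 100

    for base_rate in (0.01, 0.02, 0.03):
        required_n = target_cheaters / base_rate  # expected count, not a guarantee
        print(f"base rate {base_rate:.0%}: need roughly {required_n:,.0f} people")

At 1% that works out to 10,000 people and at 3% to about 3,333, which is where the 3,000 to 10,000 range comes from.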

2 Responses
  1. April 20, 2010

    It’s interesting to think about the similarities and differences between cheating on a personality test and cheating in other contexts such as on an assignment.

    One thought I had:
    To what extent is cheating a binary variable?
    On personality tests there is certainly a spectrum from mild positive spin to complete fabrication.
    Likewise, on assignments there are cases ranging from borderline ones, like getting inspiration from reading another student’s code, to extreme ones, like submitting another student’s assignment as your own.

    In some of my lab-based studies of within-subject change in personality test scores between honest and applicant conditions, a much larger percentage of participants (approx. 5 to 15%) showed a small increase in scores in the applicant condition, whereas only a few (approx. 1 to 3%) showed evidence of extreme faking.

  2. April 21, 2010

    That’s actually an issue we had to address in that paper – it’s difficult to distinguish between outright cheating and other patterns of response distortion that are not necessarily dishonest, like responding in a socially desirable fashion. There’s definitely a psychological distinction, and I think that distinction is binary: people are either trying to “game” the test or they aren’t. Whether they are successful at it is another matter.

    The pattern we found in the field data set referenced above was actually substantial enough that we could clearly see that some people were engaging in a purposeful attempt to enhance their scores. But we also had 17,000 cases. The problem was distinguishing between those engaging in that response distortion pattern deliberately (cheaters) and those naturally responding in that pattern. We also had the added benefit of retesting – if test-takers got a score they thought was too low, they were able to re-take the test, and the score change between Test and Retest was substantial – about 1.4 SD – with a much higher base rate of cheating the second time around.
