
Does Gamifying Survey Progress Improve Completion Rate?

2013 November 7

In an upcoming issue of Social Science Computer Review, Villar, Callegaro, and Yang1 report a meta-analysis of the impact of progress bars on survey completion. They identified 32 randomized experiments from 10 sources in which a control group (no progress bar) was compared to an experimental group (progress bar). Among the experiments, they distinguished three types of progress bars:

  • Constant. The progress bar increased in a linear, predictable fashion (e.g. with each page of the survey, or based upon how many questions were remaining).
  • Fast-to-slow. The progress bar increased quickly at the beginning of the survey but slowly at the end, in comparison to the “constant” rate.
  • Slow-to-fast. The progress bar increased slowly at the beginning of the survey but quickly at the end, in comparison to the “constant” rate.

From their meta-analysis, the authors concluded that constant progress bars did not substantially affect completion rates, fast-to-slow bars reduced drop-offs, and slow-to-fast bars increased drop-offs.

However, a few aspects of this study don’t make sense to me, leading me to question this interpretation. For each study, the researchers report calculating the “drop-off rate” as the ratio of people leaving the survey early to people starting the survey (they give this as a formula, so it is very clear which quantity is divided by which). Yet the mean drop-off rates reported in the tables are always above 14. Taken literally, that would mean 14 people dropped out for every person who started the survey, which obviously doesn’t make any sense. My next thought was that perhaps the authors reported percentages as whole numbers (e.g., a tabled 14 is really 14%) or flipped the ratio (e.g., 14 is really 1:14 rather than 14:1).
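
To make the arithmetic concrete, here is a minimal sketch in Python of the drop-off rate as the paper defines it, using made-up counts rather than any figures from the article; it also shows what a tabled value of 14 would have to mean under each of the two miscoding hypotheses.

```python
# Hypothetical counts, purely for illustration -- these are not figures from the paper.
started = 500   # respondents who began the survey
dropped = 70    # respondents who left before completing it

# Drop-off rate as the authors define it: drop-offs divided by starters.
drop_off_rate = dropped / started
print(drop_off_rate)  # 0.14 -- a proportion, so it can never exceed 1

# What a tabled value of 14 would mean under each miscoding hypothesis:
rate_if_percentage = 14 / 100  # 0.14, if the tables silently report percentages
rate_if_flipped = 1 / 14       # ~0.07, if "14" is actually starters per drop-off
print(rate_if_percentage, rate_if_flipped)
```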

However, the minimum and maximum drop-off rates reported for the constant condition are 0.60 and 78.26, respectively, which rules out a simple, consistent “typo” explanation. Taken literally, these values imply that in some studies far more people dropped out than began the survey, while in others the reverse was true. So something is miscoded somewhere, and it appears to have been done inconsistently. It is unclear whether this miscoding is confined to the tabular reporting or whether it carried through to the analyses.
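
As a rough plausibility check (treating the two tabled extremes as opaque numbers, not re-deriving anything from the underlying studies), the sketch below tests each value against the stated formula: 78.26 is impossible as a proportion of starters but fine as a percentage, while 0.60 works as a proportion but would be an unusually low 0.6% as a percentage.

```python
def plausible_as_proportion(x):
    # A rate computed as drop-offs / starters must lie between 0 and 1.
    return 0.0 <= x <= 1.0

def plausible_as_percentage(x):
    # The same quantity expressed as a percentage must lie between 0 and 100.
    return 0.0 <= x <= 100.0

# The tabled extremes for the constant-progress-bar condition.
for value in (0.60, 78.26):
    print(value,
          "proportion ok" if plausible_as_proportion(value) else "proportion impossible",
          "percentage ok" if plausible_as_percentage(value) else "percentage impossible")
```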

A forest plot is provided to give a visual indication of the pattern of results; however, the type of miscoding described above would reverse some of the observed effects, so the plot does not seem reliable. A traditional moderator analysis (as we would normally see reported from a meta-analysis) was, for some reason, not presented in tabular form. Instead, various subgroup analyses (expected versus actual duration, incentives, and survey duration) are embedded in the text. With the coding problem described earlier, these are impossible to interpret as well.

Overall, I was saddened by this meta-analysis, because the researchers have an interesting dataset to work with, yet the coding errors cause me to question all of their conclusions. Hopefully an addendum or correction addressing these issues will be released.

  1. Villar, A., Callegaro, M., & Yang, Y. (2014). Where am I? A meta-analysis of experiments on the effects of progress indicators for web surveys. Social Science Computer Review, 1-19. DOI: 10.1177/0894439313497468