One of the questions faced by survey designers is presentation order. Does it matter if I put the demographics first? Should I put the cognitive items up front because they require more attention? If I put 500 personality items in a row, will anyone actually complete this thing? Some recent research in the Journal of Business and Psychology reveals that placing demographic items at the beginning of a survey increases the response rate to those items compared to placing them at the end. More importantly, placement did not affect scores on the three noncognitive measures that came afterward — in this case, measures of leadership, conflict resolution, and culture and goals.
To investigate this, Teclaw, Price, and Osatuke conducted a large survey (roughly N = 75,000) on behalf of the Veterans Health Administration. Respondents were randomly assigned to one of three surveys. Of those randomly assigned to the third survey, participants received one of seven scales, three of which were those listed above, resulting in a sample size for this study of N = 4508. Respondents completing each of these three scales were in turn randomly assigned to complete demographic items at either the beginning or the end of their survey. The authors compared response rates, considering both skipped items and “Don’t Know” responses to be a lack of response, and included all respondents who opened the survey regardless of how many questions they actually completed.
Response rates were indeed different. On the first of the three focal surveys, the response rate to demographics placed at the beginning of the survey was around 97%, while the response rate to demographics placed at the end was around 87%. While this isn’t a huge difference, if demographics are involved in your primary research questions (and they often are), then placing them first may be worthwhile.
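To get a feel for how large a 97% vs. 87% gap is statistically, here is a minimal sketch of a two-proportion z-test. The counts below are hypothetical (the paper does not report per-condition cell sizes for this comparison; an even split is assumed purely for illustration):

```python
import math

def two_proportion_z(responded_a, n_a, responded_b, n_b):
    """Two-proportion z-test statistic for a difference in response rates,
    using the pooled proportion for the standard error."""
    p_a = responded_a / n_a
    p_b = responded_b / n_b
    p_pool = (responded_a + responded_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts approximating ~97% vs. ~87% response
# with an (assumed) even split of respondents across conditions
z = two_proportion_z(responded_a=728, n_a=750, responded_b=653, n_b=750)
print(round(z, 2))
```

Even with these modest hypothetical cell sizes, the z statistic lands well beyond conventional significance thresholds, which is why a 10-point gap matters in practice even if it doesn’t look dramatic at a glance.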
What’s especially interesting about this is that conventional wisdom is to place demographic items at the end. The argument that I have most often heard is that priming your survey respondents with their demographic characteristics (e.g. race) will lead them to respond differently than they otherwise would have. This is especially salient in the context of race-based stereotype threat, the tendency for minority group performance on cognitive measures to decrease as a result of anxiety associated with confirming negative stereotypes about intelligence. So what should we do?
There are two important facts about this study that limit its applicability. First, all measures investigated were noncognitive, i.e. survey items. Stereotype threat typically applies in contexts where there is a “right answer,” for example, knowledge tests or intelligence tests. So the placement of demographic items on such measures may still be important. Second, the study did not control for cognitive fatigue – survey length was confounded with experimental condition. Was the difference due to the demographic items being at the beginning vs. the end, or was it simply because respondents had already responded to many, many items and were bored/tired/at a loss for time/etc.? Would the effect still hold with a 20-item survey? A 50-item survey? We don’t really know.
If you’re giving a noncognitive voluntary survey, you probably care specifically about demographics and want to ensure they are answered more than any other items. For now, it appears to be safe to put demographic items up front if that is your goal. Whether your survey is 20 items or 200 items, moving the demographic items is low cost. But if your survey has cognitively loaded items, I’d still recommend against it.