
How to Evaluate Learning in Virtual Worlds and 3D Environments

2013 February 20

In a recent issue of the Journal of Virtual Worlds Research, Landers and Callan [1] examine how to appropriately evaluate learning that takes place in virtual worlds (VWs) and other 3D environments. In doing so, they develop a new model of training evaluation specific to the virtual world context, integrating several classic training evaluation models with research on trainee reactions to technology. The publication of this model is critical for two reasons. First, much previous research in this area does not consider the antecedents of learning in this context, leading researchers to conclude that VW-based training is ineffective in comparison to traditional training when the opposite may be true. Second, organizations and educators often implement VWs unsuccessfully and blame the VW for the failure when subtler contextual factors are likely responsible, a pattern that may help explain the virtual worlds bubble I discussed a few years ago. Considering that VWs are appearing in research as viable alternatives to traditional, expensive, simulation-based training, the time for VWs may be at hand. The model exploring these issues appears below.

[Figure: Landers and Callan's model of training evaluation in virtual worlds]

Though framed in terms of organizational outcomes, their model of evaluation could be equally valuable in educational settings.  From their integration, the authors make five practical recommendations to those attempting such evaluation (from p. 15, with citations removed):

  1. All four outcomes of interest should be assessed if feasible: reactions, learning, behavioral change, and organizational results.  This will provide a complete picture of the effect of any training program.
  2. Attitudes towards VWs, experience with VWs, and VW climate should be measured before training begins, and preferably before training is designed. This will give the training designer or researcher perspective on what might influence the effectiveness of their training program before even beginning. If learners have negative attitudes towards VWs, training targeted at improving those attitudes should be implemented before VW-based training begins. Without doing so, the designer risks the presence of an unmeasured moderation effect that could undermine their success (see the first sketch after this list). Even if attitudes are neutral or somewhat positive, attitude training may improve outcomes further.
  3. If comparing VW-based and traditional training across multiple groups, take care to measure and compare personality, ability, and motivation to learn between groups. Independent-samples t-tests or one-way ANOVA can be used for this purpose (see the second sketch after this list).
  4. Consider the organizational context to ensure that no macro-organizational antecedents (like VW climate) are impacting observed effectiveness.  Measure these well ahead of training, if feasible.
  5. Provide sufficient VW navigational training such that trainees report they are comfortable navigating and communicating in VWs before training begins.  A VW experience measure or a focus group can be used for this purpose.
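To make recommendation 2 concrete, here is a minimal sketch (not from the article) of how one might test whether pre-training attitudes toward VWs moderate the effect of training condition on learning, using a moderated regression with an interaction term. The variable names (condition, vw_attitude, learning) and the simulated data are hypothetical and exist only for illustration.

```python
# Sketch: moderated regression to check whether VW attitudes moderate
# the effect of training condition on learning. All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    # 0 = traditional training, 1 = VW-based training
    "condition": rng.integers(0, 2, size=n),
    # self-reported attitude toward VWs (e.g., 1-7 scale), measured pre-training
    "vw_attitude": rng.uniform(1, 7, size=n),
})
# Simulated learning scores with an interaction built in for illustration
df["learning"] = (
    60
    + 3 * df["condition"]
    + 1.5 * df["vw_attitude"]
    + 2 * df["condition"] * df["vw_attitude"]
    + rng.normal(0, 5, size=n)
)

# A significant condition:vw_attitude coefficient would suggest that
# attitudes toward VWs moderate the training effect on learning.
model = smf.ols("learning ~ condition * vw_attitude", data=df).fit()
print(model.summary())
```

If the interaction term is meaningful, observed differences between VW-based and traditional training are partly a function of learner attitudes rather than the medium itself, which is exactly the unmeasured effect the authors warn about.

For recommendation 3, the sketch below shows the group-equivalence checks the authors name: an independent-samples t-test for two groups and a one-way ANOVA for three or more, using scipy. The column names (group, motivation, ability) and the simulated data are again hypothetical.

```python
# Sketch: comparing trainee characteristics across groups before
# attributing outcome differences to the training medium. Data are simulated.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "group": rng.choice(["vw", "traditional", "blended"], size=150),
    "motivation": rng.normal(5, 1, size=150),   # motivation to learn, 1-7 scale
    "ability": rng.normal(100, 15, size=150),   # e.g., a cognitive ability score
})

# Two groups: independent-samples t-test (Welch's, not assuming equal variances)
vw = df.loc[df["group"] == "vw", "motivation"]
trad = df.loc[df["group"] == "traditional", "motivation"]
t, p = stats.ttest_ind(vw, trad, equal_var=False)
print(f"t-test on motivation (VW vs. traditional): t = {t:.2f}, p = {p:.3f}")

# Three or more groups: one-way ANOVA
groups = [g["ability"].to_numpy() for _, g in df.groupby("group")]
F, p = stats.f_oneway(*groups)
print(f"One-way ANOVA on ability across groups: F = {F:.2f}, p = {p:.3f}")
```

Non-significant differences on these pre-training characteristics lend some confidence that any later differences in learning outcomes are not simply due to one group starting out more able or more motivated than the other.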
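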

For more detail on how to go about such evaluation, this article is freely available (“open access”) to the public at the Journal of Virtual Worlds Research website.

  1. Landers, R. N., & Callan, R. C. (2012). Training evaluation in virtual worlds: Development of a model. Journal of Virtual Worlds Research, 5(3). http://journals.tdl.org/jvwr/index.php/jvwr/article/view/6335