In a new article in Simulation & Gaming, Bedwell and colleagues do what the game studies literature has largely failed to do for games in general: they develop a taxonomy that defines what a serious game is. This effort provides a road map for researchers exploring how games can contribute to learning.
The definition of “game” is an area of surprisingly vehement debate. Many researchers and game designers have their own definitions, ranging from Sid Meier’s “a game is a series of interesting choices” to attempts to derive a definition from the largely humanities-based game studies literature. This is a problem for the serious games researcher: without a clear definition of “game,” there is no way to systematically and scientifically explore which aspects of games contribute to learning. What one researcher calls “challenge,” another might call “fun,” and what that researcher calls “fun,” a third might describe as “contributing to the creation of flow experiences.” This conceptual overlap between researchers often produces seemingly contradictory results: a study finding a positive effect of “fun” might, in another researcher’s terms, be showing a positive effect of flow or of challenge. The result is a highly inefficient scientific process.
Bedwell and colleagues begin to solve this problem with an empirical study of game attributes aimed at identifying where researchers’ conceptualizations overlap. First, they recruited 65 self-identified “game experts” from various online sources, including the forums of The Escapist and Penny Arcade, video game listservs, and internal distribution lists at game developers. Next, these 65 experts (50 players and 15 developers) completed a card sort, in which they sorted the 19 game attributes related to learning identified by previous researchers into categories of their own devising. Finally, they completed a post-sort survey to capture demographic information. To analyze these data, Bedwell and colleagues conducted a cluster analysis, a procedure that groups items according to how similarly they were sorted.
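To make the analysis step concrete, here is a minimal sketch of how card-sort data can feed a cluster analysis: attributes that experts frequently place in the same pile are treated as “close,” and that closeness can then drive any standard clustering routine. The attribute names and sort data below are illustrative placeholders, not the study’s actual 19 attributes or its real data, and this is a generic approach rather than the specific procedure Bedwell and colleagues used.

```python
# Sketch: deriving attribute distances from card-sort data.
# Each expert sorts attributes into piles; attributes that often
# share a pile get a small distance, suitable for clustering.
from collections import defaultdict
import itertools

# Hypothetical sorts: one list of piles per expert.
sorts = [
    [["feedback"], ["challenge", "fun", "flow"], ["rules"]],
    [["feedback", "rules"], ["challenge", "flow"], ["fun"]],
    [["feedback"], ["challenge", "fun"], ["flow"], ["rules"]],
]

# Count how often each pair of attributes lands in the same pile.
co = defaultdict(int)
for piles in sorts:
    for pile in piles:
        for pair in itertools.combinations(sorted(pile), 2):
            co[pair] += 1

def distance(a, b):
    """1 = never sorted together; 0 = always sorted together."""
    key = tuple(sorted((a, b)))
    return 1 - co[key] / len(sorts)
```

A distance function like this is exactly what hierarchical clustering routines (e.g. `scipy.cluster.hierarchy.linkage` on a condensed distance matrix) expect as input; here, `distance("challenge", "flow")` comes out small because those two attributes were co-sorted by two of the three hypothetical experts, while `distance("fun", "rules")` is 1 because they never shared a pile.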
Interestingly, game developers and game players did not differ in their sorting. The analysis supported a nine-category system:
- Action Language. The method by which information is relayed to the game (joystick, keyboard, mouse, etc.).
- Assessment. The extent to which feedback is provided to players on their progress.
- Conflict/Challenge. The degree to which the player is challenged, and the method for manipulating that challenge.
- Control. The degree to which players can affect their environment.
- Environment. The location chosen for the game.
- Game Fiction. The extent to which the game environment represents reality.
- Human Interaction. The degree to which players interact with other people.
- Immersion. The extent to which the game uses engrossing effects (audio, video, etc.) to draw the player in.
- Rules/Goals. The degree to which the game presents clear objectives.
With this system, the authors hope that “research on game attributes and their effects on learning can progress in a purposeful and unified fashion” (p. 753). Somehow, I don’t expect it will be quite that easy, but at the very least, this provides a valuable starting point for such efforts.