In the most recent issue of the Journal of Media Psychology, Ross and Weaver1 investigate how experiencing griefing early in an online multiplayer game (like World of Warcraft) causes some new players to grief others themselves. The authors attribute this primarily to observational learning – people joining an online multiplayer environment for the first time are likely to interpret the way they are treated by others as a model for how to treat others.
This means that the introduction of abusive players into a multiplayer game in effect creates a destructive loop – the abusive player mistreats new players, those new players gain experience and mistreat the next round of new players, and so on. The initial griefing is also likely to result in greater frustration, less enjoyment, and greater state aggression, all of which would feed into this loop. The conclusion: griefing does not just negatively affect the griefed; it may begin a chain of abuse that lasts through generations of new players.
To test this, the researchers had 68 men and women play six 2-minute rounds of the online game Neverwinter Nights. Participants were randomly assigned to be griefed in either the first or fourth round of play. They were also randomly assigned to face either a new player or the griefer in the round immediately following the griefing, to see whether the negative effects would take the form of revenge (on the griefer) or would generalize to others. Participants were led to believe that other study participants were in the game with them; in reality, their opponent was a research assistant. In the griefing condition, this research assistant, playing a Level 20 character, would repeatedly (and unfairly) kill the participant’s Level 5 character. Given this arrangement, it was literally impossible for the participant to damage the research assistant’s character, while the research assistant could kill the participant’s character in one hit. The article mentions “corpse camping” (staying near a player’s corpse, waiting for them to return, weak, to kill them again) and “spawn camping” (killing players the moment they spawn, when they are disoriented) as examples of this griefing.
The researchers found support for most of their hypotheses:
- Participants who are griefed during the first encounter of a game are more likely to grief other players in subsequent encounters than those who are not initially griefed.
- Participants are more likely to grief the same opponent than a different opponent, and this difference is greater in the condition where players have an expectation of cooperation than in the condition where they have an expectation of griefing.
- Players in a multiplayer game who are griefed experience more frustration, less enjoyment, and higher levels of state aggression than those who are not griefed.
- Ross, T., & Weaver, A. (2012). Shall we play a game?: How the behavior of others influences strategy selection in a multiplayer game. Journal of Media Psychology: Theories, Methods, and Applications, 24(3), 102–112. DOI: 10.1027/1864-1105/a000068 [↩]
Although the title of this article may seem a little odd, that is exactly the finding that Matschke and colleagues1 describe in a new article appearing in Cyberpsychology, Behavior, and Social Networking. This was not precisely what the authors set out to discover, however. What they actually wanted to know was this: Do people remember more from wikis when they believe the wiki was written by members of their in-group? The study was conducted around the height of patriotic German support for the national football (soccer) team in the 2010 World Cup, so the researchers decided to use “written by a fan of the German team” as the in-group.
The study went like this: seventy German-speaking university students were brought in for a “study about wikis.” First, they read a textbook entry on fibromyalgia. Next, they were told they would contribute to a wiki entry on fibromyalgia that all previous study participants had also worked on. In reality, this was not true – all participants saw the same wiki. After their affiliation with the national team was verified with an Implicit Association Test (IAT) and they received a bit of training on editing wikis, participants were told to create a wiki username that indicated they were fans of the national team. Participants were then randomly assigned to either an in-group condition (where wiki entries were portrayed as written primarily by fans of the German team) or an out-group condition (where wiki entries were portrayed as written primarily by rivals of the German team). The authors explain:
The nicknames either indicated that four out of five previous authors were fans (ingroup condition, e.g., ‘‘black-red-gorgeous’’ in reference to the German national flag) or foes (outgroup condition, e.g., ‘‘black-red-gruesome’’) of the national team.
Next, participants spent 50 minutes attempting to increase the quality of the wiki entry. Finally, they completed two post-experiment measures: knowledge integration (which tested the degree to which they integrated information from other authors on the wiki with what they read) and factual knowledge.
Although the study doesn’t test it directly, this speaks to online class projects – for example, when students are assigned to work on a wiki collaboratively. Do they actually learn from the activity, and what affects their learning?
As demonstrated in this study, shared affiliations with other learners are potentially important. Students in the in-group condition showed greater knowledge integration and factual knowledge than those in the out-group condition. Thus, it appears that the success of collaborative online learning efforts is affected by interpersonal/social factors. Especially compelling is that the manipulation was not very “strong” in the classical sense: the only communication about group membership was through the form of the username. However, group membership was made exceptionally salient to participants by instructing them to give themselves an in-group nickname and administering an IAT. This was also a lab study, with paid participants, so it’s unclear how much this affects learners in real situations, with external demands (like getting a good grade, the unfortunate goal of most undergraduates).
At minimum, we can conclude from this study that social factors may influence the degree to which people learn in collaborative online environments. Unfortunately, a great deal more work needs to be done to determine whether this is something we should worry about in “the real world.”
- Matschke, C., Moskaliuk, J., & Kimmerle, J. (2012). The impact of group membership on collaborative learning with wikis. Cyberpsychology, Behavior, and Social Networking. DOI: 10.1089/cyber.2012.0254 [↩]
Over the last week or so, I wrote a web-based tool that automatically generates statistics datasets and worked-out solutions. It creates and displays a dataset, a completed solution, and the results of most intermediary computational steps. It is freely available online for instructors and students: http://rlanders.net/datasets.php
As an example, if you select “Paired Samples t” with “n = 10” and an outcome type of “Survey”, it will output:
- Instructions on what to do with the dataset generated
- A dataset with Time 1 and Time 2 “Survey” variables (min = 1, max = 5, mean ≈ 3, SD ≈ 1)
- A “completed” dataset containing calculated difference scores and squared difference scores
- Sum of d, Mean of d, Sum of d^2, (Sum of d)^2, sd of d, se of d, critical t statistic, degrees of freedom, observed t, CI lower and upper bounds
- A box with all important “final” output: the research question, null and alternative hypotheses, alpha, critical value, journal-type reporting of CI and t with precise p-value, unstandardized effect size (difference score), standardized effect size (Cohen’s d) and a NARRATIVE CONCLUSION including interpretation of the effect size. As an example: “Conclusion: Reject the null and accept the alternative. The difference is statistically significant. The Survey variable decreases over time. If we assume this sample to represent the population, we would expect 95% of sample means to fall between -1.68 and 0.08. On average, the Survey variable was 0.80 lower at Time 2. The difference over time was medium.”
This is customized to each test. The program will also randomly choose between directional and non-directional tests when directional tests are plausible.
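For the paired-samples t output, the intermediary values in the list above follow the standard textbook formulas. Here is a minimal sketch in JavaScript (the same language the generator is written in); the Time 1/Time 2 scores below are invented for illustration and are not produced by the tool:

```javascript
// Hypothetical Time 1 / Time 2 "Survey" scores for n = 10 participants.
const time1 = [4, 3, 5, 4, 2, 4, 3, 5, 4, 3];
const time2 = [3, 3, 4, 3, 2, 3, 2, 4, 3, 2];
const n = time1.length;

// Difference scores (d = Time 2 - Time 1) and their squares.
const d = time1.map((x, i) => time2[i] - x);
const sumD = d.reduce((a, b) => a + b, 0);       // Sum of d
const sumD2 = d.reduce((a, b) => a + b * b, 0);  // Sum of d^2
const meanD = sumD / n;                          // Mean of d

// sd of d via the computational formula: sqrt((Sum of d^2 - (Sum of d)^2 / n) / (n - 1)).
const sdD = Math.sqrt((sumD2 - (sumD * sumD) / n) / (n - 1));
const seD = sdD / Math.sqrt(n);                  // se of d

// Observed t, Cohen's d, and degrees of freedom.
const tObs = meanD / seD;
const cohensD = meanD / sdD;
const df = n - 1;
```

The same quantities feed the confidence interval: mean of d ± (critical t) × (se of d), with the critical t looked up for df = n − 1.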
I have tried to tweak the generation algorithms so that you get statistically significant results about 50% of the time. However, you will get significant results more often with larger sample sizes and less often with smaller sample sizes (as you might expect). If you leave it set to n = 10, it should be about even.
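The relationship between sample size and significance is just statistical power at work, and you can check it with a quick Monte Carlo sketch. This is only an illustration, not the tool's actual generation algorithm; the critical t values are hardcoded for df = 9 and df = 49 at a two-tailed α = .05:

```javascript
// Deterministic pseudo-random generator (LCG) so the simulation is repeatable.
let seed = 12345;
function rand() {
  seed = (seed * 1664525 + 1013904223) % 4294967296;
  return seed / 4294967296;
}

// Standard normal draw via the Box-Muller transform.
function randNorm() {
  const u = Math.max(rand(), 1e-12); // guard against log(0)
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * rand());
}

// Proportion of one-sample t-tests on n draws from N(effect, 1)
// whose |t| exceeds the two-tailed .05 critical value tCrit.
function rejectionRate(n, effect, tCrit, reps) {
  let hits = 0;
  for (let r = 0; r < reps; r++) {
    const x = Array.from({ length: n }, () => effect + randNorm());
    const mean = x.reduce((a, b) => a + b, 0) / n;
    const variance = x.reduce((a, b) => a + (b - mean) ** 2, 0) / (n - 1);
    const t = mean / Math.sqrt(variance / n);
    if (Math.abs(t) > tCrit) hits++;
  }
  return hits / reps;
}

// Same true effect (Cohen's d = 0.5), different sample sizes.
// Critical values: t(9) ≈ 2.262, t(49) ≈ 2.010 at alpha = .05.
const rateSmall = rejectionRate(10, 0.5, 2.262, 2000);
const rateLarge = rejectionRate(50, 0.5, 2.010, 2000);
```

With the same true effect, the n = 50 samples come out significant far more often than the n = 10 samples, which is why a roughly even split of significant and non-significant results only holds near the default sample size.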
The tool includes fully worked out problems for: central tendency and variability statistics, z-score calculations, confidence intervals, z-tests, one-sample t-tests, paired-samples t-tests, independent-samples t-tests, one-way ANOVA, chi-squared goodness of fit and test of independence, and correlation/regression.
If you’re wondering why I wrote this: I have written a one-semester undergraduate introduction to statistics for business students, which will be published in 2013. You don’t need the book to use the dataset generator (just ignore the references to “Chapters”). It is customized to the statistical methods I teach in that text, however, so if you like it, I’d ask you to consider my textbook when it is published.
If you choose to use this to generate datasets for your classes or if you provide it to students so that they can test themselves at home (strongly recommended), I only ask that you test it out a couple of times first – copy/paste datasets into SPSS and ensure that the output matches what the generator produces. I have tested it myself in Firefox, IE, and Chrome, and it looks like it works great, but bugs can crop up unexpectedly. The entire program is written in JavaScript, so you’ll need a fairly modern web browser.
Also, if you decide to adopt this or provide it to your students, please leave a note in the comments saying so. Feedback is appreciated!