
Does Personality Predict Vulnerability to Violence in Games? (VG Series Part 2/10)

2010 September 14
by Richard N. Landers

Neo-Academic coverage of the Review of General Psychology special issue on the psychology of video games:

There are few topics so hotly debated on the Internet as the value of video games. Are they the next generation’s artistic advance, as film was for the last, or are they a blight that makes children overly aggressive and dangerous? In this 10-part series, I’m reviewing a recent special issue of the Review of General Psychology on video games.  For more background information, see the first post in the series.

One of the trickiest parts of predicting the impact of video games on behavior is that personality as we typically conceptualize it may not be a good model.  For example, if we conceptualize emotional stability as a continuum, perhaps only those especially low in emotional stability will be affected negatively by playing violent video games.  In other words, perhaps video game violence will only lead the emotionally unstable to become violent.

The problem with this hypothesis is that most traditional analytic techniques applied to psychological data (linear regression, ANOVA, etc.) would not work well to test this hypothesis.  ANOVA, for example, would require an arbitrary split somewhere (to define “high” and “low”) while regression would require curvilinear models and/or computation of many, many interaction terms.
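
To make the two conventional options concrete, here is a minimal sketch with entirely hypothetical data and effect sizes (my own illustration, not anything from the article): an arbitrary median split for an ANOVA-style comparison, and a continuous interaction term for regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
neuroticism = rng.normal(size=n)   # standardized trait score (hypothetical)
exposure = rng.normal(size=n)      # violent-game exposure (hypothetical)
# Hypothetical outcome: exposure matters only for the emotionally unstable
aggression = 0.3 * exposure * (neuroticism > 1) + rng.normal(size=n)

# Option 1: ANOVA-style median split -- the cut point is arbitrary
high_n = neuroticism > np.median(neuroticism)
group_diff = aggression[high_n].mean() - aggression[~high_n].mean()

# Option 2: regression with a continuous interaction term
X = np.column_stack([np.ones(n), exposure, neuroticism, exposure * neuroticism])
beta, *_ = np.linalg.lstsq(X, aggression, rcond=None)  # beta[3] is the interaction
```

With three traits instead of one predictor, the number of interaction terms multiplies quickly, which is exactly the problem described above.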

To avoid these problems, Markey and Markey (2010)1 introduce a new analytic perspective that I’d never heard of – a spherical model of personality.  Each combination of three Big Five personality traits can be conceptualized as an axis of the sphere.  In the example below, those high in psychoticism (which they define as high neuroticism, low agreeableness, and low conscientiousness) are expected to be at greatest risk of their behavior being affected by violence in video games.

Spherical Model of Personality

The advantage of a spherical model is that the data can be analyzed with the mathematical properties of a sphere – in a sense, any particular direction from the center of the sphere can be conceptualized as a main effect, which increases statistical power.  Markey and Markey reanalyze data from another study by the first author to do just this – a model with 26 main effects is examined (+N, -N, +A, -A, +C, -C and all possible crosses).  The result of those analyses is below:
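
As a rough illustration of the spherical idea (this is my own sketch with hypothetical data, not the Markeys’ code), a direction on the sphere can be written as a set of direction-cosine weights for N, A, and C, and the “main effect” along that direction is just the correlation between the weighted trait composite and the outcome:

```python
import numpy as np

def direction_weights(theta, phi):
    """Direction cosines on the unit sphere: weights for (N, A, C)."""
    return np.array([
        np.cos(theta) * np.cos(phi),
        np.cos(theta) * np.sin(phi),
        np.sin(theta),
    ])

def composite_effect(traits, outcome, theta, phi):
    """Correlation between the weighted trait composite and an outcome."""
    z = (traits - traits.mean(axis=0)) / traits.std(axis=0)  # standardize N, A, C
    composite = z @ direction_weights(theta, phi)
    return np.corrcoef(composite, outcome)[0, 1]

rng = np.random.default_rng(1)
traits = rng.normal(size=(300, 3))  # columns: N, A, C (hypothetical scores)

# Build an outcome that tracks a "psychoticism" direction: +N, -A, -C
outcome = traits @ (np.array([1.0, -1.0, -1.0]) / np.sqrt(3)) + rng.normal(size=300)

# Angles pointing at that same +N, -A, -C direction
theta, phi = np.arcsin(-1 / np.sqrt(3)), -np.pi / 4
r = composite_effect(traits, outcome, theta, phi)  # one "main effect" on the sphere
```

Sweeping theta and phi over the 26 directions would reproduce the kind of table shown below.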

Effects in the Spherical Model

What you’re seeing are differential effects based on direction, offering some support for the added value of the spherical model.  For example, for those low in Conscientiousness (the top line), low neuroticism is associated with an effect of 0, while high neuroticism is associated with an effect of roughly .2.  But I do have some concerns that make interpretation of those values a little tricky.

There is some misunderstanding of statistical significance testing here.  For example, consider this quote:

When the FFM dimensions of neuroticism, agreeableness, and conscientiousness were used to predict hostility, all of the interaction effects were null.  However, when these dimensions are combined to measure a single characteristic representing high neuroticism, low agreeableness, and low conscientiousness, significant results emerged from the exact same data set. [emphasis in original]

This should not have been surprising.  By redefining interaction terms as main effects, the sampling distributions of those terms change – it is by definition “easier” to find significance.  That does not, however, mean that the sampling distribution of those effects is accurate.  They may just be “lowering the bar” without support for doing so.  This might be akin to adding a few “degrees of freedom” to your analyses for no particular reason.
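
To see why scanning many directions “lowers the bar,” here is a small simulation sketch (my own, with arbitrary parameters): under a true null, testing a single pre-chosen direction holds the nominal Type I error rate, while taking the best of all 26 directions exceeds it.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_sims = 100, 500

# 26 candidate directions: every nonzero combination of -1/0/+1 weights for N, A, C
dirs = np.array([[i, j, k] for i in (-1, 0, 1) for j in (-1, 0, 1)
                 for k in (-1, 0, 1) if (i, j, k) != (0, 0, 0)], float)
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

crit = 1.96 / np.sqrt(n)  # approximate |r| cutoff at alpha = .05

hits_single = hits_any = 0
for _ in range(n_sims):
    traits = rng.normal(size=(n, 3))
    outcome = rng.normal(size=n)  # no true relationship anywhere
    rs = np.array([np.corrcoef(traits @ d, outcome)[0, 1] for d in dirs])
    hits_single += abs(rs[0]) > crit       # one pre-chosen direction
    hits_any += np.any(np.abs(rs) > crit)  # best of all 26 directions

print(hits_single / n_sims)  # stays near the nominal .05
print(hits_any / n_sims)     # noticeably above .05
```

This is just the familiar multiple-comparisons problem in a spherical coat: without an adjusted threshold, the 26-direction scan finds “significant” effects in null data far more often than the nominal rate.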

The authors try to justify this approach by stating the following:

It is important to note that the methodology of combining together three dimensions is both computationally and conceptually different than examining the interaction between the three traits.  Such an interaction term would be computed by multiplying the three traits together whereas in the current analyses the combination of the traits was computed by averaging together the FFM traits with various weights.

I’m not sure I buy it.  I’m not a quantitative psychologist, but I believe you could get the same result by creating custom weighted composite terms and using hierarchical linear regression.  Why go through the song and dance of creating a “spherical model” when you could recreate the same basic analyses with a more generally accepted analytic technique?  Why use an entirely hypothetical model when you could add a new twist to a standard method, which would be much easier to test and to justify?  Despite this being a principally methodological piece, very little mathematical detail is provided on why you might take this approach over another, beyond the fact that you’ll get significant results this way – which is not a good scientific rationale.  No direct attempt is made to contrast this approach with any other, which seems quite odd to me.
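
The nesting argument can be illustrated directly (again, my own sketch with hypothetical data): the fixed-weight composite is itself a linear combination of the three traits, so a regression on the composite is just a constrained version of the ordinary multiple regression and can never fit better.

```python
import numpy as np

def r_squared(X, y):
    """R-squared from an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(3)
traits = rng.normal(size=(200, 3))  # standardized N, A, C (hypothetical)
y = traits @ np.array([0.4, -0.3, -0.2]) + rng.normal(size=200)

w = np.array([1.0, -1.0, -1.0]) / np.sqrt(3)  # fixed "psychoticism" weights
composite = traits @ w

r2_composite = r_squared(composite[:, None], y)  # composite-only model
r2_full = r_squared(traits, y)                   # ordinary three-trait regression

# The composite model is nested in the full model, so r2_composite <= r2_full
```

In other words, anything the weighted composite can predict, a standard regression on the three traits can predict at least as well – the spherical machinery adds interpretive framing, not predictive power.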

Interaction terms are notoriously difficult to detect in multiple regression because you need a lot of data to accurately model the interaction.  Just think about it rationally – you need values spread around all levels and all combinations of two continuous predictors.  If you don’t have a lot of data, you either won’t have data at each point of interaction, or you won’t have replication of important portions of your analyses.  Making it easier to find statistical significance is not by itself a legitimate reason to use a new statistical technique, unless the sampling distribution has been demonstrated empirically or at least through simulation.

So what’s the final recommendation – do the Big Five predict vulnerability to violence in video games?  I wish I could give you a definitive answer.  I am confident that personality is important in investigating this question.  I am also confident that if personality plays a role, the effects are rather small (although this is true when examining personality in any domain).  But I am not confident that this paper helps us explain any of this.

  1. Markey, P., & Markey, C. (2010). Vulnerability to violent video games: A review and integration of personality research. Review of General Psychology, 14(2), 82-91. DOI: 10.1037/a0019000

Is Video Game Research Influenced by the Media? (Part 1/10)

2010 September 9
by Richard N. Landers

Neo-Academic coverage of the Review of General Psychology special issue on the psychology of video games:

There are few topics so hotly debated on the Internet as the value of video games.  Are they the next generation’s artistic advance, as film was for the last, or are they a blight that makes children overly aggressive and dangerous?

In the interest of full disclosure, I’m coming to this debate from the perspective of I/O psychology – I have an interest in the business value of games from a professional perspective.  From a personal perspective, I have played games most of my life.  My first game was Janitor Joe (though I played it as “Jump Joe”), a 1984 DOS game that I played on an IBM PCjr, which you can still play on your own computer.  I have since owned an NES, SNES, Nintendo 64, Gamecube, PlayStation 2, XBOX 360 and Wii.  So yes, I am probably in the demographic of “gamer” (although perhaps a little Nintendo-heavy).

Having said that, my overall viewpoint so far can be summarized, “If you think video games are bad for your kids, don’t let your kids play video games – leave the adults alone.”  While the arguments were amusing to watch, I have never had a personal stake in the debate one way or the other.  Take that as you will.

What caught my eye was a special issue of the Review of General Psychology on video games.  Most of the articles are qualitative literature reviews of video game research across a wide variety of domains.  My curiosity piqued, I decided to dig into this special issue to discover what truths about the value of video games it might contain.  What follows is an article-by-article discussion of what I found:

The first article is a theory-based article by Christopher Ferguson1 on the portrayal of video games as “good” or “bad.”  Ferguson argues that the negative effects of video games may have been exaggerated not only in the media, but also within the scientific community.  He discusses the causal hypothesis, which refers to the idea that media violence causes aggressive behavior.  Regardless of the specific context (video games, film, novels), the causal hypothesis has been a driving force in scientific inquiry in this domain, despite little credible evidence of a link.

Perhaps the most interesting pieces of research he discusses are the investigations of links between increased crime in the 1960s and the introduction of televisions roughly 20 years earlier.  Though it is certainly true that crime was higher in the late 1960s than it was in the 1940s, there’s no reason to think that the presence of televisions was the cause.

Video Game Sales vs. Youth Violence

Adapted from Ferguson (2010)

This was in fact the classic error of inferring causation from correlation.  Interestingly, youth violence crime is currently at the lowest point it has been since about 1966.  The correlation between youth violence and video game sales is -.95, an extremely strong negative correlation.  No one would claim that increased video game sales decrease youth violence, and yet this is just as much evidence as was available to researchers claiming that violent television increased violence.  Neither conclusion is credible.

So what’s been happening?  It seems that a somewhat one-sided increase in media coverage of violent video games (from three perspectives: announcements of new violent video games, condemnations of those games by nonresearchers, and almost any effort to tie violent video games to real behavior) is associated with increased scientific publication in this domain.  Researchers, Ferguson argues, are taking advantage of moral panics in order to promote themselves.  This wouldn’t be a problem except that most research on the link between video games and aggression is weak at best.  Surprisingly to me, even the 2005 APA resolution on video game violence was apparently written by researchers citing principally their own work.

The most interesting piece for me personally was the discussion of Kato, Cole, Bradlyn & Pollock (2008)2, who used a violent first-person shooter video game called Re-Mission to successfully increase self-efficacy, understanding of cancer, and adherence to prescribed cancer treatments in young cancer patients.  The video game was both violent and an effective medium to improve learning in an audience typically difficult to reach.  I bet the same would be true of adults.

  1. Ferguson, C. (2010). Blazing angels or resident evil? Can violent video games be a force for good? Review of General Psychology, 14(2), 68-81. DOI: 10.1037/a0018941
  2. Kato, P., Cole, S., Bradlyn, A., & Pollock, B. (2008). A video game improves behavioral outcomes in adolescents and young adults with cancer: A randomized trial. Pediatrics, 122(2). DOI: 10.1542/peds.2007-3134

Some Employers Ruin Surveys For the Rest of Us

2010 September 2
by Richard N. Landers

This recent Dilbert comic caught my eye:

One of the biggest problems in organizational survey research, especially about sensitive topics, is that many employees do not trust that the survey responses they give will be held confidentially.

It’s led to a great number of techniques for minimizing the impression that a survey has anything at all to do with the company.  For example, if you were conducting a research project at ABC Co. about theft, it would be important not to use ABC Co. in your materials.  Initial e-mail invitations should come from an independent research team with e-mail addresses unrelated to abc-company.com, and the survey’s web address should not contain abc-company.com.  The first page of the survey should have a color scheme unlike that of abc-company.com or any familiar company colors.  A single graphic representing the company should appear on the first page only, to establish that the survey is company-sponsored (and only if it really is – if the research comes from a truly independent source, DON’T DO THIS).  Clear explanations of the anonymity/confidentiality of survey responses should be given, and no references to the company should be made after that point, if at all possible.

Even doing this, some respondents may feel their responses are being forwarded to management.  Some researchers try to address this by asking “Do you think management will be able to link your responses directly to you?” and eliminating such people from subsequent analyses.  But here we hit the major problem – if you thought management would be reading your responses, you’d probably answer “No” to that question, too!

So far, we have no better solutions to this problem for survey research.  In fact, as long as a human mind is creating the survey responses, this will always be a problem – at best, we can discourage such deception.  This is what leads some researchers to seek more “objective” measures, like absenteeism, units produced, weekly sales, and so on.  But there are only so many things that can be measured “objectively,” and many of these aren’t really that objective in the first place.