
CFP: Assessing Human Capabilities in Video Games and Simulations

2013 December 11
by Richard N. Landers

SUBMISSION DUE DATE: June 2nd, 2014

SPECIAL ISSUE CALL FOR PAPERS: Assessing Human Capabilities in Video Games and Simulations
International Journal of Gaming and Computer Mediated Simulations (IJGCMS)

Guest Editor: Richard N. Landers

INTRODUCTION:

In video games and computer-mediated simulations (GCMS), instructional designers and human resources professionals can develop virtual situations and environments that enable people to exhibit a wide range of knowledge, skills, abilities, and other characteristics (KSAOs; “O” includes a diverse set of constructs such as personality, preferences, and interests). Often, direct measurement of these KSAOs outside of a computer-mediated environment is difficult, confounded, or costly. For example, in the instructional context, children may be more enthusiastic about demonstrating their mathematical fluency in a video game than on a paper-and-pencil test. When training firefighters, performance in a computer simulation may be used to assess job proficiency without the expense or risk of a live simulation.

In this issue, we plan to collect rigorous theoretical and empirical explorations of KSAO measurement using GCMS. Quantitative approaches, especially those containing concurrent or predictive validation data, are the highest priority, although any rigorous examination of KSAO measurement in GCMS will be considered.

OBJECTIVE OF THE SPECIAL ISSUE:

The purpose of this special issue is to explore and rigorously test methods for the measurement of human capabilities from GCMS.

RECOMMENDED TOPICS:

Topics to be discussed in this special issue include (but are not limited to) the following:

  • Construct validation studies of KSAOs as measured in GCMS, especially those containing evidence of convergent/discriminant validity and/or criterion-related validity
  • Development of theoretical models of GCMS-based assessment and empirical tests of those models
  • Identification of KSAOs more readily measurable in GCMS than by using traditional approaches
  • Application of psychometric theory (e.g. reliability, validity) to GCMS-based assessment, especially in relation to the Standards for Educational and Psychological Testing and other professional guidelines
  • Considerations of the appropriateness of classical test theory versus item response theory in GCMS-based assessment
  • Explorations of faking and/or cheating in GCMS-based assessment and effects on validity
  • Rigorous explorations of tentative or creative approaches to measuring KSAOs in GCMS
  • Current best practices for practitioners seeking to conduct assessment using GCMS based upon the empirical literature
  • Current best practices for scholars related to construct specification, research design, and statistical approaches to studying assessment in GCMS

SUBMISSION PROCEDURE:

Researchers and practitioners are invited to submit papers for this special theme issue on Assessing Human Capabilities in Video Games and Simulations on or before June 2nd, 2014. Authors are encouraged to contact RNLANDERS@ODU.EDU well ahead of the submission deadline to discuss the appropriateness of their submissions. All submissions must be original and may not be under review by another publication. INTERESTED AUTHORS SHOULD CONSULT THE JOURNAL’S GUIDELINES FOR MANUSCRIPT SUBMISSIONS at http://www.igi-global.com/journals/guidelines-for-submission.aspx. All submitted papers will be double-blind peer reviewed. Papers must follow APA style for reference citations.

ABOUT:

International Journal of Gaming and Computer Mediated Simulations (IJGCMS)
IJGCMS is a peer-reviewed, international journal devoted to the theoretical and empirical understanding of electronic games and computer-mediated simulations. IJGCMS publishes research articles, theoretical critiques, and book reviews related to the development and evaluation of games and computer-mediated simulations. One main goal of the journal is to promote a deep conceptual and empirical understanding of the roles of electronic games and computer-mediated simulations across multiple disciplines. A second goal is to help build a significant bridge between research and practice on electronic gaming and simulations, supporting the work of researchers, practitioners, and policymakers. This journal is an official publication of the Information Resources Management Association.

www.igi-global.com/IJGCMS
Editor-in-Chief: Richard Ferdig
Published: Quarterly (both in Print and Electronic form)

PUBLISHER:

The International Journal of Gaming and Computer Mediated Simulations (IJGCMS) is published by IGI Global (formerly Idea Group Inc.), publisher of the “Information Science Reference” (formerly Idea Group Reference), “Medical Information Science Reference”, “Business Science Reference”, and “Engineering Science Reference” imprints. For additional information regarding the publisher, please visit www.igi-global.com.

All inquiries should be directed to the attention of:

Richard N. Landers
Associate Editor
International Journal of Gaming and Computer Mediated Simulations (IJGCMS)
E-mail: RNLANDERS@ODU.EDU

All manuscript submissions to the special issue should be sent through the online submission system:

http://www.igi-global.com/authorseditors/titlesubmission/newproject.aspx


Can You Trust Self-Help Mental Health Information from the Internet?

2013 November 20
by Richard N. Landers

In an upcoming issue of Cyberpsychology, Behavior, and Social Networking, Grohol, Slimowicz and Granda[1] examined the accuracy and trustworthiness of mental health information found on the Internet. This is critical because 8 of every 10 Internet users have searched for health information online, including 59% of the US population. They concluded that information found in early Google and Bing search results is generally accurate but low in readability. The following websites were identified as popular, generally reliable sources of mental health information: HelpGuide.org, Mayo Clinic, National Institute of Mental Health, Psych Central, Wikipedia, eMedicineHealth, MedicineNet, and WebMD.

Because 77% of Internet users only look at the first page of search results, these authors examined websites found on the first two pages of search results (20 websites) on Google and Bing for each of 11 major mental health conditions (e.g. anxiety, ADHD, depression), resulting in 440 total rated websites.  This struck me as peculiar though, since there should be substantial overlap between Google and Bing – so most likely, some cases were represented multiple times.
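
As a rough, purely hypothetical sketch of that overlap concern (the URLs below are placeholders, not websites actually rated in the study), here is what double-counting across engines looks like in a few lines of Python:

    # Hypothetical illustration of the Google/Bing overlap concern; these
    # URLs are placeholders, not the websites actually rated in the study.
    google_results = {"webmd.com/depression", "mayoclinic.org/depression", "psychcentral.com/depression"}
    bing_results = {"webmd.com/depression", "nimh.nih.gov/depression", "psychcentral.com/depression"}

    combined = len(google_results) + len(bing_results)  # how a 440-style total is reached
    unique = len(google_results | bing_results)         # the same sites with duplicates removed
    print(combined, unique)  # 6 vs. 4 -> overlapping results get counted twice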

After the sites were collected, two graduate student coders (the last two authors, presumably) rated all 440 websites for quality of information presented, readability, commercial status, and use of the HONCode badge. HONCode is a sort of seal of approval for health websites from the Health On the Net Foundation, an independent certifying organization.

They found, among other things:

  • HONCode websites generally contained higher quality information than websites without HONCode certification.
  • Commercial websites generally contained lower quality information than non-commercial websites.
  • HONCode websites were generally harder to read (higher grade level) than websites without HONCode certification.
  • Commercial websites were generally harder to read than non-commercial websites.
  • 67.5% of websites were judged to be of good or better quality based upon a generally agreed-upon standard (above a 40 on the DISCERN scale).

Statistically, this study had two peculiar features. First, there is the likely non-independence problem described above (websites were probably in the dataset multiple times). Second, most of their analyses were done via simple correlations, which have a very simple calculation for degrees of freedom: n – 2. Thus, the degrees of freedom for all of the correlations they calculated (with the exception of “Aims Achieved”, which had a lower n for a technical reason), ignoring the independence problem, should be 438. The degrees of freedom for Aims Achieved should be 396. However, in the article, these degrees of freedom are reported as either 338 or 395. So something odd is going on here that is not explained. Perhaps duplicates were eliminated in these analyses, but this is not stated.
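
For readers who want to check the arithmetic, here is a minimal sketch of that expectation in Python; the sample sizes are the ones implied above, not values taken from the article’s tables:

    # Degrees of freedom for a simple (two-variable) Pearson correlation: df = n - 2.
    def correlation_df(n):
        return n - 2

    print(correlation_df(440))  # 438 -> expected df with all 440 rated websites
    print(correlation_df(398))  # 396 -> the n implied by the expected "Aims Achieved" df of 396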

One additional caveat: the owner of Psych Central (identified as one of the reliable sources of mental health information) was the first author of this study. I doubt he would fake information just to get his website mentioned in a journal article, so this probably isn’t much of a concern.

Overall, I am pleased to see that the most common resources available to Internet users on mental health are generally of reasonable quality, and I feel fairly confident in this finding.  Like the authors, I am concerned that readability of the most popular websites is generally low.  The mean grade level was 12, indicating that a high school education is required to understand the typical popular health website.  That seems a bit high, especially considering younger people are most likely to turn to the web first as a source of information about their mental health. Hopefully this article will serve as a call to website operators to broaden their audience.

Footnotes:
  1. Grohol, J. M., Slimowicz, J., & Granda, R. (2013). The quality of mental health information commonly searched for on the Internet. Cyberpsychology, Behavior, and Social Networking. DOI: 10.1089/cyber.2013.0258

When A MOOC Exploits Its Learners: A Coursera Case Study

2013 November 13
by Richard N. Landers

By now, if you read this blog, I assume you have at least a passing familiarity with massive open online courses (MOOCs), the “disruptive” technology set to flip higher education on its head (or not). Such courses typically involve tens of thousands of learners, and they are associated with companies like Coursera, Udacity, and edX. To support a student-to-teacher ratio of 50,000:1, they rely on an instructional model requiring minimal instructor involvement, potentially to the detriment of learners. As the primary product of these companies, MOOCs demand monetization – a fancy term for “who exactly will make money off of this thing?” None of these companies has a clear plan for becoming self-sufficient, let alone profitable, and even non-profits need sufficient revenue to stay solvent.

Despite a couple of years of discussion, the question of monetization remains largely unresolved. MOOCs are about as popular as they were, they still drain resources from the companies hosting them, and they still don’t provide much to those hosts in return. The only real change in the year following “the year of the MOOC” is that these companies have begun to strike deals with private organizations to funnel in high-performing students. To me, this seems like a terrifically clever way to circumvent labor laws: instead of paying new employees during an onboarding and training period, businesses can now require them to take a “free course” before paying them a dime.

Given these issues, you might wonder why faculty would be willing to develop (note that I don’t use the word “teach”) a MOOC. I’ve identified two major motivations.

Some faculty frame their involvement in MOOCs as a public service – in other words, how else are you going to reach tens of thousands of interested students? The certifications given by MOOCs are essentially worthless in the general job marketplace anyway (aside from those exploitative MOOC-industry partnerships described above). So why not reach out to an audience ready and eager to learn simply because they are intrinsically motivated to develop their skills? This is what has motivated me to look into producing an I/O Psychology MOOC. But I am a bit uncomfortable with the idea of a for-profit company earning revenue via my students (probably, eventually, maybe), which is why I’ve been reluctant to push too fast in that direction. Non-profit MOOCs are probably a better option for faculty like me, but even that route has unseen costs.

The other major reason faculty might develop a MOOC is to gain access to a creative source of research data. With 10,000 students, you also have access to a research sample of 10,000 people, which is much larger than typical samples in the social sciences (although likely to be biased and range-restricted). But this is where I think MOOCs can become exploitative in a different way than we usually talk about. I concluded this from my participation as a student in a Coursera MOOC.

I’m not going to criticize this MOOC’s pedagogy, because frankly, that’s too easy. It suffers from the same shortcomings as other MOOCs – the standard trade-offs made when scaling education to the thousands-of-students level – and there’s no reason to go into that here. What I’m instead concerned about is how the faculty in charge of the MOOC gathered research data. In this course, research data was collected as a direct consequence of completing assignments that appeared to have relatively little to do with actual course content. For example, in Week 4, the assignment was to complete this research study, which was not linked to any learning objectives for that week (at least not in any way indicated to students). If you didn’t complete the research study, you earned a zero for the assignment. There was no apparent way around it.

Based on my experience serving on one of the human subjects review boards at my university, I can tell you emphatically that this would not be considered an ethical course design choice in a real college classroom. Research participation must be voluntary, never mandatory. If an instructor does require research participation (common in Psychology to build a subject pool), there must always be an alternative assignment, one not oriented toward data collection, that earns the same credit. Anyone who doesn’t want to be the subject of research must always have a way to do exactly that – skip the research and still get course credit.

Skipping research is not possible in this MOOC. If you want the certificate at the end of the course, you must complete the research. The only exception is if you are under 18 years old, in which case you still must complete the research, but the researchers promise that they won’t use your data. We’ll ignore for now that the only way to indicate you are under 18 is to opt in – i.e., it is assumed you’re over 18 until you say otherwise (which would also be an ethics breach, at least at our IRB).

Students never see a consent form informing them of their rights, and it seems to me that students in fact have no such rights according to the designers of this MOOC. The message is clear: give us research data, or you can’t pass the course.

One potential difference between a MOOC and a college classroom is that there are no “real” credentials on the line, no earned degree that must be sacrificed if you drop. The reasoning goes: if you don’t want to participate in research, stop taking the MOOC. But this ignores the psychological contract between MOOC designer and student. Once a student is invested in a course, they are less likely to leave it. They want to continue. They want to succeed. And leveraging their desire to learn in order to extract data from them unwillingly is not ethical.

The students see right through this apparent scam. I imagine some continue on and others do not. I will copy below a few anonymous quotes from the course forums, written by students who apparently did persist. One student picked up on this theme quite early in a thread titled, “Am I here to learn or to serve as reseach [sic] fodder?”:

I wasn’t sure what to think about this thread or how to feel about the topic in general, but now that I completed the week 4 assignment I feel pretty confident saying that the purpose of this course is exclusively to collect research data… I don’t know that it’s a bad thing, but it means I feel a bit deceived as to the purpose of this course. Not that the research motives weren’t stated, but it seems to me that the course content, utility, and pedagogy is totally subordinate to the research agenda.

The next concerns the Week 6 assignment, which required students to add entries to an online research project database:

So this time I really have the impression I’m doing the researchers’ work for them. Indeed, why not using the thousands of people in this MOOC to fill a database for free, with absolutely no effort? What do we, students, learn from this? We have to find a game which helps learning, which is a good assignment, but then it would be sufficient to enter the name of the game in the forum and that’s it. I am very disappointed by the assignments for the lasts weeks. In weeks 1 and 2 we had to reflect and really think about games for learning. Then there was the ‘research fodder’ assignment. And this week we have to do the researchers’ job.

In response to such criticisms, one of the course instructors posted this:

Wow! Thanks for bringing this up! In short, these assignments are designed to be learning experiences. They mirror assignments that we make in the offline courses each semester. I’m really sorry if you, or anyone else felt any other way. It was meant to try and make the assignments relevant, and to create “course content on the fly” so that the results of the studies would be content themselves.

That strikes me as a strange statement. Apparently these assignments have been used in past in-person courses as important “learning experiences,” yet they are also experimental and new. It is even stranger in light of the final week of the class, which asks students to build a worldwide database of learning games for zero compensation, without any follow-up assignments (to be technical: adding to the database is mandatory, whereas discussing the experience on the forums – the only part that might conceivably be related to learning – is optional). So I am not sure what to believe. Perhaps the instructors truly believe these assignments are legitimate pedagogy, but we’ll never really know.

As for me, although the lecture videos have been pretty good, I have hit my breaking point. In the first five weeks of the course, I completed the assignments. I saw what they were doing for research purposes before now but decided I didn’t care. Week 6 was too much – when asked to contribute data to a public archive, I decided that getting to experience this material was not worth the price. Perhaps I could watch the videos this week anyway, but I don’t feel right doing it, since this is the price demanded by the instructors. Even right at the finish line, I will not be completing this MOOC, and I can only wonder how many others dropped because they, too, felt exploited by their instructors.
