As in 2010, 2011, and 2012, I’ll be live-blogging the SIOP conference, which begins Thursday, April 11, and runs through Saturday, April 13. This post contains a list of all the sessions I’m interested in attending, which generally focus on technology, training, and assessment. Barring any unexpected battery problems, my live-blogging will be on Twitter, with a permanent record stored here.
In the table below, the symposium that I’m presenting in is colored red. As you can see, it’s a bit packed, so I won’t be attending all of these, but this is everything I’m interested in. If you’d like to meet up at any of these events, or if you think I missed something that I should definitely attend, please let me know!
| No. | Day | Start | End | Session Title | Room | Session Type |
|---|---|---|---|---|---|---|
| 1 | 4/11 | 8:30 | 10:00 | Opening Plenary Session | Grand A | Special Events |
| 16-16 | 4/11 | 11:00 | 12:00 | Antecedents and Consequences of Voluntary Professional Development Among STEM Majors | Ballroom | Poster |
| 75 | 4/11 | 3:30 | 5:00 | The Science and Practice of Social Media Use in Organizations | 346 AB | Master Tutorial |
| 76-9 | 4/11 | 3:30 | 4:30 | Predicting Persistence in Science, Technology, Engineering, and Math Fields | Ballroom | Poster |
| 89 | 4/11 | 5:00 | 6:00 | Back to the Future of Technology-Enhanced I-O Practice | 335 BC | Panel Discussion |
| 91 | 4/11 | 5:00 | 6:00 | Empirical Evidence for Successfully Using Social Media in Organizations | 340 AB | Symposium |
| 110 | 4/12 | 8:30 | 10:00 | The Promise and Perils of Social Media Data for Selection | Grand E | Symposium |
| 114 | 4/12 | 8:30 | 10:00 | Team Leadership in Culturally Diverse, Virtual Environments | Grand J | Symposium |
| 127-18 | 4/12 | 10:30 | 11:30 | Creation and Validation of a Technological Adaptation Scale | Ballroom | Poster |
| 138-32 | 4/12 | 11:30 | 12:30 | Personality Perceptions Based on Social Networking Sites | Ballroom | Poster |
| 148 | 4/12 | 12:00 | 1:30 | Innovations in Online Simulations: Design, Assessment, and Scoring Issues | 344 AB | Symposium |
| 159-22 | 4/12 | 1:00 | 2:00 | Learner Control: Individual Differences, Control Perceptions, and Control Usage | Ballroom | Poster |
| 188-22 | 4/12 | 3:30 | 4:30 | Another Look Into the File Drawer Problem in Meta-Analysis | Ballroom | Poster |
| 206 | 4/12 | 5:00 | 6:00 | I-O’s Role in Emerging Training Technologies | 342 | Symposium |
| 225 | 4/13 | 8:30 | 10:00 | New Insights Into Personality Test Faking: Consequences and Detection | 340 AB | Symposium |
| 274-27 | 4/13 | 12:00 | 1:00 | Using Bogus Items to Detect Faking in Service Jobs | Ballroom | Poster |
| 274-1 | 4/13 | 12:00 | 1:00 | Examining Ability to Fake and Test-Taker Goals in Personality Assessments | Ballroom | Poster |
| 274-13 | 4/13 | 12:00 | 1:00 | Applicant Withdrawal for Online Testing: Investigating Personality Differences | Ballroom | Poster |
| 280 | 4/13 | 12:00 | 1:30 | Technology Enhanced Assessments, A Measurement Odyssey | Grand F | Symposium |
| 305 | 4/13 | 2:00 | 3:00 | Advances in Technology-Based Innovative Item Types: Practical Considerations for Implementation | 337 AB | Symposium |
| 306-12 | 4/13 | 2:00 | 3:00 | Is Crowdsourcing Worthwhile? Measurement Equivalence Across Data Collection Techniques | Ballroom | Poster |
| 306-21 | 4/13 | 2:00 | 3:00 | Is the Policy Capturing Technique Resistant to Response Distortion? | Ballroom | Poster |
| 316 | 4/13 | 4:30 | 5:30 | Closing Plenary Session | Grand A | Special Events |
Most academic research on video games studies them as single-player experiences – a single individual, alone in a room with a game console. Research on massively multiplayer online games (MMOGs) is also growing. However, much (and perhaps most) modern video game play is multiplayer in a smaller setting: at home in front of a Kinect or a Wii with two or three friends. Surprisingly little research has examined this context.
In a recent issue of Cyberpsychology, Behavior, and Social Networking, Peng and Crouse compared enjoyment, future play motivation, and physical intensity in the Xbox 360 game Kinect Adventures across single-player, cooperative multiplayer, and competitive multiplayer modes, finding that the multiplayer modes provided greater enjoyment and motivation to play in the future than playing alone, with a more complicated pattern for physical activity.
The highly physical core game mechanic is best described by the authors:
This game utilized the Kinect motion camera, and the players used their own bodies as the controller to pop soap bubbles that floated between holes on the walls, floors, and ceilings of a virtual zero-gravity room by moving forward, backward, left, and right and waving their arms.
These three conditions were more complex than they appear:
- In the single player condition, participants played alone, twice. In this way, they were competing with their pre-test score.
- In the cooperative multiplayer condition, participants brought a friend. After individual pre-tests, the two participants next played the game with each other (they were physically active in the same space simultaneously and their scores were combined).
- In the competitive multiplayer condition, participants brought a friend. After individual pre-tests, the two participants then played the game separately (in different rooms) and were told what their friend’s previous high score was on the pre-test.
After the game, participants completed the focal measures. Enjoyment was assessed with a 7-item scale asking them to rate the game on seven adjectives (including “boring,” “entertaining,” and “interesting”). Future play intention was rated on a 3-item scale asking questions like, “Given the chance, I would play this game in my free time.” Physical activity was captured with an accelerometer attached to each player.
162 students participated in the study; however, this count appears to include the friends that participants brought. Sample sizes were 26 for single player, 74 for cooperative play, and 52 for competitive play. Given that their analytic strategy was a simple ANOVA, it does not seem that the researchers appropriately controlled for covariance within friend pairs – a better strategy would have been to exclude the “friends” from analysis, although this would have reduced their apparent (and somewhat misleading) reported sample size.
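To illustrate what a dependence-aware analysis might look like – this is a sketch of one alternative, not the authors’ method – a mixed-effects model can treat each friend pair as a grouping factor. All data and column names below are hypothetical placeholders:

```python
# A sketch of a dependence-aware alternative (not the authors' analysis):
# a mixed-effects model with a random intercept for each friend pair.
# All data and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "enjoyment": [5.1, 4.8, 5.6, 5.3, 4.0, 3.9, 5.0, 4.7],
    "condition": ["coop", "coop", "comp", "comp", "solo", "solo", "coop", "coop"],
    "pair_id":   [1, 1, 2, 2, 3, 4, 5, 5],  # friends share an id; solo players are singletons
})

# Fixed effect of condition; friend pair as the grouping (random-intercept) factor
model = smf.mixedlm("enjoyment ~ C(condition)", data=df, groups=df["pair_id"])
print(model.fit().summary())
```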
Based upon the reported ANOVAs, several differences were statistically significant. Single-player enjoyment was lower than in the cooperative and competitive multiplayer modes. Future play motivation after playing alone was lower than after both cooperative and competitive multiplayer modes. For physical activity, the relationship was more complicated: the single-player and cooperative multiplayer modes resulted in greater physical activity than the competitive multiplayer mode.
The authors did not depict these relationships graphically (always graph your results!), so I did:
Although the effects appear pretty substantial in terms of effect size (for example, from 3.99 to 5.22 for enjoyment, about 0.8 standard deviations), it’s hard to say how much the doubled samples for the two multiplayer modes inflated the F-statistics (and thus the statistical significance) of the results. So I am not 100% confident in the researchers’ interpretation of these data, but I am cautiously optimistic that these results would hold up with appropriate consideration of dependence.
In the context of education and training, this has some critical implications: playing with others seems preferable to playing alone. While this might seem intuitive, we too often build serious games and gamification efforts where people compete against themselves (e.g. high scores, or some instructor-set target goal level). Adding some coworkers or fellow students to the mix might just improve those efforts.
Recently, there was a brief discussion about online Ph.D. psychology programs on the PSYTEACH listserv. Generally, it was met with disbelief that an online psychology doctoral degree could be rigorous for a variety of reasons – a lack of teaching experience, a lack of lab experience, and a lack of face-to-face interaction with faculty and other students, to name a few.
I think it’s important to recognize that online doctoral programs as they exist now and online doctoral programs as they could exist, even given current technology, are very different. We currently have the technology to deliver the traditionally rigorous, intensive doctoral experience to most students – I just don’t know of any programs that actually do so now. There are several online Psy.D. programs, but the requirements for such programs are not as intense, as the degree is practice-oriented – the requirements for a Ph.D. program are more stringent.
So as a thought experiment, I decided to map traditional Ph.D. student experiences onto the technologies and program structure that would be required to realize that program online effectively. In doing so, I came up with five dimensions – the three traditional components of academic performance (teaching, research, and service) plus an instructional component and a community component. I’ll discuss each in turn.
- Teaching Experience
- Challenge. In a traditional Ph.D. program, students often (although not always) gain teaching experience by being assigned as teaching assistants (TAs), lab/section leaders, and full-blown instructors. Many schools use a ramp-up program such that earlier students (1st and 2nd year, for example) are TAs or lab leaders, and after proving themselves in that role (or after getting their MA/MS), move up to teach their own courses. If students wish to gain experience in online courses, they sometimes have that freedom; in an in-person program, one can teach in-person or online courses. In an online program, students can still certainly work as TAs or instructors of online courses, but since there are no in-person classes, they cannot gain experience teaching one.
- Roadmap. The dirty secret that no one wants to admit to is that, to a certain degree, teaching is teaching. If you look at programs designed to assess the quality of online courses, you’ll find that what makes a high-quality course online is very similar to what makes a high-quality course in person. For example, Quality Matters is a “certification” program designed to assess such quality. Here are the dimensions it uses to make evaluations: inclusion of a course overview and introduction, statement of learning objectives, appropriate assessment strategies, informative instructional materials, tools to encourage interaction and engagement with and between students, appropriate use of technology, support for learners, and accessibility. On its own, this list could apply to either an in-person or an online course. The reason that online courses are often lower quality is that, for the instructor, it is often more difficult to realize these goals online. In an in-person course, I can split people into small groups to discuss a difficult question; online, I need to plan ahead to set up a discussion area, ensure it’s communicated to everyone appropriately, identify any technology roadblocks and proactively work to circumvent them, and then monitor the discussion over a long period of time. To the extent the two differ, teaching online is more difficult than teaching in person. It requires most of the same skills and then several additional technology skills. Where online teaching is less difficult for the instructor is the face-to-face interaction component. You don’t need to be “on,” excited and engaged, at 8 AM. You don’t need to learn to watch and interpret subtle facial cues that indicate students don’t quite grasp the concept you are discussing. But this is a relatively minor aspect of teaching, in the grand scheme of required skills.
- Ideal Solution. Given that teaching online is more difficult than teaching in-person except in terms of interpersonal interaction, the ideal online Ph.D. program would have the Ph.D. student start as an online TA and move to online instruction. After “proving” themselves online, the Ph.D. student should teach as an adjunct professor at local universities or community colleges, and the online university should cover the host university’s salary requirements. For example, if the student’s local institution pays $3000 for an adjunct to teach a class, the online university should pay $3000 to the student to teach on behalf of the host institution.
- Research Experience
- Challenge. In a traditional Ph.D. program, students gain research experience by actively designing, running, analyzing, interpreting, and writing up research studies. Most of these experiences involve working closely with an academic adviser who creates and runs these studies. Such studies are often run by Ph.D. students in in-person laboratories, which require direct oversight of research participants.
- Roadmap. In truth, a lot of research in psychology these days is conducted through online surveys. This would be ideal for an online Ph.D. student, requiring few special accommodations. Much research (my own included) incorporates experimental designs but does so online – for example, assigning different research participants to different stimulus materials and measuring outcomes through a follow-up survey. For research that is in-person and experimental in nature – requiring fine timing or specialized equipment – there is currently no easy (or, more critically, valid) way to conduct such studies online.
- Ideal Solution. For Ph.D. programs that absolutely require in-person experience or equipment (e.g. if an fMRI must be used, or if clinical populations must be interacted with), such degrees should not be offered online without partnerships with universities local to the student. Ideally, a network of such inter-relationships between universities could be established to facilitate this, but I don’t see this happening any time soon. If a program relies primarily on survey research, Skype or other videoconferencing software can be used to interact with the adviser and with the various necessary committees (e.g. dissertation committees). Research experiences must be integrated from the first year of study. I can’t even count the number of times I’ve seen students in online Ph.D. programs asking for advice on LinkedIn or on listservs about how to design studies or conduct analyses for their dissertations because it was the first time they ever had to conduct a research study in their entire graduate program. This is not acceptable. A Ph.D. program must incorporate research from Day 1. Perhaps most critically, universities must keep student-to-faculty-adviser ratios similar to those of in-person programs (for Ph.D. students, I’d characterize 8 advisees per faculty member as a large program; ideally, this would be in the 3–6 range). The workload is no lower for faculty, and for quality standards to remain high, a high degree of individual attention is critical.
- Service Experience
- Challenge. Graduate school often brings with it a myriad of service opportunities. In my graduate school experience, this included things like hosting students for welcome/interview weekend, running a student group, organizing brownbags, and so on. This communicates the shared mental model of academia to students: everyone pitches in a little bit so that things get done and everyone’s voice is heard.
- Roadmap. In an online environment, there is no reason that service opportunities would be lessened. For example, there should be an online student association that organizes online brownbags through Skype or Google Hangouts. A leading scholar could give a talk, with individual live cameras on every other student, all organized by a team of online students. Students could take turns manning the live video “Q&A” area during the official day set aside for interviews of upcoming graduate students. The opportunities to pitch in are endless for a motivated student.
- Ideal Solution. Of the five dimensions I’ve listed here, service is probably the easiest. It only requires a faculty willing to be creative and open-minded about the activities that students take part in. At a minimum, students should run their own graduate student association and hold regular online events.
- Instructional Experience
- Challenge. One of the most difficult aspects of teaching graduate students is the actual act of teaching. It is generally not effective to teach Ph.D. students the same way you teach undergraduates. With undergraduates, there’s a certain amount of memorization required (e.g. who were the important figures in the development of psychology?), with deeper understanding lightly layered on top of that memorization. If students came out of my I/O Psychology class able to tell me the major functions of I/O and the important controversies surrounding those functions, and to apply some of the principles to their own workplaces, I would be thrilled. At the Ph.D. level, the requirements are much more intense. Not only must Ph.D. students be able to tell you about the controversies, but they must also be able to propose research studies to solve the problems that those principles introduce. They must be able to read and critically evaluate a research literature for gaps and limitations. These are skills that are not transferred easily in lecture, which is why most Ph.D. programs make heavy use of seminars (for content courses) and labs (for statistics courses).
- Roadmap. Recently, my department floated the idea of taking online Master’s students without disrupting pre-existing graduate courses. The biggest challenge in this idea is the integration – having in-person and online students in classes concurrently. But I am confident it is possible. For example, a computer could be set up with Google Hangouts (so that each attending online student would be visible with a webcam on that computer), and that computer could be wheeled into each class where online students are needed to participate. These systems are actually quite easy to use; when someone starts talking, the focal video window switches to that person so that you can see them full-screen. To the instructor, a window would appear with a student’s face whenever that student wanted to ask a question; talking to that student would only require turning toward the camera. Intense, interpersonally-oriented seminars are possible and even simple once set up, given current video technology. This is no longer a limitation of online education. If the in-person component is not required, it gets even easier; instead of rolling a computer in, everyone in the class (professor included) can see each other face-to-face via video streaming.
- Ideal Solution. The university’s IT group should invest in a high-reliability high-quality small group video streaming technology. One example is Cisco Jabber. For blended classrooms (with both in-person and online students), a mobile computer should be set up that can be transported from classroom to classroom as needed. Real-time instant support must be provided for the technology to both faculty and students. Class sizes should be kept under 10 for most topics; exceptions might include some “core” courses, like various statistics courses, research fundamentals, and proposal-writing courses.
- Community Participation
- Challenge. Perhaps the most subtle indirect benefit to attending an in-person institution is immersion within the academic community. When you walk to class, you run into professors who may stop you in the hall to chat. When you conduct research, you interact with faculty to get your IRB approvals, to work within the departmental subject pool, and attend research talks. When you conduct service, you are working directly to the benefit of other graduate students and faculty within your department, college, or university. All of these activities teach you to be part of a larger scholarly community. In many online programs, this sense of community is missing.
- Roadmap. To this point, psychology faculty have been lucky in terms of community. By bringing in a graduate student, that student will be exposed to the local scholarly community by default. It requires little or no extra work on the part of the faculty. Online, this is not true. Community must be actively built. Videoconferencing is perhaps the most cost-effective way to do this. For example, lab meetings could be held weekly or biweekly through videoconferencing to encourage people to make such connections. But this alone is probably not enough to provide a real sense of community.
- Ideal Solution. To build community, I recommend a multi-pronged approach. First, videoconferencing should be used within lab groups to encourage small-scale community. Second, a student association with regular meetings should be established and run by the students themselves. Third, regular online brownbags should be established with a technology enabling both participant-to-participant and participant-to-presenter interaction. Fourth, students should be required to attend one conference per year (probably APA), where a hospitality suite would be rented by the online institution to enable everyone to meet face-to-face at least once per year. Preferably, these events would be broken down further to give ample opportunities for labs to meet together as well. Fifth, a regular retreat should be held once per year at a physical location outside of the context of a conference, for two to three days, to enable long-term planning among labs and to conduct team-building exercises more broadly.
So as you can hopefully see, what I would consider a high-quality, rigorous online psychology Ph.D. program would require a great deal of effort on the part of a department chair and likely several committees. It is not something that can be started lightly, or else your program will end up with the reputation of current online psychology programs, which is frankly pretty terrible. And that doesn’t serve anyone well, faculty or students. While I’d love to see or develop such a program, I doubt we will see anyone willing to make the sort of investment required to realize it for many years – which is a shame, because many high-quality students who can’t move (for whatever reason; typically family-related) currently have no way to complete a psychology Ph.D. And that is something we should try our best to correct.
In a recent issue of the Journal of Virtual Worlds Research, Landers and Callan examine appropriate evaluation of learning taking place in virtual worlds (VWs) and other 3D environments. In doing so, they develop a new model of training evaluation specific to the virtual world context, integrating several classic training evaluation models and research on trainee reactions to technology. The publication of this model is critical because 1) much previous research in this area does not consider the antecedents of learning in this context, leading researchers to conclude that VW-based training is ineffective in comparison to traditional training when the opposite may be true, and 2) organizations/educators often implement VWs unsuccessfully and blame the VW for this failure when subtler contextual factors are likely to blame – which may help explain the virtual worlds bubble I discussed a few years ago. Considering VWs are appearing in research as viable alternatives to traditional, expensive, simulation-based training, the time for VWs may be at hand. The model exploring these issues appears below.
Though framed in terms of organizational outcomes, their model of evaluation could be equally valuable in educational settings. From their integration, the authors make five practical recommendations to those attempting such evaluation (from p. 15, with citations removed):
- All four outcomes of interest should be assessed if feasible: reactions, learning, behavioral change, and organizational results. This will provide a complete picture of the effect of any training program.
- Attitudes towards VWs, experience with VWs, and VW climate should be measured before training begins, and preferably before training is designed. This will give the training designer or researcher perspective on what might influence the effectiveness of their training program before even beginning. If learners have negative attitudes towards VWs, training targeted at improving those attitudes should be implemented before VW-based training begins. Without doing so, the designer risks the presence of an unmeasured moderation effect that could undermine their success. Even if attitudes are neutral or somewhat positive, attitude training may improve outcomes further.
- If comparing VW-based and traditional training across multiple groups, take care to measure and compare personality, ability, and motivation to learn between groups. Independent-samples t-tests or one-way ANOVA can be used for this purpose (see the sketch after this list).
- Consider the organizational context to ensure that no macro-organizational antecedents (like VW climate) are impacting observed effectiveness. Measure these well ahead of training, if feasible.
- Provide sufficient VW navigational training such that trainees report they are comfortable navigating and communicating in VWs before training begins. A VW experience measure or a focus group can be used for this purpose.
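As an illustration of the third recommendation, here is a minimal sketch of those group comparisons in Python; the motivation-to-learn scores are hypothetical placeholders:

```python
# A sketch of the group-equivalence checks from the third recommendation;
# all scores below are hypothetical placeholders.
from scipy import stats

motivation_vw = [4.1, 3.8, 4.5, 3.9, 4.2]      # VW-based training group
motivation_trad = [4.0, 4.3, 3.7, 4.4, 4.1]    # traditional training group
motivation_ctrl = [3.9, 4.0, 4.2, 3.8, 4.1]    # control group

# Two groups: independent-samples t-test
t, p = stats.ttest_ind(motivation_vw, motivation_trad)
print(f"t = {t:.2f}, p = {p:.3f}")

# Three or more groups: one-way ANOVA
F, p = stats.f_oneway(motivation_vw, motivation_trad, motivation_ctrl)
print(f"F = {F:.2f}, p = {p:.3f}")
```

Nonsignificant differences here lend some support to treating the groups as comparable on these antecedents before attributing outcome differences to the training medium.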
For more detail on how to go about such evaluation, this article is freely available (“open access”) to the public at the Journal of Virtual Worlds Research website.
Footnotes:
- Landers, R.N., & Callan, R.C. (2012). Training evaluation in virtual worlds: Development of a model. Journal of Virtual Worlds Research, 5(3). http://journals.tdl.org/jvwr/index.php/jvwr/article/view/6335 [↩]
In a recent study appearing in Cyberpsychology, Behavior, and Social Networking, Moskaliuk, Bertram and Cress examined the value of virtual training environments for training effective coordination between ground police officers and a helicopter crew. This study thus applied virtual environments to one of the situations that I have previously argued is ideal for the application of virtual worlds: a training context where the outcome is highly complex and real-world training would be expensive and dangerous. In this case, in-person training would require helicopters! What better reason do you need to use a virtual world?
So did it work? Here’s the short version: the virtual world-based training was as effective as traditional (lecturer plus simulation) training in teaching the officers facts and information about the exercise. But more critically, officers completing the virtual training were better able to later apply what they learned to real situations. Thus, virtual environments were more effective than a traditional in-person simulation at teaching officers needed real-world skills. As suggested by Gartner’s most recent depiction of the hype cycle, it seems we are finally entering the Slope of Enlightenment for virtual worlds.
To test their hypotheses, the researchers assigned 47 German federal police officers to 8-person teams, which were in turn assigned to one of three conditions. Trainees all received in-person training on strategies and rules for the exercise followed by the treatment of their particular condition:
- Traditional training. Officers completed in-person training with a trainer, in which communication with the helicopter was simulated.
- Virtual training. Officers completed the VTE Police Force simulation while wearing headsets to communicate with each other (see images to the right).
- Control condition. Officers completed a distraction task (completed a group activity based on an unrelated handout).
In the spirit of the Kirkpatrick model, reactions, knowledge, and transfer were next measured. Reactions and learning were measured with surveys, while transfer was measured by evaluating officers’ written responses to 11 video vignettes. Reaction outcomes were lower for virtual training; trainees did not feel that the virtual training was as realistic or as applicable to real life as the traditional training. Despite this, knowledge outcomes were similar between training conditions. 22% of the variance in transfer performance was explained by variance in conditions, with transfer from virtual training significantly higher than in the traditional training and control conditions.
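For context on that last statistic: variance explained in an ANOVA is typically reported as eta-squared, so (assuming that is the statistic used here) the 22% figure corresponds to

$$\eta^2 = \frac{SS_{\text{between}}}{SS_{\text{total}}} = .22,$$

meaning differences among the three conditions accounted for roughly a fifth of the total variability in transfer scores – a large effect by conventional benchmarks.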
Overall, this provides compelling evidence that virtual worlds are a viable – and perhaps even preferable – setting for complex training tasks. The only negative outcome was the motivational component – although trainees had similar knowledge and superior transfer from virtual training, they felt like the traditional training was superior. But as Rachel Callan and I recently suggested, training targeted at increasing virtual world acceptance by trainees may be the solution to this one remaining barrier.
Note. Images in this blog post were taken from another paper by the authors.
Footnotes:
- Moskaliuk, J., Bertram, J., & Cress, U. (2013). Impact of virtual training environments on the acquisition and transfer of knowledge. Cyberpsychology, Behavior, and Social Networking. DOI: 10.1089/cyber.2012.0416 [↩]
- Moskaliuk, J., Bertram, J., & Cress, U. (2011). Training in virtual training environments: Connecting theory to practice. Connecting Computer-Supported Collaborative Learning to Policy and Practice: CSCL 2011 Conference Proceedings, Vol. 1, 192-199. [↩]
There has been quite a bit of hype about MOOCs lately. If you haven’t been following the craze – and it is definitely a craze, just as much as Big Data is – a MOOC is a Massive Open Online Course. I’ve written about them before – in this post, I provide a checklist to let instructors know if a MOOC can replace them. MOOCs are marked by several key features, at least as they are implemented now:
- Anyone can enroll in a MOOC. They are massively open in that anyone in the world can take the course for free.
- They are online courses. The only way to get information to the masses at a very low per-head cost is to do so online.
- They rely on peer involvement to facilitate. When taking a MOOC, you will almost certainly never meet, speak to, or email the instructor. Help for assignments is typically provided by other people in the class.
MOOC fervor is getting even more intense, with the New York Times declaring 2012 the year of the MOOC and some universities granting credit for completion of MOOC courses. It seems all is coming up roses for MOOC providers like edX and Coursera. Proponents promote the idea that MOOCs are the “solution” to college – that all students will one day be enrolled in a single massive MOOC for each course they need, and the current-day idea of “college” will be rendered obsolete.
I say: not so fast. We heard similar rhetoric when online courses in general were introduced, and while many students do opt for online courses instead of in-person courses, most don’t. The predicted mass exodus of students from in-person classrooms to the glorious Internet did not happen. That’s because the in-person classroom still offers a lot of benefits that are difficult to provide online – at least, not without a lot of time and dedication on the part of the instructor.
In my research, I’ve even come across stories about the introduction of correspondence courses with the same sort of rhetoric as we see with MOOCs. Correspondence courses were presented as the solution to over-crowded classrooms, and a way to get more people to go to college who otherwise would not have. After all, why go to a college classroom when you can complete a course by mail? As it turns out – there are a lot of reasons. And there are a lot of reasons why you wouldn’t want to rely on MOOCs for your college education either.
I’ve noticed that proponents of MOOCs tend to not be in higher education. This is a critical problem of perspective. They make claims like, “We saw the Internet completely flip the music industry on its head; this must happen to higher education as well.” Because of this perspective, they make several assumptions that upon scrutiny turn out to be unwise.
- Given: MOOCs rely on automated grading to evaluate student progress.
Unwise assumption: If one class can be graded automatically, all classes can be graded automatically.
Regardless of your personal stance on the value of grades for learning, they do provide a benchmark for progress (albeit an imperfect one). Universities use grades to rank student comprehension of course material; a student with an “A” should come out of a course with greater skill than a student with a “B”. MOOCs can only grade questions where there is a single correct answer or where an algorithm can be used to derive all correct answers. That means algebra is easy to grade; architectural design is not. The most problematic area is writing. Although an algorithm can be used to automatically grade written material, those algorithms, on the whole, are awful. At best, they assess sentence structure, spelling, and complexity. They cannot evaluate whether your writing made any sense, which is really the only aspect of the writing most instructors actually care about (except perhaps English instructors!). Artificial intelligence is not yet sophisticated enough to evaluate writing quality or logic, and it will probably be decades before we even start to be able to do this. Think of it this way: how long have we been trying to create accurate translation software that actually converts the meaning of one language into another as effectively as a human translator can? Hint: IT’S BEEN A LONG TIME.
- Given: MOOCs are effective at teaching basic skills that can be assessed automatically.
Unwise assumption: The skills that can be assessed automatically are the ones we want to teach at the college level.
I completely agree that MOOCs can effectively replace some basic skills training currently taught in colleges (e.g. spelling and grammar, basic mathematics, etc.). But it does not follow that MOOCs can effectively replace all training currently taught in colleges. I will even go so far as to say that the introduction of MOOCs is an excellent opportunity for universities to do away with all those courses. Let me be clear: if course material can be taught and assessed effectively in a MOOC, it should probably no longer be a college course. It should be a required prerequisite before the student even gets to college. That way, time can be devoted in college to the skills that we actually want students to gain in a liberal arts environment: creativity, problem solving, and critical thinking. If a student is paying for college credits to learn how to add and subtract effectively, they are wasting their money on college after already having been failed by K12. We should not be spending instructional resources at the college level teaching these skills if a MOOC can deliver them equally effectively.
- Given: MOOCs will create a marketplace for learning skills without the need for degree programs. People can pick the knowledge they want to gain and gain only that.
Unwise assumption: Credentialing is not valuable.
A common perspective is that MOOCs will make degrees obsolete – that somehow having a worldwide marketplace of courses that provide specific skills will be sufficient to prepare people for the careers they want. But this assumes that the credential – possession of a degree – does not provide useful information in that worldwide marketplace. But it does. Possession of a college degree acts as a signal to potential employers: “This person was able to organize their schedule, choose a program of study, and stick to it for multiple years at a sufficient degree of success for this institution to proclaim that they have earned a degree and for us to support that proclamation.” This is why people have judgments about the quality of the college someone attends – there is a belief that a more rigorous college will provide a more rigorous education, and the degree signals that. While I agree that many employers put a little too much faith in what those degrees imply, this doesn’t mean that the degrees themselves are useless. Let’s not throw the baby out with the bathwater. A college degree indicates a substantial multi-year commitment and a major life accomplishment for the degree holder. Let’s not minimize that accomplishment by saying that taking a selection of MOOC courses would be equally impressive. Also important is that college exposes students to a much wider range of knowledge than they would otherwise be exposed to in high school. I’ve had many students who took my Industrial/Organizational Psychology course to fulfill a requirement only to discover that they had a real passion for the topic. Without a broad liberal arts education, they would never have discovered this about themselves. In a MOOC-centered world, those students would never have known that passion.
- Given: MOOCs rely on discussion forums to facilitate learning.
Unwise assumption: Discussion forums can effectively police themselves.
One of the biggest challenges for an online instructor is managing a discussion forum. Forums are used as a constructivist instructional strategy: the goal is for the students to actively think through the material, apply it to their own lives, and be able to explain their application of class concepts to others. There are generally two ways such forums go. First version: the instructor makes some regular assignments to “post in the discussion forum” or “reply to another student in the discussion once per week”. A MOOC can do this too. But these strategies rarely work very well; students do the bare minimum, usually resulting in a flood of posts the day that the assignment is due, and rarely do students demonstrate the application skills described above. Second version: the instructor is an active participant in discussion, working to correct misconceptions, redirecting conversations in productive directions, and generally applying their expertise to improve student engagement with the material. The current sophistication level of artificial intelligence means that a MOOC cannot do this. They must rely instead on the “wisdom of the crowd” – that a critical mass of other people in the classroom will 1) be interested in helping others, 2) have sufficient expertise to answer the question being asked, and 3) happen to browse the forum at the moment a response is needed. While this might work for a relatively common learning topic (like Introductory Algebra), when we get to relatively uncommon fields (like Advanced Organic Chemistry), those numbers are going to drop dramatically.
- Given: MOOCs rely on the support of others currently taking the course to provide the social context for a course.
Unwise assumption: Individual attention from a mentor/leader figure is unnecessary for effective learning.
Unfortunately, some college courses are taught by instructors who don’t really care about their students. They are there because they are required to be there, riding the tenure train for the long-haul. But this is not the norm in most universities. Most faculty care deeply about the success and learning of their students individually – they want every student to succeed. A common misconception is that faculty want students to fail. This is simply not true. Instructors want students to strive and push themselves. Instructors want to provide a sufficiently challenging course so that the most advanced students need to push themselves just as much as the struggling students do. This is realized in a variety of ways. Some provide a plethora of comments on student papers with the hopes that those students will read those comments and think about how they can improve for next time. Some instructors schedule one-on-one meetings to talk about course progress and identify areas needing extra attention. Some instructors schedule review days and feel out the classroom for areas of weak understanding so that they can target those areas with supplemental materials. None of these things are currently possible with a MOOC, and all are valuable. Little is more motivating than one-on-one attention, delivered by a mentor/leader figure that you know cares about you and your success. In a MOOC, you are surrounded by other learners, and yet you are all alone.
In summary, I think MOOCs are an important development for higher education, but they are not poised to majorly reform anything. At best, they can supplement and bolster existing efforts by replacing remedial and entry level courses, leaving the more advanced topics for the traditional college classroom, online or otherwise. As MOOCs improve, this may change – but for now, this is the best they can provide.
I believe that one of the major purposes of psychological science is to promote that science to the public – that is, after all, one of the reasons that I write this blog. But scientists vary in the degree to which they agree with this statement. On one side, some researchers will argue that the purpose of science is to make meaningful contributions to knowledge alone, i.e. furthering the frontiers of human understanding and nothing else. To these people, research is for researchers. On the other side, we have people like those appearing in the list below, who work to actively promote their work on Facebook and elsewhere.
This is certainly not the only purpose of a lab Facebook page. I also use our lab’s page for community-building: to congratulate graduate students on major accomplishments, to announce new publications and presentations, and to share jokes that academics and graduate students would find funny. Some labs use it for recruiting undergraduate research participants.
In creating this list, I came to several surprising conclusions:
- Psychology lab Facebook pages are relatively uncommon. There are literally thousands of psychology labs, yet I could only locate 28 with Facebook pages (although there were some restrictions to my search – see below).
- Psychology lab Facebook pages don’t have a whole lot of likes. The most popular had a few hundred, but almost all had fewer than 30, and most had fewer than 10.
- Psychology lab Facebook pages often aren’t maintained well. I only saw pages used for my purposes (community-building with graduate students, lab announcements, etc.) in two or three cases. Most post quite infrequently. Unsurprisingly, the pages with the highest numbers of posts are also those with the most likes.
Now for the list. I’ve included only pages that have at least 2 likes and 1 post, for which I can clearly identify that it’s a lab, and for which posts are in English. We’ll start with the best (wink), and the rest will be alphabetical!
| Facebook Page Name | PI | Location | Area of Psychology |
|---|---|---|---|
| Technology iN Training Laboratory – TNTLab | Richard N. Landers | Old Dominion University | industrial/organizational |
| Arcadia University: Social Psychology Lab | (unknown) | Arcadia University | social |
| Arkin Lab | Robert Arkin | Ohio State University | social |
| Carter Adolescent Interpersonal Relationships Lab | Rona Carter | University of Michigan | developmental |
| Cogpsy Lab | Maria Viggiano | University of Florence | physiological |
| CSUN Sport Psychology Lab | Mark P. Otten | California State University, Northridge | sport |
| Dr. Tom Ford’s Social Psychology Lab | Tom Ford | Western Carolina University | social |
| Early Learning Lab (ELLA) | Annette M. E. Henderson | University of Auckland | developmental |
| Environmental Psychology, Interpersonal Attachment and Relationships Lab | Kerry Anne McBain | James Cook University | social |
| Evolutionary Psychology Lab at Oakland University | Todd K. Shackelford | Oakland University | evolutionary |
| Evolutionary Psychology Lab of SUNY New Paltz | Glenn Geher | SUNY New Paltz | evolutionary |
| Exercise Psychology Lab | Edward McAuley | University of Illinois | exercise |
| Forensic Psychology Lab | Don Dutton | University of British Columbia | forensic |
| Gender, Sexuality & Critical Psychology Lab RU | Maria Gurevich | Ryerson University | social |
| Lab of Evolutionary Psychology – Research Participation (MQ) | (unknown) | Macquarie University | evolutionary |
| Loyola University Psychology Lab | (unknown) | Loyola University | (unknown) |
| NEU Psychology Laboratory | (unknown) | New Era University | (unknown) |
| Psychology of Social Justice Lab (PSJL) | Phillip Atiba Goff | University of California, Los Angeles | social |
| SIUC Dr. Yu-Wei Wang’s Psychology Lab | Yu-Wei Wang | Southern Illinois University, Carbondale | counseling |
| Social Attitudes Psychology Research Lab | (unknown) | St. Mary’s University | social |
| Social Psychology Lab – University of Innsbruck | (research group) | University of Innsbruck | social |
| Social Psychology Lab (SPLAB) | (unknown) | University of Kwa-Zulu Natal | social |
| Social Stress and Health Psychology Research Lab | Elizabeth Brondolo | St. John’s University | health |
| TCNJ Organizational Psychology Lab | Jason Dahling | The College of New Jersey | industrial/organizational |
| UGA Infant Research Lab | Janet Frick | University of Georgia | developmental |
| UMSL Multicultural Psychology Research Lab | Matthew J. Taylor | University of Missouri, St. Louis | multicultural |
| University of Queensland Evolutionary Approaches to Psychology Lab Group | (unknown) | University of Queensland | evolutionary |
| UWM Anxiety Disorders Lab | Han-Joo Lee | University of Wisconsin, Milwaukee | clinical |
In a new article appearing in Simulation & Gaming, Bedwell and colleagues do what the game studies literature has generally not been able to do for games in general; they develop a taxonomy that defines what a serious game is. This effort provides a road map for researchers exploring how games can contribute to learning.
The definition of “game” is an area of surprisingly vehement debate. Many researchers and game designers have their own definition of “game,” ranging from Sid Meier’s “a game is a series of interesting choices” to researchers’ attempts to define games by exploring the largely humanities-based game studies literature. This is a challenge for the serious games researcher, because without a clear definition for “game”, there is no way to systematically and scientifically explore the aspects of games that contribute to learning. What one researcher calls “challenge”, another might call “fun.” What another researcher calls “fun”, yet another might say “contributes to the creation of flow experiences.” This leads to unrecognized conceptual overlap between researchers, which often produces seemingly contradictory results. In the above example, a study finding a positive effect of “fun” might, in another researcher’s terms, be a positive effect of flow or a positive effect of challenge. This leads to a highly inefficient scientific process.
Bedwell and colleagues begin to solve this problem by conducting an empirical study of game attributes targeted at identifying the areas of overlap between researcher conceptualizations of game attributes. First, they identified 65 self-proclaimed “game experts” from various online sources, including the online forums of the Escapist and Penny Arcade, video game listservs, and internal distribution lists at game developers. Next, these 65 game experts (50 players and 15 developers) conducted what is called a “card sort”, in which they sorted the 19 game attributes related to learning identified by previous researchers into their own categories. Finally, they completed a post-sort survey to capture demographic information. To examine this data, Bedwell and colleagues conducted a cluster analysis, a process that groups cases by similarity of ratings provided.
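For the curious, here is a minimal sketch of how a card-sort cluster analysis like this one can be run: tally how often each pair of attributes lands in the same pile, convert that similarity into a distance, and cluster hierarchically. The attribute list and sort data below are hypothetical placeholders, not the study’s data:

```python
# A sketch of a card-sort cluster analysis (hypothetical data, not the
# study's): count how often each pair of attributes shares a pile,
# convert that similarity to a distance, and cluster hierarchically.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

attributes = ["action language", "assessment", "challenge", "control"]  # 19 in the article

# One row per sorter; each value is the pile that sorter put the attribute in
sorts = np.array([
    [0, 1, 2, 2],
    [0, 1, 1, 2],
    [0, 2, 2, 2],
])

n = len(attributes)
co = np.zeros((n, n))
for s in sorts:
    co += (s[:, None] == s[None, :])   # 1 where two attributes share a pile
co /= len(sorts)                       # proportion of sorters agreeing

dist = 1.0 - co                        # similar pairs -> small distance
Z = linkage(squareform(dist, checks=False), method="average")
clusters = fcluster(Z, t=2, criterion="maxclust")  # cut into k clusters
print(dict(zip(attributes, clusters)))
```

Cutting the resulting tree at k = 9 clusters would yield the kind of category system reported in the article.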
Interestingly, game developers and game players did not differ in their sorting. Support was found for a 9-category system:
- Action Language. The method by which information is relayed to the game (joystick, keyboard, mouse, etc.).
- Assessment. The extent to which feedback is provided to players on their progress.
- Conflict/Challenge. The degree to which the player is challenged, and the method for manipulating that challenge.
- Control. The degree to which players can affect their environment.
- Environment. The location chosen for the game.
- Game Fiction. The extent to which the game environment represents reality.
- Human Interaction. The degree to which players interact with other people.
- Immersion. The extent to which the game uses engrossing effects (audio, video, etc.) to draw the player in.
- Rules/Goals. The degree to which the game presents clear objectives.
With this system, the authors hope that “research on game attributes and their effects on learning can progress in a purposeful and unified fashion” (p. 753). Somehow, I don’t expect it will be quite that easy, but at the very least, this provides a valuable starting point for such efforts.
In a recent meta-analysis appearing in the Journal of Business and Psychology, Costanza and colleagues compare a wide variety of attitude variables across four generations of employees: Traditionals, Boomers, Gen X, and Millennials. In a quantitative review of 20 articles on generational differences across 19,961 workers, the authors conclude that generational differences are small or near zero in virtually all cases.
Six attitude variables were examined:
- Job satisfaction, the degree to which employees enjoy and have positive attitudes towards their job
- General organizational commitment, the degree to which employees are committed to their job (a combination of the next three)
- Affective organizational commitment, the degree to which employees feel emotionally attached to their job
- Normative organizational commitment, the degree to which employees feel obligated to remain at their job
- Continuance organizational commitment, the degree to which employees feel they have invested a lot in their organization and don’t want to lose those investments (personally, not monetarily)
- Intent to turnover, the degree to which employees intend to leave their organization in the near future
Reporting in standard deviation units (Cohen’s d), the authors found effect sizes ranging from .02 to .25 for satisfaction, -.22 to .46 for commitment, and -.62 to .05 for intent to turnover, which can be described as “low to effectively zero.”
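As a refresher, Cohen’s d expresses a mean difference in pooled standard deviation units:

$$d = \frac{\bar{X}_1 - \bar{X}_2}{s_{\text{pooled}}}, \qquad s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}$$

By conventional benchmarks, |d| ≈ .2 is small, .5 is medium, and .8 is large.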
Millennials are often characterized as being very different from the rest of employees, with a “drastically different outlook” and “different expectations”, as “difficult to understand” and “entitled”, “less motivated”, “high maintenance” and “silver-spoon fed”, and a variety of other colorful descriptors. In truth, it seems that they are not much different from any other generation – or at the least, they are no more different from other generations than other generations are from each other.
The article is, of course, limited by the variables it was able to meta-analyze, so it is possible that some other attitude or characteristic that was not examined does demonstrate a large difference. However, Millennials are commonly vilified as being less committed to their jobs, more likely to jump ship, and less satisfied with their work. This study demonstrates that at least in these regards, Millennials are just like everyone else.
In the most recent issue of the Journal of Media Psychology, Ross and Weaver investigate how experiencing negative griefing behaviors early in playing an online multiplayer game (like World of Warcraft) causes some new players to grief others themselves. The authors attribute this primarily to observational learning – people joining an online multiplayer environment for the first time are likely to interpret the way they are treated by others as a model for how to treat others.
This means that the introduction of abusive players into a multiplayer game in effect creates a destructive loop – the abusive player mistreats new players, those new players gain experience and mistreat the next round of new players, and so on. The initial griefing is also likely to result in greater frustration, lesser enjoyment, and greater state aggression, which would also feed into this loop. The conclusion: griefing does not just negatively affect the griefed; instead, it may begin a chain of abuse that lasts through generations of new players.
To discover this, the researchers asked 68 men and women to play the online game Neverwinter Nights through six 2-minute rounds of gameplay. They were randomly assigned to be griefed in either the first or fourth round of play. They were also randomly assigned to be exposed to either a new player or the griefer in the round immediately following the griefing, to see if the negative effects would be revenge (on the griefer) or if the griefing would generalize to others. Participants were led to believe that other study participants were in the game with them; in reality, it was a research assistant. In the griefing condition, this research assistant with a Level 20 character would repeatedly (and unfairly) kill the research participant’s Level 5 character. Given this arrangement, it was literally impossible for the research participant to damage the research assistant’s character, and the research assistant could kill the research participant’s character in one hit. The article mentions “corpse camping” (staying near a player’s corpse, waiting for them to return, weakened, to kill them again) and “spawn camping” (killing players the moment they spawn, while they are disoriented) as examples of this griefing.
The researchers found support for most of their hypotheses:
- Participants who are the victim of griefing during the first encounter of a game are more likely to grief other players in subsequent encounters than participants who are not initially griefed.
- Participants are more likely to grief the same opponent than a different opponent, and this difference was greater in the condition where players had an expectation of cooperation than in the condition where they had an expectation of griefing.
- Players in a multiplayer game who are griefed experience more frustration, less enjoyment, and higher levels of state aggression than those who are not griefed.
Over the last week or so, I wrote a web-based tool to automatically generate datasets and worked-out solutions. It creates and displays a dataset, a completed solution, and the results of most intermediary computational steps. It is freely available online for instructors or students: http://rlanders.net/datasets.php
As an example, if you select “Paired Samples t” with “n = 10” and an outcome type of “Survey”, it will output:
- Instructions on what to do with the dataset generated
- A dataset with Time 1 and Time 2 “Survey” variables (1=min, 5=max, 3ish=mean, 1ish=sd)
- A “completed” dataset containing calculated difference scores and squared difference scores
- Sum of d, Mean of d, Sum of d^2, (Sum of d)^2, sd of d, se of d, critical t statistic, degrees of freedom, observed t, CI lower and upper bounds
- A box with all important “final” output: the research question, null and alternative hypotheses, alpha, critical value, journal-type reporting of CI and t with precise p-value, unstandardized effect size (difference score), standardized effect size (Cohen’s d) and a NARRATIVE CONCLUSION including interpretation of the effect size. As an example: “Conclusion: Reject the null and accept the alternative. The difference is statistically significant. The Survey variable decreases over time. If we assume this sample to represent the population, we would expect 95% of sample means to fall between -1.68 and 0.08. On average, the Survey variable was 0.80 lower at Time 2. The difference over time was medium.”
This is customized to each test. The program will also randomly choose between directional and non-directional tests when directional tests are plausible.
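To make the paired-samples example concrete, here is a minimal sketch of the same computations in Python; the Time 1 and Time 2 scores are made-up stand-ins for a generated dataset:

```python
# A sketch of the paired-samples t computations the generator reports;
# the Time 1 / Time 2 scores below are made-up stand-ins for its output.
import numpy as np
from scipy import stats

time1 = np.array([3, 4, 2, 5, 4, 3, 4, 2, 3, 4], dtype=float)
time2 = np.array([2, 3, 2, 4, 3, 3, 3, 1, 2, 3], dtype=float)

d = time2 - time1                    # difference scores
n = len(d)
mean_d = d.mean()                    # mean of d
sd_d = d.std(ddof=1)                 # sd of d
se_d = sd_d / np.sqrt(n)             # se of d
df = n - 1                           # degrees of freedom
t_obs = mean_d / se_d                # observed t
t_crit = stats.t.ppf(0.975, df)      # critical t, two-tailed, alpha = .05
ci = (mean_d - t_crit * se_d, mean_d + t_crit * se_d)
p = 2 * stats.t.sf(abs(t_obs), df)   # precise two-tailed p-value
cohens_d = mean_d / sd_d             # standardized effect size

print(f"t({df}) = {t_obs:.2f}, p = {p:.3f}, "
      f"95% CI [{ci[0]:.2f}, {ci[1]:.2f}], d = {cohens_d:.2f}")
```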
I have tried to tweak the generation algorithms so that you get statistically significant results about 50% of the time. However, you will get more significant results with larger sample sizes and fewer significant results with smaller sample sizes (as you might expect). But if you leave it set to n = 10, it should be about even.
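That sample-size pattern is simply statistical power at work. Here is a quick simulation sketch (not the tool’s actual generation algorithm) showing how the significance rate climbs with n when the true effect is held constant:

```python
# A rough power simulation (not the tool's actual algorithm): how often a
# paired-samples t test is significant as n grows, with a fixed true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
reps = 2000
for n in (5, 10, 30):
    sig = 0
    for _ in range(reps):
        diffs = rng.normal(loc=0.5, scale=1.0, size=n)  # true effect d = 0.5
        t, p = stats.ttest_1samp(diffs, 0.0)
        sig += p < 0.05
    print(f"n = {n}: significant in {sig / reps:.0%} of samples")
```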
The tool includes fully worked out problems for: central tendency and variability statistics, z-score calculations, confidence intervals, z-tests, one-sample t-tests, paired-samples t-tests, independent-samples t-tests, one-way ANOVA, chi-squared goodness of fit and test of independence, and correlation/regression.
If you’re wondering why I wrote this, it is because I wrote an undergraduate one-semester introduction-to-statistics textbook for business students, which will be published in 2013. However, you don’t need the book to use the dataset generator (just ignore the references to “Chapters”). It is customized to the statistical methods I teach in that text, however, so if you like it, I’d ask you to consider my textbook when it is published in 2013.
Also, if you decide to adopt this or provide it to your students, please leave a note in the comments saying so. Feedback is appreciated!