Ah, the end of the academic year. A time for reflection and furious attempts to get some research done. We are in the midst of Spring semester finals here at ODU, and getting ready to head into the summer. Like last year, I’ll be using a relaxed posting schedule for the semester – so no more weekly mid-week posts until August/September-ish.
And actually, unlike last year, I won’t be writing my textbook anymore because it’s been published! If you would like a student-centered, practical, one-semester introduction to statistics for business, I’d (obviously) strongly recommend my Step-by-Step Introduction to Statistics for Business. Each chapter starts with a case study describing a small business owner facing a problem that can be solved with statistics. Lots of diagrams and illustrations too! If my post on intra-class correlations is any indicator, apparently I’ve got a knack for describing statistics pretty clearly.
So instead of the textbook, I’ll be working on journal article submissions, a couple of software design projects, and taking a special trip to Europe for my 5-year wedding anniversary. A busy summer!
In an upcoming issue of Cyberpsychology, Behavior, and Social Networking, Pápay and colleagues provide psychometric evidence for the short-form Problematic Online Gaming Questionnaire (developed earlier and published in PLOS ONE) using a national sample of 5,045 high school students. The short-form version is especially interesting because it covers six dimensions with just twelve items. Even with only two items per factor, the authors found evidence of adequate model fit for the six-factor solution.
Problematic online gaming is related to low self-esteem, depression, and other negative consequences for psychological health. Below, you can complete the Problematic Online Gaming Questionnaire (short form) and get an assessment of your own addiction. If you wish to assess someone else, it is important that they complete the assessment themselves; don’t read it to them. After you submit your answers, additional analysis of your responses will appear, comparing you to the sample collected by the research team and indicating your risk factors.
Please note: You must respond to every question to learn your results. Your responses are entirely local to your own computer (your responses will remain unknown to anyone but you).
- Pápay, O., Urbán, R., Griffiths, M., Nagygyörgy, K., Farkas, J., Kökönyei, G., Felvinczi, K., Oláh, A., Elekes, Z., & Demetrovics, Z. (2013). Psychometric Properties of the Problematic Online Gaming Questionnaire Short-Form and Prevalence of Problematic Online Gaming in a National Sample of Adolescents. Cyberpsychology, Behavior, and Social Networking. DOI: 10.1089/cyber.2012.0484 [↩]
- Demetrovics, Z., Urbán, R., Nagygyörgy, K., Farkas, J., Griffiths, M., Pápay, O., Kökönyei, G., Felvinczi, K., & Oláh, A. (2012). The Development of the Problematic Online Gaming Questionnaire (POGQ). PLoS ONE, 7(5). DOI: 10.1371/journal.pone.0036417 [↩]
In a recent article appearing in Organizational Psychology Review, Pillutla and Thau make some very strongly worded arguments about the role of theory development in psychological science. I’ll start exploring their paper with a quote in their own words:
The state of [industrial/organizational psychology] and its obsession with novel theoretical contributions is antithetical to the goals of the scientific method.
The authors lament the over-reliance on the development of novel theory as the foundation for a meaningful contribution to scientific research. They suggest, rightfully I think, that the pursuit of novel theory creates a perverse incentive structure for scientists, encouraging them to pursue identification of “facts” which may fit available data but do not really exist. This ultimately leads to the creation of theories that do not truly complement or build upon any prior work – orphans of the scientific literature that provide no real value to our understanding of organizational phenomena. They label this approach the pursuit of the “interesting” at the expense of good science.
They further argue that this state of affairs is in fact the result of a fundamental misunderstanding of the scientific process. The enterprise of science is not one where we should aim to identify facts, because facts do not really exist; instead, it is one where theory is used as a tool to incrementally better understand phenomena, each additional bit of research contributing to a better description of a real research problem. When we pursue conclusions like “this is how organizations function” or “this is how employees behave”, we are misleading both ourselves and those who rely upon our research.
Certainly, such misunderstandings have occurred before, with highly damaging implications for the progress of science. One such misunderstanding was the over-reliance on statistical significance testing, where “statistically significant” was mistakenly taken as synonymous with “real” or “important”. Although people still make such mistakes in interpretation, the movement toward effect size interpretation and model specification (as evidenced through changing APA guidelines for reporting) is clearly targeted at reducing this continuing problem. In their article, Pillutla and Thau are arguing that theory development as the sole target of science is the research-methods equivalent of the statistical-significance interpretation problem – that this over-emphasis in academic publishing is pervasive and harmful. Like statistical significance testing, novel theory development is sometimes the right approach, but many times it isn’t.
Through analysis of citations to one of the most dominant proponents of theory-building as the basis of all science, the authors also contend that management, business, and applied psychology have been disproportionately harmed by this viewpoint, in comparison to other areas of the social sciences.
This view is a bit extreme, but may be the front edge of a coming wave of reform. I’ve noticed myself over the last few years that fewer and fewer papers appearing in our top journals – Journal of Applied Psychology, Personnel Psychology, Academy of Management Journal, etc. – have solved any real problems for actual organizations. Instead, they focus on exploring and specifying increasingly minute aspects of organizations – interesting, sure – but not terribly useful in any tangible sense. From chatting with practitioners and other early career scholars, I’ve discovered this view is not all that unusual. One person at the Society for Industrial and Organizational Psychology conference this year even told me, “I haven’t seen anything I could actually use to help my employees in Journal of Applied Psychology for at least five years.” That is a terrible state of affairs. If we’re not solving real problems, what value do we really provide to both organizations (whom we study) and taxpayers (who pay us, or at least those of us working in state institutions)? The push to publish only in top tier journals – where such theory development is required – only exacerbates this problem.
Interesting theories are not, by themselves, worthy of investigation unless they are put to the service of explaining research problems… the quest for novelty and interestingness of facts has infused them with significance without any regard to the knowledge that they generate.
So are things as dire as Pillutla and Thau indicate? There have certainly been a number of high-profile cases in psychology lately that have caused the public to question the value of our entire field. And we should absolutely work to repair that damage, but the path to do so is unclear. Some researchers have taken a purely empirical approach, attempting to replicate controversial papers and examine their convergence, eschewing theory entirely in the quest for reliable knowledge. Is that enough? I’m guessing we’ll find out in the next five or ten years.
Day 3 was the final day of the SIOP conference. Per tradition, sessions were lightly attended – too much late night enjoyment at the end of Day 2, I suspect! I had a slow start myself, only getting to poster sessions a little after 10AM. One that caught my eye explored the effect of organizational support for the use of technology over time; surprisingly, the effect of strong organizational support on technology use weakened over time, not quite in line with what prior theory (and the researcher presenting!) predicted.
I also decided somewhat last-minute to attend a Master Tutorial on Bayesian statistics for I/O. Unfortunately, despite a full house and CE credits on the line for many, the speaker didn’t show! After about 15 minutes, I left and headed back to the poster session on Research Methods, with lots of interesting work, including a bit more on the equivalence of mTurk and undergraduate research samples. The added value on this paper? English-speaking, US-based mTurk samples are similar to undergraduate samples, but non-English-speaker samples are a little different.
Finally, I ended up at the closing plenary, given by Reverend TJ Martinez, founder of the Cristo Rey Jesuit College. He related an inspiring story about starting with zero support to start a school but ending by graduating a sizable group of low-SES high school students (a population with a previously abysmal high school completion rate) with a 100% entrance rate into college. I am still not sure precisely what he had to do with I/O, but he was certainly an entertaining and admirable speaker.
At the very end, incoming SIOP President Tammy Allen laid out a five-part plan for SIOP in the coming year:
- Tie the varied local I/O advocacy and interest groups more closely to SIOP.
- Increase the visibility of I/O Psychology in Introductory Psychology courses and textbooks. Graduates of undergraduate psychology programs should, at minimum, be able to answer the question, “What is I/O Psychology?”, and this doesn’t happen in most programs.
- Work to improve SIOP’s branding, which may include the launch of a new practitioner-oriented journal.
- Improve national advocacy efforts on behalf of SIOP. Apparently we already have some folks on retainer, but this will expand.
- Better understand SIOP’s role and position in “science.” Psychology is viewed as one of the foundational areas of scholarship on which many other areas are based, but where does I/O fit specifically?
And that’s it for SIOP 2013! I’ve got another day in Houston – which will likely involve the Houston Aquarium and a nice dinner somewhere – and then back to cooler Norfolk (at a relatively chillier 70 degrees F!), where I’ll be writing a slew of follow-up emails to all the intriguing folks I met this year. And hopefully I’ll be sufficiently recovered from this trip in a year to attend SIOP 2014 in sunny Honolulu!
|14:04||#SIOP13 Day 3 begins with another SIOP tradition, for me anyway: I’ve lost my voice! Too much of a good thing I suppose!!|
|16:21||Fascinating poster on org support for tech use in orgs – changes over time in unexpected ways #siop13|
|16:22||@ericknud by that logic, wouldn’t you want to be unattractive? Inverted u-curve perhaps? #siop13|
|18:22||@ericknud yes, that’s what I’m saying – it is probably more complicated than just “moderately attractive is good”|
|18:24||#siop13 is both amazing and frustrating because you run into people you know constantly… Missed a whole session!|
|18:26||@TSP_Consulting @jbrodieg and what do #SIOP points get me, hmm? #siop13 or #siop14 perks perhaps?|
|18:41||Intro to Bayesian Statistics speaker is missing 10 mins into session… Uhoh #siop13|
|18:46||#SIOP13 skipped out on Bayesian tutorial without a speaker for research methods poster session… At least they’ll all show up!|
|18:51||@jsnread that was the consensus of the room!|
|18:53||@BelindaK04 what room is this?|
|21:55||#siop13 closing plenary… Rev Martinez is a pretty funny guy!|
|21:57||#siop13 closing plenary… Low income students as an investment opportunity for orgs… Future customers, future employees|
|22:10||#siop13 closing plenary… Inspiring speaker on the power of the few and disenfranchised to accomplish their goals|
|22:18||#siop13 #SIOP Prez Tammy Allen’s goals: tie local IO to SIOP, IO in intro, branding incl practitioner journal, natl advocacy, map of science|
|22:19||#siop13 video on Hawaii to encourage attendees at #siop14… Do we really need convincing??|
|23:06||End of Day 3 and end of #siop13… Great conference! So many followup emails to write!|
Day 2 of SIOP 2013 was a bit busier for me. After an early breakfast, I headed to my first session of the day at 8:30, a discussion of social media in selection, which ended up really being a discussion of LinkedIn in selection. Like Day 1, this was a bit depressing. Although there was a lot of talk about social media, there was not much data, which is what I was really hoping for. I did hear a few interesting tidbits in this session that I had not heard before. For example, a lot of managers look up job candidates on LinkedIn, where many pieces of information are shared that would not be part of a job application, like profile pictures and affiliated religious groups. Once a hiring manager sees such information, their judgment has potentially been irreparably affected – or more critically, the possibility that it has been so affected creates a substantial legal risk. How can an organization prove that manager wasn’t influenced by the job candidate’s race, skin color, national origin, gender, or religion once they have been exposed to it?
The next session on social media was a little better, data-wise, but it wasn’t perfect. One of the presentations provided some data on the criterion-related validity of LinkedIn profiles for predicting job performance, as an extension of a previously published paper doing the same for Facebook profiles. As it turns out, LinkedIn profiles can provide valuable information to hiring managers. But I asked a question of the speakers: Aren’t we just setting ourselves up to redo this same study for every social media platform of the moment? If we seek a scientific understanding of the useful information provided by social media, don’t we need a better way to do it? Everyone seemed to think that was a great idea, but alas, I didn’t get any specific recommendations on how to go about it.
Next, I headed to a symposium on online simulation, which was absolutely packed. Much in line with what I noticed from Day 1, practitioners are hungry for data on how these technologies have and should affect their practice – and yet again, data was a bit light. They presented some interesting assessment tools, but they seemed like relatively small advancements over what we already have – for example, the use of branching video-based simulations rather than branching text or static video assessments. The practitioners did bring up some interesting issues they faced in launching these technologies globally. For example, when they tried to launch their video-based simulation in another country with poor Internet infrastructure, they discovered that bandwidth was not the guarantee that it is in the US – their client couldn’t view the videos at even quasi-decent quality. This resulted in some rather last-minute changes to meet client needs.
After this symposium, I headed over to the poster session on training – in an hour, I got through only about 75% of the room. Hopefully I’ll have a bunch of papers incoming to my inbox in the next few days so that I can revisit them in more detail than a 3′ x 5′ poster can provide!
Finally, at 5PM, in a symposium with my colleagues at ICF and ARI on emerging training technologies, I gave my presentation on gamification, which was by a large margin the most successful presentation I have ever given at SIOP in the 9 years I have attended. In it, I presented a theoretical model of gamification providing two specific mechanisms by which gamification can affect learning in the context of education/training, and I tested one of those processes empirically, providing several practical recommendations for those wishing to gamify their own organizational learning. I had literally 25 minutes of questions and comments after the session ended!
All in all, an exceptionally successful Day 2! Day 3 is a bit lighter, and I suspect tonight will be a bit more intense, so Day 3 may also start a bit late. But I am still hopeful for some great learning and some great conversations.
|13:13||Hangovers and spotty session attendance… so starts Day 2 of #siop13… Although many of the hangovers won’t start for hours yet!|
|13:41||Yet another session on social media for selection… Some data in this one, I hope? #siop13|
|14:05||Information on protected classes may be more common on LinkedIn than in resumes… Greater legal risk (managers cannot unsee!) #siop13|
|14:06||@ChrisWieseIO The speakers seem to be mixing selection predictors and selection methods… Seems a very jumbled approach to me. #siop13|
|14:10||I’m getting a little tired of hearing “maybe we’ll have some empirical research later” in these social media symposia #siop13|
|14:26||@ericknud I’ve got several running right now… Just sad there are not more here #siop13|
|14:34||I am sad when people count the number of stat. significant relationships in their results as an indicator of result stability #siop13|
|14:47||@ericknud Open for now! Stick around, will probably head up front|
|15:17||@ericknud was talking to Don until just now, out in the hall now for a bagel|
|15:17||Discovered I was hoarse from last night’s activities while asking a question at a symposium… whoops #siop13|
|17:12||Full house, standing room only at innovations in online simulation #siop13|
|17:29||To practitioners, video-based virtual assessment center apparently = long-form video SJTs with open ended responses #siop13|
|17:36||Cultural issues within orgs can be unexpected and tricky for innovative assessment: “our org doesn’t use email” #siop13|
|17:55||Online assessment center design with branching reminds me of game design process #siop13|
|19:02||Did not even get all the way through the training poster session in an hour – too much interesting stuff! #siop13|
|19:03||RT @sandyfias: It makes me sad that about 8/3800 I/O psychologists at #siop13 r tweeting. And we ask how we can be more visible and rele …|
|19:03||@sandyfias I’d suggest we have a tweetup, but it would just be depressing! #siop13|
|20:22||Hope you’re attending my session at #siop13 on serious games and gamification in Room 339AB at 5PM today! (Room change!)|
|20:25||@BelindaK04 @PsychoSoAnt @sandyfias Are there enough people? I’ve got 3-4:30 tomorrow #siop13|
|21:55||Starting in 5 mins, session at #siop13 on serious games and gamification in Room 339AB!|
|22:05||Standing room only at the session! #siop13 http://t.co/HLZwNOwTpY|
|22:37||RT @jsnread: Gamified learning results in longer time spent on task, thus greater learning. Wish I could have gamified my dissertation. …|
|22:38||@neilmorelli thanks – got a little rushed at the end but hopefully it made sense! #siop13|
|23:27||Wow – 25 minutes of questions about #gamification research after session! Thanks everyone! #siop13|
|23:27||@jsnread ha, why not!?|
|23:50||#siop13 Day 2 is over! Now to the ODU alumni dinner (where I entertain) and the Minnesota alumni event (where I am entertaining)|
SIOP 2013 has begun, and Day 1 was a fascinating set of presentations. The day started with the opening plenary by SIOP President Doug Reynolds, talking about Big Data and I/O Psychology’s role in it. For some reason, we are not leading the way, despite our expertise being in synthesizing, analyzing, and interpreting massive datasets about behavior and personal characteristics. Sounds like it would be a happy marriage to me!
The biggest difference I noticed today in comparison to last year was the massive number of presentations on social media and mobile assessment. Mobile assessment presentations seem to focus on the apparent equivalence of computer-based and mobile-based assessment, but I’ve been seeing some results contradictory to that… so more research is clearly needed. Social media research is progressing, but I heard “we don’t have data” way more times than I can even count. If you don’t have data, why are you presenting at SIOP? This is a data-hungry crowd! Perhaps most disappointing was the session on “Empirical Evidence” for social media, where only one presentation had any real data to speak of. I had been hopeful, at least.
Day 1 was actually my lightest day, and I had a couple of time slots that were not scheduled. No worries – just walk 10 feet at SIOP and you’ll run into someone you know. The disadvantage is that it takes an hour to get from one side of the conference hotel to the other!
I also sat in a presentation where “Millennials” were predicted to be dramatically different from everyone else. Surprise! I hate such conclusions – Millennials are a diverse bunch, like every other generation. And I don’t say that just because I’m a Millennial.
Evening activities went pretty late into the night… but it’s SIOP… so that’s just what happens. Totally worth it. Day 2 summary tomorrow!
Archive of Day 1 Twitter posts follows:
|13:38||I wonder if this is the first #siop with a 2-minute silence for award winners… #siop13|
|13:48||So many #siop13 awards… where is the award for “best tweet”?|
|13:56||Production values have gone up a bit for #siop13 fellow announcements. The muzak is an… interesting touch.|
|14:07||RT @fanseel: Very proud of my #ugent colleague Filip Lievens being awarded fellowship by Society for Ind&Org Psychology #SIOP13 http …|
|14:10||Who has made the biggest impact on your professional development? Donate to #siop in their honor #siop13 http://t.co/1qyjQJeRes|
|14:21||Quite the risqué exposé on SIOP President Doug Reynolds from Tammy Allen #siop13|
|14:33||Will this address on #siop branding effect change differently than the last several years of addresses on branding? #SIOP13|
|14:38||@sioptweets #SIOP13 is the official SIOP hashtag, right? Or did we switch back to #siop2013 this year?|
|14:39||@ChrisWieseIO I prefer #siop = “scientist-practitioners who care about both” #siop13|
|14:43||Doug Reynolds saying we are behind on Big Data… But we’ve been doing a lot of this for years, just didn’t call it “Big Data” #siop13|
|14:57||@jsnread Thought so! Saw folks using #siop2013 instead of #siop13 and wanted to be sure. Last year was #siop12, didn’t think it would change|
|14:58||With the closing of the opening plenary, #SIOP13 has officially begun! First stop coffee and scones?|
|16:20||@neilmorelli Which presentation is this from? We have contradictory results.|
|16:21||At community of interest on virtual work #SIOP13|
|16:45||RT @ftcniek: JAP editor Kozlowski in panel session on causality: “I do not send out cross-sectional self-report studies to reviewers” #s …|
|18:01||Andalocua tapas bar near the conference hotel highly recommended! #siop13|
|19:45||It’s always 3 simultaneous sessions I want to attend or zero #siop13|
|20:35||In state of social media symposium #siop13|
|20:52||#siop13 symposium on social media so far: lots of ideas, not much data, not many specifics|
|21:01||Most common statement about social media in Schmit’s talk: “we don’t have much research” #siop13|
|21:05||43% of orgs ban social network sites on their networks according to SHRM survey; guess no one thought about smartphones #siop13|
|21:22||Most states prohibit using lawful off-work activity to make hiring decisions; probably includes Facebook photos of drinking, smoking #siop13|
|21:29||@ChrisWieseIO Agreed. Perhaps a new Journal of Social Media in Organizations? #siop13|
|21:44||Getting the impression that the speakers in this social media session don’t know what time it is set to end #siop13|
|21:52||Third speaker just started, 2 mins after end of session… Uh oh #siop13|
|22:01||Now at Empirical Evidence for social media, so hopefully more hard evidence in this one! #siop13|
|22:03||Not going to be “a lot of numbers”, uh oh again #siop13|
|22:04||I do support a new hashtag for common use at #siop13 suggested by speaker: #thispresentationisawesome|
|22:11||@fanseel So it seems. He has a theory though. Just didn’t test it in his study. Not sure why.|
|22:14||@fanseel Ha, yes exactly!|
|22:23||Millennial job applicants usually use combination of social media and websites to investigate orgs, but some use social media alone #siop13|
|22:28||21% of youths felt it inappropriate that the military use social media for recruiting #siop13|
|22:30||@DrDavidBallard I know of precisely 2 studies… I’m not sure if she found anything else in her SHRM-funded review! #siop13|
|22:53||Day 1 of sessions wrapped up! Now hours and hours of socializing and networking until passing out and doing it again tomorrow #siop13|
As in 2010, 2011, and 2012, I’ll be live-blogging the SIOP conference, which begins Thursday, April 11 and runs through Saturday, April 13. This post contains a list of all the sessions that I am interested in attending, which are generally focused on technology, training, and assessment. Barring any unexpected battery problems, my live blogging will be on Twitter, with a permanent record stored here.
In the graphic below, the symposium that I am presenting in is colored red. As you can see, it’s a bit packed, so I won’t be attending all of these, but this is everything I’m interested in. If you’d like to meet up at any of these events, or if you think I missed something that I should definitely attend, please let me know!
|No.||Day||Start||End||Session Title||Room||Session Type|
|1||4/11||8:30||10:00||Opening Plenary Session||Grand A||Special Events|
|16-16||4/11||11:00||12:00||Antecedents and Consequences of Voluntary Professional Development Among STEM Majors||Ballroom||Poster|
|75||4/11||3:30||5:00||The Science and Practice of Social Media Use in Organizations||346 AB||Master Tutorial|
|76-9||4/11||3:30||4:30||Predicting Persistence in Science, Technology, Engineering, and Math Fields||Ballroom||Poster|
|89||4/11||5:00||6:00||Back to the Future of Technology-Enhanced I-O Practice||335 BC||Panel Discussion|
|91||4/11||5:00||6:00||Empirical Evidence for Successfully Using Social Media in Organizations||340 AB||Symposium|
|110||4/12||8:30||10:00||The Promise and Perils of Social Media Data for Selection||Grand E||Symposium|
|114||4/12||8:30||10:00||Team Leadership in Culturally Diverse, Virtual Environments||Grand J||Symposium|
|127-18||4/12||10:30||11:30||Creation and Validation of a Technological Adaptation Scale||Ballroom||Poster|
|138-32||4/12||11:30||12:30||Personality Perceptions Based on Social Networking Sites||Ballroom||Poster|
|148||4/12||12:00||1:30||Innovations in Online Simulations: Design, Assessment, and Scoring Issues||344 AB||Symposium|
|159-22||4/12||1:00||2:00||Learner Control: Individual Differences, Control Perceptions, and Control Usage||Ballroom||Poster|
|188-22||4/12||3:30||4:30||Another Look Into the File Drawer Problem in Meta-Analysis||Ballroom||Poster|
|206||4/12||5:00||6:00||I-O’s Role in Emerging Training Technologies||342||Symposium|
|225||4/13||8:30||10:00||New Insights Into Personality Test Faking: Consequences and Detection||340 AB||Symposium|
|274-27||4/13||12:00||1:00||Using Bogus Items to Detect Faking in Service Jobs||Ballroom||Poster|
|274-1||4/13||12:00||1:00||Examining Ability to Fake and Test-Taker Goals in Personality Assessments||Ballroom||Poster|
|274-13||4/13||12:00||1:00||Applicant Withdrawal for Online Testing: Investigating Personality Differences||Ballroom||Poster|
|280||4/13||12:00||1:30||Technology Enhanced Assessments, A Measurement Odyssey||Grand F||Symposium|
|305||4/13||2:00||3:00||Advances in Technology-Based Innovative Item Types: Practical Considerations for Implementation||337 AB||Symposium|
|306-12||4/13||2:00||3:00||Is Crowdsourcing Worthwhile? Measurement Equivalence Across Data Collection Techniques||Ballroom||Poster|
|306-21||4/13||2:00||3:00||Is the Policy Capturing Technique Resistant to Response Distortion?||Ballroom||Poster|
|316||4/13||4:30||5:30||Closing Plenary Session||Grand A||Special Events|
Most academic research on video games studies them as single-player experiences – a single individual, alone in a room with a game console. Research on massively multiplayer online games (MMOGs) is also growing. However, much (and perhaps most) modern video game play is multiplayer in a smaller setting: at home in front of a Kinect or a Wii with two or three friends. Surprisingly little research has examined this context.
In a recent issue of Cyberpsychology, Behavior, and Social Networking, Peng and Crouse compared enjoyment, future play motivation, and physical intensity in the Xbox 360 game Kinect Adventures across single player, cooperative multiplayer, and competitive multiplayer modes, finding that competitive multiplayer provided the greatest enjoyment and motivation to play in the future.
The highly physical core game mechanic is best described by the authors:
This game utilized the Kinect motion camera, and the players used their own bodies as the controller to pop soap bubbles that floated between holes on the walls, floors, and ceilings of a virtual zero-gravity room by moving forward, backward, left, and right and waving their arms.
These three conditions were more complex than they appear:
- In the single player condition, participants played alone, twice. In this way, they were competing with their pre-test score.
- In the cooperative multiplayer condition, participants brought a friend. After individual pre-tests, the two participants played the game together (they were physically active in the same space simultaneously, and their scores were combined).
- In the competitive multiplayer condition, participants brought a friend. After individual pre-tests, the two participants then played the game separately (in different rooms) and were told what their friend’s previous high score was on the pre-test.
After the game, participants completed the focal measures. Enjoyment was assessed with a 7-item scale asking them to rate the game on seven adjectives (including “boring,” “entertaining,” and “interesting”). Future play intention was rated on a 3-item scale asking questions like, “Given the chance, I would play this game in my free time.” Physical activity was captured with an accelerometer attached to each player.
162 students participated in the study; however, this total appears to include the friends brought by participants. Sample sizes were 26 for single player, 74 for cooperative play, and 52 for competitive play. Given that their analytic strategy was a simple ANOVA, it does not seem that the researchers appropriately controlled for covariance between friend pairs. A better strategy would have been to exclude the “friend” from analysis, although this would have shrunk their apparent (and somewhat misleading) reported sample size.
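To make the dependence concern concrete, here is a minimal sketch (mine, not from the paper) of the Kish design effect, a standard way to quantify how much clustered observations – like friend pairs – inflate variance and shrink the effective sample size. The ICC value below is an assumption chosen purely for illustration; the paper does not report one.

```python
# Hypothetical illustration of the friend-pair dependence problem.
# The ICC (within-pair correlation) is assumed, not reported by the paper.

def design_effect(cluster_size: int, icc: float) -> float:
    """Kish design effect: variance inflation due to clustered observations."""
    return 1 + (cluster_size - 1) * icc

def effective_n(n: int, cluster_size: int, icc: float) -> float:
    """Number of independent observations with equivalent precision."""
    return n / design_effect(cluster_size, icc)

# The multiplayer conditions collected data in friend pairs (cluster size 2).
# If friends' scores correlate at, say, ICC = 0.3, the 74 cooperative-play
# observations carry about as much information as ~57 independent ones.
print(round(effective_n(74, 2, 0.3), 1))  # prints 56.9
```

An ANOVA that treats paired friends as independent effectively overstates its sample size by the design effect, which is one way to see why the reported F-statistics may be inflated.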
Based upon the reported ANOVA, several differences were statistically significant. Single player enjoyment was lower than in the cooperative and competitive multiplayer modes. Future play motivation after playing alone was lower than after both cooperative and competitive multiplayer modes. For physical activity, the relationship was more complicated: the single player and cooperative multiplayer modes resulted in greater physical activity than the competitive multiplayer mode.
The authors did not depict these relationships graphically (always graph your results!), so I did:
Although the effects appear pretty substantial in terms of effect size (for example, from 3.99 to 5.22 for enjoyment, about 0.8 standard deviations), it’s hard to say how much the doubled sample in the two multiplayer modes inflated the F-statistics (and thus the statistical significance) of the results. So I am not 100% confident in the researchers’ interpretation of these data, but I am cautiously optimistic that these results would hold up with appropriate handling of the dependence.
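For readers who want to check the back-of-the-envelope effect size, here is a quick sketch. The pooled standard deviation of ~1.54 is back-derived from the "about 0.8 standard deviations" figure; it is my assumption, not a value reported in the paper.

```python
def cohens_d(mean_a: float, mean_b: float, pooled_sd: float) -> float:
    """Standardized mean difference (Cohen's d) between two conditions."""
    return (mean_b - mean_a) / pooled_sd

# Enjoyment means are from the paper (single player vs. multiplayer);
# the pooled SD of 1.54 is an assumed value consistent with "about 0.8 SDs".
d = cohens_d(3.99, 5.22, 1.54)
print(round(d, 2))  # prints 0.8
```

By conventional benchmarks, anything near 0.8 is a large effect, which is why the inflated-significance concern matters more for the weaker contrasts than for this one.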
In the context of education and training, this has some critical implications: play with others seems preferable to play alone. While this might seem intuitive, we too often build serious games and gamification efforts where people compete against themselves (e.g. high scores, or some instructor-set target goal level). Adding some coworkers or fellow students to the mix might just improve those efforts.
Recently, there was a brief discussion about online Ph.D. psychology programs on the PSYTEACH listserv. Generally, it was met with disbelief that an online psychology doctoral degree could be rigorous for a variety of reasons – a lack of teaching experience, a lack of lab experience, and a lack of face-to-face interaction with faculty and other students, to name a few.
I think it’s important to recognize that online doctoral programs as they exist now and online doctoral programs as they could exist, even given current technology, are very different. We currently have the technology available to have the traditionally rigorous, intensive doctoral experience for most students – I just don’t know of any programs that actually do so now. There are several online Psy.D. programs, but requirements for such programs are not as intense, as the degree is practice-oriented – the requirements for a Ph.D. program are more stringent.
So as a thought experiment, I decided to map traditional Ph.D. student experiences onto the technologies and program structure that would be required to realize that program online effectively. In doing so, I came up with five dimensions – the three traditional components of academic performance (teaching, research, and service) plus an instructional component and a community component. I’ll discuss each in turn.
- Teaching Experience.
- Challenge. In a traditional Ph.D. program, students often (although not always) gain teaching experience by being assigned as teaching assistants (TAs), lab/section leaders, and full-blown instructors. Many schools use a ramp-up program such that earlier students (1st and 2nd year, for example) are TAs or lab leaders, and after proving themselves in that role (or after getting their MA/MS), move up to teach their own courses. If students wish to gain experience in online courses, they sometimes have that freedom; in an in-person program, one can teach in-person or online courses. In an online program, students can still certainly work as TAs or instructors of online courses, but since there are no in-person classes, they cannot gain experience teaching one.
- Roadmap. The dirty secret that no one wants to admit to is that, to a certain degree, teaching is teaching. If you look at programs designed to assess the quality of online courses, you’ll find that what makes a high quality course online is very similar to what makes a high quality course in person. For example, Quality Matters is a “certification” program designed to assess such quality. Here are the dimensions it uses to make evaluations: inclusion of a course overview and introduction, statement of learning objectives, appropriate assessment strategies, informative instructional materials, tools to encourage interaction and engagement with and between students, appropriate use of technology, support for learners, and accessibility. On its own, this list could apply to either an in-person or an online course. The reason that online courses are often lower quality is that, for the instructor, it is often more difficult to realize these goals online. In an in-person course, I can split people into small groups to discuss a difficult question; online, I need to adequately plan ahead to set up a discussion area, ensure it’s communicated to everyone appropriately, identify any technology roadblocks and proactively work to circumvent them, and then monitor the discussion over a long period of time. If anything, teaching online is more difficult than teaching in person. It requires most of the same skills and then several additional technology skills. Where online teaching is less difficult for the instructor is the face-to-face interaction component. You don’t need to be “on”, excited and engaged at 8AM. You don’t need to learn to watch and interpret subtle facial cues that indicate students don’t quite grasp the concept you are discussing. But this is a relatively minor aspect of teaching, in the grand scheme of required skills.
- Ideal Solution. Given that teaching online is more difficult than teaching in-person except in terms of interpersonal interaction, the ideal online Ph.D. program would have the Ph.D. student start as an online TA and move to online instruction. After “proving” themselves online, the Ph.D. student should teach as an adjunct professor at local universities or community colleges, and the online university should cover the host university’s salary requirements. For example, if the student’s local institution pays $3000 for an adjunct to teach a class, the online university should pay $3000 to the student to teach on behalf of the host institution.
- Research Experience
- Challenge. In a traditional Ph.D. program, students gain research experience by actively designing, running, analyzing, interpreting, and writing up research studies. Most of these experiences involve working closely with an academic adviser who creates these studies and runs them. Such studies are often run by Ph.D. students in in-person laboratories, which require direct oversight of research participants.
- Roadmap. In truth, a lot of research in psychology these days is conducted through online surveys. This would be ideal for an online Ph.D. student, requiring few special accommodations. Much research (my own included) incorporates experimental designs but does so online – for example, assigning different research participants to different stimulus materials and measuring outcomes through a follow-up survey. For research that is in-person and experimental in nature, requiring fine timings or specialized equipment, there is currently no easy (or more critically, valid) way to conduct such studies online.
- Ideal Solution. For Ph.D. programs that absolutely require in-person experience or equipment (e.g. if an fMRI must be used, or if clinical populations must be interacted with, etc.), such programs should not be offered online without partnerships with universities local to the student. Ideally, a network of such inter-relationships between universities could be established to facilitate this, but I don’t see this happening any time soon. If a program relies primarily on survey research, Skype or other videoconferencing software can be used to interact with the adviser and with the various necessary committees (e.g. dissertation committees). Research experiences must be integrated from the first year of study. I can’t even count the number of times I’ve seen students in online Ph.D. programs asking for advice on LinkedIn or on Listservs on how to design studies or conduct analyses for their dissertations because it was the first time they ever had to conduct a research study in their entire graduate program. This is not acceptable. A Ph.D. program must incorporate research from Day 1. Perhaps most critically, universities must keep the student-to-faculty-adviser ratios similar to those of in-person programs (for Ph.D. students, I’d characterize 8 advisees per faculty member as a large program; ideally, this would be in the 3-6 range). The workload is no lower for faculty, and for quality standards to remain high, a high degree of individual attention is critical.
- Service Experience
- Challenge. Graduate school often brings with it a myriad of service opportunities. In my graduate school experience, this included things like hosting students for welcome/interview weekend, running a student group, organizing brownbags, and so on. This communicates the shared mental model of academia to students: everyone pitches in a little bit to get things done, and everyone’s voice is heard.
- Roadmap. In an online environment, there is no reason that service opportunities would be lessened. For example, there should be an online student association that organizes online brownbags through Skype or Google Hangouts. A leading scholar could give a talk, with individual live cameras on every other student, all organized by a team of online students. Students could take turns manning the live video “Q&A” area during the official day set aside for interviews of upcoming graduate students. The opportunities to pitch in are endless for a motivated student.
- Ideal Solution. Of the five dimensions I’ve listed here, service is probably the easiest. It only requires a faculty willing to be creative and open-minded about the activities that students take part in. At a minimum, students should run their own graduate student association and hold regular online events.
- Instructional Methods
- Challenge. One of the most difficult aspects of teaching graduate students is the actual act of teaching. It is generally not effective to teach Ph.D. students the same way you teach undergraduates. With undergraduates, there’s a certain amount of memorization required (e.g. who were the important figures in the development of psychology?), with deeper understanding lightly layered on top of that memorization. If students came out of my I/O Psychology class able to tell me the major functions of I/O and the important controversies surrounding those functions, and to apply some of the principles to their own workplaces, I would be thrilled. At the Ph.D. level, the requirements are much more intense. Not only must Ph.D. students be able to tell you about the controversies, but they must also be able to propose research studies to solve the problems that those principles introduce. They must be able to read and critically evaluate a research literature for gaps and limitations. These are skills that are not transferred easily in lecture, which is why most Ph.D. programs make heavy use of seminars (for content courses) and labs (for statistics courses).
- Roadmap. Recently, my department floated the idea of taking online Master’s students without disrupting pre-existing graduate courses. The biggest challenge in this idea is the integration – having in-person and online students in classes concurrently. But I am confident it is possible. For example, a computer could be set up with Google Hangouts (so that each attending online student would be visible with a webcam on that computer), and that computer could be wheeled into each class where online students are needed to participate. These systems are actually quite easy to use; when someone starts talking, the focal video window switches to that person so that you can see them full-screen. Whenever a student wanted to ask a question, a window would appear with that student’s face; the instructor would only need to turn their head and talk to the camera to address that student. Intense, interpersonally-oriented seminars are possible and even simple once set up given current video technology. This is no longer a limitation of online education. If the in-person component is not required, it gets even easier; instead of rolling a computer in, everyone in the class (professor included) can see each other face-to-face with video streaming.
- Ideal Solution. The university’s IT group should invest in a high-reliability high-quality small group video streaming technology. One example is Cisco Jabber. For blended classrooms (with both in-person and online students), a mobile computer should be set up that can be transported from classroom to classroom as needed. Real-time instant support must be provided for the technology to both faculty and students. Class sizes should be kept under 10 for most topics; exceptions might include some “core” courses, like various statistics courses, research fundamentals, and proposal-writing courses.
- Community Participation
- Challenge. Perhaps the most subtle indirect benefit to attending an in-person institution is immersion within the academic community. When you walk to class, you run into professors who may stop you in the hall to chat. When you conduct research, you interact with faculty to get your IRB approvals, to work within the departmental subject pool, and attend research talks. When you conduct service, you are working directly to the benefit of other graduate students and faculty within your department, college, or university. All of these activities teach you to be part of a larger scholarly community. In many online programs, this sense of community is missing.
- Roadmap. To this point, psychology faculty have been lucky in terms of community. By bringing in a graduate student, that student will be exposed to the local scholarly community by default. It requires little or no extra work on the part of the faculty. Online, this is not true. Community must be actively built. Videoconferencing is perhaps the most cost-effective way to do this. For example, lab meetings could be held weekly or biweekly through videoconferencing to encourage people to make such connections. But this alone is probably not enough to provide a real sense of community.
- Ideal Solution. To build community, I recommend a multi-pronged approach. First, videoconferencing should be used within lab groups to encourage small-scale community. Second, a student association with regular meetings should be established and run by the students themselves. Third, regular online brownbags should be established with a technology enabling both participant-to-participant and participant-to-presenter interaction. Fourth, students should be required to attend one conference per year (probably APA), where a hospitality suite would be rented by the online institution to enable everyone to meet face-to-face at least once per year. Preferably, these events would be broken down further to give ample opportunities for labs to meet together as well. Fifth, a regular retreat should be held once per year at a physical location outside of the context of a conference for two to three days, to enable long-term planning among labs and to conduct team-building exercises more broadly.
So as you can hopefully see, what I would consider a high-quality, rigorous online psychology Ph.D. program would require a great deal of effort on the part of a department chair and likely several committees. It is not something that can be started lightly, or else your program will end up with the reputation of current online Psychology programs, which is frankly pretty terrible. And that doesn’t serve anyone well, faculty or students. While I’d love to see or develop such a program, I doubt we will see anyone willing to make the sort of investment required to realize such a program for many years – which is a shame, because many high-quality students who can’t move (for whatever reason; typically family-related) currently have no way to complete a Psychology Ph.D. And that is something we should try our best to correct.
In a recent issue of the Journal of Virtual Worlds Research, Landers and Callan examine appropriate evaluation of learning taking place in virtual worlds (VWs) and other 3D environments. In doing so, they develop a new model of training evaluation specific to the virtual world context, integrating several classic training evaluation models and research on trainee reactions to technology. The publication of this model is critical because 1) much previous research in this area does not consider the antecedents of learning in this context, leading researchers to conclude that VW-based training is ineffective in comparison to traditional training when the opposite may be true and 2) organizations/educators often implement VWs unsuccessfully and blame the VW for this failure when subtler contextual factors are likely to blame – which may help explain the virtual worlds bubble I discussed a few years ago. Considering that VWs are appearing in research as viable alternatives to traditional, expensive, simulation-based training, the time for VWs may be at hand. The model exploring these issues appears below.
Though framed in terms of organizational outcomes, their model of evaluation could be equally valuable in educational settings. From their integration, the authors make five practical recommendations to those attempting such evaluation (from p. 15, with citations removed):
- All four outcomes of interest should be assessed if feasible: reactions, learning, behavioral change, and organizational results. This will provide a complete picture of the effect of any training program.
- Attitudes towards VWs, experience with VWs, and VW climate should be measured before training begins, and preferably before training is designed. This will give the training designer or researcher perspective on what might influence the effectiveness of their training program before even beginning. If learners have negative attitudes towards VWs, training targeted at improving those attitudes should be implemented before VW-based training begins. Without doing so, the designer risks the presence of an unmeasured moderation effect that could undermine their success. Even if attitudes are neutral or somewhat positive, attitude training may improve outcomes further.
- If comparing VW-based and traditional training across multiple groups, take care to measure and compare personality, ability, and motivation to learn between groups. Independent-samples t-tests or one-way ANOVA can be used for this purpose.
- Consider the organizational context to ensure that no macro-organizational antecedents (like VW climate) are impacting observed effectiveness. Measure these well ahead of training, if feasible.
- Provide sufficient VW navigational training such that trainees report they are comfortable navigating and communicating in VWs before training begins. A VW experience measure or a focus group can be used for this purpose.
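As a sketch of the third recommendation above (comparing groups on personality, ability, and motivation to learn), here is a pooled-variance independent-samples t statistic in Python. The group scores are hypothetical stand-ins, not data from the article.

```python
from statistics import mean, variance

def independent_t(a, b):
    """Pooled-variance independent-samples t statistic for two groups."""
    na, nb = len(a), len(b)
    # Pooled estimate of the common variance across the two groups
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

# Hypothetical motivation-to-learn scores for the two training groups
vw_group = [5.2, 4.8, 5.5, 5.0, 4.9]
traditional_group = [5.1, 5.0, 4.7, 5.3, 4.8]
print(round(independent_t(vw_group, traditional_group), 2))
```

A non-significant t here is the desired result: it suggests the groups were comparable on the covariate before training began.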
For more detail on how to go about such evaluation, this article is freely available (“open access”) to the public at the Journal of Virtual Worlds Research website.
Footnotes:
- Landers, R.N., & Callan, R.C. (2012). Training evaluation in virtual worlds: Development of a model. Journal of Virtual Worlds Research, 5(3). http://journals.tdl.org/jvwr/index.php/jvwr/article/view/6335 [↩]
In a recent study appearing in Cyberpsychology, Behavior, and Social Networking, Moskaliuk, Bertram and Cress examined the value of virtual training environments for training effective coordination between ground police officers and a helicopter crew. This study thus applied virtual environments to one of the situations that I have previously argued is ideal for the application of virtual worlds: a training context where the outcome is highly complex and real-world training would be expensive and dangerous. In this case, in-person training would require helicopters! What better reason do you need to use a virtual world?
So did it work? Here’s the short version: the virtual world-based training was as effective as traditional (lecture plus simulation) training in teaching the officers facts and information about the exercise. But more critically, officers completing the virtual training were better able to later apply what they learned to real situations. Thus, virtual environments were more effective than a traditional in-person simulation at teaching officers needed real-world skills. As suggested by Gartner’s most recent depiction of the hype cycle, it seems we are finally entering the Slope of Enlightenment for virtual worlds.
To test their hypotheses, the researchers assigned 47 German federal police officers to 8-person teams, which were in turn assigned to one of three conditions. Trainees all received in-person training on strategies and rules for the exercise followed by the treatment of their particular condition:
- Traditional training. Officers completed in-person training with a trainer, in which communication with the helicopter was simulated.
- Virtual training. Officers completed the VTE Police Force simulation while wearing headsets to communicate with each other (see images to the right).
- Control condition. Officers completed a distraction task (a group activity based on an unrelated handout).
In the spirit of the Kirkpatrick model, reactions, knowledge, and transfer were next measured. Reactions and learning were measured with surveys, while transfer was measured by evaluating written responses by officers to 11 video vignettes. Reaction outcomes were lower for virtual training; trainees did not feel that virtual training was as realistic or as applicable to real life as traditional training. Despite this, knowledge outcomes were similar between training conditions. 22% of the variance in transfer performance was explained by variance in conditions, with transfer from virtual training significantly higher than both the traditional training and the control.
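A variance-explained figure like that 22% is typically an eta-squared (η²) derived from the ANOVA sums of squares. Here is a minimal sketch of computing it, using hypothetical transfer scores rather than the study’s data.

```python
from statistics import mean

def eta_squared(groups):
    """Proportion of total variance explained by group membership
    (SS_between / SS_total) for a one-way design."""
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_total = sum((x - grand) ** 2 for g in groups for x in g)
    return ss_between / ss_total

# Hypothetical transfer scores for the three conditions
traditional = [6, 7, 5, 6]
virtual = [8, 9, 8, 7]
control = [4, 5, 4, 5]
print(round(eta_squared([traditional, virtual, control]), 2))
```

Because η² is bounded between 0 and 1, it reads directly as a percentage of variance explained, which is what makes it a convenient effect size to report alongside F.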
Overall, this provides compelling evidence that virtual worlds are a viable – and perhaps even preferable – setting for complex training tasks. The only negative outcome was the motivational component – although trainees had similar knowledge and superior transfer from virtual training, they felt like the traditional training was superior. But as Rachel Callan and I recently suggested, training targeted at increasing virtual world acceptance by trainees may be the solution to this one remaining barrier.
Note. Images in this blog post were taken from another paper by the authors.
Footnotes:
- Moskaliuk, J., Bertram, J., & Cress, U. (2013). Impact of virtual training environments on the acquisition and transfer of knowledge. Cyberpsychology, Behavior, and Social Networking. DOI: 10.1089/cyber.2012.0416 [↩]
- Moskaliuk, J., Bertram, J., & Cress, U. (2011). Training in virtual training environments: Connecting theory to practice. Connecting Computer-Supported Collaborative Learning to Policy and Practice: CSCL2011 Conference Proceeding, Vol 1., 192-199 [↩]
There has been quite a bit of hype about MOOCs lately. If you haven’t been following the craze – and it is definitely a craze, just as much as Big Data is – a MOOC is a Massively Open Online Course. I’ve written about them before - in this post, I provide a checklist to let instructors know if a MOOC can replace them. MOOCs are marked by several key features, at least as they are implemented now:
- Anyone can enroll in a MOOC. They are massively open in that anyone in the world can take the course for free.
- They are online courses. The only way to get information to the masses at a very low per-head cost is to do so online.
- They rely on peer involvement to facilitate. When taking a MOOC, you will almost certainly never meet, speak to, or email the instructor. Help for assignments is typically provided by other people in the class.
MOOC fervor is getting even more intense, with the New York Times declaring 2012 the year of the MOOC and some universities granting credit for completion of MOOC courses. It seems all is coming up roses for MOOC providers like edX and Coursera. Proponents have taken on a certain zeal, promoting the idea that MOOCs are the “solution” to college – that all students will one day be enrolled in a single massive MOOC for each course they need, and the current-day idea of “college” will be rendered obsolete.
I say: not so fast. We heard similar rhetoric when online courses in general were introduced, and while many students do opt for online courses instead of in-person courses, most don’t. The predicted mass exodus of students from in-person classrooms to the glorious Internet did not happen. That’s because the in-person classroom still offers a lot of benefits that are difficult to provide online – at least, not without a lot of time and dedication on the part of the instructor.
In my research, I’ve even come across stories about the introduction of correspondence courses with the same sort of rhetoric as we see with MOOCs. Correspondence courses were presented as the solution to over-crowded classrooms, and a way to get more people to go to college who otherwise would not have. After all, why go to a college classroom when you can complete a course by mail? As it turns out – there are a lot of reasons. And there are a lot of reasons why you wouldn’t want to rely on MOOCs for your college education either.
I’ve noticed that proponents of MOOCs tend to not be in higher education. This is a critical problem of perspective. They make claims like, “We saw the Internet completely flip the music industry on its head; this must happen to higher education as well.” Because of this perspective, they make several assumptions that upon scrutiny turn out to be unwise.
- Given: MOOCs rely on automated grading to evaluate student progress.
Unwise assumption: If one class can be graded automatically, all classes can be graded automatically.
Regardless of your personal stance on the value of grades for learning, they do provide a benchmark for progress (albeit an imperfect one). Universities use grades to rank student comprehension of course material; a student with an “A” should come out of a course with greater skill than a student with a “B”. MOOCs can only grade questions where there is a single correct answer or where an algorithm can be used to derive all correct answers. That means algebra is easy to grade; architectural design is not. The most problematic area is writing. Although an algorithm can be used to automatically grade written material, those algorithms, on the whole, are awful. At best, they assess sentence structure, spelling, and complexity. They cannot evaluate if your writing made any sense, which is really the only aspect of the writing most instructors actually care about (except perhaps English instructors!). Artificial intelligence is not yet sophisticated enough to evaluate writing quality or logic, and it will probably be decades before we even start to be able to do this. Think of it this way: how long have we been trying to create accurate translation software that actually converts the meaning of one language into another as effectively as a human translator can? Hint: IT’S BEEN A LONG TIME.
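To illustrate why surface-feature grading falls short, here is a toy scorer (my own illustration, not any real MOOC grader) that scores essays on the kinds of signals such algorithms rely on: length, vocabulary variety, and word complexity. A scrambled, meaningless essay earns exactly the same score as a coherent one.

```python
def surface_score(essay):
    """Toy grader: scores only surface features, blind to meaning."""
    words = essay.lower().split()
    if not words:
        return 0.0
    variety = len(set(words)) / len(words)             # vocabulary richness
    avg_len = sum(len(w) for w in words) / len(words)  # word "complexity"
    return round(len(words) * 0.1 + variety * 10 + avg_len, 2)

coherent = "the experiment shows that practice improves memory retention"
scrambled = "retention practice memory the improves shows experiment that"

# Same words, same surface features, same score - but one is nonsense
print(surface_score(coherent), surface_score(scrambled))
```

Real automated essay scorers are more elaborate, but the underlying limitation is the same: features computable from the text’s surface cannot distinguish sense from word salad.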
- Given: MOOCs are effective at teaching basic skills that can be assessed automatically.
Unwise assumption: The skills that can be assessed automatically are the ones we want to teach at the college level.
I completely agree that MOOCs can effectively replace some basic skills training currently taught in colleges (e.g. spelling and grammar, basic mathematics, etc.). But it does not follow that MOOCs can effectively replace all training currently taught in colleges. I will even go so far as to say that the introduction of MOOCs is an excellent opportunity for universities to do away with all those courses. Let me be clear: if course material can be taught and assessed effectively in a MOOC, it should probably no longer be a college course. It should be a required prerequisite before the student even gets to college. That way, time can be devoted in college to the skills that we actually want students to gain in a liberal arts environment: creativity, problem solving, and critical thinking. If a student is paying for college credits to learn how to add and subtract effectively, they are wasting their money on college after already having been failed by K-12. We should not be spending instructional resources at the college level teaching these skills if a MOOC can deliver them equally effectively.
- Given: MOOCs will create a marketplace for learning skills without the need for degree programs. People can pick the knowledge they want to gain and gain only that.
Unwise assumption: Credentialing is not valuable.
A common perspective is that MOOCs will make degrees obsolete – that somehow having a worldwide marketplace of courses that provide specific skills will be sufficient to prepare people for the careers they want. But this assumes that the credential – possession of a degree – does not provide useful information in that worldwide marketplace. But it does. Possession of a college degree acts as a signal to potential employers: “This person was able to organize their schedule, choose a program of study, and stick to it for multiple years at a sufficient degree of success for this institution to proclaim that they have earned a degree and for us to support that proclamation.” This is why people have judgments about the quality of the college someone attends – there is a belief that a more rigorous college will provide a more rigorous education, and the degree signals that. While I agree that many employers put a little too much faith in what those degrees imply, this doesn’t mean that the degrees themselves are useless. Let’s not throw the baby out with the bathwater. A college degree indicates a substantial multi-year commitment and a major life accomplishment for the degree holder. Let’s not minimize that accomplishment by saying that taking a selection of MOOC courses would be equally impressive. Also important is that college exposes students to a much wider range of knowledge than they would otherwise be exposed to in high school. I’ve had many students who took my Industrial/Organizational Psychology course to fulfill a requirement only to discover that they had a real passion for the topic. Without a broad liberal arts education, they would never have discovered this about themselves. In a MOOC-centered world, those students would never have known that passion.
- Given: MOOCs rely on discussion forums to facilitate learning.
Unwise assumption: Discussion forums can effectively police themselves.
One of the biggest challenges for an online instructor is managing a discussion forum. Forums are used as a constructivist instructional strategy: the goal is for the students to actively think through the material, apply it to their own lives, and be able to explain their application of class concepts to others. There are generally two ways such forums go. First version: the instructor makes some regular assignments to “post in the discussion forum” or “reply to another student in the discussion once per week”. A MOOC can do this too. But these strategies rarely work very well; students do the bare minimum, usually resulting in a flood of posts the day that the assignment is due, and rarely do students demonstrate the application skills described above. Second version: the instructor is an active participant in discussion, working to correct misconceptions, redirect conversations in productive directions, and generally apply their expertise to improve student engagement with the material. The current sophistication level of artificial intelligence means that a MOOC cannot do this. They must rely instead on the “wisdom of the crowd” – that a critical mass of other people in the classroom will 1) be interested in helping others, 2) have sufficient expertise to answer the question being asked, and 3) happen to browse the forum at the moment a response is needed. While this might work for a relatively common learning topic (like Introductory Algebra), when we get to relatively uncommon fields (like Advanced Organic Chemistry), those numbers are going to drop dramatically.
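The critical-mass concern can be made concrete with a back-of-envelope model (my own illustration, not from any MOOC provider): if a fraction p of enrolled students could answer a given question and n of them browse the forum in time, the chance that at least one helps is 1 − (1 − p)^n.

```python
def chance_of_help(p_expert, n_browsing):
    """Probability that at least one browsing student can answer,
    assuming each browser is an independent draw (a simplification)."""
    return 1 - (1 - p_expert) ** n_browsing

# Common topic (intro algebra): many enrolled students can help
print(round(chance_of_help(0.05, 100), 3))
# Niche topic (advanced organic chemistry): far fewer can
print(round(chance_of_help(0.001, 100), 3))
```

Even with identical forum traffic, the probability of a timely answer collapses as the pool of capable helpers shrinks, which is exactly the problem for advanced courses.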
- Given: MOOCs rely on the support of others currently taking the course to provide the social context for a course.
Unwise assumption: Individual attention from a mentor/leader figure is unnecessary for effective learning.
Unfortunately, some college courses are taught by instructors that don’t really care about their students. They are there because they are required to be there, riding the tenure train for the long-haul. But this is not the norm in most universities. Most faculty care deeply about the success and learning of their students individually – they want every student to succeed. A common misconception is that faculty want students to fail. This is simply not true. Instructors want students to strive and push themselves. Instructors want to provide a sufficiently challenging course so that the most advanced students need to push themselves just as much as the struggling students do. This is realized in a variety of ways. Some provide a plethora of comments on student papers with the hopes that those students will read those comments and think about how they can improve for next time. Some instructors schedule one-on-one meetings to talk about course progress and identify areas needing extra attention. Some instructors schedule review days and feel out the classroom for areas of weak understanding so that they can target those areas with supplemental materials. None of these things are currently possible with a MOOC, and all are valuable. Little is more motivating than one-on-one attention, delivered by a mentor/leader figure that you know cares about you and your success. In a MOOC, you are surrounded by other learners, and yet you are all alone.
In summary, I think MOOCs are an important development for higher education, but they are not poised to bring about major reform. At best, they can supplement and bolster existing efforts by replacing remedial and entry-level courses, leaving the more advanced topics for the traditional college classroom, online or otherwise. As MOOCs improve, this may change – but for now, this is the best they can provide.
I believe that one of the major purposes of psychological science is to promote that science to the public – that is, after all, one of the reasons that I write this blog. But scientists vary in the degree to which they agree with this statement. On one side, some researchers will argue that the purpose of science is to make meaningful contributions to knowledge alone, i.e. furthering the frontiers of human understanding and nothing else. To these people, research is for researchers. On the other side, we have people like those appearing in the list below, who work to actively promote their work on Facebook and elsewhere.
This is certainly not the only purpose of a lab Facebook page. I also use our lab’s page for community-building: to congratulate graduate students on major accomplishments, to announce new publications and presentations, and to share jokes that academics and graduate students would find funny. Some labs use it for recruiting undergraduate research participants.
In creating this list, I came to several surprising conclusions:
- Psychology lab Facebook pages are relatively uncommon. There are literally thousands of psychology labs, yet I could only locate 28 with Facebook pages (although there were some restrictions to my search – see below).
- Psychology lab Facebook pages don’t have a whole lot of likes. A few hundred at the high end, but almost all had fewer than 30, and most had fewer than 10.
- Psychology lab Facebook pages often aren’t maintained well. Only two or three pages were used for the purposes I use mine for (community-building with graduate students, lab announcements, etc.). Most post quite infrequently. Unsurprisingly, the pages with the most posts are also those with the most likes.
Now for the list. I’ve included only pages that have at least 2 likes and 1 post, for which I can clearly identify that it’s a lab, and for which posts are in English. We’ll start with the best (wink), and the rest will be alphabetical!
| Facebook Page Name | PI | Location | Area of Psychology |
| --- | --- | --- | --- |
| Technology iN Training Laboratory – TNTLab | Richard N. Landers | Old Dominion University | industrial/organizational |
| Arcadia University: Social Psychology Lab | (unknown) | Arcadia University | social |
| Arkin Lab | Robert Arkin | Ohio State University | social |
| Carter Adolescent Interpersonal Relationships Lab | Rona Carter | University of Michigan | developmental |
| Cogpsy Lab | Maria Viggiano | University of Florence | physiological |
| CSUN Sport Psychology Lab | Mark P. Otten | California State University, Northridge | sport |
| Dr. Tom Ford’s Social Psychology Lab | Tom Ford | Western Carolina University | social |
| Early Learning Lab (ELLA) | Annette M. E. Henderson | University of Auckland | developmental |
| Environmental Psychology, Interpersonal Attachment and Relationships Lab | Kerry Anne McBain | James Cook University | social |
| Evolutionary Psychology Lab at Oakland University | Todd K. Shackelford | Oakland University | evolutionary |
| Evolutionary Psychology Lab of SUNY New Paltz | Glenn Geher | SUNY New Paltz | evolutionary |
| Exercise Psychology Lab | Edward McAuley | University of Illinois | exercise |
| Forensic Psychology Lab | Don Dutton | University of British Columbia | forensic |
| Gender, Sexuality & Critical Psychology Lab RU | Maria Gurevich | Ryerson University | social |
| Lab of Evolutionary Psychology – Research Participation (MQ) | (unknown) | Macquarie University | evolutionary |
| Loyola University Psychology Lab | (unknown) | Loyola University | (unknown) |
| NEU Psychology Laboratory | (unknown) | New Era University | (unknown) |
| Psychology of Social Justice Lab (PSJL) | Phillip Atiba Goff | University of California, Los Angeles | social |
| SIUC Dr. Yu-Wei Wang’s Psychology Lab | Yu-Wei Wang | Southern Illinois University, Carbondale | counseling |
| Social Attitudes Psychology Research Lab | (unknown) | St. Mary’s University | social |
| Social Psychology Lab – University of Innsbruck | (research group) | University of Innsbruck | social |
| Social Psychology Lab (SPLAB) | (unknown) | University of KwaZulu-Natal | social |
| Social Stress and Health Psychology Research Lab | Elizabeth Brondolo | St. John’s University | health |
| TCNJ Organizational Psychology Lab | Jason Dahling | The College of New Jersey | industrial/organizational |
| UGA Infant Research Lab | Janet Frick | University of Georgia | developmental |
| UMSL Multicultural Psychology Research Lab | Matthew J. Taylor | University of Missouri, St. Louis | multicultural |
| UWM Anxiety Disorders Lab | Han-Joo Lee | University of Wisconsin, Milwaukee | clinical |