In a new article appearing in Simulation & Gaming, Bedwell and colleagues do what the game studies literature has generally been unable to do for games more broadly: they develop a taxonomy that defines what a serious game is. This effort provides a road map for researchers exploring how games can contribute to learning.
The definition of “game” is an area of surprisingly vehement debate. Many researchers and game designers have their own definition of “game,” ranging from Sid Meier’s “a game is a series of interesting choices” to researchers’ attempts to define games by mining the largely humanities-based game studies literature. This is a challenge for the serious games researcher, because without a clear definition of “game”, there is no way to systematically and scientifically explore the aspects of games that contribute to learning. What one researcher calls “challenge”, another might call “fun.” What another researcher calls “fun”, yet another might say “contributes to the creation of flow experiences.” This conceptual overlap between researchers often produces seemingly contradictory results. In the example above, a study finding a positive effect of “fun” might, in another researcher’s terms, be a positive effect of flow or a positive effect of challenge. The result is a highly inefficient scientific process.
Bedwell and colleagues begin to solve this problem by conducting an empirical study of game attributes targeted at identifying the areas of overlap between researcher conceptualizations of game attributes. First, they identified 65 self-proclaimed “game experts” from various online sources, including the online forums of the Escapist and Penny Arcade, video game listservs, and internal distribution lists at game developers. Next, these 65 game experts (50 players and 15 developers) completed what is called a “card sort”, in which they sorted the 19 game attributes related to learning identified by previous researchers into their own categories. Finally, they completed a post-sort survey to capture demographic information. To examine these data, Bedwell and colleagues conducted a cluster analysis, a process that groups cases by the similarity of the ratings provided.
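For readers who haven’t run one before, here is a toy sketch (in Python, with invented numbers, covering only 4 of the 19 attributes) of how a cluster analysis of card-sort data can work – an illustration of the general technique, not the authors’ actual procedure or data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# invented example: proportion of sorters who put each pair of attributes
# in the same pile (only 4 attributes here, values made up for illustration)
attributes = ["challenge", "feedback", "rules", "fantasy"]
same_pile = np.array([
    [1.0, 0.2, 0.7, 0.1],
    [0.2, 1.0, 0.3, 0.2],
    [0.7, 0.3, 1.0, 0.1],
    [0.1, 0.2, 0.1, 1.0],
])

distance = 1.0 - same_pile                      # frequently co-sorted = more similar
condensed = squareform(distance, checks=False)  # condensed distance vector
tree = linkage(condensed, method="average")     # hierarchical cluster analysis
labels = fcluster(tree, t=2, criterion="maxclust")
print(dict(zip(attributes, labels)))            # attributes sharing a label form a category
```

Attributes that most sorters placed in the same pile end up with the same cluster label, which is the basic logic behind deriving category systems like the one below.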
Interestingly, game developers and game players did not differ in their sorting. Support was found for a 9-category system:
- Action Language. The method by which information is relayed to the game (joystick, keyboard, mouse, etc.).
- Assessment. The extent to which feedback is provided to players on their progress.
- Conflict/Challenge. The degree to which the player is challenged, and the method for manipulating that challenge.
- Control. The degree to which players can affect their environment.
- Environment. The location chosen for the game.
- Game Fiction. The extent to which the game environment represents reality.
- Human Interaction. The degree to which players interact with other people.
- Immersion. The extent to which the game uses engrossing effects (audio, video, etc.) to draw the player in.
- Rules/Goals. The degree to which the game presents clear objectives.
With this system, the authors hope that “research on game attributes and their effects on learning can progress in a purposeful and unified fashion” (p. 753). Somehow, I don’t expect it will be quite that easy, but at the very least, this provides a valuable starting point for such efforts.
In a recent meta-analysis appearing in the Journal of Business and Psychology, Costanza and colleagues compare a wide variety of attitude variables between four generations of employees: Traditionals, Boomers, Gen X, and Millennials. In a quantitative review of 20 articles on generational differences across 19,961 workers, the authors conclude that generational differences are small or near zero in virtually all cases.
Six attitude variables were examined:
- Job satisfaction, the degree to which employees enjoy and have positive attitudes towards their job
- General organizational commitment, the degree to which employees are committed to their job (a combination of the next three)
- Affective organizational commitment, the degree to which employees feel emotionally attached to their job
- Normative organizational commitment, the degree to which employees feel obligated to remain at their job
- Continuance organizational commitment, the degree to which employees feel they have invested a lot in their organization and don’t want to lose those investments (personally, not monetarily)
- Intent to turnover, the degree to which employees intend to leave their organization in the near future
The authors report effect sizes in standard deviation units (Cohen’s d), finding values ranging from .02 to .25 for satisfaction, -.22 to .46 for commitment, and -.62 to .05 for intent to turnover, which can be described as “low to effectively zero.”
Millennials are often characterized as very different from the rest of the workforce: they supposedly have a “drastically different outlook” and “different expectations”, and are “difficult to understand”, “entitled”, “less motivated”, “high maintenance”, “silver-spoon fed”, and a variety of other colorful descriptors. In truth, it seems that they are not much different from any other generation – or at the least, they are no more different from other generations than other generations are from each other.
The article is, of course, limited by the variables it was able to meta-analyze, so it is possible that some other attitude or characteristic that was not examined does demonstrate a large difference. However, Millennials are commonly vilified as being less committed to their jobs, more likely to jump ship, and less satisfied with their work. This study demonstrates that at least in these regards, Millennials are just like everyone else.
In the most recent issue of Journal of Media Psychology, Ross and Weaver investigate how experiencing griefing early in playing an online multiplayer game (like World of Warcraft) causes some new players to themselves grief others. The authors attribute this primarily to observational learning – that people joining an online multiplayer environment for the first time are likely to interpret the way they are treated by others as a model for how to treat others.
This means that the introduction of abusive players into a multiplayer game in effect creates a destructive loop – the abusive player mistreats new players, those new players gain experience and mistreat the next round of new players, and so on. The initial griefing is also likely to result in greater frustration, lesser enjoyment, and greater state aggression, which would also feed into this loop. The conclusion: griefing does not just negatively affect the griefed; instead, it may begin a chain of abuse that lasts through generations of new players.
To discover this, the researchers asked 68 men and women to play six 2-minute rounds of the online game Neverwinter Nights. They were randomly assigned to be griefed in either the first or fourth round of play. They were also randomly assigned to be exposed to a new player or to the griefer in the round immediately following the griefing, to see whether the negative effects would take the form of revenge (on the griefer) or would generalize to others. Participants were led to believe that other study participants were in the game with them; in reality, it was a research assistant. In the griefing condition, this research assistant, playing a Level 20 character, would repeatedly (and unfairly) kill the research participant’s Level 5 character. Given this arrangement, it was literally impossible for the research participant to damage the research assistant’s character, and the research assistant could kill the research participant’s character in one hit. The article mentions “corpse camping” (staying near a player’s corpse and waiting for them to return, weakened, to kill them again) and “spawn camping” (killing players the moment they spawn, while they are disoriented) as examples of this griefing.
The researchers found support for most of their hypotheses:
- Participants who are the victim of griefing during the first encounter of a game are more likely to grief other players in subsequent encounters than participants who are not initially griefed.
- Participants are more likely to grief the same opponent than a different opponent, and this difference was greater in the condition where players had an expectation of cooperation than the condition where they had an expectation of griefing.
- Players in a multiplayer game who are griefed experience more frustration, less enjoyment, and higher levels of state aggression than those who are not griefed.
Over the last week or so, I wrote a web-based tool to automatically generate datasets and worked-out solutions. It creates and displays a dataset, a completed solution, and the results of most intermediary computational steps. It is freely available online for instructors or students: http://rlanders.net/datasets.php
As an example, if you select “Paired Samples t” with “n = 10” and an outcome type of “Survey”, it will output:
- Instructions on what to do with the dataset generated
- A dataset with Time 1 and Time 2 “Survey” variables (1=min, 5=max, 3ish=mean, 1ish=sd)
- A “completed” dataset containing calculated difference scores and squared difference scores
- Sum of d, Mean of d, Sum of d^2, (Sum of d)^2, sd of d, se of d, critical t statistic, degrees of freedom, observed t, and CI lower and upper bounds (see the worked sketch after this list)
- A box with all important “final” output: the research question, null and alternative hypotheses, alpha, critical value, journal-type reporting of CI and t with precise p-value, unstandardized effect size (difference score), standardized effect size (Cohen’s d) and a NARRATIVE CONCLUSION including interpretation of the effect size. As an example: “Conclusion: Reject the null and accept the alternative. The difference is statistically significant. The Survey variable decreases over time. If we assume this sample to represent the population, we would expect 95% of sample means to fall between -1.68 and 0.08. On average, the Survey variable was 0.80 lower at Time 2. The difference over time was medium.”
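The tool itself runs in the browser, but if you want to check its output against something else, the same computations can be reproduced in a few lines of Python. This is a minimal sketch with made-up Survey scores, not the tool’s own code:

```python
import numpy as np
from scipy import stats

# made-up "Survey" scores (1-5) for n = 10 people at Time 1 and Time 2
time1 = np.array([4, 3, 5, 2, 4, 3, 4, 5, 3, 4])
time2 = np.array([3, 3, 4, 2, 3, 2, 4, 4, 2, 3])

d = time2 - time1                     # difference scores
n = len(d)
mean_d = d.mean()                     # mean of d
sd_d = d.std(ddof=1)                  # sd of d
se_d = sd_d / np.sqrt(n)              # se of d
df = n - 1                            # degrees of freedom
t_obs = mean_d / se_d                 # observed t
t_crit = stats.t.ppf(0.975, df)       # critical t, two-tailed, alpha = .05
ci_low, ci_high = mean_d - t_crit * se_d, mean_d + t_crit * se_d
cohens_d = mean_d / sd_d              # standardized effect size

print("sum of d =", d.sum(), "| sum of d^2 =", (d ** 2).sum(),
      "| (sum of d)^2 =", d.sum() ** 2)
print(f"t({df}) = {t_obs:.2f}, critical t = {t_crit:.2f}, "
      f"95% CI [{ci_low:.2f}, {ci_high:.2f}], Cohen's d = {cohens_d:.2f}")
```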
This is customized to each test. The program will also randomly choose between directional and non-directional tests when directional tests are plausible.
I have tried to tweak the generation algorithms so that you get statistically significant results about 50% of the time. However, you will get more significant results with larger sample sizes and fewer significant results with smaller sample sizes (as you might expect). But if you leave it set to n = 10, it should be about even.
The tool includes fully worked out problems for: central tendency and variability statistics, z-score calculations, confidence intervals, z-tests, one-sample t-tests, paired-samples t-tests, independent-samples t-tests, one-way ANOVA, chi-squared goodness of fit and test of independence, and correlation/regression.
If you’re wondering why I wrote this, it is because I have written a one-semester undergraduate introduction-to-statistics textbook for business students, which will be published in 2013. You don’t need the book to use the dataset generator (just ignore the references to “Chapters”). It is customized to the statistical methods I teach in that text, however, so if you like it, I’d ask you to consider my textbook when it is published.
Also, if you decide to adopt this or provide it to your students, please leave a note in the comments saying so. Feedback is appreciated!
Last week, I spent most of my week in final preparation for and then running the live broadcast of the SIOP Leading Edge Consortium 2012 on Environmental Sustainability at Work in New Orleans. This was a bit experimental – SIOP had never done this before, nor had I. But the LEC leadership team wanted to increase the reach of the conference to those who would otherwise be unable to participate. This effort was apropos given the topic of the conference; by serving virtual attendees, we reduced the carbon footprint of the conference while simultaneously extending the reach of its message. I learned a great many lessons in putting this together, which I thought I would share for those facing similar challenges.
Costs and Equipment
The LEC committee decided early on that we would stream exclusively to remote hosts, not to individuals’ computers. This was done to provide some control over the participant experience – at each remote location, a local facilitator would manage the traditional flow of conference activities, including lunch, breaks, etc. This was also beneficial cost-wise because we had over 100 virtual participants but only about a dozen local sites; most streaming services (Livestream included) set pricing tiers in part based upon the number of anticipated viewers. With 12 remote sites, 3 observation/testing sites, and about 8.5 hours of video (+20% for testing and unanticipated overhead), we only needed about 153 viewer-hours (15 sites × 8.5 hours × 1.2), well within the lowest price tier for Livestream ($350 at the time). We set up this premium account a couple of weeks ahead of the conference so that the month of access we’d gain would overlap the conference enough to allow time to test beforehand and to download archived footage afterward.
Because we wanted to shoot in HD, we also needed to purchase equipment: a Logitech HD Pro Webcam C920 and a Vanguard VS-82 Table Top Tripod. I chose the C920 because it has an on-camera codec (decreases processor load when broadcasting video in real time) and the VS-82 because I needed to pan/tilt smoothly without a full tripod but also needed it to fit in my messenger bag with my laptop. I am happy to report that both worked great.
That brought the total technology cost for Livestreaming our conference to about $450.
As for other equipment, I already had other necessary items – a laptop, a cooling pad (important), and ear-cup headphones (i.e. noise canceling, or at least designed such that external noises are very muffled while wearing them). If you don’t have these items, this would obviously cost you a bit more.
Since we were working with remote sites, we needed to invite those sites well ahead of the conference. My only advice here is that if you plan to involve remote sites, work with them well in advance. As the person tapped to organize a virtual conference, you are probably checking your e-mail continuously. Remote site facilitators are much less likely to share your enthusiasm. Even a single request for information could take a couple of weeks to be fulfilled. Critically, we also needed local copies of the presenters’ PowerPoint presentations to broadcast them in maximum quality – the last presentation was delivered less than an hour before that presentation began.
The evening before the conference, I set up my equipment in the conference room as seen in the diagram above. First, I set up the broadcasting laptop and cooling pad connected to the webcam and tripod. Because webcams don’t have optical zoom, you’ll need to be within 10-15 feet (3-4 meters) of the speaker in order to get a fair amount of detail (this generally means the first row of seats/tables). Next, I ran three cables – power to the wall, Ethernet (I would not trust wireless Internet access for a live broadcast) to the wall, and an audio cable to the room’s stereo mix out (i.e. whatever the room hears on the audio system should be fed into your computer’s microphone/line-in port). This will mean some conversion; typically, the room is on a microphone cable (called XLR) whereas your computer will need a 3.5mm mini-stereo plug. Such converters are generally cheap, but you’ll need to have them (or, as in our case, you’ll need to get them from the local AV folks). Finally, I plugged in my headphones.
Questions from Virtual Sites
You might have noticed an unaccounted-for netbook in the diagram above. This was because we wanted to take questions from the audience at the virtual sites. We did so with Twitter (using a hashtag) and via e-mail. A group of graduate students in my lab back at ODU were standing by monitoring these sources and feeding me the questions via Google Talk, which was open on the netbook (which was connected via wi-fi). A second team relaying questions is strongly recommended. Not only do they filter questions for you, but they also can monitor feed quality. You probably won’t have enough time to keep track of questions from multiple sources while also keeping the camera aimed, so this team keeps the multitasking pressure lower. I also kept a live feed open on the netbook so that I could watch for fluctuations in feed quality and skips.
If you’re using Livestream.com, I’d recommend the Procaster software which can be downloaded for free from Livestream. This allows you to switch between “screen” view (a capture of a portion of your screen – I used this to record a local copy of the presenter’s PowerPoint), “camera” view (the live camera feed), and various mixed feeds combining the two. I recommend the “3D Mix” which records a small portion of the camera view (enough to capture the presenter) plus a larger version of the screen view (so that the PowerPoint appears large and crisp). To increase professionalism, I also recommend switching feeds such that the people watching can’t see you transition between PowerPoints. For example, between presentations, I would switch to “camera” view to focus on the presenter while behind-the-scenes I would swap out the PowerPoints before returning to 3D Mix.
The only problem with this setup is video – since your audio is coming from the room, if any presenters use embedded video in their PowerPoints, you’ll need to switch to camera view and pivot the camera to look at the projection screen, or the audio won’t sync. The only way to avoid this is to get a live feed from the presenter’s computer and then re-broadcast that (usually not possible). You may also need to modify the camera’s video settings manually when changing from a speaker (generally fairly dark) to a projected image (extremely bright). Logitech’s built-in camera tools accomplish this nicely, and you can run them in tandem with Procaster.
What to Broadcast When Not Broadcasting
I would strongly recommend creating an “on a break” PowerPoint slide and recording a short loop of video containing this slide (maybe 30 seconds). Then, using Livestream Studio, cue this on autopilot. This way, any time the feed cuts out, viewers will automatically be redirected to an official-looking pause slide. There is also an “automatically add to auto-pilot” option in Procaster that you probably don’t want to turn on – otherwise, everything you record automatically re-appears in the live feed.
Once the hardware is set up, there’s a multi-step process to get Procaster running appropriately:
- Open the preferences and go through every menu. Every single item on here is potentially important, so take your time and think about every one (this is how I noticed the “automatically add to auto-pilot” option). I’d recommend hiding the mouse from the live feed.
- Select a recording quality that is appropriate given your computer and anticipated Internet connection. I’d recommend against using any of the quality presets, because your laptop probably can’t handle them – you’ll want as much processor power devoted to a single high quality stream as possible. Set your audio quality to at least 160kbps and the video quality pretty high (I used 1000 kbps) and test it (see if you can keep the processor usage consistently under 50% or so while swinging the camera around and playing music). Adjust up and down and test again until you can get it in the right area.
- Open all the PowerPoints you’ll need before opening Procaster.
- Open Procaster and click “Go Live”.
- Open any camera setting programs and set them as appropriate (if using the recommended webcam above, I recommend turning off auto-focus and going to “full landscape”, and also turning off auto-gain/auto-exposure after the camera initially adjusts to the light level of the room).
- Drag the capture rectangle over PowerPoint.
- Open the audio mixer and enable the headphone monitor (so that you can listen to your stream). Adjust audio feeds until the mix sounds good. Close the audio mixer unless you’ll have multiple speakers.
- Monitor the feed to ensure the camera stays targeted and keep the headphones on to ensure the audio sounds right.
Even if you set up everything correctly, you’ll be doing a lot of multi-tasking – monitoring the video feed, listening to the audio feed, relaying questions, aiming the camera, mixing the audio, and adjusting camera quality. Delegate whatever responsibilities you can.
Working closely with the on-site AV people is critical. They control the quality of sound output in the room, and by extension, the quality of sound in your feed. If they manage levels appropriately, you won’t need to adjust the mixer mid-presentation (one less thing to worry about). Nice AV people will also provide all your converters (we ended up needing some special equipment – a mixer – to get from the room’s stereo mix back to my computer – you can see it in the photo at the top of this page).
Livestreaming is processor intensive. I’d originally planned to broadcast in full HD (1080p; Blu-ray quality), but this spiked my processor usage to 100%, and my computer simply could not keep up without dropping frames and cutting out audio. If you’re using a laptop, even with a fast processor, you probably can’t get above DVD quality in a live feed while keeping your processor at a reasonable level (<50%). Although not super-crisp, this still looked decent according to reports from remote sites (the biggest concern was ensuring that slide text was not blurry when projected on a screen). With a sufficiently powerful desktop and a proper cooling system, you can do better – but it seems unlikely most conference organizers would want to set up a desktop computer at a conference site (especially in a multi-room configuration).
One of the things that surprised me the most was psychological: speakers don’t really know whom to look at when a virtual participant asks a question. Usually, they just stared at me, as if they were trying to convince me personally of their response to the question I had relayed. That’s neither good nor bad, but it might be a little unnerving if you don’t expect it.
Running a virtual conference is a big task with many working parts. It is easy to get overwhelmed. But if you carefully map out all the tasks involved and the requirements for those tasks, it’s quite doable, even with a small budget. With these guidelines and a little work, you should be able to take any in-person conference and add a virtual audience.
In the last several months, massive open online courses (MOOCs) have been presented as a game-changer, a seismic shift, a crisis, and a McDonaldization of higher education. A fair number of universities have jumped on board, including several prominent universities in Australia and the United States. At the University of Virginia, a recent kerfuffle resulted in several resignations and a great deal of negative press regarding pressure to move toward a MOOC model.
If you’re not familiar with MOOCs, the basic idea is that a very large number of students (in the tens of thousands or more) complete a course with a single instructor (or a handful of instructors). The only way this is possible is through technology: a semi-permanent version of course content is developed and delivered online, and all assessment is done through automated grading systems. Discussion rarely involves experts; it typically occurs only with others taking the course simultaneously. Instructors are probably not available via e-mail or other social media (although they could be). The grading algorithms, because of current limitations on artificial intelligence, can only effectively assess multiple choice exams or written materials where there is a clear “right” answer – for example, basic mathematics problems, computer programs that respond to set input and produce specific output, exams where the presence of keywords is the important element, and so on. Assessment of essays is generally limited to spelling, grammar, and keyword searches.
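To make that last point concrete, here is a hypothetical sketch of keyword-based essay scoring – the function and rubric are invented for illustration and don’t come from any real MOOC platform:

```python
import re

def keyword_score(essay, rubric):
    """Hypothetical keyword-based scorer: awards points for each expected
    term that appears, with no understanding of the argument itself."""
    text = essay.lower()
    score = 0.0
    for term, points in rubric.items():
        if re.search(r"\b" + re.escape(term.lower()) + r"\b", text):
            score += points
    return score

# a student can earn full marks simply by name-dropping the right terms
rubric = {"reliability": 2.0, "validity": 2.0, "measurement": 1.0}
print(keyword_score("Validity matters because reliability limits measurement.", rubric))  # 5.0
```

Grading of this sort can verify that the expected vocabulary showed up; it cannot tell whether the student reasoned well.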
So are MOOCs really the future? To some, the current MOOC craze is nothing more than a repeat of the correspondence course craze of the 1920s – that as we experiment with MOOCs further, it will become increasingly obvious that MOOCs do not provide “fast, easy education” any more than courses by mail do – that learning is hard work, and MOOCs don’t change that. Considering the high drop-out rates associated with most MOOCs, there is probably some merit to this view. However, the added dimension this century is that the online education model, which is the basis for MOOCs, works pretty well. We’re at the point where we can conclusively say online education can be used to teach students as effectively as most in-person courses (whether any particular course actually does work as well is a different question). However, in a well-designed online course, we still need a fair number of instructors (or at least, instructional staff) to grade content, answer e-mails, respond to student disputes, and so on. In a MOOC, most of this is missing – if interaction is present at all, it is typically a discussion board where you are more likely to get an answer from a classmate (which may or may not be a high quality answer) than anyone with vetted subject matter expertise.
In summary, here are the core issues with MOOCs as they exist right now:
- Because students can join a MOOC at will, content must be standardized and cannot be easily updated mid-course to address the needs of the specific students taking the course at that time. Part of the perceived value of MOOCs is “create instructional content once and deliver it to a lot of students for a long time”.
- Because MOOCs rely on massive numbers of students (tens of thousands), all assessment of learning must be done automatically. Current technology (i.e. artificial intelligence that comes nowhere close to the complexity of human cognition) does not enable automatic assessment of anything other than rote processes (e.g. basic mathematics, basic computer programming, basic English skills like spelling and some aspects of grammar – you might be noticing a theme).
- Because of all of the above, MOOCs are effective only when there is a clear path from A to B (for example, solving a basic algebraic equation), and learning the path from A to B is the only worthwhile objective.
Thus, for relatively simple topics where there is a relatively simple objective standard of mastery that students need to meet, MOOCs should work pretty well. But if your goal is to teach critical thinking or provide individual attention to students in need, a MOOC isn’t enough.
So whereas most of the blogosphere has been discussing the merits of MOOCs as a money-saving measure, a way to bring higher education to the masses, the revolution that will obsolete the university degree, etc., etc., my goal here is to answer a practical question for instructors. During the rise of industrial automation, assembly line worker jobs were replaced by machines because machines were more efficient. So if you are currently an instructor in higher education (I’m especially concerned about adjuncts, since they seem to be the most likely to lose their jobs should a “MOOC revolution” occur), can you be replaced by a machine? Let’s find out.
Course Content (not in your control):
- Do you teach a topic where the knowledge taught evolves over time?
(e.g. the topics covered in Introductory Psychology change as Psychology continues to evolve, but the topics covered in College Algebra don’t really change)
- Do you teach a topic where there is no “right answer”?
- Do you teach a topic that develops/requires student creativity?
Updates to Teaching Quality (mostly in your control):
- Do you update your course’s content regularly to keep up with research developments in your field?
- Do you update your course’s content regularly to keep up with current research on effective teaching in your field?
- Do you incorporate interactive exercises involving small groups to facilitate peer learning (e.g. projects, small group discussion)?
- Do you teach/assess critical thinking within your topic area?
- Do you assign and grade writing assignments to assess student mastery of complex concepts?
- Do you assess student ability to apply knowledge gained to complex, novel problems (i.e. complex problem solving)?
Student Feedback (completely in your control):
- Do you give in-depth feedback customized to each student’s achievement and needs?
- Do you discuss and explore individual student knowledge/progress through 1-on-1 interaction with students?
- Do you respond to students by updating/targeting course content to meet their needs or focus within their interests?
- Do you provide mentoring to students in need?
If you answered “yes” to at least six of the questions above, a MOOC can’t replace you very effectively right now (although that’s only true as long as the technology remains inadequate!).
So to summarize, right now, MOOCs are a perfectly capable medium to provide instruction on knowledge that is relatively stable, with clear “correct” answers. For example, a MOOC environment would be an efficient way to teach the solving of algebraic problems in College Algebra, but a MOOC would be less useful to instill a deeper sense of the relationships between numbers. Most college-level courses have the potential for such critical thinking content, but not all instructors choose to develop these skills. For instructors who choose not to do those things, the implication is uncomfortable.
If you aren’t updating your course content regularly (for whatever reason), aren’t working to maximize the effectiveness of your teaching every semester, and aren’t providing targeted feedback, your teaching is replaceable. My recommendation? Don’t be replaceable.
For now, I suggest we leave the MOOCs to whom they seem best designed to help: self-directed learners who can’t afford formal education but still want to improve their knowledge/skills because they see value in gaining knowledge/skills (not because they need a grade).
For those of you that know me personally, you’re probably aware that I’m the Technology Czar for the Organizational Behavior division of the Academy of Management. That is really just a fancy way to say, “When there’s a technology problem, the executive committee asks me to fix it.” However, this year, my role has expanded to include chairing the brand-new Social Media Committee. The purpose of that committee is to produce material to better connect the OB division leadership, members of the OB division, and the public at large, shared via social media.
So as part of that mandate, we have created a new podcast! During the AOM 2012 conference in Boston, Massachusetts, graduate student members of the Social Media Committee interviewed prominent scholars of the OB division and recorded these interviews on video. Each month over the next year, we’ll be releasing these videos (edited, of course!) as a podcast. You can view the first of these videos embedded below or subscribe directly to the podcast in iTunes.
It begins with my giving a short video introduction to the series and this episode, so you finally can put a face and voice to this blog. For better or worse, I suppose…
In a fascinating new paper appearing in the Journal of Applied Psychology, Wagner, Barnes, Lim and Ferris investigate the link between lack of sleep and the amount of time employees waste on the Internet while at work – a phenomenon called cyberloafing. Using two studies – one based on historical search data collected from Google Insights and another using a sample of undergraduates – Wagner and colleagues found that those who sleep less are more likely to cyberloaf the next day. They also found an interaction such that highly conscientious people were less likely to cyberloaf after a night of interrupted, poor quality sleep than less conscientious people.
In Study 1, Wagner and colleagues investigated this link by examining that longstanding arch-rival of good sleep the night before work in the United States: the hour lost when “springing forward” from standard time to daylight saving time (DST). They downloaded data from Google Insights for Search, investigating surfing habits across 3 Mondays each year over 6 years for each of 203 metro areas (the Monday following the switch to DST, plus the preceding and following Mondays). This resulted in a final dataset of search behaviors from 3492 Mondays (162 Mondays were unavailable from Google for unknown reasons). Using this dataset, they discovered that on the Mondays after the switch to DST, entertainment websites (e.g. Facebook, YouTube, ESPN.com) were browsed more often than on the surrounding Mondays. There’s no way to say that this is primarily work-related surfing from the Google dataset alone, but other research cited by Wagner and colleagues indicates that 60% of entertainment website traffic is driven by surfing at work. So there’s some reason to believe that this increase represents cyberloafing.
Because that’s not terribly convincing by itself, Study 2 investigated the cyberloafing question with a more traditional correlational study. In a lab study, undergraduates’ sleep habits were tracked for one night with a device called an Actigraph, and the following day, the students were brought into the lab to watch a lecture video. They were told that this lecture video portrayed a professor who was being considered as a new hire, and that their feedback would help the university make this decision. The researchers then secretly tracked how much of the 42-minute video was spent actually watching the video versus doing anything else (e.g. email, YouTube, etc.). On average, students spent 5.7 minutes cyberloafing. The researchers’ hypotheses were confirmed: those with poorer quality sleep and lower quantity of sleep were more likely to cyberloaf. I’ll let the researchers describe the effect sizes:
From a practical perspective, these results suggest that for every minute the participant slept the night before the lab session, the participant engaged in .05 fewer minutes cyberloafing (or 3 fewer minutes cyberloafing per hour spent sleeping). For every minute of interrupted sleep the prior night, participants engaged in .14 minutes more cyberloafing (or 8.4 more minutes cyberloafing per hour of interrupted sleep) during the task. Given that the task was 42 minutes long, an hour of disturbed sleep would on average result in cyberloafing during 20% of the assigned task, and every hour of lost sleep would result in cyberloafing during an additional 7% of the task.
That’s pretty severe for both managers and employees themselves. As a manager, poor sleep among employees is likely to lead to decreased performance. As an employee, if you get too little sleep, you may find it more difficult to self-regulate your behavior and focus on your work. Lack of sleep is bad for everyone! So what do we do about it? Once again, I’ll let the authors present it in their own words:
What is not mixed is the mounting evidence that sleepy employees do not a productive office make. Thus, we encourage policy makers to revisit the costs and benefits of implementing DST, and we encourage managers to consider how they can facilitate greater employee self-regulation by ensuring that employees get good sleep.
I recently received a peculiar e-mail invitation. It was titled, “Richard, welcome to Lore.” Here’s a screenshot:
I did not immediately recognize this for what it was, although it is now blazingly obvious. “ODU has a new LMS?” I thought. “That can’t be right – we just upgraded Blackboard.” I didn’t personally support that upgrade, but it was the will of the faculty, based on countless committee meetings and votes. For a faculty uninterested in Moodle (because it is too new/too different/etc.), a sudden, unannounced switch to a “new alternative to learning management systems” seemed a bit odd.
After thinking about this for a minute, I recognized this message for what it was: targeted spam. I was being spammed. The use of my name and the use of my university’s name were marketing tactics. The lower-case “university” was a side-effect of inattentive mass e-mailing, sent without regard for the willingness of recipients. The claim that “professors at over 600 schools used it” probably refers to the number of registered instructional users, regardless of the extent to which those “professors” actually adopted Lore, and regardless of the ranks of those instructors (adjuncts/instructors vs. full-time faculty/professors). The language is imprecise, and there is little that annoys faculty more than imprecision.
Now, education-related spam in general is not unusual for faculty; we get countless advertisements for new textbooks, new printing services, cheesy fake ways to get our dissertations published, and so on – but I had never been spammed before with an advertisement for an LMS. And there’s a good reason for that. An LMS is a foundational piece of software in higher education. Every faculty member is expected to use one of the “accepted” LMS provided by their local IT support office. The primary reason for this is to simplify the lives of the students – they have one system to learn, one place to get all of their assignments, one tool on which to need technical support, and one place for their grades. A small holdout of faculty who hate technology don’t use the LMS at all, but most do. Some faculty do use other software to supplement the LMS. For example, a couple of years ago, I installed MediaWiki (the software underlying Wikipedia) on the Linux server I rent space on and had students create wiki entries for a term project. I didn’t use the wiki software in Blackboard because it was clunky and hard to use (it may be better in the new Blackboard; I haven’t checked yet). But one thing I didn’t do with MediaWiki was record student grades.
Student grades are strictly protected by a variety of federal, state, and institutional regulations. The most well-known of these is FERPA. My university doesn’t even want me to put students’ grades on my own laptop – which is always in my personal possession – if I can avoid it. LMSs have gradebooks, and the tightly controlled LMS run by the university is the preferred place to keep mid-semester grades until recording them at the end of the semester with the registrar.
Imagine my surprise, then, to see that Lore has a gradebook! I suppose Lore wants me to upload protected student information right into the cloud, but I’m a bit uncomfortable with that. I’m confident my institution wouldn’t want me to do that; I’m not even sure it would be legal. The privacy documentation for Lore indicates that by posting content on Lore, you license that content to be shared freely by anyone else within the Lore community. I don’t see an exception for grades (which are also posted material), so I suppose by this license I am providing permission for student grades to be shared freely throughout the site.
None of that would be so terrible if not for the direct email marketing. Individual instructors have a lot of discretion to impose rules and requirements upon their students as they see fit to meet their instructional objectives. If an instructor seeks out something like this, reads the policies, and decides it’s worth it, that’s fine. But the spam posts being sent by Lore strongly imply that our institutions have already approved this software. “Welcome to Lore”, “now available at your institution”, and “tools for grading” all point to this being an officially supported alternative to the institutional LMS. And that’s simply not true. Their marketing relies on implication and guilt to increase faculty eyeballs on their website (i.e. “this is a valuable teaching tool at your university that others are using and you’re not!”), and that’s just not ethical, especially given the question marks surrounding grades.
When I originally went to investigate whether anyone else had noticed this, I took a look at Lore’s Facebook page. There, I found one post from a faculty member pointing this out. It said something to the effect of, “Please stop sending this to our faculty; your email makes them think this is an official college platform, and it is causing a great deal of confusion.” That post had been on their page for several weeks when I looked, without a response from Lore staff. Conveniently, it has since been deleted (if any readers know how to recover deleted Facebook posts, I’d be very appreciative if you could get a copy of this post for me).
Given all of this, I am left with a very negative impression of Lore, and I haven’t even tried its features. It might be a fantastic teaching tool. Its design principles seem quite positive on the surface. The goals of the project seem quite familiar to me personally; I have been working to get social media into classrooms through academic research and federal grants for the past several years. Unfortunately, its marketing is completely at odds with the academic culture it is trying to infiltrate. Frankly, it strikes me as the effort of enthusiastic technology entrepreneurs untempered by reality. I’d be surprised if they have a single professor on staff to advise them about these issues. If they did, an e-mail like this never would have been blasted to unsuspecting faculty.
As a side note, to give a fair assessment of the tools, I considered creating an account to see what it looked like. But I’m afraid that would just result in their next email saying six hundred and one.
In this first post back from my summer hiatus, I’ll be outlining the sessions I plan to attend at AOM 2012 in Boston this coming Monday and Tuesday.
This year’s AOM will be a little different for me than in previous years. For readers that don’t know, I’m the Technology Czar for the Organizational Behavior division of AOM. That means I coordinate all things technology for the division. This year, I’m spearheading a new group: the Social Media Committee (SMC). As part of our responsibilities, the SMC will be creating a new video podcast with interviews of OB scholars to be placed on the OB website. Unfortunately for you, dear readers, that means I will be much busier than in previous years.
Typically, I live-blog the AOM conference through my Twitter account. I am hopeful that I can do this again, but we’ll need to play it a bit by ear.
On the bright side (or not, depending on your perspective), there is not much technology-related research at AOM this year, so there’s not much for me to schedule around. I take this to reflect a general disinterest from AOM in how the Internet is changing the workplace and employee experiences with it. But in any case, it means that I have few sessions to attend. One of the ones I am most excited about is one that I am facilitating: a symposium exploring the use of social media in employee selection. Otherwise, it’s a little bleak.
Here’s a list of all the sessions I’m interested in. Note that despite the low number, I will definitely not be able to attend all of them since there are overlaps, and I have other required events that I must attend. But I will try to hit all I can.
| Session Title | Session Type | Sponsor | Date/Time | Location |
|---|---|---|---|---|
| Connecting the Academy through Technology | Meeting | AAA | Sunday, 2PM | Hynes 209 |
| CSR and Communication in Social Media Environments | Symposium | SIM, CMS | Monday, 11:30AM | Marriott, Provincetown |
| Online Communities and Micro-Blogs | Roundtable | OCIS | Monday, 1:15PM | Sheraton, Hampton A |
| From Twitter to Virtual Worlds | Roundtable | MED | Monday, 1:15PM | Marriott, Nantucket |
| OB Lifetime Achievement Address (Richard Hackman) | Social Event | OB | Tuesday, 9AM | Park Plaza, Ballroom |
| Online Communities | Paper Session | OCIS | Tuesday, 9:45AM | Sheraton, Fairfax A |
| The Role of Social Media and On-Line Resources in Selection | Roundtable | HR | Tuesday, 1:15PM | Park Plaza, Newbury |
I’m sure you are clamoring for more technology and I/O-related news, but we have entered summer, and as anyone in academia knows, that means it’s time to write. And unfortunately for you, that doesn’t mean writing on NeoAcademic!
Instead, the summer is the time when academics work on large writing projects that they’ve had to push aside during the meeting-and-teaching-heavy school year. This summer is a little worse than usual for me, because I’m finishing up writing a textbook on statistics in business for Sage UK and putting together several journal article submissions. I’m also organizing a new video podcast series for the Organizational Behavior division of the Academy of Management, so be on the lookout for that starting in August/September.
Thus, blog posts will be few until August/September-ish. If something catches my eye, I will still write it up, but otherwise I’ll be off the weekly posting schedule for the near future. I will likely kick off my fall return with live coverage of the 2012 Boston conference of the Academy of Management in August.
When hiring with online tests (a concept called unproctored internet testing [UIT]), one of the biggest worries is that test-takers will cheat. A home computer is just about as “unsecured” a testing environment as possible, so test-takers have many options to deceive their potential employers: looking up answers on the Internet or getting a friend to complete the test for them are just two options. When people cheat, their test scores no longer represent their ability level, which reduces the average job performance of those that ultimately get hired. One remedy for this is something called “verification testing,” where job candidates are re-tested at a later stage of the selection process: for example, they might complete an intelligence test online and, after passing it, take a parallel version of that test in person. But many organizations don’t want the extra hassle of verification testing, because it takes more time and effort on the part of both the organization and the job candidate. So if people are cheating, can the organization still come out ahead by using online testing?
In an upcoming article in the International Journal of Selection and Assessment, Landers and Sackett simulate the conditions under which the mean job performance of those hired will differ when job applicants are cheating. This includes both positive and negative changes to job performance, relative to the selection system used before UIT was adopted. The key is recruitment; if online testing enables an organization to reach a larger pool of applicants, it can be more selective, improving mean job performance.
You can see a sample of the researchers’ findings in the table here. The top left corner (0.24) represents the baseline. With the criterion-related validity chosen for simulation, job performance is 0.24 standard deviations higher than the previous system when no one is cheating and the applicant pool is not increased. We can compare the other values in the table to this baseline. For example, if 10% of your applicant pool is cheating but you double the size of your applicant pool, online testing results in a job performance level of 0.32 (over a 33% increase in job performance!). However, if 50% of your applicant pool is cheating, doubling the applicant pool will result in lower job performance over your old system (about a 46% decrease!).
Perhaps more disturbingly, some simulation conditions resulted in not only lower than baseline job performance but negative job performance. In other words, under some conditions, using online testing resulted in lower job performance than hiring at random.
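If you want a feel for where results like these come from, here is a toy Monte Carlo sketch in the same spirit as (but much simpler than) the article’s simulations. The validity, selection ratio, and the 1 SD score boost for cheaters are my own illustrative assumptions, not the article’s parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_hired_performance(pool_size, prop_cheaters, validity=0.5,
                           n_hired=10, reps=5000, cheat_boost=1.0):
    """Toy Monte Carlo: average standardized job performance of the people
    hired top-down on an online test when some applicants cheat.
    Values are absolute, not differences from a prior selection system."""
    out = []
    for _ in range(reps):
        true_perf = rng.standard_normal(pool_size)
        # observed test score correlates r = validity with true performance
        test = validity * true_perf + np.sqrt(1 - validity ** 2) * rng.standard_normal(pool_size)
        cheaters = rng.random(pool_size) < prop_cheaters
        test[cheaters] += cheat_boost            # cheaters inflate their scores
        hired = np.argsort(test)[-n_hired:]      # hire the top scorers
        out.append(true_perf[hired].mean())
    return float(np.mean(out))

print(mean_hired_performance(100, 0.0))   # baseline pool, no cheating
print(mean_hired_performance(200, 0.1))   # doubled pool, 10% cheating
print(mean_hired_performance(200, 0.5))   # doubled pool, 50% cheating
```

Even in a crude sketch like this, the same tension appears: a larger pool pushes hired performance up, while a larger proportion of cheaters pulls it back down.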
This points to a pressing need for future research to identify what percentage of applicants we should expect to cheat, along with the other simulation parameters. Unfortunately, dishonest behavior is one of the most difficult areas of study in psychological research. While in most studies the biggest concern is participant apathy (i.e. bored undergraduates not paying attention), people are outright working against you when you are trying to determine how dishonest they are. We can only hope this research spurs further work in this area.