In the last several months, massively open online courses (MOOCs) have been presented as a game-changer, a seismic shift, a crisis, and a McDonaldization of higher education. A fair number of universities have jumped on board, including several prominent universities in Australia and the United States. At the University of Virginia, a recent kerfuffle resulted in several resignations and a great deal of negative press regarding pressure to move toward a MOOC model.
If you’re not familiar with MOOCs, the basic idea is that a massive number of students (in the tens of thousands or more) complete a course with a single instructor (or a handful of instructors). The only way this is possible is through technology: a semi-permanent version of course content is developed and delivered online, and all assessment is done through automated grading systems. Discussion rarely involves experts; it typically takes place only among students taking the course at the same time. Instructors are generally not available via e-mail or other social media (although they could be). Because of current limitations on artificial intelligence, the grading algorithms can only effectively assess multiple-choice exams or written materials where there is a clear “right” answer – for example, basic mathematics problems, computer programs that respond to set input and produce specific output, exams where the presence of keywords is the important element, and so on. Assessment of essays is generally limited to spelling, grammar, and keyword searches.
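The kind of keyword-driven essay grading described above can be sketched in a few lines of Python. Everything here – the scoring function, the keyword list, and the sample essay – is hypothetical and invented purely for illustration; real MOOC graders are more elaborate, but the underlying idea is this simple:

```python
import re

def keyword_score(essay: str, keywords: list[str]) -> float:
    """Return the fraction of expected keywords found in the essay."""
    text = essay.lower()
    hits = sum(
        1 for kw in keywords
        if re.search(r"\b" + re.escape(kw) + r"\b", text)
    )
    return hits / len(keywords)

score = keyword_score(
    "Operant conditioning uses reinforcement and punishment to shape behavior.",
    ["reinforcement", "punishment", "conditioning"],
)
print(score)  # all three keywords present -> 1.0
```

Note what this can and cannot do: it rewards the presence of the right vocabulary but is completely blind to whether the surrounding argument makes any sense – which is exactly the limitation discussed above.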
So are MOOCs really the future? To some, the current MOOC craze is nothing more than a repeat of the correspondence course craze of the 1920s: as we experiment with MOOCs further, it will become increasingly obvious that MOOCs do not provide “fast, easy education” any more than courses by mail did. Learning is hard work, and MOOCs don’t change that. Considering the high drop-out rates associated with most MOOCs, there is probably some merit to this view. However, the added dimension this century is that the online education model, which is the basis for MOOCs, works pretty well. We’re at the point where we can conclusively say that online education can teach students as effectively as most in-person courses (whether any particular course actually works as well is a different question). However, a well-designed online course still needs a fair number of instructors (or at least instructional staff) to grade content, answer e-mails, respond to student disputes, and so on. In a MOOC, most of this is missing – if interaction is present at all, it is typically a discussion board where you are more likely to get an answer from a classmate (which may or may not be a high-quality answer) than from anyone with vetted subject matter expertise.
In summary, here are the core issues with MOOCs as they exist right now:
- Because students can join a MOOC at will, content must be standardized and cannot be easily updated mid-course to address the needs of the specific students taking the course at that time. Part of the perceived value of MOOCs is “create instructional content once and deliver it to a lot of students for a long time”.
- Because MOOCs rely on massive numbers of students (tens of thousands), all assessment of learning must be done automatically. Current technology (i.e. artificial intelligence that comes nowhere close to the complexity of human cognition) does not enable automatic assessment of anything other than rote processes (e.g. basic mathematics, basic computer programming, basic English skills like spelling and some aspects of grammar – you might be noticing a theme).
- Because of all of the above, MOOCs are effective only when there is a clear path from A to B (for example, solving a basic algebraic equation), and learning the path from A to B is the only worthwhile objective.
Thus, for relatively simple topics where there is a relatively simple objective standard of mastery that students need to meet, MOOCs should work pretty well. But if your goal is to teach critical thinking or provide individual attention to students in need, a MOOC isn’t enough.
So whereas most of the blogosphere has been discussing the merits of MOOCs as a money-saving measure, a way to bring higher education to the masses, the revolution that will obsolete the university degree, etc., etc., my goal here is to answer a practical question for instructors. During the rise of industrial automation, assembly line worker jobs were replaced by machines because machines were more efficient. So if you are currently an instructor in higher education (I’m especially concerned about adjuncts, since they seem to be the most likely to lose their jobs should a “MOOC revolution” occur), can you be replaced by a machine? Let’s find out.
Course Content (not in your control):
- Do you teach a topic where the knowledge taught evolves over time? (e.g. the topics covered in Introductory Psychology change as Psychology continues to evolve, but the topics covered in College Algebra don’t really change)
- Do you teach a topic where there is no “right answer”?
- Do you teach a topic that develops/requires student creativity?
Updates to Teaching Quality (mostly in your control):
- Do you update your course’s content regularly to keep up with research developments in your field?
- Do you update your course’s content regularly to keep up with current research on effective teaching in your field?
- Do you incorporate interactive exercises involving small groups to facilitate peer learning (e.g. projects, small group discussion)?
- Do you teach/assess critical thinking within your topic area?
- Do you assign and grade writing assignments to assess student mastery of complex concepts?
- Do you assess student ability to apply knowledge gained to complex, novel problems (i.e. complex problem solving)?
Student Feedback (completely in your control):
- Do you give in-depth feedback customized to each student’s achievement and needs?
- Do you discuss and explore individual student knowledge/progress through 1-on-1 interaction with students?
- Do you respond to students by updating/targeting course content to meet their needs or focus within their interests?
- Do you provide mentoring to students in need?
If you answered “yes” to at least six of the questions above, a MOOC can’t replace you very effectively right now (although that’s only true as long as the technology remains inadequate!).
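In the spirit of the checklist above, the self-check boils down to a simple tally against the six-“yes” threshold from the post. The answers below are a made-up example, not a validated instrument:

```python
# Hypothetical answers to the thirteen checklist questions above.
answers = {
    "topic evolves over time": True,
    "no single right answer": True,
    "requires creativity": False,
    "content updated for research": True,
    "content updated for pedagogy": False,
    "small-group interactive exercises": True,
    "teaches critical thinking": True,
    "graded writing assignments": True,
    "complex novel problem solving": False,
    "in-depth individualized feedback": False,
    "1-on-1 interaction": False,
    "targets content to student needs": False,
    "mentoring": False,
}

yes_count = sum(answers.values())       # True counts as 1
replaceable = yes_count < 6             # the post's threshold
print(f"{yes_count} yes answers; replaceable for now: {replaceable}")
```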
So to summarize: right now, MOOCs are a perfectly capable medium for providing instruction on knowledge that is relatively stable, with clear “correct” answers. For example, a MOOC environment would be an efficient way to teach the solving of algebraic problems in College Algebra, but a MOOC would be less useful for instilling a deeper sense of the relationships between numbers. Most college-level courses have the potential for such critical thinking content, but not all instructors choose to develop these skills.
If you aren’t updating your course content regularly (for whatever reason), aren’t working to maximize the effectiveness of your teaching every semester, and aren’t providing targeted feedback, your teaching is replaceable. My recommendation? Don’t be replaceable.
For now, I suggest we leave the MOOCs to those they seem best designed to help: self-directed learners who can’t afford formal education but still want to improve their knowledge/skills because they see value in gaining knowledge/skills (not because they need a grade).
For those of you that know me personally, you’re probably aware that I’m the Technology Czar for the Organizational Behavior division of the Academy of Management. That is really just a fancy way to say, “When there’s a technology problem, the executive committee asks me to fix it.” However, this year, my role has expanded to include chairing the brand-new Social Media Committee. The purpose of that committee is to produce material to better connect the OB division leadership, members of the OB division, and the public at large, shared via social media.
So as part of that mandate, we have created a new podcast! During the AOM 2012 conference in Boston, Massachusetts, graduate student members of the Social Media Committee interviewed prominent scholars of the OB division and recorded these interviews on video. Each month over the next year, we’ll be releasing these videos (edited, of course!) as a podcast. You can view the first of these videos embedded below or subscribe directly to the podcast in iTunes.
It begins with my giving a short video introduction to the series and this episode, so you finally can put a face and voice to this blog. For better or worse, I suppose…
In a fascinating new paper appearing in the Journal of Applied Psychology, Wagner, Barnes, Lim, and Ferris1 investigate the link between lack of sleep and the amount of time that employees waste on the Internet while at work – a phenomenon called cyberloafing. Across two studies – one using historical search data collected from Google Insights and another using a sample of undergraduates – Wagner and colleagues found that those who sleep less are more likely to cyberloaf the next day. They also found an interaction such that highly conscientious people were less likely to cyberloaf after a night of interrupted, poor-quality sleep than less conscientious people.
In Study 1, Wagner and colleagues investigated this link by examining that longstanding arch-rival of good sleep the night before work in the United States: the hour lost when “springing forward” to daylight saving time (DST). They downloaded data from Google Insights for Search, investigating surfing habits across 3 Mondays each year over 6 years for each of 203 metro areas (the Monday following the DST switch, plus the preceding and following Mondays). This resulted in a final dataset of search behaviors from 3492 Mondays (162 Mondays were unavailable from Google for unknown reasons). Using this dataset, they discovered that on the Mondays after the switch to DST, entertainment websites (e.g. Facebook, YouTube, ESPN.com) were browsed more often than on the surrounding Mondays. There’s no way to say from the Google dataset alone that this is primarily work-related surfing, but other research cited by Wagner and colleagues indicates that 60% of entertainment website traffic is driven by surfing at work. So there’s some reason to believe that this increase represents cyberloafing.
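As a quick sanity check, the dataset size reported above follows from the study design (203 metro areas × 3 Mondays per year × 6 years, minus the Mondays Google didn’t return):

```python
# Back-of-the-envelope check on the Study 1 dataset size.
metro_areas = 203
mondays_per_year = 3   # DST Monday, plus the Mondays before and after
years = 6

expected = metro_areas * mondays_per_year * years
missing = 162          # Mondays unavailable from Google, per the paper
available = expected - missing

print(expected, available)  # -> 3654 3492
```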
Because that’s not terribly convincing by itself, Study 2 investigated the cyberloafing question with a more traditional correlational design. In a lab study, undergraduates’ sleep habits were tracked for one night with a device called an actigraph, and the following day, the students were brought into the lab to watch a lecture video. They were told that this video portrayed a professor being considered as a new hire, and that their feedback would help the university make this decision. The researchers then secretly tracked how much of the 42-minute video was spent actually watching it versus doing anything else (e.g. e-mail, YouTube). On average, students spent 5.7 minutes cyberloafing. As predicted, those with poorer quality sleep and lower quantity of sleep were more likely to cyberloaf. I’ll let the researchers describe the effect sizes:
From a practical perspective, these results suggest that for every minute the participant slept the night before the lab session, the participant engaged in .05 fewer minutes cyberloafing (or 3 fewer minutes cyberloafing per hour spent sleeping). For every minute of interrupted sleep the prior night, participants engaged in .14 minutes more cyberloafing (or 8.4 more minutes cyberloafing per hour of interrupted sleep) during the task. Given that the task was 42 minutes long, an hour of disturbed sleep would on average result in cyberloafing during 20% of the assigned task, and every hour of lost sleep would result in cyberloafing during an additional 7% of the task.
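The percentages in the quoted passage follow directly from the reported slopes; here is a quick check of the arithmetic, using only numbers taken from the quote:

```python
# Verifying the effect-size arithmetic from the quoted passage.
task_minutes = 42              # length of the lecture-video task
lost_sleep_slope = 0.05        # fewer minutes of cyberloafing per minute slept
interrupted_slope = 0.14       # extra minutes of cyberloafing per minute of interrupted sleep

per_hour_lost = 60 * lost_sleep_slope          # ~3 minutes per hour of lost sleep
per_hour_interrupted = 60 * interrupted_slope  # ~8.4 minutes per hour of interrupted sleep

print(round(per_hour_interrupted / task_minutes * 100))  # -> 20 (% of the task)
print(round(per_hour_lost / task_minutes * 100))         # -> 7 (% of the task)
```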
That’s pretty severe for both managers and employees themselves. As a manager, poor sleep among employees is likely to lead to decreased performance. As an employee, if you get too little sleep, you may find it more difficult to self-regulate your behavior and focus on your work. Lack of sleep is bad for everyone! So what do we do about it? Once again, I’ll let the authors present it in their own words:
What is not mixed is the mounting evidence that sleepy employees do not a productive office make. Thus, we encourage policy makers to revisit the costs and benefits of implementing DST, and we encourage managers to consider how they can facilitate greater employee self-regulation by ensuring that employees get good sleep.
- D.T. Wagner, C.M. Barnes, V.K.G. Lim, & D.L. Ferris (2012). Lost sleep and cyberloafing: Evidence from the laboratory and a daylight saving time quasi-experiment. Journal of Applied Psychology, 97, 1068-1076. doi:10.1037/a0027557 [↩]