In the last several months, massive open online courses (MOOCs) have been presented as a game-changer, a seismic shift, a crisis, and a McDonaldization of higher education. A fair number of universities have jumped on-board, including several prominent universities in Australia and the United States. At the University of Virginia, a recent kerfuffle resulted in several resignations and a great deal of negative press regarding pressure to move toward a MOOC model.
If you’re not familiar with MOOCs, the basic idea is that a massive number of students (tens of thousands or more) complete a course with a single instructor (or a handful of instructors). The only way this is possible is through technology: a semi-permanent version of course content is developed and delivered online, and all assessment is done through automated grading systems. Discussion rarely involves experts; it typically happens only among the students taking the course simultaneously. Instructors are generally not available via e-mail or other social media (although they could be). Because of current limitations on artificial intelligence, the grading algorithms can only effectively assess multiple choice exams or written materials where there is a clear “right” answer – for example, basic mathematics problems, computer programs that respond to set input and produce specific output, exams where the presence of keywords is the important element, and so on. Assessment of essays is generally limited to spelling, grammar, and keyword searches.
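To make that last point concrete, here is a minimal sketch of the kind of keyword-based scoring such systems rely on. Everything here – the essay text, keyword list, and scoring rule – is hypothetical, not taken from any real MOOC platform.

```python
# A minimal sketch of keyword-based essay scoring (hypothetical rule).
import re

def keyword_score(essay: str, keywords: set[str]) -> float:
    """Fraction of expected keywords that appear in the essay text."""
    words = set(re.findall(r"[a-z']+", essay.lower()))
    return len(keywords & words) / len(keywords)

essay = "Reinforcement increases behavior; punishment decreases it."
expected = {"reinforcement", "punishment", "behavior"}
print(keyword_score(essay, expected))  # → 1.0
```

Note what this grader cannot do: it gives full marks for any essay that name-drops the right terms, regardless of whether the argument connecting them is coherent – which is precisely the limitation described above.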
So are MOOCs really the future? To some, the current MOOC craze is nothing more than a repeat of the correspondence course craze of the 1920s – that as we experiment with MOOCs further, it will become increasingly obvious that MOOCs do not provide “fast, easy education” any more than courses by mail do – that learning is hard work, and MOOCs don’t change that. Considering the high drop-out rates associated with most MOOCs, there is probably some merit to this view. However, the added dimension this century is that the online education model, which is the basis for MOOCs, works pretty well. We’re at the point where we can conclusively say online education can be used to teach students as effectively as most in-person courses (whether any particular course actually does work as well is a different question). However, in a well-designed online course, we still need a fair number of instructors (or at least, instructional staff) to grade content, answer e-mails, respond to student disputes, and so on. In a MOOC, most of this is missing – if interaction is present at all, it is typically a discussion board where you are more likely to get an answer from a classmate (which may or may not be a high quality answer) than anyone with vetted subject matter expertise.
In summary, here are the core issues with MOOCs as they exist right now:
- Because students can join a MOOC at will, content must be standardized and cannot be easily updated mid-course to address the needs of the specific students taking the course at that time. Part of the perceived value of MOOCs is “create instructional content once and deliver it to a lot of students for a long time”.
- Because MOOCs rely on massive numbers of students (tens of thousands), all assessment of learning must be done automatically. Current technology (i.e. artificial intelligence that comes nowhere close to the complexity of human cognition) does not enable automatic assessment of anything other than rote processes (e.g. basic mathematics, basic computer programming, basic English skills like spelling and some aspects of grammar – you might be noticing a theme).
- Because of all of the above, MOOCs are effective only when there is a clear path from A to B (for example, solving a basic algebraic equation), and learning the path from A to B is the only worthwhile objective.
Thus, for relatively simple topics where there is a relatively simple objective standard of mastery that students need to meet, MOOCs should work pretty well. But if your goal is to teach critical thinking or provide individual attention to students in need, a MOOC isn’t enough.
So whereas most of the blogosphere has been discussing the merits of MOOCs as a money-saving measure, a way to bring higher education to the masses, the revolution that will obsolete the university degree, etc., etc., my goal here is to answer a practical question for instructors. During the rise of industrial automation, assembly line worker jobs were replaced by machines because machines were more efficient. So if you are currently an instructor in higher education (I’m especially concerned about adjuncts, since they seem to be the most likely to lose their jobs should a “MOOC revolution” occur), can you be replaced by a machine? Let’s find out.
Course Content (not in your control):
- Do you teach a topic where the knowledge taught evolves over time?
(e.g. the topics covered in Introductory Psychology change as Psychology continues to evolve, but the topics covered in College Algebra don’t really change)
- Do you teach a topic where there is no “right answer”?
- Do you teach a topic that develops/requires student creativity?
Updates to Teaching Quality (mostly in your control):
- Do you update your course’s content regularly to keep up with research developments in your field?
- Do you update your course’s content regularly to keep up with current research on effective teaching in your field?
- Do you incorporate interactive exercises involving small groups to facilitate peer learning (e.g. projects, small group discussion)?
- Do you teach/assess critical thinking within your topic area?
- Do you assign and grade writing assignments to assess student mastery of complex concepts?
- Do you assess student ability to apply knowledge gained to complex, novel problems (i.e. complex problem solving)?
Student Feedback (completely in your control):
- Do you give in-depth feedback customized to each student’s achievement and needs?
- Do you discuss and explore individual student knowledge/progress through 1-on-1 interaction with students?
- Do you respond to students by updating/targeting course content to meet their needs or focus within their interests?
- Do you provide mentoring to students in need?
If you answered “yes” to at least six of the questions above, a MOOC can’t replace you very effectively right now (although that’s only true as long as the technology remains inadequate!).
So to summarize: right now, MOOCs are a perfectly capable medium for providing instruction on knowledge that is relatively stable, with clear “correct” answers. For example, a MOOC environment would be an efficient way to teach the solving of algebraic problems in College Algebra, but a MOOC would be less useful for instilling a deeper sense of the relationships between numbers. Most college-level courses have the potential for such critical thinking content, but not all instructors choose to develop these skills. For instructors who choose not to, the implications should be worrying.
If you aren’t updating your course content regularly (for whatever reason), aren’t working to maximize the effectiveness of your teaching every semester, and aren’t providing targeted feedback, your teaching is replaceable. My recommendation? Don’t be replaceable.
For now, I suggest we leave MOOCs to those they seem best designed to help: self-directed learners who can’t afford formal education but still want to improve their knowledge/skills because they see value in gaining them (not because they need a grade).
For those of you that know me personally, you’re probably aware that I’m the Technology Czar for the Organizational Behavior division of the Academy of Management. That is really just a fancy way to say, “When there’s a technology problem, the executive committee asks me to fix it.” However, this year, my role has expanded to include chairing the brand-new Social Media Committee. The purpose of that committee is to produce material to better connect the OB division leadership, members of the OB division, and the public at large, shared via social media.
So as part of that mandate, we have created a new podcast! During the AOM 2012 conference in Boston, Massachusetts, graduate student members of the Social Media Committee interviewed prominent scholars of the OB division and recorded these interviews on video. Each month over the next year, we’ll be releasing these videos (edited, of course!) as a podcast. You can view the first of these videos embedded below or subscribe directly to the podcast in iTunes.
It begins with my giving a short video introduction to the series and this episode, so you can finally put a face and voice to this blog. For better or worse, I suppose…
In a fascinating new paper appearing in the Journal of Applied Psychology, Wagner, Barnes, Lim, and Ferris investigate the link between lack of sleep and the amount of time employees spend wasting time on the Internet while at work – a phenomenon called cyberloafing. In two studies – one using historical search data collected from Google Insights and another using a sample of undergraduates – Wagner and colleagues found that those who sleep less are more likely to cyberloaf the next day. They also found an interaction such that highly conscientious people were less likely than less conscientious people to cyberloaf after a night of interrupted, poor-quality sleep.
In Study 1, Wagner and colleagues investigated this link by examining that longstanding arch-rival of good sleep the night before work in the United States: the hour lost when “springing forward” from standard time to daylight saving time (DST). They downloaded data from Google Insights for Search, investigating surfing habits across 3 Mondays each year over 6 years for each of 203 metro areas (the Monday following the switch to DST, plus the preceding and following Mondays). This resulted in a final dataset of search behaviors from 3492 Mondays (162 Mondays were unavailable from Google for unknown reasons). Using this dataset, they discovered that on the Mondays after the switch to DST, entertainment websites (e.g. Facebook, YouTube, ESPN.com) were browsed more often than on the surrounding Mondays. There’s no way to say from the Google dataset alone that this is primarily work-related surfing, but other research cited by Wagner and colleagues indicates that 60% of entertainment website traffic is driven by surfing at work. So there’s some reason to believe that this increase represents cyberloafing.
Because that’s not terribly convincing by itself, Study 2 investigated the cyberloafing question with a more traditional correlational design. Undergraduate sleep habits were tracked for one night with a device called an actigraph, and the following day, the students were brought into the lab to watch a lecture video. They were told that this video portrayed a professor being considered as a new hire, and that their feedback would help the university make this decision. The researchers then secretly tracked how much of the 42-minute session was spent actually watching the video versus doing anything else (e.g. e-mail, YouTube, etc.). On average, students spent 5.7 minutes cyberloafing. The researchers’ hypotheses were confirmed: those with poorer quality and lower quantity of sleep were more likely to cyberloaf. I’ll let the researchers describe the effect sizes:
From a practical perspective, these results suggest that for every minute the participant slept the night before the lab session, the participant engaged in .05 fewer minutes cyberloafing (or 3 fewer minutes cyberloafing per hour spent sleeping). For every minute of interrupted sleep the prior night, participants engaged in .14 minutes more cyberloafing (or 8.4 more minutes cyberloafing per hour of interrupted sleep) during the task. Given that the task was 42 minutes long, an hour of disturbed sleep would on average result in cyberloafing during 20% of the assigned task, and every hour of lost sleep would result in cyberloafing during an additional 7% of the task.
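The arithmetic behind those quoted numbers is easy to verify; this snippet simply re-derives the per-hour and per-task figures from the two reported regression slopes.

```python
# Re-deriving the quoted effect sizes from the reported slopes.
TASK_MINUTES = 42
SLOPE_SLEEP = 0.05        # fewer minutes of cyberloafing per minute slept
SLOPE_INTERRUPTED = 0.14  # more minutes of cyberloafing per minute of interrupted sleep

per_hour_slept = round(SLOPE_SLEEP * 60, 1)              # 3.0 fewer minutes per hour slept
per_hour_interrupted = round(SLOPE_INTERRUPTED * 60, 1)  # 8.4 more minutes per hour interrupted

# Expressed as a share of the 42-minute task:
share_interrupted = round(per_hour_interrupted / TASK_MINUTES, 2)  # 0.2 → ~20% of the task
share_lost = round(per_hour_slept / TASK_MINUTES, 2)               # 0.07 → ~7% of the task
print(per_hour_slept, per_hour_interrupted, share_interrupted, share_lost)
```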
That’s pretty severe for both managers and employees themselves. As a manager, poor sleep among employees is likely to lead to decreased performance. As an employee, if you get too little sleep, you may find it more difficult to self-regulate your behavior and focus on your work. Lack of sleep is bad for everyone! So what do we do about it? Once again, I’ll let the authors present it in their own words:
What is not mixed is the mounting evidence that sleepy employees do not a productive office make. Thus, we encourage policy makers to revisit the costs and benefits of implementing DST, and we encourage managers to consider how they can facilitate greater employee self-regulation by ensuring that employees get good sleep.
I recently received a peculiar e-mail invitation. It was titled, “Richard, welcome to Lore.” Here’s a screenshot:
I did not immediately recognize this for what it was, although it is now blazingly obvious. “ODU has a new LMS?” I thought. “That can’t be right – we just upgraded Blackboard.” I didn’t personally support that upgrade, but it was the will of the faculty, based on countless committee meetings and votes. For a faculty uninterested in Moodle (because it is too new/too different/etc.), a sudden, unannounced switch to a “new alternative to learning management systems” seemed a bit odd.
After thinking about this for a minute, I recognized this message for what it was: targeted spam. I was being spammed. The use of my name and the use of my university’s name were marketing tactics. The lower-case “university” was a side-effect of inattentive mass e-mailing. The claim that “professors at over 600 schools used it” probably refers to the number of registered instructional users, regardless of the extent to which those “professors” actually adopted Lore, and regardless of the ranks of those instructors (adjuncts/instructors vs. full-time faculty/professors). The language is imprecise, and there is little that annoys faculty more than imprecision.
Now, education-related spam in general is not unusual for faculty; we get countless advertisements for new textbooks, new printing services, cheesy fake ways to get our dissertations published, and so on – but I had never been spammed before with an advertisement for an LMS. And there’s a good reason for that. An LMS is a foundational piece of software in higher education. Every faculty member is expected to use one of the “accepted” LMS provided by their local IT support office. The primary reason for this is to simplify the lives of the students – they have one system to learn, one place to get all of their assignments, one tool on which to need technical support, and one place for their grades. A small holdout of faculty who hate technology don’t use the LMS at all, but most do. Some faculty do use other software to supplement the LMS. For example, a couple of years ago, I installed MediaWiki (the software underlying Wikipedia) on the Linux server I rent space on and had students create wiki entries for a term project. I didn’t use the wiki software in Blackboard because it was clunky and hard to use (it may be better in the new Blackboard; I haven’t checked yet). But one thing I didn’t do with MediaWiki was record student grades.
Student grades are strictly protected by a variety of federal, state, and institutional regulations. The most well-known of these is FERPA. My university doesn’t even want me to put students’ grades on my own laptop – which is always in my personal possession – if I can avoid it. LMSs have gradebooks, and the tightly controlled LMS run by the university is the preferred place to keep mid-semester grades until recording them with the registrar at the end of the semester.
Imagine my surprise, then, to see that Lore has a gradebook! I suppose Lore wants me to upload protected student information right into the cloud, but I’m a bit uncomfortable with that. I’m confident my institution wouldn’t want me to do that; I’m not even sure it would be legal. The privacy documentation for Lore indicates that by posting content on Lore, you license that content to be shared freely by anyone else within the Lore community. I don’t see an exception for grades (which are also posted material), so I suppose by this license I am providing permission for student grades to be shared freely throughout the site.
None of that would be so terrible if not for the direct email marketing. Individual instructors have a lot of discretion to impose rules and requirements upon their students as they see fit to meet their instructional objectives. If an instructor seeks out something like this, reads the policies, and decides it’s worth it, that’s fine. But the spam e-mails being sent by Lore strongly imply that our institutions have already approved this software. “Welcome to Lore”, “now available at your institution”, and “tools for grading” all point to this being an officially supported alternative to the institutional LMS. And that’s simply not true. Their marketing relies on implication and guilt to increase faculty eyeballs on their website (i.e. “this is a valuable teaching tool at your university that others are using and you’re not!”), and that’s just not ethical, especially given the question marks surrounding grades.
When I originally went to investigate whether anyone else had noticed this, I took a look at Lore’s Facebook page. There, I found one post from a faculty member pointing this out – something to the effect of, “Please stop sending this to our faculty; your email makes them think this is an official college platform, and it is causing a great deal of confusion.” That post had been on their page for several weeks when I looked, without a response from Lore staff. Conveniently, it has since been deleted (if any readers know how to recover deleted Facebook posts, I’d be very appreciative if you could get a copy of this post for me).
Given all of this, I am left with a very negative impression of Lore, and I haven’t even tried its features. It might be a fantastic teaching tool. Its design principles seem quite positive on the surface. The goals of the project seem quite familiar to me personally; I have been working to get social media into classrooms through academic research and federal grants for the past several years. Unfortunately, its marketing is completely at odds with the academic culture it is trying to infiltrate. Frankly, it strikes me as the effort of enthusiastic technology entrepreneurs untempered by reality. I’d be surprised if they have a single professor on staff to advise them about these issues. If they did, an e-mail like this never would have been blasted to unsuspecting faculty.
As a side note, to give a fair assessment of the tools, I considered creating an account to see what it looked like. But I’m afraid that would just result in their next email saying six hundred and one.
In this first post back from my summer hiatus, I’ll be outlining the sessions I plan to attend at AOM 2012 in Boston this coming Monday and Tuesday.
This year’s AOM will be a little different for me than in previous years. For readers that don’t know, I’m the Technology Czar for the Organizational Behavior division of AOM. That means I coordinate all things technology for the division. This year, I’m spearheading a new group: the Social Media Committee (SMC). As part of our responsibilities, the SMC will be creating a new video podcast with interviews of OB scholars to be placed on the OB website. Unfortunately for you, dear readers, that means I will be much busier than in previous years.
Typically, I live-blog the AOM conference through my Twitter account. I am hopeful that I can do this again, but we’ll need to play it a bit by ear.
On the bright side (or not, depending on your perspective), there is not much technology-related research at AOM this year, so there’s not much for me to schedule around. I take this to reflect a general disinterest from AOM in how the Internet is changing the workplace and employee experiences with it. But in any case, it means that I have few sessions to attend. One of the ones I am most excited about is one that I am facilitating: a symposium exploring the use of social media in employee selection. Otherwise, it’s a little bleak.
Here’s a list of all the sessions I’m interested in. Note that despite the low number, I will definitely not be able to attend all of them since there are overlaps, and I have other required events that I must attend. But I will try to hit all I can.
| Session Title | Session Type | Sponsor | Date/Time | Location |
| --- | --- | --- | --- | --- |
| Connecting the Academy through Technology | Meeting | AAA | Sunday, 2PM | Hynes 209 |
| CSR and Communication in Social Media Environments | Symposium | SIM, CMS | Monday, 11:30AM | Marriott, Provincetown |
| Online Communities and Micro-Blogs | Roundtable | OCIS | Monday, 1:15PM | Sheraton, Hampton A |
| From Twitter to Virtual Worlds | Roundtable | MED | Monday, 1:15PM | Marriott, Nantucket |
| OB Lifetime Achievement Address (Richard Hackman) | Social Event | OB | Tuesday, 9AM | Park Plaza, Ballroom |
| Online Communities | Paper Session | OCIS | Tuesday, 9:45AM | Sheraton, Fairfax A |
| The Role of Social Media and On-Line Resources in Selection | Roundtable | HR | Tuesday, 1:15PM | Park Plaza, Newbury |
I’m sure you are clamoring for more technology and I/O-related news, but we have entered summer, and as anyone in academia knows, that means it’s time to write. And unfortunately for you, that doesn’t mean writing on NeoAcademic!
Instead, the summer is the time when academics work on large writing projects that they’ve had to push aside during the meeting-and-teaching-heavy school year. This summer is a little worse than usual for me, because I’m finishing up writing a textbook on statistics in business for Sage UK and putting together several journal article submissions. I’m also organizing a new video podcast series for the Organizational Behavior division of the Academy of Management, so be on the lookout for that starting in August/September.
Thus, blog posts will be few until August/September-ish. If something catches my eye, I will still write it up, but otherwise I’ll be off the weekly posting schedule for the near future. I will likely kick off my fall return with live coverage of the 2012 Boston conference of the Academy of Management in August.
When hiring with online tests (a concept called unproctored internet testing [UIT]), one of the biggest worries is that test-takers will cheat. A home computer is just about as “unsecured” a testing environment as possible, so test-takers have many options to deceive their potential employers: looking up answers on the Internet or getting a friend to complete the test for them are just two options. When people cheat, their test scores no longer represent their ability level, which reduces the average job performance of those that ultimately get hired. One remedy for this is something called “verification testing,” where job candidates are re-tested at a later stage of the selection process: for example, they might complete an intelligence test online and, after passing it, take a parallel version of that test in person. But many organizations don’t want the extra hassle of verification testing, because it takes more time and effort on the part of both the organization and the job candidate. So if people are cheating, can the organization still come out ahead by using online testing?
In an upcoming article in the International Journal of Selection and Assessment, Landers and Sackett simulate the conditions under which mean job performance of those hired will differ when job applicants are cheating. This includes both positive and negative changes to job performance, relative to the selection system used before UIT was adopted. The key is recruitment: if online testing enables an organization to reach a larger pool of applicants, it can be more selective, improving mean job performance.
You can see a sample of the researchers’ findings in the table here. The top left corner (0.24) represents the baseline. With the criterion-related validity chosen for simulation, job performance is 0.24 standard deviations higher than the previous system when no one is cheating and the applicant pool is not increased. We can compare the other values in the table to this baseline. For example, if 10% of your applicant pool is cheating but you double the size of your applicant pool, online testing results in a job performance level of 0.32 (over a 33% increase in job performance!). However, if 50% of your applicant pool is cheating, doubling the applicant pool will result in lower job performance over your old system (about a 46% decrease!).
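The comparisons to baseline are straightforward percent changes; here is a quick check of the one fully reported pair of values (the other conditions in the table follow the same calculation).

```python
# Checking the reported comparison against the 0.24 baseline.
BASELINE = 0.24  # SDs of performance improvement: no cheating, no larger applicant pool

def pct_change(new, old=BASELINE):
    """Percent change in mean job performance relative to the baseline condition."""
    return (new - old) / old * 100

# 10% of the pool cheating, but the applicant pool doubled:
print(round(pct_change(0.32), 1))  # → 33.3, i.e. "over a 33% increase"
```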
Perhaps more disturbingly, some simulation conditions resulted in not only lower than baseline job performance but negative job performance. In other words, under some conditions, using online testing resulted in lower job performance than hiring at random.
This points to a pressing need for future research to identify what percentage of applicants we would expect to cheat, along with other simulation parameters. Unfortunately, dishonest behavior is one of the most difficult areas of study in psychological research. While in most studies the biggest concern is participant apathy (i.e. bored undergraduates not paying attention), here people are outright working against you when you try to determine how dishonest they are. We can only hope this research spurs further work in this area.
An upcoming article in Academy of Management Perspectives by Aguinis, Suarez-Gonzalez, Lannelongue, and Joo investigates the accuracy of citation counts as a measure of how impactful an academic’s work is, both in and out of academia. Short answer: while citation counts reflect the extent to which an academic’s work affects the work of his/her peers, they do not reflect the extent to which that work affects the world at large.
Business schools in particular place a great deal of importance on citation counts, often equating them with the importance of the scholar. A researcher with a high citation count might be headhunted from another business school in order to increase the prestige of the hiring institution.
The problem is that citation counts do not necessarily capture the actual importance of a person’s work in anything beyond scholarly circles. If the purpose of science is to help the world (and I’d argue that it is), then citation counts capture something altogether different: they reflect how valuable other scientists view your work to be in supporting their own ideas. This is not really what we want to know when we ask, “how important is this scholar’s work?”.
Many fields (especially business-related fields) worry that their research is not adopted by those that could benefit from it most. Often, research never makes it beyond journal articles and into practice. So if we are really concerned with identifying the most “impactful” scholars, do citation counts capture that? Do highly cited authors have a bigger impact on the world than less-cited authors?
To determine exactly how much citation counts reflect larger impact, the authors found the number of citations to the 550 most-cited authors in the Academy of Management. They searched Google for these authors, using their full name in quotation marks to pull up a list of Internet references to that author. They then reviewed the first 50 pages of results to see how many actually referred to the author of interest. If more than 5% referred to someone else, they dropped that author from analysis. This resulted in a final database of 391 scholars.
In that database, the number of Google results did not correlate highly with citation counts: correlations ranged from .152 to .260, depending on whether or not .edu domains were included. In a multiple regression analysis controlling for years since earning the doctorate and the number of articles published, the number of citations did not predict substantial incremental variance in Google listings among non-.edu domains (delta-R2 of about one half of one percent).
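For readers unfamiliar with incremental variance, the analysis described above can be sketched as follows. The data here are synthetic and the variable choices only mirror the described regression; this does not reproduce the study’s results.

```python
# Hedged sketch: computing incremental variance (delta R^2) with synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 391  # number of scholars in the final database
years = rng.uniform(1, 40, n)      # years since doctorate (hypothetical values)
articles = rng.poisson(30, n)      # number of articles published (hypothetical values)
citations = articles * 20 + rng.normal(0, 100, n)
web_results = 200 * years + 5 * articles + rng.normal(0, 500, n)

def r_squared(X, y):
    """R^2 from an OLS fit with an intercept column."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# R^2 with controls only, then with citations added:
r2_controls = r_squared(np.column_stack([years, articles]), web_results)
r2_full = r_squared(np.column_stack([years, articles, citations]), web_results)
delta_r2 = r2_full - r2_controls  # incremental variance explained by citations
print(round(delta_r2, 4))
```

Because the full model nests the control-only model, delta-R2 is never negative; the question is only whether it is large enough to matter, and in the study it was not.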
In summary, if we believe the number of references in Google to be an accurate metric for capturing impact on the world at large, citation counts do not reflect this value. Impact is clearly a more complicated construct than we typically consider it to be; future work should investigate better ways to capture it. It’s also worth noting that while this approach works for business, where research results should be directly adopted by managers, it would not work so well for fields where there are several steps between research and adoption. For example, just because a nuclear physicist does not appear much on Google doesn’t mean that his work didn’t help build a nuclear power plant.
At the least, my 17000 results in Google put me around #300 of the 391 most-cited authors in the Academy of Management. Not too bad for 3 years out!
Grad School Series: Applying to Graduate School in Industrial/Organizational Psychology
Starting Sophomore Year: Should I get a Ph.D. or Master’s? | How to Get Research Experience
Starting Junior Year: Preparing for the GRE | Getting Recommendations
Starting Senior Year: Where to Apply for Grad School | Value of Traditional vs. Online Degrees
Interviews/Visits: Preparing for Interviews | Going to Interviews
In Graduate School: What to Expect First Year
So you want to go to graduate school in industrial/organizational (I/O) psychology? Lots of decisions, not much direction. I bet I can help!
While my undergraduate students are lucky to be at a school with I/O psychologists, many students interested in I/O psychology aren’t at schools with people they can talk to. I/O psychology is still fairly uncommon in the grand scheme of psychologists; there are around 7,000 members of SIOP, the dominant professional organization of I/O, compared to the 150,000 in the American Psychological Association. As a result, many schools simply don’t have faculty with expertise in this area, leading many promising graduate students to apply elsewhere. That’s great from the perspective of I/O psychologists – lots of jobs – but not so great for grad-students-to-be or the field as a whole.
As a faculty member at ODU with a small army of undergraduate research assistants, I often find myself answering the same questions over and over again about graduate school. So why not share this advice with everyone?
I’ve decided to return to this series because a few questions have come up from students that I realized I didn’t cover here. This week, I’d like to cover another important decision: should I go to graduate school at an online institution or a more traditional on-campus program?
This is actually part of one of the continuing “big arguments” in the field of education, so there aren’t many clear answers just yet. There is some evidence from the Department of Education that web-based courses are no less effective than in-person courses. That is, there’s reason to believe that two courses, similarly designed, one online and one in-person, will be essentially the same in their ability to teach you content. But the studies that the DOE summarizes are generally all undergraduate or laboratory studies, offering little insight into a) graduate courses or b) complete programs (not individual courses).
The experience of graduate school is certainly going to be different at these two categories of institution. One of the major benefits from brick-and-mortar graduate school is literally immersing yourself in the academic environment: being a part of a cohort of graduate students with similar experiences that you socialize with, interacting intensively face-to-face with professors about your academic achievement and career goals, gaining networking contacts that you will call on for the rest of your career, and generally learning about the culture of the profession.
Most of that is lost in an online environment. You’re not going to go out for drinks with your classmates after a difficult exam. As you likely already learned as an undergraduate in psychology, frequent interaction and shared traumatic experiences are some of the best ways to ensure relationships form between people. That kind of bonding simply doesn’t exist in an online program. While you might get to know people on discussion boards, it’s not quite the same – think of it like the difference between your in-person friends and your “Facebook friends.”
The casual interaction with others in an academic environment is also beneficial developmentally. Completing a graduate program at home, you almost always have time to sit and think about your answers, to carefully consider your responses, and to put a lot of time and effort into producing the best answer possible. That is certainly a valuable skill in an I/O career – but it’s not everything. If you ever plan to use your I/O degree in the “real world,” you’ll need experience coming up with answers on the fly and responding to and interacting with other experts. Many graduate students find that their first academic conference presentation, where they must respond to unpredictable questions from interested parties about their research results, is eye-opening in terms of the sudden pressure to think on their feet. Most students have already practiced this skill in their courses, and still find it challenging. For example, in my first-year Master’s-level Personnel Psychology course, I have students lead discussion for over an hour on a set of several journal articles. Without that kind of practice, I’d be a bit worried – and this kind of practice directly translates into the kind of work you’d need to do as an I/O psychologist.
I often find that students are considering an online program because they want to balance graduate school against a job. Let me be absolutely clear: this is a terrible idea. I fully expect my graduate students to spend 40-60 hours per week studying and working on research, on top of any teaching responsibilities. Teaching, at its most intense, should be a commitment of 10 hours per week – that is the maximum permissible distraction. If you plan to hold an outside job to support yourself during graduate school instead of teaching, you should be working less than 10 hours per week at it. Most part-time jobs don’t permit this and “strongly encourage” employees to increase their hours, so it’s generally not a good idea to hold such a job while in graduate school. Remember, you’re in graduate school to prepare for your career. Every class you take, every bit of research you conduct, is precisely targeted at giving you better opportunities later. Distracting yourself from that goal in any way will only hurt you in the long run.
More practically, there is some question as to the quality of online programs. A 2010 report by SIOP found several disturbing features of online I/O programs. For example, most online I/O programs don’t report who their faculty are. Of the PhD programs identified as offering online I/O graduate degrees, only one (Walden University) reported this information, and of its 22 faculty, only two (2!) held I/O PhDs. That raises serious questions about the I/O expertise of those offering these degrees. Master’s programs had, on average, 1.5 I/O faculty. And only one online I/O Master’s program required a written thesis, which is necessary for anyone hoping to progress into a PhD program.
In the annual SIOP survey reported here, employers additionally held mostly negative or neutral opinions about students coming from online programs. For example, respondents tended to respond positively to “I tend to negatively evaluate a résumé if I notice that the applicant earned his or her graduate degree online” and “I feel that there IS a meaningful difference in the quality of training that one receives in an online graduate degree program in I-O Psychology versus a traditional, in-person program in I-O Psychology.”
A more recent 2012 report by Rechlin and Kraiger found, in an experimental study of I/O consulting firms, that applicants from online programs tend to be evaluated more negatively by those making I/O hiring decisions than applicants from brick-and-mortar institutions. They discovered this by presenting résumés of effectively identical candidates (but with different names, distracting information, etc.) while crossing several degree characteristics. They found that those with online degrees were less likely to be asked for an interview, less likely to be hired, and likely to receive a lower starting salary offer.
So what this really comes down to is priorities. Are you just trying to get the degree/credentials as a stepping stone for some other career goal, or are you trying to gain experiences that will help you create an I/O career? If you just want the degree, either type of program is probably fine. But if you’re trying to build a career within I/O psychology, at least for now, a brick-and-mortar institution is likely to put you on a superior trajectory, with better training, better opportunities, and better earning potential.
Valve Software is one of the videogame industry’s most prominent and successful companies. It is responsible for some of the industry’s most popular games and game platforms, including Half-Life, Team Fortress, Counter-Strike, Half-Life 2, Portal, and Left 4 Dead. It also runs the enormously popular Steam digital distribution service, which allows PC gamers to purchase and download both traditional retail and indie games. The company has a reputation for innovation and creativity, quite far from the reputation of other major players in the videogame industry, like publishing behemoth Electronic Arts.
Last week, someone leaked the New Employee Handbook from Valve, and it provides a fascinating glimpse into the structure of this enormously popular company. Unlike most videogame developers, Valve is an almost completely flat organization, despite boasting nearly 300 employees. Here’s what the handbook says:
That’s why Valve is flat. It’s our shorthand way of saying that we don’t have any management, and nobody “reports to” anybody else. We do have a founder/president, but even he isn’t your manager. This company is yours to steer—toward opportunities and away from risks. You have the power to green-light projects. You have the power to ship products.
In many ways, it’s the job characteristics model at its logical conclusion: complete and absolute autonomy, variety, identity, significance, and feedback for every employee. Each employee decides the projects s/he will work on (autonomy). If they get bored with one project, they can instantly switch to another with no job-related consequences (variety). Each employee tries to gain support from coworkers to work on their personal projects (feedback), which then can be brought to fruition with sufficient effort (identity, significance).
Commentary among videogame blogs on this approach has been mixed, with some suggesting that it could never work at other companies. Outside of videogaming and other creative enterprises, that’s probably true. Videogame development is itself essentially a creative process: if a product doesn’t ship on time, the company may lose money, but it is unlikely to lose clients. Players will buy the game whenever it comes out, as long as it is of high quality. And quality is clearly a major objective at Valve. The attitude toward how employees spend their time is quite clear in this statement on what happens when employees make mistakes:
Screwing up is a great way to find out that your assumptions were wrong or that your model of the world was a little bit off. As long as you update your model and move forward with a better picture, you’re doing it right. Look for ways to test your beliefs. Never be afraid to run an experiment or to collect more data.
For Valve, spending more time is always justified if it means you’ll do a better job. A management system like this in a service organization could falter easily; imagine a project due for a client tomorrow only for your coworker to say, “I’ve decided to drop this project; you’ll need to find someone else.” In videogame development, this is still a roadblock, but it does not grind business to a halt.
One might think that this everyone-runs-the-organization kind of approach would be risky. And to some extent, that’s true – Valve even admits it:
So if every employee is autonomously making his or her own decisions, how is that not chaos? How does Valve make sure that the company is heading in the right direction? When everyone is sharing the steering wheel, it seems natural to fear that one of us is going to veer Valve’s car off the road.
The key to solving this problem is finding excellent people to work at your company. And this is clearly objective #1 at Valve:
Hiring well is the most important thing in the universe. Nothing else comes close. It’s more important than breathing. So when you’re working on hiring—participating in an interview loop or innovating in the general area of recruiting—everything else you could be doing is stupid and should be ignored!
Valve goes so far as to encourage new employees to take part in the interview process immediately after they themselves are hired, so that they can get a better sense of the traits valuable to success at Valve. The interview process is not explained in much detail; however, the discussion makes it clear that it is structured and carefully thought out. The hiring process actually echoes some sentiments well explored in I/O psychology:
Hiring is fundamentally the same across all disciplines. There are not different sets of rules or criteria for engineers, artists, animators, and accountants. Some details are different—like, artists and writers show us some of their work before coming in for an interview. But the actual interview process is fundamentally the same no matter who we’re talking to.
The key at Valve, as in all good selection processes, is to identify qualified individuals who fit with the company. It also only hires the very best:
In these conditions, hiring someone who is at least capable seems (in the short term) to be smarter than not hiring anyone at all. But that’s actually a huge mistake. We can always bring on temporary/contract help to get us through tough spots, but we should never lower the hiring bar.
The company is 100% focused on recruiting and selecting a committed, high performing workforce. It keeps its selection ratio as low as possible – even if there is more work to do than there are employees – to ensure that the permanent employees it retains are excellent and unlikely to bring the company crashing down around them.
The third day had a good amount of coverage on technology-driven assessment. Our social media symposium was in the middle of the day, so there’s a little gap there. Wrap-up incoming soon!
|7:59 AM||NeoAcademic: SIOP 2012: Day 3 Live-Blog (http://t.co/2xu5rnwM) #siop12|
|8:23 AM||Off to an 8:30AM session at #siop12 – really curious what attendance will look like!|
|8:42 AM||At Assessing Video Resumes at #siop12|
|8:50 AM||Extraversion and narcissism predict attitudes toward video resumes, but not behavior #siop12|
|9:04 AM||Ethnic minorities perceive video resumes as being more fair #siop12|
|9:05 AM||Video resumes as being more fair than paper, but disappears when controlling for education (more education = paper more fair) #siop12|
|9:07 AM||When applicant accent is strong in vid resumes, low prejudice reviewers rate job suitability much higher, not clear why #siop12|
|9:12 AM||Self promotion in vid resumes may result in lower evaluation of applicant (less credible, more manipulative and annoying) #siop12|
|9:18 AM||Men negatively view women in vid resumes when they engage in self promotion (violating gender norms) #siop12|
|9:33 AM||Our #siop12 symposium on social media is coming up at 10:30 in Elizabeth H – come on by!|
|9:36 AM||@K_Anderson_Eval all vid resume presenters seem to indicate they are at the cutting edge, not much if any published work #siop12 #siop|
|10:38 AM||Nearly 100 people at our social media symposium in Elizabeth H! #siop12 http://t.co/MC6rYL8Y|
|12:13 PM||Ended up with over 125 attendees and a flood of questions! Running to next session on future of IO as an academic discipline #siop12|
|12:24 PM||@budworth several touched on the idea in symposium 229 – I think you might want to contact Annemarie Hiemstra or Marie Waung #siop12|
|12:31 PM||Some concern from panel about losing IO people to business schools, more career opps in business schools so more attractive #siop12|
|1:36 PM||Saw fascinating poster on progress bars on web surveys – short answer: USE THEM; increases focus and enjoyment #siop12|
|1:47 PM||At presentation on how to create homegrown tech for accomplishing IO objectives – sounds familiar! #siop12|
|1:53 PM||People of any level of tech expertise can start learning tech skills useful to IO; Excel is good entry point #siop12|
|2:16 PM||MTurk workers useful for small practitioner jobs – pay $.25 per response for pilot study work #siop12|
|4:34 PM||Met with collaborators and now with several thousand psychologists to listen to talk by Albert Bandura #psychology #siop12|
|4:38 PM||Bandura speaks! #siop12 #psychology http://t.co/ZXa6H2BL|
|4:49 PM||Interesting idea by Bandura: individuals as coauthors of action with environment and behavior #siop12 #psychology|
|7:35 PM||So ends #siop12 – safe travels everyone!|
Today was busy, with our learner control symposium taking up most of the afternoon (hard to tweet while talking!).
|5:36 AM||At 5AM, it’s much harder to see why registering for the #siop12 5K was a good idea…. Blegh|
|7:49 AM||Did not beat Paul Sackett’s time (my goal) but still a respectable 31 mins for the #siop12 5K. Not bad for my 1st 5K!|
|8:38 AM||NeoAcademic: SIOP 2012: Day 2 Live-Blog (http://t.co/oP3fAbly) #siop12|
|10:19 AM||Whew… Recovered from Fun Run, met with potential collaborators and finally SITTING to hear about virtual org effectiveness #siop12|
|10:38 AM||Apparently virtual teams perform better when members are isolated (fewer distractions) #siop12|
|11:09 AM||Media richness changes how leadership is expressed in virtual teams #siop12|
|11:34 AM||At Chasing the Tortoise: Zeno’s Paradox in Technology-based Assessment #siop12|
|11:40 AM||Tech moves forward as research moves forward to understand it; never ending cycle #siop12|
|11:46 AM||Org context/purpose is inflexible while specific user experience is flexible; so when implementing new tech, consider context first #siop12|
|12:08 PM||No practical diffs between mobile and non mobile assessment – unless candidates took it on Nintendo Wii #siop12 (weird)|
|12:11 PM||Less than 1% of applicants used mobile devices to complete assessment in sample of 1 million #siop12|
|12:17 PM||Older adults more likely to take assessments on mobile devices; younguns prefer computers #siop12|
|12:20 PM||Personality tests don’t have diff scores, but cog ability scores do – mobile scores lower d=-.5 (non experimental study) #siop12|
|12:54 PM||Zickar on cyber vetting and looking up job applicants on Facebook: just don’t do it. #siop12|
|9:08 PM||Just had a great #siop12 dinner with our awesome ODU alums!|
Day 1 was busy, but I still got some tweets out. I’ve noticed that every year, I randomly run into people – which is great, but it takes so much time away from presentations! An archive of my Twitter activity is below:
|7:58 AM||On the way to pick up my badge at #siop12… Then opening plenary!|
|8:27 AM||I wonder if the background music pre-plenary is designed to put everyone back to sleep… #siop12|
|8:31 AM||So who knows the wifi password? #siop12|
|8:32 AM||About 4000ish registered for #siop12… Not a high year, but not bad|
|8:43 AM||I don’t think I ever thought before about the ridiculous number of awards given by #siop – that’s a lot of committees! #siop12|
|9:18 AM||Congrats to all 23 new #siop fellows at #siop12, especially @OldDominionUniv’s own Debra Major!! http://t.co/yUCGYdY0|
|10:11 AM||RT @emilyalice21: the thought of seeing bandura @ the conclusion of #siop12 makes me feeel like ill be seeing elvis.|
|10:27 AM||At Do You Speak Technology? at #siop12|
|10:51 AM||Good to see growing awareness among I/Os about the need for translation between IO and IT people #siop12|
|11:24 AM||Got some interesting advice on how to structure my technology class for IO PhD students to better communicate with IT #siop12|
|12:07 PM||At presentation on personality driven by many meta-analyses, it’s like a Minnesota explosion #siop12|
|12:18 PM||Seems self efficacy is quite complex, even in its relationships with itself #siop12|
|12:39 PM||Warmth seems to be a compound agreeableness/extraversion personality trait, predicts many customer reactions to service emps #siop12|
|1:39 PM||At new investigations in applicant reactions to websites research… Meta up first #siop12|
|1:59 PM||interviewing advice on website interacts with gender to predict org attractiveness (women want support, men want no support) #siop12|
|2:04 PM||Blogs potentially serve as a recruitment channel by portraying leaders as relationship-oriented #siop12|
|2:19 PM||Organizational website quality matters much more for lesser known companies; Fortune 100 can ride their reputations #siop12|
|2:36 PM||@lsinger it’s: Beeco & Raymark, Effectiveness of CEO blogs as a recruiting tool, which is presentation 59 at #siop12 http://t.co/leOs0Hon|
|2:45 PM||Phew! Time for a break at #siop12|
|6:56 PM||Ended up chatting with a colleague and missed some sessions! Whoops!! But that is what #siop12 is all about|
As in 2010 and 2011, I’ll be live-blogging the SIOP conference, which begins Thursday, April 26 and runs through Saturday, April 28. This post contains a list of all the sessions that I am interested in attending, which are generally focused on technology, training, and assessment. My live blogging will likely occur on Twitter, with a permanent record stored here. The biggest enemy in such efforts is battery life!
In the graphic below, symposia that I am chairing and presenting in are colored red (plus the doctoral program chair’s meeting), and poster sessions are marked as “free” (little white bar on the left). As you can see, it’s a bit packed, so I won’t be attending all of these – and midday Friday is particularly bad – but this is everything I’m interested in. If you’d like to meet up at any of these events, or if you think I missed something that I should definitely attend, please let me know!
The first list contains symposia and interactive poster sessions. My sessions are bolded and highlighted in grey.
|No.||Day||Start||End||Session Title||Room||Session Type|
|1||4/26/2012||8:30 AM||10:00 AM||Opening Plenary Session||Elizabeth C||Special Events|
|8||4/26/2012||10:30 AM||12:00 PM||I-O Bilingualism: Do you Speak Technology?||Edward AB||Panel Discussion|
|27||4/26/2012||12:00 PM||1:30 PM||Personality in I-O: New Meta-Analytic Contributions to Unexamined, Neglected Issues||Annie AB||Symposium/Forum|
|48-3||4/26/2012||1:30 PM||2:30 PM||Investigating Conflict Escalation in FTF and Virtual Teamwork Over Time||America’s Cup AB||Poster|
|59||4/26/2012||1:30 PM||3:00 PM||Back Into the Web: New Directions in Applicant Attraction Research||Madeline CD||Symposium/Forum|
|92||4/26/2012||5:00 PM||6:00 PM||Theme Track: Scholarly Reflections on the Past, Present, and Future of Discrimination||Elizabeth H||Special Events|
|98-16||4/26/2012||6:00 PM||7:00 PM||Unproctored Cognitive Ability Internet Testing: Does Cheating Pay Off?||Elizabeth D||Poster|
|102||4/27/2012||8:00 AM||10:00 AM||Addressing Unproctored Internet Testing Claims and Fears: Founded or Unfounded?||Elizabeth C||Symposium/Forum|
|134||4/27/2012||10:30 AM||12:00 PM||Virtual Organizational Effectiveness||Gregory AB||Symposium/Forum|
|138-4||4/27/2012||11:30 AM||12:30 PM||Emoticons at Work: Does Gender Affect Their Acceptability?||America’s Cup AB||Poster|
|138-3||4/27/2012||11:30 AM||12:30 PM||Applicants’ and Recruiters’ Perceptions of Social-Networking Web Sites in Selection||America’s Cup AB||Poster|
|140||4/27/2012||11:30 AM||1:30 PM||Chasing the Tortoise: Zeno’s Paradox in Technology-Based Assessment||Elizabeth C||Symposium/Forum|
|143||4/27/2012||12:00 PM||1:30 PM||Virtual Teams: Exploring New Directions in Research and Practice||Betsy BC||Symposium/Forum|
|153||4/27/2012||12:00 PM||2:00 PM||Current Research in Advanced Assessment Technologies||Ford AB||Symposium/Forum|
|158-4||4/27/2012||12:30 PM||1:30 PM||For Your Eyes Only? Reactions to Internet-Based Multimedia SJTs||America’s Cup AB||Poster|
|170||4/27/2012||1:30 PM||3:00 PM||Computerized Adaptive Testing: A Primer on Benefits, Design, and Implementation||Elizabeth C||Master Tutorial|
|181-3||4/27/2012||3:30 PM||4:30 PM||Reactions to Using Social Networking Web Sites in Preemployment Screening||America’s Cup AB||Poster|
|199||4/27/2012||3:30 PM||5:00 PM||Building a Science of Learner Control in Training: Current Perspectives||Madeline CD||Symposium/Forum|
|203||4/27/2012||4:30 PM||6:00 PM||Variations in Unproctored Internet Testing: The Good, Bad, and Ideal||Edward AB||Panel Discussion|
|229||4/28/2012||8:30 AM||10:00 AM||Assessing Video Resumés: Valuable and/or Vulnerable to Biased Decision Making?||Elizabeth C||Symposium/Forum|
|249||4/28/2012||10:30 AM||12:00 PM||The Impact of Social Media on Work||Elizabeth H||Symposium/Forum|
|294||4/28/2012||1:30 PM||3:00 PM||Applied Technology: The I-O Psychologist as Customer||Ford AB||Symposium/Forum|
|316||4/28/2012||4:30 PM||5:30 PM||Closing Plenary Session||Elizabeth C||Special Events|
This second list contains posters that I will drop by if I have time.
|No.||Day||Start||End||Session Title|
|23-21||4/26/2012||11:30 AM||12:30 PM||Trust Development in Computer-Mediated Teams|
|72-1||4/26/2012||3:30 PM||4:30 PM||Work Environment Factors and Cyberloafing: A Follow-Up to Askew|
|87-8||4/26/2012||4:30 PM||5:30 PM||Communication in Virtual Teams: The Role of Emotional Intelligence|
|87-13||4/26/2012||4:30 PM||5:30 PM||Personality Predicts Acceptance of Electronic Performance Monitoring at Work|
|123-3||4/27/2012||10:30 AM||11:30 AM||Toward a Theory of Technology Embeddedness|
|160-2||4/27/2012||1:00 PM||2:00 PM||Perceptions of Internet Threats: Behavioral Intent to Click Again|
|176-24||4/27/2012||2:00 PM||3:00 PM||Commitment and Regulation in Web-Based Instruction|
|234-29||4/28/2012||9:00 AM||10:00 AM||Social Media’s Influence on Social Support, Efficacy, and Life Satisfaction|
|258-12||4/28/2012||11:30 AM||12:30 PM||The Effectiveness of Three Techniques for Detecting Faking|
|278-32||4/28/2012||12:30 PM||1:30 PM||Effects of Survey Progress Bars on Data Quality and Enjoyment|
|284-1||4/28/2012||1:30 PM||2:30 PM||A Validation Study of Tablet Use in a Medical Setting|