In the past, employee monitoring was relatively straightforward. You either had cameras or you didn’t. You either required employees to check in with their supervisors or you let them punch their own cards and move to their shifts.
These days, things are a little more complicated. Northern Arizona University is implementing a system that tracks class attendance by scanning students as they walk into a classroom. The article is a little light on detail, but from what I can piece together, student ID cards will be embedded with an RFID chip (similar to the technology used in current US passports) that can be read from a distance. Thus, when you walk in the door, a sensor detects your student ID and automatically registers your attendance.
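The logic is simple enough to sketch. Here's a minimal, hypothetical version of the "scan on entry" idea – every name below (`AttendanceLog`, `register_scan`, the sample IDs) is invented for illustration, and none of it is based on NAU's actual system:

```python
from datetime import datetime

class AttendanceLog:
    """Records the first time each student ID is scanned at a classroom door."""

    def __init__(self):
        self.records = {}  # student_id -> timestamp of first scan

    def register_scan(self, student_id, when=None):
        when = when or datetime.now()
        # Only the first scan counts; stepping out and back in
        # shouldn't create a duplicate attendance record.
        if student_id not in self.records:
            self.records[student_id] = when
        return self.records[student_id]

    def present(self):
        return sorted(self.records)

log = AttendanceLog()
log.register_scan("NAU-1001")
log.register_scan("NAU-1002")
log.register_scan("NAU-1001")  # re-entry: ignored, not double-counted
```

Note that the card holder does nothing at all here – the reader fires `register_scan` on its own, which is exactly what makes this kind of passive tracking so frictionless (and so easy to extend).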
How long until this tech enters the workforce?
Remember, if your location can be tracked, that opens up a lot more options for monitoring. Not only will your employer know when you got in this morning, but also how many times you got up from your chair, how many times you went to grab some coffee, how long to the second you spent on break, when you typically go to the restroom… the list goes on and on.
There’s some research on employee monitoring in industrial/organizational psychology, but it’s mostly focused on e-mail monitoring and the use of cameras. What happens when employers can quickly and easily record everything you do, down to the total number of minutes you spend leaning back in your chair to relax? At what precise point do employers have an ethical responsibility to respect the privacy of their employees?
And on the other hand, what an awesomely powerful source of data for research!
One of the most prominent criticisms of virtual worlds (like Second Life) is that we’ve already experienced a virtual worlds bubble – a period of high enthusiasm, followed by market over-saturation, followed by collapse. Most of the bubble happened in 2006: for-profits and non-profits alike (from IBM to NASA) spent a great deal of money to create Second Life presences for their organizations. Reuters even opened a Second Life office. Then what happened?
Well, nothing, and that was the problem. Although everyone was talking about virtual worlds, relatively few people were using them, especially compared to the number of users in modern social gaming. What went wrong?
Some people in the industry claim it’s because virtual worlds were just another over-hyped tech that no one ever needed in the first place. But I think that’s a little too simple. There were a number of barriers to the success of virtual worlds at that time:
- No one knew why they were deploying in a virtual world. Most organizations built Second Life campuses because they were told “it’s the next big thing” without a clear reason for doing so. Were they launching a new training initiative where inexpensive scripted simulation would be useful? Were they creating a targeted marketing campaign to get tech-savvy users more familiar with their products? No – none of these things. They were building virtual worlds because they were told to. And that’s not a recipe for success.
- They expected the users to come to them. Many organizations believed that people using virtual worlds were some sort of vast, untapped market of people that they didn’t already have access to. By moving into Second Life, they would not only gain this market but also the vast “future” market of users. But there was never any evidence that either side of this assumption was true.
- They didn’t have a plan. As with any tool (especially technology), you need a plan before you start using it. Do you walk up to a blank wall in your home with a hammer and assume it’ll all work out by the time you’re done? Certainly not! You have a plan, schematics, and a clear path to the finish line.
- The barriers to entry were too high. Second Life requires you to download client software, create an account, and wait for the 3D engine to load before you can even make a judgment as to whether it is enjoyable or worthwhile. Farmville, on the other hand, is the most popular game/application on Facebook. There are plenty of people who play Farmville once and never play again. But there are millions of players who tried it on a whim and got hooked. They were only able to do this because the barriers to entry were low – a couple of clicks, and you could try it out.
These four problems together spelled the death of virtual world popularity in 2006 – the “bubble” popped and Reuters closed its Second Life office. Since the bubble has burst, does that mean virtual worlds are no longer worth our attention?
Absolutely not! This is the lie perpetuated by those always looking for the next tech. “Virtual worlds have passed,” they say. This is absolutely false. The bubble of artificial popularity may have burst, but virtual worlds themselves still have a great deal of promise. Just because a few carpenters destroyed a house by using tools they didn’t really understand doesn’t mean that the tools themselves have no value.
The key to virtual worlds is addressing the four problems above. As far as general popularity among the masses, I think Problem #4 is probably the biggest. Until you can run Second Life with zero lag and fully scripted/customizable experiences within a browser window, Second Life will never gain mainstream acceptance.
But I’m not interested in mainstream acceptance – I’m interested in whether or not virtual worlds can be used effectively in organizations, especially in regards to training, and that is a very different question. Consider:
- We know why we want to deploy in a virtual world. We want to use virtual worlds to take advantage of the multi-user environment for training purposes. We can create scripted encounters far more effectively and at a much lower cost than if we were building a physical facility, which would involve training people to run the facility on top of building costs.
- We’re bringing the users to the VW, not the other way around. If we replace traditional training interventions with VW interventions, it will be because we brought the virtual world to the users – installing VW software on office systems and training employees to navigate the virtual space.
- We have a plan. The key to successful virtual world deployment is the identification of training goals that can be best addressed in a VW. If you’re just going to give a PowerPoint presentation, does it matter if you put it in a VW or in the conference room? Absolutely not. But what if you want to train users on a dangerous or costly machine? What if you want to simulate customer service or leadership interactions with people of different races or sexes as part of training on discrimination? These things are simple in a VW.
- The barriers to entry should be minimized by the organization. By making VWs part of a company-supported training program, the organization takes on the costs of deploying the software and training its users. Any barriers to entry are mitigated by appropriate training and support from the organization.
We may be on the opposite side of a virtual worlds bubble, but that has absolutely no impact on what should be our core concern: can virtual worlds be used to minimize costs or improve outcomes for workers, and how do we do it? This question has not yet been fully answered, but there’s no reason to stop trying now.
College students everywhere, heed this metaphor:
Imagine you’re a server in some sort of fern-filled bar/restaurant – let’s say Applebee’s. You’re serving 20 people in a single group, all by yourself. You know that all 20 of these people are regulars at Applebee’s – they come in all the time. In fact, this spring, they’ve even been requesting you as their server every time they come in!
Today, all 20 order fajitas. Now, fajitas are a fairly difficult order because they have to be timed carefully – they must still be sizzling when they get to the table. So you carefully watch which orders come up, and the moment the orders are filled, you make several quick trips across the restaurant to deliver all 60 pieces of the meal (meat skillet, sides/toppings plate, and tortilla container for each of the 20).
Mission accomplished – or so you think. As the meal progresses, you notice 10 of the people aren’t eating their food! When you ask one of them why, they just shrug and go back to chatting with their friends. At the end of the meal, after everyone leaves, you see that those 10 didn’t even touch their fajitas.
You’re annoyed. But do you have the right to be? They still paid for their food, and thus the service you provided – they simply didn’t eat anything. Why would they do that, you wonder? Why order food you don’t want to eat? Why did I even bother bringing the plates out if they weren’t going to eat it? Couldn’t I have just left the food in the kitchen and taken their money?
This is the struggle that all professors go through. When you text or read Facebook in class, you’re ignoring the fresh, sizzling, heaping portion of knowledge that we are trying to serve you. We know the knowledge is delicious, because we’ve spent our whole lives gathering it – we love it! So why don’t you? We go to a great deal of effort to craft the perfect knowledge fajita for you to enjoy, and then you just ignore it!
And from a practical standpoint, why would you pay to take classes when you don’t intend to learn anything? And if you did pay for classes where you didn’t want to learn, why show up in class at all? If you don’t want any value for your money, why not just send your university a check and stay home? If you’re not paying attention in class, you’re not learning anything in class – just stay home and show up on exam days.
And why did I create this metaphor? I’m teaching undergraduates this summer for the first time in about a year, which has made me reflect on what new course policies I’ll be setting. Should I have a rule about no texting? Should I have a rule about no laptops? Should I cram the fajita down their throats?
I cringe at the thought.
And you might wonder why I hesitate to institute policies forbidding the use of wireless devices when plenty of faculty apparently have no problem implementing them. The reason is simple: it’s a knee-jerk reaction. “Students aren’t paying attention, and doggone-it, I won’t have that in my classroom!” Personally, I don’t really care that students are texting and using their laptops in my classroom; they have free will, after all. If they don’t want to partake of my delicious knowledge fajita, they don’t need to. I’m more interested in convincing them that they should want to get value for their money – that they should want to learn and become better people.
I’m just sad they won’t eat the fajita on their own.