
SIOP 2010: End of Day 3

2010 April 13
by Richard N. Landers

SIOP 2010 Coverage
General: Schedule Planning | Lament Over Wireless Coverage
Live-Blogs: Day 1 | Day 2 | Day 3
Daily Summaries: Day 1 | Day 2 | Day 3

Day 1 was all over the place, Day 2 had an excellent training track, and Day 3 was about research methods and statistics.

At 8:30 AM came Statistical and Methodological Myths and Urban Legends: Part V.  It was an interesting set of presentations from several well-known I/O psychologists, including Spector, Brannick, Landis, Cortina, and Aguinis.  One talk was on the overuse of control variables without any rational approach to choosing them.  This is common in survey research: researchers assume that because they are working with correlational data, they must throw some control variables in.  That reasoning is oversimplified; people tend to assume that statistical corrections will magically “fix” the causality problems inherent in a correlational research design, and this simply is not true.  Statistical corrections cannot compensate for a lack of modeling.
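
To make that concrete, here is a toy simulation (my own sketch, not anything presented in the session): an unmeasured confounder drives both the predictor and the outcome, and throwing in an arbitrary control variable does nothing, while the bias disappears only when the actual confounder is modeled.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

confounder = rng.normal(size=n)             # unmeasured common cause
x = confounder + rng.normal(size=n)         # predictor, driven partly by the confounder
y = 2.0 * confounder + rng.normal(size=n)   # outcome caused by the confounder, not by x
control = rng.normal(size=n)                # irrelevant variable thrown in "to be safe"

def slope_of_x(outcome, *predictors):
    """Least-squares coefficient on the first predictor (after an intercept)."""
    X = np.column_stack([np.ones(len(outcome)), *predictors])
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return beta[1]

print(slope_of_x(y, x))              # ~1.0: spurious "effect" of x on y
print(slope_of_x(y, x, control))     # still ~1.0: the arbitrary control fixes nothing
print(slope_of_x(y, x, confounder))  # ~0.0: only modeling the real confounder helps
```

The arbitrary control leaves the estimate exactly as biased as before; the correction only works when the variable you condition on is actually part of the causal story.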

We also had a talk about the merits of statistical significance testing. If you’re unaware of the debate, it centers on a very simple problem: statistical significance testing has a very specific purpose and gives you very specific information about your data, but most researchers over-interpret it. For example, a person finding a statistically significant difference with a t-test comparing Condition A and Condition B might want to conclude “Condition A produces different results from Condition B.” That is not correct. The correct conclusion is “if Condition A and Condition B were not different in the population, a difference as large as the one we observed would be improbable.”  But that doesn’t exactly roll off the tongue, so people tend to oversimplify.
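
For concreteness, here is what that looks like in code (a minimal sketch using scipy, not something from the talk itself):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
condition_a = rng.normal(loc=5.0, scale=2.0, size=50)
condition_b = rng.normal(loc=5.8, scale=2.0, size=50)

# Two-sample t-test comparing the condition means.
t_stat, p_value = stats.ttest_ind(condition_a, condition_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# What p_value actually means: if the two conditions had identical population means,
# data at least this far apart would occur with probability p_value.  It is NOT the
# probability that the conditions differ, and it says nothing about how large or
# important the difference is.
```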

There are two major camps on this topic: 1) we don’t have anything better, so use it anyway (the viewpoint of the speaker in this session), and 2) conclusions drawn from such tests are evil and should never be used under any circumstances.  Much like the conservatism/liberalism debate, the moderates (who hold that it has flaws but produces interesting findings when interpreted correctly) are not really the vocal ones.

There was also a talk on meta-analysis, which was pretty good.  Two major conclusions came out of that.  First, meta-analysis is not infallible, but neither are randomized controlled trials.  Each has its strengths and weaknesses, and interpreting them with those considerations in mind is central to correctly understanding research findings in any field.  Second, meta-analysis is often interpreted too liberally; it does not provide “the answer” in any field.  It is simply another piece to add to the puzzle.

After this, I attended Archiving Data: Pitfalls and Possibilities, which was a very interesting panel discussion on what should happen to data after publication in journals. Should we require open access to all data used to produce journal articles?  This is already a common model in the hard sciences – Science, for example, requires authors of the articles it publishes to produce their data on demand, and they often include it in an online article supplement. Does this make sense in psychology? We didn’t come to any firm conclusions, but a taskforce or I/O subcommittee may be in our future.

I next attended Verification of Unproctored Online Testing: Considerations and Research, which was a predominantly practitioner-oriented look at how to deal with cheating in unproctored Internet-based testing (UIT). There aren’t many good ways to deal with this problem, and the panel primarily discussed the approaches people have tried so far, chiefly in-person verification testing. But many questions remain. Do you use the in-person scores or the online scores? What do you do with people you think cheated (since you can never be 100% sure)? How do you include cheating-prevention measures without implying to your applicants that you don’t trust them?

After a break, I attended the closing plenary, a talk given by Dave Ulrich on the future of our profession. Engaging, but not too surprising.  One thing that stood out was a suggestion that we move from a model of knowledge warehouses, where information is stored quietly by academics in journals to be retrieved at will, to knowledge networks, where knowledge is actively shared and moved to where it is needed. This really resonated with me; one of the reasons I push for social media in I/O psychology is that I don’t think we can be truly relevant or even useful to the world of work without getting our hands dirty on the front lines.

The session ended with the handing of the gavel to our new SIOP president, ODU alum Ed Salas.

That’s it for day-to-day reactions. After letting the conference percolate for a few days, I’ll post summary thoughts.
