If you want to stay current with your research area, you need to read new journal articles as they come out. In the past, this was much more difficult; you needed to subscribe to journals and have them delivered to your office or home, or trudge over to your university library. Archaic!
These days, you can often be notified of and read new journal articles as they are placed in press. This puts you at the very forefront of new research; you can read articles after peer review, the moment they are made available to the public, often pre-publication. To do this, you need only two things:
- A news reader (sometimes called a news aggregator) that processes RSS feeds
- Each journal’s RSS feed
RSS stands for Really Simple Syndication. It is a technique used by many websites (not just journals) to provide plain, unformatted, transportable content, packaged as an RSS feed, to anyone who might want it. For example, this site’s RSS feed is https://neoacademic.com/feed, which is also linked at the top right of this page. It contains the content of every article I post here, as it is released, and nothing more.
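Under the hood, an RSS feed is just an XML document: one `<channel>` describing the site, containing one `<item>` per article. Here is a minimal sketch in Python's standard library that pulls the article titles and links out of a feed; the feed content itself is invented for illustration (a real journal feed would be fetched from its URL).

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical RSS 2.0 document of the kind a journal
# publisher might serve; real feeds contain one <item> per new article.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Journal of Example Studies</title>
    <link>https://example.com/journal</link>
    <item>
      <title>A New Finding</title>
      <link>https://example.com/journal/article-1</link>
      <pubDate>Mon, 05 Sep 2011 00:00:00 GMT</pubDate>
    </item>
    <item>
      <title>A Replication Attempt</title>
      <link>https://example.com/journal/article-2</link>
      <pubDate>Tue, 06 Sep 2011 00:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>"""

def list_articles(feed_xml):
    """Return (title, link) pairs for every <item> in an RSS feed."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in list_articles(SAMPLE_FEED):
    print(title, "->", link)
```

A news reader does essentially this for every feed you subscribe to, remembering which items it has already shown you.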
That way, if you have an RSS feed reader, you can go to a single website (or run a single program) and view the RSS feeds from all of your favorite websites (and journals!) in one place. This eliminates the need to visit a large number of different websites each day and hunt for new content. The news reader remembers what you’ve already read; all you see is new content.
So to enjoy the wide world of journal RSS feeds, you need an RSS feed reader – and there are many to choose from. To start, I suggest Google Reader, which is free and quite easy to use. If you already have a Google account (and who doesn’t?), just click on that link to see your very own Google Reader. The first time you use it, it will be a bit empty. That’s because you need to fill it up with RSS feeds. I suggest starting with this one. If you’re using a modern web browser, adding RSS feeds is very easy – just click on the link to a feed and follow the prompts.
Once you get comfortable with RSS, you should think about the easiest way to get your RSS content. Do you want to stick with the cloud (i.e. Google) or use a standalone program where you can more easily save your favorite RSS entries for later? If you already use an e-mail management program, like Outlook, there are often RSS readers built-in so that you can read your RSS with your e-mail. If not, there are plenty of other options.
Finding RSS feeds can be quite easy (just look for the little orange RSS symbol, like the one on this page) or quite difficult (involving hunting through many layers of badly formatted publishers’ web pages). A well-crafted Google search is, as always, the best solution. Simply search Google for “JournalName rss” and you are likely to find what you’re looking for – at least, as long as the journal publisher has decided to release an RSS feed.
Here are a couple of lists of RSS feeds to get you started (thanks to Jeremy Anglim for providing me a starting point for the I/O list):
Related to I/O Psychology
- Administrative Science Quarterly
- Human Performance
- Human Resource Development Quarterly
- Human Resource Management
- In-press articles from the Academy of Management (includes AMJ, AMR, AMLE, and AMP)
- Industrial and Organizational Psychology: Perspectives on Science and Practice
- International Journal of Selection and Assessment
- International Journal of Training and Development
- Journal of Applied Psychology
- Journal of Business and Psychology
- Journal of Management
- Journal of Managerial Psychology
- Journal of Occupational and Organizational Psychology
- Journal of Organizational Behavior
- Journal of Personality and Social Psychology
- Organizational Behavior and Human Decision Processes
- Organizational Research Methods
- Personnel Psychology
You can also subscribe to the Society for Industrial and Organizational Psychology’s blog’s RSS feeds.
Related to Psychology, Games, Technology, and Education
- Computers & Education
- Computers in Human Behavior
- Cyberpsychology, Behavior and Social Networking
- Game Studies
- Games and Culture
- International Journal of Gaming and Computer-Mediated Simulations
- International Journal of Game-based Learning
- Journal of Computer-Mediated Communication
- Journal of Media Psychology
- Simulation and Gaming
And finally, consider subscribing to a few I/O Psychology blogs!
In an upcoming issue of Cyberpsychology, Behavior and Social Networking, Maslowska, van den Putte, and Smit explore the value of personalizing mass e-mail newsletters. They found that personalizing a newsletter improved readers’ reactions to the content of that newsletter.
Research on personalization, to this point, has been mixed. The majority of studies have found personalization to improve outcomes, but a few have found negative effects. Hypothesized effects vary, but generally fall along the lines of persuasion; the article lists enhanced attention, memory, intentions and behaviors as potential outcomes.
The personalization manipulation itself was surprisingly simple: the authors added the recipient’s name in three places in the newsletter – a relatively low-effort change. They randomly assigned Dutch undergraduates to one of two conditions: personalized or generic e-mails promoting the “University Sports Centre.” The newsletter was then distributed; at the bottom, readers could follow an optional link to a survey, with participants compensated by a raffle entry. This resulted in 105 cases in the final dataset.
The study is not terribly clear about response rates. My suspicion is that the newsletter was sent to a much larger number of students and only 109 responded (4 were already Sports Centre users and were discarded). This probably doesn’t matter much, as the missingness was likely random – I find it unlikely that the probability of responding to a newsletter survey for a raffle entry is correlated with the effect personalization might have on persuasion.
In comparing the two conditions with independent-samples t-tests, the authors found an effect on the evaluation of the newsletter but not on any attitudinal or behavioral outcomes. In other words, those with personalized newsletters liked the newsletters more, but didn’t like the subject of those newsletters (i.e., the Sports Centre) any more. The effect was rather small (eta squared = .05).
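For readers who haven't run this analysis themselves, the comparison is straightforward to sketch: a pooled-variance t statistic, from which eta squared (the proportion of outcome variance explained by condition) follows directly. The ratings below are invented for illustration; the study's raw data are not public.

```python
import statistics

def independent_t(group_a, group_b):
    """Pooled-variance independent-samples t statistic and eta squared."""
    n1, n2 = len(group_a), len(group_b)
    m1, m2 = statistics.mean(group_a), statistics.mean(group_b)
    v1, v2 = statistics.variance(group_a), statistics.variance(group_b)
    df = n1 + n2 - 2
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / df
    t = (m1 - m2) / (pooled * (1 / n1 + 1 / n2)) ** 0.5
    eta_sq = t ** 2 / (t ** 2 + df)  # proportion of variance explained
    return t, eta_sq

# Hypothetical newsletter-evaluation ratings (1-7 scale), made up for this sketch
personalized = [5.1, 4.8, 5.5, 4.9, 5.2, 5.0]
generic      = [4.6, 4.4, 5.0, 4.3, 4.7, 4.5]
t, eta_sq = independent_t(personalized, generic)
```

Note that eta squared here is computed from t and its degrees of freedom, a standard conversion for two-group designs.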
The authors also examined a variety of two-way interactions, but approached the question in a non-standard way. Typically, we would use hierarchical multiple regression to determine whether interactive effects provide incremental prediction above and beyond main effects. Instead, the authors ran some sort of convoluted analysis of those interactions using ordinary multiple regression, followed by simple slopes comparisons. It’s hard to tease apart, but it seems as if they examined the effect of the interaction term alone on the outcome, which, if true, would be blatantly incorrect. But it is difficult to tell.
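The standard approach I have in mind can be sketched concretely: fit a model with main effects only, then add the product term, and ask how much R² improves. The data below are simulated (the predictor names and effect sizes are my invention, not the article's), and the least-squares fit is done by hand via the normal equations to keep the sketch self-contained.

```python
import random

def ols_r2(X, y):
    """R^2 from ordinary least squares, solving the normal equations X'Xb = X'y."""
    k = len(X[0])
    xtx = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, k):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    b = [0.0] * k
    for r in range(k - 1, -1, -1):
        b[r] = (xty[r] - sum(xtx[r][c] * b[c] for c in range(r + 1, k))) / xtx[r][r]
    yhat = [sum(bi * xi for bi, xi in zip(b, row)) for row in X]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

random.seed(0)
n = 200
x1 = [random.gauss(0, 1) for _ in range(n)]  # e.g., personalization (coded)
x2 = [random.gauss(0, 1) for _ in range(n)]  # e.g., a personal characteristic
# Simulated outcome with main effects plus a genuine interaction
y = [0.4 * a + 0.3 * b + 0.5 * a * b + random.gauss(0, 1)
     for a, b in zip(x1, x2)]

step1 = [[1.0, a, b] for a, b in zip(x1, x2)]       # Step 1: main effects only
step2 = [row + [row[1] * row[2]] for row in step1]  # Step 2: add the product term
r2_main = ols_r2(step1, y)
r2_full = ols_r2(step2, y)
delta_r2 = r2_full - r2_main  # incremental variance explained by the interaction
```

The point of the hierarchy is that `delta_r2` isolates what the interaction adds beyond the main effects; testing the product term alone, without the main effects in the model, conflates the two.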
So what can we reasonably conclude from this article? Personalizing a newsletter e-mail does seem to make the recipient like the newsletter somewhat more. That might have other long-term effects; for example, recipients might be more likely to stay subscribed to the newsletter. But personalization does not seem to affect how recipients act on the information the newsletter contains – at least when it comes to advertising on-campus resources to undergraduates. These longer-term implications are, of course, not tested in this article.
Would I personalize a newsletter? Sure! There seem to be few if any negative effects, and it’s a very easy change to make. Mail merges are not that complicated. So even if the positive effect is limited, any benefit at all makes the cost-benefit ratio quite good.
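To show just how uncomplicated a mail merge is, here is a minimal sketch using Python's standard library; the template text and recipient names are invented, and a real mailing would pull recipients from a list and hand each message to a mail-sending step.

```python
from string import Template

# Hypothetical newsletter template with the recipient's name in three places,
# mirroring the study's manipulation
NEWSLETTER = Template(
    "Dear $name,\n\n"
    "$name, here is what is new at the University Sports Centre this week.\n\n"
    "We hope to see you soon, $name!"
)

recipients = ["Anna", "Bram", "Carla"]  # made-up names for illustration
messages = [NEWSLETTER.substitute(name=r) for r in recipients]
print(messages[0])
```

That is the entire manipulation: a few dollars' worth of programmer time for a measurable bump in how well the newsletter is received.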
- Maslowska, E., van den Putte, B., & Smit, E. (2011). The effectiveness of personalized e-mail newsletters and the role of personal characteristics. Cyberpsychology, Behavior, and Social Networking. DOI: 10.1089/cyber.2011.0050
In his 2008 book, Outliers, Malcolm Gladwell criticizes the traditional “top-down” selection method used by organizations hiring on the basis of cognitive ability (colloquially referred to as “intelligence”). By his argument, there is a certain level at which additional cognitive ability simply is not valuable.
It is certainly an attractive idea; Gladwell argues that it is historical advantages that bias scores on employee selection devices. John is lucky to come from a good family, happens to put himself in a position to take advantage of the opportunities that present themselves, and gets hired to a solid position more by luck than by skill. By this logic, there are many qualified applicants; some were simply more lucky than others, and hiring from the top-down merely punishes the unlucky ones without any real benefit to the organization. And if the success of your highest performing employees is due to luck, there’s not much reason to worry about selecting the best of the best, right?
In a recent article in Psychological Science, Arneson, Sackett, and Beatty call Gladwell’s approach the good-enough hypothesis and the traditional approach the more-is-better hypothesis. They test the relative value of these hypotheses by comparing cognitive ability and performance in four large datasets containing a total of 170,286 people:
- Project A. In the late 1980s, US soldiers took part in a large-scale project designed to link various selection devices to soldier performance; among these devices was the ASVAB, which contains a strong cognitive ability component. Actual ratings of job performance were standardized and compared with cognitive ability scores derived from the ASVAB.
- College Board SAT. This dataset contained SAT scores and GPA data. GPA was standardized within schools and compared with SAT performance.
- Project TALENT. The Department of Health, Education and Welfare (before Health and Education split in the late 1970s) ran a longitudinal study of high school seniors, starting in 1960, tracking a large group of them through their college GPAs. These GPAs were compared with a composite ability score collected in the study.
- National Education Longitudinal Study of the Class of 1988. Standardized verbal and mathematics ability scores of eighth graders were compared with their later college GPAs.
To determine which hypothesis better described the data across these datasets, the authors did the following:
- Checked for ceiling effects (to ensure that the data toward the upper end of ability was not clustered)
- Used a lowess smoother on the scatterplot of the ability-performance relationship to visualize the relationship at all levels of ability
- Used a power polynomial test to identify curvilinearity in the relationship
- Used a second power polynomial test on only the high ability folks (z > 1)
- Compared regression slopes (b) of performance on ability at four segments of the relationship: minimum to z = -1, -1 < z < 0, 0 < z < 1, and 1 to maximum
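The last of these checks is easy to sketch: compute the regression slope of performance on ability separately within each z-score band. The simulated data below are mine, not the article's, and deliberately build in a single linear relationship, so all four segment slopes should hover around the same value; the article's striking result was that the observed slopes instead *increased* across segments.

```python
import random
import statistics

def slope(xs, ys):
    """OLS slope of y on x: cov(x, y) / var(x), both as raw sums."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

random.seed(1)
ability = [random.gauss(0, 1) for _ in range(5000)]  # standardized ability (z)
# Simulated performance: one linear relation plus noise (purely illustrative)
perf = [0.5 * a + random.gauss(0, 1) for a in ability]

segments = [(-float("inf"), -1), (-1, 0), (0, 1), (1, float("inf"))]
seg_slopes = []
for lo, hi in segments:
    xs = [a for a in ability if lo <= a < hi]
    ys = [p for a, p in zip(ability, perf) if lo <= a < hi]
    b = slope(xs, ys)
    seg_slopes.append(b)
    print(f"z in [{lo}, {hi}): b = {b:.2f}")
```

A pattern of roughly equal slopes would favor neither hypothesis over a plain linear model; a slope that flattens toward zero above some cutoff would support good-enough; a slope that grows is what the authors actually found.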
Across analysis types, it became clear that the more-is-better hypothesis better described the data. More surprisingly, ability appeared to be more important at the high end than at the low end. In other words, the relationship between ability and performance gets stronger among those with high ability. Gladwell could not have been more inaccurate in describing these relationships.
Interestingly, Gladwell may have previously had a more anti-selection bent in his book that was perhaps edited down later after he became aware of the massive research literature on hiring using psychological measurement.
- Arneson, J., Sackett, P., & Beatty, A. (2011). Ability-performance relationships in education and employment settings: Critical tests of the more-is-better and the good-enough hypotheses. Psychological Science. DOI: 10.1177/0956797611417004
