Is Age-Related Mental Decline Not As Bad As We Think?
It’s well supported in psychology that fluid intelligence (i.e., a person’s ability to solve novel, unfamiliar problems, remember large amounts of unfamiliar information, or otherwise flex their mental muscles) decreases with age. There are several theories as to why – perhaps our brains become less efficient over time as our neurons age, or perhaps we simply exercise our brains less as we get older, resulting in a sort of muscular atrophy. But a new paper by Ramscar, Hendrix, Shaoul, Milin, and Baayen in Topics in Cognitive Science, described in an article published by the New York Times, offers a new theory: as our brains store more information, it may become more difficult to retrieve specific memories. This contention is based on information theory, which explores how information processing occurs in computing systems (e.g., what is the most efficient way to find specific information in a database of general information?). As we get older, the theory goes, we have more memories to sort through, which causes us to perform more poorly on cognitive measures.
Several statements in the New York Times article on this topic intrigued me.
- The title: “The Older Mind May Just Be a Fuller Mind.”
- A notable assertion: “The new report will very likely add to a growing skepticism about how steep age-related decline really is. It goes without saying that many people remain disarmingly razor-witted well into their 90s; yet doubts about the average extent of the decline are rooted not in individual differences but in study methodology.”
- The conclusion: “It’s not that you’re slow. It’s that you know so much.”
The title is tentative, the second statement is an anecdote (several people do not a theory prove), and the last is simply meant to be memorable. Yet these statements, among others in the article, are provocative, giving the reader just enough information to maximize the punch of that final pithy conclusion. But is it a valid conclusion?
For that, I needed to turn to the original research article. This was a bit of an exercise, because the article is presumably written with an audience of information scientists in mind, and I am not an information scientist. But from my reading, I believe I was able to decode what they did. The basic premise was that the researchers conducted a series of simulations (i.e., not using human participants) of vocabulary development. In the first simulation, the researchers constructed a database of words of varying complexity. They then simulated 20 people learning those words over their lifespans, plotting the number of words learned at each age.
This is the first peculiar decision in this study if the goal is to generalize to humans, because it assumes that people add new words to their vocabulary at a predictable, curved rate. In one illustrative graph (p. 10), the mean vocabulary across the 20 individuals was plotted as a function of age – however, age was defined as the number of words that a person had been exposed to. The valid conclusion from this figure appears to be that exposure to a greater number of words leads to a larger vocabulary. So the actual finding is not surprising, but using the word “age” to describe the effect is a bit strange. For this to be valid as a description of age, our reading habits would need to be something like: I read random books containing random words and am exposed to new words at a regular rate. I suspect reality is not quite so random.
The researchers next made assumptions about how many words people are exposed to and how vocabulary grows. They assumed that adults read 85 words/min, 45 min/day, 100 days/year, and that adults encounter new words (ones not already in their vocabulary) at the rates established in the prior simulation. Given those assumptions, they created algorithms to read from word lists at that rate for the equivalent of a 21-year-old (1,500,000 words) and a 70-year-old (29,000,000 words). They then simulated how long retrieving words from each of those lists would take (relative to each other), finding that the 70-year-old would take longer to browse through her memory than the 21-year-old would.
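To get a feel for the magnitudes involved, the reported assumptions can be put into a quick back-of-the-envelope sketch. This is my own illustration of the scaling logic, not the authors’ actual code; in particular, the linear-search model of retrieval is my simplifying assumption, used only to show why a larger store of experience implies slower retrieval in this framework:

```python
# Back-of-the-envelope sketch of the paper's scaling argument (my
# reconstruction, not the authors' model).

# Reading-rate assumptions as reported in the paper:
WORDS_PER_MIN = 85
MIN_PER_DAY = 45
DAYS_PER_YEAR = 100

words_per_year = WORDS_PER_MIN * MIN_PER_DAY * DAYS_PER_YEAR
print(f"words read per year: {words_per_year:,}")  # 382,500

# Total exposure figures reported for the two simulated readers:
exposure_21 = 1_500_000    # tokens encountered by the 21-year-old
exposure_70 = 29_000_000   # tokens encountered by the 70-year-old

# If retrieval is naively modeled as scanning through all stored
# experience (my assumption), relative retrieval cost is just the
# ratio of exposures:
slowdown = exposure_70 / exposure_21
print(f"relative slowdown: {slowdown:.1f}x")  # ~19.3x
```

Even this crude version makes the shape of the argument clear: any retrieval process whose cost grows with the size of the store will make the more experienced simulated reader look “slower,” regardless of whether anything has degraded.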
These do not seem like wise assumptions to me. I would hazard a guess that I learn only a handful of new words in any given year, and my reading is restricted to things I’m interested in on the Internet (where the overall reading level is not terribly high) and in psychology journals. I strongly doubt that people reading newspapers and magazines in their free time (the primary sources of reading for our current seniors) encountered new words at a predictable, consistent rate – and I would guess that the new-word encounter rate of a non-academic is somewhat lower than mine. More accessible metrics (e.g., new words/year) are not presented in the article, so I have no way to convert the reported numbers into something more understandable. Again, the model assumes that people read random sources containing random words, constantly exposing themselves to new vocabulary regardless of age. It also assumes that the brain works exactly like a computer algorithm; yet we know that the human brain relies heavily on heuristics/shortcuts and makes a lot of mistakes, which does not match the researchers’ model.
There were also a few writing peculiarities. Take this, from p. 16:
To ensure older adults’ greater sensitivity to low-frequency words was not specific to this particular data set, a second empirical set of data was analyzed in the same way (Yap, Balota, Sibley, & Ratcliff, 2011; Fig. 4). All of the effects reported in the first analysis replicated successfully.
This is the entire discussion of this apparently independent follow-up study – no explanation of what they did in that follow-up, no statistics reported for the “comparisons” actually conducted, no definition of a “successful replication,” and no list of the effects they tested. The entire “methods” and “results” sections associated with this statement were, in fact, a single figure. Maybe that’s normal in cognitive science, but it would never fly in a decent psychology journal. We have no way to know what they did, why they did it, what was replicated, what went wrong, etc. The ability to replicate a study from its write-up is a requirement of good science, and that element is missing from this part of the paper. It’s not catastrophic – this replication isn’t really necessary to make their point – but it adds yet another “that’s weird” to my tally as I read through.
So despite being based upon faulty assumptions about human learning as well as being limited entirely to the growth of vocabulary over the life-span, the second simulation described above seems to be the basis for the NYT article conclusion that memory in general does not decline with age. This conclusion is not justified. Contrary to the NYT statement, I am no more skeptical of age-related decline than I was before reading either the NYT piece or the original research article.
To me, both the NYT and original article demonstrate only one thing clearly: when applying information processing theory to understanding human cognition, one must be careful to remember the “human”.

Footnotes:
- Ramscar, M., Hendrix, P., Shaoul, C., Milin, P., & Baayen, H. (2014). The myth of cognitive decline: Non-linear dynamics of lifelong learning. Topics in Cognitive Science, 6, 5–42. doi:10.1111/tops.12078