Can You Trust Self-Help Mental Health Information from the Internet?
In an upcoming issue of Cyberpsychology, Behavior and Social Networking, Grohol, Slimowicz, and Granda [1] examined the accuracy and trustworthiness of mental health information found on the Internet. This is critical because 8 of every 10 Internet users have searched for health information online, including 59% of the US population. They concluded that information found in early Google and Bing search results is generally accurate but low in readability. The following websites were identified as popular, generally reliable sources of mental health information: HelpGuide.org, Mayo Clinic, National Institute of Mental Health, Psych Central, Wikipedia, eMedicineHealth, MedicineNet, and WebMD.
Because 77% of Internet users only look at the first page of search results, the authors examined websites found on the first two pages of search results (20 websites) on Google and Bing for each of 11 major mental health conditions (e.g., anxiety, ADHD, depression), resulting in 440 total rated websites. This struck me as peculiar, though, since there should be substantial overlap between Google and Bing results – so most likely, some websites appear in the dataset multiple times.
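For what it's worth, the 440 figure is consistent with the search design as I read it – 20 results per engine, two engines, 11 conditions (a quick sketch of that arithmetic, assuming no deduplication across engines):

```python
conditions = 11           # major mental health conditions searched
search_engines = 2        # Google and Bing
results_per_engine = 20   # first two pages of results per search

total_rated = conditions * search_engines * results_per_engine
print(total_rated)  # 440
```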
After the sites were collected, two graduate student coders (the last two authors, presumably) made ratings on all 440 websites for quality of information presented, readability, commercial status, and the use of the HONCode badge. HONCode is a sort of seal of approval for health websites from an independent regulatory body.
They found, among other things:
- HONCode websites generally contained higher quality information than websites without HONCode certification.
- Commercial websites generally contained lower quality information than non-commercial websites.
- HONCode websites were generally harder to read (higher grade level) than websites without HONCode certification.
- Commercial websites were generally harder to read than non-commercial websites.
- 67.5% of websites were judged to be of good or better quality based upon a generally agreed-upon standard (above a 40 on the DISCERN scale).
Statistically, this study had two peculiar features. First, there is the likely non-independence problem described above (websites probably appear in the dataset multiple times). Second, most of their analyses were simple correlations, which have a very simple formula for degrees of freedom: n – 2. Thus, ignoring the independence problem, the degrees of freedom for every correlation they calculated should be 438, with the exception of the “Aims Achieved” correlations, which had a lower n for a technical reason and should therefore have 396 degrees of freedom. However, in the article, these degrees of freedom are reported as either 338 or 395. So something odd is going on here that is not explained. Perhaps duplicates were eliminated in these analyses, but this is not stated.
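The arithmetic behind that complaint is simple enough to check by hand (a minimal sketch; the n of 398 for the “Aims Achieved” analyses is inferred from the 396 degrees of freedom it should have):

```python
def correlation_df(n: int) -> int:
    """Degrees of freedom for a Pearson correlation on n paired observations."""
    return n - 2

# All 440 rated websites -> expected df for most correlations
print(correlation_df(440))  # 438

# "Aims Achieved" analyses, with their smaller inferred n
print(correlation_df(398))  # 396
```

Neither 438 nor 396 matches the 338 or 395 reported in the article, which is what makes the discrepancy worth flagging.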
One additional caveat: the owner of Psych Central (identified as one of the reliable sources of mental health information) was the first author of this study. I doubt he would fake information just to get his website mentioned in a journal article, so this probably isn’t much of a concern.
Overall, I am pleased to see that the most common resources available to Internet users on mental health are generally of reasonable quality, and I feel fairly confident in this finding. Like the authors, I am concerned that readability of the most popular websites is generally low. The mean grade level was 12, indicating that a high school education is required to understand the typical popular health website. That seems a bit high, especially considering younger people are most likely to turn to the web first as a source of information about their mental health. Hopefully this article will serve as a call to website operators to broaden their audience.
- Grohol, J. M., Slimowicz, J., & Granda, R. (2013). The quality of mental health information commonly searched for on the Internet. Cyberpsychology, Behavior and Social Networking. DOI: 10.1089/cyber.2013.0258 [↩]