# Meta-Analyses for Easter

Previous Post: | Training in a Knowledge-based Economy |

Next Post: | Unexpected Deadlines |

The tables for my dissertation are done. I ran a total of 93 individual meta-analyses and produced 40 pages of tables over the last two days. My fingers hurt. It’s a lot like that feeling you get after finishing a long in-class essay exam and your hand naturally tries to cave in on itself. I imagine the Easter bunnies above rubbing their fur on my poor, broken digits.

For those of you who don’t know, a meta-analysis (in this case) is a quantitative summary of findings across many studies, weighted by sample size. So, for example, say Study 1 examined 50 people and found a correlation of 0.5, and Study 2 examined 300 people and found a correlation of 0.2. Meta-analysis takes both of those findings and produces a “weighted average” correlation, which is nudged closer to the studies that had lots of people (because more people = less random statistical variation).
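That weighted average can be sketched in a few lines of Python. This is a bare-bones, sample-size-weighted mean only, the simplest possible illustration; published meta-analyses typically use more sophisticated weights (e.g., inverse-variance weighting, often after transforming the correlations), and the function name here is just for the example:

```python
def weighted_mean_r(studies):
    """Sample-size-weighted mean correlation.

    studies: list of (n, r) pairs, one per study.
    """
    total_n = sum(n for n, _ in studies)
    return sum(n * r for n, r in studies) / total_n

# Study 1: N=50, r=0.5; Study 2: N=300, r=0.2
print(round(weighted_mean_r([(50, 0.5), (300, 0.2)]), 3))  # → 0.243
```

Note how the result (0.243) lands much closer to 0.2 than to 0.5: the 300-person study dominates the 50-person study.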

You might think, “Well why not just look at each study individually and see what they did?” The answer is – well, that’s what people usually do, which is called, simply, a literature review. The problem is that literature reviews often conflict. Multiple reviewers might interpret the importance of a single study differently. Meta-analysis is a smidge more objective.

I say a smidge because there are still many subjective decisions made in how to create a meta-analysis. In fact, for my dissertation, I am re-creating and expanding upon a published meta-analysis because I disagree with some of the decisions the previous author made. I think the implications of those decisions might have led her to conclude the opposite of what she should have.

And so far – it looks like I’m right. I still need to stare at my 40 tables to come up with a full list of conclusions I’d like to make, and augment those 40 tables with additional, explanatory tables, but for the most part, my data analysis is complete. At least until my adviser rips it apart.



It also results in a more appropriate error rate, right? Like doing one encompassing ANOVA rather than 50 individual t-tests.

Well, kind of. I wouldn’t personally recommend using significance testing at all in a meta-analytic context… it sort of defeats its purpose.

But having said that, meta-analysis helps in terms of error because you’re able to get around the sample size limitations of individual studies. Say, for example, you have a study comparing traditional and web-based learning outcomes with N=20 and you’re expecting an effect size of d = 0.2. It is extremely unlikely that you would find a statistically significant difference with that study, even if that difference really exists, because the sample size is so low. So, in that study, the authors would likely (falsely) conclude that there’s no difference between the two, when in reality, the difference is simply too small to detect using that sample size.
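The power problem in that hypothetical study can be shown with a quick simulation. This sketch is not from the original post: it repeatedly draws two groups of 10 (N = 20 total) with a true standardized difference of d = 0.2, runs a pooled two-sample t-test on each draw, and counts how often the test comes out significant. The critical value 2.101 is the standard two-tailed t cutoff for df = 18 at α = .05:

```python
import math
import random

def t_stat(a, b):
    """Pooled two-sample t statistic for two lists of scores."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / (sp * math.sqrt(1 / na + 1 / nb))

random.seed(1)
d, n, reps = 0.2, 10, 5000   # true effect d = 0.2, 10 per group (N = 20)
crit = 2.101                 # two-tailed critical t, alpha = .05, df = 18

hits = sum(
    abs(t_stat([random.gauss(d, 1) for _ in range(n)],
               [random.gauss(0, 1) for _ in range(n)])) > crit
    for _ in range(reps)
)
print(f"estimated power: {hits / reps:.2f}")
```

The estimated power comes out well under 10%, meaning the vast majority of such studies would “find nothing” even though the effect is really there, which is exactly the false conclusion described above.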

So therein lies the difference. Individual studies ask the question, “Is there a difference?” (which relies on a lot of tenuous assumptions about the null hypothesis) while meta-analysis asks the question, “How big is the difference?”

There are also some differences in terms of your level of analysis – ANOVA and t-tests both operate at the case level (comparing outcomes for groups of people), while meta-analysis operates at the study level (comparing outcomes for groups of studies of people).

That might have been more information than you were looking for.