What researchers mean by... meta-analysis

When making decisions that affect many people, policy-makers, clinicians and other decision-makers may turn to research to help inform their choices. A single study on a topic does provide some information. However, confidence in a decision grows when it draws on all of the available research.

This is where a meta-analysis can help. A meta-analysis is a type of systematic review. In a meta-analysis, findings from many studies are integrated or “added” in a formal statistical analysis to create one large overview.

The steps of a meta-analysis are:

  • define a narrow, focused question that the meta-analysis will seek to answer.
  • define and follow rigorous criteria for identifying and selecting studies to include in the analysis.
  • collect the data from these studies, and convert estimates or results into a common measure across studies, if possible.
  • combine and analyze the data, and develop conclusions to answer the question.

In general, a meta-analysis aims to answer the questions: What is the effect of a program or treatment, based on all the relevant research to date? How large is the effect?

Meta-analysis in practice

Let's say you wanted to know if rest breaks reduced the rate of low-back pain in a particular work setting. If you gathered all the research on rest breaks and low-back pain, you might find hundreds of research articles.

You may also find studies so small that you wouldn't be confident about the findings. Various articles might seem to contradict each other, with some showing that rest breaks reduced low-back pain rates, and others finding they had no effect.

As explained earlier, in a meta-analysis these findings or outcomes would be statistically combined to provide an overall answer. But first, they need to be converted into a common measure before any conclusions can be reached, and this can be difficult. With low-back pain, different studies might measure back pain in workers using different scales or questionnaires. Some additional calculations would be needed to achieve a common measure.

In some cases, outcomes are routinely based on a common measure. For example, in cancer research, one widely used outcome is patients' survival rates five years after diagnosis. When many different studies use this common outcome, their results are easier to combine.

For a meta-analysis on rest breaks and back pain, the reviewer might take study findings using different low-back pain scales and calculate a standard “effect” for each study. This “effect” becomes the common measure. By statistically combining the effects from all studies, reviewers may see if there is an overall effect from rest breaks, and how large the effect is. However, the reality is that different studies on a topic may not even measure the same outcome, and there might not be a way to make all the results comparable.
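The calculation described above can be sketched in a few lines of code. This is a simplified illustration with invented numbers, not an actual meta-analysis: each hypothetical study reports mean low-back pain scores for a rest-break group and a control group on its own scale, the standardized mean difference (Cohen's d) converts each result into a common "effect", and an inverse-variance weighted average pools them, as in a basic fixed-effect meta-analysis.

```python
import math

# Hypothetical study results: (mean_treatment, mean_control, pooled_sd,
# n_treatment, n_control). Each study uses its own pain scale.
studies = [
    (3.1, 4.0, 2.0, 40, 40),     # pain on a 0-10 scale
    (42.0, 48.0, 15.0, 25, 25),  # pain on a 0-100 questionnaire
    (1.8, 2.1, 1.5, 100, 100),   # a larger study, small apparent effect
]

effects = []
weights = []
for m_t, m_c, sd, n_t, n_c in studies:
    d = (m_t - m_c) / sd  # standardized mean difference: the common measure
    # Approximate variance of d; smaller variance = more weight
    var = (n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c))
    effects.append(d)
    weights.append(1 / var)

# Inverse-variance weighted average: the pooled overall effect
pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
se = math.sqrt(1 / sum(weights))
print(f"pooled effect d = {pooled:.2f} "
      f"(95% CI {pooled - 1.96*se:.2f} to {pooled + 1.96*se:.2f})")
```

Here a negative d means less pain in the rest-break group. Note that real meta-analyses must also choose between fixed-effect and random-effects models and check how consistent the studies are with one another, which this sketch omits.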

Let's now compare how conclusions are expressed in a meta-analysis versus other systematic reviews. In the example above, a systematic review may show that six out of eight high-quality studies found that rest breaks reduce the rate of low-back pain. Using a meta-analysis, which integrates the effects from all the studies, you might find that the numerical size of this effect is actually very small.

Benefits of meta-analysis

A meta-analysis has many benefits. By combining results into one large study, it reduces the time and energy that decision-makers spend looking at research.

But the real benefit lies in the way a meta-analysis can make sense of inconclusive and conflicting findings from the original studies. By statistically combining smaller studies, essentially turning them into one big study, a meta-analysis may reveal an effect that no single study could show on its own. It can also increase the precision of the results, again because it effectively increases the size of the study.

By helping to bring into focus the sometimes blurry picture developing from the abundance of research evidence on any given topic, a meta-analysis is a very effective type of review.

* This example is fictional.

Source: At Work, Issue 48, Spring 2007: Institute for Work & Health, Toronto