In an earlier column, we touched on bias as a flaw in the way a study is designed or carried out, the kind of flaw that systematically skews the findings and makes certain outcomes more likely than others. When reading or reviewing a study, researchers have to be on the alert for biases. There are many different types, and they can arise at any phase of a study. To add to the examples covered in that earlier column, here are two more, both having to do with how data are collected.
Recall bias
Sometimes studies rely on participants' recollection of something that took place in the past. In these types of studies, the potential for faulty recall is always a concern. Whether it's a nutrition study that asks participants to write down everything they ate a week ago, or a child development survey that asks respondents whether their parents spanked them when they were young, the chances are high that participants will not perfectly remember what actually occurred.
However, imperfect recall only becomes a bias when it systematically skews the results in one direction. Here's an example. Suppose you're doing research on the effects of a chemical spill that took place two summers ago. You want to learn the spill's effects on the health of the people who lived nearby. For study participants, you would recruit people living within a certain distance of the spill. You would ask them about their exposure to the spill at the time (by breathing in the toxic fumes, for example), and about their health both then and now.
As you would expect, people who developed health problems after the chemical spill might be more likely to remember their exposure to the spill in great detail. That might especially be the case if they've already concluded that the spill was the cause of their health problems. In contrast, people who did not go on to develop health problems might be more likely to downplay their exposure. They might have been exposed to the fumes, but because no health problems followed, they might have forgotten about that exposure over time.
Both scenarios are a problem for the researcher. To determine the health effects of the chemical spill, the researcher needs to know accurately who was exposed and who went on to develop problems. If people with later health problems overstate their exposure, and people who weren't affected understate theirs, the study results will be skewed: they will overestimate the health impact of the spill beyond what was actually the case.
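To see how this plays out in numbers, here is a small simulation sketch in Python. Every figure in it (the exposure rate, the illness risks, the recall probabilities) is made up purely for illustration; none comes from a real study. The sketch shows how differential recall alone can inflate a measured association, even when the true effect is modest.

```python
import random

random.seed(1)

# Hypothetical numbers, chosen only to illustrate recall bias:
# exposure truly raises illness risk from 10% to 15%, but people
# who became ill are more likely to recall their exposure.
N = 100_000
P_EXPOSED = 0.4
P_ILL_UNEXPOSED = 0.10
P_ILL_EXPOSED = 0.15
P_RECALL_IF_ILL = 0.95   # assumed: ill people report their exposure 95% of the time
P_RECALL_IF_WELL = 0.70  # assumed: healthy people report it only 70% of the time

def odds_ratio(a, b, c, d):
    # a: exposed & ill, b: exposed & well, c: unexposed & ill, d: unexposed & well
    return (a * d) / (b * c)

true_counts = [0, 0, 0, 0]      # classified by true exposure
reported_counts = [0, 0, 0, 0]  # classified by recalled exposure

for _ in range(N):
    exposed = random.random() < P_EXPOSED
    ill = random.random() < (P_ILL_EXPOSED if exposed else P_ILL_UNEXPOSED)

    # Classification based on what actually happened
    true_counts[(0 if ill else 1) if exposed else (2 if ill else 3)] += 1

    # Classification based on recall: only truly exposed people can report
    # exposure, but how reliably they do depends on whether they became ill
    # (a simplification; unexposed people never falsely report exposure here).
    recall_p = P_RECALL_IF_ILL if ill else P_RECALL_IF_WELL
    says_exposed = exposed and random.random() < recall_p
    reported_counts[(0 if ill else 1) if says_exposed else (2 if ill else 3)] += 1

print(f"True odds ratio:     {odds_ratio(*true_counts):.2f}")      # around 1.6
print(f"Reported odds ratio: {odds_ratio(*reported_counts):.2f}")  # around 2.4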
Surveillance bias
Let’s stay with the example of the chemical spill a bit longer. Suppose that right after the spill, you recruited everyone who was exposed and created a cohort that you could follow over time. This type of study design, called a cohort study, has the advantage of helping you avoid recall bias. However, the people in your cohort are also understandably worried about their health because of the spill. They will likely go to the doctor more frequently, react more quickly to worrying signs or symptoms, and be more proactive about getting medical exams and diagnostic tests. If you then compare their health with that of the general population, you might find higher rates of illnesses and conditions. But that could simply be because members of this cohort are more vigilant about their health.
The term surveillance bias refers to the idea that the more closely you look at something, the more you’ll find. People with a certain exposure or a certain outcome typically attract more attention than others. In this example, it’s the affected individuals themselves who are responsible for the greater scrutiny of their health status. But you can probably think of many examples where an institution, a system or an authority is responsible for the increased surveillance. For example, when a government campaign cracks down on a certain practice, that practice will often appear at first to be increasing in prevalence. However, that could simply be due to the increased attention being paid to it.
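Here is a companion sketch, again with purely hypothetical numbers. Both groups develop the condition at exactly the same true rate; the only difference is how often cases get detected. The closely watched cohort ends up with a noticeably higher observed rate, even though nothing about its underlying health differs.

```python
import random

random.seed(2)

# Hypothetical numbers, chosen only to illustrate surveillance bias:
# the true prevalence is identical in both groups, but detection differs.
N = 50_000
P_CONDITION = 0.08       # same true prevalence in both groups
P_DETECT_COHORT = 0.90   # assumed: frequent check-ups catch most cases
P_DETECT_GENERAL = 0.50  # assumed: many cases in the general population go undiagnosed

def detected_rate(p_detect):
    # A case only shows up in the data if it both occurs and is detected.
    detected = sum(
        1
        for _ in range(N)
        if random.random() < P_CONDITION and random.random() < p_detect
    )
    return detected / N

print(f"True rate in both groups:      {P_CONDITION:.1%}")
print(f"Detected rate, watched cohort: {detected_rate(P_DETECT_COHORT):.1%}")  # ~7.2%
print(f"Detected rate, general pop.:   {detected_rate(P_DETECT_GENERAL):.1%}") # ~4.0%
```

The gap between the two detected rates is entirely an artifact of how hard each group is looked at, which is exactly the trap a researcher comparing a closely followed cohort against the general population can fall into.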
Source: At Work, Issue 88, Spring 2017: Institute for Work & Health, Toronto