Most scientific studies are designed to pinpoint the effect of something—such as the effect of a condition on developing a problem (disease, injury) or the effect of an intervention (treatment, program) on overcoming a problem. Scientists usually determine effect by taking two similar groups—the only difference being the groups’ exposure to that condition or intervention—and measuring the difference in the outcomes the two groups experience.
But what happens when the two groups selected were not similar to begin with? What if key characteristics distinguishing the two might have played a role in producing the different outcomes? That’s an example of what’s called selection bias.
Bias is a type of error that systematically skews results in a certain direction. Selection bias is a kind of error that occurs when the researcher decides who is going to be studied. It is usually associated with research where the selection of participants isn’t random (i.e. with observational studies such as cohort, case-control and cross-sectional studies).
For example, say you want to study the effects of working nights on the incidence of a certain health problem. You collect health information on a group of 9-to-5 workers and a group of workers doing the same kind of work, but at night. You then measure the rates at which members of both groups reported the health problem. You might conclude that night work is associated with an increase in that problem.
The trouble is, the two groups you studied may have been very different to begin with. The people who worked nights may have been less skilled, with fewer employment options. Their lower socioeconomic status would also be linked with more health risks—due to less healthy diets, less time and money for leisure activities and so on. So your finding may not be related to night work at all, but a reflection of the influence of socioeconomic status.
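The shift-work scenario above can be sketched as a small simulation. This is a toy illustration with made-up numbers (not data from any real study): here the health problem is driven entirely by socioeconomic status, yet a naive comparison of the two groups makes night work look harmful.

```python
import random

random.seed(0)

# Toy simulation (illustrative numbers only): the health problem is driven
# entirely by low socioeconomic status (SES), not by night work itself.
# Night workers are simply more likely to have low SES.
def make_group(n, p_low_ses):
    group = []
    for _ in range(n):
        low_ses = random.random() < p_low_ses
        # Risk of the health problem depends only on SES, not on shift.
        p_problem = 0.30 if low_ses else 0.10
        group.append(random.random() < p_problem)
    return group

day = make_group(10_000, p_low_ses=0.2)    # day workers: mostly higher SES
night = make_group(10_000, p_low_ses=0.6)  # night workers: more low SES

rate = lambda g: sum(g) / len(g)
print(f"day-shift problem rate:   {rate(day):.3f}")
print(f"night-shift problem rate: {rate(night):.3f}")
# Night workers show a higher rate even though shift itself has no effect:
# the difference comes entirely from the SES imbalance between the groups.
```

Even though shift work has no causal effect in this toy model, the night group reports the problem more often—exactly the spurious association the article describes.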
Selection bias also occurs when people volunteer for a study. Those who choose to join (i.e. who self-select into the study) may share a characteristic that makes them different from non-participants from the get-go. Let’s say you want to assess a program for improving the eating habits of shift workers. You put up flyers at workplaces where many employees work night shifts and invite them to participate. However, those who sign up may be very different from those who don’t. They may be more health conscious to begin with, which is why they are interested in a program to improve eating habits.
If this were the case, it wouldn’t be fair to conclude that the program was effective just because the health of those who took part in the program was better than the health of those who did not. Due to self-selection, other factors may have affected the health of your study participants more than the program did.
Minimizing selection bias
Good researchers will look for ways to overcome selection bias in their observational studies. They’ll try to make their study representative by including as many people as possible. They will match the people in their study and control groups as closely as possible. They will “adjust” for factors that may affect outcomes. They will talk about selection bias in their reports, and recognize the degree to which their results may apply only to certain groups or in certain circumstances.
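One of the techniques listed above—“adjusting” for a factor that may affect outcomes—can be sketched by stratification: comparing day and night workers separately within each socioeconomic level. This is a hypothetical toy example with invented numbers, not any study’s actual method.

```python
import random

random.seed(2)

# Sketch of "adjusting" for a confounder by stratification (toy data):
# compare day and night workers *within* each socioeconomic (SES) stratum.
def worker(shift, p_low_ses):
    low_ses = random.random() < p_low_ses
    p_problem = 0.30 if low_ses else 0.10   # risk driven by SES only
    return {"shift": shift, "low_ses": low_ses,
            "problem": random.random() < p_problem}

workers = ([worker("day", 0.2) for _ in range(10_000)] +
           [worker("night", 0.6) for _ in range(10_000)])

def rate(shift, low_ses):
    g = [w for w in workers
         if w["shift"] == shift and w["low_ses"] == low_ses]
    return sum(w["problem"] for w in g) / len(g)

for ses in (False, True):
    label = "low SES " if ses else "high SES"
    print(f"{label}: day {rate('day', ses):.3f} "
          f"vs night {rate('night', ses):.3f}")
# Within each stratum the day/night rates are similar, showing that the
# crude difference between the groups was due to SES, not to shift work.
```

Within each stratum the rates are close, revealing that the crude group difference reflected socioeconomic status rather than night work. (Real studies adjust with statistical models rather than this simple split, but the idea is the same.)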
Another way researchers try to minimize selection bias is by conducting experimental studies, in which participants are randomly assigned to the study or control groups (i.e. randomized controlled trials, or RCTs). However, selection bias can still occur in RCTs. For example, it may be that the pool of people being randomly assigned to the intervention group is not very representative of the wider population. Or it could be that the researcher’s allocation techniques aren’t truly random (e.g. when clinicians, often motivated by good intentions, manipulate the allocation method to get their patients into the treatment group instead of the control group).
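The random assignment at the heart of an RCT can be sketched in a few lines. This is a simplified illustration with a hypothetical participant pool (real trials use more careful allocation methods, such as concealed, computer-generated sequences): randomly splitting the pool tends to balance characteristics—known and unknown—across the two groups.

```python
import random

random.seed(1)

# Hypothetical participant pool: each person has a "health conscious"
# trait that could otherwise confound a program's apparent effect.
participants = [{"id": i, "health_conscious": random.random() < 0.5}
                for i in range(1000)]

random.shuffle(participants)     # randomize the order of the pool
treatment = participants[:500]   # first half -> intervention group
control = participants[500:]     # second half -> control group

share = lambda g: sum(p["health_conscious"] for p in g) / len(g)
print(f"health-conscious share, treatment: {share(treatment):.2f}")
print(f"health-conscious share, control:   {share(control):.2f}")
# Random assignment tends to balance characteristics across groups,
# so a remaining outcome difference is easier to attribute to the program.
```

Because the trait ends up roughly equally common in both groups, any later difference in outcomes is harder to blame on self-selection—which is exactly why randomization is valued.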
Often, selection bias is unavoidable. That’s why it’s important for researchers to examine their study design for this type of bias and find ways to adjust for it, and to acknowledge it in their study report.
Source: At Work, Issue 76, Spring 2014: Institute for Work & Health, Toronto