What (if anything) was being compared?

As you will have learned earlier this week, randomised controlled trials are generally seen as the best way of establishing whether a treatment works.

1. Did the study have a control group?

That is, were there two or more groups: one receiving the treatment being tested and the other(s) receiving a placebo or usual care? If there’s no comparison group, the results may be due to a placebo effect or a Hawthorne effect (people changing their behaviour simply because they know they are being studied).

2. Were the people taking part randomised into groups?

As we’ve seen, randomising people into groups (where allocation is down to chance) means that the groups are more likely to be similar. Confounders (other possible explanations for the result) are distributed in a similar way across the groups, which should mean that any differences are a result of the treatment. Acceptable randomisation normally means that allocation to a group can’t be manipulated by anyone and is unknown to the researchers at the point of allocation. So randomising people by methods like throwing dice, using hospital numbers, or using days of the week is not generally considered acceptable, because these methods can be predicted or manipulated.

3. Were the groups similar?

The researchers should say whether the groups were similar at the start of the study or whether there were significant differences between them.

4. How many people dropped out of the research?

This is sometimes called “study attrition”. If lots of people drop out of the research, or if the groups are unequal at the end, the reliability of the results may be compromised.
