Effect size

When evaluating research studies or interventions, two important questions arise: how big is the difference, and how meaningful is the impact?

The answer lies in effect size, a statistical measure that quantifies the strength or magnitude of a relationship or difference between groups. Whereas statistical significance tells us only whether an effect exists, effect size tells us how substantial or practically important that effect is.

There are various measures of effect size, depending on the type of research. Common examples include mean difference, odds ratio, and correlation coefficient.
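For illustration, here is a minimal Python sketch computing each of these measures from small made-up datasets (the numbers are invented for this example, not drawn from any real study):

```python
# Minimal sketch of three common effect size measures,
# using invented data. Requires Python 3.10+ for statistics.correlation.
import statistics

# Mean difference: treatment vs. control scores on some outcome scale
treatment = [14, 16, 15, 17, 18]
control = [12, 13, 11, 14, 12]
mean_diff = statistics.mean(treatment) - statistics.mean(control)

# Odds ratio: from a hypothetical 2x2 table of event counts
#              event   no event
# exposed        20        80
# unexposed      10        90
odds_ratio = (20 / 80) / (10 / 90)

# Correlation coefficient (Pearson's r) between two paired variables
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 6]
r = statistics.correlation(x, y)

print(f"Mean difference: {mean_diff:.2f}")  # 3.60
print(f"Odds ratio:      {odds_ratio:.2f}")  # 2.25
print(f"Pearson's r:     {r:.2f}")  # 0.85
```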

To illustrate, consider two randomized controlled trials (RCTs) assessing the impact of a treatment on quality of life (QoL) in patients with end-stage renal failure. In RCT 1, the treatment group had a mean improvement of 5 points on a 20-question QoL scale, while in RCT 2, the treatment group improved by 10 points on a 120-question QoL scale. At first glance, the second study appears to have a larger effect. However, because the studies used different QoL scales, the raw effect sizes are not directly comparable.

The Importance of Standardising Effect Size

To allow fair comparison across studies, researchers standardise effect sizes by dividing the mean difference by the standard deviation of the outcome (usually the pooled standard deviation across groups).

This process converts the effect size into a standardised mean difference, making it dimensionless and comparable across studies. After standardisation, researchers might find that RCT 1 had a larger effect size than RCT 2, despite the raw scores suggesting otherwise.
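As a concrete illustration, the sketch below standardises the two RCT results as Cohen's d, a common standardised mean difference. The original text does not report the studies' standard deviations, so the pooled SDs used here are assumed values chosen purely to show how the comparison can reverse:

```python
# Minimal sketch of standardising the two hypothetical RCT results
# as Cohen's d. The pooled SDs are assumed illustration values;
# the original text does not report them.

def cohens_d(mean_difference: float, pooled_sd: float) -> float:
    """Standardised mean difference: raw mean difference / pooled SD."""
    return mean_difference / pooled_sd

# RCT 1: 5-point improvement on a 20-question scale; assume pooled SD = 4
# RCT 2: 10-point improvement on a 120-question scale; assume pooled SD = 25
d_rct1 = cohens_d(5, 4)    # 1.25
d_rct2 = cohens_d(10, 25)  # 0.40

print(f"RCT 1: d = {d_rct1:.2f}")  # larger standardised effect
print(f"RCT 2: d = {d_rct2:.2f}")  # smaller, despite the bigger raw change
```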

Standardised effect sizes play a critical role in meta-analyses, where data from multiple studies are combined to estimate an overall effect. Without standardisation, pooling results from studies that used different scales would not be meaningful.
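To make the pooling step concrete, here is a minimal sketch of fixed-effect inverse-variance weighting, one standard way of combining standardised effects in a meta-analysis. The effect sizes and standard errors below are made-up illustration values:

```python
# Minimal sketch of fixed-effect inverse-variance pooling:
# each study's effect is weighted by 1 / SE^2, so more precise
# studies contribute more to the overall estimate.

def pooled_effect(effects, standard_errors):
    """Combine study effects, weighting each by 1 / SE^2."""
    weights = [1 / se**2 for se in standard_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se

effects = [1.25, 0.40, 0.75]  # standardised mean differences (invented)
ses = [0.30, 0.15, 0.20]      # their standard errors (invented)

estimate, se = pooled_effect(effects, ses)
print(f"Pooled effect: {estimate:.2f} (SE {se:.2f})")
```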

How Effect Sizes Are Reported

Effect sizes are usually reported using two types of estimates:

1. Point Estimate: A single number representing the best estimate of the true effect size in the population. Examples include mean difference, relative risk, or odds ratio.

2. Interval Estimate: A range of values, usually expressed as a confidence interval (CI), which indicates where the true effect size is likely to lie. A 95% CI means that if the study were repeated many times, about 95% of the intervals calculated this way would contain the true effect size.

Confidence intervals provide crucial context, helping researchers determine the precision and reliability of an effect size. If a confidence interval is narrow, the estimate is more precise; if it is wide, there is greater uncertainty.
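To see how precision shows up in the interval, here is a minimal sketch of an approximate 95% CI around a mean difference, using the normal approximation (estimate plus or minus 1.96 standard errors). The numbers are illustrative only:

```python
# Minimal sketch of an approximate 95% confidence interval,
# assuming a normal sampling distribution for the estimate.

def ci_95(estimate: float, standard_error: float) -> tuple[float, float]:
    """Approximate 95% CI: estimate +/- 1.96 * standard error."""
    margin = 1.96 * standard_error
    return estimate - margin, estimate + margin

# A precise estimate (small SE) gives a narrow interval...
print(ci_95(5.0, 0.5))   # (4.02, 5.98)
# ...while an imprecise one (large SE) gives a wide interval.
print(ci_95(5.0, 2.5))   # (0.10, 9.90)
```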

Effect size is a crucial tool for interpreting research findings. It not only shows the magnitude of a difference or relationship but also helps compare results across different studies.

Standardisation ensures fair comparisons, while confidence intervals provide insights into precision and reliability. Ultimately, effect size is more than just a number—it is essential for making data-driven, evidence-based decisions in health research and practice.
