Correlations and misleading inferences
In this course we have seen that tremendous advances have been made in understanding wellbeing, and we have already encountered excellent examples of groups and organisations at local and national level that seek to promote it. This week we’ll turn our attention to whether these efforts have made a difference. But how do we assess whether a wellbeing lens is leading to better outcomes? We need to be careful not to oversimplify the relationships between possible causal factors and wellbeing outcomes.
Everyone interested in making their plans and decisions more ‘evidence-based’ would love to have strong evidence of how wellbeing is caused. But we’re never going to get such evidence. The processes involved in facilitating wellbeing are far too complex, dynamic, and uncertain for us to know causes and effects with much confidence. Not only are the causes multiple and dynamically interactive; wellbeing is itself, of course, an extremely uncertain and vague outcome that can’t be quantified with confidence. So the best we can do is combine our own commonsense analysis of plausible pathways of causation with available evidence from correlational, experimental, and longitudinal studies. And we must resist the temptation to be unduly persuaded by any one piece of evidence.
Sadly, most of the available evidence is only correlational, yet both the producers and the communicators of that information are prone to forget that correlations tell us nothing positive about causation. Correlations can only tell us which factors are unlikely to be causative. Yet even the brightest statistical scientists habitually make elementary blunders, misrepresenting their findings as if they contained messages about causation.
Most professional statisticians tend to use wording that indicates confusion between correlation and causation – referring to ‘effects’, ‘impacts’, and ‘determinants’, and making after-the-event statistical ‘predictions’, when what they are looking at are associations. There is no clear line between mere carelessness in this regard and actually mistaking correlational information for evidence of causation. (Note that such mistakes are different from the problem of ‘illusory correlation’, whereby prejudices prompt people to observe correlations where there are none.)
Researchers are particularly prone to making false causal inferences when they are trying too hard to link their research with policy and practice. A simple example is a report to the UK government on the ‘wellbeing impacts [sic] of culture and sport’, which was based on correlational research and so should have made clear that it conveyed no information on ‘impacts’ (Fujiwara, Kudrna, and Dolan, 2014; for further examples, see Best, 2001, and Thin, 2012).
In statistical modelling, a ‘cause’ is often simply a factor considered in some way influential over another, without necessarily being the whole or even the main cause. Adverse weather conditions can be a statistically significant ‘cause’ of a reduction in outdoor exercise, even if the main causes are sociocultural and mental.
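The problem can be illustrated with a short simulation (a hypothetical sketch, not drawn from the original text): two variables with no causal link between them can still be strongly correlated, simply because both respond to a common cause. Here a made-up ‘season’ score stands in for the confounder, and the variable names are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical confounder: a 'season' score that drives both variables.
season = rng.normal(size=n)

# Neither variable causes the other; each just responds to the season
# (plus independent noise).
ice_cream_sales = 2.0 * season + rng.normal(size=n)
outdoor_exercise = 1.5 * season + rng.normal(size=n)

# Pearson correlation between the two non-causally-related variables.
r = np.corrcoef(ice_cream_sales, outdoor_exercise)[0, 1]
print(round(r, 2))  # a strong positive correlation, despite no causal link
```

A naïve reading of the correlation would ‘predict’ that boosting ice-cream sales increases exercise; the simulation shows how easily an association arises with no causation at all.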
We would be very interested to hear any examples you can think of that illustrate how a complicated causal relationship was oversimplified, leading perhaps to misleading conclusions about what improves wellbeing.
Best, Joel (2001) Damned Lies and Statistics: Untangling Numbers from the Media, Politicians, and Activists. Berkeley, CA: University of California Press.
Thin, Neil (2012) Social Happiness: Research into Policy and Practice. Bristol, UK: Policy Press, ch.8 ‘Correlations and causal theories’.
© Neil Thin, University of Edinburgh