Nobody’s perfect. And surveys pull together the work of lots of somebodies. So we should not ask whether a particular survey is flawed. Instead, we should ask how it’s flawed and how we should adjust our interpretations to accommodate these flaws. It’s no more possible for me to list all the things that can go wrong for a survey than it would be for me to list all the positive qualities of my cat Ernie. There’s always going to be another one. Still, here’s a list of four that are particularly important, with the last two applying just to election and referendum prediction. One, respondents may not give correct answers to the questions they are asked.
For example, in UK-based surveys men report having substantially more opposite-sex partners on average than women do. But this is impossible. Think about it. Similarly, people often tell little white lies to surveyors about their voting, church attendance, and exercise. Two, a sample may be systematically unrepresentative of the broad population we’re trying to understand. For example, let’s say we want to estimate average income in the UK, but we do this by sampling from the members list of the British Lawn Tennis Association. Our estimate will probably be too high. And our systematic bias towards selecting people with above-average incomes would remain no matter how large we make our sample.
Because we’ll always be selecting from a list of paid-up tennis club members. Now let’s turn to the last two items on my list of potential pitfalls, which, recall, are specific to predictions of election or referendum results. Three, it’s not enough to understand people’s preferences for candidates. You also need to predict who is going to turn up and actually vote. Consider the UK’s Brexit referendum of 2016. Older people were much more likely to favour leaving the EU than younger people were. So it was important to understand likely turnout rates broken down by age. We know that in general older people are more likely to vote than younger people are. But how strong is this effect?
Our answer is pretty important for our predictions. Or consider the Trump versus Clinton election of 2016. It appears that white voters without college degrees, a very pro-Trump constituency, were more likely to turn up and vote than many pollsters had assumed when making their predictions, which were largely based on turnout patterns in recent elections. The surprisingly high engagement of this pro-Trump constituency may have been enough to tip the balance in a few key swing states. Number four, people may change their minds after responding to a poll but before an election. Again, Trump-Clinton provides a good example. You may recall that just before the election, FBI director James Comey announced that he was reopening the FBI investigation into Clinton’s emails.
This announcement seems to have hurt Clinton, even though shortly thereafter Comey announced that the new investigation didn’t find anything. One final and particularly important point: margins of error do not capture any of the potential pitfalls I’ve discussed in this clip. And the clip itself is just a highlight reel of some of the biggest errors that can mess up surveys and election predictions. Traditional margins of error are really just minimal underestimates of the true uncertainty that looms over survey estimates. Full uncertainty can easily be twice as large. So be careful. Think through the problems that can distort each survey finding you encounter, and treat error margins as the start of a conversation, not the finish.
Think back to steps 2.12 and 2.13 about error margins. Notice that none of the pitfalls covered in the video are even mentioned in these steps on error margins. Error margins and the pitfalls reflect two separate sets of considerations.
Steps 2.12 and 2.13 are about uncertainty caused by the fact that we interview only a sample of the population rather than the entire population. This sampling issue, at least as presented in these steps, doesn’t systematically “bias” our estimates upwards or downwards compared to the true numbers we are trying to estimate. It does, however, inject uncertainty into our estimates.
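To make the sampling side of this concrete, here is a minimal sketch of the textbook 95% margin-of-error formula for an estimated proportion from a simple random sample. The function name and the example sample sizes are my own illustration, not something from steps 2.12 and 2.13; the point is simply that this kind of uncertainty shrinks as the sample grows, but never reaches zero.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for an estimated proportion p
    from a simple random sample of size n (assumes no systematic bias)."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll splitting 50/50 is the worst case for uncertainty.
for n in (100, 1_000, 10_000):
    print(f"n={n:>6}: +/- {margin_of_error(0.5, n):.1%}")
```

Notice that quadrupling the sample size only halves the margin of error, which is why very large polls still carry meaningful sampling uncertainty.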
In contrast, the issues raised in the present video do systematically “bias” results in one direction or another. For example, a sample packed with rich people will tend to lean more Tory, or more Republican, than the population as a whole does.
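The contrast between sampling uncertainty and systematic bias can be demonstrated with a small simulation. All the income figures below are invented for illustration, not real UK data: a random sample’s error shrinks as it grows, while a sample drawn from a rich-only list (like the tennis-club members list) stays badly off no matter how large it gets.

```python
import random

random.seed(0)

# Hypothetical population: 90% of incomes around 30,000 and 10% around
# 80,000 (illustrative numbers only). True mean is roughly 35,000.
population = ([random.gauss(30_000, 5_000) for _ in range(90_000)]
              + [random.gauss(80_000, 10_000) for _ in range(10_000)])
true_mean = sum(population) / len(population)

def sample_mean(people, n):
    """Mean income in a simple random sample of size n."""
    return sum(random.sample(people, n)) / n

# Unbiased sampling: the error shrinks as the sample grows.
for n in (100, 10_000):
    err = abs(sample_mean(population, n) - true_mean)
    print(f"random sample of {n:>6}: off by about {err:,.0f}")

# Biased frame: sampling only high earners, like the tennis-club list.
rich_only = [x for x in population if x > 60_000]
for n in (100, 5_000):
    err = abs(sample_mean(rich_only, n) - true_mean)
    print(f"rich-only sample of {n:>6}: off by about {err:,.0f}")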
What can you add to my list of things that can go wrong with surveys and bias the results in a direction you can identify?
Caution: I’m not suggesting that everything that can go wrong will go wrong. Think of it like this. When you’re planning a family vacation it might be a good idea to first sit down and list all the things that might go wrong. The list might look a little daunting but this doesn’t mean that you should just stay home. In fact, making such a list should increase your chances of making a successful trip.
© Royal Holloway, University of London