Andrew Bell

Senior Lecturer in Quantitative Social Sciences at Sheffield Methods Institute, University of Sheffield. Find out more at https://www.sheffield.ac.uk/smi/about-us/andrewbell or tweet @andrewjdbell

Location Sheffield

Activity

  • So you aren't wrong: the interpretation of the survey by The Sun was the main problem here, and their fault, not Survation's, that they didn't refer to a reference category. But a survey question should aim to get at something precise, and I'm not sure what that precise thing was in this case. The question allowed a number of subtly different things to be...

  • I don't know whether you class it as more reputable, but the Times ran a similar story on the same poll (although they did apologise and rejig the online article a few days later).

  • All really good points. I think the survey was on admittance to hospital, so it would count towards a 'weekend effect' if you were admitted to hospital at the weekend but died, say, the following Wednesday. But you are right - the causal mechanism is not at all well defined, and there is no reason to think that increasing the number of junior doctors would be...

  • Agree with all of that, except I'm not sure it's true that people die at home instead - the paper linked talks mostly about mortality generally, not just mortality in hospitals, so people dying at home ought to be picked up. That isn't to say that I think strikes protect patients - rather that fewer (even routine) operations mean fewer things can go...

  • The only way you could have that option is by increasing the sample size, which would allow you more precision (narrower confidence intervals) for a given confidence level. If your sample size is fixed, you would as you say have to make that judgement between confidence and precision.

  • Hi Caroline, that is exactly right, you are sacrificing one type of accuracy for another, and as a researcher you make a judgement as to which accuracy is more useful. As a rather stylised example, imagine we were trying to estimate the true proportion of people who support Barack Obama in the US. Being 99.99% sure that the answer is between 1% and 99% of the...
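    The trade-off in the two replies above can be sketched numerically. A minimal sketch using the normal-approximation confidence interval for a proportion - the 52%-support-in-a-sample-of-1,000 figures and the z-values are illustrative assumptions, not from any survey discussed here:

```python
import math

def ci_half_width(p_hat, n, z):
    # Normal-approximation half-width of a confidence interval for a proportion
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical poll: 52% support in a sample of 1,000 (invented numbers).
p_hat, n = 0.52, 1000

# Raising the confidence level (a bigger z) widens the interval:
# more confidence, less precision.
for label, z in [("90%", 1.645), ("95%", 1.96), ("99%", 2.576)]:
    half = ci_half_width(p_hat, n, z)
    print(f"{label} CI: {p_hat - half:.3f} to {p_hat + half:.3f}")

# The only escape from the trade-off: a bigger sample narrows the
# interval at the same confidence level (quadrupling n halves the width).
print(ci_half_width(p_hat, 4000, 1.96))
```

    With the sample size fixed, the loop shows the judgement call: the 99% interval is wider (less precise) than the 90% one; only a larger n buys both confidence and precision.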

  • Great example of a correlation that isn't necessarily causation. If there is a difference between being admitted at weekends and being admitted on weekdays, that could be because care is worse at weekends, or it could be because people admitted to hospital at weekends are more ill for other reasons (e.g. there might be fewer routine operations etc for which...

  • Thanks for the feedback!

    I'm going to push you a little bit further on this. The survey found that 20% of Muslims had at least some sympathy for British fighters in Syria. But (linking back to the previous section re reference categories) a similar survey found that 13% of non-Muslims held the same position (see the bottom graph here...

  • I agree that this is a really important area of research - radicalisation of Muslims is a huge problem, and research on the extent of it would undoubtedly help us to come up with solutions to it. This is bad research because it doesn't tell us anything about that issue.

    If you wanted to find out about the extent of radicalisation of Muslims, you would need...

  • Surprisingly, that isn't actually true. Whether we have a population of 5000 or 5 million, a sample of 500 would give us the same degree of accuracy.

    Imagine, for instance I had a coin and was interested in whether or not it was biased. I tossed the coin 1 million times, and then took a sample of 500 from those tosses to find an estimate of the proportion...
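    The coin example above can be simulated. A minimal sketch (the two population sizes and the seed are arbitrary choices for illustration): whatever the size of the population of tosses, the standard error of a proportion estimated from n = 500 is roughly sqrt(0.5 × 0.5 / 500) ≈ 0.022.

```python
import math
import random

random.seed(42)  # arbitrary seed, just for reproducibility

def sample_proportion(population, n):
    # Estimate the proportion of heads from a random sample of n tosses
    return sum(random.sample(population, n)) / n

# Two populations of fair-coin tosses: 5,000 tosses vs 500,000 tosses.
small = [random.random() < 0.5 for _ in range(5_000)]
large = [random.random() < 0.5 for _ in range(500_000)]

n = 500
# The standard error depends on the sample size, not the population size
# (the finite-population correction is tiny unless n is a big slice of N):
se = math.sqrt(0.5 * 0.5 / n)  # about 0.022 either way

print(sample_proportion(small, n), sample_proportion(large, n), se)
```

    Both estimates land within a couple of standard errors of 0.5, despite one population being a hundred times the size of the other.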

  • Well that depends on what data you have - you might have age data that is more exact than the year :)

  • It depends I guess on your definition of quant and qual - I would call this quantitative because you are counting the number of people in each category. You could I suppose call it qualitative because it is getting at a concept that doesn't lend itself to being counted, though.

    You are right that how someone interprets the question will vary greatly between...

  • Yes, I can see that's a bit confusing, sorry. I wanted an example where question wording can be interpreted differently by different groups, but you are right that the question you quote is hypothetical, since the word wasn't used in the question at all.

  • Excellent! :)

  • Indeed - they did not use the word Jihadi. You might want to look at

    (a) Survation's own analysis of the survey http://survation.com/new-polling-of-british-muslims/ (particularly interesting is the comparison to non-Muslims in the final graph)
    and
    (b) their response to the Sun's headlines....

  • This is true - nothing is perfectly accurate though, and there are ways to make these things more accurate, depending on the type of data (large scale surveys, internet polling etc...). The question is less whether something is perfectly accurate, and more whether it is accurate enough to be useful for the question at hand.

  • Unfortunately I don't think that is covered by this course - see here to find out a bit more though http://www.applied-survey-methods.com/weight.html

    I think you are right that it can be hazardous - you need to make sure you are weighting by as many key demographic variables as possible. The problem with the 2015 UK election was basically that they got the...

  • I think surveys can produce important and newsworthy results - it is poorly designed or interpreted surveys that are the problem. So rather than ignore survey stories, I would say dig a bit deeper - find the question, and if it doesn't match the headlines, then ignore it (or write a scathing comment on the news article saying so).

  • I imagine a question was asked along the lines of "Are you a member of any religious groups" - subject to measurement error but probably not fatally so. You are right you might find different results within the Islamic community. I believe it was a phone survey, so the participants wouldn't have known what the others looked like (although would have heard...

  • Indeed so - another one is that often young people won't answer their phone if they don't recognise the number - something that is different from other generations. There are ways around it - if you know one group of people is underrepresented, you can weight those individuals from that group who do answer more heavily. But you have to get the weightings right...
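    The weighting described above can be sketched as simple post-stratification: each respondent is weighted by their group's population share divided by its share of the sample. The age bands, "census" shares and poll counts below are all invented for illustration:

```python
# Assumed (made-up) population shares, e.g. from census figures
population_share = {"18-24": 0.12, "25-64": 0.66, "65+": 0.22}
# Hypothetical poll: young people underrepresented (didn't answer the phone)
sample_counts = {"18-24": 40, "25-64": 340, "65+": 120}

n = sum(sample_counts.values())
weights = {group: population_share[group] / (sample_counts[group] / n)
           for group in sample_counts}

# Underrepresented groups get weight > 1, overrepresented ones weight < 1,
# so each 18-24 respondent counts 1.5 times in any weighted estimate.
print(weights)
```

    Getting the weightings right depends entirely on the assumed population shares being accurate and on weighting by the demographics that actually matter - get those wrong and the weighted poll is wrong too.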