Margin of Error
It's election season, and that means it's time for something that bothers me about polls. Actually, polls happen all year round, because there is literally no such thing as too many polls. But seeing as the elections are about a month away, it seems an appropriate time to say that all you poll watchers are way too hidebound.
Gallup will release a new poll, with a margin of error of 3.5 points, showing candidate A with 40 points and candidate B with 42 points. Some people will say, well, candidate B is ahead. Which is probably a facile reading, but fundamentally correct. All the evidence suggests that candidate B is ahead. He is not ahead with 95% certainty, but he is ahead with at least 50% certainty. Bet on candidate B.
Lots of people will tell you, though, that no, nothing can be inferred from this poll. The margin of error is 3.5 points, and anything less than that is statistically insignificant. Probably if you tried to submit this poll to a sociological journal, they would reject your findings. But they would only reject your findings because 95% confidence is a convention among scholars (pollsters wish they were scholars).
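If you want to put a rough number on that "at least 50% certainty," here is a back-of-the-envelope sketch (my own illustration, not how Gallup runs its numbers). Assume a simple random sample, a normal approximation, and that the 3.5-point margin of error is the usual 95% half-width for a single candidate's share; under those assumptions, a 2-point lead works out to roughly a 73% chance that candidate B is really ahead. More than a coin flip, well short of 95%.

```python
import math

def prob_b_ahead(p_a, p_b, moe, z=1.96):
    """Back-of-the-envelope probability that candidate B really leads.

    Assumes simple random sampling, a normal approximation, and that the
    reported margin of error is the 95% half-width for one candidate's
    share at p = 0.5 (the usual convention for the headline figure).
    """
    # Recover the implied sample size from the margin of error.
    n = (z / moe) ** 2 * 0.25
    # Standard error of the difference p_b - p_a for multinomial shares
    # (the two shares are negatively correlated, hence this formula).
    se_diff = math.sqrt((p_a + p_b - (p_b - p_a) ** 2) / n)
    # Normal CDF of the lead in standard-error units, via the error function.
    return 0.5 * (1 + math.erf((p_b - p_a) / (se_diff * math.sqrt(2))))

# The Gallup example: 40% vs 42% with a 3.5-point margin of error.
print(prob_b_ahead(0.40, 0.42, 0.035))  # roughly 0.73
```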
Journals could choose to run papers that found a correlation with 51% confidence, if they so desired. The papers would usually be right. The papers would tell us something. They would add to the sum of human knowledge, rather than detract. There are obvious reasons journals don't do that, most of which involve the journal wanting to parcel out its space, resources, and prestige to the papers that offer the most return. The same, of course, is true of newspapers, so they tend to prefer polls whose results fall outside the margin of error.
Of course, with the internet these days, and increasingly hysterical partisans, the threshold for poll acceptability is falling. And rightly so. Don't tell me that only polls that venture outside the margin of error are significant. It's a made-up number. Every poll is significant, if you have enough spare time to read it.