Media Literacy: Spotting Suspect Surveys
It’s not enough to be able to spot click-bait websites or suspect headlines: understanding what makes an opinion poll suspect is just as important. But that doesn’t mean dismissing all statistics out of hand. FiveThirtyEight has put together a guide for spotting shoddy work that doesn’t require an advanced degree in math. In fact, it requires no math at all. Developing media literacy in these times is as essential to avoiding getting conned as learning to see through mass-market advertising was in the 1950s. FiveThirtyEight also illustrates how the “Kid Rock Senate” survey may have been used less for politics and more to manipulate betting markets.
“All models are wrong, but some are useful.”
Box’s famous adage sets the right frame of mind in which to consider statistical polls and surveys. A quick refresher, though, on what the purpose of statistics actually is. It is not, generally, to describe with pinpoint accuracy exactly what will happen in the real world, but rather to describe information that we cannot readily perceive ourselves and to draw inferences from what that information might be telling us. A mathematical model is an abstraction of the real world. It is not the real world, no matter what a data-science consultant will try to sell you.

We use mathematical models because the real world is beyond our individual means of comprehension. I can ask my friends who they think will win the Presidency, but I can’t ask 320 million people in the US who they favor for President. Nor can anyone else. Inferential statistics hopes to take a much smaller sample and glean from it what might happen. The more complex the phenomenon being examined, the more challenging it is to create a model that can explain that phenomenon with any degree of regularity.

When we speak about statistical confidence, it’s not confidence that what the survey predicts will actually happen – it’s confidence that if someone collected the same data, in the same way, using the same methods, the result would be the same. When a thousand forecasts are run and the model says Clinton wins 95% of the time to Trump’s 5%, that’s not a score like in football, where Trump must overcome a 90-point deficit to top Clinton. It means that if the exact same Presidential race were run in 100 parallel universes, in 95 of them Clinton would win, and in 5 of them we’d still be talking about President Trump’s scandal-of-the-week ten months after the polls closed.
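The “parallel universes” framing can be sketched in a few lines of Python. The polling average and uncertainty below are made-up numbers for illustration, not real 2016 figures; the point is only that a “win probability” is the fraction of simulated races a candidate wins, not a vote share.

```python
import random

def simulate_forecast(poll_mean=0.52, poll_sd=0.03, n_sims=100_000, seed=42):
    """Run many simulated 'parallel universe' elections.

    poll_mean is a hypothetical polling average for Candidate A's
    two-party vote share; poll_sd is the uncertainty around it.
    Both numbers are illustrative, not real election figures.
    Returns the fraction of simulations Candidate A wins.
    """
    rng = random.Random(seed)
    # In each simulated race, draw an actual vote share from a normal
    # distribution around the polling average; A wins if it tops 50%.
    wins = sum(rng.gauss(poll_mean, poll_sd) > 0.5 for _ in range(n_sims))
    return wins / n_sims

print(f"Candidate A wins in {simulate_forecast():.0%} of simulations")
```

Note that a candidate polling at 52% can still lose a sizable share of the simulated races: a narrow lead plus real-world uncertainty yields a win probability well short of certainty, which is exactly the distinction between a forecast and a score.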
Especially after the 2016 Presidential election, many have become so suspicious of surveys that they automatically reject as wrong any poll put in front of them. But that’s like ignoring the hurricane forecast because it was wrong the last time. Your confirmation bias about the accuracy of statistics won’t keep a tidal surge from swamping your basement. The real world, to pardon the phrase, trumps our beliefs. The benefit of a model, then, is not whether it is accurate but how useful it is. Most pollsters provided a forecast that was wrong on the Presidential election. But only a few polls were useful in describing how, if they were wrong, they would be wrong. Nate Silver’s final election update is an example of one of these wrong-yet-useful results, especially where he detailed what was creating the large amount of uncertainty in FiveThirtyEight’s forecast. As it turns out, Nate’s description of how all the polls could be wrong, and a Trump win occur, was very close to what actually happened.(1) A clear front-runner for the Most Accurate Caveat award, if such a thing were handed out. And he took it one step further after the election to provide even more detail.(2) FiveThirtyEight’s forecast was wrong, but the information contained in it was very useful.
“Lies, Damn Lies & Statistics”
Popularized by Mark Twain, this warning cautions us to consider the ulterior motives of the statistician. There is a difference between making a strong statistical product that failed to reflect the reality of what occurred, making a weak statistical product with good intentions – and intentionally creating suspect poll results for some ulterior motive. FiveThirtyEight has performed a great community service in advancing media literacy by creating a list of questions, no math required, to consider when confronted with a poll result.
First and foremost, does it seem professional? That may seem too basic, but it works surprisingly well. Is a pollster’s press release riddled with typos? Reputable pollsters are run by publicly identified people, and if they’re putting their professional reputations on the line, they probably want to make a good first impression. Spelling simple words wrong or misspelling the candidates’ names is often a sign that either a pollster doesn’t know what it’s doing or isn’t on the level. Small mistakes usually come with big mistakes.
Who? Who conducted the poll? Does the pollster have a long track record? Check out the polling firm’s website — are there real people with expertise listed there? Does the pollster even have a website and not just a Twitter account? (Websites are pretty easy to create, but some fake pollsters don’t even do that.) If a pollster doesn’t reveal the people working for the company, then you probably don’t want to cite the firm’s numbers.
How? How was the survey conducted (e.g., via automated phone, live telephone interview or on the internet)? If it was on the internet, see how the pollster was getting people to participate in its polls (e.g., via its own panel or Google Surveys). If it was on the phone, find out which phone bank was doing the calling. If a pollster isn’t revealing its methodology, don’t trust it. Legitimate, professional pollsters prize transparency.
What? What questions are being asked? If it’s a poll about an election, legitimate pollsters will typically ask respondents more than simply who they prefer, Candidate A versus Candidate B. The pollsters will want to find out why people are voting the way that they are (what issues matter to them, for example, or how favorably respondents view the candidates). At a minimum, pollsters will ask demographic questions in order to weight their data properly. If a pollster isn’t revealing this data and how it’s being weighted, be suspicious.
When? This works two ways. First, when was the poll itself conducted? And how many people did it reach? Those are crucial, standard details every on-the-level pollster releases. Second, when was the polling company founded? If there’s no answer, be suspicious. If it was only very recently, treat its results with caution until it has a body of work to judge.
Why? Polls cost money, so most pollsters aren’t conducting them on a whim. Academic institutions often poll to increase their name recognition, or to provide students an educational opportunity. Most professional pollsters conduct surveys to make money. If there isn’t something on the website that tells you why the pollster is conducting the poll, something is probably up.
Where? Find out where the company is located. Even in the age of the internet, most pollsters have a physical location. An address that you should be able to send a piece of mail to. An actual place that you can check exists via a website like the Whitepages.
Can you reach the pollster? Some fly-by-night operations won’t even have phone numbers on their websites for you to call. That’s probably not a good sign. If there is a phone number, see if it’s toll-free (costs more money to the company, but less to the consumer). If it’s not a toll-free number, see if the area code matches the area where the company is located. And if you’re really adventurous, pick up a phone and see if you can speak to a real person. (You can also try the “Shattered Glass” trick, if you’re suspicious.) If there’s no number, shoot the pollster an email (assuming its website includes an address). Do you get a response?
Short on time? Check to see if polling websites like HuffPost Pollster or FiveThirtyEight have cited the pollster. If they haven’t, there’s probably a good reason.
Still unsure? If you think there’s a fake poll out there, simply email FiveThirtyEight at firstname.lastname@example.org. We’ll look into it.(3)
“A sucker is born every minute.”
Suspect surveys can be created for a variety of purposes. Most media-savvy consumers know to ignore the “4 out of 5 doctors agree” style of advertising. But when it comes to politics we have proven not to be as savvy. Take, for example, the Kid Rock Senate survey of several weeks ago. The poll went viral, and its results, purportedly showing Kid Rock faring well in a Senate race, popped up everywhere. As FiveThirtyEight describes it, the intent here might have been:
to move the betting markets. That is, a person can put out a poll and get people to place bets in response to it — in this case, some people may have bet on a Kid Rock win — and the poll’s creators can short that position (bet that the value of the position will go down). In a statement, Lee said Delphi Analytica was not created to move the markets. Still, shares of the stock for Michigan’s 2018 Senate race saw their biggest action of the year by far the day after Delphi Analytica published its survey.(4)
Although the creator of the poll denied that this had been the purpose, the data collected by FiveThirtyEight demonstrated the outsized impact the poll had on betting markets.
It’s trite to say we’re in a new era of communication tools. But it seems that with every new wave of communication, and the benefits it brings, there’s a lag time during which the population is susceptible to getting fleeced by those same methods. After Gutenberg’s press mass-produced a German Bible, Catholic authorities were apparently worried that the layman would be able to access scripture without expert interpretation. And when mass-market advertising began, some of its methods were so ham-fisted that we laugh at them now. But they worked for a while. The InfoMullet believes media literacy will become just as much a life skill as basic literacy (reading and writing) and computer literacy – skills now so ubiquitous that we take them for granted as just existing.
(1) The mother-of-all-caveats begins down the page just under the first map. http://fivethirtyeight.com/features/final-election-update-theres-a-wide-range-of-outcomes-and-most-of-them-come-up-clinton/
(2) FiveThirtyEight.com has published