By Dr. Lee M. Miringoff
Poll Watcher Season is upon us big time. And with it come both the good and the bad. Each election cycle resurrects some oldies about the failings of public polls and typically ushers in a few new critiques. Expect 2016 to follow the same pattern.
In an attempt to shed a little light on the discussion… here goes. Pre-election polls are not predictive even though many continue to treat them that way. Common sense tells us that a poll conducted substantially before voting cannot be predictive. Instead, pollsters like to describe their work as a “snapshot,” although as Gary Langer correctly points out, “portrait” is more accurate. Without pre-election polls, we would be clueless about the surprising and lasting electoral appeal of Donald Trump. No summer romance was he. Or, how would we know that JEB! hasn’t connected with GOPers? It would be impossible to assess how Hillary Clinton’s main opponent, Bernie Sanders, is doing. Will she turn out to be inevitable this time, or will she be derailed again?
Public polls help us understand the emergence and decline of different candidates and also let the public in on the secrets that campaign pollsters and strategists see in their private poll data. If you want to understand why Bush, Rubio, Christie, and Kasich are battling each other for the “third lane” of so-called establishment voters (and have chosen, at least for now, to give frontrunners Trump and Cruz a free ride), check out the public polls.
These insights are also accompanied by a wave of criticism about public polls, and some of this fallout is well deserved. There is a growing number of faulty polls. The public is well advised to check out the sponsorship of a poll, when it was conducted, whether it screens for likely voters, the track record of the organization, and the method of data collection used. Answers to these and many more questions separate good quality public opinion research from the hit-and-run poll-liferation that now characterizes our number-crunching campaign coverage. Poll aggregators that provide an average of the averages are useful, but only if the organization tries to sort out the good polls from the bad and, especially, the ugly.
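To make the sorting point concrete, here is a minimal sketch in Python of the difference between a naive average and one that discounts weaker polls. The poll numbers and quality weights are invented for illustration; they do not come from any real aggregator, and a real organization would derive its weights from track record, methodology, and transparency.

```python
# Hypothetical poll results (candidate support, %) and illustrative quality weights.
polls = [
    {"name": "Poll A", "support": 34.0, "quality": 1.0},  # live-interviewer, probability sample
    {"name": "Poll B", "support": 36.0, "quality": 0.8},
    {"name": "Poll C", "support": 45.0, "quality": 0.2},  # opt-in sample, no likely-voter screen
]

# A naive average treats every poll the same: good, bad, or ugly.
naive = sum(p["support"] for p in polls) / len(polls)

# A quality-weighted average discounts the weaker polls.
weighted = (sum(p["support"] * p["quality"] for p in polls)
            / sum(p["quality"] for p in polls))

print(f"Naive average:    {naive:.1f}%")     # 38.3%
print(f"Weighted average: {weighted:.1f}%")  # 35.9%
```

One outlier of dubious quality moves the naive average by more than two points; the weighted version largely shrugs it off. That, in miniature, is the work an aggregator has to do before an “average of the averages” means anything.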
A word of caution: don’t be thrown by sample size and the margin of error. The margin of error is a statistical concept that largely relates to the number of people interviewed. It is often misunderstood; it is not really an “error” at all but the range around a poll’s findings within which the results would likely fall had you interviewed the entire population. Who you interview, how you interview them, and how you model your data are more significant indicators of quality than the number of people in a poll. Put it this way: if you have a badly constructed sample, the more people you interview, the more confidently wrong your results will be. The bias in your data persists while the margin of error shrinks, making the poll appear more precise and rigorous than it actually is.
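For readers who want to see the arithmetic, here is a short sketch in Python using the standard worst-case formula for a 95 percent confidence level. The sample sizes are arbitrary examples.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error, in percentage points, for a
    simple random sample of size n. Uses p = 0.5, the worst case,
    as most published polls do."""
    return z * math.sqrt(p * (1 - p) / n) * 100

for n in (400, 1000, 4000):
    print(f"n = {n:>4}: +/- {margin_of_error(n):.1f} points")
# n =  400: +/- 4.9 points
# n = 1000: +/- 3.1 points
# n = 4000: +/- 1.5 points
```

Note what the formula does not contain: any term for who was sampled or how. A sample that skews toward one party stays skewed at 4,000 interviews; the shrinking margin of error just makes the bias look precise.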
Unfortunately, there are no foolproof guarantees that the best polls will be right all the time or that a bad poll will always miss the target. In class, I like to tell students that even a broken clock is right twice a day. Public polls are aiming at a moving target. The campaigns don’t take a break once the polls have spoken. Get-out-the-vote efforts, particularly in primary and caucus states, are critical. And there is no copyright on defining a “likely voter.”
So, we are left with lots of poll numbers that, hopefully, present an accurate narrative of campaign dynamics. But accuracy is hard to achieve. There have always been challenges, real and exaggerated, to the accurate measurement of public opinion. And that’s been the case every four years.
This election cycle will present its own unique array of tests. In the current atmosphere of voter frustration and declining response rates, debate will center on modes of data collection. Traditional probability-based polls, which use live interviewers and reach voters on landline and mobile phones, are being joined by a variety of online measurements, some probability-based and others not. It will be interesting to watch how the public opinion field assesses these developments.
Regardless of the mode of data collection, public pollsters worth their salt are striving to be accurate, and transparency helps the serious student of public opinion better understand poll results. But transparency also feeds the criticism that pollsters are “cooking” their numbers to benefit one candidate or political party. Social media certainly contributes to this hammering.
So, we are left with lots of poll numbers that are, hopefully, developed in an honest attempt to be accurate. In the best of worlds, these public polls present a narrative of the campaign that reflects what is going on. If you want precision in predictions, don’t ask public polls to go beyond what they can reasonably do. If you’re looking for guarantees, you’ll have to look elsewhere.