8/1: Methodology Should Be Methodology, Whether a Public Poll* or an Academic Survey

It’s no secret that academic survey researchers and public pollsters have very different goals for their work. Public pollsters gather data to quickly measure the current environment and find out who thinks what. In contrast, academic researchers ponder why the environment is what it is, and what the implications of their findings are for theoretical approaches in their discipline. I have worked on both sides of this equation, and have been questioned about my career choice quite a bit given my graduate school and postdoctoral training in academia. Even though I work for an academic institute, the perceived focus on public polling and the absence of “professor” in my job title seem to lead to the idea that somehow I “sold out.” But “we’re not so different, you and I.” All credible surveys rely on complex, rigorous methodology, regardless of purpose.

Natalie Jackson

The difference in purpose is best illustrated by comparing public pre-election polls to the American National Election Study (ANES), conducted for every presidential election since 1948. Pre-election polls are ubiquitous prior to elections; if the data were collected more than a few days ago, the numbers are considered “old” and no longer relevant. The ANES, on the other hand, gathers two waves of data, which are processed, cleaned, and carefully combed through prior to public release several months after the election. ANES data are used primarily in academic publications that may emerge years or even decades after the initial data collection. Public polling data chronicle public reactions to the give-and-take of campaigns and events for a 24/7 news cycle, magazines, and popular books.

Public polls often stoke more controversy than academic work, especially election polls, because there is a benchmark against which to verify their accuracy: the actual election returns. Reputations are gained and lost very quickly over one errant poll (which, by the way, is statistically guaranteed to happen to everyone sooner or later simply by the laws of probability). There are clear expectations of transparency in the methods of public polls, and they are dissected by anyone with an opinion and a keyboard.
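A quick back-of-the-envelope sketch of that guarantee, assuming independent polls with nominal 95 percent confidence intervals: the probability that at least one of $n$ polls misses its stated margin of error is

$P(\text{at least one miss}) = 1 - 0.95^{\,n}, \qquad 1 - 0.95^{20} \approx 0.64.$

In other words, a pollster who fields twenty methodologically sound polls still faces roughly a two-in-three chance that at least one lands outside its margin of error purely by chance.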

On the other hand, there are usually few guidelines to provide checks on academic conclusions about how or why people think the way they do, and the vast majority of survey findings in academic work have no quantifiable verification. Academic reputations are gained by publication, which is generally earned through the perceived importance of the findings in a peer review process; academic findings are not usually the subject of public scorn, argument, or debate about “bias,” as the public polls were in the 2012 election cycle.

Despite these differences, the intersection of these two diverse approaches is found in the underlying methodology. How do we best measure what we’re trying to measure? The question is the same regardless of the goal or use of the data. We are united by the desire to get the best quality data, from the best and most representative samples, and to do this within budget constraints that are often outside our control. There is also a need to innovate and stay on the cutting edge of technology while being careful not to step beyond the scientific bounds that ensure the reliability and validity of our samples and polls.

We each think statistically, creatively, and scientifically about survey sampling and data collection. But it doesn’t end there. We think psychologically and ethically about the people who so graciously (or through some artful persuasion) provide us with the information we seek. How do people hear questions? What does each word mean to 1,000 or more different people? How does the order of our words and our questions affect what people hear and how they answer? In interviewer-administered modes, there is the additional question of how the interviewer affects the respondent’s experience and how they hear and answer questions.

These difficulties are shared by everyone who engages in any type of survey research (whether academic, public polling, market research, private polling, or evaluation work) if we care about producing high-quality, reliable, and valid data. We are trying to achieve statistical precision about human behavior by crafting methodologies that weave together sampling theory, statistics, psychology, sociology, political science, evaluation theories, and technology.

Creating this tapestry is the heart of survey research, and what better place to do methods work than one where journalists, partisan devotees with agendas, and the general public are constantly looking over your shoulder, questioning how your polls were done and whether you were “right” about a given election? I deal with the same methodological puzzles, struggles, and need for innovation that I dealt with in my academic positions, only in a fast-paced and very public environment. So please, can we stop pretending that academic survey researchers and public pollsters are so different, or that one is better than the other? We all put our pants on one leg at a time, and we all learned our sample statistics one t-score at a time. Besides, last time I checked, there were a lot of smart and talented PhDs around here too.

 

*“Public polling” and “public pollsters” refer only to those who do nonpartisan, non-candidate-funded work. Funding structures may vary, but public polls are done for the benefit of the general public, not a party, candidate, PAC, Super PAC, or any other politically organized entity.