By Patrick Murray
This column originally appeared as an Op-Ed on NJ.com on November 4, 2021.
I blew it. The final Monmouth University Poll margin did not provide an accurate picture of the state of the governor’s race. So, if you are a Republican who believes the polls cost Ciattarelli an upset victory or a Democrat who feels we lulled your base into complacency, feel free to vent. I hear you.
I owe an apology to Jack Ciattarelli’s campaign — and to Phil Murphy’s campaign for that matter — because inaccurate public polling can have an impact on fundraising and voter mobilization efforts. But most of all I owe an apology to the voters of New Jersey for information that was at the very least misleading.
I take my responsibility as a public pollster seriously. Some partisan critics think we have an agenda about who wins or loses. I can only assume they have never met a public pollster. The thing that keeps us up at night — our “religion” as it were — is simply getting the numbers right.
Unlike a campaign consultant, my job is not to figure out a candidate’s best path to victory, but to provide an explanation of the public mood as it exists now. Polling continues to do that quite well when we are taking a snapshot of the full population. For example, polls on the impact of COVID and attitudes toward vaccines over the past year and a half provided an accurate picture of shifting behaviors that directly impacted public health.
Election polling is a different animal, prone to its fair share of misses if you focus only on the margins. For example, Monmouth’s polls four years ago nailed the New Jersey gubernatorial race but significantly underestimated Democratic performance in the Virginia contest. This year, our final polls provided a reasonable assessment of where the Virginia race was headed but missed the spike in Republican turnout in New Jersey.
The difference between public interest polls and election polls is that the latter violates the basic principles of survey sampling. For an election poll, we do not know exactly who will vote until after Election Day, so we have to create models of what we think the electorate could look like. Those models are not perfect. They classify a sizable number of people who do not cast ballots as “likely voters” and others who actually do turn out as being “unlikely.” These models have tended to work, though, because the errors balance out into a reasonable projection of what the overall electorate eventually looks like.
Monmouth’s track record with these models, particularly here in our home state over the past 10 years, has been generally accurate within the range of error inherent in election polling. However, the growing perception that polling is broken cannot be easily dismissed.
Monmouth’s conservative estimate in this year’s New Jersey race was an 8-point win for Murphy, which is still far from the final margin. More than one astute observer of polls has pointed out that the incumbent was consistently polling at either 50% or 51% against a largely unknown challenger. That metric in itself should have been an indication of Murphy’s underlying weakness as an incumbent. Still, in the age of polling aggregators, needles, and election betting markets, we tend to obsess more over the margin than over the candidate’s vote share. And we end up assuming that the “horse race” number is more precise than it actually is. This can lead to misleading narratives about the state of the race, as happened in New Jersey this year.
While pundits and the media are hardwired to obsess over margins, we pollsters bear some responsibility too. Some organizations have decided to opt out of election polling altogether, including the venerable Gallup Poll and the highly regarded Pew Research Center, because it distracts from the contributions of their public interest polling. Other pollsters went AWOL this year. For instance, Quinnipiac has been a fixture during New Jersey and Virginia campaigns for decades but issued no polls in either state this year.
Perhaps that is a wise move. If we cannot be certain that these polling misses are anomalies, then we have a responsibility to consider whether releasing horse race numbers in close proximity to an election is making a positive or negative contribution to the political discourse.
This is especially important now because the American republic is at an inflection point. Public trust in political institutions and our fundamental democratic processes is abysmal. Honest missteps get conflated with “fake news” — a charge that has hit election polls in recent years.
Most public pollsters are committed to making sure our profession counters rather than deepens the pervasive cynicism in our society. We try to hold up a mirror that accurately shows us who we are. If election polling only serves to feed that cynicism, then it may be time to rethink the value of issuing horse race poll numbers as the electorate prepares to vote.