The following piece by Gary Sherwood, VP of Client Solutions – Politics & Advocacy, was published in The Hill on 11/17/2016.

Traditional polling began to falter in 2012, but no one was prepared to sound the alarm. Now, after the 2016 election, there is little room to deny that traditional polling has run its course.

As the dust starts to settle, pollsters, campaigns and candidates are looking toward what’s next. How will campaigns track voters, find persuadables and target supporters? How will campaigns know what motivates voters: the values behind why they decide to take, or not take, certain actions or support certain causes and candidates?

The first step is to consider why the polling failed.

Obsolete notions of getting people to pick up a phone: The evolution of human behavior and technology makes the idea of calling people on the phone, whether manually or with automated random dialing technology, outdated, and it often skews polling results.

Forty years ago, if the phone rang, you picked it up. Why? Because you had no way of knowing who was calling. Today, however, the cost of a cell phone has fallen so significantly that people are canceling their landlines.

When you add the fact that most of an individual’s personal communication takes place through text messages and apps, answering a call from a number you don’t recognize becomes very unlikely.

Decline in people willing to answer questions: The second unsettling factor is the rapid decline in people’s willingness to engage in conversation with a stranger asking them questions.

In the 1970s, an 80 percent response rate was considered acceptable. By 1997, the response rate fell to around 35 percent and by 2014 it was at 9 percent, according to Pew Research. Currently, it’s at 5 percent.

Limitations from “traditional” sample sizes: There are 153 million registered American voters. However, the average poll has a sample size of 1,000 adults. If those 1,000 adults answer truthfully and are a random sample from the population, then it is possible to accurately estimate the distribution of responses in the population at large.

If those assumptions are violated, then these samples are much too small and the data too limited to account for the added noise.
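The arithmetic behind that 1,000-person claim is the standard margin-of-error formula for a simple random sample; the sketch below is a textbook calculation, not anything drawn from the piece itself, and it only holds when the random-sampling and truthful-answer assumptions are met.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95 percent margin of error for a simple random sample of size n,
    assuming truthful answers and an observed proportion p.
    p = 0.5 is the worst case (largest margin)."""
    return z * math.sqrt(p * (1 - p) / n)

# A random sample of 1,000 gives roughly a +/- 3.1-point margin of error,
# which is why such small samples can describe 153 million voters at all.
print(round(100 * margin_of_error(1000), 1))
```

Note that the margin shrinks only with the square root of the sample size, so cutting it in half requires quadrupling the sample; and none of this accounts for the nonresponse and coverage problems described above.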

Limitations on “signal” in polling data: Polling generally addresses a few issue positions and captures a handful of demographic and socioeconomic variables. The paucity of descriptive data, along with small sample sizes, means traditional polling does not provide enough information to generalize into highly accurate predictive models.

While these data may describe the important characteristics of a candidate’s base, they cannot accurately predict whether specific individuals will be supportive on Election Day.

Relying on a static moment in time: Traditional polls are a static snapshot of the respondent’s opinion in the moment the question is asked, and that’s it. Although many voters have a firm and long-formed opinion on politics and political candidates, other voters’ views are constantly evolving.

That means that even the best traditional public opinion poll is only a view of public opinion at that particular moment in time, and it is nearly impossible to assess the changing opinions of many Americans, much less understand the source of their conflict.

The inability to identify likely voters: It is not uncommon for 60 percent of those surveyed in a poll to report that they definitely plan to vote in an election, and then only 40 percent actually turn out.

Pollsters have to guess, in effect, who will actually vote, and organizations construct “likely voter” scales from respondents’ answers to maybe half a dozen questions, including how interested they are in the election, how much they care who wins, their past voting history and their reported likelihood of voting in this particular election.
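A scale of the kind described above can be sketched as follows. Everything here is a hypothetical illustration: the question names, the 0–2 scoring, and the cutoff are invented for the example and do not reproduce any pollster's actual instrument.

```python
def likely_voter_score(answers):
    """Sum a respondent's answers to screening questions into a crude
    likelihood-of-voting scale. Each question is scored 0 (low) to 2 (high).
    Questions and scoring are hypothetical, not a real pollster's scale."""
    questions = [
        "interest_in_election",      # how interested they are in the election
        "cares_who_wins",            # how much they care who wins
        "voted_last_election",       # past voting history
        "self_reported_likelihood",  # reported likelihood of voting this time
    ]
    return sum(answers.get(q, 0) for q in questions)

respondent = {
    "interest_in_election": 2,
    "cares_who_wins": 2,
    "voted_last_election": 0,        # did not vote last time
    "self_reported_likelihood": 2,
}

# Classify as a "likely voter" only above some cutoff, e.g. 6 of 8 points.
print(likely_voter_score(respondent) >= 6)
```

The weakness is visible in the sketch itself: the cutoff is a guess, and the self-reported inputs are exactly the answers that, as the turnout gap above shows, voters routinely get wrong.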

This election exposed significant flaws in the data-driven campaign paradigm. Campaigns will need to address three concerns to win elections moving forward.

Campaigns will need to utilize different data: Voter files and demographics are useful for general characteristics, but these variables do not contain enough information to drive highly accurate predictive models by themselves.

Similarly, traditional polling is not robust to violations of the underlying assumptions. These traditional data sources do not scale nor do they address the core values, beliefs, and motivators of individual voters.

Campaigns must abandon static snapshots in favor of dynamic real-time models: It’s estimated that between 5 and 15 percent of the electorate oscillate between positions until Election Day. Traditional campaigns are unable to detect the sentiment shifts in these individuals, but dynamic models can identify them within days of a change of mind.

Campaigns that succeed in keeping these voters on message are likely to win a clear plurality of the vote across the electoral map, effectively dispensing with razor-thin margins or simply racing to 51 percent of the vote.

Campaigns will need to reject exclusively in-house data and analytics solutions: Campaign analytics is fundamentally an exercise in objectively assessing outcomes. Given the high stakes and the staff’s passion, it’s impossible to ensure that the analysis group can maintain the level of detachment necessary to confront realities on the ground, especially when those involved are drawn from similar backgrounds and experiences.

Addressing these needs, our model was 94 percent accurate on election night and correctly forecast the Rust Belt wins in Ohio, Michigan and Wisconsin. It’s clear that, from now on, campaigns will need to utilize data differently.

On a fundamental level, campaigns will need to have data that scales and allows them to get to the core values, beliefs and motivators of individual voters in order to track how attitudes, positions and behaviors shift on a continuous basis.