By Lauren Kreisberg, Research Director

Ah, spring cleaning! Every once in a while it’s nice to conduct a grand cleanout – whether it’s at home, at work, or in a survey. With the release of Resonate’s newest wave of data, we’ve done just that!

As we look towards making our data more transparent and more accessible to our clients, the primary research team undertook an initiative to review every question in our survey. We gave long-overdue attention to areas where we could provide more value. Better research design, greater breadth of data, or simply better wording – we covered all three.

In this “Survey Says” post, I’d like to talk a little about the reasoning behind two of the major changes we made.

Scale vs. Priority

One change we made was to move away from many of our rating-scale questions and instead ask respondents to prioritize what’s important to them. When asked to rate a series of attributes about themselves, a company, or a process, respondents found it hard to distinguish the relative importance of each option; everything became important to them. While that might be true when each attribute is rated independently, it didn’t help our clients learn about and target different types of people.

For example, who are the people who care most about having a knowledgeable sales staff in their local store versus those who care most about a clean store?

To answer those types of questions, we asked respondents to choose the subset of attributes that most defined their viewpoint. This required them to consider the attributes together, rather than evaluate each one in isolation. The resulting data showed greater differentiation between attributes in these types of questions.
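To make the contrast concrete, here’s a quick sketch with entirely made-up numbers (the attributes, respondents, and ratings are illustrative examples, not Resonate survey data). Independent ratings cluster near the top of the scale, while forcing each respondent to pick their top two spreads the attributes apart:

```python
from collections import Counter

# Hypothetical 1-5 ratings from five shoppers -- everything looks important:
scale_ratings = {
    "knowledgeable staff": [5, 5, 4, 5, 4],
    "clean store":         [5, 4, 5, 5, 5],
    "low prices":          [4, 5, 5, 4, 5],
    "fast checkout":       [5, 5, 5, 4, 4],
}
means = {attr: sum(r) / len(r) for attr, r in scale_ratings.items()}
# Every mean lands between 4.4 and 4.8 -- little room to differentiate.

# The same five shoppers, forced to choose the two attributes that matter most:
top_two_picks = [
    ["knowledgeable staff", "low prices"],
    ["clean store", "low prices"],
    ["low prices", "fast checkout"],
    ["knowledgeable staff", "low prices"],
    ["clean store", "low prices"],
]
pick_counts = Counter(pick for picks in top_two_picks for pick in picks)
# Counts now separate clearly: low prices leads, fast checkout trails.
```

Both formats come from the same people, but only the forced-choice version tells you who cares most about a knowledgeable staff versus a clean store.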

Covering All Sides

When talking about public affairs and political issues on which a respondent might choose sides, it’s common to frame questions in terms of support or opposition. However, a client tipped us off to a potential shortcoming in this design – when the client is on the opposing side, their strategy starts to sound negative. Because clients often use our data to serve their own clientele, we want to make it easy to incorporate Resonate in a positive light.

We also found that support-and-opposition questions evaluated only one side of an issue. Take tax increases, for example. Asking about support for a taxation plan doesn’t allow the respondent to indicate whether they oppose the tax altogether or are fine with the tax as is. Instead, it provides a viewpoint only on increasing the tax.

To solve this problem, we began redesigning many of our issue batteries to provide a more holistic view of the respondent’s thinking across policy matters. In return, we got data on more outcome measures without adding any time to the survey.
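As a rough illustration of the redesign (the question text and answer options below are our own examples, not Resonate’s actual survey wording), a single holistic item can capture several outcomes that a support/oppose item collapses into one:

```python
# One-sided framing: only measures views on *increasing* the tax.
one_sided = {
    "question": "Do you support or oppose increasing the state sales tax?",
    "options": ["Support", "Oppose"],
}

# Holistic framing: the same single question now distinguishes respondents
# who want the tax raised, kept as is, lowered, or removed entirely.
holistic = {
    "question": "Which best describes your view of the state sales tax?",
    "options": [
        "It should be increased",
        "It should stay as it is",
        "It should be decreased",
        "It should be eliminated",
    ],
}
```

An "Oppose" answer in the first design could mean any of the last three options in the second – which is exactly the information the one-sided framing throws away.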

What Does It All Mean?

As the largest primary research platform in the U.S., we are well aware that there is a finite number of survey-respondent minutes available at any given time. Add in quality measures, such as screening for thoughtful responses and eliminating duplicates, and that number drops further. To make the most of this fixed resource, we had to choose between dropping questions and getting more value from them. For us, the answer is easy. If we can extract more value for ourselves and our clients, that’s where our priorities lie.