
How Experts Are Optimizing Survey Data

 

Today we wrap up our five-part blog post series on the e-book Overcoming the Biggest Threats to Market Research: Bad Data & Bad Actors. Over the past ten weeks, we’ve covered everything from the erosion of good data to the growing sophistication of survey farms and other bad actors. Continue reading to learn how the best researchers in the industry put modern data cleaning to work. And if you’ve missed our previous posts, you can start at the beginning of this series with “The Modern Slump in Data Quality.”

Even with the best survey design and sampling plan, some amount of bad data seeps into every survey. Strong data cleaning practices are the ultimate safety net to filter out unwanted responses.

Cheaters, fraudulent respondents focused on qualifying for surveys to collect easy rewards, have grown increasingly sophisticated at dodging controls. As researchers innovate and develop new best practices in screening and design to filter out bad responses, cheaters learn and adjust their survey-taking methods accordingly, creating a seemingly endless cat-and-mouse game. For this reason, many conventional approaches to data cleaning simply don’t cut it anymore.

What's Wrong With the Old Way of Data Cleaning?

It’s not that conventional survey design and data-cleaning techniques should be thrown out completely. In fact, traditional methods should still be used to an extent – just not exclusively.

Conventional data cleaning approaches involve a lot of manual work, which increases the amount of time it takes to get to insight while potentially decreasing accuracy.

In an attempt to minimize and expedite this manual work, the industry has seen a wave of algorithmic approaches to data cleaning. Either manually or through software solutions, many researchers use algorithms to quickly sweep results for flags and tell-tale signs of an inauthentic or dishonest respondent. Below are a few classic examples of what algorithmic data cleaning filters out, followed by a sketch of how such checks might be automated:

  • Speeders: respondents who complete all questions much faster than the median length of the interview
  • Straight-liners: participants who select the same answer across several questions or engage in a similar type of suspicious response pattern
  • Gibberish or nonsense responses: open-ended questions with answers that clearly weren’t thoughtfully provided
  • Technology flags: certain IP address activity can be a clear sign that a cheater is participating
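
The e-book doesn’t prescribe code, but a minimal sketch of how these classic checks might be automated with pandas could look like the following. The column names (duration_sec, ip_address, and the grid and open-end columns) and the thresholds are illustrative assumptions, not IntelliSurvey’s actual criteria.

```python
import pandas as pd

def flag_suspect_responses(df: pd.DataFrame, grid_cols: list[str],
                           open_end_col: str, duration_col: str = "duration_sec",
                           ip_col: str = "ip_address") -> pd.DataFrame:
    """Return a copy of df with boolean columns for a few classic quality flags."""
    out = df.copy()

    # Speeders: completion time well below the median interview length.
    out["flag_speeder"] = out[duration_col] < 0.5 * out[duration_col].median()

    # Straight-liners: the same answer selected across an entire question grid.
    out["flag_straightliner"] = out[grid_cols].nunique(axis=1) == 1

    # Gibberish open ends: extremely short answers or strings with no vowels.
    text = out[open_end_col].fillna("").astype(str)
    out["flag_gibberish"] = (text.str.len() < 5) | ~text.str.contains(r"[aeiou]", case=False)

    # Technology flags: multiple completes arriving from the same IP address.
    out["flag_duplicate_ip"] = out[ip_col].duplicated(keep=False)

    return out
```

In practice, a researcher would likely review any row where one or more of these flags is set rather than delete it automatically, since each check produces false positives on its own.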

While this list is a good start, cheaters know that researchers filter out these archetypes. They have developed more sophisticated ways to work around them. 

For this reason, innovative researchers rely on additional data-cleaning techniques to minimize how many cheaters make their way through. By fine-tuning their approach, researchers have an opportunity to outsmart inauthentic and dishonest participants.

How Modern Data Cleaning Picks Up the Slack

Instead of relying solely on algorithms and manual review, more advanced data-cleaning methodologies apply a diverse array of techniques, including scoring the probability that a response comes from a dishonest actor. With over twenty markers used to score survey responses, modern techniques established by market research experts at IntelliSurvey do a better job of validating respondent answers.
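
The specific markers and weights IntelliSurvey uses aren’t published, so the following is only a hedged sketch of the general idea: each marker contributes a weight to a combined score, and responses above a threshold are reviewed or removed. The marker names, weights, and threshold here are hypothetical placeholders.

```python
# Hypothetical markers and weights; a real system would use 20+ markers.
MARKER_WEIGHTS = {
    "flag_speeder": 0.30,
    "flag_straightliner": 0.25,
    "flag_gibberish": 0.25,
    "flag_duplicate_ip": 0.20,
}

def fraud_score(flags: dict[str, bool]) -> float:
    """Combine boolean quality flags into a 0-1 score; higher means more suspect."""
    return sum(w for marker, w in MARKER_WEIGHTS.items() if flags.get(marker))

# A respondent tripping the speeder and gibberish checks scores 0.55 and
# might be queued for manual review rather than removed outright.
print(fraud_score({"flag_speeder": True, "flag_gibberish": True}))  # 0.55
```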

Weighted data can be compared against other responses to eliminate results that likely come from fraudulent sources. Other techniques, like comparing answers from multiple sample providers—an industry best practice to reduce risk—further improve data quality by exposing data-source-level biases. This multi-layered approach to data cleaning creates additional controls and barriers against infiltration.
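
The e-book doesn’t specify how cross-provider comparisons are run. One common way to surface data-source-level bias, shown here purely as an assumption-laden sketch, is to test whether the distribution of a key answer differs by sample provider; the column names are illustrative.

```python
import pandas as pd
from scipy.stats import chi2_contingency

def provider_bias_pvalue(df: pd.DataFrame, provider_col: str, answer_col: str) -> float:
    """Chi-square test of independence between sample provider and a key answer.

    A very small p-value suggests the providers are delivering systematically
    different respondents for this question and deserve a closer look.
    """
    table = pd.crosstab(df[provider_col], df[answer_col])
    _, p_value, _, _ = chi2_contingency(table)
    return p_value
```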

To learn more about how researchers leverage advanced techniques to improve survey data quality, download our free e-book, Overcoming the Biggest Threats to Market Research: Bad Data & Bad Actors.

 


 
