Fighting Survey Farms: The Battle for Accurate Business Intelligence | IntelliSurvey
Continuing our blog post series on our new e-book, Overcoming the Biggest Threats to Market Research: Bad Data & Bad Actors, this week we look at survey farms: what they are, how researchers can identify their responses, and how to keep fraudulent data from influencing business decisions.
Across the globe, fraudulent individuals misrepresent themselves in surveys to gain access to rewards.
Individuals like this pose a major problem for research organizations and the businesses that contract them. Not only do these fraudsters consume incentives and researcher time, they also corrupt datasets with non-answers.
Beyond these individual malicious actors lies a larger problem that can pose an even greater threat to survey data quality: survey farms.
What is a Survey Farm?
Survey farms are operations that try to complete as many surveys as possible using highly organized, modern processes and technology. They may be staffed by groups of real people, or they may use bots. While the full magnitude is unknown, some estimates suggest that anywhere from 20-30% of survey respondents come from farms.
While early bots were sometimes easy to spot in data, technological advances have made these false respondents increasingly difficult to detect and remove.
Farm bots around the world use a variety of techniques, including programmed lag, to disguise the amount of time a bot spends on any given question. Other red flags, such as suspicious IP addresses and regional markers, are easily masked with modern tools like proxies and VPNs. Fraudsters have even learned to program a bot's answers so that they vary and appear authentic.
But not every survey farm relies heavily on bots; many also take surveys manually. Imagine an environment similar to a call center with 3-20 people in it. At the lowest level, these bad actors work together to qualify for and respond to surveys for financial rewards. But these farms can also be quite sophisticated: threads on Reddit have emerged inviting users in target markets to 'earn passive income' by creating accounts with a number of panel sites. The farm then takes surveys on the account owner's behalf and shares the revenue with them.
This ever-shifting landscape locks researchers and survey farms in an evolving and escalating cycle of prevention and circumvention. As researchers introduce new protections, fraudsters find new methods of infiltration.
How Are Researchers Combating Survey Farms?
Researchers can minimize the amount of survey farm data that enters their surveys with a combination of good survey design and data cleaning.
Researchers can leverage survey design best practices to identify and block dishonest actors from completing surveys. Strong screeners serve as the first line of defense, filtering out unwanted respondents early to reduce cost and save time.
Attention checks, knowledge checks, and other controls can be added to surveys as a way to spot bad actors. These checks validate that individuals are who they claim to be while also flagging unusual behavior for researchers.
During and after a survey, researchers should be actively cleaning data to minimize the impact of survey farms. This is a delicate art, where good answers can easily be thrown out with the bad if researchers are not careful.
As a best practice, data cleaning should combine automated and manual reviews. Relying too heavily on one or the other can impact quality and increase costs.
While algorithms can catch speeders—those who rush through answers instead of reading questions thoroughly—they can also remove important demographics from the results. Young males, for example, tend to spend less time on answers than other groups, which an algorithm can easily mistake for fraudulent behavior.
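One way to reduce this kind of false positive is to judge speed relative to a respondent's own demographic group rather than against a single global cutoff. The sketch below illustrates the idea; the field names (`group`, `seconds`) and the 0.4 ratio are illustrative assumptions, not IntelliSurvey's actual method.

```python
from statistics import median

def flag_speeders(responses, ratio=0.4):
    """Flag respondents whose completion time falls below a fraction of
    their own demographic group's median, so that naturally fast groups
    (e.g. young males) are not penalized by a single global cutoff.
    Field names 'group' and 'seconds' are hypothetical."""
    # Collect completion times per demographic group.
    by_group = {}
    for r in responses:
        by_group.setdefault(r["group"], []).append(r["seconds"])
    medians = {g: median(ts) for g, ts in by_group.items()}
    # A respondent is a suspected speeder only relative to their peers.
    return [r for r in responses if r["seconds"] < ratio * medians[r["group"]]]

sample = [
    {"id": 1, "group": "18-24M", "seconds": 180},
    {"id": 2, "group": "18-24M", "seconds": 200},
    {"id": 3, "group": "18-24M", "seconds": 60},   # fast even for this group
    {"id": 4, "group": "45-54F", "seconds": 420},
    {"id": 5, "group": "45-54F", "seconds": 400},
]
flagged = flag_speeders(sample)  # only respondent 3 is flagged
```

Here respondent 3 is flagged because 60 seconds is far below the 18-24M group's own median, while the group's generally faster pace is left alone.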
Data cleaning should pair technology with researcher judgment in the quest to spot bad actors. Instead of simply removing every questionable respondent from a survey, modern approaches to data cleaning assign probability scores based on a wide array of factors. From there, researchers can set weighted thresholds that balance filtering out cheaters against the risk of discarding legitimate data.
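The scoring idea can be sketched as a weighted combination of risk signals, with thresholds separating automatic removal from manual review. The signal names, weights, and cutoffs below are assumptions for illustration, not a published scoring model.

```python
def fraud_score(flags, weights):
    """Combine boolean risk signals into a single score between 0 and 1
    by summing the weights of the signals that fired and normalizing.
    Signal names and weights are illustrative assumptions."""
    total = sum(weights.values())
    raised = sum(w for name, w in weights.items() if flags.get(name))
    return raised / total

WEIGHTS = {
    "speeding": 0.2,         # unusually fast completion
    "straightlining": 0.2,   # identical answers down a grid
    "geo_mismatch": 0.3,     # IP region disagrees with claimed location
    "duplicate_device": 0.3, # device fingerprint seen in prior completes
}

respondent = {"speeding": True, "straightlining": False,
              "geo_mismatch": True, "duplicate_device": False}

score = fraud_score(respondent, WEIGHTS)
# Rather than discarding anyone with a single flag, remove only high
# scorers and queue borderline cases for manual review.
REMOVE, REVIEW = 0.7, 0.4
decision = "remove" if score >= REMOVE else "review" if score >= REVIEW else "keep"
```

Two flags out of four fire here, yielding a mid-range score that routes the respondent to manual review instead of automatic deletion, which is the point of a weighted-threshold approach.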
To get a more comprehensive look at how research organizations clean survey farm data from survey results, download our free e-book, Overcoming the Biggest Threats to Market Research: Bad Data & Bad Actors.