
How to Minimize Survey Bias

 

Looking for new insights to inform your company's decision-making? Survey-based research is a powerful tool when used correctly. Business owners know that making data-based decisions is essential to success, but how do you ensure that you're gathering high-quality data that will help your company reach its goals?

Inaccurate data causes costly blunders in product development, marketing campaigns, and company strategy, eroding customer trust and damaging employee satisfaction. 

So what's a business leader to do? The key is to be aware of the potential for bias in survey-based research and to take steps to minimize it. A dedicated partner like IntelliSurvey expertly programs your surveys so they're on target and reduce survey bias, giving you the best possible data every time. With well-designed surveys that aim to reduce the effects of bias, you'll achieve maximum benefit from your survey investment and gather mountains of reliable and actionable data.

Knowledge is power when it comes to avoiding the pitfalls of bias in surveys. In this article, we'll cover the most damaging types of bias in surveys and how to prevent them from skewing your survey results so you can use your hard-earned data to confidently make decisions for your organization.

Our Survey Bias Cliff Notes:

In this article, we tackle the two biggest types of survey bias: selection bias and response bias.

Both selection bias and response bias impact survey results, impeding researchers' ability to collect reliable and accurate data. Sampling methods such as random, stratified, and quota sampling can help mitigate selection bias. Survey design can reduce response bias through question construction, pre-testing, and respondent anonymity. Below, we discuss types of bias, sample issues, and tips to reduce bias in your survey results.

Selection Bias

Response Bias

Types of Selection Bias and Sample Size Issues

Selection bias, also known as sampling bias, results from unrepresentative samples (a sample is the specific group of people from whom you wish to gather data). We've outlined several leading causes of selection bias below.

1. Convenience Sampling

This common form of selection bias occurs when researchers pick respondents because they are easy to access, rather than because they reflect the appropriate audience for the topic. It is tempting to reach out to people who are nearby or keen to answer. However, that very proximity or motivation often correlates with opinions about the topic and can deeply skew the results.

Let’s take the example of customer satisfaction. If you only survey people who follow your brand on social media, your results will be unreliable. Followers are much more likely to be advocates than the average customer. The satisfaction score from such a survey will be much higher than the true one because of sample selection bias. To find the true satisfaction score, screen your survey sample starting from the general population and ensure representation with quotas. This balances the right mix of advocates, neutrals, and detractors in your study.
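To make the quota idea concrete, here is a minimal sketch (the segment names and target shares are hypothetical, not from any real study): a screener stops accepting respondents from a segment once that segment's quota fills, so even a heavily skewed inbound stream ends up with the intended mix of completes.

```python
import random

# Hypothetical target mix for 100 completes (assumed shares for illustration).
quotas = {"advocate": 30, "neutral": 50, "detractor": 20}
counts = {segment: 0 for segment in quotas}

def accept(segment):
    """Admit a respondent only while their segment's quota has room."""
    if counts[segment] < quotas[segment]:
        counts[segment] += 1
        return True
    return False  # quota full: screen this respondent out

# Simulate an inbound stream skewed toward advocates (as a social-media
# follower base would be); quotas still cap each segment at its target.
random.seed(0)
stream = random.choices(["advocate", "neutral", "detractor"],
                        weights=[70, 20, 10], k=5000)
completes = [s for s in stream if accept(s)]
```

Even though 70% of the inbound stream are advocates, only 30 of them make it into the completes; the quotas, not the source skew, determine the final composition.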

2. The Self-selection Fallacy

When respondents are allowed to self-select into taking a survey, the results aren't trustworthy. Why? Because respondents who self-select are more likely to hold a bias (in either direction) toward the survey topic. On top of that, when respondents receive an incentive to complete the survey, some people who do not really belong in your target audience will be tempted to lie in order to qualify and collect the reward. The answers from these cheaters will distort the results.

To avoid self-selection, use non-leading language until you have confirmed the respondent belongs in your target audience. Don’t disclose the topic of the study in your invitations, and in your screener, use neutral questions that don’t let the respondent guess the topic. For example, don’t ask “Have you purchased shoes in the past 12 months? Yes/No” but rather “Which of the following have you purchased in the past 12 months? Shoes / Clothes / A car / White goods (e.g. fridge, washing machine, etc.) / Groceries / Cinema tickets / Travel tickets / None”. To make it even better, randomize the order in which the answer options are shown to each respondent.
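The per-respondent randomization can be sketched in a few lines (this is an illustrative sketch, not a real survey platform's API; the option list is the one from the example above). Note that the exclusive “None” option is kept anchored at the bottom, which is common practice so it is never buried mid-list:

```python
import random

# Screener options from the shoe-purchase example; "None" stays anchored last.
options = ["Shoes", "Clothes", "A car",
           "White goods (e.g. fridge, washing machine, etc.)",
           "Groceries", "Cinema tickets", "Travel tickets"]

def randomized_screener(rng=random):
    """Return the answer options in a fresh random order for one
    respondent, keeping the exclusive 'None' option last."""
    shuffled = options[:]   # copy so the master list stays untouched
    rng.shuffle(shuffled)
    return shuffled + ["None"]

shown = randomized_screener()  # a new order for each respondent
```

Calling `randomized_screener()` once per respondent removes any order effect: no single option systematically benefits from appearing first.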

3. Non-response Bias

When people who do not respond to the survey differ from those who do (especially in ways relevant to the research question), inaccurate data will abound. For example, younger men drop out significantly more than other demographics as surveys get longer. Even if you start from a representative sample source, you will naturally see a percentage of males that is significantly too low among the people completing a 40-minute survey. Keeping this percentage in line can be done with quotas on completes. Carefully defined quotas are key to minimizing non-response bias and preserving data accuracy.

4. Insufficient Sample Size

Another cause of inaccurate survey results is a sample size too small to extrapolate with confidence to the full target population. The smaller the sample, the more likely the results are to stray from the true population metrics due to sampling error. The larger the sample, the more robust the results.

Picking the right sample size is always a trade-off between minimizing the margin of error of the results and practical constraints like feasibility, timing, and costs.

5. Unfamiliar Content To Your Audience

Your respondents may not possess the background knowledge needed to answer your survey questions accurately, which can skew your results. Adding “not sure” or “undecided” options provides an alternative for audience members who don’t feel comfortable answering.

How To Do It Right?

Selecting a representative sample is the gold standard for robust insights. To be representative, two criteria must be met. First, the sample composition needs to be accurate: the sample should have the right mix of respondents, mirroring the target population. Second, the sample size should be large enough.

For example, suppose you are conducting a survey about the online behaviors of American adults. You will need to compose a sample of adults from all 50 states, with the right distribution of ages, genders, ethnicities, and income levels. The sample size for each question and each sub-group analyzed should be at least n=384 to keep the margin of error below 5% at a 95% confidence level.
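The n=384 figure comes from the standard sample-size formula for estimating a proportion, n = z²·p(1−p)/e², using the most conservative p = 0.5, z = 1.96 for 95% confidence, and a 5% margin of error. A minimal check:

```python
def required_n(margin_of_error, z=1.96, p=0.5):
    """Minimum sample size to estimate a proportion:
    n = z^2 * p * (1 - p) / e^2.
    z = 1.96 gives 95% confidence; p = 0.5 is the most
    conservative (worst-case) assumption."""
    return z**2 * p * (1 - p) / margin_of_error**2

print(round(required_n(0.05)))  # 384 (raw value is 384.16)
```

Halving the margin of error roughly quadruples the required sample, which is why tighter precision targets drive costs up so quickly.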

At IntelliSurvey, we understand representativity. Our expert programmers and field team are here to advise you on fielding methodology that zeros in on the optimal bundle of sample sources for your specific needs. We source from a broad range of panels including consumer, B2B, HCP, and patient surveys in over 50 countries. You can rest assured that your sample is carefully targeted to minimize bias.

What Is Response Bias?

Response bias is when certain factors lead a survey respondent to respond falsely or inaccurately to a survey question. Have you ever encountered a survey question you didn't want to answer with the whole truth? Maybe you feared being judged or didn't want to rock the boat. Now, imagine the same situation but on a larger scale and potentially millions of dollars riding on the answers to those questions. That's the essence of survey response bias.

People want to sound favorable or be seen by others in a certain way. Social pressure and a desire to please the surveyor both come into play. And sometimes, they simply don’t know or remember what their real answer to a question is. Even trustworthy respondents can give wrong answers when placed in the wrong situation.

Types of Response Bias

There are several ways to overcome response bias to ensure accurate survey results. Let's take a look at some of the most common response bias offenders and how to limit them.

1. Observation Bias

Observation bias occurs when respondents skew their answers to align with what they think the survey hopes to prove. For example, people might show interest in product development concepts because they’re the subject of a survey, but they wouldn’t have any interest in the product on the shelf. 

Results are also often skewed by unintentional bias in survey questions. For example, a question like “Don't you think taxes are too high?” leads respondents to think they probably are, while “Do you think taxes are fair?” suggests they are not. Neutral wording with a Likert scale brings better results: “What do you think of the level of taxes?” with answers on a scale from 0 = “Significantly too low” to 10 = “Significantly too high” and a midpoint at 5 = “Fair”.

Best practice: Make sure your questions are clear, concise, and free of loaded language. It doesn't hurt to remind respondents that they should answer honestly, without worrying about what the researcher might want to hear. Introduction screens like “There is no right or wrong answer; we want to understand what you really think” can be effective.

Avoid cues in the survey that point to the study goal. If the respondent cannot guess what you are trying to prove, they cannot skew answers to please you.

2. Social Desirability Bias

This is when respondents submit answers they think are the most socially acceptable. For example, someone might claim to recycle when they don't or say they never spread gossip when they do it all the time. Unchecked, this bias also results in inaccurate information about important metrics like age, income, or education level.

Best practice: Conduct surveys anonymously and design questions neutrally to minimize the impact of social desirability bias.

3. Acquiescence Bias

People tend to agree more than they disagree, and survey respondents are no exception. While acquiescence bias is present to some degree in every culture, it’s stronger in Asia and reaches its apex in the Middle East. This can create meaningless differences across markets in multi-country projects.

Best practice: For countries with strong acquiescence bias, use question formats that do not ask respondents to agree, such as rankings, contra-scales, or MaxDiff.

You can replace agreement batteries by using contra-scales with statements on both sides and a scale in the middle. Instead of asking “Price is more important than quality” on a scale from Strongly Disagree to Strongly Agree, place on the left “Price is the most important” and on the right “Quality is the most important” (this is even more effective if you randomize the sides of the statements).
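A small sketch of both halves of that advice (the field names and 7-point scale here are illustrative assumptions, not a real survey platform's API): randomly swap which statement appears on which side for each respondent, then recode responses back to a common direction before analysis so the randomization doesn't scramble the data.

```python
import random

def contra_scale_item(left, right, rng=random):
    """Present a contra-scale item with its two poles randomly swapped
    per respondent, so neither statement benefits from a fixed side."""
    swapped = rng.random() < 0.5
    poles = (right, left) if swapped else (left, right)
    return {"left": poles[0], "right": poles[1], "swapped": swapped}

def recode(response, swapped, points=7):
    """Recode a 1..points response so a higher score always means the
    original right-hand statement, regardless of the side shown."""
    return points + 1 - response if swapped else response

item = contra_scale_item("Price is the most important",
                         "Quality is the most important")
```

The `swapped` flag must be stored with each response; without it, the reverse-coding step is impossible and the randomization would corrupt the results instead of protecting them.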

Contra-scales, ranking questions, and MaxDiffs (which also establish a rank order) all remove acquiescence bias. They also allow for international comparison across cultures.

4. Non-Response Bias

Sometimes called participation bias, non-response bias occurs when the people who don’t respond to a survey would have answered systematically differently from the people who do respond.

Non-response bias can make surveys non-representative. Surveying as a practice rests on the assumption that the surveyed respondents are representative of the larger population you're interested in. Non-response bias occurs when the group that chooses to open your survey and/or qualifies for it is systematically different from the group you're interested in.

For example, if you're interested in interviewing single parents, non-response bias may occur if extremely busy single parents (who, let's say, are 50% of single parents) do not take the survey.

This can lead to overrepresented or underrepresented groups in your data. 

5. Neutral and Extreme Response Bias 

In an extreme response bias situation, respondents give an answer that portrays a very strong view, even if they do not necessarily feel that strongly about the subject. This can occur easily with satisfaction surveys or with a question that asks if they agree with a specific statement. 

Typically, extreme response bias appears when questions are loaded, or when participants want to appease the surveyors.

Neutral response bias is the polar opposite of extreme response bias. 

In a neutral response bias situation, respondents may answer neutrally for every question. This typically happens with respondents who are uninvested in the survey or do not have time to respond thoughtfully to every question.

Reduce Survey Bias, Every Time

At IntelliSurvey, we’ve developed best-practice methodologies through the thousands of surveys we run every year, ongoing research reviews, and our own research-on-research exercises. We recommend validated designs tailored to each study. Our teams will set up, monitor, and manage your project to minimize the risk of biases threatening your data and provide a solid foundation for decision-making.

Contact us today to discuss your upcoming research studies.
