Research (and personal experience!) demonstrates that people can fall prey to an enormous number of biases. No one, including researchers, is immune to biases and cognitive fallacies!

The international science journal Nature provides this really helpful overview:

[Image: overview of common biases and fallacies, from Nature]

Below we highlight five common biases in research and how to mitigate their impact on your data.

Generally, bias refers to a situation in which the measurement of a variable in your sample systematically deviates from the true value of that variable in the population you want to study. Wikipedia provides a useful overview of these biases. In this section, we discuss a small selection of biases that are particularly important to be aware of when conducting good research. Note, however, that these biases are not limited to online research (for biases specific to online sampling, see the limitations of online samples in this guide).

1. Confirmation bias (or hypothesis myopia)

This is one of the biases that concerns you as a researcher rather than the participants in your study. If researchers believe their stated hypothesis to be true, they may favour responses that confirm it and disregard evidence that would undermine it. This is especially dangerous when it comes to analysing your results, where contradictory findings are easily overlooked.

To avoid this form of bias, it is important to critically check your hypotheses and data for alternative explanations. This is often best achieved by openly discussing your research with colleagues or collaborators. Also, preregistration, and especially registered reports, are effective tools that can help combat confirmation bias.

Source:
https://www.psychologytoday.com/us/blog/science-choice/201504/what-is-confirmation-bias

2. Question-order bias

This bias refers to the possibility that an earlier question influences how a participant answers subsequent questions, for example because it activates certain concepts, ideas, or emotions.

One way to avoid question-order bias is to start with general questions that are easy to answer and not too sensitive, then move on to the more specific and potentially sensitive ones. Asking positive questions before negative ones can also reduce this bias. Finally, counterbalancing can work wonders! Simply make sure you randomise the presentation order of your survey items/questions. This will help you rule out systematic effects of asking certain questions before others.
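As a minimal sketch of what per-participant randomisation looks like in practice, the snippet below shuffles a set of survey items independently for each participant (the item texts and function name are purely illustrative, not taken from any survey platform):

```python
import random

# Illustrative survey items; in practice these come from your questionnaire.
items = [
    "I feel safe reporting hazards at work.",
    "My manager prioritises safety over deadlines.",
    "Safety procedures are followed even under time pressure.",
]

def randomised_order(items, seed=None):
    """Return a fresh, randomly ordered copy of the survey items.

    Passing a seed (e.g. a participant ID) makes the order reproducible,
    which is handy for debugging or re-displaying the same survey.
    """
    rng = random.Random(seed)
    shuffled = list(items)  # copy, so the original order is untouched
    rng.shuffle(shuffled)
    return shuffled

# Each participant gets an independent random order, so no single
# question systematically precedes another across the whole sample.
order_for_participant = randomised_order(items)
```

Because every participant sees a different order, any effect of one question priming another averages out across the sample rather than biasing all responses in the same direction.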

Source: 
http://methods.sagepub.com/reference/encyclopedia-of-survey-research-methods/n428.xml

3. Method bias

Using only one method to measure several constructs can result in so-called method bias. This may be due to the phrasing of questions and instructions, or to the answer format chosen. The issue is most concerning when you rely solely on self-reported data.

Let’s say that you want to understand whether safety climate (i.e., the employees’ perception of the working environment and practices regarding safety) can predict risk behaviour of employees. You measure both the predictor and the outcome variable at the same time using two questionnaires. 

The consequences of this are twofold. First, construct reliability and validity cannot be estimated correctly, because the variance resulting from the method cannot be separated from the systematic variance caused by the construct itself. Second, the relationship between two constructs can be misinterpreted, because method bias can make the relationship appear stronger or weaker than it actually is.

Applied to our example, this means: you cannot really be sure that you are actually measuring safety climate and risk behaviour, because your design cannot tell which part of the variance in risk behaviour is explained by safety climate, and which part is due to the fact that you measured everything at a single time point with the same participants. Furthermore, if you found a very strong relationship between safety climate and risk behaviour, you could still not be entirely sure that the relationship is really that strong, because part of it may be due to the fact that you used questionnaires only.

Method bias can, however, be minimized in four ways:

  1. Separate the measurement of the independent and dependent variable. This separation can be achieved through time (a delay), space (physically separating the items, e.g. placing them on different pages or far apart within the study to weaken the association), or psychologically (e.g., using a cover story that does not suggest a relationship between the variables you measure). 
  2. Change the response format of the questionnaires, for example by avoiding the same response scale for all the constructs that you measure. For example, when asking about employees’ risk behaviour, you could use a frequency scale ranging from 1 (never) to 3 (always). To assess safety climate, you could use a 5-point Likert scale measuring the extent of employees’ agreement with certain statements about safety climate. 
  3. Try to avoid ambiguity in your items by providing examples whenever necessary. When your participants know what exactly they are being asked, they will be more likely to answer the question correctly instead of just selecting the neutral answer option. 
  4. Use other sources of data for the construct you are trying to measure. For example, if you want to investigate the relationship between safety climate and risk behavior, using official accident statistics or report rates of incidents can be useful sources for these variables. 

For more information: 

Podsakoff, P. M., MacKenzie, S. B., & Podsakoff, N. P. (2012). Sources of method bias in social science research and recommendations on how to control it. Annual Review of Psychology, 63, 539–569.

4. Social desirability bias

This bias refers to the phenomenon that participants give the answers they feel they should give, because those answers are more socially acceptable than their true behaviour or attitudes.

For example, numerical answers to the question “How often do you exercise per week?” may be much higher than they truly are, because people go for a run far less often than they are ready to admit. The social desirability bias is especially a problem when it comes to sensitive questions, or questions where there is an obvious ‘right’ choice favoured by society.

Please note that there may be cultural differences in what constitutes a ‘right’ choice in a certain society; see, for example, research on individualism vs. collectivism or on tight vs. loose norms across cultures. Also note that there is some evidence that social desirability bias may actually be weaker in anonymous online studies, which is good news for online research!

Source:
http://methods.sagepub.com/reference/encyclopedia-of-survey-research-methods/n537.xml

5. Selection bias

When participants volunteer to take part in a study, they are deliberately choosing which studies to take. Consequently, it is possible that the people who participate in your study differ systematically from the wider population, for example because they are particularly interested in the topic of your survey.

For an overview of the methods that can be used to reduce the selection bias, please see this article.

Pro tip: how can you detect participants who aren’t paying attention? Our data analyst Jim Lumsden has provided some practical advice on how to improve your data quality.

 