Using attention checks as a measure of data quality

What is an attention check?

Attention checks, as described in a seminal paper by Oppenheimer, Meyvis, and Davidenko (2009), are a straightforward way to determine which participants do or don't pay attention to your study instructions.

However, more recent research suggests that attention checks may not be as reliable an indicator of data quality as they seem. Research conducted by Qualtrics (Vannette, 2017) found that excluding participants based only on failed attention checks may actually harm your data quality, because of the demographic and response biases this introduces into the sample. Attention checks have also been criticised for changing participants' behaviour in a study, rather than simply measuring their attention to the stimuli (Hauser & Schwarz, 2015).
 
At Prolific, we strongly endorse the principle of ethical and fair rewards. We have therefore carefully considered our policy on excluding participants based on failed attention checks: any attention check used as a basis for rejection must comply with the guidelines stated below.
 

What is a fair attention check?

The purpose of an attention check should be to test whether a participant has paid attention to the question itself, not merely to the instructions above it. A fair attention check should only be used as a measure of attention when that attention is crucial for valid completion of the task.

  • Here is an example of a fair attention check, which complies with these recommendations:

The colour test is simple: when asked for your favourite colour, you must enter the word puce in the text box below.

Based on the text you read above, what colour have you been asked to enter?


  • Here is an example of an attention check which does not comply with our recommendations:

The colour test is simple: when asked for your favourite colour, you must enter the word puce in the text box below.

What is your favourite colour?


The second example ("What is your favourite colour?") is not a fair attention check: a participant can give a perfectly valid answer (their actual favourite colour) without paying any attention to the instructions above the question, so failing the check does not necessarily indicate inattention.
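If you score attention check responses programmatically, the sketch below shows one way the fair example above could be marked. It is a minimal illustration in Python with a made-up data layout: the column name attention_check_colour and the sample responses are assumptions, not part of any Prolific export. The comparison is deliberately lenient about case and surrounding spaces, so trivial formatting slips alone do not count as failures.

```python
import pandas as pd

# Hypothetical survey export: one row per participant, with the response
# to the fair "puce" check stored in a column named attention_check_colour.
responses = pd.DataFrame({
    "participant_id": ["p01", "p02", "p03"],
    "attention_check_colour": ["Puce", "blue", " puce "],
})

# A participant passes if they entered the instructed word "puce".
# The match is case-insensitive and ignores surrounding whitespace,
# so minor formatting slips are not treated as inattention.
responses["passed_colour_check"] = (
    responses["attention_check_colour"]
    .str.strip()
    .str.lower()
    .eq("puce")
)

print(responses[["participant_id", "passed_colour_check"]])
```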

Note: attention checks must be placed in a part of the study that participants would need to read anyway in order to complete it correctly. For example, they cannot be hidden in repeated, unchanging text, or in text set in an intentionally small font. Attention checks should also not require participants to remember unreasonable amounts of information.

Finally, it's important that participants are explicitly instructed to complete a task in a certain way (e.g. "click 'Strongly disagree' for this question"), rather than being left room for subjective misinterpretation (e.g. "Prolific is a clothing brand. Do you agree?").

If you're unsure whether your attention check is fair, please don't hesitate to contact us.

Can I use attention checks to detect low quality submissions?

Whilst we recommend that you always include at least one fair attention check in any study, you should use multiple checks alongside it to reliably detect low-quality submissions. For example, if a participant fails more than one fair attention check, or appears to be responding at random in addition to failing an attention check, they can justifiably be rejected.

On the other hand, if a participant has answered only one attention check incorrectly, and has submitted an otherwise complete response to your survey, a rejection is not appropriate: they could have made an honest mistake, and a single failed attention check does not necessarily reflect "bad data".
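To illustrate the reasoning above, here is a hedged sketch (again in Python, with made-up column names) that counts how many fair attention checks each participant failed and flags for manual review only those who failed more than one, rather than rejecting anyone automatically on a single failure.

```python
import pandas as pd

# Hypothetical pass/fail results for three fair attention checks
# (True = passed). The column names are illustrative only.
checks = pd.DataFrame({
    "participant_id": ["p01", "p02", "p03"],
    "check_1": [True, False, False],
    "check_2": [True, True, False],
    "check_3": [True, True, False],
})

check_cols = ["check_1", "check_2", "check_3"]

# Count failures per participant.
checks["n_failed"] = (~checks[check_cols]).sum(axis=1)

# Flag for review only participants who failed more than one check;
# a single failed check on its own is not grounds for rejection.
checks["review_for_rejection"] = checks["n_failed"] > 1

print(checks[["participant_id", "n_failed", "review_for_rejection"]])
```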

Payments can, therefore, be made contingent on attention checks, provided they're fair and in line with the above guidance. We do ask you to be considerate when exercising your right to reject submissions on this basis, and to always keep in mind that participants are real people who have spent time on your study :)

Follow these links to read about our thoughts on failing attention checks and deciding who to accept or reject, as well as our blog post detailing other ways in which you can maximise your data quality.


References

Hauser, D. J., & Schwarz, N. (2015). It's a trap! Instructional manipulation checks prompt systematic thinking on "tricky" tasks. SAGE Open.

Oppenheimer, D. M., Meyvis, T., & Davidenko, N. (2009). Instructional manipulation checks: Detecting satisficing to increase statistical power. Journal of Experimental Social Psychology, 45(4), 867-872.

Vannette, D. (2017). Using attention checks in your surveys may harm data quality. Retrieved from https://www.qualtrics.com/blog/using-attention-checks-in-your-surveys-may-harm-data-quality/
