Your online quantitative research respondents may not all be human. Having said that, if you are running a survey with your own customers, employees or other stakeholders, then the chances are that they will be. There is a small chance they may not be, for example if you are offering an incentive. But based on all the surveys of this type that we have run, we think the risk is negligible and not something to worry about.
The risk of non-human respondents really arises with panel surveys. This is not to criticize panel operators, because all the ones we have ever worked with have been highly professional.
It’s just that non-human respondents can really only be identified and dealt with at an individual survey level, rather than on a macro panel level. So, how does this work?
When we run any online quantitative research project with panel sample we include a bot detector at the start of the questionnaire. This is a quick and easy measure, which does not add any time or cost to the survey. It checks each respondent against a list of known bots. New bots may appear from time to time, but nevertheless this protects against the major ones.
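To illustrate the idea, here is a minimal sketch of a start-of-questionnaire bot check. It compares the respondent's browser user-agent string against a list of known bot signatures; the list and function names here are illustrative assumptions, not the actual detector described above, and real detectors maintain much larger, regularly updated lists.

```python
# Illustrative only: a tiny known-bot check on the user-agent string.
# The signature list is a hypothetical sample, not an exhaustive one.
KNOWN_BOT_SIGNATURES = [
    "headlesschrome",   # headless browser automation
    "phantomjs",        # legacy headless browser
    "selenium",         # browser automation framework
    "python-requests",  # scripted HTTP client
    "curl",             # command-line HTTP client
]

def looks_like_bot(user_agent: str) -> bool:
    """Return True if the user-agent matches a known bot signature."""
    ua = user_agent.lower()
    return any(sig in ua for sig in KNOWN_BOT_SIGNATURES)
```

A respondent arriving with a headless-browser user-agent would be flagged before the questionnaire proper begins, which is why this kind of check adds no time for genuine respondents.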
Another method for protecting against non-human respondents is record-by-record inspection of survey results. What we particularly look for are multi-response questions in which a large number of answers have been selected in an unfeasibly short amount of time.
For example, we have seen numerous cases in which respondents selected 15+ answers to a multi-response question within 2-3 seconds. How could they have done this? It is not physically possible.
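The speed check described above can be sketched as a simple per-answer timing threshold. The field names and the 0.5-seconds-per-answer threshold are assumptions for illustration, not values from the source; any real threshold would be calibrated against genuine respondent behaviour.

```python
def flag_speeders(records, min_seconds_per_answer=0.5):
    """Flag records where a multi-response question was answered
    implausibly fast: many options selected in very few seconds.

    `records` is a list of dicts with 'id', 'answers_selected' and
    'seconds_on_question' keys (field names are illustrative).
    The threshold is a hypothetical calibration, not a standard.
    """
    flagged = []
    for r in records:
        n = r["answers_selected"]
        if n > 0 and r["seconds_on_question"] / n < min_seconds_per_answer:
            flagged.append(r["id"])
    return flagged
```

A record with 15 answers selected in 2 seconds works out at roughly 0.13 seconds per answer, well under any plausible human clicking speed, so it would be flagged for inspection.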
Another method of checking for non-humans (or people not concentrating) is to include logic checks within questions. For example, a multi-response question about, say, pet food brands “ever bought” could include a non-existent brand. Any respondent selecting that brand could then be screened out.
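The decoy-brand screen-out can be sketched as a set-intersection test. The brand names below are made up for illustration; in practice the decoy would be crafted to look plausible alongside the real answer options.

```python
# Hypothetical decoy brand planted in the answer list (name is invented).
DECOY_BRANDS = {"Pawsitively Purrfect"}

def passes_logic_check(selected_brands) -> bool:
    """Return False if the respondent selected any decoy brand,
    meaning they should be screened out of the final data."""
    return not (set(selected_brands) & DECOY_BRANDS)
```

A respondent who ticks the non-existent brand, whether a bot ticking everything or a person not reading the options, fails the check and is removed from the sample.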
If you would like to discuss this further please do not hesitate to contact us or read about our online quantitative research services.