Online quantitative research methods in market research consist, in practice, of one method of data collection: the survey. Several other activities are involved in setting up and running a survey. These comprise:
sample design (source, composition and size)
fieldwork (the running of the survey)
preparation and analysis of the results
All online quantitative research projects begin with a brief, provided by the client, which details the aims and objectives of the research.
Usually, surveys are designed to take anywhere from 5 to 15 minutes to complete. This equates, roughly speaking, to 15-45 questions in the questionnaire, assuming a mix of single- and multi-response closed questions, grid or matrix questions, and open-ended questions. That’s because the industry works on the assumption that respondents answer roughly 3 questions per minute. Of course, this depends on the nature of the questions, but it is a useful rule of thumb.
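The 3-questions-per-minute rule of thumb above can be sketched as a one-line calculation. The function name and default rate are illustrative assumptions, not an industry-standard formula.

```python
# Rough sketch of the 3-questions-per-minute rule of thumb.
# The default rate is the rule of thumb described in the text.

def estimate_loi_minutes(num_questions: int, questions_per_minute: float = 3.0) -> float:
    """Estimate length of interview (LOI) from the question count."""
    return num_questions / questions_per_minute

# A 30-question survey lands in the middle of the 5-15 minute range:
print(estimate_loi_minutes(30))  # -> 10.0
```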
Questionnaires may occasionally be longer than 15 minutes, but if they are, there can be concerns about the quality of the results due to respondent fatigue.
Questionnaire design is a large subject in its own right, so we do not cover it here.
Surveys need respondents. These may be a company’s own customers or employees, in the case of satisfaction research for example. But very often a company will wish to undertake a survey with members of the public rather than its own customers. This could be for a brand awareness study, for example. In these cases it is usual to use an “access panel” to source the sample. Access panels are essentially databases of thousands of people who agree to take part in online surveys in return for incentives, which are often given in the form of points.
Sample composition and size
Usually a survey will have a sample size (the number of people who complete the questionnaire) in the hundreds or possibly thousands. A popular sample size is 1000 for surveys that are designed to “represent” the population, and the results of which may be published. But many surveys, such as those for concept or pack testing, will have smaller sample sizes, because the results are for internal use by the client (and a smaller sample means lower cost).
In terms of structure, the sample may need to be “nationally representative” of a population (by age, gender, region, etc.). This is achieved using quotas built into the online questionnaire. Essentially, these count the number of completed questionnaires for specified demographic groups; once a quota is full, no more respondents from that group can take part. For example, say the required sample for a survey is 500 men and 500 women. The quota control on men will prevent the 501st man from completing the questionnaire, and the same happens with the quota on women.
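The quota logic above can be sketched as a simple counter per demographic group. The class and group names here are illustrative assumptions; real survey platforms implement this internally.

```python
# Minimal sketch of quota control: count completes per demographic group
# and turn respondents away once a group's quota is full.

class QuotaControl:
    def __init__(self, targets: dict):
        self.targets = targets
        self.completes = {group: 0 for group in targets}

    def can_enter(self, group: str) -> bool:
        """Allow a respondent in only while the group's quota is open."""
        return self.completes[group] < self.targets[group]

    def record_complete(self, group: str) -> None:
        self.completes[group] += 1

quotas = QuotaControl({"men": 500, "women": 500})
for _ in range(500):               # 500 men complete the survey...
    quotas.record_complete("men")
print(quotas.can_enter("men"))     # -> False: the 501st man cannot take part
print(quotas.can_enter("women"))   # -> True: the women's quota is still open
```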
Very often a survey sample will need to be with a particular type of respondent, rather than just being nationally representative. For example, it may be with purchasers of a particular type of product. This is achieved using “screener” questions positioned early on in the questionnaire. For example, if the survey is to be with current university students, then there could be a screener question asking about current occupation, with one of the answers being “university student”. Anyone who selects that answer will be able to continue through the questionnaire, but anyone who does not will be screened out. There is an art to writing screener questions, to make sure they are effective and cannot be recognised for what they are by respondents, who might otherwise answer strategically in order to qualify and receive the incentive at the end.
Even in that case, however, usually there will be some requirement about the demographic structure of the sample. The correct demographic balance is achieved using “quotas”.
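The screening step above amounts to a simple membership test on the respondent’s answer. The occupation options and qualifying answer below are hypothetical, following the university-student example in the text.

```python
# Illustrative sketch of a screener question for a survey of current
# university students. Answer options are hypothetical.

OCCUPATIONS = ["working full time", "working part time",
               "university student", "retired", "other"]
QUALIFYING = {"university student"}

def screen(answer: str) -> bool:
    """Return True if the respondent continues, False if screened out."""
    return answer in QUALIFYING

print(screen("university student"))  # -> True: continues to the survey
print(screen("retired"))             # -> False: screened out
```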
Fieldwork refers to the running of a survey. It includes sending survey invitations and, possibly, reminders, usually by email. Typically, invitations are sent out in batches rather than all in one go, for various reasons. For example, if all the invitations are sent at once during the working day, then people who do not work are more likely to answer than people who do, which introduces bias into the sample.
In addition, the batches of email invitations will generally be sent to specific target respondents, rather than just randomly. For example, in a survey targeting men and women, the fieldwork manager will often target invitations at the men first, because they are usually less inclined to answer questionnaires, so it is best to secure their answers early.
The fieldwork will usually begin with a soft launch, which aims to achieve 10% of the desired final sample. For example, if the client requires a sample size of 200 completes, then the soft launch would aim to achieve 20 completes. The soft launch has several purposes. Firstly, it is used to check that the questionnaire takes the expected amount of time to complete, known as the length of interview (LOI). Secondly, it is used to check that the incidence rate (IR), the proportion of entrants who qualify for the survey, is as expected. And finally, it is used to check that there are no mistakes in the questionnaire and that the questions make sense to respondents, and to make any final tweaks.
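The soft-launch checks above can be sketched as a few small calculations: the 10% target, the observed LOI, and the IR. Function names and thresholds are illustrative assumptions.

```python
# Hedged sketch of soft-launch checks: target 10% of the final sample,
# then compare observed LOI and incidence rate (IR) against expectations.

def soft_launch_target(final_sample: int, fraction: float = 0.10) -> int:
    """The 10% rule described in the text."""
    return round(final_sample * fraction)

def median_loi(durations_minutes: list) -> float:
    """Median completion time observed during the soft launch."""
    ordered = sorted(durations_minutes)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

def incidence_rate(qualified: int, screened_out: int) -> float:
    """Share of survey entrants who pass the screener questions."""
    return qualified / (qualified + screened_out)

print(soft_launch_target(200))        # -> 20 completes, as in the example
print(median_loi([8.0, 9.5, 12.0]))   # -> 9.5 minutes
print(incidence_rate(20, 60))         # -> 0.25
```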
Online surveys are “self completion” in the sense that there is no interviewer present to administer the questionnaire. Respondents read the questions themselves, and then give their answers. Quality problems can arise from respondents answering too quickly, without reading the questions properly. Or respondents may try to complete the questionnaire several times, in order to increase their chances of winning a prize. Due to these risks it is usual to inspect the results of a survey either during or after fieldwork, to detect and remove any “dirty” data.
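The data-cleaning step above can be sketched as two checks: flagging “speeders” who finished implausibly fast, and dropping repeat attempts by the same respondent. The record fields and the speed threshold are assumptions for illustration.

```python
# Minimal sketch of cleaning "dirty" data: drop speeders and duplicates.
# The one-third-of-expected-LOI threshold is an illustrative assumption.

expected_loi = 10.0  # minutes, e.g. confirmed during the soft launch

completes = [
    {"id": "r1", "minutes": 9.8},
    {"id": "r2", "minutes": 2.1},   # far too fast: likely not reading
    {"id": "r3", "minutes": 11.4},
    {"id": "r2", "minutes": 8.9},   # same respondent trying again
]

def clean(records, expected, speed_factor=0.33):
    seen, kept = set(), []
    for rec in records:
        if rec["id"] in seen:
            continue                 # drop repeat attempts
        seen.add(rec["id"])
        if rec["minutes"] < expected * speed_factor:
            continue                 # drop speeders
        kept.append(rec)
    return kept

print([r["id"] for r in clean(completes, expected_loi)])  # -> ['r1', 'r3']
```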
Some online surveys (with short questionnaires) will not offer any kind of incentive, or reward, for taking part. But usually there will be a prize draw, or in the case of surveys using panel sample, the respondents will be given a certain number of “points”, depending on the length of the questionnaire.
The preparation and analysis of results
In practice, outside of universities, the results of online quantitative research are only occasionally analysed “statistically” in our experience (we’ve run hundreds of surveys for clients over many years). In other words, clients do not usually concern themselves with measures such as statistical significance.
Instead, the survey results will be prepared into a set of tabulations. Usually these comprise one table for each question in the questionnaire, with a banner (or crossbreak) on every table showing the answers broken down by the demographic questions (age, gender, etc.).
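A single table from such a set of tabulations can be sketched as a count of answers within each banner group. The question, answer options, and respondent records below are all illustrative.

```python
# Tiny sketch of one tabulation with a demographic banner (crossbreak):
# a brand-awareness question counted within each gender group.
from collections import Counter

responses = [
    {"gender": "male",   "aware_of_brand": "yes"},
    {"gender": "male",   "aware_of_brand": "no"},
    {"gender": "female", "aware_of_brand": "yes"},
    {"gender": "female", "aware_of_brand": "yes"},
]

# Count each (banner group, answer) cell of the table.
table = Counter((r["gender"], r["aware_of_brand"]) for r in responses)

for gender in ("male", "female"):
    for answer in ("yes", "no"):
        print(f"{gender:8} {answer:4} {table[(gender, answer)]}")
```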
If there are any open-ended questions in the questionnaire then the client may just look through the verbatim responses at the end of the survey (perhaps in a “word cloud”), or they may have the verbatims coded, and the coded results included in the tabulations.
There is more to quantitative research than this, of course, but hopefully this overview has been useful.