Think like a Respondent to Improve Survey Data Quality
Most likely you rely at least to some extent on survey data to divine the insights that lead to better business decisions. How confident are you that your survey data are both reliable and valid?
We’ve come a long way in the practice of survey research in terms of understanding and managing sources of error such as scale usage bias and order effects. Yet an accumulation of research into the unobservable cognitive processes at work when respondents answer survey questions shows that crafting questions that reliably elicit the information we think we are asking for is no easy matter. In fact, the survey question may be the weakest link in the chain of components that make up the typical quantitative market research study.
One challenge we face is a fundamental asymmetry in mindset between the question writer and the person who will be answering the question. When the survey writer uses a phrase like “your household,” she knows exactly what she has in mind. The same phrase, however, may trigger quite different associations, specific or general, in someone else, and that mismatch can create a problem for the market researcher.
Since it is unlikely that the average respondent will take the time to learn to think like a survey writer, a key to improved survey data quality is for question writers to learn to “think like a respondent.” This starts with understanding that conversations (and a survey is a form of conversation) are governed by implicit rules and expectations that we largely take for granted.
According to the British philosopher of language Paul Grice, we expect others to cooperate with us in arriving at a mutual understanding of what we are saying to each other. In getting there, we rely on maxims regarding quantity (be as informative as needed, but not more so), quality (do not say what you know is false or cannot support), and relation and manner (say what is relevant, and be clear). Applying these maxims helps us resolve ambiguity and confusion. For example, if a survey asks how happy you are in your marriage or other significant relationship, followed by a question asking how happy you are in general, you are more likely to give different ratings than if the order is reversed or the questions were asked at different times in different surveys. The reason is that you assume, in the first case, that the “overall” happiness question does not include your marriage, because there was a separate question about that.
What does this mean for survey question writers? For one thing, respondents assume that if you put something in a question it must be important to their interpretation of the question (conversely, if something is “missing,” it must not be relevant). For another, respondents expect that the question contains all the information they need to come up with an answer.
Across many studies conducted by Norbert Schwarz, Seymour Sudman, and Roger Tourangeau (and many others), a framework has emerged for understanding the cognitive origins of survey measurement error. In addition to violations of Grice’s cooperative principle, such errors arise from the interaction of question characteristics with specific cognitive processes like memory retrieval.
Learning to think like a respondent
The average busy market researcher has little time to devote to becoming an expert in the cognitive psychology of survey research. Even so, a few simple steps may help us improve the effectiveness of our survey questionnaires (and, by extension, the ROI on market research).
First, we can learn to look at each question we write with a four-step model of survey response in mind. This model was proposed by Roger Tourangeau and colleagues (Tourangeau, Rips & Rasinski, 2000). The four steps are comprehension, retrieval, judgment, and response matching.
Comprehension encompasses all the mental work of understanding the meaning of the question. Respondents always try to resolve ambiguity so that they can answer the question, but they may not resolve it in the way the survey writer intended.
Retrieval represents the process of searching our memories for mental representations that are relevant to the question. Once we have retrieved a set of representations, we need some way to evaluate and integrate them into our “answer” to the survey question—the judgment step. Finally, we must match our internally generated answer to the responses available in the survey instrument.
To see how this works, imagine that you have just been asked to rate your overall satisfaction with a recent purchase from Amazon. Comprehension is probably not an issue, unless Amazon is imprecise about what it means by “recent.” Because the invitation names the product you purchased, you naturally bring to mind thoughts about the product. Whether you first remember something positive or something negative is likely to influence subsequent thoughts. After a few seconds you have a handful of memories, and you mentally weigh them to decide whether you are, in general, satisfied or not. You give more weight to those memories that are emotional (how you felt) and less to those that are simply factual. In the end you conclude that you are “more satisfied than not,” and you go back to the survey question. Unfortunately, there is no option to say that you are “more satisfied than not.” Instead you have to choose a number between 1 and 10, where 1 means “not at all satisfied” and 10 means “completely satisfied.”
You might not be able to think this way as you write a question, but you can train yourself to think in these terms as you review the questions that you or other market researchers have written.
Finally, you can begin to “observe” respondents’ thinking processes firsthand by conducting think-aloud pretests (also known as “cognitive interviews”). In a think-aloud pretest the respondent does exactly that—verbalizes whatever he or she is thinking on the way to answering each survey question. Conduct several think-aloud pretests and you will soon find yourself thinking more like a respondent as you craft survey questionnaires.
– David Bakken, PhD, Chief Insight Officer