What’s Wrong with Traditional Survey Measurement?
Prepared by Ben Moorsom, Chief Creative Officer & Creator of Neuroscaping™, Debut Group® & Brock Edwards, Neuroscaping Researcher, Debut Group®
There is no need to completely overhaul existing techniques. Simple strategies can reduce bias within common measurement methods and elevate event experiences.
Accurate measurement has long been the holy grail of businesses investing in events. Yet after many years, high quality data and ROI estimates seem difficult to attain. This paper examines common flaws in the design and execution of surveys, along with strategies to overcome these shortcomings.
The Fatal Flaws of Survey Design
From a scientific perspective, surveys and other common measurement techniques are typically designed and used in highly ineffective ways. Since surveys are so easy to implement, they are often thrown together by people without adequate survey design expertise, and without the proper planning or critical thinking required to gather meaningful insights.
Flaws start with how we build our questions. These flaws become amplified when there’s a failure to plan for fluctuations in factors such as context, mood, memory, and perception.
So, should we use surveys? It’s not that common measurement techniques are completely broken, but that surveys are built poorly and executed haphazardly. When we measure, we need to plan, think, and act like scientists.
The Importance of Timing
Post-event surveys are often distributed a few days to a few weeks after an event. Over this period of time the perception of respondents fundamentally changes, distorting responses to event-related questions.
Here’s how timing can distort perception and survey responses:
1. We know that evaluations before and after an experience tend to be more positive than evaluations during an experience (Mitchell, Thompson, Peterson, & Cronk, 1997). In other words, we look backwards and forwards in time with rose-colored glasses.
2. Our memory is much less reliable than we think. Over time, memories contain more of an event’s “essence” than specific details (Kalat, 2013, p. 371). This means that delayed event surveys capture feedback that is vaguer than what would be captured in real-time or closer to the event.
3. Memories are sensitive to change when they are recalled (Weiten & McCann, 2013). By the time post-event surveys hit participants’ mailboxes, they have had time to chat with colleagues and share their experiences. Each time they do so, their memory is distorted and tends to contain more of the same details instead of a unique perspective (Loftus, 2005).
Simply delaying a survey by a few days may drastically change measurement outcomes. Memory and perception change with time, and meaningful insights are collected when we anticipate these changes, not when we ignore them.
Capturing an Accurate Sample
When companies survey, their focus is often on getting as many responses as possible – the highest “response rate”. But there are risks to this approach.
Scientists have known for decades that data is more powerful and accurate when it is collected from a smaller, high quality sample pool rather than a larger, biased group (Cook, Heath & Thompson, 2000). In fact, excellent, peer-reviewed studies often use a sample of 35-55 people and draw conclusions about product safety, medicine or psychological phenomena in the general population.
One issue with quantity over quality is self-selection bias. Individuals with the most extreme feedback or opinions – positive or negative – are most likely to choose to participate. As a result, their thoughts and feelings are disproportionately used to judge success or failure. Since this sample is biased, conclusions about the success or failure of an experience are inaccurate.
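The effect of self-selection can be illustrated with a minimal, self-contained simulation. Everything here is synthetic and the numbers are illustrative only: we assume a population of 10,000 attendees and, as a stand-in for self-selection, make dissatisfied attendees much more likely to respond.

```python
import random
import statistics

random.seed(42)

# Synthetic population: 10,000 attendees, satisfaction roughly on a 1-5 scale.
population = [min(5, max(1, random.gauss(3.5, 0.8))) for _ in range(10_000)]
true_mean = statistics.mean(population)

# Large self-selected sample: dissatisfied attendees (score < 2.5) are far
# more likely to volunteer feedback, so the sample over-represents them.
biased = [s for s in population if random.random() < (0.6 if s < 2.5 else 0.1)]

# Small random sample: 50 attendees chosen with equal probability.
unbiased = random.sample(population, 50)

print(f"True mean: {true_mean:.2f}")
print(f"Large biased sample (n={len(biased)}): {statistics.mean(biased):.2f}")
print(f"Small random sample (n=50): {statistics.mean(unbiased):.2f}")
```

Despite being an order of magnitude smaller, the random sample lands much closer to the true population mean than the large self-selected one – the point made by Cook, Heath & Thompson above.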
However, there are simple strategies to reduce bias while using common measurement strategies.
For example, instead of sending out an unsolicited web survey, a series of iPads could be stationed outside of a workshop to survey participants. The timing of this survey is excellent for obtaining detailed feedback and suggestions, but we still have a problem with self-selection… the most opinionated people will choose to walk up to an iPad and participate.
The solution: Rather than surveying everyone, we can use an objective method to select respondents. A colleague could approach every third person who leaves the workshop and ask them to participate.
Alternatively, we could distribute survey invitations before the session on random chairs in the room. Not only can this invitation approach increase response rates, but we can capture a more random and thus better sample.
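Both approaches replace self-selection with an objective rule. The every-third-person method is a form of systematic sampling, and can be sketched in a few lines (the attendee names and counts here are hypothetical):

```python
import random

def systematic_sample(attendees, k=3, start=None):
    """Select every k-th attendee leaving the room, beginning at a
    random offset so the pattern is not predictable to participants."""
    if start is None:
        start = random.randrange(k)
    return attendees[start::k]

random.seed(7)
attendees = [f"attendee_{i}" for i in range(1, 91)]  # 90 people leaving a workshop
invited = systematic_sample(attendees, k=3)
print(len(invited), invited[:3])
```

The random starting offset matters: if the rule always began with the first person out the door, the quickest-to-leave attendees would be systematically favored.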
Encouraging Better Responses for More Accurate Data
Better responses lead to more accurate data… but not always the data you want to see.
The brain conserves energy by pushing us to mindlessly select the first acceptable response, to agree with statements, to select the answer that makes us look good to others or to select the response that the surveyor seems to be looking for. In research, these biases are referred to as satisficing, yes-saying, social desirability and demand characteristics respectively.
- Satisficing is essentially “cognitive laziness”. It’s when people take a shortcut in responding (Krosnick & Presser, 2010). For example, if you’ve ever had to answer several open questions in a row you might answer the first few with detail and then provide shorter and less clear answers as you progress. At its worst, “strong” satisficing occurs when a respondent doesn’t bother to search their memory or do any cognitive work at all when answering a question. Instead, they simply circle an answer that seems acceptable based on the wording of the question.
- Yes-saying refers to a tendency that leads some people to agree with statements, regardless of their content. Because of this, up to 10% of people will tend to automatically agree with survey questions (Krosnick & Presser, 2010). In other words, if you ask, “Are you tired?” and “Are you energized?”, up to 10% of people may agree with both. Tackling this challenge is important, but it requires effort to come up with contradictory statements to catch the “yes-sayers”.
- Social desirability bias occurs when respondents answer in a way that makes them appear good to others – to avoid punishment, or to be rewarded (Nederhof, 1985). We often see social desirability bias when clients are running internal surveys. Since employees are unsure if management will be accessing responses, responses tend to trend unrealistically positive. For this reason, we recommend always beginning surveys with a privacy statement, explaining who will come into contact with data and how data will be used. The simple addition of a privacy statement can drastically improve the honesty of responses.
- Demand characteristics occur when respondents can guess the goal of a survey, based on its design. For example, when companies unveil new initiatives, they often use surveys that clearly seek support for their new approach. In these situations, respondents tend to be supportive of the changes. These demand characteristics thus lead to overly positive responses.
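One practical screen for strong satisficing is to flag “straight-lining” – respondents who give the identical answer to every scale item. As a minimal sketch (assuming answers on a 5-point scale; respondent IDs and data are hypothetical):

```python
def flag_straight_liners(responses, min_items=5):
    """Flag respondents who gave the identical answer to every scale
    item -- a common signature of strong satisficing."""
    flagged = []
    for respondent_id, answers in responses.items():
        if len(answers) >= min_items and len(set(answers)) == 1:
            flagged.append(respondent_id)
    return flagged

# Hypothetical 5-point-scale answers for three respondents.
responses = {
    "r01": [4, 4, 4, 4, 4, 4],   # identical on every item: likely satisficing
    "r02": [4, 3, 5, 4, 2, 4],
    "r03": [2, 3, 3, 2, 4, 3],
}
print(flag_straight_liners(responses))  # → ['r01']
```

Flagged responses need not be discarded automatically, but they deserve scrutiny before they are averaged into headline results.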
Understanding these cognitive biases brings us to an important conclusion. Many cognitive biases push responses toward positive feedback. Though it’s tempting to take positive feedback and move on, we need to be far more critical when reviewing data that is almost completely positive. Instead of celebrating, it’s important to push for negative feedback that can be used to improve the quality of future ventures.
To mitigate the impact of cognitive bias, consider these strategies:
- Create a culture in the measurement team that focuses on obtaining accurate data, not just desirable data.
- Maximize anonymity. Use untraceable survey links and mention privacy to respondents at the beginning of the survey. If measuring face-to-face, use a third party or allow respondents to answer privately on a printed questionnaire. Further, allow respondents to drop finished responses in a closed ballot box. Biases are prevented when privacy is made clear to respondents, so leave nothing to the imagination.
- The more you want to capture negative or controversial feedback, the more anonymous responses should be. For example, a CEO may need to openly encourage negative feedback and provide absolute anonymity. Otherwise, employees may censor responses and provide solely positive feedback.
Scientific Guidelines for Better Surveys
Check your next survey against this list of helpful tips to strengthen your measurement results:
Simplify your Language:
- Use Grade 5 level language
- Practice Positivity in your wording – avoid negatively worded statements (e.g., “I feel energized vs. I don’t feel tired”)
- Create Shorter surveys – the shorter the better
- Use open-ended questions sparingly
Leverage multi-point scales:
- Answering a binary question (yes/ no) is just as cognitively demanding as responding on a 5-point scale and produces less useful data.
Check for Don’t-know response options:
- If you have “don’t know” options, find alternative response options. This answer doesn’t improve measurement and can reduce data quality.
Catch poor responders by asking contradictory questions, such as:
- “I am tired” and “I am energized”. If they aren’t thinking questions through, they will agree to both.
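This consistency check can be automated once responses are collected. A minimal sketch, assuming a 5-point scale where 4-5 counts as “agree” (the item names and pairings are hypothetical):

```python
def flag_yes_sayers(answers, pairs, agree=(4, 5)):
    """Flag a respondent who agrees with both halves of a contradictory
    item pair (e.g. "I am tired" / "I am energized")."""
    flags = []
    for a, b in pairs:
        if answers.get(a) in agree and answers.get(b) in agree:
            flags.append((a, b))
    return flags

pairs = [("tired", "energized")]
inconsistent = {"tired": 5, "energized": 4}   # agrees with both: flagged
consistent = {"tired": 2, "energized": 4}     # coherent answers: no flag
print(flag_yes_sayers(inconsistent, pairs))   # → [('tired', 'energized')]
print(flag_yes_sayers(consistent, pairs))     # → []
```

A respondent who trips one such pair may simply have misread a question; one who trips several is a strong candidate for exclusion.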
Sampling & Distribution:
- Use face-to-face interviews when feasible
- Leave questionnaires with targeted individuals and provide a drop box for them to submit their responses. This improves response rates and is simpler and more private than face-to-face.
- Spend more time strategizing to get a random pool of responses than getting a perfect response rate. Research shows it is better to have a small, unbiased sample, than a large and biased one (Cook, Heath & Thompson, 2000).
Context & Timing
- Time your distribution as close to your event as possible. Detailed responses and negative feedback are best obtained immediately, instead of post-event or weeks later.
- During events, consider combining detail-oriented session surveys with broad post-event surveys to collect more meaningful responses.
- When interpreting responses, keep in mind that memories lose detail over time. This often leads to vague feedback that is more positive than it should be.
- Pick the optimal time to survey participants when they are fresh. The more fatigued we are, the more we use cognitive shortcuts to pick an answer versus making a calculated judgment. When possible, don’t survey when respondents are distracted or fatigued (e.g. directly after a 3-hour keynote or series of presentations). If you need feedback on a specific session, keep it short and simple, saving more thought-provoking and open questions for a better time.
- Gamify your surveys to get more, higher-quality responses. Be careful not to offer large incentives for completing surveys as they can undermine intrinsic motivation, causing participants to respond quickly to obtain the reward. Instead, use modest prizes or raffles. There are emerging survey platforms utilizing artificial intelligence that reward good responding, rather than completion.
For more Neuroscaping insights, subscribe to our Neuroscaping mailing list at http://www.debutgroup.com.
Or contact us at: email@example.com