How to Avoid Common Survey Question Mistakes to Prevent Data Inaccuracies

Questionable Questionnaires

Not every survey session will yield the results you desire. Oftentimes, the initial fieldwork sessions within a campaign highlight the errors that are harming the quality of your data.

With so much conversation taking place online about where these errors come from and how to solve them, it’s important to separate fact from fiction:

  • Survey Fact: When it comes to non-sampling errors (problems stemming from how a survey is designed and conducted), the mistakes made by the people writing the question sequences or carrying out the fieldwork sessions are almost always at the heart of the problem.
  • Survey Myth: These mistakes are unavoidable and simply a cost of doing survey research.
  • Survey Fact II: There are many ways to improve surveying fieldwork sessions, with an efficient method being to identify errors within the survey question sequence and adjust accordingly.
  • Survey Myth II: Reviewing and improving a question sequence is a time-consuming and challenging process requiring research firms to dedicate full teams to properly assess errors and their root causes.
  • Survey Fact III: With the right toolkit, small research teams can efficiently improve their question sequences without elongated review periods.

Today, we will explore the last fact in detail. We will cover common missteps in survey design and execution, as well as how to leverage the capabilities of modern survey data collection software to spot errors within question sequences. We will also review the best ways to correct said mistakes to increase the overall productivity of surveyors working in the field.

Protecting Data Quality

While the discourse around survey question sequences and survey optimization varies depending on which source or outlet you read, most players in this space agree on a few points, one of them being the importance of data quality.

If your surveyors are heading out into the field and spending hours upon hours interviewing subjects only to return with low-quality data, your organization is facing a costly problem.

Perhaps certain topics within the survey sequence are not being addressed, meaning that you are missing data that could help fuel your research. Maybe your surveyors aren’t missing any data capture points, but due to an error in their question sequence, they receive answers that aren’t entirely aligned with the subject matter. In this case, your respondents are producing inaccurate datasets that can potentially damage the overall quality of your research.

If you discover either of these scenarios to be the case, what usually happens next?

You have to try again. More survey sessions result in more money spent on fieldwork. Depending on the scale of your surveying operations, these setbacks can range from slightly costly inconveniences to massive hits to your research budget.

This is why data quality is such a universally agreed-upon priority: when data quality suffers, your organization’s bottom line and overall productivity suffer in tandem.

Before diving into the best ways to remedy poor survey data capture, we must first highlight the non-sampling errors that cause it in the first place.

Common Errors

Non-sampling errors may arise in a few areas, the first being in the development of the question sequence itself:

  • Confusing terminology/phrasing: If a question is very long, clunky, or includes words or phrases that are difficult to understand, your respondents may either provide an irrelevant response or choose to abstain from answering altogether.
  • Loaded or leading question verbiage: Avoid questions that are emotionally charged, biased, or worded in a way that pressures the respondent to answer in a certain fashion. Questions that lead the respondent toward a desired answer will produce inaccurate datasets.
  • Sequential bias: The order of your questions may unintentionally create bias among your respondents. This may occur more frequently if your question sequences are the same across all your interviews.
  • Restrictive answers: Some respondents will have unique experiences or perspectives they wish to include in response to your questions. Without an “other” or fill-in text option to provide additional context, you force your respondents to pick from a set of answers that may not accurately reflect their opinion or experience. You also lose out on valuable insights and contextual information that could enhance your research sessions.
  • Forced responses: Certain questions may contain sensitive content, and some respondents may not feel comfortable answering them for that reason. Other questions may not apply to the interviewee whatsoever. Forcing answers from respondents can also skew your datasets.
  • Improper multifaceted questions: You may want feedback on multiple aspects of a particular item, experience, or topic. If so, ensure you are not combining questions for the sake of brevity; these “double-barreled” questions limit your respondents to a set of answers that do not accurately capture their opinions. Example: Asking someone whether they like or dislike both the design and the color of a new company logo in one question prevents the respondent from articulating two separate opinions that adequately address each facet of the question.

Your surveyors may also struggle to deliver your question sequence to interviewees for various reasons, including:

  • Articulation: Your surveyors may mumble or mispronounce words or phrases in your question sequence, confusing your respondents.
  • Misrepresentation: When respondents ask for clarification, your surveyors may misrepresent the meaning of certain words or the purpose of specific questions, influencing respondents’ answers in the process. They may also over- or under-emphasize specific portions of questions through their volume, tone, inflection, etc.
  • Flow and logic: Surveyors can inadvertently miss questions within the flow. They may forget to branch the sequence or to skip questions after receiving a specific response.

Now that we’ve established the kinds of non-sampling errors that can arise within your question sequence, we can identify the manual fixes and CAPI tools best fit to address them individually.

Tips for Question Rewrites

As simple as it may seem, rewriting questions and answers after reviewing your datasets can effectively fix underperforming sections within your sequence.

Action Steps:

  1. Simplify your language in questions that are critical to your research, and avoid uncommon or jargon-heavy phrases that could confuse respondents.
  2. To remain neutral in your assessments, remove any language that could stir an emotional response from an interviewee and any sections that appear to persuade the respondent to answer a certain way.
  3. Break any multifaceted questions into individual inquiries. Ensure these questions are asked sequentially but separately to learn more about a specific topic.
  4. Include an “other” option so respondents can provide additional context to their answers when they deem it necessary. (A rough sketch of steps 3 and 4 follows this list.)
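
To make steps 3 and 4 concrete, here is a minimal sketch of a double-barreled question split into two single-facet questions, each with an “other” option. The data structures and field names are purely illustrative and are not tied to any particular survey platform’s format.

```python
# Hypothetical question definitions -- the field names are illustrative only,
# not any specific survey platform's schema.

# Before: a double-barreled question with no free-text escape hatch.
combined = {
    "id": "Q5",
    "text": "Do you like the design and color of the new logo?",
    "options": ["Yes", "No"],
}

# After: one question per facet, each with an "Other" option that lets
# respondents add context in their own words.
split = [
    {
        "id": "Q5a",
        "text": "How do you feel about the design of the new logo?",
        "options": ["Like it", "Dislike it", "No opinion", "Other (please specify)"],
        "allow_other_text": True,
    },
    {
        "id": "Q5b",
        "text": "How do you feel about the color of the new logo?",
        "options": ["Like it", "Dislike it", "No opinion", "Other (please specify)"],
        "allow_other_text": True,
    },
]
```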

CAPI Solutions

Modern CAPI survey platforms offer a variety of solutions and quality control tools designed to improve your question sequences.

Nearly every survey questionnaire maker offers randomization capabilities, which can prevent sequential bias. Specific questions and sections can remain sequential for comprehension purposes, while others can be randomized to avoid unintended bias.
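
To make the idea concrete, here is a minimal sketch in plain Python (not any platform’s actual settings or API) that shuffles only the questions marked safe to reorder while keeping anchored items, such as screeners, in their original positions.

```python
import random

# Minimal sketch: shuffle only the questions that are safe to reorder,
# keeping "anchored" items (screeners, comprehension-critical intros,
# closing demographics) in place. The IDs and the "anchored" flag are
# invented for illustration, not a real platform schema.
questions = [
    {"id": "Q1", "anchored": True},   # screener stays first
    {"id": "Q2", "anchored": False},
    {"id": "Q3", "anchored": False},
    {"id": "Q4", "anchored": False},
    {"id": "Q5", "anchored": True},   # closing demographics stay last
]

movable = [q for q in questions if not q["anchored"]]
random.shuffle(movable)

shuffled = iter(movable)
ordered = [q if q["anchored"] else next(shuffled) for q in questions]

print([q["id"] for q in ordered])  # e.g. ['Q1', 'Q3', 'Q2', 'Q4', 'Q5']
```

Generating a fresh order for each interview in this way means no two respondents see the same middle-of-survey sequence, which is the property that counters sequential bias.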

Automated flow and logic control removes the guesswork from branching, skipping, and filtering. Surveyors no longer need to hold complex flows in memory; they can instead rely on their CAPI platform to ensure the right question gets asked every time. These tools also enable respondent skipping, allowing respondents to pass over questions involving sensitive material they don’t feel comfortable discussing or inquiries irrelevant to their experience.
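
As a rough illustration of what this kind of logic control does behind the scenes, the sketch below encodes a couple of hypothetical skip rules in plain Python. The rule format, question IDs, and answers are invented for the example and are not SurveyToGo’s actual syntax.

```python
# Hypothetical skip-logic rules: each rule says "if this answer was given,
# skip these downstream questions." Real CAPI platforms express this through
# their own branching and filtering settings; this is only an illustration.
skip_rules = {
    ("Q2", "No"): ["Q3", "Q4"],           # follow-ups that no longer apply
    ("Q6", "Prefer not to say"): ["Q7"],  # sensitive follow-up is skipped
}

def remaining_questions(flow, answers):
    """Return the questions still to be asked, given the answers so far."""
    skipped = set()
    for (question_id, trigger_answer), targets in skip_rules.items():
        if answers.get(question_id) == trigger_answer:
            skipped.update(targets)
    return [qid for qid in flow if qid not in skipped and qid not in answers]

flow = ["Q1", "Q2", "Q3", "Q4", "Q5", "Q6", "Q7"]
print(remaining_questions(flow, {"Q1": "Yes", "Q2": "No"}))  # ['Q5', 'Q6', 'Q7']
```

Because the platform evaluates rules like these automatically after every answer, the interviewer never has to remember which branch applies.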

With silent recording capabilities, leadership teams can evaluate surveyor performance by analyzing individual survey sessions. If a surveyor struggles to articulate questions within the sequence or misrepresents important concepts, silently recording each session allows teams to deliver constructive feedback to the interviewer, enhancing productivity and reducing fieldwork spending over time.

SurveyToGo

If you’re seeking a CAPI platform complete with best-in-class solutions to revamp your survey questionnaire, SurveyToGo has what you need.

SurveyToGo provides a seamless CAPI market research survey creation process. With rotation and randomization settings, silent audio and visual recording capabilities, adjustable logic and flow controls, and one-click deployment of modified surveys, SurveyToGo offers a truly comprehensive user experience that far exceeds the expectations of modern survey questionnaire software platforms.

Don’t just take our word for it — our customers have used these controls to improve their survey performance drastically:

  • Learn how Instituto Olhar used SurveyToGo to conduct audio recordings of interviews and apply logical rules for question ordering, resulting in a 15% cost reduction and a 25% reduction in project turnaround time.
  • For further reading on deploying unique questionnaires, see how CDA France leveraged SurveyToGo’s quality control features to enhance data quality by 40% and reduce project length by an average of two full days.