Are surveys dead? Why AI will reinvigorate feedback collection

Voicepanel

May 6, 2024

5 mins

Imagine you’re developing a new product and want to gather some early input from the market.  You decide to run a survey with 1000 people in your target demographic to better understand their needs & preferences.  Since your survey has a large sample and you’re asking all the questions you care about, you’re certain to learn something valuable to take back to your team.  Right?

Wrong!  There’s no guarantee that your survey will reveal anything useful.  In all likelihood, your survey is flawed and you might as well throw out the results.  And even in the rare case that you have a professionally designed survey, a sample that is perfectly representative of your target population, and you’ve somehow identified all the honest respondents and discarded the spammers & bots, there’s still no guarantee you’ll discover anything at all.

Surveys don’t work well for discovering new things

For decades, businesses have been using online surveys to collect feedback on pretty much everything: customer satisfaction, market research, creative design, competitive analysis, employee check-ins, and much more.  Surveys have become almost synonymous with feedback.  In fact, about 70% of businesses say that surveys are the only method they use for collecting feedback.

This is all well and good, but in spite of their popularity, surveys have a fundamental limitation: they don’t work well for discovering new things.  Surveys only work for quantifying known things.  The abundance of features in survey software gives the impression that they provide endless possibilities for feedback.  But the truth is that the more bells and whistles you add to your survey, the more you constrain responses to your preconceived worldview rather than discovering what respondents actually have to say.

Good survey designers know this, and they will often begin with a thoughtful qualitative discovery process: conducting interviews with a small population before deciding how to craft survey questions for a large one.  But if this qualitative process is more effective at getting to the heart of the matter, why bother doing surveys at all?  Well, it’s because the qualitative process doesn’t scale.  It’s not practical to manually conduct interviews with 1000 people, so we give them radio buttons & checkboxes instead.

The burden of choice questions

Consider the following survey question: “On what occasions do you typically use mints?”  It’s presented as a multiple-choice list of occasions, with an “Other” option at the end.

This question isn’t made up: it was part of a recent survey run by a leading consumer goods company.  It’s a good example of a choice question where the survey designer has done some homework ahead of time: based on their prior qualitative research on mint eating habits, they’ve determined a set of choices that are mutually exclusive and collectively exhaustive of the answer space.  (Or at least, the possible answers are exhaustive enough that the “Other” category should be pretty small - the actual result was 5.8%.)

So what’s the problem?  Simple: people will happily select one of the provided options rather than saying what they actually think.  Let’s face it: most people want to get through surveys as fast as possible so they can get back to whatever else they were doing.  If it’s a choice question, there’s a good chance they’ll quickly skim and just click the first few options, or whatever happens to grab their attention the most.  This is especially a problem for research panelists who generally have no reason to fill out the survey other than to receive an incentive for completion.

You could address the ordering bias by randomizing the choices, you might say.  But randomization introduces friction of its own: some respondents will now see the less common choices first.  And it doesn’t address the problem that respondents may not be answering honestly.

You could address the honesty issue by throwing in some “red herring” questions to weed out dishonest respondents.  Professional research agencies routinely use tricks like these to throw out 50% or more of the survey responses they collect.  But there’s no silver bullet here.  It turns out to be more or less impossible to gauge someone’s honesty based on how they interact with a survey.  And even if you somehow knew who was being honest, that doesn’t address the fundamental problem that you’re constraining the answers.

Isn’t this what open text responses are supposed to solve?

(Not very) open text

Here’s the same mints question in an open text format: just the question, followed by a blank box to type into.

In theory, open text questions allow respondents to provide feedback in their own words, but it’s well known that they don’t perform very well.  They introduce a lot of friction for respondents: reading a question and typing out a response carries significant cognitive load.

Leading survey software companies agree.  Per the Qualtrics website: “Writing text takes a comparatively large amount of mental energy for respondents. Once a survey has more than three open-text boxes, we find that, on average, completion rates begin to decline and respondents start writing a lot less text in their responses.”  Qualtrics’ Expert Review feature will even flag surveys that have more than three open text questions, since they are expected to perform poorly.  Three?  Good luck getting rich feedback with a survey.

Over the past decade, the internet has transformed dramatically with rich formats for authentic expression such as podcasts, audiobooks, and videos.  These formats have become increasingly popular, surpassing text in terms of both creation & consumption.  Why are surveys stuck in the past?

Enter conversational feedback

Few people love taking surveys.  But most people love to talk!  Imagine asking a friend about their mint eating habits in real life: you pose the question, they answer in their own words, and a quick back-and-forth fills in the specifics.

There are several things to note about this simple interaction:

  • You got a clear answer to the question.
  • Not only that, you got a much more specific answer than you would have gotten with the multi-choice or open text survey question.
  • Not only that, you got an answer that was salient.  Your friend was speaking in their own words rather than being forced to pick an answer.
  • Not only that, it was actually easier for your friend to just talk through their thoughts, rather than staring at a screen, deciphering a bunch of text, and typing out a response.

In other words, the conversational version not only accomplishes everything the survey does, it provides more information - with less bias and less friction.  If you don’t believe this, next time you want to speak to a friend, try giving them a survey!

But conversations can’t scale… or can they?

If you’ve read this far, hopefully you’re convinced that conversations are a more effective medium than surveys for discovering new things.  And you’re probably also thinking, rightly so, that conversations can’t possibly scale the way surveys can.  It’s simply not practical for a business to routinely conduct thousands of conversations with their audience on any given topic they’re interested in, like mint eating habits.

But what if the conversations were facilitated by AI rather than a human?  Take a moment to think about the last time you took a survey and how the experience felt.  Did you enjoy it?  Did you feel like you were providing useful feedback?  Is it not conceivable that just speaking your thoughts out loud to a computer might be better?

Consider the current state of the technology:

  • Large language models (LLMs), as it turns out, are even better at asking questions than answering them. Their ability to listen attentively, probe effectively, and operate within a structured conversation framework makes them adept at facilitating conversations for collecting feedback.
  • Text-to-speech (aka voice synthesis) technology has advanced to the point where generating human-like speech is both cost-effective and high quality.  In the near future, synthesized speech will likely become more or less indistinguishable from human speech.
  • Speech-to-text (aka transcription) technology can capture conversations in real time.  Every detail of an AI-led conversation can be documented accurately, so nothing is left behind and analysis becomes more accurate and efficient.
  • Automatic language translation can extend the reach of conversations across global audiences.  By breaking down language barriers, businesses can gather diverse perspectives and insights, making the feedback they collect far more comprehensive.
  • Natural language processing (NLP) technology makes it possible to take hours of transcripts and pull out the most relevant nuggets of insight.  With sufficiently high volume, this process can offer the same statistical rigor as a survey (a big topic we’ll save for a future blog post).

Put together, these technologies enable something never before possible: conducting deep conversations at the same scale as a survey.  With no additional time or cost, you could be collecting rich conversational feedback instead of running a survey, learning what your target audience actually has to say on virtually any topic.
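
To make this concrete, here’s a minimal sketch of what one turn of such an AI-moderated interview could look like, wiring together the speech-to-text, LLM, and text-to-speech pieces above.  It uses OpenAI’s Python SDK purely for illustration - the model choices, the interviewer prompt, and the `interview_turn` helper are our own assumptions, not a description of any particular product’s implementation.

```python
# Sketch of one turn of an AI-moderated interview:
# speech-to-text -> LLM follow-up question -> text-to-speech.
# Illustrative only; model names and the prompt are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a friendly interviewer researching mint eating habits. "
    "Ask one short, open-ended question at a time, probe for specifics, "
    "and never suggest answer choices."
)

def interview_turn(history: list[dict], audio_path: str) -> str:
    """Transcribe the respondent's spoken answer, generate the next
    follow-up question, and synthesize that question as speech."""
    # 1. Speech-to-text: capture the answer verbatim for later analysis
    with open(audio_path, "rb") as audio:
        answer = client.audio.transcriptions.create(
            model="whisper-1", file=audio
        ).text
    history.append({"role": "user", "content": answer})

    # 2. LLM: listen attentively and probe within a structured framework
    question = client.chat.completions.create(
        model="gpt-4-turbo",  # any capable chat model works here
        messages=[{"role": "system", "content": SYSTEM_PROMPT}, *history],
    ).choices[0].message.content
    history.append({"role": "assistant", "content": question})

    # 3. Text-to-speech: read the follow-up question back to the respondent
    client.audio.speech.create(
        model="tts-1", voice="alloy", input=question
    ).stream_to_file("next_question.mp3")
    return question
```

Run this in a loop as each new audio clip arrives, and the accumulated `history` becomes a complete transcript - exactly the raw material the NLP analysis step above needs to pull insights out at survey scale.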

So does this mean surveys are dead?

We admittedly went for a catchy headline.  No, surveys aren’t dead - far from it.  Surveys will continue to hold ground as a simple and predictable method to collect feedback.  Just as mass-produced cars did not completely eliminate the need for trains or even horses, surveys will remain an effective tool for what they are good at: quantifying responses to known entities.  But as technology opens up richer possibilities for collecting feedback, we believe surveys may start to feel like an old, inexact way for businesses to make decisions.