By: Laima Widmer, SVP of Market Research at Suzy
As conversational research takes hold—powered by AI moderators and built for scale—it’s easy to get caught up in the promise of speed, automation, and efficiency. But before we rush forward, we need to take a step back and ask a simple question:
What’s it actually like to be on the other side of that conversation?
Table of contents
- We’ve been here before – let’s not make the same mistakes
- Conversational qual is not a gimmick – it’s the new standard
- Natural doesn’t mean sloppy
- The respondent experience is the data quality
- Let’s get this right, together
We’ve been here before – let’s not make the same mistakes
If you’ve been in the research space long enough, you remember what happened when quantitative tools became more accessible. Agile platforms and DIY survey builders gave more people the ability to run their own research—which, in theory, was a great thing.
But it came with consequences.
Suddenly, there was an explosion of surveys that felt… off. Not intentionally bad, just kind of all over the place. Sloppy logic. Wall-of-text attributes. Poorly thought-out response lists. Weird question wording that made you wonder if anyone had tested it before launching.
It wasn’t that every DIY survey was bad – plenty were solid. But the bar for quality dipped in a lot of places, and it was the respondent who paid the price.
The same risk applies now, as conversational insights scale.
Conversational qual is not a gimmick – it’s the new standard
Right now, AI-moderated interviews still feel novel. For most respondents, it’s a new experience. But that won’t last forever. This isn’t a passing trend – it’s the direction our industry is moving in.
And that’s exactly why we have to get the experience right.
Because once the novelty wears off, we’re left with the experience itself. Was it clear? Was it engaging? Did it feel like a conversation – or like someone copy-pasted open-ends from a survey and hoped for the best?
Natural doesn’t mean sloppy
Here’s the nuance we all need to embrace: conversational doesn’t mean casual in a careless way. It means intentionally designed to feel natural.
We can’t just toss a few open-ends into a guide and expect magic. At the same time, we don’t want to over-script or write like we’re training a chatbot. What we need is a new kind of rigor – one that respects the flow of conversation while still delivering meaningful structure.
It’s a different kind of craftsmanship.
The respondent experience is the data quality
The people we talk to aren’t just sources of data – they’re collaborators in the insight process. And when they feel respected, they show up with more energy, more honesty, and more depth.
So let’s:
- Keep the language human, not robotic
- Avoid repetition and generic prompts
- Use techniques that unlock deeper thinking, not just data collection
- Design for them, not just for us
Let’s get this right, together
We have a huge opportunity with conversational research – to rethink how we listen, how we engage, and how we learn.
But only if we build this with care.
We already know what happens when research gets too automated, too detached from the participant. This time, we have a chance to lead with empathy, creativity, and clarity—and to build something better.
Because when we care for the respondent experience, everyone wins.
Curious how Suzy is designing AI-moderated conversations that feel truly human? Let’s connect and explore how we’re raising the bar for respondent experience. Book a demo today.