Here is a number that should give every product marketer pause: the average comprehensive Voice of Customer programme costs between $150,000 and $300,000 per year. That figure, drawn from Forrester's research on enterprise VoC investments, includes vendor fees, analyst time, survey distribution, interview recruitment, and the small army of consultants required to synthesise the findings into a deck that lands in someone's shared drive and quietly expires. Individual interview programmes run $15,000 to $30,000 per round. A single Qualtrics or Medallia deployment can absorb six figures before it produces a single actionable insight.
The cost, though striking, is not the real problem. The real problem is the cadence. Most companies treat Voice of Customer as an annual event. Once a year, someone commissions research. Six weeks later, findings arrive. Two weeks after that, a presentation is delivered. By the time the insights reach the people who make product and marketing decisions, the market has moved on. Customer sentiment has shifted. A competitor has launched. The carefully researched positioning is now based on what customers thought four months ago.
This is the VoC paradox: the single most important input to product marketing is also the one that decays fastest. Every downstream decision you make, from positioning to messaging to pricing to sales enablement, is only as good as the customer understanding it rests on. If that understanding is months old, you are building on archaeological evidence.
This article explains how to run Voice of Customer research in hours rather than months using Ditto, a synthetic market research platform with over 300,000 AI personas, and Claude Code, Anthropic's agentic development environment. The workflow transforms VoC from a $200K annual project into a two-hour monthly habit. It is the fifth instalment in a series on AI-powered product marketing, and arguably the most foundational. Everything else in the PMM stack depends on this.
The Paradox: Everyone Agrees, Almost No One Acts
Gartner defines Voice of Customer as "the process of capturing customers' expectations, preferences, and aversions." The definition is simple. The execution is anything but. A Gartner survey found that 95% of business leaders consider customer feedback critical to strategy. Yet when you examine how companies actually collect and use that feedback, a pattern emerges: most organisations have a VoC programme in name but not in practice.
The traditional VoC process looks something like this. A product marketing team identifies a need for customer insight, typically triggered by a product launch, a competitive threat, or an annual planning cycle. They engage a research vendor or internal insights team. A discussion guide is written, reviewed, and revised. Participants are recruited, which alone takes one to three weeks for B2B audiences. Interviews are conducted over two to three weeks. Transcripts are analysed. A report is written. Findings are presented. The total elapsed time is four to six weeks. The total cost, depending on scope, runs from $15,000 for a modest interview programme to $50,000 or more for a comprehensive study with quantitative and qualitative components.
There is nothing wrong with this process, per se. It produces rigorous, defensible, high-quality research. The problem is what happens next, which is usually nothing. Forrester calls this the "action gap": the chasm between research findings and changed behaviour. Reports sit in shared drives. Insights never become revised positioning. Customer language never makes it into sales scripts. The research was excellent. The implementation was non-existent.
The action gap exists not because people are lazy but because the cadence is wrong. When research takes six weeks and costs $30,000, it becomes a capital expenditure rather than a habit. You commission it annually, like an audit. And like an audit, the findings are treated as a retrospective document rather than an operating input. By the time you run the next study, an entire year of customer sentiment has gone unmeasured.
Always-On VoC: A Two-Hour Monthly Programme
The paradigm shift is not incremental improvement of the existing process. It is a category change in how you think about VoC. Instead of one large annual study, you run a continuous programme of small, targeted studies that keep your customer understanding perpetually current.
The rhythm looks like this:
Monthly pulse checks (15 minutes, 3 questions, 6 personas). These are lightweight temperature readings on a specific topic. "How are customers feeling about our pricing since the last increase?" "What is the current sentiment towards AI-powered tools in our category?" "Which competitor is getting the most organic mention?" Three focused questions, six personas from your target audience, run through Ditto in fifteen minutes. You are not looking for depth. You are looking for drift: has anything changed since last month?
Quarterly deep dives (45 minutes, 7 questions, 10 personas). These are the full VoC studies that produce actionable deliverables. Ten personas, seven carefully sequenced questions, forty-five minutes from start to analysed output. Each quarterly study produces enough material to refresh your positioning, update your messaging, recalibrate your sales enablement, and inform your product roadmap.
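Expressed as configuration, the two cadences might look like the sketch below. The helper functions and field names (`type`, `topic`, `questions`, `personas`) are illustrative, not Ditto's actual API schema:

```python
# Illustrative configuration for the two study cadences.
# Field names are hypothetical, not Ditto's actual API schema.

def pulse_check(topic, questions, personas=6):
    """A lightweight monthly study: 3 questions, 6 personas, ~15 minutes."""
    assert len(questions) == 3, "pulse checks stay small: exactly three questions"
    return {"type": "pulse", "topic": topic, "questions": questions, "personas": personas}

def deep_dive(topic, questions, personas=10):
    """A quarterly study: 7 questions, 10 personas, ~45 minutes."""
    assert len(questions) == 7, "deep dives use the full seven-question framework"
    return {"type": "deep_dive", "topic": topic, "questions": questions, "personas": personas}

march = pulse_check(
    "pricing sentiment since the January increase",
    ["How do you feel about current pricing?",
     "Has your perception changed in the last quarter?",
     "What would make the price feel justified?"],
)
print(march["type"], len(march["questions"]), march["personas"])  # pulse 3 6
```

The point of encoding the constraints as assertions is discipline: a pulse check that grows past three questions has quietly become a deep dive, and the cadence collapses.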
The total annual investment is approximately six hours of pure study time (twelve pulse checks at fifteen minutes each, four deep dives at forty-five minutes each), or roughly two hours per month once reviewing outputs and distributing insights is included. For context, the Product Marketing Alliance's annual survey reports that product marketers spend, on average, 40% of their time on content creation and only 12% on research. The always-on VoC model costs less than a single working day per year in research time and produces twelve monthly pulse readings plus four deep-dive studies. Compare that to the traditional model: one annual study, six weeks of elapsed time, $150,000 or more in costs, and a report that is stale before the ink dries.
The Seven-Question VoC Study
The quarterly deep-dive study uses a seven-question framework. Each question is designed to extract a specific dimension of customer understanding. The sequence matters: it moves from open-ended experience mapping through aspiration, competitive landscape, and emotional benchmarking, into decision mechanics and uncharted territory. Claude Code customises each question with your product category, target persona details, and specific research objectives, then runs the study through Ditto's API. The full Claude Code VoC guide provides the technical implementation.
Question 1: Experience Mapping
"Walk me through the last time you [relevant activity]. What happened? What went well? What was frustrating?"
This is the warm-up, but it is also the most revealing question in the study. By asking personas to narrate a specific, recent experience, you get concrete details rather than abstract opinions. You learn the vocabulary they use to describe their workflow, the steps they consider important enough to mention, the frustrations they have normalised (and therefore would not mention if you asked "what are your pain points?"), and the moments of unexpected delight. The language they use here feeds directly into your messaging. The frustrations they mention feed into your positioning. The workflow steps they describe feed into your product roadmap.
Question 2: The Magic Wand
"If you could fix ONE thing about how you [relevant activity], what would it be and why?"
Constraint is clarifying. By forcing a single choice, you reveal priorities. Everyone has a list of ten things they would like to improve. Only one of those things keeps them awake at night. The magic wand question identifies the burning platform, the problem that is painful enough to drive action. If your product addresses this problem, you have a positioning anchor. If it does not, you have a product gap worth understanding. The "why" follow-up is essential: it reveals the downstream impact of the problem, which is what your messaging should articulate.
Question 3: Current Solution Landscape
"What tools, services, or workarounds do you currently use for [relevant activity]? What do you like and dislike about each?"
This is April Dunford's competitive alternatives mapping, executed through the voice of the customer. The critical insight here is that your competitors are not always who you think they are. Your product marketing team might obsess over a direct competitor whilst your customers are actually comparing you to a spreadsheet, a manual process, or an entirely different category of tool. The likes and dislikes for each alternative become your competitive intelligence input: the "likes" tell you what to match, and the "dislikes" tell you where to differentiate.
Question 4: Best and Worst Experience Benchmarking
"Think about the best experience you have ever had with a [product category]. Now the worst. What made each one?"
This question establishes the emotional range of the category. The "best" reveals what excellence looks like in the customer's mind, which may be entirely different from your internal definition of excellence. The "worst" reveals the fears and anxieties that drive risk-averse behaviour, the reasons customers stick with mediocre solutions rather than switching to better ones. Together, they define the aspiration and the avoidance that your positioning must navigate. If your product delivers the "best" attributes and avoids the "worst" attributes, say so explicitly. Customers do not make that inference on their own.
Question 5: Purchase Decision Framework
"When you last chose a [product category], what were the three most important factors in your decision? Walk me through how you evaluated your options."
This is the decision criteria hierarchy in the customer's own words. Product marketers often assume they know what drives purchase decisions: price, features, brand reputation, integrations. They are often wrong about the ranking. A B2B buyer might rank "ease of getting internal approval" above every functional attribute. A consumer might rank "what my peers are using" above price. The evaluation process itself, whether they compared three options or fifteen, whether they read reviews or asked colleagues, whether they ran a trial or watched a demo, tells you which touchpoints matter and where your sales enablement materials need to be strongest.
Question 6: Delivery Preference
"How would you ideally want [relevant outcome] delivered? What format, frequency, and level of detail would be most useful?"
This is the question most VoC programmes forget to ask. You can have the right product with the wrong delivery model. Customers who want a self-service dashboard will churn from a managed-service offering. Customers who want a weekly summary will be overwhelmed by a real-time feed. The answers inform product packaging, feature prioritisation, and the onboarding experience. They also reveal whether different segments want fundamentally different delivery models, which has direct implications for your go-to-market architecture.
Question 7: White Space
"What do companies in this space just not understand about your needs? What is everyone getting wrong?"
This is the unmet needs detector, and it consistently produces the most quotable, insight-rich responses in any VoC study. The framing is deliberately provocative: "what is everyone getting wrong" gives personas permission to be critical in a way that "what would you like to see improved" does not. The responses reveal blind spots that entire industries share, assumptions that every competitor makes, and needs that customers have stopped articulating because they have given up on anyone addressing them. These are your innovation opportunities. They are also your most powerful messaging angles: "We know that [industry blind spot]. That is why we built [feature]."
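Taken together, the seven questions can be held as a parameterised template list that gets filled in for each study. The `build_study_questions` helper and the placeholder names below are an illustrative sketch of that customisation step, not Claude Code's actual mechanism:

```python
# The seven question templates from the framework, with placeholders filled
# in per study. The substitution shown here is an illustrative sketch.

TEMPLATES = [
    "Walk me through the last time you {activity}. What happened? What went well? What was frustrating?",
    "If you could fix ONE thing about how you {activity}, what would it be and why?",
    "What tools, services, or workarounds do you currently use for {activity}? What do you like and dislike about each?",
    "Think about the best experience you have ever had with a {category}. Now the worst. What made each one?",
    "When you last chose a {category}, what were the three most important factors in your decision? Walk me through how you evaluated your options.",
    "How would you ideally want {outcome} delivered? What format, frequency, and level of detail would be most useful?",
    "What do companies in this space just not understand about your needs? What is everyone getting wrong?",
]

def build_study_questions(activity, category, outcome):
    """Customise the seven templates for one product category and persona."""
    return [t.format(activity=activity, category=category, outcome=outcome)
            for t in TEMPLATES]

questions = build_study_questions(
    activity="managed your team's project deadlines",
    category="project management tool",
    outcome="progress reporting",
)
print(len(questions))  # 7
```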
Six Deliverables from One Study
A completed seven-question VoC study produces seventy qualitative responses: ten personas, seven questions each. Claude Code transforms this raw data into six structured deliverables, each serving a different team and a different purpose.
1. Customer Journey Map
Synthesised from Q1 (experience mapping) and Q5 (purchase decision framework). Maps the customer's actual journey, not the idealised version your marketing team drew on a whiteboard. Identifies the touchpoints that matter, the moments of friction, and the decision points where customers either advance or abandon. Serves: Product Marketing (content strategy), Product (feature prioritisation), UX (experience design).

A customer journey map synthesised from VoC data: phases, actions, emotional peaks and troughs, and opportunities mapped across the entire experience. Source: Nielsen Norman Group.
2. Pain Priority Matrix
Synthesised from Q1 (frustrations), Q2 (magic wand), and Q4 (worst experiences). Ranks customer pain points by severity and frequency. A pain that is severe but rare is different from a pain that is moderate but daily. The matrix distinguishes between the two, ensuring that your positioning leads with the pains that are both common and acute. Serves: Product Marketing (positioning, messaging), Product (roadmap prioritisation), Sales (objection handling).
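The severity-by-frequency ranking can be sketched as a simple scoring pass over coded pain points. The pains, scales, and multiplicative weighting below are illustrative choices; the real analysis is qualitative coding over the transcript text:

```python
# A minimal sketch of pain priority ranking: pains coded from the transcripts
# are scored on severity and frequency (both 1-5 here), and positioning leads
# with pains that are high on both axes. Data and weights are illustrative.

pains = [
    {"pain": "manual status reporting", "severity": 3, "frequency": 5},  # moderate but daily
    {"pain": "data loss on export",     "severity": 5, "frequency": 1},  # severe but rare
    {"pain": "slow onboarding",         "severity": 4, "frequency": 4},
]

for p in pains:
    p["score"] = p["severity"] * p["frequency"]  # both axes matter

ranked = sorted(pains, key=lambda p: p["score"], reverse=True)
print([p["pain"] for p in ranked])
# ['slow onboarding', 'manual status reporting', 'data loss on export']
```

Note how the severe-but-rare pain lands last: a multiplicative score encodes exactly the distinction the matrix exists to make.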
3. Language Library
Extracted from all seven questions, but particularly Q1, Q2, and Q7. The exact words and phrases customers use to describe their problems, aspirations, and frustrations. This is the raw material for messaging, ad copy, landing page headlines, email subject lines, and sales scripts. Customer language consistently outperforms marketer language because it mirrors how the audience already thinks about the problem. Serves: Product Marketing (messaging, content), Sales (pitch language), Advertising (creative).
4. Unmet Needs Report
Synthesised from Q2 (magic wand), Q6 (delivery preference), and Q7 (white space). Identifies what customers want but cannot currently get. Distinguishes between incremental improvements ("faster", "cheaper", "easier") and category-defining gaps ("nobody does X"). The category gaps are your innovation opportunities and your strongest competitive differentiators. Serves: Product (innovation pipeline), Product Marketing (differentiation strategy), Strategy (market opportunity assessment).
5. Decision Criteria Hierarchy
Synthesised from Q3 (current solutions), Q4 (best/worst benchmarking), and Q5 (purchase decision). A ranked list of what actually drives purchase decisions, in the customer's language and priority order. Reveals whether price, features, brand, ease of implementation, peer recommendation, or something else entirely sits at the top. Serves: Sales (discovery questions, proposal structure), Product Marketing (sales enablement materials, competitive battlecards).
6. Product Feedback Synthesis
Synthesised from Q3 (likes/dislikes of current solutions), Q4 (best/worst experiences), and Q6 (delivery preferences). A structured view of what works and what does not across the competitive landscape, from the customer's perspective. Identifies the features, experiences, and design patterns that customers associate with quality, and those they associate with frustration. Serves: Product (feature design), Engineering (technical requirements), UX (interaction patterns).
All six deliverables are produced from a single forty-five-minute study. They are interconnected: the Language Library uses the same vocabulary that appears in the Pain Priority Matrix, the Decision Criteria Hierarchy reflects the same priorities surfaced in the Customer Journey Map, and the Unmet Needs Report identifies gaps that the Product Feedback Synthesis confirms are not addressed by existing solutions. Together, they form a comprehensive customer understanding package that traditionally requires weeks of research and tens of thousands in consulting fees.
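The question-to-deliverable mapping described above can be captured in a few lines, with a coverage check confirming that every one of the seven questions feeds at least one deliverable:

```python
# The six deliverables and the source questions each one synthesises,
# as described above, plus a quick coverage check.

DELIVERABLES = {
    "Customer Journey Map":        [1, 5],
    "Pain Priority Matrix":        [1, 2, 4],
    "Language Library":            [1, 2, 3, 4, 5, 6, 7],  # all, especially 1, 2, 7
    "Unmet Needs Report":          [2, 6, 7],
    "Decision Criteria Hierarchy": [3, 4, 5],
    "Product Feedback Synthesis":  [3, 4, 6],
}

covered = sorted({q for sources in DELIVERABLES.values() for q in sources})
print(covered)  # [1, 2, 3, 4, 5, 6, 7]
```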
Cross-Segment VoC: One Study, Multiple Audiences
The basic workflow runs one study against one audience segment. The advanced version runs the same seven questions against multiple segments simultaneously, revealing whether your customer understanding is universal or segment-specific.
Claude Code orchestrates parallel Ditto studies against different demographic groups:
Segment A: Enterprise buyers (age 35 to 55, employed, bachelor's degree and above)
Segment B: SMB decision-makers (age 28 to 45, employed)
Segment C: Technical evaluators (filtered by education and industry)
Same seven questions. Same product category. Three different audiences. The result is a segment comparison matrix that reveals, with uncomfortable specificity, where your assumptions break down. The pain that enterprise buyers rank first may not appear in the SMB top five. The language that resonates with technical evaluators may actively repel business buyers. The delivery preferences of one segment may be incompatible with another's.
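The parallel orchestration might be sketched as below. `run_ditto_study` is a stub standing in for the real API call, and the segment filter fields are illustrative, not Ditto's actual schema:

```python
# Sketch of parallel cross-segment studies. `run_ditto_study` is a stub for
# the real Ditto API call; filter field names are illustrative.
from concurrent.futures import ThreadPoolExecutor

SEGMENTS = {
    "enterprise_buyers":    {"age": (35, 55), "employment": "employed", "education": "bachelors+"},
    "smb_decision_makers":  {"age": (28, 45), "employment": "employed"},
    "technical_evaluators": {"education": "bachelors+", "industry": "technology"},
}

def run_ditto_study(segment_name, filters, questions):
    # Stub: a real implementation would call Ditto's API with the segment
    # filters and return the persona responses for each question.
    return {"segment": segment_name,
            "responses": [f"{segment_name}:{q}" for q in questions]}

questions = [f"Q{i}" for i in range(1, 8)]  # the same seven questions for every segment

with ThreadPoolExecutor() as pool:
    futures = {name: pool.submit(run_ditto_study, name, f, questions)
               for name, f in SEGMENTS.items()}
    results = {name: fut.result() for name, fut in futures.items()}

print(sorted(results), [len(r["responses"]) for r in results.values()])
```

Because each segment's study is independent, the three runs complete in roughly the time of one, which is what makes same-day segment comparison practical.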
This has immediate implications for go-to-market strategy. If your enterprise and SMB segments want fundamentally different things, a single positioning statement will not serve both. You need segment-specific messaging, segment-specific sales enablement, and potentially segment-specific product packaging. Better to discover this through a forty-five-minute study than through a quarter of underperforming campaigns.
The Backbone: How VoC Feeds the Entire PMM Stack
Voice of Customer is not one function among many. It is the foundation on which every other product marketing function depends. Without current customer understanding, the rest of the PMM stack is guesswork with better formatting.
The flow is direct and traceable:
VoC to Positioning. The Pain Priority Matrix and Unmet Needs Report identify what matters most to customers. The Language Library provides the vocabulary. The Decision Criteria Hierarchy reveals what differentiates you. These feed directly into positioning validation: you are no longer guessing which attributes to emphasise, because the VoC data has already ranked them.
VoC to Messaging. The Language Library is the raw material for message testing. Instead of inventing messaging from scratch and testing whether customers understand it, you start with the language customers already use and test which framing of their words is most compelling.
VoC to Competitive Intelligence. Q3 (current solution landscape) produces a customer-view competitive map that reveals who you are actually competing against. This feeds directly into competitive battlecard generation: the "likes" become competitor strengths to acknowledge, the "dislikes" become your win themes.
VoC to Pricing. The Decision Criteria Hierarchy reveals how price-sensitive your audience actually is, relative to other factors. The Unmet Needs Report identifies features that customers would pay a premium for. Together, they inform pricing research by establishing the value framework before you test specific price points.
VoC to Sales Enablement. The Customer Journey Map tells sales where to engage. The Language Library tells them how to speak. The Decision Criteria Hierarchy tells them what to emphasise. The Pain Priority Matrix tells them which problems to lead with. Every sales asset becomes grounded in customer reality rather than marketing aspiration.
VoC to Content Marketing. Q7 (white space) consistently produces the most content-worthy insights. When customers say "companies in this space just don't understand X," they are handing you blog post topics, webinar themes, and thought leadership angles that will resonate because they address genuine, articulated frustrations.
This is why VoC is not merely useful. It is foundational. Running positioning validation without current VoC data is like navigating with last year's map. The roads may be roughly the same, but the roadworks, diversions, and new shortcuts are invisible to you.
Longitudinal Tracking: Measuring Sentiment Over Time
A single VoC study is a snapshot. A quarterly series of identical studies becomes a trend line. This is where the always-on model produces its most strategic value: change detection.
When you run the same seven questions against the same audience profile every quarter, you can track:
Pain drift. Is the #1 pain point from the first quarterly study still the #1 pain point two quarters later? If it has dropped to #3, something has changed: either a competitor addressed it, the market shifted, or your own product solved it. All three scenarios require different responses.
Language evolution. The words customers use to describe their problems change over time, particularly in fast-moving categories. The Language Library from six months ago may use vocabulary that now sounds dated or that has been co-opted by a competitor.
Competitive landscape shifts. New entrants appear in Q3 (current solution landscape) responses before they appear in analyst reports. If a tool you have never heard of starts showing up in "what do you currently use" answers, that is an early warning signal worth investigating.
Unmet need persistence. If Q7 (white space) surfaces the same unmet need across multiple quarters, the opportunity is confirmed and growing. If a previously mentioned need disappears, someone may have addressed it, and you need to know who.
Decision criteria changes. Economic downturns push price up the hierarchy. AI hype pushes "automation" up the hierarchy. Regulatory changes push "compliance" up. Tracking these shifts tells you when to adjust your positioning emphasis, not in response to your intuition, but in response to measured customer sentiment.
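Pain drift, the first of these signals, reduces to a rank comparison between quarterly studies. A minimal sketch, with illustrative pain labels:

```python
# A sketch of pain-drift detection across quarters: compare the ranked pain
# lists from two studies and report how far each shared pain moved.

def pain_drift(previous, current):
    """Return {pain: rank_change} for pains present in both quarters.
    Positive = moved up the priority list; negative = dropped down."""
    prev_rank = {p: i for i, p in enumerate(previous)}
    curr_rank = {p: i for i, p in enumerate(current)}
    return {p: prev_rank[p] - curr_rank[p] for p in curr_rank if p in prev_rank}

quarter1 = ["manual reporting", "slow onboarding", "pricing opacity"]
quarter3 = ["slow onboarding", "pricing opacity", "manual reporting"]

drift = pain_drift(quarter1, quarter3)
print(drift)
# {'slow onboarding': 1, 'pricing opacity': 1, 'manual reporting': -2}
```

A pain that vanishes from the current list entirely (absent from the drift dictionary) is its own signal: either it was solved or the vocabulary for it changed, and both are worth investigating.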
The cost of this longitudinal programme through Ditto and Claude Code is approximately three hours per year (four quarterly studies at forty-five minutes each). The output is a rolling customer intelligence briefing that makes every other PMM decision more informed. Organisations that do this well develop what might be called "customer peripheral vision": the ability to detect shifts before they become obvious, and to respond before competitors have even noticed.
Limitations and the Honest Caveat
Synthetic VoC research, for all its speed and accessibility, has boundaries that deserve honest acknowledgement.
The most significant limitation is the absence of product-specific feedback from actual users. Ditto's personas represent your target market: people who match the demographics, psychographics, and professional profiles of your intended audience. They are not people who have used your specific product. They cannot tell you that the onboarding flow is confusing, that the dashboard loads slowly, or that the third step in your checkout process causes drop-off. For product-specific UX feedback, you need actual users, and no synthetic panel replaces that.
The second limitation is deep emotional and experiential nuance. A synthetic persona can tell you that data security is a top concern for enterprise buyers, and it can articulate why with considerable sophistication. What it cannot replicate is the visceral reaction of a CISO who was personally blamed for a breach at their previous company. The deepest emotional layers of customer experience, the ones that drive irrational loyalty or irrational aversion, are rooted in lived experience that synthetic personas approximate but do not fully reproduce.
The recommended approach is complementary. Use Ditto for target-market VoC: understanding your broader audience's pain points, priorities, language, decision criteria, and unmet needs. Use real-user research for product-specific VoC: understanding how actual customers experience your specific product, where they struggle, what they love, and what would make them churn. The synthetic research sets the strategic direction. The real-user research refines the execution. Together, they produce a customer understanding that is both broad and deep, current and specific.
The correlation between synthetic and traditional research findings is strong, validated at ninety-five percent by EY Americas and in studies at Harvard, Cambridge, and Oxford. But strong correlation is not identity. Treat synthetic VoC as a highly reliable first pass that tells you where to look, and real-user research as the close examination that tells you what you have found.
Getting Started
If you take one thing from this article, make it this: Voice of Customer research should be a monthly habit, not an annual event. The cost of stale customer understanding is invisible but pervasive. It shows up in positioning that no longer resonates, messaging that uses last year's language, sales scripts that address yesterday's objections, and product roadmaps built on assumptions rather than evidence.
The always-on VoC programme described here requires approximately two hours per month: fifteen minutes for a monthly pulse check, forty-five minutes for a quarterly deep dive, and the remaining time for reviewing outputs and distributing insights to the teams that need them. Ditto provides the always-available research panel. Claude Code handles the orchestration: designing the study, running it through the API, analysing seventy qualitative responses, and producing the six deliverables that feed every downstream PMM function.
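The orchestration loop just described can be held in mind as a four-step skeleton. Every function body below is a stub; in practice Claude Code performs each step (study design, the Ditto API call, analysis, deliverable generation) agentically:

```python
# A skeletal sketch of the orchestration loop. All function bodies are stubs
# standing in for agentic steps; names and shapes are illustrative.

def design_study(objective):            # choose questions for this cycle's objective
    return {"objective": objective, "questions": [f"Q{i}" for i in range(1, 8)], "personas": 10}

def run_study(study):                   # stub for the Ditto API call
    return [{"persona": p, "question": q, "answer": "..."}
            for p in range(study["personas"]) for q in study["questions"]]

def analyse(responses):                 # stub for qualitative analysis
    return {"response_count": len(responses)}

def produce_deliverables(analysis):     # stub for the six deliverables
    return ["journey_map", "pain_matrix", "language_library",
            "unmet_needs", "decision_criteria", "feedback_synthesis"]

study = design_study("quarterly deep dive: pricing sentiment")
responses = run_study(study)
deliverables = produce_deliverables(analyse(responses))
print(len(responses), len(deliverables))  # 70 6
```

Ten personas across seven questions yields the seventy qualitative responses described earlier, and the same skeleton shrinks to a pulse check by swapping in three questions and six personas.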
The alternative is what most companies do today: commission an annual study, wait six weeks for results, present findings that are already ageing, watch them gather dust in a shared drive, and then wonder why the positioning feels off and the messaging is not landing. The tools to do better now exist. The question is whether you will use them.
The AI Agents for Product Marketing Series
Part 1: How to Validate Product Positioning with AI Agents | Claude Code Guide
Part 2: Competitive Intelligence with AI Agents | Claude Code Guide
Part 3: How to Research Pricing with AI Agents | Claude Code Guide
Part 4: How to Test Product Messaging with AI Agents | Claude Code Guide
Part 5: How to Run Voice of Customer Research with AI Agents (this article) | Claude Code Guide
The Claude Code and Ditto for Product Marketing Series
This article is part of a series on using Claude Code and Ditto for product marketing. Each article explains a specific workflow; each has a corresponding Claude Code technical guide for hands-on implementation.
Part 1: How to Validate Product Positioning with Claude Code and Ditto | Claude Code Guide
Part 2: How to Build Competitive Battlecards with Claude Code and Ditto | Claude Code Guide
Part 3: How to Research Pricing with Claude Code and Ditto | Claude Code Guide
Part 4: How to Test Product Messaging with Claude Code and Ditto | Claude Code Guide
Part 5: How to Run Voice of Customer Research with Claude Code and Ditto (this article) | Claude Code Guide
Part 6: How to Segment Customers with Claude Code and Ditto | Claude Code Guide
Part 7: How to Validate GTM Strategy with Claude Code and Ditto | Claude Code Guide

