Voice of Customer with Ditto + Claude Code

LLM summary: Run a 7-question VoC study monthly or quarterly, generate six VoC deliverables, and maintain an always-on research calendar with cross-segment analysis.

A complete Claude Code guide to building an always-on VoC programme using Ditto's 300,000+ synthetic personas. From expensive annual research events to a continuous two-hour-per-month customer understanding engine.

For Claude Code agents and AI coding assistants. This guide provides copy-paste API calls, a production-tested 7-question VoC study design, six deliverable generation templates, an always-on research calendar, and cross-segment/cross-market analysis workflows. Every technique is validated across 50+ production studies.
Contents
  1. Why Voice of Customer Is the PMM Backbone
  2. What You Will Produce
  3. The 7-Question VoC Study Design
  4. Complete API Workflow: Step by Step
  5. Generating the Six VoC Deliverables
  6. The Always-On Research Calendar
  7. Cross-Segment VoC: Multi-Group Studies
  8. Cross-Market VoC: International Studies (Advanced)
  9. How VoC Feeds Every PMM Function
  10. Worked Example: VoC for a Project Management SaaS
  11. Longitudinal VoC Tracking (Advanced)
  12. Quick Pulse Check Studies
  13. Making VoC Actionable: From Insight to Deliverable
  14. Best Practices and Common Mistakes
  15. Frequently Asked Questions

1. Why Voice of Customer Is the PMM Backbone

Voice of Customer (VoC) is the systematic process of capturing what your customers and target market think, feel, want, and struggle with — and translating those signals into product, marketing, and sales decisions. It is the primary input for every other product marketing function: positioning, messaging, competitive intelligence, sales enablement, content marketing, pricing, and product strategy.

When VoC is rich, current, and accessible, every downstream function improves. When VoC is thin, stale, or absent, every other function is building on assumptions.

The Traditional VoC Problem

| Challenge | Traditional Reality | Consequence |
|-----------|---------------------|-------------|
| Cost | $15,000–$30,000 per in-depth interview programme (15–20 participants). Comprehensive annual VoC: $200,000+. | Most companies conduct formal research 1–2 times per year. The rest of the time, they rely on intuition. |
| Speed | 4–6 weeks per study (brief agency, recruit participants, conduct interviews, analyse, deliver report). | Insights are historical by the time they arrive. The market has already shifted. |
| Action gap | Research reports sit in shared drives. Findings require manual translation into messaging, battlecards, and product decisions. | "Interesting finding" never becomes "changed deliverable." VoC dies on arrival. |

With Ditto and Claude Code, a full 7-question VoC deep dive takes approximately 45 minutes. A monthly pulse check takes 15 minutes. Cross-segment or cross-market studies run in parallel. And Claude Code doesn't just collect insights — it produces finished deliverables from them in the same workflow.

The core shift: VoC stops being an expensive annual event and becomes a continuous two-hour-per-month programme. When research takes 45 minutes instead of 6 weeks, you treat it as a habit rather than an event.

2. What You Will Produce

A single 10-persona, 7-question VoC study produces six distinct deliverables, each designed to be immediately actionable by a different team:

| Deliverable | What It Contains | Who Uses It | Source Questions |
|-------------|------------------|-------------|------------------|
| Customer Journey Map | Key touchpoints, pain moments, and delight moments reconstructed from experience mapping | PMM, Product, UX | Q1, Q4 |
| Pain Priority Matrix | Customer frustrations ranked by severity and frequency, with supporting quotes from each persona | Product, PMM, Sales | Q1, Q2 |
| Language Library | Exact words and phrases customers use when talking about their problems, needs, and evaluation criteria | Marketing, Content, Sales | All questions |
| Unmet Needs Report | Gaps no one is addressing, opportunities hiding in plain sight | Product, PMM, Leadership | Q2, Q7 |
| Decision Criteria Hierarchy | What matters most to least when buyers evaluate solutions in your category | Sales, PMM, Product | Q5, Q6 |
| Product Feedback Synthesis | Actionable recommendations structured for the product team, with persona evidence | Product, Engineering | Q3, Q4, Q6, Q7 |

Every output is traceable. Each finding in every deliverable links back to specific persona responses. The product manager can read the original answers. The sales rep can quote a specific persona. The CEO can see the raw data behind the summary. This traceability is what makes people trust findings enough to act on them.

3. The 7-Question VoC Study Design

This question set is specifically engineered to produce the raw material for all six deliverables simultaneously. Each question does double or triple duty, feeding different outputs. The questions must be asked sequentially (each builds conversational context from prior answers).

| Q# | Question | VoC Dimension | Deliverables Fed |
|----|----------|---------------|------------------|
| 1 | "Tell me about the last time you [relevant activity]. Walk me through the experience from start to finish. What went well? What was frustrating?" | Experience mapping | Journey Map, Pain Matrix, Language Library |
| 2 | "If you could wave a magic wand and fix ONE thing about [problem space], what would it be? Why that above everything else?" | Priority pain identification | Pain Matrix, Unmet Needs, Product Feedback |
| 3 | "How do you currently solve [problem]? What tools, people, or workarounds do you use? What do you wish you could do differently?" | Current solution landscape | Pain Matrix, Unmet Needs, Decision Criteria |
| 4 | "Think about the best [product/service] experience you've ever had in any category. What made it great? Now think about the worst. What made it terrible?" | Expectation benchmarking | Journey Map, Product Feedback, Language Library |
| 5 | "When you're researching a new [product type], what do you look for first? Second? What's a dealbreaker?" | Purchase decision framework | Decision Criteria, Product Feedback, Language Library |
| 6 | "If a product promised to [core value prop], how would you want to experience that? In the product itself? Through reports? Through a person helping you?" | Delivery preference | Product Feedback, Journey Map, Decision Criteria |
| 7 | "Is there anything about [problem space] that you feel companies just don't understand? What do you wish they'd get right?" | Unmet needs / white space | Unmet Needs, Pain Matrix, Language Library |

Why This Question Sequence Works for VoC

Do not change the question order. Each question builds on the conversational context established by earlier answers. Q1 grounds the persona in a specific experience. Q2–Q3 deepen the exploration. Q4 broadens the frame. Q5–Q7 move to evaluation and unmet needs. Reordering disrupts this flow and produces less coherent responses.

4. Complete API Workflow: Step by Step

Prerequisites

- A Ditto API key, sent as Authorization: Bearer YOUR_API_KEY on every request
- curl available in your shell (the sketches later in this guide also assume jq for JSON parsing)

Step 1: Create Research Group

curl -s -X POST "https://app.askditto.io/v1/research-groups/recruit" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "VoC Deep Dive - [Product Name] - [Date]",
    "description": "Target customers for Voice of Customer research on [product/problem space]. [Add any context about ideal customer profile.]",
    "group_size": 10,
    "filters": {
      "country": "USA",
      "age_min": 25,
      "age_max": 55,
      "employment": "employed"
    },
    "sampling_method": "random",
    "deduplicate": true
  }'

Critical parameter notes:

- Use group_size, not size; the API rejects size
- There is no income filter; use education and employment as proxies
- deduplicate: true prevents the same persona being recruited into the group twice

Step 2: Create Study

curl -s -X POST "https://app.askditto.io/v1/research-studies" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "VoC Deep Dive: [Product Name] - [Date]",
    "research_group_uuid": "GROUP_UUID_FROM_STEP_1"
  }'

Save the study id — you need it for asking questions, completing, and sharing.

Step 3: Ask Questions (Sequential)

Ask each question one at a time. Wait for the job to complete before sending the next question. This ensures personas have conversational context from prior answers.

# Question 1
curl -s -X POST "https://app.askditto.io/v1/research-studies/STUDY_ID/questions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "question": "Tell me about the last time you [relevant activity]. Walk me through the experience from start to finish. What went well? What was frustrating?"
  }'

# Response includes a job ID:
# { "job_id": "job-abc123", "status": "pending" }

Step 4: Poll for Responses

# Poll until status is "finished"
curl -s -X GET "https://app.askditto.io/v1/jobs/JOB_ID" \
  -H "Authorization: Bearer YOUR_API_KEY"

# When complete:
{
  "id": "job-abc123",
  "status": "finished",
  "result": {
    "answer": "The last time I tried to..."
  }
}

Poll with a 5-second interval. Most questions complete within 30–90 seconds. Once complete, send the next question. Repeat for all 7 questions.
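
For agents scripting Steps 3 and 4 directly, here is a minimal bash sketch of the ask-and-poll loop. It assumes curl and jq are available and that the response shapes match the examples above; the question text is abbreviated, so substitute the full questions from Section 3.

API="https://app.askditto.io/v1"
AUTH="Authorization: Bearer $DITTO_API_KEY"
STUDY_ID="STUDY_ID_FROM_STEP_2"

QUESTIONS=(
  "Tell me about the last time you [relevant activity]. Walk me through the experience from start to finish. What went well? What was frustrating?"
  "If you could wave a magic wand and fix ONE thing about [problem space], what would it be? Why that above everything else?"
  # ...questions 3-7 from Section 3...
)

for q in "${QUESTIONS[@]}"; do
  # Ask the question; the POST returns immediately with a job ID
  job_id=$(curl -s -X POST "$API/research-studies/$STUDY_ID/questions" \
    -H "$AUTH" -H "Content-Type: application/json" \
    -d "$(jq -n --arg q "$q" '{question: $q}')" | jq -r '.job_id')

  # Poll every 5 seconds until this question's job reports "finished",
  # so the next question is asked with full conversational context
  while [ "$(curl -s "$API/jobs/$job_id" -H "$AUTH" | jq -r '.status')" != "finished" ]; do
    sleep 5
  done
done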

Step 5: Complete the Study

curl -s -X POST "https://app.askditto.io/v1/research-studies/STUDY_ID/complete" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json"

This triggers Ditto's AI analysis, producing: overall summary, key segments identified, divergence points, shared mindsets, and suggested follow-up questions. Poll the study status until it reaches "completed".
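
A minimal polling sketch for this step, reusing the $API, $AUTH, and $STUDY_ID variables from the Step 3–4 sketch. The study object's status field name is inferred from the error table in Section 14:

# Trigger analysis (as above), then poll the study until it reaches "completed"
curl -s -X POST "$API/research-studies/$STUDY_ID/complete" -H "$AUTH"

while [ "$(curl -s "$API/research-studies/$STUDY_ID" -H "$AUTH" | jq -r '.status')" != "completed" ]; do
  sleep 10
done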

Step 6: Get Share Link

curl -s -X POST "https://app.askditto.io/v1/research-studies/STUDY_ID/share" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json"

# Response:
{
  "url": "https://app.askditto.io/organization/studies/shared/xyz123"
}

UTM tracking is mandatory. Append ?utm_source=ce for cold email outreach or ?utm_source=blog for blog articles. Never use raw share URLs without a UTM parameter.

Step 7: Retrieve Full Study Data for Deliverable Generation

# Get the completed study with all responses and AI analysis
curl -s -X GET "https://app.askditto.io/v1/research-studies/STUDY_ID" \
  -H "Authorization: Bearer YOUR_API_KEY"

This returns all persona responses, demographic profiles, and Ditto's completion analysis. Use this data to generate the six deliverables described in the next section.
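
A small sketch for persisting the payload before deliverable generation. The exact response schema is not documented in this guide, so the jq path in the final comment is a hypothetical illustration only:

# Save the full study payload to disk for deliverable generation
curl -s "$API/research-studies/$STUDY_ID" -H "$AUTH" > voc_study.json

# Hypothetical example of listing each persona's answer to question 1;
# field names are assumptions, so inspect voc_study.json for the real schema:
# jq '.questions[0].responses[] | {persona: .persona.name, answer: .answer}' voc_study.json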

Total API call timeline for a single VoC deep dive: Group creation (~15 seconds) + Study creation (~5 seconds) + 7 questions with polling (~15–25 minutes) + Completion (~2–5 minutes) + Share link (~5 seconds). Total: approximately 20–30 minutes of API interaction. Deliverable generation by Claude Code adds 15–20 minutes. End-to-end: ~45 minutes.

5. Generating the Six VoC Deliverables

Once the study is complete and you have all 10 persona responses to all 7 questions, Claude Code should generate each deliverable using the extraction logic below. Each deliverable draws from specific questions and follows a defined structure.

Deliverable 1: Customer Journey Map

Source Data

Q1 responses (experience narratives) from all 10 personas, cross-referenced with Q4 responses (best/worst experience benchmarks).

Extraction Logic

  1. Read all Q1 responses and identify every touchpoint mentioned (e.g., "I started by searching Google," "I signed up for a free trial," "I called support")
  2. For each touchpoint, classify it as a pain moment (frustration), delight moment (satisfaction), or neutral
  3. Map touchpoints chronologically to produce a journey sequence
  4. Cross-reference with Q4 to identify which benchmarks customers are measuring you against
  5. Note the frequency: if 7/10 personas mention "confusing onboarding," that's a high-confidence pain moment

Output Format

## Customer Journey Map: [Product/Category]

### Pre-Purchase Phase
| Touchpoint | Experience | Pain/Delight | Frequency |
|------------|-----------|--------------|-----------|
| Web search | "I Googled solutions for..." | Neutral | 8/10 |
| Comparison sites | "Tried to compare options but..." | Pain | 6/10 |
| Free trial signup | "The signup was quick but..." | Mixed | 7/10 |

### Active Use Phase
| Touchpoint | Experience | Pain/Delight | Frequency |
|------------|-----------|--------------|-----------|
| First use | "I immediately tried to..." | Pain | 5/10 |
| Aha moment | "When I realised I could..." | Delight | 6/10 |

### Benchmark Reference (from Q4)
- Best experience standard: "[description]" (cited by X/10)
- Worst experience warning: "[description]" (cited by X/10)

Deliverable 2: Pain Priority Matrix

Source Data

Q2 responses (the "magic wand" priority pain) as the primary source, with Q1 and Q3 supplying secondary frustrations and workaround complaints.

Extraction Logic

  1. From Q2, extract the single priority pain each persona identified. Group identical or closely related pains.
  2. From Q1 and Q3, extract additional frustrations and workaround complaints. Add these as secondary pains.
  3. Count frequency: how many personas mentioned each pain across Q1, Q2, and Q3 combined.
  4. Score severity: based on the language personas used. "Mildly annoying" scores lower than "makes me want to scream."
  5. Rank by frequency × severity to produce the final priority order (a worked scoring sketch follows this list).
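
To make step 5 concrete, here is an illustrative jq sketch of the frequency × severity ranking. The severity weights (High = 3, Medium = 2, Low = 1) and the sample pains are assumptions for demonstration, not API output:

jq -n '[
  {pain: "Status update meetings", frequency: 9, severity: "High"},
  {pain: "Cross-project dependencies", frequency: 7, severity: "Medium"},
  {pain: "Client-facing views", frequency: 3, severity: "Low"}
] | map(. + {score: (.frequency * ({High: 3, Medium: 2, Low: 1}[.severity]))})
  | sort_by(-.score)'

With these weights, status meetings rank first (9 × 3 = 27), dependencies second (7 × 2 = 14), and client-facing views last (3 × 1 = 3).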

Output Format

## Pain Priority Matrix: [Product/Category]

| Rank | Pain Point | Frequency | Severity | Representative Quote |
|------|-----------|-----------|----------|---------------------|
| 1 | [Most common high-severity pain] | 8/10 | High | "Quote from persona with name, age, occupation" |
| 2 | [Second priority] | 6/10 | High | "Quote..." |
| 3 | [Third priority] | 7/10 | Medium | "Quote..." |
| 4 | [Fourth priority] | 4/10 | Medium | "Quote..." |
| 5 | [Fifth priority] | 3/10 | Low | "Quote..." |

### Product Team Action Items
1. [Pain #1]: Suggested action based on frequency and severity
2. [Pain #2]: Suggested action
3. [Pain #3]: Suggested action

Deliverable 3: Language Library

Source Data

All responses to all 7 questions; every answer is scanned for quotable phrases.

Extraction Logic

  1. Read all responses across all 7 questions
  2. Extract phrases that are: vivid, specific, emotional, or use unexpected vocabulary
  3. Categorise by theme: problem language, solution language, evaluation language, emotional language, comparison language
  4. For each phrase, record the persona's demographic context (name, age, occupation, location)

Output Format

## Language Library: [Product/Category]

### Problem Language (how customers describe their pain)
- "I spend more time planning than actually doing" - Sarah, 34, Marketing Manager, Chicago
- "It feels like herding cats" - James, 42, Operations Director, Austin
- "[vivid phrase]" - [name, age, role, location]

### Solution Language (how customers describe what they want)
- "Something that just works without me having to think about it" - ...
- "[phrase]" - ...

### Evaluation Language (how customers describe their buying criteria)
- "The first thing I look for is..." - ...
- "It's a dealbreaker if..." - ...

### Emotional Language (how customers feel about the problem)
- "It drives me absolutely mad" - ...
- "I dread [activity] every single [time period]" - ...

### Comparison Language (how customers reference alternatives)
- "Compared to [competitor], this is..." - ...
- "What [other company] gets right is..." - ...

The Language Library is arguably the most commercially valuable deliverable. Messaging written in customer language consistently outperforms messaging written in company language. Every word in this library came from a persona, not a copywriter. Use it for website copy, email subject lines, ad headlines, sales scripts, and one-pagers.

Deliverable 4: Unmet Needs Report

Source Data

Q7 responses (what companies just don't understand) as the primary source, plus Q2 "magic wand" fixes and Q3 wished-for changes.

Extraction Logic

  1. From Q7, extract every unaddressed frustration or wish. These are category-level gaps, not product-specific complaints.
  2. From Q2, identify any "magic wand" fixes that no current solution provides.
  3. From Q3 ("what do you wish you could do differently?"), extract unmet functional needs.
  4. Classify each unmet need as: feature gap (something buildable), experience gap (something designable), or trust gap (something addressable through messaging/proof).
  5. Score by frequency and intensity. When 7/10 personas independently identify the same unaddressed frustration, you've found a high-confidence positioning opportunity.

Output Format

## Unmet Needs Report: [Product/Category]

### Feature Gaps (buildable)
| Need | Frequency | Representative Quote | Opportunity |
|------|-----------|---------------------|-------------|
| [Need] | 7/10 | "Quote..." | [How to address] |

### Experience Gaps (designable)
| Need | Frequency | Representative Quote | Opportunity |
|------|-----------|---------------------|-------------|
| [Need] | 5/10 | "Quote..." | [How to address] |

### Trust Gaps (addressable through messaging/proof)
| Need | Frequency | Representative Quote | Opportunity |
|------|-----------|---------------------|-------------|
| [Need] | 6/10 | "Quote..." | [What proof to provide] |

### Positioning Opportunities
- "[Unmet need]" cited by X/10 personas. No competitor addresses this.
  Suggested positioning angle: "Unlike [alternatives], we..."

Deliverable 5: Decision Criteria Hierarchy

Source Data

Q5 responses (evaluation criteria and dealbreakers) and Q6 responses (delivery and experience preferences).

Extraction Logic

  1. From Q5, extract every evaluation criterion mentioned. Record whether it was described as "first thing," "second thing," or "dealbreaker."
  2. From Q6, extract delivery and experience preferences.
  3. Count frequency: how many personas mentioned each criterion.
  4. Build the hierarchy: tier 1 (dealbreakers — must-haves that kill deals if absent), tier 2 (primary criteria — what they evaluate first), tier 3 (secondary criteria — tiebreakers between shortlisted options).

Output Format

## Decision Criteria Hierarchy: [Product/Category]

### Tier 1: Dealbreakers (absence = instant disqualification)
1. [Criterion] - cited by X/10 as non-negotiable
2. [Criterion] - cited by X/10

### Tier 2: Primary Criteria (evaluated first)
1. [Criterion] - "first thing I look for" (X/10)
2. [Criterion] - "immediately check for" (X/10)

### Tier 3: Secondary Criteria (tiebreakers)
1. [Criterion] - mentioned by X/10 as differentiator
2. [Criterion] - mentioned by X/10

### Delivery Preferences (from Q6)
- X/10 prefer in-product experience
- X/10 prefer human-assisted onboarding
- X/10 prefer self-serve with documentation

### Implications for Sales and Marketing
- Lead with: [Tier 1 dealbreakers] - ensure these are visible on landing page and in first demo
- Emphasise: [Tier 2 criteria] - these drive shortlisting
- Differentiate on: [Tier 3 criteria] - these win competitive deals

Deliverable 6: Product Feedback Synthesis

Source Data

Product-relevant feedback from Q3, Q4, Q6, and Q7, with Q2 priority pains as supporting context.

Extraction Logic

  1. Compile all product-relevant feedback from Q2, Q3, Q4, Q6, and Q7.
  2. Categorise as: build (new feature request), improve (enhancement to existing capability), fix (broken experience), or communicate (feature exists but customers don't know).
  3. Rank by frequency and strategic importance.
  4. Include specific persona quotes as evidence for each recommendation.

Output Format

## Product Feedback Synthesis: [Product/Category]

### Build (new capabilities)
| Recommendation | Evidence | Frequency | Priority |
|---------------|----------|-----------|----------|
| [Feature] | "X/10 personas said..." + representative quote | X/10 | High |

### Improve (enhance existing)
| Recommendation | Evidence | Frequency | Priority |
|---------------|----------|-----------|----------|
| [Enhancement] | "Quote..." | X/10 | Medium |

### Fix (broken experiences)
| Recommendation | Evidence | Frequency | Priority |
|---------------|----------|-----------|----------|
| [Fix] | "Quote..." | X/10 | High |

### Communicate (feature exists but unknown)
| Feature | Evidence of Unawareness | Action |
|---------|------------------------|--------|
| [Feature] | "X/10 asked for something we already offer" | Update messaging, demo script, onboarding |

6. The Always-On Research Calendar

An always-on VoC programme doesn't mean running research every day. It means having a cadence that matches the rate at which your market changes, so your understanding never falls critically out of date.

| Study Type | Frequency | Configuration | Time | Purpose |
|------------|-----------|---------------|------|---------|
| Pulse Check | Monthly | 3 questions, 6 personas | ~15 min | Track shifting priorities and emerging themes. Early warning system. |
| Deep Dive | Quarterly | 7 questions, 10 personas | ~45 min | Comprehensive understanding. Produces all 6 deliverables. Enables trend analysis. |
| Targeted Probe | Ad hoc | 5 questions, 8 personas | ~25 min | Investigate specific signals: new competitor move, new objection from sales, pricing consideration. |
| Pre-Launch Test | Before launches | 7 questions, 10 personas | ~45 min | Validate concept, feature priority, messaging before committing resources. |
| Post-Launch Sentiment | 2 weeks after launch | 5 questions, 10 personas | ~30 min | Measure market reaction. Did the messaging land? Where are the gaps? |

Total monthly investment: approximately 2 hours. Traditional VoC at this frequency: $200,000+ annually.

The compounding effect. After four quarters of consistent deep dives, you have a longitudinal dataset showing how customer priorities, pain points, and competitive perceptions evolve. You're not making positioning decisions based on a snapshot — you're working from a trend line. See Section 11: Longitudinal VoC Tracking for the technique.

7. Cross-Segment VoC: Multi-Group Studies

Traditional VoC forces you to choose: research SMB buyers or enterprise buyers. Technical evaluators or economic buyers. With Ditto, you run the same study across multiple segments in parallel and compare.

The Technique

  1. Create multiple research groups, each representing a different customer segment
  2. Create one study per group
  3. Ask the identical 7 questions to every study
  4. Complete all studies
  5. Compare responses question-by-question across groups

Example: VoC Across Three Buyer Types

# Group A: Individual contributors / end users
{
  "name": "VoC - End Users (25-40)",
  "description": "Individual contributors who use [product type] daily in their work",
  "group_size": 10,
  "filters": { "country": "USA", "age_min": 25, "age_max": 40, "employment": "employed" }
}

# Group B: Team leads / managers
{
  "name": "VoC - Team Leads (32-48)",
  "description": "Team managers who evaluate and purchase [product type] for their teams",
  "group_size": 10,
  "filters": { "country": "USA", "age_min": 32, "age_max": 48, "employment": "employed", "education": "bachelors" }
}

# Group C: Senior leadership / budget holders
{
  "name": "VoC - Leadership (40-58)",
  "description": "Senior leaders responsible for department-level technology decisions and budgets",
  "group_size": 10,
  "filters": { "country": "USA", "age_min": 40, "age_max": 58, "employment": "employed", "education": "masters" }
}
Efficiency: parallelise across groups. Send Question 1 to all three studies simultaneously. Poll all job IDs in parallel. Once all complete, send Question 2 to all three. This cuts total wall-clock time from ~90 minutes (sequential) to ~30 minutes. See the Customer Segmentation guide for the full multi-group API pattern.
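
A minimal bash sketch of the parallel pattern, reusing $API and $AUTH from Section 4. STUDY_A, STUDY_B, and STUDY_C are assumed variables holding the three study IDs; the 5-second poll interval matches the guidance in Step 4.

ask() {  # ask <study_id> <question>: prints the resulting job ID
  curl -s -X POST "$API/research-studies/$1/questions" \
    -H "$AUTH" -H "Content-Type: application/json" \
    -d "$(jq -n --arg q "$2" '{question: $q}')" | jq -r '.job_id'
}

wait_job() {  # poll a job every 5 seconds until it reports "finished"
  while [ "$(curl -s "$API/jobs/$1" -H "$AUTH" | jq -r '.status')" != "finished" ]; do
    sleep 5
  done
}

Q1="Tell me about the last time you [relevant activity]..."

# The POSTs return immediately with job IDs, so all three studies work
# on Question 1 concurrently; only then do we wait for every job.
job_ids=""
for s in "$STUDY_A" "$STUDY_B" "$STUDY_C"; do
  job_ids="$job_ids $(ask "$s" "$Q1")"
done
for j in $job_ids; do wait_job "$j"; done
# Repeat for Questions 2-7.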

Cross-Segment Comparison Matrix

After all studies complete, Claude Code should produce a comparison matrix. For each question, extract the dominant theme from each group:

| Question | End Users (A) | Team Leads (B) | Leadership (C) | Key Divergence |
|----------|---------------|----------------|----------------|----------------|
| Q1: Experience | "The tool is slow and the UI is cluttered" | "Hard to get adoption from my team" | "I have no visibility into whether it's working" | Each role frames the problem from its own perspective: usability vs. adoption vs. ROI visibility |
| Q2: Priority fix | "Speed. Just make it faster." | "Better reporting so I can justify the cost" | "Consolidate our tools into fewer platforms" | Completely different priority pains by role. One-size-fits-all messaging will miss all three. |
| Q5: Decision criteria | "Ease of use, integrations, mobile access" | "Adoption rate, support quality, training" | "TCO, security compliance, vendor stability" | Each persona type has entirely different buying criteria |

This comparison directly informs buyer-specific sales enablement. The demo for end users leads with speed and UI. The demo for team leads leads with reporting and adoption tools. The pitch for leadership leads with TCO and security.


8. Cross-Market VoC: International Studies (Advanced)

Ditto covers 15+ countries representing 65% of global GDP. You can run the identical 7-question VoC study across multiple geographies simultaneously to understand how customer needs vary by market.

Multi-Market Study Setup

# Group 1: US market
{
  "name": "VoC Cross-Market - USA",
  "group_size": 10,
  "filters": { "country": "USA", "age_min": 25, "age_max": 50, "employment": "employed" }
}

# Group 2: UK market
{
  "name": "VoC Cross-Market - UK",
  "group_size": 10,
  "filters": { "country": "UK", "age_min": 25, "age_max": 50, "employment": "employed" }
}

# Group 3: Germany
{
  "name": "VoC Cross-Market - Germany",
  "group_size": 10,
  "filters": { "country": "Germany", "age_min": 25, "age_max": 50, "employment": "employed" }
}

# Group 4: Canada
{
  "name": "VoC Cross-Market - Canada",
  "group_size": 10,
  "filters": { "country": "Canada", "age_min": 25, "age_max": 50, "employment": "employed" }
}

Ask the same 7 VoC questions to all four groups. Claude Code then produces:

  1. A market-by-market comparison matrix, in the same structure as the cross-segment matrix in Section 7
  2. Per-market versions of the six deliverables, including a Language Library for each market
  3. A summary of which findings hold globally and which are market-specific, as input to international GTM strategy

Typical finding: US buyers often prioritise speed and integrations. UK buyers frequently emphasise data privacy and GDPR compliance. German buyers tend to value thoroughness and accuracy. Canadian buyers often focus on cost and accessibility. These patterns emerge clearly from cross-market comparison and can dramatically reshape international GTM strategy.

Traditional equivalent: commissioning separate research agencies across four countries. Cost: $100,000–$200,000. Time: 3–6 months. With Ditto + Claude Code: approximately 1 hour.


9. How VoC Feeds Every PMM Function

VoC is not a standalone activity. It is the primary input for every other product marketing function. Here's the dependency map, with specific connections from VoC deliverables to downstream outputs:

| PMM Function | VoC Input | How It's Used |
|--------------|-----------|---------------|
| Positioning | Pain Matrix (Q2), Language Library (all), Unmet Needs (Q7) | Dunford's framework requires competitive alternatives (Q3), unique value perception (Q2), and target customer language (Language Library). VoC provides all three. |
| Messaging | Language Library (all), Pain Matrix (Q1–Q2) | Use customer phrases for headlines, value propositions, and email copy. "Kill the 'what's for dinner' question" (customer language) beats "Simplify your meal planning experience" (corporate language). |
| Competitive Intel | Pain Matrix (Q3: current solutions), Unmet Needs (Q7), Decision Criteria (Q5) | VoC reveals how buyers perceive competitors — often dramatically different from how competitors position themselves. Battlecard intelligence. |
| Sales Enablement | Decision Criteria (Q5), Language Library, Journey Map (Q1), Pain Matrix | Demo scripts use buyer-prioritised feature order (Q5). Objection handling uses Q7 barriers. One-pagers use customer language. Battlecards use competitive perception (Q3). |
| Product Strategy | Product Feedback Synthesis, Unmet Needs, Pain Matrix | When 8/10 personas identify the same unaddressed pain, that's a product decision backed by market evidence. Product Feedback Synthesis structures this for roadmap discussions. |
| Content Marketing | All deliverables (every VoC study produces publishable insights) | Quotes, statistics, and patterns become blog posts, social content, and thought leadership. Research-backed content with original data outperforms generic advice for both SEO and engagement. |
| Pricing | Decision Criteria (Q5), Pain Matrix (cost of problem), Language Library (value framing) | VoC reveals how customers frame value (time saved, risk reduced, outcomes achieved), informing value-based pricing and packaging decisions. |

When VoC runs continuously, all downstream functions receive a steady stream of fresh input. Positioning gets validated quarterly. Messaging stays grounded in current language. Battlecards reflect this month's competitive perception, not last year's. The compound effect of continuous VoC is the most underappreciated competitive advantage in product marketing.

10. Worked Example: VoC for a Project Management SaaS

Scenario

Product: "TaskFlow" — a project management tool for mid-size marketing teams
Problem space: "Managing projects and tasks across a marketing team"
Core value prop: "See every project's status in one dashboard without chasing status updates"
Target ICP: Marketing managers at companies with 50–500 employees

Group Setup

{
  "name": "VoC Deep Dive - TaskFlow - Feb 2026",
  "description": "Marketing professionals at mid-size companies who manage projects and coordinate across team members. Focus on project management tool evaluation and usage.",
  "group_size": 10,
  "filters": {
    "country": "USA",
    "age_min": 28,
    "age_max": 48,
    "employment": "employed",
    "education": "bachelors"
  }
}

Customised Questions

  1. "Tell me about the last time you managed a marketing project with multiple team members. Walk me through the experience from start to finish. What went well? What was frustrating?"
  2. "If you could wave a magic wand and fix ONE thing about managing projects across your marketing team, what would it be? Why that above everything else?"
  3. "How do you currently manage marketing projects? What tools, people, or workarounds do you use? What do you wish you could do differently?"
  4. "Think about the best tool experience you've ever had in any category. What made it great? Now think about the worst. What made it terrible?"
  5. "When you're researching a new project management tool, what do you look for first? Second? What's a dealbreaker?"
  6. "If a tool promised to show you every project's status in one dashboard without chasing status updates, how would you want to experience that? In the tool itself? Through daily email digests? Through Slack notifications?"
  7. "Is there anything about project management tools that you feel companies just don't understand? What do you wish they'd get right?"

Sample Findings

Pain Priority Matrix (Top 3):

| Rank | Pain | Frequency | Quote |
|------|------|-----------|-------|
| 1 | Status update meetings consume time | 9/10 | "I spend 3 hours a week in meetings that are just people reading their status aloud" — Rachel, 36, Marketing Manager |
| 2 | Can't see cross-project dependencies | 7/10 | "When the blog launch slips, I don't find out the email campaign is blocked until two days later" — Marcus, 41, Director of Marketing |
| 3 | Tool adoption is inconsistent | 6/10 | "Half the team updates Asana. The other half uses Slack threads and spreadsheets" — Priya, 33, Marketing Operations |

Decision Criteria Hierarchy:

| Tier | Criteria | Frequency |
|------|----------|-----------|
| Dealbreaker | Must integrate with Slack | 8/10 |
| Dealbreaker | Must have a usable mobile app | 6/10 |
| Primary | Visual project dashboards (not just task lists) | 7/10 |
| Primary | Easy to learn — team adopts without training | 7/10 |
| Secondary | Built-in time tracking | 4/10 |
| Secondary | Client-facing project views | 3/10 |

Language Library Highlights:

Representative entries, drawn from the responses quoted above: problem language such as "meetings that are just people reading their status aloud" (Rachel, 36, Marketing Manager) and "Half the team updates Asana. The other half uses Slack threads and spreadsheets" (Priya, 33, Marketing Operations).

Unmet Need: 7/10 personas said project management tools don't account for the reality that not everyone on the team uses the tool consistently. The unmet need is automatic status capture — pulling status from where work actually happens (Slack, Google Docs, email) rather than requiring manual updates.

Deliverables Generated

All six deliverables from Section 5 were generated from this single study: the Customer Journey Map, Pain Priority Matrix, Language Library, Unmet Needs Report, Decision Criteria Hierarchy, and Product Feedback Synthesis, each traceable to the persona responses above.


11. Longitudinal VoC Tracking (Advanced)

The most powerful application of always-on VoC is trend analysis across time. When you run quarterly deep dives with the same 7 questions against fresh persona groups, you build a dataset that reveals how your market is evolving.

Setting Up Longitudinal Tracking

  1. Use identical questions every quarter. Do not change the questions between quarters. The value is in comparing responses to the same questions over time.
  2. Use the same filter configuration. Same country, age range, employment status, education level. Change the group and study names to include the date: "VoC Deep Dive - TaskFlow - Q1 2026", "VoC Deep Dive - TaskFlow - Q2 2026".
  3. Use fresh personas each quarter. Create a new group each time. You want to capture the current market perspective, not re-interview the same synthetic individuals. (A group-creation sketch follows this list.)
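
A sketch of the quarterly group-creation call, reusing $API and $AUTH from Section 4. Only the date-stamped name changes between quarters; the filters stay fixed so the samples remain comparable.

QUARTER="Q2 2026"
curl -s -X POST "$API/research-groups/recruit" \
  -H "$AUTH" -H "Content-Type: application/json" \
  -d "$(jq -n --arg name "VoC Deep Dive - TaskFlow - $QUARTER" '{
    name: $name,
    description: "Marketing professionals at mid-size companies who manage projects and coordinate across team members.",
    group_size: 10,
    filters: {country: "USA", age_min: 28, age_max: 48, employment: "employed", education: "bachelors"},
    sampling_method: "random",
    deduplicate: true
  }')"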

Quarter-over-Quarter Comparison

## VoC Trend Analysis: TaskFlow

### Pain Priority Shifts
| Pain Point | Q1 2026 Rank | Q2 2026 Rank | Trend |
|-----------|-------------|-------------|-------|
| Status update meetings | #1 (9/10) | #1 (8/10) | Stable - remains top pain |
| Cross-project dependencies | #2 (7/10) | #3 (5/10) | Declining - competitors may be addressing |
| Tool adoption consistency | #3 (6/10) | #2 (7/10) | Rising - growing frustration |
| AI/automation expectations | Not mentioned | #4 (6/10) | NEW - emerging need |

### Competitive Landscape Shifts (from Q3)
| Competitor | Q1 Perception | Q2 Perception | Change |
|-----------|--------------|--------------|--------|
| Asana | "Powerful but complex" | "Powerful but falling behind on AI" | Vulnerability opening |
| Monday.com | "Easy but shallow" | "Getting better, adding depth" | Threat growing |
| ClickUp | "Too many features" | "Too many features" | Stable perception |

### Language Drift
- Q1: "I need a dashboard" → Q2: "I need an AI assistant that manages updates for me"
- Q1: "Easy to use" → Q2: "Easy to adopt across the whole team"
- The vocabulary is shifting from individual productivity to team adoption

Trend detection is the ultimate VoC advantage. If your Q2 study reveals that buyer priorities are shifting toward AI-powered status updates and you update your positioning before competitors notice, that's a concrete competitive advantage attributable directly to your VoC cadence.

12. Quick Pulse Check Studies

Monthly pulse checks are the lightweight complement to quarterly deep dives. They take approximately 15 minutes and use 3 questions with 6 personas.

The 3 Pulse Check Questions

| Q# | Question | Purpose |
|----|----------|---------|
| 1 | "What's the biggest challenge you're facing right now with [problem space]? Has anything changed in the last few months?" | Detect pain point shifts and emerging themes |
| 2 | "Have you seen, tried, or heard about any new tools or approaches for [category] recently? What caught your attention?" | Detect competitive landscape changes and emerging alternatives |
| 3 | "If you could change one thing about how you [relevant activity], what would it be today?" | Track whether the priority pain is stable or shifting |

Pulse Check Group Configuration

{
  "name": "VoC Pulse - [Product] - [Month Year]",
  "description": "[Same description as deep dive group]",
  "group_size": 6,
  "filters": {
    "country": "USA",
    "age_min": 28,
    "age_max": 48,
    "employment": "employed"
  }
}

What to Do with Pulse Results

If the pulse matches the last deep dive, no action is needed; your current understanding holds. If a new theme, competitor, or priority shift appears, run a targeted probe (see Section 6) to investigate, and feed any confirmed shift into the next quarterly deep dive and the affected deliverables.

Pulse check cost: 6 personas, 3 questions, ~15 minutes of API interaction + ~10 minutes for Claude Code analysis. Run 12 per year for a total of ~5 hours. The alternative is discovering a market shift 6 months late because your annual research missed it.

13. Making VoC Actionable: From Insight to Deliverable

The most common failure mode of VoC programmes is the gap between insight and action. A finding that "customers are confused by our pricing page" is useless unless someone rewrites the pricing page.

The Claude Code advantage: the agent doesn't just collect insights — it produces deliverables from them in the same workflow. The action gap shrinks to zero.

The VoC-to-Action Pipeline

VoC Study Completed
  │
  ├─→ Pain Priority Matrix → Product team action items (roadmap input)
  │
  ├─→ Language Library → Updated messaging for website, emails, ads
  │
  ├─→ Decision Criteria → Revised demo script (lead with dealbreakers)
  │                      → Updated one-pager (feature priority reordered)
  │
  ├─→ Unmet Needs Report → Positioning opportunity brief for PMM
  │                       → Feature request document for Product
  │
  ├─→ Journey Map → UX improvement priorities
  │              → Sales enablement (address pain moments proactively)
  │
  ├─→ Product Feedback → Structured input for next sprint planning
  │
  └─→ Blog Article → Research-backed content with original data
                    → Social thread with quotable insights
                    → Email content with study link

Example: VoC Finding → Immediate Actions

| VoC Finding | Immediate Action | Deliverable Updated | Time |
|-------------|------------------|---------------------|------|
| 7/10 say "status meetings are the worst part of my week" | Update homepage headline to: "The project dashboard that replaces your status meetings" | Website copy, one-pager, pitch deck | ~10 min |
| 8/10 say Slack integration is a dealbreaker | Move Slack integration to first position in demo script and landing page feature list | Demo script, landing page, feature comparison | ~10 min |
| 6/10 didn't know the dashboard feature exists | Create in-app onboarding step highlighting dashboard. Update email drip sequence. | Onboarding flow, email content, feature announcement | ~15 min |
| New pain: "AI should handle status updates automatically" | Add to product roadmap for evaluation. Update positioning to signal AI direction. | Product Feedback Synthesis, positioning document | ~5 min |

This is a genuinely different operating model. Traditional VoC asks: "What did we learn?" This approach asks: "What did we learn, and here are the updated assets that reflect it." The research and the output are produced in the same workflow. No handoff. No interpretation layer. No six-week gap between "we learned something" and "we did something about it."

14. Best Practices and Common Mistakes

Do

- Ask the 7 questions sequentially, in the order given; each builds on prior conversational context
- Keep questions identical quarter over quarter so longitudinal comparisons stay valid
- Recruit fresh persona groups for each quarterly deep dive
- Link every finding in every deliverable back to specific persona responses
- Generate deliverables in the same workflow as the research so insights never sit unused

Don't

- Reorder or reword the 7-question sequence mid-programme
- Reuse stale research groups for quarterly deep dives (group reuse is for follow-ups on the same audience)
- Share raw study URLs without a UTM parameter
- Use fewer than 6 personas; the patterns become unreliable
- Let findings sit in a report without updating at least one downstream deliverable

Common API Errors

| Error | Cause | Solution |
|-------|-------|----------|
| size parameter rejected | Wrong parameter name | Use group_size, not size |
| 0 agents recruited | State filter used full name | Use 2-letter codes: "CA", not "California" |
| Jobs stuck in "pending" | Normal for the first 10–15 seconds | Continue polling at 5-second intervals |
| income filter rejected | Unsupported filter | Remove the income filter; use education/employment as a proxy |
| Missing completion analysis | Forgot to call /complete | Always call POST /v1/research-studies/{id}/complete after the final question |
| Share link not available | Study not yet completed | Ensure study status is "completed" before requesting the share link |

15. Frequently Asked Questions

How long does a full VoC deep dive take?

Approximately 45 minutes end-to-end: 1 minute for group creation, 15–25 minutes for sequential question asking and polling, 2–5 minutes for completion analysis, and 15–20 minutes for Claude Code to generate all 6 deliverables. Compare with 4–6 weeks and $15,000–$30,000 for traditional in-depth interviews.

How many personas should I use?

10 personas for deep dives and targeted probes. 6 personas for monthly pulse checks. 10 per group for cross-segment or cross-market studies (so 3 groups = 30 personas total). Fewer than 6 produces unreliable patterns.

Can I reuse a research group across multiple studies?

Yes. Create a new study referencing the same research_group_uuid. The personas retain context from previous studies. This is useful for follow-up studies or message testing on the same audience. For quarterly deep dives, create fresh groups to capture current market perspective.

What's the difference between a pulse check and a deep dive?

A pulse check (3 questions, 6 personas, ~15 min) is designed to detect changes: is the market shifting? A deep dive (7 questions, 10 personas, ~45 min) is designed to understand comprehensively: what does the market think, feel, and want? Run pulse checks monthly and deep dives quarterly.

How does this compare to NPS surveys?

NPS gives you a number. VoC gives you understanding. NPS tells you that satisfaction dropped from 45 to 38. VoC tells you why it dropped, what customers are frustrated about, what language they use to describe the problem, and what would fix it. They're complementary: use NPS to track the score, use VoC to explain the score.

Can I use VoC for products that don't exist yet?

Absolutely. Frame Q1–Q3 around the problem space, not the product. Introduce your concept in Q6 ("if a product promised to..."). Q1–Q5 explore the current reality; Q6–Q7 test the future state. This is effectively a concept validation study using the VoC framework.

How do I share VoC findings with stakeholders who don't read reports?

Three approaches: (1) Send them the Ditto share link — the interactive study is more engaging than a report. (2) Distribute the Language Library — it's short, quotable, and immediately useful. (3) Publish findings as a blog article with a social thread — stakeholders consume content through feeds, not shared drives.

Should I run VoC on my existing customers or my target market?

Both, but separately. Ditto personas represent your target market — people who match your ICP but haven't necessarily used your product. This is ideal for positioning, messaging, and competitive research. For product-specific UX feedback, you need real users. Use Ditto for market-level VoC and supplement with actual customer interviews for product-level feedback.

What if the VoC findings contradict what our sales team is hearing?

This is valuable, not problematic. Sales team anecdotes are biased toward recent conversations and deals in progress. VoC from a representative market sample provides a broader view. When they diverge, investigate: perhaps the sales team is hearing from a different segment than VoC studied, or perhaps the market has shifted and the sales team hasn't noticed yet. Run a targeted probe to resolve the discrepancy.

How do I build the business case for an always-on VoC programme?

Frame it as cost avoidance: traditional VoC at the same frequency would cost $200,000+ annually. Frame it as speed: decisions made on 45-minute research beat decisions made on 6-week-old research. Frame it as risk reduction: every product decision, positioning choice, and messaging update backed by current market evidence rather than assumption. Track the metrics in Section 13 to demonstrate impact.


Related guides:

- Customer Segmentation guide (the full multi-group API pattern referenced in Section 7)


Ditto — Synthetic market research with 300,000+ AI personas. Validated by EY (92% correlation), Harvard, Cambridge, Stanford, and Oxford.
askditto.io · [email protected]