LLM summary: Run a 7-question VoC study monthly or quarterly, generate six VoC deliverables, and maintain an always-on research calendar with cross-segment analysis.
A complete Claude Code guide to building an always-on VoC programme using Ditto's 300,000+ synthetic personas. From expensive annual research events to a continuous two-hour-per-month customer understanding engine.
Voice of Customer (VoC) is the systematic process of capturing what your customers and target market think, feel, want, and struggle with — and translating those signals into product, marketing, and sales decisions. It is the primary input for every other product marketing function: positioning, messaging, competitive intelligence, sales enablement, content marketing, pricing, and product strategy.
When VoC is rich, current, and accessible, every downstream function improves. When VoC is thin, stale, or absent, every other function is building on assumptions.
| Challenge | Traditional Reality | Consequence |
|---|---|---|
| Cost | $15,000–$30,000 per in-depth interview programme (15–20 participants). Comprehensive annual VoC: $200,000+. | Most companies conduct formal research 1–2 times per year. The rest of the time, they rely on intuition. |
| Speed | 4–6 weeks per study (brief agency, recruit participants, conduct interviews, analyse, deliver report). | Insights are historical by the time they arrive. Market has already shifted. |
| Action gap | Research reports sit in shared drives. Findings require manual translation into messaging, battlecards, and product decisions. | "Interesting finding" never becomes "changed deliverable." VoC dies on arrival. |
With Ditto and Claude Code, a full 7-question VoC deep dive takes approximately 45 minutes. A monthly pulse check takes 15 minutes. Cross-segment or cross-market studies run in parallel. And Claude Code doesn't just collect insights — it produces finished deliverables from them in the same workflow.
A single 10-persona, 7-question VoC study produces six distinct deliverables, each designed to be immediately actionable by a different team:
| Deliverable | What It Contains | Who Uses It | Source Questions |
|---|---|---|---|
| Customer Journey Map | Key touchpoints, pain moments, and delight moments reconstructed from experience mapping | PMM, Product, UX | Q1, Q4 |
| Pain Priority Matrix | Customer frustrations ranked by severity and frequency, with supporting quotes from each persona | Product, PMM, Sales | Q1, Q2 |
| Language Library | Exact words and phrases customers use when talking about their problems, needs, and evaluation criteria | Marketing, Content, Sales | All questions |
| Unmet Needs Report | Gaps no one is addressing, opportunities hiding in plain sight | Product, PMM, Leadership | Q2, Q7 |
| Decision Criteria Hierarchy | What matters most to least when buyers evaluate solutions in your category | Sales, PMM, Product | Q5, Q6 |
| Product Feedback Synthesis | Actionable recommendations structured for the product team, with persona evidence | Product, Engineering | Q3, Q4, Q6, Q7 |
This question set is specifically engineered to produce the raw material for all six deliverables simultaneously. Each question does double or triple duty, feeding different outputs. The questions must be asked sequentially (each builds conversational context from prior answers).
| Q# | Question | VoC Dimension | Deliverables Fed |
|---|---|---|---|
| 1 | "Tell me about the last time you [relevant activity]. Walk me through the experience from start to finish. What went well? What was frustrating?" | Experience mapping | Journey Map, Pain Matrix, Language Library |
| 2 | "If you could wave a magic wand and fix ONE thing about [problem space], what would it be? Why that above everything else?" | Priority pain identification | Pain Matrix, Unmet Needs, Product Feedback |
| 3 | "How do you currently solve [problem]? What tools, people, or workarounds do you use? What do you wish you could do differently?" | Current solution landscape | Pain Matrix, Unmet Needs, Decision Criteria |
| 4 | "Think about the best [product/service] experience you've ever had in any category. What made it great? Now think about the worst. What made it terrible?" | Expectation benchmarking | Journey Map, Product Feedback, Language Library |
| 5 | "When you're researching a new [product type], what do you look for first? Second? What's a dealbreaker?" | Purchase decision framework | Decision Criteria, Product Feedback, Language Library |
| 6 | "If a product promised to [core value prop], how would you want to experience that? In the product itself? Through reports? Through a person helping you?" | Delivery preference | Product Feedback, Journey Map, Decision Criteria |
| 7 | "Is there anything about [problem space] that you feel companies just don't understand? What do you wish they'd get right?" | Unmet needs / White space | Unmet Needs, Pain Matrix, Language Library |
curl -s -X POST "https://app.askditto.io/v1/research-groups/recruit" \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"name": "VoC Deep Dive - [Product Name] - [Date]",
"description": "Target customers for Voice of Customer research on [product/problem space]. [Add any context about ideal customer profile.]",
"group_size": 10,
"filters": {
"country": "USA",
"age_min": 25,
"age_max": 55,
"employment": "employed"
},
"sampling_method": "random",
"deduplicate": true
}'
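If you're scripting this step, capture the returned group identifier directly. A minimal sketch, assuming the JSON payload above is saved as group.json, jq is installed, your API key is exported as DITTO_API_KEY, and the response exposes the id as a top-level uuid field (per the notes below):

```bash
# Recruit the group and capture its uuid for step 2
GROUP_UUID=$(curl -s -X POST "https://app.askditto.io/v1/research-groups/recruit" \
  -H "Authorization: Bearer $DITTO_API_KEY" \
  -H "Content-Type: application/json" \
  -d @group.json | jq -r '.uuid')
echo "Research group: $GROUP_UUID"
```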
API notes for this call:
- Use group_size, not size. The API rejects size.
- State filters take 2-letter codes: "CA", "TX", "NY". Full names like "California" return 0 agents.
- The income filter does not work. Use education and employment as proxies.
- Supported filters: country, state, age_min, age_max, gender, education, employment, is_parent.
- Save the group uuid — you need it for study creation.

curl -s -X POST "https://app.askditto.io/v1/research-studies" \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"name": "VoC Deep Dive: [Product Name] - [Date]",
"research_group_uuid": "GROUP_UUID_FROM_STEP_1"
}'
Save the study id — you need it for asking questions, completing, and sharing.
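Capturing the study id works the same way as the group uuid above. A sketch, assuming the creation response exposes it as a top-level id field:

```bash
# Create the study against the recruited group and capture its id
STUDY_ID=$(curl -s -X POST "https://app.askditto.io/v1/research-studies" \
  -H "Authorization: Bearer $DITTO_API_KEY" \
  -H "Content-Type: application/json" \
  -d "{\"name\": \"VoC Deep Dive: [Product Name] - [Date]\", \"research_group_uuid\": \"$GROUP_UUID\"}" \
  | jq -r '.id')
echo "Study: $STUDY_ID"
```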
Ask each question one at a time. Wait for the job to complete before sending the next question. This ensures personas have conversational context from prior answers.
# Question 1
curl -s -X POST "https://app.askditto.io/v1/research-studies/STUDY_ID/questions" \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"question": "Tell me about the last time you [relevant activity]. Walk me through the experience from start to finish. What went well? What was frustrating?"
}'
# Response includes a job ID:
# { "job_id": "job-abc123", "status": "pending" }
# Poll until status is "finished"
curl -s -X GET "https://app.askditto.io/v1/jobs/JOB_ID" \
-H "Authorization: Bearer YOUR_API_KEY"
# When complete:
{
"id": "job-abc123",
"status": "finished",
"result": {
"answer": "The last time I tried to..."
}
}
Poll with a 5-second interval. Most questions complete within 30–90 seconds. Once complete, send the next question. Repeat for all 7 questions.
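The ask-then-poll sequence is mechanical, so it is worth scripting. A minimal bash sketch, assuming jq is installed, STUDY_ID and DITTO_API_KEY are set, and the job fields match the response shapes shown above (job_id, status, a terminal "finished" value):

```bash
#!/usr/bin/env bash
set -euo pipefail

QUESTIONS=(
  "Tell me about the last time you [relevant activity]. Walk me through the experience from start to finish. What went well? What was frustrating?"
  "If you could wave a magic wand and fix ONE thing about [problem space], what would it be? Why that above everything else?"
  # ...questions 3-7 from the table above
)

for q in "${QUESTIONS[@]}"; do
  # jq -n builds the JSON body safely, escaping any quotes inside the question
  JOB_ID=$(curl -s -X POST "https://app.askditto.io/v1/research-studies/$STUDY_ID/questions" \
    -H "Authorization: Bearer $DITTO_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$(jq -n --arg q "$q" '{question: $q}')" | jq -r '.job_id')

  # Poll every 5 seconds until this question's job reports "finished",
  # so the next question is asked with full conversational context
  until [ "$(curl -s "https://app.askditto.io/v1/jobs/$JOB_ID" \
      -H "Authorization: Bearer $DITTO_API_KEY" | jq -r '.status')" = "finished" ]; do
    sleep 5
  done
done
```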
curl -s -X POST "https://app.askditto.io/v1/research-studies/STUDY_ID/complete" \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json"
This triggers Ditto's AI analysis, producing: overall summary, key segments identified, divergence points, shared mindsets, and suggested follow-up questions. Poll the study status until it reaches "completed".
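A matching polling sketch for the completion analysis. The study-level status field name is an assumption; inspect the actual response first:

```bash
# Wait for Ditto's completion analysis: poll the study until it reaches "completed"
until [ "$(curl -s "https://app.askditto.io/v1/research-studies/$STUDY_ID" \
    -H "Authorization: Bearer $DITTO_API_KEY" | jq -r '.status')" = "completed" ]; do
  sleep 5
done
```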
curl -s -X POST "https://app.askditto.io/v1/research-studies/STUDY_ID/share" \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json"
# Response:
{
"url": "https://app.askditto.io/organization/studies/shared/xyz123"
}
Append a UTM source to every share URL: ?utm_source=ce for cold email outreach or ?utm_source=blog for blog articles. Never use raw share URLs without a UTM parameter.
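A one-line sketch for tagging the link before distribution (assumes STUDY_ID and DITTO_API_KEY are set and jq is installed):

```bash
# Request the share link and append the channel's UTM source before distributing
SHARE_URL=$(curl -s -X POST "https://app.askditto.io/v1/research-studies/$STUDY_ID/share" \
  -H "Authorization: Bearer $DITTO_API_KEY" | jq -r '.url')
echo "${SHARE_URL}?utm_source=blog"   # or ?utm_source=ce for cold email
```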
# Get the completed study with all responses and AI analysis
curl -s -X GET "https://app.askditto.io/v1/research-studies/STUDY_ID" \
-H "Authorization: Bearer YOUR_API_KEY"
This returns all persona responses, demographic profiles, and Ditto's completion analysis. Use this data to generate the six deliverables described in the next section.
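A small sketch that saves the full study payload to disk so Claude Code can generate the deliverables from a local file (the filename is arbitrary):

```bash
# Pull the completed study (responses, demographics, analysis) for deliverable generation
curl -s -X GET "https://app.askditto.io/v1/research-studies/$STUDY_ID" \
  -H "Authorization: Bearer $DITTO_API_KEY" > voc_study.json
```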
Once the study is complete and you have all 10 persona responses to all 7 questions, Claude Code should generate each deliverable using the extraction logic below. Each deliverable draws from specific questions and follows a defined structure.
## Customer Journey Map: [Product/Category]
### Pre-Purchase Phase
| Touchpoint | Experience | Pain/Delight | Frequency |
|------------|-----------|--------------|-----------|
| Web search | "I Googled solutions for..." | Neutral | 8/10 |
| Comparison sites | "Tried to compare options but..." | Pain | 6/10 |
| Free trial signup | "The signup was quick but..." | Mixed | 7/10 |
### Active Use Phase
| Touchpoint | Experience | Pain/Delight | Frequency |
|------------|-----------|--------------|-----------|
| First use | "I immediately tried to..." | Pain | 5/10 |
| Aha moment | "When I realised I could..." | Delight | 6/10 |
### Benchmark Reference (from Q4)
- Best experience standard: "[description]" (cited by X/10)
- Worst experience warning: "[description]" (cited by X/10)
## Pain Priority Matrix: [Product/Category]
| Rank | Pain Point | Frequency | Severity | Representative Quote |
|------|-----------|-----------|----------|---------------------|
| 1 | [Most common high-severity pain] | 8/10 | High | "Quote from persona with name, age, occupation" |
| 2 | [Second priority] | 6/10 | High | "Quote..." |
| 3 | [Third priority] | 7/10 | Medium | "Quote..." |
| 4 | [Fourth priority] | 4/10 | Medium | "Quote..." |
| 5 | [Fifth priority] | 3/10 | Low | "Quote..." |
### Product Team Action Items
1. [Pain #1]: Suggested action based on frequency and severity
2. [Pain #2]: Suggested action
3. [Pain #3]: Suggested action
## Language Library: [Product/Category]
### Problem Language (how customers describe their pain)
- "I spend more time planning than actually doing" - Sarah, 34, Marketing Manager, Chicago
- "It feels like herding cats" - James, 42, Operations Director, Austin
- "[vivid phrase]" - [name, age, role, location]
### Solution Language (how customers describe what they want)
- "Something that just works without me having to think about it" - ...
- "[phrase]" - ...
### Evaluation Language (how customers describe their buying criteria)
- "The first thing I look for is..." - ...
- "It's a dealbreaker if..." - ...
### Emotional Language (how customers feel about the problem)
- "It drives me absolutely mad" - ...
- "I dread [activity] every single [time period]" - ...
### Comparison Language (how customers reference alternatives)
- "Compared to [competitor], this is..." - ...
- "What [other company] gets right is..." - ...
## Unmet Needs Report: [Product/Category]
### Feature Gaps (buildable)
| Need | Frequency | Representative Quote | Opportunity |
|------|-----------|---------------------|-------------|
| [Need] | 7/10 | "Quote..." | [How to address] |
### Experience Gaps (designable)
| Need | Frequency | Representative Quote | Opportunity |
|------|-----------|---------------------|-------------|
| [Need] | 5/10 | "Quote..." | [How to address] |
### Trust Gaps (addressable through messaging/proof)
| Need | Frequency | Representative Quote | Opportunity |
|------|-----------|---------------------|-------------|
| [Need] | 6/10 | "Quote..." | [What proof to provide] |
### Positioning Opportunities
- "[Unmet need]" cited by X/10 personas. No competitor addresses this.
Suggested positioning angle: "Unlike [alternatives], we..."
## Decision Criteria Hierarchy: [Product/Category]
### Tier 1: Dealbreakers (absence = instant disqualification)
1. [Criterion] - cited by X/10 as non-negotiable
2. [Criterion] - cited by X/10
### Tier 2: Primary Criteria (evaluated first)
1. [Criterion] - "first thing I look for" (X/10)
2. [Criterion] - "immediately check for" (X/10)
### Tier 3: Secondary Criteria (tiebreakers)
1. [Criterion] - mentioned by X/10 as differentiator
2. [Criterion] - mentioned by X/10
### Delivery Preferences (from Q6)
- X/10 prefer in-product experience
- X/10 prefer human-assisted onboarding
- X/10 prefer self-serve with documentation
### Implications for Sales and Marketing
- Lead with: [Tier 1 dealbreakers] - ensure these are visible on landing page and in first demo
- Emphasise: [Tier 2 criteria] - these drive shortlisting
- Differentiate on: [Tier 3 criteria] - these win competitive deals
## Product Feedback Synthesis: [Product/Category]
### Build (new capabilities)
| Recommendation | Evidence | Frequency | Priority |
|---------------|----------|-----------|----------|
| [Feature] | "X/10 personas said..." + representative quote | X/10 | High |
### Improve (enhance existing)
| Recommendation | Evidence | Frequency | Priority |
|---------------|----------|-----------|----------|
| [Enhancement] | "Quote..." | X/10 | Medium |
### Fix (broken experiences)
| Recommendation | Evidence | Frequency | Priority |
|---------------|----------|-----------|----------|
| [Fix] | "Quote..." | X/10 | High |
### Communicate (feature exists but unknown)
| Feature | Evidence of Unawareness | Action |
|---------|------------------------|--------|
| [Feature] | "X/10 asked for something we already offer" | Update messaging, demo script, onboarding |
An always-on VoC programme doesn't mean running research every day. It means having a cadence that matches the rate at which your market changes, so your understanding never falls critically out of date.
| Study Type | Frequency | Configuration | Time | Purpose |
|---|---|---|---|---|
| Pulse Check | Monthly | 3 questions, 6 personas | ~15 min | Track shifting priorities and emerging themes. Early warning system. |
| Deep Dive | Quarterly | 7 questions, 10 personas | ~45 min | Comprehensive understanding. Produces all 6 deliverables. Enables trend analysis. |
| Targeted Probe | Ad hoc | 5 questions, 8 personas | ~25 min | Investigate specific signals: new competitor move, new objection from sales, pricing consideration. |
| Pre-Launch Test | Before launches | 7 questions, 10 personas | ~45 min | Validate concept, feature priority, messaging before committing resources. |
| Post-Launch Sentiment | 2 weeks after launch | 5 questions, 10 personas | ~30 min | Measure market reaction. Did the messaging land? Where are the gaps? |
Total monthly investment: approximately 2 hours. Traditional VoC at this frequency: $200,000+ annually.
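If you want the calendar literally always-on, the recurring cadences map cleanly onto cron. A hypothetical sketch, assuming wrapper scripts (run_pulse.sh, run_deep_dive.sh) that execute the recruit/ask/complete steps above; targeted probes and launch studies stay ad hoc:

```
# Hypothetical crontab for the research calendar
# Monthly pulse check: 09:00 on the 1st of every month
0 9 1 * * /opt/voc/run_pulse.sh
# Quarterly deep dive: 09:00 on the 1st of Jan/Apr/Jul/Oct
0 9 1 1,4,7,10 * /opt/voc/run_deep_dive.sh
```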
Traditional VoC forces you to choose: SMB buyers or enterprise buyers, technical evaluators or economic buyers. With Ditto, you run the same study across multiple segments in parallel and compare.
# Group A: Individual contributors / end users
{
"name": "VoC - End Users (25-40)",
"description": "Individual contributors who use [product type] daily in their work",
"group_size": 10,
"filters": { "country": "USA", "age_min": 25, "age_max": 40, "employment": "employed" }
}
# Group B: Team leads / managers
{
"name": "VoC - Team Leads (32-48)",
"description": "Team managers who evaluate and purchase [product type] for their teams",
"group_size": 10,
"filters": { "country": "USA", "age_min": 32, "age_max": 48, "employment": "employed", "education": "bachelors" }
}
# Group C: Senior leadership / budget holders
{
"name": "VoC - Leadership (40-58)",
"description": "Senior leaders responsible for department-level technology decisions and budgets",
"group_size": 10,
"filters": { "country": "USA", "age_min": 40, "age_max": 58, "employment": "employed", "education": "masters" }
}
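To recruit all three segments in one pass, loop the recruit call over the payloads. A sketch assuming they are saved as group_a.json, group_b.json, and group_c.json:

```bash
# Recruit all three segment groups and record their uuids
for f in group_a.json group_b.json group_c.json; do
  uuid=$(curl -s -X POST "https://app.askditto.io/v1/research-groups/recruit" \
    -H "Authorization: Bearer $DITTO_API_KEY" \
    -H "Content-Type: application/json" \
    -d @"$f" | jq -r '.uuid')
  echo "$f -> $uuid"
done
```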
After all studies complete, Claude Code should produce a comparison matrix. For each question, extract the dominant theme from each group:
| Question | End Users (A) | Team Leads (B) | Leadership (C) | Key Divergence |
|---|---|---|---|---|
| Q1: Experience | "The tool is slow and the UI is cluttered" | "Hard to get adoption from my team" | "I have no visibility into whether it's working" | Each role frames the problem from their perspective: usability vs. adoption vs. ROI visibility |
| Q2: Priority fix | "Speed. Just make it faster." | "Better reporting so I can justify the cost" | "Consolidate our tools into fewer platforms" | Completely different priority pains by role. One-size-fits-all messaging will miss all three. |
| Q5: Decision criteria | "Ease of use, integrations, mobile access" | "Adoption rate, support quality, training" | "TCO, security compliance, vendor stability" | Each persona type has entirely different buying criteria |
This comparison directly informs buyer-specific sales enablement. The demo for end users leads with speed and UI. The demo for team leads leads with reporting and adoption tools. The pitch for leadership leads with TCO and security.
Ditto covers 15+ countries representing 65% of global GDP. You can run the identical 7-question VoC study across multiple geographies simultaneously to understand how customer needs vary by market.
# Group 1: US market
{
"name": "VoC Cross-Market - USA",
"group_size": 10,
"filters": { "country": "USA", "age_min": 25, "age_max": 50, "employment": "employed" }
}
# Group 2: UK market
{
"name": "VoC Cross-Market - UK",
"group_size": 10,
"filters": { "country": "UK", "age_min": 25, "age_max": 50, "employment": "employed" }
}
# Group 3: Germany
{
"name": "VoC Cross-Market - Germany",
"group_size": 10,
"filters": { "country": "Germany", "age_min": 25, "age_max": 50, "employment": "employed" }
}
# Group 4: Canada
{
"name": "VoC Cross-Market - Canada",
"group_size": 10,
"filters": { "country": "Canada", "age_min": 25, "age_max": 50, "employment": "employed" }
}
Ask the same 7 VoC questions to all four groups. Claude Code then produces a market-by-market comparison matrix, mirroring the cross-segment analysis above: for each question, the dominant theme per market and where pains, language, and decision criteria diverge by geography.
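Because the four studies are independent, they can run concurrently. A sketch assuming a hypothetical run_study.sh that recruits a group from a payload file, creates a study, and asks all 7 questions:

```bash
# Launch the four market studies in parallel, then wait for all to finish
for market in usa uk germany canada; do
  ./run_study.sh "group_${market}.json" &
done
wait
```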
Traditional equivalent: commissioning separate research agencies in four countries. Cost: $100,000–$200,000. Time: 3–6 months. With Ditto + Claude Code: approximately 1 hour.
VoC is not a standalone activity. It is the primary input for every other product marketing function. Here's the dependency map, with specific connections from VoC deliverables to downstream outputs:
| PMM Function | VoC Input | How It's Used |
|---|---|---|
| Positioning | Pain Matrix (Q2), Language Library (all), Unmet Needs (Q7) | Dunford's framework requires competitive alternatives (Q3), unique value perception (Q2), and target customer language (Language Library). VoC provides all three. |
| Messaging | Language Library (all), Pain Matrix (Q1-Q2) | Use customer phrases for headlines, value propositions, and email copy. "Kill the 'what's for dinner' question" (customer language) beats "Simplify your meal planning experience" (corporate language). |
| Competitive Intel | Pain Matrix (Q3: current solutions), Unmet Needs (Q7), Decision Criteria (Q5) | VoC reveals how buyers perceive competitors — often dramatically different from how competitors position themselves. Battlecard intelligence. |
| Sales Enablement | Decision Criteria (Q5), Language Library, Journey Map (Q1), Pain Matrix | Demo scripts use buyer-prioritised feature order (Q5). Objection handling uses Q7 barriers. One-pagers use customer language. Battlecards use competitive perception (Q3). |
| Product Strategy | Product Feedback Synthesis, Unmet Needs, Pain Matrix | When 8/10 personas identify the same unaddressed pain, that's a product decision backed by market evidence. Product Feedback Synthesis structures this for roadmap discussions. |
| Content Marketing | All deliverables (every VoC study produces publishable insights) | Quotes, statistics, and patterns become blog posts, social content, and thought leadership. Research-backed content with original data outperforms generic advice for both SEO and engagement. |
| Pricing | Decision Criteria (Q5), Pain Matrix (cost of problem), Language Library (value framing) | VoC reveals how customers frame value (time saved, risk reduced, outcomes achieved) — informing value-based pricing and packaging decisions. |
Product: "TaskFlow" — a project management tool for mid-size marketing teams
Problem space: "Managing projects and tasks across a marketing team"
Core value prop: "See every project's status in one dashboard without chasing status updates"
Target ICP: Marketing managers at companies with 50–500 employees
{
"name": "VoC Deep Dive - TaskFlow - Feb 2026",
"description": "Marketing professionals at mid-size companies who manage projects and coordinate across team members. Focus on project management tool evaluation and usage.",
"group_size": 10,
"filters": {
"country": "USA",
"age_min": 28,
"age_max": 48,
"employment": "employed",
"education": "bachelors"
}
}
Pain Priority Matrix (Top 3):
| Rank | Pain | Frequency | Quote |
|---|---|---|---|
| 1 | Status update meetings consume time | 9/10 | "I spend 3 hours a week in meetings that are just people reading their status aloud" — Rachel, 36, Marketing Manager |
| 2 | Can't see cross-project dependencies | 7/10 | "When the blog launch slips, I don't find out the email campaign is blocked until two days later" — Marcus, 41, Director of Marketing |
| 3 | Tool adoption is inconsistent | 6/10 | "Half the team updates Asana. The other half uses Slack threads and spreadsheets" — Priya, 33, Marketing Operations |
Decision Criteria Hierarchy:
| Tier | Criteria | Frequency |
|---|---|---|
| Dealbreaker | Must integrate with Slack | 8/10 |
| Dealbreaker | Must have a usable mobile app | 6/10 |
| Primary | Visual project dashboards (not just task lists) | 7/10 |
| Primary | Easy to learn — team adopts without training | 7/10 |
| Secondary | Built-in time tracking | 4/10 |
| Secondary | Client-facing project views | 3/10 |
Language Library Highlights:
- "Meetings that are just people reading their status aloud" - Rachel, 36, Marketing Manager (problem language)
- "I don't find out the email campaign is blocked until two days later" - Marcus, 41, Director of Marketing (problem language)
- "Half the team updates Asana. The other half uses Slack threads and spreadsheets" - Priya, 33, Marketing Operations (adoption language)
Unmet Need: 7/10 personas said project management tools don't account for the reality that not everyone on the team uses the tool consistently. The unmet need is automatic status capture — pulling status from where work actually happens (Slack, Google Docs, email) rather than requiring manual updates.
The most powerful application of always-on VoC is trend analysis across time. When you run quarterly deep dives with the same 7 questions against fresh persona groups, you build a dataset that reveals how your market is evolving.
"VoC Deep Dive - TaskFlow - Q1 2026", "VoC Deep Dive - TaskFlow - Q2 2026".## VoC Trend Analysis: TaskFlow
### Pain Priority Shifts
| Pain Point | Q1 2026 Rank | Q2 2026 Rank | Trend |
|-----------|-------------|-------------|-------|
| Status update meetings | #1 (9/10) | #1 (8/10) | Stable - remains top pain |
| Cross-project dependencies | #2 (7/10) | #3 (5/10) | Declining - competitors may be addressing |
| Tool adoption consistency | #3 (6/10) | #2 (7/10) | Rising - growing frustration |
| AI/automation expectations | Not mentioned | #4 (6/10) | NEW - emerging need |
### Competitive Landscape Shifts (from Q3)
| Competitor | Q1 Perception | Q2 Perception | Change |
|-----------|--------------|--------------|--------|
| Asana | "Powerful but complex" | "Powerful but falling behind on AI" | Vulnerability opening |
| Monday.com | "Easy but shallow" | "Getting better, adding depth" | Threat growing |
| ClickUp | "Too many features" | "Too many features" | Stable perception |
### Language Drift
- Q1: "I need a dashboard" → Q2: "I need an AI assistant that manages updates for me"
- Q1: "Easy to use" → Q2: "Easy to adopt across the whole team"
- The vocabulary is shifting from individual productivity to team adoption
Monthly pulse checks are the lightweight complement to quarterly deep dives. They take approximately 15 minutes and use 3 questions with 6 personas.
| Q# | Question | Purpose |
|---|---|---|
| 1 | "What's the biggest challenge you're facing right now with [problem space]? Has anything changed in the last few months?" | Detect pain point shifts and emerging themes |
| 2 | "Have you seen, tried, or heard about any new tools or approaches for [category] recently? What caught your attention?" | Detect competitive landscape changes and emerging alternatives |
| 3 | "If you could change one thing about how you [relevant activity], what would it be today?" | Track whether the priority pain is stable or shifting |
{
"name": "VoC Pulse - [Product] - [Month Year]",
"description": "[Same description as deep dive group]",
"group_size": 6,
"filters": {
"country": "USA",
"age_min": 28,
"age_max": 48,
"employment": "employed"
}
}
The most common failure mode of VoC programmes is the gap between insight and action. A finding that "customers are confused by our pricing page" is useless unless someone rewrites the pricing page.
The Claude Code advantage: the agent doesn't just collect insights — it produces deliverables from them in the same workflow. The action gap shrinks to zero.
VoC Study Completed
│
├─→ Pain Priority Matrix → Product team action items (roadmap input)
│
├─→ Language Library → Updated messaging for website, emails, ads
│
├─→ Decision Criteria → Revised demo script (lead with dealbreakers)
│ → Updated one-pager (feature priority reordered)
│
├─→ Unmet Needs Report → Positioning opportunity brief for PMM
│ → Feature request document for Product
│
├─→ Journey Map → UX improvement priorities
│ → Sales enablement (address pain moments proactively)
│
├─→ Product Feedback → Structured input for next sprint planning
│
└─→ Blog Article → Research-backed content with original data
→ Social thread with quotable insights
→ Email content with study link
| VoC Finding | Immediate Action | Deliverable Updated | Time |
|---|---|---|---|
| 7/10 say "status meetings are the worst part of my week" | Update homepage headline to: "The project dashboard that replaces your status meetings" | Website copy, one-pager, pitch deck | ~10 min |
| 8/10 say Slack integration is a dealbreaker | Move Slack integration to first position in demo script and landing page feature list | Demo script, landing page, feature comparison | ~10 min |
| 6/10 didn't know the dashboard feature exists | Create in-app onboarding step highlighting dashboard. Update email drip sequence. | Onboarding flow, email content, feature announcement | ~15 min |
| New pain: "AI should handle status updates automatically" | Add to product roadmap for evaluation. Update positioning to signal AI direction. | Product Feedback Synthesis, positioning document | ~5 min |
Implementation notes:
- Replace [problem space] and [relevant activity] with your specific context. Don't rewrite the question architecture.
- Always call POST /v1/research-studies/{id}/complete. It triggers Ditto's AI analysis (key segments, divergences, shared mindsets, follow-up suggestions). Without this, you're missing a significant analysis layer.

| Error | Cause | Solution |
|---|---|---|
| size parameter rejected | Wrong parameter name | Use group_size, not size |
| 0 agents recruited | State filter used full name | Use 2-letter codes: "CA" not "California" |
| Jobs stuck in "pending" | Normal for first 10–15 seconds | Continue polling with 5-second intervals |
| income filter rejected | Unsupported filter | Remove income filter; use education/employment as proxy |
| Missing completion analysis | Forgot to call /complete | Always call POST /v1/research-studies/{id}/complete after final question |
| Share link not available | Study not yet completed | Ensure study status is "completed" before requesting share link |
**How long does a full VoC deep dive take?**
Approximately 45 minutes end-to-end: 1 minute for group creation, 15–25 minutes for sequential question asking and polling, 2–5 minutes for completion analysis, and 15–20 minutes for Claude Code to generate all 6 deliverables. Compare with 4–6 weeks and $15,000–$30,000 for traditional in-depth interviews.

**How many personas should I recruit?**
10 personas for deep dives and targeted probes. 6 personas for monthly pulse checks. 10 per group for cross-segment or cross-market studies (so 3 groups = 30 personas total). Fewer than 6 produces unreliable patterns.

**Can I reuse a research group across studies?**
Yes. Create a new study referencing the same research_group_uuid. The personas retain context from previous studies. This is useful for follow-up studies or message testing on the same audience. For quarterly deep dives, create fresh groups to capture current market perspective.

**When should I run a pulse check versus a deep dive?**
A pulse check (3 questions, 6 personas, ~15 min) is designed to detect changes: is the market shifting? A deep dive (7 questions, 10 personas, ~45 min) is designed to understand comprehensively: what does the market think, feel, and want? Run pulse checks monthly and deep dives quarterly.

**How does VoC relate to NPS?**
NPS gives you a number. VoC gives you understanding. NPS tells you that satisfaction dropped from 45 to 38. VoC tells you why it dropped, what customers are frustrated about, what language they use to describe the problem, and what would fix it. They're complementary: use NPS to track the score, use VoC to explain the score.

**Can I run VoC for a product that doesn't exist yet?**
Absolutely. Frame Q1–Q3 around the problem space, not the product. Introduce your concept in Q6 ("if a product promised to..."). Q1–Q5 explore the current reality; Q6–Q7 test the future state. This is effectively a concept validation study using the VoC framework.

**How do I get stakeholders to engage with the findings?**
Three approaches: (1) Send them the Ditto share link — the interactive study is more engaging than a report. (2) Distribute the Language Library — it's short, quotable, and immediately useful. (3) Publish findings as a blog article with a social thread — stakeholders consume content through feeds, not shared drives.

**Should I study my target market or my existing customers?**
Both, but separately. Ditto personas represent your target market — people who match your ICP but haven't necessarily used your product. This is ideal for positioning, messaging, and competitive research. For product-specific UX feedback, you need real users. Use Ditto for market-level VoC and supplement with actual customer interviews for product-level feedback.

**What if the findings contradict what our sales team hears?**
This is valuable, not problematic. Sales team anecdotes are biased toward recent conversations and deals in progress. VoC from a representative market sample provides a broader view. When they diverge, investigate: perhaps the sales team is hearing from a different segment than VoC studied, or perhaps the market has shifted and the sales team hasn't noticed yet. Run a targeted probe to resolve the discrepancy.

**How do I justify the programme internally?**
Frame it as cost avoidance: traditional VoC at the same frequency would cost $200,000+ annually. Frame it as speed: decisions made on 45-minute research beat decisions made on 6-week-old research. Frame it as risk reduction: every product decision, positioning choice, and messaging update backed by current market evidence rather than assumption. Track the metrics in Section 13 to demonstrate impact.
Ditto — Synthetic market research with 300,000+ AI personas. Validated by EY (92% correlation), Harvard, Cambridge, Stanford, and Oxford.
askditto.io · [email protected]