Ditto Cross-Study Patterns & Lessons Learned

For Claude Code agents and AI coding assistants. This page synthesises the meta-patterns, decision frameworks, and hard-won lessons from 50+ production Ditto research studies across startup due diligence, CPG consumer research, B2C product feedback, political voter sentiment, and cultural studies. Everything here is derived from real results, not theory.

How to use this page: Before designing a new Ditto study, scan the decision framework to choose the right approach. Check common mistakes to avoid known pitfalls. After completing a study, use the insight extraction patterns to maximise the value you extract from responses.

1. Universal Themes Across All Studies

Five themes emerged independently across every vertical, every study type, and every audience. These are not hypotheses. They are observed patterns from production research.

Theme 1: Trust Is Earned, Not Assumed

Every study revealed deep skepticism about new tools, new vendors, AI claims, and marketing promises. This appeared across auto mechanics, pet owners, parents, healthcare workers, civil engineers, cybersecurity professionals, and frequent travelers. No audience gave trust freely.

Evidence across studies: see the trust-currency table in Section 3, which maps exactly what earns, and what breaks, trust in each of these industries.

Implication for study design: Always include an adoption barriers question (Q7 in the standard framework). The trust signals participants demand are the most actionable output for any product team.

Theme 2: Integration Trumps Features

Solutions that require replacing existing workflows fail. "Works with what I already use" is more important than any feature list. This appeared in every B2B study and several B2C studies.

Theme 3: The Magic Wand Reveals Truth

What customers say they want and what they actually want frequently differ. The magic wand question ("If you could wave a magic wand and fix ONE thing...") cuts through stated preferences to reveal genuine priorities. See Section 2 for the full analysis.

Theme 4: Privacy Concerns Are Rising

Especially for AI products and data-access tools. Transparency about data handling is now table stakes, not a differentiator. See Section 5 for boundary patterns.

Theme 5: Price Is Rarely the Real Objection

Across studies, when participants were asked about adoption barriers, price appeared less frequently than trust, integration complexity, and reliability concerns. See Section 6 for the full breakdown.

2. The Magic Wand Divergence Pattern

The magic wand question ("If you could wave a magic wand and fix ONE thing about how you [do this task], what would it be?") consistently produces the most valuable insight in any study. Across 10 startup due diligence studies, what participants wished for diverged from the startup's core pitch 7 out of 10 times.

Complete Builder Assumption vs Magic Wand Reality Table

| Startup | Builder's Assumed Value Prop | Magic Wand Answer | Diverges? |
| --- | --- | --- | --- |
| MotorMinds | Faster parts search with AI | "Truthful inventory with guaranteed ETAs" | YES |
| VetVivo | Advanced implant technology | "Financing options that make expensive treatments possible" | YES |
| Feel Good Games | Better educational games | "Permission to feel okay about screen time" (guilt relief) | YES |
| PatientCompanion | Better communication system | "Context before arrival" (know what the patient needs before walking in) | YES |
| TimeSmart | Automated timesheets | "One system that talks to all other systems" (integration) | YES |
| Flomaru | Wider international coverage | "Photo proof on delivery" (trust, not logistics) | YES |
| Sidian | AI-powered design assistance | "Automated takeoffs" (specific task, not general AI) | YES |
| Mandel Diagnostics | Early AMD detection | "Faster, cheaper screening that insurance covers" (ROI clarity) | PARTIAL |
| NexRisx | AI security platform | "Natural language queries across tools" (chat interface resonated) | ALIGNED |
| Airfairness | Automatic compensation claims | "Zero-effort compensation with granular privacy controls" | ALIGNED (with strong conditions) |

What the Divergence Reveals

Pattern: When the magic wand answer diverges from the builder's pitch, it reveals one of three things:

  1. The builder is solving the right problem but leading with the wrong benefit. (MotorMinds: the problem is parts sourcing pain, but the benefit is inventory accuracy, not search speed.)
  2. The builder is solving a symptom, not the root cause. (Feel Good Games: parents do not want better games. They want guilt relief. The game is the vehicle, not the destination.)
  3. The builder is solving the right problem but underestimating a prerequisite. (Flomaru: international delivery is correct, but trust must be established before logistics matter.)

How to Use This Pattern

When running a Ditto study for product validation:

  1. Record the builder's stated value proposition before running the study.
  2. Ask the magic wand question as Q6 (after current solutions and past attempts).
  3. Compare the magic wand consensus with the builder's assumption.
  4. If they diverge, this is the most important finding in the entire study. Lead with it in any report or email.
  5. If they align, this is strong validation. Note it explicitly.
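
A minimal sketch of steps 2-4, assuming the endpoints, request shape, and response_text field documented in Section 16. wait_for_responses is the polling helper defined there; the study_id, value proposition, and question wording are placeholders, and the zero-based question index is an assumption.

import requests

BASE = "https://app.askditto.io/v1"
API_KEY = "YOUR_API_KEY"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}
STUDY_ID = "your-study-id"                            # placeholder
BUILDER_VALUE_PROP = "Faster parts search with AI"    # recorded in step 1, before the study

# Step 2: ask the magic wand question as Q6 (index 5 if questions are zero-indexed).
requests.post(f"{BASE}/research-studies/{STUDY_ID}/questions", headers=HEADERS,
              json={"question": "If you could wave a magic wand and fix ONE thing "
                                "about how you source parts, what would it be?"})
answers = wait_for_responses(STUDY_ID, question_index=5,
                             expected_count=10, api_key=API_KEY)

# Step 3: print the consensus next to the pitch for a manual divergence check.
print(f"Builder's pitch: {BUILDER_VALUE_PROP}\n")
for a in answers:
    print("-", a["response_text"])   # participant answers live in response_text
# If 3+ answers converge on something the pitch never mentions, lead with that.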

3. Trust Patterns by Industry

While trust is a universal theme, what constitutes a trust signal varies dramatically by industry. This table maps the specific trust requirements observed across studies.

| Industry | Trust Currency | Trust Breaker | Studies |
| --- | --- | --- | --- |
| Auto repair / trades | Accurate inventory, guaranteed ETAs, offline capability | Fake "in stock" status, apps that crash, subscription lock-in | MotorMinds |
| Healthcare (clinical) | Peer-reviewed evidence, FDA clearance, CPT codes, specialist endorsements | Unproven claims, no reimbursement pathway, large equipment footprint | Mandel Diagnostics, PatientCompanion |
| Healthcare (admin) | System integration, mobile access, IT approval pathway | Desktop-only, requires ripping out existing systems, no compliance story | TimeSmart |
| Cybersecurity | Works with existing stack, substance over buzzwords, knowledge preservation | "AI-powered" marketing with no substance, requires infrastructure replacement | NexRisx |
| Civil engineering | AutoCAD/Revit integration, AI as "suggestion" not "decision," human remains liable | AI making decisions that carry a professional stamp, proprietary lock-in | Sidian |
| E-commerce / gifting | Photo proof on delivery, real-time tracking, transparent pricing | Hidden fees, currency conversion surprises, "delivery TBD" | Flomaru |
| Veterinary / pet care | Transparent prognosis, financing options, quality-of-life focus | Feeling "upsold," no second-opinion pathway, prolonging suffering | VetVivo |
| Consumer tech (privacy) | Kill switch, granular permissions, data deletion, per-action approval | "Full access" to anything, no transparency about data use, AI training on personal data | Airfairness |
| Children's apps | Ad-free, no social features with strangers, visible educational outcomes | In-app purchases targeting children, chat with strangers, YouTube-style rabbit holes | Feel Good Games |
| Grocery retail (brand) | Consistency, product quality, honest communication | Quality decline, broken promises, corporate double-speak | Loblaw |
| CPG food/beverage | Ingredient transparency, authentic brand story, price justification | "Borrowed sorrow" marketing, unsubstantiated sustainability claims, "superfood" hype | Propeller Coffee, Ontarieau, various CPG studies |
| Political (voter) | Record consistency, responsiveness to constituents, concrete plans | Flip-flopping, ignoring district concerns, national talking points over local issues | Michigan SoS, political campaign studies |

Agent guidance: When designing a study for a specific industry, check this table to identify which trust signals to probe for. Include at least one question that directly addresses the trust currency for that industry (e.g., "What would convince you to try a new diagnostic device?" for healthcare, or "What concerns would you have about granting this tool access to your data?" for consumer tech).

4. The Integration Imperative

Across every B2B study and several B2C studies, integration with existing tools and workflows emerged as a non-negotiable requirement. This pattern is so consistent that it should be treated as an axiom for any product validation study.

Integration Requirements by Study

| Study | Integration Requirement | Consequence of Failure |
| --- | --- | --- |
| MotorMinds | Must integrate with DMS/RO systems. Must work offline/low-bandwidth. | "DOA": shops have dead zones and existing systems they will not replace. |
| Sidian | Must work with AutoCAD and Revit. | "Anything that doesn't work with our existing stack is DOA." |
| NexRisx | Must work with existing SIEM, EDR, and threat intelligence infrastructure. | "Any tool that requires ripping out existing infrastructure is DOA." |
| TimeSmart | Must connect EHR, payroll, scheduling, and timekeeping. | "Systems don't talk to each other" was the #1 complaint. Manual re-entry is the pain. |
| PatientCompanion | Must comply with HIPAA. Must not add tasks to already-understaffed teams. | "Any solution must save time, not add tasks." |
| Mandel Diagnostics | CPT codes for insurance reimbursement. Fits in small practice spaces. | No reimbursement code = patients will not get the test. |

Pattern insight: The word "DOA" (dead on arrival) appeared in participant responses across three separate studies (MotorMinds, Sidian, NexRisx). This is not incidental. When participants use this exact phrase about tools lacking integration, it signals an absolute requirement, not a preference.

Implication for Question Design

When running a B2B product validation study, always include:

  • Q4 (current solutions): maps what participants use today and, implicitly, everything a new tool must connect to.
  • Q7 (adoption barriers): surfaces integration deal breakers directly, in the participants' own words.

The combination of Q4 and Q7 produces a complete integration requirements document without ever asking "what integrations do you need?" directly.

5. Privacy Boundary Patterns

Three studies directly probed privacy boundaries: Airfairness (email inbox access), NexRisx (security data access), and PatientCompanion (patient health data). A consistent pattern emerged about where privacy boundaries lie and how to find them.

The Privacy Boundary Discovery Pattern

Two-step approach (validated by Airfairness study):

  1. Step 1: Name the tradeoff explicitly. "This service needs to scan your email inbox to find flight confirmations. How do you feel about that?" Do not soften or hedge. Let participants react to the full reality.
  2. Step 2: Ask what WOULD be acceptable. The magic wand or follow-up question then reveals the acceptable alternatives. For Airfairness, this produced: "folder-only access," "read-only OAuth," "one-time scan," "let me forward specific emails."

Result: You discover both where the line is AND what is on the other side. This is significantly more useful than simply learning "users have privacy concerns."
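
As a concrete sketch, the two steps map onto two consecutive question payloads using the request shape from Section 16. The step 1 wording paraphrases the Airfairness example and the step 2 wording is illustrative; swap in your own product's tradeoff.

import requests

BASE = "https://app.askditto.io/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}
STUDY_ID = "your-study-id"  # placeholder

privacy_probe = [
    # Step 1: name the tradeoff, unsoftened.
    "This service needs to scan your email inbox to find flight confirmations. "
    "How do you feel about that?",
    # Step 2: discover the acceptable alternatives.
    "What level of access, if any, WOULD you be comfortable granting so the "
    "service could still find your flights?",
]

for q in privacy_probe:
    requests.post(f"{BASE}/research-studies/{STUDY_ID}/questions",
                  headers=HEADERS, json={"question": q})
    # Poll until all responses arrive before sending the next question (Section 16).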

Universal Privacy Trust Signals

Across all privacy-sensitive studies, participants demanded these specific signals:

| Trust Signal | What Participants Said | Studies |
| --- | --- | --- |
| Kill switch / revoke access | "I need to be able to turn it off instantly" | Airfairness, NexRisx |
| Explicit data deletion | "Delete my data after you're done, and prove it" | Airfairness |
| No AI training on personal data | "You won't train your AI on my emails" | Airfairness |
| Per-action approval | "Let me approve each claim before you file it" | Airfairness |
| Granular permissions | "Let me choose exactly what you can access" | Airfairness, PatientCompanion |
| HIPAA / compliance framework | "Who sees what?" is the first question | PatientCompanion, TimeSmart |
| Transparency about capabilities | "Be explicit about what you can and cannot access" | Airfairness, NexRisx |

The Granularity Spectrum

Privacy objections follow a spectrum from "absolute no" to "acceptable with conditions":

  1. Full access to everything: Near-universal rejection. "My inbox has everything. Work, personal, financial."
  2. Category-limited access: Conditionally acceptable. "Only look at flight confirmation emails."
  3. User-initiated sharing: Broadly acceptable. "Let me forward the emails I want you to check."
  4. One-time scan with deletion: Acceptable for most. "Scan once, give me results, delete everything."

Implication: When a product requires data access, the study should discover WHERE on this spectrum the target audience falls. Design questions that walk down the spectrum from broadest to narrowest access.
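
A tiny sketch of that walk-down: one question per rung, asked in order from broadest to narrowest. The wording is illustrative, modeled on the Airfairness inbox case.

# Broadest to narrowest access, matching the four rungs above.
ACCESS_LADDER = [
    "How would you feel about granting this service full access to your inbox?",
    "What if it could only see emails in one folder you choose?",
    "What if you forwarded it only the specific emails you want checked?",
    "What if it scanned once, returned results, and then deleted everything?",
]
# Ask these as consecutive questions; the rung where sentiment flips from
# rejection to conditional acceptance is the audience's privacy boundary.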

6. Price Is Rarely the Real Objection

Across studies, price appeared as the primary adoption barrier far less frequently than trust, integration, or reliability. This pattern held even when participants were explicitly asked about cost concerns.

What Participants Actually Said About Price

| Study | Price Mentioned? | What Mattered More |
| --- | --- | --- |
| MotorMinds | Yes: "HATE subscriptions and lock-in" | Inventory accuracy mattered more. Would pay for truthful data. |
| VetVivo | Yes: $3k-$5k ceiling | Ceiling was elastic for younger pets with a good prognosis. Financing was the real unlock. |
| Feel Good Games | Yes, but secondary | Ad-free and safe were non-negotiable. Parents will pay to avoid ads in kids' apps. |
| Mandel Diagnostics | Yes: ROI required | "How many patients per month to break even?" But CPT codes (insurance reimbursement) mattered more than device cost. |
| Airfairness | 30% fee acceptable | If it is real money with zero effort, price is fine. Trust must be earned first. |
| ESPN DTC | Sharp elasticity cliff | The exception: for consumer subscriptions, price IS the primary variable. $9.99 = 65.7% adoption; $29.99 = 6.3%. |
| Loblaw | Not the issue | "This was a credibility problem, not a cost problem." |

When price IS the primary variable: Consumer subscription services (ESPN DTC) and mass-market CPG products. In these cases, use large panels (30-64 personas) with multiple price point testing.

When price is NOT the primary variable: B2B tools, healthcare devices, professional services, and any product requiring trust or data access. In these cases, focus study questions on trust, integration, and adoption barriers instead of pricing.

7. Recruitment Strategy Learnings

Strategy Success Matrix

Across 14+ studies, three distinct recruitment strategies emerged. Each has specific use cases and failure modes.

| Strategy | When to Use | Example Study | Success Rate |
| --- | --- | --- | --- |
| Direct industry filter | Target audience maps directly to a Ditto industry filter | NexRisx ("Cybersecurity"), Sidian ("Civil Engineering") | High relevance: 6-8 of 10 participants directly relevant |
| Industry proxy filter | No exact industry match exists; the closest proxy captures some of the target audience | MotorMinds ("Automotive Manufacturing" for auto mechanics), PatientCompanion ("Healthcare" for elder care staff) | Medium relevance: 3-5 of 10 directly relevant; requires profile review |
| General population + Q1 screen | Target is defined by behaviour or life circumstance, not industry (pet owners, parents, travelers) | VetVivo (age 30-60, USA), Airfairness (age 25-55, employed), Feel Good Games (is_parent: true) | High relevance after screening; Q1 establishes who qualifies |
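
The three strategies translate into three different recruit payloads for POST /v1/research-groups/recruit (request shape per Section 16). The "country", "age_min", "age_max", and "is_parent" filter keys appear elsewhere in this guide; the "industry" key name is an assumption, since only the filter values are documented in the studies above.

import requests

BASE = "https://app.askditto.io/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

direct_filter = {                  # exact industry match exists
    "name": "SOC Analyst Panel", "group_size": 10,
    "filters": {"country": "USA", "industry": "Cybersecurity"},
}
proxy_filter = {                   # closest proxy; over-recruit, curate later
    "name": "Auto Repair Panel", "group_size": 18,
    "filters": {"country": "USA", "industry": "Automotive Manufacturing"},
}
genpop_screen = {                  # behaviour-defined audience; Q1 screens
    "name": "Pet Owner Panel", "group_size": 20,
    "filters": {"country": "USA", "age_min": 30, "age_max": 60},
}

group = requests.post(f"{BASE}/research-groups/recruit",
                      headers=HEADERS, json=direct_filter).json()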

Validated Industry Proxy Mapping

When the target audience does not map to any available Ditto industry filter, use these validated proxies:

| Target Audience | Proxy Filter Used | Why It Works | Study |
| --- | --- | --- | --- |
| Auto mechanics / service advisors | "Automotive Manufacturing" | Captures maintenance techs and auto-adjacent roles | MotorMinds |
| Elder care nurses / aides | "Healthcare" | Broad healthcare captures elder care workers among others | PatientCompanion |
| Physicians / medical office staff | "Healthcare" | Same broad filter, different sub-population | TimeSmart |
| Eye care practitioners | "Healthcare" | Optometrists and ophthalmologists fall under healthcare | Mandel Diagnostics |
| Pet owners willing to spend | No industry filter; age 30-60 + USA | Pet ownership is a behaviour, not an industry | VetVivo |
| Parents of young children | is_parent: true + age 25-40 | Direct demographic filter available | Feel Good Games |
| Frequent travelers | Age 25-55 + employment: "employed" | Working professionals fly more; a "Travel" industry filter does not exist | Airfairness |
| International gift-givers | Broad demographics, no industry filter | Gift-giving is a behaviour, not an industry | Flomaru |
| Civil engineers | "Civil Engineering" | Direct match. Do NOT use generic "Engineering" (too broad). | Sidian |
| SOC analysts / security teams | "Cybersecurity" | Direct match. Do NOT use "IT" (too broad). | NexRisx |
| German bread consumers | country: "Germany", age 30-65 | Cultural study; no industry filter needed | Das Heilige Brot |
| Michigan voters | state: "MI" (2-letter code) | State filter for geographic specificity | Michigan SoS |

Filters that do NOT exist and will fail:

  • "Travel": no such industry filter; target frequent travelers with age + employment instead (Airfairness).
  • Generic "Engineering": too broad for civil engineers; use "Civil Engineering" (Sidian).
  • Generic "IT": too broad for security teams; use "Cybersecurity" (NexRisx).
  • "Pet owners," "parents," "gift-givers": behaviours, not industries; use demographics plus Q1 screening.

The Over-Recruit-and-Curate Principle

For niche audiences, recruit more participants than needed and remove irrelevant ones before asking questions.

Validated workflow:
  1. Recruit 15-20 participants (vs the 8-10 you actually need).
  2. Call GET /v1/research-studies/{study_id}/questions to view participant profiles.
  3. Score each participant:
    • 10 (High): Direct match (e.g., auto mechanic for MotorMinds)
    • 5 (Medium): Related experience (e.g., auto parts salesman)
    • 1 (Low): Tangentially relevant (e.g., equipment maintenance)
    • 0 (None): Unrelated (e.g., data analyst in manufacturing)
  4. Remove 0-scored participants via POST /v1/research-studies/{study_id}/agents/remove.
  5. Consider removing 1-scored participants if you have enough High and Medium participants.
  6. Proceed with 8-12 remaining relevant participants.

When to use: Any study where the industry proxy filter is imprecise (healthcare, automotive, engineering).

When to skip: Studies with direct industry matches (Cybersecurity, Civil Engineering) or general population studies (VetVivo, Flomaru).
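
A minimal curation sketch for steps 3-4, assuming the participant profiles were already fetched via the step 2 endpoint. The scoring rubric, the profile fields ("role", "id"), and the remove-request body ("agent_ids") are illustrative assumptions; verify the shapes against the live API.

import requests

BASE = "https://app.askditto.io/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def score(profile):
    """Hypothetical relevance rubric (step 3); in practice, read each
    profile's role/summary and judge it against the target audience."""
    role = profile.get("role", "").lower()
    if "mechanic" in role or "service advisor" in role:
        return 10   # direct match
    if "parts" in role:
        return 5    # related experience
    if "maintenance" in role:
        return 1    # tangential
    return 0        # unrelated

def curate(study_id, profiles):
    """Step 4: remove every 0-scored participant, keep the rest."""
    to_remove = [p["id"] for p in profiles if score(p) == 0]
    if to_remove:
        requests.post(f"{BASE}/research-studies/{study_id}/agents/remove",
                      headers=HEADERS, json={"agent_ids": to_remove})
    return [p for p in profiles if score(p) > 0]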

8. Question Design Learnings

The 7-Question Framework: What Each Position Reveals

The 7-question framework was used in all 10 startup diligence studies and adapted for CPG, B2C, and political studies. Each question position serves a specific purpose:

| Position | Purpose | What It Reveals | If You Skip It |
| --- | --- | --- | --- |
| Q1: Establish relevance | Confirm the participant is qualified to answer | Role, frequency of the relevant activity, context | Risk asking questions of unqualified participants. Wasted responses. |
| Q2: Pain identification | Surface specific frustrations and problems | The language customers use to describe their pain. Quotable material. | You get solutions without understanding the problem. Weak insights. |
| Q3: Quantify impact | Put numbers on the pain | Hours, dollars, frequency. "10-20+ hours/week on parts sourcing." | Pain feels vague and unactionable. No ROI story for the product. |
| Q4: Current solutions | Map the competitive landscape | What they use today, what works, what does not. Integration requirements. | You miss the competitive landscape and integration needs entirely. |
| Q5: Past attempts | Determine if the problem is being actively solved | Whether they have tried and abandoned tools, and why. | You cannot distinguish active problem-solvers from passive complainers. |
| Q6: Magic wand | Reveal what they actually want | The ideal solution in their own words, unconstrained by practical limits. | You miss the most valuable data point. See Section 2. |
| Q7: Adoption barriers | Surface deal breakers and sales objections | What would stop them from switching, even if the product is perfect. | You discover barriers only after building. Expensive. |
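
As a reusable sketch, the seven positions can be held as templates and filled per study. The Q5 and Q6 wording below is taken from this guide; the rest is illustrative phrasing, not the exact questions from any study.

# One template per framework position; replace [task] before asking.
SEVEN_QUESTIONS = [
    "Walk me through how [task] fits into a typical week for you.",          # Q1 relevance
    "Tell me about a time when [task] was particularly painful.",            # Q2 pain
    "How much time or money does [task] cost you in a typical month?",       # Q3 quantify
    "What do you use for [task] today, and what works or doesn't?",          # Q4 solutions
    "Have you tried a new tool specifically to solve this? What happened?",  # Q5 attempts
    "If you could wave a magic wand and fix ONE thing about [task], "
    "what would it be?",                                                     # Q6 magic wand
    "What would stop you from switching to a new tool, even a great one?",   # Q7 barriers
]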

Question Design Anti-Patterns (Observed Failures)

| Anti-Pattern | Why It Fails | Fix |
| --- | --- | --- |
| Describing the product in the question | Leads participants to evaluate YOUR solution instead of expressing their needs | Ask about the problem space. Never mention the product. |
| Yes/no questions | Ditto personas provide the richest insights with open-ended prompts | "Walk me through..." / "Tell me about a time when..." / "What's the most..." |
| Multi-part questions that cover too much | Participants focus on one part and ignore the others | One topic per question. Split complex questions into two. |
| Academic or formal language | Produces formal, guarded responses instead of authentic reactions | "What bugs you?" beats "What challenges do you face?" |
| Skipping Q5 (past attempts) | Cannot distinguish active seekers from passive complainers | Always ask: "Have you tried a new tool specifically to solve this?" |
| Revealing the VC context in due diligence | Participants adjust answers to seem "investable" instead of honest | Frame as general research about their industry or work. |

The "Walk Me Through" Technique

The phrase "Walk me through..." consistently produced the most detailed, narrative responses across all studies. It creates a conversational frame that encourages step-by-step storytelling rather than summary answers.

Examples that worked followed this shape (exact wordings varied by study):

  • "Walk me through what happens when you need a part that isn't in stock." (auto repair)
  • "Walk me through how a timesheet gets from the floor to payroll." (healthcare admin)
  • "Walk me through how you investigate a suspicious alert across your tools." (security)

Questions in this form consistently produced responses with specific details, named tools, time estimates, and quotable frustrations.

The "Tell Me About a Time" Technique

This variant produces anecdote-rich responses with specific incidents rather than generalised opinions. Particularly effective for Q2 (pain identification).

Example: "Tell me about a time when sourcing a part was particularly painful." (MotorMinds Q2)
Result: Participants described specific incidents with named suppliers, exact delays, and dollar costs. Far more useful than "Parts sourcing is frustrating."

9. Group Sizing Impact on Insight Quality

Studies ranged from 6 personas (Das Heilige Brot) to 64 personas (ESPN DTC). Group size has a measurable impact on the type and quality of insights produced.

| Group Size | Best For | Insight Type | Example |
| --- | --- | --- | --- |
| 6-8 | Cultural research, deep qualitative, niche audiences | Rich narratives, cultural context, individual stories. Low statistical confidence. | Das Heilige Brot (6 German personas produced deeply detailed cultural insights about bread identity) |
| 10 | Standard due diligence, single-topic validation | Pattern identification, consensus detection, quotable insights. Sufficient for problem validation. | MotorMinds (10), Michigan SoS (10), Sidian (10) |
| 15-20 | Multi-perspective studies, industries with diverse roles, over-recruit-and-curate | Broader perspective range, sub-group analysis possible, higher confidence in consensus. | VetVivo (20), Airfairness (20), PatientCompanion (20), Mandel (15) |
| 30-64 | Pricing studies, statistical confidence, representative panels | Quantitative patterns, price elasticity curves, demographic sub-analysis. | ESPN DTC (64 personas from the "100 Americans" panel; sharp pricing cliff identified) |

Rule of thumb: default to 10 personas for problem validation. Drop to 6-8 when depth matters more than consensus, use 15-20 when you need to curate or cover diverse roles, and reserve 30-64 for pricing or anything requiring quantitative confidence.

10. Single-Phase vs Multi-Phase Research

Most studies (13 of 15) used single-phase research: one group, one study, 7 questions, done. CareQuarter used three phases. The choice between approaches has significant implications.

Single-Phase (Standard)

| Characteristic | Detail |
| --- | --- |
| Structure | 1 group, 1 study, 7 questions |
| Duration | 15-30 minutes total |
| Output | Problem validation, adoption barriers, magic wand insight, quotable material |
| Limitation | Cannot iterate on findings. Cannot test solutions informed by discovery. |
| Use when | Validating an existing product concept. Due diligence on a startup. Testing messaging or positioning. |
| Examples | All 10 startup diligence studies, CPG studies, political studies |

Multi-Phase (Iterative)

| Characteristic | Detail |
| --- | --- |
| Structure | 2-3 separate groups and studies, each informed by previous findings |
| Duration | 2-4 hours total |
| Output | Full problem discovery, solution validation, concept testing, pricing, positioning |
| Advantage | Each phase builds on the last. Phase 2 questions are impossible to design without Phase 1 findings. |
| Use when | Building a concept from scratch. Insufficient domain knowledge to design targeted questions. Need to validate both the problem AND the solution. |
| Example | CareQuarter: Phase 1 (pain discovery) found "authority without power" as the core pain. Phase 2 (solution validation) explored what "authority" means and tested approaches. Phase 3 (concept test) validated positioning, pricing ($175-325/month), and trigger moments. |

Decision rule: if you can already state the product's value proposition in one sentence, run single-phase. If you are still discovering what to build, or each round of findings should shape the next round of questions, run multi-phase.

Multi-Phase Design Pattern

When using multi-phase research, follow this tested structure:

  1. Phase 1: Pain Discovery. Open problem-space questions with a fresh group to surface the core pain in participants' own words.
  2. Phase 2: Solution Validation. A new group probes what solving that pain would actually look like and tests candidate approaches.
  3. Phase 3: Concept Test. A third group validates positioning, pricing, and trigger moments for the refined concept.

Why fresh participants for each phase: Previous participants have been "primed" by earlier questions. Fresh participants give uncontaminated reactions to refined questions.
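
A short sketch of that discipline, using the recruit and create-study endpoints from Section 16. The helper name, filters, and phase objectives are placeholders; the point is that each phase gets a brand-new group.

import requests

BASE = "https://app.askditto.io/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def start_phase(title, objective, filters, size=10):
    """Recruit a NEW group and open a study on it; never reuse a primed group."""
    group = requests.post(f"{BASE}/research-groups/recruit", headers=HEADERS,
                          json={"name": title, "group_size": size,
                                "filters": filters}).json()
    return requests.post(f"{BASE}/research-studies", headers=HEADERS,
                         json={"title": title, "objective": objective,
                               "research_group_uuid": group["research_group_uuid"]},
                         ).json()

# Run Phase 1 first; design Phase 2's questions from its findings, then:
phase2 = start_phase("Phase 2: Solution Validation",
                     "Test approaches to the Phase 1 pain", {"country": "USA"})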

11. Emotional Research Techniques

Several studies discovered that the most valuable insights came from questions that explicitly named emotions or created permission for emotional honesty. This is distinct from the standard 7-question framework.

Techniques That Produced Emotional Depth

| Technique | How It Works | Example | Study |
| --- | --- | --- | --- |
| Name the emotion | Explicitly mention the feeling most people experience but do not admit | "When does screen time make you feel guilty as a parent?" | Feel Good Games Q4 |
| Create a dichotomy | Present two opposing frames for the same behaviour | "When does it feel like guilt, and when does it feel like good parenting?" | Feel Good Games Q4 |
| Name the tradeoff | State the uncomfortable reality directly | "Let's talk about the privacy tradeoff. This service needs to scan your inbox." | Airfairness Q6 |
| Ask about the relationship, not the transaction | Frame around the emotional bond rather than the purchase decision | "How would you describe your relationship with your pet?" | VetVivo Q2 |
| Ask about difficult decisions | Reference a specific emotionally charged moment | "Have you ever faced a difficult medical decision for a pet?" | VetVivo Q4 |
| Stigma surfacing | Ask for an "honest reaction" to a potentially stigmatised category | "What is your honest reaction when you see decaf on a menu?" | Propeller Coffee (CPG) |

When to use emotional techniques: consumer categories where the decision is emotionally loaded (parenting, pet care, food and beverage, personal data). When NOT to use: B2B workflow tools, infrastructure products, or any context where emotional framing would feel inappropriate or condescending to the participant.

12. Adoption Barrier Taxonomy

Question 7 (adoption barriers) across all studies produced a classifiable taxonomy of reasons people resist new solutions. Barriers fall into seven categories, ordered by frequency of appearance:

| # | Barrier Category | Frequency | Example | Studies |
| --- | --- | --- | --- | --- |
| 1 | Integration / workflow disruption | 10 of 14 studies | "Must work with what I already use." | MotorMinds, Sidian, NexRisx, TimeSmart, PatientCompanion, Mandel, and others |
| 2 | Trust / credibility | 9 of 14 studies | "I've been burned before." "Prove it works." | MotorMinds, Sidian, NexRisx, Flomaru, VetVivo, Loblaw, and others |
| 3 | Privacy / data access | 5 of 14 studies | "Who sees what?" "Don't train AI on my data." | Airfairness, NexRisx, PatientCompanion, TimeSmart, Feel Good Games |
| 4 | Learning curve / training | 5 of 14 studies | "Staff turnover means retraining." "Must be intuitive." | Mandel, PatientCompanion, TimeSmart, Sidian, MotorMinds |
| 5 | Regulatory / compliance | 4 of 14 studies | "HIPAA." "FDA clearance." "Professional liability." | PatientCompanion, Mandel, TimeSmart, Sidian |
| 6 | Cost / pricing model | 4 of 14 studies | "Hate subscriptions." "ROI must be clear." | MotorMinds, Mandel, VetVivo, ESPN DTC |
| 7 | Relationship disruption | 3 of 14 studies | "Don't break my relationship with local suppliers." | MotorMinds, VetVivo, Flomaru |

Key insight: Integration and trust are 2x more common as barriers than price. This means product teams that lead with "affordable pricing" in their messaging are addressing the 6th most common concern while ignoring the top two. Studies that surface this hierarchy help product teams re-prioritise their development and sales strategies.
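
A naive first-pass tagger for sorting Q7 responses into this taxonomy. The keyword lists are illustrative starting points, not validated classifiers; manual review of each response_text is still required.

# Tag each Q7 response with every barrier category whose keywords it contains.
BARRIER_KEYWORDS = {
    "integration": ["integrat", "workflow", "existing", "work with", "doa"],
    "trust": ["trust", "burned", "prove", "credib"],
    "privacy": ["privacy", "my data", "who sees", "access"],
    "learning_curve": ["train", "learn", "intuitive", "onboard"],
    "compliance": ["hipaa", "fda", "complian", "liabil", "regulat"],
    "cost": ["price", "cost", "subscription", "roi", "expensive"],
    "relationship": ["relationship", "supplier", "vendor", "rep"],
}

def tag_barriers(response_text):
    """Return every barrier category whose keywords appear in the response."""
    text = response_text.lower()
    return [cat for cat, words in BARRIER_KEYWORDS.items()
            if any(w in text for w in words)]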

13. Insight Extraction Patterns

After running a study, the quality of insight extraction determines whether the research produces actionable results or generic observations. These patterns distinguish strong insights from weak ones.

Strong vs Weak Insight Characteristics

| Strong Insight | Weak Insight |
| --- | --- |
| Specific, quotable language from participants | Generic sentiment ("people want better service") |
| Surprising or counterintuitive | Obvious or expected ("users want things to work") |
| Actionable for the product team | Abstract observation with no clear action |
| Creates tension or curiosity | Confirms what everyone already believes |
| Supported by consensus (3+ participants) | A single participant's opinion presented as a finding |
| Contradicts the builder's assumption | Restates the builder's pitch in different words |

The Insight Extraction Framework

Apply this framework to every completed study:

  1. Check for magic wand divergence. Compare Q6 magic wand answers with the product's stated value proposition. If they diverge, this is finding #1. See Section 2.
  2. Look for consensus language. When 3+ participants use similar words or phrases, that language is the insight. Quote it directly.
  3. Find the "but" statements. Responses containing "but" reveal the real tension: "I would use it, BUT only if..." The condition after "but" is the actual insight.
  4. Extract the adoption barrier hierarchy. Categorise Q7 responses using the taxonomy in Section 12. The order reveals what to fix first.
  5. Identify the past-attempts signal. Q5 (past attempts) tells you whether participants are active seekers or passive complainers: those who have tried (and often abandoned) tools are actively shopping for a solution, while those who complain but have never tried anything signal weaker demand.
  6. Find the emotional anchor. The most emotionally charged quote from any participant is the lead for any email, report, or presentation. Emotional anchors are more memorable and shareable than data points.
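
A crude helper for step 2: flag word pairs that recur across response_text values so you can check them by hand. It surfaces candidate consensus language for review; it does not replace reading the responses.

import re
from collections import Counter

def consensus_phrases(responses, min_participants=3, n=2):
    """Return word n-grams that appear in at least min_participants
    distinct responses (candidate 'consensus language')."""
    counts = Counter()
    for text in responses:
        words = re.findall(r"[a-z']+", text.lower())
        # A set per response, so each response counts a phrase at most once.
        grams = {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}
        counts.update(grams)
    return sorted(g for g, c in counts.items() if c >= min_participants)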

Insight Template for Reports and Emails

Key Finding 1: [One-sentence summary]
Best quote: "[Exact quote from participant]"
Implication: [What this means for the product/company]

Key Finding 2: [One-sentence summary]
Best quote: "[Exact quote from participant]"
Implication: [What this means for the product/company]

Key Finding 3: [One-sentence summary]
Best quote: "[Exact quote from participant]"
Implication: [What this means for the product/company]

Overall narrative: [What story do these findings tell?]
Magic wand divergence: [If applicable, what they wanted vs what is being built]
How many insights to include: 4-5 is the sweet spot for emails and reports (extend the template's three findings as needed). Three feels thin. Seven feels exhausting. Lead with the most surprising or counterintuitive finding.

14. Study Design Decision Framework

Before creating a Ditto study, answer these four questions in order:

Decision 1: What Type of Study?

| If the Goal Is... | Use This Approach | Example |
| --- | --- | --- |
| Validate a startup for investment | Single-phase, 7-question non-leading framework | All 10 startup diligence studies |
| Research a CPG brand for sales outreach | Single-phase, 7 questions with the CPG-specific framework | Propeller Coffee, Ontarieau |
| Test political messaging or voter sentiment | Single-phase, 7 questions with the political framework, state filter | Michigan SoS, campaign studies |
| Get B2C product feedback for sales outreach | Single-phase, 7 questions with the B2C tech framework | Various B2C PM studies |
| Build a concept from scratch | Multi-phase iterative (3 phases) | CareQuarter |
| Test pricing elasticity | Large panel (30-64), 3-4 pricing-focused questions | ESPN DTC |
| M&A voice-of-customer | Country-filtered panel, 4-5 focused questions | Loblaw / No Frills |
| Cultural research | Small focused group (6-8), country-specific, 7 questions | Das Heilige Brot |

Decision 2: How Many Personas?

| If... | Use |
| --- | --- |
| Deep qualitative, cultural research | 6-8 personas |
| Standard problem validation, due diligence | 10 personas |
| Need to over-recruit and curate (niche industry) | 15-20 personas (expect to keep 8-12) |
| Broad audience validation, multiple perspectives | 20 personas |
| Pricing study or need for statistical confidence | 30-64 personas |

Decision 3: What Recruitment Strategy?

| If... | Strategy |
| --- | --- |
| Exact industry filter exists (Cybersecurity, Civil Engineering) | Direct industry filter. Recruit the exact number needed. |
| Closest industry filter is a proxy (Healthcare for elder care) | Industry proxy + over-recruit-and-curate. Recruit 15-20, keep 8-12. |
| Target is defined by behaviour, not industry (pet owners, travelers, parents) | General population + Q1 screen. Use age, country, is_parent filters. |
| Target is geographically specific (state voters) | State filter with 2-letter code. NEVER the full state name. |
| Target is culturally specific (German bread consumers) | Country filter + age range. Small group (6-8). |

Decision 4: What Question Framework?

| If... | Framework | Reference |
| --- | --- | --- |
| Startup due diligence | 7-question non-leading framework (relevance → pain → quantify → solutions → attempts → magic wand → barriers) | Question Design Playbook |
| CPG consumer research | 7 questions using CPG frameworks (stigma, origin story, price justification, social signaling, gift-giving, switching) | Question Design Playbook |
| B2C product feedback | 7 questions using B2C frameworks (first impression, value clarity, price sensitivity, feature trade-offs, friction) | Question Design Playbook |
| Political voter sentiment | 7 questions using political frameworks (name recognition, issue ownership, messaging test, switching trigger, enthusiasm, concerns) | Question Design Playbook |
| Pricing elasticity | 3-4 questions testing specific price points with a representative panel | ESPN DTC case study |
| Privacy boundary discovery | Name the tradeoff explicitly, then ask what WOULD be acceptable | Section 5 above |
| Emotional / cultural research | Name the emotion, create dichotomies, ask about relationships not transactions | Section 11 above |

15. Common Mistakes Registry

Every mistake in this list has occurred in production and caused wasted API calls, low-quality results, or failed studies.

API Mistakes

| Mistake | Consequence | Fix |
| --- | --- | --- |
| Using full state names ("Michigan") instead of 2-letter codes ("MI") | 0 agents returned. Group is empty. | Always use 2-letter codes: MI, TX, CA, NY, FL, OH, PA. |
| Using research_group_id instead of research_group_uuid | Study created without linked participants. | Always use the UUID (32-character hex string), not the integer ID. |
| Using the "text" field instead of "question" when asking | 400 error or empty question. | Request body: {"question": "your question here"} |
| Reading the "answer" field instead of "response_text" | Empty or null values when reading responses. | Participant answers are in the response_text field. |
| Not waiting for all responses before asking the next question | Some participants miss questions. Incomplete data. | Poll GET /v1/research-studies/{id}/questions until all responses are received (~45-60 seconds per question). |
| Forgetting to call POST /v1/research-studies/{id}/complete | No AI-generated analysis. Study stays in the "active" state. | Always complete the study after all questions are asked. |
| Not enabling sharing before extracting the share link | Share link not available, or returns private/restricted. | Call POST /v1/research-studies/{id}/share to enable, then GET to retrieve the link. |

Study Design Mistakes

| Mistake | Consequence | Fix |
| --- | --- | --- |
| Describing the product in questions | Participants evaluate YOUR solution instead of expressing their needs. Biased results. | Ask about the problem domain. Never mention the product. |
| Using too-narrow industry filters | 0 agents returned. | Use proxy filters. Check Section 7 for validated proxies. |
| Not reviewing participant profiles before asking questions | Irrelevant participants (data analysts answering auto mechanic questions). | Over-recruit, review profiles, remove irrelevant agents before starting. |
| Skipping the magic wand question | Missing the most valuable insight. See Section 2. | Always ask it as Q6: "If you could wave a magic wand and fix ONE thing..." |
| Skipping the past-attempts question | Cannot distinguish active seekers from passive complainers. | Always ask Q5: "Have you tried a new tool specifically to solve this?" |
| Using "American Voters" as a generic group for political research | Useless data. State-level insights require state filtering. | Always use state-specific groups with 2-letter codes (e.g., "MI State Voters"). |
| Revealing the VC context in due diligence questions | Participants adjust answers to seem "investable." | Frame as general industry research. Never mention investment or VC. |
| Asking yes/no questions | Binary responses with no nuance. Wasted question slot. | Always open-ended: "Walk me through..." / "What's the most..." / "Tell me about..." |

16. API Gotchas and Field Reference

Quick reference for the most commonly confused API fields and behaviours, derived from production experience.

Critical Field Reference

| When You Need To... | Use This Field | NOT This |
| --- | --- | --- |
| Create a study from a group | research_group_uuid | group_id, research_group_id |
| Ask a question | {"question": "..."} | {"text": "..."} |
| Read a participant's answer | response_text | answer, response, text |
| Filter by US state | "state": "MI" (2-letter) | "state": "Michigan" (full name) |
| Create a study (required fields) | title + objective + research_group_uuid | Any subset of these |

Polling Pattern for Responses

# After asking a question, poll until all responses arrive
# Typical wait: 30-60 seconds per question per 10 participants

import time
import requests

def wait_for_responses(study_id, question_index, expected_count, api_key,
                       timeout=600):
    """Poll until all persona responses to one question are received."""
    url = f"https://app.askditto.io/v1/research-studies/{study_id}/questions"
    headers = {"Authorization": f"Bearer {api_key}"}
    deadline = time.time() + timeout

    while time.time() < deadline:
        response = requests.get(url, headers=headers)
        response.raise_for_status()  # surface auth/4xx errors immediately
        questions = response.json()

        if len(questions) > question_index:
            answers = questions[question_index].get("answers", [])
            if len(answers) >= expected_count:
                return answers  # each answer's text is in "response_text"

        time.sleep(10)  # poll every 10 seconds

    raise TimeoutError(f"Question {question_index}: fewer than "
                       f"{expected_count} responses after {timeout}s")

Complete Single-Phase Study Workflow

# 1. Create research group
POST /v1/research-groups/recruit
{"name": "Study Name", "group_size": 10, "filters": {"country": "USA", "age_min": 28, "age_max": 58}}
# Save: research_group_uuid from response

# 2. Create study
POST /v1/research-studies
{"title": "Study Title", "objective": "Understand X", "research_group_uuid": "{uuid}"}
# Save: study_id from response

# 3. Ask 7 questions sequentially
# For each question:
POST /v1/research-studies/{study_id}/questions
{"question": "Your question here"}
# Wait for all responses before asking next question

# 4. Complete the study
POST /v1/research-studies/{study_id}/complete

# 5. Enable sharing
POST /v1/research-studies/{study_id}/share
{"is_shared": true}

# 6. Get share link
GET /v1/research-studies/{study_id}/share
# Share link in response
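
The same six steps as a single Python sketch. Endpoints, request bodies, and the saved response fields (research_group_uuid, study_id) follow the steps above; wait_for_responses is the polling helper defined earlier in this section, and the title, objective, filters, and questions are placeholders.

import requests

BASE = "https://app.askditto.io/v1"
API_KEY = "YOUR_API_KEY"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

QUESTIONS = ["..."] * 7  # fill with the 7-question framework (Section 8)

# 1. Create research group
group = requests.post(
    f"{BASE}/research-groups/recruit", headers=HEADERS,
    json={"name": "Study Name", "group_size": 10,
          "filters": {"country": "USA", "age_min": 28, "age_max": 58}},
).json()

# 2. Create study (title, objective, and research_group_uuid are all required)
study = requests.post(
    f"{BASE}/research-studies", headers=HEADERS,
    json={"title": "Study Title", "objective": "Understand X",
          "research_group_uuid": group["research_group_uuid"]},
).json()
study_id = study["study_id"]

# 3. Ask the 7 questions sequentially, waiting out each round of responses
for i, q in enumerate(QUESTIONS):
    requests.post(f"{BASE}/research-studies/{study_id}/questions",
                  headers=HEADERS, json={"question": q})
    wait_for_responses(study_id, i, expected_count=10, api_key=API_KEY)

# 4. Complete the study (triggers the AI-generated analysis)
requests.post(f"{BASE}/research-studies/{study_id}/complete", headers=HEADERS)

# 5. Enable sharing, then 6. retrieve the share link
requests.post(f"{BASE}/research-studies/{study_id}/share",
              headers=HEADERS, json={"is_shared": True})
print(requests.get(f"{BASE}/research-studies/{study_id}/share",
                   headers=HEADERS).json())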

17. Vertical-Specific Pattern Summary

Each vertical produces characteristic patterns. Knowing these in advance helps you design better questions and extract better insights.

| Vertical | Characteristic Pattern | Most Valuable Question | Typical Magic Wand Answer |
| --- | --- | --- | --- |
| Healthcare (B2B) | Regulatory compliance gates everything. IT approval is the gatekeeper. Staff shortages amplify every pain. | Q7 (barriers) reveals compliance, IT, and staffing concerns. | "One system that talks to all other systems." |
| Healthcare (device) | ROI must be quantified. CPT codes are make-or-break. Clinical evidence required. | Q5 (past attempts) reveals what existing equipment does and does not do. | "Cheaper, faster screening that insurance covers." |
| Cybersecurity | Tool sprawl is universal. AI skepticism is high. Knowledge is lost when staff leave. | Q4 (current solutions) maps the 5-15 tool landscape. | "Natural language queries across all my tools." |
| Engineering (regulated) | Professional liability prevents AI autonomy. Generational divide on AI adoption. | Q5 (past attempts) reveals secret ChatGPT usage and liability concerns. | "Automate the tedious tasks (takeoffs) but keep me in control." |
| Auto repair / trades | Burned by previous tools. Offline capability required. Relationships with local suppliers matter. | Q2 (pain) produces the most vivid, specific stories. | "Truthful inventory. Just tell me the truth." |
| Consumer tech (privacy) | Full-access requests rejected. Granular permissions acceptable. Trust signals non-negotiable. | The privacy tradeoff question (direct naming) produces the boundary. | "Zero effort + granular control over my data." |
| Children / parenting | Guilt is the core emotion. Safety concerns dominate. Ad-free is non-negotiable. | The emotion-naming question ("When does it feel like guilt?") breaks through. | "Permission to feel okay about screen time." |
| Pet care | Pets are family. Financial limits are elastic. Age of pet drives decisions. | Q4 (difficult decision) produces the most emotionally rich responses. | "Financing that makes expensive treatments possible." |
| E-commerce / gifting | Trust is the #1 barrier. Emotional stakes are high. Amazon has set expectations. | Q4 (quality confidence) surfaces the core trust concern. | "Photo proof on delivery." |
| CPG (food/beverage) | Category stigma exists. Origin stories face skepticism. Price has a ceiling. | The stigma question ("honest reaction") surfaces hidden biases. | Varies by category. Often about authenticity and proof. |
| Political (voter) | State-level specificity required. Generic national data is useless. | Issue ownership and switching-trigger questions produce actionable campaign data. | Varies by state and race. Often about local responsiveness. |
| Grocery retail (brand) | Trust and consistency matter more than price. Quality decline erodes loyalty. | The open-ended sentiment question reveals whether problems are price or trust. | "Consistency. Just be what you used to be." |

18. Frequently Asked Questions

How many questions should a Ditto study have?

7 questions is the validated standard for problem validation and due diligence studies. Each position in the 7-question framework serves a specific purpose (see Section 8). For pricing studies, 3-4 focused questions are sufficient. For cultural research, 7 questions work well with smaller groups (6-8 personas).

When should I use multi-phase vs single-phase research?

Use single-phase (one study, 7 questions) when validating an existing product concept or performing due diligence. Use multi-phase (2-3 separate studies) when building a concept from scratch or when Phase 1 findings raise questions that require new participants to answer without bias. See Section 10 for the full comparison.

What is the most common API mistake?

Using full state names instead of 2-letter codes. "Michigan" returns 0 agents. "MI" works correctly. This has caused more wasted API calls than any other mistake. See Section 15 for the full registry.

How do I know if my study produced good insights?

Strong insights are specific (include quotable language), surprising (counterintuitive), and actionable (the product team can do something with them). If your insights could apply to any product in any industry, they are too generic. Check for magic wand divergence (Section 2), categorise adoption barriers using the taxonomy (Section 12), and ensure you have direct quotes from participants. See Section 13.

Why do 10 participants produce useful results?

10 participants is sufficient for pattern identification and consensus detection in qualitative research. Across 10 startup diligence studies with 10 participants each, consistent patterns emerged within 5-6 responses for each question. By participant 10, new themes stop appearing. For quantitative confidence (percentages, pricing), use 20-64 participants. See Section 9.

Can I reuse a research group for multiple studies?

Yes. A single research group can be used for multiple studies. This is useful when you want to ask different sets of questions to the same audience. However, for multi-phase research, use NEW groups for each phase so participants are not primed by earlier questions. Store the research_group_uuid for reuse.

What is the over-recruit-and-curate method?

Recruit 15-20 personas (more than needed), review their profiles for relevance, score each as High/Medium/Low/None based on their role and summary, remove irrelevant (None-scored) participants from the study, then proceed with the remaining relevant participants. This ensures niche studies have qualified participants even when demographic filters are imprecise. See Section 7.

How do I handle privacy-sensitive product research?

Use the two-step privacy boundary discovery pattern: (1) Name the tradeoff explicitly ("This service needs to scan your inbox. How do you feel?"), then (2) ask what WOULD be acceptable. This produces both the boundary and the acceptable alternatives. See Section 5.

What determines whether price matters in a study?

For consumer subscriptions and mass-market products, price is the primary variable. Use large panels (30-64) with multiple price points. For B2B tools, healthcare devices, and professional services, price is rarely the primary objection. Trust, integration, and reliability matter more. See Section 6.

How do these patterns apply to the free tier of Ditto?

Free-tier access provides a shared panel of approximately 12 personas. The question design patterns, insight extraction framework, and common mistakes all apply equally. The main limitations are: no custom demographic filtering, no study completion analysis, and no shareable links. Get free-tier access at https://app.askditto.io/docs/free-tier-oauth.