For Claude Code agents and AI coding assistants. This page synthesises the meta-patterns, decision frameworks, and hard-won lessons from 50+ production Ditto research studies across startup due diligence, CPG consumer research, B2C product feedback, political voter sentiment, and cultural studies. Everything here is derived from real results, not theory.
Five themes emerged independently across every vertical, every study type, and every audience. These are not hypotheses. They are observed patterns from production research.
Every study revealed deep skepticism about new tools, new vendors, AI claims, and marketing promises. This appeared across auto mechanics, pet owners, parents, healthcare workers, civil engineers, cybersecurity professionals, and frequent travelers. No audience gave trust freely.
Implication for study design: Always include an adoption barriers question (Q7 in the standard framework). The trust signals participants demand are the most actionable output for any product team.
Solutions that require replacing existing workflows fail. "Works with what I already use" is more important than any feature list. This appeared in every B2B study and several B2C studies.
What customers say they want and what they actually want frequently differ. The magic wand question ("If you could wave a magic wand and fix ONE thing...") cuts through stated preferences to reveal genuine priorities. See Section 2 for the full analysis.
Especially for AI products and data-access tools. Transparency about data handling is now table stakes, not a differentiator. See Section 5 for boundary patterns.
Across studies, when participants were asked about adoption barriers, price appeared less frequently than trust, integration complexity, and reliability concerns. See Section 6 for the full breakdown.
The magic wand question ("If you could wave a magic wand and fix ONE thing about how you [do this task], what would it be?") consistently produces the most valuable insight in any study. Across 10 startup due diligence studies, what participants wished for diverged from the startup's core pitch 7 out of 10 times.
| Startup | Builder's Assumed Value Prop | Magic Wand Answer | Diverges? |
|---|---|---|---|
| MotorMinds | Faster parts search with AI | "Truthful inventory with guaranteed ETAs" | YES |
| VetVivo | Advanced implant technology | "Financing options that make expensive treatments possible" | YES |
| Feel Good Games | Better educational games | "Permission to feel okay about screen time" (guilt relief) | YES |
| PatientCompanion | Better communication system | "Context before arrival" (know what patient needs before walking in) | YES |
| TimeSmart | Automated timesheets | "One system that talks to all other systems" (integration) | YES |
| Flomaru | Wider international coverage | "Photo proof on delivery" (trust, not logistics) | YES |
| Sidian | AI-powered design assistance | "Automated takeoffs" (specific task, not general AI) | YES |
| Mandel Diagnostics | Early AMD detection | "Faster, cheaper screening that insurance covers" (ROI clarity) | PARTIAL |
| NexRisx | AI security platform | "Natural language queries across tools" (chat interface resonated) | ALIGNED |
| Airfairness | Automatic compensation claims | "Zero-effort compensation with granular privacy controls" | ALIGNED (but with strong conditions) |
Pattern: When the magic wand answer diverges from the builder's pitch, it reveals one of three things:
When running a Ditto study for product validation:
While trust is a universal theme, what constitutes a trust signal varies dramatically by industry. This table maps the specific trust requirements observed across studies.
| Industry | Trust Currency | Trust Breaker | Studies |
|---|---|---|---|
| Auto repair / trades | Accurate inventory, guaranteed ETAs, offline capability | Fake "in stock" status, apps that crash, subscription lock-in | MotorMinds |
| Healthcare (clinical) | Peer-reviewed evidence, FDA clearance, CPT codes, specialist endorsements | Unproven claims, no reimbursement pathway, large equipment footprint | Mandel Diagnostics, PatientCompanion |
| Healthcare (admin) | System integration, mobile access, IT approval pathway | Desktop-only, requires ripping out existing systems, no compliance story | TimeSmart |
| Cybersecurity | Works with existing stack, substance over buzzwords, knowledge preservation | "AI-powered" marketing with no substance, requires infrastructure replacement | NexRisx |
| Civil engineering | AutoCAD/Revit integration, AI as "suggestion" not "decision," human remains liable | AI making decisions that carry professional stamp, proprietary lock-in | Sidian |
| E-commerce / gifting | Photo proof on delivery, real-time tracking, transparent pricing | Hidden fees, currency conversion surprises, "delivery TBD" | Flomaru |
| Veterinary / pet care | Transparent prognosis, financing options, quality-of-life focus | Feeling "upsold," no second-opinion pathway, prolonging suffering | VetVivo |
| Consumer tech (privacy) | Kill switch, granular permissions, data deletion, per-action approval | "Full access" to anything, no transparency about data use, AI training on personal data | Airfairness |
| Children's apps | Ad-free, no social features with strangers, visible educational outcomes | In-app purchases targeting children, chat with strangers, YouTube-style rabbit holes | Feel Good Games |
| Grocery retail (brand) | Consistency, product quality, honest communication | Quality decline, broken promises, corporate double-speak | Loblaw |
| CPG food/beverage | Ingredient transparency, authentic brand story, price justification | "Borrowed sorrow" marketing, unsubstantiated sustainability claims, "superfood" hype | Propeller Coffee, Ontarieau, various CPG studies |
| Political (voter) | Record consistency, responsiveness to constituents, concrete plans | Flip-flopping, ignoring district concerns, national talking points over local issues | Michigan SoS, political campaign studies |
Across every B2B study and several B2C studies, integration with existing tools and workflows emerged as a non-negotiable requirement. This pattern is so consistent that it should be treated as an axiom for any product validation study.
| Study | Integration Requirement | Consequence of Failure |
|---|---|---|
| MotorMinds | Must integrate with DMS/RO systems. Must work offline/low-bandwidth. | "DOA" - shops have dead zones and existing systems they will not replace. |
| Sidian | Must work with AutoCAD and Revit. | "Anything that doesn't work with our existing stack is DOA." |
| NexRisx | Must work with existing SIEM, EDR, and threat intelligence infrastructure. | "Any tool that requires ripping out existing infrastructure is DOA." |
| TimeSmart | Must connect EHR, payroll, scheduling, and timekeeping. | "Systems don't talk to each other" was the #1 complaint. Manual re-entry is the pain. |
| PatientCompanion | Must comply with HIPAA. Must not add tasks to already-understaffed teams. | "Any solution must save time, not add tasks." |
| Mandel Diagnostics | CPT codes for insurance reimbursement. Fits in small practice spaces. | No reimbursement code = patients will not get the test. |
When running a B2B product validation study, always include Q4 (current solutions) and Q7 (adoption barriers).
The combination of Q4 and Q7 produces a complete integration requirements document without ever asking "what integrations do you need?" directly.
Three studies directly probed privacy boundaries: Airfairness (email inbox access), NexRisx (security data access), and PatientCompanion (patient health data). A consistent pattern emerged about where privacy boundaries lie and how to find them.
Two-step approach (validated by the Airfairness study):
1. Name the tradeoff explicitly: "This service needs to scan your inbox. How do you feel about that?"
2. Ask what WOULD be acceptable.
Result: You discover both where the line is AND what is on the other side. This is significantly more useful than simply learning "users have privacy concerns."
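As a minimal sketch, the two steps map to two consecutive POSTs against the documented questions endpoint (payload format per the field reference later in this page). The Step 1 wording comes from the Airfairness study; the Step 2 wording and the placeholder credentials are illustrative assumptions.

```python
import requests

API_BASE = "https://app.askditto.io/v1"
STUDY_ID = "YOUR_STUDY_ID"                          # placeholder
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder

def ask(question: str) -> None:
    """Post one question using the documented {"question": ...} payload."""
    resp = requests.post(
        f"{API_BASE}/research-studies/{STUDY_ID}/questions",
        headers=HEADERS,
        json={"question": question},
    )
    resp.raise_for_status()

# Step 1: name the tradeoff explicitly (wording from the Airfairness study).
ask("Let's talk about the privacy tradeoff. This service needs to scan "
    "your inbox. How do you feel about that?")
# ...poll until all responses arrive before asking the next question...
# Step 2: ask what WOULD be acceptable (wording is illustrative).
ask("What, if anything, WOULD make that level of access acceptable to you?")
```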
Across all privacy-sensitive studies, participants demanded these specific signals:
| Trust Signal | What Participants Said | Studies |
|---|---|---|
| Kill switch / revoke access | "I need to be able to turn it off instantly" | Airfairness, NexRisx |
| Explicit data deletion | "Delete my data after you're done, and prove it" | Airfairness |
| No AI training on personal data | "You won't train your AI on my emails" | Airfairness |
| Per-action approval | "Let me approve each claim before you file it" | Airfairness |
| Granular permissions | "Let me choose exactly what you can access" | Airfairness, PatientCompanion |
| HIPAA / compliance framework | "Who sees what?" is the first question | PatientCompanion, TimeSmart |
| Transparency about capabilities | "Be explicit about what you can and cannot access" | Airfairness, NexRisx |
Privacy objections follow a spectrum from "absolute no" to "acceptable with conditions".
Implication: When a product requires data access, the study should discover WHERE on this spectrum the target audience falls. Design questions that walk down the spectrum from broadest to narrowest access.
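A hedged illustration of such a ladder, for an inbox-access product like Airfairness; every question below is hypothetical wording, not a quote from the studies.

```python
# Broadest to narrowest access. Ask in order; the boundary is wherever
# answers flip from "absolute no" to "acceptable with conditions".
ACCESS_LADDER = [
    "How would you feel about a service with full access to your inbox?",
    "What about read-only access limited to airline and booking emails?",
    "What if you approved each email the service could read?",
    "What if it only saw emails you chose to forward to it?",
]
```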
Across studies, price appeared as the primary adoption barrier far less frequently than trust, integration, or reliability. This pattern held even when participants were explicitly asked about cost concerns.
| Study | Price Mentioned? | What Mattered More |
|---|---|---|
| MotorMinds | Yes, "HATE subscriptions and lock-in" | Inventory accuracy mattered more. Would pay for truthful data. |
| VetVivo | Yes, $3k-$5k ceiling | Ceiling was elastic for younger pets with good prognosis. Financing was the real unlock. |
| Feel Good Games | Yes, but secondary | Ad-free and safe were non-negotiable. Parents will pay to avoid ads in kids' apps. |
| Mandel Diagnostics | Yes, ROI required | "How many patients per month to break even?" - but CPT codes (insurance reimbursement) mattered more than device cost. |
| Airfairness | 30% fee acceptable | If it is real money with zero effort, price is fine. Trust must be earned first. |
| ESPN DTC | Sharp elasticity cliff | This is the exception. For consumer subscriptions, price IS the primary variable. $9.99 = 65.7% adoption; $29.99 = 6.3%. |
| Loblaw | Not the issue | "This was a credibility problem, not a cost problem." |
Across 14+ studies, three distinct recruitment strategies emerged. Each has specific use cases and failure modes.
| Strategy | When to Use | Example Study | Success Rate |
|---|---|---|---|
| Direct industry filter | Target audience maps directly to a Ditto industry filter | NexRisx ("Cybersecurity"), Sidian ("Civil Engineering") | High relevance. 6-8 out of 10 participants directly relevant. |
| Industry proxy filter | No exact industry match exists. Closest proxy captures some of the target audience. | MotorMinds ("Automotive Manufacturing" for auto mechanics), PatientCompanion ("Healthcare" for elder care staff) | Medium relevance. 3-5 out of 10 directly relevant. Requires profile review. |
| General population + Q1 screen | Target is defined by behaviour or life circumstance, not industry (pet owners, parents, travelers) | VetVivo (age 30-60, USA), Airfairness (age 25-55, employed), Feel Good Games (is_parent: true) | High relevance after screening. Q1 establishes who qualifies. |
When the target audience does not map to any available Ditto industry filter, use these validated proxies:
| Target Audience | Proxy Filter Used | Why It Works | Study |
|---|---|---|---|
| Auto mechanics / service advisors | "Automotive Manufacturing" | Captures maintenance techs and auto-adjacent roles | MotorMinds |
| Elder care nurses / aides | "Healthcare" | Broad healthcare captures elder care workers among others | PatientCompanion |
| Physicians / medical office staff | "Healthcare" | Same broad filter, different sub-population | TimeSmart |
| Eye care practitioners | "Healthcare" | Optometrists and ophthalmologists fall under healthcare | Mandel Diagnostics |
| Pet owners willing to spend | No industry filter. Age 30-60 + USA. | Pet ownership is a behaviour, not an industry | VetVivo |
| Parents of young children | is_parent: true + age 25-40 | Direct demographic filter available | Feel Good Games |
| Frequent travelers | Age 25-55 + employment: "employed" | Working professionals fly more. "Travel" industry filter does not exist. | Airfairness |
| International gift-givers | Broad demographics, no industry filter | Gift-giving is a behaviour, not an industry | Flomaru |
| Civil engineers | "Civil Engineering" | Direct match. Do NOT use generic "Engineering" (too broad). | Sidian |
| SOC analysts / security teams | "Cybersecurity" | Direct match. Do NOT use "IT" (too broad). | NexRisx |
| German bread consumers | country: "Germany", age 30-65 | Cultural study. No industry filter needed. | Das Heilige Brot |
| Michigan voters | state: "MI" (2-letter code) | State filter for geographic specificity | Michigan SoS |
"Automotive Service" - does not exist. Use "Automotive Manufacturing"."Veterinary" - does not exist. Use general population + Q1 screen."Travel" - does not exist. Use age + employment filters."Education" for parents - this returns teachers, not parents. Use is_parent: true."IT" for security - too broad. Use "Cybersecurity"."Engineering" for civil engineers - too broad. Use "Civil Engineering".For niche audiences, recruit more participants than needed and remove irrelevant ones before asking questions.
- GET /v1/research-studies/{study_id}/questions to view participant profiles.
- POST /v1/research-studies/{study_id}/agents/remove to remove irrelevant participants.

When to use: Any study where the industry proxy filter is imprecise (healthcare, automotive, engineering).
When to skip: Studies with direct industry matches (Cybersecurity, Civil Engineering) or general population studies (VetVivo, Flomaru).
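A sketch of the over-recruit-and-curate pattern, assuming the two endpoints named above. The profile response shape (a top-level "agents" list with "id" and "summary") and the removal request body are assumptions for illustration only.

```python
import requests

API_BASE = "https://app.askditto.io/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder

def curate(study_id: str, is_relevant) -> None:
    """Remove irrelevant participants before asking Q1."""
    # 1. View participant profiles (endpoint per the pattern above).
    resp = requests.get(
        f"{API_BASE}/research-studies/{study_id}/questions", headers=HEADERS
    ).json()
    # Assumed shape: a top-level "agents" list with "id" and "summary" per
    # participant (the real shape is not documented on this page).
    agents = resp.get("agents", [])
    # 2. Score each agent High/Medium/Low/None; collect the None-scored ones.
    to_remove = [a["id"] for a in agents if not is_relevant(a)]
    # 3. Remove them so they never receive questions.
    if to_remove:
        requests.post(
            f"{API_BASE}/research-studies/{study_id}/agents/remove",
            headers=HEADERS,
            json={"agent_ids": to_remove},  # body shape is an assumption
        ).raise_for_status()

# Example scoring rule (hypothetical): keep profiles that mention repair work.
curate("YOUR_STUDY_ID", lambda a: "mechanic" in a.get("summary", "").lower())
```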
The 7-question framework was used in all 10 startup diligence studies and adapted for CPG, B2C, and political studies. Each question position serves a specific purpose:
| Position | Purpose | What It Reveals | If You Skip It |
|---|---|---|---|
| Q1: Establish relevance | Confirm participant is qualified to answer | Role, frequency of relevant activity, context | Risk asking questions to unqualified participants. Wasted responses. |
| Q2: Pain identification | Surface specific frustrations and problems | The language customers use to describe their pain. Quotable material. | You get solutions without understanding the problem. Weak insights. |
| Q3: Quantify impact | Put numbers on the pain | Hours, dollars, frequency. "10-20+ hours/week on parts sourcing." | Pain feels vague and unactionable. No ROI story for the product. |
| Q4: Current solutions | Map the competitive landscape | What they use today, what works, what does not. Integration requirements. | You miss the competitive landscape and integration needs entirely. |
| Q5: Past attempts | Determine if problem is being actively solved | Whether they have tried and abandoned tools. Why they abandoned them. | You cannot distinguish active problem-solvers from passive complainers. |
| Q6: Magic wand | Reveal what they actually want | The ideal solution in their own words, unconstrained by practical limits. | You miss the most valuable data point. See Section 2. |
| Q7: Adoption barriers | Surface deal breakers and sales objections | What would stop them from switching, even if the product is perfect. | You discover barriers only after building. Expensive. |
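As a sketch, the framework can be encoded as an ordered script. The Q5 and Q6 wording appears verbatim on this page; the other phrasings are illustrative stand-ins that follow the open-ended patterns described below.

```python
# Position -> purpose is fixed; wording for Q1-Q4 and Q7 is illustrative.
SEVEN_QUESTIONS = [
    "Walk me through your typical week. How often do you deal with [task]?",  # Q1 relevance
    "Tell me about a time when [task] went badly. What happened?",            # Q2 pain
    "How much time or money does this cost you in a typical week?",           # Q3 quantify
    "What do you use today to handle [task], and what works or doesn't?",     # Q4 solutions
    "Have you tried a new tool specifically to solve this?",                  # Q5 attempts
    "If you could wave a magic wand and fix ONE thing about how you "
    "[do this task], what would it be?",                                      # Q6 magic wand
    "What would stop you from adopting a new solution, even a great one?",    # Q7 barriers
]
```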
| Anti-Pattern | Why It Fails | Fix |
|---|---|---|
| Describing the product in the question | Leads participants to evaluate YOUR solution instead of expressing their needs | Ask about the problem space. Never mention the product. |
| Yes/no questions | Binary answers carry no nuance; Ditto personas provide their richest insights in response to open-ended prompts | "Walk me through..." / "Tell me about a time when..." / "What's the most..." |
| Multi-part questions that try to cover too much | Participants focus on one part and ignore others | One topic per question. Split complex questions into two. |
| Academic or formal language | Produces formal, guarded responses instead of authentic reactions | "What bugs you?" beats "What challenges do you face?" |
| Skipping Q5 (past attempts) | Cannot distinguish active seekers from passive complainers | Always ask. "Have you tried a new tool specifically to solve this?" |
| Revealing the VC context in due diligence | Participants adjust answers to seem "investable" instead of honest | Frame as general research about their industry/work. |
The phrase "Walk me through..." consistently produced the most detailed, narrative responses across all studies. It creates a conversational frame that encourages step-by-step storytelling rather than summary answers.
Questions built on this phrase consistently produced responses with specific details, named tools, time estimates, and quotable frustrations.
The "Tell me about a time when..." variant produces anecdote-rich responses grounded in specific incidents rather than generalised opinions. It is particularly effective for Q2 (pain identification).
Studies ranged from 6 personas (Das Heilige Brot) to 64 personas (ESPN DTC). Group size has measurable impact on the type and quality of insights produced.
| Group Size | Best For | Insight Type | Example |
|---|---|---|---|
| 6-8 | Cultural research, deep qualitative, niche audiences | Rich narratives, cultural context, individual stories. Low statistical confidence. | Das Heilige Brot (6 German personas produced deeply detailed cultural insights about bread identity) |
| 10 | Standard due diligence, single-topic validation | Pattern identification, consensus detection, quotable insights. Sufficient for problem validation. | MotorMinds (10 personas), Michigan SoS (10 personas), Sidian (10 personas) |
| 15-20 | Multi-perspective studies, industries with diverse roles, over-recruit-and-curate | Broader perspective range, sub-group analysis possible, higher confidence in consensus. | VetVivo (20), Airfairness (20), PatientCompanion (20), Mandel (15) |
| 30-64 | Pricing studies, statistical confidence, representative panels | Quantitative patterns, price elasticity curves, demographic sub-analysis. | ESPN DTC (64 personas from "100 Americans" panel). Sharp pricing cliff identified. |
Most studies (13 of 15) used single-phase research: one group, one study, 7 questions, done. CareQuarter used three phases. The choice between approaches has significant implications.
| Characteristic | Detail |
|---|---|
| Structure | 1 group, 1 study, 7 questions |
| Duration | 15-30 minutes total |
| Output | Problem validation, adoption barriers, magic wand insight, quotable material |
| Limitation | Cannot iterate on findings. Cannot test solutions informed by discovery. |
| Use when | Validating an existing product concept. Due diligence on a startup. Testing messaging or positioning. |
| Examples | All 10 startup diligence studies, CPG studies, political studies |
| Characteristic | Detail |
|---|---|
| Structure | 2-3 separate groups and studies, each informed by previous findings |
| Duration | 2-4 hours total |
| Output | Full problem discovery, solution validation, concept testing, pricing, positioning |
| Advantage | Each phase builds on the last. Phase 2 questions are impossible to design without Phase 1 findings. |
| Use when | Building a concept from scratch. Insufficient domain knowledge to design targeted questions. Need to validate both problem AND solution. |
| Example | CareQuarter: Phase 1 (pain discovery) found "authority without power" as core pain. Phase 2 (solution validation) explored what "authority" means and tested approaches. Phase 3 (concept test) validated positioning, pricing ($175-325/month), and trigger moments. |
When using multi-phase research, follow the tested structure from CareQuarter: Phase 1 (pain discovery), then Phase 2 (solution validation), then Phase 3 (concept test), recruiting a fresh group for each phase.
Why fresh participants for each phase: Previous participants have been "primed" by earlier questions. Fresh participants give uncontaminated reactions to refined questions.
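A minimal sketch of the phase loop, assuming the endpoints and response keys (research_group_uuid, study_id) shown in the workflow later in this page; the filters, titles, and questions are illustrative.

```python
import requests

API_BASE = "https://app.askditto.io/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}          # placeholder
FILTERS = {"country": "USA", "age_min": 30, "age_max": 60}  # illustrative

def run_phase(title: str, objective: str, questions: list[str]) -> str:
    """Recruit a FRESH group (unprimed participants) and run one phase."""
    group = requests.post(f"{API_BASE}/research-groups/recruit", headers=HEADERS,
                          json={"name": title, "group_size": 10,
                                "filters": FILTERS}).json()
    study = requests.post(f"{API_BASE}/research-studies", headers=HEADERS,
                          json={"title": title, "objective": objective,
                                "research_group_uuid": group["research_group_uuid"]}
                          ).json()
    for q in questions:
        requests.post(f"{API_BASE}/research-studies/{study['study_id']}/questions",
                      headers=HEADERS, json={"question": q})
        # Poll here until all responses arrive (see the polling helper below).
    requests.post(f"{API_BASE}/research-studies/{study['study_id']}/complete",
                  headers=HEADERS)
    return study["study_id"]

# Phase 2's questions are written only after reading Phase 1's findings.
phase1_id = run_phase("Phase 1: Pain Discovery",
                      "Discover the core workflow pain",
                      ["Walk me through a typical week coordinating care."])
```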
Several studies discovered that the most valuable insights came from questions that explicitly named emotions or created permission for emotional honesty. This is distinct from the standard 7-question framework.
| Technique | How It Works | Example | Study |
|---|---|---|---|
| Name the emotion | Explicitly mention the feeling most people experience but do not admit | "When does screen time make you feel guilty as a parent?" | Feel Good Games Q4 |
| Create a dichotomy | Present two opposing frames for the same behaviour | "When does it feel like guilt, and when does it feel like good parenting?" | Feel Good Games Q4 |
| Name the tradeoff | State the uncomfortable reality directly | "Let's talk about the privacy tradeoff. This service needs to scan your inbox." | Airfairness Q6 |
| Ask about relationship, not transaction | Frame around emotional bond rather than purchase decision | "How would you describe your relationship with your pet?" | VetVivo Q2 |
| Ask about difficult decisions | Reference a specific emotionally charged moment | "Have you ever faced a difficult medical decision for a pet?" | VetVivo Q4 |
| Stigma surfacing | Ask for "honest reaction" to a potentially stigmatised category | "What is your honest reaction when you see decaf on a menu?" | Propeller Coffee (CPG) |
Question 7 (adoption barriers) across all studies produced a classifiable taxonomy of reasons people resist new solutions. Barriers fall into seven categories, ordered by frequency of appearance:
| # | Barrier Category | Frequency | Example | Studies |
|---|---|---|---|---|
| 1 | Integration / workflow disruption | Appeared in 10 of 14 studies | "Must work with what I already use." | MotorMinds, Sidian, NexRisx, TimeSmart, PatientCompanion, Mandel, and others |
| 2 | Trust / credibility | Appeared in 9 of 14 studies | "I've been burned before." "Prove it works." | MotorMinds, Sidian, NexRisx, Flomaru, VetVivo, Loblaw, and others |
| 3 | Privacy / data access | Appeared in 5 of 14 studies | "Who sees what?" "Don't train AI on my data." | Airfairness, NexRisx, PatientCompanion, TimeSmart, Feel Good Games |
| 4 | Learning curve / training | Appeared in 5 of 14 studies | "Staff turnover means retraining." "Must be intuitive." | Mandel, PatientCompanion, TimeSmart, Sidian, MotorMinds |
| 5 | Regulatory / compliance | Appeared in 4 of 14 studies | "HIPAA." "FDA clearance." "Professional liability." | PatientCompanion, Mandel, TimeSmart, Sidian |
| 6 | Cost / pricing model | Appeared in 4 of 14 studies | "Hate subscriptions." "ROI must be clear." | MotorMinds, Mandel, VetVivo, ESPN DTC |
| 7 | Relationship disruption | Appeared in 3 of 14 studies | "Don't break my relationship with local suppliers." | MotorMinds, VetVivo, Flomaru |
After running a study, the quality of insight extraction determines whether the research produces actionable results or generic observations. These patterns distinguish strong insights from weak ones.
| Strong Insight | Weak Insight |
|---|---|
| Specific, quotable language from participants | Generic sentiment ("people want better service") |
| Surprising or counterintuitive | Obvious or expected ("users want things to work") |
| Actionable for the product team | Abstract observation with no clear action |
| Creates tension or curiosity | Confirms what everyone already believes |
| Supported by consensus (3+ participants) | Single participant opinion presented as finding |
| Contradicts the builder's assumption | Restates the builder's pitch in different words |
Apply this framework to every completed study:
Key Finding 1: [One-sentence summary]
Best quote: "[Exact quote from participant]"
Implication: [What this means for the product/company]
Key Finding 2: [One-sentence summary]
Best quote: "[Exact quote from participant]"
Implication: [What this means for the product/company]
Key Finding 3: [One-sentence summary]
Best quote: "[Exact quote from participant]"
Implication: [What this means for the product/company]
Overall narrative: [What story do these findings tell?]
Magic wand divergence: [If applicable, what they wanted vs what is being built]
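If it helps to automate the template, a small formatter like the sketch below keeps extracted insights in the exact structure above; the function name and the sample finding are illustrative, not from a real study output.

```python
FINDING_TEMPLATE = """Key Finding {n}: {summary}
Best quote: "{quote}"
Implication: {implication}"""

def format_findings(findings: list[dict]) -> str:
    """Render findings in the insight-extraction template above."""
    return "\n\n".join(
        FINDING_TEMPLATE.format(n=i + 1, **f) for i, f in enumerate(findings)
    )

print(format_findings([{
    "summary": "Mechanics distrust 'in stock' flags more than they mind price.",
    "quote": "Just give me truthful inventory.",  # illustrative quote
    "implication": "Lead with verified-inventory messaging, not speed.",
}]))
```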
Before creating a Ditto study, answer these four questions in order: What is the goal? What group size? Which recruitment strategy? Which question framework? The tables below map each answer to a tested approach.
| If the Goal Is... | Use This Approach | Example |
|---|---|---|
| Validate a startup for investment | Single-phase, 7-question non-leading framework | All 10 startup diligence studies |
| Research a CPG brand for sales outreach | Single-phase, 7 questions with CPG-specific framework | Propeller Coffee, Ontarieau |
| Test political messaging or voter sentiment | Single-phase, 7 questions with political framework, state filter | Michigan SoS, campaign studies |
| Get B2C product feedback for sales outreach | Single-phase, 7 questions with B2C tech framework | Various B2C PM studies |
| Build a concept from scratch | Multi-phase iterative (3 phases) | CareQuarter |
| Test pricing elasticity | Large panel (30-64), 3-4 pricing-focused questions | ESPN DTC |
| M&A voice-of-customer | Country-filtered panel, 4-5 focused questions | Loblaw / No Frills |
| Cultural research | Small focused group (6-8), country-specific, 7 questions | Das Heilige Brot |
| If... | Use |
|---|---|
| Deep qualitative, cultural research | 6-8 personas |
| Standard problem validation, due diligence | 10 personas |
| Need to over-recruit and curate (niche industry) | 15-20 personas (expect to keep 8-12) |
| Broad audience validation, multiple perspectives | 20 personas |
| Pricing study or need statistical confidence | 30-64 personas |
| If... | Strategy |
|---|---|
| Exact industry filter exists (Cybersecurity, Civil Engineering) | Direct industry filter. Recruit exact number needed. |
| Closest industry filter is a proxy (Healthcare for elder care) | Industry proxy + over-recruit-and-curate. Recruit 15-20, keep 8-12. |
| Target is defined by behaviour, not industry (pet owners, travelers, parents) | General population + Q1 screen. Use age, country, is_parent filters. |
| Target is geographically specific (state voters) | State filter with 2-letter code. NEVER full state name. |
| Target is culturally specific (German bread consumers) | Country filter + age range. Small group (6-8). |
| If... | Framework | Reference |
|---|---|---|
| Startup due diligence | 7-question non-leading framework (relevance → pain → quantify → solutions → attempts → magic wand → barriers) | Question Design Playbook |
| CPG consumer research | 7 questions using CPG frameworks (stigma, origin story, price justification, social signaling, gift-giving, switching) | Question Design Playbook |
| B2C product feedback | 7 questions using B2C frameworks (first impression, value clarity, price sensitivity, feature trade-offs, friction) | Question Design Playbook |
| Political voter sentiment | 7 questions using political frameworks (name recognition, issue ownership, messaging test, switching trigger, enthusiasm, concerns) | Question Design Playbook |
| Pricing elasticity | 3-4 questions testing specific price points with representative panel | ESPN DTC case study |
| Privacy boundary discovery | Name the tradeoff explicitly, then ask what WOULD be acceptable | Section 5 above |
| Emotional / cultural research | Name the emotion, create dichotomies, ask about relationships not transactions | Section 11 above |
Every mistake in this list has occurred in production and caused wasted API calls, low-quality results, or failed studies.
| Mistake | Consequence | Fix |
|---|---|---|
| Using full state names ("Michigan") instead of 2-letter codes ("MI") | 0 agents returned. Group is empty. | Always use 2-letter codes: MI, TX, CA, NY, FL, OH, PA. |
| Using research_group_id instead of research_group_uuid | Study created without linked participants. | Always use the UUID (32-character hex string), not the integer ID. |
| Using "text" field instead of "question" when asking | 400 error or empty question. | Request body: {"question": "your question here"} |
| Reading "answer" field instead of "response_text" | Empty or null values when reading responses. | Participant answers are in the response_text field. |
| Not waiting for all responses before asking next question | Some participants miss questions. Incomplete data. | Poll GET /v1/research-studies/{id}/questions until all responses received (~45-60 seconds per question). |
| Forgetting to call POST /v1/research-studies/{id}/complete | No AI-generated analysis. Study stays in "active" state. | Always complete the study after all questions are asked. |
| Not enabling sharing before extracting share link | Share link not available or returns private/restricted. | Call POST /v1/research-studies/{id}/share to enable, then GET to retrieve link. |
| Mistake | Consequence | Fix |
|---|---|---|
| Describing the product in questions | Participants evaluate YOUR solution, not express their needs. Biased results. | Ask about the problem domain. Never mention the product. |
| Using too-narrow industry filters | 0 agents returned. | Use proxy filters. Check Section 7 for validated proxies. |
| Not reviewing participant profiles before asking questions | Irrelevant participants (data analysts answering auto mechanic questions). | Over-recruit, review profiles, remove irrelevant agents before starting. |
| Skipping the magic wand question | Missing the most valuable insight. See Section 2. | Always ask it as Q6. "If you could wave a magic wand and fix ONE thing..." |
| Skipping the past-attempts question | Cannot distinguish active seekers from passive complainers. | Always ask Q5: "Have you tried a new tool specifically to solve this?" |
| Using "American Voters" as a generic group for political research | Useless data. State-level insights require state filtering. | Always use state-specific groups with 2-letter codes (e.g., "MI State Voters"). |
| Revealing VC context in due diligence questions | Participants adjust answers to seem "investable." | Frame as general industry research. Never mention investment or VC. |
| Asking yes/no questions | Binary responses with no nuance. Wasted question slot. | Always open-ended: "Walk me through..." / "What's the most..." / "Tell me about..." |
Quick reference for the most commonly confused API fields and behaviours, derived from production experience.
| When You Need To... | Use This Field | NOT This |
|---|---|---|
| Create a study from a group | research_group_uuid | group_id, research_group_id |
| Ask a question | {"question": "..."} | {"text": "..."} |
| Read a participant's answer | response_text | answer, response, text |
| Filter by US state | "state": "MI" (2-letter) | "state": "Michigan" (full name) |
| Create a study (required fields) | title + objective + research_group_uuid | Any subset of these |
# After asking a question, poll until all responses arrive
# Typical wait: 30-60 seconds per question per 10 participants
import time
import requests
def wait_for_responses(study_id, question_index, expected_count, api_key):
"""Poll until all persona responses are received."""
url = f"https://app.askditto.io/v1/research-studies/{study_id}/questions"
headers = {"Authorization": f"Bearer {api_key}"}
while True:
response = requests.get(url, headers=headers)
questions = response.json()
if len(questions) > question_index:
answers = questions[question_index].get("answers", [])
if len(answers) >= expected_count:
return answers
time.sleep(10) # Poll every 10 seconds
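# Illustrative usage (study_id/api_key are placeholders assumed to be set):
# pair each question POST with the poller above so every participant answers
# before the next question is asked.
requests.post(
    f"https://app.askditto.io/v1/research-studies/{study_id}/questions",
    headers={"Authorization": f"Bearer {api_key}"},
    json={"question": "Walk me through your typical workday."},
)
answers = wait_for_responses(study_id, question_index=0,
                             expected_count=10, api_key=api_key)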
# 1. Create research group
POST /v1/research-groups/recruit
{"name": "Study Name", "group_size": 10, "filters": {"country": "USA", "age_min": 28, "age_max": 58}}
# Save: research_group_uuid from response
# 2. Create study
POST /v1/research-studies
{"title": "Study Title", "objective": "Understand X", "research_group_uuid": "{uuid}"}
# Save: study_id from response
# 3. Ask 7 questions sequentially
# For each question:
POST /v1/research-studies/{study_id}/questions
{"question": "Your question here"}
# Wait for all responses before asking next question
# 4. Complete the study
POST /v1/research-studies/{study_id}/complete
# 5. Enable sharing
POST /v1/research-studies/{study_id}/share
{"is_shared": true}
# 6. Get share link
GET /v1/research-studies/{study_id}/share
# Share link in response
Each vertical produces characteristic patterns. Knowing these in advance helps you design better questions and extract better insights.
| Vertical | Characteristic Pattern | Most Valuable Question | Typical Magic Wand Answer |
|---|---|---|---|
| Healthcare (B2B) | Regulatory compliance gates everything. IT approval is the gatekeeper. Staff shortages amplify every pain. | Q7 (barriers) reveals compliance, IT, and staffing concerns. | "One system that talks to all other systems." |
| Healthcare (device) | ROI must be quantified. CPT codes are make-or-break. Clinical evidence required. | Q5 (past attempts) reveals what existing equipment does and does not do. | "Cheaper, faster screening that insurance covers." |
| Cybersecurity | Tool sprawl is universal. AI skepticism is high. Knowledge loss when staff leave. | Q4 (current solutions) maps the 5-15 tool landscape. | "Natural language queries across all my tools." |
| Engineering (regulated) | Professional liability prevents AI autonomy. Generational divide on AI adoption. | Q5 (past attempts) reveals secret ChatGPT usage and liability concerns. | "Automate the tedious tasks (takeoffs) but keep me in control." |
| Auto repair / trades | Burned by previous tools. Offline capability required. Relationship with local suppliers matters. | Q2 (pain) produces the most vivid, specific stories. | "Truthful inventory. Just tell me the truth." |
| Consumer tech (privacy) | Full-access requests rejected. Granular permissions acceptable. Trust signals non-negotiable. | The privacy tradeoff question (direct naming) produces the boundary. | "Zero effort + granular control over my data." |
| Children / parenting | Guilt is the core emotion. Safety concerns dominate. Ad-free is non-negotiable. | The emotion-naming question ("When does it feel like guilt?") breaks through. | "Permission to feel okay about screen time." |
| Pet care | Pets are family. Financial limits are elastic. Age of pet drives decisions. | Q4 (difficult decision) produces the most emotionally rich responses. | "Financing that makes expensive treatments possible." |
| E-commerce / gifting | Trust is the #1 barrier. Emotional stakes are high. Amazon has set expectations. | Q4 (quality confidence) surfaces the core trust concern. | "Photo proof on delivery." |
| CPG (food/beverage) | Category stigma exists. Origin stories face skepticism. Price has a ceiling. | Stigma question ("honest reaction") surfaces hidden biases. | Varies by category. Often about authenticity and proof. |
| Political (voter) | State-level specificity required. Generic national data is useless. | Issue ownership and switching trigger questions produce actionable campaign data. | Varies by state and race. Often about local responsiveness. |
| Grocery retail (brand) | Trust and consistency matter more than price. Quality decline erodes loyalty. | Open-ended sentiment question reveals whether problems are price or trust. | "Consistency. Just be what you used to be." |
7 questions is the validated standard for problem validation and due diligence studies. Each position in the 7-question framework serves a specific purpose (see Section 8). For pricing studies, 3-4 focused questions are sufficient. For cultural research, 7 questions work well with smaller groups (6-8 personas).
Use single-phase (one study, 7 questions) when validating an existing product concept or performing due diligence. Use multi-phase (2-3 separate studies) when building a concept from scratch or when Phase 1 findings raise questions that require new participants to answer without bias. See Section 10 for the full comparison.
Using full state names instead of 2-letter codes. "Michigan" returns 0 agents. "MI" works correctly. This has caused more wasted API calls than any other mistake. See Section 15 for the full registry.
Strong insights are specific (include quotable language), surprising (counterintuitive), and actionable (the product team can do something with them). If your insights could apply to any product in any industry, they are too generic. Check for magic wand divergence (Section 2), categorise adoption barriers using the taxonomy (Section 12), and ensure you have direct quotes from participants. See Section 13.
10 participants is sufficient for pattern identification and consensus detection in qualitative research. Across 10 startup diligence studies with 10 participants each, consistent patterns emerged within 5-6 responses for each question. By participant 10, new themes stop appearing. For quantitative confidence (percentages, pricing), use 20-64 participants. See Section 9.
Yes. A single research group can be used for multiple studies. This is useful when you want to ask different sets of questions to the same audience. However, for multi-phase research, use NEW groups for each phase so participants are not primed by earlier questions. Store the research_group_uuid for reuse.
Recruit 15-20 personas (more than needed), review their profiles for relevance, score each as High/Medium/Low/None based on their role and summary, remove irrelevant (None-scored) participants from the study, then proceed with the remaining relevant participants. This ensures niche studies have qualified participants even when demographic filters are imprecise. See Section 7.
Use the two-step privacy boundary discovery pattern: (1) Name the tradeoff explicitly ("This service needs to scan your inbox. How do you feel?"), then (2) ask what WOULD be acceptable. This produces both the boundary and the acceptable alternatives. See Section 5.
For consumer subscriptions and mass-market products, price is the primary variable. Use large panels (30-64) with multiple price points. For B2B tools, healthcare devices, and professional services, price is rarely the primary objection. Trust, integration, and reliability matter more. See Section 6.
Free-tier access provides a shared panel of approximately 12 personas. The question design patterns, insight extraction framework, and common mistakes all apply equally. The main limitations are: no custom demographic filtering, no study completion analysis, and no shareable links. Get free-tier access at https://app.askditto.io/docs/free-tier-oauth.