Ditto Question Design Playbook for Claude Code

For Claude Code agents and AI coding assistants. This playbook documents how to design research questions for Ditto synthetic research studies that produce actionable, surprising insights. Every framework has been tested across 50+ production studies spanning CPG brands, B2C tech companies, political campaigns, and startup due diligence.

The core insight: The difference between a useful Ditto study and a wasted one is almost entirely in the questions. Good questions surface tension, reveal psychology, and produce findings that make stakeholders say "I didn't know that." Bad questions confirm what everyone already assumes.

Prerequisite: This page assumes you understand the basic Ditto API workflow. If not, start with the Complete Guide.

1. Universal Question Design Principles

These principles apply to every Ditto study, regardless of industry or use case. They have been validated across 50+ production studies.

The Three Requirements of a Good Ditto Question

| Requirement | What It Means | Why It Matters |
|---|---|---|
| Open-ended | Invites explanation, not yes/no | Ditto personas produce rich qualitative responses. Yes/no questions waste this capability entirely. |
| Single-topic | Asks about one thing at a time | Multi-part questions get partial answers. Each persona focuses on whichever part interests them most, producing inconsistent data. |
| Non-leading | Does not suggest the "right" answer | Leading questions produce confirmatory data that is useless for decision-making. Let personas surprise you. |

Conversational Tone

Ditto personas respond better to conversational questions than formal survey language. They are synthetic people, not survey instruments.

| Formal (Worse) | Conversational (Better) |
|---|---|
| "Please rate your satisfaction with the current product offering on a scale of 1-5." | "How do you feel about [product] right now? What works and what frustrates you?" |
| "To what extent do you agree with the following statement..." | "[Candidate] says '[quote]'. What's your honest reaction?" |
| "What factors influence your purchasing decision?" | "Walk me through the last time you bought [product]. What made you choose that specific one?" |

Prompt Patterns That Produce Rich Responses

These conversational starters consistently produce longer, more detailed, more emotionally honest responses:

- "Walk me through the last time you..."
- "Tell me about a time when..."
- "What's your honest reaction to..."
- "If you could wave a magic wand and fix ONE thing about..."
- "What would make you hesitant to..."

2. The 7-Question Core Framework

This framework has been used across all verticals and consistently produces actionable insights. It is the foundation that all vertical-specific frameworks build upon.

| # | Purpose | Question Template | What It Reveals |
|---|---|---|---|
| 1 | Establish relevance | "In your current or past work, how often do you [task]? Walk me through what that process typically looks like." | Filters irrelevant participants. Establishes baseline behaviour. Gives you language to use in later questions. |
| 2 | Pain identification | "What's the most frustrating part of [task]? Tell me about a time when it was particularly painful." | Emotional pain with specific stories. "Tell me about a time" produces anecdotes, not abstractions. |
| 3 | Quantify impact | "Roughly how much time per week do you spend on [task]? What's the cost of that time to your business?" | Hard numbers for ROI calculations. Converts emotional pain into business metrics. |
| 4 | Current solutions | "What tools or methods do you currently use to [solve problem]? What works well? What doesn't?" | Competitive landscape. What they already use reveals switching costs and feature expectations. |
| 5 | Past attempts | "Have you ever tried a new tool specifically to make [task] easier? What happened? Why did you stick with it or abandon it?" | Shows whether this is an ACTIVE problem (they have tried to solve it) or a passive complaint. Also reveals why previous solutions failed. |
| 6 | Magic wand | "If you could wave a magic wand and fix ONE thing about how you [task], what would it be?" | The single most valuable question. Removes practical constraints. Reveals what they actually want, which frequently differs from what the product builder assumes. See Section 11. |
| 7 | Adoption barriers | "What would make you hesitant to switch to a new [solution], even if it promised to save you time and money?" | Real sales objections and product requirements. Surfaces deal breakers that customers do not volunteer unprompted. |
Why 7 questions: Fewer than 5 misses patterns; more than 9 produces diminishing returns and persona fatigue. 7 is the validated sweet spot from 50+ studies. Each question takes ~45-60 seconds for all persona responses, so a 7-question study completes in ~7-8 minutes.
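
If you script study creation, the framework drops naturally into a reusable template. A minimal sketch in Python - `build_core_questions` and its placeholder convention are illustrative helpers, not part of the Ditto API; adapt the output to whatever request shape your Ditto client expects:

```python
# The 7-question core framework as fill-in templates. Hypothetical helper,
# not a Ditto API construct - adapt to your client's request format.
CORE_TEMPLATES = [
    "In your current or past work, how often do you {task}? Walk me through what that process typically looks like.",
    "What's the most frustrating part of {task}? Tell me about a time when it was particularly painful.",
    "Roughly how much time per week do you spend on {task}? What's the cost of that time to your business?",
    "What tools or methods do you currently use to {solve}? What works well? What doesn't?",
    "Have you ever tried a new tool specifically to make {task} easier? What happened? Why did you stick with it or abandon it?",
    "If you could wave a magic wand and fix ONE thing about how you {task}, what would it be?",
    "What would make you hesitant to switch to a new {solution}, even if it promised to save you time and money?",
]

def build_core_questions(task: str, solve: str, solution: str) -> list[str]:
    """Fill the framework templates for a specific study."""
    # str.format ignores unused keyword arguments, so every template can
    # draw on the same three placeholders.
    return [t.format(task=task, solve=solve, solution=solution) for t in CORE_TEMPLATES]

questions = build_core_questions(
    task="source replacement parts for vehicles",
    solve="find the right part quickly",
    solution="parts-sourcing tool",
)
```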

3. Question Design Rules and Anti-Patterns

Rules (Do These)

| Rule | Example | Why |
|---|---|---|
| Ask open-ended questions | "What frustrates you about [task]?" | Ditto's value is qualitative depth. Yes/no wastes it. |
| One topic per question | Separate "What do you like?" from "What do you dislike?" | Multi-topic questions get inconsistent coverage. |
| Use conversational language | "Walk me through..." not "Please describe..." | Produces more natural, detailed responses. |
| Ask about feelings AND behaviours | "How did that make you feel?" AND "What did you do next?" | Feelings reveal motivation. Behaviours reveal reality. |
| Sequence from broad to specific | Q1 establishes context, Q7 probes barriers | Context from early questions makes later responses richer. |
| Include a "magic wand" question | "If you could fix ONE thing..." | Consistently produces the study's most actionable insight. |
| End with adoption barriers | "What would make you hesitant to..." | Surfaces objections you would not discover otherwise. |

Anti-Patterns (Never Do These)

| Anti-Pattern | Bad Example | Why It Fails | Fix |
|---|---|---|---|
| Yes/no questions | "Do you like coffee?" | Produces a single word. No depth. | "What's your relationship with coffee? When and why do you reach for it?" |
| Leading questions | "Don't you think X is better than Y?" | Tells the persona what you want to hear. | "How would you compare X and Y? What does each get right and wrong?" |
| Double-barrelled | "What do you like about the product AND how much would you pay?" | Personas answer whichever part interests them most. | Split into two separate questions. |
| Jargon-heavy | "How do you evaluate the ROI of your MarTech stack?" | Not all personas understand industry jargon. | "How do you know if the marketing tools you pay for are actually worth it?" |
| Scale/rating | "Rate your satisfaction 1-5" | Produces a number, not insight. Use qualitative depth. | "How satisfied are you with [product]? What would move that needle?" |
| Hypothetical-only | "Would you use a tool that does X?" | Everyone says yes to hypotheticals. No signal. | "Have you ever tried a tool that does X? What happened?" |
| Describing the solution | "Our tool uses AI to automatically do X. Would you use it?" | Biases responses. They react to your description, not their own needs. | "If you could fix ONE thing about how you do [task], what would it be?" |
| Generic/obvious | "Is honesty important in a politician?" | Everyone says yes. Zero signal. | "What would [Candidate] need to say or DO to convince you they're honest?" |
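
Most of these anti-patterns are mechanical enough to lint before a study runs. A rough heuristic sketch - the phrase lists are illustrative starting points, not an authoritative checker, and flags are advisory rather than hard failures:

```python
import re

YES_NO_OPENERS = ("do you ", "don't you ", "is it ", "are you ", "would you use ")
LEADING_PHRASES = ("don't you think", "wouldn't you agree", "isn't it true")
SCALE_HINTS = ("rate your", "on a scale", "scale of 1", "1-5")

def lint_question(q: str) -> list[str]:
    """Flag likely anti-patterns from the table above. Heuristic only:
    e.g. 'Have you ever tried...?' followed by probes is a good question."""
    low, issues = q.lower(), []
    if low.startswith(YES_NO_OPENERS) and low.count("?") == 1:
        issues.append("likely yes/no - rewrite open-ended or add a probe")
    if any(p in low for p in LEADING_PHRASES):
        issues.append("leading - remove the suggested answer")
    if re.search(r"\band (what|how|why|would|do)\b", low):
        issues.append("possibly double-barrelled - split into two questions")
    if any(p in low for p in SCALE_HINTS):
        issues.append("rating-scale - ask for qualitative depth instead")
    return issues

def lint_study(questions: list[str]) -> list[str]:
    findings = []
    if not 4 <= len(questions) <= 9:
        findings.append(f"{len(questions)} questions - stay within 4-9 (7 is the sweet spot)")
    for i, q in enumerate(questions, 1):
        findings.extend(f"Q{i}: {msg}" for msg in lint_question(q))
    return findings
```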

4. CPG / Consumer Goods Questions

CPG research requires questions that surface the psychology behind purchase decisions. Consumers rarely buy CPG products for purely rational reasons. The goal is to find the tension between stated preferences and actual behaviour.

Six Proven CPG Question Frameworks

| Framework | Question Pattern | What It Surfaces | When to Use |
|---|---|---|---|
| Category Stigma | "What's your honest reaction when you see [category] on a menu/shelf? Be completely candid." | Hidden biases and barriers to purchase that brands don't realise exist. | Products in stigmatised or misunderstood categories (decaf coffee, non-alcoholic drinks, supplements, plant-based meat) |
| Origin Story Skepticism | "When a brand tells you their founder's personal tragedy/passion inspired the product, does that make you MORE or LESS likely to buy? Why?" | Trust dynamics around founder narratives. Whether authenticity marketing works or backfires. | Brands with strong founder stories, artisan products, "mission-driven" brands |
| Price Justification | "What would make you pay 2x more for [product category] than what you currently buy?" | Value perception thresholds. What "premium" means to consumers. Whether the brand's differentiator actually justifies the price. | Premium-priced products, craft/artisan goods, organic/natural products |
| Social Signaling | "If someone saw this product in your shopping cart, what would they think about you? What would buying this say about you?" | Identity and status motivations. Whether the product is aspirational, neutral, or embarrassing to be seen with. | Lifestyle brands, status-adjacent products, products people display |
| Gift-Giving Lens | "Would you buy this as a gift? For whom? What occasion? Why or why not?" | Perceived positioning and price-appropriateness. Whether the brand is seen as "thoughtful gift" vs "generic filler." | Premium beverages, specialty foods, craft products, personal care |
| Switching Triggers | "What would make you switch from your current [product] to something new? What would the new product need to promise?" | Competitive opportunities. What matters more than loyalty. Where current solutions actually fail. | Commoditised categories, brand-loyal categories, habit-driven purchases |

Additional CPG Question Patterns

| Pattern | Example Question |
|---|---|
| Subscription Resistance | "How do you feel about subscribing to [product category] vs buying as needed? What would a subscription need to offer?" |
| Ingredient Scrutiny | "Do you read the ingredient list before buying [category]? What would make you put a product back on the shelf?" |
| Occasion Mapping | "Walk me through the last time you bought [product]. Where were you? Who was it for? What time of day?" |
| Brand Loyalty Test | "Your usual [product] is out of stock. What do you do? What alternative would you grab?" |
| Packaging Reaction | "If you saw this product on a shelf next to [competitors], what would make you reach for it first?" |

CPG Study Template (7 Questions)

Context: Premium sparkling water brand, Canadian market

  1. Category perception: "What's your honest reaction when you see 'premium sparkling water' at the grocery store? Does the idea of paying more for water feel reasonable or absurd to you?"
  2. Purchase triggers: "Walk me through the last time you bought sparkling water. What brand did you pick and why? What made you choose that specific one?"
  3. Price sensitivity: "Your usual sparkling water costs $1.50. This brand costs $3.50. What would it need to offer to justify that price difference?"
  4. Origin story test: "This water comes from a specific natural spring in [location]. The founder discovered it on a hiking trip. Does that story make you more or less interested? Why?"
  5. Social signaling: "If you brought this to a dinner party as a host, what would your guests think? Would it start a conversation?"
  6. Switching trigger: "You have been buying [competitor] for years. What would a new brand need to promise to make you switch?"
  7. Dealbreaker: "What would make you NOT buy a premium sparkling water, even if you liked the taste? What is the one thing that would kill the deal?"

CPG Insight Quality Test

After running a CPG study, evaluate each insight against these criteria:

| Criterion | Good Insight | Weak Insight |
|---|---|---|
| Specific | "Consumers say 'the fizz is sharper' vs competitor" | "Consumers like the taste" |
| Quotable | Contains a direct quote you can use in the email | Paraphrased summary with no memorable language |
| Surprising | Contradicts an assumption the brand likely holds | Confirms what the brand already knows |
| Actionable | The brand could change something based on this | Interesting but no clear next step |

5. B2C Tech / Product Manager Questions

Product managers need data they can action. They live in a world of feature prioritisation, pricing tiers, landing page conversion, and user retention. Questions must produce findings that map to PM decisions.

Ten B2C Tech Question Frameworks

| Framework | Question Pattern | PM Decision It Informs |
|---|---|---|
| First Impression (10-Second Test) | "You have 10 seconds to look at this landing page: [URL]. What do you think this product does? Now read the full page. Were you right?" | Landing page clarity, above-the-fold messaging |
| Value Clarity | "In your own words, what problem does this product solve? Who is it for?" | Whether the value proposition actually lands with users |
| Price Sensitivity | "At what price does this go from 'worth trying' to 'not worth the hassle'? What's the maximum you'd pay?" | Pricing tier design, willingness-to-pay ceiling |
| Switching Cost | "What would make you switch from [competitor] to this? What would need to be true?" | Competitive positioning, differentiation requirements |
| Feature Trade-offs | "You can only keep 3 features from this product. Which ones? What can you live without?" | Feature prioritisation from actual user perspective |
| Friction Points | "What's the most confusing thing about this product? Where did you get stuck or give up?" | UX pain points, onboarding blockers |
| Social Proof / WOM | "Would you recommend this to a friend? What exactly would you tell them? What would you warn them about?" | Word-of-mouth potential, referral messaging |
| Abandonment Triggers | "What would make you stop using this product after the first month? What would be a dealbreaker?" | Churn risks, retention priorities |
| Pricing Model Preference | "Would you prefer to pay $X/month or $Y/year? What about a one-time purchase? Why?" | Revenue model design |
| Design Reaction | "Looking at this product, does the design feel premium, professional, cheap, or playful? Does it match what you'd expect?" | Brand perception, design-market fit |

B2C Tech Study Templates

Template A: Landing Page Focus

  1. "You have 10 seconds to look at this landing page: [URL]. What do you think this product does? Now read the full page. Were you right? What was clear and what was confusing?"
  2. "If you were looking for a [category] tool, would you sign up for this? What would convince you, and what might put you off?"
  3. "This product costs [price]. Is that what you expected? At what price would you definitely not sign up?"
  4. "How does this compare to [competitor]? Which would you pick, and why?"
  5. "What's the single biggest reason you would NOT try this product?"
  6. "If this product disappeared tomorrow, how would you solve [problem] instead? How annoyed would you be?"
  7. "If you could change one thing about this product concept, what would it be?"

Template B: Pricing Focus

  1. "How much do you currently spend on [category] tools per month? Is that a lot for you, or barely noticeable?"
  2. "This product offers a free tier and a $[X]/month paid tier. At what point would you upgrade? What would the paid version need to include?"
  3. "If the price were $[A] for basic and $[B] for premium, which would you choose? What would the premium need to have?"
  4. "At what price does this go from 'bargain' to 'fair' to 'too expensive'? Give me the three numbers."
  5. "Would you pay more if it integrated directly with [tool they already use]? How much more?"
  6. "What would make you cancel after the first month? After 6 months?"
  7. "If a competitor offered the same thing for free, would you switch? What would keep you paying?"

Template C: Competitive Positioning

  1. "What tools do you currently use for [task]? Walk me through your workflow."
  2. "What does [competitor] get right that you wish other tools copied?"
  3. "What frustrates you about [competitor]? If you could fix one thing, what would it be?"
  4. "I'm going to describe a new product: [description]. How does that compare to what you use now?"
  5. "What would this new product need to do BETTER than [competitor] to make you switch?"
  6. "How important is [specific feature] vs [other feature]? If you had to choose one, which?"
  7. "What would be a dealbreaker that would make you immediately rule out a new [category] tool?"

Demographic Targeting by Product Type

| Product Type | Target Demographics | Ditto Filters |
|---|---|---|
| Productivity SaaS | Knowledge workers 25-45 | `age_min: 25, age_max: 45, employment: "employed"` |
| Consumer app | General consumers 18-45 | `age_min: 18, age_max: 45` |
| Prosumer tools | Tech-savvy freelancers/creators 25-40 | `age_min: 25, age_max: 40` |
| Finance/budgeting | Adults managing finances 25-55 | `age_min: 25, age_max: 55` |
| Health/wellness | Health-conscious adults 25-50 | `age_min: 25, age_max: 50` |
| Creative tools | Designers, marketers 22-40 | `age_min: 22, age_max: 40` |
| Education | Students, parents, lifelong learners | `age_min: 18, age_max: 55` |
| Gaming | Gamers 16-35 | `age_min: 18, age_max: 35` |
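
If you build requests programmatically, the table collapses into a preset lookup. The filter keys (`age_min`, `age_max`, `employment`) are the ones shown above; the preset names, the `country` key, and the helper itself are illustrative assumptions - confirm the exact request shape against your Ditto client:

```python
# Presets mirroring the demographic targeting table above.
FILTER_PRESETS = {
    "productivity_saas": {"age_min": 25, "age_max": 45, "employment": "employed"},
    "consumer_app":      {"age_min": 18, "age_max": 45},
    "prosumer_tools":    {"age_min": 25, "age_max": 40},
    "finance_budgeting": {"age_min": 25, "age_max": 55},
    "health_wellness":   {"age_min": 25, "age_max": 50},
    "creative_tools":    {"age_min": 22, "age_max": 40},
    "education":         {"age_min": 18, "age_max": 55},
    "gaming":            {"age_min": 18, "age_max": 35},
}

def filters_for(product_type: str, country: str | None = None) -> dict:
    """Copy a preset and optionally add a country filter."""
    f = dict(FILTER_PRESETS[product_type])  # copy so presets stay pristine
    if country:
        f["country"] = country  # assumed parameter name - check your client
    return f
```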

6. Political / Voter Research Questions

Political voter research requires state-specific questions that surface voter sentiment, messaging effectiveness, and issue priorities. The value to campaigns is that these insights come from voters in THEIR state/district.

Critical: Research groups MUST be filtered by state using 2-letter codes ("MI" not "Michigan"). Full state names return 0 agents. Never fall back to generic groups. See Recruitment Strategies.
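
Because this failure mode is silent (the group simply comes back empty), it is worth normalising state input before any group-creation call. A small sketch - the mapping is abbreviated and `normalise_state` is an illustrative guard, not part of the Ditto client:

```python
# Full state names return 0 agents, so coerce everything to 2-letter codes.
US_STATE_CODES = {
    "michigan": "MI", "ohio": "OH", "pennsylvania": "PA", "wisconsin": "WI",
    # ...extend with the remaining states as needed
}

def normalise_state(value: str) -> str:
    v = value.strip()
    if len(v) == 2 and v.isalpha():
        return v.upper()  # already a 2-letter code
    code = US_STATE_CODES.get(v.lower())
    if code is None:
        raise ValueError(f"Unknown state {value!r} - pass a 2-letter code like 'MI'")
    return code
```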

Six Political Question Frameworks

| Framework | Question Pattern | What It Surfaces |
|---|---|---|
| Name Recognition / Perception | "What's your gut reaction when you hear [Candidate Name]'s name? What comes to mind first?" | Awareness levels, first impressions, whether they are known at all |
| Issue Ownership | "On [issue], who do you trust more: [Candidate A] or [Candidate B]? Why?" | Competitive positioning on specific issues, where each candidate is strong/weak |
| Messaging Test | "[Candidate] says: '[exact campaign message]'. What's your reaction? Does this make you more or less likely to support them?" | Message effectiveness, whether specific talking points land or fall flat |
| Switching Trigger | "What would [Candidate] need to say or do to change your mind about them? What would win or lose your vote?" | Persuadable voter dynamics, what moves the needle |
| Enthusiasm Gap | "How excited are you to vote for [Candidate]? What would make you more enthusiastic? What might make you stay home?" | Turnout likelihood, motivating factors, apathy indicators |
| Concern Surfacing | "What's your biggest concern about [Candidate]? What makes you nervous?" | Attack vulnerabilities, perception risks, issues the campaign needs to address |

Political Study Template (7 Questions)

Context: Democratic candidate for Governor, Ohio

  1. State issues: "What are the most important issues facing Ohio right now? What keeps you up at night as a voter?"
  2. Candidate perception: "What's your honest first reaction when you hear [Candidate Name]? What do you know about them?"
  3. Issue ownership: "On [top state issue], who do you trust more: [Candidate] or [Opponent]? What makes you say that?"
  4. Messaging test: "[Candidate] says: '[key campaign message]'. How does that land with you? Does it feel genuine or like political talk?"
  5. Switching test: "What would [Candidate] need to say or do to win your vote? What specific action or promise would move the needle?"
  6. Concern: "What's the ONE thing about [Candidate] that concerns you most? What would you want to ask them face-to-face?"
  7. Enthusiasm: "On a scale of 'definitely voting' to 'might not bother,' where are you for this race? What would change that?"

Political Question Openers (5 Patterns)

Rotate across these opening patterns to avoid repetitive study designs:

| Pattern | Opening Question Style |
|---|---|
| A: The Voter Voice | Lead with state issues, let voters set the agenda |
| B: The Competitive Edge | Lead with head-to-head comparison between candidates |
| C: The Quote Lead | Start with a direct candidate quote and gauge reaction |
| D: The Issue Test | Start with a specific policy and test whether voters care |
| E: The Urgency Opener | Start with "What's at stake?" to surface emotional drivers |

7. Startup Due Diligence Questions (Non-Leading)

Startup due diligence questions must be non-leading. The VC context is never revealed. You are validating whether a problem exists and is severe enough to build a business around, without describing or suggesting the startup's specific solution.

Never do this: "Our tool uses AI to automatically source auto parts. Would you use it?" This describes the solution, biases responses, and produces useless confirmatory data.

The 7-Question Non-Leading Framework

| # | Purpose | Template | What VCs Learn |
|---|---|---|---|
| 1 | Establish relevance | "In your current or past work, how often do you need to [task]? Walk me through what that process typically looks like." | Does the target persona actually do this task? How central is it? |
| 2 | Pain identification | "What's the most frustrating part of [task]? Tell me about a time when it was particularly painful." | Is the pain real? Is it emotional or just inconvenient? |
| 3 | Quantify impact | "Roughly how much time per week do you or your team spend on [task]? What's the cost of that time to your business?" | Hard ROI numbers. Can the startup justify its price? |
| 4 | Current solutions | "What tools, websites, or methods do you currently use to [solve problem]? What works well? What doesn't?" | Competitive landscape. What already exists. |
| 5 | Past attempts | "Have you ever tried a new tool or system specifically to make [task] easier? What happened? Why did you stick with it or abandon it?" | Is this an ACTIVE problem? If they have never tried to solve it, it might not hurt enough. |
| 6 | Magic wand | "If you could wave a magic wand and fix ONE thing about how you [task], what would it be?" | What they actually want vs what the startup is building. These often diverge. |
| 7 | Adoption barriers | "What would make you hesitant to switch to a new [solution type], even if it promised to save you time and money?" | Real sales objections. Integration requirements. Deal breakers. |

Industry-Specific Due Diligence Question Adaptations

The 7-question framework adapts to specific industries. Here are tested adaptations from production studies:

| Industry | Q1 Adaptation (Relevance) | Q6 Adaptation (Magic Wand) |
|---|---|---|
| Auto repair (MotorMinds) | "How often do you need to find, source, or order parts for vehicles?" | "If you could fix ONE thing about how you source parts, what would it be?" |
| Veterinary (VetVivo) | "Do you currently have a pet? What kind? How would you describe your relationship?" | "If money were no object, what would ideal veterinary care look like for your pet?" |
| Children's ed (Feel Good Games) | "Do you have children? What ages? How much screen time do they have each day?" | "If you could design the perfect app or game for your child, what would it do?" |
| Elder care (PatientCompanion) | "What is your role in elder care? How do you typically interact with elderly patients?" | "If you could design the perfect system for patients to communicate needs, what would it look like?" |
| Healthcare admin (TimeSmart) | "What is your role in healthcare? How much time is spent on administrative tasks?" | "If you could design the perfect system for tracking time and ensuring correct payment, what would it look like?" |
| International commerce (Flomaru) | "Do you have loved ones in another country? Have you tried to send gifts internationally?" | "If you could design the perfect international flower delivery service, what would it look like?" |
| Civil engineering (Sidian) | "What is your role in civil engineering? Walk me through a typical project." | "If you could have an AI assistant specifically for civil engineering work, what would you want it to do?" |
| Medical devices (Mandel) | "What is your role in eye care? What services do you currently offer?" | "If you could add any new capability or diagnostic tool to your practice, what would it be?" |
| Cybersecurity (NexRisx) | "What is your role in cybersecurity? Walk me through your day-to-day responsibilities." | "If you could have one tool to make your security work dramatically easier, what would it do?" |
| Travel/compensation (Airfairness) | "How often do you fly? For what purposes? How many flights in the past year?" | "What would make you actually sign up and use an automatic flight compensation service?" |

8. Cultural and Attitudinal Research Questions

Cultural research explores attitudes, norms, and behaviours within specific populations. This requires particular sensitivity to how questions are framed.

Cultural Research Frameworks

| Framework | Question Pattern | Use Case |
|---|---|---|
| Tradition vs Modernity | "How do you feel about [modern alternative] compared to the traditional way? Is something lost or gained?" | Testing new products against cultural norms |
| Cultural Identity | "When someone from outside your culture talks about [topic], what do they usually get wrong?" | Understanding cultural perceptions and sensitivities |
| Generational Divide | "How do your parents/grandparents feel about [topic] compared to you? Where do you agree and disagree?" | Understanding generational shifts in attitudes |
| Daily Ritual | "Walk me through your morning/evening routine involving [product/activity]. What role does it play in your day?" | Understanding habitual behaviour and rituals |
| National Pride | "How do you feel about [domestic product] vs [imported product]? Does where it comes from matter to you?" | Country-of-origin effects on purchase decisions |

Real Example: Das Heilige Brot (German Bread Study)

6 German personas aged 30-65 were asked about bread culture. Key finding: hotel toast was called "the carb mattress" while Brötchen was described in near-spiritual terms. This revealed that bread is not a commodity in Germany - it is a cultural identity marker. Questions included: "What does good bread mean to you?", "What is the worst bread experience you have ever had?", and "If a foreigner asked you about German bread, what would you tell them?"


9. Pricing Validation Questions

Pricing questions require careful construction. Direct "would you pay X?" questions produce unreliable data. Instead, use frameworks that surface price sensitivity indirectly.

Pricing Question Frameworks

| Framework | Question Pattern | What It Reveals |
|---|---|---|
| Current Spending Anchor | "How much do you currently spend on [category] per month? Break it down." | Reference price. What "normal" spending looks like. |
| Three-Point Pricing | "At what price would [product] feel like a bargain? Fair? Too expensive to consider?" | Price sensitivity range. Where the ceiling is. |
| Specific Price Test | "Would you pay $X/month for [specific benefit]? Why or why not?" | Whether a specific price point works. Reasons for rejection. |
| Tier Preference | "If the price were $A for basic and $B for premium, which would you choose? What would premium need to include?" | Tier structure validation. What drives upgrades. |
| Cancellation Trigger | "What would make you cancel after the first month?" | Retention risks. What "not worth it" looks like. |
| Value Justification | "What would make you pay 2x more than what you currently spend on [category]?" | Premium positioning requirements. Value perception. |
| Free vs Paid | "If a free version existed with [limitations], would you use that instead? What would make you upgrade?" | Freemium conversion triggers. Free tier boundaries. |

Real Example: ESPN DTC Pricing

64 US personas were tested across four price points. At $9.99, 65.7% would subscribe; at $29.99, only 6.3% would. The sharp elasticity cliff meant that a $29.99 price point would dramatically underperform market expectations. This directly informed a hedge fund's trading thesis on Disney stock.
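
The headline finding here is simple arithmetic once intent is tallied per price point. A sketch re-computing the cliff from the rates above - the upstream step that classifies each persona response as subscribe/not-subscribe is assumed to have already happened:

```python
# Reported intent rates from the ESPN study: price point -> % who would subscribe.
points = {9.99: 65.7, 29.99: 6.3}

(p_low, r_low), (p_high, r_high) = sorted(points.items())
drop = r_low - r_high  # 59.4 percentage points
print(f"Intent falls {drop:.1f} points between ${p_low} and ${p_high}")
if drop / r_low > 0.5:  # illustrative threshold, not from the study
    print("Elasticity cliff: the higher price loses most of the addressable intent")
```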

Real Example: CareQuarter Pricing

Three tiers tested: $175/month (routine care coordination), $325/month (complex needs), $125/event (crisis response). Every persona confirmed these were within acceptable range. The study also revealed that monthly subscription was strongly preferred over per-event pricing, even though per-event was cheaper for occasional users.

10. Competitive Intelligence Questions

Competitive intelligence questions explore why customers choose competitors, what they value, and what would trigger switching.

Competitive Intelligence Frameworks

| Framework | Question Pattern |
|---|---|
| Choice Drivers | "Why did you choose [competitor] over other options? What sealed the deal?" |
| Unique Strengths | "What does [competitor] get right that nobody else does?" |
| Hidden Frustrations | "What frustrates you most about [competitor]? If you could fix one thing, what would it be?" |
| Switching Threshold | "What would make you switch from [competitor] to something new? What would the new option need to promise?" |
| New Entrant Requirements | "If a new company entered this space, what would they need to do to get your attention?" |
| Loyalty Test | "[Competitor] raises their price by 30%. Do you stay or leave? What's your threshold?" |
| Feature Gap | "What do you wish [competitor] did that they don't do? What's missing?" |

11. The Magic Wand Question: Why It Works

Across 50+ production studies, the "magic wand" question consistently produces the study's single most actionable insight. Here is why, and the evidence.

What the Magic Wand Question Does

By asking "If you could wave a magic wand and fix ONE thing about [task], what would it be?", you achieve three things:

  1. Remove practical constraints. Without budget, politics, or technology limitations, personas reveal what they actually want vs what they think is possible.
  2. Force prioritisation. "ONE thing" prevents laundry lists. It reveals the most important need.
  3. Surface the gap. What personas say they want is frequently different from what the product builder assumes. This gap is the most valuable insight.

Magic Wand Results Across 10 Startup Studies

| Startup | Builder Assumed... | Magic Wand Revealed... | Gap |
|---|---|---|---|
| MotorMinds | Faster parts search and AI recommendations | "Truthful inventory with guaranteed ETAs" | Trust and accuracy matter more than speed or AI |
| PatientCompanion | Better call button systems for patients | "Context before arrival - know WHAT they need before walking in" | Staff need information, not just alerts |
| TimeSmart | Automated timesheet entry | "Real-time visibility into contract compliance and pay accuracy" | The problem is not data entry - it is not knowing where you stand |
| Flomaru | Wider delivery coverage and faster shipping | "Photo proof on delivery" and "see what it actually looks like" | Trust verification matters more than logistics |
| NexRisx | Better vulnerability prioritisation algorithms | "One place to see everything without switching tabs" | Tool consolidation matters more than better algorithms |
| Sidian | AI that automates engineering design | "AI suggestions are fine; AI decisions are terrifying" | Augmentation accepted; autonomy rejected |
| Feel Good Games | More educational game content | "Permission to feel okay about screen time" | Parental guilt is the product's real competitor, not other apps |
| VetVivo | Cheaper implant options | "Honest conversation about quality of life, not just procedure success rates" | Emotional guidance matters more than price |
| Mandel Diagnostics | More advanced screening technology | "Clear ROI and reimbursement codes (CPT) before I invest" | Business case matters more than clinical capability |
| Airfairness | Automatic claim filing | "Folder-only access, not full inbox. Show me exactly what you can see." | Privacy controls matter more than convenience |
Pattern: In 7 out of 10 studies, the magic wand consensus did NOT match the startup's primary feature pitch. This is the question's value: it exposes misalignment between what is being built and what is actually wanted.

12. Adapting Questions by Country and Culture

| Country | Adaptation Notes | Example Difference |
|---|---|---|
| USA | Direct, opinion-friendly. Americans are comfortable expressing strong preferences. | "What's your honest first reaction?" works well. |
| UK | More reserved. Understatement common. "Quite good" means "excellent." | "What are your thoughts on..." (softer framing) may produce more candour than "What's your HONEST reaction?" |
| Germany | Direct and detailed. Quality and process matter deeply. Will give thorough, structured answers. | Questions about craftsmanship and quality standards produce rich responses. "Walk me through the process" works exceptionally well. |
| Canada | Similar to US but more polite and consensus-oriented. May hedge more in responses. | "What concerns would you have?" produces more honest answers than "What's wrong with this?" |

Currency and Pricing Adaptation

When asking pricing questions, always use the local currency and locally realistic price points: USD for the US, GBP for the UK, EUR for Germany, CAD for Canada.

13. Real Examples with Outcomes

These are real questions from production studies, paired with the outcomes they produced.

MotorMinds: "Ghost Inventory" Discovery

Question: "What's the most frustrating part of getting the parts you need? Tell me about a time when sourcing a part was particularly painful."

Outcome: Every single participant described "ghost inventory" - systems showing parts as in-stock when they are not. One participant: "If it says 10:30, it rolls in at 10:30. No mid-year misses." This was the #1 finding that shaped the entire investment evaluation.


Airfairness: Privacy Dealbreaker Discovery

Question: "Let's talk about the privacy tradeoff. This service needs to scan your email inbox to find flight confirmations. How do you feel about that?"

Outcome: "Full inbox access" was a near-universal dealbreaker. But the follow-up magic wand question revealed that "folder-only access" or "forward specific emails" was acceptable. This distinction - full access vs limited access - was the difference between a product that works and one that does not.


Feel Good Games: Parental Guilt Discovery

Question: "When does your child's screen time make you feel guilty as a parent, and when does it feel like good parenting?"

Outcome: Guilt was the dominant emotion, not concern about content quality. Parents want "permission to feel okay" about screen time. The "educational" label is table stakes but not trusted. This reframed the entire value proposition from "better games" to "guilt-free screen time."


NexRisx: Tool Sprawl Discovery

Question: "What security tools and platforms do you currently use in your work?"

Outcome: Every security team juggles 5-15+ tools that do not talk to each other. Context-switching is constant. The insight that "one place to see everything" mattered more than better algorithms reframed NexRisx's positioning from "AI-powered vulnerability prioritisation" to "unified security view."


14. How Many Questions to Ask

| Study Type | Recommended Questions | Rationale |
|---|---|---|
| Standard research | 7 | Covers all aspects of the 7-question framework |
| Quick validation | 4-5 | When you need a fast answer to a specific question |
| Deep dive / due diligence | 7 | Full framework. Do not go beyond 7 - diminishing returns. |
| Pricing only | 4-6 | Focused on willingness to pay |
| Messaging test | 4-5 | Test 2-3 messages, then probe reactions |
| Competitive intel | 5-7 | Covers choice drivers, frustrations, switching triggers |
Do not exceed 9 questions. Beyond 9, persona responses become shorter and less detailed, and response quality degrades. If you need more coverage, run a second study with a different group rather than extending the first.

15. Frequently Asked Questions

Should I always use 7 questions?

Seven is the default and validated sweet spot. Use fewer (4-5) for focused studies (pricing only, messaging test). Never use more than 9. If you need broader coverage, run a second study.

Can I ask follow-up questions based on responses?

Yes. Questions are asked sequentially within a study, and each persona maintains context from previous questions. You can reference earlier answers: "You mentioned [X] in your previous response. Tell me more about that." This is a powerful technique but use it sparingly - planned questions produce more consistent data than reactive ones.
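
Mechanically, a follow-up is just a later question in the same study whose wording points back at the previous answer. A sketch, assuming a hypothetical `client.ask(group_id, question)` method that returns per-persona responses - the real call name and shape depend on your Ditto client:

```python
def ask_sequence(client, group_id: str, questions: list[str]) -> list:
    """Ask questions in order. Personas keep context within a study, so a
    later question can reference 'your previous response' generically."""
    return [client.ask(group_id, q) for q in questions]  # assumed method

questions = [
    "What's the most frustrating part of [task]? Tell me about a specific time.",
    # Planned follow-up that leans on persona context rather than re-quoting:
    "You mentioned a frustration in your previous response. What exactly went wrong, and what did you do next?",
]
```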

How do I know if my questions are good enough?

Apply these tests: (1) Could the answer be "yes" or "no"? If so, rewrite. (2) Would the answer tell you something you do not already assume? If not, it is too obvious. (3) Could the answer change a business decision? If not, it is not actionable. (4) Does it ask about one thing or multiple things? If multiple, split it.

What if a question does not produce useful responses?

This happens occasionally. The most common causes: (1) the question is too abstract - add a concrete scenario, (2) the question assumes knowledge the persona does not have - simplify, (3) the question is leading - the persona tells you what you wanted to hear. Do not spend time trying to "fix" a poor question mid-study. Note what went wrong and design better questions next time.

Should I show the product/website to personas?

For B2C tech studies, referencing a specific URL in questions works well ("Look at this landing page for 10 seconds..."). For due diligence studies, do NOT describe or link to the startup's product - let participants describe their own needs without priming. For CPG, you can reference the product category without naming the specific brand in early questions, then introduce it later.

How do I design questions for niche industries?

Start with Q1 as a relevance filter: "In your current or past work, how often do you [industry-specific task]?" This immediately tells you if the persona has relevant experience. Even with broad demographic filters (age + country instead of exact industry), a well-designed Q1 separates relevant from irrelevant participants in the response data.