For Claude Code agents and AI coding assistants. This playbook documents how to design research questions for Ditto synthetic research studies that produce actionable, surprising insights. Every framework has been tested across 50+ production studies spanning CPG brands, B2C tech companies, political campaigns, and startup due diligence.
Prerequisite: This page assumes you understand the basic Ditto API workflow. If not, start with the Complete Guide.
These principles apply to every Ditto study, regardless of industry or use case. They have been validated across 50+ production studies.
| Requirement | What It Means | Why It Matters |
|---|---|---|
| Open-ended | Invites explanation, not yes/no | Ditto personas produce rich qualitative responses. Yes/no questions waste this capability entirely. |
| Single-topic | Asks about one thing at a time | Multi-part questions get partial answers. Each persona focuses on whichever part interests them most, producing inconsistent data. |
| Non-leading | Does not suggest the "right" answer | Leading questions produce confirmatory data that is useless for decision-making. Let personas surprise you. |
Ditto personas respond better to conversational questions than formal survey language. They are synthetic people, not survey instruments.
| Formal (Worse) | Conversational (Better) |
|---|---|
| "Please rate your satisfaction with the current product offering on a scale of 1-5." | "How do you feel about [product] right now? What works and what frustrates you?" |
| "To what extent do you agree with the following statement..." | "[Candidate] says '[quote]'. What's your honest reaction?" |
| "What factors influence your purchasing decision?" | "Walk me through the last time you bought [product]. What made you choose that specific one?" |
These conversational starters consistently produce longer, more detailed, more emotionally honest responses: "Walk me through the last time you...", "Tell me about a time when...", "What's your honest reaction to...", and "How do you feel about...".
This framework has been used across all verticals and consistently produces actionable insights. It is the foundation that all vertical-specific frameworks build upon.
| # | Purpose | Question Template | What It Reveals |
|---|---|---|---|
| 1 | Establish relevance | "In your current or past work, how often do you [task]? Walk me through what that process typically looks like." | Filters irrelevant participants. Establishes baseline behaviour. Gives you language to use in later questions. |
| 2 | Pain identification | "What's the most frustrating part of [task]? Tell me about a time when it was particularly painful." | Emotional pain with specific stories. "Tell me about a time" produces anecdotes, not abstractions. |
| 3 | Quantify impact | "Roughly how much time per week do you spend on [task]? What's the cost of that time to your business?" | Hard numbers for ROI calculations. Converts emotional pain into business metrics. |
| 4 | Current solutions | "What tools or methods do you currently use to [solve problem]? What works well? What doesn't?" | Competitive landscape. What they already use reveals switching costs and feature expectations. |
| 5 | Past attempts | "Have you ever tried a new tool specifically to make [task] easier? What happened? Why did you stick with it or abandon it?" | Shows if this is an ACTIVE problem (they have tried to solve it) or a passive complaint. Also reveals why previous solutions failed. |
| 6 | Magic wand | "If you could wave a magic wand and fix ONE thing about how you [task], what would it be?" | The single most valuable question. Removes practical constraints. Reveals what they actually want, which frequently differs from what the product builder assumes. See Section 11. |
| 7 | Adoption barriers | "What would make you hesitant to switch to a new [solution], even if it promised to save you time and money?" | Real sales objections and product requirements. Surfaces deal breakers that customers do not volunteer unprompted. |
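For agents that assemble studies programmatically, the framework is easy to encode as a reusable template. The sketch below is illustrative only: the field names, the `{task}`-style placeholders, and the `format_questions` helper are assumptions for this playbook, not part of the Ditto API.

```python
# A minimal sketch: the 7-question framework as a reusable template.
# Field names and the format_questions helper are illustrative assumptions,
# not part of the Ditto API.
SEVEN_QUESTION_FRAMEWORK = [
    {"purpose": "relevance",
     "template": "In your current or past work, how often do you {task}? "
                 "Walk me through what that process typically looks like."},
    {"purpose": "pain",
     "template": "What's the most frustrating part of {task}? "
                 "Tell me about a time when it was particularly painful."},
    {"purpose": "impact",
     "template": "Roughly how much time per week do you spend on {task}? "
                 "What's the cost of that time to your business?"},
    {"purpose": "current_solutions",
     "template": "What tools or methods do you currently use to {goal}? "
                 "What works well? What doesn't?"},
    {"purpose": "past_attempts",
     "template": "Have you ever tried a new tool specifically to make {task} easier? "
                 "What happened? Why did you stick with it or abandon it?"},
    {"purpose": "magic_wand",
     "template": "If you could wave a magic wand and fix ONE thing about how you {task}, "
                 "what would it be?"},
    {"purpose": "adoption_barriers",
     "template": "What would make you hesitant to switch to a new {solution}, "
                 "even if it promised to save you time and money?"},
]

def format_questions(task: str, goal: str, solution: str) -> list[str]:
    """Fill every template for a specific study topic. Unused keys are ignored."""
    return [q["template"].format(task=task, goal=goal, solution=solution)
            for q in SEVEN_QUESTION_FRAMEWORK]

questions = format_questions(task="source parts for vehicles",
                             goal="find the right part",
                             solution="parts-sourcing tool")
```

Keeping the framework as data rather than prose makes it trivial to reuse across studies while preserving the question sequence, which matters (see the broad-to-specific rule below).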
| Rule | Example | Why |
|---|---|---|
| Ask open-ended questions | "What frustrates you about [task]?" | Ditto's value is qualitative depth. Yes/no wastes it. |
| One topic per question | Separate "What do you like?" from "What do you dislike?" | Multi-topic questions get inconsistent coverage. |
| Use conversational language | "Walk me through..." not "Please describe..." | Produces more natural, detailed responses. |
| Ask about feelings AND behaviours | "How did that make you feel?" AND "What did you do next?" | Feelings reveal motivation. Behaviours reveal reality. |
| Sequence from broad to specific | Q1 establishes context, Q7 probes barriers | Context from early questions makes later responses richer. |
| Include a "magic wand" question | "If you could fix ONE thing..." | Consistently produces the study's most actionable insight. |
| End with adoption barriers | "What would make you hesitant to..." | Surfaces objections you would not discover otherwise. |
| Anti-Pattern | Bad Example | Why It Fails | Fix |
|---|---|---|---|
| Yes/no questions | "Do you like coffee?" | Produces a single word. No depth. | "What's your relationship with coffee? When and why do you reach for it?" |
| Leading questions | "Don't you think X is better than Y?" | Tells the persona what you want to hear. | "How would you compare X and Y? What does each get right and wrong?" |
| Double-barrelled | "What do you like about the product AND how much would you pay?" | Personas answer whichever part interests them most. | Split into two separate questions. |
| Jargon-heavy | "How do you evaluate the ROI of your MarTech stack?" | Not all personas understand industry jargon. | "How do you know if the marketing tools you pay for are actually worth it?" |
| Scale/rating | "Rate your satisfaction 1-5" | Produces a number, not insight. Use qualitative depth. | "How satisfied are you with [product]? What would move that needle?" |
| Hypothetical-only | "Would you use a tool that does X?" | Everyone says yes to hypotheticals. No signal. | "Have you ever tried a tool that does X? What happened?" |
| Describing the solution | "Our tool uses AI to automatically do X. Would you use it?" | Biases responses. They react to your description, not their own needs. | "If you could fix ONE thing about how you do [task], what would it be?" |
| Generic/obvious | "Is honesty important in a politician?" | Everyone says yes. Zero signal. | "What would [Candidate] need to say or DO to convince you they're honest?" |
CPG research requires questions that surface the psychology behind purchase decisions. Consumers rarely buy CPG products for purely rational reasons. The goal is to find the tension between stated preferences and actual behaviour.
| Framework | Question Pattern | What It Surfaces | When to Use |
|---|---|---|---|
| Category Stigma | "What's your honest reaction when you see [category] on a menu/shelf? Be completely candid." | Hidden biases and barriers to purchase that brands don't realise exist. | Products in stigmatised or misunderstood categories (decaf coffee, non-alcoholic drinks, supplements, plant-based meat) |
| Origin Story Skepticism | "When a brand tells you their founder's personal tragedy/passion inspired the product, does that make you MORE or LESS likely to buy? Why?" | Trust dynamics around founder narratives. Whether authenticity marketing works or backfires. | Brands with strong founder stories, artisan products, "mission-driven" brands |
| Price Justification | "What would make you pay 2x more for [product category] than what you currently buy?" | Value perception thresholds. What "premium" means to consumers. Whether the brand's differentiator actually justifies the price. | Premium-priced products, craft/artisan goods, organic/natural products |
| Social Signaling | "If someone saw this product in your shopping cart, what would they think about you? What would buying this say about you?" | Identity and status motivations. Whether the product is aspirational, neutral, or embarrassing to be seen with. | Lifestyle brands, status-adjacent products, products people display |
| Gift-Giving Lens | "Would you buy this as a gift? For whom? What occasion? Why or why not?" | Perceived positioning and price-appropriateness. Whether the brand is seen as "thoughtful gift" vs "generic filler." | Premium beverages, specialty foods, craft products, personal care |
| Switching Triggers | "What would make you switch from your current [product] to something new? What would the new product need to promise?" | Competitive opportunities. What matters more than loyalty. Where current solutions actually fail. | Commoditised categories, brand-loyal categories, habit-driven purchases |
| Pattern | Example Question |
|---|---|
| Subscription Resistance | "How do you feel about subscribing to [product category] vs buying as needed? What would a subscription need to offer?" |
| Ingredient Scrutiny | "Do you read the ingredient list before buying [category]? What would make you put a product back on the shelf?" |
| Occasion Mapping | "Walk me through the last time you bought [product]. Where were you? Who was it for? What time of day?" |
| Brand Loyalty Test | "Your usual [product] is out of stock. What do you do? What alternative would you grab?" |
| Packaging Reaction | "If you saw this product on a shelf next to [competitors], what would make you reach for it first?" |
Context: Premium sparkling water brand, Canadian market
After running a CPG study, evaluate each insight against these criteria:
| Criterion | Good Insight | Weak Insight |
|---|---|---|
| Specific | "Consumers say 'the fizz is sharper' vs competitor" | "Consumers like the taste" |
| Quotable | Contains a direct quote you can use in the email | Paraphrased summary with no memorable language |
| Surprising | Contradicts an assumption the brand likely holds | Confirms what the brand already knows |
| Actionable | The brand could change something based on this | Interesting but no clear next step |
Product managers need data they can action. They live in a world of feature prioritisation, pricing tiers, landing page conversion, and user retention. Questions must produce findings that map to PM decisions.
| Framework | Question Pattern | PM Decision It Informs |
|---|---|---|
| First Impression (10-Second Test) | "You have 10 seconds to look at this landing page: [URL]. What do you think this product does? Now read the full page. Were you right?" | Landing page clarity, above-the-fold messaging |
| Value Clarity | "In your own words, what problem does this product solve? Who is it for?" | Whether the value proposition actually lands with users |
| Price Sensitivity | "At what price does this go from 'worth trying' to 'not worth the hassle'? What's the maximum you'd pay?" | Pricing tier design, willingness-to-pay ceiling |
| Switching Cost | "What would make you switch from [competitor] to this? What would need to be true?" | Competitive positioning, differentiation requirements |
| Feature Trade-offs | "You can only keep 3 features from this product. Which ones? What can you live without?" | Feature prioritisation from actual user perspective |
| Friction Points | "What's the most confusing thing about this product? Where did you get stuck or give up?" | UX pain points, onboarding blockers |
| Social Proof / WOM | "Would you recommend this to a friend? What exactly would you tell them? What would you warn them about?" | Word-of-mouth potential, referral messaging |
| Abandonment Triggers | "What would make you stop using this product after the first month? What would be a dealbreaker?" | Churn risks, retention priorities |
| Pricing Model Preference | "Would you prefer to pay $X/month or $Y/year? What about a one-time purchase? Why?" | Revenue model design |
| Design Reaction | "Looking at this product, does the design feel premium, professional, cheap, or playful? Does it match what you'd expect?" | Brand perception, design-market fit |
| Product Type | Target Demographics | Ditto Filters |
|---|---|---|
| Productivity SaaS | Knowledge workers 25-45 | age_min: 25, age_max: 45, employment: "employed" |
| Consumer app | General consumers 18-45 | age_min: 18, age_max: 45 |
| Prosumer tools | Tech-savvy freelancers/creators 25-40 | age_min: 25, age_max: 40 |
| Finance/budgeting | Adults managing finances 25-55 | age_min: 25, age_max: 55 |
| Health/wellness | Health-conscious adults 25-50 | age_min: 25, age_max: 50 |
| Creative tools | Designers, marketers 22-40 | age_min: 22, age_max: 40 |
| Education | Students, parents, lifelong learners | age_min: 18, age_max: 55 |
| Gaming | Gamers 18-35 | age_min: 18, age_max: 35 |
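A study request might carry these filters in a payload like the following. This is a hedged sketch: only the filter keys (`age_min`, `age_max`, `employment`) come from the table above; the payload shape and everything else are assumptions, not the documented Ditto API.

```python
# Hypothetical study payload for a productivity-SaaS validation study.
# Only the filter keys (age_min, age_max, employment) come from the table above;
# the payload shape itself is an assumption, not the documented Ditto API.
study_payload = {
    "name": "Productivity SaaS validation",
    "filters": {
        "age_min": 25,
        "age_max": 45,
        "employment": "employed",  # knowledge workers
    },
    "questions": [
        "In your current or past work, how often do you plan your week? "
        "Walk me through what that process typically looks like.",
        # ...remaining questions from the 7-question framework
    ],
}
```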
Political voter research requires state-specific questions that surface voter sentiment, messaging effectiveness, and issue priorities. The value to campaigns is that these insights come from voters in THEIR state/district.
"MI" not "Michigan"). Full state names return 0 agents. Never fall back to generic groups. See Recruitment Strategies.
| Framework | Question Pattern | What It Surfaces |
|---|---|---|
| Name Recognition / Perception | "What's your gut reaction when you hear [Candidate Name]'s name? What comes to mind first?" | Awareness levels, first impressions, whether they are known at all |
| Issue Ownership | "On [issue], who do you trust more: [Candidate A] or [Candidate B]? Why?" | Competitive positioning on specific issues, where each candidate is strong/weak |
| Messaging Test | "[Candidate] says: '[exact campaign message]'. What's your reaction? Does this make you more or less likely to support them?" | Message effectiveness, whether specific talking points land or fall flat |
| Switching Trigger | "What would [Candidate] need to say or do to change your mind about them? What would win or lose your vote?" | Persuadable voter dynamics, what moves the needle |
| Enthusiasm Gap | "How excited are you to vote for [Candidate]? What would make you more enthusiastic? What might make you stay home?" | Turnout likelihood, motivating factors, apathy indicators |
| Concern Surfacing | "What's your biggest concern about [Candidate]? What makes you nervous?" | Attack vulnerabilities, perception risks, issues the campaign needs to address |
Context: Democratic candidate for Governor, Ohio
Rotate across these opening patterns to avoid repetitive study designs:
| Pattern | Opening Question Style |
|---|---|
| A: The Voter Voice | Lead with state issues, let voters set the agenda |
| B: The Competitive Edge | Lead with head-to-head comparison between candidates |
| C: The Quote Lead | Start with a direct candidate quote and gauge reaction |
| D: The Issue Test | Start with a specific policy and test whether voters care |
| E: The Urgency Opener | Start with "What's at stake?" to surface emotional drivers |
Startup due diligence questions must be non-leading. The VC context is never revealed. You are validating whether a problem exists and is severe enough to build a business around, without describing or suggesting the startup's specific solution.
| # | Purpose | Template | What VCs Learn |
|---|---|---|---|
| 1 | Establish relevance | "In your current or past work, how often do you need to [task]? Walk me through what that process typically looks like." | Does the target persona actually do this task? How central is it? |
| 2 | Pain identification | "What's the most frustrating part of [task]? Tell me about a time when it was particularly painful." | Is the pain real? Is it emotional or just inconvenient? |
| 3 | Quantify impact | "Roughly how much time per week do you or your team spend on [task]? What's the cost of that time to your business?" | Hard ROI numbers. Can the startup justify its price? |
| 4 | Current solutions | "What tools, websites, or methods do you currently use to [solve problem]? What works well? What doesn't?" | Competitive landscape. What already exists. |
| 5 | Past attempts | "Have you ever tried a new tool or system specifically to make [task] easier? What happened? Why did you stick with it or abandon it?" | Is this an ACTIVE problem? If they have never tried to solve it, it might not hurt enough. |
| 6 | Magic wand | "If you could wave a magic wand and fix ONE thing about how you [task], what would it be?" | What they actually want vs what the startup is building. These often diverge. |
| 7 | Adoption barriers | "What would make you hesitant to switch to a new [solution type], even if it promised to save you time and money?" | Real sales objections. Integration requirements. Deal breakers. |
The 7-question framework adapts to specific industries. Here are tested adaptations from production studies:
| Industry | Q1 Adaptation (Relevance) | Q6 Adaptation (Magic Wand) |
|---|---|---|
| Auto repair (MotorMinds) | "How often do you need to find, source, or order parts for vehicles?" | "If you could fix ONE thing about how you source parts, what would it be?" |
| Veterinary (VetVivo) | "Do you currently have a pet? What kind? How would you describe your relationship?" | "If money were no object, what would ideal veterinary care look like for your pet?" |
| Children's ed (Feel Good Games) | "Do you have children? What ages? How much screen time do they have each day?" | "If you could design the perfect app or game for your child, what would it do?" |
| Elder care (PatientCompanion) | "What is your role in elder care? How do you typically interact with elderly patients?" | "If you could design the perfect system for patients to communicate needs, what would it look like?" |
| Healthcare admin (TimeSmart) | "What is your role in healthcare? How much time is spent on administrative tasks?" | "If you could design the perfect system for tracking time and ensuring correct payment, what would it look like?" |
| International commerce (Flomaru) | "Do you have loved ones in another country? Have you tried to send gifts internationally?" | "If you could design the perfect international flower delivery service, what would it look like?" |
| Civil engineering (Sidian) | "What is your role in civil engineering? Walk me through a typical project." | "If you could have an AI assistant specifically for civil engineering work, what would you want it to do?" |
| Medical devices (Mandel) | "What is your role in eye care? What services do you currently offer?" | "If you could add any new capability or diagnostic tool to your practice, what would it be?" |
| Cybersecurity (NexRisx) | "What is your role in cybersecurity? Walk me through your day-to-day responsibilities." | "If you could have one tool to make your security work dramatically easier, what would it do?" |
| Travel/compensation (Airfairness) | "How often do you fly? For what purposes? How many flights in the past year?" | "What would make you actually sign up and use an automatic flight compensation service?" |
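If the base framework is kept as data (see the template sketch earlier in this playbook), these adaptations reduce to per-industry overrides. A hypothetical shape, with the question wording taken from the table above:

```python
# Hypothetical per-industry overrides of the base 7-question framework.
# Wording comes from the adaptations table above; the structure is assumed.
INDUSTRY_OVERRIDES = {
    "auto_repair": {
        "relevance": "How often do you need to find, source, or order parts for vehicles?",
        "magic_wand": "If you could fix ONE thing about how you source parts, "
                      "what would it be?",
    },
    "cybersecurity": {
        "relevance": "What is your role in cybersecurity? "
                     "Walk me through your day-to-day responsibilities.",
        "magic_wand": "If you could have one tool to make your security work "
                      "dramatically easier, what would it do?",
    },
}

def questions_for(industry: str, base: dict[str, str]) -> dict[str, str]:
    """Merge a base mapping of purpose -> question with industry overrides."""
    return {**base, **INDUSTRY_OVERRIDES.get(industry, {})}
```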
Cultural research explores attitudes, norms, and behaviours within specific populations. This requires particular sensitivity to how questions are framed.
| Framework | Question Pattern | Use Case |
|---|---|---|
| Tradition vs Modernity | "How do you feel about [modern alternative] compared to the traditional way? Is something lost or gained?" | Testing new products against cultural norms |
| Cultural Identity | "When someone from outside your culture talks about [topic], what do they usually get wrong?" | Understanding cultural perceptions and sensitivities |
| Generational Divide | "How do your parents/grandparents feel about [topic] compared to you? Where do you agree and disagree?" | Understanding generational shifts in attitudes |
| Daily Ritual | "Walk me through your morning/evening routine involving [product/activity]. What role does it play in your day?" | Understanding habitual behaviour and rituals |
| National Pride | "How do you feel about [domestic product] vs [imported product]? Does where it comes from matter to you?" | Country-of-origin effects on purchase decisions |
6 German personas aged 30-65 were asked about bread culture. Key finding: hotel toast was called "the carb mattress" while Brötchen was described in near-spiritual terms. This revealed that bread is not a commodity in Germany - it is a cultural identity marker. Questions included: "What does good bread mean to you?", "What is the worst bread experience you have ever had?", and "If a foreigner asked you about German bread, what would you tell them?"
Pricing questions require careful construction. Direct "would you pay X?" questions produce unreliable data. Instead, use frameworks that surface price sensitivity indirectly.
| Framework | Question Pattern | What It Reveals |
|---|---|---|
| Current Spending Anchor | "How much do you currently spend on [category] per month? Break it down." | Reference price. What "normal" spending looks like. |
| Three-Point Pricing | "At what price would [product] feel like a bargain? Fair? Too expensive to consider?" | Price sensitivity range. Where the ceiling is. |
| Specific Price Test | "Would you pay $X/month for [specific benefit]? Why or why not?" | Whether a specific price point works. Reasons for rejection. |
| Tier Preference | "If the price were $A for basic and $B for premium, which would you choose? What would premium need to include?" | Tier structure validation. What drives upgrades. |
| Cancellation Trigger | "What would make you cancel after the first month?" | Retention risks. What "not worth it" looks like. |
| Value Justification | "What would make you pay 2x more than what you currently spend on [category]?" | Premium positioning requirements. Value perception. |
| Free vs Paid | "If a free version existed with [limitations], would you use that instead? What would make you upgrade?" | Freemium conversion triggers. Free tier boundaries. |
64 US personas tested across four price points. At $9.99: 65.7% would subscribe. At $29.99: 6.3% would. The sharp elasticity cliff meant that a $29.99 price point would dramatically underperform market expectations. This directly informed a hedge fund's trading thesis on Disney stock.
Three tiers tested: $175/month (routine care coordination), $325/month (complex needs), $125/event (crisis response). Every persona confirmed these were within acceptable range. The study also revealed that monthly subscription was strongly preferred over per-event pricing, even though per-event was cheaper for occasional users.
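Once responses come back, multi-price studies like the Disney+ example reduce to a tally of stated intent per price point. A minimal analysis sketch: it assumes each qualitative answer has already been hand-coded into a `would_subscribe` boolean, and the sample data is illustrative, not the actual study output.

```python
from collections import defaultdict

# Illustrative coded responses: (price_point, would_subscribe).
# In practice each persona's qualitative answer is coded into this boolean first.
coded_responses = [
    (9.99, True), (9.99, True), (9.99, False),
    (29.99, False), (29.99, False), (29.99, True),
]

def subscribe_rate_by_price(responses):
    """Return {price: fraction who would subscribe}, sorted by price."""
    counts = defaultdict(lambda: [0, 0])  # price -> [yes, total]
    for price, would in responses:
        counts[price][1] += 1
        if would:
            counts[price][0] += 1
    return {price: yes / total for price, (yes, total) in sorted(counts.items())}

for price, rate in subscribe_rate_by_price(coded_responses).items():
    print(f"${price:.2f}: {rate:.0%} would subscribe")
```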
Competitive intelligence questions explore why customers choose competitors, what they value, and what would trigger switching.
| Framework | Question Pattern |
|---|---|
| Choice Drivers | "Why did you choose [competitor] over other options? What sealed the deal?" |
| Unique Strengths | "What does [competitor] get right that nobody else does?" |
| Hidden Frustrations | "What frustrates you most about [competitor]? If you could fix one thing, what would it be?" |
| Switching Threshold | "What would make you switch from [competitor] to something new? What would the new option need to promise?" |
| New Entrant Requirements | "If a new company entered this space, what would they need to do to get your attention?" |
| Loyalty Test | "[Competitor] raises their price by 30%. Do you stay or leave? What's your threshold?" |
| Feature Gap | "What do you wish [competitor] did that they don't do? What's missing?" |
Across 50+ production studies, the "magic wand" question consistently produces the study's single most actionable insight. Here is why, and the evidence.
By asking "If you could wave a magic wand and fix ONE thing about [task], what would it be?", you achieve three things: you remove practical constraints like cost and feasibility, you force the persona to prioritise a single issue instead of listing everything, and you surface what they actually want, which frequently differs from what the builder assumed.
| Startup | Builder Assumed... | Magic Wand Revealed... | Gap |
|---|---|---|---|
| MotorMinds | Faster parts search and AI recommendations | "Truthful inventory with guaranteed ETAs" | Trust and accuracy matter more than speed or AI |
| PatientCompanion | Better call button systems for patients | "Context before arrival - know WHAT they need before walking in" | Staff need information, not just alerts |
| TimeSmart | Automated timesheet entry | "Real-time visibility into contract compliance and pay accuracy" | The problem is not data entry - it is not knowing where you stand |
| Flomaru | Wider delivery coverage and faster shipping | "Photo proof on delivery" and "see what it actually looks like" | Trust verification matters more than logistics |
| NexRisx | Better vulnerability prioritisation algorithms | "One place to see everything without switching tabs" | Tool consolidation matters more than better algorithms |
| Sidian | AI that automates engineering design | "AI suggestions are fine; AI decisions are terrifying" | Augmentation accepted; autonomy rejected |
| Feel Good Games | More educational game content | "Permission to feel okay about screen time" | Parental guilt is the product's real competitor, not other apps |
| VetVivo | Cheaper implant options | "Honest conversation about quality of life, not just procedure success rates" | Emotional guidance matters more than price |
| Mandel Diagnostics | More advanced screening technology | "Clear ROI and reimbursement codes (CPT) before I invest" | Business case matters more than clinical capability |
| Airfairness | Automatic claim filing | "Folder-only access, not full inbox. Show me exactly what you can see." | Privacy controls matter more than convenience |
| Country | Adaptation Notes | Example Difference |
|---|---|---|
| USA | Direct, opinion-friendly. Americans are comfortable expressing strong preferences. | "What's your honest first reaction?" works well. |
| UK | More reserved. Understatement common. "Quite good" means "excellent." | "What are your thoughts on..." (softer framing) may produce more candour than "What's your HONEST reaction?" |
| Germany | Direct and detailed. Quality and process matter deeply. Will give thorough, structured answers. | Questions about craftsmanship and quality standards produce rich responses. "Walk me through the process" works exceptionally well. |
| Canada | Similar to US but more polite and consensus-oriented. May hedge more in responses. | "What concerns would you have?" produces more honest answers than "What's wrong with this?" |
When asking pricing questions, always use the local currency: USD for the US, GBP for the UK, EUR for Germany, and CAD for Canada.
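A trivial helper keeps the currency consistent with the market being studied. The symbols are standard; the question wording follows the Specific Price Test pattern from the pricing table above, and the `benefit` argument is a placeholder.

```python
# Standard currency symbols per market; the question wording follows the
# Specific Price Test pattern from the pricing frameworks table.
CURRENCY = {"US": "$", "UK": "£", "DE": "€", "CA": "CA$"}

def price_question(country: str, amount: float, benefit: str) -> str:
    return (f"Would you pay {CURRENCY[country]}{amount:.2f}/month "
            f"for {benefit}? Why or why not?")

print(price_question("UK", 9.99, "unlimited access"))
# Would you pay £9.99/month for unlimited access? Why or why not?
```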
These are real questions from production studies, paired with the outcomes they produced.
Question: "What's the most frustrating part of getting the parts you need? Tell me about a time when sourcing a part was particularly painful."
Outcome: Every single participant described "ghost inventory" - systems showing parts as in-stock when they are not. One participant: "If it says 10:30, it rolls in at 10:30. No mid-year misses." This was the #1 finding that shaped the entire investment evaluation.
Question: "Let's talk about the privacy tradeoff. This service needs to scan your email inbox to find flight confirmations. How do you feel about that?"
Outcome: "Full inbox access" was a near-universal dealbreaker. But the follow-up magic wand question revealed that "folder-only access" or "forward specific emails" was acceptable. This distinction - full access vs limited access - was the difference between a product that works and one that does not.
Question: "When does your child's screen time make you feel guilty as a parent, and when does it feel like good parenting?"
Outcome: Guilt was the dominant emotion, not concern about content quality. Parents want "permission to feel okay" about screen time. The "educational" label is table stakes but not trusted. This reframed the entire value proposition from "better games" to "guilt-free screen time."
Question: "What security tools and platforms do you currently use in your work?"
Outcome: Every security team juggles 5-15+ tools that do not talk to each other. Context-switching is constant. The insight that "one place to see everything" mattered more than better algorithms reframed NexRisx's positioning from "AI-powered vulnerability prioritisation" to "unified security view."
| Study Type | Recommended Questions | Rationale |
|---|---|---|
| Standard research | 7 | Covers all aspects of the 7-question framework |
| Quick validation | 4-5 | When you need a fast answer to a specific question |
| Deep dive / due diligence | 7 | Full framework. Do not go beyond 7 - diminishing returns. |
| Pricing only | 4-6 | Focused on willingness to pay |
| Messaging test | 4-5 | Test 2-3 messages, then probe reactions |
| Competitive intel | 5-7 | Covers choice drivers, frustrations, switching triggers |
Seven is the default and validated sweet spot. Use fewer (4-5) for focused studies (pricing only, messaging test). Never use more than 9. If you need broader coverage, run a second study.
Yes. Questions are asked sequentially within a study, and each persona maintains context from previous questions. You can reference earlier answers: "You mentioned [X] in your previous response. Tell me more about that." This is a powerful technique but use it sparingly - planned questions produce more consistent data than reactive ones.
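If you do add a follow-up, keep it templated rather than free-form so runs stay comparable. A sketch of the referencing pattern; extracting the phrase from the prior answer is left to the agent, and nothing here is a Ditto API call.

```python
def follow_up(prior_phrase: str) -> str:
    """Planned follow-up that references a phrase from an earlier answer."""
    return (f"You mentioned {prior_phrase} in your previous response. "
            f"Tell me more about that.")

print(follow_up('"ghost inventory"'))
# You mentioned "ghost inventory" in your previous response. Tell me more about that.
```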
Apply these tests: (1) Could the answer be "yes" or "no"? If so, rewrite. (2) Would the answer tell you something you do not already assume? If not, it is too obvious. (3) Could the answer change a business decision? If not, it is not actionable. (4) Does it ask about one thing or multiple things? If multiple, split it.
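The first and fourth tests can be partially automated. A rough heuristic linter, assuming English questions; it flags only the mechanical failures (yes/no openers, possible double-barrelling, very short questions), while the "too obvious" and "actionable" tests remain human judgment.

```python
import re

# Heuristic linter for mechanically detectable anti-patterns; English-only,
# and deliberately conservative - warnings are prompts for review, not verdicts.
YES_NO_OPENER = re.compile(
    r"^(do|does|did|is|are|was|were|would|will|can|could|should|have|has)\b",
    re.IGNORECASE,
)
DOUBLE_BARREL = re.compile(r"\band\s+(what|how|why|when|who|where)\b", re.IGNORECASE)

def lint_question(question: str) -> list[str]:
    """Return warnings for mechanically detectable question anti-patterns."""
    q = question.strip()
    warnings = []
    if YES_NO_OPENER.match(q):
        warnings.append("Opens like a yes/no question - rewrite as open-ended.")
    if DOUBLE_BARREL.search(q):
        warnings.append("Possibly double-barrelled - consider splitting.")
    if len(q.split()) < 5:
        warnings.append("Very short - may be too generic to produce depth.")
    return warnings

print(lint_question("Do you like coffee?"))
# ['Opens like a yes/no question - rewrite as open-ended.',
#  'Very short - may be too generic to produce depth.']
```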
This happens occasionally. The most common causes: (1) the question is too abstract - add a concrete scenario, (2) the question assumes knowledge the persona does not have - simplify, (3) the question is leading - the persona tells you what you wanted to hear. Do not spend time trying to "fix" a poor question mid-study. Note what went wrong and design better questions next time.
For B2C tech studies, referencing a specific URL in questions works well ("Look at this landing page for 10 seconds..."). For due diligence studies, do NOT describe or link to the startup's product - let participants describe their own needs without priming. For CPG, you can reference the product category without naming the specific brand in early questions, then introduce it later.
Start with Q1 as a relevance filter: "In your current or past work, how often do you [industry-specific task]?" This immediately tells you if the persona has relevant experience. Even with broad demographic filters (age + country instead of exact industry), a well-designed Q1 separates relevant from irrelevant participants in the response data.