Validate Product Positioning with Ditto + Claude Code

A complete Claude Code guide to running positioning validation studies using April Dunford's framework, Ditto's 300,000+ synthetic personas, and the full API workflow. From zero to validated positioning in 30 minutes.

For Claude Code agents and AI coding assistants. This guide provides copy-paste API calls, exact question designs, and proven workflows for validating product positioning with Ditto's synthetic research API. Every curl command is production-tested across 50+ studies.
Contents
  1. What Is Positioning Validation and Why It Matters
  2. April Dunford's 5-Component Positioning Framework
  3. The 7-Question Positioning Study Design
  4. Complete API Workflow: Step by Step
  5. Interpreting Results and Generating Deliverables
  6. Cross-Segment Positioning Comparison
  7. Iterative Validation: Three Rounds in One Afternoon
  8. Worked Example: Validating a Project Management Tool
  9. Advanced Techniques
  10. Best Practices and Common Mistakes
  11. Frequently Asked Questions

1. What Is Positioning Validation and Why It Matters

Positioning defines how a product is perceived in the market: who it serves, how it differs from alternatives, and why a specific customer segment should care. Positioning validation tests whether those claims actually land with the target audience.

Most product marketing teams skip this step. Not because they doubt its value, but because the traditional process (agency brief → participant recruitment → interviews → analysis) takes 6–10 weeks and costs $10,000–$30,000 per round. By the time results arrive, the launch window has closed.

With Ditto and Claude Code, you can validate positioning against the complete April Dunford framework in approximately 30 minutes. This changes positioning validation from a quarterly event into a continuous habit.

What You Will Produce

| Deliverable | What It Contains | Who Uses It |
| --- | --- | --- |
| Positioning Validation Scorecard | How each of Dunford's 5 components scored, with supporting evidence | PMM, Product, Leadership |
| Competitive Alternative Map | What customers actually do today (not just direct competitors) | PMM, Sales, Product |
| Value Resonance Ranking | Which value propositions landed, which fell flat, which generated scepticism | PMM, Marketing |
| Market Category Feedback | How customers naturally categorise you (may differ from your intended category) | PMM, Product, Leadership |
| Proof Point Gap Analysis | Where customers expressed scepticism and what evidence they need | PMM, Sales Enablement |
| Quotable Insights | Direct persona quotes for positioning documents and presentations | PMM, Leadership |

2. April Dunford's 5-Component Positioning Framework

The study design in this guide maps directly to April Dunford's positioning framework from Obviously Awesome. The five components are interdependent:

| # | Component | Definition | What You Need to Validate |
| --- | --- | --- | --- |
| 1 | Competitive Alternatives | What customers would do if your product did not exist | Are the alternatives you assume correct? Are there alternatives you haven't considered? |
| 2 | Unique Attributes | Capabilities that differentiate you from those alternatives | Do customers actually perceive these attributes as unique? Do they notice them at all? |
| 3 | Value and Proof | The demonstrable outcome of your unique attributes | Does the stated value resonate? What proof do customers need to believe it? |
| 4 | Target Customers | The segment that cares most about your differentiated value | Is the segment you chose actually the one that responds most strongly? |
| 5 | Market Category | The frame of reference that makes your value obvious | How do customers naturally categorise you? Does it match your intended category? |

Key insight: Your differentiated value only makes sense relative to specific alternatives, for specific customers, within a specific category. The components are interdependent. Validating them individually is not enough — you must test how they work together.

3. The 7-Question Positioning Study Design

Each question in this design maps to one or more of Dunford's five components. Questions are open-ended and qualitative, designed to elicit natural language responses rather than scaled ratings.

| Q# | Question | Component(s) Tested | What You Learn |
| --- | --- | --- | --- |
| 1 | "When you think about [problem space], what comes to mind first? What frustrates you most about the options currently available?" | Competitive Alternatives | How customers frame the problem; what solutions they associate with it; pain points with current options |
| 2 | "Walk me through how you currently solve [problem]. What tools, services, or workarounds do you use? What's missing?" | Competitive Alternatives + Status Quo | Actual competitive landscape from the customer's perspective; gaps in current solutions; the "do nothing" alternative |
| 3 | "If I told you there was a product that [unique value proposition], what's your gut reaction? What excites you? What makes you sceptical?" | Unique Attributes + Value | Emotional response to positioning; which claims register; which trigger scepticism; proof points needed |
| 4 | "How would you describe [product] to a colleague? What category would you put it in?" | Market Category | Natural category language; whether your intended category matches customer mental models |
| 5 | "Compared to [competitor A] and [competitor B], what would make you choose a new option? What's the minimum bar?" | Competitive Differentiation | Switching triggers; competitive table stakes; how competitors are perceived; minimum viable differentiation |
| 6 | "If [product] could only do ONE thing brilliantly for you, what should that be? Why does that matter more than everything else?" | Primary Value Driver | Core value from the customer's perspective (may differ from your assumption); priority ranking |
| 7 | "What would stop you from trying something like this? What would you need to see or hear to feel confident switching?" | Adoption Barriers + Proof Points | Objections; risk factors; required evidence; trust signals needed to convert |

Customisation guidance: Replace [problem space], [product], [unique value proposition], [competitor A], and [competitor B] with your specific product context. The question structure should remain the same — only the bracketed placeholders change.
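
One lightweight way to manage the placeholders is to hold them in shell variables and build each question string once; a sketch using the Section 8 example values (the variable names are illustrative):

# Fill in your product context once; reuse across all 7 questions.
PROBLEM_SPACE="managing projects across a remote team"
PRODUCT="FlowBoard"
COMPETITOR_A="Asana"
COMPETITOR_B="Linear"

Q1="When you think about $PROBLEM_SPACE, what comes to mind first? What frustrates you most about the options currently available?"
Q5="Compared to $COMPETITOR_A and $COMPETITOR_B, what would make you choose a new option? What's the minimum bar?"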

Why Each Question Matters

The sequence is deliberately ordered from open to specific. Q1 and Q2 capture the customer's own framing and real alternatives before any positioning language is introduced, so later answers are not contaminated by your wording. Q3 tests the value proposition cold and surfaces both excitement and scepticism. Q4 harvests natural category language, Q5 probes what it would take to displace named competitors, Q6 forces a single-priority ranking of value, and Q7 surfaces the objections and proof points that stand between interest and adoption.

4. Complete API Workflow: Step by Step

This is the complete sequence of API calls to run a positioning validation study. Each command is copy-paste ready.

Prerequisites

A Ditto API key (sent as an Authorization: Bearer header on every call), curl, and optionally jq for parsing responses in the scripted sketches below. All endpoints in this guide sit under https://app.askditto.io/v1.

Step 1: Create the Research Group

Recruit 10 personas matching your ideal customer profile.

curl -s -X POST "https://app.askditto.io/v1/research-groups/recruit" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Product Positioning Validation - [Your Product]",
    "description": "Target customers for positioning validation study. [ICP description].",
    "group_size": 10,
    "filters": {
      "country": "USA",
      "age_min": 28,
      "age_max": 55,
      "employment": "employed"
    },
    "sampling_method": "random",
    "deduplicate": true
  }'
Critical parameter notes:

  - The group-size field is group_size; a bare size parameter is rejected.
  - If you filter by US state, use 2-letter codes ("TX", not "Texas").
  - income is not a supported filter; use education and employment as proxies.

Save the returned uuid — you need it for the next step.

# Response (extract the uuid):
{
  "uuid": "abc123-def456-...",
  "name": "Product Positioning Validation - [Your Product]",
  "filters": { ... },
  "agents": [ ... ]  // 10 persona objects
}
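
In a scripted workflow, the uuid can be captured directly; a minimal sketch, assuming jq is installed and the recruit payload above is saved as group.json:

# Recruit the group and keep only the uuid for subsequent calls.
GROUP_UUID=$(curl -s -X POST "https://app.askditto.io/v1/research-groups/recruit" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d @group.json | jq -r '.uuid')
echo "Group created: $GROUP_UUID"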

Step 2: Create the Research Study

curl -s -X POST "https://app.askditto.io/v1/research-studies" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Positioning Validation: [Your Product] - [Date]",
    "research_group_uuid": "GROUP_UUID_FROM_STEP_1"
  }'

Save the returned id — this is your study ID for all subsequent calls.

# Response:
{
  "id": 12345,
  "name": "Positioning Validation: [Your Product] - Feb 2026",
  "research_group_uuid": "abc123-def456-..."
}

Step 3: Ask Questions (Sequentially)

Questions must be asked one at a time. Send Question 1, poll until all jobs complete, then send Question 2, and so on. Do not batch questions — the API processes them asynchronously and earlier answers provide context for later questions.
# Question 1: Competitive Alternatives
curl -s -X POST "https://app.askditto.io/v1/research-studies/STUDY_ID/questions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "question": "When you think about [problem space], what comes to mind first? What frustrates you most about the options currently available?"
  }'

The response returns an array of job_ids — one per persona:

# Response:
{
  "job_ids": ["job-001", "job-002", "job-003", ... "job-010"]
}

Step 4: Poll for Responses

Poll each job until status is "finished". Typically takes 15–60 seconds per question batch.

# Poll a single job:
curl -s -X GET "https://app.askditto.io/v1/jobs/JOB_ID" \
  -H "Authorization: Bearer YOUR_API_KEY"

# Response when complete:
{
  "id": "job-001",
  "status": "finished",
  "result": {
    "answer": "The first thing that comes to mind is..."
  }
}
Efficient polling pattern: Poll all 10 job IDs in a loop with a 5-second interval. Once all return "finished", proceed to the next question. A full 7-question study with 10 personas typically completes in 4–8 minutes of polling time.
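
A minimal bash sketch of this pattern, assuming jq is installed (the helper is reused in later steps; pass job IDs as arguments):

# Poll each job until it reports status "finished".
poll_jobs() {
  for job_id in "$@"; do
    while [ "$(curl -s "https://app.askditto.io/v1/jobs/$job_id" \
      -H "Authorization: Bearer YOUR_API_KEY" | jq -r '.status')" != "finished" ]; do
      sleep 5   # the recommended 5-second polling interval
    done
  done
}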

Repeat Steps 3–4 for all 7 questions. The full question sequence:

# Question 1: "When you think about [problem space], what comes to mind first?..."
# → Poll until complete
# Question 2: "Walk me through how you currently solve [problem]..."
# → Poll until complete
# Question 3: "If I told you there was a product that [value prop]..."
# → Poll until complete
# Question 4: "How would you describe [product] to a colleague?..."
# → Poll until complete
# Question 5: "Compared to [competitor A] and [competitor B]..."
# → Poll until complete
# Question 6: "If [product] could only do ONE thing brilliantly..."
# → Poll until complete
# Question 7: "What would stop you from trying something like this?..."
# → Poll until complete
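
Putting Steps 3 and 4 together, one loop can drive the whole sequence; a sketch reusing the poll_jobs helper from Step 4 (question text abbreviated here; full wording is in Section 3):

STUDY_ID=12345   # from Step 2
QUESTIONS=(
  "When you think about [problem space], what comes to mind first? ..."
  "Walk me through how you currently solve [problem]. ..."
  "If I told you there was a product that [unique value proposition], what's your gut reaction? ..."
  "How would you describe [product] to a colleague? ..."
  "Compared to [competitor A] and [competitor B], what would make you choose a new option? ..."
  "If [product] could only do ONE thing brilliantly for you, what should that be? ..."
  "What would stop you from trying something like this? ..."
)

for q in "${QUESTIONS[@]}"; do
  # Ask one question, collect the per-persona job IDs, and wait for all of them.
  job_ids=$(curl -s -X POST "https://app.askditto.io/v1/research-studies/$STUDY_ID/questions" \
    -H "Authorization: Bearer YOUR_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$(jq -n --arg q "$q" '{question: $q}')" | jq -r '.job_ids[]')
  poll_jobs $job_ids
done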

Step 5: Complete the Study

After all 7 questions have been answered, trigger the completion analysis:

curl -s -X POST "https://app.askditto.io/v1/research-studies/STUDY_ID/complete" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json"

This generates Ditto's automated analysis: overall summary, key segments, divergences, shared mindsets, and suggested follow-up questions. Poll the returned job IDs until complete.
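
If you are scripting the workflow, the completion jobs can be polled with the same helper; a sketch, assuming the completion response returns the same job_ids shape as Step 3:

completion_jobs=$(curl -s -X POST "https://app.askditto.io/v1/research-studies/$STUDY_ID/complete" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" | jq -r '.job_ids[]')
poll_jobs $completion_jobs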

Step 6: Get the Share Link

curl -s -X POST "https://app.askditto.io/v1/research-studies/STUDY_ID/share" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json"

# Response:
{
  "url": "https://app.askditto.io/organization/studies/shared/xyz123"
}
UTM tracking is mandatory. Append ?utm_source=ce for cold emails or ?utm_source=blog for blog articles. Never use raw share URLs without a UTM parameter.
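
A sketch of fetching the link and appending the tracking parameter in one step (assumes jq):

SHARE_URL=$(curl -s -X POST "https://app.askditto.io/v1/research-studies/$STUDY_ID/share" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" | jq -r '.url')
echo "${SHARE_URL}?utm_source=blog"   # use utm_source=ce for cold emails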

5. Interpreting Results and Generating Deliverables

Once the study completes, Claude Code should synthesise persona responses into the following deliverables. Each maps to specific questions in the study:

Positioning Validation Scorecard

For each of Dunford's 5 components, assess:

| Component | Source Questions | Score Criteria |
| --- | --- | --- |
| Competitive Alternatives | Q1, Q2 | Strong: Personas name the alternatives you expected. Weak: They name alternatives you hadn't considered, or their primary alternative is "do nothing." |
| Unique Attributes | Q3, Q5 | Strong: Personas articulate your differentiators back to you. Weak: They don't notice or don't care about what you think makes you unique. |
| Value and Proof | Q3, Q6, Q7 | Strong: Personas express excitement about the stated value. Weak: They express scepticism and the proof they require doesn't exist yet. |
| Target Customers | Q6, all responses | Strong: Consistent enthusiasm across the persona group. Weak: Divergent responses suggest the segment is too broad or poorly defined. |
| Market Category | Q4 | Strong: Personas categorise you as intended. Weak: They place you in a different category, or cannot categorise you at all. |

Competitive Alternative Map

From Q1 and Q2 responses, extract and cluster every alternative mentioned: direct competitors, adjacent tools pressed into service, manual workarounds, and the "do nothing" option.

Rank by frequency of mention. The most-cited alternative is your actual competitive reference point, regardless of what your sales team thinks.
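
A rough bash pass for the frequency count, assuming each persona's answers were saved to files such as answers/q1-01.txt while polling (the paths and the candidate list are illustrative):

# Count how many Q1/Q2 responses mention each candidate alternative.
for alt in "spreadsheets" "Notion" "Asana" "do nothing"; do
  count=$(grep -li "$alt" answers/q1-*.txt answers/q2-*.txt 2>/dev/null | wc -l)
  echo "$alt: mentioned in $count responses"
done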

Proof Point Gap Analysis

From Q3 (scepticism) and Q7 (barriers/evidence needed), compile a table:

| Claim That Triggered Scepticism | Proof Point Needed | Do We Have This? |
| --- | --- | --- |
| "Sounds too good to be true" | Case study from similar company | Yes / No / Partial |
| "I've heard the AI pitch before" | Live demo or free trial | Yes / No / Partial |
| "How does it integrate with X?" | Integration documentation | Yes / No / Partial |

This table directly informs what content your marketing and sales teams need to create next.


6. Cross-Segment Positioning Comparison

Positioning rarely lands uniformly. Enterprise buyers, mid-market teams, and SMBs respond differently to the same positioning.

How to Run It

Create 2–3 separate research groups with different demographic filters, then run the identical 7 questions against each:

# Group A: SMB decision-makers
curl -s -X POST "https://app.askditto.io/v1/research-groups/recruit" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Positioning Validation - SMB Buyers",
    "description": "Small business owners and managers for positioning validation",
    "group_size": 10,
    "filters": {
      "country": "USA",
      "age_min": 25,
      "age_max": 45,
      "employment": "self_employed"
    },
    "sampling_method": "random",
    "deduplicate": true
  }'

# Group B: Enterprise evaluators
curl -s -X POST "https://app.askditto.io/v1/research-groups/recruit" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Positioning Validation - Enterprise Buyers",
    "description": "Corporate professionals for enterprise positioning validation",
    "group_size": 10,
    "filters": {
      "country": "USA",
      "age_min": 30,
      "age_max": 55,
      "education": "masters",
      "employment": "employed"
    },
    "sampling_method": "random",
    "deduplicate": true
  }'

Create a separate study for each group, ask the same 7 questions, then compare the results side by side.
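
A sketch of that per-segment loop, assuming GROUP_UUIDS holds the uuid returned by each recruit call above (names are illustrative):

GROUP_UUIDS=("SMB_GROUP_UUID" "ENTERPRISE_GROUP_UUID")

for group_uuid in "${GROUP_UUIDS[@]}"; do
  # One study per segment, all pointed at the same 7 questions.
  study_id=$(curl -s -X POST "https://app.askditto.io/v1/research-studies" \
    -H "Authorization: Bearer YOUR_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$(jq -n --arg g "$group_uuid" \
          '{name: ("Positioning Validation - " + $g), research_group_uuid: $g}')" | jq -r '.id')
  echo "Created study $study_id for group $group_uuid"
  # Ask the same 7 questions and poll, exactly as in Section 4.
done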

Cross-Segment Output Matrix

| Component | SMB Response Pattern | Enterprise Response Pattern | Implication |
| --- | --- | --- | --- |
| Competitive Alternatives | "We use spreadsheets and free tools" | "We have [Enterprise Tool X] but it's overkill" | Different competitive reference points; messaging needs segment-specific framing |
| Market Category | "It's like a simpler version of X" | "It's a lightweight alternative to our existing stack" | Category positioning may need to vary by segment |
| Primary Value | "Save me time" | "Reduce vendor complexity" | Lead with different value props per segment |

Time estimate: Cross-segment comparison adds only about 20 minutes to the workflow, since the studies can run in parallel. The deliverable is a segment-by-segment comparison that would cost $30,000–$90,000 through traditional research agencies.

7. Iterative Validation: Three Rounds in One Afternoon

The most significant advantage of this workflow is not the speed of a single round. It is the ability to iterate.

Three-Round Framework

| Round | Purpose | Time | Adjustments from Previous Round |
| --- | --- | --- | --- |
| Round 1 | Baseline validation | ~30 min | N/A — initial hypothesis test |
| Round 2 | Revised positioning | ~25 min | Modify Q3 value prop based on Round 1 scepticism; adjust Q5 competitors based on Q1–Q2 findings |
| Round 3 | Refined validation | ~20 min | Final positioning language tested; category refined based on Q4 feedback; proof points addressed |

What to Change Between Rounds

Change only what the previous round told you to change: rewrite the Q3 value proposition to address the scepticism it triggered, swap the Q5 competitors for the alternatives personas actually named in Q1–Q2, and adjust the Q4 category framing in line with how personas categorised you. Keeping the remaining questions stable makes the rounds directly comparable.

After 3 rounds: you have 210 data points (10 personas × 7 questions × 3 rounds), with each round informed by the findings of the previous one. That is more primary positioning research than most companies conduct in a year.

8. Worked Example: Validating a Project Management Tool

Scenario

Product: "FlowBoard" — an AI-powered project management tool for remote teams
Current positioning: "The project management tool that thinks ahead"
Target segment: Remote-first startup teams (5–50 employees)
Competitors: Asana, Linear
Value proposition: Uses AI to predict project delays and automatically rebalance workloads

Customised Questions

  1. "When you think about managing projects across a remote team, what comes to mind first? What frustrates you most about the options currently available?"
  2. "Walk me through how you currently manage projects and tasks in your remote team. What tools do you use? What's missing?"
  3. "If I told you there was a project management tool that uses AI to predict project delays before they happen and automatically rebalances workloads across your team, what's your gut reaction? What excites you? What makes you sceptical?"
  4. "How would you describe FlowBoard to a colleague? What category would you put it in?"
  5. "Compared to Asana and Linear, what would make you choose a new project management tool? What's the minimum bar?"
  6. "If FlowBoard could only do one thing brilliantly for your remote team, what should that be? Why does that matter more than everything else?"
  7. "What would stop you from trying a new project management tool? What would you need to see or hear to feel confident switching?"

Group Setup

curl -s -X POST "https://app.askditto.io/v1/research-groups/recruit" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Remote Startup PMs - FlowBoard Positioning",
    "description": "Remote-working professionals aged 25-40 for project management positioning validation",
    "group_size": 10,
    "filters": {
      "country": "USA",
      "age_min": 25,
      "age_max": 40,
      "employment": "employed",
      "education": "bachelors"
    },
    "sampling_method": "random",
    "deduplicate": true
  }'

Hypothetical Findings

| Component | Finding | Positioning Implication |
| --- | --- | --- |
| Competitive Alternatives | 8/10 personas named Notion first, not Asana or Linear | Primary competitive reference is Notion, not traditional PM tools. Reframe competitive positioning. |
| Unique Attributes (AI prediction) | 6/10 expressed scepticism: "AI predictions sound impressive but are they accurate?" | Need proof points: accuracy metrics, demo, or case study. The AI claim triggers doubt without evidence. |
| Market Category | 7/10 said "project management tool," 3/10 said "team coordination platform" | "Project management tool" is the natural category. "Thinks ahead" is a differentiator, not a category. |
| Primary Value (Q6) | "Visibility into what everyone is working on" — mentioned by 9/10 personas | Customers care more about visibility than prediction. Consider leading with "see everything, miss nothing" rather than "predicts delays." |
| Adoption Barriers (Q7) | "Migration pain" — cited by 7/10. "Will my team actually use it?" — cited by 5/10 | Messaging must address migration and adoption friction. Free trial with import tool is critical. |

Round 2 Revision

Based on these findings, the revised Q3 for Round 2 would test: "If I told you there was a project management tool that gives you instant visibility into what every remote team member is working on, flags risks before they become problems, and makes it painless to switch from Notion, what's your gut reaction?"


9. Advanced Techniques

A/B Positioning Test

Run two studies with identical groups but different value propositions in Q3. Compare which version generates more excitement and less scepticism.

# Study A — Q3: "...a product that predicts project delays using AI..."
# Study B — Q3: "...a product that gives you instant visibility into your remote team..."

# Compare: excitement levels, scepticism triggers, and natural language used

Category Exploration Study

If Q4 reveals category confusion, run a follow-up study focused specifically on category: ask personas to choose between candidate frames directly, for example "Here are three ways to describe this product: [category A], [category B], [category C]. Which description makes the most sense to you, and why?"

Combining with Message Testing

Once positioning is validated, use the findings to design a messaging test. The "language harvest" from positioning responses (exact words and phrases customers used) becomes the raw material for messaging variants. See the Question Design Playbook for messaging study templates.

Over-Recruit and Curate

For high-stakes positioning decisions, recruit 15–20 personas, review their profiles, remove any that don't match your ICP closely enough, then run the study with the curated 10. This ensures higher-quality responses.

# Recruit 15 personas
"group_size": 15

# Review profiles, then remove poor matches:
curl -s -X DELETE "https://app.askditto.io/v1/research-studies/STUDY_ID/agents/AGENT_UUID" \
  -H "Authorization: Bearer YOUR_API_KEY"

10. Best Practices and Common Mistakes

Do

  - Ask questions one at a time and poll until every job finishes before sending the next question.
  - Save the group uuid and study id the moment they are returned.
  - Keep questions open-ended; the goal is natural language, not confirmation of your own positioning.
  - Call /complete after the final question so the automated analysis runs.
  - Treat synthetic results as a fast first pass and validate high-stakes decisions with real customers.

Don't

  - Batch questions; earlier answers provide context for later ones.
  - Change anything other than the bracketed placeholders when customising the 7 questions.
  - Ignore scepticism in Q3 and Q7 responses; it tells you exactly which proof points to build.
  - Stop after one round when results contradict your hypothesis; revise and run again.

Common API Errors

| Error | Cause | Solution |
| --- | --- | --- |
| size parameter rejected | Wrong parameter name | Use group_size, not size |
| 0 agents recruited | State filter used full name | Use 2-letter codes: "TX" not "Texas" |
| Jobs stuck in "pending" | Normal for first 10–15 seconds | Continue polling with 5-second intervals |
| income filter rejected | Unsupported filter | Remove the income filter; use education/employment as a proxy |
| Missing completion analysis | Forgot to call /complete | Always call POST /v1/research-studies/{id}/complete after the final question |

11. Frequently Asked Questions

How long does a full positioning validation study take?

Approximately 30 minutes end to end: 1–2 minutes for group creation, 4–8 minutes for question asking and polling, 2–3 minutes for completion analysis, plus time for Claude Code to synthesise deliverables.

How many personas should I use?

10 is the recommended minimum for positioning studies. It provides enough diversity to identify patterns while keeping the data manageable. For cross-segment comparison, use 10 per segment.

Can I validate B2C positioning, or is this only for B2B?

Both. Dunford's framework originated in B2B but the components apply equally to B2C. For B2C, adjust the demographic filters to match your consumer profile and modify Q5 to reference consumer alternatives rather than enterprise tools.

Should I test positioning for different geographies?

Yes, if you operate in multiple markets. Ditto covers 15+ countries. Run the same 7 questions against groups in each target market. Category perception and competitive alternatives often vary significantly by geography.

How does this compare to traditional positioning research?

Traditional: 6–10 weeks, $10,000–$30,000 per round, 15–20 interviews. Ditto + Claude Code: 30 minutes, a fraction of the cost, 10 persona responses. EY Americas validated a 95% correlation between Ditto synthetic responses and traditional research methods. The recommended approach is hybrid: use Ditto for the fast first pass, then validate top candidates with real customers.

What if the results contradict my positioning hypothesis?

That is the point. Positioning validation exists to catch misalignment before you build an entire GTM strategy on incorrect assumptions. If results contradict your hypothesis, revise and run Round 2. The cost of being wrong at this stage is 30 minutes. The cost of launching with wrong positioning is months of misfiring sales and marketing.

Can I use this for repositioning an existing product?

Absolutely. For repositioning, add an additional question between Q2 and Q3: "Have you heard of [product]? If so, how would you describe what it does?" This baseline perception question reveals your current positioning in the market, which you can then compare against your proposed new positioning in Q3.

How do I know when positioning is "validated"?

Positioning is validated when: (1) the competitive alternatives you assumed match what customers actually report, (2) customers can articulate your differentiated value back to you, (3) they categorise you as intended, (4) scepticism is addressable with proof points you possess, and (5) the primary value driver customers identify matches what you lead with. If all five components score "strong" on the scorecard, your positioning is validated.


Related guides: Question Design Playbook (messaging study templates).


Ditto — Synthetic market research with 300,000+ AI personas. Validated by EY Americas (95% correlation), Harvard, Cambridge, Stanford, and Oxford.
askditto.io · [email protected]