Competitive Intelligence with Ditto + Claude Code

A complete Claude Code guide to building competitive battlecards from synthetic perception research. Seven-question study design, full API workflow, multi-competitor parallel studies, quarterly tracking, and automated battlecard generation. From zero to finished battlecard in 45 minutes.

For Claude Code agents and AI coding assistants. This guide provides copy-paste API calls, study designs, and proven workflows for competitive intelligence research with Ditto's synthetic research API. Every command is production-tested. The output is a finished competitive battlecard ready for sales distribution.
Contents
  1. The Three Layers of Competitive Intelligence
  2. Anatomy of a Competitive Battlecard
  3. The 7-Question Competitive Perception Study
  4. Complete API Workflow: Step by Step
  5. Generating the Battlecard from Study Data
  6. Multi-Competitor Parallel Studies
  7. Quarterly Competitive Tracking
  8. Advanced: Claim Credibility Testing
  9. Worked Example: SaaS Analytics Tool vs Mixpanel
  10. Connecting CI to Other PMM Workflows
  11. Best Practices and Common Mistakes
  12. Frequently Asked Questions

1. The Three Layers of Competitive Intelligence

Competitive intelligence operates across three layers. Ditto addresses the third, which is the hardest to obtain through existing tools.

| Layer | What It Tells You | Existing Tools | Gap |
|-------|-------------------|----------------|-----|
| Layer 1: Data Collection | What the competitor is doing: website changes, pricing updates, job postings, press releases, feature launches | Klue, Crayon, G2, Owler, SimilarWeb | Well served |
| Layer 2: Business Context | What the competitor is planning: hiring patterns signal strategy shifts, pricing changes signal repositioning | Klue, Crayon, analyst reports | Adequately served |
| Layer 3: Customer Perception | What the market thinks about the competitor: do buyers believe their claims? What triggers switching? How do they compare you? | Win/loss interviews (slow, small sample, hard to recruit) | This is where Ditto fills the gap |
Why Layer 3 matters most: 68% of sales opportunities are competitive. In those conversations, the outcome depends on how well your team understands buyer perception of the competitor. Data from Layers 1 and 2 tells you what the competitor claims; Layer 3 tells you whether buyers believe it.

2. Anatomy of a Competitive Battlecard

The study design in this guide produces all six sections of a standard competitive battlecard. Each section maps to specific study questions.

| Section | What It Contains | Source Questions | Who Uses It |
|---------|------------------|------------------|-------------|
| Why We Win | Top 3-4 reasons customers choose you over this competitor, in customer language, with evidence | Q2, Q5, Q7 | AEs in competitive deals |
| Competitor Strengths | Honest assessment of where the competitor excels + how to respond when prospects raise these points | Q3 | AEs, SEs |
| Landmine Questions | High-impact questions that expose competitor gaps without sounding adversarial ("Have you asked them about X?") | Q3 (weaknesses), Q4 (scepticism) | AEs in discovery calls |
| Quick Dismisses | 1-2 sentence rebuttals for the competitor's most common claims | Q4 | AEs, SDRs |
| Switching Triggers | Events and frustrations that create openings: price increases, buggy releases, lost integrations | Q6 | AEs, SDRs, Marketing |
| Recent Wins | Specific deals won against this competitor with industry, use case, and deciding factor | Supplemented from CRM data | AEs for social proof |

The ABC Quality Framework


3. The 7-Question Competitive Perception Study

Each question targets a specific competitive intelligence need. Together, they produce raw material for all six battlecard sections.

| Q# | Question | CI Component | Battlecard Section(s) |
|----|----------|--------------|-----------------------|
| 1 | "When you think about solutions in [category], which brands or tools come to mind first? What do you associate with each?" | Brand Awareness + Associations | Competitive landscape context |
| 2 | "You are evaluating [your product] against [Competitor A]. What would make you lean toward one or the other?" | Head-to-Head Decision Drivers | Why We Win + Competitor Strengths |
| 3 | "What is the ONE thing [Competitor A] does really well? What is the ONE thing they do poorly?" | Strengths + Weaknesses | Competitor Strengths + Landmine Questions |
| 4 | "If someone told you that [Competitor A's key marketing claim], would you believe them? What would make you sceptical?" | Claim Credibility | Quick Dismisses + Landmine Questions |
| 5 | "What would [your product] need to prove to you to win over [Competitor A]? What evidence would you need?" | Proof Point Requirements | Why We Win (proof gaps to address) |
| 6 | "Have you ever switched from one [category] solution to another? What triggered the switch? What almost stopped you?" | Switching Triggers + Barriers | Switching Triggers |
| 7 | "If you had unlimited budget, which solution would you choose and why? If budget were tight, would your answer change?" | Value vs Premium Positioning | Why We Win (price vs value framing) |
Customisation guidance: Replace [category], [your product], [Competitor A], and [Competitor A's key marketing claim] with your specific context. The question structures should remain the same — only the bracketed placeholders change.
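When Claude Code builds these questions programmatically, shell variable substitution keeps the structure fixed while only the placeholders change. A minimal sketch, assuming bash and jq are available; the example values come from the worked example in Section 9 and are purely illustrative:

# Hypothetical placeholder values; substitute your own context:
PRODUCT="Beacon Analytics"
COMPETITOR="Mixpanel"
CLAIM="self-serve analytics that help you convert, engage, and retain more users"

# Build the Q2 and Q4 payloads with jq so quotes inside the values cannot break the JSON:
jq -n --arg q "You are evaluating ${PRODUCT} against ${COMPETITOR}. What would make you lean toward one or the other?" \
  '{question: $q}' > q2.json
jq -n --arg q "If someone told you that ${COMPETITOR} provides '${CLAIM}', would you believe them? What would make you sceptical?" \
  '{question: $q}' > q4.json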

Why Each Question Matters


4. Complete API Workflow: Step by Step

Complete sequence of API calls. Every command is copy-paste ready.

Prerequisites

A Ditto API key (sent as the Authorization: Bearer header on every call below) and the ability to run curl from a terminal. You will also need your product's category, the competitor's name, and the competitor's key marketing claim; Step 1 covers how to gather that context.

Step 1: Research the Competitor

Before designing the study, Claude Code should research the competitor to customise the questions effectively: review the competitor's homepage and positioning claims, their pricing page, and recent customer reviews (e.g., on G2) to see how they describe themselves and what buyers say about them.

The key output from this step is the competitor's primary marketing claim (used in Q4). For example: "Mixpanel claims to provide 'self-serve analytics that help you convert, engage, and retain more users.'"

Step 2: Create the Research Group

curl -s -X POST "https://app.askditto.io/v1/research-groups/recruit" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "CI Study - [Your Product] vs [Competitor A] - [Date]",
    "description": "Target buyers for competitive perception study. [ICP description].",
    "group_size": 10,
    "filters": {
      "country": "USA",
      "age_min": 28,
      "age_max": 55,
      "employment": "employed"
    },
    "sampling_method": "random",
    "deduplicate": true
  }'
Critical parameter notes: the size field must be named group_size (a bare size parameter is rejected), state filters use 2-letter codes ("TX", not "Texas"), and income is not a supported filter; use education and employment as proxies. See Common API Errors in Section 11.

Save the returned uuid — you need it for Step 3.

# Response (extract the uuid):
{
  "uuid": "abc123-def456-...",
  "name": "CI Study - Acme vs Mixpanel - Feb 2026",
  "agents": [ ... ]  // 10 persona objects
}
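To avoid copying the uuid by hand, Claude Code can capture it directly from the response. A minimal sketch, assuming jq is installed and group.json holds the recruit payload from Step 2:

# Capture the research group uuid into a shell variable for Step 3:
GROUP_UUID=$(curl -s -X POST "https://app.askditto.io/v1/research-groups/recruit" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d @group.json | jq -r '.uuid')
echo "Research group: ${GROUP_UUID}"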

Step 3: Create the Research Study

curl -s -X POST "https://app.askditto.io/v1/research-studies" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Competitive Perception: [Your Product] vs [Competitor A] - [Date]",
    "research_group_uuid": "GROUP_UUID_FROM_STEP_2"
  }'

Save the returned id — this is your study ID for all subsequent calls.

Step 4: Ask Questions (Sequentially)

Questions must be asked one at a time. Send Q1, poll until all jobs complete, then send Q2, and so on. Sequential questioning allows earlier answers to provide context for later questions — this creates conversational depth that batched submissions cannot achieve.
# Question 1: Brand Awareness
curl -s -X POST "https://app.askditto.io/v1/research-studies/STUDY_ID/questions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "question": "When you think about solutions in [category], which brands or tools come to mind first? What do you associate with each?"
  }'

# Response:
{
  "job_ids": ["job-001", "job-002", ... "job-010"]
}

Step 5: Poll for Responses

# Poll each job until status = "finished":
curl -s -X GET "https://app.askditto.io/v1/jobs/JOB_ID" \
  -H "Authorization: Bearer YOUR_API_KEY"

# Response when complete:
{
  "id": "job-001",
  "status": "finished",
  "result": {
    "answer": "The first tools that come to mind are..."
  }
}
Efficient polling pattern: Poll all 10 job IDs in a loop with a 5-second interval. Once all return "finished", proceed to the next question. A full 7-question study with 10 personas typically completes in 4–8 minutes of polling time.
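One way to implement that pattern is a small reusable helper. A sketch assuming bash and jq; the function name is illustrative, not part of the Ditto API:

# Poll a list of job IDs until every one reports status "finished":
wait_for_jobs() {
  local pending=("$@")
  while [ "${#pending[@]}" -gt 0 ]; do
    local still_running=()
    for job in "${pending[@]}"; do
      status=$(curl -s -H "Authorization: Bearer YOUR_API_KEY" \
        "https://app.askditto.io/v1/jobs/${job}" | jq -r '.status')
      [ "$status" != "finished" ] && still_running+=("$job")
    done
    pending=("${still_running[@]}")
    [ "${#pending[@]}" -gt 0 ] && sleep 5   # 5-second interval between polling passes
  done
}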

Repeat Steps 4–5 for all 7 questions.

# Full sequence:
# Q1: Brand awareness → Poll → Complete
# Q2: Head-to-head decision drivers → Poll → Complete
# Q3: Competitor strengths/weaknesses → Poll → Complete
# Q4: Claim credibility → Poll → Complete
# Q5: Proof point requirements → Poll → Complete
# Q6: Switching triggers/barriers → Poll → Complete
# Q7: Value vs premium → Poll → Complete
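A sketch of the full ask-then-poll loop, assuming the wait_for_jobs helper above, bash with jq, and a questions.txt file holding the seven customised questions one per line:

# Ask each question in order, waiting for all personas before moving on:
while IFS= read -r QUESTION; do
  JOB_IDS=$(jq -n --arg q "$QUESTION" '{question: $q}' | \
    curl -s -X POST "https://app.askditto.io/v1/research-studies/${STUDY_ID}/questions" \
      -H "Authorization: Bearer YOUR_API_KEY" \
      -H "Content-Type: application/json" \
      -d @- | jq -r '.job_ids[]')
  wait_for_jobs $JOB_IDS   # intentionally unquoted: one argument per job ID
done < questions.txt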

Step 6: Complete the Study

curl -s -X POST "https://app.askditto.io/v1/research-studies/STUDY_ID/complete" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json"

This triggers Ditto's analysis: overall summary, key segments, divergences, shared mindsets, and suggested follow-up questions. Poll the returned job IDs until complete.
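If the /complete response exposes its analysis jobs the same way the questions endpoint does, via a job_ids array (an assumption; check the actual response shape), the polling helper from Step 5 can wait for the analysis:

# Assumption: /complete returns a job_ids array like the questions endpoint does.
ANALYSIS_JOBS=$(curl -s -X POST "https://app.askditto.io/v1/research-studies/${STUDY_ID}/complete" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" | jq -r '.job_ids[]')
wait_for_jobs $ANALYSIS_JOBS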

Step 7: Get the Share Link

curl -s -X POST "https://app.askditto.io/v1/research-studies/STUDY_ID/share" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json"

# Response:
{
  "url": "https://app.askditto.io/organization/studies/shared/xyz123"
}
UTM tracking is mandatory. Append ?utm_source=ce for cold emails or ?utm_source=blog for blog articles. Never use raw share URLs without a UTM parameter.
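A small sketch for capturing the URL and appending the parameter before distribution, assuming jq:

# Capture the share URL and append the required UTM parameter:
SHARE_URL=$(curl -s -X POST "https://app.askditto.io/v1/research-studies/${STUDY_ID}/share" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" | jq -r '.url')
echo "${SHARE_URL}?utm_source=blog"   # or ?utm_source=ce for cold email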

5. Generating the Battlecard from Study Data

Once all responses are collected and the study is completed, Claude Code synthesises the 70 data points (10 personas × 7 questions) into a structured battlecard.
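Before synthesis, it helps to dump the raw answers into plain-text files Claude Code can read directly. A sketch for one question, assuming jq and the JOB_IDS captured when that question was asked in Step 4:

# Collect the persona answers for one question into a file for synthesis:
for job in $JOB_IDS; do
  curl -s -H "Authorization: Bearer YOUR_API_KEY" \
    "https://app.askditto.io/v1/jobs/${job}" | jq -r '.result.answer'
  echo "---"
done > q2_answers.txt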

Section-by-Section Generation

Why We Win (from Q2, Q5, Q7)

Extract every reason personas cited for leaning toward your product. Cluster by theme. Rank by frequency. The top 3–4 themes become your "Why We Win" bullets. Use the exact language personas used.

# Template:
## Why We Win vs [Competitor A]

1. **[Theme 1]** — [Customer language from Q2 responses] (cited by X/10)
2. **[Theme 2]** — [Customer language from Q2 responses] (cited by X/10)
3. **[Theme 3]** — [Customer language from Q5/Q7 responses] (cited by X/10)

**Evidence:** [Specific persona quotes that support each theme]

Competitor Strengths (from Q3)

Extract the "one thing they do really well" from all 10 responses. Cluster and rank. For each strength, draft a response your sales team can use when a prospect raises it.

# Template:
## Competitor Strengths (and How to Respond)

| Their Strength | How to Respond |
|----------------|----------------|
| [Strength 1] (cited by X/10) | "That's true, and here's how we approach it differently..." |
| [Strength 2] (cited by X/10) | "We hear that a lot. What we've found is..." |
Critical: Never deny genuine competitor strengths. Sales reps who pretend competitors have no strengths lose credibility instantly. Acknowledge, then redirect to where you differentiate.

Landmine Questions (from Q3 weaknesses + Q4 scepticism)

The "one thing they do poorly" from Q3 and the scepticism reasons from Q4 are the raw material for landmine questions. These are questions designed to expose competitor gaps without sounding adversarial.

# Template:
## Landmine Questions

- "Have you asked [Competitor A] about [weakness area]? It's worth understanding how they handle that."
- "When [Competitor A] says [claim], have they shown you [specific evidence]? We'd suggest asking for that."
- "What's their approach to [gap area]? That's been a common concern we hear from teams evaluating them."

Quick Dismisses (from Q4)

For each of the competitor's key claims where personas expressed scepticism, write a 1–2 sentence rebuttal.

# Template:
## Quick Dismisses

| When They Say... | You Say... |
|------------------|------------|
| "[Competitor claim 1]" | "[1-2 sentence rebuttal grounded in Q4 scepticism reasons]" |
| "[Competitor claim 2]" | "[1-2 sentence rebuttal]" |
| "[Competitor claim 3]" | "[1-2 sentence rebuttal]" |

Switching Triggers (from Q6)

Extract every trigger and barrier from Q6. Triggers inform outbound timing. Barriers inform objection handling.

# Template:
## Switching Triggers

**Attack when:**
- [Trigger 1] (e.g., "competitor raised prices") — cited by X/10
- [Trigger 2] (e.g., "product reliability issues") — cited by X/10
- [Trigger 3] (e.g., "team outgrew the tool") — cited by X/10

**Prepare for these objections:**
- [Barrier 1] (e.g., "migration pain") — cited by X/10
- [Barrier 2] (e.g., "team adoption risk") — cited by X/10

Value vs Premium Insight (from Q7)

This doesn't get its own battlecard section but informs the overall positioning tone: if personas choose the competitor when budget is unlimited but choose you when budget is tight, lead with value, total cost of ownership, and ROI rather than feature parity. If they choose you regardless of budget, lead with capability and differentiation.


6. Multi-Competitor Parallel Studies

Most products face multiple competitors. Claude Code can run competitive perception studies for several competitors simultaneously.

How to Structure It

Create separate research groups and studies for each competitor. Use the same 7-question template, customised with each competitor's context.

# Competitor A study:
curl -s -X POST "https://app.askditto.io/v1/research-groups/recruit" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "CI Study - Acme vs Competitor A - Feb 2026",
    "description": "Target buyers for competitive perception study",
    "group_size": 10,
    "filters": { "country": "USA", "age_min": 28, "age_max": 55, "employment": "employed" },
    "sampling_method": "random",
    "deduplicate": true
  }'

# Competitor B study (simultaneously):
curl -s -X POST "https://app.askditto.io/v1/research-groups/recruit" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "CI Study - Acme vs Competitor B - Feb 2026",
    "description": "Target buyers for competitive perception study",
    "group_size": 10,
    "filters": { "country": "USA", "age_min": 28, "age_max": 55, "employment": "employed" },
    "sampling_method": "random",
    "deduplicate": true
  }'

# Competitor C study (simultaneously):
# ... same pattern

Create studies for each group, then run the 7 questions through each study in parallel. Claude Code interleaves the polling across studies.
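A sketch of the parallel pattern, assuming Steps 2-6 have been wrapped in a per-competitor script; run_ci_study.sh is a hypothetical helper, not part of the Ditto API:

# Launch one full study per competitor as a background job, then wait for all of them:
for COMPETITOR in "Competitor A" "Competitor B" "Competitor C"; do
  ./run_ci_study.sh "$COMPETITOR" &
done
wait   # blocks until every parallel study has finished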

Timing

| Competitors | Studies | Approximate Time | Output |
|-------------|---------|------------------|--------|
| 1 | 1 study (10 personas, 7 questions) | ~30 min data collection + 15 min synthesis | 1 battlecard |
| 3 | 3 parallel studies | ~35 min data collection + 30 min synthesis | 3 battlecards |
| 5 | 5 parallel studies | ~40 min data collection + 50 min synthesis | 5 battlecards |
Data collection time is nearly the same regardless of how many competitors you study, because the studies run in parallel. The synthesis time scales linearly. A complete 5-competitor battlecard library takes approximately 90 minutes.

Segment-Specific Battlecards

If different competitors dominate different segments, use different demographic filters for each study:

# Competitor A threatens SMB market:
"filters": { "country": "USA", "age_min": 25, "age_max": 40, "employment": "self_employed" }

# Competitor B threatens enterprise market:
"filters": { "country": "USA", "age_min": 30, "age_max": 55, "education": "masters", "employment": "employed" }

# Competitor C threatens UK market:
"filters": { "country": "UK", "age_min": 28, "age_max": 50, "employment": "employed" }

7. Quarterly Competitive Tracking

A single study tells you how the market perceives your competitor today. Quarterly studies with fresh persona groups reveal how perceptions shift over time.

Setup

Run the same 7-question study each quarter. Use the same demographic filters but recruit fresh personas each time (Ditto handles deduplication).

Naming Convention

# Quarter-over-quarter naming:
"CI Study - Acme vs Mixpanel - Q1 2026"
"CI Study - Acme vs Mixpanel - Q2 2026"
"CI Study - Acme vs Mixpanel - Q3 2026"
"CI Study - Acme vs Mixpanel - Q4 2026"

Quarterly Trend Report

After each quarterly study, Claude Code compares findings with previous quarters:

| Metric | Q1 2026 | Q2 2026 | Trend |
|--------|---------|---------|-------|
| Brand association (Q1) | 7/10 named Competitor A first | 5/10 named Competitor A first | Declining awareness ↓ |
| Claim credibility (Q4) | 6/10 sceptical of key claim | 4/10 sceptical | Claim gaining traction ↑ (address in messaging) |
| Primary switching trigger (Q6) | "Price increases" | "Reliability issues" | New vulnerability emerged |
| Value vs premium (Q7) | 8/10 chose us on price | 6/10 chose us on price | Value perception improving ↑ |
Annual cost comparison: Four rounds of win/loss interviews per competitor per year through an agency: $50,000–$100,000+. Four quarterly Ditto studies per competitor: platform subscription only. The studies take approximately 45 minutes each.

8. Advanced: Claim Credibility Testing

Q4 in the standard study tests one key claim. For a deeper competitive teardown, run a dedicated claim credibility study that tests multiple claims from the competitor's marketing.

Claim Credibility Study Design

# Dedicated 5-question claim credibility study:

Q1: "[Competitor A] says they are 'the most intuitive analytics platform on the market.'
     Do you believe that? What would make you sceptical?"

Q2: "[Competitor A] claims their product 'saves teams 10 hours per week.'
     Does that sound credible? What would you need to see to believe it?"

Q3: "[Competitor A] advertises 'enterprise-grade security with SOC 2 compliance.'
     How important is that to you? Do you take that claim at face value?"

Q4: "[Competitor A] says 'customers see ROI within 30 days.'
     Is that realistic? What's your experience with claims like this?"

Q5: "Of all the claims we've discussed, which one would most influence your
     purchase decision? Which one matters least?"

The output is a claim credibility scorecard: which of the competitor's marketing claims land, which trigger scepticism, and which don't matter to buyers at all. This directly informs both your counter-messaging and your own marketing claims.
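One possible scorecard layout, following the template convention used elsewhere in this guide; the column names are suggestions rather than a fixed format:

# Template:
## Claim Credibility Scorecard: [Competitor A]

| Claim | Believed By | Top Scepticism Reason | Counter-Messaging Angle |
|-------|-------------|-----------------------|-------------------------|
| "[Claim 1]" | X/10 | [Most-cited doubt from responses] | [How your messaging exploits the gap] |
| "[Claim 2]" | X/10 | [Most-cited doubt] | [Angle] |
| "[Claim 3]" | X/10 | [Most-cited doubt] | [Angle] |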


9. Worked Example: SaaS Analytics Tool vs Mixpanel

Scenario

Your product: "Beacon Analytics" — a product analytics tool for mid-market SaaS
Competitor: Mixpanel
Category: Product analytics
Mixpanel's key claim: "Self-serve analytics that help you convert, engage, and retain more users"
Target buyer: US-based, employed, product managers and growth leads aged 28–45

Customised Questions

  1. "When you think about product analytics tools, which brands come to mind first? What do you associate with each?"
  2. "You're evaluating Beacon Analytics against Mixpanel for your team's product analytics. What would make you lean toward one or the other?"
  3. "What's the ONE thing Mixpanel does really well? What's the ONE thing they do poorly?"
  4. "If someone told you Mixpanel provides 'self-serve analytics that help you convert, engage, and retain more users,' would you believe them? What would make you sceptical?"
  5. "What would Beacon Analytics need to prove to win you over from Mixpanel? What evidence would you need?"
  6. "Have you ever switched from one analytics tool to another? What triggered the switch? What almost stopped you?"
  7. "If budget were no object, which product analytics tool would you choose and why? If budget were tight, would your answer change?"

Group Setup

curl -s -X POST "https://app.askditto.io/v1/research-groups/recruit" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "CI Study - Beacon vs Mixpanel - Feb 2026",
    "description": "Product managers and growth leads aged 28-45 for competitive perception study",
    "group_size": 10,
    "filters": {
      "country": "USA",
      "age_min": 28,
      "age_max": 45,
      "employment": "employed",
      "education": "bachelors"
    },
    "sampling_method": "random",
    "deduplicate": true
  }'

Hypothetical Findings

| Battlecard Section | Key Finding | Source |
|--------------------|-------------|--------|
| Why We Win | "Easier to set up" (7/10), "Better support for mid-market" (5/10), "More transparent pricing" (4/10) | Q2, Q5 |
| Competitor Strengths | "Mixpanel has deeper funnel analysis" (8/10), "More integrations" (5/10) | Q3 |
| Landmine Questions | "Have you asked about onboarding time?" (weakness: "steep learning curve" cited 6/10), "Ask about pricing at scale" (scepticism: "costs escalate" cited 4/10) | Q3, Q4 |
| Quick Dismisses | "Self-serve" claim: 7/10 sceptical — "self-serve in theory, but you need a data engineer to set it up properly" | Q4 |
| Switching Triggers | "Price increase" (6/10), "Needed something simpler" (4/10), "Poor customer support" (3/10) | Q6 |
| Value vs Premium | Unlimited budget: 6/10 chose Mixpanel. Tight budget: 8/10 chose Beacon. Implication: lead with ROI and total cost of ownership, not feature parity. | Q7 |

Generated Battlecard (Summary)

## Beacon Analytics vs Mixpanel — Competitive Battlecard

### Why We Win
1. **Faster time to value** — "I could set it up in an afternoon vs weeks with Mixpanel" (7/10)
2. **Built for mid-market** — "Mixpanel feels like it was designed for Uber, not us" (5/10)
3. **Transparent pricing** — "No surprise bills when we scale" (4/10)

### Competitor Strengths (and How to Respond)
| Their Strength | Response |
|----------------|----------|
| Deeper funnel analysis | "True for enterprise funnels. For mid-market, our flows cover 95% of use cases with half the setup." |
| More integrations | "We integrate with the core stack. Ask which of their 200+ integrations your team actually uses." |

### Landmine Questions
- "Have you asked Mixpanel about typical onboarding time? Worth understanding before you commit."
- "What does pricing look like at 50K monthly tracked users? Ask them to walk you through that."
- "Can your team actually self-serve, or will you need a data engineer for the initial setup?"

### Quick Dismisses
| They Say | You Say |
|----------|---------|
| "Self-serve analytics" | "Ask their customers how long setup took. We hear 'self-serve in theory, data engineer in practice' a lot." |
| "Most trusted in product analytics" | "Trusted by enterprise teams with dedicated data teams. For mid-market, trust means fast support and no surprises." |

### Switching Triggers
- **Price increase** → Outbound when Mixpanel raises prices (monitor their pricing page)
- **Complexity frustration** → Target teams who mention struggling with Mixpanel in community forums
- **Support issues** → Monitor G2 reviews for support complaints

### Budget Dynamics
Lead with TCO and ROI. Mixpanel wins on brand prestige with unlimited budget.
Beacon wins on value when budget is real. Frame the conversation around outcomes per dollar.

10. Connecting CI to Other PMM Workflows

Competitive intelligence feeds directly into other Ditto + Claude Code workflows:

| Downstream Workflow | What CI Study Provides | How to Use It |
|---------------------|------------------------|---------------|
| Positioning Validation | Q1 reveals which brands customers actually associate with your category | Use as competitive alternatives input for Dunford's framework (positioning Q1–Q2) |
| Messaging Testing | Q4 scepticism reveals competitor vulnerabilities | Design messaging variants that exploit credibility gaps identified in Q4 |
| Sales Enablement | Full battlecard + objection handling guide + discovery questions | Distribute to sales team. Use Q5 proof gaps to prioritise case study creation |
| Content Marketing | Competitive perception data is unique, original-research content | "We asked 10 buyers to compare [Category] tools — here's what they said" = high-value blog post |
| Product Strategy | Q3 "one thing they do poorly" aggregated across studies | Feed weakness patterns to product team as roadmap intelligence grounded in market perception |

11. Best Practices and Common Mistakes

Do

- Run one study per competitor, with questions customised to that competitor's name, claims, and context.
- Use the exact language personas used; battlecards written in customer language land better with sales teams.
- Acknowledge genuine competitor strengths and pair each one with a prepared response.
- Refresh battlecards quarterly with a fresh persona group so the intelligence stays current.

Don't

- Don't deny competitor strengths; reps who pretend the competitor has none lose credibility instantly.
- Don't batch the questions; ask them sequentially so earlier answers provide context for later ones.
- Don't ship "Why We Win" bullets without evidence; keep the cited-by counts and supporting persona quotes attached.
- Don't cover several competitors in one study; the questions lose focus and the insight per competitor weakens.

Common API Errors

| Error | Cause | Solution |
|-------|-------|----------|
| size parameter rejected | Wrong parameter name | Use group_size, not size |
| 0 agents recruited | State filter used full name | Use 2-letter codes: "TX" not "Texas" |
| Jobs stuck in "pending" | Normal for first 10–15 seconds | Continue polling with 5-second intervals |
| income filter rejected | Unsupported filter | Remove; use education/employment as proxy |
| Missing completion analysis | Forgot to call /complete | Always call POST /v1/research-studies/{id}/complete after the final question |

12. Frequently Asked Questions

How long does a full competitive perception study take?

Approximately 45 minutes end to end: 5 minutes for competitor research, 2 minutes for group creation and study setup, 5–8 minutes for question asking and polling, 2–3 minutes for completion analysis, and 10–15 minutes for Claude Code to synthesise the battlecard.

How many personas should I use?

10 per study is the recommended minimum. It provides enough diversity to identify patterns. For multi-competitor studies, use 10 per competitor (separate groups and studies running in parallel).

Should I run one study per competitor or one study covering multiple competitors?

One study per competitor. The 7 questions reference the specific competitor's name, claims, and context. A study that tries to cover three competitors becomes unfocused and produces weaker insight per competitor.

How often should I update battlecards?

Quarterly is the recommended cadence. Competitive landscapes shift, and a battlecard from six months ago may contain outdated intelligence. Quarterly Ditto studies provide a fresh perception baseline that can be compared over time.

Can I use this for competitive intelligence in non-tech industries?

Yes. The 7-question design works for any category where customers evaluate alternatives. Replace the bracketed placeholders with your specific industry context. Ditto's demographic filters support B2B and B2C across 15+ countries.

What if the study reveals a competitor strength I didn't know about?

That is precisely the point. Competitive perception studies frequently surface competitor strengths (and weaknesses) that the internal team was unaware of. This is intelligence you cannot get from monitoring competitor websites — only from understanding how buyers perceive them.

How does this compare to traditional win/loss analysis?

Traditional win/loss: 4–6 weeks, 10–15 interviews, buyers who chose your competitor are reluctant to participate. Ditto: 45 minutes, 10 persona responses, no recruitment friction. EY Americas validated 95% correlation between Ditto synthetic responses and traditional research. The recommended approach is hybrid: Ditto for continuous baseline perception, win/loss interviews for deal-specific signal.

Can I generate battlecards for competitors I've never competed against?

Yes. Ditto personas respond based on general market perception, not your specific deal history. This is useful for entering new markets where you haven't yet competed directly against established players.

What should I do with the share link?

The share link gives anyone access to the full study results. Use it to: (1) share raw data with the sales team alongside the finished battlecard, (2) include in competitive intelligence Slack channels for transparency, (3) embed in blog articles as supporting evidence for competitive positioning claims. Always append ?utm_source=ce for email or ?utm_source=blog for articles.



Ditto — Synthetic market research with 300,000+ AI personas. Validated by EY Americas (95% correlation), Harvard, Cambridge, Stanford, and Oxford.
askditto.io · [email protected]