Pricing Research with Ditto + Claude Code

A complete Claude Code guide to pricing research and willingness-to-pay analysis. Seven-question study design adapting Van Westendorp, conjoint, and qualitative interview techniques. Feature-tier analysis, packaging preference, cross-segment pricing, and six deliverables from a single study. From guesswork to validated pricing in 30 minutes.

For Claude Code agents and AI coding assistants. This guide provides copy-paste API calls, study designs, and proven workflows for conducting pricing research with Ditto's synthetic research API. Every command is production-tested. The output is a set of pricing recommendations backed by qualitative evidence from target personas.
Contents
  1. Why Research Pricing with Synthetic Personas
  2. Traditional Methods and What This Replaces
  3. The 7-Question Pricing Study
  4. Complete API Workflow: Step by Step
  5. Generating the Six Deliverables
  6. Deep Dive: Feature-Tier Analysis
  7. Cross-Segment Pricing Research
  8. Advanced: Testing Multiple Anchor Points
  9. Worked Example: B2B Analytics SaaS
  10. Connecting to Other PMM Workflows
  11. Best Practices and Common Mistakes
  12. Frequently Asked Questions

1. Why Research Pricing with Synthetic Personas

A 1% improvement in pricing yields an 11% improvement in operating profit (McKinsey). Yet the average SaaS company spends only six hours on pricing over the lifetime of the product (Price Intelligently / Paddle). Pricing is the most impactful revenue lever that receives the least rigorous research.

The root cause is access. Traditional pricing research is expensive, slow, and technically complex:

Method | Time | Cost | Limitation
Conjoint analysis (Qualtrics, Sawtooth) | 4-8 weeks | $30K-$100K | Requires statistical expertise, large sample (200+), complex survey design
Simon-Kucher consulting | 2-4 months | $100K+ | Enterprise-only pricing; inaccessible to most companies
Van Westendorp survey | 2-4 weeks | $5K-$20K | Needs 200+ respondents for reliability, struggles with novel products
Customer interviews | 2-6 weeks | $5K-$15K | Anchoring bias, social desirability, respondents understate willingness to pay
Ditto + Claude Code | 30 minutes | API usage only | Qualitative directional data, not statistical precision (validated at 95% correlation)

The core value: Ditto pricing studies produce qualitative directional data: Is your price in the right range? Does your tier structure make sense? Does your differentiator justify a premium? This is not a replacement for full conjoint analysis on bet-the-business decisions. It is a replacement for the informed guesswork that constitutes the pricing process at most companies.

2. Traditional Methods and What This Replaces

The 7-question study design adapts techniques from three established pricing research methods. Understanding the originals helps Claude Code agents interpret results correctly.

Van Westendorp Price Sensitivity Meter

Developed in 1976 by Dutch economist Peter van Westendorp. Asks four questions to map an acceptable price range:

  1. Too cheap: "At what price would this be so cheap that you would question its quality?"
  2. Bargain: "At what price would this be a great value?"
  3. Getting expensive: "At what price would this start to feel expensive, though you would still consider it?"
  4. Too expensive: "At what price would this be so expensive that you would not consider it?"

The intersection points define the range of acceptable pricing and the optimal price point. Our Question 2 adapts this into a qualitative format suitable for Ditto's persona model: instead of four precise price thresholds, we test gut reactions to two specific price points and ask for the walk-away threshold.

Conjoint Analysis

Presents respondents with combinations of features and prices, forcing trade-off decisions that reveal implicit willingness to pay for each feature independently. Our Questions 4 and 5 adapt this principle: instead of statistical trade-off matrices, we ask directly which features are table-stakes versus premium, and which single feature they would keep above all others.

Qualitative Pricing Interviews

Open-ended conversations about value, price perception, and purchase decision factors. Our Questions 1, 3, 6, and 7 draw from this tradition, capturing the emotional and contextual dimensions that quantitative methods miss: how people feel about what they pay, what packaging model they prefer and why, and whether a competitor's lower price would trigger switching.

Key distinction: Ditto pricing studies do not produce a single "optimal price point" the way conjoint analysis does. They produce a price sensitivity band (acceptable range), feature-tier preferences (what goes where), and qualitative reasoning (why personas react the way they do). For most pricing decisions, this qualitative depth is more actionable than a statistically precise number with no context.

3. The 7-Question Pricing Study

Each question targets a specific dimension of pricing perception. Claude Code customises the bracketed sections with the actual product details, price points, and feature list before running the study.

Q# | Question | Pricing Dimension | Adapted From
1 | "What do you currently spend on [category] solutions per month or year? How do you feel about that price: is it fair, too much, or a bargain?" | Current spend benchmark | Qualitative interview
2 | "If [product] costs [Price A], what is your gut reaction? Now what about [Price B]? At what price would you say 'that is too expensive, I am out'?" | Price sensitivity + thresholds | Van Westendorp (adapted)
3 | "Would you prefer: (a) a free tier with limits, (b) a low-cost plan with core features, or (c) a premium all-in-one plan? Why?" | Packaging preference | Packaging research
4 | "Which of these features would you pay extra for? Which should be included in every plan? [Feature list]" | Feature-tier allocation | Conjoint (adapted)
5 | "If you could only afford one feature, which would you keep? Which feature, if removed, would make you cancel?" | Must-have vs nice-to-have | MaxDiff / conjoint
6 | "Annual billing saves 20% but locks you in. Monthly billing is flexible but costs more. Which do you prefer and why?" | Billing preference | Pricing psychology
7 | "A competitor offers [similar product] at [lower price] but without [key differentiator]. Would you switch? What would keep you?" | Competitive price-value trade-off | Win/loss analysis

Customisation Rules

Critical: The price points in Question 2 must be chosen carefully. Test two points that bracket your expected price: one at the lower end of your range and one at the upper end. Do not test obviously absurd prices ($1 or $10,000) as this wastes the question. The walk-away threshold captures the ceiling.

Before running the study, Claude Code should:

  1. Research the product via web search to understand features, competitors, and current pricing (if publicly available)
  2. Identify the competitive price range by checking 2-3 competitors' pricing pages
  3. Set anchor points for Q2: [Price A] should be the competitive midpoint, [Price B] should be 1.5-2x the midpoint
  4. Build the feature list for Q4: include 6-8 features covering core functionality, differentiators, and premium add-ons
  5. Identify the key differentiator for Q7: the feature or capability that competitors lack
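The anchor rule in step 3 above is simple enough to compute mechanically once competitor prices are collected. A minimal sketch (the `q2_anchors` helper and the competitor price list are illustrative, not part of the Ditto API):

```python
from statistics import median

def q2_anchors(competitor_prices, multiplier=1.75):
    """Derive the two Q2 anchor points from a list of competitor prices.

    Price A sits at the competitive midpoint (median); Price B at 1.5-2x
    the midpoint, with 1.75x used as the default here.
    """
    mid = median(competitor_prices)
    return round(mid), round(mid * multiplier)

# Illustrative competitor monthly prices, not real market data
price_a, price_b = q2_anchors([49, 79, 99])
```

With the competitor prices above this yields $79 and $138, which bracket an expected price around $99 without straying into absurd territory.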

Question Sequencing

Questions must be asked in order, one at a time, waiting for all persona responses before proceeding to the next question. The sequence is deliberate:

  1. Q1 establishes the current-spend benchmark before any product price is mentioned, so later reactions are grounded rather than anchored
  2. Q2 tests raw price sensitivity before Q3 reveals the tier structure and its prices
  3. Q4 and Q5 probe feature value only after price context is set, so "pay extra" answers are calibrated
  4. Q6 and Q7 come last: billing and competitive trade-offs presuppose that the persona has already formed a view of the product's value

4. Complete API Workflow: Step by Step

Step 1: Create the Research Group

Recruit personas matching your target buyer profile. For pricing research, match demographics to your actual customer base as closely as possible.

curl -s -X POST "https://app.askditto.io/v1/research-groups/recruit" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Pricing Study: US SaaS Buyers (30-50)",
    "description": "US professionals aged 30-50 for pricing validation of analytics SaaS",
    "group_size": 10,
    "filters": {
      "country": "US",
      "age_min": 30,
      "age_max": 50,
      "employment_status": "Employed"
    },
    "sampling_method": "random",
    "deduplicate": true
  }'

Save the group.uuid from the response.

Step 2: Create the Research Study

curl -s -X POST "https://app.askditto.io/v1/research-studies" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Pricing Validation: DataPulse Analytics Platform",
    "objective": "Validate pricing structure, willingness-to-pay thresholds, feature-tier allocation, and packaging preferences for a B2B analytics SaaS targeting marketing and product teams",
    "research_group_uuid": "GROUP_UUID_FROM_STEP_1",
    "shareable": true
  }'

Save the study.id from the response.

Step 3: Ask Questions Sequentially

Ask one question at a time. Wait for all jobs to reach "finished" status before asking the next question. Asking multiple questions simultaneously produces unreliable responses because personas lose conversational context.
# Question 1: Current Spend Benchmark
curl -s -X POST "https://app.askditto.io/v1/research-studies/STUDY_ID/questions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "question": "What do you currently spend on analytics or business intelligence solutions per month or year? How do you feel about that price — is it fair, too much, or a bargain? What are you getting for that money?"
  }'

# Save the job_ids from the response, then poll each one:

# Poll job status (repeat until status = "finished" for ALL jobs)
curl -s "https://app.askditto.io/v1/jobs/JOB_ID" \
  -H "Authorization: Bearer YOUR_API_KEY"

# When all jobs are finished, ask Question 2:

# Question 2: Price Sensitivity
curl -s -X POST "https://app.askditto.io/v1/research-studies/STUDY_ID/questions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "question": "If DataPulse costs $79 per month, what is your gut reaction? Now what about $199 per month? At what monthly price would you say that is too expensive and you would look for alternatives?"
  }'

# Wait for all jobs to finish, then Question 3:

# Question 3: Packaging Preference
curl -s -X POST "https://app.askditto.io/v1/research-studies/STUDY_ID/questions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "question": "Would you prefer: (a) a free tier with limited dashboards and data sources, (b) a $79/month plan with core analytics and reporting, or (c) a $199/month premium plan with AI insights, unlimited sources, and team collaboration? Why?"
  }'

# Wait, then Question 4:

# Question 4: Feature-Tier Allocation
curl -s -X POST "https://app.askditto.io/v1/research-studies/STUDY_ID/questions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "question": "Here are features DataPulse offers: (1) Real-time dashboards, (2) Custom reports, (3) AI anomaly detection, (4) Slack/email alerts, (5) API access, (6) Multi-team collaboration, (7) Predictive forecasting, (8) White-label client reports. Which should be included in every plan? Which would you pay extra for? Which do you not care about?"
  }'

# Wait, then Question 5:

# Question 5: Must-Have vs Cancel Trigger
curl -s -X POST "https://app.askditto.io/v1/research-studies/STUDY_ID/questions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "question": "If you could only afford one feature from DataPulse, which would you keep above all others? And which feature, if it were removed, would make you cancel your subscription entirely?"
  }'

# Wait, then Question 6:

# Question 6: Billing Preference
curl -s -X POST "https://app.askditto.io/v1/research-studies/STUDY_ID/questions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "question": "Annual billing saves 20% but locks you in for 12 months. Monthly billing is flexible but costs more. Which would you choose and why? What would make you commit to annual?"
  }'

# Wait, then Question 7:

# Question 7: Competitive Price-Value Trade-off
curl -s -X POST "https://app.askditto.io/v1/research-studies/STUDY_ID/questions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "question": "A competitor offers similar analytics dashboards at $49 per month but without AI anomaly detection or predictive forecasting. Would you choose the cheaper option? What would DataPulse need to prove or offer to justify the higher price?"
  }'
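The ask-wait-repeat loop above can be condensed into a small driver. This is an illustrative sketch: the endpoints, the `job_ids` field, and the "finished" status match the curl calls above, but the injected `post`/`get` callables (thin wrappers around an HTTP client such as requests, returning parsed JSON) are an assumed design choice that keeps the driver testable offline:

```python
import time

API = "https://app.askditto.io/v1"

def ask_and_wait(post, get, study_id, question, poll_interval=5):
    """Ask one question, then block until every persona job is finished.

    `post` and `get` are injected HTTP callables so the sequencing logic
    can be exercised without network access.
    """
    created = post(f"{API}/research-studies/{study_id}/questions",
                   json={"question": question})
    job_ids = created["job_ids"]
    while True:
        statuses = [get(f"{API}/jobs/{jid}")["status"] for jid in job_ids]
        if all(s == "finished" for s in statuses):
            return job_ids
        time.sleep(poll_interval)

def run_study(post, get, study_id, questions):
    """Run the full question list strictly in order, one at a time."""
    for q in questions:
        ask_and_wait(post, get, study_id, q)
```

Because `ask_and_wait` blocks until every job is finished, personas never receive a new question while still answering the previous one, which is the failure mode the sequential rule guards against.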

Step 4: Complete the Study

# After all 7 questions have been answered:
curl -s -X POST "https://app.askditto.io/v1/research-studies/STUDY_ID/complete" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json"

# Poll the completion jobs until finished (generates AI summary and insights)

Step 5: Generate Share Link

curl -s -X POST "https://app.askditto.io/v1/research-studies/STUDY_ID/share" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json"

# Returns: { "share_url": "https://app.askditto.io/organization/studies/shared/TOKEN" }

Step 6: Fetch All Responses for Analysis

curl -s "https://app.askditto.io/v1/research-studies/STUDY_ID/questions" \
  -H "Authorization: Bearer YOUR_API_KEY"

# Returns all questions with all persona responses
# Use this data to generate the six deliverables

5. Generating the Six Deliverables

A completed pricing study yields 70 qualitative responses (10 personas x 7 questions). Claude Code analyses these and produces six structured deliverables:

Deliverable | Source Questions | What It Contains | Who Uses It
Price Sensitivity Band | Q1, Q2 | Acceptable price range derived from persona reactions and stated walk-away thresholds. Not a single number but a directional range with qualitative reasoning. | Founders, PMM, Finance
Feature-Tier Recommendation | Q4, Q5 | Proposed tier structure: which features are base (table-stakes), which are premium upsell candidates, which generated insufficient interest for any tier. | Product, PMM, Sales
Packaging Preference Breakdown | Q3 | Split between freemium, flat-rate, and tiered preferences with the qualitative reasoning behind each. Directly informs the packaging model decision. | Product, PMM, Growth
Price Anchoring Strategy | Q1, Q2 | How pricing should be framed relative to current spend and perceived value. Includes emotional context: "painful but necessary" vs "fair" vs "bargain". | Marketing, Sales, Web
Competitive Price Positioning | Q7 | Whether the differentiator justifies the premium over competitors. Lists what proof points, guarantees, or value would defend the price in sales conversations. | Sales, PMM, CS
Willingness-to-Pay by Persona | Q2, Q7, demographics | Price sensitivity variation by persona profile. Maps demographic/psychographic differences to different thresholds and preferences. | PMM, Sales, Product

Generating the Price Sensitivity Band

Parse Q2 responses to extract three data points per persona: reaction to Price A, reaction to Price B, and stated walk-away threshold.

# Pseudocode for price sensitivity band extraction
responses_q1 = fetch_responses(study_id, question_1_id)
responses_q2 = fetch_responses(study_id, question_2_id)

current_spend = []
price_a_reactions = []  # e.g., "fair", "cheap", "expensive", "about right"
price_b_reactions = []
walk_away_thresholds = []

for response in responses_q2:
    text = response["response_text"]
    # Extract: reaction to Price A (positive/neutral/negative)
    # Extract: reaction to Price B (positive/neutral/negative)
    # Extract: stated walk-away threshold (dollar amount)
    # Cross-reference with Q1 current spend for context

# Output:
# PRICE SENSITIVITY BAND
# =======================
# Current spend range: $50-$300/month (median: $120)
# Emotional context: 6/10 describe current spend as "fair", 3/10 as "too much"
#
# Price A ($79/mo) reactions:
#   7/10 positive ("reasonable", "about right", "good value")
#   2/10 neutral ("depends on features")
#   1/10 negative ("still need to justify internally")
#
# Price B ($199/mo) reactions:
#   3/10 positive ("worth it if AI features deliver")
#   4/10 conditional ("need to see ROI first")
#   3/10 negative ("too expensive for my team size")
#
# Walk-away thresholds: range $150-$500, median $250
#
# RECOMMENDED RANGE: $79-$199/month
# Sweet spot: $99-$149 (positive reactions from 8/10, below median walk-away)
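A runnable version of the walk-away extraction step can use a simple regex over the response text. This is a sketch: the heuristic of taking the last dollar figure (because the walk-away question comes last in Q2) is an assumption, and real responses are messy enough that an LLM extraction pass is the more robust fallback:

```python
import re
from statistics import median

def walk_away_threshold(text):
    """Extract the stated walk-away dollar amount from a Q2 response.

    Heuristic: take the last dollar figure mentioned, since the walk-away
    question comes last in Q2. Returns None when no amount is found.
    """
    amounts = re.findall(r"\$([\d,]+(?:\.\d+)?)", text)
    return float(amounts[-1].replace(",", "")) if amounts else None

# Illustrative responses, not real study output
responses = [
    "At $79 I'd try it. $199 feels steep. Above $250 I'm out.",
    "Both seem fine, but anything past $300 is too much for us.",
]
thresholds = [t for t in map(walk_away_threshold, responses) if t is not None]
band_ceiling = median(thresholds)   # feeds the walk-away line of the band
```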

Generating the Feature-Tier Recommendation

# Parse Q4 (feature allocation) and Q5 (must-have/cancel trigger)
responses_q4 = fetch_responses(study_id, question_4_id)
responses_q5 = fetch_responses(study_id, question_5_id)

# For each feature, count:
# - "should be in every plan" mentions (table-stakes)
# - "would pay extra" mentions (premium)
# - "don't care" mentions (deprioritise)

# Cross-reference with Q5:
# - "would keep above all others" = core value driver
# - "removal would make me cancel" = retention anchor

# Output:
# FEATURE-TIER RECOMMENDATION
# ============================
#
# BASE TIER (include in all plans):
#   - Real-time dashboards        [9/10 "every plan", Q5: #1 "would keep"]
#   - Custom reports              [8/10 "every plan"]
#   - Slack/email alerts           [7/10 "every plan"]
#
# PREMIUM TIER (upsell candidates):
#   - AI anomaly detection         [7/10 "pay extra", Q5: #1 "cancel trigger"]
#   - Predictive forecasting       [6/10 "pay extra"]
#   - Multi-team collaboration     [5/10 "pay extra"]
#
# ENTERPRISE / ADD-ON:
#   - API access                   [4/10 "pay extra", 3/10 "don't care"]
#   - White-label client reports   [3/10 "pay extra", 5/10 "don't care"]
#
# KEY INSIGHT: AI anomaly detection is the #1 cancel trigger (retention
# anchor) but NOT the #1 "would keep" feature (acquisition driver).
# This means: position real-time dashboards as the hero feature to
# acquire, and expose at least a limited version of AI anomaly
# detection in lower tiers so the retention anchor starts working
# before the upgrade.

Generating the Packaging Preference Breakdown

# Parse Q3 responses for preference and reasoning
responses_q3 = fetch_responses(study_id, question_3_id)

preferences = {"free_tier": 0, "low_cost": 0, "premium": 0}
reasoning = {"free_tier": [], "low_cost": [], "premium": []}

for response in responses_q3:
    # Identify which option the persona chose
    # Extract their reasoning
    # Categorise the reasoning (e.g., "need to prove value", "hate limits",
    #   "want everything", "budget constraints")

# Output:
# PACKAGING PREFERENCE BREAKDOWN
# ================================
# Free tier:    3/10 (30%)
#   Reasons: "Need to prove value to my boss before requesting budget" (2x),
#            "Want to explore before committing" (1x)
#
# Low-cost ($79): 5/10 (50%)
#   Reasons: "Core analytics is all I need" (3x),
#            "Premium feels like overkill for a small team" (2x)
#
# Premium ($199): 2/10 (20%)
#   Reasons: "AI insights are the whole point" (1x),
#            "Hate being nickel-and-dimed on upgrades" (1x)
#
# IMPLICATION: The market wants a free-to-paid conversion path.
# Lead with a free tier for acquisition, price core analytics
# at $79, and position AI features as the premium upgrade.
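A runnable counterpart to the tally step, using naive option detection. This is a sketch under a stated assumption: it keys on the "(a)/(b)/(c)" markers from the Q3 wording, and real persona responses often need fuzzier matching (or an LLM pass) than this:

```python
def detect_choice(text):
    """Map a Q3 response to its packaging option via the (a)/(b)/(c) markers.

    Returns None when the persona does not state a clear choice.
    """
    lowered = text.lower()
    for marker, option in [("(a)", "free_tier"), ("(b)", "low_cost"), ("(c)", "premium")]:
        if marker in lowered:
            return option
    return None

# Illustrative responses, not real study output
responses = [
    "I'd pick (a); I need to prove value to my boss before requesting budget.",
    "(b) works for me, core analytics is all I need.",
    "(b), the premium plan feels like overkill for a small team.",
]
counts = {"free_tier": 0, "low_cost": 0, "premium": 0}
for r in responses:
    choice = detect_choice(r)
    if choice is not None:
        counts[choice] += 1
```

The counts feed the preference split; the reasoning text behind each choice still needs qualitative categorisation as described above.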

6. Deep Dive: Feature-Tier Analysis

Feature-tier allocation is the most directly actionable output of a pricing study. It answers the question most pricing pages get wrong: what goes where.

The Three Categories

Category | Signal from Personas | Pricing Implication
Table-stakes | 7+ of 10 personas say "should be in every plan" | Include in the base tier. Gating these behind a paywall creates friction without premium perception.
Premium | 5+ of 10 say "would pay extra", fewer than 5 say "every plan" | Gate behind a higher tier. These features carry pricing power and justify the upgrade.
Deprioritise | 4+ of 10 say "don't care" or don't mention it | Consider removing from the pricing page entirely. Including features nobody values clutters the comparison and dilutes perceived value of features people do care about.
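The thresholds above can be applied mechanically once Q4 mentions are tallied per feature. A sketch, with the threshold ratios scaled to the group size so groups other than 10 work too (the `classify_feature` helper is illustrative, not a Ditto API):

```python
def classify_feature(every_plan, pay_extra, dont_care, group_size=10):
    """Map Q4 mention counts to a tier category using the table's thresholds.

    Arguments are counts of personas giving each signal. Checks run
    top-down, so a strong table-stakes signal wins over the others.
    """
    if every_plan >= 0.7 * group_size:
        return "table-stakes"
    if pay_extra >= 0.5 * group_size and every_plan < 0.5 * group_size:
        return "premium"
    if dont_care >= 0.4 * group_size:
        return "deprioritise"
    return "mixed"   # split signal: cross-reference with demographics

# Counts taken from the sample output in section 5
classify_feature(9, 0, 0)   # real-time dashboards -> "table-stakes"
classify_feature(1, 7, 0)   # AI anomaly detection -> "premium"
classify_feature(0, 3, 5)   # white-label reports  -> "deprioritise"
```

A 4/4/2 split, like the one discussed under "Interpreting Mixed Signals" below, falls through every threshold and returns "mixed".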

Acquisition vs Retention Features

Question 5 reveals a critical distinction that most pricing analyses miss:

When these differ, you have a strategic insight. If the acquisition driver is "real-time dashboards" but the retention anchor is "AI anomaly detection," your product strategy should include AI features in every tier (even a limited version) so that retention value starts building immediately, even though dashboards are the reason people showed up.

Interpreting Mixed Signals

Some features will receive split responses: 4/10 say "every plan," 4/10 say "pay extra," 2/10 say "don't care." This split usually means the feature's value depends on the persona's context. Cross-reference with demographics: if the "pay extra" responses cluster in one segment (for example, technical or senior personas), the feature belongs in a tier aimed at that segment; if the split shows no demographic pattern, the feature's value proposition is genuinely unclear and needs messaging work before it can be priced.


7. Cross-Segment Pricing Research

The basic study tests pricing with one audience. Cross-segment pricing reveals how price sensitivity varies by buyer type, which is where most pricing decisions get complex.

Running Parallel Studies

Claude Code orchestrates three studies concurrently with different persona groups, all using the same seven questions:

# Group 1: SMB decision-makers
curl -s -X POST "https://app.askditto.io/v1/research-groups/recruit" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Pricing Study: SMB Buyers (25-40)",
    "group_size": 10,
    "filters": {
      "country": "US",
      "age_min": 25,
      "age_max": 40,
      "employment_status": "Employed"
    }
  }'

# Group 2: Enterprise buyers
curl -s -X POST "https://app.askditto.io/v1/research-groups/recruit" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Pricing Study: Enterprise Buyers (35-55)",
    "group_size": 10,
    "filters": {
      "country": "US",
      "age_min": 35,
      "age_max": 55,
      "employment_status": "Employed"
    }
  }'

# Group 3: Technical evaluators
curl -s -X POST "https://app.askditto.io/v1/research-groups/recruit" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Pricing Study: Technical Buyers (Bachelor+)",
    "group_size": 10,
    "filters": {
      "country": "US",
      "age_min": 25,
      "age_max": 50,
      "employment_status": "Employed",
      "education_level": "Bachelors"
    }
  }'
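Because the three studies are independent, they can run concurrently. A sketch using a thread pool, where `run_pricing_study` is a hypothetical placeholder for the full recruit-create-ask sequence (not a Ditto SDK call); the segment filters mirror the three recruit calls above:

```python
from concurrent.futures import ThreadPoolExecutor

# Segment filters mirroring the three recruit calls above
SEGMENTS = {
    "SMB":        {"age_min": 25, "age_max": 40},
    "Enterprise": {"age_min": 35, "age_max": 55},
    "Technical":  {"age_min": 25, "age_max": 50, "education_level": "Bachelors"},
}

def run_pricing_study(segment, filters):
    """Placeholder for the recruit -> create study -> ask-7-questions
    sequence for one segment. Returns the segment name with its results."""
    return segment, {"filters": filters}   # stub result for illustration

with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(run_pricing_study, seg, f) for seg, f in SEGMENTS.items()]
    results = dict(f.result() for f in futures)
```

Within each study the questions must still run sequentially; the concurrency is only across studies, which is what keeps the total wall-clock time near that of a single study.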

Cross-Segment Analysis Output

CROSS-SEGMENT PRICING MATRIX
==============================

                    SMB (25-40)      Enterprise (35-55)   Technical (25-50)
Current spend:      $30-$100/mo      $200-$800/mo         $50-$200/mo
Price A ($79):      8/10 positive    6/10 "cheap?"        7/10 positive
Price B ($199):     2/10 positive    8/10 positive        5/10 conditional
Walk-away:          median $120      median $500           median $250
Packaging pref:     Free tier (6/10) Premium (7/10)       Low-cost (5/10)
Billing pref:       Monthly (8/10)   Annual (7/10)        Monthly (6/10)
Competitor switch:  6/10 would       2/10 would           4/10 would

IMPLICATIONS:
- SMB needs a free-to-$79 path with monthly billing and fast time-to-value
- Enterprise expects $199+ pricing and prefers annual contracts with sales
- Technical buyers are price-moderate but need capability proof before committing
- A single pricing page with 3 tiers can work, BUT the sales motion
  must differ: self-serve for SMB, sales-led for enterprise
- The $79 price point risks "too cheap" perception for enterprise buyers
GTM implication: If your SMB segment strongly prefers free-tier entry and your enterprise segment expects a sales conversation, you need a hybrid motion (product-led sales). The pricing study does not just inform the numbers. It informs how you sell.

8. Advanced: Testing Multiple Anchor Points

Question 2 presents specific price points, which means persona responses are anchored to those numbers. This is both useful (tests specific prices) and limiting (anchoring bias). The mitigation: run two parallel studies with different anchor points.

Dual-Anchor Study Design

# Study A: Low anchor
# Q2: "If DataPulse costs $49/mo, what is your gut reaction?
#      Now what about $129/mo? At what price is it too expensive?"

# Study B: High anchor
# Q2: "If DataPulse costs $99/mo, what is your gut reaction?
#      Now what about $249/mo? At what price is it too expensive?"

# All other questions remain identical.
# Run both studies concurrently with similar persona groups.

Interpreting Dual-Anchor Results

Signal | Interpretation | Action
Walk-away thresholds are similar across both studies | The ceiling is real, not an artefact of anchoring | Price below the shared ceiling with confidence
Walk-away is higher in Study B (high anchor) | Anchoring effect: personas adjusted expectations upward | The true ceiling is between the two; lean toward the higher Study B data
Positive reactions to $129 (Study A) but negative to $99 (Study B) | Framing effect: $129 feels like a deal when compared to $49, but $99 feels expensive when compared to $249 | Your pricing page framing matters as much as the number itself. Test pricing page layouts, not just prices.
Time cost: Dual-anchor testing doubles the study time to ~60 minutes. Only use this approach when the price point is high-stakes (e.g., repricing an existing product with a large customer base). For new products, a single study with well-chosen anchors is sufficient.
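Interpreting the first two rows of the table reduces to comparing median walk-away thresholds across the two studies. A sketch with illustrative numbers; the 15% `tolerance` is an assumed judgment cutoff, not a Ditto parameter:

```python
from statistics import median

def anchor_signal(low_anchor_thresholds, high_anchor_thresholds, tolerance=0.15):
    """Compare median walk-away thresholds from the two anchor studies.

    Medians that agree within `tolerance` suggest the ceiling is real;
    a materially higher high-anchor median indicates anchoring bias,
    with the true ceiling lying between the two medians.
    """
    lo = median(low_anchor_thresholds)
    hi = median(high_anchor_thresholds)
    estimate = (lo + hi) / 2
    if abs(hi - lo) <= tolerance * lo:
        return "ceiling-real", estimate
    return "anchoring-effect", estimate

# Illustrative walk-away thresholds from Study A (low) and Study B (high)
signal, est_ceiling = anchor_signal([150, 200, 250], [200, 250, 300])
```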

9. Worked Example: B2B Analytics SaaS

Context: DataPulse Analytics

Product: B2B analytics platform for marketing and product teams. Real-time dashboards, custom reports, AI anomaly detection, predictive forecasting.

Current state: Pricing set at launch 18 months ago by looking at Mixpanel, Amplitude, and Heap. Currently $99/month flat rate, no tiers. Founder suspects they are underpriced but is nervous about raising prices.

Goal: Validate whether a tiered pricing model ($79 / $149 / $249) would be accepted, and which features belong in each tier.

Study Setup

Group: 10 US adults aged 30-50, employed (matching ICP: marketing managers,
       product leads, data analysts)
Study: "DataPulse Pricing Validation"
Anchor points: $79 and $199 (Q2)
Feature list: Real-time dashboards, Custom reports, AI anomaly detection,
              Slack/email alerts, API access, Multi-team collaboration,
              Predictive forecasting, White-label client reports
Competitor for Q7: "A competitor offers dashboards and reports at $49/mo
                    but without AI features"

Results Summary

Q1 - CURRENT SPEND:
  Range: $0 (using free tools) to $400/mo
  Median: $120/mo
  Emotional context: 5/10 "fair", 3/10 "too much for what I get", 2/10 "bargain"
  Key quote: "I pay $200/mo for Amplitude and use maybe 30% of the features.
  I'd pay the same for something simpler that I actually use." - Maria, 38,
  Marketing Director, Chicago

Q2 - PRICE SENSITIVITY:
  $79/mo: 8/10 positive ("reasonable", "good value", "I'd try it")
  $199/mo: 4/10 positive, 3/10 conditional ("need to see AI in action first"),
           3/10 negative ("hard to justify for a team of 3")
  Walk-away thresholds: $150, $200, $200, $250, $250, $300, $300, $350, $400, $500
  Median walk-away: $275

Q3 - PACKAGING:
  Free tier: 2/10 ("always want to try before I buy")
  $79 core plan: 5/10 ("just want dashboards and reports, don't need AI")
  $199 premium: 3/10 ("if the AI actually works, it's worth it")

Q4 - FEATURE TIERS:
  Every plan: Real-time dashboards (10/10), Custom reports (9/10),
              Slack alerts (7/10)
  Pay extra: AI anomaly detection (8/10), Predictive forecasting (7/10),
             Multi-team collab (5/10)
  Don't care: White-label reports (6/10 don't care), API access (4/10 don't care)

Q5 - MUST-HAVE vs CANCEL:
  Keep one feature: Real-time dashboards (6/10), AI anomaly detection (3/10),
                    Custom reports (1/10)
  Cancel trigger: AI anomaly detection (4/10), Real-time dashboards (3/10),
                  Custom reports (2/10), Predictive forecasting (1/10)

Q6 - BILLING:
  Monthly: 6/10 ("flexibility", "want to cancel if we outgrow it")
  Annual: 4/10 ("20% savings is real money", "already committed to analytics")
  Key insight: "I'd go annual if there was a 3-month trial period first" (3 personas)

Q7 - COMPETITIVE:
  Would switch to cheaper: 3/10 ("dashboards are dashboards")
  Would stay with DataPulse: 5/10 ("AI features are why I'm here")
  Conditional: 2/10 ("stay if DataPulse can prove ROI of AI features")
  Key defence: "Show me one example where the anomaly detection caught
  something I would have missed" - James, 42, Product Lead, Austin

Deliverables Generated

PRICE SENSITIVITY BAND
=======================
Acceptable range: $79-$250/month
Sweet spot: $99-$149/month
  - $79 universally positive but may leave money on the table
  - $199 viable for premium tier but requires proof of AI value
  - Median walk-away ($275) gives headroom above current flat rate ($99)
Recommendation: Raise from $99 flat to a tiered model. Base at $79
(lower entry point increases trial conversion), Pro at $149 (captures
the sweet spot), Enterprise at $249 (just below median walk-away).

FEATURE-TIER RECOMMENDATION
============================
Starter ($79/mo):
  - Real-time dashboards
  - Custom reports (up to 10)
  - Slack/email alerts
  - Single-user

Pro ($149/mo):
  - Everything in Starter
  - AI anomaly detection
  - Predictive forecasting
  - Multi-team collaboration (up to 5 seats)

Enterprise ($249/mo):
  - Everything in Pro
  - API access
  - White-label client reports
  - Unlimited seats
  - Dedicated support

PACKAGING PREFERENCE: Freemium path (free tier with 1 dashboard,
no AI) for acquisition. 50% of personas preferred the core plan,
suggesting $79 is the volume tier.

PRICE ANCHORING STRATEGY: Frame against current spend ("most teams
spend $120-$200/mo on analytics they barely use") rather than against
competitors. Personas who described current spend as "too much for
what I get" are the primary acquisition target.

COMPETITIVE PRICE POSITIONING: AI features justify a 2x premium over
basic dashboard competitors. But the justification requires proof:
"show me one example" was the most common defence requirement. Build
a demo/case study showing anomaly detection catching a real issue.

WILLINGNESS-TO-PAY BY PERSONA:
  - Marketing directors (senior): comfortable at $199, walk away at $350
  - Product managers (mid): sweet spot at $99-$149, walk away at $250
  - Data analysts (technical): price-sensitive ($79 preferred), but
    highest feature engagement with AI tools

Actionable Outcome

The founder's instinct was correct: $99 flat rate is below the market's willingness to pay. The study recommends a three-tier model ($79 / $149 / $249) with a free tier for acquisition. The critical insight: AI anomaly detection is the retention anchor (cancel trigger) but real-time dashboards are the acquisition driver (hero feature). Price the dashboards accessibly, gate the AI behind Pro, and build proof of AI value to defend the premium.


10. Connecting to Other PMM Workflows

Pricing research is downstream of positioning and messaging, but upstream of GTM execution. Each Ditto study type feeds into the next:

Sequence | Study Type | What It Produces for Pricing
1. Before pricing | Positioning Validation | Confirms market category, competitive alternatives, and unique attributes. Your price must be congruent with your positioning: premium positioning requires premium pricing.
2. Before pricing | Messaging Testing | Validates how to communicate value. Your pricing page is messaging: tier names, feature comparisons, and price anchoring are all messaging decisions.
3. This guide | Pricing Research | Validates what the market will pay, how it wants to buy, and which features justify the premium.
4. After pricing | Competitive Intelligence | Provides context for pricing conversations: where competitors price, what premium your differentiation commands.
5. After pricing | GTM Execution | Takes the pricing model and distributes it: self-serve pricing page, sales negotiation playbook, expansion revenue strategy.
Full stack timing: Positioning validation (30 min) + Messaging testing with 2 rounds (70 min) + Pricing research (30 min) + Competitive intelligence (45 min) = under 4 hours for the complete PMM strategic foundation. This typically takes a quarter to build through traditional methods.


11. Best Practices and Common Mistakes

Best Practices

| Practice | Why It Matters |
|---|---|
| Research competitive pricing before designing the study | Your anchor points (Q2) and competitor reference (Q7) must be realistic. Fictional competitors or implausible prices produce unreliable responses. |
| Match persona demographics to actual buyers | Pricing sensitivity varies enormously by age, seniority, and company size. A study with 25-year-olds will produce different thresholds than one with 45-year-olds. |
| Include 6-8 features in Q4, not 3-4 or 15 | Too few features produce obvious answers ("all should be included"). Too many overwhelm personas and produce noisy data. |
| Test prices that bracket your expected range | If you expect to price at $99, test $79 and $199 (Q2). Testing $49 and $499 wastes the question on prices you would never charge. |
| Pay attention to emotional language in Q1 | "Fair" vs "painful" vs "bargain" tells you more about pricing power than the dollar amount. A market that describes current spend as "painful" is ready for a value-framed alternative. |
| Cross-reference Q4 and Q5 | Features that are "every plan" in Q4 but "cancel trigger" in Q5 must stay in the base tier. Never gate a retention anchor behind premium. |
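The Q4/Q5 cross-reference can be mechanised once persona responses are coded. A minimal sketch in Python; the feature names and tallies below are illustrative, not from a real study:

```python
# Features personas said belong in "every plan" (Q4) and features they
# named as cancel triggers (Q5). Illustrative sets -- in practice these
# come from coding the persona transcripts.
q4_every_plan = {"core dashboards", "email alerts", "CSV export"}
q5_cancel_triggers = {"core dashboards", "API access"}

# Retention anchors that personas also expect in every plan: never gate
# these behind a premium tier.
must_stay_in_base = q4_every_plan & q5_cancel_triggers

print("Keep in base tier:", sorted(must_stay_in_base))
```

The set intersection makes the rule auditable: any feature that appears in both lists is flagged before a packaging decision accidentally gates a retention anchor.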

Common Mistakes

| Mistake | Why It Fails | What to Do Instead |
|---|---|---|
| Testing only one price point | Gives a thumbs-up/thumbs-down with no range data | Always test two price points plus the walk-away threshold (Q2 design) |
| Skipping Q1 (current spend) | Without a benchmark, price reactions have no context | Q1 grounds every subsequent response in reality |
| Using absurd competitor prices in Q7 | "A competitor at $5/mo" is not a realistic test of switching | Use an actual competitor's price or a realistic market alternative |
| Ignoring the "why" in Q3 | Knowing 5/10 prefer the core plan is less useful than knowing why | The reasoning reveals GTM implications: "need to prove value" = PLG, "hate limits" = flat-rate |
| Treating the price sensitivity band as a single number | Qualitative pricing research produces a range, not an optimum | Present a range with reasoning, then use A/B pricing tests to find the exact point |
| Not customising questions for the product | Generic questions ("how much would you pay for this?") produce generic answers | Include specific product name, features, competitor names, and realistic price points |
| Running pricing study before positioning | Price is a positioning signal. Without validated positioning, you are pricing in a vacuum. | Run positioning validation first, then pricing research |

12. Frequently Asked Questions

Is synthetic pricing research reliable?

Ditto's synthetic personas are validated at 95% correlation with traditional research (EY Americas, Harvard, Cambridge, Stanford, Oxford). For pricing, the correlation is strongest for attitudinal data (how people feel about prices, what they perceive as fair/expensive) and directional for behavioural data (what they would actually pay). The recommended approach: use Ditto for the fast first pass to narrow the range and validate the tier structure, then confirm with real-world A/B pricing tests.

How accurate are the walk-away thresholds?

Walk-away thresholds from synthetic personas should be treated as directional upper bounds. Real-world walk-away thresholds may be lower (due to competitive alternatives not mentioned in the study) or higher (due to switching costs and inertia). The median walk-away across 10 personas gives you a ceiling that would be risky to exceed without exceptional justification.
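A sketch of how to derive that ceiling from persona responses; the dollar amounts below are illustrative, not from a real study:

```python
from statistics import median

# Walk-away thresholds reported by 10 personas (illustrative figures).
walk_away = [150, 175, 200, 200, 225, 250, 250, 250, 300, 400]

# The median is robust to a single outlier persona (the $400 response)
# and serves as the ceiling it would be risky to exceed.
ceiling = median(walk_away)
print("Walk-away ceiling:", ceiling)
print("Spread:", min(walk_away), "-", max(walk_away))
```

Reporting the spread alongside the median matters: a tight cluster suggests a firm ceiling, while a wide spread signals segments with different willingness to pay.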

Should I use this instead of conjoint analysis?

Not for high-stakes pricing restructuring at a large company. Conjoint produces statistically precise willingness-to-pay data for individual features, which is essential when optimising pricing across multiple product lines. Use Ditto for: validating pricing before launch, testing a pricing change before announcing it, exploring tier structures, understanding packaging preferences, and any situation where qualitative depth matters more than statistical precision.

How do I handle international pricing?

Run separate studies for each market using Ditto's country filters. Pricing expectations vary significantly by country. A $99/month product in the US may be perceived very differently in the UK, Germany, or Canada. Claude Code can orchestrate parallel studies across 4+ countries in the same session, producing a cross-market pricing matrix.

# Example: International pricing comparison
# Run the same 7 questions against groups in 4 markets:
# US: 10 personas, age 30-50, employed
# UK: 10 personas, age 30-50, employed
# Germany: 10 personas, age 30-50, employed
# Canada: 10 personas, age 30-50, employed
# Total time: ~45 minutes (studies run in parallel across markets)
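The orchestration can be sketched as below. The `run_study` function is a hypothetical placeholder, not a real Ditto client; wire it up to the actual API calls from the workflow in section 4:

```python
# Cross-market pricing matrix: same 7 questions, one persona group per market.
MARKETS = ["US", "UK", "Germany", "Canada"]

def run_study(country, questions):
    """Hypothetical placeholder: create a 10-persona group (age 30-50,
    employed) in `country` and run the questions against it. Replace
    the stub return value with real Ditto API calls."""
    return {"country": country, "responses": []}

def cross_market_matrix(questions):
    # One study per market; collect results keyed by country.
    return {country: run_study(country, questions) for country in MARKETS}

matrix = cross_market_matrix(["Q1 ...", "Q2 ..."])
print(sorted(matrix))
```

Keeping the question list identical across markets is what makes the resulting matrix comparable: any difference in thresholds is then attributable to the market, not the instrument.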

What if my product has no competitors?

Modify Q7 to test against the status quo rather than a named competitor: "Currently, most teams solve this with spreadsheets, manual processes, or by hiring an analyst. These cost roughly [X] per month in time and salary. Would you prefer the manual approach at that cost, or DataPulse at [price]?" This tests pricing power against the "do nothing" alternative, which is the real competitor for novel products.

How often should I run pricing research?

At minimum: before launch, before any pricing change, and when entering a new market segment. Ideally: annually as a check on whether the market has shifted. At 30 minutes per study, the time cost is negligible compared to the revenue impact of mispriced products.

Can I test usage-based pricing?

Yes, by adapting Q3: "Would you prefer: (a) a flat monthly fee for unlimited usage, (b) a lower base fee plus per-event charges, or (c) a pay-as-you-go model with no monthly commitment? Why?" The same framework applies. The "why" reveals whether personas value predictability (flat fee), cost control (per-event), or flexibility (pay-as-you-go).

How do I test pricing for a freemium model specifically?

Adapt Q3 to present specific freemium options: "Would you prefer: (a) a free plan with [specific limits], (b) a $X plan with [expanded limits], or (c) a $Y plan with no limits? What would make you upgrade from free to paid?" The upgrade trigger question is critical: it tells you which limit on the free tier creates the conversion event.

What is the minimum viable pricing study?

If time is extremely constrained, a 3-question study captures the most important data:

  1. Price sensitivity (Q2): test two anchor prices that bracket your expected range, plus the walk-away threshold.
  2. Feature-tier allocation (Q4): ask which of 6-8 features belong in every plan, a mid tier, or premium only.
  3. Competitive defence (Q7): test your price against a realistic competitor or the status-quo alternative.

This produces a price sensitivity range, feature-tier allocation, and competitive pricing defence in approximately 15 minutes. Use the full 7-question study when time allows.
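The minimal study can be expressed as a compact payload. Question wording, the specific prices, and the competitor figure below are illustrative and should be adapted to your product:

```python
# Minimum viable pricing study: three questions covering price
# sensitivity, feature-tier allocation, and competitive defence.
minimal_study = [
    {
        "id": "price_sensitivity",    # Q2-style: two anchors + walk-away
        "text": "Would you pay $79/month for this product? $199/month? "
                "At what price would you walk away entirely?",
    },
    {
        "id": "feature_tiers",        # Q4-style: 6-8 features across tiers
        "text": "For each of these features, should it be in every plan, "
                "a mid tier, or the premium tier only? Why?",
    },
    {
        "id": "competitive_defence",  # Q7-style: realistic alternative
        "text": "A competitor charges $129/month for a similar product. "
                "Would you choose it over ours at $149/month? Why?",
    },
]

print([q["id"] for q in minimal_study])
```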


Related guides: