Product Launch Research with Ditto + Claude Code

LLM summary: Run a 7-question pre-launch concept validation study and a 5-question post-launch sentiment study to produce six launch deliverables plus a Launch Impact Report in ~75 minutes.

A complete Claude Code guide to researching product launches using Ditto's 300,000+ synthetic personas. Two phases — pre-launch concept validation and post-launch sentiment measurement — compressed from six weeks to 75 minutes.

For Claude Code agents and AI coding assistants. This guide provides copy-paste API calls, a production-tested 7-question pre-launch study, a 5-question post-launch study, six deliverable generation templates, launch tiering frameworks, and the launch feedback loop. Every technique is validated across 50+ production studies.
Contents
  1. Why Product Launch Research Matters
  2. The Two-Phase Approach
  3. The 7-Question Pre-Launch Concept Validation Study
  4. Complete API Workflow: Step by Step
  5. Generating the Six Launch Readiness Deliverables
  6. The Post-Launch Sentiment Study (5 Questions)
  7. The Launch Impact Report
  8. Launch Tiering: Matching Research to Launch Scale
  9. The Launch Feedback Loop (Advanced)
  10. Worked Example: Launch Research for a Design Collaboration Tool
  11. Multi-Segment Launch Research
  12. Concept Testing for Products That Don't Exist Yet
  13. Measuring Launch Success Over Time (Advanced)
  14. Best Practices and Common Mistakes
  15. Frequently Asked Questions

1. Why Product Launch Research Matters

Product launches have two research phases that most companies skip: pre-launch concept validation and post-launch sentiment measurement. They skip them because traditional research takes too long. By the time a focus group delivers findings, the launch window has closed or the budget is committed.

The result: products launch on assumption. The founding team assumes the concept resonates. The product team assumes the feature priority is right. The marketing team assumes the messaging will land. The pricing team assumes the price feels fair. These assumptions compound into a launch that looks prepared on a slide but misfires on contact with the market.

The Traditional Launch Research Problem

| Challenge | Traditional Reality | Consequence |
|-----------|---------------------|-------------|
| Pre-launch speed | 4–6 weeks for concept testing through focus groups, surveys, or customer interviews. | Research is skipped entirely. Products launch without concept validation. Teams discover problems after commitment. |
| Pre-launch cost | $15,000–$50,000 per concept validation study. Multiple rounds of testing multiply the cost. | Only Tier 1 launches (if any) receive pre-launch research. Tier 2 and 3 launches go unvalidated. |
| Post-launch timing | Post-launch surveys take 2–4 weeks to field and analyse. By then, the narrative is set. | Teams rely on vanity metrics (downloads, sign-ups) and miss the sentiment underneath. A product can hit download targets while building resentment. |
| Feedback gap | Pre-launch and post-launch research are rarely connected. Different methodologies, different vendors, different timelines. | No systematic comparison between what you expected to happen and what actually happened. The same launch mistakes repeat. |

With Ditto and Claude Code, pre-launch concept validation takes approximately 45 minutes (7 questions, 10 personas). Post-launch sentiment measurement takes approximately 30 minutes (5 questions, fresh group). The total — 75 minutes — replaces 4–8 weeks and $15,000–$50,000 of traditional research.

The core shift: Launch research moves from "too slow, so skip it" to "fast enough to do every time." When validation takes 45 minutes instead of six weeks, you validate concepts before committing resources and measure sentiment while you can still respond.

2. The Two-Phase Approach

Product launch research has two distinct phases, each with a specific purpose, timing, and question set. They are designed to work together as a before-and-after pair around the launch moment.

| Phase | Timing | Purpose | Questions | Personas | Duration |
|-------|--------|---------|-----------|----------|----------|
| Phase 1: Pre-Launch | Before committing resources | Validate the concept, prioritise features, identify barriers, capture natural language, anchor pricing | 7 | 10 | ~45 min |
| Phase 2: Post-Launch | 1–4 weeks after launch | Measure awareness, competitive positioning, pricing perception, switching triggers, remaining barriers | 5 | 10 | ~30 min |

Why Two Separate Phases

Pre-launch validation runs before resources are committed, so its findings can still change the go/no-go decision; post-launch measurement uses a fresh, unprimed group, so it captures how the market actually reacted rather than how the original personas remember reacting. Together the two phases produce one deliverable: the Launch Impact Report (Section 7) compares pre-launch expectations against post-launch reality across every dimension — resonance, features, barriers, pricing, language. This single document captures what worked, what didn't, and what to do differently next time.

3. The 7-Question Pre-Launch Concept Validation Study

This question set is designed to produce the raw material for all six launch readiness deliverables simultaneously. Each question targets a specific launch concern while feeding multiple outputs. Questions must be asked sequentially (each builds conversational context from prior answers).

| Q# | Question | Launch Component Validated | Deliverables Fed |
|----|----------|----------------------------|------------------|
| 1 | "When you first hear about [product/feature description], what comes to mind? What excites you? What makes you sceptical?" | Initial Reaction | Launch Readiness Scorecard, Risk Register |
| 2 | "How would this fit into your current workflow or daily life? Walk me through when and how you'd actually use it." | Use Case Validation | Launch Readiness Scorecard, Feature Priority Ranking |
| 3 | "What's the FIRST thing you'd want to try? What feature or capability matters most to you?" | Feature Prioritisation | Feature Priority Ranking, Launch Readiness Scorecard |
| 4 | "What would stop you from trying this? What's the biggest barrier — whether that's cost, complexity, trust, switching effort, or something else?" | Objection Identification | Objection Library, Risk Register |
| 5 | "How would you describe this to a friend or colleague in one sentence? What would you say it does?" | Natural Language Capture | Natural Language Bank, Launch Readiness Scorecard |
| 6 | "What would you expect to pay for this? At what price would it feel like a steal? At what price would it feel too expensive?" | Price Anchoring | Pricing Recommendation, Risk Register |
| 7 | "If you could change one thing about this concept, what would it be? What's missing that would make this a must-have?" | Feature Gaps | Feature Priority Ranking, Risk Register, Objection Library |

Why This Question Sequence Works for Launch Research

Customise the bracketed placeholders, keep the structure. Replace [product/feature description] with a 2–3 sentence description of your concept. Be specific enough that personas can react meaningfully, but don't oversell. A neutral, factual description produces more honest reactions than marketing copy.

4. Complete API Workflow: Step by Step

Prerequisites

A Ditto API key (sent as the Bearer token in every request below) and a 2–3 sentence neutral description of the concept you are validating (see Section 3).

Step 1: Create Research Group

curl -s -X POST "https://app.askditto.io/v1/research-groups/recruit" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Pre-Launch Validation - [Product Name] - [Date]",
    "description": "Target customers for pre-launch concept validation of [product description]. [Add context about ideal customer profile, industry, demographics.]",
    "group_size": 10,
    "filters": {
      "country": "USA",
      "age_min": 25,
      "age_max": 50,
      "employment": "employed"
    },
    "sampling_method": "random",
    "deduplicate": true
  }'

Critical parameter notes:

- Use group_size, not size; the API rejects size (see Common API Errors in Section 14).
- State filters take 2-letter codes ("CA", not "California"); full state names recruit 0 agents.
- The income filter is not supported; use education or employment as a proxy.
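
The recruit response also returns an identifier for the new group, which Step 2 needs as research_group_uuid. A minimal capture sketch, assuming the identifier comes back in a uuid field and that the payload above is saved as group.json (both assumptions), with DITTO_API_KEY as a convenience variable for your key:

# Recruit the group and capture its identifier for Step 2.
# NOTE: the ".uuid" field name is an assumption -- inspect the real
# response and adjust the jq path if it differs.
GROUP_UUID=$(curl -s -X POST "https://app.askditto.io/v1/research-groups/recruit" \
  -H "Authorization: Bearer $DITTO_API_KEY" \
  -H "Content-Type: application/json" \
  -d @group.json | jq -r '.uuid')
echo "Research group created: $GROUP_UUID"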

Step 2: Create Study

curl -s -X POST "https://app.askditto.io/v1/research-studies" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Pre-Launch Validation: [Product Name] - [Date]",
    "research_group_uuid": "GROUP_UUID_FROM_STEP_1"
  }'

Save the study id — you need it for asking questions, completing, and sharing.

Step 3: Ask Questions (Sequential)

Ask each question one at a time. Wait for the job to complete before sending the next. This ensures personas have conversational context from prior answers.

# Question 1
curl -s -X POST "https://app.askditto.io/v1/research-studies/STUDY_ID/questions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "question": "When you first hear about [product/feature description], what comes to mind? What excites you? What makes you sceptical?"
  }'

# Response includes a job ID:
# { "job_id": "job-abc123", "status": "pending" }

Step 4: Poll for Responses

# Poll until status is "finished"
curl -s -X GET "https://app.askditto.io/v1/jobs/JOB_ID" \
  -H "Authorization: Bearer YOUR_API_KEY"

# When complete:
{
  "id": "job-abc123",
  "status": "finished",
  "result": {
    "answer": "My first reaction is..."
  }
}

Poll with a 5-second interval. Most questions complete within 30–90 seconds. Once complete, send the next question. Repeat for all 7 questions.
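
If Claude Code is scripting this sequence rather than issuing each call by hand, the ask-then-poll pattern can be wrapped in one loop. This is a hedged sketch, not an official client: it relies only on the job_id, status, and result fields shown above, and assumes STUDY_ID, DITTO_API_KEY, and a QUESTIONS array have been set beforehand.

# Ask each question in order, waiting for the job to finish before
# sending the next so personas keep their conversational context.
for q in "${QUESTIONS[@]}"; do
  job_id=$(curl -s -X POST "https://app.askditto.io/v1/research-studies/$STUDY_ID/questions" \
    -H "Authorization: Bearer $DITTO_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$(jq -n --arg q "$q" '{question: $q}')" | jq -r '.job_id')

  status="pending"
  while [ "$status" != "finished" ]; do
    sleep 5   # poll at the recommended 5-second interval
    status=$(curl -s "https://app.askditto.io/v1/jobs/$job_id" \
      -H "Authorization: Bearer $DITTO_API_KEY" | jq -r '.status')
  done
  echo "Job $job_id finished"
done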

Step 5: Complete the Study

curl -s -X POST "https://app.askditto.io/v1/research-studies/STUDY_ID/complete" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json"

This triggers Ditto's AI analysis, producing: overall summary, key segments identified, divergence points, shared mindsets, and suggested follow-up questions. Poll the study status until it reaches "completed".
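
A small polling sketch for this step, assuming the study resource from Step 7 exposes its state in a top-level status field (an assumption; adjust the jq path to the real response shape):

# Poll the study until Ditto's completion analysis is ready.
status=""
while [ "$status" != "completed" ]; do
  sleep 5
  status=$(curl -s "https://app.askditto.io/v1/research-studies/$STUDY_ID" \
    -H "Authorization: Bearer $DITTO_API_KEY" | jq -r '.status')
done
echo "Completion analysis ready"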

Step 6: Get Share Link

curl -s -X POST "https://app.askditto.io/v1/research-studies/STUDY_ID/share" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json"

# Response:
{
  "url": "https://app.askditto.io/organization/studies/shared/xyz123"
}

UTM tracking is mandatory. Append ?utm_source=ce for cold email outreach or ?utm_source=blog for blog articles. Never use raw share URLs without a UTM parameter.

Step 7: Retrieve Full Study Data for Deliverable Generation

# Get the completed study with all responses and AI analysis
curl -s -X GET "https://app.askditto.io/v1/research-studies/STUDY_ID" \
  -H "Authorization: Bearer YOUR_API_KEY"

This returns all persona responses, demographic profiles, and Ditto's completion analysis. Use this data to generate the six launch readiness deliverables described in the next section.

Total API call timeline for pre-launch validation: Group creation (~15 seconds) + Study creation (~5 seconds) + 7 questions with polling (~15–25 minutes) + Completion (~2–5 minutes) + Share link (~5 seconds). Total: approximately 20–30 minutes of API interaction. Deliverable generation by Claude Code adds 15–20 minutes. End-to-end: ~45 minutes.

5. Generating the Six Launch Readiness Deliverables

Once the pre-launch study is complete and you have all 10 persona responses to all 7 questions, Claude Code should generate each deliverable using the extraction logic below.

Deliverable 1: Launch Readiness Scorecard

Source Data

All 10 personas' responses to Q1 (initial reaction), Q2 (use case), Q3 (first feature to try), and Q5 (one-sentence description).

Extraction Logic

  1. From Q1, score each persona's reaction on a 5-point scale: Strong Positive (5), Positive (4), Neutral (3), Negative (2), Strong Negative (1). Use language cues: "love this," "excited" = 4–5. "Interesting but..." = 3. "I don't see the point" = 1–2.
  2. From Q2, score use case specificity: Vivid and specific (5), General but plausible (3), Vague or forced (1).
  3. From Q3, check whether first-try features align with planned hero features. Alignment score: High (same feature), Medium (related), Low (different feature).
  4. From Q5, score description accuracy: captures the core value (5), partially captures it (3), misses it entirely (1).
  5. Aggregate into an overall launch readiness score (average of all dimensions).
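
The scoring itself is judgment work for Claude Code, but the aggregation in step 5 is simple averaging once the per-persona scores exist. A sketch using jq, assuming the manual scores have been written to a hypothetical scores.json (one object per persona; the file and field names are illustrative, not a Ditto output, and the categorical Q3 alignment dimension is reported separately rather than averaged):

# scores.json, e.g.:
# [ { "persona": "P1", "q1_reaction": 4, "q2_use_case": 5, "q5_description": 3 }, ... ]

# Average each numeric dimension, then the overall readiness score.
jq '{
  concept_resonance:   ([.[].q1_reaction]    | add / length),
  use_case_clarity:    ([.[].q2_use_case]    | add / length),
  descriptive_clarity: ([.[].q5_description] | add / length)
} | . + { overall: ([.[]] | add / length) }' scores.json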

Output Format

## Launch Readiness Scorecard: [Product/Feature]

### Overall Score: X.X / 5.0 [READY / NEEDS WORK / NOT READY]

| Dimension | Score | Evidence | Implication |
|-----------|-------|----------|-------------|
| Concept Resonance (Q1) | X.X/5 | X/10 positive, X/10 neutral, X/10 negative | [Interpretation] |
| Use Case Clarity (Q2) | X.X/5 | X/10 vivid use cases, X/10 vague | [Interpretation] |
| Feature Alignment (Q3) | High/Med/Low | Hero feature matched X/10 first-try choices | [Interpretation] |
| Descriptive Clarity (Q5) | X.X/5 | X/10 accurately described the concept | [Interpretation] |

### Launch Readiness Verdict
- **Score 4.0+:** Strong launch signal. Concept resonates, use cases are clear, features align.
- **Score 3.0-3.9:** Conditional launch. Specific improvements needed before go.
- **Score below 3.0:** Reconsider. Concept needs significant rework or repositioning.

### Top 3 Strengths
1. [What resonated most strongly across personas]
2. [Second strongest signal]
3. [Third]

### Top 3 Concerns
1. [Most common hesitation or objection]
2. [Second]
3. [Third]

Deliverable 2: Feature Priority Ranking

Source Data

All 10 personas' responses to Q2 (workflow fit), Q3 (first feature to try), and Q7 (feature gaps).

Extraction Logic

  1. From Q3, extract every feature or capability mentioned as the first thing they'd try.
  2. Count frequency: how many of 10 personas mentioned each feature.
  3. From Q2, identify which features appear in the described workflow (implicit priority).
  4. From Q7, identify requested features that don't exist yet (gap features).
  5. Rank features by combined frequency from Q3 + Q2, then append gap features from Q7.
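
Deciding which feature each answer refers to is interpretation work, but once Claude Code has tagged the mentions, the counting and ranking is mechanical. A sketch assuming a hand-built features.json listing each persona's Q3 first-try feature (a hypothetical intermediate file, not a Ditto output):

# features.json, e.g.:
# [ { "persona": "P1", "q3_first_try": "Stakeholder feedback" }, ... ]

# Count how many of the 10 personas named each feature, most-mentioned first.
jq 'group_by(.q3_first_try)
    | map({ feature: .[0].q3_first_try, mentions: length })
    | sort_by(-.mentions)' features.json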

Output Format

## Feature Priority Ranking: [Product/Feature]

### Existing Features (Ranked by Customer Demand)
| Rank | Feature | Q3 First-Try | Q2 In-Workflow | Representative Quote |
|------|---------|-------------|----------------|---------------------|
| 1 | [Feature A] | 7/10 | 8/10 | "This is the first thing I'd click on" |
| 2 | [Feature B] | 4/10 | 6/10 | "I'd use this every day in my standup" |
| 3 | [Feature C] | 2/10 | 3/10 | "Nice to have but not why I'd sign up" |

### Gap Features (Requested but Not Planned)
| Feature Request | Frequency | Representative Quote | Priority |
|----------------|-----------|---------------------|----------|
| [Gap 1] | 5/10 | "If it had this, I'd switch immediately" | High |
| [Gap 2] | 3/10 | "Would be nice but not essential" | Medium |

### Launch Implication
Lead with [Feature A] in launch messaging and onboarding.
[Feature C] can be deprioritised or moved to a later release.
[Gap 1] should be on the immediate post-launch roadmap.

Deliverable 3: Objection Library

Source Data

All 10 personas' responses to Q4 (barriers), plus scepticism signals from Q1.

Extraction Logic

  1. From Q4, extract every barrier, concern, or objection mentioned.
  2. Categorise each objection: Trust, Price, Complexity, Switching Cost, Competition, Missing Feature, Other.
  3. Count frequency per category and per specific objection.
  4. From Q1, capture scepticism signals that represent latent objections.
  5. For each objection, draft a response using evidence from the study itself (e.g., positive signals from other personas).

Output Format

## Objection Library: [Product/Feature]

### Objection Frequency
| Category | Frequency | Top Specific Objection |
|----------|-----------|----------------------|
| Trust / Security | X/10 | "I wouldn't trust my data with a new company" |
| Price | X/10 | "Seems expensive for what it does" |
| Switching Cost | X/10 | "Migration from [competitor] would take weeks" |
| Complexity | X/10 | "Looks like it has a learning curve" |
| Competition | X/10 | "How is this different from [competitor]?" |

### Objection-Response Pairs
| Objection | Frequency | Suggested Response | Evidence |
|-----------|-----------|-------------------|----------|
| "I don't trust new companies with my data" | 4/10 | [SOC 2 certification, data residency, encryption details] | Persona 3: "If they had SOC 2, I'd feel better" |
| "Seems expensive" | 3/10 | [Value comparison, ROI framing from Q2 use cases] | Persona 7: "If it saved me 2 hours a week, $X is nothing" |
| "Switching would be painful" | 3/10 | [Migration support, import tools, parallel running] | Persona 5: "If they handled the migration, I'd consider it" |

### Pre-Launch Actions
1. [Address top objection before launch with specific deliverable]
2. [Address second objection with specific content/feature]
3. [Monitor third objection post-launch for frequency]

Deliverable 4: Natural Language Bank

Source Data

Extraction Logic

  1. From Q5, collect all 10 one-sentence descriptions verbatim.
  2. Identify recurring phrases, metaphors, and framing patterns.
  3. From Q1, extract emotional language (positive and negative) used to describe the concept.
  4. From Q2, extract workflow-specific language that describes how the product fits their life.
  5. Organise by usage category: headline candidates, tagline candidates, feature descriptions, value propositions.

Output Format

## Natural Language Bank: [Product/Feature]

### One-Sentence Descriptions (Q5, verbatim)
1. "It's like [comparison] but for [specific use case]" (Persona 1)
2. "A tool that [simple action] so you can [outcome]" (Persona 2)
...
10. "[Description]" (Persona 10)

### Recurring Language Patterns
| Pattern | Frequency | Example |
|---------|-----------|---------|
| "[Comparison] for [use case]" | 4/10 | "Figma for customer research" |
| "Finally, a way to [outcome]" | 3/10 | "Finally, a way to test ideas without waiting weeks" |
| "Like having a [role] on demand" | 2/10 | "Like having a focus group on demand" |

### Headline Candidates (from persona language)
1. "[Most common one-sentence description, refined]"
2. "[Second most common framing]"
3. "[Most emotionally resonant phrase from Q1]"

### Words That Resonate (positive signals)
[fast, easy, finally, game-changer, exactly what I need, ...]

### Words That Concern (negative signals)
[complicated, another tool, expensive, not sure how, ...]

### Marketing Copy Recommendations
- **Landing page headline:** Use the "[comparison] for [use case]" framing — it's how your market naturally describes you
- **Tagline:** "[Refined version of most common Q5 description]"
- **Avoid:** [Words/phrases that triggered negative reactions in Q1]

Deliverable 5: Pricing Recommendation

Source Data

All 10 personas' responses to Q6 (price thresholds), cross-referenced with Q4 (price as a barrier) and Q2 (use case engagement).

Extraction Logic

  1. From Q6, extract three data points per persona: expected price, steal threshold, and too-expensive threshold.
  2. Calculate ranges and medians for each threshold.
  3. Identify the "acceptable range" (between median steal and median too-expensive).
  4. Cross-reference with Q4: how many mentioned price as a barrier?
  5. From Q2, assess depth of use case engagement — more engaged users tolerate higher prices.
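
The threshold arithmetic in steps 2–3 can be scripted once the three price points per persona are tabulated. A sketch assuming a hypothetical prices.json holding the Q6 answers as numbers:

# prices.json, e.g.:
# [ { "persona": "P1", "expected": 20, "steal": 12, "too_expensive": 40 }, ... ]

# Medians for each threshold, plus the acceptable range
# (median steal .. median too-expensive).
jq '
  def median:
    sort as $s
    | ($s | length) as $n
    | if $n % 2 == 1
      then $s[(($n - 1) / 2)]
      else ($s[($n / 2) - 1] + $s[$n / 2]) / 2
      end;
  {
    expected_median:      ([.[].expected]      | median),
    steal_median:         ([.[].steal]         | median),
    too_expensive_median: ([.[].too_expensive] | median)
  }
  | . + { acceptable_range: [.steal_median, .too_expensive_median] }
' prices.json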

Output Format

## Pricing Recommendation: [Product/Feature]

### Raw Price Perception (Q6)
| Persona | Expected Price | Steal Price | Too Expensive |
|---------|---------------|-------------|---------------|
| P1 | $X/mo | $X/mo | $X/mo |
| P2 | $X/mo | $X/mo | $X/mo |
...
| P10 | $X/mo | $X/mo | $X/mo |

### Price Thresholds
| Threshold | Median | Range |
|-----------|--------|-------|
| Expected (fair) | $X/mo | $X - $X |
| Steal (great deal) | $X/mo | $X - $X |
| Too expensive | $X/mo | $X - $X |

### Acceptable Price Range: $[steal median] - $[too-expensive median]
### Recommended Launch Price: $X/mo

### Rationale
- [Why this specific price point within the acceptable range]
- Price as barrier (Q4): X/10 mentioned price concerns at [proposed price]
- Use case engagement (Q2): [High/medium/low] engagement suggests [higher/lower] price tolerance

### Price-Sensitivity Segments
- **Price-sensitive group (X personas):** Expect $X or less, cite [reasons]
- **Value-focused group (X personas):** Accept $X-$X, cite [value drivers]
- **Premium-tolerant group (X personas):** Would pay $X+, cite [willingness drivers]

Deliverable 6: Risk Register

Source Data

Barriers from Q4, feature gaps from Q7, scepticism signals from Q1, and pricing spread from Q6, across all 10 personas.

Extraction Logic

  1. From Q4, extract all barriers and assess severity (blocks adoption vs delays adoption vs minor concern).
  2. From Q7, extract all feature gaps and assess impact (must-have vs nice-to-have).
  3. From Q1, extract scepticism signals that could become public objections post-launch.
  4. From Q6, identify pricing risks (wide spread, many "too expensive" responses, segment-specific sensitivity).
  5. Score each risk by likelihood (frequency in study) and impact (severity of consequence).
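
Once each risk has a frequency (how many of the 10 personas raised it) and an impact judgment, the likelihood bucketing and the GREEN/AMBER/RED roll-up can be automated. A sketch assuming a hand-assembled risks.json (hypothetical format; the frequency cut-offs are illustrative, not from the guide):

# risks.json, e.g.:
# [ { "risk": "Trust/security concern", "source": "Q4", "frequency": 6, "impact": "High" }, ... ]

# Bucket likelihood from frequency-out-of-10, flag critical risks,
# and count them (0 critical = GREEN, 1-2 = AMBER, 3+ = RED).
jq 'map(. + { likelihood: (if .frequency >= 5 then "High"
                           elif .frequency >= 3 then "Medium"
                           else "Low" end) })
    | map(. + { critical: (.likelihood == "High" and .impact == "High") })
    | { risks: ., critical_count: (map(select(.critical)) | length) }' risks.json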

Output Format

## Risk Register: [Product/Feature]

### Critical Risks (address before launch)
| Risk | Source | Likelihood | Impact | Mitigation |
|------|--------|-----------|--------|------------|
| [Trust/security concern] | Q4 (6/10) | High | High | [Specific action: SOC 2, security page, etc.] |
| [Missing must-have feature] | Q7 (5/10) | High | High | [Build before launch or address in launch messaging] |

### Moderate Risks (address in launch plan)
| Risk | Source | Likelihood | Impact | Mitigation |
|------|--------|-----------|--------|------------|
| [Pricing perception] | Q6 (4/10 say "expensive") | Medium | Medium | [Adjust pricing or improve value communication] |
| [Switching barrier] | Q4 (3/10) | Medium | Medium | [Migration support, import tools] |

### Low Risks (monitor post-launch)
| Risk | Source | Likelihood | Impact | Mitigation |
|------|--------|-----------|--------|------------|
| [Minor feature gap] | Q7 (2/10) | Low | Low | [Add to post-launch roadmap] |
| [Niche scepticism] | Q1 (1/10) | Low | Low | [Monitor for frequency increase] |

### Overall Risk Assessment: [GREEN / AMBER / RED]
- GREEN: No critical risks. Launch with confidence.
- AMBER: 1-2 critical risks that can be mitigated before launch.
- RED: 3+ critical risks or 1 critical risk with no clear mitigation.

6. The Post-Launch Sentiment Study (5 Questions)

Run this study 1–4 weeks after launch with a fresh research group (not the same personas from pre-launch). This captures market-level sentiment, not primed reactions.

Creating the Post-Launch Group

curl -s -X POST "https://app.askditto.io/v1/research-groups/recruit" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Post-Launch Sentiment - [Product Name] - [Date]",
    "description": "Target customers for post-launch sentiment measurement of [product]. Same demographic profile as pre-launch group for comparability.",
    "group_size": 10,
    "filters": {
      "country": "USA",
      "age_min": 25,
      "age_max": 50,
      "employment": "employed"
    },
    "sampling_method": "random",
    "deduplicate": true
  }'

Use identical filters to the pre-launch group. The value of the two-phase approach depends on comparing like with like. If pre-launch targeted employed 25–50 year-olds in the US, post-launch must target the same demographic. Different filters invalidate the comparison.
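
One way to guarantee identical filters across phases is to keep them in a single file and merge that file into both recruit payloads. A sketch (the filters.json file and the jq merge are conveniences, not part of the Ditto API):

# filters.json -- single source of truth used for both phases:
# { "country": "USA", "age_min": 25, "age_max": 50, "employment": "employed" }

# Build the post-launch recruit payload from the shared filters file.
jq -n --slurpfile f filters.json '{
  name: "Post-Launch Sentiment - [Product Name] - [Date]",
  description: "Target customers for post-launch sentiment measurement of [product].",
  group_size: 10,
  filters: $f[0],
  sampling_method: "random",
  deduplicate: true
}' > postlaunch_group.json

curl -s -X POST "https://app.askditto.io/v1/research-groups/recruit" \
  -H "Authorization: Bearer $DITTO_API_KEY" \
  -H "Content-Type: application/json" \
  -d @postlaunch_group.json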

The 5 Post-Launch Questions

| Q# | Question | What It Measures |
|----|----------|------------------|
| 1 | "Have you heard of [product name]? If so, what have you heard? Where did you first see or hear about it?" | Awareness penetration and channel effectiveness |
| 2 | "Based on what you know about [product name], how would you compare it to [primary competitor]? What stands out as different — better or worse?" | Competitive positioning in the market's mind |
| 3 | "[Product name] is priced at [actual launch price]. Does that feel like a good deal, fair, or too expensive for what it offers? What would change your answer?" | Post-launch pricing perception vs pre-launch expectations |
| 4 | "What would make you switch from your current solution to [product name]? What specific trigger — a feature, a price change, a recommendation — would push you over the edge?" | Switching triggers and conversion drivers |
| 5 | "What concerns or hesitations do you still have about [product name]? What would need to change before you'd recommend it to a colleague?" | Remaining barriers and recommendation blockers |

Post-Launch Study Workflow

Follow the same API workflow as Steps 2–7 in Section 4, but with the post-launch group and 5 questions instead of 7. Total API time: approximately 10–15 minutes. With analysis: ~30 minutes end-to-end.


7. The Launch Impact Report

The Launch Impact Report is the capstone deliverable. It compares pre-launch expectations (Phase 1) against post-launch reality (Phase 2) across every dimension. This is where the two-phase approach pays off.

Report Structure

## Launch Impact Report: [Product/Feature]
## Pre-Launch Study: [Date] | Post-Launch Study: [Date]

### Executive Summary
[2-3 sentence summary: Did the launch land as expected? What surprised us?]

### Dimension-by-Dimension Comparison

#### 1. Concept Resonance
- **Pre-launch (Q1):** X/10 positive, average score X.X/5
- **Post-launch (Q1 awareness):** X/10 have heard of it, X/10 have a positive impression
- **Gap:** [Aligned / Positive surprise / Negative surprise]
- **Insight:** [What this means for the product]

#### 2. Feature Priority
- **Pre-launch (Q3):** Top feature demand was [Feature A] (X/10)
- **Post-launch (Q4 switching triggers):** Top conversion driver is [Feature X]
- **Gap:** [Do the features that excited pre-launch actually drive conversion post-launch?]
- **Insight:** [Alignment or misalignment between desire and action]

#### 3. Pricing Perception
- **Pre-launch (Q6):** Median expected price $X, acceptable range $X-$X
- **Post-launch (Q3):** X/10 say "fair", X/10 say "expensive", X/10 say "good deal"
- **Gap:** [Is the market reacting to the price as the pre-launch study predicted?]
- **Insight:** [Pricing adjustment needed or validation confirmed]

#### 4. Barriers and Objections
- **Pre-launch (Q4):** Top barrier was [Barrier A] (X/10)
- **Post-launch (Q5):** Top remaining concern is [Concern X] (X/10)
- **Gap:** [Did the launch address the predicted barriers? Did new ones emerge?]
- **Insight:** [Which barriers were successfully addressed, which persist]

#### 5. Market Language
- **Pre-launch (Q5):** Most common description: "[one-sentence description]"
- **Post-launch (Q2):** Market describes product as: "[how they compare it to competitors]"
- **Gap:** [Is the market describing the product the way you intended?]
- **Insight:** [Messaging alignment or drift]

### What Worked
1. [Strongest alignment between pre-launch expectation and post-launch reality]
2. [Second]
3. [Third]

### What Surprised Us
1. [Biggest unexpected finding]
2. [Second]
3. [Third]

### Recommended Next Actions
1. [Highest-priority post-launch action based on evidence]
2. [Second priority]
3. [Third priority]

### Feed Into Next Launch
[What this launch taught us that should change our process for the next one]

The compounding benefit: Each Launch Impact Report makes the next launch better. After three launches with before/after research, you develop an institutional understanding of how your market reacts to new products. You learn which types of features excite in concept but don't drive conversion. You learn which barriers always persist regardless of mitigation. You learn how your pricing perception shifts as you add features. This pattern recognition is impossible without systematic measurement.

8. Launch Tiering: Matching Research to Launch Scale

Not every release warrants the full two-phase research treatment. Launch tiering determines how much research investment each launch receives.

| Tier | Scope | Examples | Research Required | Time Investment |
|------|-------|----------|-------------------|-----------------|
| Tier 1 (Major) | New products, new market entries, rebrands, features impacting top-of-funnel | New product line, market expansion, platform pivot | Full two-phase: 7-question pre-launch + 5-question post-launch | ~75 minutes |
| Tier 2 (Standard) | Significant updates, new integrations, feature enhancements | New API, key integration, major UI overhaul | Focused pre-launch only: 4–5 questions (Q1, Q3, Q4, Q6, and optionally Q7) | ~30 minutes |
| Tier 3 (Lite) | Table-stakes improvements, bug fixes, minor UI polish | Bug fix, small UI tweak, documentation update | No research needed | 0 minutes |

Tier 2 Focused Validation

For Tier 2 launches, you don't need the full 7-question study. A focused 4–5 question version validates the critical dimensions without the full investment.

# Tier 2 Questions (4 questions)
Q1: "When you first hear about [feature update], what comes to mind? What excites you?"
Q2: "What's the FIRST thing you'd want to try with this update?"
Q3: "What would stop you from using this? What's the biggest concern?"
Q4: "Does [feature] at [price] feel like a good deal, fair, or overpriced?"

Tier Decision Framework

## Tier Decision Checklist

Does this launch...
[ ] Introduce a new product or product line? → Tier 1
[ ] Enter a new market or segment? → Tier 1
[ ] Change the product's core value proposition? → Tier 1
[ ] Affect pricing or packaging? → Tier 1 or Tier 2
[ ] Add a significant new feature or integration? → Tier 2
[ ] Enhance an existing feature? → Tier 2
[ ] Fix bugs or polish UI? → Tier 3
[ ] Update documentation or minor settings? → Tier 3

If multiple boxes checked, use the highest tier.

Recommended cadence: Minimum 1–2 Tier 1 launches per year, 1–2 Tier 2 launches per quarter, monthly Tier 3 releases bundled into quarterly roundups. Avoid major launches during July, August, December, and major global events.

9. The Launch Feedback Loop (Advanced)

The launch feedback loop connects every launch to the next. Each cycle produces data that improves the following launch's concept, positioning, messaging, and pricing.

The Five-Phase Loop

Phase 1: VALIDATE (Pre-Launch)
  Run 7-question concept validation study
  → Produces 6 launch readiness deliverables
  → Informs launch go/no-go decision
      │
      ▼
Phase 2: LAUNCH
  Execute launch with validated positioning, features, pricing
  → Use Natural Language Bank for marketing copy
  → Use Objection Library for sales enablement
  → Use Feature Priority for onboarding sequence
      │
      ▼
Phase 3: MEASURE (Post-Launch)
  Run 5-question sentiment study (1-4 weeks post-launch)
  → Capture awareness, positioning, pricing, barriers
      │
      ▼
Phase 4: COMPARE
  Generate Launch Impact Report
  → Pre-launch expectations vs post-launch reality
  → Identify gaps, surprises, confirmations
      │
      ▼
Phase 5: IMPROVE
  Feed findings into next launch cycle
  → Adjust research questions based on what mattered
  → Calibrate launch tiering based on what needed research
  → Update objection library with real post-launch barriers
      │
      └──→ Return to Phase 1 for next launch

What Improves with Each Cycle

| After This Many Cycles | What You Learn | How It Improves the Process |
|------------------------|----------------|-----------------------------|
| 1 cycle | How your market reacts to a specific concept | Better questions next time, calibrated expectations |
| 3 cycles | Patterns in what excites vs what converts. Recurring barriers. Pricing sensitivity range. | You stop testing dimensions that are stable and focus on dimensions that shift. Your objection library becomes comprehensive. |
| 5+ cycles | Institutional knowledge about your market's launch psychology. | Predictive accuracy improves: pre-launch scores become predictive of post-launch performance. You can forecast launch impact before committing resources. |

Cross-Launch Trend Tracking

## Launch Research Trend Analysis: [Product]

### Concept Resonance Over Time
| Launch | Date | Pre-Launch Score | Post-Launch Awareness | Gap |
|--------|------|-----------------|----------------------|-----|
| V1 Launch | Jan 2026 | 3.8/5 | 4/10 aware | Resonance strong but awareness low |
| V2 Feature | Mar 2026 | 4.2/5 | 6/10 aware | Both improving |
| V3 Platform | Jun 2026 | 4.5/5 | 8/10 aware | Strong on all dimensions |

### Recurring Barriers (across all launches)
| Barrier | V1 | V2 | V3 | Trend |
|---------|----|----|-----|-------|
| Trust/security | 6/10 | 4/10 | 2/10 | Declining (SOC 2 addressed it) |
| Price | 4/10 | 3/10 | 3/10 | Stable (price point validated) |
| Switching cost | 5/10 | 5/10 | 3/10 | Declining (migration tools helped) |

### Key Pattern
Trust concerns declined after SOC 2 launch. Price objections stable at 3/10
regardless of price changes. Switching cost drops when migration tools improve.
Implication: future launches should invest in migration support, not price
discounts, to reduce remaining barriers.

10. Worked Example: Launch Research for a Design Collaboration Tool

Scenario

Product: "CanvasSync" — real-time design collaboration for distributed teams
Concept description: "A design tool where your whole team can draw, annotate, and iterate on designs together in real time — like Google Docs for visual work, with built-in version history and stakeholder feedback tools."
Proposed price: $25/user/month
Primary competitor: Figma
Target: Product designers at companies with 10–200 employees
Launch tier: Tier 1 (new product)

Pre-Launch Group Setup

{
  "name": "Pre-Launch Validation - CanvasSync - Feb 2026",
  "description": "Product designers at companies with 10-200 employees. Work on collaborative design projects, use tools like Figma, Sketch, or Adobe XD. Responsible for UI/UX design, prototyping, and stakeholder presentations.",
  "group_size": 10,
  "filters": {
    "country": "USA",
    "age_min": 24,
    "age_max": 42,
    "employment": "employed",
    "education": "bachelors"
  }
}

Customised Pre-Launch Questions

  1. "When you first hear about a design tool where your whole team can draw, annotate, and iterate on designs together in real time — like Google Docs for visual work, with built-in version history and stakeholder feedback tools — what comes to mind? What excites you? What makes you sceptical?"
  2. "How would this fit into your current design workflow? Walk me through when and how you'd actually use it in a typical project."
  3. "What's the FIRST thing you'd want to try? What feature or capability matters most to you?"
  4. "What would stop you from trying this? What's the biggest barrier — whether that's cost, complexity, trust, switching from your current tool, or something else?"
  5. "How would you describe this tool to a designer friend in one sentence? What would you say it does?"
  6. "What would you expect to pay for this per user per month? At what price would it feel like a steal? At what price would it feel too expensive?"
  7. "If you could change one thing about this concept, what would it be? What's missing that would make this a must-have?"

Hypothetical Pre-Launch Findings

| Dimension | Finding | Score |
|-----------|---------|-------|
| Initial Reaction (Q1) | 7/10 positive: "Love the idea of Google Docs for design." 3/10 sceptical: "Figma already does this. What's different?" | 3.9/5 |
| Use Case (Q2) | 8/10 described specific workflows: "I'd use it for stakeholder review sessions instead of Loom videos." 2/10 vague: "Probably for team stuff." | 4.1/5 |
| First Feature (Q3) | 6/10: Stakeholder feedback tools. 3/10: Real-time drawing. 1/10: Version history. | Stakeholder feedback is the hero feature |
| Barriers (Q4) | 5/10: "Switching from Figma would be painful — all our files are there." 3/10: "My team won't learn another tool." 2/10: "I'd need to see it's as fast as Figma." | Migration is the critical barrier |
| Description (Q5) | Most common: "It's like Figma but with better stakeholder collaboration." 4/10 used the word "feedback." | 4.0/5 accuracy |
| Pricing (Q6) | Median expected: $20/mo. Steal: $12/mo. Too expensive: $40/mo. At $25: 4/10 "fair," 4/10 "slightly high," 2/10 "good deal." | $25 is at the top of fair |
| Feature Gaps (Q7) | 5/10: "Figma file import." 3/10: "Developer handoff." 2/10: "Design system management." | Figma import is non-negotiable |

Launch Readiness Verdict

Overall Score: 3.9/5 — CONDITIONAL LAUNCH

Post-Launch Study (4 weeks later)

Fresh group, same demographic filters, 5 questions. Hypothetical findings:

Launch Impact Report Key Finding

Pre-launch predicted migration as the critical barrier. Post-launch confirms this but reveals a second barrier not captured pre-launch: plugin ecosystem. This is the highest-priority post-launch investment. Stakeholder feedback positioning landed well — the market describes CanvasSync as "Figma with better feedback tools," which is exactly the intended positioning.


11. Multi-Segment Launch Research

Different segments react differently to the same product launch. A feature that excites individual designers may alarm enterprise IT teams. A price point that feels cheap to a funded startup feels expensive to a freelancer. Multi-segment launch research captures these differences before they become post-launch surprises.

Multi-Segment Group Setup

# Group A: Freelance/solo designers
{
  "name": "Pre-Launch - CanvasSync - Freelancers - Feb 2026",
  "description": "Freelance designers and solo practitioners. Work independently or with small client teams. Price-sensitive, tool-agnostic.",
  "group_size": 10,
  "filters": { "country": "USA", "age_min": 22, "age_max": 38, "employment": "employed" }
}

# Group B: In-house design teams (10-50 people)
{
  "name": "Pre-Launch - CanvasSync - Design Teams - Feb 2026",
  "description": "Product designers working in-house at tech companies with 10-50 person design teams. Use Figma daily, collaborate across product and engineering.",
  "group_size": 10,
  "filters": { "country": "USA", "age_min": 26, "age_max": 42, "employment": "employed", "education": "bachelors" }
}

# Group C: Enterprise design organisations (50+ designers)
{
  "name": "Pre-Launch - CanvasSync - Enterprise - Feb 2026",
  "description": "Design leads and managers at large organisations with 50+ designers. Manage design systems, govern tools, report to VP Design.",
  "group_size": 10,
  "filters": { "country": "USA", "age_min": 30, "age_max": 50, "employment": "employed", "education": "bachelors" }
}
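
If you're scripting the multi-segment setup, the Section 4 workflow simply repeats per group. A sketch that creates one study per recruited segment (it assumes the three group UUIDs were captured at recruit time and that the study-creation response exposes an id field, as implied by Step 2):

# One pre-launch study per segment group; reuse the same QUESTIONS array for each.
for group_uuid in "$GROUP_A_UUID" "$GROUP_B_UUID" "$GROUP_C_UUID"; do
  study_id=$(curl -s -X POST "https://app.askditto.io/v1/research-studies" \
    -H "Authorization: Bearer $DITTO_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$(jq -n --arg g "$group_uuid" \
         '{ name: "Pre-Launch - CanvasSync - Segment Study", research_group_uuid: $g }')" \
    | jq -r '.id')
  echo "Created study $study_id for group $group_uuid"
  # ...then run the Section 4, Step 3 ask-and-poll loop against $study_id
done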

Ask the same 7 pre-launch questions to all three groups. Claude Code then produces a cross-segment comparison:

Cross-Segment Launch Comparison

## Cross-Segment Launch Analysis: [Product]

| Dimension | Freelancers (A) | Design Teams (B) | Enterprise (C) |
|-----------|----------------|------------------|-----------------|
| Resonance (Q1) | 8/10 excited | 6/10 excited | 4/10 cautious |
| Hero feature (Q3) | Real-time drawing | Stakeholder feedback | Design system mgmt |
| Top barrier (Q4) | Price | Switching from Figma | Security & compliance |
| Expected price (Q6) | $8-12/mo | $20-25/mo | $30-50/seat/mo |
| Top gap (Q7) | Offline mode | Figma import | SSO & audit logs |

### Segment-Specific Launch Strategy
- **Freelancers:** Lead with free tier + real-time drawing. Price: $10/mo.
- **Design Teams:** Lead with stakeholder feedback. Price: $20/user/mo.
  Must have Figma import at launch.
- **Enterprise:** Delay launch until SSO and audit logs are ready.
  Price: custom. Lead with design system governance.

### Beachhead Recommendation: Design Teams (B)
- Highest resonance-to-barrier ratio
- Price point validated at proposed level
- Hero feature (stakeholder feedback) is most differentiated
- Achievable barriers (Figma import is buildable)

The multi-segment launch insight: A product that tries to launch to all three segments simultaneously with one message, one price, and one hero feature will fail to resonate with any. Multi-segment research reveals which segment to launch to first, which to delay, and how to sequence the market entry.

12. Concept Testing for Products That Don't Exist Yet

The pre-launch study works for products still in the concept phase — no prototype needed. Frame Q1 around a clear problem statement and proposed solution rather than a product name.

Concept-Stage Question Adaptations

| Standard Q1 | Concept-Stage Q1 |
|-------------|------------------|
| "When you first hear about [product description], what comes to mind?" | "Imagine a tool that [solves problem X] by [approach Y]. When you hear that description, what comes to mind? What excites you? What makes you sceptical?" |

What Changes at Concept Stage

Concept Testing Iterations

At concept stage, you can run multiple studies cheaply. Test different framings:

# Study A: Problem-first framing
Q1: "Imagine a tool that eliminates the back-and-forth of design feedback
by letting stakeholders annotate directly on your designs in real time..."

# Study B: Comparison framing
Q1: "Imagine a design tool that works like Google Docs -- everyone can
see changes as they happen, leave comments inline, and resolve feedback
without switching tools..."

# Study C: Outcome framing
Q1: "Imagine cutting your design review cycle from 2 weeks to 2 hours
by replacing asynchronous feedback emails with live collaborative
annotation sessions..."

Compare resonance scores across all three framings. The framing that produces the highest Q1 scores and most specific Q2 use cases becomes your positioning foundation.

Concept testing ROI: Three concept validation studies take approximately 2 hours and cost nothing beyond the Ditto subscription. The traditional equivalent — three rounds of concept testing with focus groups — costs $45,000–$150,000 and takes 3–6 months. This makes iterative concept testing viable for the first time.

13. Measuring Launch Success Over Time (Advanced)

Beyond the immediate post-launch study, you can run periodic sentiment checks to track how market perception evolves over months.

Sentiment Tracking Schedule

| Study Type | Timing | Questions | Personas | Purpose |
|------------|--------|-----------|----------|---------|
| Pre-launch validation | Before launch | 7 | 10 | Concept validation and readiness assessment |
| Launch sentiment | 2–4 weeks post | 5 | 10 | Immediate market reaction |
| Quarter check | 3 months post | 3 (Q1 awareness, Q2 positioning, Q5 barriers) | 8 | Track awareness growth and barrier evolution |
| Half-year review | 6 months post | 5 (full post-launch set) | 10 | Comprehensive check: has the narrative settled? |

Launch Sentiment Tracking Dashboard

## Launch Sentiment Over Time: [Product]

### Awareness Trend
| Time | Aware (of 10) | Primary Channel | Perception |
|------|--------------|-----------------|------------|
| Week 2 | 3/10 | Twitter/X | "Interesting new tool" |
| Week 4 | 5/10 | Twitter/X + word of mouth | "Figma alternative with better feedback" |
| Month 3 | 7/10 | Word of mouth dominant | "The feedback tool designers are switching to" |
| Month 6 | 8/10 | Multiple channels | "Established player in design collaboration" |

### Barrier Evolution
| Barrier | Week 2 | Week 4 | Month 3 | Month 6 |
|---------|--------|--------|---------|---------|
| Switching from Figma | 5/10 | 5/10 | 3/10 | 2/10 |
| Plugin ecosystem | 4/10 | 4/10 | 4/10 | 3/10 |
| Trust/new company | 3/10 | 2/10 | 1/10 | 0/10 |
| Price | 2/10 | 2/10 | 1/10 | 1/10 |

### Key Insight
Trust barrier disappeared by month 3 (market evidence + early adopter advocacy).
Switching cost declining as Figma import tool improves. Plugin ecosystem is the
most persistent barrier -- this is the #1 post-launch investment priority.

14. Best Practices and Common Mistakes

Do

- Keep the concept description neutral and factual; marketing copy inflates Q1 reactions (Section 3).
- Ask questions one at a time and wait for each job to finish, so personas build conversational context (Section 4).
- Use identical demographic filters for the pre-launch and post-launch groups (Section 6).
- Recruit a fresh group for the post-launch study instead of reusing the pre-launch personas (Section 6).
- Call /complete after the final question and append a UTM parameter to every share link (Section 4).

Don't

- Reuse the pre-launch group for post-launch measurement; it captures primed reactions rather than market-level sentiment.
- Change filters between phases; the two-phase comparison depends on like-for-like demographics.
- Treat Q6 pricing answers as exact willingness to pay; use them directionally (see the FAQ).
- Skip pre-launch research on Tier 1 launches because the date is close; the full two-phase treatment takes about 75 minutes.

Common API Errors

| Error | Cause | Solution |
|-------|-------|----------|
| size parameter rejected | Wrong parameter name | Use group_size, not size |
| 0 agents recruited | State filter used full name | Use 2-letter codes: "CA" not "California" |
| Jobs stuck in "pending" | Normal for first 10–15 seconds | Continue polling with 5-second intervals |
| income filter rejected | Unsupported filter | Remove income filter; use education/employment as proxy |
| Missing completion analysis | Forgot to call /complete | Always call POST /v1/research-studies/{id}/complete after final question |
| Share link not available | Study not yet completed | Ensure study status is "completed" before requesting share link |

15. Frequently Asked Questions

How long does the full two-phase approach take?

Pre-launch concept validation: approximately 45 minutes (20–30 minutes API interaction + 15–20 minutes deliverable generation). Post-launch sentiment measurement: approximately 30 minutes (10–15 minutes API interaction + 15 minutes analysis and Launch Impact Report). Total: ~75 minutes across both phases. Compare with 4–8 weeks and $15,000–$50,000 for traditional launch research.

When should I run the post-launch study?

1–4 weeks after launch, depending on the product's reach. For a consumer product with broad launch marketing, 1–2 weeks is sufficient for awareness to spread. For a B2B product with targeted launch, wait 3–4 weeks. The goal is to capture the market's initial reaction while it's still forming, before the narrative calcifies.

Can I use this for feature launches, not just product launches?

Yes. For significant feature launches (Tier 2), use the focused 4–5 question pre-launch study. Frame Q1 around the feature rather than the product: "When you hear that [product] is adding [feature description], what comes to mind?" The same deliverables apply, scoped to the feature rather than the product.

What if the pre-launch study says "don't launch"?

This is one of the most valuable possible findings. A Launch Readiness Score below 3.0 (with critical risks identified) tells you the concept needs work before resources are committed. The cost of a 45-minute study is negligible compared to the cost of a failed launch. Use the Risk Register and Feature Priority Ranking to guide the rework, then retest with a fresh group.

How does this relate to the GTM Strategy Validation guide?

GTM Strategy Validation (see guide) determines how to reach your market: which segments, which motion, which channels, what pricing structure, what proof points. Product Launch Research determines whether the specific product or feature resonates and is ready to launch. Run GTM validation first to determine the strategy, then launch research to validate the specific product within that strategy. In practice, launch research often surfaces GTM insights as a byproduct (e.g., Q6 pricing data informs GTM pricing decisions).

Can I test a product that doesn't exist yet?

Yes. See Section 12. Frame questions around the problem and proposed approach rather than a product name. The study tests whether the concept resonates, not whether the product works. This is particularly valuable for pre-investment concept testing: validate demand before building anything.

How many personas should I use per group?

10 personas per group is the standard. Fewer than 10 produces unreliable patterns — the difference between 3/10 and 4/10 is noise, while 3/10 vs 7/10 is signal. For concept testing iterations (Section 12), you can use 8 personas per study to save time while maintaining sufficient signal.

Should the pre-launch and post-launch groups be the same demographic?

Yes, always. Use identical filters (country, age range, employment, education) for both groups. The two-phase comparison requires demographic consistency. Different filters between phases mean you're comparing different populations, not measuring how the same population's perception changed.

How do I share launch research results with my team?

Four formats: (1) The Ditto share link lets anyone explore the raw study data interactively. (2) The Launch Readiness Scorecard is a single-page executive summary. (3) The six deliverables serve different functions: product team gets Feature Priority Ranking, sales gets Objection Library, marketing gets Natural Language Bank, leadership gets Risk Register. (4) The Launch Impact Report serves as the post-mortem document.

What's the relationship between concept validation and positioning research?

Positioning research (see Positioning Validation guide) tests whether your market position and competitive differentiation resonate. Concept validation tests whether a specific product or feature concept is viable. Positioning is about the brand and its place in the market. Launch research is about the specific thing you're releasing. Validated positioning makes concept validation more reliable because Q1 descriptions are grounded in tested positioning language.

How accurate is synthetic launch research compared to real customer feedback?

EY validated 92% correlation between Ditto synthetic responses and traditional research methods. For launch research, synthetic personas are particularly strong at identifying barriers (Q4), capturing natural language (Q5), and revealing feature priorities (Q3). They are slightly less reliable for absolute pricing thresholds (Q6), where real purchasing behaviour introduces biases that synthetic personas don't replicate. Use Q6 data directionally: "the market expects $15–25" is more actionable than "the market will pay exactly $20."


Related guides:

- GTM Strategy Validation
- Positioning Validation


Ditto — Synthetic market research with 300,000+ AI personas. Validated by EY (92% correlation), Harvard, Cambridge, Stanford, and Oxford.
askditto.io · [email protected]