The global market research industry is worth roughly $80 billion. In 2026, the single most consequential question facing that industry is whether synthetic audiences, AI-generated panels that simulate human responses, can replace the traditional focus group. Not supplement it. Not complement it. Replace it. The answer, based on the evidence available today, is: sometimes yes, sometimes no, and the distinction between the two is where the real commercial value lies.
This article is a category piece, not a product review. It is written for the research director, the CMO, or the procurement lead who has been asked to evaluate whether their organisation should shift budget from traditional qualitative research to synthetic alternatives. It lays out the case for each approach, the evidence on accuracy, the cost comparison in real numbers, and a practical framework for deciding which methodology to use when.
Disclosure: I am co-founder of Ditto, one of the synthetic research platforms discussed in this article. I have tried to be fair throughout, but you should factor that interest into your reading. Where data comes from a platform's own materials, I say so. Where independent validation exists, I name the auditor.
The Case for Traditional Focus Groups
Before making the case for synthetic audiences, it is worth articulating clearly and honestly why traditional focus groups have dominated qualitative research for seven decades. They are not a legacy technology clinging to relevance. They do things that synthetic research currently cannot.
Emotional Depth and Body Language
A skilled moderator in a focus group reads the room. They notice when a participant's body language contradicts their words. They observe the micro-hesitation before a socially desirable answer. They catch the energy shift when a concept genuinely excites the group versus when participants are being politely supportive. Synthetic personas can report emotions, but they cannot experience them. They cannot lean forward, furrow a brow, or exchange a knowing glance with the person beside them. For research that depends on emotional authenticity (the observation of genuine human reactions to stimuli), traditional focus groups remain the superior methodology.
Genuine Surprise and Novelty
Large language models are trained on historical data. They can interpolate brilliantly within the space of what has existed before, but they cannot encounter something genuinely new. When a participant in a traditional focus group sees a product concept that has no precedent, their reaction is authentic: confusion, excitement, revulsion, curiosity. A synthetic persona, by contrast, will extrapolate from the nearest adjacent category. The response will be plausible. It may not be real. For truly novel product categories, where no training data exists, real human reactions are irreplaceable.
Regulatory Acceptance
In pharmaceuticals, financial services, and certain legal contexts, research evidence must meet regulatory standards. The FDA has not accepted synthetic research as primary evidence for drug labelling decisions. The FCA does not recognise AI-generated consumer panels in market abuse investigations. If your research needs to withstand regulatory scrutiny, traditional focus groups with documented recruitment, screened participants, and auditable transcripts remain the required standard. This may change, but it has not changed yet.
Cultural Nuance and Regional Specificity
Census data and population-level statistics capture broad demographic patterns. They do not capture the lived experience of belonging to a specific community, navigating a particular cultural context, or holding attitudes shaped by hyperlocal events. A focus group of eight Mexican-American mothers in San Antonio will surface nuances about food purchasing behaviour that no synthetic panel, however well-calibrated, can fully replicate. The gap narrows as training data improves, but it has not closed.
The Moderator's Intuition
Perhaps the most underappreciated advantage of traditional qualitative research is the experienced moderator's ability to follow unexpected threads. When a participant says something surprising, a good moderator probes. They abandon the discussion guide. They follow the insight wherever it leads. Synthetic research is structured: you ask a question, you receive a response. The serendipity of an unscripted conversation, where the most valuable finding was not one you thought to ask about, is difficult to replicate algorithmically.
The Case for Synthetic Audiences
Having made the strongest case for traditional research, it is equally important to acknowledge where synthetic audiences offer decisive, and in some cases transformative, advantages. For a comprehensive introduction to how synthetic research works, see What Is Synthetic Market Research?.
Speed: Minutes vs Weeks
A traditional focus group takes six to eight weeks from brief to debrief. Recruitment alone accounts for two to four weeks. Facility booking, moderator scheduling, travel logistics, and analysis add the rest. A synthetic study on a platform like Ditto returns results in minutes. Not days. Not hours. Minutes. According to Qualtrics' 2025 State of Research report, 71% of research leaders now cite speed as their top operational priority. When a product team needs to test three messaging variants before a board meeting on Friday, six to eight weeks is not a timeline. It is a disqualification.
Cost: Orders of Magnitude
A single traditional focus group costs between $8,000 and $15,000, covering recruitment, facility hire, moderator fees, incentive payments, and analysis. A synthetic study costs between $50 and $500, depending on the platform and scope. That is not an incremental improvement. It is a structural change in the economics of research. For the price of one focus group, an organisation can run 20 to 300 synthetic studies. The implication is not merely budgetary. It changes what questions organisations can afford to ask.
Scale: Simultaneous Global Coverage
If you need to test a packaging concept across 50 markets simultaneously, traditional research requires 50 sets of recruitment, 50 facilities, 50 moderators, and a multilingual analysis team. The project timeline extends to months and the budget to six figures. A synthetic platform can simulate panels across 50+ countries in a single afternoon. The Definitive Guide to Synthetic Research provides worked examples of multi-market studies that would be logistically impossible using traditional methods.
Hard-to-Reach Demographics
Try recruiting twelve C-suite executives for a focus group on enterprise software procurement. Or eight oncologists for a pharmaceutical concept test. Or a panel of swing voters in a contested congressional district three weeks before an election. Traditional recruitment for specialist populations is slow, expensive, and often yields panels too small for reliable inference. Synthetic audiences can model hard-to-reach demographics, from niche B2B decision-makers to healthcare professionals, without the recruitment bottleneck.
Iteration: Test Ten Variants in the Time of One
Traditional research is structurally hostile to iteration. Each variant requires a new session, new recruitment, new budget approval. Synthetic research inverts this constraint. An A/B test of ten pricing structures, seven messaging angles, or five packaging designs can run simultaneously, with results available before lunch. This changes the research from a validation exercise, confirming a decision already made, to a genuine exploration tool, informing the decision itself. For more on how synthetic platforms compare on iteration speed, see the four-way platform comparison.
Privacy and Compliance Simplicity
Traditional focus groups collect personally identifiable information: names, contact details, demographic data, video recordings of sessions. Each of these creates a GDPR obligation. Synthetic research collects no PII whatsoever. There are no consent forms, no data processing agreements with participants, no video recordings to store and eventually delete. For European organisations navigating the complexities of post-Schrems II data transfers, this simplification is non-trivial.
Availability: No Recruitment Delays, No No-Shows
Every research manager has a horror story about the focus group where three of eight participants did not show, rendering the session unusable and the budget wasted. Synthetic panels have 100% attendance, 100% completion rates, and zero scheduling conflicts. They are available at 3am on a Sunday if that is when you need them.
Where Synthetic Wins Decisively
There are use cases where synthetic research is not merely competitive with traditional methods but decisively superior. These are the scenarios where adopting synthetic audiences delivers the clearest return on investment.
Rapid Iteration and A/B Testing
Packaging design, messaging frameworks, pricing tiers, landing page copy, product naming: any decision that benefits from testing multiple variants quickly is better served by synthetic research. A CPG brand testing six packaging concepts can have results in an hour. The same exercise using traditional focus groups would take months and cost tens of thousands. For product teams using design tools, platforms like Ditto integrate directly with Figma, Canva, and Framer, enabling feedback on designs without leaving the tool.
Multi-Market Research
A global brand launching in Southeast Asia needs to understand consumer attitudes across Thailand, Indonesia, Vietnam, the Philippines, and Malaysia. Traditional research in five markets simultaneously is a quarter-long project. Synthetic research delivers it in a day. The 2026 Market Map shows which platforms offer the broadest geographic coverage.
Always-On Brand Tracking
Traditional brand health studies run quarterly at best. Synthetic brand tracking can run weekly or even daily, providing continuous sentiment monitoring that catches shifts in perception as they happen, not three months after the fact. For brands navigating fast-moving categories, social media crises, or competitive launches, this temporal resolution is transformative.
Hard-to-Reach Populations
Political campaigns need voter sentiment from specific districts. B2B technology companies need feedback from IT directors at mid-market firms. Venture capital firms need customer validation for niche SaaS products. In each case, traditional recruitment is the bottleneck. Synthetic panels remove it. For a deeper exploration of how this applies to political research and VC due diligence, see the 2026 Buyer's Guide.
Product Development Feedback Loops
When product teams can test a design iteration every morning and have results by the afternoon standup, research stops being a stage gate and becomes a continuous input. This is the promise of synthetic research for product-led organisations: not a replacement for user testing, but an always-available supplement that reduces the number of ideas that need to reach real users before being validated or killed.
Where Traditional Wins Decisively
Equally, there are use cases where traditional research is not merely preferable but essential. Responsible advocates of synthetic research should be candid about these.
Genuinely Novel Product Categories
When Apple introduced the iPhone in 2007, no amount of synthetic research could have predicted the full spectrum of human reactions to a device category that did not yet exist. If your product is genuinely unprecedented, if there is no adjacent category from which an AI can extrapolate, real human encounters with the physical product are necessary. Synthetic research can help refine the concept once early reactions are collected, but it cannot generate the initial reaction to true novelty.
Deep Emotional Exploration
Research into grief, addiction, chronic illness, financial distress, or other deeply personal experiences requires the empathy and adaptability of a trained human moderator. Synthetic personas can discuss these topics, but they do so from a position of simulation, not experience. For healthcare brands, charities, and public health organisations, the authenticity of traditional qualitative research in sensitive domains is not a luxury. It is a requirement.
Regulatory-Grade Evidence
If your research will be cited in an FDA submission, a financial regulatory filing, or legal proceedings, you need traditional research with documented methodology, screened and consented participants, and an audit trail that a regulator can inspect. No synthetic research platform currently meets this standard. As noted in our Buyer's Guide to AI Behavioural Prediction, this is a limitation of the current regulatory environment, not necessarily of the technology itself, but it is the environment in which you must operate today.
Co-Creation and Ideation Workshops
Some of the most valuable qualitative research is not evaluative but generative. Co-creation workshops, where participants build on each other's ideas in real time, produce outputs that neither the moderator nor the participants could have envisioned alone. The emergent, collaborative nature of these sessions, where energy begets energy and one participant's half-formed idea triggers another's breakthrough, is not something synthetic panels can replicate. The interaction is the methodology.
Board and Investor Presentations
This is a pragmatic rather than a methodological point, but it matters. When presenting research to a board of directors or a potential investor, the phrase "we spoke to real customers" carries weight that "we ran a synthetic panel" does not yet command. Perception lags reality. Even if the synthetic data is equally reliable, the audience's confidence in it may not be. For high-stakes presentations where credibility is paramount, traditional research provides a rhetorical advantage that synthetic research has not yet earned in most boardrooms.
The Validation Evidence
The question of whether synthetic audiences can match traditional focus group outcomes is not theoretical. Multiple organisations have run parallel studies, and the results are instructive. For a detailed analysis of how to evaluate these numbers, see Can AI Predict Human Behaviour?.
Stanford: 85% Individual Replication
The generative agents research at Stanford, led by the team that would go on to found Simile, demonstrated that LLM-based personas could replicate individual human responses on the General Social Survey with approximately 85% accuracy across 1,052 participants. This is peer-reviewed research published in academic proceedings. It measures individual-level replication: how closely a single synthetic persona mirrors a specific real person's stated attitudes.
EY and Ditto: 92% Aggregate Correlation
EY independently audited Ditto's synthetic research platform, running 50+ parallel studies comparing synthetic panel responses with real focus group outcomes. The aggregate correlation was 92%. This is the only independently audited figure in the current market. Crucially, it measures population-level alignment, not individual prediction. The synthetic panel gets the overall picture right, even if individual synthetic personas do not perfectly mirror any single real person.
Evidenza: 88% Self-Reported
Evidenza reports 88% average accuracy across 100+ head-to-head tests. The methodology is not publicly documented in detail. Separately, Evidenza notes that EY's Chief Marketing Officer reported 95% correlation on a specific internal project. Self-reported benchmarks are not inherently unreliable, but they have not been verified by an independent third party.
Artificial Societies: 95% Self-Reported
Artificial Societies claims a 95% human replication level. As with Evidenza, the detailed methodology behind this figure is not publicly available. The Y Combinator backing and Point72 Ventures investment suggest that sophisticated due diligence has been conducted, but that is not the same as published validation.
What the Numbers Mean and Don't Mean
These four figures, 85%, 88%, 92%, and 95%, cannot be ranked on a single scale. Each measures something different, validated by a different party, using a different methodology. Stanford's 85% is the most methodologically conservative. Ditto's 92% is the most rigorously audited for commercial application. Evidenza's 88% and Artificial Societies' 95% may be perfectly accurate but have not been independently verified. Any vendor who invites direct comparison between these figures is being either careless or deliberately misleading.
The Say-Do Gap
There is an important irony in the traditional vs synthetic accuracy debate. Traditional focus groups themselves are not perfectly accurate predictors of real-world behaviour. The say-do gap, the well-documented discrepancy between what consumers say in a research setting and what they actually do in the market, is estimated at 40-60% for many product categories. Traditional research has its own accuracy ceiling, and it is lower than many practitioners acknowledge. Synthetic research is being held to a standard that traditional research itself does not consistently meet.
The Hybrid Model: Where Most Organisations Will Land
The framing of synthetic vs traditional as an either/or choice is commercially convenient for vendors on both sides but intellectually dishonest. The most sophisticated research organisations in 2026 are not choosing between the two. They are integrating both into a hybrid methodology that uses each approach where it is strongest.
Synthetic for Screening, Traditional for Depth
The emerging pattern is to use synthetic research for the first pass: screening concepts, narrowing options, identifying the two or three directions worth exploring further. Traditional research then provides the depth: exploring emotional resonance, testing physical prototypes, and generating the rich qualitative texture that synthetic research approximates but does not match. This is not a compromise. It is a better methodology than either approach alone.
Synthetic for Scale, Traditional for Validation
When a global brand needs to test a concept across 30 markets, running synthetic research in all 30 and traditional research in the five most strategically important markets is both faster and more rigorous than running traditional research in five and hoping those findings generalise. The synthetic data provides breadth. The traditional data provides depth and a validation check on the synthetic results.
The 80/20 Split Emerging in Enterprise
Conversations with enterprise research teams suggest that the leading organisations are moving towards an approximate 80/20 split: 80% of research volume handled synthetically, with 20% reserved for traditional methods on high-stakes decisions, regulatory requirements, and genuinely novel explorations. This represents a dramatic shift from even two years ago, when synthetic research was a curiosity rather than a core methodology. See our Buyer's Guide for a detailed framework on how to structure this transition.
Case Examples of Hybrid Workflows
A mid-size CPG brand testing a new product line might run synthetic studies across 15 demographic segments in a morning, identify the three most promising concepts, then commission two traditional focus groups to explore the winners in depth. Total timeline: two weeks instead of three months. Total cost: $15,000 instead of $90,000. Research quality: arguably superior, because the traditional research is focused on concepts that have already passed a synthetic screening filter rather than concepts selected on the basis of internal opinion.
A political campaign might use synthetic voter panels to test 20 messaging variants across key districts in a single afternoon, then conduct traditional town halls and focus groups around the three messages that resonated most strongly. The synthetic research does not replace the human interaction. It ensures that the human interaction is focused on the right questions.
The Cost Comparison (Real Numbers)
Cost comparisons in this space are often vague. Here are specific numbers, drawn from published pricing and industry benchmarks.
Traditional Focus Group
Per-group cost: $8,000-$15,000 (recruitment, facility, moderator, incentives, analysis)
Timeline: 6-8 weeks from brief to debrief
Geographic coverage: 1 market per group
Participant count: 6-10 per group
Annual budget for monthly research: 12 groups at $8K-$15K = $96,000-$180,000, covering a single market
Synthetic Study
Per-study cost: $50-$500 depending on platform and scope
Timeline: Minutes to hours (Ditto, Artificial Societies) or up to 72 hours (Evidenza)
Geographic coverage: Any market, multiple markets simultaneously
Panel size: 10-1,000+ synthetic personas per study
Annual budget for continuous research: $50,000-$75,000 for unlimited studies across all markets
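The arithmetic behind these figures can be sketched as a quick back-of-envelope comparison. A minimal illustration using the ranges quoted above; the figures are this article's illustrative ranges, not vendor quotes, and the exact study counts depend on which ends of each range you pair.

```python
# Back-of-envelope annual cost comparison, using the illustrative
# ranges quoted in this article (not actual vendor pricing).

TRADITIONAL_PER_GROUP = (8_000, 15_000)  # USD per focus group (low, high)
SYNTHETIC_PER_STUDY = (50, 500)          # USD per synthetic study (low, high)

def annual_traditional(groups_per_year=12):
    """Annual cost range for monthly traditional research in one market."""
    low, high = TRADITIONAL_PER_GROUP
    return groups_per_year * low, groups_per_year * high

def studies_per_group_budget():
    """Synthetic studies that fit in one focus group's budget:
    conservative pairing (cheap group / expensive study) and
    generous pairing (expensive group / cheap study)."""
    t_low, t_high = TRADITIONAL_PER_GROUP
    s_low, s_high = SYNTHETIC_PER_STUDY
    return t_low // s_high, t_high // s_low

low, high = annual_traditional()
print(f"12 traditional groups/year: ${low:,}-${high:,}")  # $96,000-$180,000
lo_n, hi_n = studies_per_group_budget()
# The article rounds the conservative pairing (16) to roughly 20.
print(f"Synthetic studies per focus-group budget: {lo_n}-{hi_n}")
```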
The Structural Difference
The cost comparison is not merely about cheaper research. It is about a fundamentally different research model. Traditional research is a periodic, high-cost activity. Organisations conduct research when the budget allows and the decision warrants it. Synthetic research is a continuous, low-cost activity. Organisations can ask questions whenever they arise, test hypotheses as they form, and track sentiment in real time. The shift from periodic to continuous research may be more consequential than the cost saving itself.
Making the Business Case
If you are convinced that synthetic research merits evaluation, the next challenge is persuading your organisation. Scepticism is rational. The technology is new, the claims are bold, and the research industry has a long memory for overpromised innovations. Here is a practical approach.
Start with Low-Risk Use Cases
Do not begin by proposing that synthetic research replace your annual brand tracking study. Begin with concept screening, message testing, or competitive positioning analysis, use cases where the cost of being wrong is low and the speed advantage is immediately visible. A successful pilot on a low-stakes project builds credibility for higher-stakes applications.
Run Parallel Studies to Build Confidence
The most effective way to satisfy internal sceptics is to run the same study using both methodologies and compare the results. If synthetic and traditional methods produce substantially similar findings, the case for synthetic is made on speed and cost. If they diverge, you learn something valuable about the boundary conditions of synthetic research in your specific context. Either outcome advances your understanding. For guidance on how platforms perform in head-to-head comparisons, see the four-way platform comparison.
Position Synthetic as Additive, Not Reductive
The framing matters. "We want to replace focus groups with AI" triggers institutional antibodies. "We want to add a rapid-turnaround research capability that lets us test more hypotheses without increasing the research budget" does not. The most successful internal advocates position synthetic research as expanding the volume and speed of research, not as eliminating the qualitative capability.
Preserve Traditional Budget for High-Stakes Decisions
Reassure stakeholders that traditional research will continue for decisions that require emotional depth, regulatory compliance, or board-level credibility. The hybrid model is not a retreat from synthetic research. It is the intellectually honest position, and it makes the overall proposal easier to approve.
The Future Is Hybrid, but the Balance Is Shifting Fast
Five years ago, the question of whether AI could simulate human research participants was academic. Two years ago, it was speculative. Today, with independently audited accuracy rates, Fortune 500 adoption, $100 million venture rounds, and platforms operating at commercial scale across 50+ countries, it is operational. The question has moved from "does it work?" to "where does it work best?"
The honest answer is that synthetic research works best where speed, scale, cost, and iteration matter most. Traditional research works best where emotional depth, regulatory compliance, genuine novelty, and audience credibility matter most. Most organisations need both. The balance will continue shifting towards synthetic as the technology improves, but the shift will be asymptotic, not absolute. There will always be research questions that require a human in a room, reacting to something they have never seen before, with a skilled moderator watching their face.
What is changing, and changing rapidly, is the proportion of research questions that fall into each category. The 80/20 split emerging in enterprise today may become 90/10 within five years. For the research director evaluating synthetic platforms in 2026, the strategic question is not whether to adopt, but how quickly to build the internal capability and institutional confidence to use these tools effectively.
The $80 billion market research industry is not going to shrink. It is going to accelerate. Organisations will ask more questions, test more hypotheses, and make more evidence-based decisions than ever before, because for the first time, the economics allow it. The winners will be the organisations that figure out the hybrid model first.
For a head-to-head comparison of the four leading synthetic platforms, see Synthetic Research Platforms Compared. For the science behind AI behavioural prediction, see Can AI Predict Human Behaviour?. For an introduction to the field, see What Is Synthetic Market Research?.