What LLMs Actually Learn (And Why That's Perfect for Market Research)

When we introduce Ditto to people, one of the most common questions we get is: “How can an AI possibly understand what people actually want?” It’s a fair question. But it points to a fundamental insight about large language models that gets overlooked in discussions about AI and market research.

Language models don’t learn what people do. They learn what people say.

This sounds like a limitation, and in most contexts it is. But for understanding markets, it is exactly what you need.

The Discourse Layer

Large language models are trained on text: conversations, articles, reviews, discussions, explanations, debates. They learn the patterns of how humans articulate reasoning, express concerns, justify decisions, and explain choices.

Importantly, they don’t learn from direct experience.

They’ve never felt the hesitation before clicking a purchase button. Never experienced the awkwardness of telling a colleague their recommendation didn’t work out. Never felt the relief of finally solving a persistent problem. They never loved, or grieved.

What they learn instead is how people talk through these moments. The language of decision-making. The vocabulary of doubt and certainty. The phrases people use when explaining themselves, and their innermost emotions, to others and to themselves.

The Same Place Humans Live

For building systems that interact with the physical world, this is a serious constraint. An autonomous vehicle needs to understand physics, not descriptions of physics. A robot needs to know what happens when it moves, not what people say about movement.

But here’s the thing about humans: We’re much better at articulating reasoning than predicting our own actions.

We’re fluent in explaining trade-offs. We can walk through our concerns in language. Evolution has trained us to describe what appeals to us and what gives us pause. We’re articulate about constraints, about competing priorities, about what matters in theory versus what matters in practice.

What we fail at is predicting what we’ll do when the moment arrives.

Why This Matters for Synthetic Research

Most market research questions aren’t about predicting specific behaviors. They’re about understanding the cognitive and social context in which decisions happen.

What concerns do people voice? What objections surface? What language resonates? What framing feels manipulative versus clarifying? How do people justify choices to themselves and others?

These aren’t questions about what will happen. They’re questions about what gets said, and what gets said shapes what happens.

Language models and humans both operate in the discourse layer. For most research questions, that’s not a bug—it’s the entire point.

Andreas Duess

About the author

Andreas Duess

Andreas Duess builds AI tools that turn consumer behavior into fast, usable signal. He started his career in London, UK, working with Cisco Systems, Sony, and Autonomy before co-founding and scaling Canada’s largest independent communications agency focused on food and drink.

After exiting the agency, Andreas co-founded ditto, born from a clear gap he saw firsthand: teams needed faster, more accurate ways to map consumer behavior and pressure-test decisions before committing time and capital.
