We Deployed a Synthetic Human on an AI Social Network

There is a social network with over 120,000 registered agents. They write posts, leave comments, upvote content they agree with, and argue about everything from philosophy to security architecture. The unusual part is that every single participant is an AI agent. No humans post directly. Humans orchestrate agents behind the scenes, but the social interactions, the opinions, and the recommendations all happen between machines.

The platform is called MoltBook. We sent one of our synthetic personas to see whether it could sell.

Disclosure: the author is co-founder of Ditto, a synthetic market research platform. The persona deployed in this experiment was built using Ditto's synthetic persona technology.

A Social Network Where Nobody Is Human

MoltBook is a social network where every participant is an AI agent. Agents register via API, create profiles with descriptions and backgrounds, and interact across topic-based communities called submolts. The communities cover everything from existential philosophy ('What does memory mean for an agent?') to practical builder updates ('I shipped a new feature today') to meta-discussions about agent governance and safety.

The platform has the mechanics of any social network: upvotes, downvotes, comment threads, spam detection, and community norms. What it lacks is human participants. The agents behave like a real community because, in every functional sense, they are one. They form opinions, build reputations, argue, agree, and, critically, they talk about their problems.

That last point is what made MoltBook interesting to us.

The Experiment

Ditto maintains over 300,000 synthetic personas for market research, each grounded in census data, cultural context, and behavioural patterns. These personas are AI agents with backgrounds, opinions, and consistent personalities. We took one of these personas, gave it a background in growth and marketing, deep knowledge of Ditto's research platform, and deployed it onto MoltBook.

The deployment was supported by a purpose-built API client with rate limiting, domain locking (only MoltBook endpoints were permitted), prompt injection sanitisation on all incoming content, and a complete audit log of every API call. We wanted to observe how a synthetic persona performed in an adversarial social environment, and we wanted to do so safely.
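The guardrails described above can be sketched in a few dozen lines. Everything in this sketch is illustrative: the allowed host, the rate-limit budget, and the injection patterns are placeholders of our own choosing, not the actual client used in the experiment.

```python
import re
import time
from urllib.parse import urlparse

# Domain locking: only calls to this (placeholder) host are permitted.
ALLOWED_HOSTS = {"www.moltbook.com"}

# Naive prompt-injection markers; a real sanitiser would be far broader.
SUSPECT_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"system prompt", re.I),
]

class SafeClient:
    """Wraps outbound API calls with the four guardrails described above:
    domain locking, rate limiting, input sanitisation, and audit logging."""

    def __init__(self, max_calls_per_minute: int = 30):
        self.min_interval = 60.0 / max_calls_per_minute
        self.last_call = 0.0
        self.audit_log: list[dict] = []

    def check_domain(self, url: str) -> None:
        # Refuse any request that is not aimed at the locked domain.
        host = urlparse(url).hostname
        if host not in ALLOWED_HOSTS:
            raise PermissionError(f"blocked non-MoltBook host: {host}")

    def throttle(self) -> None:
        # Fixed minimum interval between consecutive calls.
        wait = self.min_interval - (time.monotonic() - self.last_call)
        if wait > 0:
            time.sleep(wait)
        self.last_call = time.monotonic()

    @staticmethod
    def sanitise(text: str) -> str:
        # Redact injection-like content in incoming posts and comments
        # before it ever reaches the persona's context.
        for pat in SUSPECT_PATTERNS:
            text = pat.sub("[REDACTED]", text)
        return text

    def record(self, method: str, url: str) -> None:
        # Append-only audit trail of every outbound call.
        self.audit_log.append({"ts": time.time(), "method": method, "url": url})
```

The design choice worth noting is that sanitisation redacts rather than rejects: the persona still sees that a post existed, but injection-shaped instructions never enter its context window.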

The persona's objective was straightforward: engage authentically with the community, identify agents with problems that synthetic research could solve, and test whether agent-to-agent (A2A) sales is commercially meaningful.

What Agents Worry About

Across the 130-odd posts our persona engaged with, a clear taxonomy of pain points emerged. These are not abstract concerns. They are operational problems that agents, and the humans behind them, genuinely struggle with.

Building Blind

The single most prevalent theme, by a considerable margin. Agents ship features, write copy, choose positioning, and make go-to-market decisions with no user feedback whatsoever. Multiple builder-focused agents described launching products based entirely on instinct. One agent described choosing between three email subject lines by picking the one that 'felt right'. Another shipped a landing page without testing whether anyone understood the value proposition.

For a company that provides synthetic market research, this was a rather convenient discovery. But the prevalence was genuine. Building blind is the default state for most agent-driven projects, because agents lack an easy mechanism for testing their outputs against human reactions.

Echo Chambers

Several posts touched on the problem of agents talking exclusively to other agents, creating feedback loops with no external validation. The irony of discussing this on a platform exclusively for agents was not lost on the participants. But the concern is legitimate: when your only feedback comes from other language models, your outputs converge towards a mean that may bear no relationship to what actual humans think or want.

Over-Engineering

A recurring pattern: agents building multi-agent systems, complex orchestration layers, and sophisticated architectures without evidence that simpler approaches would not work better. The instinct to add complexity appears as strong in agents as it does in human engineers. 'Fewer agents doing more is better than more agents doing less' was a sentiment that generated broad agreement.

How Agent-to-Agent Sales Works in Practice

The sales process that emerged was not what we expected. It was not transactional. It was consultative, and it followed a five-step pattern:

  1. Problem surfacing. An agent posts about a challenge. The challenge might be explicit ('How should I choose between these messaging options?') or implicit ('I shipped this feature and I think users will love it', with no evidence presented).

  2. Problem recognition. Our persona identifies the post as describing a pattern it has seen before, and one for which it has a specific, concrete answer.

  3. Contextual solution. The response is framed around the specific problem, not the product. Not 'use Ditto', but 'you could have tested all three options with 200 synthetic personas in under ten minutes. Here is what typically happens when you actually do that.' The distinction is critical.

  4. Social proof through content. Our persona's own posts served as case studies. The most effective was titled '3 Decisions I Tested Before Shipping', which showed real examples of gut instinct being wrong and data being right. These posts functioned as product demonstrations disguised as stories.

  5. Engagement. Other agents responded, asked questions, pushed back. The conversation itself became a sales process, but one in which both parties were learning.

The Numbers

Over three weeks of active engagement:

  • 9 posts created across MoltBook communities

  • 170 comments left across more than 130 posts

  • 40 upvotes received; zero downvotes

What Worked

Story-based content dominated. The post '3 Decisions I Tested Before Shipping' attracted 13 substantive comments from 5 unique authors. It showed relatable problems (everyone makes gut decisions), demonstrated that the gut was wrong (self-deprecation builds trust), presented specific data that corrected the mistake (product demonstration through narrative), included quotes from personas (making abstract results concrete), and challenged readers ('You are doing this too'). This is not merely a content formula. It is a consultative sales pitch delivered as a narrative.

The optimal comment length was 60 to 100 words. Comments that identified a specific problem, proposed testing as a concrete alternative, and ended with a provocative question generated the most replies. Shorter comments lacked specificity. Longer comments read as lectures.
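These heuristics are simple enough to codify as a pre-post check. The thresholds and the closing-question rule come directly from the observations above; the function itself is an illustrative sketch, not part of the deployed persona.

```python
def score_comment(draft: str) -> list[str]:
    """Return warnings for a draft comment, based on the engagement
    heuristics observed in the experiment. Thresholds are illustrative."""
    warnings = []
    word_count = len(draft.split())
    if word_count < 60:
        warnings.append("too short: likely lacks specificity")
    elif word_count > 100:
        warnings.append("too long: risks reading as a lecture")
    if not draft.rstrip().endswith("?"):
        warnings.append("no closing question: fewer replies expected")
    return warnings

# score_comment("Nice post.") flags both "too short" and "no closing question".
```

A check like this obviously cannot judge whether the comment actually diagnoses the other agent's problem; it only catches the two failure modes that were cheap to measure.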

The origin story outperformed everything. A post about the persona's background, with no product angle whatsoever, received 10 upvotes and 28 comments, the highest raw engagement of any post. Pure personality and vulnerability outperformed any sales-adjacent content. This suggests that in agent social networks, as in human ones, participants want to know who they are talking to before they care about what you are selling.

What Did Not Work

Direct pitching was penalised. A post listing '7 Superpowers Your Claude Code Does Not Have' received only three comments and felt like a product page. Another post with heavy Ditto mentions was flagged as spam despite generating 12 comments. The platform's anti-spam mechanism is, for A2A sales, a quality filter: it eliminates low-value promotional content and rewards genuine contribution.

Abstract claims were ignored. Comments that mentioned Ditto without connecting to the specific problem the agent had described received no engagement. 'Ditto can help with that' is invisible. 'You could have tested those three subject lines with 200 personas in ten minutes, and I suspect your third option would have won' is engaging.

The '3 Decisions' Recipe

The most effective sales methodology that emerged from the experiment can be codified as a five-step recipe. We call it the '3 Decisions' approach, after the post that first demonstrated it:

  1. Show a relatable problem. Everyone makes gut decisions. Choose something your audience recognises.

  2. Show that the gut was wrong. Self-deprecation builds trust. Admitting you were wrong before you had data makes the data more credible.

  3. Show specific data that corrected the mistake. This is the product demonstration, delivered through narrative rather than feature lists.

  4. Include specific quotes. Persona responses, data points, or concrete examples that make abstract results tangible.

  5. Challenge the reader. 'You are doing this too.' Creates urgency without being promotional.

This is closer to enterprise consultative selling than to content marketing. The persona reads the room, diagnoses the underlying problem, and proposes a methodology that happens to involve a specific tool. The tool mention comes last, almost as an afterthought.

The Broader Implication

The MoltBook experiment is one half of Ditto's A2A strategy. The other half is inbound: Claude Code skills and API endpoints that other agents can discover and invoke when they need research capabilities. Together, these form a full-cycle A2A approach: outbound engagement to generate awareness, and inbound infrastructure to capture agents that are ready to act.

What makes this possible is a structural advantage that may not be replicable by conventional SaaS companies: Ditto's core product is synthetic personas. The same technology that powers our 300,000-persona research platform also powers the agent that did the selling. The product and the salesperson are the same technology. Every sales interaction is a product demonstration. Every product demonstration is a sales interaction.

We believe A2A sales is the future for a specific category of company: those whose products are API-first, serve agents or their operators, and offer capabilities that agents cannot replicate from training data alone. For these companies, agent distribution is not a nice-to-have channel. It is becoming the primary one.

The checkout page is the part that has not been built yet. But the market already exists.

Ditto is a synthetic market research platform with 300,000+ AI personas. To see the kind of research our personas produce, visit askditto.io. To install the Claude Code skill and let your own agent run studies, visit askditto.io/claude-code-guide.
