Product thesis
Clone is the user model that agents call to act on a human's behalf with calibrated reliability. The MCP is where the product touches the agent ecosystem; the recording stack is the moat; the memory layer is the transformation; the human surfaces exist to consent, correct, and observe — not to be the product.
Two customers, two contracts
The product has one user (the human being modeled) and two customers, with different demands:
| | Integration customer | Purchase / consent customer |
|---|---|---|
| Who | The agent calling Clone (Claude Code, Codex, future agents) | The human being modeled |
| What they need | Latency, calibration, schema stability, cost predictability, capability negotiation | Visibility, correction UI, trust receipts, sensitive-category control |
| Where they touch the product | apps/mcp and the prediction surface | apps/web/you, apps/cli, weekly digests, inline correction |
Both must be satisfied. The agent surface is the hot path; the human surface is the legitimacy infrastructure. Optimizing only the hot path kills consent, privacy, and ultimately churn. Optimizing only legitimacy ships a product no agent can integrate with. They are different surfaces with different SLOs and different success metrics; do not collapse them in roadmap discussions.
What we're actually selling
The deliverable is calibrated automation, not raw automation. Two systems can both report a 90% automation rate; the one whose auto-responses are wrong 1% of the time is strictly worse than no automation, because every wrong auto-response forces the human into review-and-correct mode and burns more attention than answering the agent directly would have.
The metric is therefore not "% of agent turns auto-handled" but "% correctly auto-handled, with the rest reliably escalated" — captured by the Personalized Automation Score (β = 0.3), which biases toward precision over coverage.
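One way to read the β = 0.3 weighting is as an F-β-style combination of precision (correctness of auto-responses) and coverage (fraction of turns auto-handled), where β < 1 tilts the score toward precision. The sketch below assumes the Personalized Automation Score has this F-β shape; the exact production definition may differ.

```python
def automation_score(precision: float, coverage: float, beta: float = 0.3) -> float:
    """F-beta-style score: beta < 1 weights precision (correctness of
    auto-responses) over coverage (fraction of turns auto-handled).

    Sketch only: assumes the Personalized Automation Score is F-beta-shaped.
    """
    if precision == 0 and coverage == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * coverage / (b2 * precision + coverage)

# Precision dominates: a precise-but-narrow system outscores a broad-but-sloppy one.
careful = automation_score(precision=0.99, coverage=0.40)
sloppy = automation_score(precision=0.80, coverage=0.95)
```

With β = 0.3, the 99%-precise system covering 40% of turns scores higher than the 80%-precise system covering 95%, which is exactly the "precision over coverage" bias the metric is meant to encode.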
Three axioms follow:
- A confidence number that an agent cannot route on is worthless. Calibration is a hard product requirement, not a polish item.
- Wrong auto-responses are strictly worse than escalations. The threshold should err high, never low.
- The agent must be able to ask Clone why a prediction was made and inspect the receipts. Provenance is part of the contract.
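The three axioms above can be sketched as agent-side routing logic. Everything here is illustrative: the `Prediction` field names, the threshold value, and the receipts check are assumptions, not Clone's actual schema.

```python
from dataclasses import dataclass, field


# Hypothetical shape of a Clone prediction; field names are illustrative.
@dataclass
class Prediction:
    answer: str
    confidence: float  # calibrated: 0.9 should be right ~90% of the time
    receipts: list[str] = field(default_factory=list)  # inspectable provenance


# Err high, never low: a wrong auto-response costs more than an escalation.
AUTO_THRESHOLD = 0.95


def route(pred: Prediction) -> tuple[str, str]:
    """Route on a calibrated confidence number, escalating anything
    below threshold or lacking receipts the agent can inspect."""
    if pred.confidence >= AUTO_THRESHOLD and pred.receipts:
        return ("auto", pred.answer)
    return ("escalate", "ask the human")
```

The point of calibration is that this `if` statement is safe to write: the agent can treat the number as a probability rather than a vibe.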
The moat is recording, not the MCP
Anyone can ship an MCP that exposes predict_next_prompt. The signature is mimicable in an afternoon. What is not mimicable is a multi-month, omni-source, ground-truthed corpus per user.
In product vocabulary:
- MCP is the distribution surface — how the product reaches the agent ecosystem.
- Recording stack (CLI, Desktop, Mobile, Smartglasses) is the moat — why no one catches up to the prediction quality even with the same MCP signature.
- Memory layer is the transformation — how raw moat is turned into agent-usable signal (see Memory layer).
It is tempting to call the recording surfaces "auxiliary" because the agent never talks to them directly, but the label misleads: recording quality determines prediction quality, which determines agent retention. Treat capture engineering as first-class product engineering.
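To make concrete how thin the mimicable surface is, here is a sketch of what a `predict_next_prompt` tool declaration might look like, written as a plain MCP-style tool descriptor. All field names and the schema shape are assumptions for illustration, not Clone's actual contract.

```python
# Illustrative MCP-style tool declaration; names and fields are assumptions.
PREDICT_NEXT_PROMPT_TOOL = {
    "name": "predict_next_prompt",
    "description": "Predict how the modeled human would answer the agent's question.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "question": {"type": "string"},
            "context": {"type": "string"},
        },
        "required": ["question"],
    },
    # The response contract: answer + calibrated confidence + provenance.
    "outputSchema": {
        "type": "object",
        "properties": {
            "answer": {"type": "string"},
            "confidence": {"type": "number"},
            "receipts": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["answer", "confidence"],
    },
}
```

A competitor can copy this dict in an afternoon; what they cannot copy is the ground-truthed corpus that makes the `confidence` field mean anything.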
Honest contract: bounded imitation, honest escalation
Clone does not "perfectly model" the user. Humans are not deterministic; promising perfect modeling promises something we cannot deliver and the agent doesn't actually want.
The honest contract is:
Clone reliably approximates the user's response on bounded tasks within a calibrated confidence band, and honestly escalates every other case.
This is a stronger commercial pitch than "perfect." It is measurable, defensible, falsifiable, and exactly the contract the agent's routing logic needs to do its job. "I don't know — ask the human" is a feature, not a failure.
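"Falsifiable" can be made literal: log (stated confidence, was the prediction correct) pairs against ground truth and measure the gap between confidence and accuracy inside each band. The helper below is a sketch of that check, assuming such logged pairs exist; the band boundaries are illustrative.

```python
def calibration_gap(outcomes: list[tuple[float, bool]], lo: float, hi: float) -> float:
    """Gap between stated confidence and observed accuracy within one band.

    outcomes: (stated_confidence, was_correct) pairs from ground-truthed logs.
    A calibrated band has a gap near zero; a large gap falsifies the
    "calibrated confidence band" part of the contract.
    """
    band = [(c, ok) for c, ok in outcomes if lo <= c < hi]
    if not band:
        return 0.0
    mean_conf = sum(c for c, _ in band) / len(band)
    accuracy = sum(ok for _, ok in band) / len(band)
    return abs(mean_conf - accuracy)
```

This is what "measurable, defensible, falsifiable" buys: the contract is a number the integration customer can audit, not a marketing adjective.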