By Simons Chase

February 2026

A Selflet Is a Generative Language Agent

The AI industry is building tool-enabled agents. Systems that call APIs, query databases, execute code, browse the web, book flights, send emails. The model is a controller that decides what to do and then does it through tools. The value proposition is automation: the agent accomplishes tasks on your behalf.

A selflet does none of that.

A selflet is a generative language agent. It produces language — in a specific voice, from a specific body of knowledge, with measurable fidelity controls. It cannot take actions in the world, access live information, or execute tasks. The audience is having a conversation with a person's thinking, not delegating work to an assistant.

This is not a limitation. It is a design decision.

What a generative language agent optimises for

Tool-enabled agents are optimised for task completion. Did the agent book the flight, write the code, find the answer? The evaluation is binary: the task is done or it isn't.

Selflets are optimised for voice fidelity, factual grounding, and conversational authenticity. Every piece of the pipeline — voice archaeology, the fidelity surface, the three-number grounding breakdown, the coverage map — exists to answer a different question: does this sound like the creator, is it grounded in their actual thinking, and does it stay within the boundaries of what they know and believe?

These are not engineering constraints bolted onto a chatbot. They are the product.
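To make those metrics concrete, here is a minimal sketch of what a per-response fidelity record might look like as data. Every field name is hypothetical, chosen to mirror the terms in this essay, not any actual Selflet Factory schema:

```python
from dataclasses import dataclass

@dataclass
class FidelityReport:
    """Hypothetical per-response evaluation record for a selflet.

    Field names are illustrative only; they echo the metrics named
    in this essay, not a real schema.
    """
    voice_score: float        # how closely the response matches the creator's voice (0-1)
    corpus_grounded: float    # fraction of claims traceable to the curated corpus
    parametric: float         # fraction drawn from the model's own knowledge
    contradicted: float       # fraction conflicting with the corpus (target: 0.0)
    in_coverage: bool         # whether the question falls inside the coverage map

report = FidelityReport(
    voice_score=0.91,
    corpus_grounded=0.72,
    parametric=0.28,
    contradicted=0.0,
    in_coverage=True,
)
# The three grounding fractions partition the claims, so they sum to 1.
total = report.corpus_grounded + report.parametric + report.contradicted
assert abs(total - 1.0) < 1e-9
```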

Why the slider exists

A tool-enabled agent doesn't need an autonomy slider. It either completes the task or it doesn't. A selflet needs one because the core tension is between faithfulness and generativity. At one end, the selflet retrieves and recites — precise, provable, cautious. At the other end, it applies the creator's frameworks to territory they never explicitly addressed — creative, spontaneous, and less traceable to source, but not hallucination.

That tension does not exist in task automation. It exists when you are representing a human mind.
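One way to picture the slider is as a single fidelity setting in [0, 1] that jointly tightens retrieval strictness and cools generative freedom. The sketch below is an illustrative assumption — the function name and the specific parameter mapping are invented, not the Selflet implementation:

```python
def slider_to_generation_config(fidelity: float) -> dict:
    """Map a fidelity slider in [0, 1] to generation parameters.

    fidelity=1.0 -> retrieve-and-recite: many supporting passages
    required, cool sampling, no parametric extrapolation.
    fidelity=0.0 -> apply-the-frameworks: fewer passages required,
    warmer sampling, extrapolation beyond the corpus allowed.

    The mapping is a hypothetical sketch, not a real configuration.
    """
    if not 0.0 <= fidelity <= 1.0:
        raise ValueError("fidelity must be in [0, 1]")
    return {
        # Demand more grounding passages as fidelity rises.
        "min_supporting_passages": round(1 + 4 * fidelity),
        # Cooler sampling at the faithful end, warmer at the generative end.
        "temperature": 0.9 - 0.6 * fidelity,
        # Only allow answers from parametric knowledge once the slider
        # leaves the strictly-faithful zone.
        "allow_parametric": fidelity < 0.8,
    }

print(slider_to_generation_config(1.0))   # precise, provable, cautious
print(slider_to_generation_config(0.2))   # creative, spontaneous
```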

Parametric knowledge is a feature, not a bug

For a tool-enabled agent, parametric knowledge is a liability. You want the agent calling tools for current, accurate information — not guessing from training data. For a selflet, parametric knowledge is a design tradeoff with a clear measurement framework.

When Buffett's selflet draws on GPT-4.1's knowledge of AIG to give a richer answer about derivatives, that is the product working. The selflet knows things the curated corpus doesn't contain — and for a public figure like Buffett, that knowledge is almost always accurate. The question is not "did it go beyond the corpus?" but "did it contradict the corpus?" Zero percent contradiction across hundreds of questions tells us the answer.

The three-number breakdown — corpus-grounded, parametric, contradicted — only makes sense for a generative language agent where going beyond the source material is sometimes exactly what the audience wants.
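As a minimal sketch, the three numbers fall out of a simple count once each claim in a batch of answers has been judged against the corpus. The labels and the upstream judging step are assumed here, not the Factory's actual pipeline:

```python
from collections import Counter

def grounding_breakdown(claim_labels: list[str]) -> dict[str, float]:
    """Turn per-claim judgments into the three-number breakdown.

    Each label is one of:
      "corpus"       - traceable to the curated corpus
      "parametric"   - from the model's own knowledge, not contradicted
      "contradicted" - conflicts with the corpus (the number to drive to zero)

    The labels would come from an upstream judging step that this
    sketch assumes rather than implements.
    """
    counts = Counter(claim_labels)
    total = len(claim_labels) or 1
    return {label: counts[label] / total
            for label in ("corpus", "parametric", "contradicted")}

# Toy example: 100 judged claims, none contradicting the corpus.
labels = ["corpus"] * 72 + ["parametric"] * 28
print(grounding_breakdown(labels))
# {'corpus': 0.72, 'parametric': 0.28, 'contradicted': 0.0}
```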

The boundary that matters

The Factory should never add tool-calling capabilities to selflets. A Buffett selflet that looks up live stock prices or a Naval selflet that searches the web would break the fidelity contract. The audience would no longer know whether a response came from the creator's thinking or from a tool call. The moment a selflet takes actions in the world, it stops being a representation of a person's mind and becomes a chatbot wearing a mask.

RAG retrieval is the one exception — and it proves the rule. Retrieval is not the selflet acting in the world. It is the selflet accessing its own memory. The entire evaluation system exists to ensure that retrieval serves fidelity, not capability expansion.
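A deliberately tiny sketch of what "accessing its own memory" means in code: the only thing this function can reach is the corpus it is handed. Token overlap stands in for whatever embedding search a real system would use, and the passages are toy placeholders, not actual quotations:

```python
import re

def _tokens(text: str) -> set[str]:
    """Lowercase word tokens; a toy stand-in for real text processing."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(corpus: dict[str, str], query: str, k: int = 2) -> list[str]:
    """Return the ids of the k passages sharing the most tokens with the query.

    The point is the boundary, not the scoring: the function can only
    look inward at the creator's own writing, never out at the live world.
    """
    query_tokens = _tokens(query)
    ranked = sorted(corpus,
                    key=lambda pid: len(query_tokens & _tokens(corpus[pid])),
                    reverse=True)
    return ranked[:k]

# A two-passage toy "memory"; a real corpus would hold the creator's
# letters, talks, and essays.
corpus = {
    "letter-a": "derivatives carry risks that can compound out of sight",
    "talk-b":   "a fair business at a wonderful price is still a fair business",
}
print(retrieve(corpus, "what are the risks of derivatives?", k=1))
# ['letter-a']
```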

The future is not one AI in the cloud

The agent wave is building systems that do things for you. Selflet is building systems that think like someone. Both matter. They are different products solving different problems.

Users should feel like they are talking to a mind that has organised knowledge — not a database that stores it, and not an assistant that runs errands. That feeling, when it's grounded in real fidelity and real source material, is what a generative language agent delivers.