
Most conversations with an LLM end when the tab closes. Whatever you figured out in there — the new term you coined, the structure you talked through, the tradeoff you finally made sense of — exists only in the chat history until you remember to copy it somewhere useful. That somewhere is usually a Google Doc, a Notion page, or a Slack thread, none of which an LLM can read back in any structured way the next time you start a new conversation.
We've built Metapad in part to fix this problem. And we use Metapad to build Metapad. This post is about how that loop actually works.
The Problem with Chat Alone
Treat an LLM like a thinking partner and you discover something quickly: the thinking is good, but the persistence is bad.
A typical conversation might:
- Coin five new terms you'll want to use again
- Make three decisions whose rationale you'll need to recall
- Surface eight relationships between concepts that didn't exist in your head before
- Land on one structural insight you'll want to share with the team
Then the conversation ends and almost none of it survives in usable form. The next session starts fresh. Your teammate, who wasn't in the chat, has no idea any of it happened.
You can paste a transcript into a wiki. You can summarise the conversation into a document. Both are better than nothing. Neither is structured enough for the next LLM session to read your prior thinking back without re-ingesting everything from scratch — and neither helps your team query "what did we decide about X?" without scrolling through a wall of prose.
The Loop: Conversation → Metamodel → Graph
Here is the working loop we've settled into.
1. Have the conversation
Open a chat with the LLM. Talk through whatever you're trying to clarify — a positioning strategy, a project plan, a domain you're learning, an architecture you're designing.
This is the same as before. The LLM is good at this. Use it.
2. Notice the structure emerging
After a few exchanges, structure starts to surface. The conversation contains:
- Things — entities, concepts, components, people, decisions
- Properties of those things — status, owner, deadline, rationale
- Relationships between them — depends on, supports, contradicts, replaces
This is the moment to stop and capture the shape of what you're discussing. Not paragraphs. Types.
You say to the LLM: "Let's build a metamodel for this. What node types do we need? What relationship types? Which connections should be allowed?"
The LLM is good at this too. It will propose types, name them, and explain its choices. You push back. You merge. You rename. After a few minutes you have a small, opinionated schema for the domain you're working on.
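A metamodel of this kind is small enough to sketch as plain data. Everything below — the type names, the schema shape, the helper function — is a hypothetical illustration of the idea, not Metapad's actual format:

```python
# A hypothetical metamodel for a project-planning conversation.
# Type names and schema shape are illustrative, not Metapad's format.

metamodel = {
    "node_types": {
        "Decision": {"properties": ["status", "rationale", "decided_on"]},
        "Concept":  {"properties": ["definition"]},
        "Person":   {"properties": ["role"]},
    },
    # (source type, relationship, target type) triples the schema allows;
    # anything else is rejected as off-model.
    "allowed_connections": [
        ("Decision", "depends_on", "Decision"),
        ("Decision", "replaces", "Decision"),
        ("Person", "owns", "Decision"),
        ("Concept", "supports", "Decision"),
    ],
}

def connection_allowed(src_type, rel, dst_type):
    """Check a proposed edge against the schema."""
    return (src_type, rel, dst_type) in metamodel["allowed_connections"]

print(connection_allowed("Person", "owns", "Decision"))  # True
print(connection_allowed("Person", "owns", "Concept"))   # False
```

The point of the "which connections should be allowed?" question is visible in the last two lines: once the schema is explicit, an off-model edge can be caught mechanically instead of slipping into the prose unnoticed.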
3. Fill the graph
Now use the LLM to populate the graph. Each thing in the conversation becomes a typed node. Each relationship becomes a typed edge. Properties get filled in. Descriptions capture the narrative that doesn't fit in a property field.
In Metapad, the LLM does this directly via MCP. It calls create_nodes, create_relationships, update_nodes — building up the model in real time while you watch. You correct as it goes. You add detail where the prose matters and trim where it doesn't.
By the end of the session, what started as a chat lives as a structured, queryable graph. The terms have definitions. The decisions have rationales. The relationships are explicit.
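The tool names above come from the post; the payload shapes below are assumptions made for illustration, not Metapad's actual MCP schema. A create_nodes call and a create_relationships call might conceptually carry something like:

```python
# Hypothetical payloads for the MCP tool calls the post names.
# The tool names are real; the payload shapes are assumed for illustration.

create_nodes_payload = {
    "nodes": [
        {
            "id": "dec-pricing",
            "type": "Decision",
            # The rationale text is elided here; in practice it holds the
            # narrative that doesn't fit a property field.
            "properties": {"status": "accepted", "rationale": "..."},
        },
        {
            "id": "concept-prototyping",
            "type": "Concept",
            "properties": {"definition": "Modelling a business before building it."},
        },
    ]
}

create_relationships_payload = {
    "relationships": [
        {
            "source": "concept-prototyping",
            "type": "supports",
            "target": "dec-pricing",
        }
    ]
}
```

Each node is typed against the metamodel from the previous step, which is what makes correcting the LLM as it goes tractable: a wrong type or a disallowed edge is a schema violation, not a matter of taste.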
4. The next session starts in context
Open a new chat tomorrow. Open Metapad. Connect the LLM to the model via MCP.
Ask: "What did we decide about X last week?" The LLM doesn't have to remember — the model does. It calls search_nodes and get_relationships and reads back what you actually committed to.
Ask: "Where does this new idea conflict with what we already have?" The LLM can answer because the existing graph is typed, not just prose. It can see structurally where the new idea fits or breaks.
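Conceptually, that read-back is just a search followed by an edge lookup. Here is a toy in-memory stand-in — search_nodes and get_relationships are the real tool names from the post, but these functions only illustrate the shape of what they return:

```python
# Toy in-memory stand-in for the read-back described above.
# Illustrative only; the real tools run over the Metapad model via MCP.

nodes = {
    "dec-pricing": {"type": "Decision", "label": "Usage-based pricing",
                    "properties": {"status": "accepted"}},
    "aud-founders": {"type": "Audience", "label": "Early-stage founders",
                     "properties": {}},
}
edges = [("aud-founders", "supports", "dec-pricing")]

def search_nodes(query):
    """Return ids of nodes whose label matches the query."""
    q = query.lower()
    return [nid for nid, n in nodes.items() if q in n["label"].lower()]

def get_relationships(node_id):
    """Return every edge touching the given node."""
    return [(s, r, t) for (s, r, t) in edges if node_id in (s, t)]

hits = search_nodes("pricing")
print(hits)                        # ['dec-pricing']
print(get_relationships(hits[0]))  # [('aud-founders', 'supports', 'dec-pricing')]
```

Because the edges are typed, "where does this conflict?" is a structural question: the LLM can walk supports, contradicts, and replaces edges rather than rereading prose and hoping.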
The conversation is no longer ephemeral. It compounds.
What This Demonstrates
This loop is also a quiet demonstration of three things we think matter about modelling tools in 2026.
Models you can talk to
Every Metapad model has an embedded AI interface. The AI reads the typed graph and grounds its answers in what is actually there, rather than hallucinating against unstructured prose. The difference is large in practice: an AI talking about a model from memory often gets details wrong; an AI talking over a model, with live access to its nodes and edges, rarely does, because it can look up the answer.

Modelling as a team sport
Once the model exists in Metapad, it isn't tied to your chat session. Your teammate can open it, browse the graph, comment on a node, suggest a missing relationship. The Reader Workspace makes the model legible to people who don't model themselves. The conversation that produced the graph stops being a private artifact and becomes shared infrastructure.
Models that compute
The graph is queryable through an API. Diagrams render from it. Reports render from it. If the domain calls for it, simulations run over it. The model is the source of truth; everything else is a projection.
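"Everything else is a projection" can be made concrete with a few lines. The data and the rendering below are hypothetical — a minimal sketch of deriving a report from graph data, not Metapad's API:

```python
# Minimal sketch of a "projection": a report rendered from graph data.
# The node data and rendering are hypothetical, not Metapad's API.

nodes = [
    {"type": "Decision", "label": "Usage-based pricing", "status": "accepted"},
    {"type": "Decision", "label": "Self-serve onboarding", "status": "proposed"},
    {"type": "Audience", "label": "Early-stage founders", "status": ""},
]

def decision_report(nodes):
    """Project the graph's Decision nodes into a plain-text report."""
    lines = ["Decisions:"]
    for n in nodes:
        if n["type"] == "Decision":
            lines.append(f"- {n['label']} [{n['status']}]")
    return "\n".join(lines)

print(decision_report(nodes))
```

The same graph could feed a diagram or a simulation instead; the discipline is that the report is regenerated from the model, never edited on its own, so it cannot drift out of sync.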
A Concrete Example: Modelling Our Own Positioning
We recently used this loop to clarify Metapad's own positioning.
The conversation started loose: "What is Metapad actually for? Who is it for? Why are people confused when we say 'business prototyping'?"
After a couple of hours of back-and-forth, structure emerged. We had Brands, Offerings, Audiences, Channels, Properties, Strategic Moves, Goals — and the relationships between them. We built a metamodel. We filled a graph. We named the model metapad-marketing.
That model now lives in Metapad. When we start a new conversation about marketing — a new campaign, a new audience, a new tradeoff — we don't start from scratch. We open the model and the AI reads it. The next conversation builds on what the last one decided.
We'll write about that model in detail in a later post. For now, the point is just this: the conversation that produced the positioning was preserved as a graph, not lost to chat history. We can interrogate it, evolve it, and share it with the team.
Why This Beats the Alternatives
Three habits people reach for when they want LLM thinking to persist, and why none of them are quite right:
Pasting transcripts into a wiki. Captures the prose, loses the structure. The next LLM has to re-read the whole thing to find anything. Your teammate does too.
Summarising into a document. Captures the structure as you saw it at the time, but the structure isn't queryable. "Show me everything we decided about Audience X" turns into a Ctrl-F exercise.
Custom GPTs / memory features. Personal. Locked to one tool. Invisible to teammates. Whatever the LLM "remembers" isn't a shared artifact you can read, edit, or hand over.
A typed graph in Metapad is none of these. It's a first-class artifact — readable, queryable, editable, shareable, and connectable to any LLM that speaks MCP.
Try the Loop
You don't need an enterprise transformation to try this. Pick something you're already trying to think through — a project plan, a research question, a domain you're learning. Open Metapad. Start a conversation. Let structure emerge. Capture it.
The discipline takes some getting used to. It's tempting to stay in pure prose mode because it feels faster in the moment. But the cost shows up the second time you want to use the thinking — and the moment a teammate needs to read your work without sitting through the original conversation.
The next conversation you have should compound the last one.
Want to try conversational modelling yourself? Create a free Metapad account and connect your favourite LLM via MCP.