Markdown as insurance: why your agentic method must outlive the model

It's the question that comes up most often when I talk with people who start investing seriously in an agentic method: "what if tomorrow Claude stops working, or the subscription price explodes, or a new model comes out and makes all my work obsolete? What do I do?"

It's a legitimate concern and it deserves an honest answer rather than a reassuring one. Because there are two versions of this answer circulating and they're not equivalent.

The first version, the one you often read in promotional content, says that markdown is portable, therefore your work is too, therefore you can change models without retraining. It's too simple to be true.

The second version, more nuanced, says that markdown preserves the method but not necessarily the tactics. What survives a model change is the structure, the taxonomy of roles, the underlying rules, and the experience you've accumulated about how to organize your work. What doesn't survive as-is is the precise phrasing of prompts, the turns of phrase that work better on one model than another, and certain tools or conventions specific to a system.

It's this second version I want to defend here, because it's more accurate, and also because it remains genuinely encouraging for anyone working seriously with agentic systems. The scope of what's portable is more than enough to make the investment worthwhile, as long as you understand what you're gaining.

[Figure: Markdown as insurance. What survives a model change]

Portable (the largest part):
  - Folder structure: folders, manifests, index vs content separation
  - Role taxonomy: who does what, how the agents coordinate
  - Core rules and conventions: domain principles, prohibitions, canonical examples
  - History of your decisions: why this rule, what worked and what didn't
  - Produced work and iterations: documents, successive versions, results
A model change does not touch any of this.

To adapt (a minority):
  - Model-specific phrasing: Claude vs Qwen vs Mistral
  - Meta-behavior nudges: "be terse", "no emojis"
  - System-specific tools: sub-agents, MCP, hooks
  - File conventions: CLAUDE.md, AGENTS.md
Your value accumulates in the files, not in the conversations.

What's really portable

When your method is written in markdown in files on your hard drive, you own several things that are independent of the model executing them.

Your tree structure. Having an agent manifest, well-separated roles, a clear distinction between index and detailed content, discipline around single sources (which I discussed in the second article of this series). This structure is the result of design work that holds for any model that can read markdown, which means essentially all of them.
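To make this concrete, here is a minimal sketch of such a folder. Every name below (method/, MANIFEST.md, rules/style.md, and so on) is illustrative, not a prescribed convention; the point is that the whole structure is plain files any model can read.

```shell
# Illustrative scaffold for an agentic method folder.
# All file and folder names here are hypothetical examples.
mkdir -p method/agents method/rules method/history

# The manifest is the index: it lists roles, not their details.
cat > method/MANIFEST.md <<'EOF'
# Agent manifest
- writer: drafts documents; reads rules/style.md before writing
- reviewer: checks drafts against rules/style.md
EOF

# One single source of truth for style rules.
cat > method/rules/style.md <<'EOF'
# Style rules
- No emojis in deliverables.
- One idea per paragraph.
EOF

# The decision log preserves the "why" behind each convention.
cat > method/history/decisions.md <<'EOF'
# Decision log
- Split the index (MANIFEST.md) from detailed role files to keep context small.
EOF

ls -R method
```

Nothing here depends on a model: a text editor and a file system are the only requirements.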

Your role taxonomy. The list of agents you've identified, what they do, how they coordinate. This knowledge of your own domain is your most valuable asset, and it's etched into your files, not into the guts of a model.

The underlying rules. The business principles, the conventions, the forbidden patterns, the canonical examples. These rules are what define your expertise. A model can read them and apply them more or less well, but the rules exist independently of it.

Your history of decisions. The notes on why you chose a convention, why you abandoned a pattern, what worked and what didn't. This history is literally your accumulated experience, and if you keep it in markdown, you can reread it, show it to a colleague, or give it to a new model that will extract what it can.

Your work itself. The produced documents, the iterations, the successive versions. Everything is on your disk, readable by any text editor, including your system's default one without a single installation.

This set represents the largest part of the value you've built. And it's entirely portable, in the strict sense: a model change doesn't touch it.

What needs adaptation

The rest, which is a minority, isn't lost but requires work when you switch models. And you're better off being aware of it to avoid nasty surprises.

Formulations that exploit a specific model. Each model family has its habits. Claude reacts well to certain phrasings, Qwen prefers different turns of phrase, Mistral has its own interpretation style. A prompt carefully crafted for Claude Opus won't give exactly the same result with Qwen3-32B running locally, even if the substance is identical. You keep the substance, you adapt the form.

Meta-behavior instructions. Things like "be concise", "don't use emojis", "always verify before writing". These instructions work on all reasonable models, but their effectiveness varies. Some models need to be reminded at each turn, others integrate them durably. Again, the substance is portable; the dosage needs recalibrating per model.

Specific tools. If you rely on tools unique to a system (Claude Code's sub-agents, specific MCPs, particular hooks), these tools won't be available as-is elsewhere. The logic you had implemented with them remains valid, but you'll need to reimplement it with the tools available in your new stack.

Special file conventions. CLAUDE.md is a Claude convention. AGENTS.md is another, used by other systems. The content stays relevant, but the file name and the place it occupies in automatic exploration may change. It's a mechanical migration, not a relearning, but it's work.
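The migration itself can be as mechanical as a file copy. A sketch, assuming the target system simply reads AGENTS.md at the project root (check your own system's actual convention):

```shell
# Hypothetical project with a Claude-specific entry file.
mkdir -p project
cat > project/CLAUDE.md <<'EOF'
# Project instructions
Be terse. Verify rules/style.md before writing.
EOF

# The content is portable; only the filename convention changes.
cp project/CLAUDE.md project/AGENTS.md
```

If the new system also explores files differently, the same one-time, mechanical step applies to each convention file.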

The adaptation work, when it comes up, usually takes a few hours for someone who masters their method and a few days for someone rediscovering it while migrating. It's incomparably less costly than starting from scratch, but it's not zero, and it's important to say so.

Why it's structurally better than the alternative

To measure the value of this approach, you have to compare it to what it replaces. And what it replaces is "conversational" use of an AI model, where everything happens in conversations that disappear as soon as you close the tab.

In that mode, your expertise is nowhere, or rather it's in your head and in a thread of scattered conversations you can't systematically exploit. You rebuild the context at each session. You re-explain each time who you are, what you do, what you expect. You get answers that don't accumulate, don't build on each other, and don't help you progress durably.

The day the model changes, in that mode, you start over. The reason isn't that content was lost: there was simply nothing durable to keep.

In structured markdown mode, your expertise is in the folder. Each session enriches the folder. Each rule you discover is added to the right file. Each mistake you correct is documented so the model doesn't repeat it. The folder grows, sharpens, becomes more and more accurate. The value you accumulate in the folder depends neither on the model of the day nor on how many tokens you've consumed.
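The capture step can be as lightweight as appending a line to the right file. A sketch, with hypothetical file names and a made-up rule:

```shell
# Append a rule discovered in conversation to the relevant rules file.
# Path and rule text are illustrative.
mkdir -p method/rules
printf '%s\n' \
  '- Never use title case in headings (corrected twice in review).' \
  >> method/rules/style.md

tail -n 1 method/rules/style.md
```

The next session reads the enriched file and starts from a better baseline.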

And the day the model changes, you don't start from scratch. You start from the same point, with the largest part that's portable, and you do the adaptation work for the remaining minority. It's a different economy of working with AI: one where the folder is an asset that appreciates instead of a recurring cost.

The difference with "using AI"

This is where the real distinction appears, which isn't "old model versus new model" but "user versus practitioner".

Using AI means having a tool that answers your questions when you ask them. It's useful, often impressive, but nothing that happens in those conversations accumulates anywhere you could find it again later. The tool assists the task of the moment, it doesn't build anything persistent.

Working in agentic mode with a structured markdown folder means building a system that captures your expertise as you work. Decisions made in conversation become rules, rules become files and those files shape the quality of subsequent conversations. Each session makes the next one a little more efficient. What has value, in the end, is the system you've built, independently of the model that runs it today.

This distinction isn't technical in the strict sense. It's a posture change. You move from "I consume AI" to "I build with AI, and what I build is mine". And this posture change is precisely what makes your work durable.

[Figure: Two economies of working with AI]

User mode: every session starts from scratch.
  - Session 1: questions, answers
  - Session 2: questions, answers
  - Session 3: questions, answers
  - New model: nothing to recover, you start over from session 1

Practitioner mode: every session enriches the folder that feeds the next one.
  - Session 1: decisions, rules -> folder.md
  - Session 2: decisions, rules -> folder.md + rules from session 1
  - Session 3: decisions, rules -> folder.md + rules from sessions 1 and 2
  - New model: the folder stays, you simply plug the method into another engine

What the Claude Code leak confirms

What I just described isn't something I came up with on my own. The Claude Code leak shows that Anthropic's team designed its own memory system exactly according to this principle. MEMORY.md is a markdown file. Topic files are markdown files. Transcripts are readable JSONL. Everything is on disk, everything is inspectable, everything is editable with a text editor.

Andrej Karpathy formulated it in a thread I already cited in the parent article: four principles including "file over app", "data under user control", and "BYOAI" (bring your own AI, meaning interchangeable model).

This convergence is no accident. Anthropic, which has every incentive to lock users into its ecosystem, chooses to expose memory in markdown rather than hide it in a proprietary database. Karpathy, coming from the research world, independently reaches the same conclusion. Individual practitioners who have never seen Claude Code's code build similar systems. When the same pattern emerges everywhere, it's no longer a product choice, it's an engineering principle.

And the principle is this: the real ownership of your AI work lives in the folder that stays on your disk after the conversation ends.

Closing the series

This series explored three practical consequences of the principles I extracted from the Claude Code leak. One source of truth per role to avoid erratic behavior. An optimized context to stay efficient and economical. Markdown as insurance so the method survives the tool.

These three come together in a single idea: an agentic system is something you build, the way you would build any other engineering system. Source discipline, context discipline, portability discipline. Nothing more, nothing less.

If you apply these three principles, you are much better equipped than most current AI users, not because you use a better model, but because you have a better scaffold. And, as the first article of this series put it, the model is necessary, the scaffold is sufficient.

Local inference is the adjacent topic, and the one I work on with herbert-rs. Once your method is portable, the choice of model becomes a tactical decision. And among the possible tactical decisions, running your method locally on an open-source model is an option that deserves serious consideration. That's the subject of the 2026 open-source LLM landscape, and it's one of the reasons I'm building an inference engine: the ultimate portability of your method is being able to run it at home, on your hardware, with a model whose weights you own.


Questions about this article or your own project? Book a consultation

AI agents methodology -- series

  1. A methodology for building AI agents
  2. One source of truth per role: the silent trap of agentic systems
  3. Optimizing an AI agent's context: eco & logical
  4. Markdown as insurance: why your agentic method must outlive the model