Clarity and Portability Are the Same Thing
Today Anthropic had an outage and most Claude sessions timed out. I was mid-session on a physics paper review.
Rather than stopping, I switched to Gemini CLI. The infrastructure — entry tracking, belief registries, checkpoints, multi-model review — worked without rebuilding anything. The portability wasn’t designed in; it emerged from designing for clarity.
This suggests something broader: in the AI era, clarity and portability converge.
The End of Knowledge Arbitrage
The traditional moat was institutional knowledge. Ten years of accumulated expertise, internal documentation, tribal knowledge — hard to acquire, hard to replicate, defensible. An LLM compresses it in hours.
Every proprietary knowledge base, every internal wiki, every “only three people know how this works” system is now readable, navigable, synthesizable by any capable model. The arbitrage that took decades to build gets neutralized fast.
Companies trying to create lock-in through proprietary agent formats, memory APIs, and vendor-specific tool integrations are building on sand. If your agents depend on a vendor-specific SDK, you haven’t built a moat — you’ve accepted one someone else built around you.
What’s Actually Durable
The durable thing is structure: the way information is organized so any intelligent system can navigate it.
- Filesystem hierarchies with time encoded in paths
- Plain text and markdown
- Open CLI tools with documented interfaces
- Cross-references that point to real files
Not because it’s philosophically pure. Because it’s the only thing that actually survives model transitions, vendor outages, and platform changes.
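As a concrete sketch of the pattern above — time encoded in paths, plain markdown entries, cross-references that point to real files — here's a minimal, hypothetical layout. The directory scheme, filenames, and link format are illustrative, not any particular tool's actual convention; the point is that the whole structure is checkable with nothing but a filesystem walk.

```python
import re
from pathlib import Path

# Hypothetical layout: notes/YYYY/MM/DD/slug.md, with cross-references
# written as plain relative paths inside the markdown.
root = Path("notes")
entry = root / "2025" / "06" / "14" / "physics-review.md"
entry.parent.mkdir(parents=True, exist_ok=True)
entry.write_text(
    "# Physics paper review\n"
    "See also: notes/2025/06/13/belief-registry.md\n"
)

# A second entry, so the cross-reference above resolves.
other = root / "2025" / "06" / "13" / "belief-registry.md"
other.parent.mkdir(parents=True, exist_ok=True)
other.write_text("# Belief registry\n")

def broken_links(base: Path) -> list[str]:
    """Return cross-references that don't point to real files."""
    missing = []
    for md in base.rglob("*.md"):
        for ref in re.findall(r"\bnotes/[\w/.-]+\.md\b", md.read_text()):
            if not Path(ref).exists():
                missing.append(f"{md}: {ref}")
    return missing

print(broken_links(root))  # → [] when every reference resolves
```

Nothing here depends on a vendor: any model, CLI, or twenty-line script can navigate the same tree.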
The Real Finding
The tools I’ve built — entry tracking, belief registries, checkpoint patterns — are structured the way they are because that’s what works: what makes information navigable and usable by a model. It turns out “what works” and “what’s portable” are the same thing.
This wasn’t designed for portability. Portability emerged as a property of clarity.
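A checkpoint in this style can be nothing more than a dated plain-text file. The sketch below is hypothetical — the filename format and front-matter fields are my own, not the author's actual tooling — but it shows why the pattern survives model and vendor transitions: with time encoded in the filename, "resume from the latest checkpoint" is a lexical sort, and any model that can read a file can pick up the session.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def write_checkpoint(ckpt_dir: Path, state: dict, notes: str) -> Path:
    """Write session state as a dated markdown file with JSON front matter."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H%M%SZ")
    out = ckpt_dir / f"checkpoint-{stamp}.md"
    out.write_text("---\n" + json.dumps(state, indent=2) + "\n---\n\n" + notes)
    return out

def latest_checkpoint(ckpt_dir: Path) -> str:
    # Timestamps in filenames sort chronologically, so "latest" is
    # just the last entry of a plain lexical sort.
    return sorted(ckpt_dir.glob("checkpoint-*.md"))[-1].read_text()

ckpts = Path("ckpts")
ckpts.mkdir(exist_ok=True)
write_checkpoint(ckpts, {"task": "physics paper review"}, "Resume at section 3.")
```

No SDK, no memory API: resuming in a different CLI is just reading the same file.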
The implication: the right way to evaluate any agent infrastructure decision is not “does this work today?” but “can any sufficiently capable model use this tomorrow?” Those constraints are almost identical. Building for one builds for both.
On Moats in the AI Era
If knowledge arbitrage is gone and proprietary formats are sand, what’s durable?
Probably: taste, judgment, and the questions you think to ask. The ability to recognize a good answer when you see one. The unique perspective that determines which parts of the model’s vast knowledge get activated.
The model is the same for everyone. The intelligence that emerges from the pair is not. That’s not a moat in the traditional sense — it’s not excludable or defensible. But it’s real, and it’s not neutralizable by the next model release.
Post 5 in a series on the AI economic shift. Previously: Python Taught AI to Code. Next: The Power Gap Will Close.