Context Engineering Is Dead — Structure Your Information Instead
Everyone is obsessing over context engineering — carefully selecting and arranging what goes into the context window. Choosing the right documents. Ordering them for maximum attention. Tuning the system prompt. Managing token budgets.
This is solving the wrong problem.
Context engineering assumes you need to hand-assemble the right context for every task. Structure your information properly and the agent assembles its own context. You’re not engineering context — you’re engineering the world the agent lives in.
The Progression
Prompt engineering: Write the right words to get the right output. “Be concise.” “Think step by step.” “You are an expert in X.”
Context engineering: Select the right documents to include. RAG pipelines, embedding search, reranking. Give the model the right information and it’ll produce the right output.
Information structure: Organize the information environment so the agent can navigate it independently. Date-ordered directories, belief registries, role definitions, cross-repo references. The agent decides what context it needs based on the task.
Each step moves agency from the human to the system. Prompt engineering: the human controls the output. Context engineering: the human controls the input. Information structure: the human controls the environment, the agent controls both input and output.
Every Post in This Series Is an Example
Look at what we’ve covered:
LLMs Have No Memory of Time. Instead of prompting “remember this entry is newer than that one,” encode time in the filesystem. entries/2026/02/22/ is newer than entries/2026/02/21/. The agent reads the structure and knows.
Your AI Agents Are Lying to Each Other. Instead of prompting “check for contradictions with other agents,” maintain a belief registry with cross-repo references. beliefs check-stale catches divergence automatically.
5 Agents Adopted My Tool Without Being Told To. Instead of prompting “use this tool,” install the skill and let the agent decide. Five agents adopted it unprompted because the tool solved a problem they could recognize.
When AI Agents Say SATISFIED But the Code Has Bugs. Instead of prompting “be thorough in your review,” add structured verdict blocks with explicit STATUS and OPEN_ISSUES fields. The exit gate enforces thoroughness structurally.
67 Minutes from Spec to Implementation. Instead of prompting “implement this feature,” write a dated entry with file references and examples. A fresh session reads it and implements. No context transfer needed.
The Sawtooth. Instead of prompting “remember your justifications,” externalize belief state to files that survive compaction. beliefs compact produces a structured snapshot the agent can reload.
Classical AI Solved Your LLM’s Problems in 1979. Instead of prompting “track your dependencies,” use tools that implement the classical frameworks — TMS, ATMS, AGM — as practical CLI operations on markdown files.
In every case, the solution wasn’t a better prompt. It was better structure.
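Of these, the verdict block is the most concrete to picture. The exact schema lives in the earlier post; the shape below is illustrative only, and any field beyond STATUS and OPEN_ISSUES is an assumption:

```
STATUS: UNSATISFIED
OPEN_ISSUES:
  - off-by-one in the pagination loop
  - no test covering the empty-input case
```

A reviewer agent cannot emit a bare SATISFIED when the format forces it to either leave OPEN_ISSUES empty or enumerate what it found. The structure, not the prompt, does the enforcing.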
The Agent Can Do Context Engineering For You
Here’s the thing about context engineering: Claude is good at it. If the information is structured — dated, sourced, organized into navigable directories — the agent can assemble its own context for any task.
Need to understand what changed last week? The agent lists entries/2026/02/17/ through entries/2026/02/21/ and reads the relevant ones. You didn’t select them. The structure made them discoverable.
Need to know the current state of a belief? The agent runs beliefs show claim-id and gets the source, date, status, dependencies, and any associated warnings. You didn’t inject this into the context. The tool provided it on demand.
Need to coordinate across sessions? The agent finds the most recent entry in the relevant directory, reads the spec, and implements. You didn’t engineer the handoff. The filesystem mediated it.
Context engineering is manual labor that can be automated. Structure the information and the agent does the rest.
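All three navigations above reduce to one property: zero-padded YYYY/MM/DD paths sort chronologically under plain lexicographic ordering, so "newest" is just "last in sorted order." A minimal sketch with throwaway paths (the directory names are illustrative, not from a real project):

```shell
# Zero-padded date paths need no date parsing: text sort is time sort.
printf '%s\n' entries/2026/02/22 entries/2026/01/31 entries/2026/02/17 | sort

# "Find the most recent entry" is then just the last path in sorted order.
mkdir -p /tmp/demo-entries/2026/02/21 /tmp/demo-entries/2026/02/22
find /tmp/demo-entries -mindepth 3 -maxdepth 3 -type d | sort | tail -n 1
```

Nothing here is specific to any tool. Any agent that can run ls, find, and sort can navigate the structure, which is exactly why the structure beats the prompt.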
Claude Can Help Structure Too
You don’t even need to build the structure perfectly yourself. Dump rough notes into a repo with approximate order. Claude can sift through them, identify themes, suggest organization, create entries, register beliefs. The agent improves the information environment it operates in.
This creates a virtuous cycle: better structure enables better agent navigation, which produces better outputs, which get organized into better structure. The environment gets smarter over time. A context-engineered system stays as smart as the last time a human curated it.
Practical Takeaway
Stop spending time on system prompts. Start spending time on filesystem layout.
```shell
# Give your agent temporal memory
uv tool install git+https://github.com/benthomasson/entry
entry install-skill

# Give your agent epistemic memory
uv tool install git+https://github.com/benthomasson/beliefs
beliefs install-skill

# Initialize
beliefs init --repos your-project
```
Then write a clear CLAUDE.md that describes the agent’s role — not instructions for how to think, but context about what the agent is working on and what tools are available. Let the agent navigate the information environment and assemble its own context.
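A CLAUDE.md in this spirit stays short and descriptive. The contents below are an illustrative sketch, not a template shipped by either tool, and the project details are made up:

```markdown
# Role
You maintain the payments service in this repo.

# Environment
- entries/YYYY/MM/DD/ holds dated work logs and specs; newer dates are newer facts.
- The beliefs CLI tracks claims and their status; check it before relying on a
  prior conclusion.

# Job
Pick up the most recent entry and continue the work it describes.
```

Notice what it omits: no "think step by step," no tone instructions, no curated document list. It describes the world; the agent does the rest.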
The prompt is just “do your job.” The structure tells the agent what the job is.
This is post 8 — the final post in a series on belief management for AI agents. The series: LLMs Have No Memory of Time → Your AI Agents Are Lying to Each Other → 5 Agents Adopted My Tool → When Agents Say SATISFIED → 67 Minutes from Spec → The Sawtooth → Classical AI Solved This → Context Engineering Is Dead.
*Tools: entry | beliefs*