
Where Should Your Context Live?


If you’ve spent any time working with AI coding agents, you’ve probably felt the moment where the model forgets everything. New session, blank slate. All the architectural decisions, the naming conventions, the “we tried that and it broke” lessons. Gone.

The fix isn’t a better model. It’s better context storage.

But where should that context actually live? I’ve worked across three distinct approaches, and each one optimizes for something different. The right choice depends less on personal preference and more on how many people need to share that context and how fast it changes.

Here are the three patterns I see emerging.

Three context storage approaches


1. Integrated in your code repo

This is the approach gaining the most traction right now. Your context files (CLAUDE.md, AGENTS.md, architectural decision records, conventions docs) live right alongside your code. Same repo, same version control, same PR workflow.

How it works in practice: You create markdown files at the root or in a docs/ directory. Your AI agent reads them at session start. When conventions change, you update the docs and commit them with the code change that prompted the update. Context and code stay in sync because they share a timeline.
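To make that concrete, here’s a sketch of what a minimal in-repo context file might look like. The specific conventions listed are hypothetical placeholders, not recommendations:

```markdown
# CLAUDE.md — agent instructions for this repo (illustrative sketch)

## Conventions
<!-- hypothetical examples; replace with your team’s actual rules -->
- TypeScript strict mode; no default exports
- Services are wired via dependency injection, never singletons

## Lessons learned
- Don’t batch writes to the jobs table; it deadlocks under load (2024-01)

## Pointers
- Architecture decisions: docs/adr/
- API conventions: docs/api-style.md
```

When a convention changes, this file changes in the same commit, which is exactly the stay-in-sync property described above.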

What it’s good at:

  • Context is always versioned with the code it describes. When you check out a branch from three weeks ago, you get the context from three weeks ago.
  • Pull requests naturally become the review mechanism for context changes. Someone updates a convention? It’s visible, reviewable, and attributable.
  • Onboarding is instant. Clone the repo, and you have everything the AI agent needs.
  • No additional tooling. Git is already there.

Where it breaks down:

  • Cross-repo knowledge doesn’t have a home. If you have conventions that span multiple services, you’re duplicating files or picking one repo to be the “source of truth” while others go stale.
  • It biases toward technical context. Product strategy, user research insights, business rules: these feel awkward committed next to your src/ directory.
  • In larger teams, context files become a merge conflict magnet. Five people updating CLAUDE.md in the same sprint creates friction.

Best fit: Small to mid-size teams working on a single product or monorepo. This is the default I’d recommend for most teams getting started.


2. Separate context repository

Instead of embedding context in your code repo, you maintain a dedicated one: a “knowledge base repo” that your AI agents reference. Think of it as the team’s shared brain, decoupled from any single codebase.

How it works in practice: You create a standalone git repo with structured markdown: design principles, API conventions, architectural patterns, product context, even past decision logs. Your AI agents pull from this repo (or it’s mounted/linked into your workflow). Code repos stay focused on code.
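As a sketch, such a repo might be organized like this (the folder names are assumptions; any structure your agents can navigate works):

```
context-repo/
├── README.md                     # how humans and agents use this repo
├── principles/
│   ├── design-system.md
│   └── api-conventions.md
├── security/
│   └── policies.md
├── product/
│   └── personas.md
└── decisions/
    └── 2024-03-service-auth.md   # past decision logs
```

Each code repo then only needs a pointer to this repo, rather than its own copy of the shared material.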

What it’s good at:

  • Cross-cutting context has a natural home. Design system principles, API standards, security policies: they live once and apply everywhere.
  • Separation of concerns is clean. Engineers own code repos. The context repo can have broader contributors: designers, PMs, even the AI agents themselves writing session logs.
  • Scales well for multi-repo architectures. Microservices, monorepo-per-team setups, polyglot stacks. One context repo serves them all.
  • You can version context independently from code. Sometimes context evolves on a different cadence.

Where it breaks down:

  • Context drift is the real risk. Your context repo says “we use factory patterns for service initialization,” but three repos have already moved to dependency injection. Without active maintenance, the separate repo becomes a lie.
  • It adds coordination overhead. Now you have two things to keep in sync instead of one. For solo builders or tiny teams, this is pure overhead with little benefit.
  • Discovery is harder. New team members need to know the context repo exists. It’s not self-evident the way an in-repo CLAUDE.md is.
  • CI/CD integration requires extra work. Pulling the context repo into your agent’s environment isn’t always trivial.
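For instance, on GitHub Actions the “extra work” from that last point often amounts to a second checkout step. The repo name and secret here are placeholders:

```yaml
steps:
  - uses: actions/checkout@v4             # the code repo itself
  - uses: actions/checkout@v4
    with:
      repository: your-org/context-repo   # hypothetical shared context repo
      path: .context                      # agents read from here
      token: ${{ secrets.CONTEXT_REPO_TOKEN }}
```

It’s a small step, but it has to be added (and kept working) in every repo that consumes the context.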

Best fit: Mid-size to large teams with multiple repositories and shared conventions. Especially useful when non-engineering roles need to contribute context.


3. Markdown files in a knowledge management tool

This is the personal knowledge graph approach. Your context lives in Obsidian, a synced markdown folder (iCloud Drive, Dropbox), Notion, or a similar tool. Your AI agents access it via MCP servers, file system mounts, or copy-paste.

How it works in practice: You maintain a structured vault. Think PARA methodology, MOCs (Maps of Content), or just well-organized folders. Your context isn’t just agent instructions. It’s the full picture: meeting notes, product thinking, competitive analysis, design rationale, and technical conventions all interconnected. The AI agent taps into this broader knowledge layer rather than reading narrow instruction files.
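As one concrete access path: a Claude-based setup can expose a vault to agents through the reference filesystem MCP server. A config entry might look like this, with the vault path as a placeholder:

```json
{
  "mcpServers": {
    "obsidian-vault": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you/Vaults/main"]
    }
  }
}
```

The agent then reads notes as plain files on disk; no Obsidian-specific integration is required.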

What it’s good at:

  • Context is richer. You’re not limited to “how to write code in this repo.” You can give agents product strategy, user feedback patterns, business constraints. The kind of context that makes AI output genuinely useful instead of just technically correct.
  • It’s a living system. You’re already taking notes, capturing ideas, and refining your thinking. Your context storage is a byproduct of how you work, not an additional chore.
  • Linking and backlinks create emergent structure. In Obsidian especially, connections between notes surface relationships you didn’t plan for.
  • Works beautifully for solo builders and freelancers. Your vault is your unfair advantage. A personalized context layer no one else has.

Where it breaks down:

  • Sharing is the fundamental limitation. Obsidian vaults are personal by design. Getting a team of ten people to maintain a shared vault with consistent structure and quality is a coordination nightmare.
  • Access patterns are fragile. MCP servers time out, file system mounts break, and iCloud sync conflicts corrupt files. The tooling is not enterprise-grade.
  • There’s no built-in review process. In git, changes are visible in diffs. In a personal vault, someone can rewrite a core conventions note and no one notices.
  • Versioning is manual or nonexistent. You can put an Obsidian vault under git, but most people don’t. Context changes silently, and there’s no way to trace when a convention shifted.

Best fit: Solo builders, freelancers, and indie developers. Also valuable as a personal layer on top of a team’s shared context system.

Tradeoffs for each approach


The real answer: it’s a stack, not a choice

Here’s what I’ve landed on after months of building this way: the best setup isn’t one of these. It’s a deliberate combination.

Context stack recommendation by team size

For solo builders and freelancers: Start with your knowledge management tool as the foundation. Your Obsidian vault (or equivalent) is your persistent memory. Product thinking, patterns, lessons learned. Then add CLAUDE.md files in each repo for project-specific agent instructions. The vault provides the “why,” the repo docs provide the “how.”

For small teams (2-5 people): In-repo context is your primary layer. CLAUDE.md, decision records, and conventions docs, all versioned with code. Supplement with a lightweight shared context source (even a shared folder or small wiki) for cross-cutting concerns. Individual team members can maintain personal vaults for their own augmented context.

For mid-size teams and above: You likely need all three layers. A dedicated context repository for organizational standards and cross-repo conventions. In-repo CLAUDE.md files for project-specific agent guidance. And encourage individual contributors to maintain personal knowledge systems that make them more effective with AI tools.

For enterprise: Add governance. The context repo needs owners, review cycles, and freshness checks. In-repo docs need linting or CI validation. And you’ll probably need tooling that aggregates context from multiple sources into a coherent prompt. This is where context engineering becomes a discipline, not a habit.
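A freshness check of the kind mentioned above can start very small. This sketch flags context files that haven’t been modified in 90 days; a real pipeline would likely query git history rather than filesystem mtimes, and the paths and threshold are assumptions:

```python
# Sketch: flag context docs that may be going stale. The file list and
# 90-day threshold are hypothetical; adjust to your repo's layout.
import time
from pathlib import Path

THRESHOLD_DAYS = 90

def stale_context_files(paths, threshold_days=THRESHOLD_DAYS, now=None):
    """Return the subset of paths not modified in the last threshold_days."""
    now = time.time() if now is None else now
    cutoff = now - threshold_days * 86_400
    return [p for p in paths if Path(p).stat().st_mtime < cutoff]

if __name__ == "__main__":
    # Check the usual in-repo context files, if present
    candidates = [p for p in ("CLAUDE.md", "AGENTS.md") if Path(p).exists()]
    for f in stale_context_files(candidates):
        print(f"stale context: {f}")
```

Run it as a CI step and fail the build, or just warn, whenever it prints anything.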


The pattern underneath

The common thread across all of these is simple: context that isn’t maintained is worse than no context at all. A stale CLAUDE.md that describes last quarter’s architecture actively misleads your AI agent. An Obsidian vault full of outdated notes creates confident but wrong outputs.

Maintained vs stale context

Whatever storage strategy you choose, the maintenance question matters more than the tooling question. Pick the approach where updating context feels like a natural part of your workflow, not an additional task you’ll eventually stop doing.

The moat isn’t the model. It’s the context. And context needs a home.


This is Part 1. Part 2, “Not All Context Changes at the Same Speed,” goes deeper into why context rots and how to prevent it by matching your storage rhythm to the speed at which context actually changes.

Ole Harland

Product designer in Hamburg with 15+ years designing complex platforms. Currently exploring AI as a design and build tool.