The Spine Pattern: Multi-Repo Context for AI-Assisted Development

8 min read by Titus Soporan

Context is a hard problem. Every time you start a new session with an AI coding agent, you’re back to zero. The agent doesn’t know your architecture decisions, your naming conventions, or that you spent three hours yesterday figuring out why the auth flow breaks on refresh.

This “cold start” problem compounds when you’re working across multiple repositories. Different codebases, different languages, different frameworks - and somehow you need to keep all that context in sync while making progress.

I’ve been experimenting with a pattern I call the “Spine” - a lightweight meta-repository that sits above your actual codebases and serves as a context orchestration layer for AI-assisted development. It’s not a monorepo. It’s not git submodules. It’s markdown files and a systemized approach to context that keeps coding agents much closer to the behavior I actually want.

What the Spine Pattern Is

The spine is a separate git repository that contains:

  1. Layered CLAUDE.md files - A routing hierarchy that tells agents where they are and what patterns to follow
  2. A task system - Templated checklists that capture planning, decisions, and phased implementation
  3. Cross-cutting documentation - Anything that spans multiple codebases

The actual code repositories remain independent. They don’t know about the spine. The spine is operator-level tooling - it’s for you, the person orchestrating work across systems, not something you impose on a team.

spine/                          # Meta-repo (separate git)
+-- CLAUDE.md                   # Master navigation + global context
+-- _docs/                      # Cross-cutting documentation
+-- _tasks/                     # Active work tracking
|   +-- active/
|   +-- backlog/
|   +-- completed/
|
+-- project-a/                  # (separate git repo)
|   +-- CLAUDE.md               # Project-specific context
+-- project-b/                  # (separate git repo)
|   +-- CLAUDE.md
+-- project-c/                  # (separate git repo)
    +-- CLAUDE.md

The key insight: the spine tracks orchestration, not code. It’s where context lives between sessions.
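The structure in the tree above takes a minute or two to scaffold by hand. A minimal sketch, assuming the directory names shown (all names are conventions, not requirements):

```shell
set -eu

# Create the spine's own directories: docs plus the task lifecycle folders
mkdir -p spine/_docs \
         spine/_tasks/active \
         spine/_tasks/backlog \
         spine/_tasks/completed

# Seed the master navigation file; real content comes from your own conventions
cat > spine/CLAUDE.md <<'EOF'
# Workspace: spine
Navigation and shared conventions for the repos checked out alongside this file.
EOF

# Make the spine its own repository, then clone the actual code repos next to
# its files; they stay independent and never reference the spine:
# git init -q spine
# git -C spine clone git@example.com:org/project-a.git
```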

The CLAUDE.md Hierarchy: Routing for Agents

Each level of the hierarchy narrows the context:

  • Global (~/.claude/CLAUDE.md) - Your development philosophy, tool preferences, universal patterns
  • Workspace (spine/CLAUDE.md) - How this collection of projects fits together, navigation, shared conventions
  • Project (project-a/CLAUDE.md) - This specific codebase’s architecture, deployment, patterns

This hierarchy is primarily a routing layer for AI agents, not documentation for humans. When an agent lands in a project, it reads up the chain and understands where it is in the system. The workspace-level file is particularly important - it’s the map that shows how independent repos relate.
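As a sketch, a workspace-level spine/CLAUDE.md might look something like this - the project descriptions and sections are illustrative, not a prescribed format:

```markdown
# Workspace: spine

## What lives here
- project-a - backend service (auth, user APIs)
- project-b - API service
- project-c - frontend

## How they relate
project-c calls project-b, which delegates auth to project-a.

## Conventions
- Tasks live in _tasks/; filename prefixes route them (be-, fe-, x-).
- Read the target project's CLAUDE.md before editing its code.
```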

The Task System: From Planning to Parallel Execution

The _tasks/ directory is where the magic happens. But the real leverage isn’t in the task files themselves - it’s in the workflow around them.

The Workflow

  1. Plan in conversation - I sit down with Claude Code and iterate on planning. Multiple rounds. We explore the codebase, identify the right approach, surface edge cases. This might take 30 minutes or 2 hours depending on complexity.

  2. Commit to structure - Once I’m happy with the plan, I dump all that context into a task file:

    • Key decisions made (and why)
    • Important files identified
    • Phased checklist for implementation

  3. Fresh session, parallel execution - I start a new Claude Code session, gather the relevant context (the task file + project CLAUDE.md files), and spawn multiple agents to work on different phases in parallel.

The task file becomes a checkpoint. All that planning context that would normally vanish when you close the terminal? It’s now persisted in markdown, ready to inform the next session.
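The context-gathering in step 3 can be as simple as concatenating the relevant files in routing order before the fresh session starts. A sketch using throwaway fixture files (the demo-spine paths and contents are hypothetical):

```shell
set -eu

# Fixture files standing in for a real spine checkout
mkdir -p demo-spine/_tasks/active demo-spine/project-a
printf '# Workspace map\n' > demo-spine/CLAUDE.md
printf '# project-a context\n' > demo-spine/project-a/CLAUDE.md
printf '# Task: user migration\n' > demo-spine/_tasks/active/be-1-user-migration.md

# Routing order: workspace map first, then project context, then the task itself
cat demo-spine/CLAUDE.md \
    demo-spine/project-a/CLAUDE.md \
    demo-spine/_tasks/active/be-1-user-migration.md \
    > demo-spine/session-context.md
```

The resulting file is what you hand the new session: everything it needs to know, nothing it doesn't.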

Task File Structure

# Task: Feature Name
 
## Status: IN PROGRESS
 
## Context
[Why we're doing this, background, constraints]
 
## Key Decisions
- Decision 1: We chose X because Y
- Decision 2: Rejected Z approach because...
 
## Important Files
- `project-a/src/auth/handler.ts` - Auth flow entry point
- `project-b/api/routes/users.py` - User API endpoints
- `project-c/components/Login.tsx` - Frontend component
 
## Phases
 
### Phase 1: Backend Changes
- [ ] Update user model
- [ ] Add new endpoint
- [ ] Write integration test
 
### Phase 2: Frontend Changes
- [ ] Create new component
- [ ] Wire up API calls
- [ ] Handle error states
 
### Phase 3: Integration
- [ ] End-to-end test
- [ ] Update documentation

Real Example: 6-Repo Migration

I’m currently using this pattern for a contracting gig involving a backend migration. The setup:

  • 2 legacy repositories (being replaced)
  • 2 new repositories (the replacements)
  • 2 net-new repositories (new functionality)
  • Mixed languages and frameworks across the stack

Work constantly crosses boundaries. A single feature might touch the legacy API, the new API, and the frontend. Different teams own different repos. The context fragmentation is brutal without some orchestration layer.

Prefixed Tasks as Routing

With 6 repos, prefixed task files become a routing mechanism:

_tasks/active/
+-- be-1-user-migration.md      # Backend-focused
+-- fe-2-dashboard-redesign.md  # Frontend-focused
+-- x-3-auth-flow-update.md     # Cross-cutting (touches everything)

The prefix tells me (and any agent I spawn) which repositories are in scope. When I pick up x-3-auth-flow-update.md, I know I’m about to work across the entire stack. When I pick up be-1-user-migration.md, I can focus context on just the backend repos.

This routing becomes essential when you’re spawning multiple parallel agents. Each agent gets the task file relevant to its scope, plus the CLAUDE.md files for the repos it needs to touch.
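The prefix-to-repo mapping can even be made explicit. A small sketch, assuming the prefixes above - the repo lists per prefix are illustrative:

```shell
# Map a task-file prefix to the repos in scope for that task
scope_for_task() {
  case "$(basename "$1")" in
    be-*) echo "project-a project-b" ;;           # backend repos
    fe-*) echo "project-c" ;;                     # frontend repo
    x-*)  echo "project-a project-b project-c" ;; # cross-cutting
    *)    echo "unknown" ;;
  esac
}

scope_for_task _tasks/active/be-1-user-migration.md  # → project-a project-b
```

Whether you script it or just keep the mapping in the workspace CLAUDE.md, the point is the same: the filename alone declares the blast radius.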

Why Not Monorepo or Submodules?

Monorepo: Different teams own different repos. We’d never put legacy and new code together - they have different lifecycles, different CI, different deployment targets. A monorepo forces coupling that doesn’t match organizational reality.

Git submodules: The obvious “multi-repo” answer, but submodules are for code dependencies, not context orchestration. I don’t need the spine to track specific commits of each repo. I need it to track my understanding of how they fit together.

Polyrepo orchestrators (Nx, Turborepo): I haven’t tried these deeply, but looking at Nx, it seems to require adopting their approach across all repos. That’s a big commitment when things are moving quickly. The markdown + coding agents approach is lightweight - I can set it up in 10 minutes and throw it away if something better emerges.

The Lightweight Advantage

Here’s what I keep coming back to: markdown files + coding agents is incredibly light.

There’s no build system. No configuration. No learning curve beyond “put context in files where agents will find it.” When a better approach emerges (and it will - this space is moving fast), I can migrate or abandon the pattern without unwinding complex tooling.

This echoes the Unix philosophy: small, composable tools that do one thing well. Text files are the universal interface. I can grep my task files, git diff my context changes, pipe things through rg and fd. I live in the terminal, and the spine works with that workflow instead of pulling me into some GUI or web dashboard.
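Because everything is plain text, the usual tools compose. For instance, finding every task still in progress is one grep away (the fixture files below are hypothetical stand-ins for a real _tasks/ directory):

```shell
set -eu

# Fixture task files with the Status header from the template above
mkdir -p _tasks/active
printf '# Task: A\n\n## Status: IN PROGRESS\n' > _tasks/active/be-1-demo.md
printf '# Task: B\n\n## Status: DONE\n'        > _tasks/active/fe-2-demo.md

# List the task files still marked in progress
grep -rl 'Status: IN PROGRESS' _tasks/active/
```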

The spine is just text files. The workflow is just habits. That’s the whole thing.

Key Takeaways

  1. Context is the bottleneck - AI coding agents are powerful, but they start cold every session. The spine solves the cold start problem.

  2. Operator-level tooling - This pattern is for you, the person orchestrating work. It’s not something you impose on a team or check into business codebases.

  3. Planning → Task → Parallel execution - The real leverage is in the workflow: iterate on planning, commit context to a task file, spawn parallel agents with focused scope.

  4. Prefixes as routing - When working across many repos, task prefixes tell you (and agents) which codebases are in scope.

  5. Lightweight beats elaborate - Markdown files and habits beat complex tooling when the landscape is changing quickly.

Try It

If you’re working across multiple repositories with AI coding agents, the easiest way to get started:

curl https://tsoporan.com/blog/spine-pattern-multi-repo-ai-development.md

Feed that to your agent with your project list:

Using the spine pattern from that markdown, create a spine for:
[project-a], [project-b], [project-c]

Your agent will scaffold the structure for you - the meta-repo, the CLAUDE.md hierarchy, the _tasks/ directories. Then start your next planning session by dumping decisions and checklists into a task file.

You’ll feel the difference immediately. The agent knows where it is. You’re not re-explaining your architecture. The cold start problem is solved.


I use this pattern across SocialTide (where I manage multiple client sites and a platform) and contract work involving complex migrations. The approach scales from 2 repos to 6+ without changing the fundamentals.

If you’re interested in how I use AI coding agents more broadly, check out my post on Building with Claude Code: Dev Philosophy.