The .md file playbook
You keep hearing that AI coding tools work better with markdown files in your repo. Here's exactly what to create, where to put it, and why it changes everything.
Your AI can access specs. But access isn't the problem.
Yes, you can wire up MCP servers to Linear, Notion, or Jira. Your AI can read your tickets. The access problem is solved. But access was never the real problem.
The real problems: tickets are written for humans in sprint planning — "Add GitHub OAuth, see Figma for designs" — not for an AI that needs to know which auth library, which route structure, and which session strategy to use. When the ticket is done, it gets archived. Six months later, nobody can find why auth works the way it does. And when the spec changes mid-project, the edit history in Linear disappears.
Specs in your repo fix all of this. They're version-controlled, reviewable in PRs, and live next to the code they describe. The AI reads them with zero hops — no MCP setup, no searching for the right ticket, no hoping the ticket has enough detail.
Spec in Linear vs spec in the repo
Same feature. Same AI. The difference is where the spec lives and how detailed it is.
Spec in Linear (via MCP)
$ claude "Build AUTH-234"
Fetching Linear issue AUTH-234...
Title: "Add GitHub OAuth login"
Description: "See Figma for designs.
AC: user can log in with GitHub"
No auth library specified. Using NextAuth...
Route structure unclear. Guessing /pages/...
It works, but uses the wrong auth library
and the wrong route conventions.
Spec in repo
$ claude "Build what's in specs/auth-prd.md"
Reading specs/auth-prd.md ✓
Reading CLAUDE.md ✓
Using Supabase Auth + GitHub OAuth
Route: src/app/login/page.tsx
Session: httpOnly cookies (per conventions)
4 files created. All compile. All match spec.
The Linear ticket had enough context for a human in the sprint meeting. The repo spec had enough context for an AI to build it correctly on the first try.
5-minute quickstart
Two files. Five minutes. An immediate improvement to every AI interaction in your project.
Step 1: Create CLAUDE.md in your repo root
This is the file Claude Code reads first. It tells the AI your project's rules — what framework you use, how files are organized, and what to never do. Think of it as onboarding instructions for an AI teammate.
# Project conventions
## Stack
- Next.js 16 (App Router), Supabase, Tailwind CSS v4
## Rules
- Server Components by default. Only 'use client' for interactivity.
- Server Actions for mutations, not Route Handlers.
## File structure
- Pages: src/app/[route]/page.tsx
- Components: src/components/[feature]/[component].tsx
## Don'ts
- Never use localStorage for auth tokens
- Never add 'use client' to a layout
Step 2: Create your first spec
Pick the next feature you're about to build. Before you start prompting, write a short markdown file describing what you want. Put it in a specs/ folder. Then point the AI at it instead of typing a long prompt.
# Auth: GitHub OAuth Login
## Context
We need GitHub OAuth so users can connect their repos.
## Requirements
- Sign in with GitHub button on /login
- Callback handler at /auth/callback
- Session stored in httpOnly cookies (not localStorage)
- Redirect to dashboard after auth
## Non-goals
- Email/password auth (not needed for v1)
- Multiple OAuth providers
## Open questions
- Do we need to request repo:write scope?

Now instead of a five-paragraph prompt, you say: claude "Build what's in specs/auth-prd.md". The AI reads the spec, reads your CLAUDE.md conventions, and generates code that matches both.
The complete file toolkit
You don't need all of these on day one. CLAUDE.md and specs are the essentials. Add the others as your project grows.
| File | What it does |
|---|---|
| CLAUDE.md | Tells AI your project's rules |
| specs/*.md | Describes features before you build them |
| MEMORY.md | AI remembers your preferences across sessions |
| README.md | Project overview for humans (and AI) |
| docs/adr/*.md | Records why you chose X over Y |
| docs/runbooks/*.md | Step-by-step operational procedures |
| retro/*.md | Post-incident investigations & lessons learned |
| CHANGELOG.md | Release history |
CLAUDE.md in depth
A file in your repo root that Claude Code reads before generating any code. Every major AI coding tool supports a version of this — the file name differs but the concept is the same.
How to build it over time
Start with the basics from the quickstart. Then, every time the AI generates code that violates a convention, add a rule. Over weeks, your CLAUDE.md becomes a comprehensive guide that prevents the AI from repeating mistakes.
The "Don'ts" section is the most valuable part. AI tools are eager to help and will reach for common patterns even when they're wrong for your project. Explicit "never do X" rules are more effective than "prefer Y" suggestions.
# Project conventions
## Stack
- Next.js 16 (App Router)
- Supabase (auth + database)
- Tailwind CSS v4
- TypeScript (strict mode)
## Rules
- Server Components by default. Only 'use client' for interactivity.
- Server Actions for mutations, not Route Handlers.
- All request APIs are async: await cookies(), await headers()
- Use @neondatabase/serverless for Postgres, not @vercel/postgres
## File structure
- Pages: src/app/[route]/page.tsx
- Components: src/components/[feature]/[component].tsx
- Utilities: src/lib/[name].ts
- Specs: specs/[feature].md
## Don'ts
- Never use localStorage for auth tokens
- Never add 'use client' to a layout
- Never use default exports for components (named exports only)
- Never install a package without checking if we already have an equivalent
## Testing
- Integration tests hit real database, not mocks
- Test files live next to the code they test: foo.test.ts
## Before building a feature
- Read the relevant spec in specs/ first
- Check for existing components in src/components/ui/ before creating new ones
Instruction files across tools
Every major AI tool has its own instruction file. You don't need to maintain five copies — start with CLAUDE.md, then create minimal equivalents for other tools that say "Read CLAUDE.md for full conventions."
| Tool | Instruction file |
|---|---|
| Claude Code | CLAUDE.md |
| Cursor | .cursorrules or .cursor/rules/*.md |
| Codex | AGENTS.md or codex.md |
| GitHub Copilot | .github/copilot-instructions.md |
| Windsurf | .windsurfrules |
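You don't have to hand-write each pointer file. A minimal shell sketch, assuming the file names in the table above and that a one-line pointer is enough for each tool:

```shell
# Create minimal instruction files that defer to CLAUDE.md.
# File names follow the table above; trim the list to the tools you actually use.
for f in AGENTS.md .cursorrules .windsurfrules; do
  printf 'Read CLAUDE.md for full project conventions.\n' > "$f"
done

# Copilot expects its instruction file inside .github/
mkdir -p .github
printf 'Read CLAUDE.md for full project conventions.\n' > .github/copilot-instructions.md
```

Now a convention change only ever needs to be made in one place.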
Specs — the most important file you're not writing
For AI tools: Instead of writing a long prompt explaining what you want, you write a spec once and point the AI at it. The spec is reusable — multiple AI sessions, multiple tools, multiple team members can all reference the same spec.
For your team: A spec in the repo is reviewable in a PR. Your tech lead can approve the approach before anyone writes code. This is how you prevent "I built the wrong thing."
For your future self: Six months from now, when someone asks "why does auth work this way?", the spec explains it. Unlike a Linear ticket that's been archived, the spec is in the repo, searchable, and version-controlled.
What a good spec contains
- Context — Why does this feature exist? What problem does it solve?
- Requirements — What must it do? Be specific enough that an AI could build it from this alone.
- Non-goals — What is explicitly out of scope? This prevents AI tools from over-engineering.
- Open questions — What's unresolved? This signals where judgment is needed.
- Constraints — Are there technical or business constraints?
When to write one: Before any feature that would take more than ~2 hours to build. If you're about to spend 30 minutes explaining the feature to an AI in a chat window, that time is better spent writing a spec file.
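Put together, those five parts make a reusable skeleton. A sketch to copy into specs/; everything in brackets is a placeholder:

```markdown
# [Feature name]

## Context
[Why does this feature exist? What problem does it solve?]

## Requirements
- [Specific enough that an AI could build it from this alone]

## Non-goals
- [Explicitly out of scope, to prevent over-engineering]

## Open questions
- [Unresolved decisions that need human judgment]

## Constraints
- [Technical or business constraints, if any]
```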
Tip: auth-prd.md is findable forever. AUTH-234.md is meaningless after the sprint ends.

MEMORY.md — your AI gets smarter over time
CLAUDE.md is the instruction manual you write. MEMORY.md is the notebook the AI writes for itself — things it learns about you, your project, and your preferences as you work together.
When you correct the AI ("no, don't mock the database in tests — we got burned by that last quarter"), it saves that as a memory. Next session, it remembers. When it learns your role, preferences, or project context, it stores that too.
You're always in control. The AI asks before saving memories. You can review, edit, or delete any memory at any time.
What gets saved
- User memories: "Senior engineer, deep Go experience, new to React" → The AI tailors explanations accordingly
- Feedback memories: "Integration tests must hit real DB, not mocks" → The AI never generates mock-based tests again
- Project memories: "Merge freeze after March 5" → The AI flags non-critical PRs near that date
- Reference memories: "Pipeline bugs tracked in Linear project INGEST" → The AI knows where to look
An example MEMORY.md index:
- [User role](user_role.md) — Senior eng, deep Go experience, new to React
- [Testing preference](feedback_testing.md) — Integration tests must hit real DB, not mocks
- [Merge freeze](project_merge_freeze.md) — No non-critical merges after 2026-03-05
- [Linear tracking](reference_linear.md) — Pipeline bugs tracked in Linear project "INGEST"

Each entry links to an individual memory file with frontmatter the AI can parse. For example, feedback_testing.md:

---
name: Real database in tests
description: Never mock the database in integration tests
type: feedback
---
Integration tests must hit a real database, not mocks.
**Why:** Prior incident where mock/prod divergence masked a broken migration.
**How to apply:** Any test file that touches data persistence
should use the test database, not jest.mock().
ADRs — why you chose X over Y
Architecture Decision Records (ADRs) are short documents that record a decision, the context behind it, and why you chose what you chose. You might not need them on day one, but the first time someone (or an AI) asks "why do we use Supabase instead of NextAuth?", you'll wish you had one.
Why AI tools love them: When the AI encounters a choice (which database? which auth library?), it checks for ADRs. If it finds one, it follows the decision instead of guessing. Without ADRs, AI tools suggest whatever is most popular on the internet — which may be wrong for your project.
When to write one: Any time you make a technical decision that someone might question later. An ADR should be 20–40 lines. If it's longer, you're writing an essay, not recording a decision.
# ADR-003: Use Supabase Auth over NextAuth
## Status
Accepted
## Context
We need authentication with GitHub OAuth.
NextAuth and Supabase Auth both support this.
## Decision
Supabase Auth. We already use Supabase for the database,
and having auth + data in one service reduces complexity.
## Consequences
- Vendor lock-in to Supabase for auth
- Simpler session management (Supabase handles refresh)
- One fewer dependency
README.md — the front door
The README is the first file AI tools scan when orienting to a new project. A good README gives the AI a map of your project before it reads any code.
What makes a great one: Working setup instructions (test them from scratch!), architecture overview, links to specs and ADRs, and a list of key directories. Don't over-engineer it — the README points to other docs. It doesn't replace them.
Runbooks — when things break at 2am
Step-by-step procedures for operational scenarios — deploy rollbacks, incident response, database migrations. When you ask "how do I roll back a deploy?", the AI looks for a runbook first. If it finds one, it gives you the exact steps your team agreed on.
When to write one: After every incident. For every deploy process. The best time to write a runbook is right after you figured something out the hard way.
# Rollback a production deploy
## When to use
Production deploy caused errors. Need to revert.
## Steps
1. Go to Vercel dashboard → Deployments
2. Find the last known good deployment
3. Click ••• → Promote to Production
4. Verify the rollback in production
5. Post in #engineering Slack channel
## Escalation
If rollback doesn't fix it, page the on-call engineer.

Retros — so you never fight the same bug twice
Some bugs take days to fix. You try three approaches, two of them make it worse, and the fix turns out to be a subtle interaction nobody expected. Six months later, something similar breaks. If the retro isn't in the repo, you start from zero.
A retro .md file captures what went wrong, what you tried, and what actually worked. Not just the fix — the full investigation. The dead ends matter because they tell the next person (or AI) what not to try.
Why AI tools love retros: When the AI encounters a related issue, it reads the retro and skips straight to the approach that worked. It learns from your team's hard-won experience instead of suggesting the same failed approaches.
When to write one: After any bug that took more than a day to fix, any incident, or any problem where the root cause was surprising. The best retros are written while the pain is still fresh.
# Retro: Draft content data loss on publish
## What happened
Users reported that publishing a draft would sometimes
overwrite the document with an empty string.
## Timeline
- Mar 28: First report from user. Could not reproduce.
- Mar 29: Found race condition between Yjs auto-save
and the publish endpoint. Both writing to content column.
- Mar 30: Fix deployed. Added snapshot guard.
## What we tried
1. Debouncing the auto-save → didn't help, race window
was between auto-save and the publish API call
2. Locking the row during publish → deadlocked with
the PartyKit sync writing yjs_state
3. Snapshot comparison before write → THIS WORKED.
Save a snapshot before publish, compare after,
reject if content changed during the window.
## Root cause
The publish endpoint read content, transformed it, and
wrote it back. But between the read and write, Yjs
auto-save could overwrite content with a stale buffer.
## Prevention
- Added snapshot guard to all content write paths
- Added integration test that simulates concurrent writes
- Added CLAUDE.md rule: never write to content column
without checking the snapshot guard
CHANGELOG.md — a history that helps
Follow the Keep a Changelog format: ## [version] - date headings with Added, Changed, and Fixed sections.
AI tools reference the changelog to understand what's changed recently, what patterns are used for versioning, and what's been deprecated. Not essential on day one, but easy to maintain once you start.
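A minimal sketch of that format; the version numbers are illustrative, and the entries borrow from examples earlier in this playbook:

```markdown
# Changelog

## [1.2.0] - 2026-03-30
### Added
- GitHub OAuth login (see specs/auth-prd.md)
### Fixed
- Draft content data loss on publish (see retro/2026-03-30-data-loss.md)

## [1.1.0] - 2026-02-20
### Changed
- Sessions moved from localStorage to httpOnly cookies
```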
Writing tips
How to write docs that AI tools actually use.
- Write for the scanner. Headers, bullet points, code blocks. Nobody reads walls of text — and neither does AI. Clear structure = better AI output.
- Be specific, not vague. "Use Supabase Auth with GitHub OAuth" is useful. "Use appropriate authentication" is not. AI tools need concrete details to generate correct code.
- Name files after the feature, not the ticket. auth-prd.md is findable and meaningful forever. AUTH-234.md means nothing after the sprint ends.
- Keep files short. A 200-line spec is a spec nobody reads — including AI. If it's over ~100 lines, split it up.
- Update or delete. A wrong doc is worse than no doc. If a spec no longer reflects reality, update it in the same PR that changed the code.
- Use frontmatter for status. A status: draft | active | deprecated field tells both readers and AI whether to trust the document.
- Link between docs. Reference your ADRs from your specs. Reference your specs from CLAUDE.md. The web of connections makes every doc more powerful.
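A status field is just a few lines of frontmatter at the top of the file. A sketch; the superseded_by field is illustrative, not a standard:

```markdown
---
status: deprecated
superseded_by: specs/auth-v2.md
---

# Auth: GitHub OAuth Login
```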
Naming & folder structure
Here's where everything goes.
├── CLAUDE.md # AI conventions & guardrails (start here!)
├── README.md # Project overview
├── CHANGELOG.md # Release history
├── .claude/
│ └── memory/ # AI memory (auto-managed, don't edit manually)
│ ├── MEMORY.md # Memory index
│ ├── user_role.md
│ └── feedback_*.md
├── specs/ # Feature specs & PRDs
│ ├── auth-prd.md
│ ├── notifications.md
│ └── search-v2.md
├── docs/
│ ├── adr/ # Architecture decisions
│ │ ├── 001-nextjs.md
│ │ ├── 002-supabase.md
│ │ └── 003-redis.md
│ └── runbooks/ # Operational procedures
│ ├── deploy-rollback.md
│ └── incident-response.md
├── retro/ # Post-incident retros & lessons learned
│ ├── 2026-03-30-data-loss.md
│ └── 2026-02-15-auth-outage.md
└── src/ # Application code
AI tools that read your repo
This isn't specific to one tool. Every major AI coding tool benefits from structured docs in the repo.
- Claude Code — Reads CLAUDE.md automatically on every interaction. References specs, ADRs, and any file you point it at. Maintains MEMORY.md across sessions. The most repo-aware AI tool.
- Cursor — Indexes all project files including markdown. Uses them for autocomplete context and chat responses. Supports .cursorrules for project-specific instructions.
- GitHub Copilot — Uses repo context for inline suggestions. Benefits from a well-structured README and specs.
- Windsurf — Reads project markdown files for context. Supports .windsurfrules for instruction files.
- Codex — Uses repo files as context for code generation. Reads AGENTS.md for project-level instructions. Specs give it the clearest signal of intent.
Getting started checklist
Today (5 minutes)
- Create CLAUDE.md in your repo root with your stack, rules, and don'ts
- Create a specs/ folder
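Both day-one items can be scripted. A sketch; the bracketed placeholders stand in for your real stack and rules:

```shell
# Bootstrap the two essentials: CLAUDE.md and a specs/ folder.
mkdir -p specs

cat > CLAUDE.md <<'EOF'
# Project conventions

## Stack
- [framework, database, styling]

## Rules
- [conventions the AI must follow]

## Don'ts
- [things the AI must never do]
EOF
```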
This week
- Write a spec for the next feature you're building
- Try prompting your AI tool with "Build what's in specs/[your-spec].md"
- Every time the AI gets something wrong, add a rule to CLAUDE.md
This month
- Write an ADR for your last major technical decision
- Create a runbook for your deploy process
- Review and clean up your CLAUDE.md — remove anything outdated
Ongoing
- Write a spec before every feature > 2 hours
- Write an ADR for every "why did we choose X?" decision
- Review your AI's MEMORY.md occasionally — delete stale entries
verso.md makes this easy
Writing markdown in VS Code or GitHub's editor works, but verso.md gives you real-time collaboration, AI assistance that understands your repo, and a direct publish-to-GitHub workflow. It's the editor purpose-built for the files you just learned about.