Tinker AI

I’ve been noticing a pattern in the codebases I work in. The repos with thorough READMEs have a much better experience with AI tools — Cline, Cursor, Aider, all of them. The repos with sparse READMEs produce more wrong-pattern suggestions, more “this is the wrong abstraction,” more friction.

The cause is straightforward. The AI tools read READMEs. A good README sets context that the AI uses for every subsequent suggestion.

What’s interesting is that this is changing how teams write READMEs.

The pre-AI README

For a long time, READMEs were primarily for human onboarding. Useful sections:

  • What the project is
  • How to install
  • How to run
  • Where to find more docs

The README was usually thin because anyone working in the codebase eventually got context from other engineers, code itself, design docs, etc. The README was the entry point but not the primary source.

Many internal codebases had skeletal READMEs because nobody really used them. The team passed knowledge through pairing, code review, and Slack.

The post-AI README

When AI tools read READMEs, the README becomes the standing context for every AI interaction. This changes what’s worth putting there.

What’s now valuable:

Architecture overview. Not just “this is a Next.js app” but “this is a Next.js 15 App Router app with React Server Components by default, Tailwind v4, Drizzle ORM, and Supabase.” The model needs the stack to suggest matching code.

Convention statements. “We prefer X over Y because Z.” The AI uses these to make consistent choices.

Active patterns. “Server actions for mutations, tRPC for queries.” The AI picks up the pattern.

Anti-patterns. “Do not import from the legacy react-query package; we use @tanstack/react-query v5 throughout.” Saves the AI from defaulting to the older API it has seen more often in training data.

Layout map. “Routes in app/, components in src/components/, utilities in src/lib/utils/.” The AI knows where to put things.

Test conventions. “Vitest for unit tests, Playwright for e2e. Tests live in tests folders co-located with source files.” The AI generates matching test patterns.

Domain vocabulary. “We use ‘profile’ to mean the user’s public-facing record, ‘account’ for billing-related state. They’re different tables.” Disambiguates terms the model might conflate.
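Pulled together, these sections might look like the following README fragment. The stack, paths, and table names are illustrative, borrowed from the examples above, not a prescription:

```markdown
## Architecture

Next.js 15 App Router app. React Server Components by default; client
components opt in with "use client". Tailwind v4, Drizzle ORM, Supabase.

## Conventions

- Server actions for mutations, tRPC for queries.
- Do not import from the legacy react-query package; use @tanstack/react-query.

## Layout

- Routes: app/
- Components: src/components/
- Utilities: src/lib/utils/

## Testing

Vitest for unit tests, Playwright for e2e. Tests live in tests/ folders
co-located with source files.

## Glossary

- profile: the user's public-facing record (profiles table)
- account: billing-related state (accounts table)
```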

These are useful for humans too, but they were often skipped because humans could ask. AI tools don’t ask; they default. Defaulting wrong costs you in every session.

The CLAUDE.md trend

Many teams have been adding CLAUDE.md files specifically aimed at AI tools. The format usually includes:

  • The same context as the README, more concisely
  • Specific rules (“when adding a new endpoint, do X”)
  • Links to deeper docs (“for the auth flow, see docs/architecture/auth.md”)
  • Common pitfalls (“the user table has subtle gotchas; see CLAUDE.md#user-table”)

CLAUDE.md is read automatically by Claude Code; other tools have analogous files (Cursor’s rules files, Cline’s .clinerules), and many teams point them all at the same shared context. It’s becoming a standard convention.
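A minimal CLAUDE.md following that format might look like this; the specific rules and file paths are hypothetical examples, not a template every project needs:

```markdown
# Project context for AI tools

Condensed from README.md; read that first for the full architecture.

## Rules

- When adding a new endpoint, add a request schema and an e2e test.
- Never write raw SQL; use the Drizzle query builder.

## Deeper docs

- Auth flow: docs/architecture/auth.md

## Pitfalls

- The user table has subtle gotchas; see #user-table below.
```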

The interesting shift: teams are now writing documentation specifically for AI consumption. The audience for this kind of doc was, until recently, “the engineer reading the code.” Now it’s “the engineer plus the AI tools the engineer is using.”

What “AI-readable” docs look like

There’s a stylistic shift in how docs are being written:

More structured. Headings, bullets, tables. The AI tools reason about structured docs better than long prose.

More explicit. Things that would have been “obvious” before are now stated. “We use camelCase for variables, kebab-case for files, PascalCase for components.” The AI follows a stated rule far more reliably than one it has to infer from scattered code.

More specific. Generic guidance (“write good tests”) becomes concrete (“each public function has a test for happy path, error cases, and at least one edge case”).

More current. Stale docs are now actively harmful. The AI follows the doc, even if the doc is outdated. Teams maintain docs more carefully because the cost of stale docs is now ongoing pain in AI sessions, not just occasional confusion.

What’s getting documented that wasn’t before

A category of knowledge that used to be tribal now gets documented:

Why we don’t use library X. Previously: “the senior engineer remembers we tried it and it didn’t work.” Now: a paragraph in the README explaining the tradeoff.

Why this code looks weird. Previously: a comment in the code (sometimes). Now: a “weird code” section in the README with pointers.

Domain-specific terminology. Previously: ambient knowledge. Now: a glossary section.

Where the bodies are buried. Previously: known to the senior engineers. Now: a “things to know about this codebase” section.

This is good documentation hygiene that probably should have existed pre-AI. AI tools made the cost of not having it more visible.

The new doc burden

The flip side: more documentation is more work to maintain.

A README that includes architecture, conventions, anti-patterns, and a layout map is a 300-1000 line document. Keeping it accurate requires discipline. Stale sections produce wrong AI suggestions.

Teams that have invested in this often:

  • Make doc updates part of code review for relevant changes
  • Have a quarterly doc review process
  • Use linting/CI to catch stale references (broken links, references to renamed files)
  • Treat doc clarity as a coding standard
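The stale-reference check in the third bullet can be as small as a script that walks a doc for repo-relative markdown links and verifies the targets still exist. A minimal sketch, assuming plain markdown link syntax (it ignores external URLs and strips #anchors):

```python
import re
from pathlib import Path

# Matches markdown links like [text](path), capturing the path part and
# stopping before any #anchor or ?query suffix.
LINK_RE = re.compile(r"\[[^\]]*\]\(([^)#?\s]+)")

def broken_refs(text: str, root: Path) -> list[str]:
    """Return repo-relative link targets in markdown text that don't exist under root."""
    broken = []
    for target in LINK_RE.findall(text):
        if target.startswith(("http://", "https://", "mailto:")):
            continue  # external links are out of scope for this check
        if not (root / target).exists():
            broken.append(target)
    return broken
```

Wired into CI over README.md and CLAUDE.md, a check like this fails the build the moment a doc references a renamed or deleted file.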

The maintenance cost is real. The benefit is also real — but only for teams that maintain.

What I’d recommend

For teams adopting AI tools at scale:

Audit your top-level docs. Are they accurate? Specific? Useful for someone (or something) trying to learn the codebase quickly?

Add a CLAUDE.md or AI.md file. Concise, structured, focused on what AI tools need to know.

Per-package docs in monorepos. Each major package should have its own concise README capturing its specific context.

Make doc updates first-class. Code review should consider doc impact. Stale docs are tech debt.

Test the docs. Periodically run AI tools on tasks in your codebase. Notice when they get things wrong. The wrong things often reveal doc gaps.

For individual contributors:

Read the README before starting an AI session. If it’s sparse, mention it to your team. Add to it as you go.

Notice when AI suggestions don’t fit. This often means the doc didn’t capture what’s actually true. Update.

Push back on docs that are aspirational. Documentation that says “we use X” when the team really uses Y is worse than no documentation. Make it match reality.

The unexpected upside

AI tools have made teams better at documentation. Many teams that wrote skeletal READMEs five years ago now write thorough ones. The skill of explaining your codebase clearly — to both humans and AIs — is becoming a more common engineering practice.

This is good. Even setting aside AI use, codebases with thorough docs are better to work in. The AI tools are providing the forcing function for an improvement that was always worth making.

The unintended consequence of AI coding tools, then: better documentation as a default. Worth celebrating.