Why encoding codebase patterns as AI instructions works better than writing docs nobody reads.
Every developer knows the documentation paradox: you spend hours writing docs explaining how your codebase works, then your teammate (or your future self) ignores them and asks ChatGPT instead. The AI gives a plausible but wrong answer, because it doesn't know your specific patterns. So you debug for an hour, realize the AI hallucinated your auth flow, and write more documentation that nobody will read.
I broke out of this cycle by replacing most of my traditional documentation with Claude skills: structured instructions that teach AI how this specific codebase works.
The result: AI that follows my architecture instead of guessing. Consistent code across contributors. And documentation that's actually used, because the consumer is a machine that reads everything.
The Problem: AI Without Context
Modern AI coding assistants are remarkably capable at generic tasks. Ask Claude to "add a REST endpoint" and you'll get clean, working code. But it won't match YOUR patterns.
In my codebase, API routes use Elysia with specific validation patterns. Database queries go through Drizzle ORM with a particular transaction style. Background jobs use Inngest with step-level checkpointing. Auth checks follow a specific middleware pattern.
Without context, Claude produces code that works but doesn't belong: Express conventions in an Elysia codebase, raw SQL instead of the ORM, business logic in API routes instead of service functions.
The code passes type-checking but creates architectural drift. Over weeks, your codebase becomes a patchwork of conflicting patterns. Some human-written, some AI-generated, all slightly different.
What Claude Skills Are
A Claude skill is a markdown file in .claude/skills/ that encodes a specific pattern or workflow. When Claude encounters a relevant task, it reads the skill and follows the prescribed approach.
Here's a simplified example of a skill for adding API routes:
```yaml
---
name: skill-name
description: A description of when to trigger this skill, e.g. whenever backend changes
---
```
# Adding API Routes (Elysia)
## Pattern
All API routes follow this structure:
1. Define route in `src/server/routes/`
2. Use Elysia's type-safe body validation
3. Check auth via `auth.api.getSession({ headers: request.headers })`
4. Return consistent response shapes: `{ data }` on success, throw on error
5. Register route in `src/server/api.ts`
## Example
```typescript
// src/server/routes/bookmarks.ts
import { Elysia, t } from "elysia";
import { eq } from "drizzle-orm";
import { auth } from "@/lib/auth";
import { db } from "@/lib/db";
import { bookmarks } from "@/lib/db/schema";

export const bookmarkRoutes = new Elysia({ prefix: "/bookmarks" })
  .get("/", async ({ request }) => {
    const session = await auth.api.getSession({
      headers: request.headers,
    });
    if (!session) throw new Error("Unauthorized");

    const results = await db
      .select()
      .from(bookmarks)
      .where(eq(bookmarks.userId, session.user.id));

    return { data: results };
  });
```
## Anti-patterns
- Do NOT use Express-style `req, res` parameters
- Do NOT put database queries directly in route handlers for complex logic
- Do NOT skip auth checks on protected routes
This isn't documentation in the traditional sense. It's an instruction set optimized for an AI reader. Explicit patterns, concrete examples, clear anti-patterns.
Why This Works Better Than Documentation
1. The consumer actually reads it
Human developers skim docs, search for the snippet they need, copy-paste, and move on. Claude reads the entire skill every time. It doesn't skip sections. It doesn't assume it already knows. Every instruction is followed.
2. It enforces consistency
When three developers work on a codebase, you get three slightly different patterns. When those developers work with Claude skills, you get one pattern replicated exactly.
Ask Claude to "add user profiles with database table, API endpoint, and settings page." It reads the relevant skills for database schemas, API routes, and UI patterns, then produces code that matches every convention in your codebase.
3. It catches architectural drift
Without skills, Claude makes reasonable guesses. With skills, Claude follows explicit rules. The difference is subtle in any single interaction but compounds over weeks.
I've seen codebases where six months of AI-assisted development created a mess: some files using one state management approach, others using another, auth patterns inconsistent across routes. Skills prevent this.
4. It encodes why, not just what
Good skills explain the reasoning:
## Why Inngest over BullMQ
We use Inngest for background jobs because:
- Step-level checkpointing (failed step retries from that step, not the beginning)
- No Redis dependency
- Built-in AgentKit for AI agent workflows
- Durable webhooks (Stripe events never lost)
Do NOT suggest switching to BullMQ, Temporal, or custom queue implementations.
This prevents Claude from "helpfully" suggesting alternatives that would break the architecture.
The Skills I Actually Use
After months of building and refining skills, here are the categories that deliver the most value:
Stack-specific patterns. How to add API routes, database tables, React hooks, UI components. These are the most-used skills because they cover the daily work of adding features.
Integration guides. How Stripe webhooks flow through Inngest, how auth works across web and mobile, how the RAG pipeline connects document upload to AI chat. These encode the complex cross-cutting concerns that are hardest to get right.
Anti-pattern lists. What NOT to do. These are surprisingly effective because Claude's most common failure mode is producing code that works but violates architectural decisions.
Workflow skills. Higher-level skills for common multi-step tasks: "add a complete feature" (schema + API + hooks + UI), "set up a new integration," "create an email template." These orchestrate multiple lower-level patterns.
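As a sketch of what a workflow skill can look like (the skill and file names here are illustrative, not Eden Stack's actual files), it is mostly a checklist that points at the lower-level skills:

```markdown
---
name: add-complete-feature
description: Use when asked to add an end-to-end feature (data + API + UI)
---
# Adding a Complete Feature

Work through these skills in order:
1. `database-schema`: add the table and migration
2. `api-routes`: expose the endpoints
3. `react-query-hooks`: wire up client data fetching
4. `ui-components`: build the screens

Do NOT skip a layer or merge all steps into one file.
```

The orchestration skill stays stable even as the individual pattern skills evolve, because it references them by name rather than repeating their content.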
All these skills ship with Eden Stack.
Model Context Protocol (MCP): The Other Half
Skills teach Claude HOW to write code. MCP (Model Context Protocol) servers teach Claude HOW to interact with external services.
Instead of manually creating a Neon database, copying the connection string, creating Stripe products, copying API keys, setting up Resend, and configuring PostHog, I have an MCP server for each service. Claude calls them directly.
The setup flow:
- I describe my project in a config file
- Claude reads the config
- Claude calls MCP servers to create databases, payment products, email domains, analytics projects
- Environment variables are populated automatically
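As a rough sketch, such a project config might look like the following (the file name and schema are hypothetical, shown only to illustrate the idea, not Eden Stack's actual format):

```yaml
# project.config.yaml -- hypothetical name and schema, for illustration only
project:
  name: bookmarks-app
  description: Bookmark manager with AI-powered tagging
services:
  database: neon        # created via the Neon MCP server
  payments: stripe      # products and prices created via the Stripe MCP server
  email: resend         # sending domain configured via MCP
  analytics: posthog    # project created via MCP
env_output: .env.local  # populated automatically with the resulting keys
```

Claude reads a file like this once, then performs the provisioning steps against each MCP server in turn.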
What used to take 60+ minutes of context-switching between dashboards now takes about 5 minutes of describing what I want.
The Agentic Mindset Shift
Working this way has fundamentally changed how I think about development.
Before: I write code. I occasionally ask AI for help. AI gives generic suggestions that I adapt.
After: I describe intent. AI implements using my exact patterns. I review and course-correct.
The mental model is managing a team of junior developers. They're fast, literal, and excellent at pattern-matching. But they need clear instructions (skills), access to tools (MCP), and quality assurance (review).
Some practical examples of how this plays out:
Adding a feature: I describe "add a favorites feature where users can bookmark items." Claude reads the database skill, creates a table. Reads the API skill, creates endpoints. Reads the hooks skill, creates React Query hooks. Reads the UI skill, creates components. All matching existing patterns.
Fixing a bug: I describe "session persists after logout on mobile." Claude examines the auth skill, traces the signOut flow, identifies the issue, fixes it.
Refactoring: I describe "the conversation list is slow with 100+ items." Claude reads the UI patterns skill, knows to add virtualization. Reads the API skill, adds pagination. Updates the React Query hook with proper caching.
In each case, the output is consistent with the rest of the codebase because the skills encode the patterns.
What Doesn't Work
To be honest about the limitations:
Skills aren't a substitute for thinking. Claude follows patterns well, but it doesn't make architectural decisions. You still need to decide WHAT to build. Skills help with HOW.
Skills need maintenance. When you change a pattern, you need to update the skill. I've been burned by outdated skills that encode old conventions.
Complex cross-cutting concerns are hard to skill-ify. A skill for "add an API route" is straightforward. A skill for "redesign the auth flow to support SAML" is too complex and context-dependent to encode.
You still need to read the output. Claude is a fast, capable, and very literal developer. It does exactly what you say, not what you mean. Reviewing AI-generated code is non-negotiable.
Getting Started
If you want to try this approach in your own codebase:
Start with your most common task. What do you build most often? API endpoints? React components? Database migrations? Write a skill for that first.
Include concrete examples. Abstract descriptions don't work well. Show the EXACT code pattern you want.
List anti-patterns. What does Claude get wrong when it doesn't have context? Encode those as explicit "do NOT" rules.
Keep skills focused. One skill per concern. Don't write a mega-skill that covers everything. Claude can read multiple skills per task.
Iterate. Your first skill will be mediocre. After using it 10 times and seeing where Claude deviates, you'll refine it into something solid.
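Putting those tips together, a minimal first skill might look something like this (the names, paths, and conventions are illustrative; substitute your own):

```markdown
---
name: react-query-hooks
description: Use whenever adding or modifying data-fetching hooks
---
# Adding Data-Fetching Hooks

## Pattern
1. One hook per resource, in `src/hooks/`
2. Query keys follow the shape `[resource, params]`
3. Mutations invalidate the matching query key on success

## Example
See `src/hooks/use-bookmarks.ts` for the canonical implementation.

## Anti-patterns
- Do NOT call `fetch` directly in components
- Do NOT invent new query-key shapes
```

Pointing the example at a real file in your repo keeps the skill short and ensures it can never drift from the code it describes.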
If you'd rather start with 30+ production-tested skills already written in your ready-to-fly codebase, Eden Stack includes all the skills described in this article.
The goal isn't to replace human judgment. It's to eliminate the gap between what AI could produce (given perfect context) and what it actually produces (guessing at your conventions). Skills close that gap.
Documentation exists for humans who might read it. Skills exist for AI that always reads them. In a world where AI writes an increasing share of production code, optimizing for the AI reader isn't just pragmatic. It's the highest-leverage investment you can make in your codebase's consistency.
Magnus Rødseth builds AI-native applications and is the creator of Eden Stack, a production-ready starter kit with 30+ Claude skills encoding production patterns for AI-native SaaS development.
Top comments (14)
While I agree with most of the content of the post, I don't think skills are meant to be documentation.
The example in point 4 of the "Why This Works Better" section seems like a waste of tokens: a simple line making Inngest required would be enough.
The documentation about that choice I would put in a docs/tools.md file, since the LLM does nothing with the explanation.
While markdown files read by LLMs can be part of the documentation, there will still be parts of the project that need human-specific documentation: architectural choices, onboarding, tool choices, and probably more.
At the moment I'm working on an experiment where I use markdown files to describe module functionality, so that a coding agent can generate the code based on staged git changes.
That way the documentation and prompts are saved at the same time as the code changes.
I'll write a post once the experiment is finished.
This is so relatable. We went through the same thing at work where nobody actually read the docs wiki, but once we started encoding the patterns directly into CLAUDE.md files the consistency went way up almost overnight. The biggest win for us was not having to repeat ourselves in code reviews anymore since Claude just follows the patterns now.
Do you version control your skills separately or just keep them in the main repo?
Keep them in the same repo! But some skills make more sense to keep as user scope, not project scope
It may be of interest to you and others that, instead of focusing on prompts and skills, I'm developing the underlying framework AI builds with, to defend against it doing things it shouldn't for a given task. So far this seems like a much more effective strategy than composing skills and instructions. What I've found is that AI can be very unpredictable regardless of the information you feed it; it certainly doesn't do the exact same thing twice. So I would say skills alone aren't that helpful when it comes to achieving consistent, predictable outcomes!
Great read. Documentation often becomes outdated quickly, so using AI to surface the right knowledge at the right time makes a lot of sense.
I have been doing something similar with structured Markdown docs in CLAUDE.md and a docs/ directory that Claude uses as context (with a CLAUDE.md file serving as an index to these docs). These docs are a direct mapping of our monorepo packages. Same idea, different mechanism. Going to try the skills approach after reading this.
We also combined the docs with a complementary layer: a custom ESLint plugin with rules that enforce the same patterns described in the docs. The two together work differently than either alone.
Docs tell Claude what to do. The linter catches violations regardless of whether the docs were consulted.
We use atomic design (composing UI into atoms, molecules, organisms, and templates, each built on top of the others). We have rules like no-raw-dom-elements (blocks raw html elements in favor of design system components), and template-no-hooks (templates are pure functions, no useState or useEffect allowed). The pre-commit hook runs ESLint on every staged .ts/.tsx file and blocks on any error.
Curious whether skills reduce that context pressure problem compared to full docs. That's probably the thing I'm most interested in testing.
Thank you for sharing this.
The core insight here is dead-on: documentation written for humans fails because humans are unreliable readers, but AI consumes everything you give it. Once you internalize that, encoding your conventions as machine-readable skills becomes obvious.
That's one of the things I’ve been thinking about over the last few months: do we really need documentation at all? Well, sure, at least for AI so they can answer questions. Do developers read documentation now, or do they just go straight to their favorite chatbot and ask, "How to set up RabbitMQ?" If that's the case, it makes sense to have AI generate documentation for AI.
The "guaranteed consumption" point is the killer insight here. Human devs skim docs, Claude reads everything. I hit the same realization but ended up going one layer deeper.
Instead of encoding patterns directly into skills, I structure the specifications themselves as Markdown files with YAML frontmatter — status, unique IDs, parent references — in a directory tree mirroring the feature hierarchy. Then .claude/rules/ files teach Claude how to navigate and work with that spec tree. The skills describe the structure, not the content, so when requirements change I update one spec file and the rules stay the same.
That separation solved the maintenance problem you mention toward the end. Skills that encode domain details drift when the domain changes. Skills that describe conventions rarely need updating.
I wrote about the .claude/rules/ setup in more detail here: dev.to/thlandgraf/how-i-use-claude...
I'm building a VS Code extension called SPECLAN around this approach (creator, full disclosure). But the two-layer pattern — structured specs as source of truth, plus rules that teach the agent the structure — works without the extension too.
Nice, this is exactly what I was looking for but didn't know it exists. Thanks for sharing this!
Interesting approach. Replacing traditional docs with AI-assisted workflows could really change how teams onboard and find information.