Spec Coding: The Complete Guide

March 21, 2026

Spec coding is the methodology I use to ship 6+ projects simultaneously with Claude Code. It’s not complicated. But it is specific, and the specificity is the point.

This is the full guide. What it is, where it came from, how to do it, and why it works better than the alternative.

What spec coding is

Spec coding is writing a specification before the AI writes code. Not a prompt. Not a description. A specification — a document that contains everything the agent needs to build the thing without asking you a single question.

The difference between a prompt and a spec:

Prompt: “Add analytics to the dashboard.”

Spec:

Task: Add daily active users chart to the Scouter dashboard

File: src/components/Dashboard/DauChart.tsx

Create a line chart component that:
- Accepts data as Array<{ date: string; count: number }>
- Uses the existing ChartWrapper from src/components/Charts/ChartWrapper.tsx
- Renders a 30-day view by default
- Matches the existing chart styles in src/styles/charts.css
- No interactivity beyond hover tooltips

Mount in src/pages/Dashboard.tsx below the existing MetricsGrid.
Use the existing analyticsApi.getDailyActiveUsers() from src/api/analytics.ts.

Test: The chart renders with mock data showing 30 days of DAU counts.
Constraint: Don't modify ChartWrapper or any existing chart components.

The prompt requires you in the loop. The spec doesn’t. That’s the whole idea.
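
As a sanity check on how literal that spec is, here's a hedged sketch of the one piece of logic it actually pins down — the 30-day default window. The `DauPoint` shape comes straight from the spec; everything else (ChartWrapper, the analytics API client) is existing code the agent would reuse, so it's omitted here:

```typescript
// Shape taken from the spec: Array<{ date: string; count: number }>
type DauPoint = { date: string; count: number };

// Keep only the most recent `days` entries; the spec's default view is 30 days.
// Extracted as a pure function so the windowing behavior is easy to test.
export function windowDays(data: DauPoint[], days: number = 30): DauPoint[] {
  return data.slice(Math.max(0, data.length - days));
}
```

The point isn't the code — it's that the spec made this decision (30 days, no interactivity) before the agent started, so there's nothing to negotiate mid-build.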

Where it came from

I wrote four screenplays before I wrote software. That’s not a random biographical detail — it’s where spec coding comes from.

A screenplay is a specification. You write precise instructions that other people execute. The director, the actors, the cinematographer, the editor — none of them were in the room when you wrote it. The document has to stand on its own. If it’s vague, everyone interprets it differently and you get a mess. If it’s precise, everyone builds the same thing.

Software specs work the same way. The AI agent wasn’t in the room when you designed the feature. It doesn’t know your preferences, your architecture, your conventions — unless you write them down. The spec is the document that transfers your intent to an executor who can’t read your mind.

Screenwriting taught me one specific thing that engineering docs usually miss: constraint is more useful than description. In screenwriting, you learn that what a character doesn’t do defines them more than what they do. In spec coding, what the agent shouldn’t build prevents more problems than what it should.

I explore the deeper connection between screenwriting and spec coding in what spec coding learned from screenwriting.

The spec template

Every spec I write contains five elements. Not always in the same order, not always with the same headers. But all five are there.

1. File paths. Where does this code live? Not “in the API layer.” The actual path. src/api/routes/creators.ts. The agent is navigating your codebase. Give it the map.

2. The interface. What goes in, what comes out. For an API endpoint: parameters, types, response shape. For a UI component: props, rendered output, user interactions. Precision here eliminates interpretation.

3. Existing code references. “Use the existing CreatorService.search() method.” This is the most underrated element. Every codebase has patterns. If you don’t point the agent at them, it invents new ones. Then you have two patterns where you had one.

4. Constraints. What this is NOT. “No multi-entry export.” “Max 100 per page.” “Don’t modify the existing schema.” Constraints prevent the agent from being helpful in ways you didn’t ask for. I wrote more about this in the spec that makes AI agents work independently.

5. A test. One concrete scenario that defines done. “A GET to /api/creators/search?q=fitness returns a paginated list.” The agent needs to know what success looks like.
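
Put together, a minimal skeleton looks like this — the field labels are my shorthand, not a fixed format:

```
Task: <one-line summary of what to build>
File: <exact path where the code lives>
Interface: <inputs, outputs, types>
Use: <existing code and patterns to reuse>
Constraint: <what NOT to build or modify>
Test: <one concrete scenario that defines done>
```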

How it enables parallelism

Vibe coding requires you in the loop. You’re the context. The babysitter. Without you watching, the agent drifts.

Specs make you optional during execution. Write the spec (60–90 seconds). Hand it to the agent. Switch to another tab. The agent builds while you’re gone. Come back, review the diff, move on.

That’s how I run 8 Claude Code sessions at once. Not by being faster. By being absent. Each agent has a self-contained spec. Each spec has everything the agent needs. No agent depends on me sitting there.

If you want the deep dive on why vibe coding breaks at scale, I’ve covered that separately. The short version: you can’t vibe code in 8 terminals because you only have one brain and one pair of eyes. Specs scale. Conversations don’t.

How it differs from prompting

Prompting is conversational. You describe what you want, the AI interprets, you correct, it iterates. The conversation is the specification — it emerges over time through back-and-forth.

Spec coding front-loads that work. You think through the decisions before the agent starts. There’s no conversation. There’s a document and an execution.

The tradeoffs:

                           Prompting          Spec coding
Speed to start             Faster             Slower
Quality of first output    Variable           Predictable
Requires your attention    Yes, throughout    Only at review
Parallelizable             No                 Yes
Scales with complexity     Poorly             Well

For simple tasks, prompting is fine. I still do it. But for anything that needs to be right the first time, or anything I want running while I’m working on something else, I write a spec.

The distinction matters enough that I wrote a whole comparison: vibe coding vs spec coding.

The workflow loop

The actual daily workflow is four steps:

1. Scope. Spend 15–20 minutes in the morning writing specs for the day’s tasks. I usually write 6–8 specs across different projects. Each one takes 60–90 seconds.

2. Execute. Paste each spec into a Claude Code session. Let them all run. Don’t watch.

3. Review. Come back and review the diffs. Most are 3–5 files. I’m reading code, not writing it. If the spec was good, the diff is straightforward. If something’s off, I note what the spec was missing.

4. Iterate. If the output needs changes, I write a follow-up spec. Not a conversational correction — a new spec. “In src/components/DauChart.tsx, change the default view from 30 days to 14 days. Update the test scenario accordingly.” Fresh spec, clean context.

This loop runs 2–3 times per day. That’s 12–24 tasks across 6 projects. Most of them land on the first spec.

The sizing and decomposition side of this is covered in how to scope tasks for background agents.

Specs for different types of work

Specs aren’t just for API endpoints. Here’s what they look like for different categories.

API endpoint:

Task: Add GET /api/campaigns/:id/metrics
File: src/api/routes/campaigns.ts
Parameters: id (string, required), range (enum: 7d | 30d | 90d, default 30d)
Use CampaignService.getMetrics() from src/services/campaign.ts
Return shape: { impressions: number, clicks: number, conversions: number, daily: Array<DailyMetric> }
Constraint: Read-only. No mutations.
Test: GET /api/campaigns/abc-123/metrics?range=7d returns metrics for the last 7 days.
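
Most of the code behind a spec like this is glue around existing services, but the range parameter needs explicit handling. Here's a minimal sketch of that parsing, assuming only the enum and default stated in the spec — the route wiring and CampaignService belong to the project and are omitted:

```typescript
// Allowed values and default come straight from the spec:
// range (enum: 7d | 30d | 90d, default 30d).
const RANGES = ['7d', '30d', '90d'] as const;
type Range = (typeof RANGES)[number];

// Fall back to the default for a missing or unrecognized value.
export function parseRange(raw?: string): Range {
  return (RANGES as readonly string[]).includes(raw ?? '') ? (raw as Range) : '30d';
}
```

Note how the spec's enum line maps one-to-one onto the constant. When the spec enumerates valid values, the agent doesn't have to invent a validation policy.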

UI component:

Task: Add onboarding progress bar to Triumfit workout setup
File: src/components/Onboarding/ProgressBar.tsx
Props: currentStep (number), totalSteps (number)
Render a horizontal bar showing completion percentage. Use the existing color tokens from src/styles/tokens.ts. Match the height and border-radius of the existing ProgressRing component.
Mount in src/screens/OnboardingFlow.tsx, above the step content.
Constraint: No animation. No labels. Just the bar.
Test: At step 3 of 5, the bar fills 60%.
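
The spec's test line ("at step 3 of 5, the bar fills 60%") pins down the only real logic in the component. A sketch of that calculation — the clamping is my addition, not something the spec requires:

```typescript
// Fill percentage for the bar: currentStep / totalSteps, clamped to [0, 100]
// so malformed props can't overflow or underflow the track.
export function fillPercent(currentStep: number, totalSteps: number): number {
  if (totalSteps <= 0) return 0;
  return Math.min(100, Math.max(0, (currentStep / totalSteps) * 100));
}
```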

Database migration:

Task: Add preferences column to users table
File: src/db/migrations/20260318_add_user_preferences.ts
Add a JSONB column 'preferences' to 'users' table, default {}, not null.
Update User type in src/types/user.ts to include preferences: Record<string, unknown>.
Use the existing migration format from src/db/migrations/20260301_add_campaigns.ts.
Constraint: Don't modify existing columns. Down migration should drop the column.
Test: After migration, SELECT preferences FROM users returns {} for existing rows.
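
In Postgres terms, this spec boils down to two statements. A hedged sketch — the real file should copy the project's migration format, which the spec points at but which isn't reproduced here:

```typescript
// Up: add the JSONB column with the default and not-null constraint from the spec.
export const up =
  "ALTER TABLE users ADD COLUMN preferences JSONB NOT NULL DEFAULT '{}'";

// Down: the spec says the down migration drops the column, nothing more.
export const down = 'ALTER TABLE users DROP COLUMN preferences';
```

The "use the existing migration format" line does the heavy lifting: the SQL is trivial, but without that reference the agent would guess at file naming, export shape, and transaction handling.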

Marketing page:

Task: Add testimonial section to JWP landing page
File: sites/jwallaceparker/src/components/Testimonials.astro
Render 3 testimonial cards in a responsive grid. Use the existing Card component from src/components/ui/Card.astro. Testimonial data lives in src/data/testimonials.ts (create this file with placeholder data).
Mount in src/pages/index.astro between the Portfolio and CTA sections.
Constraint: No carousel. No JavaScript. Static grid only.
Test: The page renders 3 testimonial cards at desktop width, stacked at mobile.

Every one of these takes about 90 seconds to write. Every one gives Claude Code what it needs to build without asking.

When not to spec

Spec coding has overhead. Sometimes that overhead isn’t worth it.

Prototyping — when you genuinely don’t know what you want — is faster as a conversation. One-off scripts, quick investigations, learning a new framework — all fine without specs. I wrote about when vibe coding is actually fine if you want the full list.

The rule I use: if I’m going to review the output carefully, I write a spec. If I’m going to throw the output away or iterate on it live, I don’t.

The payoff

Spec coding isn’t faster for a single task. Writing a spec takes 60–90 seconds. Typing a prompt takes 10 seconds.

But spec coding is dramatically faster across a day. Eight specced tasks running in parallel, landing clean diffs I can review in minutes. Versus eight prompted tasks running one at a time, each requiring my attention throughout.

The leverage isn’t in any individual task. It’s in what you do with the time the spec buys you. And what it buys you is the ability to work on the next thing while the last thing builds itself.