Scoping Tasks for Background Claude Code Agents

March 20, 2026

The hardest part of the octopus workflow isn’t running 8 sessions. It’s knowing what to put in each one.

Scope too big and the agent builds something you didn’t want. Scope too small and you’re back to micromanaging. The sweet spot is a task that takes Claude Code 3–20 minutes and produces a diff you can review in under 5 minutes.

I’ve been refining this for months. Here’s the framework.

The 3-minute rule

If I can’t write the scope in under 3 minutes, the task is too big. Not too complex – too big. Complexity is fine. Size is the problem.

A complex task: “Write a migration that adds a preferences JSONB column to the users table, with a default value of {}, and update the User type in src/types/user.ts to include the new field.”

That’s complex. It touches the database, the type system, and maybe the ORM config. But it’s small. One column. One type. Clear outcome. Claude Code handles it in 3 minutes.

A big task: “Add user preferences to the app.” That’s not a task. That’s a project. It could mean settings UI, database changes, API endpoints, preference syncing, notification settings, theme support – the agent has to decide what “preferences” means. That decision is your job, not the agent’s.

The scoping framework

Every task I scope follows this template. Not literally – I don’t copy-paste a form. But every scope contains these elements:

What: One sentence. What is being built or changed. “Add PDF export to the journal view.” (This maps directly to the spec template — a scope is a spec with less ceremony.)

Where: File paths. Every file that should be created or modified. “Create src/features/export/pdfExport.ts. Modify src/components/JournalToolbar.tsx.”

How: Existing patterns to follow. “Use the existing renderEntry() function. Use jspdf for PDF generation. Match the existing toolbar icon style.”

Boundaries: What not to do. “Single entry only. No batch export. No format selection. PDF only.”

Done: One test case. “Clicking the export button on a journal entry downloads a PDF containing the entry title, date, and body text.”

That’s it. Five elements. Takes 60–90 seconds to write once you know the codebase.
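The five elements are easy to formalize. Here’s one way to sketch them as a type plus a renderer that turns a scope into the prompt text you’d hand the agent. The names here (TaskScope, renderScope) are my own, not any real library’s API:

```typescript
// One possible shape for a scope: the five elements as fields.
interface TaskScope {
  what: string;         // one sentence: what is being built or changed
  where: string[];      // every file to create or modify
  how: string[];        // existing patterns to follow
  boundaries: string[]; // what NOT to do
  done: string;         // one test case that proves completion
}

// Turn a scope into the plain-text prompt sent to the agent.
function renderScope(s: TaskScope): string {
  return [
    s.what,
    ...s.where.map((f) => `File: ${f}`),
    ...s.how,
    ...s.boundaries.map((b) => `Don't: ${b}`),
    `Done when: ${s.done}`,
  ].join("\n");
}

// The PDF-export example from above, expressed as a scope.
const pdfExport: TaskScope = {
  what: "Add PDF export to the journal view.",
  where: ["src/features/export/pdfExport.ts", "src/components/JournalToolbar.tsx"],
  how: ["Use the existing renderEntry() function.", "Use jspdf for PDF generation."],
  boundaries: ["batch export", "format selection"],
  done: "Clicking export on an entry downloads a PDF with title, date, and body.",
};

console.log(renderScope(pdfExport));
```

I don’t actually write scopes as objects. But if any element is hard to fill in, that’s the signal the task isn’t scoped yet.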

Real scopes from this week

Here’s what I actually sent to Claude Code across different projects this week.

Scouter – API rate limiting:

Add rate limiting to the Scouter API.
Use express-rate-limit.
Apply to all /api/* routes.
100 requests per 15-minute window per IP.
Return 429 with { error: "Rate limit exceeded", retryAfter: <seconds> }.
Add the middleware in src/api/middleware/rateLimit.ts.
Register it in src/api/index.ts before route handlers.
Don't touch the existing auth middleware ordering.
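The scope names express-rate-limit, so the agent is mostly wiring up configuration. The behavior being asked for is a fixed-window counter per IP; here’s a dependency-free sketch of that logic to make the 429 contract concrete (the real task should use the library, not this):

```typescript
// Fixed-window rate limiter: 100 requests per 15-minute window per IP.
// A dependency-free sketch of what express-rate-limit handles for us.
const WINDOW_MS = 15 * 60 * 1000;
const LIMIT = 100;

const windows = new Map<string, { start: number; count: number }>();

// Returns null if the request is allowed, or the 429 body if not.
function checkRateLimit(ip: string, now: number = Date.now()) {
  const w = windows.get(ip);
  if (!w || now - w.start >= WINDOW_MS) {
    // New IP, or the previous window expired: start a fresh window.
    windows.set(ip, { start: now, count: 1 });
    return null;
  }
  w.count++;
  if (w.count <= LIMIT) return null;
  // Over the limit: report seconds remaining in the current window.
  const retryAfter = Math.ceil((w.start + WINDOW_MS - now) / 1000);
  return { error: "Rate limit exceeded", retryAfter };
}
```

In the actual Scouter change, all of this lives inside express-rate-limit; the agent just configures the window, the limit, and a handler that returns this JSON body, then registers the middleware on /api/* before the route handlers.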

Triumfit – Onboarding copy:

Update the onboarding screen copy in src/screens/Onboarding/.
Screen 1: "Track workouts across any program."
Screen 2: "See your progress over weeks, not days."
Screen 3: "Built for people who actually go to the gym."
Don't change layout, styles, or navigation. Copy only.

Lucid – Bug fix:

Fix: tapping the back button on the entry editor
discards unsaved changes without warning.
Add an unsaved-changes guard to src/screens/EntryEditor.tsx.
If the entry has been modified, show an Alert with
"Discard changes?" / "Keep editing" / "Discard".
Use React Navigation's beforeRemove event.
Don't add auto-save. Just the warning.
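The fix the scope describes follows React Navigation’s documented beforeRemove pattern: call preventDefault() on the event when there are unsaved changes, then re-dispatch e.data.action if the user confirms. Here’s a tiny dependency-free model of that flow. The event shape is simplified and the confirm/dispatch callbacks are stand-ins; the real code uses navigation.addListener("beforeRemove", …) and Alert.alert from react-native:

```typescript
// Simplified model of React Navigation's beforeRemove flow.
// Not the real API: just the guard logic the agent needs to implement.
interface RemoveEvent {
  prevented: boolean;
  preventDefault(): void;
  data: { action: string }; // the navigation action to re-dispatch on "Discard"
}

function makeRemoveEvent(action: string): RemoveEvent {
  return {
    prevented: false,
    preventDefault() { this.prevented = true; },
    data: { action },
  };
}

// The guard: block navigation only when the entry has unsaved edits.
// `confirmDiscard` stands in for the Alert; `dispatch` for navigation.dispatch.
function guardBack(
  e: RemoveEvent,
  isDirty: boolean,
  confirmDiscard: () => boolean,
  dispatch: (action: string) => void,
) {
  if (!isDirty) return;           // nothing to lose: let the back action through
  e.preventDefault();             // stop the screen from being removed
  if (confirmDiscard()) dispatch(e.data.action); // user chose "Discard"
}
```

The three branches map directly to the scope’s Done condition: clean editor goes back silently, “Keep editing” stays put, “Discard” re-dispatches the original back action.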

Each one took me about a minute to write. Each one ran in the background while I worked on something else. Each one produced a reviewable diff.

The scope-to-review ratio

Here’s the math that makes this workflow worth it.

Scoping 8 tasks: ~15 minutes. Agent execution time: 3–20 minutes per task, running in parallel. Review time: 2–5 minutes per task, sequential.

Total time: 15 minutes scoping + ~30 minutes reviewing = 45 minutes of my active time. Total output: 8 completed tasks across 6 projects.

If I coded each task myself, even with AI assistance, each one would take 15–30 minutes of focused work. That’s 2–4 hours for the same output. And I can only do them one at a time.
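The arithmetic is worth sanity-checking. A quick sketch, using the midpoint of the review range above (all figures are the rough estimates from this post, not a benchmark):

```typescript
// Active-time comparison: delegated-and-parallel vs. doing it yourself.
// All figures are the estimates from the text, in minutes.
const tasks = 8;
const scoping = 15;          // total, for all 8 scopes
const reviewPerTask = 3.75;  // midpoint of the 2–5 minute range
const soloPerTask = [15, 30]; // focused minutes per task without delegation

// Agent execution runs in parallel, so it costs no active time.
const delegated = scoping + tasks * reviewPerTask;
const solo = soloPerTask.map((m) => (tasks * m) / 60);

console.log(delegated); // 45 minutes of active time
console.log(solo);      // [2, 4] hours, done sequentially
```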

The leverage isn’t in the AI being fast. It’s in the AI being parallel while I’m sequential.

When scoping fails

Two failure modes I hit regularly.

The scope is too vague. I write “improve the search results page” and Claude Code redesigns the entire component, adds filtering I didn’t ask for, and changes the pagination approach. My fault. “Improve” isn’t a task. It’s an invitation to improvise.

The scope assumes context the agent doesn’t have. I write “use the same pattern as the other endpoints” without specifying which pattern. There are three patterns in the codebase (legacy, v2, and v3). The agent picks one. It’s not the one I meant.

Both failures have the same fix: be more specific. Not more verbose – more specific. There’s a difference.

“Add a GET endpoint following the v3 pattern in src/api/v3/creators.ts” is more specific. Not longer. Just clearer.

Start here

Pick a task you’d normally do yourself. Something small – a bug fix, a copy change, a new endpoint.

Before you open the file, write the scope. File paths. Existing patterns. Boundaries. Done condition.

Time yourself. Under 90 seconds is the target.

Then hand it to Claude Code and go work on something else. When you come back, check the diff.

If the diff is right, you just learned to scope.

If the diff is wrong, the scope told you where it was unclear. Fix the scope, not the code. Run it again.

That’s the loop. That’s how you get to 8. If you want the full picture of what running 8 sessions looks like, or a deeper guide on delegating tasks to AI, start with those.