From One Project to Eight: Scaling AI-Assisted Development
February 25, 2026
I didn’t start running 8 Claude Code sessions at once. I started the same way everyone starts — one project, one conversation, watching the AI type.
The jump from that to 8 parallel sessions across 6 projects happened in stages. Each stage required a different mindset shift. Here’s the progression, what I learned at each level, and where it plateaus.
Stage 1: Conversational, one project, watching the AI
This is where most people are. You open Claude Code in your project directory. You describe what you want in natural language. You watch it work. You react to what it’s doing. You course-correct mid-stream.
When I was building Logline’s search feature, this is how I worked. “Hey, the search is slow. Can you optimize the search index?” Then I’d watch Claude Code explore the codebase, propose an approach, start implementing. I’d interrupt: “No, don’t use that library, use the one we already have.” Back and forth.
What works: You get decent output. The conversational flow feels natural. You learn what the AI can and can’t do.
What doesn’t scale: You’re locked in. One project. One task. Your full attention. The AI could work independently but you won’t let it because you don’t trust it yet. You’re essentially pair programming with a very fast typist.
Time per task: 15–30 minutes of your attention per task. You’re present for the entire thing.
Stage 2: Specs, one project, not watching
The first shift is writing a spec instead of having a conversation. You describe the task upfront — file paths, approach, acceptance criteria — and walk away.
For me this happened on Scouter. I needed to add rate limiting middleware. Instead of talking it through, I wrote a paragraph: what middleware to create, where to register it, what limits to enforce, what response to return, which existing middleware to reference for patterns. I pasted it, switched to email, came back 8 minutes later to a clean diff.
That was the moment. The realization that I could give instructions and leave.
What changes: You stop watching. You start reviewing. Your time per task drops because you’re only engaged at the start (scoping) and end (reviewing), not the middle (the AI working).
What you learn: Writing good specs is a skill. Your first specs are too vague. The AI interprets them creatively. You learn to add file paths, to reference existing code, to specify what NOT to do. Each bad output teaches you what was missing from the spec.
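For concreteness, here's the shape of a spec I'd write at this stage. The paths, limits, and filenames below are illustrative, not from a real project:

```markdown
## Task: add rate limiting middleware

- Create `app/middleware/rate_limit.py` (path is illustrative).
- Register it in `app/middleware/__init__.py`, after the auth middleware.
- Limits: 100 requests/minute per API key. Over the limit, return 429
  with a `Retry-After` header.
- Follow the pattern in `app/middleware/request_logging.py`.
- Do NOT add a new dependency — use the store we already have.
- Done when: existing tests pass and a new test covers the 429 path.
```

Note what's in there: exact file paths, a reference to existing code for patterns, an explicit "do not", and acceptance criteria. Each of those lines exists because an earlier, vaguer spec produced the wrong output.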
Time per task: 3–5 minutes scoping + 5–10 minutes reviewing. The AI works for 5–15 minutes in between, but that’s not your time.
Stage 3: Specs, 2–3 projects, reviewing in batches
Once you’re comfortable not watching, the obvious next step is: while the AI works on Project A, scope a task for Project B.
I started doing this with Scouter and Lucid. Scope a Scouter task, switch to Lucid, scope a Lucid task, go back and review Scouter’s output, then review Lucid’s output. Two projects making progress in the time it used to take one.
What changes: You start thinking about task parallelism. Which tasks can run simultaneously? Which ones do you want to review with fresh eyes? You develop a review cadence — scope everything first, then review everything in a batch.
What you learn: Not all projects are equal in review difficulty. A styling change on the Octopus Coder site takes 30 seconds to review. A new API endpoint on Scouter takes 10 minutes. You start ordering your review queue by difficulty, hardest reviews first, while your attention is fresh.
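That ordering can be mechanical. As a sketch (the queue file format here is invented, not my actual tooling): keep one pending diff per line, prefixed with an estimated review time, and sort descending so the hardest review surfaces first.

```shell
# Hypothetical review queue: "<estimated review minutes> <project>/<task>".
# sort -rn sorts by the leading number, descending, so the 10-minute
# Scouter review comes up before the 30-second styling tweak.
printf '%s\n' \
  "10 scouter/new-api-endpoint" \
  "1 octopus-site/styling-tweak" \
  "5 lucid/export-fix" \
  | sort -rn
```

The point isn't the tooling, it's the habit: decide the review order before you start reviewing, not while you're tired.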
Time per task: Same per-task cost, but your throughput doubles or triples because the AI’s work time overlaps.
Stage 4: Full parallel, 6–8 projects, morning scope + review cadence
This is where I am now. Eight terminal tabs. Each one a different project or a different task in the same project. Morning scoping block, then review waves.
8:30 – Scope all 8 tasks (15–20 min)
8:50 – Agents working. I’m not needed.
9:20 – First review pass (25–30 min)
9:50 – Corrections + second-round scoping
10:00 – Agents working again
10:20 – Second review pass
By mid-morning, 6+ projects have fresh commits. Not massive features — scoped, reviewed, incremental progress across the board.
What changes: Everything is systematic. You have a scoping ritual. A review order. A running task list per project. You stop thinking about individual tasks and start thinking about portfolio-level progress. The question changes from “what should I build today” to “what does each project need next.”
What you learn: The review is the bottleneck, not the coding. You develop review shortcuts — structure pass first to catch file-level surprises, then logic pass on the real changes. You learn your own review fatigue point. Mine is around 12 diffs before quality drops.
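The two-pass shortcut can be sketched as a small shell helper. This assumes you run it from inside a project's git checkout; the function name and the default range are my invention:

```shell
# review_pass: the structure-then-logic review described above.
# Usage: review_pass [range]   e.g. review_pass main..feature
review_pass() {
  range="${1:-HEAD~1}"
  echo "== structure pass: any file-level surprises? =="
  git diff --stat "$range"    # which files changed, and by roughly how much
  echo "== logic pass: read the real changes =="
  git diff "$range"           # the full diff, reviewed carefully
}
```

On a real diff, the structure pass is where you catch the surprises cheaply — a file you didn't expect to change, a diff three times larger than the task warranted — before spending logic-pass attention on it.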
What changes at each stage
Mindset: From “I’m coding with AI help” to “I’m managing an AI team.” At Stage 1, you’re a developer using a tool. At Stage 4, you’re a technical lead who scopes, delegates, and reviews.
Tooling: Stage 1 needs nothing special. Stage 4 needs a good terminal setup (I use iTerm2 with named tabs), CLAUDE.md files in every project, a task tracking system (mine is a markdown file per project), and a disciplined review process.
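The task-tracking piece stays deliberately low-tech. As a sketch (the directory layout and the `TASKS.md` filename are assumptions, not a prescription), a morning sweep over per-project task files might look like:

```shell
# morning_sweep: print the first unchecked markdown checkbox in each
# project's task file — a quick "what does each project need next" view.
# The projects directory and TASKS.md name are illustrative.
morning_sweep() {
  projects_dir="${1:-$HOME/code}"
  for dir in "$projects_dir"/*/; do
    tasks="${dir}TASKS.md"
    [ -f "$tasks" ] || continue
    printf '%s: %s\n' "$(basename "$dir")" \
      "$(grep -m 1 '^- \[ \]' "$tasks" || echo 'nothing queued')"
  done
}
```

Anything fancier than a markdown file per project has, in my experience, been overhead rather than help at this scale.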
Process: Stage 1 is ad-hoc. Stage 4 is rhythmic. Same blocks every morning. Same cadence. The rhythm is what makes it sustainable.
Trust: Stage 1, you trust nothing. Stage 4, you trust the process — not blindly, but based on evidence that well-scoped tasks produce good output. The trust is earned through hundreds of reviewed diffs.
What stays the same
Specs. At every stage, the quality of your specs determines the quality of the output.
At Stage 1, your “spec” is a conversational description. At Stage 4, it’s a structured document with file paths, function references, architectural context, and acceptance criteria. The format changes. The principle doesn’t: clear input produces clear output.
If you take one thing from this post, let it be this: you can skip ahead on tooling, on terminal setup, on process. You cannot skip ahead on spec quality. Spec quality is the prerequisite for everything else.
The plateau
I tried scaling to 10 projects and 16 tasks in a day. Here’s what happened: I scoped 16 tasks in 35 minutes. All of them ran successfully. Most of the diffs looked clean. I started reviewing.
By task 10, I was skimming. By task 13, I was accepting diffs I’d only half-read. I shipped two bugs that day. One was a Lucid edge case with empty journal entries. The other was a Scouter webhook handler that silently dropped malformed payloads.
Both bugs were in the diffs. I just didn’t read them carefully enough.
The plateau is your review bandwidth. The AI can produce as much code as you can scope tasks for. But every line of that code needs a human review before it ships. When your review quality drops, your bug rate rises.
My sustainable ceiling is 6–8 projects, 10–14 tasks per day, with the hardest reviews done first. Beyond that, I’m not being thorough enough.
The real limit
It’s not the AI’s capacity. Claude Code can handle as many sessions as you throw at it.
It’s not your scoping speed. With practice, scoping 8 tasks takes under 20 minutes.
It’s your review bandwidth. How many diffs can you read carefully in a day? How many codebases can you hold architectural awareness of simultaneously? How many context-specific quality judgments can you make before fatigue sets in?
For me, the answer is about 12 serious reviews per day across 6–8 codebases. Your number might be different. You’ll find it by scaling up until your review quality drops, then backing off.
How to start
If you’re at Stage 1, don’t jump to Stage 4 tomorrow. The stages build on each other.
Move to Stage 2 first. Pick your next task and write a full spec instead of describing it conversationally. Include file paths. Include what not to do. Paste it, close the tab, come back and review the diff. Do this for a week until not-watching feels natural.
Then try Stage 3. Add a second project. Scope both tasks, then review both diffs. Get comfortable with the batch review cadence. Practice for a few weeks.
Then expand gradually. Add a third project. Then a fourth. At each step, check: am I still reviewing thoroughly? Am I still catching problems? If yes, add more. If no, you’ve found your current ceiling.
The progression from one project to eight took me about three months. Not because the tooling was hard. Because the habits — scoping well, trusting the process, reviewing thoroughly — take repetition to build.
If you want the full methodology, read the complete guide to spec coding. The workflow in this post is what that methodology looks like in practice.