Solo Developer, Multiple Projects: How I Ship Progress on 6+ at Once
March 14, 2026
The traditional solo developer problem is context switching. You work on Project A for two weeks. Then you switch to Project B and spend the first day remembering where you left off. By the time you’re productive on B, you’ve forgotten the details of A.
So most solo devs pick one project. They go all-in on it. The other projects wait. Sometimes for months.
I run 6 projects simultaneously. Scouter, Triumfit, Lucid, Logline, the Octopus Coder site, and the JWP portfolio. Some days I add client work on top of that. They all get commits. Every day.
This isn’t hustle culture. I’m not working 16-hour days. I’m working normal hours with a different workflow.
Why context switching used to kill me
I built Logline over six months working on it exclusively. When I’d come back after even a weekend away, it’d take 30 minutes to load the project state back into my head. What was the data model? Where did I leave the search feature? What’s the test coverage gap?
That reload time is the real cost of context switching. Not the mechanical act of opening a different project — the cognitive cost of reconstructing your mental model.
Now multiply that by 6 projects. If each one costs 30 minutes to reload, that’s 3 hours just getting back up to speed. On a good day. In a context-switch-heavy workflow, you’d spend more time loading state than writing code.
What changes with AI agents
Here’s the thing: the AI doesn’t lose context between switches. I do. But the spec carries it.
When I write a scope for a Scouter task, every detail the AI needs is in the spec. File paths. Function names. Architectural decisions. Acceptance criteria. The AI reads the spec, reads the code, and executes.
It doesn’t need 30 minutes to remember where it left off. It doesn’t have a mental model of the project that degrades over a weekend. Every session starts fresh with the spec as the single source of truth.
This means my context switching cost drops dramatically. I don’t need to remember Scouter’s data model in detail. I need to know enough to write a good scope. That’s a different level of knowledge — it’s architectural awareness, not implementation-level recall.
I can hold architectural awareness of 6 projects in my head simultaneously. I can’t hold implementation details of 6 projects. The specs bridge the gap.
The morning scoping ritual
Every morning starts the same way. I open my project list — a markdown file with a “next up” section for each project — and write scopes for 8 tasks across 6+ projects.
scouter: Add webhook retry logic for failed deliveries
triumfit: Fix: workout timer doesn't pause on app background
lucid: Add tag filtering to journal list view
logline: Update search results to show genre tags
octopus: New blog post on code review workflow
jwp-site: Add Triumfit to portfolio with screenshots
pwa-marketing: Update Beyond529 headline copy
skill-library: Add email marketing skill examples
The scoping takes 15–20 minutes. I write each scope with file paths, function references, and clear acceptance criteria. Then I paste them into 8 terminal tabs and let them all run.
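The dispatch step is mechanical enough to script. Here's a minimal sketch: it parses scope lines in the `project: task` format above and launches one session per scope. The `run_scope` command and the `~/code/` directory layout are stand-ins, not real tools — swap in whatever actually starts an agent session on your machine.

```python
import os
import subprocess

def parse_scopes(text: str) -> list[tuple[str, str]]:
    """Split 'project: task' lines into (project, task) pairs."""
    pairs = []
    for line in text.strip().splitlines():
        project, _, task = line.partition(":")
        if task:
            pairs.append((project.strip(), task.strip()))
    return pairs

def dispatch(pairs: list[tuple[str, str]], dry_run: bool = True) -> None:
    """Launch one agent session per scope; dry_run just prints the commands."""
    for project, task in pairs:
        cmd = ["run_scope", "--project", project, "--task", task]  # hypothetical CLI
        if dry_run:
            print(" ".join(cmd))
        else:
            # Fire and forget: don't wait, review the diffs later.
            subprocess.Popen(cmd, cwd=os.path.expanduser(f"~/code/{project}"))

scopes = """\
scouter: Add webhook retry logic for failed deliveries
triumfit: Fix workout timer pause on app background
"""
dispatch(parse_scopes(scopes))  # dry run: prints one launch command per scope
```

The point of the fire-and-forget shape is the workflow itself: nothing blocks on anything else, so the scoping pass ends the moment the last session starts.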
This is the key moment. I’ve just done the hardest cognitive work — deciding what each project needs and specifying it precisely. The AI does the rest.
The review cadence
Thirty to forty minutes later, I start reviewing. Tab by tab. Diff by diff.
Most tasks are done. I review each diff — structure, logic, style. Accept the clean ones. Kick back the ones that need corrections with specific guidance.
Then I scope a second round. The morning goes in waves: scope, wait, review, scope, wait, review. By noon I’ve done 2–3 rounds and shipped commits across every project.
The review is the real work. Not the coding. Not the scoping (though that matters). The review is where I apply judgment — is this correct? Is it the right approach? Does it match the codebase's patterns? That judgment is my job. Everything else is delegation.
Managing different tech stacks
My projects span Swift (Lucid, Logline), React Native (Triumfit), TypeScript/Node (Scouter), and Astro (Octopus Coder, JWP site). Different languages. Different frameworks. Different patterns.
This used to be the hardest part of multi-project work. Switching from Swift to TypeScript to Astro in the same morning meant holding three different sets of conventions in your head.
With specs, it barely matters. Each project has its own CLAUDE.md that defines its conventions. Each scope is project-specific. I don’t need to hold Swift conventions in my head while writing a Scouter scope — I write the Scouter scope in terms of Scouter’s architecture, and the AI knows TypeScript.
My mental model per project is: what does this project do, what’s the architecture, what does it need next. Not: what’s the syntax for this framework’s state management. That’s the AI’s job.
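To make the per-project conventions concrete, here's a sketch of what one project's CLAUDE.md might contain. The sections, paths, and commands are illustrative, not taken from an actual file:

```markdown
# Scouter — project conventions

## Stack
- TypeScript / Node, strict mode on
- Postgres, accessed only through src/db/

## Conventions
- External calls go through src/clients/; never fetch directly from handlers
- Errors are typed: throw AppError subclasses, never bare strings
- Tests live next to the code as *.test.ts; every new handler gets one

## Acceptance
- `npm run lint && npm test` must pass before a task counts as done
```

Because this file rides along with every session, a scope doesn't need to restate conventions — it only needs to say what to build.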
The psychological benefit
This is the part nobody talks about when they discuss productivity systems.
When you work on one project at a time, you make progress on one thing. The other five projects feel like they’re falling behind. There’s a background anxiety — Scouter needs webhook retries, Lucid has that bug, Triumfit’s onboarding copy is stale — and you can’t address any of it because you’re focused on the one project that “won.”
When you ship commits across 6 projects in a morning, that anxiety dissolves. Not because you’ve finished everything. But because everything moved forward. Every project got attention. Nothing is rotting.
For me — running a solo practice where all these projects represent potential revenue — that feeling of broad forward progress is the difference between calm confidence and stressed prioritization.
What this workflow requires
This isn’t free. There are real prerequisites.
Specs. You need to be able to write clear, scoped task descriptions that an AI can execute without follow-up questions. That’s a skill. It takes practice. Start with the principles in the spec writing guide and iterate.
Discipline. The temptation to dive deep into one project is always there. Some mornings Scouter has an interesting problem and I want to spend three hours on it. The workflow says: scope one task for Scouter, let the AI build it, review it, move on. Save the deep dive for after all projects have moved.
Trust in the process. You have to believe the AI can handle a well-scoped task without supervision. If you watch each session, you’re back to working on one project at a time. Trust but verify — trust the process, verify the output.
Review skill. You’re reviewing 10–14 diffs a day across different codebases. You need to be fast and thorough. That’s the one skill this workflow demands more of, not less.
The honest limit
This doesn’t scale infinitely. I’ve tried 10 projects. It didn’t work. Not because the AI couldn’t handle it — because I couldn’t review that many diffs well. By diff twelve, my reviews got sloppy. I started accepting things I should have pushed back on.
Six to eight projects with 10–14 tasks is my limit. Beyond that, review quality drops and bugs slip through. The ceiling isn’t the AI’s output capacity. It’s my review bandwidth.
If you’re starting out, don’t try 8 projects on day one. Start with two. Get the scoping right. Get the review habit right. Scale up gradually. The workflow teaches you what your limits are.
The net
One person. Six projects. Every day.
Not because I’m fast. Because the work is parallel. The AI builds while I review. I scope while the AI builds. The bottleneck is my judgment, not my typing speed.
That’s a fundamentally different way to be a solo developer. It doesn’t require working more. It requires working differently.