December 15, 2024 · 8 min read

How I Ship Production Apps Without Writing Code

A practical look at the AI-first development workflow that lets a solo founder build at team speed — spec documents, agent direction, and ruthless systems thinking.

AI · Development · Workflow · Solo Founder

There's a question I get more than any other: "How do you build all of this alone?"

The honest answer is that I don't write most of the code. I direct agents that write it. But that framing undersells the discipline required — and oversells the magic.

Here's what the actual workflow looks like.

Start with a SPEC document

Before a single line of code gets written, I write a SPEC document. Not a vague outline — a structured, detailed technical document that answers:

  • What problem does this solve?
  • Who is it for?
  • What are the exact inputs and outputs?
  • What are the edge cases?
  • What does failure look like?

For Almighty, I've written 36+ of these documents. Each one takes 2-4 hours. Combined, they've saved hundreds of hours of back-and-forth with AI agents that would have otherwise generated plausible-looking but wrong code.

The rule: ambiguity is the most expensive bug. Fix it before you write the prompt.

Treat AI agents like junior engineers

The mistake most people make is treating Claude or Copilot like a search engine — ask a question, get an answer, move on. That produces mediocre code.

I treat them like junior engineers who are brilliant at execution but need clear direction. This means:

  • Give them the full context upfront
  • Specify the constraints explicitly (language, library versions, patterns to follow)
  • Ask for an implementation plan before the implementation
  • Review the plan before accepting the code
  • Test the output against the spec
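In practice, the checklist above amounts to assembling one large, explicit prompt. A minimal sketch, assuming a plain-text agent interface — the function and structure are mine, not a real agent API:

```python
# Hypothetical prompt assembly: full context upfront, constraints explicit,
# plan requested before implementation. Not a real agent SDK.
def build_prompt(spec_text: str, constraints: list[str],
                 context_files: dict[str, str]) -> str:
    parts = ["## SPEC", spec_text, "## Constraints"]
    parts += [f"- {c}" for c in constraints]
    parts.append("## Existing code")
    for path, source in context_files.items():
        parts += [f"### {path}", source]
    # The plan comes first; code is a later turn, after human review.
    parts.append("First, propose an implementation plan. Do not write code yet.")
    return "\n".join(parts)

prompt = build_prompt(
    spec_text="Debt payoff engine: see SPEC document.",
    constraints=["Python 3.12", "pytest for tests", "follow existing repo patterns"],
    context_files={"models/debt.py": "class Debt: ..."},
)
print(len(prompt) > 0)
```

The last line of the prompt is the whole trick: it forces the review step before any code exists to accept.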

When I asked Claude to build the debt payoff engine for Almighty Budget, I didn't say "build a debt payoff calculator." I gave it the SPEC document, the existing data models, the test framework, and instructions to write failing tests first. The output was production-ready on the second iteration.

Spec → Tests → Code

My development loop for every feature:

  1. SPEC document — define the behavior completely
  2. Test cases — write the tests that prove the spec is met
  3. Implementation — let the agent write code that passes the tests
  4. Review — check for correctness, security, and edge cases
  5. Ship
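To make steps 2 and 3 concrete, here's a toy version for a debt payoff feature: a spec-derived test written first, then the minimal code that passes it. The avalanche ordering shown is my illustration — the actual Almighty Budget engine is more involved:

```python
# Step 2: a test derived straight from the spec, written (and failing) first.
def test_avalanche_orders_by_highest_apr():
    debts = [("card_a", 0.18), ("loan_b", 0.06), ("card_c", 0.24)]
    assert avalanche_order(debts) == ["card_c", "card_a", "loan_b"]

# Step 3: the minimal implementation that makes the test pass.
def avalanche_order(debts: list[tuple[str, float]]) -> list[str]:
    """Pay highest-APR debts first (avalanche strategy)."""
    return [name for name, _apr in sorted(debts, key=lambda d: d[1], reverse=True)]

# Step 4: review — run the test against the spec's expected ordering.
test_avalanche_orders_by_highest_apr()
```

Each test like this is one settled conversation with the agent about what the system should do; the implementation is almost an afterthought.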

For SHADOWSTATE, this process produced 283 passing tests before we had a playable build. That sounds backwards until you realize that every test is a conversation with the AI agent about exactly what the system should do.

The things AI still can't do

I want to be honest about the limits.

AI agents are not good at:

  • Understanding your users better than you do
  • Making product decisions (yet)
  • Knowing when not to build something
  • Keeping the system coherent across hundreds of files without strong human oversight

The human work in AI-first development is less about writing code and more about:

  • Product clarity — knowing exactly what to build
  • System coherence — making sure everything fits together
  • Judgment calls — knowing when to override the agent's recommendation
  • Testing reality — using the product, not just reading the test results

The operational mindset matters

Here's what I think most developers miss about this workflow: it's fundamentally an operational problem, not a technical one.

Running a retail store taught me that systems break at handoffs. The same is true in software development. The handoff between your product thinking and the AI's code generation is where most of the failure happens.

Fix the handoff. Write better SPECs. Review more carefully. Test against reality.

The code almost takes care of itself.


If you're building something with AI-first workflows and want to compare notes — reach out.