Engineering · 5 min read

How to Write Specs AI Can Actually Execute

AI fails not because it's dumb, but because your specs are vague. Learn how to describe what you want in plain language—and let the system handle the technical translation.

Miguel Carvalho

Founder

Most AI coding assistants fail not because they lack intelligence, but because they lack clarity. You give them a vague request, they produce vague code. The solution isn't smarter AI. It's clearer specs.

But here's what most people get wrong: clearer specs don't mean writing technical acceptance criteria or becoming a software architect. They mean describing what you want in your own words, precisely enough that there's no room for interpretation.


The Spec Problem

When you tell AI "build user authentication," it guesses. Maybe it skips OAuth. Maybe it uses insecure sessions. Maybe it forgets password reset entirely. You won't know until you test, and by then you've wasted cycles.

The traditional fix is writing detailed technical specs. But that requires you to know what "OAuth" and "sessions" mean in the first place.

The Kodebase approach is different: you write in your domain language, and Scout translates that into technical specs automatically.


What Makes a Good Spec

A good spec answers three questions in plain language:

  1. What are you building? (The goal, in one sentence)
  2. How will you know it's done? (Observable outcomes)
  3. How will you test it yourself? (Steps you can follow)

That's it. No technical jargon. Just clear thinking.

Here's an example:

Vague spec:

"Build user authentication"

Clear spec:

"Users can create accounts and log in. They should be able to sign up with email/password or use their Google account. If they forget their password, they can reset it via email. They should stay logged in even if they close the browser."

The second spec isn't technical. A lawyer could write it. A restaurant owner could write it. But it's precise enough that there's no ambiguity about what "done" looks like.


The Founder-Testable Principle

Every feature you describe should include something we call validation steps—a "How to Test This Yourself" guide written in your own words.

Not this:

"Endpoint GET /api/users/me returns 200 with User schema"

But this:

"Go to the signup page, create an account with test@example.com, log out, log back in with Google, close the browser, reopen it, and verify you're still logged in"

Why does this matter? Because you should be able to verify that what was built matches what you asked for—without reading code, without understanding APIs, without asking a developer to explain it.
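
For contrast, here is roughly what that machine-level check could look like behind the scenes. This is a hypothetical TypeScript sketch (the base URL, bearer-token handling, and User fields are assumptions, not actual Kodebase output), and the point is that you never have to write or read anything like it:

  // Hypothetical automated check for "GET /api/users/me returns 200 with User schema".
  // The base URL, bearer-token auth, and the User fields are illustrative assumptions.
  import assert from "node:assert/strict";

  type User = { id: string; email: string };

  async function checkCurrentUser(baseUrl: string, sessionToken: string): Promise<void> {
    const res = await fetch(`${baseUrl}/api/users/me`, {
      headers: { Authorization: `Bearer ${sessionToken}` },
    });
    // The observable outcome: a logged-in user gets a successful response.
    assert.equal(res.status, 200, "expected 200 for a logged-in user");

    const body = (await res.json()) as User;
    assert.ok(body.id && body.email, "response should match the User schema");
  }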

This is how elite DORA metrics happen. Not through heroics. Through specs that eliminate ambiguity.


What Happens Behind the Scenes

When you describe what you want clearly, here's what the system does:

  1. Scout asks clarifying questions - Business questions, not technical ones. "How many users per month?" not "PostgreSQL or MongoDB?"

  2. Scout generates technical artifacts - These are structured files that tell AI exactly what to build. You never see them.

  3. Sherpa guides implementation - AI implements each task with full context, checking against the technical specs.

  4. You validate using your own words - The validation steps you wrote are how you verify completion.

The technical layer exists. You just don't need to touch it.
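
If you are curious what that layer looks like, a generated spec artifact might resemble the following. This is a purely illustrative TypeScript sketch, not Kodebase's actual format; the type and field names are assumptions:

  // Hypothetical shape of a structured spec artifact. Field names are illustrative
  // assumptions, not the real format Scout produces.
  interface FeatureSpec {
    goal: string;               // what you are building, in one sentence
    outcomes: string[];         // observable "done" criteria, in your words
    validationSteps: string[];  // how you will test it yourself
    edgeCases: string[];        // things that can go wrong
  }

  const authSpec: FeatureSpec = {
    goal: "Users can create accounts and log in",
    outcomes: [
      "Sign up with email/password or a Google account",
      "Reset a forgotten password via email",
      "Stay logged in after closing the browser",
    ],
    validationSteps: [
      "Create an account with test@example.com, log out, log back in with Google",
      "Close the browser, reopen it, and verify you are still logged in",
    ],
    edgeCases: ["Invalid email", "Email already registered", "Browser closed mid-signup"],
  };

Notice that every field is just your own words, restructured. The translation into tasks and technical constraints happens from there.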


Common Mistakes

Being Vague About Outcomes

  • Vague: "Users can log in" → Clear: "Users can log in with email/password or Google"
  • Vague: "Handle errors" → Clear: "Show a message when login fails with wrong password"
  • Vague: "Fast loading" → Clear: "The dashboard appears within 3 seconds"

Skipping Edge Cases

Don't just describe the happy path. Think about what can go wrong:

  • What if someone enters an invalid email?
  • What if they try to sign up with an email that's already registered?
  • What if they close the browser during signup?

Each edge case you identify becomes part of the spec—and gets handled automatically.
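
To make one edge case concrete, signup with an invalid or already-registered email might be handled roughly like this behind the scenes. A minimal sketch with assumed helpers and messages (the in-memory user store stands in for a real database); you only ever see the resulting behavior, not this code:

  // Hypothetical signup handler covering edge cases from the list above.
  // The in-memory "database" and error messages are illustrative assumptions.
  type SignupResult = { ok: true } | { ok: false; message: string };

  const registeredEmails = new Set<string>(); // stand-in for a real user store

  function signUp(email: string, password: string): SignupResult {
    // Edge case: invalid email format.
    if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email)) {
      return { ok: false, message: "Please enter a valid email address." };
    }
    // Edge case: email already registered.
    if (registeredEmails.has(email.toLowerCase())) {
      return { ok: false, message: "An account with this email already exists. Try logging in." };
    }
    // Edge case: password too weak to accept.
    if (password.length < 8) {
      return { ok: false, message: "Passwords need at least 8 characters." };
    }
    registeredEmails.add(email.toLowerCase());
    return { ok: true };
  }

Each edge case in your spec maps to a branch like these. If you never name the edge case, nothing tells the AI the branch should exist.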

Specifying HOW Instead of WHAT

Bad: "Use bcrypt with cost factor 12 for password hashing"

Good: "Passwords are stored securely (not in plain text)"

You define outcomes. Let AI choose implementation. This is what makes the system work for non-technical founders—you stay in your domain of expertise.
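
For illustration, one implementation the AI might choose to satisfy "passwords are stored securely" is bcrypt hashing. A minimal sketch using the Node bcrypt package; the cost factor and helper names here are assumptions, and the choice stays entirely on the AI's side of the line:

  // One possible way to satisfy "passwords are stored securely (not in plain text)".
  // Uses the npm "bcrypt" package; the cost factor is an illustrative choice.
  import bcrypt from "bcrypt";

  const COST_FACTOR = 12;

  // Store only the hash, never the raw password.
  async function hashPassword(plainText: string): Promise<string> {
    return bcrypt.hash(plainText, COST_FACTOR);
  }

  // At login, compare the submitted password against the stored hash.
  async function verifyPassword(plainText: string, storedHash: string): Promise<boolean> {
    return bcrypt.compare(plainText, storedHash);
  }

Tomorrow the implementation could swap to a different hashing scheme without changing a single word of your spec.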


Start Simple

You don't need to write perfect specs immediately. Start with one feature:

  1. Describe what you're building in one paragraph
  2. List what "done" looks like (3-5 observable outcomes)
  3. Write how you'd test it yourself (step by step, in your words)

That's a spec. Scout will ask clarifying questions if anything is ambiguous. Sherpa will translate it into technical tasks. You'll verify using the steps you wrote.


The Payoff

Teams using this approach report:

  • Fewer rewrites - AI gets it right the first time more often
  • Faster validation - Founders can test without asking developers
  • Better alignment - What gets built matches what was envisioned
  • Institutional memory - Every spec lives in Git, never lost

This is how we achieved elite DORA metrics while building Kodebase. Not through technical expertise. Through clear thinking.

Docs are the source code. Write them well, and everything else follows.

specs · methodology · founders · ai-development · tutorial