
AI-assisted codebase rewrite

Definition

An AI-assisted codebase rewrite is a project where one (or few) humans direct a capable AI coding agent to produce the overwhelming majority of code for a non-trivial target — a framework, compiler, runtime, library — that previously took teams months or years.

vinext is the canonical instance — ~1 week, one engineer, ~$1,100 in Claude tokens, 800+ OpenCode sessions, producing 94% API coverage of Next.js 16 with 1,700+ unit tests, 380 E2E tests, and a production deployment (CIO.gov) at publication time.

Enabling preconditions

The 2026-02-24 post is explicit that four preconditions had to line up. "Take any one of them away and this doesn't work nearly as well."

  1. Well-specified target API — Next.js has extensive documentation, years of Stack Overflow answers, and the API surface is heavily represented in training data. Claude "doesn't hallucinate" on Next.js APIs.
  2. Comprehensive test suite on the target — thousands of E2E tests in the Next.js repo ported directly gave vinext a "specification we could verify against mechanically."
  3. Solid foundation to build on — Vite handled the hard parts (HMR, ESM, plugin API, production bundling). vinext "just had to teach it to speak Next.js." @vitejs/plugin-rsc gave RSC support without rebuilding it.
  4. Capable model — "would not have been possible even a few months ago." New models can hold a full architecture in context, reason about module interactions, and produce correct code often enough to maintain momentum.
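Precondition 3's "teach it to speak Next.js" can be pictured as a Vite-style plugin that redirects Next.js import specifiers to local shim modules. This is a minimal sketch, not vinext's actual code: the plugin name, the `PluginLike` type (standing in for Vite's `Plugin` interface), and the shim paths are all invented for illustration.

```typescript
// Hypothetical stand-in for Vite's Plugin interface, reduced to the one
// hook this sketch uses.
type PluginLike = {
  name: string;
  resolveId(source: string): string | null;
};

// Illustrative mapping from Next.js specifiers to local shim modules.
const SHIMS: Record<string, string> = {
  "next/navigation": "/src/shims/navigation.ts",
  "next/link": "/src/shims/link.tsx",
};

export function nextCompat(): PluginLike {
  return {
    name: "next-compat",
    resolveId(source) {
      // Imports of Next.js APIs resolve to the shims; everything else
      // falls through to Vite's normal resolution (null = not handled).
      return SHIMS[source] ?? null;
    },
  };
}
```

In real Vite the hook would be declared on a `Plugin` object and registered in `vite.config.ts`; the point here is only the shape of the idea, not the actual integration.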

Workflow shape

  1. Spend hours planning architecture with Claude in OpenCode — "what to build, in what order, which abstractions to use."
  2. Define a task ("implement the next/navigation shim with usePathname, useSearchParams, useRouter").
  3. AI writes implementation + tests.
  4. Run the test suite.
  5. Tests pass → merge. Tests fail → give the AI the error output and iterate.
  6. AI agents for code review and comment remediation in PRs.
  7. Browser-level verification via agent-browser for hydration and client-side issues that unit tests miss.

Human role

"The human still has to steer." That means making architecture decisions, setting priorities, catching the AI's confident-but-wrong implementations ("There were PRs that were just wrong. The AI would confidently implement something that seemed right but didn't match actual Next.js behavior. I had to course-correct regularly"), and knowing when the AI is heading down a dead end.

Caveats

  • Preconditions narrow the applicability. Most real-world projects lack one or more of well-specified target / comprehensive tests / solid foundation / capable model.
  • Not a general methodology for greenfield work — vinext has a target to verify against.
  • Token cost is not zero, but $1,100 is small compared to engineering-month cost. That arithmetic will shift with model prices.
  • Quality gates are load-bearing. Without the discipline of AI agent guardrails, AI-written code compounds failure.
