CONCEPT
LLM-generated database changes¶
LLM-generated database changes is the class of workflows in which a large language model is asked to propose schema-level mutations (indexes, constraints, column additions, query rewrites, migration scripts) for a production database, and those proposals must be validated before they are applied.
The class exists because LLMs happen to be competent at index design and query tuning when given a well-specified workload — but they're untrustworthy in two specific ways that shape the architecture around them. PlanetScale's own framing (sources/2026-04-21-planetscale-ai-powered-postgres-index-suggestions):
Given a problem, LLMs are great at finding an answer but can't always be trusted to decide if maybe nothing needs to change at all.
As anyone who's used LLM tools for software development can tell you, it's crucially important to validate LLM generated solutions before shipping them to production.
The two failure modes¶
1. Over-eager change ("always finds an answer")¶
Asked "what should I change?", an LLM almost always proposes something — even when the right answer is "nothing." Symptoms: recommending indexes for queries that don't need them, suggesting schema migrations for workloads that are fine as-is, producing plausible-looking but unhelpful mutations.
Fix: don't ask the LLM whether anything needs changing. Use your own workload telemetry to pre-filter which query patterns are even candidates for improvement, and only invoke the LLM for those. See concepts/index-candidate-filtering and patterns/workload-aware-llm-prompting.
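A minimal sketch of such a pre-filter, assuming `pg_stat_statements`-style telemetry (the field names and thresholds here are hypothetical, not from the source):

```python
def candidate_queries(stats, min_calls=100, min_mean_ms=50.0):
    """Pre-filter: only query patterns that are frequent, slow, and
    scan-heavy become LLM candidates; everything else is assumed fine,
    so the LLM is never asked "does anything need changing?"."""
    out = []
    for q in stats:
        frequent = q["calls"] >= min_calls
        slow = q["mean_exec_ms"] >= min_mean_ms
        # many rows examined per row returned suggests a missing index
        scan_heavy = q["rows_examined"] > 10 * max(q["rows_returned"], 1)
        if frequent and slow and scan_heavy:
            out.append(q["query"])
    return out
```

The point of the sketch: the "nothing needs to change" decision is made here, by telemetry thresholds, before any prompt exists.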
2. Plausible-but-wrong output ("inaccurate results")¶
LLMs produce syntactically plausible, semantically confident output that is sometimes just wrong — an index that doesn't help, a `CREATE INDEX` on a non-existent column, a schema change that violates a constraint, a query rewrite with subtly different semantics.
Fix: validate every generated change against a ground-truth oracle before showing it to a human. The oracle varies by change type:
- Index suggestions → HypoPG + `EXPLAIN` cost comparison.
- Query rewrites → `EXPLAIN` on both variants + a result equivalence check.
- Migration scripts → dry-run in a schema-branched clone, diff before/after.
See patterns/llm-plus-planner-validation for the canonical validation pattern.
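A minimal sketch of the index-suggestion oracle, with the database interaction abstracted behind injected callables (`hypopg_create_index` and `hypopg_reset` are real HypoPG functions; the wiring and the 20% threshold are assumptions for illustration):

```python
def index_helps(run_sql, explain_cost, index_ddl, query, min_improvement=0.2):
    """Return True if a *hypothetical* version of index_ddl reduces the
    planner's estimated total cost for `query` by at least min_improvement.

    run_sql(sql, params):   executes a statement against the database
    explain_cost(query):    returns the EXPLAIN (FORMAT JSON) total cost
    """
    before = explain_cost(query)
    # HypoPG registers the index with the planner only -- no real I/O
    run_sql("SELECT hypopg_create_index(%s)", (index_ddl,))
    try:
        after = explain_cost(query)
    finally:
        run_sql("SELECT hypopg_reset()")  # drop all hypothetical indexes
    return after <= before * (1 - min_improvement)
```

Proposals for which `index_helps` returns False are discarded before any human sees them; the LLM never gets the last word.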
Architectural principles this class imposes¶
- The LLM is never the final decision-maker. It proposes; a validator decides.
- Input to the LLM is shaped by domain telemetry, not raw workload dumps. Workload filtering happens before the prompt is built.
- Prompt content is minimised to what the LLM needs. sources/2026-04-21-planetscale-ai-powered-postgres-index-suggestions: "we filter the schema down to the tables referenced by the query. This keeps the prompt smaller and more focused."
- Every proposal carries a justification that enables validation. The LLM is asked to cite the queries each proposed index is designed to improve — because those citations are exactly what the validation step replays.
- Validation must be cheap. Real `CREATE INDEX` evaluation on a 10 TB table is hours of I/O; hypothetical evaluation is microseconds. The validator has to be cheaper than the change it's validating, or you can't afford to run it on every candidate.
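The schema pre-filtering principle can be sketched as a naive token match (an assumption for illustration — a production version would use a real SQL parser to resolve aliases and qualified names):

```python
import re

def filter_schema(schema_ddl, query):
    """Keep only the DDL for tables the query references, so the prompt
    stays small and focused.

    schema_ddl: dict mapping table name -> CREATE TABLE statement.
    """
    tokens = set(re.findall(r"[A-Za-z_][A-Za-z0-9_]*", query.lower()))
    return {t: ddl for t, ddl in schema_ddl.items() if t.lower() in tokens}
```

Everything not returned by `filter_schema` is simply absent from the prompt, which both shrinks token cost and removes irrelevant tables the LLM might otherwise "improve".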
Examples¶
- PlanetScale Insights AI-powered index suggestions (2026-04) — LLM proposes `CREATE INDEX` statements, HypoPG + `EXPLAIN` validates each one, discards those that don't materially reduce planner cost. Canonical wiki instance.
Seen in¶
- sources/2026-04-21-planetscale-ai-powered-postgres-index-suggestions — introduces the class. Names both failure modes ("finds an answer" vs "inaccurate results") and prescribes the corresponding fixes (workload pre-filter + planner-based validation).