
PATTERN Cited by 1 source

Database as final arbiter of uniqueness

Pattern: for any logical uniqueness invariant, the database's unique index (or an equivalent row-lock-based conditional-insert primitive) must be the final authoritative check — application-layer validations (an ORM uniqueness: validation, a pre-insert SELECT, client-side de-duplication) are advisory only. The application code path must treat the database's unique-violation error as the canonical outcome, not the ORM validation.

Why

Application-layer uniqueness validation is always implemented as a check-then-act sequence: SELECT for the conflicting row, INSERT if none is found. With no lock held between the SELECT and the INSERT, two concurrent requests can both see the pre-conflict state and both proceed to INSERT, and at most one of those INSERTs can succeed. Which one succeeds and which one fails is a race; that at most one succeeds is guaranteed only by the database's unique-index enforcement at commit time.
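The race can be made deterministic by forcing the bad interleaving by hand. This is a self-contained sketch, not real ORM code: a plain array stands in for a table with no unique index, and both "requests" run their SELECT phase before either runs its INSERT phase.

```ruby
# Deterministic simulation of the check-then-act race. `rows` stands in
# for a table with NO unique index; the interleaving is forced so both
# requests check before either inserts.
rows = []

select_phase = ->(name) { rows.none? { |r| r == name } } # "is the name free?"
insert_phase = ->(name) { rows << name }                 # unguarded INSERT

a_saw_free = select_phase.call("staging") # request A checks: free
b_saw_free = select_phase.call("staging") # request B checks: still free
insert_phase.call("staging") if a_saw_free
insert_phase.call("staging") if b_saw_free

puts rows.count("staging") # => 2 — both checks passed, so both inserts ran
```

With a unique index, the second INSERT would have raised instead of silently creating the duplicate row.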

"The database, as the final arbiter of uniqueness, would then only allow one of these queries to succeed and the other would receive the error we see in Insights."

— Rafer Hazen, PlanetScale (Source: sources/2026-04-21-planetscale-debugging-database-errors-with-insights)

This is not a bug in the ORM validator or in the application. It is a structural consequence of the fact that uniqueness is a global invariant across concurrent writers, and global invariants on a shared resource can only be enforced at the shared resource — i.e. at the database. Application-layer validation is a layer on top of the database guarantee, not a replacement for it.

Corollaries

  1. The unique index must exist. Without a database-level unique index (or equivalent — a UNIQUE constraint, an EXCLUDE constraint, an UPSERT primitive), no amount of application-layer checking can guarantee uniqueness. In the PlanetScale example, the schema definition
t.index ["database_branch_id", "display_name"],
    name: "idx_branch_id_display_name", unique: true

is what actually prevents duplicates. Removing it and keeping only the Rails validates :display_name, uniqueness: true would leave the system looking correct in single-threaded dev tests and racing into duplicate rows in production.

  2. The application code path must handle the DB error. ActiveRecord::RecordNotUnique, Postgres unique_violation (SQLSTATE 23505), MySQL ERROR 1062 — whatever the database raises when the unique index rejects an insert must be caught and translated into the same user-facing error the ORM validation would have returned. Otherwise concurrent users see raw database errors for cases the ORM validator caught in the single-threaded path. The canonical shape:
def create
  record.save!
rescue ActiveRecord::RecordNotUnique
  # The unique index rejected the insert: translate the raw DB error into
  # the same user-facing error the ORM validation would have produced.
  record.errors.add(:display_name, :taken)
  render_validation_error
end
  3. Validation is a UX affordance, not a correctness guarantee. It does improve the common uncontested path: one transaction round-trip instead of two for every create, and a friendly error without a rescue handler. It does not reduce the need for the DB-layer enforcement. In systems where latency matters more than duplicate-error-UX, skipping the app-layer validation entirely and relying on the database error path is a defensible simplification ("optimistic insert, rescue on duplicate").

  4. Atomic conditional-insert primitives are stronger than SELECT-then-INSERT. Where available, prefer INSERT ... ON DUPLICATE KEY UPDATE (MySQL) / INSERT ... ON CONFLICT DO NOTHING (Postgres) / INSERT ... SELECT ... WHERE NOT EXISTS — these collapse the check-then-act into a single atomic database operation, so there is no ORM-validation window where a race can happen. The application still handles the "no row was inserted because it already existed" case, but without an error-rescue path.
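As a concrete sketch of the Postgres variant — the table and column names here are assumptions modelled on the index definition above, not taken verbatim from the source:

```sql
-- Atomic conditional insert: the check and the act are one statement,
-- so there is no window for a concurrent writer to slip through.
INSERT INTO database_branch_passwords (database_branch_id, display_name)
VALUES ($1, $2)
ON CONFLICT (database_branch_id, display_name) DO NOTHING;
-- The affected-row count tells the caller whether the row was inserted
-- (1) or already existed (0), with no error raised either way.
```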

What this pattern does not solve

  • Conceptual uniqueness that crosses the database boundary. If the same username must be unique across two different databases, neither database can enforce it alone; you need a third system (a username service, a distributed lock, a Saga). The pattern scopes to uniqueness enforceable by a single database's indexes.

  • Uniqueness under dual-write migration. During a cutover where two databases are accepting writes, unique indexes on both sides don't coordinate — you can get conflicting rows that are individually unique in each database but collide on merge. See the PlanetScale zero-downtime-migrations series for this edge case.

  • Soft-deleted uniqueness. WHERE deleted_at IS NULL filtered uniqueness requires a partial unique index (Postgres) or a generated-column unique index (MySQL 5.7+), not a plain unique index. Application-layer validation that respects deleted_at without a matching partial index at the database is the same check-then-act race in a different costume.
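The Postgres form of the soft-delete-aware index is a one-line partial index; the table and column names here are illustrative, not from the source:

```sql
-- Postgres: uniqueness enforced only among live (non-deleted) rows.
-- Two rows may share a username as long as at most one is undeleted.
CREATE UNIQUE INDEX idx_users_username_live
  ON users (username)
  WHERE deleted_at IS NULL;
```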

Implementation checklist

  • Database-level unique index / constraint exists and covers the same columns + scope as the application-layer validation.
  • Application code path catches the database's unique-violation error and translates it to the same user-facing error the validation would return.
  • Optional: replace SELECT-then-INSERT validator with atomic ON CONFLICT / ON DUPLICATE KEY / WHERE NOT EXISTS insert for latency and correctness.
  • Optional: concurrency test that explicitly fires two simultaneous requests and asserts at most one row exists after both return (covers the race the dev-environment test misses — see concepts/check-then-act-race for why sequential tests don't).
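The shape of that concurrency test can be sketched without a database. In this self-contained sketch a Mutex-guarded Set plays the role of the unique index (all names here are illustrative); a real test would fire both threads at the actual create endpoint and assert on the table afterwards.

```ruby
require "set"

table = Set.new
index_lock = Mutex.new

create_record = lambda do |name|
  index_lock.synchronize do
    # Set#add? returns nil if the element was already present — standing in
    # for the unique index rejecting the second insert.
    raise "RecordNotUnique" unless table.add?(name)
  end
end

# Two "simultaneous requests" for the same name.
outcomes = Array.new(2) do
  Thread.new do
    begin
      create_record.call("staging")
      :created
    rescue RuntimeError
      :duplicate
    end
  end
end.map(&:value)

puts outcomes.sort.inspect # one :created, one :duplicate, every run
puts table.size            # => 1 row, no matter the interleaving
```

The assertion that matters is the last one: exactly one row exists after both requests return, regardless of scheduling.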

Seen in

  • sources/2026-04-21-planetscale-debugging-database-errors-with-insights — Canonical disclosure. PlanetScale staff encountered a production instance of Rails uniqueness validation racing against MySQL unique-index enforcement on the database_branch_password table; the error volume was "a few 10s to a few hundred per day" and the fix in their case was caller-side serialisation (an internal tool stopped issuing parallel requests). The post states the pattern verbatim: "the database, as the final arbiter of uniqueness, would then only allow one of these queries to succeed."