CONCEPT Cited by 1 source
Dependent-destroy cascade risk¶
Dependent-destroy cascade risk is the unbounded-work
failure mode of ActiveRecord's
`has_many :children, dependent: :destroy` option when
child-row count grows with application age. A small UI
action (a user deletes an Author) produces an O(children)
DB transaction that locks rows across multiple tables,
blocks the request thread, ties up the primary DB, and
spikes replication lag on
every replica — all for a delete the user perceives as a
single click.
The canonical trap¶
```ruby
class Author < ApplicationRecord
  has_many :books, dependent: :destroy
end

class Book < ApplicationRecord
  belongs_to :author
end
```
When `author.destroy` runs:
- ActiveRecord opens a transaction.
- Issues `SELECT * FROM books WHERE author_id = ?`.
- For each `Book`, invokes the full `destroy` lifecycle (callbacks, further cascades if `Book` has its own `dependent: :destroy` associations).
- Issues `DELETE FROM books WHERE id = ?` per child.
- Issues `DELETE FROM authors WHERE id = ?`.
- Commits.

All within a single DB transaction; all within the user's request thread.
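The statement count is easy to see with a schematic, framework-free sketch (the `cascade_statements` helper below is illustrative, not a real ActiveRecord internal; it only enumerates the SQL the steps above describe):

```ruby
# Illustrative sketch: enumerate the statements author.destroy issues
# inside one transaction, as a function of child count.
def cascade_statements(child_count)
  statements = ["BEGIN"]
  statements << "SELECT * FROM books WHERE author_id = ?"
  child_count.times do
    # Each child gets the full destroy lifecycle, then its own DELETE.
    statements << "DELETE FROM books WHERE id = ?"
  end
  statements << "DELETE FROM authors WHERE id = ?"
  statements << "COMMIT"
  statements
end

# 50,000 books => 50,004 statements in one transaction, on the request thread.
puts cascade_statements(50_000).size
```

The point of the sketch: the statement count is linear in child count, and all of it sits between one `BEGIN` and one `COMMIT`.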
The failure modes¶
From sources/2026-04-21-planetscale-ruby-on-rails-3-tips-for-deleting-data-at-scale:
"As a Rails application grows, it can be very easy to unintentionally delete a parent record and trigger a cascade of thousands of deletions. Having all of this happen within a request can lead to timeouts and added strain on your database."
Four specific failure modes compound:
- Request-path timeout. The user's HTTP request blocks until the transaction commits. A 10-second cascade delete blows the 5-second request timeout on most web-tier configurations; the user sees a 504 while the cascade completes in the background (or gets rolled back on connection timeout).
- Row-lock contention. Each row being deleted holds an X lock for the transaction duration. Any concurrent query touching those rows (`SELECT ... FOR UPDATE`, `UPDATE`, `DELETE`, or even some `SELECT`s at strict isolation levels) blocks until the cascade commits. On a hot aggregate (an Author with 50,000 Books, each locked for the cascade duration), this produces cluster-wide stall.
- Binlog-event amplification + [[concepts/replication-lag|replication lag]]. A single 10-second cascade transaction produces a binlog event the replicas must replay sequentially. The lag seen by every replica is at least the cascade duration. On a read-heavy cluster (web tier reading from replicas), this lag propagates to every user reading stale-beyond-lag data.
- Cascading cascade amplification. If `Book` in turn has `has_many :chapters, dependent: :destroy`, and each `Chapter` has `has_many :sentences, dependent: :destroy`, the work per destroyed author is O(books × chapters × sentences): unbounded multiplicatively.
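The multiplicative blow-up is worth making concrete. A minimal sketch (the `destroy_calls` helper is hypothetical; it just counts the `destroy` invocations a three-level `dependent: :destroy` chain would issue for one author):

```ruby
# Hypothetical counter: how many destroy calls does one author.destroy
# trigger when books -> chapters -> sentences all cascade via :destroy?
def destroy_calls(books:, chapters_per_book:, sentences_per_chapter:)
  sentence_destroys = books * chapters_per_book * sentences_per_chapter
  chapter_destroys  = books * chapters_per_book
  book_destroys     = books
  sentence_destroys + chapter_destroys + book_destroys + 1 # +1 for the author
end

# A modest author: 100 books, 20 chapters each, 50 sentences per chapter.
# One click => 102,101 destroy lifecycles inside one transaction.
puts destroy_calls(books: 100, chapters_per_book: 20, sentences_per_chapter: 50)
```

Each of those is a full ActiveRecord lifecycle (callbacks included), not just a row delete, so wall-clock time grows even faster than the row count suggests.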
Why operators don't see it coming¶
dependent: :destroy looks correct and clean in the model
file. The coupling between the small UI action and the
large DB work is invisible at read-time. Developers
adding associations during feature work rarely check how
many children a typical parent has in production.
Three years into a fast-growing application, an
author.destroy that used to touch 12 rows now touches
50,000. The model file is unchanged; the operational
profile is catastrophic. The gap between the code's
appearance and its runtime cost is the essence of the
trap.
Mitigations¶
- `dependent: :destroy_async`: the canonical Rails 6.1+ fix. Decouples the cascade from the request transaction via ActiveJob. The per-request work becomes O(1); the cascade work runs in a Sidekiq job bounded to its own transaction(s).
- `dependent: :restrict_with_error`: reject parent deletion while children exist; push the cleanup responsibility to application code. Forces an explicit per-child deletion plan at deletion time, which naturally uses a batched-deletion job.
- Batched explicit deletion at the application tier: a Sidekiq job with self-requeue that deletes children in bounded batches before deleting the parent.
- Soft-delete the parent, cron-delete the children: soft-delete the parent (set `deleted_at`), let a cron job reap the children over time, and hard-delete the parent once the child count drops to zero.
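The batched-deletion shape can be sketched without any framework. In a real app the loop body would be something like `Book.where(author_id: id).limit(batch_size).delete_all` inside a self-requeuing Sidekiq job; here a plain array stands in for the table (the `delete_in_batches` name is illustrative):

```ruby
# Illustrative batching loop: each iteration does one bounded delete.
# In a Sidekiq job, each iteration would instead re-enqueue the job,
# so no single transaction or job run is ever O(total children).
def delete_in_batches(child_ids, batch_size: 1000)
  batches = 0
  until child_ids.empty?
    child_ids.shift(batch_size) # one bounded DELETE per iteration
    batches += 1
  end
  batches
end

puts delete_in_batches((1..5_000).to_a, batch_size: 1_000) # 5 bounded deletes
```

The design point: the work is the same O(children) total, but it is chopped into transactions whose size is fixed by `batch_size`, so lock hold time and binlog event size per transaction stay bounded.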
Relationship to DB-level ON DELETE CASCADE¶
Foreign-key `ON DELETE CASCADE` constraints have an
identical failure-mode structure at a different layer.
The cascade is atomic in the DB but unbounded in the same
way: `DELETE FROM authors WHERE id = 1` with
`ON DELETE CASCADE` on `books.author_id` issues the same
O(children) row deletions inside the same transaction.
Moving the cascade from ActiveRecord to the DB doesn't
fix the unbounded-work problem; it just hides it one
layer deeper.
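For reference, this is the shape of the DB-level variant in a Rails migration (table and column names assumed from the example models above; this is a sketch, not code from the source post):

```ruby
# Hypothetical migration declaring the DB-level cascade discussed above.
# The database, not ActiveRecord, then deletes all matching books rows
# whenever an authors row is deleted -- same O(children) transaction.
class AddAuthorForeignKeyToBooks < ActiveRecord::Migration[7.0]
  def change
    add_foreign_key :books, :authors, on_delete: :cascade
  end
end
```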
The canonical PlanetScale recommendation from the source
post is explicit: "We recommend replacing any usage of
foreign key constraints with :destroy_async for safer
deletes at scale." The cascade must live on an
independently-executable path to stay bounded — which
means an async job, which means the application tier
(there's no native "async cascade" at the DB layer).
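The recommended shape, per the source post, is the Rails 6.1+ association option (model sketch, assuming the example models above):

```ruby
# Sketch: the cascade is enqueued as an ActiveJob
# (ActiveRecord::DestroyAssociationAsyncJob) instead of running
# inside the request's transaction.
class Author < ApplicationRecord
  has_many :books, dependent: :destroy_async
end
```

`author.destroy` then deletes only the author row in the request; the child deletions happen later, on the job tier, where they can be bounded and retried.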
Canonicalised as
patterns/foreign-key-cascade-vs-dependent-destroy-async.
Seen in¶
- sources/2026-04-21-planetscale-ruby-on-rails-3-tips-for-deleting-data-at-scale —
canonical wiki introduction. Mike Coutermarsh
(PlanetScale, 2022-08-01) canonicalises the trap in
the opening framing of the 3-tips post and names
`dependent: :destroy_async` as the fix. Verbatim trap statement: "can lead to timeouts and added strain on your database."