CONCEPT
Sidekiq unique jobs¶
Sidekiq Enterprise's unique jobs feature (paid) prevents
duplicate jobs from being enqueued within a configurable
time window. It is configured per worker class via
`sidekiq_options unique_for: DURATION, unique_until: SEMANTIC`.
When a client calls `Worker.perform_async(args)` and an
identical job (same class, same args) was already enqueued
within the window, the attempt is silently rejected.
Shape¶
```ruby
class CheckDeploymentStatusJob < BaseJob
  sidekiq_options queue: "urgent",
                  retry: 5,
                  unique_for: 1.minute,
                  unique_until: :start
  # ...
end
```
- `unique_for: 1.minute` — the uniqueness window. A second `perform_async` with the same args within 1 minute is a no-op.
- `unique_until: :start` — the lifecycle point at which the uniqueness lock releases. `:start` releases as soon as the job begins executing (not when it completes), allowing new enqueues while a long-running instance is still working. Alternative: `:success` (holds until successful completion).
Uniqueness is keyed on a hash of (worker_class, args),
stored in Redis alongside the job payload.
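The exact key format is internal to Sidekiq Enterprise; a minimal sketch of keying on a digest of (worker_class, args), with the `unique:` prefix and the SHA-256 choice as purely illustrative assumptions:

```ruby
require "digest"
require "json"

# Hypothetical key derivation -- Sidekiq Enterprise's real key layout is
# not public. This only illustrates that the key is a stable function of
# (worker_class, args): same class + same args => same key.
def unique_key(worker_class, args)
  digest = Digest::SHA256.hexdigest(JSON.dump([worker_class.to_s, args]))
  "unique:#{digest}"
end

unique_key("CheckDeploymentStatusJob", [42])
# Same class and args always yield the same key; any arg change yields a
# different key, so only exact duplicates are gated.
```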
Why a framework-level gate helps¶
A paired-scheduler architecture deliberately introduces
duplicate enqueues: a user trigger calls `perform_async`;
the scheduler sees the row still pending and calls
`perform_async` again. Without a gate, both land in the
queue and workers run both.
Layer 1 defences (state re-check in perform) catch
this eventually — the second worker reads state and
exits fast — but the second job still consumed queue
space and a worker slot. At volume this is real
overhead.
unique_for rejects the duplicate at enqueue time, so
the queue never holds it and no worker ever wakes up
for it. Net: the scheduler can fire as often as it
wants without generating queue bloat.
unique_until semantics¶
The lifecycle of the uniqueness lock matters. Sidekiq supports several options; the two most common:
- `:start` — lock releases when `perform` begins. Two `perform_async` calls 30 seconds apart (inside `unique_for: 1.minute`) produce one job; but if the job starts executing, a third call 10 seconds later is accepted because the lock has released. Appropriate for enqueue-burst coalescing: the scheduler fires every minute, user triggers fire randomly; you want to avoid queueing duplicates but not block the next legitimate re-enqueue once the previous one starts.
- `:success` — lock holds through execution and releases only on successful completion. More conservative: no new instance can be enqueued even while the first is running. Appropriate for jobs that strictly must not overlap.
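The two lifecycles can be modelled with a toy in-memory lock. This is not Sidekiq's implementation and it ignores the `unique_for` TTL entirely; it only shows which enqueue attempts each `unique_until` setting admits:

```ruby
# Toy model of the uniqueness lock lifecycle (in-memory, TTL omitted).
class UniqueLock
  def initialize(unique_until:)
    @unique_until = unique_until
    @held = false
  end

  # An enqueue attempt is rejected while the lock is held.
  def try_enqueue
    return false if @held
    @held = true
    true
  end

  def job_started            # perform begins
    @held = false if @unique_until == :start
  end

  def job_succeeded          # perform completes successfully
    @held = false if @unique_until == :success
  end
end

lock = UniqueLock.new(unique_until: :start)
lock.try_enqueue   # => true  (first enqueue takes the lock)
lock.try_enqueue   # => false (duplicate rejected)
lock.job_started   # lock releases as soon as perform begins
lock.try_enqueue   # => true  (re-enqueue allowed mid-execution)
```

With `unique_until: :success`, the third call would still return `false` until `job_succeeded` fires.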
PlanetScale's canonical example uses :start because
the jobs are status-poll-like (CheckDeploymentStatus)
— overlap with earlier instances that are still running
is fine; duplicate queue entries are what matters.
How it composes with idempotent job design¶
Unique jobs is layer 3 of the three-layer idempotence stack:
- Layer 1: state re-check at job entry (cheapest).
- Layer 2: DB row lock around mutation (correctness under concurrency).
- Layer 3: unique jobs at enqueue time (queue-bloat prevention).
Layer 3 alone isn't sufficient — a gap of
unique_for + ε between two enqueues still produces
two jobs, so layer 1 is still required. But adding
layer 3 where it's cheap to do so keeps scheduler
storms from filling the queue.
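The layer-1 fallback this paragraph relies on can be sketched in plain Ruby (the `Deployment` struct and status values are illustrative, not PlanetScale's actual model):

```ruby
# Layer 1: re-check state at job entry, so a duplicate that slipped past
# the enqueue gate exits fast instead of redoing the work.
Deployment = Struct.new(:status)

def perform_check(deployment)
  return :noop if deployment.status == "finished" # layer 1: state re-check
  # Layer 2 would wrap the mutation in a DB row lock; elided here.
  deployment.status = "finished"
  :worked
end

d = Deployment.new("pending")
perform_check(d) # => :worked (first job does the work)
perform_check(d) # => :noop   (duplicate exits fast)
```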
Enterprise-only¶
The `unique_for` option is a Sidekiq Enterprise
feature (paid subscription). For open-source Sidekiq,
the community gem `sidekiq-unique-jobs` provides
similar behaviour with different internals; notably,
its Redis key layout differs.
Implementation¶
Sidekiq Enterprise implements `unique_for` via a Redis
SETNX on a key derived from (worker_class, args) at
enqueue time, with the key's TTL set to `unique_for`.
`perform_async` checks for the key's presence; if it is
set, the call returns without enqueuing (silently, with
no exception raised).
The uniqueness check is therefore a Redis round-trip
added to every perform_async call for workers with
unique_for configured. For high-enqueue-rate
workers this is a small but measurable overhead.
Seen in¶
- sources/2026-04-21-planetscale-how-we-made-planetscales-background-jobs-self-healing —
canonical wiki introduction. PlanetScale uses
`unique_for: 1.minute, unique_until: :start` on `CheckDeploymentStatusJob` as the third layer of duplicate-job defence, paired with state re-check at job entry and DB locks around mutation. Named as layer 3 in their three-layer dedup stack.