PATTERN
Shadow validation of a derived dependency graph¶
Shadow validation of a derived dependency graph is the pattern of running a new, derived data structure (typically a graph or index) alongside an existing authoritative data path — computing it for every operation, but not acting on it — and emitting errors whenever the authoritative path produces an outcome the derived structure says shouldn't happen. It is the pattern you reach for when:
- The derived structure is about to become load-bearing for correctness (not just performance), AND
- Its failure mode is silent divergence (wrong data, not a crash), AND
- Its completeness cannot be proven from first principles — you're relying on engineers having enumerated every rule correctly.
Shadow validation lets production traffic find the gaps before the derived structure goes live.
(Canonical source: sources/2024-05-22-figma-dynamic-page-loading — Figma's shadow-mode validation of QueryGraph's write dependencies before enabling dynamic loading for editors.)
Shape¶
- Build the derived structure — in Figma's case, QueryGraph with its new write-dep edge types.
- Compute it for every real operation, but don't act on it. The derived structure consumes the same inputs as the live authoritative path and produces its predictions alongside it; authoritative behavior is unchanged.
- Define the error condition. In Figma's case: "if multiplayer received any edits to nodes outside of the write dependency set, it would report an error." The event that would have been wrong under the derived structure is what you watch for.
- Investigate every error. Each error is either a missing rule in the derived structure (a write-dep class that was never enumerated) or a bug in its construction.
- Exit condition. The error stream is clean for long enough that you trust the derived structure to be complete. Flip the live path.
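The steps above can be sketched as a small shadow harness. This is a minimal illustration of the pattern, not Figma's code; the class and method names are hypothetical, and `predict` stands in for whatever derived structure (here, a write-dependency graph) you are validating.

```python
from dataclasses import dataclass, field

@dataclass
class ShadowValidator:
    """Runs a derived structure in shadow: predictions are computed
    for every operation but never acted on."""
    errors: list = field(default_factory=list)  # triage queue

    def predict(self, op) -> set:
        """The derived structure's prediction of the node set an op
        affects. (Placeholder: a real implementation would walk the
        dependency graph.)"""
        raise NotImplementedError

    def observe(self, op, actually_edited: set):
        """Called after the authoritative path handles `op`.
        Authoritative behavior is unchanged; mismatches are only logged."""
        unexpected = actually_edited - self.predict(op)
        if unexpected:
            # Each entry is either a missing rule (an un-enumerated
            # write-dep class) or a bug in graph construction.
            self.errors.append((op, unexpected))

    def clean(self) -> bool:
        """Exit condition: flip the live path only once the error
        stream has stayed empty long enough to trust completeness."""
        return not self.errors
```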
Why Figma's example is a canonical fit¶
Write-dep enumeration is a classification problem: every possible implicit effect between nodes must be represented by an edge type or the graph is incomplete. There are enough special-case effects (auto layout, frame constraints, nested instances, text overrides, variable bindings, …) that the odds of a hand-enumeration being complete on the first try are low.
The failure mode if you ship with a missing edge:
- A collaborator edits a node that should have derived-affected nodes X, Y, Z.
- Your derived graph says the edit only affects A, B.
- The client never materializes X, Y, Z for this user.
- The user sees broken layout, drifted instances, missing fonts — silent divergence, not a crash, not an error pop-up.
Shadow mode inverts the problem: in the shadow phase, the authoritative (full-load) path is still running, so you have a "ground truth" signal — the authoritative system is editing nodes you can compare against the derived structure's predicted set. A mismatch means either a missed rule or a bug in the graph's construction. The post's own example: "we discovered a complex, cross-page, recursive write dependency involving frame constraints and instances. Had we not handled this dependency properly, edits could have resulted in incomplete layout computations."
Difference from patterns/shadow-migration¶
patterns/shadow-migration runs two engines in parallel on the same inputs and reconciles their outputs, for migrating a data pipeline from engine A to engine B. Shadow validation here is narrower: one engine (the authoritative one) is still running; the "shadow" is a derived data structure that predicts what that engine will do. You're not comparing outputs against another engine; you're comparing actual operations against a prediction.
Same animating principle — use production traffic as the test set, don't flip live until the signal is clean — but different shape.
Difference from property-based testing¶
patterns/property-based-testing checks invariants against synthetic inputs. Shadow validation runs the check on real production traffic — which covers cases engineers wouldn't dream up in a generator and in the ratios they actually occur. Figma's "cross-page, recursive" dep would likely be far down the tail of a generator's distribution.
Shadow validation is cheap precisely because you already have a stream of real operations; you just need to add the prediction-and-check layer.
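One way to see how thin that layer can be: wrap the existing authoritative handler so its real result becomes the ground truth for the check. A minimal sketch with hypothetical names (`predict` is assumed to query the derived graph; the handler itself is untouched):

```python
import functools

shadow_errors = []  # triage queue: each entry needs investigation

def with_shadow_check(predict):
    """Decorator adding a prediction-and-check layer around an
    existing authoritative handler."""
    def deco(handler):
        @functools.wraps(handler)
        def wrapped(op):
            edited = handler(op)  # authoritative path runs as before
            missed = set(edited) - set(predict(op))
            if missed:  # an outcome the derived graph says can't happen
                shadow_errors.append((op, missed))
            return edited  # behavior toward callers is unchanged
        return wrapped
    return deco
```

The design point is that the authoritative handler neither knows nor cares that it is being shadowed; the check rides alongside and only emits errors.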
When to use this pattern¶
- Building a derived index / graph / cache that will become a correctness-critical component, not just a performance optimization.
- The component's correctness depends on exhaustive hand-enumeration of rules (no compiler or type system is enforcing completeness for you).
- The failure mode is silent divergence rather than a hard error.
- Production traffic is representative and reasonably diverse.
Cost¶
- Runtime overhead during shadow — computing the derived structure for every operation, logging errors. In Figma's case this was the full write-dep computation on every edit.
- "Extended period" of shadow time before live flip — Figma doesn't say how long, but calls it "an extended period of time."
- Investigation burden — every reported error must be triaged. Some will be genuine missed rules; some may be bugs in the derived structure itself.
Seen in¶
- sources/2024-05-22-figma-dynamic-page-loading — Multiplayer in shadow mode, tracking what page the user is on + computing write deps as if dynamically loaded, without changing runtime behavior. Reporting errors for edits outside the computed write-dep set. Surfaced at least one cross-page recursive write dep missed in the initial implementation.
Related¶
- patterns/shadow-migration — dual-engine reconciliation pattern; the parent family.
- concepts/write-dependency-graph — the specific derived structure that needed shadow validation at Figma.
- concepts/design-away-invalid-states — complementary strategy of making the bad case structurally unrepresentable; shadow validation is what you reach for when you can't.
- patterns/post-inference-verification — same "run a check next to the primary action to catch silent wrongness" shape, applied to LLM outputs rather than data-model derivations.
- patterns/side-by-side-runtime-validation — stronger sibling; runs two full runtimes in parallel on real production workloads and gates rollout on matched correctness and matched performance. Shadow-validation has one authority and a prediction; side-by-side has two complete implementations. Figma's Materializer rollout (2026) is the canonical instance.