PATTERN
Config-based soft-spacing framework¶
Problem¶
Soft-spacing penalties start out as one-off code for one sensitive content class (e.g. elevated-quality-risk content). As more quality axes get identified over time — new policy classes, new sensitivity types, new UX concerns — each gets hard-coded: a separate penalty function, separate classifier signal, separate weight, separate window. Adding a new sensitivity axis requires a code change, a deploy, an A/B test, and re-tuning.
Over months, the reranking stage accumulates a zoo of one-off penalty terms that are hard to reason about, hard to tune jointly, and hard to remove without risk.
Solution¶
Abstract soft-spacing into a config-based framework where each sensitive class is declaratively specified and the framework composes them into the utility equation automatically. Each class config provides:
- Class identifier — the sensitive set R.
- Classifier signal source — how items get labelled.
- Distance kernel — e.g. inverse 1/d, exponential, piecewise.
- Window size w — how far back the penalty looks over already-placed items.
- Weight λ — relative strength vs other objectives.
- Activation conditions — e.g. surface, user segment, experiment.
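As a concrete illustration, the fields above could be captured in a schema like the following. This is a hedged sketch: the field names, types, and defaults are assumptions for the example, not Pinterest's actual config format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class SoftSpacingClassConfig:
    class_id: str                        # the sensitive set R
    signal_source: str                   # classifier feed that labels items as members of R
    kernel: str                          # distance kernel: "inverse", "exponential", "piecewise"
    window: int                          # w: how many preceding slots the penalty looks back over
    weight: float                        # λ: strength relative to other objectives
    surfaces: tuple = ("home",)          # activation conditions
    experiment_flag: Optional[str] = None  # optionally gate the class behind an A/B flag

# A new sensitivity class becomes a config push, not a code change:
risk_cfg = SoftSpacingClassConfig(
    class_id="elevated_quality_risk",
    signal_source="quality_risk_classifier_v2",
    kernel="inverse",
    window=10,
    weight=0.3,
)
```

Making the config immutable (`frozen=True`) keeps a request's penalty computation consistent even if a config push lands mid-flight.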
The reranking engine reads configs at request time, computes per-class q_c,i(t) terms, and subtracts them from the utility equation: U_i ← U_i − Σ_c λ_c · q_c,i(t).
New classes can be added via a config push with no code deploy, A/B-tested at config granularity, and deprecated without removing anything from the critical path.
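A minimal sketch of that composition step, assuming a greedy slot-by-slot reranker and dict-shaped configs; all names and kernel choices are illustrative, not Pinterest's disclosed implementation.

```python
# Two example distance kernels; a real framework would register these per config.
KERNELS = {
    "inverse": lambda d: 1.0 / d,        # penalty decays as 1/d with slot distance
    "exponential": lambda d: 0.5 ** d,   # penalty halves per slot of distance
}

def penalized_utility(relevance, labels, placed_labels, configs):
    """U_i = relevance_i - sum_c lambda_c * q_{c,i}(t).

    labels: set of class_ids the candidate item belongs to.
    placed_labels: label-sets of already-placed items, most recent last.
    configs: iterable of class configs (plain dicts here for brevity).
    """
    utility = relevance
    for cfg in configs:
        if cfg["class_id"] not in labels:
            continue  # candidate is not in this sensitive set; no penalty
        kernel = KERNELS[cfg["kernel"]]
        # q_{c,i}(t): kernel-weighted count of same-class items inside the window w
        q = sum(
            kernel(d)
            for d, prev in enumerate(reversed(placed_labels[-cfg["window"]:]), start=1)
            if cfg["class_id"] in prev
        )
        utility -= cfg["weight"] * q  # subtract lambda_c * q_{c,i}(t)
    return utility
```

For example, with one class at weight 0.5 and an inverse kernel, a same-class item two slots back contributes a penalty of 0.5 × (1/2):

```python
cfgs = [{"class_id": "risk", "kernel": "inverse", "window": 5, "weight": 0.5}]
penalized_utility(1.0, {"risk"}, [{"risk"}, set()], cfgs)  # → 0.75
```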
Canonical instance — Pinterest late 2025¶
Pinterest launched soft-spacing mid-2025 as a one-off penalty for elevated-quality-risk content. Late-2025 abstraction:
"In late 2025 we further abstracted the logic via building an easy to use, config-based framework to make it more extendable to meet and adapt to quality needs." (Source: sources/2026-04-07-pinterest-evolution-of-multi-objective-optimization-at-pinterest-home)
Pinterest doesn't disclose the config schema or the axes of extensibility in detail, but the direction is clear: soft-spacing became a platform rather than a feature. Future sensitivity classes get added by configuration; the reranking engine composes them automatically.
Structural properties¶
- Declarative class membership — sensitivity classes are data, not code.
- Shared utility-equation composition — framework enforces the − λ_c · q_c,i(t) shape uniformly so per-class tuning is local, not entangled with other classes.
- Config distribution substrate — rides existing config-push infrastructure (feature flags, dynamic service config); per Pinterest's pattern of treating policy/rule distribution as dynamic config (patterns/config-distribution-for-quota-rules is the sibling pattern from the quota domain).
- Per-class A/B testing — enable/disable/re-weight each class independently behind experiment flags.
- Debuggability — per-class penalty contribution is loggable and inspectable per impression.
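The debuggability property can be as simple as emitting one per-impression record with an entry per active class; a hypothetical sketch (field names are assumptions):

```python
def penalty_breakdown(relevance, per_class_penalties):
    """Build a loggable per-impression record of penalty contributions.

    per_class_penalties: dict of class_id -> lambda_c * q_{c,i}(t),
    e.g. {"elevated_quality_risk": 0.15}.
    """
    total = sum(per_class_penalties.values())
    return {
        "relevance": relevance,
        "per_class": per_class_penalties,  # inspectable per class, per impression
        "total_penalty": total,
        "final_utility": relevance - total,
    }
```

Logging the breakdown rather than only the final utility is what makes per-class launch metrics and offline tuning possible.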
When to apply this pattern¶
- You have a single soft-spacing implementation and are about to add the second one — stop, abstract first.
- You foresee a growing list of sensitive content classes over time (policy evolution, new UX concerns, new verticals).
- You have an existing dynamic-config distribution substrate.
- The penalty axes are structurally similar (classifier label → distance-weighted penalty → utility reduction).
When NOT to apply this pattern¶
- You have exactly one soft-spacing class with no plausible second one — YAGNI.
- Your axes are structurally different (some need hard filters, some need pre-ranking boosts, some need post-hoc swaps) — a uniform soft-penalty framework may not fit.
- You lack the config-distribution infrastructure — build that first.
Dependencies¶
- A classifier pipeline that produces the per-item labels consumed by each class config.
- A config distribution substrate (PinConf, feature flags, dynamic config). Pinterest rides PinConf (substrate shared with rate-limit quotas and feature flags, per sources/2026-02-24-pinterest-piqama-pinterest-quota-management-ecosystem).
- A reranking engine whose utility equation accommodates additive penalty composition (SSD's equation form is ideal; DPP's determinant form is not).
- Observability per class — loggable q_c contributions, per-class penalty distributions, per-class launch metrics.
Caveats¶
- Class interaction — multiple overlapping penalty classes can compound; joint tuning is harder than solo tuning of one class.
- Classifier drift — configs assume classifier semantics are stable; drift in classifier behaviour silently shifts the soft-spacing effect.
- Hyperparameter search grows combinatorially — each new class adds a λ_c and possibly a w_c and kernel_c; the framework should constrain the search space via defaults and guardrails.
- Config-surface vs policy-surface — not every policy decision should become a tunable dial; some decisions belong in the classifier (hard filter) or in product UX (explicit warnings) instead.
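One way to implement the defaults-and-guardrails caveat is to validate every pushed config against per-field bounds before it reaches the reranker. The bounds and field names below are illustrative assumptions, not a disclosed schema:

```python
# Framework-level defaults keep a new class's search space small,
# and hard bounds stop a bad config push from reaching serving.
DEFAULTS = {"kernel": "inverse", "window": 10, "weight": 0.3}
BOUNDS = {"window": (1, 50), "weight": (0.0, 1.0)}

def validate_config(cfg):
    """Fill missing fields from defaults and reject out-of-range values."""
    merged = {**DEFAULTS, **cfg}
    for key, (lo, hi) in BOUNDS.items():
        if not lo <= merged[key] <= hi:
            raise ValueError(f"{key}={merged[key]} outside guardrail [{lo}, {hi}]")
    return merged
```

Rejecting at config-push time, rather than at request time, keeps a mis-tuned λ_c out of the critical path entirely.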
Seen in¶
- sources/2026-04-07-pinterest-evolution-of-multi-objective-optimization-at-pinterest-home — canonical wiki instance. Pinterest's mid-2025 single-class soft-spacing abstracted into late-2025 config-based framework for extensibility to new quality-need classes.
Related¶
- concepts/soft-spacing-penalty — the mechanism this framework platforms.
- concepts/quality-penalty-signal — the signal consumed by each class.
- concepts/feed-diversification — the broader reranking concern.
- systems/pinterest-home-feed-blender — the canonical host.
- patterns/multi-objective-reranking-layer — parent pattern.
- patterns/config-distribution-for-quota-rules — sibling config-as-rule-distribution pattern in the quota domain.
- patterns/rule-engine-with-continuous-policy-deploy — related rule-distribution framing at the serving layer.