
PATTERN

Opt-in performance interface

Problem

A platform-level performance instrumentation system (see patterns/base-class-automatic-instrumentation) can run on every screen automatically, but it cannot know which of the screen's UI elements matter for the user-perceived done-state. An Android View tree can contain hundreds of ImageViews: hero images, icons, loading shimmers, background decorations, avatars, chrome. Only some of them are content-critical for Visually Complete. Auto-detecting every ImageView as critical produces either never-complete screens (some decorative shimmer is always loading) or misleading completion timestamps.

Solution

Make product engineers opt specific views in via a marker interface. The interface carries a small readiness contract (isDrawn(), isVideoLoadStarted(), geometry methods) that the platform consumes during its view-tree walk. Implementing the interface is the product engineer's declaration: "count this view for performance measurement."

Pinterest's canonical instance (Source: sources/2026-04-08-pinterest-performance-for-everyone):

  • Three marker interfaces: PerfImageView, PerfTextView, PerfVideoView — one per media-type because each type has a different ready-predicate (drawn for image/text, video started playing for video).
  • Minimal method surface: isDrawn(), isVideoLoadStarted(), x(), y(), width(), height() — only what the platform needs for the walk.
  • Product-engineer responsibility: on a new surface, decide which views carry user-perceived meaning and tag them by implementing the right Perf* interface. Typically one line: class HeroImageView : ImageView(), PerfImageView.
  • Platform responsibility: everything else — walk the tree, filter to visible, check readiness, conjoin, emit.
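The split of responsibilities above can be sketched without any Android dependencies. In this sketch only the names PerfImageView, PerfVideoView, isDrawn(), isVideoLoadStarted(), and the geometry methods come from the source; the Node model, HeroImage class, and visuallyComplete() are assumptions standing in for Android's View hierarchy and the platform's real walk.

```kotlin
// Marker-plus-readiness interfaces: implementing one IS the opt-in.
interface PerfImageView {
    fun isDrawn(): Boolean
    fun x(): Int; fun y(): Int; fun width(): Int; fun height(): Int
}

interface PerfVideoView {
    fun isVideoLoadStarted(): Boolean
    fun x(): Int; fun y(): Int; fun width(): Int; fun height(): Int
}

// Hypothetical stand-in for an Android View node; real code walks View children.
open class Node(val children: List<Node> = emptyList(), val visible: Boolean = true)

// Product-engineer side: one declaration tags the view as content-critical.
class HeroImage(var drawn: Boolean) : Node(), PerfImageView {
    override fun isDrawn() = drawn
    override fun x() = 0; override fun y() = 0
    override fun width() = 1080; override fun height() = 600
}

// Platform side: walk the tree, filter to visible, check readiness, conjoin.
fun visuallyComplete(root: Node): Boolean {
    fun walk(n: Node): Sequence<Node> =
        sequenceOf(n) + n.children.asSequence().flatMap { walk(it) }
    return walk(root)
        .filter { it.visible }
        .all { node ->
            when (node) {
                is PerfImageView -> node.isDrawn()       // cheap type check, no reflection
                is PerfVideoView -> node.isVideoLoadStarted()
                else -> true                             // untagged views never block
            }
        }
}
```

Note that untagged views fall through to true: they can never hold Visually Complete open, which is exactly the false-positive avoidance the pattern buys.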

Why opt-in and not auto-detect

  • False-positive avoidance — not every view matters for Visually Complete. Loading placeholders, chrome, decorative views should not block completion.
  • Product intent is not type-observable — two identical ImageViews can have opposite roles (one is the hero, the other is decorative). Only the engineer knows which.
  • Tagging is cheap — one declaration per content-critical view; minutes instead of the two-engineer-weeks per-surface cost of hand-rolled detection.
  • Discoverability — tagged views are grep-able / lintable / auditable. A new engineer can ask "which views participate in Visually Complete?" and get an answer.

Why a marker interface and not an annotation / attribute / registration call

This is a language-specific trade-off. Interface-based opt-in (Pinterest's choice) has advantages in Java/Kotlin:

  • Compile-time visibility — implementing the interface is a visible part of the view's type.
  • No runtime-reflection / annotation-processor scaffolding needed — the platform's type check is a cheap instanceof.
  • IDE affordances — autocomplete naturally surfaces the readiness methods.
  • Inheritance composability — subclasses automatically inherit the marker.

Alternatives considered in equivalent designs:

  • Annotations (@PerfCritical) — requires annotation-processor or runtime reflection; less ergonomic on Android.
  • Registration API (perfTracker.register(this)) — imperative opt-in; easier to forget; harder to lint.
  • Base class (PerfImageViewBase) — locks the subclass out of other inheritance; less flexible than interface-based opt-in in single-inheritance languages.

The load-bearing correctness property: tagging accuracy

The whole pattern's measurement accuracy depends on product engineers tagging the right views:

  • Forgot to tag (false negative) → a view the user is waiting for doesn't count → Visually Complete fires too early.
  • Tagged irrelevant view (false positive) → a view the user doesn't care about blocks completion → Visually Complete fires too late or never.
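Both failure modes fall out of the completion rule itself: if Visually Complete is the latest ready-time among tagged views, dropping a tag lowers that maximum (fires early) and tagging a never-ready view makes it unbounded (fires never). A toy sketch of that arithmetic, with all values and the function name hypothetical:

```kotlin
// Toy model: each tagged view reports the time (ms) it became ready,
// or null if it never does (e.g. an endlessly looping shimmer).
// Visually Complete = latest ready-time among tagged views; null = never fires.
fun visuallyCompleteAt(taggedReadyTimes: List<Long?>): Long? =
    if (taggedReadyTimes.any { it == null }) null
    else taggedReadyTimes.filterNotNull().maxOrNull() ?: 0L

val hero = 900L            // content-critical, ready at 900 ms
val avatar = 400L          // content-critical, ready at 400 ms
val shimmer: Long? = null  // decorative, never "ready"

val accurate = visuallyCompleteAt(listOf(hero, avatar))               // 900 ms
val forgotHero = visuallyCompleteAt(listOf(avatar))                   // 400 ms: too early
val taggedShimmer = visuallyCompleteAt(listOf(hero, avatar, shimmer)) // null: never fires
```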

Production mitigations (not specified in Pinterest's post but standard industry practice):

  • Lint rule that warns on ImageViews above a certain screen-position threshold that aren't tagged.
  • Perf-team review of PRs adding new surfaces.
  • Periodic audit via human rater video-review vs measured Visually Complete timestamp.
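The lint-rule idea can also run as a debug-build audit. A minimal sketch of the heuristic, assuming a simplified node model; the class names and the "above 1200 px, at least 200 px tall" thresholds are illustrative assumptions, not Pinterest's code:

```kotlin
// Opt-in marker, as in the pattern; untagged nodes are audit candidates.
interface PerfImageView { fun isDrawn(): Boolean }

// Hypothetical image-node model: screen position and size in pixels.
open class ImageNode(val id: String, val top: Int, val heightPx: Int)

class TaggedImage(id: String, top: Int, h: Int) : ImageNode(id, top, h), PerfImageView {
    override fun isDrawn() = true
}

// Heuristic: large image near the top of the screen that is NOT opted in
// is probably content-critical and probably a forgotten tag.
fun auditUntagged(images: List<ImageNode>): List<String> =
    images.filter { it !is PerfImageView && it.top < 1200 && it.heightPx >= 200 }
        .map { it.id }
```

A debug overlay or CI check could surface the returned ids to the surface owner; small icons and below-the-fold decorations pass silently.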

When to use

  • Platform is instrumenting a hierarchical substrate (view tree, component tree, DOM) and only some elements matter for the predicate being measured.
  • Product engineers can be expected to declare opt-in with small boilerplate.
  • The opt-in declaration carries state / methods the platform needs to consume (not just a marker — Pinterest's isDrawn() style).
  • The platform wants to avoid false positives from auto-detection.

When not to use

  • Every instance of a type counts — if all ImageViews really do matter, there's no need for opt-in; just auto-detect.
  • The opt-in can't be trusted — if product engineers can't be relied upon to tag correctly, mechanisms with better defaults (auto-detect + denylist) or validation (human rater comparison) are better.
  • Language doesn't support lightweight interfaces / traits — if the cost of declaring opt-in is high, engineers will skip it.

Caveats

  • Pinterest's post does not describe tagging guardrails (lint rules, review process, rater validation). This is left to the implementor.
  • Hybrid marker + method surface is Pinterest's specific shape. Pure marker interfaces (zero methods) don't work here because the platform needs readiness + geometry queries.
  • Scope is per-screen, not per-metric — each marker interface serves one platform concern (Visually Complete). Stacking multiple orthogonal opt-in interfaces on the same view is possible but isn't described by Pinterest.

Seen in

  • 2026-04-08 Pinterest — Performance for Everyone (sources/2026-04-08-pinterest-performance-for-everyone) — canonical wiki instance. PerfImageView / PerfTextView / PerfVideoView as hybrid marker-plus-readiness interfaces; product-engineer-tagged opt-in; consumed by BaseSurface view-tree walk. Extended to iOS and Web.