
Figma · Tier-3-equivalent


Redefining Impact as a Data Scientist (Figma, 2026-04-21)

Summary

Figma Engineering post (Data Science author, writing on behalf of the team supporting Billing infrastructure) reframing "data-science impact" in a correctness-heavy domain. Most of the article is DS-role/culture framing — pie charts comparing the "traditional DS" work mix vs. what Billing DS actually does, a reflection on how full-stack DS fits a backend-adjacent product surface. The architectural content is narrow but real: two durable in-house tools — the Invoice Seat Report (a data application reconstructing seat-charge narratives across billing-system events) and consistency checkers (SQL-based invariant tests routing structured alerts, run in both dev and prod) — introduced as the highest-leverage outputs of DS in a correctness-first domain. Consistency checkers are explicitly linked to Slack Engineering's data-consistency-checks post as prior art and reported as having spread beyond Billing to product security, access + identity management, and growth teams at Figma. No architectural diagrams, no scale numbers (policy count, invariant count, alert volume, false-positive rate all undisclosed). Ingested narrowly: one pattern page (consistency-checkers), source-page mention of the Invoice Seat Report, Figma company-page update.
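The core move of the Invoice Seat Report, as the post describes it, is folding seat-lifecycle events from several systems into one chronological, plain-language explanation per seat charge. A minimal sketch of that shape, assuming invented event names, fields, and the `SeatEvent` type (the post shows no code or schema):

```python
# Hypothetical sketch: merge cross-system seat-lifecycle events into one
# plain-language narrative, as the Invoice Seat Report is described doing.
# All names, fields, and event kinds here are invented for illustration.
from dataclasses import dataclass
from datetime import date

@dataclass
class SeatEvent:
    at: date      # when the transition happened
    source: str   # which system recorded it (product log, billing, CRM, ...)
    kind: str     # e.g. "assigned", "upgraded", "removed"
    detail: str   # human-readable specifics of the transition

def seat_narrative(seat_id: str, events: list[SeatEvent]) -> str:
    """Sort events from all systems by time and render one explanation."""
    lines = [f"Seat {seat_id}:"]
    for ev in sorted(events, key=lambda e: e.at):
        lines.append(f"  {ev.at.isoformat()} [{ev.source}] {ev.kind}: {ev.detail}")
    return "\n".join(lines)

events = [
    SeatEvent(date(2025, 3, 1), "product-log", "assigned", "user added to team"),
    SeatEvent(date(2025, 3, 9), "billing", "upgraded",
              "moved to full seat; contract price adjustment applied"),
]
print(seat_narrative("seat-123", events))
```

The real report pulls product-usage events, contract metadata, billing rules, and past state transitions from systems with different schemas; this sketch only shows the final merge-and-narrate step, not the upstream unification work the post emphasises.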

Key takeaways

  1. Consistency checkers = SQL-based invariant tests as a testing strategy. Figma's framework compares expected system state against the data recorded across multiple sources of truth on a pre-defined cadence. The article names two categories:

     • Data-quality checks: validate whether stored data accurately reflects real product state (e.g. is the seat row in the billing table consistent with the seat-usage event log?).

     • Code-logic checks: detect when application behaviour diverges from defined billing rules (e.g. did a seat upgrade actually apply the contracted price adjustment?).

     Both flavours are SQL queries against unified data (product logs + billing state + payment-processing state + CRM), each targeted and explainable; on violation they route structured alerts, carrying the exact rows and metadata needed, to the relevant engineering team. Runs in both development and production environments: the same invariant code surfaces regressions pre-deploy and drift post-deploy. Positioned as "more thorough than E2E testing and UAT" for correctness of stateful workflows where failures can be subtle and costly. First wiki instance; see patterns/consistency-checkers.

  2. Cross-company precedent: Slack Engineering. The article links to Slack's data-consistency-checks post as prior art ("other companies also leverage them as part of their testing strategy"). This validates consistency checkers as a genuine cross-company pattern, not a Figma-specific tool; Slack content is not yet ingested into this wiki (Slack is a Tier-2 source with no articles ingested to date).

  3. Adopted beyond the originating team at Figma. After the Billing team built the framework to support the 2025 billing-model refresh, "the framework has been adopted beyond the billing team, powering data-integrity and code-logic checks across product security, access and identity management, and other growth teams". Named concrete reuse: connected projects use consistency checkers to confirm that sharing and access settings behave as expected. Captures the generalisation arc: bespoke billing tool → platform-wide correctness substrate.

  4. Invoice Seat Report = data-application productization. Figma's billing model has users occupying seats whose lifecycle (assignments / removals / permission changes / contract terms / edge-case adjustments) spans several systems with different schemas. What appears as a simple invoice line item can represent a long chain of cross-system interactions. The Invoice Seat Report is an internal data application that reconstructs the full narrative behind each seat charge by pulling product-usage events + contract metadata + billing rules + past state transitions and compiling them into a plain-language explanation. Originally a bespoke DS analysis, it is now "one of the most-viewed data applications at Figma", used by Support, Order Management, and enterprise specialists (external customer explanations), and by Billing + monetisation engineers (internal debugging of unexpected system behaviour). See patterns/data-application-productization.

  5. Architectural pre-work: unifying data across fragmented sources. Building the Invoice Seat Report required "wrangling fragmented data and developing a shared mental model of how seat state evolves across systems": validating assumptions with engineers, cleaning inconsistencies in historical data, and requesting new instrumentation where gaps existed. In several cases, "existing logs captured what happened but not why", so DS advocated for future events to reflect the underlying mechanics more faithfully. Generalises a recurring wiki pattern: derived reporting reveals instrumentation debt in upstream services; the data product can't be built until upstream emits the right signals. Sibling framing to Airbnb's alert-platform observation that "own the full surface area — partial ownership creates leaky abstractions" (sources/2026-03-04-airbnb-alert-backtesting-change-reports).

  6. Translating billing rules into analysis requires careful SQL transformation. Seat pricing is determined by usage events + contractual terms + upgrade paths + workspace state + exact transition timing. Edge cases: legacy multiyear contracts with sparse seat history; early upgrades creating blind spots in the data. Required "careful translation of rules into SQL and transformations that could be traced and debugged". The consistency-checker shape becomes viable because DS owns this translation, and the SQL becomes the canonical encoding of "what correct looks like" that can then be run continuously.

  7. Scope-framing claim: "data science isn't just about analysis". The article's closing meta-lesson: in correctness-critical domains, DS's highest-leverage output is often small durable applications that embed shared logic directly into the way teams operate, rather than analyses / presentations / A/B tests. Both the Invoice Seat Report and the consistency checkers emerged as bespoke analyses that were productized once multiple teams depended on the same logic (patterns/data-application-productization).
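The article shows no example invariants, so as a hedged sketch only: a minimal checker runner that executes one SQL invariant against a unified store and routes the exact violating rows to the owning team. The `Checker` shape, the schema, the `ORPHANED_BILLED_SEATS` invariant, and the alert callable are all hypothetical; the post discloses nothing about orchestration, routing, or the warehouse.

```python
# Hedged sketch of a SQL-invariant consistency checker. Every name and
# table here is invented; sqlite3 stands in for whatever warehouse hosts
# the unified product/billing/CRM data.
import sqlite3
from dataclasses import dataclass
from typing import Callable

@dataclass
class Checker:
    name: str   # stable identifier for the invariant
    owner: str  # engineering team that receives violations
    sql: str    # query returning ONLY the rows that violate the invariant

# Example data-quality invariant: every active seat in billing state must
# still appear in the product-side seat-usage log.
ORPHANED_BILLED_SEATS = Checker(
    name="orphaned_billed_seats",
    owner="billing-eng",
    sql="""
        SELECT b.seat_id, b.plan
        FROM billing_seats b
        LEFT JOIN seat_usage_log u ON u.seat_id = b.seat_id
        WHERE b.status = 'active' AND u.seat_id IS NULL
    """,
)

def run_checker(conn: sqlite3.Connection, checker: Checker,
                alert: Callable[[str, str, list], None]) -> bool:
    """Run one invariant; on violation, route the offending rows to the owner."""
    rows = conn.execute(checker.sql).fetchall()
    if rows:
        # In prod this might be a Slack webhook or pager; undisclosed in the post.
        alert(checker.owner, checker.name, rows)
    return not rows
```

The same invariant code would run pre-deploy against development data and on a scheduled cadence in production, matching the dev + prod framing above; the actual cadence and alert transport are undisclosed.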

Caveats / undisclosed

  • No numbers: no count of consistency checkers running, no alert volume, no false-positive rate, no invariant-cadence disclosure, no rollout timeline for the billing-model refresh, no coverage percentage, no Invoice Seat Report query rate / user count / latency.
  • No architectural detail on the checker framework itself: what orchestrates the SQL runs (Airflow? custom scheduler?), how the alerts are routed (email? PagerDuty? Slack webhook?), what data warehouse hosts the unified view (Snowflake? BigQuery? Databricks?), how the cross-system schemas are joined, how the dev-environment variant is plumbed — all absent.
  • No example SQL invariants shown — description is abstract.
  • Heavy DS-role-framing overlay: ~60% of the article is traditional-vs-actual-DS-work pie-chart commentary, not architectural content. Passes the scope filter on the 40% that describes the two tools and on the Slack-cited cross-company-pattern angle.
  • Figma is not in AGENTS.md's formal Tier 1/2/3 lists — treated as Tier-3-equivalent with Tier-3 selectivity; this article sits at the edge of the filter and is ingested narrowly (one pattern page) rather than given the full 10-20-file treatment reserved for distributed-systems-internals posts like LiveGraph 100x or FigCache.

Connections

Source
