Managed Compression (Meta)¶
Managed Compression is Meta's runtime service for operating compression at fleet scale. It was originally built "to automate dictionary compression with Zstandard" (Source: sources/2025-10-06-meta-openzl-an-open-source-format-aware-compression-framework, referencing an earlier 2018 engineering.fb.com zstd post not yet ingested on the wiki). In 2025 it was extended to serve OpenZL Plans, making it the operational substrate for format-aware compression at Meta.
Wiki role¶
Stub. The 2018 source describing Managed Compression architecturally is not yet ingested on the wiki. This page anchors the system name + the lifecycle role it plays for OpenZL in the 2025 launch post, and will be deepened when the 2018 post (or any later Managed Compression post) is ingested.
What it does¶
Described by Meta in the OpenZL post (Source: sources/2025-10-06-meta-openzl-an-open-source-format-aware-compression-framework):
"Each registered use case is monitored, sampled, periodically re-trained, and receives new configs when they prove beneficial. The decompression side continues to decode both old and new data without any change."
That is a single loop: register → monitor → sample → re-train → evaluate → roll out. It contributes four properties:
- Fleet-scale monitoring of registered use cases. Managed Compression tracks which datasets / workloads use which configs.
- Periodic re-sampling. Live data is pulled off the fleet as training input — the configuration stays matched to the actual data distribution, not to the data distribution at integration time.
- Automated re-training. Ingested samples feed a trainer run (OpenZL trainer in the 2025 regime; zstd dictionary-training in the 2018 regime) that produces an updated config — a Plan for OpenZL, a dictionary for zstd.
- Safe rollout + backward compatibility. New configs roll out "like any other config change" — old frames continue to decode unchanged (the monoversion-decoder property). No format-version coordination between producers and consumers.
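The loop above can be sketched as a minimal, self-contained toy. This is purely illustrative: Meta has not published Managed Compression's API, so every name here (`UseCase`, `sample`, `retrain`, `rollout`, the callbacks) is a hypothetical stand-in for the register/monitor/sample/re-train/evaluate/roll-out stages described in the quote.

```python
import random

# Hypothetical sketch of the Managed Compression lifecycle loop.
# None of these names come from Meta; they only mirror the stages
# named in the 2025-10-06 OpenZL post.

class UseCase:
    """A registered use case: a dataset/workload plus its current config."""
    def __init__(self, name, config):
        self.name = name
        self.config = config   # a Plan (OpenZL regime) or dictionary (zstd regime)
        self.samples = []

def sample(use_case, live_records, k):
    # Periodic re-sampling: pull live data off the fleet as training input,
    # so the config tracks the actual data distribution.
    use_case.samples = random.sample(live_records, k)

def retrain(use_case, train):
    # Automated re-training: fresh samples feed a trainer run that
    # produces a candidate config.
    return train(use_case.samples)

def rollout(use_case, candidate, better_than):
    # Safe rollout: ship the new config only when it proves beneficial.
    # Old frames keep decoding unchanged (the monoversion-decoder property),
    # so this is "like any other config change".
    if better_than(candidate, use_case.config):
        use_case.config = candidate

# Toy run: "training" just scores by sample count, "better" means higher score.
uc = UseCase("logs", config={"score": 0})
sample(uc, live_records=["r1", "r2", "r3", "r4"], k=3)
candidate = retrain(uc, train=lambda s: {"score": len(s)})
rollout(uc, candidate, better_than=lambda a, b: a["score"] > b["score"])
print(uc.config["score"])  # → 3
```

The real evaluation gate (how "beneficial" is measured, A/B harness, hold-vs-roll-forward decisions) is not described in the ingested source; the `better_than` callback is where all of that would live.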
Why OpenZL + Managed Compression compose well¶
The OpenZL post is explicit about the synergy:
"The synergy with Managed Compression is apparent: Each registered use case is monitored, sampled, periodically re-trained, and receives new configs when they prove beneficial."
OpenZL exports the exact knobs Managed Compression needs: Plans are first-class config objects, the encoder is Plan-agnostic (it resolves whatever Plan it's handed), and the decoder doesn't care which Plan a frame was compressed with (the Resolved Graph travels in-frame). This maps cleanly onto Managed Compression's rollout model, which was already designed for zstd dictionaries — another per-use-case config object.
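The decoder-side property can be illustrated with a toy frame format. This is not OpenZL's actual wire format (the real Resolved Graph encoding is specified by OpenZL, not sketched here); the point is only the shape of the contract: the compressor embeds its config in the frame, so the decoder never needs to know which config produced any given frame and producers can roll configs forward freely.

```python
import json
import zlib

# Illustrative only: a toy frame that carries its own config, assuming a
# simple [4-byte header length][JSON config][zlib payload] layout.

def compress(data: bytes, config: dict) -> bytes:
    header = json.dumps(config).encode()
    payload = zlib.compress(data, config["level"])
    return len(header).to_bytes(4, "big") + header + payload

def decompress(frame: bytes) -> bytes:
    hlen = int.from_bytes(frame[:4], "big")
    # The config travels in-frame, so no producer/consumer version
    # coordination is needed; the decoder recovers it per frame.
    _config = json.loads(frame[4:4 + hlen])
    return zlib.decompress(frame[4 + hlen:])

# One decoder handles frames from an old config and a new one alike.
old_frame = compress(b"hello", {"level": 1})
new_frame = compress(b"hello", {"level": 9})
assert decompress(old_frame) == decompress(new_frame) == b"hello"
```

In this sketch the `{"level": …}` dict plays the role of the per-use-case config object (a zstd dictionary in 2018, an OpenZL Plan in 2025); swapping it changes how frames are produced but never how they are consumed.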
Scope + caveats¶
- No operational numbers disclosed in the ingested source. No fleet size, registered-use-case count, bytes-compressed-per-day, re-training cadence, or evaluation-gate details are present in the 2025-10-06 OpenZL post.
- Internals not described. How Managed Compression samples, how it evaluates candidate configs against current ones, what the A/B harness looks like, how it decides to roll forward vs. hold — none of this is in the 2025 source. Deepening this page requires the 2018 "Managed Compression with Zstandard" post or a successor.
Seen in¶
- sources/2025-10-06-meta-openzl-an-open-source-format-aware-compression-framework — Managed Compression is named as the runtime home for OpenZL Plan lifecycle at Meta.
Related¶
- systems/zstandard-zstd — the 2018 original integration target (zstd dictionaries).
- systems/openzl — the 2025 extension (OpenZL Plans).
- concepts/compression-plan — the OpenZL-era unit of configuration.
- patterns/offline-train-online-resolve-compression — the architectural shape the Managed Compression loop enforces.
- patterns/graceful-upgrade-via-monoversion-decoder — the rollout safety property Managed Compression depends on.
- companies/meta