# Compression + compaction CPU cost
## Definition
Compression + compaction CPU cost names the broker-side CPU tax that arises when a topic is both compressed (record data is encoded with ZSTD / LZ4 / gzip / snappy) and compacted (Kafka's log compaction retains only the latest value per key). Compaction requires the broker to inspect record keys — which, for compressed data, means a decompress → read-key → rewrite (with recompression) cycle on every compaction pass.
## The normal case — broker treats records as opaque
In Redpanda's (and Kafka's) common case, the broker never looks inside record payloads. Producer writes a batch → broker appends it to the partition log as an opaque byte sequence → consumer reads and decompresses on its own. CPU for compression lives entirely on the client side.
From the post (Source: sources/2025-04-23-redpanda-need-for-speed-9-tips-to-supercharge-redpanda):
"The compaction process runs in the broker and is actually the only use case where the broker reads message-level details from a topic. Usually, Redpanda treats the data as opaque bytes that need to be sent without reading them in detail."
This is the load-bearing design choice for why client-side compression is preferred — opaque-byte handling is cheap.
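To make the split concrete, here is a minimal sketch of client-side compression using the plain Kafka producer API (which Redpanda speaks natively). The broker address and topic name are placeholders; the point is that `compression.type` is a producer setting, so the broker only ever sees opaque compressed bytes.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public final class ClientSideCompression {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Compression happens here, in the producer; the broker appends the
        // already-compressed batch as opaque bytes and never decodes it.
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "zstd");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("events", "some-key", "some-value")); // topic assumed
        }
    }
}
```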
## Compaction breaks the opaque-byte invariant
Log compaction is Kafka's mechanism for retaining only the latest value per key: when a topic is configured for compaction, older records with the same key are eventually dropped by a background process. The canonical use case is snapshots-as-deltas — e.g. a topic holding the current state of every user as keyed records; on restart, a service can replay just the compacted log rather than the full history.
To decide which records to drop, the compactor must read each record's key — which means it must decode the record format.
## The compression tax
When the topic is compressed, the compactor's read-key path now requires decompression. When it rewrites retained records back into the log, it must recompress. Per the post:
"We usually recommend that compression takes place in clients (see above) for performance reasons, but when compacting, that's no longer an option. This is because both the read and write portions of the compaction process will use additional CPU to decompress and recompress the data."
The broker-side CPU cost grows with:
- Compaction frequency — more passes = more decompress/recompress cycles.
- Codec cost — ZSTD is more expensive than LZ4 per byte; gzip more expensive still.
- Topic size — every retained record passes through decompress + recompress.
Verbatim:
"Combining compression (particularly with CPU-intensive codecs) with compaction can lead to significant CPU utilization. Again, this is a classic trade-off between space utilization and CPU time."
## The operational rules
Redpanda's recommendations:
"Don't compress compacted topics unless you're willing to spend the CPU cycles uncompressing and recompressing."
"Use ZSTD or LZ4 for a good balance between compression ratio and CPU time if compression is essential."
Four practical operating points:
| Situation | Recommendation |
|---|---|
| Compaction required, compression not required | No compression (cheapest) |
| Compaction required, space savings matter | LZ4 (lightest CPU on decompress/recompress) |
| Compaction required, maximum space savings | ZSTD (accept the CPU tax; provision accordingly) |
| Compaction not required | Any codec; prefer ZSTD |
The default should lean toward no compression on compacted topics unless the disk savings demonstrably justify broker CPU headroom — an explicit cost/benefit analysis.
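In practice that default looks like the following sketch: a compacted topic created via the Kafka AdminClient (which works against Redpanda's Kafka-compatible admin API) with compression explicitly off. Topic name, partition count, and replication factor are placeholders.

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public final class CreateCompactedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address

        try (Admin admin = Admin.create(props)) {
            NewTopic topic = new NewTopic("user-state", 12, (short) 3) // name/sizing assumed
                    .configs(Map.of(
                            "cleanup.policy", "compact",
                            // The cheap default for a compacted topic; move to "lz4"
                            // only after the space-vs-CPU analysis above says so.
                            "compression.type", "uncompressed"));
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}
```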
## Why this matters more than it sounds
Common compacted-topic patterns:
- `__consumer_offsets` — Kafka's internal offset-storage topic is compacted. Most operators don't think of `__consumer_offsets` as a performance-sensitive hot path, but under the offset-commit-cost analysis, it is — every consumer commit is a write to it. Compressing `__consumer_offsets` with ZSTD adds a tax on every commit cycle + compaction pass.
- `__transaction_state` — exactly-once transaction state, compacted by design.
- CDC last-write-wins topics — per-row current-state materialisations are compacted so consumers can rebuild state.
- Stream-processor state (Flink, Kafka Streams) — backing topics are typically compacted.
For each: the compression decision compounds with CDC / exactly-once / stream-processing workloads and should be made deliberately, not copied from an application-topic default.
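Making the decision deliberately implies being able to audit it. A sketch using the Kafka AdminClient to flag compacted topics with broker-side compression configured; note the `producer` subtlety in the comment, since that value defers to whatever codec the clients send:

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public final class CompactedTopicAudit {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address

        try (Admin admin = Admin.create(props)) {
            for (String topic : admin.listTopics().names().get()) {
                ConfigResource res = new ConfigResource(ConfigResource.Type.TOPIC, topic);
                Config cfg = admin.describeConfigs(List.of(res)).all().get().get(res);
                String cleanup = cfg.get("cleanup.policy").value();
                String compression = cfg.get("compression.type").value();
                // Flag compacted topics with broker-side compression set. Caveat:
                // "producer" keeps whatever codec clients send, so a compacted topic
                // fed by compressing producers still pays the decompress/recompress tax.
                if (cleanup.contains("compact") && !compression.equals("producer")
                        && !compression.equals("uncompressed")) {
                    System.out.printf("%s: cleanup.policy=%s compression.type=%s%n",
                            topic, cleanup, compression);
                }
            }
        }
    }
}
```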
## Alternative: last-value cache via WASM transforms
Redpanda's escape hatch — maintain a last-value cache outside the compaction pipeline via WASM data transforms, avoiding both compaction and the compression + compaction interaction:
"See our blog on implementing a last value cache using WASM." (Redpanda docs: data transforms)
For the narrow case where compaction is being used purely as a last-value cache (not for replay-log retention), a WASM transform can maintain the projected state externally, eliminating the compaction-CPU cost entirely. Deferred to future wiki canonicalisation.
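Pending that canonicalisation, here is an illustration of the projection logic only: a keyed latest-value map with tombstone handling, written as an ordinary consumer rather than a transform. The real pattern runs inside the broker via Redpanda's data transforms SDK; broker address, group id, and topic name below are placeholders.

```java
import java.time.Duration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;

public final class LastValueCache {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "last-value-cache");        // assumed group
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());

        Map<String, byte[]> latest = new HashMap<>(); // key -> most recent value
        try (KafkaConsumer<String, byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("user-state")); // topic name assumed
            while (true) {
                for (ConsumerRecord<String, byte[]> rec : consumer.poll(Duration.ofMillis(500))) {
                    if (rec.value() == null) latest.remove(rec.key()); // tombstone: key deleted
                    else latest.put(rec.key(), rec.value());
                }
            }
        }
    }
}
```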
## Seen in
- sources/2025-04-23-redpanda-need-for-speed-9-tips-to-supercharge-redpanda — canonical wiki source. "Only use case where the broker reads message-level details" framing; decompress + recompress on every pass; ZSTD / LZ4 codec guidance for compacted topics.
## Related
- systems/kafka, systems/redpanda — Kafka-API compaction behaviour.
- concepts/compression-codec-tradeoff — parent concept; codec choice in the general case.
- concepts/effective-batch-size — batching composes with compression but doesn't help on the compaction side.
- patterns/client-side-compression-over-broker-compression — the pattern that breaks down for compacted topics because the broker must now participate.