

Okio (Square I/O buffer library)

Definition

Okio (square.github.io/okio) is a Kotlin/Java I/O library from Square that sits alongside java.io and java.nio as a zero-copy-friendly buffer and stream abstraction. Its core type, Buffer, is a mutable byte sequence backed by a circularly linked list of fixed-size byte-array segments. Segments can be moved or shared between buffers by reference rather than copied, which makes concatenation, prefix/suffix slicing, and stream pipelining allocation-light.
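A minimal sketch of segment-moving concatenation, assuming the Okio dependency is on the classpath; the buffer contents and variable names are illustrative:

```kotlin
import okio.Buffer

fun main() {
    val a = Buffer().writeUtf8("hello, ")
    val b = Buffer().writeUtf8("world")

    // writeAll drains b into a by moving segment pointers where possible,
    // rather than copying byte contents; afterwards b is empty.
    a.writeAll(b)

    println(a.readUtf8())  // hello, world
    println(b.size)        // 0
}
```

Because a Buffer is both a Source and a Sink, the same move-don't-copy path applies when piping a buffer into a response body.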

Okio is the I/O substrate underneath OkHttp, Retrofit's streaming bodies, Moshi's JSON readers, and Square's other Kotlin/Java platform libraries.

Why it matters for low-latency serving

  • Segment sharing = free concatenation. Appending one Buffer to another moves segment pointers, not bytes. For pre-encoded responses (e.g. already-gzipped chunks), two Buffers can be combined into a single response body without a read-and-rewrite cycle.
  • Zero-copy gzip concatenation. The gzip format is self-concatenating: the concatenation of two valid gzip streams is itself a valid gzip stream. Combined with Okio's segment sharing, this means a server can take N individually-gzipped item responses and emit them as one gzipped batch response without ever decompressing or recompressing.
  • Predictable allocation profile. Buffer pools segments internally, so steady-state Okio traffic has low per-request GC pressure — significant on Netty EventLoop threads where GC pauses directly show up as tail latency.
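The gzip self-concatenation property above can be demonstrated with the JDK's own java.util.zip, no Okio required: a spec-compliant gunzip (RFC 1952 allows multi-member streams) decodes the byte-for-byte concatenation of two gzip streams as one stream. The payload strings here are illustrative:

```kotlin
import java.io.ByteArrayInputStream
import java.io.ByteArrayOutputStream
import java.util.zip.GZIPInputStream
import java.util.zip.GZIPOutputStream

// Compress a payload into its own complete gzip member.
fun gzip(data: ByteArray): ByteArray {
    val bos = ByteArrayOutputStream()
    GZIPOutputStream(bos).use { it.write(data) }
    return bos.toByteArray()
}

fun main() {
    // Two independently gzipped "item responses", concatenated byte-for-byte.
    val batch = gzip("item-1;".toByteArray()) + gzip("item-2".toByteArray())

    // GZIPInputStream reads concatenated gzip members end to end.
    val out = GZIPInputStream(ByteArrayInputStream(batch)).readBytes()
    println(String(out))  // item-1;item-2
}
```

This is why a batch server can splice pre-gzipped item responses into one response body: the expensive step (compression) happened once at cache-fill time, and the splice itself is just byte (or, with Okio, segment-pointer) concatenation.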

Seen in

  • sources/2025-03-06-zalando-from-event-driven-chaos-to-a-blazingly-fast-serving-api — Zalando's PRAPI Batch component uses Okio buffers to hold pre-gzipped individual item responses: "Avoid reading individual gzipped responses into memory. Instead, store them in Okio buffers and concatenate them directly in the response object, eliminating unnecessary gunzip/re-gzip operations." This is the canonical implementation of patterns/zero-allocation-cache-payload at the batch-response boundary — paired with ByteArray caching on the single-item path, PRAPI trades object-graph ergonomics for GC-pause elimination on latency-critical NIO threads.