
OpenMessaging Benchmark (OMB)

Overview

OpenMessaging Benchmark (OMB) is an open-source framework for benchmarking distributed messaging and streaming systems with a standard, driver-abstracted workload model. OMB decouples workloads (produce rate, partition count, consumer fan-out, message size, warm-up duration) from drivers (Kafka, Redpanda, Pulsar, RabbitMQ, etc.), so the same workload YAML can be applied across brokers for apples-to-apples comparison.

OMB is maintained as an open-source project with per-broker drivers and vendor forks; the Redpanda fork on GitHub is the canonical Redpanda-oriented variant.

Canonical positioning on the wiki

Redpanda's 2025-02-11 multi-region stretch-cluster post uses OMB as the simulation substrate for multi-region performance testing:

"To performance test a multi-region Redpanda cluster and avoid the expenses associated with it, you might want to skip setting up an actual multi-region cluster. Alternatively, you can use our OpenMessaging Benchmark (OMB) repo on GitHub, which allows you to benchmark any Redpanda cluster."

(Source: sources/2025-02-11-redpanda-high-availability-deployment-multi-region-stretch-clusters)

Combined with Linux tc (traffic control) latency injection on the inter-broker links, OMB is the Redpanda team's default technique for characterising stretch-cluster performance without paying cloud cross-region bandwidth costs during testing.
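The latency-injection side of this technique uses the kernel's netem qdisc. A minimal sketch, assuming a single-NIC broker where `eth0` carries inter-broker traffic and a 25 ms one-way delay approximates the cross-region RTT contribution (interface name and delay value are illustrative, not from the source):

```shell
# Inject a 25 ms one-way delay on all egress traffic from this broker.
# (A real stretch-cluster simulation would typically scope this to the
# peer brokers' IPs with tc filters rather than delaying everything.)
sudo tc qdisc add dev eth0 root netem delay 25ms

# Inspect the active qdisc to confirm the delay is in place.
tc qdisc show dev eth0

# Remove the emulated latency when the benchmark run is finished.
sudo tc qdisc del dev eth0 root
```

Applying this on each broker before an OMB run yields stretch-cluster-like latencies on single-region (or even single-host) deployments.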

Driver and workload YAML surface

From the Redpanda post's worked example:

Driver config (Redpanda-specific driver class, cluster connection, producer/consumer settings):

name: Redpanda+3xi3en.xlarge
driverClass: io.openmessaging.benchmark.driver.redpanda.RedpandaBenchmarkDriver
replicationFactor: 3
reset: true
topicConfig: |
commonConfig: |
  bootstrap.servers=<ip1>:9092,<ip2>:9092,<ip3>:9092
  request.timeout.ms=300000
producerConfig: |
  acks=all
  linger.ms=1
  batch.size=131072
consumerConfig: |
  auto.offset.reset=earliest
  enable.auto.commit=false

Workload config (workload shape — topics, partitions, rate, fan-out, message size, durations):

name: 50MB/s rate; 4 producers 4 consumers; 1 topic 144 partitions
topics: 1
partitionsPerTopic: 144
messageSize: 1024
useRandomizedPayloads: true
randomBytesRatio: 0.5
randomizedPayloadPoolSize: 1000
subscriptionsPerTopic: 1
producersPerTopic: 4
consumerPerSubscription: 4
producerRate: 50000
consumerBacklogSizeGB: 0
testDurationMinutes: 5
warmupDurationMinutes: 5
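With both files in hand, a run pairs one driver file with one or more workload files via OMB's standard entry point. A hedged sketch, assuming the stock OMB repo layout (the workload filename here is a placeholder for wherever you saved the YAML above):

```shell
# Run the workload against the Redpanda cluster described by the driver file.
# --drivers takes the driver YAML; workload YAMLs follow as positional args.
bin/benchmark \
  --drivers driver-redpanda/redpanda.yaml \
  workloads/50mb-4p-4c-1topic-144p.yaml
```

Results land as JSON files in the working directory, one per driver/workload pair.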

The separation matters: the workload YAML is portable across brokers. Swap the driver class + connection config to benchmark Kafka or Pulsar under the same workload.
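To make the swap concrete, a sketch of the Kafka counterpart driver file reusing the same connection and client settings (the driver class is the stock OMB Kafka driver; all values are illustrative, not from the source post):

```yaml
name: Kafka+3xi3en.xlarge
driverClass: io.openmessaging.benchmark.driver.kafka.KafkaBenchmarkDriver
replicationFactor: 3
reset: true
topicConfig: |
commonConfig: |
  bootstrap.servers=<ip1>:9092,<ip2>:9092,<ip3>:9092
  request.timeout.ms=300000
producerConfig: |
  acks=all
  linger.ms=1
  batch.size=131072
consumerConfig: |
  auto.offset.reset=earliest
  enable.auto.commit=false
```

The workload YAML above runs unchanged against this driver, which is exactly the apples-to-apples property OMB is built for.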
