CONCEPT

Low-end device inclusion

Definition

Low-end device inclusion is the product-level constraint that engineering choices — codec, ML model, rendering pipeline — must serve users on older, slower, cheaper devices, not just flagship handsets. For a billions-of-users RTC product like WhatsApp, the low-end population is a first-class audience, not a rounding error. This constraint frequently rules out otherwise-attractive ML-based solutions on complexity grounds (Source: sources/2024-06-13-meta-mlow-metas-low-bitrate-audio-codec).

The specific numbers (Meta 2024)

Meta's MLow announcement is unusually explicit about the size of the low-end population they design for:

  • "More than 20 percent of our calls are made on ARMv7 devices."
  • "Tens of millions of daily calls on WhatsApp are on 10-year-old-plus devices."

These numbers are load-bearing on the decision to build a classic-DSP codec rather than ship Meta's own Encodec ML codec. The argument is not about absolute capability — Encodec is better — but about addressable user population: a codec that only runs on expensive phones helps only expensive-phone users.

The Meta quote

"While these AI/ML-based codecs are able to achieve great quality at low bitrates, it often comes at the expense of heavy computational cost. Consequently, only the very high-end (expensive) mobile handsets are able to run these codecs reliably, while users running on lower-end devices continue to experience audio quality issues in low-bitrate conditions. So the net impact of these newer computationally expensive codecs is actually limited to a small portion of users."

The operative phrase — "net impact… limited to a small portion of users" — is the reusable constraint. If an intervention helps the 15% of users on high-end devices but cannot run for the other 85%, its net impact is bounded by that 15%, no matter how large the per-user gain.
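This bound can be made concrete as a population-weighted expected improvement. The sketch below uses hypothetical numbers (the 15%/85% split and the quality deltas are illustrative, not from Meta's post) to show why a universally-runnable codec with a modest per-user gain can beat a high-gain codec that only a small segment can run:

```python
def net_impact(segments):
    """Population-weighted quality delta across device segments.

    segments: list of (population_share, quality_delta) pairs, where
    quality_delta is the per-user improvement on that segment
    (0.0 if the codec cannot run there).
    """
    return sum(share * delta for share, delta in segments)

# Hypothetical: a compute-heavy ML codec with a large per-user gain,
# but only the ~15% of high-end devices can run it reliably.
ml_codec = [(0.15, 1.0), (0.85, 0.0)]

# Hypothetical: a classical codec with a smaller per-user gain that
# runs everywhere, including old ARMv7 handsets.
classic_codec = [(0.15, 0.4), (0.85, 0.4)]

print(net_impact(ml_codec))       # 0.15
print(round(net_impact(classic_codec), 2))  # 0.4
```

Under these (made-up) deltas, the classical codec's net impact is more than double the ML codec's, despite being worse per user — which is the shape of the argument Meta makes for MLow over Encodec.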

Recurs across this wiki

The low-end-device-inclusion pattern echoes across other compute-constrained production decisions on this wiki.

What the constraint doesn't imply

MLow is not "dumber" than Encodec — it's classical rather than learned. Meta put 2.5 years of development into MLow to get the quality-per-CPU-cycle that hits the low-end target. The pattern is not "use simpler things"; it's "pay the engineering cost of the compute-constrained solution when your install base demands it."
