CONCEPT
S-curve limits¶
Definition¶
S-curve limits is the recurring framing in the Redpanda convergence-of-AI-and-data-streaming series: every growth axis in frontier AI (data volume, parameter count, training cost, capability uplift) is an S-curve rather than an unbounded exponential, and each is heading into the diminishing-returns region (the "top" of the S-curve).
Verbatim framing from Peter Corless (Redpanda, 2026-01-13):
"I believe in S-curves. So expect to hit some level of diminishing returns regarding raw data generation as well."
"this isn't a hard brick wall, it'll be governed by the Law of Diminishing Returns. At some point, we'll reach a state where, even if we could technically continue to grow a model, it might be simply infeasible economically, as well as producing a computationally negligible return on investment."
(Source: sources/2026-01-13-redpanda-the-convergence-of-ai-and-data-streaming-part-1-the-coming-brick-walls)
Canonical S-curves named in the post¶
- LLM public-training-data S-curve — see concepts/llm-training-data-exhaustion.
- Training cost S-curve — ~260% annual growth today, projected to exceed $1B per frontier model by 2027 (Epoch AI); the Law of Diminishing Returns bounds the top.
- Data-centre energy consumption S-curve — projected 2× by 2030 (Nature, April 2025). Grid capacity is the physical bound.
- Raw-data-generation S-curve — global data production (180 ZB generated / 200 ZB stored in 2025, 78% CAGR); Corless flags this too as an S-curve despite its current steep slope.
- Capability uplift S-curve — GPT-5.1 measurably worse than GPT-5.0 on some evaluations (cross-references concepts/llm-model-drift).
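The shared shape behind all of these is the logistic function: growth that looks exponential on the way up, then flattens as it approaches a ceiling. A minimal sketch of that claim (generic illustration, not derived from the post; all parameters are arbitrary):

```python
import math

def logistic(t, cap=1.0, rate=1.0, midpoint=0.0):
    """Logistic S-curve: near-exponential below the midpoint,
    asymptotically approaching `cap` above it."""
    return cap / (1.0 + math.exp(-rate * (t - midpoint)))

# Increment gained per unit step along the curve (midpoint at t=5).
gains = [logistic(t + 1, midpoint=5) - logistic(t, midpoint=5)
         for t in range(10)]
# Marginal gains grow toward the midpoint, then shrink:
# the Law of Diminishing Returns, expressed as a derivative
# that peaks at the inflection point and decays afterwards.
```

Whether any real axis (data, cost, energy, capability) follows this curve, and where its midpoint sits, is exactly the estimate-dependent question flagged in the caveats below.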
Why this page exists¶
The wiki canonicalises the S-curve framing as a meta-claim the Redpanda series rests on: every named brick wall (data exhaustion, training-cost growth, batch-training boundary) is a specific instance of the more general claim that growth curves flatten.
Caveats¶
- Stub. This page is a minimal framing anchor; the economic-modelling depth is not walked through here.
- S-curve framing is a working hypothesis, not a proof. Whether any specific growth axis is near its top is estimate-dependent. Epoch AI's projections are assumption-dependent.
- Industry-commentary altitude. The framing is rhetorical in the Corless post; the wiki captures it as a cross-cutting theme rather than a formal claim.
Seen in¶
- 2026-01-13 Redpanda — The convergence of AI and data streaming, Part 1 (sources/2026-01-13-redpanda-the-convergence-of-ai-and-data-streaming-part-1-the-coming-brick-walls) — canonical: S-curves as the meta-framing for the three brick walls.
Related¶
- concepts/llm-training-data-exhaustion — a specific S-curve instance.
- concepts/llm-model-drift — capability regressions as evidence of flattening.
- concepts/frontier-model-batch-training-boundary — the architectural shape Corless argues streaming will unlock past the current S-curve.
- systems/transformer — the architecture primitive under all the named S-curves.
- companies/redpanda — the company whose blog series canonicalises this framing.