CONCEPT Cited by 3 sources
# Stateless Compute
Stateless compute is a contract in which the execution environment keeps no durable state across invocations — any persistence lives in a separate managed store. The execution side is therefore freely replicable, killable, and relocatable.
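The contract can be sketched in a few lines (illustrative only, not any provider's API; `ExternalStore` is a stand-in for a managed store such as DynamoDB):

```python
# Sketch of the stateless-compute contract: the handler holds no state
# between invocations; every durable read/write goes to an external store.

class ExternalStore:
    """Stand-in for a managed store (S3, DynamoDB, ...)."""
    def __init__(self):
        self._data = {}

    def get(self, key, default=0):
        return self._data.get(key, default)

    def put(self, key, value):
        self._data[key] = value

def handler(event, store):
    # All durable state lives in `store`; the function is a pure
    # event -> effect mapping, so any replica can serve any request.
    count = store.get(event["user"]) + 1
    store.put(event["user"], count)
    return {"user": event["user"], "count": count}

store = ExternalStore()
r1 = handler({"user": "alice"}, store)
r2 = handler({"user": "alice"}, store)  # could run on a different replica
```

Because neither invocation leaves anything behind in the execution environment, the provider can kill the first replica after `r1` and route `r2` anywhere; the count still comes back as 2.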
## Why it underwrites serverless
Lambda's 2014 PR/FAQ is explicit: "Applications hosted by Lambda are stateless; persistent state should be stored in Amazon S3, Amazon DynamoDB, or another Internet-available storage service. Inbound network connections are managed by Lambda. For security reasons, some low-level system calls are restricted, but language features, and most libraries, function normally. Local file system access is intended as a temporary scratch space and is deleted between invocations. These restrictions enable Lambda to launch and scale applications on behalf of developers by ensuring that their code can run on Lambda infrastructure and that the service can launch as many copies of their application as needed to scale to the incoming request rate."
(Source: sources/2024-11-15-allthingsdistributed-aws-lambda-prfaq-after-10-years)
The tell: stateless-by-contract is what makes the provider free to kill, relocate, and parallelise execution without user-visible consequences. That freedom is the lever behind concepts/scale-to-zero and the placement engine that makes concepts/fine-grained-billing economically viable.
## What this pushes out of the compute tier
- Durable state → managed storage (S3, DynamoDB, RDS, external service APIs).
- Session affinity → stateless auth (IAM / Cognito-style tokens in the 2014 PR/FAQ), not in-memory sessions.
- Persistent connections → short-lived outbound requests, or connection pools held in a front-tier, not in the function.
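The session-affinity point can be made concrete with a signed stateless token (a generic HMAC sketch, not AWS's actual token format; `SECRET` is a hypothetical shared signing key that a real system would hold in a key service):

```python
import base64
import hashlib
import hmac
import json

SECRET = b"shared-signing-key"  # hypothetical; in practice, a managed key

def issue_token(claims: dict) -> str:
    # Claims travel with the request, so no replica needs session memory.
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify_token(token: str) -> dict:
    # Any replica holding the signing key can verify; there is no
    # "session server" whose loss would invalidate in-flight users.
    body, sig = token.split(".")
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    return json.loads(base64.urlsafe_b64decode(body))
```

The token, not the instance, carries the session: that is what lets the placement engine treat every replica as interchangeable.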
## Trade-offs and gotchas
- Workloads needing long-lived in-memory state (e.g., large model weights, precomputed indexes, DB connection pools) pay repeated initialisation cost unless mitigated by snapshots / reuse / provisioned concurrency. See concepts/cold-start.
- The 2014 PR/FAQ explicitly routes lift-and-shift / stateful workloads to Beanstalk/EC2 rather than Lambda.
## Seen in
- sources/2024-11-15-allthingsdistributed-aws-lambda-prfaq-after-10-years — stateless-by-contract as an enabling constraint for Lambda's placement / scale model.
- sources/2026-01-13-databricks-open-sourcing-dicer-auto-sharder — appears as the motivating foil for systems/dicer: its "Hidden Costs of Stateless Architectures" section argues that stateless compute plus a remote cache pays a per-request network tax, burns CPU on (de)serialization, and wastes bandwidth on "overread" (fetching whole objects to use a fraction of them). Dicer frames concepts/dynamic-sharding as the third option: keep state in memory without static sharding's failure modes. The conclusion is not "stateless is wrong" but "stateless + remote cache is the wrong default when the alternative to static sharding is an auto-sharder."
- sources/2026-04-20-databricks-take-control-customer-managed-keys-for-lakebase-postgres — systems/lakebase's Postgres compute VMs are a canonical stateless-compute tier (persistent state lives in Pageserver/Safekeeper; compute scales to zero), and the post surfaces the scratch-state problem that stateless-by-contract doesn't solve on its own. The answer: patterns/per-boot-ephemeral-key encrypts VM-local state under a key that dies with the instance, making the statelessness cryptographic as well as architectural.
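The per-boot-ephemeral-key pattern can be sketched as follows. This is an illustration of the key lifecycle only, not Lakebase's implementation: the SHA-256 keystream XOR below is toy crypto for self-containment, where a real system would use an authenticated cipher such as AES-GCM.

```python
import hashlib
import os

class PerBootKey:
    """Sketch of patterns/per-boot-ephemeral-key: the key is generated at
    boot, held only in memory, and never persisted, so VM-local scratch
    state becomes unreadable the moment the instance dies."""

    def __init__(self):
        self._key = os.urandom(32)  # dies with the process/VM

    def _keystream(self, n: int, nonce: bytes) -> bytes:
        # Toy counter-mode keystream; real code would use AES-GCM.
        out = b""
        counter = 0
        while len(out) < n:
            out += hashlib.sha256(
                self._key + nonce + counter.to_bytes(8, "big")
            ).digest()
            counter += 1
        return out[:n]

    def seal(self, plaintext: bytes) -> bytes:
        nonce = os.urandom(16)
        ks = self._keystream(len(plaintext), nonce)
        return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

    def open(self, blob: bytes) -> bytes:
        nonce, ct = blob[:16], blob[16:]
        ks = self._keystream(len(ct), nonce)
        return bytes(a ^ b for a, b in zip(ct, ks))
```

Scratch files sealed under one boot's key are garbage to the next boot: a fresh `PerBootKey()` (a reboot, in this sketch) cannot open old blobs, which is exactly the property that turns architectural statelessness into a cryptographic guarantee.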