PlanetScale — Serverless Laravel applications with AWS Lambda and PlanetScale¶
Summary¶
Matthieu Napoli (creator of Bref, writing on the PlanetScale blog, 2023-05-03) walks through deploying a Laravel PHP application to AWS Lambda via Bref with PlanetScale MySQL as the backing database, then runs an `ab -c 50 -n 10000` load test to surface both the baseline and the optimised-with-Laravel-Octane performance profiles. The baseline (shared-nothing PHP, new DB connection per request) holds p50 75 ms / p95 130 ms at 3,800 req/min, with 180 queries/s against PlanetScale and a 0.3 ms median PlanetScale query time; switching to Laravel Octane with `OCTANE_PERSIST_DATABASE_SESSIONS=1` pulls the same p50 to 14 ms (5.4×) and p95 to 35 ms (3.7×). The delta is almost entirely TLS-handshake + PHP-bootstrap cost that gets amortised across requests once the process and connection persist. The cold-start tail is ~1 s but impacts <1% of requests in the first minute (the first ~50 cold-start invocations out of 3,800+).
The post is load-bearing for the wiki because it's the first canonical benchmark of the shared-nothing PHP request model under serverless, and the first canonical demonstration that a persistent-process framework (Octane) on top of Lambda recovers most of the per-request overhead without abandoning the serverless autoscale guarantee. It pairs with the 2022 One million connections benchmark (which establishes the proxy-tier architectural fix) to give the wiki both sides of the serverless-database-connection problem: the client-side persistent-process fix (this post) and the server-side proxy-pool fix (the 2022 post).
Secondary content: a MySQL SSL-cert-path gotcha under Bref (`MYSQL_ATTR_SSL_CA=/opt/bref/ssl/cert.pem`, distinct from the local-machine path), and the `@primary` `DB_DATABASE` routing hint for sharded PlanetScale keyspaces — PlanetScale-specific Vitess `@primary` target syntax surfacing in a Laravel `.env`.
Key takeaways¶
- Baseline PHP-on-Lambda performance against PlanetScale: p50 75 ms, p95 130 ms, 3,800 req/min from cold start in under a minute. The `ab -c 50 -n 10000` load test (50 parallel requests, 10,000 total) against a freshly deployed Laravel app on Lambda + PlanetScale hits: "Laravel scaled instantly from zero to 3,800 HTTP requests/minute… 100% of HTTP requests were handled successfully… The median PHP execution time (p50) for each HTTP request is 75ms… 95% of requests (p95) are processed in less than 130ms… PlanetScale processed up to 180 queries/s… The median PlanetScale query execution time is 0.3ms." Latency is measured as AWS Lambda duration, not HTTP RTT — it deliberately excludes the network to give reproducible, comparable numbers; real-user HTTP latency would be higher.
- Cold starts are ~1 s but impact <1% of requests in the first minute under instant-scale fan-out. "The load test was performed against a freshly deployed application. That means the first requests were cold starts: New AWS Lambda instances started and scaled up to handle the incoming traffic. The cold starts usually have a much slower execution time (one second instead of 75ms). However, we do not see them in the p50 or p95 metrics because they only impacted 1% of the requests in the first minute. After the first 50 requests (cold starts), all the other requests were warm invocations." Canonical wiki datum for cold-start tail-weight under burst: with a 50-concurrent client and ~3,800 req/min throughput, the per-Lambda-instance initialisation cost drowns in warm-invocation throughput within tens of seconds. See concepts/cold-start.
- Dropping concurrency from 50 parallel requests to 1 doesn't change per-request PHP execution time. "After a few minutes, I dropped the traffic from 50 requests in parallel to one. The PHP execution time stayed identical. This illustrates that the load did not impact the response time." Canonical wiki demonstration that Lambda's per-invocation isolation makes per-request latency load-invariant in the range tested — no per-instance queueing, no per-instance lock contention, no shared-resource degradation as concurrency scales. Different from typical server-based hosting, where p50 rises with load until saturation.
- PHP's shared-nothing request model pays the TLS handshake cost on every invocation against PlanetScale. "Since Laravel connects to PlanetScale over SSL, creating the SSL connection can take longer than running the SQL query itself. PlanetScale itself can easily handle unlimited connections using built-in connection pooling, which massively improves performance by keeping those database connections open between requests. However, PHP, by design, shares nothing across requests. This means at the end of every request, PHP will close the connection to the database." Canonical wiki framing: the concepts/shared-nothing-php-request-model is a language-runtime architectural choice that predates serverless but interacts hostilely with it — each invocation opens a fresh TLS connection, each TLS connection requires a 2-3 RTT handshake to PlanetScale, and at 0.3 ms query time the handshake is the dominant latency component. PlanetScale's 1M-connection ceiling (see sources/2026-04-21-planetscale-one-million-connections) removes the connection-count constraint but cannot remove the per-connection-establishment constraint for a shared-nothing runtime.
- Laravel Octane recovers 5× by persisting the app process + DB connections across invocations. "To circumvent this problem, we can use Laravel Octane to gain performance in two ways: Keeping the Laravel application in memory across requests using Laravel Octane. Reusing SQL connections across requests (instead of reconnecting every time)." With `BREF_LOOP_MAX: 250` + `OCTANE_PERSIST_DATABASE_SESSIONS: 1` on the Bref `OctaneHandler`: "The median PHP execution time (p50) went from 75ms to 14ms. 95% of requests (p95) are processed in less than 35ms." Canonical wiki datum: p50 drops from 75 ms → 14 ms (5.4×), p95 drops from 130 ms → 35 ms (3.7×). The saved 61 ms at p50 is the combined Laravel-bootstrap + TLS-handshake + DB-auth cost that used to run on every request.
- `BREF_LOOP_MAX: 250` bounds how many invocations one Octane-persistent process serves before Lambda tears it down. The Bref Octane handler shape: each Lambda execution environment keeps the PHP process + DB connection warm across up to 250 invocations, then the runtime recycles it. Trades per-request amortisation depth against memory/state-staleness bounds. Canonical wiki framing of the patterns/persistent-process-for-serverless-php-db-connections tradeoff.
- The PlanetScale SSL cert lives at `/opt/bref/ssl/cert.pem` under Lambda, a different path than on the local machine. "Don't skip the `MYSQL_ATTR_SSL_CA` line: SSL certificates are required to secure the connection between Laravel and PlanetScale. Note that the path in AWS Lambda (`/opt/bref/ssl/cert.pem`) differs from the one on your machine (likely `/etc/ssl/cert.pem`). If you run the application locally, you will need to change this environment variable back and forth." Canonical wiki operational detail: the Bref runtime layer ships an SSL trust store at `/opt/bref/ssl/cert.pem`; PDO's `MYSQL_ATTR_SSL_CA` must point there, not at the host path. Developer-experience friction that round-trips the env var between local dev and Lambda deploys — a real cost of the Lambda-packaging model.
- Sharded PlanetScale keyspaces route via `DB_DATABASE=@primary`, not the literal database name. "For `DB_DATABASE`, you can use your PlanetScale database name directly if you have a single unsharded keyspace. If you have a sharded keyspace, you'll need to use `@primary`. This will automatically direct incoming queries to the correct keyspace/shard." Canonical wiki datum: the Vitess `@primary` target syntax (covered in VTGate routing) leaks into application-level `.env` configuration for PlanetScale users with sharded keyspaces. Unsharded users get simpler per-database naming; sharded users accept the `@primary` indirection.
- Database migrations run via `serverless bref:cli --args="migrate --force"` (Laravel Artisan on Lambda). Bref exposes a `bref:cli` subcommand that invokes the Lambda function in a one-shot management-command mode — Laravel's Artisan commands (migrate, seed, custom CLI tasks) execute inside the Lambda runtime with the production env, closing the loop on "how do I run migrations on a deployed Lambda app?" Canonical wiki operational framing: CLI-on-Lambda as a first-class deploy primitive, not a hack.
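The `BREF_LOOP_MAX` tradeoff in the takeaways above can be sketched numerically. A minimal model in Python — the 61 ms one-off total matches the post's measured p50 delta (75 ms − 14 ms), but the split between bootstrap and connect cost is an illustrative assumption, not a figure from the post:

```python
# Amortised per-request cost when one persistent Octane process serves
# up to BREF_LOOP_MAX invocations before Lambda recycles it.
# All figures in milliseconds; the one-off cost split is assumed.

BOOTSTRAP_MS = 50.0     # assumed Laravel bootstrap, paid once per process
CONNECT_MS = 11.0       # assumed TLS + MySQL connect, paid once per process
WARM_REQUEST_MS = 14.0  # warm Octane p50, from the post

def amortised_ms(loop_max: int) -> float:
    """Average per-request time including the amortised one-off costs."""
    return (BOOTSTRAP_MS + CONNECT_MS) / loop_max + WARM_REQUEST_MS

for n in (1, 10, 250):
    print(f"BREF_LOOP_MAX={n:>3}: {amortised_ms(n):6.2f} ms/request")
```

`amortised_ms(1)` reproduces the 75 ms baseline (every request pays the full 61 ms one-off cost); at 250 the one-off cost adds only ~0.24 ms per request, which is why the warm p50 lands near 14 ms.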
Systems / concepts / patterns surfaced¶
- New systems (3):
  - Bref — open-source project providing PHP runtime support on AWS Lambda; ships Laravel integration, runtime layers with SSL certs, `serverless.yml` templates, and an Octane handler for persistent-process mode. Created by Matthieu Napoli.
  - Laravel — canonical PHP web framework (Taylor Otwell, 2011). Referenced here as the request-per-invocation baseline and the persistent-process target via Octane.
  - Laravel Octane — Laravel's persistent-process request handler. Keeps the Laravel application bootstrap + DB connections alive across requests by running on Swoole / RoadRunner / FrankenPHP (or, in this case, Bref's `OctaneHandler`). Flips PHP's shared-nothing request model into an Erlang/Node-like persistent-worker model.
- New concepts (2):
  - concepts/shared-nothing-php-request-model — PHP's language-runtime design decision that every HTTP request executes in a fresh PHP process image, with no state carried across requests. It predates serverless by 20+ years but interacts hostilely with serverless because it defeats the per-process connection-pool / auth-cache / application-bootstrap amortisation that persistent-process runtimes get for free.
  - concepts/ssl-handshake-as-per-request-tax — when a request model opens a fresh TLS connection per request (shared-nothing PHP, some serverless runtimes), the TLS handshake (2-3 RTT + crypto) becomes a per-request tax that can exceed the actual query-execution time. Canonical wiki anchor for why persistent connections dominate fresh connections against TLS-terminated services at low query cost.
- New patterns (1):
  - patterns/persistent-process-for-serverless-php-db-connections — deploy a persistent-process request handler (Laravel Octane / Swoole / RoadRunner / FrankenPHP) inside the serverless Lambda invocation environment, so that the Lambda execution context keeps the PHP process + DB connection warm across up to N invocations (bounded by `BREF_LOOP_MAX` or equivalent). Recovers ~5× latency vs. the shared-nothing baseline without giving up the Lambda autoscale / pay-per-request model.
- Extended:
  - systems/aws-lambda — add the Bref PHP runtime + Laravel deploy as a canonical non-Node-native language Lambda example. Add the Octane-style persistent-process-within-Lambda technique as a general cross-language pattern (the same shape applies to Python ASGI, Node workers). Add the cold-start-tail-drowning-in-throughput datum: <1% of requests cold under 50-concurrent burst.
  - systems/planetscale — add the Bref / Laravel integration gotchas (SSL cert path, `@primary` sharded routing) as a developer-experience-surface-area item. Pair with the 2022 one-million-connections benchmark to show both sides of the serverless connection story (server-side pooling + client-side persistent-process).
  - systems/mysql — first canonical wiki mention of the `/opt/bref/ssl/cert.pem` Lambda-layer SSL trust-store path; generalisable to any Lambda-layer-based MySQL client.
  - concepts/cold-start — add the <1% cold-start tail drowning under 50-concurrent fan-out datum to the cold-start page's quantitative corpus. Different regime from the VPC-mode cold-start decomposition (sources/2026-04-22-allthingsdistributed-invisible-engineering-behind-lambdas-network): this is application-runtime cold start (PHP + Laravel bootstrap), not network cold start.
  - concepts/connection-pool-exhaustion — add shared-nothing PHP as an aggravating factor: even with a per-process connection pool configured, every Lambda invocation spins up a fresh PHP process image, so the aggregate connection count scales with invocation count, not with pool size. The fix is either patterns/two-tier-connection-pooling on the server side or patterns/persistent-process-for-serverless-php-db-connections on the client side.
  - patterns/two-tier-connection-pooling — cross-reference the persistent-process pattern as the client-side complement to the proxy-pool server-side fix. Both solve the serverless connection problem; they're composable.
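To make the per-request tax and the connection-count scaling concrete, here is a minimal model (Python; the round-trip time is an illustrative assumption, real handshake counts vary by TLS version, and the even-distribution assumption across execution environments is a simplification):

```python
import math

# All figures in milliseconds. RTT is an assumed Lambda -> PlanetScale
# round trip; QUERY_MS is the post's median PlanetScale query time.
RTT_MS = 10.0
QUERY_MS = 0.3

def fresh_connection_ms(rtt: float = RTT_MS) -> float:
    """TCP handshake (~1 RTT) + TLS handshake (~2 RTT) + MySQL auth (~1 RTT)."""
    return 4 * rtt

def db_path_latency_ms(persistent: bool) -> float:
    """DB-side latency of one request under each client model."""
    return (0.0 if persistent else fresh_connection_ms()) + QUERY_MS

def connection_opens(total_requests, concurrency, loop_max=None):
    """Fresh DB connections opened over a whole load-test run.

    Shared-nothing: one open per request. Persistent process: one open per
    process generation, assuming requests spread evenly over `concurrency`
    execution environments that recycle after `loop_max` invocations.
    """
    if loop_max is None:
        return total_requests
    per_env = math.ceil(total_requests / concurrency)
    return concurrency * math.ceil(per_env / loop_max)

# Shared-nothing: the 0.3 ms query is <1% of a ~40 ms DB path, and the
# ab run (10,000 requests, 50 concurrent) opens 10,000 connections;
# with a persistent process and BREF_LOOP_MAX=250 it opens ~50.
print(db_path_latency_ms(False), db_path_latency_ms(True))
print(connection_opens(10_000, 50), connection_opens(10_000, 50, loop_max=250))
```

The two functions capture the two halves of the page's framing: the handshake tax (concepts/ssl-handshake-as-per-request-tax) dominates per-request latency, and the connection-open count (concepts/connection-pool-exhaustion) scales with invocations rather than pool size until the process persists.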
Operational numbers (canonical)¶
| Metric | Baseline (shared-nothing PHP) | With Laravel Octane | Delta |
|---|---|---|---|
| p50 PHP execution time | 75 ms | 14 ms | 5.4× |
| p95 PHP execution time | 130 ms | 35 ms | 3.7× |
| Throughput | 3,800 req/min | ~4,800 req/min | (bounded by load rig, not system) |
| PlanetScale query p50 | 0.3 ms | — | — |
| PlanetScale QPS | 180 q/s peak | — | — |
| Cold start | ~1 s | — | — |
| Cold-start share of requests (first min) | <1% (≈50 / 3,800+) | — | — |
| Load test shape | `ab -c 50 -n 10000` | same | — |
Bref Octane config (canonical):
```yaml
web:
    handler: Bref\LaravelBridge\Http\OctaneHandler
    runtime: php-81
    environment:
        BREF_LOOP_MAX: 250
        OCTANE_PERSIST_DATABASE_SESSIONS: 1
    events:
        - httpApi: '*'
```
PlanetScale .env (canonical):
```ini
DB_CONNECTION=mysql
DB_HOST=<host url>
DB_PORT=3306
DB_DATABASE=<database_name> # or @primary for sharded keyspaces
DB_USERNAME=<user>
DB_PASSWORD=<password>
MYSQL_ATTR_SSL_CA=/opt/bref/ssl/cert.pem
```
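The `MYSQL_ATTR_SSL_CA` line above hard-codes the Lambda path, which is the local-vs-Lambda round-tripping friction the post calls out. One way to avoid editing the variable by hand — a sketch, not from the post — keys off the `AWS_LAMBDA_FUNCTION_NAME` variable the Lambda runtime always sets (the local fallback path is OS-dependent):

```python
import os

# Sketch: pick the MySQL CA bundle path based on where the code runs.
# The Bref runtime layer ships its trust store at /opt/bref/ssl/cert.pem;
# locally the system store lives elsewhere (shown for a typical Linux box).

def mysql_ssl_ca_path(env=os.environ) -> str:
    if env.get("AWS_LAMBDA_FUNCTION_NAME"):
        return "/opt/bref/ssl/cert.pem"  # Bref runtime layer path on Lambda
    return "/etc/ssl/cert.pem"           # typical local path; varies by OS

print(mysql_ssl_ca_path())
```

The same detection works from PHP (`getenv('AWS_LAMBDA_FUNCTION_NAME')`) if you prefer computing the PDO option in config code rather than flipping the `.env` entry per environment.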
Caveats¶
- Measurement is Lambda duration, not HTTP RTT. The author calls this out explicitly: "Note that we are looking at the AWS Lambda duration instead of HTTP response time: This is to exclude any latency related to networking (and thus have reproducible and comparable results). This is not the HTTP response time real users would see as, like on any server, the network adds latency to HTTP responses." Real-user p50 will be higher by the client-to-Lambda-edge RTT.
- The load test is minimal — a single read endpoint (`/api/users` returning 10 rows). No transactional workload, no contention, no write amplification. Generalisation beyond trivial read workloads requires separate benchmarking.
- 3,800 req/min is the fan-out ceiling of the `ab` rig, not a PlanetScale ceiling. "Laravel handled 1,000 more requests/minute, though this number is not important: We could simply send more requests in our load test to reach a higher number anytime."
- Tutorial voice. Matthieu Napoli is the Bref creator writing on the PlanetScale blog — it's a partnership-promotion post. The numbers and the architectural framing around shared-nothing + Octane are the load-bearing content; the step-by-step deploy walkthrough is pedagogical scaffolding.
- Author works on Bref. The post is both a PlanetScale blog entry and a Bref product demo. Framing ("Bref natively supports Laravel Octane") reflects author incentive.
- Default Laravel rate limiting was disabled for the benchmark. "The only change I made is to disable Laravel's default rate limiting for API calls (`ThrottleRequests` middleware) in `app/Http/Kernel.php` because it would get in the way of my load test." Real production deploys would re-enable it.
Source¶
- Original: https://planetscale.com/blog/serverless-laravel-app-aws-lambda-bref-planetscale
- Raw markdown: raw/planetscale/2026-04-21-serverless-laravel-applications-with-aws-lambda-and-planetsc-64bbdc22.md
Related¶
- systems/aws-lambda — Lambda is the serverless runtime under benchmark
- systems/planetscale — PlanetScale MySQL is the backing database
- systems/bref — PHP-on-Lambda runtime
- systems/laravel — PHP web framework under test
- systems/laravel-octane — persistent-process handler that recovers 5×
- systems/mysql — SSL cert path + `@primary` shard routing
- concepts/cold-start — ~1 s PHP+Laravel cold start, <1% request tail
- concepts/shared-nothing-php-request-model — the architectural constraint being benchmarked
- concepts/ssl-handshake-as-per-request-tax — why the baseline is so costly
- concepts/connection-pool-exhaustion — aggravated by shares-nothing PHP
- concepts/serverless-tcp-socket-restriction — companion Lambda/serverless DB connectivity concept
- patterns/persistent-process-for-serverless-php-db-connections — the Octane fix pattern
- patterns/two-tier-connection-pooling — the server-side complement from the 2022 1M-connections post
- sources/2026-04-21-planetscale-one-million-connections — the 2022 proxy-pool benchmark this pairs with
- companies/planetscale — PlanetScale engineering blog