PATTERN · Cited by 2 sources
Partner managed service as native binding¶
Intent¶
Integrate a third-party managed service (database, vector store, inference provider) into a platform such that customer code consumes it through the same binding mechanism as first-party primitives — identical API shape, provisioned from the platform's dashboard / API, billed through the platform's invoice — while the third-party retains operational ownership of the underlying service.
Customer code is provider-agnostic at the binding layer; provisioning + billing are platform-unified.
Context¶
Platform vendors face a build-vs-partner decision for each new primitive tier (relational SQL, managed search, LLM inference, CI runners, etc.). Building in-house is expensive and slow; referring customers to external providers splits the developer experience: different auth, different provisioning UX, different invoice, different support contract, different SDKs.
Relevant when:
- The platform wants to expand the primitive surface customers can reach with one-line config changes without building the primitive itself.
- There is a credible partner operating the underlying service at production quality.
- The platform already has a binding mechanism and per-tenant auth + metadata plumbing that can be extended to a new backend.
Solution¶
Extend the platform's existing binding infrastructure to accept the partner's service as a first-class binding type:
- Provisioning inside the platform. Dashboard + API flows provision the partner resource on behalf of the customer. Customer never signs in to the partner's console.
- Binding with the same shape as native primitives. Customer declares the binding in the same config file that holds native bindings; the runtime exposes the same `env.NAME.<method>()` access idiom.
- Connectivity layer absorbs protocol differences. Platform stands up a proxy / pooling / caching tier (e.g. Hyperdrive for SQL) that gives the native-looking interface atop the partner's wire protocol.
- Billing aggregation. Usage is invoiced through the platform's account; platform credits (startup programme, committed spend) are redeemable against partner usage — a direct instance of patterns/unified-billing-across-providers.
- Full partner feature surface preserved. The customer "get[s] the exact same ... database developer experience" — same SKUs, same pricing, same feature flags — so the platform is a provisioning + billing aggregator, not a repackager that re-sells a subset.
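The config half of the solution can be sketched as a `wrangler.jsonc` fragment in the shape Hyperdrive bindings use; the binding name `MY_DB` and the placeholder ID are illustrative, not taken from the source:

```jsonc
{
  "name": "my-worker",
  "hyperdrive": [
    {
      // Illustrative binding name chosen by the customer; the Worker
      // reaches the database as env.MY_DB, same idiom as native bindings.
      "binding": "MY_DB",
      // ID of the Hyperdrive config fronting the platform-provisioned
      // partner database (placeholder, not a real ID).
      "id": "<hyperdrive-config-id>"
    }
  ]
}
```

The point of the shape: swapping the backing database is a change to this config entry, not to the application code that consumes `env.MY_DB`.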
Consequences¶
Positive
- Customer gets a new primitive tier with one config line instead of a new vendor relationship.
- Billing + support + identity stay unified; engineering teams already approved on the platform don't have to go through procurement for every partner.
- Platform can grow its addressable primitive surface faster than build-in-house cadence allows.
- Platform credits apply to partner usage — a real economic lever for startup-programme and committed-spend customers.
Negative
- Partner dependency: an outage in the partner surface is an outage in the platform's developer experience for that tier.
- Pricing model coupling: the partner's SKU shape shows up on the platform's invoice, so changes to partner pricing become changes to platform pricing.
- Feature-lag hazard: if the partner ships a new feature, the platform-provisioned surface may lag the partner-direct surface until the integration catches up.
- Subtler provider lock-in: native-binding ergonomics make the partner easier to adopt but harder to swap out — the application code looks Cloudflare-native, not partner-native.
Known uses¶
- Cloudflare Workers × PlanetScale (2026-04-16) — canonical wiki instance. Customer provisions PlanetScale Postgres / MySQL from the Cloudflare dashboard, binds to it via systems/hyperdrive in `wrangler.jsonc`, and (from "next month") pays for it on their Cloudflare invoice, with Cloudflare credits redeemable against PlanetScale usage. The full PlanetScale feature surface (query insights, AI tooling, branching) is preserved. (Source: sources/2026-04-16-cloudflare-deploy-postgres-and-mysql-databases-with-planetscale-workers.)
- Fly.io × Tigris (2024-02-15) — storage-tier instance on a different platform. `fly storage create` provisions a Tigris bucket from the Fly.io CLI; five S3-compatible env-var app secrets (AWS_REGION / BUCKET_NAME / AWS_ENDPOINT_URL_S3 / AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY) are auto-injected into the customer's Fly.io app so the AWS SDK just works — the binding shape is the existing S3-compatible interface, and the platform's config-secrets machinery is what makes it feel native. "All you have to do is fill in a bucket name. Hit enter. All of the configuration for the AWS S3 library will be injected into your application for you." Tigris is a third party (Tigris Data, Inc.), but the developer experience is indistinguishable from a first-party Fly.io primitive. Usage rolls into the Fly.io invoice alongside Supabase / PlanetScale / Upstash — see patterns/unified-billing-across-providers. Shape-parallel to the Cloudflare / PlanetScale instance but without a platform-owned connectivity tier (Fly.io doesn't stand up a Hyperdrive equivalent; Tigris's S3-compatible API is the binding directly). (Source: sources/2024-02-15-flyio-globally-distributed-object-storage-with-tigris.)
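The Fly.io / Tigris binding shape can be sketched as a small helper that assembles an S3-client configuration from the five auto-injected secrets. `tigrisClientConfig` is a hypothetical helper for illustration, not part of any Fly.io or Tigris SDK:

```typescript
// Sketch: turn the five env-var secrets Fly.io injects for a Tigris
// bucket into the options object an S3-compatible SDK client expects.
// `tigrisClientConfig` is a hypothetical name, not a real API.
type TigrisEnv = {
  AWS_REGION?: string;
  BUCKET_NAME?: string;
  AWS_ENDPOINT_URL_S3?: string;
  AWS_ACCESS_KEY_ID?: string;
  AWS_SECRET_ACCESS_KEY?: string;
};

function tigrisClientConfig(env: TigrisEnv) {
  // All five secrets are expected to be present after `fly storage create`.
  const required = [
    "AWS_REGION",
    "BUCKET_NAME",
    "AWS_ENDPOINT_URL_S3",
    "AWS_ACCESS_KEY_ID",
    "AWS_SECRET_ACCESS_KEY",
  ] as const;
  for (const key of required) {
    if (!env[key]) throw new Error(`missing injected secret: ${key}`);
  }
  return {
    region: env.AWS_REGION!,
    endpoint: env.AWS_ENDPOINT_URL_S3!, // Tigris endpoint, not AWS
    credentials: {
      accessKeyId: env.AWS_ACCESS_KEY_ID!,
      secretAccessKey: env.AWS_SECRET_ACCESS_KEY!,
    },
  };
}

// Usage sketch (e.g. with @aws-sdk/client-s3):
//   const s3 = new S3Client(tigrisClientConfig(process.env));
//   ...ListObjectsV2Command({ Bucket: process.env.BUCKET_NAME })...
```

The design point the example makes: the "binding" here is nothing but the S3-compat interface plus the platform's secrets machinery — no platform-owned proxy tier is involved.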
Relationship to adjacent patterns¶
- patterns/unified-billing-across-providers is the billing half of this pattern, usable standalone when the runtime integration is less tight (e.g. 12+ model providers routed through AI Gateway without necessarily being "native bindings" in the same sense).
- patterns/ai-gateway-provider-abstraction is the inference-tier cousin: Cloudflare presents one `env.AI.run()` binding that routes to any of 12+ external inference providers. Same posture; this pattern's storage-tier instance applies it to Postgres / MySQL rather than LLMs.
- patterns/caching-proxy-tier is the connectivity vehicle that typically makes "native binding" feasible for stateful partners — Hyperdrive is the caching/pooling proxy that shields customer code from the partner's wire-protocol specifics.