CONCEPT
# Reuse existing infrastructure over purpose-built service

## Definition
Reuse existing infrastructure over purpose-built service is the heuristic that says: when the platform you already operate can do the job adequately, don't onboard a new purpose-built service for a small use case — even when the purpose-built service is the textbook answer. The marginal cost of one more route, one more bucket, one more Ingress on existing infrastructure is near zero; the fixed cost of a new service (IaC templates, monitoring, alerts, TLS/WAF setup, on-call knowledge, billing line items) is real and recurring.
This is the mirror image of the "use the right tool for the job" rule: both are valid, and the judgement call is which side of the tradeoff you're on, which depends on whether you're at a scale where the marginal costs start to matter.
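The tradeoff above is just a break-even comparison between a recurring fixed cost and a near-zero marginal cost. A minimal sketch, with purely hypothetical, illustrative numbers (the function names and figures are assumptions, not from the source):

```python
def monthly_cost_new_service(ops_hours=4.0, hourly_rate=100.0, billing=20.0):
    """Recurring fixed cost of onboarding a purpose-built service:
    IaC upkeep, monitoring/alerts, on-call knowledge, its own billing line."""
    return ops_hours * hourly_rate + billing

def monthly_cost_one_more_route(ops_hours=0.1, hourly_rate=100.0):
    """Marginal cost of one more route/Ingress on a platform you already run."""
    return ops_hours * hourly_rate

# With these assumed numbers, the dedicated service costs ~40x more per month
# than reusing the existing platform for the same small use case.
print(monthly_cost_new_service())     # 420.0
print(monthly_cost_one_more_route())  # 10.0
```

The point of the sketch is only the shape of the comparison: the fixed cost recurs for the lifetime of the endpoint, while the marginal cost is diluted across everything the platform already serves.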
## Seen in
- sources/2020-06-30-zalando-launching-the-engineering-blog — Zalando explicitly chose not to use CloudFront to serve their engineering blog's static content from S3, despite that being the textbook AWS answer for "static site with custom domain + SSL." Their reasoning: "I decided to not use CloudFront as all the required infrastructure for domain+SSL is already in place." They were already running Skipper in front of 140+ Kubernetes clusters, with ACM + ALB + External DNS provisioned on the pattern. One Ingress annotation proxies the blog to the S3 website endpoint with [`compress() setDynamicBackendUrl`](<../patterns/static-site-via-ingress-proxy-to-s3-website.md>), reusing all of that. Explicit tradeoff the post doesn't name: no edge caching, no global PoP presence — fine for blog-scale traffic, wrong at higher scale.
## When it applies
- Platform already operated at scale — the fixed cost of the platform is already paid and amortized across everything it serves; at 140+ K8s clusters the delta of one more Ingress is invisible.
- Low-traffic or internal endpoint — if the endpoint doesn't need edge caching / geo-distribution / DDoS mitigation the platform can't already provide, a simple ingress route suffices.
- Homogeneous auth / TLS / observability expectations — reusing existing infrastructure also reuses WAF rules, TLS policy, TLS monitoring, request-level metrics, and the team's operational playbook.
## When it doesn't apply
- Traffic-shape changes — if the endpoint needs global edge caching, CloudFront (or Cloudflare, Fastly) is the right answer regardless of what you already run.
- Compliance/isolation requirements — public endpoints with distinct auth domains often warrant separate infrastructure to limit blast radius.
- Marginal cost is actually high — e.g., proxying a multi-terabyte-per-day binary workload through your shared ingress fleet imposes real capacity costs on unrelated tenants.