

Policy gate on provisioning

Pattern

Gate every infrastructure-provisioning request at admission time against a central policy catalog. Non-compliant manifests are rejected before any resource is created — shift-left compliance.

The canonical Kubernetes realization is Open Policy Agent Gatekeeper: a K8s admission controller that evaluates Rego policies against every CREATE / UPDATE request and returns admit / deny (with optional mutation). Policies live in a central catalog; platform teams own policy authorship; application teams' manifests pass through the gate transparently when compliant.

Why admission time, not audit time

Three candidate enforcement points, of which this pattern takes the last:

  • Audit-time — resources are created, then a scanner discovers non-compliant ones and files tickets. Drift happens routinely; incidents happen when the scanner lags.
  • Deploy-time (CI/CD pipeline step) — policy check runs in CI. Works for CI-gated flows; fails for any out-of-band change (console click, kubectl apply with cluster-admin, emergency manual fix).
  • Admission-time — the K8s API server itself asks the policy engine before committing. Every change path — CI, console, kubectl, GitOps controller — passes through it. No drift window.

Shape

  • Central policy catalog. Policies live in a dedicated store, versioned, audited, authored by security/compliance team (concepts/policy-as-data).
  • Admission controller calls the engine synchronously; policy-evaluation time is therefore a floor on every request's latency.
  • Constraint templates + instances. Gatekeeper's two-layer model: the template defines a generic rule shape (e.g. "all Pods must have a resource limit"); the instance binds it to specific scopes (which namespaces, which labels).
  • Escape hatches via exemptions, not bypasses. Named exemptions in the catalog, audited; no "admins can skip policies" backdoor.
  • Mutation, carefully. Gatekeeper can optionally mutate incoming manifests (add missing labels, inject a default IAM role). Powerful but reduces the caller's control — use sparingly.
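
The two-layer model above can be sketched as a Gatekeeper ConstraintTemplate plus one constraint instance. This is a minimal illustration of the generic shape, not a policy from the source; the template name, Rego package, and namespace scope are all assumptions.

```yaml
# Layer 1: the template defines the generic rule shape (Rego embedded).
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlimits
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLimits   # the CRD kind that instances use below
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlimits

        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          not container.resources.limits
          msg := sprintf("container %v has no resource limits", [container.name])
        }
---
# Layer 2: the instance binds the rule to a specific scope.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLimits
metadata:
  name: pods-must-have-limits
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    namespaces: ["prod", "staging"]   # illustrative scope
```

Platform teams version the template; scoping decisions live in the much smaller instance objects, which keeps the Rego itself reusable across clusters.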

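Mutation, when used, is expressed the same declarative way. A hedged sketch of a Gatekeeper Assign that injects a default label (the label key and value are illustrative, not from the source):

```yaml
# Injects metadata.labels.team on Pods that lack it.
apiVersion: mutations.gatekeeper.sh/v1
kind: Assign
metadata:
  name: default-team-label
spec:
  applyTo:
    - groups: [""]
      kinds: ["Pod"]
      versions: ["v1"]
  match:
    scope: Namespaced
  location: "metadata.labels.team"
  parameters:
    assign:
      value: "unassigned"   # illustrative default, assumed here
```
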
Canonical instance (Santander Catalyst)

"Policies catalog — A central repository of policies ensuring compliance and security across all operations using Open Policy Agent." (Source: sources/2026-02-26-aws-santander-catalyst-platform-engineering)

Catalyst's policies catalog lives on the EKS control plane cluster alongside systems/crossplane (stacks catalog) and systems/argocd (data-plane claims). Every Crossplane claim — which is a K8s API request, because Crossplane models every cloud resource as a CR — passes through Gatekeeper before it reaches the Crossplane controller. Non-compliant database instances, IAM roles, VPC configurations are rejected at claim time, not after the resources exist.
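
For instance, a claim like the following is just a Kubernetes CREATE request, so Gatekeeper can reject it before the Crossplane controller ever reconciles it. The claim kind, apiVersion, and fields below are hypothetical; in practice they come from whatever XRDs the platform's stacks catalog defines.

```yaml
# Hypothetical database claim; shape defined by the platform's own XRDs.
apiVersion: database.example.org/v1alpha1
kind: PostgreSQLInstance
metadata:
  name: orders-db
  namespace: team-payments
spec:
  parameters:
    storageGB: 20
    encrypted: false   # a policy requiring encryption would deny this at admission
```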

Relation to other policy enforcement layers

| Layer | Enforcement point | Engine | Canonical instance |
| --- | --- | --- | --- |
| patterns/policy-gate-on-provisioning | K8s admission (infra provisioning) | OPA / Gatekeeper | Santander Catalyst |
| patterns/lambda-authorizer | API Gateway (app request) | AVP / Cedar | Convera |
| concepts/service-control-policy | AWS Organizations (account ceiling) | IAM policy / SMT-proven by systems/aws-policy-interpreter | ProGlove |
| patterns/zero-trust-re-verification | Backend data-tier boundary | App code calling AVP | Convera |

All four are concepts/policy-as-data instances. Different engines, different enforcement layers, same load-bearing discipline: policies separate from code, versioned, audited, evaluated by a dedicated engine.

Caveats

  • Policy-evaluation latency adds to every admission. A bad Rego policy can brown out the API server.
  • Policy updates are themselves risky. A broken policy blocks all provisioning, so the policy catalog itself needs CI, canary, and rollback discipline.
  • Coverage ≠ perfection. Admission-time sits downstream of the human-in-the-loop (or CI-in-the-loop) step; social engineering, stolen credentials, and over-permissive exemptions can still slip non-compliant changes through. Pair with defense in depth (concepts/zero-trust-authorization, patterns/zero-trust-re-verification).
  • Cross-cluster consistency requires the policy catalog to be distributed; version skew across clusters is a silent correctness gap.
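
The fail-open/fail-closed trade-off behind the first two caveats comes down to a single field on the admission webhook registration. A sketch of the relevant fragment (Gatekeeper registers a webhook much like this; the exact manifest varies by version, and clientConfig/rules are omitted here):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: gatekeeper-validating-webhook-configuration
webhooks:
  - name: validation.gatekeeper.sh
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Ignore   # fail open: an engine outage admits requests unchecked;
                            # Fail = fail closed: an engine outage blocks all provisioning
    timeoutSeconds: 3       # caps the latency the gate can add to each request
```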
