

CoreDNS

CoreDNS is the default DNS server for Kubernetes clusters (a CNCF graduated project that replaced kube-dns). It resolves service names like my-service.default.svc.cluster.local into ClusterIP virtual IPs (or individual pod IPs for headless services).
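
For illustration, in-cluster resolution is just an ordinary resolver call; the FQDN scheme (`<svc>.<ns>.svc.cluster.local`) is the only Kubernetes-specific part. A minimal Python sketch, with helper names that are mine, not a Kubernetes API:

```python
import socket

def service_fqdn(name: str, namespace: str = "default",
                 cluster_domain: str = "cluster.local") -> str:
    """Build the DNS name CoreDNS serves for a Kubernetes service."""
    return f"{name}.{namespace}.svc.{cluster_domain}"

def resolve_service(name: str, namespace: str = "default") -> list[str]:
    """Ask the configured resolver (CoreDNS, in-cluster) for the
    service's IPs. A normal service yields one ClusterIP; a headless
    service yields the individual pod IPs."""
    infos = socket.getaddrinfo(service_fqdn(name, namespace), None,
                               proto=socket.IPPROTO_TCP)
    # Deduplicate addresses while preserving order.
    return list(dict.fromkeys(info[4][0] for info in infos))
```

Outside a cluster the lookup fails, since `cluster.local` only exists in the cluster's resolver configuration.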

Role in default Kubernetes service routing

  1. App looks up the service's DNS name.
  2. CoreDNS returns the ClusterIP.
  3. App opens TCP to the ClusterIP; kernel rules programmed by systems/kube-proxy rewrite the destination to a pod IP.

Why DNS is a weak service-discovery substrate for high-performance systems

  • Caching / staleness. Clients (language runtimes, OS resolvers) cache DNS records; updates propagate slowly and unevenly.
  • No endpoint metadata. A DNS A record is just an IP — no zone, no shard, no readiness, no weight. Topology-aware LB (zone-affinity, shard-aware hashing) needs metadata DNS can't carry.
  • Critical-path risk. If every RPC resolves DNS, DNS infra becomes part of the request-latency tail; outages propagate.
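
The caching/staleness point can be made concrete with a toy client-side cache: until the TTL expires, a backend change is simply invisible to that client, and each client's cache expires on its own schedule. An illustrative sketch, not any real resolver's implementation:

```python
import time

class CachingResolver:
    """Toy client-side DNS cache: answers are reused until the TTL
    expires, so an endpoint change stays hidden for up to `ttl`
    seconds -- per client, unevenly across the fleet."""

    def __init__(self, upstream, ttl: float = 30.0):
        self._upstream = upstream   # callable: name -> ip
        self._ttl = ttl
        self._cache = {}            # name -> (ip, expiry)

    def resolve(self, name: str) -> str:
        entry = self._cache.get(name)
        now = time.monotonic()
        if entry and now < entry[1]:
            return entry[0]         # possibly stale answer
        ip = self._upstream(name)
        self._cache[name] = (ip, now + self._ttl)
        return ip
```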

Systems that need fresher, richer topology information than DNS can deliver substitute a streaming control plane (xDS / gRPC-LB / custom) that pushes endpoint updates and carries metadata. See systems/databricks-endpoint-discovery-service, concepts/xds-protocol.
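
A push-based control plane inverts the pattern: clients subscribe once and receive endpoint sets as they change, each entry carrying the metadata a DNS A record can't. A toy sketch of the shape of such an interface (`Endpoint` and `EndpointStream` are illustrative names, not the xDS API):

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass(frozen=True)
class Endpoint:
    """Unlike a bare A record, a pushed endpoint carries metadata."""
    ip: str
    port: int
    zone: str          # enables zone-affinity load balancing
    weight: int = 1    # enables weighted balancing
    ready: bool = True # readiness travels with the endpoint

@dataclass
class EndpointStream:
    """Toy push-based discovery: the control plane pushes full endpoint
    sets to subscribers, so clients never poll DNS or hold stale caches."""
    _endpoints: List[Endpoint] = field(default_factory=list)
    _subscribers: List[Callable] = field(default_factory=list)

    def subscribe(self, callback: Callable) -> None:
        self._subscribers.append(callback)
        callback(list(self._endpoints))     # initial snapshot

    def push(self, endpoints: List[Endpoint]) -> None:
        self._endpoints = list(endpoints)
        for cb in self._subscribers:        # immediate propagation
            cb(list(self._endpoints))
```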

Seen in

  • sources/2025-10-01-databricks-intelligent-kubernetes-load-balancing — Databricks removed CoreDNS from the critical path for internal RPC; the custom xDS EDS control plane pushes endpoint updates directly to clients so DNS caching / staleness don't affect routing decisions.
  • sources/2024-03-07-flyio-fly-kubernetes-does-more-now — retained in the FKS beta for in-cluster <svc>.<ns>.svc.cluster.local resolution, but it resolves to IPv6 ClusterIPs on the Fly WireGuard mesh rather than IPv4 addresses mediated by iptables. Fly flags CoreDNS for replacement with a "custom internal DNS" — evidence that even the default CNCF DNS substrate isn't load-bearing under a nodeless, IPv6-first service mesh.