
CONCEPT

Linux cGroup

A Linux cGroup (control group) is a kernel primitive that groups a set of processes and lets the kernel enforce resource limits (CPU, memory, I/O, PIDs) and group-scoped policies (packet filtering, socket control, device access, eBPF program attachment) on the group as a whole. It is the process-set unit the Linux kernel exposes for scoping policy below the host-wide level but above the per-process level.

cGroups are best known as the primitive Docker / Kubernetes / systemd use under the hood, but they are independent of any of those: "A cGroup is a Linux primitive (used heavily by Docker but not limited to it) that enforces resource limits and isolation for sets of processes. You can create a cGroup, configure it, and move processes into it — no Docker required." (Source: sources/2026-04-16-github-ebpf-deployment-safety)
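The quoted claim — create a cGroup, configure it, move processes in, no Docker required — amounts to directory and file operations against the cGroup v2 pseudo-filesystem. A minimal sketch; the helper name is ours, and running it against the real /sys/fs/cgroup requires root:

```python
import os

def create_cgroup_with_pid(name, pid, base="/sys/fs/cgroup"):
    """Create a cGroup v2 directory and move a process into it.

    On a real cGroup v2 mount the kernel materialises the control files
    (cgroup.procs, memory.max, ...) as soon as the directory exists.
    """
    path = os.path.join(base, name)
    os.makedirs(path, exist_ok=True)
    # Writing a PID to cgroup.procs migrates that process (all its
    # threads) into the group.
    with open(os.path.join(path, "cgroup.procs"), "w") as f:
        f.write(str(pid))
    return path
```

The `base` parameter exists only so the sketch can be exercised against a scratch directory; in practice it is the cGroup v2 mount point.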

Why cGroups are the right scope for eBPF security policy

eBPF programs can attach at several granularities (system-wide kprobes, per-interface XDP/TC, per-socket, etc.). The cGroup attachment point (BPF_PROG_TYPE_CGROUP_SKB, BPF_PROG_TYPE_CGROUP_SOCK, BPF_PROG_TYPE_CGROUP_SOCK_ADDR, cGroup-scoped LSM hooks) is what makes process-set-scoped security enforcement possible without Docker or any other container dependency.

GitHub's deployment-safety use case needs exactly this scope: deploy scripts must be blocked from talking to github.com, while the customer-serving processes on the same host must not be. Attaching the eBPF firewall to a purpose-created cGroup and moving only the deploy-script process into it carves out exactly the intended set. Host-wide iptables or interface-level XDP would over-block; per-PID tracking without cGroups would race on process lifetimes.

Primary surfaces

  • cGroup v1 / v2. cGroup v2 unified the controllers into one hierarchy (vs v1's per-controller hierarchies) and is the modern default. cGroup v2's pseudo-filesystem mount is typically /sys/fs/cgroup/.
  • Resource controllers. cpu (scheduling priority + quotas), memory (limits + OOM behaviour), io (block I/O weight + throttling), pids (process count cap), cpuset (pin to cores / NUMA nodes).
  • Namespaces are orthogonal. cGroups limit resources; namespaces limit visibility (PID / network / mount / UTS / IPC / user). Docker layers both together, but they are independently usable.
  • eBPF attachments. cGroup v2 supports attaching eBPF programs via bpf(BPF_PROG_ATTACH): CGROUP_SKB (egress + ingress packet filter), CGROUP_SOCK (socket creation), CGROUP_SOCK_ADDR (connect/bind/sendmsg destination rewrite), CGROUP_DEVICE (device access), plus cGroup-scoped LSM hooks.
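The resource-controller bullet above boils down to writing values into the control files a cGroup v2 directory exposes. A minimal sketch: the file names (memory.max, cpu.max, pids.max) are real cGroup v2 interfaces, but the helper name and the specific limit values are illustrative:

```python
import os

# Illustrative cGroup v2 controller settings (values are examples, not
# recommendations).
LIMITS = {
    "memory.max": "512M",        # hard memory ceiling for the group
    "cpu.max": "50000 100000",   # 50ms of CPU per 100ms period (~half a core)
    "pids.max": "64",            # cap on processes/threads in the group
}

def apply_limits(cgroup_path, limits=LIMITS):
    """Write each limit into its controller file under the cGroup dir."""
    for knob, value in limits.items():
        with open(os.path.join(cgroup_path, knob), "w") as f:
            f.write(value)
```

cpu.max's "quota period" pair means the group may consume at most `quota` microseconds of CPU time every `period` microseconds, summed across all its processes.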

Shape for GitHub's deploy-safety firewall

1. Create cGroup at e.g. /sys/fs/cgroup/deploy-script/
2. Place deploy-script PID into it by writing to cgroup.procs
3. Attach eBPF programs to the cGroup:
     - CGROUP_SOCK_ADDR connect4 → rewrite dest for DNS (:53)
       queries to a localhost userspace DNS proxy
     - CGROUP_SKB egress → drop packets targeting blocked IPs
4. Userspace DNS proxy evaluates each requested hostname against
   the blocklist; communicates allow/deny via eBPF maps
5. Bonus: CPU + memory cGroup limits prevent runaway deploy
   scripts from starving the host's customer-serving processes
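Step 4's allow/deny decision can be sketched as a pure hostname check. The exact-match-or-subdomain matching rule below is our assumption for illustration, not GitHub's documented logic:

```python
# Hypothetical blocklist; the deploy-safety firewall blocks github.com
# for deploy scripts per the source.
BLOCKLIST = {"github.com"}

def is_blocked(hostname, blocklist=BLOCKLIST):
    """Return True if the hostname, or any subdomain of a blocked
    domain, appears in the blocklist."""
    hostname = hostname.rstrip(".").lower()  # normalise trailing dot + case
    return any(hostname == b or hostname.endswith("." + b)
               for b in blocklist)
```

In the real design the proxy would feed this verdict back to the kernel via an eBPF map, so the CGROUP_SKB egress program can drop packets to the resolved (blocked) IPs.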

The cGroup is load-bearing at every layer: it's what scopes the eBPF programs, what provides the resource limits, and what the deploy orchestrator targets to place the script.

Alternative scopes (and why they fall short for this use case)

  • Host-wide iptables / nftables. Blocks github.com from every process on the host, including the customer-traffic-serving processes that legitimately need it. Wrong granularity.
  • Per-process PR_SET_PDEATHSIG / PID-based policy. No stable kernel primitive for PID-tagged packet filtering; PIDs recycle; children aren't automatically tracked.
  • Container (Docker) boundary. Over-committed — forces the deploy script into a container, which changes the deploy tool model, root filesystem, and attack surface in ways unrelated to egress filtering.
  • Separate host / VM. Same over-commitment + deployment-surface-area cost; also wastes hardware because the deploy script runs for seconds.

Other cGroup uses surfaced across the wiki

  • Resource isolation for co-tenanted workloads (CPU / memory / io controllers are what make Kubernetes Pod QoS classes enforceable; typical in any container platform).
  • eBPF-based observability / security. Datadog's Workload Protection attaches eBPF programs scoped to specific cGroup paths for policy isolation (see sources/2026-01-07-datadog-hardening-ebpf-for-runtime-security).
  • systemd slice hierarchy. /sys/fs/cgroup/system.slice/ vs user.slice/ is a standard systemd cGroup v2 layout; eBPF programs can target them separately.
