PATTERN
Time-bucketed tcpdump capture¶
Use tcpdump's `-W N -G S -w '<strftime template>'`
flags to capture traffic into a rolling window of N
pcap files, each covering S seconds and named by timestamp. This
bounds per-file size (so each slice loads cleanly in Wireshark),
aligns the capture cadence with the observed incident cadence,
and lets engineers grep or replay a specific time slice without
loading gigabytes.
Mechanism¶
- `-n` — don't resolve names (avoids triggering DNS lookups while capturing DNS).
- `-tt` — print timestamps as seconds since the epoch.
- `-i any` — capture on all interfaces.
- `-W 30` — keep 30 rolling files.
- `-G 60` — rotate every 60 seconds.
- `-w '%FT%T.pcap'` — filename template using `strftime(3)`: `%F` = YYYY-MM-DD, `%T` = HH:MM:SS. ISO-8601 names sort lexicographically in time order.
- `port 53` — BPF filter (capture only DNS).
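Assembled, the flags above form the full command quoted in the Seen-in section. The capture itself needs root, so it is shown as a comment; the preview uses GNU `date`, which understands the same `strftime(3)` format codes as tcpdump's `-w` template (the timestamp is illustrative):

```shell
# Full capture command from the article (needs root / CAP_NET_RAW,
# so shown as a comment rather than executed here):
#   tcpdump -n -tt -i any -W 30 -G 60 -w '%FT%T.pcap' port 53

# Preview the filename the '%FT%T.pcap' template expands to at a
# given rotation instant (GNU date shares the format codes):
date -u -d '2024-12-12 14:32:00' '+%FT%T.pcap'
# → 2024-12-12T14:32:00.pcap
```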
Result: a 30-minute rolling window of 60-second slices, each
~tens of MB for typical DNS traffic, with filenames like
2024-12-12T14:32:00.pcap.
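The "sorts lexicographically in time order" property is easy to check directly; the slice names below are hypothetical, not from a real capture:

```shell
# ISO-8601 filenames compare correctly as plain strings, so sort(1)
# restores chronological order without parsing any timestamps.
printf '%s\n' \
  2024-12-12T14:35:00.pcap \
  2024-12-12T09:05:00.pcap \
  2024-12-12T14:05:00.pcap | sort
# → 2024-12-12T09:05:00.pcap
# → 2024-12-12T14:05:00.pcap
# → 2024-12-12T14:35:00.pcap
```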
Why bucket rather than stream¶
- Per-slice file size stays small. Wireshark stalls loading a multi-GB pcap; a 60-second slice opens instantly.
- Grep-ability. Filenames contain the timestamp, so selecting "what happened around 14:35" is `ls *2024-12-12T14:3[3-7]*.pcap`.
- Bounded disk usage. The `-W N` flag caps total disk consumption; a capture running for days doesn't overflow the filesystem.
- Independent analysis per slice. Different engineers can grab different slices and analyse in parallel without contention.
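The grep-ability point can be sketched with empty placeholder files standing in for real slices (the names are illustrative; in a real capture tcpdump's `-w` rotation writes them):

```shell
dir=$(mktemp -d) && cd "$dir"
# Fake a few rotation outputs around the window of interest.
touch 2024-12-12T14:30:00.pcap 2024-12-12T14:33:00.pcap \
      2024-12-12T14:35:00.pcap 2024-12-12T14:37:00.pcap \
      2024-12-12T14:40:00.pcap
# "What happened around 14:35" = minutes 33 through 37:
ls *2024-12-12T14:3[3-7]*.pcap
```

The glob matches exactly the 14:33, 14:35, and 14:37 slices, leaving the 14:30 and 14:40 files out.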
Choosing N and S¶
`S` should match the cadence of the incident: Stripe's spikes recurred hourly and lasted several minutes, so 60-second slices gave sub-spike granularity. `N × S` should exceed the time window an investigator expects to need: Stripe's 30 × 60 = 1,800 s gave 30 minutes of rolling history, enough to catch a single hourly spike in its entirety.
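The sizing arithmetic is just `N × S`; a two-line sanity check of the values above:

```shell
N=30  # -W 30: number of files kept
S=60  # -G 60: seconds per file
window=$((N * S))
echo "rolling window: ${window}s = $((window / 60)) min"
# → rolling window: 1800s = 30 min
```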
Seen in¶
- Stripe — The secret life of DNS packets (2024-12-12).
Canonical wiki instance. Stripe ran
`tcpdump -n -tt -i any -W 30 -G 60 -w '%FT%T.pcap' port 53` on the central DNS servers to capture DNS traffic across a 30-minute rolling window. Analysing the 60-second slice containing an hourly spike revealed: 90% of requests to the VPC resolver were reverse-DNS lookups for IPs in 104.16.0.0/12; source IPs clustered on Hadoop hosts; and the inbound-vs-outbound packet ratio exposed the VPC-resolver rate-limit saturation.