Vercel BotID Deep Analysis¶
Vercel BotID Deep Analysis is the edge-case path inside Vercel BotID that handles sophisticated actors — bot networks that use "real browser automation tools and carefully crafted profiles" and successfully evade single-pass classification. It is "powered by Kasada's machine learning backend" and is the named subject of Vercel's 2026-04-21 production-incident post.
Architectural role¶
Where standard BotID is a single-pass classifier, Deep Analysis is a cross-session correlation + adaptive re-verification loop. It tolerates a short window during which a novel stealth-bot fleet is classified as human, then breaks the disguise once a coordination signal emerges across sessions.
```
per-session telemetry → single-pass classifier → decision
        │
        └─ if ambiguous / novel
                 │
                 ▼
        Deep Analysis correlation engine
                 │
                 ▼
        cross-session pattern detection
        (e.g. same fingerprint across proxy IPs)
                 │
                 ▼
        forced re-verification
        (fresh telemetry, re-scored with priors)
                 │
                 ▼
        reclassify → block
```
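The correlation step in the diagram can be sketched minimally: group live sessions by browser fingerprint and flag fingerprints that recur across many distinct proxy-node IPs. This is an illustration of the published trigger signal only; the function name, data shape, and threshold are hypothetical, and the real Kasada backend is not disclosed.

```python
from collections import defaultdict

def find_coordinated_fleets(sessions, min_ips=5):
    """Flag browser fingerprints seen across many distinct IPs.

    `sessions` is an iterable of (fingerprint, ip) pairs from rolling
    per-session telemetry. A single real user rarely presents one
    fingerprint from many proxy-node IPs; a rotating bot fleet does.
    The min_ips threshold is an illustrative placeholder.
    """
    ips_by_fingerprint = defaultdict(set)
    for fingerprint, ip in sessions:
        ips_by_fingerprint[fingerprint].add(ip)
    # Fingerprints spread across >= min_ips distinct IPs become
    # candidates for forced re-verification.
    return {fp for fp, ips in ips_by_fingerprint.items() if len(ips) >= min_ips}

# One fingerprint cycling through 8 proxy IPs; one ordinary session.
sessions = [("fp-a", f"10.0.0.{i}") for i in range(8)] + [("fp-b", "203.0.113.7")]
print(sorted(find_coordinated_fleets(sessions)))  # ['fp-a']
```

The point of the sketch is that the signal is relational: no single session looks suspicious, only the set does.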
What the 2026-04-21 post discloses¶
The post describes a single production incident: a 10-minute window beginning October 29 at 9:44 am. Deep Analysis's behaviour during that window is the wiki's only disclosure of the subsystem's internals.
- Initial classification was human. "For a few minutes, BotID's models carefully analyzed this new data, determining whether these sessions were genuine or malicious." — the canonical concepts/adaptive-bot-reclassification window.
- Model analysis spanned ~3 minutes (9:45 → 9:48 am), examining "40-45 new browser profiles making thousands of requests across proxy nodes."
- Pattern correlation fired at 9:48 am — "These browser sessions that had initially appeared legitimate started showing up across a range of IP addresses." The canonical concepts/proxy-node-correlation-signal — same browser fingerprints, multiple proxy-node IPs.
- Forced re-verification at 9:49 am — "the system automatically forced these sessions back through the verification process to collect fresh browser telemetry."
- Attack traffic dropped to zero at 9:54 am — 5 minutes after forced re-verification began, 10 minutes from the initial spike.
- Customer took no action. "No manual intervention required. No emergency patches or rule updates. The customer took no action at all." — the patterns/hands-free-adaptive-bot-mitigation property.
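The reclassification lifecycle the timeline describes can be modelled as a small state machine: sessions outside any correlated fleet stay human; sessions implicated by the coordination signal are forced through re-verification and re-scored with the fleet prior. All names and the threshold below are hypothetical; only the human → re-verify → block progression comes from the post.

```python
from enum import Enum, auto

class Verdict(Enum):
    HUMAN = auto()      # initial classification stands
    REVERIFY = auto()   # forced back through verification
    BLOCKED = auto()    # fresh telemetry confirms the fleet prior

def reclassify(fingerprint, fresh_score, correlated_fingerprints,
               block_threshold=0.8):
    """Adaptive reclassification sketch (threshold is illustrative).

    A session whose fingerprint matched a coordination signal is
    re-scored on fresh telemetry, with the prior that it belongs to
    a correlated fleet.
    """
    if fingerprint not in correlated_fingerprints:
        return Verdict.HUMAN
    if fresh_score >= block_threshold:
        return Verdict.BLOCKED
    return Verdict.REVERIFY

fleet = {"fp-a"}
print(reclassify("fp-b", 0.9, fleet))  # Verdict.HUMAN
print(reclassify("fp-a", 0.9, fleet))  # Verdict.BLOCKED
```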
Design thesis¶
From the post: "Standard bot detection handles the majority of threats effectively. But sophisticated attacks like this one create a tradeoff: aggressive blocking risks false positives against legitimate users, while permissive rules let sophisticated bots through. Deep Analysis exists for these edge cases where attackers invest significant resources in evasion."
The subsystem is the asymmetric-FP-FN compromise path: the standard single-pass classifier stays permissive against sophisticated evasion so legitimate users aren't blocked; Deep Analysis resolves the resulting false negatives via cross-session correlation and adaptive re-verification without disrupting legitimate traffic.
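The asymmetry can be made concrete as a two-threshold decision rule: the single-pass stage blocks only at a high confidence (few false positives), and the correlation stage lowers the effective threshold only for sessions already implicated by a fleet signal (resolving the false negatives). The thresholds and function are illustrative, not Vercel's published implementation.

```python
def two_stage_decision(score, correlated, block_hi=0.95, block_lo=0.6):
    """Asymmetric FP/FN tradeoff sketch (numbers are illustrative).

    Stage 1 (single-pass) blocks only above a high threshold, so
    legitimate users are rarely caught, at the cost of letting
    sophisticated bots through. Stage 2 (Deep Analysis) applies a
    lower threshold only to sessions implicated by a cross-session
    coordination signal.
    """
    if score >= block_hi:
        return "block"   # confident single-pass block
    if correlated and score >= block_lo:
        return "block"   # false negative resolved via correlation
    return "allow"

# The same ambiguous score is allowed alone, blocked once correlated.
print(two_stage_decision(0.7, correlated=False))  # allow
print(two_stage_decision(0.7, correlated=True))   # block
```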
Signal combination¶
The explicit statement from the post: "By combining multiple signals (browser telemetry, network patterns, behavioral analysis, and real-time learning), Deep Analysis can identify coordination patterns that individual signals miss. The key insight wasn't any single red flag. It was the correlation: identical browser fingerprints cycling through proxy infrastructure."
Four named signal classes:
- Browser telemetry — fingerprint surface.
- Network patterns — proxy-node vs origin IP characterisation.
- Behavioural analysis — session-level behaviour patterns.
- Real-time learning — continuous model updates on new telemetry.
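The "no single red flag" claim above can be illustrated with a weighted combination of per-class scores; the weighted mean is a placeholder for whatever the Kasada backend actually computes (which is not published), and all scores, weights, and the threshold are invented for illustration.

```python
def combined_score(signals, weights=None):
    """Combine per-class signal scores into one bot-likelihood score.

    `signals` maps signal classes (browser telemetry, network patterns,
    behavioural analysis, real-time learning) to scores in [0, 1].
    A weighted mean stands in for the undisclosed real model: the point
    is that correlated, individually-unremarkable scores can cross a
    threshold no single class crosses on its own.
    """
    weights = weights or {k: 1.0 for k in signals}
    total = sum(weights[k] for k in signals)
    return sum(signals[k] * weights[k] for k in signals) / total

# No class exceeds 0.7, yet the combined score crosses a 0.6 threshold.
s = {"browser_telemetry": 0.55, "network_patterns": 0.7,
     "behavioural": 0.6, "realtime_learning": 0.65}
print(round(combined_score(s), 3))  # 0.625
```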
Scope / boundaries¶
- Not a first-line filter. The standard BotID path handles the majority of bots. Deep Analysis activates on the residual — traffic that single-pass classification couldn't resolve.
- Not a rule engine. Mitigation isn't a static blocklist; it's an online-learning model that updates its scoring in real time.
- Not customer-observable in detail. The customer sees reclassifications and blocks, not the correlation signal.
Relationship to Cloudflare's stealth-crawler signature¶
Both Deep Analysis and Cloudflare's 2025-08-04 stealth-crawler detection (patterns/stealth-crawler-detection-fingerprint) solve the same abstract problem — identifying a coordinated bot operator behind rotation of operator-controllable identifiers — with different signal spaces:
| Dimension | Vercel / Kasada Deep Analysis | Cloudflare stealth-crawler signature |
|---|---|---|
| Feature space | Browser telemetry + behavioural patterns | TLS / HTTP/2 network fingerprints |
| Trigger | Coordinated-fleet correlation | Operator-declaration mismatch |
| Response | Per-session forced re-verification | Block signature in managed ruleset |
| Customer ergonomics | Hands-free adaptive | Managed-ruleset signature ships |
| Granularity | Session (short-lived) | Operator (long-lived) |
| Feature-list disclosure | Not published | Not published |
They are complementary, not redundant — the trigger for Deep Analysis (fingerprint matching across proxy IPs) assumes an actor sophisticated enough to cycle IPs; the trigger for the Cloudflare signature (operator-declaration mismatch) assumes the actor is declaring a stealth UA and can be caught on network-stack-level signals.
Limitations / caveats¶
- Single-incident evidence. One documented production incident. No baseline detection frequency, no FP rate.
- "Brand-new bot network" is an inference. Vercel / Kasada infer the network is new from the absence of prior fingerprint matches; the operator's activity elsewhere is unknown.
- Kasada dependency. Strategic decisions about Deep Analysis collapse to decisions about the Kasada vendor relationship.
- The post is marketing-voice. No model architecture, no feature list, no benchmarks. The value is architectural framing, not reproducible detail.
Seen in¶
- sources/2026-04-21-vercel-botid-deep-analysis-catches-a-sophisticated-bot-network-in-real-time — the only wiki source on this subsystem. Covers a single incident from October 29 at 9:44 am; 10-minute detect-to-zero-traffic window; 40-45 browser profiles; 500% traffic spike; zero customer intervention.
Related¶
- systems/vercel-botid — the parent product.
- systems/kasada-bot-management — the ML backend.
- concepts/browser-telemetry-fingerprint / concepts/proxy-node-correlation-signal / concepts/adaptive-bot-reclassification / concepts/coordinated-bot-network — the signal-space concepts.
- concepts/ml-bot-fingerprinting — the class concept Deep Analysis instantiates.
- patterns/correlation-triggered-reverification / patterns/hands-free-adaptive-bot-mitigation — the pattern-layer abstractions.
- patterns/stealth-crawler-detection-fingerprint — the Cloudflare analogue.
- companies/vercel.