Adaptive bot reclassification¶
Definition¶
Adaptive bot reclassification is the bot-management posture in which a session's human-vs-bot label is not final at first classification — the ML backend reserves the right to flip the label later, based on correlation signals that emerge only across subsequent sessions or time windows. The initial classifier output is a provisional prior, not a commitment.
This concept captures the short window during which a novel stealth-bot fleet is labelled as human, and the mechanism by which the label changes once a coordination signal fires.
Canonical wiki instance¶
From Vercel's 2026-04-21 BotID Deep Analysis post:
"What we were witnessing was likely a brand-new browser bot network spinning up for the first time. These weren't your typical bots. They were sophisticated actors that generated telemetry data that looked completely legitimate... For a few minutes, BotID's models carefully analyzed this new data, determining whether these sessions were genuine or malicious."
Explicit timeline:
- 9:44 am — traffic spike; human classification continues.
- 9:45-9:48 am — model analysis, classification remains human.
- 9:48 am — pattern correlation identifies coordinated activity; first reclassification eligibility.
- 9:49 am — forced re-verification; bot classification begins.
- 9:54 am — attack traffic to zero.
The window during which the attacker was classified as human is ~4 minutes. That window is not a bug; it's the cost of running a calibrated classifier that tolerates uncertainty and waits for coordination signals.
Why the window exists¶
A single-pass classifier sees only the current session. Against a sophisticated operator who has invested in legitimate-looking per-session telemetry, the classifier's signal-to-noise is near-zero for the first few sessions — it has no prior examples to learn from, no coordination signals available, and a strong prior against blocking legitimate users.
The only way to resolve the ambiguity is to wait until the operator exposes cross-session structure (in the canonical instance, the concepts/proxy-node-correlation-signal). That structure requires multiple sessions to be visible simultaneously, which means time.
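As a minimal sketch of what "cross-session structure" means here, assume each session carries a stable telemetry fingerprint and a proxy exit IP. The field names and thresholds below are illustrative assumptions, not BotID's actual schema:

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical session record; fields are invented for illustration.
@dataclass
class Session:
    session_id: str
    fingerprint: str   # stable cross-session telemetry fingerprint
    exit_ip: str       # proxy exit node observed for this session

def coordination_signal(sessions, min_sessions=50, min_exit_ips=10):
    """Group live sessions by fingerprint; flag any fingerprint shared by
    many sessions arriving through many distinct proxy exit nodes."""
    by_fp = defaultdict(list)
    for s in sessions:
        by_fp[s.fingerprint].append(s)
    flagged = {}
    for fp, group in by_fp.items():
        exit_ips = {s.exit_ip for s in group}
        if len(group) >= min_sessions and len(exit_ips) >= min_exit_ips:
            flagged[fp] = [s.session_id for s in group]
    return flagged
```

The signal is inherently cross-session: no single session in the flagged group looks suspicious on its own, which is why a single-pass classifier cannot see it.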
Why the window is acceptable¶
The concepts/bot-vs-human-frame asymmetric-cost reasoning justifies the window:
- FP cost = blocking a legitimate human = lost revenue / trust / engagement, compounded across the blocked user's future behaviour.
- FN cost = letting a bot through for 4 minutes of a 10-minute attack = a bounded amount of abuse.
Aggressive first-pass blocking minimises FN cost but catastrophically inflates FP cost. Permissive first-pass + adaptive reclassification minimises the compound cost — the FN cost accumulates for a few minutes while the correlation signal materialises, and then drops to zero once reclassification fires.
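The compound-cost argument can be made concrete with a toy model. All numbers below are invented assumptions for illustration, not measured values:

```python
# Illustrative cost model: total cost = false positives on real humans
# plus bounded abuse while bots are still classified human.
def compound_cost(fp_rate, fn_minutes, *, n_humans, cost_per_fp,
                  abuse_cost_per_minute):
    return (fp_rate * n_humans * cost_per_fp
            + fn_minutes * abuse_cost_per_minute)

# Aggressive first-pass blocking: near-zero bot window, many blocked humans.
aggressive = compound_cost(0.02, 0, n_humans=100_000,
                           cost_per_fp=50.0, abuse_cost_per_minute=500.0)

# Permissive first pass + adaptive reclassification: ~4-minute bot window,
# almost no false positives.
adaptive = compound_cost(0.0005, 4, n_humans=100_000,
                         cost_per_fp=50.0, abuse_cost_per_minute=500.0)
```

Under these assumed numbers the aggressive posture costs 100,000 while the adaptive posture costs 4,500: the bounded FN window is far cheaper than compounding FP cost across a large legitimate-user base.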
Shape of the adaptive mechanism¶
- Soft first-pass label. Emit a prior-style classification that downstream systems can treat as non-final (e.g., attached to a session ID, revocable).
- Session persistence. Keep the session's telemetry and classification in a correlation store — typically a short-retention online-learning state.
- Cross-session correlation engine. Join sessions by the stable cross-session key (telemetry fingerprint) and compute coordination signals.
- Reclassification trigger. Specific signals — e.g. concepts/proxy-node-correlation-signal — fire a correlation-triggered policy path.
- Forced re-verification. The session is pulled back through a re-scoring loop, now with the correlation signal folded in as a strong prior. See patterns/correlation-triggered-reverification.
- Mitigation. The reclassified sessions are blocked or challenged.
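The six steps above can be sketched as a single pipeline. The class and method names are invented for illustration; the only load-bearing ideas are the soft label, the correlation store, and the trigger-driven re-scoring:

```python
from dataclasses import dataclass, field

@dataclass
class SessionState:
    label: str = "human"           # soft first-pass label: a prior, not final
    telemetry: list = field(default_factory=list)

class AdaptiveClassifier:
    def __init__(self, trigger):
        self.store = {}            # short-retention correlation store
        self.trigger = trigger     # e.g. a proxy-node correlation signal:
                                   # maps store -> flagged session IDs

    def first_pass(self, session_id, telemetry):
        # Step 1: emit a soft label; downstream must treat it as revocable.
        # Step 2: persist the session into the correlation store.
        state = SessionState(telemetry=[telemetry])
        self.store[session_id] = state
        return state.label

    def correlate(self):
        # Steps 3-4: cross-session join runs periodically; the trigger
        # decides which sessions become eligible for reclassification.
        for sid in self.trigger(self.store):
            self.reverify(sid)

    def reverify(self, session_id):
        # Steps 5-6: re-score with the correlation signal folded in as a
        # strong prior, then mitigate (here, simply flip the label).
        self.store[session_id].label = "bot"
```

In a real system the trigger would be the correlation engine itself; here it is injected as a callable so the flow is visible end to end.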
Design implications for downstream systems¶
An application consuming a bot-management verdict needs to support revocable sessions — because the verdict isn't final:
- Don't cache "verified human" in a long-lived session token.
- Support the bot-management system's re-verification challenges at any point in the session, not just at sign-in.
- Treat abuse-sensitive operations (checkout, signup, API token creation) as authorisation decisions against the current classification, not a cached past one.
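A hedged sketch of the third bullet: the abuse-sensitive operation re-queries the current verdict instead of trusting a cached "verified human" token. The function and exception names are hypothetical:

```python
# Downstream consumer pattern: checkout is an authorisation decision
# against the session's *current* classification, not a cached one.
class VerdictRevokedError(Exception):
    pass

def checkout(session_id, current_verdict):
    """current_verdict is a callable that asks the bot-management system
    for the session's current label (names are illustrative)."""
    if current_verdict(session_id) != "human":
        # The verdict was revoked since sign-in: challenge or block,
        # never proceed on the stale label.
        raise VerdictRevokedError(session_id)
    return "order-accepted"
```

The design choice is that revocation needs no push channel to the application: the application simply never caches the verdict across abuse-sensitive boundaries.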
This is one reason systems/vercel-botid positions Deep Analysis for "your most sensitive routes like login, checkout, AI agents, and APIs" — those routes already tolerate challenge-response flows.
Contrast with static signatures¶
Before adaptive reclassification, the standard posture was static rules — block this UA, block this IP range, block this ASN. Static rules:
- Fail open against novel actors (no signature means no detection, so the novel fleet passes).
- Require operator-authored updates in response to every new evasion: the manual-update treadmill that patterns/hands-free-adaptive-bot-mitigation exists to eliminate.
- Don't handle the FP-FN asymmetry well — rules are binary.
Adaptive reclassification is the bot-management product category's response to these failures.
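For contrast, a toy static rule engine makes the fail-open behaviour concrete. The specific UA strings and ASN are invented for illustration:

```python
# Static signature rules: binary, operator-authored, and blind to anything
# not already on the list.
BLOCKED_UAS = {"curl/7.68.0", "python-requests/2.31"}
BLOCKED_ASNS = {64512}

def static_verdict(user_agent, asn):
    if user_agent in BLOCKED_UAS or asn in BLOCKED_ASNS:
        return "block"
    # No signature matched: the novel stealth fleet sails through,
    # and stays through until an operator ships a new rule.
    return "allow"
```

There is no mechanism here by which the "allow" can later become a "block" without a human writing a rule, which is exactly the gap adaptive reclassification closes.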
Limitations¶
- Operator can finish before the window closes. If the operator's goal is a short-duration spike attack (credential stuffing, price scraping within 10 minutes), the window is long enough to accomplish the goal before reclassification.
- Classification state adds privacy surface. The correlation store is, by definition, cross-session tracking of users.
- Requires ML backend with online-learning capacity. A pure rule engine cannot implement this.
- Reclassification can cascade false positives. If the correlation signal fires on a legitimate user who happens to be using a proxy and the same browser as other users (e.g. corporate VPN), they get re-verified unnecessarily.
Seen in¶
- sources/2026-04-21-vercel-botid-deep-analysis-catches-a-sophisticated-bot-network-in-real-time — canonical wiki instance. The 2026-04-21 post is the first wiki example of the concept explicitly named and quantified; the ~4-minute human-classified window and the subsequent reclassification at 9:49 am are the concrete numbers.
Related¶
- concepts/proxy-node-correlation-signal — the specific trigger that closes the window in the canonical instance.
- concepts/browser-telemetry-fingerprint — the cross-session key that makes correlation possible.
- concepts/coordinated-bot-network — the attacker shape adaptive reclassification is designed to catch.
- concepts/bot-vs-human-frame — the asymmetric-cost reasoning.
- concepts/ml-bot-fingerprinting.
- patterns/correlation-triggered-reverification — the response pattern.
- patterns/hands-free-adaptive-bot-mitigation — the operational goal.
- systems/vercel-botid-deep-analysis — the canonical system.