
CONCEPT

Sliding-window training

Definition

Sliding-window training is a time-series training strategy where the training dataset is a fixed-length window that slides forward in time with each retraining cycle. Instead of accumulating all historical data forever, the window drops old data as new data enters — bounding training cost, keeping the model responsive to recent dynamics, and avoiding overweighting ancient history.
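The mechanism can be sketched in a few lines of plain Python (the data, dates, and one-year window below are illustrative, not taken from any particular system):

```python
from datetime import date, timedelta

# Hypothetical daily observations: (timestamp, value) pairs.
history = [(date(2023, 1, 1) + timedelta(days=i), float(i)) for i in range(900)]

def sliding_window(rows, as_of, window_days):
    """Keep only rows inside the fixed-length window ending at `as_of`."""
    cutoff = as_of - timedelta(days=window_days)
    return [(ts, y) for ts, y in rows if cutoff < ts <= as_of]

# Each retraining cycle the window slides forward: old rows fall out,
# new rows enter, and the training-set size stays bounded.
train_set = sliding_window(history, as_of=date(2025, 6, 1), window_days=365)
```

At the next cycle only `as_of` changes; the filter itself is stateless, which is what keeps training cost flat as total history grows.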

Why sliding-window

  • Bounded training cost. Training time scales with window length × number of entities, so fixing the window length keeps training cost roughly flat over time even as total history grows.
  • Model recency. Recent patterns (new products, evolving customer behaviour, new competitors) get representative weight, while very old data contributes little signal.
  • Captures seasonality. The window is sized to cover enough seasonal cycles (multiple years for annual seasonality) without over-representing long-gone regimes.
  • Limits concept drift from distant history. Sales patterns from a decade ago may be actively misleading for a fast-moving catalogue.

Canonical instance (Zalando ZEOS)

  • Window length: 2.5 years (selected from 3 years of available history).
  • Rationale verbatim: "we use a 2.5-year timeframe to enable the model to capture seasonal patterns without overemphasising older historical performance."
  • Entities: 5 million SKUs at size + colour granularity.
  • Cadence: weekly retraining.
  • Producer: systems/zeos-demand-forecaster via Nixtla MLForecast on LightGBM.
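A minimal sketch of the ZEOS-style setup, assuming weekly data and treating 2.5 years as roughly 130 weekly observations (pure Python stands in for the MLForecast/LightGBM fit, which is elided):

```python
from datetime import date, timedelta

WINDOW_WEEKS = 130  # ~2.5 years of weekly data, matching the stated window

def window_slice(rows, as_of):
    """Select the fixed-length training window ending at `as_of`."""
    cutoff = as_of - timedelta(weeks=WINDOW_WEEKS)
    return [(ts, y) for ts, y in rows if cutoff < ts <= as_of]

# Illustrative weekly observations for one SKU, growing over time.
start = date(2020, 1, 6)
rows = [(start + timedelta(weeks=i), 1.0) for i in range(300)]

# Two weekly-cadence retrains a year apart: history grew by 52 rows,
# but the training window stays fixed at 130 rows in both cycles.
early = window_slice(rows, as_of=start + timedelta(weeks=200))
late = window_slice(rows, as_of=start + timedelta(weeks=252))
```

In the real pipeline each slice would be handed to the forecaster for a full refit; the point of the sketch is that the per-cycle training-set size is constant regardless of how much history has accumulated.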
