

Anytime optimization algorithm

Definition

An anytime algorithm is an optimization procedure that produces a valid-but-possibly-suboptimal answer at every step, and improves that answer monotonically with more iterations. You can stop at any time and get some answer; you can also keep going and get a better one. The defining trait is that compute budget maps directly to output quality: the budget is a knob the caller controls, and turning it up buys a better answer.
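The shape is easiest to see in code. A minimal sketch, using random search over a toy objective (any proposal mechanism works; the anytime property comes from tracking the best-so-far, not from the search strategy):

```python
import random

def anytime_minimize(f, sample, budget):
    """Anytime random search: after every iteration, (best_x, best_val)
    is a valid answer, and it can only improve as the budget grows."""
    best_x = sample()
    best_val = f(best_x)
    for _ in range(budget):
        x = sample()
        val = f(x)
        if val < best_val:  # best-so-far improves monotonically
            best_x, best_val = x, val
    return best_x, best_val

# Stopping early still yields a valid answer; with the same random
# stream, a larger budget can only match or beat a smaller one.
random.seed(0)
x_small, v_small = anytime_minimize(lambda x: (x - 3) ** 2,
                                    lambda: random.uniform(-10, 10), 10)
random.seed(0)
x_large, v_large = anytime_minimize(lambda x: (x - 3) ** 2,
                                    lambda: random.uniform(-10, 10), 1000)
```

Here `v_large <= v_small` is guaranteed because the larger run sees a superset of the same candidates; that containment is exactly the budget-to-quality mapping described above.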

Why it matters for system design

Anytime algorithms are the right shape whenever:

  1. The search space is too large for exhaustive enumeration.
  2. The quality-vs-compute curve is continuous and non-trivial (not just "done/not-done").
  3. The caller has heterogeneous time budgets — a slow query deserves more tuning than a fast one.

Database query tuning, hyperparameter search, game-tree search (MCTS), Bayesian optimization, and LLM-agent planning loops all fit this shape.

The rollout-budget instance

Databricks' LLM join-order agent (Source: sources/2026-04-22-databricks-are-llm-agents-good-at-join-order-optimization) is an explicit anytime algorithm built on a rollout budget:

"Our agent progressively improves the workload with each tested plan (sometimes called a rollout), creating a simple anytime algorithm where larger time budgets can be translated into further query performance. Of course, eventually query performance will stop improving."

Each rollout is one invocation of the agent's execute-plan tool on a candidate join order. The best plan seen so far is always available; more rollouts find a better one (up to a diminishing-returns asymptote). See patterns/rollout-budget-anytime-plan-search.
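The loop structure this describes can be sketched as follows. `propose_join_order` and `execute_plan` are hypothetical stand-ins for the agent's proposal step and its execute-plan tool, not Databricks APIs:

```python
def tune_query(query, propose_join_order, execute_plan, rollouts):
    """Rollout-budget anytime search over join orders (sketch).

    After every rollout, best_plan holds a valid executable plan;
    more rollouts can only improve (never worsen) best_latency.
    """
    best_plan, best_latency = None, float("inf")
    history = []  # (plan, latency) pairs fed back to the proposer
    for _ in range(rollouts):
        plan = propose_join_order(query, history)  # one rollout
        latency = execute_plan(plan)               # test the candidate
        history.append((plan, latency))
        if latency < best_latency:
            best_plan, best_latency = plan, latency
    return best_plan, best_latency
```

Because the return value is the best plan observed rather than the last one proposed, interrupting the loop at any rollout count still yields a usable answer.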

The knob-shape API

Anytime algorithms expose their compute budget as a first-class parameter. For the Databricks agent this is rollouts=N. For Bayesian-optimization-over-plans (see concepts/bayesian-optimization-over-parameter-space) it is n_iter. For MCTS it is simulations or wall-clock. The caller chooses based on priority:

| Caller situation | Budget |
| --- | --- |
| Interactive slow-query investigation | Small (tens of rollouts) |
| Nightly batch optimization sweep | Large (hundreds per query) |
| Workload-wide regression | Very large (thousands, parallelised) |

Diminishing returns and stopping criteria

The post is explicit about the asymptote: "eventually query performance will stop improving." Production implementations typically add:

  • Plateau detection. Stop if no improvement in K rollouts.
  • Budget ceilings. Hard cap on compute cost.
  • Quality thresholds. Stop once a target speedup is reached.

Plateau detection is especially natural for agent-based search: if the agent keeps proposing variations of the same plan, the exploration axis is exhausted and further rollouts are waste.
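The three criteria compose into a single stopping predicate. A minimal sketch, assuming the caller tracks the list of observed latencies and the best latency so far (the parameter names here are illustrative, not from the source):

```python
def should_stop(latencies, best_latency, *,
                patience, max_rollouts, target_latency):
    """Return True when any production stopping criterion fires."""
    # Budget ceiling: hard cap on compute cost.
    if len(latencies) >= max_rollouts:
        return True
    # Quality threshold: stop once the target speedup is reached.
    if best_latency <= target_latency:
        return True
    # Plateau detection: no improvement in the last `patience` rollouts
    # (every recent observation is strictly worse than the best-so-far).
    recent = latencies[-patience:]
    if len(recent) == patience and min(recent) > best_latency:
        return True
    return False
```

The plateau check is the interesting one for agent-based search: if the agent keeps proposing near-duplicates of the incumbent plan, `min(recent)` never beats `best_latency`, and the loop exits instead of burning budget on exhausted exploration.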

Contrast with fixed-time algorithms

Most production software is not anytime: a traditional query optimizer runs to completion and returns a single plan, with no intermediate answer to stop on. The anytime shape is what enables budget-driven quality control — a property worth paying for when the underlying problem is expensive and quality-sensitive.
