
PATTERN Cited by 1 source

Hackathon to platform

Start an internal platform from a short, time-boxed hackathon prototype that validates the core value proposition with unpolished code — not from a multi-quarter platform initiative. Once the prototype proves people want the thing, iterate on user feedback until it reaches production quality.

Intent

Internal platform teams often fall into two traps:

  • Over-spec trap. A multi-quarter roadmap is written before anyone talks to users; the team delivers the wrong thing on schedule.
  • Demo-ware trap. The prototype ships as "v1" with no follow-through on the platform substrate beneath it.

The hackathon-to-platform pattern threads between them: the hackathon validates direction with real users; the subsequent iterations invest in substrate only once direction is proven.

Mechanism

  1. Time-box (1–5 days). Build the smallest useful thing — not a slide deck, not a design doc. "Unpolished but immediately improves basic workflows" is the bar.
  2. Show it to the intended users. Real engineers, real workflow. Watch their reaction. No reaction = wrong direction.
  3. Interview + shadow to understand pain. After the prototype lands, talk to the service teams it targets. Shadow on-call rotations. Map the actual debug/operations journey.
  4. Each subsequent iteration solves one feedback-driven pain point. Not "build the platform now" — incremental additions informed by how users used the prototype.
  5. Invest in the substrate only when the surface demands it. For agent platforms, this is when you hit scaling limits: fragmented data sources across clouds/regions, fragmented access control, fragmented evaluation. At that point you stop adding features and build the unification layer.
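To make step 1's bar concrete — "unified a few core database metrics and dashboards into a single view" — here is a minimal sketch of what a two-day smallest-useful-thing might look like. All names, fields, and data are hypothetical stand-ins; a real prototype would pull from the existing fragmented dashboards rather than hard-coded rows.

```python
# Hypothetical two-day prototype: collapse per-database metrics into one
# text view, worst offenders first. Illustrative only — not Storex code.
from dataclasses import dataclass


@dataclass
class DbMetrics:
    shard: str
    qps: float
    p99_latency_ms: float
    replication_lag_s: float


def fetch_metrics() -> list[DbMetrics]:
    # In a real hackathon build these rows would come from the existing,
    # fragmented data sources; hard-coded here so the sketch is runnable.
    return [
        DbMetrics("us-east-1/shard-01", 1200.0, 45.0, 0.8),
        DbMetrics("eu-west-1/shard-07", 300.0, 210.0, 12.5),
    ]


def unified_view(metrics: list[DbMetrics], p99_budget_ms: float = 100.0) -> str:
    # One table, sorted by p99 descending, with a crude over-budget flag.
    rows = sorted(metrics, key=lambda m: m.p99_latency_ms, reverse=True)
    lines = [f"{'SHARD':<22} {'QPS':>8} {'P99 ms':>8} {'LAG s':>6}  FLAG"]
    for m in rows:
        flag = "SLOW" if m.p99_latency_ms > p99_budget_ms else "ok"
        lines.append(
            f"{m.shard:<22} {m.qps:>8.0f} {m.p99_latency_ms:>8.1f}"
            f" {m.replication_lag_s:>6.1f}  {flag}"
        )
    return "\n".join(lines)


if __name__ == "__main__":
    print(unified_view(fetch_metrics()))
```

The point is the scope, not the code: no auth layer, no storage, no config system — just enough unification that an on-call engineer's first question ("which shard is hurting?") gets answered in one place.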

Why it works

  • Fastest way to discover you were wrong. Two days of code reveals misconceptions that two months of design review wouldn't.
  • Forces customer-obsession to be real. Without a working prototype you can't shadow users effectively — "look at this mock" doesn't surface the same feedback.
  • Natural budget ramp. A hackathon win unlocks the next round of iterative investment; sustained investment is unlocked only once there is user pull.

Relationship to other patterns

  • concepts/central-first-sharded-architecture — the substrate layer this pattern eventually stops adding features to build, once feature accretion hits the scaling limits described in the mechanism.

Tradeoffs

  • Risk of premature scope lock-in. If the prototype lands on the wrong direction but gains organizational momentum, course-correcting becomes costly.
  • Substrate debt accumulates if you only ever bolt on user-facing features. Databricks' article is explicit about the moment they stopped and built the central-first sharded foundation (see concepts/central-first-sharded-architecture).
  • Not all problems have a 2-day prototype. Infrastructure-heavy problems (storage engines, consensus systems) rarely fit this mold.

Seen in

  • sources/2025-12-03-databricks-ai-agent-debug-databases — Databricks' Storex platform: "We didn't start with a large, multi-quarter initiative. Instead, we tested the idea during a company-wide hackathon. In two days, we built a simple prototype that unified a few core database metrics and dashboards into a single view. It wasn't polished, but it immediately improved basic investigation workflows." The subsequent evolution (v1 static SOP → v2 anomaly detection → v3 chat assistant) followed user feedback.