SYSTEM
LangChain¶
Definition¶
LangChain is a Python (and JavaScript) LLM-orchestration library providing chain abstractions for sequential and parallel LLM invocations, tool calling, output parsers, and adapters for LLM providers. Yelp uses LangChain's async-chain capability to run BAA's four question-analysis classifiers in parallel, so pipeline latency is the maximum of the four calls rather than their sum.
Role at Yelp (2026-03-27)¶
"Asynchronous Pipeline: We built the question analysis agents as asynchronous chains invoked through langchain, which meant that they can run in parallel and we just need to wait for the longest agent to complete. We also added early stopping to the components in the pipeline when the trust and safety classifier rejects the question to avoid waiting longer before responding." (Source: sources/2026-03-27-yelp-building-biz-ask-anything-from-prototype-to-product)
Two load-bearing properties:
- Parallel async invocation — Trust & Safety + Inquiry Type + Content Source Selection + Keyword Generation run concurrently. See patterns/parallel-pre-retrieval-classifier-pipeline.
- Early stopping — T&S rejection cancels downstream work; saves latency + cost on unsafe questions.
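Both properties can be sketched with stdlib asyncio alone (in the real system each classifier is a LangChain chain invoked via its async interface; the classifier bodies and timings below are hypothetical stand-ins, not Yelp's code):

```python
import asyncio

# Hypothetical stand-ins for BAA's four question-analysis classifiers.
# Sleeps simulate per-classifier LLM latency.
async def trust_and_safety(q: str) -> dict:
    await asyncio.sleep(0.01)
    return {"ok": "unsafe" not in q}  # toy safety check

async def inquiry_type(q: str) -> dict:
    await asyncio.sleep(0.03)
    return {"type": "hours"}

async def content_source_selection(q: str) -> dict:
    await asyncio.sleep(0.02)
    return {"source": "reviews"}

async def keyword_generation(q: str) -> dict:
    await asyncio.sleep(0.02)
    return {"keywords": q.split()}

async def analyze(question: str):
    """Run all four classifiers concurrently (latency = max, not sum);
    cancel the still-running three if Trust & Safety rejects."""
    ts = asyncio.ensure_future(trust_and_safety(question))
    others = [
        asyncio.ensure_future(c(question))
        for c in (inquiry_type, content_source_selection, keyword_generation)
    ]
    verdict = await ts
    if not verdict["ok"]:
        for t in others:
            t.cancel()  # early stopping: don't wait on doomed work
        await asyncio.gather(*others, return_exceptions=True)  # reap cancelled tasks
        return None
    return [verdict, *await asyncio.gather(*others)]

print(asyncio.run(analyze("what are the opening hours")))
```

A safe question yields all four classifier outputs after roughly the slowest call; a rejected one returns `None` without paying for the remaining three.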
Caveats¶
- Stub page. Deeper LangChain architecture (LCEL, Runnable interface, agent types) is not walked here.
- LangChain's popularity as a general-purpose LLM library means many other pages could plausibly cite it under Seen in; the current canonical wiki reference is Yelp BAA's async-chain parallel classifier fleet.
Seen in¶
- sources/2026-03-27-yelp-building-biz-ask-anything-from-prototype-to-product — async-chain orchestration of BAA's four-classifier question-analysis fleet.
Related¶
- systems/yelp-biz-ask-anything — the canonical consumer.
- patterns/parallel-pre-retrieval-classifier-pipeline — the pattern LangChain's async chains enable.
- companies/yelp