
Llama 3.1

Llama 3.1 is Meta's 2024 open-weights foundation-model family (8B, 70B, and 405B parameters). In the context of this wiki it is notable as the adaptation base for domain-adapted enterprise LLMs.

Why "adapt" rather than "train from scratch"

From eBay's framing:

"Training a large-scale LLM from scratch is a very time- and resource-intensive process. In order to move fast, one could use existing pretrained models, such as Llama 3.1, for their use cases. However, these models typically lack specific knowledge, in our case about the e-commerce domain."

Llama 3.1's role in an enterprise adaptation pipeline is thus: known-capable open base → continued pretraining on domain data plus replay of general data → fine-tuning + RLHF → deploy. The approach trades the time cost of a from-scratch build against the capability ceiling of starting from a model you didn't shape. Stub — expand as more adaptation sources cite Llama 3.1 as a base.
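The "continued-pretrain with domain data + replay" step amounts to mixing a fraction of general-corpus documents back into the domain stream so the model doesn't forget its base capabilities. A minimal sketch of that mixing, assuming a hypothetical `replay_frac` ratio (the source does not state one) and in-memory document lists:

```python
import random

def mix_with_replay(domain_docs, general_docs, replay_frac=0.2, seed=0):
    """Build a continued-pretraining stream: mostly domain documents,
    with a fraction of general-corpus "replay" documents interleaved
    to mitigate catastrophic forgetting.

    replay_frac is the share of the final stream drawn from the
    general corpus (an illustrative default, not a value from the source).
    """
    rng = random.Random(seed)
    # Number of replay docs so that replay makes up replay_frac of the mix.
    n_replay = int(len(domain_docs) * replay_frac / (1 - replay_frac))
    replay = [rng.choice(general_docs) for _ in range(n_replay)]
    mixed = list(domain_docs) + replay
    rng.shuffle(mixed)
    return mixed

# With 80 domain docs and replay_frac=0.2, the stream gains 20 replay docs.
stream = mix_with_replay(
    [f"domain-{i}" for i in range(80)],
    [f"general-{i}" for i in range(100)],
    replay_frac=0.2,
)
```

In practice this mixing happens at the token-batch level inside the data loader rather than over whole documents, but the ratio arithmetic is the same.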
