
Llama 2

Llama 2 is Meta's second-generation open-weight foundation model family, released in July 2023 in 7B / 13B / 70B sizes, with chat variants produced via supervised fine-tuning (SFT) + reinforcement learning from human feedback (RLHF). Meta's open release was a pivotal moment for public access to competitive-tier LLM weights.

Significance on this wiki

Llama 2's full architectural disclosures (Transformer variant, training-data composition, RLHF process) appear in the Llama 2 paper + blog, which are not yet ingested on this wiki; this page is a stub oriented toward the wiki's internal cross-reference needs.

Training-pipeline shape (as reused by downstream teams)

Llama 2 is a base model (autoregressive next-token) in its non-chat form. The 2024 Meta RCA team's adaptation recipe was:

  1. Continued pre-training on internal Meta corpora ("limited and approved internal wikis, Q&As, and code").
  2. Mixed SFT combining Llama 2's original SFT data + internal context + a dedicated RCA SFT dataset (~5,000 instruction-tuning examples).
  3. A second SFT round to produce logprob-rankable ordered lists as output.
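Step 3's "logprob-rankable ordered lists" can be sketched as follows. This is a minimal illustration, not the RCA team's actual code: it assumes you already have per-token log-probabilities for each candidate (as a causal LM like Llama 2 would emit when scoring a completion) and ranks candidates by length-normalized log-probability. All candidate strings and scores below are made up for illustration.

```python
def rank_candidates(candidate_logprobs):
    """Rank candidate completions by mean per-token log-probability.

    candidate_logprobs: dict mapping candidate string -> list of
    per-token log-probabilities, as scored by a causal LM.
    Using the mean (rather than the sum) length-normalizes the
    score so shorter candidates are not automatically favored.
    """
    scored = {
        cand: sum(lps) / len(lps)
        for cand, lps in candidate_logprobs.items()
    }
    # Highest (least negative) mean logprob first.
    return sorted(scored, key=scored.get, reverse=True)


# Illustrative (made-up) per-token logprobs for three hypothetical
# root-cause candidates.
ranking = rank_candidates({
    "config change": [-0.2, -0.1, -0.3],      # mean -0.2
    "bad deploy": [-1.5, -0.9],               # mean -1.2
    "dependency outage": [-0.8, -0.4, -0.6],  # mean -0.6
})
```

The design point is that once the model is tuned to emit candidates as an ordered list, each list element can be scored and re-ranked offline from the model's own logprobs, without a separate reward model.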
