---
library_name: transformers
language:
- en
---
## Model Information
We introduce UltraLong-8B, a series of ultra-long context language models designed to process extensive sequences of text (up to 1M, 2M, and 4M tokens) while maintaining competitive performance on standard benchmarks. Built on Llama-3.1, UltraLong-8B leverages a systematic training recipe that combines efficient continued pretraining with instruction tuning to enhance long-context understanding and instruction-following capabilities. This approach enables our models to scale their context windows efficiently without sacrificing general performance.
## The UltraLong Models
- UltraLong/Llama-3.1-8B-UltraLong-1M-Instruct
- UltraLong/Llama-3.1-8B-UltraLong-2M-Instruct
- UltraLong/Llama-3.1-8B-UltraLong-4M-Instruct
## Uses
Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the `Auto` classes with the `generate()` function.

Make sure to update your transformers installation via `pip install --upgrade transformers`.
```python
import transformers
import torch

model_id = "ultralong/Llama-3.1-8B-UltraLong-1M-Instruct"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
)
# The last entry in generated_text is the assistant's reply.
print(outputs[0]["generated_text"][-1])
```
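Alternatively, here is a minimal sketch of the `Auto`-class route with `generate()`; the chat-template call and generation settings below are one reasonable configuration, not the only one.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ultralong/Llama-3.1-8B-UltraLong-1M-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Render the conversation with the model's chat template.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```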
## Model Card
- Base model: meta-llama/Llama-3.1-8B-Instruct
- Continued pretraining: 1B tokens of per-source upsampled SlimPajama data at a 1M sequence length.
- Supervised fine-tuning (SFT): 1B tokens on open-source instruction datasets across general, mathematics, and code domains.
- Maximum context window: 1M tokens (see the sketch below)
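To check the advertised window programmatically, here is a minimal sketch; it assumes the window is exposed through the standard `max_position_embeddings` config field, as for other Llama-based checkpoints.

```python
from transformers import AutoConfig

# Inspect the model's configured context window (assumed to be surfaced
# via max_position_embeddings, as with other Llama-based checkpoints).
config = AutoConfig.from_pretrained("ultralong/Llama-3.1-8B-UltraLong-1M-Instruct")
print(config.max_position_embeddings)
```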
## Evaluation Results
We evaluate UltraLong-8B on a diverse set of benchmarks, including long-context tasks (e.g., RULER, LV-Eval, and InfiniteBench) and standard tasks (e.g., MMLU, MATH, GSM-8K, and HumanEval). UltraLong-8B achieves superior performance on ultra-long context tasks while maintaining competitive results on standard benchmarks.
### Needle in a Haystack

*(figure: needle-in-a-haystack retrieval results)*

### Long context evaluation

*(figure: results on long-context benchmarks)*

### Standard capability evaluation

*(figure: results on standard benchmarks)*
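For intuition, a needle-in-a-haystack probe can be scripted against the chat pipeline. The sketch below is a minimal illustration; the filler text, needle, and insertion depth are arbitrary choices, not the exact protocol used in the evaluation above.

```python
import transformers
import torch

model_id = "ultralong/Llama-3.1-8B-UltraLong-1M-Instruct"
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

# Bury one fact (the "needle") in the middle of repetitive filler text
# (the "haystack") and ask the model to retrieve it. Sizes are illustrative.
needle = "The secret passphrase is 'blue-harbor-42'."
filler = "The sky was clear and the sea was calm. " * 20000
midpoint = len(filler) // 2
haystack = filler[:midpoint] + needle + " " + filler[midpoint:]

messages = [
    {"role": "user", "content": haystack + "\n\nWhat is the secret passphrase?"},
]
outputs = pipeline(messages, max_new_tokens=32)
print(outputs[0]["generated_text"][-1])
```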

## Correspondence to
Chejian Xu ([email protected]), Wei Ping ([email protected])