---
library_name: transformers
language:
- en
---

# Model Information

We introduce **UltraLong-8B**, a series of ultra-long context language models designed to process extensive sequences of text (up to 1M, 2M, and 4M tokens) while maintaining competitive performance on standard benchmarks. Built on Llama-3.1, UltraLong-8B leverages a systematic training recipe that combines efficient continued pretraining with instruction tuning to enhance long-context understanding and instruction-following capabilities. This approach enables our models to scale their context windows efficiently without sacrificing general performance.


## The UltraLong Models

- [nvidia/Llama-3.1-8B-UltraLong-1M-Instruct](https://huggingface.co/nvidia/Llama-3.1-8B-UltraLong-1M-Instruct)
- [nvidia/Llama-3.1-8B-UltraLong-2M-Instruct](https://huggingface.co/nvidia/Llama-3.1-8B-UltraLong-2M-Instruct)
- [nvidia/Llama-3.1-8B-UltraLong-4M-Instruct](https://huggingface.co/nvidia/Llama-3.1-8B-UltraLong-4M-Instruct)


## Uses

Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or the Auto classes with the `generate()` function.

Make sure to update your transformers installation via `pip install --upgrade transformers`.

```python
import transformers
import torch

model_id = "nvidia/Llama-3.1-8B-UltraLong-1M-Instruct"

# Load the model in bfloat16 and shard it across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

# Chat-style input: a system prompt followed by a user turn.
messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
)
# The last message in the returned conversation is the assistant's reply.
print(outputs[0]["generated_text"][-1])
```
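
Alternatively, the same conversation can be run with the Auto classes and `generate()` directly. A minimal sketch, assuming the standard Llama 3.1 chat template shipped with the tokenizer (the sampling settings here are illustrative, not prescriptive):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/Llama-3.1-8B-UltraLong-1M-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Render the conversation with the model's chat template and tokenize it.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```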

## Model Card

* Base model: [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct)
* Continued pretraining: 1B tokens of per-source upsampled SlimPajama data at a 1M-token sequence length.
* Supervised fine-tuning (SFT): 1B tokens on open-source instruction datasets across general, mathematics, and code domains.
* Maximum context window: 1M tokens
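
To sanity-check the extended context window of a downloaded checkpoint, you can inspect its configuration; a quick sketch, assuming the standard Llama config field:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("nvidia/Llama-3.1-8B-UltraLong-1M-Instruct")
# For Llama-style configs this reports the maximum supported sequence length
# (expected to be on the order of 1M tokens for the 1M model).
print(config.max_position_embeddings)
```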

## Evaluation Results

We evaluate UltraLong-8B on a diverse set of benchmarks, including long-context tasks (e.g., RULER, LV-Eval, and InfiniteBench) and standard tasks (e.g., MMLU, MATH, GSM-8K, and HumanEval). UltraLong-8B achieves superior performance on ultra-long context tasks while maintaining competitive results on standard benchmarks.

### Needle in a Haystack

<img width="80%" alt="image" src="Llama-3.1-8B-UltraLong-1M-Instruct.png">

### Long context evaluation

<img width="80%" alt="image" src="long_benchmark.png">

### Standard capability evaluation

<img width="80%" alt="image" src="standard_benchmark.png">

## Correspondence to
Chejian Xu ([email protected]), Wei Ping ([email protected])

## Citation

<pre>

</pre>