---
license: cc-by-nc-4.0
datasets:
- Salesforce/xlam-function-calling-60k
language:
- en
pipeline_tag: text-generation
tags:
- function-calling
- LLM Agent
- tool-use
- llama
- qwen
- pytorch
- LLaMA-factory
library_name: transformers
---
<p align="center">
<img width="500px" alt="xLAM" src="https://huggingface.co/datasets/jianguozhang/logos/resolve/main/xlam-no-background.png">
</p>
<p align="center">
<a href="https://arxiv.org/abs/2504.03601">[Paper]</a> |
<a href="https://apigen-mt.github.io/">[Homepage]</a> |
<a href="https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k">[Dataset (Coming Soon)]</a> |
<a href="https://github.com/SalesforceAIResearch/xLAM">[Github]</a>
</p>
<hr>
# Welcome to the xLAM-2 Model Family!
[Large Action Models (LAMs)](https://blog.salesforceairesearch.com/large-action-models/) are advanced language models designed to enhance decision-making by translating user intentions into executable actions. As the **brains of AI agents**, LAMs autonomously plan and execute tasks to achieve specific goals, making them invaluable for automating workflows across diverse domains.
**This model release is for research purposes only.**
The new **xLAM-2** series, built on our most advanced data synthesis, processing, and training pipelines, marks a significant leap in **multi-turn conversation** and **tool usage**. The models are trained with our novel APIGen-MT framework, which generates high-quality training data through simulated agent-human interactions. They achieve state-of-the-art performance on the [**BFCL**](https://gorilla.cs.berkeley.edu/leaderboard.html) and **τ-bench** benchmarks, outperforming frontier models like GPT-4o and Claude 3.5. Notably, even our smaller models demonstrate superior capabilities in multi-turn scenarios while maintaining exceptional consistency across trials.
We've also refined the **chat template** and **vLLM integration**, making it easier to build advanced AI agents. Compared to previous xLAM models, xLAM-2 offers superior performance and seamless deployment across applications.
<p align="center">
<img width="100%" alt="Model Performance Overview" src="https://github.com/apigen-mt/apigen-mt.github.io/blob/main/img/model_board.png?raw=true">
<br>
<small><i>Comparative performance of larger xLAM-2-fc-r models (8B-70B, trained with APIGen-MT data) against state-of-the-art baselines on function-calling (BFCL v3, as of 04/02/2025) and agentic (τ-bench) capabilities.</i></small>
</p>
## Table of Contents
- [Usage](#usage)
  - [Basic Usage with Hugging Face Chat Template](#basic-usage-with-hugging-face-chat-template)
  - [Using vLLM for Inference](#using-vllm-for-inference)
    - [Setup and Serving](#setup-and-serving)
    - [Testing with OpenAI API](#testing-with-openai-api)
- [Benchmark Results](#benchmark-results)
- [Citation](#citation)
---
## Model Series
The [xLAM](https://huggingface.co/collections/Salesforce/xlam-models-65f00e2a0a63bbcd1c2dade4) series is significantly better across the board, including on general tasks and function calling.
For the same number of parameters, the models have been fine-tuned across a wide range of agent tasks and scenarios, all while preserving the capabilities of the original base model.
| Model | # Total Params | Context Length | Category | Download Model | Download GGUF files |
|------------------------|----------------|------------|-------|----------------|----------|
| Llama-xLAM-2-70b-fc-r | 70B | 128k | Multi-turn Conversation, Function-calling | [π€ Link](https://huggingface.co/Salesforce/Llama-xLAM-2-70b-fc-r) | NA |
| Llama-xLAM-2-8b-fc-r | 8B | 128k | Multi-turn Conversation, Function-calling | [π€ Link](https://huggingface.co/Salesforce/Llama-xLAM-2-8b-fc-r) | [π€ Link](https://huggingface.co/Salesforce/Llama-xLAM-2-8b-fc-r-gguf) |
| xLAM-2-32b-fc-r | 32B | 32k (max 128k)* | Multi-turn Conversation, Function-calling | [π€ Link](https://huggingface.co/Salesforce/xLAM-2-32b-fc-r) | NA |
| xLAM-2-3b-fc-r | 3B | 32k (max 128k)* | Multi-turn Conversation, Function-calling | [π€ Link](https://huggingface.co/Salesforce/xLAM-2-3b-fc-r) | [π€ Link](https://huggingface.co/Salesforce/xLAM-2-3b-fc-r-gguf) |
| xLAM-2-1b-fc-r | 1B | 32k (max 128k)* | Multi-turn Conversation, Function-calling | [π€ Link](https://huggingface.co/Salesforce/xLAM-2-1b-fc-r) | [π€ Link](https://huggingface.co/Salesforce/xLAM-2-1b-fc-r-gguf) |
***Note:** The default context length for Qwen-2.5-based models is 32k, but you can use techniques like YaRN (Yet another RoPE extensioN method) to achieve a maximum context length of 128k. Please refer to [here](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct#processing-long-texts) for more details.
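One way to opt into YaRN with Transformers is to override the model's RoPE scaling configuration before loading. The snippet below is a minimal sketch following the Qwen2.5 recipe linked above; the scaling factor of 4.0 (4.0 × 32k ≈ 128k) and the choice of `xLAM-2-3b-fc-r` are illustrative assumptions, so check the Qwen documentation for the recommended values.

```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM

# Sketch: enable YaRN RoPE scaling to extend the 32k default context.
# Assumption: the values below mirror the Qwen2.5 long-context recipe.
config = AutoConfig.from_pretrained("Salesforce/xLAM-2-3b-fc-r")
config.rope_scaling = {
    "rope_type": "yarn",                        # requires a Transformers version with YaRN support
    "factor": 4.0,                              # 4.0 * 32k ~= 128k tokens
    "original_max_position_embeddings": 32768,
}
model = AutoModelForCausalLM.from_pretrained(
    "Salesforce/xLAM-2-3b-fc-r",
    config=config,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```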
You can also explore our previous xLAM series [here](https://huggingface.co/collections/Salesforce/xlam-models-65f00e2a0a63bbcd1c2dade4).
The `-fc` suffix indicates that the models are fine-tuned for **function calling** tasks, while the `-r` suffix signifies a **research** release.
All models are fully compatible with vLLM and Transformers-based inference frameworks.
## Usage
### Framework versions
- Transformers 4.46.1 (or later)
- PyTorch 2.5.1+cu124 (or later)
- Datasets 3.1.0 (or later)
- Tokenizers 0.20.3 (or later)
### Basic Usage with Hugging Face Chat Template
The new xLAM models are designed to work seamlessly with the Hugging Face Transformers library and utilize natural chat templates for an easy and intuitive conversational experience. Below are examples of how to use these models.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Salesforce/Llama-xLAM-2-3b-fc-r")
model = AutoModelForCausalLM.from_pretrained("Salesforce/Llama-xLAM-2-3b-fc-r", torch_dtype=torch.bfloat16, device_map="auto")
# Example conversation with a tool call
messages = [
{"role": "user", "content": "Hi, how are you?"},
{"role": "assistant", "content": "Thanks. I am doing well. How can I help you?"},
{"role": "user", "content": "What's the weather like in London?"},
]
tools = [
{
"name": "get_weather",
"description": "Get the current weather for a location",
"parameters": {
"type": "object",
"properties": {
"location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"},
"unit": {"type": "string", "enum": ["celsius", "fahrenheit"], "description": "The unit of temperature to return"}
},
"required": ["location"]
}
}
]
print("====== prompt after applying chat template ======")
print(tokenizer.apply_chat_template(messages, tools=tools, add_generation_prompt=True, tokenize=False))
inputs = tokenizer.apply_chat_template(messages, tools=tools, add_generation_prompt=True, return_dict=True, return_tensors="pt")
input_ids_len = inputs["input_ids"].shape[-1] # Get the length of the input tokens
inputs = {k: v.to(model.device) for k, v in inputs.items()}
print("====== model response ======")
outputs = model.generate(**inputs, max_new_tokens=256)
generated_tokens = outputs[:, input_ids_len:] # Slice the output to get only the newly generated tokens
print(tokenizer.decode(generated_tokens[0], skip_special_tokens=True))
```
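To carry the conversation past a tool call, one common pattern is to execute the requested function and append its result as a `tool` message before generating again. The continuation below is a hedged sketch rather than the model's documented protocol: it assumes the model emitted its tool calls as a JSON list (adapt the parsing to the template's actual output format), and the weather result is a stubbed placeholder.

```python
import json

# Illustrative second turn: parse the generated tool call, run the tool
# (stubbed here), and feed the result back through the chat template.
response_text = tokenizer.decode(generated_tokens[0], skip_special_tokens=True)
try:
    tool_calls = json.loads(response_text)  # assumption: a JSON list of calls
except json.JSONDecodeError:
    tool_calls = []  # plain-text reply; nothing to execute

if tool_calls:
    messages.append({"role": "assistant", "content": response_text})
    messages.append({
        "role": "tool",
        "name": tool_calls[0]["name"],
        "content": '{"temperature": 15, "unit": "celsius"}',  # placeholder tool output
    })
    inputs = tokenizer.apply_chat_template(
        messages, tools=tools, add_generation_prompt=True,
        return_dict=True, return_tensors="pt",
    )
    inputs = {k: v.to(model.device) for k, v in inputs.items()}
    outputs = model.generate(**inputs, max_new_tokens=256)
    print(tokenizer.decode(outputs[0, inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```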
### Using vLLM for Inference
The xLAM models can also be efficiently served using vLLM for high-throughput inference. Please use `vllm>=0.6.5` since earlier versions will cause degraded performance for Qwen-based models.
#### Setup and Serving
1. Install vLLM with the required version:
```bash
pip install "vllm>=0.6.5"
```
2. Download the tool parser plugin to your local path:
```bash
wget https://huggingface.co/Salesforce/xLAM-2-1b-fc-r/raw/main/xlam_tool_call_parser.py
```
3. Start the OpenAI API-compatible endpoint:
```bash
vllm serve Salesforce/xLAM-2-1b-fc-r \
--enable-auto-tool-choice \
--tool-parser-plugin ./xlam_tool_call_parser.py \
--tool-call-parser xlam \
--tensor-parallel-size 1
```
Note: Ensure that the tool parser plugin file is downloaded and that the path specified in `--tool-parser-plugin` correctly points to your local copy of the file. The xLAM series models all utilize the **same** tool call parser, so you only need to download it **once** for all models.
#### Testing with OpenAI API
Here's a minimal example to test tool usage with the served endpoint:
```python
import openai
import json
# Configure the client to use your local vLLM endpoint
client = openai.OpenAI(
base_url="http://localhost:8000/v1", # Default vLLM server URL
api_key="empty" # Can be any string
)
# Define a tool/function
tools = [
{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get the current weather for a location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
"description": "The unit of temperature to return"
}
},
"required": ["location"]
}
}
}
]
# Create a chat completion
response = client.chat.completions.create(
model="Salesforce/xLAM-2-1b-fc-r", # Model name doesn't matter, vLLM uses the served model
messages=[
{"role": "system", "content": "You are a helpful assistant that can use tools."},
{"role": "user", "content": "What's the weather like in San Francisco?"}
],
tools=tools,
tool_choice="auto"
)
# Print the response
print("Assistant's response:")
print(json.dumps(response.model_dump(), indent=2))
```
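If the assistant responds with a tool call, you can close the loop by executing the function and sending its output back in a follow-up request, following the standard OpenAI tool-calling flow. The weather result below is a stub used purely for illustration.

```python
# Complete the tool-call round trip (standard OpenAI tool-calling flow).
message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)
    # Stubbed tool execution -- replace with a real weather lookup.
    result = json.dumps({"location": args.get("location"), "temperature": 18, "unit": "celsius"})
    followup = client.chat.completions.create(
        model="Salesforce/xLAM-2-1b-fc-r",
        messages=[
            {"role": "system", "content": "You are a helpful assistant that can use tools."},
            {"role": "user", "content": "What's the weather like in San Francisco?"},
            message,  # the assistant turn containing the tool call
            {"role": "tool", "tool_call_id": call.id, "content": result},
        ],
        tools=tools,
    )
    print(followup.choices[0].message.content)
```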
For more advanced configurations and deployment options, please refer to the [vLLM documentation](https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html).
## Benchmark Results
### Berkeley Function-Calling Leaderboard (BFCL v3)
<p align="center">
<img width="80%" alt="BFCL Results" src="https://github.com/apigen-mt/apigen-mt.github.io/blob/main/img/bfcl-result.png?raw=true">
<br>
<small><i>Performance comparison of different models on the [BFCL leaderboard](https://gorilla.cs.berkeley.edu/leaderboard.html). The rank is based on overall accuracy, a weighted average across the evaluation categories. "FC" stands for function-calling mode, in contrast to using a customized "prompt" to extract the function calls.</i></small>
</p>
### τ-bench Benchmark
<p align="center">
<img width="80%" alt="Tau-bench Results" src="https://github.com/apigen-mt/apigen-mt.github.io/blob/main/img/taubench-result.png?raw=true">
<br>
<small><i>Success rate (pass@1) on the τ-bench benchmark, averaged across at least 5 trials. Our xLAM-2-70b-fc-r model achieves an overall success rate of 56.2% on τ-bench, significantly outperforming the base Llama 3.1 70B Instruct model (38.2%) and other open-source models like DeepSeek v3 (40.6%). Notably, our best model even outperforms proprietary models such as GPT-4o (52.9%) and approaches the performance of more recent models like Claude 3.5 Sonnet (new) (60.1%).</i></small>
</p>
<p align="center">
<img width="80%" alt="Pass^k curves" src="https://github.com/apigen-mt/apigen-mt.github.io/blob/main/img/pass_k_curves_retail_airline.png?raw=true">
<br>
<small><i>Pass^k curves measuring the probability that all 5 independent trials succeed for a given task, averaged across all tasks for the τ-retail (left) and τ-airline (right) domains. Higher values indicate better consistency of the models.</i></small>
</p>
## Ethical Considerations
This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model. We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people's lives, rights, or safety. For further guidance on use cases, refer to our AUP and AI AUP.
### Model Licenses
For all Llama-based models, please also follow the corresponding Llama license and terms. Meta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.
## Citation
If you use our model or dataset in your work, please cite our paper:
```bibtex
@article{prabhakar2025apigen,
title={APIGen-MT: Agentic PIpeline for Multi-Turn Data Generation via Simulated Agent-Human Interplay},
author={Prabhakar, Akshara and Liu, Zuxin and Zhu, Ming and Zhang, Jianguo and Awalgaonkar, Tulika and Wang, Shiyu and Liu, Zhiwei and Chen, Haolin and Hoang, Thai and others},
journal={arXiv preprint arXiv:2504.03601},
year={2025}
}
```
Additionally, please check our other awesome related works regarding xLAM series and consider citing them as well:
```bibtex
@article{zhang2025actionstudio,
title={ActionStudio: A Lightweight Framework for Data and Training of Action Models},
author={Zhang, Jianguo and Hoang, Thai and Zhu, Ming and Liu, Zuxin and Wang, Shiyu and Awalgaonkar, Tulika and Prabhakar, Akshara and Chen, Haolin and Yao, Weiran and Liu, Zhiwei and others},
journal={arXiv preprint arXiv:2503.22673},
year={2025}
}
```
```bibtex
@article{zhang2024xlam,
title={xLAM: A Family of Large Action Models to Empower AI Agent Systems},
author={Zhang, Jianguo and Lan, Tian and Zhu, Ming and Liu, Zuxin and Hoang, Thai and Kokane, Shirley and Yao, Weiran and Tan, Juntao and Prabhakar, Akshara and Chen, Haolin and others},
journal={arXiv preprint arXiv:2409.03215},
year={2024}
}
```
```bibtex
@article{liu2024apigen,
title={Apigen: Automated pipeline for generating verifiable and diverse function-calling datasets},
author={Liu, Zuxin and Hoang, Thai and Zhang, Jianguo and Zhu, Ming and Lan, Tian and Tan, Juntao and Yao, Weiran and Liu, Zhiwei and Feng, Yihao and RN, Rithesh and others},
journal={Advances in Neural Information Processing Systems},
volume={37},
pages={54463--54482},
year={2024}
}
```
```bibtex
@article{zhang2024agentohana,
title={AgentOhana: Design Unified Data and Training Pipeline for Effective Agent Learning},
author={Zhang, Jianguo and Lan, Tian and Murthy, Rithesh and Liu, Zhiwei and Yao, Weiran and Tan, Juntao and Hoang, Thai and Yang, Liangwei and Feng, Yihao and Liu, Zuxin and others},
journal={arXiv preprint arXiv:2402.15506},
year={2024}
}
```
|