---
license: cc-by-nc-4.0
---

<p align="center">
<img width="500px" alt="xLAM" src="https://huggingface.co/datasets/jianguozhang/logos/resolve/main/xlam-no-background.png">
</p>
<p align="center">
<a href="https://www.salesforceairesearch.com/projects/xlam-large-action-models">[Homepage]</a> |
<a href="https://github.com/SalesforceAIResearch/xLAM">[Github]</a> |
<a href="https://blog.salesforceairesearch.com/large-action-model-ai-agent/">[Blog]</a>
</p>
<hr>

## Model Summary

This repo provides the GGUF format of the xLAM-2-1b-fc-r model; the original model is available at [xLAM-2-1b-fc-r](https://huggingface.co/Salesforce/xLAM-2-1b-fc-r).

[Large Action Models (LAMs)](https://blog.salesforceairesearch.com/large-action-models/) are advanced language models designed to enhance decision-making by translating user intentions into executable actions. As the **brains of AI agents**, LAMs autonomously plan and execute tasks to achieve specific goals, making them invaluable for automating workflows across diverse domains.

## Model Overview

The new **xLAM-2** series, built on our most advanced data synthesis, processing, and training pipelines, marks a significant leap in **multi-turn reasoning** and **tool usage**. It achieves state-of-the-art performance on function-calling benchmarks like **BFCL** and **tau-bench**. We've also refined the **chat template** and **vLLM integration**, making it easier to build advanced AI agents. Compared to previous xLAM models, xLAM-2 offers superior performance and seamless deployment across applications.

**This model release is for research purposes only.**

## How to download GGUF files

1. **Install the Hugging Face CLI:**

   ```shell
   pip install huggingface-hub
   ```

2. **Log in to Hugging Face:**

   ```shell
   huggingface-cli login
   ```

3. **Download the GGUF model:**

   ```shell
   huggingface-cli download Salesforce/xLAM-2-1b-fc-r-gguf xLAM-2-1b-fc-r-gguf --local-dir . --local-dir-use-symlinks False
   ```

## Prompt template

```
<|im_start|>system
{TASK_INSTRUCTION}
You have access to a set of tools. When using tools, make calls in a single JSON array:

[{"name": "tool_call_name", "arguments": {"arg1": "value1", "arg2": "value2"}}, ... (additional parallel tool calls as needed)]

If no tool is suitable, state that explicitly. If the user's input lacks required parameters, ask for clarification. Do not interpret or respond until tool results are returned. Once they are available, process them or make additional calls if needed. For tasks that don't require tools, such as casual conversation or general advice, respond directly in plain text. The available tools are:

{AVAILABLE_TOOLS}

<|im_end|><|im_start|>user
{USER_QUERY}<|im_end|><|im_start|>assistant
{ASSISTANT_QUERY}<|im_end|><|im_start|>user
{USER_QUERY}<|im_end|><|im_start|>assistant
```

For more information, refer to the [documentation](https://huggingface.co/agentstudio-family/xLAM-1B-FC-r#basic-usage-with-huggingface).

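As a sketch of how this template is assembled, the placeholders can be filled with plain string substitution. The abbreviated system text, lowercase placeholder names, and `get_weather` tool below are illustrative only, not part of the official chat template:

```python
import json

# Abbreviated single-turn skeleton of the template above; placeholder names
# are lowercased so str.format can fill them (illustrative only).
TEMPLATE = (
    "<|im_start|>system\n"
    "{task_instruction}\n"
    "You have access to a set of tools. The available tools are:\n\n"
    "{available_tools}\n\n"
    "<|im_end|><|im_start|>user\n"
    "{user_query}<|im_end|><|im_start|>assistant\n"
)

# Hypothetical tool definition for demonstration
tools = [{
    "name": "get_weather",
    "parameters": {"type": "object", "properties": {"city": {"type": "string"}}},
}]

prompt = TEMPLATE.format(
    task_instruction="You are a helpful assistant.",
    available_tools=json.dumps(tools),
    user_query="What is the weather in Paris?",
)
print(prompt)
```

In practice the chat template bundled with the GGUF file handles this formatting for you; the sketch is only meant to make the structure of the prompt concrete.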
## Usage

### Command Line

1. Install the llama.cpp framework from source, following the instructions [here](https://github.com/ggerganov/llama.cpp).
2. Run the inference task as below; to configure generation-related parameters, refer to the [llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).

   ```shell
   llama-cli -m [PATH-TO-LOCAL-GGUF]
   ```

### Python framework

1. Install [llama-cpp-python](https://github.com/abetlen/llama-cpp-python):

   ```shell
   pip install llama-cpp-python
   ```

2. Refer to the [llama-cpp-python high-level API](https://github.com/abetlen/llama-cpp-python?tab=readme-ov-file#high-level-api); here's an example:

   ```python
   from llama_cpp import Llama

   # Load the local GGUF model
   llm = Llama(model_path="[PATH-TO-MODEL]")

   # Force a call to the UserDetail function to extract structured data
   output = llm.create_chat_completion(
       messages=[
           {
               "role": "system",
               "content": "You are a helpful assistant that can use tools. You are developed by Salesforce xLAM team.",
           },
           {"role": "user", "content": "Extract Jason is 25 years old"},
       ],
       tools=[{
           "type": "function",
           "function": {
               "name": "UserDetail",
               "parameters": {
                   "type": "object",
                   "title": "UserDetail",
                   "properties": {
                       "name": {"title": "Name", "type": "string"},
                       "age": {"title": "Age", "type": "integer"},
                   },
                   "required": ["name", "age"],
               },
           },
       }],
       tool_choice={"type": "function", "function": {"name": "UserDetail"}},
   )
   print(output["choices"][0]["message"])
   ```
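
The returned message carries the forced tool call in an OpenAI-style `tool_calls` list, with the arguments encoded as a JSON string. A minimal decoding sketch, using an illustrative payload in place of a real model response:

```python
import json

# Illustrative payload mirroring the shape of output["choices"][0]["message"];
# a real response comes from llm.create_chat_completion as shown above.
message = {
    "role": "assistant",
    "tool_calls": [{
        "type": "function",
        "function": {
            "name": "UserDetail",
            "arguments": '{"name": "Jason", "age": 25}',
        },
    }],
}

for call in message.get("tool_calls", []):
    fn = call["function"]
    args = json.loads(fn["arguments"])  # arguments arrive as a JSON-encoded string
    print(fn["name"], args)
```

Decoding `arguments` with `json.loads` before use is the one step that is easy to miss, since the rest of the message is already a plain dict.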