These files have been built using an imatrix file and the latest llama.cpp build. You must use a fork of llama.cpp to use vision with the model.

## How to Use Qwen 2.5 VL Instruct with llama.cpp (latest as of 10th May 2025)
1. **Download the Qwen 2.5 VL gguf file**:

   https://huggingface.co/Mungert/Qwen2.5-VL-7B-Instruct-GGUF/tree/main

   Copy this file to your chosen folder.
2. **Download the Qwen 2.5 VL mmproj file**:

   https://huggingface.co/Mungert/Qwen2.5-VL-7B-Instruct-GGUF/tree/main

   Copy this file to your chosen folder.
3. Copy images to the same folder as the gguf files, or alter paths appropriately.

   In the example below the gguf files, images and llama-mtmd-cli are in the same folder.

   Example image: https://huggingface.co/Mungert/Qwen2.5-VL-7B-Instruct-GGUF/resolve/main/car-1.jpg

   Copy this file to your chosen folder.
4. **Run the CLI Tool**:

   From your chosen folder:

   ```bash
   llama-mtmd-cli -m Qwen2.5-VL-7B-Instruct-q8_0.gguf --mmproj Qwen2.5-VL-7B-Instruct-mmproj-f16.gguf -p "Describe this image." --image ./car-1.jpg
   ```