Update README.md
README.md (CHANGED)
@@ -8,6 +8,11 @@ base_model:
 - meta-llama/Llama-4-Scout-17B-16E-Instruct
 ---
 
+**This is a prototype: Llama 4 support is not yet merged into the official AutoAWQ repository (see [PR #748](https://github.com/casper-hansen/AutoAWQ/pull/748)).**
+
+## Usage
+See [llama4_inference.ipynb](https://huggingface.co/kishizaki-sci/Llama-4-Scout-17B-16E-Instruct-AWQ/blob/main/llama4_inference.ipynb).
+
 ## Model Information
 
 The Llama 4 collection of models are natively multimodal AI models that enable text and multimodal experiences. These models leverage a mixture-of-experts architecture to offer industry-leading performance in text and image understanding.
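For a quick text-only smoke test outside the notebook, the sketch below shows one plausible way to load the quantized checkpoint. It is a minimal sketch, not the author's verified recipe: it assumes the checkpoint loads through transformers' standard AWQ integration (`AutoModelForCausalLM` with `autoawq` installed), and the prompt and generation settings are illustrative. The linked llama4_inference.ipynb remains the authoritative reference for this prototype.

```python
# Hypothetical quick-start sketch; the repository's llama4_inference.ipynb
# is the authoritative reference. Assumes `pip install transformers autoawq`
# and that this prototype checkpoint loads via transformers' AWQ integration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kishizaki-sci/Llama-4-Scout-17B-16E-Instruct-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # AWQ kernels compute in fp16
    device_map="auto",          # shard the MoE weights across visible devices
)

# Format a single-turn chat prompt with the model's own chat template.
messages = [{"role": "user", "content": "In one sentence, what is AWQ quantization?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=64, do_sample=False)

# Strip the prompt tokens and print only the generated continuation.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Note that `device_map="auto"` matters here: even at 4-bit, the full 16-expert Scout checkpoint is tens of gigabytes, so multi-GPU sharding or CPU offload will likely be required.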