Update README.md
README.md CHANGED
@@ -29,7 +29,6 @@ pipeline_tag: text-generation
 | `Llama-SmolTalk-3.2-1B-Instruct.Q5_K_M.gguf` | 912 MB | Llama-SmolTalk model (Q5_K_M quantization) | Uploaded (LFS) |
 | `Llama-SmolTalk-3.2-1B-Instruct.Q8_0.gguf` | 1.32 GB | Llama-SmolTalk model (Q8_0 quantization) | Uploaded (LFS) |
 
-
 The **Llama-SmolTalk-3.2-1B-Instruct** model is a lightweight, instruction-tuned model designed for efficient text generation and conversational AI tasks. With a 1B parameter architecture, this model strikes a balance between performance and resource efficiency, making it ideal for applications requiring concise, contextually relevant outputs. The model has been fine-tuned to deliver robust instruction-following capabilities, catering to both structured and open-ended queries.
 
 ### Key Features:
@@ -134,6 +133,7 @@ plays a pivotal role in pushing the boundaries of human exploration and settlement
 ```
 
 ---
+
 
 ## Conclusion
 
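As context for the files this commit touches, here is a minimal, hypothetical usage sketch for the GGUF quantizations listed in the table above. It assumes llama-cpp-python is installed and that one of the files has been downloaded locally; neither the library nor the file path appears in the diff itself.

```python
# Minimal sketch, not part of this commit: load one of the GGUF quantizations
# listed in the README table with llama-cpp-python (assumed dependency) and
# run a single instruction-style chat turn.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-SmolTalk-3.2-1B-Instruct.Q5_K_M.gguf",  # hypothetical local path
    n_ctx=2048,  # modest context window for a 1B instruct model
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me three tips for writing clear commit messages."}],
    max_tokens=128,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```

The Q5_K_M file (912 MB) is the lighter of the two options; the Q8_0 file (1.32 GB) keeps more precision at a larger footprint, and either can be swapped in via `model_path`.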