bartowski committed
Commit 7ab106e · verified · 1 Parent(s): 4b0b92a

Update README.md

Files changed (1)
  1. README.md +1 -8
README.md CHANGED

@@ -1,6 +1,5 @@
 ---
-base_model:
-- mistralai/Mistral-Small-24B-Instruct-2501
+base_model: arcee-ai/Arcee-Blitz
 library_name: transformers
 license: apache-2.0
 ---
@@ -9,12 +8,6 @@ license: apache-2.0
 
 **Arcee-Blitz (24B)** is a new Mistral-based 24B model distilled from DeepSeek, designed to be both **fast and efficient**. We view it as a practical “workhorse” model that can tackle a range of tasks without the overhead of larger architectures.
 
-### Quantizations
-
-GGUF quants are available [here](https://huggingface.co/arcee-ai/Arcee-Blitz-GGUF)
-
-AWQ quants are available [here](https://huggingface.co/arcee-ai/Arcee-Blitz-AWQ)
-
 ### Model Details
 
 - Architecture Base: Mistral-Small-24B-Instruct-2501
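
For context on the metadata being changed, here is a minimal usage sketch tied to the new `base_model` value. It assumes the Hub repository `arcee-ai/Arcee-Blitz` named in the updated front matter can be loaded with the standard `transformers` Auto classes (as suggested by `library_name: transformers`); the prompt and generation settings are illustrative only, not taken from the model card.

```python
# Minimal sketch: loading the repo named in the updated `base_model` field.
# Assumes "arcee-ai/Arcee-Blitz" ships standard transformers checkpoints;
# this is not an official example from the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "arcee-ai/Arcee-Blitz"  # value written into the YAML front matter

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires `accelerate` to be installed.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Illustrative chat-style prompt; the actual chat template ships with the tokenizer.
messages = [{"role": "user", "content": "What tasks is a distilled 24B model well suited for?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```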