Upload folder using huggingface_hub
- README.md +5 -1
- quant_model.png +0 -0
- snowflake2_m_uint8.onnx +2 -2
README.md
CHANGED
@@ -87,6 +87,10 @@ language:
 - yo
 - zh
 ---
+# Accuracy
+
+Not sure on accuracy quite yet; will update soon. After I confirm this is working well (preliminary results suggest it's good), I can try a version which combines normalization + quantization for the `token_embeddings` output.
+
 # snowflake2_m_uint8
 
 This is a slightly modified version of the uint8 quantized ONNX model from https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v2.0
@@ -99,7 +103,7 @@ No benchmarks, but in my limited testing it's exactly equivalent to the FP32
 
 # Quantization method
 
-
+Linear quantization for the scale -0.3 to 0.3, which is what `sentence_embedding` is normalized to.
 
 Here's what the graph of the original output looks like:
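The linear quantization the README describes (a fixed uint8 range of -0.3 to 0.3 for the normalized `sentence_embedding` output) can be sketched as below. This is an illustrative round-trip, not the repo's actual export code; the function names and example values are made up:

```python
import numpy as np

LO, HI = -0.3, 0.3            # fixed range the normalized embedding falls in
SCALE = (HI - LO) / 255.0     # one uint8 step in float units

def quantize_uint8(x: np.ndarray) -> np.ndarray:
    """Linearly map [LO, HI] onto [0, 255]; values outside the range are clipped."""
    q = np.clip(np.round((x - LO) / SCALE), 0, 255)
    return q.astype(np.uint8)

def dequantize_uint8(q: np.ndarray) -> np.ndarray:
    """Invert the mapping back to float32 (exact up to half a quantization step)."""
    return q.astype(np.float32) * SCALE + LO

emb = np.array([-0.3, 0.0, 0.29], dtype=np.float32)  # toy "embedding"
q = quantize_uint8(emb)
roundtrip = dequantize_uint8(q)
```

The maximum round-trip error of this scheme is half a step, i.e. about 0.0012 for this range, which is small relative to typical coordinate magnitudes of a unit-normalized 768-dim embedding.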
quant_model.png
CHANGED
snowflake2_m_uint8.onnx
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:97de99227c030abf5207feb4e8cb75a65caa43dae4df3a1defb8ec8743c70b8b
+size 310916368