prince-canuma committed
Commit 4f80c1d · verified · 1 Parent(s): e12a21c

Upload folder using huggingface_hub

.ipynb_checkpoints/README-checkpoint.md ADDED
The diff for this file is too large to render. See raw diff
 
README.md CHANGED
@@ -21094,32 +21094,31 @@ model-index:
  value: 78.51132446157838
  ---
 
- # mlx-community/modernbert-embed-base-8bit
 
- The Model [mlx-community/modernbert-embed-base-8bit](https://huggingface.co/mlx-community/modernbert-embed-base-8bit) was converted to MLX format from [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base) using mlx-lm version **0.0.3**.
 
- ## Use with mlx
 
- ```bash
- pip install mlx-embeddings
- ```
 
- ```python
- from mlx_embeddings import load, generate
- import mlx.core as mx
 
- model, tokenizer = load('mlx-community/modernbert-embed-base-8bit')
 
- # For text embeddings
- output = generate(model, processor, texts=["I like grapes", "I like fruits"])
- embeddings = output.text_embeds # Normalized embeddings
 
- # Compute dot product between normalized embeddings
- similarity_matrix = mx.matmul(embeddings, embeddings.T)
 
- print("
- Similarity matrix between texts:")
- print(similarity_matrix)
 
 
- ```
+ # mlx-community/modernbert-embed-base-8bit
 
+ The Model [mlx-community/modernbert-embed-base-8bit](https://huggingface.co/mlx-community/modernbert-embed-base-8bit) was converted to MLX format from [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base) using mlx-lm version **0.0.3**.
 
+ ## Use with mlx
 
+ ```bash
+ pip install mlx-embeddings
+ ```
 
+ ```python
+ from mlx_embeddings import load, generate
+ import mlx.core as mx
 
+ model, tokenizer = load("mlx-community/modernbert-embed-base-8bit")
 
+ # For text embeddings
+ output = generate(model, tokenizer, texts=["I like grapes", "I like fruits"])
+ embeddings = output.text_embeds  # Normalized embeddings
 
+ # Compute dot product between normalized embeddings
+ similarity_matrix = mx.matmul(embeddings, embeddings.T)
 
+ print("Similarity matrix between texts:")
+ print(similarity_matrix)
 
 
+ ```
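
Side note on the usage snippet above: because the embeddings returned in `text_embeds` are normalized, the dot-product similarity matrix can be used directly to rank texts against each other. The sketch below is illustrative only; it assumes the `load`/`generate` calls and the `text_embeds` field from the README snippet (passing the tokenizer returned by `load`, as above), and the ranking logic (diagonal masking, argmax) is an addition, not part of the model card.

```python
# Illustrative sketch: rank each text's closest neighbor using the
# similarity matrix from the README snippet above.
import mlx.core as mx
from mlx_embeddings import load, generate  # API as shown in the README

texts = ["I like grapes", "I like fruits", "The weather is cold today"]

model, tokenizer = load("mlx-community/modernbert-embed-base-8bit")
output = generate(model, tokenizer, texts=texts)
embeddings = output.text_embeds  # normalized embeddings

# For normalized embeddings, cosine similarity is just a dot product.
similarity = mx.matmul(embeddings, embeddings.T)

# Mask the diagonal so a text never matches itself, then take the best match.
masked = similarity - 2.0 * mx.eye(len(texts))
best = mx.argmax(masked, axis=1)

for i, text in enumerate(texts):
    j = best[i].item()
    score = similarity[i, j].item()
    print(f"{text!r} -> best match: {texts[j]!r} (score={score:.3f})")
```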