---
license: mit
datasets:
- ILSVRC/imagenet-1k
- mlfoundations/datacomp_small
base_model:
- laion/CLIP-ViT-H-14-laion2B-s32B-b79K
---

This model is initialized from `laion/CLIP-ViT-H-14-laion2B-s32B-b79K`. The image encoder is finetuned with FARE at $\epsilon = 2/255$; the text encoder is finetuned with LEAF at $k=1$, $\rho=50$, and semantic constraints.

To load this model, use:

14
+ ```python
15
+ from transformers import CLIPProcessor, CLIPModel
16
+
17
+ model_name = "LEAF-CLIP/OpenCLIP-ViT-H-rho50-k1-constrained-FARE2"
18
+ processor_name = "laion/CLIP-ViT-H-14-laion2B-s32B-b79K"
19
+
20
+ model = CLIPModel.from_pretrained(model_name)
21
+ processor = CLIPProcessor.from_pretrained(processor_name)
22
+ ```