Feature Extraction · Transformers · Safetensors · clip · zero-shot-image-classification
nielsr (HF Staff) committed · verified
Commit f2abc9f · 1 parent: 6c01a2a

Add library name and pipeline tag


This PR improves the model card by adding the `library_name` (Transformers) and the correct pipeline tag, so that the model can be found at https://huggingface.co/models?pipeline_tag=feature-extraction.

Files changed (1): README.md (+7 −5)
README.md CHANGED

````diff
@@ -1,11 +1,14 @@
 ---
-license: mit
+base_model:
+- openai/clip-vit-large-patch14
 datasets:
 - ILSVRC/imagenet-1k
 - mlfoundations/datacomp_small
-base_model:
-- openai/clip-vit-large-patch14
+license: mit
+library_name: transformers
+pipeline_tag: feature-extraction
 ---
+
 [[Paper]](https://www.arxiv.org/abs/2506.03355)   [[Code]](https://github.com/LIONS-EPFL/LEAF)
 
 Model Initialized from `openai/clip-vit-large-patch14`. The image encoder is finetuned with FARE at $\epsilon=2/255$. The text encoder is finetuned with LEAF at $k=1$ with $\rho=50$ and semantic constraints.
@@ -20,5 +23,4 @@ processor_name = "openai/clip-vit-large-patch14"
 
 model = CLIPModel.from_pretrained(model_name)
 processor = CLIPProcessor.from_pretrained(processor_name)
-```
-
+```
````
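For context, the card's usage snippet loads the fine-tuned weights with `CLIPModel` and the processor from the original OpenAI checkpoint. Below is a minimal sketch of the feature-extraction use that the new `pipeline_tag` advertises, built on that snippet; the repository id and the image path are placeholders, since neither appears in the diff shown here:

```python
# Minimal feature-extraction sketch following the card's snippet.
# NOTE: the repository id and image path are placeholders, not from the PR.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_name = "<this-model-repo>"  # placeholder; use this repository's id
processor_name = "openai/clip-vit-large-patch14"

model = CLIPModel.from_pretrained(model_name)
processor = CLIPProcessor.from_pretrained(processor_name)

image = Image.open("example.jpg")  # hypothetical input image
inputs = processor(text=["a photo of a cat"], images=image,
                   return_tensors="pt", padding=True)

with torch.no_grad():
    # Embeddings, i.e. the "feature-extraction" use this PR tags.
    image_features = model.get_image_features(pixel_values=inputs["pixel_values"])
    text_features = model.get_text_features(input_ids=inputs["input_ids"],
                                            attention_mask=inputs["attention_mask"])

# Cosine similarity between the L2-normalized embeddings, as is standard for CLIP.
image_features = image_features / image_features.norm(dim=-1, keepdim=True)
text_features = text_features / text_features.norm(dim=-1, keepdim=True)
print((image_features @ text_features.T).item())
```

With `pipeline_tag: feature-extraction` in the front matter, this embedding-style usage is what the model is indexed under, rather than the zero-shot classification head.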