Commit c43df35 (verified) · 1 parent: ebb4ec5
Committed by megaelius and nielsr (HF Staff)

Add pipeline tag and library name (#1)


- Add pipeline tag and library name (5740e051d0711eb1a5d1cc31f4e4471a065eae3e)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1)
  1. README.md (+6 -3)
README.md CHANGED
@@ -1,11 +1,14 @@
 ---
-license: mit
+base_model:
+- laion/CLIP-ViT-H-14-laion2B-s32B-b79K
 datasets:
 - ILSVRC/imagenet-1k
 - mlfoundations/datacomp_small
-base_model:
-- laion/CLIP-ViT-H-14-laion2B-s32B-b79K
+license: mit
+pipeline_tag: feature-extraction
+library_name: transformers
 ---
+
 [[Paper]](https://www.arxiv.org/abs/2506.03355) &nbsp; [[Code]](https://github.com/LIONS-EPFL/LEAF)
 
 Model Initialized from `laion/CLIP-ViT-H-14-laion2B-s32B-b79K`. The image encoder is finetuned with FARE at $\epsilon=2/255$. The text encoder is finetuned with LEAF at $k=1$ with $\rho=50$ and semantic constraints.
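
With `library_name: transformers` and `pipeline_tag: feature-extraction` now declared in the metadata, the checkpoint can be loaded through the standard CLIP classes in transformers. The sketch below is a minimal, hedged example: the repository id is a placeholder for this repo's actual id, and the image path is illustrative.

```python
# Minimal sketch: loading the fine-tuned CLIP checkpoint with transformers
# and extracting image/text features. The repo id below is a placeholder;
# replace it with this model's actual Hugging Face repository id.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

repo_id = "<this-repo-id>"  # placeholder, not stated in the commit
model = CLIPModel.from_pretrained(repo_id)
processor = CLIPProcessor.from_pretrained(repo_id)

image = Image.open("example.jpg")  # illustrative input image
inputs = processor(text=["a photo of a dog"], images=image,
                   return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

image_features = outputs.image_embeds  # pooled, projected image features
text_features = outputs.text_embeds    # pooled, projected text features
```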