Update README.md
README.md CHANGED
@@ -6,6 +6,7 @@ datasets:
 base_model:
 - openai/clip-vit-large-patch14
 ---
+[[Paper]](https://www.arxiv.org/abs/2506.03355) [[Code]](https://github.com/LIONS-EPFL/LEAF)
 
 Model Initialized from `openai/clip-vit-large-patch14`. The image encoder is finetuned with FARE at $\epsilon=2/255$. The text encoder is finetuned with LEAF at $k=1$ with $\rho=50$ and semantic constraints.
 
@@ -19,4 +20,5 @@ processor_name = "openai/clip-vit-large-patch14"
 
 model = CLIPModel.from_pretrained(model_name)
 processor = CLIPProcessor.from_pretrained(processor_name)
-```
+```
+
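For context, a minimal end-to-end sketch of how the loading snippet touched by the second hunk might be used. The finetuned model's repository id is not visible in this diff (the README defines `model_name` earlier), so a placeholder is used; the example image URL and captions are illustrative assumptions, not part of the README.

```python
# Usage sketch. Assumptions: the placeholder repo id, the example image URL,
# and the candidate captions below are illustrative, not taken from the README.
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_name = "<model-repo-id>"  # placeholder; the README defines the real id
processor_name = "openai/clip-vit-large-patch14"

model = CLIPModel.from_pretrained(model_name)
processor = CLIPProcessor.from_pretrained(processor_name)

# Score one image against two candidate captions.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["a photo of a cat", "a photo of a dog"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
print(outputs.logits_per_image.softmax(dim=-1))  # image-text match probabilities
```

Note that the README points the processor at the base `openai/clip-vit-large-patch14` repository, so only the model weights are loaded from the finetuned checkpoint.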