This model is initialized from openai/clip-vit-large-patch14. The image encoder is fine-tuned with FARE at $\epsilon = 2/255$.

To load this model, use:

```python
from transformers import CLIPProcessor, CLIPModel

model_name = "LEAF-CLIP/CLIP-ViT-L-FARE2"
processor_name = "openai/clip-vit-large-patch14"

model = CLIPModel.from_pretrained(model_name)
processor = CLIPProcessor.from_pretrained(processor_name)
```
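
The loaded model can be used like any standard CLIP model from `transformers`. Below is a minimal zero-shot classification sketch; the image URL and text prompts are placeholders for illustration, not part of the original card:

```python
import torch
import requests
from PIL import Image
from transformers import CLIPProcessor, CLIPModel

model_name = "LEAF-CLIP/CLIP-ViT-L-FARE2"
processor_name = "openai/clip-vit-large-patch14"

model = CLIPModel.from_pretrained(model_name)
processor = CLIPProcessor.from_pretrained(processor_name)

# Placeholder example image (COCO sample) and candidate captions.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["a photo of a cat", "a photo of a dog"]

# Tokenize the texts and preprocess the image in one call.
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns them into probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
print(probs)
```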