TYH71 committed
Commit 5d4c560 · 1 Parent(s): 8950c7e

docs: edit README

Files changed (1)
  1. README.md +2 -1
README.md CHANGED
@@ -11,5 +11,6 @@ pinned: false
 
 # CLIP Gradio Demo
 
-| Repository to host CLIP on HuggingFace spaces
+> Repository to host CLIP (Contrastive Language Image Pretraining) on HuggingFace spaces
 
+CLIP is an open source, multi-modal, zero-shot model. Given an image and text descriptions, the model can predict the most relevant text description for that image, without optimizing for a particular task (zero-shot).
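
The added description says CLIP picks the most relevant text for an image without task-specific training. A minimal sketch of that zero-shot behaviour, assuming the `transformers` library and the public `openai/clip-vit-base-patch32` checkpoint (neither is specified by this commit):

```python
# Zero-shot image-text matching sketch; checkpoint and example image are assumptions.
from PIL import Image
import requests
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Candidate text descriptions; CLIP scores each one against the image.
texts = ["a photo of a cat", "a photo of a dog"]
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns them into
# probabilities over the candidate descriptions.
probs = outputs.logits_per_image.softmax(dim=1)
print(dict(zip(texts, probs[0].tolist())))
```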