- split: test
  path: data/test-*
---

<img src="./teaser.png" align="center">

# Science-T2I-C Benchmark

## Resources
- [Website](https://jialuo-li.github.io/Science-T2I-Web/)
- [arXiv: Paper](https://arxiv.org/abs/2410.03051)
- [GitHub: Code](https://github.com/rese1f/aurora)
- [Huggingface: SciScore](https://huggingface.co/Jialuo21/SciScore)
- [Huggingface: Science-T2I-Trainset](https://huggingface.co/datasets/Jialuo21/Science-T2I-Trainset)
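
The benchmark can be loaded with the Hugging Face `datasets` library. A minimal sketch, assuming the dataset is hosted at `Jialuo21/Science-T2I-C` with a single `test` split as declared in the YAML header above (the repo id is inferred from this card, not confirmed):

```python
from datasets import load_dataset

# Load the test split declared in the YAML header above.
# The repo id "Jialuo21/Science-T2I-C" is assumed from this dataset card.
dataset = load_dataset("Jialuo21/Science-T2I-C", split="test")
print(dataset)  # inspect the available columns
```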

## Benchmark Collection and Processing
- Science-T2I-C is generated with the same procedure as the training data, with one key adjustment to the prompts: the test set introduces more intricate scenarios by adding contextual details such as specific scene settings and diverse situations. Prompts in Science-T2I-C may include phrases like "in a bedroom" or "on the street," adding spatial and contextual variety. This heightened complexity assesses the model's ability to adapt to more nuanced, less constrained environments.
- To evaluate a model's understanding of implicit prompts and its ability to connect them with visual content, we use a comparative image selection task: the model is shown an implicit prompt together with two distinct images and must choose the image that best aligns with the meaning of the prompt, as sketched below. The full details are in the evaluation code (see the GitHub link above).
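
A minimal sketch of the comparative selection protocol, using a generic CLIP model as a stand-in scorer. The repo id `Jialuo21/Science-T2I-C` and the column names `prompt`, `image_a`, `image_b`, and `label` are illustrative assumptions, not the verified schema; see the GitHub evaluation code for the authoritative implementation.

```python
import torch
from datasets import load_dataset
from transformers import CLIPModel, CLIPProcessor

# Stand-in scorer: a generic CLIP model. Swap in SciScore or another
# image-text alignment model to reproduce the actual benchmark numbers.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def score(prompt, image):
    """Return an image-text alignment score for one (prompt, image) pair."""
    inputs = processor(text=[prompt], images=image, return_tensors="pt",
                       padding=True, truncation=True)
    with torch.no_grad():
        return model(**inputs).logits_per_image.item()

# Repo id and column names below are assumptions; check the dataset schema.
dataset = load_dataset("Jialuo21/Science-T2I-C", split="test")

correct = 0
for ex in dataset:
    # Present the implicit prompt with two candidate images and pick the
    # one the scorer judges to align better with the prompt's meaning.
    pred = 0 if score(ex["prompt"], ex["image_a"]) > score(ex["prompt"], ex["image_b"]) else 1
    correct += int(pred == ex["label"])

print(f"Accuracy: {correct / len(dataset):.3f}")
```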

## Benchmarking LMMs & VLMs
Most existing VLMs struggle to select the correct image based on scientific knowledge, with performance often close to random guessing. LMMs face similar challenges. SciScore, however, stands out: after training on Science-T2I, it achieves human-level accuracy.

<img src="./exp.png" align="center">