Update README.md

## Resources

- [Website](https://jialuo-li.github.io/Science-T2I-Web/)
- [arXiv: Paper](https://arxiv.org/abs/2504.13129)
- [GitHub: Code](https://github.com/Jialuo-Li/Science-T2I)
- [Huggingface: SciScore](https://huggingface.co/Jialuo21/SciScore)
- [Huggingface: Science-T2I-Trainset](https://huggingface.co/datasets/Jialuo21/Science-T2I-Trainset)

## Benchmark Collection and Processing

- Science-T2I-C is generated with the same procedure as the training data, with one key adjustment to the prompts. This test set pushes the model further by introducing more intricate scenarios that incorporate contextual details such as specific scene settings and diverse situations. Prompts in Science-T2I-C may include phrases like "in a bedroom" or "on the street," adding spatial and contextual variety. This heightened complexity tests the model's capacity to adapt to more nuanced, less constrained environments.
- To evaluate a model's understanding of implicit prompts and its ability to connect them with visual content, we employ a comparative image selection task: the model is presented with an implicit prompt and two distinct images, and must choose the image that best aligns with the overall meaning of the prompt (a minimal sketch follows this list). The specifics of this process are outlined in the evaluation code (see the GitHub repository above).
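
To make the selection protocol concrete, here is a minimal sketch of the evaluation loop. The repo id `Jialuo21/Science-T2I-C`, the split name, and the field names (`prompt`, `correct_image`, `wrong_image`) are illustrative assumptions rather than the dataset's confirmed schema; `score_fn` stands in for any model that rates image–prompt alignment, such as SciScore.

```python
from datasets import load_dataset

# Repo id, split, and field names are assumptions for illustration --
# check the dataset viewer for the actual schema.
ds = load_dataset("Jialuo21/Science-T2I-C", split="test")

def evaluate(score_fn):
    """Two-alternative forced choice: an example counts as correct when the
    model assigns a higher alignment score to the scientifically accurate image."""
    n_correct = 0
    for ex in ds:
        s_good = score_fn(ex["prompt"], ex["correct_image"])
        s_bad = score_fn(ex["prompt"], ex["wrong_image"])
        n_correct += int(s_good > s_bad)
    return n_correct / len(ds)  # random guessing lands near 0.5
```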

## Benchmarking LMM&VLM

Most existing VLMs struggle to select the correct image based on scientific knowledge, with performance often resembling random guessing, and LMMs face similar challenges. SciScore, however, stands out: after being trained on Science-T2I, it achieves human-level accuracy.

<img src="./exp.png" align="center">
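
For scoring an image pair yourself, below is a minimal sketch of using SciScore, assuming it loads as a CLIP-style checkpoint through `transformers` (`AutoProcessor`/`AutoModel`) and exposes the usual `logits_per_text` output; the exact entry point may differ, so treat the SciScore model card linked above as authoritative.

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

# Assumes a CLIP-style checkpoint; confirm the exact API on the model card.
processor = AutoProcessor.from_pretrained("Jialuo21/SciScore")
model = AutoModel.from_pretrained("Jialuo21/SciScore")

def pick_image(prompt: str, images: list[Image.Image]) -> int:
    """Return the index of the candidate image best aligned with the prompt."""
    inputs = processor(text=[prompt], images=images, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_text  # shape: (1, num_images)
    return int(logits.argmax(dim=-1).item())
```

Passing a prompt and its two candidate images to `pick_image` reproduces the pairwise protocol described above.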

## Citation

```
@misc{li2025sciencet2iaddressingscientificillusions,
      title={Science-T2I: Addressing Scientific Illusions in Image Synthesis},
      author={Jialuo Li and Wenhao Chai and Xingyu Fu and Haiyang Xu and Saining Xie},
      year={2025},
      eprint={2504.13129},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2504.13129},
}
```