CSVQA: A Chinese Multimodal Benchmark for Evaluating STEM Reasoning Capabilities of VLMs
Abstract
A new benchmark, CSVQA, evaluates scientific reasoning in vision-language models through domain-specific visual question answering, highlighting the need for improvement in these models.
Vision-Language Models (VLMs) have demonstrated remarkable progress in multimodal understanding, yet their capabilities for scientific reasoning remain inadequately assessed. Current multimodal benchmarks predominantly evaluate generic image comprehension or text-driven reasoning, lacking authentic scientific contexts that require domain-specific knowledge integration with visual evidence analysis. To fill this gap, we present CSVQA, a diagnostic multimodal benchmark specifically designed for evaluating scientific reasoning through domain-grounded visual question answering. Our benchmark features 1,378 carefully constructed question-answer pairs spanning diverse STEM disciplines, each demanding domain knowledge, integration of visual evidence, and higher-order reasoning. Compared to prior multimodal benchmarks, CSVQA places greater emphasis on real-world scientific content and complex reasoning. We additionally propose a rigorous evaluation protocol to systematically assess whether model predictions are substantiated by valid intermediate reasoning steps based on curated explanations. Our comprehensive evaluation of 15 VLMs on this benchmark reveals notable performance disparities, as even the top-ranked proprietary model attains only 49.6% accuracy. This empirical evidence underscores the pressing need for advancing scientific reasoning capabilities in VLMs. Our CSVQA is released at https://huggingface.co/datasets/Skywork/CSVQA.
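For readers who want to inspect the released data, the snippet below is a minimal sketch of loading CSVQA from the Hub with the Hugging Face `datasets` library. The dataset ID comes from the release URL above; split names and column names are not documented on this page, so the sketch prints them rather than assuming a schema.

```python
# Minimal sketch: loading CSVQA with the Hugging Face `datasets` library.
# The dataset ID comes from the release URL above; split and column names
# are not documented here, so we print them instead of hard-coding assumptions.
from datasets import load_dataset

ds = load_dataset("Skywork/CSVQA")   # downloads the benchmark from the Hub
print(ds)                            # shows the available splits and their sizes

first_split = next(iter(ds.values()))
print(first_split.column_names)      # inspect the actual fields before relying on any name
```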
Community
CSVQA is a multimodal benchmark specifically designed to evaluate the scientific reasoning capabilities of Vision-Language Models (VLMs).
STEM-Focused
CSVQA contains 1,378 carefully curated visual question-answer pairs across STEM disciplines:
- Physics
- Chemistry
- Biology
- Mathematics
Real-World Scientific Contexts
Unlike generic multimodal benchmarks, CSVQA emphasizes:
- Domain-specific scientific knowledge
- Integration with visual evidence
Higher-Order Reasoning
Tasks go beyond surface-level understanding, requiring:
- Multi-step reasoning
- Logical inference grounded in scientific principles
Explanation-Based Evaluation
Each QA pair is paired with:
- Curated reasoning chains
- Used to evaluate whether model answers are logically and factually justified (see the scoring sketch after this list)
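As a rough illustration only, the sketch below scores final answers for multiple-choice items. It is not the paper's full explanation-based protocol, which also validates intermediate reasoning steps against the curated explanations; the option format (single letters A-D) and the model output conventions are assumptions.

```python
# Minimal sketch of final-answer accuracy scoring for multiple-choice items.
# NOT the paper's full explanation-based protocol; the A-D answer format
# and output conventions are assumptions.
import re

def extract_choice(model_output: str) -> str | None:
    """Return the last standalone option letter (A-D) in a model's response, if any."""
    matches = re.findall(r"\b([A-D])\b", model_output.strip())
    return matches[-1] if matches else None

def accuracy(predictions: list[str], gold_answers: list[str]) -> float:
    """Fraction of items whose extracted choice matches the gold answer."""
    correct = sum(
        extract_choice(pred) == gold
        for pred, gold in zip(predictions, gold_answers)
    )
    return correct / len(gold_answers) if gold_answers else 0.0

# Example usage with toy outputs:
preds = ["The answer is B.", "After reasoning, I choose C"]
golds = ["B", "D"]
print(accuracy(preds, golds))  # 0.5
```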
Challenging for SOTA Models
An evaluation of 15 VLMs shows:
- Even the best proprietary model only achieves 49.6% accuracy
- This demonstrates the current limitations of VLMs in scientific reasoning
This is an automated message from the Librarian Bot. The following papers, recommended by the Semantic Scholar API, are similar to this paper:
- VisualPuzzles: Decoupling Multimodal Reasoning Evaluation from Domain Knowledge (2025)
- ChartMuseum: Testing Visual Reasoning Capabilities of Large Vision-Language Models (2025)
- OCR-Reasoning Benchmark: Unveiling the True Capabilities of MLLMs in Complex Text-Rich Image Reasoning (2025)
- Seeing Beyond Words: MatVQA for Challenging Visual-Scientific Reasoning in Materials Science (2025)
- MangaVQA and MangaLMM: A Benchmark and Specialized Model for Multimodal Manga Understanding (2025)
- MME-Reasoning: A Comprehensive Benchmark for Logical Reasoning in MLLMs (2025)
- LENS: Multi-level Evaluation of Multimodal Reasoning with Large Language Models (2025)