arxiv:2505.24120

CSVQA: A Chinese Multimodal Benchmark for Evaluating STEM Reasoning Capabilities of VLMs

Published on May 30
· Submitted by OrlandoHugBot on Jun 4
Abstract

A new benchmark, CSVQA, evaluates scientific reasoning in vision-language models through domain-specific visual question answering, highlighting the need for improvement in these models.

AI-generated summary

Vision-Language Models (VLMs) have demonstrated remarkable progress in multimodal understanding, yet their capabilities for scientific reasoning remain inadequately assessed. Current multimodal benchmarks predominantly evaluate generic image comprehension or text-driven reasoning, lacking authentic scientific contexts that require domain-specific knowledge integrated with the analysis of visual evidence. To fill this gap, we present CSVQA, a diagnostic multimodal benchmark specifically designed for evaluating scientific reasoning through domain-grounded visual question answering. Our benchmark features 1,378 carefully constructed question-answer pairs spanning diverse STEM disciplines, each demanding domain knowledge, integration of visual evidence, and higher-order reasoning. Compared to prior multimodal benchmarks, CSVQA places greater emphasis on real-world scientific content and complex reasoning. We additionally propose a rigorous evaluation protocol that systematically assesses whether model predictions are substantiated by valid intermediate reasoning steps, based on curated explanations. Our comprehensive evaluation of 15 VLMs on this benchmark reveals notable performance disparities: even the top-ranked proprietary model attains only 49.6% accuracy. This empirical evidence underscores the pressing need to advance scientific reasoning capabilities in VLMs. CSVQA is released at https://huggingface.co/datasets/Skywork/CSVQA.
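Since the dataset is released on the Hugging Face Hub, a minimal sketch of loading it with the standard `datasets` library is shown below. The repo id comes from the abstract's URL; the split name is an assumption, so check the dataset card for the actual configs and splits.

```python
# Minimal sketch: load CSVQA from the Hugging Face Hub.
# pip install datasets
from datasets import load_dataset

# Repo id taken from the abstract's release URL; the "train" split name
# is an assumption -- consult the dataset card for actual configs/splits.
ds = load_dataset("Skywork/CSVQA", split="train")

print(ds)            # row count and column names
print(ds[0].keys())  # inspect the schema before relying on field names
```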

Community

Paper submitter

CSVQA is a multimodal benchmark specifically designed to evaluate the scientific reasoning capabilities of Vision-Language Models (VLMs).


πŸ”¬ STEM-Focused

CSVQA contains 1,378 carefully curated visual question-answer pairs across STEM disciplines:

  • Physics
  • Chemistry
  • Biology
  • Mathematics

🌍 Real-World Scientific Contexts

Unlike generic multimodal benchmarks, CSVQA emphasizes:

  • Domain-specific scientific knowledge
  • Integration with visual evidence

🧠 Higher-Order Reasoning

Tasks go beyond surface-level understanding, requiring:

  • Multi-step reasoning
  • Logical inference grounded in scientific principles

🧾 Explanation-Based Evaluation

Each QA pair comes with:

  • A curated reasoning chain
  • This chain is used to evaluate whether a model's answer is logically and factually justified
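A hypothetical sketch of what such explanation-based scoring might look like follows. The paper's actual protocol is not reproduced here; every function name, field name (`answer`, `explanation`), and the token-overlap heuristic below are illustrative placeholders, and a real protocol would likely use a rubric or an LLM judge instead.

```python
# Hypothetical sketch of explanation-based scoring: an answer counts as
# correct only if the final choice matches the gold label AND the model's
# reasoning is judged consistent with the curated chain. Names here are
# illustrative, not the paper's implementation.

def extract_final_answer(response: str) -> str:
    """Pull the last option letter (A-D) mentioned in a model response."""
    for token in reversed(response.strip().split()):
        cleaned = token.strip("().:").upper()
        if cleaned in {"A", "B", "C", "D"}:
            return cleaned
    return ""

def steps_supported(response: str, reference_chain: str, threshold: float = 0.3) -> bool:
    """Naive stand-in: token overlap between the response and the curated
    chain. A real protocol would use a rubric or an LLM judge instead."""
    ref = set(reference_chain.lower().split())
    got = set(response.lower().split())
    return len(ref & got) / max(len(ref), 1) >= threshold

def score(example: dict, response: str) -> bool:
    # "answer" and "explanation" are assumed field names.
    answer_ok = extract_final_answer(response) == example["answer"].upper()
    return answer_ok and steps_supported(response, example["explanation"])
```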

πŸ“‰ Challenging for SOTA Models

An evaluation of 15 VLMs shows:

  • Even the best proprietary model only achieves 49.6% accuracy
  • Demonstrates the current limitations of VLMs in scientific reasoning

This paper is not mine. I also want to know what happened.


I have no idea, dude



Models citing this paper: 0

Datasets citing this paper: 1

Spaces citing this paper: 0

Collections including this paper: 4