arxiv:2505.14142

AudSemThinker: Enhancing Audio-Language Models through Reasoning over Semantics of Sound

Published on May 20
Abstract

AudSemThinker, trained on the AudSem dataset, advances semantic audio reasoning, mitigates data contamination in zero-shot evaluation, and outperforms existing audio-language models.

AI-generated summary

Audio-language models have shown promising results in various sound understanding tasks, yet they remain limited in their ability to reason over the fine-grained semantics of sound. In this paper, we present AudSemThinker, a model whose reasoning is structured around a framework of auditory semantics inspired by human cognition. To support this, we introduce AudSem, a novel dataset specifically curated for semantic descriptor reasoning in audio-language models. AudSem addresses the persistent challenge of data contamination in zero-shot evaluations by providing a carefully filtered collection of audio samples paired with captions generated through a robust multi-stage pipeline. Our experiments demonstrate that AudSemThinker outperforms state-of-the-art models across multiple training settings, highlighting its strength in semantic audio reasoning. Both AudSemThinker and the AudSem dataset are released publicly.

