Modality Curation: Building Universal Embeddings for Advanced Multimodal Information Retrieval
Abstract
UNITE tackles multimodal information retrieval through data curation, modality-aware training, and Modal-Aware Masked Contrastive Learning, achieving state-of-the-art results across multiple benchmarks.
Multimodal information retrieval (MIR) faces inherent challenges due to the heterogeneity of data sources and the complexity of cross-modal alignment. While previous studies have identified modal gaps in feature spaces, a systematic approach to address these challenges remains unexplored. In this work, we introduce UNITE, a universal framework that tackles these challenges through two critical yet underexplored aspects: data curation and modality-aware training configurations. Our work provides the first comprehensive analysis of how modality-specific data properties influence downstream task performance across diverse scenarios. Moreover, we propose Modal-Aware Masked Contrastive Learning (MAMCL) to mitigate competition among instances of different modalities. Our framework achieves state-of-the-art results on multiple multimodal retrieval benchmarks, outperforming existing methods by notable margins. Through extensive experiments, we demonstrate that strategic modality curation and tailored training protocols are pivotal for robust cross-modal representation learning. This work not only advances MIR performance but also provides a foundational blueprint for future research in multimodal systems. Our project is available at https://friedrichor.github.io/projects/UNITE.
Community
Universal Multimodal Embeddings
✅ Supports text, image, video, and their fusion.
✅ Supports coarse-grained retrieval, fine-grained retrieval, and instruction-based retrieval.
Project Page: https://friedrichor.github.io/projects/UNITE/
Code: https://github.com/friedrichor/UNITE
Similar papers, recommended by the Semantic Scholar API:
- Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs (2025)
- UniMoCo: Unified Modality Completion for Robust Multi-Modal Embeddings (2025)
- CIBR: Cross-modal Information Bottleneck Regularization for Robust CLIP Generalization (2025)