Textual Steering Vectors Can Improve Visual Understanding in Multimodal Large Language Models
Abstract
Text-derived steering vectors obtained via sparse autoencoders, mean shift, and linear probing improve multimodal accuracy in multimodal large language models without parameter modifications or significant additional data or computation.
Steering methods have emerged as effective, targeted tools for guiding the behavior of large language models (LLMs) without modifying their parameters. Multimodal large language models (MLLMs), however, do not currently enjoy the same suite of techniques, due in part to their recency and architectural diversity. Motivated by this gap, we investigate whether MLLMs can be steered using vectors derived from their text-only LLM backbone, via sparse autoencoders (SAEs), mean shift, and linear probing. We find that text-derived steering consistently enhances multimodal accuracy across diverse MLLM architectures and visual tasks. In particular, mean shift boosts spatial relationship accuracy on CV-Bench by up to +7.3% and counting accuracy by up to +3.3%, outperforming prompting and exhibiting strong generalization to out-of-distribution datasets. These results highlight textual steering vectors as a powerful, efficient mechanism for enhancing grounding in MLLMs with minimal additional data collection and computational overhead.
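To make the mean-shift approach concrete, below is a minimal sketch of deriving a steering vector from text-only contrastive prompts and applying it during generation. The model name, layer index, steering scale, and prompt sets are all placeholder assumptions, not the paper's settings; in the paper the vector is derived from the MLLM's text backbone and applied during multimodal inference, whereas this sketch steers a text-only LLaMA-style model for simplicity.

```python
# Minimal mean-shift steering sketch. Assumptions (not from the paper):
# the model name, LAYER=15, SCALE=4.0, and the contrastive prompt sets below.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-2-7b-hf"  # placeholder LLaMA-style backbone
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.float16)
model.eval()

LAYER, SCALE = 15, 4.0  # residual-stream layer to steer and steering strength

@torch.no_grad()
def mean_hidden(prompts):
    """Mean activation of decoder layer LAYER, averaged over prompts and tokens."""
    vecs = []
    for p in prompts:
        batch = tok(p, return_tensors="pt").to(model.device)
        out = model(**batch, output_hidden_states=True)
        # hidden_states[0] is the embedding output, so index LAYER + 1
        # is the output of model.model.layers[LAYER].
        vecs.append(out.hidden_states[LAYER + 1].mean(dim=1))
    return torch.cat(vecs).mean(dim=0)  # (d_model,)

# Text-only contrastive sets for the target concept (here: spatial relations).
pos = ["The cup is to the left of the plate.", "The dog sits behind the fence."]
neg = ["The cup is on the table.", "The dog sits near the fence."]

steer = mean_hidden(pos) - mean_hidden(neg)  # mean-shift direction
steer = steer / steer.norm()                 # unit-normalize

def hook(module, inputs, output):
    # LLaMA decoder layers return a tuple; element 0 is the hidden states.
    h = output[0]
    return (h + SCALE * steer.to(h.dtype),) + tuple(output[1:])

handle = model.model.layers[LAYER].register_forward_hook(hook)
prompt = "Question: is the mug left or right of the laptop? Answer:"
ids = tok(prompt, return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**ids, max_new_tokens=16)[0]))
handle.remove()  # restore unsteered behavior
```

Applying the same vector inside an MLLM would amount to hooking the corresponding layer of its language tower; per the abstract, the SAE and linear-probing variants differ only in how the steering direction is obtained, not in how it is injected.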
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Denoising Concept Vectors with Sparse Autoencoders for Improved Language Model Steering (2025)
- SAE-SSV: Supervised Steering in Sparse Representation Spaces for Reliable Control of Language Models (2025)
- Steering Large Language Models for Machine Translation Personalization (2025)
- MASSV: Multimodal Adaptation and Self-Data Distillation for Speculative Decoding of Vision-Language Models (2025)
- Improving Multilingual Language Models by Aligning Representations through Steering (2025)
- Sparse Autoencoders Learn Monosemantic Features in Vision-Language Models (2025)
- Breaking Bad Tokens: Detoxification of LLMs Using Sparse Autoencoders (2025)