Abstract
Scalar activations called Super Neurons (SNs) supplant Sparse Attention Vectors (SAVs) as training-free classifiers, enabling faster and more accurate classification in Vision-Language Models through extreme early exiting from shallow layers.
Sparse Attention Vectors (SAVs) have emerged as an excellent training-free alternative to supervised finetuning or low-rank adaptation for improving the performance of Vision-Language Models (VLMs). At their core, SAVs select a few accurate attention heads for a task of interest and use them as classifiers, rather than relying on the model's generated prediction. In a similar spirit, we find that directly probing the raw activations of the VLM, as scalar values, is sufficient to yield accurate classifiers on diverse visually grounded downstream tasks. Shifting focus from attention vectors to scalar activations dramatically enlarges the search space of candidate classifiers, allowing us to find more discriminative neurons as early as the first generated token. We call such activations Super Neurons (SNs). In this probing setting, we discover that enough SNs appear in the shallow layers of the large language model to allow for extreme early exiting from the first layer at the first generated token. Compared to the original network, SNs robustly improve classification performance while achieving a speedup of up to 5.10x.
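To make the probing recipe concrete, below is a minimal sketch of how scalar activations could be scored and used as classifiers. The function names, the nearest-class-mean scoring rule, and the majority-vote aggregation are our illustrative assumptions; the paper's exact selection criterion may differ. We assume `acts` holds hidden-state vectors taken at the first generated token for a small labeled support set.

```python
import numpy as np

def select_super_neurons(acts, labels, k=10):
    """Pick the k scalar activations (neurons) that best separate classes.

    acts:   (n_examples, d) hidden states at the first generated token
    labels: (n_examples,) integer class labels
    Scoring here is per-neuron nearest-class-mean accuracy on the support
    set; this is an illustrative stand-in for the paper's criterion.
    """
    classes = np.unique(labels)
    # Per-class mean of every scalar activation: (n_classes, d)
    means = np.stack([acts[labels == c].mean(axis=0) for c in classes])
    # For each example and neuron, predict the class with the closest mean.
    dists = np.abs(acts[:, None, :] - means[None, :, :])   # (n, C, d)
    preds = classes[dists.argmin(axis=1)]                  # (n, d)
    per_neuron_acc = (preds == labels[:, None]).mean(axis=0)
    top = np.argsort(per_neuron_acc)[::-1][:k]
    return top, means[:, top]

def classify(acts, neuron_idx, class_means):
    """Majority vote over the selected neurons' nearest-class-mean votes.

    Returns, per example, an index into the rows of class_means.
    """
    votes = np.abs(
        acts[:, neuron_idx][:, None, :] - class_means[None]
    ).argmin(axis=1)                                       # (n, k)
    return np.array([np.bincount(v).argmax() for v in votes])
```

On a new query, one would extract the same hidden state, read off only the selected scalar positions, and take the majority vote; with small k this reduces the classifier to a handful of scalar comparisons.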
Community
Motivated by the fact that vision-language models contain billions of parameters while simple tasks may require far less compute to solve accurately, we present a simple training-free approach that identifies high-scoring activations in the model. These activations provide substantial performance improvements over the base model on a diverse set of downstream categorical VQA tasks, so we call them Super Neurons (SNs). SNs are robust to changes in prompt and image distribution and appear across model families (LLaVA and Qwen-3-VL). We also leverage SNs to perform extreme early exiting: producing an accurate answer from the first layer of the LLM, during the first autoregressive step, which substantially increases inference speed. A minimal sketch of how such activations can be extracted follows.
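For intuition on where those scalars come from, here is a hedged sketch of extracting the layer-1 hidden state at the first autoregressive step with Hugging Face `transformers`. The checkpoint name is a placeholder and the helper name is ours. Note that this sketch still runs the full forward pass; realizing the reported speedup would require actually truncating the decoder stack after layer 1, which we omit here.

```python
import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration

# Placeholder checkpoint -- any LLaVA-style model exposing hidden states works.
model_id = "llava-hf/llava-1.5-7b-hf"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

@torch.no_grad()
def first_layer_activations(image, prompt):
    """Hidden state after decoder layer 1, at the position emitting the first token.

    `prompt` should contain the model's image placeholder, e.g.
    "USER: <image>\\nIs this a cat? ASSISTANT:".
    """
    inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
    out = model(**inputs, output_hidden_states=True)
    # hidden_states[0] is the embedding output; [1] is the first decoder block.
    return out.hidden_states[1][0, -1].float().cpu().numpy()
```

Vectors gathered this way over a labeled support set can be fed to the selection sketch above; the up-to-5.10x speedup comes from skipping the remaining decoder layers entirely, which is not shown here.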
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- ConsensusDrop: Fusing Visual and Cross-Modal Saliency for Efficient Vision Language Models (2026)
- Selective Training for Large Vision Language Models via Visual Information Gain (2026)
- Seeing Clearly, Reasoning Confidently: Plug-and-Play Remedies for Vision Language Model Blindness (2026)
- VisNec: Measuring and Leveraging Visual Necessity for Multimodal Instruction Tuning (2026)
- Beyond Static Cropping: Layer-Adaptive Visual Localization and Decoding Enhancement (2026)
- CLUE: Crossmodal disambiguation via Language-vision Understanding with attEntion (2026)
- Do LLMs and VLMs Share Neurons for Inference? Evidence and Mechanisms of Cross-Modal Transfer (2026)