Make it SING: Analyzing Semantic Invariants in Classifiers
Abstract
All classifiers, including state-of-the-art vision models, possess invariants that are partially rooted in the geometry of their linear mappings. These invariants, which reside in the null space of the classifier, induce sets of equivalent inputs that map to identical outputs. Their semantic content remains poorly understood, as existing approaches struggle to provide human-interpretable information. To address this gap, we present Semantic Interpretation of the Null-space Geometry (SING), a method that constructs images that are equivalent with respect to the network and assigns semantic interpretations to the resulting variations. We map network features to multi-modal vision-language models, which allows us to obtain natural language descriptions and visual examples of the induced semantic shifts. SING can be applied to a single image, uncovering local invariants, or to sets of images, enabling statistical analysis at the class and model levels. For example, our method reveals that ResNet50 leaks relevant semantic attributes into the null space, whereas DinoViT, a ViT pretrained with self-supervised DINO, is better at maintaining class semantics across the invariant space.
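To make the null-space claim concrete, the sketch below illustrates the underlying geometric fact the abstract relies on: for a linear classifier head W, any feature perturbation lying in the null space of W leaves the logits unchanged, so two feature vectors differing by such a perturbation are equivalent with respect to the classifier. This is a minimal illustration only, not the authors' SING pipeline (which additionally maps features back to images and to a vision-language model for interpretation); the dimensions and the random stand-in for a trained head are assumptions for demonstration.

```python
# Minimal sketch of null-space equivalence (NOT the SING implementation).
# Assumes a linear head W mapping d_feat-dim features to n_classes logits;
# W here is random, standing in for a trained classifier head.
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)

d_feat, n_classes = 2048, 1000                 # e.g. ResNet50 penultimate features
W = rng.standard_normal((n_classes, d_feat))   # stand-in for a trained head

# Orthonormal basis of the null space of W, shape (d_feat, d_feat - rank(W)).
N = null_space(W)

f = rng.standard_normal(d_feat)                # feature vector of some image
delta = N @ rng.standard_normal(N.shape[1])    # random null-space direction

# f and f + delta produce identical outputs: they are classifier-equivalent.
assert np.allclose(W @ f, W @ (f + delta))
```

SING's contribution is to assign semantics to directions like `delta`: by decoding the perturbed features into images and describing them with a vision-language model, the otherwise opaque null-space variation becomes a human-interpretable semantic shift.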