SelaVPR++: Towards Seamless Adaptation of Foundation Models for Efficient Place Recognition
arXiv: 2502.16601
SelaVPR++ introduces a parameter-, memory-, and time-efficient fine-tuning (PEFT) method for the seamless adaptation of foundation models to visual place recognition (VPR). It also proposes a novel two-stage paradigm that uses compact binary features for fast candidate retrieval and robust floating-point features for re-ranking (sketched after the links below), significantly improving retrieval speed. Besides its high efficiency, SelaVPR++ outperforms previous state-of-the-art methods on several VPR benchmarks.
Paper: SelaVPR++: Towards Seamless Adaptation of Foundation Models for Efficient Place Recognition (Accepted by IEEE T-PAMI 2025)
GitHub: Lu-Feng/SelaVPRplusplus
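To make the two-stage retrieval paradigm concrete, here is a minimal NumPy sketch: binary codes are compared with Hamming distance for fast coarse candidate retrieval, and the candidates are then re-ranked with floating-point features. This is only an illustrative approximation under assumed descriptor sizes and sign-based binarization, not the SelaVPR++ implementation.

```python
import numpy as np

def hamming_topk(query_bits, db_bits, k):
    """Rank database items by Hamming distance on packed binary codes.

    query_bits: (B,) uint8 array (np.packbits of a binary descriptor)
    db_bits:    (N, B) uint8 array of packed database codes
    """
    # XOR then per-bit popcount gives the Hamming distance to every database entry.
    dists = np.unpackbits(np.bitwise_xor(db_bits, query_bits), axis=1).sum(axis=1)
    return np.argsort(dists)[:k]

def rerank(query_feat, db_feats, candidates):
    """Re-order candidate indices by cosine similarity of float features."""
    q = query_feat / np.linalg.norm(query_feat)
    c = db_feats[candidates]
    c = c / np.linalg.norm(c, axis=1, keepdims=True)
    return candidates[np.argsort(-(c @ q))]

# Toy usage with random descriptors (shapes are hypothetical).
rng = np.random.default_rng(0)
db_float = rng.standard_normal((1000, 256)).astype(np.float32)  # robust float features
db_binary = np.packbits(db_float > 0, axis=1)                    # compact binary codes
q_float = rng.standard_normal(256).astype(np.float32)
q_binary = np.packbits(q_float > 0)

candidates = hamming_topk(q_binary, db_binary, k=100)  # stage 1: fast coarse retrieval
ranking = rerank(q_float, db_float, candidates)        # stage 2: accurate re-ranking
print(ranking[:10])
```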
@ARTICLE{selavprpp,
  author={Lu, Feng and Jin, Tong and Lan, Xiangyuan and Zhang, Lijun and Liu, Yunpeng and Wang, Yaowei and Yuan, Chun},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  title={SelaVPR++: Towards Seamless Adaptation of Foundation Models for Efficient Place Recognition},
  year={2025},
  volume={},
  number={},
  pages={1-18},
  doi={10.1109/TPAMI.2025.3629287}
}