LateOn
State-of-the-Art ColBERT Retrieval Model by LightOn
DenseOn | LateOn | PyLate | FastPLAID
About the LateOn / DenseOn Family
State-of-the-art retrieval is increasingly dominated by closed models, either hidden behind APIs or trained on undisclosed data. This blocks reproducibility, prevents the study of possible data leakage, and restricts progress to a handful of private labs. We therefore decided to gather and curate a large amount of data and explore various mixtures. We release all the data used in our explorations:
- Gathered pre-training data: 1.4B query-document pairs, alongside the annotations used for non-destructive filtering (structural filtering, deduplication, cross-encoder pair relevance)
- The best pre-training mixture we found, with the filters already applied
- Fine-tuning datasets of 1.88M samples, each with a query, a positive, and 2,048 mined documents alongside their scores
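Because the filtering is non-destructive, the filter signals ship as metadata instead of deleted rows, so any filter can be rebuilt, tightened, or replaced. A minimal Python sketch of applying such filters to annotated pairs (the field names `ce_score` and `dup_hash` are hypothetical, not the released schema):

```python
# Illustrative sketch only: applying annotation-based filters to
# query-document pairs. Field names are hypothetical.
def filter_pairs(pairs, min_ce_score=0.5):
    """Keep pairs passing a cross-encoder relevance threshold and
    drop duplicates, without mutating the source annotations."""
    seen_hashes = set()
    kept = []
    for pair in pairs:
        if pair["ce_score"] < min_ce_score:
            continue  # cross-encoder relevance filter
        if pair["dup_hash"] in seen_hashes:
            continue  # deduplication filter
        seen_hashes.add(pair["dup_hash"])
        kept.append(pair)
    return kept

pairs = [
    {"query": "q1", "doc": "d1", "ce_score": 0.9, "dup_hash": "a"},
    {"query": "q2", "doc": "d2", "ce_score": 0.2, "dup_hash": "b"},  # low relevance
    {"query": "q3", "doc": "d1", "ce_score": 0.8, "dup_hash": "a"},  # duplicate
]
kept = filter_pairs(pairs)  # only the first pair survives both filters
```

Since the raw annotations are released rather than baked in, swapping `min_ce_score` or the dedup key reproduces a different mixture from the same data.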
Based on our findings, we trained LateOn (multi-vector/ColBERT) and DenseOn (single-vector/dense) models on a fully open, Apache 2.0-compatible training dataset and release those models as well. Both are built on the ModernBERT backbone at 149M parameters, a size we believe sits at the sweet spot: large enough to handle real-world queries and documents, small enough to serve at high throughput in latency-sensitive production systems. For more information, please read our blog post.
LateOn
LateOn is a ColBERT (multi-vector) retrieval model built on ModernBERT (149M parameters), trained by LightOn using PyLate. This is the supervised version, fine-tuned with hard-negative contrastive training. The unsupervised version can be found here.
Notably it:
- Beats every existing ColBERT model, including those 4× its size (Jina ColBERT v2 at 559M, Arctic Embed L v2 at 568M).
- Holds up under decontamination: when training-overlap samples are stripped from the BEIR corpora, LateOn climbs to 60.36 nDCG@10 on the 12-dataset decontaminated split — first place overall.
- Uses fully open data for both pre-training and fine-tuning, with all signals released as metadata so you can rebuild, extend, or replace any filter.
This release was focused on contrastive data exploration. We did not run a knowledge distillation phase nor use asymmetric prompts (both shown to help in our ColBERT-Zero study). We believe even stronger results are within reach by adding these — stay tuned.
LateOn achieves 57.22 average NDCG@10 on BEIR (14 datasets) and 60.36 on decontaminated BEIR (12 datasets), leading all ColBERT models. See our blog post for full results and analysis.
Alongside LateOn, we also trained DenseOn, a dense (single-vector) variant trained with the same setup. This variant is easier and cheaper to use, albeit with slightly weaker results (at 56.20 on BEIR, it is still stronger than any dense base-sized model). It may also lag LateOn in some respects, such as generalisation and long-context handling.
Results
BEIR (14 datasets, NDCG@10)
| Model | Average | Size (M) | Embed dim | ArguAna | CQADupstackRetrieval | ClimateFEVER | DBPedia | FEVER | FiQA2018 | HotpotQA | MSMARCO | NFCorpus | NQ | QuoraRetrieval | SCIDOCS | SciFact | TRECCOVID | Touche2020 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ColBERTv2 | 48.63 | 110 | 128 | 46.50 | 38.30 | 17.60 | 45.20 | 78.50 | 35.40 | 67.50 | 46.00 | 33.70 | 52.40 | 85.50 | 15.40 | 68.90 | 72.60 | 26.00 |
| Jina-ColBERT-v2 | 51.85 | 600 | 128 | 36.60 | 40.80 | 23.90 | 47.10 | 80.50 | 40.80 | 76.60 | 46.90 | 34.60 | 64.00 | 88.70 | 18.60 | 67.80 | 83.40 | 27.40 |
| ColBERT-small | 53.79 | 33 | 96 | 50.09 | 38.75 | 33.07 | 45.58 | 90.96 | 41.15 | 76.11 | 43.50 | 37.30 | 59.10 | 87.72 | 18.42 | 74.77 | 84.59 | 25.69 |
| GTE-ModernColBERT-v1 | 54.75 | 149 | 128 | 47.52 | 41.08 | 31.33 | 47.56 | 87.67 | 45.25 | 77.48 | 45.60 | 37.83 | 61.62 | 86.71 | 19.22 | 76.33 | 84.84 | 31.25 |
| ColBERT-Zero | 55.39 | 149 | 128 | 52.82 | 41.41 | 35.90 | 47.43 | 90.52 | 42.50 | 79.45 | 45.95 | 37.21 | 61.82 | 85.19 | 19.84 | 76.33 | 78.27 | 36.24 |
| LateOn-unsupervised | 50.11 | 149 | 128 | 43.12 | 47.71 | 18.76 | 43.36 | 65.74 | 51.94 | 68.17 | 37.51 | 37.15 | 58.41 | 89.48 | 21.13 | 76.89 | 69.81 | 22.53 |
| LateOn | 57.22 | 149 | 128 | 50.52 | 47.36 | 39.67 | 45.99 | 92.02 | 53.12 | 79.98 | 45.67 | 37.79 | 63.91 | 89.67 | 21.90 | 76.61 | 83.60 | 30.52 |
LateOn achieves 57.22 average NDCG@10, becoming the first ColBERT (and sub-150M-parameter) model to break the 57 mark on BEIR, surpassing the previous best ColBERT model (ColBERT-Zero at 55.39) by almost two points and exceeding GTE-ModernColBERT-v1 by two and a half points despite sharing the same backbone. Notably, we achieve this performance with contrastive data alone and without asymmetric prompts (shown to be beneficial in the ColBERT-Zero study, but they make the model harder and more expensive to use). This work focused on an exploration of contrastive data; we believe much stronger results are achievable through a knowledge-distillation phase and the addition of prompts to serve as query expansion. Given such strong performance, a fair concern would be overfitting to BEIR. As the decontaminated BEIR experiments show, our models stay very strong on benchmarks curated to remove possible data leakage, especially in the multi-vector case and in contrast to some other models.
Decontaminated BEIR (12 datasets, NDCG@10)
Standard benchmarks risk overestimating model quality when training data overlaps with evaluation corpora. This is a non-negligible risk in our case, as our mixture explorations are mostly built on BEIR evaluation. To quantify this and ensure that our model has not memorized possible leakage, we built decontaminated versions of the BEIR datasets by removing samples found in both the mGTE training dataset and in our internal training datasets. Since many retrieval models draw from similar public sources (Wikipedia, MS MARCO, Common Crawl, academic corpora), we expect significant overlap across models and believe the decontaminated benchmarks provide a meaningful, if imperfect, stress test. The decontaminated datasets are publicly available on HuggingFace.
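As an illustration of the idea (a hedged sketch, not the exact released procedure), decontamination amounts to dropping evaluation samples whose normalized text also appears in the training data:

```python
# Hedged sketch of the decontamination idea: remove evaluation samples
# whose normalized text also occurs in a training corpus.
def normalize(text: str) -> str:
    # Collapse whitespace and lowercase so trivial variants still match.
    return " ".join(text.lower().split())

def decontaminate(eval_samples, train_samples):
    train_set = {normalize(t) for t in train_samples}
    return [s for s in eval_samples if normalize(s) not in train_set]

train = ["The quick brown fox.", "A training passage."]
evaluation = ["the  quick brown fox.", "An unseen passage."]
clean = decontaminate(evaluation, train)  # only the unseen passage survives
```

The real procedure matches against both the mGTE training data and our internal datasets; this toy version only shows the exact-match-after-normalization mechanism.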
Despite being in the toughest position (as the decontamination is based on our data), LateOn and DenseOn stay consistent under decontamination. LateOn keeps its #1 position and DenseOn stays in the top four (only falling behind our other strong multi-vector model, ColBERT-Zero). Neither model flinches, which is direct evidence of generalization rather than overfitting.
More broadly, ColBERT models seem to generalize better under decontamination: all three ColBERT models hold or improve their ranking, and they take 2 of the top 3 decontaminated positions. Although DenseOn holds strong, some dense models are hit harder; GTE-ModernBERT, for example, drops from 8th to last, which is particularly interesting considering our base mixture is derived from theirs. This highlights the strength of our curation methodology. While other models such as Qwen3-Embedding-0.6B also drop some ranks, hinting at overlap with the BEIR evaluation, it is worth noting that newer models, such as jina-embeddings-v5 and pplx-embed-v1-0.6b, seem to exhibit stronger evidence of generalization rather than overfitting.
Related Checkpoints
| Model | Stage | Link |
|---|---|---|
| LateOn (this card) | Pre-training + fine-tuning (recommended) | lightonai/LateOn |
| LateOn-unsupervised | Pre-training only | lightonai/LateOn-unsupervised |
| DenseOn | Single-vector counterpart | lightonai/DenseOn |
| DenseOn-unsupervised | Single-vector, pre-training only | lightonai/DenseOn-unsupervised |
Model Details
Model Description
- Model Type: PyLate ColBERT model
- Base model: LateOn-unsupervised (ModernBERT-base)
- Document Length: 300 tokens
- Query Length: 32 tokens
- Output Dimensionality: 128 dimensions
- Similarity Function: MaxSim
- Language: English
- License: Apache 2.0
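The MaxSim similarity listed above scores a query against a document by matching each query token embedding to its most similar document token and summing those maxima. A minimal NumPy sketch, with random embeddings standing in for real model output:

```python
import numpy as np

# Illustrative sketch of the MaxSim operator used for late-interaction
# scoring: every query token is matched against its most similar document
# token, and the per-token maxima are summed.
def maxsim(query_emb: np.ndarray, doc_emb: np.ndarray) -> float:
    # query_emb: (num_query_tokens, dim); doc_emb: (num_doc_tokens, dim)
    sim = query_emb @ doc_emb.T          # token-to-token similarity matrix
    return float(sim.max(axis=1).sum())  # best document token per query token

rng = np.random.default_rng(0)
query_tokens = rng.normal(size=(32, 128))  # up to 32 query tokens, 128-dim
doc_tokens = rng.normal(size=(300, 128))   # up to 300 document tokens, 128-dim
score = maxsim(query_tokens, doc_tokens)
```

In PyLate this operator is applied between the per-token embeddings returned by `model.encode`; the sketch only shows the scoring math.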
Model Sources
- Documentation: PyLate Documentation
- Repository: PyLate on GitHub
- Hugging Face: PyLate models on Hugging Face
Full Model Architecture
```
ColBERT(
  (0): Transformer({'max_seq_length': 299, 'do_lower_case': False, 'architecture': 'ModernBertModel'})
  (1): Dense({'in_features': 768, 'out_features': 1536, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity', 'use_residual': True})
  (2): Dense({'in_features': 1536, 'out_features': 768, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity', 'use_residual': True})
  (3): Dense({'in_features': 768, 'out_features': 128, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity', 'use_residual': False})
)
```
Usage
First install the PyLate library:
```bash
pip install -U pylate
```
Retrieval
Use this model with PyLate to index and retrieve documents. The index uses FastPLAID for efficient similarity search.
Indexing documents
Load the ColBERT model and initialize the PLAID index, then encode and index your documents:
```python
from pylate import indexes, models, retrieve

# Step 1: Load the ColBERT model
model = models.ColBERT(
    model_name_or_path="lightonai/LateOn",
)

# Step 2: Initialize the PLAID index
index = indexes.PLAID(
    index_folder="pylate-index",
    index_name="index",
    override=True,  # This overwrites the existing index if any
)

# Step 3: Encode the documents
documents_ids = ["1", "2", "3"]
documents = ["document 1 text", "document 2 text", "document 3 text"]

documents_embeddings = model.encode(
    documents,
    batch_size=32,
    is_query=False,  # Ensure that it is set to False to indicate that these are documents, not queries
    show_progress_bar=True,
)

# Step 4: Add document embeddings to the index by providing embeddings and corresponding ids
index.add_documents(
    documents_ids=documents_ids,
    documents_embeddings=documents_embeddings,
)
```
Note that you do not have to recreate the index and encode the documents every time. Once you have created an index and added the documents, you can re-use the index later by loading it:
```python
# To load an index, simply instantiate it with the correct folder/name and without overriding it
index = indexes.PLAID(
    index_folder="pylate-index",
    index_name="index",
)
```
Retrieving top-k documents for queries
Once the documents are indexed, you can retrieve the top-k most relevant documents for a given set of queries. To do so, initialize the ColBERT retriever with the index you want to search in, encode the queries and then retrieve the top-k documents to get the top matches ids and relevance scores:
```python
# Step 1: Initialize the ColBERT retriever
retriever = retrieve.ColBERT(index=index)

# Step 2: Encode the queries
queries_embeddings = model.encode(
    ["query for document 3", "query for document 1"],
    batch_size=32,
    is_query=True,  # Ensure that it is set to True to indicate that these are queries
    show_progress_bar=True,
)

# Step 3: Retrieve top-k documents
scores = retriever.retrieve(
    queries_embeddings=queries_embeddings,
    k=10,  # Retrieve the top 10 matches for each query
)
```
Reranking
If you only want to use the ColBERT model for reranking on top of your first-stage retrieval pipeline, without building an index, you can simply use the rank.rerank function and pass the queries and documents to rerank:
```python
from pylate import rank, models

queries = [
    "query A",
    "query B",
]

documents = [
    ["document A", "document B"],
    ["document 1", "document C", "document B"],
]

documents_ids = [
    [1, 2],
    [1, 3, 2],
]

model = models.ColBERT(
    model_name_or_path="lightonai/LateOn",
)

queries_embeddings = model.encode(
    queries,
    is_query=True,
)

documents_embeddings = model.encode(
    documents,
    is_query=False,
)

reranked_documents = rank.rerank(
    documents_ids=documents_ids,
    queries_embeddings=queries_embeddings,
    documents_embeddings=documents_embeddings,
)
```
Training Details
Framework Versions
- Python: 3.11.10
- Sentence Transformers: 5.1.1
- PyLate: 1.3.4
- Transformers: 4.57.5
- PyTorch: 2.9.0+cu128
- Accelerate: 1.12.0
- Datasets: 3.6.0
- Tokenizers: 0.22.1
Citation
BibTeX
DenseOn and LateOn
```bibtex
@misc{sourty2026denseonlateon,
  title        = {DenseOn with the LateOn: Open State-of-the-Art Single and Multi-Vector Models},
  author       = {Sourty, Raphael and Chaffin, Antoine and Weller, Orion and Demoura, Paulo and Chatelain, Amelie},
  year         = {2026},
  howpublished = {\url{https://huggingface.co/blog/lightonai/denseon-lateon}},
}
```
PyLate
```bibtex
@inproceedings{DBLP:conf/cikm/ChaffinS25,
  author    = {Antoine Chaffin and
               Rapha{\"{e}}l Sourty},
  editor    = {Meeyoung Cha and
               Chanyoung Park and
               Noseong Park and
               Carl Yang and
               Senjuti Basu Roy and
               Jessie Li and
               Jaap Kamps and
               Kijung Shin and
               Bryan Hooi and
               Lifang He},
  title     = {PyLate: Flexible Training and Retrieval for Late Interaction Models},
  booktitle = {Proceedings of the 34th {ACM} International Conference on Information
               and Knowledge Management, {CIKM} 2025, Seoul, Republic of Korea, November 10-14, 2025},
  pages     = {6334--6339},
  publisher = {{ACM}},
  year      = {2025},
  url       = {https://github.com/lightonai/pylate},
  doi       = {10.1145/3746252.3761608},
}
```
Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
  title     = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
  author    = "Reimers, Nils and Gurevych, Iryna",
  booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
  month     = "11",
  year      = "2019",
  publisher = "Association for Computational Linguistics",
  url       = "https://arxiv.org/abs/1908.10084",
}
```
Acknowledgements
We thank Xin Zhang, Zach Nussbaum, Tom Aarsen, Bo Wang, Eugene Yang, Benjamin Clavié, Nandan Thakur, Oskar Hallström and Iacopo Poli for their valuable contributions and feedback. We are grateful to the teams behind Sentence Transformers and the BEIR benchmark, and to the open-source retrieval community, in particular the authors of Nomic Embed.
This work was granted access to the HPC resources of IDRIS under GENCI allocations AS011016449, A0181016214, and A0171015706 (Jean Zay supercomputer). We also acknowledge the Barcelona Supercomputing Center (BSC-CNS) for providing access to MareNostrum 5 under EuroHPC AI Factory Fast Lane project EHPC-AIF-2025FL01-445.