OptiMer: Optimal Distribution Vector Merging Is Better than Data Mixing for Continual Pre-Training
Abstract
OptiMer enables flexible continual pre-training by decoupling data mixture ratio selection from training through post-hoc Bayesian optimization of distribution vectors extracted from individual dataset models.
Continual pre-training (CPT) is widely used to adapt LLMs to target languages and domains, yet the mixture ratio of training data remains a sensitive hyperparameter that is expensive to tune: it must be fixed before training begins, and a suboptimal choice can waste weeks of compute. In this work, we propose OptiMer, which decouples ratio selection from training: we train one CPT model per dataset, extract each model's distribution vector (the parameter shift induced by that dataset), and search for optimal composition weights post hoc via Bayesian optimization. Experiments on Gemma 3 27B across languages (Japanese, Chinese) and domains (Math, Code) show that OptiMer consistently outperforms data-mixing and model-averaging baselines at 15-35x lower search cost. Key findings: 1) the optimized weights can be interpreted as data mixture ratios, and retraining with these ratios improves data-mixture CPT; and 2) the same vector pool can be re-optimized for a new objective without any retraining, producing target-tailored models on demand. Our work establishes that data mixture ratio selection, traditionally a pre-training decision, can be reformulated as post-hoc optimization over distribution vectors, offering a more flexible paradigm for continual pre-training.
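The pipeline the abstract describes (per-dataset CPT models, distribution vectors as parameter shifts, post-hoc weight search) can be sketched in a few lines. The following is a minimal toy illustration, not the paper's implementation: the model is a small parameter vector, the objective is a hypothetical stand-in for target-task validation loss, and a dependency-free random search replaces the Bayesian optimizer the paper uses.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # toy parameter count

# Base model and one CPT model per dataset (all hypothetical toy vectors).
theta_base = rng.normal(size=DIM)
theta_cpt = [theta_base + rng.normal(size=DIM) for _ in range(3)]

# "Distribution vector" = parameter shift induced by CPT on one dataset.
deltas = [t - theta_base for t in theta_cpt]

def merge(weights):
    """Compose distribution vectors into a single merged model."""
    return theta_base + sum(w * d for w, d in zip(weights, deltas))

# Hypothetical objective: distance to a "target" parameter vector, standing
# in for validation loss on the target language/domain.
theta_target = theta_base + 0.7 * deltas[0] + 0.3 * deltas[2]

def objective(weights):
    return float(np.linalg.norm(merge(weights) - theta_target))

# Post-hoc search over composition weights; no retraining is needed because
# every candidate model is just a weighted sum of precomputed vectors.
best_w, best_loss = None, float("inf")
for _ in range(500):
    w = rng.dirichlet(np.ones(len(deltas)))  # candidate mixture-like weights
    loss = objective(w)
    if loss < best_loss:
        best_w, best_loss = w, loss

print(best_w, best_loss)
```

Re-optimizing for a different objective only requires rerunning the search loop over the same `deltas`, which mirrors the paper's claim that one vector pool yields target-tailored models on demand.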
Community
🤯 Struggling with dataset mixing ratios in LLM continual training?
🧩 We propose OptiMer: train one model per dataset, then merge them optimally. No more costly ratio tuning!
📄OptiMer: Optimal Distribution Vector Merging Is Better than Data Mixing for Continual Pre-Training
🔗https://arxiv.org/abs/2603.28858
This is my last work at @NICT, and it is also related to our collaboration with @AISingapore.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Bagging-Based Model Merging for Robust General Text Embeddings (2026)
- Linear Model Merging Unlocks Simple and Scalable Multimodal Data Mixture Optimization (2026)
- Pre-training LLM without Learning Rate Decay Enhances Supervised Fine-Tuning (2026)
- LEDA: Latent Semantic Distribution Alignment for Multi-domain Graph Pre-training (2026)
- MoSE: Mixture of Slimmable Experts for Efficient and Adaptive Language Models (2026)
- mSFT: Addressing Dataset Mixtures Overfitting Heterogeneously in Multi-task SFT (2026)
- OPUS: Towards Efficient and Principled Data Selection in Large Language Model Pre-training in Every Iteration (2026)
Get this paper in your agent:
hf papers read 2603.28858
Don't have the latest CLI? Install it with:
curl -LsSf https://hf.co/cli/install.sh | bash