SLIME: Stabilized Likelihood Implicit Margin Enforcement for Preference Optimization
Abstract
SLIME is a novel reference-free alignment objective for large language models that decouples preference learning from generation quality through a three-pronged approach combining likelihood maximization, probability stabilization, and dual-margin constraints.
Direct preference optimization methods have emerged as a computationally efficient alternative to Reinforcement Learning from Human Feedback (RLHF) for aligning Large Language Models (LLMs). Recent approaches have streamlined the alignment process by deriving implicit reward functions, yet they often suffer from a critical objective mismatch: optimizing the relative margin between chosen and rejected responses does not guarantee the preservation of the chosen response's absolute likelihood. This can lead to "unlearning", where the model degrades the probability of high-quality outputs to satisfy margin constraints, and "formatting collapse" caused by over-penalization of rejected sequences. In this work, we introduce SLIME (Stabilized Likelihood Implicit Margin Enforcement), a reference-free alignment objective designed to decouple preference learning from generation quality. SLIME incorporates a three-pronged objective: (1) an anchoring term that maximizes the likelihood of preferred responses; (2) a stabilizing penalty that prevents the probabilities of rejected tokens from collapsing to zero; and (3) a dual-margin mechanism that combines hard and soft constraints for precise boundary shaping. Our results demonstrate that SLIME achieves superior performance compared to state-of-the-art baselines while maintaining higher generation stability.
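For concreteness, one way an objective with these three components could be written is sketched below. This is an illustrative reading of the abstract, not the paper's exact formulation: the symbols $\lambda_a$, $\lambda_s$, $\lambda_m$, the floor $\tau$, the margins $\gamma_{\text{hard}}$ and $\gamma_{\text{soft}}$, and the use of softplus for the soft terms are all assumptions.
$$
\mathcal{L}_{\text{SLIME}}(x, y_w, y_l) = \underbrace{-\lambda_a \log \pi_\theta(y_w \mid x)}_{\text{likelihood anchoring}} + \underbrace{\lambda_s \sum_{t} \operatorname{softplus}\!\big(\tau - \log \pi_\theta(y_{l,t} \mid x, y_{l,<t})\big)}_{\text{token-level stabilization}} + \underbrace{\max\!\big(0,\, \gamma_{\text{hard}} - \Delta\big) + \lambda_m \operatorname{softplus}\!\big(\gamma_{\text{soft}} - \Delta\big)}_{\text{dual margin}},
$$
where $\Delta = \log \pi_\theta(y_w \mid x) - \log \pi_\theta(y_l \mid x)$ is the log-likelihood gap between the chosen response $y_w$ and the rejected response $y_l$ given prompt $x$.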
Community
We introduce SLIME, a reference-free preference optimization objective designed to decouple preference learning from generation quality. Our approach uses a three-pronged objective (a minimal code sketch follows the list):
- Likelihood Anchoring: An explicit term to maximize the likelihood of the preferred response, preventing quality degradation.
- Token-Level Stabilization: A softplus-based penalty that prevents rejected token probabilities from collapsing to zero, preserving linguistic fluency.
- Dual-Margin Mechanism: A novel combination of hard and soft margins for precise boundary shaping without vanishing gradients.
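Below is a minimal PyTorch sketch of how a loss with these three components could be assembled. It is an interpretation of the summary above rather than the authors' implementation: the function name slime_loss, the hyperparameters (lambda_anchor, lambda_stab, lambda_soft, tau, gamma_hard, gamma_soft) and their default values, and the exact placement of the softplus terms are assumptions.

```python
# Illustrative sketch of a SLIME-style loss (assumed formulation, not the paper's code).
import torch
import torch.nn.functional as F


def slime_loss(chosen_logps, rejected_logps, rejected_token_logps, rejected_mask,
               lambda_anchor=1.0, lambda_stab=0.1, lambda_soft=1.0,
               tau=-10.0, gamma_hard=0.5, gamma_soft=2.0):
    """Sketch of a stabilized, margin-based, reference-free preference loss.

    chosen_logps / rejected_logps: (B,) sequence log-probabilities under the policy.
    rejected_token_logps: (B, T) per-token log-probabilities of the rejected response.
    rejected_mask: (B, T) 1 for real tokens, 0 for padding.
    """
    # (1) Likelihood anchoring: keep the chosen response probable in absolute terms.
    anchor = -chosen_logps.mean()

    # (2) Token-level stabilization: penalize rejected token log-probs only once they
    #     fall below the floor tau, so they are pushed down but not driven to zero.
    below_floor = F.softplus(tau - rejected_token_logps)
    stabilize = (below_floor * rejected_mask).sum() / rejected_mask.sum().clamp(min=1)

    # (3) Dual margin on the chosen-vs-rejected log-likelihood gap: a hard hinge
    #     enforces a minimum gap, while a soft (softplus) term keeps a non-vanishing
    #     gradient toward a larger target margin.
    delta = chosen_logps - rejected_logps
    hard = F.relu(gamma_hard - delta).mean()
    soft = F.softplus(gamma_soft - delta).mean()

    return lambda_anchor * anchor + lambda_stab * stabilize + hard + lambda_soft * soft


# Tiny usage example with random numbers standing in for model outputs.
if __name__ == "__main__":
    B, T = 4, 16
    chosen_logps = -torch.rand(B) * 50
    rejected_token_logps = -torch.rand(B, T) * 5
    rejected_mask = torch.ones(B, T)
    rejected_logps = (rejected_token_logps * rejected_mask).sum(-1)
    print(slime_loss(chosen_logps, rejected_logps, rejected_token_logps, rejected_mask))
```

In this sketch the stabilization term only activates once a rejected token's log-probability drops below the floor tau, so rejected sequences are pushed away from the chosen ones without their token probabilities collapsing to zero.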
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Clipping-Free Policy Optimization for Large Language Models (2026)
- InSPO: Unlocking Intrinsic Self-Reflection for LLM Preference Optimization (2025)
- RIFT: Repurposing Negative Samples via Reward-Informed Fine-Tuning (2026)
- AMIR-GRPO: Inducing Implicit Preference Signals into GRPO (2026)
- Reflective Preference Optimization (RPO): Enhancing On-Policy Alignment via Hint-Guided Reflection (2025)
- From RLHF to Direct Alignment: A Theoretical Unification of Preference Learning for Large Language Models (2026)
- GRADE: Replacing Policy Gradients with Backpropagation for LLM Alignment (2025)