
From RLHF to Direct Alignment: A Theoretical Unification of Preference Learning for Large Language Models

arXiv:2601.06108v1 Announce Type: cross
Abstract: Aligning large language models (LLMs) with human preferences has become essential for safe and beneficial AI deployment. While Reinforcement Learning from Human Feedback (RLHF) established the dominant paradigm, a proliferation of alternatives, including Direct Preference Optimization (DPO), Identity Preference Optimization (IPO), Kahneman-Tversky Optimization (KTO), Simple Preference Optimization (SimPO), and many others, has left practitioners without clear guidance on method selection. This survey provides a theoretical unification of preference learning methods, revealing that the apparent diversity reduces to principled choices along three orthogonal axes: (I) Preference Model (what likelihood model underlies the objective), (II) Regularization Mechanism (how deviation from reference policies is controlled), and (III) Data Distribution (online vs. offline learning and coverage requirements). We formalize each axis with precise definitions and theorems, establishing key results including the coverage separation between online and offline methods, scaling laws for reward overoptimization, and conditions under which direct alignment methods fail. Our analysis reveals that failure modes (length hacking, mode collapse, likelihood displacement) arise from specific, predictable combinations of design choices. We synthesize empirical findings across 50+ papers and provide a practitioner's decision guide for method selection. The framework transforms preference learning from an empirical art into a theoretically grounded discipline.
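For orientation, here is a minimal illustration of how two of these axes interact, using standard forms from the literature rather than the paper's own notation. Under KL regularization toward a reference policy \pi_{\mathrm{ref}} (axis II), the RLHF objective is

  \max_{\pi_\theta} \; \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta(\cdot \mid x)}\big[ r(x, y) \big] \;-\; \beta\, \mathbb{D}_{\mathrm{KL}}\big[ \pi_\theta(\cdot \mid x) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid x) \big],

and, assuming a Bradley-Terry preference model (axis I) with offline preference pairs (x, y_w, y_l) (axis III), its closed-form optimum yields the widely cited DPO loss

  \mathcal{L}_{\mathrm{DPO}}(\theta) \;=\; -\,\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}}\left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} \;-\; \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right].

Other methods in the survey's scope are commonly described as varying these choices; for example, IPO replaces the log-sigmoid with a squared loss, and SimPO drops the reference policy in favor of length-normalized log-probabilities.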
