Learning to Reason under Off-Policy Guidance

22/04/2025

Recent advances in large reasoning models (LRMs) demonstrate that sophisticated behaviors such as multi-step reasoning and self-reflection can emerge via reinforcement learning (RL) with simple rule-based rewards. However, existing zero-RL approaches are inherently "on-policy", limiting learning to a model's own outputs and failing to acquire reasoning abilities beyond its initial capabilities. We introduce LUFFY (Learning to reason Under oFF-policY guidance), a framework that augments zero-RL with off-policy reasoning traces. LUFFY dynamically balances imitation and exploration by combining off-policy demonstrations with on-policy rollouts during training. Notably, we propose policy shaping via regularized importance sampling to avoid superficial and rigid imitation during mixed-policy training. Remarkably, LUFFY achieves an average gain of over +7.0 points across six math benchmarks and an advantage of over +6.2 points on out-of-distribution tasks. It also substantially surpasses imitation-based supervised fine-tuning (SFT), particularly in generalization. Analysis shows LUFFY not only imitates effectively but also explores beyond demonstrations, offering a scalable path to train generalizable reasoning models with off-policy guidance.
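To make the mixed-policy idea concrete, below is a minimal Python sketch of how a per-token surrogate objective could combine on-policy rollouts with off-policy demonstration traces, using a regularized importance weight to soften imitation. The shaping function f(r) = r / (r + gamma), the function names, and the toy numbers are assumptions for illustration only, not the paper's exact formulation.

```python
import numpy as np

def shape_ratio(ratio, gamma=0.1):
    """Regularized importance weight f(r) = r / (r + gamma).

    Compared with the raw ratio, this keeps low-probability demonstration
    tokens from being effectively ignored, discouraging rigid token-level
    imitation. (The exact form of f here is an assumption.)
    """
    return ratio / (ratio + gamma)

def mixed_policy_objective(logp_on, adv_on,
                           logp_off_policy, logp_off_behavior, adv_off,
                           gamma=0.1):
    """Toy surrogate mixing on-policy and off-policy terms.

    - On-policy tokens: standard policy-gradient surrogate, log pi * A.
    - Off-policy tokens: importance ratio pi_theta / pi_off, passed through
      the shaping function before being weighted by the advantage.
    """
    on_term = logp_on * adv_on
    ratio = np.exp(logp_off_policy - logp_off_behavior)
    off_term = shape_ratio(ratio, gamma) * adv_off
    return np.mean(on_term) + np.mean(off_term)

# Toy usage: two on-policy tokens and two demonstration tokens,
# all with positive (e.g. group-normalized) advantages.
print(mixed_policy_objective(
    logp_on=np.array([-0.7, -1.2]), adv_on=np.array([0.8, 0.8]),
    logp_off_policy=np.array([-2.5, -0.3]),
    logp_off_behavior=np.array([-0.1, -0.2]),
    adv_off=np.array([1.0, 1.0]),
))
```

In a real training loop the gradient would flow through the policy log-probabilities (e.g. via an autodiff framework); the sketch only shows how the shaped importance weight dampens, rather than removes, the pull toward hard-to-imitate demonstration tokens.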
