Tag: Reinforcement Learning
All the articles with the tag "Reinforcement Learning".
-
Learning When to Think: Shaping Adaptive Reasoning in R1-Style Models via Multi-Stage RL
This paper proposes *AutoThink*, which combines an ellipsis prompt with a multi-stage reinforcement learning framework so that R1-style large reasoning models adaptively decide, based on problem complexity, whether to reason explicitly, achieving a superior accuracy-efficiency trade-off on five math benchmarks.
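A minimal sketch of the ellipsis-prompt idea, assuming a chat template in which the assistant turn is seeded with an opening `<think>` tag followed by an ellipsis; the function name and exact template are illustrative assumptions, not the paper's verbatim format.

```python
def build_autothink_prompt(question: str) -> str:
    """Seed the assistant turn with an opening <think> tag plus an
    ellipsis, so the model itself chooses whether to continue with
    explicit reasoning or close the think block for an easy question."""
    # Hypothetical template; the paper's actual chat format may differ.
    return (
        f"User: {question}\n"
        "Assistant: <think>\n..."
    )
```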
-
Temporal Sampling for Forgotten Reasoning in LLMs
This paper identifies the 'Temporal Forgetting' phenomenon in LLM fine-tuning and proposes 'Temporal Sampling', which draws answers from multiple training checkpoints to substantially improve reasoning performance (Pass@k gains of 4-19 percentage points), while reducing storage costs through LoRA adaptation.
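A minimal sketch of the checkpoint-sampling idea, assuming Hugging Face `transformers` and a list of saved checkpoint directories; the even allocation of samples across checkpoints and the generation settings are illustrative assumptions, not the paper's exact recipe.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

def temporal_sampling(prompt, checkpoint_paths, k=8):
    """Spread k samples across training checkpoints instead of drawing
    all k from the final checkpoint, so answers the final model has
    'forgotten' can still appear in the candidate pool."""
    answers = []
    per_ckpt = max(1, k // len(checkpoint_paths))
    for path in checkpoint_paths:
        tokenizer = AutoTokenizer.from_pretrained(path)
        model = AutoModelForCausalLM.from_pretrained(path)
        inputs = tokenizer(prompt, return_tensors="pt")
        for _ in range(per_ckpt):
            out = model.generate(**inputs, max_new_tokens=512, do_sample=True)
            answers.append(tokenizer.decode(out[0], skip_special_tokens=True))
    return answers[:k]  # score the candidate pool with Pass@k
```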
-
Learning to Think: Information-Theoretic Reinforcement Fine-Tuning for LLMs
This paper introduces Learning to Think (L2T), an information-theoretic reinforcement fine-tuning framework for LLMs that uses a universal dense process reward to optimize reasoning effectiveness and efficiency, achieving significant accuracy and token efficiency gains on math reasoning benchmarks.
-
ProRL: Prolonged Reinforcement Learning Expands Reasoning Boundaries in Large Language Models
This paper proposes ProRL, which applies prolonged reinforcement learning with a KL-divergence penalty and reference-policy resets to train the Nemotron-Research-Reasoning-Qwen-1.5B model on diverse tasks, significantly expanding the reasoning boundary of large language models, particularly in domains where the base model is weak and on out-of-distribution tasks.
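A minimal PyTorch sketch of the two mechanisms named above (a KL penalty toward a frozen reference policy, plus periodic reference resets); the REINFORCE-style surrogate, `beta`, and the reset interval are illustrative assumptions rather than ProRL's exact objective.

```python
import copy
import torch.nn.functional as F

def kl_regularized_loss(policy_logits, ref_logits, actions, advantages, beta=0.01):
    """Policy-gradient surrogate plus a KL penalty that keeps the policy
    close to a frozen reference during prolonged RL training."""
    logp = F.log_softmax(policy_logits, dim=-1)
    ref_logp = F.log_softmax(ref_logits, dim=-1)
    # Per-token KL(policy || reference), averaged over the batch.
    kl = (logp.exp() * (logp - ref_logp)).sum(-1).mean()
    # Log-probability of the sampled tokens, weighted by advantages.
    act_logp = logp.gather(-1, actions.unsqueeze(-1)).squeeze(-1)
    pg_loss = -(advantages * act_logp).mean()
    return pg_loss + beta * kl

def maybe_reset_reference(policy, reference, step, reset_every=2000):
    """Periodically re-anchor the reference to the current policy, so the
    KL term limits drift without permanently capping progress."""
    if step > 0 and step % reset_every == 0:
        reference.load_state_dict(copy.deepcopy(policy.state_dict()))
        for p in reference.parameters():
            p.requires_grad_(False)
```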
-
Not All Thoughts are Generated Equal: Efficient LLM Reasoning via Multi-Turn Reinforcement Learning
This paper proposes the Long⊗Short framework, in which a long-thought LLM and a short-thought LLM reason collaboratively. Through automatic thought chunking, cold-start SFT, and multi-turn RL optimization, it substantially improves reasoning efficiency: on multiple benchmarks, Qwen2.5-7B and Llama3.1-8B approach the performance of distilled models while cutting token length by over 80%.