Tag: RLHF
All articles tagged "RLHF".
-
Toward Evaluative Thinking: Meta Policy Optimization with Evolving Reward Models
This paper proposes the Meta Policy Optimization (MPO) framework, in which a meta reward model dynamically adjusts the reward model's evaluation prompt. MPO significantly improves large language model alignment across a range of tasks while reducing reward hacking and the burden of manual prompt engineering.
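A minimal sketch of the prompt-refinement loop the abstract describes, assuming hypothetical `policy`, `reward_model`, and `meta_reward_model` interfaces (all names and method signatures here are illustrative, not the paper's API):

```python
# Illustrative MPO-style round: a meta reward model audits the reward
# model's scoring rubric and rewrites it between training rounds.
# generate/score/refine_rubric/update are hypothetical interfaces.

def mpo_round(policy, reward_model, meta_reward_model, prompts, rubric):
    # 1. The policy generates candidate responses for the current prompts.
    responses = [policy.generate(p) for p in prompts]

    # 2. The reward model scores each response under the current rubric prompt.
    scores = [reward_model.score(p, r, rubric=rubric)
              for p, r in zip(prompts, responses)]

    # 3. The meta reward model inspects (prompt, response, score) triples and
    #    proposes a refined rubric that closes observed scoring loopholes.
    rubric = meta_reward_model.refine_rubric(rubric, prompts, responses, scores)

    # 4. The policy is updated against the rubric-conditioned rewards with any
    #    standard RLHF optimizer (e.g. PPO).
    policy.update(prompts, responses, scores)
    return rubric
```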
-
From Distributional to Overton Pluralism: Investigating Large Language Model Alignment
By analyzing how LLM output distributions change before and after alignment, this paper shows that alignment reduces distributional pluralism but achieves Overton pluralism through longer responses, and that base models can closely mimic aligned-model behavior via in-context learning, supporting the superficial alignment hypothesis.
-
Restoring Calibration for Aligned Large Language Models: A Calibration-Aware Fine-Tuning Approach
This paper introduces calibration-aware fine-tuning methods (CFT and RCFT), grounded in a theoretical framework of calibratable and non-calibratable regimes, which substantially improve the calibration of preference-aligned large language models while preserving or improving their language capabilities.
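For reference, the diagnostic such calibration work typically targets is expected calibration error (ECE); below is a minimal NumPy version of the standard binned definition (a common metric, not code from the paper):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard binned ECE: |accuracy - mean confidence| per bin,
    weighted by the fraction of samples in the bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece
```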
-
Better Estimation of the KL Divergence Between Language Models
This paper introduces a Rao-Blackwellized Monte Carlo estimator for KL divergence between language models, achieving unbiased estimates with provably lower variance than standard Monte Carlo methods, and demonstrates improved stability and performance in RLHF fine-tuning for sentiment-controlled generation.
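The core idea admits a compact sketch: sample sequences from p, but instead of the naive log-ratio log p(x) - log q(x), sum the exact per-step KL between the two next-token distributions along each sampled prefix, integrating the next token out analytically. A hedged PyTorch rendering (tensor shapes and names are an illustrative convention, not the paper's code):

```python
import torch
import torch.nn.functional as F

def rb_kl_estimate(logits_p, logits_q):
    """Rao-Blackwellized Monte Carlo estimate of KL(p || q).

    logits_p, logits_q: (num_samples, seq_len, vocab) next-token logits of
    the two models evaluated on the same sequences sampled from p.
    Conditioning on the sampled prefix and computing the per-step KL
    exactly keeps the estimator unbiased while reducing variance
    relative to the sampled log-ratio.
    """
    log_p = F.log_softmax(logits_p, dim=-1)
    log_q = F.log_softmax(logits_q, dim=-1)
    # Exact per-step KL over the vocabulary: sum_v p(v) (log p(v) - log q(v))
    step_kl = (log_p.exp() * (log_p - log_q)).sum(dim=-1)  # (num_samples, seq_len)
    # Sum over steps, average over sampled sequences.
    return step_kl.sum(dim=-1).mean()
```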
-
Base Models Beat Aligned Models at Randomness and Creativity
Through experiments on tasks that demand unpredictability, such as random number generation, mixed-strategy games, and creative writing, this paper finds that popular alignment techniques degrade these abilities and that base models perform better on such tasks, suggesting a possible trade-off between performance on common benchmarks and the capacity for unpredictable behavior.
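One way the unpredictability gap can be quantified is by the entropy of model-generated "random" numbers; a minimal sketch of such a check (illustrative only, not the paper's exact protocol):

```python
from collections import Counter
import math

def digit_entropy(samples):
    """Shannon entropy (bits) of model-produced 'random' digits 0-9.
    A uniform generator scores log2(10) ~= 3.32 bits; a model that
    collapses onto a few favorite numbers scores much lower."""
    counts = Counter(samples)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Example: a mode-collapsed model that almost always answers "7".
print(digit_entropy([7] * 90 + list(range(10))))  # ~0.72 bits, far below 3.32
```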