Tag: Instruction Tuning
All articles tagged "Instruction Tuning".
-
SIMPLEMIX: Frustratingly Simple Mixing of Off- and On-policy Data in Language Model Preference Learning
This paper introduces SIMPLEMIX, a simple method for mixing on- and off-policy data in language model preference optimization. The two sources have complementary strengths (on-policy data helps on reasoning tasks, off-policy data on open-ended tasks), and mixing them yields a 6.03% average improvement over single-source methods on AlpacaEval 2.0.
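The core operation is just a ratio-controlled draw from the two data pools. A minimal sketch, where the `PreferencePair` record and the default 50/50 ratio are illustrative assumptions, not the paper's exact recipe:

```python
# Sketch of SIMPLEMIX-style data mixing; the PreferencePair fields and the
# default 50/50 ratio are illustrative assumptions, not the paper's recipe.
import random
from dataclasses import dataclass

@dataclass
class PreferencePair:
    prompt: str
    chosen: str    # preferred response
    rejected: str  # dispreferred response
    source: str    # "on-policy" (sampled from the policy) or "off-policy"

def simple_mix(on_policy, off_policy, mix_ratio=0.5, n_total=10_000, seed=0):
    """Draw a mixed preference dataset: mix_ratio of the examples come from
    the on-policy pool, the remainder from the off-policy pool."""
    rng = random.Random(seed)
    n_on = int(n_total * mix_ratio)
    mixed = rng.sample(on_policy, min(n_on, len(on_policy))) \
          + rng.sample(off_policy, min(n_total - n_on, len(off_policy)))
    rng.shuffle(mixed)
    return mixed
```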
-
Latent Factor Models Meets Instructions: Goal-conditioned Latent Factor Discovery without Task Supervision
This paper proposes Instruct-LF, which combines the instruction-following ability of LLMs with gradient-based statistical models to discover goal-conditioned latent factors without task supervision, improving downstream task performance and being preferred in human evaluations.
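The combination can be pictured in two stages: an LLM scores how well each document matches candidate properties, and a gradient-based factorization compresses those scores into latent factors. A loose sketch of the second stage only, where `compat` stands in for LLM-produced compatibility scores; this is an illustration of the idea, not Instruct-LF's actual pipeline:

```python
# Loose sketch of the gradient-based stage: factorize an LLM-scored
# (n_docs, n_props) compatibility matrix into low-rank latent factors.
import torch

def discover_factors(compat: torch.Tensor, n_factors: int = 8, steps: int = 500):
    """Fit document embeddings U and property embeddings V so that U @ V.T
    reconstructs the compatibility matrix, via plain gradient descent."""
    n_docs, n_props = compat.shape
    U = torch.randn(n_docs, n_factors, requires_grad=True)
    V = torch.randn(n_props, n_factors, requires_grad=True)
    opt = torch.optim.Adam([U, V], lr=0.05)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((U @ V.T - compat) ** 2).mean()  # squared reconstruction error
        loss.backward()
        opt.step()
    return U.detach(), V.detach()  # rows of V are the discovered factors
```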
-
ASIDE: Architectural Separation of Instructions and Data in Language Models
This paper proposes ASIDE, which achieves an architectural separation of instructions and data in large language models by applying a fixed orthogonal rotation at the embedding level, improving safety and robustness to prompt-injection attacks without sacrificing performance.
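The mechanism is easy to state: data tokens get their embeddings rotated by a fixed orthogonal matrix, so the model can distinguish them from instruction tokens. A minimal sketch, assuming a block-diagonal 90-degree rotation and an even embedding dimension; the paper's construction differs in detail:

```python
# Sketch of ASIDE-style instruction/data separation via a fixed orthogonal
# rotation of data-token embeddings. The block-diagonal 90-degree rotation
# is a simplified stand-in for the paper's construction.
import torch

def rotation_matrix(dim: int) -> torch.Tensor:
    """Block-diagonal 90-degree rotation: each pair (x, y) -> (-y, x)."""
    assert dim % 2 == 0, "assumes an even embedding dimension"
    R = torch.zeros(dim, dim)
    for i in range(0, dim, 2):
        R[i, i + 1] = -1.0
        R[i + 1, i] = 1.0
    return R

def embed_with_roles(token_embeddings: torch.Tensor, is_data: torch.Tensor):
    """Rotate embeddings of data tokens, leave instruction tokens unchanged.
    token_embeddings: (seq, dim); is_data: (seq,) boolean mask."""
    R = rotation_matrix(token_embeddings.shape[-1])
    rotated = token_embeddings @ R.T
    return torch.where(is_data.unsqueeze(-1), rotated, token_embeddings)
```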
-
Meeseeks: An Iterative Benchmark Evaluating LLMs Multi-Turn Instruction-Following Ability
This paper proposes Meeseeks, a multi-turn instruction-following benchmark that systematically evaluates LLMs' self-correction ability through an iterative feedback mechanism, finding that model performance improves significantly over multiple rounds of interaction.
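The benchmark's loop is simple to reproduce: query the model, check which requirements the answer misses, feed those misses back as the next turn, and repeat. A minimal sketch, where `model_respond` and `check` are hypothetical stand-ins for the model under test and the benchmark's requirement checker:

```python
# Sketch of a Meeseeks-style iterative evaluation loop. model_respond and
# check are hypothetical stand-ins, not the benchmark's actual interfaces.
def iterative_eval(instruction, requirements, model_respond, check, max_rounds=3):
    """Run up to max_rounds turns, feeding unmet requirements back to the
    model after each turn; return the per-round requirement pass rates."""
    history = [{"role": "user", "content": instruction}]
    pass_rates = []
    for _ in range(max_rounds):
        answer = model_respond(history)
        history.append({"role": "assistant", "content": answer})
        failed = [r for r in requirements if not check(answer, r)]
        pass_rates.append(1 - len(failed) / len(requirements))
        if not failed:
            break  # all requirements met; stop early
        history.append({"role": "user",
                        "content": "These requirements were not met:\n"
                                   + "\n".join(f"- {r}" for r in failed)})
    return pass_rates
```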
-
Constraint Back-translation Improves Complex Instruction Following of Large Language Models
This paper proposes constraint back-translation, which builds CRAB, a high-quality complex-instruction dataset, by extracting the implicit constraints already satisfied in existing instruction-response pairs; combined with reverse training, it significantly improves large language models' performance on complex instruction following.
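Constraint back-translation inverts the usual data pipeline: instead of generating a response for a constrained instruction, it finds constraints that an existing response already satisfies and folds them back into the instruction. A minimal sketch, where `llm_judge` and the candidate constraint pool are illustrative assumptions:

```python
# Sketch of constraint back-translation: augment an existing instruction with
# constraints its response already satisfies. llm_judge (a yes/no judge) and
# CANDIDATE_CONSTRAINTS are illustrative stand-ins for the paper's setup.
CANDIDATE_CONSTRAINTS = [
    "Respond in exactly three paragraphs.",
    "Include at least one numbered list.",
    "Keep the answer under 200 words.",
]

def back_translate(instruction: str, response: str, llm_judge):
    """Return a complex-instruction training pair whose added constraints are
    judged to be satisfied by the existing response."""
    satisfied = [c for c in CANDIDATE_CONSTRAINTS
                 if llm_judge(f"Does this response satisfy: {c}\n\n{response}")]
    augmented = instruction + "\n\nConstraints:\n" + \
                "\n".join(f"- {c}" for c in satisfied)
    return {"instruction": augmented, "response": response}
```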