Tag: Large Language Model
All the articles with the tag "Large Language Model".
-
Explaining Context Length Scaling and Bounds for Language Models
This paper proposes a theoretical framework, from an intrinsic-space perspective, that explains how context length affects language model loss, derives an optimal context length that depends on dataset size, and validates the hypotheses through experiments on natural language and synthetic data.
-
Recite, Reconstruct, Recollect: Memorization in LMs as a Multifaceted Phenomenon
This paper introduces a taxonomy of language model memorization comprising recitation, reconstruction, and recollection. Experiments with Pythia models show that different factors drive each category, and a taxonomy-based predictive model outperforms baselines at predicting memorization likelihood.
-
Thinkless: LLM Learns When to Think
This paper proposes the Thinkless framework, which uses reinforcement learning with the Decoupled Group Relative Policy Optimization (DeGRPO) algorithm to let a large language model autonomously choose between short-form and long-form reasoning based on task complexity and its own capability, substantially improving efficiency on mathematical tasks while preserving performance.
-
SATURN: SAT-based Reinforcement Learning to Unleash Language Model Reasoning
SATURN proposes a SAT-based reinforcement learning framework that uses curriculum learning and difficulty-controllable SAT tasks to substantially improve large language models' reasoning on SAT, math, and programming tasks.
-
RAISE: Reinforced Adaptive Instruction Selection For Large Language Models
This paper proposes the RAISE framework, a reinforcement-learning-driven dynamic instruction selection method that adaptively chooses training data based on each instruction's expected impact on model performance, surpassing full-data training with only 1% of the training steps and significantly outperforming static selection baselines across multiple benchmarks.