Tag: Large Language Model
All the articles with the tag "Large Language Model".
-
ICLR: In-Context Learning of Representations
Through an in-context graph tracing task, this paper shows that large language models can emergently reorganize their concept representations to fit new semantics as context size grows, and proposes an energy minimization hypothesis to explain this process.
-
Recursively Summarizing Enables Long-Term Dialogue Memory in Large Language Models
This paper introduces a recursive summarization method to enhance long-term dialogue memory in LLMs, achieving marginal quantitative improvements and notable qualitative gains in consistency and coherence across multiple models and datasets.
-
What do Language Model Probabilities Represent? From Distribution Estimation to Response Prediction
Through theoretical analysis, this paper distinguishes three interpretations of language model output probabilities (the completion distribution, the response distribution, and the event distribution), shows how existing work conflates and misreads these distributions, and calls for caution when interpreting model probabilities to guide LLM development and application.
-
Toward Understanding In-context vs. In-weight Learning
Using a simplified theoretical model and experiments across multiple settings, this paper reveals how properties of the data distribution drive the emergence of, and competition between, in-context learning (ICL) and in-weight learning (IWL), and explains why ICL can be transient during training.
-
On the generalization of language models from in-context learning and finetuning: a controlled study
Through controlled experiments, this paper compares how language models generalize under in-context learning versus finetuning, finds that in-context learning generalizes more flexibly, and proposes a data augmentation method that substantially improves finetuning's generalization.