Tag: Pre-training
All the articles with the tag "Pre-training".
-
What do Language Model Probabilities Represent? From Distribution Estimation to Response Prediction
Through theoretical analysis, this paper distinguishes three interpretations of language model output probabilities (the completion distribution, the response distribution, and the event distribution), exposes how existing work conflates and misreads them, and calls for careful interpretation of model probabilities to guide LLM development and application.
-
Toward Understanding In-context vs. In-weight Learning
Using a simplified theoretical model and experiments across multiple settings, this paper shows how properties of the data distribution drive the emergence of, and competition between, in-context learning (ICL) and in-weight learning (IWL), and explains why ICL can be transient during training.
-
Effective Length Extrapolation via Dimension-Wise Positional Embeddings Manipulation
This paper proposes DPE, a training-free length-extrapolation method that detects the effective relative distance of each group of RoPE dimensions, identifies the key dimensions, and selectively adjusts the position indices of those dimensions, substantially extending the LLM's context window and improving performance on long-context tasks.
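A minimal sketch of the general idea (dimension-wise manipulation of RoPE position indices), not DPE's actual detection procedure; the group boundaries, `key_dims`, and `effective_dist` values below are illustrative placeholders:

```python
import numpy as np

def rope_angles(positions, head_dim, base=10000.0):
    """Standard RoPE: angle[p, i] = p / base^(2i/d) for each rotary dimension pair i."""
    inv_freq = 1.0 / (base ** (np.arange(0, head_dim, 2) / head_dim))
    return np.outer(positions, inv_freq)           # shape (seq_len, head_dim // 2)

def dimension_wise_angles(positions, head_dim, key_dims, effective_dist, base=10000.0):
    """For selected dimension pairs only, rescale the position indices so the maximum
    relative distance stays within that dimension's assumed effective distance."""
    angles = rope_angles(positions, head_dim, base)
    max_dist = positions.max()                      # current maximum relative distance
    for dim, eff in zip(key_dims, effective_dist):  # hypothetical per-dimension values
        if max_dist > eff:                          # only shrink when out of range
            scale = eff / max_dist
            angles[:, dim] = rope_angles(positions * scale, head_dim, base)[:, dim]
    return angles

positions = np.arange(8192)                         # extrapolating past a 4k training window
angles = dimension_wise_angles(positions, head_dim=128,
                               key_dims=[10, 20, 40],          # placeholder "key" pairs
                               effective_dist=[4096, 4096, 2048])
print(angles.shape)                                 # (8192, 64)
```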
-
Radio: Rate-Distortion Optimization for Large Language Model Compression
This paper introduces Radio, a rate-distortion optimization framework for LLM compression that iteratively optimizes bit depths and applies companding quantization post-training, outperforming existing quantization methods in perplexity and downstream task accuracy, particularly at lower bit depths.
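As a rough illustration of companding quantization only (not Radio's actual companding function or its rate-distortion bit-depth search), the sketch below applies a µ-law-style compressor before uniform quantization so that small-magnitude weights receive finer resolution:

```python
import numpy as np

def mu_law_quantize(w, bits=4, mu=255.0):
    """Compand, uniformly quantize, then expand back (illustrative): the compressor
    spends more of the uniform grid on small-magnitude weights."""
    scale = np.max(np.abs(w)) + 1e-12
    x = w / scale                                                # normalize to [-1, 1]
    compressed = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    levels = 2 ** (bits - 1) - 1                                 # symmetric integer grid
    q = np.round(compressed * levels) / levels                   # uniform quantization
    expanded = np.sign(q) * ((1 + mu) ** np.abs(q) - 1) / mu     # inverse companding
    return expanded * scale

w = np.random.randn(4096).astype(np.float32) * 0.02              # toy weight tensor
w_hat = mu_law_quantize(w, bits=4)
print("quantization MSE:", np.mean((w - w_hat) ** 2))
```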
-
Can a Crow Hatch a Falcon? Lineage Matters in Predicting Large Language Model Performance
This paper proposes Lineage-Regularized Matrix Factorization (LRMF), which exploits the lineage relationships among large language models to markedly improve performance-prediction accuracy, outperforming conventional methods in both homogeneous and heterogeneous model settings and excelling in particular on the cold-start problem.
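A minimal sketch of the lineage-regularization idea under assumed notation: factor a partially observed (model × task) performance matrix into embeddings U and V, and add a penalty pulling each model's embedding toward its lineage parent's. The toy data, the `parent` map, and all hyperparameters are placeholders, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_models, n_tasks, k = 6, 10, 3
P = rng.uniform(0, 1, (n_models, n_tasks))      # toy performance matrix (model x task)
mask = rng.random((n_models, n_tasks)) > 0.5    # which entries are observed
parent = {1: 0, 2: 0, 4: 3, 5: 3}               # hypothetical lineage: child -> parent

U = rng.normal(0, 0.1, (n_models, k))           # model embeddings
V = rng.normal(0, 0.1, (n_tasks, k))            # task embeddings
lr, lam_l2, lam_lineage = 0.05, 1e-3, 0.1

for step in range(2000):
    E = mask * (U @ V.T - P)                    # reconstruction error on observed entries
    grad_U = E @ V + lam_l2 * U
    grad_V = E.T @ U + lam_l2 * V
    # lineage regularizer: pull each child embedding toward its parent's embedding
    for child, par in parent.items():
        diff = U[child] - U[par]
        grad_U[child] += lam_lineage * diff
        grad_U[par] -= lam_lineage * diff
    U -= lr * grad_U
    V -= lr * grad_V

pred = U @ V.T                                   # predictions, including unobserved cells
print("observed RMSE:", np.sqrt(np.mean((pred - P)[mask] ** 2)))
```

The unobserved cells of `pred` are the cold-start predictions; the lineage penalty is what lets a new model with few observed scores borrow the embedding of its parent.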