Tag: Large Language Model
All the articles with the tag "Large Language Model".
-
Toward Understanding In-context vs. In-weight Learning
Through a simplified theoretical model and experiments across multiple settings, this paper reveals how properties of the data distribution drive the emergence of and competition between in-context learning (ICL) and in-weight learning (IWL), and explains why ICL can be transient during training.
-
Don't be lazy: CompleteP enables compute-efficient deep transformers
This paper introduces CompleteP, a parameterization for transformers with α = 1 that ensures depth-wise hyperparameter transfer and complete feature learning, achieving 12-34% improvements in compute efficiency and enabling a wider range of compute-optimal width-to-depth ratios.
-
To CoT or not to CoT? Chain-of-thought helps mainly on math and symbolic reasoning
This paper demonstrates through meta-analysis and experiments that Chain-of-Thought (CoT) prompting significantly enhances large language model performance on math and symbolic reasoning tasks, but offers limited benefits for non-symbolic tasks and underperforms compared to tool-augmented approaches.
-
Radio: Rate-Distortion Optimization for Large Language Model Compression
This paper introduces Radio, a rate-distortion optimization framework for LLM compression that iteratively optimizes bit depths and applies companding quantization after training, outperforming existing quantization methods in perplexity and downstream-task accuracy, particularly at lower bit depths.
-
Effective Length Extrapolation via Dimension-Wise Positional Embeddings Manipulation
This paper proposes DPE, a training-free length extrapolation method that detects the effective relative distance of each RoPE dimension group, identifies the key dimensions, and selectively rescales the position indices of those dimensions, substantially extending the context window of LLMs and improving performance on long-context tasks.