Tag: Large Language Model
All the articles with the tag "Large Language Model".
-
A Sliding Layer Merging Method for Efficient Depth-Wise Pruning in LLMs
This paper proposes Sliding Layer Merging (SLM), a depth-wise pruning method that dynamically merges consecutive layers of large language models based on CKA similarity. It significantly outperforms existing methods on zero-shot tasks and inference efficiency, and also explores combining depth-wise and width-wise pruning.
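A minimal sketch of the linear CKA similarity that such layer-merging criteria rely on, assuming hidden states are collected as (tokens × hidden_dim) matrices; the layer names and shapes below are illustrative, not the paper's code.

```python
import torch

def linear_cka(x: torch.Tensor, y: torch.Tensor) -> float:
    """Linear CKA between two representation matrices of shape (n_tokens, hidden_dim)."""
    x = x - x.mean(dim=0, keepdim=True)      # center each feature
    y = y - y.mean(dim=0, keepdim=True)
    hsic = (y.T @ x).norm(p="fro") ** 2      # ||Y^T X||_F^2
    norm_x = (x.T @ x).norm(p="fro")         # ||X^T X||_F
    norm_y = (y.T @ y).norm(p="fro")         # ||Y^T Y||_F
    return (hsic / (norm_x * norm_y)).item()

# Toy usage: near-1 CKA between consecutive layers marks them as redundant and mergeable.
h_layer_k = torch.randn(128, 4096)                         # hidden states after layer k (illustrative)
h_layer_k1 = h_layer_k + 0.05 * torch.randn(128, 4096)     # a very similar next layer
print(linear_cka(h_layer_k, h_layer_k1))                   # close to 1.0
```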
-
Is PRM Necessary? Problem-Solving RL Implicitly Induces PRM Capability in LLMs
Through systematic experiments, this paper shows that pure reinforcement learning (RL) training not only improves the complex reasoning ability of large language models but also implicitly induces process reward model (PRM) capability. It proposes the Self-PRM framework to further improve performance, while also revealing its low precision on the hardest problems.
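A hedged sketch of what a Self-PRM-style loop could look like: the policy model scores its own reasoning steps and the best-scoring solution is kept. The helper names (`generate_solutions`, `score_step`) are hypothetical placeholders, not the paper's API.

```python
from typing import Callable, List

def self_prm_rerank(
    question: str,
    generate_solutions: Callable[[str, int], List[List[str]]],  # n solutions, each a list of steps
    score_step: Callable[[str, List[str], str], float],         # model's own score for a step, in [0, 1]
    n_samples: int = 8,
) -> List[str]:
    solutions = generate_solutions(question, n_samples)

    def solution_score(steps: List[str]) -> float:
        # Aggregate per-step scores; taking the minimum penalizes any single faulty step.
        return min(score_step(question, steps[:i], step) for i, step in enumerate(steps))

    return max(solutions, key=solution_score)
```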
-
MergeBench: A Benchmark for Merging Domain-Specialized LLMs
This paper introduces MergeBench, a comprehensive benchmark for merging domain-specialized large language models. Built on Llama and Gemma models (2B-9B), it evaluates eight merging methods, showing that merging is more effective on larger models and that sparsification and coefficient tuning matter for knowledge retention, and it offers practical guidance for algorithm selection.
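For context, a generic sketch of one family of merging recipes such benchmarks evaluate: task-arithmetic merging with magnitude sparsification and a scaling coefficient. This is an illustrative recipe under assumed parameter names, not MergeBench's own implementation.

```python
import torch

def merge_state_dicts(base, experts, coef=0.3, density=0.2):
    """base and each expert are dicts of parameter tensors sharing the same keys."""
    merged = {}
    for name, base_w in base.items():
        deltas = []
        for exp in experts:
            delta = exp[name] - base_w                        # task vector for this expert
            k = max(1, int(density * delta.numel()))          # keep top-k entries by magnitude
            thresh = delta.abs().flatten().kthvalue(delta.numel() - k + 1).values
            deltas.append(torch.where(delta.abs() >= thresh, delta, torch.zeros_like(delta)))
        merged[name] = base_w + coef * sum(deltas)            # scaled sum of sparsified task vectors
    return merged
```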
-
LongReD: Mitigating Short-Text Degradation of Long-Context Large Language Models via Restoration Distillation
This paper proposes LongReD, which mitigates the short-text performance degradation of long-context large language models through a multi-objective training strategy combining long-text training, short-text distillation, and short-to-long distillation, while preserving or even improving long-text capability.
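A rough sketch, under assumed helper names and loss weights, of how such a multi-objective step could be combined: a long-context language-modeling loss plus a distillation term against the original (pre-extension) model on short inputs. The details below are illustrative, not LongReD's exact objective.

```python
import torch
import torch.nn.functional as F

def longred_style_loss(student, teacher, long_batch, short_batch, w_short=1.0):
    # 1) Standard next-token loss on long-context data.
    lm_loss = student(**long_batch, labels=long_batch["input_ids"]).loss
    # 2) Short-text distillation: match the original model's distribution on short inputs.
    s_logits = student(**short_batch).logits
    with torch.no_grad():
        t_logits = teacher(**short_batch).logits
    kd_short = F.kl_div(F.log_softmax(s_logits, -1), F.softmax(t_logits, -1), reduction="batchmean")
    # 3) A short-to-long term would additionally align representations of short inputs
    #    placed at long-context positions; omitted here for brevity.
    return lm_loss + w_short * kd_short
```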
-
Why do LLMs attend to the first token?
This paper argues that attention sinks in LLMs, particularly at the first token, serve as a useful mechanism to prevent over-mixing of information in deep Transformers. The argument is supported by theoretical analysis and by empirical evidence from Gemma 7B, LLaMa 3.1 models, and pre-training experiments showing that sinks grow stronger with larger models and longer contexts.
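One quick way to observe the first-token sink yourself is to average the attention mass every query assigns to position 0 using HuggingFace `transformers`; the model choice below is illustrative, not a reproduction of the paper's setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Llama-3.1-8B"  # any causal LM works for this measurement
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, attn_implementation="eager")

inputs = tok("Attention sinks keep deep Transformers from over-mixing information.", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_attentions=True)

# out.attentions: one (batch, heads, query, key) tensor per layer.
sink_mass = torch.stack([a[..., 0].mean() for a in out.attentions])  # average mass on the first key, per layer
print(sink_mass)  # values far above 1/seq_len indicate a sink at the first token
```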