Tag: Efficiency
All the articles with the tag "Efficiency".
-
Can LLMs Maintain Fundamental Abilities under KV Cache Compression?
This paper uses the KVFundaBench benchmark to systematically evaluate how KV cache compression affects the fundamental abilities of large language models, revealing task-dependent performance degradation, and proposes ShotKV, which applies separate compression strategies to the prefill and decoding stages and significantly improves performance on long-context generation tasks.
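The summary mentions stage-aware compression; the snippet below is a minimal toy sketch (not ShotKV itself) of the general idea of applying different KV cache eviction budgets to entries produced during prefill versus decoding. The importance scores, keep ratios, and shapes are all hypothetical.

```python
# Toy illustration (not ShotKV) of stage-dependent KV cache eviction:
# the prefill cache is compressed with a looser budget, while entries
# appended during decoding are pruned more aggressively.
import numpy as np

def evict_by_score(keys, values, scores, keep_ratio):
    """Keep the top `keep_ratio` fraction of cache entries by importance score."""
    k = max(1, int(len(scores) * keep_ratio))
    idx = np.argsort(scores)[-k:]   # indices of the most important entries
    idx.sort()                      # preserve original token order
    return keys[idx], values[idx]

rng = np.random.default_rng(0)
d = 8
prefill_k, prefill_v = rng.normal(size=(100, d)), rng.normal(size=(100, d))
decode_k,  decode_v  = rng.normal(size=(20, d)),  rng.normal(size=(20, d))

# Hypothetical importance scores (e.g., accumulated attention weights).
prefill_scores = rng.random(100)
decode_scores  = rng.random(20)

# Different (hypothetical) budgets for the two stages.
pk, pv = evict_by_score(prefill_k, prefill_v, prefill_scores, keep_ratio=0.5)
dk, dv = evict_by_score(decode_k,  decode_v,  decode_scores,  keep_ratio=0.8)

cache_k = np.concatenate([pk, dk])
cache_v = np.concatenate([pv, dv])
print(cache_k.shape)   # compressed cache used for subsequent attention
```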
-
Do We Truly Need So Many Samples? Multi-LLM Repeated Sampling Efficiently Scales Test-Time Compute
This paper introduces ModelSwitch, a multi-LLM repeated sampling strategy that leverages answer consistency to dynamically switch models, achieving superior performance and 34% sample efficiency over single-LLM self-consistency across diverse datasets.
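As a rough illustration of consistency-gated switching (a simplification, not the paper's exact algorithm), the sketch below samples a few answers from one model, keeps them if they agree, and otherwise switches to a second model before a final vote. `model_a`, `model_b`, and the agreement threshold are hypothetical stand-ins for real LLM calls.

```python
# Minimal sketch of consistency-gated model switching with majority voting.
from collections import Counter
import random

def model_a(question):   # stand-in for sampling from a first LLM
    return random.choice(["42", "42", "41"])

def model_b(question):   # stand-in for a second, different LLM
    return random.choice(["42", "42", "42", "40"])

def model_switch(question, k=5, threshold=0.6):
    answers = [model_a(question) for _ in range(k)]
    top, count = Counter(answers).most_common(1)[0]
    if count / k >= threshold:           # answers are consistent: stop early
        return top
    answers += [model_b(question) for _ in range(k)]   # otherwise, switch models
    return Counter(answers).most_common(1)[0][0]       # vote over all samples

print(model_switch("What is 6 x 7?"))
```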
-
Pretraining Language Models to Ponder in Continuous Space
This paper proposes the Pondering Language Model, which introduces a self-supervised continuous-space pondering mechanism during pretraining, significantly improving performance on language modeling and downstream tasks; PonderingPythia-1B approaches the performance of TinyLlama-1.1B.
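One way such a pondering step is often realized is by feeding back a probability-weighted mixture of token embeddings instead of committing to a discrete token; whether this matches the paper's exact mechanism is an assumption. The toy numpy sketch below only illustrates that continuous-space feedback loop, not the pretraining recipe.

```python
# Toy sketch of pondering in continuous space: mix token embeddings by the
# predicted distribution and feed the mixture back for a few inner steps.
import numpy as np

rng = np.random.default_rng(0)
vocab, d = 50, 16
E = rng.normal(size=(vocab, d))            # token embedding table
W = rng.normal(size=(d, vocab)) / d**0.5   # toy output projection

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def ponder(hidden, steps=3):
    """Refine a hidden state by repeatedly mixing in the expected embedding."""
    for _ in range(steps):
        probs = softmax(hidden @ W)        # predicted next-token distribution
        expected_emb = probs @ E           # continuous "pondered" embedding
        hidden = 0.5 * hidden + 0.5 * expected_emb   # feed it back
    return softmax(hidden @ W)

h0 = rng.normal(size=d)
print(ponder(h0).argmax())                 # token chosen after pondering
```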
-
RLAE: Reinforcement Learning-Assisted Ensemble for LLMs
RLAE proposes a framework that uses reinforcement learning to dynamically adjust the ensemble weights of large language models, modeling the ensembling process as a Markov decision process; it achieves performance gains of up to 3.3% across multiple tasks while demonstrating cross-task generalization and computational efficiency.
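The snippet below is a bandit-style simplification of the RL-weighted ensembling idea: ensemble weights are the softmax of trainable logits, candidate answers are scored by a weighted vote over toy "models", and a REINFORCE-like update rewards weights that lead to correct answers. The toy models, reward, and learning rate are hypothetical, and the full MDP formulation is reduced to a single step.

```python
# Toy sketch of learning ensemble weights with a REINFORCE-like update.
import numpy as np

rng = np.random.default_rng(0)
n_models, n_answers = 3, 4
theta = np.zeros(n_models)                   # trainable ensemble-weight logits

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def model_scores(question_id):
    """Toy per-model score vectors over candidate answers (stand-ins for LLM outputs)."""
    base = rng.normal(size=(n_models, n_answers))
    base[0, question_id % n_answers] += 2.0  # model 0 is usually right
    return base

for step in range(200):
    qid = step
    w = softmax(theta)
    scores = model_scores(qid)
    combined = w @ scores                    # weighted ensemble score
    answer = combined.argmax()
    reward = 1.0 if answer == qid % n_answers else 0.0
    # Credit models whose own top answer matched the ensemble's choice.
    credit = (scores.argmax(axis=1) == answer).astype(float)
    theta += 0.1 * reward * (credit - w)

print(np.round(softmax(theta), 2))           # learned ensemble weights
```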
-
Train with Perturbation, Infer after Merging: A Two-Stage Framework for Continual Learning
This paper proposes the Perturb-and-Merge (P&M) framework, which perturbs task vectors at training time and merges models via convex combination at inference time; combined with LoRA for parameter-efficient continual learning, it significantly mitigates catastrophic forgetting and improves performance on multiple benchmark datasets.
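Below is a toy numpy sketch of the two-stage idea, under the assumption that "perturbation" means adding noise to the task vector during training and "merging" means a convex combination of the previous and newly trained parameters; the quadratic task loss and hyperparameters are hypothetical, and this is not the paper's exact P&M procedure.

```python
# Toy two-stage continual-learning sketch: train with a perturbed task
# vector, then merge old and new parameters by convex combination.
import numpy as np

rng = np.random.default_rng(0)
dim = 10
theta_prev = rng.normal(size=dim)        # model after earlier tasks
target_new = rng.normal(size=dim)        # optimum of the new task (toy)

def task_loss_grad(theta):
    return theta - target_new            # gradient of 0.5 * ||theta - target||^2

# Stage 1: train on the new task, perturbing the task vector each step.
theta = theta_prev.copy()
for _ in range(100):
    theta = theta - 0.1 * task_loss_grad(theta)                 # SGD step
    task_vector = theta - theta_prev
    theta = theta_prev + task_vector + 0.01 * rng.normal(size=dim)  # perturb

# Stage 2: merge by convex combination before inference.
alpha = 0.5
theta_merged = alpha * theta_prev + (1 - alpha) * theta
print(np.round(theta_merged, 2))
```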