Tag: Efficiency
All the articles with the tag "Efficiency".
-
Splitwiser: Efficient LM inference with constrained resources
Splitwiser introduces a method to split the two phases of LLM inference (prompt processing and token generation) on a single GPU using multiprocessing and NVIDIA MPS, achieving modest latency reductions (up to 18.2%) and throughput improvements (up to 1.42x) on Huggingface and vLLM pipelines, though constrained by overheads and scalability issues.
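The core idea is easy to picture in plain Python: run the compute-heavy prompt phase and the latency-sensitive generation phase in separate processes that share one GPU. The sketch below is a minimal illustration, not the paper's implementation; `prefill_worker`, `decode_worker`, and the string-valued KV-cache handoff are hypothetical placeholders, and actual GPU sharing would be enabled outside the script via NVIDIA's MPS control daemon.

```python
# Minimal sketch (not Splitwiser's code) of phase-split inference across two
# processes. Assumes NVIDIA MPS is already running so both processes can
# share the GPU; model calls are replaced by string placeholders.
import multiprocessing as mp

def prefill_worker(requests: mp.Queue, handoff: mp.Queue) -> None:
    # Phase 1: process the full prompt and build the KV cache.
    while (prompt := requests.get()) is not None:
        kv_cache = f"kv({prompt})"  # placeholder for a real prefill pass
        handoff.put((prompt, kv_cache))
    handoff.put(None)  # propagate shutdown sentinel

def decode_worker(handoff: mp.Queue, outputs: mp.Queue) -> None:
    # Phase 2: generate tokens autoregressively from the KV cache.
    while (item := handoff.get()) is not None:
        prompt, kv_cache = item
        outputs.put(f"generated text for {prompt!r} using {kv_cache}")
    outputs.put(None)

if __name__ == "__main__":
    requests, handoff, outputs = mp.Queue(), mp.Queue(), mp.Queue()
    workers = [
        mp.Process(target=prefill_worker, args=(requests, handoff)),
        mp.Process(target=decode_worker, args=(handoff, outputs)),
    ]
    for w in workers:
        w.start()
    for prompt in ["What is MPS?", "Explain KV caching."]:
        requests.put(prompt)
    requests.put(None)  # sentinel: no more requests
    while (result := outputs.get()) is not None:
        print(result)
    for w in workers:
        w.join()
```

The queue handoff is one place where the overheads mentioned above can accrue: in a real system the KV cache must be shared or serialized between processes.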
-
Exploring the Role of Diversity in Example Selection for In-Context Learning
This paper proposes Diversity-based In-Context Learning (DICL), which reranks candidate demonstrations with the Maximal Marginal Relevance (MMR) algorithm to balance relevance against diversity, improving or preserving downstream task performance in roughly 70% of the settings tested across multiple datasets and large language models.
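MMR itself is only a few lines: greedily pick the example that maximizes lam * sim(example, query) - (1 - lam) * max sim(example, already picked). Below is a compact sketch assuming unit-normalized embeddings (so dot products are cosine similarities); `mmr_select` is an illustrative name, and DICL's exact retriever and lam setting may differ.

```python
# Greedy MMR reranking over a pool of candidate in-context examples.
# Assumes rows of `examples` and `query` are L2-normalized embeddings.
import numpy as np

def mmr_select(query: np.ndarray, examples: np.ndarray,
               k: int, lam: float = 0.5) -> list[int]:
    """Pick k example indices, trading relevance to the query (weight lam)
    against redundancy with already-selected examples (weight 1 - lam)."""
    relevance = examples @ query        # sim(example_i, query)
    pairwise = examples @ examples.T    # sim(example_i, example_j)
    selected: list[int] = []
    candidates = set(range(len(examples)))
    while len(selected) < k and candidates:
        def mmr_score(i: int) -> float:
            redundancy = max(pairwise[i, j] for j in selected) if selected else 0.0
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(candidates, key=mmr_score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy usage with random unit vectors:
rng = np.random.default_rng(0)
emb = rng.normal(size=(10, 32))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
q = emb.mean(axis=0)
q /= np.linalg.norm(q)
print(mmr_select(q, emb, k=4))
```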
-
Mixture of Sparse Attention: Content-Based Learnable Sparse Attention via Expert-Choice Routing
This paper proposes Mixture of Sparse Attention (MoSA), which uses expert-choice routing to implement content-based sparse attention, significantly improving the language-modeling performance of Transformer models under the same compute budget while making more efficient use of resources.
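In expert-choice routing, each sparse attention head selects its tokens rather than tokens selecting heads. The toy module below shows one such head in PyTorch: a learned router scores every token, the head attends densely over only its top-k picks, and the update is scattered back gated by the router score. This is an illustrative single-head, single-sequence sketch; causal masking and the mix of dense and sparse heads that MoSA uses are omitted.

```python
# Toy expert-choice sparse attention head: the head (expert) picks k tokens
# by router score and computes full attention only among those tokens.
import torch

class SparseAttentionExpert(torch.nn.Module):
    def __init__(self, d_model: int, k: int):
        super().__init__()
        self.k = k
        self.router = torch.nn.Linear(d_model, 1)       # token -> selection score
        self.qkv = torch.nn.Linear(d_model, 3 * d_model)
        self.out = torch.nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (seq, d_model)
        scores = self.router(x).squeeze(-1)               # (seq,)
        topk = torch.topk(scores, self.k).indices         # expert picks k tokens
        picked = x[topk]                                  # (k, d_model)
        q, key, val = self.qkv(picked).chunk(3, dim=-1)
        w = torch.softmax(q @ key.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
        # Gate by the router probability so token selection stays trainable.
        update = self.out(w @ val) * torch.sigmoid(scores[topk]).unsqueeze(-1)
        return x.index_add(0, topk, update)               # scatter back as residual

x = torch.randn(128, 64)
print(SparseAttentionExpert(d_model=64, k=16)(x).shape)   # torch.Size([128, 64])
```

The compute saving comes from the k*k attention matrix replacing the seq*seq one; the router is what makes the sparsity content-based rather than fixed.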
-
Training Plug-n-Play Knowledge Modules with Deep Context Distillation
This paper proposes training plug-and-play knowledge modules via deep context distillation, enabling efficient integration of document knowledge in low-data settings; experiments show the approach outperforms conventional methods on question-answering tasks and is synergistic with RAG.
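The training signal can be summarized as: a frozen teacher pass reads the document plus the prompt, a student pass equipped with the plug-in module reads the prompt alone, and the module is optimized until the two agree. The sketch below shows such a loss in PyTorch under those assumptions; the hidden-state term (the "deep" part) is shown as a simple MSE, and all tensor names are illustrative rather than taken from the paper's code.

```python
# Sketch of a context-distillation objective: the knowledge module must make
# the context-free student reproduce the teacher that saw the document.
import torch
import torch.nn.functional as F

def context_distillation_loss(
    teacher_logits: torch.Tensor,   # (seq, vocab), document in context
    student_logits: torch.Tensor,   # (seq, vocab), module instead of context
    teacher_hidden: torch.Tensor,   # (seq, d_model)
    student_hidden: torch.Tensor,   # (seq, d_model)
    alpha: float = 1.0,
) -> torch.Tensor:
    # KL between output distributions, computed in log space.
    kl = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.log_softmax(teacher_logits, dim=-1),
        log_target=True,
        reduction="batchmean",
    )
    # Matching intermediate activations is the "deep" part of the recipe.
    hidden_match = F.mse_loss(student_hidden, teacher_hidden)
    return kl + alpha * hidden_match

# Toy shapes only; in practice both passes share the same frozen base model.
t_log, s_log = torch.randn(16, 1000), torch.randn(16, 1000, requires_grad=True)
t_h, s_h = torch.randn(16, 64), torch.randn(16, 64, requires_grad=True)
print(context_distillation_loss(t_log, s_log, t_h, s_h))
```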
-
Small or Large? Zero-Shot or Finetuned? Guiding Language Model Choice for Specialized Applications in Healthcare
Through empirical experiments, this paper provides guidance for choosing language models in specialized healthcare applications, highlighting the clear advantages of fine-tuning small language models and of domain-specific pretraining, which let them outperform zero-shot large language models on targeted tasks.