Tag: Efficiency
All the articles with the tag "Efficiency".
-
Why Do More Experts Fail? A Theoretical Analysis of Model Merging
This paper gives a theoretical analysis of why model-merging performance saturates as the number of expert models grows, proposes a Reparameterized Heavy-Tailed method to broaden coverage of the parameter space, and validates its effectiveness on multiple benchmark tasks.
-
AI agents may be worth the hype but not the resources (yet): An initial exploration of machine translation quality and costs in three language pairs in the legal and news domains
This paper empirically evaluates five machine-translation paradigms and finds that reasoning-enhanced large language models (such as o1-preview) excel in human evaluation and surpass traditional NMT, while multi-agent systems show promise but are held back by high computational costs and inconsistent performance across language pairs.
-
Boltzmann Classifier: A Thermodynamic-Inspired Approach to Supervised Learning
The Boltzmann Classifier introduces a thermodynamically inspired supervised learning approach that uses an energy-based model derived from the Boltzmann distribution to estimate class probabilities, achieving competitive accuracy on benchmark datasets while offering interpretability and computational efficiency.
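The core idea, class probabilities proportional to exp(-E/T) over per-class energies, can be sketched as follows. The squared-distance-to-class-mean energy and the `temperature` parameter are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def boltzmann_probs(x, class_means, temperature=1.0):
    """p(c|x) ∝ exp(-E_c(x)/T); here E_c(x) = ||x - mu_c||^2 (an assumed energy)."""
    energies = np.array([np.sum((x - mu) ** 2) for mu in class_means])
    logits = -energies / temperature
    logits -= logits.max()  # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()
```

Lower energy means higher probability, so the classifier reduces to a softmax over negative energies; the temperature controls how sharply probability mass concentrates on the lowest-energy class.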
-
Activated LoRA: Fine-tuned LLMs for Intrinsics
This paper proposes Activated LoRA (aLoRA), an improved LoRA framework that adapts weights only for tokens after the activation point, allowing the base model's KV cache to be reused for efficient dynamic adaptation, while matching standard LoRA's performance across tasks and significantly reducing inference cost.
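The selective-adaptation idea can be sketched with a toy linear projection: the low-rank update BA is applied only to tokens at or after an activation index, so earlier tokens' projections (and hence their cached keys/values) are identical to the base model's. This is a conceptual sketch under assumed shapes, not the paper's implementation.

```python
import numpy as np

def alora_project(X, W, A, B, activate_at):
    """Project token matrix X (tokens x d_in) with base weight W (d_out x d_in),
    adding the low-rank LoRA update B @ A only from row `activate_at` onward."""
    base = X @ W.T                      # base-model projection for all tokens
    adapted = base + X @ (B @ A).T      # LoRA-adapted projection
    out = base.copy()
    out[activate_at:] = adapted[activate_at:]  # adapt only post-activation tokens
    return out
```

Because rows before `activate_at` are bit-identical to the base projection, a serving stack can keep the base model's KV cache for the prefix instead of recomputing it for each adapter.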
-
TensorLLM: Tensorising Multi-Head Attention for Enhanced Reasoning and Compression in LLMs
This paper proposes a framework based on multi-head tensorisation and Tucker decomposition that denoises and compresses the multi-head attention weights of large language models by enforcing a shared higher-dimensional subspace, significantly improving reasoning ability while achieving compression ratios of up to 247x.
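Tucker compression of a stacked attention-weight tensor can be sketched with a plain-NumPy truncated HOSVD (mode-wise SVD), a common way to compute a Tucker decomposition; the tensor shape and rank choice are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    """Multiply tensor T by matrix M along axis `mode`."""
    moved = np.moveaxis(T, mode, 0)
    out = M @ moved.reshape(moved.shape[0], -1)
    return np.moveaxis(out.reshape((M.shape[0],) + moved.shape[1:]), 0, mode)

def tucker_compress(W, ranks):
    """Truncated HOSVD: per-mode left singular vectors, then project to the core."""
    factors = [np.linalg.svd(unfold(W, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = W
    for mode, U in enumerate(factors):
        core = mode_mult(core, U.T, mode)
    return core, factors

def tucker_reconstruct(core, factors):
    T = core
    for mode, U in enumerate(factors):
        T = mode_mult(T, U, mode)
    return T
```

Storing the small core plus thin factor matrices in place of the full tensor is what yields the compression; if the weights genuinely live in a shared low-dimensional subspace, reconstruction error stays small at aggressive ranks.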