Tag: Efficiency
All the articles with the tag "Efficiency".
-
LENSLLM: Unveiling Fine-Tuning Dynamics for LLM Selection
LENSLLM introduces a Hessian-based PAC-Bayes framework and an NTK-based scaling model for LLM selection; by modeling fine-tuning dynamics across diverse tasks, it achieves up to 91.1% accuracy while reducing computational cost by up to 88.5%.
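For intuition only, the selection idea can be caricatured as fitting a scaling curve to a few cheap fine-tuning runs per candidate and extrapolating to the full data budget. The power-law form, function names, and toy data below are assumptions for this sketch, not LENSLLM's actual NTK-based formulation.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_scaling_curve(sample_sizes, test_losses):
    """Fit an illustrative power law L(n) = a * n**(-b) + c to a few
    small fine-tuning runs, so the full-data loss can be extrapolated
    without training every candidate model to completion."""
    def law(n, a, b, c):
        return a * np.power(n, -b) + c
    params, _ = curve_fit(law, sample_sizes, test_losses,
                          p0=[1.0, 0.5, 0.1], maxfev=10_000)
    return lambda n: law(n, *params)

# Toy data: (fine-tuning set sizes, observed test losses) per candidate LLM.
candidate_runs = {
    "model_a": ([1_000, 2_000, 4_000], [1.20, 1.05, 0.95]),
    "model_b": ([1_000, 2_000, 4_000], [1.10, 1.02, 0.98]),
}
curves = {m: fit_scaling_curve(n, l) for m, (n, l) in candidate_runs.items()}
best = min(curves, key=lambda m: curves[m](100_000))  # predicted loss at target size
print(best)
```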
-
Walk Before You Run! Concise LLM Reasoning via Reinforcement Learning
This paper proposes ConciseR, a two-stage reinforcement-learning framework that first strengthens reasoning ability with GRPO++ and then optimizes response length with L-GRPO, markedly shortening CoT responses while preserving accuracy and outperforming existing methods on multiple benchmark datasets.
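For intuition, a minimal length-aware reward in the spirit of the length-optimization stage might look like the sketch below; the paper's actual GRPO++/L-GRPO objectives are group-relative policy-gradient methods, and the penalty shape and `lam` here are assumptions.

```python
def concise_reward(is_correct: bool, resp_len: int, max_len: int,
                   lam: float = 0.5) -> float:
    """Toy reward: correct answers earn a base reward plus a bonus for
    using less of the length budget; wrong answers earn nothing, so the
    policy is never pushed to be short at the expense of accuracy."""
    if not is_correct:
        return 0.0
    return 1.0 + lam * (1.0 - resp_len / max_len)
```

Under this shaping, two equally correct rollouts are ranked by brevity, which matches the high-level effect the paper reports: shorter CoT at equal accuracy.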
-
On the Generalization vs Fidelity Paradox in Knowledge Distillation
Through a large-scale empirical analysis, this paper shows that knowledge distillation (KD) substantially improves the zero-shot reasoning performance of small language models (by up to 10%) but offers limited gains for larger ones, and that these performance gains are decoupled from reasoning fidelity, underscoring the importance of task expertise and moderate parameter tuning.
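For readers unfamiliar with the KD setup being analyzed, a standard Hinton-style distillation objective looks like the sketch below. This is illustrative; the paper evaluates KD empirically rather than prescribing this exact recipe.

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T: float = 2.0,
            alpha: float = 0.5):
    """Blend hard-label cross-entropy with a temperature-scaled KL term
    that pulls the student toward the teacher's output distribution."""
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # T^2 keeps gradient magnitudes comparable across temperatures
    return alpha * ce + (1.0 - alpha) * kl
```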
-
MiMo: Unlocking the Reasoning Potential of Language Model -- From Pretraining to Posttraining
This paper introduces MiMo-7B, a 7B-parameter LLM optimized for reasoning via pre-training on reasoning-dense data with multi-token prediction and post-training with RL using test-difficulty-driven rewards; it outperforms larger models and OpenAI o1-mini on mathematics and coding benchmarks.
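Multi-token prediction augments the standard next-token loss with auxiliary heads that predict tokens further ahead. Below is a minimal sketch of such a loss, assuming k extra heads each emitting full-vocabulary logits; MiMo-7B's actual MTP modules may be structured differently.

```python
import torch.nn.functional as F

def multi_token_prediction_loss(head_logits, tokens):
    """head_logits[i] has shape (batch, seq, vocab), and head i predicts
    the token (i + 1) steps ahead of each position; the per-head
    cross-entropies are averaged into a single auxiliary loss."""
    total = 0.0
    for i, logits in enumerate(head_logits):
        targets = tokens[:, i + 1:]              # shift targets by i+1 steps
        preds = logits[:, :targets.shape[1], :]  # drop trailing positions
        total = total + F.cross_entropy(
            preds.reshape(-1, preds.shape[-1]), targets.reshape(-1))
    return total / len(head_logits)
```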
-
Not All Adapters Matter: Selective Adapter Freezing for Memory-Efficient Fine-Tuning of Language Models
This paper proposes SAFE, a resource-efficient fine-tuning method for language models that selectively freezes adapters contributing little to the task, substantially reducing memory usage and computational cost while maintaining or even improving model performance.
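The core mechanism can be sketched as follows, under assumptions: contribution scores are taken as given, and SAFE's actual importance metric and freezing schedule may differ. Ranking adapters by contribution and disabling gradients for the low scorers also frees their gradient buffers and optimizer state, which is where the memory savings come from.

```python
import torch.nn as nn

def freeze_low_contribution_adapters(model: nn.Module,
                                     scores: dict[str, float],
                                     keep_ratio: float = 0.5) -> None:
    """Keep the top `keep_ratio` fraction of adapters by contribution
    score and freeze the rest. `scores` maps an adapter name (a
    substring of its parameter names) to a precomputed importance."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    kept = set(ranked[: max(1, int(len(ranked) * keep_ratio))])
    for name, param in model.named_parameters():
        owner = next((a for a in scores if a in name), None)
        if owner is not None and owner not in kept:
            param.requires_grad = False  # no grads -> no optimizer state either
```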