Tag: Efficiency
All the articles with the tag "Efficiency".
-
Does quantization affect models' performance on long-context tasks?
This paper systematically evaluates how quantization affects large language models on long-context tasks, finding that 8-bit quantization largely preserves accuracy (a drop of about 0.8%), whereas 4-bit quantization causes substantial losses (up to 59%), with the impact varying by model, task, and language, underscoring the need to apply quantization cautiously in long-context and multilingual settings.
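
A minimal sketch (not the paper's evaluation harness) of why the bit width matters: round-tripping a weight matrix through symmetric per-tensor k-bit quantization shows how much more information 4-bit discards than 8-bit.

```python
import numpy as np

def quantize_dequantize(w: np.ndarray, bits: int) -> np.ndarray:
    """Symmetric per-tensor quantization to signed k-bit integers and back."""
    qmax = 2 ** (bits - 1) - 1           # e.g. 127 for 8-bit, 7 for 4-bit
    scale = np.abs(w).max() / qmax       # map the largest weight onto qmax
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4096, 4096)).astype(np.float32)

for bits in (8, 4):
    err = np.abs(w - quantize_dequantize(w, bits)).mean()
    print(f"{bits}-bit mean absolute error: {err:.5f}")
```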
-
EMORL: Ensemble Multi-Objective Reinforcement Learning for Efficient and Flexible LLM Fine-Tuning
This paper proposes the EMORL framework, which uses ensemble learning to train single-objective models separately and aggregate them at the hidden-state level, with a hierarchical grid search to optimize the aggregation weights; on counselor reflection generation it matches the performance of conventional methods while markedly improving training efficiency, scalability, and interpretability.
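
A hedged sketch of the general idea rather than EMORL's actual implementation: combine hidden states from independently trained single-objective models with a weight vector, and pick the weights by a coarse-to-fine grid search against a scalar multi-objective score (`score_fn` here is a hypothetical placeholder).

```python
import itertools
import numpy as np

def aggregate_hidden(states: list[np.ndarray], weights: np.ndarray) -> np.ndarray:
    """Convex combination of per-model hidden states of the same shape."""
    weights = weights / weights.sum()
    return sum(w * h for w, h in zip(weights, states))

def hierarchical_grid_search(score_fn, n_models: int, levels=(0.25, 0.05)):
    """Coarse grid first, then a finer grid centered on the best point so far."""
    center = np.full(n_models, 1.0 / n_models)
    best_w, best_s = center, score_fn(center)
    for step in levels:
        for delta in itertools.product((-step, 0.0, step), repeat=n_models):
            w = np.clip(center + np.array(delta), 0.0, 1.0)
            if w.sum() == 0:
                continue
            w = w / w.sum()
            s = score_fn(w)
            if s > best_s:
                best_w, best_s = w, s
        center = best_w
    return best_w, best_s
```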
-
Thought calibration: Efficient and confident test-time scaling
This paper proposes "thought calibration", which uses a reasoning-tree abstraction and lightweight probes to dynamically decide when a language model should stop reasoning, cutting thinking tokens by up to 60% on in-distribution data while preserving performance, and by 20% on out-of-distribution data.
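
A hedged sketch of probe-gated early stopping, not the paper's exact method: a lightweight linear probe reads the model's hidden state after each reasoning step and halts generation once it is confident that further thinking will not change the answer. The `model.next_reasoning_step` and `model.answer` interfaces are hypothetical.

```python
import numpy as np

class StopProbe:
    """Logistic probe over hidden states, trained offline to predict 'answer already stable'."""
    def __init__(self, w: np.ndarray, b: float, threshold: float = 0.9):
        self.w, self.b, self.threshold = w, b, threshold

    def should_stop(self, hidden_state: np.ndarray) -> bool:
        p = 1.0 / (1.0 + np.exp(-(hidden_state @ self.w + self.b)))
        return p >= self.threshold

def generate_with_calibration(model, prompt, probe: StopProbe, max_steps: int = 64):
    """Generate reasoning steps, consulting the probe after each one."""
    steps = []
    for _ in range(max_steps):
        step, hidden = model.next_reasoning_step(prompt, steps)  # hypothetical interface
        steps.append(step)
        if probe.should_stop(hidden):
            break
    return model.answer(prompt, steps)  # hypothetical interface
```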
-
Route to Reason: Adaptive Routing for LLM and Reasoning Strategy Selection
This paper proposes the Route-To-Reason (RTR) framework, which uses a dynamic routing mechanism to jointly select the best model and reasoning strategy, achieving higher accuracy and more than a 60% reduction in token usage across multiple reasoning tasks, substantially improving the performance-cost trade-off.
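
A hedged sketch of the routing idea, not RTR's actual implementation: score each (model, reasoning strategy) pair with a predicted accuracy and an expected token cost, then pick the pair with the best accuracy-cost trade-off. The accuracy predictions and the penalty value are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class RouteOption:
    model: str
    strategy: str              # e.g. "direct" or "chain-of-thought"
    predicted_accuracy: float  # from a learned predictor, assumed given here
    expected_tokens: int

def route(options: list[RouteOption], token_penalty: float = 1e-4) -> RouteOption:
    """Select the option maximizing predicted accuracy minus a token-cost penalty."""
    return max(options, key=lambda o: o.predicted_accuracy - token_penalty * o.expected_tokens)

options = [
    RouteOption("small-model", "direct", 0.62, 150),
    RouteOption("small-model", "chain-of-thought", 0.71, 900),
    RouteOption("large-model", "direct", 0.80, 200),
    RouteOption("large-model", "chain-of-thought", 0.86, 1400),
]
print(route(options))  # the chosen pair trades accuracy against token usage
```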
-
Sparse-Group Boosting with Balanced Selection Frequencies: A Simulation-Based Approach and R Implementation
This paper introduces sparse-group boosting and a simulation-based group balancing algorithm within the 'sgboost' R package to mitigate variable selection bias in high-dimensional grouped data, demonstrating improved fairness and interpretability through simulations and ecological data analysis.
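
A hedged illustration (in Python rather than R, and not the sgboost algorithm itself) of the selection bias the balancing targets: with a pure-noise response, componentwise selection picks larger groups more often simply because they have more parameters to fit the noise. The group sizes and simulation settings are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
group_sizes = [2, 5, 20]                       # three groups of very different size
n, n_sims = 200, 500
counts = np.zeros(len(group_sizes))

for _ in range(n_sims):
    y = rng.normal(size=n)                     # null model: y is unrelated to any group
    scores = []
    for p in group_sizes:
        X = rng.normal(size=(n, p))
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        scores.append(np.sum((y - X @ beta) ** 2))  # residual sum of squares
    counts[int(np.argmin(scores))] += 1        # "select" the best-fitting group

print(dict(zip(group_sizes, counts / n_sims)))  # larger groups win far more often
```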