Tag: Large Language Model
All the articles with the tag "Large Language Model".
-
LoRE-Merging: Exploring Low-Rank Estimation For Large Language Model Merging
This paper proposes the LoRE-Merging framework, which uses low-rank estimation to construct an approximate base model and task vectors, enabling model merging without access to the original base models; it outperforms conventional merging methods on multiple benchmark datasets.
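A minimal PyTorch sketch of the low-rank intuition, applied to a single weight matrix. Taking the element-wise mean as the approximate base and a per-matrix truncated SVD as each task vector are illustrative assumptions, not the paper's exact estimation procedure:

```python
import torch

def low_rank_approx(delta: torch.Tensor, rank: int) -> torch.Tensor:
    """Rank-r truncated SVD of a 2-D weight residual."""
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    return U[:, :rank] @ torch.diag(S[:rank]) @ Vh[:rank, :]

def lore_merge_layer(weights: list[torch.Tensor], rank: int = 8) -> torch.Tensor:
    """Merge one layer's weights from several fine-tuned models.

    Assumption: the approximate base is the mean of the fine-tuned
    weights; each task vector is the rank-r approximation of the
    residual, and the merge adds back the averaged task vectors.
    """
    base = torch.stack(weights).mean(dim=0)  # approximate base model
    task_vectors = [low_rank_approx(w - base, rank) for w in weights]
    return base + torch.stack(task_vectors).mean(dim=0)
```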
-
Long-Short Chain-of-Thought Mixture Supervised Fine-Tuning Eliciting Efficient Reasoning in Large Language Models
This paper introduces Long-Short Chain-of-Thought Mixture Supervised Fine-Tuning (LS-Mixture SFT), which combines long and short CoT datasets to fine-tune non-reasoning LLMs, achieving a 2.3% average accuracy improvement and a 47.61% reduction in response length on reasoning benchmarks.
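A sketch of how such a mixture might be assembled, assuming each example carries its reasoning as a list of step strings. The `shorten_trace` helper and the 1:1 mixing are crude stand-ins for the paper's actual trace-shortening procedure:

```python
import random

def shorten_trace(steps: list[str], keep: int = 2) -> list[str]:
    # Stand-in shortening: keep only the opening and closing steps.
    if len(steps) <= 2 * keep:
        return steps
    return steps[:keep] + steps[-keep:]

def build_ls_mixture(long_data: list[dict], seed: int = 0) -> list[dict]:
    """Pair every long-CoT example with a shortened variant, then shuffle."""
    rng = random.Random(seed)
    mixture = []
    for ex in long_data:
        mixture.append(ex)
        mixture.append({**ex, "cot": shorten_trace(ex["cot"])})
    rng.shuffle(mixture)
    return mixture
```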
-
ZeroTuning: Unlocking the Initial Token's Power to Enhance Large Language Models Without Training
ZeroTuning proposes a training-free method that adjusts the attention a large language model pays to its initial token, yielding significant gains on text classification, question answering, and multi-turn dialogue tasks while remaining robust under resource constraints and long contexts.
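Because the method is training-free, it can be sketched as a post-hoc adjustment to already-softmaxed attention weights. The scale factor `gamma` and applying it uniformly to all heads are hypothetical choices; the paper tunes these, which this sketch omits:

```python
import torch

def zerotune_attention(attn: torch.Tensor, gamma: float = 1.5) -> torch.Tensor:
    """Rescale attention on the initial token, then renormalize.

    attn: softmaxed attention weights, shape (..., q_len, k_len).
    gamma > 1 boosts the initial token's share; gamma < 1 suppresses it.
    """
    w = attn.clone()
    w[..., 0] = w[..., 0] * gamma
    return w / w.sum(dim=-1, keepdim=True)
```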
-
Not All Thoughts are Generated Equal: Efficient LLM Reasoning via Multi-Turn Reinforcement Learning
This paper proposes the Long⊗Short framework, in which a long-thought LLM and a short-thought LLM reason collaboratively. Using automatic thought chunking, cold-start SFT, and multi-turn RL optimization, it substantially improves reasoning efficiency, bringing Qwen2.5-7B and Llama3.1-8B close to distilled models on multiple benchmarks while cutting token length by more than 80%.
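A hedged sketch of the collaboration loop alone (thought chunking, cold-start SFT, and multi-turn RL are out of scope here). The `generate(prompt) -> (thought, done)` interface is hypothetical:

```python
def collaborative_reason(question: str, long_model, short_model,
                         max_turns: int = 8) -> str:
    """The long-thought model opens the reasoning; the short-thought
    model continues turn by turn until it signals completion."""
    transcript = question
    thought, done = long_model.generate(transcript)
    transcript += thought
    for _ in range(max_turns):
        if done:
            break
        thought, done = short_model.generate(transcript)
        transcript += thought
    return transcript
```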
-
Training Language Models to Reason Efficiently
This paper presents a reinforcement-learning method for training large reasoning models to reason efficiently: a length-penalized objective with a tunable parameter α substantially reduces inference cost while retaining most of the accuracy on several math datasets.
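The length-penalized objective can be illustrated with a toy reward. Normalizing by a `max_tokens` budget and penalizing length regardless of correctness are assumptions; only the role of α, trading accuracy against inference cost, comes from the summary:

```python
def efficiency_reward(correct: bool, num_tokens: int,
                      alpha: float = 0.2, max_tokens: int = 4096) -> float:
    """Correctness reward minus an alpha-scaled, normalized length penalty.

    alpha = 0 recovers the plain correctness reward; larger alpha pushes
    the policy toward shorter reasoning traces.
    """
    penalty = alpha * min(num_tokens, max_tokens) / max_tokens
    return (1.0 if correct else 0.0) - penalty
```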