Tag: Large Language Model
All the articles with the tag "Large Language Model".
-
Zebra-Llama: Towards Extremely Efficient Hybrid Models
Zebra-Llama builds efficient hybrid models from pretrained Transformers by combining state-space models with multi-head latent attention layers, substantially reducing KV-cache size and improving inference throughput while matching or exceeding baseline performance.
-
The Mosaic Memory of Large Language Models
This paper introduces the concept of 'mosaic memory' in large language models: experiments on canaries and real-world datasets such as SlimPajama show that LLMs memorize training data through fuzzy duplicates with partial, predominantly syntactic overlaps. This challenges existing deduplication practices and raises concerns about privacy, model utility, and benchmark fairness.
-
REFINE-AF: A Task-Agnostic Framework to Align Language Models via Self-Generated Instructions using Reinforcement Learning from Automated Feedback
This paper proposes the REFINE-AF framework, which uses small open-source language models and reinforcement learning from automated feedback to generate task-agnostic instruction datasets, achieving a 63-66% improvement in task performance over baselines on the SUPER-NI dataset while reducing cost and human intervention.
-
Hybrid Latent Reasoning via Reinforcement Learning
This paper proposes HRPO, a reinforcement-learning-based hybrid latent reasoning framework that combines discrete tokens with continuous hidden states via a gating mechanism, significantly improving large language models' performance on knowledge and reasoning tasks while reducing reliance on chain-of-thought data.
-
Knowledge Grafting of Large Language Models
GraftLLM proposes a module-aware compression method that generates SkillPacks, enabling efficient cross-capability transfer, knowledge fusion, and forgetting-free continual learning across large language models, significantly outperforming existing methods on multiple benchmarks.