Tag: Large Language Model
All the articles with the tag "Large Language Model".
-
Log-Augmented Generation: Scaling Test-Time Reasoning with Reusable Computation
This paper proposes the Log-Augmented Generation (LAG) framework, which directly reuses past reasoning computation via the KV cache, significantly improving the accuracy and efficiency of large language models on knowledge- and reasoning-intensive tasks and outperforming standard agentic systems as well as existing reflection and KV-cache methods.
-
Task Specific Pruning with LLM-Sieve: How Many Parameters Does Your Task Really Need?
LLM-Sieve proposes a task-specific pruning framework that combines joint low-rank projection with a genetic algorithm for differentiated pruning, removing 20-75% of parameters at only 1-5% accuracy loss, significantly outperforming existing methods while remaining compatible with LoRA fine-tuning and quantization.
-
Zebra-Llama: Towards Extremely Efficient Hybrid Models
Zebra-Llama builds efficient hybrid models from pretrained Transformers by combining state-space models with multi-head latent attention layers, significantly reducing KV-cache size and improving inference throughput while matching or exceeding baseline performance.
-
The Mosaic Memory of Large Language Models
This paper introduces the concept of 'mosaic memory' in large language models: experiments on canaries and real-world datasets such as SlimPajama show that LLMs memorize training data through fuzzy duplicates with partial, predominantly syntactic overlaps. This challenges existing deduplication practices and raises concerns for privacy, model utility, and benchmark fairness.
-
REFINE-AF: A Task-Agnostic Framework to Align Language Models via Self-Generated Instructions using Reinforcement Learning from Automated Feedback
This paper proposes the REFINE-AF framework, which uses small open-source language models and reinforcement learning from automated feedback to generate task-agnostic instruction datasets, significantly improving performance on 63-66% of tasks in the SUPER-NI dataset over the baseline while reducing cost and human intervention.