Tag: Large Language Model
All the articles with the tag "Large Language Model".
-
Task-Core Memory Management and Consolidation for Long-term Continual Learning
This paper introduces Long-CL, a human-memory-inspired framework for long-term continual learning that combines task-core memory management with selective sample consolidation. It outperforms baselines by 7.4% and 6.5% AP on two new benchmarks, MMLongCL-Bench and TextLongCL-Bench, while mitigating catastrophic forgetting.
-
Fine-Tuning on Diverse Reasoning Chains Drives Within-Inference CoT Refinement in LLMs
This paper proposes DCoT, a method that generates multiple diverse reasoning chains within a single inference pass and self-refines across them, significantly improving large language models' performance on complex reasoning tasks, especially those with large answer spaces.
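As a rough illustration (my own sketch, not the authors' released code), a DCoT-style fine-tuning target could concatenate several diverse reasoning chains and a refined final answer into one completion, so the model learns to draft and then refine within a single inference pass; the formatting and function name below are hypothetical.

```python
# Hypothetical construction of a DCoT-style training target:
# several diverse chains followed by one final answer, all in a single completion.
def build_dcot_target(question: str, chains: list[str], final_answer: str) -> str:
    """Concatenate k diverse reasoning chains followed by a refined final answer."""
    parts = [f"Chain {i + 1}: {chain}" for i, chain in enumerate(chains)]
    parts.append(f"Final answer: {final_answer}")
    return f"Question: {question}\n" + "\n".join(parts)

print(build_dcot_target(
    "What is 17 * 24?",
    ["17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408",
     "24 * 17 = 24 * 10 + 24 * 7 = 240 + 168 = 408"],
    "408",
))
```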
-
Think Silently, Think Fast: Dynamic Latent Compression of LLM Reasoning Chains
This paper proposes the Compressed Latent Reasoning (CoLaR) framework, which dynamically compresses the reasoning process of large language models in latent space and optimizes it with reinforcement learning, substantially improving efficiency on mathematical reasoning tasks while maintaining high accuracy.
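A minimal sketch of the latent-compression idea, under the assumption that consecutive reasoning-step embeddings are merged into fewer latent vectors; the mean-pooling operator and fixed compression factor here are simplifications of mine, not CoLaR's learned, dynamically chosen compression.

```python
import torch

def compress_latent_chain(embeddings: torch.Tensor, factor: int) -> torch.Tensor:
    """Merge every `factor` consecutive embeddings (seq_len, d) -> (seq_len // factor, d)."""
    seq_len, dim = embeddings.shape
    assert seq_len % factor == 0, "pad the chain so its length is divisible by factor"
    return embeddings.view(seq_len // factor, factor, dim).mean(dim=1)

reasoning = torch.randn(12, 16)                      # 12 reasoning-step embeddings, dim 16
latent = compress_latent_chain(reasoning, factor=4)  # compress 4 steps into 1 latent vector
print(latent.shape)                                  # torch.Size([3, 16])
```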
-
Mitigate Position Bias in Large Language Models via Scaling a Single Dimension
This paper proposes mitigating position bias in long-context language models by scaling a single position-related channel of the hidden states, and validates the approach across multiple models and tasks, with notably better use of mid-context information on the "lost in the middle" benchmark.
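A minimal sketch, assuming a PyTorch/HuggingFace-style decoder whose layers expose hidden states of shape (batch, seq_len, d_model); the channel index, scale factor, and layer choice below are placeholders, since the paper identifies the position-carrying dimension empirically per model.

```python
import torch

POSITION_DIM = 123   # hypothetical index of the position-carrying channel
SCALE = 0.5          # hypothetical down-scaling factor

def scale_position_channel(module, inputs, output):
    """Forward hook (inference only): shrink one hidden-state channel to weaken position bias."""
    hidden = output[0] if isinstance(output, tuple) else output
    hidden[..., POSITION_DIM] *= SCALE  # in-place scaling of the chosen channel

# Usage sketch (model is any HF-style causal LM; the layer choice is illustrative):
# handle = model.model.layers[10].register_forward_hook(scale_position_channel)
# ... run generation ...
# handle.remove()
```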
-
Large Language Models are Locally Linear Mappings
This paper presents a method that, via a detached Jacobian, converts a large language model into a nearly exact locally linear system at a specific input point, revealing low-rank semantic structure inside the model and offering a preliminary exploration of output steering, though generalization and practical utility remain limited.
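A toy illustration of the local-linearity property on a small gated MLP (my own example, not the paper's code or an actual LLM): when the nonlinearity (here, the gate) is detached at the operating point, the map from input to output becomes exactly linear there, so the detached Jacobian applied to the input reproduces the output.

```python
import torch

torch.manual_seed(0)
W_in, W_gate, W_out = torch.randn(8, 4), torch.randn(8, 4), torch.randn(4, 8)

def gated_mlp_detached_gate(x: torch.Tensor) -> torch.Tensor:
    gate = torch.sigmoid(x @ W_gate.T).detach()   # nonlinearity frozen at this input point
    return (gate * (x @ W_in.T)) @ W_out.T

x = torch.randn(4)
y = gated_mlp_detached_gate(x)
J = torch.autograd.functional.jacobian(gated_mlp_detached_gate, x)  # shape (4, 4)

# With the gate detached, the map is linear in x, so J @ x matches y up to float error.
print(torch.allclose(J @ x, y, atol=1e-5))
```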