Tag: Large Language Model
All the articles with the tag "Large Language Model".
-
Block Circulant Adapter for Large Language Models
This paper proposes the Block Circulant Adapter, which exploits block-circulant matrices and the FFT to optimize LLM fine-tuning, markedly reducing storage and compute costs while using a learning-rate adjustment to keep training stable.
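A minimal sketch of the core idea, assuming a PyTorch setting; the class name `BlockCirculantAdapter`, the block size, and the zero initialization are illustrative choices, not the paper's exact configuration. Each circulant block is stored as a single defining vector and applied through the FFT, which is where the storage and compute savings come from:

```python
import torch

class BlockCirculantAdapter(torch.nn.Module):
    """Sketch: the dense weight update is replaced by a p-by-q grid of
    circulant blocks, each parameterized by one length-b vector."""
    def __init__(self, in_features, out_features, block_size=64):
        super().__init__()
        assert in_features % block_size == 0 and out_features % block_size == 0
        self.b = block_size
        self.q = in_features // block_size   # blocks along the input dim
        self.p = out_features // block_size  # blocks along the output dim
        # p*q*b parameters instead of (p*b)*(q*b) for a dense matrix.
        self.c = torch.nn.Parameter(torch.zeros(self.p, self.q, self.b))

    def forward(self, x):                                # x: (..., in_features)
        xb = x.reshape(*x.shape[:-1], self.q, self.b)
        # Circulant matvec via FFT: circ(c) @ v = ifft(fft(c) * fft(v))
        Xf = torch.fft.rfft(xb, dim=-1)                  # (..., q, b//2+1)
        Cf = torch.fft.rfft(self.c, dim=-1)              # (p, q, b//2+1)
        Yf = torch.einsum('pqk,...qk->...pk', Cf, Xf)    # sum over input blocks
        y = torch.fft.irfft(Yf, n=self.b, dim=-1)        # (..., p, b)
        return y.reshape(*x.shape[:-1], self.p * self.b)
```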
-
SEM: Reinforcement Learning for Search-Efficient Large Language Models
This paper proposes the *SEM* framework, which uses reinforcement learning to optimize the search behavior of large language models, cutting redundant searches while improving answer accuracy and significantly increasing inference efficiency.
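The summary does not state the reward, so the following is only a hedged sketch of the kind of reward shaping such a framework implies; the function name and coefficient are assumptions. Correct answers are rewarded and every retrieval call is penalized, so the policy learns to search only when its parametric knowledge falls short:

```python
def search_efficiency_reward(answer_correct: bool, num_searches: int,
                             search_penalty: float = 0.1) -> float:
    """Illustrative reward: +1 for a correct answer, minus a small
    cost per search call, so needless retrieval is discouraged."""
    accuracy_reward = 1.0 if answer_correct else 0.0
    return accuracy_reward - search_penalty * num_searches
```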
-
Patterns and Mechanisms of Contrastive Activation Engineering
This paper systematically investigates Contrastive Activation Engineering (CAE) for steering LLM behavior at inference time, revealing reliable in-distribution performance with optimal sample sizes around 80-100, but significant challenges in out-of-distribution generalization, model perplexity degradation, and vulnerability to adversarial inputs.
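A minimal sketch of the steering mechanism CAE builds on, assuming a Hugging Face-style decoder model; the layer choice, hook placement, and coefficient `alpha` are illustrative assumptions rather than the paper's exact setup. The steering vector is the mean activation difference between contrastive prompt pairs, added to the residual stream at inference time:

```python
import torch

def build_steering_vector(model, tokenizer, pos_prompts, neg_prompts, layer):
    @torch.no_grad()
    def mean_act(prompts):
        acts = []
        for p in prompts:
            ids = tokenizer(p, return_tensors="pt").input_ids
            hs = model(ids, output_hidden_states=True).hidden_states
            acts.append(hs[layer][0, -1])   # last-token residual state
        return torch.stack(acts).mean(dim=0)
    return mean_act(pos_prompts) - mean_act(neg_prompts)

def add_steering_hook(model, layer, v, alpha=1.0):
    # Add alpha * v to every position's activation at the chosen layer.
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + alpha * v
        return (hidden,) + output[1:] if isinstance(output, tuple) else hidden
    return model.model.layers[layer].register_forward_hook(hook)
```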
-
Large Language Models Think Too Fast To Explore Effectively
This paper evaluates the exploration abilities of large language models (LLMs) using the game Little Alchemy 2, finding that most LLMs underperform humans because they decide too early and over-rely on uncertainty-driven strategies, while o1 and DeepSeek-R1 significantly surpass humans by balancing empowerment with deep reasoning, highlighting the importance of reasoning depth and architectural design for open-ended exploration.
-
Better Estimation of the KL Divergence Between Language Models
This paper introduces a Rao-Blackwellized Monte Carlo estimator for KL divergence between language models, achieving unbiased estimates with provably lower variance than standard Monte Carlo methods, and demonstrates improved stability and performance in RLHF fine-tuning for sentiment-controlled generation.
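A hedged sketch of the estimator's structure under the standard formulation: sample sequences from model p, and at each step replace the single-sample log-ratio with the exact next-token KL over the full vocabulary (the Rao-Blackwellization step), then Monte Carlo average over the sampled sequences. The function name and the `(num_samples, seq_len, vocab)` logits layout are assumptions:

```python
import torch
import torch.nn.functional as F

def rao_blackwell_kl(p_logits, q_logits):
    """p_logits, q_logits: (num_samples, seq_len, vocab) next-token
    logits from the two models along sequences sampled from p."""
    log_p = F.log_softmax(p_logits, dim=-1)
    log_q = F.log_softmax(q_logits, dim=-1)
    # Exact per-step KL( p(.|prefix) || q(.|prefix) ), summed over vocab.
    step_kl = (log_p.exp() * (log_p - log_q)).sum(dim=-1)  # (S, T)
    # Sum over time, then average over sampled sequences (unbiased).
    return step_kl.sum(dim=-1).mean()
```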