Tag: Interpretability
All the articles with the tag "Interpretability".
-
Understanding Fact Recall in Language Models: Why Two-Stage Training Encourages Memorization but Mixed Training Teaches Knowledge
Using a cross-task gradient-tracing tool, this article shows that mixed training teaches knowledge and improves the generalization of fact recall in language models by increasing the number and importance of shared parameters and concentrating them in key attention heads.
-
Internal Chain-of-Thought: Empirical Evidence for Layer-wise Subtask Scheduling in LLMs
Using layer-wise context masking and cross-task patching, this article provides evidence for an "internal chain-of-thought" in large language models: the subtasks of a composite task are learned at different network depths and executed in sequence. This improves model transparency and opens a new path toward instruction-level behavior control.
-
EMORL: Ensemble Multi-Objective Reinforcement Learning for Efficient and Flexible LLM Fine-Tuning
This article proposes the EMORL framework, which uses ensemble learning to train single-objective models separately and aggregate them at the hidden-state level, with hierarchical grid search to optimize the aggregation weights. On a counseling reflection generation task it matches the performance of conventional methods while significantly improving training efficiency, scalability, and interpretability.
-
Boltzmann Classifier: A Thermodynamic-Inspired Approach to Supervised Learning
The Boltzmann Classifier introduces a thermodynamically inspired supervised learning approach that uses an energy-based model derived from the Boltzmann distribution to estimate class probabilities, achieving competitive accuracy on benchmark datasets while offering interpretability and computational efficiency.
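A minimal sketch of the core idea, converting per-class energies into class probabilities via the Boltzmann distribution; the squared-distance energy function and the temperature value below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def class_energies(x, class_means):
    """Energy of x for each class; here simply the squared distance to a class prototype (an assumed choice)."""
    return np.array([np.sum((x - mu) ** 2) for mu in class_means])

def boltzmann_probabilities(energies, temperature=1.0):
    """Boltzmann distribution over classes: P(c|x) = exp(-E_c/T) / sum_k exp(-E_k/T); lower energy -> higher probability."""
    logits = -energies / temperature
    logits -= logits.max()              # subtract the max for numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()

# Toy usage: two classes represented by prototype means, classify a new point.
class_means = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
x = np.array([0.5, 0.2])
probs = boltzmann_probabilities(class_energies(x, class_means))
print(probs.argmax(), probs)  # expected: class 0 with high probability
```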
-
When Do LLMs Admit Their Mistakes? Understanding the Role of Model Belief in Retraction
By constructing model-specific datasets and running belief-manipulation experiments, this article shows that the retraction behavior of large language models (LLMs) is causally influenced by their internal beliefs, and that supervised fine-tuning significantly improves retraction performance.