Tag: Interpretability
All the articles with the tag "Interpretability".
-
Large Language Models are Locally Linear Mappings
This paper proposes a method that turns a large language model into a nearly exact locally linear system at a given input point via the detached Jacobian, revealing low-rank semantic structure inside the model and offering a preliminary exploration of output steering, though generalization and practical utility remain limited.
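A minimal toy sketch of the detached-Jacobian idea (illustrative only, not the paper's code): in a SiLU-style gated map, detaching the sigmoid gate makes the function exactly linear at the current input, so the Jacobian of the detached map reproduces the output.

```python
# Toy sketch: a gated map made locally linear by detaching its sigmoid gate,
# so that f(x) == J @ x at this input point.
import torch

torch.manual_seed(0)
W1, W2 = torch.randn(16, 8), torch.randn(8, 16)

def f_detached(x):
    z = W1 @ x
    gate = torch.sigmoid(z).detach()  # freeze the nonlinearity at this point
    return W2 @ (gate * z)            # SiLU with the gate held constant

x = torch.randn(8)
J = torch.autograd.functional.jacobian(f_detached, x)
print(torch.allclose(J @ x, f_detached(x), atol=1e-5))  # True: locally linear
```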
-
Steering LLM Reasoning Through Bias-Only Adaptation
By training steering vectors, this paper tests the hypothesis that reasoning ability is already latent in large language models, approaching or even exceeding full-model fine-tuning on mathematical reasoning tasks with very high parameter efficiency.
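A minimal sketch of bias-only adaptation under assumed details (a toy frozen network stands in for an LLM; the hook placement and dimensions are illustrative): only an additive bias vector injected into one hidden layer is trained, while every model weight stays frozen.

```python
# Bias-only steering sketch: train a single additive vector via a forward
# hook on a frozen model.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2))
for p in model.parameters():
    p.requires_grad_(False)                     # freeze the whole model

steer = torch.zeros(8, requires_grad=True)      # the only trainable params
hook = model[1].register_forward_hook(lambda m, i, o: o + steer)

opt = torch.optim.Adam([steer], lr=1e-2)
x, y = torch.randn(32, 8), torch.randint(0, 2, (32,))
for _ in range(200):                            # steer toward the labels
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
hook.remove()
```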
-
A Statistical Case Against Empirical Human-AI Alignment
This position paper argues against forward, empirical human-AI alignment on the grounds of statistical bias and anthropocentric limitations, advocating prescriptive and backward alignment approaches to ensure transparency and minimize bias, and supports the argument with a case study on language-model decoding strategies.
-
Self-Interpretability: LLMs Can Describe Complex Internal Processes that Drive Their Decisions, and Improve with Training
By fine-tuning GPT-4o and GPT-4o-mini, this paper shows that large language models can quantitatively report the internal decision processes behind their choices (e.g., attribute weights), that introspection training substantially improves the accuracy of these reports, and that the ability generalizes to the models' native preferences, opening a new path for AI interpretability and safety.
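A minimal sketch of the evaluation idea under assumptions (synthetic choices and a hypothetical self-report, not the paper's data or pipeline): estimate the attribute weights actually driving binary decisions with logistic regression, then score how well a self-reported weight vector matches them.

```python
# Compare weights recovered from behavior against self-reported weights.
import numpy as np
from sklearn.linear_model import LogisticRegression  # sklearn >= 1.2

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])      # weights driving the decisions
X = rng.normal(size=(500, 3))            # attribute values per decision
p = 1.0 / (1.0 + np.exp(-X @ true_w))
choices = rng.binomial(1, p)

fitted_w = LogisticRegression(penalty=None).fit(X, choices).coef_[0]
reported_w = np.array([1.8, -0.9, 0.6])  # hypothetical self-reported weights
print(np.corrcoef(fitted_w, reported_w)[0, 1])  # report accuracy as correlation
```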
-
Talking Heads: Understanding Inter-layer Communication in Transformer Language Models
This paper investigates inter-layer communication in Transformer language models, identifying low-rank communication channels via SVD and demonstrating their causal role in prompt sensitivity through interventions that markedly improve performance on context-retrieval tasks such as the Laundry List task.
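A minimal sketch of the SVD analysis under assumed shapes (random matrices stand in for trained weights; `W_OV_early`, `W_QK_late`, and the dimensions are illustrative): composing an earlier head's output-value matrix with a later head's query-key matrix gives the virtual map between them, and its singular values bound the rank of the communication channel. In trained models, far fewer directions than this bound carry the signal.

```python
# SVD of a head-to-head composition: the channel rank is bounded by d_head.
import torch

torch.manual_seed(0)
d_model, d_head = 512, 64
# Factored head matrices: each has rank at most d_head.
W_OV_early = torch.randn(d_model, d_head) @ torch.randn(d_head, d_model)
W_QK_late = torch.randn(d_model, d_head) @ torch.randn(d_head, d_model)

composition = W_OV_early @ W_QK_late  # early write -> late read
S = torch.linalg.svdvals(composition)
print((S > 1e-6 * S[0]).sum())        # numerical rank <= d_head (64 of 512)
```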