Tag: Robustness
All the articles with the tag "Robustness".
-
Steering Away from Harm: An Adaptive Approach to Defending Vision Language Model Against Jailbreaks
ASTRA defends Vision Language Models by adaptively steering activations away from adversarial directions identified via image attribution, achieving state-of-the-art jailbreak mitigation with minimal impact on benign utility and high inference efficiency.
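A minimal sketch of the general activation-steering idea, not ASTRA's actual code: given a precomputed adversarial direction in a layer's hidden space, its component is projected out of the activations at inference time. All names and the fixed `strength` parameter are illustrative assumptions; ASTRA additionally derives the direction via image attribution and adapts the correction per input.

```python
import torch

def steer_away(hidden: torch.Tensor, adv_direction: torch.Tensor,
               strength: float = 1.0) -> torch.Tensor:
    """Remove the component of `hidden` along `adv_direction`.

    hidden:        (batch, seq_len, d_model) layer activations
    adv_direction: (d_model,) direction associated with harmful behavior
    strength:      fraction of the projection to subtract
    """
    d = adv_direction / adv_direction.norm()      # unit-normalize the direction
    proj = (hidden @ d).unsqueeze(-1) * d         # projection of activations onto d
    return hidden - strength * proj               # steer activations away from it
```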
-
Unveiling the Mechanisms of Explicit CoT Training: How CoT Enhances Reasoning Generalization
Through controlled experiments, internal mechanism analysis, and theoretical derivation, this paper reveals that explicit chain-of-thought (CoT) training substantially improves large language models' in-distribution (ID) and out-of-distribution (OOD) reasoning generalization by forming a two-stage generalization circuit, and verifies its robustness under noisy training data.
-
MOOSComp: Improving Lightweight Long-Context Compressor via Mitigating Over-Smoothing and Incorporating Outlier Scores
This paper proposes MOOSComp, which mitigates the over-smoothing problem by adding an inter-class cosine similarity loss during training and retains critical tokens by incorporating outlier scores during compression, substantially improving task-agnostic long-context compression performance and generalization.
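A hedged sketch of an inter-class cosine similarity penalty, illustrative rather than MOOSComp's exact loss: token representations are split by their keep/drop labels, and the loss discourages the two class centroids from collapsing onto each other, counteracting over-smoothing. Function and variable names are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def inter_class_cos_loss(token_reps: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """token_reps: (num_tokens, d) token representations; labels: (num_tokens,) in {0, 1}."""
    keep_centroid = token_reps[labels == 1].mean(dim=0)   # mean vector of tokens to keep
    drop_centroid = token_reps[labels == 0].mean(dim=0)   # mean vector of tokens to drop
    # Higher similarity between the two class centroids -> higher loss,
    # pushing the classes apart and keeping representations distinguishable.
    return F.cosine_similarity(keep_centroid, drop_centroid, dim=0)
```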
-
By proposing PFT, a position-ID manipulation method, this paper reveals and addresses LLMs' reliance on shortcuts when learning role separation, improving model robustness and safety while preserving performance.
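An illustrative sketch of position-ID manipulation, not the paper's exact PFT recipe: position IDs of the user segment are offset from the system segment by a gap, giving the model a positional signal for role separation instead of a token-order shortcut. The `gap` value and function name are hypothetical.

```python
import torch

def role_separated_position_ids(system_len: int, user_len: int, gap: int = 64) -> torch.Tensor:
    """Build position IDs where the user segment is shifted away from the system segment."""
    sys_ids = torch.arange(system_len)                    # 0 .. system_len - 1
    usr_ids = torch.arange(user_len) + system_len + gap   # user tokens shifted by a fixed gap
    return torch.cat([sys_ids, usr_ids]).unsqueeze(0)     # shape (1, system_len + user_len)
```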
-
Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs
This paper demonstrates that finetuning aligned LLMs on narrow tasks like writing insecure code can lead to emergent misalignment, causing broadly harmful behaviors across unrelated tasks, as evidenced by experiments on multiple models with control setups and backdoor triggers.