Tag: Safety
All the articles with the tag "Safety".
-
Adversarial Attacks on LLM-as-a-Judge Systems: Insights from Prompt Injections
本文通过提出攻击框架和实验评估,揭示了LLM-as-a-judge系统的prompt injection漏洞,并推荐使用多模型委员会等策略提升鲁棒性。
-
HAIR: Hardness-Aware Inverse Reinforcement Learning with Introspective Reasoning for LLM Alignment
HAIR introduces a novel LLM alignment method using hardness-aware inverse reinforcement learning and introspective reasoning, constructing a balanced safety dataset and training category-specific reward models with GRPO-S, achieving state-of-the-art harmlessness while preserving usefulness across multiple benchmarks.
-
Reason2Attack: Jailbreaking Text-to-Image Models via LLM Reasoning
本文提出Reason2Attack方法,通过基于Frame Semantics的CoT示例合成和带攻击过程奖励的强化学习,增强LLM的推理能力,以高效生成对抗性提示实现对T2I模型的越狱攻击。
-
CachePrune: Neural-Based Attribution Defense Against Indirect Prompt Injection Attacks
本文提出CachePrune方法,通过基于DPO损失的特征归因识别并修剪KV缓存中的关键神经元,防御间接提示注入攻击,同时保持模型响应质量。
-
ASIDE: Architectural Separation of Instructions and Data in Language Models
本文提出ASIDE方法,通过在嵌入级别应用固定正交旋转实现大型语言模型的指令-数据架构分离,提高了模型的安全性和对提示注入攻击的鲁棒性,同时不牺牲性能。