Posts
All the articles I've posted.
-
HyPerAlign: Hypotheses-driven Personalized Alignment
This paper proposes HyPerAlign, a hypotheses-driven few-shot method for personalized LLM alignment that improves the model's adaptability and safety for individual users while reducing reliance on fine-tuning.
-
Exploring the Role of Diversity in Example Selection for In-Context Learning
This paper proposes Diversity-based In-Context Learning (DICL), which re-ranks candidate examples with the Maximal Marginal Relevance (MMR) algorithm to balance relevance and diversity, improving or maintaining downstream task performance in roughly 70% of cases across multiple datasets and large language models.
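For readers unfamiliar with MMR-style re-ranking, here is a minimal sketch of the general idea (not the paper's own implementation): it assumes precomputed embeddings, cosine similarity, and a trade-off weight `lam`; the function name `mmr_rerank` and its parameters are illustrative only.

```python
import numpy as np

def mmr_rerank(query_vec, example_vecs, k, lam=0.5):
    """Select k in-context examples by Maximal Marginal Relevance (illustrative sketch).

    query_vec:    (d,) embedding of the test query
    example_vecs: (n, d) embeddings of candidate examples
    lam:          1.0 = pure relevance, 0.0 = pure diversity
    """
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    candidates = list(range(len(example_vecs)))
    selected = []
    while candidates and len(selected) < k:
        best_idx, best_score = None, -np.inf
        for i in candidates:
            relevance = cos(query_vec, example_vecs[i])
            # Redundancy: similarity to the most similar already-selected example.
            redundancy = max((cos(example_vecs[i], example_vecs[j]) for j in selected),
                             default=0.0)
            score = lam * relevance - (1 - lam) * redundancy
            if score > best_score:
                best_idx, best_score = i, score
        selected.append(best_idx)
        candidates.remove(best_idx)
    return selected  # indices of chosen examples, in selection order
```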
-
Kimi-Audio Technical Report
This paper presents Kimi-Audio, an open-source audio foundation model that combines a unified architecture of audio tokenization, LLM processing, and detokenization with large-scale multimodal training, achieving state-of-the-art multi-task performance in audio understanding, generation, and conversation.
-
Waking Up an AI: A Quantitative Framework for Prompt-Induced Phase Transition in Large Language Models
This paper introduces a dual-prompt framework (TIP and TQP) to quantify prompt-induced cognitive phase transitions in large language models (LLMs), finding that LLMs' emotional responses to concept-fusion prompts differ markedly from human intuition and revealing a potential gap between AI and human cognition in conceptual integration.
-
Toward Reasonable Parrots: Why Large Language Models Should Argue with Us by Design
This position paper advocates for redesigning Large Language Models as 'reasonable parrots' that integrate argumentation theory principles to foster critical thinking through multi-persona dialogues, challenging users with diverse perspectives rather than providing one-sided answers.