Tag: Large Language Model
All the articles with the tag "Large Language Model".
-
CCSK: Cognitive Convection of Self-Knowledge Based Retrieval Augmentation for Large Language Models
This article proposes the CCSK framework, which uses a Siamese Network and a Response Quality Model to dynamically fuse query similarity and response quality, optimizing the retrieval decisions of large language models and significantly improving F1 scores and accuracy on multiple question-answering datasets.
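A minimal sketch of the retrieval-decision idea: fuse a query-similarity score (from the Siamese Network) with a response-quality score, and retrieve only when the fused self-knowledge confidence is low. The fusion weights, the threshold, and the `should_retrieve` helper are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch: fuse two scores into a retrieval decision.
def should_retrieve(similarity: float, response_quality: float,
                    w_sim: float = 0.5, w_qual: float = 0.5,
                    threshold: float = 0.6) -> bool:
    """Trigger external retrieval only when the fused confidence that the
    model already 'knows' the answer falls below a threshold."""
    self_knowledge = w_sim * similarity + w_qual * response_quality
    return self_knowledge < threshold

# High similarity to known queries plus a high-quality draft -> skip retrieval.
print(should_retrieve(similarity=0.9, response_quality=0.8))  # False
print(should_retrieve(similarity=0.2, response_quality=0.3))  # True
```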
-
MegaScale-Infer: Serving Mixture-of-Experts at Scale with Disaggregated Expert Parallelism
This article presents the MegaScale-Infer system, which optimizes the inference efficiency of large-scale MoE models by disaggregating the parallelism strategies of the attention and FFN modules and using an efficient M2N communication library, achieving up to a 1.90x throughput improvement.
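A toy illustration of the disaggregated dataflow: M attention workers emit per-token activations, a router assigns each token to one of N FFN/expert workers, and the activations are regrouped in an M-to-N exchange. The worker counts and the modulo `route` function are stand-in assumptions; the real system uses a trained gating network and a high-performance M2N communication library, not Python lists.

```python
from collections import defaultdict

M_ATTN_WORKERS, N_EXPERT_WORKERS = 4, 8

def route(token_id: int) -> int:
    """Hypothetical stand-in for the MoE gating network."""
    return token_id % N_EXPERT_WORKERS

# Each attention worker emits (token_id, hidden_state) pairs...
attn_outputs = {m: [(m * 10 + i, f"h{m}_{i}") for i in range(3)]
                for m in range(M_ATTN_WORKERS)}

# ...which the M2N exchange regroups by destination expert worker.
expert_inbox = defaultdict(list)
for m, batch in attn_outputs.items():
    for token_id, hidden in batch:
        expert_inbox[route(token_id)].append((m, token_id, hidden))

for n in sorted(expert_inbox):
    print(f"expert worker {n} receives {expert_inbox[n]}")
```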
-
Reward-SQL: Boosting Text-to-SQL via Stepwise Reasoning and Process-Supervised Rewards
REWARD-SQL introduces a Text-to-SQL framework that decomposes queries into a Chain-of-CTEs and applies Process Reward Models (PRMs) with GRPO and Best-of-N sampling, achieving state-of-the-art 68.9% execution accuracy on the BIRD dataset with a 7B model.
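A hedged sketch of the Best-of-N selection step: each candidate is a Chain-of-CTEs (a list of SQL steps) and a process reward model scores each step; the chain with the highest mean step score wins. The `prm_score` heuristic here is a placeholder for the trained PRM described in the paper.

```python
from statistics import mean

def prm_score(step: str) -> float:
    """Hypothetical per-step process reward (the real PRM is a trained model)."""
    return min(1.0, len(step) / 80)  # stand-in heuristic

def best_of_n(candidates: list[list[str]]) -> list[str]:
    """Pick the candidate chain whose steps the PRM rates highest on average."""
    return max(candidates, key=lambda chain: mean(prm_score(s) for s in chain))

candidates = [
    ["WITH t1 AS (SELECT id FROM users)", "SELECT COUNT(*) FROM t1"],
    ["WITH t1 AS (SELECT id FROM users WHERE active = 1)",
     "SELECT COUNT(*) FROM t1"],
]
print(best_of_n(candidates))
```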
-
Reward-Augmented Data Enhances Direct Preference Alignment of LLMs
This article proposes a reward-augmented dataset method that relabels preference pairs so that large language models, conditioned on reward values, learn the full spectrum of response quality, significantly improving Direct Preference Optimization (DPO) and mitigating its tendency to forget high-quality rejected responses and to indiscriminately learn from low-quality chosen responses.
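A minimal sketch of the relabeling idea: rather than keeping only the (chosen, rejected) ordering, each response is tagged with its reward so the model is conditioned on an explicit quality level. The prompt template and reward values below are illustrative assumptions, not the paper's exact format.

```python
def reward_condition(prompt: str, response: str, reward: float) -> dict:
    """Attach an explicit reward tag to the prompt, turning one preference
    pair into reward-conditioned training examples."""
    return {"prompt": f"[reward = {reward:.1f}] {prompt}", "response": response}

pair = {"prompt": "Explain DPO in one sentence.",
        "chosen": ("A concise, correct explanation.", 0.9),
        "rejected": ("A vague, partly wrong explanation.", 0.4)}

augmented = [reward_condition(pair["prompt"], text, r)
             for text, r in (pair["chosen"], pair["rejected"])]
for ex in augmented:
    print(ex)
```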
-
Towards Robust and Parameter-Efficient Knowledge Unlearning for LLMs
This article proposes the Low-rank Knowledge Unlearning (LoKU) framework, comprising an Inverted Hinge Loss (IHL) and Fisher-weighted Low-rank Adapter initialization (FILA), to achieve robust and parameter-efficient knowledge unlearning for large language models, effectively removing sensitive information while preserving the model's original capabilities.
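A sketch of an inverted-hinge-style unlearning loss, following the intuition that the probability of the token to be forgotten should drop below that of its strongest competitor. The exact formulation (1 + p(target) - max over other tokens) and the tensor shapes are my reading of IHL; treat this as an assumption-laden illustration, not the paper's reference code.

```python
import torch
import torch.nn.functional as F

def inverted_hinge_loss(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """logits: (batch, vocab); targets: (batch,) token ids to unlearn."""
    probs = F.softmax(logits, dim=-1)
    p_target = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    # Mask out the target token, then take the best competing probability.
    masked = probs.scatter(1, targets.unsqueeze(1), float("-inf"))
    p_best_other = masked.max(dim=-1).values
    return (1.0 + p_target - p_best_other).mean()

logits = torch.randn(2, 10)
targets = torch.tensor([3, 7])
print(inverted_hinge_loss(logits, targets))
```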