Posts
All the articles I've posted.
-   The Unreasonable Effectiveness of Model Merging for Cross-Lingual Transfer in LLMs: Taking a modular approach that exploits the separation between mathematical-reasoning and multilingual capabilities in LLM parameters, this paper proposes strategies such as Layer-Swapping that significantly outperform non-modular baselines for cross-lingual transfer to low-resource languages, with the largest gains in data-constrained settings.
-   1bit-Merging: Dynamic Quantized Merging for Large Language Models: 1bit-Merging proposes a dynamic model-merging framework that combines 1-bit quantized task vectors with task-specific routing, retaining 94.53% of performance while cutting storage requirements to 55.02%, and outperforming both traditional and dynamic merging methods on general knowledge, mathematical reasoning, and code generation tasks.
-   Gameplay Highlights Generation: This paper presents a method to generate gameplay highlight reels by finetuning the X-CLIP multimodal model on an in-house FPS game dataset, achieving over 90% event detection accuracy and demonstrating transfer learning, while optimizing deployment through quantization.
-   Reward Reasoning Model: This paper proposes Reward Reasoning Models (RRMs), which run a chain-of-thought reasoning process before producing a reward, adaptively using test-time compute; they deliver significant gains across multiple reward-modeling benchmarks and real-world applications, performing especially well on complex reasoning tasks.
-   TL;DR: Too Long, Do Re-weighting for Efficient LLM Reasoning Compression: This paper proposes the TLDR method, which dynamically re-weights System-1 and System-2 reasoning data to substantially compress the number of reasoning output tokens from large language models (by roughly 40%), while largely preserving accuracy on math tasks of varying difficulty.