Posts
All the articles I've posted.
-
CoThink: Token-Efficient Reasoning via Instruct Models Guiding Reasoning Models
CoThink proposes a two-stage reasoning framework in which an instruct model generates a solution outline to guide a reasoning model through the final answer, cutting token generation by 22.3% on average while preserving accuracy and improving the reasoning efficiency of large language models.
-
How Much Backtracking is Enough? Exploring the Interplay of SFT and RL in Enhancing LLM Reasoning
Through controlled experiments, this paper studies the interplay of SFT and RL in enhancing LLM reasoning, finding that short-CoT warm-up contributes moderately to RL, that the amount of backtracking should match task difficulty, and that RL depends little on the correctness of SFT data but is sensitive to its structural consistency.
-
Towards Complementary Knowledge Distillation for Efficient Dense Image Prediction
This paper introduces a Boundary and Context Distillation (BCD) method for efficient dense image prediction. Through targeted knowledge transfer, it improves compact models' boundary completeness and region connectivity, achieving superior accuracy across multiple tasks and datasets without increasing inference cost.
-
Stabilizing and Solving Unique Continuation Problems by Parameterizing Data and Learning Finite Element Solution Operators
This paper proposes a method that combines the finite element method with machine learning techniques (autoencoders and operator learning) to solve unique continuation problems arising in nonlinear PDE inverse problems. Data dimensionality reduction and stabilization techniques improve the stability and efficiency of solving these ill-posed problems, and the method's effectiveness is validated on synthetic data.
-
RaaS: Reasoning-Aware Attention Sparsity for Efficient LLM Reasoning
This paper proposes the RaaS algorithm, which identifies milestone tokens in reasoning tasks and manages KV vectors with an LRU caching policy, achieving O(L) time and memory complexity while maintaining high accuracy and delivering significantly better memory efficiency than existing methods such as Quest.