Posts
All the articles I've posted.
-
CREAM: Consistency Regularized Self-Rewarding Language Models
This post presents CREAM (Consistency Regularized Self-Rewarding Language Model), a method that regularizes preference training by measuring the consistency of rankings produced by successive iterations of the self-rewarding process, thereby mitigating reward bias and improving both the alignment performance and the training stability of small language models.
-
SEAL: Steerable Reasoning Calibration of Large Language Models for Free
SEAL, a training-free method, calibrates the reasoning process of Large Language Models by steering latent representations to reduce redundant thoughts, achieving up to 14.1% accuracy improvement and 50.4% token reduction across diverse benchmarks.
-
Stabilizing and Solving Unique Continuation Problems by Parameterizing Data and Learning Finite Element Solution Operators
This post presents an approach that combines finite element methods with machine learning techniques (autoencoders and operator learning) to solve unique continuation problems arising in nonlinear PDE inverse problems. Data dimensionality reduction and stabilization techniques improve the stability and efficiency of solving these ill-posed problems, and the method's effectiveness is validated on synthetic data.
-
Think2SQL: Reinforce LLM Reasoning Capabilities for Text2SQL
By combining supervised fine-tuning (SFT), reinforcement learning (RL), and fine-grained reward functions (e.g., QATCH), this work significantly improves the reasoning capabilities and performance of small LLMs on Text2SQL tasks; the Think2SQL-7B model surpasses models with over 400B parameters on the BIRD dataset.
-
Understanding the Skill Gap in Recurrent Language Models: The Role of the Gather-and-Aggregate Mechanism
By identifying the Gather-and-Aggregate (G&A) mechanism, this work shows that the gap in in-context retrieval performance between Transformer and SSM models stems mainly from differences in how a few critical heads are implemented, and its hybrid-model experiments confirm the potential of attention mechanisms to improve the retrieval capabilities of SSMs.