Posts
All the articles I've posted.
-
Less is More: Towards Green Code Large Language Models via Unified Structural Pruning
This post presents Flab-Pruner, a unified structural pruning method that combines vocabulary, layer, and FFN pruning; using KL-divergence optimization and a custom fine-tuning strategy, it reduces the parameter count of code LLMs while preserving performance and efficiency.
-
When2Call: When (not) to Call Tools
This post introduces the When2Call benchmark, which evaluates language models' tool-calling decisions in a multiple-choice format, and a preference-optimization (RPO) training method that markedly improves a model's balance between calling tools and behaving conservatively.
-
HINT: Hypernetwork Approach to Training Weight Interval Regions in Continual Learning
HINT proposes a continual-learning framework that applies interval arithmetic in the embedding space and uses a hypernetwork to generate target-network weights. It achieves better scalability and non-forgetting guarantees than InterContiNet and outperforms several benchmarks, though it struggles with complex datasets.
-
GuidedQuant: Large Language Model Quantization via Exploiting End Loss Guidance
GuidedQuant incorporates gradient information from the end loss and preserves weight dependencies within output channels; combined with the LNQ algorithm, it significantly improves large language model performance under both weight and activation quantization, enabling more efficient post-training quantization.
-
Replay to Remember: Retaining Domain Knowledge in Streaming Language Models
This post combines LoRA with a lightweight replay mechanism to mitigate catastrophic forgetting in large language models under streaming learning, while enabling real-time domain adaptation.