Posts
All the articles I've posted.
-
UnifyFL: Enabling Decentralized Cross-Silo Federated Learning
UnifyFL proposes a decentralized cross-silo federated learning framework that uses an Ethereum blockchain and IPFS to enable trust-based collaboration among organizations, achieving accuracy comparable to centralized FL while supporting flexible aggregation policies and handling stragglers efficiently through synchronous and asynchronous modes.
-
AI agents may be worth the hype but not the resources (yet): An initial exploration of machine translation quality and costs in three language pairs in the legal and news domains
This paper empirically evaluates five machine translation paradigms and finds that reasoning-enhanced large language models (such as o1-preview) excel in human evaluation and surpass traditional NMT, while multi-agent systems show promise but are limited by high computational costs and inconsistent performance across language pairs.
-
Activated LoRA: Fine-tuned LLMs for Intrinsics
This paper proposes Activated LoRA (aLoRA), an improved LoRA framework that adapts weights only for tokens generated after activation, allowing it to reuse the base model's KV cache for efficient dynamic adaptation; it matches standard LoRA performance across multiple tasks while significantly reducing inference cost.
-
Recall with Reasoning: Chain-of-Thought Distillation for Mamba's Long-Context Memory and Extrapolation
This paper proposes Recall with Reasoning (RwR), a method that enhances Mamba's long-context memory and extrapolation by distilling chain-of-thought summarization from a teacher model, achieving significant performance improvements on the LONGMEMEVAL and HELMET benchmarks while preserving short-context capabilities.
-
LENSLLM: Unveiling Fine-Tuning Dynamics for LLM Selection
LENSLLM introduces a Hessian-based PAC-Bayes framework and an NTK-based scaling model for LLM selection; by modeling fine-tuning dynamics across diverse tasks, it achieves up to 91.1% accuracy and an 88.5% reduction in computational cost.