Tag: Parameter-Efficient Fine-Tuning
All articles tagged "Parameter-Efficient Fine-Tuning".
-   SEFE: Superficial and Essential Forgetting Eliminator for Multimodal Continual Instruction Tuning
    This paper introduces SEFE, a method combining Answer Style Diversification (ASD) to mitigate superficial forgetting and RegLoRA to address essential forgetting in Multimodal Continual Instruction Tuning, achieving state-of-the-art performance on the CoIN benchmark.
-   TT-LoRA MoE: Unifying Parameter-Efficient Fine-Tuning and Sparse Mixture-of-Experts
    This paper proposes the TT-LoRA MoE framework, which combines tensor-train low-rank adapters with a dynamic sparse routing mechanism through two-stage training. With a very small parameter budget (2% of LoRA, 0.03% of AdapterFusion), it achieves competitive performance on multi-task NLP classification, improving average accuracy by roughly 4 percentage points while mitigating task interference and knowledge forgetting (a minimal illustrative sketch of the tensor-train adapter idea follows this list).
-   Communication-Efficient Wireless Federated Fine-Tuning for Large-Scale AI Models
    This paper proposes a wireless federated LoRA fine-tuning framework that uses Sparsified Orthogonal Fine-Tuning (SOFT) and a Two-Stage Federated Algorithm (TSFA) to optimize parameter sparsification and dynamic resource allocation, improving communication efficiency and learning performance.
-   Quantum-Enhanced LLM Efficient Fine Tuning
    This paper proposes Quantum Tensor Hybrid Adaptation (QTHA), which integrates quantum neural networks with tensor networks to achieve parameter-efficient fine-tuning of LLMs, substantially reducing the number of trainable parameters while improving performance and laying a foundation for quantum-enhanced AI.
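
The TT-LoRA MoE entries above describe replacing LoRA's dense low-rank matrices with tensor-train (TT) factorized adapters on top of frozen weights. As a rough illustration of that general idea only, not the papers' actual code, the following PyTorch sketch wraps a frozen linear layer with a trainable update assembled from TT cores; the class name `TTLoRALinear`, the factor shapes, the TT rank, and the initialization scale are all assumptions made for this example.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class TTLoRALinear(nn.Module):
    """Illustrative sketch of a tensor-train (TT) factorized LoRA-style adapter.

    The frozen base weight stays untouched; the trainable update dW is stored as
    a chain of small TT cores instead of the usual dense low-rank pair B @ A.
    """

    def __init__(self, base: nn.Linear, in_factors, out_factors, tt_rank=2, alpha=1.0):
        super().__init__()
        # The factorizations must multiply back to the original dimensions.
        assert math.prod(in_factors) == base.in_features
        assert math.prod(out_factors) == base.out_features
        self.base = base
        for p in self.base.parameters():  # freeze the pretrained weights
            p.requires_grad = False
        self.alpha = alpha
        # TT ranks are (1, r, ..., r, 1); each core covers one factored mode.
        ranks = [1] + [tt_rank] * (len(in_factors) - 1) + [1]
        self.cores = nn.ParameterList([
            nn.Parameter(0.01 * torch.randn(ranks[k], out_factors[k], in_factors[k], ranks[k + 1]))
            for k in range(len(in_factors))
        ])

    def delta_weight(self):
        # Contract the TT cores back into a dense (out_features, in_features) update.
        w = self.cores[0]  # shape (1, o0, i0, r1)
        for core in self.cores[1:]:
            # Sum over the shared TT rank, then merge the output / input modes.
            w = torch.einsum("aoib,bpjc->aopijc", w, core)
            a, o, p, i, j, c = w.shape
            w = w.reshape(a, o * p, i * j, c)
        return w.squeeze(0).squeeze(-1)

    def forward(self, x):
        return self.base(x) + self.alpha * F.linear(x, self.delta_weight())


# Usage sketch: adapt a 768x768 projection, factoring each dimension as 12*8*8.
layer = TTLoRALinear(nn.Linear(768, 768), in_factors=(12, 8, 8), out_factors=(12, 8, 8))
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # only the TT cores train
```

In a MoE setup as described in those entries, several such adapters would serve as experts over a shared frozen backbone, with a separately trained router selecting among them; that routing stage is beyond this minimal sketch.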