Tag: Foundation Model
All the articles with the tag "Foundation Model".
-
No Task Left Behind: Isotropic Model Merging with Common and Task-Specific Subspaces
This paper proposes an isotropic model merging framework that flattens the singular-value spectrum of task matrices and combines common and task-specific subspaces, substantially improving multi-task model performance and achieving state-of-the-art merging results on vision and language tasks.
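A minimal sketch of the core idea as described in the summary: flatten the singular-value spectrum of a merged task matrix so no single direction dominates. This is one plausible reading, not the paper's actual algorithm; the function name, the uniform-spectrum choice, and the `alpha` scaling are all assumptions.

```python
import torch

def isotropic_merge(task_matrices, alpha=1.0):
    """Hypothetical sketch: merge per-task weight deltas and flatten
    the singular-value spectrum of the result (not the paper's exact method)."""
    # Sum the per-task deltas (W_finetuned - W_pretrained) for one layer.
    merged = torch.stack(task_matrices).sum(dim=0)
    # Decompose, then replace the decaying spectrum with a uniform one
    # so every singular direction contributes equally (isotropy).
    U, S, Vh = torch.linalg.svd(merged, full_matrices=False)
    iso_S = torch.full_like(S, S.mean().item())
    return alpha * (U @ torch.diag(iso_S) @ Vh)
```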
-
Why Do More Experts Fail? A Theoretical Analysis of Model Merging
Through theoretical analysis, this paper explains why model merging performance saturates as the number of expert models grows, and proposes a Reparameterized Heavy-Tailed method that expands parameter-space coverage, validating its effectiveness on multiple benchmark tasks.
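A very rough sketch of what "heavy-tailed reparameterization to expand parameter-space coverage" could look like, assuming a sign-preserving power-law stretch of task-vector entries; the transform, its exponent, and the function name are illustrative guesses, not the paper's method.

```python
import torch

def heavy_tailed_reparam(task_vector, tail_exp=1.5):
    """Hypothetical sketch: stretch a task vector's entries with a
    sign-preserving power law so its distribution gains heavier tails,
    widening the region of parameter space the merge can reach."""
    scale = task_vector.abs().mean()  # reference magnitude
    # Entries above the reference magnitude are amplified, entries
    # below it are shrunk -> heavier tails for tail_exp > 1.
    stretched = task_vector.sign() * (task_vector.abs() / scale) ** tail_exp
    return stretched * scale
```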
-
Unifying Multimodal Large Language Model Capabilities and Modalities via Model Merging
This paper introduces a benchmark for merging multimodal large language models (MLLMs) together with an improved task-vector optimization method (WUDI v2) that removes noise via low-rank approximation and optimizes the merged vector, achieving an average 2.48% performance gain in multi-task and cross-modal merging experiments and demonstrating the potential of building high-performing MLLMs without additional data or training.
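A minimal sketch of the low-rank denoising step the summary mentions: truncate the SVD of each task vector before merging, treating the spectral tail as noise. The function names, the fixed `rank`, and the `coef` scaling are assumptions for illustration, not WUDI v2's actual procedure.

```python
import torch

def denoise_task_vector(delta, rank=16):
    """Hypothetical sketch: keep only the top singular directions of a
    task vector (W_finetuned - W_base), discarding the spectral tail
    as noise."""
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    return U[:, :rank] @ torch.diag(S[:rank]) @ Vh[:rank, :]

def merge_denoised(base, deltas, rank=16, coef=0.5):
    """Add the sum of denoised task vectors back onto the base weights."""
    return base + coef * sum(denoise_task_vector(d, rank) for d in deltas)
```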
-
Foundation Models For Seismic Data Processing: An Extensive Review
This paper conducts an extensive review of natural image foundation models for seismic data processing, demonstrating that hierarchical models like Swin and ConvNeXt, especially with self-supervised pre-training, outperform non-hierarchical ones in demultiple, interpolation, and denoising tasks, while highlighting the benefits and limitations of natural image pre-training for seismic applications.
-
R&B: Domain Regrouping and Data Mixture Balancing for Efficient Foundation Model Training
R&B框架通过基于语义相似性的数据重新分组和梯度驱动的动态权重调整,以极低的计算开销(0.01%)在自然语言和多模态任务中匹配或超越现有数据混合策略,提升了基础模型训练效率。