Tag: Multimodality
All the articles with the tag "Multimodality".
-
Unraveling LoRA Interference: Orthogonal Subspaces for Robust Model Merging
This paper proposes OSRM, which constrains the LoRA subspace before fine-tuning to reduce inter-task interference, substantially improving the merging performance of several language models across eight GLUE datasets while preserving single-task accuracy.
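A generic illustration of the underlying idea of giving each task's LoRA adapter a mutually orthogonal input subspace; this is a simplified stand-in under stated assumptions, not the paper's exact OSRM procedure, and the function name is hypothetical:

```python
import torch

def orthogonal_lora_A(num_tasks: int, rank: int, in_features: int):
    """Give each task a LoRA down-projection (A) whose rows span mutually
    orthogonal subspaces of the input space, so per-task updates are less
    likely to interfere when the adapters are later merged.
    (Simplified sketch, not the paper's exact OSRM construction.)
    """
    assert num_tasks * rank <= in_features, "not enough dimensions for disjoint subspaces"
    # Orthonormal basis of the input space via QR of a random matrix
    q, _ = torch.linalg.qr(torch.randn(in_features, in_features))
    # Assign each task a disjoint block of basis vectors as its LoRA A matrix
    return [q[:, t * rank:(t + 1) * rank].T.contiguous() for t in range(num_tasks)]
```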
-
Shadow-FT: Tuning Instruct via Base
This paper proposes the Shadow-FT framework, which tunes the BASE model and grafts the resulting weight updates directly onto the INSTRUCT model, significantly improving large language models on math, coding, and reasoning tasks without introducing extra training cost.
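A minimal sketch of the grafting step, assuming paired BASE and INSTRUCT checkpoints with identical architectures and parameter names; the helper below is illustrative, not code from the paper:

```python
import torch

def graft_base_update(base_before: dict, base_after: dict, instruct: dict) -> dict:
    """Transplant the weight update learned on the BASE model onto INSTRUCT.

    base_before: BASE state dict before fine-tuning
    base_after:  BASE state dict after fine-tuning on the target task
    instruct:    INSTRUCT state dict to be updated (same keys/shapes)
    """
    grafted = {}
    for name, w_instruct in instruct.items():
        delta = base_after[name] - base_before[name]  # update learned on BASE
        grafted[name] = w_instruct + delta            # apply it to INSTRUCT
    return grafted
```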
-
Zebra-Llama: Towards Extremely Efficient Hybrid Models
Zebra-Llama builds efficient hybrid models from pre-trained Transformers by combining state-space model layers with multi-head latent attention layers, dramatically reducing KV-cache size and increasing inference throughput while matching or exceeding baseline performance.
-
An Efficient Sparse Kernel Generator for O(3)-Equivariant Deep Networks
This paper introduces a GPU sparse kernel generator for the Clebsch-Gordan tensor product in O(3)-equivariant deep networks. By leveraging JIT compilation, static analysis, and kernel fusion, it achieves significant speedups (up to 10x over e3nn and 1.3x-2.0x over cuEquivariance), particularly benefiting computational chemistry models such as NequIP and MACE.
-
Layer Swapping for Zero-Shot Cross-Lingual Transfer in Large Language Models
This paper proposes a layer-swapping method that recomposes the top and bottom layers of a language-expert model with the middle layers of a math-expert model, enabling zero-shot cross-lingual transfer and improving math-reasoning performance in low-resource languages by up to 10%.
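A hedged sketch of the recomposition, assuming both experts share the same architecture and expose their transformer blocks as a list-like `model.layers`; the layer counts and attribute name are illustrative assumptions, not the paper's exact configuration:

```python
import copy

def layer_swap(lang_expert, math_expert, n_bottom: int = 4, n_top: int = 4):
    """Compose a merged model: bottom and top layers from the language expert,
    middle layers from the math expert (layer counts are illustrative).
    """
    merged = copy.deepcopy(math_expert)  # start from the math expert
    n_layers = len(merged.layers)
    for i in range(n_layers):
        if i < n_bottom or i >= n_layers - n_top:
            # overwrite bottom/top layers with the language expert's weights
            merged.layers[i].load_state_dict(lang_expert.layers[i].state_dict())
    return merged
```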