Tag: Pre-training
All the articles with the tag "Pre-training".
-
Zero-Shot Vision Encoder Grafting via LLM Surrogates
This paper proposes training a vision encoder against a small surrogate model and grafting it zero-shot onto a large LLM (e.g., Llama-70B), preserving visual understanding while cutting VLM training cost by roughly 45%.
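A minimal sketch of the surrogate-grafting idea, under the assumption that the surrogate shares the target LLM's embedding space so the trained encoder and projector can be reused without further training; all class and variable names are hypothetical, not the paper's API:

```python
import torch.nn as nn

class VisionAdapter(nn.Module):
    """Vision encoder + projector trained with a small surrogate LM as the backbone."""
    def __init__(self, vision_encoder: nn.Module, vis_dim: int, llm_hidden_dim: int):
        super().__init__()
        self.vision_encoder = vision_encoder
        self.projector = nn.Linear(vis_dim, llm_hidden_dim)  # maps image features into the LLM token space

    def forward(self, images):
        feats = self.vision_encoder(images)   # (batch, num_patches, vis_dim)
        return self.projector(feats)          # visual "tokens" consumable by the LLM

# Assumed workflow: optimize VisionAdapter against a frozen small surrogate LM,
# then plug the same adapter into the large target LLM (e.g., Llama-70B) at
# inference time, zero-shot, without retraining the adapter.
```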
-
Foundation Models For Seismic Data Processing: An Extensive Review
This paper conducts an extensive review of natural image foundation models for seismic data processing, demonstrating that hierarchical models like Swin and ConvNeXt, especially with self-supervised pre-training, outperform non-hierarchical ones in demultiple, interpolation, and denoising tasks, while highlighting the benefits and limitations of natural image pre-training for seismic applications.
-
FlashThink: An Early Exit Method For Efficient Reasoning
FlashThink uses a verification model to dynamically decide whether the reasoning process can terminate early, substantially shortening reasoning content (about a 77% average efficiency gain) while preserving the accuracy of large language models, with FT² fine-tuning further improving performance.
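A minimal sketch of verifier-driven early exit for chain-of-thought generation; `reasoner`, `verifier`, and the threshold are stand-in assumptions, not the paper's actual interface:

```python
def generate_with_early_exit(reasoner, verifier, prompt, max_chunks=32, threshold=0.9):
    """Generate reasoning chunk by chunk, stopping once the verifier deems it sufficient."""
    reasoning = ""
    for _ in range(max_chunks):
        chunk = reasoner.next_chunk(prompt, reasoning)       # produce the next reasoning segment
        reasoning += chunk
        if verifier.confidence(prompt, reasoning) >= threshold:
            break                                            # verifier judges the partial reasoning is enough
    return reasoner.answer(prompt, reasoning)                # answer from the (shortened) reasoning trace
```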
-
Mini-batch Coresets for Memory-efficient Language Model Training on Data Mixtures
This paper proposes CoLM, which builds mini-batch coresets that match the gradients of large batches; with 2x lower memory requirements, LLM fine-tuning with CoLM outperforms regular training with 4x larger batch sizes while also converging faster.
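A minimal sketch of the gradient-matching intuition behind mini-batch coresets: greedily pick a small subset of examples whose average gradient approximates the large batch's average gradient. The per-example gradient proxies and the greedy criterion are assumptions for illustration, not CoLM's exact algorithm:

```python
import numpy as np

def select_coreset(per_example_grads: np.ndarray, k: int) -> list:
    """per_example_grads: (n, d) gradient proxies (e.g., last-layer grads); returns k indices."""
    target = per_example_grads.mean(axis=0)              # large-batch gradient to match
    selected, running_sum = [], np.zeros_like(target)
    for _ in range(k):
        best_i, best_err = None, np.inf
        for i in range(len(per_example_grads)):
            if i in selected:
                continue
            cand = (running_sum + per_example_grads[i]) / (len(selected) + 1)
            err = np.linalg.norm(cand - target)          # distance to the full-batch gradient
            if err < best_err:
                best_i, best_err = i, err
        selected.append(best_i)
        running_sum += per_example_grads[best_i]
    return selected
```

The selected indices would then form the small mini-batch actually used for the optimizer step, which is where the memory savings come from.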
-
Born a Transformer -- Always a Transformer?
This paper studies Transformers' length-generalization limits via retrieval and copying tasks, finding that pre-training selectively strengthens induction (rightward/forward tasks) but cannot overcome inherent architectural limitations; fine-tuning can balance the asymmetry yet remains subject to the theoretical constraints.