Tag: Pre-training
All the articles with the tag "Pre-training".
-
Adaptive Layer-skipping in Pre-trained LLMs
This paper proposes FlexiDepth, which adds plug-in routers and adapters to pre-trained LLMs to enable adaptive layer-skipping, improving computational efficiency while preserving generation performance; its experiments also reveal how token type affects computational demand.
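For readers curious what a plug-in router and adapter can look like in practice, here is a minimal PyTorch sketch of token-level layer skipping. The router, adapter shape, and soft gating below are illustrative assumptions, not FlexiDepth's exact architecture.

```python
import torch
import torch.nn as nn


class SkippableLayer(nn.Module):
    """Wraps a pre-trained transformer layer with a plug-in router and adapter.

    The router scores each token; tokens with low scores take a lightweight
    adapter path instead of the full layer. In this toy sketch both paths are
    computed and softly mixed; an efficiency-oriented implementation would
    hard-route low-score tokens past the layer entirely at inference time.
    """

    def __init__(self, layer: nn.Module, hidden_size: int):
        super().__init__()
        self.layer = layer                       # frozen pre-trained layer
        self.router = nn.Linear(hidden_size, 1)  # plug-in router (trainable)
        self.adapter = nn.Sequential(            # plug-in lightweight adapter
            nn.Linear(hidden_size, hidden_size // 8),
            nn.GELU(),
            nn.Linear(hidden_size // 8, hidden_size),
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        gate = torch.sigmoid(self.router(hidden_states))        # (B, T, 1) per-token score
        full_out = self.layer(hidden_states)                     # expensive path
        skip_out = hidden_states + self.adapter(hidden_states)   # cheap residual path
        return gate * full_out + (1.0 - gate) * skip_out


if __name__ == "__main__":
    hidden = 64
    dummy_layer = nn.Sequential(nn.Linear(hidden, hidden), nn.GELU(), nn.Linear(hidden, hidden))
    block = SkippableLayer(dummy_layer, hidden_size=hidden)
    x = torch.randn(2, 10, hidden)   # (batch, tokens, hidden)
    print(block(x).shape)            # torch.Size([2, 10, 64])
```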
-
Unveiling Language-Specific Features in Large Language Models via Sparse Autoencoders
This paper uses Sparse Autoencoders to identify and manipulate language-specific features in Large Language Models. It introduces a monolinguality metric, demonstrates the context dependency of these features via code-switching, and enhances steering vectors for finer control over multilingual generation, with ablation studies revealing significant language-specific impacts.
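Since the paper's exact monolinguality formula isn't reproduced here, the Python sketch below shows one plausible way to score how language-specific an SAE feature is from assumed per-language activation rates; the top-versus-runner-up definition and all names are my assumptions, not the paper's metric.

```python
import numpy as np


def monolinguality(activation_rates: np.ndarray) -> np.ndarray:
    """Score how language-specific each SAE feature is.

    activation_rates: (num_features, num_languages) matrix where entry (f, l)
    is the fraction of language-l tokens on which feature f is active.
    Returns, per feature, the activation rate on its dominant language minus
    the best rate on any other language (close to 1.0 = highly language-specific).
    """
    top = np.max(activation_rates, axis=1)
    masked = activation_rates.copy()
    # Mask out the dominant language, then take the runner-up rate.
    masked[np.arange(len(masked)), np.argmax(activation_rates, axis=1)] = -np.inf
    runner_up = np.max(masked, axis=1)
    return top - runner_up


# Toy example: feature 0 fires mostly on language 0, feature 1 fires everywhere.
rates = np.array([[0.9, 0.05, 0.02],
                  [0.6, 0.55, 0.58]])
print(monolinguality(rates))  # roughly [0.85, 0.02] -> feature 0 is language-specific
```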
-
Splitwiser: Efficient LM inference with constrained resources
Splitwiser introduces a method to split LLM inference phases on a single GPU using multiprocessing and NVIDIA MPS, achieving modest latency reductions (up to 18.2%) and throughput improvements (up to 1.42x) on Huggingface and vLLM pipelines, though constrained by overheads and scalability issues.
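As a rough illustration of the phase-splitting idea, the sketch below runs a prefill worker and a decode worker as separate processes handing work off through a queue. The worker bodies are placeholders, and MPS itself is enabled outside Python (for example with `nvidia-cuda-mps-control -d`), so this shows only the shape of the approach, not the paper's Hugging Face or vLLM integration.

```python
import torch
import torch.multiprocessing as mp


def prefill_worker(prompt_queue: mp.Queue, handoff_queue: mp.Queue) -> None:
    # Placeholder: a real pipeline would run the forward pass over the prompt
    # tokens here and hand the resulting KV cache to the decode process.
    while (prompt := prompt_queue.get()) is not None:
        kv_stub = torch.zeros(1)  # stand-in for the prompt's KV cache
        handoff_queue.put((prompt, kv_stub))
    handoff_queue.put(None)


def decode_worker(handoff_queue: mp.Queue) -> None:
    # Placeholder: a real pipeline would generate tokens autoregressively from
    # the handed-off cache while the prefill process starts on the next prompt.
    while (item := handoff_queue.get()) is not None:
        prompt, _kv = item
        print(f"decoding completion for: {prompt!r}")


if __name__ == "__main__":
    mp.set_start_method("spawn")
    prompts, handoff = mp.Queue(), mp.Queue()
    workers = [mp.Process(target=prefill_worker, args=(prompts, handoff)),
               mp.Process(target=decode_worker, args=(handoff,))]
    for w in workers:
        w.start()
    for p in ["What is MPS?", "Summarize Splitwiser."]:
        prompts.put(p)
    prompts.put(None)
    for w in workers:
        w.join()
```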
-
DeepSeek-Prover-V2: Advancing Formal Mathematical Reasoning via Reinforcement Learning for Subgoal Decomposition
This paper presents DeepSeek-Prover-V2, which unifies informal and formal mathematical reasoning through subgoal decomposition and reinforcement learning, substantially improving neural theorem proving and achieving state-of-the-art results on multiple benchmarks.
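As a toy picture of what subgoal decomposition looks like in a formal setting, the Lean snippet below breaks a simple arithmetic goal into `have` subgoals that can be closed independently; the lemma and tactics are illustrative and not drawn from DeepSeek-Prover-V2.

```lean
-- Decompose the main goal into two subgoals, each discharged on its own,
-- then combine them to finish the proof.
example (a b : Nat) : a + b + a = 2 * a + b := by
  have h1 : a + b + a = a + a + b := by
    omega
  have h2 : a + a = 2 * a := by
    omega
  rw [h1, h2]
```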
-
Beyond Public Access in LLM Pre-Training Data
Using the DE-COP membership inference attack on a dataset of O'Reilly books, this paper shows that OpenAI's GPT-4o was likely trained on non-public copyrighted content, highlighting the growing use of non-public data in LLM pre-training and the need for stronger transparency and licensing frameworks.
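For intuition on how a DE-COP-style check works, the sketch below quizzes a model on which of several passages appears verbatim in a book and counts how often it answers correctly; the prompt wording, option count, and use of the `openai` client are assumptions rather than the paper's exact protocol.

```python
import random
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def decop_trial(verbatim: str, paraphrases: list[str], model: str = "gpt-4o") -> bool:
    """Return True if the model picks the verbatim passage out of the options."""
    options = paraphrases + [verbatim]
    random.shuffle(options)
    correct = chr(ord("A") + options.index(verbatim))
    prompt = "Which of the following passages appears verbatim in the book?\n"
    prompt += "\n".join(f"{chr(ord('A') + i)}. {opt}" for i, opt in enumerate(options))
    prompt += "\nAnswer with a single letter."
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=1,
        temperature=0,
    )
    return resp.choices[0].message.content.strip().upper().startswith(correct)


# Aggregating decop_trial over many excerpts from one book yields an accuracy;
# accuracy well above chance (1 / number of options) suggests the text was seen
# during training.
```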