Tag: Pre-training
All the articles with the tag "Pre-training".
-
Radio: Rate-Distortion Optimization for Large Language Model Compression
This paper introduces Radio, a rate-distortion optimization framework for LLM compression that iteratively optimizes bit depths and applies companding quantization after training, outperforming existing quantization methods in perplexity and downstream-task accuracy, particularly at lower bit depths.
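A minimal sketch of what post-training companding quantization can look like on a single weight tensor; the µ-law companding function, per-tensor scaling, and fixed bit width below are illustrative assumptions, not Radio's exact rate-distortion formulation.

```python
import math
import torch

def mu_law_compand(x: torch.Tensor, mu: float = 255.0) -> torch.Tensor:
    """Compress the dynamic range of normalized weights (mu-law style)."""
    return torch.sign(x) * torch.log1p(mu * x.abs()) / math.log1p(mu)

def mu_law_expand(y: torch.Tensor, mu: float = 255.0) -> torch.Tensor:
    """Inverse of mu_law_compand."""
    return torch.sign(y) * ((1.0 + mu) ** y.abs() - 1.0) / mu

def compand_quantize(w: torch.Tensor, bits: int) -> torch.Tensor:
    """Normalize -> compand -> uniform quantize -> expand -> rescale."""
    scale = w.abs().max().clamp(min=1e-12)
    y = mu_law_compand(w / scale)            # small weights get more quantization levels
    levels = 2 ** (bits - 1) - 1
    y_q = torch.round(y * levels) / levels   # uniform grid in the companded domain
    return mu_law_expand(y_q) * scale

# Example: quantize one layer's weight at 4 bits and measure distortion.
w = torch.randn(1024, 1024)
w_q = compand_quantize(w, bits=4)
print("MSE distortion:", torch.mean((w - w_q) ** 2).item())
```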
-
Beyond Next Token Prediction: Patch-Level Training for Large Language Models
This paper proposes patch-level training, which aggregates multiple tokens into high-information-density patches and trains large language models in stages, preserving or even slightly improving model performance while halving training cost.
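A rough sketch of the patch-level stage, assuming patches are formed by mean-pooling token embeddings and the model is trained to predict every token in the next patch; `backbone`, `lm_head`, and `embed` are hypothetical modules, and the exact aggregation and loss may differ from the paper's.

```python
import torch
import torch.nn.functional as F

def tokens_to_patches(token_embeds: torch.Tensor, patch_size: int) -> torch.Tensor:
    """Average every `patch_size` consecutive token embeddings into one patch embedding.

    token_embeds: (batch, seq_len, dim) with seq_len divisible by patch_size.
    """
    b, t, d = token_embeds.shape
    return token_embeds.view(b, t // patch_size, patch_size, d).mean(dim=2)

def next_patch_loss(backbone, lm_head, embed, token_ids, patch_size=4):
    """Patch-level stage: read patch t, predict the tokens of patch t+1."""
    b, t = token_ids.shape
    patches = tokens_to_patches(embed(token_ids), patch_size)         # (b, n_patches, d)
    hidden = backbone(patches[:, :-1])                                # contexts up to patch t
    logits = lm_head(hidden)                                          # (b, n_patches - 1, vocab)
    targets = token_ids.view(b, t // patch_size, patch_size)[:, 1:]   # tokens of the next patch
    loss = 0.0
    for j in range(patch_size):                                       # one CE term per token slot
        loss = loss + F.cross_entropy(logits.transpose(1, 2), targets[..., j])
    return loss / patch_size
```

A second stage would then continue with ordinary next-token training so the model recovers token-level behavior.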
-
Does Self-Attention Need Separate Weights in Transformers?
This paper introduces a shared-weight self-attention mechanism for transformers that uses a single weight matrix with diagonal scaling, reducing attention-block parameters by 66.53%; it achieves competitive performance on GLUE and improved noise robustness, while slightly underperforming standard BERT on SQuAD.
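A minimal sketch of the idea, assuming one linear projection reused for queries, keys, and values, differentiated only by learned per-dimension (diagonal) scaling vectors; the paper's exact parameterization (where the scaling is applied, bias terms, etc.) may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedWeightSelfAttention(nn.Module):
    """Self-attention where Q, K, V reuse a single projection matrix."""

    def __init__(self, dim: int, n_heads: int):
        super().__init__()
        self.n_heads = n_heads
        self.head_dim = dim // n_heads
        self.shared_proj = nn.Linear(dim, dim, bias=False)  # one W instead of W_q, W_k, W_v
        self.q_scale = nn.Parameter(torch.ones(dim))        # diagonal scaling for queries
        self.k_scale = nn.Parameter(torch.ones(dim))        # diagonal scaling for keys
        self.v_scale = nn.Parameter(torch.ones(dim))        # diagonal scaling for values
        self.out_proj = nn.Linear(dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        h = self.shared_proj(x)                             # single matmul shared by q, k, v
        q = (h * self.q_scale).view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        k = (h * self.k_scale).view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        v = (h * self.v_scale).view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        attn = F.scaled_dot_product_attention(q, k, v)
        return self.out_proj(attn.transpose(1, 2).reshape(b, t, d))
```

Since the three diagonal vectors add only 3·dim parameters on top of one dim×dim matrix, the projection cost drops to roughly a third of the standard three-matrix formulation.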
-
Scaling Context, Not Parameters: Training a Compact 7B Language Model for Efficient Long-Context Processing
This paper presents MegaBeam-Mistral-7B, which uses progressive training and system-level optimizations to enable a 7B-parameter model to process 512K-token contexts, matching much larger models on several benchmarks, though its multi-fact reasoning still needs improvement.
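A minimal sketch of what a progressive long-context curriculum can look like; the stage lengths, step counts, RoPE base values, and the `train_step` / `data_loader_for` helpers below are placeholders, not MegaBeam's actual recipe.

```python
# Assumed staged schedule: each stage trains at a longer sequence length
# with a larger RoPE base so positions remain distinguishable.
STAGES = [
    {"seq_len": 32_768,  "rope_theta": 1e6,  "steps": 2_000},
    {"seq_len": 131_072, "rope_theta": 5e6,  "steps": 1_000},
    {"seq_len": 524_288, "rope_theta": 25e6, "steps": 500},
]

def run_curriculum(model, train_step, data_loader_for):
    """Run each stage with its own sequence length and RoPE base (hypothetical helpers)."""
    for stage in STAGES:
        model.config.rope_theta = stage["rope_theta"]   # assumes an HF-style config object
        loader = data_loader_for(stage["seq_len"])      # packs documents to this length
        for _, batch in zip(range(stage["steps"]), loader):
            train_step(model, batch)
```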
-
LSAQ: Layer-Specific Adaptive Quantization for Large Language Model Deployment
LSAQ introduces a Layer-Specific Adaptive Quantization system for LLMs that uses Jaccard similarity to assess layer importance and dynamically adjusts quantization precision to the resources of the target edge device, achieving higher zero-shot accuracy and lower perplexity than baseline methods while enabling efficient deployment.
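A rough sketch of Jaccard-based layer importance and precision assignment; projecting layer inputs and outputs through the LM head to form top-k token sets, and the fixed high/low bit split, are illustrative assumptions rather than LSAQ's exact procedure.

```python
import torch

def topk_token_set(hidden: torch.Tensor, lm_head: torch.nn.Linear, k: int = 50) -> set:
    """Pool a layer's hidden states over positions, project to vocab, keep top-k token ids."""
    logits = lm_head(hidden.mean(dim=0))
    return set(torch.topk(logits, k).indices.tolist())

def layer_importance(layer_in, layer_out, lm_head, k: int = 50) -> float:
    """Importance = 1 - Jaccard(top-k tokens before layer, top-k tokens after layer).

    A layer that changes the predicted token set more is treated as more important.
    """
    a = topk_token_set(layer_in, lm_head, k)
    b = topk_token_set(layer_out, lm_head, k)
    return 1.0 - len(a & b) / len(a | b)

def assign_bits(importances: list[float], high_bits: int = 8, low_bits: int = 4,
                budget_ratio: float = 0.5) -> list[int]:
    """Give the most important layers higher precision until the memory budget is used."""
    order = sorted(range(len(importances)), key=lambda i: importances[i], reverse=True)
    bits = [low_bits] * len(importances)
    for i in order[:int(len(importances) * budget_ratio)]:
        bits[i] = high_bits
    return bits
```

In practice `budget_ratio` would be derived from the memory available on the target edge device rather than fixed in advance.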