Tag: Large Language Model
All articles tagged "Large Language Model".
-
LSAQ: Layer-Specific Adaptive Quantization for Large Language Model Deployment
LSAQ introduces a Layer-Specific Adaptive Quantization system for LLMs that uses Jaccard similarity to assess layer importance and dynamically adjusts quantization precision to match the resources of the target edge device, achieving higher zero-shot accuracy and lower perplexity than baseline methods while enabling efficient deployment.
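A minimal sketch of the layer-scoring idea, assuming importance is measured as one minus the Jaccard similarity between the top-k token sets decoded from a layer's input and output hidden states through the LM head; `layer_importance`, the tensor shapes, and the toy usage are illustrative, not the paper's exact procedure:

```python
import torch

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity |A ∩ B| / |A ∪ B| of two token-id sets."""
    return len(a & b) / len(a | b)

def layer_importance(h_in: torch.Tensor, h_out: torch.Tensor,
                     lm_head: torch.nn.Linear, k: int = 10) -> float:
    """Score one transformer layer: decode its input and output hidden
    states through the LM head and compare the top-k token sets.
    Low overlap means the layer changes the prediction a lot -> important."""
    top_in = set(lm_head(h_in).topk(k).indices.tolist())
    top_out = set(lm_head(h_out).topk(k).indices.tolist())
    return 1.0 - jaccard(top_in, top_out)

# Toy usage: random hidden states and a random LM head.
d_model, vocab = 16, 100
head = torch.nn.Linear(d_model, vocab, bias=False)
print(layer_importance(torch.randn(d_model), torch.randn(d_model), head, k=5))
```

High-scoring layers would then keep higher precision, while low-scoring layers are quantized more aggressively to fit the device's memory budget.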
-
Elastic Weight Consolidation for Full-Parameter Continual Pre-Training of Gemma2
This paper demonstrates that Elastic Weight Consolidation (EWC), applied during full-parameter autoregressive continual pre-training of the Gemma2 2B LLM on CulturaX data, mitigates catastrophic forgetting on English tasks while improving performance on Lithuanian-language benchmarks.
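The EWC regularizer itself is standard: a quadratic penalty that anchors each parameter to its pre-trained value, weighted by a diagonal Fisher estimate computed on the original-domain data. A minimal PyTorch sketch, where `fisher`, `old_params`, and `lam` are illustrative names rather than the paper's code:

```python
import torch

def ewc_penalty(model: torch.nn.Module, fisher: dict,
                old_params: dict, lam: float) -> torch.Tensor:
    """(lam / 2) * sum_i F_i * (theta_i - theta*_i)^2, where F_i is the
    diagonal Fisher information (estimated, e.g., from squared gradients
    on the original English data) and theta*_i the pre-trained weights."""
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for name, p in model.named_parameters():
        if name in fisher:
            penalty = penalty + (fisher[name] * (p - old_params[name]).pow(2)).sum()
    return 0.5 * lam * penalty

# During continual pre-training on a new-language batch:
#   loss = lm_loss + ewc_penalty(model, fisher, old_params, lam)
```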
-
Accelerating Large Language Model Reasoning via Speculative Search
Speculative Search (SpecSearch) accelerates LLM reasoning by up to 2.12× through a bi-level speculative thought generator in which small and large models collaborate, while a quality-preserving rejection mechanism keeps reasoning quality comparable.
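A sketch of the accept-or-regenerate loop at the thought level, under the assumption that a cheap draft is kept whenever an estimated quality score clears a threshold; `small_lm`, `large_lm`, `evaluator`, and `threshold` are illustrative stand-ins, not the paper's interfaces:

```python
from typing import Callable

def speculative_thought(small_lm: Callable[[str], str],
                        large_lm: Callable[[str], str],
                        evaluator: Callable[[str, str], float],
                        prompt: str, threshold: float) -> str:
    """One speculative step: draft a thought with the small model and
    keep it only if its estimated quality clears the threshold;
    otherwise fall back to the large model."""
    draft = small_lm(prompt)
    if evaluator(prompt, draft) >= threshold:
        return draft         # accepted: a large-model call is saved
    return large_lm(prompt)  # rejected: quality-preserving fallback
```

The speedup comes from the fraction of thoughts the small model can draft acceptably; the rejection test is what bounds the loss in reasoning quality.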
-
Reinforced MLLM: A Survey on RL-Based Reasoning in Multimodal Large Language Models
This survey systematically reviews progress on reinforcement-learning-based reasoning in multimodal large language models (MLLMs), analyzing algorithm design, reward mechanisms, and applications; it highlights challenges such as cross-modal reasoning and reward sparsity, and outlines future directions including hierarchical rewards and interactive RL.
-
Efficient Reasoning for LLMs through Speculative Chain-of-Thought
This paper proposes the Speculative Chain-of-Thought (SCoT) framework, in which a lightweight draft model generates multiple chain-of-thought drafts in parallel and a fine-tuned target model either selects the best draft or decides to rethink, substantially reducing LLM inference latency while maintaining accuracy close to that of the large model.
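A sketch of that select-or-rethink control flow, assuming the target model exposes a judgment step that returns a draft index or a rethink signal; `draft_lm`, `target_lm`, and `select` are illustrative stand-ins, not the paper's API:

```python
from typing import Callable, List

def speculative_cot(draft_lm: Callable[[str], str],
                    target_lm: Callable[[str], str],
                    select: Callable[[str, List[str]], int],
                    question: str, n_drafts: int = 4) -> str:
    """Sample several chain-of-thought drafts from the small model, then
    let the target model pick one or answer from scratch. `select`
    returns a draft index, or -1 to signal a rethink."""
    drafts = [draft_lm(question) for _ in range(n_drafts)]  # parallel in practice
    choice = select(question, drafts)
    if choice == -1:             # no draft is good enough: rethink
        return target_lm(question)
    return drafts[choice]        # reuse the chosen cheap draft
```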