Tag: Large Language Model
All the articles with the tag "Large Language Model".
-
Scaling Context, Not Parameters: Training a Compact 7B Language Model for Efficient Long-Context Processing
This paper presents MegaBeam-Mistral-7B, which uses progressive training and system-level optimization to let a 7B-parameter model handle 512K-token contexts, matching much larger models on several benchmarks, though its multi-fact reasoning still needs improvement.
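A minimal sketch of what such a progressive context-extension schedule could look like; the stage lengths, step counts, and RoPE theta values below are assumptions for illustration, not the published MegaBeam recipe.

```python
# Illustrative progressive long-context training schedule
# (stage lengths, step counts, and RoPE theta values are assumptions,
# not MegaBeam-Mistral-7B's published configuration).
STAGES = [
    # (max sequence length, training steps, RoPE theta)
    (32_768,   2_000, 1e6),
    (131_072,  1_000, 5e6),
    (262_144,    500, 1e7),
    (524_288,    250, 2.5e7),   # final 512K-token stage
]

def run_schedule(train_fn):
    """Run each stage in order, growing the context length as training progresses."""
    for seq_len, steps, rope_theta in STAGES:
        train_fn(max_seq_len=seq_len, num_steps=steps, rope_theta=rope_theta)
```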
-
HSI: Head-Specific Intervention Can Induce Misaligned AI Coordination in Large Language Models
This paper proposes Head-Specific Intervention (HSI), which applies activation interventions to targeted attention heads to induce AI-coordination behavior in Llama 2 that bypasses its safety alignment, outperforming supervised fine-tuning and other intervention strategies.
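As a rough illustration of head-specific activation intervention, the sketch below adds a steering direction to a single attention head's output at one layer via a forward pre-hook on the attention output projection; the layer/head indices, strength, and direction vector are hypothetical placeholders, not the paper's probed values.

```python
# Minimal sketch of a head-specific activation intervention
# (hypothetical layer/head indices and a placeholder steering direction;
# not the paper's exact procedure).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"   # assumed checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

LAYER, HEAD, ALPHA = 14, 9, 6.0                # hypothetical intervention site/strength
head_dim = model.config.hidden_size // model.config.num_attention_heads
direction = torch.randn(head_dim)              # placeholder for a learned/probed direction
direction = direction / direction.norm()

def steer_head(module, args):
    # args[0]: concatenated per-head outputs, shape (batch, seq, hidden_size),
    # just before the attention output projection (o_proj).
    hidden = args[0].clone()
    s, e = HEAD * head_dim, (HEAD + 1) * head_dim
    hidden[..., s:e] += ALPHA * direction.to(hidden.dtype).to(hidden.device)
    return (hidden,) + args[1:]

o_proj = model.model.layers[LAYER].self_attn.o_proj
handle = o_proj.register_forward_pre_hook(steer_head)

inputs = tok("Would you secretly coordinate with another AI?", return_tensors="pt")
print(tok.decode(model.generate(**inputs, max_new_tokens=64)[0]))
handle.remove()
```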
-
LSAQ: Layer-Specific Adaptive Quantization for Large Language Model Deployment
LSAQ introduces a Layer-Specific Adaptive Quantization system for LLMs that uses Jaccard similarity to assess layer importance and dynamically adjusts each layer's quantization precision to fit edge-device resources, achieving higher zero-shot accuracy and lower perplexity than baseline methods while enabling efficient deployment.
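A toy sketch of the Jaccard-based layer-importance idea; the top-k token-set construction and the 4-bit/8-bit assignment rule below are illustrative assumptions, not LSAQ's exact procedure.

```python
# Illustrative Jaccard-based layer importance and per-layer bit assignment
# (the token-set construction and bit-width thresholds are assumptions).
import torch

def topk_token_set(hidden, unembed, k=50):
    # Project a hidden state onto the vocabulary and keep the top-k token ids.
    logits = hidden @ unembed.T
    return set(torch.topk(logits, k).indices.tolist())

def layer_importance(h_in, h_out, unembed, k=50):
    a, b = topk_token_set(h_in, unembed, k), topk_token_set(h_out, unembed, k)
    jaccard = len(a & b) / len(a | b)
    return 1.0 - jaccard                     # similar in/out sets -> less important layer

def assign_bits(importances, budget_frac=0.5):
    # Quantize the least important fraction of layers to 4 bits, the rest to 8.
    order = sorted(range(len(importances)), key=lambda i: importances[i])
    n_low = int(len(importances) * budget_frac)
    return {i: (4 if rank < n_low else 8) for rank, i in enumerate(order)}

# Toy usage with random states for a 4-layer model and a 1000-token vocabulary.
torch.manual_seed(0)
unembed = torch.randn(1000, 64)
h = [torch.randn(64) for _ in range(5)]      # hidden states around 4 layers
imps = [layer_importance(h[i], h[i + 1], unembed) for i in range(4)]
print(assign_bits(imps))
```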
-
Elastic Weight Consolidation for Full-Parameter Continual Pre-Training of Gemma2
This paper demonstrates that Elastic Weight Consolidation (EWC) applied to full-parameter continual pre-training of the Gemma2 2B LLM mitigates catastrophic forgetting on English tasks while improving performance on Lithuanian language benchmarks during autoregressive pre-training on CulturaX data.
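For context, the standard EWC regularizer adds a penalty of the form λ Σᵢ Fᵢ(θᵢ − θᵢ*)² to the new-task loss; the sketch below is generic EWC with an assumed Fisher-estimation loop and penalty weight, not the paper's specific setup.

```python
# Generic EWC sketch for continual pre-training
# (Fisher estimation loop and lambda are assumptions, not the paper's values).
import torch

def estimate_fisher(model, data_loader, loss_fn, n_batches=100):
    # Diagonal Fisher: average squared gradients of the old-task loss.
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters() if p.requires_grad}
    for i, batch in enumerate(data_loader):
        if i >= n_batches:
            break
        model.zero_grad()
        loss_fn(model, batch).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2 / n_batches
    return fisher

def ewc_penalty(model, fisher, old_params, lam=1e3):
    # lam * sum_i F_i * (theta_i - theta_i*)^2, summed over all parameters.
    penalty = 0.0
    for n, p in model.named_parameters():
        if n in fisher:
            penalty = penalty + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return lam * penalty

# Inside the new-language training loop:
#   loss = lm_loss(model, batch) + ewc_penalty(model, fisher, old_params)
```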
-
Accelerating Large Language Model Reasoning via Speculative Search
Speculative Search (SpecSearch) accelerates LLM reasoning by up to 2.12× with a bi-level speculative thought generator in which a small model and a large model collaborate, maintaining comparable reasoning quality via a quality-preserving rejection mechanism.
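A schematic sketch of the draft-then-reject pattern: a small model proposes candidate reasoning thoughts and a quality check decides whether to keep each draft or fall back to the large model; the function names and threshold rule are illustrative assumptions, not the paper's algorithm.

```python
# Schematic speculative thought generation with quality-preserving rejection
# (function names and threshold rule are illustrative, not the paper's method).
from typing import Callable, List

def speculative_search_step(
    state: str,
    draft_thoughts: Callable[[str, int], List[str]],   # small model proposes thoughts
    score_thought: Callable[[str, str], float],        # evaluator scores each thought
    refine_thought: Callable[[str], str],              # large model rewrites a rejected draft
    threshold: float = 0.5,
    n_drafts: int = 4,
) -> List[str]:
    """Return candidate next thoughts: cheap drafts when good enough,
    large-model thoughts otherwise."""
    accepted = []
    for draft in draft_thoughts(state, n_drafts):
        if score_thought(state, draft) >= threshold:
            accepted.append(draft)                      # keep the small model's thought
        else:
            accepted.append(refine_thought(state))      # fall back to the large model
    return accepted
```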