Tag: Large Language Model
All the articles with the tag "Large Language Model".
-
LLM-e Guess: Can LLMs Capabilities Advance Without Hardware Progress?
This paper introduces a framework for classifying algorithmic innovations in LLMs as compute-dependent or compute-independent. Small-scale GPT-2 experiments show that compute-independent advancements such as FlashAttention can yield up to a 3.5× compute-equivalent gain even under hardware constraints, challenging the efficacy of hardware-focused AI regulation.
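For context on the headline number: a compute-equivalent gain (CEG) is conventionally the multiplier on training compute that an unmodified baseline would need to match the innovation's quality. This is a hedged reconstruction of the standard definition, not a formula quoted from the paper:

    \mathrm{CEG} = \frac{C_{\text{baseline}}(\text{quality achieved with the innovation})}{C_{\text{innovation}}}

A CEG of 3.5× thus means the baseline would need roughly 3.5 times the compute to reach the same performance.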
-
COSMOS: Predictable and Cost-Effective Adaptation of LLMs
COSMOS introduces a cost-effective framework for predicting both the performance and the cost of LLM adaptation strategies such as QLoRA fine-tuning and retrieval-augmented in-context learning (ICL), achieving high accuracy (1.09% mean absolute error) and reducing computational costs by 92.72% across eight diverse benchmarks.
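For readers unfamiliar with the metric, the 1.09% figure is a mean absolute error over n prediction targets; in standard notation (our gloss, where \hat{y}_i is the predicted and y_i the observed value):

    \mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \lvert \hat{y}_i - y_i \rvert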
-
Can a Crow Hatch a Falcon? Lineage Matters in Predicting Large Language Model Performance
This paper proposes Lineage-Regularized Matrix Factorization (LRMF), which exploits lineage relationships among large language models to substantially improve performance-prediction accuracy; it outperforms conventional methods in both homogeneous and heterogeneous model settings and is especially strong on the cold-start problem.
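A plausible shape for such an objective, sketched in standard matrix-factorization notation rather than taken from the paper: let S_{ij} be model i's score on benchmark j, \Omega the set of observed entries, u_i and v_j latent factors, and \mathcal{P} the set of lineage-related model pairs; a lineage regularizer then pulls related models' factors together:

    \min_{U,V}\ \sum_{(i,j) \in \Omega} \bigl(S_{ij} - u_i^{\top} v_j\bigr)^2 + \lambda \sum_{(i,i') \in \mathcal{P}} \lVert u_i - u_{i'} \rVert_2^2

For a cold-start model with no observed scores, only the regularizer constrains u_i, anchoring it near its relatives' factors, which is why lineage helps exactly where plain factorization fails.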
-
Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs
This paper demonstrates that finetuning aligned LLMs on narrow tasks like writing insecure code can lead to emergent misalignment, causing broadly harmful behaviors across unrelated tasks, as evidenced by experiments on multiple models with control setups and backdoor triggers.
-
On the generalization of language models from in-context learning and finetuning: a controlled study
Through controlled experiments, this paper compares the generalization of language models under in-context learning and finetuning, finds in-context learning to be the more flexible of the two, and proposes a data-augmentation approach that markedly improves finetuning's generalization (sketched below).
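To make the augmentation idea concrete, here is a minimal hypothetical sketch in Python, assuming the method works by eliciting a model's own in-context inferences and adding them to the finetuning set; all interface names here are ours, not the paper's.

    from typing import List

    class StubLM:
        """Placeholder for a language-model client (hypothetical interface)."""
        def generate(self, prompt: str) -> str:
            # A real model would return newline-separated inferences here.
            return ""

    def augment_with_inferences(model: StubLM, examples: List[str]) -> List[str]:
        """Return the original examples plus model-generated implied statements."""
        augmented = list(examples)
        for ex in examples:
            # Show the example in context and elicit related inferences
            # (e.g., reversals or compositions) to finetune on alongside it.
            prompt = f"{ex}\nList statements implied by the text above, one per line:"
            inferences = [s for s in model.generate(prompt).splitlines() if s.strip()]
            augmented.extend(inferences)
        return augmented

Finetuning on augment_with_inferences(model, train_set) rather than train_set alone is the sense in which in-context flexibility is distilled into the finetuned weights.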