Posts
All the articles I've posted.
-
Exploring Effective Distillation of Self-Supervised Speech Models for Automatic Speech Recognition
This paper explores effective distillation of HuBERT for ASR by comparing student model structures, introducing a discriminative loss for improved low-resource performance, and proposing front-end distillation from waveform to Fbank features, achieving a 17% parameter reduction and doubled inference speed with only minor performance degradation.
-
Quantum-Enhanced LLM Efficient Fine Tuning
This paper proposes Quantum Tensor Hybrid Adaptation (QTHA), which integrates quantum neural networks with tensor networks to achieve parameter-efficient fine-tuning of LLMs, significantly reducing the number of trainable parameters while improving performance, and laying a foundation for quantum-enhanced artificial intelligence.
-
Adversarial Attacks on LLM-as-a-Judge Systems: Insights from Prompt Injections
This paper exposes prompt injection vulnerabilities in LLM-as-a-judge systems by proposing an attack framework and conducting experimental evaluations, and recommends strategies such as multi-model committees to improve robustness.
-
Rethinking Invariance in In-context Learning
This paper introduces Invariant In-Context Learning (InvICL), a novel ICL method that achieves permutation invariance, information non-leakage, and context interdependence through a leave-one-out encoding and a parallel implementation, outperforming both invariant and non-invariant baselines in generalization and performance across synthetic and real-world tasks.
-
Dynamic Parametric Retrieval Augmented Generation for Test-time Knowledge Enhancement
This paper proposes DyPRAG, a dynamic parametric RAG framework that trains a lightweight parameter translator to convert retrieved documents into parametric knowledge at test time, significantly reducing cost, improving generalization, and mitigating RAG hallucination.