Tag: Contrastive Learning
All articles tagged "Contrastive Learning".
-
Pre-training vs. Fine-tuning: A Reproducibility Study on Dense Retrieval Knowledge Acquisition
Using linear probing and neuron activation analysis, this paper reproduces and extends research on the roles of pre-training and fine-tuning in knowledge acquisition for dense retrieval models. It finds that pre-trained knowledge dominates retrieval performance in DPR models and that fine-tuning disperses knowledge, but this conclusion does not hold across other architectures (e.g., Contriever, RepLlama) and representation strategies.
-
Language Models are Universal Embedders
Building on multilingual decoder models such as BLOOM, this paper proposes a method for constructing universal embedders, using contrastive learning and parameter-efficient fine-tuning to produce high-quality embeddings across languages and tasks. Experiments show strong potential and generalization ability in multilingual and multi-task settings.
-
SoftCoT++: Test-Time Scaling with Soft Chain-of-Thought Reasoning
SoftCoT++ enables test-time scaling in a continuous latent space by introducing diverse initial tokens and contrastive learning, significantly improving the performance of large language models on multiple reasoning tasks and exhibiting a synergistic effect with conventional discrete-space scaling methods.
-
Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs
This paper proposes the UniME framework, which uses textual discriminative knowledge distillation and hard-negative-enhanced instruction tuning to learn universal multimodal embeddings with multimodal large language models, improving discriminative power and compositional ability on downstream tasks.
-
Style Feature Extraction Using Contrastive Conditioned Variational Autoencoders with Mutual Information Constraints
This paper proposes a method that combines contrastive learning with conditional variational autoencoders and mutual information constraints to extract style features from unlabeled data. It is effective on simple datasets such as MNIST, but struggles on natural-image datasets due to the limitations of its augmentations and its reliance on qualitative evaluation.