arXiv:2506.17781v1 Announce Type: cross
Abstract: Dense embeddings are fundamental to modern machine learning systems, powering Retrieval-Augmented Generation (RAG), information retrieval, and representation learning. While instruction-conditioning has become the dominant approach to embedding specialization, its direct application to low-capacity models imposes fundamental representational constraints that limit the performance gains derived from specialization. In this paper, we analyze these limitations and introduce the Mixture of Task Experts (MoTE) transformer block, which leverages task-specialized parameters trained with Task-Aware Contrastive Learning (TaCL) to enhance the model's ability to generate specialized embeddings. Empirical results show that MoTE achieves $64\%$ higher performance gains on retrieval datasets ($+3.27 \rightarrow +5.21$) and $43\%$ higher performance gains across all datasets ($+1.81 \rightarrow +2.60$). Critically, these gains are achieved without altering instructions, training data, inference time, or the number of active parameters.
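
The abstract does not spell out the block's internals, so the snippet below is only a minimal sketch of a MoTE-style transformer block, assuming hard routing by a task label over per-task feed-forward experts inside an otherwise standard block; class and argument names (`MoTEBlock`, `task_id`, `d_model`, etc.) are illustrative, not the paper's API.

```python
# Minimal PyTorch sketch of a Mixture-of-Task-Experts style block (assumption:
# one feed-forward expert per task, selected by a task identifier).
import torch
import torch.nn as nn


class MoTEBlock(nn.Module):
    """Transformer block whose feed-forward sublayer is task-specialized.

    Only the expert matching the given task runs, so the number of active
    parameters per forward pass equals that of a single dense block.
    """

    def __init__(self, d_model: int, n_heads: int, d_ff: int, n_tasks: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        # One feed-forward expert per task (e.g. retrieval, clustering, STS).
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_ff),
                nn.GELU(),
                nn.Linear(d_ff, d_model),
            )
            for _ in range(n_tasks)
        )

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        attn_out, _ = self.attn(x, x, x)
        x = self.norm1(x + attn_out)
        # Hard routing on the task label: only one expert is active.
        x = self.norm2(x + self.experts[task_id](x))
        return x


if __name__ == "__main__":
    block = MoTEBlock(d_model=384, n_heads=6, d_ff=1536, n_tasks=4)
    tokens = torch.randn(2, 16, 384)             # (batch, seq_len, d_model)
    retrieval_embeds = block(tokens, task_id=0)  # route through one task expert
    print(retrieval_embeds.shape)                # torch.Size([2, 16, 384])
```

Because routing is done on the task label rather than learned per token, this keeps inference cost and active parameter count identical to a plain dense block, which matches the abstract's claim of unchanged inference time and active parameters.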