Mixture of Experts Residual Learning for Hamming Hashing
- Indexed by: Journal paper
- First Author: 徐锦宇
- Corresponding Author: 解庆
- Co-authors: 马艳春, 李佳琛, 刘遹菡
- Journal: Neural Processing Letters
- Included Journals: SCI
- Discipline: Engineering
- First-Level Discipline: Computer Science and Technology
- Document Type: J
- Key Words: Image retrieval · Hamming hashing · Mixture of experts · Attentional mechanism
- Abstract: Image retrieval has drawn growing attention due to the rapid emergence of images on the Internet. Owing to their high storage and computation efficiency, hashing methods are widely employed in approximate nearest neighbor search for large-scale image retrieval. Existing deep supervised hashing methods mainly utilize labels to analyze semantic similarity and preserve it in hash codes, but the collected label information may be incomplete or unreliable in the real world. Meanwhile, the features extracted from complex images by a single convolutional neural network (CNN) may fail to express the latent information or may miss certain semantic information. Thus, this work further exploits existing knowledge from higher-quality pre-trained semantic features and proposes mixture of experts residual learning for Hamming hashing (MoE-hash), which brings experts into image hashing in Hamming space. Specifically, we separately extract basic visual features with a CNN and different semantic features with existing expert models. To better preserve the semantic information in compact hash codes, we learn the hash codes with a mixture of experts (MoE) residual learning block trained under a max-margin t-distribution-based loss. Extensive experiments on MS-COCO and NUS-WIDE demonstrate that our model achieves clear improvements in retrieval performance and validate the role of mixture of experts residual learning in the image hashing task.
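
The abstract only names the building blocks, so below is a minimal PyTorch-style sketch of how an MoE residual head over expert features and a max-margin t-distribution-based pairwise loss could fit together. All names (`MoEResidualHash`, `max_margin_t_loss`) and parameters (`margin`, `alpha`) are illustrative assumptions, not the paper's actual implementation or exact objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEResidualHash(nn.Module):
    """Hypothetical MoE residual hashing head (not the paper's code).

    A gating network weights K pre-trained expert features, the gated
    mixture is added to the base CNN feature as a residual, and the fused
    feature is mapped to B-bit relaxed hash codes via tanh.
    """
    def __init__(self, feat_dim: int, num_experts: int, num_bits: int):
        super().__init__()
        self.gate = nn.Linear(feat_dim, num_experts)   # gating over experts
        self.proj = nn.ModuleList(
            [nn.Linear(feat_dim, feat_dim) for _ in range(num_experts)]
        )
        self.hash_layer = nn.Linear(feat_dim, num_bits)

    def forward(self, base_feat, expert_feats):
        # base_feat: (N, D) CNN feature; expert_feats: list of K (N, D) tensors
        weights = F.softmax(self.gate(base_feat), dim=-1)        # (N, K)
        mixed = sum(
            weights[:, k:k + 1] * self.proj[k](expert_feats[k])
            for k in range(len(expert_feats))
        )
        fused = base_feat + mixed                                # residual connection
        return torch.tanh(self.hash_layer(fused))                # codes in (-1, 1)


def max_margin_t_loss(codes_i, codes_j, sim, margin=2.0, alpha=1.0):
    """Illustrative max-margin t-distribution-based pairwise loss.

    For relaxed codes u, v of length B, the Hamming distance can be written
    as d = (B - u.v) / 2. Similar pairs (sim = 1) are penalized only when d
    exceeds the margin; dissimilar pairs (sim = 0) are penalized through a
    Student-t-style similarity that decays as d grows.
    """
    num_bits = codes_i.size(1)
    dist = 0.5 * (num_bits - (codes_i * codes_j).sum(dim=1))     # relaxed Hamming distance
    pos = sim * torch.log1p(torch.clamp(dist - margin, min=0.0) / alpha)
    neg = (1.0 - sim) * torch.log1p(alpha / (1.0 + dist))
    return (pos + neg).mean()
```

A training step under these assumptions would extract base and expert features for a batch, derive pairwise similarity labels from the image annotations, and minimize the loss over pairs of output codes; the sign of each code gives the final binary hash.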