Liu Tracy, Associate Professor


Mixture of Experts Residual Learning for Hamming Hashing

Release time:2024-06-13

  • Indexed by:Journal paper
  • First Author:Jinyu Xu (徐锦宇)
  • Corresponding Author:Qing Xie (解庆)
  • Co-authors:Yanchun Ma (马艳春), Jiachen Li (李佳琛), Yuhan Liu (刘遹菡)
  • Journal:Neural Processing Letters
  • Included Journals:SCI
  • Discipline:Engineering
  • First-Level Discipline:Computer Science and Technology
  • Document Type:J
  • Key Words:Image retrieval; Hamming hashing; Mixture of experts; Attention mechanism
  • Abstract:Image retrieval has drawn growing attention due to the rapid emergence of images on the Internet. Thanks to their high storage and computational efficiency, hashing methods are widely employed in approximate nearest neighbor search for large-scale image retrieval. Existing deep supervised hashing methods mainly utilize the labels to analyze the semantic similarity and preserve it in hash codes, but the collected label information may be incomplete or unreliable in real-world settings. Meanwhile, the features extracted from complex images by a single convolutional neural network (CNN) may fail to express the latent information, or may miss certain semantic information. Thus, this work further exploits the knowledge carried by higher-quality pre-trained semantic features, and proposes mixture-of-experts residual learning for Hamming hashing (MoE-hash), which brings expert models into image hashing in Hamming space. Specifically, we separately extract the basic visual features by a CNN, and different semantic features by existing expert models. To better preserve the semantic information in compact hash codes, we learn the hash codes by the mixture-of-experts (MoE) residual learning block with a max-margin t-distribution-based loss. Extensive experiments on MS-COCO and NUS-WIDE demonstrate that our model can achieve clear improvements in retrieval performance, and validate the role of mixture-of-experts residual learning in the image hashing task.
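
Since the abstract only outlines the architecture, the PyTorch sketch below illustrates one plausible reading of the MoE residual block and the max-margin t-distribution-based loss. All module names, dimensions, the gating design, and the exact loss form are assumptions made for illustration, not the paper's actual implementation.

```python
# A minimal sketch of the idea summarized in the abstract above.
# Names, dimensions, gating design, and loss form are all assumptions;
# the published MoE-hash implementation may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoEResidualHashing(nn.Module):
    """Fuse a CNN backbone feature with several pre-trained expert features
    through a gated (mixture-of-experts) residual block, then project the
    fused representation to a K-bit relaxed hash code."""

    def __init__(self, cnn_dim, expert_dims, hidden_dim=512, n_bits=64):
        super().__init__()
        # Project the CNN feature and every expert feature into a shared space.
        self.cnn_proj = nn.Linear(cnn_dim, hidden_dim)
        self.expert_proj = nn.ModuleList(
            nn.Linear(d, hidden_dim) for d in expert_dims)
        # Gating network: mixture weights over experts, conditioned on the CNN feature.
        self.gate = nn.Linear(cnn_dim, len(expert_dims))
        self.hash_layer = nn.Linear(hidden_dim, n_bits)

    def forward(self, cnn_feat, expert_feats):
        base = self.cnn_proj(cnn_feat)                                  # (B, H)
        experts = torch.stack(
            [p(f) for p, f in zip(self.expert_proj, expert_feats)], 1)  # (B, E, H)
        weights = F.softmax(self.gate(cnn_feat), dim=-1)                # (B, E)
        residual = (weights.unsqueeze(-1) * experts).sum(dim=1)         # (B, H)
        fused = base + residual            # residual fusion of expert knowledge
        return torch.tanh(self.hash_layer(fused))  # relaxed codes in (-1, 1)


def max_margin_t_loss(codes, labels, margin=2.0, nu=1.0):
    """One assumed form of a max-margin t-distribution-based pairwise loss:
    a Student-t kernel over (approximate) Hamming distances bounds the
    influence of far-apart pairs, and dissimilar pairs are only penalized
    inside the margin."""
    n_bits = codes.size(1)
    # Approximate Hamming distance between tanh-relaxed codes.
    dist = 0.5 * (n_bits - codes @ codes.t())                # (B, B)
    sim = (labels.float() @ labels.float().t() > 0).float()  # pairs sharing a label
    weight = (1.0 + dist / nu) ** (-(nu + 1.0) / 2.0)        # Student-t weighting
    pos = sim * weight * dist                       # pull similar pairs together
    neg = (1.0 - sim) * weight * F.relu(margin - dist)  # push dissimilar pairs apart
    return (pos + neg).mean()
```

In this sketch, `expert_feats` would be a list of feature tensors from frozen pre-trained expert models (hypothetical inputs), and at retrieval time the relaxed codes would be binarized with `torch.sign` before Hamming-distance search. With `nu=1` the positive term saturates as `dist/(1+dist)`, which is the usual robustness argument for t-distribution-based hashing losses.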