Wang Xinggang (王兴刚)

Personal Information

Professor   Doctoral Supervisor   Master's Supervisor

Gender: Male

Employment Status: Active

Affiliation: School of Electronic Information and Communications

Education: Doctoral Graduate

Degree: Doctor of Engineering

Alma Mater: Huazhong University of Science and Technology

Disciplines: Communication and Information Systems; Signal and Information Processing

Revisiting multiple instance neural networks


Paper Type: Journal Article

First Author: Wang, Xinggang

Corresponding Authors: Wang, Xinggang; Bai, Xiang

Co-authors: Liu, Wenyu; Tang, Peng; Yan, Yongluan

Journal: Pattern Recognition

DOI: 10.1016/j.patcog.2017.08.026

Publication Date: 2017-08-31

Impact Factor: 7.196

Highlights:
- We revisit the problem of solving MIL with neural networks (MINNs), which has been largely ignored by the current MIL research community; our experiments show that MINNs are highly effective and efficient.
- We propose a novel MI-Net that is centered on learning bag representations within the neural network in an end-to-end manner.
- Recent deep learning techniques, including dropout, deep supervision, and residual connections, are studied in MINNs; we find that deep supervision and residual connections are effective for MIL.
- The proposed MINNs achieve state-of-the-art or competitive performance on several MIL benchmarks and are extremely fast for both training and testing: predicting a bag takes only 0.0003 s, and training on MIL datasets takes a few seconds on a moderate CPU.

Abstract: Of late, neural networks and Multiple Instance Learning (MIL) have both been attractive topics in research areas related to Artificial Intelligence. Deep neural networks have achieved great success in supervised learning problems, and MIL, as a typical weakly supervised learning method, is effective for many applications in computer vision, biometrics, natural language processing, and so on. In this article, we revisit Multiple Instance Neural Networks (MINNs), i.e., neural networks that aim to solve MIL problems. MINNs perform MIL in an end-to-end manner: they take bags with a variable number of instances as input and directly output bag labels, and all of the parameters in a MINN can be optimized via back-propagation. Besides revisiting the earlier MINNs, we propose a new type of MINN that learns bag representations, in contrast to existing MINNs that focus on estimating instance labels. In addition, recent techniques developed in deep learning have been studied in MINNs; we find that deep supervision is effective for learning better bag representations. In the experiments, the proposed MINNs achieve state-of-the-art or competitive performance on several MIL benchmarks. Moreover, they are extremely fast for both testing and training: for example, it takes only 0.0003 s to predict a bag and a few seconds to train on MIL datasets on a moderate CPU.
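To illustrate the bag-representation idea described in the abstract, below is a minimal PyTorch sketch of a MINN in the spirit of MI-Net: each instance in a bag is embedded by shared fully connected layers, the instance features are pooled into a single bag representation, and a bag-level classifier produces the bag label, so the whole model trains end to end with back-propagation. The class name `MINetSketch`, the layer sizes, and the choice of max pooling are illustrative assumptions, not the exact architecture or configuration from the paper.

```python
# Minimal sketch of a bag-representation MINN (MI-Net style).
# Assumptions: PyTorch; hidden sizes and max pooling are illustrative,
# not the exact settings used in the paper.
import torch
import torch.nn as nn


class MINetSketch(nn.Module):
    def __init__(self, instance_dim: int, hidden_dim: int = 256):
        super().__init__()
        # Instance-level embedding, applied to every instance in the bag.
        self.embed = nn.Sequential(
            nn.Linear(instance_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        # Bag-level classifier on the pooled bag representation.
        self.classifier = nn.Linear(hidden_dim, 1)

    def forward(self, bag: torch.Tensor) -> torch.Tensor:
        # bag: (num_instances, instance_dim); bags may have variable size.
        instance_features = self.embed(bag)
        # MIL pooling: aggregate instance features into one bag representation.
        bag_representation, _ = instance_features.max(dim=0)
        # Bag-level probability; everything is trained end to end.
        return torch.sigmoid(self.classifier(bag_representation))


if __name__ == "__main__":
    model = MINetSketch(instance_dim=166)  # e.g. a Musk-style feature length
    bag = torch.randn(7, 166)              # one bag with 7 instances
    label = torch.tensor([1.0])            # bag-level label
    loss = nn.functional.binary_cross_entropy(model(bag), label)
    loss.backward()                        # gradients flow to all parameters
    print(loss.item())
```

Max pooling is only one possible MIL pooling function; mean or log-sum-exp pooling could be substituted, and the deep supervision mentioned in the highlights would correspond to attaching auxiliary bag classifiers (and losses) after intermediate embedding layers.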