CCNet: Criss-Cross Attention for Semantic Segmentation
Paper Type: Journal Article
First Author: Huang, Zilong
Corresponding Author: Wang, Xinggang
Co-authors: Huang, Thomas; Liu, Wenyu; Shi, Humphrey; Huang, Lichao; Wei, Yunchao
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence (IEEE TPAMI)
DOI: 10.1109/TPAMI.2020.3007032
Publication Date: 2020-06-03
Impact Factor: 17.861
Abstract: Contextual information is vital in visual understanding problems such as semantic segmentation and object detection. We propose a Criss-Cross Network (CCNet) for obtaining full-image contextual information in a very effective and efficient way. Concretely, for each pixel, a novel criss-cross attention module harvests the contextual information of all the pixels on its criss-cross path. With a further recurrent operation, each pixel can finally capture full-image dependencies. In addition, a category consistent loss is proposed to enforce the criss-cross attention module to produce more discriminative features. Overall, CCNet has the following merits: 1) GPU memory friendly: compared with the non-local block, the proposed recurrent criss-cross attention module requires 11x less GPU memory. 2) High computational efficiency: the recurrent criss-cross attention reduces the FLOPs of the non-local block by about 85%. 3) State-of-the-art performance: we conduct extensive experiments on the semantic segmentation benchmarks Cityscapes and ADE20K, the human parsing benchmark LIP, the instance segmentation benchmark COCO, and the video segmentation benchmark CamVid. In particular, our CCNet achieves mIoU scores of 81.9%, 45.76%, and 55.47% on the Cityscapes test set, the ADE20K validation set, and the LIP validation set, respectively, which are new state-of-the-art results. The source code is available at https://github.com/speedinghzl/CCNet.
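
To make the attention mechanism described above concrete, below is a minimal PyTorch sketch of a single criss-cross attention step, written from the abstract's description rather than taken from the linked repository; the class name CrissCrossAttention, the reduction=8 channel reduction, and the zero-initialized residual scale gamma are illustrative assumptions. Each pixel computes affinities only against the pixels in its own row and column, normalizes them jointly with a softmax, and aggregates the corresponding values.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CrissCrossAttention(nn.Module):
    # Sketch: each pixel attends to the pixels on its criss-cross path,
    # i.e. its own row and column.
    def __init__(self, in_channels, reduction=8):
        super().__init__()
        self.query = nn.Conv2d(in_channels, in_channels // reduction, 1)
        self.key = nn.Conv2d(in_channels, in_channels // reduction, 1)
        self.value = nn.Conv2d(in_channels, in_channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # residual scale, starts at 0

    def forward(self, x):
        b, _, h, w = x.shape
        q, k, v = self.query(x), self.key(x), self.value(x)

        # Column (vertical) energies: treat each column as a sequence of length h.
        q_v = q.permute(0, 3, 2, 1).reshape(b * w, h, -1)
        k_v = k.permute(0, 3, 2, 1).reshape(b * w, h, -1)
        e_v = torch.bmm(q_v, k_v.transpose(1, 2)).reshape(b, w, h, h)

        # Row (horizontal) energies: treat each row as a sequence of length w.
        q_h = q.permute(0, 2, 3, 1).reshape(b * h, w, -1)
        k_h = k.permute(0, 2, 3, 1).reshape(b * h, w, -1)
        e_h = torch.bmm(q_h, k_h.transpose(1, 2)).reshape(b, h, w, w)

        # Joint softmax over the whole criss-cross path. Note: the query pixel
        # appears on both paths here (h + w scores instead of h + w - 1); the
        # official implementation masks one copy with -inf.
        e_v = e_v.permute(0, 2, 1, 3)                        # (b, h, w, h)
        attn = F.softmax(torch.cat([e_v, e_h], dim=-1), dim=-1)
        a_v, a_h = attn[..., :h], attn[..., h:]

        # Aggregate values along each pixel's column and row.
        v_v = v.permute(0, 3, 2, 1).reshape(b * w, h, -1)
        out_v = torch.bmm(a_v.permute(0, 2, 1, 3).reshape(b * w, h, h), v_v)
        out_v = out_v.reshape(b, w, h, -1).permute(0, 3, 2, 1)

        v_h = v.permute(0, 2, 3, 1).reshape(b * h, w, -1)
        out_h = torch.bmm(a_h.reshape(b * h, w, w), v_h)
        out_h = out_h.reshape(b, h, w, -1).permute(0, 3, 1, 2)

        return self.gamma * (out_v + out_h) + x              # residual connection

# Recurrence (R = 2 in the paper): the first pass gives each pixel its row
# and column; the second reaches the rest of the image through them.
cca = CrissCrossAttention(in_channels=512)
feat = torch.randn(2, 512, 64, 96)
for _ in range(2):
    feat = cca(feat)

Two passes suffice for full-image dependencies because any pixel (i, j) can reach any other pixel (i', j') through the intermediate positions (i, j') or (i', j), each of which lies on a criss-cross path of both endpoints.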