Authors: Dong ZHANG, Liyan ZHANG, Jinhui TANG
Affiliations: [1] School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China; [2] College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
Source: Science China (Information Sciences), 2023, No. 4, pp. 189-207 (19 pages)
Funding: Supported by the National Key Research and Development Program of China (Grant No. 2018AAA0102002) and the National Natural Science Foundation of China (Grant Nos. 61925204, 62172212).
Abstract: The effectiveness of modeling contextual information has been empirically shown in numerous computer vision tasks. In this paper, we propose a simple yet efficient augmented fully convolutional network (AugFCN) that aggregates content- and position-based object contexts for semantic segmentation. Specifically, motivated by the observation that each deep feature map is a global, class-wise representation of the input, we first propose an augmented nonlocal interaction (AugNI) to aggregate the global content-based contexts through interactions among all feature maps. Compared to classical position-wise approaches, AugNI is more efficient. Moreover, to eliminate permutation equivariance while maintaining translation equivariance, a learnable relative position embedding branch is then incorporated into AugNI to capture the global position-based contexts. AugFCN is built on a fully convolutional network as the backbone, with AugNI deployed before the segmentation head network. Experimental results on two challenging benchmarks verify that AugFCN achieves a competitive 45.38% mIoU (standard mean intersection over union) on the ADE20K val set and 81.9% mIoU on the Cityscapes test set, with little computational overhead. Additionally, jointly implementing AugNI with existing context modeling schemes shows that AugFCN yields consistent segmentation improvements over state-of-the-art context modeling. We finally achieve a top performance of 45.43% mIoU on the ADE20K val set and 83.0% mIoU on the Cityscapes test set.
Keywords: semantic segmentation; context modeling; long-range dependencies; attention mechanism
Classification: TP183 (Automation and Computer Technology: Control Theory and Control Engineering); TP391.41 (Automation and Computer Technology: Control Science and Engineering)
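The abstract contrasts AugNI with classical position-wise nonlocal attention: instead of a (HW x HW) affinity over spatial positions, contexts are aggregated through interactions among whole feature maps (channels), which is cheaper when the channel count is much smaller than the number of positions. The paper's actual module is not reproduced here; the sketch below (function name, residual connection, and softmax scaling are all illustrative assumptions) only shows the general map-to-map interaction idea in NumPy.

```python
import numpy as np

def augni_content_context(x, eps=1e-9):
    """Illustrative content-based nonlocal interaction (not the paper's exact module).

    x: feature map of shape (C, H, W).
    Rather than the classical position-wise affinity of size (HW, HW),
    whole feature maps interact, giving a (C, C) affinity -- cheaper
    when C << HW, in line with the efficiency claim in the abstract.
    """
    C, H, W = x.shape
    flat = x.reshape(C, H * W)                    # each row: one global feature map
    logits = flat @ flat.T / np.sqrt(H * W)       # (C, C) map-to-map similarity
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability for softmax
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True) + eps # row-wise softmax over maps
    out = attn @ flat                             # aggregate content-based contexts
    return x + out.reshape(C, H, W)               # assumed residual connection
```

Note the cost contrast: the affinity here is C x C, computed in O(C^2 * HW), versus O((HW)^2 * C) for a position-wise nonlocal block, which is the efficiency argument the abstract makes. The position-based branch (learnable relative position embeddings) is omitted from this sketch.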