Authors: XIA Chenxing; CHEN Xinyu; SUN Yanguang; GE Bin [1]; FANG Xianjin [1,5]; GAO Xiuju; ZHANG Yan
Affiliations: [1] College of Computer Science and Engineering, Anhui University of Science and Technology, Huainan 232001, China; [2] Anhui Puhua Big Data Technology Co., Ltd., Huainan 230031, China; [3] Institute of Energy, Hefei Comprehensive National Science Center, Hefei 230601, China; [4] School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China; [5] Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei 230601, China; [6] School of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan 232001, China; [7] School of Electronics and Information Engineering, Anhui University, Hefei 230601, China
Source: Journal of Electronics & Information Technology, 2024, No. 7, pp. 2918-2931 (14 pages)
Funding: National Natural Science Foundation of China (62102003); Natural Science Foundation of Anhui Province (2108085QF258); Anhui Provincial Postdoctoral Foundation (2022B623); Huainan Science and Technology Plan Project (2023A316); Collaborative Innovation Project of Anhui Universities (GXXT-2021-006, GXXT-2022-038); Youth Science Research Foundation of Anhui University of Science and Technology, General Program (xjyb2020-04); Special Fund for Central Government Guiding Local Science and Technology Development (202107d06020001).
Abstract: Salient Object Detection (SOD) aims to identify and segment the visually salient objects in an image, and is one of the important research topics in computer vision and related fields. Existing SOD methods based on Fully Convolutional Networks (FCNs) have achieved good performance; however, salient objects in real-world scenes vary in type and size, so detecting salient objects accurately and segmenting them completely remains a major challenge. To this end, this paper proposes an SOD method that integrates multiple contexts and hybrid interaction, predicting salient objects efficiently through a Dense Context Information Exploration (DCIE) module and a Multi-source Feature Hybrid Interaction (MFHI) module. The DCIE module uses dilated convolution, asymmetric convolution, and dense guided connections to progressively capture strongly correlated multi-scale and multi-receptive-field context information, and aggregates this information to enhance the expressive power of each initial multi-level feature. The MFHI module contains diverse feature aggregation operations that adaptively exchange complementary information among multi-level features, generating high-quality feature representations for accurately predicting saliency maps. The method is evaluated on five public datasets, and experimental results demonstrate that it achieves superior prediction performance compared with 19 deep-learning-based SOD methods under different evaluation metrics.
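The abstract's DCIE module relies on dilated convolution to enlarge the receptive field without extra parameters. As a minimal illustration of that idea only (this is not the authors' implementation, which is 2-D and embedded in an FCN), a 1-D dilated convolution can be sketched as:

```python
# Minimal 1-D dilated convolution: increasing the dilation rate widens
# the receptive field while the kernel size (parameter count) stays fixed.
def dilated_conv1d(x, kernel, dilation=1):
    k = len(kernel)
    span = (k - 1) * dilation + 1  # effective receptive field in samples
    out = []
    for i in range(len(x) - span + 1):
        # Sample the input every `dilation` steps under the kernel.
        out.append(sum(kernel[j] * x[i + j * dilation] for j in range(k)))
    return out

x = [1, 2, 3, 4, 5, 6, 7, 8]
print(dilated_conv1d(x, [1, 1, 1], dilation=1))  # receptive field 3
print(dilated_conv1d(x, [1, 1, 1], dilation=2))  # receptive field 5
```

With a 3-tap kernel, dilation 1 covers 3 input samples per output, while dilation 2 covers 5; stacking several dilation rates, as the DCIE module does, yields multi-receptive-field context from the same feature map.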
Keywords: computer vision; salient object detection; fully convolutional network; context information
Classification codes: TN911.7 [Electronics and Telecommunications: Communication and Information Systems]; TP391.41 [Electronics and Telecommunications: Information and Communication Engineering]