Authors: YU Guo; LI Dacheng; YANG Yi
Affiliations: [1] College of Mining Engineering, Taiyuan University of Technology, Taiyuan 030024, China; [2] College of Physics and Optoelectronics, Taiyuan University of Technology, Taiyuan 030024, China
Source: Chinese Space Science and Technology (《中国空间科学技术(中英文)》), 2024, No. 5, pp. 175-185 (11 pages)
Funding: National Key Research and Development Program of China (2022YFB3903004).
Abstract: Road scenes in high-resolution remote-sensing imagery are complex: narrow roads, and roads broken up by buildings and shadows, lead to low road-extraction accuracy. To address this, an improved model, AP-LinkNet, is proposed that combines an atrous (dilated) convolution unit with a parallel attention mechanism module. The model achieves higher accuracy on fine road details by enlarging the receptive field and attending to road features at deeper levels during the downsampling (encoding) stage. The atrous convolution module enlarges the receptive field without changing the spatial relationships between pixels, while the parallel attention mechanism increases the attention paid to channel and spatial information when the input image is sampled, and the resulting weights are applied to the transposed-convolution features in the decoding stage. Combining the two mechanisms reduces the noise introduced by complex road backgrounds and improves the overall accuracy of road extraction. Compared with DeepLabV3+, U-Net, LinkNet and D-LinkNet, AP-LinkNet achieves an F1 score of 80.69% and an IoU of 78.65% for road extraction on the DeepGlobe dataset; its F1 score exceeds those of the four comparison models by 11.71%, 5.24%, 3.97% and 3.58%, respectively. The results show that the model is more accurate and more robust, and performs well on narrow, occluded and other complex road details in high-resolution imagery.
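As an illustration of the two mechanisms named in the abstract, the following is a minimal PyTorch-style sketch. It is not the authors' published implementation: the module names (DilatedCenterBlock, ParallelAttention), the dilation rates (1, 2, 4) and the channel-reduction ratio (16) are assumptions chosen only to show how an atrous-convolution unit that keeps the feature-map resolution and a parallel channel/spatial attention gate could sit between a LinkNet-style encoder and decoder.

# Illustrative sketch only; module names, dilation rates and the reduction
# ratio are assumptions, not the authors' exact AP-LinkNet design.
import torch
import torch.nn as nn

class DilatedCenterBlock(nn.Module):
    """Stacked atrous (dilated) convolutions: enlarge the receptive field
    while leaving the feature-map resolution unchanged."""
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        )

    def forward(self, x):
        out = x
        for branch in self.branches:
            out = branch(out)
        return out + x  # residual connection preserves fine road detail

class ParallelAttention(nn.Module):
    """Channel attention and spatial attention computed in parallel; their
    product re-weights the feature map before it reaches the decoder."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.channel_gate(x) * self.spatial_gate(x)

if __name__ == "__main__":
    feat = torch.randn(1, 256, 64, 64)      # a hypothetical encoder feature map
    feat = DilatedCenterBlock(256)(feat)    # enlarge receptive field
    feat = ParallelAttention(256)(feat)     # re-weight channel/spatial information
    print(feat.shape)                       # torch.Size([1, 256, 64, 64])

In a LinkNet-style network the attention-weighted feature would then be added to the corresponding transposed-convolution output in the decoder, which matches the weighting of decoder features described in the abstract.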
Keywords: deep learning; atrous convolution; parallel attention mechanism; hybrid loss function; convolutional neural network
Classification: TP79 [Automation and Computer Technology - Detection Technology and Automatic Devices]