Authors: 马静 (MA Jing)[1,2]; 郭中华 (GUO Zhonghua)[1,2]; 马志强 (MA Zhiqiang)[1]; 马小艳 (MA Xiaoyan); 李迦龙 (LI Jialong)
Affiliations: [1] School of Physics and Electronic-Electrical Engineering, Ningxia University, Yinchuan 750021, China; [2] Ningxia Key Lab on Information Sensing & Intelligent Desert, Ningxia University, Yinchuan 750021, China
Source: Chinese Journal of Liquid Crystals and Displays (《液晶与显示》), 2024, No. 8, pp. 1001-1013 (13 pages)
Funding: National Natural Science Foundation of China (No. 62365016); Special Funds of the Central Government for Supporting Local Development (No. 2023FRD05034).
Abstract: To reduce the errors caused by the loss of detail information and class imbalance when DeepLabV3+ is applied to land-cover segmentation of remote sensing images, a DeepLabV3+ segmentation method based on a lightweight network is proposed. First, MobileNetV2 replaces the backbone of the original baseline network, which improves training efficiency and reduces model complexity. Second, the dilation rates of the atrous convolutions in the ASPP module are increased and max pooling is applied in the last ASPP layer, so that contextual information at different scales is captured effectively; at the same time, an SE attention mechanism is introduced into each ASPP branch and an ECA attention mechanism is introduced after the shallow features are extracted, which improves the model's perception of different categories and details. Finally, a weighted Dice-Focal joint loss function is used for optimization to handle class imbalance. The improved model is validated on the CCF dataset and the Huawei Ascend Cup competition dataset. Experimental results show that, compared with the original DeepLabV3+ model, the proposed method improves every metric on both test sets to varying degrees: mIoU reaches 73.47% and 63.43%, improvements of 3.24% and 15.11%; accuracy reaches 88.28% and 86.47%, improvements of 1.47% and 7.83%; and the F1 score reaches 84.29% and 77.04%, improvements of 3.86% and 13.46%. The improved DeepLabV3+ model better addresses the loss of detail information and class imbalance, and improves the performance and accuracy of land-cover segmentation in remote sensing images.
Keywords: MobileNetV2; atrous convolution; attention mechanism; loss function
Classification: TP391 [Automation and Computer Technology / Computer Application Technology]
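The abstract above reports that a weighted Dice-Focal joint loss is used to counter class imbalance. The paper's exact formulation is not given in this record, so the following is only a minimal sketch of one common way such a joint loss is written in PyTorch; the weighting factor `lambda_dice`, the focal parameters `alpha` and `gamma`, and the helper name `dice_focal_loss` are illustrative assumptions rather than values or identifiers from the paper.

```python
# Hypothetical sketch of a weighted Dice-Focal joint loss for multi-class
# segmentation (assumed PyTorch setting; parameter values are illustrative).
import torch
import torch.nn.functional as F


def dice_focal_loss(logits, targets, num_classes, lambda_dice=0.5,
                    alpha=0.25, gamma=2.0, eps=1e-6):
    """logits: (N, C, H, W) raw scores; targets: (N, H, W) long class indices."""
    probs = torch.softmax(logits, dim=1)
    one_hot = F.one_hot(targets, num_classes).permute(0, 3, 1, 2).float()

    # Dice term: per-class overlap, which counteracts class-level imbalance.
    intersection = (probs * one_hot).sum(dim=(0, 2, 3))
    cardinality = probs.sum(dim=(0, 2, 3)) + one_hot.sum(dim=(0, 2, 3))
    dice = (2.0 * intersection + eps) / (cardinality + eps)
    dice_loss = 1.0 - dice.mean()

    # Focal term: down-weights easy pixels so hard or rare pixels dominate.
    log_probs = F.log_softmax(logits, dim=1)
    pt = (probs * one_hot).sum(dim=1).clamp_min(eps)   # prob. of the true class
    ce = -(log_probs * one_hot).sum(dim=1)             # per-pixel cross-entropy
    focal_loss = (alpha * (1.0 - pt) ** gamma * ce).mean()

    # Weighted combination of the two terms.
    return lambda_dice * dice_loss + (1.0 - lambda_dice) * focal_loss
```

The intent of such a combination is that the Dice term balances contributions across classes regardless of how many pixels each class occupies, while the focal term concentrates the gradient on poorly classified pixels; how the two terms are weighted in the paper would follow its own ablation choices.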