Authors: ZHENG Guping [1]; WANG Min; LI Gang [1] (School of Computer and Control Engineering, North China Electric Power University, Baoding Hebei 071003, China)
Affiliation: [1] School of Control and Computer Engineering, North China Electric Power University, Baoding 071003, Hebei, China
Source: Journal of Graphics (《图学学报》), 2018, No. 6, pp. 1069-1077 (9 pages)
Funding: National Natural Science Foundation of China (51407076); Fundamental Research Funds for the Central Universities (2018MS075)
Abstract: In aerial images, objects in the same scene can differ greatly in scale, so single-scale segmentation often fails to achieve the best classification result. To address this problem, we propose a multi-scale fusion model based on an attention mechanism. First, multi-scale features of the aerial image are extracted with dilated convolutions at different sampling rates. Then, an attention mechanism is introduced in the multi-scale fusion stage, allowing the model to focus automatically on the appropriate scale and to learn a separate weight for every scale and every pixel location. Finally, the weighted fused feature map is upsampled to the original image size, and each pixel of the aerial image is semantically labeled. Experimental results show that, compared with the traditional FCN and DeepLab semantic segmentation models and other aerial-image segmentation models, the attention-based multi-scale fusion model not only achieves higher segmentation accuracy, but also allows the importance of different scales and pixel locations to be analyzed by visualizing the weight map corresponding to each scale's features.
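The fusion step the abstract describes — a per-scale, per-pixel attention weight applied to multi-scale feature maps before summation — can be sketched as follows. This is a minimal NumPy illustration of the general technique, not the paper's implementation; the shapes, the `attention_fuse` name, and the softmax normalization over scales are assumptions for the sake of the example.

```python
import numpy as np

def attention_fuse(feature_maps, attn_logits):
    """Fuse multi-scale feature maps with per-scale, per-pixel attention.

    feature_maps: (S, C, H, W) score maps from S parallel branches
                  (e.g. dilated convolutions at different sampling rates)
    attn_logits:  (S, H, W) unnormalized attention scores per scale and pixel
    Returns the fused (C, H, W) map and the (S, H, W) weight maps.
    """
    # Softmax over the scale axis, so the weights at each pixel sum to 1;
    # subtracting the max first keeps the exponentials numerically stable.
    e = np.exp(attn_logits - attn_logits.max(axis=0, keepdims=True))
    weights = e / e.sum(axis=0, keepdims=True)                   # (S, H, W)
    # Broadcast the weights over the channel axis and sum across scales.
    fused = (feature_maps * weights[:, None, :, :]).sum(axis=0)  # (C, H, W)
    return fused, weights
```

Visualizing `weights[s]` as a heatmap is what the abstract refers to when it mentions analyzing the importance of each scale at each pixel location.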
Classification: TP391 [Automation and Computer Technology — Computer Application Technology]