Authors: YANG Jie; MA Xiangyang; ZHAO Anqi; HU Guang; LI Xinghua
Affiliations: [1] Zhejiang Huadong Mapping and Engineering Safety Technology Co., Ltd., Hangzhou 310014, China; [2] School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China; [3] School of Resource and Environmental Sciences, Wuhan University, Wuhan 430079, China
Source: Science of Surveying and Mapping, 2024, No. 8, pp. 91-99 (9 pages)
Funding: National Natural Science Foundation of China (42171302); research project of the Surveying and Mapping Engineering Institute, Zhejiang Huadong Mapping and Engineering Safety Technology Co., Ltd. (ZKY2022-CA-02-02).
Abstract: To address the insufficient extraction of global and local features and the low efficiency of feature fusion in existing hyperspectral image classification methods, this paper proposes a dual-branch cross-attention network (DCNet) that effectively extracts both global and local features. First, preliminary global features are extracted by a superpixel sampling network (SSN) and fed into a Transformer encoder for feature enhancement. Local features are then extracted by two 3D convolution modules with different kernel sizes. Finally, a cross-attention mechanism fuses the global and local features, and the classification result is obtained from the fused features. To prevent vanishing gradients, DCNet incorporates numerous residual connections. To verify the classification performance of the method, the Pavia University and WHU-Hi-HongHu datasets were selected as test data, and the results were compared against six other models through quantitative evaluation and visual interpretation. Experiments show that DCNet achieves overall classification accuracies of 99.44% and 98.65% on the two datasets, respectively, significantly higher than the other six models.
Keywords: hyperspectral image classification; superpixel; Transformer encoder; convolutional neural network; cross-attention
Classification: P237.2 [Astronomy & Earth Sciences / Photogrammetry and Remote Sensing]
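The abstract's fusion step uses cross-attention, where one branch's features act as queries and the other branch's features supply keys and values. The paper's implementation is not reproduced here; the following is a minimal NumPy sketch of single-head cross-attention under assumed, hypothetical shapes and randomly initialized projection weights (the real DCNet operates on superpixel/Transformer and 3D-convolution feature maps):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feats, kv_feats, d_k, seed=0):
    """Fuse two feature sets: queries from one branch,
    keys/values from the other (weights are random placeholders)."""
    rng = np.random.default_rng(seed)
    Wq = rng.standard_normal((query_feats.shape[-1], d_k)) / np.sqrt(d_k)
    Wk = rng.standard_normal((kv_feats.shape[-1], d_k)) / np.sqrt(d_k)
    Wv = rng.standard_normal((kv_feats.shape[-1], d_k)) / np.sqrt(d_k)
    Q, K, V = query_feats @ Wq, kv_feats @ Wk, kv_feats @ Wv
    # scaled dot-product attention: each query attends over all key/value tokens
    attn = softmax(Q @ K.T / np.sqrt(d_k))
    return attn @ V

# toy example: 16 tokens per branch, 64-dim features (hypothetical sizes)
global_feats = np.random.default_rng(1).standard_normal((16, 64))
local_feats = np.random.default_rng(2).standard_normal((16, 64))
fused = cross_attention(global_feats, local_feats, d_k=32)
print(fused.shape)  # (16, 32)
```

In DCNet the output of such a fusion would feed the final classifier; a full implementation would use learned projections, multiple heads, and the residual connections the abstract mentions.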