Impacts of different proportions of contextual information on the construction of sample sets of remote sensing scene images for damaged buildings

Authors: TAI Jiayi; SHEN Li; QIAO Wenfan; ZHOU Wuzhen (Faculty of Geosciences and Environmental Engineering, Southwest Jiaotong University, Chengdu 610097, China; Sichuan Institute of Land Science and Technology, Sichuan Center of Satellite Application Technology, Chengdu 610045, China)

Affiliations: [1] Faculty of Geosciences and Environmental Engineering, Southwest Jiaotong University, Chengdu 610097, China; [2] Sichuan Institute of Land Science and Technology (Sichuan Center of Satellite Application Technology), Chengdu 610045, China

Source: 《自然资源遥感》 (Remote Sensing for Natural Resources), 2024, No. 3, pp. 154-162 (9 pages)

Funding: Jointly supported by the National Natural Science Foundation of China General Program projects "Extraction of Post-Earthquake Damaged Buildings from High-Resolution Remote Sensing Images Based on Weakly Supervised Deep Learning" (No. 42071386) and "Modeling the Scale Effect of Raster Categorical Data Based on Homogenization Decomposition and Analytical Synthesis" (No. 41971330).

Abstract: Deep learning-based scene analysis of remote sensing images is an important means of post-earthquake damage assessment. Given the relative scarcity of images of damaged buildings, constructing high-quality sample sets of remote sensing scene images is crucial for improving the accuracy of scene recognition and classification. The proportion of contextual information in a scene image, an important reference for remote sensing analysis, is a key factor affecting the quality of the constructed sample sets, yet current sample set construction methods have not explored what proportion of contextual information is appropriate. Aiming to construct high-quality sample sets, this study designed a method for adjusting the proportion of contextual information in scene images, investigated the impacts of different proportions of contextual information on the construction of scene sample sets, and explored the optimal range for this proportion. Six sample sets of scene images with different proportions of contextual information were constructed and used to train and test five classic convolutional neural network (CNN) models, and the classification results of each model under each proportion of contextual information were analyzed in turn. The results indicate that the CNNs reached their best classification accuracy (92.22%) when the proportion of contextual information was 80%, which dropped to 89.03% at a proportion of 95%. Among all the CNN models, GoogLeNet performed best, with an average accuracy of 93.13%. This study identifies a reasonable range for the proportion of contextual information in scene sample sets, which can effectively improve the classification accuracy of remote sensing scene images and guide the construction of sample sets of remote sensing scene images for damaged buildings.
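The abstract does not give implementation details, but the core idea of varying the proportion of context in a scene sample can be illustrated with a minimal sketch. The Python snippet below (using Pillow; the bounding box, file names, six proportion levels, and 224-pixel output size are illustrative assumptions, not values taken from the paper) crops a square patch around a damaged-building bounding box so that the surrounding context occupies roughly a target fraction of the patch area, then resizes the patch to a fixed CNN input size:

# Minimal sketch of adjusting the context proportion of a scene sample.
# This is NOT the authors' exact procedure; bounding box, file names and
# proportion levels are hypothetical.
import math
from PIL import Image

def crop_with_context(image: Image.Image,
                      bbox: tuple[int, int, int, int],
                      context_ratio: float,
                      out_size: int = 224) -> Image.Image:
    """Crop a square patch in which context covers ~context_ratio of the area."""
    x0, y0, x1, y1 = bbox                      # building bounding box in pixels
    building_area = (x1 - x0) * (y1 - y0)
    # Choose the patch area so the building occupies (1 - context_ratio) of it.
    patch_area = building_area / max(1.0 - context_ratio, 1e-6)
    side = int(math.ceil(math.sqrt(patch_area)))
    side = max(side, x1 - x0, y1 - y0)         # patch must still contain the box

    # Centre the patch on the building and keep it inside the image where possible
    # (PIL zero-pads any part of the crop that falls outside the frame).
    cx, cy = (x0 + x1) // 2, (y0 + y1) // 2
    left = max(0, min(cx - side // 2, image.width - side))
    top = max(0, min(cy - side // 2, image.height - side))
    patch = image.crop((left, top, left + side, top + side))
    return patch.resize((out_size, out_size))  # fixed CNN input size

if __name__ == "__main__":
    scene = Image.open("damaged_scene.tif").convert("RGB")   # hypothetical file
    box = (120, 140, 200, 230)                                # hypothetical bbox
    for p in (0.50, 0.60, 0.70, 0.80, 0.90, 0.95):            # assumed levels
        crop_with_context(scene, box, p).save(f"sample_ctx{round(p * 100)}.png")

Applying such a crop to the same scenes at several proportion levels (for example 80% versus 95% context) yields the kind of parallel sample sets whose classification accuracy the study compares across the five CNN models.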

Keywords: remote sensing image scene analysis; post-earthquake damage assessment; proportion of contextual information; scene image construction; damaged buildings

Classification code: TP751 [Automation and Computer Technology - Detection Technology and Automatic Devices]

 
