Shape cognition in map space using deep auto-encoder learning    (Cited by: 8)


Authors: YAN Xiongfeng [1], AI Tinghua [2], YANG Min [2], ZHENG Jianbin

Affiliations: [1] College of Surveying and Geo-Informatics, Tongji University, Shanghai 200092, China; [2] School of Resource and Environmental Sciences, Wuhan University, Wuhan 430079, China

Source: Acta Geodaetica et Cartographica Sinica (《测绘学报》), 2021, No. 6, pp. 757-765 (9 pages)

Funding: National Natural Science Foundation of China (42001415, 42071450); National Key Research and Development Program of China (2017YFB0503500).

Abstract: Shape is an important feature of geospatial objects and a pivotal basis for people to establish spatial concepts and form spatial cognition in map space. Drawing on the feature-mining capability of deep learning, this study introduces an auto-encoder approach that integrates multiple characteristics measured at several neighborhood sizes along a shape's outline in 2D map space, providing support for the mechanism and formalization of spatial shape cognition. Taking building data as a case, the study first converts the building outline into a sequence and extracts its descriptive characteristics by considering local and regional structures, and then trains a sequence-to-sequence auto-encoder on unlabeled building polygons to produce a shape coding. Experiments show that the method yields shape codings that accord with shape cognition, support a meaningful similarity measure, and distinguish different building shapes. Furthermore, in application scenarios such as shape retrieval and shape matching, the shape coding effectively represents the global and local characteristics of buildings, consistent with visual cognition.
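The first step described in the abstract, converting a building outline into a sequence of descriptive features, can be sketched as below. This is a minimal illustration only, not the authors' exact feature set: it resamples the closed polygon at equal arc-length spacing and records a (radial distance from centroid, turning angle) pair at each sample point. Such per-vertex sequences are what a sequence-to-sequence auto-encoder would then be trained to reconstruct, with the bottleneck vector serving as the shape coding.

```python
import math

def resample_outline(vertices, n_points=64):
    """Resample a closed polygon outline to n_points at equal arc-length spacing."""
    pts = list(vertices) + [vertices[0]]  # close the ring
    seg = [math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1)]
    total = sum(seg)
    samples, i, acc = [], 0, 0.0
    for k in range(n_points):
        d_target = total * k / n_points  # arc length of the k-th sample
        while acc + seg[i] < d_target:   # advance to the segment containing it
            acc += seg[i]
            i += 1
        t = (d_target - acc) / seg[i] if seg[i] else 0.0
        (x0, y0), (x1, y1) = pts[i], pts[i + 1]
        samples.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return samples

def outline_features(vertices, n_points=64):
    """Turn an outline into a sequence of (radial distance, turning angle) pairs."""
    pts = resample_outline(vertices, n_points)
    cx = sum(p[0] for p in pts) / n_points
    cy = sum(p[1] for p in pts) / n_points
    feats = []
    for k in range(n_points):
        px, py = pts[k]
        qx, qy = pts[(k + 1) % n_points]
        rx, ry = pts[k - 1]
        r = math.hypot(px - cx, py - cy)          # distance to centroid
        a_in = math.atan2(py - ry, px - rx)       # incoming edge direction
        a_out = math.atan2(qy - py, qx - px)      # outgoing edge direction
        # signed turning angle, wrapped to (-pi, pi]
        turn = math.atan2(math.sin(a_out - a_in), math.cos(a_out - a_in))
        feats.append((r, turn))
    return feats
```

For a closed outline traversed counter-clockwise, the turning angles sum to 2π, which is a quick sanity check on the sequence. The paper's actual descriptors at multiple neighborhood sizes differ from this sketch; the point is only the outline-to-sequence conversion that precedes the auto-encoder.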

Keywords: spatial cognition; shape coding; deep learning; auto-encoder; sequence-to-sequence model

Classification: P208 [Astronomy & Earth Sciences — Cartography and Geographic Information Engineering]

 
