
Improved Mask Graph Autoencoders Model


Authors: YAN Xinyu; PANG Hui [1,2]; SHI Ruixue; ZHANG Ailing; CHEN Wei (Hebei University of Architecture, Zhangjiakou, Hebei 075000; Big Data Technology Innovation Center of Zhangjiakou, Hebei 075000)

Affiliations: [1] Hebei University of Architecture, Zhangjiakou, Hebei 075000; [2] Big Data Technology Innovation Center of Zhangjiakou, Zhangjiakou, Hebei 075000

Source: Journal of Hebei Institute of Architecture and Civil Engineering, 2024, No. 1, pp. 216-221 (6 pages)

Abstract: As one of the important models in deep learning, the graph autoencoder (GAE) has received extensive attention in recent years. However, GAE tends to overemphasize proximity information at the expense of the graph's structural information, making it unsuitable for downstream tasks other than link prediction. To address this problem, researchers have introduced masking strategies into graph autoencoders, forming masked graph autoencoder models for processing graph data. Building on this, this paper proposes an improved masked graph autoencoder (MaskGAE) model. MaskGAE uses the masked graph model (MGM) as a proxy task: it masks a portion of the edges and attempts to reconstruct the missing part from the partially visible, unmasked graph structure. By tuning parameters on the Cora dataset, the node classification accuracy of the MaskGAE model is improved from 84.05% to 84.55%, an increase of 0.5 percentage points.
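The edge-masking step of the MGM proxy task described above can be sketched as follows. This is a minimal illustration assuming a plain edge-list representation of an undirected graph; the function name `mask_edges` and the parameter `mask_ratio` are our own labels for exposition and do not come from the paper.

```python
import numpy as np

def mask_edges(edge_list, mask_ratio=0.3, seed=0):
    """Split an edge list into a visible set and a masked set.

    Mirrors the MGM proxy task: a fraction of edges (mask_ratio) is
    hidden, and a model would be trained to reconstruct the masked
    edges from the remaining visible graph structure. Names here are
    illustrative, not the paper's API.
    """
    rng = np.random.default_rng(seed)
    edges = np.asarray(edge_list)
    n_mask = int(round(len(edges) * mask_ratio))
    perm = rng.permutation(len(edges))          # random edge order
    masked = edges[perm[:n_mask]]               # reconstruction targets
    visible = edges[perm[n_mask:]]              # encoder input
    return visible, masked

# Example: mask 40% of 5 edges -> 2 masked, 3 visible
visible, masked = mask_edges([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)],
                             mask_ratio=0.4)
```

In an actual MaskGAE-style pipeline, the visible edges would be fed to a GNN encoder, and a decoder would score candidate edges so that the masked (held-out) edges are reconstructed; the split above only shows the masking itself.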

Keywords: autoencoder; self-supervised learning; masked graph model; graph-structured data

CLC Number: TP389.1 [Automation and Computer Technology: Computer System Architecture]

 
