Authors: YAN Xinyu; PANG Hui[1,2]; SHI Ruixue; ZHANG Ailing; CHEN Wei (Hebei University of Architecture, Zhangjiakou, Hebei 075000; Big Data Technology Innovation Center of Zhangjiakou, Hebei 075000)
Affiliations: [1] Hebei University of Architecture, Zhangjiakou, Hebei 075000 [2] Big Data Technology Innovation Center of Zhangjiakou, Hebei 075000
Source: Journal of Hebei Institute of Architecture and Civil Engineering, 2024, No. 1, pp. 216-221 (6 pages)
Abstract: As one of the important models in deep learning, the graph autoencoder (GAE) has received extensive attention in recent years. However, the GAE tends to overemphasize proximity information at the expense of the graph's structural information, which makes it unsuitable for downstream tasks other than link prediction. To address this shortcoming of the traditional GAE, researchers have introduced masking strategies into the graph autoencoder framework, yielding masked graph autoencoder models for processing graph data. Building on this line of work, this paper proposes an improved masked graph autoencoder (MaskGAE). MaskGAE adopts masked graph modeling (MGM) as its pretext task: a portion of the edges is masked, and the model attempts to reconstruct the missing part from the partially visible, unmasked graph structure. Through parameter tuning on the Cora dataset, the node classification accuracy of the MaskGAE model is improved from 84.05% to 84.55%, a gain of 0.5 percentage points.
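The MGM pretext task described in the abstract can be sketched as a simple edge-splitting step: a fraction of the graph's edges is hidden from the encoder, and the reconstruction target is exactly that hidden set. The sketch below is illustrative only; the function name, the mask ratio, and the toy path graph are assumptions, not details from the paper.

```python
import random

def mask_edges(edges, mask_ratio=0.15, seed=0):
    """Randomly split an edge list into visible and masked subsets.

    Mirrors the masked graph modeling (MGM) idea: the model sees only
    the visible edges and is trained to reconstruct the masked ones.
    The 15% default ratio is an illustrative choice, not the paper's.
    """
    rng = random.Random(seed)          # fixed seed for reproducibility
    shuffled = edges[:]                # copy so the input is untouched
    rng.shuffle(shuffled)
    n_masked = int(len(shuffled) * mask_ratio)
    masked = shuffled[:n_masked]       # reconstruction targets
    visible = shuffled[n_masked:]      # structure the encoder may use
    return visible, masked

# Toy path graph with 10 edges; mask 20% of them.
edges = [(i, i + 1) for i in range(10)]
visible, masked = mask_edges(edges, mask_ratio=0.2)
```

In a full MaskGAE-style pipeline, `visible` would be fed to the graph encoder and a decoder would score candidate edges, with the loss computed on `masked` (plus negative samples); that training loop is omitted here.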
CLC Number: TP389.1 [Automation and Computer Technology — Computer System Architecture]