Authors: ZHANG Ailing; PANG Hui [1,2]; YAN Xinyu (Hebei University of Architecture, Zhangjiakou, Hebei 075000; Big Data Technology Innovation Center of Zhangjiakou, Zhangjiakou, Hebei 075000)
Affiliations: [1] Hebei University of Architecture, Zhangjiakou, Hebei 075000, China; [2] Big Data Technology Innovation Center of Zhangjiakou, Zhangjiakou, Hebei 075000, China
Source: Journal of Hebei Institute of Architecture and Civil Engineering, 2024, No. 4, pp. 255-260 (6 pages)
Abstract: With the widespread application of Graph Neural Networks (GNNs) to complex graph data, effectively improving their performance and generalization ability has become a research focus. The main problems GNNs currently face include high computational complexity and low sampling efficiency, which limit their application to large-scale graph data. In addition, existing GNN models often ignore the differences between nodes and cannot make full use of node-importance information. This paper therefore proposes an improved method that combines differentiable sampling with a node-level attention mechanism to improve the performance of GNN models. First, a differentiable sampling strategy dynamically adjusts the sampling probabilities during training, so that the model can selectively compute on the most informative nodes. Second, a node-level attention mechanism is introduced, allowing the model to adaptively focus on important nodes during feature extraction and information aggregation. Experimental results on the public Cora, Citeseer, and Pubmed datasets show that the proposed method significantly improves the classification accuracy of GNN models and their generalization ability across different graph structures.
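The record does not specify the exact formulations of the two components it names. A minimal NumPy sketch of plausible versions follows, assuming a Gumbel-softmax relaxation for the differentiable sampling probabilities and GAT-style attention for node-level aggregation; all function names, shapes, and constants here are illustrative, not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def gumbel_softmax_probs(logits, tau=0.5, rng=None):
    """Differentiable relaxation of node sampling: perturb learnable
    logits with Gumbel noise, then apply a temperature-scaled softmax,
    so sampling probabilities can be adjusted by gradient descent."""
    rng = rng or np.random.default_rng(0)
    u = rng.uniform(low=1e-12, high=1.0, size=logits.shape)
    g = -np.log(-np.log(u))                  # Gumbel(0, 1) noise
    return softmax((logits + g) / tau)

def node_attention_aggregate(H, A, W, a):
    """GAT-style node-level attention: each node aggregates its
    neighbours' transformed features, weighted by learned attention.
    H: (N, d) node features; A: (N, N) adjacency with self-loops;
    W: (d, d') projection; a: (2*d',) attention vector."""
    Z = H @ W                                # (N, d') projected features
    d = Z.shape[1]
    # e_ij = LeakyReLU(a^T [z_i || z_j]); split a into source/target parts
    e = (Z @ a[:d])[:, None] + (Z @ a[d:])[None, :]
    e = np.where(e > 0, e, 0.2 * e)          # LeakyReLU, slope 0.2
    e = np.where(A > 0, e, -1e9)             # mask out non-neighbours
    alpha = softmax(e, axis=1)               # attention over neighbours
    return alpha @ Z, alpha
```

In a full model the Gumbel-softmax probabilities would select which nodes enter each mini-batch, while the attention weights reweight neighbours during message passing; both parts remain differentiable, so they can be trained end to end with the classifier.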
Keywords: graph neural networks; differentiable sampling; node-level attention mechanism; large-scale graph data processing
Classification: TP389.1 [Automation and Computer Technology: Computer System Architecture]