Authors: MA Yu-Xin; XU Yin-Long [1,3]; LI Cheng; ZHONG Jin
Affiliations: [1] School of Computer Science and Technology, University of Science and Technology of China, Hefei 230026, China; [2] School of Computer and Artificial Intelligence, Hefei Normal University, Hefei 230601, China; [3] Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei 230026, China
Source: Computer Systems & Applications, 2024, No. 1, pp. 245-253 (9 pages)
Funding: National Natural Science Foundation of China (62141216); Collaborative Innovation Project of Anhui Universities (GXXT-2022-045)
Abstract: Graph neural networks (GNNs) have become an important method for handling graph data. Due to the computational complexity and the large size of graph data, training GNNs on large-scale graphs relies on CPU-GPU cooperation and graph-sampling-based training, in which the graph structure and feature data are stored in CPU memory while sampled subgraphs and their features are transferred to the GPU for training. However, this approach faces a serious bottleneck in loading graph feature data, which significantly degrades end-to-end training performance; moreover, the graph features occupy too much memory, severely limiting the scale of graphs that can be trained. To address these problems, this study proposes a data loading approach based on input feature sparsification, which significantly reduces CPU memory usage and the volume of data transferred across the PCIe bus, greatly shortens data loading time, and accelerates GNN training so that GPU compute resources can be fully utilized. Based on the characteristics of graph features and GNN computation, the study proposes a sparsification method suited to graph feature data that balances compression ratio against model accuracy. Experimental evaluations are conducted on three common GNN models and three datasets of different sizes, including MAG240M, one of the largest publicly available datasets. The results show that this method reduces the feature size by more than an order of magnitude and achieves a 1.6x-6.7x end-to-end training speedup, while model accuracy drops by less than 1%. In addition, with only four GPUs, the GraphSAGE model can be trained on MAG240M to the target accuracy in just 40 minutes.
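The core idea in the abstract, shrinking node feature tensors before they cross the PCIe bus, can be illustrated with a minimal sketch. This is a hypothetical example assuming per-row top-k magnitude selection, one common sparsification scheme; the paper's actual method, function names, and parameters are not specified in this record, so everything below is illustrative.

```python
import numpy as np

def sparsify_features(feats: np.ndarray, keep_ratio: float = 0.1) -> np.ndarray:
    """Keep only the k largest-magnitude entries in each node's feature
    vector and zero the rest (illustrative sparsification, not the
    paper's exact algorithm)."""
    k = max(1, int(feats.shape[1] * keep_ratio))
    # Per row, find the column indices of the k largest |values|.
    idx = np.argpartition(np.abs(feats), -k, axis=1)[:, -k:]
    sparse = np.zeros_like(feats)
    rows = np.arange(feats.shape[0])[:, None]  # broadcast row selector
    sparse[rows, idx] = feats[rows, idx]
    return sparse

# Toy feature matrix: 4 nodes, 100-dimensional features.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 100)).astype(np.float32)
xs = sparsify_features(x, keep_ratio=0.1)
nnz_per_row = (xs != 0).sum(axis=1)  # 10 retained entries per node
```

Only the retained (index, value) pairs would need to be stored in CPU memory and shipped to the GPU, which is how a roughly 10x compression of the feature payload could translate into faster data loading, as the abstract reports.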
Classification: TP183 [Automation and Computer Technology: Control Theory and Control Engineering]