Efficient Knowledge Graph Embedding Training Framework with Multiple GPUs (Cited by: 1)


Authors: Ding Sun, Zhen Huang, Dongsheng Li, Min Guo

Affiliation: [1] College of Computer, National University of Defense Technology, Changsha 410073, China

Source: Tsinghua Science and Technology, 2023, Issue 1, pp. 167-175 (9 pages)

Abstract: When training a large-scale knowledge graph embedding (KGE) model with multiple graphics processing units (GPUs), a partition-based method is necessary for parallel training. However, existing partition-based training methods suffer from low GPU utilization and high input/output (IO) overhead between memory and disk. To address the high IO overhead between disk and memory, we optimize twice partitioning with fine-grained GPU scheduling, reducing the IO overhead between CPU memory and disk. To address the low GPU utilization caused by GPU load imbalance, we propose balanced partitioning and dynamic scheduling methods that accelerate training in different cases. Combining these methods, we propose fine-grained partitioning KGE, an efficient KGE training framework for multiple GPUs. Experiments on standard knowledge graph benchmarks show that our method achieves a speedup over existing frameworks for KGE training.
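The abstract's balanced-partitioning idea, assigning graph partitions to GPUs so that no single GPU becomes a bottleneck, can be illustrated with a minimal greedy sketch. This is not the paper's actual algorithm; the function name, the greedy largest-first heuristic, and the toy partition sizes are illustrative assumptions.

```python
# Hypothetical sketch: greedily assign graph partitions (weighted by
# their triple counts) to GPUs so that per-GPU load stays balanced.
import heapq

def balanced_partition(partition_sizes, num_gpus):
    """Assign each partition, largest first, to the least-loaded GPU."""
    # Min-heap of (current_load, gpu_id): the heap top is always the
    # GPU with the smallest accumulated load.
    heap = [(0, g) for g in range(num_gpus)]
    heapq.heapify(heap)
    assignment = {}
    for pid, size in sorted(enumerate(partition_sizes),
                            key=lambda x: -x[1]):
        load, gpu = heapq.heappop(heap)
        assignment[pid] = gpu
        heapq.heappush(heap, (load + size, gpu))
    return assignment

# Toy example: six partitions split across two GPUs.
print(balanced_partition([90, 70, 40, 30, 20, 10], 2))
```

With the toy sizes above, each GPU ends up with a total load of 130, i.e., a perfectly balanced split for this input; the greedy largest-first heuristic does not guarantee optimality in general, but keeps the load gap small.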

Keywords: knowledge graph embedding; parallel algorithm; partitioning graph; framework; graphics processing unit (GPU)

Classification: TP3 [Automation and Computer Technology: Computer Science and Technology]

 
