Authors: Mingzhen LI, Changxi LIU, Jianjin LIAO, Xuegui ZHENG, Hailong YANG, Rujun SUN, Jun XU, Lin GAN, Guangwen YANG, Zhongzhi LUAN, Depei QIAN
Affiliations: [1] State Key Laboratory of Software Development Environment, Beijing 100191, China; [2] School of Computer Science and Engineering, Beihang University, Beijing 100191, China; [3] National University of Singapore, Singapore 119077, Singapore; [4] State Key Laboratory of Mathematical Engineering and Advanced Computing, Wuxi 214000, China; [5] Science and Technology on Special System Simulation Laboratory, Beijing Simulation Center, Beijing 100854, China; [6] Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
Source: Frontiers of Computer Science (中国计算机科学前沿, English edition), 2024, No. 2, pp. 1-15 (15 pages)
Funding: Supported by the National Key Research and Development Program of China (No. 2020YFB1506703); the National Natural Science Foundation of China (Grant Nos. 62072018 and 61732002); the State Key Laboratory of Software Development Environment (No. SKLSDE-2021ZX-06); and the Fundamental Research Funds for the Central Universities.
Abstract: The flourishing of deep learning frameworks and hardware platforms demands an efficient compiler that can shield the diversity in both software and hardware in order to provide application portability. Among existing deep learning compilers, TVM is well known for its efficient code generation and optimization across diverse hardware devices. Meanwhile, the Sunway many-core processor renders itself a competitive candidate owing to its attractive computational power for both scientific computing and deep learning workloads. This paper combines the trends in these two directions. Specifically, we propose swTVM, which extends the original TVM to support ahead-of-time compilation for architectures that require cross-compilation, such as Sunway. In addition, we leverage architectural features during compilation, such as the core group for massive parallelism, DMA for high-bandwidth memory transfer, and local device memory for data locality, to generate efficient code for deep learning workloads on Sunway. The experimental results show that the code generated by swTVM achieves a 1.79x improvement in inference latency on average compared to the state-of-the-art deep learning framework on Sunway, across eight representative benchmarks. This work is the first attempt from the compiler perspective to bridge the gap between deep learning and the Sunway processor, particularly with productivity and efficiency in mind. We believe this work will encourage more people to embrace the power of deep learning and the Sunway many-core processor.
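The abstract names three Sunway-oriented ideas applied during code generation: core-group parallelism, DMA for memory transfer, and data locality in the local device memory (LDM). swTVM's actual Sunway backend is not shown on this page, so the following is only a minimal sketch that mimics the same scheduling ideas (tiling for a small on-chip buffer, parallelizing the outer loop across cores, and emitting C source ahead of time for cross-compilation) using TVM's classic public te schedule API on a matrix multiplication; the workload, sizes, and tiling factors are illustrative assumptions, not swTVM's implementation.

```python
import tvm
from tvm import te

# Illustrative matrix-multiplication workload (sizes are assumptions).
M, K, N = 1024, 1024, 1024
A = te.placeholder((M, K), name="A")
B = te.placeholder((K, N), name="B")
k = te.reduce_axis((0, K), name="k")
C = te.compute((M, N), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")

s = te.create_schedule(C.op)

# Tile the output so each block fits a small on-chip buffer,
# analogous to staging tiles in Sunway's local device memory (LDM).
io, jo, ii, ji = s[C].tile(C.op.axis[0], C.op.axis[1], x_factor=32, y_factor=32)
ko, ki = s[C].split(k, factor=32)
s[C].reorder(io, jo, ko, ii, ji, ki)

# Parallelize the outermost tile loop, analogous to distributing tiles
# across the compute cores of a Sunway core group.
s[C].parallel(io)

# Emit C source ahead of time so it can be cross-compiled for the target,
# in the spirit of swTVM's ahead-of-time compilation flow for Sunway.
mod = tvm.build(s, [A, B, C], target="c", name="matmul")
print(mod.get_source()[:400])
```

The explicit DMA transfers between main memory and LDM described in the abstract would be handled by swTVM's backend itself; the tiling and outer-loop parallelization above only sketch the scheduling decisions that make such transfers and core-group parallelism possible.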
Keywords: Sunway processor; deep learning compiler; code generation; performance optimization
Classification: TP1 [Automation and Computer Technology - Control Theory and Control Engineering]