Authors: XIA Chunwei; ZHAO Jiacheng; CUI Huimin; FENG Xiaobing (Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, P.R.China; School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing 100190, P.R.China)
Affiliations: [1] Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, P.R.China [2] School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing 100190, P.R.China
Source: High Technology Letters (English edition of 高技术通讯), 2022, No. 4, pp. 363-372 (10 pages)
Fund: Supported by the General Program of National Natural Science Foundation of China (No. 61872043).
Abstract: It is important to efficiently support artificial intelligence (AI) applications on heterogeneous mobile platforms, especially to execute a deep neural network (DNN) model cooperatively across the multiple computing devices of a single platform. This paper proposes HOPE, an end-to-end heterogeneous inference framework running on mobile platforms that distributes the operators of a DNN model across different computing devices. The problem is formalized as an integer linear programming (ILP) problem, and a heuristic algorithm is proposed to determine a near-optimal heterogeneous execution plan. The experimental results demonstrate that HOPE reduces inference latency by up to 36.2% (22.0% on average) compared with MOSAIC, by up to 22.0% (10.2% on average) compared with StarPU, and by up to 41.8% (18.4% on average) compared with μLayer.
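As a rough illustration of the scheduling problem the abstract describes (not HOPE's actual ILP formulation or heuristic), the sketch below shows a simple greedy list-scheduling heuristic that assigns each DNN operator, in topological order, to the device that would finish it earliest. All names (greedy_schedule, the toy operator graph, the latency table, device ids) are hypothetical, and cross-device data-transfer costs are ignored for simplicity.

```python
def greedy_schedule(ops, deps, latency, devices):
    """Assign each operator to a device using a greedy earliest-finish rule.

    ops:      operator ids in topological order
    deps:     dict op -> list of predecessor ops
    latency:  dict (op, device) -> estimated execution time
    devices:  list of device ids, e.g. ['CPU', 'GPU']
    Returns (placement, makespan).
    """
    device_free = {d: 0.0 for d in devices}  # when each device is next idle
    finish = {}                              # finish time of each scheduled op
    placement = {}                           # chosen device per op

    for op in ops:
        # an operator can start only after all of its inputs are produced
        ready = max((finish[p] for p in deps.get(op, [])), default=0.0)
        # pick the device on which this operator would finish earliest
        best_dev = min(
            devices,
            key=lambda d: max(ready, device_free[d]) + latency[(op, d)],
        )
        start = max(ready, device_free[best_dev])
        finish[op] = start + latency[(op, best_dev)]
        device_free[best_dev] = finish[op]
        placement[op] = best_dev

    return placement, max(finish.values(), default=0.0)


if __name__ == "__main__":
    # toy 3-operator chain (conv -> relu -> fc) with hypothetical latencies
    ops = ["conv", "relu", "fc"]
    deps = {"relu": ["conv"], "fc": ["relu"]}
    latency = {
        ("conv", "CPU"): 5.0, ("conv", "GPU"): 2.0,
        ("relu", "CPU"): 0.5, ("relu", "GPU"): 0.3,
        ("fc", "CPU"): 1.0,  ("fc", "GPU"): 1.5,
    }
    plan, makespan = greedy_schedule(ops, deps, latency, ["CPU", "GPU"])
    print(plan, makespan)
```

A full framework like the one the abstract describes would additionally model inter-device communication and search the placement space globally (e.g. via the ILP), rather than deciding one operator at a time.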
Keywords: deep neural network (DNN), mobile, heterogeneous scheduler, parallel computing
Classification code: TP18 [Automation and Computer Technology / Control Theory and Control Engineering]