Optimization and Deployment of Memory-Intensive Operations in Deep Learning Model on Edge  


Authors: Peng XU, Jianxin ZHAO, Chi Harold LIU

Affiliation: [1] Department of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China

Source: Computer Science (《计算机科学》), 2023, No. 2, pp. 3-12 (10 pages)

Funding: Supported by the National Natural Science Foundation of China (U21A20519).

Abstract: As ever larger amounts of data are generated by edge devices such as smart homes, mobile phones, and wearable devices, it becomes crucial for many applications to deploy machine learning models across these devices. The execution speed of the deployed model is a key element of service quality. For highly heterogeneous edge deployment scenarios, deep learning compilation is a novel approach to this problem: models are defined in domain-specific languages (DSLs), from which efficient code implementations are generated for different hardware devices. However, two aspects remain insufficiently investigated. The first is the optimization of memory-intensive operations; the second is the heterogeneity of the deployment targets. To that end, this work proposes a system solution that optimizes memory-intensive operations, optimizes subgraph distribution, and enables the compilation and deployment of DNN models on multiple targets. Evaluation results demonstrate the performance of the proposed system.
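One standard way deep learning compilers reduce the cost of memory-intensive operations is operator fusion: chained elementwise operators are combined into a single kernel so intermediate tensors are never materialized in memory. The sketch below (not taken from the paper; a generic illustration in plain NumPy) contrasts an unfused pipeline, which allocates an intermediate buffer per operator, with a fused single-pass version computing the same result:

```python
import numpy as np

def unfused(x):
    """Unfused pipeline: each operator writes a full intermediate array,
    so memory traffic dominates for these cheap elementwise ops."""
    a = x + 1.0          # intermediate buffer 1
    b = a * 2.0          # intermediate buffer 2
    return np.maximum(b, 0.0)

def fused(x):
    """Fused kernel: one pass over the data, no intermediate buffers.
    Deep learning compilers generate such fused loops automatically."""
    out = np.empty_like(x)
    for i, v in enumerate(x.flat):
        out.flat[i] = max((v + 1.0) * 2.0, 0.0)
    return out

x = np.linspace(-2.0, 2.0, 8)
assert np.allclose(unfused(x), fused(x))
```

In a real compiler the fused loop is emitted as vectorized native code per target, which is what makes fusion profitable on heterogeneous edge hardware.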

Keywords: Memory optimization; Deep compiler; Computation optimization; Model deployment; Edge computing

Classification: TP311.5 [Automation and Computer Technology — Computer Software and Theory]
