Cloud Edge Collaborative Inference Algorithm Based on Model Partition in SDN

Authors: XU Hao; ZHU Xiaojuan[1] (School of Computer Science and Engineering, Anhui University of Science & Technology, Huainan 232001, Anhui, China)

Affiliation: [1] School of Computer Science and Engineering, Anhui University of Science & Technology, Huainan 232001, Anhui, China

Source: Journal of Lanzhou Institute of Technology, 2023, No. 6, pp. 31-37 (7 pages)

Abstract: To reduce model inference latency and computational cost under dynamically changing network states and task requirements, a cloud-edge collaborative inference algorithm based on model partitioning is proposed for Software Defined Networks (SDN). First, a complexity predictor is constructed to assign each task to its execution environment. Second, a Deep Q-Network (DQN) algorithm adaptively partitions and offloads inference models in the edge environment. Finally, SDN is used to obtain a global view of inference tasks and network resources, enabling reasonable allocation of network resources in a dynamic network environment. Experimental results show that the proposed algorithm converges well and remains robust in dynamic environments. Compared with existing inference algorithms, it meets the low-latency goal of collaborative inference while allocating computing resources reasonably.
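The abstract's second step, DQN-based adaptive partitioning and offloading, can be illustrated with a minimal sketch. The Python/PyTorch code below is not taken from the paper: the state features (bandwidth, edge load, input size), the simulated latency model, and all names (QNet, simulated_latency, NUM_LAYERS) are hypothetical assumptions used only to show how a DQN agent might learn to pick the partition layer that minimizes end-to-end latency.

```python
# Hypothetical sketch: DQN agent selecting a DNN partition point (edge vs. cloud).
# State: [bandwidth, edge load, input size]; action: layer index at which to split.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

NUM_LAYERS = 8           # assumed number of candidate partition points
STATE_DIM = 3            # [normalized bandwidth, edge load, task input size]
GAMMA, EPSILON, BATCH = 0.9, 0.1, 32


class QNet(nn.Module):
    """Small MLP that scores each candidate partition point for a given state."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, NUM_LAYERS))

    def forward(self, s):
        return self.net(s)


def simulated_latency(state, split):
    """Toy environment: edge compute up to `split`, transfer, then cloud compute."""
    bandwidth, edge_load, size = state
    edge_cost = split * (0.05 + 0.1 * edge_load)
    transfer = size * (1.0 - split / NUM_LAYERS) / max(bandwidth, 1e-3)
    cloud_cost = (NUM_LAYERS - split) * 0.02
    return edge_cost + transfer + cloud_cost


qnet, target = QNet(), QNet()
target.load_state_dict(qnet.state_dict())
opt = optim.Adam(qnet.parameters(), lr=1e-3)
buffer = deque(maxlen=5000)

for episode in range(500):
    # A new task arrival: sample a random network/load condition as the state.
    state = torch.tensor([random.random(), random.random(), random.random()])
    # Epsilon-greedy choice of partition point.
    if random.random() < EPSILON:
        action = random.randrange(NUM_LAYERS)
    else:
        with torch.no_grad():
            action = qnet(state).argmax().item()
    # Reward is negative end-to-end latency (lower latency -> higher reward).
    reward = -simulated_latency(state.tolist(), action)
    next_state = torch.tensor([random.random(), random.random(), random.random()])
    buffer.append((state, action, reward, next_state))

    if len(buffer) >= BATCH:
        s, a, r, s2 = zip(*random.sample(buffer, BATCH))
        s, s2 = torch.stack(s), torch.stack(s2)
        a = torch.tensor(a)
        r = torch.tensor(r, dtype=torch.float32)
        q = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            q_target = r + GAMMA * target(s2).max(1).values
        loss = nn.functional.mse_loss(q, q_target)
        opt.zero_grad()
        loss.backward()
        opt.step()

    if episode % 50 == 0:
        target.load_state_dict(qnet.state_dict())  # periodic target-network sync
```

In a real deployment the reward would come from measured latency reported through the SDN controller rather than a simulated cost model, and the state would be built from the globally perceived network and task information described in the abstract.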

Keywords: task allocation; model partitioning; edge intelligence; cloud-edge collaborative inference; SDN

CLC Classification: TP393 [Automation and Computer Technology: Computer Application Technology]

 
