Authors: Wanwei Huang, Qiancheng Zhang, Tao Liu, Yaoli Xu, Dalei Zhang
Affiliations: [1] College of Software Engineering, Zhengzhou University of Light Industry, Zhengzhou 450007, China; [2] Henan Jiuyu Tenglong Information Engineering Co., Ltd., Zhengzhou 450052, China; [3] Henan Xin'an Communication Technology Co., Ltd., Zhengzhou 450007, China
Source: Computers, Materials & Continua, 2024, No. 9, pp. 4875-4893 (19 pages)
Funding: Financial support from the Major Science and Technology Programs in Henan Province (Grant No. 241100210100); the National Natural Science Foundation of China (Grant No. 62102372); the Henan Provincial Department of Science and Technology Research Project (Grant Nos. 242102211068 and 232102210078); the Stabilization Support Program of the Shenzhen Science and Technology Innovation Commission (Grant No. 20231130110921001); and the Key Scientific Research Project of Higher Education Institutions of Henan Province (Grant No. 24A520042) is acknowledged.
Abstract: Aiming at the rapid growth of network services, which leads to long service-request processing times and high deployment costs when deploying network function virtualization service function chains (SFCs) in 5G networks, this paper proposes a multi-agent deep deterministic policy gradient optimization algorithm for SFC deployment (MADDPG-SD). Initially, an optimization model is constructed for the network resource-constrained case that enhances the request acceptance rate while minimizing latency and deployment cost. Subsequently, we model the dynamic problem as a Markov decision process (MDP), facilitating adaptation to the evolving states of network resources. Finally, by allocating SFCs to different agents and adopting a collaborative deployment strategy, each agent aims to maximize the request acceptance rate or minimize latency and cost. The agents learn strategies from historical data of virtual network functions in SFCs to guide server-node selection, and achieve approximately optimal SFC deployment strategies through a cooperative framework of centralized training and distributed execution. Experimental simulation results indicate that, while simultaneously meeting performance requirements and resource capacity constraints, the proposed method effectively increases the request acceptance rate compared with the baseline algorithms, reducing end-to-end latency by 4.942% and deployment cost by 8.045%.
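The abstract frames SFC deployment as a sequential decision problem: each agent places one virtual network function (VNF) on a server node, subject to capacity constraints, and a request is rejected when no feasible node exists. The sketch below illustrates that MDP-style state/action/transition structure with a simple greedy placement rule; it is not the paper's MADDPG-SD algorithm, and all class and function names here are illustrative assumptions.

```python
# Illustrative sketch only (not the paper's implementation): SFC deployment
# as a sequence of per-VNF placement decisions. A learned MADDPG policy
# would replace the greedy rule in place_vnf().
from dataclasses import dataclass
from typing import List

@dataclass
class Server:
    node_id: int
    cpu: float      # remaining CPU capacity (state)
    latency: float  # latency contribution of placing a VNF here

@dataclass
class VNF:
    cpu_demand: float

def place_vnf(vnf: VNF, servers: List[Server]) -> int:
    """One agent's action: pick the feasible node with the lowest latency.
    Returns the chosen node_id, or -1 if the request must be rejected."""
    feasible = [s for s in servers if s.cpu >= vnf.cpu_demand]
    if not feasible:
        return -1  # capacity constraint violated -> reject
    best = min(feasible, key=lambda s: s.latency)
    best.cpu -= vnf.cpu_demand  # state transition: resources are consumed
    return best.node_id

def deploy_sfc(chain: List[VNF], servers: List[Server]) -> List[int]:
    """Deploy a whole chain, one placement decision per VNF; an empty
    list means the SFC request was rejected."""
    placements = []
    for vnf in chain:
        node = place_vnf(vnf, servers)
        if node == -1:
            return []
        placements.append(node)
    return placements

if __name__ == "__main__":
    servers = [Server(0, cpu=4.0, latency=2.0), Server(1, cpu=2.0, latency=1.0)]
    chain = [VNF(cpu_demand=2.0), VNF(cpu_demand=2.0)]
    print(deploy_sfc(chain, servers))  # first VNF takes node 1, second falls back to node 0
```

In the paper's framing, the reward would combine acceptance rate, latency, and deployment cost, and each agent's policy would be trained centrally but executed in a distributed fashion.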
Keywords: network function virtualization; service function chain; Markov decision process; multi-agent reinforcement learning
Classification: TP39 [Automation and Computer Technology: Computer Application Technology]