Computation Rate Maximization for Wireless-Powered and Multiple-User MEC System with Buffer Queue  


Author(s): ABDUL Rauf, ZHAO Ping (College of Information Science and Technology, Donghua University, Shanghai 201620, China)

Affiliation: [1] College of Information Science and Technology, Donghua University, Shanghai 201620, China

Source: Journal of Donghua University (English Edition), 2024, Issue 6, pp. 689-701 (13 pages)

Funding: National Natural Science Foundation of China (No. 61902060); Shanghai Sailing Program, China (No. 19YF1402100); Fundamental Research Funds for the Central Universities, China (No. 2232019D3-51); Open Foundation of State Key Laboratory of Networking and Switching Technology (Beijing University of Posts and Telecommunications, China) (No. SKLNST-2021-1-06).

Abstract: Mobile edge computing (MEC) plays a vital role in various delay-sensitive applications. With the increasing popularity of low-computing-capability Internet of Things (IoT) devices in Industry 4.0, MEC also facilitates wireless power transfer, enhancing the efficiency and sustainability of these devices. Most related studies of the computation rate in MEC rely on the coordinate descent method, the alternating direction method of multipliers (ADMM), or Lyapunov optimization; however, these studies do not consider the buffer queue size. This work addresses computation rate maximization for wireless-powered, multiple-user MEC systems, focusing on the computation rate of the end devices and the management of the task buffer queue before computation at the terminal devices. A deep reinforcement learning (RL)-based task offloading algorithm is proposed to maximize the computation rate of the end devices and minimize the buffer queue size at the terminal devices. Specifically, the task offloading problem is formalized in terms of the channel gain, the buffer queue size, and wireless power transfer, and the offloading mode of each device is selected according to its individual channel gain, buffer queue size, and wireless power transfer in a particular time slot. The central idea is to find the optimal mode selection for the IoT devices connected to the MEC system, thereby optimizing the computation delay by maximizing the computation rate and minimizing the buffer queue size before computation. A deep RL-based task offloading algorithm is then presented to solve this mixed-integer, non-convex optimization problem and to achieve a better trade-off between the buffer queue size and the computation rate. Extensive simulation results show that the proposed algorithm outperforms existing work in maintaining a small buffer queue at the terminal devices while simultaneously achieving a high computation rate.
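
The paper itself provides no code; purely as an illustration of the kind of deep-RL mode selection the abstract describes, the following is a minimal Python sketch (all class and function names are hypothetical, and the network is not the authors' architecture). A small policy network maps each device's channel gain and buffer-queue length to a probability of offloading, which is then quantized into a binary offload/local decision for the current time slot.

```python
# Hypothetical sketch, not the authors' implementation: deep-RL-style
# offloading mode selection for a wireless-powered multi-user MEC system.
import torch
import torch.nn as nn


class OffloadingPolicy(nn.Module):
    """Maps the system state (channel gains, queue lengths) to per-device offloading probabilities."""

    def __init__(self, num_devices: int, hidden: int = 64):
        super().__init__()
        # Two features per device: channel gain and buffer-queue length.
        self.net = nn.Sequential(
            nn.Linear(2 * num_devices, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_devices),
            nn.Sigmoid(),  # probability of offloading for each device
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def select_modes(policy: OffloadingPolicy,
                 channel_gains: torch.Tensor,
                 queue_lengths: torch.Tensor) -> torch.Tensor:
    """Quantize the policy output into binary decisions: 1 = offload to the MEC server, 0 = compute locally."""
    state = torch.cat([channel_gains, queue_lengths], dim=-1)
    with torch.no_grad():
        probs = policy(state)
    return (probs > 0.5).float()


if __name__ == "__main__":
    num_devices = 4
    policy = OffloadingPolicy(num_devices)
    # One example time slot: random channel gains and current buffer-queue lengths.
    gains = torch.rand(num_devices)
    queues = torch.randint(0, 10, (num_devices,)).float()
    modes = select_modes(policy, gains, queues)
    print("offloading decisions:", modes.tolist())
```

In a full system the policy would be trained (e.g., against a reward combining the achieved computation rate and the buffer-queue backlog), and the wireless-power-transfer schedule would be optimized jointly; the sketch only shows the per-slot decision step.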

Keywords: computation rate; mobile edge computing (MEC); buffer queue; non-convex optimization; deep reinforcement learning

Classification: TN929.5 [Electronics and Telecommunications: Communication and Information Systems]

 
