Authors: TANG Hong [1,2], LIU Xiaojie, GAN Chenmin, CHEN Rong (School of Communications and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China; Chongqing Key Lab of Mobile Communications Technology (Chongqing University of Posts and Telecommunications), Chongqing 400065, China)
Affiliations: [1] School of Communications and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China; [2] Chongqing Key Lab of Mobile Communications Technology (Chongqing University of Posts and Telecommunications), Chongqing 400065, China
Source: Journal of Harbin Institute of Technology, 2023, No. 5, pp. 107-113 (7 pages)
Funding: Program for Changjiang Scholars and Innovative Research Team in University (IRT_16R72).
Abstract: In an ultra-dense network environment, access points are densely deployed in hotspot areas, forming a complex heterogeneous network, and users must select an appropriate network to access in order to obtain the best performance. Choosing the optimal network for a user, so that the user's own performance or the overall network performance is maximized, is known as the network access selection problem. To solve this problem in ultra-dense networks, an access selection algorithm based on an improved deep Q network (DQN) is proposed, which jointly considers network state, user preference, and service type, combined with a load-balancing strategy. First, by analyzing the influence of network attributes and users' service preferences on network selection, appropriate network parameters are chosen as inputs to the access selection algorithm. Second, the network access selection problem is modeled as a Markov decision process, and the states, actions, and reward function of the model are designed. Finally, the DQN is used to solve the selection model and obtain the optimal network selection policy. In addition, to prevent the DQN from overestimating Q values, the target function of the traditional DQN is optimized, and a prioritized experience replay mechanism is introduced during neural network training to improve learning efficiency. Simulation results show that the proposed algorithm resolves the overestimation problem of the traditional DQN, accelerates the convergence of the neural network, effectively reduces user blocking, and improves network throughput.
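The abstract describes the two DQN improvements only at a high level: an optimized target function that curbs Q-value overestimation, and prioritized experience replay. A minimal sketch of how these two pieces are commonly realized is given below, assuming a Double-DQN-style target (select the next action with the online network, evaluate it with the target network) and a simple proportional-priority buffer; the function names, the toy Q-value vectors, and the buffer implementation are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def double_dqn_target(q_online_next, q_target_next, reward, gamma, done):
    """Double-DQN-style target: the online network picks the next action,
    but the target network supplies its value, which reduces the
    overestimation bias of the plain max-based DQN target (assumed here
    as the paper's 'optimized target function')."""
    a_star = int(np.argmax(q_online_next))  # action chosen by online net
    return reward + (0.0 if done else gamma * q_target_next[a_star])

class PrioritizedReplay:
    """Minimal proportional prioritized replay buffer (O(n) sampling,
    no sum-tree); transitions with larger TD error are replayed more often."""
    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.buffer, self.priorities = [], []

    def push(self, transition, td_error):
        # priority = (|TD error| + eps)^alpha, eps keeps it nonzero
        p = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.buffer) >= self.capacity:   # drop oldest when full
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append(p)

    def sample(self, batch_size, rng=None):
        rng = rng or np.random.default_rng(0)
        probs = np.asarray(self.priorities, dtype=float)
        probs /= probs.sum()
        idx = rng.choice(len(self.buffer), size=batch_size, p=probs)
        return [self.buffer[i] for i in idx], idx
```

In a training loop for the access-selection MDP, each (state, chosen network, reward, next state) transition would be pushed with its TD error, minibatches drawn by priority, and the online network regressed toward `double_dqn_target` values.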
Keywords: ultra-dense network; access selection; deep Q network (DQN); prioritized experience replay; load balancing
Classification: TN92 [Electronics and Telecommunications: Communication and Information Systems]