Cognitive Radio Power Control Based on Deep Reinforcement Learning


Authors: CHEN Ling-ling [1]; HUANG Fu-sen; YU Yue

Affiliation: [1] Jilin Institute of Chemical Technology, Jilin 132022, Jilin, China

Source: Computer & Telecommunication, 2024, No. 10, pp. 10-13 (4 pages)

Abstract: With the rapid development of technology, the demand for wireless spectrum keeps growing. However, because spectrum resources are limited, using them effectively has become a major challenge in the radio field. To address this issue, a cognitive radio network model is established in which primary and secondary users share the same spectrum resources and operate in a non-cooperative manner, with the goal of improving secondary-user throughput. A SumTree-sampling Deep Q-Network (ST-DQN) algorithm is then used for power control, ensuring both priority and diversity in sample selection. Finally, a series of simulation experiments in Python compares ST-DQN with traditional Q-learning and a free-exploration algorithm on performance indicators such as reward, loss function, and secondary-user throughput; the results show that ST-DQN performs better in power control.

Keywords: deep reinforcement learning; cognitive radio; power control

Classification: TN92 [Electronics and Telecommunications: Communication and Information Systems]
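To make the sample-selection idea concrete, the sketch below illustrates a SumTree structure of the kind that prioritized experience replay in ST-DQN relies on: leaves hold transition priorities, internal nodes hold sums, so transitions with larger priorities (e.g. TD errors) are drawn more often while low-priority ones keep a nonzero chance, which is the priority/diversity trade-off the abstract mentions. This is only a minimal, assumed Python illustration; the class name, method names, and toy priorities are not taken from the paper.

import random

class SumTree:
    # Binary sum tree for prioritized sampling: leaves hold transition
    # priorities, each internal node holds the sum of its two children,
    # so the root is the total priority mass.
    def __init__(self, capacity):
        self.capacity = capacity                   # max number of stored transitions
        self.tree = [0.0] * (2 * capacity - 1)     # capacity-1 internal nodes + capacity leaves
        self.data = [None] * capacity              # transitions, aligned with the leaves
        self.write = 0                             # next leaf slot to overwrite

    def _propagate(self, idx, change):
        # Push a priority change from a leaf up to the root.
        parent = (idx - 1) // 2
        self.tree[parent] += change
        if parent != 0:
            self._propagate(parent, change)

    def update(self, idx, priority):
        # Set a leaf's priority (e.g. from a new TD error) and fix the sums above it.
        change = priority - self.tree[idx]
        self.tree[idx] = priority
        self._propagate(idx, change)

    def add(self, priority, transition):
        # Store a transition in the next leaf slot with the given priority.
        idx = self.write + self.capacity - 1       # leaf position inside the tree array
        self.data[self.write] = transition
        self.update(idx, priority)
        self.write = (self.write + 1) % self.capacity

    def total(self):
        return self.tree[0]                        # total priority mass at the root

    def sample(self, s):
        # Walk down from the root: go left if s falls in the left subtree's mass,
        # otherwise subtract that mass and go right, until a leaf is reached.
        idx = 0
        while 2 * idx + 1 < len(self.tree):
            left, right = 2 * idx + 1, 2 * idx + 2
            if s <= self.tree[left]:
                idx = left
            else:
                s -= self.tree[left]
                idx = right
        return idx, self.tree[idx], self.data[idx - self.capacity + 1]

# Usage sketch: in ST-DQN training the priorities would come from TD errors.
tree = SumTree(capacity=8)
for i in range(8):
    tree.add(priority=1.0 + i, transition=("state", "action", "reward", "next_state", i))
idx, priority, transition = tree.sample(random.uniform(0, tree.total()))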

 
