Deep Reinforcement Learning Based Joint Cooperation Clustering and Downlink Power Control for Cell-Free Massive MIMO  

Authors: Du Mingjun, Sun Xinghua, Zhang Yue, Wang Junyuan, Liu Pei

Affiliations: [1] School of Electronics and Communication Engineering, Sun Yat-sen University, Shenzhen 518107, China; [2] Department of Electronic and Information Engineering, Shantou University, Shantou 515063, China; [3] College of Electronic and Information Engineering, Tongji University, Shanghai 201804, China; [4] School of Information Engineering, Wuhan University of Technology, Wuhan 430070, China; [5] Zhongshan Institute of Advanced Engineering Technology of Wuhan University of Technology, Zhongshan 528437, China

Source: China Communications, 2024, No. 11, pp. 1-14 (14 pages)

Funding: Supported by the Guangdong Basic and Applied Basic Research Foundation under Grant 2024A1515012015; in part by the National Natural Science Foundation of China under Grant 62201336; in part by the Guangdong Basic and Applied Basic Research Foundation under Grant 2024A1515011541; in part by the National Natural Science Foundation of China under Grant 62371344; in part by the Fundamental Research Funds for the Central Universities; in part by the Knowledge Innovation Program of Wuhan-Shuguang Project under Grant 2023010201020316; and in part by the Guangdong Basic and Applied Basic Research Foundation under Grant 2024A1515010247.

Abstract: In recent years, various power control and clustering approaches have been proposed to enhance the overall performance of cell-free massive multiple-input multiple-output (CF-mMIMO) networks. With the emergence of deep reinforcement learning (DRL), significant progress has been made in network optimization, as DRL holds great promise for improving network performance and efficiency. In this work, we address the challenge of joint cooperation clustering and downlink power control in CF-mMIMO networks. Leveraging the deep deterministic policy gradient (DDPG) algorithm, our objective is to maximize the proportional fairness (PF) of user rates, thereby aiming for optimal network performance and resource utilization. Moreover, adopting a divide-and-conquer strategy, we introduce two methods, termed alternating DDPG (A-DDPG) and hierarchical DDPG (H-DDPG), which decompose the joint optimization problem into more tractable sub-problems and thus enable a more efficient solution. Our results show that the proposed DDPG approach outperforms the baseline schemes in both clustering and downlink power control. Furthermore, A-DDPG and H-DDPG achieve higher performance gains than DDPG with lower computational complexity.
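The abstract does not give the system model or the reward definition; the following is a minimal sketch, under assumed toy settings, of how a joint action (binary AP-UE clustering plus downlink power coefficients) could be scored with a proportional-fairness (sum-of-log-rates) reward of the kind a DDPG agent would maximize. The names `beta`, `pf_reward`, `P_MAX`, and the simplified SINR expression are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

# Minimal sketch (assumptions, not the authors' implementation): a toy
# CF-mMIMO downlink model illustrating the joint action structure
# (AP-UE clustering + power coefficients) and a proportional-fairness
# (PF) reward that a DDPG agent would maximize.

rng = np.random.default_rng(0)
M, K = 16, 4          # number of access points (APs) and users (UEs)
P_MAX = 1.0           # per-AP power budget (normalized, assumed)
NOISE = 1e-3          # receiver noise power (normalized, assumed)

# Hypothetical large-scale fading gains between each AP and each UE.
beta = rng.exponential(scale=1.0, size=(M, K))

def pf_reward(cluster, power):
    """Proportional-fairness reward: sum of log user rates.

    cluster : (M, K) binary matrix, cluster[m, k] = 1 if AP m serves UE k.
    power   : (M, K) non-negative power coefficients.
    """
    # Enforce the per-AP power budget by scaling rows that exceed it.
    row_sum = power.sum(axis=1, keepdims=True)
    power = np.where(row_sum > P_MAX,
                     power * P_MAX / np.maximum(row_sum, 1e-12), power)

    # Toy SINR: power received from the serving cluster over the power
    # received from all other APs plus noise (no beamforming gains).
    signal = (cluster * power * beta).sum(axis=0)   # per-UE desired power
    total = (power * beta).sum(axis=0)              # per-UE total received power
    sinr = signal / (total - signal + NOISE)
    rates = np.log2(1.0 + sinr)
    return np.sum(np.log(rates + 1e-12))            # PF utility

# A DDPG actor would output one continuous joint action per step; here we
# split it into the two sub-actions that the A-DDPG/H-DDPG variants treat
# separately: clustering (thresholded) and power control.
raw_action = rng.uniform(size=2 * M * K)
cluster = (raw_action[: M * K].reshape(M, K) > 0.5).astype(float)
power = raw_action[M * K:].reshape(M, K) * P_MAX / K

print("PF reward of this joint action:", pf_reward(cluster, power))
```

In the divide-and-conquer spirit described above, the two halves of `raw_action` could be produced by separate alternating or hierarchical agents rather than a single joint actor; this sketch only fixes the reward interface shared by both decompositions.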

Keywords: cell-free massive MIMO; clustering; deep reinforcement learning; power control

Classification: TP18 [Automation and Computer Technology / Control Theory and Control Engineering]; TN929.5 [Automation and Computer Technology / Control Science and Engineering]

 
