Affiliations: [1] State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an 710071, China; [2] School of National Defense Science and Technology, Southwest University of Science and Technology, Mianyang, Sichuan 621000, China
Source: Journal of Beijing University of Posts and Telecommunications, 2014, No. 1, pp. 80-84 (5 pages)
Funding: National Natural Science Foundation of China (61379005); National Basic Research Program of China (2009CB320403); National Science and Technology Major Project (2009ZX03007-004); Open Project of the ISN State Key Laboratory, Xidian University (ISN10-09)
Abstract: To address channel and power allocation for multiple cognitive radio users, a multi-agent reinforcement learning method based on user clustering and a variable learning rate is proposed. First, hierarchical processing separates channel selection from power control; channel allocation is implemented by fast optimal search combined with user-number balancing. Second, the multiuser power control problem is modeled in a stochastic game framework. K-means user clustering reduces both the number of players in the game and each user's environment complexity, and variable learning rates for Q-learning and policy learning further promote the convergence of the multi-agent reinforcement learning. Simulation results show that the method makes the users' power states and total reward converge effectively, and that the overall performance reaches a sub-optimal level.
Keywords: cognitive radio; multi-agent reinforcement learning; clustering; power control; variable learning rate
Classification: TN929.5 [Electronics and Telecommunications — Communication and Information Systems]
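The abstract's two core ingredients — K-means clustering to shrink the set of game players, and a learning rate that varies during tabular learning — can be illustrated with a minimal sketch. This is not the paper's algorithm: the 1-D user feature, the bandit-style reward, and the visit-count-decayed learning rate are all simplifying assumptions made here for illustration; the paper uses a full stochastic-game formulation with separate Q-learning and policy-learning rates.

```python
import random

def kmeans_1d(values, k, iters=20, seed=0):
    """Cluster users by a scalar feature (e.g. average channel gain,
    a hypothetical choice) so cluster representatives can stand in for
    all users in the game, reducing the number of players."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # assign each value to its nearest center
            i = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[i].append(v)
        # recompute centers; keep the old center if a cluster emptied
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

def learn_bandit(rewards, episodes=500, eps=0.2, seed=1):
    """Tabular Q-learning on a toy bandit with a variable learning rate:
    alpha = 1 / (visit count), so early updates are large and later
    updates shrink, which helps the estimates settle."""
    rng = random.Random(seed)
    k = len(rewards)
    Q = [0.0] * k
    visits = [0] * k
    for _ in range(episodes):
        # epsilon-greedy action selection
        if rng.random() < eps:
            a = rng.randrange(k)
        else:
            a = max(range(k), key=Q.__getitem__)
        visits[a] += 1
        alpha = 1.0 / visits[a]          # variable learning rate
        Q[a] += alpha * (rewards[a] - Q[a])
    return Q
```

With deterministic rewards the decayed rate makes each Q-value equal the running average of observed rewards, so the estimates converge exactly; the paper's variable-rate scheme pursues the same stabilizing effect in the much harder multi-agent setting.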