Authors: 赵高长[1] (ZHAO Gao-chang), 刘豪 (LIU Hao), 苏军[1] (SU Jun) — College of Sciences, Xi'an University of Science and Technology, Xi'an 710054, China
Source: 《计算机工程与设计》 (Computer Engineering and Design), 2020, No. 3, pp. 862-866 (5 pages)
Funding: National Natural Science Foundation of China (41271518); Natural Science Foundation of Shaanxi Province (2018JM1047)
Abstract: To address the poor adaptability, restrictive applicability conditions, computational complexity, and lack of a general method for updating strategy values in the multi-agent Nash Q-learning algorithm, a parameter-based improvement is proposed. A joint action vector is introduced to simplify the algorithm, and parameters are introduced to approximate the state-action value function. By transforming the training objective, a value-function update equation under parameter approximation is derived, and the convergence and feasibility of the algorithm are analyzed theoretically. Simulation results show that the multi-agent reinforcement learning algorithm based on parameter approximation enables the agents to reach a Nash equilibrium 100% of the time, improves algorithm performance, reduces complexity, and converges faster than the traditional Nash Q-learning algorithm.
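The abstract describes approximating the joint-action value function with parameters and updating them toward the Nash value of the next-state stage game. The following Python sketch illustrates one plausible form of such a parameter-approximation update; it is not the paper's exact method. The feature map `phi`, the one-hot dimensionality, the toy random environment, and the maximin shortcut for the Nash value (exact only for two-player zero-sum stage games) are all assumptions for illustration.

```python
import numpy as np

# Hypothetical sketch: Q_i(s, a1, a2) ≈ theta · phi(s, a1, a2), with a
# TD-style update toward the Nash value of the next-state stage game.
N_FEATURES = 16
N_ACTIONS = 4
rng = np.random.default_rng(0)

def phi(state, a1, a2):
    """Joint state-action feature vector (hypothetical one-hot hashing)."""
    v = np.zeros(N_FEATURES)
    v[hash((state, a1, a2)) % N_FEATURES] = 1.0
    return v

def q_matrix(theta, state):
    """Stage-game payoff matrix at `state` under the current parameters."""
    return np.array([[theta @ phi(state, a1, a2)
                      for a2 in range(N_ACTIONS)]
                     for a1 in range(N_ACTIONS)])

def nash_value(theta, state):
    """Nash value of the stage game; the maximin value used here is
    exact only for the two-player zero-sum case assumed in this sketch."""
    return q_matrix(theta, state).min(axis=1).max()

def update(theta, s, a1, a2, r, s_next, alpha=0.1, gamma=0.9):
    """One parameter-approximation step:
    theta <- theta + alpha * (r + gamma * NashV(s') - Q(s,a)) * phi(s,a)."""
    feats = phi(s, a1, a2)
    td_error = r + gamma * nash_value(theta, s_next) - theta @ feats
    return theta + alpha * td_error * feats

theta = np.zeros(N_FEATURES)
for step in range(1000):
    s, s_next = rng.integers(0, 8, size=2)        # toy state indices
    a1, a2 = rng.integers(0, N_ACTIONS, size=2)   # exploratory joint action
    r = float(rng.normal())                        # stand-in reward signal
    theta = update(theta, s, a1, a2, r, s_next)
print("learned parameters:", np.round(theta, 3))
```

Compared with tabular Nash Q-learning, which maintains a Q-value for every (state, joint-action) entry, the parameter vector `theta` shares information across entries through the feature map, which is what allows the approximated variant to converge faster in the abstract's simulations.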
Keywords: multi-agent systems; reinforcement learning; Markov games; Q-learning; Nash equilibrium
Classification: TP181 [Automation and Computer Technology — Control Theory and Control Engineering]