Multi-agent reinforcement learning algorithm based on parameter approximation  (Cited by: 2)


Authors: ZHAO Gao-chang [1]; LIU Hao; SU Jun [1] (College of Sciences, Xi'an University of Science and Technology, Xi'an 710054, China)

Affiliation: [1] College of Sciences, Xi'an University of Science and Technology, Xi'an 710054, Shaanxi, China

Source: Computer Engineering and Design, 2020, No. 3, pp. 862-866 (5 pages)

Funding: National Natural Science Foundation of China (41271518); Natural Science Foundation of Shaanxi Province (2018JM1047).

Abstract: To address the poor adaptability, harsh applicability conditions, computational complexity, and lack of a general strategy-value update method in the multi-agent Nash Q-learning algorithm, a parameter-based improvement is proposed. A joint action vector is introduced to simplify the algorithm, and parameters are introduced to approximate the state-action value function, transforming the training objective; the value-function update equation under parameter approximation is derived, and the convergence and feasibility of the algorithm are analyzed theoretically. Simulation results show that the multi-agent reinforcement learning algorithm based on parameter approximation enables agents to reach Nash equilibrium at a rate of 100%, improves performance, reduces algorithmic complexity, and converges faster than the traditional Nash Q-learning algorithm.
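The parameter-approximation idea summarized in the abstract can be sketched as follows. This is an illustrative simplification, not the paper's exact method: a linear approximation Q_θ(s, a¹, a²) = θᵀφ(s, a¹, a²) over one-hot joint-action features, with a maximum over joint actions standing in for the Nash-equilibrium value of the next-state stage game (computing a true Nash value for a general-sum game would require an equilibrium solver). All function names and the feature encoding are hypothetical.

```python
import numpy as np

N_STATES, N_ACTIONS = 4, 2  # small two-agent example

def features(state, a1, a2):
    """One-hot feature vector phi(s, a1, a2) for the joint state-action pair."""
    phi = np.zeros(N_STATES * N_ACTIONS * N_ACTIONS)
    phi[state * N_ACTIONS * N_ACTIONS + a1 * N_ACTIONS + a2] = 1.0
    return phi

def joint_value(theta, state):
    """Stand-in for the Nash stage-game value at `state`: here simply the
    maximum over joint actions (a cooperative simplification)."""
    return max(theta @ features(state, a1, a2)
               for a1 in range(N_ACTIONS) for a2 in range(N_ACTIONS))

def td_update(theta, s, a1, a2, reward, s_next, alpha=0.1, gamma=0.9):
    """One parameter-approximation update:
    theta <- theta + alpha * (r + gamma * V(s') - Q_theta(s, a)) * phi(s, a)."""
    phi = features(s, a1, a2)
    td_error = reward + gamma * joint_value(theta, s_next) - theta @ phi
    return theta + alpha * td_error * phi
```

With a one-hot encoding this reduces to tabular Q-learning, but the same update works unchanged for any feature map φ, which is the point of replacing the Q-table with a parameter vector θ.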

Keywords: multi-agent system; reinforcement learning; Markov game; Q-learning; Nash equilibrium

Classification: TP181 [Automation and Computer Technology: Control Theory and Control Engineering]
