Tracking Learning Based on Gaussian Regression for Multi-agent Systems in Continuous Space  (Cited by: 2)

Authors: 陈鑫 (Chen Xin) [1,2], 魏海军 (Wei Haijun) [1,2], 吴敏 (Wu Min) [1,2], 曹卫华 (Cao Weihua) [1,2]

Affiliations: [1] School of Information Science and Engineering, Central South University, Changsha 410083, China; [2] Hunan Engineering Laboratory for Advanced Control and Intelligent Automation, Changsha 410083, China

Source: Acta Automatica Sinica (《自动化学报》), 2013, No. 12, pp. 2021-2031 (11 pages)

Funding: Supported by the National Natural Science Foundation of China (61074058)

Abstract: Improving adaptation, realizing generalization in continuous space, and reducing dimensionality are key issues for applying multi-agent reinforcement learning (MARL) to continuous systems. To address them, this paper presents a learning mechanism and algorithm named model-based reinforcement learning with companion's policy tracking for multi-agent systems (MAS MBRL-CPT). Taking the learning agent's adaptation to its companions' policies as the starting point, an individual expected immediate reward is defined that merges the agent's observation of its companions' policies into the payoff fed back by the environment, and its value is estimated online by stochastic approximation. A dimension-reduced Q value function is then defined, which lowers the dimensionality of the learning space while establishing the Markov decision process (MDP) for tracking learning in the MAS environment. With the state transition probability modeled by Gaussian regression, the Q values of a generalization sample set are solved online by dynamic programming; based on this discrete sample set, Gaussian regression is used again to build generalization models of the value function and the learned strategy. Simulations on a multi-cart-pole control system in continuous space show that MAS MBRL-CPT enables the learning agent to learn adaptive cooperative strategies even though the system dynamics and the companions' strategies are unknown a priori, with high learning efficiency and good generalization ability.
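Illustrative sketch (not part of the original record): the abstract describes Gaussian regression serving two roles, first as the state transition model learned from (state, action) samples and then as the generalization model for a Q function that dynamic programming has solved only on a discrete sample set. The Python sketch below shows that two-regressor structure on a hypothetical one-dimensional toy system; scikit-learn's GaussianProcessRegressor stands in for the paper's Gaussian regression, and the toy dynamics, sample sizes, kernels, and placeholder DP values are illustrative assumptions rather than the authors' implementation.

# Minimal sketch of the two uses of Gaussian regression described in the abstract:
# (1) a transition model fitted on (state, action) -> next-state samples, and
# (2) a generalization model for Q values solved on a discrete sample set.
# All names and numbers are illustrative; this is not the authors' code.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical 1-D toy system standing in for one cart-pole coordinate.
def true_dynamics(s, a):
    return 0.9 * s + 0.1 * a          # unknown to the learning agent

# Collect (state, action) -> next-state samples from interaction.
S = rng.uniform(-1.0, 1.0, size=(60, 1))
A = rng.uniform(-1.0, 1.0, size=(60, 1))
SA = np.hstack([S, A])
S_next = true_dynamics(S, A) + 0.01 * rng.standard_normal(S.shape)

# (1) Gaussian regression as the state transition model p(s' | s, a).
transition_gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-4),
    normalize_y=True,
)
transition_gp.fit(SA, S_next.ravel())

# (2) Suppose dynamic programming has produced Q values on the sample set;
#     a second Gaussian regressor generalizes Q to arbitrary (s, a) pairs.
sample_Q = -np.abs(S_next).ravel()    # placeholder values from a DP sweep
q_gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
q_gp.fit(SA, sample_Q)

# Greedy action selection by evaluating the generalized Q over candidate actions.
def greedy_action(state, candidates=np.linspace(-1.0, 1.0, 21)):
    queries = np.column_stack([np.full_like(candidates, state), candidates])
    return candidates[np.argmax(q_gp.predict(queries))]

print("predicted next state:", transition_gp.predict([[0.3, -0.5]])[0])
print("greedy action at s=0.3:", greedy_action(0.3))

The point the sketch mirrors is the division of labor in the abstract: the transition model supports model-based planning over sampled Q values, while the second regressor lets the learned value function and strategy be queried at arbitrary continuous state-action pairs rather than only at the samples.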

Keywords: continuous state space, multi-agent systems, model-based reinforcement learning, Gaussian regression

CLC Number: TP18 [Automation and Computer Technology: Control Theory and Control Engineering]

 
