An Experience-Guided Deep Deterministic Actor-Critic Algorithm with Multi-Actor

Authors: Chen Hongming; Liu Quan [1,2,3,4]; Yan Yan; He Bin; Jiang Yubin; Zhang Linlin

Affiliations: [1] School of Computer Science and Technology, Soochow University, Suzhou, Jiangsu 215006; [2] Provincial Key Laboratory for Computer Information Processing Technology (Soochow University), Suzhou, Jiangsu 215006; [3] Key Laboratory of Symbolic Computation and Knowledge Engineering (Jilin University), Ministry of Education, Changchun 130012; [4] Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing 210000

Source: Journal of Computer Research and Development, 2019(8): 1708-1720 (13 pages)

Funding: National Natural Science Foundation of China (61772355, 61702055, 61472262, 61502323, 61502329); Major Program of Natural Science Research of Jiangsu Higher Education Institutions (18KJA520011, 17KJA520004); Suzhou Industrial Application of Basic Research Program (SYG201422)

Abstract: Continuous control has long been an important research direction in reinforcement learning. In recent years, the development of deep learning (DL) and the advent of the deterministic policy gradients (DPG) algorithm have provided many good ideas for solving continuous control problems. The main difficulty these methods face is exploration in the continuous action space: most of them explore by injecting external noise into the action space, but this strategy does not perform well on some continuous control tasks. To better address the exploration problem, this paper proposes an experience-guided deep deterministic actor-critic algorithm with multi-actor (EGDDAC-MA) that requires no external exploration noise. Instead, it learns a guiding network from the agent's own excellent experiences and uses it to guide action selection and the updates of the actor and critic networks. In addition, to reduce fluctuations during network learning, the algorithm uses a multi-actor actor-critic (AC) model that assigns a different actor to each phase of an episode; these actors are independent of one another and do not interfere with each other. Experiments show that, compared with the DDPG, TRPO, and PPO algorithms, EGDDAC-MA performs better on most continuous tasks in the GYM simulation platform.
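The multi-actor model described in the abstract, with one independent actor per phase of an episode, can be sketched as follows. This is a minimal illustration under stated assumptions: the even phase split, the `MultiActor` name, and the stand-in lambda actors are all hypothetical, and in the paper each actor is a neural network trained under the guiding network rather than a fixed function.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class MultiActor:
    """One actor per episode phase; the actors share no parameters."""
    actors: List[Callable]  # each maps a state to an action
    max_steps: int          # assumed (fixed) episode length

    def select(self, step: int) -> Callable:
        # Assumption: the episode is split evenly into len(actors) phases.
        phase = min(step * len(self.actors) // self.max_steps,
                    len(self.actors) - 1)
        return self.actors[phase]

    def act(self, state, step: int):
        # Only the actor responsible for the current phase is queried,
        # so updating one phase's actor does not disturb the others.
        return self.select(step)(state)

# Usage: three stand-in actors covering thirds of a 300-step episode.
ma = MultiActor(actors=[lambda s: "a0", lambda s: "a1", lambda s: "a2"],
                max_steps=300)
```

Because each phase is handled by its own network, a poor update late in training affects only one segment of the episode, which is one plausible reading of how the model reduces learning fluctuations.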

Keywords: reinforcement learning; deep reinforcement learning; deterministic actor-critic; experience guidance; expert guidance; multi-actor

Classification: TP18 [Automation and Computer Technology / Control Theory and Control Engineering]

 
