Subsampling Reinforcement Learning Method (Cited by: 1)


Authors: ZHOU Jiangwei; GUAN Yabing; BAI Wanmin [1]; LIU Bailin [1,2] (The National Local Joint Engineering Laboratory for New Network and Detection Control, Xi'an Technological University, Xi'an 710021, China; School of Computer Science and Engineering, Xi'an Technological University, Xi'an 710021, China)

Affiliations: [1] The National Local Joint Engineering Laboratory for New Network and Detection Control, Xi'an Technological University, Xi'an 710021, China; [2] School of Computer Science and Engineering, Xi'an Technological University, Xi'an 710021, China

Source: Journal of Xi'an Technological University, 2021, No. 3, pp. 345-351 (7 pages)

Funding: Natural Science Foundation of Shaanxi Province (15JK1372).

Abstract: To shorten training time by raising the replay frequency of high-information-value samples during reinforcement learning, this paper proposes a subsampling (two-stage sampling) method. A batch of samples is first drawn at random from the experience pool and stratified; each stratum is then sampled according to the TD_error distribution of its samples, and the deep Q-network is trained with the samples obtained from this second sampling stage. The subsampling method is applied to the DQN algorithm, its effect is tested on the OpenAI Gym platform, and the influence of the algorithm's parameters on learning performance is analyzed. Experimental results show that, compared with DQN, the method raises the selection probability of more informative and better-performing samples, speeds up the agent's learning, reduces the number of interactions between the agent and the environment, and improves the agent's learning performance.
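The two-stage procedure described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact algorithm: the function name `subsample`, the equal-size stratification rule, and the per-stratum quota are assumptions made for the example; the paper specifies only that a random batch is stratified and each stratum is sampled by its TD_error distribution.

```python
import numpy as np

def subsample(batch_td_errors, n_strata=4, n_out=8, rng=None):
    """Two-stage sampling sketch for experience replay.

    Stage 1: stratify a randomly drawn batch by |TD error| magnitude.
    Stage 2: sample within each stratum with probabilities proportional
    to |TD error|, so high-error (more informative) transitions are
    replayed more often. Returns indices into the input batch.
    """
    rng = np.random.default_rng() if rng is None else rng
    td = np.abs(np.asarray(batch_td_errors, dtype=float))

    # Stage 1: order batch indices by TD-error magnitude and split
    # them into (roughly) equal-size strata.
    order = np.argsort(td)
    strata = np.array_split(order, n_strata)

    per_stratum = n_out // n_strata
    chosen = []
    for stratum in strata:
        errs = td[stratum] + 1e-8        # avoid an all-zero stratum
        probs = errs / errs.sum()        # TD-error-proportional weights
        # Stage 2: within-stratum sampling by the TD-error distribution.
        picks = rng.choice(stratum, size=per_stratum, replace=True, p=probs)
        chosen.extend(picks.tolist())
    return chosen
```

The indices returned would then select the transitions used to train the deep Q-network, in place of the uniform minibatch that standard DQN draws from the replay buffer.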

Keywords: deep reinforcement learning; experience replay mechanism; subsampling; deep Q-network

Classification: TP18 (Automation and Computer Technology: Control Theory and Control Engineering)
