Authors: ZHANG Peng [1], LIU Quan [1,2,3], ZHONG Shan [1], ZHAI Jian-wei [1], QIAN Wei-sheng [1]
Affiliations: [1] School of Computer Science and Technology, Soochow University, Suzhou 215006, Jiangsu, China; [2] Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing 210000, Jiangsu, China; [3] Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, Jilin, China
Source: Journal on Communications (《通信学报》), 2017, No. 4, pp. 166-177 (12 pages)
Funding: National Natural Science Foundation of China (No.61272005, No.61303108, No.61373094, No.61472262, No.61502323, No.61502329); Natural Science Foundation of Jiangsu Province (No.BK2012616); Natural Science Research Foundation of Jiangsu Higher Education Institutions (No.13KJB520020); Foundation of the Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University (No.93K172014K04); Suzhou Applied Basic Research Program, Industrial Part (No.SYG201422, No.SYG201308)
Abstract: Existing reinforcement learning algorithms for continuous action spaces do not adequately address how the optimal action should be selected or how knowledge of the action space can be exploited. To remedy this, an efficient actor-critic algorithm based on an improved natural gradient is proposed. The algorithm takes maximization of the expected return as its objective and obtains the optimal action by weighting the upper and lower bounds of the action interval. The weights of the two bounds are approximated by linear function approximators, so that finding the optimal action reduces to learning a pair of policy parameter vectors. To accelerate the learning of these parameter vectors, an incremental Fisher information matrix and eligibility traces for the bound weights are designed, yielding an incremental natural actor-critic algorithm with double policy gradients. To validate the algorithm, it is compared with classical continuous-action reinforcement learning methods on three benchmark reinforcement learning problems. The experimental results show that the proposed algorithm converges quickly and with good stability.
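The abstract's core action-selection idea (the action as a weighted combination of the action interval's bounds, with the two weights coming from linear function approximators over state features, i.e. the "double policy parameter vectors") can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual formulation: the function names, the feature vector, and the softmax normalization of the two bound weights are all assumptions made here for concreteness.

```python
import numpy as np

def select_action(phi, theta_l, theta_u, a_min, a_max):
    """Weight the lower and upper bounds of the action interval to get an action.

    phi              : state feature vector
    theta_l, theta_u : the two policy parameter vectors (one per bound)
    a_min, a_max     : lower and upper bounds of the action interval
    """
    # Linear scores for each bound from the two linear approximators.
    s_l = phi @ theta_l
    s_u = phi @ theta_u
    # Softmax-normalize so the two weights are positive and sum to 1
    # (an assumed way of combining the bound weights).
    m = max(s_l, s_u)
    e_l, e_u = np.exp(s_l - m), np.exp(s_u - m)
    w_u = e_u / (e_l + e_u)
    # The action is the convex combination of the interval bounds.
    return w_u * a_max + (1.0 - w_u) * a_min

# With untrained (zero) parameters both weights are 0.5, so the action
# is the midpoint of the interval.
phi = np.array([0.5, -0.2, 1.0])
a = select_action(phi, np.zeros(3), np.zeros(3), a_min=-1.0, a_max=1.0)
```

Learning then consists of updating `theta_l` and `theta_u` along the natural gradient of the expected return, using the incremental Fisher information matrix and per-bound eligibility traces described in the abstract.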
Classification: TP181 [Automation and Computer Technology / Control Theory and Control Engineering]