Affiliation: [1] Institute of Intelligent Science and Technology, College of Computer and Information, Hohai University, Nanjing 210098, Jiangsu, China
Source: Journal of Sichuan University (Engineering Science Edition), 2012, No. 5, pp. 136-142 (7 pages)
Funding: National Natural Science Foundation of China (60971088; 60571048)
Abstract: To improve the efficiency and convergence speed of reinforcement learning, a heuristic reinforcement learning method based on acquired path-guiding knowledge (PHQL) was proposed. PHQL requires no background knowledge to be embedded in the agent in advance: as the agent updates the Q-table in each episode, path knowledge for each state is built, revised, and optimized autonomously. The acquired path knowledge is then used to guide and accelerate the agent's subsequent learning, reducing the blindness of the learning process. The execution probabilities and selection methods of the three behaviors, exploration, exploitation, and heuristic guidance, were analyzed, and an algorithm in which the action-selection probabilities change over time was put forward. PHQL was validated on a path-search problem and compared with standard Q-learning and several related reinforcement learning algorithms. The experimental results show that the proposed method noticeably accelerates the learning process and markedly improves convergence.
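The abstract describes the mechanism only at a high level. Below is a minimal Python sketch of the general idea, assuming a standard tabular Q-learning update; the class name, the probability schedules, and the form of the path-knowledge table are illustrative assumptions, not the paper's actual implementation.

```python
import random
from collections import defaultdict

class PHQLAgent:
    """Illustrative sketch of heuristic Q-learning with acquired path knowledge.

    Assumptions (not from the paper): the path knowledge is stored as a
    state -> best-action map, and the explore/heuristic probabilities are
    simple linear schedules over training progress.
    """

    def __init__(self, actions, alpha=0.1, gamma=0.9):
        self.actions = actions
        self.alpha, self.gamma = alpha, gamma
        self.q = defaultdict(float)   # Q(s, a) table, default 0.0
        self.path_knowledge = {}      # state -> best action seen so far

    def select_action(self, state, episode, total_episodes):
        """Choose among explore / heuristic / exploit with time-varying probabilities."""
        progress = episode / total_episodes
        p_explore = max(0.05, 0.5 * (1 - progress))  # exploration decays over time
        p_heuristic = 0.4 * progress                 # heuristic use grows with knowledge
        r = random.random()
        if r < p_explore or state not in self.path_knowledge:
            return random.choice(self.actions)                       # explore
        if r < p_explore + p_heuristic:
            return self.path_knowledge[state]                        # follow path knowledge
        return max(self.actions, key=lambda a: self.q[(state, a)])   # exploit Q-table

    def update(self, s, a, reward, s_next):
        """Standard Q-learning update, plus refreshing the path knowledge for s."""
        best_next = max(self.q[(s_next, b)] for b in self.actions)
        self.q[(s, a)] += self.alpha * (reward + self.gamma * best_next - self.q[(s, a)])
        # Record the current greedy action as acquired path knowledge.
        self.path_knowledge[s] = max(self.actions, key=lambda x: self.q[(s, x)])
```

The design intent mirrored here is that exploration dominates early episodes, while the heuristic behavior gains weight as the path table matures, so that knowledge acquired during learning steers and accelerates later episodes.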
Classification: TP301 [Automation and Computer Technology / Computer System Architecture]