Online hierarchical reinforcement learning based on interrupting Option  (Cited by: 4)

Authors: 朱斐 (Zhu Fei)[1,2], 许志鹏 (Xu Zhipeng)[1], 刘全 (Liu Quan)[1,2], 伏玉琛 (Fu Yuchen)[1], 王辉 (Wang Hui)[1]

Affiliations: [1] School of Computer Science and Technology, Soochow University, Suzhou 215006, Jiangsu, China; [2] Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, Jilin, China

Source: Journal on Communications (《通信学报》), 2016, No. 6, pp. 65-74 (10 pages)

Funding: The National Natural Science Foundation of China (No. 61303108, No. 61373094, No. 61272005, No. 61472262); the Natural Science Foundation of the Higher Education Institutions of Jiangsu Province (No. 13KJB520020); the Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University (No. 93K172014K04); the Suzhou Applied Basic Research Program (No. SYG201422); the Provincial Key Laboratory Fund of Soochow University (No. KJS1524); the China Scholarship Council (No. 201606920013)

Abstract: To cope with the large volume of big data, an online-updating algorithm named Macro-Q with in-place updating (MQIU) is proposed on the basis of the Macro-Q algorithm. MQIU updates the value functions of both abstract actions and primitive actions at the same time, which improves the utilization of data samples and speeds up convergence. Because the conventional Markov decision process model and abstract actions have difficulty coping with variability, an interruption mechanism is introduced and a model-free interrupting Macro-Q Option learning algorithm (IMQ), based on hierarchical reinforcement learning, is proposed; IMQ can learn and improve control policies in a dynamic environment. Simulation results verify that MQIU speeds up convergence and can therefore handle larger-scale problems, and that IMQ solves tasks faster while keeping the learning performance stable.
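To make the interruption mechanism above concrete, the following is a minimal illustrative sketch in Python. It is not the paper's pseudocode: the environment interface (env.reset(), env.step()), the Option class, and all parameter names are assumptions made for illustration, and the MQIU-style simultaneous update of primitive-action values is omitted. The sketch performs an SMDP-style Macro-Q backup and, at every step inside a running option, interrupts the option when another available option has a strictly higher estimated value at the current state.

    import random
    from collections import defaultdict

    class Option:
        # A macro (abstract) action: an initiation set, an internal policy over
        # primitive actions, and a termination function beta.
        def __init__(self, name, initiation, policy, beta):
            self.name = name
            self.initiation = initiation   # set of states where the option may start
            self.policy = policy           # state -> primitive action
            self.beta = beta               # state -> termination probability in [0, 1]

    def imq_episode(env, options, Q=None, alpha=0.1, gamma=0.95, epsilon=0.1):
        # One episode of Q-learning over options with an interruption check.
        # Q maps (state, option name) -> estimated value; assumes at least one
        # option is available in every non-terminal state.
        if Q is None:
            Q = defaultdict(float)
        state = env.reset()
        done = False
        while not done:
            # epsilon-greedy choice among options whose initiation set contains `state`
            available = [o for o in options if state in o.initiation]
            if random.random() < epsilon:
                option = random.choice(available)
            else:
                option = max(available, key=lambda o: Q[(state, o.name)])

            start_state, cum_reward, discount = state, 0.0, 1.0
            while True:
                action = option.policy(state)
                state, reward, done = env.step(action)
                cum_reward += discount * reward
                discount *= gamma

                terminated = done or random.random() < option.beta(state)
                # Interruption: stop the running option early whenever some other
                # available option looks strictly better at the current state.
                best_alt = max(
                    (Q[(state, o.name)] for o in options if state in o.initiation),
                    default=float("-inf"),
                )
                if terminated or best_alt > Q[(state, option.name)]:
                    break

            # SMDP-style Macro-Q backup over the whole (possibly interrupted) option
            next_best = 0.0 if done else max(
                (Q[(state, o.name)] for o in options if state in o.initiation),
                default=0.0,
            )
            target = cum_reward + discount * next_best
            Q[(start_state, option.name)] += alpha * (target - Q[(start_state, option.name)])
        return Q

A caller would typically initialize Q = defaultdict(float) once and invoke imq_episode repeatedly across episodes; the interruption test is what distinguishes this loop from a plain (non-interruptible) Macro-Q update.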

Keywords: big data; reinforcement learning; hierarchical reinforcement learning; Option; online learning

CLC number: TP181 [Automation and Computer Technology / Control Theory and Control Engineering]

 
