Implicit policy constraint for offline reinforcement learning  

Authors: Zhiyong Peng, Yadong Liu, Changlin Han, Zongtan Zhou

Affiliation: [1] National University of Defense Technology, Changsha, China

Published in: CAAI Transactions on Intelligence Technology, 2024, Issue 4, pp. 973-981 (9 pages)

Funding: National Natural Science Foundation of China, Grant/Award Number: U19A2083.

Abstract: Offline reinforcement learning (RL) aims to learn policies entirely from passively collected datasets, making it a data-driven decision method. One of the main challenges in offline RL is the distribution shift problem, which causes the algorithm to visit out-of-distribution (OOD) samples. The distribution shift can be mitigated by constraining the divergence between the target policy and the behaviour policy. However, this approach can overly constrain the target policy and impair the algorithm's performance, as it does not directly distinguish between in-distribution and OOD samples. In addition, it is difficult to learn and represent a multi-modal behaviour policy when the dataset is collected by several different behaviour policies. To overcome these drawbacks, the authors address the distribution shift problem through implicit policy constraints with energy-based models (EBMs) rather than explicitly modelling the behaviour policy. EBMs are powerful at representing complex multi-modal distributions and can distinguish in-distribution samples from OOD ones. Experimental results show that their method significantly outperforms the explicit policy constraint method and other baselines. In addition, the learnt energy model can be used to indicate OOD visits and alert to possible failures.
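To make the idea of an implicit, energy-based policy constraint concrete, the following is a minimal sketch (not the authors' code) of how an EBM over state-action pairs could be trained contrastively on an offline dataset and then used to penalise out-of-support actions during policy improvement. All names (EnergyModel, contrastive_ebm_loss, constrained_actor_loss, the weight alpha) are hypothetical illustrations, and the specific losses are assumptions rather than the method described in the paper.

```python
# Sketch of an energy-based implicit policy constraint for offline RL.
# Assumptions: PyTorch, a dataset of (state, action) pairs, and an external
# critic Q(s, a); names and loss forms are illustrative only.
import torch
import torch.nn as nn


class EnergyModel(nn.Module):
    """Maps a (state, action) pair to a scalar energy; low energy is meant
    to indicate in-distribution pairs, high energy OOD pairs."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1)).squeeze(-1)


def contrastive_ebm_loss(ebm: EnergyModel,
                         state: torch.Tensor,
                         dataset_action: torch.Tensor,
                         policy_action: torch.Tensor) -> torch.Tensor:
    """Push energy down on dataset actions (positives) and up on actions
    sampled from the current policy (negatives)."""
    pos = ebm(state, dataset_action).mean()
    neg = ebm(state, policy_action.detach()).mean()
    return pos - neg  # minimising lowers in-distribution energy


def constrained_actor_loss(ebm: EnergyModel,
                           critic,
                           state: torch.Tensor,
                           policy_action: torch.Tensor,
                           alpha: float = 1.0) -> torch.Tensor:
    """Actor objective: maximise Q while paying an energy penalty, which
    implicitly keeps chosen actions close to the data support without
    explicitly modelling the behaviour policy."""
    q = critic(state, policy_action)
    energy = ebm(state, policy_action)
    return (-q + alpha * energy).mean()
```

In the same spirit, the learnt energy could be thresholded at evaluation time to flag state-action pairs whose energy exceeds the values seen on the dataset, matching the abstract's note that the energy model can indicate OOD visits and alert to possible failures.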

Keywords: artificial intelligence, artificial neural network, learning (artificial intelligence), planning (artificial intelligence)

Classification: TP18 [Automation and Computer Technology / Control Theory and Control Engineering]
