Safe Q-Learning for Data-Driven Nonlinear Optimal Control With Asymmetric State Constraints  


Authors: Mingming Zhao, Ding Wang, Shijie Song, Junfei Qiao

Affiliations: [1] School of Information Science and Technology, Beijing Key Laboratory of Computational Intelligence and Intelligent System, Beijing Laboratory of Smart Environmental Protection, and Beijing Institute of Artificial Intelligence, Beijing University of Technology, Beijing 100124, China; [2] IEEE; [3] School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China

Source: IEEE/CAA Journal of Automatica Sinica, 2024, No. 12, pp. 2408-2422 (15 pages)

Funding: Supported in part by the National Science and Technology Major Project (2021ZD0112302) and the National Natural Science Foundation of China (62222301, 61890930-5, 62021003).

Abstract: This article develops a novel data-driven safe Q-learning method to design a safe optimal controller that guarantees the constrained states of nonlinear systems always remain in the safe region while providing optimal performance. First, we design an augmented utility function, consisting of an adjustable positive definite control barrier function and a quadratic form of the next state, to ensure both safety and optimality. Second, by exploiting a pre-designed admissible policy for initialization, an off-policy stabilizing value iteration Q-learning (SVIQL) algorithm is presented to seek the safe optimal policy using offline data collected within the safe region rather than a mathematical model. Third, the monotonicity, safety, and optimality of the SVIQL algorithm are proven theoretically. To obtain the initial admissible policy for SVIQL, an offline VIQL algorithm with zero initialization is constructed, and a new admissibility criterion is established for immature iterative policies. Moreover, critic and action networks with precise approximation ability are established to support the operation of the VIQL and SVIQL algorithms. Finally, three simulation experiments are conducted to demonstrate the effectiveness and superiority of the developed safe Q-learning method.
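The core idea in the abstract, i.e., value iteration Q-learning over a utility augmented with a barrier term and a quadratic next-state term, can be illustrated with a deliberately simplified tabular sketch. Everything below is an assumption for illustration only, not the paper's SVIQL: the scalar linear system, the reciprocal barrier form, the grid discretization, and the added discount factor (introduced solely so this toy iteration contracts; the paper's data-driven, neural-network-based formulation differs).

```python
import numpy as np

# Toy setup (all values hypothetical): scalar system x_{t+1} = 0.8 x_t + u_t
# with an asymmetric safe region (x_lo, x_hi).
x_lo, x_hi = -1.0, 2.0

def barrier(x):
    # Simple reciprocal barrier: grows without bound near either boundary.
    return 0.1 * (1.0 / (x - x_lo) + 1.0 / (x_hi - x))

def utility(x, u, x_next):
    # Augmented utility: quadratic stage cost + barrier + next-state quadratic.
    return x**2 + u**2 + barrier(x) + 0.1 * x_next**2

xs = np.linspace(x_lo + 0.05, x_hi - 0.05, 61)  # state grid inside safe region
us = np.linspace(-1.5, 1.5, 31)                 # action grid

def step(x, u):
    # Planning model; clipped to the grid so lookups stay in the safe region.
    return np.clip(0.8 * x + u, xs[0], xs[-1])

# Precompute stage utilities and next-state grid indices.
U = np.empty((xs.size, us.size))
NXT = np.empty((xs.size, us.size), dtype=int)
for i, x in enumerate(xs):
    for j, u in enumerate(us):
        xn = step(x, u)
        U[i, j] = utility(x, u, xn)
        NXT[i, j] = int(np.abs(xs - xn).argmin())

gamma = 0.95               # discount added only for this toy sketch
Q = np.zeros_like(U)       # zero initialization (cf. the offline VIQL phase)
for _ in range(300):
    V = Q.min(axis=1)      # V_k(x)      = min_u Q_k(x, u)
    Q = U + gamma * V[NXT] # Q_{k+1}(x,u) = U(x, u) + gamma * V_k(x')

policy = us[Q.argmin(axis=1)]  # greedy policy on the grid
```

Because the barrier term makes the utility blow up near either constraint boundary, the learned greedy policy steers trajectories away from the asymmetric bounds while the quadratic terms pull the state toward the origin.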

Keywords: adaptive critic control; adaptive dynamic programming (ADP); control barrier functions (CBF); stabilizing value iteration Q-learning (SVIQL); state constraints

Classification codes: O232 (Operations Research and Cybernetics); TP18 (Artificial Intelligence Theory)
