Model-free method for LQ mean-field social control problems with one-dimensional state space  


Authors: Zhenhui Xu, Tielong Shen

Affiliations: [1] School of Engineering, Tokyo Institute of Technology, Tokyo 152-8550, Japan; [2] Department of Engineering and Applied Sciences, Sophia University, Tokyo 102-8554, Japan

Source: Control Theory and Technology, 2024, Issue 3, pp. 479-486 (8 pages)

Abstract: This paper presents a novel model-free method to solve linear quadratic (LQ) mean-field social control problems with one-dimensional state space and multiplicative noise. The focus is on the infinite-horizon LQ setting, where the conditions for either stabilization or optimality can be formulated as two algebraic Riccati equations (AREs). The proposed approach leverages the integral reinforcement learning technique to iteratively solve the drift-coefficient-dependent stochastic ARE (SARE) and another, indefinite ARE, without requiring knowledge of the system dynamics. A numerical example is given to demonstrate the effectiveness of the proposed algorithm.
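To illustrate the kind of iteration the abstract refers to, the sketch below runs a Kleinman-type policy iteration on a scalar deterministic LQ problem: each step solves a Lyapunov equation for the current gain, then updates the gain, converging to the stabilizing root of the ARE. This is a minimal model-based analogue only; the paper's actual method is model-free (integral reinforcement learning, without knowing the drift coefficient) and handles multiplicative noise. All coefficients here are hypothetical.

```python
# Scalar LQ problem: dx = (a*x + b*u) dt, cost = integral of (q*x^2 + r*u^2) dt.
# Kleinman policy iteration on the scalar ARE  2*a*p - (b**2/r)*p**2 + q = 0.
a, b, q, r = -1.0, 1.0, 1.0, 1.0   # hypothetical system/cost coefficients
k = 0.0                            # initial stabilizing gain (a - b*k < 0 holds)

for _ in range(20):
    # Policy evaluation: solve the Lyapunov equation
    #   2*(a - b*k)*p + q + r*k**2 = 0  for the value coefficient p.
    p = (q + r * k**2) / (2 * (b * k - a))
    # Policy improvement: new gain k = b*p/r.
    k = b * p / r

# p now approximates the stabilizing ARE root; the residual should be ~0.
residual = 2 * a * p - (b**2 / r) * p**2 + q
```

With these coefficients the iteration converges quadratically to p = sqrt(2) - 1. The paper's model-free scheme replaces the explicit Lyapunov solve with a least-squares fit to integral reinforcement signals collected along system trajectories.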

Keywords: Mean-field control; Social optima; Infinite horizon; Reinforcement learning

Classification codes: O17 [Science—Mathematics]; TP39 [Science—Fundamental Mathematics]
