Authors: Zhenyi ZHANG, Jie HUANG, Congjie PAN
Affiliations: [1] College of Electrical Engineering and Automation, Fuzhou University, Fuzhou 350108, China; [2] G+Industrial Internet Institute, Fuzhou University, Fuzhou 350108, China
Source: Frontiers of Information Technology & Electronic Engineering, 2024, No. 6, pp. 869-886 (18 pages)
Funding: Project supported by the National Natural Science Foundation of China (No. 92367109).
Abstract: Reinforcement learning behavioral control (RLBC) is limited to an individual agent without any swarm mission, because it models behavior priority learning as a Markov decision process. In this paper, a novel multi-agent reinforcement learning behavioral control (MARLBC) method is proposed to overcome this limitation through joint learning. Specifically, a multi-agent reinforcement learning mission supervisor (MARLMS) is designed for a group of nonlinear second-order systems to assign behavior priorities at the decision layer. By modeling behavior priority switching as a cooperative Markov game, the MARLMS learns an optimal joint behavior priority, reducing dependence on human intelligence and high-performance computing hardware. At the control layer, a group of second-order reinforcement learning controllers is designed to learn optimal control policies that track position and velocity signals simultaneously. In particular, input saturation constraints are strictly enforced by designing a group of adaptive compensators. Numerical simulation results show that the proposed MARLBC method achieves a lower switching frequency and lower control cost than finite-time and fixed-time behavioral control and RLBC methods.
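The decision-layer idea described in the abstract, modeling behavior priority switching as a cooperative Markov game with a shared team reward, can be sketched with independent Q-learning over a toy setup. This is a minimal illustration, not the paper's implementation: the behavior set, mission states, transition rule, and reward below are all assumptions made for demonstration.

```python
# Illustrative sketch: behavior priority switching as a cooperative Markov
# game, solved by independent Q-learning with a SHARED reward (all agents
# receive the same scalar, which makes the game cooperative).
# All names, states, and the reward rule are toy assumptions.
import random

BEHAVIORS = ["move_to_goal", "avoid_obstacle", "keep_formation"]  # hypothetical
N_AGENTS = 3
STATES = range(4)  # toy discrete mission states


def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    # One Q-table per agent: Q[agent][state][behavior_index]
    Q = [[[0.0] * len(BEHAVIORS) for _ in STATES] for _ in range(N_AGENTS)]

    def team_reward(state, joint):
        # Shared reward: fraction of agents whose chosen behavior matches the
        # (toy) behavior demanded by the current mission state.
        target = state % len(BEHAVIORS)
        return sum(1.0 for b in joint if b == target) / N_AGENTS

    for _ in range(episodes):
        s = rng.choice(list(STATES))
        for _step in range(10):
            # Each agent picks a behavior epsilon-greedily from its own Q-table.
            joint = []
            for i in range(N_AGENTS):
                if rng.random() < eps:
                    joint.append(rng.randrange(len(BEHAVIORS)))
                else:
                    joint.append(max(range(len(BEHAVIORS)), key=lambda a: Q[i][s][a]))
            r = team_reward(s, joint)
            s2 = rng.choice(list(STATES))  # toy random mission-state transition
            # Every agent updates on the SAME reward (cooperative game).
            for i, a in enumerate(joint):
                Q[i][s][a] += alpha * (r + gamma * max(Q[i][s2]) - Q[i][s][a])
            s = s2
    return Q


Q = train()
# After training, each agent's greedy behavior in a state is its learned
# priority; under the shared reward the agents converge to a joint choice.
priorities = [max(range(len(BEHAVIORS)), key=lambda a: Q[i][0][a])
              for i in range(N_AGENTS)]
```

In the paper's setting the supervisor additionally learns an optimal *joint* priority for nonlinear second-order systems; this sketch only shows why a shared reward aligns independently learning agents on a common behavior choice.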
Keywords: Reinforcement learning; Behavioral control; Second-order systems; Mission supervisor
Classification: TP18 [Automation and Computer Technology - Control Theory and Control Engineering]