Authors: XUAN Shuzhe; KE Liangjun[1,2]
Affiliations: [1] State Key Laboratory for Manufacturing Systems Engineering, Xi'an 710049, Shaanxi, China; [2] School of Automation Science and Engineering, Xi'an Jiaotong University, Xi'an 710049, Shaanxi, China
Source: Radio Engineering (《无线电工程》), 2021, No. 5, pp. 360-366 (7 pages)
Funding: Supported by the National Natural Science Foundation of China (61973244, 61573277).
Abstract: To address the attack-defense confrontation problem of large-scale unmanned aerial vehicle (UAV) swarms, an improved multi-agent algorithm (Multi-agent Proximal Policy Optimization, M-PPO) based on Proximal Policy Optimization (PPO) is proposed. The algorithm adopts the Actor-Critic framework, but unlike PPO, it achieves cooperation among agents by pairing a Critic network that receives global information with Actor networks that receive only local information. In addition, it follows a centralized-training, decentralized-execution scheme, so the trained model can cooperate without relying on communication. To evaluate the algorithm's performance, a large-scale UAV swarm confrontation platform that accounts for UAV flight constraints and a realistic flight environment was designed, and simulation experiments were conducted. The results show that M-PPO significantly outperforms mainstream algorithms such as PPO and Deep Deterministic Policy Gradient (DDPG) on the attack-defense confrontation problem.
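The two ingredients the abstract names, PPO's clipped surrogate objective and the centralized-critic/decentralized-actor split, can be sketched as follows. This is a minimal illustration of the general technique, not the authors' implementation; the function name, the toy shapes, and the use of NumPy are all assumptions for exposition.

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO clipped surrogate objective, averaged over samples.

    ratio:     pi_new(a|s) / pi_old(a|s) per sample
    advantage: advantage estimate per sample; in a centralized-critic
               setup such as the one the abstract describes, this would
               come from a critic that sees the joint (global) state.
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # Clipping caps the incentive to move the new policy far from the old one.
    return np.mean(np.minimum(unclipped, clipped))

# Centralized training, decentralized execution (illustrative shapes only):
# each agent's actor consumes its own local observation, while the single
# critic consumes the concatenation of all agents' observations.
n_agents, obs_dim = 3, 4
local_obs = np.zeros((n_agents, obs_dim))   # per-agent actor inputs
global_state = local_obs.reshape(-1)        # critic input, shape (n_agents * obs_dim,)
```

At execution time only the actors (and their local observations) are needed, which is why the trained policies can cooperate without communication.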
Classification: TP391.4 (Automation and Computer Technology: Computer Application Technology)