Study on Attack-Defense Countermeasure of UAV Swarms Based on Multi-agent Reinforcement Learning (Cited by: 12)


Authors: XUAN Shuzhe; KE Liangjun[1,2] (State Key Laboratory for Manufacturing Systems Engineering, Xi'an 710049, China; School of Automation Science and Engineering, Xi'an Jiaotong University, Xi'an 710049, China)

Affiliations: [1] State Key Laboratory for Manufacturing Systems Engineering, Xi'an, Shaanxi 710049, China; [2] School of Automation Science and Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China

Source: Radio Engineering (《无线电工程》), 2021, No. 5, pp. 360-366 (7 pages)

Funding: Supported by the National Natural Science Foundation of China (61973244, 61573277).

摘  要:针对大规模无人机集群攻防对抗问题,提出了一种基于近端策略优化(Proximal Policy Optimization,PPO)的改进多智能体(Multi-agent Proximal Policy Optimization,M-PPO)算法。该算法采用了Actor-Critic框架,但与PPO不同,为实现智能体之间的协作,算法使用了包含全局信息的Critic网络和局部信息的Actor网络。此外,算法采用了集中训练、分散执行的框架,训练得到的模型能够在不依赖通信的基础上实现协作。为了研究该算法的性能,设计了一个考虑无人机飞行约束和真实飞行环境的大型无人机集群对抗平台,并进行仿真实验。实验结果表明,M-PPO算法在攻防对抗问题中的效果显著优于PPO和深度确定性策略梯度(Deep Deterministic Policy Gradient,DDPG)等主流算法。In order to solve the problem of attack-defense countermeasure of large-scale unmanned aerial vehicle(UAV)swarm,an improved Multi-agent algorithm(Multi-agent Proximal Policy Optimization,M-PPO)based on proximal policy optimization algorithm(PPO)is proposed.The algorithm uses Actor-Critic framework.Unlike PPO,M-PPO uses the Critic network with global information and the Actor network with local information to achieve the cooperation between agents.In addition,the algorithm adopts the framework of centralized training and decentralized execution.The trained model can achieve cooperation without communication.In order to study the performance of the algorithm,a large UAV swarm countermeasure platform considering UAV flight constraints and real flight environment is designed.The experimental results show that M-PPO algorithm is better than PPO algorithm and deep deterministic policy gradient(DDPG)algorithm.

Keywords: UAV; attack-defense countermeasure; multi-agent reinforcement learning; 3D environment

Classification: TP391.4 (Automation and Computer Technology / Computer Application Technology)

 
