Imaginary filtered hindsight experience replay for UAV tracking dynamic targets in large-scale unknown environments  (Cited by: 2)


Authors: Zijian HU, Xiaoguang GAO, Kaifang WAN, Neretin EVGENY, Jinliang LI

Affiliations: [1] School of Electronics and Information, Northwestern Polytechnical University, Xi'an 710129, China; [2] School of Robotic and Intelligent Systems, Moscow Aviation Institute (National Research University), Moscow 125993, Russia; [3] Electromagnetic Space Operations and Applications Laboratory, The 29th Research Institute of China Electronics Technology Group Corporation, Chengdu 610036, China

Source: Chinese Journal of Aeronautics, 2023, No. 5, pp. 377-391 (15 pages)

Funding: co-supported by the National Natural Science Foundation of China (Nos. 62003267 and 61573285); the Natural Science Basic Research Plan in Shaanxi Province of China (No. 2020JQ-220); the Open Project of Science and Technology on Electronic Information Control Laboratory, China (No. JS20201100339); the Open Project of Science and Technology on Electromagnetic Space Operations and Applications Laboratory, China (No. JS20210586512).

Abstract: As advanced combat weapons, Unmanned Aerial Vehicles (UAVs) have been widely used in military operations. In this paper, we formulate the Autonomous Navigation Control (ANC) problem of UAVs as a Markov Decision Process (MDP) and propose a novel Deep Reinforcement Learning (DRL) method that allows UAVs to perform dynamic target tracking tasks in large-scale unknown environments. To address the problem of limited training experience, the proposed Imaginary Filtered Hindsight Experience Replay (IFHER) generates successful episodes by plausibly imagining the target trajectory in each failed episode, thereby augmenting the experiences. The well-designed goal, episode, and quality filtering strategies ensure that only high-quality augmented experiences are stored, while the sampling filtering strategy of IFHER ensures that these stored augmented experiences are fully learned according to their high priorities. Trained in a complex environment constructed from the parameters of a real UAV, the proposed IFHER algorithm improves the convergence speed by 28.99% and the convergence result by 11.57% compared to the state-of-the-art Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm. Testing experiments carried out in environments of different complexities demonstrate the strong robustness and generalization ability of the IFHER agent. Moreover, the flight trajectory of the IFHER agent shows the superiority of the learned policy and the practical application value of the algorithm.
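The core idea the abstract describes — turning a failed episode into a successful one by relabeling it with an "imagined" goal, subject to a quality filter — follows the general pattern of Hindsight Experience Replay. The following is a minimal illustrative sketch of that relabeling step, not the paper's actual implementation; the transition fields, the `quality_filter` hook, and the sparse-reward scheme are all assumptions standing in for IFHER's goal, episode, and quality filtering strategies.

```python
from collections import namedtuple

# A goal-conditioned transition; field names are illustrative, not from the paper.
Transition = namedtuple("Transition", "state action goal achieved reward done")

def relabel_episode(episode, quality_filter=lambda ep: True):
    """HER-style relabeling: replay a failed episode as if the finally
    achieved state had been the intended goal all along.  IFHER additionally
    filters which augmented episodes are kept; `quality_filter` is a
    placeholder for those paper-specific filtering strategies."""
    if not episode or not quality_filter(episode):
        return []                              # filtered out: store nothing
    new_goal = episode[-1].achieved            # imagined goal = what was achieved
    relabeled = []
    for t in episode:
        reached = (t.achieved == new_goal)     # sparse success signal
        relabeled.append(Transition(t.state, t.action, new_goal,
                                    t.achieved,
                                    1.0 if reached else 0.0,
                                    reached))
    return relabeled
```

In a full system these relabeled transitions would be pushed into a prioritized replay buffer alongside the originals, so the agent learns from "successes" even when the true tracking goal was missed.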

Keywords: Artificial intelligence; Autonomous navigation control; Deep reinforcement learning; Hindsight experience replay; UAV

Classification: V279 [Aerospace Science and Technology: Aircraft Design]; V249

 
