Authors: XIA Yuqi; HUANG Yanyan[1]; CHEN Qia (School of Automation, Nanjing University of Science and Technology, Nanjing 210094, China)
Affiliation: [1] School of Automation, Nanjing University of Science and Technology, Nanjing 210094, Jiangsu, China
Source: Systems Engineering and Electronics (《系统工程与电子技术》), 2024, No. 9, pp. 3070-3081 (12 pages)
Fund: Supported by the National Natural Science Foundation of China (61374186).
Abstract: In urban battlefield environments, unmanned reconnaissance vehicles help command centers better understand the situation in target areas, improve decision-making accuracy, and reduce the threats to military operations. At present, unmanned reconnaissance vehicles mostly use Ackermann steering geometry, and the paths planned by traditional algorithms do not conform to the vehicle's kinematic model. To address this, the bicycle kinematic model is combined with a deep Q-network to generate the motion trajectory of the unmanned reconnaissance vehicle in an end-to-end manner. To overcome the deep Q-network's slow learning speed and poor generalization ability, a deep Q-network based on experience classification is proposed according to the training characteristics of neural networks, together with a state space that offers a degree of generalization. Simulation results show that, compared with traditional path planning algorithms, the paths planned by the proposed algorithm better match the motion of the unmanned reconnaissance vehicle, while improving its learning efficiency and generalization ability.
Classification code: TP242 [Automation and Computer Technology - Detection Technology and Automation Devices]
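The abstract refers to the kinematic bicycle model as the motion constraint for the Ackermann-steered reconnaissance vehicle. Below is a minimal sketch of that standard model for illustration only; the function name, wheelbase, and time step are illustrative assumptions and are not taken from the paper.

```python
import math

def bicycle_step(x, y, theta, v, delta, wheelbase=2.0, dt=0.1):
    """Advance the standard kinematic bicycle model by one time step.

    x, y      -- rear-axle position
    theta     -- heading angle (rad)
    v         -- longitudinal speed
    delta     -- front-wheel steering angle (rad), e.g. a discrete action chosen by a DQN
    wheelbase -- distance between the axles (illustrative value)
    dt        -- integration step (illustrative value)
    """
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += (v / wheelbase) * math.tan(delta) * dt
    return x, y, theta
```

Choosing the steering angle from a small discrete set at each step lets a deep Q-network output actions the vehicle can actually execute, which is the kinematic consistency the abstract contrasts with paths produced by traditional planners.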