Authors: SHAO Haoran; CHEN Jiansong (School of Mechanical Engineering, Southeast University, Nanjing 211189, China)
Source: Mechanical Science and Technology for Aerospace Engineering (《机械科学与技术》), 2024, No. 8, pp. 1411-1417 (7 pages)
Funding: Industry-University Cooperative Education Project of the Department of Higher Education, Ministry of Education (202101204004).
Abstract: To address the low search efficiency and the lack of dynamic obstacle avoidance capability of mobile robots during autonomous obstacle avoidance, an avoidance method combining reinforcement learning (RL) with the reptile search algorithm (RSA) is proposed. A Q-learning model from reinforcement learning is introduced to balance the exploration and exploitation phases of RSA, thereby improving search efficiency; a chaotic mechanism and a random opposition-based learning strategy are introduced to increase population diversity and escape local optima. Simulations in both static and dynamic scenarios show that the RL-RSA algorithm outperforms the comparison algorithms in path length, optimization time, and running time. Real-world experiments verify the feasibility of the RL-RSA algorithm and its strong overall obstacle avoidance performance.
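The abstract names two generic ingredients of the RL-RSA hybrid: a tabular Q-learning update used to steer RSA between exploration and exploitation, and random opposition-based learning used to diversify the population. The paper's actual reward design, state encoding, and bounds handling are not given here, so the sketch below is illustrative only; the state labels, reward values, and clipping are assumptions, not the authors' implementation.

```python
import random

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Tabular Q-learning update: Q(s,a) += alpha*(r + gamma*max_a' Q(s',a') - Q(s,a)).

    In an RL-RSA hybrid, the state could encode search progress and the
    action could select RSA's exploration vs. exploitation phase.
    """
    td_target = reward + gamma * max(q[next_state])
    q[state][action] += alpha * (td_target - q[state][action])
    return q

def random_opposition(x, lb, ub, rng=random):
    """Random opposition-based learning for one candidate solution.

    Each coordinate x_i is mapped to lb + ub - r*x_i with r ~ U(0,1), a
    common diversification move; results are clipped back into [lb, ub].
    """
    return [min(ub, max(lb, lb + ub - rng.random() * xi)) for xi in x]

# Usage sketch: pick an RSA phase by greedy Q-lookup, then diversify a
# stagnant candidate with random opposition.
q = {"early": [0.0, 0.0], "late": [0.0, 0.0]}   # actions: 0=explore, 1=exploit
q = q_update(q, "early", 0, reward=1.0, next_state="late")
phase = max(range(2), key=lambda a: q["early"][a])  # greedy action for "early"

candidate = [0.3, -0.7, 0.9]
opposed = random_opposition(candidate, lb=-1.0, ub=1.0)
```

The Q-table keeps the phase choice adaptive rather than fixed by iteration count, which is the balancing role the abstract attributes to Q-learning.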
CLC Number: TP242 [Automation and Computer Technology / Detection Technology and Automation Devices]