Authors: ZHENG Yanbin [1,2]; FAN Wenxin; HAN Mengyun; TAO Xueli
Affiliations: [1] College of Computer and Information Engineering, Henan Normal University, Xinxiang, Henan 453007, China; [2] Henan Engineering Laboratory of Smart Commerce and Internet of Things Technologies, Xinxiang, Henan 453007, China
Source: Journal of Computer Applications, 2020, No. 6, pp. 1613-1620 (8 pages)
Funding: National Natural Science Foundation of China (U1604156); Youth Fund of Henan Normal University (2017QK20).
Abstract: The multi-agent collaborative pursuit problem is a typical problem in multi-agent coordination and collaboration research. Aiming at the pursuit of a single escaper with learning ability, a multi-agent collaborative pursuit algorithm based on game theory and Q-learning was proposed. First, a cooperative pursuit team was established and a game model of cooperative pursuit was built. Second, by learning the escaper's strategy choices, the escaper's finite Step-T cumulative-reward trajectory was established and incorporated into the pursuers' strategy set. Finally, the Nash equilibrium of the cooperative pursuit game was solved, and each agent executed its equilibrium strategy to complete the pursuit task. In addition, since the game may admit multiple equilibrium solutions, a fictitious-play ("virtual action") behavior selection algorithm was added to select the optimal equilibrium strategy. C# simulation experiments show that the proposed algorithm can effectively solve the pursuit of a single learning escaper in an obstacle environment, and comparative analysis of the experimental data shows that, under the same conditions, its pursuit efficiency is better than that of pure-game or pure-learning pursuit algorithms.
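The abstract combines two standard ingredients: a tabular Q-learning update (used to model the escaper's strategy choices) and fictitious play (used to select among multiple Nash equilibria). A minimal sketch of both, with a toy two-player coordination game standing in for the paper's pursuit game; the function names, payoff matrices, and hyperparameters below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
    return Q

def fictitious_play(payoff_row, payoff_col, steps=2000):
    """Fictitious play on a bimatrix game: each player best-responds to the
    opponent's empirical action frequencies. The frequency profile tends
    toward an equilibrium in games of this class."""
    n, m = payoff_row.shape
    counts_row = np.ones(n)  # empirical counts of the row player's actions (uniform prior)
    counts_col = np.ones(m)  # empirical counts of the column player's actions
    for _ in range(steps):
        # Best response of each player to the opponent's empirical mixed strategy.
        a_row = np.argmax(payoff_row @ (counts_col / counts_col.sum()))
        a_col = np.argmax((counts_row / counts_row.sum()) @ payoff_col)
        counts_row[a_row] += 1
        counts_col[a_col] += 1
    return counts_row / counts_row.sum(), counts_col / counts_col.sum()

# Toy coordination game with two pure Nash equilibria, (0,0) and (1,1);
# the (0,0) outcome gives both players the higher payoff.
A = np.array([[2.0, 0.0], [0.0, 1.0]])  # row player's payoffs
B = np.array([[2.0, 0.0], [0.0, 1.0]])  # column player's payoffs
p, q = fictitious_play(A, B)  # empirical frequencies concentrate on action 0
```

In this sketch, fictitious play resolves the equilibrium-selection problem the abstract mentions: starting from uniform beliefs, both players' best responses lock onto the payoff-dominant equilibrium, and the empirical frequencies converge to it.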
Keywords: multi-agent; cooperative pursuit; game theory; Q-learning; reinforcement learning
Classification: TP24 (Automation and Computer Technology: Detection Technology and Automatic Equipment)