Affiliation: [1] Department of Computer and Information Technology, Fudan University, Shanghai 200433, China
Source: Journal of Chinese Computer Systems (《小型微型计算机系统》), 2009, No. 7, pp. 1268-1273 (6 pages)
Funding: Supported by the National Basic Research Program of China ("973" Program) (2005CB321906)
Abstract: Machine learning has received wide attention and in-depth study for coordination and action selection in multi-agent systems. This paper analyzes equilibrium-based and best-response-based learning algorithms, and proposes two reinforcement learning algorithms for dynamic policies in mixed multi-agent environments. The algorithms not only adapt to the behavior policies of other agents in the system and to changes in those policies, but also exploit past behavior history to construct more accurate time-dependent policies. Their convergence and rationality are validated on two well-known zero-sum games, where they obtain higher payoffs in repeated play against best-response agents.
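The record does not spell out the two proposed algorithms, so the following is only a minimal illustrative sketch of the general idea the abstract describes: an agent that builds a model of its opponent from past behavior history and plays repeatedly against a best-response opponent in a well-known zero-sum matrix game. The matching-pennies payoff matrix, the count-based opponent model, and all names here are assumptions for illustration, not the paper's method.

```python
import numpy as np

# Matching pennies, a well-known zero-sum matrix game (an assumed example;
# the paper's two benchmark games are not named in this record).
# Entries are the row player's payoff; the column player gets the negation.
PAYOFF = np.array([[1.0, -1.0],
                   [-1.0, 1.0]])

rng = np.random.default_rng(0)

def best_response_opponent(learner_history):
    """Column player that best-responds to the learner's empirical action frequencies."""
    if not learner_history:
        return int(rng.integers(2))
    freq = np.bincount(learner_history, minlength=2) / len(learner_history)
    # In a zero-sum game the column player minimizes the row player's expected payoff.
    return int(np.argmin(PAYOFF.T @ freq))

# Learner: a count-based opponent model built from the behavior history,
# played epsilon-greedily; a stand-in for the abstract's "time-dependent policy".
opp_counts = np.ones(2)   # Laplace-smoothed counts of the opponent's past actions
my_history = []
epsilon, total = 0.1, 0.0

for t in range(10_000):
    opp_model = opp_counts / opp_counts.sum()
    if rng.random() < epsilon:
        a = int(rng.integers(2))                  # explore
    else:
        a = int(np.argmax(PAYOFF @ opp_model))    # exploit the opponent model
    b = best_response_opponent(my_history)
    total += PAYOFF[a, b]
    my_history.append(a)
    opp_counts[b] += 1

print(f"average payoff vs. best-response opponent: {total / 10_000:+.3f}")
```

The game value of matching pennies is zero, so a learner that merely best-responds to an empirical model hovers near zero average payoff; the abstract's claim is that more accurate time-dependent policies can do better than this baseline in repeated games against best-response agents.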
Classification: TP18 [Automation and Computer Technology — Control Theory and Control Engineering]