Source: Application Research of Computers, 2015, No. 5, pp. 1335-1338, 1344 (5 pages)
Funding: National Natural Science Foundation of China General Program (71371018); Beijing Social Science Planning Project (13JDJGB037)
Abstract: To improve the self-learning ability of traditional negotiation, this paper applies multi-agent intelligent technology: it designs a negotiation framework based on the blackboard model, constructs a five-tuple negotiation model, and derives a negotiation strategy from a Q-reinforcement-learning algorithm. An RBF neural network is then used to further optimize the strategy, predicting the opponent's information and adjusting the concession extent. A worked example verifies the feasibility and validity of the method; compared with the unimproved Q-reinforcement-learning algorithm, it strengthens the self-learning ability of the negotiation agents, shortens negotiation time, and improves the efficiency of conflict resolution.
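The abstract does not give the paper's concrete formulation, but the core idea of a Q-learning-based concession strategy can be illustrated with a minimal sketch. The sketch below is an assumption-laden toy, not the authors' model: the state discretization, the candidate concession fractions in ACTIONS, the reward shaping, and the opponent behavior are all illustrative choices, and the RBF-based opponent prediction described in the abstract is omitted.

```python
# Minimal sketch (not the paper's implementation): tabular Q-learning for a
# negotiating agent that picks a concession step each round.
# State: discretized gap between own offer and the opponent's offer.
# Action: index into a list of candidate concession fractions.
import random

N_STATES = 10                          # discretized offer-gap levels (assumption)
ACTIONS = [0.00, 0.02, 0.05, 0.10]     # candidate concession fractions (assumption)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]

def choose_action(state: int) -> int:
    """Epsilon-greedy selection over concession steps."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    row = Q[state]
    return row.index(max(row))

def q_update(state: int, action: int, reward: float, next_state: int) -> None:
    """Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[next_state])
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])

def run_episode(own: float = 1.0, opponent: float = 0.6, max_rounds: int = 20):
    """Toy episode: the agent concedes toward a fixed opponent offer."""
    for _ in range(max_rounds):
        gap = own - opponent
        state = min(N_STATES - 1, max(0, int(gap * N_STATES)))
        a = choose_action(state)
        own -= ACTIONS[a]                       # concede by the chosen fraction
        next_gap = own - opponent
        next_state = min(N_STATES - 1, max(0, int(next_gap * N_STATES)))
        if next_gap <= 0:                       # agreement reached
            q_update(state, a, reward=own, next_state=next_state)
            return own
        q_update(state, a, reward=-0.01, next_state=next_state)  # small time penalty
    return None                                 # no agreement within the round limit

if __name__ == "__main__":
    for _ in range(200):
        run_episode()
```

In this toy setting the small per-round penalty encodes the time pressure the abstract alludes to (shorter negotiation time), while the terminal reward favors agreements closer to the agent's own position.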