Source: Computer Science (《计算机科学》), 2014, No. 1, pp. 290-292 (3 pages)
Funding: Supported by the Fundamental Research Funds for the Central Universities (CDJZR10180014)
Abstract: The traditional reinforcement learning negotiation strategy compromises too quickly, which lowers the agent's own utility. To address this shortcoming, an improved reinforcement learning bilateral multi-issue negotiation model is designed that introduces an expectation restoration rate to restore the agent's expectation, thereby improving the quality of the negotiation outcome. Experiments analyze the influence of different values of the expectation restoration rate on negotiation and compare the results of the traditional reinforcement learning strategy, a time-based strategy, and the improved reinforcement learning strategy. The results show that, within the allowed number of negotiation rounds, the improved reinforcement learning algorithm based on the expectation restoration rate raises the utility of both parties in bilateral multi-issue negotiation.
Classification: TP301.6 [Automation and Computer Technology - Computer System Architecture]
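The negotiation mechanism summarized in the abstract can be illustrated with a minimal sketch. It assumes a zero-sum, single-issue setting and treats the expectation restoration rate as the fraction of each concession that is added back to the agent's aspiration so that it does not drop too fast; the names (NegotiationAgent, restoration_rate, learning_rate) and the update rule are illustrative assumptions, not the notation or algorithm published in the paper.

```python
# Hypothetical sketch of the "expectation restoration rate" idea: an agent's
# aspiration (expected utility) normally decays as it concedes toward the
# opponent's offer, and a restoration rate adds back part of each concession
# so the agent does not compromise too quickly. All names and update rules
# here are illustrative assumptions, not the authors' published algorithm.

from dataclasses import dataclass


@dataclass
class NegotiationAgent:
    aspiration: float = 1.0        # current expected utility, normalized to [0, 1]
    reservation: float = 0.3       # lowest acceptable utility
    learning_rate: float = 0.2     # how strongly the agent concedes toward an offer
    restoration_rate: float = 0.5  # fraction of the concession that is restored

    def propose(self) -> float:
        """Offer a utility level equal to the current aspiration."""
        return self.aspiration

    def accepts(self, opponent_offer_utility: float) -> bool:
        """Accept when the opponent's offer meets the current aspiration."""
        return opponent_offer_utility >= self.aspiration

    def update(self, opponent_offer_utility: float) -> None:
        """Concede toward the opponent's offer, then restore part of the concession."""
        old = self.aspiration
        # Plain reinforcement-style concession toward the opponent's offer utility.
        conceded = old + self.learning_rate * (opponent_offer_utility - old)
        # Restore a fraction of the concession to keep the expectation from
        # falling too fast, but never go below the reservation utility.
        restored = conceded + self.restoration_rate * (old - conceded)
        self.aspiration = max(self.reservation, restored)


def simulate(max_rounds: int = 50) -> None:
    """Two agents exchange offers until one accepts or the rounds run out."""
    a, b = NegotiationAgent(), NegotiationAgent(restoration_rate=0.3)
    for t in range(max_rounds):
        # Zero-sum single issue: what A keeps (offer_a), B receives as 1 - offer_a.
        offer_a = a.propose()
        if b.accepts(1.0 - offer_a):
            print(f"round {t}: B accepts, A gets {offer_a:.2f}, B gets {1 - offer_a:.2f}")
            return
        b.update(1.0 - offer_a)
        offer_b = b.propose()
        if a.accepts(1.0 - offer_b):
            print(f"round {t}: A accepts, A gets {1 - offer_b:.2f}, B gets {offer_b:.2f}")
            return
        a.update(1.0 - offer_b)
    print("no agreement within the allowed number of rounds")


if __name__ == "__main__":
    simulate()
```

In this sketch the effective concession speed is learning_rate * (1 - restoration_rate), so a higher restoration rate slows the drop in expectation, which is the trade-off the paper's experiments examine against the allowed number of negotiation rounds.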