Authors: LI Kai, HUANG Wenhan, LI Chenchen, DENG Xiaotie (李凯; 黄文瀚; 李晨晨; 邓小铁)
Affiliations: [1] Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China; [2] Beijing Jingdong Century Trading Co., Ltd., Beijing 100176, China; [3] Center on Frontiers of Computing Studies, Peking University, Beijing 100871, China
Source: Journal of Shanghai Jiaotong University (Science), 2025, No. 2, pp. 385-398 (14 pages) (上海交通大学学报(英文版))
Funding: Science and Technology Innovation 2030 "New Generation Artificial Intelligence" Major Project (No. 2018AAA0100901).
Abstract: In repeated zero-sum games, instead of constantly playing an equilibrium strategy of the stage game, learning to exploit the opponent based on historical interactions can typically yield a higher utility. However, when playing against a fully adaptive opponent, one has difficulty identifying the opponent's adaptive dynamics and further exploiting its potential weaknesses. In this paper, we study the problem of optimizing against an adaptive opponent who uses no-regret learning, a classic and widely used family of adaptive learning algorithms. We propose a general framework for online modeling of no-regret opponents and exploiting their weaknesses. With this framework, one can approximate the opponent's no-regret learning dynamics and then develop a response plan that obtains a significant profit based on inferences of the opponent's strategies. We employ two system identification architectures, the recurrent neural network (RNN) and the nonlinear autoregressive exogenous model, and adopt an efficient greedy response plan within the framework. Theoretically, we prove the capability of our RNN architecture to approximate specific no-regret dynamics. Empirically, we demonstrate that during interactions with a low level of non-stationarity, our architectures can approximate the dynamics with low error, and the derived policies can exploit the no-regret opponent to obtain a decent utility.
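To make the exploitation idea concrete, below is a minimal sketch of a repeated zero-sum matrix game in which the opponent runs Hedge (multiplicative weights), a classic no-regret algorithm, and the exploiter greedily best-responds to the predicted opponent strategy. In the paper's framework the opponent's update dynamics would be approximated online by a system identification model (RNN or NARX); in this sketch the learned predictor is replaced by the true Hedge update as a stand-in, so it only illustrates the greedy response plan, not the identification step. The payoff matrix A, learning rate eta, and helper hedge_strategy are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Payoff matrix A for the row player (the exploiter); the column player
# (the opponent) receives -A[i, j] when row i and column j are played.
A = np.array([[ 1.0, -1.0,  0.5],
              [-0.5,  1.0, -1.0],
              [ 0.0, -0.5,  1.0]])

eta = 0.1                 # opponent's Hedge learning rate (assumed known here)
w = np.ones(A.shape[1])   # opponent's weights over its actions
T = 200
total_payoff = 0.0

def hedge_strategy(weights):
    # Normalize weights into a mixed strategy.
    return weights / weights.sum()

for t in range(T):
    y = hedge_strategy(w)          # opponent's current mixed strategy
    # Greedy response plan: best-respond to the predicted opponent strategy.
    # (A learned RNN/NARX predictor would supply the prediction; we use y directly.)
    i = int(np.argmax(A @ y))      # pure best response of the row player
    j = rng.choice(len(y), p=y)    # opponent samples an action from y
    total_payoff += A[i, j]
    # Opponent's no-regret (Hedge) update on its realized loss vector A[i, :].
    w *= np.exp(-eta * A[i, :])

print(f"average payoff against the Hedge opponent: {total_payoff / T:.3f}")
```

Because the Hedge dynamics are deterministic given the exploiter's actions, an accurate one-step-ahead predictor is enough for the greedy plan; the harder case studied in the paper is when the opponent's update rule is unknown and must be identified from interaction data.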
Keywords: no-regret learning; repeated game; opponent exploitation; opponent modeling; dynamical system; system identification; recurrent neural network (RNN)
Classification code: TP18 [Automation and Computer Technology: Control Theory and Control Engineering]