Affiliation: [1] School of Computer Science and Information Engineering, Hefei University of Technology, Hefei, China
Source: Proceedings of the International Conference of Pioneering Computer Scientists, Engineers and Educators (ICPCSEE), 2022, No. 1, pp. 163-177 (15 pages)
Abstract: Traditional multi-agent deep reinforcement learning suffers from difficulty in obtaining rewards, slow convergence, and ineffective cooperation among agents during the pretraining period, owing to the large joint state space and sparse action rewards. This paper therefore discusses the role of demonstration data in multi-agent systems and proposes a multi-agent deep reinforcement learning algorithm with adaptive-weight fusion of demonstration data. The algorithm sets the weights according to the performance of the demonstrations and uses importance sampling to bridge the bias in the mixed sampled data, combining expert data obtained in the simulation environment with a distributed multi-agent reinforcement learning algorithm to address the difficult problem of global exploration and improve the convergence speed. Results in the RoboCup 2D soccer simulation environment show that the algorithm improves the agents' ability to hold and shoot the ball, achieving a higher goal-scoring rate and faster convergence than the demonstration policies and mainstream multi-agent reinforcement learning algorithms.
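The fusion mechanism the abstract describes (weighting demonstration data by its performance and correcting the resulting distribution shift with importance sampling) can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's implementation: the helper names (adaptive_demo_weight, sample_mixed_batch, current_policy_prob) and the tuple-based replay buffers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def current_policy_prob(state, action):
    # Placeholder for pi_theta(a|s) under the learner's current policy;
    # a real agent would query its policy network here.
    return 0.5

def adaptive_demo_weight(demo_return, agent_return, eps=1e-8):
    # Performance-based weight: the better the demonstrations score
    # relative to the agent, the larger their share of each minibatch.
    w = demo_return / (abs(demo_return) + abs(agent_return) + eps)
    return float(np.clip(w, 0.0, 1.0))

def sample_mixed_batch(demo_buffer, agent_buffer, batch_size, demo_weight):
    # Mix demonstration and self-generated transitions; each demo sample
    # carries an importance-sampling ratio that bridges the gap between
    # the demonstration (behavior) policy and the current policy.
    batch = []
    n_demo = int(batch_size * demo_weight)
    for _ in range(n_demo):
        s, a, r, s_next, behavior_prob = demo_buffer[rng.integers(len(demo_buffer))]
        rho = current_policy_prob(s, a) / max(behavior_prob, 1e-8)
        batch.append((s, a, r, s_next, min(rho, 10.0)))  # clip for stability
    for _ in range(batch_size - n_demo):
        s, a, r, s_next, _ = agent_buffer[rng.integers(len(agent_buffer))]
        batch.append((s, a, r, s_next, 1.0))  # on-distribution: ratio 1
    return batch

# Toy usage with (state, action, reward, next_state, behavior_prob) tuples.
demo_buffer = [(0, 1, 1.0, 1, 0.9), (1, 0, 0.5, 2, 0.8)]
agent_buffer = [(0, 0, 0.0, 1, 0.5), (2, 1, 0.2, 0, 0.5)]
w = adaptive_demo_weight(demo_return=0.8, agent_return=0.3)
print(sample_mixed_batch(demo_buffer, agent_buffer, batch_size=4, demo_weight=w))
```

Clipping the importance ratio is a common variance-control choice when mixing off-policy demonstration data; the paper's exact correction may differ.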
Keywords: Multi-agent deep reinforcement learning; Exploration; Offline reinforcement learning; Importance sampling
Classification: TP3 [Automation and Computer Technology / Computer Science and Technology]