Authors: Qingyang Zhang, Hongming Zhang, Dengpeng Xing, Bo Xu
Affiliations: [1] Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; [2] School of Future Technology, University of Chinese Academy of Sciences, Beijing 100049, China; [3] Department of Computing Science, University of Alberta, Edmonton T6G 2E8, Canada; [4] School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
Source: Machine Intelligence Research, 2025, No. 2, pp. 267-288 (22 pages)
Funding: Supported by the National Key R&D Program of China (No. 2022ZD0116405) and the Strategic Priority Research Program of the Chinese Academy of Sciences, China (No. XDA27030300).
Abstract: Goal-conditioned hierarchical reinforcement learning (GCHRL) decomposes the desired goal into subgoals and conducts exploration and exploitation in the subgoal space. Its effectiveness heavily relies on subgoal representation and selection. However, existing works neither consider the distinct information across hierarchical time scales when learning subgoal representations nor provide a subgoal selection strategy that balances exploration and exploitation. In this paper, we propose a novel method for efficient exploration-exploitation balance in HIerarchical reinforcement learning by dynamically constructing Latent Landmark graphs (HILL). HILL transforms the reward maximization problem of GCHRL into shortest-path planning on graphs. To effectively exploit hierarchical time-scale information, HILL adopts a contrastive representation learning objective to learn informative latent representations. Based on these representations, HILL dynamically constructs latent landmark graphs and selects subgoals using two measures to balance exploration and exploitation. We implement two variants: HILL-hf generates graphs periodically, while HILL-lf generates graphs adaptively. Empirical results on continuous control tasks with sparse rewards demonstrate that both variants outperform state-of-the-art baselines in sample efficiency and asymptotic performance, with HILL-lf further reducing training time by 40% compared to HILL-hf.
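To make the graph-based subgoal selection concrete, here is a minimal Python sketch, not the paper's implementation, of the idea the abstract describes: reward maximization recast as shortest-path planning over a latent landmark graph, with the next subgoal chosen by trading off an exploitation measure (remaining graph distance to the goal) against an exploration measure (a count-based novelty bonus). The landmark coordinates, the nearest-neighbor connectivity, both scoring terms, and the select_subgoal helper are all illustrative assumptions; the abstract does not specify HILL's actual two measures.

```python
# Illustrative sketch only: subgoal selection on a hypothetical latent
# landmark graph. Both scoring terms below are assumptions, not HILL's
# actual measures.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Hypothetical latent landmarks, e.g., cluster centers of visited states
# in a learned contrastive representation space.
landmarks = rng.normal(size=(8, 2))          # 8 landmarks in a 2-D latent space
visit_counts = rng.integers(1, 50, size=8)   # how often each landmark was reached

# Connect each landmark to its 3 nearest neighbors; edge weight is the
# latent distance, so shortest paths approximate latent-space travel cost.
G = nx.Graph()
G.add_nodes_from(range(len(landmarks)))
for i in range(len(landmarks)):
    dists = np.linalg.norm(landmarks - landmarks[i], axis=1)
    for j in np.argsort(dists)[1:4]:         # skip index 0 (the node itself)
        G.add_edge(i, int(j), weight=float(dists[j]))

def select_subgoal(current: int, goal: int, beta: float = 0.5) -> int:
    """Pick the neighboring landmark that best trades off progress toward
    the goal (exploitation) against novelty (exploration)."""
    best, best_score = current, -np.inf
    for nbr in G.neighbors(current):
        try:
            to_goal = nx.shortest_path_length(G, nbr, goal, weight="weight")
        except nx.NetworkXNoPath:
            continue                          # unreachable from this neighbor
        exploit = -to_goal                    # shorter remaining path is better
        explore = 1.0 / np.sqrt(visit_counts[nbr])  # count-based novelty bonus
        score = exploit + beta * explore
        if score > best_score:
            best, best_score = nbr, score
    return best

print("next subgoal landmark:", select_subgoal(current=0, goal=7))
```

The beta coefficient controls the exploration-exploitation trade-off: a larger beta favors rarely visited landmarks, while beta = 0 reduces the rule to pure shortest-path exploitation.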
Keywords: Hierarchical reinforcement learning, representation learning, latent landmark graph, contrastive learning, exploration and exploitation
Classification: TP181 [Automation and Computer Technology - Control Theory and Control Engineering]