Authors: Yang Qi, Jinxin Cao, Baijing Wu
Affiliations: [1] Institute of Transportation Engineering, Inner Mongolia University, Hohhot 010010, China; [2] Inner Mongolia Academy of Science and Technology, Hohhot 010010, China; [3] Institute of Electronics and Information Engineering, Lanzhou Jiaotong University, Lanzhou 730070, China
Source: Communications in Transportation Research, 2024, No. 1, pp. 451-460 (10 pages)
Funding: National Natural Science Foundation of China (Grant Nos. 72461028, 71961024, 72161032, 72061028, and 71971022); Key Technology Research Plan of Inner Mongolia Autonomous Region (Grant No. 2019GG287).
Abstract: With continuous innovation in household appliance technology and rising living standards, the volume of discarded household appliances has grown rapidly, making their recycling increasingly important. Traditional path planning algorithms struggle to balance efficiency against constraints when addressing the multi-objective, multi-constraint problem of planning discarded household appliance recycling routes. To tackle this issue, this study introduces a bi-directional Q-learning-based path planning algorithm. By developing a bi-directional Q-learning mechanism and improving the Q-value initialization method, the algorithm optimizes recycling routes efficiently, performing bidirectional updates of the state-action value function from both the starting point and the target point. In addition, a hierarchical reinforcement learning strategy and guided rewards are introduced to reduce blind exploration and accelerate convergence: the complex recycling task is decomposed into multiple sub-tasks, and well-performing paths are sought at each sub-task level, which reduces initial exploratory blindness. To validate the proposed algorithm, grid-based models of real-world environments are used. Comparative experiments show significant improvements in iteration counts and path lengths, confirming the algorithm's practical applicability to path planning for recycling operations.
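To make the abstract's core idea concrete, the following is a minimal sketch in Python of Q-learning on a grid with value-function updates run from both the starting point and the target point. The grid layout, reward values, Manhattan-distance reward shaping, and the independent training of the forward and backward Q-tables are illustrative assumptions, not the paper's exact design; the hierarchical sub-task decomposition and the improved initialization are not reproduced here.

```python
import numpy as np
import random

# Illustrative grid world (layout and reward values are assumptions).
ROWS, COLS = 6, 6
OBSTACLES = {(1, 2), (2, 2), (3, 2), (3, 4), (4, 4)}     # hypothetical obstacle cells
START, GOAL = (0, 0), (5, 5)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]             # up, down, left, right

def step(state, action, target):
    """Apply an action; return the next state and a guided reward toward `target`."""
    nr, nc = state[0] + action[0], state[1] + action[1]
    if not (0 <= nr < ROWS and 0 <= nc < COLS) or (nr, nc) in OBSTACLES:
        return state, -5.0                               # blocked: stay put, penalty
    nxt = (nr, nc)
    if nxt == target:
        return nxt, 100.0                                # reached the target
    # Guided reward (assumed form): bonus for reducing Manhattan distance to target.
    old_d = abs(state[0] - target[0]) + abs(state[1] - target[1])
    new_d = abs(nxt[0] - target[0]) + abs(nxt[1] - target[1])
    return nxt, -1.0 + 0.5 * (old_d - new_d)

def train(q, origin, target, episodes=300, alpha=0.1, gamma=0.9, eps=0.2):
    """Standard tabular Q-learning from `origin` toward `target` with eps-greedy exploration."""
    for _ in range(episodes):
        state = origin
        for _ in range(200):                             # step cap per episode
            a = (random.randrange(4) if random.random() < eps
                 else int(np.argmax(q[state])))
            nxt, r = step(state, ACTIONS[a], target)
            q[state][a] += alpha * (r + gamma * np.max(q[nxt]) - q[state][a])
            state = nxt
            if state == target:
                break

# Two state-action value tables, updated from both ends of the route:
# forward (start -> goal) and backward (goal -> start).
q_fwd = {(r, c): np.zeros(4) for r in range(ROWS) for c in range(COLS)}
q_bwd = {(r, c): np.zeros(4) for r in range(ROWS) for c in range(COLS)}
train(q_fwd, START, GOAL)
train(q_bwd, GOAL, START)

# Extract a greedy path from the forward table.
path, state = [START], START
while state != GOAL and len(path) < 50:
    state, _ = step(state, ACTIONS[int(np.argmax(q_fwd[state]))], GOAL)
    path.append(state)
print(path)
```

Here the two tables are trained independently; in a full bi-directional scheme the backward table would also be used, for example to initialize or guide the forward search, which is the part this toy sketch does not attempt to reproduce.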
Keywords: Path planning; Q-learning; Waste electrical recovery; Reinforcement learning; Reward function
Classification code: TN9 [Electronics and Telecommunications - Information and Communication Engineering]