Author: Jia Ruihao (School of Automobile, Chang'an University, Xi'an 710064, China)
Source: Automobile Applied Technology (《汽车实用技术》), 2025, No. 1, pp. 25-30 (6 pages)
Abstract: To address the problems of traditional deep reinforcement learning algorithms, whose poor exploration strategies during training lead simultaneously to low driving efficiency, slow convergence, and low decision success rates in autonomous driving decision-making tasks, a decision-making method based on a dueling double deep Q-network combined with expert evaluation is proposed. An offline expert model and an online model are constructed, with an adaptive balance factor introduced between them; a prioritized experience replay mechanism with adaptive importance coefficients is introduced to build the online model on top of the dueling deep Q-network; and a reward function that considers driving efficiency, safety, and comfort is designed. The results show that, compared with D3QN and PERD3QN, the proposed algorithm improves convergence speed by 25.93% and 20.00%, raises the decision success rate by 3.19% and 2.77%, reduces the average number of steps by 6.40% and 0.14%, and increases the average speed by 7.46% and 0.42%, respectively.
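Since the abstract describes the method only at a high level, the following is a minimal, hypothetical sketch (assuming a PyTorch implementation; the class names, the balance-factor handling, and the reward weights are illustrative assumptions, not the authors' code or values) of the two ideas it names: blending a frozen offline expert Q-network with an online dueling Q-network through an adaptive balance factor, and a reward combining efficiency, safety, and comfort terms.

```python
# Illustrative sketch only; names and weights are assumptions, not from the paper.
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling architecture: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)               # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)   # advantage stream A(s,a)

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        h = self.feature(s)
        v, a = self.value(h), self.advantage(h)
        return v + a - a.mean(dim=-1, keepdim=True)

def select_action(online: DuelingQNet, expert: DuelingQNet,
                  state: torch.Tensor, beta: float, epsilon: float,
                  n_actions: int) -> int:
    """Blend expert and online Q-values with a balance factor beta in [0, 1];
    beta is assumed to decay as the online model improves during training."""
    if torch.rand(1).item() < epsilon:                  # epsilon-greedy exploration
        return int(torch.randint(n_actions, (1,)).item())
    with torch.no_grad():
        q = beta * expert(state) + (1.0 - beta) * online(state)
    return int(q.argmax(dim=-1).item())

def reward(speed: float, target_speed: float, collision: bool, jerk: float) -> float:
    """Reward combining efficiency, safety, and comfort terms; the weights
    1.0, 10.0, and 0.1 are placeholders, not the paper's coefficients."""
    efficiency = -abs(speed - target_speed) / target_speed  # track the desired speed
    safety = -10.0 if collision else 0.0                    # heavy penalty on collision
    comfort = -0.1 * abs(jerk)                              # penalize abrupt acceleration changes
    return 1.0 * efficiency + safety + comfort
```

In this reading, the expert network supplies reasonable actions early in training (when the online network's estimates are still noisy), and the balance factor shifts weight toward the online dueling network as its estimates improve.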