Authors: ZHENG Yangjun, HE Shuai, SHUAI Zhibin [1], LI Jianqiu [2], GAI Jiangtao [1], LI Yong, ZHANG Ying, LI Guohui
Affiliations: [1] China North Vehicle Research Institute, Beijing 100072, China; [2] State Key Laboratory of Automotive Safety and Energy (Tsinghua University), Beijing 100084, China
Source: Journal of Automotive Safety and Energy, 2022, No. 2, pp. 309-316 (8 pages)
Funding: Open Fund of the State Key Laboratory of Automotive Safety and Energy (KF2018); National Natural Science Foundation of China (51975543)
Abstract: A lateral-velocity estimation method was proposed for a four-wheel independent-drive electric vehicle to estimate the vehicle motion states precisely. An architecture for the lateral-velocity estimation method was designed based on the deep reinforcement learning (DRL) paradigm; a DRL agent was designed with the deep deterministic policy gradient (DDPG) algorithm; and the actor and critic networks of the DDPG algorithm were constructed with recurrent neural networks (RNN). The algorithm was implemented and trained in Matlab/Simulink with the designed reward function and training scenarios, and its effectiveness was verified by simulation of practical driving maneuvers such as a double-lane change. The results show that after 630 episodes of training, the proposed method improves the estimation accuracy by 40% compared with that of the extended Kalman filter (EKF) method. Therefore, the proposed method can be used to estimate vehicle lateral velocity in common driving conditions.
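The abstract describes the estimator's structure (a DDPG agent whose actor and critic are recurrent networks, trained with a reward tied to estimation quality) but no source code. The sketch below is a minimal illustration of that structure in PyTorch rather than the paper's Matlab/Simulink implementation; the signal set, network sizes, reward shaping, and names (RecurrentActor, RecurrentCritic, soft_update) are assumptions for illustration only, not the paper's actual design.

```python
# Hypothetical sketch of a DDPG-style lateral-velocity estimator with recurrent
# actor/critic networks. All dimensions and the reward are illustrative assumptions.
import torch
import torch.nn as nn


class RecurrentActor(nn.Module):
    """Maps a sequence of measured signals (e.g. yaw rate, lateral acceleration,
    wheel speeds, steering angle) to a scalar lateral-velocity estimate."""
    def __init__(self, obs_dim: int, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRU(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # "action" = estimated v_y

    def forward(self, obs_seq):
        out, _ = self.gru(obs_seq)        # (batch, time, hidden)
        return self.head(out[:, -1])      # estimate from the last time step


class RecurrentCritic(nn.Module):
    """Scores an (observation sequence, estimate) pair with a Q-value."""
    def __init__(self, obs_dim: int, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRU(obs_dim, hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden + 1, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, obs_seq, action):
        out, _ = self.gru(obs_seq)
        return self.head(torch.cat([out[:, -1], action], dim=-1))


def soft_update(target: nn.Module, source: nn.Module, tau: float = 0.005):
    """Polyak averaging of target-network parameters, standard in DDPG."""
    with torch.no_grad():
        for t, s in zip(target.parameters(), source.parameters()):
            t.mul_(1 - tau).add_(tau * s)


def reward(v_y_estimate, v_y_reference):
    """Illustrative reward: larger when the estimate is closer to the reference
    lateral velocity available in simulation (the paper's exact reward differs)."""
    return -torch.abs(v_y_estimate - v_y_reference)


if __name__ == "__main__":
    actor = RecurrentActor(obs_dim=6)   # assume 6 measured input signals
    obs = torch.randn(1, 50, 6)         # one sequence of 50 time steps
    print("estimated v_y:", actor(obs).item())
```

The recurrent layers give the agent temporal context over the sensor history, which is the stated reason for using RNN-based actor and critic networks instead of feedforward ones in this estimation setting.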