Authors: Lu Laiwei; Zhao Hong[1]; Xu Fuliang; Luo Yong (Qingdao University, Qingdao 266071)
Affiliation: [1] Qingdao University, Qingdao 266071
Source: Automobile Technology, 2024, No. 8, pp. 27-37 (11 pages)
Funding: National Natural Science Foundation of China (52175236); Qingdao Science and Technology Benefiting-the-People Demonstration Project (24-1-8-cspz-18-nsh)
Abstract: To improve the energy management performance of the range-extended electric vehicle (REEV), a long short-term memory (LSTM) neural network is first used to predict vehicle speed. The demand power over the prediction horizon is then computed and fed, together with the demand power at the current moment, into a deep deterministic policy gradient (DDPG) agent, which outputs the control quantity. Finally, hardware-in-the-loop simulation verifies the real-time performance of the control strategy. The results show that, under the World Transient Vehicle Cycle (WTVC), the proposed LSTM-DDPG energy management strategy reduces equivalent fuel consumption by 0.613 kg, 0.350 kg, and 0.607 kg compared with the DDPG strategy, the deep Q-network (DQN) strategy, and the power-following control strategy, respectively, and differs from the dynamic programming strategy by only 0.128 kg.
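The intermediate step in the abstract's pipeline, turning a predicted speed profile into demand power over the prediction horizon, can be sketched from standard longitudinal vehicle dynamics. The sketch below is illustrative only: all vehicle parameters (mass, drag, rolling resistance, driveline efficiency) and the sample speed profile are assumptions, not values from the paper, and the LSTM predictor is represented by a fixed speed sequence.

```python
import numpy as np

def demand_power(v_pred, dt=1.0, m=1600.0, Cd=0.30, A=2.2,
                 rho=1.206, f=0.015, g=9.81, eta=0.92):
    """Demand power (kW) over a predicted speed profile (m/s).

    Longitudinal dynamics: P = (m*a + 0.5*rho*Cd*A*v^2 + m*g*f) * v / eta
    All vehicle parameters are illustrative assumptions, not the paper's values.
    """
    v = np.asarray(v_pred, dtype=float)
    a = np.gradient(v, dt)                                 # acceleration estimated from predicted speeds
    F = m * a + 0.5 * rho * Cd * A * v**2 + m * g * f      # traction force: inertia + aero drag + rolling resistance (N)
    return F * v / eta / 1000.0                            # wheel power divided by driveline efficiency, in kW

# Example: a 5 s predicted speed horizon (stand-in for the LSTM's output)
v_pred = [10.0, 10.5, 11.0, 11.2, 11.3]
P_horizon = demand_power(v_pred)
```

In the strategy described by the abstract, this horizon power vector, together with the current-moment demand power, would form (part of) the state fed to the DDPG agent.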