Real-world validation of safe reinforcement learning, model predictive control and decision tree-based home energy management systems

Authors: Julian Ruddick, Glenn Ceusters, Gilles Van Kriekinge, Evgenii Genov, Cedric De Cauwer, Thierry Coosemans, Maarten Messagie

Affiliations: [1] Electric Vehicle and Energy Research Group (EVERGI), Mobility, Logistics and Automotive Technology Research Centre (MOBI), Department of Electrical Engineering and Energy Technology, Vrije Universiteit Brussel, Pleinlaan 2, Brussels, 1050, Belgium [2] ABB N.V., Culliganlaan 3B 101, Diegem, 1831, Belgium [3] AI Lab, Vrije Universiteit Brussel, Pleinlaan 2, Brussels, 1050, Belgium

Source: Energy and AI, 2024, No. 4, pp. 470-488 (19 pages)

Funding: Supported by the ECOFLEX project funded by FOD Economie, K.M.O., Middenstand en Energie; by the ICON project OPTIMESH (FLUX50 ICON Project Collaboration Agreement HBC.2021.0395) funded by VLAIO; and by the Baekeland project SLIMness (HBC.2019.2613) funded in equal parts by ABB n.v. and VLAIO.

Abstract: Recent advancements in machine learning based energy management approaches, specifically reinforcement learning with a safety layer (OptLayerPolicy) and a metaheuristic algorithm generating a decision tree control policy (TreeC), have shown promise. However, their effectiveness has only been demonstrated in computer simulations. This paper presents the real-world validation of these methods, comparing them against model predictive control and simple rule-based control benchmarks. The experiments were conducted on the electrical installation of four reproductions of residential houses, each with its own battery, photovoltaic, and dynamic load system emulating a non-controllable electrical load and a controllable electric vehicle charger. The results show that the simple rules, TreeC, and model predictive control-based methods achieved similar costs, with a difference of only 0.6%. The reinforcement learning based method, still in its training phase, obtained a cost 25.5% higher than the other methods. Additional simulations show that the costs can be further reduced by using a more representative training dataset for TreeC and by addressing errors in the model predictive control implementation caused by its reliance on accurate data from various sources. The OptLayerPolicy safety layer allows safe online training of a reinforcement learning agent in the real world, given an accurate constraint function formulation. The proposed safety layer method remains error-prone; nonetheless, it has been found beneficial for all investigated methods. The TreeC method, which does require building a realistic simulation for training, exhibits the safest operational performance, exceeding the grid limit by only 27.1 Wh compared to 593.9 Wh for reinforcement learning.

Keywords: Energy management system; Machine learning; Reinforcement learning; Decision tree; Model predictive control; Hardware-in-the-loop; Implementation; Experimental

Classification: TP3 [Automation and Computer Technology / Computer Science and Technology]

 
