Affiliations: [1] IEEE [2] School of Electrical Engineering and Automation, Wuhan University, Wuhan, China
Source: Journal of Modern Power Systems and Clean Energy, 2023, Issue 5, pp. 1396-1404 (9 pages)
Abstract: The high penetration and uncertainty of distributed energies force the upgrade of volt-var control (VVC) to smooth voltage and var fluctuations faster. Traditional mathematical or heuristic algorithms are increasingly inadequate for this task because of their slow online calculation speed. Deep reinforcement learning (DRL) has recently been recognized as an effective alternative, as it transfers the computational pressure to offline training and reduces the online calculation timescale to milliseconds. However, its slow offline training speed still limits its application to VVC. To overcome this issue, this paper proposes a simplified DRL method that simplifies and improves the training operations in DRL, avoiding invalid explorations and slow reward calculations. Because the DRL network parameters trained on the original topology are not applicable to new topologies, side-tuning transfer learning (TL) is introduced to reduce the number of parameters that must be updated during transfer. Test results based on IEEE 30-bus and 118-bus systems prove the correctness and rapidity of the proposed method, as well as its strong applicability to large-scale control variables.
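The side-tuning idea referenced in the abstract can be illustrated with a generic sketch: a base policy network trained on the original topology is frozen, and only a small trainable side network (plus a blending weight) is updated when transferring to a new topology, which is what keeps the number of updated parameters small. The PyTorch code below is a minimal, hypothetical illustration under these assumptions; the class names (BasePolicy, SideTunedPolicy), layer sizes, and blending rule are illustrative and are not taken from the paper.

```python
# Minimal, generic sketch of side-tuning transfer learning (NOT the authors'
# implementation). Network sizes and the blending rule are assumptions.
import torch
import torch.nn as nn

class BasePolicy(nn.Module):
    """Policy network pretrained on the original topology (kept frozen)."""
    def __init__(self, n_state, n_action, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_state, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_action), nn.Tanh(),
        )
    def forward(self, s):
        return self.net(s)

class SideTunedPolicy(nn.Module):
    """Frozen base network plus a small trainable side network.

    Only the side network and the blending weight are optimized for the new
    topology, which is the parameter-saving idea behind side-tuning.
    """
    def __init__(self, base, n_state, n_action, side_hidden=32):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # freeze pretrained parameters
            p.requires_grad = False
        self.side = nn.Sequential(            # lightweight trainable side network
            nn.Linear(n_state, side_hidden), nn.ReLU(),
            nn.Linear(side_hidden, n_action), nn.Tanh(),
        )
        self.alpha = nn.Parameter(torch.tensor(0.0))  # learnable blend logit

    def forward(self, s):
        a = torch.sigmoid(self.alpha)         # blend weight in (0, 1)
        return a * self.base(s) + (1 - a) * self.side(s)

if __name__ == "__main__":
    # Only the side-network parameters (and alpha) enter the optimizer.
    base = BasePolicy(n_state=60, n_action=12)     # pretrained weights would be loaded here
    policy = SideTunedPolicy(base, n_state=60, n_action=12)
    trainable = [p for p in policy.parameters() if p.requires_grad]
    optim = torch.optim.Adam(trainable, lr=1e-3)
    action = policy(torch.randn(1, 60))            # forward pass on a dummy state
    print(action.shape, sum(p.numel() for p in trainable))
```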
Keywords: volt-var control (VVC); deep reinforcement learning (DRL); topologically variable power system; transfer learning
Classification: TM714.3 [Electrical Engineering—Power Systems and Automation]; TP18 [Automation and Computer Technology—Control Theory and Control Engineering]