Authors: LI Shidan (李士丹), LI Hang (李航)[1], LI Guojie (李国杰)[1], HAN Bei (韩蓓)[1], XU Jin (徐晋), LI Ling (李玲), WANG Hongtao (王宏韬)
Affiliations: [1] Key Laboratory of Control of Power Transmission and Conversion, Ministry of Education (Shanghai Jiao Tong University), Shanghai 200240, China; [2] Shanghai PeiKe Technology Co., Ltd., Shanghai 200240, China; [3] Jiaxing Power Supply Company, State Grid Zhejiang Electric Power Co., Ltd., Jiaxing 314000, China
Source: Power System Protection and Control, 2024, No. 22, pp. 1-11 (11 pages)
Funding: National Key R&D Program of China (2022YFE0105200); Science and Technology Project of State Grid Zhejiang Electric Power Co., Ltd. (5211JX230004)
Abstract: Existing deep reinforcement learning (DRL) methods suffer from difficult credit assignment and low exploration efficiency when applied to distribution network voltage optimization, which degrades both model training speed and optimization quality. Combining the ideas of distribution network partitioning for loss reduction and imitation learning, this paper proposes a voltage optimization method based on a guidance-signal-based multi-agent deep deterministic policy gradient (GS-MADDPG). First, electric vehicle (EV) clusters, distributed generation (DG) units, and reactive power regulation devices are modeled as decision agents to build the reinforcement learning optimization model. Then the distribution network is partitioned to decouple the agents' external rewards, and, combined with imitation learning, a guidance signal is used to introduce an internal reward that helps the agents find good policies quickly. Finally, case studies are carried out on a modified IEEE 33-node system. The results show that, compared with conventional DRL methods, the proposed strategy achieves higher sample utilization, more stable convergence, and higher training efficiency, and improves the voltage optimization performance of the distribution network.
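The abstract describes two reward terms: an external reward decoupled by network partition and an internal, imitation-style reward derived from a guidance signal. The sketch below is a minimal, hypothetical Python illustration of that reward shaping only; the function names (external_reward, internal_reward, shaped_reward), the deadband voltage penalty, the distance-based guidance term, and the weight beta are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

def external_reward(partition_voltages, v_ref=1.0, v_tol=0.05):
    """Hypothetical per-partition external reward: penalize bus voltage
    deviations outside a deadband around the reference (in p.u.)."""
    dev = np.abs(partition_voltages - v_ref)
    return -np.sum(np.maximum(dev - v_tol, 0.0))

def internal_reward(action, guidance_action, beta=0.5):
    """Hypothetical imitation-style internal reward: the closer the agent's
    action is to the guidance signal (e.g., from a conventional controller),
    the smaller the penalty."""
    return -beta * np.linalg.norm(action - guidance_action)

def shaped_reward(partition_voltages, action, guidance_action):
    """Total reward seen by one agent: external term for its own partition
    plus the guidance-based internal term."""
    return external_reward(partition_voltages) + internal_reward(action, guidance_action)

# Toy usage: one agent controlling normalized reactive power set-points in its partition.
rng = np.random.default_rng(0)
voltages = 1.0 + 0.03 * rng.standard_normal(8)                     # bus voltages in the partition (p.u.)
action = rng.uniform(-1, 1, size=3)                                 # agent's normalized actions
guidance = np.clip(action + 0.1 * rng.standard_normal(3), -1, 1)    # stand-in guidance signal
print(shaped_reward(voltages, action, guidance))
```

In this reading, the internal term acts as a dense shaping signal that speeds up early exploration, while the partition-local external term keeps each agent's credit assignment tied to the buses it can actually influence; the exact formulation used in GS-MADDPG is given in the paper itself.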
Keywords: distribution network voltage optimization; deep reinforcement learning; partitioning for loss reduction; imitation learning; guidance signal
Classification codes: TM73 [Electrical Engineering: Power Systems and Automation]; TP18 [Automation and Computer Technology: Control Theory and Control Engineering]