Authors: 陈载宇, 李阳, 殷明慧[1], 顾伟峰, 刘建坤, 邹云[1] (CHEN Zai-yu; LI Yang; YIN Ming-hui; GU Wei-feng; LIU Jian-kun; ZOU Yun) (School of Automation, Nanjing University of Science and Technology, Nanjing, Jiangsu 210094, China; Beijing Goldwind Science & Creation Windpower Equipment Co., Ltd., Beijing 100176, China; State Grid Jiangsu Electric Power Co., Ltd. Research Institute, Nanjing, Jiangsu 211103, China)
Affiliations: [1] School of Automation, Nanjing University of Science and Technology, Nanjing, Jiangsu 210094, China; [2] Beijing Goldwind Science & Creation Windpower Equipment Co., Ltd., Beijing 100176, China; [3] State Grid Jiangsu Electric Power Co., Ltd. Research Institute, Nanjing, Jiangsu 211103, China
Source: Control Theory & Applications, 2022, No. 7, pp. 1219-1228 (10 pages)
Funding: Supported by the National Natural Science Foundation of China (61773214, 51977111), the Jiangsu Province "Six Talent Peaks" High-Level Talent Project (XNY–025), and the Jiangsu Province Special Fund for Transformation of Scientific and Technological Achievements (BA2019045).
Abstract: Variable-speed wind turbines (VSWTs) apply maximum power point tracking (MPPT) below rated wind speed to maximize wind energy capture. However, turbines with large inertia cannot stay at the maximum power point under rapidly fluctuating turbulent wind, because their rotor speed adjusts slowly. This study further finds that the average rotor-speed tracking error is not monotonically related to the overall wind energy capture efficiency, which makes it difficult for MPPT controllers designed to reduce the speed tracking error to effectively improve the generation efficiency of such slow-dynamics turbines. Therefore, taking the improvement of wind energy capture efficiency (rather than the reduction of speed tracking error) as the objective, this paper proposes a maximum wind energy capture method for wind turbines based on reference input optimization. Since the complex effect of the reference rotor speed on capture efficiency is difficult to model accurately, the reference input is optimized with the deep deterministic policy gradient (DDPG) reinforcement learning algorithm. Simulation results show that the proposed method effectively improves the wind energy capture efficiency of VSWTs under turbulent wind.
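For context, the conventional MPPT baseline referred to in the abstract tracks the rotor speed that holds the tip-speed ratio at its optimum; the paper's contribution is to optimize this reference input with DDPG rather than to track it more tightly. Below is a minimal sketch of the standard reference-speed and optimal-torque relations only; the numeric parameters (rotor radius, Cp_max, λ_opt) are illustrative assumptions and are not taken from the paper.

```python
import math

# Illustrative turbine parameters (assumed, not from the paper)
RHO = 1.225         # air density [kg/m^3]
R = 38.5            # rotor radius [m]
CP_MAX = 0.48       # maximum power coefficient (assumed)
LAMBDA_OPT = 8.1    # optimal tip-speed ratio (assumed)

def mppt_reference_speed(wind_speed: float) -> float:
    """Conventional MPPT reference: rotor speed [rad/s] that keeps the
    tip-speed ratio lambda = omega*R/v at its optimum for the current wind speed."""
    return LAMBDA_OPT * wind_speed / R

def optimal_torque_command(rotor_speed: float) -> float:
    """Standard optimal-torque MPPT law T = k_opt * omega^2,
    with k_opt = 0.5 * rho * pi * R^5 * Cp_max / lambda_opt^3."""
    k_opt = 0.5 * RHO * math.pi * R**5 * CP_MAX / LAMBDA_OPT**3
    return k_opt * rotor_speed**2

if __name__ == "__main__":
    v = 7.0                                   # instantaneous wind speed [m/s]
    w_ref = mppt_reference_speed(v)
    print(f"reference rotor speed: {w_ref:.3f} rad/s")
    print(f"torque command at that speed: {optimal_torque_command(w_ref) / 1e3:.1f} kN*m")
```

Under fast turbulence a large-inertia rotor cannot follow this fluctuating reference, which is the efficiency loss the DDPG-based reference input optimization in the paper targets.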