Authors: WEI Dong; JIA Yuchen; HAN Shaoran
Affiliations: [1] School of Electrical and Information Engineering, Beijing University of Civil Engineering and Architecture, Beijing 100044, China; [2] Beijing Key Laboratory of Intelligent Processing for Building Big Data, Beijing University of Civil Engineering and Architecture, Beijing 100044, China; [3] Beijing Jingcheng Ruida Electric Engineering Technology Co., Ltd., Beijing 100176, China
Source: Computer Engineering & Science, 2025, No. 3, pp. 422-433 (12 pages)
Funding: National Natural Science Foundation of China (62371032); Beijing Natural Science Foundation (4232021); Science and Technology Project (R&D Project) of the Ministry of Housing and Urban-Rural Development (2019-K-149); Senior Lecturer Cultivation Program of Beijing University of Civil Engineering and Architecture (GJZJ20220803)
Abstract: The refrigeration system in a data center must operate continuously year-round, so its energy consumption cannot be ignored, and traditional PID control methods struggle to achieve overall energy savings for the system. To address this, a reinforcement learning control method is proposed for data center refrigeration systems, with the control objective of improving overall system energy efficiency while meeting cooling requirements. A two-layer hierarchical control structure is designed. For the upper optimization layer, a multi-step prediction deep deterministic policy gradient (MP-DDPG) algorithm is proposed: DDPG handles the multi-dimensional continuous action space of the refrigeration system to determine the water-valve opening of the air handling unit and the optimal setpoints for each loop of the chilling station, while multi-step prediction improves algorithm efficiency and overcomes the impact of the system's large time delay during real-time control. The lower field-control layer uses PID control to make the controlled variables track the optimal setpoints produced by the optimization layer, achieving performance optimization without disrupting the existing field control system. To address the difficulty that model-free reinforcement learning control struggles to meet real-time requirements, a system prediction model is first constructed, the reinforcement learning controller is trained offline by interacting with this model, and online real-time control is then implemented. Experimental results show that, compared with the traditional DDPG algorithm, the controller's learning efficiency improves by 50%; compared with PID and MP-DQN (multi-step prediction deep Q-network), the system's dynamic performance is improved, and overall energy efficiency increases by approximately 30.149% and 11.6%, respectively.
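The abstract's "multi-step prediction" refers to using lookahead over several future steps rather than a one-step bootstrap, which helps with the plant's large time delay. As a purely illustrative sketch (not the authors' implementation; the discount factor, horizon, and reward values below are assumptions), the core idea can be shown as an n-step temporal-difference target of the kind a DDPG-style critic could be trained against:

```python
# Hypothetical sketch of an n-step (multi-step) TD target, the kind of
# lookahead target the MP-DDPG idea suggests for the critic.
# All numeric values here are illustrative assumptions, not from the paper.

def n_step_td_target(rewards, bootstrap_value, gamma=0.99):
    """Fold n rewards back onto a critic bootstrap:
    target = r_1 + gamma*r_2 + ... + gamma^(n-1)*r_n + gamma^n * Q(s_n, a_n)."""
    target = bootstrap_value
    for r in reversed(rewards):  # accumulate from the last step backward
        target = r + gamma * target
    return target

# Example: a 3-step lookahead with an assumed critic estimate of 10.0
# at the end of the horizon.
rewards = [1.0, 0.5, 0.25]
target = n_step_td_target(rewards, bootstrap_value=10.0, gamma=0.9)
```

With a delay-dominated plant, spreading the target over several predicted steps gives the critic a learning signal that is less dominated by the stale one-step transition, which is consistent with the efficiency gain the abstract reports.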
Keywords: data center refrigeration system; predictive control; reinforcement learning; deep deterministic policy gradient; ensemble learning
CLC number: TP273 [Automation and Computer Technology — Detection Technology and Automation Equipment]