Authors: Haotian Zhang, Jianyong Sun, Thomas Bäck, Zongben Xu
Affiliations: [1] School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049, China; [2] Leiden Institute of Advanced Computer Science, Leiden University, Leiden 2333 CA, Netherlands
Source: Science China Mathematics, 2024, Issue 6, pp. 1457-1480 (24 pages)
Funding: Supported by the National Natural Science Foundation of China (Grant No. 62076197) and the Key Research and Development Project of Shaanxi Province (Grant No. 2022GXLH-01-15).
Abstract: Extensive studies on selecting recombination operators adaptively, namely adaptive operator selection (AOS), during the search process of an evolutionary algorithm (EA) have shown that AOS is promising for improving an EA's performance. A variety of heuristic mechanisms for AOS have been proposed in recent decades, which usually contain two main components: feature extraction and policy setting. Feature extraction refers to extracting relevant features from the information collected during the search process. Policy setting refers to setting a strategy (or policy) for selecting an operator from a pool of operators based on the extracted features. Both components are designed by hand in existing studies, which may not be efficient for adapting to different optimization problems. In this paper, a generalized framework is proposed for learning the components of AOS for one of the main streams of EAs, namely differential evolution (DE). In the framework, feature extraction is parameterized as a deep neural network (DNN), while a Dirichlet distribution is taken as the policy. A reinforcement learning method, namely policy gradient, is used to train the DNN. As case studies, the proposed framework is applied to two DEs, the classic DE and a recently proposed DE, resulting in two new algorithms named PG-DE and PG-MPEDE, respectively. Experiments on the Congress on Evolutionary Computation (CEC) 2018 test suite show that the proposed algorithms perform significantly better than their counterparts. Finally, we prove theoretically that the considered classic methods are special cases of the proposed framework.
Keywords: evolutionary algorithm; differential evolution; adaptive operator selection; reinforcement learning; deep learning
Classification codes: O224 [Science: Operations Research and Cybernetics]; TP18 [Science: Mathematics]
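To make the workflow described in the abstract concrete, the following is a minimal illustrative sketch, assuming PyTorch and a pool of DE mutation operators; the feature dimension, network architecture, and helper names (FeatureNet, select_operator, reinforce_update) are hypothetical and are not taken from the paper. A small network maps search-state features to the concentration parameters of a Dirichlet distribution, a sample from that distribution gives selection probabilities over the operator pool, and the network is updated with the REINFORCE policy gradient using the fitness improvement as the reward.

```python
# Illustrative sketch of the AOS framework described in the abstract
# (not the authors' code): DNN feature extraction -> Dirichlet policy
# -> policy-gradient update. All names and sizes here are assumptions.

import torch
import torch.nn as nn

N_OPERATORS = 4   # e.g., rand/1, best/1, current-to-best/1, rand/2 (assumed pool)
FEATURE_DIM = 6   # assumed number of search-state features

class FeatureNet(nn.Module):
    """Maps search-state features to Dirichlet concentration parameters."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(FEATURE_DIM, 32), nn.ReLU(),
            nn.Linear(32, N_OPERATORS), nn.Softplus(),  # concentrations must be > 0
        )

    def forward(self, features):
        return self.body(features) + 1e-3  # keep strictly positive

policy_net = FeatureNet()
optimizer = torch.optim.Adam(policy_net.parameters(), lr=1e-3)

def select_operator(features):
    """Sample operator-selection probabilities and pick one operator to apply."""
    concentration = policy_net(features)
    dist = torch.distributions.Dirichlet(concentration)
    probs = dist.sample()                      # a point on the probability simplex
    op_index = torch.multinomial(probs, 1).item()
    log_prob = dist.log_prob(probs)            # differentiable w.r.t. the DNN
    return op_index, log_prob

def reinforce_update(log_probs, rewards):
    """One REINFORCE step: push up log-probabilities of choices with high reward."""
    returns = torch.tensor(rewards, dtype=torch.float32)
    returns = returns - returns.mean()         # mean baseline to reduce variance
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The Softplus output keeps the concentration parameters positive, as the Dirichlet distribution requires; the choice of features and reward signal is part of what the paper's framework learns or specifies, so this sketch only mirrors the overall structure rather than the exact PG-DE / PG-MPEDE algorithms.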