Authors: SHE Wei [1,2,3,4]; LI Yang; ZHONG Lihong [2,4,5]; KONG Defeng; TIAN Zhao
Affiliations: [1] School of Cyber Science and Engineering, Zhengzhou University, Zhengzhou, Henan 450001, China; [2] Songshan Laboratory, Zhengzhou, Henan 450000, China; [3] Henan Provincial Collaborative Innovation Center for Internet Medical and Health Services (Zhengzhou University), Zhengzhou, Henan 450052, China; [4] Zhengzhou Key Laboratory of Blockchain and Data Intelligence (Zhengzhou University), Zhengzhou, Henan 450001, China; [5] School of Computer and Artificial Intelligence, Zhengzhou University, Zhengzhou, Henan 450001, China; [6] Institute of Engineering Protection, National Defense Engineering Research Institute, Academy of Military Sciences, Luoyang, Henan 471023, China
Source: Journal of Computer Applications, 2024, No. 3, pp. 671-676 (6 pages)
Funding: Songshan Laboratory Pre-research Project (YYYY022022003); Henan Province Key R&D and Promotion Special Project (212102310039)
Abstract: To address the problems of poor optimization effect, susceptibility to suboptimal solutions, and low efficiency in neural network hyperparameter optimization, a hyperparameter optimization algorithm for deep neural networks based on an Improved Real-Coded Genetic Algorithm (IRCGA), named IRCGA-DNN (IRCGA for Deep Neural Network), was proposed. Firstly, a real-coded representation was used for hyperparameter values, making the hyperparameter search space more flexible. Then, a hierarchical proportional selection operator was introduced to increase the diversity of the solution set. Finally, improved single-point crossover and mutation operators were designed to explore the hyperparameter space more thoroughly and to improve the efficiency and quality of the optimization algorithm. The damage-effect prediction performance and convergence efficiency of IRCGA-DNN were validated on two simulation datasets. Experimental results show that, compared with GA-DNN (Genetic Algorithm for Deep Neural Network), the proposed algorithm reduces the number of convergence iterations by 8.7% and 13.6% on the two datasets respectively, with little difference in Mean Square Error (MSE); compared with IGA-DNN (Improved Genetic Algorithm for Deep Neural Network), IRCGA-DNN reduces the number of convergence iterations by 22.2% and 13.6% respectively. These results indicate that the proposed algorithm is superior in both convergence speed and prediction performance, and can effectively handle neural network hyperparameter optimization.
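The general technique the abstract describes can be sketched as a minimal real-coded genetic algorithm over a hyperparameter vector. This is not the paper's IRCGA implementation: the two hyperparameters, their ranges, the stand-in objective function, and the rank-proportional selection used here in place of the paper's hierarchical proportional selection are all illustrative assumptions; a real run would evaluate fitness by training a network and measuring validation MSE.

```python
import random

# Each individual is a real-valued vector: [learning_rate, hidden_units].
BOUNDS = [(1e-4, 1e-1), (8.0, 256.0)]  # assumed search ranges

def fitness(ind):
    # Stand-in objective to minimize; a real run would train a DNN with
    # these hyperparameters and return its validation MSE.
    lr, units = ind
    return (lr - 0.01) ** 2 + ((units - 64.0) / 256.0) ** 2

def init_population(n):
    return [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(n)]

def select(pop, scores):
    # Rank-proportional selection: lower-MSE individuals get higher
    # selection weight (a rough analogue of proportional selection,
    # not the paper's exact hierarchical operator).
    ranked = sorted(zip(scores, pop), key=lambda p: p[0])
    weights = [len(ranked) - i for i in range(len(ranked))]
    chosen = random.choices([ind for _, ind in ranked], weights=weights, k=len(pop))
    return [list(ind) for ind in chosen]

def crossover(a, b, rate=0.9):
    # Single-point crossover on the real-valued vector.
    if random.random() < rate and len(a) > 1:
        point = random.randrange(1, len(a))
        a[point:], b[point:] = b[point:], a[point:]
    return a, b

def mutate(ind, rate=0.1):
    # Gaussian perturbation, clipped to each variable's bounds.
    for i, (lo, hi) in enumerate(BOUNDS):
        if random.random() < rate:
            ind[i] = min(hi, max(lo, ind[i] + random.gauss(0, 0.1 * (hi - lo))))
    return ind

def run_ga(pop_size=20, generations=30, seed=0):
    random.seed(seed)
    pop = init_population(pop_size)
    best = list(min(pop, key=fitness))
    for _ in range(generations):
        scores = [fitness(ind) for ind in pop]
        pop = select(pop, scores)
        next_pop = []
        for i in range(0, pop_size - 1, 2):
            a, b = crossover(pop[i], pop[i + 1])
            next_pop += [mutate(a), mutate(b)]
        if len(next_pop) < pop_size:
            next_pop.append(list(best))  # elitism fills an odd slot
        pop = next_pop
        gen_best = min(pop, key=fitness)
        if fitness(gen_best) < fitness(best):
            best = list(gen_best)
    return best

best = run_ga()
print(best)  # best [learning_rate, hidden_units] found
```

The real encoding is what lets crossover and mutation act directly on continuous values such as the learning rate, instead of on a binary string that must be decoded each generation.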
Keywords: real coding; genetic algorithm; hyperparameter optimization; evolutionary neural network; machine learning
Classification Code: TP301.6 [Automation and Computer Technology / Computer System Architecture]