Authors: 屈伟洋 (Qu Weiyang), 俞扬 (Yu Yang)[1] (National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210046, China)
Affiliation: [1] National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210046, China
Source: Journal of Nanjing University (Natural Science), 2017, No. 2, pp. 340-349 (10 pages)
Funding: National Natural Science Foundation of China (61375061); Natural Science Foundation of Jiangsu Province (BK20160066)
Abstract: Traditional neural network training methods compute the error between the network output Y and the target T, propagate that error backward to update the node weights, and repeat the process until the desired result is reached. Such methods suffer from slow convergence and are prone to overfitting. Diversity regularization has recently been shown to simplify models and improve generalization. This paper explores neural network training with a diversity regularization term: weight diversity is taken into account when computing the objective function, so that not only the output but also the weights of the nodes are considered and the internal structure of the network becomes less redundant. Combination and comparison experiments with the traditional training methods, the back-propagation algorithm (BP) and difference target propagation (DTP), show that training with the diversity regularization term achieves faster convergence and a lower error rate.
Keywords: diversity regularization; feed-forward neural network; back-propagation algorithm; difference target propagation algorithm
Classification: TP391 [Automation and Computer Technology - Computer Application Technology]
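The abstract above does not give the exact form of the diversity regularizer, so the following is only a minimal sketch under the assumption that the term penalizes pairwise similarity between the incoming weight vectors of hidden units. PyTorch, the layer sizes, the penalty strength lam, and the helper names diversity_penalty and training_step are all illustrative choices, not the paper's implementation; the sketch only shows how a weight-diversity term can be added to the objective before back-propagation, so that both the output error and the weights contribute to the loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def diversity_penalty(weight):
    # Mean squared pairwise cosine similarity between hidden-unit weight rows;
    # smaller values mean the units' incoming weight vectors are more diverse.
    w = F.normalize(weight, dim=1)
    gram = w @ w.t()
    off_diag = gram - torch.eye(gram.size(0), device=gram.device)
    return (off_diag ** 2).mean()

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
lam = 1e-3  # regularization strength (hypothetical value)

def training_step(x, t):
    y = model(x)
    loss = F.cross_entropy(y, t)  # error between network output Y and target T
    # add the diversity term for each layer's weight matrix
    for layer in model:
        if isinstance(layer, nn.Linear):
            loss = loss + lam * diversity_penalty(layer.weight)
    optimizer.zero_grad()
    loss.backward()   # back-propagate the combined objective (plain BP baseline)
    optimizer.step()
    return loss.item()

# Example usage with random data of MNIST-like shape:
x = torch.randn(64, 784)
t = torch.randint(0, 10, (64,))
print(training_step(x, t))
```

In this sketch the penalty is simply added to the global loss that BP minimizes; applying the same idea to difference target propagation would presumably mean adding the term to each layer's local objective, but the paper's exact formulation for the DTP variant is not reproduced here.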