Exploring diversity regularization in neural networks  (Cited by: 2)

Authors: Qu Weiyang; Yu Yang[1] (National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210046, China)

Affiliation: [1] National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210046, China

Source: Journal of Nanjing University (Natural Science), 2017, Issue 2, pp. 340-349.

Funding: National Natural Science Foundation of China (61375061); Natural Science Foundation of Jiangsu Province (BK20160066)

Abstract: Traditional neural network training computes the error between the network output Y and the target T, propagates that error backward to update the node weights, and repeats the process until the desired result is reached. Models trained this way tend to converge slowly and to overfit. Diversity regularization has recently been shown to simplify models and improve generalization. This paper explores neural network training with a diversity regularization term: the diversity of the weights is taken into account when computing the objective function, so that not only the output but also the weights of the nodes are considered, reducing redundant internal structure in the network. Experiments combining and comparing this approach with traditional training methods, namely the back-propagation algorithm (BP) and difference target propagation (DTP), show that training with the diversity regularization term converges faster and achieves a lower error rate.
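The abstract does not give the exact form of the diversity term, but one common way to penalize redundant weights is to penalize pairwise cosine similarity between the weight vectors of hidden units. The sketch below is an illustrative assumption, not the paper's formulation: `diversity_penalty`, `regularized_loss`, and the coefficient `lam` are hypothetical names chosen for this example.

```python
import numpy as np

def diversity_penalty(W):
    """Sum of squared pairwise cosine similarities between the weight
    vectors (rows) of a layer. Identical (redundant) rows give a large
    penalty; mutually orthogonal (diverse) rows give zero. This is one
    plausible diversity regularizer, not necessarily the paper's."""
    U = W / np.linalg.norm(W, axis=1, keepdims=True)  # unit-normalize rows
    S = U @ U.T                                       # cosine-similarity matrix
    off_diag = S - np.eye(W.shape[0])                 # drop self-similarity
    return np.sum(off_diag ** 2) / 2                  # count each pair once

def regularized_loss(y_pred, y_true, W, lam=0.1):
    """MSE between output Y and target T, plus the weight-diversity term."""
    mse = np.mean((y_pred - y_true) ** 2)
    return mse + lam * diversity_penalty(W)

# Two identical hidden-unit weight vectors are maximally redundant;
# two orthogonal ones are maximally diverse.
W_same = np.array([[1.0, 0.0], [1.0, 0.0]])
W_orth = np.array([[1.0, 0.0], [0.0, 1.0]])
print(diversity_penalty(W_same))  # 1.0
print(diversity_penalty(W_orth))  # 0.0
```

During training, the gradient of this penalty would be added to the usual backpropagated error gradient, which matches the abstract's description of folding weight diversity into the objective function.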

Keywords: diversity regularization; feed-forward neural network; back-propagation algorithm; difference target propagation

Classification: TP391 [Automation and Computer Technology: Computer Application Technology]
