Affiliations: [1] College of Computer Science, Shenyang Aerospace University, Shenyang 110136, China; [2] College of Information, Liaoning University, Shenyang 110036, China
Source: Journal of Chinese Computer Systems (《小型微型计算机系统》), 2017, No. 3, pp. 578-583 (6 pages)
Funding: Supported by the National Natural Science Foundation of China (61170185); the Aeronautical Science Foundation of China (2013ZC54011); the Liaoning Province Doctoral Research Start-up Fund (20121034); and the General Scientific Research Project of the Liaoning Provincial Department of Education (L2014070)
Abstract: With the study and application of deep neural networks (DNNs) in image processing, image classification accuracy has improved substantially. However, because of the stochastic steps in DNN training, the same network trained repeatedly on the same images extracts differing features. To exploit this difference, this paper proposes a symmetric neural network model: two deep neural networks with the same feature dimension serve as the left and right sub-networks of the symmetric model, differing image features are obtained through forward propagation, and the differing features are fused in a joint layer. To optimize the network, a difference measurement function quantifies the difference between the left and right sub-networks, this difference is used in the loss function, and the model parameters are then fine-tuned by back-propagation. Based on this idea, the paper extends the deep belief network (DBN) to a symmetric deep belief network whose left and right sub-networks are DBNs, and the convolutional neural network (CNN) to a symmetric convolutional neural network whose left and right sub-networks are CNNs. Experiments on the MNIST and CIFAR-10 datasets show that the symmetric deep models assembled in this way achieve better classification performance than the corresponding DBN and CNN.
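The abstract describes the architecture only at a high level. The sketch below (PyTorch, not taken from the paper) illustrates one way the pieces could fit together: two sub-networks with equal feature dimension, a joint layer that fuses their features, and a loss that combines classification error with a difference term. The names `SubNet`, `SymmetricNet`, `symmetric_loss`, the layer sizes, the concatenation-based joint layer, and the squared-L2 stand-in for the paper's unspecified difference measurement function are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the symmetric-network idea, assuming PyTorch and small CNN
# sub-networks for 28x28 single-channel inputs (e.g. MNIST). All design details
# below are assumptions for illustration; the paper does not specify them.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubNet(nn.Module):
    """One sub-network; both sub-networks must output features of the same dimension."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(32 * 7 * 7, feat_dim)  # 28x28 -> 7x7 after two poolings

    def forward(self, x):
        h = self.conv(x).flatten(1)
        return F.relu(self.fc(h))

class SymmetricNet(nn.Module):
    """Left and right sub-networks, a joint layer fusing their features, and a classifier."""
    def __init__(self, feat_dim=128, n_classes=10):
        super().__init__()
        self.left = SubNet(feat_dim)
        self.right = SubNet(feat_dim)
        self.joint = nn.Linear(2 * feat_dim, feat_dim)   # joint layer: fuse concatenated features
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        fl, fr = self.left(x), self.right(x)             # differing features from the two sub-networks
        fused = F.relu(self.joint(torch.cat([fl, fr], dim=1)))
        return self.classifier(fused), fl, fr

def symmetric_loss(logits, fl, fr, target, lam=0.1):
    # Classification loss plus a term built from the difference between the two
    # feature sets; squared L2 distance is only a placeholder for the paper's
    # difference measurement function, whose exact form the abstract does not give.
    return F.cross_entropy(logits, target) + lam * F.mse_loss(fl, fr)

# Usage sketch:
# model = SymmetricNet()
# logits, fl, fr = model(images)            # images: (N, 1, 28, 28)
# loss = symmetric_loss(logits, fl, fr, labels)
# loss.backward()                           # fine-tune the whole model by back-propagation
```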
Classification code: TP391 [Automation and Computer Technology - Computer Application Technology]