Affiliation: [1] Tianjin Key Laboratory of Intelligent Signal and Image Processing, Civil Aviation University of China, Tianjin 300300, China
Source: Systems Engineering and Electronics, 2017, No. 6, pp. 1391-1399 (9 pages)
Funding: National Natural Science Foundation of China Youth Fund (11402294); Open Fund of the Tianjin Key Laboratory of Intelligent Signal and Image Processing (2015AFS03); Phase VI Boeing Fund of Civil Aviation University of China (20160159209)
Abstract: To address the difficulty of training deep convolutional neural networks (CNN), a fast and efficient dual-channel neural network (DCNN) is proposed. The network consists of two kinds of channels: a straight channel, which keeps information flowing unimpeded through the deep network, and a convolution channel, which is responsible for the network's learning. Since deep networks tend to be unstable during training, a convolution attenuation factor is introduced on the convolution channel to scale down its responses. A "dual-pool layer" is designed to down-sample the same feature map, which not only prevents over-fitting during training but also keeps the dimensions of the two channels consistent. Experiments on three image datasets, CIFAR-10, CIFAR-100, and MNIST, show that DCNN clearly outperforms existing deep convolutional neural networks in trainable depth, stability, and classification accuracy.
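The abstract's core idea, a straight (identity) channel summed with an attenuated convolution channel, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the names `dcnn_block`, `dual_pool`, the single-channel 3x3 convolution, the ReLU nonlinearity, and the value of the attenuation factor `alpha` are all illustrative assumptions.

```python
import numpy as np

def conv2d_same(x, w):
    """Naive 'same'-padded 2-D convolution of a single feature map x
    with a square kernel w (illustrative stand-in for a conv layer)."""
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + k, j:j + k] * w)
    return out

def dcnn_block(x, w, alpha=0.1):
    """One dual-channel block: the straight channel passes x through
    unchanged, while the convolution channel's response is scaled by
    the convolution attenuation factor alpha before the two are summed."""
    straight = x                                   # straight channel: identity
    conv = np.maximum(conv2d_same(x, w), 0.0)      # conv channel with ReLU
    return straight + alpha * conv

def dual_pool(x):
    """Sketch of the 'dual-pool layer' idea: one 2x2 max pooling applied
    to the feature map so both channels keep identical dimensions after
    down-sampling."""
    h, w = x.shape
    return (x[:h - h % 2, :w - w % 2]
            .reshape(h // 2, 2, w // 2, 2)
            .max(axis=(1, 3)))
```

With a kernel whose only nonzero weight is the center tap, the convolution channel reproduces its input, so the block output is `(1 + alpha) * x`, making the role of the attenuation factor easy to inspect.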
Keywords: image classification; deep learning; convolutional neural networks; dual-channel neural networks; convolution attenuation factor
CLC number: TP391.41 [Automation and Computer Technology — Computer Application Technology]