Research on densely connected networks based on group convolution  (Citations: 1)

Grouped dense convolutional network


Authors: KANG Yishuai; WANG Min[1] (School of Electronics and Information, Jiangsu University of Science and Technology, Zhenjiang 212003, China)

Affiliation: [1] School of Electronics and Information, Jiangsu University of Science and Technology, Zhenjiang 212003, China

Source: Journal of Jiangsu University of Science and Technology (Natural Science Edition), 2020, No. 1, pp. 49-53 (5 pages)

Funding: National Natural Science Foundation of China (61901195).

Abstract: With the development of deep learning, complex network models with strong performance keep emerging. However, complex convolutional neural networks consume large amounts of computing resources and storage, which prevents them from being deployed well on many hardware platforms. This paper studies the densely connected convolutional network (Dense Convolutional Network, DenseNet) used in visual recognition and finds that its parameters contain substantial redundancy, leaving room to improve its computational efficiency. Group convolution is therefore introduced, the dense block and the growth rate are redesigned, and a grouped dense convolutional network (GDenseNet) is proposed. Experiments on two datasets (CIFAR-10 and CIFAR-100) show that, at essentially the same error rate, the model complexity and computational complexity of GDenseNet are 12% and 36% lower than those of DenseNet, respectively.
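The abstract does not spell out the exact layer design, so the following is a minimal sketch, in PyTorch, of the core idea it describes: replacing the 3x3 convolution inside each dense layer with a grouped convolution so that parameters and FLOPs shrink roughly by the group count, while dense connectivity (channel concatenation) and the growth rate are kept. The bottleneck width, group count, and growth rate below are illustrative assumptions, not the settings reported in the paper.

```python
import torch
import torch.nn as nn


class GroupedDenseLayer(nn.Module):
    """One dense layer with a grouped 3x3 convolution (illustrative sketch).

    NOTE: bottleneck width (4 * growth_rate, as in DenseNet-BC) and the group
    count are assumptions for illustration, not the paper's reported design.
    """

    def __init__(self, in_channels, growth_rate, groups=4):
        super().__init__()
        inter_channels = 4 * growth_rate  # assumed bottleneck width
        self.bn1 = nn.BatchNorm2d(in_channels)
        self.conv1 = nn.Conv2d(in_channels, inter_channels, kernel_size=1, bias=False)
        self.bn2 = nn.BatchNorm2d(inter_channels)
        # Grouped 3x3 convolution: both inter_channels and growth_rate must be
        # divisible by `groups`; parameters/FLOPs drop roughly by that factor.
        self.conv2 = nn.Conv2d(inter_channels, growth_rate, kernel_size=3,
                               padding=1, groups=groups, bias=False)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.conv1(self.relu(self.bn1(x)))
        out = self.conv2(self.relu(self.bn2(out)))
        # Dense connectivity: concatenate the new features with all earlier ones.
        return torch.cat([x, out], dim=1)


class GroupedDenseBlock(nn.Module):
    """Stack of grouped dense layers; channel count grows by growth_rate per layer."""

    def __init__(self, num_layers, in_channels, growth_rate, groups=4):
        super().__init__()
        layers = [GroupedDenseLayer(in_channels + i * growth_rate, growth_rate, groups)
                  for i in range(num_layers)]
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        return self.block(x)


if __name__ == "__main__":
    # Example: growth rate 12, 4 groups (both chosen for divisibility, not from the paper).
    block = GroupedDenseBlock(num_layers=6, in_channels=24, growth_rate=12, groups=4)
    y = block(torch.randn(1, 24, 32, 32))
    print(y.shape)  # torch.Size([1, 96, 32, 32]) -> 24 + 6 * 12 output channels
```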

Keywords: group convolution; growth rate; dense block; error rate; computational efficiency

CLC number: U661.3 [Traffic and Transportation Engineering - Ship and Waterway Engineering]
