Affiliations: [1] Tianjin Polytechnic University, Tianjin 300387, China; [2] Tianjin University of Commerce, Tianjin 300134, China
Source: Computer Applications and Software (《计算机应用与软件》), 2016, No. 12, pp. 165-168 (4 pages)
Funding: Tianjin Education Informatization Association 2015 Project (15-01-405-0089)
Abstract: At present, there are few domestic studies evaluating the effect of pre-training and fine-tuning on the performance of convolutional neural networks. On this basis, we propose adopting the CaffeNet network structure from the Caffe framework and applying a convolutional neural network to image object recognition. To analyze the computation process more intuitively, we visualize the features of some hidden layers in the convolutional network. Through experiments on the Caltech-101 data set, we analyze the classification performance of the deep convolutional network under random initialization versus pre-trained model initialization, as well as the effects of the global fine-tuning mode and the local fine-tuning mode on image classification. Experimental results show that pre-trained model initialization greatly improves convergence speed and recognition accuracy, and the global fine-tuning mode fits the new sample data well, likewise improving recognition accuracy. We achieve a mean recognition accuracy of 95.24% on the Caltech-101 data set, optimizing the image recognition process more effectively.
Classification code: TP391.4 [Automation and Computer Technology — Computer Application Technology]
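The abstract contrasts a "global" fine-tuning mode (all layers updated on the new data set) with a "local" mode (only the re-sized classifier layer updated, pre-trained feature layers frozen). As a minimal sketch of that distinction only — the layer names follow CaffeNet's well-known conv1–fc8 layout, but the function and its name are illustrative, not taken from the paper's Caffe configuration:

```python
# Sketch of the two fine-tuning modes compared in the paper.
# Layer names mirror CaffeNet's standard architecture; fc8 is the
# final classifier, re-sized for Caltech-101's 101 object classes.

CAFFENET_LAYERS = ["conv1", "conv2", "conv3", "conv4", "conv5",
                   "fc6", "fc7", "fc8"]

def finetune_plan(layers, mode):
    """Return a {layer_name: trainable} map for the given mode.

    'global': every layer is updated when training on the new data.
    'local' : only the last (classifier) layer is updated; the
              pre-trained feature layers stay frozen.
    """
    if mode == "global":
        return {name: True for name in layers}
    if mode == "local":
        return {name: (name == layers[-1]) for name in layers}
    raise ValueError(f"unknown fine-tuning mode: {mode}")

print(finetune_plan(CAFFENET_LAYERS, "local"))
```

In a real Caffe setup this freezing would be expressed by setting per-layer learning-rate multipliers to zero rather than by a Python flag; the sketch only captures which parameters each mode allows to change.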