Authors: Yang Nan; Nan Lin [1,2]; Zhang Dingyi; Ku Tao [1,2] (Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China; University of Chinese Academy of Sciences, Beijing 100049, China)
Affiliations: [1] Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, Liaoning, China; [2] University of Chinese Academy of Sciences, Beijing 100049, China
Source: Infrared and Laser Engineering, 2018, No. 2, pp. 9-16 (8 pages)
Funding: National Key Technology R&D Program of China (2015BAF02B01); Key Laboratory of Networked Control Systems, Chinese Academy of Sciences (2015BAF02B00)
Abstract: Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) have developed rapidly in image classification, computer vision, natural language processing, speech recognition, machine translation, and semantic analysis, drawing wide attention to the automatic generation of image descriptions by computers. The main problems in current image description models are sparse input text data, over-fitting, and a loss function that oscillates and is difficult to converge. This paper uses NIC as the baseline model. To address data sparsity, the one-hot text representation of the baseline is replaced with a word2vec mapping of the text; to prevent over-fitting, a regularization term and Dropout are added to the model; and, as an innovation in word-order memory, the associative memory unit GRU is introduced for text generation. In the experiments, the Adam optimizer is used to update the parameters iteratively. The results show that the improved model has fewer parameters and converges much faster, the loss curve is smoother, the maximum loss drops to 2.91, and the accuracy is nearly 15% higher than that of NIC. The experiments verify that mapping the text with word2vec markedly alleviates the data-sparsity problem, that adding a regularization term and using Dropout effectively prevent over-fitting, and that introducing the associative memory unit GRU greatly reduces the number of trainable parameters, speeds up convergence, and thereby improves the accuracy of the whole model.
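The abstract describes an NIC-style architecture with word2vec embeddings, a GRU decoder, Dropout, L2 regularization, and Adam updates. The following is a minimal sketch of such a decoder in PyTorch; the layer names, dimensions, and the CNN feature size are illustrative assumptions and do not reproduce the authors' implementation.

```python
# Minimal sketch (PyTorch) of a caption decoder along the lines described in the
# abstract: pretrained word2vec embeddings instead of one-hot inputs, a GRU for
# text generation, Dropout plus weight decay (L2 regularization) against
# over-fitting, and Adam for parameter updates. All sizes are assumptions.
import torch
import torch.nn as nn

class CaptionDecoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=512,
                 cnn_feat_dim=2048, word2vec_weights=None, dropout=0.5):
        super().__init__()
        # Embedding layer; pretrained word2vec vectors, if supplied, replace the
        # random initialization and stand in for the sparse one-hot representation.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        if word2vec_weights is not None:
            self.embed.weight.data.copy_(word2vec_weights)
        # Project the CNN image feature into the GRU's initial hidden state.
        self.init_h = nn.Linear(cnn_feat_dim, hidden_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.dropout = nn.Dropout(dropout)
        self.fc = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image_feats, captions):
        # image_feats: (batch, cnn_feat_dim); captions: (batch, seq_len) token ids
        h0 = torch.tanh(self.init_h(image_feats)).unsqueeze(0)  # (1, batch, hidden)
        emb = self.dropout(self.embed(captions))                # (batch, seq_len, embed)
        out, _ = self.gru(emb, h0)                              # (batch, seq_len, hidden)
        return self.fc(self.dropout(out))                       # per-step word logits

# weight_decay plays the role of the L2 regularization term mentioned in the abstract.
model = CaptionDecoder(vocab_size=10000)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)
criterion = nn.CrossEntropyLoss()
```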
Keywords: convolutional neural network; recurrent neural network; gated recurrent unit; natural language processing; image description
Classification: TP3 [Automation and Computer Technology - Computer Science and Technology]