Authors: Wang Yue, Liu Jinlai, Wang Xiaojie
Affiliation: [1] School of Computer Science, Beijing University of Posts and Telecommunications
Published in: The Journal of China Universities of Posts and Telecommunications, 2019, Issue 2, pp. 52-58 (7 pages)
Funding: supported by the National Natural Science Foundation of China (61273365) and the 111 Project (B08004)
Abstract: Video description aims to generate descriptive natural language for videos. Inspired by the deep neural network (DNN) used in machine translation, the video description (VD) task applies the convolutional neural network (CNN) to extract video features and the long short-term memory (LSTM) network to generate descriptions. However, some models generate incorrect words and syntax. The reason may be that previous models apply only the LSTM to generate sentences, which learns insufficient linguistic information. To solve this problem, an end-to-end DNN model incorporating subject, verb and object (SVO) supervision is proposed. Experimental results on a publicly available dataset, i.e. Youtube2Text, indicate that our model achieves a 58.4% consensus-based image description evaluation (CIDEr) score. It outperforms the mean pool and video description with first feed (VD-FF) models, demonstrating the effectiveness of SVO supervision.
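The pipeline the abstract describes (CNN frame features, mean-pooled and fed to an LSTM decoder, with auxiliary subject/verb/object heads providing extra linguistic supervision) can be sketched minimally in numpy. All names and dimensions here (`frame_feats`, `svo_heads`, the untrained random weights) are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

# Toy dimensions for the sketch; real models use CNN features of size ~2048.
rng = np.random.default_rng(0)
T_frames, feat_dim, hid, vocab = 8, 16, 32, 50

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; gate pre-activations stacked as [input, forget, output, cell]."""
    z = W @ x + U @ h + b
    i, f, o, g = np.split(z, 4)
    i, f, o = 1 / (1 + np.exp(-i)), 1 / (1 + np.exp(-f)), 1 / (1 + np.exp(-o))
    c = f * c + i * np.tanh(g)
    h = o * np.tanh(c)
    return h, c

# Stand-in for per-frame CNN features; mean-pooling over time mirrors the
# "mean pool" baseline the abstract compares against.
frame_feats = rng.standard_normal((T_frames, feat_dim))
video_vec = frame_feats.mean(axis=0)

# Random (untrained) decoder weights -- training is out of scope here.
W = rng.standard_normal((4 * hid, feat_dim)) * 0.1
U = rng.standard_normal((4 * hid, hid)) * 0.1
b = np.zeros(4 * hid)
W_word = rng.standard_normal((vocab, hid)) * 0.1
# Auxiliary heads: classify subject, verb and object from the hidden state.
# At training time their losses would be added to the captioning loss,
# supplying the SVO supervision signal.
svo_heads = {k: rng.standard_normal((vocab, hid)) * 0.1 for k in ("S", "V", "O")}

h, c = np.zeros(hid), np.zeros(hid)
h, c = lstm_step(video_vec, h, c, W, U, b)      # feed the video vector first
svo_logits = {k: Wk @ h for k, Wk in svo_heads.items()}

# Greedy word-by-word decoding for a few steps (word embeddings omitted;
# the video vector is re-fed at each step purely to keep the sketch short).
words = []
for _ in range(5):
    words.append(int(np.argmax(W_word @ h)))
    h, c = lstm_step(video_vec, h, c, W, U, b)
```

Each element of `words` is a vocabulary index; in a trained model these would map to caption tokens, and the SVO logits would be supervised against ground-truth subject/verb/object labels.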