A Sentence Retrieval Generation Network Guided Video Captioning  

Authors: Ou Ye, Mimi Wang, Zhenhua Yu, Yan Fu, Shun Yi, Jun Deng

Affiliations: [1] College of Computer Science and Technology, Xi'an University of Science and Technology, Xi'an 710054, China; [2] College of Safety and Engineering, Xi'an University of Science and Technology, Xi'an 710054, China

Source: Computers, Materials & Continua, 2023, No. 6, pp. 5675-5696 (22 pages)

Funding: Supported in part by the National Natural Science Foundation of China under Grants 62273272 and 61873277; in part by the China Postdoctoral Science Foundation under Grant 2020M673446; in part by the Key Research and Development Program of Shaanxi Province under Grant 2023-YBGY-243; and in part by the Youth Innovation Team of Shaanxi Universities.

Abstract: Current encoder-decoder video captioning models mainly rely on a single video input source. Because few studies employ external corpus information to guide caption generation, the content of the generated captions is limited, which hinders the accurate description and understanding of video content. To address this issue, a novel video captioning method guided by a sentence retrieval generation network (ED-SRG) is proposed in this paper. First, a ResNeXt network, an efficient convolutional network for online video understanding (ECO), and a long short-term memory (LSTM) network are integrated to construct an encoder-decoder, which extracts the 2D features, 3D features, and object features of video data, respectively. These features are decoded into textual sentences that conform to the video content and serve as queries for sentence retrieval. Then, a sentence-transformer network model retrieves sentences from an external corpus that are semantically similar to these textual sentences, and candidate sentences are screened out through similarity measurement. Finally, a novel GPT-2 network model is constructed on the basis of the GPT-2 network structure. The model introduces a designed random selector (RS) that randomly selects predicted words with high probability in the corpus, which guides the generation of textual sentences that better match natural human language expressions. The proposed method is compared with several existing works in experiments. The results show that BLEU-4, CIDEr, ROUGE_L, and METEOR improve by 3.1%, 1.3%, 0.3%, and 1.5% on the public MSVD dataset, and by 1.3%, 0.5%, 0.2%, and 1.9% on the public MSR-VTT dataset, respectively. These results indicate that the proposed method generates video captions with richer semantics than several state-of-the-art approaches.
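The two guidance steps described in the abstract can be illustrated with short sketches. The first is a minimal sketch of the sentence-retrieval step, assuming the public sentence-transformers library; the model name, the query sentence, and the tiny corpus are illustrative placeholders, not the paper's actual retrieval corpus.

```python
# Sentence retrieval with similarity screening (sketch, not the paper's code).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed retrieval encoder

# Sentence decoded from the video by the encoder-decoder (hypothetical example).
query = "a man is slicing a tomato on a cutting board"

# External corpus of human-written captions (tiny illustrative sample).
corpus = [
    "a person is cutting a tomato into slices",
    "a dog is running across the yard",
    "someone chops vegetables in a kitchen",
]

query_emb = model.encode(query, convert_to_tensor=True)
corpus_emb = model.encode(corpus, convert_to_tensor=True)

# Screen candidate sentences through cosine-similarity measurement.
scores = util.cos_sim(query_emb, corpus_emb)[0]
top = scores.topk(k=2)
candidates = [corpus[int(i)] for i in top.indices]
print(candidates)  # the two most semantically similar corpus sentences
```

The second sketch illustrates the random selector (RS) idea on top of GPT-2, assuming the HuggingFace transformers library. The abstract does not specify the exact RS design, so uniform sampling over the top-k high-probability next words is an assumption here.

```python
# RS-style decoding on GPT-2 (sketch; the paper's exact selector may differ).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def generate_with_rs(prompt: str, max_new_tokens: int = 20, k: int = 5) -> str:
    ids = tokenizer.encode(prompt, return_tensors="pt")
    for _ in range(max_new_tokens):
        with torch.no_grad():
            logits = model(ids).logits[:, -1, :]          # next-token scores
        topk = torch.topk(logits, k)                      # high-probability words
        choice = topk.indices[0, torch.randint(k, (1,))]  # random selector
        ids = torch.cat([ids, choice.view(1, 1)], dim=1)
    return tokenizer.decode(ids[0], skip_special_tokens=True)

print(generate_with_rs("a man is slicing"))
```

In this reading, the random selector trades the determinism of greedy decoding for variation among equally plausible words, which is what lets the generated captions read more like natural human phrasing.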

Keywords: video captioning; encoder-decoder; sentence retrieval; external corpus; RS; GPT-2 network model

Classification Code: TP391.41 [Automation and Computer Technology / Computer Application Technology]

 
