A Concise and Varied Visual Features-Based Image Captioning Model with Visual Selection  


Authors: Alaa Thobhani, Beiji Zou, Xiaoyan Kui, Amr Abdussalam, Muhammad Asim, Naveed Ahmed, Mohammed Ali Alshara

Affiliations: [1] School of Computer Science and Engineering, Central South University, Changsha 410083, China; [2] Electronic Engineering and Information Science Department, University of Science and Technology of China, Hefei 230026, China; [3] EIAS Data Science Lab, College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia; [4] College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia; [5] College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University, Riyadh 11432, Saudi Arabia

Source: Computers, Materials & Continua, 2024, No. 11, pp. 2873-2894 (22 pages)

Funding: Supported by the National Natural Science Foundation of China (Nos. U22A2034, 62177047); the High Caliber Foreign Experts Introduction Plan funded by MOST; and the Central South University Research Programme of Advanced Interdisciplinary Studies (No. 2023QYJC020).

Abstract: Image captioning has gained increasing attention in recent years. The visual characteristics found in input images play a crucial role in generating high-quality captions. Prior studies have used visual attention mechanisms to dynamically focus on localized regions of the input image, improving the effectiveness of identifying relevant image regions at each step of caption generation. However, equipping image captioning models with the capability to select the most relevant visual features from the input image and attend to them can significantly improve the utilization of these features and, consequently, enhance captioning network performance. In light of this, we present an image captioning framework that efficiently exploits the extracted representations of the image. Our framework comprises three key components: the Visual Feature Detector (VFD) module, the Visual Feature Visual Attention (VFVA) module, and the language model. The VFD module detects a subset of the most pertinent features from the local visual features, creating an updated visual features matrix. Subsequently, the VFVA attends to the visual features matrix generated by the VFD, producing an updated context vector that the language model uses to generate an informative description. Integrating the VFD and VFVA modules introduces an additional layer of processing for the visual features, thereby enhancing the image captioning model's performance. Our experiments on the MS-COCO dataset show that the proposed framework competes well with state-of-the-art methods, effectively leveraging visual representations to improve performance. The implementation code can be found at: https://github.com/althobhani/VFDICM (accessed on 30 July 2024).
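The abstract describes a three-stage pipeline: the VFD selects a subset of relevant local visual features, the VFVA attends over the selected features to produce a context vector, and a language model consumes that vector to generate the caption. As a minimal illustrative sketch only, the PyTorch snippet below shows one plausible reading of the first two stages: a learned top-k relevance selector for the VFD and standard additive attention for the VFVA. All class names, layer choices, and dimensions here are hypothetical assumptions, not the authors' implementation, which is available at the GitHub link above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualFeatureDetector(nn.Module):
    """Hypothetical VFD sketch: score each local visual feature and keep
    the top-k most relevant ones, yielding an updated features matrix."""
    def __init__(self, feat_dim: int, k: int):
        super().__init__()
        self.scorer = nn.Linear(feat_dim, 1)  # relevance score per region
        self.k = k

    def forward(self, feats):  # feats: (batch, regions, feat_dim)
        scores = self.scorer(feats).squeeze(-1)            # (batch, regions)
        topk = scores.topk(self.k, dim=1).indices          # k best regions
        idx = topk.unsqueeze(-1).expand(-1, -1, feats.size(-1))
        return feats.gather(1, idx)                        # (batch, k, feat_dim)

class VisualFeatureVisualAttention(nn.Module):
    """Hypothetical VFVA sketch: additive (Bahdanau-style) attention over
    the VFD output, conditioned on the language model's hidden state."""
    def __init__(self, feat_dim: int, hid_dim: int, att_dim: int):
        super().__init__()
        self.w_v = nn.Linear(feat_dim, att_dim)
        self.w_h = nn.Linear(hid_dim, att_dim)
        self.w_a = nn.Linear(att_dim, 1)

    def forward(self, feats, hidden):  # hidden: (batch, hid_dim)
        e = self.w_a(torch.tanh(self.w_v(feats) + self.w_h(hidden).unsqueeze(1)))
        alpha = F.softmax(e.squeeze(-1), dim=1)            # attention weights
        return (alpha.unsqueeze(-1) * feats).sum(dim=1)    # context vector

# Usage sketch: keep the 10 most relevant of 36 region features, then
# attend over them with a 512-d decoder hidden state (all sizes assumed).
vfd = VisualFeatureDetector(feat_dim=2048, k=10)
vfva = VisualFeatureVisualAttention(feat_dim=2048, hid_dim=512, att_dim=512)
feats = torch.randn(4, 36, 2048)                 # e.g., region-level CNN features
context = vfva(vfd(feats), torch.randn(4, 512))  # (4, 2048) context vector
```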

Keywords: visual attention; image captioning; visual feature detector; visual feature visual attention

Classification: TP391 [Automation and Computer Technology - Computer Application Technology]

 
