Deconfounded fashion image captioning with transformer and multimodal retrieval  


Authors: Tao PENG, Weiqiao YIN, Junping LIU, Li LI, Xinrong HU

Affiliation: [1] School of Computer Science and Artificial Intelligence, Wuhan Textile University, Wuhan 430200, China

Source: Virtual Reality & Intelligent Hardware, 2025, No. 2, pp. 127-138 (12 pages)

Abstract: Background: Annotating fashion images is an important task in the fashion industry as well as in social media and e-commerce. However, owing to the complexity and diversity of fashion images, this task entails multiple challenges, including the lack of fine-grained captions and confounders caused by dataset bias. In particular, confounders often cause models to learn spurious correlations, thereby reducing their generalization ability. Method: We propose the Deconfounded Fashion Image Captioning (DFIC) framework, which first uses multimodal retrieval to enrich the predicted clothing captions, and then constructs a detailed causal graph in the decoder, applying causal inference to perform deconfounding. Multimodal retrieval obtains semantic words related to the image features, which are fed into the decoder as prompt words to enrich the sentence descriptions. In the decoder, causal inference is applied to disentangle visual and semantic features while eliminating both visual and language confounding. Results: Our method not only effectively enriches the captions of target images but also greatly reduces confounders introduced by the dataset. The effectiveness of the proposed framework was verified experimentally on the FACAD dataset.
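The retrieval step described above (finding semantic words related to image features, to be used as decoder prompts) can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, vocabulary, and random embeddings below are hypothetical stand-ins, assuming a shared image-text embedding space (e.g., from a CLIP-style encoder) in which nearest neighbors are found by cosine similarity.

```python
import numpy as np

def retrieve_prompt_words(image_emb, word_embs, vocab, k=3):
    """Hypothetical sketch: return the k vocabulary words whose embeddings
    are closest (by cosine similarity) to the image embedding."""
    # Normalize so that the dot product equals cosine similarity.
    image_emb = image_emb / np.linalg.norm(image_emb)
    word_embs = word_embs / np.linalg.norm(word_embs, axis=1, keepdims=True)
    sims = word_embs @ image_emb           # one similarity score per word
    top = np.argsort(-sims)[:k]            # indices of the k best matches
    return [vocab[i] for i in top]

# Toy data: random embeddings stand in for a pretrained multimodal encoder.
rng = np.random.default_rng(0)
vocab = ["sleeve", "denim", "floral", "collar", "pleated"]
word_embs = rng.normal(size=(len(vocab), 8))
# Make the image embedding nearly identical to "denim" so retrieval is predictable.
image_emb = word_embs[1] + 0.05 * rng.normal(size=8)

print(retrieve_prompt_words(image_emb, word_embs, vocab, k=2))
```

In the DFIC pipeline as described, words retrieved this way would be concatenated to the decoder input as prompts, steering generation toward fine-grained attributes present in the image.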

Keywords: Image captioning; Causal inference; Fashion captioning

Classification: TP3 [Automation and Computer Technology: Computer Science and Technology]
