ViT2CMH: Vision Transformer Cross-Modal Hashing for Fine-Grained Vision-Text Retrieval (Cited by: 1)


Authors: Mingyong Li, Qiqi Li, Zheng Jiang, Yan Ma

Affiliation: [1] College of Computer and Information Science, Chongqing Normal University, Chongqing 401331, China

Source: Computer Systems Science & Engineering, 2023, Issue 8, pp. 1401-1414 (14 pages)

Funding: This work was partially supported by the Science and Technology Project of Chongqing Education Commission of China (KJZD-K202200513); the National Natural Science Foundation of China (61370205); the Chongqing Normal University Fund (22XLB003); and the Chongqing Education Science Planning Project (2021-GX-320).

Abstract: In recent years, advances in deep learning have further improved hash retrieval technology. Most existing hashing methods use Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) to process image and text information, respectively. This subjects images and texts to local constraints, and matching based on inherent labels cannot capture fine-grained information, often leading to suboptimal results. Motivated by the development of the transformer model, we propose a framework called ViT2CMH, based mainly on the Vision Transformer rather than CNNs or RNNs, to handle deep cross-modal hashing tasks. Specifically, we use a BERT network to extract text features and the Vision Transformer as the image network of the model. Finally, the features are transformed into hash codes for efficient and fast retrieval. We conduct extensive experiments on Microsoft COCO (MS-COCO) and Flickr30K, comparing against baseline hashing methods and image-text matching methods, and show that our method achieves better performance.
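
To make the architecture described in the abstract concrete, below is a minimal sketch of a ViT2CMH-style dual-encoder hashing model in PyTorch with Hugging Face transformers. The class name, checkpoint names, code length, and the tanh relaxation with a sign step at retrieval time are illustrative assumptions inferred from the abstract, not the paper's exact implementation.

import torch
import torch.nn as nn
from transformers import BertModel, ViTModel

class ViT2CMHSketch(nn.Module):
    """Dual-encoder hashing sketch: ViT for images, BERT for text."""

    def __init__(self, hash_bits: int = 64):
        super().__init__()
        # Image branch: a Vision Transformer instead of a CNN.
        self.image_encoder = ViTModel.from_pretrained("google/vit-base-patch16-224")
        # Text branch: BERT instead of an RNN.
        self.text_encoder = BertModel.from_pretrained("bert-base-uncased")
        # Hash heads project both modalities to codes of the same length;
        # tanh is a common differentiable relaxation of binary codes.
        self.image_hash = nn.Linear(self.image_encoder.config.hidden_size, hash_bits)
        self.text_hash = nn.Linear(self.text_encoder.config.hidden_size, hash_bits)

    def encode_image(self, pixel_values: torch.Tensor) -> torch.Tensor:
        # Use the [CLS] token embedding as the global image feature.
        feat = self.image_encoder(pixel_values=pixel_values).last_hidden_state[:, 0]
        return torch.tanh(self.image_hash(feat))

    def encode_text(self, input_ids: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        # Use the [CLS] token embedding as the global text feature.
        feat = self.text_encoder(input_ids=input_ids,
                                 attention_mask=attention_mask).last_hidden_state[:, 0]
        return torch.tanh(self.text_hash(feat))

    @staticmethod
    def to_binary(relaxed_codes: torch.Tensor) -> torch.Tensor:
        # Binarize relaxed codes for Hamming-space retrieval.
        return torch.sign(relaxed_codes)

At retrieval time, the relaxed codes from both branches would be binarized with to_binary and compared by Hamming distance, which is what makes hashing-based cross-modal retrieval efficient and fast.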

Keywords: hash learning; cross-modal retrieval; fine-grained matching; transformer

Classification: TN624 [Electronics and Telecommunications: Circuits and Systems]

 
