Authors: Mingyong Li, Qiqi Li, Zheng Jiang, Yan Ma
Affiliation: [1] College of Computer and Information Science, Chongqing Normal University, Chongqing 401331, China
Source: Computer Systems Science & Engineering, 2023, Issue 8, pp. 1401-1414 (14 pages)
Funding: This work was partially supported by the Science and Technology Project of Chongqing Education Commission of China (KJZD-K202200513), the National Natural Science Foundation of China (61370205), the Chongqing Normal University Fund (22XLB003), and the Chongqing Education Science Planning Project (2021-GX-320).
Abstract: In recent years, the development of deep learning has further advanced hash retrieval technology. Most existing hashing methods use Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) to process image and text information, respectively. This subjects images and texts to local constraints, and inherent label matching cannot capture fine-grained information, often leading to suboptimal results. Driven by the development of the transformer model, we propose a framework called ViT2CMH, based mainly on the Vision Transformer rather than CNNs or RNNs, to handle deep cross-modal hashing tasks. Specifically, we use a BERT network to extract text features and the Vision Transformer as the image network of the model. Finally, the features are transformed into hash codes for efficient and fast retrieval. We conduct extensive experiments on Microsoft COCO (MS-COCO) and Flickr30K, comparing against baseline hashing methods and image-text matching methods, and show that our method achieves better performance.
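The abstract's final step, turning continuous encoder features into hash codes for fast retrieval, can be illustrated with a minimal sketch. This is not the paper's ViT2CMH implementation; the encoder outputs are stand-in random vectors, and the binarization (sign thresholding) and Hamming-distance ranking shown here are the standard ingredients of hashing-based retrieval, assumed for illustration.

```python
import numpy as np

def to_hash_code(features: np.ndarray) -> np.ndarray:
    """Binarize real-valued features into a {0, 1} hash code via sign thresholding."""
    return (features > 0).astype(np.uint8)

def hamming_distance(a: np.ndarray, b: np.ndarray) -> int:
    """Count the number of differing bits between two hash codes."""
    return int(np.sum(a != b))

def retrieve(query_code: np.ndarray, database_codes: list) -> np.ndarray:
    """Rank database items by Hamming distance to the query code (closest first)."""
    dists = [hamming_distance(query_code, c) for c in database_codes]
    return np.argsort(dists)

# Stand-ins for encoder outputs: in the paper these would come from the
# Vision Transformer (images) and BERT (texts); here they are random vectors.
rng = np.random.default_rng(0)
image_feature = rng.standard_normal(64)
text_features = [rng.standard_normal(64) for _ in range(5)]

query = to_hash_code(image_feature)
database = [to_hash_code(t) for t in text_features]
ranking = retrieve(query, database)  # indices of texts, best match first
```

Comparing short binary codes by Hamming distance is what makes this kind of retrieval fast: it reduces to XOR and popcount operations rather than floating-point similarity over full embeddings.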
Keywords: hash learning; cross-modal retrieval; fine-grained matching; transformer
Classification: TN624 [Electronics and Telecommunications — Circuits and Systems]