Cross-modal Self-distillation for Zero-shot Sketch-based Image Retrieval (基于跨模态自蒸馏的零样本草图检索)  Cited by: 3

Authors: TIAN Jia-Lin; XU Xing; SHEN Fu-Min; SHEN Heng-Tao (School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China)

Affiliation: [1] School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, Sichuan, China

Source: Journal of Software (《软件学报》), 2022, Issue 9, pp. 3152-3164 (13 pages)

Funding: National Natural Science Foundation of China (61976049, 62072080, 61632007).

Abstract: Zero-shot sketch-based image retrieval uses sketches of unseen classes as queries to retrieve images of those classes. The task therefore faces two challenges simultaneously: the modal gap between sketches and images, and the inconsistency between seen and unseen classes. Previous approaches eliminate the modal gap by projecting sketches and images into a common space, and bridge the semantic inconsistency between seen and unseen classes with semantic embeddings (e.g., word vectors and word similarity). This study proposes a cross-modal self-distillation approach that learns generalizable features from the perspective of knowledge distillation, without involving semantic embeddings in training. Specifically, the knowledge of a pre-trained image recognition network is first transferred to the student network through conventional knowledge distillation. Then, exploiting the cross-modal correlation between sketches and images, cross-modal self-distillation indirectly transfers this knowledge to recognition in the sketch modality, enhancing the discriminability and generalizability of sketch features. To further promote the integration and propagation of knowledge within the sketch modality, sketch self-distillation is additionally proposed. By learning discriminative and generalizable features from the data, the student network eliminates the modal gap and the semantic inconsistency. Extensive experiments on three benchmark datasets, namely Sketchy, TU-Berlin, and QuickDraw, demonstrate the superiority of the proposed cross-modal self-distillation approach over state-of-the-art methods.
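The paper's exact losses are not reproduced here, but the building block the abstract describes, transferring a frozen teacher's softened class distribution to a student via a KL-divergence objective (classic Hinton-style knowledge distillation), can be sketched as follows. The logit values and the three-class setup are hypothetical, for illustration only:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax over a list of class logits."""
    scaled = [z / T for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, T=4.0):
    """KL(teacher || student) between temperature-softened distributions,
    scaled by T^2 as is standard in knowledge distillation."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))
    return kl * T * T

# Cross-modal use (illustrative): the frozen teacher scores a photo,
# the student scores the paired sketch of the same class, and the student
# is trained to match the teacher's softened distribution, so knowledge
# flows from the image modality into the sketch modality.
teacher_photo_logits = [2.0, 0.5, -1.0]   # hypothetical 3-class scores
student_sketch_logits = [1.2, 0.3, -0.4]
loss = distillation_loss(teacher_photo_logits, student_sketch_logits)
```

Minimizing this loss over sketch-photo pairs pulls the student's sketch predictions toward the teacher's image predictions, which is how the method transfers knowledge across modalities without semantic embeddings.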

Keywords: zero-shot sketch-based image retrieval; zero-shot learning; cross-modal retrieval; knowledge distillation

Classification code: TP391 [Automation and Computer Technology - Computer Application Technology]

 
