Authors: JIN Chuan [1]; FU Xiaosi [2]
Affiliations: [1] School of Art and Design, Anhui Broadcasting Movie and Television College, Hefei 230001, Anhui, China; [2] School of Computer, Central China Normal University, Wuhan 430001, Hubei, China
Source: Journal of Jingchu University of Technology, 2024, No. 4, pp. 33-39 (7 pages)
Abstract: Traditional deep-hashing image retrieval methods attend to redundant information when extracting image features, which degrades the final retrieval accuracy. To address this problem, this article proposes a cross-dimensional interactive attention module that can be fused into convolutional neural networks to improve network performance and learn feature information more useful for image retrieval. For the deep-hash image retrieval task, two classic models, VGG16 and ResNet18, were selected as the base retrieval models. After adding the attention module and redesigning the hash-code target loss function, comparative experiments were conducted on the CIFAR-10 and NUS-WIDE datasets. The results show that adding the attention mechanism substantially improves retrieval accuracy, verifying the effectiveness of the proposed method.
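The abstract names the technique but not its internal design, so the following is a minimal PyTorch sketch of one plausible reading: a triplet-attention-style module in which three parallel branches rotate the feature tensor so that a lightweight 2-D convolutional gate can model channel-width, channel-height, and height-width interactions, attached to a ResNet18 backbone with a tanh hash layer. All class names, the module layout, and parameters such as bits are hypothetical illustrations, not the authors' implementation; the redesigned hash-code loss mentioned in the abstract is not reproduced here because its form is not given.

import torch
import torch.nn as nn
import torchvision.models as models

class ZPool(nn.Module):
    # Stack max- and mean-pooled maps along the (rotated) channel axis.
    def forward(self, x):
        return torch.cat([x.max(dim=1, keepdim=True)[0],
                          x.mean(dim=1, keepdim=True)], dim=1)

class AttentionGate(nn.Module):
    # A 7x7 convolutional gate over the two pooled maps produces a
    # sigmoid attention mask for whichever plane is currently in view.
    def __init__(self):
        super().__init__()
        self.pool = ZPool()
        self.conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3, bias=False),
            nn.BatchNorm2d(1))

    def forward(self, x):
        return x * torch.sigmoid(self.conv(self.pool(x)))

class CrossDimensionAttention(nn.Module):
    # Hypothetical cross-dimensional interaction: rotate the tensor so each
    # branch gates a different pair of dimensions (C-W, C-H, H-W), then
    # average the three attended results.
    def __init__(self):
        super().__init__()
        self.cw = AttentionGate()
        self.ch = AttentionGate()
        self.hw = AttentionGate()

    def forward(self, x):
        x_cw = self.cw(x.permute(0, 2, 1, 3)).permute(0, 2, 1, 3)  # mixes C and W
        x_ch = self.ch(x.permute(0, 3, 2, 1)).permute(0, 3, 2, 1)  # mixes C and H
        x_hw = self.hw(x)                                          # plain H-W attention
        return (x_cw + x_ch + x_hw) / 3.0

class DeepHashNet(nn.Module):
    # ResNet18 backbone + attention + a linear hash layer; tanh outputs are
    # binarized with sign() at retrieval time. `bits` is an assumed code length.
    def __init__(self, bits=48):
        super().__init__()
        backbone = models.resnet18(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.attention = CrossDimensionAttention()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.hash_layer = nn.Linear(512, bits)

    def forward(self, x):
        f = self.attention(self.features(x))
        return torch.tanh(self.hash_layer(self.pool(f).flatten(1)))

if __name__ == "__main__":
    net = DeepHashNet(bits=48)
    codes = net(torch.randn(2, 3, 224, 224))
    print(codes.shape)  # torch.Size([2, 48])

Rotating the tensor lets one lightweight 2-D gate capture interactions between channels and each spatial axis without the channel-pooling bottleneck of squeeze-and-excitation-style blocks, which matches the abstract's claim of attending across dimensions. During training the tanh relaxation keeps the hash layer differentiable; torch.sign(codes) yields the final binary codes for retrieval.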
Classification: TP391 [Automation and Computer Technology / Computer Application Technology]