Authors: YE Han-min [1,2]; LU Si-qi; CHENG Xiao-hui [1,2]; ZHANG Rui-fang [3]
Affiliations: [1] College of Information Science and Engineering, Guilin University of Technology, Guilin, Guangxi 541006, China; [2] Guangxi Key Laboratory of Embedded Technology and Intelligent System, Guilin University of Technology, Guilin, Guangxi 541004, China; [3] College of Mechanical and Control Engineering, Guilin University of Technology, Guilin, Guangxi 541006, China
Source: Computer Engineering and Design, 2024, No. 10, pp. 2960-2969 (10 pages)
Funding: National Natural Science Foundation of China (61662017); Open (Director's) Fund of the Guangxi Key Laboratory of Embedded Technology and Intelligent System (2019-01-10)
Abstract: To strengthen the accuracy gains that attention mechanisms bring to deep neural networks, a re-parameterized channel attention module (RCAM) is proposed. Because the channel-compression step used in squeeze-and-excitation networks noticeably degrades network accuracy, that parameter-reduction scheme is abandoned; instead, a channel re-parameterization module based on re-parameterization techniques is proposed and combined with the attention mechanism. Guided by an ablation study of integration strategies, the attention module is inserted into the backbone network. Experimental results on the public datasets CIFAR-100 and ImageNet-100 show that, with RepVGG_A0 and ResNet-18 backbones, accuracy improves by 2.37% and 1.72%, and by 1.61% and 0.71%, respectively, over the same networks without attention. Comparison with other well-known attention mechanisms shows that the re-parameterization-based attention module improves the backbone network considerably more than the alternatives.
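The abstract does not give RCAM's internal structure, only that the SE-style channel compression is replaced by a re-parameterizable channel transform. The sketch below is therefore an illustrative assumption, not the authors' code: an SE-style squeeze/excite block whose bottleneck is replaced by full-width parallel branches (a linear map plus an identity path) that are fused into a single linear layer for inference, in the spirit of structural re-parameterization. The class name RepChannelAttention, the branch layout, and the reparameterize() method are all hypothetical.

```python
# Hypothetical sketch of a re-parameterized channel attention block (PyTorch).
# Assumptions: SE-style squeeze (global average pooling) and excite (sigmoid
# gating), with the reduce-then-expand bottleneck replaced by parallel
# full-width branches that merge into one Linear layer at deploy time.
import torch
import torch.nn as nn


class RepChannelAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze: per-channel descriptor
        self.fc = nn.Linear(channels, channels)  # training-time linear branch (no compression)
        self.deployed = None                     # fused layer after re-parameterization
        self.gate = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = self.pool(x).flatten(1)              # (N, C)
        if self.deployed is not None:
            w = self.gate(self.deployed(s))      # single fused branch at inference
        else:
            w = self.gate(self.fc(s) + s)        # linear branch + identity branch in training
        return x * w.unsqueeze(-1).unsqueeze(-1) # excite: per-channel re-weighting

    @torch.no_grad()
    def reparameterize(self):
        """Fuse the linear and identity branches into one equivalent Linear layer."""
        c = self.fc.in_features
        fused = nn.Linear(c, c, bias=True)
        fused.weight.copy_(self.fc.weight + torch.eye(c))  # fold identity into the weight
        fused.bias.copy_(self.fc.bias)
        self.deployed = fused


if __name__ == "__main__":
    m = RepChannelAttention(64)
    x = torch.randn(2, 64, 32, 32)
    y_train = m(x)
    m.reparameterize()
    y_deploy = m(x)
    print(torch.allclose(y_train, y_deploy, atol=1e-6))  # branches fuse exactly
```

In this reading, the training-time multi-branch structure keeps the full channel width (avoiding the accuracy loss the abstract attributes to SE's compression), while the fused single branch keeps inference cost comparable to a plain SE block. Whether the published RCAM uses this exact layout cannot be determined from this record.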
Keywords: re-parameterization; attention mechanism; channel attention mechanism; convolutional neural network; neural network; image classification; deep learning
Classification: TP183 (Automation and Computer Technology: Control Theory and Control Engineering)