Authors: 郜高飞 (GAO Gaofei); 邵党国 (SHAO Dangguo)[1]; 马磊 (MA Lei)[1]; 易三莉 (YI Sanli)[1]
Affiliation: [1] Yunnan Key Laboratory of Computer Technologies Application, Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650504, China
Source: Journal of Jilin University (Science Edition), 2025, No. 2, pp. 437-444 (8 pages)
Funding: National Natural Science Foundation of China (Grant No. 62266025); Open Fund of the Yunnan Key Laboratory of Computer Technologies Application (Grant No. CB22144S078A)
Abstract: To address the large parameter count and long training time of convolutional neural networks, we propose a facial expression recognition method based on a lightweight attention residual network. First, the model is rebuilt with a residual network as its backbone, and performance is improved by reducing the number of layers and refining the residual module. Second, depthwise separable convolution is introduced to reduce the model's parameters and computational cost. Finally, a squeeze-and-excitation module in which the Mish function replaces ReLU adaptively adjusts the channel weights. The model was validated with classical ten-fold cross-validation on the two public datasets CK+ and JAFFE, achieving accuracies of 98.16% and 96.67%, respectively. The experimental results show that the proposed method strikes a good trade-off between recognition accuracy and model complexity.
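The two components the abstract names, depthwise separable convolution and a squeeze-and-excitation (SE) module with Mish in place of ReLU, can be sketched in PyTorch as below. This is a minimal illustration of the general techniques, not the authors' actual network; the class names, channel sizes, and the reduction ratio of 16 are assumptions for the example.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution: a per-channel (depthwise) conv
    followed by a 1x1 (pointwise) conv, which uses far fewer parameters
    than one standard convolution of the same shape."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride,
                                   padding=kernel_size // 2,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class SEBlockMish(nn.Module):
    """Squeeze-and-excitation block: global-average-pool each channel,
    pass through a small bottleneck MLP, and re-weight the channels.
    Mish (x * tanh(softplus(x))) replaces the usual ReLU."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.Mish(),  # Mish in place of ReLU, as in the abstract
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # channel-wise re-weighting
```

For a 16-to-32-channel 3x3 layer, the standard convolution needs 16 x 32 x 3 x 3 = 4608 weights, while the separable version needs only 16 x 3 x 3 + 16 x 32 = 656, which is the parameter saving the abstract refers to.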
Keywords: facial expression recognition; lightweight; residual network; depthwise separable convolution; attention mechanism
Classification: TP391.41 [Automation and Computer Technology - Computer Application Technology]