Authors: Liang Yan; Huang Runcai; Lu Shicheng (School of Electrical & Electronic Engineering, Shanghai University of Engineering & Technology, Shanghai 201600, China)
Affiliation: School of Electrical & Electronic Engineering, Shanghai University of Engineering & Technology, Shanghai 201600, China
Source: Application Research of Computers, 2025, No. 3, pp. 903-910 (8 pages)
Abstract: Addressing the challenges of temporal feature extraction in micro-expression recognition, including the difficulty of capturing features due to their transience, the complexity of spatiotemporal information fusion, overfitting caused by data sparsity, the limitations of static feature extraction methods, and the impact of data preprocessing on recognition performance, this paper proposes a multimodal micro-expression recognition method based on an improved 3D ResNet (IM3DR-MFER). The method improves 3D ResNet18 by incorporating a parameter-reduction strategy and a multi-scale context-aware fusion strategy into the traditional 3D ResNet, reducing the parameter count while enhancing the network's ability to capture local facial features and their information in a broader context. By fusing global facial features with optical-flow dynamic features, it constructs a dual-modal input framework that significantly improves the model's feature representation across different dimensions. It further introduces a novel three-dimensional attention mechanism (CASANet) that adaptively identifies and highlights the key features at each time point of a micro-expression sequence. Experiments on CASME II, SAMM, and a composite dataset (CD) show that the proposed method achieves accuracies of 93.2%, 88.7%, and 84.6%, respectively, verifying its effectiveness and advancement in facial micro-expression recognition tasks.
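The abstract outlines three components: an improved 3D ResNet18 backbone, a dual-modal input of face clips plus optical flow, and a 3D attention mechanism (CASANet). Below is a minimal PyTorch sketch of that general shape, not the authors' implementation: it assumes torchvision's r3d_18 as the backbone, a plain channel-attention block as a stand-in for CASANet, simple late fusion of the two streams, and an illustrative 5-class output, none of which are specified in the abstract.

import torch
import torch.nn as nn
from torchvision.models.video import r3d_18


class Simple3DAttention(nn.Module):
    """Channel attention over (C, T, H, W) video feature maps.

    A plain squeeze-and-excitation-style block, used here only as a stand-in
    for CASANet, whose internal design is not described in the abstract.
    """

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)  # squeeze T, H, W to 1x1x1
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c = x.shape[:2]
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w  # reweight channels of the feature volume


def _trunk(in_channels: int) -> nn.Sequential:
    """3D ResNet-18 convolutional trunk from torchvision, classifier removed."""
    net = r3d_18(weights=None)
    if in_channels != 3:
        # Optical-flow clips have 2 channels (u, v); swap the stem convolution.
        net.stem[0] = nn.Conv3d(in_channels, 64, kernel_size=(3, 7, 7),
                                stride=(1, 2, 2), padding=(1, 3, 3), bias=False)
    return nn.Sequential(net.stem, net.layer1, net.layer2, net.layer3, net.layer4)


class DualStreamMER(nn.Module):
    """Two-stream 3D CNN: RGB face clip + optical flow, late channel fusion."""

    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.rgb_stream = _trunk(3)
        self.flow_stream = _trunk(2)
        self.attn = Simple3DAttention(512 * 2)  # both trunks end at 512 channels
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.classifier = nn.Linear(512 * 2, num_classes)

    def forward(self, rgb: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
        f = torch.cat([self.rgb_stream(rgb), self.flow_stream(flow)], dim=1)
        f = self.attn(f)             # emphasize informative fused channels
        f = self.pool(f).flatten(1)  # (B, 1024)
        return self.classifier(f)


if __name__ == "__main__":
    model = DualStreamMER(num_classes=5)
    rgb = torch.randn(2, 3, 16, 112, 112)   # (B, C, T, H, W) cropped face clip
    flow = torch.randn(2, 2, 16, 112, 112)  # (B, 2, T, H, W) optical-flow field
    print(model(rgb, flow).shape)           # torch.Size([2, 5])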