Authors: Xu Zhenqi; Piao Yan [1]; Kang Jiyuan; Ju Chengwei (School of Electronic and Information Engineering, Changchun University of Science and Technology, Changchun 130022, China)
Affiliation: [1] School of Electronic and Information Engineering, Changchun University of Science and Technology, Changchun 130022, China
Source: Journal of Electronic Measurement and Instrumentation, 2025, No. 2, pp. 32-40 (9 pages)
Funding: Supported by the Jilin Province Science and Technology Support Project (YDZJ202402041GXJD).
Abstract: Addressing the limitation of traditional gait recognition methods that neglect the temporal information in gait features, we propose a gait recognition framework that integrates 3D-CBAM and cross-temporal-scale feature analysis. By incorporating an attention module into the model, it adaptively focuses on critical channels and spatial locations within the input gait sequences, enhancing the model's gait recognition performance. Furthermore, the enhanced global and local feature extractor (EGLFE) decouples temporal and spatial information to a certain extent during global feature extraction. Inserting additional LeakyReLU layers between the 2D and 1D convolutions increases the number of nonlinearities in the network, which helps expand the receptive field during gait feature extraction; this in turn strengthens the model's ability to learn features and yields better global feature extraction. Local features are also fused to compensate for the feature loss caused by partitioning. A multi-scale temporal enhancement module fuses frame-level features with short- and long-term temporal features, improving the model's robustness to occlusion. The model was trained and tested on the CASIA-B and OU-MVLP datasets. On CASIA-B, the average recognition accuracy reached 92.7%, with rank-1 accuracies of 98.1%, 95.1%, and 84.9% under the normal (NM), bag-carrying (BG), and clothing-change (CL) conditions, respectively. Experimental results demonstrate that the proposed method performs well under both normal walking and complex conditions.
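The rank-1 accuracies quoted above count a probe sequence as correct only when its single nearest neighbor in the gallery belongs to the same subject. The following is a minimal illustrative sketch of that metric, not the paper's code: the function name, the toy 2-D embeddings, and the use of squared Euclidean distance are all assumptions for demonstration.

```python
def rank1_accuracy(probe_feats, probe_ids, gallery_feats, gallery_ids):
    """Fraction of probes whose nearest gallery embedding shares their subject ID."""
    def dist(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    correct = 0
    for feat, pid in zip(probe_feats, probe_ids):
        # Rank the gallery by distance; rank-1 inspects only the closest match.
        nearest = min(range(len(gallery_feats)),
                      key=lambda j: dist(feat, gallery_feats[j]))
        if gallery_ids[nearest] == pid:
            correct += 1
    return correct / len(probe_feats)

# Toy gallery of 2-D embeddings for three subjects A, B, C.
gallery = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]]
gallery_ids = ["A", "B", "C"]
# Two probes: the first lands nearest subject A (a hit);
# the second is labeled B but lands nearest C (a miss).
probes = [[0.1, 0.1], [1.9, 0.1]]
probe_ids = ["A", "B"]
print(rank1_accuracy(probes, probe_ids, gallery, gallery_ids))  # 0.5
```

In a real CASIA-B evaluation the embeddings would come from the trained network, with one accuracy computed per walking condition (NM, BG, CL) and averaged over views.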