Emotion Recognition Based on Multi-gait Feature Fusion  (Cited by: 1)


Authors: PENG Tao [1,2,3]; TANG Jing; HE Kai; HU Xinrong [1,2,3]; LIU Junping; HE Ruhan [1,2,3]

Affiliations: [1] Hubei Engineering Research Center of Textile and Garment Intellectualization (Wuhan Textile University), Wuhan 430200, Hubei, China; [2] Hubei Engineering Research Center of Garment Information Technology (Wuhan Textile University), Wuhan 430200, Hubei, China; [3] School of Computer Science and Artificial Intelligence, Wuhan Textile University, Wuhan 430200, Hubei, China

Source: Journal of Guangxi Normal University (Natural Science Edition), 2022, No. 3, pp. 104-111 (8 pages)

Fund: National Natural Science Foundation of China (61901308).

Abstract: Emotion recognition based on gait features has broad application prospects in affective computing, psychotherapy, robotics, surveillance, and audience understanding. Existing work shows that incorporating contextual information such as gesture position significantly improves emotion recognition performance, and that spatiotemporal information significantly improves recognition accuracy. However, skeletal spatial information alone cannot fully express the emotional cues in gait. To exploit gait features more fully, this paper proposes an adaptive fusion method that combines skeletal spatiotemporal information with skeletal rotation angles, improving the emotion recognition accuracy of existing models. The model uses an autoencoder to learn the skeletal rotation information of human walking, uses a spatiotemporal graph convolutional network to extract the spatiotemporal information of skeleton joints, and feeds both the rotation and spatiotemporal features into an adaptive fusion network to obtain the final features for classification. Tested on the Emotion-Gait dataset, the model improves the AP of the sad, angry, and neutral classes by 5, 8, and 5 percentage points respectively over the latest HAP method, and improves the overall mean AP (mAP) by 5 percentage points.
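The abstract describes a two-branch architecture whose outputs are combined by an adaptive fusion network. The sketch below illustrates one common form of adaptive fusion, a learned per-dimension gate that blends the two branch features; the feature dimension, gate parameterization, and all variable names (`f_st`, `f_rot`, `W_g`, `b_g`) are illustrative assumptions, not details given in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature dimension (the abstract does not specify one).
D = 64
f_st = rng.standard_normal(D)   # stand-in for ST-GCN spatiotemporal features
f_rot = rng.standard_normal(D)  # stand-in for autoencoder rotation features

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Adaptive fusion: a gate computed from both branches weighs them per
# dimension. In a real model W_g and b_g would be trained jointly with
# the classifier; here they are random placeholders.
W_g = rng.standard_normal((D, 2 * D)) * 0.1
b_g = np.zeros(D)

gate = sigmoid(W_g @ np.concatenate([f_st, f_rot]) + b_g)  # values in (0, 1)
f_fused = gate * f_st + (1.0 - gate) * f_rot               # convex blend

print(f_fused.shape)  # (64,)
```

The fused vector `f_fused` would then be passed to a classification head; because the gate lies in (0, 1), each output dimension is a convex combination of the two branches, letting the network emphasize whichever cue (spatial-temporal or rotational) is more informative.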

Keywords: gait features; spatiotemporal graph convolutional network; feature fusion; emotion recognition; autoencoder

CLC Number: TP183 (Automation and Computer Technology: Control Theory and Control Engineering)

 
