Authors: CHEN Jing-xia (陈景霞) [1]; LIU Yang (刘洋) [1]; ZHANG Peng-wei (张鹏伟) [1]; XUE Wen (雪雯) (School of Electronic Information and Artificial Intelligence, Shaanxi University of Science & Technology, Xi'an 710021, China)
Affiliation: [1] School of Electronic Information and Artificial Intelligence, Shaanxi University of Science & Technology, Xi'an 710021, Shaanxi, China
Source: Journal of Shaanxi University of Science & Technology, 2023, No. 3, pp. 192-199 (8 pages)
Funding: National Natural Science Foundation of China (61806118); Doctoral Scientific Research Start-up Fund of Shaanxi University of Science & Technology (2020BJ-30).
Abstract: Physiological signals such as the electroencephalogram (EEG) have become popular research objects in the field of emotion recognition due to their unique objectivity. To address the problem that single-modal features are not complete enough, this paper proposes a multimodal EEG-based emotion recognition method built on an attention bidirectional Gated Recurrent Unit (GRU) neural network, denoted Mul-AT-BiGRU. The method first uses an attention mechanism to fuse three different features drawn from two modalities, EEG signals and eye-movement data, achieving global interaction among the modal features. The resulting multimodal fused features are then fed into a bidirectional GRU network with attention for deep semantic feature extraction and emotion classification. By mining the complementary relationships between the modalities, the model makes the learned deep emotion-related features more discriminative. Experiments on the multimodal dataset SEED-IV show that the proposed method reaches an average intra-subject classification accuracy of 95.19%, which is 20.22%, 20.04%, and 17.5% higher than the accuracies of the three single-modal features, respectively, and an average inter-subject classification accuracy of 62.77%, which outperforms several comparable methods, verifying the effectiveness and generalization of the proposed method for multimodal EEG-based emotion recognition.
Keywords: EEG; emotion recognition; multimodal feature fusion; bidirectional GRU; attention mechanism
Classification code: TP311 [Automation and Computer Technology: Computer Software and Theory]
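The pipeline described in the abstract, attention-based fusion of three feature streams followed by a bidirectional GRU with temporal attention for classification, can be illustrated with a short model sketch. The code below is a minimal, hypothetical PyTorch rendering, not the authors' implementation: the class name MultimodalAttentionBiGRU, the feature dimensions (310, 310, 33), and all layer sizes are illustrative assumptions; only the number of emotion classes (four, per SEED-IV) follows the dataset.

```python
# Minimal sketch of an attention-based bidirectional GRU classifier in the
# spirit of Mul-AT-BiGRU. All layer sizes, the fusion scheme, and tensor
# shapes are illustrative assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultimodalAttentionBiGRU(nn.Module):
    def __init__(self, feat_dims=(310, 310, 33), d_model=128,
                 hidden=64, n_classes=4):
        super().__init__()
        # Project each of the three feature streams (e.g. two EEG feature
        # types and one eye-movement feature set) to a shared dimension.
        self.proj = nn.ModuleList(nn.Linear(d, d_model) for d in feat_dims)
        # Attention over the three streams for feature-level fusion.
        self.mod_attn = nn.Linear(d_model, 1)
        # Bidirectional GRU over the temporal sequence of fused features.
        self.bigru = nn.GRU(d_model, hidden, batch_first=True,
                            bidirectional=True)
        # Temporal attention over the BiGRU outputs.
        self.time_attn = nn.Linear(2 * hidden, 1)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, feats):
        # feats: list of 3 tensors, each (batch, time, feat_dim_i)
        projected = torch.stack(
            [p(x) for p, x in zip(self.proj, feats)], dim=2
        )  # (batch, time, 3, d_model)
        # Softmax attention across the three feature streams, then a
        # weighted sum: one fused feature vector per time step.
        w = F.softmax(self.mod_attn(projected), dim=2)
        fused = (w * projected).sum(dim=2)         # (batch, time, d_model)
        out, _ = self.bigru(fused)                 # (batch, time, 2*hidden)
        # Softmax attention across time steps, then a weighted sum into a
        # single context vector for classification.
        a = F.softmax(self.time_attn(out), dim=1)
        context = (a * out).sum(dim=1)             # (batch, 2*hidden)
        return self.classifier(context)

# Usage with dummy inputs shaped like per-segment feature sequences
# (the 310- and 33-dimensional feature sizes are assumed, not confirmed):
model = MultimodalAttentionBiGRU()
x = [torch.randn(8, 10, 310), torch.randn(8, 10, 310), torch.randn(8, 10, 33)]
logits = model(x)  # (8, 4): four emotion classes in SEED-IV
```

Two levels of softmax attention appear in this sketch: one across the three feature streams at each time step (feature-level fusion) and one across time steps after the BiGRU, mirroring the abstract's description of global cross-modal interaction followed by attentive temporal encoding.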