Authors: Yixin Wang, Shuang Qiu, Dan Li, Changde Du, Bao-Liang Lu, Huiguang He
Affiliations: [1] Research Center for Brain-inspired Intelligence, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; [2] University of Chinese Academy of Sciences, Beijing 100049, China; [3] Beijing Institute of Control and Electronic Technology, Beijing 100038, China; [4] School of Mathematics and Information Sciences, Yantai University, Yantai 264003, China; [5] Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Beijing, China; [6] Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
Source: IEEE/CAA Journal of Automatica Sinica (自动化学报(英文版)), 2022, Issue 9, pp. 1612-1626 (15 pages)
Funding: Supported in part by the National Natural Science Foundation of China (61976209, 62020106015, U21A20388), in part by the CAS International Collaboration Key Project (173211KYSB20190024), and in part by the Strategic Priority Research Program of CAS (XDB32040000).
Abstract: Traditional electroencephalograph (EEG)-based emotion recognition requires a large number of calibration samples to build a model for a specific subject, which restricts the application of the affective brain-computer interface (BCI) in practice. We attempt to use multi-modal data from past sessions to realize emotion recognition with only a small number of calibration samples. To solve this problem, we propose a multi-modal domain adaptive variational autoencoder (MMDA-VAE) method, which learns shared cross-domain latent representations of the multi-modal data. Our method builds a multi-modal variational autoencoder (MVAE) to project the data of multiple modalities into a common space. Through adversarial learning and cycle-consistency regularization, it reduces the distribution difference of each domain on the shared latent representation layer and realizes the transfer of knowledge. Extensive experiments on two public datasets, SEED and SEED-IV, show the superiority of the proposed method. Our work can effectively improve the performance of emotion recognition with a small amount of labelled multi-modal data.
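The abstract names three ingredients: a multi-modal VAE that projects each modality into a shared latent space, an adversarial term that aligns source-session and target-session latent distributions, and a cycle-consistency regularizer. The sketch below is an illustrative reconstruction of those ideas, not the authors' released code: the feature sizes (EEG_DIM, EYE_DIM, LATENT_DIM), the product-of-experts fusion, the network widths, and the single combined update step are all assumptions made for the example.

```python
# Illustrative sketch of the abstract's ingredients (NOT the authors' code):
# a multi-modal VAE with product-of-experts fusion, an adversarial domain
# critic, and a cycle-consistency term on the shared latent space.
# EEG_DIM / EYE_DIM / LATENT_DIM and all network widths are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

EEG_DIM, EYE_DIM, LATENT_DIM = 310, 33, 64  # assumed feature sizes

class GaussianEncoder(nn.Module):
    """Maps one modality to the mean/log-variance of q(z|x)."""
    def __init__(self, in_dim):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, LATENT_DIM)
        self.logvar = nn.Linear(128, LATENT_DIM)
    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.logvar(h)

class Decoder(nn.Module):
    """Reconstructs one modality from the shared latent code."""
    def __init__(self, out_dim):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.ReLU(),
                                  nn.Linear(128, out_dim))
    def forward(self, z):
        return self.body(z)

def product_of_experts(mus, logvars):
    # Fuse per-modality Gaussian posteriors with a unit-Gaussian prior expert;
    # a common way to build an MVAE joint posterior (an assumption here).
    mus = mus + [torch.zeros_like(mus[0])]
    precisions = [torch.exp(-lv) for lv in logvars] + [torch.ones_like(mus[0])]
    total_prec = sum(precisions)
    mu = sum(m * p for m, p in zip(mus, precisions)) / total_prec
    return mu, -torch.log(total_prec)

def reparameterize(mu, logvar):
    # z = mu + sigma * eps keeps sampling differentiable.
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

enc_eeg, enc_eye = GaussianEncoder(EEG_DIM), GaussianEncoder(EYE_DIM)
dec_eeg, dec_eye = Decoder(EEG_DIM), Decoder(EYE_DIM)
critic = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(),
                       nn.Linear(64, 1))  # source-vs-target domain critic

def encode(x_eeg, x_eye):
    mu1, lv1 = enc_eeg(x_eeg)
    mu2, lv2 = enc_eye(x_eye)
    return product_of_experts([mu1, mu2], [lv1, lv2])

def vae_losses(x_eeg, x_eye):
    mu, logvar = encode(x_eeg, x_eye)
    z = reparameterize(mu, logvar)
    rec_eeg, rec_eye = dec_eeg(z), dec_eye(z)
    recon = F.mse_loss(rec_eeg, x_eeg) + F.mse_loss(rec_eye, x_eye)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # Cycle consistency: re-encoding the reconstructions should land on
    # the same latent mean we started from.
    mu_cyc, _ = encode(rec_eeg, rec_eye)
    cycle = F.mse_loss(mu_cyc, mu)
    return recon + kl + cycle, z

# One schematic update on random stand-in batches (real training would
# alternate critic and encoder steps and add an emotion classifier on z).
x_src = (torch.randn(8, EEG_DIM), torch.randn(8, EYE_DIM))  # past session
x_tgt = (torch.randn(8, EEG_DIM), torch.randn(8, EYE_DIM))  # few calibration samples
loss_src, _ = vae_losses(*x_src)
loss_tgt, z_tgt = vae_losses(*x_tgt)
# Encoder-side adversarial loss: push target latents to look like source ones.
adv = F.binary_cross_entropy_with_logits(critic(z_tgt), torch.ones(8, 1))
(loss_src + loss_tgt + adv).backward()
```

The sketch keeps only the pieces the abstract names; in particular, a full system would also place a supervised emotion classifier on the shared latent code so the few labelled calibration samples steer the aligned representation.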
Keywords: Cycle-consistency; domain adaptation; electroencephalograph (EEG); multi-modality; variational autoencoder
Classification Code: TN911.7 [Electronics and Telecommunications - Communication and Information Systems]