Authors: YANG Qing [1,2,3]; WANG Yaqun [1,2,3]; WEN Dou; WANG Ying; WANG Xiangyu [1,2,3]
Affiliations: [1] Hubei Provincial Key Laboratory of Artificial Intelligence and Smart Learning, Central China Normal University, Wuhan 430079, China; [2] School of Computer Science, Central China Normal University, Wuhan 430079, China; [3] National Language Resources Monitoring & Research Center for Network Media, Central China Normal University, Wuhan 430079, China
Source: Journal of Zhengzhou University (Engineering Science) 《郑州大学学报(工学版)》, 2024, Issue 5, pp. 69-76 (8 pages)
Funding: Hubei Provincial Key Research and Development Program (2020BAB017); Wuhan Science and Technology Program (2019010701011392); Research Center Project of the State Language Commission (ZDI135-135)
Abstract: To address the scarcity of studies on visual classification performed directly on image-evoked EEG signals and the low average accuracy of such classification, a method combining convolutional neural networks (CNN) and ensemble learning was designed to learn visual feature representations from EEG signals. K-max pooling was added to the StackCNN network to mitigate information loss when extracting EEG features, and the Bagging algorithm was incorporated to enhance the generalization ability of the network; the resulting method is called StackCNN-B. To verify the performance of StackCNN-B on image classification, images were classified using regression based on a deep residual network (ResNet). Ablation experiments and comparisons with existing studies show that the proposed method achieves high recognition accuracy: the average accuracy in learning visual feature representations from EEG signals reaches 99.78%, and the average accuracy in image classification reaches 96.45%. Compared with the Bi-LSTM-AttGW method, these are average improvements of 0.28 and 2.97 percentage points, respectively. The results verify that EEG signals can effectively decode human brain activity related to visual recognition and demonstrate the advantages of the proposed StackCNN-B model.
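The abstract names two mechanisms that carry the method: K-max pooling inside a CNN branch over raw EEG, and a Bagging-style ensemble of such branches whose outputs are averaged. The following is a minimal PyTorch sketch of those two ideas only, not the authors' released StackCNN-B code; the layer sizes, the value of k, the 128-channel x 440-sample input shape, the 40-class output, and the number of base learners are placeholder assumptions introduced here for illustration. Note that the top-k indices are re-sorted before gathering so that the retained activations keep their temporal order.

```python
# Hedged sketch of K-max pooling + a bagged CNN ensemble for EEG classification.
# All sizes (128 channels, 440 samples, 40 classes, k=8, 5 branches) are assumptions.
import torch
import torch.nn as nn


class KMaxPool1d(nn.Module):
    """Keep the k largest activations along the time axis, preserving their order."""
    def __init__(self, k: int):
        super().__init__()
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        idx = x.topk(self.k, dim=-1).indices.sort(dim=-1).values
        return x.gather(-1, idx)


class EEGBranch(nn.Module):
    """One base CNN learner over raw EEG (channels x time)."""
    def __init__(self, in_channels: int = 128, k: int = 8, n_classes: int = 40):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=7, padding=3),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            KMaxPool1d(k),  # retain the k strongest temporal responses per filter
        )
        self.classifier = nn.Linear(64 * k, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))


class BaggedEEGClassifier(nn.Module):
    """Bagging-style ensemble: average the softmax outputs of the branches.
    During training, each branch would be fit on a bootstrap resample of the data."""
    def __init__(self, n_branches: int = 5, **branch_kwargs):
        super().__init__()
        self.branches = nn.ModuleList(
            [EEGBranch(**branch_kwargs) for _ in range(n_branches)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        probs = torch.stack([b(x).softmax(dim=-1) for b in self.branches])
        return probs.mean(dim=0)


if __name__ == "__main__":
    eeg = torch.randn(4, 128, 440)   # 4 trials, 128 EEG channels, 440 time samples
    model = BaggedEEGClassifier()
    print(model(eeg).shape)          # torch.Size([4, 40])
```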
Keywords: electroencephalogram (EEG); visual classification; convolutional neural network; Bagging algorithm; ResNet
Classification code: TP399 [Automation and Computer Technology - Computer Application Technology]