Research on Affective Computing of Digital Reading Based on Multimodal Data Fusion (融合多模态数据的数字阅读情感计算研究)


Authors: Si Junyong; Fu Yonghua (School of Information Management, Zhengzhou University of Aeronautics, Henan Province 450046, China)

Affiliation: [1] School of Information Management, Zhengzhou University of Aeronautics, Zhengzhou 450046, Henan, China

Source: Evaluation & Management (《评价与管理》), 2024, No. 3, pp. 69-77 (9 pages)

Funding: 2024 Henan Province Higher Education Teaching Reform Research and Practice Project, "Exploration of Intelligent Technology-Driven Teaching Models for New Liberal Arts Education" (2024SJGLX0413).

Abstract: Recognizing readers' emotions in real time during digital reading reveals their reading state, improves reading effectiveness and reading experience, and provides emotional reference points, thereby promoting the deeper development of high-quality reading. This article constructs an affective computing model for digital reading that fuses multimodal data. Facial expression and pupil position data were collected from participants, and two weighting schemes were applied: assigning a single weight to each modality's emotion recognition model, and assigning a separate weight per emotion class to each modality's model. The fused models classify seven emotions (happiness, anger, disgust, fear, sadness, surprise, and neutral), and the optimal affective computing model was then determined. Experiments show that, in digital reading, the optimal model's recognition accuracies for the seven emotion classes were 86.34%, 83.82%, 81.30%, 78.98%, 80.51%, 83.54%, and 79.33%, respectively, confirming that the model provides an effective implementation scheme for affective computing in digital reading scenarios.
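The two weighting schemes described in the abstract amount to decision-level fusion of the per-modality classifiers' probability outputs. The sketch below illustrates that idea; all weights and probability values are illustrative assumptions, not figures from the paper.

```python
# Decision-level fusion of two modality emotion classifiers (facial expression
# and pupil position), sketching the two weighting schemes from the abstract.
# All numeric values below are illustrative assumptions, not from the paper.

EMOTIONS = ["happiness", "anger", "disgust", "fear", "sadness", "surprise", "neutral"]

def fuse_single_weight(p_face, p_pupil, w_face, w_pupil):
    """Scheme 1: one scalar weight per modality model."""
    return [w_face * f + w_pupil * p for f, p in zip(p_face, p_pupil)]

def fuse_per_emotion(p_face, p_pupil, w_face, w_pupil):
    """Scheme 2: a separate weight per modality AND per emotion class."""
    return [wf * f + wp * p
            for f, p, wf, wp in zip(p_face, p_pupil, w_face, w_pupil)]

def predict(scores):
    """Return the emotion label with the highest fused score."""
    return EMOTIONS[max(range(len(scores)), key=scores.__getitem__)]

if __name__ == "__main__":
    # Hypothetical softmax outputs from the two modality models.
    p_face  = [0.50, 0.10, 0.05, 0.05, 0.10, 0.10, 0.10]
    p_pupil = [0.10, 0.05, 0.05, 0.05, 0.05, 0.60, 0.10]

    print(predict(fuse_single_weight(p_face, p_pupil, 0.6, 0.4)))

    # Per-emotion weights let one modality dominate for specific classes,
    # e.g. trusting pupil data more for "surprise".
    w_face  = [0.8, 0.7, 0.7, 0.6, 0.7, 0.3, 0.5]
    w_pupil = [0.2, 0.3, 0.3, 0.4, 0.3, 0.7, 0.5]
    print(predict(fuse_per_emotion(p_face, p_pupil, w_face, w_pupil)))
```

With these illustrative numbers the two schemes can disagree: the single-weight fusion favors the face model's "happiness", while the per-emotion weights let the pupil model's strong "surprise" signal win, which is why the paper compares the schemes to find the optimal model.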

Keywords: multimodal data; affective computing; digital reading emotion; facial expression; pupil position

CLC Classification: F49 [Economics & Management / Industrial Economics]; G353.1 [Culture & Science / Information Science]

 
