User emotion classification combined with image metadata (融合图像元数据的用户情感分类)

Authors: WU Yan-wen, YAN Wei [2], HE Hua-qing, RAN Mao-liang

Affiliations: [1] National Engineering Research Center for E-Learning, Central China Normal University, Wuhan 430079, Hubei, China; [2] Institute of Information Processing and Artificial Intelligence, Department of Physical Science and Technology, Central China Normal University, Wuhan 430079, Hubei, China

Source: Computer Engineering and Design (计算机工程与设计), 2022, No. 1, pp. 127-134 (8 pages)

Funding: Key Program of the National Natural Science Foundation of China (61937001).

Abstract: To address the low accuracy that results when user emotion analysis considers only the image domain, a method is proposed that fuses photographic context features with image features to predict a user's emotion category. For an image uploaded by a user to the network, the feature-extraction layers of a VGG19 model pre-trained on ImageNet are used to obtain image content features and image texture features, while the user's shooting context is extracted from the corresponding image metadata and a context-emotion mapping is established. An embedding step converts the context features into low-dimensional dense vectors, and the three fused feature types are passed through an emotion recognition network for classification. Experimental results show that fusing context features improves accuracy by 4.12% over a method that considers only the image feature domain.
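Method note: the abstract describes a three-branch fusion model, with image content features and image texture features taken from the feature-extraction layers of an ImageNet-pretrained VGG19, and shooting-context attributes parsed from image metadata and mapped to low-dimensional dense embeddings before fusion and classification. The PyTorch sketch below shows one minimal way such a pipeline could be wired together; the layer split, context vocabulary size, embedding dimension, and number of emotion classes are illustrative assumptions, not the authors' implementation, and it assumes a recent torchvision that provides VGG19_Weights.

# Minimal sketch (assumption, not the paper's code): fuse VGG19 image features
# with embedded photo-context features for user emotion classification.
import torch
import torch.nn as nn
from torchvision import models

class ContextImageEmotionNet(nn.Module):
    def __init__(self, num_context_categories=64, context_dim=32, num_emotions=8):
        super().__init__()
        feats = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features
        self.shallow = feats[:5]   # first conv block (64 channels), used here as a crude texture branch
        self.deep = feats[5:]      # remaining blocks (512 channels), used as the content branch
        for p in feats.parameters():           # keep the pre-trained extractor frozen
            p.requires_grad = False
        self.gap = nn.AdaptiveAvgPool2d(1)     # global average pooling to fixed-size vectors
        # Low-dimensional dense embedding of discretized context attributes
        # (e.g. time of day, weather, location type) parsed from image metadata.
        self.context_embedding = nn.Embedding(num_context_categories, context_dim)
        # Emotion recognition head operating on the concatenated (fused) features.
        self.classifier = nn.Sequential(
            nn.Linear(512 + 64 + context_dim, 256),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(256, num_emotions),
        )

    def forward(self, image, context_ids):
        # image: (B, 3, 224, 224); context_ids: (B, num_context_fields) of category indices
        shallow_maps = self.shallow(image)
        texture_feat = self.gap(shallow_maps).flatten(1)              # (B, 64)
        content_feat = self.gap(self.deep(shallow_maps)).flatten(1)   # (B, 512)
        context_feat = self.context_embedding(context_ids).mean(1)    # (B, context_dim)
        fused = torch.cat([content_feat, texture_feat, context_feat], dim=1)
        return self.classifier(fused)

# Example call: 4 images, each with 3 discretized context fields.
# model = ContextImageEmotionNet()
# logits = model(torch.randn(4, 3, 224, 224), torch.randint(0, 64, (4, 3)))

Here the shallow VGG19 block stands in for a texture descriptor and the full convolutional stack for content features; the paper's actual feature choices, context vocabulary, and emotion categories are not specified in the abstract.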

Keywords: sentiment analysis; context modeling; image metadata; image emotion recognition; convolutional neural network

Classification: TP391 [Automation and Computer Technology - Computer Application Technology]

 
