Authors: 廖振 (LIAO Zhen), 林国军 (LIN Guojun), 胡鑫 (HU Xin), 游松 (YOU Song), 兰江海 (LAN Jianghai), 周旭 (ZHOU Xu), 罗春兰 (LUO Chunlan) (School of Automation and Information Engineering, Sichuan University of Science & Engineering, Yibin, Sichuan 644000, China)
Affiliation: [1] School of Automation and Information Engineering, Sichuan University of Science & Engineering, Yibin, Sichuan 644000, China
Source: Journal of Neijiang Normal University, 2024, No. 4, pp. 65-71 (7 pages)
Funding: Sichuan Provincial Department of Science and Technology project (2022YFSY0056).
Abstract: Realistic face images converted from sketch portraits still suffer from insufficient realism and low face recognition rates. To address this, a realistic face conversion method based on an improved CycleGAN is proposed. First, a face feature extractor is added on top of the U-Net autoencoder. Second, the features extracted by the face feature extractor are fused with the features in the U-Net decoder by channel-wise concatenation, and the fused features are then further decoded. Finally, the base CycleGAN is converted into a supervised learning model, so that an image-space loss and a style loss can be imposed between the converted and the real face images. Experimental results show that, compared with the base model, the improved model lowers the FID of the converted images by 27.31 and raises Rank-1 accuracy by 19% on the CUHK test set, and lowers FID by 8.65 and raises Rank-1 accuracy by 4.1% on the XM2VTS test set.
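The two changes the abstract describes (channel-wise fusion of extractor and decoder features, plus supervised image-space and style losses) can be sketched as follows. This is a minimal NumPy sketch under assumed tensor shapes; the names `dec_feat` and `face_feat`, the L1 form of the image-space loss, and the Gram-matrix formulation of the style loss are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Hypothetical feature maps of shape (channels, height, width):
# dec_feat  - a feature map from the U-Net decoder
# face_feat - the matching map from the added face feature extractor
rng = np.random.default_rng(0)
dec_feat = rng.random((64, 32, 32))
face_feat = rng.random((64, 32, 32))

# Channel-wise concatenation fusion: stack along the channel axis,
# doubling the channel count before the next decoding stage.
fused = np.concatenate([dec_feat, face_feat], axis=0)
assert fused.shape == (128, 32, 32)

def image_space_loss(converted, real):
    """Pixel-wise L1 distance between converted and real images."""
    return np.mean(np.abs(converted - real))

def gram_matrix(feat):
    """Gram matrix of a (C, H, W) feature map, normalized by its size."""
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(feat_converted, feat_real):
    """Mean squared distance between Gram matrices (style mismatch)."""
    g1 = gram_matrix(feat_converted)
    g2 = gram_matrix(feat_real)
    return np.mean((g1 - g2) ** 2)
```

Both losses are zero when the converted image (or its features) exactly matches the real one, which is what makes them usable as supervised training signals once paired sketch/photo data is available.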
Keywords: CycleGAN; U-Net autoencoder; face feature extractor; supervised learning; image-space loss; style loss
Classification: TP391.4 [Automation and Computer Technology: Computer Application Technology]