Authors: DOU Ziwen; LI Wenshu (School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou 310018, China)
Affiliation: [1] School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou 310018, Zhejiang, China
Source: Software Engineering (《软件工程》), 2023, No. 12, pp. 59-62 (4 pages)
Abstract: In the field of facial animation generation, overcoming the complexity of face geometry has always been a highly challenging task. To better meet this challenge, this paper proposes an innovative approach: audio features extracted through stacked one-dimensional convolutions and self-attention are used as input, and facial animation is generated from the audio signal with a Transformer model. During this process, a temporal autoregressive model synthesizes the facial motions step by step. Experiments on the BIWI dataset show that the method reduces the lip vertex error rate to a satisfactory 6.123%, with a synchronization rate 79.64% higher than MeshTalk. This indicates that the proposed method performs well in lip synchronization and facial expression generation, shows high potential for facial animation generation tasks, and provides direction and reference for future related research.
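The pipeline described in the abstract (stacked 1-D convolutions over the audio, a self-attention pass over the framed features, then frame-by-frame autoregressive synthesis of face motion) can be illustrated with a minimal numpy sketch. This is a toy stand-in, not the paper's implementation: the kernel values, feature dimensions, vertex count, and the linear-plus-tanh decoder are all placeholder assumptions, and the real model uses a full Transformer decoder rather than the single recurrence shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_stack(audio, kernels):
    """Stacked 1-D convolutions extracting local features from raw audio."""
    x = audio
    for k in kernels:
        x = np.convolve(x, k, mode="same")
    return x

def self_attention(feats):
    """Toy single-head self-attention over framed features of shape (T, d)."""
    d = feats.shape[1]
    scores = feats @ feats.T / np.sqrt(d)          # pairwise frame affinities
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)              # softmax over frames
    return w @ feats                                # attention-weighted mix

def autoregressive_decode(audio_feats, n_vertices, w_audio, w_prev):
    """Synthesize face motion frame by frame: each frame conditions on the
    current audio feature and the previously generated frame (autoregression)."""
    T = audio_feats.shape[0]
    motions = np.zeros((T, n_vertices))
    prev = np.zeros(n_vertices)                    # neutral face at t = 0
    for t in range(T):
        prev = np.tanh(audio_feats[t] @ w_audio + prev @ w_prev)
        motions[t] = prev
    return motions

# Toy run: 160 fake audio samples -> 20 frames of 8-dim features -> 5 "vertices".
audio = rng.standard_normal(160)
x = conv1d_stack(audio, [np.ones(3) / 3, np.array([1.0, -1.0])])
feats = self_attention(x.reshape(20, 8))
w_a = rng.standard_normal((8, 5)) * 0.1            # placeholder decoder weights
w_p = rng.standard_normal((5, 5)) * 0.1
motions = autoregressive_decode(feats, 5, w_a, w_p)
print(motions.shape)  # (20, 5): one motion vector per audio frame
```

The key property the sketch preserves is the autoregressive dependency: frame t's motion is a function of both the audio feature at t and the motion at t-1, which is what lets the model produce temporally smooth lip movement.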
Keywords: animation generation; autoregression; deep learning; lip synchronization; Transformer
Classification code: TP391 [Automation and Computer Technology / Computer Application Technology]