Authors: GUO Er-Wei; ZHU Xin-Juan[1]; GAO Quan-Li (School of Computer Science, Xi'an Polytechnic University, Xi'an 710600, China)
Affiliation: [1] School of Computer Science, Xi'an Polytechnic University, Xi'an 710600, China
Source: Computer Systems & Applications, 2025, No. 3, pp. 40-50 (11 pages)
Fund: Key R&D Program of the Shaanxi Provincial Department of Science and Technology (2024GX-YBXM-548).
Abstract: To enhance the realism of audio-driven human body animation generation, this study improves the UnifiedGesture model. First, an encoder-decoder architecture is introduced to extract facial features from audio, compensating for the original model's deficiencies in facial expression generation. Second, a cross-local attention mechanism is combined with a Transformer-XL-based multi-head attention mechanism to strengthen temporal dependencies within long sequences. Meanwhile, a vector quantized variational autoencoder (VQVAE) is used to fuse and generate full-body motion sequences, improving the diversity and completeness of the generated motions. Finally, experiments are conducted on the BEAT dataset; quantitative and qualitative analyses show that the improved UnifiedGesture-F model significantly surpasses the original model in the synchronization between audio and human body movements and in overall realism.
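The abstract's motion-fusion step relies on the discrete-codebook lookup at the heart of a VQVAE. A minimal sketch of that vector-quantization step, in plain Python; the codebook size, latent dimension, and function names here are illustrative assumptions, not details taken from the paper:

```python
# Minimal sketch of VQVAE vector quantization: each continuous latent
# vector is replaced by its nearest entry in a learned discrete codebook.
# Codebook contents and dimensions below are toy values for illustration.

def quantize(latents, codebook):
    """Map each latent vector to the index and value of its nearest codebook entry."""
    def sq_dist(a, b):
        # squared Euclidean distance between two vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))

    indices, quantized = [], []
    for vec in latents:
        # exhaustive nearest-neighbour search over the codebook
        idx = min(range(len(codebook)), key=lambda k: sq_dist(vec, codebook[k]))
        indices.append(idx)
        quantized.append(codebook[idx])
    return indices, quantized

# Toy example: 2-D latents against a 3-entry codebook.
codebook = [[0.0, 0.0], [1.0, 1.0], [-1.0, 1.0]]
latents = [[0.9, 1.1], [0.1, -0.2]]
idx, q = quantize(latents, codebook)
# idx → [1, 0]: the first latent snaps to entry 1, the second to entry 0
```

In a full VQVAE, gradients flow past this non-differentiable lookup via a straight-through estimator, and the codebook itself is updated during training; both are omitted here for brevity.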
Keywords: audio-driven; human body animation generation; UnifiedGesture model; VQVAE
Classification: TP391.41 [Automation and Computer Technology - Computer Application Technology]