3D Human Animation Generation Based on an Improved UnifiedGesture Model


Authors: GUO Er-Wei; ZHU Xin-Juan [1]; GAO Quan-Li (School of Computer Science, Xi'an Polytechnic University, Xi'an 710600, China)

Affiliation: [1] School of Computer Science, Xi'an Polytechnic University, Xi'an 710600, China

Source: Computer Systems & Applications, 2025, Issue 3, pp. 40–50 (11 pages)

Funding: Key R&D Program of the Shaanxi Provincial Department of Science and Technology (2024GX-YBXM-548).

Abstract: To improve the realism of audio-driven human animation generation, this study improves the UnifiedGesture model. First, an encoder-decoder architecture is introduced to extract facial features from audio, compensating for the original model's weakness in facial expression generation. Second, a cross-local attention mechanism is combined with a Transformer-XL-based multi-head attention mechanism to strengthen temporal dependencies across long sequences. In addition, a vector quantized variational autoencoder (VQ-VAE) is used to fuse and generate full-body motion sequences, improving the diversity and completeness of the generated motions. Finally, experiments are conducted on the BEAT dataset; quantitative and qualitative analyses show that the improved UnifiedGesture-F model significantly outperforms the original model in audio–motion synchrony and overall realism.
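The VQ-VAE mentioned in the abstract maps continuous encoder outputs to discrete codebook entries before decoding. A minimal NumPy sketch of the nearest-neighbor quantization step is shown below; the codebook size, latent dimension, and variable names are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 8 code vectors in a 4-dim latent space.
codebook = rng.normal(size=(8, 4))
# 5 continuous encoder outputs (e.g. one per motion frame).
latents = rng.normal(size=(5, 4))

# Quantize: replace each latent with its nearest codebook entry
# (squared Euclidean distance, computed via broadcasting).
dists = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
indices = dists.argmin(axis=1)   # discrete codes, shape (5,)
quantized = codebook[indices]    # quantized latents, shape (5, 4)

print(indices.shape, quantized.shape)
```

The discrete indices are what make the latent space a finite vocabulary of motion "tokens"; during training a straight-through estimator typically passes gradients around the non-differentiable `argmin`.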

Keywords: audio-driven; human animation generation; UnifiedGesture model; VQ-VAE

Classification: TP391.41 [Automation and Computer Technology — Computer Application Technology]

 
