Personalized Multi-View Face Animation with Lifelike Textures  

Authors: LIU Yanghua, XU Guangyou

Affiliation: [1] Key Laboratory on Pervasive Computing (Tsinghua University) of the Ministry of Education, Department of Computer Science and Technology, Tsinghua University

Source: Tsinghua Science and Technology, 2007, No. 1, pp. 51-57 (7 pages)

Funding: National Natural Science Foundation of China (No. 60673189)

Abstract: Realistic personalized face animation mainly depends on a picture-perfect appearance and natural head rotation. This paper describes a face model for generating novel-view facial textures with various realistic expressions and poses. The model is learned from corpora of a talking person using machine learning techniques. In face modeling, the facial texture variation is expressed by a multi-view facial texture space model, and the facial shape variation is represented by a compact 3-D point distribution model (PDM). The facial texture space and the shape space are connected by bridging 2-D mesh structures. Levenberg-Marquardt optimization is employed for fine model fitting, and animation trajectories are trained to produce smooth, continuous image sequences. Test results show that this approach can generate vivid talking-face sequences in various views; moreover, the vector representation significantly reduces the animation complexity.
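The compact 3-D point distribution model mentioned in the abstract is, in its standard formulation, a mean shape plus principal modes of variation computed over aligned landmark sets. The sketch below illustrates only that generic construction; it is not the authors' implementation, and the function names and the (N, 3K) data layout are assumptions made for illustration.

```python
import numpy as np

def build_pdm(shapes, var_kept=0.95):
    """Fit a generic point distribution model (illustrative sketch).

    shapes: (N, 3K) array; each row is a flattened set of K aligned 3-D landmarks.
    Returns the mean shape, the retained modes of variation, and their variances.
    """
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # Principal modes of shape variation via SVD of the centered data matrix.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    eigvals = (s ** 2) / (len(shapes) - 1)          # per-mode variances
    cum = np.cumsum(eigvals) / eigvals.sum()
    k = int(np.searchsorted(cum, var_kept)) + 1     # modes covering ~95% of variance
    return mean, vt[:k].T, eigvals[:k]

def synthesize_shape(mean, modes, b):
    """Generate a new shape x = mean + P b from a low-dimensional parameter vector b."""
    return mean + modes @ np.asarray(b)
```

In such a model, fitting to an observed face (for example with Levenberg-Marquardt, as the abstract notes) reduces to searching over pose parameters and the low-dimensional vector b, which is also what keeps the animation representation compact.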

Keywords: face animation; point distribution model (PDM); texture; multi-view

Classification code: TP391.41 [Automation and Computer Technology / Computer Application Technology]