Model Transduction for Triangle Meshes


Authors: Huai-Yu Wu (吴怀宇), Chun-Hong Pan (潘春洪), Hong-Bin Zha (查红彬), Song-De Ma (马颂德)

Affiliations: [1] Key Laboratory of Machine Perception (MOE), Peking University; [2] National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences

Source: Journal of Computer Science & Technology, 2010, No. 3, pp. 583-594 (12 pages)

Funding: supported by the National Natural Science Foundation of China under Grant Nos. 60903060 and 60675012; the National High-Tech Research and Development 863 Program of China under Grant No. 2009AA012104; and the China Postdoctoral Science Foundation under Grant No. 20080440258.

Abstract: This paper proposes a novel method, called model transduction, to directly transfer pose between different meshes without building skeleton configurations for the meshes. Unlike previous retargeting methods such as deformation transfer, model transduction does not require a reference source mesh to obtain the source deformation, and thus avoids the unsatisfactory results that arise when the source and target have different reference poses. Moreover, we show two other applications of model transduction: pose correction after various mesh editing operations, and skeleton-free deformation animation driven by 3D motion-capture (Mocap) data. Model transduction rests on two ingredients: model deformation and model correspondence. Specifically, based on the mean-value manifold operator, our mesh deformation method produces visually pleasing results under large-angle rotations or large-scale translations of handles. We then propose a novel scheme for shape-preserving correspondence between manifold meshes. Our method fits into a unified framework in which the same type of operator is applied in all phases. The resulting quadratic formulation can be minimized efficiently by solving a sparse linear system. Experimental results show that model transduction successfully transfers both complex skeletal structures and subtle skin deformations.
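The abstract's core pipeline, a quadratic deformation energy built from a Laplacian-type manifold operator and minimized through a sparse linear solve, can be illustrated in a few lines. The sketch below is an assumption-laden stand-in, not the paper's implementation: it substitutes a uniform-weight graph Laplacian for the mean-value manifold operator (whose weights are not given here), uses soft penalty constraints for the deformation handles, and the helper names (uniform_laplacian, deform, w) are hypothetical.

```python
# Rough sketch of a Laplacian-style deformation solved as a sparse
# linear system. NOT the paper's method: the uniform-weight Laplacian
# and the soft-constraint weight `w` are assumptions.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def uniform_laplacian(n_verts, edges):
    """Sparse graph Laplacian with uniform edge weights."""
    i, j = np.asarray(edges).T
    rows = np.concatenate([i, j, np.arange(n_verts)])
    cols = np.concatenate([j, i, np.arange(n_verts)])
    deg = np.bincount(np.concatenate([i, j]), minlength=n_verts).astype(float)
    vals = np.concatenate([-np.ones(2 * len(i)), deg])
    return sp.csr_matrix((vals, (rows, cols)), shape=(n_verts, n_verts))

def deform(verts, edges, handle_ids, handle_pos, w=1e3):
    """Minimize ||L x - L x0||^2 + w ||x[handles] - handle_pos||^2.

    The first term preserves the rest pose's differential coordinates;
    the second softly pins handle vertices to their targets. The
    minimizer is one sparse normal-equation solve per coordinate axis.
    """
    n = len(verts)
    L = uniform_laplacian(n, edges)
    delta = L @ np.asarray(verts)              # rest-pose differential coordinates
    m = len(handle_ids)
    C = sp.csr_matrix((np.ones(m), (np.arange(m), np.asarray(handle_ids))),
                      shape=(m, n))            # selects the handle vertices
    A = (L.T @ L + w * (C.T @ C)).tocsc()      # SPD for a connected mesh with >= 1 handle
    B = L.T @ delta + w * (C.T @ np.asarray(handle_pos))
    solve = spla.factorized(A)                 # factor once, reuse for x, y, z
    return np.column_stack([solve(B[:, k]) for k in range(B.shape[1])])
```

For example, deform(verts, edges, [0], [[0.0, 0.0, 1.0]]) would pull vertex 0 toward (0, 0, 1) while the Laplacian term keeps the rest of the surface smooth. A purely linear solve like this degrades under the large-angle handle rotations the abstract highlights; handling those well is precisely what the paper's mean-value manifold operator is claimed to improve.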

Keywords: retargeting; mesh deformation; mean-value manifold operator; cross-parameterization; model transduction

Classification: TP391.41 [Automation and Computer Technology / Computer Application Technology]

 
