Monocular Video Guided Garment Simulation  


Authors: Fa-Ming Li, Xiao-Wu Chen, Bin Zhou, Fei-Xiang Lu, Kan Guo, Qiang Fu

Affiliation: [1] State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, Beijing 100191, China

Source: Journal of Computer Science & Technology, 2015, No. 3, pp. 528-539 (12 pages)

Funding: This work was partially supported by the National High Technology Research and Development 863 Program of China under Grant No. 2013AA013801, the National Natural Science Foundation of China under Grant No. 61325011, and the Specialized Research Fund for the Doctoral Program of Higher Education of China under Grant No. 20131102130002.

Abstract: We present a prototype to generate a garment-shape sequence guided by a monocular video sequence. It combines a physically-based simulation with a boundary-based modification. Given a garment worn on a mannequin in the video, the simulation generates an initial garment shape by exploiting the mannequin shapes estimated from the video. The modification then deforms the simulated 3D shape to match the garment's 2D boundary extracted from the video. Based on the matching correspondences between vertices on the shape and points on the boundary, the modification attracts the matched vertices and their neighboring vertices. To obtain best-matching correspondences efficiently, three criteria are introduced for selecting the candidate vertices for matching. Since modifying each garment shape independently may cause inter-frame oscillations, changes made by the modification are also propagated from one frame to the next. As a result, the generated 3D garment shape sequence is stable and similar to the garment video sequence. We demonstrate the effectiveness of our prototype with a number of examples.
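The attract-matched-vertices step described in the abstract can be sketched as follows. This is a simplified 2D illustration, not the paper's method: it uses plain nearest-neighbor matching in place of the three candidate-selection criteria, and the function name `modify_shape` and the weights `alpha`/`beta` are hypothetical.

```python
import math

def modify_shape(vertices, neighbors, boundary, alpha=1.0, beta=0.5):
    """Pull matched vertices toward boundary points (simplified 2D sketch).

    vertices : list of (x, y) projected mesh-vertex positions
    neighbors: adjacency map, neighbors[i] -> indices of vertices adjacent to i
    boundary : list of (x, y) garment-boundary points extracted from the video
    alpha    : attraction weight applied to a matched vertex
    beta     : attenuated weight applied to its one-ring neighbors
    """
    out = [list(v) for v in vertices]
    for bx, by in boundary:
        # Nearest-vertex matching; the paper instead selects candidates
        # by three criteria before establishing correspondences.
        i = min(range(len(vertices)),
                key=lambda k: math.dist(vertices[k], (bx, by)))
        dx, dy = bx - vertices[i][0], by - vertices[i][1]
        out[i][0] += alpha * dx          # attract the matched vertex
        out[i][1] += alpha * dy
        for j in neighbors[i]:           # drag its neighbors along, attenuated
            out[j][0] += beta * dx
            out[j][1] += beta * dy
    return [tuple(v) for v in out]
```

In the full pipeline the resulting displacements would also be propagated to the next frame to suppress the inter-frame oscillations mentioned above.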

Keywords: garment simulation; monocular video; shape correspondence

Classification: TP391.41 [Automation and Computer Technology - Computer Application Technology]; TS941.2 [Automation and Computer Technology - Computer Science and Technology]

 
