Video-based 3D Motion Capture through Biped Control

Marek Vondrak¹, Leonid Sigal², Jessica Hodgins², Odest Jenkins¹

¹Brown University   ²Disney Research, Pittsburgh


Abstract

Marker-less motion capture is a challenging problem, particularly when only monocular video is available. We estimate human motion from monocular video by recovering three-dimensional controllers capable of implicitly simulating the observed human behavior and replaying it in other environments and under physical perturbations. Our approach employs a state-space biped controller with a balance feedback mechanism that encodes control as a sequence of simple control tasks. Transitions among these tasks are triggered either by time or by proprioceptive events (e.g., foot contact). Inference takes the form of optimal control: we optimize a high-dimensional vector of control parameters, together with the structure of the controller, against an objective function that compares the resulting simulated motion with the input observations. We illustrate our approach by automatically estimating controllers for a variety of motions directly from monocular video. We show that estimating the controller structure through incremental optimization and refinement leads to controllers that are more stable and that better approximate the reference motion. We demonstrate our approach by capturing sequences of walking, jumping, and gymnastics.
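To make the controller structure concrete, the sketch below shows a minimal state-machine biped controller in Python. It is illustrative only: the names (ControlTask, BipedController, com_offset, and so on) are assumptions made for this sketch rather than the paper's implementation, and the actual controller drives a full articulated body inside a physics simulator.

    import numpy as np

    class ControlTask:
        """One control state: a target pose tracked with PD servos, plus a
        rule for when to hand off to the next task (elapsed time or a
        proprioceptive contact event)."""
        def __init__(self, target_pose, kp, kd, max_duration,
                     transition_on_contact=False, next_task=None):
            self.target_pose = np.asarray(target_pose, dtype=float)  # desired joint angles
            self.kp = kp                          # PD proportional gain
            self.kd = kd                          # PD derivative gain
            self.max_duration = max_duration      # time-based trigger (seconds)
            self.transition_on_contact = transition_on_contact
            self.next_task = next_task            # index of the successor task

    class BipedController:
        """State-space controller: a sequence of simple control tasks with a
        balance feedback term driven by the center-of-mass position."""
        def __init__(self, tasks, balance_gain):
            self.tasks = tasks
            self.balance_gain = balance_gain  # feedback gain on the COM offset
            self.active = 0                   # index of the currently active task
            self.elapsed = 0.0

        def torques(self, q, qdot, com_offset):
            # PD tracking of the active target pose; the target is shifted by
            # a balance feedback term proportional to the horizontal offset
            # between the center of mass and the support foot.
            task = self.tasks[self.active]
            target = task.target_pose + self.balance_gain * com_offset
            return task.kp * (target - q) - task.kd * qdot

        def step(self, dt, contact):
            # Advance the state machine: a transition fires either when the
            # task times out or when a contact event occurs.
            self.elapsed += dt
            task = self.tasks[self.active]
            fire = (self.elapsed >= task.max_duration or
                    (task.transition_on_contact and contact))
            if fire and task.next_task is not None:
                self.active = task.next_task
                self.elapsed = 0.0

Under this reading, a cyclic motion such as walking corresponds to tasks whose next_task indices form a loop, with contact-triggered transitions synchronizing the cycle to foot strikes.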
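The inference step can be sketched the same way. Below, objective scores a candidate parameter vector by simulating the controller and comparing the resulting motion with the observed poses frame by frame, and optimize is a simple stochastic local search standing in for the paper's optimizer; simulate and observed_poses are hypothetical placeholders for the physics rollout and the per-frame video evidence.

    import numpy as np

    def objective(params, simulate, observed_poses):
        """Lower is better: squared pose discrepancy between the simulated
        and observed motion, plus a penalty for simulations that terminate
        early (e.g., the biped falls before the sequence ends)."""
        simulated_poses = simulate(params)        # physics rollout (placeholder)
        T = min(len(simulated_poses), len(observed_poses))
        error = sum(float(np.sum((simulated_poses[t] - observed_poses[t]) ** 2))
                    for t in range(T))
        error += 1e3 * (len(observed_poses) - T)  # early-termination penalty
        return error

    def optimize(simulate, observed_poses, init_params, iters=200, sigma=0.1, seed=0):
        """Stochastic local search over the control-parameter vector; any
        black-box optimizer could be substituted here."""
        rng = np.random.default_rng(seed)
        best = np.asarray(init_params, dtype=float)
        best_cost = objective(best, simulate, observed_poses)
        for _ in range(iters):
            candidate = best + sigma * rng.standard_normal(best.shape)
            cost = objective(candidate, simulate, observed_poses)
            if cost < best_cost:
                best, best_cost = candidate, cost
        return best, best_cost

The incremental estimation of controller structure described in the abstract would wrap such a search in an outer loop that modifies the task sequence and re-optimizes, keeping changes that lower the objective.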

Papers

M. Vondrak, L. Sigal, J. Hodgins, and O. Jenkins, “Video-based 3D Motion Capture through Biped Control,” to appear in ACM Transactions on Graphics, August 2012.
