Jason Y. Zhang, Panna Felsen, Angjoo Kanazawa, Jitendra Malik

University of California, Berkeley

In ICCV 2019


We present Predicting Human Dynamics (PHD), a neural autoregressive model that takes as input a video sequence of a person and predicts the future 3D human mesh motion. Left: Input past video sequence. Middle: Predicted future mesh sequence. Right: Predicted mesh from an alternate viewpoint.

Abstract

Given a video of a person in action, we can easily guess the 3D future motion of the person. In this work, we present perhaps the first approach for predicting a future 3D mesh model sequence of a person from past video input. We do this for periodic motions such as walking, and also for actions like bowling and squatting seen in sports or workout videos. While there has been a surge of interest in future prediction problems in computer vision, most approaches predict the 3D future from 3D past inputs or the 2D future from 2D past inputs. In this work, we focus on the problem of predicting 3D future motion from past image sequences, which has a plethora of practical applications in autonomous systems that must operate safely around people using visual inputs. Inspired by the success of autoregressive models in language modeling tasks, we learn an intermediate latent space in which we predict the future. This effectively facilitates autoregressive prediction when the input domain differs from the output domain. Our approach can be trained on video sequences obtained in-the-wild without 3D ground truth labels.
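The core idea in the abstract, encoding past frames into a shared latent space, rolling the future out autoregressively in that space, and decoding each predicted latent to a 3D pose, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear maps stand in for learned networks, the dimensions are invented, and the one-step predictor replaces the paper's causal model over the full past.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): per-frame image feature
# size, latent size, and 3D pose/mesh parameter size.
FEAT_DIM, LATENT_DIM, POSE_DIM = 16, 8, 24

# Stand-in linear maps for the learned encoder, autoregressive
# predictor, and pose decoder.
W_enc = rng.normal(size=(FEAT_DIM, LATENT_DIM)) * 0.1
W_ar = rng.normal(size=(LATENT_DIM, LATENT_DIM)) * 0.1
W_dec = rng.normal(size=(LATENT_DIM, POSE_DIM)) * 0.1

def encode(frame_feats):
    """Map per-frame image features into the shared latent space."""
    return frame_feats @ W_enc

def predict_next(latents):
    """Autoregressive step: next latent from (here) only the last latent."""
    return np.tanh(latents[-1] @ W_ar)

def decode(latent):
    """Map a latent state to a 3D pose parameter vector."""
    return latent @ W_dec

# Past video: 10 frames of image features (random stand-ins here).
latents = list(encode(rng.normal(size=(10, FEAT_DIM))))

# Roll out 5 future frames, feeding each prediction back as input.
future_poses = []
for _ in range(5):
    z_next = predict_next(latents)
    latents.append(z_next)
    future_poses.append(decode(z_next))

future_poses = np.stack(future_poses)
print(future_poses.shape)  # (5, 24): five predicted frames of pose params
```

Predicting in the latent space, rather than in pixel space, is what lets the model consume 2D video as input yet emit 3D mesh motion as output: both domains meet in the same intermediate representation.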




Paper


Predicting 3D Human Dynamics from Video

Jason Zhang, Panna Felsen, Angjoo Kanazawa, and Jitendra Malik
@InProceedings{zhang2019phd,
    title = {Predicting 3D Human Dynamics from Video},
    author = {Zhang, Jason Y. and Felsen, Panna and Kanazawa, Angjoo and Malik, Jitendra},
    booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
    year = {2019},
}



Video





Code

[GitHub] (Coming Soon)




Acknowledgements


We would like to thank Ke Li for insightful discussion and Allan Jabri and Ashish Kumar for valuable feedback. We thank Alexei A. Efros for the statues. This work was supported in part by Intel/NSF VEC award IIS-1539099 and BAIR sponsors.