DiffEgoPose: An End-to-End Diffusion Model for EgoPose Estimation

DiffEgoPose takes an egocentric video as input and predicts full-body poses in an end-to-end manner.

Abstract

Egocentric pose estimation aims to predict 3D human poses from a first-person video and plays an important role in various AR/VR applications. Recent work decomposes this problem into multiple stages; however, this decomposition causes estimation errors to accumulate through the pipeline.

Unlike previous work, we propose an end-to-end diffusion model, DiffEgoPose, that estimates the camera wearer's pose directly from first-person videos. Our method first extracts visual features from each frame of the video. A diffusion model then generates human pose sequences conditioned on these visual features through iterative denoising. By capturing and fusing the temporal dependencies at both ends, i.e., in the input video and in the generated human motion, we demonstrate that an end-to-end conditional diffusion model can achieve strong results without pretraining on large-scale motion datasets.
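For concreteness, the sketch below shows how a standard DDPM-style sampler could generate a pose sequence conditioned on per-frame visual features. The `Denoiser` architecture, the dimensions, and the linear noise schedule are illustrative assumptions for a minimal example, not the authors' actual model.

```python
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """Hypothetical noise predictor: maps a noisy pose sequence,
    per-frame visual features, and a timestep to predicted noise."""

    def __init__(self, pose_dim=51, feat_dim=512, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim + feat_dim + 1, hidden),
            nn.ReLU(),
            nn.Linear(hidden, pose_dim),
        )

    def forward(self, noisy_poses, visual_feats, t):
        # noisy_poses: (B, T, pose_dim), visual_feats: (B, T, feat_dim)
        t_emb = t.float().view(-1, 1, 1).expand(-1, noisy_poses.size(1), 1)
        return self.net(torch.cat([noisy_poses, visual_feats, t_emb], dim=-1))

@torch.no_grad()
def sample_poses(denoiser, visual_feats, pose_dim=51, num_steps=50):
    """Standard DDPM ancestral sampling, conditioned on visual features."""
    B, T, _ = visual_feats.shape
    betas = torch.linspace(1e-4, 0.02, num_steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(B, T, pose_dim)  # start from pure Gaussian noise
    for step in reversed(range(num_steps)):
        t = torch.full((B,), step, dtype=torch.long)
        eps = denoiser(x, visual_feats, t)
        # Posterior mean of x_{t-1} given the predicted noise
        coef = betas[step] / torch.sqrt(1.0 - alpha_bars[step])
        mean = (x - coef * eps) / torch.sqrt(alphas[step])
        if step > 0:
            x = mean + torch.sqrt(betas[step]) * torch.randn_like(x)
        else:
            x = mean
    return x  # (B, T, pose_dim) generated pose sequence
```

In this sketch the conditioning is a simple concatenation of visual features with the noisy poses at every frame; the paper's "dual-end" temporal fusion would replace the per-frame MLP with a model that attends across time in both the video and the motion sequence.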

In addition, collecting paired egocentric videos and ground-truth human poses in the real world is very costly. We therefore construct a synthetic dataset, CMU-EGO-MOTION, containing diverse scenes and human motions, to complement existing real-world datasets. On both the real-world and synthetic datasets, DiffEgoPose outperforms the state-of-the-art method.

Method Overview