Video Autoencoder: self-supervised disentanglement of static 3D structure and motion

Zihang Lai¹, Sifei Liu², Alexei A. Efros³, Xiaolong Wang⁴
¹CMU  ²NVIDIA  ³UC Berkeley  ⁴UC San Diego
ICCV 2021 (Oral)

Abstract

We present Video Autoencoder for learning disentangled representations of 3D structure and camera pose from videos in a self-supervised manner. Relying on temporal continuity in videos, our work assumes that the 3D scene structure in nearby video frames remains static. Given a sequence of video frames as input, the Video Autoencoder extracts a disentangled representation of the scene, including: (i) a temporally consistent deep voxel feature representing the 3D structure and (ii) a 3D trajectory of camera poses for each frame. These two representations are then re-entangled to render the input video frames. Video Autoencoder can be trained directly with a pixel reconstruction loss, without any ground-truth 3D or camera pose annotations. The disentangled representation can be applied to a range of tasks, including novel view synthesis, camera pose estimation, and video generation by motion following. We evaluate our method on several large-scale natural video datasets and show generalization results on out-of-domain images.



Method

The proposed Video Autoencoder is a conceptually simple method for encoding a video into a 3D representation of the scene and a camera trajectory, trained in a completely self-supervised manner (no 3D labels are required).
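To make the encode/re-entangle/decode loop concrete, below is a minimal PyTorch sketch. The module names (Encoder3D, PoseNet, Renderer) and their toy architectures are illustrative stand-ins, not the networks used in the paper; only the overall structure follows the method: one static deep voxel feature per clip, one camera pose per frame, and a self-supervised pixel reconstruction loss.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder3D(nn.Module):
    """Stand-in 3D encoder: lifts the reference frame to a deep voxel grid."""
    def __init__(self, c=8, d=4):
        super().__init__()
        self.c, self.d = c, d
        self.conv = nn.Conv2d(3, c * d, 3, stride=4, padding=1)
    def forward(self, frame):                        # (B, 3, H, W)
        f = self.conv(frame)                         # (B, c*d, H/4, W/4)
        b, _, h, w = f.shape
        return f.view(b, self.c, self.d, h, w)       # (B, C, D, H', W')

class PoseNet(nn.Module):
    """Stand-in pose network: 6-DoF relative pose between two frames."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(6, 16, 3, stride=4), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(16, 6))
    def forward(self, ref, tgt):
        return self.net(torch.cat([ref, tgt], dim=1))  # (B, 6)

class Renderer(nn.Module):
    """Stand-in decoder: re-entangles voxels + pose into an image.
    (The paper renders by transforming the voxels; here the pose is simply
    concatenated as extra channels to keep the sketch short.)"""
    def __init__(self, c=8, d=4):
        super().__init__()
        self.deconv = nn.ConvTranspose2d(c * d + 6, 3, 4, stride=4)
    def forward(self, voxels, pose):
        b, c, d, h, w = voxels.shape
        flat = voxels.view(b, c * d, h, w)
        p = pose.view(b, 6, 1, 1).expand(-1, -1, h, w)
        return self.deconv(torch.cat([flat, p], dim=1))

def training_step(clip, enc, posenet, dec):
    """clip: (B, T, 3, H, W). Returns the pixel reconstruction loss."""
    ref = clip[:, 0]                                  # static structure from frame 0
    voxels = enc(ref)                                 # (i) 3D structure
    loss = 0.0
    for t in range(clip.shape[1]):
        pose_t = posenet(ref, clip[:, t])             # (ii) per-frame camera pose
        recon = dec(voxels, pose_t)                   # re-entangle and render
        loss = loss + F.l1_loss(recon, clip[:, t])    # self-supervised pixel loss
    return loss / clip.shape[1]

enc, posenet, dec = Encoder3D(), PoseNet(), Renderer()
clip = torch.rand(2, 6, 3, 64, 64)                    # toy clip: batch of 2, 6 frames
print(training_step(clip, enc, posenet, dec).item())

Because the voxel feature is computed once from the reference frame and shared across all time steps, the reconstruction loss can only be driven down if the static structure and the per-frame motion are actually factored apart, which is what makes the training signal self-supervised.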

Temporal super-resolution

Our model can be used to increase the frame rate of a video by simply interpolating the estimated camera poses between consecutive video frames.
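A minimal sketch of how such in-between poses could be computed, assuming the estimated poses are 4x4 camera-to-world matrices (a convention we choose here): translation is interpolated linearly and rotation with scipy's Slerp. The in-between frames would then be rendered from the same voxel feature.

import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_poses(pose_a, pose_b, num_inbetween):
    """Interpolate poses between two frames: lerp translation, slerp rotation."""
    key_rots = Rotation.from_matrix(np.stack([pose_a[:3, :3], pose_b[:3, :3]]))
    slerp = Slerp([0.0, 1.0], key_rots)
    poses = []
    for t in np.linspace(0, 1, num_inbetween + 2)[1:-1]:  # strictly in between
        p = np.eye(4)
        p[:3, :3] = slerp([t]).as_matrix()[0]
        p[:3, 3] = (1 - t) * pose_a[:3, 3] + t * pose_b[:3, 3]
        poses.append(p)
    return poses

# Example: three extra poses between two estimated frame poses.
a, b = np.eye(4), np.eye(4)
b[:3, :3] = Rotation.from_euler("y", 10, degrees=True).as_matrix()
b[:3, 3] = [0.1, 0.0, 0.2]
for p in interpolate_poses(a, b, 3):
    print(np.round(p, 3))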

Camera stabilization

Here, we "stabilize" all video frames to a fixed viewpoint. With Video Autoencoder, we can simply warp the whole video to the viewpoint of the first frame using the estimated pose differences.
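A minimal sketch of this idea, assuming per-frame 4x4 camera-to-world poses and reusing the hypothetical encode/render stand-ins from the pipeline sketch above; the pose-composition convention is our assumption.

import numpy as np

def stabilize(frames, cam_to_world, encode, render):
    """frames: list of images; cam_to_world[t]: 4x4 camera-to-world pose of
    frame t (frame 0 can simply be the identity). Returns every frame
    re-rendered from the first frame's viewpoint."""
    world_to_cam0 = np.linalg.inv(cam_to_world[0])
    stabilized = []
    for frame, pose_t in zip(frames, cam_to_world):
        rel = world_to_cam0 @ pose_t       # frame t's pose relative to frame 0
        stabilized.append(render(encode(frame), rel))
    return stabilized

# Toy usage with identity stand-ins in place of the learned modules:
frames = [np.zeros((64, 64, 3)) for _ in range(3)]
poses = [np.eye(4)] * 3
out = stabilize(frames, poses, encode=lambda f: f, render=lambda v, p: v)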

Test on anime scenes

Our model also works on out-of-distribution data, such as anime scenes from Spirited Away.

and even paintings...

Video Autoencoder can generate results as if we were walking into Vincent van Gogh's bedroom in Arles.

Trajectory estimation in videos

We can also estimate camera trajectories in videos. For each video clip, we estimate the relative pose between each pair of consecutive frames and chain these poses together to recover the full trajectory.
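A minimal sketch of the chaining step, assuming each estimated relative pose is a 4x4 transform from frame t to frame t+1 (the convention is our assumption):

import numpy as np

def chain_poses(rel_poses):
    """Accumulate pairwise relative poses into absolute poses w.r.t. frame 0."""
    traj = [np.eye(4)]                 # frame 0 defines the reference frame
    for rel in rel_poses:
        traj.append(traj[-1] @ rel)    # compose with the running pose
    return traj

# Example: three identical small forward motions yield a straight trajectory.
step = np.eye(4)
step[2, 3] = 0.1                       # move 0.1 units along +z per frame
for pose in chain_poses([step] * 3):
    print(pose[:3, 3])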

Video following

Our model can also animate a single image (shown on the left) with the motion trajectories from a different video (shown in the middle). We call this video following.
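A minimal sketch, reusing the hypothetical Encoder3D/PoseNet/Renderer stand-ins from the pipeline sketch above: the static structure is taken from the single source image, while the per-frame poses are estimated from the driving video.

import torch

def video_following(src_image, driving_clip, enc, posenet, dec):
    """src_image: (B, 3, H, W); driving_clip: (B, T, 3, H, W).
    Returns the source image animated with the driving video's motion."""
    voxels = enc(src_image)                          # structure from the image
    ref = driving_clip[:, 0]
    frames = []
    for t in range(driving_clip.shape[1]):
        pose_t = posenet(ref, driving_clip[:, t])    # motion from the video
        frames.append(dec(voxels, pose_t))           # re-entangle and render
    return torch.stack(frames, dim=1)                # (B, T, 3, H, W)

# Toy usage with the stand-in modules instantiated earlier:
out = video_following(torch.rand(1, 3, 64, 64), torch.rand(1, 4, 3, 64, 64),
                      enc, posenet, dec)
print(out.shape)  # torch.Size([1, 4, 3, 64, 64])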

BibTeX

@inproceedings{Lai21a,
  title={Video Autoencoder: self-supervised disentanglement of static 3D structure and motion},
  author={Lai, Zihang and Liu, Sifei and Efros, Alexei A. and Wang, Xiaolong},
  booktitle={ICCV},
  year={2021}
}