Overview

A natural approach to generative modeling of videos is to represent them as a composition of moving objects. Recent works model a set of 2D sprites over a slowly-varying background, but without considering the underlying 3D scene that gives rise to them. We instead propose to model a video as the view seen while moving through a scene with multiple 3D objects and a 3D background. Our model is trained from monocular videos without any supervision, yet learns to generate coherent 3D scenes containing several moving objects. We conduct detailed experiments on two datasets, going beyond the visual complexity supported by state-of-the-art generative approaches. We evaluate our method on depth prediction and 3D object detection—tasks which cannot be addressed by those earlier works—and show that it outperforms them even on 2D instance segmentation and tracking.

Left: Our generative model represents videos as a 3D scene viewed by a moving camera. It has a single latent variable, which is mapped to parameters of multiple objects and a background. Each object has a pose and an appearance embedding, which is decoded to an explicit representation of its 3D shape and color. Right: Plan view of our 3D scene structure, viewed by a moving camera (green). We define a grid of candidate objects in 3D space (gray boxes); indicator variables specify which are present in a given scene (blue). Each object has a displacement (orange arrows) and rotation, which may vary over time. The background is a spherical shell (gray dashed), deformed (purple arrows) into the correct shape (red) to capture the surrounding environment.
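To make this scene parameterization concrete, the snippet below sketches how a single latent vector could be mapped to per-candidate presence indicators, displacements, rotations, and appearance embeddings, plus a deformation of the background shell. All names, dimensions, and layer choices here are our own illustrative assumptions, not the authors' actual implementation.

import torch
import torch.nn as nn

class SceneDecoder(nn.Module):
    """Sketch of the scene parameterization described above (assumed design, not the paper's code)."""

    def __init__(self, latent_dim=128, grid_size=4, appearance_dim=64, bg_points=512):
        super().__init__()
        self.num_candidates = grid_size * grid_size  # grid of candidate object slots
        # Per-candidate outputs: presence logit (1), displacement (3),
        # rotation about the vertical axis (1), appearance embedding.
        per_obj = 1 + 3 + 1 + appearance_dim
        self.object_head = nn.Linear(latent_dim, self.num_candidates * per_obj)
        # Background: per-vertex radial offsets that deform a spherical shell.
        self.background_head = nn.Linear(latent_dim, bg_points)

    def forward(self, z):
        B = z.shape[0]
        obj = self.object_head(z).view(B, self.num_candidates, -1)
        presence = torch.sigmoid(obj[..., 0])          # which candidates are present
        displacement = obj[..., 1:4]                   # offset from the grid-cell centre
        rotation = torch.tanh(obj[..., 4]) * torch.pi  # yaw angle in (-pi, pi)
        appearance = obj[..., 5:]                      # later decoded to 3D shape and color
        bg_deform = self.background_head(z)            # deformation of the background shell
        return presence, displacement, rotation, appearance, bg_deform

# Usage: sample a latent vector and decode it into scene parameters.
decoder = SceneDecoder()
z = torch.randn(2, 128)
presence, disp, rot, app, bg = decoder(z)
print(presence.shape, disp.shape, app.shape, bg.shape)

In this sketch, each appearance embedding would then be decoded by a further network into an explicit 3D shape and color, and the presence indicators select which candidate objects are instantiated in the scene.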
Results
(ROOMS) decomposition
Panels: original video, reconstruction, inferred background, inferred objects, ground-truth and inferred instance segmentation, ground-truth and inferred depth.
(ROOMS) generation
Panels: generated video, generated background, generated objects, instance segmentation, depth.
(TRAFFIC) decomposition
Panels: original video, reconstruction, inferred background, inferred objects, ground-truth and inferred instance segmentation, ground-truth and inferred depth.
(TRAFFIC) generation
Panels: generated video, generated background, generated objects, instance segmentation, depth.
Bibtex
@inproceedings{henderson20neurips,
  title={Unsupervised object-centric video generation and decomposition in {3D}},
  author={Henderson, Paul and Lampert, Christoph H.},
  booktitle={Advances in Neural Information Processing Systems (NeurIPS) 33},
  year={2020}
}