3D Reconstruction from Accidental Motion

Fisher Yu, David Gallup
CVPR 2014

Abstract

We have discovered that 3D reconstruction can be achieved from a single still photographic capture due to accidental motions of the photographer, even while attempting to hold the camera still. Although these motions result in little baseline and therefore high depth uncertainty, we can, in theory, combine many such measurements over the duration of the capture process (a few seconds) to achieve usable depth estimates. We present a novel 3D reconstruction system tailored for this problem that produces depth maps from short video sequences captured by standard cameras, without the need for multi-lens optics, active sensors, or intentional motions by the photographer. This result leads to the possibility that depth maps of sufficient quality for RGB-D photography applications such as perspective change, simulated aperture, and object segmentation can come "for free" for a significant fraction of still photographs under reasonable conditions.
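
The sketch below only illustrates the abstract's central point: a single small-baseline measurement gives a noisy depth estimate, but fusing many such measurements taken over a few seconds sharpens it. It is not the paper's reconstruction pipeline, and all parameter values (focal length, baseline, pixel noise, frame count) are assumptions chosen for demonstration.

# Illustrative sketch (not the paper's algorithm): fusing many noisy
# small-baseline depth measurements to reduce depth uncertainty.
# All numeric values are assumptions chosen for illustration only.
import numpy as np

focal_px = 1000.0    # assumed focal length in pixels
baseline_m = 0.005   # ~5 mm of accidental hand motion between frames
depth_m = 2.0        # true scene depth in meters
pixel_noise = 0.5    # assumed std. dev. of feature localization (pixels)
num_frames = 100     # frames captured over a few seconds

rng = np.random.default_rng(0)

# For a small baseline b, disparity is d = f * b / z, so pixel noise on d
# translates into noise on inverse depth 1/z = d / (f * b).
true_disparity = focal_px * baseline_m / depth_m
noisy_disparity = true_disparity + rng.normal(0.0, pixel_noise, num_frames)
inv_depth = noisy_disparity / (focal_px * baseline_m)

single_estimate = 1.0 / inv_depth[0]       # one frame: high uncertainty
fused_estimate = 1.0 / inv_depth.mean()    # averaging N frames cuts the
                                           # inverse-depth noise by ~1/sqrt(N)

print(f"single-frame depth estimate: {single_estimate:.2f} m")
print(f"fused estimate from {num_frames} frames: {fused_estimate:.2f} m")

Running this typically shows the single-frame estimate off by tens of centimeters while the fused estimate lands within a few centimeters of the true 2 m depth, which is the intuition behind using accidental motion for depth.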

Video

Poster

Click here to open the high-resolution PDF poster.

Paper

paper

Code

github.com/fyu/tiny

Citation

@inproceedings{Yu14,
  Author    = {Fisher Yu and David Gallup},
  Title     = {3D Reconstruction from Accidental Motion},
  Booktitle = {27th IEEE Conference on Computer Vision and Pattern Recognition},
  Year      = {2014},
}