The goal of this module is to learn the principles of 3D reconstruction of an object or a scene from multiple images or stereoscopic videos. To that end, the basic concepts of projective geometry and 3D space are first introduced; the rest of the theoretical aspects and applications are built upon these basic tools. The mapping from the 3D world to the image plane will be studied: we will introduce different camera models, their parameters, and how to estimate them (camera calibration and auto-calibration). The geometry that relates a pair of views will then be analyzed. All these concepts will be applied to obtain a 3D reconstruction in the two main settings: calibrated and uncalibrated cameras. In particular, we will learn how to: estimate the depth of image points, extract the underlying 3D points given a set of point correspondences in the images, generate novel views, estimate a 3D object given a set of calibrated color or binary images, and estimate a sparse set of 3D points given a set of uncalibrated images.

The representation of 3D shapes as voxels and meshes will be studied. We will explain reconstruction and modeling from Kinect data, as a particular example of sensors that provide an image of the scene together with its depth. Finally, we will see some techniques for processing 3D point clouds. The concepts and techniques learnt in this module are used in real applications ranging from augmented reality and object scanning to motion capture, novel view synthesis, the bullet-time effect, and robotics.
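To make the mapping from the 3D world to the image plane concrete, the following is a minimal sketch of the pinhole camera model, where a 3D point is projected to pixel coordinates via x ~ K[R|t]X in homogeneous coordinates. The intrinsic matrix K and the pose (R, t) below are illustrative values, not parameters from any course dataset.

```python
import numpy as np

# Illustrative intrinsic matrix K: focal lengths fx = fy = 800,
# principal point (cx, cy) = (320, 240), zero skew.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

R = np.eye(3)                        # camera aligned with world axes
t = np.zeros((3, 1))                 # camera at the world origin

P = K @ np.hstack([R, t])            # 3x4 projection matrix P = K [R | t]

X = np.array([1.0, 0.5, 4.0, 1.0])   # a 3D point in homogeneous coordinates
x = P @ X                            # project into the image
x = x[:2] / x[2]                     # dehomogenize to pixel coordinates

print(x)  # -> [520. 340.]
```

With these values the point (1, 0.5, 4) lands at pixel (520, 340), i.e. fx·X/Z + cx and fy·Y/Z + cy. Camera calibration, as covered in the module, is the inverse problem of estimating K (and possibly R, t) from images.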
Module Project: 3D recovery of urban scenes
The aim of this project is to learn the basic concepts and techniques to reconstruct a real-world scene given several images (points of view) of it, which are not necessarily calibrated beforehand. In this project we focus on the 3D recovery of urban scenes using images from different datasets, namely images of facades and aerial images of cities.
This project can be useful in applications where 3D information has to be inferred from images taken from different points of view. Examples of such applications include image mosaics or panoramas, augmented reality, depth computation, 3D reconstruction, 3D localization and navigation, and novel view synthesis.
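One of the core operations behind depth computation and 3D reconstruction from correspondences is triangulation. The sketch below shows the standard linear (DLT) triangulation of a single point correspondence from two views with known projection matrices, solving AX = 0 by SVD. The camera setup and the 3D point are made-up illustrative values, not the project's actual data.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """DLT triangulation of one correspondence; x1, x2 are (u, v) pixels."""
    # Each view contributes two linear constraints on the homogeneous 3D point X.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # X is the right singular vector of A
    X = Vt[-1]                       # with the smallest singular value
    return X[:3] / X[3]              # dehomogenize

def project(P, X):
    """Project a 3D point X with camera matrix P, return (u, v)."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Illustrative stereo pair: same intrinsics, second camera shifted
# one unit along the x axis.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 5.0])              # ground-truth 3D point
x1, x2 = project(P1, X_true), project(P2, X_true)

X_est = triangulate(P1, P2, x1, x2)
print(X_est)  # recovers X_true up to numerical precision
```

With noise-free correspondences the linear system has an exact solution, so the estimate matches the ground truth; with real detections the same construction gives a least-squares estimate, which the project can refine.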
M4 Schedule – Academic Year 2021-2022 – Student Guide