A digital image encodes many important physical properties of a scene, such as illumination, surface orientation, shape, and reflectance. In this presentation, I will introduce several pieces of work on computing and classifying these fundamental properties from images alone.
First, I will briefly introduce three techniques for measuring and synthesizing illumination. Our first technique targets a single image captured outdoors and detects its shadow boundaries using machine learning algorithms with visual features motivated by physical models. Our second technique separates reflectance and illumination from a single color image. Our third technique gathers images lit from different lighting positions and synthesizes new images by removing ambient light and keeping only directional illumination.
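The ambient-removal idea rests on the linearity of light transport: an image lit by a directional source plus ambient light, minus an image of the same scene under ambient light alone, leaves only the directional component. A minimal sketch with synthetic frames (the arrays and value ranges below are placeholders, not our capture setup):

```python
import numpy as np

# Hypothetical example: two captures of the same scene from a fixed camera.
# `flash_frame` is lit by a directional source plus ambient light;
# `ambient_frame` is lit by ambient light alone. Because light adds
# linearly, subtracting the ambient-only frame leaves the purely
# directional component.
rng = np.random.default_rng(0)
ambient_frame = rng.uniform(0.1, 0.3, size=(4, 4))        # ambient only
directional = rng.uniform(0.0, 0.7, size=(4, 4))          # unknown target
flash_frame = np.clip(ambient_frame + directional, 0, 1)  # directional + ambient

directional_only = np.clip(flash_frame - ambient_frame, 0, 1)
print(np.allclose(directional_only, directional))  # → True (ranges chosen so nothing clips)
```

In practice sensor noise and clipped highlights make the subtraction approximate, which is why the real pipeline works from many lighting positions rather than a single pair.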
Second, I will describe our recent work on image relighting and 3D shape recovery. We first capture multiple images from a fixed camera position, lit from various directions by LEDs mounted inside a dome. From those images we can either create an interactive relit image that reveals the textural characteristics of materials, or recover the 3D shape of the object. Unlike classic photometric stereo algorithms, our 3D shape recovery algorithm does not require the light sources to be far away: we handle near-light illumination with novel optimization methods. This work has helped resolve longstanding art-historical questions about the evolution of the artist Paul Gauguin's printing techniques, and was covered by Newsweek among other outlets.
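For contrast with the near-light setting, classic distant-light photometric stereo can be sketched in a few lines: under Lambertian shading, a pixel's intensities under known distant light directions satisfy I = L (rho * n), so a per-pixel least-squares solve recovers the albedo rho and unit normal n. This is the baseline our method relaxes, not our near-light optimization; the lights and normal below are synthetic:

```python
import numpy as np

# Classic distant-light photometric stereo for a single pixel.
# Lambertian model: I = L @ (rho * n), with L the K x 3 matrix of
# known unit light directions. Solving the linear system gives
# g = rho * n; its norm is the albedo, its direction the normal.
L = np.array([[0.0, 0.0, 1.0],
              [0.6, 0.0, 0.8],
              [0.0, 0.6, 0.8]])              # K = 3 known unit light directions

n_true = np.array([0.3, 0.2, 0.933])
n_true = n_true / np.linalg.norm(n_true)     # ground-truth unit normal
rho_true = 0.8                               # ground-truth albedo

I = L @ (rho_true * n_true)                  # observed intensities (no shadows)

g, *_ = np.linalg.lstsq(L, I, rcond=None)    # g = rho * n
rho = np.linalg.norm(g)
n = g / rho
print(rho, n)
```

The distant-light assumption makes L identical at every pixel; with near LEDs, the incoming direction and intensity vary per pixel and per light, which is what forces the nonlinear optimization in our method.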
Finally, I will talk about how to recover missing colors in an image. We present a learning-based technique for color demosaicing in 4D light field cameras. We exploit the spectral, spatial, and angular correlations in naturally occurring light fields by learning an over-complete dictionary, and reconstruct the missing colors using sparse optimization.
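A toy sketch of the dictionary-plus-sparse-optimization idea, not our light-field pipeline: a signal that is sparse in a dictionary D is observed only at the entries a sensor records, and orthogonal matching pursuit on the observed rows recovers the sparse code, which then fills in the missing entries. The small Hadamard dictionary and 2-sparse signal below are made up for illustration:

```python
import numpy as np

# Signal x is 2-sparse in an orthonormal Hadamard dictionary D; only the
# entries selected by `mask` are observed (like the color samples a sensor
# records). Orthogonal matching pursuit on the observed rows recovers the
# sparse code and thereby the missing sample.
D = 0.5 * np.array([[1,  1,  1,  1],
                    [1, -1,  1, -1],
                    [1,  1, -1, -1],
                    [1, -1, -1,  1]], dtype=float)  # unit-norm columns

x = 2.0 * D[:, 0] + 1.0 * D[:, 2]            # 2-sparse ground-truth signal
mask = np.array([True, True, True, False])   # last sample is missing
y, D_obs = x[mask], D[mask]

# Orthogonal matching pursuit restricted to the observed rows.
residual, chosen = y.copy(), []
for _ in range(2):                           # sparsity level k = 2
    scores = np.abs(D_obs.T @ residual)
    scores[chosen] = 0.0                     # never pick an atom twice
    chosen.append(int(np.argmax(scores)))
    c, *_ = np.linalg.lstsq(D_obs[:, chosen], y, rcond=None)
    residual = y - D_obs[:, chosen] @ c

x_hat = D[:, chosen] @ c                     # reconstructs the missing sample too
print(x_hat)                                 # → [1.5 1.5 0.5 0.5]
```

The same principle scales up in the real system: the learned over-complete dictionary spans 4D light-field patches, so a sparse code fit to the recorded color samples predicts the colors the sensor never measured.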