Discrete Light Source Estimation from Light Probes for Photorealistic Rendering
Author(s) -
Farshad Einabadi,
Oliver Grau
Publication year - 2015
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5244/c.29.43
Subject(s) - rendering (computer graphics) , computer science , computer vision , image based lighting , computer graphics (images) , artificial intelligence , image based modeling and rendering , global illumination
Applications such as rendering images with computer graphics methods usually require sophisticated light models to give better control. Complex scenes in computer-generated imagery require highly differentiated light models to render the scene realistically; this usually means a large number of (virtual) light sources so that shadows and shading are reproduced accurately. In the production of visual effects for film and TV in particular, the real scene lighting must be captured very accurately so that virtual objects can be rendered realistically into the scene. In this context, light modeling is usually done manually by skilled artists in a time-consuming process. This contribution describes a new technique for estimating discrete spot light sources. The method uses a consumer-grade DSLR camera equipped with a fisheye lens to capture light probe images registered to the scene. From these probe images, the geometric and radiometric properties of the dominant light sources in the scene are estimated. The first step is a robust approach to identify light sources in the light probes and to find their exact positions by triangulation. The light direction and radiometric fall-off properties are then formulated and estimated in a least-squares minimization. Our approach has a number of advantages. First, the probing camera is registered using a multi-camera setup, which requires minimal amendments to the studio. Second, we are not limited to any specific probing object, since the properties of each light are estimated directly from the probe images. In addition, since the probing camera can move freely in the area of interest, there is no limit on the covered space; the large field of view of the fisheye lens is also beneficial in this respect.
Calibration and Registration of Cameras. We propose a two-step calibration and registration approach.
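The abstract does not give the paper's exact fall-off model, but a common choice for a spot light is a cosine lobe, I(θ) = I0·cosⁿ(θ), with θ measured from the spot axis. Taking logarithms makes the model linear in (log I0, n), so the least-squares estimation of the radiometric fall-off can be sketched as a linear solve; the function name and the cosine-lobe model itself are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

def fit_spot_falloff(angles, intensities):
    """Fit I(theta) = I0 * cos(theta)**n by linear least squares.

    Assumed cosine-lobe spot model (illustrative, not necessarily the
    paper's exact formulation). angles are in radians from the spot
    axis; intensities are observed radiances at those angles.
    Taking logs gives: log I = n * log(cos theta) + log I0,
    which is linear in the unknowns (n, log I0).
    """
    angles = np.asarray(angles, dtype=float)
    I = np.asarray(intensities, dtype=float)
    x = np.log(np.cos(angles))                  # regressor
    A = np.stack([x, np.ones_like(x)], axis=1)  # design matrix [x, 1]
    (n, log_I0), *_ = np.linalg.lstsq(A, np.log(I), rcond=None)
    return np.exp(log_I0), n
```

With noise-free synthetic samples the fit recovers the generating parameters exactly; with real probe measurements the same solve returns the least-squares estimate in log space.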
In the first step, a planar asymmetric calibration pattern is used to simultaneously calibrate the intrinsics and poses of all the witness cameras and the principal camera via a bundle adjustment module. In the second step, the parameters of the witness cameras are kept fixed and the probing camera is registered in the same coordinate system using color features of an attached calibration rig.
Position Estimation. To estimate the 3D position vectors of the light sources, one shoots a ray from every detected light blob in each probe image and triangulates the corresponding rays from at least two probe positions for each source. Figure 1 summarizes the required steps.
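The triangulation step above can be sketched as finding the point closest to two back-projected rays. For two (generally skew) rays, minimizing the distance between a point on each ray gives a 2x2 linear system; the midpoint of the shortest connecting segment is the estimated light position. This is a minimal sketch of standard two-view ray triangulation, assuming registered probe poses; the function name is hypothetical:

```python
import numpy as np

def triangulate_rays(o1, d1, o2, d2, eps=1e-12):
    """Midpoint triangulation of two 3D rays o + t*d.

    Minimizes |(o1 + t1*d1) - (o2 + t2*d2)|^2 over t1, t2 and
    returns the midpoint of the shortest segment between the rays,
    or None if the rays are near-parallel (ill-posed).
    """
    o1, o2 = np.asarray(o1, float), np.asarray(o2, float)
    d1 = np.asarray(d1, float); d1 /= np.linalg.norm(d1)
    d2 = np.asarray(d2, float); d2 /= np.linalg.norm(d2)
    b = o2 - o1
    a = d1 @ d2                   # cosine of the angle between rays
    denom = 1.0 - a * a
    if denom < eps:               # near-parallel: triangulation degenerate
        return None
    # Closed-form solution of the 2x2 normal equations.
    t1 = ((d1 @ b) - a * (d2 @ b)) / denom
    t2 = (a * (d1 @ b) - (d2 @ b)) / denom
    p1 = o1 + t1 * d1             # closest point on ray 1
    p2 = o2 + t2 * d2             # closest point on ray 2
    return 0.5 * (p1 + p2)
```

With more than two probe positions per source, the same least-squares idea extends to stacking one pair of normal equations per ray, which is how a robust multi-view estimate would typically be obtained.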