On the use of shadows in stance recovery
Author(s) -
Bruckstein Alfred M.,
Holt Robert J.,
Jean Yves D.,
Netravali Arun N.
Publication year - 2000
Publication title -
International Journal of Imaging Systems and Technology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.359
H-Index - 47
eISSN - 1098-1098
pISSN - 0899-9457
DOI - 10.1002/ima.1016
Subject(s) - shadow , computer vision , artificial intelligence , position , pixel , projection , computer science , pinhole camera , object , feature , point (geometry) , perspective (graphical) , plane (geometry) , noise (video) , image plane , pinhole (optics) , image (mathematics) , mathematics , algorithm , geometry , optics
Abstract - The image of an object and of the shadow it casts on a planar surface provides important cues for three‐dimensional (3D) stance recovery. We assume that the position of the plane on which the shadow lies with respect to a pinhole camera is known and that the position of the light source is unknown. If the light source is sufficiently far away that parallel projection may be assumed, then knowledge of two point correspondences between images of feature points and images of their shadows is enough to determine the position of the object and the direction of the light source. If the light source is close enough that the shadow points are obtained via perspective projection, then there is a one‐parameter infinite family of solutions for the position of the object and the light source. Determining the stance of an object is highly sensitive to noise, so we provide algorithms for stance recovery that take into account known information about the object. In our experiments, the errors for the location of the 3D feature points obtained by these algorithms are generally less than 0.2% times the error in pixels in the image points, and the errors for the 3D directions of the links are roughly 0.04° times the error in pixels, normalized by the distance to the object from the camera and the length of the link. © 2001 John Wiley & Sons, Inc. Int J Imaging Syst Technol, 11, 315–330, 2000
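The parallel-projection geometry the abstract relies on can be illustrated with a short sketch. This is not the paper's algorithm, only a minimal forward/inverse demonstration under made-up numbers: because the plane's position relative to the pinhole camera is known, the image of a shadow point fixes its 3D location by ray-plane intersection, and once a 3D feature point is known, the difference between it and its recovered shadow gives the parallel light direction up to scale. All coordinates, the plane, and the light direction below are illustrative assumptions.

```python
# Hedged sketch (illustrative, not the authors' method): shadow casting under a
# parallel (distant) light source, pinhole projection, and back-projection of
# the shadow image point onto the known plane.

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def add(a, b): return [x + y for x, y in zip(a, b)]
def sub(a, b): return [x - y for x, y in zip(a, b)]
def scale(a, s): return [x * s for x in a]

def cast_shadow(X, light_dir, n, c):
    """Shadow of 3D point X on the plane {p : n.p = c}, rays parallel to light_dir."""
    t = (c - dot(n, X)) / dot(n, light_dir)
    return add(X, scale(light_dir, t))

def project(P, f=1.0):
    """Pinhole projection onto the image plane z = f (camera centre at the origin)."""
    return (f * P[0] / P[2], f * P[1] / P[2])

def backproject_to_plane(u, v, n, c, f=1.0):
    """Intersect the viewing ray through image point (u, v) with the known plane."""
    ray = [u, v, f]          # direction of the ray from the camera centre
    t = c / dot(n, ray)      # solves n.(t * ray) = c
    return scale(ray, t)

# Assumed ground plane y = -1, i.e. n.p = c with n = (0, 1, 0), c = -1.
n, c = [0.0, 1.0, 0.0], -1.0
light = [0.2, -1.0, 0.1]     # assumed parallel-light direction
X = [0.3, 0.5, 4.0]          # an assumed 3D feature point in front of the camera

S = cast_shadow(X, light, n, c)           # true 3D shadow point
u, v = project(S)                         # its image in the pinhole camera
S_rec = backproject_to_plane(u, v, n, c)  # image + known plane recover S exactly
d = sub(S_rec, X)                         # light direction up to scale, from one pair
```

With two such feature-shadow pairs, the consistency of the two recovered directions is what over-determines the object position in the parallel-projection case described in the abstract.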