Effects of chromatic image statistics on illumination induced color differences
Author(s) -
Marcel P. Lucassen,
Theo Gevers,
Arjan Gijsenij,
Niels Dekker
Publication year - 2013
Publication title -
Journal of the Optical Society of America A
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.803
H-Index - 158
eISSN - 1520-8532
pISSN - 1084-7529
DOI - 10.1364/josaa.30.001871
Subject(s) - standard illuminant , chromaticity , artificial intelligence , chromatic scale , achromatic lens , color constancy , computer vision , chromatic adaptation , color space , rgb color model , computer science , color rendering index , color balance , fidelity , optics , mathematics , physics , color image , image processing , telecommunications , light emitting diode , image (mathematics)
We measure the color fidelity of visual scenes that are rendered under different (simulated) illuminants and shown on a calibrated LCD display. Observers make triad illuminant comparisons involving the renderings from two chromatic test illuminants and one achromatic reference illuminant shown simultaneously. Four chromatic test illuminants are used: two along the daylight locus (yellow and blue), and two perpendicular to it (red and green). The observers select the rendering having the best color fidelity, thereby indirectly judging which of the two test illuminants induces the smallest color differences compared to the reference. Both multicolor test scenes and natural scenes are studied. The multicolor scenes are synthesized and represent ellipsoidal distributions in CIELAB chromaticity space having the same mean chromaticity but different chromatic orientations. We show that, for those distributions, color fidelity is best when the vector of the illuminant change (pointing from neutral to chromatic) is parallel to the major axis of the scene's chromatic distribution. For our selection of natural scenes, which generally have much broader chromatic distributions, we measure a higher color fidelity for the yellow and blue illuminants than for red and green. Scrambled versions of the natural images are also studied to exclude possible semantic effects. We quantitatively predict the average observer response (i.e., the illuminant probability) with four types of models, differing in the extent to which they incorporate information processing by the visual system. Results show different levels of performance for the models, and different levels for the multicolor scenes and the natural scenes. Overall, models based on the scene averaged color difference have the best performance. We discuss how color constancy algorithms may be improved by exploiting knowledge of the chromatic distribution of the visual scene.
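The abstract's best-performing predictor is the scene-averaged color difference, and its central finding is that fidelity is highest when the illuminant change vector is parallel to the major axis of the scene's chromatic distribution. The sketch below illustrates both ideas under stated assumptions: scene colors are given as CIELAB triplets, the color difference is plain Delta E*ab (the paper's exact metric and model variants are not specified here), and the major axis is taken as the leading eigenvector of the a*b* covariance. All function names are illustrative, not from the paper.

```python
import numpy as np

def mean_delta_e(lab_ref, lab_test):
    """Scene-averaged CIELAB color difference (Delta E*ab).

    lab_ref, lab_test: (N, 3) arrays of L*, a*, b* values for the same
    scene rendered under the reference and test illuminants.
    """
    return float(np.mean(np.linalg.norm(lab_test - lab_ref, axis=1)))

def chromatic_major_axis(lab):
    """Unit vector along the major axis of the scene's a*b* distribution,
    computed as the leading eigenvector of the chromaticity covariance."""
    ab = lab[:, 1:] - lab[:, 1:].mean(axis=0)
    cov = np.cov(ab.T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    return eigvecs[:, np.argmax(eigvals)]

def alignment(illuminant_shift_ab, major_axis):
    """|cos(angle)| between the illuminant change vector (neutral -> chromatic,
    in a*b*) and the distribution's major axis. 1 means parallel, 0 orthogonal."""
    v = illuminant_shift_ab / np.linalg.norm(illuminant_shift_ab)
    return abs(float(np.dot(v, major_axis)))
```

For an ellipsoidal distribution elongated along a* (a reddish-greenish scene), `alignment` is near 1 for a red or green illuminant shift and near 0 for a yellow or blue one, matching the parallel-axis result described in the abstract.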