Depiction Invariant Object Matching
Author(s) - Anupriya Balikai, Peter Hall
Publication year - 2012
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5244/c.26.56
Subject(s) - depiction, artificial intelligence, computer science, computer vision, robustness, matching, invariance, object recognition, pattern recognition, feature matching, feature extraction, art, visual arts
We are interested in matching objects in photographs, paintings, sketches and so on; after all, humans have a remarkable ability to recognise objects in images, no matter how they are depicted. We conduct experiments in matching, and conclude that the key to robustness lies in object description. The existing literature offers numerous feature descriptors that rely heavily on photometric properties such as colour and illumination to describe objects. Although these methods achieve high rates of accuracy in applications such as detection and retrieval of photographs, they fail to generalise to datasets consisting of mixed depictions. Here, we propose a more general approach to describing objects that is invariant to depictive style. We use structure at a global level, combined with simple non-photometric descriptors at a local level; no prior learning is required. Our descriptor achieves results on par with the existing state of the art when applied to object matching on a standard dataset consisting of photographs alone, and outperforms the state of the art when applied to depiction-invariant object matching.
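To make the photometric/non-photometric distinction concrete, here is a minimal sketch, not the authors' descriptor, of one simple non-photometric local descriptor: a gradient-magnitude-weighted histogram of edge orientations, which discards colour and absolute illumination so a photo and a sketch of the same shape can score as similar. The file names and the choice of cosine similarity are illustrative assumptions.

```python
import cv2
import numpy as np

def edge_orientation_histogram(image, bins=16):
    """Non-photometric descriptor: histogram of edge orientations,
    weighted by gradient magnitude. Colour and illumination are
    discarded, so depiction style matters less than shape."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi  # orientation (mod pi), not direction
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)  # normalise against contrast changes

def match_score(img_a, img_b):
    """Cosine similarity between descriptors; higher suggests the two
    images depict the same object, whatever the depictive style."""
    ha = edge_orientation_histogram(img_a)
    hb = edge_orientation_histogram(img_b)
    return float(np.dot(ha, hb) /
                 (np.linalg.norm(ha) * np.linalg.norm(hb) + 1e-8))

# Hypothetical file names, for illustration only.
photo = cv2.imread("object_photo.png")
sketch = cv2.imread("object_sketch.png")
print(f"depiction-invariant match score: {match_score(photo, sketch):.3f}")
```

A colour histogram computed on the same pair would typically score the photo against other photos far higher than against the sketch, which is the failure mode the abstract attributes to photometric descriptors.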