Open Access
Learning Discriminative Chamfer Regularization
Author(s) - Pradeep Yarlagadda, Angela Eigenstetter, Björn Ommer
Publication year - 2012
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5244/c.26.20
Subject(s) - artificial intelligence, discriminative model, clutter, pattern recognition (psychology), pixel, spurious relationship, computer science, computer vision, chamfer (geometry), boundary (topology), robustness (evolution), edge detection, mathematics, image (mathematics), image processing, geometry, radar, telecommunications, mathematical analysis, biochemistry, chemistry, machine learning, gene
Chamfer matching is an effective and widely used technique for detecting objects, or parts thereof, by their shape. A serious limitation, however, is its susceptibility to background clutter. The primary reason is that the presence of each individual model point in a query image is measured independently: a match with the object model is scored by summing the distance-transform values at all model points. Consequently, i) all object pixels are treated as independent and equally relevant, and ii) the model contour (the foreground) is prone to accidental matches with background clutter. As demonstrated by Attneave [1] and by various experiments on illusory contours, object boundary pixels are not all equally important, owing to their statistical interdependence. Moreover, in dense background clutter the model points have a high likelihood of finding good spurious matches [1, 3]. Any arbitrary model would match such a cluttered region, which gives rise to matches with high accidentalness. Chamfer matching only matches the template contour and thus fails to discount the matching score by the accidentalness, i.e., the likelihood that the match is spurious.

We account for the fact that boundary pixels are not all equally important by taking a discriminative approach to chamfer distance computation, thereby increasing its robustness. Let $T = \{t_i\}$ and $Q = \{q_j\}$ be the sets of template and query edge points, respectively, and let $\phi(t_i)$ denote the edge orientation at edge point $t_i$. For a given location $x$ of the template in the query image, directional chamfer matching [2] finds the best $q_j \in Q$ for each $t_i \in T$, resulting in a per-point matching cost $p_i^{(T,Q)}(x)$:

$$p_i^{(T,Q)}(x) = \min_{q_j \in Q} \big|(t_i + x) - q_j\big| + \lambda \big|\phi(t_i + x) - \phi(q_j)\big| \qquad (1)$$

Adjacent template pixels are statistically dependent, so we average (1) over the direct neighbors of pixel $i$. The resulting $p_i$ are then used to learn the importance of individual contour pixels.

While learning weights for individual pixels improves the robustness of template matching, chamfer matching remains prone to accidental responses in spurious background clutter. To estimate the accidentalness of a match, a small dictionary of simple background contours $T_{bg}$ is utilized. Rather than placing background contours at a single fixed location, i.e., at the center of the model contour as in [3], the background elements are trained to focus on the locations where, relative to the foreground, accidental matches typically occur. Let $d_{DCM}(x)$ denote the directional chamfer distance between $Q$ and $T$ at relative displacement $x$. To measure where clutter typically interferes with the model contour, we compute $d_{DCM}^{(T_{bg},T)}$ between each background contour $T_{bg}$ and the object template $T$. Placements of a background contour with a better (lower) chamfer matching score are considered more important, since they lie on or close to the model contour. To weight these matching locations higher, we create a mask $M^{(T_{bg},T)}(x)$.
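The per-point cost of Eq. (1), the neighbor averaging, and the discriminative weighting can be sketched as follows. This is a minimal NumPy sketch, not the authors' implementation: the function names are hypothetical, the nearest-neighbor search is brute force (the paper uses distance transforms for efficiency), the uniform weights in the demo stand in for the discriminatively learned pixel weights, and the background-contour dictionary is omitted.

```python
import numpy as np

def directional_chamfer_cost(template_pts, template_ori, query_pts, query_ori,
                             x, lam=0.5):
    """Per-point directional chamfer cost of Eq. (1), brute force.

    template_pts: (N, 2) template edge points t_i
    template_ori: (N,)   edge orientations phi(t_i) in radians
    query_pts:    (M, 2) query edge points q_j
    query_ori:    (M,)   edge orientations phi(q_j) in radians
    x:            (2,)   placement of the template in the query image
    lam:          weight of the orientation term (lambda in Eq. (1))
    """
    shifted = template_pts + x                                  # t_i + x
    # pairwise location distances |(t_i + x) - q_j|
    d_loc = np.linalg.norm(shifted[:, None, :] - query_pts[None, :, :], axis=2)
    # pairwise orientation differences |phi(t_i + x) - phi(q_j)|, wrapped to [0, pi/2]
    d_ori = np.abs(template_ori[:, None] - query_ori[None, :]) % np.pi
    d_ori = np.minimum(d_ori, np.pi - d_ori)
    # best matching query point for every template point -> p_i(x)
    return np.min(d_loc + lam * d_ori, axis=1)

def average_over_neighbors(p, k=1):
    """Average p_i over the k direct neighbors along the (ordered) contour."""
    kernel = np.ones(2 * k + 1) / (2 * k + 1)
    return np.convolve(p, kernel, mode="same")

def weighted_matching_score(p, weights):
    """Chamfer score with per-pixel weights (learned discriminatively in the paper)."""
    return np.dot(weights, p)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # synthetic template and query edge maps for illustration only
    T_pts, T_ori = rng.uniform(0, 50, (40, 2)), rng.uniform(0, np.pi, 40)
    Q_pts, Q_ori = rng.uniform(0, 200, (500, 2)), rng.uniform(0, np.pi, 500)
    p = directional_chamfer_cost(T_pts, T_ori, Q_pts, Q_ori, x=np.array([80.0, 60.0]))
    p = average_over_neighbors(p, k=1)
    w = np.full(len(p), 1.0 / len(p))   # uniform weights as a placeholder
    print("matching cost:", weighted_matching_score(p, w))
```

With uniform weights the score reduces to standard directional chamfer matching; the paper's contribution is to learn the weights so that statistically informative contour pixels dominate the score, and to discount it by the accidentalness estimated from the background dictionary.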
