
Extraction of compact boundary normalisation based geometric descriptors for affine invariant shape retrieval
Author(s) -
Paramarthalingam Arjun,
Thankanadar Mirnalinee
Publication year - 2021
Publication title -
IET Image Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.401
H-Index - 45
eISSN - 1751-9667
pISSN - 1751-9659
DOI - 10.1049/ipr2.12088
Subject(s) - centroid, affine transformation, artificial intelligence, robustness (evolution), computer vision, pattern recognition (psychology), active shape model, invariant (physics), mathematics, computer science, feature extraction, shape analysis (program analysis), image retrieval, boundary (topology), segmentation, image (mathematics), geometry, mathematical analysis, mathematical physics, static analysis, biochemistry, chemistry, gene, programming language
Shape recognition and retrieval for non‐rigid objects is a complex task that can be performed effectively with a set of compact shape descriptors. This paper presents a new technique for generating normalised contour points from shape silhouettes: the object contour is first identified from the image, and the object area normalisation (OAN) method is then used to partition the object into segments of equal area with respect to the shape centroid. These contour points are subsequently used to derive six descriptors: compact centroid distance (CCD), central angle (ANG), normalised points distance (NPD), centroid distance ratio (CDR), angular pattern descriptor (APD) and multi‐triangle area representation (MTAR). Each descriptor is a 1D shape feature vector that preserves both contour and region information of the shape. The performance of the proposed descriptors is evaluated on the MPEG‐7 Part‐A, Part‐B and multi‐view curve datasets. The experiments assess the proposed descriptors' robustness to affine transformations and their image retrieval performance. A comparative study has also been carried out to evaluate the approach against other state-of-the-art approaches. The results show that the retrieval rate of the OAN-based approach is significantly higher than that of existing shape descriptors.
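The following is a minimal sketch of the general idea described in the abstract, not the authors' exact implementation: the equal-area sampling step is only a rough stand-in for OAN, and the centroid-distance vector is a simplified CCD-style descriptor. All function names, parameters and the example contour are illustrative assumptions.

```python
import numpy as np

def shape_centroid(contour):
    """Centroid of a closed contour given as an (N, 2) array of (x, y) points."""
    return contour.mean(axis=0)

def equal_area_samples(contour, n_segments=64):
    """Pick contour points so that consecutive points, together with the centroid,
    enclose (approximately) equal triangle areas -- a rough stand-in for the
    paper's object area normalisation (OAN) idea."""
    c = shape_centroid(contour)
    closed = np.vstack([contour, contour[:1]])            # close the loop
    v0, v1 = closed[:-1] - c, closed[1:] - c
    tri_areas = 0.5 * np.abs(v0[:, 0] * v1[:, 1] - v0[:, 1] * v1[:, 0])
    cum = np.concatenate([[0.0], np.cumsum(tri_areas)])   # cumulative swept area
    targets = np.linspace(0.0, cum[-1], n_segments, endpoint=False)
    idx = np.searchsorted(cum, targets, side="right") - 1
    return contour[np.clip(idx, 0, len(contour) - 1)]

def centroid_distance_descriptor(points):
    """Distances from each sampled contour point to the centroid, scaled by the
    maximum distance so the 1D vector is invariant to uniform scaling."""
    c = points.mean(axis=0)
    d = np.linalg.norm(points - c, axis=1)
    return d / d.max()

# Usage example: a synthetic ellipse as a stand-in silhouette boundary.
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
contour = np.stack([3 * np.cos(t), 1.5 * np.sin(t)], axis=1)
samples = equal_area_samples(contour, n_segments=64)
ccd = centroid_distance_descriptor(samples)
print(ccd.shape)  # (64,)
```

Because the sampling is driven by swept area about the centroid rather than by arc length, the resulting point set, and hence the descriptor, is less sensitive to the non-uniform boundary stretching introduced by affine transformations, which is the property the paper's experiments evaluate.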