Open Access
PixISegNet: pixel‐level iris segmentation network using convolutional encoder–decoder with stacked hourglass bottleneck
Author(s) -
Jha Ranjeet Ranjan,
Jaswal Gaurav,
Gupta Divij,
Saini Shreshth,
Nigam Aditya
Publication year - 2020
Publication title -
IET Biometrics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.434
H-Index - 28
eISSN - 2047-4946
pISSN - 2047-4938
DOI - 10.1049/iet-bmt.2019.0025
Subject(s) - computer science , segmentation , artificial intelligence , convolutional neural network , encoder , hourglass , robustness (evolution) , cross entropy , pattern recognition (psychology) , computer vision , pixel , iris recognition , image segmentation , biometrics , archaeology , history , operating system , biochemistry , chemistry , gene
In this paper, the authors present a new iris ROI segmentation algorithm using a deep convolutional neural network (NN) to achieve state‐of‐the‐art segmentation performance on well‐known iris image data sets. The authors' model surpasses the performance of the state‐of‐the‐art Iris DenseNet framework by applying several strategies, including multi‐scale/multi‐orientation training, model training from scratch, and proper hyper‐parameterisation of crucial parameters. The proposed PixISegNet consists of an autoencoder which primarily uses long and short skip connections and a stacked hourglass network between the encoder and decoder. The stacked hourglass network repeatedly scales feature maps down and up, which helps extract features at multiple scales and segment the iris robustly even under occlusion. Furthermore, the proposed model is optimised with a combination of cross‐entropy loss and content loss. The content loss operates on high‐level features, and thus at a different level of abstraction, complementing the cross‐entropy loss, which penalises per‐pixel classification errors. Additionally, the robustness of the proposed network is verified by rotating images by certain degrees, changing the aspect ratio, blurring, and changing contrast. Experimental results on the various iris characteristics demonstrate the superiority of the proposed method over the state‐of‐the‐art iris segmentation methods considered in this study. In order to demonstrate network generalisation, the authors deploy a very stringent TOTA (i.e. train‐once‐test‐all) strategy. The proposed method achieves E1 scores of 0.00672, 0.00916 and 0.00117 on the UBIRIS‐V2, IIT‐D and CASIA V3.0 Interval data sets, respectively. Moreover, when included in an end‐to‐end iris recognition system with a Siamese‐based matching network, such a deep convolutional NN for segmentation will augment the performance of the matching stage.
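The abstract describes an encoder–decoder with a stacked hourglass bottleneck, long and short skip connections, and a combined cross‐entropy plus content loss. The PyTorch sketch below is not the authors' implementation; the module sizes, hourglass depth, VGG feature cut‐off and loss weighting are illustrative assumptions, intended only to show how such a bottleneck and combined loss could be wired together.

```python
# Minimal sketch (assumed architecture, not the published PixISegNet code):
# encoder-decoder with a stacked hourglass bottleneck, skip connections,
# and a cross-entropy + content (perceptual) segmentation loss.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights


def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )


class Hourglass(nn.Module):
    """One recursive scale-down/scale-up stage; stacking several of these
    approximates the 'continuous scale up-down' bottleneck in the abstract."""
    def __init__(self, channels, depth=3):
        super().__init__()
        self.down = conv_block(channels, channels)
        self.inner = Hourglass(channels, depth - 1) if depth > 1 else conv_block(channels, channels)
        self.up = conv_block(channels, channels)

    def forward(self, x):
        skip = x                                        # short skip connection at this scale
        y = F.max_pool2d(self.down(x), 2)
        y = self.inner(y)
        y = F.interpolate(self.up(y), scale_factor=2, mode="nearest")
        return y + skip


class PixISegNetSketch(nn.Module):
    def __init__(self, base=32, n_hourglass=2):
        super().__init__()
        self.enc1 = conv_block(1, base)
        self.enc2 = conv_block(base, base * 2)
        self.bottleneck = nn.Sequential(*[Hourglass(base * 2) for _ in range(n_hourglass)])
        self.dec2 = conv_block(base * 2, base)
        self.dec1 = nn.Conv2d(base, 1, 1)               # per-pixel iris/non-iris logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(F.max_pool2d(e1, 2))
        b = self.bottleneck(e2)
        d2 = F.interpolate(self.dec2(b), scale_factor=2, mode="nearest")
        return self.dec1(d2 + e1)                       # long skip connection encoder -> decoder


# Frozen VGG features supply the high-level "content" representation.
vgg_feats = vgg16(weights=VGG16_Weights.DEFAULT).features[:16].eval()
for p in vgg_feats.parameters():
    p.requires_grad_(False)


def segmentation_loss(logits, target, content_weight=0.1):
    """Pixel-wise cross-entropy plus an L2 content loss on VGG features of the
    predicted vs. ground-truth masks (the 0.1 weighting is an assumption)."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    pred3 = torch.sigmoid(logits).repeat(1, 3, 1, 1)    # VGG expects 3-channel input
    gt3 = target.repeat(1, 3, 1, 1)
    content = F.mse_loss(vgg_feats(pred3), vgg_feats(gt3))
    return bce + content_weight * content


if __name__ == "__main__":
    model = PixISegNetSketch()
    x = torch.randn(2, 1, 64, 64)                       # toy grayscale iris crops
    y = (torch.rand(2, 1, 64, 64) > 0.5).float()        # toy binary masks
    loss = segmentation_loss(model(x), y)
    loss.backward()
    print(float(loss))
```

In this sketch the per-scale residual additions inside each hourglass stage play the role of the short skip connections, while the addition of the first encoder feature map to the upsampled decoder output plays the role of the long skip connection; the content term compares high-level feature maps rather than individual pixels, which is why it complements the cross-entropy term.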
