
Unconstrained ear recognition using deep neural networks
Author(s) - Samuel Dodge, Jinane Mounsef, Lina Karam
Publication year - 2018
Publication title - IET Biometrics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.434
H-Index - 28
ISSN - 2047-4946
DOI - 10.1049/iet-bmt.2017.0208
Subject(s) - computer science, extractor, pattern recognition (psychology), classifier (uml), artificial intelligence, artificial neural network, feature extraction, deep neural networks, transfer of learning, feature (linguistics), ensemble learning, deep learning, machine learning, speech recognition, linguistics, philosophy, process engineering, engineering
The authors perform unconstrained ear recognition using transfer learning with deep neural networks (DNNs). First, they show how existing DNNs can be used as feature extractors; the extracted features are fed to a shallow classifier to perform ear recognition, and performance can be further improved by augmenting the training dataset with small image transformations. Next, they compare the performance of the feature-extraction models with fine-tuned networks. However, because the datasets are limited in size, a fine-tuned network tends to over-fit, so they propose a deep learning-based averaging ensemble to reduce the effect of over-fitting. Performance results are provided on the unconstrained ear recognition datasets AWE and CVLE, as well as on a combined AWE + CVLE dataset. The proposed ensemble achieves the best recognition performance on these datasets compared with DNN feature-extraction-based models and single fine-tuned models.
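The abstract describes two strategies: a pretrained DNN used as a fixed feature extractor feeding a shallow classifier, and an averaging ensemble over fine-tuned networks. Below is a minimal sketch of both ideas, assuming a PyTorch/torchvision ResNet-18 backbone, a linear SVM as the shallow classifier, and simple affine augmentations; the specific networks, input size, augmentations, and classifier used by the authors may differ.

```python
# Sketch (not the authors' code) of: (i) deep features + shallow classifier,
# and (ii) an averaging ensemble of fine-tuned networks.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import LinearSVC

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# (i) Feature extraction: drop the classification layer of a pretrained network
# and use the pooled activations as a fixed descriptor for each ear image.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()          # expose the 512-D pooled features
backbone.eval().to(device)

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Example of small image transformations for augmenting the training set
# (the exact augmentations used in the paper are not reproduced here).
augment = T.RandomAffine(degrees=5, translate=(0.05, 0.05))

@torch.no_grad()
def extract_features(pil_images):
    """Return an (N, 512) array of deep features for a list of PIL images."""
    batch = torch.stack([preprocess(img) for img in pil_images]).to(device)
    return backbone(batch).cpu().numpy()

def train_shallow_classifier(train_images, train_labels):
    """Fit a shallow classifier (here a linear SVM) on the extracted features."""
    clf = LinearSVC()
    clf.fit(extract_features(train_images), train_labels)
    return clf

# (ii) Averaging ensemble: average the softmax outputs of several fine-tuned
# networks and predict the class with the highest mean probability.
@torch.no_grad()
def ensemble_predict(fine_tuned_models, pil_images):
    batch = torch.stack([preprocess(img) for img in pil_images]).to(device)
    probs = [torch.softmax(m(batch), dim=1) for m in fine_tuned_models]
    mean_probs = torch.stack(probs).mean(dim=0)
    return mean_probs.argmax(dim=1).cpu().numpy()
```

Averaging class probabilities across several fine-tuned networks is one common way to reduce the variance of models that over-fit a small training set, which is the role the ensemble plays in the abstract.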