
Multi-Modal Biometrics based on Data Fusion
Author(s) - H. X. Yang, Mu Sun, Cheng Cheng, Anthony H. Ding
Publication year - 2020
Publication title - Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1684/1/012023
Subject(s) - biometrics, multi-modal recognition, computer science, artificial intelligence, pattern recognition, kernel methods, feature fusion, sensor fusion, data mining, machine learning, human–computer interaction
With the development of intelligent applications, biometric recognition technology has attracted wide attention and has been applied in many real-world fields, such as access control and payment. Traditional biometric systems usually rely on single-modality data, which limits the available feature information and creates a bottleneck in recognition accuracy. In this paper, a multi-modal biometric recognition framework is presented that uses a multiple-kernel learning algorithm to fuse heterogeneous information from different modalities. To extract complementary information, the kernel matrices of the individual modalities are combined into a mixed kernel matrix, which is then used to produce the final classification results. Experimental results on multiple biometric datasets show that the proposed method achieves higher recognition accuracy than existing single-modal and multi-modal fusion methods.
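
As a rough illustration of the fusion scheme the abstract describes, the sketch below combines per-modality kernel matrices into a single mixed kernel and feeds it to a precomputed-kernel SVM. It is a minimal sketch, not the authors' implementation: the RBF base kernels, the fixed equal fusion weights, and the synthetic two-modality data are all assumptions, and the paper's multi-kernel learning algorithm would learn the combination weights rather than fix them.

import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for two biometric modalities (e.g. face and
# fingerprint feature vectors); a real system would use features
# extracted per subject.
n_train, n_test = 200, 50
X1_train, X1_test = rng.normal(size=(n_train, 64)), rng.normal(size=(n_test, 64))
X2_train, X2_test = rng.normal(size=(n_train, 32)), rng.normal(size=(n_test, 32))
y_train = rng.integers(0, 5, size=n_train)  # subject identity labels

# One base kernel per modality (RBF here is an assumption; any
# positive semi-definite kernel works).
K1_train = rbf_kernel(X1_train, X1_train)
K2_train = rbf_kernel(X2_train, X2_train)

# Mixed kernel: a convex combination of the base kernels. Multi-kernel
# learning would optimize these weights; 0.5/0.5 is a placeholder.
w1, w2 = 0.5, 0.5
K_train = w1 * K1_train + w2 * K2_train

# Classify with an SVM operating on the precomputed mixed kernel.
clf = SVC(kernel="precomputed")
clf.fit(K_train, y_train)

# At test time the mixed kernel is built between test and training samples.
K_test = w1 * rbf_kernel(X1_test, X1_train) + w2 * rbf_kernel(X2_test, X2_train)
pred = clf.predict(K_test)
print(pred[:10])

One reason this simple weighted sum is valid: a convex combination of positive semi-definite kernel matrices is itself positive semi-definite, so the mixed kernel remains a legitimate kernel for the SVM.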