Deep Hybrid Multimodal Biometric Recognition System Based on Features-Level Deep Fusion of Five Biometric Traits.

Researchers

Mohammad Hassan Safavipour et al.

Journal

Modalities

Face, both irises, and two fingerprints

Models

Kernel-based dimensionality reduction (KPCA, KLDA), quaternion-based KQPCA, and a deep fully connected fusion network

Abstract

The need for information security and the adoption of the relevant regulations is becoming an overwhelming demand worldwide. As an efficient solution, hybrid multimodal biometric systems use fusion to combine multiple biometric traits and sources, improving recognition accuracy, offering higher security assurance, and coping with the limitations of uni-biometric systems. In this paper, three strategies for feature-level deep fusion of five biometric traits (face, both irises, and two fingerprints) derived from three sources of evidence are proposed and compared. In the first two proposed methodologies, each feature vector is mapped separately from its feature space into a reproducing kernel Hilbert space (RKHS) by selecting an appropriate reproducing kernel. In this higher-dimensional space, where nonlinear relations become linear, dimensionality reduction algorithms (KPCA, KLDA) and quaternion-based algorithms (KQPCA) are used to fuse the feature vectors. In the third methodology, the feature spaces are fused using deep learning, by combining the feature vectors in deep, fully connected layers. The experimental results on six databases show that the multimodal templates obtained from the deep fusion of feature spaces are secure against spoofing attacks and make the system robust, while the low dimensionality of the fused vector raises the accuracy of the hybrid multimodal biometric system to 100%, a significant improvement over uni-biometric and other multimodal systems.

Copyright © 2023 Mohammad Hassan Safavipour et al.
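To make the kernel-based fusion idea concrete, the sketch below illustrates it under simplifying assumptions: random placeholder arrays stand in for the five extracted biometric feature vectors, each modality is mapped into an RKHS with an RBF kernel via scikit-learn's KernelPCA, and the reduced projections are concatenated into a single fused template. The modality names, kernel choices, and dimensions are illustrative assumptions, not the authors' exact configuration (which also uses KLDA and quaternion-based KQPCA).

```python
# Sketch: kernel-space feature-level fusion of five biometric traits.
# NOTE: feature dimensions, kernel parameters, and the concatenation rule
# are illustrative assumptions, not the paper's exact method.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
n_samples = 200

# Placeholder feature matrices for the five traits.
modalities = {
    "face":       rng.normal(size=(n_samples, 512)),
    "iris_left":  rng.normal(size=(n_samples, 256)),
    "iris_right": rng.normal(size=(n_samples, 256)),
    "finger_1":   rng.normal(size=(n_samples, 128)),
    "finger_2":   rng.normal(size=(n_samples, 128)),
}

# Map each modality into an RKHS with its own RBF kernel, reduce with KPCA,
# then concatenate the projections to form a low-dimensional fused template.
fused_parts = []
for name, X in modalities.items():
    kpca = KernelPCA(n_components=32, kernel="rbf", gamma=1.0 / X.shape[1])
    fused_parts.append(kpca.fit_transform(X))

fused_template = np.hstack(fused_parts)   # shape: (n_samples, 5 * 32)
print(fused_template.shape)
```

A second sketch, equally hypothetical, outlines the deep-fusion strategy described in the third methodology: each trait's feature vector passes through a small projection branch, the branch outputs are concatenated, and fully connected fusion layers produce identification logits. Layer widths, branch design, and the number of identities are assumptions made for illustration only.

```python
# Sketch: deep feature-level fusion in fully connected layers (PyTorch).
# Layer sizes and the identity count are illustrative assumptions.
import torch
import torch.nn as nn

class DeepFusionNet(nn.Module):
    def __init__(self, dims=(512, 256, 256, 128, 128), fused_dim=128, n_identities=100):
        super().__init__()
        # One small projection branch per biometric trait.
        self.branches = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, 64), nn.ReLU()) for d in dims]
        )
        # Fully connected fusion layers over the concatenated branch outputs.
        self.fusion = nn.Sequential(
            nn.Linear(64 * len(dims), fused_dim),
            nn.ReLU(),
            nn.Linear(fused_dim, n_identities),
        )

    def forward(self, features):
        # features: list of tensors, one per modality, each of shape (batch, dim)
        projected = [branch(x) for branch, x in zip(self.branches, features)]
        return self.fusion(torch.cat(projected, dim=1))

model = DeepFusionNet()
batch = [torch.randn(8, d) for d in (512, 256, 256, 128, 128)]
logits = model(batch)   # shape: (8, n_identities)
print(logits.shape)
```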
