
DDLA: dual deep learning architecture for classification of plant species
Author(s) -
Sundara Sobitha Raj Anubha Pearline,
Vajravelu Sathiesh Kumar
Publication year - 2019
Publication title -
IET Image Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.401
H-Index - 45
eISSN - 1751-9667
pISSN - 1751-9659
DOI - 10.1049/iet-ipr.2019.0346
Subject(s) - artificial intelligence , classifier (uml) , computer science , pattern recognition (psychology) , quadratic classifier , naive bayes classifier , random forest , support vector machine , feature extraction , perceptron , multilayer perceptron , machine learning , linear discriminant analysis , extractor , artificial neural network , engineering , process engineering
Plant species recognition is performed using a dual deep learning architecture (DDLA) approach. DDLA consists of the MobileNet and DenseNet-121 architectures. The feature vectors obtained from the individual architectures are concatenated to form a final feature vector. The extracted features are then classified using machine learning (ML) classifiers such as linear discriminant analysis, multinomial logistic regression (LR), naive Bayes, classification and regression tree, k-nearest neighbour, random forest, bagging and multi-layer perceptron. The datasets considered in the study are three standard datasets (Flavia, Folio, and Swedish Leaf) and a custom-collected dataset (Leaf-12). The MobileNet and DenseNet-121 architectures are also evaluated individually, both as feature extractors and as standalone classifiers. The DDLA architecture with the LR classifier produced the highest accuracies: 98.71, 96.38, 99.41, and 99.39% for the Flavia, Folio, Swedish Leaf, and Leaf-12 datasets, respectively. The observed accuracy for DDLA + LR is higher than that of the other approaches (DDLA + other ML classifiers, MobileNet + ML classifiers, DenseNet-121 + ML classifiers, MobileNet + fully connected layer (FCL), DenseNet-121 + FCL). The DDLA architecture with the LR classifier also achieves this higher accuracy in computation time comparable to that of the other approaches.
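The core DDLA pipeline described in the abstract — extract features from two CNN backbones, concatenate them into one descriptor, then train a conventional ML classifier — can be sketched as below. This is a minimal illustration, not the authors' implementation: random arrays stand in for the global-pooled MobileNet and DenseNet-121 feature vectors (in a real pipeline these would come from pretrained networks, e.g. Keras applications with `pooling='avg'`), the sample count and 1024-dimensional feature size are assumptions for demonstration, and Leaf-12 is taken to have 12 classes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical stand-ins for per-image feature vectors that would come
# from MobileNet and DenseNet-121 (each assumed 1024-d after global
# average pooling); real features would be extracted from leaf images.
rng = np.random.default_rng(0)
n_samples, n_classes = 120, 12  # 12 classes, as in the Leaf-12 dataset
mobilenet_feats = rng.normal(size=(n_samples, 1024))
densenet_feats = rng.normal(size=(n_samples, 1024))
labels = rng.integers(0, n_classes, size=n_samples)

# DDLA step: concatenate the two backbones' features into one
# final 2048-d feature vector per image.
ddla_feats = np.concatenate([mobilenet_feats, densenet_feats], axis=1)

X_train, X_test, y_train, y_test = train_test_split(
    ddla_feats, labels, test_size=0.25, random_state=0)

# Multinomial logistic regression: the classifier reported as
# best-performing on the concatenated DDLA features.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print("concatenated feature dim:", ddla_feats.shape[1])
print("test accuracy:", acc)
```

With placeholder random features the accuracy is near chance; the point of the sketch is the concatenation step, which doubles the feature dimension while letting a lightweight linear classifier draw on both backbones' representations.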