Open Access
Enhanced Field-Based Detection of Potato Blight in Complex Backgrounds Using Deep Learning
Author(s) - Joe Johnson, Geetanjali Sharma, Srikant Srinivasan, Shyam Kumar Masakapalli, Sanjeev Sharma, Jagdev Sharma, Vijay Kumar Dua
Publication year - 2021
Publication title - Plant Phenomics
Language(s) - English
Resource type - Journals
eISSN - 2097-0374
pISSN - 2643-6515
DOI - 10.34133/2021/9835724
Subject(s) - RGB color model, blight, artificial intelligence, convolutional neural network, computer science, transfer of learning, pattern recognition (psychology), field (mathematics), deep learning, Phytophthora infestans, color space, computer vision, horticulture, mathematics, image (mathematics), biology, pure mathematics
Rapid and automated identification of blight disease in potato will help farmers apply timely remedies to protect their produce. Manual detection of blight can be cumbersome and may require trained experts. To overcome these issues, we present an automated system based on the Mask Region-based Convolutional Neural Network (Mask R-CNN) architecture, with a residual network (ResNet) backbone, for detecting blight disease patches on potato leaves under field conditions. The approach uses transfer learning, which can produce good results even with small datasets. The model was trained on a dataset of 1423 images of potato leaves obtained from fields in different geographical locations and at different times of the day. The images were manually annotated to create over 6200 labeled patches covering diseased and healthy portions of the leaf. The Mask R-CNN model was able to correctly differentiate between diseased patches on the potato leaf and similar-looking background soil patches, which can confound the outcome of binary classification. To improve detection performance, the original RGB dataset was also converted to the HSL, HSV, LAB, XYZ, and YCrCb color spaces. A separate model was created for each color space and tested on 417 field-based test images. This yielded 81.4% mean average precision for the LAB model and 56.9% mean average recall for the HSL model, slightly outperforming the original RGB color space model. Manual analysis of the detection performance indicates an overall precision of 98% on leaf images in a field environment containing complex backgrounds.
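
The transfer-learning setup described in the abstract can be approximated with standard tooling. The sketch below is not the authors' code: it assumes a torchvision-style Mask R-CNN with a COCO-pretrained ResNet-50 FPN backbone, and the class count and hyperparameters are illustrative placeholders for the blight/healthy-patch labels.

```python
# Minimal sketch of fine-tuning a pretrained Mask R-CNN (ResNet-50 FPN backbone)
# for leaf-patch detection. Requires a recent torchvision (>= 0.13 for weights="DEFAULT").
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 3  # assumed: background + healthy-leaf patch + blight patch


def build_model(num_classes: int = NUM_CLASSES):
    # Start from COCO-pretrained weights so a small field dataset is sufficient.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

    # Replace the box classification head for our class count.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    # Replace the mask prediction head as well.
    in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)
    return model


model = build_model()
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad],
    lr=0.005, momentum=0.9, weight_decay=5e-4,
)
# Training then follows the standard torchvision detection recipe:
# in train mode, model(images, targets) returns a dict of losses to backpropagate.
```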
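The color-space comparison can likewise be reproduced with common image tooling. The sketch below is an assumption about implementation, not the published pipeline: it uses OpenCV to re-express a field image in the HSL, HSV, LAB, XYZ, and YCrCb spaces so a separate detector can be trained on each; the file name is illustrative.

```python
# Minimal sketch: convert one field image into the color spaces compared in the paper.
import cv2

# OpenCV reads images as BGR; HLS is OpenCV's name for the HSL color space.
CONVERSIONS = {
    "HSL":   cv2.COLOR_BGR2HLS,
    "HSV":   cv2.COLOR_BGR2HSV,
    "LAB":   cv2.COLOR_BGR2LAB,
    "XYZ":   cv2.COLOR_BGR2XYZ,
    "YCrCb": cv2.COLOR_BGR2YCrCb,
}


def convert_all(image_path: str) -> dict:
    """Return the image re-expressed in each candidate color space."""
    bgr = cv2.imread(image_path)
    return {name: cv2.cvtColor(bgr, code) for name, code in CONVERSIONS.items()}


# Example usage (hypothetical file name): build the LAB variant of one leaf image.
# variants = convert_all("field_leaf_0001.jpg")
# lab_image = variants["LAB"]
```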
