Transfer learning between crop types for semantic segmentation of crops versus weeds in precision agriculture
Author(s) - Petra Bosilj, Erchan Aptoula, Tom Duckett, Grzegorz Cielniak
Publication year - 2020
Publication title - Journal of Field Robotics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.152
H-Index - 96
eISSN - 1556-4967
pISSN - 1556-4959
DOI - 10.1002/rob.21869
Subject(s) - crop, segmentation, agriculture, precision agriculture, agronomy, transfer of learning, artificial intelligence, agricultural engineering, computer science, machine learning, biology, engineering, ecology
Agricultural robots rely on semantic segmentation to distinguish between crops and weeds, enabling selective treatments that increase yield and crop health while reducing the amount of chemicals used. Deep-learning approaches have recently achieved both excellent classification performance and real-time execution. However, these techniques rely on large amounts of training data and a substantial labeling effort, both of which are scarce in precision agriculture. Additional design efforts are required to achieve commercially viable performance levels under varying environmental conditions and crop growth stages. In this paper, we explore the role of knowledge transfer between deep-learning-based classifiers for different crop types, with the goal of reducing the retraining time and labeling effort required for a new crop. We examine the classification performance on three datasets with different crop types and a variety of weeds, and compare the performance and retraining effort required when using data labeled at pixel level with partially labeled data obtained through a less time-consuming procedure of annotating the segmentation output. We show that transfer learning between different crop types is possible and reduces training times by up to 80%. Furthermore, even when the data used for retraining are imperfectly annotated, the classification performance is within 2% of that of networks trained with laboriously annotated pixel-precision data.
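The transfer-learning setup described in the abstract, where a segmentation network trained on one crop is retrained for a new crop with limited labels, can be sketched roughly as below. This is a minimal sketch assuming a PyTorch/torchvision setup; the FCN-ResNet50 model, the checkpoint name source_crop_segnet.pth, the three-class label set, and the fine_tune helper are illustrative assumptions, not the authors' actual configuration.

```python
# Hedged sketch: fine-tune a crop/weed segmentation network, pretrained on a
# source crop, for a new target crop. Model, paths and class count are assumptions.
import torch
import torch.nn as nn
from torchvision.models.segmentation import fcn_resnet50

NUM_CLASSES = 3  # e.g. background / crop / weed (illustrative)

# Start from a network already trained on a source crop (hypothetical checkpoint).
model = fcn_resnet50(num_classes=NUM_CLASSES)
model.load_state_dict(torch.load("source_crop_segnet.pth", map_location="cpu"))

# Freeze the feature-extraction backbone so only the task-specific head is
# retrained on the new crop, which is what shortens retraining.
for param in model.backbone.parameters():
    param.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
# ignore_index lets partially labeled masks mark unannotated pixels (here 255),
# so imperfect annotations can still be used for retraining.
criterion = nn.CrossEntropyLoss(ignore_index=255)

def fine_tune(model, loader, epochs=5, device="cpu"):
    """Simple fine-tuning loop over the new-crop dataset."""
    model.to(device).train()
    for _ in range(epochs):
        for images, masks in loader:  # masks: (B, H, W) integer class maps
            images = images.to(device)
            masks = masks.to(device).long()
            optimizer.zero_grad()
            logits = model(images)["out"]  # (B, C, H, W) per-pixel class scores
            loss = criterion(logits, masks)
            loss.backward()
            optimizer.step()
```

In practice the loader would be a torch.utils.data.DataLoader over the new-crop images and (possibly partial) masks; unfreezing deeper layers trades longer retraining for potentially higher accuracy.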