1. Tamim M, Hamim S, Malik S, Mridha M, Mahmood S. InsightNet: A Deep Learning Framework for Enhanced Plant Disease Detection and Explainable Insights. Plant Direct 2025; 9:e70076. PMID: 40330704; PMCID: PMC12050364; DOI: 10.1002/pld3.70076.
Abstract
Sustainable agriculture holds the key to meeting food production requirements for a rapidly growing population without exacerbating environmental degradation. Plant leaf diseases pose a critical threat to crop yield and quality, and existing inspection methods are labor-intensive, prone to human error, and lack support for large-scale agriculture. This research aims to enhance plant health by developing advanced deep learning models for the detection and classification of plant diseases across a variety of species. A deep learning model based on the MobileNet architecture is proposed, with a dedicated design that adds deeper convolutional layers, dropout regularization, and fully connected layers. This yields significant improvements in disease classification for tomato, bean, and chili plants, with accuracy rates of 97.90%, 98.12%, and 97.95%, respectively. Moreover, Grad-CAM is used to shed light on the model's decision-making process. The work contributes to the advancement of precision farming and sustainable agricultural practices, supporting timely and accurate plant disease diagnosis.
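The Grad-CAM step this abstract mentions reduces to a simple computation: weight each convolutional channel by the average gradient of the class score with respect to that channel, sum the weighted activation maps, and rectify. A minimal NumPy sketch of that math (random toy tensors, not the paper's model):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from a conv layer's activations and the
    gradients of the class score w.r.t. those activations.
    Both inputs have shape (H, W, C)."""
    # Channel weights: global-average-pool the gradients.
    weights = gradients.mean(axis=(0, 1))                        # (C,)
    # Weighted sum of activation maps, then ReLU.
    cam = np.maximum((activations * weights).sum(axis=-1), 0.0)  # (H, W)
    # Normalise to [0, 1] for visualisation.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

rng = np.random.default_rng(0)
acts = rng.random((7, 7, 32))            # toy feature maps
grads = rng.standard_normal((7, 7, 32))  # toy gradients
heatmap = grad_cam(acts, grads)
```

In practice the heatmap is upsampled to the input size and overlaid on the leaf image to show which regions drove the disease prediction.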
Affiliation(s)
- Mubasshar U. I. Tamim
- Department of Computer Science and Engineering, American International University–Bangladesh, Dhaka, Bangladesh
- Sultanul A. Hamim
- Department of Computer Science and Engineering, American International University–Bangladesh, Dhaka, Bangladesh
- Sumaiya Malik
- Department of Computer Science and Engineering, American International University–Bangladesh, Dhaka, Bangladesh
- M. F. Mridha
- Department of Computer Science and Engineering, American International University–Bangladesh, Dhaka, Bangladesh
- Sharfuddin Mahmood
- Department of Computer Science and Engineering, American International University–Bangladesh, Dhaka, Bangladesh
2. Zvirin A, Shapira A, Attal E, Gozlan T, Soussan A, De La Vega D, Harush Y, Kimmel R. Identification of non-glandular trichome hairs in cannabis using vision-based deep learning methods. J Forensic Sci 2025. PMID: 40249026; DOI: 10.1111/1556-4029.70058.
Abstract
The detection of cannabis and cannabis-related products is a critical task for forensic laboratories and law enforcement agencies, given their harmful effects. Forensic laboratories analyze large quantities of plant material annually to identify genuine cannabis and its illicit substitutes. Ensuring accurate identification is essential for supporting judicial proceedings and combating drug-related crimes. The naked eye alone cannot distinguish between genuine cannabis and non-cannabis plant material that has been sprayed with synthetic cannabinoids, especially after distribution into the market. Reliable forensic identification typically requires two colorimetric tests (Duquenois-Levine and Fast Blue BB), as well as an expert's microscopic test to confirm or rule out cannabis hairs (non-glandular trichomes), making the process time-consuming and resource-intensive. Here, we propose a novel deep learning-based computer vision method for identifying non-glandular trichome hairs in cannabis. A dataset of several thousand annotated microscope images was collected, including genuine cannabis and non-cannabis plant material apparently sprayed with synthetic cannabinoids. Ground-truth labels were established using three forensic tests (two chemical assays and expert microscopic analysis), ensuring reliable classification. The proposed method demonstrated an accuracy exceeding 97% in distinguishing cannabis from non-cannabis plant material. These results suggest that deep learning can reliably identify non-glandular trichome hairs in cannabis based on microscopic trichome features, potentially reducing reliance on costly and time-consuming expert microscopic analysis. This framework provides forensic departments and law enforcement agencies with an efficient and accurate tool for identifying non-glandular trichome hairs in cannabis, supporting efforts to combat illicit drug trafficking.
Affiliation(s)
- Alon Zvirin
- Computer Science Department, Technion - Israel Institute of Technology, Haifa, Israel
- Amitzur Shapira
- The Division of Forensic Sciences, National Police Headquarters, Jerusalem, Israel
- Emma Attal
- Computer Science Department, Technion - Israel Institute of Technology, Haifa, Israel
- Tamar Gozlan
- Computer Science Department, Technion - Israel Institute of Technology, Haifa, Israel
- Arthur Soussan
- Computer Science Department, Technion - Israel Institute of Technology, Haifa, Israel
- Dafna De La Vega
- The Division of Forensic Sciences, National Police Headquarters, Jerusalem, Israel
- Yehudit Harush
- The Division of Forensic Sciences, National Police Headquarters, Jerusalem, Israel
- Ron Kimmel
- Computer Science Department, Technion - Israel Institute of Technology, Haifa, Israel
- Faculty of Electrical and Computer Engineering, Technion - Israel Institute of Technology, Haifa, Israel
3. Naqvi SAF, Khan MA, Hamza A, Alsenan S, Alharbi M, Teng S, Nam Y. Fruit and vegetable leaf disease recognition based on a novel custom convolutional neural network and shallow classifier. Frontiers in Plant Science 2024; 15:1469685. PMID: 39403618; PMCID: PMC11471556; DOI: 10.3389/fpls.2024.1469685.
Abstract
Fruits and vegetables are among the most nutrient-dense cash crops worldwide, and diagnosing their diseases is a key challenge in maintaining agricultural products. Because diseases are similar in colour, texture, and shape, they are difficult to recognize manually; the process is also time-consuming and requires an expert. To address these challenges, we propose a novel deep learning and optimization framework for apple and cucumber leaf disease classification. In the proposed framework, a hybrid contrast enhancement technique based on Bi-LSTM and haze reduction highlights the diseased part of the image. After that, two custom models, named Bottleneck Residual with Self-Attention (BRwSA) and Inverted Bottleneck Residual with Self-Attention (IBRwSA), are proposed and trained on the selected datasets. After training, testing images are employed, and deep features are extracted from the self-attention layer. The extracted deep features are fused using a concatenation approach and then optimized in the next step with an improved human learning optimization algorithm, whose purpose is to improve classification accuracy and reduce testing time. The selected features are finally classified using a shallow wide neural network (SWNN) classifier. In addition, both trained models are interpreted using an explainable AI technique, LIME, which makes it easy to interpret the internal strength of both models for apple and cucumber leaf disease classification and identification. A detailed experimental process was conducted on both the Apple and Cucumber datasets, on which the proposed framework obtained accuracies of 94.8% and 94.9%, respectively. A comparison with several state-of-the-art techniques showed improved performance for the proposed framework.
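The fusion-then-selection step described above (concatenate deep features from the two models, then keep a subset chosen by an optimizer) can be illustrated in a few lines. Here the binary selection mask is simply given, standing in for the improved human-learning optimization algorithm, and the feature vectors are toy values:

```python
import numpy as np

def fuse_and_select(feat_a, feat_b, mask):
    """Concatenate two deep-feature vectors, then keep only the
    features flagged by a binary selection mask (the paper searches
    for such a mask with an optimizer; here it is given)."""
    fused = np.concatenate([feat_a, feat_b])
    return fused[mask.astype(bool)]

a = np.arange(4, dtype=float)      # features from BRwSA (toy values)
b = np.arange(4, 8, dtype=float)   # features from IBRwSA (toy values)
mask = np.array([1, 0, 1, 0, 1, 0, 1, 0])
selected = fuse_and_select(a, b, mask)
```

The reduced vector is what would be passed to the shallow wide neural network classifier, cutting testing time roughly in proportion to the features dropped.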
Affiliation(s)
- Muhammad Attique Khan
- Department of Artificial Intelligence, College of Computer Engineering and Science, Prince Mohammad Bin Fahd University, Al Khobar, Saudi Arabia
- Ameer Hamza
- Department of Computer Science, HITEC University, Taxila, Pakistan
- Shrooq Alsenan
- Information Systems Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Meshal Alharbi
- Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Alkharj, Saudi Arabia
- Sokea Teng
- Department of ICT Convergence, Soonchunhyang University, Asan, Republic of Korea
- Yunyoung Nam
- Department of ICT Convergence, Soonchunhyang University, Asan, Republic of Korea
4. Nourinejhad Zarghani S, Monavari M, Nourinejhad Zarghani A, Nouri S, Ehlers J, Hamacher J, Bandte M, Büttner C. Quantifying Plant Viruses: Evolution from Bioassay to Infectivity Dilution Curves along the Model of Tobamoviruses. Viruses 2024; 16:440. PMID: 38543805; PMCID: PMC10974926; DOI: 10.3390/v16030440.
Abstract
This review describes the development of the bioassay as a means of quantifying plant viruses, with particular attention to tobamoviruses. It examines various models used to establish a correlation between virus particle concentration and the number of induced local lesions (the infectivity dilution curve), including the Poisson, Furumoto and Mickey, Kleczkowski, growth-curve, and modified Poisson models. The parameters of each model are described, and their application and performance are explored in the context of tobacco mosaic virus. This overview highlights the enduring value of the infectivity dilution curve in tobamovirus quantification, providing insights for researchers, bioassay practitioners, and modelers.
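The Poisson model named above treats each susceptible site on a leaf as infected when at least one infectious particle lands on it, so the expected lesion count at relative concentration c is n(c) = N(1 - e^(-kc)). A small sketch with illustrative N and k (not values from the review), computed over a two-fold dilution series:

```python
import math

def expected_lesions(n_sites, k, concentration):
    """Poisson model of the infectivity dilution curve: each of
    n_sites susceptible sites is infected with probability
    1 - exp(-k * concentration).  n_sites and k are illustrative
    values, not parameters taken from the review."""
    return n_sites * (1.0 - math.exp(-k * concentration))

# A two-fold dilution series of the inoculum.
series = [1.0 / 2 ** i for i in range(6)]
lesions = [expected_lesions(100, 0.8, c) for c in series]
```

At high dilution the curve becomes nearly linear in concentration (n ≈ Nkc), which is why lesion counts from dilute inocula are the most informative for estimating infectivity.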
Affiliation(s)
- Shaheen Nourinejhad Zarghani
- Division Phytomedicine, Faculty of Life Sciences, Albrecht Daniel Thaer-Institute of Agricultural and Horticultural Sciences, Humboldt-Universität zu Berlin, Lentzeallee 55–57, 14197 Berlin, Germany
- Mehran Monavari
- Section eScience, Federal Institute for Materials Research and Testing, Unter den Eichen 87, 12205 Berlin, Germany
- Amin Nourinejhad Zarghani
- School of Mechanical Engineering, Hamburg University of Technology, Eissendorfer Str. 38, 21073 Hamburg, Germany
- Sahar Nouri
- Division Phytomedicine, Faculty of Life Sciences, Albrecht Daniel Thaer-Institute of Agricultural and Horticultural Sciences, Humboldt-Universität zu Berlin, Lentzeallee 55–57, 14197 Berlin, Germany
- Jens Ehlers
- Division Phytomedicine, Faculty of Life Sciences, Albrecht Daniel Thaer-Institute of Agricultural and Horticultural Sciences, Humboldt-Universität zu Berlin, Lentzeallee 55–57, 14197 Berlin, Germany
- Menno Chemie Vertrieb GmbH, Langer Kamp 104, 22850 Norderstedt, Germany
- Joachim Hamacher
- Institute of Crop Science and Resource Conservation (INRES)—Plant Pathology, Universität Bonn, Nussallee 9, 53115 Bonn, Germany
- Martina Bandte
- Division Phytomedicine, Faculty of Life Sciences, Albrecht Daniel Thaer-Institute of Agricultural and Horticultural Sciences, Humboldt-Universität zu Berlin, Lentzeallee 55–57, 14197 Berlin, Germany
- Carmen Büttner
- Division Phytomedicine, Faculty of Life Sciences, Albrecht Daniel Thaer-Institute of Agricultural and Horticultural Sciences, Humboldt-Universität zu Berlin, Lentzeallee 55–57, 14197 Berlin, Germany
5. Ngugi HN, Ezugwu AE, Akinyelu AA, Abualigah L. Revolutionizing crop disease detection with computational deep learning: a comprehensive review. Environmental Monitoring and Assessment 2024; 196:302. PMID: 38401024; PMCID: PMC10894121; DOI: 10.1007/s10661-024-12454-z.
Abstract
Digital image processing has witnessed a significant transformation, owing to the adoption of deep learning (DL) algorithms, which have proven to be vastly superior to conventional methods for crop detection. These DL algorithms have recently found successful applications across various domains, translating input data, such as images of afflicted plants, into valuable insights, like the identification of specific crop diseases. This innovation has spurred the development of cutting-edge techniques for early detection and diagnosis of crop diseases, leveraging tools such as convolutional neural networks (CNN), K-nearest neighbour (KNN), support vector machines (SVM), and artificial neural networks (ANN). This paper offers an all-encompassing exploration of the contemporary literature on methods for diagnosing, categorizing, and gauging the severity of crop diseases. The review examines the performance analysis of the latest machine learning (ML) and DL techniques outlined in these studies. It also scrutinizes the methodologies and datasets and outlines the prevalent recommendations and identified gaps within different research investigations. In conclusion, the review offers insights into potential solutions and outlines the direction for future research in this field. The review underscores that while most studies have concentrated on traditional ML algorithms and CNN, there has been a noticeable dearth of focus on emerging DL algorithms like capsule neural networks and vision transformers. Furthermore, it sheds light on the fact that several datasets employed for training and evaluating DL models have been tailored to suit specific crop types, emphasizing the pressing need for a comprehensive and expansive image dataset encompassing a wider array of crop varieties. Moreover, the survey draws attention to the prevailing trend where the majority of research endeavours have concentrated on individual plant diseases, ML, or DL algorithms. In light of this, it advocates for the development of a unified framework that harnesses an ensemble of ML and DL algorithms to address the complexities of multiple plant diseases effectively.
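The ensemble of ML and DL algorithms this review advocates could be as simple as majority voting over per-model predictions. A minimal sketch with made-up classifier outputs (the model names and labels are illustrative only):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model predictions sample-by-sample by majority
    vote; ties go to the most-common-first label returned by
    Counter.  A minimal stand-in for the unified ML/DL ensemble
    the review calls for."""
    combined = []
    for labels in zip(*predictions):
        combined.append(Counter(labels).most_common(1)[0][0])
    return combined

# Hypothetical per-sample outputs from three classifiers.
svm = ["rust", "blight", "healthy"]
knn = ["rust", "healthy", "healthy"]
cnn = ["rust", "blight", "blight"]
ensemble = majority_vote([svm, knn, cnn])
```

Weighted voting or stacking would follow the same shape, replacing the Counter with per-model confidence scores.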
Affiliation(s)
- Habiba N Ngugi
- School of Mathematics, Statistics, and Computer Science, University of KwaZulu-Natal, King Edward Avenue, Pietermaritzburg, KwaZulu-Natal, 3201, South Africa
- Absalom E Ezugwu
- Unit for Data Science and Computing, North-West University, 11 Hoffman Street, Potchefstroom, 2520, South Africa
- Andronicus A Akinyelu
- Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa
- Laith Abualigah
- Artificial Intelligence and Sensing Technologies (AIST) Research Center, University of Tabuk, Tabuk, 71491, Saudi Arabia
- Computer Science Department, Al al-Bayt University, Mafraq, 25113, Jordan
- Hourani Center for Applied Scientific Research, Al-Ahliyya Amman University, Amman, 19328, Jordan
- MEU Research Unit, Middle East University, Amman, 11831, Jordan
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos, 13-5053, Lebanon
- School of Engineering and Technology, Sunway University Malaysia, Petaling Jaya, 27500, Malaysia
- Applied Science Research Center, Applied Science Private University, Amman, 11931, Jordan
- College of Engineering, Yuan Ze University, Taoyuan, Taiwan
6. Li J, Zhao X, Xu H, Zhang L, Xie B, Yan J, Zhang L, Fan D, Li L. An Interpretable High-Accuracy Method for Rice Disease Detection Based on Multisource Data and Transfer Learning. Plants (Basel) 2023; 12:3273. PMID: 37765436; PMCID: PMC10534448; DOI: 10.3390/plants12183273.
Abstract
With the evolution of modern agriculture and precision farming, the efficient and accurate detection of crop diseases has emerged as a pivotal research focus. In this study, an interpretable, high-precision rice disease detection method integrating multisource data and transfer learning is introduced. This approach harnesses diverse data types, including imagery, climatic conditions, and soil attributes, facilitating enriched information extraction and enhanced detection accuracy. The incorporation of transfer learning bestows the model with robust generalization capabilities, enabling rapid adaptation to varying agricultural environments. Moreover, the interpretability of the model ensures transparency in its decision-making processes, garnering trust for real-world applications. Experimental outcomes demonstrate superior performance of the proposed method on multiple datasets when juxtaposed against advanced deep learning models and traditional machine learning techniques. Collectively, this research offers a novel perspective and toolkit for agricultural disease detection, laying a solid foundation for the future advancement of agriculture.
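One common way to combine the data types this abstract lists (image embeddings, climate, soil) is to normalize each tabular block and concatenate everything into a single input vector. A hypothetical sketch; the field names and values are illustrative, not from the paper:

```python
import numpy as np

def build_feature_vector(image_feats, climate, soil):
    """Assemble one input vector from image embeddings plus climate
    and soil measurements, z-scoring each tabular block so no single
    source dominates.  All names and values are illustrative."""
    def zscore(x):
        x = np.asarray(x, dtype=float)
        sd = x.std()
        return (x - x.mean()) / sd if sd > 0 else x - x.mean()
    return np.concatenate([np.asarray(image_feats, dtype=float),
                           zscore(climate), zscore(soil)])

vec = build_feature_vector(
    image_feats=[0.12, 0.87, 0.33],   # CNN embedding (already scaled)
    climate=[28.5, 0.81, 210.0],      # temperature, humidity, rainfall
    soil=[6.4, 0.35],                 # pH, moisture fraction
)
```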
Affiliation(s)
- Jiaqi Li
- China Agricultural University, Beijing 100083, China
- Xinyan Zhao
- China Agricultural University, Beijing 100083, China
- Hening Xu
- China Agricultural University, Beijing 100083, China
- Liman Zhang
- China Agricultural University, Beijing 100083, China
- Boyu Xie
- China Agricultural University, Beijing 100083, China
- Jin Yan
- China Agricultural University, Beijing 100083, China
- Dongchen Fan
- School of Computer Science and Engineering, Beihang University, Beijing 100191, China
- Lin Li
- China Agricultural University, Beijing 100083, China
7. Wu X, Deng H, Wang Q, Lei L, Gao Y, Hao G. Meta-learning shows great potential in plant disease recognition under few available samples. The Plant Journal 2023; 114:767-782. PMID: 36883481; DOI: 10.1111/tpj.16176.
Abstract
Plant diseases worsen the threat of food shortages as the global population grows, and disease recognition is the basis for effective prevention and control of plant diseases. Deep learning has made significant breakthroughs in the field of plant disease recognition. Compared with traditional deep learning, meta-learning can maintain more than 90% accuracy in disease recognition with small samples. However, there is no comprehensive review of the application of meta-learning to plant disease recognition. Here, we summarize the functions, advantages, and limitations of meta-learning research methods and their applications for plant disease recognition in few-data scenarios. Finally, we outline several research avenues for utilizing current and future meta-learning in plant science. This review may help plant science researchers obtain faster, more accurate, and more credible solutions through deep learning with fewer labeled samples.
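One common meta-learning recipe for the few-sample regime discussed here is the prototypical-network approach: average each class's few support embeddings into a prototype and assign a query to the nearest one. A toy NumPy sketch with synthetic 2-D embeddings (not a specific method from this review):

```python
import numpy as np

def prototype_classify(support, support_labels, query):
    """Prototypical-network style few-shot classification: each
    class prototype is the mean of its support embeddings; a query
    is assigned to the nearest prototype (Euclidean distance)."""
    classes = sorted(set(support_labels))
    protos = {c: support[[i for i, y in enumerate(support_labels)
                          if y == c]].mean(axis=0)
              for c in classes}
    dists = {c: np.linalg.norm(query - p) for c, p in protos.items()}
    return min(dists, key=dists.get)

# Two classes, two support samples each (synthetic embeddings).
support = np.array([[0.0, 0.0], [0.2, 0.1],    # "healthy"
                    [5.0, 5.0], [4.8, 5.2]])   # "diseased"
labels = ["healthy", "healthy", "diseased", "diseased"]
pred = prototype_classify(support, labels, np.array([0.1, 0.3]))
```

In a real system the embeddings would come from a network trained episodically across many tasks, which is what lets the approach generalize from so few labeled leaves.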
Affiliation(s)
- Xue Wu
- National Key Laboratory of Green Pesticide, Key Laboratory of Green Pesticide and Agricultural Bioengineering, Ministry of Education, Center for Research and Development of Fine Chemicals, State Key Laboratory of Public Big Data, Guizhou University, Guiyang, 550025, Guizhou, China
- Hongyu Deng
- National Key Laboratory of Green Pesticide, Key Laboratory of Green Pesticide and Agricultural Bioengineering, Ministry of Education, Center for Research and Development of Fine Chemicals, State Key Laboratory of Public Big Data, Guizhou University, Guiyang, 550025, Guizhou, China
- Qi Wang
- National Key Laboratory of Green Pesticide, Key Laboratory of Green Pesticide and Agricultural Bioengineering, Ministry of Education, Center for Research and Development of Fine Chemicals, State Key Laboratory of Public Big Data, Guizhou University, Guiyang, 550025, Guizhou, China
- Liang Lei
- School of Physics & Optoelectronic Engineering, Guangdong University of Technology, Guangzhou, 550000, China
- Yangyang Gao
- National Key Laboratory of Green Pesticide, Key Laboratory of Green Pesticide and Agricultural Bioengineering, Ministry of Education, Center for Research and Development of Fine Chemicals, State Key Laboratory of Public Big Data, Guizhou University, Guiyang, 550025, Guizhou, China
- Gefei Hao
- National Key Laboratory of Green Pesticide, Key Laboratory of Green Pesticide and Agricultural Bioengineering, Ministry of Education, Center for Research and Development of Fine Chemicals, State Key Laboratory of Public Big Data, Guizhou University, Guiyang, 550025, Guizhou, China
8. Long M, Hartley M, Morris RJ, Brown JKM. Classification of wheat diseases using deep learning networks with field and glasshouse images. Plant Pathology 2023; 72:536-547. PMID: 38516179; PMCID: PMC10953319; DOI: 10.1111/ppa.13684.
Abstract
Crop diseases can cause major yield losses, so the ability to detect and identify them in their early stages is important for disease control. Deep learning methods have shown promise in classifying multiple diseases; however, many studies use datasets that do not represent real field conditions, which either necessitates further image processing or limits their applicability. In this paper, we present a dataset of wheat images taken in real growth situations, including both field and glasshouse conditions, with five categories: healthy plants and four foliar diseases (yellow rust, brown rust, powdery mildew, and Septoria leaf blotch). This dataset was used to train a deep learning model. The resulting model, named CerealConv, reached 97.05% classification accuracy. When tested against trained pathologists on a subset of images from the larger dataset, the model delivered an accuracy score 2% higher than the best-performing pathologist. Image masks were used to show that the model was using the correct information to drive its classifications. These results show that deep learning networks are a viable tool for disease detection and classification in the field, and disease quantification is a logical next step.
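Mask-based checks like the one described (verifying the model relies on the correct image regions) are often implemented as occlusion sensitivity: blank out each patch in turn and record how much the score drops. A toy sketch in which a linear scoring function stands in for the trained network:

```python
import numpy as np

def occlusion_map(image, score_fn, patch=2):
    """Occlusion sensitivity: zero out each patch and record the
    drop in the model score.  Large drops mark regions the model
    depends on.  score_fn is any callable image -> float."""
    h, w = image.shape
    base = score_fn(image)
    drops = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            drops[i // patch, j // patch] = base - score_fn(occluded)
    return drops

# Toy "model": the score depends on a single pixel.
weights = np.zeros((4, 4))
weights[0, 0] = 1.0
score = lambda img: float((img * weights).sum())
dmap = occlusion_map(np.ones((4, 4)), score)
```

Only the patch containing the decisive pixel shows a score drop, which is exactly the check that separates a model attending to lesions from one exploiting background cues.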
Affiliation(s)
- Megan Long
- Department of Crop Genetics, John Innes Centre, Norwich, UK
- Matthew Hartley
- Department of Computational and Systems Biology, John Innes Centre, Norwich, UK
- Present address: European Molecular Biology Laboratory, European Bioinformatics Institute, Hinxton, UK
- Richard J. Morris
- Department of Computational and Systems Biology, John Innes Centre, Norwich, UK
9. Mzoughi O, Yahiaoui I. Deep learning-based segmentation for disease identification. Ecological Informatics 2023. DOI: 10.1016/j.ecoinf.2023.102000.
10. Nawaz M, Nazir T, Javed A, Masood M, Rashid J, Kim J, Hussain A. A robust deep learning approach for tomato plant leaf disease localization and classification. Scientific Reports 2022; 12:18568. PMID: 36329073; PMCID: PMC9633769; DOI: 10.1038/s41598-022-21498-5.
Abstract
Detecting and classifying tomato plant diseases at the earliest stage can save farmers from expensive crop sprays and help increase food quantity. Although extensive work has been presented on tomato plant disease classification, the timely localization and identification of the various tomato leaf diseases remain a complex job because of the strong similarity between the healthy and affected portions of plant leaves. Furthermore, the low contrast between the background and foreground of the suspected sample further complicates detection. To deal with these challenges, we present a robust deep learning (DL)-based approach, a ResNet-34-based Faster-RCNN, for tomato plant leaf disease classification. The proposed method includes three basic steps. First, we generate annotations of the suspected images to specify the region of interest (RoI). Next, we introduce ResNet-34, together with a Convolutional Block Attention Module (CBAM), as the feature-extractor module of Faster-RCNN to extract deep key points. Finally, the calculated features are used to train the Faster-RCNN model to locate and categorize the numerous tomato plant leaf anomalies. We tested the presented work on an accessible standard database, the PlantVillage Kaggle dataset, and obtained mAP and accuracy values of 0.981 and 99.97%, respectively, with a test time of 0.23 s. Both qualitative and quantitative results confirm that the presented solution is robust to plant leaf disease detection and can replace manual systems. Moreover, the proposed method is a low-cost solution that is robust to several image transformations, such as variations in the size, color, and orientation of the diseased leaf portion, and it can locate affected leaves under blurring, noise, chrominance, and brightness variations. The reported results confirm that our approach is robust for classifying several tomato leaf diseases under varying image-capturing conditions. In the future, we plan to extend the approach to other parts of plants.
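The mAP figure reported above rests on intersection-over-union (IoU) matching between predicted and ground-truth boxes. The standard formula, written out in plain Python (not the authors' code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2).  IoU thresholds (commonly 0.5) decide whether
    a detection counts as a true positive when computing mAP."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter) if inter else 0.0

overlap = iou((0, 0, 2, 2), (1, 1, 3, 3))  # two partially overlapping boxes
```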
Affiliation(s)
- Marriam Nawaz
- Department of Computer Science, University of Engineering and Technology Taxila, Taxila, 47050, Pakistan
- Department of Software Engineering, University of Engineering and Technology Taxila, Taxila, 47050, Pakistan
- Tahira Nazir
- Faculty of Computing, Riphah International University, Islamabad, Pakistan
- Ali Javed
- Department of Software Engineering, University of Engineering and Technology Taxila, Taxila, 47050, Pakistan
- Momina Masood
- Department of Computer Science, University of Engineering and Technology Taxila, Taxila, 47050, Pakistan
- Junaid Rashid
- Department of Computer Science and Engineering, Kongju National University, Cheonan, 31080, South Korea
- Jungeun Kim
- Department of Computer Science and Engineering, Kongju National University, Cheonan, 31080, South Korea
- Department of Software, Kongju National University, Cheonan, 31080, South Korea
- Amir Hussain
- Centre of AI and Data Science, Edinburgh Napier University, Edinburgh, EH11 4DY, UK
11. Saleem MH, Potgieter J, Arif KM. A weight optimization-based transfer learning approach for plant disease detection of New Zealand vegetables. Frontiers in Plant Science 2022; 13:1008079. PMID: 36388538; PMCID: PMC9641257; DOI: 10.3389/fpls.2022.1008079.
Abstract
Deep learning (DL) is an effective approach to identifying plant diseases. Among several DL-based techniques, transfer learning (TL) produces significant results in terms of improved accuracy. However, the usefulness of TL has not yet been explored using weights optimized from agricultural datasets. Furthermore, the detection of plant diseases in different organs of various vegetables has not yet been performed using a trained/optimized DL model. Moreover, the presence/detection of multiple diseases in vegetable organs has not yet been investigated. To address these research gaps, a new dataset named NZDLPlantDisease-v2 has been collected for New Zealand vegetables. The dataset includes 28 healthy and defective organs of beans, broccoli, cabbage, cauliflower, kumara, peas, potato, and tomato. This paper presents a transfer learning method that optimizes weights obtained through agricultural datasets for better outcomes in plant disease identification. First, several DL architectures are compared to obtain the best-suited model, and then, data augmentation techniques are applied. The Faster Region-based Convolutional Neural Network (RCNN) Inception ResNet-v2 attained the highest mean average precision (mAP) compared to the other DL models including different versions of Faster RCNN, Single-Shot Multibox Detector (SSD), Region-based Fully Convolutional Networks (RFCN), RetinaNet, and EfficientDet. Next, weight optimization is performed on datasets including PlantVillage, NZDLPlantDisease-v1, and DeepWeeds using image resizers, interpolators, initializers, batch normalization, and DL optimizers. Updated/optimized weights are then used to retrain the Faster RCNN Inception ResNet-v2 model on the proposed dataset. Finally, the results are compared with the model trained/optimized using a large dataset, such as Common Objects in Context (COCO). The final mAP improves by 9.25% and is found to be 91.33%. Moreover, the robustness of the methodology is demonstrated by testing the final model on an external dataset and using the stratified k-fold cross-validation method.
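The stratified k-fold validation mentioned above can be sketched from scratch: deal each class's sample indices round-robin into k folds so every fold keeps roughly the class proportions of the full set. The labels below are illustrative only, not from the paper's dataset:

```python
from collections import defaultdict

def stratified_kfold(labels, k):
    """Stratified k-fold assignment: returns k lists of sample
    indices, each preserving the overall class proportions.
    A from-scratch sketch of the validation scheme, not the
    paper's actual implementation."""
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        # Round-robin deal keeps each class evenly spread.
        for pos, idx in enumerate(indices):
            folds[pos % k].append(idx)
    return folds

labels = ["healthy"] * 6 + ["diseased"] * 3
folds = stratified_kfold(labels, 3)  # each fold: 2 healthy, 1 diseased
```

Each fold then serves once as the held-out test set while the model trains on the rest, so the reported mAP is not an artifact of one lucky split.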
Affiliation(s)
- Muhammad Hammad Saleem
- Department of Mechanical and Electrical Engineering, School of Food and Advanced Technology, Massey University, Auckland, New Zealand
- Johan Potgieter
- Massey AgriFood Digital Lab, Massey University, Palmerston North, New Zealand
- Khalid Mahmood Arif
- Department of Mechanical and Electrical Engineering, School of Food and Advanced Technology, Massey University, Auckland, New Zealand
12. Franchetti B, Pirri F. Detection and Localization of Tip-Burn on Large Lettuce Canopies. Frontiers in Plant Science 2022; 13:874035. PMID: 35646012; PMCID: PMC9133957; DOI: 10.3389/fpls.2022.874035.
Abstract
Recent years have seen an increased effort to detect plant stresses and diseases using non-invasive sensors and deep learning methods. Nonetheless, no studies have addressed dense plant canopies, owing to the difficulty of automatically zooming into each plant, especially in outdoor conditions. Zooming in and out is necessary to focus on the plant stress and to precisely localize it within the canopy for further analysis and intervention. This work concentrates on tip-burn, a plant stress affecting lettuce grown in controlled environmental conditions, such as plant factories. We present a new method for tip-burn stress detection and localization that combines classification and self-supervised segmentation to detect, localize, and closely segment the stressed regions. Starting with images of a dense canopy of about 1,000 plants, the proposed method is able to zoom into the tip-burn region of a single plant, covering less than 1/10th of the plant itself. The method is crucial for replacing the manual phenotyping that is required in plant factories. The precise localization of the stress within the plant, of the plant within the tray, and of the tray within the table canopy makes it possible to automatically deliver statistics and causal annotations. We tested our method on different datasets that provide no ground-truth segmentation masks, for either the leaves or the stresses, which makes the self-supervised segmentation results all the more notable. Results show that both the classification and the self-supervised segmentation are accurate and effective. Finally, the dataset used for training, testing, and validation is currently available on demand.
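The localization step described here (zooming from the canopy to the stressed region of one plant) ultimately needs a bounding box around high-stress pixels. A toy sketch where a synthetic per-pixel score map stands in for the segmentation output:

```python
import numpy as np

def stress_bbox(score_map, threshold):
    """Threshold a per-pixel stress score and return the bounding
    box of the flagged pixels as (row_min, row_max, col_min,
    col_max), or None if nothing exceeds the threshold.  The score
    map stands in for a segmentation model's output."""
    mask = score_map > threshold
    if not mask.any():
        return None
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    return int(rows[0]), int(rows[-1]), int(cols[0]), int(cols[-1])

scores = np.zeros((6, 6))
scores[2:4, 3:5] = 0.9            # a small synthetic "tip-burn" patch
box = stress_bbox(scores, 0.5)
```

Composing such boxes hierarchically (stress within plant, plant within tray, tray within table) gives exactly the kind of positional statistics the paper automates.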
Collapse
Affiliation(s)
| | - Fiora Pirri
- Alcor Lab, DIAG, Sapienza University of Rome, Rome, Italy
- Deep Plants, Rome, Italy
| |
Collapse
|
13
|
Saleem MH, Velayudhan KK, Potgieter J, Arif KM. Weed Identification by Single-Stage and Two-Stage Neural Networks: A Study on the Impact of Image Resizers and Weights Optimization Algorithms. FRONTIERS IN PLANT SCIENCE 2022; 13:850666. [PMID: 35548295 PMCID: PMC9083231 DOI: 10.3389/fpls.2022.850666] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/08/2022] [Accepted: 03/11/2022] [Indexed: 06/15/2023]
Abstract
The accurate identification of weeds is an essential step for a site-specific weed management system. In recent years, deep learning (DL) has advanced rapidly in performing complex agricultural tasks. Previous studies emphasized evaluating advanced training techniques or modifying well-known DL models to improve overall accuracy. In contrast, this research attempted to improve the mean average precision (mAP) for the detection and classification of eight classes of weeds by proposing a novel DL-based methodology. First, a comprehensive analysis of single-stage and two-stage neural networks, including Single-shot MultiBox Detector (SSD), You Only Look Once (YOLO-v4), EfficientDet, CenterNet, RetinaNet, Faster Region-based Convolutional Neural Network (RCNN), and Region-based Fully Convolutional Network (RFCN), was performed. Next, the effects of image resizing techniques along with four image interpolation methods were studied. This led to the final stage of the research: optimizing the weights of the best-acquired model through initialization techniques, batch normalization, and DL optimization algorithms. The proposed pipeline achieved a high mAP of 93.44%, validated by stratified k-fold cross-validation, a 5.8% improvement over the results obtained with the default settings of the best-suited DL architecture (Faster RCNN ResNet-101). The presented pipeline can serve as a baseline for the research community to explore tasks such as real-time detection and reducing computation/training time. All relevant data, including the annotated dataset, configuration files, and the inference graph of the final model, are provided with this article. Furthermore, the selection of the DeepWeeds dataset demonstrates the robustness and practicality of the study, because it contains images collected in a real, complex agricultural environment. Therefore, this research is a considerable step toward an efficient and automatic weed control system.
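The stratified k-fold validation used above can be illustrated with a minimal sketch (synthetic labels and features stand in for the weed dataset; all variable names are hypothetical). The point is that each validation fold preserves the overall class distribution, which matters for imbalanced multi-class data:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Toy labels for an imbalanced 8-class problem; features are a stand-in
# for image-derived feature vectors.
rng = np.random.default_rng(0)
y = rng.integers(0, 8, size=400)
X = rng.normal(size=(400, 16))

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
fold_class_fractions = []
for train_idx, val_idx in skf.split(X, y):
    # Class fractions inside each validation fold.
    _, counts = np.unique(y[val_idx], return_counts=True)
    fold_class_fractions.append(counts / len(val_idx))

overall = np.bincount(y) / len(y)
# Per-fold class fractions stay close to the overall distribution.
max_dev = max(abs(f - overall).max() for f in fold_class_fractions)
print(round(max_dev, 3))
```

In a plain (unstratified) split, rare classes can be over- or under-represented in a fold, inflating the variance of the reported mAP.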
Collapse
Affiliation(s)
- Muhammad Hammad Saleem
- Department of Mechanical and Electrical Engineering, School of Food and Advanced Technology, Massey University, Auckland, New Zealand
| | - Kesini Krishnan Velayudhan
- Department of Mechanical and Electrical Engineering, School of Food and Advanced Technology, Massey University, Auckland, New Zealand
| | - Johan Potgieter
- Massey AgriFood Digital Lab, Massey University, Palmerston North, New Zealand
| | - Khalid Mahmood Arif
- Department of Mechanical and Electrical Engineering, School of Food and Advanced Technology, Massey University, Auckland, New Zealand
| |
Collapse
|
14
|
FF-PCA-LDA: Intelligent Feature Fusion Based PCA-LDA Classification System for Plant Leaf Diseases. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12073514] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
Crop leaf disease management and control have a significant impact on yield and quality, which must meet consumer needs. For smart agriculture, an intelligent leaf disease identification system is indispensable for efficient crop health monitoring. With this in mind, a novel approach is proposed for crop disease identification using feature fusion and PCA-LDA classification (FF-PCA-LDA). Handcrafted hybrid and deep features are extracted from RGB images; TL-ResNet50 is used to extract the deep features. A fused feature vector is obtained by combining the handcrafted hybrid and deep features. After fusion, PCA is employed to select the most discriminant features for LDA model development. Potato crop leaf disease identification serves as a case study for validating the approach. The developed system is experimentally validated on a potato leaf benchmark dataset and achieves a high accuracy of 98.20% on an unseen test set that was not used during model training. Performance comparison with other approaches shows its superiority. Owing to its better discrimination and learning ability, the proposed approach dispenses with the leaf segmentation step. The developed approach may be used as an automated tool for crop monitoring and management control, and can be extended to other crop types.
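The PCA-then-LDA stage described in the abstract can be sketched as follows, with randomly generated vectors standing in for the fused handcrafted-plus-deep features (dataset, dimensions, and component counts are illustrative assumptions, not the paper's settings):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Stand-in for fused handcrafted + deep feature vectors, one per image.
X, y = make_classification(n_samples=300, n_features=64, n_informative=12,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# PCA reduces the fused vector to its most discriminant directions,
# then LDA acts as the classifier on the reduced representation.
clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(round(acc, 2))
```

Fitting PCA only on the training split, as the pipeline does, avoids leaking information from the held-out test set into the dimensionality reduction.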
Collapse
|
15
|
Li D, Ahmed F, Wu N, Sethi AI. YOLO-JD: A Deep Learning Network for Jute Diseases and Pests Detection from Images. PLANTS (BASEL, SWITZERLAND) 2022; 11:plants11070937. [PMID: 35406915 PMCID: PMC9003326 DOI: 10.3390/plants11070937] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/26/2022] [Revised: 03/25/2022] [Accepted: 03/27/2022] [Indexed: 06/12/2023]
Abstract
Recently, disease prevention in jute plants has become an urgent topic as a result of the growing demand for finer-quality fiber. This research presents a deep learning network called YOLO-JD for detecting jute diseases and pests from images. Into the main architecture of YOLO-JD we integrated three new modules, the Sand Clock Feature Extraction Module (SCFEM), the Deep Sand Clock Feature Extraction Module (DSCFEM), and the Spatial Pyramid Pooling Module (SPPM), to extract image features effectively. We also built a new large-scale image dataset for jute diseases and pests with ten classes. Compared with other state-of-the-art methods, YOLO-JD achieved the best detection accuracy, with an average mAP of 96.63%.
Collapse
Affiliation(s)
- Dawei Li
- College of Information Sciences and Technology, Donghua University, Shanghai 201620, China; (D.L.); (F.A.)
- State Key Laboratory for Modification of Chemical Fibers and Polymer Materials, Donghua University, Shanghai 201620, China
- Engineering Research Center of Digitized Textile and Fashion Technology, Ministry of Education, Donghua University, Shanghai 201620, China
| | - Foysal Ahmed
- College of Information Sciences and Technology, Donghua University, Shanghai 201620, China; (D.L.); (F.A.)
| | - Nailong Wu
- College of Information Sciences and Technology, Donghua University, Shanghai 201620, China; (D.L.); (F.A.)
- Engineering Research Center of Digitized Textile and Fashion Technology, Ministry of Education, Donghua University, Shanghai 201620, China
| | - Arlin I. Sethi
- Department of Chemistry, Faculty of Science, National University of Bangladesh, Gazipur, Dhaka 1704, Bangladesh;
| |
Collapse
|
16
|
Wöber W, Mehnen L, Sykacek P, Meimberg H. Investigating Explanatory Factors of Machine Learning Models for Plant Classification. PLANTS (BASEL, SWITZERLAND) 2021; 10:plants10122674. [PMID: 34961145 PMCID: PMC8708324 DOI: 10.3390/plants10122674] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/22/2021] [Revised: 11/24/2021] [Accepted: 12/01/2021] [Indexed: 06/12/2023]
Abstract
Recent progress in machine learning and deep learning has enabled plant and crop detection through systematic inspection of leaf shapes and other morphological characters for precision-farming identification systems. However, the models used for this approach tend to be black boxes, in the sense that it is difficult to trace the characters on which the classification is based. Interpretability is therefore limited, and the explanatory factors may not rest on reasonable visible characters. We investigate the explanatory factors of recent machine learning and deep learning models for plant classification tasks. Based on a Daucus carota and a Beta vulgaris image dataset, we implement plant classification models and compare them by their predictive performance as well as their explainability. As a default model we implemented a feed-forward convolutional neural network. To evaluate performance, we also trained an unsupervised Bayesian Gaussian process latent variable model as well as a convolutional autoencoder for feature extraction, relying on a support vector machine for classification. The explanatory factors of all models were extracted and analyzed. The experiments show that the feed-forward convolutional neural network (98.24% and 96.10% mean accuracy) outperforms the Bayesian Gaussian process latent variable pipeline (92.08% and 94.31% mean accuracy) as well as the convolutional autoencoder pipeline (92.38% and 93.28% mean accuracy) in classification accuracy, though the difference is not significant for the Beta vulgaris images. Additionally, we found that the neural network used biologically uninterpretable image regions for the plant classification task. In contrast, the unsupervised learning models rely on explainable visual characters. We conclude that supervised convolutional neural networks must be used carefullyly to ensure biological interpretability. We recommend unsupervised machine learning, careful feature investigation, and statistical feature analysis for biological applications.
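The "unsupervised feature extraction plus SVM" pattern compared in this study can be sketched minimally. Note the substitution: PCA stands in here for the paper's convolutional autoencoder or GP-LVM latent space, and the digits dataset stands in for the plant images; only the pipeline shape matches the paper.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)  # stand-in for plant image features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Unsupervised feature extraction (PCA as a simple placeholder for the
# autoencoder / GP-LVM latent space), fitted on training data only,
# followed by an SVM classifier on the extracted features.
pca = PCA(n_components=16).fit(X_tr)
clf = SVC(kernel="rbf").fit(pca.transform(X_tr), y_tr)
acc = clf.score(pca.transform(X_te), y_te)
print(round(acc, 2))
```

Because the feature extractor never sees the labels, the learned representation can be inspected independently of the classifier, which is the property the authors exploit for explainability.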
Collapse
Affiliation(s)
- Wilfried Wöber
- Department of Integrative Biology and Biodiversity Research, Institute of Integrative Conservation Research, University of Natural Resources and Life Sciences, Gregor Mendel Str. 33, 1080 Vienna, Austria;
- Department Industrial Engineering, University of Applied Sciences Technikum Wien, Höchstädtplatz 6, 1200 Vienna, Austria
| | - Lars Mehnen
- Department Computer Science, University of Applied Sciences Technikum Wien, Höchstädtplatz 6, 1200 Vienna, Austria;
| | - Peter Sykacek
- Department of Biotechnology, Institute of Computational Biology, University of Natural Resources and Life Sciences, Muthgasse 18, 1190 Vienna, Austria;
| | - Harald Meimberg
- Department of Integrative Biology and Biodiversity Research, Institute of Integrative Conservation Research, University of Natural Resources and Life Sciences, Gregor Mendel Str. 33, 1080 Vienna, Austria;
| |
Collapse
|
17
|
Waldamichael FG, Debelee TG, Ayano YM. Coffee disease detection using a robust HSV color‐based segmentation and transfer learning for use on smartphones. INT J INTELL SYST 2021. [DOI: 10.1002/int.22747] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/28/2022]
Affiliation(s)
| | - Taye Girma Debelee
- Research and Development Cluster Ethiopian Artificial Intelligence Center Addis Ababa Ethiopia
- Department of Electrical and Computer Engineering Addis Ababa Science and Technology University Addis Ababa Ethiopia
| | | |
Collapse
|
18
|
Genaev MA, Skolotneva ES, Gultyaeva EI, Orlova EA, Bechtold NP, Afonnikov DA. Image-Based Wheat Fungi Diseases Identification by Deep Learning. PLANTS 2021; 10:plants10081500. [PMID: 34451545 PMCID: PMC8399806 DOI: 10.3390/plants10081500] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/20/2021] [Revised: 06/07/2021] [Accepted: 07/15/2021] [Indexed: 11/19/2022]
Abstract
Diseases of cereals caused by pathogenic fungi can significantly reduce crop yields, and many crops are susceptible to them. These diseases are difficult to control on a large scale; one relevant approach is therefore crop field monitoring, which helps to identify a disease at an early stage and to take measures to prevent its spread. An effective control method is disease identification based on the analysis of digital images, which can be obtained in field conditions using mobile devices. In this work, we propose a method for recognizing five fungal diseases of wheat shoots (leaf rust, stem rust, yellow rust, powdery mildew, and septoria), both separately and in cases of multiple diseases, with the possibility of identifying the stage of plant development. A set of 2414 images of wheat fungal diseases (WFD2020) was generated and expert-labeled by disease type. More than 80% of the images in the dataset carry single-disease labels (including seedlings), more than 12% represent healthy plants, and 6% are labeled with multiple diseases. In creating this set, a method based on an image hashing algorithm was applied to reduce the degeneracy of the training data. The disease-recognition algorithm is based on a convolutional neural network with the EfficientNet architecture. The best accuracy (0.942) was shown by a network trained with a strategy based on augmentation and image-style transfer. The recognition method was implemented as a bot on the Telegram platform, which allows users to assess plant lesions in field conditions.
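Image-hash deduplication of the kind used to reduce training-data degeneracy can be sketched with a simple average hash (aHash): near-duplicate images hash to nearly identical bit strings, so a small Hamming distance flags them. The arrays below are synthetic stand-ins for grayscale images; the specific hash and threshold are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def average_hash(img, hash_size=8):
    """Block-average down to hash_size x hash_size, then threshold at
    the mean to get a 64-bit binary fingerprint of the image."""
    h, w = img.shape
    bh, bw = h // hash_size, w // hash_size
    small = img[:bh * hash_size, :bw * hash_size]
    small = small.reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return int(np.count_nonzero(h1 != h2))

rng = np.random.default_rng(1)
img = rng.uniform(0, 255, size=(64, 64))
near_dup = img + rng.normal(0, 2, size=img.shape)   # tiny perturbation
distinct = rng.uniform(0, 255, size=(64, 64))       # unrelated image

d_dup = hamming(average_hash(img), average_hash(near_dup))
d_diff = hamming(average_hash(img), average_hash(distinct))
print(d_dup, d_diff)
```

Images whose pairwise distance falls below a chosen threshold would be collapsed to a single training example, preventing near-duplicates from inflating apparent dataset size.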
Collapse
Affiliation(s)
- Mikhail A. Genaev
- Institute of Cytology and Genetics, Siberian Branch of the Russian Academy of Sciences, 630090 Novosibirsk, Russia; (M.A.G.); (E.S.S.)
- Faculty of Natural Sciences, Novosibirsk State University, 630090 Novosibirsk, Russia
- Kurchatov Genomics Center, Institute of Cytology and Genetics, Siberian Branch of the Russian Academy of Sciences, 630090 Novosibirsk, Russia
| | - Ekaterina S. Skolotneva
- Institute of Cytology and Genetics, Siberian Branch of the Russian Academy of Sciences, 630090 Novosibirsk, Russia; (M.A.G.); (E.S.S.)
- Faculty of Natural Sciences, Novosibirsk State University, 630090 Novosibirsk, Russia
| | - Elena I. Gultyaeva
- All Russian Institute of Plant Protection, Pushkin, 196608 St. Petersburg, Russia;
| | - Elena A. Orlova
- Siberian Research Institute of Plant Production and Breeding, Branch of the Institute of Cytology and Genetics, Siberian Branch of Russian Academy of Sciences, 630501 Krasnoobsk, Russia; (E.A.O.); (N.P.B.)
| | - Nina P. Bechtold
- Siberian Research Institute of Plant Production and Breeding, Branch of the Institute of Cytology and Genetics, Siberian Branch of Russian Academy of Sciences, 630501 Krasnoobsk, Russia; (E.A.O.); (N.P.B.)
| | - Dmitry A. Afonnikov
- Institute of Cytology and Genetics, Siberian Branch of the Russian Academy of Sciences, 630090 Novosibirsk, Russia; (M.A.G.); (E.S.S.)
- Faculty of Natural Sciences, Novosibirsk State University, 630090 Novosibirsk, Russia
- Kurchatov Genomics Center, Institute of Cytology and Genetics, Siberian Branch of the Russian Academy of Sciences, 630090 Novosibirsk, Russia
- Correspondence: ; Tel.: +7-(383)-363-49-63
| |
Collapse
|
19
|
Yebasse M, Shimelis B, Warku H, Ko J, Cheoi KJ. Coffee Disease Visualization and Classification. PLANTS (BASEL, SWITZERLAND) 2021; 10:plants10061257. [PMID: 34205610 PMCID: PMC8235481 DOI: 10.3390/plants10061257] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/20/2021] [Revised: 06/17/2021] [Accepted: 06/18/2021] [Indexed: 06/13/2023]
Abstract
Deep learning architectures are widely used in state-of-the-art image classification tasks, and deep learning has enhanced the ability to automatically detect and classify plant diseases. In practice, however, disease classification is treated as a black-box problem: it is difficult to trust that a model truly identifies the diseased region in the image rather than simply relying on unrelated surroundings. Visualization techniques can help determine the areas important to the model by highlighting the region responsible for the classification. In this study, we present a methodology for visualizing coffee diseases using different visualization approaches. Our goal is to visualize aspects of a coffee disease to obtain insight into what the model "sees" as it learns to classify healthy and non-healthy images. Visualization also helped us identify misclassifications and led us to propose a guided approach for coffee disease classification. The guided approach achieved a classification accuracy of 98%, compared to 77% for the naïve approach, on the Robusta coffee leaf image dataset. The visualization methods considered in this study were Grad-CAM, Grad-CAM++, and Score-CAM, and we provide a visual comparison of them.
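The core Grad-CAM computation these visualization methods build on reduces to a few array operations once a network's feature-map activations and class-score gradients are in hand. The sketch below uses synthetic placeholder arrays for both (no real network is attached), and assumes the standard Grad-CAM recipe: global-average-pool the gradients to get channel weights, take the weighted sum of feature maps, then ReLU.

```python
import numpy as np

def grad_cam(activations, gradients):
    """activations, gradients: arrays of shape (C, H, W), being the conv
    feature maps and the gradients of the target class score w.r.t. them."""
    # Channel weights: global-average-pool the gradients over space.
    weights = gradients.mean(axis=(1, 2))                       # shape (C,)
    # Weighted sum of feature maps; ReLU keeps only positive evidence.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0)
    # Normalize to [0, 1] for overlaying on the input image.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

rng = np.random.default_rng(0)
A = rng.uniform(0, 1, size=(16, 7, 7))   # synthetic conv activations
G = rng.normal(size=(16, 7, 7))          # synthetic class-score gradients
heatmap = grad_cam(A, G)
print(heatmap.shape)
```

The resulting low-resolution heatmap is upsampled to the input size and overlaid on the leaf image; Grad-CAM++ and Score-CAM differ mainly in how the channel weights are computed.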
Collapse
Affiliation(s)
- Milkisa Yebasse
- Department of Computer Engineering, Kumoh National Institute of Technology, Gumi 39177, Korea; (M.Y.); (J.K.)
| | - Birhanu Shimelis
- Artificial Intelligence Center (AIC), Addis Ababa 2Q92+88, Ethiopia;
| | - Henok Warku
- Department of IT-Bio Convergence System, Electronics Engineering, Graduate School, Chosun University, Gwangju 61452, Korea;
| | - Jaepil Ko
- Department of Computer Engineering, Kumoh National Institute of Technology, Gumi 39177, Korea; (M.Y.); (J.K.)
| | - Kyung Joo Cheoi
- Department of Computer Science, Chungbuk National University, Cheongju 28644, Korea
| |
Collapse
|