1
Yu H, Che M, Yu H, Ma Y. Research on weed identification in soybean fields based on the lightweight segmentation model DCSAnet. Frontiers in Plant Science 2023; 14:1268218. PMID: 38116146; PMCID: PMC10728600; DOI: 10.3389/fpls.2023.1268218. Received 07/27/2023; accepted 11/08/2023.
Abstract
Weeds compete with crops for sunlight, water, space, and nutrients, which can impair crop growth. In recent years, self-driving agricultural equipment and robots have been used for weeding, and drones for weed identification and herbicide spraying; the effectiveness of these mobile weeding devices is largely limited by their weed detection capability. To improve the weed detection capability of mobile weed control devices, this paper proposes DCSAnet, a lightweight weed segmentation network model well suited to such devices. The whole network uses an encoder-decoder structure with the DCA module as the main feature extraction module. The DCA module builds on the inverted residual structure of MobileNetV3, effectively combines asymmetric convolution and depthwise separable convolution, and uses a channel shuffle strategy to increase the randomness of feature extraction. In the decoding stage, feature fusion uses the high-dimensional feature map to guide the aggregation of low-dimensional feature maps, reducing feature loss during fusion and increasing model accuracy. To validate the model on the weed segmentation task, we collected a soybean field weed dataset containing a large number of weeds and crops and used it to conduct an experimental study of DCSAnet. The results showed that the proposed DCSAnet achieves an MIoU of 85.95% with only 0.57 M parameters, the highest segmentation accuracy among the lightweight networks compared, demonstrating the effectiveness of the model for the weed segmentation task.
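The channel shuffle strategy mentioned in this abstract is a ShuffleNet-style reordering of the channel axis so that channels from different convolution groups interleave. The sketch below illustrates the operation on a plain list of channel indices for clarity; it is a generic illustration, not the authors' DCSAnet code.

```python
def channel_shuffle(channels, groups):
    """Reorder a channel list so channels from different groups interleave
    (ShuffleNet-style shuffle along the channel axis)."""
    g, n = groups, len(channels)
    assert n % g == 0, "channel count must be divisible by group count"
    per = n // g
    # View the channels as g rows of `per` entries, then read column by column.
    return [channels[r * per + c] for c in range(per) for r in range(g)]

# Channels [0, 1, 2, 3] in 2 groups become [0, 2, 1, 3]
print(channel_shuffle([0, 1, 2, 3], 2))
```

In a real network the same reshape/transpose is applied to the channel dimension of a 4-D feature map tensor.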
Affiliation(s)
- Helong Yu
- College of Information Technology, Jilin Agricultural University, Changchun, China
- Minghang Che
- College of Information Technology, Jilin Agricultural University, Changchun, China
- Han Yu
- College of Information Technology, Jilin Agricultural University, Changchun, China
- Yuntao Ma
- College of Land Science and Technology, China Agricultural University, Beijing, China
2
Yordanov M, d'Andrimont R, Martinez-Sanchez L, Lemoine G, Fasbender D, van der Velde M. Crop Identification Using Deep Learning on LUCAS Crop Cover Photos. Sensors (Basel, Switzerland) 2023; 23:6298. PMID: 37514593; PMCID: PMC10383911; DOI: 10.3390/s23146298. Received 06/12/2023; revised 07/05/2023; accepted 07/06/2023.
Abstract
Massive, high-quality in situ data are essential for Earth-observation-based agricultural monitoring. However, field surveying requires considerable organizational effort and expense. Using computer vision to recognize crop types on geo-tagged photos could be a game changer, allowing for the provision of timely and accurate crop-specific information. This study presents the first use of the largest multi-year set of labelled close-up in situ photos systematically collected across the European Union from the Land Use Cover Area frame Survey (LUCAS). Benefiting from this unique in situ dataset, the study benchmarks and tests computer vision models for recognizing major crops on close-up photos statistically distributed in space and time between 2006 and 2018, in a practical, agricultural-policy-relevant context. The methodology makes use of crop calendars from various sources to ascertain the mature stage of the crop, of an extensive hyper-parameterization of MobileNet from random parameter initialization, and of various techniques from information theory to carry out more accurate post-processing filtering of results. The work produced a dataset of 169,460 images of mature crops across 12 classes, of which 15,876 were manually selected as a clean sample free of foreign objects or unfavorable conditions. The best-performing model achieved a macro F1 (M-F1) of 0.75 on an imbalanced test dataset of 8642 photos. Using metrics from information theory, namely the equivalence reference probability, resulted in a further 6% increase. The most unfavorable conditions for taking such images, across all crop classes, were found to be too early or too late in the season. The proposed methodology shows that, with minimal auxiliary data beyond the images themselves, an M-F1 of 0.82 can be achieved for labelling among 12 major European crops.
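Macro F1, the headline metric in this abstract, is the unweighted mean of per-class F1 scores, which is why it is preferred over accuracy on imbalanced test sets. A minimal reference implementation (the crop labels below are hypothetical examples, not LUCAS classes):

```python
from collections import defaultdict

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores (macro F1)."""
    classes = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1  # predicted class gets a false positive
            fn[t] += 1  # true class gets a false negative
    f1s = []
    for c in classes:
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Each class contributes equally, regardless of its frequency
print(round(macro_f1(["wheat", "wheat", "maize"], ["wheat", "maize", "maize"]), 3))
```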
Affiliation(s)
- Guido Lemoine
- European Commission, Joint Research Centre (JRC), 21027 Ispra, Italy
- Dominique Fasbender
- European Commission, Joint Research Centre (JRC), 21027 Ispra, Italy
- Walloon Institute of Evaluation, Foresight and Statistics (IWEPS), 5001 Namur, Belgium
3
Wang C, Yang G, Huang Y, Liu Y, Zhang Y. A transformer-based mask R-CNN for tomato detection and segmentation. Journal of Intelligent & Fuzzy Systems 2023. DOI: 10.3233/jifs-222954.
Abstract
Fruit detection is essential for harvesting robot platforms. However, complicated environmental attributes such as illumination variation and occlusion have made fruit detection a challenging task. In this study, a Transformer-based Mask Region-based Convolutional Neural Network (Mask R-CNN) model for tomato detection and segmentation is proposed to address these difficulties. Swin Transformer is used as the backbone network for better feature extraction, and multi-scale training techniques are shown to yield significant performance gains. Apart from accurately detecting and segmenting tomatoes, the method effectively identifies tomato cultivars (normal-size and cherry tomatoes) and maturity stages (fully ripened, half-ripened, and green). Compared with existing work, the method achieves the best detection and segmentation performance on these tomatoes, with mean average precision (mAP) results of 89.4% and 89.2%, respectively.
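The mAP figures reported here are built on intersection over union (IoU) between predicted and ground-truth regions. For axis-aligned boxes the computation is a few lines; this is a generic sketch of the metric, not the authors' evaluation code.

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two 2x2 squares overlapping on a 1x2 strip: IoU = 2 / (4 + 4 - 2) = 1/3
print(box_iou((0, 0, 2, 2), (1, 0, 3, 2)))
```

For instance segmentation the same ratio is computed on mask pixels rather than box areas.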
Affiliation(s)
- Chong Wang
- School of Software, Shandong University, Jinan, China
- Gongping Yang
- School of Software, Shandong University, Jinan, China
- School of Computer, Heze University, Heze, China
- Yuwen Huang
- School of Computer, Heze University, Heze, China
- Yikun Liu
- School of Software, Shandong University, Jinan, China
- Yan Zhang
- School of Software, Shandong University, Jinan, China
4
Divyanth LG, Soni P, Pareek CM, Machavaram R, Nadimi M, Paliwal J. Detection of Coconut Clusters Based on Occlusion Condition Using Attention-Guided Faster R-CNN for Robotic Harvesting. Foods 2022; 11:3903. PMID: 36496712; PMCID: PMC9737954; DOI: 10.3390/foods11233903. Received 09/22/2022; revised 11/23/2022; accepted 12/01/2022.
Abstract
Manual harvesting of coconuts is a highly risky and skill-demanding operation, and the population of people engaged in coconut tree climbing has been steadily decreasing. Hence, with the evolution of tree-climbing robots and robotic end-effectors, the development of autonomous coconut harvesters aided by machine vision technologies is of great interest to farmers. However, coconuts are very hard and are often heavily occluded on the tree, so accurate detection of coconut clusters based on their occlusion condition is necessary to plan the motion of the robotic end-effector. This study proposes a deep learning-based Faster Region-based Convolutional Neural Network (Faster R-CNN) object detection model to classify coconut clusters as non-occluded or leaf-occluded bunches. To improve identification accuracy, an attention mechanism was introduced into the Faster R-CNN model. The image dataset was acquired from a commercial coconut plantation during daylight under natural lighting using a handheld digital single-lens reflex camera. The proposed model was trained, validated, and tested on 900 manually acquired and augmented images of tree crowns spanning different illumination conditions, backgrounds, and coconut varieties. On the test dataset, the model attained an overall mean average precision (mAP) of 0.886 and a weighted mean intersection over union (wmIoU) of 0.827, with average precisions of 0.912 and 0.883 for non-occluded and leaf-occluded clusters, respectively. These encouraging results provide the basis for a complete vision system that determines the harvesting strategy and locates the cutting position on the coconut cluster.
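The abstract does not define how the weighted mean IoU (wmIoU) is weighted. Assuming it weights each class's IoU by its ground-truth instance count, as is common, a sketch might look like the following; the numbers are illustrative, not the paper's.

```python
def weighted_mean_iou(ious, counts):
    """Mean of per-class IoUs weighted by class frequency.

    ious:   per-class IoU values
    counts: number of ground-truth instances per class (same order)
    """
    total = sum(counts)
    return sum(iou * c for iou, c in zip(ious, counts)) / total

# Hypothetical per-class IoUs weighted by hypothetical instance counts:
# (0.9 * 30 + 0.8 * 10) / 40 = 0.875
print(weighted_mean_iou([0.9, 0.8], [30, 10]))
```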
Affiliation(s)
- L. G. Divyanth
- Department of Agricultural and Food Engineering, Indian Institute of Technology Kharagpur, Kharagpur 721302, India
- Peeyush Soni
- Department of Agricultural and Food Engineering, Indian Institute of Technology Kharagpur, Kharagpur 721302, India
- Correspondence: (P.S.); (J.P.)
- Chaitanya Madhaw Pareek
- Department of Agricultural and Food Engineering, Indian Institute of Technology Kharagpur, Kharagpur 721302, India
- Rajendra Machavaram
- Department of Agricultural and Food Engineering, Indian Institute of Technology Kharagpur, Kharagpur 721302, India
- Mohammad Nadimi
- Department of Biosystems Engineering, University of Manitoba, Winnipeg, MB R3T 5V6, Canada
- Jitendra Paliwal
- Department of Biosystems Engineering, University of Manitoba, Winnipeg, MB R3T 5V6, Canada
- Correspondence: (P.S.); (J.P.)
5
Li Z, Yuan X, Wang C. A review on structural development and recognition–localization methods for end-effector of fruit–vegetable picking robots. International Journal of Advanced Robotic Systems 2022. DOI: 10.1177/17298806221104906.
Abstract
The excellent performance of fruit and vegetable picking robots usually derives from a well-designed end-effector combined with high-accuracy recognition and localization methods. Efforts have therefore focused on these two aspects, continuously yielding diverse end-effector structures, target recognition methods, and combinations of the two. A good understanding of their working principles, advantages, limitations, and adaptability to particular settings is helpful when designing picking robots. Accordingly, this review first describes the main characteristics of traditional schemes, organized by grasping method, separating method, structure, material, and driving mode. Underactuated and soft manipulators, which represent the future of the field, are then summarized systematically in terms of technical routes, advantages, potential applications, and challenges. Recognition and localization methods are also surveyed: current recognition approaches using single features, multi-feature fusion, and deep learning are explained with their advantages, limitations, and successful applications. For 3D localization, active vision based on structured light, laser scanning, time of flight, and radar is illustrated through its respective applications, and passive vision (monocular, binocular, and multiocular) is evaluated by its advantages, limitations, degree of automation, reconstruction quality, and application scenarios.
Drawing on this survey of structures and of recognition and localization methods, future end-effectors for fruit and vegetable picking robots could combine fewer driving elements, rigid-flexible-bionic coupled soft manipulators, simple control programs, high efficiency, low damage, low cost, high versatility, and high recognition accuracy in all-season picking tasks.
Affiliation(s)
- Ziyue Li
- School of Automotive Engineering, Hubei University of Automotive Technology, Shiyan, PR China
- Xianju Yuan
- School of Automotive Engineering, Hubei University of Automotive Technology, Shiyan, PR China
- Department of Systems Design Engineering, University of Waterloo, Waterloo, Canada
- Chuyan Wang
- School of Automotive Engineering, Hubei University of Automotive Technology, Shiyan, PR China
6
Lv J, Xu H, Xu L, Zou L, Rong H, Yang B, Niu L, Ma Z. Recognition of fruits and vegetables with similar-color background in natural environment: a survey. Journal of Field Robotics 2022. DOI: 10.1002/rob.22074.
Affiliation(s)
- Jidong Lv
- Department of Automation, School of Microelectronics and Control Engineering, Changzhou University, Changzhou, China
- Hao Xu
- Department of Automation, School of Microelectronics and Control Engineering, Changzhou University, Changzhou, China
- Liming Xu
- Department of Equipment Engineering, Jiangsu Urban and Rural Construction College, Changzhou, China
- Ling Zou
- Department of Automation, School of Microelectronics and Control Engineering, Changzhou University, Changzhou, China
- Hailong Rong
- Department of Automation, School of Microelectronics and Control Engineering, Changzhou University, Changzhou, China
- Biao Yang
- Department of Automation, School of Microelectronics and Control Engineering, Changzhou University, Changzhou, China
- Liangliang Niu
- Department of Automation, School of Microelectronics and Control Engineering, Changzhou University, Changzhou, China
- Zhenghua Ma
- Department of Automation, School of Microelectronics and Control Engineering, Changzhou University, Changzhou, China
7
Dynamic Viewpoint Selection for Sweet Pepper Maturity Classification Using Online Economic Decisions. Applied Sciences (Basel) 2022. DOI: 10.3390/app12094414.
Abstract
This paper presents a rule-based methodology for dynamic viewpoint selection for maturity classification of red and yellow sweet peppers. The method makes an online decision to capture an additional next-best viewpoint based on an economic analysis that weighs potential misclassification costs against robot operational costs. The next-best viewpoint is selected based on color variations on the pepper. Peppers were classified as mature or immature using a random forest classifier built on principal components of various color features derived from an RGB-D camera. The method first attempts to classify maturity from a single viewpoint; an additional viewpoint is acquired and added to the point cloud only when doing so is deemed profitable. The methodology was evaluated using leave-one-out cross-validation on datasets of 69 red and 70 yellow sweet peppers from three maturity stages. Compared to using a single viewpoint, dynamic viewpoint selection increased classification accuracy by 6% and 5% while decreasing economic costs by 52% and 12% for red and yellow peppers, respectively. Sensitivity analyses were performed for misclassification and robot operational costs.
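The economic decision described above can be caricatured as "capture another view only if its expected saving exceeds its cost". The function and all numbers below are illustrative assumptions for that idea, not the paper's actual cost model.

```python
def take_extra_viewpoint(p_error, cost_misclass, cost_view, expected_error_drop):
    """Decide whether capturing a next-best viewpoint is profitable.

    p_error:             estimated misclassification probability so far
    cost_misclass:       economic cost of a misclassified pepper
    cost_view:           robot operational cost of one extra viewpoint
    expected_error_drop: fraction of the error the extra view removes
    """
    expected_saving = p_error * expected_error_drop * cost_misclass
    return expected_saving > cost_view

# A 30% error risk, halved by a second view, on a pepper worth 1.0:
# expected saving 0.15 justifies a view costing 0.1 but not 0.2
print(take_extra_viewpoint(0.3, 1.0, 0.1, 0.5))
print(take_extra_viewpoint(0.3, 1.0, 0.2, 0.5))
```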
8
An Automated, Clip-Type, Small Internet of Things Camera-Based Tomato Flower and Fruit Monitoring and Harvest Prediction System. Sensors 2022; 22:2456. PMID: 35408071; PMCID: PMC9002604; DOI: 10.3390/s22072456. Received 02/09/2022; revised 03/19/2022; accepted 03/21/2022.
Abstract
Automated crop monitoring using image analysis is commonly used in horticulture, and image-processing technologies have been used in several studies to monitor growth, determine harvest time, and estimate yield. However, accurately monitoring flowers and fruits, and tracking their movements, is difficult because of their location on an individual plant within a cluster of plants. In this study, an automated clip-type Internet of Things (IoT) camera-based growth monitoring and harvest date prediction system was proposed and designed for tomato cultivation. Multiple clip-type IoT cameras were installed on trusses inside a greenhouse, and the growth of tomato flowers and fruits was monitored using deep learning-based detection of blooming flowers and immature fruits. The harvest date was then calculated from these data together with temperatures inside the greenhouse. The system was tested over three months, and the harvest dates it produced were comparable with manually recorded data. These results suggest that the system can accurately detect anthesis, count immature fruits, and predict the harvest date within an error range of ±2.03 days in tomato plants. The system can be used to support crop growth management in greenhouses.
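The abstract does not state how detection dates and greenhouse temperatures are combined into a harvest date. One common approach to that kind of prediction is growing-degree-day (GDD) accumulation from anthesis, sketched here with purely illustrative parameters (the base temperature and GDD target are assumptions, not values from the paper).

```python
def predict_harvest_day(daily_mean_temps, base_temp=10.0, gdd_target=900.0):
    """Return the 0-indexed day on which accumulated growing degree days
    (GDD) since anthesis reach the target, or None if they never do."""
    gdd = 0.0
    for day, temp in enumerate(daily_mean_temps):
        gdd += max(0.0, temp - base_temp)  # only heat above the base counts
        if gdd >= gdd_target:
            return day
    return None

# A constant 25 degC greenhouse accumulates 15 GDD/day,
# reaching 900 GDD on day index 59 (the 60th day)
print(predict_harvest_day([25.0] * 90))
```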
9
Optimization Model for Selective Harvest Planning Performed by Humans and Robots. Applied Sciences (Basel) 2022. DOI: 10.3390/app12052507.
Abstract
This paper addresses the formulation of an individual fruit harvest decision as a nonlinear programming problem to maximize profit, while considering selective harvesting based on fruit maturity. A model for the operational level decision was developed and includes four features: time window constraints, resource limitations, yield perishability, and uncertainty. The model implementation was demonstrated through numerical studies that compared decisions for different types of worker and analyzed different robotic harvester capabilities for a case study of sweet pepper harvesting. The results show the influence of the maturity classification capabilities of the robot on its output, as well as the improvement in cycle times needed to reach the economic feasibility of a robotic harvester.
10
A Simple and Efficient Deep Learning-Based Framework for Automatic Fruit Recognition. Computational Intelligence and Neuroscience 2022; 2022:6538117. PMID: 35237311; PMCID: PMC8885238; DOI: 10.1155/2022/6538117. Received 12/16/2021; revised 01/20/2022; accepted 01/26/2022.
Abstract
Accurate detection and recognition of various kinds of fruits and vegetables using artificial intelligence (AI) remains a challenging task, owing to the similarity between fruit types and to challenging environments with lighting and background variations. Developing an expert system for automatic fruit recognition is therefore increasingly important; despite many successful approaches, the technology is still far from mature. Deep learning-based models have emerged as state-of-the-art techniques for image segmentation and classification and hold great promise in challenging domains such as agriculture, where they handle large variability in data better than classical computer vision methods. In this study, we propose a deep learning-based framework to detect and recognize fruits and vegetables automatically in difficult real-world scenarios. The method may help fruit sellers identify and differentiate similar-looking fruits and vegetables. It applies a deep convolutional neural network (DCNN) to the task of distinguishing natural fruit images from the Gilgit-Baltistan (GB) region, an area famous for fruit production in Pakistan and worldwide. The experimental outcomes demonstrate that the suggested deep learning algorithm recognizes fruit automatically with a high accuracy of 96%, indicating that the proposed approach can meet real-world application requirements.
11
Kurtser P, Castro-Alves V, Arunachalam A, Sjöberg V, Hanell U, Hyötyläinen T, Andreasson H. Development of novel robotic platforms for mechanical stress induction, and their effects on plant morphology, elements, and metabolism. Scientific Reports 2021; 11:23876. PMID: 34903776; PMCID: PMC8669031; DOI: 10.1038/s41598-021-02581-9. Received 05/10/2021; accepted 11/12/2021.
Abstract
This research evaluates the effect on herbal crops of mechanical stress induced by two specially developed robotic platforms. Changes in plant morphology, metabolite profiles, and element content are evaluated in a series of three empirical experiments, conducted under greenhouse and CNC growing bed conditions, for the case of basil. Results show significant changes in morphological features, including shortening of overall stem length by up to 40% and of inter-node distances by up to 80%, for plants treated with a robotic mechanical stress-induction protocol compared to control groups. Treated plants showed a significant increase in element absorption, by 20–250% compared to controls, and changes in the metabolite profiles suggested an improvement in the plants' nutritional profiles. These results suggest that repetitive robotic mechanical stimuli could be beneficial for plants' nutritional and taste properties and could be performed with no human intervention (and therefore no labor cost). The morphological changes could potentially replace practices involving chemical treatment of the plants, leading to more sustainable crop production.
Affiliation(s)
- Polina Kurtser
- Centre for Applied Autonomous Sensor Systems, Örebro University, 701 82 Örebro, Sweden
- Victor Castro-Alves
- Man-Technology-Environment Research Centre, Örebro University, 701 82 Örebro, Sweden
- Ajay Arunachalam
- Centre for Applied Autonomous Sensor Systems, Örebro University, 701 82 Örebro, Sweden
- Viktor Sjöberg
- Man-Technology-Environment Research Centre, Örebro University, 701 82 Örebro, Sweden
- Ulf Hanell
- Man-Technology-Environment Research Centre, Örebro University, 701 82 Örebro, Sweden
- Tuulia Hyötyläinen
- Man-Technology-Environment Research Centre, Örebro University, 701 82 Örebro, Sweden
- Henrik Andreasson
- Centre for Applied Autonomous Sensor Systems, Örebro University, 701 82 Örebro, Sweden
12
Ubaid MT, Darboe A, Uche FS, Daffeh A, Khan MUG. Kett Mangoes Detection in the Gambia using Deep Learning Techniques. 2021 International Conference on Innovative Computing (ICIC). DOI: 10.1109/icic53490.2021.9693082.
Affiliation(s)
- Muhammad Talha Ubaid
- Intelligent Criminology Research Lab, National Center of Artificial Intelligence, KICS, UET Lahore
- Abdou Darboe
- University of The Gambia, Dept. of Information Technology Services, Banjul, The Gambia
- Fred Sangol Uche
- University of The Gambia, Computer Science Department, Banjul, The Gambia
- Adama Daffeh
- University of The Gambia, Computer Science Department, Banjul, The Gambia
13
Boatswain Jacques AA, Adamchuk VI, Park J, Cloutier G, Clark JJ, Miller C. Towards a Machine Vision-Based Yield Monitor for the Counting and Quality Mapping of Shallots. Frontiers in Robotics and AI 2021; 8:627067. PMID: 34046434; PMCID: PMC8146908; DOI: 10.3389/frobt.2021.627067. Received 11/08/2020; accepted 02/04/2021.
Abstract
In comparison to field crops such as cereals, cotton, hay, and grain, specialty crops often require more resources, are usually more sensitive to sudden changes in growth conditions, and produce higher-value products. Assessing the quality and quantity of specialty crops during harvesting is crucial for securing higher returns and improving management practices. Technical advancements in computer and machine vision have improved detection, quality assessment, and yield estimation for various fruit crops, but similar methods capable of exporting a detailed yield map for vegetable crops have yet to be fully developed. A machine vision-based yield monitor was designed to perform size categorization and continuous counting of shallots in situ during the harvesting process. Coupled with software developed in Python, the system is composed of a video logger and a global navigation satellite system. Computer vision analysis is performed within the tractor while an RGB camera collects real-time video of the crops under natural sunlight. Vegetables are first segmented using watershed segmentation, detected on the conveyor, and then classified by size. The system detected shallots in a subsample of the dataset with a precision of 76%. The software was also evaluated on its ability to classify the shallots into three size categories; the best performance was achieved on the large class (73%), followed by the small (59%) and medium (44%) classes. Based on these results, the occasional occlusion of vegetables and inconsistent lighting were the main factors that hindered performance. Although further enhancements are envisioned, the prototype's modular and novel design permits the mapping of a selection of other horticultural crops, and it has the potential to benefit many producers of small vegetable crops by providing useful harvest information in real time.
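After segmentation, the size-categorization step reduces to binning each detected shallot by a size measure. The sketch below uses threshold binning on an equivalent diameter; the thresholds and the diameter measure are hypothetical, as the abstract does not give the class boundaries.

```python
def size_class(diameter_mm, small_max=25.0, large_min=40.0):
    """Bin a shallot into three size classes by equivalent diameter.

    small_max and large_min are illustrative thresholds, not the
    study's actual boundaries.
    """
    if diameter_mm < small_max:
        return "small"
    if diameter_mm >= large_min:
        return "large"
    return "medium"

# Tally a handful of hypothetical detections into per-class counts
counts = {"small": 0, "medium": 0, "large": 0}
for d in (18.0, 31.0, 45.0, 22.5, 41.0):
    counts[size_class(d)] += 1
print(counts)
```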
Affiliation(s)
- Amanda A Boatswain Jacques
- Precision Agriculture and Sensor Systems Laboratory (PASS), Department of Bioresource Engineering, McGill University, Sainte-Anne-de-Bellevue, QC, Canada
- Viacheslav I Adamchuk
- Precision Agriculture and Sensor Systems Laboratory (PASS), Department of Bioresource Engineering, McGill University, Sainte-Anne-de-Bellevue, QC, Canada
- Jaesung Park
- Precision Agriculture and Sensor Systems Laboratory (PASS), Department of Bioresource Engineering, McGill University, Sainte-Anne-de-Bellevue, QC, Canada
- James J Clark
- Department of Electrical and Computer Engineering, McGill University, Montreal, QC, Canada
- Connor Miller
- Precision Agriculture and Sensor Systems Laboratory (PASS), Department of Bioresource Engineering, McGill University, Sainte-Anne-de-Bellevue, QC, Canada
14
Magalhães SA, Castro L, Moreira G, dos Santos FN, Cunha M, Dias J, Moreira AP. Evaluating the Single-Shot MultiBox Detector and YOLO Deep Learning Models for the Detection of Tomatoes in a Greenhouse. Sensors (Basel, Switzerland) 2021; 21:3569. PMID: 34065568; PMCID: PMC8160895; DOI: 10.3390/s21103569. Received 04/28/2021; revised 05/14/2021; accepted 05/17/2021.
Abstract
The development of robotic solutions for agriculture requires advanced perception capabilities that work reliably at any crop stage. For example, to automatise the tomato harvesting process in greenhouses, the visual perception system needs to detect the tomato at any stage of its life cycle (from flower to ripe fruit). The state of the art in visual tomato detection focuses mainly on ripe tomatoes, which have a distinctive colour against the background. This paper contributes an annotated visual dataset of green and reddish tomatoes; such datasets are uncommon and were not previously available for research purposes. This will enable further developments in edge artificial intelligence for the in situ, real-time visual tomato detection required by harvesting robots. Using this dataset, five deep learning models were selected, trained, and benchmarked to detect green and reddish tomatoes grown in greenhouses. Given our robotic platform's specifications, only Single-Shot MultiBox Detector (SSD) and YOLO architectures were considered. The results proved that the system can detect green and reddish tomatoes, even those occluded by leaves. SSD MobileNet v2 had the best performance compared against SSD Inception v2, SSD ResNet 50, SSD ResNet 101, and YOLOv4 Tiny, reaching an F1-score of 66.15%, an mAP of 51.46%, and an inference time of 16.44 ms on an NVIDIA Tesla T4 (Turing architecture) with 12 GB. YOLOv4 Tiny also had impressive results, mainly concerning inference times of about 5 ms.
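Both the SSD and YOLO families compared above prune overlapping candidate boxes with greedy non-maximum suppression (NMS) before reporting detections. A generic, self-contained sketch of the algorithm, not tied to either framework's implementation:

```python
def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression.

    boxes:  list of (x1, y1, x2, y2)
    scores: confidence per box
    Returns indices of the boxes kept, highest score first.
    """
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union else 0.0

    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)          # highest-scoring remaining box wins
        keep.append(best)
        # discard every remaining box that overlaps it too much
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

# Two near-duplicate detections collapse to the higher-scoring one;
# the distant third box survives
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
print(nms(boxes, [0.9, 0.8, 0.7]))
```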
Affiliation(s)
- Sandro Augusto Magalhães
- INESC TEC - Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, Campus da FEUP, Rua Dr. Roberto Frias, s/n, 4200-465 Porto, Portugal
- Faculty of Engineering, University of Porto, Rua Dr. Roberto Frias, s/n, 4200-465 Porto, Portugal
- Luís Castro
- INESC TEC - Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, Campus da FEUP, Rua Dr. Roberto Frias, s/n, 4200-465 Porto, Portugal
- Faculty of Engineering, University of Porto, Rua Dr. Roberto Frias, s/n, 4200-465 Porto, Portugal
- Germano Moreira
- Faculty of Sciences, University of Porto, Rua do Campo Alegre, s/n, 4169-007 Porto, Portugal
- Filipe Neves dos Santos
- INESC TEC - Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, Campus da FEUP, Rua Dr. Roberto Frias, s/n, 4200-465 Porto, Portugal
- Mário Cunha
- INESC TEC - Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, Campus da FEUP, Rua Dr. Roberto Frias, s/n, 4200-465 Porto, Portugal
- Faculty of Sciences, University of Porto, Rua do Campo Alegre, s/n, 4169-007 Porto, Portugal
- Jorge Dias
- Khalifa University Center for Autonomous Robotic Systems (KUCARS), Khalifa University of Science and Technology (KU), Abu Dhabi 127788, United Arab Emirates
- António Paulo Moreira
- INESC TEC - Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, Campus da FEUP, Rua Dr. Roberto Frias, s/n, 4200-465 Porto, Portugal
- Faculty of Engineering, University of Porto, Rua Dr. Roberto Frias, s/n, 4200-465 Porto, Portugal
Collapse
|
15
|
Designing a Low-Cost Mechatronic Device for Semi-Automatic Saffron Harvesting. MACHINES 2021. [DOI: 10.3390/machines9050094] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
This paper addresses the design of a novel mechatronic device for saffron harvesting. The main challenge consists of proposing a new paradigm for semi-automatic harvesting of saffron flowers. The proposed solution is designed to be easily portable, with user-friendly and cost-oriented features and fully electric, battery-powered actuation. A preliminary concept design is proposed based on a specific novel cam mechanism combined with an elastic spring to accomplish the detachment of the flowers from their stems. Numerical calculations and simulations have been carried out to complete the full design of a proof-of-concept prototype. Preliminary experimental tests demonstrate the engineering feasibility and effectiveness of the proposed design solutions, whose concept has been submitted for patenting.
16
Development and performance evaluation of a machine vision system and an integrated prototype for automated green shoot thinning in vineyards. J FIELD ROBOT 2021. [DOI: 10.1002/rob.22013] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/28/2023]
17
Afonso M, Fonteijn H, Fiorentin FS, Lensink D, Mooij M, Faber N, Polder G, Wehrens R. Tomato Fruit Detection and Counting in Greenhouses Using Deep Learning. FRONTIERS IN PLANT SCIENCE 2020; 11:571299. [PMID: 33329628 PMCID: PMC7717966 DOI: 10.3389/fpls.2020.571299] [Citation(s) in RCA: 30] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/10/2020] [Accepted: 10/20/2020] [Indexed: 05/07/2023]
Abstract
Accurately detecting and counting fruits during plant growth using imaging and computer vision is of importance not only from the point of view of reducing labor intensive manual measurements of phenotypic information, but also because it is a critical step toward automating processes such as harvesting. Deep learning based methods have emerged as the state-of-the-art techniques in many problems in image segmentation and classification, and have a lot of promise in challenging domains such as agriculture, where they can deal with the large variability in data better than classical computer vision methods. This paper reports results on the detection of tomatoes in images taken in a greenhouse, using the MaskRCNN algorithm, which detects objects and also the pixels corresponding to each object. Our experimental results on the detection of tomatoes from images taken in greenhouses using a RealSense camera are comparable to or better than the metrics reported by earlier work, even though those were obtained in laboratory conditions or using higher resolution images. Our results also show that MaskRCNN can implicitly learn object depth, which is necessary for background elimination.
Affiliation(s)
- Manya Afonso
- Wageningen University and Research, Wageningen, Netherlands
- Gerrit Polder
- Wageningen University and Research, Wageningen, Netherlands
- Ron Wehrens
- Wageningen University and Research, Wageningen, Netherlands
18
Real-Time Fruit Recognition and Grasping Estimation for Robotic Apple Harvesting. SENSORS 2020; 20:s20195670. [PMID: 33020430 PMCID: PMC7583839 DOI: 10.3390/s20195670] [Citation(s) in RCA: 39] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/09/2020] [Revised: 09/30/2020] [Accepted: 09/30/2020] [Indexed: 01/17/2023]
Abstract
Robotic harvesting shows promise for the future development of the agricultural industry. However, many challenges remain in the development of a fully functional robotic harvesting system, and vision is among the most important of them. Traditional vision methods often suffer from poor accuracy, robustness, and efficiency in real implementation environments. In this work, a fully deep learning-based vision method for autonomous apple harvesting is developed and evaluated. The developed method includes a lightweight one-stage detection and segmentation network for fruit recognition and a PointNet that processes the point clouds and estimates a proper approach pose for each fruit before grasping. The fruit recognition network takes raw inputs from an RGB-D camera and performs fruit detection and instance segmentation on the RGB images. The PointNet grasping network combines depth information with results from fruit recognition and outputs the approach pose of each fruit for robotic arm execution. The developed vision method is evaluated on RGB-D image data collected from both laboratory and orchard environments. Robotic harvesting experiments in both indoor and outdoor conditions are also included to validate the performance of the developed harvesting system. Experimental results show that the developed vision method can guide robotic harvesting with high efficiency and accuracy. Overall, the developed robotic harvesting system achieves a harvesting success rate of 0.8 with a cycle time of 6.5 s.
19
Brown J, Sukkarieh S. Design and evaluation of a modular robotic plum harvesting system utilizing soft components. J FIELD ROBOT 2020. [DOI: 10.1002/rob.21987] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Affiliation(s)
- Jasper Brown
- The Australian Centre for Field Robotics, Faculty of Engineering, The University of Sydney, Sydney, Australia
- Salah Sukkarieh
- The Australian Centre for Field Robotics, Faculty of Engineering, The University of Sydney, Sydney, Australia
20
Shao Y, Wang Y, Xuan G, Gao Z, Hu Z, Gao C, Wang K. Assessment of Strawberry Ripeness Using Hyperspectral Imaging. ANAL LETT 2020. [DOI: 10.1080/00032719.2020.1812622] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Affiliation(s)
- Yuanyuan Shao
- College of Mechanical and Electrical Engineering, Shandong Intelligent Engineering Laboratory of Agricultural Equipment, Shandong Agricultural University, Tai’an, China
- Ministry of Agriculture and Rural Affairs, Nanjing Research Institute of Agricultural Mechanization, Nanjing, China
- Yongxian Wang
- College of Mechanical and Electrical Engineering, Shandong Intelligent Engineering Laboratory of Agricultural Equipment, Shandong Agricultural University, Tai’an, China
- Guantao Xuan
- College of Mechanical and Electrical Engineering, Shandong Intelligent Engineering Laboratory of Agricultural Equipment, Shandong Agricultural University, Tai’an, China
- Zongmei Gao
- Department of Biological Systems Engineering, Center for Precision and Automated Agricultural Systems, Prosser, WA, USA
- Zhichao Hu
- Ministry of Agriculture and Rural Affairs, Nanjing Research Institute of Agricultural Mechanization, Nanjing, China
- Chong Gao
- College of Mechanical and Electrical Engineering, Shandong Intelligent Engineering Laboratory of Agricultural Equipment, Shandong Agricultural University, Tai’an, China
- Kaili Wang
- College of Mechanical and Electrical Engineering, Shandong Intelligent Engineering Laboratory of Agricultural Equipment, Shandong Agricultural University, Tai’an, China
21
Srivastava S, Vani B, Sadistap S. Machine-vision based handheld embedded system to extract quality parameters of citrus cultivars. JOURNAL OF FOOD MEASUREMENT AND CHARACTERIZATION 2020. [DOI: 10.1007/s11694-020-00520-2] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
22
Tang Y, Chen M, Wang C, Luo L, Li J, Lian G, Zou X. Recognition and Localization Methods for Vision-Based Fruit Picking Robots: A Review. FRONTIERS IN PLANT SCIENCE 2020; 11:510. [PMID: 32508853 PMCID: PMC7250149 DOI: 10.3389/fpls.2020.00510] [Citation(s) in RCA: 76] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/14/2019] [Accepted: 04/06/2020] [Indexed: 05/13/2023]
Abstract
The utilization of machine vision and its associated algorithms improves the efficiency, functionality, intelligence, and remote interactivity of harvesting robots in complex agricultural environments. Machine vision and its associated emerging technologies promise huge potential in advanced agricultural applications. However, machine vision and precise positioning still face many technical difficulties, making it difficult for most harvesting robots to achieve true commercial application. This article reports the application and research progress of harvesting robots and vision technology in fruit picking. The potential applications of vision and quantitative methods for localization, target recognition, 3D reconstruction, and fault tolerance in complex agricultural environments are the focus, and fault-tolerant technology designed for use with machine vision and robotic systems is also explored. The two main methods used in fruit recognition and localization are reviewed: digital image processing technology and deep learning-based algorithms. The future challenges posed by recognition and localization success rates are identified: target recognition under illumination changes and in occlusion environments; target tracking in dynamic, interference-laden environments; 3D target reconstruction; and fault tolerance of the vision system for agricultural robots. Finally, several open research problems specific to recognition and localization applications for fruit-harvesting robots are mentioned, and the latest developments and future trends of machine vision are described.
Affiliation(s)
- Yunchao Tang
- College of Urban and Rural Construction, Zhongkai University of Agriculture and Engineering, Guangzhou, China
- Mingyou Chen
- Key Laboratory of Key Technology on Agricultural Machine and Equipment, College of Engineering, South China Agricultural University, Guangzhou, China
- Chenglin Wang
- College of Mechanical and Electrical Engineering, Chongqing University of Arts and Sciences, Chongqing, China
- Lufeng Luo
- College of Mechanical and Electrical Engineering, Foshan University, Foshan, China
- Jinhui Li
- Key Laboratory of Key Technology on Agricultural Machine and Equipment, College of Engineering, South China Agricultural University, Guangzhou, China
- Guoping Lian
- Department of Chemical and Process Engineering, University of Surrey, Guildford, United Kingdom
- Xiangjun Zou
- Key Laboratory of Key Technology on Agricultural Machine and Equipment, College of Engineering, South China Agricultural University, Guangzhou, China
23
Kurtser P, Ringdahl O, Rotstein N, Berenstein R, Edan Y. In-Field Grape Cluster Size Assessment for Vine Yield Estimation Using a Mobile Robot and a Consumer Level RGB-D Camera. IEEE Robot Autom Lett 2020. [DOI: 10.1109/lra.2020.2970654] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
24
Applications of Deep Learning for Dense Scenes Analysis in Agriculture: A Review. SENSORS 2020; 20:s20051520. [PMID: 32164200 PMCID: PMC7085505 DOI: 10.3390/s20051520] [Citation(s) in RCA: 51] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/15/2020] [Revised: 02/20/2020] [Accepted: 03/03/2020] [Indexed: 11/23/2022]
Abstract
Deep Learning (DL) is the state-of-the-art machine learning technology, showing superior performance in computer vision, bioinformatics, natural language processing, and other areas. As a modern image processing technology in particular, DL has been successfully applied to various tasks, such as object detection, semantic segmentation, and scene analysis. However, with the increasing prevalence of dense scenes in reality, their analysis becomes particularly challenging due to severe occlusions and the small size of objects. To overcome these problems, DL has recently been applied increasingly to dense scenes, including dense agricultural scenes. The purpose of this review is to explore the applications of DL for dense scene analysis in agriculture. To better elaborate the topic, we first describe the types of dense scenes in agriculture, as well as their challenges. Next, we introduce various popular deep neural networks used in these dense scenes. Then, the applications of these structures to various agricultural tasks are comprehensively reviewed, including recognition and classification, detection, and counting and yield estimation. Finally, the surveyed DL applications, their limitations, and future work for the analysis of dense images in agriculture are summarized.
25
Arad B, Balendonck J, Barth R, Ben‐Shahar O, Edan Y, Hellström T, Hemming J, Kurtser P, Ringdahl O, Tielen T, Tuijl B. Development of a sweet pepper harvesting robot. J FIELD ROBOT 2020. [DOI: 10.1002/rob.21937] [Citation(s) in RCA: 99] [Impact Index Per Article: 24.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/17/2023]
Affiliation(s)
- Boaz Arad
- Department of Computer Science, Ben‐Gurion University of the Negev, Beer‐Sheva, Israel
- Jos Balendonck
- Greenhouse Horticulture, Wageningen University & Research, Wageningen, The Netherlands
- Ruud Barth
- Greenhouse Horticulture, Wageningen University & Research, Wageningen, The Netherlands
- Ohad Ben‐Shahar
- Department of Computer Science, Ben‐Gurion University of the Negev, Beer‐Sheva, Israel
- Yael Edan
- Department of Industrial Engineering and Management, Ben‐Gurion University of the Negev, Beer‐Sheva, Israel
- Jochen Hemming
- Greenhouse Horticulture, Wageningen University & Research, Wageningen, The Netherlands
- Polina Kurtser
- Department of Industrial Engineering and Management, Ben‐Gurion University of the Negev, Beer‐Sheva, Israel
- Ola Ringdahl
- Department of Computing Science, Umeå University, Umeå, Sweden
- Toon Tielen
- Greenhouse Horticulture, Wageningen University & Research, Wageningen, The Netherlands
- Bart Tuijl
- Greenhouse Horticulture, Wageningen University & Research, Wageningen, The Netherlands
26
Zhang T, Huang Z, You W, Lin J, Tang X, Huang H. An Autonomous Fruit and Vegetable Harvester with a Low-Cost Gripper Using a 3D Sensor. SENSORS 2019; 20:E93. [PMID: 31877904 PMCID: PMC6982854 DOI: 10.3390/s20010093] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/13/2019] [Revised: 12/02/2019] [Accepted: 12/19/2019] [Indexed: 11/26/2022]
Abstract
Reliable and robust systems to detect and harvest fruits and vegetables in unstructured environments are crucial for harvesting robots. In this paper, we propose an autonomous system that harvests most types of crops with peduncles. A geometric approach is first applied to obtain the cutting points of the peduncle based on the fruit bounding box, for which we have adapted the model of the state-of-the-art object detector named Mask Region-based Convolutional Neural Network (Mask R-CNN). We designed a novel gripper that simultaneously clamps and cuts the peduncles of crops without contacting the flesh. We have conducted experiments with a robotic manipulator to evaluate the effectiveness of the proposed harvesting system in being able to efficiently harvest most crops in real laboratory environments.
Affiliation(s)
- Hui Huang
- Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ), Shenzhen University, Shenzhen 510000, China
27
Yu JG, Li Y, Gao C, Gao H, Xia GS, Yu ZL, Li Y. Exemplar-Based Recursive Instance Segmentation With Application to Plant Image Analysis. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2019; 29:389-404. [PMID: 31329554 DOI: 10.1109/tip.2019.2923571] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Instance segmentation is a challenging computer vision problem which lies at the intersection of object detection and semantic segmentation. Motivated by plant image analysis in the context of plant phenotyping, a recently emerging application field of computer vision, this paper presents the Exemplar-Based Recursive Instance Segmentation (ERIS) framework. A three-layer probabilistic model is firstly introduced to jointly represent hypotheses, voting elements, instance labels and their connections. Afterwards, a recursive optimization algorithm is developed to infer the maximum a posteriori (MAP) solution, which handles one instance at a time by alternating among the three steps of detection, segmentation and update. The proposed ERIS framework departs from previous works mainly in two respects. First, it is exemplar-based and model-free, which can achieve instance-level segmentation of a specific object class given only a handful of (typically less than 10) annotated exemplars. Such a merit enables its use in case that no massive manually-labeled data is available for training strong classification models, as required by most existing methods. Second, instead of attempting to infer the solution in a single shot, which suffers from extremely high computational complexity, our recursive optimization strategy allows for reasonably efficient MAP-inference in full hypothesis space. The ERIS framework is substantialized for the specific application of plant leaf segmentation in this work. Experiments are conducted on public benchmarks to demonstrate the superiority of our method in both effectiveness and efficiency in comparison with the state-of-the-art.
28
Bellocchio E, Ciarfuglia TA, Costante G, Valigi P. Weakly Supervised Fruit Counting for Yield Estimation Using Spatial Consistency. IEEE Robot Autom Lett 2019. [DOI: 10.1109/lra.2019.2903260] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
29
Bresilla K, Perulli GD, Boini A, Morandi B, Corelli Grappadelli L, Manfrini L. Single-Shot Convolution Neural Networks for Real-Time Fruit Detection Within the Tree. FRONTIERS IN PLANT SCIENCE 2019; 10:611. [PMID: 31178875 PMCID: PMC6537632 DOI: 10.3389/fpls.2019.00611] [Citation(s) in RCA: 29] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/27/2018] [Accepted: 04/25/2019] [Indexed: 05/22/2023]
Abstract
Image/video processing for fruit detection in the tree using hard-coded feature extraction algorithms has shown high accuracy in recent years. While accurate, these approaches are still computationally intensive even with high-end hardware, and too slow for real-time systems. This paper details the use of a deep convolutional neural network architecture based on single-stage detectors. Using deep learning techniques eliminates the need to hard-code specific features for specific fruit shapes, colors and/or other attributes. The architecture takes the input image and divides it into an A×A grid, where A is a configurable hyperparameter that defines the fineness of the grid. To each grid cell an image detection and localization algorithm is applied, and each cell is responsible for predicting bounding boxes and a confidence score for any fruit (apple and pear in the case of this study) detected in that cell. This confidence score should be high if a fruit exists in a cell and zero otherwise. More than 100 images of apple and pear trees were taken, each containing approximately 50 fruits, which in the end resulted in more than 5000 images of apple and pear fruits each. Labeling images for training consisted of manually specifying the bounding boxes for fruits, where (x, y) are the center coordinates of the box and (w, h) are its width and height. This architecture showed an accuracy of more than 90% in fruit detection. Based on the correlation between the number of visible fruits, the fruits detected in one frame, and the real number of fruits on one tree, a model was created to accommodate this error rate. The processing speed is higher than 20 FPS, which is fast enough for any grasping/harvesting robotic arm or other real-time application. HIGHLIGHTS Using new convolutional deep learning techniques based on single-shot detectors to detect and count fruits (apple and pear) within the tree canopy.
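The A×A grid responsibility assignment described above can be sketched as follows; the grid size, image resolution, and function name are illustrative assumptions, not the paper's implementation:

```python
def grid_cell(cx, cy, img_w, img_h, a):
    """Map a fruit's bounding-box centre (cx, cy) in pixels to the (row, col)
    of the cell responsible for it in an a x a grid over an img_w x img_h image."""
    col = min(int(cx * a / img_w), a - 1)  # clamp centres on the right edge
    row = min(int(cy * a / img_h), a - 1)  # clamp centres on the bottom edge
    return row, col

# With a 7x7 grid on a 448x448 image, each cell covers 64x64 pixels,
# so a fruit centred at (100, 300) falls in cell (4, 1).
print(grid_cell(100, 300, 448, 448, 7))  # -> (4, 1)
```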
Affiliation(s)
- Kushtrim Bresilla
- Dipartimento di Scienze Agrarie, University of Bologna, Bologna, Italy
- Luigi Manfrini
- Dipartimento di Scienze Agrarie, University of Bologna, Bologna, Italy
30
Automatic Parameter Tuning for Adaptive Thresholding in Fruit Detection. SENSORS 2019; 19:s19092130. [PMID: 31071989 PMCID: PMC6539906 DOI: 10.3390/s19092130] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/27/2019] [Revised: 04/30/2019] [Accepted: 05/02/2019] [Indexed: 11/16/2022]
Abstract
This paper presents an automatic parameter tuning procedure specially developed for a dynamic adaptive thresholding algorithm for fruit detection. One of the algorithm's major strengths is its high detection performance using a small set of training images. The algorithm enables robust detection in highly variable lighting conditions. The image is dynamically split into variably-sized regions, where each region has approximately homogeneous lighting conditions. Nine thresholds were selected to accommodate three different illumination levels for three different dimensions in four color spaces: RGB, HSI, LAB, and NDI. Each color space uses a different method to represent a pixel in an image: RGB (Red, Green, Blue), HSI (Hue, Saturation, Intensity), LAB (Lightness, Green to Red, and Blue to Yellow) and NDI (Normalized Difference Index, which represents the normalized difference between the RGB color dimensions). The thresholds were selected by quantifying the required relation between the true positive rate and false positive rate. A tuning process was developed to determine the best-fit values of the algorithm parameters to enable easy adaptation to different kinds of fruits (shapes, colors) and environments (illumination conditions). Extensive analyses were conducted on three different databases acquired in natural growing conditions: red apples (nine images with 113 apples), green grape clusters (129 images with 1078 grape clusters), and yellow peppers (30 images with 73 peppers). These databases are provided as part of this paper for future developments. The algorithm was evaluated using cross-validation with 70% of the images for training and 30% for testing. The algorithm successfully detected apples and peppers in variable lighting conditions, resulting in F-scores of 93.17% and 99.31%, respectively. Results show the importance of the tuning process for the generalization of the algorithm to different kinds of fruits and environments. In addition, this research revealed the importance of evaluating different color spaces, since for each kind of fruit a different color space might be superior over the others. The LAB color space is most robust to noise. The algorithm is robust to changes in the threshold learned by the training process and to noise effects in images.
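As an illustration of the NDI color space mentioned above, here is a minimal per-pixel sketch; the (G − R)/(G + R) form and the example threshold are common conventions assumed for illustration, not values taken from the paper:

```python
def ndi(r, g):
    """Normalized Difference Index between the green and red channels,
    one common definition: (G - R) / (G + R), ranging over [-1, 1]."""
    return (g - r) / (g + r) if (g + r) != 0 else 0.0

def classify_pixel(r, g, threshold=0.1):
    """Illustrative per-pixel test: green vegetation tends to give a positive
    NDI, reddish fruit a negative one; a real threshold would come from tuning."""
    return "plant" if ndi(r, g) > threshold else "background"

print(ndi(50, 150))             # strongly green pixel -> 0.5
print(classify_pixel(200, 40))  # reddish pixel -> background
```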
31
A Mature-Tomato Detection Algorithm Using Machine Learning and Color Analysis. SENSORS 2019; 19:s19092023. [PMID: 31052169 PMCID: PMC6539546 DOI: 10.3390/s19092023] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/20/2019] [Revised: 04/25/2019] [Accepted: 04/26/2019] [Indexed: 11/16/2022]
Abstract
An algorithm was proposed for automatic tomato detection in regular color images to reduce the influence of illumination and occlusion. In this method, the Histograms of Oriented Gradients (HOG) descriptor was used to train a Support Vector Machine (SVM) classifier. A coarse-to-fine scanning method was developed to detect tomatoes, followed by a proposed False Color Removal (FCR) method to remove false-positive detections. Non-Maximum Suppression (NMS) was used to merge overlapping results. Compared with other methods, the proposed algorithm showed substantial improvement in tomato detection. The results of tomato detection in the test images showed that the recall, precision, and F1 score of the proposed method were 90.00%, 94.41%, and 92.15%, respectively.
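As a sanity check on the figures above: the F1 score is the harmonic mean of precision and recall, and the reported 92.15% indeed follows from the reported 94.41% precision and 90.00% recall:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reproducing the reported figure from precision = 94.41% and recall = 90.00%:
print(round(100 * f1_score(0.9441, 0.9000), 2))  # -> 92.15
```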
32
Controlled Lighting and Illumination-Independent Target Detection for Real-Time Cost-Efficient Applications. The Case Study of Sweet Pepper Robotic Harvesting. SENSORS 2019; 19:s19061390. [PMID: 30901837 PMCID: PMC6470490 DOI: 10.3390/s19061390] [Citation(s) in RCA: 29] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/15/2019] [Revised: 03/09/2019] [Accepted: 03/14/2019] [Indexed: 11/17/2022]
Abstract
Current harvesting robots are limited by low detection rates due to the unstructured and dynamic nature of both the objects and the environment. State-of-the-art algorithms include color- and texture-based detection, which are highly sensitive to the illumination conditions. Deep learning algorithms promise robustness at the cost of significant computational resources and the requirement for extensive databases. In this paper we present a Flash-No-Flash (FNF) controlled illumination acquisition protocol that frees the system from most ambient illumination effects and facilitates robust target detection while using only modest computational resources and no supervised training. The approach relies on the simultaneous acquisition of two images, with and without strong artificial lighting (“Flash”/“no-Flash”). The difference between these images represents the appearance of the target scene as if only the artificial light were present, allowing tight control over ambient light for color-based detection. A performance evaluation database was acquired in greenhouse conditions using an eye-in-hand RGB camera mounted on a robotic manipulator. The database includes 156 scenes with 468 images containing a total of 344 yellow sweet peppers. The performance of both color-blob and deep-learning detection algorithms is compared on Flash-only and FNF images. The collected database is made public.
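The FNF differencing idea can be sketched in a few lines; the single grayscale row and 8-bit clipping below are illustrative simplifications, not the paper's RGB acquisition pipeline:

```python
def flash_component(flash_row, no_flash_row):
    """Scene as lit only by the artificial light: per-pixel difference of the
    flash and no-flash exposures, clipped to the valid 8-bit range."""
    return [max(0, min(255, f - n)) for f, n in zip(flash_row, no_flash_row)]

# Toy grayscale row: ambient light contributes 40 everywhere, while the
# artificial flash adds 100 to the first pixel only.
print(flash_component([140, 40], [40, 40]))  # -> [100, 0]
```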
33
Vit A, Shani G. Comparing RGB-D Sensors for Close Range Outdoor Agricultural Phenotyping. SENSORS (BASEL, SWITZERLAND) 2018; 18:E4413. [PMID: 30551636 PMCID: PMC6308665 DOI: 10.3390/s18124413] [Citation(s) in RCA: 49] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/28/2018] [Revised: 12/03/2018] [Accepted: 12/05/2018] [Indexed: 11/20/2022]
Abstract
Phenotyping is the task of measuring plant attributes to analyze the current state of the plant. In agriculture, phenotyping can be used to make crop management decisions, such as the watering policy or whether to spray for a certain pest. Currently, large-scale phenotyping in fields is typically done using manual labor, which is a costly, low-throughput process. Researchers often advocate the use of automated systems for phenotyping, relying on sensors for making measurements. The recent rise of low-cost yet reasonably accurate RGB-D sensors has opened the way for using these sensors in field phenotyping applications. In this paper, we investigate the applicability of four different RGB-D sensors for this task. We conduct an outdoor experiment, measuring plant attributes at various distances and light conditions. Our results show that modern RGB-D sensors, in particular the Intel D435, provide a viable tool for close-range phenotyping tasks in fields.
Affiliation(s)
- Adar Vit
- Software and Information Systems Engineering, Ben Gurion University, Beer Sheva 84105, Israel
- Guy Shani
- Software and Information Systems Engineering, Ben Gurion University, Beer Sheva 84105, Israel
34
Hughes J, Scimeca L, Ifrim I, Maiolino P, Iida F. Achieving Robotically Peeled Lettuce. IEEE Robot Autom Lett 2018. [DOI: 10.1109/lra.2018.2855043] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
35
Dias PA, Tabb A, Medeiros H. Multispecies Fruit Flower Detection Using a Refined Semantic Segmentation Network. IEEE Robot Autom Lett 2018. [DOI: 10.1109/lra.2018.2849498] [Citation(s) in RCA: 52] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
36
Alencastre-Miranda M, Davidson JR, Johnson RM, Waguespack H, Krebs HI. Robotics for Sugarcane Cultivation: Analysis of Billet Quality using Computer Vision. IEEE Robot Autom Lett 2018. [DOI: 10.1109/lra.2018.2856999] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
37
Abstract
In this paper we introduce a new, high-quality dataset of images containing fruits. We also present the results of some numerical experiments on training a neural network to detect fruits. We discuss why we chose to use fruits in this project by proposing a few applications that could use such a classifier.
38
39
Evaluation of approach strategies for harvesting robots: Case study of sweet pepper harvesting. J INTELL ROBOT SYST 2018. [DOI: 10.1007/s10846-018-0892-7] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
40
41
42
43
Abstract
The presented work is part of the H2020 project SWEEPER, with the overall goal of developing a sweet pepper harvesting robot for use in greenhouses. As part of the solution, visual servoing is used to direct the manipulator towards the fruit. This requires accurate and stable fruit detection based on video images. To segment an image into background and foreground, thresholding techniques are commonly used. The varying illumination conditions in the unstructured greenhouse environment often cause shadows and overexposure. Furthermore, the color of the fruits to be harvested varies over the season. All this makes it sub-optimal to use fixed, pre-selected thresholds. In this paper we suggest an adaptive, image-dependent thresholding method. A variant of reinforcement learning (RL) is used, with a reward function that computes the similarity between the segmented image and the labeled image to give feedback for action selection. The RL-based approach requires fewer computational resources than the exhaustive search used as a benchmark, and results in higher performance than a Lipschitzian-based optimization approach. The proposed method also requires fewer labeled images than other methods. Several exploration-exploitation strategies are compared, and the results indicate that the Decaying Epsilon-Greedy algorithm gives the highest performance for this task. The highest performance with the Epsilon-Greedy algorithm (ϵ = 0.7) reached 87% of the performance achieved by exhaustive search, with 50% fewer iterations than the benchmark. The performance increased to 91.5% using the Decaying Epsilon-Greedy algorithm, with 73% fewer iterations than the benchmark.
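The decaying epsilon-greedy strategy that performed best above can be sketched as follows; the exponential decay schedule and the Q-value list are illustrative assumptions, not the paper's exact implementation:

```python
import random

def select_action(q_values, epsilon):
    """Epsilon-greedy: explore a random action with probability epsilon,
    otherwise exploit the action with the highest estimated reward."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

def decayed_epsilon(epsilon0, decay, step):
    """Exponentially decaying exploration rate."""
    return epsilon0 * decay ** step

# Early on (step 0) the agent explores often; later it mostly exploits.
print(decayed_epsilon(0.7, 0.99, 0))             # -> 0.7
print(round(decayed_epsilon(0.7, 0.99, 100), 3))  # much smaller after 100 steps
```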
44
Adamides G, Katsanos C, Parmet Y, Christou G, Xenos M, Hadzilacos T, Edan Y. HRI usability evaluation of interaction modes for a teleoperated agricultural robotic sprayer. APPLIED ERGONOMICS 2017; 62:237-246. [PMID: 28411734 DOI: 10.1016/j.apergo.2017.03.008] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/04/2016] [Revised: 01/03/2017] [Accepted: 03/12/2017] [Indexed: 05/05/2023]
Abstract
Teleoperation of an agricultural robotic system requires effective and efficient human-robot interaction. This paper investigates the usability of different interaction modes for agricultural robot teleoperation. Specifically, we examined the overall influence of two types of output devices (PC screen, head-mounted display), two types of peripheral vision support mechanisms (single view, multiple views), and two types of control input devices (PC keyboard, PS3 gamepad) on the observed and perceived usability of a teleoperated agricultural sprayer. A modular user interface for teleoperating an agricultural robot sprayer was constructed and field-tested. The evaluation covered eight interaction modes, the full set of combinations of the three factors. Thirty representative participants used each interaction mode to navigate the robot along a vineyard and spray grape clusters, following a 2 × 2 × 2 repeated measures experimental design. Objective metrics of the effectiveness and efficiency of the human-robot collaboration were collected, and participants completed questionnaires on their user experience with the system in each interaction mode. Results show that the most important factor for human-robot interface usability is the number and placement of views. The type of robot control input device also had a significant effect on certain dependent variables, whereas the screen output type had a significant effect only on the participants' perceived workload index. Specific recommendations for mobile field robot teleoperation to improve HRI awareness for the agricultural spraying task are presented.
Affiliation(s)
- George Adamides
- School of Pure and Applied Sciences, Open University of Cyprus, Lefkosia, Cyprus.
- Christos Katsanos
- School of Science and Technology, Hellenic Open University, Patras, Greece; Dept. of Business Administration, Technological Educational Institute of Western Greece, Patras, Greece
- Yisrael Parmet
- Dept. of Industrial Engineering & Management, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Michalis Xenos
- School of Science and Technology, Hellenic Open University, Patras, Greece
- Thanasis Hadzilacos
- School of Pure and Applied Sciences, Open University of Cyprus, Lefkosia, Cyprus; The Cyprus Institute, Lefkosia, Cyprus; CHILI, EPFL, Switzerland
- Yael Edan
- Dept. of Industrial Engineering & Management, Ben-Gurion University of the Negev, Beer-Sheva, Israel

45
Affiliation(s)
- Ron Berenstein
- Ben-Gurion University of the Negev; Beer-Sheva 8410501 Israel
- Yael Edan
- Ben-Gurion University of the Negev; Beer-Sheva 8410501 Israel

46
Kusumam K, Krajník T, Pearson S, Duckett T, Cielniak G. 3D-vision based detection, localization, and sizing of broccoli heads in the field. J FIELD ROBOT 2017. [DOI: 10.1002/rob.21726] [Citation(s) in RCA: 43] [Impact Index Per Article: 6.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Affiliation(s)
- Tomáš Krajník
- Artificial Intelligence Center, FEE; Czech Technical University; Czechia
- Simon Pearson
- Lincoln Institute for Agri-food Technology; University of Lincoln; UK
- Tom Duckett
- Lincoln Centre for Autonomous Systems; University of Lincoln; UK

47
Adamides G, Katsanos C, Constantinou I, Christou G, Xenos M, Hadzilacos T, Edan Y. Design and development of a semi-autonomous agricultural vineyard sprayer: Human-robot interaction aspects. J FIELD ROBOT 2017. [DOI: 10.1002/rob.21721] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Affiliation(s)
- George Adamides
- School of Pure and Applied Sciences; Open University of Cyprus; Latsia, Lefkosia, Cyprus
- Christos Katsanos
- School of Science and Technology; Hellenic Open University, Patras, Greece
- Department of Business Administration; Technological Educational Institute of Western Greece, Patras, Greece
- Michalis Xenos
- Department of Computer Engineering & Informatics; University of Patras, Patras, Greece
- Thanasis Hadzilacos
- School of Pure and Applied Sciences; Open University of Cyprus; Latsia, Lefkosia, Cyprus
- Yael Edan
- Department of Industrial Engineering & Management; Ben-Gurion University of the Negev; Beer-Sheva, Israel

48
Lehnert C, English A, McCool C, Tow AW, Perez T. Autonomous Sweet Pepper Harvesting for Protected Cropping Systems. IEEE Robot Autom Lett 2017. [DOI: 10.1109/lra.2017.2655622] [Citation(s) in RCA: 96] [Impact Index Per Article: 13.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
49
Bargoti S, Underwood JP. Image Segmentation for Fruit Detection and Yield Estimation in Apple Orchards. J FIELD ROBOT 2017. [DOI: 10.1002/rob.21699] [Citation(s) in RCA: 240] [Impact Index Per Article: 34.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Affiliation(s)
- Suchet Bargoti
- Australian Centre for Field Robotics; The University of Sydney; NSW 2006 Australia
- James P. Underwood
- Australian Centre for Field Robotics; The University of Sydney; NSW 2006 Australia

50
Botterill T, Paulin S, Green R, Williams S, Lin J, Saxton V, Mills S, Chen X, Corbett-Davies S. A Robot System for Pruning Grape Vines. J FIELD ROBOT 2016. [DOI: 10.1002/rob.21680] [Citation(s) in RCA: 66] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Affiliation(s)
- Tom Botterill
- Department of Computer Science; University of Canterbury; Christchurch New Zealand
- Scott Paulin
- Department of Mechatronics Engineering; University of Canterbury; Christchurch New Zealand
- Richard Green
- Department of Computer Science; University of Canterbury; Christchurch New Zealand
- Samuel Williams
- Department of Computer Science; University of Canterbury; Christchurch New Zealand
- Jessica Lin
- Department of Computer Science; University of Canterbury; Christchurch New Zealand
- Valerie Saxton
- Faculty of Agriculture and Life Sciences; Lincoln University; Lincoln New Zealand
- Steven Mills
- Department of Computer Science; University of Otago; Dunedin New Zealand
- XiaoQi Chen
- Department of Mechatronics Engineering; University of Canterbury; Christchurch New Zealand
- Sam Corbett-Davies
- Computational Vision and Geometry Lab.; Stanford University; Stanford, California 94305