1
Wang J, Du J, Tao C, Qi M, Yan J, Hu B, Zhang Z. Classification of Benign-Malignant Thyroid Nodules Based on Hyperspectral Technology. Sensors (Basel, Switzerland) 2024; 24:3197. PMID: 38794051; PMCID: PMC11126106; DOI: 10.3390/s24103197. Received: 02/15/2024; Revised: 05/14/2024; Accepted: 05/15/2024.
Abstract
In recent years, the incidence of thyroid cancer has risen rapidly. To address the inefficiency of diagnosing thyroid cancer during surgery, we propose a rapid method for diagnosing benign and malignant thyroid nodules based on hyperspectral technology. First, using our self-developed thyroid-nodule hyperspectral acquisition system, we obtained data for a large number of diverse thyroid nodule samples, providing a foundation for subsequent diagnosis. Second, because current research on medical hyperspectral image classification focuses mainly on pixel-based region segmentation, and to better meet clinical needs, we propose a method for classifying nodules as benign or malignant based on thyroid-nodule hyperspectral data blocks. Using the 3D CNN and VGG16 networks as a basis, we designed a neural network algorithm (V3Dnet) for classification based on three-dimensional hyperspectral data blocks. For a dataset with a block size of 50 × 50 × 196, the classification accuracy for benign and malignant samples reaches 84.63%. We also investigated the impact of data block size on classification performance and constructed a classification model that includes thyroid nodule sample acquisition, hyperspectral data preprocessing, and an algorithm for classifying thyroid nodules as benign or malignant based on hyperspectral data blocks. The proposed model is expected to be applied in thyroid surgery, thereby improving surgical accuracy and providing strong support for scientific research in related fields.
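The block-based pipeline in this abstract starts by cutting the acquired hypercube into fixed-size spatial blocks that keep every spectral band. A minimal NumPy sketch of that step (the 50 × 50 × 196 block size is the paper's; the cube dimensions and random data are toy stand-ins, not the study's):

```python
import numpy as np

def extract_blocks(cube, size=(50, 50)):
    """Split a hyperspectral cube (H, W, B) into non-overlapping
    spatial blocks of shape (size[0], size[1], B), keeping all bands."""
    h, w, _bands = cube.shape
    blocks = []
    for i in range(0, h - size[0] + 1, size[0]):
        for j in range(0, w - size[1] + 1, size[1]):
            blocks.append(cube[i:i + size[0], j:j + size[1], :])
    return np.stack(blocks)

cube = np.random.rand(100, 100, 196)   # toy stand-in for a nodule scan
blocks = extract_blocks(cube)
print(blocks.shape)                     # (4, 50, 50, 196)
```

Each resulting block, rather than a single pixel spectrum, would then be fed to the 3D classifier.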
Affiliation(s)
- Junjie Wang
- Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Key Laboratory of Biomedical Spectroscopy of Xi’an, Xi’an 710119, China
- Jian Du
- Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China
- Key Laboratory of Biomedical Spectroscopy of Xi’an, Xi’an 710119, China
- Chenglong Tao
- Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China
- Key Laboratory of Biomedical Spectroscopy of Xi’an, Xi’an 710119, China
- Meijie Qi
- Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China
- Key Laboratory of Biomedical Spectroscopy of Xi’an, Xi’an 710119, China
- Jiayue Yan
- Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Key Laboratory of Biomedical Spectroscopy of Xi’an, Xi’an 710119, China
- Bingliang Hu
- Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China
- Key Laboratory of Biomedical Spectroscopy of Xi’an, Xi’an 710119, China
- Zhoufeng Zhang
- Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China
- Key Laboratory of Biomedical Spectroscopy of Xi’an, Xi’an 710119, China
2
Liu Y, Wei C, Yoon SC, Ni X, Wang W, Liu Y, Wang D, Wang X, Guo X. Development of Multimodal Fusion Technology for Tomato Maturity Assessment. Sensors (Basel, Switzerland) 2024; 24:2467. PMID: 38676084; PMCID: PMC11054974; DOI: 10.3390/s24082467. Received: 03/07/2024; Revised: 04/02/2024; Accepted: 04/10/2024.
Abstract
The maturity of fruits and vegetables such as tomatoes significantly impacts quality indicators such as taste, nutritional value, and shelf life, making maturity determination vital in agricultural production and the food-processing industry. Tomatoes mature from the inside out, leading to uneven ripening between the interior and exterior, which makes it very challenging to judge their maturity from a single modality. In this paper, we propose a deep learning-assisted multimodal data fusion technique combining color imaging, spectroscopy, and haptic sensing for the maturity assessment of tomatoes. The method uses feature fusion to integrate feature information from the image, near-infrared spectral, and haptic modalities into a unified feature set and then classifies tomato maturity through deep learning. Each modality independently extracts features: the tomatoes' exterior color from color images, internal and surface spectral features linked to chemical composition in the visible and near-infrared range (350 nm to 1100 nm), and physical firmness from haptic sensing. By combining preprocessed and extracted features from multiple modalities, data fusion creates a comprehensive representation of all three modalities as a feature vector in a feature space suitable for tomato maturity assessment. A fully connected neural network is then constructed to process the fused data. This model achieves 99.4% accuracy in tomato maturity classification, surpassing single-modal methods (color imaging: 94.2%; spectroscopy: 87.8%; haptics: 87.2%). For internally and externally uneven maturity, the classification accuracy reaches 94.4%, demonstrating effective results. A comparative analysis of multimodal fusion against single-modal methods validates the stability and applicability of the multimodal fusion technique.
These findings demonstrate the key benefits of multimodal fusion in improving the accuracy of tomato ripeness classification and provide a strong theoretical and practical basis for applying multimodal fusion technology to grading the quality and maturity of other fruits and vegetables. Using deep learning (a fully connected neural network) to process the multimodal data provides a new and efficient non-destructive approach for the large-scale classification of agricultural and food products.
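Feature-level fusion as described here amounts to concatenating the per-modality feature vectors before a fully connected classifier. A toy NumPy sketch of the idea (the feature sizes, class count, and random weights are illustrative assumptions, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality feature vectors for one tomato sample.
img_feat = rng.random(64)   # colour-image features
nir_feat = rng.random(32)   # 350-1100 nm spectral features
tactile  = rng.random(8)    # haptic/firmness features

# Feature-level fusion: concatenate into one unified feature vector.
fused = np.concatenate([img_feat, nir_feat, tactile])

# One fully connected layer + softmax over maturity classes (toy weights).
W = rng.standard_normal((4, fused.size)) * 0.1   # 4 assumed maturity classes
logits = W @ fused
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(fused.shape, round(float(probs.sum()), 6))  # (104,) 1.0
```

In the actual system the fused vector feeds a trained multi-layer network rather than a single random layer.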
Affiliation(s)
- Yang Liu
- Beijing Key Laboratory of Optimization Design for Modern Agricultural Equipment, College of Engineering, China Agricultural University, Beijing 100083, China
- Chaojie Wei
- Beijing Key Laboratory of Optimization Design for Modern Agricultural Equipment, College of Engineering, China Agricultural University, Beijing 100083, China
- Seung-Chul Yoon
- Quality & Safety Assessment Research Unit, U.S. National Poultry Research Center, USDA-ARS, 950 College Station Rd., Athens, GA 30605, USA
- Xinzhi Ni
- Crop Genetics and Breeding Research Unit, United States Department of Agriculture Agricultural Research Service, 2747 Davis Road, Tifton, GA 31793, USA
- Wei Wang
- Beijing Key Laboratory of Optimization Design for Modern Agricultural Equipment, College of Engineering, China Agricultural University, Beijing 100083, China
- Yizhe Liu
- Beijing Key Laboratory of Optimization Design for Modern Agricultural Equipment, College of Engineering, China Agricultural University, Beijing 100083, China
- Daren Wang
- Beijing Key Laboratory of Optimization Design for Modern Agricultural Equipment, College of Engineering, China Agricultural University, Beijing 100083, China
- Xiaorong Wang
- Beijing Key Laboratory of Optimization Design for Modern Agricultural Equipment, College of Engineering, China Agricultural University, Beijing 100083, China
- Xiaohuan Guo
- Beijing Key Laboratory of Optimization Design for Modern Agricultural Equipment, College of Engineering, China Agricultural University, Beijing 100083, China
3
Montesinos-López A, Crespo-Herrera L, Dreisigacker S, Gerard G, Vitale P, Saint Pierre C, Govindan V, Tarekegn ZT, Flores MC, Pérez-Rodríguez P, Ramos-Pulido S, Lillemo M, Li H, Montesinos-López OA, Crossa J. Deep learning methods improve genomic prediction of wheat breeding. Frontiers in Plant Science 2024; 15:1324090. PMID: 38504889; PMCID: PMC10949530; DOI: 10.3389/fpls.2024.1324090. Received: 10/18/2023; Accepted: 02/19/2024.
Abstract
In the field of plant breeding, various machine learning models have been developed and studied to evaluate the genomic prediction (GP) accuracy of unseen phenotypes, and deep learning has shown promise. However, most studies of deep learning in plant breeding have been limited to small datasets, and only a few have explored its application to moderate-sized datasets. In this study, we aimed to address this limitation by utilizing a moderately large dataset. We examined the performance of a deep learning (DL) model and compared it with the widely used and powerful genomic best linear unbiased prediction (GBLUP) model. The goal was to assess GP accuracy under a five-fold cross-validation strategy and when predicting complete environments with the DL model. The results revealed that the DL model outperformed the GBLUP model in GP accuracy for two of the five traits under the five-fold cross-validation strategy, with similar results for the other traits, indicating the superiority of the DL model for those specific traits. Furthermore, when predicting complete environments using the leave-one-environment-out (LOEO) approach, the DL model demonstrated competitive performance. It is worth noting that the DL model employed in this study extends a previously proposed multi-modal DL model, which had been applied primarily to image data with small datasets. By utilizing a moderately large dataset, we were able to evaluate the performance and potential of the DL model in a more informative and challenging plant breeding scenario.
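The GBLUP baseline mentioned above is, in kernel form, ridge regression with the genomic relationship matrix G = XXᵀ/p built from centered markers. A minimal sketch with toy marker and phenotype data (the variance ratio λ would normally be estimated, e.g. by REML; here it is simply assumed, and all data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 40, 200
X = rng.choice([0.0, 1.0, 2.0], size=(n, p))      # toy marker matrix (allele counts)
X -= X.mean(axis=0)                                # centre each marker
G = X @ X.T / p                                    # genomic relationship matrix
y = rng.standard_normal(n)                         # toy phenotypes

lam = 1.0                                          # assumed residual/genetic variance ratio
alpha = np.linalg.solve(G + lam * np.eye(n), y)    # kernel ridge solve
y_hat = G @ alpha                                  # GBLUP fitted genomic values
print(y_hat.shape)                                 # (40,)
```

Cross-validation then masks a fold of `y`, solves on the rest, and predicts the held-out entries via the corresponding rows of `G`.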
Affiliation(s)
- Abelardo Montesinos-López
- Departamento de Matemáticas, Centro Universitario de Ciencias Exactas e Ingenierías (CUCEI), Universidad de Guadalajara, Guadalajara, Jalisco, Mexico
- Leonardo Crespo-Herrera
- International Maize and Wheat Improvement Center (CIMMYT), Texcoco, Estado de México, Mexico
- Susanna Dreisigacker
- International Maize and Wheat Improvement Center (CIMMYT), Texcoco, Estado de México, Mexico
- Guillermo Gerard
- International Maize and Wheat Improvement Center (CIMMYT), Texcoco, Estado de México, Mexico
- Paolo Vitale
- International Maize and Wheat Improvement Center (CIMMYT), Texcoco, Estado de México, Mexico
- Carolina Saint Pierre
- International Maize and Wheat Improvement Center (CIMMYT), Texcoco, Estado de México, Mexico
- Velu Govindan
- International Maize and Wheat Improvement Center (CIMMYT), Texcoco, Estado de México, Mexico
- Moisés Chavira Flores
- Instituto de Investigaciones en Matemáticas Aplicadas y Sistemas (IIMAS), Universidad Nacional Autónoma de México (UNAM), Ciudad Universitaria, Ciudad de México, Mexico
- Paulino Pérez-Rodríguez
- Estudios del Desarrollo Rural, Economía, Estadística y Cómputo Aplicado, Colegio de Postgraduados, Texcoco, Estado de México, Mexico
- Sofía Ramos-Pulido
- Departamento de Matemáticas, Centro Universitario de Ciencias Exactas e Ingenierías (CUCEI), Universidad de Guadalajara, Guadalajara, Jalisco, Mexico
- Morten Lillemo
- Department of Plant Science, Norwegian University of Life Science (NMBU), Ås, Norway
- Huihui Li
- State Key Laboratory of Crop Gene Resources and Breeding, Institute of Crop Sciences and CIMMYT China Office, Chinese Academy of Agricultural Sciences (CAAS), Beijing, China
- Jose Crossa
- International Maize and Wheat Improvement Center (CIMMYT), Texcoco, Estado de México, Mexico
- Estudios del Desarrollo Rural, Economía, Estadística y Cómputo Aplicado, Colegio de Postgraduados, Texcoco, Estado de México, Mexico
4
Badeka E, Karapatzak E, Karampatea A, Bouloumpasi E, Kalathas I, Lytridis C, Tziolas E, Tsakalidou VN, Kaburlasos VG. A Deep Learning Approach for Precision Viticulture, Assessing Grape Maturity via YOLOv7. Sensors (Basel, Switzerland) 2023; 23:8126. PMID: 37836956; PMCID: PMC10575379; DOI: 10.3390/s23198126. Received: 08/30/2023; Revised: 09/19/2023; Accepted: 09/25/2023.
Abstract
In the viticulture sector, robots are being employed more frequently to increase productivity and accuracy in operations such as vineyard mapping, pruning, and harvesting, especially in locations where human labor is scarce or expensive. This paper presents the development of an algorithm for grape maturity estimation in the framework of vineyard management. An object detection algorithm based on You Only Look Once (YOLO) v7 and its extensions is proposed to detect grape maturity in a white grape variety (Assyrtiko). The proposed algorithm was trained on images collected over a period of six weeks from grapevines in Drama, Greece. Tests on high-quality images demonstrated that five grape maturity stages can be detected. Furthermore, the proposed approach was compared against alternative object detection algorithms; the results showed that YOLO v7 outperforms the other architectures in both precision and accuracy. This work paves the way for the development of an autonomous robot for grapevine management.
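Detectors such as YOLO v7 are scored by matching predicted boxes to ground truth via intersection-over-union (IoU), which underlies the precision figures compared above. A small self-contained sketch of the IoU computation (the box coordinates are made up for illustration):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

# Two detections of the same grape bunch, slightly offset.
box_a = (10, 10, 50, 50)
box_b = (12, 12, 52, 52)
print(round(iou(box_a, box_b), 3))   # → 0.822
```

A prediction typically counts as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5.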
Affiliation(s)
- Eftichia Badeka
- Human-Machines Interaction Laboratory (HUMAIN-Lab), Department of Computer Science, International Hellenic University (IHU), 65404 Kavala, Greece
- Eleftherios Karapatzak
- Department of Agricultural Biotechnology and Oenology, International Hellenic University, 66100 Drama, Greece
- Aikaterini Karampatea
- Department of Agricultural Biotechnology and Oenology, International Hellenic University, 66100 Drama, Greece
- Elisavet Bouloumpasi
- Department of Agricultural Biotechnology and Oenology, International Hellenic University, 66100 Drama, Greece
- Ioannis Kalathas
- Human-Machines Interaction Laboratory (HUMAIN-Lab), Department of Computer Science, International Hellenic University (IHU), 65404 Kavala, Greece
- Chris Lytridis
- Human-Machines Interaction Laboratory (HUMAIN-Lab), Department of Computer Science, International Hellenic University (IHU), 65404 Kavala, Greece
- Emmanouil Tziolas
- Human-Machines Interaction Laboratory (HUMAIN-Lab), Department of Computer Science, International Hellenic University (IHU), 65404 Kavala, Greece
- Viktoria Nikoleta Tsakalidou
- Human-Machines Interaction Laboratory (HUMAIN-Lab), Department of Computer Science, International Hellenic University (IHU), 65404 Kavala, Greece
- Vassilis G. Kaburlasos
- Human-Machines Interaction Laboratory (HUMAIN-Lab), Department of Computer Science, International Hellenic University (IHU), 65404 Kavala, Greece
5
Aline U, Bhattacharya T, Faqeerzada MA, Kim MS, Baek I, Cho BK. Advancement of non-destructive spectral measurements for the quality of major tropical fruits and vegetables: a review. Frontiers in Plant Science 2023; 14:1240361. PMID: 37662162; PMCID: PMC10471194; DOI: 10.3389/fpls.2023.1240361. Received: 06/14/2023; Accepted: 07/27/2023.
Abstract
The quality of tropical fruits and vegetables and the expanding global interest in eating healthy foods have driven the continual development of reliable, quick, and cost-effective quality assurance methods. The present review discusses the advancement of non-destructive spectral measurements for evaluating the quality of major tropical fruits and vegetables. Fourier transform infrared (FTIR) spectroscopy, near-infrared (NIR) spectroscopy, Raman spectroscopy, and hyperspectral imaging (HSI) have been used to monitor the external and internal parameters of papaya, pineapple, avocado, mango, and banana. The ability of HSI to capture both spectral and spatial dimensions has proved efficient for measuring external qualities, for example grading 516 bananas and detecting defects in 10 mangoes and 10 avocados with accuracies of 98.45%, 97.95%, and 99.9%, respectively. All of the techniques effectively assessed internal characteristics such as total soluble solids (TSS), soluble solid content (SSC), and moisture content (MC), with the exception of NIR, which was found to have limited penetration depth for fruits and vegetables with thick rinds or skins, including avocado, pineapple, and banana. Appropriate selection of the NIR optical geometry and wavelength range can help improve prediction accuracy for these crops. The advancement of spectral measurements combined with machine learning and deep learning technologies has increased the efficiency of estimating the six maturity stages of papaya fruit, from unripe to overripe, with F1 scores of up to 0.90 by concatenating features derived from HSI and visible-light data. The presented findings on technological advancements in non-destructive spectral measurement offer promising quality assurance for tropical fruits and vegetables.
Affiliation(s)
- Umuhoza Aline
- Department of Agricultural Machinery Engineering, Chungnam National University, Daejeon, Republic of Korea
- Tanima Bhattacharya
- Department of Agricultural Machinery Engineering, Chungnam National University, Daejeon, Republic of Korea
- Moon S. Kim
- Environmental Microbial and Food Safety Laboratory, Agricultural Research Service, United States Department of Agriculture, Beltsville, MD, United States
- Insuck Baek
- Environmental Microbial and Food Safety Laboratory, Agricultural Research Service, United States Department of Agriculture, Beltsville, MD, United States
- Byoung-Kwan Cho
- Department of Agricultural Machinery Engineering, Chungnam National University, Daejeon, Republic of Korea
- Department of Smart Agricultural Systems, Chungnam National University, Daejeon, Republic of Korea
6
Li P, Zheng J, Li P, Long H, Li M, Gao L. Tomato Maturity Detection and Counting Model Based on MHSA-YOLOv8. Sensors (Basel, Switzerland) 2023; 23:6701. PMID: 37571485; PMCID: PMC10422388; DOI: 10.3390/s23156701. Received: 06/28/2023; Revised: 07/19/2023; Accepted: 07/25/2023.
Abstract
Online automated maturity grading and counting of tomato fruits supports digital supervision of fruit growth status and unmanned precision operations during the planting process. Traditionally, grading and counting tomato fruit maturity are done manually, which is time-consuming and laborious, and their precision depends on the accuracy of human observation. The combination of artificial intelligence and machine vision has to some extent solved this problem. In this work, a digital camera was first used to acquire tomato fruit image datasets, taking into account factors such as occlusion and external light interference. Second, based on the requirements of the tomato maturity grading task, the MHSA attention mechanism was adopted to improve YOLOv8's backbone and enhance the network's ability to extract diverse features. The Precision, Recall, F1-score, and mAP50 of the tomato fruit maturity grading model built on MHSA-YOLOv8 were 0.806, 0.807, 0.806, and 0.864, respectively, improving performance with only a slight increase in model size. Finally, thanks to the strong performance of MHSA-YOLOv8, the Precision, Recall, F1-score, and mAP50 of the counting model were 0.990, 0.960, 0.975, and 0.916, respectively. The tomato maturity grading and counting model constructed in this study is suitable for both online and offline detection, greatly helping tomato growers improve harvesting and grading efficiency.
The main innovations of this study are as follows: (1) a tomato maturity grading and counting dataset collected from actual production scenarios was constructed; (2) considering the complexity of the environment, a new object detection method, MHSA-YOLOv8, is proposed, and tomato maturity grading and counting models are constructed; (3) the constructed models are suitable for both online and offline grading and counting.
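The MHSA block grafted onto the YOLOv8 backbone is standard multi-head self-attention over flattened feature-map tokens. A toy NumPy sketch with identity Q/K/V projections (real implementations use learned projection matrices; the token count, width, and head count here are illustrative):

```python
import numpy as np

def mhsa(x, num_heads=2):
    """Toy multi-head self-attention over x of shape (tokens, dim),
    using identity Q/K/V projections on per-head channel slices."""
    t, d = x.shape
    dh = d // num_heads
    out = np.zeros_like(x)
    for h in range(num_heads):
        q = k = v = x[:, h * dh:(h + 1) * dh]     # identity projections
        scores = q @ k.T / np.sqrt(dh)            # scaled dot-product
        attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
        attn /= attn.sum(axis=-1, keepdims=True)  # row-wise softmax
        out[:, h * dh:(h + 1) * dh] = attn @ v
    return out

x = np.random.default_rng(2).random((16, 8))      # 16 feature-map tokens
print(mhsa(x).shape)                              # (16, 8)
```

Each token's output is a softmax-weighted mix of all tokens, which is what lets the backbone aggregate context from occluded or distant fruit regions.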
Affiliation(s)
- Lihong Gao
- Chongqing Academy of Agricultural Sciences, Chongqing 401329, China
7
Zuo Z, Mu J, Li W, Bu Q, Mao H, Zhang X, Han L, Ni J. Study on the detection of water status of tomato (Solanum lycopersicum L.) by multimodal deep learning. Frontiers in Plant Science 2023; 14:1094142. PMID: 37324706; PMCID: PMC10264697; DOI: 10.3389/fpls.2023.1094142. Received: 11/09/2022; Accepted: 05/10/2023.
Abstract
Water plays a very important role in the growth of tomato (Solanum lycopersicum L.), and detecting the water status of tomato is the key to precise irrigation. The objective of this study is to detect the water status of tomato by fusing RGB, NIR and depth image information through deep learning. Five irrigation levels were set to cultivate tomatoes in different water states, with irrigation amounts of 150%, 125%, 100%, 75%, and 50% of reference evapotranspiration calculated by a modified Penman-Monteith equation, respectively. The water status of tomatoes was divided into five categories: severely irrigated deficit, slightly irrigated deficit, moderately irrigated, slightly over-irrigated, and severely over-irrigated. RGB images, depth images and NIR images of the upper part of the tomato plant were taken as data sets. The data sets were used to train and test the tomato water status detection models built with single-mode and multimodal deep learning networks, respectively. In the single-mode deep learning network, two CNNs, VGG-16 and ResNet-50, were trained on a single RGB image, a depth image, or a NIR image for a total of six cases. In the multimodal deep learning network, two or more of the RGB images, depth images and NIR images were trained with VGG-16 or ResNet-50, respectively, for a total of 20 combinations. Results showed that the accuracy of tomato water status detection based on single-mode deep learning ranged from 88.97% to 93.09%, while the accuracy of tomato water status detection based on multimodal deep learning ranged from 93.09% to 99.18%. The multimodal deep learning significantly outperformed the single-modal deep learning. The tomato water status detection model built using a multimodal deep learning network with ResNet-50 for RGB images and VGG-16 for depth and NIR images was optimal. This study provides a novel method for non-destructive detection of water status of tomato and gives a reference for precise irrigation management.
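The five irrigation treatments are defined as fixed fractions of reference evapotranspiration (ET0). A tiny illustration with an assumed daily ET0 value (in the study, ET0 came from a modified Penman-Monteith equation, not a constant):

```python
# Irrigation amounts at the five treatment levels, as fractions of a
# hypothetical daily reference evapotranspiration ET0 (mm/day).
et0 = 4.0                                  # assumed ET0 value, mm/day
levels = [1.50, 1.25, 1.00, 0.75, 0.50]    # the study's five irrigation ratios
amounts = [round(et0 * r, 2) for r in levels]
print(amounts)                             # [6.0, 5.0, 4.0, 3.0, 2.0]
```

The 150% and 125% treatments over-irrigate relative to ET0, while the 75% and 50% treatments induce the deficit states the models are trained to recognize.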
Affiliation(s)
- Zhiyu Zuo
- School of Agricultural Engineering, Jiangsu University, Zhenjiang, China
- Key Laboratory of Modern Agricultural Equipment and Technology, Ministry of Education/High-tech Key Laboratory of Agricultural Equipment and Intelligence of Jiangsu Province, Jiangsu University, Zhenjiang, China
- Jindong Mu
- School of Agricultural Engineering, Jiangsu University, Zhenjiang, China
- Wenjie Li
- School of Agricultural Engineering, Jiangsu University, Zhenjiang, China
- Quan Bu
- School of Agricultural Engineering, Jiangsu University, Zhenjiang, China
- Hanping Mao
- School of Agricultural Engineering, Jiangsu University, Zhenjiang, China
- Key Laboratory of Modern Agricultural Equipment and Technology, Ministry of Education/High-tech Key Laboratory of Agricultural Equipment and Intelligence of Jiangsu Province, Jiangsu University, Zhenjiang, China
- Xiaodong Zhang
- School of Agricultural Engineering, Jiangsu University, Zhenjiang, China
- Lvhua Han
- School of Agricultural Engineering, Jiangsu University, Zhenjiang, China
- Jiheng Ni
- Key Laboratory of Modern Agricultural Equipment and Technology, Ministry of Education/High-tech Key Laboratory of Agricultural Equipment and Intelligence of Jiangsu Province, Jiangsu University, Zhenjiang, China
8
Zhu J, Zhou S, Ning Y, Dun X, Dong S, Wang Z, Cheng X. Grayscale-patterned integrated multilayer-metal-dielectric microcavities for on-chip multi/hyperspectral imaging in the extended visible bandwidth. Optics Express 2023; 31:14027-14036. PMID: 37157275; DOI: 10.1364/oe.485869.
Abstract
Pixelated filter arrays of Fabry-Perot (FP) cavities are widely integrated with photodetectors to achieve WYSIWYG ("what you see is what you get") on-chip spectral measurements. However, FP-filter-based spectral sensors typically face a trade-off between spectral resolution and working bandwidth due to the design limitations of conventional metal or dielectric multilayer microcavities. Here, we propose integrated color filter arrays (CFAs) consisting of multilayer metal-dielectric-mirror FP microcavities that enable hyperspectral resolution over an extended visible bandwidth (∼300 nm). By introducing two additional dielectric layers on the metallic film, the broadband reflectance of the FP-cavity mirror is greatly enhanced, accompanied by as-flat-as-possible reflection-phase dispersion. This results in a balanced spectral resolution (∼10 nm) over a spectral bandwidth from 450 nm to 750 nm. In the experiment, we used a one-step rapid manufacturing process based on grayscale e-beam lithography. A 16-channel (4 × 4) CFA was fabricated and demonstrated on-chip spectral imaging with a CMOS sensor and an impressive identification capability. Our results provide an attractive route to high-performance spectral sensors, with potential commercial applications enabled by the low-cost manufacturing process.
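An ideal FP microcavity's behavior can be sketched with the Airy transmission function: transmission peaks where the round-trip phase is a multiple of 2π. A minimal sketch with assumed mirror reflectance and cavity length (the paper's engineered multilayer mirrors additionally shape reflectance and phase dispersion, which this ideal lossless model ignores):

```python
import numpy as np

def fp_transmission(wavelength_nm, cavity_nm, r=0.9, n=1.0):
    """Ideal Fabry-Perot (Airy) transmission for mirror reflectance r
    and optical cavity length n * cavity_nm (phase dispersion ignored)."""
    delta = 4 * np.pi * n * cavity_nm / wavelength_nm   # round-trip phase
    f = 4 * r / (1 - r) ** 2                            # coefficient of finesse
    return 1.0 / (1.0 + f * np.sin(delta / 2) ** 2)

wl = np.linspace(450, 750, 3001)                        # the paper's 450-750 nm band
t = fp_transmission(wl, cavity_nm=600)                  # assumed 600 nm cavity
peak = wl[np.argmax(t)]
print(round(float(peak), 1))
```

For a 600 nm cavity the order m = 2 resonance (λ = 2nL/m = 600 nm) is the only peak inside the band; higher mirror reflectance narrows it, which is the resolution/bandwidth trade-off the abstract describes.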
9
Mohd Ali M, Hashim N, Abd Aziz S, Lasekan O. Utilisation of Deep Learning with Multimodal Data Fusion for Determination of Pineapple Quality Using Thermal Imaging. Agronomy 2023; 13:401. DOI: 10.3390/agronomy13020401.
Abstract
Fruit quality is an important factor in determining consumer preference in the supply chain. Thermal imaging was used to distinguish different pineapple varieties according to the physicochemical changes of the fruit by means of deep learning. Deep learning has gained attention in fruit classification and recognition in unimodal processing. This paper proposes a multimodal data fusion framework for determining pineapple quality using deep learning methods based on features extracted from thermal imaging. Features that correlate with the quality attributes of the fruit were selected from the thermal images for developing the deep learning models. Three deep learning architectures, ResNet, VGG16, and InceptionV3, were built to develop the multimodal data fusion framework for classifying pineapple varieties based on the concatenation of multiple features extracted by the networks. Multimodal data fusion coupled with powerful convolutional neural network architectures can remarkably distinguish different pineapple varieties. The proposed framework provides a reliable determination of fruit quality, improving recognition accuracy and model performance up to 0.9687. The effectiveness of multimodal deep learning data fusion and thermal imaging has huge potential for real-time monitoring of physicochemical changes of fruit.
Affiliation(s)
- Maimunah Mohd Ali
- Department of Biological and Agricultural Engineering, Faculty of Engineering, Universiti Putra Malaysia, Serdang 43400, Selangor, Malaysia
- Norhashila Hashim
- Department of Biological and Agricultural Engineering, Faculty of Engineering, Universiti Putra Malaysia, Serdang 43400, Selangor, Malaysia
- SMART Farming Technology Research Centre, Faculty of Engineering, Universiti Putra Malaysia, Serdang 43400, Selangor, Malaysia
- Samsuzana Abd Aziz
- Department of Biological and Agricultural Engineering, Faculty of Engineering, Universiti Putra Malaysia, Serdang 43400, Selangor, Malaysia
- SMART Farming Technology Research Centre, Faculty of Engineering, Universiti Putra Malaysia, Serdang 43400, Selangor, Malaysia
- Ola Lasekan
- Department of Food Technology, Faculty of Food Science and Technology, Universiti Putra Malaysia, Serdang 43400, Selangor, Malaysia
10
Hosseinnia Shavaki F, Ebrahimi Ghahnavieh A. Applications of deep learning into supply chain management: a systematic literature review and a framework for future research. Artif Intell Rev 2022; 56:4447-4489. PMID: 36212799; PMCID: PMC9524740; DOI: 10.1007/s10462-022-10289-z. Accepted: 09/22/2022.
Abstract
In today’s complex and ever-changing world, Supply Chain Management (SCM) is increasingly becoming a cornerstone for companies across all industries. The rapidly growing interest in applying Deep Learning (a class of machine learning algorithms) to SCM has created the need for an up-to-date systematic review of the research. The main purpose of this study is to provide a comprehensive overview by reviewing a set of 43 papers on applications of Deep Learning (DL) methods to SCM, as well as the trends, perspectives, and potential research gaps. The review uses content analysis to answer three research questions: (1) What SCM problems have been solved by DL techniques? (2) What DL algorithms have been used to solve these problems? (3) What alternative algorithms have been used to tackle the same problems, does DL outperform them, and by which evaluation metrics? The review also develops a conceptual framework from a value-adding perspective that provides a full picture of where and how DL can be applied within the SCM context. This makes it easier to identify potential applications for corporations, as well as potential future research areas for science. It might also give businesses a competitive advantage by allowing them to add value to their data by analyzing it quickly and precisely.
11
Deep Learning Based Dual Channel Banana Grading System Using Convolution Neural Network. J Food Quality 2022. DOI: 10.1155/2022/6050284.
Abstract
Deep learning has recently become the leading computer vision technology for image classification, and the invention of the convolutional neural network (CNN) has greatly simplified feature engineering. Classifying fruit at different maturity stages with machine learning is difficult because the visual features of the stages are hard to distinguish, yet ripeness is critical in agriculture because it determines fruit quality. Manual maturity grading has several drawbacks: it is slow, labor-intensive, and prone to inconsistency. In developing countries, where agriculture is one of the most important economic sectors, the created system can be employed in the food-processing business and in real-life applications where the intelligent system’s accuracy, cost, and speed improve production rates and help satisfy consumer demand. With a small number of image samples, the system can automate assembly-line banana classification with sufficient overall accuracy, and it can either replace or assist human operators, who can then focus their efforts on fruit selection. The study highlights the combined merits of RGB imaging and hyperspectral imaging (HSI) for banana classification, an approach with possible application as a model for classifying other clustered fruits and horticultural produce, and the multi-input model’s quick processing time makes it a useful and handy technique for postharvest procedures in the farm field. Via a combination of a CNN and an MLP applied to data collected with RGB and hyperspectral imaging, the multi-input model reliably recognizes bananas with an accuracy of 98.4% and an F1-score of 0.97, and it predicted the size (large, medium, and small) and perspective (front or rear half) of banana classes with 99% accuracy. In comparison to previous studies that employed RGB imaging alone, the presented model reveals the value of integrating RGB imaging and HSI approaches.
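The abstract does not describe the model's exact architecture, but a multi-input design of this kind typically extracts features from each imaging channel and fuses them before a shared classifier head. Below is a minimal, dependency-free sketch of that late-fusion step; the feature sizes, weights, and softmax head are illustrative assumptions, not the paper's implementation:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def late_fusion_classify(rgb_features, hsi_features, weights, bias):
    """Concatenate per-branch feature vectors (e.g. CNN features from RGB,
    MLP features from hyperspectral bands) and apply a linear softmax head."""
    fused = list(rgb_features) + list(hsi_features)
    logits = [sum(w * x for w, x in zip(row, fused)) + b
              for row, b in zip(weights, bias)]
    return softmax(logits)

if __name__ == "__main__":
    # Illustrative sizes: 8 RGB-branch features, 16 HSI-branch features, 4 classes.
    rgb = [0.5] * 8
    hsi = [0.2] * 16
    W = [[0.1] * 24, [0.2] * 24, [-0.1] * 24, [0.0] * 24]
    b = [0.0] * 4
    print(late_fusion_classify(rgb, hsi, W, b))
```

In a real network the two branches would be trained jointly so each learns features complementary to the other; the fusion itself is just a concatenation followed by a shared head, as above.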
12
Nondestructive Detection of Codling Moth Infestation in Apples Using Pixel-Based NIR Hyperspectral Imaging with Machine Learning and Feature Selection. Foods 2021; 11(1):8. PMID: 35010134; PMCID: PMC8750721; DOI: 10.3390/foods11010008.
Abstract
Codling moth (CM) (Cydia pomonella L.), a devastating pest, poses a serious problem for apple production and marketing in apple-producing countries. Effective, nondestructive early detection of external and internal defects in CM-infested apples could therefore markedly reduce postharvest losses and improve the quality of the final product. In this study, near-infrared (NIR) hyperspectral reflectance imaging in the 900–1700 nm wavelength range was applied to detect CM infestation at the pixel level for three organic apple cultivars, namely Gala, Fuji, and Granny Smith. An effective region-of-interest (ROI) acquisition procedure, combined with different machine learning and data processing methods, was used to build robust, high-accuracy classification models. Optimal wavelengths were selected with sequential stepwise selection methods to build multispectral imaging models for fast and effective classification. The results showed that infested and healthy samples were classified at the pixel level with up to 97.4% total accuracy on the validation dataset, with a gradient tree boosting (GTB) ensemble classifier performing best among the tested models. The feature selection algorithm reached a maximum accuracy of 91.6% with only 22 selected wavelengths. These findings indicate the high potential of NIR hyperspectral imaging (HSI) for detecting and classifying latent CM infestation in apples of different cultivars.
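The abstract does not detail the stepwise procedure used; the sketch below shows a generic sequential forward selection loop of the kind commonly used to pick informative wavelengths. The subset score here is leave-one-out nearest-centroid accuracy, chosen purely for illustration and not taken from the paper:

```python
def loo_centroid_accuracy(spectra, labels, bands):
    """Leave-one-out nearest-centroid accuracy using only the given bands."""
    n = len(spectra)
    correct = 0
    for i in range(n):
        centroids, counts = {}, {}
        for j in range(n):
            if j == i:
                continue
            y = labels[j]
            if y not in centroids:
                centroids[y] = [0.0] * len(bands)
                counts[y] = 0
            counts[y] += 1
            for k, b in enumerate(bands):
                centroids[y][k] += spectra[j][b]
        for y in centroids:
            centroids[y] = [c / counts[y] for c in centroids[y]]
        best_y, best_d = None, float("inf")
        for y, c in centroids.items():
            d = sum((spectra[i][b] - c[k]) ** 2 for k, b in enumerate(bands))
            if d < best_d:
                best_y, best_d = y, d
        if best_y == labels[i]:
            correct += 1
    return correct / n

def forward_select_bands(spectra, labels, n_bands):
    """Greedy sequential forward selection: at each step, add the wavelength
    band that most improves the subset's classification score."""
    selected = []
    remaining = list(range(len(spectra[0])))
    while len(selected) < n_bands and remaining:
        best = max(remaining,
                   key=lambda b: loo_centroid_accuracy(spectra, labels, selected + [b]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

Because the score is evaluated on the candidate subset as a whole, the loop can pick up bands that are only useful in combination, which simple per-band ranking would miss.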
13
Wu X, Li J, Zhou G, Lü B, Li Q, Yang H. RRG-GAN Restoring Network for Simple Lens Imaging System. Sensors (Basel) 2021; 21:3317. PMID: 34064779; PMCID: PMC8150399; DOI: 10.3390/s21103317.
Abstract
The simple-lens computational imaging method provides an alternative way to achieve high-quality photography: it simplifies the optical front end to a single convex lens and delegates the correction of optical aberrations to a dedicated computational restoration algorithm. Traditional single-convex-lens image restoration is based on optimization theory, which has shortcomings in both efficiency and efficacy. In this paper, we propose a novel Recursive Residual Groups network under a Generative Adversarial Network framework (RRG-GAN) to generate a clear image from an aberration-degraded blurry image. The RRG-GAN network includes a dual attention module, a selective kernel network module, and a residual resizing module, which make it better suited to the non-uniform deblurring task. To validate the restoration algorithm, we collected sharp/aberration-degraded datasets via CODE V simulation. To test practical performance, we built a display-capture lab setup and constructed a manually registered dataset. Experimental comparisons and real-world tests verify the effectiveness of the proposed method.
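The abstract names recursive residual groups but not their internal layout; a common pattern in restoration networks is a stack of residual blocks wrapped in a longer group-level skip connection. The following is a dependency-free sketch of that two-level residual structure; the block functions and 1-D signal are illustrative assumptions, not the paper's layers:

```python
def residual_group(x, blocks):
    """Two-level residual structure: each block adds its correction to the
    running signal (local skip), and the whole group adds the original input
    back at the end (group-level skip)."""
    y = list(x)
    for block in blocks:
        delta = block(y)
        y = [yi + di for yi, di in zip(y, delta)]  # local residual connection
    return [xi + yi for xi, yi in zip(x, y)]       # group-level skip connection

if __name__ == "__main__":
    # Toy "block": returns a small damping correction of the signal.
    damp = lambda v: [-0.1 * vi for vi in v]
    print(residual_group([1.0, 2.0], [damp, damp]))
```

The skip connections mean each block only has to learn a residual correction rather than the full mapping, which is why such groups train stably even when stacked deeply.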
Affiliation(s)
- Xiaotian Wu: College of Opto-Electronic Engineering, Changchun University of Science and Technology, Changchun 130022, China; Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- Jiongcheng Li: School of Informatics, Xiamen University, Xiamen 361005, China
- Guanxing Zhou: School of Informatics, Xiamen University, Xiamen 361005, China
- Bo Lü: Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- Qingqing Li: Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- Hang Yang: Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China