1
Hamdy S, Charrier A, Le Corre L, Rasti P, Rousseau D. Toward robust and high-throughput detection of seed defects in X-ray images via deep learning. Plant Methods 2024; 20:63. PMID: 38711143. DOI: 10.1186/s13007-024-01195-2.
Abstract
BACKGROUND The detection of internal defects in seeds via non-destructive imaging techniques is a topic of high interest for optimizing the quality of seed lots. In this context, X-ray imaging is especially suitable. Recent studies have shown the feasibility of defect detection via deep learning models in 3D tomography images. We demonstrate the possibility of performing such deep learning-based analysis on 2D X-ray radiography for a faster yet robust method via the X-Robustifier pipeline proposed in this article. RESULTS 2D X-ray images of both defective and defect-free seeds were acquired. A deep learning model based on state-of-the-art object detection neural networks is proposed. Specific data augmentation techniques are introduced to compensate for the low ratio of defects and to increase robustness to variations in the physical parameters of X-ray imaging systems. The seed defects were accurately detected (F1-score >90%), surpassing human performance in computation time and error rate. The robustness of these models against the principal distortions commonly found in actual agro-industrial conditions is demonstrated, in particular robustness to physical noise, dimensionality reduction, and the presence of seed coating. CONCLUSION This work provides a full pipeline to automatically detect common defects in seeds via 2D X-ray imaging. The method is illustrated on sugar beet and faba bean and could be efficiently extended to other species via the proposed generic X-ray data processing approach (X-Robustifier). Beyond a simple proof of feasibility, these results constitute an important step toward the routine use of deep learning-based automatic detection of seed defects.
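The robustness-to-physical-noise test described above can be mimicked with a simple Poisson (photon-counting) noise augmentation, a standard way to simulate lower-dose X-ray acquisition. This is a minimal illustrative sketch, not the authors' X-Robustifier code; the photon budget is an assumed value.

```python
import numpy as np

def xray_noise_augment(img, dose_scale=0.5, rng=None):
    """Simulate a lower-dose X-ray acquisition by applying Poisson
    (photon-counting) noise. `img` is a float array in [0, 1];
    `dose_scale` < 1 means fewer photons and thus a noisier image.
    The 10,000-photon budget is an assumed illustrative value."""
    rng = np.random.default_rng(rng)
    photons = 10_000 * dose_scale
    noisy = rng.poisson(img * photons) / photons
    return np.clip(noisy, 0.0, 1.0)

# A flat mid-gray patch: the augmented version keeps the same mean
# brightness but gains dose-dependent pixel noise.
img = np.full((8, 8), 0.5)
aug = xray_noise_augment(img, dose_scale=0.25, rng=0)
print(aug.shape)  # → (8, 8)
```

Training on such perturbed copies is one way a detector can be made less sensitive to acquisition settings that vary between industrial X-ray systems.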
Affiliation(s)
- Sherif Hamdy
- GEVES, Station Nationale d'Essais de Semences, 25 Georges Morel, 49070, Beaucouze, France
- Aurélie Charrier
- GEVES, Station Nationale d'Essais de Semences, 25 Georges Morel, 49070, Beaucouze, France
- Laurence Le Corre
- GEVES, Station Nationale d'Essais de Semences, 25 Georges Morel, 49070, Beaucouze, France
- Pejman Rasti
- Laboratoire Angevin de Recherche en Ingénierie des Systèmes (LARIS), UMR INRAe IRHS, Université d'Angers, 62 Avenue Notre Dame du Lac, 49100, Angers, France
- Centre d'Études et de Recherche pour l'Aide à la Décision (CERADE), École d'ingénieurs (ESAIP), 49100, Angers, France
- David Rousseau
- Laboratoire Angevin de Recherche en Ingénierie des Systèmes (LARIS), UMR INRAe IRHS, Université d'Angers, 62 Avenue Notre Dame du Lac, 49100, Angers, France
- IRHS, INRAE, Institut Agro, Univ. Angers, SFR4207 QuaSaV, 42 Georges Morel CS 60057, 49071, Beaucouze, France
2
Chaudhuri A. Smart traffic management of vehicles using faster R-CNN based deep learning method. Sci Rep 2024; 14:10357. PMID: 38710753. DOI: 10.1038/s41598-024-60596-4.
Abstract
With the constant growth of civilization and the modernization of cities across the world over the past few centuries, smart traffic management of vehicles has become one of the most sought-after problems in the research community. Smart traffic management basically involves segmentation of vehicles, estimation of traffic density, and tracking of vehicles. Vehicle segmentation from videos enables niche applications such as speed monitoring and traffic estimation. With occlusions, cluttered backgrounds, and variations in traffic density, the problem becomes more intractable. Motivated by this, we investigate a Faster R-CNN-based deep learning method for the segmentation of vehicles. The problem is addressed in four steps: minimization with an adaptive background model, Faster R-CNN-based subnet operation, initial Faster R-CNN refinement, and result optimization with extended topological active nets. The computational framework uses adaptive background modeling and also addresses shadow and illumination issues. Higher segmentation accuracy is achieved through topological active net deformable models; the topological and extended topological active nets provide the stated deformations, with mesh deformation achieved by energy minimization. Segmentation accuracy is further improved with a modified version of the extended topological active net. The experimental results demonstrate the superiority of this framework with respect to other methods.
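The adaptive background model step can be sketched as an exponential running average of frames plus a thresholded frame difference. This is an illustrative stand-in for the paper's formulation, with made-up threshold and learning-rate values.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    # Exponential running average: the model slowly absorbs gradual
    # lighting changes while fast-moving vehicles stay in the foreground.
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=0.1):
    # Pixels that differ from the background estimate beyond a threshold
    # are candidate vehicle (foreground) pixels.
    return np.abs(frame - bg) > thresh

bg = np.zeros((4, 4))
frames = [np.zeros((4, 4)) for _ in range(10)]
frames[-1][1:3, 1:3] = 1.0          # a "vehicle" appears in the last frame
for f in frames:
    mask = foreground_mask(bg, f)   # segment against the current model
    bg = update_background(bg, f)   # then adapt the model
print(int(mask.sum()))  # → 4
```

In a full pipeline this coarse mask would only seed the detector; the paper's Faster R-CNN subnet and active-net refinement do the actual segmentation.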
3
Li G, Yao Z, Hu Y, Lian A, Yuan T, Pang G, Huang X. Deep Learning-Based Fish Detection Using Above-Water Infrared Camera for Deep-Sea Aquaculture: A Comparison Study. Sensors (Basel) 2024; 24:2430. PMID: 38676049. PMCID: PMC11054504. DOI: 10.3390/s24082430.
Abstract
Long-term, automated fish detection provides invaluable data for deep-sea aquaculture, which is crucial for safe and efficient seawater aquafarming. In this paper, we used an infrared camera installed on a deep-sea truss-structure net cage to collect fish images, which were subsequently labeled to establish a fish dataset. Comparison experiments on our dataset, with Faster R-CNN as the basic object detection framework, were conducted to explore how different backbone networks and network improvement modules influenced fish detection performance. Furthermore, we also experimented with the effects of different learning rates, feature extraction layers, and data augmentation strategies. Our results showed that Faster R-CNN with the EfficientNetB0 backbone and FPN module was the most competitive fish detection network for our dataset, since it took a significantly shorter detection time while maintaining a high AP50 value of 0.85, compared to the best AP50 value of 0.86 achieved by the combination of VGG16 with all improvement modules plus data augmentation. Overall, this work has verified the effectiveness of deep learning-based object detection methods and provided insights into subsequent network improvements.
Affiliation(s)
- Gen Li
- South China Sea Fisheries Research Institute, Chinese Academy of Fishery Sciences, Guangzhou 510300, China
- Key Laboratory of Open-Sea Fishery Development, Ministry of Agriculture and Rural Affairs, Guangzhou 510300, China
- Research and Development Center for Tropical Aquatic Products, South China Sea Fisheries Research Institute, Chinese Academy of Fishery Sciences, Sanya 572018, China
- Sanya Tropical Fisheries Research Institute, Sanya 572018, China
- Zidan Yao
- School of Marine Engineering Equipment, Zhejiang Ocean University, Zhoushan 316022, China
- Yu Hu
- South China Sea Fisheries Research Institute, Chinese Academy of Fishery Sciences, Guangzhou 510300, China
- Key Laboratory of Open-Sea Fishery Development, Ministry of Agriculture and Rural Affairs, Guangzhou 510300, China
- Research and Development Center for Tropical Aquatic Products, South China Sea Fisheries Research Institute, Chinese Academy of Fishery Sciences, Sanya 572018, China
- Sanya Tropical Fisheries Research Institute, Sanya 572018, China
- Anji Lian
- South China Sea Fisheries Research Institute, Chinese Academy of Fishery Sciences, Guangzhou 510300, China
- Taiping Yuan
- South China Sea Fisheries Research Institute, Chinese Academy of Fishery Sciences, Guangzhou 510300, China
- Key Laboratory of Open-Sea Fishery Development, Ministry of Agriculture and Rural Affairs, Guangzhou 510300, China
- Research and Development Center for Tropical Aquatic Products, South China Sea Fisheries Research Institute, Chinese Academy of Fishery Sciences, Sanya 572018, China
- Sanya Tropical Fisheries Research Institute, Sanya 572018, China
- Guoliang Pang
- South China Sea Fisheries Research Institute, Chinese Academy of Fishery Sciences, Guangzhou 510300, China
- Key Laboratory of Open-Sea Fishery Development, Ministry of Agriculture and Rural Affairs, Guangzhou 510300, China
- Research and Development Center for Tropical Aquatic Products, South China Sea Fisheries Research Institute, Chinese Academy of Fishery Sciences, Sanya 572018, China
- Sanya Tropical Fisheries Research Institute, Sanya 572018, China
- Xiaohua Huang
- South China Sea Fisheries Research Institute, Chinese Academy of Fishery Sciences, Guangzhou 510300, China
- Key Laboratory of Open-Sea Fishery Development, Ministry of Agriculture and Rural Affairs, Guangzhou 510300, China
- Research and Development Center for Tropical Aquatic Products, South China Sea Fisheries Research Institute, Chinese Academy of Fishery Sciences, Sanya 572018, China
- Sanya Tropical Fisheries Research Institute, Sanya 572018, China
4
Figueroa J, Rivas-Villar D, Rouco J, Novo J. Phytoplankton detection and recognition in freshwater digital microscopy images using deep learning object detectors. Heliyon 2024; 10:e25367. PMID: 38327447. PMCID: PMC10847640. DOI: 10.1016/j.heliyon.2024.e25367.
Abstract
Water quality can be negatively affected by the presence of some toxic phytoplankton species, whose toxins are difficult to remove by conventional purification systems. This creates the need for periodic analyses, which are nowadays performed manually by experts. These labor-intensive processes are affected by subjectivity and expertise, causing unreliability. Some automatic systems have been proposed to address these limitations. However, most of them are based on classical image processing pipelines with designs that are not easily scalable. In this context, deep learning techniques are more adequate for the detection and recognition of phytoplankton specimens in multi-specimen microscopy images, as they integrate both tasks in a single end-to-end trainable module that is able to automate the adaptation to such a complex domain. In this work, we explore the use of two different object detectors, Faster R-CNN and RetinaNet, from the two-stage and one-stage paradigms, respectively. We use a dataset composed of multi-specimen microscopy images captured using a systematic protocol. This allows the use of widely available optical microscopes, also avoiding manual adjustments on a per-specimen basis, which would require expert knowledge. We have made our dataset publicly available to improve reproducibility and to foster the development of new alternatives in the field. The selected Faster R-CNN methodology reaches maximum recall levels of 95.35%, 84.69%, and 79.81%, and precisions of 94.68%, 89.30% and 82.61%, for W. naegeliana, A. spiroides, and D. sociale, respectively. The system is able to adapt to the dataset problems and improves the results overall with respect to the reference state-of-the-art work. In addition, the proposed system improves the automation and abstraction from the domain and simplifies the workflow and adjustment.
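The per-class precision and recall figures above follow the standard detection definitions, which a short sketch makes explicit. The counts below are made-up illustrative numbers, not the paper's data.

```python
def precision_recall(tp, fp, fn):
    # Detection metrics: precision = TP / (TP + FP) penalizes false
    # alarms; recall = TP / (TP + FN) penalizes missed specimens.
    return tp / (tp + fp), tp / (tp + fn)

# Hypothetical example: 82 specimens of one class in the test set,
# 78 detected correctly, 5 spurious detections, 4 misses.
p, r = precision_recall(tp=78, fp=5, fn=4)
print(round(p, 4), round(r, 4))  # → 0.9398 0.9512
```

A detector's operating point (confidence threshold) trades these two off, which is why both are reported per species.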
Affiliation(s)
- Jorge Figueroa
- Centro de investigacion CITIC, Universidade da Coruña, 15071 A Coruña, Spain
- Grupo VARPA, Instituto de Investigacion Biomédica de A Coruña (INIBIC), Universidade da Coruña, 15006 A Coruña, Spain
- David Rivas-Villar
- Centro de investigacion CITIC, Universidade da Coruña, 15071 A Coruña, Spain
- Grupo VARPA, Instituto de Investigacion Biomédica de A Coruña (INIBIC), Universidade da Coruña, 15006 A Coruña, Spain
- José Rouco
- Centro de investigacion CITIC, Universidade da Coruña, 15071 A Coruña, Spain
- Grupo VARPA, Instituto de Investigacion Biomédica de A Coruña (INIBIC), Universidade da Coruña, 15006 A Coruña, Spain
- Jorge Novo
- Centro de investigacion CITIC, Universidade da Coruña, 15071 A Coruña, Spain
- Grupo VARPA, Instituto de Investigacion Biomédica de A Coruña (INIBIC), Universidade da Coruña, 15006 A Coruña, Spain
5
Shihabuddin AR, Beevi KS. Efficient mitosis detection: leveraging pre-trained faster R-CNN and cell-level classification. Biomed Phys Eng Express 2024; 10:025031. PMID: 38357907. DOI: 10.1088/2057-1976/ad262f.
Abstract
The assessment of mitotic activity is an integral part of the comprehensive evaluation of breast cancer pathology. Understanding the level of tumor dissemination is essential for assessing the severity of the malignancy and guiding appropriate treatment strategies. A pathologist must manually perform the intricate and time-consuming task of counting mitoses by examining biopsy slices stained with Hematoxylin and Eosin (H&E) under a microscope. Mitotic cells can be challenging to distinguish in H&E-stained sections due to limited available datasets and similarities among mitotic and non-mitotic cells. Computer-assisted mitosis detection approaches have simplified the whole procedure by selecting, detecting, and labeling mitotic cells. Traditional detection strategies rely on image processing techniques that apply custom criteria to distinguish between different aspects of an image. Additionally, the possibility of automatically extracting features from histopathological images using deep neural networks was investigated. This study examines mitosis detection as an object detection problem using multiple neural networks. From a medical standpoint, mitosis at the tissue level was also investigated using pre-trained Faster R-CNN and raw image data. Experiments were done on the MITOS-ATYPIA-14 dataset and TUPAC16 dataset, and the results were compared to those of other methods described in the literature.
Affiliation(s)
- Abdul R Shihabuddin
- Centre For Artificial Intelligence, TKM College of Engineering, Karicode, Kollam, 691005, Kerala, India
- Sabeena Beevi K
- Department of Electrical and Electronics Engineering, TKM College of Engineering, Karicode, Kollam, 691005, Kerala, India
6
Zeng Q, Sun J, Wang S. DIC-Transformer: interpretation of plant disease classification results using image caption generation technology. Front Plant Sci 2024; 14:1273029. PMID: 38333041. PMCID: PMC10850568. DOI: 10.3389/fpls.2023.1273029.
Abstract
Disease image classification systems play a crucial role in identifying disease categories in the field of agricultural diseases. However, current plant disease image classification methods can only predict the disease category and do not offer explanations for the characteristics of the predicted disease images. Given this situation, this paper employed image description generation technology to produce distinct descriptions for different plant disease categories. A two-stage model called DIC-Transformer, which encompasses three tasks (detection, interpretation, and classification), was proposed. In the first stage, Faster R-CNN was utilized to detect the diseased area and generate the feature vector of the diseased image, with the Swin Transformer as the backbone. In the second stage, the model utilized the Transformer to generate image captions. It then generated the image feature vector, weighted by text features, to improve the performance of image classification in the subsequent classification decoder. Additionally, a dataset containing text and visualizations for agricultural diseases (ADCG-18) was compiled. The dataset contains images of 18 diseases and descriptive information about their characteristics. Using the ADCG-18, the DIC-Transformer was compared to 11 existing classical caption generation methods and 10 image classification models. The caption evaluation metrics include BLEU-1-4, CIDEr-D, and ROUGE. The values of BLEU-1, CIDEr-D, and ROUGE were 0.756, 450.51, and 0.721, respectively, exceeding the highest-performing comparison model, Fc, by 0.01, 29.55, and 0.014. The classification evaluation metrics include accuracy, recall, and F1 score, which reached 0.854, 0.854, and 0.853, respectively, exceeding the highest-performing comparison model, MobileNetV2, by 0.024, 0.078, and 0.075. The results indicate that the DIC-Transformer outperforms the comparison models in both classification and caption generation.
Affiliation(s)
- Shansong Wang
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, China
7
Sivakumari T, Vani R. Deep Learning-based Automated Knee Joint Localization in Radiographic Images Using Faster R-CNN. Curr Med Imaging 2023:CMIR-EPUB-135374. PMID: 37881088. DOI: 10.2174/0115734056262464230922112606.
Abstract
BACKGROUND Osteoarthritis is a condition that poses a risk to the knee joint, resulting in pain and impaired function. However, traditional knee X-ray evaluations using the Kellgren-Lawrence grading system have proven to be inefficient. These evaluations are subjective, time-consuming, and labor-intensive, particularly in busy hospital settings. OBJECTIVE The objective of this research was to present a deep learning-based approach that can detect knee joint regions in medical images. By addressing the limitations of traditional methods, the aim was to develop a more efficient and automated approach for knee joint analysis. METHODS The proposed method utilizes the Faster R-CNN model, which consists of a region proposal network (RPN) and Fast R-CNN. The RPN generates region proposals that potentially contain knee joint regions, while the Fast R-CNN network categorizes and extracts features from these proposals. To train the model, a dataset of knee joint images was employed. The performance of the model was evaluated using metrics such as accuracy, precision, recall, F1-score, and mean IoU (Intersection Over Union). RESULTS The results demonstrated the high accuracy of the proposed method in detecting knee joint regions. The model achieved a mean IoU of 94.5, indicating a strong overlap between the predicted and ground truth regions. These findings highlight the potential of deep learning-based approaches in automating medical image analysis, specifically in the diagnosis and management of knee joint disorders. CONCLUSION This study emphasizes the significance of leveraging advanced technologies, such as deep learning, in medical imaging. By developing more efficient and accurate methods for identifying knee joint regions in medical images, it becomes feasible to enhance patient outcomes and healthcare delivery. The proposed deep learning-based approach showcases promising results, paving the way for further advancements in the field of medical image analysis and contributing to improved diagnostic capabilities for knee joint disorders.
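The mean IoU metric reported above measures the overlap between predicted and ground-truth boxes; a minimal sketch of the standard computation:

```python
def iou(box_a, box_b):
    # Boxes are (x1, y1, x2, y2). IoU = intersection area / union area,
    # so 1.0 is a perfect match and 0.0 means no overlap.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two 10x10 boxes offset by half a width overlap by one third.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # → 0.3333333333333333
```

Averaging this quantity over all predicted knee-joint boxes against their ground-truth annotations yields the mean IoU reported in the study.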
Affiliation(s)
- Sivakumari T
- SRM Institute of Science and Technology, Faculty of Engineering and Technology, Ramapuram, Chennai, Tamil Nadu, India
- Vani R
- SRM Institute of Science and Technology, Faculty of Engineering and Technology, Ramapuram, Chennai, Tamil Nadu, India
8
Al-homed LS, Jambi KM, Al-Barhamtoshy HM. A Deep Learning Approach for Arabic Manuscripts Classification. Sensors (Basel) 2023; 23:8133. PMID: 37836963. PMCID: PMC10575097. DOI: 10.3390/s23198133.
Abstract
For centuries, libraries worldwide have preserved ancient manuscripts due to their immense historical and cultural value. However, over time, both natural and human-made factors have led to the degradation of many ancient Arabic manuscripts, causing the loss of significant information, such as authorship, titles, or subjects, rendering them as unknown manuscripts. Although catalog cards attached to these manuscripts might contain some of the missing details, these cards have degraded significantly in quality over the decades within libraries. This paper presents a framework for identifying these unknown ancient Arabic manuscripts by processing the catalog cards associated with them. Given the challenges posed by the degradation of these cards, simple optical character recognition (OCR) is often insufficient. The proposed framework uses deep learning architecture to identify unknown manuscripts within a collection of ancient Arabic documents. This involves locating, extracting, and classifying the text from these catalog cards, along with implementing processes for region-of-interest identification, rotation correction, feature extraction, and classification. The results demonstrate the effectiveness of the proposed method, achieving an accuracy rate of 92.5%, compared to 83.5% with classical image classification and 81.5% with OCR alone.
Affiliation(s)
- Lutfieh S. Al-homed
- Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Kamal M. Jambi
- Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Hassanin M. Al-Barhamtoshy
- Department of Information Technology, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
9
Zheng T, Yang N, Geng S, Zhao X, Wang Y, Cheng D, Zhao L. [An Improved Object Detection Algorithm for Thyroid Nodule Ultrasound Image Based on Faster R-CNN]. Sichuan Da Xue Xue Bao Yi Xue Ban 2023; 54:915-922. PMID: 37866946. PMCID: PMC10579083. DOI: 10.12182/20230960106.
Abstract
Objective To propose an improved algorithm for thyroid nodule object detection based on Faster R-CNN so as to improve the detection precision of thyroid nodules in ultrasound images. Methods The algorithm used ResNeSt50 combined with deformable convolution (DC) as the backbone network to improve the detection of irregularly shaped nodules. A feature pyramid network (FPN) and Region of Interest (RoI) Align were introduced after the backbone network; the former was used to reduce missed or mistaken detection of thyroid nodules, and the latter to improve the detection precision of small nodules. To improve the generalization ability of the model, parameters were updated during backpropagation with an optimizer improved by Sharpness-Aware Minimization (SAM). Results In this experiment, 6,261 thyroid ultrasound images from the Affiliated Hospital of Xuzhou Medical University and the First Hospital of Nanjing were used to compare and evaluate the effectiveness of the improved algorithm. The improved algorithm was effective, with an AP50 of 97.4% on the final test set and a 10.0% improvement in AP@50:5:95 compared with the original model. Compared with both the original model and existing models, the improved algorithm had higher detection precision; in particular, it had a higher recall rate under the requirement of lower detection-frame precision. Conclusion The improved method proposed in the study is an effective object detection algorithm for thyroid nodules and can be used to detect thyroid nodules accurately and precisely.
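The Sharpness-Aware Minimization (SAM) step mentioned above perturbs the weights toward the locally worst-case direction before taking the descent step, which favors flat minima that generalize better. A toy full-batch sketch on a quadratic loss, not the paper's implementation (practical SAM wraps a base optimizer such as SGD and works per mini-batch):

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One simplified SAM update: (1) ascend to the worst-case point
    within an L2 ball of radius rho, (2) descend using the gradient
    evaluated there."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascent direction
    g_adv = grad_fn(w + eps)                     # gradient at the sharp neighbour
    return w - lr * g_adv

grad = lambda w: 2 * w        # gradient of the toy loss f(w) = ||w||^2
w = np.array([1.0, -1.0])
for _ in range(50):
    w = sam_step(w, grad)
print(float(np.linalg.norm(w)))
```

Note that with a fixed rho the iterates settle in a small neighbourhood of the minimum rather than converging exactly to it, which is visible in the final norm.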
Affiliation(s)
- Tianlei Zheng
- School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
- Artificial Intelligence Unit, Department of Medical Equipment Management, Affiliated Hospital of Xuzhou Medical University, Xuzhou 221004, China
- Na Yang
- School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
- Shi Geng
- School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
- Xianyun Zhao
- School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
- Yue Wang
- School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
- Deqiang Cheng
- School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
- Lei Zhao
- School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
10
Xu X, Shi J, Chen Y, He Q, Liu L, Sun T, Ding R, Lu Y, Xue C, Qiao H. Research on machine vision and deep learning based recognition of cotton seedling aphid infestation level. Front Plant Sci 2023; 14:1200901. PMID: 37645464. PMCID: PMC10461631. DOI: 10.3389/fpls.2023.1200901.
Abstract
Aphis gossypii Glover is a major insect pest in cotton production and can cause yield reduction in severe cases. In this paper, we propose an A. gossypii infestation monitoring method that identifies the infestation level of A. gossypii at the cotton seedling stage, improving the efficiency of early warning and forecasting of A. gossypii and enabling precise prevention and control according to the predicted infestation level. We used smartphones to collect A. gossypii infestation images and compiled an infestation image dataset. We then constructed, trained, and tested three different A. gossypii infestation recognition models based on the Faster Region-based Convolutional Neural Network (R-CNN), You Only Look Once (YOLO)v5, and single-shot detector (SSD) models. The results showed that the YOLOv5 model had the highest mean average precision (mAP) value (95.7%) and frames per second (FPS) value (61.73) under the same conditions. In studying the influence of different image resolutions on the performance of the YOLOv5 model, we found that YOLOv5s performed better than YOLOv5x in terms of overall performance, with the best performance at an image resolution of 640×640 (mAP of 96.8%, FPS of 71.43). A comparison with the latest YOLOv8s showed that YOLOv5s performed better. Finally, the trained model was deployed to an Android mobile device; mobile-side detection was best at an image resolution of 256×256, with an accuracy of 81.0% and an FPS of 6.98. The real-time recognition system established in this study can provide technical support for infestation forecasting and precise prevention of A. gossypii.
Affiliation(s)
- Xin Xu
- College of Information and Management Science, Henan Agricultural University, Zhengzhou, China
- Jing Shi
- College of Information and Management Science, Henan Agricultural University, Zhengzhou, China
- Yongqin Chen
- College of Information and Management Science, Henan Agricultural University, Zhengzhou, China
- Qiang He
- College of Information and Management Science, Henan Agricultural University, Zhengzhou, China
- Liangliang Liu
- College of Information and Management Science, Henan Agricultural University, Zhengzhou, China
- Tong Sun
- College of Information and Management Science, Henan Agricultural University, Zhengzhou, China
- Ruifeng Ding
- Institute of Plant Protection, Xinjiang Academy of Agricultural Sciences, Urumqi, China
- Yanhui Lu
- Institute of Plant Protection, Chinese Academy of Agricultural Sciences, Beijing, China
- Chaoqun Xue
- Zhengzhou Tobacco Research Institute of China National Tobacco Corporation (CNTC), Zhengzhou, China
- Hongbo Qiao
- College of Information and Management Science, Henan Agricultural University, Zhengzhou, China
11
Phan AC, Trieu TN, Phan TC. Hounsfield Unit Variations-Based Liver Lesions Detection and Classification Using Deep Learning. Curr Med Imaging 2023:CMIR-EPUB-131326. PMID: 37132318. DOI: 10.2174/1573405620666230428121748.
Abstract
BACKGROUND Deep learning-based diagnosis systems are useful for identifying abnormalities in medical images, given the greatly increased workload of doctors. Specifically, the rate of new cases of, and deaths from, malignancies is rising for liver diseases. Early detection of liver lesions plays an extremely important role in effective treatment and gives patients a higher chance of survival. Therefore, automatic detection and classification of common liver lesions are essential for doctors. In practice, radiologists rely mainly on Hounsfield Units to locate liver lesions, but previous studies have often paid little attention to this factor. METHODS In this paper, we propose an improved method for the automatic classification of common liver lesions based on deep learning techniques and the variation of Hounsfield Unit densities on CT images with and without contrast. The Hounsfield Unit is used to locate liver lesions accurately and to support data labeling for classification. We construct a multi-phase classification model developed on the deep neural networks Faster R-CNN, R-FCN, SSD, and Mask R-CNN with a transfer learning approach. RESULTS The experiments were conducted on six scenarios with multi-phase CT images of common liver lesions. Experimental results show that the proposed method improves the detection and classification of liver lesions compared with recent methods, with accuracy of up to 97.4%. CONCLUSION The proposed models are very useful for assisting doctors in the automatic segmentation and classification of liver lesions, addressing the problem of dependence on the clinician's experience in the diagnosis and treatment of liver lesions.
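Hounsfield Unit densities are typically turned into a normalized image channel by windowing, the standard CT preprocessing operation that HU-based localization builds on. A minimal sketch; the window center/width below are typical soft-tissue values chosen for illustration, not the paper's settings.

```python
import numpy as np

def hu_window(hu, center, width):
    """Map a CT slice in Hounsfield Units to [0, 1] under a display
    window: values below center - width/2 clip to 0, values above
    center + width/2 clip to 1."""
    lo, hi = center - width / 2, center + width / 2
    return np.clip((hu - lo) / (hi - lo), 0.0, 1.0)

# Illustrative HU values: air, water, soft tissue, denser tissue, bone.
slice_hu = np.array([-1000.0, 0.0, 60.0, 140.0, 400.0])
print(hu_window(slice_hu, center=60, width=160))  # → [0. 0.125 0.5 1. 1.]
```

Because lesions and healthy parenchyma occupy different HU ranges (and shift differently with contrast agent), such windowed channels make the density variations the paper relies on visible to a detector.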
Affiliation(s)
- Anh-Cang Phan: Faculty of Information Technology, Vinh Long University of Technology Education, 85110 Vinh Long, Vietnam
- Thanh Ngoan Trieu: College of Information and Communication Technology, Can Tho University, 94115 Can Tho, Vietnam; Faculté des Sciences et Techniques, Université de Bretagne Occidentale, 29200 Brest, France
- Thuong Cang Phan: College of Information and Communication Technology, Can Tho University, 94115 Can Tho, Vietnam

12
Zheng J, Zhang T. Wafer Surface Defect Detection Based on Background Subtraction and Faster R-CNN. Micromachines (Basel) 2023; 14:mi14050905. [PMID: 37241529 DOI: 10.3390/mi14050905] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/06/2023] [Revised: 04/19/2023] [Accepted: 04/21/2023] [Indexed: 05/28/2023]
Abstract
Because wafer surface defects are easily confused with the background and are therefore difficult to detect, a new detection method for wafer surface defects based on background subtraction and Faster R-CNN is proposed. First, an improved spectral analysis method is proposed to measure the period of the image, from which the substructure image is obtained. Next, a local template matching method is adopted to position the substructure image, thereby reconstructing the background image, and an image difference operation then eliminates the interference of the background. Finally, the difference image is fed into an improved Faster R-CNN network for detection. The proposed method has been validated on a self-developed wafer dataset and compared with other detectors. The experimental results show that, compared with the original Faster R-CNN, the proposed method increases mAP by 5.2%, meeting the requirements of intelligent manufacturing for high detection accuracy.
Affiliation(s)
- Jiebing Zheng: School of Computer Science and Technology, Soochow University, Suzhou 215006, China
- Tao Zhang: School of Computer Science and Engineering, Changshu Institute of Technology, Suzhou 215500, China

13
Wang J, Long Q, Liang Y, Song J, Feng Y, Li P, Sun W, Zhao L. AI-assisted identification of intrapapillary capillary loops in magnification endoscopy for diagnosing early-stage esophageal squamous cell carcinoma: a preliminary study. Med Biol Eng Comput 2023:10.1007/s11517-023-02777-3. [PMID: 36841920 DOI: 10.1007/s11517-023-02777-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2022] [Accepted: 12/22/2022] [Indexed: 02/27/2023]
Abstract
Esophageal squamous cell carcinoma (ESCC) is one of the most common histological types of esophageal cancers. It can seriously affect public health, particularly in Eastern Asia. Early diagnosis and effective therapy of ESCC can significantly help improve patient prognoses. The visualization of intrapapillary capillary loops (IPCLs) under magnification endoscopy (ME) can greatly support the identification of ESCC occurrences by endoscopists. This paper proposes an artificial-intelligence-assisted endoscopic diagnosis approach using deep learning for localizing and identifying IPCLs to diagnose early-stage ESCC. An improved Faster region-based convolutional network (R-CNN) with a polarized self-attention (PSA)-HRNetV2p backbone was employed to automatically detect IPCLs in ME images. In our study, 2887 ME with blue laser imaging (ME-BLI) images of 246 patients and 493 ME with narrow-band imaging (ME-NBI) images of 81 patients were collected from multiple hospitals and used to train and test our detection model. The ME-NBI images were used as the external testing set to verify the generalizability of the model. The experimental evaluation revealed that the proposed method achieved a recall of 79.25%, precision of 75.54%, F1-score of 0.764 and mean average precision (mAP) of 74.95%. Our method outperformed other existing approaches in our evaluation. It can effectively improve the accuracy of ESCC detection and provide a useful adjunct to the assessment of early-stage ESCC for endoscopists.
Affiliation(s)
- Jinming Wang: School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China; Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Qigang Long: School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China; Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Yan Liang: Department of Gastroenterology, Zhongda Hospital Affiliated to Southeast University, Nanjing 210009, China
- Jie Song: Department of Gastroenterology, Zhongda Hospital Affiliated to Southeast University, Nanjing 210009, China
- Yadong Feng: Department of Gastroenterology, Zhongda Hospital Affiliated to Southeast University, Nanjing 210009, China
- Peng Li: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Wei Sun: School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China; Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Lingxiao Zhao: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China

14
Ouyang H, Zeng J, Leng L. Inception Convolution and Feature Fusion for Person Search. Sensors (Basel) 2023; 23:1984. [PMID: 36850579 PMCID: PMC9963104 DOI: 10.3390/s23041984] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/27/2022] [Revised: 02/02/2023] [Accepted: 02/07/2023] [Indexed: 06/18/2023]
Abstract
With the rapid advancement of deep learning theory and hardware computing capacity, computer vision tasks such as object detection and instance segmentation have entered a revolutionary phase in recent years. As a result, highly challenging integrated tasks such as person search have developed quickly. Most efficient network frameworks, such as Seq-Net, are based on Faster R-CNN. However, because of the parallel structure of Faster R-CNN, the single-layer, low-resolution, and occasionally overlooked feature maps retrieved during pedestrian detection can significantly impact re-ID performance. To address these issues, this paper proposes a person search method based on an inception convolution and feature fusion module (IC-FFM), using Seq-Net (Sequential End-to-end Network) as the benchmark. First, we replaced the general convolutions in ResNet-50 with the new inception convolution module (ICM), allowing the convolution operation to effectively and dynamically distribute the various channels. Then, to improve the accuracy of information extraction, the feature fusion module (FFM) was created to combine multi-level information using various levels of convolution. Finally, bounding-box regression was built using convolution and the double-head module (DHM), which considerably enhanced the accuracy of pedestrian retrieval by combining global and fine-grained information. Experiments on the CUHK-SYSU and PRW datasets showed that our method achieves higher accuracy than Seq-Net. In addition, our method is simpler and can be easily integrated into existing two-stage frameworks.
Affiliation(s)
- Huan Ouyang: School of Software, Nanchang Hangkong University, Nanchang 330063, China; Key Laboratory of Jiangxi Province for Image Processing and Pattern Recognition, Nanchang Hangkong University, Nanchang 330063, China
- Jiexian Zeng: School of Software, Nanchang Hangkong University, Nanchang 330063, China; Science and Technology College, Nanchang Hangkong University, Gongqingcheng 332020, China
- Lu Leng: School of Software, Nanchang Hangkong University, Nanchang 330063, China; Key Laboratory of Jiangxi Province for Image Processing and Pattern Recognition, Nanchang Hangkong University, Nanchang 330063, China

15
Xu J, Ren H, Cai S, Zhang X. An improved faster R-CNN algorithm for assisted detection of lung nodules. Comput Biol Med 2023; 153:106470. [PMID: 36587571 DOI: 10.1016/j.compbiomed.2022.106470] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2022] [Revised: 11/20/2022] [Accepted: 12/19/2022] [Indexed: 12/29/2022]
Abstract
The morbidity and mortality of lung cancer are rising rapidly in every country in the world, and pulmonary nodules are the main early-stage sign of lung cancer. Timely diagnosis of pulmonary nodules at the early stage, with follow-up and treatment of suspicious patients, can effectively reduce the incidence of lung cancer. CT (Computed Tomography) has been applied to the screening of many diseases because of its high resolution, and pulmonary nodules appear as white round shadows in CT images. With the popularity of CT equipment, doctors must review a large number of imaging results every day and may misjudge or miss lesions after reviewing CT scan results for a long time. Automatic computer-based detection of pulmonary nodules can relieve this pressure. Traditional lung nodule detection methods, such as the gray threshold method and the region growing method, divide the detection process into two steps: extracting candidate regions and eliminating false regions. In addition, such methods can only operate on a single image at a time and therefore cannot detect batch scanning results in real time. With the continuous development of computing performance and artificial intelligence, medical image processing and deep learning have grown ever more closely connected. In deep learning, object detection methods such as Faster R-CNN and YOLO can detect batches of images in parallel, and deep architectures can fully extract the features of input images; compared with traditional lung nodule detection methods, they offer high efficiency and high precision. Faster R-CNN is a classical, high-precision two-stage object detection method. In this paper, an improved Faster R-CNN model is proposed.
On the basis of Faster R-CNN, a multi-scale training strategy is used to fully mine the features of different scale spaces and perform path augmentation on lower-dimensional features, which improves the small-object detection ability of the model. Through Online Hard Example Mining (OHEM), the loss value is used to quantify the difficulty of candidate region detection, and the number of training passes over regions to be detected is adjusted adaptively. Prior information is fully exploited to customize the size and aspect ratio of the preset anchor boxes, and deformable convolution enlarges the receptive field to enhance global features and improve extraction of pulmonary nodule features within the same scale space. The new model was tested on the LUNA16 (Lung Nodule Analysis 2016) dataset. The detection precision of the improved Faster R-CNN model for pulmonary nodules increased from 76.4% to 90.7%, and the recall rate increased from 40.1% to 56.8%. Compared with the mainstream object detection algorithms YOLOv3 and Cascade R-CNN, the improved model is superior on every index.
Affiliation(s)
- Jing Xu: School of Statistics and Mathematics, Zhejiang Gongshang University, Hangzhou 310018, China; Collaborative Innovation Center of Statistical Data Engineering, Technology & Application, Zhejiang Gongshang University, Hangzhou 310018, China
- Haojie Ren: School of Statistics and Mathematics, Zhejiang Gongshang University, Hangzhou 310018, China; Collaborative Innovation Center of Statistical Data Engineering, Technology & Application, Zhejiang Gongshang University, Hangzhou 310018, China
- Shenzhou Cai: School of Statistics and Mathematics, Zhejiang Gongshang University, Hangzhou 310018, China; Collaborative Innovation Center of Statistical Data Engineering, Technology & Application, Zhejiang Gongshang University, Hangzhou 310018, China
- Xiaoping Zhang: School of Statistics and Mathematics, Zhejiang Gongshang University, Hangzhou 310018, China; Collaborative Innovation Center of Statistical Data Engineering, Technology & Application, Zhejiang Gongshang University, Hangzhou 310018, China

16
Xing W, Li G, He C, Huang Q, Cui X, Li Q, Li W, Chen J, Ta D. Automatic detection of A-line in lung ultrasound images using deep learning and image processing. Med Phys 2023; 50:330-343. [PMID: 35950481 DOI: 10.1002/mp.15908] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2022] [Revised: 06/29/2022] [Accepted: 07/30/2022] [Indexed: 01/25/2023] Open
Abstract
BACKGROUND Auxiliary diagnosis and monitoring of lung diseases based on lung ultrasound (LUS) images is an important area of clinical research. The A-line is one of the most common LUS indicators and supports the assessment of lung diseases. Traditional A-line detection relies mainly on experienced clinicians, which is inefficient and cannot meet the needs of regions with limited medical resources; automatic detection of A-lines in LUS images is therefore important. PURPOSE To overcome the disadvantages of traditional A-line detection methods, achieve automatic and accurate detection, and provide theoretical support for clinical application, we propose a novel A-line detection method for LUS images acquired with different probe types. METHODS First, an improved Faster R-CNN model with a localization-box selection strategy was designed to accurately locate the pleural line. Then, the LUS image below the pleural line was segmented for independent analysis, excluding the influence of other similar structures. Next, image-processing methods based on total variation, matched filtering, and gray-level difference were applied to achieve automatic A-line detection. Finally, a "depth" index was designed to verify accuracy by judging whether the automatic measurements fell within ±5% of the corresponding manual results. In the experiments, 3000 convex-array LUS images were used to train and validate the improved pleural line localization model with five-fold cross validation; 850 convex-array and 1080 linear-array LUS images were used to test the trained pleural line localization model and the proposed image-processing-based A-line detection method. Accuracy analysis, error statistics, and the Hausdorff distance were employed to evaluate the experimental results.
RESULTS After 100 epochs, the mean loss values on the training and validation sets of the improved Faster R-CNN model reached 0.6540 and 0.7882, with a validation accuracy of 98.70%. Applied to the convex and linear testing sets, the trained pleural line localization model reached accuracies of 97.88% and 97.11%, respectively, which were 3.83% and 8.70% higher than the original Faster R-CNN model. The accuracy, sensitivity, and specificity of A-line detection reached 95.41%, 92.44%, and 98.75% for the convex probe and 94.63%, 92.30%, and 97.66% for the linear probe. Compared with the experienced clinicians' results, the mean depth error and its p value were 1.5342 ± 1.2097 and 0.9021, respectively, and the Hausdorff distance was 5.7305 ± 1.8311. In addition, the accumulated accuracy of the two-stage pipeline (pleural line localization and A-line detection) was taken as the final accuracy of the whole A-line detection system: 93.39% and 91.90% for convex and linear probes, respectively, higher than previous methods. CONCLUSIONS The proposed method combining image processing and deep learning can automatically and accurately detect A-lines in LUS images with different probe types, which has important application value for clinical diagnosis.
Affiliation(s)
- Wenyu Xing: Center for Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai, China; Human Phenome Institute, Fudan University, Shanghai, China
- Guannan Li: Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai, China
- Chao He: Department of Emergency and Critical Care, Changzheng Hospital, Naval Medical University, Shanghai, China
- Qiming Huang: School of Advanced Computing and Artificial Intelligence, Xi'an Jiaotong-Liverpool University, Suzhou, China
- Xulei Cui: Department of Anesthesiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Qingli Li: Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai, China
- Wenfang Li: Department of Emergency and Critical Care, Changzheng Hospital, Naval Medical University, Shanghai, China
- Jiangang Chen: Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai, China; Engineering Research Center of Traditional Chinese Medicine Intelligent Rehabilitation, Ministry of Education, Shanghai, China
- Dean Ta: Center for Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai, China; Department of Rehabilitation Medicine, Huashan Hospital, Fudan University, Shanghai, China

17
Yin TK, Huang KL, Chiu SR, Yang YQ, Chang BR. Endoscopy Artefact Detection by Deep Transfer Learning of Baseline Models. J Digit Imaging 2022; 35:1101-1110. [PMID: 35478060 PMCID: PMC9582060 DOI: 10.1007/s10278-022-00627-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2021] [Revised: 03/28/2022] [Accepted: 03/30/2022] [Indexed: 10/18/2022] Open
Abstract
In endoscopy, a long, thin tube with a light source and a camera at its tip is inserted into the body to obtain video frames of internal organs, so that tumours can be visualised on a screen. However, these video frames contain multiple artefacts that hinder the diagnosis of cancers. In this research, deep learning was applied to detect eight kinds of artefacts: specularity, bubbles, saturation, contrast, blood, instrument, blur, and imaging artefacts. Based on transfer learning with pre-trained parameters and fine-tuning, two state-of-the-art methods were applied for detection: faster region-based convolutional neural networks (Faster R-CNN) and EfficientDet. Experiments were implemented on the grand challenge dataset Endoscopy Artefact Detection and Segmentation (EAD2020). To validate our approach, we used the 2,200 frames of phase I and the 331 frames of phase II of the original training dataset, with ground-truth annotations, as training and testing sets, respectively. Among the tested methods, EfficientDet-D2 achieved a score of 0.2008 (mAPd × 0.6 + mIoUd × 0.4) on the dataset, better than three other baselines (Faster R-CNN, YOLOv3, and RetinaNet) and competitive with the best non-baseline result of 0.25123 on the leaderboard, although our testing used the 331 phase-II frames instead of the original 200 testing frames. Without extra improvement techniques beyond basic neural networks, such as test-time augmentation, we showed that a simple baseline can achieve state-of-the-art performance in detecting artefacts in endoscopy. In conclusion, we propose the combination of EfficientDet-D2 with suitable data augmentation and pre-trained parameters during fine-tuning to detect artefacts in endoscopy.
Affiliation(s)
- Tang-Kai Yin: Department of Computer Science and Information Engineering, National University of Kaohsiung, No. 700, Kaohsiung University Rd., Nan-Tzu Dist., 811, Kaohsiung, Taiwan
- Kai-Lun Huang: Department of Computer Science and Information Engineering, National University of Kaohsiung, No. 700, Kaohsiung University Rd., Nan-Tzu Dist., 811, Kaohsiung, Taiwan
- Si-Rong Chiu: Department of Computer Science and Information Engineering, National University of Kaohsiung, No. 700, Kaohsiung University Rd., Nan-Tzu Dist., 811, Kaohsiung, Taiwan
- Yu-Qi Yang: Department of Computer Science and Information Engineering, National University of Kaohsiung, No. 700, Kaohsiung University Rd., Nan-Tzu Dist., 811, Kaohsiung, Taiwan
- Bao-Rong Chang: Department of Computer Science and Information Engineering, National University of Kaohsiung, No. 700, Kaohsiung University Rd., Nan-Tzu Dist., 811, Kaohsiung, Taiwan

18
An J, Zhang D, Xu K, Wang D. An OpenCL-Based FPGA Accelerator for Faster R-CNN. Entropy (Basel) 2022; 24:1346. [PMID: 37420365 DOI: 10.3390/e24101346] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/13/2022] [Revised: 08/31/2022] [Accepted: 09/21/2022] [Indexed: 07/09/2023]
Abstract
In recent years, convolutional neural network (CNN)-based object detection algorithms have made breakthroughs, and much research has turned to hardware accelerator design. Although many previous works have proposed efficient FPGA designs for one-stage detectors such as YOLO, there are still few accelerator designs for the Faster Regions with CNN features (Faster R-CNN) algorithm. Moreover, the inherently high computational and memory complexity of CNNs makes designing efficient accelerators challenging. This paper proposes an OpenCL-based software-hardware co-design scheme to implement a Faster R-CNN object detection algorithm on FPGA. First, we design an efficient, deeply pipelined FPGA hardware accelerator that can implement Faster R-CNN algorithms for different backbone networks. Then, an optimized hardware-aware software algorithm is proposed, including fixed-point quantization, layer fusion, and a multi-batch regions-of-interest (RoIs) detector. Finally, we present an end-to-end design space exploration scheme to comprehensively evaluate the performance and resource utilization of the proposed accelerator. Experimental results show that the proposed design achieves a peak throughput of 846.9 GOP/s at a working frequency of 172 MHz. Compared with the state-of-the-art Faster R-CNN accelerator and the one-stage YOLO accelerator, our method achieves 10× and 2.1× inference throughput improvements, respectively.
Affiliation(s)
- Jianjing An: Institute of Information Science, Beijing Jiaotong University, Beijing 100044, China; Beijing Key Laboratory of Advanced Information Science and Network Technology, Beijing Jiaotong University, Beijing 100044, China
- Dezheng Zhang: Institute of Information Science, Beijing Jiaotong University, Beijing 100044, China; Beijing Key Laboratory of Advanced Information Science and Network Technology, Beijing Jiaotong University, Beijing 100044, China
- Ke Xu: Institute of Information Science, Beijing Jiaotong University, Beijing 100044, China; Beijing Key Laboratory of Advanced Information Science and Network Technology, Beijing Jiaotong University, Beijing 100044, China
- Dong Wang: Institute of Information Science, Beijing Jiaotong University, Beijing 100044, China; Beijing Key Laboratory of Advanced Information Science and Network Technology, Beijing Jiaotong University, Beijing 100044, China

19
Hou M, Dong X, Li J, Yu G, Deng R, Pan X. PDC: Pearl Detection with a Counter Based on Deep Learning. Sensors (Basel) 2022; 22:7026. [PMID: 36146375 PMCID: PMC9501133 DOI: 10.3390/s22187026] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/14/2022] [Revised: 09/01/2022] [Accepted: 09/08/2022] [Indexed: 06/16/2023]
Abstract
Pearl detection with a counter (PDC) in a noncontact, high-precision manner is a challenging task in commercial production. Sea pearls are quite valuable, and traditional manual counting methods are unsatisfactory because touching may damage the pearls. In this paper, we conduct a comprehensive study of nine object-detection models and evaluate their key metrics. The results indicate that with Faster R-CNN using ResNet152, pretrained on the pearl dataset, mAP@0.5IoU = 100% and mAP@0.75IoU = 98.83% are achieved for pearl recognition, requiring only 15.8 ms of inference time with a counter after the first loading of the model. The superiority of the proposed Faster R-CNN ResNet152 algorithm with a counter is verified through comparison with eight other sophisticated object detectors with counters. The experimental results on the self-made pearl image dataset show that the total loss decreased to 0.00044, while the classification loss and the localization loss of the model gradually decreased to less than 0.00019 and 0.00031, respectively. The robust performance across the pearl dataset indicates that Faster R-CNN ResNet152 with a counter is promising for accurate pearl detection and counting under natural or artificial light.
Affiliation(s)
- Mingxin Hou: College of Mechanical Engineering, Guangdong Ocean University, Zhanjiang 524088, China
- Xuehu Dong: Agricultural Machinery Appraisal and Extension Station in Hainan, Haikou 570206, China
- Jun Li: College of Mechanical Engineering, Guangdong Ocean University, Zhanjiang 524088, China; Guangdong Marine Equipment and Manufacturing Engineering Technology Research Center, Zhanjiang 524088, China
- Guoyan Yu: College of Mechanical Engineering, Guangdong Ocean University, Zhanjiang 524088, China; South China of Marine Science and Engineering Guangdong Laboratory, Zhanjiang 524088, China
- Ruoling Deng: College of Mechanical Engineering, Guangdong Ocean University, Zhanjiang 524088, China; Guangdong Marine Equipment and Manufacturing Engineering Technology Research Center, Zhanjiang 524088, China
- Xinxiang Pan: College of Mechanical Engineering, Guangdong Ocean University, Zhanjiang 524088, China

20
Huang X, Zhang B, Perrie W, Lu Y, Wang C. A novel deep learning method for marine oil spill detection from satellite synthetic aperture radar imagery. Mar Pollut Bull 2022; 179:113666. [PMID: 35500373 DOI: 10.1016/j.marpolbul.2022.113666] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/22/2022] [Revised: 04/11/2022] [Accepted: 04/12/2022] [Indexed: 06/14/2023]
Abstract
Oil spill discharges from operational maritime activities (ships, oil rigs and other structures, and leaking pipelines), as well as natural hydrocarbon seepage, pose serious threats to marine ecosystems and fisheries. Satellite synthetic aperture radar (SAR) is a unique microwave instrument for marine oil spill monitoring, as it does not depend on weather or sunlight conditions. Existing SAR oil spill detection approaches are limited by algorithm complexity, imbalanced data sets, uncertainties in selecting optimal features, and relatively slow detection speed. To overcome these restrictions, a fast and effective SAR oil spill detection method is presented, based on a novel deep learning model, the Faster Region-based Convolutional Neural Network (Faster R-CNN), which is capable of fast end-to-end oil spill detection with reasonable accuracy. A large data set of 15,774 labeled oil spill samples derived from 1786 C-band Sentinel-1 and RADARSAT-2 vertical-polarization SAR images is used to train, validate, and test the Faster R-CNN model. Our experimental results show that the proposed method performs well for detecting oil spills in wide-swath SAR imagery: the precision and recall metrics are 89.23% and 89.14%, respectively, and the average precision is 92.56%. The effects of environmental conditions and sensor parameters on oil spill detection are analyzed; the expected detection results are obtained when wind speeds are between 3 m/s and 10 m/s and incidence angles between 21° and 45°. Furthermore, the runtime for oil spill detection is less than 0.05 s per full SAR image on a workstation with an NVIDIA GeForce RTX 3090 GPU, suggesting that the present approach has potential for applications that require fast oil spill detection from spaceborne SAR images.
Affiliation(s)
- Xudong Huang: Nanjing University of Information Science and Technology, Nanjing, China
- Biao Zhang: Nanjing University of Information Science and Technology, Nanjing, China; Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai), Zhuhai, China; Fisheries and Oceans Canada, Bedford Institute of Oceanography, Dartmouth, Canada
- William Perrie: Fisheries and Oceans Canada, Bedford Institute of Oceanography, Dartmouth, Canada
- Yingcheng Lu: International Institute for Earth System Science, Nanjing University, Nanjing, China
- Chen Wang: Nanjing University of Information Science and Technology, Nanjing, China

21
Abstract
Medical image interpretation is an essential task for the correct diagnosis of many diseases. Pathologists, radiologists, physicians, and researchers rely heavily on medical images to perform diagnoses and develop new treatments. However, manual medical image analysis is tedious and time consuming, making it necessary to identify accurate automated methods. Deep learning—especially supervised deep learning—shows impressive performance in the classification, detection, and segmentation of medical images and has proven comparable in ability to humans. This survey aims to help researchers and practitioners of medical image analysis understand the key concepts and algorithms of supervised learning techniques. Specifically, this survey explains the performance metrics of supervised learning methods; summarizes the available medical datasets; studies the state-of-the-art supervised learning architectures for medical imaging processing, including convolutional neural networks (CNNs) and their corresponding algorithms, region-based CNNs and their variants, fully convolutional networks (FCN) and U-Net architecture; and discusses the trends and challenges in the application of supervised learning methods to medical image analysis. Supervised learning requires large labeled datasets to learn and achieve good performance, and data augmentation, transfer learning, and dropout techniques have widely been employed in medical image processing to overcome the lack of such datasets.
Affiliation(s)
- Abeer Aljuaid
- Department of Computer Science, North Carolina A&T State University, 1601 E Market St, Greensboro, NC 27411 USA
- Mohd Anwar
- Department of Computer Science, North Carolina A&T State University, 1601 E Market St, Greensboro, NC 27411 USA

22
Signaroli M, Lana A, Martorell-Barceló M, Sanllehi J, Barcelo-Serra M, Aspillaga E, Mulet J, Alós J. Measuring inter-individual differences in behavioural types of gilthead seabreams in the laboratory using deep learning. PeerJ 2022; 10:e13396. [PMID: 35539012] [PMCID: PMC9080431] [DOI: 10.7717/peerj.13396]
Abstract
Deep learning allows us to automate the acquisition of large amounts of behavioural animal data, with applications for fisheries and aquaculture. In this work, we trained an image-based deep learning algorithm, Faster R-CNN (faster region-based convolutional neural network), to automatically detect and track the gilthead seabream, Sparus aurata, in order to search for individual differences in behaviour. We collected videos using a novel Raspberry Pi high-throughput recording system attached to individual experimental behavioural arenas. From the continuous recording during behavioural assays, we acquired and labelled a total of 14,000 images and used them, along with data augmentation techniques, to train the network. We then evaluated the performance of our network at different training levels, increasing the number of images and applying data augmentation. For every validation step, we processed more than 52,000 images, with and without the presence of the gilthead seabream, in normal and altered (i.e., after the introduction of a non-familiar object to test for explorative behaviour) behavioural arenas. The final and best version of the neural network, trained with all the images and with data augmentation, reached an accuracy of 92.79% ± 6.78% [89.24-96.34] of correct classification and a fish positioning error of 10.25 ± 61.59 pixels [6.59-13.91]. Our recording system based on a Raspberry Pi and a trained convolutional neural network provides a valuable non-invasive tool to automatically track fish movements in experimental arenas and, using the trajectories obtained during behavioural tests, to assay behavioural types.
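A positioning error in pixels, as quoted in this abstract, is conventionally the distance between the centre of the predicted bounding box and the centre of the ground-truth box. A hypothetical sketch (not the authors' code; box coordinates are invented):

```python
# Per-detection positioning error: Euclidean distance between predicted
# and ground-truth bounding-box centres, in pixels.
import math

def box_center(box):
    """Centre of an axis-aligned (x1, y1, x2, y2) box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def positioning_error(pred_box, true_box) -> float:
    (px, py), (tx, ty) = box_center(pred_box), box_center(true_box)
    return math.hypot(px - tx, py - ty)

# A detection 3 px right and 4 px down from the ground truth: error 5.0 px.
print(positioning_error((103, 54, 153, 104), (100, 50, 150, 100)))
```

Averaging this quantity over all validation frames yields a mean ± SD figure of the kind reported above.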
Affiliation(s)
- Marco Signaroli, Arancha Lana, Martina Martorell-Barceló, Javier Sanllehi, Margarida Barcelo-Serra, Eneko Aspillaga, Júlia Mulet, Josep Alós
- Fish Ecology Group, Instituto Mediterráneo de Estudios Avanzados, IMEDEA (CSIC-UIB), Esporles, Illes Balears, Spain (all authors)

23
Mima Y, Nakayama R, Hizukuri A, Murata K. Tooth detection for each tooth type by application of faster R-CNNs to divided analysis areas of dental panoramic X-ray images. Radiol Phys Technol 2022; 15:170-176. [PMID: 35507126] [DOI: 10.1007/s12194-022-00659-1]
Abstract
This study aimed to propose a computerized method for detecting the tooth region for each tooth type as the initial stage in the development of a computer-aided diagnosis (CAD) scheme for dental panoramic X-ray images. Our database consists of 160 panoramic dental X-ray images obtained from 160 adult patients. To reduce false positives (FPs), the proposed method first extracts a rectangular area including all teeth from a dental panoramic X-ray image with a faster region-based convolutional neural network (Faster R-CNN). From the rectangular area including all teeth, six divided areas are then extracted with Faster R-CNN: top left, top center, top right, bottom left, bottom center, and bottom right. Faster R-CNNs for detecting tooth regions for each tooth type were trained individually for each of the divided areas, narrowing down the target tooth types. By applying these Faster R-CNNs to each divided area, the bounding boxes of each tooth were detected and classified into 32 tooth types. A k-fold cross-validation method with k = 4 was used for training and testing the proposed method. The detection rate for each tooth, number of FPs per image, mean intersection over union for each tooth, and classification accuracy for the 32 tooth types were 98.9%, 0.415, 0.748, and 91.7%, respectively, showing an improvement over applying Faster R-CNN once to the entire image (98.0%, 1.194, 0.736, and 88.8%).
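Divided-area pipelines like this one share a bookkeeping step: a box detected inside a cropped sub-region must be translated back into full-image coordinates before IoU or FP statistics can be computed. An illustrative sketch (not the authors' code; the region origin and box values are invented):

```python
# Translate a detection from crop-local to full-image coordinates, then
# score it against ground truth with intersection over union (IoU).

def to_global(box, region_origin):
    """Offset a (x1, y1, x2, y2) box by the crop's origin in the full image."""
    ox, oy = region_origin
    x1, y1, x2, y2 = box
    return (x1 + ox, y1 + oy, x2 + ox, y2 + oy)

def iou(a, b) -> float:
    """Intersection over union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

# A tooth detected at (10, 20, 30, 60) inside a "top left" crop whose
# origin in the panoramic image is (400, 150):
g = to_global((10, 20, 30, 60), (400, 150))
print(g)  # (410, 170, 430, 210)
```

The "mean intersection over union for each tooth" quoted in the abstract is this `iou` value averaged over all correctly detected teeth.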
Affiliation(s)
- Yuichi Mima
- Graduate School of Science and Engineering, Ritsumeikan University, 1-1-1 Noji-higashi, Kusatsu, Shiga, 525-8577, Japan
- Ryohei Nakayama
- Graduate School of Science and Engineering, Ritsumeikan University, 1-1-1 Noji-higashi, Kusatsu, Shiga, 525-8577, Japan
- Akiyoshi Hizukuri
- Graduate School of Science and Engineering, Ritsumeikan University, 1-1-1 Noji-higashi, Kusatsu, Shiga, 525-8577, Japan
- Kan Murata
- TAKARA TELESYSTEMS Corporation, 1-17-17 Nihonbashi, Chuo-ku, Osaka, 542-0073, Japan

24
Tan Z, Shi J, Lv R, Li Q, Yang J, Ma Y, Li Y, Wu Y, Zhang R, Ma H, Li Y, Zhu L, Zhu L, Zhang X, Kong J, Yang W, Min L. Fast anther dehiscence status recognition system established by deep learning to screen heat tolerant cotton. Plant Methods 2022; 18:53. [PMID: 35449108] [PMCID: PMC9026675] [DOI: 10.1186/s13007-022-00884-0]
Abstract
BACKGROUND From an economic perspective, cotton is one of the most important crops in the world. The fertility of male reproductive organs is a key determinant of cotton yield, and anther dehiscence or indehiscence directly determines the probability of fertilization. Thus, rapid and accurate identification of cotton anther dehiscence status is important for judging anther growth status and promoting genetic breeding research. The development of computer vision technology and the advent of big data have prompted the application of deep learning techniques to agricultural phenotype research. Therefore, two deep learning models (Faster R-CNN and YOLOv5) were proposed to detect the number and dehiscence status of anthers. RESULT The single-stage YOLOv5 model offers higher recognition speed and can be deployed on mobile devices, giving breeding researchers a more intuitive view of cotton anther dehiscence status. For the Faster R-CNN model, three improvement strategies were proposed; the improved model has higher detection accuracy than YOLOv5. After ensembling the three improved models with the original Faster R-CNN, the R² for "open" reaches 0.8765, for "close" 0.8539, and for "all" 0.8481, higher than the predictions of any single model and sufficient to replace manual counting. This ensemble can be used to quickly extract the dehiscence rate of cotton anthers under high-temperature (HT) conditions. In addition, the percentage of dehiscent anthers of 30 randomly selected cotton varieties was measured, under normal and HT conditions, by both the ensemble model and manual counting. The results show that HT decreased the percentage of dehiscent anthers in different cotton lines, consistent with the manual method.
CONCLUSIONS Deep learning technology has been applied to cotton anther dehiscence status recognition, in place of manual methods, for the first time to quickly screen HT-tolerant cotton varieties. Deep learning can help to explore key genetic improvement genes in the future, promoting cotton breeding and improvement.
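The ensembling-and-agreement step described here can be sketched in a few lines: average the per-image counts predicted by each model, then score the averaged counts against manual counts with the coefficient of determination R². This is an illustrative sketch with invented counts, not the authors' code:

```python
# Ensemble several detectors' per-image anther counts by averaging,
# then measure agreement with manual counts via R-squared.

def ensemble_counts(per_model_counts):
    """Average the count each model predicts for each image."""
    n_models = len(per_model_counts)
    return [sum(c) / n_models for c in zip(*per_model_counts)]

def r_squared(y_true, y_pred) -> float:
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

manual = [12, 8, 15, 10]                       # hypothetical manual counts
models = [[11, 8, 16, 10], [13, 7, 14, 11], [12, 9, 15, 9]]
print(r_squared(manual, ensemble_counts(models)))
```

Averaging tends to cancel the individual models' over- and under-counts, which is why the ensemble's R² can exceed that of any single model.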
Affiliation(s)
- Zhihao Tan, Jiawei Shi, Rongjie Lv, Yizan Ma, Yanlong Li, Yuanlong Wu, Rui Zhang, Huanhuan Ma, Yawei Li, Li Zhu, Longfu Zhu, Xianlong Zhang, Wanneng Yang, Ling Min
- National Key Laboratory of Crop Genetic Improvement, Huazhong Agricultural University, Wuhan, 430070, Hubei, China
- Qingyuan Li
- Forestry and Fruit Tree Research Institute, Wuhan Academy of Agricultural Sciences, Wuhan, 430075, China
- Jing Yang, Jie Kong
- Institute of Economic Crops, Xinjiang Academy of Agricultural Sciences, Xinjiang, 830091, China

25
Kapoor R, Goel R, Sharma A. An intelligent railway surveillance framework based on recognition of object and railway track using deep learning. Multimed Tools Appl 2022; 81:21083-21109. [PMID: 35310890] [PMCID: PMC8918909] [DOI: 10.1007/s11042-022-12059-z]
Abstract
In high-speed railways, an intelligent railway safety system is necessary to avoid accidents caused by collisions between trains and obstacles on the track. Continuous research is being carried out to reinforce railway safety and reduce accident rates, and rapid developments in deep learning have opened new research opportunities in this area. In this paper, a novel and efficient approach is proposed to recognize objects (obstacles) on the railway track ahead of the train using a deep classifier network. Two-dimensional Singular Spectrum Analysis (2D-SSA) is used as a decomposition tool that decomposes the image into useful components, which are then fed to the deep classifier network; the combination of 2D-SSA and the deep network enhances obstacle recognition performance. The method also introduces a novel measure to identify the railway tracks. In addition, the performance of the approach is analyzed under different illumination conditions using the OSU thermal pedestrian benchmark database. Such a system could substantially help curtail rail accident rates and the associated monetary burden. The proposed approach effectively recognizes objects (obstacles) on the railway track, supporting railway safety, and achieves good performance with 85.2% accuracy, 84.5% precision, and 88.6% recall.
Affiliation(s)
- Rajiv Kapoor
- Department of Electronics and Communication Engineering, Delhi Technological University, Delhi, 110042 India
- Rohini Goel
- Computer Science and Engineering Department, Maharishi Markandeshwar (Deemed to be University), Mullana, Ambala, Haryana India
- Avinash Sharma
- Computer Science and Engineering Department, Maharishi Markandeshwar (Deemed to be University), Mullana, Ambala, Haryana India

26
Abdullah SS, Rajasekaran MP. Automatic detection and classification of knee osteoarthritis using deep learning approach. Radiol Med 2022; 127:398-406. [PMID: 35262842] [DOI: 10.1007/s11547-022-01476-7]
Abstract
PURPOSE We developed a tool for locating and grading knee osteoarthritis (OA) in digital X-ray images and illustrate the ability of deep learning techniques to grade knee OA according to the Kellgren-Lawrence (KL) system. The purpose of the project is to assess how effectively an artificial intelligence (AI)-based deep learning approach can locate and diagnose the severity of knee OA in digital X-ray images. METHODS Selection criteria: patients above 50 years old with OA symptoms (knee joint pain, stiffness, crepitus, and functional limitations) were included in the study; medical experts excluded patients with post-surgical evaluation, trauma, and infection. We used 3172 anterior-posterior view knee joint digital X-ray images. We trained the Faster R-CNN architecture to locate the knee joint space width (JSW) region in the images and incorporated ResNet-50 with transfer learning to extract features. Another pre-trained network (AlexNet with transfer learning) was used to classify knee OA severity. We trained the region proposal network (RPN) using manually extracted knee areas as ground truth, and medical experts graded the images according to the Kellgren-Lawrence score. The final model takes an X-ray image as input and outputs a Kellgren-Lawrence grade. RESULTS The proposed model identified the minimal knee JSW area with a maximum accuracy of 98.516%, and the overall knee OA severity classification accuracy was 98.90%. CONCLUSIONS Numerous diagnostic methods are available today, but the tools are not transparent, and automated analysis of OA remains a problem. The performance of the proposed model improves with fine-tuning of the network and exceeds that of existing works. We will extend this work to grading OA in MRI data in the future.
Affiliation(s)
- S Sheik Abdullah
- Department of Electronics and Communication, Kalasalingam Academy of Research and Education, Srivilliputhur, India
- M Pallikonda Rajasekaran
- Department of Electronics and Communication, Kalasalingam Academy of Research and Education, Srivilliputhur, India

27
Zhu X, Zheng B, Cai W, Zhang J, Lu S, Li X, Xi L, Kong Y. Deep learning-based diagnosis models for onychomycosis in dermoscopy. Mycoses 2022; 65:466-472. [PMID: 35119144] [DOI: 10.1111/myc.13427]
Abstract
BACKGROUND Onychomycosis is a common disease. Emerging noninvasive, real-time techniques such as dermoscopy and deep convolutional neural networks have been proposed for the diagnosis of onychomycosis; however, the application of deep learning to dermoscopic images has not been reported. OBJECTIVES To establish deep learning-based diagnostic models for onychomycosis in dermoscopy to improve diagnostic efficiency and accuracy. METHODS We evaluated the dermoscopic patterns of onychomycosis diagnosed at Sun Yat-sen Memorial Hospital, Guangzhou, China from May 2019 to February 2021, with nail psoriasis and traumatic onychodystrophy as control groups. Based on the dermoscopic images and the characteristic dermoscopic patterns of onychomycosis, we trained faster region-based convolutional neural networks to distinguish between nail disorder and normal nail, and between onychomycosis and non-mycological nail disorder (nail psoriasis and traumatic onychodystrophy). The diagnostic performance of the deep learning-based models was compared with that of dermatologists. RESULTS In total, 1155 dermoscopic images were collected: onychomycosis (603 images), nail psoriasis (221 images), traumatic onychodystrophy (104 images), and normal cases (227 images). Statistical analyses revealed that subungual keratosis, distal irregular termination, longitudinal striae, jagged edge, marble-like turbid area, and cone-shaped keratosis were of high specificity (>82%) for onychomycosis diagnosis. The deep learning-based diagnosis models (ensemble model) showed test accuracy/specificity/sensitivity/Youden index of 95.7%/98.8%/82.1%/0.809 for nail disorder and 87.5%/93.0%/78.5%/0.715 for onychomycosis. The diagnostic performance for onychomycosis using the ensemble model was superior to that of 54 dermatologists. CONCLUSIONS Our study demonstrated that onychomycosis has distinctive dermoscopic patterns compared with nail psoriasis and traumatic onychodystrophy, and that the deep learning-based diagnosis models achieve a diagnostic accuracy for onychomycosis superior to dermatologists.
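The Youden index quoted in this abstract is a standard summary of a binary diagnostic test: J = sensitivity + specificity − 1, ranging from 0 (uninformative) to 1 (perfect). A minimal illustrative sketch, plugging in the figures the abstract reports for the nail-disorder task:

```python
# Youden's J statistic for a binary diagnostic test.

def youden_index(sensitivity: float, specificity: float) -> float:
    """J = sensitivity + specificity - 1 (0 = uninformative, 1 = perfect)."""
    return sensitivity + specificity - 1.0

# Figures quoted for the ensemble model on the nail-disorder task:
print(round(youden_index(sensitivity=0.821, specificity=0.988), 3))  # -> 0.809
```

Because J weighs sensitivity and specificity equally, it penalises a model that achieves high specificity simply by rarely predicting the positive class.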
Affiliation(s)
- Xianzhong Zhu
- Department of Dermatology and Venereology, Sun Yat-sen Memorial Hospital of Sun Yat-sen University, Guangzhou, China; Department of Dermatology and Venereology, The Second Affiliated Hospital of Guangzhou Medical University
- Bowen Zheng, Wenying Cai, Jing Zhang, Sha Lu, Xiqing Li
- Department of Dermatology and Venereology, Sun Yat-sen Memorial Hospital of Sun Yat-sen University, Guangzhou, China
- Liyan Xi
- Department of Dermatology and Venereology, Sun Yat-sen Memorial Hospital of Sun Yat-sen University, Guangzhou, China; Dermatology Hospital, Southern Medical University, Guangzhou, China
- Yinying Kong
- School of Statistics and Mathematics, Guangdong University of Finance and Economics

28
Semwal A, Mohan RE, Melvin LMJ, Palanisamy P, Baskar C, Yi L, Pookkuttath S, Ramalingam B. False Ceiling Deterioration Detection and Mapping Using a Deep Learning Framework and the Teleoperated Reconfigurable 'Falcon' Robot. Sensors (Basel) 2021; 22:262. [PMID: 35009802] [PMCID: PMC8749628] [DOI: 10.3390/s22010262]
Abstract
Periodic inspection of false ceilings is mandatory to ensure building and human safety. Generally, false ceiling inspection includes identifying structural defects, degradation in Heating, Ventilation, and Air Conditioning (HVAC) systems, electrical wire damage, and pest infestation. Human-assisted false ceiling inspection is a laborious and risky task. This work presents a false ceiling deterioration detection and mapping framework using a deep-neural-network-based object detection algorithm and the teleoperated 'Falcon' robot. The object detection algorithm was trained with our custom false ceiling deterioration image dataset composed of four classes: structural defects (spalling, cracks, pitted surfaces, and water damage), degradation in HVAC systems (corrosion, molding, and pipe damage), electrical damage (frayed wires), and infestation (termites and rodents). The efficiency of the trained CNN algorithm and deterioration mapping was evaluated through various experiments and real-time field trials. The experimental results indicate that the deterioration detection and mapping results were accurate in a real false-ceiling environment and achieved an 89.53% detection accuracy.
Affiliation(s)
- Archana Semwal, Rajesh Elara Mohan, Lee Ming Jun Melvin, Povendhan Palanisamy, Lim Yi, Sathian Pookkuttath, Balakrishnan Ramalingam
- Engineering Product Development Pillar, Singapore University of Technology and Design (SUTD), Singapore 487372, Singapore
- Chanthini Baskar
- School of Electronics Engineering, Vellore Institute of Technology, Chennai 600127, India

29
Kumari N, Ruf V, Mukhametov S, Schmidt A, Kuhn J, Küchemann S. Mobile Eye-Tracking Data Analysis Using Object Detection via YOLO v4. Sensors (Basel) 2021; 21:7668. [PMID: 34833742] [DOI: 10.3390/s21227668]
Abstract
Remote eye tracking has become an important tool for the online analysis of learning processes. Mobile eye trackers can even extend the range of opportunities (in comparison to stationary eye trackers) to real settings, such as classrooms or experimental lab courses. However, the complex and sometimes manual analysis of mobile eye-tracking data often hinders the realization of extensive studies, as this is a very time-consuming process and usually not feasible for real-world situations in which participants move or manipulate objects. In this work, we explore the opportunities to use object recognition models to assign mobile eye-tracking data for real objects during an authentic students’ lab course. In a comparison of three different Convolutional Neural Networks (CNN), a Faster Region-Based-CNN, you only look once (YOLO) v3, and YOLO v4, we found that YOLO v4, together with an optical flow estimation, provides the fastest results with the highest accuracy for object detection in this setting. The automatic assignment of the gaze data to real objects simplifies the time-consuming analysis of mobile eye-tracking data and offers an opportunity for real-time system responses to the user’s gaze. Additionally, we identify and discuss several problems in using object detection for mobile eye-tracking data that need to be considered.
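The core assignment step described above — mapping a gaze sample to whichever detected object contains it — can be sketched as a point-in-box test over the detector's output. A hypothetical sketch (not the authors' code; the object labels, confidences, and coordinates are invented):

```python
# Assign a gaze sample (x, y) to the detected object whose bounding box
# contains it; on overlapping boxes, prefer the higher-confidence detection.

def assign_gaze(gaze, detections):
    """detections: list of (label, confidence, (x1, y1, x2, y2))."""
    gx, gy = gaze
    hits = [d for d in detections
            if d[2][0] <= gx <= d[2][2] and d[2][1] <= gy <= d[2][3]]
    if not hits:
        return None  # gaze fell outside every detected object
    return max(hits, key=lambda d: d[1])[0]

dets = [("voltmeter", 0.91, (100, 100, 300, 250)),
        ("circuit_board", 0.88, (250, 200, 500, 400))]
print(assign_gaze((150, 180), dets))  # voltmeter
print(assign_gaze((400, 300), dets))  # circuit_board
```

Running this per video frame turns raw gaze coordinates into per-object dwell times, which is exactly the manual annotation step such pipelines aim to automate.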
30
Mochalova EN, Kotov IA, Lifanov DA, Chakraborti S, Nikitin MP. Imaging flow cytometry data analysis using convolutional neural network for quantitative investigation of phagocytosis. Biotechnol Bioeng 2021; 119:626-635. [PMID: 34750809] [DOI: 10.1002/bit.27986]
Abstract
Macrophages play an important role in the adaptive immune system. Their ability to neutralize cellular targets through Fc receptor-mediated phagocytosis is relied upon by immunotherapies, which have become of particular interest for the treatment of cancer and autoimmune diseases. A detailed investigation of phagocytosis is therefore key to improving the therapeutic efficiency of existing medications and creating new ones. A promising method for studying the process is imaging flow cytometry (IFC), which acquires thousands of cell images per second in up to 12 optical channels and allows multiparametric fluorescent and morphological analysis of samples in flow. However, conventional IFC data analysis approaches are based on a highly subjective manual choice of masks and other processing parameters, which can lead to the loss of valuable information embedded in the original image. Here, we show the application of a Faster region-based convolutional neural network (CNN) for accurate quantitative analysis of phagocytosis using imaging flow cytometry data. Phagocytosis of erythrocytes by peritoneal macrophages was chosen as a model system. The CNN performed automatic high-throughput processing of the datasets and demonstrated impressive results in the identification and classification of macrophages and erythrocytes, despite the variety of shapes, sizes, intensities, and textures of cells in the images. The developed procedure determines the number of phagocytosed cells while disregarding cases with a low probability of correct classification. We believe that CNN-based approaches will enable powerful in-depth investigation of a wide range of biological processes and will reveal the intricate nature of heterogeneous objects in images, leading to completely new capabilities in diagnostics and therapy.
Affiliation(s)
- Elizaveta N Mochalova
- Nanobiotechnology Laboratory, Moscow Institute of Physics and Technology, Moscow, Russia; Biophotonics Laboratory, Prokhorov General Physics Institute of the Russian Academy of Sciences, Moscow, Russia; Nanobiomedicine Division, Sirius University of Science and Technology, Sochi, Russia
- Ivan A Kotov
- Nanobiotechnology Laboratory, Moscow Institute of Physics and Technology, Moscow, Russia
- Dmitry A Lifanov
- Nanobiotechnology Laboratory, Moscow Institute of Physics and Technology, Moscow, Russia
- Maxim P Nikitin
- Nanobiotechnology Laboratory, Moscow Institute of Physics and Technology, Moscow, Russia; Nanobiomedicine Division, Sirius University of Science and Technology, Sochi, Russia

31
Ren C, Jung H, Lee S, Jeong D. Coastal Waste Detection Based on Deep Convolutional Neural Networks. Sensors (Basel) 2021; 21:7269. [PMID: 34770576] [PMCID: PMC8586973] [DOI: 10.3390/s21217269]
Abstract
Coastal waste not only has a seriously destructive effect on human life and marine ecosystems, but also poses a long-term economic and environmental threat. To address the problems of manual coastal waste sorting, such as low sorting efficiency and heavy workloads, we develop a novel deep convolutional neural network that combines several strategies to realize intelligent waste recognition and classification, based on the state-of-the-art Faster R-CNN framework. Firstly, to effectively detect small objects, we use multiple-scale fusion to obtain rich semantic information from the shallower feature maps. Secondly, RoI Align is introduced to remove the positioning deviation caused by region-of-interest pooling. Moreover, key parameters are tuned and data augmentation is applied to improve model performance. We also create a new waste object dataset, named IST-Waste, which is made publicly available to facilitate future research in this field. In experiments, the algorithm's mAP reaches 83%, and detection performance is significantly better than that of the original Faster R-CNN and SSD. The developed scheme thus achieves higher accuracy and better performance than state-of-the-art alternatives.
32
Bhujel A, Arulmozhi E, Moon BE, Kim HT. Deep-Learning-Based Automatic Monitoring of Pigs' Physico-Temporal Activities at Different Greenhouse Gas Concentrations. Animals (Basel) 2021; 11:3089. [PMID: 34827821] [PMCID: PMC8614322] [DOI: 10.3390/ani11113089]
Abstract
Pig behavior is an integral part of health and welfare management, as pigs usually reflect their inner emotions through behavioral change, and the livestock environment plays a key role in pigs' health and wellbeing: a poor farm environment increases toxic greenhouse gases (GHGs), which can deteriorate pigs' health and welfare. In this study, a computer-vision-based automatic monitoring and tracking model was proposed to detect pigs' short-term physical activities in a compromised environment. The ventilators of the livestock barn were closed for an hour, three times a day (07:00-08:00, 13:00-14:00, and 20:00-21:00), to create a compromised environment that significantly increases GHG levels. The corresponding pig activities were observed before, during, and after each hour of treatment. Two widely used object detection models (YOLOv4 and Faster R-CNN) were trained and their performances compared in terms of pig localization and posture detection. YOLOv4, which outperformed the Faster R-CNN model, was coupled with the Deep SORT tracking algorithm to detect and track pig activities. The results revealed that the pigs became more inactive as GHG concentration increased, reducing their standing and walking activities; moreover, the pigs shortened their sternal-lying posture and increased the duration of lateral lying at higher GHG concentrations. The high detection accuracy (mAP: 98.67%) and tracking accuracy (MOTA: 93.86% and MOTP: 82.41%) signify the models' efficacy for monitoring and tracking pigs' physical activities non-invasively.
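The MOTA and MOTP figures quoted above are the standard CLEAR MOT tracking metrics: MOTA penalises misses, false positives, and identity switches relative to the number of ground-truth objects, while MOTP averages the match quality of correctly tracked boxes. An illustrative sketch with invented tallies (not the authors' code):

```python
# CLEAR MOT tracking metrics in their simplest form.

def mota(misses: int, false_positives: int, id_switches: int, num_gt: int) -> float:
    """Multi-object tracking accuracy: 1 - (FN + FP + IDSW) / GT."""
    return 1.0 - (misses + false_positives + id_switches) / num_gt

def motp(matched_overlaps) -> float:
    """Multi-object tracking precision: mean overlap (e.g. IoU) of matches."""
    return sum(matched_overlaps) / len(matched_overlaps)

# Hypothetical tally over a short clip with 500 ground-truth pig boxes:
print(mota(misses=15, false_positives=12, id_switches=3, num_gt=500))
print(motp([0.82, 0.79, 0.86, 0.81]))
```

Note that MOTA and MOTP are complementary: a tracker can score well on one and poorly on the other, which is why both are reported.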
Affiliation(s)
- Anil Bhujel
- Department of Biosystems Engineering, Institute of Smart Farm, Gyeongsang National University, Jinju 52828, Korea; (A.B.); (E.A.)
- Ministry of Communication and Information Technology, Singha Durbar, Kathmandu 44600, Nepal
- Elanchezhian Arulmozhi
- Department of Biosystems Engineering, Institute of Smart Farm, Gyeongsang National University, Jinju 52828, Korea; (A.B.); (E.A.)
- Byeong-Eun Moon
- Smart Farm Research Center, Gyeongsang National University, Jinju 52828, Korea;
- Hyeon-Tae Kim
- Department of Biosystems Engineering, Institute of Smart Farm, Gyeongsang National University, Jinju 52828, Korea; (A.B.); (E.A.)
33
Uyar K, Taşdemir Ş, Ülker E, Öztürk M, Kasap H. Multi-Class brain normality and abnormality diagnosis using modified Faster R-CNN. Int J Med Inform 2021; 155:104576. [PMID: 34555555 DOI: 10.1016/j.ijmedinf.2021.104576] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2021] [Revised: 09/10/2021] [Accepted: 09/13/2021] [Indexed: 11/24/2022]
Abstract
BACKGROUND AND OBJECTIVE The detection and analysis of brain disorders through medical imaging techniques are extremely important for timely treatment and a healthy lifestyle. Disorders cause permanent brain damage and shorten the lifespan. Moreover, the manual classification of large volumes of medical image data by medical experts is tiring, time-consuming, and prone to errors. This study aims to diagnose brain normality and abnormalities using a novel ResNet50-modified Faster Regions with Convolutional Neural Network (Faster R-CNN) model. The classification task covers three classes: hemorrhage, hydrocephalus, and normal. The proposed model both determines the borders of the normal/abnormal parts and classifies them with the highest accuracy. METHODS To provide a comprehensive performance analysis of the classification problem, Machine Learning (ML) and Deep Learning (DL) techniques were compared. Artificial Neural Network (ANN), AdaBoost (AB), Decision Tree (DT), Logistic Regression (LR), Naive Bayes (NB), Random Forest (RF), and Support Vector Machine (SVM) were used as ML models. In addition, various Convolutional Neural Network (CNN) models and the proposed ResNet50-modified Faster R-CNN model were used as DL models. The methods were validated on a novel brain dataset that contains both normal and abnormal images. RESULTS LR obtained the highest result among the ML methods and DenseNet201 the highest among the CNN models, with classification accuracies of 84.80% and 85.68%, respectively. The accuracy obtained by the proposed model is 99.75%. CONCLUSIONS Experimental results demonstrate that the proposed model yields better performance for detection and classification tasks. This artificial intelligence (AI) framework can be utilized as a computer-aided medical decision support system for medical experts.
34
Martin C, Zhang Q, Zhai D, Zhang X, Duarte CM. Anthropogenic litter density and composition data acquired flying commercial drones on sandy beaches along the Saudi Arabian Red Sea. Data Brief 2021; 36:107056. [PMID: 33997200 PMCID: PMC8102167 DOI: 10.1016/j.dib.2021.107056] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2021] [Accepted: 04/09/2021] [Indexed: 11/01/2022] Open
Abstract
Anthropogenic litter density and composition data were obtained by conducting aerial surveys on 44 beaches along the Saudi Arabian coast of the Red Sea [1]. The aerial surveys were completed with commercial drones of the DJI Phantom suite flown at a 10 m altitude. The stills have a resolution of less than 0.5 cm per pixel; hence, litter objects of a few centimetres, like bottle caps, are easily detectable in the drone images. We here provide a subsample of the drone images acquired. To spare the time needed to visually count the litter objects in the thousands of drone images acquired, these were automatically screened using an object detection algorithm, specifically a Faster R-CNN, able to perform a binary classification into litter and non-litter and to categorize the objects into classes. The multi-class classification, however, is a challenging problem and was hence conducted only on the 15 beaches that showed the highest performance after the binary classification. The performance of the algorithm was calculated by visually screening a subsample of images and was used to correct the output of the Faster R-CNN. The described steps allowed us to obtain an estimate of the litter density on 44 beaches and the litter composition on 15 beaches. By multiplying the relative abundance of each litter class by the median weight of objects belonging to each class, we obtained an estimate of the total mass of plastic beached on 15 beaches. Possible predictors of litter density and mass are the population and marine traffic densities at the site, the exposure of the beach to the prevailing wind and the wind speed, the fetch length, and the presence of vegetation where litter could get trapped. Making such raw data (i.e. litter density and composition and their predictors) available can help build the basis for a robust global estimate of anthropogenic litter in coastal environments, which is particularly important for an understudied region like the Arabian Peninsula. Moreover, we share a subsample of the original drone images to allow usage by stakeholders.
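The mass estimate described above reduces to a weighted sum over detected classes; a minimal sketch with hypothetical class names, counts, and median weights (not the dataset's actual values):

```python
# Hypothetical per-class detection counts and median object weights (grams).
counts = {"bottle": 120, "cap": 340, "fragment": 510}
median_weight_g = {"bottle": 25.0, "cap": 2.0, "fragment": 1.5}

# Relative abundance of each class among all detected objects.
total_objects = sum(counts.values())
relative_abundance = {c: n / total_objects for c, n in counts.items()}

# Total beached plastic mass: per-class count times median weight, summed.
total_mass_g = sum(counts[c] * median_weight_g[c] for c in counts)
print(total_mass_g)  # 120*25 + 340*2 + 510*1.5 = 4445.0
```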
Affiliation(s)
- Cecilia Martin
- Red Sea Research Center and Computational Bioscience Research Center, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
- Qiannan Zhang
- Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
- Dongjun Zhai
- Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
- Xiangliang Zhang
- Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
- Carlos M. Duarte
- Red Sea Research Center and Computational Bioscience Research Center, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
35
Jiang J, Lei S, Zhu M, Li R, Yue J, Chen J, Li Z, Gong J, Lin D, Wu X, Lin Z, Lin H. Improving the Generalizability of Infantile Cataracts Detection via Deep Learning-Based Lens Partition Strategy and Multicenter Datasets. Front Med (Lausanne) 2021; 8:664023. [PMID: 34026791 PMCID: PMC8137827 DOI: 10.3389/fmed.2021.664023] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2021] [Accepted: 03/22/2021] [Indexed: 11/13/2022] Open
Abstract
Infantile cataract is the main cause of infant blindness worldwide. Although previous studies developed artificial intelligence (AI) diagnostic systems for detecting infantile cataracts in a single center, their generalizability is not ideal because of the complicated noise and heterogeneity of multicenter slit-lamp images, which impedes the application of these AI systems in real-world clinics. In this study, we developed two lens partition strategies (LPSs), based on the deep learning Faster R-CNN and on the Hough transform, for improving the generalizability of infantile cataract detection. A total of 1,643 multicenter slit-lamp images collected from five ophthalmic clinics were used to evaluate the performance of the LPSs. The generalizability of Faster R-CNN for screening and grading was explored by sequentially adding multicenter images to the training dataset. For the partition of normal and abnormal lenses, Faster R-CNN achieved an average intersection over union of 0.9419 and 0.9107, respectively, and the average precisions were both > 95%. Compared with the Hough transform, the accuracy, specificity, and sensitivity of Faster R-CNN for opacity area grading were improved by 5.31%, 8.09%, and 3.29%, respectively. Similar improvements were observed for the grading of opacity density and location. The minimal training sample size required by Faster R-CNN was determined on multicenter slit-lamp images. Furthermore, Faster R-CNN achieved real-time lens partition, taking only 0.25 s per image, whereas the Hough transform needs 34.46 s. Finally, using Grad-CAM and t-SNE techniques, the most relevant lesion regions were highlighted in heatmaps and the high-level features were discriminated. This study provides an effective LPS for improving the generalizability of infantile cataract detection, with the potential to be applied to multicenter slit-lamp images.
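The intersection-over-union scores reported above for the lens partition are the standard box-overlap measure; a minimal reference implementation, with boxes given as (x1, y1, x2, y2) corners:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; width/height clamp to zero when boxes are disjoint.
    inter_w = max(0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # half-shifted boxes overlap by 1/3
```

An average IoU above 0.9, as reported, means the predicted and annotated lens regions almost coincide.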
Affiliation(s)
- Jiewei Jiang
- School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an, China
- Shutao Lei
- School of Communications and Information Engineering, Xi'an University of Posts and Telecommunications, Xi'an, China
- Mingmin Zhu
- School of Mathematics and Statistics, Xidian University, Xi'an, China
- Ruiyang Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Jiayun Yue
- School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an, China
- Jingjing Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Zhongwen Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Jiamin Gong
- School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an, China
- Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Zhuoling Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
36
Hong SJ, Nam I, Kim SY, Kim E, Lee CH, Ahn S, Park IK, Kim G. Automatic Pest Counting from Pheromone Trap Images Using Deep Learning Object Detectors for Matsucoccus thunbergianae Monitoring. Insects 2021; 12:insects12040342. [PMID: 33921492 PMCID: PMC8068825 DOI: 10.3390/insects12040342] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/09/2021] [Revised: 04/08/2021] [Accepted: 04/09/2021] [Indexed: 11/16/2022]
Abstract
Simple Summary The black pine bast scale, Matsucoccus thunbergianae, is a forest pest that causes widespread damage to black pine; therefore, monitoring this pest is necessary to minimize environmental and economic losses in forests. However, monitoring insects in pheromone traps by humans is labor-intensive and time-consuming. To develop an automated monitoring system, we aimed to develop algorithms that detect and count M. thunbergianae in images of pheromone traps using deep-learning-based object detection. Object detection models based on deep neural networks were trained under various conditions, and their detection and counting performances were compared and evaluated. In addition, the models were trained to detect small objects well by cropping images into multiple windows. As a result, the algorithms based on deep learning neural networks successfully detected and counted M. thunbergianae. These results show that accurate and constant pest monitoring is possible using the artificial-intelligence-based methods we propose.
Abstract The black pine bast scale, M. thunbergianae, is a major insect pest of black pine and causes serious environmental and economic losses in forests. Therefore, it is essential to monitor the occurrence and population of M. thunbergianae, and a monitoring method using a pheromone trap is commonly employed. Because the counting of insects in these pheromone traps by humans is labor-intensive and time-consuming, this study proposes automated deep learning counting algorithms using pheromone trap images. The pheromone traps collected in the field were photographed in the laboratory, and the images were used for training, validation, and testing of the detection models. In addition, an image cropping method was applied for the successful detection of small objects, considering the small size of M. thunbergianae in trap images. The detection and counting performance were evaluated and compared for a total of 16 models under eight model conditions and two cropping conditions, and a counting accuracy of 95% or more was shown by most models. This result shows that the artificial-intelligence-based pest counting method proposed in this study is suitable for constant and accurate monitoring of insect pests.
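The cropping strategy mentioned above, splitting a large trap image into overlapping windows so that small insects occupy more pixels per window, can be sketched as follows (window and stride sizes are illustrative, not the study's configuration):

```python
def crop_windows(width, height, win, stride):
    """Top-left corners of overlapping square crop windows covering an image,
    so an object cut at one window's border appears whole in a neighbor."""
    xs = list(range(0, max(width - win, 0) + 1, stride))
    ys = list(range(0, max(height - win, 0) + 1, stride))
    # Make sure the right and bottom image edges are covered.
    if xs[-1] + win < width:
        xs.append(width - win)
    if ys[-1] + win < height:
        ys.append(height - win)
    return [(x, y) for y in ys for x in xs]

print(len(crop_windows(1024, 768, 512, 256)))  # → 6 windows (3 across, 2 down)
```

Detections from each window are then mapped back to full-image coordinates and deduplicated (e.g. by non-maximum suppression) before counting.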
Affiliation(s)
- Suk-Ju Hong
- Department of Biosystems Engineering, College of Agriculture and Life Sciences, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Korea; (S.-J.H.); (S.-Y.K.); (E.K.); (C.-H.L.); (S.A.)
- Il Nam
- Department of Agriculture, Forestry and Bioresources, College of Agriculture and Life Sciences, Seoul National University, Seoul 08826, Korea; (I.N.); (I.-K.P.)
- Sang-Yeon Kim
- Department of Biosystems Engineering, College of Agriculture and Life Sciences, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Korea; (S.-J.H.); (S.-Y.K.); (E.K.); (C.-H.L.); (S.A.)
- Eungchan Kim
- Department of Biosystems Engineering, College of Agriculture and Life Sciences, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Korea; (S.-J.H.); (S.-Y.K.); (E.K.); (C.-H.L.); (S.A.)
- Global Smart Farm Convergence Major, College of Agriculture and Life Sciences, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Korea
- Chang-Hyup Lee
- Department of Biosystems Engineering, College of Agriculture and Life Sciences, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Korea; (S.-J.H.); (S.-Y.K.); (E.K.); (C.-H.L.); (S.A.)
- Global Smart Farm Convergence Major, College of Agriculture and Life Sciences, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Korea
- Sebeom Ahn
- Department of Biosystems Engineering, College of Agriculture and Life Sciences, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Korea; (S.-J.H.); (S.-Y.K.); (E.K.); (C.-H.L.); (S.A.)
- Il-Kwon Park
- Department of Agriculture, Forestry and Bioresources, College of Agriculture and Life Sciences, Seoul National University, Seoul 08826, Korea; (I.N.); (I.-K.P.)
- Research Institute of Agriculture and Life Science, College of Agriculture and Life Sciences, Seoul National University, Seoul 08826, Korea
- Ghiseok Kim
- Department of Biosystems Engineering, College of Agriculture and Life Sciences, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Korea; (S.-J.H.); (S.-Y.K.); (E.K.); (C.-H.L.); (S.A.)
- Global Smart Farm Convergence Major, College of Agriculture and Life Sciences, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Korea
- Research Institute of Agriculture and Life Science, College of Agriculture and Life Sciences, Seoul National University, Seoul 08826, Korea
- Correspondence: ; Tel.: +82-2-880-4603
37
Bari BS, Islam MN, Rashid M, Hasan MJ, Razman MAM, Musa RM, Ab Nasir AF, P.P. Abdul Majeed A. A real-time approach of diagnosing rice leaf disease using deep learning-based faster R-CNN framework. PeerJ Comput Sci 2021; 7:e432. [PMID: 33954231 PMCID: PMC8049121 DOI: 10.7717/peerj-cs.432] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2020] [Accepted: 02/17/2021] [Indexed: 05/25/2023]
Abstract
Diseases of rice leaves often pose threats to the sustainable production of rice, affecting many farmers around the world. Early diagnosis and appropriate remedy of rice leaf infection are crucial in facilitating healthy growth of the rice plants and ensuring adequate supply and food security for the rapidly increasing population. Machine-driven disease diagnosis systems could therefore mitigate the limitations of conventional leaf disease diagnosis techniques, which are often time-consuming, inaccurate, and expensive. Nowadays, computer-assisted rice leaf disease diagnosis systems are becoming very popular. However, several limitations mar their efficacy and usage: strong image backgrounds, vague symptom edges, dissimilarity in image-capturing weather, lack of real-field rice leaf image data, variation in symptoms from the same infection, multiple infections producing similar symptoms, and the lack of an efficient real-time system. To mitigate these problems, a faster region-based convolutional neural network (Faster R-CNN) was employed for the real-time detection of rice leaf diseases in the present research. The Faster R-CNN algorithm introduces an advanced region proposal network (RPN) architecture that addresses object location very precisely to generate candidate regions. The robustness of the Faster R-CNN model is enhanced by training the model with publicly available online and our own real-field rice leaf datasets. The proposed deep-learning-based approach was observed to be effective in the automatic diagnosis of three discriminative rice leaf diseases, rice blast, brown spot, and hispa, with accuracies of 98.09%, 98.85%, and 99.17%, respectively. Moreover, the model was able to identify a healthy rice leaf with an accuracy of 99.25%. The results obtained herein demonstrate that the Faster R-CNN model offers a high-performing rice leaf infection identification system that can diagnose the most common rice diseases precisely in real time.
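The region proposal network mentioned above scores a fixed grid of anchor boxes at every feature-map location; a simplified sketch of anchor generation (the stride, scales, and ratios below are illustrative, not the paper's configuration, and the aspect-ratio handling is a common area-preserving convention):

```python
def anchor_boxes(feat_w, feat_h, stride, scales, ratios):
    """Anchors centered at every feature-map cell, RPN-style: one box per
    (scale, aspect-ratio) pair per location, each with area scale**2."""
    boxes = []
    for j in range(feat_h):
        for i in range(feat_w):
            cx, cy = (i + 0.5) * stride, (j + 0.5) * stride
            for s in scales:
                for r in ratios:
                    w, h = s * r ** 0.5, s / r ** 0.5  # width/height ratio = r
                    boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return boxes

# 4x4 feature map, 3 scales x 3 ratios = 9 anchors per location:
print(len(anchor_boxes(4, 4, 16, scales=[64, 128, 256], ratios=[0.5, 1, 2])))  # → 144
```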
Affiliation(s)
- Bifta Sama Bari
- Faculty of Electrical & Electronics Engineering Technology, Universiti Malaysia Pahang, Pekan, Pahang, Malaysia
- Md Nahidul Islam
- Faculty of Electrical & Electronics Engineering Technology, Universiti Malaysia Pahang, Pekan, Pahang, Malaysia
- Mamunur Rashid
- Faculty of Electrical & Electronics Engineering Technology, Universiti Malaysia Pahang, Pekan, Pahang, Malaysia
- Md Jahid Hasan
- Innovative Manufacturing, Mechatronics and Sports Laboratory, Faculty of Manufacturing and Mechatronic Engineering Technology, Universiti Malaysia Pahang, Pekan, Pahang, Malaysia
- Mohd Azraai Mohd Razman
- Innovative Manufacturing, Mechatronics and Sports Laboratory, Faculty of Manufacturing and Mechatronic Engineering Technology, Universiti Malaysia Pahang, Pekan, Pahang, Malaysia
- Rabiu Muazu Musa
- Centre for Fundamental and Continuing Education, Universiti Malaysia Terengganu, Kuala Nerus, Terengganu, Malaysia
- Ahmad Fakhri Ab Nasir
- Innovative Manufacturing, Mechatronics and Sports Laboratory, Faculty of Manufacturing and Mechatronic Engineering Technology, Universiti Malaysia Pahang, Pekan, Pahang, Malaysia
- Centre for Software Development & Integrated Computing, Universiti Malaysia Pahang, Pahang Darul Makmur, Pekan, Malaysia
- Anwar P.P. Abdul Majeed
- Innovative Manufacturing, Mechatronics and Sports Laboratory, Faculty of Manufacturing and Mechatronic Engineering Technology, Universiti Malaysia Pahang, Pekan, Pahang, Malaysia
- Centre for Software Development & Integrated Computing, Universiti Malaysia Pahang, Pahang Darul Makmur, Pekan, Malaysia
38
Liu S, Zhang Y, Ju Y, Li Y, Kang X, Yang X, Niu T, Xing X, Lu Y. Establishment and Clinical Application of an Artificial Intelligence Diagnostic Platform for Identifying Rectal Cancer Tumor Budding. Front Oncol 2021; 11:626626. [PMID: 33763362 PMCID: PMC7982570 DOI: 10.3389/fonc.2021.626626] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2020] [Accepted: 01/25/2021] [Indexed: 12/23/2022] Open
Abstract
Tumor budding is considered a sign of cancer cell activity and the first step of tumor metastasis. This study aimed to establish an automatic diagnostic platform for rectal cancer budding pathology by training a Faster region-based convolutional neural network (Faster R-CNN) on pathological images of rectal cancer budding. Postoperative pathological section images of 236 patients with rectal cancer from the Affiliated Hospital of Qingdao University, China, taken from January 2015 to January 2017, were used in the analysis. The tumor site was labeled using image annotation software. The images of the learning set were used to train Faster R-CNN and establish an automatic diagnostic platform for tumor budding pathology analysis. The images of the test set were used to verify the learning outcome. The diagnostic platform was evaluated through the receiver operating characteristic (ROC) curve. Through training on pathological images of tumor budding, an automatic diagnostic platform for rectal cancer budding pathology was preliminarily established. Precision-recall curves were generated for the nodule category in the training set; the area under the curve was 0.7414, indicating that the training of Faster R-CNN was effective. Validation on the validation set yielded an area under the ROC curve of 0.88, indicating that the established artificial intelligence platform performed well at the pathological diagnosis of tumor budding. The established Faster R-CNN deep neural network platform for the pathological diagnosis of rectal cancer tumor budding can help pathologists make more efficient and accurate pathological diagnoses.
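The areas under the precision-recall and ROC curves reported above are typically computed with the trapezoidal rule over sampled curve points; a minimal sketch with hypothetical points (not the study's curves):

```python
def auc_trapezoid(points):
    """Area under a curve given (x, y) points sorted by x, via the
    trapezoidal rule; e.g. an ROC curve of (FPR, TPR) pairs."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area

# Hypothetical ROC points from (0,0) to (1,1):
roc = [(0.0, 0.0), (0.1, 0.7), (0.3, 0.9), (1.0, 1.0)]
print(auc_trapezoid(roc))
```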
Affiliation(s)
- Shanglong Liu
- Department of Gastrointestinal Surgery, The Affiliated Hospital of Qingdao University, Qingdao, China
- Yuejuan Zhang
- Department of Pathology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Yiheng Ju
- Department of Gastrointestinal Surgery, The Affiliated Hospital of Qingdao University, Qingdao, China
- Ying Li
- Department of Blood Transfusion, The Affiliated Hospital of Qingdao University, Qingdao, China
- Xiaoning Kang
- Department of Operating Room, The Affiliated Hospital of Qingdao University, Qingdao, China
- Xiaojuan Yang
- Department of Operating Room, The Affiliated Hospital of Qingdao University, Qingdao, China
- Tianye Niu
- Nuclear & Radiological Engineering and Medical Physics Programs, Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA, United States
- Xiaoming Xing
- Department of Pathology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Yun Lu
- Department of Gastrointestinal Surgery, The Affiliated Hospital of Qingdao University, Qingdao, China
39
Abstract
Existing joint embedding Visual Question Answering (VQA) models use different combinations of image characterization, text characterization, and feature fusion methods, but all of them use static word vectors for text characterization. In a real language environment, however, the same word may represent different meanings in different contexts and may also serve as different grammatical components. These differences cannot be effectively expressed by static word vectors, so semantic and grammatical deviations may arise. To solve this problem, our article constructs a joint embedding model based on dynamic word vectors, the None KB-Specific Network (N-KBSN) model, which differs from the commonly used Visual Question Answering models based on static word vectors. The N-KBSN model consists of three main parts: a question text and image feature extraction module, a self-attention and guided attention module, and a feature fusion and classifier module. The key elements of the N-KBSN model are image characterization based on Faster R-CNN, text characterization based on ELMo, and feature enhancement based on a multi-head attention mechanism. The experimental results show that the N-KBSN constructed in our experiment outperforms the 2017-winner (GloVe) and 2019-winner (GloVe) models. The introduction of dynamic word vectors improves the accuracy of the overall results.
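The multi-head attention mechanism named above is built from scaled dot-product attention; a single-query, single-head, pure-Python sketch with toy two-dimensional vectors (not the model's dimensions):

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention for one query over a small set of
    key/value vectors: softmax(q . k / sqrt(d)) weights the values."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]

out = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 2.0], [3.0, 4.0]])
print([round(x, 3) for x in out])
```

A multi-head layer runs several such attentions in parallel on learned projections of the inputs and concatenates the results.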
Affiliation(s)
- Zhiyang Ma
- School of Automation, University of Electronic Science and Technology of China, Chengdu, P. R. China
- Wenfeng Zheng
- School of Automation, University of Electronic Science and Technology of China, Chengdu, P. R. China
- Xiaobing Chen
- School of Automation, University of Electronic Science and Technology of China, Chengdu, P. R. China
- Lirong Yin
- Department of Geography and Anthropology, Louisiana State University, LA, USA
40
Singh S, Ahuja U, Kumar M, Kumar K, Sachdeva M. Face mask detection using YOLOv3 and faster R-CNN models: COVID-19 environment. Multimed Tools Appl 2021; 80:19753-19768. [PMID: 33679209 PMCID: PMC7917166 DOI: 10.1007/s11042-021-10711-8] [Citation(s) in RCA: 38] [Impact Index Per Article: 12.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/07/2020] [Revised: 01/23/2021] [Accepted: 02/10/2021] [Indexed: 05/23/2023]
Abstract
There are many ways to prevent the spread of the COVID-19 virus, and one of the most effective is wearing a face mask. Almost everyone wears a face mask at all times in public places during the coronavirus pandemic. This encourages us to explore face mask detection technology to monitor people wearing masks in public places. Most recent and advanced face mask detection approaches are designed using deep learning. In this article, two state-of-the-art object detection models, namely YOLOv3 and Faster R-CNN, are used to achieve this task. The authors have trained both models on a dataset that consists of images of people of two categories: with and without face masks. This work proposes a technique that draws bounding boxes (red or green) around the faces of people, based on whether a person is wearing a mask or not, and keeps a record of the ratio of people wearing face masks on a daily basis. The authors have also compared the performance of both models, i.e., their precision rates and inference times.
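The daily mask-wearing ratio described above reduces to counting detection labels per day; a trivial sketch (the label names are hypothetical):

```python
def mask_ratio(labels):
    """Fraction of detected faces classified as wearing a mask
    (the green-box class) out of all detected faces."""
    if not labels:
        return 0.0
    return sum(1 for lab in labels if lab == "mask") / len(labels)

# Hypothetical detector outputs from one day's footage:
print(mask_ratio(["mask", "no_mask", "mask", "mask"]))  # → 0.75
```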
Affiliation(s)
- Sunil Singh
- Department of Information Technology, University Institute of Engineering and Technology, Panjab University, Chandigarh, India
- Umang Ahuja
- Department of Information Technology, University Institute of Engineering and Technology, Panjab University, Chandigarh, India
- Munish Kumar
- Department of Computational Sciences, Maharaja Ranjit Singh Punjab Technical University, Bathinda, Punjab, India
- Krishan Kumar
- Department of Information Technology, University Institute of Engineering and Technology, Panjab University, Chandigarh, India
- Monika Sachdeva
- Department of Computer Science and Engineering, I. K. G. Punjab Technical University, Kapurthala, Punjab, India
41
Genze N, Bharti R, Grieb M, Schultheiss SJ, Grimm DG. Accurate machine learning-based germination detection, prediction and quality assessment of three grain crops. Plant Methods 2020; 16:157. [PMID: 33353559 PMCID: PMC7754596 DOI: 10.1186/s13007-020-00699-x] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/22/2020] [Accepted: 12/11/2020] [Indexed: 05/03/2023]
Abstract
BACKGROUND Assessment of seed germination is an essential task for seed researchers to measure the quality and performance of seeds. Usually, seed assessments are done manually, which is a cumbersome, time-consuming and error-prone process. Classical image analysis methods are not well suited for large-scale germination experiments, because they often rely on manual adjustments of color-based thresholds. We here propose a machine learning approach using modern artificial neural networks with region proposals for accurate seed germination detection and high-throughput seed germination experiments. RESULTS We generated labeled imaging data of the germination process of more than 2400 seeds for three different crops, Zea mays (maize), Secale cereale (rye) and Pennisetum glaucum (pearl millet), with a total of more than 23,000 images. Different state-of-the-art convolutional neural network (CNN) architectures with region proposals were trained using transfer learning to automatically identify seeds within petri dishes and to predict whether the seeds germinated or not. Our proposed models achieved a high mean average precision (mAP) on a hold-out test data set of approximately 97.9%, 94.2% and 94.3% for Zea mays, Secale cereale and Pennisetum glaucum, respectively. Further, various single-value germination indices, such as Mean Germination Time and Germination Uncertainty, can be computed more accurately from the predictions of our proposed model than from manual counts. CONCLUSION Our proposed machine learning-based method can help speed up the assessment of seed germination experiments for different seed cultivars. It has lower error rates and higher performance than conventional and manual methods, leading to more accurate germination indices and quality assessments of seeds.
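Single-value indices such as Mean Germination Time, mentioned above, follow directly from per-day germination counts; a sketch of the standard formula (the counts are hypothetical):

```python
def mean_germination_time(counts_by_day):
    """Mean Germination Time: sum(n_i * t_i) / sum(n_i), where n_i seeds
    germinate at time t_i (in days)."""
    total = sum(counts_by_day.values())
    return sum(t * n for t, n in counts_by_day.items()) / total

# Hypothetical counts: 10 seeds germinate on day 2, 25 on day 3, 5 on day 5.
print(mean_germination_time({2: 10, 3: 25, 5: 5}))  # (20 + 75 + 25) / 40 = 3.0
```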
Affiliation(s)
- Nikita Genze
- Technical University of Munich, TUM Campus Straubing for Biotechnology and Sustainability, Bioinformatics, Schulgasse 22, 94315, Straubing, Germany
- Weihenstephan-Triesdorf University of Applied Sciences, Petersgasse 18, 94315, Straubing, Germany
- Richa Bharti
- Technical University of Munich, TUM Campus Straubing for Biotechnology and Sustainability, Bioinformatics, Schulgasse 22, 94315, Straubing, Germany
- Weihenstephan-Triesdorf University of Applied Sciences, Petersgasse 18, 94315, Straubing, Germany
- Michael Grieb
- Technology and Support Centre in the Centre of Excellence for Renewable Resources (TFZ), Schulgasse 20, 94315, Straubing, Germany
- Dominik G Grimm
- Technical University of Munich, TUM Campus Straubing for Biotechnology and Sustainability, Bioinformatics, Schulgasse 22, 94315, Straubing, Germany
- Weihenstephan-Triesdorf University of Applied Sciences, Petersgasse 18, 94315, Straubing, Germany
- Department of Informatics, Technical University of Munich, Boltzmannstr. 3, 85748, Garching, Germany
42
Su F, Sun Y, Hu Y, Yuan P, Wang X, Wang Q, Li J, Ji JF. Development and validation of a deep learning system for ascites cytopathology interpretation. Gastric Cancer 2020; 23:1041-1050. [PMID: 32500456 DOI: 10.1007/s10120-020-01093-1] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/27/2020] [Accepted: 05/25/2020] [Indexed: 02/07/2023]
Abstract
BACKGROUND Early diagnosis of peritoneal metastasis (PM) is clinically significant for optimal treatment selection and avoidance of unnecessary surgical procedures. Cytopathology plays an important role in early screening of PM. We aimed to develop a deep learning (DL) system to achieve intelligent cytopathology interpretation, especially in ascites cytopathology. METHODS The original ascites cytopathology image dataset consists of 139 patients' original hematoxylin-eosin (HE) and Papanicolaou (PAP) staining images. The DL system was developed using transfer learning (TL) to achieve cell detection and classification. Pre-trained AlexNet, VGG16, GoogLeNet, ResNet18 and ResNet50 models were studied. The cell detection dataset consists of 176 cropped images with 6573 annotated cell bounding boxes. The cell classification dataset consists of 487 cropped images with 18,558 and 6089 annotated malignant and benign cells in total, respectively. RESULTS We established a novel ascites cytopathology image dataset and achieved automatic cell detection and classification. The DetectionNet, based on Faster R-CNN with a pre-trained ResNet18 backbone, achieved cell detection with 87.22% of cells' Intersection over Union (IoU) greater than the threshold of 0.5. The mean average precision (mAP) was 0.8316. The ClassificationNet, based on ResNet50, achieved the best performance in cell classification with AUC = 0.8851, Precision = 96.80%, FNR = 4.73%. The DL system integrating the separately trained DetectionNet and ClassificationNet showed strong performance in cytopathology image interpretation. CONCLUSIONS We demonstrate that the integration of DL can improve the efficiency of healthcare. The DL system we developed using TL techniques achieved accurate cytopathology interpretation and has great potential to be integrated into clinician workflow.
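The IoU criterion used above to decide whether a detected cell matches an annotated bounding box is the standard one; a minimal sketch of the computation for axis-aligned (x1, y1, x2, y2) boxes:

```python
def iou(box_a, box_b):
    # Intersection over Union of two axis-aligned boxes,
    # each given as (x1, y1, x2, y2) with x1 < x2 and y1 < y2.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A detection matching a ground-truth box counts as correct
# when the IoU exceeds the 0.5 threshold used in the study.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.3333333333333333
```

The "87.22% of cells' IoU greater than 0.5" figure is then simply the fraction of matched detections clearing that threshold.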
Affiliation(s)
- Feng Su
- Peking-Tsinghua Center for Life Sciences, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, 100871, China
- Yu Sun
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital and Institute, Beijing, China
- Yajie Hu
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital and Institute, Beijing, China
- Peijiang Yuan
- School of Mechanical Engineering and Automation, Beihang University, Beijing, 100191, China
- Xinyu Wang
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital and Institute, Beijing, China
- Qian Wang
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital and Institute, Beijing, China
- Jianmin Li
- Institute for Artificial Intelligence, the State Key Laboratory of Intelligence Technology and Systems, Beijing National Research Center for Information Science and Technology, Department of Computer Science and Technology, Tsinghua University, Hai Dian District, Beijing, 100084, China
- Jia-Fu Ji
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Gastrointestinal Cancer Center, Peking University, Cancer Hospital and Institute, No. 52 Fu Cheng Road, Hai Dian District, Beijing, 100142, China
43
Xiao Y, Wang X, Zhang P, Meng F, Shao F. Object Detection Based on Faster R-CNN Algorithm with Skip Pooling and Fusion of Contextual Information. Sensors (Basel) 2020; 20:E5490. [PMID: 32992739 PMCID: PMC7582940 DOI: 10.3390/s20195490] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/02/2020] [Revised: 09/20/2020] [Accepted: 09/24/2020] [Indexed: 11/16/2022]
Abstract
Deep learning is currently the mainstream method of object detection. The faster region-based convolutional neural network (Faster R-CNN) holds a pivotal position in deep learning and achieves impressive detection results in ordinary scenes. However, under special conditions its detection performance can still be unsatisfactory, for example when objects are occluded, deformed, or small. This paper proposes a novel and improved algorithm based on the Faster R-CNN framework that combines skip pooling with the fusion of contextual information. The algorithm improves detection performance under such special conditions on the basis of Faster R-CNN. The improvement has three main parts: the first adds a context information feature extraction model after the conv5_3 convolutional layer; the second adds skip pooling so that the model can fully obtain the contextual information of the object, especially in situations where the object is occluded or deformed; and the third replaces the region proposal network (RPN) with a more efficient guided anchor RPN (GA-RPN), which maintains the recall rate while improving detection performance. The latter obtains more detailed information from different feature layers of the deep neural network and is especially aimed at scenes with small objects. Compared with Faster R-CNN, the You Only Look Once series (e.g., YOLOv3), the Single Shot Detector (e.g., SSD512), and other object detection algorithms, the algorithm proposed in this paper achieves an average improvement of 6.857% on the mean average precision (mAP) evaluation index while maintaining a comparable recall rate. This demonstrates that the proposed method has a higher detection rate and detection efficiency in these cases.
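The mAP figure quoted above is the mean over object classes of the average precision (AP). As a sketch of one common convention (the PASCAL VOC 11-point interpolation; the paper's exact evaluation protocol is not specified here):

```python
def average_precision_11pt(recalls, precisions):
    # PASCAL VOC 11-point interpolation: average the maximum
    # precision attained at recall >= r, for r in {0.0, 0.1, ..., 1.0}.
    ap = 0.0
    for i in range(11):
        r = i / 10
        ap += max((p for rec, p in zip(recalls, precisions) if rec >= r),
                  default=0.0)
    return ap / 11

def mean_average_precision(ap_per_class):
    # mAP is simply the mean of the per-class APs.
    return sum(ap_per_class) / len(ap_per_class)

# A perfect detector keeps precision 1.0 at every recall level.
print(average_precision_11pt([0.0, 0.5, 1.0], [1.0, 1.0, 1.0]))  # 1.0
```

Modern benchmarks often use all-point interpolation or average over IoU thresholds instead, so reported mAP values are only comparable under the same convention.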
Affiliation(s)
- Xinqing Wang
- Department of Mechanical Engineering, College of Field Engineering, Army Engineering University of PLA, Nanjing 210007, China; (Y.X.); (P.Z.); (F.M.); (F.S.)
44
Li M, Zhang Z, Lei L, Wang X, Guo X. Agricultural Greenhouses Detection in High-Resolution Satellite Images Based on Convolutional Neural Networks: Comparison of Faster R-CNN, YOLO v3 and SSD. Sensors (Basel) 2020; 20:s20174938. [PMID: 32878345 PMCID: PMC7506698 DOI: 10.3390/s20174938] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/21/2020] [Revised: 08/25/2020] [Accepted: 08/27/2020] [Indexed: 11/25/2022]
Abstract
Agricultural greenhouses (AGs) are an important facility for the development of modern agriculture. Accurately and effectively detecting AGs is a necessity for the strategic planning of modern agriculture. With the advent of deep learning algorithms, various convolutional neural network (CNN)-based models have been proposed for object detection in high-spatial-resolution images. In this paper, we conducted a comparative assessment of three well-established CNN-based models, Faster R-CNN, You Only Look Once-v3 (YOLO v3), and the Single Shot Multi-Box Detector (SSD), for detecting AGs. Transfer learning and fine-tuning approaches were used to train the models. Accuracy and efficiency evaluations show that YOLO v3 achieved the best performance according to the mean average precision (mAP) and frames per second (FPS) metrics and visual inspection. SSD demonstrated an advantage in detection speed, with an FPS twice that of Faster R-CNN, although their mAP values are close on the test set. The trained models were also applied to two independent test sets, which showed that these models have a certain transferability and that higher-resolution images are significant for accuracy improvement. Our study suggests that YOLO v3, with its superiority in both accuracy and computational efficiency, can be applied operationally to detect AGs in high-resolution satellite images.
Affiliation(s)
- Min Li
- Key Laboratory of Digital Earth Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China; (M.L.); (L.L.)
- College of Resources and Environment, University of Chinese Academy of Sciences, Beijing 100190, China
- Zhijie Zhang
- Key Laboratory of Digital Earth Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China; (M.L.); (L.L.)
- College of Resources and Environment, University of Chinese Academy of Sciences, Beijing 100190, China
- Correspondence: ; Tel.: +86-188-0131-0721
- Liping Lei
- Key Laboratory of Digital Earth Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China; (M.L.); (L.L.)
- Xiaofan Wang
- Key Laboratory of Land Use, Ministry of Natural Resources, China Land Surveying and Planning Institute, Beijing 100035, China; (X.W.); (X.G.)
- Xudong Guo
- Key Laboratory of Land Use, Ministry of Natural Resources, China Land Surveying and Planning Institute, Beijing 100035, China; (X.W.); (X.G.)
45
Xu W, Zhu Z, Ge F, Han Z, Li J. Analysis of Behavior Trajectory Based on Deep Learning in Ammonia Environment for Fish. Sensors (Basel) 2020; 20:E4425. [PMID: 32784391 DOI: 10.3390/s20164425] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/06/2020] [Revised: 07/24/2020] [Accepted: 08/04/2020] [Indexed: 11/22/2022]
Abstract
Ammonia can be produced by the respiration and excretion of fish during the farming process, which can affect the life of fish. In this paper, to study the behavior of fish under different ammonia concentrations and to make corresponding judgments and early warnings for abnormal fish behavior, different ammonia environments are simulated by adding ammonium chloride to the water. Different from existing methods based on direct artificial observation or artificial marking, this paper proposes a behavior-trajectory recognition and analysis approach based on deep learning. First, the three-dimensional spatial trajectories of fish are drawn via three-dimensional reconstruction. Then, the influence of different concentrations of ammonia on fish is analyzed according to the behavior trajectories of fish at each concentration. The results of comparative experiments show that the movement and vitality of the fish decrease significantly, and the fish often stagnate, in water containing ammonium chloride. The proposed approach can provide a new idea for the behavior analysis of animals.
46
Yang SJ, Lu Y, Zheng XF, Zhang YJ, Xin FJ, Sun P, Li Y, Liu SS, Li S, Guo YT, Liu SL. [Establishment and clinical testing of pancreatic cancer Faster R-CNN AI system based on fast regional convolutional neural network]. Zhonghua Wai Ke Za Zhi 2020; 58:520-524. [PMID: 32610422 DOI: 10.3760/cma.j.cn112139-20191017-00515] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
Objective: To investigate the effectiveness of an enhanced CT automatic recognition system for pancreatic cancer based on Faster R-CNN and its clinical value. Methods: In this study, 4 024 enhanced CT imaging sequences of 315 patients with pancreatic cancer from January 2013 to May 2016 at the Affiliated Hospital of Qingdao University were collected retrospectively, and 2 614 imaging sequences were input into the Faster R-CNN system as a training dataset to create an automatic image recognition model, which was then validated by reading 1 410 enhanced CT images of 135 cases of pancreatic cancer. To assess its effectiveness, 3 750 CT images of 150 patients with pancreatic lesions were read and a follow-up was carried out. The accuracy and recall rate in detecting nodules were recorded and regression curves were generated. In addition, the accuracy, sensitivity and specificity of the Faster R-CNN diagnosis were analyzed, ROC curves were generated and the areas under the curves were calculated. Results: Based on the enhanced CT images of 135 cases, the area under the ROC curve calculated for Faster R-CNN was 0.927. The accuracy, specificity and sensitivity were 0.902, 0.913 and 0.801, respectively. After the data of the 150 patients with pancreatic lesions were verified, 893 CT images were positive and 2 857 negative. Ninety-eight patients with pancreatic cancer were diagnosed by Faster R-CNN. At follow-up, 53 cases were post-operatively proved to be pancreatic ductal carcinoma, 21 cases pancreatic cystadenocarcinoma, 12 cases pancreatic cystadenoma, 5 cases pancreatic cyst, and 7 cases were untreated. During 5 to 17 months after operation, 6 patients died of abdominal tumor infiltration or liver and lung metastasis. Of the 52 patients who were diagnosed negative by Faster R-CNN, 9 were post-operatively proved to be pancreatic ductal carcinoma.
Conclusion: The Faster R-CNN system has clinical value in helping imaging physicians diagnose pancreatic cancer.
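For reference, the accuracy, sensitivity and specificity figures reported above derive from the 2×2 confusion table in the usual way. An illustrative sketch with made-up counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    # Standard definitions from the 2x2 confusion table:
    # sensitivity = TP / (TP + FN)   (true positive rate)
    # specificity = TN / (TN + FP)   (true negative rate)
    # accuracy    = (TP + TN) / all cases
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts, chosen only to show the arithmetic:
sens, spec, acc = diagnostic_metrics(tp=80, fp=10, tn=90, fn=20)
print(sens, spec, acc)  # 0.8 0.9 0.85
```

Sweeping the model's decision threshold and plotting sensitivity against (1 − specificity) at each point yields the ROC curve whose area (0.927 here) is reported.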
Affiliation(s)
- S J Yang
- Department of Gastrointestinal Surgery, Affiliated Hospital of Qingdao University, Qingdao 266000, China
- Y Lu
- Department of Gastrointestinal Surgery, Affiliated Hospital of Qingdao University, Qingdao 266000, China
- X F Zheng
- Department of Gastrointestinal Surgery, Affiliated Hospital of Qingdao University, Qingdao 266000, China
- Y J Zhang
- Department of Pathology, Affiliated Hospital of Qingdao University, Qingdao 266000, China
- F J Xin
- Department of Pathology, Affiliated Hospital of Qingdao University, Qingdao 266000, China
- P Sun
- Department of Cardiac Ultrasound, Affiliated Hospital of Qingdao University, Qingdao 266000, China
- Y Li
- Department of Blood Transfusion, Affiliated Hospital of Qingdao University, Qingdao 266000, China
- S S Liu
- Department of Gastrointestinal Surgery, Affiliated Hospital of Qingdao University, Qingdao 266000, China
- S Li
- Beijing University of Aeronautics and Astronautics, Beijing 100191, China
- Y T Guo
- Beijing University of Aeronautics and Astronautics, Beijing 100191, China
- S L Liu
- Department of Gastrointestinal Surgery, Affiliated Hospital of Qingdao University, Qingdao 266000, China
47
Abstract
Manually counting hens in battery cages on large commercial poultry farms is a challenging task: time-consuming and often inaccurate. Therefore, the aim of this study was to develop a machine vision system that automatically counts the number of hens in battery cages. Automatically counting hens can help a regulatory agency or inspecting officer to estimate the number of living birds in a cage and, thus animal density, to ensure that they conform to government regulations or quality certification requirements. The test hen house was 87 m long, containing 37 battery cages stacked in 6-story high rows on both sides of the structure. Each cage housed 18 to 30 hens, for a total of approximately 11 000 laying hens. A feeder moves along the cages. A camera was installed on an arm connected to the feeder, which was specifically developed for this purpose. A wide-angle lens was used in order to frame an entire cage in the field of view. Detection and tracking algorithms were designed to detect hens in cages; the recorded videos were first processed using a convolutional neural network (CNN) object detection algorithm called Faster R-CNN, with an input of multi-angular view shifted images. After the initial detection, the hens' relative location along the feeder was tracked and saved using a tracking algorithm. Information was added with every additional frame, as the camera arm moved along the cages. The algorithm count was compared with that made by a human observer (the 'gold standard'). A validation dataset of about 2000 images achieved 89.6% accuracy at cage level, with a mean absolute error of 2.5 hens per cage. These results indicate that the model developed in this study is practicable for obtaining fairly good estimates of the number of laying hens in battery cages.
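The cage-level accuracy and mean absolute error reported above can be computed from per-cage counts as follows. A minimal sketch with toy numbers, not the study's data:

```python
def mean_absolute_error(predicted, actual):
    # Average absolute count difference per cage.
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

def cage_level_accuracy(predicted, actual, tolerance=0):
    # Fraction of cages whose predicted count is within
    # `tolerance` hens of the human observer's count.
    hits = sum(1 for p, a in zip(predicted, actual) if abs(p - a) <= tolerance)
    return hits / len(actual)

predicted = [24, 27, 18, 30]  # algorithm counts (toy data)
actual = [25, 27, 20, 30]     # human 'gold standard' counts
print(mean_absolute_error(predicted, actual))  # 0.75
```

The `tolerance` parameter is an assumption added for illustration; how the study defined a "correct" cage count is not stated in the abstract.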
48
Rosati R, Romeo L, Silvestri S, Marcheggiani F, Tiano L, Frontoni E. Faster R-CNN approach for detection and quantification of DNA damage in comet assay images. Comput Biol Med 2020; 123:103912. [PMID: 32658777 DOI: 10.1016/j.compbiomed.2020.103912] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2020] [Revised: 06/23/2020] [Accepted: 07/07/2020] [Indexed: 12/30/2022]
Abstract
BACKGROUND AND OBJECTIVE DNA damage analysis can provide valuable information in several areas, ranging from the diagnosis/treatment of a disease to the monitoring of the effects of genetic and environmental influences. The evaluation of the damage is determined by comet scoring, which can be performed manually by a skilled operator. However, this approach is very time-consuming, and the operator dependency results in subjective damage quantification and thus in high inter/intra-operator variability. METHODS In this paper, we aim to overcome this issue by introducing a deep learning methodology based on Faster R-CNN to fully automate the overall approach while discovering unseen discriminative patterns in comets. RESULTS The experimental results on two real use-case datasets reveal the higher performance (up to a mean average precision of 0.74) of the proposed methodology against other state-of-the-art approaches. Additionally, the validation procedure performed by expert biologists highlights how the proposed approach is able to unveil true comets that are often missed by the human eye and by standard computer vision methodology. CONCLUSIONS This work contributes to the biomedical informatics field by introducing a novel approach, based on an established object detection deep learning technique, for evaluating DNA damage. The main contribution is the application of Faster R-CNN for the detection and quantification of DNA damage in comet assay images, fully automating the DNA damage detection/classification task. The experimental results on two real use-case datasets demonstrated (i) the higher robustness of the proposed methodology against other state-of-the-art deep learning competitors, (ii) the speeding up of the comet analysis procedure and (iii) the minimization of intra/inter-operator variability.
Affiliation(s)
- Riccardo Rosati
- Department of Information Engineering, Polytechnic University of Marche, Via Brecce Bianche 12, 60131 Ancona, Italy
- Luca Romeo
- Department of Information Engineering, Polytechnic University of Marche, Via Brecce Bianche 12, 60131 Ancona, Italy; Computational Statistics and Machine Learning and Cognition, Motion and Neuroscience, Istituto Italiano di Tecnologia, Genova, Italy
- Sonia Silvestri
- Biochemistry Department of Life and Environmental Sciences, Polytechnic University of Marche, Via Brecce Bianche 12, 60131 Ancona, Italy
- Fabio Marcheggiani
- Biochemistry Department of Life and Environmental Sciences, Polytechnic University of Marche, Via Brecce Bianche 12, 60131 Ancona, Italy
- Luca Tiano
- Biochemistry Department of Life and Environmental Sciences, Polytechnic University of Marche, Via Brecce Bianche 12, 60131 Ancona, Italy
- Emanuele Frontoni
- Department of Information Engineering, Polytechnic University of Marche, Via Brecce Bianche 12, 60131 Ancona, Italy
49
鞠 孟, 李 欣, 李 章. [Detection of white blood cells in microscopic leucorrhea images based on deep active learning]. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi 2020; 37:519-526. [PMID: 32597095 PMCID: PMC10319563 DOI: 10.7507/1001-5515.201909040] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Subscribe] [Scholar Register] [Received: 09/17/2019] [Indexed: 11/03/2022]
Abstract
The number of white blood cells in a leucorrhea microscopic image can indicate the severity of vaginal inflammation. At present, the detection of white blood cells in leucorrhea mainly relies on manual microscopy by medical experts, which is time-consuming, expensive and error-prone. In recent years, some studies have proposed intelligent detection of leucorrhea white blood cells based on deep learning technology. However, such methods usually require manual labeling of a large number of samples as training sets, and the labeling cost is high. Therefore, this study proposes the use of deep active learning algorithms to achieve intelligent detection of white blood cells in leucorrhea microscopic images. In the active learning framework, a small number of labeled samples were first used as the basic training set to train a faster region-based convolutional neural network (Faster R-CNN) detection model. Then the most valuable samples were automatically selected for manual annotation, and the training set and the corresponding detection model were iteratively updated, steadily improving the performance of the model. The experimental results show that deep active learning can achieve higher detection accuracy with fewer manually labeled samples: the average precision of white blood cell detection reached 90.6%, which meets the requirements of routine clinical examination.
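The "most valuable samples" step above can be implemented in several ways; one common choice (an assumption here, not necessarily this paper's criterion) is uncertainty sampling, picking the pool images whose detections the current model is least confident about:

```python
def least_confidence(box_scores):
    # Image-level uncertainty proxy: confidence of the least
    # certain detected box (0.0 if nothing was detected).
    return min(box_scores) if box_scores else 0.0

def select_for_annotation(pool, k):
    # pool: list of (image_id, [box confidence scores]) produced by
    # the current detection model on unlabeled images. Return the k
    # images with the lowest score, to be manually annotated next.
    ranked = sorted(pool, key=lambda item: least_confidence(item[1]))
    return [image_id for image_id, _ in ranked[:k]]

pool = [("img_a", [0.95, 0.91]), ("img_b", [0.40, 0.88]), ("img_c", [0.70])]
print(select_for_annotation(pool, 2))  # ['img_b', 'img_c']
```

Each active-learning round then adds the newly labeled images to the training set and retrains, which is the iterative update the abstract describes.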
Affiliation(s)
- 孟汐 鞠
- Biomedical Engineering Research Center, The Chongqing University of Posts and Telecommunications, Chongqing 400065, P.R. China
- 欣蔚 李
- Biomedical Engineering Research Center, The Chongqing University of Posts and Telecommunications, Chongqing 400065, P.R. China
- 章勇 李
- Biomedical Engineering Research Center, The Chongqing University of Posts and Telecommunications, Chongqing 400065, P.R. China
50
Peng J, Bao C, Hu C, Wang X, Jian W, Liu W. Automated mammographic mass detection using deformable convolution and multiscale features. Med Biol Eng Comput 2020; 58:1405-1417. [PMID: 32297129 DOI: 10.1007/s11517-020-02170-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2019] [Accepted: 03/26/2020] [Indexed: 10/24/2022]
Abstract
Designing computer-assisted diagnosis (CAD) systems that can precisely identify lesions in mammography images would be useful for clinicians. Considering the morphological variation in breast cancer, it is necessary to extract robust features from the mammogram. Here, we propose a mass detection CAD system based on Faster R-CNN. First, we applied a novel convolution network in the backbone of Faster R-CNN, namely the deformable convolution network (DCN), which improves the detection of lesions with varying shapes and sizes. Second, the original Faster R-CNN uses the output of the last layer of the backbone as a single-scale feature map. To facilitate the detection of small lesions, we used a multiscale feature pyramid network with multiple cross-scale connections between the different output layers of the backbone, called the neural architecture search-feature pyramid network (NAS-FPN). Thus, we were able to integrate the best features into the model. We then evaluated our method on the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) and the INbreast dataset. Our method yielded a true positive rate of 0.9345 at 2.2805 false positives per image on CBIS-DDSM and a true positive rate of 0.9554 at 0.3829 false positives per image on INbreast.
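The operating points quoted above (a true positive rate at a given number of false positives per image) come from a FROC-style analysis: sweeping the detector's confidence threshold and recording both rates at each setting. A sketch with toy data, not the paper's results:

```python
def froc_points(detections, n_lesions, n_images, thresholds):
    # detections: (confidence score, matched_a_true_lesion) pairs
    # pooled over the whole test set. For each score threshold,
    # report (true positive rate, false positives per image).
    points = []
    for t in thresholds:
        kept = [hit for score, hit in detections if score >= t]
        tp = sum(kept)
        fp = len(kept) - tp
        points.append((tp / n_lesions, fp / n_images))
    return points

dets = [(0.9, True), (0.8, False), (0.7, True), (0.6, False)]
print(froc_points(dets, n_lesions=2, n_images=2, thresholds=[0.75, 0.5]))
# [(0.5, 0.5), (1.0, 1.0)]
```

Lowering the threshold moves along the curve toward higher sensitivity at the cost of more false positives per image, which is the trade-off the two reported operating points summarize.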
Affiliation(s)
- Junchuan Peng
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, Guangdong, People's Republic of China
- Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen University, Shenzhen, 518060, Guangdong, People's Republic of China
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Shenzhen University, Shenzhen, 518060, Guangdong, People's Republic of China
- Changyu Bao
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, Guangdong, People's Republic of China
- Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen University, Shenzhen, 518060, Guangdong, People's Republic of China
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Shenzhen University, Shenzhen, 518060, Guangdong, People's Republic of China
- Chuting Hu
- Department of Breast and Thyroid Surgery, The Second People's Hospital of Shenzhen, Shenzhen, 518035, Guangdong, China
- Xianming Wang
- Department of Breast and Thyroid Surgery, The Second People's Hospital of Shenzhen, Shenzhen, 518035, Guangdong, China
- Wenjing Jian
- Department of Breast and Thyroid Surgery, The Second People's Hospital of Shenzhen, Shenzhen, 518035, Guangdong, China
- Weixiang Liu
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, Guangdong, People's Republic of China
- Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen University, Shenzhen, 518060, Guangdong, People's Republic of China
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Shenzhen University, Shenzhen, 518060, Guangdong, People's Republic of China