1. [Chinese expert consensus on the technical and clinical practice specifications of artificial intelligence-assisted morphology examination of blood cells (2024)]. Zhonghua Xue Ye Xue Za Zhi 2024; 45:330-338. Chinese. PMID: 38951059; PMCID: PMC11168004; DOI: 10.3760/cma.j.cn121090-20240217-00064.
Abstract
Blood cell morphological examination is a crucial method for the diagnosis of blood diseases, but traditional manual microscopy is characterized by low efficiency and susceptibility to subjective bias. The application of artificial intelligence (AI) has improved the efficiency and quality of blood cell examinations and facilitated the standardization of test results. A variety of AI devices are currently in clinical use or under research, with diverse technical requirements and configurations. The Experimental Diagnostic Study Group of the Hematology Branch of the Chinese Medical Association organized a panel of experts to formulate this consensus. The consensus covers term definitions, scope of application, technical requirements, clinical application, data management, and information security. It emphasizes the importance of specimen preparation, image acquisition, image segmentation algorithms, and cell feature extraction and classification, and sets forth basic requirements for the cell recognition spectrum. It also provides detailed guidance on the fine classification of pathological cells, requirements for model training and testing, quality control standards, and human review when issuing diagnostic reports. Finally, the consensus underscores the importance of data management and information security to protect patient information and ensure data accuracy.
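The processing chain the consensus describes (image acquisition, segmentation, feature extraction, classification) can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in, not a requirement from the consensus: the threshold segmenter, the small CNN, and the 20-class recognition spectrum are all illustrative.

```python
import torch
import torch.nn as nn

def segment_cells(image: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """Toy segmentation: binarize a normalized grayscale smear image.
    Real systems use learned segmentation; the threshold is illustrative."""
    return (image > threshold).float()

class CellClassifier(nn.Module):
    """Small CNN standing in for the feature-extraction and classification stages."""
    def __init__(self, num_classes: int = 20):  # hypothetical 20-class recognition spectrum
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        mask = segment_cells(x)           # isolate candidate cell regions
        feats = self.features(x * mask)   # extract features from the masked image
        return self.classifier(feats.flatten(1))

# One 64x64 grayscale cell crop, batch of 1
logits = CellClassifier()(torch.rand(1, 1, 64, 64))
print(logits.shape)  # torch.Size([1, 20])
```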
2. Classification of Breast Lesions on DCE-MRI Data Using a Fine-Tuned MobileNet. Diagnostics (Basel) 2023; 13:1067. PMID: 36980377; PMCID: PMC10047403; DOI: 10.3390/diagnostics13061067.
Abstract
It is crucial to diagnose breast cancer early and accurately to optimize treatment. At present, most deep learning models used for breast cancer detection cannot run on mobile phones or other low-power devices. This study evaluated the ability of MobileNetV1 and MobileNetV2, along with their fine-tuned variants, to differentiate malignant from benign lesions in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) of the breast.
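A common fine-tuning recipe for this setting is to load ImageNet weights, freeze the backbone, and retrain a small binary head. The sketch below shows that recipe for MobileNetV2; the hyperparameters and head are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained MobileNetV2 and freeze its feature extractor.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False

# Replace the classifier head: 2 classes (benign vs. malignant).
model.classifier[1] = nn.Linear(model.last_channel, 2)

# Only the new head is optimized; the learning rate is illustrative.
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One toy training step on random tensors shaped like RGB DCE-MRI slices.
x, y = torch.rand(4, 3, 224, 224), torch.tensor([0, 1, 0, 1])
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```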
3
|
Chen J, Jiang Y, Yang K, Ye X, Cui C, Shi S, Wu H, Tian H, Song D, Yao J, Wang L, Huang S, Xu J, Xu D, Dong F. Feasibility of using AI to auto-catch responsible frames in ultrasound screening for breast cancer diagnosis. iScience 2023; 26:105692. [PMID: 36570770 PMCID: PMC9771726 DOI: 10.1016/j.isci.2022.105692] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2022] [Revised: 10/31/2022] [Accepted: 11/26/2022] [Indexed: 12/12/2022] Open
Abstract
Research on AI-assisted breast diagnosis has primarily been based on static images, and it is unclear whether a given static image is the best image for diagnosis. This study explored a method of capturing complementary responsible frames from breast ultrasound screening using artificial intelligence. We used a feature entropy breast network (FEBrNet) to select responsible frames from breast ultrasound screenings and compared the diagnostic performance of AI models based on FEBrNet-recommended frames, physician-selected frames, frames sampled at 5-frame intervals, and all frames of the video, as well as that of ultrasound and mammography specialists. The AUROC of the AI model based on FEBrNet-recommended frames outperformed the other frame-set-based AI models, as well as ultrasound and mammography physicians, indicating that FEBrNet can reach the level of medical specialists in frame selection. The FEBrNet model can extract responsible frames from video for breast nodule diagnosis, with performance equivalent to that of physician-selected responsible frames.
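The abstract does not spell out FEBrNet's internals. The sketch below only illustrates the general idea of entropy-based frame scoring: an off-the-shelf backbone scores each frame by predictive entropy, and "keep the lowest-entropy (most confidently classified) frames" is an assumed selection rule, not the published model.

```python
import torch
import torch.nn.functional as F
from torchvision import models

backbone = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1).eval()

def frame_scores(frames: torch.Tensor) -> torch.Tensor:
    """Score each frame by predictive entropy: -sum(p * log p) over class probabilities."""
    with torch.no_grad():
        probs = F.softmax(backbone(frames), dim=1)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)

def select_responsible_frames(frames: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Return indices of the k lowest-entropy frames (assumed selection rule)."""
    return frame_scores(frames).argsort()[:k]

video = torch.rand(32, 3, 224, 224)  # 32 frames from an ultrasound sweep
print(select_responsible_frames(video, k=5))
```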
Affiliation(s)
- Jing Chen: Department of Ultrasound, Shenzhen People's Hospital (The Second Clinical School of Medicine, Jinan University; The First Affiliated Hospital of Southern University of Science and Technology), Shenzhen, Guangdong 518020, China
- Yitao Jiang: Research and Development Department, Microport Prophecy, Shanghai 201203, China
- Keen Yang: Department of Ultrasound, Shenzhen People's Hospital (The Second Clinical School of Medicine, Jinan University; The First Affiliated Hospital of Southern University of Science and Technology), Shenzhen, Guangdong 518020, China
- Xiuqin Ye: Department of Ultrasound, Shenzhen People's Hospital (The Second Clinical School of Medicine, Jinan University; The First Affiliated Hospital of Southern University of Science and Technology), Shenzhen, Guangdong 518020, China
- Chen Cui: Research and Development Department, Illuminate, LLC, Shenzhen, Guangdong 518000, China
- Siyuan Shi: Research and Development Department, Illuminate, LLC, Shenzhen, Guangdong 518000, China
- Huaiyu Wu: Department of Ultrasound, Shenzhen People's Hospital (The Second Clinical School of Medicine, Jinan University; The First Affiliated Hospital of Southern University of Science and Technology), Shenzhen, Guangdong 518020, China
- Hongtian Tian: Department of Ultrasound, Shenzhen People's Hospital (The Second Clinical School of Medicine, Jinan University; The First Affiliated Hospital of Southern University of Science and Technology), Shenzhen, Guangdong 518020, China
- Di Song: Department of Ultrasound, Shenzhen People's Hospital (The Second Clinical School of Medicine, Jinan University; The First Affiliated Hospital of Southern University of Science and Technology), Shenzhen, Guangdong 518020, China
- Jincao Yao: The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, Zhejiang 310022, China
- Liping Wang: The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, Zhejiang 310022, China
- Sijing Huang: Department of Ultrasound, Shenzhen People's Hospital (The Second Clinical School of Medicine, Jinan University; The First Affiliated Hospital of Southern University of Science and Technology), Shenzhen, Guangdong 518020, China
- Jinfeng Xu: Department of Ultrasound, Shenzhen People's Hospital (The Second Clinical School of Medicine, Jinan University; The First Affiliated Hospital of Southern University of Science and Technology), Shenzhen, Guangdong 518020, China
- Dong Xu: The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, Zhejiang 310022, China
- Fajin Dong: Department of Ultrasound, Shenzhen People's Hospital (The Second Clinical School of Medicine, Jinan University; The First Affiliated Hospital of Southern University of Science and Technology), Shenzhen, Guangdong 518020, China
4. Sutaji D, Yıldız O. LEMOXINET: Lite ensemble MobileNetV2 and Xception models to predict plant disease. Ecol Inform 2022. DOI: 10.1016/j.ecoinf.2022.101698.
5. Zhou H, Deng J, Cai D, Lv X, Wu BM. Effects of Image Dataset Configuration on the Accuracy of Rice Disease Recognition Based on Convolution Neural Network. Front Plant Sci 2022; 13:910878. PMID: 35865283; PMCID: PMC9295741; DOI: 10.3389/fpls.2022.910878.
Abstract
In recent years, the convolutional neural network has been the most widely used deep learning algorithm in the field of plant disease diagnosis and has performed well in classification. In practice, however, some specific issues have not received adequate attention. For instance, the same pathogen may cause similar or different symptoms when infecting plant leaves, and similar or disparate symptoms on different parts of the plant. Questions therefore arise naturally: should images showing different symptoms of the same disease be placed in one class or in two separate classes in the image database? And how do different classification schemes affect the results of image recognition? In this study, taking rice leaf blast and neck blast caused by Magnaporthe oryzae, and rice sheath blight caused by Rhizoctonia solani, as examples, three experiments were designed to explore how database configuration affects recognition accuracy when recognizing different symptoms of the same disease on the same plant part, similar symptoms of the same disease on different parts, and different symptoms on different parts. The results suggest that when the symptoms of the same disease were the same or similar, whether on the same plant part or not, training on a combined class of these images performed better than training on separate classes. When the difference between symptoms was obvious, the classification was relatively easy, and both separate and combined training achieved relatively high recognition accuracy. The results also indicated, to a certain extent, that the larger the number of images in the training dataset, the higher the average classification accuracy.
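The combined-class configuration the authors tested amounts to a label-remapping step at dataset-construction time. A sketch assuming a torchvision ImageFolder layout; the folder names and path below are hypothetical, not from the paper.

```python
from torchvision import datasets, transforms

# Hypothetical per-symptom folders; the mapping merges both leaf-blast symptom
# classes onto one "blast" label, i.e., the combined-class configuration.
MERGE = {"leaf_blast_type1": 0, "leaf_blast_type2": 0,
         "neck_blast": 1, "sheath_blight": 2}

dataset = datasets.ImageFolder("rice_disease_images",  # hypothetical path
                               transform=transforms.ToTensor())
# Remap ImageFolder's per-folder indices onto the merged disease labels.
dataset.target_transform = lambda idx: MERGE[dataset.classes[idx]]
```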
6
|
Heo J, Lim JH, Lee HR, Jang JY, Shin YS, Kim D, Lim JY, Park YM, Koh YW, Ahn SH, Chung EJ, Lee DY, Seok J, Kim CH. Deep learning model for tongue cancer diagnosis using endoscopic images. Sci Rep 2022; 12:6281. [PMID: 35428854 PMCID: PMC9012779 DOI: 10.1038/s41598-022-10287-9] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2021] [Accepted: 03/29/2022] [Indexed: 12/29/2022] Open
Abstract
In this study, we developed a deep learning model to identify patients with tongue cancer based on a validated dataset of oral endoscopic images. We retrospectively constructed a dataset of 12,400 verified endoscopic images from five university hospitals in South Korea, collected between 2010 and 2020 with the participation of otolaryngologists. Several deep learning models based on various convolutional neural network (CNN) architectures were developed to calculate the probability of malignancy. Of the 12,400 images, 5576 related to the tongue were extracted. The CNN models showed a mean area under the receiver operating characteristic curve (AUROC) of 0.845 and a mean area under the precision-recall curve (AUPRC) of 0.892; the best model was DenseNet169 (AUROC 0.895, AUPRC 0.918). The deep learning model, general physicians, and oncology specialists had sensitivities of 81.1%, 77.3%, and 91.7%; specificities of 86.8%, 75.0%, and 90.9%; and accuracies of 84.7%, 75.9%, and 91.2%, respectively. Agreement between the oncologist and the developed model on cancer diagnosis was substantial (kappa = 0.685). The deep learning model developed from this verified endoscopic image dataset showed acceptable performance in tongue cancer diagnosis.
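The three reported evaluation quantities (AUROC, AUPRC, Cohen's kappa) are standard and can be computed with scikit-learn; a sketch on toy predictions, not the study's data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score, cohen_kappa_score

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                   # 1 = tongue cancer
y_prob = np.array([0.2, 0.4, 0.9, 0.7, 0.6, 0.1, 0.8, 0.3])   # model probabilities
y_pred = (y_prob >= 0.5).astype(int)                          # thresholded decisions

print("AUROC:", roc_auc_score(y_true, y_prob))            # area under the ROC curve
print("AUPRC:", average_precision_score(y_true, y_prob))  # area under the PR curve
print("kappa:", cohen_kappa_score(y_true, y_pred))        # model-vs-reader agreement
```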
Affiliation(s)
- Jaesung Heo: Department of Radiation Oncology, Ajou University School of Medicine, Suwon, Republic of Korea
- June Hyuck Lim: Department of Radiation Oncology, Ajou University School of Medicine, Suwon, Republic of Korea
- Hye Ran Lee: Department of Otolaryngology, Ajou University School of Medicine, 164 Worldcup-ro, Yeongtong-gu, Suwon, 16499, Republic of Korea
- Jeon Yeob Jang: Department of Otolaryngology, Ajou University School of Medicine, 164 Worldcup-ro, Yeongtong-gu, Suwon, 16499, Republic of Korea
- Yoo Seob Shin: Department of Otolaryngology, Ajou University School of Medicine, 164 Worldcup-ro, Yeongtong-gu, Suwon, 16499, Republic of Korea
- Dahee Kim: Department of Otorhinolaryngology, Yonsei University, Seoul, Republic of Korea
- Jae Yol Lim: Department of Otorhinolaryngology, Yonsei University, Seoul, Republic of Korea
- Young Min Park: Department of Otorhinolaryngology, Yonsei University, Seoul, Republic of Korea
- Yoon Woo Koh: Department of Otorhinolaryngology, Yonsei University, Seoul, Republic of Korea
- Soon-Hyun Ahn: Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Hospital, Seoul, Republic of Korea
- Eun-Jae Chung: Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Hospital, Seoul, Republic of Korea
- Doh Young Lee: Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Hospital, Seoul, Republic of Korea
- Jungirl Seok: Department of Otorhinolaryngology-Head & Neck Surgery, National Cancer Center, Goyang, Republic of Korea
- Chul-Ho Kim: Department of Otolaryngology, Ajou University School of Medicine, 164 Worldcup-ro, Yeongtong-gu, Suwon, 16499, Republic of Korea
7. A review and comparison of convolution neural network models under a unified framework. Commun Stat Appl Methods 2022. DOI: 10.29220/csam.2022.29.2.161.
8. Fat-based studies for computer-assisted screening of child obesity using thermal imaging based on deep learning techniques: a comparison with quantum machine learning approach. Soft Comput 2022. DOI: 10.1007/s00500-021-06668-3.
9. Optimized convolutional neural network architectures for efficient on-device vision-based object detection. Neural Comput Appl 2021. DOI: 10.1007/s00521-021-06830-w.
Abstract
Convolutional neural networks have pushed forward image analysis research and computer vision over the last decade, constituting a state-of-the-art approach in object detection today. The design of increasingly deeper and wider architectures has made it possible to achieve unprecedented levels of detection accuracy, albeit at the cost of both a dramatic computational burden and a large memory footprint. In such a context, cloud systems have become a mainstream technological solution due to their tremendous scalability, providing researchers and practitioners with virtually unlimited resources. However, these resources are typically made available as remote services, requiring communication over the network to be accessed, thus compromising the speed of response, availability, and security of the implemented solution. In view of these limitations, the on-device paradigm has emerged as a recent yet widely explored alternative, pursuing more compact and efficient networks to ultimately enable the execution of the derived models directly on resource-constrained client devices. This study provides an up-to-date review of the more relevant scientific research carried out in this vein, circumscribed to the object detection problem. In particular, the paper contributes to the field with a comprehensive architectural overview of both the existing lightweight object detection frameworks targeted to mobile and embedded devices, and the underlying convolutional neural networks that make up their internal structure. More specifically, it addresses the main structural-level strategies used for conceiving the various components of a detection pipeline (i.e., backbone, neck, and head), as well as the most salient techniques proposed for adapting such structures and the resulting architectures to more austere deployment environments. Finally, the study concludes with a discussion of the specific challenges and next steps to be taken to move toward a more convenient accuracy-speed trade-off.
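The lightweight backbones this review covers are built largely from depthwise separable convolutions, which factor a standard convolution into a per-channel spatial filter plus a 1x1 pointwise mix, cutting the weight count from roughly 3*3*C_in*C_out to 3*3*C_in + C_in*C_out. A minimal sketch of the block, assuming the MobileNet-style layout:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Factorized convolution used by MobileNet-style on-device backbones:
    a per-channel (depthwise) 3x3 filter followed by a 1x1 pointwise mix."""
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn1, self.bn2 = nn.BatchNorm2d(in_ch), nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU6(inplace=True)

    def forward(self, x):
        x = self.act(self.bn1(self.depthwise(x)))
        return self.act(self.bn2(self.pointwise(x)))

x = torch.rand(1, 32, 56, 56)
print(DepthwiseSeparableConv(32, 64)(x).shape)  # torch.Size([1, 64, 56, 56])
```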
10. Gang L, Haixuan Z, Linning E, Ling Z, Yu L, Juming Z. Recognition of honeycomb lung in CT images based on improved MobileNet model. Med Phys 2021; 48:4304-4315. PMID: 33826769; DOI: 10.1002/mp.14873.
Abstract
PURPOSE: To improve the efficiency and accuracy of recognizing honeycomb lung in CT images. METHODS: Deep learning methods can automatically recognize honeycomb lung in CT images but are time consuming and less accurate due to their large number of structural parameters. In this paper, a novel recognition method based on the MobileNetV1 network, a multiscale feature fusion (MSFF) method, and dilated convolution is explored for honeycomb lung CT image classification. First, dilated convolutions with different dilation rates are used to extract features over receptive fields of different sizes, and a multiscale feature fusion block fuses the features across scales to address feature loss and incomplete feature extraction. Then, Sigmoid activation functions replace ReLU in the improved depthwise separable convolution blocks to retain the feature information of each channel. Finally, the number of improved depthwise separable blocks is reduced to cut the model's computation and resource consumption. RESULTS: The experimental results show that the improved MobileNet model has the best performance on the honeycomb lung image dataset, which includes 6318 images. Compared with 4 traditional models (SVM, RF, decision tree, and KNN) and 11 deep learning models (LeNet-5, AlexNet, VGG-16, GoogleNet, ResNet18, DenseNet121, SENet18, InceptionV3, InceptionV4, Xception, and MobileNetV1), our model achieved an accuracy of 99.52%, a sensitivity of 99.35%, and a specificity of 99.89%. CONCLUSION: The improved MobileNet model is designed for automatic recognition and classification of honeycomb lung in CT images. Comparative experiments with other machine learning and deep learning models show that the proposed method achieves the best recognition accuracy with fewer model parameters and less computation time.
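A sketch of the multiscale idea described in METHODS: parallel 3x3 convolutions with different dilation rates produce different receptive fields, their outputs are fused, and Sigmoid stands in for ReLU as the abstract specifies. The dilation rates and fusion by concatenation plus a 1x1 mix are assumptions; the paper's exact block is not reproduced here.

```python
import torch
import torch.nn as nn

class MultiScaleFusionBlock(nn.Module):
    """Parallel dilated convolutions give different receptive fields;
    their outputs are concatenated and mixed by a 1x1 convolution."""
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 2, 4)):
        super().__init__()
        # padding == dilation keeps spatial size constant for a 3x3 kernel
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        )
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)
        self.act = nn.Sigmoid()  # the abstract swaps ReLU for Sigmoid in its blocks

    def forward(self, x):
        multi = torch.cat([b(x) for b in self.branches], dim=1)
        return self.act(self.fuse(multi))

x = torch.rand(1, 32, 64, 64)
print(MultiScaleFusionBlock(32, 64)(x).shape)  # torch.Size([1, 64, 64, 64])
```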
Affiliation(s)
- Li Gang: College of Software, Taiyuan University of Technology, Taiyuan, China
- Zhang Haixuan: College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- E Linning: Shanxi Bethune Hospital, Taiyuan, China
- Zhang Ling: College of Software, Taiyuan University of Technology, Taiyuan, China
- Li Yu: College of Data Science, Taiyuan University of Technology, Taiyuan, China
- Zhao Juming: College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
11. Srinivasu PN, SivaSai JG, Ijaz MF, Bhoi AK, Kim W, Kang JJ. Classification of Skin Disease Using Deep Learning Neural Networks with MobileNet V2 and LSTM. Sensors (Basel) 2021; 21:2852. PMID: 33919583; PMCID: PMC8074091; DOI: 10.3390/s21082852.
Abstract
Deep learning models are efficient at learning the features needed to understand complex patterns precisely. This study proposes a computerized process for classifying skin disease using deep learning based on MobileNet V2 and Long Short-Term Memory (LSTM). The MobileNet V2 model proved efficient, achieving good accuracy while remaining able to run on lightweight computational devices, and the proposed model maintains stateful information for precise predictions. A grey-level co-occurrence matrix is used to assess the progress of diseased growth. Performance was compared against other state-of-the-art models, including fine-tuned neural networks (FTNN), a convolutional neural network (CNN), the Visual Geometry Group's very deep convolutional networks for large-scale image recognition (VGG), and a CNN architecture expanded with minor changes. On the HAM10000 dataset, the proposed method outperformed the other methods with more than 85% accuracy. It recognizes the affected region faster, with almost half the computation of the conventional MobileNet model, resulting in minimal computational effort. Furthermore, a mobile application was designed for instant and proper action, helping patients and dermatologists identify the type of disease from an image of the affected region at the initial stage of skin disease. These findings suggest that the proposed system can help general practitioners diagnose skin conditions efficiently and effectively, thereby reducing further complications and morbidity.
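One plausible way to wire together the two components the authors name is to use MobileNetV2 as a per-image feature extractor feeding an LSTM. The topology below is an assumption for illustration, not the paper's exact architecture; only the 7-class HAM10000 label set is taken from the dataset itself.

```python
import torch
import torch.nn as nn
from torchvision import models

class MobileNetLSTM(nn.Module):
    """MobileNetV2 features for each image in a sequence, pooled and fed to an LSTM."""
    def __init__(self, num_classes: int = 7, hidden: int = 128):  # HAM10000 has 7 classes
        super().__init__()
        base = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
        self.cnn = base.features                       # 1280-channel feature maps
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.lstm = nn.LSTM(1280, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, seq):                            # seq: (batch, time, 3, H, W)
        b, t = seq.shape[:2]
        feats = self.pool(self.cnn(seq.flatten(0, 1))).flatten(1)  # (b*t, 1280)
        out, _ = self.lstm(feats.view(b, t, -1))
        return self.head(out[:, -1])                   # classify from the last step

logits = MobileNetLSTM()(torch.rand(2, 4, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 7])
```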
Affiliation(s)
- Parvathaneni Naga Srinivasu: Department of Computer Science and Engineering, Gitam Institute of Technology, GITAM Deemed to be University, Rushikonda, Visakhapatnam 530045, India
- Muhammad Fazal Ijaz: Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Korea
- Akash Kumar Bhoi: Department of Electrical and Electronics Engineering, Sikkim Manipal Institute of Technology, Sikkim Manipal University, Majitar 737136, India
- Wonjoon Kim: Division of Future Convergence (HCI Science Major), Dongduk Women’s University, Seoul 02748, Korea
- James Jin Kang: School of Science, Edith Cowan University, Joondalup 6027, Australia