1
Yang Y, Cheng J, Chen L, Cui C, Liu S, Zuo M. Application of machine learning for the differentiation of thymomas and thymic cysts using deep transfer learning: A multi-center comparison of diagnostic performance based on different dimensional models. Thorac Cancer 2024; 15:2235-2247. [PMID: 39305057] [PMCID: PMC11543273] [DOI: 10.1111/1759-7714.15454]
Abstract
OBJECTIVE This study aimed to evaluate the feasibility and performance of deep transfer learning (DTL) networks of different types and dimensions in differentiating thymomas from thymic cysts in a retrospective cohort. MATERIALS AND METHODS Based on contrast-enhanced chest computed tomography (CT), the region of interest was delineated and the maximum cross-section of the lesion was selected as the input image. Five convolutional neural networks (CNNs) and the Vision Transformer (ViT) were used to construct 2D DTL models. For the 2.5D models, the maximum cross-section (n) and the layers immediately above and below it (n − 1, n + 1) were used for feature extraction; after feature selection, the remaining features were fused. The whole-lesion volume was used as input to construct the 3D models. RESULTS Among the 2D models, the area under the curve (AUC) of Resnet50 was 0.950 in the training cohort and 0.907 in the internal validation cohort. Among the 2.5D models, the AUCs of Vgg11 in the internal validation cohort and external validation cohort 1 were 0.937 and 0.965, respectively, and the AUCs of Inception_v3 in the training cohort and external validation cohort 2 were 0.981 and 0.950, respectively. The AUCs of 3D_Resnet50 in the four cohorts were 0.987, 0.937, 0.938, and 0.905. CONCLUSIONS DTL models built across multiple dimensions can serve as highly sensitive and specific tools for the non-invasive differential diagnosis of thymomas and thymic cysts, assisting clinicians in decision-making.
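The 2.5D construction described above (the maximum cross-section n stacked with its neighbouring layers n − 1 and n + 1) amounts to building a three-channel image that a pretrained 2D backbone can consume. A minimal sketch, assuming a NumPy CT volume indexed slice-first; the `make_25d_input` helper and edge-clamping behaviour are our illustrative assumptions, not the authors' code:

```python
import numpy as np

def make_25d_input(volume: np.ndarray, n: int) -> np.ndarray:
    """Stack slice n with its neighbours (n - 1, n + 1) as a 3-channel
    image, the 2.5D input described in the abstract. Edge slices are
    clamped so the output always has 3 channels."""
    lo = max(n - 1, 0)
    hi = min(n + 1, volume.shape[0] - 1)
    return np.stack([volume[lo], volume[n], volume[hi]], axis=0)

# Toy CT volume: 5 slices of 4x4 pixels.
vol = np.arange(5 * 4 * 4, dtype=np.float32).reshape(5, 4, 4)
x = make_25d_input(vol, n=2)
print(x.shape)  # (3, 4, 4)
```

The resulting (3, H, W) array has the same shape as an RGB image, which is why ImageNet-pretrained 2D networks can be reused for transfer learning without architectural changes.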
Affiliation(s)
- Yuhua Yang
- Department of Radiology, The Second Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, China
- Intelligent Medical Imaging of Jiangxi Key Laboratory, Nanchang, China
- Jia Cheng
- Department of Radiology, The First Affiliated Hospital of Gannan Medical University, Ganzhou, China
- Liang Chen
- Department of Radiology, Affiliated Hospital of Jiujiang University, Jiujiang, China
- Can Cui
- Department of Radiology, The Second Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, China
- Intelligent Medical Imaging of Jiangxi Key Laboratory, Nanchang, China
- Shaoqiang Liu
- Department of Radiology, The First Affiliated Hospital of Gannan Medical University, Ganzhou, China
- Minjing Zuo
- Department of Radiology, The Second Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, China
- Intelligent Medical Imaging of Jiangxi Key Laboratory, Nanchang, China
2
Restrepo D, Wu C, Vásquez-Venegas C, Nakayama LF, Celi LA, López DM. DF-DM: A foundational process model for multimodal data fusion in the artificial intelligence era. Research Square [preprint] 2024:rs.3.rs-4277992. [PMID: 38746100] [PMCID: PMC11092829] [DOI: 10.21203/rs.3.rs-4277992/v1]
Abstract
In the big data era, integrating diverse data modalities poses significant challenges, particularly in complex fields like healthcare. This paper introduces a new process model for multimodal Data Fusion for Data Mining, integrating embeddings and the Cross-Industry Standard Process for Data Mining with the existing Data Fusion Information Group model. Our model aims to decrease computational costs, complexity, and bias while improving efficiency and reliability. We also propose "disentangled dense fusion," a novel embedding fusion method designed to optimize mutual information and facilitate dense inter-modality feature interaction, thereby minimizing redundant information. We demonstrate the model's efficacy through three use cases: predicting diabetic retinopathy using retinal images and patient metadata, domestic violence prediction employing satellite imagery, internet, and census data, and identifying clinical and demographic features from radiography images and clinical notes. The model achieved a Macro F1 score of 0.92 in diabetic retinopathy prediction, an R-squared of 0.854 and sMAPE of 24.868 in domestic violence prediction, and a macro AUC of 0.92 and 0.99 for disease prediction and sex classification, respectively, in radiological analysis. These results underscore the Data Fusion for Data Mining model's potential to significantly impact multimodal data processing, promoting its adoption in diverse, resource-constrained settings.
Affiliation(s)
- David Restrepo
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Departamento de Telemática, Universidad del Cauca, Popayán, Cauca, Colombia
- Chenwei Wu
- Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, Michigan, United States of America
- Luis Filipe Nakayama
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Department of Ophthalmology, São Paulo Federal University, São Paulo, São Paulo, Brazil
- Leo Anthony Celi
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Department of Biostatistics, Harvard TH Chan School of Public Health, Boston, Massachusetts, United States of America
- Department of Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts, United States of America
- Diego M López
- Departamento de Telemática, Universidad del Cauca, Popayán, Cauca, Colombia
3
Carriero A, Groenhoff L, Vologina E, Basile P, Albera M. Deep Learning in Breast Cancer Imaging: State of the Art and Recent Advancements in Early 2024. Diagnostics (Basel) 2024; 14:848. [PMID: 38667493] [PMCID: PMC11048882] [DOI: 10.3390/diagnostics14080848]
Abstract
The rapid advancement of artificial intelligence (AI) has significantly impacted various aspects of healthcare, particularly in the medical imaging field. This review focuses on recent developments in the application of deep learning (DL) techniques to breast cancer imaging. DL models, a subset of AI algorithms inspired by human brain architecture, have demonstrated remarkable success in analyzing complex medical images, enhancing diagnostic precision, and streamlining workflows. DL models have been applied to breast cancer diagnosis via mammography, ultrasonography, and magnetic resonance imaging. Furthermore, DL-based radiomic approaches may play a role in breast cancer risk assessment, prognosis prediction, and therapeutic response monitoring. Nevertheless, several challenges have limited the widespread adoption of AI techniques in clinical practice, emphasizing the importance of rigorous validation, interpretability, and technical considerations when implementing DL solutions. By examining fundamental concepts in DL techniques applied to medical imaging and synthesizing the latest advancements and trends, this narrative review aims to provide valuable and up-to-date insights for radiologists seeking to harness the power of AI in breast cancer care.
Affiliation(s)
- Léon Groenhoff
- Radiology Department, Maggiore della Carità Hospital, 28100 Novara, Italy; (A.C.); (E.V.); (P.B.); (M.A.)
4
Huang L, Lin Y, Cao P, Zou X, Qin Q, Lin Z, Liang F, Li Z. Automated detection and segmentation of pleural effusion on ultrasound images using an Attention U-net. J Appl Clin Med Phys 2024; 25:e14231. [PMID: 38088928] [PMCID: PMC10795456] [DOI: 10.1002/acm2.14231]
Abstract
BACKGROUND Ultrasound detection and evaluation of pleural effusion is an essential part of the Extended Focused Assessment with Sonography in Trauma (E-FAST) in emergencies. Our study aimed to develop an artificial intelligence (AI) diagnostic model that automatically identifies and segments pleural effusion areas on ultrasonography. METHODS An Attention U-net and a U-net model were used to detect and segment pleural effusion on ultrasound images of 848 subjects through fully supervised learning. Sensitivity, specificity, precision, accuracy, F1 score, the receiver operating characteristic (ROC) curve, and the area under the curve (AUC) were used to assess the models' classification performance. The Dice coefficient was used to evaluate segmentation performance. RESULTS In 10 random tests, both the Attention U-net and the U-net achieved an average sensitivity of 97%, demonstrating that pleural effusion was well detectable. The Attention U-net performed better at identifying negative images, with an average specificity of 91% compared with 86% for the U-net. The Attention U-net was also more accurate in predicting the pleural effusion region, with an average Dice coefficient of 0.86 versus 0.82 for the U-net. CONCLUSIONS The Attention U-net showed excellent performance in detecting and segmenting pleural effusion on ultrasound images, which is expected to enhance the operation and application of E-FAST in clinical work.
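The Dice coefficient used to score segmentation above is the standard overlap measure 2|A ∩ B| / (|A| + |B|) between predicted and ground-truth masks. A minimal sketch on binary NumPy masks; the `dice_coefficient` helper and its epsilon smoothing are our assumptions, not the paper's implementation:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) on binary masks; eps avoids
    division by zero when both masks are empty."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

pred = np.array([[1, 1, 0, 0]])
truth = np.array([[0, 1, 1, 0]])
print(round(dice_coefficient(pred, truth), 2))  # 0.5
```

An average Dice of 0.86 therefore means the predicted effusion regions overlapped the radiologist-drawn masks substantially more than the U-net's 0.82, since Dice ranges from 0 (no overlap) to 1 (identical masks).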
Affiliation(s)
- Libing Huang
- Department of Ultrasound, Shenzhen Second People's Hospital, The First Affiliated Hospital of Shenzhen University, Shenzhen, China
- Shenzhen University School of Medicine, Shenzhen, China
- Yingying Lin
- Department of Diagnostic Radiology, The University of Hong Kong, Hong Kong, China
- Peng Cao
- Department of Diagnostic Radiology, The University of Hong Kong, Hong Kong, China
- Xia Zou
- Department of Ultrasound, Shenzhen Second People's Hospital, The First Affiliated Hospital of Shenzhen University, Shenzhen, China
- Qian Qin
- Shenzhen University School of Medicine, Shenzhen, China
- Zhanye Lin
- Department of Ultrasound, Longgang District People's Hospital of Shenzhen, Shenzhen, China
- Fengting Liang
- Department of Ultrasound, Shenzhen Second People's Hospital, The First Affiliated Hospital of Shenzhen University, Shenzhen, China
- Zhengyi Li
- Department of Ultrasound, Shenzhen Second People's Hospital, The First Affiliated Hospital of Shenzhen University, Shenzhen, China
- Shenzhen University School of Medicine, Shenzhen, China
5
Krishnan G, Singh S, Pathania M, Gosavi S, Abhishek S, Parchani A, Dhar M. Artificial intelligence in clinical medicine: catalyzing a sustainable global healthcare paradigm. Front Artif Intell 2023; 6:1227091. [PMID: 37705603] [PMCID: PMC10497111] [DOI: 10.3389/frai.2023.1227091]
Abstract
As the demand for quality healthcare increases, healthcare systems worldwide are grappling with time constraints and excessive workloads, which can compromise the quality of patient care. Artificial intelligence (AI) has emerged as a powerful tool in clinical medicine, revolutionizing various aspects of patient care and medical research. The integration of AI in clinical medicine has not only improved diagnostic accuracy and treatment outcomes, but also contributed to more efficient healthcare delivery, reduced costs, and facilitated better patient experiences. This review article provides an extensive overview of AI applications in history taking, clinical examination, imaging, therapeutics, prognosis and research. Furthermore, it highlights the critical role AI has played in transforming healthcare in developing nations.
Affiliation(s)
- Gokul Krishnan
- Department of Internal Medicine, Kasturba Medical College, Manipal, India
- Shiana Singh
- Department of Emergency Medicine, All India Institute of Medical Sciences, Rishikesh, India
- Monika Pathania
- Department of Geriatric Medicine, All India Institute of Medical Sciences, Rishikesh, India
- Siddharth Gosavi
- Department of Internal Medicine, Kasturba Medical College, Manipal, India
- Shuchi Abhishek
- Department of Internal Medicine, Kasturba Medical College, Manipal, India
- Ashwin Parchani
- Department of Geriatric Medicine, All India Institute of Medical Sciences, Rishikesh, India
- Minakshi Dhar
- Department of Geriatric Medicine, All India Institute of Medical Sciences, Rishikesh, India
6
Song C, Zhu S, Liu Y, Zhang W, Wang Z, Li W, Sun Z, Zhao P, Tian S. DCNAS-Net: deformation convolution and neural architecture search detection network for bone marrow oedema. BMC Med Imaging 2023; 23:45. [PMID: 36978011] [PMCID: PMC10045610] [DOI: 10.1186/s12880-023-01003-8]
Abstract
Background Lumbago is a global disease that affects more than 500 million people worldwide. Bone marrow oedema is one of its main causes, and clinical diagnosis is mainly made by radiologists manually reviewing MRI images to determine whether oedema is present. However, the number of patients with lumbago has risen dramatically in recent years, bringing a huge workload to radiologists. To improve diagnostic efficiency, this paper develops and evaluates a neural network for detecting bone marrow oedema in MRI images. Related work Inspired by the development of deep learning and image processing techniques, we design a deep learning detection algorithm specifically for the detection of bone marrow oedema from lumbar MRI images. We introduce deformable convolution, feature pyramid networks and neural architecture search modules, and redesign the existing neural networks. We explain in detail the construction of the network and illustrate the setting of the network hyperparameters. Results and discussion The detection accuracy of our algorithm is excellent: its accuracy in detecting bone marrow oedema reached 90.6%, an improvement of 5.7% over the original network. The recall of our neural network is 95.1%, and the F1-measure reaches 92.8%. The algorithm is also fast, taking only 0.144 s per image. Conclusion Extensive experiments have demonstrated that deformable convolution and aggregated feature pyramid structures are conducive to the detection of bone marrow oedema. Our algorithm offers better detection accuracy and good detection speed compared to other algorithms.
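If the 90.6% detection figure above is read as precision, the reported F1-measure follows directly as the harmonic mean of precision and recall; this small check (the `f1_measure` helper name is ours, not from the paper) confirms the three numbers are mutually consistent:

```python
def f1_measure(precision: float, recall: float) -> float:
    """F1 = harmonic mean of precision and recall:
    2 * P * R / (P + R)."""
    return 2 * precision * recall / (precision + recall)

# Plugging in the abstract's precision (90.6%) and recall (95.1%):
print(round(f1_measure(0.906, 0.951), 3))  # 0.928
```

The result matches the abstract's F1 of 92.8%, which is what one would expect if all three metrics were computed from the same confusion counts.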
Affiliation(s)
- Chengyu Song
- Tianjin University, Tianjin, China
- Shan Zhu
- Tianjin Hospital, Tianjin University, Tianjin, China
- Yanyan Liu
- Nankai University, Tianjin, China
- Wei Zhang
- Tianjin University, Tianjin, China
- Zhi Wang
- Tianjin Hospital, Tianjin University, Tianjin, China
- Wangxiao Li
- Tianjin University, Tianjin, China
- Zhenye Sun
- Tianjin Hospital, Tianjin University, Tianjin, China
- Peng Zhao
- Tianjin Hospital, Tianjin University, Tianjin, China
- Shengzhang Tian
- Tianjin Hospital, Tianjin University, Tianjin, China
7
Gan F, Wu FP, Zhong YL. Artificial intelligence method based on multi-feature fusion for automatic macular edema (ME) classification on spectral-domain optical coherence tomography (SD-OCT) images. Front Neurosci 2023; 17:1097291. [PMID: 36793539] [PMCID: PMC9922866] [DOI: 10.3389/fnins.2023.1097291]
Abstract
Purpose A common ocular manifestation, macular edema (ME) is a primary cause of visual deterioration. In this study, an artificial intelligence method based on multi-feature fusion was introduced to enable automatic ME classification on spectral-domain optical coherence tomography (SD-OCT) images and provide a convenient method of clinical diagnosis. Methods First, 1,213 two-dimensional (2D) cross-sectional OCT images of ME were collected from the Jiangxi Provincial People's Hospital between 2016 and 2021. According to the OCT reports of senior ophthalmologists, there were 300 images with diabetic macular edema (DME), 303 with age-related macular degeneration (AMD), 304 with retinal-vein occlusion (RVO), and 306 with central serous chorioretinopathy (CSC). Traditional omics features of the images were then extracted based on first-order statistics, shape, size, and texture. Deep-learning features were extracted by the alexnet, inception_v3, resnet34, and vgg13 models, selected by dimensionality reduction using principal component analysis (PCA), and fused. The gradient-weighted class-activation map (Grad-CAM) was used to visualize the deep-learning process. Finally, the fusion feature set, combining the traditional omics features and the deep-fusion features, was used to establish the final classification models. The performance of the final models was evaluated by accuracy, confusion matrix, and the receiver operating characteristic (ROC) curve. Results Compared with other classification models, the support vector machine (SVM) model performed best, with an accuracy of 93.8%. The micro- and macro-average areas under the curve (AUC) were both 99%, and the AUCs of the AMD, DME, RVO, and CSC groups were 100, 99, 98, and 100%, respectively. Conclusion The artificial intelligence model in this study can accurately classify DME, AMD, RVO, and CSC from SD-OCT images.
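The fusion step described above, reducing each network's deep features with PCA and then concatenating them, can be sketched compactly with an SVD-based PCA. The `pca_reduce` helper, the feature dimensions, and the random data are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

def pca_reduce(feats: np.ndarray, k: int) -> np.ndarray:
    """Project per-image feature vectors onto the top-k principal
    components, computed via SVD of the centered feature matrix."""
    centered = feats - feats.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

rng = np.random.default_rng(0)
f_alexnet = rng.normal(size=(8, 256))  # hypothetical alexnet features, 8 images
f_resnet = rng.normal(size=(8, 512))   # hypothetical resnet34 features

# Reduce each network's features separately, then concatenate (fuse).
fused = np.concatenate([pca_reduce(f_alexnet, 4), pca_reduce(f_resnet, 4)], axis=1)
print(fused.shape)  # (8, 8)
```

Reducing each feature set before concatenation keeps the fused vector short and balanced, so no single backbone's high-dimensional output dominates the downstream classifier (the SVM, in this paper's case).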
Affiliation(s)
- Fan Gan
- Medical College of Nanchang University, Nanchang, China
- Department of Ophthalmology, Jiangxi Provincial People's Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, China
- Fei-Peng Wu
- Department of Ophthalmology, Jiangxi Provincial People's Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, China
- Yu-Lin Zhong
- Department of Ophthalmology, Jiangxi Provincial People's Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, China
- *Correspondence: Yu-Lin Zhong
8
Irmici G, Cè M, Caloro E, Khenkina N, Della Pepa G, Ascenti V, Martinenghi C, Papa S, Oliva G, Cellina M. Chest X-ray in Emergency Radiology: What Artificial Intelligence Applications Are Available? Diagnostics (Basel) 2023; 13:216. [PMID: 36673027] [PMCID: PMC9858224] [DOI: 10.3390/diagnostics13020216]
Abstract
Due to its widespread availability, low cost, feasibility at the patient's bedside and accessibility even in low-resource settings, chest X-ray is one of the most requested examinations in radiology departments. Whilst it provides essential information on thoracic pathology, it can be difficult to interpret and is prone to diagnostic errors, particularly in the emergency setting. The increasing availability of large chest X-ray datasets has allowed the development of reliable Artificial Intelligence (AI) tools to help radiologists in everyday clinical practice. AI integration into the diagnostic workflow would benefit patients, radiologists, and healthcare systems in terms of improved and standardized reporting accuracy, quicker diagnosis, more efficient management, and appropriateness of the therapy. This review article aims to provide an overview of the applications of AI for chest X-rays in the emergency setting, emphasizing the detection and evaluation of pneumothorax, pneumonia, heart failure, and pleural effusion.
Affiliation(s)
- Giovanni Irmici
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Maurizio Cè
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Elena Caloro
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Natallia Khenkina
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Gianmarco Della Pepa
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Velio Ascenti
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Carlo Martinenghi
- Radiology Department, San Raffaele Hospital, Via Olgettina 60, 20132 Milan, Italy
- Sergio Papa
- Unit of Diagnostic Imaging and Stereotactic Radiosurgery, Centro Diagnostico Italiano, Via Saint Bon 20, 20147 Milan, Italy
- Giancarlo Oliva
- Radiology Department, Fatebenefratelli Hospital, ASST Fatebenefratelli Sacco, Milano, Piazza Principessa Clotilde 3, 20121 Milan, Italy
- Michaela Cellina
- Radiology Department, Fatebenefratelli Hospital, ASST Fatebenefratelli Sacco, Milano, Piazza Principessa Clotilde 3, 20121 Milan, Italy
9
Gan F, Liu H, Qin WG, Zhou SL. Application of artificial intelligence for automatic cataract staging based on anterior segment images: comparing automatic segmentation approaches to manual segmentation. Front Neurosci 2023; 17:1182388. [PMID: 37152605] [PMCID: PMC10159175] [DOI: 10.3389/fnins.2023.1182388]
Abstract
Purpose Cataract is one of the leading causes of blindness worldwide, accounting for >50% of cases of blindness in low- and middle-income countries. In this study, two artificial intelligence (AI) diagnosis platforms are proposed for cortical cataract staging to achieve a precise diagnosis. Methods A total of 647 high-quality anterior segment images covering the four stages of cataract were collected into the dataset. They were divided randomly into a training set and a test set using a stratified random-allocation technique at a ratio of 8:2. Then, after automatic or manual segmentation of the lens area, deep transfer-learning (DTL) feature extraction, PCA dimensionality reduction, multi-feature fusion, fusion-feature selection, and classification-model establishment, the automatic- and manual-segmentation DTL platforms were developed. Finally, accuracy, the confusion matrix, and the area under the receiver operating characteristic (ROC) curve (AUC) were used to evaluate the performance of the two platforms. Results On the automatic-segmentation DTL platform, the accuracy of the model in the training and test sets was 94.59 and 84.50%, respectively. On the manual-segmentation DTL platform, the accuracy in the training and test sets was 97.48 and 90.00%, respectively. In the test set, the micro- and macro-average AUCs of the two platforms both exceeded 95%, and the AUC for each class exceeded 90%. The confusion matrix showed that all stages except the mature stage had a high recognition rate. Conclusion Two AI diagnosis platforms were proposed for cortical cataract staging. The automatic-segmentation platform can stage cataracts more quickly, whereas the manual-segmentation platform can stage cataracts more accurately.
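The stratified random allocation mentioned above splits the data per class so that each cataract stage keeps the 8:2 train/test ratio. A minimal sketch; the `stratified_split` helper and the seed handling are our assumptions, not the authors' code:

```python
import numpy as np

def stratified_split(labels, test_frac: float = 0.2, seed: int = 0):
    """Randomly allocate indices to train/test per class, so every
    class is split at the same ratio (here 8:2)."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    train, test = [], []
    for c in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == c))
        n_test = int(round(test_frac * len(idx)))
        test.extend(idx[:n_test].tolist())
        train.extend(idx[n_test:].tolist())
    return sorted(train), sorted(test)

# Toy example: two stages with 10 images each.
y = [0] * 10 + [1] * 10
tr, te = stratified_split(y)
print(len(tr), len(te))  # 16 4
```

Compared with a plain random split, this guarantees that rarer stages (such as mature cataract) are represented in the test set in the same proportion as in the full dataset.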
Affiliation(s)
- Fan Gan
- Medical College of Nanchang University, Nanchang, China
- Department of Ophthalmology, Jiangxi Provincial People’s Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, China
- Hui Liu
- Department of Ophthalmology, Jiangxi Provincial People's Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, China
- Wei-Guo Qin
- Department of Cardiothoracic Surgery, The 908th Hospital of Chinese People's Liberation Army Joint Logistic Support Force, Nanchang, China
- Shui-Lian Zhou
- Department of Ophthalmology, Jiangxi Provincial People's Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, China
- *Correspondence: Shui-Lian Zhou