1
Cai Y, Zhang X, Cao J, Grzybowski A, Ye J, Lou L. Application of artificial intelligence in oculoplastics. Clin Dermatol 2024:S0738-081X(23)00271-7. PMID: 38184122. DOI: 10.1016/j.clindermatol.2023.12.019.
Abstract
Oculoplastics is a subspecialty of ophthalmology/dermatology concerned with eyelid, orbital, and lacrimal diseases. Artificial intelligence (AI), with its powerful ability to analyze large data sets, has dramatically benefited oculoplastics. Cutting-edge AI technology is widely applied to extract ocular parameters and to use these results for further assessment, such as screening and diagnosis of blepharoptosis and predicting the progression of thyroid eye disease. AI also assists in treatment procedures, such as surgical strategy planning in blepharoptosis. High efficiency and high reliability are the most apparent advantages of AI, and its prospects are promising; future possibilities in oculoplastics may lie in three-dimensional modeling technology and image generation. We retrospectively summarize AI applications involving eyelid, orbital, and lacrimal diseases in oculoplastics, and we also examine the strengths and weaknesses of AI technology in this field.
Affiliation(s)
- Yilu Cai
- Eye Center, Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, China
- Xuan Zhang
- Eye Center, Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, China
- Jing Cao
- Eye Center, Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, China
- Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland
- Juan Ye
- Eye Center, Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, China
- Lixia Lou
- Eye Center, Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, China
2
Gunturkun F, Bakir-Batu B, Siddiqui A, Lakin K, Hoehn ME, Vestal R, Davis RL, Shafi NI. Development of a Deep Learning Model for Retinal Hemorrhage Detection on Head Computed Tomography in Young Children. JAMA Netw Open 2023;6:e2319420. PMID: 37347482. PMCID: PMC10288337. DOI: 10.1001/jamanetworkopen.2023.19420.
Abstract
Importance: Abusive head trauma (AHT) in children is often missed in medical encounters, and retinal hemorrhage (RH) is considered strong evidence for AHT. Although head computed tomography (CT) is obtained routinely, all but exceptionally large RHs are undetectable on CT images in children.
Objective: To examine whether deep learning-based image analysis can detect RH on pediatric head CT.
Design, Setting, and Participants: This diagnostic study included 301 patients diagnosed with AHT who underwent head CT and dilated fundoscopic examinations at a quaternary care children's hospital. The study assessed a deep learning model using axial slices from 218 segmented globes with RH and 384 globes without RH, collected between May 1, 2007, and March 31, 2021. Two additional light gradient boosting machine (GBM) models were assessed: one that used demographic characteristics and common brain findings in AHT, and another that combined the deep learning model's risk prediction with the same demographic characteristics and brain findings.
Main Outcomes and Measures: Sensitivity (recall), specificity, precision, accuracy, F1 score, and area under the curve (AUC) were assessed for each model's prediction of the presence or absence of RH in globes. Globe regions that influenced the deep learning model's predictions were visualized in saliency maps. The contributions of demographic and standard CT features were assessed by Shapley additive explanations.
Results: The final study population included 301 patients (187 [62.1%] male; median [range] age, 4.6 [0.1-35.8] months). A total of 120 patients (39.9%) had RH on fundoscopic examination. The deep learning model performed as follows: sensitivity, 79.6%; specificity, 79.2%; positive predictive value (precision), 68.6%; negative predictive value, 87.1%; accuracy, 79.3%; F1 score, 73.7%; and AUC, 0.83 (95% CI, 0.75-0.91). The AUCs were 0.80 (95% CI, 0.69-0.91) for the general light GBM model and 0.86 (95% CI, 0.79-0.93) for the combined light GBM model. Sensitivities of all models were similar, whereas the specificities of the deep learning and combined light GBM models were higher than that of the general light GBM model.
Conclusions and Relevance: The findings of this diagnostic study indicate that deep learning-based image analysis of globes on pediatric head CT scans can predict the presence of RH. After prospective external validation, a deep learning model incorporated into CT image analysis software could calibrate clinical suspicion for AHT and provide decision support for which patients urgently need fundoscopic examinations.
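All of the diagnostic metrics reported above (sensitivity, specificity, precision, negative predictive value, accuracy, F1 score) derive from the same 2×2 confusion matrix. As a reminder of how they relate, here is a minimal pure-Python sketch; the toy labels are hypothetical and not taken from the study's data.

```python
def binary_metrics(y_true, y_pred):
    """Diagnostic metrics from binary labels (1 = finding present)
    and binary predictions, via the 2x2 confusion matrix."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sens = tp / (tp + fn)               # sensitivity (recall)
    spec = tn / (tn + fp)               # specificity
    ppv = tp / (tp + fp)                # precision / positive predictive value
    npv = tn / (tn + fn)                # negative predictive value
    acc = (tp + tn) / len(y_true)       # accuracy
    f1 = 2 * ppv * sens / (ppv + sens)  # F1: harmonic mean of PPV and recall
    return {"sensitivity": sens, "specificity": spec, "ppv": ppv,
            "npv": npv, "accuracy": acc, "f1": f1}

# Toy example (hypothetical labels):
m = binary_metrics([1, 1, 1, 0, 0, 0, 0, 1], [1, 1, 0, 0, 0, 1, 0, 1])
```

Note that AUC is the one reported metric that cannot be computed from a single confusion matrix: it requires the model's continuous risk scores, sweeping the decision threshold across them.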
Affiliation(s)
- Fatma Gunturkun
- Quantitative Sciences Unit, Department of Medicine, Stanford University, Palo Alto, California
- Berna Bakir-Batu
- Center for Biomedical Informatics, University of Tennessee Health Science Center, Memphis
- Adeel Siddiqui
- Department of Radiology, University of Tennessee Health Sciences Center, Memphis
- Karen Lakin
- Department of Pediatrics, Vanderbilt University Medical Center, Nashville, Tennessee
- Mary E. Hoehn
- Department of Ophthalmology, University of Tennessee Health Sciences Center, Memphis
- Robert Vestal
- Department of Ophthalmology, University of Tennessee Health Sciences Center, Memphis
- Robert L. Davis
- Center for Biomedical Informatics, University of Tennessee Health Science Center, Memphis
- Nadeem I. Shafi
- Department of Pediatrics, University of Tennessee Health Sciences Center, Memphis
3
Wawer Matos PA, Reimer RP, Rokohl AC, Caldeira L, Heindl LM, Große Hokamp N. Artificial Intelligence in Ophthalmology - Status Quo and Future Perspectives. Semin Ophthalmol 2023;38:226-237. PMID: 36356300. DOI: 10.1080/08820538.2022.2139625.
Abstract
Artificial intelligence (AI) is an emerging technology in healthcare and holds the potential to disrupt many areas of medical care. In particular, disciplines using medical imaging modalities, including radiology as well as ophthalmology, are already confronted with a wide variety of AI implications. In ophthalmologic research, AI has demonstrated promising results, albeit limited to specific diseases and imaging tools. Yet implementation of AI in clinical routine is not widespread, owing to limited availability and to heterogeneity in imaging techniques and AI methods. To describe the status quo, this narrative review provides a brief introduction to AI ("what the ophthalmologist needs to know"), followed by an overview of different AI-based applications in ophthalmology and a discussion of future challenges.
Abbreviations: Age-related macular degeneration, AMD; Artificial intelligence, AI; Anterior segment OCT, AS-OCT; Coronary artery calcium score, CACS; Convolutional neural network, CNN; Deep convolutional neural network, DCNN; Diabetic retinopathy, DR; Machine learning, ML; Optical coherence tomography, OCT; Retinopathy of prematurity, ROP; Support vector machine, SVM; Thyroid-associated ophthalmopathy, TAO.
Affiliation(s)
- Robert P Reimer
- Department of Diagnostic and Interventional Radiology, University Hospital of Cologne, Köln, Germany
- Alexander C Rokohl
- Department of Ophthalmology, University Hospital of Cologne, Köln, Germany
- Liliana Caldeira
- Department of Diagnostic and Interventional Radiology, University Hospital of Cologne, Köln, Germany
- Ludwig M Heindl
- Department of Ophthalmology, University Hospital of Cologne, Köln, Germany
- Nils Große Hokamp
- Department of Diagnostic and Interventional Radiology, University Hospital of Cologne, Köln, Germany
4
Lee SH, Lee S, Lee J, Lee JK, Moon NJ. Effective encoder-decoder neural network for segmentation of orbital tissue in computed tomography images of Graves' orbitopathy patients. PLoS One 2023;18:e0285488. PMID: 37163543. PMCID: PMC10171592. DOI: 10.1371/journal.pone.0285488.
Abstract
PURPOSE: To propose a neural network (NN) that can effectively segment orbital tissue in computed tomography (CT) images of Graves' orbitopathy (GO) patients.
METHODS: We analyzed orbital CT scans from 701 GO patients diagnosed between 2010 and 2019 and devised an NN specializing in semantic segmentation of orbital tissue in GO patients' CT images. After training four conventional NNs (Attention U-Net, DeepLab V3+, SegNet, and HarDNet-MSEG) and the proposed NN on the manual orbital tissue segmentations, we calculated the Dice coefficient and Intersection over Union for comparison.
RESULTS: CT images of the eyeball, the four rectus muscles, the optic nerve, and the lacrimal gland from all 701 patients were analyzed in this study. In the axial image with the largest eyeball area, the proposed NN achieved the best performance, with Dice coefficients of 98.2% for the eyeball, 94.1% for the optic nerve, 93.0% for the medial rectus muscle, and 91.1% for the lateral rectus muscle. The proposed NN also gave the best performance for the coronal image. Our qualitative analysis demonstrated that the proposed NN provided more sophisticated orbital tissue segmentations for GO patients than the conventional NNs.
CONCLUSION: The proposed NN exhibited improved CT image segmentation for GO patients compared with conventional NNs designed for semantic segmentation tasks.
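The Dice coefficient and Intersection over Union used for comparison above are both overlap ratios between a predicted mask and a reference mask. A minimal pure-Python sketch on flat binary masks (the example masks are hypothetical):

```python
def dice_and_iou(mask_a, mask_b):
    """Dice coefficient and Intersection over Union for two binary
    segmentation masks, given as same-length flat lists of 0/1."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))  # |A ∩ B|
    total = sum(mask_a) + sum(mask_b)                   # |A| + |B|
    union = total - inter                               # |A ∪ B|
    dice = 2 * inter / total
    iou = inter / union
    return dice, iou

# Toy masks (hypothetical): 2 of 4 foreground pixels overlap.
dice, iou = dice_and_iou([1, 1, 1, 0, 0], [1, 1, 0, 1, 0])
```

The two metrics are monotonically related (Dice = 2·IoU / (1 + IoU)), so for a single pair of masks they always rank methods the same way; Dice is simply the more common convention in medical image segmentation.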
Affiliation(s)
- Seung Hyeun Lee
- Department of Ophthalmology, Chung-Ang University College of Medicine, Chung-Ang University Hospital, Seoul, Korea
- Sanghyuck Lee
- Department of Artificial Intelligence, Chung-Ang University, Seoul, Korea
- Jaesung Lee
- Department of Artificial Intelligence, Chung-Ang University, Seoul, Korea
- Jeong Kyu Lee
- Department of Ophthalmology, Chung-Ang University College of Medicine, Chung-Ang University Hospital, Seoul, Korea
- Nam Ju Moon
- Department of Ophthalmology, Chung-Ang University College of Medicine, Chung-Ang University Hospital, Seoul, Korea
5
Bao XL, Sun YJ, Zhan X, Li GY. Orbital and eyelid diseases: The next breakthrough in artificial intelligence? Front Cell Dev Biol 2022;10:1069248. PMID: 36467418. PMCID: PMC9716028. DOI: 10.3389/fcell.2022.1069248.
Abstract
Orbital and eyelid disorders affect normal visual function and facial appearance, making precise oculoplastic and reconstructive surgery crucial. Artificial intelligence (AI) network models exhibit a remarkable ability to analyze large sets of medical images to locate lesions. Currently, AI-based technology can automatically diagnose and grade orbital and eyelid diseases, such as thyroid-associated ophthalmopathy (TAO), and can measure eyelid morphological parameters from external ocular photographs to assist surgical planning. The various types of imaging data for orbital and eyelid diseases provide a large amount of training data for network models, which might be the next breakthrough in AI-related research. This paper retrospectively summarizes the different kinds of imaging data addressed in AI-related research on orbital and eyelid diseases and discusses the advantages and limitations of this research field.
Affiliation(s)
- Xiao-Li Bao
- Department of Ophthalmology, Second Hospital of Jilin University, Changchun, China
- Ying-Jian Sun
- Department of Ophthalmology, Second Hospital of Jilin University, Changchun, China
- Xi Zhan
- Department of Engineering, The Army Engineering University of PLA, Nanjing, China
- Guang-Yu Li
- The Eye Hospital, School of Ophthalmology & Optometry, Wenzhou Medical University, Wenzhou, China
6
Steybe D, Poxleitner P, Metzger MC, Brandenburg LS, Schmelzeisen R, Bamberg F, Tran PH, Kellner E, Reisert M, Russe MF. Automated segmentation of head CT scans for computer-assisted craniomaxillofacial surgery applying a hierarchical patch-based stack of convolutional neural networks. Int J Comput Assist Radiol Surg 2022;17:2093-2101. PMID: 35665881. PMCID: PMC9515026. DOI: 10.1007/s11548-022-02673-5.
Abstract
Purpose: Computer-assisted techniques play an important role in craniomaxillofacial surgery. As segmentation of three-dimensional medical imaging is a cornerstone of these procedures, the present study aimed to investigate a deep learning approach for automated segmentation of head CT scans.
Methods: The deep learning approach of this study was based on the patchwork toolbox, using a multiscale stack of 3D convolutional neural networks. The images were split into nested patches using a fixed 3D matrix size with decreasing physical size in a pyramid format of four scale depths. Manual segmentation of 18 craniomaxillofacial structures was performed in 20 CT scans, of which 15 were used for training the deep learning network and five were used for validation of the automated segmentation results. Segmentation accuracy was evaluated by Dice similarity coefficient (DSC), surface DSC, 95% Hausdorff distance (95HD), and average symmetric surface distance (ASSD).
Results: Mean DSC was 0.81 ± 0.13 (range: 0.61 [mental foramen] to 0.98 [mandible]). Mean surface DSC was 0.94 ± 0.06 (range: 0.87 [mental foramen] to 0.99 [mandible]), with values > 0.9 for all structures but the mental foramen. Mean 95HD was 1.93 ± 2.05 mm (range: 1.00 mm [mandible] to 4.12 mm [maxillary sinus]), and mean ASSD was 0.42 ± 0.44 mm (range: 0.09 mm [mandible] to 1.19 mm [mental foramen]), with values < 1 mm for all structures but the mental foramen.
Conclusion: This study demonstrated high accuracy of automated segmentation for a variety of craniomaxillofacial structures, suggesting that this approach is suitable for incorporation into a computer-assisted craniomaxillofacial surgery workflow. The small amount of training data required and the flexibility of an open source-based network architecture enable a broad variety of clinical and research applications.
Supplementary Information: The online version contains supplementary material available at 10.1007/s11548-022-02673-5.
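Unlike the overlap-based DSC, the 95HD reported above is a surface-distance metric: it replaces the maximum in the classical Hausdorff distance with the 95th percentile of nearest-neighbour distances between the two surfaces, making it robust to small outlier regions. A minimal sketch under that common definition (percentile conventions vary between implementations, and the point sets here are hypothetical):

```python
import math

def percentile_hausdorff(points_a, points_b, q=95):
    """Symmetric q-th percentile Hausdorff distance between two point
    sets (e.g. surface voxels of a manual and an automated segmentation).
    q=100 recovers the classical (maximum) Hausdorff distance."""
    def directed(src, dst):
        # for each point in src, distance to its nearest neighbour in dst
        return [min(math.dist(p, r) for r in dst) for p in src]
    # pool distances from both directions, then take the q-th percentile
    d = sorted(directed(points_a, points_b) + directed(points_b, points_a))
    k = min(len(d) - 1, int(round(q / 100 * (len(d) - 1))))
    return d[k]

# Toy 2D example: two parallel "surfaces" one unit apart.
hd = percentile_hausdorff([(0, 0), (1, 0)], [(0, 1), (1, 1)])
```

Production implementations (e.g. on voxelized surfaces) differ in how they extract surface points and interpolate the percentile, but the nearest-neighbour-then-percentile structure is the same.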
Affiliation(s)
- David Steybe
- Department of Oral and Maxillofacial Surgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Hugstetter Str. 55, 79106, Freiburg, Germany
- Philipp Poxleitner
- Department of Oral and Maxillofacial Surgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Hugstetter Str. 55, 79106, Freiburg, Germany; Berta-Ottenstein-Programme for Clinician Scientists, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Marc Christian Metzger
- Department of Oral and Maxillofacial Surgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Hugstetter Str. 55, 79106, Freiburg, Germany
- Leonard Simon Brandenburg
- Department of Oral and Maxillofacial Surgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Hugstetter Str. 55, 79106, Freiburg, Germany
- Rainer Schmelzeisen
- Department of Oral and Maxillofacial Surgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Hugstetter Str. 55, 79106, Freiburg, Germany
- Fabian Bamberg
- Department of Diagnostic and Interventional Radiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Phuong Hien Tran
- Department of Diagnostic and Interventional Radiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Elias Kellner
- Department of Medical Physics, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Marco Reisert
- Department of Medical Physics, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Maximilian Frederik Russe
- Department of Diagnostic and Interventional Radiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany