1
Gurnani B, Kaur K, Lalgudi VG, Kundu G, Mimouni M, Liu H, Jhanji V, Prakash G, Roy AS, Shetty R, Gurav JS. Role of artificial intelligence, machine learning and deep learning models in corneal disorders - A narrative review. J Fr Ophtalmol 2024; 47:104242. [PMID: 39013268] [DOI: 10.1016/j.jfo.2024.104242] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
In the last decade, artificial intelligence (AI) has significantly impacted ophthalmology, particularly in managing corneal diseases, a major reversible cause of blindness. This review explores AI's transformative role in the corneal subspecialty, which has adopted advanced technology for superior clinical judgment, early diagnosis, and personalized therapy. While AI's role in anterior segment diseases is less documented compared to glaucoma and retinal pathologies, this review highlights its integration into corneal diagnostics through imaging techniques like slit-lamp biomicroscopy, anterior segment optical coherence tomography (AS-OCT), and in vivo confocal biomicroscopy. AI has been pivotal in refining decision-making and prognosis for conditions such as keratoconus, infectious keratitis, and dystrophies. Multi-disease deep learning neural networks (MDDNs) have shown diagnostic ability in classifying corneal diseases using AS-OCT images, achieving notable metrics like an AUC of 0.910. AI's progress over two decades has significantly improved the accuracy of diagnosing conditions like keratoconus and microbial keratitis. For instance, AI has achieved a 90.7% accuracy rate in classifying bacterial and fungal keratitis and an AUC of 0.910 in differentiating various corneal diseases. Convolutional neural networks (CNNs) have enhanced the analysis of color-coded corneal maps, yielding up to 99.3% diagnostic accuracy for keratoconus. Deep learning algorithms have also shown robust performance in detecting fungal hyphae on in vivo confocal microscopy, with precise quantification of hyphal density. AI models combining tomography scans and visual acuity have demonstrated up to 97% accuracy in keratoconus staging according to the Amsler-Krumeich classification. However, the review acknowledges the limitations of current AI models, including their reliance on binary classification, which may not capture the complexity of real-world clinical presentations with multiple coexisting disorders. Challenges also include dependency on data quality, diverse imaging protocols, and integrating multimodal images for a generalized AI diagnosis. The need for interpretability in AI models is emphasized to foster trust and applicability in clinical settings. Looking ahead, AI has the potential to unravel the intricate mechanisms behind corneal pathologies, reduce healthcare's carbon footprint, and revolutionize diagnostic and management paradigms. Ethical and regulatory considerations will accompany AI's clinical adoption, marking an era where AI not only assists but augments ophthalmic care.
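To make the CNN-based corneal-map classification mentioned above concrete, the following minimal Keras sketch maps a color-coded corneal topography image to a keratoconus probability and tracks AUC. It is an illustrative sketch, not any of the reviewed models: the input size, layer sizes, and training setup are arbitrary placeholders.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_corneal_cnn(input_shape=(224, 224, 3)):
        """Small binary CNN: color-coded corneal map -> keratoconus probability (illustrative)."""
        model = models.Sequential([
            layers.Input(shape=input_shape),
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.GlobalAveragePooling2D(),
            layers.Dense(64, activation="relu"),
            layers.Dense(1, activation="sigmoid"),   # P(keratoconus)
        ])
        model.compile(optimizer="adam",
                      loss="binary_crossentropy",
                      metrics=[tf.keras.metrics.AUC(name="auc")])
        return model

    model = build_corneal_cnn()
    model.summary()  # then: model.fit(train_maps, train_labels, validation_data=...)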
Affiliation(s)
- B Gurnani
- Department of Cataract, Cornea, External Disease, Trauma, Ocular Surface and Refractive Surgery, ASG Eye Hospital, Jodhpur, Rajasthan, India.
- K Kaur
- Department of Cataract, Pediatric Ophthalmology and Strabismus, ASG Eye Hospital, Jodhpur, Rajasthan, India
- V G Lalgudi
- Department of Cornea, Refractive Surgery, Ira G Ross Eye Institute, Jacobs School of Medicine and Biomedical Sciences, State University of New York (SUNY), Buffalo, USA
- G Kundu
- Department of Cornea and Refractive Surgery, Narayana Nethralaya, Bangalore, India
- M Mimouni
- Department of Ophthalmology, Rambam Health Care Campus, affiliated with the Bruce and Ruth Rappaport Faculty of Medicine, Technion-Israel Institute of Technology, Haifa, Israel
- H Liu
- Department of Ophthalmology, University of Ottawa Eye Institute, Ottawa, Canada
- V Jhanji
- UPMC Eye Center, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- G Prakash
- Department of Ophthalmology, School of Medicine, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- A S Roy
- Narayana Nethralaya Foundation, Bangalore, India
- R Shetty
- Department of Cornea and Refractive Surgery, Narayana Nethralaya, Bangalore, India
- J S Gurav
- Department of Ophthalmology, Armed Forces Medical College, Pune, India

2
Kang Z, Xiao E, Li Z, Wang L. Deep Learning Based on ResNet-18 for Classification of Prostate Imaging-Reporting and Data System Category 3 Lesions. Acad Radiol 2024; 31:2412-2423. [PMID: 38302387] [DOI: 10.1016/j.acra.2023.12.042] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
RATIONALE AND OBJECTIVES To explore the classification and prediction efficacy of a deep learning model for benign prostate lesions, non-clinically significant prostate cancer (non-csPCa) and clinically significant prostate cancer (csPCa) among Prostate Imaging-Reporting and Data System (PI-RADS) 3 lesions. MATERIALS AND METHODS From January 2015 to December 2021, lesions diagnosed as PI-RADS 3 on multi-parametric or bi-parametric MRI were retrospectively included and classified as benign prostate lesions, non-csPCa, or csPCa according to the pathological results. T2-weighted images of the lesions were divided into a training set and a test set in an 8:2 ratio, and ResNet-18 was used for model training. All statistical analyses were performed using Python open-source libraries. The receiver operating characteristic (ROC) curve was used to evaluate the predictive effectiveness of the model, t-SNE was used to visualize image semantic features, and class activation mapping was used to visualize the areas the model focused on. RESULTS A total of 428 benign prostate lesion images, 158 non-csPCa images and 273 csPCa images were included. The precision in predicting benign prostate lesions, non-csPCa and csPCa was 0.882, 0.681 and 0.851, and the areas under the ROC curve were 0.875, 0.89 and 0.929, respectively. Semantic feature analysis showed strong classification separability between csPCa and benign prostate lesions. The class activation maps showed that the deep learning model focused on the prostate region or the location of the PI-RADS 3 lesion. CONCLUSION A ResNet-18-based deep learning model using T2-weighted images can accurately classify PI-RADS 3 lesions.
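A minimal sketch of the kind of ResNet-18 transfer-learning setup described above, under stated assumptions (ImageNet-pretrained weights, a replaced three-class head, Adam optimizer); it is not the authors' code, preprocessing of the T2-weighted slices is omitted, and the weights enum requires torchvision 0.13 or newer.

    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_CLASSES = 3  # benign lesion, non-csPCa, csPCa

    # ImageNet-pretrained backbone; the 1000-class head is replaced with a 3-class head.
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
        """One optimization step; images are (N, 3, H, W) tensors, labels are class indices."""
        model.train()
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()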
Affiliation(s)
- Zhen Kang
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei Province, China
- Enhua Xiao
- Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha, Hunan Province, China
- Zhen Li
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei Province, China
- Liang Wang
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China

3
Malashin I, Daibagya D, Tynchenko V, Gantimurov A, Nelyub V, Borodulin A. Predicting Diffusion Coefficients in Nafion Membranes during the Soaking Process Using a Machine Learning Approach. Polymers (Basel) 2024; 16:1204. [PMID: 38732673] [PMCID: PMC11085799] [DOI: 10.3390/polym16091204] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Nafion, a versatile polymer used in electrochemistry and membrane technologies, exhibits complex behaviors in saline environments. This study explores Nafion membrane's IR spectra during soaking and subsequent drying processes in salt solutions at various concentrations. Utilizing the principles of Fick's second law, diffusion coefficients for these processes are derived via exponential approximation. By harnessing machine learning (ML) techniques, including the optimization of neural network hyperparameters via a genetic algorithm (GA) and leveraging various regressors, we effectively pinpointed the optimal model for predicting diffusion coefficients. Notably, for the prediction of soaking coefficients, our model is composed of layers with 64, 64, 32, and 16 neurons, employing ReLU, ELU, sigmoid, and ELU activation functions, respectively. Conversely, for drying coefficients, our model features two hidden layers with 16 and 12 neurons, utilizing sigmoid and ELU activation functions, respectively.
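The reported soaking-coefficient architecture (hidden layers of 64, 64, 32 and 16 neurons with ReLU, ELU, sigmoid and ELU activations) can be written down directly. The Keras sketch below is only an interpretation of that description: the input features, optimizer, and the genetic-algorithm hyperparameter search are assumptions or omissions, not the authors' code.

    from tensorflow import keras
    from tensorflow.keras import layers

    def build_soaking_model(n_features: int) -> keras.Model:
        """Regressor with the 64/64/32/16 hidden layers and ReLU/ELU/sigmoid/ELU activations."""
        model = keras.Sequential([
            layers.Input(shape=(n_features,)),
            layers.Dense(64, activation="relu"),
            layers.Dense(64, activation="elu"),
            layers.Dense(32, activation="sigmoid"),
            layers.Dense(16, activation="elu"),
            layers.Dense(1),  # predicted diffusion coefficient
        ])
        model.compile(optimizer="adam", loss="mse")
        return model

    # Hypothetical usage: inputs such as salt concentration and soaking time (assumed features).
    model = build_soaking_model(n_features=2)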
Affiliation(s)
- Ivan Malashin
- Artificial Intelligence Technology Scientific and Education Center, Bauman Moscow State Technical University, 105005 Moscow, Russia
- Daniil Daibagya
- Artificial Intelligence Technology Scientific and Education Center, Bauman Moscow State Technical University, 105005 Moscow, Russia
- P.N. Lebedev Physical Institute of the Russian Academy of Sciences, 119991 Moscow, Russia
- Vadim Tynchenko
- Artificial Intelligence Technology Scientific and Education Center, Bauman Moscow State Technical University, 105005 Moscow, Russia
- Andrei Gantimurov
- Artificial Intelligence Technology Scientific and Education Center, Bauman Moscow State Technical University, 105005 Moscow, Russia
- Vladimir Nelyub
- Artificial Intelligence Technology Scientific and Education Center, Bauman Moscow State Technical University, 105005 Moscow, Russia
- Scientific Department, Far Eastern Federal University, 690922 Vladivostok, Russia
- Aleksei Borodulin
- Artificial Intelligence Technology Scientific and Education Center, Bauman Moscow State Technical University, 105005 Moscow, Russia

4
Yang L, Wang T, Zhang J, Kang S, Xu S, Wang K. Deep learning-based automatic segmentation of meningioma from T1-weighted contrast-enhanced MRI for preoperative meningioma differentiation using radiomic features. BMC Med Imaging 2024; 24:56. [PMID: 38443817] [PMCID: PMC10916038] [DOI: 10.1186/s12880-024-01218-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
BACKGROUND This study aimed to establish a dedicated deep-learning model (DLM) on routine magnetic resonance imaging (MRI) data and to investigate its performance in automated detection and segmentation of meningiomas compared with manual segmentation. A further aim was to develop a radiomics model, based on features extracted from the automatic segmentations, to differentiate low- and high-grade meningiomas before surgery. MATERIALS AND METHODS A total of 326 patients with pathologically confirmed meningiomas were enrolled. Samples were randomly split in a 6:2:2 ratio into training, validation, and test sets. Volumetric regions of interest (VOIs) were manually drawn on each slice using the ITK-SNAP software. An automatic segmentation model based on SegResNet was developed for meningioma segmentation, and segmentation performance was evaluated by Dice coefficient and 95% Hausdorff distance. Intraclass correlation (ICC) analysis was applied to assess the agreement between radiomic features from manual and automatic segmentations. Radiomics features derived from the automatic segmentations were extracted with PyRadiomics, and after feature selection a model for meningioma grading was built. RESULTS The DLM detected meningiomas in all cases. For automatic segmentation, the mean Dice coefficient and 95% Hausdorff distance in the test set were 0.881 (95% CI: 0.851-0.981) and 2.016 (95% CI: 1.439-3.158), respectively. Features extracted from manual and automatic segmentations were comparable: the average ICC value was 0.804 (range, 0.636-0.933). For meningioma classification, the radiomics model based on automatic segmentation performed well in grading meningiomas, yielding a sensitivity, specificity, accuracy, and area under the curve (AUC) of 0.778 (95% CI: 0.701-0.856), 0.860 (95% CI: 0.722-0.908), 0.848 (95% CI: 0.715-0.903) and 0.842 (95% CI: 0.807-0.895) in the test set, respectively. CONCLUSIONS The DLM yielded favorable automated detection and segmentation of meningiomas and can help deploy radiomics for preoperative meningioma differentiation in clinical practice.
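A minimal MONAI-based sketch of the segmentation component described above (a 3D SegResNet trained with a Dice-type objective). Channel counts, patch size, and loss settings are assumptions; preprocessing, the training loop, Hausdorff evaluation, and the PyRadiomics grading step are omitted.

    import torch
    from monai.networks.nets import SegResNet
    from monai.losses import DiceLoss

    # 3D SegResNet with one input channel (contrast-enhanced T1) and two output classes
    # (background / meningioma); init_filters and the 96^3 patch are illustrative choices.
    model = SegResNet(spatial_dims=3, in_channels=1, out_channels=2, init_filters=16)
    loss_fn = DiceLoss(to_onehot_y=True, softmax=True)

    x = torch.randn(1, 1, 96, 96, 96)            # one T1-contrast patch
    y = torch.randint(0, 2, (1, 1, 96, 96, 96))  # manual meningioma mask
    logits = model(x)                            # shape (1, 2, 96, 96, 96)
    print("Dice loss on random data:", loss_fn(logits, y).item())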
Affiliation(s)
- Liping Yang
- Department of PET-CT, Harbin Medical University Cancer Hospital, Harbin, 150001, China
- Tianzuo Wang
- Medical Imaging Department, Changzheng Hospital of Harbin City, Harbin, China
- Jinling Zhang
- Medical Imaging Department, The Second Affiliated Hospital of Harbin Medical University, Harbin, China
- Shi Kang
- Medical Imaging Department, The Second Hospital of Heilongjiang Province, Harbin, China
- Shichuan Xu
- Department of Medical Instruments, Second Hospital of Harbin, Harbin, 150001, China
- Kezheng Wang
- Department of PET-CT, Harbin Medical University Cancer Hospital, Harbin, 150001, China

5
Ryakhovsky AN, Ryakhovsky SA. [Comparative evaluation of the accuracy of 3D TMJ analysis performed by different methods of processing computed tomograms]. Stomatologiia (Mosk) 2024; 103:56-60. [PMID: 38741536] [DOI: 10.17116/stomat202410302156] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
OBJECTIVE To compare the accuracy of segmentation of TMJ elements performed by different methods and to assess the suitability of the resulting data for the diagnosis of TMJ dysfunction. MATERIALS AND METHODS To study segmentation of the bony elements of the TMJ (the articular fossa and the mandibular head), 60 computed tomograms of the maxillofacial region (archival material) were randomly selected and processed in different ways. Group 1 comprised the results of CT processing by AI diagnostic algorithms (Russia); group 2, the results of CT processing with the semi-automatic segmentation method in the Avantis3D program. The results of CT processing by the Avantis3D AI algorithms (Russia) with probability thresholds of 0.4 and 0.9 were assigned to the third and fourth groups, respectively. The coincidence of the contours of the mandibular heads and articular fossae obtained by the different methods with their contours on all available sections of the original CT was assessed visually. The time spent on TMJ segmentation from the CT data was measured and compared across the methods described above. RESULTS Of the 240 objects, only 7.5% of cases in group 1 showed a slight discrepancy with the contours of the original CT, the lowest of all groups. A slight discrepancy in the TMJ contours requiring correction, characteristic of the semi-automatic segmentation method based on optical density, was detected in 50.4% of cases (group 2). The largest percentage of significant, non-correctable errors, which made a full 3D analysis of the TMJ impossible, was noted in the first group, and the smallest in the second and fourth groups. The error in determining the width of the articular gap in the different groups was comparable to the size of one CT voxel. With AI-based segmentation, the difference between segmented objects was close to zero. The average time spent on TMJ segmentation was 10.2±1.23 seconds in group 1, 12.6±1.87 seconds in group 2, and 0.46±0.12 and 0.46±0.13 seconds in groups 3 and 4, respectively. CONCLUSION The developed automated AI-based method for segmenting TMJ elements is clearly more suitable for practical work, since it requires minimal time and is almost as accurate as the other methods considered.
Affiliation(s)
- A N Ryakhovsky
- Central Research Institute of Dentistry and Maxillofacial Surgery, Moscow, Russia

6
Shen X, Mo S, Zeng X, Wang Y, Lin L, Weng M, Sugasawa T, Wang L, Gu W, Nakajima T. Identification of antigen-presentation related B cells as a key player in Crohn's disease using single-cell dissecting, hdWGCNA, and deep learning. Clin Exp Med 2023; 23:5255-5267. [PMID: 37550553] [DOI: 10.1007/s10238-023-01145-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5]
Abstract
Crohn's disease (CD) arises from intricate intercellular interactions within the intestinal lamina propria. Our objective was to use single-cell RNA sequencing to investigate CD pathogenesis and explore its clinical significance. We identified a distinct subset of B cells, highly infiltrated in the CD lamina propria, that expressed genes related to antigen presentation. Using high-dimensional weighted gene co-expression network analysis and nine machine learning techniques, we demonstrated that the antigen-presenting CD-specific B cell signature effectively differentiated diseased mucosa from normal mucosa (Independent external testing AUC = 0.963). Additionally, using MCPcounter and non-negative matrix factorization, we established a relationship between the antigen-presenting CD-specific B cell signature and immune cell infiltration and patient heterogeneity. Finally, we developed a gene-immune convolutional neural network deep learning model that accurately diagnosed CD mucosa in diverse cohorts (Independent external testing AUC = 0.963). Our research has revealed a population of B cells with a potential promoting role in CD pathogenesis and represents a fundamental step in the development of future clinical diagnostic tools for the disease.
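To illustrate the signature-evaluation step (separating CD mucosa from normal mucosa and reporting ROC AUC), here is a generic scikit-learn sketch that uses a logistic-regression stand-in rather than the authors' nine classifiers or their gene-immune convolutional neural network; expr, labels, and signature_genes are hypothetical placeholders.

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    def signature_auc(expr: pd.DataFrame, labels: np.ndarray, signature_genes) -> float:
        """Train on a gene-signature submatrix and return ROC AUC on a held-out split."""
        X = expr.loc[:, signature_genes].to_numpy()
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, labels, test_size=0.3, stratify=labels, random_state=0)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])

    # Hypothetical call: rows are mucosal samples, labels are 1 for CD and 0 for normal mucosa.
    # auc = signature_auc(expr, labels, signature_genes)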
Affiliation(s)
- Xin Shen
- Department of Digestive Diseases, Huashan Hospital, Fudan University, Shanghai, 200040, China
- Shaocong Mo
- Department of Digestive Diseases, Huashan Hospital, Fudan University, Shanghai, 200040, China
- Xinlei Zeng
- School of Pharmaceutical Sciences, Sun Yat-Sen University, Guangzhou, 510006, China
- Yulin Wang
- Department of Nephrology, Zhongshan Hospital, Fudan University, Shanghai, 200032, China
- Lingxi Lin
- Department of Digestive Diseases, Huashan Hospital, Fudan University, Shanghai, 200040, China
- Meilin Weng
- Department of Anesthesiology, Zhongshan Hospital, Fudan University, Shanghai, China
- Takehito Sugasawa
- Laboratory of Clinical Examination and Sports Medicine, Department of Clinical Medicine, Faculty of Medicine, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, 305-8577, Japan
- Lei Wang
- Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College of Fudan University, Shanghai, China
- Wenchao Gu
- Department of Diagnostic and Interventional Radiology, University of Tsukuba, Ibaraki, 305-8577, Japan
- Department of Diagnostic Radiology and Nuclear Medicine, Gunma University Graduate School of Medicine, Maebashi, 371-8511, Japan
- Takahito Nakajima
- Department of Diagnostic and Interventional Radiology, University of Tsukuba, Ibaraki, 305-8577, Japan

7
Hagg A, Kirschner KN. Open-Source Machine Learning in Computational Chemistry. J Chem Inf Model 2023; 63:4505-4532. [PMID: 37466636] [PMCID: PMC10430767] [DOI: 10.1021/acs.jcim.3c00643] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0]
Abstract
The field of computational chemistry has seen a significant increase in the integration of machine learning concepts and algorithms. In this Perspective, we surveyed 179 open-source software projects, with corresponding peer-reviewed papers published within the last 5 years, to better understand the topics within the field being investigated by machine learning approaches. For each project, we provide a short description, the link to the code, the accompanying license type, and whether the training data and resulting models are made publicly available. Based on those deposited in GitHub repositories, the most popular employed Python libraries are identified. We hope that this survey will serve as a resource to learn about machine learning or specific architectures thereof by identifying accessible codes with accompanying papers on a topic basis. To this end, we also include computational chemistry open-source software for generating training data and fundamental Python libraries for machine learning. Based on our observations and considering the three pillars of collaborative machine learning work, open data, open source (code), and open models, we provide some suggestions to the community.
Affiliation(s)
- Alexander Hagg
- Institute of Technology, Resource and Energy-Efficient Engineering (TREE), University of Applied Sciences Bonn-Rhein-Sieg, 53757 Sankt Augustin, Germany
- Department of Electrical Engineering, Mechanical Engineering and Technical Journalism, University of Applied Sciences Bonn-Rhein-Sieg, 53757 Sankt Augustin, Germany
- Karl N. Kirschner
- Institute of Technology, Resource and Energy-Efficient Engineering (TREE), University of Applied Sciences Bonn-Rhein-Sieg, 53757 Sankt Augustin, Germany
- Department of Computer Science, University of Applied Sciences Bonn-Rhein-Sieg, 53757 Sankt Augustin, Germany

8
Kim SK. Transverse Deflection for Extreme Ultraviolet Pellicles. Materials (Basel) 2023; 16:3471. [PMID: 37176352] [PMCID: PMC10179971] [DOI: 10.3390/ma16093471] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Defect control of extreme ultraviolet (EUV) masks using pellicles is challenging for mass production in EUV lithography because EUV pellicles require more critical fabrication than argon fluoride (ArF) pellicles. One of the fabrication requirements is a transverse deflection of less than 500 μm, with more than 88% transmittance, for full-size pellicles (112 mm × 145 mm) at a pressure of 2 Pa. For the nanometer thickness of EUV pellicles (thickness-to-length ratio t/L = 0.0000054), this study reports the limitations of student versions and shear locking in commercial finite element method (FEM) tools such as ANSYS and SIEMENS. A Python program-based analytical-numerical method with deep learning is described as an alternative: deep learning extended the ANSYS limitation and overcame shear locking. For EUV pellicle materials, the ascending order of transverse deflection was Ru < MoSi2 = SiC < SiNx < ZrSi2 < p-Si < Sn in both ANSYS and the Python program, regardless of thickness and pressure. According to a neural network sensitivity analysis following the Taguchi method, the sensitivity order of the EUV pellicle parameters was Poisson's ratio < elastic modulus < pressure < thickness < length.
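For orientation only, the classical Kirchhoff small-deflection result for a simply supported rectangular plate under uniform pressure, w_max = alpha*p*b^4/D with D = E*t^3/(12*(1 - nu^2)), can be evaluated in a few lines of Python. This is a textbook approximation with assumed, roughly ruthenium-like material values, not the paper's analytical-numerical or deep-learning model; its wildly unphysical output for a nanometer-thick film shows why pure bending theory fails here and why membrane (large-deflection) behaviour, FEM, and the locking issues discussed above matter.

    def plate_center_deflection(p, b, t, E, nu, alpha=0.00406):
        """Kirchhoff small-deflection estimate w_max = alpha*p*b**4/D for a simply supported
        rectangular plate; alpha is about 0.00406 for a square plate (other aspect ratios need
        the tabulated coefficient). SI units in, metres out."""
        D = E * t**3 / (12.0 * (1.0 - nu**2))  # flexural rigidity
        return alpha * p * b**4 / D

    # Assumed, roughly ruthenium-like values: 2 Pa pressure, 112 mm short side, 20 nm thickness.
    w = plate_center_deflection(p=2.0, b=0.112, t=20e-9, E=447e9, nu=0.30)
    print(f"Pure-bending estimate: {w:.3e} m (unphysically large; membrane tension governs)")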
Affiliation(s)
- Sang-Kon Kim
- The Faculty of Liberal Arts, Hongik University, Seoul 04066, Republic of Korea