1. Ocular biomarkers: useful incidental findings by deep learning algorithms in fundus photographs. Eye (Lond) 2024. [PMID: 38734746] [DOI: 10.1038/s41433-024-03085-2]
Abstract
BACKGROUND/OBJECTIVES Artificial intelligence can assist with ocular image analysis for screening and diagnosis, but it is not yet capable of autonomous full-spectrum screening. Hypothetically, false-positive results may have unrealized screening potential arising from signals persisting despite training and/or ambiguous signals such as from biomarker overlap or high comorbidity. The study aimed to explore the potential to detect clinically useful incidental ocular biomarkers by screening fundus photographs of hypertensive adults using diabetic deep learning algorithms. SUBJECTS/METHODS Patients referred for treatment-resistant hypertension were imaged at a hospital unit in Perth, Australia, between 2016 and 2022. The same 45° colour fundus photograph selected for each of the 433 participants imaged was processed by three deep learning algorithms. Two expert retinal specialists graded all false-positive results for diabetic retinopathy in non-diabetic participants. RESULTS Of the 29 non-diabetic participants misclassified as positive for diabetic retinopathy, 28 (97%) had clinically useful retinal biomarkers. The models designed to screen for fewer diseases captured more incidental disease. All three algorithms showed a positive correlation between severity of hypertensive retinopathy and misclassified diabetic retinopathy. CONCLUSIONS The results suggest that diabetic deep learning models may be responsive to hypertensive and other clinically useful retinal biomarkers within an at-risk, hypertensive cohort. Observing that models trained for fewer diseases captured more incidental pathology increases confidence in signalling hypotheses aligned with using self-supervised learning to develop autonomous comprehensive screening. Meanwhile, non-referable and false-positive outputs of other deep learning screening models could be explored for immediate clinical use in other populations.
2. Gradient-Based Saliency Maps Are Not Trustworthy Visual Explanations of Automated AI Musculoskeletal Diagnoses. Journal of Imaging Informatics in Medicine 2024. [PMID: 38710971] [DOI: 10.1007/s10278-024-01136-4]
Abstract
Saliency maps are popularly used to "explain" decisions made by modern machine learning models, including deep convolutional neural networks (DCNNs). While the resulting heatmaps purportedly indicate important image features, their "trustworthiness," i.e., utility and robustness, has not been evaluated for musculoskeletal imaging. The purpose of this study was to systematically evaluate the trustworthiness of saliency maps used in disease diagnosis on upper extremity X-ray images. The underlying DCNNs were trained using the Stanford MURA dataset. We studied four trustworthiness criteria-(1) localization accuracy of abnormalities, (2) repeatability, (3) reproducibility, and (4) sensitivity to underlying DCNN weights-across six different gradient-based saliency methods (Grad-CAM (GCAM), gradient explanation (GRAD), integrated gradients (IG), Smoothgrad (SG), smooth IG (SIG), and XRAI). Ground-truth was defined by the consensus of three fellowship-trained musculoskeletal radiologists who each placed bounding boxes around abnormalities on a holdout saliency test set. Compared to radiologists, all saliency methods showed inferior localization (AUPRCs: 0.438 (SG)-0.590 (XRAI); average radiologist AUPRC: 0.816), repeatability (IoUs: 0.427 (SG)-0.551 (IG); average radiologist IOU: 0.613), and reproducibility (IoUs: 0.250 (SG)-0.502 (XRAI); average radiologist IOU: 0.613) on abnormalities such as fractures, orthopedic hardware insertions, and arthritis. Five methods (GCAM, GRAD, IG, SG, XRAI) passed the sensitivity test. Ultimately, no saliency method met all four trustworthiness criteria; therefore, we recommend caution and rigorous evaluation of saliency maps prior to their clinical use.
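The repeatability and reproducibility criteria above are reported as IoU between saliency maps. A minimal sketch of that comparison, with invented values rather than anything from the study, might look like:

```python
# Hedged sketch: the study reports repeatability/reproducibility as IoU
# between saliency maps; function names and data here are illustrative,
# not taken from the paper's code.

def binarize(saliency, threshold):
    """Binarize a 2D saliency map at a fixed threshold."""
    return [[1 if v >= threshold else 0 for v in row] for row in saliency]

def iou(map_a, map_b):
    """Intersection-over-union of two binary masks of equal shape."""
    inter = sum(a & b for ra, rb in zip(map_a, map_b) for a, b in zip(ra, rb))
    union = sum(a | b for ra, rb in zip(map_a, map_b) for a, b in zip(ra, rb))
    return inter / union if union else 1.0  # two empty masks agree perfectly

# Two saliency maps from repeated runs of the same model on the same image:
run1 = binarize([[0.9, 0.2], [0.7, 0.1]], 0.5)
run2 = binarize([[0.8, 0.6], [0.6, 0.1]], 0.5)
print(round(iou(run1, run2), 3))  # 2 shared pixels / 3 in the union -> 0.667
```

A low IoU between repeated runs, as reported for all six methods, means the heatmap itself is unstable even when the diagnosis is not.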
3. Performance of deep learning for detection of chronic kidney disease from retinal fundus photographs: A systematic review and meta-analysis. Eur J Ophthalmol 2024; 34:502-509. [PMID: 37671422] [DOI: 10.1177/11206721231199848]
Abstract
OBJECTIVE Deep learning has been used to detect chronic kidney disease (CKD) from retinal fundus photographs. We aim to evaluate the performance of deep learning for CKD detection. METHODS Original studies in which CKD was detected by deep learning from retinal fundus photographs were eligible for inclusion. PubMed, Embase, the Cochrane Library, and Web of Science were searched up to October 31, 2022. The Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool was used to assess the risk of bias. RESULTS Four studies enrolling 114,860 subjects were included. The pooled sensitivity and specificity were 87.8% (95% confidence interval (CI): 61.6% to 98.3%) and 62.4% (95% CI: 44.9% to 78.7%), respectively. The area under the curve (AUC) was 0.864 (95% CI: 0.769, 0.986). CONCLUSION Deep learning based on retinal fundus photographs can detect CKD, but its performance leaves considerable room for improvement and remains far from clinical application.
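Pooling per-study sensitivities or specificities is the core computation behind figures like those above. As a rough sketch only: meta-analyses of diagnostic accuracy typically use a bivariate random-effects model, but the simpler fixed-effect, inverse-variance pooling on the logit scale below shows the idea, with made-up study counts:

```python
import math

# Hedged sketch: simplified fixed-effect inverse-variance pooling of
# per-study proportions (e.g., sensitivities) on the logit scale. The
# review itself likely used a bivariate random-effects model; the
# true-positive counts below are invented. Assumes 0 < e < n per study.

def pooled_proportion(events, totals):
    """Pool proportions across studies, weighting by inverse logit variance."""
    logits, weights = [], []
    for e, n in zip(events, totals):
        p = e / n
        var = 1.0 / e + 1.0 / (n - e)        # variance of the logit of p
        logits.append(math.log(p / (1 - p)))
        weights.append(1.0 / var)
    pooled_logit = sum(l * w for l, w in zip(logits, weights)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-pooled_logit))  # back-transform

# Hypothetical true-positive counts and diseased totals for four studies:
print(round(pooled_proportion([80, 45, 120, 60], [90, 50, 140, 75]), 3))
```

The pooled estimate always lands between the smallest and largest per-study proportion, pulled toward the larger, more precise studies.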
4. Predicting extremely low body weight from 12-lead electrocardiograms using a deep neural network. Sci Rep 2024; 14:4696. [PMID: 38409450] [PMCID: PMC10897430] [DOI: 10.1038/s41598-024-55453-3]
Abstract
Previous studies have successfully predicted overweight status by applying deep learning to 12-lead electrocardiograms (ECGs); however, models for predicting underweight status remain unexplored. Here, we assessed the feasibility of deep learning in predicting extremely low body weight using 12-lead ECGs and investigated the prediction rationale by highlighting the parts of the ECG associated with extremely low body weight. Using records of inpatients predominantly with anorexia nervosa, we trained a convolutional neural network (CNN) that takes a 12-lead ECG as input and outputs a binary prediction of whether body mass index is ≤ 12.6 kg/m2. This threshold was identified in a previous study as the optimal cutoff point for predicting the onset of refeeding syndrome. The CNN model achieved an area under the receiver operating characteristic curve of 0.807 (95% confidence interval, 0.745-0.869) on the test dataset. The gradient-weighted class activation map showed that the model focused on QRS waves, and QRS voltage was negatively correlated with the prediction scores. These results suggest that deep learning is feasible for predicting extremely low body weight using 12-lead ECGs and that several ECG features, such as lower QRS voltage, may be associated with extremely low body weight in patients with anorexia nervosa.
5. Deep Learning and Machine Learning Algorithms for Retinal Image Analysis in Neurodegenerative Disease: Systematic Review of Datasets and Models. Transl Vis Sci Technol 2024; 13:16. [PMID: 38381447] [PMCID: PMC10893898] [DOI: 10.1167/tvst.13.2.16]
Abstract
Purpose Retinal images contain rich biomarker information for neurodegenerative disease. Recently, deep learning models have been used for automated neurodegenerative disease diagnosis and risk prediction using retinal images with good results. Methods In this review, we systematically report studies with datasets of retinal images from patients with neurodegenerative diseases, including Alzheimer's disease, Huntington's disease, Parkinson's disease, amyotrophic lateral sclerosis, and others. We also review and characterize the models in the current literature which have been used for classification, regression, or segmentation problems using retinal images in patients with neurodegenerative diseases. Results Our review found several existing datasets and models with various imaging modalities, primarily in patients with Alzheimer's disease, with most datasets on the order of tens to a few hundred images. We found limited data available for the other neurodegenerative diseases. Although cross-sectional imaging data for Alzheimer's disease are becoming more abundant, datasets with longitudinal imaging of any disease are lacking. Conclusions The use of bilateral and multimodal imaging together with metadata seems to improve model performance; thus, multimodal bilateral image datasets with patient metadata are needed. We identified several deep learning tools that have been useful in this context, including feature extraction algorithms specifically for retinal images, retinal image preprocessing techniques, transfer learning, feature fusion, and attention mapping. Importantly, we also consider the limitations common to these models in real-world clinical applications. Translational Relevance This systematic review evaluates the deep learning models and retinal features relevant in the evaluation of retinal images of patients with neurodegenerative disease.
6. Revisiting the Trustworthiness of Saliency Methods in Radiology AI. Radiol Artif Intell 2024; 6:e220221. [PMID: 38166328] [PMCID: PMC10831523] [DOI: 10.1148/ryai.220221]
Abstract
Purpose To determine whether saliency maps in radiology artificial intelligence (AI) are vulnerable to subtle perturbations of the input, which could lead to misleading interpretations, using prediction-saliency correlation (PSC) for evaluating the sensitivity and robustness of saliency methods. Materials and Methods In this retrospective study, locally trained deep learning models and a research prototype provided by a commercial vendor were systematically evaluated on 191,229 chest radiographs from the CheXpert dataset and 7,022 MR images from a human brain tumor classification dataset. Two radiologists performed a reader study on 270 chest radiograph pairs. A model-agnostic approach for computing the PSC coefficient was used to evaluate the sensitivity and robustness of seven commonly used saliency methods. Results The saliency methods had low sensitivity (maximum PSC, 0.25; 95% CI: 0.12, 0.38) and weak robustness (maximum PSC, 0.12; 95% CI: 0.0, 0.25) on the CheXpert dataset, as demonstrated by leveraging locally trained model parameters. Further evaluation showed that the saliency maps generated from a commercial prototype could be irrelevant to the model output, without knowledge of the model specifics (area under the receiver operating characteristic curve decreased by 8.6% without affecting the saliency map). The human observer studies confirmed that it is difficult for experts to identify the perturbed images; experts achieved less than 44.8% accuracy in doing so. Conclusion Popular saliency methods scored low PSC values on the two datasets of perturbed chest radiographs, indicating weak sensitivity and robustness. The proposed PSC metric provides a valuable quantification tool for validating the trustworthiness of medical AI explainability. Keywords: Saliency Maps, AI Trustworthiness, Dynamic Consistency, Sensitivity, Robustness. Supplemental material is available for this article. © RSNA, 2023. See also the commentary by Yanagawa and Sato in this issue.
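The intuition behind a prediction-saliency correlation can be sketched in a few lines: perturb the input, record how much the prediction changes and how much the saliency map changes, then correlate the two. The paper's exact PSC formulation may well differ; the "model", saliency function, and perturbations below are toy stand-ins:

```python
import math

# Hedged sketch of a prediction-saliency correlation (PSC). A faithful
# saliency method should change when the prediction changes; a high
# correlation between the two kinds of change indicates sensitivity.
# All names and the toy model below are illustrative assumptions.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def psc(model, saliency_fn, image, perturbations, dist):
    """Correlate |Δprediction| with the distance between saliency maps."""
    base_pred, base_sal = model(image), saliency_fn(image)
    d_pred, d_sal = [], []
    for perturb in perturbations:
        x = perturb(image)
        d_pred.append(abs(model(x) - base_pred))
        d_sal.append(dist(saliency_fn(x), base_sal))
    return pearson(d_pred, d_sal)

# Toy model: sum of squared pixels, with its exact gradient as the saliency.
image = [0.5, 1.0, 1.5]
model = lambda x: sum(v * v for v in x)
saliency = lambda x: [2 * v for v in x]
dist = lambda a, b: sum(abs(u - v) for u, v in zip(a, b))
perturbs = [lambda x, k=k: [v * k for v in x] for k in (1.1, 1.2, 1.3)]
print(round(psc(model, saliency, image, perturbs, dist), 2))  # 1.0: a faithful map tracks the prediction
```

For the exact gradient the correlation is essentially 1; the study's finding is that popular approximate saliency methods score far lower on real radiographs.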
7. Mortality Prediction of Patients with Subarachnoid Hemorrhage Using a Deep Learning Model Based on an Initial Brain CT Scan. Brain Sci 2023; 14:10. [PMID: 38248225] [PMCID: PMC10812955] [DOI: 10.3390/brainsci14010010]
Abstract
BACKGROUND Subarachnoid hemorrhage (SAH) entails high morbidity and mortality rates. Convolutional neural networks (CNN) are capable of generating highly accurate predictions from imaging data. Our objective was to predict mortality in SAH patients by processing initial CT scans using a CNN-based algorithm. METHODS We conducted a retrospective multicentric study of a consecutive cohort of patients with SAH. Demographic, clinical and radiological variables were analyzed. Preprocessed baseline CT scan images were used as the input for training using the AUCMEDI framework. Our model's architecture leveraged a DenseNet121 structure, employing transfer learning principles. The output variable was mortality in the first three months. RESULTS Images from 219 patients were processed; 175 for training and validation and 44 for the model's evaluation. Of the patients, 52% (115/219) were female and the median age was 58 (SD = 13.06) years. In total, 18.5% (39/219) had idiopathic SAH. The mortality rate was 28.5% (63/219). The model showed good accuracy at predicting mortality in SAH patients when exclusively using the images of the initial CT scan (accuracy = 74%, F1 = 75% and AUC = 82%). CONCLUSION Modern image processing techniques based on AI and CNN make it possible to predict mortality in SAH patients with high accuracy using CT scan images as the only input. These models might be optimized by including more data and patients, resulting in better training, development and performance on tasks beyond the reach of conventional clinical expertise.
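The three figures reported here (accuracy, F1, AUC) can all be computed from a model's probabilities on the held-out set. A minimal sketch, with an invented threshold and data rather than anything from the study:

```python
# Hedged sketch of the evaluation metrics reported in the study
# (accuracy, F1, AUC); the 0.5 threshold and the labels/probabilities
# below are illustrative assumptions, not the study's data.

def accuracy_f1(y_true, y_prob, threshold=0.5):
    """Threshold probabilities, then compute accuracy and F1."""
    y_pred = [1 if p >= threshold else 0 for p in y_prob]
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    return acc, f1

def auc(y_true, y_prob):
    """AUC via the Mann-Whitney U statistic (ties counted as half)."""
    pos = [p for t, p in zip(y_true, y_prob) if t == 1]
    neg = [p for t, p in zip(y_true, y_prob) if t == 0]
    wins = sum(1.0 if pp > pn else 0.5 if pp == pn else 0.0
               for pp in pos for pn in neg)
    return wins / (len(pos) * len(neg))

y_true = [1, 1, 1, 0, 0, 0, 0, 1]                  # 1 = died within 3 months
y_prob = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.1, 0.7]  # model probabilities
print(accuracy_f1(y_true, y_prob))  # (0.75, 0.75)
print(auc(y_true, y_prob))          # 0.9375
```

Note that AUC is threshold-free while accuracy and F1 depend on the chosen cutoff, which is why the three numbers can diverge, as they do in the study.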
8. Optical coherence tomography angiography for the characterisation of retinal microvasculature alterations in pregnant patients with anaemia: a nested case‒control study. Br J Ophthalmol 2023; 108:117-123. [PMID: 36428006] [PMCID: PMC10803992] [DOI: 10.1136/bjo-2022-321781]
Abstract
AIMS To characterise retinal microvascular alterations in the eyes of pregnant patients with anaemia (PA) and to compare the alterations with those in healthy controls (HC) using optical coherence tomography angiography (OCTA). METHODS This nested case‒control study included singleton PA and HC from the Eye Health in Pregnancy Study. Fovea avascular zone (FAZ) metrics, perfusion density (PD) in the superficial capillary plexus, deep capillary plexus and flow deficit (FD) density in the choriocapillaris (CC) were quantified using FIJI software. Linear regressions were conducted to evaluate the differences in OCTA metrics between PA and HC. Subgroup analyses were performed based on comparisons between PA diagnosed in the early or late trimester and HC. RESULTS In total, 99 eyes of 99 PA and 184 eyes of 184 HC were analysed. PA had a significantly reduced FAZ perimeter (β coefficient=-0.310, p<0.001), area (β coefficient=-0.121, p=0.001) and increased circularity (β coefficient=0.037, p<0.001) compared with HC. Furthermore, higher PD in the central (β coefficient=0.327, p=0.001) and outer (β coefficient=0.349, p=0.007) regions were observed in PA. PA diagnosed in the first trimester had more extensive central FD (β coefficient=4.199, p=0.003) in the CC, indicating impaired perfusion in the CC. CONCLUSION It was found that anaemia during pregnancy was associated with macular microvascular abnormalities, which differed in PA as pregnancy progressed. The results suggest that quantitative OCTA metrics may be useful for risk evaluation before clinical diagnosis. TRIAL REGISTRATION NUMBERS 2021KYPJ098 and ChiCTR2100049850.
9. Prediction of cancer recurrence based on compact graphs of whole slide images. Comput Biol Med 2023; 167:107663. [PMID: 37931526] [DOI: 10.1016/j.compbiomed.2023.107663]
Abstract
Cancer recurrence is one of the primary causes of patient mortality following treatment, indicating increased aggressiveness of cancer cells and difficulties in achieving a cure. A critical step to improve patients' survival is accurately predicting recurrence status and giving appropriate treatment. Whole Slide Images (WSIs) are a common type of image data in the field of digital pathology, containing high-resolution tissue information. Furthermore, WSIs of primary tumors contain microenvironmental information directly associated with the growth of tumor cells. To effectively utilize this microenvironmental information, we first represented the microenvironmental features of histopathological images as compact graphs, and then developed an enhanced lightweight graph neural network, the Adaptive Graph Clustering Network (AGCNet), for predicting cancer recurrence. Experiments were conducted on three cancer datasets from The Cancer Genome Atlas (TCGA), and AGCNet achieved an accuracy of 81.81% in BLCA, 69.66% in PAAD, and 81.96% in STAD. These results indicate that AGCNet is an effective model for predicting cancer recurrence and is a promising candidate for clinical application.
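A "compact graph" over a whole slide image is typically built by summarizing patches into a small set of nodes and connecting nearby nodes. As a rough, fixed illustration of that construction step only (AGCNet's actual graph building is adaptive and learned, and the centroids below are invented):

```python
import math

# Hedged sketch: connect hypothetical patch-cluster centroids by
# k-nearest neighbours to form a compact graph over a WSI. Names and
# coordinates are illustrative assumptions, not from the paper.

def knn_edges(points, k):
    """Undirected k-NN edge list over points (edges as index pairs)."""
    edges = set()
    for i, p in enumerate(points):
        dists = sorted((math.dist(p, q), j) for j, q in enumerate(points) if j != i)
        for _, j in dists[:k]:                       # k closest neighbours of i
            edges.add((min(i, j), max(i, j)))        # store undirected edge once
    return sorted(edges)

# Four hypothetical cluster centroids in a 2D feature space; the outlier
# at (5, 5) stays weakly connected, as expected:
centroids = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
print(knn_edges(centroids, k=1))  # [(0, 1), (0, 2), (1, 3)]
```

The resulting edge list (plus per-node features) is what a graph neural network would then consume.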
10. Stimulus-guided adaptive transformer network for retinal blood vessel segmentation in fundus images. Med Image Anal 2023; 89:102929. [PMID: 37598606] [DOI: 10.1016/j.media.2023.102929]
Abstract
Automated retinal blood vessel segmentation in fundus images provides important evidence to ophthalmologists in coping with prevalent ocular diseases in an efficient and non-invasive way. However, segmenting blood vessels in fundus images is a challenging task, due to the high variety in scale and appearance of blood vessels and the high similarity in visual features between lesions and the retinal vasculature. Inspired by the way the visual cortex adaptively responds to the type of stimulus, we propose a Stimulus-Guided Adaptive Transformer Network (SGAT-Net) for accurate retinal blood vessel segmentation. It entails a Stimulus-Guided Adaptive Module (SGA-Module) that can extract local-global compound features based on an inductive bias and a self-attention mechanism. Alongside a lightweight residual encoder (ResEncoder) structure capturing the relevant details of appearance, a Stimulus-Guided Adaptive Pooling Transformer (SGAP-Former) is introduced to reweight the maximum and average pooling to enrich the contextual embedding representation while suppressing redundant information. Moreover, a Stimulus-Guided Adaptive Feature Fusion (SGAFF) module is designed to adaptively emphasize local details and global context and fuse them in the latent space to adjust the receptive field (RF) based on the task. The evaluation is implemented on the largest fundus image dataset (FIVES) and three popular retinal image datasets (DRIVE, STARE, CHASEDB1). Experimental results show that the proposed method achieves competitive performance compared with other existing methods, with a clear advantage in avoiding errors that commonly occur in areas with highly similar visual features. The source code is publicly available at https://github.com/Gins-07/SGAT.
11. Automatic intracranial abnormality detection and localization in head CT scans by learning from free-text reports. Cell Rep Med 2023; 4:101164. [PMID: 37652014] [PMCID: PMC10518589] [DOI: 10.1016/j.xcrm.2023.101164]
Abstract
Deep learning has yielded promising results for medical image diagnosis but relies heavily on manual image annotations, which are expensive to acquire. We present Cross-DL, a cross-modality learning framework for intracranial abnormality detection and localization in head computed tomography (CT) scans by learning from free-text imaging reports. Cross-DL has a discretizer that automatically extracts discrete labels of abnormality types and locations from reports, which are utilized to train an image analyzer by a dynamic multi-instance learning approach. Benefiting from the low annotation cost and a consequent large-scale training set of 28,472 CT scans, Cross-DL achieves accurate performance, with an average area under the receiver operating characteristic curve (AUROC) of 0.956 (95% confidence interval: 0.952-0.959) in detecting 4 abnormality types in 17 regions while accurately localizing abnormalities at the voxel level. An intracranial hemorrhage classification experiment on the external dataset CQ500 achieves an AUROC of 0.928 (0.905-0.951). The model can also help review prioritization.
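The "discretizer" idea, turning a free-text report into discrete (abnormality, location) labels, can be caricatured with keyword rules. Cross-DL's real extractor is far more sophisticated (and handles negation); the vocabulary below is invented for illustration:

```python
import re

# Hedged sketch of rule-based label extraction from a free-text report.
# The abnormality/location vocabularies are tiny invented stand-ins, and
# negation ("no fracture") is deliberately not handled here.

ABNORMALITIES = {"hemorrhage": r"h(a?)emorrhage|bleed",
                 "fracture": r"fracture"}
LOCATIONS = {"frontal": r"frontal", "parietal": r"parietal",
             "temporal": r"temporal", "occipital": r"occipital"}

def extract_labels(report):
    """Return the set of (abnormality, location) pairs found per sentence."""
    labels = set()
    for sentence in re.split(r"[.;]", report.lower()):
        found_abn = [a for a, pat in ABNORMALITIES.items() if re.search(pat, sentence)]
        found_loc = [l for l, pat in LOCATIONS.items() if re.search(pat, sentence)]
        labels.update((a, l) for a in found_abn for l in found_loc)
    return labels

report = "Acute hemorrhage in the left frontal lobe. No skull fracture."
print(sorted(extract_labels(report)))  # [('hemorrhage', 'frontal')]
```

Labels extracted this way are noisy, which is why the paper pairs them with a multi-instance learning scheme on the image side rather than treating them as ground truth.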
12. A new, feasible, and convenient method based on semantic segmentation and deep learning for hemoglobin monitoring. Front Med (Lausanne) 2023; 10:1151996. [PMID: 37601798] [PMCID: PMC10435289] [DOI: 10.3389/fmed.2023.1151996]
Abstract
Objective Non-invasive methods for hemoglobin (Hb) monitoring can provide additional and relatively precise information between invasive measurements of Hb to help doctors' decision-making. We aimed to develop a new method for Hb monitoring based on Mask R-CNN and MobileNetV3 with eye images as input. Methods Surgical patients from our center were enrolled. After image acquisition and pre-processing, the eye images, the manually selected palpebral conjunctiva, and features extracted from the two kinds of images were used as inputs. A combination of feature engineering and regression, MobileNetV3 alone, and a combination of Mask R-CNN and MobileNetV3 were applied for model development. The models' performance was evaluated using metrics such as R2, explained variance score (EVS), and mean absolute error (MAE). Results A total of 1,065 original images were analyzed. The model based on the combination of Mask R-CNN and MobileNetV3 using the eye images achieved an R2, EVS, and MAE of 0.503 (95% CI, 0.499-0.507), 0.518 (95% CI, 0.515-0.522), and 1.6 g/dL (95% CI, 1.6-1.6 g/dL), respectively, which was similar to the performance of MobileNetV3 using the manually selected palpebral conjunctiva images (R2: 0.509, EVS: 0.516, MAE: 1.6 g/dL). Conclusion We developed a new, automatic method for Hb monitoring that can support medical staff's decision-making with high efficiency, especially in scenarios such as disaster rescue and casualty transport.
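The three regression metrics reported (R2, EVS, MAE) are simple to compute once predictions exist; EVS differs from R2 only in that it ignores any systematic bias in the residuals. A minimal sketch with invented hemoglobin values in g/dL:

```python
# Hedged sketch of the regression metrics reported in the study; the
# true/predicted Hb values below are invented, not the study's data.

def regression_metrics(y_true, y_pred):
    """Return (R2, explained variance score, mean absolute error)."""
    n = len(y_true)
    mean_t = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1 - ss_res / ss_tot
    resid = [t - p for t, p in zip(y_true, y_pred)]
    mean_r = sum(resid) / n
    var_resid = sum((r - mean_r) ** 2 for r in resid) / n
    evs = 1 - var_resid / (ss_tot / n)   # ignores systematic bias, unlike R2
    mae = sum(abs(r) for r in resid) / n
    return r2, evs, mae

hb_true = [12.0, 9.5, 14.2, 11.0, 8.3]
hb_pred = [11.2, 10.1, 13.0, 11.5, 9.0]
r2, evs, mae = regression_metrics(hb_true, hb_pred)
print(round(r2, 3), round(evs, 3), round(mae, 2))  # 0.847 0.847 0.76
```

When EVS exceeds R2 noticeably, as it slightly does in the study (0.518 vs 0.503), the predictions carry a consistent offset.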
13. Diagnosing Systemic Disorders with AI Algorithms Based on Ocular Images. Healthcare (Basel) 2023; 11:1739. [PMID: 37372857] [DOI: 10.3390/healthcare11121739]
Abstract
The advent of artificial intelligence (AI), especially the state-of-the-art deep learning frameworks, has begun a silent revolution in all medical subfields, including ophthalmology. Due to their specific microvascular and neural structures, the eyes are anatomically associated with the rest of the body. Hence, ocular image-based AI technology may be a useful alternative or additional screening strategy for systemic diseases, especially where resources are scarce. This review summarizes the current applications of AI related to the prediction of systemic diseases from multimodal ocular images, including cardiovascular diseases, dementia, chronic kidney diseases, and anemia. Finally, we also discuss the current predicaments and future directions of these applications.
14. Ocular images-based artificial intelligence on systemic diseases. Biomed Eng Online 2023; 22:49. [PMID: 37208715] [DOI: 10.1186/s12938-023-01110-1]
Abstract
PURPOSE To provide a summary of the research advances on ocular images-based artificial intelligence in systemic diseases. METHODS Narrative literature review. RESULTS Ocular images-based artificial intelligence has been used in a variety of systemic diseases, including endocrine, cardiovascular, neurological, renal, autoimmune, and hematological diseases, among many others. However, the studies are still at an early stage. The majority of studies have used AI only for disease diagnosis, and the specific mechanisms linking systemic diseases to ocular images are still unclear. In addition, there are many limitations to the research, such as the number of images, the interpretability of artificial intelligence, rare diseases, and ethical and legal issues. CONCLUSION While ocular images-based artificial intelligence is widely used, the relationship between the eye and the whole body should be more clearly elucidated.
15. Retinal image‐based artificial intelligence in detecting and predicting kidney diseases: Current advances and future perspectives. VIEW 2023. [DOI: 10.1002/viw.20220070]
16. Ocular disease examination of fundus images by hybriding SFCNN and rule mining algorithms. The Imaging Science Journal 2023. [DOI: 10.1080/13682199.2023.2183456]
17. Deep Learning Algorithms for Screening and Diagnosis of Systemic Diseases Based on Ophthalmic Manifestations: A Systematic Review. Diagnostics (Basel) 2023; 13:900. [PMID: 36900043] [PMCID: PMC10001234] [DOI: 10.3390/diagnostics13050900]
Abstract
Deep learning (DL) is the new high-profile technology in medical artificial intelligence (AI) for building screening and diagnostic algorithms for various diseases. The eye provides a window for observing neurovascular pathophysiological changes. Previous studies have proposed that ocular manifestations indicate systemic conditions, revealing a new route in disease screening and management. Multiple DL models have been developed for identifying systemic diseases based on ocular data. However, the methods and results varied immensely across studies. This systematic review aims to summarize the existing studies and provide an overview of the present and future aspects of DL-based algorithms for screening systemic diseases based on ophthalmic examinations. We performed a thorough search in PubMed, Embase, and Web of Science for English-language articles published until August 2022. Among the 2873 articles collected, 62 were included for analysis and quality assessment. The selected studies mainly utilized eye appearance, retinal data, and eye movements as model input and covered a wide range of systemic diseases such as cardiovascular diseases, neurodegenerative diseases, and systemic health features. Despite the decent performance reported, most models lack disease specificity and public generalizability for real-world application. This review summarizes the pros and cons of these approaches and discusses the prospect of implementing AI based on ocular data in real-world clinical scenarios.
18. DeepFundus: A flow-cytometry-like image quality classifier for boosting the whole life cycle of medical artificial intelligence. Cell Rep Med 2023; 4:100912. [PMID: 36669488] [PMCID: PMC9975093] [DOI: 10.1016/j.xcrm.2022.100912]
Abstract
Medical artificial intelligence (AI) has been moving from the research phase to clinical implementation. However, most AI-based models are mainly built using high-quality images preprocessed in the laboratory, which is not representative of real-world settings. This dataset bias has proved to be a major driver of AI system dysfunction. Inspired by the design of flow cytometry, DeepFundus, a deep-learning-based fundus image classifier, is developed to provide automated and multidimensional image sorting to address this data quality gap. DeepFundus achieves areas under the receiver operating characteristic curve (AUCs) over 0.9 in image classification concerning overall quality, clinical quality factors, and structural quality analysis on both the internal test and national validation datasets. Additionally, DeepFundus can be integrated into both the model development and the clinical application of AI diagnostics to significantly enhance model performance for detecting multiple retinopathies. DeepFundus can be used to construct a data-driven paradigm for improving the entire life cycle of medical AI practice.
19. The dominant logic of Big Tech in healthcare and pharma. Drug Discov Today 2023; 28:103457. [PMID: 36427777] [DOI: 10.1016/j.drudis.2022.103457]
Abstract
Digital health and digital pharma are considered supportive tools for patients and healthcare providers (HCPs), making the market highly attractive for industry players. Not surprisingly, Tech Giants have started to move into this area. We utilized established management models and publicly available information sources, such as annual company reports, and performed a thorough analysis to uncover the underlying business models of Alphabet, Amazon, Apple, IBM, and Microsoft in order to better understand their intentions and course of entering the healthcare and pharma industries. Our results indicate that these Big Tech companies, or Tech Giants, do address the needs of patients and physicians, and have built clear value propositions, value chains, and revenue models to sustainably revolutionize the healthcare and pharma industries.
|
20
|
Transfer learning as an AI-based solution to address limited datasets in space medicine. LIFE SCIENCES IN SPACE RESEARCH 2023; 36:36-38. [PMID: 36682827 DOI: 10.1016/j.lssr.2022.12.002] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/08/2022] [Revised: 12/03/2022] [Accepted: 12/30/2022] [Indexed: 06/17/2023]
Abstract
Artificial intelligence (AI) has a promising role in future long-duration spaceflight missions. Traditional AI algorithms rely on training and testing data from the same domain. However, astronaut medical data is naturally limited to a small sample size and often difficult to collect, leading to extremely limited datasets. This significantly limits the ability of traditional machine learning methodologies. Transfer learning is a potential solution to this dataset size limitation and can help improve the training time and performance of neural networks. We discuss the unique challenges of space medicine in producing datasets, and transfer learning as an emerging technique to address these issues.
|
21
|
An Overview of Deep-Learning-Based Methods for Cardiovascular Risk Assessment with Retinal Images. Diagnostics (Basel) 2022; 13:diagnostics13010068. [PMID: 36611360 PMCID: PMC9818382 DOI: 10.3390/diagnostics13010068] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Revised: 12/19/2022] [Accepted: 12/21/2022] [Indexed: 12/28/2022] Open
Abstract
Cardiovascular diseases (CVDs) are one of the most prevalent causes of premature death. Early detection is crucial to prevent and address CVDs in a timely manner. Recent advances in oculomics show that retina fundus imaging (RFI) can carry relevant information for the early diagnosis of several systemic diseases. There is a large corpus of RFI systematically acquired for diagnosing eye-related diseases that could be used for CVD prevention. Nevertheless, public health systems cannot afford to dedicate expert physicians solely to this data, posing the need for automated diagnosis tools that can raise alarms for patients at risk. Artificial intelligence (AI), and particularly deep learning (DL) models, have become a strong alternative for providing computerized pre-diagnosis for patient risk retrieval. This paper provides a novel review of the major achievements of recent state-of-the-art DL approaches to automated CVD diagnosis. This overview gathers the commonly used datasets, pre-processing techniques, evaluation metrics and deep learning approaches used in 30 different studies. Based on the reviewed articles, this work proposes a classification taxonomy depending on the prediction target and summarizes future research challenges that must be tackled to progress in this line.
|
22
|
Application of Deep Learning to Retinal-Image-Based Oculomics for Evaluation of Systemic Health: A Review. J Clin Med 2022; 12:jcm12010152. [PMID: 36614953 PMCID: PMC9821402 DOI: 10.3390/jcm12010152] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2022] [Revised: 12/17/2022] [Accepted: 12/22/2022] [Indexed: 12/28/2022] Open
Abstract
The retina is a window to the human body. Oculomics is the study of the correlations between ophthalmic biomarkers and systemic health or disease states. Deep learning (DL) is currently the cutting-edge machine learning technique for medical image analysis, and in recent years, DL techniques have been applied to analyze retinal images in oculomics studies. In this review, we summarize oculomics studies that used DL models to analyze retinal images: most of the published studies to date involved color fundus photographs, while others focused on optical coherence tomography images. These studies showed that some systemic variables, such as age, sex and cardiovascular disease events, could be consistently and robustly predicted, while other variables, such as thyroid function and blood cell count, could not be. DL-based oculomics has demonstrated fascinating, "super-human" predictive capabilities in certain contexts, but it remains to be seen how these models will be incorporated into clinical care and whether management decisions influenced by these models will lead to improved clinical outcomes.
|
23
|
Development and validation of a deep learning algorithm based on fundus photographs for estimating the CAIDE dementia risk score. Age Ageing 2022; 51:6936402. [PMID: 36580391 DOI: 10.1093/ageing/afac282] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2022] [Revised: 09/08/2022] [Indexed: 12/30/2022] Open
Abstract
BACKGROUND the Cardiovascular Risk Factors, Aging, and Incidence of Dementia (CAIDE) dementia risk score is a recognised tool for dementia risk stratification. However, its application is limited due to the requirements for multidimensional information and a fasting blood draw. Consequently, an effective and non-invasive tool for screening individuals with high dementia risk in large population-based settings is urgently needed. METHODS a deep learning algorithm based on fundus photographs for estimating the CAIDE dementia risk score was developed and internally validated on a medical check-up dataset that included 271,864 participants in 19 province-level administrative regions of China, and externally validated on an independent dataset that included 20,690 check-up participants in Beijing. The performance for identifying individuals with high dementia risk (CAIDE dementia risk score ≥ 10 points) was evaluated by area under the receiver operating characteristic curve (AUC) with 95% confidence interval (CI). RESULTS the algorithm achieved an AUC of 0.944 (95% CI: 0.939-0.950) in the internal validation group and 0.926 (95% CI: 0.913-0.939) in the external group. Besides, the estimated CAIDE dementia risk score derived from the algorithm was significantly associated with both comprehensive cognitive function and specific cognitive domains. CONCLUSIONS this algorithm trained on fundus photographs could well identify individuals with high dementia risk in a population setting. Therefore, it has the potential to be utilised as a non-invasive and more expedient method for dementia risk stratification. It might also be adopted in dementia clinical trials, incorporated as inclusion criteria to efficiently select eligible participants.
|
24
|
Recent trends and advances in fundus image analysis: A review. Comput Biol Med 2022; 151:106277. [PMID: 36370579 DOI: 10.1016/j.compbiomed.2022.106277] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2022] [Revised: 10/19/2022] [Accepted: 10/30/2022] [Indexed: 11/05/2022]
Abstract
Automated retinal image analysis holds prime significance in the accurate diagnosis of various critical eye diseases that include diabetic retinopathy (DR), age-related macular degeneration (AMD), atherosclerosis, and glaucoma. Manual diagnosis of retinal diseases by ophthalmologists takes time, effort, and financial resources, and is prone to error, in comparison to computer-aided diagnosis systems. In this context, robust classification and segmentation of retinal images are primary operations that aid clinicians in the early screening of patients to ensure the prevention and/or treatment of these diseases. This paper conducts an extensive review of the state-of-the-art methods for the detection and segmentation of retinal image features. Existing notable techniques for the detection of retinal features are categorized into essential groups and compared in depth. Additionally, a summary of quantifiable performance measures for various important stages of retinal image analysis, such as image acquisition and preprocessing, is provided. Finally, the datasets widely used in the literature for analyzing retinal images are described and their significance is emphasized.
|
25
|
Detection algorithm for pigmented skin disease based on classifier-level and feature-level fusion. Front Public Health 2022; 10:1034772. [PMID: 36339204 PMCID: PMC9632750 DOI: 10.3389/fpubh.2022.1034772] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2022] [Accepted: 09/30/2022] [Indexed: 01/29/2023] Open
Abstract
Pigmented skin disease is caused by abnormal melanocyte and melanin production, which can be induced by genetic and environmental factors. It is also common among the various types of skin diseases. The timely and accurate diagnosis of pigmented skin disease is important for reducing mortality. Patients with pigmented dermatosis are generally diagnosed by a dermatologist through dermatoscopy. However, due to the current shortage of experts, this approach cannot meet the needs of the population, so a computer-aided system could help diagnose skin lesions in remote areas with too few experts. This paper proposes an algorithm based on a fusion network for the detection of pigmented skin disease. First, we preprocess the images in the acquired dataset, and then we perform image flipping and image style transfer to augment the images and alleviate the imbalance between the various categories in the dataset. Finally, two feature-level fusion optimization schemes based on deep features are compared with a classifier-level fusion scheme based on a classification layer to effectively determine the best fusion strategy for satisfying the pigmented skin disease detection requirements. Gradient-weighted Class Activation Mapping (Grad_CAM) and Grad_CAM++ are used for visualization purposes to verify the effectiveness of the proposed fusion network. The results show that compared with those of the traditional detection algorithm for pigmented skin disease, the accuracy and Area Under Curve (AUC) of the method in this paper reach 92.1% and 95.3%, respectively. The evaluation indices are greatly improved, proving the adaptability and accuracy of the proposed method. The proposed method can assist clinicians in screening and diagnosing pigmented skin disease and is suitable for real-world applications.
|
26
|
Benchmarking saliency methods for chest X-ray interpretation. NAT MACH INTELL 2022. [DOI: 10.1038/s42256-022-00536-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Saliency methods, which produce heat maps that highlight the areas of the medical image that influence model prediction, are often presented to clinicians as an aid in diagnostic decision-making. However, rigorous investigation of the accuracy and reliability of these strategies is necessary before they are integrated into the clinical setting. In this work, we quantitatively evaluate seven saliency methods, including Grad-CAM, across multiple neural network architectures using two evaluation metrics. We establish the first human benchmark for chest X-ray segmentation in a multilabel classification set-up, and examine under what clinical conditions saliency maps might be more prone to failure in localizing important pathologies compared with a human expert benchmark. We find that (1) while Grad-CAM generally localized pathologies better than the other evaluated saliency methods, all seven performed significantly worse compared with the human benchmark, (2) the gap in localization performance between Grad-CAM and the human benchmark was largest for pathologies that were smaller in size and had shapes that were more complex, and (3) model confidence was positively correlated with Grad-CAM localization performance. Our work demonstrates that several important limitations of saliency methods must be addressed before we can rely on them for deep learning explainability in medical imaging.
|
27
|
Artificial intelligence in ophthalmology: an insight into neurodegenerative disease. Curr Opin Ophthalmol 2022; 33:432-439. [PMID: 35819902 DOI: 10.1097/icu.0000000000000877] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
PURPOSE OF REVIEW The aging world population accounts for the increasing prevalence of neurodegenerative diseases such as Alzheimer's and Parkinson's, which carry a significant health and economic burden. There is therefore a need for sensitive and specific noninvasive biomarkers for early diagnosis and monitoring. Advances in retinal and optic nerve multimodal imaging as well as the development of artificial intelligence deep learning systems (AI-DLS) have heralded a number of promising advances of which ophthalmologists are at the forefront. RECENT FINDINGS The association among retinal vascular, nerve fiber layer, and macular findings in neurodegenerative disease is well established. In order to optimize the use of these ophthalmic parameters as biomarkers, validated AI-DLS are required to ensure clinical efficacy and reliability. Varied image acquisition methods and protocols as well as variability in neurodegenerative disease diagnosis compromise the robustness of the ground truths that are paramount to developing high-quality training datasets. SUMMARY In order to produce effective AI-DLS for the diagnosis and monitoring of neurodegenerative disease, multicenter international collaboration is required to prospectively produce large inclusive datasets, acquired through standardized methods and protocols. With a uniform approach, the efficacy of resultant clinical applications will be maximized.
|
28
|
AOSLO-net: A Deep Learning-Based Method for Automatic Segmentation of Retinal Microaneurysms From Adaptive Optics Scanning Laser Ophthalmoscopy Images. Transl Vis Sci Technol 2022; 11:7. [PMID: 35938881 PMCID: PMC9366726 DOI: 10.1167/tvst.11.8.7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2022] [Accepted: 07/02/2022] [Indexed: 11/24/2022] Open
Abstract
Purpose Accurate segmentation of microaneurysms (MAs) from adaptive optics scanning laser ophthalmoscopy (AOSLO) images is crucial for identifying MA morphologies and assessing the hemodynamics inside the MAs. Herein, we introduce AOSLO-net to perform automatic MA segmentation from AOSLO images of diabetic retinas. Methods AOSLO-net is composed of a deep neural network based on UNet with a pretrained EfficientNet as the encoder. We have designed customized preprocessing and postprocessing policies for AOSLO images, including generation of multichannel images, de-noising, contrast enhancement, and ensemble and union of model predictions, to optimize the MA segmentation. AOSLO-net is trained and tested using 87 MAs imaged from 28 eyes of 20 subjects with varying severity of diabetic retinopathy (DR), which is the largest available AOSLO dataset for MA detection. To avoid overfitting in the model training process, we augment the training data by flipping, rotating, and scaling the original images to increase the diversity of data available for model training. Results The validity of the model is demonstrated by the good agreement between the predictions of AOSLO-net and the MA masks generated by ophthalmologists and skillful trainees on 87 patient-specific MA images. Our results show that AOSLO-net outperforms the state-of-the-art segmentation model (nnUNet) in both accuracy (e.g., intersection over union and Dice scores) and computational cost. Conclusions We demonstrate that AOSLO-net provides high-quality MA segmentation from AOSLO images that enables correct MA morphological classification. Translational Relevance As the first attempt to automatically segment retinal MAs from AOSLO images, AOSLO-net could facilitate the pathological study of DR and help ophthalmologists make disease prognoses.
|
29
|
Predicting Systemic Health Features from Retinal Fundus Images Using Transfer-Learning-Based Artificial Intelligence Models. Diagnostics (Basel) 2022; 12:diagnostics12071714. [PMID: 35885619 PMCID: PMC9322827 DOI: 10.3390/diagnostics12071714] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2022] [Revised: 06/23/2022] [Accepted: 06/24/2022] [Indexed: 12/02/2022] Open
Abstract
While color fundus photos are used in routine clinical practice to diagnose ophthalmic conditions, evidence suggests that ocular imaging contains valuable information regarding the systemic health features of patients. These features can be identified through computer vision techniques including deep learning (DL) artificial intelligence (AI) models. We aim to construct a DL model that can predict systemic features from fundus images and to determine the optimal method of model construction for this task. Data were collected from a cohort of patients undergoing diabetic retinopathy screening between March 2020 and March 2021. Two models were created for each of 12 systemic health features based on the DenseNet201 architecture: one utilizing transfer learning with images from ImageNet and another from 35,126 fundus images. Here, 1277 fundus images were used to train the AI models. Area under the receiver operating characteristics curve (AUROC) scores were used to compare the model performance. Models utilizing the ImageNet transfer learning data were superior to those using retinal images for transfer learning (mean AUROC 0.78 vs. 0.65, p-value < 0.001). Models using ImageNet pretraining were able to predict systemic features including ethnicity (AUROC 0.93), age > 70 (AUROC 0.90), gender (AUROC 0.85), ACE inhibitor (AUROC 0.82), and ARB medication use (AUROC 0.78). We conclude that fundus images contain valuable information about the systemic characteristics of a patient. To optimize DL model performance, we recommend that even domain specific models consider using transfer learning from more generalized image sets to improve accuracy.
|
30
|
Application of deep learning methods: From molecular modelling to patient classification. Exp Cell Res 2022; 418:113278. [PMID: 35810775 DOI: 10.1016/j.yexcr.2022.113278] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2022] [Revised: 06/16/2022] [Accepted: 07/05/2022] [Indexed: 11/28/2022]
Abstract
We are now well into the information-driven age, with complex, heterogeneous datasets in the biological sciences continuing to grow at a rapid pace. Moreover, efforts to distill such datasets to find new governing principles are underway. Leading the surge are new and exciting algorithmic developments in computer simulation and machine learning, most notably, for the latter, those centred on deep learning. However, practical applications of cell-centric computations within the biological sciences, even when carefully benchmarked against existing experimental datasets, remain challenging. Here we discuss the application of deep learning methodologies to support our understanding of cell functionality and as an aid to patient classification. Whilst comprehensive end-to-end deep learning approaches that utilise knowledge of the cell and its molecular components to aid human disease classification are yet to be implemented, important for opening the door to more effective molecular and cell-based therapies, we illustrate that many deep learning applications have been developed to tackle components of such an ambitious pipeline. We end our discussion on what the future may hold, especially how an integrated framework of computer simulations and deep learning, in conjunction with wet-bench experimentation, could help reveal the governing principles underlying cell functionalities within the tissue environments in which cells operate.
|
31
|
Deep Learning Model for Predicting the Pathological Complete Response to Neoadjuvant Chemoradiotherapy of Locally Advanced Rectal Cancer. Front Oncol 2022; 12:807264. [PMID: 35756653 PMCID: PMC9214314 DOI: 10.3389/fonc.2022.807264] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2021] [Accepted: 05/02/2022] [Indexed: 12/24/2022] Open
Abstract
Objective This study aimed to develop an artificial intelligence model for predicting the pathological complete response (pCR) to neoadjuvant chemoradiotherapy (nCRT) of locally advanced rectal cancer (LARC) using digital pathological images. Background nCRT followed by total mesorectal excision (TME) is a standard treatment strategy for patients with LARC. Predicting the pCR to nCRT of LARC remains difficult. Methods A total of 842 LARC patients treated with standard nCRT at three medical centers were retrospectively recruited and subgrouped into training, testing and external validation sets. Treatment response was classified as pCR and non-pCR based on the pathological diagnosis after surgery as the ground truth. The hematoxylin & eosin (H&E)-stained biopsy slides were manually annotated and used to develop a deep pathological complete response (DeepPCR) prediction model by deep learning. Results The proposed DeepPCR model achieved an AUC-ROC of 0.710 (95% CI: 0.595, 0.808) in the testing cohort. Similarly, in the external validation cohort, the DeepPCR model achieved an AUC-ROC of 0.723 (95% CI: 0.591, 0.844). The sensitivity and specificity of the DeepPCR model were 72.6% and 46.9% in the testing set and 72.5% and 62.7% in the external validation cohort, respectively. Multivariate logistic regression analysis showed that the DeepPCR model was an independent predictive factor of pCR (P=0.008 and P=0.004 for the testing set and external validation set, respectively). Conclusions The DeepPCR model showed high accuracy in predicting pCR and served as an independent predictive factor for pCR. The model can be used to assist in clinical treatment decision making before surgery.
|
32
|
Artificial Intelligence in Predicting Systemic Parameters and Diseases From Ophthalmic Imaging. Front Digit Health 2022; 4:889445. [PMID: 35706971 PMCID: PMC9190759 DOI: 10.3389/fdgth.2022.889445] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2022] [Accepted: 05/06/2022] [Indexed: 12/14/2022] Open
Abstract
Artificial Intelligence (AI) analytics has been used to predict, classify, and aid clinical management of multiple eye diseases. Its robust performances have prompted researchers to expand the use of AI into predicting systemic, non-ocular diseases and parameters based on ocular images. Herein, we discuss the reasons why the eye is well-suited for systemic applications, and review the applications of deep learning on ophthalmic images in the prediction of demographic parameters, body composition factors, and diseases of the cardiovascular, hematological, neurodegenerative, metabolic, renal, and hepatobiliary systems. Three main imaging modalities are included: retinal fundus photographs, optical coherence tomography scans and external ophthalmic images. We examine the range of systemic factors studied from ophthalmic imaging in the current literature and discuss areas of future research, while acknowledging the current limitations of AI systems based on ophthalmic images.
|
33
|
Deep-Learning-Based Hemoglobin Concentration Prediction and Anemia Screening Using Ultra-Wide Field Fundus Images. Front Cell Dev Biol 2022; 10:888268. [PMID: 35663399 PMCID: PMC9160874 DOI: 10.3389/fcell.2022.888268] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2022] [Accepted: 04/07/2022] [Indexed: 11/30/2022] Open
Abstract
Background: Anemia is the most common hematological disorder. The purpose of this study was to establish and validate a deep-learning model to predict Hgb concentrations and screen for anemia using ultra-wide-field (UWF) fundus images. Methods: The study was conducted at Peking Union Medical College Hospital. Optos color images taken between January 2017 and June 2021 were screened to build the dataset. ASModel_UWF, which uses UWF images, was developed. Mean absolute error (MAE) and area under the receiver operating characteristic curve (AUC) were used to evaluate its performance. Saliency maps were generated to provide a visual explanation of the model. Results: ASModel_UWF achieved an MAE of 0.83 g/dl (95%CI: 0.81–0.85 g/dl) on the prediction task and an AUC of 0.93 (95%CI: 0.92–0.95) on the screening task. Compared with other screening approaches, it achieved the best AUC and sensitivity when the test dataset size was larger than 1000. The model tended to focus on the area around the optic disc, retinal vessels, and some regions located in the peripheral area of the retina, which are undetected by non-UWF imaging. Conclusion: The deep-learning model ASModel_UWF can both predict Hgb concentration and screen for anemia in a non-invasive and accurate way with high efficiency.
|
34
|
A non-invasive approach to monitor anemia during long-duration spaceflight with retinal fundus images and deep learning. LIFE SCIENCES IN SPACE RESEARCH 2022; 33:69-71. [PMID: 35491031 DOI: 10.1016/j.lssr.2022.04.004] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/05/2022] [Accepted: 04/18/2022] [Indexed: 06/14/2023]
Abstract
During spaceflight, astronauts can experience significantly higher levels of hemolysis. With future space missions exposing astronauts to longer periods of microgravity, such as missions to Mars, there will be a need to better understand this phenomenon. We have proposed that retinal fundus photography and deep learning may be utilized to help further understand this microgravity-induced, anemic process for future spaceflight. By utilizing astronaut and terrestrial analog metadata, a foundation can be built to develop an algorithm that allows for non-invasive retinal imaging to quantify hemoglobin levels and detect anemia during spaceflight. This approach would allow for a non-invasive retinal photograph that can be done frequently during spaceflight as opposed to an invasive blood draw and subsequent tests.
|
35
|
Retinal photograph-based deep learning predicts biological age, and stratifies morbidity and mortality risk. Age Ageing 2022; 51:6561972. [PMID: 35363255 PMCID: PMC8973000 DOI: 10.1093/ageing/afac065] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2021] [Indexed: 01/02/2023] Open
Abstract
BACKGROUND ageing is an important risk factor for a variety of human pathologies. Biological age (BA) may better capture ageing-related physiological changes compared with chronological age (CA). OBJECTIVE we developed a deep learning (DL) algorithm to predict BA based on retinal photographs and evaluated the performance of our new ageing marker in the risk stratification of mortality and major morbidity in general populations. METHODS we first trained a DL algorithm using 129,236 retinal photographs from 40,480 participants in the Korean Health Screening study to predict the probability of age being ≥65 years ('RetiAGE') and then evaluated the ability of RetiAGE to stratify the risk of mortality and major morbidity among 56,301 participants in the UK Biobank. A Cox proportional hazards model was used to estimate the hazard ratios (HRs). RESULTS in the UK Biobank, over a 10-year follow-up, 2,236 (4.0%) died; of them, 636 (28.4%) were due to cardiovascular diseases (CVDs) and 1,276 (57.1%) due to cancers. Compared with the participants in the RetiAGE first quartile, those in the RetiAGE fourth quartile had a 67% higher risk of 10-year all-cause mortality (HR = 1.67 [1.42-1.95]), a 142% higher risk of CVD mortality (HR = 2.42 [1.69-3.48]) and a 60% higher risk of cancer mortality (HR = 1.60 [1.31-1.96]), independent of CA and established ageing phenotypic biomarkers. Likewise, compared with the first quartile group, the risk of CVD and cancer events in the fourth quartile group increased by 39% (HR = 1.39 [1.14-1.69]) and 18% (HR = 1.18 [1.10-1.26]), respectively. The best discrimination ability for RetiAGE alone was found for CVD mortality (c-index = 0.70, sensitivity = 0.76, specificity = 0.55). Furthermore, adding RetiAGE increased the discrimination ability of the model beyond CA and phenotypic biomarkers (increment in c-index between 1 and 2%). CONCLUSIONS the DL-derived RetiAGE provides a novel, alternative approach to measure ageing.
|
36
|
Abstract
PURPOSE Retinal signatures of systemic disease ('oculomics') are increasingly being revealed through a combination of high-resolution ophthalmic imaging and sophisticated modelling strategies. Progress is currently limited not mainly by technical issues, but by the lack of large labelled datasets, a sine qua non for deep learning. Such data are derived from prospective epidemiological studies, in which retinal imaging is typically unimodal, cross-sectional, of modest number and relates to cohorts, which are not enriched with subpopulations of interest, such as those with systemic disease. We thus linked longitudinal multimodal retinal imaging from routinely collected National Health Service (NHS) data with systemic disease data from hospital admissions using a privacy-by-design third-party linkage approach. PARTICIPANTS Between 1 January 2008 and 1 April 2018, 353 157 participants aged 40 years or older, who attended Moorfields Eye Hospital NHS Foundation Trust, a tertiary ophthalmic institution incorporating a principal central site, four district hubs and five satellite clinics in and around London, UK serving a catchment population of approximately six million people. FINDINGS TO DATE Among the 353 157 individuals, 186 651 had a total of 1 337 711 Hospital Episode Statistics admitted patient care episodes. Systemic diagnoses recorded at these episodes include 12 022 patients with myocardial infarction, 11 735 with all-cause stroke and 13 363 with all-cause dementia. A total of 6 261 931 retinal images of seven different modalities and across three manufacturers were acquired from 154 830 patients. The majority of retinal images were retinal photographs (n=1 874 175) followed by optical coherence tomography (n=1 567 358). FUTURE PLANS AlzEye combines the world's largest single institution retinal imaging database with nationally collected systemic data to create an exceptionally large-scale, enriched cohort that reflects the diversity of the population served.
First analyses will address cardiovascular diseases and dementia, with a view to identifying hidden retinal signatures that may lead to earlier detection and risk management of these life-threatening conditions.
|
37
|
Abstract
Hypertensive eye disease includes a spectrum of pathological changes, the best known being hypertensive retinopathy. Other commonly involved parts of the eye in hypertension include the choroid and optic nerve, sometimes referred to as hypertensive choroidopathy and hypertensive optic neuropathy. Collectively, hypertensive eye disease develops in response to acute and/or chronic elevation of blood pressure. Major advances in research over the past three decades have greatly enhanced our understanding of the epidemiology, systemic associations and clinical implications of hypertensive eye disease, particularly hypertensive retinopathy. Traditionally diagnosed via a clinical funduscopic examination, but increasingly documented on digital retinal fundus photographs, hypertensive retinopathy has long been considered a marker of systemic target organ damage (for example, kidney disease) elsewhere in the body. Epidemiological studies indicate that hypertensive retinopathy signs are commonly seen in the general adult population, are associated with subclinical measures of vascular disease and predict risk of incident clinical cardiovascular events. New technologies, including the development of non-invasive optical coherence tomography angiography, artificial intelligence and mobile ocular imaging instruments, have allowed further assessment and understanding of the ocular manifestations of hypertension and increase the potential that ocular imaging could be used for hypertension management and cardiovascular risk stratification.
|
38
|
Detection of Systemic Diseases From Ocular Images Using Artificial Intelligence: A Systematic Review. Asia Pac J Ophthalmol (Phila) 2022; 11:126-139. [PMID: 35533332 DOI: 10.1097/apo.0000000000000515] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022] Open
Abstract
PURPOSE Despite the huge investment in health care, there is still a lack of precise and easily accessible screening systems. With proven associations to many systemic diseases, the eye could potentially provide a credible perspective as a novel screening tool. This systematic review aims to summarize the current applications of ocular image-based artificial intelligence to the detection of systemic diseases and to suggest future trends for systemic disease screening. METHODS A systematic search was conducted on September 1, 2021, using three databases: PubMed, Google Scholar, and Web of Science. Date restrictions were not imposed, and search terms covering ocular images, systemic diseases, and artificial intelligence were used. RESULTS Thirty-three papers were included in this systematic review. A spectrum of target diseases was observed, including but not limited to cardio-cerebrovascular diseases, central nervous system diseases, renal dysfunction, and hepatological diseases. Additionally, one-third of the papers included risk factor predictions for the respective systemic diseases. CONCLUSIONS Ocular image-based artificial intelligence possesses potential diagnostic power to screen various systemic diseases and has also demonstrated the ability to detect Alzheimer disease and chronic kidney disease at early stages. Further research is needed to validate these models for real-world implementation.
|
39
|
Identifying diabetes from conjunctival images using a novel hierarchical multi-task network. Sci Rep 2022; 12:264. [PMID: 34997031 PMCID: PMC8742044 DOI: 10.1038/s41598-021-04006-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2021] [Accepted: 12/06/2021] [Indexed: 11/15/2022] Open
Abstract
Diabetes can cause conjunctival microvessel impairment. However, these pathological changes are not easily recognized, limiting their potential as independent diagnostic indicators. Therefore, we designed a deep learning model to explore the relationship between conjunctival features and diabetes, and to advance automated identification of diabetes from conjunctival images. Images were collected from patients with type 2 diabetes and healthy volunteers. A hierarchical multi-task network model (HMT-Net) was developed using conjunctival images, and the model was systematically evaluated and compared with other algorithms. The sensitivity, specificity, and accuracy of the HMT-Net model in identifying diabetes were 78.70%, 69.08%, and 75.15%, respectively. The performance of the HMT-Net model was significantly better than that of ophthalmologists. The model allows sensitive and rapid discrimination from conjunctival images and is potentially useful for identifying diabetes.
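The sensitivity, specificity and accuracy reported above are standard confusion-matrix quantities. A minimal sketch of how such metrics are computed from raw counts (the counts below are illustrative, not the study's data):

```python
# Sensitivity, specificity and accuracy for a binary diagnostic model,
# computed from confusion-matrix counts.
def diagnostic_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)                # true-positive rate
    specificity = tn / (tn + fp)                # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)  # overall fraction correct
    return sensitivity, specificity, accuracy

# Illustrative counts only:
sens, spec, acc = diagnostic_metrics(tp=85, fp=21, tn=47, fn=23)
```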
|
40
|
Differential diagnosis of hereditary anemias from a fraction of blood drop by digital holography and hierarchical machine learning. Biosens Bioelectron 2022; 201:113945. [PMID: 35032844 DOI: 10.1016/j.bios.2021.113945] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2021] [Revised: 12/17/2021] [Accepted: 12/28/2021] [Indexed: 01/25/2023]
Abstract
Anemia affects about 25% of the global population and can provoke severe conditions, ranging from weakness and dizziness to pregnancy complications, arrhythmias and heart failure. About 10% of patients are affected by rare anemias, of which 80% are hereditary. Early differential diagnosis of anemia enables prescribing patients a proper treatment and diet, which is effective in mitigating the associated symptoms. Nevertheless, the differential diagnosis of these conditions is often difficult due to shared and overlapping phenotypes. Indeed, the complete blood count and unaided peripheral blood smear observation cannot always provide a reliable differential diagnosis, so biomedical assays and genetic tests are needed. These procedures are not error-free, require skilled personnel, and severely impact the financial resources of national health systems. Here we show a differential screening system for hereditary anemias that relies on holographic imaging and artificial intelligence. Label-free holographic imaging is aided by a hierarchical machine learning decider that works even with a very limited dataset yet is accurate enough to discern between anemia classes with minimal morphological dissimilarities. Notably, only a few tens of cells from each patient are sufficient to obtain a correct diagnosis, with the advantage of significantly limiting the volume of blood drawn. This work paves the way to a wider use of home screening systems for point-of-care blood testing and telemedicine with lab-on-chip platforms.
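The study's hierarchical decider itself is not published here; as a toy illustration of the general idea (decide first between broad groups, then among fine-grained subclasses), the following is a nearest-centroid sketch in which the class names and two-dimensional "morphological features" are hypothetical:

```python
import math

def centroid(rows):
    """Component-wise mean of a list of feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

class HierarchicalDecider:
    """Two-stage nearest-centroid decider: stage 1 separates 'healthy'
    from anemic samples; stage 2 assigns anemic samples to a subclass."""
    def fit(self, X, labels):
        healthy = [x for x, y in zip(X, labels) if y == "healthy"]
        anemic = [x for x, y in zip(X, labels) if y != "healthy"]
        self.c_healthy = centroid(healthy)
        self.c_anemic = centroid(anemic)
        # One centroid per anemia subclass for the second stage.
        self.sub = {cls: centroid([x for x, y in zip(X, labels) if y == cls])
                    for cls in {y for y in labels if y != "healthy"}}
        return self

    def predict(self, x):
        if math.dist(x, self.c_healthy) <= math.dist(x, self.c_anemic):
            return "healthy"
        return min(self.sub, key=lambda cls: math.dist(x, self.sub[cls]))
```

A usage example with toy data: fitting on a handful of labelled vectors and classifying new ones follows the familiar fit/predict pattern.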
|
41
|
The year in cardiovascular medicine 2021: digital health and innovation. Eur Heart J 2022; 43:271-279. [PMID: 34974610 DOI: 10.1093/eurheartj/ehab874] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/13/2021] [Revised: 11/15/2021] [Accepted: 11/23/2021] [Indexed: 12/15/2022] Open
Abstract
This article presents some of the most important developments in the field of digital medicine that have appeared over the last 12 months and are related to cardiovascular medicine. The article consists of three main sections: (i) artificial intelligence-enabled cardiovascular diagnostic tools, techniques, and methodologies; (ii) big data and prognostic models for cardiovascular risk prediction; and (iii) wearable devices in cardiovascular risk assessment, cardiovascular disease prevention, diagnosis, and management. To conclude, the authors present a brief perspective on this new domain, highlighting existing gaps that are specifically related to artificial intelligence technologies, such as explainability, cost-effectiveness, and, of course, the importance of proper regulatory oversight for each clinical implementation.
|
42
|
Detection of signs of disease in external photographs of the eyes via deep learning. Nat Biomed Eng 2022; 6:1370-1383. [PMID: 35352000 PMCID: PMC8963675 DOI: 10.1038/s41551-022-00867-5] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2020] [Accepted: 02/15/2022] [Indexed: 01/14/2023]
Abstract
Retinal fundus photographs can be used to detect a range of retinal conditions. Here we show that deep-learning models trained instead on external photographs of the eyes can be used to detect diabetic retinopathy (DR), diabetic macular oedema and poor blood glucose control. We developed the models using eye photographs from 145,832 patients with diabetes from 301 DR screening sites and evaluated the models on four tasks and four validation datasets with a total of 48,644 patients from 198 additional screening sites. For all four tasks, the predictive performance of the deep-learning models was significantly higher than the performance of logistic regression models using self-reported demographic and medical history data, and the predictions generalized to patients with dilated pupils, to patients from a different DR screening programme and to a general eye care programme that included diabetics and non-diabetics. We also explored the use of the deep-learning models for the detection of elevated lipid levels. The utility of external eye photographs for the diagnosis and management of diseases should be further validated with images from different cameras and patient populations.
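The logistic-regression baselines the deep-learning models were compared against are standard. A self-contained sketch of such a baseline, fitted by stochastic gradient descent on toy data (not the study's data or features):

```python
import math

def train_logistic(X, y, lr=0.1, epochs=500):
    """Fit a plain logistic-regression model by per-sample gradient
    descent; X is a list of feature vectors, y a list of 0/1 labels."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))      # sigmoid
            err = p - yi                        # gradient of log-loss wrt z
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict_prob(w, b, xi):
    z = b + sum(wj * xj for wj, xj in zip(w, xi))
    return 1.0 / (1.0 + math.exp(-z))
```

On a linearly separable toy dataset this learns a decision boundary between the two classes; the study's actual baselines used self-reported demographic and medical history features.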
|
43
|
Artificial Intelligence and Deep Learning in Ophthalmology. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_200] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
44
|
Clinical Validation of Saliency Maps for Understanding Deep Neural Networks in Ophthalmology. Med Image Anal 2022; 77:102364. [DOI: 10.1016/j.media.2022.102364] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2021] [Revised: 11/02/2021] [Accepted: 01/10/2022] [Indexed: 01/17/2023]
|
45
|
Trustworthy AI: Closing the gap between development and integration of AI systems in ophthalmic practice. Prog Retin Eye Res 2021; 90:101034. [PMID: 34902546 DOI: 10.1016/j.preteyeres.2021.101034] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2021] [Revised: 12/03/2021] [Accepted: 12/06/2021] [Indexed: 01/14/2023]
Abstract
An increasing number of artificial intelligence (AI) systems are being proposed in ophthalmology, motivated by the variety and amount of clinical and imaging data, as well as their potential benefits at the different stages of patient care. Despite achieving close or even superior performance to that of experts, there is a critical gap between development and integration of AI systems in ophthalmic practice. This work focuses on the importance of trustworthy AI to close that gap. We identify the main aspects or challenges that need to be considered along the AI design pipeline so as to generate systems that meet the requirements to be deemed trustworthy, including those concerning accuracy, resiliency, reliability, safety, and accountability. We elaborate on mechanisms and considerations to address those aspects or challenges, and define the roles and responsibilities of the different stakeholders involved in AI for ophthalmic care, i.e., AI developers, reading centers, healthcare providers, healthcare institutions, ophthalmological societies and working groups or committees, patients, regulatory bodies, and payers. Generating trustworthy AI is not the responsibility of a sole stakeholder. There is a pressing need for a collaborative approach in which the different stakeholders are represented along the AI design pipeline, from the definition of the intended use to post-market surveillance after regulatory approval. This work contributes to establishing such multi-stakeholder interaction and the main action points to be taken so that the potential benefits of AI reach real-world ophthalmic settings.
|
46
|
Prediction of Cardiovascular Parameters With Supervised Machine Learning From Singapore "I" Vessel Assessment and OCT-Angiography: A Pilot Study. Transl Vis Sci Technol 2021; 10:20. [PMID: 34767626 PMCID: PMC8590163 DOI: 10.1167/tvst.10.13.20] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022] Open
Abstract
Purpose Assessment of cardiovascular risk is the keystone of prevention in cardiovascular disease. The objective of this pilot study was to estimate cardiovascular risk scores (the American Heart Association [AHA] risk score, the Syntax score, and the SCORE risk score) with machine learning (ML) models based on quantitative retinal vascular parameters. Methods We propose a supervised ML algorithm to predict cardiovascular parameters in patients with cardiovascular diseases treated in Dijon University Hospital, using quantitative retinal vascular characteristics measured with fundus photography and optical coherence tomography angiography (OCT-A) scans (alone and combined). To describe the retinal microvascular network, we used the Singapore "I" Vessel Assessment (SIVA), which extracts vessel parameters from fundus photography, and quantitative OCT-A metrics of the superficial retinal capillary plexus. Results The retinal and cardiovascular data of 144 patients were included. The models achieved high prediction accuracy for the cardiovascular risk scores: using the Naïve Bayes algorithm and SIVA + OCT-A data, the AHA risk score was predicted with 81.25% accuracy, the SCORE risk score with 75.64% accuracy, and the Syntax score with 96.53% accuracy. Conclusions This preliminary study demonstrated that ML algorithms applied to quantitative retinal vascular parameters from SIVA software and OCT-A were able to predict cardiovascular scores at a robust rate. Quantitative retinal vascular biomarkers combined with an ML strategy might provide valuable data for implementing predictive models of cardiovascular parameters. Translational Relevance A small dataset of quantitative retinal vascular parameters from fundus photography and OCT-A can be used with ML to predict cardiovascular parameters.
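Naïve Bayes with Gaussian class-conditional densities is a common choice for small tabular datasets such as these retinal vascular parameters. A compact sketch of the algorithm (not the study's implementation; features and class labels below are hypothetical):

```python
import math

class GaussianNB:
    """Minimal Gaussian Naive Bayes: per-class feature means/variances
    plus class priors; prediction maximizes the log-posterior."""
    def fit(self, X, y):
        self.stats = {}
        for cls in set(y):
            rows = [x for x, label in zip(X, y) if label == cls]
            means = [sum(col) / len(rows) for col in zip(*rows)]
            # Floor the variance to avoid division by zero.
            vars_ = [max(sum((v - m) ** 2 for v in col) / len(rows), 1e-9)
                     for col, m in zip(zip(*rows), means)]
            self.stats[cls] = (len(rows) / len(X), means, vars_)
        return self

    def predict(self, x):
        def log_post(cls):
            prior, means, vars_ = self.stats[cls]
            ll = math.log(prior)
            for v, m, var in zip(x, means, vars_):
                ll += -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
            return ll
        return max(self.stats, key=log_post)
```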
|
47
|
Assessing the Trustworthiness of Saliency Maps for Localizing Abnormalities in Medical Imaging. Radiol Artif Intell 2021; 3:e200267. [PMID: 34870212 PMCID: PMC8637231 DOI: 10.1148/ryai.2021200267] [Citation(s) in RCA: 58] [Impact Index Per Article: 19.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2020] [Revised: 09/13/2021] [Accepted: 09/20/2021] [Indexed: 11/11/2022]
Abstract
PURPOSE To evaluate the trustworthiness of saliency maps for abnormality localization in medical imaging. MATERIALS AND METHODS Using two large publicly available radiology datasets (the Society for Imaging Informatics in Medicine-American College of Radiology Pneumothorax Segmentation dataset and the Radiological Society of North America Pneumonia Detection Challenge dataset), the performance of eight commonly used saliency map techniques was quantified with regard to (a) localization utility (segmentation and detection), (b) sensitivity to model weight randomization, (c) repeatability, and (d) reproducibility. Their performance was compared against baseline methods and localization network architectures, using area under the precision-recall curve (AUPRC) and structural similarity index measure (SSIM) as metrics. RESULTS All eight saliency map techniques failed at least one of the criteria and were inferior in performance to localization networks. For pneumothorax segmentation, the AUPRC ranged from 0.024 to 0.224, while a U-Net achieved a significantly superior AUPRC of 0.404 (P < .005). For pneumonia detection, the AUPRC ranged from 0.160 to 0.519, while a RetinaNet achieved a significantly superior AUPRC of 0.596 (P < .005). Five and two saliency methods (of eight) failed the model randomization test on the segmentation and detection datasets, respectively, suggesting that these methods are not sensitive to changes in model parameters. The repeatability and reproducibility of the majority of the saliency methods were worse than those of localization networks for both the segmentation and detection datasets.
CONCLUSION The use of saliency maps in the high-risk domain of medical imaging warrants additional scrutiny, and the authors recommend that detection or segmentation models be used if localization is the desired output of the network. Keywords: Technology Assessment, Technical Aspects, Feature Detection, Convolutional Neural Network (CNN). Supplemental material is available for this article. © RSNA, 2021.
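AUPRC, the study's main localization metric, can be computed as step-wise average precision: the sum over ranked predictions of (recall_n - recall_{n-1}) × precision_n. A small reference implementation of that formula (not the study's evaluation code):

```python
def average_precision(scores, labels):
    """Area under the precision-recall curve via step-wise average
    precision; `scores` are predicted confidences, `labels` are 0/1."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    total_pos = sum(labels)
    tp = fp = 0
    ap = 0.0
    prev_recall = 0.0
    for i in order:  # walk down the ranking, one threshold per item
        if labels[i]:
            tp += 1
        else:
            fp += 1
        precision = tp / (tp + fp)
        recall = tp / total_pos
        ap += (recall - prev_recall) * precision
        prev_recall = recall
    return ap
```

A perfect ranking (all positives scored above all negatives) yields an average precision of 1.0.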
|
48
|
Machine-learning algorithms predict breast cancer patient survival from UK Biobank whole-exome sequencing data. Biomark Med 2021; 15:1529-1539. [PMID: 34651513 DOI: 10.2217/bmm-2021-0280] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022] Open
Abstract
Aim: We tested whether a machine-learning algorithm could find biomarkers predicting overall survival in breast cancer patients using blood-based whole-exome sequencing data. Materials & methods: Whole-exome sequencing data derived from 1181 female breast cancer patients within the UK Biobank were collected. We selected feature genes (n = 50) related to total mutation burden using a long short-term memory model. Then, we developed an XGBoost survival model with the selected feature genes. Results: The XGBoost survival model performed acceptably, with a concordance index of 0.75 and a scaled Brier score of 0.146 for overall survival prediction. The high-mutation group exhibited inferior overall survival compared with the low-mutation group in patients ≥56 years (log-rank test, p = 0.042). Conclusion: We showed that machine-learning algorithms can predict overall survival in breast cancer patients from blood-based whole-exome sequencing data.
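The concordance index used to evaluate the survival model is Harrell's C-index: among comparable patient pairs, the fraction where the model assigns a higher risk to the patient who failed earlier, with ties counted as half. A minimal sketch of its computation (illustrative, not the study's evaluation pipeline):

```python
def concordance_index(times, events, risks):
    """Harrell's C-index. A pair (i, j) is comparable if subject i had an
    observed event (events[i] == 1) before time j; ties in risk score
    count as 0.5."""
    concordant = 0.0
    comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable
```

A model whose risk ranking exactly reverses the event-time ordering scores 1.0; random risks score around 0.5, which is why the study's 0.75 indicates acceptable discrimination.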
|
49
|
Machine Learning Applications in the Diagnosis of Benign and Malignant Hematological Diseases. Clin Hematol Int 2021; 3:13-20. [PMID: 34595462 PMCID: PMC8432325 DOI: 10.2991/chi.k.201130.001] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2020] [Accepted: 11/05/2020] [Indexed: 12/23/2022] Open
Abstract
The use of machine learning (ML) and deep learning (DL) methods in hematology is growing, spanning diagnostic, prognostic, and therapeutic applications. This growth is due to improved access to ML and DL tools and the expansion of medical data. The utilization of ML remains limited in clinical practice, with some disciplines, such as radiology and histopathology, further along in their adoption. In this review, we discuss the current uses of ML for diagnosis in the field of hematology, including image-recognition, laboratory, and genomics-based diagnosis. Additionally, we provide an introduction to the fields of ML and DL, highlighting current trends, limitations, and possible areas of improvement.
|
50
|
Abstract
PURPOSE OF REVIEW Systemic retinal biomarkers are biomarkers identified in the retina that relate to the evaluation and management of systemic disease. This review summarizes the background, categories and key findings from this body of research, as well as potential applications to clinical care. RECENT FINDINGS Potential systemic retinal biomarkers for cardiovascular disease, kidney disease and neurodegenerative disease were identified using regression analysis as well as more sophisticated image processing techniques. Deep learning techniques were used in a number of studies predicting diseases including anaemia and chronic kidney disease. A virtual coronary artery calcium score performed well against competing traditional models of event prediction. SUMMARY Systemic retinal biomarker research has progressed rapidly using regression studies with clearly identified biomarkers, such as retinal microvascular patterns, as well as deep learning models. Future systemic retinal biomarker research may be able to boost performance using larger datasets, the addition of metadata and higher-resolution image inputs.
|