1
Borrelli E, Oakley JD, Iaccarino G, Russakoff DB, Battista M, Grosso D, Borghesan F, Barresi C, Sacconi R, Bandello F, Querques G. Deep-learning based automated quantification of critical optical coherence tomography features in neovascular age-related macular degeneration. Eye (Lond) 2024; 38:537-544. PMID: 37670143; PMCID: PMC10858028; DOI: 10.1038/s41433-023-02720-8.
Abstract
PURPOSE To validate a deep learning algorithm for automated intraretinal fluid (IRF), subretinal fluid (SRF) and neovascular pigment epithelium detachment (nPED) segmentations in neovascular age-related macular degeneration (nAMD). METHODS In this IRB-approved study, optical coherence tomography (OCT) data from 50 patients (50 eyes) with exudative nAMD were retrospectively analysed. Two models, A1 and A2, were created based on gradings from two masked readers, R1 and R2. Area under the curve (AUC) values gauged detection performance, and quantification between readers and models was evaluated using Dice and correlation (R2) coefficients. RESULTS The deep learning-based algorithms had high accuracies for all fluid types between all models and readers: per B-scan IRF AUCs were 0.953, 0.932, 0.990, 0.942 for comparisons A1-R1, A1-R2, A2-R1 and A2-R2, respectively; SRF AUCs were 0.984, 0.974, 0.987, 0.979; and nPED AUCs were 0.963, 0.969, 0.961 and 0.966. Similarly, the R2 coefficients for IRF were 0.973, 0.974, 0.889 and 0.973; SRF were 0.928, 0.964, 0.965 and 0.998; and nPED were 0.908, 0.952, 0.839 and 0.905. The Dice coefficients for IRF averaged 0.702, 0.667, 0.649 and 0.631; for SRF were 0.699, 0.651, 0.692 and 0.701; and for nPED were 0.636, 0.703, 0.719 and 0.775. In an inter-observer comparison between manual readers R1 and R2, the R2 coefficient was 0.968 for IRF, 0.960 for SRF, and 0.906 for nPED, with Dice coefficients of 0.692, 0.660 and 0.784 for the same features. CONCLUSIONS Our deep learning-based method applied on nAMD can segment critical OCT features with performance akin to manual grading.
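The Dice coefficient used above to score segmentation overlap is a standard metric; a minimal sketch on toy binary masks (illustrative arrays, not study data):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity between binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(2.0 * np.logical_and(pred, truth).sum() / denom)

# Toy 4x4 masks standing in for a grader's and a model's fluid segmentation
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True  # 4 pixels
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True  # 6 pixels
print(dice_coefficient(a, b))  # 2*4 / (4+6) = 0.8
```

A Dice value near 0.7, as reported for fluid segmentations above, means roughly 70% of the combined pixel mass is shared between the two masks.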
Affiliation(s)
- Enrico Borrelli: Vita-Salute San Raffaele University, Milan, Italy; IRCCS San Raffaele Scientific Institute, Milan, Italy
- Giorgio Iaccarino: Vita-Salute San Raffaele University, Milan, Italy; IRCCS San Raffaele Scientific Institute, Milan, Italy
- Marco Battista: Vita-Salute San Raffaele University, Milan, Italy; IRCCS San Raffaele Scientific Institute, Milan, Italy
- Domenico Grosso: Vita-Salute San Raffaele University, Milan, Italy; IRCCS San Raffaele Scientific Institute, Milan, Italy
- Federico Borghesan: Vita-Salute San Raffaele University, Milan, Italy; IRCCS San Raffaele Scientific Institute, Milan, Italy
- Costanza Barresi: Vita-Salute San Raffaele University, Milan, Italy; IRCCS San Raffaele Scientific Institute, Milan, Italy
- Riccardo Sacconi: Vita-Salute San Raffaele University, Milan, Italy; IRCCS San Raffaele Scientific Institute, Milan, Italy
- Francesco Bandello: Vita-Salute San Raffaele University, Milan, Italy; IRCCS San Raffaele Scientific Institute, Milan, Italy
- Giuseppe Querques: Vita-Salute San Raffaele University, Milan, Italy; IRCCS San Raffaele Scientific Institute, Milan, Italy
2
Oakley JD, Verdooner S, Russakoff DB, Brucker AJ, Seaman J, Sahni J, Bianchi CD, Cozzi M, Rogers J, Staurenghi G. Quantitative assessment of automated optical coherence tomography image analysis using a home-based device for self-monitoring neovascular age-related macular degeneration. Retina 2023; 43:433-443. PMID: 36705991; PMCID: PMC9935585; DOI: 10.1097/iae.0000000000003677.
Abstract
PURPOSE To evaluate a prototype home optical coherence tomography device and automated analysis software for detection and quantification of retinal fluid relative to manual human grading in a cohort of patients with neovascular age-related macular degeneration. METHODS Patients undergoing anti-vascular endothelial growth factor therapy were enrolled in this prospective observational study. In 136 optical coherence tomography scans from 70 patients using the prototype home optical coherence tomography device, fluid segmentation was performed using automated analysis software and compared with manual gradings across all retinal fluid types using receiver-operating characteristic curves. The Dice similarity coefficient was used to assess the accuracy of segmentations, and correlation of fluid areas quantified end point agreement. RESULTS Fluid detection per B-scan had area under the receiver-operating characteristic curves of 0.95, 0.97, and 0.98 for intraretinal fluid, subretinal fluid, and subretinal pigment epithelium fluid, respectively. On a per volume basis, the values for intraretinal fluid, subretinal fluid, and subretinal pigment epithelium fluid were 0.997, 0.998, and 0.998, respectively. The average Dice similarity coefficient values across all B-scans were 0.64, 0.73, and 0.74, and the coefficients of determination were 0.81, 0.93, and 0.97 for intraretinal fluid, subretinal fluid, and subretinal pigment epithelium fluid, respectively. CONCLUSION Home optical coherence tomography device images assessed using the automated analysis software showed excellent agreement to manual human grading.
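The per-B-scan detection results above rest on the area under the ROC curve; a small illustration computing AUC directly from the Mann-Whitney pairwise identity (the labels and scores below are hypothetical, not from the study):

```python
import numpy as np

def auc_score(labels: np.ndarray, scores: np.ndarray) -> float:
    """Area under the ROC curve via the Mann-Whitney pairwise identity:
    the fraction of (positive, negative) pairs ordered correctly by the
    score, counting ties as half."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return float(greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical per-B-scan fluid-presence labels and model scores
labels = np.array([0, 0, 1, 1, 0, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7])
print(auc_score(labels, scores))  # 8/9 ≈ 0.889
```

An AUC of 0.95-0.98, as reported above, means almost every fluid-positive B-scan is scored higher than almost every fluid-free one.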
Affiliation(s)
- Steven Verdooner: OCTHealth LLC, Sacramento, California; NeuroVision Imaging, Inc., Sacramento, California
- Alexander J. Brucker: Perelman School of Medicine, Scheie Eye Institute, University of Pennsylvania, Philadelphia, Pennsylvania
- John Seaman: Novartis Pharmaceuticals Corporation, East Hanover, New Jersey
- Carlo D. Bianchi: Eye Clinic, Department of Biomedical and Clinical Science, University of Milan, Milan, Italy
- Mariano Cozzi: Eye Clinic, Department of Biomedical and Clinical Science, University of Milan, Milan, Italy
- Giovanni Staurenghi: Eye Clinic, Department of Biomedical and Clinical Science, University of Milan, Milan, Italy
3
Pereira A, Oakley JD, Sodhi SK, Russakoff DB, Choudhry N. Proof-of-Concept Analysis of a Deep Learning Model to Conduct Automated Segmentation of OCT Images for Macular Hole Volume. Ophthalmic Surg Lasers Imaging Retina 2022; 53:208-214. PMID: 35417293; DOI: 10.3928/23258160-20220315-02.
Abstract
BACKGROUND AND OBJECTIVE To determine whether an automated artificial intelligence (AI) model could assess macular hole (MH) volume on swept-source optical coherence tomography (OCT) images. PATIENTS AND METHODS This was a proof-of-concept consecutive case series. Patients with an idiopathic full-thickness MH undergoing pars plana vitrectomy surgery with 1 year of follow-up were considered for inclusion. MHs were manually graded by a vitreoretinal surgeon from preoperative OCT images to delineate MH volume. This information was used to train a fully three-dimensional convolutional neural network for automatic segmentation. The main outcome was the correlation of manual MH volume to automated volume segmentation. RESULTS The correlation between manual and automated MH volume was R2 = 0.94 (n = 24). Automated MH volume demonstrated a higher correlation to change in visual acuity from preoperative to the postoperative 1-year time point compared with the minimum linear diameter (volume: R2 = 0.53; minimum linear diameter: R2 = 0.39). CONCLUSION MH automated volume segmentation on OCT imaging demonstrated high correlation to manual MH volume measurements.
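The manual-versus-automated agreement above is reported as R²; a sketch taking R² as the squared Pearson correlation between paired volume measurements (the volumes below are made up for illustration):

```python
import numpy as np

def r_squared(x: np.ndarray, y: np.ndarray) -> float:
    """Squared Pearson correlation between paired measurements."""
    r = np.corrcoef(x, y)[0, 1]
    return float(r * r)

# Hypothetical macular hole volumes (mm^3): manual grading vs. automated
manual = np.array([0.05, 0.12, 0.30, 0.41, 0.22])
auto = np.array([0.06, 0.11, 0.28, 0.45, 0.20])
print(round(r_squared(manual, auto), 3))
```

Note that a high R² indicates a tight linear relationship, not necessarily identical values; a systematic bias between grader and model would not lower it.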
4
Petropoulos IN, Fitzgerald KC, Oakley J, Ponirakis G, Khan A, Gad H, George P, Deleu D, Canibano BG, Akhtar N, Shuaib A, Own A, Malik T, Russakoff DB, Mankowski JL, Misra SL, McGhee CNJ, Calabresi P, Saidha S, Kamran S, Malik RA. Corneal confocal microscopy demonstrates axonal loss in different courses of multiple sclerosis. Sci Rep 2021; 11:21688. PMID: 34737384; PMCID: PMC8568943; DOI: 10.1038/s41598-021-01226-1.
Abstract
Axonal loss is the main determinant of disease progression in multiple sclerosis (MS). This study aimed to assess the utility of corneal confocal microscopy (CCM) in detecting corneal axonal loss in different courses of MS. The results were confirmed by two independent segmentation methods. Seventy-two subjects (144 eyes) [clinically isolated syndrome (n = 9); relapsing–remitting MS (n = 20); secondary-progressive MS (n = 22); age-matched, healthy controls (n = 21)] underwent CCM and assessment of their disability status. Two independent algorithms (ACCMetrics; and Voxeleron deepNerve) were used to quantify corneal nerve fiber density (CNFD) (ACCMetrics only), corneal nerve fiber length (CNFL) and corneal nerve fractal dimension (CNFrD). Data are expressed as mean ± standard deviation with 95% confidence interval (CI). Compared to controls, patients with MS had significantly lower CNFD (34.76 ± 5.57 vs. 19.85 ± 6.75 fibers/mm², 95% CI − 18.24 to − 11.59, P < .0001), CNFL [for ACCMetrics: 19.75 ± 2.39 vs. 12.40 ± 3.30 mm/mm², 95% CI − 8.94 to − 5.77, P < .0001; for deepNerve: 21.98 ± 2.76 vs. 14.40 ± 4.17 mm/mm², 95% CI − 9.55 to − 5.6, P < .0001] and CNFrD [for ACCMetrics: 1.52 ± 0.02 vs. 1.45 ± 0.04, 95% CI − 0.09 to − 0.05, P < .0001; for deepNerve: 1.29 ± 0.03 vs. 1.19 ± 0.07, 95% CI − 0.13 to − 0.07, P < .0001]. Corneal nerve parameters were comparably reduced in different courses of MS. There was excellent reproducibility between the algorithms. Significant corneal axonal loss is detected in different courses of MS, including in patients with clinically isolated syndrome.
Affiliation(s)
- Ioannis N Petropoulos: Research Division, Qatar Foundation, Weill Cornell Medicine-Qatar of Cornell University, PO Box 24144, Education City, Doha, Qatar
- Kathryn C Fitzgerald: Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Georgios Ponirakis: Research Division, Qatar Foundation, Weill Cornell Medicine-Qatar of Cornell University, PO Box 24144, Education City, Doha, Qatar
- Adnan Khan: Research Division, Qatar Foundation, Weill Cornell Medicine-Qatar of Cornell University, PO Box 24144, Education City, Doha, Qatar
- Hoda Gad: Research Division, Qatar Foundation, Weill Cornell Medicine-Qatar of Cornell University, PO Box 24144, Education City, Doha, Qatar
- Pooja George: Neuroscience Institute, Hamad General Hospital, Doha, Qatar
- Dirk Deleu: Neuroscience Institute, Hamad General Hospital, Doha, Qatar
- Naveed Akhtar: Neuroscience Institute, Hamad General Hospital, Doha, Qatar
- Ashfaq Shuaib: Neuroscience Institute, Hamad General Hospital, Doha, Qatar; Department of Medicine, University of Alberta, Edmonton, AB, Canada
- Ahmed Own: Neuroscience Institute, Hamad General Hospital, Doha, Qatar
- Taimur Malik: Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Joseph L Mankowski: Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA; Department of Molecular and Comparative Pathobiology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Stuti L Misra: Department of Ophthalmology, New Zealand National Eye Centre, University of Auckland, Auckland, New Zealand
- Charles N J McGhee: Department of Ophthalmology, New Zealand National Eye Centre, University of Auckland, Auckland, New Zealand
- Peter Calabresi: Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Shiv Saidha: Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Saadat Kamran: Neuroscience Institute, Hamad General Hospital, Doha, Qatar
- Rayaz A Malik: Research Division, Qatar Foundation, Weill Cornell Medicine-Qatar of Cornell University, PO Box 24144, Education City, Doha, Qatar
5
McCarron ME, Weinberg RL, Izzi JM, Queen SE, Tarwater PM, Misra SL, Russakoff DB, Oakley JD, Mankowski JL. Combining In Vivo Corneal Confocal Microscopy With Deep Learning-Based Analysis Reveals Sensory Nerve Fiber Loss in Acute Simian Immunodeficiency Virus Infection. Cornea 2021; 40:635-642. PMID: 33528225; PMCID: PMC8009813; DOI: 10.1097/ico.0000000000002661.
Abstract
PURPOSE To characterize corneal subbasal nerve plexus features of normal and simian immunodeficiency virus (SIV)-infected macaques by combining in vivo corneal confocal microscopy (IVCM) with automated assessments using deep learning-based methods customized for macaques. METHODS IVCM images were collected from both male and female age-matched rhesus and pigtailed macaques housed at the Johns Hopkins University breeding colony using the Heidelberg HRTIII with Rostock Corneal Module. We also obtained repeat IVCM images of 12 SIV-infected animals including preinfection and 10-day post-SIV infection time points. All IVCM images were analyzed using a deep convolutional neural network architecture developed specifically for macaque studies. RESULTS Deep learning-based segmentation of subbasal nerves in IVCM images from macaques demonstrated that corneal nerve fiber length and fractal dimension measurements did not differ between species, but pigtailed macaques had significantly higher baseline corneal nerve fiber tortuosity than rhesus macaques (P = 0.005). Neither sex nor age of macaques was associated with differences in any of the assessed corneal subbasal nerve parameters. In the SIV/macaque model of human immunodeficiency virus, acute SIV infection induced significant decreases in both corneal nerve fiber length and fractal dimension (P = 0.01 and P = 0.008, respectively). CONCLUSIONS The combination of IVCM and robust objective deep learning analysis is a powerful tool to track sensory nerve damage, enabling early detection of neuropathy. Adapting deep learning analyses to clinical corneal nerve assessments will improve monitoring of small sensory nerve fiber damage in numerous clinical settings including human immunodeficiency virus.
Affiliation(s)
- Megan E McCarron: Department of Molecular and Comparative Pathobiology, Johns Hopkins University School of Medicine, Baltimore, MD
- Rachel L Weinberg: Department of Molecular and Comparative Pathobiology, Johns Hopkins University School of Medicine, Baltimore, MD
- Jessica M Izzi: Department of Molecular and Comparative Pathobiology, Johns Hopkins University School of Medicine, Baltimore, MD
- Suzanne E Queen: Department of Molecular and Comparative Pathobiology, Johns Hopkins University School of Medicine, Baltimore, MD
- Patrick M Tarwater: Departments of Epidemiology and Biostatistics, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD
- Stuti L Misra: Department of Ophthalmology, Faculty of Medical and Health Sciences, New Zealand National Eye Centre, University of Auckland, Auckland, New Zealand
- Joseph L Mankowski: Department of Molecular and Comparative Pathobiology, Johns Hopkins University School of Medicine, Baltimore, MD
6
Oakley JD, Russakoff DB, McCarron ME, Weinberg RL, Izzi JM, Misra SL, McGhee CN, Mankowski JL. Deep learning-based analysis of macaque corneal sub-basal nerve fibers in confocal microscopy images. Eye Vis (Lond) 2020; 7:27. PMID: 32420401; PMCID: PMC7206808; DOI: 10.1186/s40662-020-00192-5.
Abstract
BACKGROUND To develop and validate a deep learning-based approach to the fully-automated analysis of macaque corneal sub-basal nerves using in vivo confocal microscopy (IVCM). METHODS IVCM was used to collect 108 images from 35 macaques. 58 of the images, from 22 macaques, were used to evaluate different deep convolutional neural network (CNN) architectures for the automatic analysis of sub-basal nerves relative to manual tracings. The remaining images were used to independently assess correlations and inter-observer performance relative to three readers. RESULTS Correlation scores using the coefficient of determination between readers and the best CNN averaged 0.80. For inter-observer comparison, intraclass correlation coefficients (ICCs) between the three expert readers and the automated approach were 0.75, 0.85 and 0.92. The ICC across all four observers was 0.84, the same as the average between the CNN and the individual readers. CONCLUSIONS Deep learning-based segmentation of sub-basal nerves in IVCM images shows high to very high correlation to manual segmentations in macaque data and is indistinguishable from the agreement between readers. As quantitative measurements of corneal sub-basal nerves are important biomarkers for disease screening and management, the reported work offers utility to a variety of research and clinical studies using IVCM.
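The inter-observer agreement above is summarized with ICCs; a sketch of the two-way random, absolute-agreement, single-rater form ICC(2,1) of Shrout and Fleiss, applied to a hypothetical ratings matrix (rows: images, columns: observers):

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater."""
    n, k = ratings.shape  # n targets, k raters
    grand = ratings.mean()
    # Mean squares from the two-way ANOVA decomposition
    msr = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # rows
    msc = n * ((ratings.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # cols
    sst = ((ratings - grand) ** 2).sum()
    mse = (sst - msr * (n - 1) - msc * (k - 1)) / ((n - 1) * (k - 1))
    return float((msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n))

# Hypothetical nerve-length measurements by three observers on four images
r = np.array([[9.0, 9.2, 8.8],
              [12.1, 12.5, 11.9],
              [7.4, 7.8, 7.2],
              [10.0, 10.3, 9.8]])
print(round(icc_2_1(r), 3))
```

Here between-image variance dominates the small observer offsets, so the ICC comes out high, mirroring the 0.75-0.92 range reported above.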
Affiliation(s)
- Megan E. McCarron: Department of Molecular and Comparative Pathobiology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Rachel L. Weinberg: Department of Molecular and Comparative Pathobiology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Jessica M. Izzi: Department of Molecular and Comparative Pathobiology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Stuti L. Misra: Department of Ophthalmology, Faculty of Medical and Health Sciences, New Zealand National Eye Centre, University of Auckland, Auckland, New Zealand
- Charles N. McGhee: Department of Ophthalmology, Faculty of Medical and Health Sciences, New Zealand National Eye Centre, University of Auckland, Auckland, New Zealand
- Joseph L. Mankowski: Department of Molecular and Comparative Pathobiology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
7
Russakoff DB, Mannil SS, Oakley JD, Ran AR, Cheung CY, Dasari S, Riyazzuddin M, Nagaraj S, Rao HL, Chang D, Chang RT. A 3D Deep Learning System for Detecting Referable Glaucoma Using Full OCT Macular Cube Scans. Transl Vis Sci Technol 2020; 9:12. PMID: 32704418; PMCID: PMC7347026; DOI: 10.1167/tvst.9.2.12.
Abstract
Purpose The purpose of this study was to develop a 3D deep learning system that uses spectral domain optical coherence tomography (SD-OCT) macular cubes to differentiate between referable and nonreferable cases for glaucoma, and to apply it to real-world datasets to understand how this affects performance. Methods There were 2805 Cirrus optical coherence tomography (OCT) macula volumes (Macula protocol 512 × 128) of 1095 eyes from 586 patients at a single site that were used to train a fully 3D convolutional neural network (CNN). Referable glaucoma included true glaucoma, pre-perimetric glaucoma, and high-risk suspects, based on qualitative fundus photographs, visual fields, OCT reports, and clinical examinations, including intraocular pressure (IOP) and treatment history, as the binary (two-class) ground truth. The curated real-world dataset did not include eyes with retinal disease or nonglaucomatous optic neuropathies. The cubes were first homogenized using layer segmentation with the Orion software (Voxeleron) to achieve standardization. The algorithm was tested on two separate external validation sets from different glaucoma studies, comprising Cirrus macular cube scans of 505 and 336 eyes, respectively. Results The area under the receiver operating characteristic curve (AUROC) for the development dataset for distinguishing referable glaucoma was 0.88 for our CNN using homogenization, 0.82 without homogenization, and 0.81 for a CNN architecture from the existing literature. For the external validation datasets, which had different glaucoma definitions, the AUCs were 0.78 and 0.95, respectively. The performance of the model across the myopia severity distribution was assessed in the dataset from the United States and was found to have an AUC of 0.85, 0.92, and 0.95 in severe, moderate, and mild myopia, respectively.
Conclusions A 3D deep learning algorithm trained on macular OCT volumes without retinal disease to detect referable glaucoma performs better with retinal segmentation preprocessing and performs reasonably well across all levels of myopia. Translational Relevance Interpretation of OCT macula volumes based on normative data color distributions is highly influenced by population demographics and characteristics, such as refractive error, as well as by the size of the normative database. Referable glaucoma, in this study, was chosen to include cases that should be seen by a specialist. This study is unique because it uses multimodal patient data for the glaucoma definition, includes all severities of myopia, and validates the algorithm with international data to understand its generalizability potential.
Affiliation(s)
- Suria S. Mannil: Byers Eye Institute, Stanford University, Palo Alto, CA, USA
- An Ran Ran: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Carol Y. Cheung: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Dolly Chang: Byers Eye Institute, Stanford University, Palo Alto, CA, USA
- Robert T. Chang: Byers Eye Institute, Stanford University, Palo Alto, CA, USA
8
Russakoff DB, Lamin A, Oakley JD, Dubis AM, Sivaprasad S. Deep Learning for Prediction of AMD Progression: A Pilot Study. Invest Ophthalmol Vis Sci 2019; 60:712-722. PMID: 30786275; DOI: 10.1167/iovs.18-25325.
Abstract
Purpose To develop and assess a method for predicting the likelihood of converting from early/intermediate to advanced wet age-related macular degeneration (AMD) using optical coherence tomography (OCT) imaging and methods of deep learning. Methods Seventy-one eyes of 71 patients with confirmed early/intermediate AMD with contralateral wet AMD were imaged with OCT three times over 2 years (baseline, year 1, year 2). These eyes were divided into two groups: eyes that had not converted to wet AMD (n = 40) at year 2 and those that had (n = 31). Two deep convolutional neural networks (CNNs) were evaluated using 5-fold cross validation on the OCT data at baseline to attempt to predict which eyes would convert to advanced AMD at year 2: (1) VGG16, a popular CNN for image recognition, was fine-tuned, and (2) a novel, simplified CNN architecture was trained from scratch. Preprocessing was added in the form of a segmentation-based normalization to reduce variance in the data and improve performance. Results Our new architecture, AMDnet, with preprocessing, achieved an area under the receiver operating characteristic (ROC) curve (AUC) of 0.89 at the B-scan level and 0.91 for volumes. Results for VGG16, an established CNN architecture, with preprocessing were 0.82 for B-scans/0.87 for volumes versus 0.66 for B-scans/0.69 for volumes without preprocessing. Conclusions A CNN with layer segmentation-based preprocessing shows strong predictive power for the progression of early/intermediate AMD to advanced AMD. Use of the preprocessing was shown to improve performance regardless of the network architecture.
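Reporting both B-scan-level and volume-level AUCs implies pooling per-B-scan CNN scores into one score per eye; a sketch assuming simple mean pooling (the abstract does not state the exact aggregation rule, and the scores and volume IDs below are hypothetical):

```python
import numpy as np

def volume_scores(bscan_scores: np.ndarray, volume_ids: np.ndarray) -> dict:
    """Pool per-B-scan CNN scores into one score per OCT volume by averaging.

    Mean pooling is an assumption here, not the study's documented rule."""
    return {int(v): float(bscan_scores[volume_ids == v].mean())
            for v in np.unique(volume_ids)}

# Hypothetical converter probabilities for two volumes of three B-scans each
scores = np.array([0.2, 0.4, 0.3, 0.9, 0.8, 0.7])
vids = np.array([0, 0, 0, 1, 1, 1])
print(volume_scores(scores, vids))
```

Volume-level pooling tends to smooth out noisy per-B-scan predictions, which is consistent with the volume AUCs above exceeding the B-scan AUCs.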
Affiliation(s)
- Ali Lamin: NIHR Moorfields Biomedical Research Centre, London, United Kingdom; UCL Institute of Ophthalmology, London, United Kingdom
- Adam M Dubis: NIHR Moorfields Biomedical Research Centre, London, United Kingdom; UCL Institute of Ophthalmology, London, United Kingdom
- Sobha Sivaprasad: NIHR Moorfields Biomedical Research Centre, London, United Kingdom; UCL Institute of Ophthalmology, London, United Kingdom
9
Lamin A, Oakley JD, Dubis AM, Russakoff DB, Sivaprasad S. Changes in volume of various retinal layers over time in early and intermediate age-related macular degeneration. Eye (Lond) 2018; 33:428-434. PMID: 30310161; DOI: 10.1038/s41433-018-0234-9.
Abstract
PURPOSE To longitudinally evaluate volume changes in inner and outer retinal layers in early and intermediate age-related macular degeneration (AMD) compared to healthy control eyes using optical coherence tomography (OCT). METHODS 71 eyes with AMD and 31 control eyes were imaged at two time points: baseline and after 2 years. Automated OCT layer segmentation was performed using Orion™ (Voxeleron). This software is able to measure volumes of retinal layers with distinct boundaries, including the Retinal Nerve Fibre Layer (RNFL), Ganglion Cell-Inner Plexiform Layer (GCIPL), Inner Nuclear Layer (INL), Outer Plexiform Layer (OPL), Outer Nuclear Layer (ONL), Photoreceptors (PR) and Retinal Pigment Epithelium-Bruch's Membrane complex (RPE-BM). The mean retinal layer volumes and volume changes at 2 years were compared between groups. RESULTS Mean GCIPL and INL volumes were lower, while PR and RPE-BM volumes were higher, in AMD eyes than in controls at baseline (all P < 0.05) and at year 2 (all P < 0.05). In AMD eyes, RNFL and ONL volumes decreased by 0.0232 (P = 0.033) and 0.0851 (P = 0.001), respectively. In contrast, OPL and RPE-BM volumes increased in AMD eyes by 0.0391 (P < 0.001) and 0.0209 (P < 0.001), respectively. Moreover, there were significant differences in the longitudinal volume change of the OPL (P = 0.02), ONL (P = 0.008) and RPE-BM (P = 0.02) between AMD eyes and controls. CONCLUSIONS There were abnormal retinal layer volumes and volume changes in eyes with early and intermediate AMD.
Affiliation(s)
- Ali Lamin: NIHR Moorfields Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK; UCL Institute of Ophthalmology, London, UK
- Adam M Dubis: NIHR Moorfields Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK; UCL Institute of Ophthalmology, London, UK
- Sobha Sivaprasad: NIHR Moorfields Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK; UCL Institute of Ophthalmology, London, UK
10
Russakoff DB, Hasegawa A. Generation and application of a probabilistic breast cancer atlas. Med Image Comput Comput Assist Interv 2007; 9:454-61. PMID: 17354804; DOI: 10.1007/11866763_56.
Abstract
Computer-aided detection (CAD) has become increasingly common in recent years as a tool for catching breast cancer in its early, more treatable stages. More and more breast centers are using CAD as studies continue to demonstrate its effectiveness. As the technology behind CAD improves, so do its results and its impact on society. In trying to improve the sensitivity and specificity of CAD algorithms, a good deal of work has been done on feature extraction: the generation of mathematical representations of mammographic features that can help distinguish true cancerous lesions from false positives. One feature that physicians rely on in making their decisions, but that is not currently seen in the literature, is location within the breast. This feature is difficult to compute, as it requires a good deal of prior knowledge as well as some way of accounting for the tremendous variability in breast shapes. In this paper, we present a method for the generation and implementation of a probabilistic breast cancer atlas. We then validate this method on data from the Digital Database for Screening Mammography (DDSM).
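The core of a probabilistic atlas, accumulating lesion locations from a training population into a spatial probability map, can be sketched as follows; the hard step the paper addresses, normalizing each breast into a common coordinate frame, is assumed to have been done upstream (the coordinates below are hypothetical):

```python
import numpy as np

def build_atlas(lesion_coords, shape=(64, 64)) -> np.ndarray:
    """Accumulate lesion locations into a 2-D occurrence grid and normalize
    it into a probability map. Inputs are assumed already mapped into a
    common breast frame with coordinates in [0, 1)^2."""
    atlas = np.zeros(shape)
    for y, x in lesion_coords:
        atlas[int(y * shape[0]), int(x * shape[1])] += 1.0
    return atlas / atlas.sum()

# Three hypothetical lesion centers in normalized coordinates
coords = [(0.50, 0.50), (0.51, 0.49), (0.20, 0.70)]
atlas = build_atlas(coords)
print(atlas.sum())  # 1.0
```

In practice such a map would also be smoothed, so that the atlas value at a candidate lesion's normalized position can serve as a location prior feature for the CAD classifier.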
MESH Headings
- Algorithms
- Anatomy, Artistic/methods
- Artificial Intelligence
- Breast Neoplasms/diagnostic imaging
- Computer Simulation
- Data Interpretation, Statistical
- Databases, Factual
- Female
- Humans
- Image Enhancement/methods
- Imaging, Three-Dimensional/methods
- Information Storage and Retrieval/methods
- Mammography/methods
- Medical Illustration
- Models, Anatomic
- Models, Biological
- Models, Statistical
- Pattern Recognition, Automated/methods
- Radiographic Image Interpretation, Computer-Assisted/methods
- Reproducibility of Results
- Sensitivity and Specificity
- Signal Processing, Computer-Assisted
11
Rohlfing T, Denzler J, Grässl C, Russakoff DB, Maurer CR. Markerless real-time 3-D target region tracking by motion backprojection from projection images. IEEE Trans Med Imaging 2005; 24:1455-68. PMID: 16279082; DOI: 10.1109/tmi.2005.857651.
Abstract
Accurate and fast localization of a predefined target region inside the patient is an important component of many image-guided therapy procedures. This problem is commonly solved by registration of intraoperative 2-D projection images to 3-D preoperative images. If the patient is not fixed during the intervention, the 2-D image acquisition is repeated several times during the procedure, and the registration problem can be cast instead as a 3-D tracking problem. To solve the 3-D problem, we propose in this paper to apply 2-D region tracking to first recover the components of the transformation that are in-plane to the projections. The 2-D motion estimates of all projections are backprojected into 3-D space, where they are then combined into a consistent estimate of the 3-D motion. We compare this method to intensity-based 2-D to 3-D registration and a combination of 2-D motion backprojection followed by a 2-D to 3-D registration stage. Using clinical data with a fiducial marker-based gold-standard transformation, we show that our method is capable of accurately tracking vertebral targets in 3-D from 2-D motion measured in X-ray projection images. Using a standard tracking algorithm (hyperplane tracking), tracking is achieved at video frame rates but fails relatively often (32% of all frames tracked with target registration error (TRE) better than 1.2 mm, 82% of all frames tracked with TRE better than 2.4 mm). With intensity-based 2-D to 2-D image registration using normalized mutual information (NMI) and pattern intensity (PI), accuracy and robustness are substantially improved. NMI tracked 82% of all frames in our data with TRE better than 1.2 mm and 96% of all frames with TRE better than 2.4 mm. This comes at the cost of a reduced frame rate, 1.7 s average processing time per frame and projection device. Results using PI were slightly more accurate, but required on average 5.4 s time per frame. 
These results are still substantially faster than 2-D to 3-D registration. We conclude that motion backprojection from 2-D motion tracking is an accurate and efficient method for tracking 3-D target motion, but tracking 2-D motion accurately and robustly remains a challenge.
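The combination step described above, backprojecting per-view 2-D motion estimates into a single consistent 3-D motion, can be sketched as a linearized least-squares problem. This is an illustrative reconstruction under simplifying assumptions (pure translation, known projection matrices), not the authors' implementation; the function names and geometry are hypothetical:

```python
import numpy as np

def projection_jacobian(P, X):
    """2x3 Jacobian of the pinhole projection P (3x4) at 3-D point X."""
    Xh = np.append(X, 1.0)
    u, v, w = P @ Xh
    M = P[:, :3]
    # d(u/w)/dX = (w*du - u*dw)/w^2, and likewise for v
    return np.vstack([(w * M[0] - u * M[2]) / w**2,
                      (w * M[1] - v * M[2]) / w**2])

def backproject_motion(Ps, d2s, X):
    """Combine per-projection 2-D displacements into one 3-D translation.

    Ps  : list of 3x4 projection matrices, one per X-ray view
    d2s : list of measured 2-D displacements of the target in each view
    X   : current 3-D target position
    """
    A = np.vstack([projection_jacobian(P, X) for P in Ps])
    b = np.concatenate(d2s)
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t
```

With affine (orthographic-like) projection matrices the linearization is exact; for perspective projections the estimate holds for small inter-frame motions, which matches the tracking setting.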
Affiliation(s)
- Torsten Rohlfing
- Neuroscience Program at SRI International, 333 Ravenswood Avenue, Menlo Park, CA 94025-3493, USA.
12
Russakoff DB, Rohlfing T, Mori K, Rueckert D, Ho A, Adler JR, Maurer CR. Fast generation of digitally reconstructed radiographs using attenuation fields with application to 2D-3D image registration. IEEE Trans Med Imaging 2005; 24:1441-54. [PMID: 16279081] [DOI: 10.1109/tmi.2005.856749]
Abstract
Generation of digitally reconstructed radiographs (DRRs) is computationally expensive and is typically the rate-limiting step in the execution time of intensity-based two-dimensional to three-dimensional (2D-3D) registration algorithms. We address this computational issue by extending the technique of light field rendering from the computer graphics community. The extension of light fields, which we call attenuation fields (AFs), allows most of the DRR computation to be performed in a preprocessing step; after this precomputation step, DRRs can be generated substantially faster than with conventional ray casting. We derive expressions for the physical sizes of the two planes of an AF necessary to generate DRRs for a given X-ray camera geometry and all possible object motion within a specified range. Because an AF is a ray-based data structure, it is substantially more memory efficient than a huge table of precomputed DRRs because it eliminates the redundancy of replicated rays. Nonetheless, an AF can require substantial memory, which we address by compressing it using vector quantization. We compare DRRs generated using AFs (AF-DRRs) to those generated using ray casting (RC-DRRs) for a typical C-arm geometry and computed tomography images of several anatomic regions. They are quantitatively very similar: the median peak signal-to-noise ratio of AF-DRRs versus RC-DRRs is greater than 43 dB in all cases. We perform intensity-based 2D-3D registration using AF-DRRs and RC-DRRs and evaluate registration accuracy using gold-standard clinical spine image data from four patients. The registration accuracy and robustness of the two methods is virtually identical whereas the execution speed using AF-DRRs is an order of magnitude faster.
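The precompute-then-look-up idea behind attenuation fields can be sketched in miniature: line integrals are computed once per quantized ray in a preprocessing step, after which DRR generation reduces to table lookups. This is a toy illustration, assuming a nearest-voxel ray caster and a plain dictionary keyed by ray identifiers; the actual method uses a two-plane light-field parametrization and vector quantization for compression:

```python
import numpy as np

def ray_integral(volume, origin, direction, step=0.5, n_steps=64):
    """Conventional ray casting: sum nearest-voxel attenuation samples."""
    total = 0.0
    for i in range(n_steps):
        p = np.asarray(origin) + i * step * np.asarray(direction)
        idx = np.round(p).astype(int)
        if np.all(idx >= 0) and np.all(idx < volume.shape):
            total += volume[tuple(idx)]
    return total * step

def build_attenuation_field(volume, rays):
    """Preprocessing: precompute line integrals for every quantized ray.

    rays maps a ray key (e.g. quantized two-plane coordinates) to
    an (origin, direction) pair.
    """
    return {key: ray_integral(volume, o, d) for key, (o, d) in rays.items()}

def drr_from_af(af, pixel_keys, shape):
    """Generate a DRR by table lookup instead of ray casting."""
    return np.array([af[k] for k in pixel_keys]).reshape(shape)
```

The memory saving described in the abstract comes from storing one value per distinct ray rather than one full DRR per pose, since many poses share the same rays.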
Affiliation(s)
- Daniel B Russakoff
- Department of Computer Science, Stanford University, Stanford, CA 94305 USA.
13
Rohlfing T, Russakoff DB, Denzler J, Mori K, Maurer CR. Progressive attenuation fields: Fast 2D-3D image registration without precomputation. Med Phys 2005; 32:2870-80. [PMID: 16266101] [DOI: 10.1118/1.1997367]
Abstract
Computation of digitally reconstructed radiograph (DRR) images is the rate-limiting step in most current intensity-based algorithms for the registration of three-dimensional (3D) images to two-dimensional (2D) projection images. This paper introduces and evaluates the progressive attenuation field (PAF), which is a new method to speed up DRR computation. A PAF is closely related to an attenuation field (AF). A major difference is that a PAF is constructed on the fly as the registration proceeds; it does not require any precomputation time, nor does it make any prior assumptions of the patient pose or limit the permissible range of patient motion. A PAF effectively acts as a cache memory for projection values once they are computed, rather than as a lookup table for precomputed projections like standard AFs. We use a cylindrical attenuation field parametrization, which is better suited for many medical applications of 2D-3D registration than the usual two-plane parametrization. The computed attenuation values are stored in a hash table for time-efficient storage and access. Using clinical gold-standard spine image data sets from five patients, we demonstrate consistent speedups of intensity-based 2D-3D image registration using PAF DRRs by a factor of 10 over conventional ray casting DRRs with no decrease of registration accuracy or robustness.
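The cache-on-the-fly behaviour described above can be sketched as a memoized ray caster: the first request for a ray computes its line integral, and repeated requests during registration hit the hash table. This is a minimal sketch, assuming rays keyed by quantized origin/direction rather than the cylindrical parametrization the paper uses; the class and parameter names are hypothetical:

```python
import numpy as np

class ProgressiveAttenuationField:
    """Cache ray integrals as they are computed, with no precomputation."""

    def __init__(self, volume, cast_fn, quantum=1.0):
        self.volume = volume
        self.cast_fn = cast_fn   # a conventional ray caster
        self.quantum = quantum   # quantization step defining the ray key
        self.cache = {}          # hash table of computed projection values

    def _key(self, origin, direction):
        q = self.quantum
        params = np.concatenate([origin, direction]) / q
        return tuple(np.round(params).astype(int))

    def integral(self, origin, direction):
        key = self._key(origin, direction)
        if key not in self.cache:
            # First request: fall back to ray casting, then cache.
            self.cache[key] = self.cast_fn(self.volume, origin, direction)
        return self.cache[key]
```

As the abstract notes, this acts as a cache memory rather than a lookup table of precomputed projections, so it imposes no restriction on the initial pose or the range of patient motion.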
Affiliation(s)
- Torsten Rohlfing
- Neuroscience Program, SRI International, Menlo Park, California 94025-3493, USA.
14
Russakoff DB, Rohlfing T, Adler JR, Maurer CR. Intensity-based 2D-3D spine image registration incorporating a single fiducial marker. Acad Radiol 2005; 12:37-50. [PMID: 15691724] [DOI: 10.1016/j.acra.2004.09.013]
Abstract
RATIONALE AND OBJECTIVES The two-dimensional to three-dimensional (2D-3D) registration of a computed tomography image to one or more x-ray projection images has a number of image-guided therapy applications. In general, fiducial marker-based methods are fast, accurate, and robust, but marker implantation is not always possible, is often considered too invasive to be clinically acceptable, and entails risk. There is also the unresolved issue of whether it is acceptable to leave markers permanently implanted. Intensity-based registration methods do not require markers and can be automated, because geometric features such as points and surfaces do not need to be segmented from the images. However, for spine images, intensity-based methods are susceptible to local optima in the cost function and thus need initial transformations that are close to the correct transformation. MATERIALS AND METHODS In this report, we propose a hybrid similarity measure for 2D-3D registration that is a weighted combination of an intensity-based similarity measure (mutual information) and a point-based measure using one fiducial marker. We evaluate its registration accuracy and robustness using gold-standard clinical spine image data from four patients. RESULTS Mean registration errors for successful registrations for the four patients were 1.3 and 1.1 mm for the intensity-based and hybrid similarity measures, respectively. Whereas the percentage of successful intensity-based registrations (registration error < 2.5 mm) decreased rapidly as the initial transformation moved further from the correct transformation, the incorporation of a single marker produced successful registrations more than 99% of the time, independent of the initial transformation. CONCLUSION The use of one fiducial marker reduces 2D-3D spine image registration error slightly and improves robustness substantially. The findings are potentially relevant for image-guided therapy.
If one marker is sufficient to obtain clinically acceptable registration accuracy and robustness, as the preliminary results using the proposed hybrid similarity measure suggest, the marker can be placed on a spinous process, which could be accomplished without penetrating muscle or using fluoroscopic guidance, and such a marker could be removed relatively easily.
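A hybrid measure of this kind, a weighted combination of mutual information between the DRR and the X-ray image and a distance penalty on a single projected marker, can be sketched as below. This is an illustrative form only; the paper's exact weighting and normalization may differ, and the function names are hypothetical:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two intensity images (in bits)."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal of image a
    py = p.sum(axis=0, keepdims=True)   # marginal of image b
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

def hybrid_similarity(drr, xray, marker_proj, marker_obs, weight=0.5):
    """Weighted combination of image MI and a single-marker distance penalty.

    marker_proj : predicted 2-D projection of the implanted marker
    marker_obs  : its observed 2-D position in the X-ray image
    """
    marker_err = np.linalg.norm(np.asarray(marker_proj) - np.asarray(marker_obs))
    return weight * mutual_information(drr, xray) - (1.0 - weight) * marker_err
```

The marker term is what restores robustness: it penalizes poses far from the correct one even when the intensity term sits in a local optimum.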
Affiliation(s)
- Daniel B Russakoff
- Department of Computer Science, Stanford University, 300 Pasteur Drive, Stanford, CA 94305-5327, USA.
15
Abstract
It is well known in the pattern recognition community that the accuracy of classifications obtained by combining decisions made by independent classifiers can be substantially higher than the accuracy of the individual classifiers. In order to combine multiple segmentations, we introduce two extensions to an expectation-maximization (EM) algorithm for ground truth estimation based on multiple experts (Warfield et al., MICCAI 2002). The first method repeatedly applies the Warfield algorithm with a subsequent integration step. The second method is a multi-label extension of the Warfield algorithm. Both extensions integrate multiple segmentations into one that is closer to the unknown ground truth than the individual segmentations. In atlas-based image segmentation, multiple classifiers arise naturally by applying different registration methods to the same atlas, or the same registration method to different atlases, or both. We perform a validation study designed to quantify the success of classifier combination methods in atlas-based segmentation. By applying random deformations, a given ground truth atlas is transformed into multiple segmentations that could result from imperfect registrations of an image to multiple atlas images. We demonstrate that a segmentation produced by combining multiple individual registration-based segmentations is more accurate for the two EM methods we propose than for simple label averaging.
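The simple label-averaging baseline that the EM extensions are compared against can be sketched as a per-voxel majority vote over the candidate segmentations. This is an illustrative sketch of the baseline, not the authors' code:

```python
import numpy as np

def vote_fusion(segmentations, n_labels):
    """Combine label images by per-voxel majority vote (equal weights)."""
    segs = np.stack(segmentations)                   # (n_classifiers, ...)
    counts = np.stack([(segs == label).sum(axis=0)   # votes per label
                       for label in range(n_labels)])
    return counts.argmax(axis=0)
```

The EM extensions improve on this by learning per-classifier reliabilities instead of weighting every segmentation equally.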
Affiliation(s)
- Torsten Rohlfing
- Image Guidance Laboratories, Department of Neurosurgery, Stanford University, Stanford, CA, USA.
16
Rohlfing T, Russakoff DB, Maurer CR. Performance-based classifier combination in atlas-based image segmentation using expectation-maximization parameter estimation. IEEE Trans Med Imaging 2004; 23:983-94. [PMID: 15338732] [DOI: 10.1109/tmi.2004.830803]
Abstract
It is well known in the pattern recognition community that the accuracy of classifications obtained by combining decisions made by independent classifiers can be substantially higher than the accuracy of the individual classifiers. We have previously shown this to be true for atlas-based segmentation of biomedical images. The conventional method for combining individual classifiers weights each classifier equally (vote or sum rule fusion). In this paper, we propose two methods that estimate the performances of the individual classifiers and combine the individual classifiers by weighting them according to their estimated performance. The two methods are multiclass extensions of an expectation-maximization (EM) algorithm for ground truth estimation of binary classification based on decisions of multiple experts (Warfield et al., 2004). The first method performs parameter estimation independently for each class with a subsequent integration step. The second method considers all classes simultaneously. We demonstrate the efficacy of these performance-based fusion methods by applying them to atlas-based segmentations of three-dimensional confocal microscopy images of bee brains. In atlas-based image segmentation, multiple classifiers arise naturally by applying different registration methods to the same atlas, or the same registration method to different atlases, or both. We perform a validation study designed to quantify the success of classifier combination methods in atlas-based segmentation. By applying random deformations, a given ground truth atlas is transformed into multiple segmentations that could result from imperfect registrations of an image to multiple atlas images. In a second evaluation study, multiple actual atlas-based segmentations are combined and their accuracies computed by comparing them to a manual segmentation. 
We demonstrate in both evaluation studies that segmentations produced by combining multiple individual registration-based segmentations are more accurate for the two classifier fusion methods we propose, which weight the individual classifiers according to their EM-based performance estimates, than for simple sum rule fusion, which weights each classifier equally.
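The performance-weighted fusion described above can be sketched as a compact multiclass EM in the STAPLE style: the E-step computes a posterior over the true label at each voxel, and the M-step re-estimates a confusion matrix per classifier. This is a simplified illustration (global confusion matrices, flat spatial prior), assuming hypothetical names, not the authors' implementation:

```python
import numpy as np

def em_label_fusion(segmentations, n_labels, n_iter=20):
    """Fuse label images, weighting each classifier by its estimated
    confusion matrix theta[r][true, observed]."""
    segs = np.stack([s.ravel() for s in segmentations])    # (R, N)
    R, N = segs.shape
    # Initialize every classifier as fairly reliable (diagonal-dominant).
    theta = np.full((R, n_labels, n_labels), 0.1 / max(n_labels - 1, 1))
    for r in range(R):
        np.fill_diagonal(theta[r], 0.9)
    prior = np.full(n_labels, 1.0 / n_labels)
    for _ in range(n_iter):
        # E-step: posterior W[l, n] over the true label at each voxel.
        W = np.tile(np.log(prior)[:, None], (1, N))
        for r in range(R):
            W += np.log(theta[r][:, segs[r]] + 1e-12)
        W = np.exp(W - W.max(axis=0))
        W /= W.sum(axis=0)
        # M-step: re-estimate each classifier's confusion matrix and the prior.
        for r in range(R):
            for l in range(n_labels):
                theta[r][:, l] = W[:, segs[r] == l].sum(axis=1)
            theta[r] /= theta[r].sum(axis=1, keepdims=True) + 1e-12
        prior = W.sum(axis=1) / N
    return W.argmax(axis=0).reshape(segmentations[0].shape)
```

Compared with sum rule fusion, unreliable classifiers end up with flatter confusion matrices and therefore contribute less to the per-voxel posterior.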
Affiliation(s)
- Torsten Rohlfing
- Image Guidance Laboratories, Department of Neurosurgery, Stanford University, Stanford, CA 94305-5327, USA.
17
Russakoff DB, Tomasi C, Rohlfing T, Maurer CR. Image Similarity Using Mutual Information of Regions. Lecture Notes in Computer Science 2004. [DOI: 10.1007/978-3-540-24672-5_47]
18
Rohlfing T, Russakoff DB, Maurer CR. An Expectation Maximization-Like Algorithm for Multi-atlas Multi-label Segmentation. Informatik aktuell 2003. [DOI: 10.1007/978-3-642-18993-7_71]
19
Russakoff DB, Rohlfing T, Ho A, Kim DH, Shahidi R, Adler JR, Maurer CR. Evaluation of Intensity-Based 2D-3D Spine Image Registration Using Clinical Gold-Standard Data. Lecture Notes in Computer Science 2003. [DOI: 10.1007/978-3-540-39701-4_16]