1
Aresta G, Araujo T, Reiter GS, Mai J, Riedl S, Grechenig C, Guymer RH, Wu Z, Schmidt-Erfurth U, Bogunovic H. Deep Neural Networks for Automated Outer Plexiform Layer Subsidence Detection on Retinal OCT of Patients With Intermediate AMD. Transl Vis Sci Technol 2024; 13:7. PMID: 38874975. DOI: 10.1167/tvst.13.6.7. Open access.
Abstract
Purpose: Subsidence of the outer plexiform layer (OPL) is an important imaging biomarker on optical coherence tomography (OCT), associated with early outer retinal atrophy and a risk factor for progression to geographic atrophy in patients with intermediate age-related macular degeneration (AMD). Deep neural networks (DNNs) can support automated detection and localization of this biomarker on OCT.
Methods: The method predicts potential OPL subsidence locations on retinal OCT volumes. A detection module (DM) infers bounding boxes around subsidences with a likelihood score, and a classification module (CM) assesses subsidence presence at the B-scan level. Overlapping boxes between B-scans are merged and scored by the product of the DM and CM predictions; the volume-wise score is the maximum prediction across all B-scans. One development data set and one independent external data set were used, with 140 and 26 patients with AMD, respectively.
Results: The system detected more than 85% of OPL subsidences with fewer than one false positive (FP) per scan. The average area under the curve was 0.94 ± 0.03 for volume-level detection. Similar or better performance was achieved on the independent external data set.
Conclusions: DNN systems can efficiently perform automated retinal layer subsidence detection in retinal OCT images. In particular, the proposed system detects OPL subsidence with high sensitivity and very few FP detections.
Translational Relevance: DNNs enable objective identification of early signs associated with a high risk of progression to the atrophic late stage of AMD, making them well suited for screening and for assessing the efficacy of interventions aiming to slow disease progression.
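The two-module scoring rule described in the abstract (combined box score = DM likelihood × B-scan CM probability; volume-wise score = maximum over B-scans) reduces to a few lines. The sketch below is illustrative only, with hypothetical inputs, and is not the authors' code.

```python
# Illustrative sketch of the two-module scoring described in the abstract.
# dm_boxes: list of (b_scan_index, box, dm_likelihood) from the detection module.
# cm_probs: per-B-scan subsidence probabilities from the classification module.

def combined_box_scores(dm_boxes, cm_probs):
    """Score each detected box by the product of DM and CM predictions."""
    return [(b, box, dm * cm_probs[b]) for b, box, dm in dm_boxes]

def volume_score(cm_probs):
    """Volume-wise score: the maximum prediction across all B-scans."""
    return max(cm_probs)
```

For example, a box with DM likelihood 0.5 on a B-scan whose CM probability is 0.8 receives a combined score of 0.4.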
Affiliation(s)
- Guilherme Aresta: Christian Doppler Laboratory for Artificial Intelligence in Retina, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Teresa Araujo: Christian Doppler Laboratory for Artificial Intelligence in Retina, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Gregor S Reiter: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Julia Mai: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Sophie Riedl: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Christoph Grechenig: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Robyn H Guymer: Centre for Eye Research Australia, The Royal Victorian Eye and Ear Hospital, East Melbourne, VIC, Australia; Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, VIC, Australia
- Zhichao Wu: Centre for Eye Research Australia, The Royal Victorian Eye and Ear Hospital, East Melbourne, VIC, Australia; Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, VIC, Australia
- Ursula Schmidt-Erfurth: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Hrvoje Bogunovic: Christian Doppler Laboratory for Artificial Intelligence in Retina, and Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
2
Chernyshov A, Grue JF, Nyberg J, Grenne B, Dalen H, Aase SA, Østvik A, Lovstakken L. Automated Segmentation and Quantification of the Right Ventricle in 2-D Echocardiography. Ultrasound Med Biol 2024; 50:540-548. PMID: 38290912. DOI: 10.1016/j.ultrasmedbio.2023.12.018. Received 09/13/2023; revised 12/04/2023; accepted 12/18/2023.
Abstract
Objective: The right ventricle receives less attention than its left counterpart in echocardiography research, practice, and the development of automated solutions. In the work described here, we sought to determine whether deep learning methods for automated segmentation of the left ventricle in 2-D echocardiograms are also valid for the right ventricle. We also describe and explore a keypoint detection approach to segmentation that guards against the erratic behavior often displayed by segmentation models.
Methods: We used a data set of echocardiographic images focused on the right ventricle from 250 participants to train and evaluate several deep learning models for segmentation and keypoint detection. We propose a compact architecture (U-Net KP) employing the latter approach, designed to balance high speed with accuracy and robustness.
Results: All featured models achieved segmentation accuracy close to the inter-observer variability. When computing metrics of right ventricular systolic function from the contour predictions of U-Net KP, we obtained a bias and 95% limits of agreement of 0.8 ± 10.8% for right ventricular fractional area change, -0.04 ± 0.54 cm for tricuspid annular plane systolic excursion, and 0.2 ± 6.6% for right ventricular free wall strain. These results were comparable to the semi-automatically derived inter-observer discrepancies of 0.4 ± 11.8%, -0.37 ± 0.58 cm, and -1.0 ± 7.7% for the same metrics, respectively.
Conclusion: Given appropriate data, automated segmentation and quantification of the right ventricle in 2-D echocardiography are feasible with existing methods. However, keypoint detection architectures may offer higher robustness and information density for the same computational cost.
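The "bias ± 95% limits of agreement" figures above follow the standard Bland-Altman convention (bias = mean paired difference; limits = bias ± 1.96 × SD of the differences). A minimal, generic sketch of that computation, not taken from the paper:

```python
import statistics

def bland_altman(auto, manual):
    """Bias and 95% limits of agreement between paired measurements.

    auto, manual: equal-length sequences of automated and reference values.
    Returns (bias, (lower_limit, upper_limit)).
    """
    diffs = [a - m for a, m in zip(auto, manual)]
    bias = statistics.mean(diffs)
    half_width = 1.96 * statistics.stdev(diffs)  # sample SD of the differences
    return bias, (bias - half_width, bias + half_width)
```

Applied to, e.g., fractional area change measurements, this yields numbers in the same "bias ± half-width" form the abstract reports.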
Affiliation(s)
- Artem Chernyshov: Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- Jahn Frederik Grue: Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- John Nyberg: Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- Bjørnar Grenne: Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway; Clinic of Cardiology, St. Olav's Hospital, Trondheim, Norway
- Håvard Dalen: Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway; Clinic of Cardiology, St. Olav's Hospital, Trondheim, Norway
- Andreas Østvik: Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway; Department of Health Research, SINTEF Digital, Trondheim, Norway
- Lasse Lovstakken: Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
3
Liu Z, Kainth K, Zhou A, Deyer TW, Fayad ZA, Greenspan H, Mei X. A review of self-supervised, generative, and few-shot deep learning methods for data-limited magnetic resonance imaging segmentation. NMR Biomed 2024:e5143. PMID: 38523402. DOI: 10.1002/nbm.5143. Received 06/01/2023; revised 02/15/2024; accepted 02/16/2024.
Abstract
Magnetic resonance imaging (MRI) is a ubiquitous medical imaging technology with applications in disease diagnostics, intervention, and treatment planning. Accurate MRI segmentation is critical for diagnosing abnormalities, monitoring diseases, and deciding on a course of treatment. With the advent of advanced deep learning frameworks, fully automated and accurate MRI segmentation is advancing. Traditional supervised deep learning techniques have advanced tremendously, reaching clinical-level accuracy in segmentation. However, these algorithms still require large amounts of annotated data, which are often unavailable or impractical to obtain. One way to circumvent this issue is to use algorithms that exploit a limited amount of labeled data. This paper reviews such state-of-the-art algorithms that use a limited number of annotated samples. We explain the fundamental principles of self-supervised learning, generative models, few-shot learning, and semi-supervised learning, and summarize their applications in cardiac, abdominal, and brain MRI segmentation. Throughout this review, we highlight algorithms that can be employed based on the quantity of annotated data available. We also present a comprehensive list of notable publicly available MRI segmentation datasets. To conclude, we discuss possible future directions of the field, including emerging algorithms such as contrastive language-image pretraining and potential combinations across the methods discussed, which can further increase the efficacy of image segmentation with limited labels.
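One of the label-efficient families the review covers, semi-supervised learning, is often implemented via confidence-thresholded pseudo-labeling: a model trained on the small labeled set labels the unlabeled pool, and only high-confidence predictions are recycled as training targets. The sketch below is a generic illustration of that recipe, not an algorithm from the review; `predict_proba` is a hypothetical stand-in for any trained model.

```python
# Generic pseudo-labeling sketch for semi-supervised learning.
# `predict_proba` is a hypothetical model returning per-class probabilities.

def pseudo_label(unlabeled, predict_proba, threshold=0.9):
    """Keep only unlabeled samples the current model is confident about."""
    selected = []
    for x in unlabeled:
        probs = predict_proba(x)
        label = max(range(len(probs)), key=probs.__getitem__)  # argmax class
        if probs[label] >= threshold:
            selected.append((x, label))  # treat the prediction as a label
    return selected
```

The selected pairs are then mixed into the labeled set for the next training round; the threshold trades pseudo-label quantity against quality.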
Affiliation(s)
- Zelong Liu: BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Komal Kainth: BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Alexander Zhou: BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Timothy W Deyer: East River Medical Imaging, New York, New York, USA; Department of Radiology, Cornell Medicine, New York, New York, USA
- Zahi A Fayad: BioMedical Engineering and Imaging Institute, and Department of Diagnostic, Molecular, and Interventional Radiology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Hayit Greenspan: BioMedical Engineering and Imaging Institute, and Department of Diagnostic, Molecular, and Interventional Radiology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Xueyan Mei: BioMedical Engineering and Imaging Institute, and Department of Diagnostic, Molecular, and Interventional Radiology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
4
Morales MA, Manning WJ, Nezafat R. Present and Future Innovations in AI and Cardiac MRI. Radiology 2024; 310:e231269. PMID: 38193835. PMCID: PMC10831479. DOI: 10.1148/radiol.231269. Received 05/17/2023; revised 10/21/2023; accepted 10/26/2023.
Abstract
Cardiac MRI is used to diagnose and treat patients with a multitude of cardiovascular diseases. Despite the growth of clinical cardiac MRI, complicated image prescriptions and long acquisition protocols limit the specialty and restrain its impact on the practice of medicine. Artificial intelligence (AI), the ability to mimic human intelligence in learning and performing tasks, will impact nearly all aspects of MRI. Deep learning (DL) primarily uses artificial neural networks to learn a specific task from example data sets. Self-driving scanners, in which AI automatically controls cardiac image prescriptions, are increasingly available. These scanners offer faster image collection with higher spatial and temporal resolution, eliminating the need for cardiac triggering or breath holding. In the future, fully automated inline image analysis will most likely provide all contour drawings and initial measurements to the reader. Advanced analysis using radiomic or DL features may provide new insights and information not typically extracted in the current analysis workflow. AI may further help integrate these features with clinical, genetic, wearable-device, and "omics" data to improve patient outcomes. This article presents an overview of AI and its application in cardiac MRI, including image acquisition, reconstruction, and processing, and discusses opportunities for more personalized cardiovascular care through extraction of novel imaging markers.
Affiliation(s)
- Manuel A. Morales: Department of Medicine, Cardiovascular Division, Beth Israel Deaconess Medical Center and Harvard Medical School, 330 Brookline Ave, Boston, MA 02215
- Warren J. Manning: Department of Medicine, Cardiovascular Division, and Department of Radiology, Beth Israel Deaconess Medical Center and Harvard Medical School, 330 Brookline Ave, Boston, MA 02215
- Reza Nezafat: Department of Medicine, Cardiovascular Division, Beth Israel Deaconess Medical Center and Harvard Medical School, 330 Brookline Ave, Boston, MA 02215
5
Beetz M, Banerjee A, Ossenberg-Engels J, Grau V. Multi-class point cloud completion networks for 3D cardiac anatomy reconstruction from cine magnetic resonance images. Med Image Anal 2023; 90:102975. PMID: 37804586. DOI: 10.1016/j.media.2023.102975. Received 09/13/2022; revised 07/08/2023; accepted 09/18/2023.
Abstract
Cine magnetic resonance imaging (MRI) is the current gold standard for the assessment of cardiac anatomy and function. However, it typically only acquires a set of two-dimensional (2D) slices of the underlying three-dimensional (3D) anatomy of the heart, thus limiting the understanding and analysis of both healthy and pathological cardiac morphology and physiology. In this paper, we propose a novel fully automatic surface reconstruction pipeline capable of reconstructing multi-class 3D cardiac anatomy meshes from raw cine MRI acquisitions. Its key component is a multi-class point cloud completion network (PCCN) capable of correcting both the sparsity and misalignment issues of the 3D reconstruction task in a unified model. We first evaluate the PCCN on a large synthetic dataset of biventricular anatomies and observe Chamfer distances between reconstructed and gold standard anatomies below or similar to the underlying image resolution for multiple levels of slice misalignment. Furthermore, we find a reduction in reconstruction error compared to a benchmark 3D U-Net by 32% and 24% in terms of Hausdorff distance and mean surface distance, respectively. We then apply the PCCN as part of our automated reconstruction pipeline to 1000 subjects from the UK Biobank study in a cross-domain transfer setting and demonstrate its ability to reconstruct accurate and topologically plausible biventricular heart meshes with clinical metrics comparable to the previous literature. Finally, we investigate the robustness of our proposed approach and observe its capacity to successfully handle multiple common outlier conditions.
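The abstract's primary reconstruction metric, the Chamfer distance between a predicted and a gold-standard point cloud, has a standard definition: for each point in one set, take the squared distance to its nearest neighbor in the other set, average, and sum both directions. A toy implementation for illustration (the authors' exact variant and scaling may differ):

```python
# Toy symmetric Chamfer distance between two point clouds
# (lists of coordinate tuples). O(n*m); real pipelines use k-d trees or GPU ops.

def chamfer_distance(a, b):
    """Sum of mean squared nearest-neighbor distances, in both directions."""
    def sq_dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    a_to_b = sum(min(sq_dist(p, q) for q in b) for p in a) / len(a)
    b_to_a = sum(min(sq_dist(q, p) for p in a) for q in b) / len(b)
    return a_to_b + b_to_a
```

A value near zero means every reconstructed point lies close to the reference surface and vice versa, which is why the paper compares it against the underlying image resolution.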
Affiliation(s)
- Marcel Beetz: Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford OX3 7DQ, UK
- Abhirup Banerjee: Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford OX3 7DQ, UK; Division of Cardiovascular Medicine, Radcliffe Department of Medicine, University of Oxford, Oxford OX3 9DU, UK
- Julius Ossenberg-Engels: Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford OX3 7DQ, UK
- Vicente Grau: Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford OX3 7DQ, UK
6
Wang X, Li X, Du R, Zhong Y, Lu Y, Song T. Anatomical Prior-Based Automatic Segmentation for Cardiac Substructures from Computed Tomography Images. Bioengineering (Basel) 2023; 10:1267. PMID: 38002391. PMCID: PMC10669053. DOI: 10.3390/bioengineering10111267. Received 07/18/2023; revised 10/12/2023; accepted 10/24/2023. Open access.
Abstract
Cardiac substructure segmentation is a prerequisite for cardiac diagnosis and treatment, providing a basis for accurate calculation, modeling, and analysis of the entire cardiac structure. Computed tomography (CT) imaging enables noninvasive qualitative and quantitative evaluation of cardiac anatomy and function. Cardiac substructures have diverse grayscales, fuzzy boundaries, irregular shapes, and variable locations. We designed a deep learning-based framework to improve the accuracy of automatic segmentation of cardiac substructures. The framework integrates cardiac anatomical knowledge: it uses prior knowledge of the location, shape, and scale of cardiac substructures and processes structures of different scales separately. Through two successive segmentation steps with a coarse-to-fine cascaded network, the more easily segmented substructures were coarsely segmented first; then, the more difficult substructures were finely segmented. The coarse segmentation result was used as prior information and combined with the original image as the input to the model. Anatomical knowledge of the large-scale substructures was embedded into the fine segmentation network to guide and train the small-scale substructures, achieving efficient and accurate segmentation of ten cardiac substructures. Sixty cardiac CT images, with ten substructures manually delineated by experienced radiologists, were retrospectively collected, and the model was evaluated using the Dice similarity coefficient (DSC), recall, precision, and the Hausdorff distance. Compared with current mainstream segmentation models, our approach demonstrated significantly higher segmentation accuracy, with accurate segmentation of ten substructures of different shapes and sizes, indicating that a segmentation framework fused with prior anatomical knowledge has superior performance and can better segment small targets in multi-target segmentation tasks.
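The DSC reported above is the standard overlap measure for segmentation: twice the intersection of predicted and reference masks divided by the sum of their sizes. A minimal reference implementation on flat binary masks, for illustration only:

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks
    (flat sequences of 0/1 values of equal length)."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks count as perfect agreement.
    return 2.0 * intersection / total if total else 1.0
```

DSC ranges from 0 (no overlap) to 1 (identical masks); for small substructures it penalizes boundary errors more heavily than voxel accuracy does, which is why it is paired here with the Hausdorff distance.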
Grants
- Grants 12126610, 81971691, 81801809, 81830052, 81827802, U1811461, 201804020053, 2018B030312002, 20190302108GX, 18DZ2260400, 2020B1212060032, and 2021B0101190003 (Yao Lu)
Affiliation(s)
- Xuefang Wang: Shien-Ming Wu School of Intelligent Engineering, South China University of Technology, Guangzhou 511400, China
- Xinyi Li: Department of Radiology, The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou 510150, China
- Ruxu Du: Guangzhou Janus Biotechnology Co., Ltd., Guangzhou 511400, China
- Yong Zhong: Shien-Ming Wu School of Intelligent Engineering, South China University of Technology, Guangzhou 511400, China
- Yao Lu: School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510275, China; Guangdong Province Key Laboratory of Computational Science, Sun Yat-sen University, Guangzhou 510275, China; State Key Laboratory of Oncology in South China, Guangzhou 510060, China
- Ting Song: Department of Radiology, The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou 510150, China