1. Quiroz JC, Brieger D, Jorm LR, Sy RW, Hsu B, Gallego B. Predicting Adverse Outcomes Following Catheter Ablation Treatment for Atrial Flutter/Fibrillation. Heart Lung Circ 2024; 33:470-478. [PMID: 38365498] [DOI: 10.1016/j.hlc.2023.12.016]
Abstract
BACKGROUND & AIM To develop prognostic survival models for predicting adverse outcomes after catheter ablation treatment for non-valvular atrial fibrillation (AF) and/or atrial flutter (AFL). METHODS We used a linked dataset including hospital administrative data, prescription medicine claims, emergency department presentations, and death registrations of patients in New South Wales, Australia. The cohort included patients who received catheter ablation for AF and/or AFL. Traditional and deep survival models were trained to predict major bleeding events and a composite of heart failure, stroke, cardiac arrest, and death. RESULTS Of the 3,285 patients in the cohort, 177 (5.3%) experienced the composite outcome (heart failure, stroke, cardiac arrest, or death) and 167 (5.1%) experienced major bleeding events after catheter ablation treatment. Models predicting the composite outcome showed good risk discrimination, with the best model achieving a concordance index >0.79 at the evaluated time horizons. Models predicting major bleeding events showed poor risk discrimination, with all models having a concordance index <0.66. The features contributing most to predictions of higher risk were comorbidities indicative of poor health, older age, and therapies commonly used in sicker patients to treat heart failure, AF, and AFL. DISCUSSION Diagnosis and medication history did not contain sufficient information for precise prediction of the risk of major bleeding events. Predicting the composite outcome yielded promising results, but future research is needed to validate the usefulness of these models in clinical practice. CONCLUSIONS Machine learning models for predicting the composite outcome have the potential to enable clinicians to proactively identify and manage high-risk patients following catheter ablation for AF and AFL.
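The concordance index reported in this abstract measures the fraction of comparable patient pairs that a survival model ranks correctly. A minimal pure-Python sketch of Harrell's C-index for right-censored data (illustrative only, not the authors' implementation; all names are assumptions):

```python
def concordance_index(times, events, risks):
    """Harrell's C-index for right-censored survival data.
    times:  observed event or censoring times
    events: 1 if the event was observed, 0 if censored
    risks:  model-predicted risk scores (higher = worse prognosis)
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # Pair (i, j) is comparable only when subject i's event was
            # observed and occurred before subject j's recorded time.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0   # correctly ranked pair
                elif risks[i] == risks[j]:
                    concordant += 0.5   # tied risk scores count as half
    return concordant / comparable
```

A value of 1.0 indicates perfect ranking and 0.5 chance-level discrimination, which is why a concordance index above 0.79 for the composite outcome versus below 0.66 for bleeding marks a substantive difference.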
Affiliation(s)
- Juan C Quiroz
- Centre for Big Data Research in Health, University of New South Wales, Sydney, NSW, Australia
- David Brieger
- Department of Cardiology, Concord Repatriation General Hospital, Sydney, NSW, Australia; Faculty of Medicine and Health, University of Sydney, Sydney, NSW, Australia
- Louisa R Jorm
- Centre for Big Data Research in Health, University of New South Wales, Sydney, NSW, Australia
- Raymond W Sy
- Department of Cardiology, Concord Repatriation General Hospital, Sydney, NSW, Australia; Faculty of Medicine and Health, University of Sydney, Sydney, NSW, Australia
- Benjumin Hsu
- Centre for Big Data Research in Health, University of New South Wales, Sydney, NSW, Australia
- Blanca Gallego
- Centre for Big Data Research in Health, University of New South Wales, Sydney, NSW, Australia
2. Factor S, Gurel R, Dan D, Benkovich G, Sagi A, Abialevich A, Benkovich V. Validating a Novel 2D to 3D Knee Reconstruction Method on Preoperative Total Knee Arthroplasty Patient Anatomies. J Clin Med 2024; 13:1255. [PMID: 38592666] [PMCID: PMC10931545] [DOI: 10.3390/jcm13051255]
Abstract
BACKGROUND As advanced technology continues to evolve, incorporating robotics into surgical procedures has become imperative for precision and accuracy in preoperative planning. Nevertheless, the integration of three-dimensional (3D) imaging into these processes presents both financial considerations and potential patient safety concerns. This study aims to assess the accuracy of a novel 2D-to-3D knee reconstruction solution, RSIP XPlan.ai™ (RSIP Vision, Jerusalem, Israel), on preoperative total knee arthroplasty (TKA) patient anatomies. METHODS Accuracy was calculated by measuring the Root Mean Square Error (RMSE) between X-ray-based 3D bone models generated by the algorithm and the corresponding CT bone segmentations (using the distance from each mesh vertex to the closest vertex in the second mesh). The RMSE was computed globally for each bone, locally for eight clinically relevant bony landmark regions, and along simulated bone cut contours. In addition, the accuracy of three anatomical axes was assessed by comparing angular deviations to inter- and intra-observer baseline values. RESULTS The global RMSE was 0.93 ± 0.25 mm for the femur and 0.88 ± 0.14 mm for the tibia. Local RMSE values for bony landmark regions were 0.51 ± 0.33 mm for the five femoral landmarks and 0.47 ± 0.17 mm for the three tibial landmarks. The RMSE along simulated cut contours was 0.75 ± 0.35 mm for the distal femoral cut and 0.63 ± 0.27 mm for the proximal tibial cut. Average angular deviations of the anatomical axes were 1.89° for the transepicondylar axis (inter- and intra-observer baseline: 1.43°), 1.78° for the posterior condylar axis (baseline: 1.71°), and 2.82° for the medial-lateral transverse axis (baseline: 2.56°). CONCLUSIONS The study findings demonstrate promising results regarding the accuracy of XPlan.ai™ in reconstructing 3D bone models from plain-film X-rays. The observed accuracy on real-world TKA patient anatomies in anatomically relevant regions, including bony landmarks, cut contours, and axes, suggests the potential utility of this method in various clinical scenarios. Further validation studies on larger cohorts are warranted to fully assess the reliability and generalizability of our results. Nonetheless, our findings lay the groundwork for potential advancements in future robotic arthroplasty technologies, with XPlan.ai™ offering a promising alternative to conventional CT scans in certain clinical contexts.
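The RMSE metric described in the methods (each vertex of one mesh matched to its closest vertex in the other mesh) can be sketched in a few lines of pure Python. This brute-force version is illustrative only and assumes small vertex lists; a real pipeline would use a spatial index such as a k-d tree:

```python
import math

def closest_vertex_rmse(mesh_a, mesh_b):
    """Root Mean Square Error of closest-vertex distances.
    Each mesh is a list of (x, y, z) vertex coordinates; for every vertex
    in mesh_a we take the distance to its nearest vertex in mesh_b.
    Note the measure is asymmetric: swapping the meshes can change it.
    """
    squared = []
    for ax, ay, az in mesh_a:
        d2 = min((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
                 for bx, by, bz in mesh_b)
        squared.append(d2)
    return math.sqrt(sum(squared) / len(squared))
```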
Affiliation(s)
- Shai Factor
- Division of Orthopedic Surgery, Tel Aviv Medical Center, Faculty of Medicine, Tel Aviv University, Tel Aviv 6423906, Israel
- Ron Gurel
- Division of Orthopedic Surgery, Tel Aviv Medical Center, Faculty of Medicine, Tel Aviv University, Tel Aviv 6423906, Israel
- Dor Dan
- Orthopedic Department, Meir Medical Center, Faculty of Medicine, Tel Aviv University, Tel Aviv 4428164, Israel
- Guy Benkovich
- Orthopedic Department, Sheba Medical Center, Faculty of Medicine, Tel Aviv University, Tel Aviv 5262000, Israel
- Amit Sagi
- Orthopedic Department, Barzilai Medical Center, Ashkelon 78278, Israel
- Faculty of Health Sciences, Ben-Gurion University of the Negev, Beer Sheva 8499000, Israel
- South West London Elective Orthopaedic Centre, Epsom KT18 7EG, UK
- Artsiom Abialevich
- Faculty of Health Sciences, Ben-Gurion University of the Negev, Beer Sheva 8499000, Israel
- Department of Orthopedic Surgery, Soroka Medical Center, Beer Sheva 84101, Israel
- Israeli Joint Health Center, Tel Aviv 69710, Israel
- Vadim Benkovich
- Faculty of Health Sciences, Ben-Gurion University of the Negev, Beer Sheva 8499000, Israel
- Department of Orthopedic Surgery, Soroka Medical Center, Beer Sheva 84101, Israel
- Israeli Joint Health Center, Tel Aviv 69710, Israel
3. Requist MR, Mills MK, Carroll KL, Lenz AL. Quantitative Skeletal Imaging and Image-Based Modeling in Pediatric Orthopaedics. Curr Osteoporos Rep 2024; 22:44-55. [PMID: 38243151] [DOI: 10.1007/s11914-023-00845-z]
Abstract
PURPOSE OF REVIEW Musculoskeletal imaging serves a critical role in clinical care and orthopaedic research. Image-based modeling is also gaining traction as a useful tool for understanding skeletal morphology and mechanics. However, there are fewer studies on advanced imaging and modeling in pediatric populations. The purpose of this review is to provide an overview of recent literature on skeletal imaging modalities and modeling techniques, with a special emphasis on current and future uses in pediatric research and clinical care. RECENT FINDINGS While many principles of imaging and 3D modeling are relevant across the lifespan, there are special considerations for pediatric musculoskeletal imaging and fewer studies of 3D skeletal modeling in pediatric populations. Improved understanding of bone morphology and growth during childhood, in both healthy patients and those with skeletal pathology, may provide new insight into the pathophysiology of pediatric-onset skeletal diseases and the biomechanics of bone development. Clinical translation of 3D modeling tools developed in orthopaedic research is limited by the requirement for manual image segmentation and the resources needed for segmentation, modeling, and analysis. This paper highlights the current and future uses of common musculoskeletal imaging modalities and 3D modeling techniques in pediatric orthopaedic clinical care and research.
Affiliation(s)
- Melissa R Requist
- Department of Orthopaedics, University of Utah, 590 Wakara Way, Salt Lake City, UT, 84108, USA
- Department of Biomedical Engineering, University of Utah, 36 S Wasatch Dr., Salt Lake City, UT, 84112, USA
- Megan K Mills
- Department of Radiology and Imaging Sciences, University of Utah, 30 N Mario Capecchi Dr. 2 South, Salt Lake City, UT, 84112, USA
- Kristen L Carroll
- Department of Orthopaedics, University of Utah, 590 Wakara Way, Salt Lake City, UT, 84108, USA
- Shriners Hospital for Children, 1275 E Fairfax Rd, Salt Lake City, UT, 84103, USA
- Amy L Lenz
- Department of Orthopaedics, University of Utah, 590 Wakara Way, Salt Lake City, UT, 84108, USA
- Department of Biomedical Engineering, University of Utah, 36 S Wasatch Dr., Salt Lake City, UT, 84112, USA
4. Iyer K, Elhabian S. Mesh2SSM: From Surface Meshes to Statistical Shape Models of Anatomy. Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2023; 14220:615-625. [PMID: 38659613] [PMCID: PMC11036176] [DOI: 10.1007/978-3-031-43907-0_59]
Abstract
Statistical shape modeling is the computational process of discovering significant shape parameters from segmented anatomies captured by medical images (such as MRI and CT scans), which can fully describe subject-specific anatomy in the context of a population. The presence of substantial non-linear variability in human anatomy often makes the traditional shape modeling process challenging. Deep learning techniques can learn complex non-linear representations of shapes and generate statistical shape models that are more faithful to the underlying population-level variability. However, existing deep learning models still have limitations and require established/optimized shape models for training. We propose Mesh2SSM, a new approach that leverages unsupervised, permutation-invariant representation learning to estimate how to deform a template point cloud to subject-specific meshes, forming a correspondence-based shape model. Mesh2SSM can also learn a population-specific template, reducing any bias due to template selection. The proposed method operates directly on meshes and is computationally efficient, making it an attractive alternative to traditional and deep learning-based SSM approaches.
Affiliation(s)
- Krithika Iyer
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT, USA
- Kahlert School of Computing, University of Utah, Salt Lake City, UT, USA
- Shireen Elhabian
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT, USA
- Kahlert School of Computing, University of Utah, Salt Lake City, UT, USA
5. Aziz AZB, Adams J, Elhabian S. Progressive DeepSSM: Training Methodology for Image-To-Shape Deep Models. Shape in Medical Imaging (ShapeMI 2023), held in conjunction with MICCAI 2023, Vancouver, BC, Canada; 14350:157-172. [PMID: 38745942] [PMCID: PMC11090218] [DOI: 10.1007/978-3-031-46914-5_13]
Abstract
Statistical shape modeling (SSM) is an enabling quantitative tool for studying anatomical shapes in various medical applications. However, constructing SSMs directly from 3D images remains challenging. Recent deep learning methods have paved the way for reducing the substantial preprocessing otherwise needed to construct SSMs from unsegmented images; nevertheless, the performance of these models still falls short. Inspired by multiscale/multiresolution learning, we propose a new training strategy, progressive DeepSSM, for image-to-shape deep learning models. Training proceeds over multiple scales, each scale utilizing the output of the previous one. This strategy enables the model to learn coarse shape features at the earlier scales and progressively finer shape details at the later scales. We leverage shape priors via segmentation-guided multi-task learning and employ a deep supervision loss to ensure learning at each scale. Experiments show the superiority of models trained with the proposed strategy from both quantitative and qualitative perspectives. The strategy can be adopted by any existing deep learning method for inferring statistical representations of anatomy from medical images, improving model accuracy and training stability.
Affiliation(s)
- Abu Zahid Bin Aziz
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah, USA
- Kahlert School of Computing, University of Utah, Salt Lake City, Utah, USA
- Jadie Adams
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah, USA
- Kahlert School of Computing, University of Utah, Salt Lake City, Utah, USA
- Shireen Elhabian
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah, USA
- Kahlert School of Computing, University of Utah, Salt Lake City, Utah, USA
6. Schmid J, Assassi L, Chênes C. A novel image augmentation based on statistical shape and intensity models: application to the segmentation of hip bones from CT images. Eur Radiol Exp 2023; 7:39. [PMID: 37550543] [PMCID: PMC10406777] [DOI: 10.1186/s41747-023-00357-6]
Abstract
BACKGROUND The collection and annotation of medical images are hindered by data scarcity, privacy and ethical constraints, and limited resources, all of which negatively affect deep learning approaches. Data augmentation is often used to mitigate this problem by generating synthetic images from training sets to improve the efficiency and generalization of deep learning models. METHODS We propose the novel use of statistical shape and intensity models (SSIM) to generate augmented images with variety in both the shape and intensity of imaged structures and their surroundings. The SSIM uses segmentations from training images to create co-registered tetrahedral meshes of the structures and to efficiently encode image intensity in their interior with Bernstein polynomials. In the context of segmenting hip joint (pathological) bones from retrospective computed tomography images of 232 patients, we compared the impact of SSIM-based and basic augmentations on the performance of a U-Net model. RESULTS In a fivefold cross-validation, the SSIM augmentation improved segmentation robustness and accuracy. In particular, the combination of basic and SSIM augmentation outperformed models trained with no augmentation or with only a simple form of augmentation, achieving a Dice similarity coefficient of 0.95 [0.93-0.96] and a Hausdorff distance of 6.16 [4.90-8.08] mm (median [25th-75th percentiles]), comparable to previous work on pathological hip segmentation. CONCLUSIONS We proposed a novel augmentation varying both the shape and appearance of structures in generated images. Tested on bone segmentation, our approach is generalizable to other structures or tasks such as classification, as long as an SSIM can be built from training data.
RELEVANCE STATEMENT Our data augmentation approach produces realistic shape and appearance variations of structures in generated images, which supports the clinical adoption of AI in radiology by alleviating the collection of clinical imaging data and by improving the performance of AI applications.
KEY POINTS
• Data augmentation generally improves the accuracy and generalization of deep learning models.
• Traditional data augmentation does not consider the appearance of imaged structures.
• Statistical shape and intensity models (SSIM) synthetically generate variations of imaged structures.
• SSIMs support novel augmentation approaches, demonstrated with computed tomography bone segmentation.
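The intensity encoding in this abstract relies on Bernstein polynomials. Evaluating one in a single coordinate is straightforward; a minimal sketch (illustrative only, not the authors' code — the paper defines the polynomials over tetrahedral elements rather than a 1D interval):

```python
from math import comb

def bernstein(coeffs, t):
    """Evaluate a 1D Bernstein polynomial of degree n = len(coeffs) - 1
    at t in [0, 1]; coeffs are the control values."""
    n = len(coeffs) - 1
    return sum(c * comb(n, i) * t ** i * (1 - t) ** (n - i)
               for i, c in enumerate(coeffs))
```

Because the basis functions sum to one, constant control values reproduce a constant intensity exactly, while higher degrees can encode smooth intensity variation inside each mesh element.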
Affiliation(s)
- Jérôme Schmid
- Geneva School of Health Sciences, HES-SO University of Applied Sciences and Arts Western Switzerland, Geneva, Switzerland
- Lazhari Assassi
- Geneva School of Health Sciences, HES-SO University of Applied Sciences and Arts Western Switzerland, Geneva, Switzerland
- Christophe Chênes
- Geneva School of Health Sciences, HES-SO University of Applied Sciences and Arts Western Switzerland, Geneva, Switzerland
7. Gaggion N, Mansilla L, Mosquera C, Milone DH, Ferrante E. Improving Anatomical Plausibility in Medical Image Segmentation via Hybrid Graph Neural Networks: Applications to Chest X-Ray Analysis. IEEE Trans Med Imaging 2023; 42:546-556. [PMID: 36423313] [DOI: 10.1109/tmi.2022.3224660]
Abstract
Anatomical segmentation is a fundamental task in medical image computing, generally tackled with fully convolutional neural networks that produce dense segmentation masks. These models are often trained with loss functions such as cross-entropy or Dice, which assume pixels to be independent of each other, thus ignoring topological errors and anatomical inconsistencies. We address this limitation by moving from pixel-level to graph representations, which allow anatomical constraints to be incorporated naturally by construction. To this end, we introduce HybridGNet, an encoder-decoder neural architecture that leverages standard convolutions for image feature encoding and graph convolutional neural networks (GCNNs) to decode plausible representations of anatomical structures. We also propose a novel image-to-graph skip connection layer that allows localized features to flow from standard convolutional blocks to GCNN blocks, and show that it improves segmentation accuracy. The proposed architecture is extensively evaluated in a variety of domain shift and image occlusion scenarios, and audited considering different types of demographic domain shift. Our comprehensive experimental setup compares HybridGNet with other landmark- and pixel-based models for anatomical segmentation in chest X-ray images, and shows that it produces anatomically plausible results in challenging scenarios where other models tend to fail.
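At its core, the graph-convolutional decoding described above is repeated neighborhood aggregation followed by a learned linear map. A toy pure-Python sketch of one such step (mean aggregation with self-loops; illustrative only — the published model's layers may differ, and all names here are assumptions):

```python
def gcn_step(adj, feats, weight):
    """One basic graph-convolution update: average each node's features
    with its neighbors' (self-loop included), then apply a linear map.
    adj:    n x n 0/1 adjacency matrix
    feats:  n x d node feature matrix
    weight: d x d_out linear map
    """
    n, d_out = len(adj), len(weight[0])
    out = []
    for i in range(n):
        nbrs = [j for j in range(n) if adj[i][j] or j == i]
        # Mean-aggregate the neighborhood features.
        agg = [sum(feats[j][k] for j in nbrs) / len(nbrs)
               for k in range(len(feats[0]))]
        # Apply the learned linear transformation.
        out.append([sum(agg[k] * weight[k][m] for k in range(len(weight)))
                    for m in range(d_out)])
    return out
```

Stacking such steps lets each landmark's position be refined using information from its neighbors on the anatomical graph, which is what makes the decoded contours anatomically coherent.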
8. Montalt-Tordera J, Pajaziti E, Jones R, Sauvage E, Puranik R, Singh AAV, Capelli C, Steeden J, Schievano S, Muthurangu V. Automatic segmentation of the great arteries for computational hemodynamic assessment. J Cardiovasc Magn Reson 2022; 24:57. [PMID: 36336682] [PMCID: PMC9639271] [DOI: 10.1186/s12968-022-00891-z]
Abstract
BACKGROUND Computational fluid dynamics (CFD) is increasingly used for the assessment of blood flow conditions in patients with congenital heart disease (CHD). This requires patient-specific anatomy, typically obtained from segmented 3D cardiovascular magnetic resonance (CMR) images. However, segmentation is time-consuming and requires expert input. This study aims to develop and validate a machine learning (ML) method for segmentation of the aorta and pulmonary arteries for CFD studies. METHODS 90 CHD patients were retrospectively selected for this study. 3D CMR images were manually segmented to obtain ground-truth (GT) background, aorta and pulmonary artery labels. These were used to train and optimize a U-Net model, using a 70-10-10 train-validation-test split. Segmentation performance was primarily evaluated using Dice score. CFD simulations were set up from GT and ML segmentations using a semi-automatic meshing and simulation pipeline. Mean pressure and velocity fields across 99 planes along the vessel centrelines were extracted, and a mean average percentage error (MAPE) was calculated for each vessel pair (ML vs GT). A second observer (SO) segmented the test dataset for assessment of inter-observer variability. Friedman tests were used to compare ML vs GT, SO vs GT and ML vs SO metrics, and pressure/velocity field errors. RESULTS The network's Dice score (ML vs GT) was 0.945 (interquartile range: 0.929-0.955) for the aorta and 0.885 (0.851-0.899) for the pulmonary arteries. Differences with the inter-observer Dice score (SO vs GT) and the ML vs SO Dice score were not statistically significant for either the aorta or the pulmonary arteries (p = 0.741, p = 0.061). The ML vs GT MAPEs for pressure and velocity were 10.1% (8.5-15.7%) and 4.1% (3.1-6.9%) in the aorta, and 14.6% (11.5-23.2%) and 6.3% (4.3-7.9%) in the pulmonary arteries. Inter-observer (SO vs GT) and ML vs SO pressure and velocity MAPEs were of a similar magnitude to ML vs GT (p > 0.2). CONCLUSIONS ML can successfully segment the great vessels for CFD, with errors similar to inter-observer variability. This fast, automatic method reduces the time and effort needed for CFD analysis, making it more attractive for routine clinical use.
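The two headline metrics in this abstract are easy to state precisely. A minimal sketch of both (illustrative only; the study computed MAPE over pressure/velocity fields on 99 centreline planes, here reduced to plain lists):

```python
def dice(mask_a, mask_b):
    """Dice overlap of two binary masks given as flat 0/1 sequences:
    2 * |A intersect B| / (|A| + |B|)."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    return 2.0 * inter / (sum(mask_a) + sum(mask_b))

def mape(pred, ref):
    """Mean absolute percentage error between paired field values."""
    return 100.0 * sum(abs(p - r) / abs(r)
                       for p, r in zip(pred, ref)) / len(pred)
```

In practice the masks are flattened 3D label volumes and the paired values are plane-averaged pressures or velocities from the ML-derived and GT-derived simulations.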
Affiliation(s)
- Rod Jones
- Great Ormond Street Hospital, London, UK
- Emilie Sauvage
- UCL Institute of Cardiovascular Science, UCL, London, UK
- Rajesh Puranik
- Children’s Hospital at Westmead, Sydney, Australia
- Faculty of Medicine and Health, University of Sydney, Sydney, Australia
- Aakansha Ajay Vir Singh
- Children’s Hospital at Westmead, Sydney, Australia
- Faculty of Medicine and Health, University of Sydney, Sydney, Australia
9. Adams J, Elhabian S. From Images to Probabilistic Anatomical Shapes: A Deep Variational Bottleneck Approach. Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2022; 13432:474-484. [PMID: 37011237] [PMCID: PMC10063212] [DOI: 10.1007/978-3-031-16434-7_46]
Abstract
Statistical shape modeling (SSM) directly from 3D medical images is an underutilized tool for detecting pathology, diagnosing disease, and conducting population-level morphology analysis. Deep learning frameworks have increased the feasibility of adopting SSM in medical practice by reducing the expert-driven manual and computational overhead of traditional SSM workflows. However, translating such frameworks to clinical practice requires calibrated uncertainty measures, as neural networks can produce over-confident predictions that cannot be trusted in sensitive clinical decision-making. Existing techniques for predicting shape with aleatoric (data-dependent) uncertainty utilize a principal component analysis (PCA)-based shape representation computed in isolation from model training. This constraint restricts the learning task to solely estimating pre-defined shape descriptors from 3D images and imposes a linear relationship between this shape representation and the output (i.e., shape) space. In this paper, we propose a principled framework based on variational information bottleneck theory to relax these assumptions while predicting probabilistic shapes of anatomy directly from images, without supervised encoding of shape descriptors. Here, the latent representation is learned in the context of the learning task, resulting in a more scalable, flexible model that better captures data non-linearity. Additionally, this model is self-regularized and generalizes better given limited training data. Our experiments demonstrate that the proposed method provides an accuracy improvement and better-calibrated aleatoric uncertainty estimates than state-of-the-art methods.
Affiliation(s)
- Jadie Adams
- Scientific Computing and Imaging Institute, University of Utah, UT, USA
- School of Computing, University of Utah, UT, USA
- Shireen Elhabian
- Scientific Computing and Imaging Institute, University of Utah, UT, USA
- School of Computing, University of Utah, UT, USA
10. Xu Y, Raje S, Rountev A, Sabin G, Sukumaran-Rajam A, Sadayappan P. Training of deep learning pipelines on memory-constrained GPUs via segmented fused-tiled execution. Proceedings of the 31st ACM SIGPLAN International Conference on Compiler Construction (CC 2022); 2022:104-116. [PMID: 35876769] [PMCID: PMC9302555] [DOI: 10.1145/3497776.3517766]
Abstract
Training models with massive inputs is a significant challenge in the development of Deep Learning pipelines that process very large digital image datasets, as required by Whole Slide Imaging (WSI) in computational pathology and by the analysis of brain fMRI images in computational neuroscience. Graphics Processing Units (GPUs) are the primary workhorse for training and inference of Deep Learning models. To run inference or training on a neural network pipeline, state-of-the-art machine learning frameworks like PyTorch and TensorFlow currently require that the collective memory on the GPUs be larger than the size of the activations at any stage in the pipeline. Existing Deep Learning pipelines for these use cases have therefore been forced into sub-optimal “patch-based” modeling approaches, in which images are processed in small segments. In this paper, we present a solution to this problem that employs tiling in conjunction with checkpointing, thereby enabling arbitrarily large images to be processed directly, irrespective of the size of global memory on a GPU and the number of available GPUs. Experimental results using PyTorch demonstrate enhanced functionality and performance over existing frameworks.
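The core idea, tiling so that peak activation memory depends on the tile rather than the full image, can be seen in one dimension with a simple valid convolution. A toy sketch (illustrative only; the paper fuses and tiles whole CNN pipelines with checkpointing, which this does not attempt):

```python
def conv_valid(x, k=3):
    """'Valid' moving average with window k over a 1D signal."""
    return [sum(x[i:i + k]) / k for i in range(len(x) - k + 1)]

def conv_tiled(x, k=3, tile=4):
    """Same result computed tile-by-tile: each output tile only needs its
    input slice plus a (k - 1)-element halo, bounding peak memory."""
    n_out = len(x) - k + 1
    out = []
    for start in range(0, n_out, tile):
        stop = min(start + tile, n_out)
        chunk = x[start:stop + k - 1]   # tile inputs + halo
        out.extend(conv_valid(chunk, k))
    return out
```

Because each tile carries exactly the halo its windows need, the tiled result is bit-identical to processing the whole signal at once, which is the equivalence the fused-tiled execution scheme relies on.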
11. Wilms M, Ehrhardt J, Forkert ND. Localized Statistical Shape Models for Large-scale Problems With Few Training Data. IEEE Trans Biomed Eng 2022; 69:2947-2957. [PMID: 35271438] [DOI: 10.1109/tbme.2022.3158278]
Abstract
OBJECTIVE Statistical shape models have been successfully used in numerous biomedical image analysis applications where prior shape information is helpful, such as organ segmentation or data augmentation when training deep learning models. However, training such models requires large datasets, which are often not available; hence, shape models frequently fail to represent local details of unseen shapes. This work introduces a kernel-based method to alleviate this problem via so-called model localization. It is specifically designed for large-scale shape modeling scenarios like deep learning data augmentation and fits seamlessly into the classical shape modeling framework. METHOD Relying on recent advances in multi-level shape model localization via distance-based covariance matrix manipulations and Grassmannian-based level fusion, this work proposes a novel and computationally efficient kernel-based localization technique. Moreover, a novel way to improve the specificity of such models via normalizing flow-based density estimation is presented. RESULTS The method is evaluated on the publicly available JSRT/SCR chest X-ray and IXI brain datasets. The results confirm the effectiveness of the kernelized formulation and also highlight the models' improved specificity when utilizing the proposed density estimation method. CONCLUSION This work shows that flexible and specific shape models can be generated from few training samples in a computationally efficient way by combining ideas from kernel theory and normalizing flows. SIGNIFICANCE The proposed method, together with its publicly available implementation, allows shape models to be built from few training samples that are directly usable for applications like data augmentation.
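The distance-based covariance manipulation named in the methods amounts to tapering correlations between far-apart landmarks so the model gains local flexibility. A minimal sketch using a Gaussian kernel (an assumption for illustration; the paper's multi-level formulation and Grassmannian fusion are not reproduced here):

```python
import math

def localize_covariance(cov, dists, radius):
    """Elementwise-taper a landmark covariance matrix with a Gaussian
    kernel of inter-landmark distance: distant landmarks decorrelate,
    while each landmark's variance (distance 0) is left untouched."""
    n = len(cov)
    return [[cov[i][j] * math.exp(-(dists[i][j] / radius) ** 2)
             for j in range(n)] for i in range(n)]
```

Shrinking long-range covariances toward zero lets a model built from few samples deform one region without dragging distant regions along, which is the localization effect the abstract describes.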
12. Gilbert A, Marciniak M, Rodero C, Lamata P, Samset E, Mcleod K. Generating Synthetic Labeled Data From Existing Anatomical Models: An Example With Echocardiography Segmentation. IEEE Trans Med Imaging 2021; 40:2783-2794. [PMID: 33444134] [PMCID: PMC8493532] [DOI: 10.1109/tmi.2021.3051806]
Abstract
Deep learning can bring time savings and increased reproducibility to medical image analysis. However, acquiring training data is challenging due to the time-intensive nature of labeling and high inter-observer variability in annotations. Rather than labeling images, in this work we propose an alternative pipeline in which images are generated from existing high-quality annotations using generative adversarial networks (GANs). Annotations are derived automatically from previously built anatomical models and are transformed into realistic synthetic ultrasound images with paired labels using a CycleGAN. We demonstrate the pipeline by generating synthetic 2D echocardiography images to compare with existing deep learning ultrasound segmentation datasets. A convolutional neural network is trained to segment the left ventricle and left atrium using only synthetic images. Networks trained with synthetic images were extensively tested on four different unseen datasets of real images, with median Dice scores of 91, 90, 88, and 87 for left ventricle segmentation. These results match or exceed inter-observer results measured on real ultrasound datasets and are comparable to those of a network trained on a separate set of real images. The results demonstrate that the images produced can effectively be used in place of real data for training. The proposed pipeline opens the door to automatic generation of training data for many tasks in medical imaging, as the same process can be applied to other segmentation or landmark detection tasks in any modality. The source code and anatomical models are available to other researchers at https://adgilbert.github.io/data-generation/.
Affiliation(s)
- Andrew Gilbert
- GE Vingmed Ultrasound, GE Healthcare, 3183 Horten, Norway
- Department of Informatics, University of Oslo, 0315 Oslo, Norway
- Maciej Marciniak
- Biomedical Engineering Department, King’s College London, London WC2R 2LS, U.K.
- Cristobal Rodero
- Biomedical Engineering Department, King’s College London, London WC2R 2LS, U.K.
- Pablo Lamata
- Biomedical Engineering Department, King’s College London, London WC2R 2LS, U.K.
- Eigil Samset
- GE Vingmed Ultrasound, GE Healthcare, 3183 Horten, Norway
- Department of Informatics, University of Oslo, 0315 Oslo, Norway
13
Wang L, Guo D, Wang G, Zhang S. Annotation-Efficient Learning for Medical Image Segmentation Based on Noisy Pseudo Labels and Adversarial Learning. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2795-2807. [PMID: 33370237 DOI: 10.1109/tmi.2020.3047807] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Although deep learning has achieved state-of-the-art performance for medical image segmentation, its success relies on a large set of manually annotated training images that are expensive to acquire. In this paper, we propose an annotation-efficient learning framework for segmentation tasks that avoids annotation of training images, using an improved Cycle-Consistent Generative Adversarial Network (GAN) to learn from a set of unpaired medical images and auxiliary masks obtained either from a shape model or from public datasets. We first use the GAN to generate pseudo labels for our training images under an implicit high-level shape constraint represented by a Variational Auto-encoder (VAE)-based discriminator, with the help of the auxiliary masks, and build a Discriminator-guided Generator Channel Calibration (DGCC) module that employs the discriminator's feedback to calibrate the generator for better pseudo labels. To learn from the noisy pseudo labels, we further introduce a noise-robust iterative learning method using a noise-weighted Dice loss. We validated our framework in two settings: objects with a simple shape model, such as the optic disc in fundus images and the fetal head in ultrasound images, and complex structures, such as the lung in X-ray images and the liver in CT images. Experimental results demonstrated that 1) our VAE-based discriminator and DGCC module help to obtain high-quality pseudo labels; 2) our noise-robust learning method can effectively overcome the effect of noisy pseudo labels; and 3) the segmentation performance of our method without using annotations of training images is close, and in some cases comparable, to that of learning from human annotations.
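The noise-weighted Dice loss described above down-weights pixels whose pseudo labels are likely noisy. The paper's exact weighting scheme is not reproduced here; the per-pixel weights below are an illustrative assumption, as are the function names:

```python
def weighted_dice_loss(probs, pseudo, weights, eps=1e-6):
    """Soft Dice loss with per-pixel weights (all inputs are flat lists).

    probs:   predicted foreground probabilities in [0, 1]
    pseudo:  (possibly noisy) pseudo labels in {0, 1}
    weights: per-pixel confidence weights; a low weight means the pseudo
             label at that pixel is suspected to be noise
    """
    inter = sum(w * p * y for w, p, y in zip(weights, probs, pseudo))
    denom = sum(w * (p + y) for w, p, y in zip(weights, probs, pseudo))
    return 1.0 - 2.0 * inter / (denom + eps)

# Down-weighting a pixel where the prediction disagrees with its pseudo
# label reduces the penalty for that disagreement.
loss_full = weighted_dice_loss([1.0, 0.0], [1, 1], [1.0, 1.0])
loss_down = weighted_dice_loss([1.0, 0.0], [1, 1], [1.0, 0.1])
print(loss_down < loss_full)  # True
```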
14
Chlap P, Min H, Vandenberg N, Dowling J, Holloway L, Haworth A. A review of medical image data augmentation techniques for deep learning applications. J Med Imaging Radiat Oncol 2021; 65:545-563. [PMID: 34145766 DOI: 10.1111/1754-9485.13261] [Citation(s) in RCA: 130] [Impact Index Per Article: 43.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2021] [Accepted: 05/23/2021] [Indexed: 12/21/2022]
Abstract
Research in artificial intelligence for radiology and radiotherapy has recently become increasingly reliant on deep learning-based algorithms. While the models these algorithms produce can significantly outperform more traditional machine learning methods, they rely on larger datasets being available for training. To address this issue, data augmentation has become a popular method for increasing the size of a training dataset, particularly in fields where large datasets are not typically available, as is often the case with medical images. Data augmentation aims to generate additional data for training the model and has been shown to improve performance when validated on a separate unseen dataset. Because this approach has become commonplace, to help understand the types of data augmentation techniques used in state-of-the-art deep learning models we conducted a systematic review of the literature in which data augmentation was utilised on medical images (limited to CT and MRI) to train a deep learning model. Articles were categorised into basic, deformable, deep learning, or other data augmentation techniques. As artificial intelligence models trained using augmented data make their way into the clinic, this review aims to give insight into these techniques and confidence in the validity of the models produced.
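The "basic" augmentation category reviewed above covers simple geometric transforms such as flips and rotations. A minimal sketch on a 2D image stored as nested lists (purely illustrative, not code from the review):

```python
def hflip(img):
    """Mirror a 2D image (list of rows) left-right."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate a 2D image 90 degrees counter-clockwise."""
    return [list(row) for row in zip(*img)][::-1]

img = [[1, 2],
       [3, 4]]
print(hflip(img))  # [[2, 1], [4, 3]]
print(rot90(img))  # [[2, 4], [1, 3]]
```

Each transform yields a new, label-consistent training sample when the same transform is applied to the image and its annotation.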
Affiliation(s)
- Phillip Chlap
- South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia; Ingham Institute for Applied Medical Research, Sydney, New South Wales, Australia; Liverpool and Macarthur Cancer Therapy Centre, Liverpool Hospital, Sydney, New South Wales, Australia
- Hang Min
- South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia; Ingham Institute for Applied Medical Research, Sydney, New South Wales, Australia; The Australian e-Health and Research Centre, CSIRO Health and Biosecurity, Brisbane, Queensland, Australia
- Nym Vandenberg
- Institute of Medical Physics, University of Sydney, Sydney, New South Wales, Australia
- Jason Dowling
- South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia; The Australian e-Health and Research Centre, CSIRO Health and Biosecurity, Brisbane, Queensland, Australia
- Lois Holloway
- South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia; Ingham Institute for Applied Medical Research, Sydney, New South Wales, Australia; Liverpool and Macarthur Cancer Therapy Centre, Liverpool Hospital, Sydney, New South Wales, Australia; Institute of Medical Physics, University of Sydney, Sydney, New South Wales, Australia; Centre for Medical Radiation Physics, University of Wollongong, Wollongong, New South Wales, Australia
- Annette Haworth
- Institute of Medical Physics, University of Sydney, Sydney, New South Wales, Australia
15
Bhalodia R, Kavan L, Whitaker RT. Self-Supervised Discovery of Anatomical Shape Landmarks. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2020; 12264:627-638. [PMID: 33778817 PMCID: PMC7993653 DOI: 10.1007/978-3-030-59719-1_61] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Statistical shape analysis is a useful tool in a wide range of medical and biological applications. However, it typically relies on the ability to produce a relatively small number of features that can capture the relevant variability in a population. State-of-the-art methods for obtaining such anatomical features rely on extensive preprocessing or segmentation and/or significant tuning and post-processing. These shortcomings limit the widespread use of shape statistics. We propose that effective shape representations should provide sufficient information to align/register images. Using this assumption, we propose a self-supervised neural network approach for automatically positioning and detecting landmarks in images that can be used for subsequent analysis. The network discovers the landmarks corresponding to anatomical shape features that promote good image registration in the context of a particular class of transformations. In addition, we propose a regularization for the network that encourages a uniform spatial distribution of the discovered landmarks. In this paper, we present a complete framework, which takes only a set of input images and produces landmarks that are immediately usable for statistical shape analysis. We evaluate the performance on a phantom dataset as well as 2D and 3D images.
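The premise above, that good shape features should suffice to align images, can be illustrated with the simplest registration case: recovering a translation from paired landmarks by least squares. This is a toy illustration under assumed names, not the paper's network:

```python
def fit_translation(src, dst):
    """Least-squares translation aligning paired 2D landmarks.

    For a pure translation, the optimal shift is the difference of the
    two landmark-set centroids.
    """
    n = len(src)
    cx_s = sum(x for x, _ in src) / n
    cy_s = sum(y for _, y in src) / n
    cx_d = sum(x for x, _ in dst) / n
    cy_d = sum(y for _, y in dst) / n
    return (cx_d - cx_s, cy_d - cy_s)

# Landmarks in the target image are the source landmarks shifted by (2, 3)
src = [(0, 0), (2, 0), (1, 3)]
dst = [(2, 3), (4, 3), (3, 6)]
print(fit_translation(src, dst))  # (2.0, 3.0)
```

Richer transformation classes (rigid, affine) admit similar closed-form or iterative fits, which is what makes landmark quality measurable through registration error.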
Affiliation(s)
- Riddhish Bhalodia
- Scientific Computing and Imaging Institute, University of Utah
- School of Computing, University of Utah
- Ross T Whitaker
- Scientific Computing and Imaging Institute, University of Utah
- School of Computing, University of Utah
16
Adams J, Bhalodia R, Elhabian S. Uncertain-DeepSSM: From Images to Probabilistic Shape Models. SHAPE IN MEDICAL IMAGING : INTERNATIONAL WORKSHOP, SHAPEMI 2020, HELD IN CONJUNCTION WITH MICCAI 2020, LIMA, PERU, OCTOBER 4, 2020, PROCEEDINGS 2020; 12474:57-72. [PMID: 33817703 PMCID: PMC8011333 DOI: 10.1007/978-3-030-61056-2_5] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Statistical shape modeling (SSM) has recently taken advantage of advances in deep learning to alleviate the need for a time-consuming and expert-driven workflow of anatomy segmentation, shape registration, and optimization of population-level shape representations. DeepSSM is an end-to-end deep learning approach that extracts statistical shape representations directly from unsegmented images with little manual overhead. It performs comparably with state-of-the-art shape modeling methods for estimating morphologies that are viable for subsequent downstream tasks. Nonetheless, DeepSSM produces an overconfident estimate of shape that cannot be blindly assumed to be accurate. Hence, conveying what DeepSSM does not know, via granular estimates of uncertainty, is critical for its direct clinical application as an on-demand diagnostic tool and for determining how trustworthy the model output is. Here, we propose Uncertain-DeepSSM as a unified model that quantifies both data-dependent aleatoric uncertainty, by adapting the network to predict intrinsic input variance, and model-dependent epistemic uncertainty, via Monte Carlo dropout sampling to approximate a variational distribution over the network parameters. Experiments show an accuracy improvement over DeepSSM while maintaining the same benefits of being end-to-end with little pre-processing.
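The Monte Carlo dropout approximation mentioned above keeps dropout active at test time and treats the spread of repeated stochastic forward passes as epistemic uncertainty. A toy sketch with a stand-in stochastic predictor (the real network, dropout rate, and sample count are the paper's, not reproduced here):

```python
import random
import statistics

def mc_dropout_predict(forward, x, n_samples=100):
    """Run n stochastic forward passes through a model with dropout left on.

    Returns (mean prediction, standard deviation); the standard deviation
    serves as the epistemic uncertainty estimate.
    """
    samples = [forward(x) for _ in range(n_samples)]
    return statistics.mean(samples), statistics.stdev(samples)

def noisy_forward(x, p_drop=0.5):
    """Stand-in for a network pass: randomly drops its (single) unit,
    with inverted-dropout scaling so the expectation is unchanged."""
    keep = 1.0 if random.random() > p_drop else 0.0
    return x * keep / (1.0 - p_drop)

random.seed(0)
mean, std = mc_dropout_predict(noisy_forward, 3.0)
# mean is close to the input 3.0; std > 0 quantifies model uncertainty
```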
Affiliation(s)
- Jadie Adams
- Scientific Computing and Imaging Institute, University of Utah, UT, USA
- School of Computing, University of Utah, UT, USA
- Riddhish Bhalodia
- Scientific Computing and Imaging Institute, University of Utah, UT, USA
- School of Computing, University of Utah, UT, USA
- Shireen Elhabian
- Scientific Computing and Imaging Institute, University of Utah, UT, USA
- School of Computing, University of Utah, UT, USA
17
A statistical shape modeling approach for predicting subject-specific human skull from head surface. Med Biol Eng Comput 2020; 58:2355-2373. [DOI: 10.1007/s11517-020-02219-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2019] [Accepted: 06/25/2020] [Indexed: 10/23/2022]
18
Pressler MP, Geisler EL, Hallac RR, Seaward JR, Kane AA. The Use of Eye Tracking to Discern the Threshold at Which Metopic Orbitofrontal Deformity Attracts Attention. Cleft Palate Craniofac J 2020; 57:1392-1401. [PMID: 32489115 DOI: 10.1177/1055665620926014] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
INTRODUCTION AND OBJECTIVES Surgical treatment for trigonocephaly aims to eliminate a stigmatizing deformity, yet the severity that captures unwanted attention is unknown. Surgeons intervene at different points of severity, eliciting controversy. This study used eye tracking to investigate when deformity is perceived. MATERIAL AND METHODS Three-dimensional photogrammetric images of a normal child and a child with trigonocephaly were mathematically deformed, in 10% increments, to create a spectrum of 11 images. These images were shown to participants using an eye tracker. Participants' gaze patterns were analyzed, and participants were asked if each image looked "normal" or "abnormal." RESULTS Sixty-six graduate students were recruited. Average dwell time toward pathologic areas of interest (AOIs) increased proportionally, from 0.77 ± 0.33 seconds at 0% deformity to 1.08 ± 0.75 seconds at 100% deformity (P < .0001). A majority of participants did not agree an image looked "abnormal" until 90% deformity from any angle. CONCLUSION Eye tracking can be used as a proxy for attention threshold toward orbitofrontal deformity. The amount of attention toward orbitofrontal AOIs increased proportionally with severity. Participants did not generally agree there was "abnormality" until deformity was severe. This study supports the assertion that surgical intervention may be best reserved for more severe deformity.
Affiliation(s)
- Mark P Pressler
- Department of Plastic Surgery, UT Southwestern, Dallas, TX, USA; Analytical Imaging and Modeling Center, Children's Medical Center, Dallas, TX, USA
- Emily L Geisler
- Department of Plastic Surgery, UT Southwestern, Dallas, TX, USA; Analytical Imaging and Modeling Center, Children's Medical Center, Dallas, TX, USA
- Rami R Hallac
- Department of Plastic Surgery, UT Southwestern, Dallas, TX, USA; Analytical Imaging and Modeling Center, Children's Medical Center, Dallas, TX, USA
- James R Seaward
- Department of Plastic Surgery, UT Southwestern, Dallas, TX, USA; Analytical Imaging and Modeling Center, Children's Medical Center, Dallas, TX, USA
- Alex A Kane
- Department of Plastic Surgery, UT Southwestern, Dallas, TX, USA; Analytical Imaging and Modeling Center, Children's Medical Center, Dallas, TX, USA