1
Aziz AZB, Adams J, Elhabian S. Progressive DeepSSM: Training Methodology for Image-to-Shape Deep Models. In: Shape in Medical Imaging: International Workshop, ShapeMI 2023, Held in Conjunction with MICCAI 2023, Vancouver, BC, Canada, October 8, 2023, Proceedings. 2023;14350:157-172. PMID: 38745942. PMCID: PMC11090218. DOI: 10.1007/978-3-031-46914-5_13.
Abstract
Statistical shape modeling (SSM) is an enabling quantitative tool for studying anatomical shapes in various medical applications. However, constructing SSMs directly from 3D images in these applications remains challenging. Recent deep learning methods have paved the way for reducing the substantial preprocessing needed to construct SSMs directly from unsegmented images; nevertheless, the performance of these models is still unsatisfactory. Inspired by multiscale/multiresolution learning, we propose a new training strategy, progressive DeepSSM, for image-to-shape deep learning models. Training proceeds in multiple scales, and each scale utilizes the output of the previous scale. This strategy enables the model to learn coarse shape features in the early scales and gradually learn fine shape details in the later scales. We leverage shape priors via segmentation-guided multi-task learning and employ a deep supervision loss to ensure learning at each scale. Experiments show the superiority of models trained with the proposed strategy from both quantitative and qualitative perspectives. This training methodology can be adopted by any deep learning method that infers statistical representations of anatomy from medical images, improving model accuracy and training stability.
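The abstract describes training in stages, with each scale building on the previous one and a deep supervision loss applied at every active scale. The paper's exact staging and weighting are not given in the abstract; the `decay` weighting and stage schedule below are assumptions, sketched minimally to show how per-scale losses could be combined so that the finest active scale dominates while coarser scales keep receiving gradient signal:

```python
def active_scales(stage, num_scales):
    """Scales trained at a given progressive stage: scale 0 (coarsest)
    through `stage`, capped at the total number of scales."""
    return list(range(min(stage + 1, num_scales)))

def deep_supervision_loss(per_scale_losses, stage, decay=0.5):
    """Combine per-scale losses with deep supervision.

    Every active scale contributes to the total so each scale keeps
    learning; coarser scales are down-weighted by `decay` per level
    below the finest active scale. (Hypothetical weighting scheme,
    not the paper's published choice.)
    """
    scales = active_scales(stage, len(per_scale_losses))
    finest = scales[-1]
    total = 0.0
    for s in scales:
        total += (decay ** (finest - s)) * per_scale_losses[s]
    return total
```

At stage 0 only the coarsest loss is used; by the final stage all scales contribute, with geometrically decaying weights on the coarser ones.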
Affiliation(s)
- Abu Zahid Bin Aziz
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah, USA
- Kahlert School of Computing, University of Utah, Salt Lake City, Utah, USA
- Jadie Adams
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah, USA
- Kahlert School of Computing, University of Utah, Salt Lake City, Utah, USA
- Shireen Elhabian
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah, USA
- Kahlert School of Computing, University of Utah, Salt Lake City, Utah, USA
2
Bhalodia R, Kavan L, Whitaker RT. Self-Supervised Discovery of Anatomical Shape Landmarks. In: Medical Image Computing and Computer-Assisted Intervention, MICCAI 2020. 2020;12264:627-638. PMID: 33778817. PMCID: PMC7993653. DOI: 10.1007/978-3-030-59719-1_61.
Abstract
Statistical shape analysis is a useful tool in a wide range of medical and biological applications. However, it typically relies on producing a relatively small number of features that capture the relevant variability in a population. State-of-the-art methods for obtaining such anatomical features rely on extensive preprocessing or segmentation and/or significant tuning and post-processing; these shortcomings limit the widespread use of shape statistics. We propose that effective shape representations should provide sufficient information to align/register images. Under this assumption, we propose a self-supervised neural network approach for automatically positioning and detecting landmarks in images that can be used for subsequent analysis. The network discovers the landmarks corresponding to anatomical shape features that promote good image registration within a particular class of transformations. In addition, we propose a regularization for the network that encourages a uniform distribution of the discovered landmarks. We present a complete framework that takes only a set of input images and produces landmarks immediately usable for statistical shape analysis. We evaluate performance on a phantom dataset as well as 2D and 3D images.
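The core self-supervised signal here is that good landmarks should make registration easy within some transformation class. A minimal NumPy sketch, taking 2D affine as the assumed transformation class (the paper's actual class, network, and uniformity regularizer are not specified in the abstract and are omitted): the residual error after the best affine fit between two images' predicted landmarks is the quantity a landmark-prediction network would be trained to minimize.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src -> dst.
    src, dst: (n, 2) corresponding landmark coordinates in two images."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # homogeneous coordinates
    P, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (3, 2) affine parameters
    return P

def registration_loss(src, dst):
    """Residual alignment error after the best affine fit: the
    self-supervised signal that would drive landmark placement."""
    X = np.hstack([src, np.ones((len(src), 1))])
    P = fit_affine(src, dst)
    return float(np.mean(np.sum((X @ P - dst) ** 2, axis=1)))
```

Landmarks that are exactly related by an affine map yield zero loss; landmarks that cannot be affinely aligned are penalized, pushing the network toward anatomically consistent points.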
Affiliation(s)
- Riddhish Bhalodia
- Scientific Computing and Imaging Institute, University of Utah
- School of Computing, University of Utah
- Ross T Whitaker
- Scientific Computing and Imaging Institute, University of Utah
- School of Computing, University of Utah
3
Adams J, Bhalodia R, Elhabian S. Uncertain-DeepSSM: From Images to Probabilistic Shape Models. In: Shape in Medical Imaging: International Workshop, ShapeMI 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, October 4, 2020, Proceedings. 2020;12474:57-72. PMID: 33817703. PMCID: PMC8011333. DOI: 10.1007/978-3-030-61056-2_5.
Abstract
Statistical shape modeling (SSM) has recently taken advantage of advances in deep learning to alleviate the need for a time-consuming, expert-driven workflow of anatomy segmentation, shape registration, and optimization of population-level shape representations. DeepSSM is an end-to-end deep learning approach that extracts statistical shape representations directly from unsegmented images with little manual overhead, and it performs comparably with state-of-the-art shape modeling methods for estimating morphologies viable for downstream tasks. Nonetheless, DeepSSM produces an overconfident estimate of shape that cannot be blindly assumed to be accurate. Hence, conveying what DeepSSM does not know, via granular estimates of uncertainty, is critical for its direct clinical application as an on-demand diagnostic tool, since such estimates indicate how trustworthy the model output is. Here, we propose Uncertain-DeepSSM, a unified model that quantifies both data-dependent aleatoric uncertainty, by adapting the network to predict intrinsic input variance, and model-dependent epistemic uncertainty, via Monte Carlo dropout sampling that approximates a variational distribution over the network parameters. Experiments show an accuracy improvement over DeepSSM while retaining the benefits of an end-to-end approach with little pre-processing.
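The standard way to read off the two uncertainty types from Monte Carlo dropout, as the abstract describes, is to run T stochastic forward passes in which the network predicts a mean and a log-variance per shape parameter: the average predicted variance is the aleatoric part, and the spread of the predicted means across passes is the epistemic part. A minimal sketch of that decomposition (array shapes and the log-variance parameterization are assumptions consistent with common practice, not details taken from the paper):

```python
import numpy as np

def decompose_uncertainty(means, log_vars):
    """Split predictive uncertainty from T Monte Carlo dropout passes.

    means, log_vars: (T, d) arrays; each pass predicts a mean and a
    log-variance for d shape parameters.
    Aleatoric: average of the predicted (data-dependent) variances.
    Epistemic: variance of the predicted means across passes.
    """
    prediction = means.mean(axis=0)          # final shape estimate
    aleatoric = np.exp(log_vars).mean(axis=0)
    epistemic = means.var(axis=0)
    return prediction, aleatoric, epistemic
```

With dropout disabled the passes agree and the epistemic term vanishes, while the aleatoric term persists because it reflects noise in the input itself.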
Affiliation(s)
- Jadie Adams
- Scientific Computing and Imaging Institute, University of Utah, UT, USA
- School of Computing, University of Utah, UT, USA
- Riddhish Bhalodia
- Scientific Computing and Imaging Institute, University of Utah, UT, USA
- School of Computing, University of Utah, UT, USA
- Shireen Elhabian
- Scientific Computing and Imaging Institute, University of Utah, UT, USA
- School of Computing, University of Utah, UT, USA
4
Bhalodia R, Elhabian SY, Kavan L, Whitaker RT. DeepSSM: A Deep Learning Framework for Statistical Shape Modeling from Raw Images. In: Shape in Medical Imaging: International Workshop, ShapeMI 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 20, 2018, Proceedings. 2018;11167:244-257. PMID: 30805572. DOI: 10.1007/978-3-030-04747-4_23.
Abstract
Statistical shape modeling is an important tool for characterizing variation in anatomical morphology. Typical shapes of interest are measured using 3D imaging followed by a pipeline of registration, segmentation, and extraction of shape features or projection onto a lower-dimensional shape space, which facilitates subsequent statistical analysis. Many methods for constructing compact shape representations have been proposed, but they are often impractical due to the sequence of image preprocessing operations, which involve significant parameter tuning, manual delineation, and/or quality control by the user. We propose DeepSSM: a deep learning approach that extracts a low-dimensional shape representation directly from 3D images, requiring virtually no parameter tuning or user assistance. DeepSSM uses a convolutional neural network (CNN) that simultaneously localizes the biological structure of interest, establishes correspondences, and projects these points onto a low-dimensional shape representation in the form of PCA loadings within a point distribution model. To overcome the limited availability of training images with dense correspondences, we present a novel data augmentation procedure that uses existing correspondences on a relatively small set of processed images, together with their shape statistics, to create plausible training samples with known shape parameters. In this way, we leverage the limited CT/MRI scans (40-50) into the thousands of images needed to train a deep neural network. After training, the CNN automatically produces accurate low-dimensional shape representations for unseen images. We validate DeepSSM in three applications: pediatric cranial CT for characterizing metopic craniosynostosis, femur CT for identifying morphologic hip deformities due to femoroacetabular impingement, and left atrium MRI for predicting atrial fibrillation recurrence.
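The data augmentation idea above rests on a PCA point-distribution model: once the few processed shapes are expressed as loadings on principal components, new plausible shapes with known shape parameters can be sampled in loading space and reconstructed. A minimal NumPy sketch of that mechanism (the jittered-resampling sampler and the `jitter` parameter are illustrative assumptions, not the paper's exact augmentation procedure, which also has to render each sampled shape back into an image):

```python
import numpy as np

def fit_pca(shapes, k):
    """PCA point-distribution model from flattened correspondence points.
    shapes: (n, m) array, one row of m stacked coordinates per shape."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:k]                  # (k, m) principal modes
    loadings = centered @ components.T   # (n, k) known shape parameters
    return mean, components, loadings

def augment(mean, components, loadings, n_new, rng, jitter=0.25):
    """Sample plausible new shapes near the training loadings.
    Jittered resampling keeps samples on the learned distribution;
    each new shape comes with its loadings z as ground truth."""
    idx = rng.integers(0, len(loadings), size=n_new)
    spread = loadings.std(axis=0)
    z = loadings[idx] + jitter * spread * rng.standard_normal((n_new, loadings.shape[1]))
    return mean + z @ components         # (n_new, m) reconstructed shapes
```

Because every augmented sample is generated from known loadings, the CNN can be trained to regress those loadings directly from the corresponding image.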
Affiliation(s)
- Riddhish Bhalodia
- Scientific Computing and Imaging Institute, University of Utah
- School of Computing, University of Utah
- Shireen Y Elhabian
- Scientific Computing and Imaging Institute, University of Utah
- School of Computing, University of Utah
- Comprehensive Arrhythmia Research and Management Center, University of Utah
- Ross T Whitaker
- Scientific Computing and Imaging Institute, University of Utah
- School of Computing, University of Utah
- Comprehensive Arrhythmia Research and Management Center, University of Utah
5
Gao Y, Tannenbaum A, Bouix S. A Framework for Joint Image-and-Shape Analysis. Proceedings of SPIE, the International Society for Optical Engineering 2014;9034:90340V. PMID: 25302006. PMCID: PMC4187242. DOI: 10.1117/12.2043276.
Abstract
Techniques in medical image analysis are often used for comparison or regression on image intensities, where the image domain is a fixed Cartesian grid. Shape analysis, on the other hand, studies similarities and differences among spatial objects of arbitrary geometry and topology, and usually no function is defined on the shape domain. Recently, there has been a growing need to define and analyze functions on the shape space, and to perform a coupled analysis of both the shapes and the functions defined on them. Following this direction, in this work we present a coupled analysis of both images and shapes, detecting statistically significant discrepancies in image intensities as well as in the underlying shapes. The method is applied to brain images from schizophrenia patients and heart images from atrial fibrillation patients.
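The abstract does not name the statistical test used to detect significant discrepancies; one common nonparametric way to test joint image-and-shape features between two patient groups is a per-feature permutation test on the concatenated intensity and shape descriptors. A minimal sketch under that assumption (the feature construction, permutation count, and add-one smoothing are all illustrative choices, not the paper's method):

```python
import numpy as np

def permutation_pvalues(group_a, group_b, n_perm=1000, seed=0):
    """Per-feature permutation p-values for the absolute difference in
    group means. group_a, group_b: (n_a, d) and (n_b, d) arrays of
    concatenated intensity and shape features per subject."""
    rng = np.random.default_rng(seed)
    observed = np.abs(group_a.mean(axis=0) - group_b.mean(axis=0))
    pooled = np.vstack([group_a, group_b])
    n_a = len(group_a)
    exceed = np.zeros_like(observed)
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))          # shuffle group labels
        diff = np.abs(pooled[perm[:n_a]].mean(axis=0)
                      - pooled[perm[n_a:]].mean(axis=0))
        exceed += diff >= observed
    return (exceed + 1) / (n_perm + 1)               # add-one smoothing
```

Features whose observed group difference is rarely matched under label shuffling receive small p-values, flagging them as significant discrepancies.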
Affiliation(s)
- Yi Gao
- Department of Electrical and Computer Engineering and the Comprehensive Cancer Center, the University of Alabama at Birmingham; 1150 10th Avenue South, Birmingham, AL 35294
- Allen Tannenbaum
- Departments of Computer Science and Applied Mathematics/Statistics, Stony Brook University, Stony Brook, New York, 11794
- Sylvain Bouix
- Department of Psychiatry, Harvard Medical School, 1249 Boylston St, Boston, MA, 02215