1. Liu J, Shen N, Wang W, Li X, Wang W, Yuan Y, Tian Y, Luo G, Wang K. Lightweight cross-resolution coarse-to-fine network for efficient deformable medical image registration. Med Phys 2025. PMID: 40280883. DOI: 10.1002/mp.17827.
Abstract
BACKGROUND Accurate and efficient deformable medical image registration is crucial in medical image analysis. While recent deep learning-based registration methods have achieved state-of-the-art accuracy, they often suffer from extensive network parameters and slow inference times, leading to inefficiency. Efforts to reduce model size and input resolution can improve computational efficiency but frequently result in suboptimal accuracy. PURPOSE To address the trade-off between high accuracy and efficiency, we propose a Lightweight Cross-Resolution Coarse-to-Fine registration framework, termed LightCRCF. METHODS Our method is built on an ultra-lightweight U-Net architecture with only 0.1 million parameters, offering remarkable efficiency. To mitigate the accuracy degradation caused by the reduced parameter count while preserving the lightweight nature of the networks, LightCRCF introduces three key innovations: (1) an efficient cross-resolution coarse-to-fine (C2F) registration strategy, integrated into the lightweight network, which progressively decomposes the deformation field into multiresolution subfields to capture fine-grained deformations; (2) a Texture-aware Reparameterization (TaRep) module that integrates Sobel and Laplacian operators to extract rich textural information; and (3) a Group-flow Reparameterization (GfRep) module that captures diverse deformation modes by decomposing the deformation field into multiple groups. Furthermore, we introduce a structural reparameterization technique that improves training accuracy through the multibranch structures of the TaRep and GfRep modules, while maintaining efficient inference by equivalently transforming these multibranch structures into single-branch standard convolutions. RESULTS We evaluate LightCRCF against various methods on three public MRI datasets (LPBA, OASIS, and ACDC) and one abdominal CT dataset. Following previously established data divisions, the LPBA dataset comprises 30 training image pairs and nine testing image pairs. For the OASIS dataset, the training, validation, and testing data consist of 1275, 110, and 660 image pairs, respectively; for the ACDC dataset, they comprise 180, 20, and 100 image pairs, respectively. For intersubject registration of the abdominal CT dataset, there are 380 training pairs, six validation pairs, and 42 testing pairs. Compared to state-of-the-art C2F methods, LightCRCF achieves comparable accuracy scores (DSC, HD95, and MSE) while performing significantly better on all efficiency metrics (Params, VRAM, FLOPs, and inference time). Relative to efficiency-first approaches, LightCRCF significantly outperforms these methods on accuracy metrics. CONCLUSIONS Our LightCRCF method offers a favorable trade-off between accuracy and efficiency, maintaining high accuracy while achieving superior efficiency, thereby highlighting its potential for clinical applications. The code will be available at https://github.com/PerceptionComputingLab/LightCRCF.
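
For readers unfamiliar with structural reparameterization, the following minimal 2D PyTorch sketch illustrates the training/inference trick the abstract describes: a learned convolution plus fixed Sobel/Laplacian branches at training time can be folded into a single standard convolution for inference because all branches are linear. The TexBlock/fuse names, kernels, and shapes are illustrative assumptions, not the authors' LightCRCF code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

SOBEL_X = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
LAPLACE = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])

class TexBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.ch = ch
        self.conv = nn.Conv2d(ch, ch, 3, padding=1)      # learned branch
        self.alpha = nn.Parameter(torch.zeros(ch))       # per-channel scales
        self.beta = nn.Parameter(torch.zeros(ch))        # for the fixed branches
        self.register_buffer("kx", SOBEL_X.expand(ch, 1, 3, 3).clone())
        self.register_buffer("kl", LAPLACE.expand(ch, 1, 3, 3).clone())

    def forward(self, x):                                # training time: 3 branches
        ex = F.conv2d(x, self.kx, padding=1, groups=self.ch)
        el = F.conv2d(x, self.kl, padding=1, groups=self.ch)
        return self.conv(x) + self.alpha.view(1, -1, 1, 1) * ex \
                            + self.beta.view(1, -1, 1, 1) * el

    @torch.no_grad()
    def fuse(self):                                      # inference time: 1 conv
        fused = nn.Conv2d(self.ch, self.ch, 3, padding=1)
        w = self.conv.weight.clone()
        for c in range(self.ch):                         # fold fixed depthwise branches
            w[c, c] += self.alpha[c] * SOBEL_X + self.beta[c] * LAPLACE
        fused.weight.copy_(w)
        fused.bias.copy_(self.conv.bias)
        return fused
```

A quick equivalence check is `torch.allclose(block(x), block.fuse()(x), atol=1e-5)` for a random input `x`, which holds because convolution is linear in its kernel.
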
Affiliation(s)
- Jun Liu
  - School of Computer Science and Technology, Harbin Institute of Technology, Harbin, Heilongjiang, China
- Nuo Shen
  - School of Computer Science and Technology, Harbin Institute of Technology, Harbin, Heilongjiang, China
- Wenyi Wang
  - School of Computer Science and Technology, Harbin Institute of Technology, Harbin, Heilongjiang, China
- Xiangyu Li
  - School of Computer Science and Technology, Harbin Institute of Technology, Harbin, Heilongjiang, China
- Wei Wang
  - School of Computer Science and Technology, Harbin Institute of Technology Shenzhen, Shenzhen, Guangdong, China
- Yongfeng Yuan
  - School of Computer Science and Technology, Harbin Institute of Technology, Harbin, Heilongjiang, China
- Ye Tian
  - Department of Cardiology, The First Affiliated Hospital, Cardiovascular Institute, Harbin Medical University, Harbin, Heilongjiang, China
- Gongning Luo
  - School of Computer Science and Technology, Harbin Institute of Technology, Harbin, Heilongjiang, China
- Kuanquan Wang
  - School of Computer Science and Technology, Harbin Institute of Technology, Harbin, Heilongjiang, China

2. Orellana B, Navazo I, Brunet P, Monclús E, Bendezú Á, Azpiroz F. Automatic colon segmentation on T1-FS MR images. Comput Med Imaging Graph 2025; 123:102528. PMID: 40112651. DOI: 10.1016/j.compmedimag.2025.102528.
Abstract
The volume and distribution of the colonic contents provide valuable insights into the effects of diet on the gut microbiota, relevant to both clinical diagnosis and research. Among Magnetic Resonance Imaging modalities, T2-weighted images allow segmentation of the colon lumen, while fecal and gas contents can only be distinguished on the T1-weighted Fat-Sat modality. However, manual segmentation of T1-weighted Fat-Sat images is challenging, and no automatic segmentation methods are known. This paper proposes an unsupervised algorithm that produces an accurate T1-weighted Fat-Sat colon segmentation by registering an existing colon segmentation from the T2-weighted modality. The algorithm consists of two phases. It starts with a registration process based on a classical deformable registration method, followed by a novel Iterative Colon Registration process that uses a mesh deformation approach guided by a probabilistic model of the likelihood of the colon boundary, together with a shape-preservation process for the colon segmentation from the T2-weighted images. The iterative process converges to an optimal fit of the colon segmentation in T1-weighted Fat-Sat images. The segmentation algorithm has been tested on multiple datasets (154 scans) and three acquisition machines as part of the proof of concept for the proposed methodology. The quantitative evaluation was based on two metrics: the percentage of ground-truth-labeled feces correctly identified by our proposal (93 ± 5%), and the volume variation between the existing colon segmentation in the T2-weighted modality and the colon segmentation computed in T1-weighted Fat-Sat images. Quantitative and medical evaluations demonstrated accuracy, usability, and stability across acquisition hardware, making the algorithm suitable for clinical application and research.
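
As a concrete aside, the two evaluation metrics quoted above are straightforward to compute from binary masks; the sketch below shows one plausible formulation under the assumption that all masks live on a common voxel grid (function and variable names are ours, not the paper's).

```python
import numpy as np

def feces_recall(pred_feces, gt_feces):
    """Percentage of ground-truth-labeled feces voxels recovered by the
    automatic T1-FS segmentation (first metric in the abstract)."""
    return 100.0 * np.logical_and(pred_feces, gt_feces).sum() / gt_feces.sum()

def volume_variation(t2_colon, t1_colon, voxel_volume_ml):
    """Relative volume change between the T2-weighted colon segmentation and
    the colon segmentation computed on the T1-FS image (second metric)."""
    v_t2 = t2_colon.sum() * voxel_volume_ml
    v_t1 = t1_colon.sum() * voxel_volume_ml
    return (v_t1 - v_t2) / v_t2
```
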
Affiliation(s)
- Bernat Orellana
  - ViRVIG Group, UPC-BarcelonaTech, Llorens i Artigas, 4-6, Barcelona 08028, Spain
- Isabel Navazo
  - ViRVIG Group, UPC-BarcelonaTech, Llorens i Artigas, 4-6, Barcelona 08028, Spain
- Pere Brunet
  - ViRVIG Group, UPC-BarcelonaTech, Llorens i Artigas, 4-6, Barcelona 08028, Spain
- Eva Monclús
  - ViRVIG Group, UPC-BarcelonaTech, Llorens i Artigas, 4-6, Barcelona 08028, Spain
- Álvaro Bendezú
  - Digestive Department, Hospital Universitari General de Catalunya, Pedro i Pons 1, Sant Cugat del Vallès 08195, Spain
- Fernando Azpiroz
  - Digestive System Research Unit, University Hospital Vall d'Hebron, 08035 Barcelona, Spain
  - Departament de Medicina, Universitat Autònoma de Barcelona, 08193 Bellaterra, Spain
  - Centro de Investigación Biomédica en Red de Enfermedades Hepáticas y Digestivas (Ciberehd), Spain

3. Remedios LW, Liu H, Remedios SW, Zuo L, Saunders AM, Bao S, Huo Y, Powers AC, Virostko J, Landman BA. Influence of early through late fusion on pancreas segmentation from imperfectly registered multimodal magnetic resonance imaging. J Med Imaging (Bellingham) 2025; 12:024008. PMID: 40291815. PMCID: PMC12032765. DOI: 10.1117/1.jmi.12.2.024008.
Abstract
Purpose Combining different types of medical imaging data, through multimodal fusion, promises better segmentation of anatomical structures, such as the pancreas. Strategic implementation of multimodal fusion could improve our ability to study diseases such as diabetes. However, where to perform fusion in deep learning models is still an open question. It is unclear if there is a single best location to fuse information when analyzing pairs of imperfectly aligned images or if the optimal fusion location depends on the specific model being used. Two main challenges when using multiple imaging modalities to study the pancreas are (1) the pancreas and surrounding abdominal anatomy have a deformable structure, making it difficult to consistently align the images and (2) breathing by the individual during image collection further complicates the alignment between multimodal images. Even after using state-of-the-art deformable image registration techniques, specifically designed to align abdominal images, multimodal images of the abdomen are often not perfectly aligned. We examine how the choice of different fusion points, ranging from early in the image processing pipeline to later stages, impacts the segmentation of the pancreas on imperfectly registered multimodal magnetic resonance (MR) images. Approach Our dataset consists of 353 pairs of T2-weighted (T2w) and T1-weighted (T1w) abdominal MR images from 163 subjects with accompanying pancreas segmentation labels drawn mainly from the T2w images. Because the T2w images were acquired in an interleaved manner across two breath holds and the T1w images in a single breath hold, three different breath holds impacted the alignment of each pair of images. We used deeds, a state-of-the-art deformable abdominal image registration method, to align the image pairs. Then, we trained a collection of basic UNets with different fusion points, spanning from early to late layers in the model, to assess how early through late fusion influenced segmentation performance on imperfectly aligned images. To investigate whether performance differences at key fusion points generalize to other architectures, we expanded our experiments to nnUNet. Results The single-modality T2w baseline using a basic UNet model had a median Dice score of 0.766, whereas the same baseline on the nnUNet model achieved 0.824. For each fusion approach, we analyzed the differences in performance with Dice residuals, by subtracting the baseline score from the fusion score for each datapoint. For the basic UNet, the best fusion approach was early/mid fusion, occurring in the middle of the encoder, with a median Dice residual of +0.012 compared with the baseline. For the nnUNet, the best fusion approach was early fusion through naïve image concatenation before the model, with a median Dice residual of +0.004 compared with the baseline. After Bonferroni correction, the Dice score distributions for these best fusion approaches were significantly different from the baseline (p < 0.05) via the paired Wilcoxon signed-rank test. Conclusions Fusion in specific blocks can improve performance, but the best blocks for fusion are model-specific, and the gains are small. In imperfectly registered datasets, fusion is a nuanced problem, with the art of design remaining vital for uncovering potential insights. Future innovation is needed to better address fusion in cases of imperfect alignment of abdominal image pairs. The code associated with this project is available at https://github.com/MASILab/influence_of_fusion_on_pancreas_segmentation.
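
To make the notion of a "fusion point" concrete, here is a toy PyTorch sketch contrasting early fusion (channel-concatenating T1w/T2w at the input) with mid-encoder feature fusion. It is far smaller than the UNets used in the paper, and the architecture and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv3d(cin, cout, 3, padding=1), nn.ReLU(),
                         nn.Conv3d(cout, cout, 3, padding=1), nn.ReLU())

class FusionNet(nn.Module):
    def __init__(self, fusion="early"):
        super().__init__()
        self.fusion = fusion
        if fusion == "early":
            self.stem = block(2, 16)                  # T1w+T2w stacked as channels
        else:
            self.stem_t1, self.stem_t2 = block(1, 8), block(1, 8)
        self.enc = block(16, 32)
        self.head = nn.Conv3d(32, 2, 1)               # background/pancreas logits

    def forward(self, t1, t2):
        if self.fusion == "early":
            x = self.stem(torch.cat([t1, t2], dim=1))
        else:                                          # mid-encoder feature fusion
            x = torch.cat([self.stem_t1(t1), self.stem_t2(t2)], dim=1)
        return self.head(self.enc(x))
```

Moving the concatenation deeper into the encoder is the same design decision the paper sweeps across the whole model depth.
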
Affiliation(s)
- Lucas W. Remedios
  - Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Han Liu
  - Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Samuel W. Remedios
  - Johns Hopkins University, Department of Computer Science, Baltimore, Maryland, United States
  - National Institutes of Health, Department of Radiology and Imaging Sciences, Bethesda, Maryland, United States
- Lianrui Zuo
  - Vanderbilt University, Department of Electrical and Computer Engineering, Nashville, Tennessee, United States
- Adam M. Saunders
  - Vanderbilt University, Department of Electrical and Computer Engineering, Nashville, Tennessee, United States
- Shunxing Bao
  - Vanderbilt University, Department of Electrical and Computer Engineering, Nashville, Tennessee, United States
- Yuankai Huo
  - Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
  - Vanderbilt University, Department of Electrical and Computer Engineering, Nashville, Tennessee, United States
- Alvin C. Powers
  - Vanderbilt University Medical Center, Department of Medicine, Division of Diabetes, Endocrinology, and Metabolism, Nashville, Tennessee, United States
  - VA Tennessee Valley Healthcare System, Nashville, Tennessee, United States
  - Vanderbilt University, Department of Molecular Physiology and Biophysics, Nashville, Tennessee, United States
- John Virostko
  - University of Texas at Austin, Dell Medical School, Department of Diagnostic Medicine, Austin, Texas, United States
  - University of Texas at Austin, Livestrong Cancer Institutes, Dell Medical School, Austin, Texas, United States
  - University of Texas at Austin, Department of Oncology, Dell Medical School, Austin, Texas, United States
  - University of Texas at Austin, Oden Institute for Computational Engineering and Sciences, Austin, Texas, United States
- Bennett A. Landman
  - Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
  - Vanderbilt University, Department of Electrical and Computer Engineering, Nashville, Tennessee, United States
  - Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States

4. Zhou Y, Lee HH, Tang Y, Yu X, Yang Q, Kim ME, Remedios LW, Bao S, Spraggins JM, Huo Y, Landman BA. Multi-contrast computed tomography atlas of healthy pancreas with dense displacement sampling registration. J Med Imaging (Bellingham) 2025; 12:024006. PMID: 40255249. PMCID: PMC12005954. DOI: 10.1117/1.jmi.12.2.024006.
Abstract
Purpose Diverse population demographics can lead to substantial variation in human anatomy. Therefore, standard anatomical atlases are needed for interpreting organ-specific analyses. Among abdominal organs, the pancreas exhibits notable variability in volumetric morphology, shape, and appearance, complicating the generalization of population-wide features. Understanding the common features of a healthy pancreas is crucial for identifying biomarkers and diagnosing pancreatic diseases. Approach We propose a high-resolution CT atlas framework optimized for the healthy pancreas. We introduce a deep-learning-based preprocessing technique to extract abdominal ROIs and leverage a hierarchical registration pipeline to align pancreatic anatomy across populations. Briefly, DEEDS affine and non-rigid registration techniques are employed to transfer patient abdominal volumes to a fixed high-resolution atlas template. To generate and evaluate the pancreas atlas, multi-phase contrast CT scans of 443 subjects (aged 15 to 50 years, with no reported history of pancreatic disease) were processed. Results The two-stage DEEDS affine and non-rigid registration outperforms other state-of-the-art tools, achieving the highest scores for pancreas label transfer across all phases (non-contrast: 0.497, arterial: 0.505, portal venous: 0.494, delayed: 0.497). External evaluation with 100 portal venous scans and 13 labeled abdominal organs shows a mean Dice score of 0.504. The low variance between the pancreases of registered subjects and the obtained pancreas atlas further illustrates the generalizability of the proposed method. Conclusion We introduce a high-resolution pancreas atlas framework to generalize healthy biomarkers across populations with multi-contrast abdominal CT. The atlases and the associated pancreas organ labels are publicly available through the Human Biomolecular Atlas Program (HuBMAP).
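
For intuition, once subjects are registered to the template grid, atlas construction reduces to simple voxel-wise statistics, and label transfer can be scored with Dice overlap. The numpy sketch below shows that step under those assumptions; the function names are ours, not the HuBMAP pipeline's.

```python
import numpy as np

def build_atlas(registered_volumes):
    """registered_volumes: list of (D,H,W) arrays already deformed to the
    template grid. Returns the intensity atlas and a voxel-wise variance map
    (the 'low variance' agreement check mentioned in the abstract)."""
    stack = np.stack(registered_volumes, axis=0).astype(np.float32)
    return stack.mean(axis=0), stack.var(axis=0)

def label_transfer_dice(warped_label, atlas_label):
    """Dice overlap between a warped subject organ label and the atlas label."""
    inter = np.logical_and(warped_label, atlas_label).sum()
    return 2.0 * inter / (warped_label.sum() + atlas_label.sum())
```
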
Affiliation(s)
- Yinchi Zhou
  - Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Ho Hin Lee
  - Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Xin Yu
  - Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Qi Yang
  - Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Michael E. Kim
  - Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Lucas W. Remedios
  - Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Shunxing Bao
  - Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Jeffrey M. Spraggins
  - Vanderbilt University, Department of Cell and Developmental Biology, Nashville, Tennessee, United States
  - Vanderbilt University, Department of Biochemistry, Nashville, Tennessee, United States
  - Vanderbilt University, Department of Chemistry, Nashville, Tennessee, United States
  - Vanderbilt University Medical Center, Department of Pathology, Microbiology, and Immunology, Nashville, Tennessee, United States
  - Vanderbilt University Medical Center, Department of Radiology, Nashville, Tennessee, United States
- Yuankai Huo
  - Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
  - Vanderbilt University, Department of Electrical and Computer Engineering, Nashville, Tennessee, United States
- Bennett A. Landman
  - Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
  - Vanderbilt University Medical Center, Department of Radiology, Nashville, Tennessee, United States
  - Vanderbilt University, Department of Electrical and Computer Engineering, Nashville, Tennessee, United States

5. Siebert H, Grosbrohmer C, Hansen L, Heinrich MP. ConvexAdam: Self-Configuring Dual-Optimization-Based 3D Multitask Medical Image Registration. IEEE Trans Med Imaging 2025; 44:738-748. PMID: 39283782. DOI: 10.1109/tmi.2024.3462248.
Abstract
Registration of medical image data requires methods that can align anatomical structures precisely while applying smooth and plausible transformations. Ideally, these methods should furthermore operate quickly and apply to a wide variety of tasks. Deep learning-based image registration methods usually entail an elaborate learning procedure with the need for extensive training data. However, they often struggle with versatility when aiming to apply the same approach across various anatomical regions and different imaging modalities. In this work, we present a method that extracts semantic or hand-crafted image features and uses a coupled convex optimisation followed by Adam-based instance optimisation for multitask medical image registration. We make use of pre-trained semantic feature extraction models for the individual datasets and combine them with our fast dual optimisation procedure for deformation field computation. Furthermore, we propose a very fast automatic hyperparameter selection procedure that explores many settings and ranks them on validation data to provide a self-configuring image registration framework. With our approach, we can align image data for various tasks with little learning. We conduct experiments on all available Learn2Reg challenge datasets and obtain results that place in the upper ranks of the challenge leaderboards. The code is available at http://github.com/multimodallearning/convexAdam.
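
The Adam-based instance optimisation stage can be sketched in a few lines of PyTorch: directly optimise a displacement field on one image pair against an intensity term plus a smoothness penalty. Initialising from zeros below stands in for the coupled convex stage, and the hyperparameters and names are illustrative assumptions, not the ConvexAdam implementation.

```python
import torch
import torch.nn.functional as F

def warp(moving, disp):
    # moving: (1,1,D,H,W); disp: (1,3,D,H,W) in normalised [-1,1] units (x,y,z)
    d, h, w = moving.shape[2:]
    zz, yy, xx = torch.meshgrid(torch.linspace(-1, 1, d), torch.linspace(-1, 1, h),
                                torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack((xx, yy, zz), dim=-1).unsqueeze(0)   # grid_sample x,y,z order
    return F.grid_sample(moving, base + disp.permute(0, 2, 3, 4, 1),
                         align_corners=True)

def instance_optimise(fixed, moving, iters=100, lam=0.1, lr=0.01):
    disp = torch.zeros(1, 3, *fixed.shape[2:], requires_grad=True)
    opt = torch.optim.Adam([disp], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        sim = ((warp(moving, disp) - fixed) ** 2).mean()    # intensity MSE
        reg = sum((g ** 2).mean()                           # first-order smoothness
                  for g in torch.gradient(disp, dim=(2, 3, 4)))
        (sim + lam * reg).backward()
        opt.step()
    return disp.detach()
```

In the paper's setting the similarity would be computed on learned or hand-crafted features rather than raw intensities, but the optimisation loop has the same shape.
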

6. Chen J, Liu Y, Wei S, Bian Z, Subramanian S, Carass A, Prince JL, Du Y. A survey on deep learning in medical image registration: New technologies, uncertainty, evaluation metrics, and beyond. Med Image Anal 2025; 100:103385. PMID: 39612808. PMCID: PMC11730935. DOI: 10.1016/j.media.2024.103385.
Abstract
Deep learning technologies have dramatically reshaped the field of medical image registration over the past decade. The initial developments, such as regression-based and U-Net-based networks, established the foundation for deep learning in image registration. Subsequent progress has been made in various aspects of deep learning-based registration, including similarity measures, deformation regularizations, network architectures, and uncertainty estimation. These advancements have not only enriched the field of image registration but have also facilitated its application in a wide range of tasks, including atlas construction, multi-atlas segmentation, motion estimation, and 2D-3D registration. In this paper, we present a comprehensive overview of the most recent advancements in deep learning-based image registration. We begin with a concise introduction to the core concepts of deep learning-based image registration. Then, we delve into innovative network architectures, loss functions specific to registration, and methods for estimating registration uncertainty. Additionally, this paper explores appropriate evaluation metrics for assessing the performance of deep learning models in registration tasks. Finally, we highlight the practical applications of these novel techniques in medical imaging and discuss the future prospects of deep learning-based image registration.
Affiliation(s)
- Junyu Chen
  - Department of Radiology and Radiological Science, Johns Hopkins School of Medicine, MD, USA
- Yihao Liu
  - Department of Electrical and Computer Engineering, Johns Hopkins University, MD, USA
- Shuwen Wei
  - Department of Electrical and Computer Engineering, Johns Hopkins University, MD, USA
- Zhangxing Bian
  - Department of Electrical and Computer Engineering, Johns Hopkins University, MD, USA
- Shalini Subramanian
  - Department of Radiology and Radiological Science, Johns Hopkins School of Medicine, MD, USA
- Aaron Carass
  - Department of Electrical and Computer Engineering, Johns Hopkins University, MD, USA
- Jerry L Prince
  - Department of Electrical and Computer Engineering, Johns Hopkins University, MD, USA
- Yong Du
  - Department of Radiology and Radiological Science, Johns Hopkins School of Medicine, MD, USA

7. Criscuolo ER, Hao Y, Zhang Z, McKeown T, Yang D. A Vessel Bifurcation Landmark Pair Dataset for Abdominal CT Deformable Image Registration (DIR) Validation. arXiv 2025; arXiv:2501.09162v1. PMID: 39876932. PMCID: PMC11774459.
Abstract
Purpose Deformable image registration (DIR) is an enabling technology in many diagnostic and therapeutic tasks. Despite this, DIR algorithms have limited clinical use, largely due to a lack of benchmark datasets for quality assurance during development. DIRs of intra-patient abdominal CTs are among the most challenging registration scenarios due to significant organ deformations and inconsistent image content. To support future algorithm development, here we introduce our first-of-its-kind abdominal CT DIR benchmark dataset, comprising large numbers of highly accurate landmark pairs on matching blood vessel bifurcations. Acquisition and Validation Methods Abdominal CT image pairs of 30 patients were acquired from several publicly available repositories as well as the authors' institution with IRB approval. The two CTs of each pair were originally acquired for the same patient but on different days. An image processing workflow was developed and applied to each CT image pair: 1) Abdominal organs were segmented with a deep learning model, and image intensity within organ masks was overwritten. 2) Matching image patches were manually identified between the two CTs of each image pair. 3) Vessel bifurcation landmarks were labeled on one image of each image patch pair. 4) Image patches were deformably registered, and landmarks were projected onto the second image. 5) Landmark pair locations were refined manually or with an automated process. This workflow resulted in 1895 total landmark pairs, or 63 per case on average. Estimates of the landmark pair accuracy using digital phantoms were 0.7 mm ± 1.2 mm. Data Format and Usage Notes The data is published in Zenodo at https://doi.org/10.5281/zenodo.14362785. Instructions for use can be found at https://github.com/deshanyang/Abdominal-DIR-QA. Potential Applications This dataset is a first-of-its-kind for abdominal DIR validation. The number, accuracy, and distribution of landmark pairs will allow for robust validation of DIR algorithms with precision beyond what is currently available.
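
For illustration, this is how such landmark pairs are typically consumed to validate a DIR algorithm: warp the landmarks of one image through the estimated displacement field and report the target registration error (TRE) in millimetres. Nearest-voxel field lookup is used for brevity (a real evaluation would interpolate the field), and the array layouts are assumptions rather than the dataset's documented format.

```python
import numpy as np

def target_registration_error(lms_a, lms_b, disp_field, spacing):
    """lms_a/lms_b: (N,3) paired landmark voxel coordinates on images A and B;
    disp_field: (D,H,W,3) voxel displacements mapping A onto B;
    spacing: (3,) voxel size in mm. Returns per-landmark TRE in mm."""
    idx = np.clip(np.round(lms_a).astype(int), 0,
                  np.array(disp_field.shape[:3]) - 1)       # nearest-voxel lookup
    warped = lms_a + disp_field[idx[:, 0], idx[:, 1], idx[:, 2]]
    return np.linalg.norm((warped - lms_b) * np.asarray(spacing), axis=1)

# tre = target_registration_error(lms_a, lms_b, disp, spacing=(1.0, 1.0, 1.0))
# print(tre.mean(), tre.std())   # compare against the ~0.7 ± 1.2 mm label noise
```
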
Affiliation(s)
- Edward R Criscuolo
  - Department of Radiation Oncology, Duke University, Durham, NC, 27701, USA
- Yao Hao
  - Washington University School of Medicine, St. Louis, MO, 63110, USA
- Zhendong Zhang
  - Department of Radiation Oncology, Duke University, Durham, NC, 27701, USA
- Trevor McKeown
  - Department of Radiation Oncology, Duke University, Durham, NC, 27701, USA
- Deshan Yang
  - Department of Radiation Oncology, Duke University, Durham, NC, 27701, USA

8. Vadlamudi S, Kumar V, Ghosh D, Abraham A. Artificial intelligence-powered precision: Unveiling the landscape of liver disease diagnosis—A comprehensive review. Eng Appl Artif Intell 2024; 138:109452. DOI: 10.1016/j.engappai.2024.109452.

9. Hresko DJ, Drotar P. BucketAugment: Reinforced Domain Generalisation in Abdominal CT Segmentation. IEEE Open J Eng Med Biol 2024; 5:353-361. PMID: 38899027. PMCID: PMC11186658. DOI: 10.1109/ojemb.2024.3397623.
Abstract
Goal: In recent years, deep neural networks have consistently outperformed previously proposed methods in the domain of medical segmentation. However, due to their nature, these networks often struggle to delineate desired structures in data that fall outside their training distribution. The goal of this study is to address the challenges associated with domain generalization in CT segmentation by introducing a novel method called BucketAugment for deep neural networks. Methods: BucketAugment leverages principles from the Q-learning algorithm and employs validation loss to search for an optimal policy within a search space comprised of distributed stacks of 3D volumetric augmentations, termed 'buckets.' These buckets have tunable parameters and can be seamlessly integrated into existing neural network architectures, offering flexibility for customization. Results: In our experiments, we focus on segmenting kidney and liver structures across three distinct medical datasets, each containing CT scans of the abdominal region collected from various clinical institutions and scanner vendors. Our results indicate that BucketAugment significantly enhances domain generalization across diverse medical datasets, requiring only minimal modifications to existing network architectures. Conclusions: The introduction of BucketAugment provides a promising solution to the challenges of domain generalization in CT segmentation. By leveraging Q-learning principles and distributed stacks of 3D augmentations, this method improves the performance of deep neural networks on medical segmentation tasks, demonstrating its potential to enhance the applicability of such models across different datasets and clinical scenarios.
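
The policy-search idea can be caricatured as a tiny epsilon-greedy loop over augmentation "buckets". The sketch below is a bandit-style simplification of the Q-learning search described above: `train_and_validate` is a placeholder the reader must supply, and all names and values are illustrative, not the BucketAugment implementation.

```python
import random

BUCKETS = ["flip", "rotate", "elastic", "gamma"]   # candidate 3D augmentation stacks
Q = {b: 0.0 for b in BUCKETS}                      # value estimate per bucket

def search(train_and_validate, episodes=20, eps=0.3, lr=0.5):
    """Epsilon-greedy search: each episode trains with one bucket and uses the
    negative validation loss as the reward for updating that bucket's value."""
    for _ in range(episodes):
        if random.random() < eps:                  # explore a random bucket
            bucket = random.choice(BUCKETS)
        else:                                      # exploit the current best
            bucket = max(Q, key=Q.get)
        reward = -train_and_validate(bucket)       # lower val loss = higher reward
        Q[bucket] += lr * (reward - Q[bucket])     # incremental value update
    return max(Q, key=Q.get)
```

The full method additionally tunes the parameters inside each bucket and distributes the stacks, but the explore/exploit structure is the same.
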
Affiliation(s)
- Peter Drotar
  - Technical University of Kosice, 040 01 Kosice, Slovakia

10. Ao Y, Shi W, Ji B, Miao Y, He W, Jiang Z. MS-TCNet: An effective Transformer-CNN combined network using multi-scale feature learning for 3D medical image segmentation. Comput Biol Med 2024; 170:108057. PMID: 38301516. DOI: 10.1016/j.compbiomed.2024.108057.
Abstract
Medical image segmentation is a fundamental research problem in the field of medical image processing. Recently, Transformers have achieved highly competitive performance in computer vision. Therefore, many methods combining Transformers with convolutional neural networks (CNNs) have emerged for segmenting medical images. However, these methods cannot effectively capture the multi-scale features in medical images, even though the texture and contextual information embedded in multi-scale features is extremely beneficial for segmentation. To alleviate this limitation, we propose a novel Transformer-CNN combined network using multi-scale feature learning for three-dimensional (3D) medical image segmentation, called MS-TCNet. The proposed model utilizes a shunted Transformer and a CNN to construct an encoder and pyramid decoder, allowing feature learning at six different scale levels. It captures multi-scale features with refinement at each scale level. Additionally, we propose a novel lightweight multi-scale feature fusion (MSFF) module that can fully fuse the different-scale semantic features generated by the pyramid decoder for each segmentation class, resulting in a more accurate segmentation output. We conducted experiments on three widely used 3D medical image segmentation datasets. The experimental results indicated that our method outperformed state-of-the-art medical image segmentation methods, suggesting its effectiveness, robustness, and superiority. Meanwhile, our model has a smaller number of parameters and lower computational complexity than conventional 3D segmentation networks. The results confirmed that the model is capable of effective multi-scale feature learning and that the learned multi-scale features are useful for improving segmentation performance. We open-sourced our code, which can be found at https://github.com/AustinYuAo/MS-TCNet.
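
As a rough illustration of the fusion idea (not the paper's MSFF module), multi-scale decoder features can be upsampled to a common resolution, concatenated, and fused with a 1x1x1 convolution:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFusion(nn.Module):
    """Toy multi-scale fusion head: channel counts and structure are
    illustrative assumptions, not the MSFF module from the paper."""
    def __init__(self, in_chs, out_ch):
        super().__init__()
        self.fuse = nn.Conv3d(sum(in_chs), out_ch, kernel_size=1)

    def forward(self, feats):                  # feats: list, coarse to fine
        size = feats[-1].shape[2:]             # finest spatial size
        up = [F.interpolate(f, size=size, mode="trilinear", align_corners=False)
              for f in feats]                  # bring all scales to one grid
        return self.fuse(torch.cat(up, dim=1))

# seg_logits = MultiScaleFusion([64, 32, 16], out_ch=num_classes)(decoder_feats)
```
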
Affiliation(s)
- Yu Ao
  - School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, 130022, China
- Weili Shi
  - School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, 130022, China
  - Zhongshan Institute of Changchun University of Science and Technology, Zhongshan, 528437, China
- Bai Ji
  - Department of Hepatobiliary and Pancreatic Surgery, The First Hospital of Jilin University, Changchun, 130061, China
- Yu Miao
  - School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, 130022, China
  - Zhongshan Institute of Changchun University of Science and Technology, Zhongshan, 528437, China
- Wei He
  - School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, 130022, China
  - Zhongshan Institute of Changchun University of Science and Technology, Zhongshan, 528437, China
- Zhengang Jiang
  - School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, 130022, China
  - Zhongshan Institute of Changchun University of Science and Technology, Zhongshan, 528437, China

11. Zappalá S, Keenan BE, Marshall D, Wu J, Evans SL, Al-Dirini RMA. In vivo strain measurements in the human buttock during sitting using MR-based digital volume correlation. J Biomech 2024; 163:111913. PMID: 38181575. DOI: 10.1016/j.jbiomech.2023.111913.
Abstract
Advancements in systems for the prevention and management of pressure ulcers require a more detailed understanding of the complex response of soft tissues to compressive loads. This study aimed at quantifying the progressive deformation of the buttock based on 3D measurements of soft tissue displacements from MR scans of 10 healthy subjects in a semi-recumbent position. Measurements were obtained using digital volume correlation (DVC) and released as a public dataset. A first parametric optimisation of the global registration step, aimed at aligning skeletal elements, showed acceptable values of the Dice coefficient (around 80%). A second parametric optimisation of the deformable registration method showed errors of 0.99 mm and 1.78 mm against two simulated fields with magnitudes of 7.30 ± 3.15 mm and 19.37 ± 9.58 mm, respectively, generated with a finite element model of the buttock under sitting loads. The measurements allowed quantification of the slide of the gluteus maximus away from the ischial tuberosity (IT, average 13.74 mm), which had previously been identified only qualitatively in the literature, highlighting the importance of the ischial bursa in allowing sliding. The spatial evolution of the maximum shear strain on a path from the IT to the seating interface showed a peak of compression in the fat, close to the interface with the muscle. The obtained peak values were above damage thresholds proposed in the literature. The results show the complexity of the deformation of the soft tissues in the buttock and the need for further investigations aimed at isolating factors such as tissue geometry, duration and extent of load, sitting posture, and tissue properties.
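
For readers who want to reproduce such strain maps from a DVC displacement field, the standard recipe is to form the deformation gradient F = I + du/dX and the Green-Lagrange tensor E = (F^T F - I)/2, whose eigenvalues give principal and maximum shear strains. The numpy sketch below assumes a (D,H,W,3) displacement array in the same units as the voxel spacing; it is a generic formulation, not the study's code.

```python
import numpy as np

def green_lagrange_strain(u, spacing):
    """u: (D,H,W,3) displacement field; spacing: (dz, dy, dx) voxel sizes.
    Returns the (D,H,W,3,3) Green-Lagrange strain tensor field."""
    grads = [np.gradient(u[..., i], *spacing) for i in range(3)]   # du_i/dX_j
    J = np.stack([np.stack(g, axis=-1) for g in grads], axis=-2)   # (...,3,3)
    F = J + np.eye(3)                                              # deformation gradient
    return 0.5 * (np.einsum("...ki,...kj->...ij", F, F) - np.eye(3))

def max_shear(E):
    """Maximum shear strain from the eigenvalues (Mohr's circle radius)."""
    evals = np.linalg.eigvalsh(E)                                  # ascending order
    return 0.5 * (evals[..., -1] - evals[..., 0])
```
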
Affiliation(s)
- Stefano Zappalá
  - School of Computer Science and Informatics, Cardiff University, Cardiff, UK
  - Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Cardiff, UK
- David Marshall
  - School of Computer Science and Informatics, Cardiff University, Cardiff, UK
- Jing Wu
  - School of Computer Science and Informatics, Cardiff University, Cardiff, UK
- Sam L Evans
  - School of Engineering, Cardiff University, Cardiff, UK
- Rami M A Al-Dirini
  - College of Science and Engineering, Flinders University of South Australia, Adelaide, Australia

12. Dong J, Cheng G, Zhang Y, Peng C, Song Y, Tong R, Lin L, Chen YW. Tailored multi-organ segmentation with model adaptation and ensemble. Comput Biol Med 2023; 166:107467. PMID: 37725849. DOI: 10.1016/j.compbiomed.2023.107467.
Abstract
Multi-organ segmentation, which identifies and separates different organs in medical images, is a fundamental task in medical image analysis. Recently, the immense success of deep learning motivated its wide adoption in multi-organ segmentation tasks. However, due to expensive labor costs and expertise, the availability of multi-organ annotations is usually limited and hence poses a challenge in obtaining sufficient training data for deep learning-based methods. In this paper, we aim to address this issue by combining off-the-shelf single-organ segmentation models to develop a multi-organ segmentation model on the target dataset, which helps get rid of the dependence on annotated data for multi-organ segmentation. To this end, we propose a novel dual-stage method that consists of a Model Adaptation stage and a Model Ensemble stage. The first stage enhances the generalization of each off-the-shelf segmentation model on the target domain, while the second stage distills and integrates knowledge from multiple adapted single-organ segmentation models. Extensive experiments on four abdomen datasets demonstrate that our proposed method can effectively leverage off-the-shelf single-organ segmentation models to obtain a tailored model for multi-organ segmentation with high accuracy.
Affiliation(s)
- Jiahua Dong
  - College of Computer Science and Technology, Zhejiang University, Hangzhou, 310027, China
- Guohua Cheng
  - College of Computer Science and Technology, Zhejiang University, Hangzhou, 310027, China
- Yue Zhang
  - Center for Medical Imaging, Robotics, Analytic Computing & Learning (MIRACLE), Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, 215163, China
  - School of Biomedical Engineering, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, 230026, China
- Chengtao Peng
  - Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, 230026, China
- Yu Song
  - Graduate School of Information Science and Engineering, Ritsumeikan University, Shiga, 525-8577, Japan
- Ruofeng Tong
  - College of Computer Science and Technology, Zhejiang University, Hangzhou, 310027, China
- Lanfen Lin
  - College of Computer Science and Technology, Zhejiang University, Hangzhou, 310027, China
- Yen-Wei Chen
  - Graduate School of Information Science and Engineering, Ritsumeikan University, Shiga, 525-8577, Japan

13. Yao J, Cao K, Hou Y, Zhou J, Xia Y, Nogues I, Song Q, Jiang H, Ye X, Lu J, Jin G, Lu H, Xie C, Zhang R, Xiao J, Liu Z, Gao F, Qi Y, Li X, Zheng Y, Lu L, Shi Y, Zhang L. Deep Learning for Fully Automated Prediction of Overall Survival in Patients Undergoing Resection for Pancreatic Cancer: A Retrospective Multicenter Study. Ann Surg 2023; 278:e68-e79. PMID: 35781511. DOI: 10.1097/sla.0000000000005465.
Abstract
OBJECTIVE To develop an imaging-derived biomarker for prediction of overall survival (OS) of pancreatic cancer by analyzing preoperative multiphase contrast-enhanced computed tomography (CECT) using deep learning. BACKGROUND Exploiting prognostic biomarkers for guiding neoadjuvant and adjuvant treatment decisions may potentially improve outcomes in patients with resectable pancreatic cancer. METHODS This multicenter, retrospective study included 1516 patients with resected pancreatic ductal adenocarcinoma (PDAC) from 5 centers located in China. The discovery cohort (n=763), which included preoperative multiphase CECT scans and OS data from 2 centers, was used to construct a fully automated imaging-derived prognostic biomarker, DeepCT-PDAC, by training scalable deep segmentation and prognostic models (via self-learning) to comprehensively model the tumor-anatomy spatial relations and their appearance dynamics in multiphase CECT for OS prediction. The marker was independently tested using internal (n=574) and external validation cohorts (n=179, 3 centers) to evaluate its performance, robustness, and clinical usefulness. RESULTS Preoperatively, DeepCT-PDAC was the strongest predictor of OS in both internal and external validation cohorts [hazard ratio (HR) for high versus low risk 2.03, 95% confidence interval (CI): 1.50-2.75; HR: 2.47, CI: 1.35-4.53] in a multivariable analysis. Postoperatively, DeepCT-PDAC remained significant in both cohorts (HR: 2.49, CI: 1.89-3.28; HR: 2.15, CI: 1.14-4.05) after adjustment for potential confounders. For margin-negative patients, adjuvant chemoradiotherapy was associated with improved OS in the subgroup with DeepCT-PDAC low risk (HR: 0.35, CI: 0.19-0.64), but did not affect OS in the subgroup with high risk. CONCLUSIONS This deep learning-based CT imaging-derived biomarker enabled objective and unbiased OS prediction for patients with resectable PDAC. The marker is applicable across hospitals, imaging protocols, and treatments, and has the potential to tailor neoadjuvant and adjuvant treatments at the individual level.
Affiliation(s)
- Kai Cao
  - Department of Radiology, Changhai Hospital, Shanghai, China
- Yang Hou
  - Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, Liaoning, China
  - Key Laboratory of Medical Imaging Technology and Artificial Intelligence, Shengjing Hospital of China Medical University, Shenyang, Liaoning, China
- Jian Zhou
  - Department of Radiology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, Guangdong, China
- Yingda Xia
  - DAMO Academy, Alibaba Group, New York, NY
- Isabella Nogues
  - Department of Biostatistics, Harvard University T.H. Chan School of Public Health, Boston, MA
- Qike Song
  - Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, Liaoning, China
- Hui Jiang
  - Department of Pathology, Changhai Hospital, Shanghai, China
- Xianghua Ye
  - Department of Radiotherapy, The First Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Jianping Lu
  - Department of Radiology, Changhai Hospital, Shanghai, China
- Gang Jin
  - Department of Surgery, Changhai Hospital, Shanghai, China
- Hong Lu
  - Key Laboratory of Cancer Prevention and Therapy, Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center of Cancer, Tianjin, China
- Chuanmiao Xie
  - Department of Radiology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, Guangdong, China
- Rong Zhang
  - Department of Radiology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, Guangdong, China
- Jing Xiao
  - Ping An Technology Co. Ltd., Shenzhen, Guangdong, China
- Zaiyi Liu
  - Department of Radiology, Guangdong Provincial People's Hospital/Guangdong Academy of Medical Sciences, Guangzhou, Guangdong, China
  - Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Feng Gao
  - Department of Hepato-pancreato-biliary Tumor Surgery, Shengjing Hospital of China Medical University, Shenyang, Liaoning, China
- Yafei Qi
  - Department of Pathology, Shengjing Hospital of China Medical University, Shenyang, Liaoning, China
- Xuezhou Li
  - Department of Radiology, Changhai Hospital, Shanghai, China
- Yang Zheng
  - Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, Liaoning, China
- Le Lu
  - DAMO Academy, Alibaba Group, New York, NY
- Yu Shi
  - Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, Liaoning, China
  - Key Laboratory of Medical Imaging Technology and Artificial Intelligence, Shengjing Hospital of China Medical University, Shenyang, Liaoning, China
- Ling Zhang
  - DAMO Academy, Alibaba Group, New York, NY

14. Gao H, Lyu M, Zhao X, Yang F, Bai X. Contour-aware network with class-wise convolutions for 3D abdominal multi-organ segmentation. Med Image Anal 2023; 87:102838. PMID: 37196536. DOI: 10.1016/j.media.2023.102838.
Abstract
Accurate delineation of multiple organs is a critical process for various medical procedures, which could be operator-dependent and time-consuming. Existing organ segmentation methods, which were mainly inspired by natural image analysis techniques, might not fully exploit the traits of the multi-organ segmentation task and could not accurately segment organs with various shapes and sizes simultaneously. In this work, the characteristics of multi-organ segmentation are considered: the global count, position, and scale of organs are generally predictable, while their local shape and appearance are volatile. Thus, we supplement the region segmentation backbone with a contour localization task to increase the certainty along delicate boundaries. Meanwhile, each organ has exclusive anatomical traits, which motivates us to deal with class variability using class-wise convolutions to highlight organ-specific features and suppress irrelevant responses at different fields of view. To validate our method with adequate amounts of patients and organs, we constructed a multi-center dataset, which contains 110 3D CT scans with 24,528 axial slices, and provided voxel-level manual segmentations of 14 abdominal organs, adding up to 1,532 3D structures in total. Extensive ablation and visualization studies on it validate the effectiveness of the proposed method. Quantitative analysis shows that we achieve state-of-the-art performance for most abdominal organs, obtaining an average 95% Hausdorff Distance of 3.63 mm and an average Dice Similarity Coefficient of 83.32%.
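
A minimal reading of "class-wise convolutions" is a grouped convolution that gives each organ class its own filters, paired with an auxiliary contour head for the boundary-localization task. The sketch below illustrates that interpretation; it is an assumption on our part, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ClassWiseHead(nn.Module):
    """Toy head: one filter group per organ class plus a 1-channel contour
    branch. Channel sizes and names are illustrative."""
    def __init__(self, feat_ch, num_classes):
        super().__init__()
        assert feat_ch % num_classes == 0          # equal channels per class group
        self.classwise = nn.Conv3d(feat_ch, num_classes, kernel_size=3,
                                   padding=1, groups=num_classes)
        self.contour = nn.Conv3d(feat_ch, 1, kernel_size=3, padding=1)

    def forward(self, feats):
        # region logits (one per class, from its own group) + contour logits
        return self.classwise(feats), self.contour(feats)
```
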
Affiliation(s)
- Hongjian Gao
  - Image Processing Center, Beihang University, Beijing 102206, China
- Mengyao Lyu
  - School of Software, Tsinghua University, Beijing 100084, China
  - Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China
- Xinyue Zhao
  - School of Medical Imaging, Xuzhou Medical University, Xuzhou 221004, China
- Fan Yang
  - Image Processing Center, Beihang University, Beijing 102206, China
- Xiangzhi Bai
  - Image Processing Center, Beihang University, Beijing 102206, China
  - State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China
  - Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China

15. Wei C, Ren S, Guo K, Hu H, Liang J. High-Resolution Swin Transformer for Automatic Medical Image Segmentation. Sensors (Basel) 2023; 23:3420. PMID: 37050479. PMCID: PMC10099222. DOI: 10.3390/s23073420.
Abstract
The resolution of feature maps is a critical factor for accurate medical image segmentation. Most of the existing Transformer-based networks for medical image segmentation adopt a U-Net-like architecture, which contains an encoder that converts the high-resolution input image into low-resolution feature maps using a sequence of Transformer blocks and a decoder that gradually generates high-resolution representations from low-resolution feature maps. However, the procedure of recovering high-resolution representations from low-resolution representations may harm the spatial precision of the generated segmentation masks. Unlike previous studies, in this study, we utilized the high-resolution network (HRNet) design style by replacing the convolutional layers with Transformer blocks, continuously exchanging feature map information with different resolutions generated by the Transformer blocks. The proposed Transformer-based network is named the high-resolution Swin Transformer network (HRSTNet). Extensive experiments demonstrated that the HRSTNet can achieve performance comparable with that of the state-of-the-art Transformer-based U-Net-like architecture on the 2021 Brain Tumor Segmentation dataset, the Medical Segmentation Decathlon's liver dataset, and the BTCV multi-organ segmentation dataset.
Affiliation(s)
- Chen Wei
  - College of Economics and Management, Xi’an University of Posts & Telecommunications, Xi’an 710061, China
- Shenghan Ren
  - School of Life Science and Technology, Xidian University, Xi’an 710071, China
- Kaitai Guo
  - School of Electronic Engineering, Xidian University, Xi’an 710071, China
- Haihong Hu
  - School of Electronic Engineering, Xidian University, Xi’an 710071, China
- Jimin Liang
  - School of Electronic Engineering, Xidian University, Xi’an 710071, China

16. Hering A, Hansen L, Mok TCW, Chung ACS, Siebert H, Hager S, Lange A, Kuckertz S, Heldmann S, Shao W, Vesal S, Rusu M, Sonn G, Estienne T, Vakalopoulou M, Han L, Huang Y, Yap PT, Brudfors M, Balbastre Y, Joutard S, Modat M, Lifshitz G, Raviv D, Lv J, Li Q, Jaouen V, Visvikis D, Fourcade C, Rubeaux M, Pan W, Xu Z, Jian B, De Benetti F, Wodzinski M, Gunnarsson N, Sjolund J, Grzech D, Qiu H, Li Z, Thorley A, Duan J, Grosbrohmer C, Hoopes A, Reinertsen I, Xiao Y, Landman B, Huo Y, Murphy K, Lessmann N, van Ginneken B, Dalca AV, Heinrich MP. Learn2Reg: Comprehensive Multi-Task Medical Image Registration Challenge, Dataset and Evaluation in the Era of Deep Learning. IEEE Trans Med Imaging 2023; 42:697-712. PMID: 36264729. DOI: 10.1109/tmi.2022.3213983.
Abstract
Image registration is a fundamental medical image analysis task, and a wide variety of approaches have been proposed. However, only a few studies have comprehensively compared medical image registration approaches on a wide range of clinically relevant tasks. This limits the development of registration methods, the adoption of research advances into practice, and a fair benchmark across competing approaches. The Learn2Reg challenge addresses these limitations by providing a multi-task medical image registration data set for comprehensive characterisation of deformable registration algorithms. A continuous evaluation will be possible at https://learn2reg.grand-challenge.org. Learn2Reg covers a wide range of anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MR), availability of annotations, as well as intra- and inter-patient registration evaluation. We established an easily accessible framework for training and validation of 3D registration methods, which enabled the compilation of results of over 65 individual method submissions from more than 20 unique teams. We used a complementary set of metrics, including robustness, accuracy, plausibility, and runtime, enabling unique insight into the current state-of-the-art of medical image registration. This paper describes datasets, tasks, evaluation methods and results of the challenge, as well as results of further analysis of transferability to new datasets, the importance of label supervision, and resulting bias. While no single approach worked best across all tasks, many methodological aspects could be identified that push the performance of medical image registration to new state-of-the-art performance. Furthermore, we demystified the common belief that conventional registration methods have to be much slower than deep-learning-based methods.

17. Wang P, Yan Y, Qian L, Suo S, Xu J, Guo Y, Wang Y. Context-driven pyramid registration network for estimating large topology-preserved deformation. Neurocomputing 2023. DOI: 10.1016/j.neucom.2022.11.088.

18. DiffeoRaptor: diffeomorphic inter-modal image registration using RaPTOR. Int J Comput Assist Radiol Surg 2023; 18:367-377. PMID: 36173541. DOI: 10.1007/s11548-022-02749-2.
Abstract
PURPOSE Diffeomorphic image registration is essential in many medical imaging applications. Several registration algorithms of this type have been proposed, but primarily for intra-contrast alignment. Currently, efficient inter-modal/contrast diffeomorphic registration, which is vital in numerous applications, remains a challenging task. METHODS We propose a novel inter-modal/contrast registration algorithm that leverages the Robust PaTch-based cOrrelation Ratio (RaPTOR) metric to allow inter-modal/contrast image alignment and the bandlimited geodesic shooting demonstrated in the Fourier-Approximated Lie Algebras (FLASH) algorithm for fast diffeomorphic registration. RESULTS The proposed algorithm, named DiffeoRaptor, was validated on three public databases for brain and abdominal image registration tasks, comparing the results against three state-of-the-art techniques: FLASH, NiftyReg, and Symmetric image Normalization (SyN). CONCLUSIONS Our results demonstrate that DiffeoRaptor offers comparable or better registration accuracy. Moreover, DiffeoRaptor produces smoother deformations than SyN in inter-modal and contrast registration. The code for DiffeoRaptor is publicly available at https://github.com/nimamasoumi/DiffeoRaptor.
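
For reference, the correlation ratio underlying RaPTOR measures how well the intensities of one image explain the variance of the other, which makes it usable across modalities. The sketch below computes a global eta-squared in numpy (the papers compute it patch-wise and robustly; this simplification is ours).

```python
import numpy as np

def correlation_ratio(fixed, moving, bins=32):
    """Global correlation ratio eta^2 of `moving` given intensity bins of
    `fixed`: eta^2 = 1 - sum_k(N_k * var_k) / (N * var_total)."""
    x = moving.ravel()
    labels = np.digitize(fixed.ravel(),
                         np.histogram_bin_edges(fixed, bins=bins)[1:-1])
    within = 0.0
    for k in np.unique(labels):              # variance within each fixed-intensity bin
        xk = x[labels == k]
        within += xk.size * xk.var()
    return 1.0 - within / (x.size * x.var()) # 1 = perfect functional dependence

# Registration maximises eta^2 (or minimises 1 - eta^2) over the deformation.
```
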

19. An End-to-End Data-Adaptive Pancreas Segmentation System with an Image Quality Control Toolbox. J Healthc Eng 2023. DOI: 10.1155/2023/3617318.
Abstract
With the development of radiology and computer technology, diagnosis by medical imaging is heading toward precision and automation. Owing to the complex anatomy around the pancreatic tissue and the high demands on clinical experience, an assisted pancreas segmentation system would greatly improve clinical efficiency. However, existing segmentation models generalize poorly across images from multiple hospitals. In this paper, we propose an end-to-end data-adaptive pancreas segmentation system to tackle the problems of scarce annotations and limited model generalizability. The system employs adversarial learning to transfer features from labeled domains to unlabeled domains, seeking a dynamic balance between domain discrimination and unsupervised segmentation. An image quality control toolbox is embedded in the system, which standardizes image quality in terms of intensity, field of view, and so on, to decrease heterogeneity among image domains. In addition, the system implements the data-adaptive process end-to-end without complex operations by doctors. The experiments were conducted on an annotated public dataset and an unannotated in-hospital dataset. The results indicate that after data adaptation, segmentation performance measured by the Dice similarity coefficient on unlabeled images improves from 58.79% to 75.43%, a gain of 16.64 percentage points. Furthermore, the system preserves quantitatively structured information such as the pancreas's size and volume, as well as objective and accurate visualized images, which assists clinicians in diagnosing and formulating treatment plans in a timely and accurate manner.
|
20
|
Mao Z, Zhao L, Huang S, Jin T, Fan Y, Lee APW. Complete region of interest reconstruction by fusing multiview deformable three-dimensional transesophageal echocardiography images. Med Phys 2023; 50:61-73. [PMID: 35924929 DOI: 10.1002/mp.15910] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2021] [Revised: 07/25/2022] [Accepted: 07/26/2022] [Indexed: 01/25/2023] Open
Abstract
BACKGROUND While three-dimensional transesophageal echocardiography (3D TEE) has been increasingly used for assessing cardiac anatomy and function, it still suffers from a limited field of view (FoV) of the ultrasound transducer. Therefore, it is difficult to examine a complete region of interest without moving the transducer. Existing methods extend the FoV of 3D TEE images by mosaicing multiview static images, which requires synchronization between 3D TEE images and the electrocardiogram (ECG) signal to avoid deformations in the images and can only produce the widened image at a specific phase. PURPOSE This work aims to develop a novel multiview nonrigid registration and fusion method to extend the FoV of 3D TEE images at different cardiac phases, avoiding the bias toward a specifically chosen phase. METHODS A multiview nonrigid registration and fusion method is proposed to enlarge the FoV of 3D TEE images by fusing dynamic images captured from different viewpoints sequentially. The deformation field for registering images is defined by a collection of affine transformations organized in a graph structure and is estimated by a direct (intensity-based) method. The accuracy of the proposed method is evaluated by comparing it with two B-spline-based methods, two Demons-based methods, and one learning-based method, VoxelMorph. Twenty-nine sequences of in vivo 3D TEE images captured from four patients are used for the comparative experiments. Four performance metrics including checkerboard volumes, signed distance, mean absolute distance (MAD), and Dice similarity coefficient (DSC) are used jointly to evaluate the accuracy of the results. Additionally, paired t-tests are performed to examine the significance of the results. RESULTS The qualitative results show that the proposed method can align images more accurately and obtain fused images of higher quality than the other five methods. Additionally, in the evaluation of the segmented left atrium (LA) walls for the pairwise registration and sequential fusion experiments, the proposed method achieves a MAD of (0.07 ± 0.03) mm for pairwise registration and (0.19 ± 0.02) mm for sequential fusion. Paired t-tests indicate that the results obtained from the proposed method are more accurate than those obtained by the state-of-the-art VoxelMorph and the diffeomorphic Demons methods at the significance level of 0.05. In the evaluation of left ventricle (LV) segmentations for the sequential fusion experiments, the proposed method achieves a DSC of (0.88 ± 0.08), which is also significantly better than diffeomorphic Demons at the 0.05 level. The FoVs of the final fused 3D TEE images obtained by our method are roughly doubled compared with the original images. CONCLUSIONS Without selecting static (ECG-gated) images from the same cardiac phase, this work addressed the problem of limited FoV of 3D TEE images in the deformable scenario, obtaining fused images with high accuracy and good quality. The proposed method could provide an alternative to conventional fusion methods that are biased toward a specifically chosen phase.
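As a hedged illustration of the building block above, the snippet warps a volume with a single affine transformation using SciPy; the paper's method organizes many such local affines in a graph and estimates them from intensities, which is not reproduced here. Matrix and offset values are illustrative.

```python
import numpy as np
from scipy.ndimage import affine_transform

# A mild affine: slight scaling and shear, plus a sub-voxel translation.
A = np.array([[1.02, 0.01, 0.00],
              [0.00, 0.98, 0.03],
              [0.01, 0.00, 1.00]])
offset = np.array([0.5, -1.0, 0.0])  # voxel units

vol = np.zeros((32, 32, 32))
vol[12:20, 12:20, 12:20] = 1.0  # a toy cube to warp

# SciPy uses the output-to-input convention: input coord = A @ output + offset.
warped = affine_transform(vol, A, offset=offset, order=1)
```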
Affiliation(s)
- Zhehua Mao: Robotics Institute, Faculty of Engineering and Information Technology, University of Technology Sydney, Ultimo, New South Wales, Australia
- Liang Zhao: Robotics Institute, Faculty of Engineering and Information Technology, University of Technology Sydney, Ultimo, New South Wales, Australia
- Shoudong Huang: Robotics Institute, Faculty of Engineering and Information Technology, University of Technology Sydney, Ultimo, New South Wales, Australia
- Tongxing Jin: School of Information Science and Engineering, Harbin Institute of Technology, Weihai, China
- Yiting Fan: Department of Cardiology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Alex Pui-Wai Lee: Division of Cardiology, Department of Medicine and Therapeutics, Prince of Wales Hospital and Laboratory of Cardiac Imaging and 3D Printing, Li Ka Shing Institute of Health Science, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong, China
|
21
|
Yan K, Cai J, Jin D, Miao S, Guo D, Harrison AP, Tang Y, Xiao J, Lu J, Lu L. SAM: Self-Supervised Learning of Pixel-Wise Anatomical Embeddings in Radiological Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:2658-2669. [PMID: 35442886 DOI: 10.1109/tmi.2022.3169003] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Radiological images such as computed tomography (CT) and X-rays render anatomy with intrinsic structures. Being able to reliably locate the same anatomical structure across varying images is a fundamental task in medical image analysis. In principle it is possible to use landmark detection or semantic segmentation for this task, but to work well these require large amounts of labeled data for each anatomical structure and sub-structure of interest. A more universal approach would learn the intrinsic structure from unlabeled images. We introduce such an approach, called Self-supervised Anatomical eMbedding (SAM). SAM generates semantic embeddings for each image pixel that describe its anatomical location or body part. To produce such embeddings, we propose a pixel-level contrastive learning framework. A coarse-to-fine strategy ensures both global and local anatomical information are encoded. Negative sample selection strategies are designed to enhance the embedding's discriminability. Using SAM, one can label any point of interest on a template image and then locate the same body part in other images by simple nearest neighbor searching. We demonstrate the effectiveness of SAM in multiple tasks with 2D and 3D image modalities. On a chest CT dataset with 19 landmarks, SAM outperforms widely used registration algorithms while taking only 0.23 seconds for inference. On two X-ray datasets, SAM, with only one labeled template image, surpasses supervised methods trained on 50 labeled images. We also apply SAM to whole-body follow-up lesion matching in CT and obtain an accuracy of 91%. SAM can also be applied for improving image registration and initializing CNN weights.
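The nearest-neighbor lookup the abstract describes can be sketched in a few lines, assuming per-pixel embeddings are already computed and L2-normalized; shapes and names below are illustrative, not SAM's actual API.

```python
import numpy as np

def locate(template_emb, query_emb, point):
    """template_emb, query_emb: (H, W, C) arrays of unit-norm embeddings.
    point: (row, col) labeled on the template.
    Returns the (row, col) in the query image with the highest cosine similarity."""
    v = template_emb[point]                  # (C,) descriptor of the labeled pixel
    sim = query_emb.reshape(-1, v.size) @ v  # dot product == cosine for unit vectors
    return np.unravel_index(np.argmax(sim), query_emb.shape[:2])
```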
|
22
|
Dourthe B, Shaikh N, Pai S A, Fels S, Brown SHM, Wilson DR, Street J, Oxland TR. Automated Segmentation of Spinal Muscles From Upright Open MRI Using a Multiscale Pyramid 2D Convolutional Neural Network. Spine (Phila Pa 1976) 2022; 47:1179-1186. [PMID: 34919072 DOI: 10.1097/brs.0000000000004308] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/25/2021] [Accepted: 11/29/2021] [Indexed: 02/01/2023]
Abstract
STUDY DESIGN Randomized trial. OBJECTIVE To implement an algorithm enabling the automated segmentation of spinal muscles from open magnetic resonance images in healthy volunteers and patients with adult spinal deformity (ASD). SUMMARY OF BACKGROUND DATA Understanding spinal muscle anatomy is critical to diagnosing and treating spinal deformity. Muscle boundaries can be extrapolated from medical images using segmentation, which is usually done manually by clinical experts and remains complicated and time-consuming. METHODS Three groups were examined: two healthy volunteer groups (N = 6 for each group) and one ASD group (N = 8 patients) were imaged at the lumbar and thoracic regions of the spine in an upright open magnetic resonance imaging scanner while maintaining different postures (various seated, standing, and supine). For each group and region, a selection of regions of interest (ROIs) was manually segmented. A multiscale pyramid two-dimensional convolutional neural network was implemented to automatically segment all defined ROIs. A five-fold cross-validation method was applied; distinct models were trained for each resulting set and group and evaluated using Dice coefficients calculated between the model output and the manually segmented target. RESULTS Good to excellent results were found across all ROIs for the ASD (Dice coefficient >0.76) and healthy (Dice coefficient >0.86) groups. CONCLUSION This study represents a fundamental step toward the development of an automated spinal muscle properties extraction pipeline, which will ultimately allow clinicians to have easier access to patient-specific simulations, diagnosis, and treatment.
Affiliation(s)
- Benjamin Dourthe: ICORD, Blusson Spinal Cord Centre, University of British Columbia, Vancouver, BC, Canada; Department of Orthopaedics, University of British Columbia, Vancouver, BC, Canada
- Noor Shaikh: Department of Orthopaedics, University of British Columbia, Vancouver, BC, Canada; School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada; Department of Mechanical Engineering, University of British Columbia, Vancouver, BC, Canada
- Anoosha Pai S: Department of Orthopaedics, University of British Columbia, Vancouver, BC, Canada; School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Sidney Fels: Electrical and Computer Engineering Department, University of British Columbia, Vancouver, BC, Canada
- Stephen H M Brown: Department of Human Health and Nutritional Sciences, University of Guelph, Guelph, ON, Canada
- David R Wilson: ICORD, Blusson Spinal Cord Centre, University of British Columbia, Vancouver, BC, Canada; Department of Orthopaedics, University of British Columbia, Vancouver, BC, Canada; Centre for Hip Health and Mobility, University of British Columbia, Vancouver, BC, Canada
- John Street: ICORD, Blusson Spinal Cord Centre, University of British Columbia, Vancouver, BC, Canada; Department of Orthopaedics, University of British Columbia, Vancouver, BC, Canada
- Thomas R Oxland: ICORD, Blusson Spinal Cord Centre, University of British Columbia, Vancouver, BC, Canada; Department of Orthopaedics, University of British Columbia, Vancouver, BC, Canada; Department of Mechanical Engineering, University of British Columbia, Vancouver, BC, Canada
|
23
|
Schumacher M, Siebert H, Genz A, Bade R, Heinrich M. Learning-based three-dimensional registration with weak bounding box supervision. J Med Imaging (Bellingham) 2022; 9:044001. [PMID: 35847178 PMCID: PMC9279677 DOI: 10.1117/1.jmi.9.4.044001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2021] [Accepted: 06/28/2022] [Indexed: 09/05/2024] Open
Abstract
Purpose: Image registration is the process of aligning images, and it is a fundamental task in medical image analysis. While many tasks in the field of image analysis, such as image segmentation, are handled almost entirely with deep learning and exceed the accuracy of conventional algorithms, currently available deformable image registration methods are often still conventional. Deep learning methods for medical image registration have recently reached the accuracy of conventional algorithms. However, they are often based on a weakly supervised learning scheme using multilabel image segmentations during training. The creation of such detailed annotations is very time-consuming. Approach: We propose a weakly supervised learning scheme for deformable image registration. By calculating the loss function based on only bounding box labels, we are able to train an image registration network for large-displacement deformations without using densely labeled images. We evaluate our model on interpatient three-dimensional abdominal CT and MRI images. Results: The results show an improvement of ∼10% (for CT images) and ∼20% (for MRI images) in comparison to the unsupervised method. When taking into account the reduced annotation effort, the performance also exceeds that of weakly supervised training using detailed image segmentations. Conclusion: We show that the performance of image registration methods can be enhanced with little annotation effort using our proposed method.
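One plausible way to turn bounding boxes into a training signal, shown purely as a sketch: rasterize each organ's box into a binary mask and penalize non-overlap between the warped moving box mask and the fixed box mask with a soft Dice loss. The paper's exact loss may differ; all names here are illustrative.

```python
import numpy as np

def box_mask(shape, box):
    """shape: (D, H, W); box: (z0, z1, y0, y1, x0, x1) half-open voxel bounds."""
    m = np.zeros(shape, dtype=np.float32)
    z0, z1, y0, y1, x0, x1 = box
    m[z0:z1, y0:y1, x0:x1] = 1.0
    return m

def soft_dice_loss(warped_box, fixed_box, eps=1e-6):
    """1 - soft Dice between a warped moving box mask and a fixed box mask."""
    inter = (warped_box * fixed_box).sum()
    return 1.0 - 2.0 * inter / (warped_box.sum() + fixed_box.sum() + eps)
```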
Affiliation(s)
- Mona Schumacher: University of Luebeck, Institute of Medical Informatics, Luebeck, Germany; MeVis Medical Solutions AG, Bremen, Germany
- Hanna Siebert: University of Luebeck, Institute of Medical Informatics, Luebeck, Germany
- Mattias Heinrich: University of Luebeck, Institute of Medical Informatics, Luebeck, Germany
|
24
|
Rao Y, Zhou Y, Wang Y. Salient deformable network for abdominal multiorgan registration. Med Phys 2022; 49:5953-5963. [PMID: 35689601 DOI: 10.1002/mp.15791] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/24/2021] [Revised: 05/20/2022] [Accepted: 05/28/2022] [Indexed: 11/05/2022] Open
Abstract
BACKGROUND Image registration has long been an active research area in the medical image computing community; it performs a spatial transformation between a pair of images and establishes a point-wise correspondence to achieve spatial consistency. PURPOSE Previous work mainly focused on learning complicated deformation fields by maximizing the global-level (i.e., foreground plus background) image similarity. We argue that taking the background similarity into account may not be a good solution if we only seek the accurate alignment of target organs/regions in real clinical practice. METHODS We therefore propose a novel concept of Salient Registration and introduce a novel deformable network equipped with a saliency module. Specifically, a multitask learning-based saliency module is proposed to discriminate the salient regions-of-registration in a semisupervised manner. Then, our deformable network analyzes the intensity and anatomical similarity of salient regions and finally conducts the salient deformable registration. RESULTS We evaluate the efficacy of the proposed network on challenging abdominal multiorgan CT scans. The experimental results demonstrate that the proposed registration network outperforms other state-of-the-art methods, achieving a mean Dice similarity coefficient (DSC) of 40.2%, Hausdorff distance (95 HD) of 20.8 mm, and average symmetric surface distance (ASSD) of 4.58 mm. Moreover, even when trained with a single labeled scan, our network can still attain satisfactory registration performance, with a mean DSC of 39.2%, 95 HD of 21.2 mm, and ASSD of 4.78 mm. CONCLUSIONS The proposed network provides an accurate solution for multiorgan registration and has the potential to be used for improving other registration applications. The code is publicly available at https://github.com/Rrrfrr/Salient-Deformable-Network.
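Of the reported metrics, the 95th-percentile Hausdorff distance is the least standard to implement; a compact approximation using distance transforms is sketched below. For brevity it is computed over all mask voxels rather than extracted surfaces, which is a simplification of the usual definition.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric 95% Hausdorff distance between binary masks a and b (in mm
    given the voxel spacing). Approximation over mask voxels, not surfaces."""
    a, b = a.astype(bool), b.astype(bool)
    dist_to_a = distance_transform_edt(~a, sampling=spacing)
    dist_to_b = distance_transform_edt(~b, sampling=spacing)
    d_ab = dist_to_b[a]  # distances from voxels of a to nearest voxel of b
    d_ba = dist_to_a[b]
    return np.percentile(np.concatenate([d_ab, d_ba]), 95)
```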
Affiliation(s)
- Yi Rao: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Smart Medical Imaging, Learning and Engineering (SMILE) Lab, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China
- Yihao Zhou: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Smart Medical Imaging, Learning and Engineering (SMILE) Lab, Shenzhen University, Shenzhen, China
- Yi Wang: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Smart Medical Imaging, Learning and Engineering (SMILE) Lab, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
|
25
|
Altini N, Prencipe B, Cascarano GD, Brunetti A, Brunetti G, Triggiani V, Carnimeo L, Marino F, Guerriero A, Villani L, Scardapane A, Bevilacqua V. Liver, kidney and spleen segmentation from CT scans and MRI with deep learning: A survey. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.08.157] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
|
26
|
Tallam H, Elton DC, Lee S, Wakim P, Pickhardt PJ, Summers RM. Fully Automated Abdominal CT Biomarkers for Type 2 Diabetes Using Deep Learning. Radiology 2022; 304:85-95. [PMID: 35380492 DOI: 10.1148/radiol.211914] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/02/2023]
Abstract
Background CT biomarkers both inside and outside the pancreas can potentially be used to diagnose type 2 diabetes mellitus. Previous studies on this topic have shown significant results but were limited by manual methods and small study samples. Purpose To investigate abdominal CT biomarkers for type 2 diabetes mellitus in a large clinical data set using fully automated deep learning. Materials and Methods For external validation, noncontrast abdominal CT images were retrospectively collected from consecutive patients who underwent routine colorectal cancer screening with CT colonography from 2004 to 2016. The pancreas was segmented using a deep learning method that outputs measurements of interest, including CT attenuation, volume, fat content, and pancreas fractal dimension. Additional biomarkers assessed included visceral fat, atherosclerotic plaque, liver and muscle CT attenuation, and muscle volume. Univariable and multivariable analyses were performed, separating patients into groups based on time between type 2 diabetes diagnosis and CT date and including clinical factors such as sex, age, body mass index (BMI), BMI greater than 30 kg/m2, and height. The best set of predictors for type 2 diabetes were determined using multinomial logistic regression. Results A total of 8992 patients (mean age, 57 years ± 8 [SD]; 5009 women) were evaluated in the test set, of whom 572 had type 2 diabetes mellitus. The deep learning model had a mean Dice similarity coefficient for the pancreas of 0.69 ± 0.17, similar to the interobserver Dice similarity coefficient of 0.69 ± 0.09 (P = .92). The univariable analysis showed that patients with diabetes had, on average, lower pancreatic CT attenuation (mean, 18.74 HU ± 16.54 vs 29.99 HU ± 13.41; P < .0001) and greater visceral fat volume (mean, 235.0 mL ± 108.6 vs 130.9 mL ± 96.3; P < .0001) than those without diabetes. Patients with diabetes also showed a progressive decrease in pancreatic attenuation with greater duration of disease. The final multivariable model showed pairwise areas under the receiver operating characteristic curve (AUCs) of 0.81 and 0.85 between patients without and patients with diabetes who were diagnosed 0-2499 days before and after undergoing CT, respectively. In the multivariable analysis, adding clinical data did not improve upon CT-based AUC performance (AUC = 0.67 for the CT-only model vs 0.68 for the CT and clinical model). The best predictors of type 2 diabetes mellitus included intrapancreatic fat percentage, pancreatic fractal dimension, plaque severity between the L1 and L4 vertebra levels, average liver CT attenuation, and BMI. Conclusion The diagnosis of type 2 diabetes mellitus was associated with abdominal CT biomarkers, especially measures of pancreatic CT attenuation and visceral fat. © RSNA, 2022 Online supplemental material is available for this article.
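The final predictive step is a multinomial logistic regression over biomarker features; a toy two-class sketch with scikit-learn on synthetic data is shown below, since the study's features and labels are not available here. Feature names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Columns stand in for biomarkers, e.g., pancreatic HU, visceral fat volume, ...
X = rng.standard_normal((500, 5))
# Synthetic outcome driven by the first two "biomarkers" plus noise.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(500) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
print("AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))
```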
Affiliation(s)
- Hima Tallam, Daniel C Elton, Sungwon Lee, Paul Wakim, Perry J Pickhardt, Ronald M Summers: From the Department of Radiology and Imaging Sciences (H.T., D.C.E., S.L., R.M.S.) and Department of Biostatistics and Clinical Epidemiology Service (P.W.), Clinical Center, National Institutes of Health, 10 Center Dr, Bldg 10, Room 1C224D, MSC 1182, Bethesda, MD 20892-1182; and Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wis (P.J.P.)
|
27
|
Hoffmann M, Billot B, Greve DN, Iglesias JE, Fischl B, Dalca AV. SynthMorph: Learning Contrast-Invariant Registration Without Acquired Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:543-558. [PMID: 34587005 PMCID: PMC8891043 DOI: 10.1109/tmi.2021.3116879] [Citation(s) in RCA: 39] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
We introduce a strategy for learning image registration without acquired imaging data, producing powerful networks agnostic to contrast introduced by magnetic resonance imaging (MRI). While classical registration methods accurately estimate the spatial correspondence between images, they solve an optimization problem for every new image pair. Learning-based techniques are fast at test time but limited to registering images with contrasts and geometric content similar to those seen during training. We propose to remove this dependency on training data by leveraging a generative strategy for diverse synthetic label maps and images that exposes networks to a wide range of variability, forcing them to learn more invariant features. This approach results in powerful networks that accurately generalize to a broad array of MRI contrasts. We present extensive experiments with a focus on 3D neuroimaging, showing that this strategy enables robust and accurate registration of arbitrary MRI contrasts even if the target contrast is not seen by the networks during training. We demonstrate registration accuracy surpassing the state of the art both within and across contrasts, using a single model. Critically, training on arbitrary shapes synthesized from noise distributions results in competitive performance, removing the dependency on acquired data of any kind. Additionally, since anatomical label maps are often available for the anatomy of interest, we show that synthesizing images from these dramatically boosts performance, while still avoiding the need for real intensity images. Our code is available at https://w3id.org/synthmorph.
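The "shapes synthesized from noise" idea can be sketched compactly: smooth multi-channel noise, take a per-voxel argmax to obtain a random label map, then paint labels with random intensities to form a training image. Parameters below are illustrative and this is not SynthMorph's actual generative model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synth_pair(shape=(96, 96), n_labels=8, sigma=8.0, seed=0):
    """Returns (labels, image): a random smooth label map and a synthetic
    image with one random intensity per label, mildly blurred."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n_labels,) + shape)
    # Smoothing each channel then taking argmax yields blob-like regions.
    labels = np.argmax([gaussian_filter(c, sigma) for c in noise], axis=0)
    intensities = rng.uniform(0.0, 1.0, n_labels)  # random per-label contrast
    image = gaussian_filter(intensities[labels], 1.0)
    return labels, image
```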
|
28
|
Lee HH, Tang Y, Bao S, Yang Q, Xu X, Fogo AB, Harris R, de Caestecker MP, Spraggins JM, Heinrich M, Huo Y, Landman BA. Supervised Deep Generation of High-Resolution Arterial Phase Computed Tomography Kidney Substructure Atlas. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2022; 12032:120322S. [PMID: 36303577 PMCID: PMC9605120 DOI: 10.1117/12.2608290] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
The Human BioMolecular Atlas Program (HuBMAP) provides an opportunity to contextualize findings across cellular to organ-system levels. Constructing an atlas target is the primary endpoint for generalizing anatomical information across scales and populations. An initial target of HuBMAP is the kidney, and arterial phase contrast-enhanced computed tomography (CT) provides distinctive appearance and anatomical context for the internal substructure of the kidney, such as the renal cortex, medulla, and pelvicalyceal system. Across large-scale imaging surveys, demographics and kidney morphology are confounding factors, and substantial variation is seen in internal substructure morphometry and intensity contrast owing to differences in imaging protocols. Such variability makes it more difficult to localize the anatomical features of the kidney substructure in a well-defined spatial reference for clinical analysis. In order to stabilize the localization of kidney substructures in the context of this variability, we propose a high-resolution CT kidney substructure atlas template. Briefly, we introduce a deep learning preprocessing technique to extract the volume of interest of the abdominal region and further perform a deep supervised registration pipeline to stably adapt the anatomical context of the kidney internal substructure. To generate and evaluate the atlas template, arterial phase CT scans of 500 control subjects were de-identified and registered to the atlas template with a complete end-to-end pipeline. With stable registration to the abdominal wall and kidney organs, the internal substructure of both left and right kidneys is consistently localized in the high-resolution atlas space. The average atlas template captures the contextual details of the internal structure and generalizes the morphological variation of the substructure across patients.
Affiliation(s)
- Ho Hin Lee: Department of Computer Science, Vanderbilt University, Nashville, TN, USA 37212
- Yucheng Tang: Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA 37212
- Shunxing Bao: Department of Computer Science, Vanderbilt University, Nashville, TN, USA 37212
- Qi Yang: Department of Computer Science, Vanderbilt University, Nashville, TN, USA 37212
- Xin Xu: Department of Computer Science, Vanderbilt University, Nashville, TN, USA 37212
- Agnes B Fogo: Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, TN, USA 37232; Departments of Medicine and Pediatrics, Vanderbilt University Medical Center, Nashville, TN, USA 37232; Division of Nephrology and Hypertension, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN, USA 37232
- Raymond Harris: Division of Nephrology and Hypertension, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN, USA 37232
- Mark P de Caestecker: Division of Nephrology and Hypertension, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN, USA 37232
- Jeffrey M Spraggins: Department of Cell and Developmental Biology, Vanderbilt University, Nashville, TN, USA 37232
- Mattias Heinrich: Institute of Medical Informatics, University of Luebeck, Germany
- Yuankai Huo: Department of Computer Science, Vanderbilt University, Nashville, TN, USA 37212; Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA 37212
- Bennett A Landman: Department of Computer Science, Vanderbilt University, Nashville, TN, USA 37212; Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA 37212; Institute of Medical Informatics, University of Luebeck, Germany
|
29
|
Jiang C, Huang Y, Ding S, Gong X, Yuan X, Wang S, Li J, Zhang Y. Comparison of an in-house hybrid DIR method to NiftyReg on CBCT and CT images for head and neck cancer. J Appl Clin Med Phys 2022; 23:e13540. [PMID: 35084081 PMCID: PMC8906219 DOI: 10.1002/acm2.13540] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2021] [Revised: 12/22/2021] [Accepted: 01/07/2022] [Indexed: 11/10/2022] Open
Abstract
An in-house hybrid deformable image registration (DIR) method, which combines free-form deformation (FFD) and the viscous fluid registration method, is proposed. Its results on the planning computed tomography (CT) and the day 1 treatment cone-beam CT (CBCT) images from 68 head and neck cancer patients are compared with the results of NiftyReg, which uses B-spline FFD alone. Several similarity metrics, the target registration error (TRE) of annotated points, as well as the Dice similarity coefficient (DSC) and Hausdorff distance (HD) of the propagated organs at risk are employed to analyze their registration accuracy. According to quantitative analysis of mutual information, normalized cross-correlation, and the absolute pixel value differences, the results of the proposed DIR are more similar to the CBCT images than the NiftyReg results. Smaller TRE of the annotated points is observed for the proposed method; the overall mean TRE for the proposed method and NiftyReg was 2.34 and 2.98 mm, respectively (p < 0.001). The mean DSC in the larynx, spinal cord, oral cavity, mandible, and parotid given by the proposed method ranged from 0.78 to 0.91, significantly higher than the NiftyReg results (ranging from 0.77 to 0.90), and the HD was significantly lower compared to NiftyReg. Furthermore, the proposed method did not suffer from unrealistic deformations as NiftyReg did in the visual evaluation. Meanwhile, the execution time of the proposed method was much longer than that of NiftyReg (96.98 ± 11.88 s vs. 4.60 ± 0.49 s). In conclusion, the in-house hybrid method gave better accuracy and more stable performance than NiftyReg.
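The TRE figures above are mean landmark distances after registration; a minimal sketch follows, assuming the warped landmark positions are already available (how points are transported through the deformation is method-specific and not shown).

```python
import numpy as np

def tre(warped_points, target_points, spacing=(1.0, 1.0, 1.0)):
    """Mean target registration error in mm.
    Both inputs are (N, 3) voxel coordinates; spacing converts to mm."""
    diff = (warped_points - target_points) * np.asarray(spacing)
    return np.linalg.norm(diff, axis=1).mean()
```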
Affiliation(s)
- Chunling Jiang: Department of Radiation Oncology, Jiangxi Cancer Hospital of Nanchang University, Nanchang, P. R. China; Key Laboratory of Personalized Diagnosis and Treatment of Nasopharyngeal Carcinoma Nanchang, Nanchang, P. R. China; Medical College of Nanchang University, Nanchang, P. R. China
- Yuling Huang: Department of Radiation Oncology, Jiangxi Cancer Hospital of Nanchang University, Nanchang, P. R. China
- Shenggou Ding: Department of Radiation Oncology, Jiangxi Cancer Hospital of Nanchang University, Nanchang, P. R. China
- Xiaochang Gong: Department of Radiation Oncology, Jiangxi Cancer Hospital of Nanchang University, Nanchang, P. R. China
- Xingxing Yuan: Department of Radiation Oncology, Jiangxi Cancer Hospital of Nanchang University, Nanchang, P. R. China
- Shaobin Wang: MedMind Technology Co. Ltd., Beijing, P. R. China
- Jingao Li: Department of Radiation Oncology, Jiangxi Cancer Hospital of Nanchang University, Nanchang, P. R. China; Key Laboratory of Personalized Diagnosis and Treatment of Nasopharyngeal Carcinoma Nanchang, Nanchang, P. R. China; Medical College of Nanchang University, Nanchang, P. R. China
- Yun Zhang: Department of Radiation Oncology, Jiangxi Cancer Hospital of Nanchang University, Nanchang, P. R. China
|
30
|
Wang S, Celebi ME, Zhang YD, Yu X, Lu S, Yao X, Zhou Q, Miguel MG, Tian Y, Gorriz JM, Tyukin I. Advances in Data Preprocessing for Biomedical Data Fusion: An Overview of the Methods, Challenges, and Prospects. INFORMATION FUSION 2021; 76:376-421. [DOI: 10.1016/j.inffus.2021.07.001] [Citation(s) in RCA: 49] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/30/2023]
|
31
|
Zhao X, Zhou Y, Zhang Y, Han L, Mao L, Yu Y, Li X, Zeng M, Wang M, Liu Z. Radiomics Based on Contrast-Enhanced MRI in Differentiation Between Fat-Poor Angiomyolipoma and Hepatocellular Carcinoma in Noncirrhotic Liver: A Multicenter Analysis. Front Oncol 2021; 11:744756. [PMID: 34722300 PMCID: PMC8548657 DOI: 10.3389/fonc.2021.744756] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2021] [Accepted: 09/21/2021] [Indexed: 12/23/2022] Open
Abstract
Objective This study aims to develop and externally validate a contrast-enhanced magnetic resonance imaging (CE-MRI) radiomics-based model for preoperative differentiation between fat-poor angiomyolipoma (fp-AML) and hepatocellular carcinoma (HCC) in patients with noncirrhotic livers and to compare the diagnostic performance with that of two radiologists. Methods This retrospective study was performed with 165 patients with noncirrhotic livers from three medical centers. The dataset was divided into a training cohort (n = 99), a time-independent internal validation cohort (n = 24) from one center, and an external validation cohort (n = 42) from the remaining two centers. The volumes of interest were contoured on the arterial phase (AP) images and then registered to the venous phase (VP) and delayed phase (DP), and a total of 3,396 radiomics features were extracted from the three phases. After the joint mutual information maximization feature selection procedure, four radiomics logistic regression classifiers, including the AP model, VP model, DP model, and combined model, were built. The area under the receiver operating characteristic curve (AUC), diagnostic accuracy, sensitivity, and specificity of each radiomics model and those of two radiologists were evaluated and compared. Results The AUCs of the combined model reached 0.789 (95%CI, 0.579-0.999) in the internal validation cohort and 0.730 (95%CI, 0.563-0.896) in the external validation cohort, higher than the AP model (AUCs, 0.711 and 0.638) and significantly higher than the VP model (AUCs, 0.594 and 0.610) and the DP model (AUCs, 0.547 and 0.538). The diagnostic accuracy, sensitivity, and specificity of the combined model were 0.708, 0.625, and 0.750 in the internal validation cohort and 0.619, 0.786, and 0.536 in the external validation cohort, respectively. The AUCs for the two radiologists were 0.656 and 0.594 in the internal validation cohort and 0.643 and 0.500 in the external validation cohort. The AUCs of the combined model surpassed those of the two radiologists and were significantly higher than that of the junior one in both validation cohorts. Conclusions The proposed radiomics model based on triple-phase CE-MRI images was proven to be useful for differentiating between fp-AML and HCC and yielded comparable or better performance than two radiologists in different centers, with different scanners and different scanning parameters.
Affiliation(s)
- Xiangtian Zhao: Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Yukun Zhou: Medical Imaging Center, The First Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- Yuan Zhang: Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Lujun Han: Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Li Mao: AI Lab, Deepwise Healthcare, Beijing, China
- Yizhou Yu: AI Lab, Deepwise Healthcare, Beijing, China
- Xiuli Li: AI Lab, Deepwise Healthcare, Beijing, China
- Mengsu Zeng: Department of Radiology, Zhongshan Hospital, Fudan University, Shanghai, China
- Mingliang Wang: Department of Radiology, Zhongshan Hospital, Fudan University, Shanghai, China
- Zaiyi Liu: Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
|
32
|
Cui H, Wei D, Ma K, Gu S, Zheng Y. A Unified Framework for Generalized Low-Shot Medical Image Segmentation With Scarce Data. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2656-2671. [PMID: 33338014 DOI: 10.1109/tmi.2020.3045775] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Medical image segmentation has achieved remarkable advances using deep neural networks (DNNs). However, DNNs often need large amounts of data and annotations for training, both of which can be difficult and costly to obtain. In this work, we propose a unified framework for generalized low-shot (one- and few-shot) medical image segmentation based on distance metric learning (DML). Unlike most existing methods, which only deal with the lack of annotations while assuming abundant data, our framework works with extreme scarcity of both, which is ideal for rare diseases. Via DML, the framework learns a multimodal mixture representation for each category and performs dense predictions based on cosine distances between the pixels' deep embeddings and the category representations. The multimodal representations effectively utilize inter-subject similarities and intraclass variations to overcome overfitting due to extremely limited data. In addition, we propose adaptive mixing coefficients for the multimodal mixture distributions to adaptively emphasize the modes better suited to the current input. The representations are implicitly embedded as weights of the fc layer, such that the cosine distances can be computed efficiently via forward propagation. In our experiments on brain MRI and abdominal CT datasets, the proposed framework achieves superior low-shot segmentation performance compared with standard DNN-based (3D U-Net) and classical registration-based (ANTs) methods, e.g., achieving mean Dice coefficients of 81%/69% for brain tissue/abdominal multi-organ segmentation using a single training sample, as compared to 52%/31% and 72%/35% by the U-Net and ANTs, respectively.
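Dense prediction by cosine distance to category representations reduces to a matrix product when embeddings and prototypes are unit-normalized; a sketch with assumed shapes (single-mode prototypes, unlike the paper's multimodal mixtures):

```python
import numpy as np

def predict(embeddings, prototypes):
    """embeddings: (H, W, C) unit vectors; prototypes: (K, C) unit vectors.
    Returns an (H, W) label map of the nearest category by cosine similarity."""
    sim = embeddings @ prototypes.T  # (H, W, K) cosine similarities
    return np.argmax(sim, axis=-1)
```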
|
33
|
Yang S, Zhao Y, Liao M, Zhang F. An Unsupervised Learning-Based Multi-Organ Registration Method for 3D Abdominal CT Images. SENSORS (BASEL, SWITZERLAND) 2021; 21:6254. [PMID: 34577461 PMCID: PMC8472627 DOI: 10.3390/s21186254] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/24/2021] [Revised: 08/22/2021] [Accepted: 08/26/2021] [Indexed: 12/28/2022]
Abstract
Medical image registration is an essential technique to achieve spatial consistency of the geometric positions of different medical images obtained from single or multiple sensors, such as computed tomography (CT), magnetic resonance (MR), and ultrasound (US) images. In this paper, an improved unsupervised learning-based framework is proposed for multi-organ registration of 3D abdominal CT images. First, coarse-to-fine recursive cascaded network (RCN) modules are embedded into a basic U-Net framework to achieve more accurate multi-organ registration of 3D abdominal CT images. Then, a topology-preserving loss is added to the total loss function to avoid distortion of the predicted transformation field. Four public databases are selected to validate the registration performance of the proposed method. The experimental results show that the proposed method is superior to some existing traditional and deep learning-based methods and is promising for meeting the real-time and high-precision clinical registration requirements of 3D abdominal CT images.
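A topology-preserving term is commonly implemented as a penalty on non-positive Jacobian determinants of the predicted transform (folding); the sketch below uses finite differences and may differ from the paper's exact formulation.

```python
import numpy as np

def folding_penalty(disp):
    """disp: (3, D, H, W) displacement field in voxel units.
    Returns the mean magnitude of negative Jacobian determinants."""
    # grads[i, j] = d u_i / d x_j via central finite differences.
    grads = np.stack([np.stack(np.gradient(disp[i]), axis=0) for i in range(3)])
    # Jacobian of the mapping x -> x + u(x): identity plus displacement gradients.
    jac = grads + np.eye(3).reshape(3, 3, 1, 1, 1)
    det = np.linalg.det(np.moveaxis(jac, (0, 1), (-2, -1)))  # (D, H, W)
    return np.mean(np.maximum(0.0, -det))  # penalize folded (det <= 0) voxels
```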
Affiliation(s)
- Shaodi Yang: School of Automation, Central South University, Changsha 410083, China
- Yuqian Zhao: School of Automation, Central South University, Changsha 410083, China; Hunan Xiangjiang Artificial Intelligence Academy, Changsha 410083, China; Hunan Engineering Research Center of High Strength Fastener Intelligent Manufacturing, Changde 415701, China
- Miao Liao: School of Computer Science and Engineering, Hunan University of Science and Technology, Xiangtan 411201, China
- Fan Zhang: School of Automation, Central South University, Changsha 410083, China; Hunan Xiangjiang Artificial Intelligence Academy, Changsha 410083, China
|
34
|
Yang SD, Zhao YQ, Zhang F, Liao M, Yang Z, Wang YJ, Yu LL. An efficient two-step multi-organ registration on abdominal CT via deep-learning based segmentation. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.103027] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
35
|
van Eijnatten M, Rundo L, Batenburg KJ, Lucka F, Beddowes E, Caldas C, Gallagher FA, Sala E, Schönlieb CB, Woitek R. 3D deformable registration of longitudinal abdominopelvic CT images using unsupervised deep learning. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 208:106261. [PMID: 34289437 DOI: 10.1016/j.cmpb.2021.106261] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/01/2020] [Accepted: 06/24/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND AND OBJECTIVES Deep learning is being increasingly used for deformable image registration, and unsupervised approaches, in particular, have shown great potential. However, the registration of abdominopelvic Computed Tomography (CT) images remains challenging due to the larger displacements compared to those in brain or prostate Magnetic Resonance Imaging datasets that are typically considered as benchmarks. In this study, we investigate the widely used unsupervised deep learning framework VoxelMorph for the registration of a longitudinal abdominopelvic CT dataset acquired in patients with bone metastases from breast cancer. METHODS As a pre-processing step, the abdominopelvic CT images were refined by automatically removing the CT table and all other extra-corporeal components. To improve the learning capabilities of the VoxelMorph framework when only a limited amount of training data is available, a novel incremental training strategy is proposed based on simulated deformations of consecutive CT images in the longitudinal dataset. This training strategy was compared against training on simulated deformations of a single CT volume. A widely used software toolbox for deformable image registration called NiftyReg was used as a benchmark. The evaluations were performed by calculating the Dice Similarity Coefficient (DSC) between manual vertebrae segmentations and the Structural Similarity Index (SSIM). RESULTS The CT table removal procedure allowed both VoxelMorph and NiftyReg to achieve significantly better registration performance. In a 4-fold cross-validation scheme, the incremental training strategy resulted in better registration performance compared to training on a single volume, with a mean DSC of 0.929±0.037 and 0.883±0.033, and a mean SSIM of 0.984±0.009 and 0.969±0.007, respectively. Although our deformable image registration method did not outperform NiftyReg in terms of DSC (0.988±0.003) or SSIM (0.995±0.002), the registrations were approximately 300 times faster. CONCLUSIONS This study showed the feasibility of deep learning based deformable registration of longitudinal abdominopelvic CT images via a novel incremental training strategy based on simulated deformations.
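The table-removal pre-processing can be approximated with simple morphology: threshold out air, keep the largest connected component (the patient), and reset everything else to air. The threshold and fill values below are illustrative, not the paper's.

```python
import numpy as np
from scipy.ndimage import label, binary_fill_holes

def strip_table(ct_hu):
    """Zero out the CT table and other extra-corporeal voxels.
    ct_hu: 3D array of Hounsfield units. Returns a cleaned copy."""
    body = ct_hu > -300                    # rough soft-tissue/air threshold in HU
    lab, _ = label(body)                   # connected components
    sizes = np.bincount(lab.ravel())
    sizes[0] = 0                           # ignore background
    mask = binary_fill_holes(lab == sizes.argmax())
    return np.where(mask, ct_hu, -1000)    # set everything outside the body to air
```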
Affiliation(s)
- Maureen van Eijnatten: Centrum Wiskunde & Informatica, 1098 XG Amsterdam, the Netherlands; Medical Image Analysis Group, Department of Biomedical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, the Netherlands
- Leonardo Rundo: Department of Radiology, University of Cambridge, CB2 0QQ Cambridge, United Kingdom; Cancer Research UK Cambridge Centre, University of Cambridge, CB2 0RE Cambridge, United Kingdom
- K Joost Batenburg: Centrum Wiskunde & Informatica, 1098 XG Amsterdam, the Netherlands; Mathematical Institute, Leiden University, 2300 RA Leiden, the Netherlands
- Felix Lucka: Centrum Wiskunde & Informatica, 1098 XG Amsterdam, the Netherlands; Centre for Medical Image Computing, University College London, WC1E 6BT London, United Kingdom
- Emma Beddowes: Cancer Research UK Cambridge Centre, University of Cambridge, CB2 0RE Cambridge, United Kingdom; Cancer Research UK Cambridge Institute, University of Cambridge, CB2 0RE Cambridge, United Kingdom; Department of Oncology, Addenbrooke's Hospital, Cambridge University Hospitals National Health Service (NHS) Foundation Trust, CB2 0QQ Cambridge, United Kingdom
- Carlos Caldas: Cancer Research UK Cambridge Centre, University of Cambridge, CB2 0RE Cambridge, United Kingdom; Cancer Research UK Cambridge Institute, University of Cambridge, CB2 0RE Cambridge, United Kingdom; Department of Oncology, Addenbrooke's Hospital, Cambridge University Hospitals National Health Service (NHS) Foundation Trust, CB2 0QQ Cambridge, United Kingdom
- Ferdia A Gallagher: Department of Radiology, University of Cambridge, CB2 0QQ Cambridge, United Kingdom; Cancer Research UK Cambridge Centre, University of Cambridge, CB2 0RE Cambridge, United Kingdom
- Evis Sala: Department of Radiology, University of Cambridge, CB2 0QQ Cambridge, United Kingdom; Cancer Research UK Cambridge Centre, University of Cambridge, CB2 0RE Cambridge, United Kingdom
- Carola-Bibiane Schönlieb: Department of Applied Mathematics and Theoretical Physics, University of Cambridge, CB3 0WA Cambridge, United Kingdom
- Ramona Woitek: Department of Radiology, University of Cambridge, CB2 0QQ Cambridge, United Kingdom; Cancer Research UK Cambridge Centre, University of Cambridge, CB2 0RE Cambridge, United Kingdom; Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, 1090 Vienna, Austria
|
36
|
A Medical Image Registration Method Based on Progressive Images. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2021; 2021:4504306. [PMID: 34367316 PMCID: PMC8337131 DOI: 10.1155/2021/4504306] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/16/2021] [Accepted: 07/03/2021] [Indexed: 01/26/2023]
Abstract
Background Medical image registration is an essential task for medical image analysis in various applications. In this work, we develop a coarse-to-fine medical image registration method based on progressive images and the SURF algorithm (PI-SURF) for higher registration accuracy. Methods As a first step, the reference image and the floating image are fused to generate multiple progressive images. Thereafter, the floating image and a progressive image are registered to obtain the coarse registration result based on the SURF algorithm. For further improvement, the coarse registration result and the reference image are registered to perform fine image registration. The choice of an appropriate progressive image was investigated experimentally. The mutual information (MI), normalized mutual information (NMI), normalized correlation coefficient (NCC), and mean square difference (MSD) similarity metrics are used to demonstrate the potential of the PI-SURF method. Results For both unimodal and multimodal registration, the PI-SURF method achieves the best results compared with the mutual information method, the Demons method, the Demons+B-spline method, and the SURF method. The MI, NMI, and NCC of PI-SURF improved by 15.5%, 1.31%, and 7.3%, respectively, while MSD decreased by 13.2% for multimodal registration compared with the best result of the state-of-the-art methods. Conclusions The extensive experiments show that the proposed PI-SURF method achieves higher quality of registration.
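For reference, the MI similarity used above can be computed from a joint histogram in a few lines; this is a hedged sketch, not the paper's implementation, and the bin count is illustrative.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in nats) between two images of equal shape."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()                       # joint probability
    px = p.sum(axis=1, keepdims=True)           # marginal of a
    py = p.sum(axis=0, keepdims=True)           # marginal of b
    nz = p > 0                                  # avoid log(0)
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))
```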
|
37
|
Liang X, Li N, Zhang Z, Xiong J, Zhou S, Xie Y. Incorporating the hybrid deformable model for improving the performance of abdominal CT segmentation via multi-scale feature fusion network. Med Image Anal 2021; 73:102156. [PMID: 34274689 DOI: 10.1016/j.media.2021.102156] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2020] [Revised: 06/22/2021] [Accepted: 06/28/2021] [Indexed: 01/17/2023]
Abstract
Automated multi-organ abdominal Computed Tomography (CT) image segmentation can assist treatment planning and diagnosis, and improve the efficiency of many clinical workflows. 3-D Convolutional Neural Networks (CNNs) have recently attained state-of-the-art accuracy, but typically rely on supervised training with large amounts of manually annotated data. Many methods use a data augmentation strategy with rigid or affine spatial transformations to alleviate the over-fitting problem and improve the network's robustness. However, rigid or affine spatial transformations fail to capture the complex voxel-based deformation in the abdomen, which is filled with many soft organs. We developed a novel Hybrid Deformable Model (HDM), which consists of inter- and intra-patient deformations, for more effective data augmentation to tackle this issue. The inter-patient deformations were extracted from learning-based deformable registration between different patients, while the intra-patient deformations were formed using random 3-D Thin-Plate-Spline (TPS) transformations. Incorporating the HDM enabled the network to capture many of the subtle deformations of abdominal organs. To find a better solution and achieve faster convergence for network training, we fused the pre-trained multi-scale features into a 3-D attention U-Net. We directly compared the segmentation accuracy of the proposed method to that of previous techniques on several centers' datasets via cross-validation. The proposed method achieves an average Dice Similarity Coefficient (DSC) of 0.852, outperforming other state-of-the-art multi-organ abdominal CT segmentation results.
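Intra-patient-style augmentation can be sketched with a random smooth displacement field; the paper uses random 3-D thin-plate splines, for which Gaussian-smoothed noise is a simpler stand-in shown here. Amplitude and smoothness values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def random_deform(vol, amplitude=4.0, sigma=12.0, seed=0):
    """Warp a 3D volume by a random smooth displacement field (voxel units)."""
    rng = np.random.default_rng(seed)
    coords = np.meshgrid(*map(np.arange, vol.shape), indexing="ij")
    warped_coords = [
        c + amplitude * gaussian_filter(rng.standard_normal(vol.shape), sigma)
        for c in coords
    ]
    return map_coordinates(vol, warped_coords, order=1, mode="nearest")
```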
Affiliation(s)
- Xiaokun Liang: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China; Shenzhen Colleges of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Na Li: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China; Shenzhen Colleges of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Zhicheng Zhang: Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Jing Xiong: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Shoujun Zhou: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Yaoqin Xie: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
|
38
|
DeepPrognosis: Preoperative prediction of pancreatic cancer survival and surgical margin via comprehensive understanding of dynamic contrast-enhanced CT imaging and tumor-vascular contact parsing. Med Image Anal 2021; 73:102150. [PMID: 34303891 DOI: 10.1016/j.media.2021.102150] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2021] [Revised: 05/08/2021] [Accepted: 06/24/2021] [Indexed: 12/15/2022]
Abstract
Pancreatic ductal adenocarcinoma (PDAC) is one of the most lethal cancers and carries a dismal prognosis, with a five-year survival rate of ∼10%. Surgery remains the best option for a potential cure for patients who are evaluated to be eligible for initial resection of PDAC. However, outcomes vary significantly even among resected patients of the same cancer stage who received similar treatments. Accurate quantitative preoperative prediction of primary resectable PDACs for personalized cancer treatment is thus highly desired. Nevertheless, very few automated methods yet fully exploit contrast-enhanced computed tomography (CE-CT) imaging for PDAC prognosis assessment, even though CE-CT plays a critical role in PDAC staging and resectability evaluation. In this work, we propose a novel deep neural network model for the survival prediction of primary resectable PDAC patients, named 3D Contrast-Enhanced Convolutional Long Short-Term Memory network (CE-ConvLSTM), which can derive tumor attenuation signatures or patterns from patient CE-CT imaging studies. Tumor-vascular relationships, which might indicate the resection margin status, have also been proven to be strongly associated with the overall survival of PDAC patients. To capture such relationships, we propose a self-learning approach for automated pancreas and peripancreatic anatomy segmentation without requiring any annotations on our PDAC datasets. We then employ a multi-task convolutional neural network (CNN) to accomplish both tasks of survival outcome and margin prediction, where the network benefits from learning the resection-margin-related image features to improve the survival prediction. Our framework improves overall survival prediction performance compared with existing state-of-the-art survival analysis approaches. The new staging biomarker, integrating the proposed risk signature and margin prediction, adds clear value when combined with the current clinical staging system.
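The multi-task idea, one shared trunk with a survival head and a margin head trained with a weighted loss sum, can be sketched as below; the module sizes, the flat feature input, and the weighting are assumptions, and the actual CE-ConvLSTM trunk is not reproduced here.

```python
import torch
import torch.nn as nn

class TwoHeadNet(nn.Module):
    """Shared trunk with a survival-risk head and a margin-status head."""
    def __init__(self, in_feat=128, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_feat, hidden), nn.ReLU())
        self.risk = nn.Linear(hidden, 1)    # scalar survival risk score
        self.margin = nn.Linear(hidden, 2)  # margin status logits (R0 vs R1)

    def forward(self, x):
        h = self.trunk(x)
        return self.risk(h), self.margin(h)

# Training would combine the two objectives, e.g.:
#   loss = loss_survival(risk, ...) + lam * cross_entropy(margin_logits, labels)
# where lam is an illustrative task-weighting hyperparameter.
```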
|
39
|
Yang SD, Zhao YQ, Zhang F, Liao M, Yang Z, Wang YJ, Yu LL. An Abdominal Registration Technology for Integration of Nanomaterial Imaging-Aided Diagnosis and Treatment. J Biomed Nanotechnol 2021; 17:952-959. [PMID: 34082880 DOI: 10.1166/jbn.2021.3076] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
Image registration is a key technology in nanomaterial imaging-aided diagnosis and in monitoring the effect of targeted therapy for abdominal diseases. Recently, deep learning-based methods have been increasingly used for large-scale medical image registration, because they require far fewer iterations than traditional methods. In this paper, a coarse-to-fine unsupervised learning-based three-dimensional (3D) abdominal CT image registration method is presented. First, an affine transformation was used as an initial step to deal with large deformation between two images. Second, an unsupervised total loss function containing similarity, smoothness, and topology-preservation measures was proposed to achieve better registration performance during convolutional neural network (CNN) training and testing. The experimental results demonstrate that the proposed method obtains average MSE, PSNR, and SSIM values of 0.0055, 22.7950, and 0.8241, respectively, outperforming some existing traditional and unsupervised learning-based methods. Moreover, our method registers 3D abdominal CT images in the shortest time and is expected to become a real-time method for clinical application.
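The three reported figures of merit can be computed as below, assuming intensities are already normalized to [0, 1]; a sketch using scikit-image for SSIM.

```python
import numpy as np
from skimage.metrics import structural_similarity

def registration_metrics(fixed, warped):
    """MSE, PSNR (dB), and SSIM between two images normalized to [0, 1]."""
    mse = float(np.mean((fixed - warped) ** 2))
    psnr = float(10.0 * np.log10(1.0 / mse)) if mse > 0 else float("inf")
    ssim = structural_similarity(fixed, warped, data_range=1.0)
    return {"MSE": mse, "PSNR": psnr, "SSIM": ssim}
```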
Affiliation(s)
- Shao-Di Yang: School of Automation, Central South University, Changsha 410083, China
- Yu-Qian Zhao: School of Automation, Central South University, Changsha 410083, China
- Fan Zhang: School of Automation, Central South University, Changsha 410083, China
- Miao Liao: School of Automation, Central South University, Changsha 410083, China
- Zhen Yang: School of Xiangya Hospital, Central South University, Changsha 410075, China
- Yan-Jin Wang: School of Xiangya Hospital, Central South University, Changsha 410075, China
- Ling-Li Yu: School of Automation, Central South University, Changsha 410083, China
|
40
|
Meyer A, Mehrtash A, Rak M, Bashkanov O, Langbein B, Ziaei A, Kibel AS, Tempany CM, Hansen C, Tokuda J. Domain adaptation for segmentation of critical structures for prostate cancer therapy. Sci Rep 2021; 11:11480. [PMID: 34075061 PMCID: PMC8169882 DOI: 10.1038/s41598-021-90294-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2020] [Accepted: 05/04/2021] [Indexed: 11/23/2022] Open
Abstract
Preoperative assessment of the proximity of critical structures to the tumors is crucial for avoiding unnecessary damage during prostate cancer treatment. A patient-specific 3D anatomical model of those structures, namely the neurovascular bundles (NVB) and the external urethral sphincters (EUS), can enable physicians to perform such assessments intuitively. As a crucial step toward generating a patient-specific anatomical model from preoperative MRI in clinical routine, we propose a multi-class automatic segmentation based on an anisotropic convolutional network. Our specific challenge is to train the network model on a unique source dataset only available at a single clinical site and deploy it to another target site without sharing the original images or labels. As network models trained on data from a single source suffer from quality loss due to domain shift, we propose a semi-supervised domain adaptation (DA) method to refine the model's performance in the target domain. Our DA method combines transfer learning (TL) and uncertainty-guided self-learning based on deep ensembles. Experiments on the segmentation of the prostate, NVB, and EUS show significant performance gains with the combination of those techniques compared to pure TL and the combination of TL with simple self-learning ([Formula: see text] for all structures using a Wilcoxon signed-rank test). Results on a different task and data (pancreas CT segmentation) demonstrate our method's generic applicability. Our method has the advantage that it does not require any further data from the source domain, unlike the majority of recent domain adaptation strategies. This makes our method suitable for clinical applications, where the sharing of patient data is restricted.
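One plausible reading of "uncertainty guided self-learning based on deep ensembles" is to average ensemble member predictions and keep pseudo-labels only where predictive variance is low. The NumPy sketch below illustrates that filtering step; the variance threshold, array layout, and function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def uncertainty_filtered_pseudolabels(prob_maps, var_thresh=0.05):
    """prob_maps: list of (C, H, W) softmax outputs from M ensemble members.
    Returns a pseudo-label map plus a mask of voxels confident enough to train on."""
    stack = np.stack(prob_maps)                   # (M, C, H, W)
    mean_prob = stack.mean(axis=0)                # ensemble average
    variance = stack.var(axis=0).mean(axis=0)     # per-voxel predictive variance
    pseudo = mean_prob.argmax(axis=0)             # hard pseudo-labels
    keep = variance < var_thresh                  # train only where members agree
    return pseudo, keep

# toy usage: a 3-member ensemble on a 2-class problem over a 4x4 image
members = [np.random.dirichlet([5, 1], size=(4, 4)).transpose(2, 0, 1) for _ in range(3)]
labels, mask = uncertainty_filtered_pseudolabels(members)
```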
Affiliation(s)
- Anneke Meyer: Department of Simulation and Graphics and Research Campus STIMULATE, University of Magdeburg, Magdeburg, Germany
- Alireza Mehrtash: Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Marko Rak: Department of Simulation and Graphics and Research Campus STIMULATE, University of Magdeburg, Magdeburg, Germany
- Oleksii Bashkanov: Department of Simulation and Graphics and Research Campus STIMULATE, University of Magdeburg, Magdeburg, Germany
- Bjoern Langbein: Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Alireza Ziaei: Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Adam S Kibel: Division of Urology, Department of Surgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Clare M Tempany: Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Christian Hansen: Department of Simulation and Graphics and Research Campus STIMULATE, University of Magdeburg, Magdeburg, Germany
- Junichi Tokuda: Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
|
41
|
Conze PH, Kavur AE, Cornec-Le Gall E, Gezer NS, Le Meur Y, Selver MA, Rousseau F. Abdominal multi-organ segmentation with cascaded convolutional and adversarial deep networks. Artif Intell Med 2021; 117:102109. [PMID: 34127239 DOI: 10.1016/j.artmed.2021.102109] [Citation(s) in RCA: 40] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2020] [Revised: 01/24/2021] [Accepted: 05/06/2021] [Indexed: 02/05/2023]
Abstract
Abdominal anatomy segmentation is crucial for numerous applications, from computer-assisted diagnosis to image-guided surgery. In this context, we address fully automated multi-organ segmentation of abdominal CT and MR images using deep learning. The proposed model extends standard conditional generative adversarial networks: in addition to the discriminator, which enforces realistic organ delineations, it embeds cascaded, partially pre-trained convolutional encoder-decoders as the generator. Encoder fine-tuning from a large amount of non-medical images alleviates data-scarcity limitations. The network is trained end-to-end to benefit from simultaneous multi-level segmentation refinements using auto-context. Employed for healthy liver, kidney, and spleen segmentation, our pipeline provides promising results, outperforming state-of-the-art encoder-decoder schemes. Entered in the Combined Healthy Abdominal Organ Segmentation (CHAOS) challenge organized in conjunction with the IEEE International Symposium on Biomedical Imaging 2019, it earned first rank in three competition categories: liver CT, liver MR, and multi-organ MR segmentation. Combining cascaded convolutional and adversarial networks strengthens the ability of deep learning pipelines to automatically delineate multiple abdominal organs with good generalization capability. The comprehensive evaluation provided suggests that better guidance could thus be offered to help clinicians in abdominal image interpretation and clinical decision making.
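A minimal sketch of the cascaded adversarial training signal is given below: a second generator refines the first via auto-context, and a conditional discriminator adds an adversarial term to the segmentation loss. The tiny single-convolution "networks" and the weight lam_adv are placeholders for illustration only, not the authors' architecture.

```python
import torch
import torch.nn as nn

def cascade_gan_losses(g1, g2, disc, image, gt_mask, lam_adv=0.1):
    """Generator-side losses for a two-stage cascade with a conditional discriminator."""
    coarse = torch.sigmoid(g1(image))
    # auto-context: the refiner sees the image together with the coarse mask
    refined = torch.sigmoid(g2(torch.cat([image, coarse], dim=1)))
    seg_loss = (nn.functional.binary_cross_entropy(coarse, gt_mask)
                + nn.functional.binary_cross_entropy(refined, gt_mask))
    # discriminator judges (image, mask) pairs; the generator tries to fool it
    adv_logits = disc(torch.cat([image, refined], dim=1))
    adv_loss = nn.functional.binary_cross_entropy_with_logits(
        adv_logits, torch.ones_like(adv_logits))
    return seg_loss + lam_adv * adv_loss

# toy usage with placeholder single-conv "networks"
g1 = nn.Conv2d(1, 1, 3, padding=1)
g2 = nn.Conv2d(2, 1, 3, padding=1)
disc = nn.Conv2d(2, 1, 3, padding=1)
img = torch.rand(2, 1, 64, 64)
gt = torch.randint(0, 2, (2, 1, 64, 64)).float()
loss = cascade_gan_losses(g1, g2, disc, img, gt)
```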
Affiliation(s)
- Pierre-Henri Conze: IMT Atlantique, Technopôle Brest-Iroise, 29238 Brest, France; LaTIM UMR 1101, Inserm, 22 avenue Camille Desmoulins, 29238 Brest, France
- Ali Emre Kavur: Dokuz Eylul University, Cumhuriyet Bulvarı, 35210 Izmir, Turkey
- Emilie Cornec-Le Gall: Department of Nephrology, University Hospital, 2 avenue Foch, 29609 Brest, France; UMR 1078, Inserm, 22 avenue Camille Desmoulins, 29238 Brest, France
- Naciye Sinem Gezer: Dokuz Eylul University, Cumhuriyet Bulvarı, 35210 Izmir, Turkey; Department of Radiology, Faculty of Medicine, Cumhuriyet Bulvarı, 35210 Izmir, Turkey
- Yannick Le Meur: Department of Nephrology, University Hospital, 2 avenue Foch, 29609 Brest, France; LBAI UMR 1227, Inserm, 5 avenue Foch, 29609 Brest, France
- M Alper Selver: Dokuz Eylul University, Cumhuriyet Bulvarı, 35210 Izmir, Turkey
- François Rousseau: IMT Atlantique, Technopôle Brest-Iroise, 29238 Brest, France; LaTIM UMR 1101, Inserm, 22 avenue Camille Desmoulins, 29238 Brest, France
|
42
|
Xu Z, Luo J, Yan J, Li X, Jayender J. F3RNet: full-resolution residual registration network for deformable image registration. Int J Comput Assist Radiol Surg 2021; 16:923-932. [PMID: 33939077 DOI: 10.1007/s11548-021-02359-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2020] [Accepted: 03/24/2021] [Indexed: 11/28/2022]
Abstract
PURPOSE Deformable image registration (DIR) is essential for many image-guided therapies. Recently, deep learning approaches have gained substantial popularity and success in DIR. Most deep learning approaches use the so-called mono-stream high-to-low, low-to-high network structure and can achieve satisfactory overall registration results. However, accurate alignment of some severely deformed local regions, which is crucial for pinpointing surgical targets, is often overlooked. Consequently, these approaches are not sensitive to some hard-to-align regions, e.g., intra-patient registration of deformed liver lobes. METHODS We propose a novel unsupervised registration network, namely the full-resolution residual registration network (F3RNet), for deformable registration of severely deformed organs. The proposed method combines two parallel processing streams in a residual learning fashion. One stream takes advantage of the full-resolution information that facilitates accurate voxel-level registration. The other stream learns deep multi-scale residual representations to obtain robust recognition. We also factorize the 3D convolution to reduce the number of training parameters and enhance network efficiency. RESULTS We validate the proposed method on a clinically acquired intra-patient abdominal CT-MRI dataset and a public inspiratory and expiratory thorax CT dataset. Experiments on both multimodal and unimodal registration demonstrate promising results compared to state-of-the-art approaches. CONCLUSION By combining high-resolution information and multi-scale representations in a highly interactive residual learning fashion, the proposed F3RNet achieves accurate overall and local registration. The run time for registering a pair of images is less than 3 s using a GPU. In future work, we will investigate how to cost-effectively process high-resolution information and fuse multi-scale representations.
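Factorizing a 3D convolution, as mentioned in the Methods, is commonly done by splitting a k×k×k kernel into a 1×k×k in-plane convolution followed by a k×1×1 through-plane convolution, cutting parameters roughly from k³ to k² + k per channel pair. The PyTorch sketch below shows one such decomposition; whether F3RNet uses exactly this split is our assumption.

```python
import torch
import torch.nn as nn

class Factorized3DConv(nn.Module):
    """A k*k*k conv approximated by a 1*k*k spatial conv then a k*1*1 depth conv."""
    def __init__(self, cin, cout, k=3):
        super().__init__()
        self.spatial = nn.Conv3d(cin, cout, (1, k, k), padding=(0, k // 2, k // 2))
        self.depth = nn.Conv3d(cout, cout, (k, 1, 1), padding=(k // 2, 0, 0))

    def forward(self, x):
        return self.depth(torch.relu(self.spatial(x)))

x = torch.randn(1, 8, 16, 32, 32)
y = Factorized3DConv(8, 16)(x)   # spatial size preserved: (1, 16, 16, 32, 32)
```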
Affiliation(s)
- Zhe Xu: Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China; Brigham and Women's Hospital, Harvard Medical School, Boston, 02115, USA
- Jie Luo: Brigham and Women's Hospital, Harvard Medical School, Boston, 02115, USA
- Jiangpeng Yan: Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
- Xiu Li: Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
|
43
|
Chang KP, Lin SH, Chu YW. Artificial intelligence in gastrointestinal radiology: A review with special focus on recent development of magnetic resonance and computed tomography. Artif Intell Gastroenterol 2021; 2:27-41. [DOI: 10.35712/aig.v2.i2.27] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Revised: 03/21/2021] [Accepted: 04/20/2021] [Indexed: 02/06/2023] Open
|
44
|
Park T, Lee J, Shin J, Won Kim K, Chul Kang H. Non-Rigid Liver Registration in Liver Computed Tomography Images Using Elastic Method with Global and Local Deformations. JOURNAL OF MEDICAL IMAGING AND HEALTH INFORMATICS 2021. [DOI: 10.1166/jmihi.2021.3355] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
Abstract
The study of follow-up liver computed tomography (CT) images is required for the early diagnosis and treatment evaluation of liver cancer. Although this task has been performed manually by doctors, the demand for computer-aided diagnosis is growing dramatically with the increasing amount of medical image data produced by recent developments in CT. However, conventional image segmentation, registration, and skeletonization methods cannot be directly applied to clinical data because the characteristics of liver CT images vary largely with patients and contrast agents. In this paper, we propose non-rigid liver registration using an elastic method with global and local deformations for follow-up liver CT images. To manage intensity differences between two scans, we extract the liver vessels and parenchyma in each scan. Our method then binarizes the segmented liver parenchyma and vessels and performs registration to minimize the intensity difference between these binarized images of the follow-up CT scans. Global movements between follow-up CT images are corrected by rigid registration based on the liver surface. Local deformations between follow-up CT images are modeled by non-rigid registration, which aligns the images using a non-rigid transformation based on a locally deformable model. Our method can thus model the global and local deformation between follow-up liver CT scans by considering the deformation of both the liver surface and vessels. In experiments using twenty clinical datasets, our method effectively matched the liver between follow-up portal-phase CT images, enabling accurate assessment of the volume change of liver cancer. The proposed registration method can be applied to follow-up studies of various organ diseases, including cardiovascular diseases and lung cancer.
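The global correction step, rigid registration based on the liver surface, can be illustrated with the standard Kabsch least-squares alignment of matched surface points, sketched below in NumPy. The assumed point correspondence and the omission of the local deformable stage are simplifications; this is a stand-in for the paper's surface-based rigid step, not its implementation.

```python
import numpy as np

def kabsch_rigid(moving_pts, fixed_pts):
    """Least-squares rigid (rotation + translation) alignment of two matched
    (N, 3) surface point sets, so that R @ p + t maps moving onto fixed."""
    mc, fc = moving_pts.mean(0), fixed_pts.mean(0)
    H = (moving_pts - mc).T @ (fixed_pts - fc)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))              # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = fc - R @ mc
    return R, t

# toy check: recover a known rotation about z plus a shift
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
pts = np.random.rand(100, 3)
R, t = kabsch_rigid(pts, pts @ R_true.T + np.array([1.0, 2.0, 0.5]))
assert np.allclose(R, R_true, atol=1e-6)
```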
Affiliation(s)
- Taeyong Park: University of Ulsan College of Medicine, 388-1, Pungnap 2-dong, Songpa-ku, Seoul, 138-736, Korea
- Jeongjin Lee: School of Computer Science and Engineering, Soongsil University, 369 Sangdo-Ro, Dongjak-Gu, Seoul 156-743, Korea
- Juneseuk Shin: Department of Systems Management Engineering, Sungkyunkwan University, 2066, Seobu-ro, Jangan-gu, Suwon-si, Gyeonggi-do, 440-746, Korea
- Kyoung Won Kim: Department of Radiology, Asan Medical Center, University of Ulsan College of Medicine, 388-1, Pungnap 2-dong, Songpa-ku, Seoul, 138-736, Korea
- Ho Chul Kang: Department of Media Technology & Media Contents, The Catholic University of Korea, Gyeonggi-do, 14662, Korea
|
45
|
Yamaguchi S, Watanabe M, Hattori Y. Statistical parametric mapping of three-dimensional local activity distribution of skeletal muscle using magnetic resonance imaging (MRI). Sci Rep 2021; 11:4808. [PMID: 33637801 PMCID: PMC7910551 DOI: 10.1038/s41598-021-84247-0] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2020] [Accepted: 02/15/2021] [Indexed: 11/23/2022] Open
Abstract
Analysis of the internal local activity distribution in human skeletal muscles is important for managing muscle fatigue, pain, and dysfunction. However, no method has been established for three-dimensional (3D) statistical analysis of the activity regions common to multiple subjects during voluntary motor tasks. We investigated the characteristics of muscle activity distribution in ten healthy subjects (29 ± 1 years old, 2 women) during voluntary teeth clenching under two different occlusal conditions by applying spatial normalization and statistical parametric mapping (SPM) to muscle functional magnetic resonance imaging (mfMRI), using the exercise-induced increase in the transverse relaxation time (T2) of skeletal muscle. Expansion of areas with significant T2 increase was observed in the masticatory muscles after clenching with molar loss compared with intact dentition. The muscle activity distribution characteristics common to the group of subjects, i.e., the active regions in the temporal muscle ipsilateral to the molar loss and in the medial pterygoid muscle contralateral to the molar loss, were clarified in 3D by applying spatial normalization and SPM to mfMRI analysis. This method might elucidate the functional distribution within muscles and the localized muscular activity related to skeletal muscle disorders.
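At its core, SPM-style group analysis on spatially normalized T2 maps reduces to a voxelwise paired test between conditions. The SciPy sketch below shows that step under our own assumptions about array layout; real SPM additionally applies multiple-comparison correction (e.g., random field theory or FDR), which is omitted here for brevity.

```python
import numpy as np
from scipy import stats

def voxelwise_paired_t(t2_cond_a, t2_cond_b, alpha=0.05):
    """t2_cond_*: (N_subjects, X, Y, Z) spatially normalized T2 maps.
    Returns a boolean map of voxels with a significant T2 increase in condition B.
    Note: no multiple-comparison correction is applied in this sketch."""
    res = stats.ttest_rel(t2_cond_b, t2_cond_a, axis=0)
    return (res.pvalue < alpha) & (res.statistic > 0)   # increases only

a = np.random.normal(35, 2, size=(10, 8, 8, 8))   # e.g., intact dentition
b = a + np.random.normal(1, 2, size=a.shape)      # e.g., after molar loss
sig = voxelwise_paired_t(a, b)
```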
Affiliation(s)
- Satoshi Yamaguchi: Division of Aging and Geriatric Dentistry, Tohoku University Graduate School of Dentistry, 4-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi, 980-8575, Japan
- Makoto Watanabe: Division of Aging and Geriatric Dentistry, Tohoku University Graduate School of Dentistry, 4-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi, 980-8575, Japan; Institute of Living and Environmental Sciences, Miyagi Gakuin Women's University, 9-1-1 Sakura-ga-oka, Aoba-ku, Sendai, Miyagi, 981-8557, Japan
- Yoshinori Hattori: Division of Aging and Geriatric Dentistry, Tohoku University Graduate School of Dentistry, 4-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi, 980-8575, Japan
|
46
|
Luo C, Terry JG, Tang Y, Xu K, Massion PP, Landman BA, Carr JJ, Huo Y. Measure Partial Liver Volumetric Variations from Paired Inspiratory-expiratory Chest CT Scans. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2021; 11596. [PMID: 34354325 DOI: 10.1117/12.2581077] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
Liver stiffness is an essential clinical biomarker for diagnosing liver fibrosis and cirrhosis. In current clinical practice, elastography techniques, using either ultrasound (US) or magnetic resonance imaging (MRI), are the standard non-invasive tools for assessing liver stiffness. However, US elastography has an approximately 10% failure rate and degraded performance in obese patients, while MR elastography is costlier and less available. Compared with US and MRI, computed tomography (CT) imaging has not been widely used for measuring liver stiffness. In this paper, we performed a pilot study to assess whether volumetric variations of the liver can be captured from paired inspiratory-expiratory chest (PIEC) CT scans. To enable the assessment, we propose a Hierarchical Intra-Patient Organ-specific (HIPO) registration pipeline to quantify partial liver volumetric variations under the lung pressure changes of a respiratory cycle. The PIEC protocol is employed since it naturally provides two paired CT scans with liver deformation from regulated respiratory motion. For the subjects whose registration results passed both an automatic quantitative quality assurance (QA) and a visual qualitative QA, an average liver volumetric variation of 6.0% was measured from the inspiratory to the expiratory phase. Future clinical studies will be required to validate the findings of this pilot study.
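Once registered liver masks are available, the reported volumetric variation is a straightforward computation, as in this hedged NumPy sketch; the mask and spacing interfaces are illustrative rather than the pipeline's actual API.

```python
import numpy as np

def liver_volume_ml(mask, voxel_spacing_mm):
    """Volume of a binary liver mask, given per-axis voxel spacing in millimetres."""
    voxel_ml = np.prod(voxel_spacing_mm) / 1000.0       # mm^3 -> mL
    return mask.sum() * voxel_ml

def volumetric_variation_pct(insp_mask, exp_mask, spacing):
    v_in = liver_volume_ml(insp_mask, spacing)
    v_ex = liver_volume_ml(exp_mask, spacing)
    return (v_in - v_ex) / v_in * 100.0                 # % change, insp -> exp

# toy usage with synthetic masks
insp = np.zeros((32, 32, 32), bool); insp[4:28, 4:28, 4:28] = True
exp = np.zeros_like(insp); exp[5:28, 4:28, 4:28] = True
print(volumetric_variation_pct(insp, exp, (1.0, 0.8, 0.8)))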
Affiliation(s)
- Can Luo: Data Science Institute, Vanderbilt University, Nashville, TN, 37235 USA
- James G Terry: Department of Radiology and Radiological Sciences, Vanderbilt University Medical Center, Nashville, TN, 37235 USA
- Yucheng Tang: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, 37235 USA
- Kaiwen Xu: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, 37235 USA
- Pierre P Massion: Division of Pulmonary and Critical Care Medicine, Department of Medicine, Vanderbilt University Medical Center, Vanderbilt Ingram Cancer Center, Nashville, TN, 37235 USA
- Bennett A Landman: Department of Radiology and Radiological Sciences, Vanderbilt University Medical Center, Nashville, TN, 37235 USA; Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN, 37235 USA; Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, 37235 USA; Institute of Imaging Science, Vanderbilt University, Nashville, TN, 37235 USA
- J Jeffery Carr: Department of Radiology and Radiological Sciences, Vanderbilt University Medical Center, Nashville, TN, 37235 USA; Department of Cardiovascular Medicine, Vanderbilt University Medical Center, Nashville, TN, 37235 USA; Institute of Imaging Science, Vanderbilt University, Nashville, TN, 37235 USA; Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, 37235 USA
- Yuankai Huo: Data Science Institute, Vanderbilt University, Nashville, TN, 37235 USA; Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, 37235 USA
|
47
|
Xu K, Gao R, Khan MS, Bao S, Tang Y, Deppen SA, Huo Y, Sandler KL, Massion PP, Heinrich MP, Landman BA. Development and Characterization of a Chest CT Atlas. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2021; 2021:15961G. [PMID: 34531633 PMCID: PMC8442827 DOI: 10.1117/12.2580800] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
A major goal of lung cancer screening is to identify individuals with particular phenotypes that are associated with a high risk of cancer. Identifying relevant phenotypes is complicated by variation in body position and body composition. In the brain, standardized coordinate systems (e.g., atlases) have enabled separate consideration of local features from gross/global structure. To date, no analogous standard atlas has been presented to enable spatial mapping and harmonization in chest computed tomography (CT). In this paper, we propose a thoracic atlas built upon a large low-dose CT (LDCT) database from a lung cancer screening program. The study cohort includes 466 male and 387 female subjects with no screening-detected malignancy (age 46-79 years, mean 64.9 years). To provide spatial mapping, we optimize a multi-stage inter-subject non-rigid registration pipeline for the entire thoracic space. Briefly, with 50 scans of 50 randomly selected female subjects as the tuning dataset, we search for the optimal configuration of the non-rigid registration module over a range of adjustable parameters, including registration search radius, degree of keypoint dispersion, regularization coefficient, and similarity patch size, to minimize the registration failure rate, approximated by the number of samples with a low Dice similarity coefficient (DSC) for lung and body segmentation. We evaluate the optimized pipeline on a separate cohort (100 scans of 50 female and 50 male subjects) relative to two baselines with alternative non-rigid registration modules: the same software with default parameters and an alternative software package. We achieve a significant improvement in registration success rate based on manual QA. For the entire study cohort, the optimized pipeline achieves a registration success rate of 91.7%. The applicability of the developed atlas is evaluated in terms of its discriminative capability for different anatomic phenotypes, including body mass index (BMI), chronic obstructive pulmonary disease (COPD), and coronary artery calcification (CAC).
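The parameter search described here can be sketched as a grid search minimizing the count of low-Dice registrations. In the sketch below, register_fn is a hypothetical interface standing in for the registration software, and the parameter names mirror, but do not reproduce, the paper's configuration space.

```python
import itertools
import numpy as np

def dice(a, b):
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-8)

def tune_registration(param_grid, register_fn, pairs, dsc_fail=0.8):
    """Pick the configuration with the fewest failed (low-Dice) registrations.
    register_fn(fixed_mask, moving_mask, **cfg) -> warped moving mask (assumed)."""
    best_cfg, best_failures = None, float("inf")
    for values in itertools.product(*param_grid.values()):
        cfg = dict(zip(param_grid.keys(), values))
        failures = sum(
            dice(register_fn(f, m, **cfg), f) < dsc_fail for f, m in pairs)
        if failures < best_failures:
            best_cfg, best_failures = cfg, failures
    return best_cfg

# toy usage with a placeholder registration (identity warp)
grid = {"search_radius": [3, 5], "regularization": [0.1, 0.5]}
register_fn = lambda fixed, moving, **cfg: moving
pairs = [(np.ones((4, 4), bool), np.ones((4, 4), bool))]
print(tune_registration(grid, register_fn, pairs))
```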
Affiliation(s)
- Kaiwen Xu: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA 37235
- Riqiang Gao: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA 37235
- Mirza S. Khan: Vanderbilt University Medical Center, Nashville, TN, USA 37235; Department of Biomedical Informatics, Vanderbilt University, Nashville, TN, 37212; U.S. Department of Veterans Affairs, Nashville, TN, 37212
- Shunxing Bao: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA 37235
- Yucheng Tang: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA 37235
- Steve A. Deppen: Vanderbilt University Medical Center, Nashville, TN, USA 37235
- Yuankai Huo: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA 37235
- Kim L. Sandler: Vanderbilt University Medical Center, Nashville, TN, USA 37235
- Mattias P. Heinrich: Institute of Medical Informatics, University of Lübeck, Lübeck, Germany 23562
- Bennett A. Landman: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA 37235; Vanderbilt University Medical Center, Nashville, TN, USA 37235
|
48
|
Lee HH, Tang Y, Xu K, Bao S, Fogo AB, Harris R, de Caestecker MP, Heinrich M, Spraggins JM, Huo Y, Landman BA. Construction of a Multi-Phase Contrast Computed Tomography Kidney Atlas. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2021; 11596. [PMID: 34354322 DOI: 10.1117/12.2580561] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
The Human BioMolecular Atlas Program (HuBMAP) seeks to create a molecular atlas of the human body at the cellular level to spur interdisciplinary innovations across spatial and temporal scales. While the preponderance of effort is allocated toward cellular- and molecular-scale mapping, differentiating and contextualizing findings within tissues, organs, and systems is essential for the HuBMAP effort. The kidney is an initial organ target of HuBMAP, and a framework (or atlas) for integrating information across scales is needed for visualization and integration. However, no abdominal atlas is currently available in the public domain, and substantial variation exists among healthy kidneys with sex, body size, and imaging protocol. By integrating clinical archives for secondary research use, we are able to build atlases based on a diverse population and clinically relevant protocols. In this study, we created a computed tomography (CT) phase-specific atlas for the abdomen, optimized for the kidney. A two-stage registration pipeline was used, registering the abdominal volume of interest extracted by body part regression to a high-resolution CT target; affine and non-rigid registration were performed on all scans hierarchically. To generate and evaluate the atlas, multiphase CT scans of 500 control subjects (age 15-50; 250 males, 250 females) were registered to the atlas target through the complete pipeline. The abdominal body and kidney registrations are shown to be stable based on the variance map computed from the resulting average template. Both left and right kidneys are well localized in the high-resolution target space, demonstrating sharp anatomical detail across each phase. We illustrate the applicability of the atlas template for integrating normal kidney variation ranging from 64 cm³ to 302 cm³.
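The stability check via a variance map over the resulting average template reduces to voxelwise statistics over the registered volumes, as in this short NumPy sketch; the array shapes and function name are illustrative assumptions.

```python
import numpy as np

def average_template_and_variance(registered_scans):
    """registered_scans: (N, X, Y, Z) intensity volumes already mapped to atlas space.
    Low variance in a region suggests stable registration there."""
    stack = np.asarray(registered_scans, dtype=np.float64)
    return stack.mean(axis=0), stack.var(axis=0)

scans = np.random.rand(20, 16, 16, 16)   # toy stand-in for registered CT volumes
template, var_map = average_template_and_variance(scans)
```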
Affiliation(s)
- Ho Hin Lee: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA 37212
- Yucheng Tang: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA 37212
- Kaiwen Xu: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA 37212
- Shunxing Bao: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA 37212
- Agnes B Fogo: Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, TN, USA 37232; Departments of Medicine and Pediatrics, Vanderbilt University Medical Center, Nashville, TN, USA 37232
- Raymond Harris: Division of Nephrology and Hypertension, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN, USA 37232
- Mark P de Caestecker: Division of Nephrology and Hypertension, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN, USA 37232
- Mattias Heinrich: Institute of Medical Informatics, University of Luebeck, Germany
- Yuankai Huo: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA 37212
- Bennett A Landman: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA 37212; Radiology, Vanderbilt University Medical Center, Nashville, TN, USA 37235
|
49
|
Ekström S, Pilia M, Kullberg J, Ahlström H, Strand R, Malmberg F. Faster dense deformable image registration by utilizing both CPU and GPU. J Med Imaging (Bellingham) 2021; 8:014002. [PMID: 33542943 PMCID: PMC7849043 DOI: 10.1117/1.jmi.8.1.014002] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2020] [Accepted: 12/31/2020] [Indexed: 11/14/2022] Open
Abstract
Purpose: Image registration is an important aspect of medical image analysis and a key component in many analysis concepts. Applications include fusion of multimodal images, multi-atlas segmentation, and whole-body analysis. Deformable image registration is often computationally expensive, and the need for efficient registration methods is highlighted by the emergence of large-scale image databases, e.g., the UK Biobank, which provides imaging from 100,000 participants. Approach: We present a heterogeneous computing approach, utilizing both the CPU and the graphics processing unit (GPU), to accelerate a previously proposed image registration method. The parallelizable task of computing the matching criterion is offloaded to the GPU, where it can be computed efficiently, while the more complex optimization task is performed on the CPU. To lessen the impact of data synchronization between the CPU and GPU, we propose a pipeline model that effectively overlaps computational tasks with data synchronization. The performance is evaluated on a brain labeling task and compared with a CPU implementation of the same method and with the popular Advanced Normalization Tools (ANTs) software. Results: The proposed method achieves speed-ups by factors of 4 and 8 over the CPU implementation and the ANTs software, respectively. A significant improvement in labeling quality was also observed, with measured mean Dice overlaps of 0.712 and 0.701 for our method and ANTs, respectively. Conclusions: We showed that the proposed method compares favorably to the ANTs software, yielding both a significant speed-up and an improvement in labeling quality. The registration method, together with the proposed parallelization strategy, is implemented as an open-source software package, deform.
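The pipeline model, overlapping cost evaluation with optimization so that data synchronization is hidden, can be mimicked with a bounded queue between two threads. The sketch below is a schematic pure-Python analogue (the actual implementation offloads the matching criterion to the GPU); all interfaces and names are assumptions.

```python
import queue
import threading

def pipelined_registration(candidates, evaluate_on_gpu, select_on_cpu):
    """Overlap cost evaluation (offloaded, e.g., to a GPU) with CPU-side
    optimization logic via a bounded queue; candidates yields proposals."""
    scored = queue.Queue(maxsize=2)            # small buffer = double buffering

    def worker():
        for cand in candidates:
            scored.put((cand, evaluate_on_gpu(cand)))   # runs while CPU selects
        scored.put(None)                       # sentinel: no more work

    threading.Thread(target=worker, daemon=True).start()
    best, best_cost = None, float("inf")
    while (item := scored.get()) is not None:
        cand, cost = item
        best, best_cost = select_on_cpu(best, best_cost, cand, cost)
    return best, best_cost

# toy usage: minimize a quadratic over scalar "candidates"
best, cost = pipelined_registration(
    iter(range(-5, 6)),
    evaluate_on_gpu=lambda c: (c - 2) ** 2,
    select_on_cpu=lambda b, bc, c, cc: (c, cc) if cc < bc else (b, bc))
print(best, cost)   # 2 0
```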
Affiliation(s)
- Simon Ekström: Uppsala University, Section of Radiology, Department of Surgical Sciences, Uppsala, Sweden; Antaros Medical, Mölndal, Sweden
- Martino Pilia: Uppsala University, Section of Radiology, Department of Surgical Sciences, Uppsala, Sweden
- Joel Kullberg: Uppsala University, Section of Radiology, Department of Surgical Sciences, Uppsala, Sweden; Antaros Medical, Mölndal, Sweden
- Håkan Ahlström: Uppsala University, Section of Radiology, Department of Surgical Sciences, Uppsala, Sweden; Antaros Medical, Mölndal, Sweden
- Robin Strand: Uppsala University, Section of Radiology, Department of Surgical Sciences, Uppsala, Sweden; Uppsala University, Centre for Image Analysis, Division of Visual Information and Interaction, Department of Information Technology, Uppsala, Sweden
- Filip Malmberg: Uppsala University, Section of Radiology, Department of Surgical Sciences, Uppsala, Sweden; Uppsala University, Centre for Image Analysis, Division of Visual Information and Interaction, Department of Information Technology, Uppsala, Sweden
|
50
|
Anderson BM, Lin EY, Cardenas CE, Gress DA, Erwin WD, Odisio BC, Koay EJ, Brock KK. Automated Contouring of Contrast and Noncontrast Computed Tomography Liver Images With Fully Convolutional Networks. Adv Radiat Oncol 2021; 6:100464. [PMID: 33490720 PMCID: PMC7807136 DOI: 10.1016/j.adro.2020.04.023] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2020] [Revised: 04/14/2020] [Accepted: 04/25/2020] [Indexed: 11/07/2022] Open
Abstract
PURPOSE The deformable nature of the liver can make focal treatment challenging and is not adequately addressed with simple rigid registration techniques. More advanced registration techniques can take deformations into account (e.g., biomechanical modeling) but require segmentations of the whole liver for each scan, which is a time-intensive process. We hypothesize that fully convolutional networks can be used to rapidly and accurately autosegment the liver, removing the temporal bottleneck for biomechanical modeling. METHODS AND MATERIALS Manual liver segmentations on computed tomography scans from 183 patients treated at our institution and 30 scans from the Medical Image Computing & Computer Assisted Intervention (MICCAI) challenges were collected for this study. Three architectures were investigated for rapid automated segmentation of the liver (VGG-16, DeepLabv3+, and a 3-dimensional UNet). Fifty-six cases were set aside as a final test set for quantitative model evaluation. Accuracy of the autosegmentations was assessed using the Dice similarity coefficient and mean surface distance. Qualitative evaluation was also performed by 3 radiation oncologists on 50 independent cases with previously clinically treated liver contours. RESULTS The mean (minimum-maximum) mean surface distances for the test groups with the final model, DeepLabv3+, were as follows: contrast (N = 17), 0.99 mm (0.47-2.2); noncontrast (N = 19), 1.12 mm (0.41-2.87); and MICCAI (N = 30), 1.48 mm (0.82-3.96). The qualitative evaluation showed that 30 of 50 autosegmentations (60%) were preferred to manual contours (majority voting) in a blinded comparison, and 48 of 50 autosegmentations (96%) were deemed clinically acceptable by at least 1 reviewing physician. CONCLUSIONS The autosegmentations were preferred over manually defined contours in the majority of cases. The ability to rapidly segment the liver with the high accuracy achieved in this investigation has the potential to enable efficient integration of biomechanical model-based registration into a clinical workflow.
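The two reported accuracy metrics, Dice similarity coefficient and mean surface distance, can be computed from binary masks as sketched below with NumPy/SciPy. The symmetric surface-distance definition used here is a common convention and our assumption about the paper's exact variant.

```python
import numpy as np
from scipy import ndimage

def dice_coefficient(a, b):
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + 1e-8)

def mean_surface_distance(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric mean distance (in mm) between the surfaces of two binary masks."""
    surf_a = a ^ ndimage.binary_erosion(a)              # boundary voxels of a
    surf_b = b ^ ndimage.binary_erosion(b)              # boundary voxels of b
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    return 0.5 * (dist_to_b[surf_a].mean() + dist_to_a[surf_b].mean())

# toy usage with two offset cubes
a = np.zeros((16, 16, 16), bool); a[4:12, 4:12, 4:12] = True
b = np.zeros_like(a); b[5:13, 4:12, 4:12] = True
print(dice_coefficient(a, b), mean_surface_distance(a, b))
```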
Affiliation(s)
- Brian M. Anderson: Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas; Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Ethan Y. Lin: Department of Interventional Radiology, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Carlos E. Cardenas: Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Dustin A. Gress: Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas
- William D. Erwin: Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Bruno C. Odisio: Department of Interventional Radiology, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Eugene J. Koay: Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Kristy K. Brock: Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas; Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas
|