1. Fan F, Ritschl L, Beister M, Biniazan R, Wagner F, Kreher B, Gottschalk TM, Kappler S, Maier A. Simulation-driven training of vision transformers enables metal artifact reduction of highly truncated CBCT scans. Med Phys 2024;51:3360-3375. [PMID: 38150576] [DOI: 10.1002/mp.16919]
Abstract
BACKGROUND Due to the high attenuation of metals, severe artifacts occur in cone beam computed tomography (CBCT). Metal segmentation in the CBCT projections usually serves as a prerequisite for metal artifact reduction (MAR) algorithms. PURPOSE Truncation caused by the limited detector size leads to incomplete metal masks when threshold-based segmentation is applied in the CBCT volume. Therefore, this work pursues segmenting metal directly in the CBCT projections. METHODS Since generating high-quality clinical training data is a constant challenge, this study proposes to generate simulated digital radiographs (data I) from real CT data combined with self-designed computer-aided design (CAD) implants. In addition to the simulated projections generated from 3D volumes, 2D x-ray images combined with projections of implants serve as a complementary data set (data II) to improve network performance. For metal segmentation, SwinConvUNet is proposed, consisting of shifted-window (Swin) vision transformers (ViTs) with patch merging as the encoder. RESULTS The model's performance is evaluated on accurately labeled test data obtained from cadaver scans as well as on unlabeled clinical projections. When trained on data I only, the convolutional neural network (CNN) encoder-based networks UNet and TransUNet achieve only limited performance on the cadaver test data, with average Dice scores of 0.821 and 0.850. After using both data I and data II during training, the average Dice scores of the two models increase to 0.906 and 0.919, respectively. By replacing the CNN encoder with a Swin transformer, the proposed SwinConvUNet reaches an average Dice score of 0.933 on cadaver projections when trained on data I only. Furthermore, SwinConvUNet achieves the highest average Dice score, 0.953, on cadaver projections when trained on the combined data set.
CONCLUSIONS Our experiments quantitatively demonstrate the effectiveness of combining projections simulated via the two pathways for network training. Moreover, the proposed SwinConvUNet, trained only on simulated projections, performs state-of-the-art, robust metal segmentation, as demonstrated in experiments on cadaver and clinical data sets. With accurate segmentations from the proposed model, MAR can be conducted even for highly truncated CBCT scans.
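The Dice scores quoted in this abstract follow the standard overlap metric; a minimal NumPy sketch (function name and toy masks are illustrative, not taken from the paper):

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # 2|A∩B| / (|A| + |B|); eps guards against two empty masks
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)
```

A score of 1.0 means perfect overlap between predicted and labeled metal masks; 0.0 means none.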
Affiliation(s)
- Fuxin Fan
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Fabian Wagner
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Andreas Maier
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
2. Aootaphao S, Puttawibul P, Thajchayapong P, Thongvigitmanee SS. Artifact suppression for breast specimen imaging in micro CBCT using deep learning. BMC Med Imaging 2024;24:34. [PMID: 38321390] [PMCID: PMC10845762] [DOI: 10.1186/s12880-024-01216-5]
Abstract
BACKGROUND Cone-beam computed tomography (CBCT) has been introduced for breast-specimen imaging to identify a free resection margin of abnormal tissues in breast conservation. Typical micro-CT, however, requires long acquisition and computation times. One simple way to reduce the acquisition time is to decrease the number of projections, but doing so generates streak artifacts on breast specimen images. Furthermore, the presence of a metallic needle marker on a breast specimen causes metal artifacts that are prominently visible in the images. In this work, we propose a deep learning-based approach for suppressing both streak and metal artifacts in CBCT. METHODS Sinogram data sets acquired from CBCT with a small number of projections and containing metal objects were used. The sinogram was first modified by removing the metal objects and upsampling in the angular direction. The modified sinogram was then initialized by linear interpolation and synthesized by a modified neural network model based on a U-Net structure. The synthesized sinogram was reconstructed with the traditional filtered backprojection (FBP) approach to obtain the reconstructed images. Residual artifacts remaining in the images were further handled by another neural network model, ResU-Net. The resulting denoised image was combined with the extracted metal objects at their original positions to produce the final result. RESULTS The image quality of the reconstructed images from the proposed method was better than that of images from conventional FBP, iterative reconstruction (IR), sinogram linear interpolation, ResU-Net denoising alone, and U-Net sinogram synthesis alone. The proposed method yielded a 3.6 times higher contrast-to-noise ratio, a 1.3 times higher peak signal-to-noise ratio, and a 1.4 times higher structural similarity index (SSIM) than the traditional technique.
Soft tissues around the marker showed good improvement, and the most severe artifacts in the images were significantly reduced by the proposed method. CONCLUSIONS Our proposed method performs well in reducing streak and metal artifacts in CBCT reconstructed images, thus improving the overall breast specimen images. This would be beneficial for clinical use.
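The angular upsampling step that initializes the sinogram by linear interpolation can be sketched as follows (a generic implementation assuming a 2D views-by-detectors sinogram; names are mine, not the authors'):

```python
import numpy as np

def upsample_sinogram_angular(sino, factor):
    """Linearly interpolate a sparse-view sinogram along the angular axis.

    sino: array of shape (n_views, n_detectors); factor: integer upsampling rate.
    Returns an array of shape ((n_views - 1) * factor + 1, n_detectors).
    """
    n_views, n_det = sino.shape
    coarse = np.arange(n_views)
    fine = np.linspace(0, n_views - 1, (n_views - 1) * factor + 1)
    out = np.empty((fine.size, n_det))
    for d in range(n_det):
        # interpolate each detector channel independently across view angles
        out[:, d] = np.interp(fine, coarse, sino[:, d])
    return out
```

In the paper this interpolated sinogram is only an initialization; the U-Net then synthesizes the missing views properly.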
Affiliation(s)
- Sorapong Aootaphao
- Faculty of Medicine, Prince of Songkla University, Songkhla, Thailand.
- Medical Imaging System Research Team, Assistive Technology and Medical Devices Research Group, National Electronics and Computer Technology Center, National Science and Technology Development Agency, Pathum Thani, Thailand.
- Saowapak S Thongvigitmanee
- Medical Imaging System Research Team, Assistive Technology and Medical Devices Research Group, National Electronics and Computer Technology Center, National Science and Technology Development Agency, Pathum Thani, Thailand.
3. Yun S, Jeong U, Lee D, Kim H, Cho S. Image quality improvement in bowtie-filter-equipped cone-beam CT using a dual-domain neural network. Med Phys 2023;50:7498-7512. [PMID: 37669510] [DOI: 10.1002/mp.16693]
Abstract
BACKGROUND The bowtie-filter in cone-beam CT (CBCT) produces a spatially nonuniform x-ray beam, often leading to eclipse artifacts in the reconstructed image. These artifacts are further confounded by patient scatter and are therefore patient-dependent as well as system-specific. PURPOSE In this study, we propose a dual-domain network for reducing the bowtie-filter-induced artifacts in CBCT images. METHODS In the projection domain, the network compensates for the filter-induced beam-hardening effects that are highly related to the eclipse artifacts. The output of the projection-domain network was used for image reconstruction, and the reconstructed images were fed into the image-domain network. In the image domain, the network further reduces the remaining cupping artifacts associated with scatter. A single image-domain-only network was also implemented for comparison. RESULTS The proposed approach successfully enhanced soft-tissue contrast with much-reduced image artifacts. In the numerical study, the proposed method decreased the perceptual loss and root-mean-square error (RMSE) of the images by 84.5% and 84.9%, respectively, and increased the structural similarity index measure (SSIM) by 0.26 compared with the original input images on average. In the experimental study, the proposed method decreased the perceptual loss and RMSE by 87.2% and 92.1%, respectively, and increased the SSIM by 0.58 compared with the original input images on average. CONCLUSIONS We have proposed a deep-learning-based dual-domain framework to reduce the bowtie-filter artifacts and to increase the soft-tissue contrast in CBCT images. The performance of the proposed method has been successfully demonstrated in both numerical and experimental studies.
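The paper learns the beam-hardening compensation with a network; for orientation, the classical projection-domain correction that such a network generalizes is a polynomial applied to the log-domain line integrals (a hedched generic sketch with made-up coefficients, not the paper's model):

```python
import numpy as np

def polynomial_bh_correction(line_integrals, coeffs):
    """Classical polynomial beam-hardening correction on log-domain
    projection values p: p_corr = c1*p + c2*p**2 + c3*p**3 + ...

    coeffs are ordinarily fitted from phantom measurements; the values
    used in any real system are calibration-specific.
    """
    p = np.asarray(line_integrals, dtype=float)
    corrected = np.zeros_like(p)
    for k, c in enumerate(coeffs, start=1):
        corrected += c * p ** k
    return corrected
```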
Affiliation(s)
- Sungho Yun
- Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
- Uijin Jeong
- Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
- Donghyeon Lee
- Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
- Hyeongseok Kim
- KAIST Institute for Artificial Intelligence, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
- Seungryong Cho
- Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
- KAIST Institute for Artificial Intelligence, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
- KAIST Institute for Health Science and Technology, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
- KAIST Institute for IT Convergence, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
4. Li G, Ji L, You C, Gao S, Zhou L, Bai K, Luo S, Gu N. MARGANVAC: metal artifact reduction method based on generative adversarial network with variable constraints. Phys Med Biol 2023;68:205005. [PMID: 37696272] [DOI: 10.1088/1361-6560/acf8ac]
Abstract
Objective. Metal artifact reduction (MAR) has been a key issue in CT imaging. Recently, MAR methods based on deep learning have achieved promising results. However, when deploying deep learning-based MAR in real-world clinical scenarios, two prominent challenges arise. One limitation is the lack of paired training data in real applications, which limits the practicality of supervised methods. Another is that image-domain methods, which suit more application scenarios, are inadequate in performance, while end-to-end approaches with better performance are only applicable to fan-beam CT due to large memory consumption. Approach. We propose a novel image-domain MAR method based on a generative adversarial network with variable constraints (MARGANVAC) to improve MAR performance. The proposed variable constraint is a time-varying cost function that relaxes the fidelity constraint at the beginning of training and gradually strengthens it as training progresses. To better deploy our image-domain supervised method in practical scenarios, we develop a transfer method that mimics real metal artifacts by first extracting the real metal traces and then adding them to artifact-free images to generate paired training data. Main results. The effectiveness of the proposed method is validated in simulated fan-beam experiments and real cone-beam experiments. All quantitative and qualitative results demonstrate that the proposed method achieves superior performance compared with the competing methods. Significance. The MARGANVAC model proposed in this paper is an image-domain model that can be conveniently applied to various scenarios such as fan-beam and cone-beam CT, while its performance is on par with cutting-edge dual-domain MAR approaches. In addition, the proposed metal artifact transfer method can easily generate paired data with real artifact features, which can be better used for model training in real scenarios.
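The "variable constraint" is described as a time-varying cost function that relaxes the fidelity term early in training and strengthens it later; one plausible minimal realization is a linear ramp on the fidelity weight (the paper's actual schedule may differ; names and defaults here are illustrative):

```python
def fidelity_weight(epoch, n_epochs, w_start=0.1, w_end=1.0):
    """Time-varying weight for the fidelity term of a GAN loss:
    relaxed (w_start) at the beginning of training, gradually
    strengthened toward w_end as training progresses (linear ramp)."""
    t = min(max(epoch / max(n_epochs - 1, 1), 0.0), 1.0)
    return w_start + (w_end - w_start) * t
```

The total generator loss would then be `loss_adv + fidelity_weight(epoch, n_epochs) * loss_fidelity`, letting the network explore freely early on before being pinned to the reference.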
Affiliation(s)
- Guang Li
- Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Sciences and Medical Engineering, Southeast University, Nanjing 210096, People's Republic of China
- Longyin Ji
- Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Sciences and Medical Engineering, Southeast University, Nanjing 210096, People's Republic of China
- Chenyu You
- Image Processing and Analysis Group (IPAG), Yale University, New Haven 06510, United States of America
- Shuai Gao
- Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Sciences and Medical Engineering, Southeast University, Nanjing 210096, People's Republic of China
- Langrui Zhou
- Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Sciences and Medical Engineering, Southeast University, Nanjing 210096, People's Republic of China
- Keshu Bai
- Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Sciences and Medical Engineering, Southeast University, Nanjing 210096, People's Republic of China
- Shouhua Luo
- Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Sciences and Medical Engineering, Southeast University, Nanjing 210096, People's Republic of China
- Ning Gu
- Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Sciences and Medical Engineering, Southeast University, Nanjing 210096, People's Republic of China
5. Tang H, Lin YB, Jiang SD, Li Y, Li T, Bao XD. A new dental CBCT metal artifact reduction method based on a dual-domain processing framework. Phys Med Biol 2023;68:175016. [PMID: 37524084] [DOI: 10.1088/1361-6560/acec29]
Abstract
Objective. Cone beam computed tomography (CBCT) has been widely used in the clinical treatment of dental diseases. However, patients often have metallic implants in the mouth, which lead to severe metal artifacts in the reconstructed images. To reduce metal artifacts in dental CBCT images, which involve a larger amount of data and a more limited field of view than conventional computed tomography images, a new dental CBCT metal artifact reduction method based on projection correction and a convolutional neural network (CNN) based image post-processing model is proposed in this paper. Approach. The proposed method consists of four stages: (1) volume reconstruction and metal segmentation in the image domain, using forward projection to obtain the metal masks in the projection domain; (2) linear interpolation in the projection domain and reconstruction to build a linear interpolation (LI) corrected volume; (3) taking the LI corrected volume as a prior and performing prior-based beam hardening correction in the projection domain; and (4) combining the projection-corrected volume and the LI volume slice-by-slice in the image domain with two concatenated U-Net based models (CNN1 and CNN2). Simulated and clinical dental CBCT cases are used to evaluate the proposed method. The normalized root mean square difference (NRMSD) and the structural similarity index (SSIM) are used for quantitative evaluation. Main results. The proposed method outperforms the frequency-domain fusion method (FS-MAR) and a state-of-the-art CNN based method on the simulated data set and yields the best NRMSD and SSIM of 4.0196 and 0.9924, respectively. Visual results on both simulated and clinical images also illustrate that the proposed method effectively reduces metal artifacts. Significance. This study demonstrates that the proposed dual-domain processing framework is suitable for metal artifact reduction in dental CBCT images.
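Stage (2)'s linear interpolation across the metal trace can be sketched for a single projection row (a generic LI-MAR building block, not the authors' code; names are illustrative):

```python
import numpy as np

def inpaint_metal_trace(proj_row, metal_mask):
    """Replace metal-affected detector samples in one projection row by
    linear interpolation from the nearest unaffected neighbors.

    proj_row: 1D array of detector readings; metal_mask: boolean array,
    True where the ray passed through metal."""
    x = np.arange(proj_row.size)
    valid = ~metal_mask
    # interpolate only over masked samples; keep valid samples untouched
    return np.where(metal_mask, np.interp(x, x[valid], proj_row[valid]), proj_row)
```

Applying this to every row of every projection, then reconstructing, yields the LI-corrected volume used as the prior in stage (3).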
Affiliation(s)
- Hui Tang
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, People's Republic of China
- Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, Nanjing, People's Republic of China
- Yu Bing Lin
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, People's Republic of China
- Su Dong Jiang
- School of Software Engineering, Southeast University, Nanjing, People's Republic of China
- Yu Li
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, People's Republic of China
- Tian Li
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, People's Republic of China
- Xu Dong Bao
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, People's Republic of China
6. Yang P, Ge X, Tsui T, Liang X, Xie Y, Hu Z, Niu T. Four-dimensional cone beam CT imaging using a single routine scan via deep learning. IEEE Trans Med Imaging 2023;42:1495-1508. [PMID: 37015393] [DOI: 10.1109/tmi.2022.3231461]
Abstract
A novel method is proposed to obtain four-dimensional (4D) cone-beam computed tomography (CBCT) images from a single routine scan in patients with upper abdominal cancer. The projections are sorted according to the location of the lung diaphragm before being reconstructed into phase-sorted data. A multiscale-discriminator generative adversarial network (MSD-GAN) is proposed to alleviate the severe streaking artifacts in the original images. The MSD-GAN is trained on simulated CBCT data sets generated from patient planning CT images. The enhanced images are further used to estimate the deformation vector field (DVF) among breathing phases using a deformable image registration method. The estimated DVF is then applied in a motion-compensated ordered-subset simultaneous algebraic reconstruction approach to generate the 4D CBCT images. The proposed MSD-GAN is compared with U-Net with respect to image enhancement. In simulation and patient studies, the proposed method significantly outperforms both the total variation regularization-based iterative reconstruction approach and the variant using only the MSD-GAN to enhance the original phase-sorted images in terms of 4D reconstruction quality, and the MSD-GAN also shows higher accuracy than the U-Net. The proposed method enables a practical way to perform 4D CBCT imaging from a single routine scan in upper abdominal cancer treatment, including liver and pancreatic tumors.
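The sorting of projections by diaphragm location can be sketched as amplitude binning of a surrogate respiratory signal (one common variant; the paper's exact binning scheme may differ, and all names here are illustrative):

```python
import numpy as np

def sort_projections_into_phases(diaphragm_pos, n_phases):
    """Assign each projection a respiratory bin index from a surrogate
    signal (e.g., the diaphragm position tracked in the projections),
    using simple amplitude binning between the signal's extremes."""
    pos = np.asarray(diaphragm_pos, dtype=float)
    lo, hi = pos.min(), pos.max()
    # normalize to [0, 1), scale to n_phases bins, clip the maximum sample
    bins = np.clip(((pos - lo) / (hi - lo + 1e-12) * n_phases).astype(int),
                   0, n_phases - 1)
    return bins
```

The projections sharing a bin index are then reconstructed together into one phase of the 4D volume.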
7. Gottschalk TM, Maier A, Kordon F, Kreher BW. DL-based inpainting for metal artifact reduction for cone beam CT using metal path length information. Med Phys 2023;50:128-141. [PMID: 35925029] [DOI: 10.1002/mp.15909]
Abstract
BACKGROUND Metallic implants inserted into the patient's body during trauma interventions are the main cause of heavy artifacts in 3D x-ray acquisitions. These artifacts hinder the evaluation of correct implant positioning, thus disturbing the patient's healing process and increasing revision rates. PURPOSE This problem is tackled by so-called metal artifact reduction (MAR) methods. This paper examines possible advances in the inpainting step of such MAR methods to decrease disruptive artifacts while simultaneously preserving important anatomical structures adjacent to the inserted implants. METHODS A learning-based inpainting method for cone-beam computed tomography is proposed that couples a convolutional neural network (CNN) with an estimated metal path length as prior knowledge. The proposed method is trained and evaluated solely on real measured data. RESULTS The proposed inpainting approach shows advantages over the inpainting used by the clinically approved frequency-split metal artifact reduction (fsMAR) method as well as over the learning-based state-of-the-art (SOTA) method PConv-Net. Its major improvement lies in correctly preserving important anatomical structures in regions adjacent to the metal implants; precisely these regions are highly important for correct implant positioning in an intraoperative setup. Using the proposed inpainting, the corresponding MAR volumes reach a mean structural similarity index measure (SSIM) of 0.9974 and outperform the other methods by up to 6 dB on single slices in peak signal-to-noise ratio (PSNR). Furthermore, the proposed method generalizes to the clinical cases at hand. CONCLUSIONS This paper proposes a learning-based inpainting network that leverages prior knowledge about the metal path length of the inserted implant.
Evaluations on real measured data reveal increased overall MAR performance, especially regarding the preservation of anatomical structures adjacent to the inserted implants. Further evaluations suggest that the proposed approach generalizes to clinical cases.
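The metal path length prior can be approximated, for a parallel-ray toy geometry, as the ray sum through the segmented metal mask times the voxel size (the paper works in cone-beam geometry, so this axis-aligned simplification is mine):

```python
import numpy as np

def metal_path_length(metal_mask, voxel_size, axis=0):
    """Approximate per-ray metal path length for parallel rays along one
    volume axis: number of metal voxels traversed times the voxel size.

    metal_mask: binary volume/slice of the segmented implant;
    voxel_size: edge length in mm along the ray direction."""
    return metal_mask.sum(axis=axis) * voxel_size
```

A map like this, one value per detector pixel, is the kind of prior-knowledge channel the inpainting CNN can condition on.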
Affiliation(s)
- Tristan M Gottschalk
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Siemens Healthineers, Forchheim, Germany
- Erlangen Graduate School in Advanced Optical Technologies (SAOT), FAU, Erlangen, Germany
- Andreas Maier
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Florian Kordon
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Siemens Healthineers, Forchheim, Germany
- Erlangen Graduate School in Advanced Optical Technologies (SAOT), FAU, Erlangen, Germany
8. Kim S, Ahn J, Kim B, Kim C, Baek J. Convolutional neural network-based metal and streak artifacts reduction in dental CT images with sparse-view sampling scheme. Med Phys 2022;49:6253-6277. [DOI: 10.1002/mp.15884]
Affiliation(s)
- Seongjun Kim
- School of Integrated Technology, Yonsei University, Incheon 21983, South Korea
- Junhyun Ahn
- School of Integrated Technology, Yonsei University, Incheon 21983, South Korea
- Byeongjoon Kim
- School of Integrated Technology, Yonsei University, Incheon 21983, South Korea
- Chulhong Kim
- Departments of Electrical Engineering, Convergence IT Engineering, and Mechanical Engineering, School of Interdisciplinary Bioscience and Bioengineering, and Medical Device Innovation Center, Pohang University of Science and Technology, Pohang 37673, South Korea
- Jongduk Baek
- School of Integrated Technology, Yonsei University, Incheon 21983, South Korea
9. Thies M, Wagner F, Huang Y, Gu M, Kling L, Pechmann S, Aust O, Grüneboom A, Schett G, Christiansen S, Maier A. Calibration by differentiation - self-supervised calibration for X-ray microscopy using a differentiable cone-beam reconstruction operator. J Microsc 2022;287:81-92. [PMID: 35638174] [DOI: 10.1111/jmi.13125]
Abstract
High-resolution X-ray microscopy (XRM) is gaining interest for biological investigations of extremely small-scale structures. XRM imaging of bones in living mice could provide new insights into the emergence and treatment of osteoporosis by observing osteocyte lacunae, which are holes in the bone a few micrometers in size. Imaging living animals at that resolution, however, is extremely challenging and requires very sophisticated data processing to convert the raw XRM detector output into reconstructed images. This paper presents an open-source, differentiable reconstruction pipeline for XRM data which analytically computes the final image from the raw measurements. In contrast to most proprietary reconstruction software, it offers the user full control over each processing step and additionally makes the entire pipeline deep learning compatible by ensuring differentiability. This allows fitting trainable modules both before and after the actual reconstruction step in a purely data-driven way using the gradient-based optimizers of common deep learning frameworks. The value of such differentiability is demonstrated by calibrating the parameters of a simple cupping correction module operating on the raw projection images, using only a self-supervisory quality metric based on the reconstructed volume and no further calibration measurements. The retrospective calibration directly improves image quality, as it avoids cupping artifacts and decreases the difference in gray values between outer and inner bone by 68% to 94%. Furthermore, it makes the reconstruction process entirely independent of the XRM manufacturer and paves the way to explore modern deep learning reconstruction methods for arbitrary XRM and, potentially, other flat-panel CT systems.
This exemplifies how differentiable reconstruction can be leveraged in the context of XRM and is hence an important step toward reducing the resolution limit of in-vivo bone imaging to the single-micrometer domain.
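The self-supervised calibration idea, fitting a correction parameter purely from a quality metric on the output, can be illustrated with a toy cupping coefficient fitted by gradient descent on a flatness loss (the analytic gradient here stands in for the paper's differentiable reconstruction operator; all names and the linear cupping model are illustrative):

```python
import numpy as np

def calibrate_cupping(profile, raw, lr=0.5, n_iter=200):
    """Toy self-supervised calibration: find coefficient c such that the
    corrected signal raw - c * profile is maximally flat. Flatness
    (variance) of a region expected to be homogeneous serves as the
    self-supervisory quality metric; no ground-truth measurement is used."""
    c = 0.0
    for _ in range(n_iter):
        corr = raw - c * profile
        # analytic gradient: d/dc Var(corr) = -2 * Cov(corr, profile)
        grad = -2.0 * np.mean((corr - corr.mean()) * (profile - profile.mean()))
        c -= lr * grad
    return c
```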
Affiliation(s)
- Mareike Thies
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Fabian Wagner
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Yixing Huang
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Mingxuan Gu
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Lasse Kling
- Institute for Nanotechnology and Correlative Microscopy e.V. INAM, Forchheim, Germany
- Sabrina Pechmann
- Fraunhofer Institute for Ceramic Technologies and Systems IKTS, Forchheim, Germany
- Oliver Aust
- Department of Internal Medicine 3 - Rheumatology and Immunology, Friedrich-Alexander-Universität Erlangen-Nürnberg and Universitätsklinikum Erlangen, Erlangen, Germany
- Anika Grüneboom
- Leibniz Institute for Analytical Sciences ISAS, Dortmund, Germany
- Georg Schett
- Department of Internal Medicine 3 - Rheumatology and Immunology, Friedrich-Alexander-Universität Erlangen-Nürnberg and Universitätsklinikum Erlangen, Erlangen, Germany
- Deutsches Zentrum für Immuntherapie, Friedrich-Alexander-Universität Erlangen-Nürnberg and Universitätsklinikum Erlangen, Erlangen, Germany
- Silke Christiansen
- Institute for Nanotechnology and Correlative Microscopy e.V. INAM, Forchheim, Germany
- Fraunhofer Institute for Ceramic Technologies and Systems IKTS, Forchheim, Germany
- Physics Department, Freie Universität Berlin, Berlin, Germany
- Andreas Maier
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
10. Wagner F, Thies M, Gu M, Huang Y, Pechmann S, Patwari M, Ploner S, Aust O, Uderhardt S, Schett G, Christiansen S, Maier A. Ultra low-parameter denoising: trainable bilateral filter layers in computed tomography. Med Phys 2022;49:5107-5120. [PMID: 35583171] [DOI: 10.1002/mp.15718]
Abstract
BACKGROUND Computed tomography (CT) is widely used as an imaging tool to visualize three-dimensional structures with expressive bone-soft tissue contrast. However, CT resolution can be severely degraded through low-dose acquisitions, highlighting the importance of effective denoising algorithms. PURPOSE Most data-driven denoising techniques are based on deep neural networks and, therefore, contain hundreds of thousands of trainable parameters, making them incomprehensible and prone to prediction failures. Developing understandable and robust denoising algorithms achieving state-of-the-art performance helps to minimize radiation dose while maintaining data integrity. METHODS This work presents an open-source CT denoising framework based on the idea of bilateral filtering. We propose a bilateral filter that can be incorporated into any deep learning pipeline and optimized in a purely data-driven way by calculating the gradient flow toward its hyperparameters and its input. Denoising in pure image-to-image pipelines and across different domains such as raw detector data and reconstructed volume, using a differentiable backprojection layer, is demonstrated. In contrast to other models, our bilateral filter layer consists of only four trainable parameters and constrains the applied operation to follow the traditional bilateral filter algorithm by design. RESULTS Although only using three spatial parameters and one intensity range parameter per filter layer, the proposed denoising pipelines can compete with deep state-of-the-art denoising architectures with several hundred thousand parameters. Competitive denoising performance is achieved on x-ray microscope bone data and the 2016 Low Dose CT Grand Challenge data set. We report structural similarity index measures (SSIM) of 0.7094 and 0.9674 and peak signal-to-noise ratio (PSNR) values of 33.17 and 43.07 on the respective data sets. 
CONCLUSIONS Due to the extremely low number of trainable parameters with a well-defined effect, prediction reliability and data integrity are guaranteed at all times in the proposed pipelines, in contrast to most other deep learning-based denoising architectures.
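The trainable layer constrains itself to the classical bilateral filter algorithm; a brute-force 2D reference version with per-axis spatial sigmas and one intensity range sigma can be sketched as follows (the paper's layer is 3D with learned parameters and gradient flow; this fixed-parameter NumPy sketch is mine):

```python
import numpy as np

def bilateral_filter_2d(img, sigma_x=1.0, sigma_y=1.0, sigma_i=0.1, radius=2):
    """Brute-force 2D bilateral filter: each output pixel is a weighted
    average of its neighborhood, with weights that fall off both with
    spatial distance (sigma_x, sigma_y) and intensity difference (sigma_i)."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    dy, dx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    # spatial kernel is fixed per filter; per-axis sigmas mirror the
    # paper's separate spatial parameters
    spatial = np.exp(-(dx**2 / (2 * sigma_x**2) + dy**2 / (2 * sigma_y**2)))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-((patch - img[i, j])**2) / (2 * sigma_i**2))
            wgt = spatial * rng
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out
```

The range term `rng` is what makes the filter edge-preserving: pixels across a strong edge get near-zero weight, so edges are not blurred away.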
Affiliation(s)
- Fabian Wagner
- Pattern Recognition Lab, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, 91058, Germany
- Mareike Thies
- Pattern Recognition Lab, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, 91058, Germany
- Mingxuan Gu
- Pattern Recognition Lab, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, 91058, Germany
- Yixing Huang
- Pattern Recognition Lab, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, 91058, Germany
- Sabrina Pechmann
- Fraunhofer Institute for Ceramic Technologies and Systems IKTS, Forchheim, 91301, Germany
- Mayank Patwari
- Pattern Recognition Lab, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, 91058, Germany
- Stefan Ploner
- Pattern Recognition Lab, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, 91058, Germany
- Oliver Aust
- Department of Internal Medicine 3 - Rheumatology and Immunology, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, 91054, Germany
- University Hospital Erlangen, Erlangen, 91054, Germany
- Stefan Uderhardt
- Department of Internal Medicine 3 - Rheumatology and Immunology, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, 91054, Germany
- University Hospital Erlangen, Erlangen, 91054, Germany
- Georg Schett
- Department of Internal Medicine 3 - Rheumatology and Immunology, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, 91054, Germany
- University Hospital Erlangen, Erlangen, 91054, Germany
- Silke Christiansen
- Fraunhofer Institute for Ceramic Technologies and Systems IKTS, Forchheim, 91301, Germany
- Institute for Nanotechnology and Correlative Microscopy e.V. INAM, Forchheim, 91301, Germany
- Andreas Maier
- Pattern Recognition Lab, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, 91058, Germany
| |
11
Zhou B, Chen X, Zhou SK, Duncan JS, Liu C. DuDoDR-Net: Dual-domain data consistent recurrent network for simultaneous sparse view and metal artifact reduction in computed tomography. Med Image Anal 2022; 75:102289. [PMID: 34758443] [PMCID: PMC8678361] [DOI: 10.1016/j.media.2021.102289] [Citation(s) in RCA: 32] [Impact Index Per Article: 10.7] [Received: 05/18/2021] [Revised: 09/03/2021] [Accepted: 10/20/2021] [Indexed: 01/03/2023]
Abstract
Sparse-view computed tomography (SVCT) aims to reconstruct a cross-sectional image from a reduced number of x-ray projections. While SVCT efficiently reduces the radiation dose, the reconstruction suffers from severe streak artifacts, and these artifacts are further amplified in the presence of metallic implants, which can adversely impact medical diagnosis and other downstream applications. Previous methods have extensively explored either SVCT reconstruction without metallic implants or full-view CT metal artifact reduction (MAR). Simultaneous sparse-view and metal artifact reduction (SVMAR) remains under-explored, and directly applying previous SVCT and MAR methods to SVMAR yields non-ideal reconstruction quality. In this work, we propose a dual-domain data consistent recurrent network, called DuDoDR-Net, for SVMAR. DuDoDR-Net reconstructs an artifact-free image through recurrent image-domain and sinogram-domain restorations. To ensure that the metal-free part of the acquired projection data is preserved, we also develop an image data consistent layer (iDCL) and a sinogram data consistent layer (sDCL) that are interleaved in the recurrent framework. Our experimental results demonstrate that DuDoDR-Net produces superior artifact-reduced results while preserving anatomical structures, outperforming previous SVCT and MAR methods under different sparse-view acquisition settings.
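The data consistency idea can be sketched simply: wherever a sinogram bin was actually measured (i.e., belongs to an acquired sparse view) and is not corrupted by the metal trace, keep the acquired value; everywhere else, keep the network's restoration. The function and mask names below are hypothetical; the paper's iDCL/sDCL layers realize this principle inside a recurrent dual-domain network.

```python
import numpy as np

def sinogram_data_consistency(pred_sino, acquired_sino, view_mask, metal_trace):
    """Sketch of a sinogram data consistent operation.

    pred_sino     : (n_views, n_det) network-restored sinogram
    acquired_sino : (n_views, n_det) measured sinogram (zeros where unmeasured)
    view_mask     : (n_views,) bool, True for acquired sparse views
    metal_trace   : (n_views, n_det) bool, True where metal corrupts the data
    """
    # Trust a bin only if its view was measured AND it is metal-free.
    trusted = view_mask[:, None] & ~metal_trace
    return np.where(trusted, acquired_sino, pred_sino)
```

An image-domain analog (iDCL-style) would apply the same replacement after backprojecting both sinograms, which is why the two layers can be interleaved across recurrent iterations.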
Affiliation(s)
- Bo Zhou
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Xiongchao Chen
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- S Kevin Zhou
- School of Biomedical Engineering & Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, China; Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- James S Duncan
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA; Department of Electrical Engineering, Yale University, New Haven, CT, USA
- Chi Liu
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
12
Uneri A, Wu P, Jones CK, Vagdargi P, Han R, Helm PA, Luciano MG, Anderson WS, Siewerdsen JH. Deformable 3D-2D registration for high-precision guidance and verification of neuroelectrode placement. Phys Med Biol 2021; 66. [PMID: 34644684] [DOI: 10.1088/1361-6560/ac2f89] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Received: 04/21/2021] [Accepted: 10/13/2021] [Indexed: 11/11/2022]
Abstract
Purpose. Accurate neuroelectrode placement is essential to effective monitoring or stimulation of neurosurgery targets. This work presents and evaluates a method that combines deep learning and model-based deformable 3D-2D registration to guide and verify neuroelectrode placement using intraoperative imaging. Methods. The registration method consists of three stages: (1) detection of neuroelectrodes in a pair of fluoroscopy images using a deep learning approach; (2) determination of correspondence and initial 3D localization among neuroelectrode detections in the two projection images; and (3) deformable 3D-2D registration of neuroelectrodes according to a physical device model. The method was evaluated in phantom, cadaver, and clinical studies in terms of (a) the accuracy of neuroelectrode registration and (b) the quality of metal artifact reduction (MAR) in cone-beam CT (CBCT), in which the deformably registered neuroelectrode models are taken as input to the MAR. Results. The combined deep learning and model-based deformable 3D-2D registration approach achieved 0.2 ± 0.1 mm accuracy in cadaver studies and 0.6 ± 0.3 mm accuracy in clinical studies. The detection network and 3D correspondence provided initialization of the 3D-2D registration within 2 mm, which facilitated an end-to-end registration runtime within 10 s. Metal artifacts, quantified as the standard deviation in voxel values in tissue adjacent to the neuroelectrodes, were reduced by 72% in phantom studies and by 60% in first clinical studies. Conclusions. The method combines the speed and generalizability of deep learning (for initialization) with the precision and reliability of physical model-based registration to achieve accurate deformable 3D-2D registration and MAR in functional neurosurgery. Accurate 3D-2D guidance from fluoroscopy could overcome limitations associated with deformation in conventional navigation, and improved MAR could improve CBCT verification of neuroelectrode placement.
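The artifact metric used in the Results section is straightforward to compute: artifact severity is taken as the standard deviation of voxel values in a tissue region adjacent to the neuroelectrodes, and MAR performance as its percent reduction. The function and argument names below are illustrative, not from the paper's code.

```python
import numpy as np

def artifact_reduction_percent(vol_before, vol_after, tissue_mask):
    """Percent reduction in metal artifact, measured as the standard
    deviation of voxel values within a tissue mask adjacent to the
    electrodes, before vs. after metal artifact reduction (MAR)."""
    sd_before = vol_before[tissue_mask].std()
    sd_after = vol_after[tissue_mask].std()
    return 100.0 * (sd_before - sd_after) / sd_before
```

For example, a drop in adjacent-tissue standard deviation from 5 HU to 2 HU corresponds to a 60% reduction, the figure the paper reports for its first clinical studies.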
Affiliation(s)
- A Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
- P Wu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
- C K Jones
- Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD 21218, United States of America
- P Vagdargi
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, United States of America
- R Han
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
- P A Helm
- Medtronic, Littleton, MA 01460, United States of America
- M G Luciano
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD 21287, United States of America
- W S Anderson
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD 21287, United States of America
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America; Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD 21218, United States of America; Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, United States of America; Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD 21287, United States of America
13
La Rivière PJ, Fahrig R, Pelc NJ. Special Section Guest Editorial: Computed tomography (CT) at 50 years. J Med Imaging (Bellingham) 2021; 8:052101. [PMID: 34738026] [PMCID: PMC8558671] [DOI: 10.1117/1.jmi.8.5.052101] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Indexed: 11/26/2022] Open
Abstract
Guest editors Patrick La Riviere, Rebecca Fahrig, and Norbert Pelc introduce the JMI Special Section Celebrating X-Ray Computed Tomography at 50.
Affiliation(s)
- Patrick J La Rivière
- University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Rebecca Fahrig
- Siemens Healthineers, Innovation, Advanced Therapies, Forchheim, Bavaria, Germany
- Friedrich-Alexander Universität, Department of Computer Science 5, Erlangen, Germany
- Norbert J Pelc
- Stanford University, Department of Radiology, Stanford, California, United States