1
Ma G, Zhao X, Zhu Y, Luo T. Fourier-enhanced high-order total variation (FeHOT) iterative network for interior tomography. Phys Med Biol 2025; 70:095001. [PMID: 40179937] [DOI: 10.1088/1361-6560/adc8f6]
Abstract
Objective. Determining a satisfactory solution for different computed tomography (CT) settings has been a long-standing challenge in interior tomography. Traditional methods such as filtered back-projection (FBP) suffer from low contrast, while deep learning approaches often lack data consistency. The goal is to leverage high-order total variation (HOT) regularization and Fourier-based frequency-domain enhancement to achieve high-precision reconstruction from truncated projection data while overcoming limitations of existing methods such as slow convergence, over-smoothing, and loss of high-frequency detail. Approach. The proposed Fourier-enhanced HOT (FeHOT) network employs a coarse-to-fine strategy. First, a HOT-based unrolled iterative network accelerates coarse reconstruction using a learned primal-dual algorithm for data consistency and implicit high-order gradient constraints. Second, a Fourier-enhanced U-Net module selectively attenuates low-frequency components in the skip connections while amplifying high-frequency features from the FBP result, preserving edge and texture details. Frequency-dependent scaling factors are introduced to balance spectral components during refinement. Main results. Experiments on the AAPM and clinical medical datasets demonstrate FeHOT's superiority over competing methods (FBP, HOT, AG-Net, PD-Net). For the medical dataset, FeHOT achieved PSNR = 41.17 (noise-free) and 39.24 (noisy), outperforming PD-Net (33.42/31.08) and AG-Net (33.41/31.31). For the AAPM dataset, where the imaged objects exhibit piecewise-constant properties, first-order total variation achieved satisfactory results; in contrast, for clinical medical datasets with non-piecewise-constant characteristics (e.g. complex anatomical structures), FeHOT's second-order regularization better matched the high-quality requirements of interior tomography. Ablation studies confirmed the necessity of the Fourier enhancement, showing significant improvements in edge preservation (e.g. SSIM increased from 0.9877 to 0.9976 in the noise-free case). The method achieved high-quality reconstruction within five iterations, reducing computational cost. Significance. FeHOT represents a paradigm shift in interior tomography by (1) bridging classical HOT theory and deep learning through an iterative unrolling framework, (2) introducing frequency-domain operations to overcome the limitations of polynomial/piecewise-constant assumptions in CT images, and (3) enabling high-quality reconstruction in just five iterations, balancing computational efficiency with accuracy. This method offers a promising solution for low-dose, precise imaging in clinical and industrial applications.
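The frequency-dependent scaling of skip-connection features described above can be illustrated with a short sketch. The cutoff and the low/high scaling factors below are illustrative assumptions, not values from the paper; the sketch only shows how a feature map could be reweighted in the Fourier domain to attenuate low frequencies and amplify high frequencies.

```python
import torch

def fourier_enhance(feat: torch.Tensor, cutoff: float = 0.1,
                    low_scale: float = 0.5, high_scale: float = 1.5) -> torch.Tensor:
    """Reweight a (B, C, H, W) skip-connection feature map in the Fourier domain:
    attenuate low frequencies, amplify high frequencies (illustrative values)."""
    h, w = feat.shape[-2:]
    spec = torch.fft.fftshift(torch.fft.fft2(feat), dim=(-2, -1))
    fy = torch.fft.fftshift(torch.fft.fftfreq(h)).to(feat.device)
    fx = torch.fft.fftshift(torch.fft.fftfreq(w)).to(feat.device)
    radius = torch.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)      # radial frequency
    scale = torch.where(radius < cutoff,
                        torch.full_like(radius, low_scale),
                        torch.full_like(radius, high_scale))
    spec = spec * scale                                           # broadcast over B, C
    out = torch.fft.ifft2(torch.fft.ifftshift(spec, dim=(-2, -1)))
    return out.real

# Example: enhance a random 8-channel feature map from a U-Net skip connection.
skip = torch.randn(1, 8, 64, 64)
enhanced = fourier_enhance(skip)
```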
Affiliation(s)
- Genwei Ma
- The Academy for Multidisciplinary Studies, Capital Normal University, Beijing, People's Republic of China
- Xing Zhao
- School of Mathematical Sciences, Capital Normal University, Beijing, People's Republic of China
- Yining Zhu
- School of Mathematical Sciences, Capital Normal University, Beijing, People's Republic of China
- Ting Luo
- The Academy of Information Network Security, People's Public Security University of China, Beijing, People's Republic of China
2
Li L, Zhang Z, Li Y, Wang Y, Zhao W. DDoCT: Morphology preserved dual-domain joint optimization for fast sparse-view low-dose CT imaging. Med Image Anal 2025; 101:103420. [PMID: 39705821] [DOI: 10.1016/j.media.2024.103420]
Abstract
Computed tomography (CT) has become an increasingly valuable diagnostic technique in clinical practice. However, the radiation dose delivered during CT scanning is a public health concern. In medical diagnosis, the radiation risk to patients can be mitigated by reducing the dose through adjustments of the tube current and/or the number of projections. Nevertheless, dose reduction introduces additional noise and artifacts, which are extremely detrimental to clinical diagnosis and subsequent analysis. In recent years, the feasibility of applying deep learning methods to low-dose CT (LDCT) imaging has been demonstrated, leading to significant achievements. This article proposes a dual-domain joint optimization LDCT imaging framework (termed DDoCT) which uses noisy sparse-view projections to reconstruct high-performance CT images with joint optimization in the projection and image domains. The proposed method not only addresses the noise introduced by reducing the tube current, but also pays special attention to issues such as streak artifacts caused by a reduced number of projections, enhancing the applicability of DDoCT in practical fast LDCT imaging environments. Experimental results demonstrate that DDoCT makes significant progress in reducing noise and streak artifacts and in enhancing the contrast and clarity of the images.
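Joint optimization across the projection and image domains is typically driven by a composite objective. The sketch below is a generic illustration of such a dual-domain loss, not the authors' exact formulation; the weights lam_proj and lam_img and the L1 fidelity terms are placeholders.

```python
import torch
import torch.nn as nn

class DualDomainLoss(nn.Module):
    """Composite objective: projection-domain fidelity + image-domain fidelity."""
    def __init__(self, lam_proj: float = 1.0, lam_img: float = 1.0):
        super().__init__()
        self.lam_proj = lam_proj
        self.lam_img = lam_img
        self.fid = nn.L1Loss()

    def forward(self, sino_pred, sino_target, img_pred, img_target):
        # Penalise errors both in the restored sinogram and in the reconstructed image.
        return (self.lam_proj * self.fid(sino_pred, sino_target) +
                self.lam_img * self.fid(img_pred, img_target))
```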
Affiliation(s)
- Linxuan Li
- School of Physics, Beihang University, Beijing, China.
- Zhijie Zhang
- School of Physics, Beihang University, Beijing, China.
- Yongqing Li
- School of Physics, Beihang University, Beijing, China.
- Yanxin Wang
- School of Physics, Beihang University, Beijing, China.
- Wei Zhao
- School of Physics, Beihang University, Beijing, China; Hangzhou International Innovation Institute, Beihang University, Hangzhou, China; Tianmushan Laboratory, Hangzhou, China.
3
Han Y. Low-dose CT reconstruction using cross-domain deep learning with domain transfer module. Phys Med Biol 2025; 70:065014. [PMID: 39983305] [DOI: 10.1088/1361-6560/adb932]
Abstract
Objective. X-ray computed tomography employing a low-dose x-ray source is actively researched to reduce radiation exposure. However, the reduced photon count of low-dose x-ray sources leads to severe noise artifacts in analytic reconstruction methods such as filtered backprojection. Recently, deep learning (DL)-based approaches employing uni-domain networks, either in the image domain or the projection domain, have demonstrated remarkable effectiveness in reducing the image noise and Poisson noise caused by low-dose x-ray sources. Furthermore, dual-domain networks that integrate image-domain and projection-domain networks are being developed to surpass the performance of uni-domain networks. Despite this advancement, dual-domain networks require twice the computational resources of uni-domain networks, even though their underlying network architectures are not substantially different. Approach. The U-Net architecture, a type of hourglass network, comprises encoder and decoder modules: the encoder extracts meaningful representations from the input data, while the decoder uses these representations to reconstruct the target data. In dual-domain networks, however, encoders and decoders are used redundantly because two networks are applied sequentially, increasing the computational demand. To address this issue, this study proposes a cross-domain DL approach that leverages analytical domain transfer functions. These functions enable features extracted by an encoder trained in the input domain to be transferred to the target domain, thereby reducing redundant computation. The target data are then reconstructed using a decoder trained in the corresponding domain, optimizing resource efficiency without compromising performance. Main results. The proposed cross-domain network, comprising a projection-domain encoder and an image-domain decoder, demonstrated effective performance by leveraging the domain transfer function, achieving comparable results with only half the trainable parameters of dual-domain networks. Moreover, the proposed method outperformed conventional iterative reconstruction techniques and existing DL approaches in reconstruction quality. Significance. The proposed network leverages the transfer function to bypass redundant encoder and decoder modules, enabling direct connections between different domains. This approach not only surpasses the performance of dual-domain networks but also significantly reduces the number of required parameters. By facilitating the transfer of primal representations across domains, the method achieves synergistic effects, delivering high-quality reconstructed images at reduced radiation doses.
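A minimal sketch of the cross-domain idea, under simplifying assumptions: a small convolutional encoder extracts features in the projection domain, an analytical transform (here, channel-wise FBP via skimage's iradon) moves those features to the image domain, and a decoder finishes the reconstruction there. The tiny networks and the non-differentiable transfer below are stand-ins for illustration only; the paper's transfer module would need a differentiable implementation for end-to-end training.

```python
import numpy as np
import torch
import torch.nn as nn
from skimage.transform import iradon

class SinoEncoder(nn.Module):
    """Small projection-domain encoder (placeholder for the real network)."""
    def __init__(self, ch: int = 8):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
    def forward(self, x):                       # x: (1, 1, n_det, n_views)
        return self.net(x)

class ImageDecoder(nn.Module):
    """Small image-domain decoder (placeholder for the real network)."""
    def __init__(self, ch: int = 8):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(ch, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

def analytic_transfer(feat: torch.Tensor, theta: np.ndarray) -> torch.Tensor:
    """Apply FBP channel-wise to move projection-domain features into the image
    domain (non-differentiable here, unlike the module described in the paper)."""
    chans = feat.detach().cpu().numpy()[0]      # (C, n_det, n_views)
    imgs = np.stack([iradon(c, theta=theta, circle=True) for c in chans])
    return torch.from_numpy(imgs).float().unsqueeze(0)

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sino = torch.randn(1, 1, 145, 180)              # toy sinogram
recon = ImageDecoder()(analytic_transfer(SinoEncoder()(sino), theta))
```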
Affiliation(s)
- Yoseob Han
- Department of Electronic Engineering, Soongsil University, Seoul, Republic of Korea
4
Zhang R, Szczykutowicz TP, Toia GV. Artificial Intelligence in Computed Tomography Image Reconstruction: A Review of Recent Advances. J Comput Assist Tomogr 2025:00004728-990000000-00429. [PMID: 40008975] [DOI: 10.1097/rct.0000000000001734]
Abstract
The development of novel image reconstruction algorithms has been pivotal in enhancing image quality and reducing radiation dose in computed tomography (CT) imaging. Traditional techniques like filtered back projection perform well under ideal conditions but fail to generate high-quality images under low-dose, sparse-view, and limited-angle conditions. Iterative reconstruction methods improve upon filtered back projection by incorporating system models and assumptions about the patient, yet they can suffer from patchy image textures. The emergence of artificial intelligence (AI), particularly deep learning, has further advanced CT reconstruction. AI techniques have demonstrated great potential in reducing radiation dose while preserving image quality and noise texture. Moreover, AI has exhibited unprecedented performance in addressing challenging CT reconstruction problems, including low-dose CT, sparse-view CT, limited-angle CT, and interior tomography. This review focuses on the latest advances in AI-based CT reconstruction under these challenging conditions.
Affiliation(s)
- Ran Zhang
- Departments of Radiology and Medical Physics, University of Wisconsin, Madison, WI
5
Liu Y, Yu W, Wang P, Huang Y, Li J, Li P. Deep Learning With Ultrasound Images Enhance the Diagnosis of Nonalcoholic Fatty Liver. Ultrasound Med Biol 2024:S0301-5629(24)00291-6. [PMID: 39179453] [DOI: 10.1016/j.ultrasmedbio.2024.07.014]
Abstract
OBJECTIVE This research aimed to improve the diagnosis of non-alcoholic fatty liver disease (NAFLD) by deep learning with ultrasound images and to reduce the impact of the professional competence and personal bias of the diagnostician. METHOD Three convolutional neural network models were used to classify the ultrasound images and identify the best-performing network. Features were then extracted from the ultrasound images, and a new convolutional neural network was created based on the best network. Finally, the accuracy of several networks was compared, and the best network was evaluated using the AUC. RESULTS The VGG16, ResNet50, and Inception-v3 models were individually applied to classify 710 ultrasound images containing NAFLD, achieving accuracies of 66.2%, 58.5%, and 59.2%, respectively. To further improve the classification accuracy, two features are introduced: the ultrasound echo attenuation coefficient (θ), derived from fitting brightness values within sliding regions of interest (ROIs), and the ratio of Doppler effect (ROD), identified by analyzing spots exhibiting the Doppler effect. A multi-input deep learning framework based on the VGG16 model is then established, in which the VGG16 model processes the ultrasound image while fully connected layers handle θ and ROD. These components are combined to jointly generate predictions, demonstrating robust diagnostic capability for moderate to severe fatty liver (AUC = 0.95). Moreover, the average accuracy increased from 64.8% to 77.5%, attributable to the introduction of the two domain-knowledge-based features. CONCLUSION This research holds significant potential to aid doctors in more precise and efficient diagnosis of NAFLD from ultrasound images.
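A sketch of the multi-input design described above, under stated assumptions: a VGG16 convolutional branch processes the B-mode image while a small fully connected branch handles the two hand-crafted features (θ and ROD), and the two are concatenated before the classification head. Layer sizes are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class MultiInputNet(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        # weights=None keeps the backbone untrained; pretrained weights could be loaded in practice.
        self.backbone = vgg16(weights=None).features
        self.pool = nn.AdaptiveAvgPool2d((7, 7))
        self.img_fc = nn.Sequential(nn.Flatten(),
                                    nn.Linear(512 * 7 * 7, 256), nn.ReLU())
        self.feat_fc = nn.Sequential(nn.Linear(2, 16), nn.ReLU())   # theta, ROD
        self.head = nn.Linear(256 + 16, n_classes)

    def forward(self, image, theta_rod):
        x = self.img_fc(self.pool(self.backbone(image)))
        s = self.feat_fc(theta_rod)
        return self.head(torch.cat([x, s], dim=1))     # fuse image and scalar branches

model = MultiInputNet()
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 2))
```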
Affiliation(s)
- Yao Liu
- Department of Ultrasound, The Second Affiliated Hospital of Chongqing Medical University, Chongqing, China; Ultrasonography Department, Chongqing Emergency Medical Center, Chongqing University Central Hospital, Chongqing, China
- Wenrou Yu
- College of Physics, Chongqing University, Chongqing, China
- Peizheng Wang
- College of Physics, Chongqing University, Chongqing, China; School of Software Technology, Zhejiang University, Ningbo, China
- Yingzhou Huang
- College of Physics, Chongqing University, Chongqing, China
- Jin Li
- College of Physics, Chongqing University, Chongqing, China
- Pan Li
- Department of Ultrasound, The Second Affiliated Hospital of Chongqing Medical University, Chongqing, China.
6
Zhao P, Liu Z. Influence on Sample Determination for Deep Learning Electromagnetic Tomography. Sensors (Basel) 2024; 24:2452. [PMID: 38676069] [PMCID: PMC11054014] [DOI: 10.3390/s24082452]
Abstract
Deep learning (DL) has been widely applied to image reconstruction for electromagnetic tomography (EMT) in recent years and offers the potential to achieve higher-quality reconstructions. Within this body of work, research on the samples themselves is relatively scarce, even though samples are the cornerstone of both large and small models and are easy to overlook. In this paper, a deep learning electromagnetic tomography (DL-EMT) model with nine elements is established, and complete simulation and experimental sample sets are obtained from it. On these sample sets, reconstruction quality is observed while adjusting the size and composition of the training set. A Mann-Whitney U test shows that beyond a certain point, adding more samples to the training data fed into the deep learning network does not yield a statistically significant improvement in the quality of the reconstructed images. This paper proposes a CC-building method for optimizing a sample set, based on the Pearson correlation coefficient, aiming to establish a more effective sample base for DL-EMT image reconstruction. Statistical analysis shows that the CC-building method significantly improves image reconstruction at small and moderate sample sizes. The method is also validated by experiments.
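The abstract does not spell out the CC-building rule, so the following is only a plausible sketch of a Pearson-correlation-based selection: samples are admitted greedily as long as their correlation with every sample already selected stays below a threshold, keeping the training set small but diverse. The threshold value is an assumption.

```python
import numpy as np

def cc_build(samples: np.ndarray, max_corr: float = 0.95) -> list:
    """Greedy, correlation-based sample selection (threshold is illustrative)."""
    flat = samples.reshape(len(samples), -1)
    keep = [0]
    for i in range(1, len(flat)):
        # Pearson correlation of candidate i with every sample already kept.
        corrs = [abs(np.corrcoef(flat[i], flat[j])[0, 1]) for j in keep]
        if max(corrs) < max_corr:
            keep.append(i)
    return keep

# Example: 200 simulated EMT conductivity maps of size 32x32.
rng = np.random.default_rng(0)
maps = rng.random((200, 32, 32))
selected = cc_build(maps, max_corr=0.9)
```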
Affiliation(s)
- Ze Liu
- The School of Automation and Intelligence, Beijing Jiaotong University, Beijing 100044, China;
7
Han Y. Hierarchical decomposed dual-domain deep learning for sparse-view CT reconstruction. Phys Med Biol 2024; 69:085019. [PMID: 38457843] [DOI: 10.1088/1361-6560/ad31c7]
Abstract
Objective. X-ray computed tomography employing sparse projection views has emerged as a contemporary technique to mitigate radiation dose. However, due to the inadequate number of projection views, analytic reconstruction using filtered backprojection produces severe streaking artifacts. Recently, deep learning (DL) strategies employing image-domain networks have demonstrated remarkable performance in eliminating the streaking artifacts caused by analytic reconstruction with sparse projection views. Nevertheless, it is difficult to clarify the theoretical justification for applying DL to sparse-view computed tomography (CT) reconstruction, and it has been understood as restoration by removing image artifacts, not as reconstruction. Approach. By leveraging the theory of deep convolutional framelets (DCF) and a hierarchical decomposition of the measurement, this research reveals the constraints of conventional image- and projection-domain DL methodologies and subsequently proposes a novel dual-domain DL framework utilizing hierarchically decomposed measurements. Specifically, the research elucidates how the performance of the projection-domain network can be enhanced through the low-rank property of DCF and the bowtie support of the hierarchically decomposed measurement in the Fourier domain. Main results. This study demonstrated the performance improvement of the proposed framework based on the low-rank property, resulting in superior reconstruction performance compared with conventional analytic and DL methods. Significance. By providing a theoretically justified DL approach for sparse-view CT reconstruction, this study not only offers a superior alternative to existing methods but also opens new avenues for research in medical imaging. It highlights the potential of dual-domain DL frameworks to achieve high-quality reconstructions at lower radiation doses, thereby advancing the field towards safer and more efficient diagnostic techniques. The code is available at https://github.com/hanyoseob/HDD-DL-for-SVCT.
Affiliation(s)
- Yoseob Han
- Department of Electronic Engineering, Soongsil University, Republic of Korea
- Department of Intelligent Semiconductors, Soongsil University, Republic of Korea
8
Zhang C, Chen GH. Deep-Interior: A new pathway to interior tomographic image reconstruction via a weighted backprojection and deep learning. Med Phys 2024; 51:946-963. [PMID: 38063251] [PMCID: PMC10993302] [DOI: 10.1002/mp.16880]
Abstract
BACKGROUND In recent years, deep learning strategies have been combined with filtered backprojection or iterative methods, or used for direct projection-to-image mapping, to reconstruct images. Some of these methods can address interior reconstruction for centered regions of interest (ROIs) with fixed sizes. Developing a method that enables interior tomography for arbitrarily located ROIs of nearly arbitrary size inside a scanning field of view (FOV) remains an open question. PURPOSE To develop a new pathway that enables interior tomographic reconstruction for arbitrarily located ROIs of arbitrary size using a single trained deep neural network model. METHODS The method consists of two steps. First, an analytical weighted backprojection reconstruction algorithm was developed to perform a domain transform from divergent fan-beam projection data to an intermediate image feature space, $B(\vec{x})$, for an ROI of arbitrary size at an arbitrary location inside the FOV. Second, a supervised learning technique was developed to train a deep neural network to perform deconvolution and obtain the true image $f(\vec{x})$ from the feature space $B(\vec{x})$. This two-step method is referred to as Deep-Interior for convenience. Both numerical simulations and experimental studies were performed to validate the proposed Deep-Interior method. RESULTS The results showed that ROIs as small as 5 cm in diameter could be accurately reconstructed (similarity index 0.985 ± 0.018 on internal testing data and 0.940 ± 0.025 on external testing data) at arbitrary locations within imaging objects covering a wide variety of anatomical structures from different body parts. In addition, ROIs of arbitrary size can be reconstructed by stitching small ROIs without additional training. CONCLUSION The developed Deep-Interior framework enables interior tomographic reconstruction from divergent fan-beam projections for short-scan and super-short-scan acquisitions for small ROIs (larger than 5 cm in diameter) at arbitrary locations inside the scanning FOV with high quantitative reconstruction accuracy.
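The ROI-stitching step mentioned above can be sketched as follows: cover the target region with fixed-size ROIs, reconstruct each with the trained model, and average the overlaps. The `reconstruct_roi` callable is a hypothetical stand-in for the trained Deep-Interior network; the patch layout is illustrative.

```python
import numpy as np

def stitch_rois(reconstruct_roi, centers, roi_size, canvas_shape):
    """Assemble a large ROI from fixed-size reconstructions by averaging overlaps."""
    canvas = np.zeros(canvas_shape, dtype=float)
    hits = np.zeros(canvas_shape, dtype=float)
    half = roi_size // 2
    for cy, cx in centers:
        patch = reconstruct_roi((cy, cx))            # (roi_size, roi_size) image
        canvas[cy - half:cy + half, cx - half:cx + half] += patch
        hits[cy - half:cy + half, cx - half:cx + half] += 1.0
    return canvas / np.maximum(hits, 1e-8)           # average where patches overlap

# Example with a dummy "model" that returns a flat patch.
dummy = lambda center: np.ones((64, 64))
large_roi = stitch_rois(dummy, [(64, 64), (64, 112), (112, 64), (112, 112)], 64, (192, 192))
```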
Affiliation(s)
- Chengzhu Zhang
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Guang-Hong Chen
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
9
Li G, Huang X, Huang X, Zong Y, Luo S. PIDNET: Polar Transformation Based Implicit Disentanglement Network for Truncation Artifacts. Entropy (Basel) 2024; 26:101. [PMID: 38392356] [PMCID: PMC10887623] [DOI: 10.3390/e26020101]
Abstract
The interior problem, a persistent ill-posed challenge in CT imaging, gives rise to truncation artifacts capable of distorting CT values, thereby significantly impacting clinical diagnoses. Traditional methods long struggled to solve this issue effectively until the advent of supervised models built on deep neural networks. However, supervised models are constrained by the need for paired data, limiting their practical application. Therefore, we propose a simple and efficient unsupervised method based on the Cycle-GAN framework. Introducing an implicit disentanglement strategy, we aim to separate truncation artifacts from content information. The separated artifact features serve as complementary constraints and as a source for generating simulated paired data to enhance the training of the sub-network dedicated to removing truncation artifacts. Additionally, we incorporate a polar transformation and an innovative constraint tailored specifically to truncation artifact features, further contributing to the effectiveness of our approach. Experiments conducted on multiple datasets demonstrate that our unsupervised network significantly outperforms the traditional Cycle-GAN model. Compared with state-of-the-art supervised models trained on paired datasets, our model achieves comparable visual results and closely aligns with quantitative evaluation metrics.
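Polar resampling is relevant here because truncation artifacts in the interior problem are roughly radially structured, so in polar coordinates they become band-like and easier to separate from content. A minimal Cartesian-to-polar resampler, independent of the paper's implementation, might look like this:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(img: np.ndarray, n_r: int = 256, n_theta: int = 360) -> np.ndarray:
    """Resample an image onto an (angle, radius) grid centred on the image."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    radii = np.linspace(0.0, min(cy, cx), n_r)
    angles = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, aa = np.meshgrid(radii, angles)                 # both (n_theta, n_r)
    rows = cy + rr * np.sin(aa)
    cols = cx + rr * np.cos(aa)
    return map_coordinates(img, [rows, cols], order=1)  # bilinear interpolation

polar = to_polar(np.random.rand(256, 256))              # shape (360, 256)
```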
Affiliation(s)
- Guang Li
- School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
- Xinhai Huang
- School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
- Xinyu Huang
- School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
- Yuan Zong
- School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
- Shouhua Luo
- School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
10
Wang Z, Cui J, Bian X, Tang R, Li Z, Li S, Lin L, Wang S. Single-slice rebinning reconstruction method for segmented helical computed tomography. Opt Express 2023; 31:30514-30528. [PMID: 37710592] [DOI: 10.1364/oe.502160]
Abstract
Recently, to easily extend the helical field-of-view (FOV), the segmented helical computed tomography (SHCT) method was proposed, together with the corresponding generalized backprojection filtration (G-BPF) type algorithm. Analogous to the geometric relationship between helical and circular CT, SHCT reduces to full-scan multiple source-translation CT (F-mSTCT) when the pitch is zero and the number of scan cycles is one. The strategy of G-BPF follows the idea of the generalized Feldkamp approximate cone-beam algorithm for helical CT, i.e., using the F-mSTCT cone-beam BPF algorithm to approximately perform the SHCT reconstruction. Image quality is limited by the pitch size, meaning that satisfactory quality can only be obtained at small pitches. To extend analytical reconstruction for SHCT, an effective single-slice rebinning (SSRB) method for SHCT is investigated here. It transforms the SHCT cone-beam reconstruction into a virtual F-mSTCT fan-beam stack reconstruction task with low computational complexity, and several techniques are developed to address the challenges involved. Using the basic BPF reconstruction with differentiation along the detector (D-BPF), our experiments demonstrate that SSRB has fewer interlayer artifacts, higher z-resolution, more uniform in-plane resolution, and higher reconstruction efficiency than G-BPF. SSRB could promote the effective application of deep learning in SHCT reconstruction.
11
Ni S, Yu H, Chen J, Liu C, Liu F. Hybrid source translation scanning mode for interior tomography. Opt Express 2023; 31:13342-13356. [PMID: 37157473] [DOI: 10.1364/oe.483741]
Abstract
Interior tomography is a promising technique that can be used to image large objects with high acquisition efficiency. However, it suffers from truncation artifacts and biased attenuation values due to contributions from the parts of the object outside the ROI, which compromises its ability to support quantitative evaluation in material or biological studies. In this paper, we present a hybrid source-translation scanning mode for interior tomography, called hySTCT, in which the projections inside the ROI are finely sampled and those outside the ROI are coarsely sampled to mitigate truncation artifacts and value bias within the ROI. Inspired by our previous virtual projection-based filtered backprojection (V-FBP) algorithm, we develop two reconstruction methods, interpolation V-FBP (iV-FBP) and two-step V-FBP (tV-FBP), based on the linearity of the inverse Radon transform for hySTCT reconstruction. The experiments demonstrate that the proposed strategy can effectively suppress truncation artifacts and improve the reconstruction accuracy within the ROI.
12
Han Y, Wu D, Kim K, Li Q. End-to-end deep learning for interior tomography with low-dose x-ray CT. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac6560]
Abstract
Objective. Several x-ray computed tomography (CT) scanning strategies are used to reduce radiation dose, such as (1) sparse-view CT, (2) low-dose CT and (3) region-of-interest (ROI) CT (called interior tomography). To further reduce the dose, sparse-view and/or low-dose CT settings can be applied together with interior tomography. Interior tomography has various advantages in terms of reducing the number of detectors and decreasing the x-ray radiation dose. However, a large patient or a small field-of-view (FOV) detector can cause truncated projections, and the reconstructed images then suffer from severe cupping artifacts. In addition, although low-dose CT can reduce the radiation exposure, analytic reconstruction algorithms produce image noise. Recently, many researchers have utilized image-domain deep learning (DL) approaches to remove each artifact separately and have demonstrated impressive performance, and the theory of deep convolutional framelets explains the reason for the performance improvement. Approach. In this paper, we found that it is difficult to solve the coupled artifacts using an image-domain convolutional neural network (CNN) based on deep convolutional framelets. Significance. To address the coupled problem, we decouple it into two sub-problems: (i) image-domain noise reduction inside the truncated projection, to solve the low-dose CT problem, and (ii) extrapolation of the projection outside the truncated projection, to solve the ROI CT problem. The decoupled sub-problems are solved directly with a novel end-to-end learning method using dual-domain CNNs. Main results. We demonstrate that the proposed method outperforms conventional image-domain DL methods, and that a projection-domain CNN shows better performance than the image-domain CNNs commonly used by many researchers.
13
Coussat A, Rit S, Clackdoyle R, Defrise M, Desbat L, Letang JM. Region-of-Interest CT Reconstruction Using Object Extent and Singular Value Decomposition. IEEE Trans Radiat Plasma Med Sci 2022. [DOI: 10.1109/trpms.2021.3091288]
Affiliation(s)
- Aurelien Coussat
- Université de Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, Lyon, France
- Simon Rit
- Université de Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, Lyon, France
- Rolf Clackdoyle
- TIMC-IMAG Laboratory (CNRS UMR 5525), Université Grenoble Alpes, Grenoble, France
- Michel Defrise
- Department of Nuclear Medicine, Vrije Universiteit Brussel, Brussels, Belgium
- Laurent Desbat
- TIMC-IMAG Laboratory (CNRS UMR 5525), Université Grenoble Alpes, Grenoble, France
- Jean Michel Letang
- Université de Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, Lyon, France
14
Tan C, Yu H, Xi Y, Li L, Liao M, Liu F, Duan L. Multi source translation based projection completion for interior region of interest tomography with CBCT. Opt Express 2022; 30:2963-2980. [PMID: 35209426] [DOI: 10.1364/oe.442287]
Abstract
Interior tomography with rotary computed tomography (RCT) is an effective way to improve detection efficiency and achieve high-resolution imaging of a region of interest (ROI) within a large-scale object. However, because only the x-rays passing through the ROI are received by the detector, the projection data are inevitably truncated, resulting in truncation artifacts in the reconstructed image. When the ROI lies entirely within the object, the solution is not unique; this is known as the interior problem. Fortunately, projection completion (PC) is an effective technique for solving the interior problem. In this study, we propose a multi-source-translation CT based PC method (mSTCT-PC) to address the interior problem. First, mSTCT-PC employs multi-source translation to sparsely acquire a global projection covering the whole object. Second, the sparse global projection is used to fill in the truncated projection of the ROI. The global and truncated projections are obtained under the same geometric parameters, so no projection registration is required. To verify the feasibility of the method, simulation and practical experiments were carried out. Compared with ROI results reconstructed by filtered back-projection (FBP), the simultaneous iterative reconstruction technique with total variation (SIRT-TV), and a multi-resolution based method (mR-PC), the proposed mSTCT-PC is better at mitigating truncation artifacts, preserving details, and improving the accuracy of ROI images.
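The completion step can be sketched as follows: missing detector bins of the finely sampled ROI sinogram are filled from the coarsely sampled global sinogram, interpolated to the fine view grid. This is a simplified illustration under the assumption of a shared geometry with matching detector bins; the actual mSTCT source-translation geometry is more involved.

```python
import numpy as np

def complete_sinogram(roi_sino, global_sino, roi_cols):
    """Fill the truncated detector bins of a finely sampled ROI sinogram using a
    coarsely sampled full-object sinogram acquired in the same geometry.

    roi_sino    : (n_views, n_det) array, valid only where roi_cols is True
    global_sino : (n_views_coarse, n_det) sparsely sampled global sinogram
    roi_cols    : boolean mask over the n_det detector columns covered by the ROI
    """
    n_views, n_det = roi_sino.shape
    coarse = np.linspace(0.0, n_views - 1.0, global_sino.shape[0])
    fine = np.arange(n_views, dtype=float)
    # View-wise linear interpolation of the coarse global data onto the fine grid.
    global_fine = np.column_stack([np.interp(fine, coarse, global_sino[:, d])
                                   for d in range(n_det)])
    completed = global_fine
    completed[:, roi_cols] = roi_sino[:, roi_cols]   # keep the measured ROI data
    return completed
```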
15
Huang Y, Preuhs A, Manhart M, Lauritsch G, Maier A. Data Extrapolation From Learned Prior Images for Truncation Correction in Computed Tomography. IEEE Trans Med Imaging 2021; 40:3042-3053. [PMID: 33844627] [DOI: 10.1109/tmi.2021.3072568]
Abstract
Data truncation is a common problem in computed tomography (CT). Truncation causes cupping artifacts inside the field-of-view (FOV) and missing anatomical structures outside the FOV. Deep learning has achieved impressive results in CT reconstruction from limited data, but its robustness is still a concern for clinical applications. Although the image quality of learning-based compensation schemes may be inadequate for clinical diagnosis, they can provide prior information for more accurate extrapolation than conventional heuristic extrapolation methods. With the extrapolated projections, a conventional image reconstruction algorithm can be applied to obtain the final reconstruction. In this work, a general plug-and-play (PnP) method for truncation correction is proposed based on this idea, into which various deep learning methods and conventional reconstruction algorithms can be plugged. Such a PnP method integrates data consistency for the measured data with learned prior image information for the truncated data, and is shown to have better robustness and interpretability than deep learning alone. To demonstrate the efficacy of the proposed PnP method, two state-of-the-art deep learning methods, FBPConvNet and Pix2pixGAN, are investigated for truncation correction in cone-beam CT in noise-free and noisy cases. Their robustness is evaluated by showing false-negative and false-positive lesion cases. With the proposed PnP method, false lesion structures are corrected for both deep learning methods. For FBPConvNet, the root-mean-square error (RMSE) inside the FOV is improved from 92 HU to around 30 HU by PnP in the noisy case. On its own, Pix2pixGAN generally achieves better image quality than FBPConvNet for truncation correction; PnP further improves its RMSE inside the FOV from 42 HU to around 27 HU. The efficacy of PnP is also demonstrated on real clinical head data.
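The core plug-and-play idea can be sketched in a few lines: keep the measured projections inside the detector FOV, extrapolate the truncated bins from the forward projection of a learned prior image, and reconstruct conventionally. The sketch below uses a 2-D parallel-beam geometry via scikit-image instead of cone-beam CT, assumes the measured sinogram is padded to the same detector size as the prior's forward projection, and omits the smooth feathering a practical implementation would use at the FOV border.

```python
import numpy as np
from skimage.transform import radon, iradon

def pnp_truncation_correction(measured_sino, prior_image, theta, fov_rows):
    """measured_sino: (n_det, n_views), valid only on the detector rows in fov_rows;
    prior_image: image predicted by a learned compensation network (e.g. FBPConvNet)."""
    prior_sino = radon(prior_image, theta=theta, circle=False)   # extrapolation source
    merged = prior_sino.copy()
    merged[fov_rows, :] = measured_sino[fov_rows, :]             # data consistency in FOV
    return iradon(merged, theta=theta, circle=False)             # conventional FBP
```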
16
Ketola JHJ, Heino H, Juntunen MAK, Nieminen MT, Siltanen S, Inkinen SI. Generative adversarial networks improve interior computed tomography angiography reconstruction. Biomed Phys Eng Express 2021; 7. [PMID: 34673559] [DOI: 10.1088/2057-1976/ac31cb]
Abstract
In interior computed tomography (CT), the x-ray beam is collimated to a limited field-of-view (FOV) (e.g. the volume of the heart) to decrease exposure to adjacent organs, but the resulting image has a severe truncation artifact when reconstructed with traditional filtered back-projection (FBP) type algorithms. In some examinations, such as cardiac or dentomaxillofacial imaging, interior CT could be used to achieve further dose reductions. In this work, we describe a deep learning (DL) method to obtain artifact-free images from interior CT angiography. Our method employs the Pix2Pix generative adversarial network (GAN) in a two-stage process: (1) an extended sinogram is computed from a truncated sinogram with one GAN model, and (2) the FBP reconstruction obtained from that extended sinogram is used as input to another GAN model that improves the quality of the interior reconstruction. Our double GAN (DGAN) model was trained with 10 000 truncated sinograms simulated from real computed tomography angiography slice images. Truncated sinograms (input) were used with original slice images (target) in training to yield an improved reconstruction (output). DGAN performance was compared with the adaptive de-truncation method, total variation regularization, and two reference DL methods: FBPConvNet and U-Net-based sinogram extension (ES-UNet). Our DGAN method and ES-UNet yielded the best root-mean-squared error (RMSE) (0.03 ± 0.01) and structural similarity index (SSIM) (0.92 ± 0.02) values, and the reference DL methods also yielded good results. Furthermore, we performed an extended FOV analysis by increasing the reconstruction area by 10% and 20%. In both cases, the DGAN approach yielded the best results in RMSE (0.03 ± 0.01 and 0.04 ± 0.01 for the 10% and 20% cases, respectively), peak signal-to-noise ratio (PSNR) (30.5 ± 2.6 dB and 28.6 ± 2.6 dB), and SSIM (0.90 ± 0.02 and 0.87 ± 0.02). In conclusion, our method was able not only to reconstruct the interior region with improved image quality, but also to extend the reconstructed FOV by 20%.
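The two-stage flow reduces to a short pipeline. In the sketch below the two Pix2Pix generators are represented by hypothetical callables, and a parallel-beam FBP from scikit-image stands in for the reconstruction step; it only illustrates the ordering of the stages, not the trained models themselves.

```python
import numpy as np
from skimage.transform import iradon

def dgan_reconstruct(trunc_sino, extend_gan, refine_gan, theta):
    """Stage 1: extend the truncated sinogram; stage 2: refine the FBP image.
    `extend_gan` and `refine_gan` stand in for the trained generator models."""
    extended = extend_gan(trunc_sino)                     # sinogram extension
    fbp_image = iradon(extended, theta=theta, circle=False)
    return refine_gan(fbp_image)                          # image-domain refinement

# Usage with identity stand-ins for the two generators:
theta = np.linspace(0.0, 180.0, 360, endpoint=False)
sino = np.random.rand(145, 360)
recon = dgan_reconstruct(sino, lambda s: s, lambda i: i, theta)
```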
Affiliation(s)
- Juuso H J Ketola
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, FI-90014, Finland; The South Savo Social and Health Care Authority, Mikkeli Central Hospital, FI-50100, Finland
- Helinä Heino
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, FI-90014, Finland
- Mikael A K Juntunen
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, FI-90014, Finland; Department of Diagnostic Radiology, Oulu University Hospital, FI-90029, Finland
- Miika T Nieminen
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, FI-90014, Finland; Department of Diagnostic Radiology, Oulu University Hospital, FI-90029, Finland; Medical Research Center Oulu, University of Oulu and Oulu University Hospital, FI-90014, Finland
- Samuli Siltanen
- Department of Mathematics and Statistics, University of Helsinki, Helsinki, FI-00014, Finland
- Satu I Inkinen
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, FI-90014, Finland
17
Wu W, Hu D, Niu C, Broeke LV, Butler APH, Cao P, Atlas J, Chernoglazov A, Vardhanabhuti V, Wang G. Deep learning based spectral CT imaging. Neural Netw 2021; 144:342-358. [PMID: 34560584] [DOI: 10.1016/j.neunet.2021.08.026]
Abstract
Spectral computed tomography (CT) has attracted much attention for radiation dose reduction, metal artifact removal, tissue quantification, and material discrimination. The x-ray energy spectrum is divided into several bins, and each energy-bin-specific projection has a lower signal-to-noise ratio (SNR) than its current-integrating counterpart, which makes image reconstruction a unique challenge. The traditional wisdom is to use prior-knowledge-based iterative methods, but such methods demand a great computational cost. Inspired by deep learning, we first develop a deep learning based reconstruction method, i.e., U-Net with $L_p^p$-norm, Total variation, Residual learning, and Anisotropic adaption (ULTRA). Specifically, we emphasize multi-scale feature fusion and multichannel filtering enhancement with a densely connected encoding architecture for residual learning and feature fusion. To address the image blurring associated with the $L_2^2$ loss, we propose a general $L_p^p$ loss, $p>0$. Furthermore, because the images from different energy bins share similar structures of the same object, a regularization term characterizing the correlations among energy bins is incorporated into the $L_p^p$ loss function, which helps unify deep learning based methods with traditional compressed sensing based methods. Finally, an anisotropically weighted total variation is employed to characterize sparsity in the spatial-spectral domain and regularize the proposed network. We validate the ULTRA network on three large-scale spectral CT datasets and obtain excellent results relative to competing algorithms. In conclusion, our quantitative and qualitative results in numerical simulation and preclinical experiments demonstrate that the proposed approach is accurate, efficient, and robust for high-quality spectral CT image reconstruction.
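The generalised $L_p^p$ fidelity term is simple to write down; the sketch below is a generic version with an illustrative exponent and a small epsilon to keep gradients finite when $p<1$, not the full ULTRA loss with its inter-bin correlation and anisotropic total-variation terms.

```python
import torch

def lpp_loss(pred: torch.Tensor, target: torch.Tensor,
             p: float = 1.5, eps: float = 1e-8) -> torch.Tensor:
    """Generalised L_p^p fidelity: mean of |pred - target|^p with p > 0.
    eps keeps the gradient finite near zero when p < 1."""
    return torch.mean((torch.abs(pred - target) + eps) ** p)
```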
Affiliation(s)
- Weiwen Wu
- Department of Diagnostic Radiology, Queen Mary Hospital, University of Hong Kong, Hong Kong, People's Republic of China; Biomedical Imaging Center, Center for Biotechnology and Interdisciplinary Studies, Department of Biomedical Engineering, School of Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Dianlin Hu
- The Laboratory of Image Science and Technology, Southeast University, Nanjing, People's Republic of China
- Chuang Niu
- Biomedical Imaging Center, Center for Biotechnology and Interdisciplinary Studies, Department of Biomedical Engineering, School of Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Lieza Vanden Broeke
- Department of Diagnostic Radiology, Queen Mary Hospital, University of Hong Kong, Hong Kong, People's Republic of China
- Peng Cao
- Department of Diagnostic Radiology, Queen Mary Hospital, University of Hong Kong, Hong Kong, People's Republic of China
- James Atlas
- Department of Radiology, University of Otago, Christchurch, New Zealand
- Varut Vardhanabhuti
- Department of Diagnostic Radiology, Queen Mary Hospital, University of Hong Kong, Hong Kong, People's Republic of China.
- Ge Wang
- Biomedical Imaging Center, Center for Biotechnology and Interdisciplinary Studies, Department of Biomedical Engineering, School of Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
18
Fonseca GP, Baer-Beck M, Fournie E, Hofmann C, Rinaldi I, Ollers MC, van Elmpt WJC, Verhaegen F. Evaluation of novel AI-based extended field-of-view CT reconstructions. Med Phys 2021; 48:3583-3594. [PMID: 33978240] [PMCID: PMC8362147] [DOI: 10.1002/mp.14937]
Abstract
Purpose: Modern computed tomography (CT) scanners have an extended field-of-view (eFoV) for reconstructing images up to the bore size, which is relevant for patients with higher BMI or non-isocentric positioning due to fixation devices. However, the accuracy of image reconstruction in the eFoV is not well known, since truncated data are used. This study introduces a new deep learning based algorithm for extended field-of-view reconstruction and evaluates the accuracy of the eFoV reconstruction, focusing on aspects relevant for radiotherapy. Methods: A life-size three-dimensional (3D) printed thorax phantom, based on a patient CT for which the eFoV was necessary, was manufactured and used as reference. The phantom has holes allowing the placement of tissue-mimicking inserts used to evaluate the Hounsfield unit (HU) accuracy. CT images of the phantom were acquired using different configurations to evaluate geometric and HU accuracy in the eFoV. Image reconstruction was performed using a commercially available state-of-the-art reconstruction algorithm (HDFoV) and the novel deep learning based approach (HDeepFoV). Five patient cases were selected to evaluate the performance of both algorithms on patient data. Since there is no ground truth for patients, the reconstructions were qualitatively evaluated by five physicians and five medical physicists. Results: The phantom geometry reconstructed with HDFoV showed boundary deviations from 1.0 to 2.5 cm, depending on the volume of the phantom outside the regular scan field of view. HDeepFoV showed superior performance regardless of the phantom volume within the eFoV, with a maximum boundary deviation below 1.0 cm. The maximum absolute HU difference for soft-tissue inserts is below 79 HU for HDFoV and 41 HU for HDeepFoV. HDeepFoV has a maximum deviation of -18 HU for an inhaled-lung insert, while HDFoV reaches a 229 HU difference. The qualitative evaluation of patient cases shows that the novel deep learning approach produces images that look more realistic and have fewer artifacts. Conclusion: To reconstruct images outside the sFoV of the CT scanner there is no alternative but to use some kind of extrapolated data. In this study, we proposed and investigated a new deep learning based algorithm and compared it with a commercial solution for eFoV reconstruction. The deep learning based algorithm showed superior performance in quantitative evaluations based on phantom data and in qualitative assessments of patient data.
Affiliation(s)
- Gabriel Paiva Fonseca
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Ilaria Rinaldi
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Michel C Ollers
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Wouter J C van Elmpt
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Frank Verhaegen
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
20
Han Y, Kim J, Ye JC. Differentiated Backprojection Domain Deep Learning for Conebeam Artifact Removal. IEEE Trans Med Imaging 2020; 39:3571-3582. [PMID: 32746105] [DOI: 10.1109/tmi.2020.3000341]
Abstract
Cone-beam CT using a circular trajectory is quite often used for various applications due to its relatively simple geometry. For cone-beam geometry, the Feldkamp, Davis and Kress algorithm is regarded as the standard reconstruction method, but this algorithm suffers from so-called cone-beam artifacts as the cone angle increases. Various model-based iterative reconstruction methods have been developed to reduce these artifacts, but they usually require multiple applications of computationally expensive forward and backprojections. In this paper, we develop a novel deep learning approach for accurate cone-beam artifact removal. In particular, our deep network, designed in the differentiated backprojection domain, performs a data-driven inversion of the ill-posed deconvolution problem associated with the Hilbert transform. The reconstruction results along the coronal and sagittal directions are then combined using a spectral blending technique to minimize spectral leakage. Experimental results under various conditions confirm that our method generalizes well and outperforms existing iterative methods despite a significantly reduced runtime complexity.
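The final spectral-blending step combines two reconstructions in the Fourier domain with complementary, direction-dependent weights. The sketch below is a simplified 2-D analogue (the paper blends coronal- and sagittal-direction results in 3-D); the cosine-squared weighting is an illustrative choice, not the authors' exact window.

```python
import numpy as np

def spectral_blend(recon_a: np.ndarray, recon_b: np.ndarray) -> np.ndarray:
    """Blend two reconstructions with complementary frequency-domain weights."""
    fa, fb = np.fft.fft2(recon_a), np.fft.fft2(recon_b)
    h, w = recon_a.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    ang = np.arctan2(np.abs(fy), np.abs(fx) + 1e-12)    # 0 (horizontal) .. pi/2 (vertical)
    w_a = np.cos(ang) ** 2        # favour recon_a where horizontal frequencies dominate
    blended = w_a * fa + (1.0 - w_a) * fb
    return np.real(np.fft.ifft2(blended))
```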
21
Ravishankar S, Ye JC, Fessler JA. Image Reconstruction: From Sparsity to Data-adaptive Methods and Machine Learning. Proc IEEE 2020; 108:86-109. [PMID: 32095024] [PMCID: PMC7039447] [DOI: 10.1109/jproc.2019.2936204]
Abstract
The field of medical image reconstruction has seen roughly four types of methods. The first type tended to be analytical methods, such as filtered back-projection (FBP) for X-ray computed tomography (CT) and the inverse Fourier transform for magnetic resonance imaging (MRI), based on simple mathematical models for the imaging systems. These methods are typically fast, but have suboptimal properties such as poor resolution-noise trade-off for CT. A second type is iterative reconstruction methods based on more complete models for the imaging system physics and, where appropriate, models for the sensor statistics. These iterative methods improved image quality by reducing noise and artifacts. The FDA-approved methods among these have been based on relatively simple regularization models. A third type of methods has been designed to accommodate modified data acquisition methods, such as reduced sampling in MRI and CT to reduce scan time or radiation dose. These methods typically involve mathematical image models involving assumptions such as sparsity or low-rank. A fourth type of methods replaces mathematically designed models of signals and systems with data-driven or adaptive models inspired by the field of machine learning. This paper focuses on the two most recent trends in medical image reconstruction: methods based on sparsity or low-rank models, and data-driven methods based on machine learning techniques.
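As a concrete instance of the sparsity-regularized class surveyed here, the classic ISTA iteration minimizes $\tfrac{1}{2}\|Ax-b\|_2^2+\lambda\|x\|_1$ by alternating a gradient step with soft-thresholding. The dense-matrix sketch below is a toy illustration, not any specific method from the review.

```python
import numpy as np

def soft_threshold(x: np.ndarray, tau: float) -> np.ndarray:
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A: np.ndarray, b: np.ndarray, lam: float = 0.1, n_iter: int = 200) -> np.ndarray:
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of A^T A
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
    return x

# Toy example: recover a sparse vector from a few random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 128))
x_true = np.zeros(128); x_true[[5, 40, 99]] = [1.0, -2.0, 0.5]
x_hat = ista(A, A @ x_true, lam=0.05)
```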
Affiliation(s)
- Saiprasad Ravishankar
- Departments of Computational Mathematics, Science and Engineering, and Biomedical Engineering at Michigan State University, East Lansing, MI, 48824 USA
- Jong Chul Ye
- Department of Bio and Brain Engineering and Department of Mathematical Sciences at the Korea Advanced Institute of Science & Technology (KAIST), Daejeon, South Korea
- Jeffrey A Fessler
- Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, 48109 USA