1
Li Y, Zhang C, Huang T, Fan Y, Ning G, Liao H. Computational multi-angle optical coherence tomography using implicit neural representation. Optics & Laser Technology 2025; 184:112551. [DOI: 10.1016/j.optlastec.2025.112551]
2
Takano S, Tomita N, Takaoka T, Ukai M, Matsuura A, Oguri M, Kita N, Torii A, Niwa M, Okazaki D, Yasui T, Hiwatashi A. Risk Estimation of Late Rectal Toxicity Using a Convolutional Neural Network-based Dose Prediction in Prostate Cancer Radiation Therapy. Adv Radiat Oncol 2025; 10:101739. [PMID: 40161541] [PMCID: PMC11950957] [DOI: 10.1016/j.adro.2025.101739]
Abstract
Purpose The present study investigated the feasibility of our automatic plan generation model based on a convolutional neural network (CNN) to estimate the baseline risk of grade ≥2 late rectal bleeding (G2-LRB) in volumetric modulated arc therapy for prostate cancer. Methods and Materials We built a 2-dimensional U-Net model to predict dose distributions using the planning computed tomography and organ-at-risk masks as inputs. Seventy-five volumetric modulated arc therapy plans of prostate cancer, which were delivered at 74.8 Gy in 34 fractions with a uniform planning goal, were included: 60 for training and 5-fold cross-validation, and the remaining 15 for testing. Isodose volume dice similarity coefficient, dose-volume histogram, and normal tissue complication probability (NTCP) metrics between planned and CNN-predicted dose distributions were calculated. The primary endpoint was the goodness-of-fit, expressed as a coefficient of determination (R²) value, in predicting the Lyman-Kutcher-Burman NTCP of G2-LRB. Results In the 15 test cases, the 2-dimensional U-Net predicted dose distributions with a mean isodose volume dice similarity coefficient of 0.90 within the high-dose region (doses ≥ 50 Gy). Rectum V50Gy, V60Gy, and V70Gy were accurately predicted (R² = 0.73, 0.82, and 0.87, respectively). Strong correlations were observed between the planned and predicted Lyman-Kutcher-Burman NTCP of G2-LRB (R² = 0.80, P < .001), with a small percent mean absolute error (mean ± 1 standard deviation, 1.24% ± 1.42%). Conclusions Risk estimation of LRB using CNN-based automatic plan generation from anatomic information was feasible. These results will contribute to the development of a decision support system that identifies priority cases for preradiation therapy interventions, such as hydrogel spacer implantation.
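For readers unfamiliar with the endpoint above, the minimal sketch below shows how a Lyman-Kutcher-Burman NTCP value can be computed from a differential dose-volume histogram. It is not the authors' code, and the default parameters (n, m, TD50) and the toy DVH are illustrative placeholders rather than the values used in the study.

```python
# Illustrative LKB-NTCP calculation from a differential rectal DVH (not the authors' code).
# n, m, td50 below are generic placeholders; published LKB parameters for grade >=2
# late rectal bleeding should be substituted when reproducing the paper's endpoint.
import numpy as np
from scipy.special import erf

def lkb_ntcp(dose_bins_gy, rel_volumes, n=0.09, m=0.13, td50=76.9):
    """Lyman-Kutcher-Burman NTCP from a differential DVH.

    dose_bins_gy : dose value of each DVH bin (Gy)
    rel_volumes  : fractional organ volume in each bin (sums to 1)
    n, m, td50   : LKB volume-effect, slope, and 50%-complication-dose parameters
    """
    geud = np.sum(rel_volumes * dose_bins_gy ** (1.0 / n)) ** n   # generalized EUD
    t = (geud - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / np.sqrt(2.0)))                    # standard normal CDF

# toy differential DVH: 70% of the rectum near 20 Gy, 20% near 50 Gy, 10% near 72 Gy
print(lkb_ntcp(np.array([20.0, 50.0, 72.0]), np.array([0.7, 0.2, 0.1])))
```

Comparing this value computed from a planned DVH with the value computed from a CNN-predicted DVH gives the kind of planned-versus-predicted NTCP correlation reported in the Results.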
Affiliation(s)
- Seiya Takano, Natsuo Tomita, Taiki Takaoka, Machiko Ukai, Akane Matsuura, Masanosuke Oguri, Nozomi Kita, Akira Torii, Masanari Niwa, Dai Okazaki
- Department of Radiology, Nagoya City University Graduate School of Medical Sciences, 1 Kawasumi, Mizuho-cho, Mizuho-ku, Nagoya, Aichi, Japan
- Takahiro Yasui
- Department of Urology, Nagoya City University Graduate School of Medical Sciences, 1 Kawasumi, Mizuho-cho, Mizuho-ku, Nagoya, Aichi, Japan
- Akio Hiwatashi
- Department of Radiology, Nagoya City University Graduate School of Medical Sciences, 1 Kawasumi, Mizuho-cho, Mizuho-ku, Nagoya, Aichi, Japan
3
Lv L, Li C, Wei W, Sun S, Ren X, Pan X, Li G. Optimization of sparse-view CT reconstruction based on convolutional neural network. Med Phys 2025; 52:2089-2105. [PMID: 39894762] [DOI: 10.1002/mp.17636]
Abstract
BACKGROUND Sparse-view CT shortens scan time and reduces radiation dose but results in severe streak artifacts due to insufficient sampling data. Deep learning methods can now suppress these artifacts and improve image quality in sparse-view CT reconstruction. PURPOSE The quality of sparse-view CT reconstructed images still leaves room for improvement. Additionally, the interpretability of deep learning-based optimization methods for these reconstructed images is lacking, and the role of different network layers in artifact removal requires further study. Moreover, the optimization capability of these methods for reconstructions from various sparse views needs enhancement. This study aims to improve the network's optimization ability for sparse-view reconstructed images, enhance interpretability, and boost generalization by establishing multiple network structures and datasets. METHODS In this paper, we developed a sparse-view CT reconstructed-image improvement network (SRII-Net) based on U-Net. We added a copy pathway in the network and designed a residual image output block to boost the network's performance. Multiple networks with different connectivity structures were established using SRII-Net to analyze the contribution of each layer to artifact removal, improving the network's interpretability. Additionally, we created multiple datasets with reconstructed images of various sampling views to train and test the proposed network, investigating how these datasets from different sampling views affect the network's generalization ability. RESULTS The results show that the proposed method outperforms current networks, with significant improvements in metrics such as PSNR and SSIM. Image optimization time is at the millisecond level. By comparing the performance of different network structures, we have identified the impact of various hierarchical structures. The image detail information learned by shallow layers and the high-level abstract feature information learned by deep layers play a crucial role in optimizing sparse-view CT reconstructed images. Training the network with multiple mixed datasets revealed that, under a certain amount of data, selecting the appropriate categories of sampling views and their corresponding samples can effectively enhance the network's optimization ability for reconstructed images with different sampling views. CONCLUSIONS The network in this paper effectively suppresses artifacts in reconstructed images with different sparse views, improving generalization. We have also created diverse network structures and datasets to deepen the understanding of artifact removal in deep learning networks, offering insights for noise reduction and image enhancement in other imaging methods.
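The residual image output block mentioned above follows the common pattern of predicting the artifact image and subtracting it from the input; a minimal, hypothetical sketch of that pattern (not the SRII-Net architecture itself, and with illustrative layer sizes) might look like this:

```python
# Minimal sketch of residual ("artifact image") learning for sparse-view CT
# post-processing. This is not the authors' architecture.
import torch
import torch.nn as nn

class ResidualArtifactCNN(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1),           # predicted artifact (residual) image
        )

    def forward(self, x):
        artifact = self.body(x)
        return x - artifact                            # clean estimate = input - artifact

net = ResidualArtifactCNN()
sparse_view_recon = torch.randn(1, 1, 256, 256)        # e.g., FBP image from few views
clean_estimate = net(sparse_view_recon)
print(clean_estimate.shape)                            # torch.Size([1, 1, 256, 256])
```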
Affiliation(s)
- Liangliang Lv, Chang Li, Wenjing Wei, Shuyi Sun, Xiaoxuan Ren, Xiaodong Pan
- School of Nuclear Science and Technology, Lanzhou University, Lanzhou, China
- Gongping Li
- School of Nuclear Science and Technology, Lanzhou University, Lanzhou, China
- Key Laboratory of Special Functional Materials and Structural Design, Ministry of Education, Lanzhou University, Lanzhou, China
4
Loli Piccolomini E, Evangelista D, Morotti E. Deep Guess acceleration for explainable image reconstruction in sparse-view CT. Comput Med Imaging Graph 2025; 123:102530. [PMID: 40154011] [DOI: 10.1016/j.compmedimag.2025.102530]
Abstract
Sparse-view Computed Tomography (CT) is an emerging protocol designed to reduce X-ray radiation dose in medical imaging. Reconstructions based on the traditional Filtered Back Projection algorithm suffer from severe artifacts due to sparse data. In contrast, Model-Based Iterative Reconstruction (MBIR) algorithms, though better at mitigating noise through regularization, are too computationally costly for clinical use. This paper introduces a novel technique, denoted as the Deep Guess acceleration scheme, that uses a trained neural network both to accelerate the regularized MBIR and to enhance the reconstruction accuracy. We integrate state-of-the-art deep learning tools to initialize a clever starting guess for a proximal algorithm solving a non-convex model, thus computing a (mathematically) interpretable solution image in a few iterations. Experimental results on real and synthetic CT images demonstrate the effectiveness of Deep Guess in (very) sparse tomographic protocols, where it outperforms its purely variational counterpart and many state-of-the-art data-driven approaches. We also consider a ground truth-free implementation and test the robustness of the proposed framework to noise.
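The Deep Guess idea of warm-starting an iterative solver with a learned reconstruction can be illustrated on a toy problem. In the sketch below, an l1 soft-thresholding prox and a random system matrix stand in for the paper's non-convex model and CT operator, and the "network output" is simulated rather than produced by a trained network.

```python
# Conceptual sketch: a learned reconstruction serves as the starting point of a
# (heavily simplified) proximal-gradient scheme. Not the authors' model or operator.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(80, 128))           # toy forward operator (sparse-view system matrix)
x_true = np.zeros(128); x_true[::16] = 1.0
y = A @ x_true                           # noiseless toy measurements

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(x0, n_iter, lam=0.05):
    step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1 / Lipschitz constant of the gradient
    x = x0.copy()
    for _ in range(n_iter):
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)
    return x

x_deep_guess = x_true + 0.05 * rng.normal(size=128)  # stand-in for the network's warm start
x_cold = proximal_gradient(np.zeros(128), n_iter=20)
x_warm = proximal_gradient(x_deep_guess, n_iter=20)
print(np.linalg.norm(x_cold - x_true), np.linalg.norm(x_warm - x_true))
```

With the same iteration budget, the warm-started run typically lands much closer to the ground truth, which is the acceleration the scheme relies on.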
Affiliation(s)
- Elena Loli Piccolomini, Davide Evangelista
- Department of Computer Science and Engineering, University of Bologna, Via Mura Anteo Zamboni 7, 40126 Bologna, Italy
- Elena Morotti
- Department of Political and Social Sciences, University of Bologna, Strada Maggiore 45, 40125 Bologna, Italy
5
Han Y. Low-dose CT reconstruction using cross-domain deep learning with domain transfer module. Phys Med Biol 2025; 70:065014. [PMID: 39983305] [DOI: 10.1088/1361-6560/adb932]
Abstract
Objective. X-ray computed tomography employing low-dose x-ray sources is actively researched to reduce radiation exposure. However, the reduced photon count of low-dose x-ray sources leads to severe noise artifacts in analytic reconstruction methods like filtered backprojection. Recently, deep learning (DL)-based approaches employing uni-domain networks, either in the image domain or the projection domain, have demonstrated remarkable effectiveness in reducing image noise and Poisson noise caused by low-dose x-ray sources. Furthermore, dual-domain networks that integrate image-domain and projection-domain networks are being developed to surpass the performance of uni-domain networks. Despite this advancement, dual-domain networks require twice the computational resources of uni-domain networks, even though their underlying network architectures are not substantially different. Approach. The U-Net architecture, a type of hourglass network, comprises encoder and decoder modules. The encoder extracts meaningful representations from the input data, while the decoder uses these representations to reconstruct the target data. In dual-domain networks, however, encoders and decoders are redundantly utilized due to the sequential use of two networks, leading to increased computational demands. To address this issue, this study proposes a cross-domain DL approach that leverages analytical domain transfer functions. These functions enable the transfer of features extracted by an encoder trained in the input domain to the target domain, thereby reducing redundant computations. The target data are then reconstructed using a decoder trained in the corresponding domain, optimizing resource efficiency without compromising performance. Main results. The proposed cross-domain network, comprising a projection-domain encoder and an image-domain decoder, demonstrated effective performance by leveraging the domain transfer function, achieving comparable results with only half the trainable parameters of dual-domain networks. Moreover, the proposed method outperformed conventional iterative reconstruction techniques and existing DL approaches in reconstruction quality. Significance. The proposed network leverages the transfer function to bypass redundant encoder and decoder modules, enabling direct connections between different domains. This approach not only surpasses the performance of dual-domain networks but also significantly reduces the number of required parameters. By facilitating the transfer of primal representations across domains, the method achieves synergistic effects, delivering high-quality reconstructed images with reduced radiation doses.
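A rough sketch of the cross-domain data flow described above: feature maps produced in the projection domain are carried into the image domain by an analytic operator. Plain parallel-beam filtered back projection from scikit-image is used here purely as a stand-in for the paper's domain transfer function, and the "encoder features" are simple filtered copies of the sinogram rather than learned representations.

```python
# Illustrative data flow only: projection-domain feature channels are transferred to the
# image domain with an analytic operator before an image-domain decoder would take over.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

image = resize(shepp_logan_phantom(), (128, 128))
theta = np.linspace(0.0, 180.0, 60, endpoint=False)
sino = radon(image, theta=theta)                      # low-view sinogram (projection domain)

# pretend "encoder features": a small stack of filtered versions of the sinogram
features = np.stack([sino, np.gradient(sino, axis=0), np.gradient(sino, axis=1)])

# analytic domain-transfer step: backproject every feature channel independently
transferred = np.stack([iradon(f, theta=theta) for f in features])
print(transferred.shape)   # image-domain feature maps, one per projection-domain channel
```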
Affiliation(s)
- Yoseob Han
- Department of Electronic Engineering, Soongsil University, Seoul, Republic of Korea
6
Zhang S, Tuo M, Jin S, Gu Y. An efficient and high-quality scheme for cone-beam CT reconstruction from sparse-view data. Journal of X-Ray Science and Technology 2025; 33:420-435. [PMID: 39973789] [DOI: 10.1177/08953996241313121]
Abstract
Computed tomography (CT) is capable of generating detailed cross-sectional images of scanned objects non-destructively. CT has thus become an increasingly vital tool for 3D modelling of cultural relics. Compressed sensing (CS)-based CT reconstruction algorithms, such as the algebraic reconstruction technique (ART) regularized by total variation (TV), enable accurate reconstructions from sparse-view data, which consequently reduces both scanning time and costs. However, the implementation of ART-TV is considerably slow, particularly in cone-beam reconstruction. In this paper, we propose an efficient and high-quality scheme for cone-beam CT reconstruction based on the traditional ART-TV algorithm. Our scheme employs Joseph's projection method for the computation of the system matrix. By exploiting the geometric symmetry of the cone-beam rays, we are able to compute the weight coefficients of the system matrix for two symmetric rays simultaneously. We then employ multi-threading technology to speed up the ART reconstruction, and utilize graphics processing units (GPUs) to accelerate the TV minimization. Experimental results demonstrate that, for a typical reconstruction of a 512 × 512 × 512 volume from 60 views of 512 × 512 projection images, our scheme achieves a speedup of 14× compared to a single-threaded CPU implementation. Furthermore, Joseph's projection yields higher-quality ART-TV reconstructions than the traditional Siddon's projection.
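The ART-TV alternation summarized above, i.e. Kaczmarz sweeps over the rays followed by a few total-variation smoothing steps, can be sketched on a toy 2D problem. The dense random system matrix below replaces Joseph ray tracing, and no ray symmetry, multi-threading, or GPU acceleration is attempted.

```python
# Toy ART (Kaczmarz) + TV-smoothing loop illustrating the ART-TV alternation.
import numpy as np

rng = np.random.default_rng(1)
N = 32                                    # image is N x N
A = rng.random((200, N * N))              # toy system matrix (200 rays)
x_true = np.zeros((N, N)); x_true[8:24, 8:24] = 1.0
b = A @ x_true.ravel()

x = np.zeros(N * N)
for sweep in range(5):
    # one ART sweep: sequential Kaczmarz updates over all rays (relaxation 0.2)
    for i in range(A.shape[0]):
        a_i = A[i]
        x += 0.2 * (b[i] - a_i @ x) / (a_i @ a_i) * a_i
    # a few descent steps on a smoothed total-variation penalty
    img = x.reshape(N, N)
    for _ in range(10):
        gx = np.diff(img, axis=0, append=img[-1:, :])
        gy = np.diff(img, axis=1, append=img[:, -1:])
        mag = np.sqrt(gx**2 + gy**2 + 1e-8)
        div = (gx / mag - np.roll(gx / mag, 1, axis=0)
               + gy / mag - np.roll(gy / mag, 1, axis=1))
        img = img + 0.02 * div            # gradient step that reduces the TV value
    x = img.ravel()

print(np.linalg.norm(x - x_true.ravel()) / np.linalg.norm(x_true))
```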
Affiliation(s)
- Shunli Zhang, Mingxiu Tuo, Siyu Jin, Yikuan Gu
- School of Information Science and Technology, Northwest University, Xi'an, China
7
Li Y, Sun X, Wang S, Guo L, Qin Y, Pan J, Chen P. TD-STrans: Tri-domain sparse-view CT reconstruction based on sparse transformer. Computer Methods and Programs in Biomedicine 2025; 260:108575. [PMID: 39733746] [DOI: 10.1016/j.cmpb.2024.108575]
Abstract
BACKGROUND AND OBJECTIVE Sparse-view computed tomography (CT) speeds up scanning and reduces radiation exposure in medical diagnosis. However, when the projection views are severely under-sampled, deep learning-based reconstruction methods often suffer from over-smoothing of the reconstructed images due to the lack of high-frequency information. To address this issue, we introduce frequency-domain information into the popular projection-image domain reconstruction, proposing a Tri-Domain sparse-view CT reconstruction model based on a Sparse Transformer (TD-STrans). METHODS TD-STrans integrates three essential modules: the projection recovery module completes the sparse-view projections; the Fourier domain filling module mitigates artifacts and over-smoothing by filling in missing high-frequency details; and the image refinement module further enhances and preserves image details. Additionally, a multi-domain joint loss function is designed to simultaneously enhance reconstruction quality in the projection domain, image domain, and frequency domain, thereby further improving the preservation of image details. RESULTS The results of simulation experiments on the lymph node dataset and real experiments on the walnut dataset consistently demonstrate the effectiveness of TD-STrans in artifact removal, suppression of over-smoothing, and preservation of structural fidelity. CONCLUSION The reconstruction results of TD-STrans indicate that a sparse transformer applied across multiple domains can alleviate the over-smoothing and detail loss caused by reduced views, offering a novel solution for ultra-sparse-view CT imaging.
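A multi-domain joint loss of the kind described for TD-STrans can be assembled from standard per-domain terms. The weights and the FFT-magnitude term below are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of a joint loss combining projection-, image-, and frequency-domain terms.
import torch
import torch.nn.functional as F

def tri_domain_loss(pred_proj, gt_proj, pred_img, gt_img, w=(1.0, 1.0, 0.1)):
    loss_proj = F.mse_loss(pred_proj, gt_proj)                     # projection domain
    loss_img = F.mse_loss(pred_img, gt_img)                        # image domain
    loss_freq = F.l1_loss(torch.abs(torch.fft.fft2(pred_img)),     # frequency domain
                          torch.abs(torch.fft.fft2(gt_img)))
    return w[0] * loss_proj + w[1] * loss_img + w[2] * loss_freq

# toy tensors with batch and channel dimensions
pp, gp = torch.randn(2, 1, 90, 180), torch.randn(2, 1, 90, 180)
pi, gi = torch.randn(2, 1, 128, 128), torch.randn(2, 1, 128, 128)
print(tri_domain_loss(pp, gp, pi, gi))
```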
Affiliation(s)
- Yu Li, Xueqin Sun
- Department of Information and Communication Engineering, North University of China, Taiyuan 030051, China; The State Key Lab for Electronic Testing Technology, North University of China, Taiyuan 030051, China
- Sukai Wang
- The State Key Lab for Electronic Testing Technology, North University of China, Taiyuan 030051, China; Department of Computer Science and Technology, North University of China, Taiyuan 030051, China
- Lina Guo, Yingwei Qin, Jinxiao Pan, Ping Chen
- Department of Information and Communication Engineering, North University of China, Taiyuan 030051, China; The State Key Lab for Electronic Testing Technology, North University of China, Taiyuan 030051, China
8
Zhang R, Szczykutowicz TP, Toia GV. Artificial Intelligence in Computed Tomography Image Reconstruction: A Review of Recent Advances. J Comput Assist Tomogr 2025:00004728-990000000-00429. [PMID: 40008975] [DOI: 10.1097/rct.0000000000001734]
Abstract
The development of novel image reconstruction algorithms has been pivotal in enhancing image quality and reducing radiation dose in computed tomography (CT) imaging. Traditional techniques like filtered back projection perform well under ideal conditions but fail to generate high-quality images under low-dose, sparse-view, and limited-angle conditions. Iterative reconstruction methods improve upon filtered back projection by incorporating system models and assumptions about the patient, yet they can suffer from patchy image textures. The emergence of artificial intelligence (AI), particularly deep learning, has further advanced CT reconstruction. AI techniques have demonstrated great potential in reducing radiation dose while preserving image quality and noise texture. Moreover, AI has exhibited unprecedented performance in addressing challenging CT reconstruction problems, including low-dose CT, sparse-view CT, limited-angle CT, and interior tomography. This review focuses on the latest advances in AI-based CT reconstruction under these challenging conditions.
Affiliation(s)
- Ran Zhang
- Departments of Radiology and Medical Physics, University of Wisconsin, Madison, WI
9
Zhang J, Li Z, Pan J, Wang S, Wu W. Trustworthy Limited Data CT Reconstruction Using Progressive Artifact Image Learning. IEEE Transactions on Image Processing 2025; 34:1163-1178. [PMID: 40031253] [DOI: 10.1109/tip.2025.3534559]
Abstract
The reconstruction of limited data computed tomography (CT) aims to obtain high-quality images from a reduced set of projection views acquired from sparse views or limited angles. This approach is utilized to reduce radiation exposure or expedite the scanning process. Deep learning (DL) techniques have been incorporated into limited data CT reconstruction tasks and achieve remarkable performance. However, these DL methods suffer from various limitations. Firstly, the distribution inconsistency between the simulation data and the real data hinders the generalization of these DL-based methods. Secondly, these DL-based methods could be unstable due to a lack of kernel awareness. This paper addresses these issues by proposing an unrolling framework called Progressive Artifact Image Learning (PAIL) for limited data CT reconstruction. The proposed PAIL primarily consists of three key modules, i.e., a residual domain module (RDM), an image domain module (IDM), and a wavelet domain module (WDM). The RDM is designed to refine features from residual images and suppress the observable artifacts from the reconstructed images. This module can effectively alleviate the effects of distribution inconsistency among different data sets by transferring the optimization space from the original data domain to the residual data domain. The IDM is designed to suppress the unobservable artifacts in the image space. The RDM and IDM collaborate with each other during the iterative optimization process, progressively removing artifacts and reconstructing the underlying CT image. Furthermore, in order to avoid potential hallucinations generated by the RDM and IDM, an additional WDM is incorporated into the network to enhance its stability. This is achieved by making the network kernel-aware through the integration of wavelet-based compressed sensing. The effectiveness of the proposed PAIL method has been consistently verified on two simulated CT data sets, a clinical cardiac data set, and a sheep lung data set. Compared to other state-of-the-art methods, the proposed PAIL method achieves superior performance in various limited data CT reconstruction tasks, demonstrating its promising generalization and stability.
10
Zhao X, Du Y, Peng Y. Deep Learning-Based Multi-View Projection Synthesis Approach for Improving the Quality of Sparse-View CBCT in Image-Guided Radiotherapy. Journal of Imaging Informatics in Medicine 2025. [PMID: 39849201] [DOI: 10.1007/s10278-025-01390-0]
Abstract
While radiation hazards induced by cone-beam computed tomography (CBCT) in image-guided radiotherapy (IGRT) can be reduced by sparse-view sampling, the image quality is inevitably degraded. We propose a deep learning-based multi-view projection synthesis (DLMPS) approach to improve the quality of sparse-view low-dose CBCT images. In the proposed DLMPS approach, linear interpolation was first applied to the sparse-view projections and the projections were rearranged into sinograms; these sinograms were processed with a sinogram restoration model and then rearranged back into projections. The sinogram restoration model was modified from the 2D U-Net by incorporating dynamic convolutional layers and residual learning techniques. The DLMPS approach was trained, validated, and tested on CBCT data from 163, 30, and 30 real patients, respectively. Sparse-view projection datasets with 1/4 and 1/8 of the original sampling rate were simulated, and the corresponding full-view projection datasets were restored via the DLMPS approach. Tomographic images were reconstructed using the Feldkamp-Davis-Kress algorithm. Quantitative metrics including root-mean-square error (RMSE), peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and feature similarity (FSIM) were calculated in both the projection and image domains to evaluate the performance of the DLMPS approach. The DLMPS approach was compared with 11 state-of-the-art (SOTA) models, including CNN and Transformer architectures. For the 1/4 sparse-view reconstruction task, the proposed DLMPS approach achieved averaged RMSE, PSNR, SSIM, and FSIM values of 0.0271, 45.93 dB, 0.9817, and 0.9587 in the projection domain, and 0.000885, 37.63 dB, 0.9074, and 0.9885 in the image domain, respectively. For the 1/8 sparse-view reconstruction task, the DLMPS approach achieved averaged RMSE, PSNR, SSIM, and FSIM values of 0.0304, 44.85 dB, 0.9785, and 0.9524 in the projection domain, and 0.001057, 36.05 dB, 0.8786, and 0.9774 in the image domain, respectively. The DLMPS approach outperformed all 11 SOTA models in both the projection and image domains for the 1/4 and 1/8 sparse-view reconstruction tasks. The proposed DLMPS approach effectively improves the quality of sparse-view CBCT images in IGRT by accurately synthesizing missing projections, exhibiting potential for substantially reducing imaging dose to patients with minimal loss of image quality.
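The first stage of the pipeline, linear interpolation of the missing views before learned sinogram restoration, can be sketched as follows; array shapes and the 1/4 sampling pattern are illustrative.

```python
# Angular linear interpolation of a sparse-view sinogram (pre-processing sketch only).
import numpy as np

full_angles = np.linspace(0.0, 360.0, 360, endpoint=False)
sparse_idx = np.arange(0, 360, 4)                        # keep every 4th view (1/4 rate)
sparse_sino = np.random.rand(len(sparse_idx), 512)       # (n_views_sparse, n_detectors)

# interpolate each detector column over the angular axis to recover 360 views
interp_sino = np.empty((len(full_angles), sparse_sino.shape[1]))
for d in range(sparse_sino.shape[1]):
    interp_sino[:, d] = np.interp(full_angles,
                                  full_angles[sparse_idx],
                                  sparse_sino[:, d])
print(interp_sino.shape)   # (360, 512): input to the sinogram-restoration network
```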
Affiliation(s)
- Xuzhi Zhao
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, China
- Yi Du
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital & Institute, Beijing, China
- Institute of Medical Technology, Peking University Health Science Center, Beijing, China
- Yahui Peng
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, China
11
Didonna A, Ramos Lopez D, Iaselli G, Amoroso N, Ferrara N, Pugliese GMI. Deep Convolutional Framelets for Dose Reconstruction in Boron Neutron Capture Therapy with Compton Camera Detector. Cancers (Basel) 2025; 17:130. [PMID: 39796757] [PMCID: PMC11719915] [DOI: 10.3390/cancers17010130]
Abstract
BACKGROUND Boron neutron capture therapy (BNCT) is an innovative binary form of radiation therapy with high selectivity towards cancer tissue based on the neutron capture reaction ¹⁰B(n,α)⁷Li, consisting of exposing patients to neutron beams after administration of a boron compound with preferential accumulation in cancer cells. The high linear energy transfer products of the ensuing reaction deposit their energy at the cell level, sparing normal tissue. Although progress in accelerator-based BNCT has led to renewed interest in this cancer treatment modality, in vivo dose monitoring during treatment remains infeasible and several approaches are under investigation. While Compton imaging presents various advantages over other imaging methods, it typically requires long reconstruction times, comparable with BNCT treatment duration. METHODS This study aims to develop deep neural network models to estimate the dose distribution by using a simulated dataset of BNCT Compton camera images. The models pursue the avoidance of the iteration time associated with the maximum-likelihood expectation-maximization (MLEM) algorithm, enabling prompt dose reconstruction during the treatment. The U-Net architecture and two variants based on the deep convolutional framelets framework have been used for noise and artifact reduction in few-iteration reconstructed images. RESULTS This approach has led to promising results in terms of reconstruction accuracy and processing time, with the latter reduced by a factor of about 6 with respect to classical iterative algorithms. CONCLUSIONS This is a favorable reconstruction time given typical BNCT treatment durations. Further enhancements may be achieved by optimizing the reconstruction of input images with different deep learning techniques.
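For context, the MLEM iteration whose run time the networks are trained to shortcut has a very compact form; a few iterations produce the noisy, artifact-laden image that the network then cleans up. The toy system matrix below merely stands in for the Compton camera response.

```python
# Minimal MLEM loop (few-iteration reconstruction) on a toy Poisson problem.
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_meas = 256, 400
A = rng.random((n_meas, n_pix))               # toy non-negative system (response) matrix
x_true = rng.random(n_pix)
y = rng.poisson(A @ x_true).astype(float)     # Poisson-distributed measurements

sens = A.sum(axis=0)                          # sensitivity image A^T 1
x = np.ones(n_pix)                            # MLEM needs a strictly positive start
for it in range(5):                           # "few-iteration" reconstruction
    proj = A @ x
    x = x / sens * (A.T @ (y / np.maximum(proj, 1e-12)))

print(float(np.corrcoef(x, x_true)[0, 1]))    # rough agreement with the ground truth
```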
Affiliation(s)
- Angelo Didonna
- Istituto Nazionale di Fisica Nucleare, Sezione di Bari, 70125 Bari, Italy
- Scuola di Specializzazione in Fisica Medica, Università degli Studi di Milano, 20133 Milan, Italy
- Dayron Ramos Lopez, Giuseppe Iaselli
- Istituto Nazionale di Fisica Nucleare, Sezione di Bari, 70125 Bari, Italy
- Dipartimento Interateneo di Fisica, Politecnico di Bari, 70125 Bari, Italy
- Nicola Amoroso
- Istituto Nazionale di Fisica Nucleare, Sezione di Bari, 70125 Bari, Italy
- Dipartimento di Farmacia-Scienze del Farmaco, Università degli Studi di Bari Aldo Moro, 70125 Bari, Italy
- Nicola Ferrara, Gabriella Maria Incoronata Pugliese
- Istituto Nazionale di Fisica Nucleare, Sezione di Bari, 70125 Bari, Italy
- Dipartimento Interateneo di Fisica, Politecnico di Bari, 70125 Bari, Italy
12
Wirtensohn S, Schmid C, Berthe D, John D, Heck L, Taphorn K, Flenner S, Herzen J. Self-supervised denoising of grating-based phase-contrast computed tomography. Sci Rep 2024; 14:32169. [PMID: 39741166] [DOI: 10.1038/s41598-024-83517-x]
Abstract
In the last decade, grating-based phase-contrast computed tomography (gbPC-CT) has received growing interest. It provides additional information about the refractive index decrement in the sample, a signal that exhibits increased soft-tissue contrast. However, the resolution dependence of the signal poses a challenge: its contrast enhancement is overcompensated by the low resolution in low-dose applications such as clinical computed tomography. As a result, the implementation of gbPC-CT is currently tied to a higher dose. To reduce the dose, we introduce the self-supervised deep learning network Noise2Inverse into the field of gbPC-CT. We evaluate the behavior of the Noise2Inverse parameters on the phase-contrast results. Afterward, we compare its results with those of other denoising methods, namely Statistical Iterative Reconstruction, Block Matching 3D, and Patchwise Phase Retrieval. Using Noise2Inverse as an example, we show that deep learning networks can deliver superior denoising results with respect to the investigated image quality metrics. Their application allows the resolution to be increased while maintaining the dose. At higher resolutions, gbPC-CT can naturally deliver higher contrast than conventional absorption-based CT. Therefore, the application of machine learning-based denoisers shifts the dose-normalized image quality in favor of gbPC-CT, bringing it one step closer to medical application.
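The core of Noise2Inverse is the construction of training pairs from disjoint angular splits of the same noisy scan. A minimal sketch with scikit-image FBP on a phantom (standing in for the phase-contrast reconstructions) looks like this:

```python
# Noise2Inverse-style training-pair construction: split angles, reconstruct each split,
# learn to map one sub-reconstruction to the other. Sketch only, not the authors' code.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

image = resize(shepp_logan_phantom(), (128, 128))
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sino = radon(image, theta=theta)
sino_noisy = sino + 0.5 * np.random.randn(*sino.shape)    # simulated measurement noise

K = 2                                                      # number of angle splits
recons = [iradon(sino_noisy[:, k::K], theta=theta[k::K]) for k in range(K)]

# one training pair: network input = reconstruction from split 0,
#                    training target = reconstruction from split 1
net_input, net_target = recons[0], recons[1]
print(net_input.shape, net_target.shape)
```

Because the noise in the two splits is independent, a network trained on such pairs learns to suppress noise without ever seeing a clean reference image.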
Affiliation(s)
- Sami Wirtensohn
- Research Group Biomedical Imaging Physics, Department of Physics, TUM School of Natural Sciences, Technical University of Munich, 85748, Garching, Germany
- Chair of Biomedical Physics, Department of Physics, TUM School of Natural Sciences, Technical University of Munich, 85748, Garching, Germany
- Munich Institute of Biomedical Engineering, Technical University of Munich, 85748, Garching, Germany
- Institute of Materials Physics, Helmholtz-Zentrum Hereon, 21502, Geesthacht, Germany
- Clemens Schmid
- Chair of Biomedical Physics, Department of Physics, TUM School of Natural Sciences, Technical University of Munich, 85748, Garching, Germany
- Munich Institute of Biomedical Engineering, Technical University of Munich, 85748, Garching, Germany
- Paul Scherrer Institute, Forschungsstrasse 111, 5232, Villigen, Switzerland
- Daniel Berthe
- Chair of Biomedical Physics, Department of Physics, TUM School of Natural Sciences, Technical University of Munich, 85748, Garching, Germany
- Munich Institute of Biomedical Engineering, Technical University of Munich, 85748, Garching, Germany
- Dominik John
- Research Group Biomedical Imaging Physics, Department of Physics, TUM School of Natural Sciences, Technical University of Munich, 85748, Garching, Germany
- Chair of Biomedical Physics, Department of Physics, TUM School of Natural Sciences, Technical University of Munich, 85748, Garching, Germany
- Munich Institute of Biomedical Engineering, Technical University of Munich, 85748, Garching, Germany
- Institute of Materials Physics, Helmholtz-Zentrum Hereon, 21502, Geesthacht, Germany
- Lisa Heck
- Chair of Biomedical Physics, Department of Physics, TUM School of Natural Sciences, Technical University of Munich, 85748, Garching, Germany
- Munich Institute of Biomedical Engineering, Technical University of Munich, 85748, Garching, Germany
- Kirsten Taphorn
- Chair of Biomedical Physics, Department of Physics, TUM School of Natural Sciences, Technical University of Munich, 85748, Garching, Germany
- Munich Institute of Biomedical Engineering, Technical University of Munich, 85748, Garching, Germany
- Silja Flenner
- Institute of Materials Physics, Helmholtz-Zentrum Hereon, 21502, Geesthacht, Germany
- Julia Herzen
- Research Group Biomedical Imaging Physics, Department of Physics, TUM School of Natural Sciences, Technical University of Munich, 85748, Garching, Germany
- Chair of Biomedical Physics, Department of Physics, TUM School of Natural Sciences, Technical University of Munich, 85748, Garching, Germany
- Munich Institute of Biomedical Engineering, Technical University of Munich, 85748, Garching, Germany
13
Fu Y, Dong S, Huang Y, Niu M, Ni C, Yu L, Shi K, Yao Z, Zhuo C. MPGAN: Multi Pareto Generative Adversarial Network for the denoising and quantitative analysis of low-dose PET images of human brain. Med Image Anal 2024; 98:103306. [PMID: 39163786] [DOI: 10.1016/j.media.2024.103306]
Abstract
Positron emission tomography (PET) imaging is widely used in medical imaging for analyzing neurological disorders and related brain diseases. Usually, full-dose imaging for PET ensures image quality but raises concerns about the potential health risks of radiation exposure. The contradiction between reducing radiation exposure and maintaining diagnostic performance can be effectively addressed by reconstructing low-dose PET (L-PET) images to the same high quality as full-dose PET (F-PET). This paper introduces the Multi Pareto Generative Adversarial Network (MPGAN) to achieve 3D end-to-end denoising of L-PET images of the human brain. MPGAN consists of two key modules: the diffused multi-round cascade generator (GDmc) and the dynamic Pareto-efficient discriminator (DPed), both of which play a zero-sum game for n (n ∈ {1, 2, 3}) rounds to ensure the quality of the synthesized F-PET images. The Pareto-efficient dynamic discrimination process is introduced in DPed to adaptively adjust the weights of the sub-discriminators for improved discrimination output. We validated the performance of MPGAN using three datasets, including two independent datasets and one mixed dataset, and compared it with 12 recent competing models. Experimental results indicate that the proposed MPGAN provides an effective solution for 3D end-to-end denoising of L-PET images of the human brain, which meets clinical standards and achieves state-of-the-art performance on commonly used metrics.
Affiliation(s)
- Yu Fu
- School of Information Science and Engineering, Lanzhou University, Lanzhou, China; College of Integrated Circuits, Zhejiang University, Hangzhou, China
- Shunjie Dong
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; College of Health Science and Technology, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Yanyan Huang
- Department of Statistics and Actuarial Science, The University of Hong Kong, Hong Kong, China
- Meng Niu
- Department of Radiology, The First Hospital of Lanzhou University, Lanzhou, China
- Chao Ni
- Department of Breast Surgery, The Second Affiliated Hospital of Zhejiang University, Hangzhou, China
- Lequan Yu
- Department of Statistics and Actuarial Science, The University of Hong Kong, Hong Kong, China
- Kuangyu Shi
- Department of Nuclear Medicine, University Hospital Bern, Bern, Switzerland
- Zhijun Yao
- School of Information Science and Engineering, Lanzhou University, Lanzhou, China
- Cheng Zhuo
- College of Integrated Circuits, Zhejiang University, Hangzhou, China
14
Mileto A, Yu L, Revels JW, Kamel S, Shehata MA, Ibarra-Rovira JJ, Wong VK, Roman-Colon AM, Lee JM, Elsayes KM, Jensen CT. State-of-the-Art Deep Learning CT Reconstruction Algorithms in Abdominal Imaging. Radiographics 2024; 44:e240095. [PMID: 39612283] [PMCID: PMC11618294] [DOI: 10.1148/rg.240095]
Abstract
The implementation of deep neural networks has spurred the creation of deep learning reconstruction (DLR) CT algorithms. DLR CT techniques encompass a spectrum of deep learning-based methodologies that operate during the different steps of the image creation, prior to or after the traditional image formation process (eg, filtered backprojection [FBP] or iterative reconstruction [IR]), or alternatively by fully replacing FBP or IR techniques. DLR algorithms effectively facilitate the reduction of image noise associated with low photon counts from reduced radiation dose protocols. DLR methods have emerged as an effective solution to ameliorate limitations observed with prior CT image reconstruction algorithms, including FBP and IR algorithms, which are not able to preserve image texture and diagnostic performance at low radiation dose levels. An additional advantage of DLR algorithms is their high reconstruction speed, hence targeting the ideal triad of features for a CT image reconstruction (ie, the ability to consistently provide diagnostic-quality images and achieve radiation dose imaging levels as low as reasonably possible, with high reconstruction speed). An accumulated body of evidence supports the clinical use of DLR algorithms in abdominal imaging across multiple CT imaging tasks. The authors explore the technical aspects of DLR CT algorithms and examine various approaches to image synthesis in DLR creation. The clinical applications of DLR algorithms are highlighted across various abdominal CT imaging domains, with emphasis on the supporting evidence for diverse clinical tasks. An overview of the current limitations of and outlook for DLR algorithms for CT is provided. ©RSNA, 2024.
Affiliation(s)
- Achille Mileto, Lifeng Yu, Jonathan W. Revels, Serageldin Kamel, Mostafa A. Shehata, Juan J. Ibarra-Rovira, Vincenzo K. Wong, Alicia M. Roman-Colon, Jeong Min Lee, Khaled M. Elsayes, Corey T. Jensen
- From the Department of Radiology, University of Washington School of Medicine, Seattle, Wash (A.M.); Department of Radiology, Mayo Clinic, Rochester, Minn (L.Y.); Department of Radiology, New York University Grossman School of Medicine, NYU Langone Health, New York, NY (J.W.R.); Departments of Radiation Oncology (S.K.) and Abdominal Imaging (M.A.S., J.J.I.R., V.K.W., K.M.E., C.T.J.), The University of Texas MD Anderson Cancer Center, 1400 Pressler St, Unit 1473, Houston, TX 77030-4009; Department of Radiology, Texas Children's Hospital, Houston, Tex (A.M.R.C.); and Department of Radiology, Seoul National University College of Medicine, Seoul, South Korea (J.M.L.)
15
Yun S, Lee S, Choi DI, Lee T, Cho S. TMAA-net: tensor-domain multi-planal anti-aliasing network for sparse-view CT image reconstruction. Phys Med Biol 2024; 69:225012. [PMID: 39481239] [DOI: 10.1088/1361-6560/ad8da2]
Abstract
Objective. Among various deep-network-based sparse-view CT image reconstruction studies, the sinogram upscaling network has been predominantly employed to synthesize additional view information. However, the performance of the sinogram-based network is limited in terms of removing aliasing streak artifacts and recovering low-contrast small structures. In this study, we used a view-by-view back-projection (VVBP) tensor-domain network to overcome such limitations of sinogram-based approaches. Approach. The proposed method offers the advantage of addressing the aliasing artifacts directly in the 3D tensor domain rather than the 2D sinogram. In the tensor-domain network, multi-planal anti-aliasing modules were used to remove artifacts within the coronal and sagittal tensor planes. In addition, a data-fidelity-based refinement module was implemented to successively process the output images of the tensor network to recover image sharpness and textures. Main results. The proposed method outperformed other state-of-the-art sinogram-based networks in terms of removing aliasing artifacts and recovering low-contrast details. The performance was validated for both numerical and clinical projection data in a circular fan-beam CT configuration. Significance. We observed that view-by-view aliasing artifacts in sparse-view CT exhibit distinct patterns within the tensor planes, making them effectively removable in high-dimensional representations. Additionally, we demonstrated that the co-domain characteristics of tensor space processing offer higher generalization performance for aliasing artifact removal compared to conventional sinogram-domain processing.
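The VVBP tensor underlying the method keeps each filtered back-projected view as a separate slice instead of summing them; a parallel-beam sketch with scikit-image (a stand-in for the paper's fan-beam geometry) is shown below.

```python
# Building a view-by-view back-projection (VVBP) tensor: back-project every view
# separately and stack the single-view images along a third axis. Sketch only.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

image = resize(shepp_logan_phantom(), (128, 128))
theta = np.linspace(0.0, 180.0, 60, endpoint=False)        # sparse-view acquisition
sino = radon(image, theta=theta)

# back-project every view on its own instead of summing them into one image
vvbp = np.stack([iradon(sino[:, [i]], theta=theta[i:i + 1]) for i in range(len(theta))],
                axis=-1)                                    # (H, W, n_views) tensor
fbp_like = vvbp.mean(axis=-1)                               # collapsing the view axis gives an FBP-like image
print(vvbp.shape, fbp_like.shape)
```

Aliasing streaks appear as structured patterns along the view axis of this tensor, which is what the multi-planal modules operate on.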
Affiliation(s)
- Sungho Yun, Seoyoung Lee, Da-In Choi
- Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
- Taewon Lee
- Department of Semiconductor Engineering, Hoseo University, Asan 31499, Republic of Korea
- Seungryong Cho
- Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
- KAIST Institute for IT Convergence, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
16
Takeda K, Sakai T, Mitate E. Background removal for debiasing computer-aided cytological diagnosis. Int J Comput Assist Radiol Surg 2024; 19:2165-2174. [PMID: 38918281] [PMCID: PMC11541310] [DOI: 10.1007/s11548-024-03169-0]
Abstract
To address the background-bias problem in computer-aided cytology caused by microscopic slide deterioration, this article proposes a deep learning approach for cell segmentation and background removal without requiring cell annotation. A U-Net-based model was trained to separate cells from the background in an unsupervised manner by leveraging the redundancy of the background and the sparsity of cells in liquid-based cytology (LBC) images. The experimental results demonstrate that the U-Net-based model trained on a small set of cytology images can exclude background features and accurately segment cells. This capability is beneficial for debiasing in the detection and classification of the cells of interest in oral LBC. Slide deterioration can significantly affect deep learning-based cell classification. Our proposed method effectively removes background features at no cost of cell annotation, thereby enabling accurate cytological diagnosis through the deep learning of microscopic slide images.
Affiliation(s)
- Keita Takeda
- School of Information and Data Sciences, Nagasaki University, 1-14 Bunkyo, Nagasaki, 8528521, Japan.
- Tomoya Sakai
- School of Information and Data Sciences, Nagasaki University, 1-14 Bunkyo, Nagasaki, 8528521, Japan
- Graduate School of Integrated Science and Technology, Nagasaki University, 1-14 Bunkyo, Nagasaki, 8528521, Japan
- Eiji Mitate
- Department of Oral and Maxillofacial Surgery, Kanazawa Medical University, 1-1 Daigaku, Uchinada, Kahoku, Ishikawa, 9200293, Japan
17
Liu M, Wang Y, Gu Y, Gong H, Lu HM, Tang Z, Yang Y. Development of a proton CT imaging system using scintillator-based range detection. Med Phys 2024; 51:8047-8059. [PMID: 39250696] [DOI: 10.1002/mp.17393]
Abstract
BACKGROUND The accuracy of proton therapy and preclinical proton irradiation experiments is susceptible to proton range uncertainties, which partly stem from the inaccurate conversion between CT numbers and relative stopping power (RSP). Proton computed tomography (PCT) can reduce these uncertainties by directly acquiring RSP maps. PURPOSE This study aims to develop a novel PCT imaging system based on scintillator-based proton range detection for accurate RSP reconstruction. METHODS The proposed PCT system consists of a pencil-beam brass collimator with a 1 mm aperture, an object stage capable of translation and 360° rotation, a plastic scintillator for dose-to-light conversion, and a complementary metal oxide semiconductor (CMOS) camera for light distribution acquisition. A calibration procedure based on Monte Carlo (MC) simulation was implemented to convert the obtained light ranges into water-equivalent ranges. The water-equivalent path lengths (WEPLs) of the imaged object were determined by calculating the differences in proton ranges obtained with and without the object in the beam path. To validate the WEPL calculation, measurements of WEPLs for eight tissue-equivalent inserts were conducted. PCT imaging was performed on a custom-designed phantom and a mouse, utilizing both 60 and 360 projections. The filtered back projection (FBP) algorithm was employed to reconstruct the RSP from the WEPLs. Image quality was assessed based on the reconstructed RSP maps and compared to reference and simulation-based reconstructions. RESULTS The differences between the calibrated and reference ranges of 110-150 MeV proton beams were within 0.18 mm. The WEPLs of eight tissue-equivalent inserts were measured with accuracies better than 1%. Phantom experiments exhibited good agreement with reference and simulation-based reconstructions, demonstrating average RSP errors of 1.26%, 1.38%, and 0.38% for images reconstructed with 60 projections, 60 projections after penalized weighted least-squares denoising, and 360 projections, respectively. Mouse experiments provided clear visualization of the mouse contour and major tissue types. MC simulation estimated an imaging dose of 3.44 cGy for adequate RSP reconstruction. CONCLUSIONS The proposed PCT imaging system enables RSP map acquisition with high accuracy and has the potential to improve dose calculation accuracy in proton therapy and preclinical proton irradiation experiments.
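The WEPL-to-RSP chain described above reduces to a range subtraction followed by filtered back projection. The sketch below uses toy numbers and a parallel-beam FBP from scikit-image in place of the calibrated scintillator system.

```python
# Sketch: WEPL = range(without object) - range(with object), then FBP of the WEPL
# sinogram yields the relative stopping power (RSP) map. Toy values only.
import numpy as np
from skimage.transform import iradon

n_det, n_views = 128, 60
theta = np.linspace(0.0, 180.0, n_views, endpoint=False)

range_no_object = 150.0                                              # calibrated water-equivalent range (mm)
range_with_object = 150.0 - 20.0 * np.random.rand(n_det, n_views)    # measured per ray and view (mm)

wepl_sinogram = range_no_object - range_with_object                  # WEPL per ray (mm)
rsp_map = iradon(wepl_sinogram, theta=theta)                          # RSP image via FBP
print(rsp_map.shape)
```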
Collapse
Affiliation(s)
- Meiqi Liu
- School of Physical Sciences, University of Science and Technology of China, Hefei, Anhui, China
| | - Yuxiang Wang
- School of Physical Sciences, University of Science and Technology of China, Hefei, Anhui, China
- Hefei Ion Medical Center, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
| | - Yue Gu
- School of Physical Sciences, University of Science and Technology of China, Hefei, Anhui, China
| | - Haonian Gong
- School of Physical Sciences, University of Science and Technology of China, Hefei, Anhui, China
| | - Hsiao-Ming Lu
- Hefei Ion Medical Center, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
- Ion Medical Research Institute, University of Science and Technology of China, Hefei, Anhui, China
| | - Zebo Tang
- School of Physical Sciences, University of Science and Technology of China, Hefei, Anhui, China
| | - Yidong Yang
- School of Physical Sciences, University of Science and Technology of China, Hefei, Anhui, China
- Hefei Ion Medical Center, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
- Ion Medical Research Institute, University of Science and Technology of China, Hefei, Anhui, China
- Department of Radiation Oncology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
| |
Collapse
|
18
|
Sun C, Salimi Y, Angeliki N, Boudabbous S, Zaidi H. An efficient dual-domain deep learning network for sparse-view CT reconstruction. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2024; 256:108376. [PMID: 39173481 DOI: 10.1016/j.cmpb.2024.108376] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/23/2024] [Revised: 08/02/2024] [Accepted: 08/15/2024] [Indexed: 08/24/2024]
Abstract
BACKGROUND AND OBJECTIVE We developed an efficient deep learning-based dual-domain reconstruction method for sparse-view CT with few trainable parameters and comparable running time. We aimed to investigate the model's capability and its clinical value by performing objective and subjective quality assessments using clinical CT projection data acquired on commercial scanners. METHODS We designed two lightweight networks, namely Sino-Net and Img-Net, to restore the projection and image signals from the DD-Net reconstructed images in the projection and image domains, respectively. The proposed network has few trainable parameters, a running time comparable to other dual-domain reconstruction networks, and is easy to train end-to-end. We prospectively collected clinical thoraco-abdominal CT projection data acquired on a Siemens Biograph 128 Edge CT scanner to train and validate the proposed network. Further, we quantitatively evaluated the CT Hounsfield unit (HU) values of 21 organs and anatomic structures, such as the liver, aorta, and ribcage. We also analyzed the noise properties and compared the signal-to-noise ratio (SNR) and the contrast-to-noise ratio (CNR) of the reconstructed images. In addition, two radiologists conducted a subjective qualitative evaluation of the confidence and conspicuity of anatomic structures and of the overall image quality using a 1-5 Likert scoring system. RESULTS Objective and subjective evaluations showed that the proposed algorithm achieves competitive results in eliminating noise and artifacts, restoring fine structural details, and recovering the edges and contours of anatomic structures using 384 views (1/6 sparse rate). The proposed method exhibited a favorable computational cost on clinical projection data. CONCLUSION This work presents an efficient dual-domain learning network for sparse-view CT reconstruction of raw projection data from a commercial scanner. The study also provides insights for designing an organ-based image quality assessment pipeline for sparse-view reconstruction tasks, potentially benefiting organ-specific dose reduction through sparse-view imaging.
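For reference, the ROI-based SNR and CNR metrics mentioned above are sketched below using common definitions (mean over standard deviation within an ROI, and ROI-to-background contrast over background noise); these definitions are assumptions for illustration, since the study's exact formulas are not reproduced here.

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a region of interest: mean over standard deviation."""
    roi = np.asarray(roi, dtype=float)
    return roi.mean() / roi.std()

def cnr(roi, background):
    """Contrast-to-noise ratio between an ROI and a background region."""
    roi = np.asarray(roi, dtype=float)
    bg = np.asarray(background, dtype=float)
    return abs(roi.mean() - bg.mean()) / bg.std()

# Synthetic Hounsfield-unit patches standing in for a liver ROI and surrounding tissue
rng = np.random.default_rng(0)
liver_roi = rng.normal(60.0, 10.0, size=(32, 32))
background = rng.normal(30.0, 12.0, size=(32, 32))
print(round(snr(liver_roi), 2), round(cnr(liver_roi, background), 2))
```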
Collapse
Affiliation(s)
- Chang Sun
- Beijing University of Posts and Telecommunications, School of Information and Communication Engineering, 100876 Beijing, China; Geneva University Hospital, Division of Nuclear Medicine and Molecular Imaging, CH-1211 Geneva, Switzerland
| | - Yazdan Salimi
- Geneva University Hospital, Division of Nuclear Medicine and Molecular Imaging, CH-1211 Geneva, Switzerland
| | - Neroladaki Angeliki
- Geneva University Hospital, Division of Radiology, CH-1211, Geneva, Switzerland
| | - Sana Boudabbous
- Geneva University Hospital, Division of Radiology, CH-1211, Geneva, Switzerland
| | - Habib Zaidi
- Geneva University Hospital, Division of Nuclear Medicine and Molecular Imaging, CH-1211 Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark; University Research and Innovation Center, Óbuda University, Budapest, Hungary.
| |
Collapse
|
19
|
Kumschier T, Thalhammer J, Schmid C, Haeusele J, Koehler T, Pfeiffer F, Lasser T, Schaff F. Streak artefact removal in x-ray dark-field computed tomography using a convolutional neural network. Med Phys 2024; 51:7404-7414. [PMID: 39012833 DOI: 10.1002/mp.17305] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2023] [Revised: 06/24/2024] [Accepted: 07/01/2024] [Indexed: 07/18/2024] Open
Abstract
BACKGROUND Computed tomography (CT) relies on the attenuation of x-rays and is hence of limited use for weakly attenuating organs of the body, such as the lung. X-ray dark-field (DF) imaging is a recently developed technology that utilizes x-ray optical gratings to enable small-angle scattering as an alternative contrast mechanism. The DF signal provides structural information about the micromorphology of an object, complementary to the conventional attenuation signal. The first human-scale x-ray DF CT has been developed by our group. Despite specialized processing algorithms, reconstructed images remain affected by streaking artifacts, which often hinder image interpretation. In recent years, convolutional neural networks have gained popularity in the field of CT reconstruction, among other applications for streak artifact removal. PURPOSE Reducing streak artifacts is essential for the optimization of image quality in DF CT, and artifact-free images are a prerequisite for potential future clinical application. The purpose of this paper is to demonstrate the feasibility of CNN post-processing for artifact reduction in x-ray DF CT and to show how multi-rotation scans can serve as a pathway for training data. METHODS We employed a supervised deep-learning approach using a three-dimensional dual-frame UNet in order to remove streak artifacts. The required training data were obtained from the experimental x-ray DF CT prototype at our institute. Two different operating modes were used to generate input and corresponding ground truth data sets. Clinically relevant scans at dose-compatible radiation levels were used as input data, and extended scans with substantially fewer artifacts were used as ground truth data. The latter are neither dose- nor time-compatible and are, therefore, unfeasible for clinical imaging of patients. RESULTS The trained CNN was able to greatly reduce streak artifacts in DF CT images. The network was tested against images with entirely different, previously unseen image characteristics. In all cases, CNN processing substantially increased the image quality, which was quantitatively confirmed by improved image quality metrics. Fine details are preserved during processing, despite the output images appearing smoother than the ground truth images. CONCLUSIONS Our results showcase the potential of a neural network to reduce streak artifacts in x-ray DF CT. The image quality is successfully enhanced in dose-compatible x-ray DF CT, which plays an essential role in the adoption of x-ray DF CT into modern clinical radiology.
Collapse
Affiliation(s)
- Tom Kumschier
- Chair of Biomedical Physics, Department of Physics, School of Natural Sciences, Technical University of Munich, Garching, Germany
- Munich School of BioEngineering, Technical University of Munich, Garching, Germany
| | - Johannes Thalhammer
- Chair of Biomedical Physics, Department of Physics, School of Natural Sciences, Technical University of Munich, Garching, Germany
- Munich School of BioEngineering, Technical University of Munich, Garching, Germany
- Institute for Advanced Study, Technical University of Munich, Garching, Germany
| | - Clemens Schmid
- Chair of Biomedical Physics, Department of Physics, School of Natural Sciences, Technical University of Munich, Garching, Germany
- Munich School of BioEngineering, Technical University of Munich, Garching, Germany
| | - Jakob Haeusele
- Chair of Biomedical Physics, Department of Physics, School of Natural Sciences, Technical University of Munich, Garching, Germany
- Munich School of BioEngineering, Technical University of Munich, Garching, Germany
| | - Thomas Koehler
- Institute for Advanced Study, Technical University of Munich, Garching, Germany
- Philips Research, Hamburg, Germany
| | - Franz Pfeiffer
- Chair of Biomedical Physics, Department of Physics, School of Natural Sciences, Technical University of Munich, Garching, Germany
- Munich School of BioEngineering, Technical University of Munich, Garching, Germany
- Department of Diagnostic and Interventional Radiology, School of Medicine & Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
| | - Tobias Lasser
- Munich School of BioEngineering, Technical University of Munich, Garching, Germany
- Computational Imaging and Inverse Problems, Department of Computer Science, School of Computation, Information and Technology, Technical University of Munich, Garching, Germany
| | - Florian Schaff
- Chair of Biomedical Physics, Department of Physics, School of Natural Sciences, Technical University of Munich, Garching, Germany
- Munich School of BioEngineering, Technical University of Munich, Garching, Germany
| |
Collapse
|
20
|
Liu Y, Zhou X, Wei C, Xu Q. Sparse-View Spectral CT Reconstruction and Material Decomposition Based on Multi-Channel SGM. IEEE TRANSACTIONS ON MEDICAL IMAGING 2024; 43:3425-3435. [PMID: 38865221 DOI: 10.1109/tmi.2024.3413085] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2024]
Abstract
In medical applications, the diffusion of contrast agents in tissue can reflect the physiological function of organisms, so it is valuable to quantify the distribution and content of contrast agents in the body over time. Spectral CT has the advantages of multi-energy projection acquisition and material decomposition, which allow quantification of K-edge contrast agents. However, multiple repeated spectral CT scans can cause excessive radiation doses. Sparse-view scanning is commonly used to reduce dose and scan time, but its reconstructed images are usually accompanied by streaking artifacts, which lead to inaccurate quantification of the contrast agents. To solve this problem, an unsupervised sparse-view spectral CT reconstruction and material decomposition algorithm based on a multi-channel score-based generative model (SGM) is proposed in this paper. First, multi-energy images and tissue images are used as multi-channel input data for SGM training. Second, the organism is scanned repeatedly with sparse views, and the trained SGM is used to generate multi-energy images and tissue images driven by the sparse-view projections. After that, a material decomposition algorithm is established that uses the tissue images generated by the SGM as prior images to solve for the contrast agent images. Finally, the distribution and content of the contrast agents are obtained. The comparison and evaluation of this method are given in this paper, and a series of mouse scanning experiments are carried out to verify the effectiveness of the method.
Collapse
|
21
|
Zandarco S, Günther B, Riedel M, Breitenhuber G, Kirst M, Achterhold K, Pfeiffer F, Herzen J. Speckle tracking phase-contrast computed tomography at an inverse Compton X-ray source. OPTICS EXPRESS 2024; 32:28472-28488. [PMID: 39538663 DOI: 10.1364/oe.528701] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/06/2024] [Accepted: 06/10/2024] [Indexed: 11/16/2024]
Abstract
Speckle-based X-ray imaging (SBI) is a phase-contrast method developed at and for highly coherent X-ray sources, such as synchrotrons, to increase the contrast of weakly absorbing objects. Consequently, it complements conventional attenuation-based X-ray imaging. Meanwhile, attempts to establish SBI at less coherent laboratory sources have been made, ranging from liquid metal-jet X-ray sources to microfocus X-ray tubes. However, their lack of coherence results in interference fringes not being resolved. Therefore, algorithms were developed which neglect the interference effects. Here, we demonstrate phase-contrast computed tomography employing SBI in a laboratory setting with an inverse Compton X-ray source. In this context, we also investigate and compare the performance of unified modulated pattern analysis (UMPA), the phase-retrieval algorithm conventionally used for SBI at synchrotrons, with a phase-retrieval method developed for low-coherence systems (LCS). We successfully retrieve full computed tomography reconstructions of a phantom as well as of biological specimens, such as larvae of the greater wax moth (Galleria mellonella), a model system for studies of pathogens and infections. Furthermore, we demonstrate quantitative phase-contrast computed tomography using SBI at a low-coherence setup.
Collapse
|
22
|
Thalhammer J, Schultheiß M, Dorosti T, Lasser T, Pfeiffer F, Pfeiffer D, Schaff F. Improving Automated Hemorrhage Detection at Sparse-View CT via U-Net-based Artifact Reduction. Radiol Artif Intell 2024; 6:e230275. [PMID: 38717293 PMCID: PMC11294955 DOI: 10.1148/ryai.230275] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2023] [Revised: 04/12/2024] [Accepted: 04/22/2024] [Indexed: 06/06/2024]
Abstract
Purpose To explore the potential benefits of deep learning-based artifact reduction in sparse-view cranial CT scans and its impact on automated hemorrhage detection. Materials and Methods In this retrospective study, a U-Net was trained for artifact reduction on simulated sparse-view cranial CT scans in 3000 patients, obtained from a public dataset and reconstructed with varying sparse-view levels. Additionally, EfficientNet-B2 was trained on full-view CT data from 17 545 patients for automated hemorrhage detection. Detection performance was evaluated using the area under the receiver operating characteristic curve (AUC), with differences assessed using the DeLong test, along with confusion matrices. A total variation (TV) postprocessing approach, commonly applied to sparse-view CT, served as the basis for comparison. A Bonferroni-corrected significance level of .001/6 = .00017 was used to account for multiple hypothesis testing. Results Images with U-Net postprocessing were better than unprocessed and TV-processed images with respect to image quality and automated hemorrhage detection. With U-Net postprocessing, the number of views could be reduced from 4096 (AUC: 0.97 [95% CI: 0.97, 0.98]) to 512 (0.97 [95% CI: 0.97, 0.98], P < .00017) and to 256 views (0.97 [95% CI: 0.96, 0.97], P < .00017) with a minimal decrease in hemorrhage detection performance. This was accompanied by mean structural similarity index measure increases of 0.0210 (95% CI: 0.0210, 0.0211) and 0.0560 (95% CI: 0.0559, 0.0560) relative to unprocessed images. Conclusion U-Net-based artifact reduction substantially enhanced automated hemorrhage detection in sparse-view cranial CT scans. Keywords: CT, Head/Neck, Hemorrhage, Diagnosis, Supervised Learning Supplemental material is available for this article. © RSNA, 2024.
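Sparse-view inputs in studies of this kind are typically simulated by forward projecting the full-view image and reconstructing from a subset of angles. The sketch below illustrates that step on a toy image; the function name, view counts, and the use of scikit-image's radon/iradon are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from skimage.transform import radon, iradon

def simulate_sparse_view(image, n_full=512, n_sparse=64):
    """Forward-project over n_full angles, keep every k-th view, and
    reconstruct with filtered backprojection (ramp filter by default)."""
    theta_full = np.linspace(0.0, 180.0, n_full, endpoint=False)
    sinogram = radon(image, theta=theta_full)
    step = n_full // n_sparse
    return iradon(sinogram[:, ::step], theta=theta_full[::step])

# Toy phantom; the resulting streak artifacts would then be reduced by the U-Net
phantom = np.zeros((128, 128))
phantom[40:90, 50:80] = 1.0
sparse_recon = simulate_sparse_view(phantom)
```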
Collapse
Affiliation(s)
- Johannes Thalhammer
- From the Department of Physics, School of Natural Sciences (J.T., M.S., T.D., F.P., D.P., F.S.), Munich Institute of Biomedical Engineering (J.T., M.S., T.D., T.L., F.P., D.P., F.S.), Department of Diagnostic and Interventional Radiology, School of Medicine, Klinikum rechts der Isar (J.T., M.S., T.D., F.P., D.P.), Institute for Advanced Study (J.T., F.P., D.P.), and Computational Imaging and Inverse Problems, Department of Computer Science, School of Computation, Information, and Technology (T.L.), Technical University of Munich, Boltzmannstrasse 11, 85748 Garching, Germany
| | - Manuel Schultheiß
- From the Department of Physics, School of Natural Sciences (J.T., M.S., T.D., F.P., D.P., F.S.), Munich Institute of Biomedical Engineering (J.T., M.S., T.D., T.L., F.P., D.P., F.S.), Department of Diagnostic and Interventional Radiology, School of Medicine, Klinikum rechts der Isar (J.T., M.S., T.D., F.P., D.P.), Institute for Advanced Study (J.T., F.P., D.P.), and Computational Imaging and Inverse Problems, Department of Computer Science, School of Computation, Information, and Technology (T.L.), Technical University of Munich, Boltzmannstrasse 11, 85748 Garching, Germany
| | - Tina Dorosti
- From the Department of Physics, School of Natural Sciences (J.T., M.S., T.D., F.P., D.P., F.S.), Munich Institute of Biomedical Engineering (J.T., M.S., T.D., T.L., F.P., D.P., F.S.), Department of Diagnostic and Interventional Radiology, School of Medicine, Klinikum rechts der Isar (J.T., M.S., T.D., F.P., D.P.), Institute for Advanced Study (J.T., F.P., D.P.), and Computational Imaging and Inverse Problems, Department of Computer Science, School of Computation, Information, and Technology (T.L.), Technical University of Munich, Boltzmannstrasse 11, 85748 Garching, Germany
| | - Tobias Lasser
- From the Department of Physics, School of Natural Sciences (J.T., M.S., T.D., F.P., D.P., F.S.), Munich Institute of Biomedical Engineering (J.T., M.S., T.D., T.L., F.P., D.P., F.S.), Department of Diagnostic and Interventional Radiology, School of Medicine, Klinikum rechts der Isar (J.T., M.S., T.D., F.P., D.P.), Institute for Advanced Study (J.T., F.P., D.P.), and Computational Imaging and Inverse Problems, Department of Computer Science, School of Computation, Information, and Technology (T.L.), Technical University of Munich, Boltzmannstrasse 11, 85748 Garching, Germany
| | - Franz Pfeiffer
- From the Department of Physics, School of Natural Sciences (J.T., M.S., T.D., F.P., D.P., F.S.), Munich Institute of Biomedical Engineering (J.T., M.S., T.D., T.L., F.P., D.P., F.S.), Department of Diagnostic and Interventional Radiology, School of Medicine, Klinikum rechts der Isar (J.T., M.S., T.D., F.P., D.P.), Institute for Advanced Study (J.T., F.P., D.P.), and Computational Imaging and Inverse Problems, Department of Computer Science, School of Computation, Information, and Technology (T.L.), Technical University of Munich, Boltzmannstrasse 11, 85748 Garching, Germany
| | - Daniela Pfeiffer
- From the Department of Physics, School of Natural Sciences (J.T., M.S., T.D., F.P., D.P., F.S.), Munich Institute of Biomedical Engineering (J.T., M.S., T.D., T.L., F.P., D.P., F.S.), Department of Diagnostic and Interventional Radiology, School of Medicine, Klinikum rechts der Isar (J.T., M.S., T.D., F.P., D.P.), Institute for Advanced Study (J.T., F.P., D.P.), and Computational Imaging and Inverse Problems, Department of Computer Science, School of Computation, Information, and Technology (T.L.), Technical University of Munich, Boltzmannstrasse 11, 85748 Garching, Germany
| | - Florian Schaff
- From the Department of Physics, School of Natural Sciences (J.T., M.S., T.D., F.P., D.P., F.S.), Munich Institute of Biomedical Engineering (J.T., M.S., T.D., T.L., F.P., D.P., F.S.), Department of Diagnostic and Interventional Radiology, School of Medicine, Klinikum rechts der Isar (J.T., M.S., T.D., F.P., D.P.), Institute for Advanced Study (J.T., F.P., D.P.), and Computational Imaging and Inverse Problems, Department of Computer Science, School of Computation, Information, and Technology (T.L.), Technical University of Munich, Boltzmannstrasse 11, 85748 Garching, Germany
| |
Collapse
|
23
|
Li G, Deng Z, Ge Y, Luo S. HEAL: High-Frequency Enhanced and Attention-Guided Learning Network for Sparse-View CT Reconstruction. Bioengineering (Basel) 2024; 11:646. [PMID: 39061728 PMCID: PMC11273693 DOI: 10.3390/bioengineering11070646] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2024] [Revised: 06/08/2024] [Accepted: 06/18/2024] [Indexed: 07/28/2024] Open
Abstract
X-ray computed tomography (CT) imaging technology has become an indispensable diagnostic tool in clinical examination. However, it poses a risk of ionizing radiation, making the reduction of radiation dose one of the current research hotspots in CT imaging. Sparse-view imaging, as one of the main methods for reducing radiation dose, has made significant progress in recent years. In particular, sparse-view reconstruction methods based on deep learning have shown promising results. Nevertheless, efficiently recovering image details under ultra-sparse conditions remains a challenge. To address this challenge, this paper proposes a high-frequency enhanced and attention-guided learning Network (HEAL). HEAL includes three optimization strategies to achieve detail enhancement: Firstly, we introduce a dual-domain progressive enhancement module, which leverages fidelity constraints within each domain and consistency constraints across domains to effectively narrow the solution space. Secondly, we incorporate both channel and spatial attention mechanisms to improve the network's feature-scaling process. Finally, we propose a high-frequency component enhancement regularization term that integrates residual learning with direction-weighted total variation, utilizing directional cues to effectively distinguish between noise and textures. The HEAL network is trained, validated and tested under different ultra-sparse configurations of 60 views and 30 views, demonstrating its advantages in reconstruction accuracy and detail enhancement.
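The exact form of the paper's high-frequency enhancement regularizer is not reproduced here; as a hedged point of reference, a generic direction-weighted anisotropic total variation penalty on an image u can be written as

```latex
R_{\mathrm{dTV}}(u) \;=\; \sum_{i,j} \Big( w^{h}_{i,j}\,\big|u_{i+1,j}-u_{i,j}\big| \;+\; w^{v}_{i,j}\,\big|u_{i,j+1}-u_{i,j}\big| \Big)
```

where the weights $w^{h}, w^{v}$ are derived from local directional cues (e.g., gradient orientation) so that genuine edges and textures are penalized less than noise; per the abstract, HEAL integrates a term of this flavor with residual learning.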
Collapse
Affiliation(s)
- Guang Li
- Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China; (G.L.); (Z.D.)
| | - Zhenhao Deng
- Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China; (G.L.); (Z.D.)
| | - Yongshuai Ge
- Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen 518055, China
| | - Shouhua Luo
- Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China; (G.L.); (Z.D.)
| |
Collapse
|
24
|
Ries A, Dorosti T, Thalhammer J, Sasse D, Sauter A, Meurer F, Benne A, Lasser T, Pfeiffer F, Schaff F, Pfeiffer D. Improving image quality of sparse-view lung tumor CT images with U-Net. Eur Radiol Exp 2024; 8:54. [PMID: 38698099 PMCID: PMC11065797 DOI: 10.1186/s41747-024-00450-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2023] [Accepted: 02/09/2024] [Indexed: 05/05/2024] Open
Abstract
BACKGROUND We aimed to improve the image quality (IQ) of sparse-view computed tomography (CT) images using a U-Net for lung metastasis detection and determine the best tradeoff between number of views, IQ, and diagnostic confidence. METHODS CT images from 41 subjects aged 62.8 ± 10.6 years (mean ± standard deviation, 23 men), 34 with lung metastasis, 7 healthy, were retrospectively selected (2016-2018) and forward projected onto 2,048-view sinograms. Six corresponding sparse-view CT data subsets at varying levels of undersampling were reconstructed from sinograms using filtered backprojection with 16, 32, 64, 128, 256, and 512 views. A dual-frame U-Net was trained and evaluated for each subsampling level on 8,658 images from 22 diseased subjects. A representative image per scan was selected from 19 subjects (12 diseased, 7 healthy) for a single-blinded multireader study. These slices, for all levels of subsampling, with and without U-Net postprocessing, were presented to three readers. IQ and diagnostic confidence were ranked using predefined scales. Subjective nodule segmentation was evaluated using sensitivity and Dice similarity coefficient (DSC); clustered Wilcoxon signed-rank test was used. RESULTS The 64-projection sparse-view images resulted in 0.89 sensitivity and 0.81 DSC, while their counterparts, postprocessed with the U-Net, had improved metrics (0.94 sensitivity and 0.85 DSC) (p = 0.400). Fewer views led to insufficient IQ for diagnosis. For increased views, no substantial discrepancies were noted between sparse-view and postprocessed images. CONCLUSIONS Projection views can be reduced from 2,048 to 64 while maintaining IQ and the confidence of the radiologists on a satisfactory level. RELEVANCE STATEMENT Our reader study demonstrates the benefit of U-Net postprocessing for regular CT screenings of patients with lung metastasis to increase the IQ and diagnostic confidence while reducing the dose. KEY POINTS • Sparse-projection-view streak artifacts reduce the quality and usability of sparse-view CT images. • U-Net-based postprocessing removes sparse-view artifacts while maintaining diagnostically accurate IQ. • Postprocessed sparse-view CTs drastically increase radiologists' confidence in diagnosing lung metastasis.
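For reference, the Dice similarity coefficient and sensitivity used to score the nodule segmentations follow their standard definitions; the sketch below is a minimal, assumed implementation on toy binary masks, not the study's evaluation code.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def sensitivity(pred, truth):
    """True-positive rate: fraction of ground-truth pixels that were recovered."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    return np.logical_and(pred, truth).sum() / truth.sum()

# Toy masks standing in for a reader's nodule delineation vs. the reference
truth = np.zeros((64, 64), dtype=bool); truth[20:40, 20:40] = True
pred = np.zeros((64, 64), dtype=bool);  pred[22:42, 22:42] = True
print(dice_coefficient(pred, truth), sensitivity(pred, truth))  # 0.81 0.81
```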
Collapse
Affiliation(s)
- Annika Ries
- Chair of Biomedical Physics, Department of Physics, School of Natural Sciences, Technical University of Munich, Garching, 85748, Germany
- Munich Institute of Biomedical Engineering, Technical University of Munich, 85748, Garching, Germany
| | - Tina Dorosti
- Chair of Biomedical Physics, Department of Physics, School of Natural Sciences, Technical University of Munich, Garching, 85748, Germany.
- Munich Institute of Biomedical Engineering, Technical University of Munich, 85748, Garching, Germany.
- Department of Diagnostic and Interventional Radiology, School of Medicine, Klinikum Rechts Der Isar, Technical University of Munich, 81675, Munich, Germany.
| | - Johannes Thalhammer
- Chair of Biomedical Physics, Department of Physics, School of Natural Sciences, Technical University of Munich, Garching, 85748, Germany
- Munich Institute of Biomedical Engineering, Technical University of Munich, 85748, Garching, Germany
- Department of Diagnostic and Interventional Radiology, School of Medicine, Klinikum Rechts Der Isar, Technical University of Munich, 81675, Munich, Germany
- Institute for Advanced Study, Technical University of Munich, 85748, Garching, Germany
| | - Daniel Sasse
- Department of Diagnostic and Interventional Radiology, School of Medicine, Klinikum Rechts Der Isar, Technical University of Munich, 81675, Munich, Germany
| | - Andreas Sauter
- Department of Diagnostic and Interventional Radiology, School of Medicine, Klinikum Rechts Der Isar, Technical University of Munich, 81675, Munich, Germany
| | - Felix Meurer
- Department of Diagnostic and Interventional Radiology, School of Medicine, Klinikum Rechts Der Isar, Technical University of Munich, 81675, Munich, Germany
| | - Ashley Benne
- Department of Diagnostic and Interventional Radiology, School of Medicine, Klinikum Rechts Der Isar, Technical University of Munich, 81675, Munich, Germany
- Institute for Advanced Study, Technical University of Munich, 85748, Garching, Germany
| | - Tobias Lasser
- Munich Institute of Biomedical Engineering, Technical University of Munich, 85748, Garching, Germany
- Computational Imaging and Inverse Problems, Department of Computer Science, School of Computation, Information, and Technology, Technical University of Munich, 85748, Garching, Germany
| | - Franz Pfeiffer
- Chair of Biomedical Physics, Department of Physics, School of Natural Sciences, Technical University of Munich, Garching, 85748, Germany
- Munich Institute of Biomedical Engineering, Technical University of Munich, 85748, Garching, Germany
- Department of Diagnostic and Interventional Radiology, School of Medicine, Klinikum Rechts Der Isar, Technical University of Munich, 81675, Munich, Germany
- Institute for Advanced Study, Technical University of Munich, 85748, Garching, Germany
| | - Florian Schaff
- Chair of Biomedical Physics, Department of Physics, School of Natural Sciences, Technical University of Munich, Garching, 85748, Germany
- Munich Institute of Biomedical Engineering, Technical University of Munich, 85748, Garching, Germany
| | - Daniela Pfeiffer
- Department of Diagnostic and Interventional Radiology, School of Medicine, Klinikum Rechts Der Isar, Technical University of Munich, 81675, Munich, Germany
- Institute for Advanced Study, Technical University of Munich, 85748, Garching, Germany
| |
Collapse
|
25
|
Zhang X, Zhang B, Zhang F. Stenosis Detection and Quantification of Coronary Artery Using Machine Learning and Deep Learning. Angiology 2024; 75:405-416. [PMID: 37399509 DOI: 10.1177/00033197231187063] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/05/2023]
Abstract
The aim of this review is to introduce some applications of artificial intelligence (AI) algorithms for the detection and quantification of coronary stenosis using computed tomography angiography (CTA). The realization of automatic/semi-automatic stenosis detection and quantification includes the following steps: vessel central axis extraction, vessel segmentation, stenosis detection, and quantification. Many new AI techniques, such as machine learning and deep learning, have been widely used in medical image segmentation and stenosis detection. This review also summarizes the recent progress regarding coronary stenosis detection and quantification, and discusses the development trends in this field. Through evaluation and comparison, researchers can better understand the research frontier in related fields, compare the advantages and disadvantages of various methods, and better optimize the new technologies. Machine learning and deep learning will promote the process of automatic detection and quantification of coronary artery stenosis. However, machine learning and deep learning methods require large amounts of data, so they also face challenges owing to the lack of professional image annotations (labels added manually by experts).
Collapse
Affiliation(s)
- Xinhong Zhang
- School of Software, Henan University, Kaifeng, China
| | - Boyan Zhang
- School of Software, Henan University, Kaifeng, China
| | - Fan Zhang
- Huaihe Hospital, Henan University, Kaifeng, China
| |
Collapse
|
26
|
Hu X, Jia X. Spectral CT image reconstruction using a constrained optimization approach-An algorithm for AAPM 2022 spectral CT grand challenge and beyond. Med Phys 2024; 51:3376-3390. [PMID: 38078560 PMCID: PMC11076172 DOI: 10.1002/mp.16877] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2023] [Revised: 10/17/2023] [Accepted: 11/11/2023] [Indexed: 05/08/2024] Open
Abstract
BACKGROUND CT reconstruction is of essential importance in medical imaging. In 2022, the American Association of Physicists in Medicine (AAPM) sponsored a Grand Challenge to investigate the challenging inverse problem of spectral CT reconstruction, with the aim of achieving the most accurate reconstruction results. The authors of this paper participated in the challenge and won as a runner-up team. PURPOSE This paper reports details of our PROSPECT algorithm (Prior-based Restricted-variable Optimization for SPEctral CT) and follow-up studies regarding the algorithm's accuracy and enhancement of its convergence speed. METHODS We formulated the reconstruction task as an optimization problem. PROSPECT employed a one-step backward iterative scheme to solve this optimization problem by allowing estimation of and correction for the difference between the actual polychromatic projection model and the monochromatic model used in the optimization problem. PROSPECT incorporated various forms of prior information derived by analyzing training data provided by the Grand Challenge to reduce the number of unknown variables. We investigated the impact of projection data precision on the resulting solution accuracy and improved the convergence speed of the PROSPECT algorithm by incorporating a beam-hardening correction (BHC) step in the iterative process. We also studied the algorithm's performance under noisy projection data. RESULTS Prior knowledge allowed a reduction of the number of unknown variables by 85.9%. The PROSPECT algorithm achieved an average root mean square error (RMSE) of 3.3 × 10⁻⁶ in the test data set provided by the Grand Challenge. Performing the reconstruction with the same algorithm but using double-precision projection data reduced the RMSE to 1.2 × 10⁻¹¹. Including the BHC step in the PROSPECT algorithm accelerated the iteration process, with a 40% reduction in computation time. CONCLUSIONS The PROSPECT algorithm achieved a high degree of accuracy and computational efficiency.
Collapse
Affiliation(s)
- Xiaoyu Hu
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, Maryland, USA
| | - Xun Jia
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, Maryland, USA
| |
Collapse
|
27
|
Han Y. Hierarchical decomposed dual-domain deep learning for sparse-view CT reconstruction. Phys Med Biol 2024; 69:085019. [PMID: 38457843 DOI: 10.1088/1361-6560/ad31c7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/26/2023] [Accepted: 03/08/2024] [Indexed: 03/10/2024]
Abstract
Objective. X-ray computed tomography employing sparse projection views has emerged as a contemporary technique to mitigate radiation dose. However, due to the inadequate number of projection views, an analytic reconstruction method utilizing filtered backprojection results in severe streaking artifacts. Recently, deep learning (DL) strategies employing image-domain networks have demonstrated remarkable performance in eliminating the streaking artifact caused by analytic reconstruction methods with sparse projection views. Nevertheless, it is difficult to clarify the theoretical justification for applying DL to sparse view computed tomography (CT) reconstruction, and it has been understood as restoration by removing image artifacts, not reconstruction. Approach. By leveraging the theory of deep convolutional framelets (DCF) and the hierarchical decomposition of measurement, this research reveals the constraints of conventional image and projection-domain DL methodologies; subsequently, it proposes a novel dual-domain DL framework utilizing hierarchical decomposed measurements. Specifically, the research elucidates how the performance of the projection-domain network can be enhanced through a low-rank property of DCF and a bowtie support of hierarchical decomposed measurement in the Fourier domain. Main results. This study demonstrated performance improvement of the proposed framework based on the low-rank property, resulting in superior reconstruction performance compared to conventional analytic and DL methods. Significance. By providing a theoretically justified DL approach for sparse-view CT reconstruction, this study not only offers a superior alternative to existing methods but also opens new avenues for research in medical imaging. It highlights the potential of dual-domain DL frameworks to achieve high-quality reconstructions with lower radiation doses, thereby advancing the field towards safer and more efficient diagnostic techniques. The code is available at https://github.com/hanyoseob/HDD-DL-for-SVCT.
Collapse
Affiliation(s)
- Yoseob Han
- Department of Electronic Engineering, Soongsil University, Republic of Korea
- Department of Intelligent Semiconductors, Soongsil University, Republic of Korea
| |
Collapse
|
28
|
Zhan F, Wang W, Chen Q, Guo Y, He L, Wang L. Three-Direction Fusion for Accurate Volumetric Liver and Tumor Segmentation. IEEE J Biomed Health Inform 2024; 28:2175-2186. [PMID: 38109246 DOI: 10.1109/jbhi.2023.3344392] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2023]
Abstract
Biomedical image segmentation of organs, tissues and lesions has gained increasing attention in clinical treatment planning and navigation, which involves the exploration of two-dimensional (2D) and three-dimensional (3D) contexts in the biomedical image. Compared to 2D methods, 3D methods pay more attention to inter-slice correlations, which offer additional spatial information for image segmentation. An organ or tumor has a 3D structure that can be observed from three directions. Previous studies focus only on the vertical axis, limiting the understanding of the relationship between a tumor and its surrounding tissues. Important information can also be obtained from the sagittal and coronal axes. Therefore, spatial information of organs and tumors can be obtained from three directions, i.e. the sagittal, coronal and vertical axes, to better understand the invasion depth of a tumor and its relationship with the surrounding tissues. Moreover, the edges of organs and tumors in biomedical images may be blurred. To address these problems, we propose a three-direction fusion volumetric segmentation (TFVS) model for segmenting 3D biomedical images from three perspectives in the sagittal, coronal and transverse planes, respectively. We use the dataset of the liver task provided by the Medical Segmentation Decathlon challenge to train our model. The TFVS method demonstrates competitive performance on the 3D-IRCADB dataset. In addition, the t-test and Wilcoxon signed-rank test are performed to show the statistical significance of the improvement by the proposed method as compared with the baseline methods. The proposed method is expected to be beneficial in guiding and facilitating clinical diagnosis and treatment.
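A minimal sketch of the general three-direction idea is given below: slice the volume along each of the three axes, apply a 2D segmentation model to every slice, and average the three probability volumes. The `model_2d` callable and the simple averaging fusion are placeholders for illustration, not the authors' TFVS architecture.

```python
import numpy as np

def three_direction_fusion(volume, model_2d):
    """Segment a 3D volume slice-by-slice along each of the three axes with a
    2D model and average the resulting probability volumes."""
    fused = []
    for axis in range(3):                      # transverse, coronal, sagittal (orientation-dependent)
        stack = np.moveaxis(volume, axis, 0)   # slices along this axis
        probs = np.stack([model_2d(s) for s in stack], axis=0)
        fused.append(np.moveaxis(probs, 0, axis))
    return np.mean(fused, axis=0)

# Usage with a dummy "model" that simply thresholds intensity
dummy_model = lambda s: (s > 0.5).astype(float)
volume = np.random.default_rng(0).random((32, 32, 32))
probability_map = three_direction_fusion(volume, dummy_model)
```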
Collapse
|
29
|
Lu B, Fu L, Pan Y, Dong Y. SWISTA-Nets: Subband-adaptive wavelet iterative shrinkage thresholding networks for image reconstruction. Comput Med Imaging Graph 2024; 113:102345. [PMID: 38330636 DOI: 10.1016/j.compmedimag.2024.102345] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2023] [Revised: 01/29/2024] [Accepted: 01/29/2024] [Indexed: 02/10/2024]
Abstract
Robust and interpretable image reconstruction is central to imaging applications in clinical practice. Prevalent deep networks, despite their strong ability to extract implicit information from the data manifold, still lack prior knowledge introduced from mathematics or physics, leading to instability, poor structural interpretability and high computation cost. To address this issue, we propose two prior knowledge-driven networks that combine the good interpretability of mathematical methods with the powerful learnability of deep learning methods. Incorporating different kinds of prior knowledge, we propose subband-adaptive wavelet iterative shrinkage thresholding networks (SWISTA-Nets), where almost every network module is in one-to-one correspondence with each step of the iterative algorithm. By end-to-end training of the proposed SWISTA-Nets, implicit information can be extracted from training data and used to guide the tuning of key parameters that possess a mathematical definition. The inverse problems associated with two medical imaging modalities, i.e., electromagnetic tomography and X-ray computed tomography, are used to validate the proposed networks. Both visual and quantitative results indicate that the SWISTA-Nets outperform mathematical methods and state-of-the-art prior knowledge-driven networks, especially with fewer training parameters, interpretable network structures and good robustness. We expect that our analysis will support further investigation of prior knowledge-driven networks in the field of ill-posed image reconstruction.
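As a hedged point of reference (not the paper's exact subband-adaptive formulation), the classical wavelet-domain iterative shrinkage-thresholding update that such unrolled networks build on, for an orthonormal wavelet transform $W$ and the problem $\min_x \tfrac12\|Ax-b\|_2^2+\lambda\|Wx\|_1$, is

```latex
x^{(k+1)} \;=\; W^{\top}\,\mathcal{S}_{\lambda/L}\!\Big( W\big(x^{(k)} - \tfrac{1}{L}\,A^{\top}(Ax^{(k)} - b)\big) \Big),
\qquad
\mathcal{S}_{\tau}(z) \;=\; \operatorname{sign}(z)\,\max\big(|z|-\tau,\,0\big)
```

where $L$ is a Lipschitz constant of the gradient of the data term and $\mathcal{S}_\tau$ is the soft-thresholding operator; the network name suggests that SWISTA-Nets learn the threshold adaptively per wavelet subband through end-to-end training.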
Collapse
Affiliation(s)
- Binchun Lu
- Department of Precision Instrument, Tsinghua University, Beijing 100084, China.
| | - Lidan Fu
- CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China.
| | - Yixuan Pan
- Department of Precision Instrument, Tsinghua University, Beijing 100084, China.
| | - Yonggui Dong
- Department of Precision Instrument, Tsinghua University, Beijing 100084, China.
| |
Collapse
|
30
|
Li Y, Feng J, Xiang J, Li Z, Liang D. AIRPORT: A Data Consistency Constrained Deep Temporal Extrapolation Method To Improve Temporal Resolution In Contrast Enhanced CT Imaging. IEEE TRANSACTIONS ON MEDICAL IMAGING 2024; 43:1605-1618. [PMID: 38133967 DOI: 10.1109/tmi.2023.3344712] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/24/2023]
Abstract
Typical tomographic image reconstruction methods require that the imaged object is static and stationary during the time window to acquire a minimally complete data set. The violation of this requirement leads to temporal-averaging errors in the reconstructed images. For a fixed gantry rotation speed, to reduce the errors, it is desired to reconstruct images using data acquired over a narrower angular range, i.e., with a higher temporal resolution. However, image reconstruction with a narrower angular range violates the data sufficiency condition, resulting in severe data-insufficiency-induced errors. The purpose of this work is to decouple the trade-off between these two types of errors in contrast-enhanced computed tomography (CT) imaging. We demonstrated that using the developed data consistency constrained deep temporal extrapolation method (AIRPORT), the entire time-varying imaged object can be accurately reconstructed with 40 frames-per-second temporal resolution, the time window needed to acquire a single projection view data using a typical C-arm cone-beam CT system. AIRPORT is applicable to general non-sparse imaging tasks using a single short-scan data acquisition.
Collapse
|
31
|
Sherwani MK, Gopalakrishnan S. A systematic literature review: deep learning techniques for synthetic medical image generation and their applications in radiotherapy. FRONTIERS IN RADIOLOGY 2024; 4:1385742. [PMID: 38601888 PMCID: PMC11004271 DOI: 10.3389/fradi.2024.1385742] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/13/2024] [Accepted: 03/11/2024] [Indexed: 04/12/2024]
Abstract
The aim of this systematic review is to determine whether Deep Learning (DL) algorithms can provide a clinically feasible alternative to classic algorithms for synthetic Computer Tomography (sCT). The following categories are presented in this study: ∙ MR-based treatment planning and synthetic CT generation techniques. ∙ Generation of synthetic CT images based on Cone Beam CT images. ∙ Low-dose CT to High-dose CT generation. ∙ Attenuation correction for PET images. To perform appropriate database searches, we reviewed journal articles published between January 2018 and June 2023. Current methodology, study strategies, and results with relevant clinical applications were analyzed as we outlined the state-of-the-art of deep learning based approaches to inter-modality and intra-modality image synthesis. This was accomplished by contrasting the provided methodologies with traditional research approaches. The key contributions of each category were highlighted, specific challenges were identified, and accomplishments were summarized. As a final step, the statistics of all the cited works from various aspects were analyzed, which revealed that DL-based sCTs have achieved considerable popularity, while also showing the potential of this technology. In order to assess the clinical readiness of the presented methods, we examined the current status of DL-based sCT generation.
Collapse
Affiliation(s)
- Moiz Khan Sherwani
- Section for Evolutionary Hologenomics, Globe Institute, University of Copenhagen, Copenhagen, Denmark
| | | |
Collapse
|
32
|
Li H, Song Y. Sparse-view X-ray CT based on a box-constrained nonlinear weighted anisotropic TV regularization. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2024; 21:5047-5067. [PMID: 38872526 DOI: 10.3934/mbe.2024223] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2024]
Abstract
Sparse-view computed tomography (CT) is an important way to reduce the negative effects of radiation exposure in medical imaging by skipping some X-ray projections. However, because the Nyquist/Shannon sampling criterion is violated, severe streaking artifacts appear in the reconstructed CT images and could mislead diagnosis. Noting the ill-posed nature of the corresponding inverse problem in sparse-view CT, minimizing an energy functional composed of an image fidelity term together with properly chosen regularization terms is widely used to reconstruct a medically meaningful attenuation image. In this paper, we propose a regularization, called the box-constrained nonlinear weighted anisotropic total variation (box-constrained NWATV), and minimize the regularization term accompanying the least-squares fitting using an alternating direction method of multipliers (ADMM)-type method. The proposed method is validated on the Shepp-Logan phantom model, alongside real walnut X-ray projections provided by the Finnish Inverse Problems Society and human lung images. The experimental results show that the reconstruction speed of the proposed method is significantly accelerated compared to the existing $L_1/L_2$ regularization method. Specifically, the central processing unit (CPU) time is reduced by more than a factor of eight.
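The precise weight function of the box-constrained NWATV regularization is not given in the abstract; as a hedged sketch, the optimization problem has the generic form

```latex
\min_{x \,\in\, [x_{\min},\,x_{\max}]^{N}} \;\; \tfrac12\,\|Ax - b\|_2^2 \;+\; \lambda \sum_{i}\Big( w^{h}_{i}\,\big|(\nabla_{h}x)_i\big| \;+\; w^{v}_{i}\,\big|(\nabla_{v}x)_i\big| \Big)
```

where $A$ is the sparse-view projection (Radon) operator, $b$ the measured projections, $\nabla_h,\nabla_v$ horizontal and vertical finite differences, $w^h,w^v$ data-dependent nonlinear weights, and the box constraint keeps the reconstructed attenuation in a physically meaningful range; an ADMM-type solver splits the weighted-TV term and the box constraint into separate, easily solved subproblems.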
Collapse
Affiliation(s)
- Huiying Li
- School of Mathematics and Statistics, Shandong Normal University, Jinan 250014, China
| | - Yizhuang Song
- School of Mathematics and Statistics, Shandong Normal University, Jinan 250014, China
| |
Collapse
|
33
|
Ling Y, Wang Y, Liu Q, Yu J, Xu L, Zhang X, Liang P, Kong D. EPolar-UNet: An edge-attending polar UNet for automatic medical image segmentation with small datasets. Med Phys 2024; 51:1702-1713. [PMID: 38299370 DOI: 10.1002/mp.16957] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2023] [Revised: 12/29/2023] [Accepted: 01/14/2024] [Indexed: 02/02/2024] Open
Abstract
BACKGROUND Medical image segmentation is one of the key steps in computer-aided clinical diagnosis, geometric characterization, measurement, image registration, and so forth. Convolutional neural networks, especially UNet and its variants, have been successfully used in many medical image segmentation tasks. However, the results are limited by the deficiency in extracting high-resolution edge information, owing to the design of the skip connections in UNet, and by the need for large available datasets. PURPOSE In this paper, we proposed an edge-attending polar UNet (EPolar-UNet), which was trained on the polar coordinate system instead of the classic Cartesian coordinate system, with an edge-attending construction in the skip connection path. METHODS EPolar-UNet extracted the location information from an eight-stacked hourglass network as the pole for polar transformation and extracted the boundary cues from an edge-attending UNet, which consisted of a deconvolution layer and a subtraction operation. RESULTS We evaluated the performance of EPolar-UNet across three imaging modalities for different segmentation tasks: the CVC-ClinicDB dataset for polyps, the ISIC-2018 dataset for skin lesions, and our private ultrasound dataset for liver tumor segmentation. Our proposed model outperformed state-of-the-art models on all three datasets and needed only 30%-60% of the training data compared with the benchmark UNet model to achieve similar performance for medical image segmentation tasks. CONCLUSIONS We proposed an end-to-end EPolar-UNet for automatic medical image segmentation and showed good performance on small datasets, which is critical in the field of medical image segmentation.
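A minimal sketch of the Cartesian-to-polar resampling implied by training in polar coordinates around a detected pole is shown below; the pole is a placeholder for the hourglass-network output, and the use of scipy's map_coordinates is an implementation assumption, not the authors' code.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(image, pole, n_radii=128, n_angles=256):
    """Resample a 2D image onto a polar grid centred at `pole` = (row, col)."""
    max_radius = np.hypot(*image.shape)               # conservative upper bound
    radii = np.linspace(0.0, max_radius, n_radii)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    rr, aa = np.meshgrid(radii, angles, indexing="ij")
    rows = pole[0] + rr * np.sin(aa)
    cols = pole[1] + rr * np.cos(aa)
    return map_coordinates(image, [rows, cols], order=1, mode="constant")

# Toy lesion; the pole stands in for the location predicted by the hourglass network
image = np.zeros((200, 200))
image[90:110, 95:115] = 1.0
polar_view = to_polar(image, pole=(100, 105))         # shape (n_radii, n_angles)
```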
Collapse
Affiliation(s)
- Yating Ling
- School of Mathematical Sciences, Zhejiang University, Hangzhou, China
| | - Yuling Wang
- Department of Interventional Ultrasound, The Fifth Medical Center of Chinese PLA General Hospital, Beijing, China
| | - Qian Liu
- School of Mathematical Sciences, Zhejiang University, Hangzhou, China
| | - Jie Yu
- Department of Interventional Ultrasound, The Fifth Medical Center of Chinese PLA General Hospital, Beijing, China
| | - Lei Xu
- Zhejiang Qiushi Institute for Mathematical Medicine, Hangzhou, China
| | - Xiaoqian Zhang
- School of Mathematical Sciences, Zhejiang University, Hangzhou, China
| | - Ping Liang
- Department of Interventional Ultrasound, The Fifth Medical Center of Chinese PLA General Hospital, Beijing, China
| | - Dexing Kong
- School of Mathematical Sciences, Zhejiang University, Hangzhou, China
| |
Collapse
|
34
|
Sindhura C, Al Fahim M, Yalavarthy PK, Gorthi S. Fully automated sinogram-based deep learning model for detection and classification of intracranial hemorrhage. Med Phys 2024; 51:1944-1956. [PMID: 37702932 DOI: 10.1002/mp.16714] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2023] [Revised: 07/26/2023] [Accepted: 08/20/2023] [Indexed: 09/14/2023] Open
Abstract
PURPOSE To propose an automated approach for detecting and classifying intracranial hemorrhages (ICH) directly from sinograms using a deep learning framework. This method is proposed to overcome the limitations of conventional diagnosis by eliminating the time-consuming reconstruction step and minimizing the potential noise and artifacts that can occur during the computed tomography (CT) reconstruction process. METHODS This study proposes a two-stage automated approach for detecting and classifying ICH from sinograms using a deep learning framework. The first stage of the framework is an Intensity Transformed Sinogram Synthesizer, which synthesizes sinograms that are equivalent to the intensity-transformed CT images. The second stage comprises a cascaded Convolutional Neural Network-Recurrent Neural Network (CNN-RNN) model that detects and classifies hemorrhages from the synthesized sinograms. The CNN module extracts high-level features from each input sinogram, while the RNN module provides spatial correlation of the neighborhood regions in the sinograms. The proposed method was evaluated on a publicly available RSNA dataset consisting of a large sample size of 8652 patients. RESULTS The results showed that the proposed method had a notable improvement as high as 27% in patient-wise accuracy when compared to state-of-the-art methods such as ResNext-101, Inception-v3 and Vision Transformer. Furthermore, the sinogram-based approach was found to be more robust to noise and offset errors in comparison to CT image-based approaches. The proposed model was also subjected to a multi-label classification analysis to determine the hemorrhage type from a given sinogram. The learning patterns of the proposed model were also examined for explainability using activation maps. CONCLUSION The proposed sinogram-based approach can provide an accurate and efficient diagnosis of ICH without the need for the time-consuming reconstruction step and can potentially overcome the limitations of CT image-based approaches. The results show promising outcomes for the use of sinogram-based approaches in detecting hemorrhages, and further research can explore the potential of this approach in clinical settings.
Collapse
Affiliation(s)
- Chitimireddy Sindhura
- Department of Electrical Engineering, Indian Institute of Technology, Tirupati, India
| | - Mohammad Al Fahim
- Department of Electrical Engineering, Indian Institute of Technology, Tirupati, India
| | - Phaneendra K Yalavarthy
- Department of Computational and Data Sciences, Indian Institute of Science, Bengaluru, India
| | - Subrahmanyam Gorthi
- Department of Electrical Engineering, Indian Institute of Technology, Tirupati, India
| |
Collapse
|
35
|
Oh C, Chung JY, Han Y. Domain transformation learning for MR image reconstruction from dual domain input. Comput Biol Med 2024; 170:108098. [PMID: 38330825 DOI: 10.1016/j.compbiomed.2024.108098] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2023] [Revised: 01/22/2024] [Accepted: 02/02/2024] [Indexed: 02/10/2024]
Abstract
Medical images are acquired through diverse imaging systems, with each system employing specific image reconstruction techniques to transform sensor data into images. In MRI, the sensor data (i.e., k-space data) are encoded in the frequency domain, and fully sampled k-space data are transformed into an image using the inverse Fourier transform. However, in efforts to reduce acquisition time, k-space is often subsampled, necessitating a sophisticated image reconstruction method beyond a simple transform. The proposed approach addresses this challenge by training a model to learn the domain transform, generating the final image directly from undersampled k-space input. Significantly, to improve the stability of reconstruction from randomly subsampled k-space data, folded images are incorporated as supplementary inputs in the dual-input ETER-net. Moreover, modifications are made to the formation of inputs for the bi-RNN stages to accommodate non-fixed k-space trajectories. Experimental evaluation, encompassing both regular and irregular sampling trajectories, validates the method's effectiveness. The results demonstrated superior performance, measured by PSNR, SSIM, and VIF, across acceleration factors of 4 and 8. In summary, the dual-input ETER-net emerges as an effective reconstruction approach for both regular and irregular sampling trajectories, accommodating diverse acceleration factors.
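One plausible way to form the "folded" (aliased) supplementary input described above is to zero-fill the unsampled k-space lines and apply the inverse 2D FFT; the sketch below assumes a Cartesian undersampling mask and centered k-space, which is an illustrative simplification rather than the ETER-net preprocessing.

```python
import numpy as np

def folded_image(kspace_centered, mask):
    """Zero-fill unsampled k-space locations and inverse-FFT to an aliased image."""
    k_sub = kspace_centered * mask
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k_sub)))

# Synthetic example: toy object, 4x undersampling along the phase-encode direction
obj = np.zeros((128, 128))
obj[40:90, 50:80] = 1.0
kspace = np.fft.fftshift(np.fft.fft2(obj))            # centered k-space
mask = np.zeros((128, 128)); mask[::4, :] = 1.0       # keep every 4th line
aliased = folded_image(kspace, mask)                  # supplementary network input
```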
Affiliation(s)
- Changheun Oh
  Neuroscience Research Institute, Gachon University Gil Medical Center, Incheon, 21565, Republic of Korea
- Jun-Young Chung
  Neuroscience Research Institute, Gachon University Gil Medical Center, Incheon, 21565, Republic of Korea; Department of Neuroscience, College of Medicine, Gachon University, Incheon, 21565, Republic of Korea
- Yeji Han
  Neuroscience Research Institute, Gachon University Gil Medical Center, Incheon, 21565, Republic of Korea; Department of Biomedical Engineering, Gachon University, Seongnam, 13120, Republic of Korea
|
36
|
Gibson NM, Lee A, Bencsik M. A practical method to simulate realistic reduced-exposure CT images by the addition of computationally generated noise. Radiol Phys Technol 2024; 17:112-123. [PMID: 37955819 DOI: 10.1007/s12194-023-00755-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2023] [Revised: 10/16/2023] [Accepted: 10/17/2023] [Indexed: 11/14/2023]
Abstract
Computed tomography (CT) scanning protocols should be optimized to minimize the radiation dose necessary for imaging. The addition of computationally generated noise to CT images allows reduced-exposure scans to be simulated and thereby facilitates such optimization. The objective of this study was to develop a noise addition method that reproduces the complexity of the noise texture present in clinical images, with directionality that varies over the image according to the underlying anatomy, requiring only Digital Imaging and Communications in Medicine (DICOM) images as input data and commonly available phantoms for calibration. The developed method is based on the estimation of projection data by forward projection from images, the addition of Poisson noise, and the reconstruction of new images. The method was validated by applying it to images acquired from cylindrical and thoracic phantoms, using source images with exposures up to 49 mAs and target images between 39 and 5 mAs. 2D noise spectra were derived for regions of interest in the generated low-dose images and compared with those from the scanner-acquired low-dose images. The root mean square difference between the standard deviations of noise was 4%, except for very low exposures in peripheral regions of the cylindrical phantom. The noise spectra from the corresponding regions of interest exhibited remarkable agreement, indicating that the complex nature of the noise was reproduced. A practical method for adding noise to CT images was presented, and the magnitudes of noise and spectral content were validated. This method may be used to optimize CT imaging.
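The core principle, converting line integrals to photon counts, injecting Poisson noise for the lower target exposure, and converting back, can be sketched as follows. This is a simplified illustration that regenerates the full noise at the target exposure rather than adding only the incremental noise as the paper does, and it omits the forward-projection and reconstruction steps; the incident photon number per mAs is an assumed calibration value.

```python
# Simplified sketch of low-dose noise injection in the projection domain (illustrative only).
import numpy as np

def add_low_dose_noise(projections, mAs_target, I0_per_mAs=2.5e4, rng=None):
    """projections: array of line integrals assumed (nearly) noise-free at the source exposure."""
    rng = np.random.default_rng() if rng is None else rng
    I0_target = I0_per_mAs * mAs_target                        # assumed linear mAs-to-flux scaling
    expected_counts = I0_target * np.exp(-projections)         # expected photon counts at target mAs
    noisy_counts = rng.poisson(expected_counts)                 # quantum (Poisson) noise
    return -np.log(np.maximum(noisy_counts, 1) / I0_target)     # back to line integrals

p = np.random.rand(180, 256) * 3.0          # toy sinogram of line integrals
p_low = add_low_dose_noise(p, mAs_target=10)
```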
Affiliation(s)
- Nicholas Mark Gibson
  Medical Physics and Clinical Engineering, Queens Medical Centre, Nottingham University Hospitals NHS Trust, Derby Road, Nottingham, NG7 2UH, UK
- Amy Lee
  Physics and Mathematics, Nottingham Trent University, Clifton Lane, Clifton, Nottingham, NG11 8NS, UK
- Martin Bencsik
  Physics and Mathematics, Nottingham Trent University, Clifton Lane, Clifton, Nottingham, NG11 8NS, UK
|
37
|
Li G, Huang X, Huang X, Zong Y, Luo S. PIDNET: Polar Transformation Based Implicit Disentanglement Network for Truncation Artifacts. ENTROPY (BASEL, SWITZERLAND) 2024; 26:101. [PMID: 38392356 PMCID: PMC10887623 DOI: 10.3390/e26020101] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/20/2023] [Revised: 01/18/2024] [Accepted: 01/22/2024] [Indexed: 02/24/2024]
Abstract
The interior problem, a persistent ill-posed challenge in CT imaging, gives rise to truncation artifacts capable of distorting CT values, thereby significantly impacting clinical diagnoses. Traditional methods long struggled to solve this issue effectively until the advent of supervised models built on deep neural networks. However, supervised models are constrained by the need for paired data, limiting their practical application. Therefore, we propose a simple and efficient unsupervised method based on the Cycle-GAN framework. Introducing an implicit disentanglement strategy, we aim to separate truncation artifacts from content information. The separated artifact features serve as complementary constraints and as a source for generating simulated paired data, enhancing the training of the sub-network dedicated to removing truncation artifacts. Additionally, we incorporate polar transformation and an innovative constraint tailored specifically for truncation artifact features, further contributing to the effectiveness of our approach. Experiments conducted on multiple datasets demonstrate that our unsupervised network significantly outperforms the traditional Cycle-GAN model. When compared to state-of-the-art supervised models trained on paired datasets, our model achieves comparable visual results and closely matches them on quantitative evaluation metrics.
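The polar transformation used as a pre-processing step can be sketched with a simple cartesian-to-polar resampling; the sampling density along radius and angle is an assumption, and this is not the paper's implementation.

```python
# Sketch of a cartesian-to-polar transformation of a CT image (illustrative only).
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(image, n_radii=256, n_angles=360):
    cy, cx = (np.asarray(image.shape) - 1) / 2.0
    r_max = min(cy, cx)
    radii = np.linspace(0, r_max, n_radii)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    rr, aa = np.meshgrid(radii, angles, indexing="ij")
    rows = cy + rr * np.sin(aa)                     # sample positions in the original image
    cols = cx + rr * np.cos(aa)
    return map_coordinates(image, [rows, cols], order=1, mode="nearest")

polar = to_polar(np.random.rand(512, 512))
print(polar.shape)   # (256, 360): radius along rows, angle along columns
```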
Affiliation(s)
- Guang Li
  School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
- Xinhai Huang
  School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
- Xinyu Huang
  School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
- Yuan Zong
  School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
- Shouhua Luo
  School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
|
38
|
Sheng C, Ding Y, Qi Y, Hu M, Zhang J, Cui X, Zhang Y, Huo W. A denoising method based on deep learning for proton radiograph using energy resolved dose function. Phys Med Biol 2024; 69:025015. [PMID: 38096569 DOI: 10.1088/1361-6560/ad15c4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2023] [Accepted: 12/14/2023] [Indexed: 01/12/2024]
Abstract
Objective. Proton radiography has been broadly applied in proton radiotherapy, but it is affected by scattered protons, which result in a lower spatial resolution of proton radiographs than that of x-ray images. Traditional image denoising methods may change the water equivalent path length (WEPL), lowering the accuracy of WEPL measurement. In this study, we proposed a new denoising method for proton radiographs based on energy resolved dose function curves. Approach. First, the relationship between the distortion of the WEPL characteristic curve and the energy and proportion of scattered protons was established. Then, to improve the accuracy of proton radiographs, a deep learning technique was used to remove scattered protons and correct deviated WEPL values. Experiments on a calibration phantom were performed to prove the effectiveness and feasibility of this method. In addition, an anthropomorphic head phantom was selected to demonstrate the clinical relevance of this technology, and the denoising effect was analyzed. Main results. The WEPL profiles of proton radiographs became smoother and deviated WEPL values were corrected. For the calibration phantom proton radiograph, the average absolute error of WEPL values decreased from 2.23 to 1.72, the mean percentage difference in relative stopping power across all materials decreased from 1.24 to 0.39, and the average relative WEPL correction due to the denoising process was 1.06%. Correction of WEPL values was also observed on the proton radiograph of the anthropomorphic head phantom. Significance. The experiments showed that this new method is effective for proton radiograph denoising and offers advantages over end-to-end image denoising methods, laying the foundation for precise proton radiotherapy.
Affiliation(s)
- Cong Sheng
  Key Laboratory of Electromagnetic Wave Information Technology and Metrology of Zhejiang Province, College of Information Engineering, China Jiliang University, Hangzhou, 310018, People's Republic of China
- Yu Ding
  Key Laboratory of Electromagnetic Wave Information Technology and Metrology of Zhejiang Province, College of Information Engineering, China Jiliang University, Hangzhou, 310018, People's Republic of China
- Yaping Qi
  Division of Ionizing Radiation Metrology, National Institute of Metrology, Beijing, 100029, People's Republic of China
- Man Hu
  Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, 250117, People's Republic of China
- Jianguang Zhang
  Departments of Radiation Oncology, Zibo Wanjie Cancer Hospital, Zibo, 255000, People's Republic of China
- Xiangli Cui
  Institute of Health and Medical Technology, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, 230031, People's Republic of China
- Yingying Zhang
  Department of Oncology, Xiangya Hospital, Central South University, Changsha, 410008, People's Republic of China
- Wanli Huo
  Key Laboratory of Electromagnetic Wave Information Technology and Metrology of Zhejiang Province, College of Information Engineering, China Jiliang University, Hangzhou, 310018, People's Republic of China
|
39
|
Tao S, Tian Z, Bai L, Xu Y, Kuang C, Liu X. Phase retrieval for X-ray differential phase contrast radiography with knowledge transfer learning from virtual differential absorption model. Comput Biol Med 2024; 168:107711. [PMID: 37995534 DOI: 10.1016/j.compbiomed.2023.107711] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2023] [Revised: 10/31/2023] [Accepted: 11/15/2023] [Indexed: 11/25/2023]
Abstract
Grating-based X-ray phase contrast radiography and computed tomography (CT) are promising modalities for future medical applications. However, the ill-posed phase retrieval problem in X-ray phase contrast imaging has hindered its use for quantitative analysis in biomedical imaging. Deep learning has proven to be an effective tool for image retrieval. However, in a practical grating-based X-ray phase contrast imaging system, acquiring the ground truth of the phase to form image pairs is challenging, which poses a great obstacle to using deep learning methods. Transfer learning is widely used to address this problem by inheriting knowledge from similar tasks. In the present research, we propose a virtual differential absorption model and generate a training dataset of differential absorption images and absorption images. The knowledge learned from this training is transferred to phase retrieval with transfer learning techniques. Numerical simulations and experiments both demonstrate its feasibility. The image quality of retrieved phase radiographs and phase CT slices is improved when compared with representative phase retrieval methods. We conclude that this method is helpful in both X-ray 2D and 3D imaging and may find applications in X-ray phase contrast radiography and X-ray phase CT.
Affiliation(s)
- Siwei Tao
  State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, 310027, China
- Zonghan Tian
  State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, 310027, China
- Ling Bai
  State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, 310027, China
- Yueshu Xu
  State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, 310027, China; State Key Laboratory of Extreme Photonics and Instrumentation, ZJU-Hangzhou Global Scientific and Technological Innovation Center, Zhejiang University, Hangzhou, 315100, China
- Cuifang Kuang
  State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, 310027, China; State Key Laboratory of Extreme Photonics and Instrumentation, ZJU-Hangzhou Global Scientific and Technological Innovation Center, Zhejiang University, Hangzhou, 315100, China; Collaborative Innovation Center of Extreme Optics, Shanxi University, Taiyuan, 030006, China
- Xu Liu
  State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, 310027, China; State Key Laboratory of Extreme Photonics and Instrumentation, ZJU-Hangzhou Global Scientific and Technological Innovation Center, Zhejiang University, Hangzhou, 315100, China; Ningbo Research Institute, Zhejiang University, Ningbo, 315100, China
|
40
|
Liu P, Fang C, Qiao Z. A dense and U-shaped transformer with dual-domain multi-loss function for sparse-view CT reconstruction. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2024; 32:207-228. [PMID: 38306086 DOI: 10.3233/xst-230184] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/03/2024]
Abstract
OBJECTIVE CT image reconstruction from sparse-view projections is an important imaging configuration for low-dose CT, as it can reduce radiation dose. However, the CT images reconstructed from sparse-view projections by traditional analytic algorithms suffer from severe sparse artifacts. Therefore, it is of great value to develop advanced methods to suppress these artifacts. In this work, we aim to use a deep learning (DL)-based method to suppress sparse artifacts. METHODS Inspired by the good performance of DenseNet and the Transformer architecture in computer vision tasks, we propose a Dense U-shaped Transformer (D-U-Transformer) to suppress sparse artifacts. This architecture exploits the advantages of densely connected convolutions in capturing local context and of the Transformer in modelling long-range dependencies, and applies channel attention for feature fusion. Moreover, we design a dual-domain multi-loss function with learned weights for the optimization of the model to further improve image quality. RESULTS Experimental results show that the proposed D-U-Transformer outperforms several representative DL-based models on the well-known Mayo Clinic LDCT dataset in terms of artifact suppression and image feature preservation. Extensive internal ablation experiments demonstrate the effectiveness of the components in the proposed model for sparse-view computed tomography (SVCT) reconstruction. SIGNIFICANCE The proposed method can effectively suppress sparse artifacts and achieve high-precision SVCT reconstruction, thus promoting clinical CT scanning towards low-dose radiation and high-quality imaging. The findings of this work can be applied to denoising and artifact removal tasks in CT and other medical images.
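The dual-domain multi-loss with learned weights can be illustrated with an uncertainty-style weighting sketch in PyTorch, where learnable log-variance parameters balance the image-domain and sinogram-domain terms. This is a generic illustration of learnable loss weights, not the paper's exact loss formulation.

```python
# Sketch of a dual-domain loss with learnable balancing weights (illustrative only).
import torch
import torch.nn as nn

class DualDomainLoss(nn.Module):
    def __init__(self):
        super().__init__()
        # log-variances act as learnable weights between the two domains
        self.log_var_img = nn.Parameter(torch.zeros(()))
        self.log_var_sino = nn.Parameter(torch.zeros(()))
        self.l1 = nn.L1Loss()

    def forward(self, img_pred, img_gt, sino_pred, sino_gt):
        loss_img = self.l1(img_pred, img_gt)
        loss_sino = self.l1(sino_pred, sino_gt)
        return (torch.exp(-self.log_var_img) * loss_img + self.log_var_img
                + torch.exp(-self.log_var_sino) * loss_sino + self.log_var_sino)
```

In practice, the loss module's parameters would be passed to the optimizer together with the network parameters so the weighting is learned during training.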
Affiliation(s)
- Peng Liu
  School of Computer and Information Technology, Shanxi University, Taiyuan, China
  Department of Big Data and Intelligent Engineering, Shanxi Institute of Technology, Yangquan, China
- Chenyun Fang
  School of Computer and Information Technology, Shanxi University, Taiyuan, China
- Zhiwei Qiao
  School of Computer and Information Technology, Shanxi University, Taiyuan, China
|
41
|
Azad R, Kazerouni A, Heidari M, Aghdam EK, Molaei A, Jia Y, Jose A, Roy R, Merhof D. Advances in medical image analysis with vision Transformers: A comprehensive review. Med Image Anal 2024; 91:103000. [PMID: 37883822 DOI: 10.1016/j.media.2023.103000] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2023] [Revised: 09/30/2023] [Accepted: 10/11/2023] [Indexed: 10/28/2023]
Abstract
The remarkable performance of the Transformer architecture in natural language processing has recently also triggered broad interest in Computer Vision. Among other merits, Transformers have been shown to be capable of learning long-range dependencies and spatial correlations, which is a clear advantage over convolutional neural networks (CNNs), which have so far been the de facto standard in Computer Vision. Thus, Transformers have become an integral part of modern medical image analysis. In this work, we provide an encyclopedic review of the applications of Transformers in medical imaging. Specifically, we present a systematic and thorough review of relevant recent Transformer literature for different medical image analysis tasks, including classification, segmentation, detection, registration, synthesis, and clinical report generation. For each of these applications, we investigate the novelty, strengths, and weaknesses of the different proposed strategies and develop taxonomies highlighting key properties and contributions. Further, where applicable, we outline current benchmarks on different datasets. Finally, we summarize key challenges and discuss different future research directions. In addition, we provide the cited papers with their corresponding implementations at https://github.com/mindflow-institue/Awesome-Transformer.
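For readers new to the architecture family surveyed here, a minimal Vision Transformer classifier looks roughly like the sketch below: an image is patchified by a strided convolution, a class token and positional embeddings are added, and a standard Transformer encoder processes the token sequence. Patch size, depth, and widths are arbitrary choices for illustration, not tied to any paper in this list.

```python
# Minimal Vision Transformer sketch for a single-channel medical image classifier.
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    def __init__(self, img=224, patch=16, dim=256, depth=4, heads=8, n_classes=2):
        super().__init__()
        n_patches = (img // patch) ** 2
        self.embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)   # patchify + project
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, n_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):                                    # x: (B, 1, 224, 224)
        tokens = self.embed(x).flatten(2).transpose(1, 2)    # (B, N, dim)
        cls = self.cls.expand(x.size(0), -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos
        return self.head(self.encoder(tokens)[:, 0])         # classify from the CLS token

print(TinyViT()(torch.randn(2, 1, 224, 224)).shape)          # torch.Size([2, 2])
```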
Affiliation(s)
- Reza Azad
  Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
- Amirhossein Kazerouni
  School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
- Moein Heidari
  School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
- Amirali Molaei
  School of Computer Engineering, Iran University of Science and Technology, Tehran, Iran
- Yiwei Jia
  Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
- Abin Jose
  Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
- Rijo Roy
  Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
- Dorit Merhof
  Faculty of Informatics and Data Science, University of Regensburg, Regensburg, Germany; Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
|
42
|
Yang Z, Chen Y, Huangfu H, Ran M, Wang H, Li X, Zhang Y. Dynamic Corrected Split Federated Learning With Homomorphic Encryption for U-Shaped Medical Image Networks. IEEE J Biomed Health Inform 2023; 27:5946-5957. [PMID: 37729562 DOI: 10.1109/jbhi.2023.3317632] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/22/2023]
Abstract
U-shaped networks have become prevalent in various medical image tasks such as segmentation and restoration. However, most existing U-shaped networks rely on centralized learning, which raises privacy concerns. To address these issues, federated learning (FL) and split learning (SL) have been proposed. However, achieving a balance between local computational cost, model privacy, and parallel training remains a challenge. In this article, we propose a novel hybrid learning paradigm called Dynamic Corrected Split Federated Learning (DC-SFL) for U-shaped medical image networks. To simultaneously preserve the privacy of the input, model parameters, labels, and output, we propose splitting the network into three parts hosted by different parties. We propose a Dynamic Weight Correction Strategy (DWCS) to stabilize the training process and avoid the model drift problem caused by data heterogeneity. To further enhance privacy protection and establish a trustworthy distributed learning paradigm, we propose introducing additively homomorphic encryption into the aggregation process of the client-side models, which helps prevent potential collusion between parties and provides a better privacy guarantee for the proposed method. The proposed DC-SFL is evaluated on various medical image tasks, and the experimental results demonstrate its effectiveness. In comparison with state-of-the-art distributed learning methods, our method achieves competitive performance.
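The three-way split of the network across parties can be illustrated with a minimal sketch: one party holds the front layers (and the raw inputs), a second party holds the middle layers (seeing only intermediate activations), and a third holds the final layers (and the labels). The layer boundaries here are arbitrary, and the dynamic weight correction and encrypted aggregation steps are not shown.

```python
# Sketch of a three-way model split across parties (illustrative only).
import torch
import torch.nn as nn

front  = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())   # party A: holds the inputs
middle = nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())  # party B: sees only activations
back   = nn.Sequential(nn.Conv2d(16, 1, 3, padding=1))              # party C: holds outputs/labels

x = torch.randn(1, 1, 64, 64)
smashed = front(x)            # intermediate ("smashed") data sent from A to B
hidden = middle(smashed)      # processed at B, then sent to C
y_pred = back(hidden)         # C computes the loss against its labels and backpropagates
print(y_pred.shape)
```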
|
43
|
Kim S, Kim B, Lee J, Baek J. Sparsier2Sparse: Self-supervised convolutional neural network-based streak artifacts reduction in sparse-view CT images. Med Phys 2023; 50:7731-7747. [PMID: 37303108 DOI: 10.1002/mp.16552] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/26/2022] [Revised: 05/26/2023] [Accepted: 05/29/2023] [Indexed: 06/13/2023] Open
Abstract
BACKGROUND Sparse-view computed tomography (CT) has attracted a lot of attention for reducing both scanning time and radiation dose. However, sparsely-sampled projection data generate severe streak artifacts in the reconstructed images. In recent decades, many sparse-view CT reconstruction techniques based on fully-supervised learning have been proposed and have shown promising results. However, it is not feasible to acquire pairs of full-view and sparse-view CT images in real clinical practice. PURPOSE In this study, we propose a novel self-supervised convolutional neural network (CNN) method to reduce streak artifacts in sparse-view CT images. METHODS We generate the training dataset using only sparse-view CT data and train the CNN based on self-supervised learning. Since the streak artifacts can be estimated using prior images under the same CT geometry system, we acquire prior images by iteratively applying the trained network to the given sparse-view CT images. We then subtract the estimated streak artifacts from the given sparse-view CT images to produce the final results. RESULTS We validated the imaging performance of the proposed method using the extended cardiac-torso (XCAT) phantom and the 2016 AAPM Low-Dose CT Grand Challenge dataset from the Mayo Clinic. Visual inspection and modulation transfer function (MTF) results showed that the proposed method preserved anatomical structures effectively and achieved higher image resolution than various streak artifact reduction methods for all projection views. CONCLUSIONS We propose a new framework for streak artifact reduction when only sparse-view CT data are given. Although we do not use any information from full-view CT data for CNN training, the proposed method achieved the highest performance in preserving fine details. By overcoming the limitation of the dataset requirements of fully supervised methods, we expect that our framework can be utilized in the medical imaging field.
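One way to read the inference pipeline described above is sketched below: a prior image is obtained by iteratively applying the trained network, the streak artifacts are estimated by re-projecting and reconstructing the prior under the same sparse-view geometry and taking the difference, and the estimate is subtracted from the input. The `net`, `project`, and `reconstruct` callables are placeholders, and this is an interpretation for illustration rather than the authors' code.

```python
# Hedged sketch of streak-artifact subtraction at inference time (placeholder operators).
import torch

@torch.no_grad()
def remove_streaks(sparse_img, net, project, reconstruct, n_prior_iters=3):
    prior = sparse_img
    for _ in range(n_prior_iters):          # iteratively refine the prior image with the trained CNN
        prior = net(prior)
    # estimate the streaks the sparse-view geometry would imprint on the prior
    prior_sparse = reconstruct(project(prior, sparse=True))
    est_artifacts = prior_sparse - prior
    return sparse_img - est_artifacts       # subtract the estimated streak artifacts
```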
Affiliation(s)
- Seongjun Kim
  School of Integrated Technology, Yonsei University, Incheon, South Korea
- Byeongjoon Kim
  Department of Artificial Intelligence, College of Computing, Yonsei University, Seoul, South Korea
- Jooho Lee
  Department of Artificial Intelligence, College of Computing, Yonsei University, Seoul, South Korea
- Jongduk Baek
  Department of Artificial Intelligence, College of Computing, Yonsei University, Seoul, South Korea
  Bareunex Imaging, Inc., Seoul, South Korea
|
44
|
Mori S, Hirai R, Sakata Y, Koto M, Ishikawa H. Shortening image registration time using a deep neural network for patient positional verification in radiotherapy. Phys Eng Sci Med 2023; 46:1563-1572. [PMID: 37639109 DOI: 10.1007/s13246-023-01320-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2022] [Accepted: 08/09/2023] [Indexed: 08/29/2023]
Abstract
We sought to accelerate 2D/3D image registration using image synthesis with a deep neural network (DNN) that generates digitally reconstructed radiographic (DRR) images from X-ray flat panel detector (FPD) images, and we explored the feasibility of using our DNN for patient setup verification. Images of the prostate and of the head and neck (H&N) regions were acquired by two oblique X-ray fluoroscopic units and the treatment planning CT. The DNN was designed to generate DRR images from the FPD image data. We evaluated the quality of the synthesized DRR images against the ground-truth DRR images using the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). Image registration accuracy and computation time were evaluated by comparing the 2D-3D image registration algorithm using DRR and FPD image data with that using DRR and synthesized DRR images. Mean PSNR values were 23.4 ± 3.7 dB and 24.1 ± 3.9 dB for the pelvic and H&N regions, respectively. Mean SSIM values for both cases were also similar (0.90). Image registration accuracy was degraded by a mean of 0.43 mm and 0.30°, which was clinically acceptable. Computation time was accelerated by a factor of 0.69. Our DNN successfully generated DRR images from FPD image data and improved the 2D-3D image registration computation time by up to 37% on average.
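The PSNR and SSIM comparison between synthesized and ground-truth DRRs can be reproduced with standard scikit-image metrics, as in the sketch below; the image arrays are placeholders.

```python
# Sketch of comparing a synthesized DRR against a ground-truth DRR with PSNR and SSIM.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

gt_drr = np.random.rand(256, 256).astype(np.float32)                       # ground-truth DRR (placeholder)
syn_drr = np.clip(gt_drr + 0.05 * np.random.randn(256, 256), 0, 1).astype(np.float32)

psnr = peak_signal_noise_ratio(gt_drr, syn_drr, data_range=1.0)
ssim = structural_similarity(gt_drr, syn_drr, data_range=1.0)
print(f"PSNR = {psnr:.1f} dB, SSIM = {ssim:.3f}")
```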
Affiliation(s)
- Shinichiro Mori
  Quantum Life and Medical Science Directorate, Institute for Quantum Medical Science, National Institutes for Quantum Science and Technology, Inage-ku, Chiba, 263-8555, Japan
  Research Center for Charged Particle Therapy, National Institute of Radiological Sciences, Inage-ku, Chiba, 263-8555, Japan
- Ryusuke Hirai
  Corporate Research and Development Center, Toshiba Corporation, Kanagawa, 212-8582, Japan
- Yukinobu Sakata
  Corporate Research and Development Center, Toshiba Corporation, Kanagawa, 212-8582, Japan
- Masashi Koto
  QST hospital, National Institutes for Quantum Science and Technology, Inage-ku, Chiba, 263-8555, Japan
- Hitoshi Ishikawa
  QST hospital, National Institutes for Quantum Science and Technology, Inage-ku, Chiba, 263-8555, Japan
|
45
|
Saha PK, Nadeem SA, Comellas AP. A Survey on Artificial Intelligence in Pulmonary Imaging. WILEY INTERDISCIPLINARY REVIEWS. DATA MINING AND KNOWLEDGE DISCOVERY 2023; 13:e1510. [PMID: 38249785 PMCID: PMC10796150 DOI: 10.1002/widm.1510] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/18/2022] [Accepted: 06/21/2023] [Indexed: 01/23/2024]
Abstract
Over the last decade, deep learning (DL) has contributed to a paradigm shift in computer vision and image recognition, creating widespread opportunities for using artificial intelligence in research as well as industrial applications. DL has been extensively studied in medical imaging applications, including those related to pulmonary diseases. Chronic obstructive pulmonary disease, asthma, lung cancer, pneumonia, and, more recently, COVID-19 are common lung diseases affecting nearly 7.4% of the world population. Pulmonary imaging has been widely investigated toward improving our understanding of disease etiologies and early diagnosis and assessment of disease progression and clinical outcomes. DL has been broadly applied to solve various pulmonary image processing challenges including classification, recognition, registration, and segmentation. This paper presents a survey of pulmonary diseases, roles of imaging in translational and clinical pulmonary research, and applications of different DL architectures and methods in pulmonary imaging, with emphasis on DL-based segmentation of major pulmonary anatomies such as lung volumes, lung lobes, pulmonary vessels, and airways, as well as thoracic musculoskeletal anatomies related to pulmonary diseases.
Affiliation(s)
- Punam K Saha
  Departments of Radiology and Electrical and Computer Engineering, University of Iowa, Iowa City, IA, 52242
|
46
|
You S, Lei B, Wang S, Chui CK, Cheung AC, Liu Y, Gan M, Wu G, Shen Y. Fine Perceptive GANs for Brain MR Image Super-Resolution in Wavelet Domain. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2023; 34:8802-8814. [PMID: 35254996 DOI: 10.1109/tnnls.2022.3153088] [Citation(s) in RCA: 38] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Magnetic resonance (MR) imaging plays an important role in clinical diagnosis and brain exploration. However, limited by factors such as imaging hardware, scanning time, and cost, it is challenging to acquire high-resolution MR images clinically. In this article, fine perceptive generative adversarial networks (FP-GANs) are proposed to produce super-resolution (SR) MR images from the low-resolution counterparts. By adopting the divide-and-conquer scheme, FP-GANs are designed to deal with the low-frequency (LF) and high-frequency (HF) components of MR images separately and in parallel. Specifically, FP-GANs first decompose an MR image into LF global approximation and HF anatomical texture subbands in the wavelet domain. Then, each subband generative adversarial network (GAN) simultaneously concentrates on super-resolving the corresponding subband image. In the generator, multiple residual-in-residual dense blocks are introduced for better feature extraction. In addition, the texture-enhancing module is designed to balance the weighting between global topology and detailed textures. Finally, the whole image is reconstructed by integrating the inverse discrete wavelet transform into FP-GANs. Comprehensive experiments on the MultiRes_7T and ADNI datasets demonstrate that the proposed model achieves finer structure recovery and outperforms the competing methods quantitatively and qualitatively. Moreover, FP-GANs further demonstrate their value when the SR results are applied to classification tasks.
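The wavelet split into a low-frequency approximation and high-frequency detail subbands, on which the per-subband GANs operate, can be illustrated with PyWavelets; the choice of the Haar wavelet and a single decomposition level are assumptions for illustration.

```python
# Sketch of the wavelet subband decomposition and reassembly (illustrative only).
import numpy as np
import pywt

mr_slice = np.random.rand(256, 256)                      # placeholder MR slice
LL, (LH, HL, HH) = pywt.dwt2(mr_slice, "haar")           # 1-level 2D DWT: approximation + details
# ... each subband would be super-resolved by its own GAN ...
reconstructed = pywt.idwt2((LL, (LH, HL, HH)), "haar")   # inverse DWT reassembles the image
print(LL.shape, reconstructed.shape)                      # (128, 128) (256, 256)
```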
|
47
|
Li Q, Li R, Wang T, Cheng Y, Qiang Y, Wu W, Zhao J, Zhang D. A cascade-based dual-domain data correction network for sparse view CT image reconstruction. Comput Biol Med 2023; 165:107345. [PMID: 37603960 DOI: 10.1016/j.compbiomed.2023.107345] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2023] [Revised: 07/18/2023] [Accepted: 08/07/2023] [Indexed: 08/23/2023]
Abstract
Computed tomography (CT) provides non-invasive views of the anatomical structures of the human body and is widely used for clinical diagnosis, but excessive ionizing radiation from X-rays can harm the human body. Therefore, researchers reconstruct sparse-view CT (SVCT) images from sparse sinograms obtained by reducing the number of X-ray projections, thereby reducing the radiation delivered to the patient. This paper proposes a cascade-based dual-domain data correction network (CDDCN), which can effectively combine the complementary information contained in the sinogram domain and the image domain to reconstruct high-quality CT images from sparse-view sinograms. Specifically, several encoder-decoder subnets are cascaded in the sinogram domain to reconstruct artifact-free and noise-free CT images. In the encoder-decoder subnets, spatial-channel domain learning is designed to achieve efficient feature fusion through a group merging structure, providing continuous and elaborate pixel-level features and improving feature extraction efficiency. At the same time, to retain the originally collected sinogram data, a sinogram data consistency layer is proposed to ensure the fidelity of the sinogram data. To further maintain consistency between the reconstructed image and the reference image, a multi-level composite loss function is designed for regularization, compensating for the excessive smoothing and distortion caused by pixel loss and preserving image details and texture. Quantitative and qualitative analysis shows that CDDCN achieves competitive results in artifact removal, edge preservation, detail restoration, and visual improvement for sparsely sampled data under different views.
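A sinogram data consistency layer of the kind described above can be sketched as a masked blend: values at the acquired view angles are copied back from the measurement so the network can only alter the unmeasured views. The mask convention below is an assumption for illustration.

```python
# Sketch of a sinogram data-consistency operation (illustrative only).
import torch

def sinogram_data_consistency(pred_sino, measured_sino, view_mask):
    """view_mask: (n_views, 1) tensor, 1 where the view was actually acquired."""
    return view_mask * measured_sino + (1.0 - view_mask) * pred_sino

pred = torch.rand(360, 512)                   # network-completed sinogram
meas = torch.zeros(360, 512)
mask = torch.zeros(360, 1)
mask[::6] = 1.0                               # 60 acquired views out of 360
meas[::6] = torch.rand(60, 512)               # the actually measured views
consistent = sinogram_data_consistency(pred, meas, mask)
```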
Affiliation(s)
- Qing Li
  College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, 030024, China
- Runrui Li
  College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, 030024, China
- Tao Wang
  College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, 030024, China
- Yubin Cheng
  College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, 030024, China
- Yan Qiang
  College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, 030024, China
- Wei Wu
  Department of Clinical Laboratory, Affiliated People's Hospital of Shanxi Medical University, Shanxi Provincial People's Hospital, Taiyuan, 030012, China
- Juanjuan Zhao
  College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, 030024, China; School of Information Engineering, Jinzhong College of Information, Jinzhong, 030800, China
- Dongxu Zhang
  College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, 030024, China
|
48
|
Mirbeik A, Ebadi N. Deep learning for tumor margin identification in electromagnetic imaging. Sci Rep 2023; 13:15925. [PMID: 37741854 PMCID: PMC10517989 DOI: 10.1038/s41598-023-42625-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2023] [Accepted: 09/12/2023] [Indexed: 09/25/2023] Open
Abstract
In this work, a novel method for tumor margin identification in electromagnetic imaging is proposed to optimize tumor removal surgery. This capability will enable the surgeon to visualize the border of the cancerous tissue prior to or during excision surgery. To this end, the border between the normal and tumor parts needs to be identified; therefore, the images need to be segmented into tumor and normal areas. We propose a deep learning technique that divides the electromagnetic images into two regions, tumor and normal, with high accuracy. We formulate deep learning from a perspective relevant to electromagnetic image reconstruction. A recurrent auto-encoder network architecture (termed here DeepTMI) is presented. The effectiveness of the algorithm is demonstrated by segmenting the reconstructed images of an experimental tissue-mimicking phantom. The average structural similarity measure (SSIM) and mean-square error (MSE) of the normalized results reconstructed by the DeepTMI method are about 0.94 and 0.04, respectively, whereas the corresponding averages obtained with the conventional backpropagation (BP) method barely reach 0.35 and 0.41, respectively.
Affiliation(s)
- Amir Mirbeik
  RadioSight LLC, Hoboken, NJ, 07030, USA
  Department of Electrical and Computer Engineering, Stevens Institute of Technology, 1 Castle Point Ter, Hoboken, NJ, 07030, USA
- Negar Ebadi
  Department of Electrical and Computer Engineering, Stevens Institute of Technology, 1 Castle Point Ter, Hoboken, NJ, 07030, USA
  Stanford University School of Medicine, Stanford, CA, USA
|
49
|
Cheng W, He J, Liu Y, Zhang H, Wang X, Liu Y, Zhang P, Chen H, Gui Z. CAIR: Combining integrated attention with iterative optimization learning for sparse-view CT reconstruction. Comput Biol Med 2023; 163:107161. [PMID: 37311381 DOI: 10.1016/j.compbiomed.2023.107161] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2023] [Revised: 05/21/2023] [Accepted: 06/07/2023] [Indexed: 06/15/2023]
Abstract
Sparse-view CT is an efficient way to lower the scanning dose, but it degrades image quality. Inspired by the successful use of non-local attention in natural image denoising and compression artifact removal, we propose a network combining integrated attention and iterative optimization learning for sparse-view CT reconstruction (CAIR). Specifically, we first unroll proximal gradient descent into a deep network and add an enhanced initializer between the gradient term and the approximation term. This enhances the information flow between different layers, fully preserves image details, and improves the network convergence speed. Secondly, the integrated attention module is introduced into the reconstruction process as a regularization term. It adaptively fuses the local and non-local features of the image, which are used to reconstruct its complex textures and repetitive details, respectively. Note that we innovatively designed a one-shot iteration strategy to simplify the network structure and reduce the reconstruction time while maintaining image quality. Experiments showed that the proposed method is very robust and outperforms state-of-the-art methods both quantitatively and qualitatively, greatly improving structure preservation and artifact removal.
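The unrolling idea behind such networks can be sketched as follows: each stage takes a gradient step on the data-fidelity term and then applies a small learned refinement CNN in place of the proximal operator. The forward and adjoint operators are passed in as generic callables; this illustrates unrolled proximal gradient descent in general, not the CAIR architecture or its integrated attention module.

```python
# Sketch of unrolled proximal gradient descent with learned proximal steps (illustrative only).
import torch
import torch.nn as nn

class ProxCNN(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)                 # residual refinement

class UnrolledPGD(nn.Module):
    def __init__(self, n_iters=5):
        super().__init__()
        self.stages = nn.ModuleList(ProxCNN() for _ in range(n_iters))
        self.step = nn.Parameter(torch.full((n_iters,), 0.1))   # learned step sizes

    def forward(self, x0, y, A, At):
        x = x0
        for k, prox in enumerate(self.stages):
            grad = At(A(x) - y)                # gradient of 0.5 * ||A(x) - y||^2
            x = prox(x - self.step[k] * grad)
        return x

# toy usage with identity operators standing in for projection / back-projection
net = UnrolledPGD()
out = net(torch.zeros(1, 1, 64, 64), torch.randn(1, 1, 64, 64), A=lambda v: v, At=lambda v: v)
```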
Affiliation(s)
- Weiting Cheng
  State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan, 030051, China
- Jichun He
  School of Medical and BioInformation Engineering, Northeastern University, Shenyang, 110000, China
- Yi Liu
  State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan, 030051, China
- Haowen Zhang
  State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan, 030051, China
- Xiang Wang
  State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan, 030051, China
- Yuhang Liu
  State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan, 030051, China
- Pengcheng Zhang
  State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan, 030051, China
- Hao Chen
  State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan, 030051, China
- Zhiguo Gui
  State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan, 030051, China
|
50
|
Ernst P, Chatterjee S, Rose G, Speck O, Nürnberger A. Sinogram upsampling using Primal-Dual UNet for undersampled CT and radial MRI reconstruction. Neural Netw 2023; 166:704-721. [PMID: 37604079 DOI: 10.1016/j.neunet.2023.08.004] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2023] [Revised: 08/03/2023] [Accepted: 08/04/2023] [Indexed: 08/23/2023]
Abstract
Computed tomography (CT) and magnetic resonance imaging (MRI) are two widely used clinical imaging modalities for non-invasive diagnosis. However, both of these modalities come with certain problems: CT uses harmful ionising radiation, and MRI suffers from slow acquisition speed. Both problems can be tackled by undersampling, such as sparse sampling. However, such undersampled data lead to lower resolution and introduce artefacts. Several techniques, including deep learning based methods, have been proposed to reconstruct such data. However, the undersampled reconstruction problem for these two modalities has typically been treated as two different problems and tackled separately by different research works. This paper proposes a unified solution for both sparse CT and undersampled radial MRI reconstruction, achieved by applying Fourier transform-based pre-processing to the radial MRI and then reconstructing both modalities using sinogram upsampling combined with filtered back-projection. The Primal-Dual network is a deep learning based method for reconstructing sparsely-sampled CT data. This paper introduces Primal-Dual UNet, which improves the Primal-Dual network in terms of accuracy and reconstruction speed. The proposed method resulted in an average SSIM of 0.932±0.021 while performing sparse CT reconstruction for fan-beam geometry with a sparsity level of 16, achieving a statistically significant improvement over the previous model, which resulted in 0.919±0.016. Furthermore, the proposed model resulted in average SSIMs of 0.903±0.019 and 0.957±0.023 while reconstructing undersampled brain and abdominal MRI data with an acceleration factor of 16, respectively - statistically significant improvements over the original model, which resulted in 0.867±0.025 and 0.949±0.025. Finally, this paper shows that the proposed network not only improves the overall image quality but also improves the image quality in the regions of interest (liver, kidneys, and spleen), and generalises better than the baselines in the presence of a needle.
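As a point of reference for what the learned sinogram upsampling replaces, a naive baseline simply interpolates the sparse sinogram along the angular axis before filtered back-projection, as in the sketch below; the view counts are arbitrary and the FBP step is omitted.

```python
# Sketch of naive angular sinogram upsampling by linear interpolation (baseline, illustrative only).
import numpy as np
from scipy.ndimage import zoom

sparse_sino = np.random.rand(45, 512)                  # 45 acquired views, 512 detector bins
factor = 16                                            # target: 720 views
dense_sino = zoom(sparse_sino, (factor, 1), order=1)   # interpolate only along the angular axis
print(dense_sino.shape)                                # (720, 512)
```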
Affiliation(s)
- Philipp Ernst
  Data and Knowledge Engineering Group, Faculty of Computer Science, Otto von Guericke University Magdeburg, Germany; Research Campus STIMULATE, Otto von Guericke University Magdeburg, Germany
- Soumick Chatterjee
  Data and Knowledge Engineering Group, Faculty of Computer Science, Otto von Guericke University Magdeburg, Germany; Genomics Research Centre, Human Technopole, Milan, Italy
- Georg Rose
  Institute of Medical Engineering, Faculty of Electrical Engineering and Information Technology, Otto von Guericke University Magdeburg, Germany; Research Campus STIMULATE, Otto von Guericke University Magdeburg, Germany
- Oliver Speck
  Biomedical Magnetic Resonance, Faculty of Natural Sciences, Otto von Guericke University Magdeburg, Germany; Research Campus STIMULATE, Otto von Guericke University Magdeburg, Germany; German Centre for Neurodegenerative Disease, Magdeburg, Germany; Centre for Behavioural Brain Sciences, Magdeburg, Germany
- Andreas Nürnberger
  Data and Knowledge Engineering Group, Faculty of Computer Science, Otto von Guericke University Magdeburg, Germany; Centre for Behavioural Brain Sciences, Magdeburg, Germany
|