1. Amadita K, Gray F, Gee E, Ekpo E, Jimenez Y. CT metal artefact reduction for hip and shoulder implants using novel algorithms and machine learning: A systematic review with pairwise and network meta-analyses. Radiography (Lond) 2025;31:36-52. [PMID: 39509906] [DOI: 10.1016/j.radi.2024.10.009]
Abstract
INTRODUCTION Many tools have been developed to reduce metal artefacts in computed tomography (CT) images caused by metallic prostheses; however, their relative effectiveness in preserving image quality is poorly understood. This paper reviews the literature on novel metal artefact reduction (MAR) methods targeting large metal artefacts in fan-beam CT to examine their effectiveness in reducing metal artefacts and their effect on image quality. METHODS The PRISMA checklist was used to search for articles in five electronic databases (MEDLINE, Scopus, Web of Science, IEEE, EMBASE). Studies that assessed the effectiveness of recently developed MAR methods on fan-beam CT images of hip and shoulder implants were reviewed. Study quality was assessed using the National Institutes of Health (NIH) tool. Meta-analyses were conducted in R, and results that could not be meta-analysed were synthesised narratively. RESULTS Thirty-six studies were reviewed. Of these, 20 proposed statistical algorithms and 16 used machine learning (ML), and there were 19 novel comparators. Network meta-analysis of 19 studies showed that Recurrent Neural Network MAR (RNN-MAR) is more effective in reducing noise (LogOR 20.7; 95% CI 12.6-28.9) without compromising image quality (LogOR 4.4; 95% CI -13.8 to 22.5). The network meta-analysis and narrative synthesis showed that novel MAR methods reduce noise more effectively than baseline algorithms, with five of 23 ML methods significantly more effective than Filtered Back Projection (FBP) (p < 0.05). Computation time varied, but ML methods were faster than statistical algorithms. CONCLUSION ML tools are more effective in reducing metal artefacts without compromising image quality and are computationally faster than statistical algorithms. Overall, novel MAR methods were also more effective in reducing noise than the baseline reconstructions.
IMPLICATIONS FOR PRACTICE Implementation research is needed to establish the clinical suitability of ML MAR in practice.
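Pooled effect sizes such as the LogORs reported above are built from per-study log odds ratios with Wald-type confidence intervals. A minimal sketch of that single-study computation, using hypothetical counts rather than data from the review:

```python
import math

def log_odds_ratio_ci(a, b, c, d, z=1.96):
    """Log odds ratio for a 2x2 table with a Wald-type 95% CI.

    a, b: events / non-events in the treatment arm
    c, d: events / non-events in the control arm
    """
    log_or = math.log((a * d) / (b * c))
    # Standard error of the log OR: sqrt of summed reciprocal cell counts
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return log_or, (log_or - z * se, log_or + z * se)

# Illustrative counts only (not data from the review):
log_or, (lo, hi) = log_odds_ratio_ci(40, 10, 20, 30)
```

A network meta-analysis then combines such study-level estimates across the whole comparison graph; the R package used for that step is not named in the abstract.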
Affiliation(s)
- K Amadita
- Discipline of Medical Imaging Science, Faculty of Medicine and Health, University of Sydney, NSW 2006, Australia.
- F Gray
- Discipline of Medical Imaging Science, Faculty of Medicine and Health, University of Sydney, NSW 2006, Australia.
- E Gee
- Discipline of Medical Imaging Science, Faculty of Medicine and Health, University of Sydney, NSW 2006, Australia.
- E Ekpo
- Discipline of Medical Imaging Science, Faculty of Medicine and Health, University of Sydney, NSW 2006, Australia.
- Y Jimenez
- Discipline of Medical Imaging Science, Faculty of Medicine and Health, University of Sydney, NSW 2006, Australia.
2. Konst B, Ohlsson L, Henriksson L, Sandstedt M, Persson A, Ebbers T. Optimization of photon counting CT for cardiac imaging in patients with left ventricular assist devices: An in-depth assessment of metal artifacts. J Appl Clin Med Phys 2024;25:e14386. [PMID: 38739330] [PMCID: PMC11244676] [DOI: 10.1002/acm2.14386]
Abstract
PURPOSE Photon counting CT (PCCT) holds promise for mitigating metal artifacts and can produce virtual mono-energetic images (VMI) while maintaining temporal resolution, making it a valuable tool for characterizing the heart. This study aimed to evaluate and optimize PCCT for cardiac imaging in patients during left ventricular assist device (LVAD) therapy by conducting an in-depth objective assessment of metal artifacts and visual grading. METHODS Various scan and reconstruction settings were tested on a phantom and further evaluated on a patient acquisition to identify the optimal protocol settings. The phantom comprised an empty thoracic cavity, supplemented with heart and lungs from a cadaveric lamb. The heart was implanted with an LVAD (HeartMate 3) and iodine contrast. Scans were performed on a PCCT system (NAEOTOM Alpha, Siemens Healthcare). Metal artifacts were assessed by three objective methods: Hounsfield unit (HU)/SD measurements (DiffHU and SDARTIFACT), Fourier analysis (AmplitudeLowFreq), and the depicted LVAD volume in the images (BloomVol). Radiologists graded metal artifacts and diagnostic interpretability in the LVAD lumen, cardiac tissue, lung tissue, and spinal cord using a 5-point rating scale. Regression and correlation analyses were conducted to determine the assessment method most closely associated with acquisition and reconstruction parameters, as well as the objective method demonstrating the highest correlation with visual grading. RESULTS Due to blooming artifacts, the LVAD volume fluctuated between 27.0 and 92.7 cm3. This variance was primarily influenced by kVp, kernel, keV, and iMAR (R2 = 0.989). Radiologists favored pacemaker iMAR, 3 mm slice thickness, and T3D keV and kernel Bv56f for minimal metal artifacts in cardiac tissue assessment, and 110 keV and Qr40f for lung tissue interpretation.
The model adequacy for DiffHU, SDARTIFACT, AmplitudeLowFreq, and BloomVol was 0.28, 0.76, 0.29, and 0.99, respectively, for phantom data, and 0.95, 0.98, 1.00, and 0.99 for in-vivo data. For in-vivo data, the correlations between visual grading (VGSUM) and DiffHU, SDARTIFACT, AmplitudeLowFreq, and BloomVol were -0.16, -0.01, -0.48, and -0.40, respectively. CONCLUSION We found that optimal scan settings for LVAD imaging involved 120 kVp and IQ level 80. Employing T3D with pacemaker iMAR, the sharpest allowed vascular kernel (Bv56f), and VMI at 110 keV with kernel Qr40 yields images suitable for cardiac imaging during LVAD therapy. Volumetric measurement of the LVAD to determine the extent of blooming artifacts was shown to be the best objective method for assessing metal artifacts.
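The reported model-adequacy and correlation figures come down to two standard statistics, the coefficient of determination and Pearson's r. A small sketch with hypothetical readings (not the study's data; variable names are illustrative):

```python
import numpy as np

def r_squared(y, y_hat):
    """Coefficient of determination, the usual 'model adequacy' statistic."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    ss_res = float(np.sum((y - y_hat) ** 2))
    ss_tot = float(np.sum((y - y.mean()) ** 2))
    return 1.0 - ss_res / ss_tot

def pearson_r(x, y):
    """Pearson correlation, e.g. visual grading vs. an objective metric."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(np.sum(xc * yc) / np.sqrt(np.sum(xc ** 2) * np.sum(yc ** 2)))

# Hypothetical readings: better visual grades paired with smaller blooming volume
vgsum = [5, 7, 9, 12, 15]
bloom_vol = [90, 80, 72, 60, 45]
r = pearson_r(vgsum, bloom_vol)   # strongly negative for these toy values
```

A negative r, as for BloomVol in the abstract, simply means the objective metric falls as the visual grade rises.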
Affiliation(s)
- Bente Konst
- Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden
- Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
- Department of Radiology, Vestfold Hospital, Tønsberg, Norway
- Linus Ohlsson
- Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
- Department of Thoracic and Vascular Surgery in Östergötland, and Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden
- Lilian Henriksson
- Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
- Department of Radiology in Linköping, and Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden
- Mårten Sandstedt
- Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
- Department of Radiology in Linköping, and Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden
- Anders Persson
- Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
- Department of Radiology in Linköping, and Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden
- Tino Ebbers
- Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden
- Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
3. Li Z, Gao Q, Wu Y, Niu C, Zhang J, Wang M, Wang G, Shan H. Quad-Net: Quad-Domain Network for CT Metal Artifact Reduction. IEEE Trans Med Imaging 2024;43:1866-1879. [PMID: 38194399] [DOI: 10.1109/tmi.2024.3351722]
Abstract
Metal implants and other high-density objects in patients introduce severe streaking artifacts in CT images, compromising image quality and diagnostic performance. Although various methods were developed for CT metal artifact reduction over the past decades, including the latest dual-domain deep networks, remaining metal artifacts are still clinically challenging in many cases. Here we extend the state-of-the-art dual-domain deep network approach into a quad-domain counterpart so that all the features in the sinogram, image, and their corresponding Fourier domains are synergized to eliminate metal artifacts optimally without compromising structural subtleties. Our proposed quad-domain network for MAR, referred to as Quad-Net, takes little additional computational cost since the Fourier transform is highly efficient, and works across the four receptive fields to learn both global and local features as well as their relations. Specifically, we first design a Sinogram-Fourier Restoration Network (SFR-Net) in the sinogram domain and its Fourier space to faithfully inpaint metal-corrupted traces. Then, we couple SFR-Net with an Image-Fourier Refinement Network (IFR-Net) which takes both an image and its Fourier spectrum to improve a CT image reconstructed from the SFR-Net output using cross-domain contextual information. Quad-Net is trained on clinical datasets to minimize a composite loss function. Quad-Net does not require precise metal masks, which is of great importance in clinical practice. Our experimental results demonstrate the superiority of Quad-Net over the state-of-the-art MAR methods quantitatively, visually, and statistically. The Quad-Net code is publicly available at https://github.com/longzilicart/Quad-Net.
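The appeal of the Fourier pathways is that a pointwise operation on the 2-D spectrum touches every pixel at once, giving a global receptive field at almost no cost. A toy numpy sketch of that idea (not the authors' architecture; the spectral weight here is a fixed stand-in for learned parameters):

```python
import numpy as np

def fourier_unit(x, spectral_weight):
    """Pointwise weighting in the Fourier domain: every output pixel
    depends on the whole input, i.e. a global receptive field."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * spectral_weight))

def quad_style_block(x, spectral_weight, alpha=0.5):
    """Blend a local (spatial) branch with the global (Fourier) branch,
    mimicking the paired spatial/Fourier features in each Quad-Net domain."""
    local = x                                  # stand-in for a convolutional branch
    global_ = fourier_unit(x, spectral_weight)
    return alpha * local + (1.0 - alpha) * global_

x = np.random.default_rng(0).normal(size=(8, 8))
identity_w = np.ones((8, 8))                   # identity spectral weighting
y = quad_style_block(x, identity_w)            # identity weights leave x unchanged
```

In the real network the same pattern is applied both to sinogram features (SFR-Net) and to image features (IFR-Net), which is what makes it "quad-domain".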
4. Cui W, Lv H, Wang J, Zheng Y, Wu Z, Zhao H, Zheng J, Li M. Feature shared multi-decoder network using complementary learning for photon counting CT ring artifact suppression. J Xray Sci Technol 2024;32:529-547. [PMID: 38669511] [DOI: 10.3233/xst-230396]
Abstract
BACKGROUND Photon-counting computed tomography (PCCT) utilizes photon-counting detectors to precisely count incident photons and measure their energy. Compared with traditional energy-integrating detectors, these detectors provide better image contrast and material differentiation. However, PCCT tends to show more noticeable ring artifacts than conventional spiral CT because of limited photon counts and detector response variations. OBJECTIVE To address this issue, we propose a novel feature shared multi-decoder network (FSMDN) that utilizes complementary learning to suppress ring artifacts in PCCT images. METHODS Specifically, we employ a feature-sharing encoder to extract context and ring-artifact features, facilitating effective feature sharing. These shared features are then processed independently by separate decoders dedicated to the context and ring-artifact channels, working in parallel. Through complementary learning, this approach achieves superior artifact suppression while preserving tissue details. RESULTS We conducted extensive experiments on PCCT images with ring artifacts at three intensity levels. Both qualitative and quantitative results demonstrate that our network model corrects ring artifacts at different levels while exhibiting superior stability and robustness compared with the comparison methods. CONCLUSIONS In this paper, we have introduced a novel deep learning network designed to mitigate ring artifacts in PCCT images. The results illustrate the viability and efficacy of the proposed network model as a new deep-learning-based method for suppressing ring artifacts.
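The layout described above, one shared encoder feeding two parallel decoders, can be sketched with toy linear layers (hypothetical stand-ins; the real FSMDN uses convolutional blocks):

```python
import numpy as np

rng = np.random.default_rng(42)

# One shared encoder, two parallel decoders (context vs. ring artifact).
# These linear maps are illustrative placeholders, not the paper's layers.
W_enc = rng.normal(scale=0.1, size=(16, 8))   # shared encoder
W_ctx = rng.normal(scale=0.1, size=(8, 16))   # context (tissue) decoder
W_art = rng.normal(scale=0.1, size=(8, 16))   # ring-artifact decoder

def forward(x):
    z = np.maximum(x @ W_enc, 0.0)            # shared features (ReLU)
    return z @ W_ctx, z @ W_art               # parallel decoder outputs

patches = rng.normal(size=(4, 16))            # batch of flattened patches
context, artifact = forward(patches)
# Complementary learning would train the pair so that context + artifact
# reconstructs the corrupted input while the context stays artifact-free.
```

The point of the shared encoder is that both decoders see the same features, so whatever the artifact decoder explains away is, by construction, removed from what the context decoder must represent.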
Affiliation(s)
- Wei Cui
- Institute of Electronic Information Engineering, Changchun University of Science and Technology, Changchun, China
- Haipeng Lv
- Institute of Electronic Information Engineering, Changchun University of Science and Technology, Changchun, China
- Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
- Jiping Wang
- Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Zhongyi Wu
- Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Hui Zhao
- Wenzhou People's Hospital, Wenzhou, China
- Jian Zheng
- Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Ming Li
- Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
5. Ikuta M, Zhang J. A Deep Convolutional Gated Recurrent Unit for CT Image Reconstruction. IEEE Trans Neural Netw Learn Syst 2023;34:10612-10625. [PMID: 35522637] [DOI: 10.1109/tnnls.2022.3169569]
Abstract
Computed tomography (CT) is one of the most important medical imaging technologies in use today. Most commercial CT products use a technique known as filtered backprojection (FBP), which is fast and produces decent image quality when the X-ray dose is high. However, FBP is not good enough for low-dose CT imaging, because the image reconstruction problem becomes more stochastic. A more effective reconstruction technique, proposed recently and implemented in a limited number of commercial CT products, is iterative reconstruction (IR). The IR technique is based on a Bayesian formulation of the CT image reconstruction problem with an explicit model of the CT scanning process, including its stochastic nature, and a prior model that incorporates knowledge about what a good CT image should look like. However, constructing such prior knowledge is more complicated than it seems. In this article, we propose a novel neural network for CT image reconstruction. The network is based on the IR formulation and constructed with a recurrent neural network (RNN). Specifically, we transform the gated recurrent unit (GRU) into a neural network performing CT image reconstruction, which we call "GRU reconstruction." This neural network conducts concurrent dual-domain learning. Many deep learning (DL)-based methods in medical imaging are single-domain, but dual-domain learning performs better because it learns from both the sinogram and the image domain. In addition, we propose backpropagation through stage (BPTS) as a new RNN backpropagation algorithm. It is similar to backpropagation through time (BPTT) but is tailored for iterative optimization.
Results from extensive experiments indicate that our proposed method outperforms conventional model-based methods, single-domain DL methods, and state-of-the-art DL techniques in terms of root mean squared error (RMSE), peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and visual appearance.
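The GRU that the method repurposes updates its state with the standard gate equations; a minimal numpy sketch of one cell, unrolled for a few steps in the spirit of "one stage per iteration" (illustrative only, not the paper's reconstruction network):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_cell(x, h, params):
    """One standard GRU step: update gate z, reset gate r, candidate state."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(x @ Wz + h @ Uz)               # update gate
    r = sigmoid(x @ Wr + h @ Ur)               # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)   # candidate state
    return (1.0 - z) * h + z * h_tilde         # convex blend of old and new

rng = np.random.default_rng(0)
d = 6
params = tuple(rng.normal(scale=0.1, size=(d, d)) for _ in range(6))
h = np.zeros(d)                 # state, standing in for the running image estimate
x = rng.normal(size=d)          # input, standing in for data-consistency terms
for _ in range(3):              # unrolled stages, as in backprop through stage
    h = gru_cell(x, h, params)
```

In the paper's framing, each unrolled step corresponds to one iteration of the IR optimization, so BPTS backpropagates through stages of the solver rather than through time steps of a sequence.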
6. Yun S, Jeong U, Lee D, Kim H, Cho S. Image quality improvement in bowtie-filter-equipped cone-beam CT using a dual-domain neural network. Med Phys 2023;50:7498-7512. [PMID: 37669510] [DOI: 10.1002/mp.16693]
Abstract
BACKGROUND The bowtie filter in cone-beam CT (CBCT) produces a spatially nonuniform x-ray beam, often leading to eclipse artifacts in the reconstructed image. These artifacts are further confounded by patient scatter and are therefore patient-dependent as well as system-specific. PURPOSE In this study, we propose a dual-domain network for reducing bowtie-filter-induced artifacts in CBCT images. METHODS In the projection domain, the network compensates for the filter-induced beam hardening that is highly related to the eclipse artifacts. The output of the projection-domain network was used for image reconstruction, and the reconstructed images were fed into the image-domain network. In the image domain, the network further reduces the remaining cupping artifacts associated with scatter. A single image-domain-only network was also implemented for comparison. RESULTS The proposed approach successfully enhanced soft-tissue contrast with much-reduced image artifacts. In the numerical study, the proposed method decreased the perceptual loss and root-mean-square error (RMSE) of the images by 84.5% and 84.9%, respectively, and increased the structural similarity index measure (SSIM) by 0.26 compared with the original input images on average. In the experimental study, the proposed method decreased the perceptual loss and RMSE by 87.2% and 92.1%, respectively, and increased SSIM by 0.58 compared with the original input images on average. CONCLUSIONS We have proposed a deep-learning-based dual-domain framework to reduce bowtie-filter artifacts and increase soft-tissue contrast in CBCT images. The performance of the proposed method was successfully demonstrated in both numerical and experimental studies.
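The percentage improvements quoted above follow directly from the metric definitions. A small sketch with toy 1-D "images" (the numbers here are illustrative, not the study's data):

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two images."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

def percent_decrease(before, after):
    """Relative drop in a metric, as a percentage of its starting value."""
    return 100.0 * (before - after) / before

gt = np.linspace(0.0, 1.0, 100)   # ground-truth profile
corrupted = gt + 0.2              # constant cupping-like bias (toy example)
corrected = gt + 0.02             # after artifact reduction (toy example)

drop = percent_decrease(rmse(gt, corrupted), rmse(gt, corrected))
# drop == 90.0 for these toy profiles; the paper reports 84.9% and 92.1%.
```

SSIM changes are reported as absolute differences (e.g. +0.26) rather than percentages, since SSIM is already bounded in [0, 1].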
Affiliation(s)
- Sungho Yun
- Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
- Uijin Jeong
- Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
- Donghyeon Lee
- Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
- Hyeongseok Kim
- KAIST Institute for Artificial Intelligence, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
- Seungryong Cho
- Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
- KAIST Institute for Artificial Intelligence, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
- KAIST Institute for Health Science and Technology, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
- KAIST Institute for IT Convergence, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
7. Du M, Liang K, Zhang L, Gao H, Liu Y, Xing Y. Deep-Learning-Based Metal Artefact Reduction With Unsupervised Domain Adaptation Regularization for Practical CT Images. IEEE Trans Med Imaging 2023;42:2133-2145. [PMID: 37022909] [DOI: 10.1109/tmi.2023.3244252]
Abstract
CT metal artefact reduction (MAR) methods based on supervised deep learning are often troubled by the domain gap between simulated training data and real application data, i.e., methods trained on simulation cannot generalize well to practical data. Unsupervised MAR methods can be trained directly on practical data, but they learn MAR with indirect metrics and often perform unsatisfactorily. To tackle the domain gap problem, we propose a novel MAR method called UDAMAR based on unsupervised domain adaptation (UDA). Specifically, we introduce a UDA regularization loss into a typical image-domain supervised MAR method, which mitigates the domain discrepancy between simulated and practical artefacts by feature-space alignment. Our adversarial-based UDA focuses on a low-level feature space, where the domain difference of metal artefacts mainly lies. UDAMAR can simultaneously learn MAR from simulated data with known labels and extract critical information from unlabelled practical data. Experiments on both clinical dental and torso datasets show the superiority of UDAMAR, which outperforms its supervised backbone and two state-of-the-art unsupervised methods. We carefully analyze UDAMAR through experiments on simulated metal artefacts and various ablation studies. On simulation, its performance close to the supervised methods and its advantages over the unsupervised methods justify its efficacy. Ablation studies on the influence of the UDA regularization loss weight, the UDA feature layers, and the amount of practical data used for training further demonstrate the robustness of UDAMAR. UDAMAR provides a simple and clean design and is easy to implement. These advantages make it a very feasible solution for practical CT MAR.
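The general shape of such an objective, a supervised term on labelled simulated pairs plus a weighted adversarial alignment term, can be sketched as follows. This is a hypothetical sketch of the loss structure only, not UDAMAR's exact formulation:

```python
import numpy as np

def uda_mar_objective(pred_sim, label_sim, d_sim, d_real, lam=0.1):
    """Hypothetical sketch of a UDA-regularized MAR objective.

    pred_sim/label_sim: network output and ground truth on simulated data.
    d_sim/d_real: domain-discriminator outputs (probability of 'simulated')
    on features from each domain; the alignment term is a binary
    cross-entropy, here weighted by lam.
    """
    supervised = float(np.mean((pred_sim - label_sim) ** 2))   # MAR fidelity
    eps = 1e-12                                                # guard for log(0)
    adversarial = float(-np.mean(np.log(d_sim + eps))
                        - np.mean(np.log(1.0 - d_real + eps)))
    return supervised + lam * adversarial
```

In adversarial training the feature extractor is pushed in the opposite direction, so that the discriminator cannot tell simulated from practical features, which is what aligns the two domains.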
8. Koetzier LR, Mastrodicasa D, Szczykutowicz TP, van der Werf NR, Wang AS, Sandfort V, van der Molen AJ, Fleischmann D, Willemink MJ. Deep Learning Image Reconstruction for CT: Technical Principles and Clinical Prospects. Radiology 2023;306:e221257. [PMID: 36719287] [PMCID: PMC9968777] [DOI: 10.1148/radiol.221257]
Abstract
Filtered back projection (FBP) has been the standard CT image reconstruction method for 4 decades. A simple, fast, and reliable technique, FBP has delivered high-quality images in several clinical applications. However, with faster and more advanced CT scanners, FBP has become increasingly obsolete. Higher image noise and more artifacts are especially noticeable in lower-dose CT imaging using FBP. This performance gap was partly addressed by model-based iterative reconstruction (MBIR). Yet, its "plastic" image appearance and long reconstruction times have limited widespread application. Hybrid iterative reconstruction partially addressed these limitations by blending FBP with MBIR and is currently the state-of-the-art reconstruction technique. In the past 5 years, deep learning reconstruction (DLR) techniques have become increasingly popular. DLR uses artificial intelligence to reconstruct high-quality images from lower-dose CT faster than MBIR. However, the performance of DLR algorithms relies on the quality of data used for model training. Higher-quality training data will become available with photon-counting CT scanners. At the same time, spectral data would greatly benefit from the computational abilities of DLR. This review presents an overview of the principles, technical approaches, and clinical applications of DLR, including metal artifact reduction algorithms. In addition, emerging applications and prospects are discussed.
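The two steps behind FBP, ramp filtering and back-projection, can be sketched in a few lines. A parallel-beam toy (nearest-neighbour interpolation, no apodization window), not a commercial implementation:

```python
import numpy as np

def ramp_filter(sino):
    """Filter each projection row with the ramp |f| in the frequency domain."""
    n = sino.shape[-1]
    ramp = np.abs(np.fft.fftfreq(n))
    return np.real(np.fft.ifft(np.fft.fft(sino, axis=-1) * ramp, axis=-1))

def backproject(filtered, angles, size):
    """Smear each filtered projection back across the image along its angle."""
    img = np.zeros((size, size))
    c = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size]
    for row, theta in zip(filtered, angles):
        # Detector coordinate of each pixel for this view (nearest neighbour)
        t = (xs - c) * np.cos(theta) + (ys - c) * np.sin(theta) + c
        img += row[np.clip(np.round(t).astype(int), 0, size - 1)]
    return img * np.pi / len(angles)

# The ramp kills the DC term, so flat projections reconstruct to zero.
flat_recon = backproject(ramp_filter(np.ones((4, 16))),
                         np.linspace(0.0, np.pi, 4, endpoint=False), 16)
```

Everything the review discusses, MBIR, hybrid IR, and DLR, can be read as progressively richer replacements for these two fixed linear steps.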
Affiliation(s)
- Timothy P. Szczykutowicz, Niels R. van der Werf, Adam S. Wang, Veit Sandfort, Aart J. van der Molen, Dominik Fleischmann, Martin J. Willemink
- From the Department of Radiology (L.R.K., D.M., A.S.W., V.S., D.F., M.J.W.) and Stanford Cardiovascular Institute (D.M., D.F., M.J.W.), Stanford University School of Medicine, 300 Pasteur Dr, Stanford, CA 94305-5105; Department of Radiology, University of Wisconsin–Madison, School of Medicine and Public Health, Madison, Wis (T.P.S.); Department of Radiology, Erasmus Medical Center, Rotterdam, the Netherlands (N.R.v.d.W.); Clinical Science Western Europe, Philips Healthcare, Best, the Netherlands (N.R.v.d.W.); and Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands (A.J.v.d.M.)
9. Kim S, Ahn J, Kim B, Kim C, Baek J. Convolutional neural network-based metal and streak artifacts reduction in dental CT images with sparse-view sampling scheme. Med Phys 2022;49:6253-6277. [DOI: 10.1002/mp.15884]
Affiliation(s)
- Seongjun Kim
- School of Integrated Technology, Yonsei University, Incheon 21983, South Korea
- Junhyun Ahn
- School of Integrated Technology, Yonsei University, Incheon 21983, South Korea
- Byeongjoon Kim
- School of Integrated Technology, Yonsei University, Incheon 21983, South Korea
- Chulhong Kim
- Departments of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, School of Interdisciplinary Bioscience and Bioengineering, and Medical Device Innovation Center, Pohang University of Science and Technology, Pohang 37673, South Korea
- Jongduk Baek
- School of Integrated Technology, Yonsei University, Incheon 21983, South Korea
10. Anhaus JA, Killermann P, Sedlmair M, Winter J, Mahnken AH, Hofmann C. Non-linearly scaled (NLS) prior image-controlled frequency split for high-frequency metal artifact reduction in computed tomography. Med Phys 2022;49:5870-5885. [PMID: 35866263] [DOI: 10.1002/mp.15879]
Abstract
PURPOSE This paper introduces a new approach for the dedicated reduction of high-frequency metal artifacts, which applies a non-linear scaling (NLS) transfer function to the high-frequency projection data to reduce artifacts while preserving edge information and anatomic detail by incorporating prior image information. METHODS A non-linear scaling function is applied to suppress high-frequency streak artifacts; to restrict the correction to metal projections only, the scaling is performed in the sinogram domain. Anatomic information is preserved and excluded from scaling by incorporating a prior image from tissue classification. The corrected high-frequency sinogram is reconstructed and combined with the low-frequency component of an NMAR image. Scans of different anthropomorphic phantoms were acquired (unilateral hip, bilateral hip, dental implants, and embolization coil). Multiple ROIs were drawn around the metal implants, and HU deviations were analyzed. Clinical datasets including single image slices of dental fillings, a bilateral hip implant, spinal fixation screws, and an aneurysm coil were reconstructed and assessed. RESULTS The prior image-controlled non-linear scaling function can remove streak artifacts while preserving anatomic detail within bone and soft tissue. The qualitative analysis of clinical cases showed a marked improvement around dental fillings and neuro coils, and a significant improvement around spinal screws and hip implants. The phantom measurements support this observation: in all phantom setups, the NLS-corrected result showed the lowest HU deviation and the best visualization of the data. CONCLUSIONS The prior image-controlled NLS provides a method to reduce high-frequency streaks in metal-corrupted CT data.
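The frequency split and the non-linear scaling step can be sketched with crude stand-ins: a box filter for the low-pass split and a tanh compressor for the transfer function. Both are hypothetical substitutes for the paper's actual operators, and the combination runs in image space here rather than on sinograms:

```python
import numpy as np

def box_lowpass(img, k=5):
    """Crude separable box low-pass, standing in for the frequency split."""
    kernel = np.ones(k) / k
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, tmp)

def nls_scale(high, s=50.0):
    """Hypothetical non-linear scaling transfer: compresses large
    (streak-like) high-frequency amplitudes, passes small ones almost unchanged."""
    return s * np.tanh(high / s)

def frequency_split_mar(image, nmar_image, k=5):
    """Low-frequency content from the NMAR image plus NLS-suppressed
    high-frequency content, echoing the paper's combination step."""
    high = image - box_lowpass(image, k)
    return box_lowpass(nmar_image, k) + nls_scale(high)

flat = np.full((10, 10), 7.0)
out = frequency_split_mar(flat, flat)  # interior of a uniform region is unchanged
```

The design intent carried over from the paper is the split of responsibilities: low frequencies come from the already-corrected NMAR result, while only the streak-prone high frequencies pass through the non-linear suppression.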
Affiliation(s)
- Julian A Anhaus
- Siemens Healthineers, CT Physics, Forchheim, D-91031, Germany; Philipps-University Marburg, Marburg, D-35037, Germany
- Martin Sedlmair
- Siemens Healthineers, CT Physics, Forchheim, D-91031, Germany
- Jonas Winter
- Siemens Healthineers, CT Physics, Forchheim, D-91031, Germany
|
11
|
LMA-Net: A lesion morphology aware network for medical image segmentation towards breast tumors. Comput Biol Med 2022; 147:105685. [DOI: 10.1016/j.compbiomed.2022.105685] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2022] [Revised: 04/20/2022] [Accepted: 05/30/2022] [Indexed: 11/17/2022]
|
12
|
Wei Z, Wu X, Tong W, Zhang S, Yang X, Tian J, Hui H. Elimination of stripe artifacts in light sheet fluorescence microscopy using an attention-based residual neural network. Biomed Opt Express 2022; 13:1292-1311. [PMID: 35414974 PMCID: PMC8973169 DOI: 10.1364/boe.448838] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/18/2021] [Revised: 01/15/2022] [Accepted: 01/28/2022] [Indexed: 06/14/2023]
Abstract
Stripe artifacts can deteriorate the quality of light sheet fluorescence microscopy (LSFM) images. Owing to inhomogeneous, highly absorbing, or scattering objects located in the excitation light path, stripe artifacts are generated in LSFM images in various directions and types, such as horizontal, anisotropic, or multidirectional anisotropic, and severely degrade image quality. To address this issue, we propose a new deep-learning-based approach for the elimination of stripe artifacts. The method uses a UNet encoder-decoder structure with residual blocks and attention modules between successive convolutional layers. The attention module is implemented in the residual blocks to learn useful features and suppress uninformative ones. The network was trained and validated on three degradation datasets generated with different types of stripe artifacts in LSFM images. Our method effectively removes different stripes in both generated and actual LSFM images distorted by stripe artifacts. Moreover, quantitative analysis and extensive comparisons demonstrate that our method performs best against classical image-processing algorithms and other powerful deep-learning-based destriping methods on all three generated datasets. Thus, our method has broad application prospects in LSFM, and its use can easily be extended to images from other modalities affected by stripe artifacts.
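The core building block described here, a residual block whose branch is re-weighted by a channel attention gate, follows the common squeeze-and-excitation pattern and can be sketched in a few lines of numpy. This is a hedged, generic illustration of that pattern under assumed shapes and names (`channel_attention`, `attention_residual_block`, the per-channel weight `w`), not the paper's network.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w):
    """Squeeze-and-excitation style channel gate.

    feat: feature map of shape (C, H, W); w: per-channel weight (C,).
    """
    pooled = feat.mean(axis=(1, 2))   # global average pooling -> (C,)
    gate = sigmoid(w * pooled)        # attention weights in (0, 1)
    return feat * gate[:, None, None]

def attention_residual_block(feat, branch_fn, w):
    """Residual block whose branch output is re-weighted by channel
    attention before being added back to the identity path."""
    return feat + channel_attention(branch_fn(feat), w)
```

In a real network `branch_fn` would be a pair of learned convolutions and `w` a learned excitation layer; here both are stand-ins so the gating arithmetic is visible.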
Affiliation(s)
- Zechen Wei
- CAS Key Laboratory of Molecular Imaging, The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Beijing 100190, China
- Beijing Key Laboratory of Molecular Imaging, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100190, China
- Xiangjun Wu
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine and Engineering, Beihang University, Beijing 100083, China
- Wei Tong
- Senior Department of Cardiology, the Sixth Medical Center of PLA General Hospital, Beijing 100853, China
- Suhui Zhang
- Senior Department of Cardiology, the Sixth Medical Center of PLA General Hospital, Beijing 100853, China
- Xin Yang
- CAS Key Laboratory of Molecular Imaging, The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Beijing 100190, China
- Beijing Key Laboratory of Molecular Imaging, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100190, China
- Jie Tian
- CAS Key Laboratory of Molecular Imaging, The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Beijing 100190, China
- Beijing Key Laboratory of Molecular Imaging, Beijing 100190, China
- Zhuhai Precision Medical Center, Zhuhai People's Hospital, affiliated with Jinan University, Zhuhai 519000, China
- Hui Hui
- CAS Key Laboratory of Molecular Imaging, The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Beijing 100190, China
- Beijing Key Laboratory of Molecular Imaging, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100190, China
|
13
|
Peng C, Zhang Y, Zheng J, Li B, Shen J, Li M, Liu L, Qiu B, Chen DZ. IMIIN: An inter-modality information interaction network for 3D multi-modal breast tumor segmentation. Comput Med Imaging Graph 2022; 95:102021. [PMID: 34861622 DOI: 10.1016/j.compmedimag.2021.102021] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2021] [Revised: 11/02/2021] [Accepted: 11/23/2021] [Indexed: 11/22/2022]
Abstract
Breast tumor segmentation is critical to the diagnosis and treatment of breast cancer. In clinical breast cancer analysis, experts often examine multi-modal images, since such images provide abundant complementary information on tumor morphology. Known multi-modal breast tumor segmentation methods extract 2D tumor features and use information from one modality to assist another. However, these methods do not fuse multi-modal information efficiently, and may even fuse interfering information, because they lack effective management of the information interaction between modalities. Besides, these methods do not consider the effect of small-tumor characteristics on the segmentation results. In this paper, we propose a new inter-modality information interaction network (IMIIN) to segment breast tumors in 3D multi-modal MRI. Our network employs a hierarchical structure to extract local information of small tumors, which facilitates precise segmentation of tumor boundaries. Under this structure, we present a 3D tiny-object segmentation network based on DenseVoxNet to preserve the boundary details of segmented tumors (especially small tumors). Further, we introduce a bi-directional request-supply information interaction module between modalities, so that each modality can request helpful auxiliary information according to its own needs. Experiments on a clinical 3D multi-modal MRI breast tumor dataset show that our 3D IMIIN is superior to state-of-the-art methods and attains better segmentation results, suggesting that the new method has good prospects for clinical application.
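The "request-supply" idea, each modality gating what it borrows from the other rather than fusing everything, can be sketched abstractly as follows. This is a hedged simplification under assumed shapes and names (`request_supply_exchange`, the per-channel weights `w_a`/`w_b`); the paper's module is a learned network component, not this closed-form expression.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def request_supply_exchange(feat_a, feat_b, w_a, w_b):
    """Bi-directional request-supply interaction between two modalities.

    feat_a, feat_b: feature maps of shape (C, H, W) from two MRI modalities.
    w_a, w_b:       per-channel weights (C,) used to form each modality's
                    spatial 'request' map.
    """
    # each modality derives a request map from its own features ...
    req_a = sigmoid(np.tensordot(w_a, feat_a, axes=1))  # (H, W)
    req_b = sigmoid(np.tensordot(w_b, feat_b, axes=1))
    # ... and uses it to gate the auxiliary features taken from the other
    fused_a = feat_a + req_a[None] * feat_b  # A requests from B
    fused_b = feat_b + req_b[None] * feat_a  # B requests from A
    return fused_a, fused_b
```

Because the gate is computed from the requesting modality's own features, each side controls how much of the other's information it admits, which is the asymmetry the abstract emphasizes over plain feature concatenation.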
Affiliation(s)
- Chengtao Peng
- Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei 230026, China; Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN 46556, USA
- Yue Zhang
- College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
- Jian Zheng
- Suzhou Institute of Biomedical Engineering and Technology, CAS, Suzhou 215163, China
- Bin Li
- Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei 230026, China
- Jun Shen
- Department of Radiology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou 510120, China
- Ming Li
- Suzhou Institute of Biomedical Engineering and Technology, CAS, Suzhou 215163, China
- Lei Liu
- Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei 230026, China
- Bensheng Qiu
- Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei 230026, China
- Danny Z Chen
- Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN 46556, USA
|
14
|
Zhang Y, Peng C, Peng L, Xu Y, Lin L, Tong R, Peng Z, Mao X, Hu H, Chen YW, Li J. DeepRecS: From RECIST Diameters to Precise Liver Tumor Segmentation. IEEE J Biomed Health Inform 2021; 26:614-625. [PMID: 34161249 DOI: 10.1109/jbhi.2021.3091900] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Liver tumor segmentation (LiTS) is of primary importance in the diagnosis and treatment of hepatocellular carcinoma. Known automated LiTS methods could not yield satisfactory results for clinical use because they struggle to model flexible tumor shapes and locations. In clinical practice, radiologists usually estimate tumor shape and size by a Response Evaluation Criteria in Solid Tumors (RECIST) mark. Inspired by this, we explore a deep learning (DL) based interactive LiTS method that incorporates guidance from user-provided RECIST marks. Our method uses a three-step framework to predict liver tumor boundaries. Under this architecture, we develop a RECIST mark propagation network (RMP-Net) to estimate RECIST-like marks in off-RECIST slices. We also devise a context-guided boundary-sensitive network (CGBS-Net) to distill tumor contextual and boundary information from the corresponding RECIST(-like) marks and then predict tumor maps. To further refine the segmentation results, we process the tumor maps with a 3D conditional random field (CRF) algorithm and a morphological hole-filling operation. Verified on two clinical contrast-enhanced abdominal computed tomography (CT) image datasets, our approach produces promising segmentation results and outperforms state-of-the-art interactive segmentation methods.
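The morphological hole-filling step mentioned at the end of this abstract is a standard operation: flood-fill the background from the image border, and mark any unreached background pixel as an enclosed hole. A minimal 2D sketch (a generic illustration of the operation, not the paper's code, which applies it to 3D tumor maps, typically via library routines such as `scipy.ndimage.binary_fill_holes`):

```python
import numpy as np

def fill_holes(mask):
    """Fill interior holes in a binary 2D mask.

    Flood-fills the background starting from the image border; any
    background pixel not reached is an enclosed hole and is set to
    foreground.
    """
    h, w = mask.shape
    reached = np.zeros_like(mask, dtype=bool)
    # seed the flood fill with all background pixels on the border
    stack = [(r, c) for r in range(h) for c in (0, w - 1) if not mask[r, c]]
    stack += [(r, c) for c in range(w) for r in (0, h - 1) if not mask[r, c]]
    while stack:
        r, c = stack.pop()
        if reached[r, c] or mask[r, c]:
            continue
        reached[r, c] = True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not reached[rr, cc]:
                stack.append((rr, cc))
    # foreground plus any background the border flood fill never reached
    return mask | ~reached
```

Applied to a predicted tumor map, this removes spurious interior gaps without altering the outer boundary, which is why it pairs well with a CRF that refines the boundary itself.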
|