151
Wang T, Lei Y, Fu Y, Wynne JF, Curran WJ, Liu T, Yang X. A review on medical imaging synthesis using deep learning and its clinical applications. J Appl Clin Med Phys 2021; 22:11-36. [PMID: 33305538] [PMCID: PMC7856512] [DOI: 10.1002/acm2.13121]
Abstract
This paper reviewed deep learning-based studies of medical image synthesis and their clinical applications. Specifically, we summarized recent developments in deep learning-based methods for inter- and intra-modality image synthesis by listing and highlighting the proposed methods, study designs, and reported performances, together with the related clinical applications, of representative studies. The challenges identified across the reviewed studies were then summarized and discussed.
Affiliation(s)
- Tonghe Wang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Yabo Fu
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Jacob F. Wynne
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Walter J. Curran
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
152
Wu W, Shi J, Yu H, Wu W, Vardhanabhuti V. Tensor Gradient L₀-Norm Minimization-Based Low-Dose CT and Its Application to COVID-19. IEEE Trans Instrum Meas 2021; 70:4503012. [PMID: 35582003] [PMCID: PMC8769022] [DOI: 10.1109/tim.2021.3050190]
Abstract
Methods that recover high-quality computed tomography (CT) images from low-dose scans would be of great benefit. Toward this goal, sparse-data subsampling is a common strategy for reducing radiation dose and is attracting interest in the CT community. Because analytic image reconstruction algorithms can produce severe image artifacts in this setting, iterative algorithms have been developed for reconstructing images from sparsely sampled projection data. In this study, we first develop a tensor gradient L0-norm minimization (TGLM) model for low-dose CT imaging. The TGLM model is then optimized using the split-Bregman method. Coronavirus Disease 2019 (COVID-19) has been sweeping the globe, and CT imaging has been deployed for detecting and assessing the severity of the disease. Finally, we apply the proposed TGLM method to COVID-19 imaging to achieve low-dose scanning by incorporating 3-D spatial information. Two COVID-19 patients (a 64-year-old woman and a 56-year-old man) were scanned on the [Formula: see text]CT 528 system, and the acquired projections were retrieved to validate and evaluate the performance of TGLM.
Affiliation(s)
- Weiwen Wu
- Department of Diagnostic Radiology, The University of Hong Kong, Hong Kong, China
- Jun Shi
- School of Communication and Information Engineering, Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai 200444, China
- Hengyong Yu
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, MA 01854, USA
- Weifei Wu
- People's Hospital of China Three Gorges University, Yichang 443000, China
- First People's Hospital of Yichang, Yichang 443000, China
- Varut Vardhanabhuti
- Department of Diagnostic Radiology, The University of Hong Kong, Hong Kong, China
153
Matsuura M, Zhou J, Akino N, Yu Z. Feature-Aware Deep-Learning Reconstruction for Context-Sensitive X-ray Computed Tomography. IEEE Trans Radiat Plasma Med Sci 2021. [DOI: 10.1109/trpms.2020.3040882]
154
Zhang F, Zhang M, Qin B, Zhang Y, Xu Z, Liang D, Liu Q. REDAEP: Robust and Enhanced Denoising Autoencoding Prior for Sparse-View CT Reconstruction. IEEE Trans Radiat Plasma Med Sci 2021. [DOI: 10.1109/trpms.2020.2989634]
155
Podgorsak AR, Shiraz Bhurwani MM, Ionita CN. CT artifact correction for sparse and truncated projection data using generative adversarial networks. Med Phys 2020; 48:615-626. [PMID: 32996149] [DOI: 10.1002/mp.14504]
Abstract
PURPOSE Computed tomography image reconstruction using truncated or sparsely acquired projection data to reduce radiation dose, iodine volume, and patient motion artifacts has been widely investigated. Continuing these efforts, we investigated machine learning-based reconstruction using deep convolutional generative adversarial networks (DCGANs) and evaluated its effect using standard imaging metrics. METHODS Ten thousand head computed tomography (CT) scans were collected from the 2019 RSNA Intracranial Hemorrhage Detection and Classification Challenge dataset. Sinograms were simulated and then resampled in both a one-third truncated and a one-third sparse manner. DCGANs were tasked with correcting the incomplete projection data, either in the sinogram domain, where the full sinogram was recovered by the DCGAN and then reconstructed, or in the reconstruction domain, where the incomplete data were first reconstructed and the sparse or truncation artifacts were corrected by the DCGAN. Seventy-five hundred images were used for network training and 2500 were withheld for network assessment using the mean absolute error (MAE), structural similarity index measure (SSIM), and peak signal-to-noise ratio (PSNR) between the results of the different correction techniques. Image data from a quality-assurance phantom were also resampled in the two manners, corrected, and reconstructed for network performance assessment using line profiles across high-contrast features, the modulation transfer function (MTF), the noise power spectrum (NPS), and Hounsfield unit (HU) linearity analysis. RESULTS Better agreement with the fully sampled reconstructions was achieved by the sparse acquisition corrected in the sinogram domain and the truncated acquisition corrected in the reconstruction domain. MAE, SSIM, and PSNR showed quantitative improvement from the DCGAN correction techniques. HU linearity of the reconstructions was maintained by the correction techniques for the sparse and truncated acquisitions. MTF curves reached the 10% modulation cutoff frequency at 5.86 lp/cm for the truncated corrected reconstruction versus 2.98 lp/cm for the truncated uncorrected reconstruction, and at 5.36 lp/cm for the sparse corrected reconstruction versus around 2.91 lp/cm for the sparse uncorrected reconstruction. NPS analyses yielded better agreement across a range of frequencies between the resampled corrected phantom and truth reconstructions. CONCLUSIONS We demonstrated the use of DCGANs for CT image correction from sparse and truncated simulated projection data while preserving the imaging quality of the fully sampled projection data.
Affiliation(s)
- Alexander R Podgorsak
- Canon Stroke and Vascular Research Center, 875 Ellicott Street, Buffalo, NY, 14203, USA; Medical Physics Program, State University of New York at Buffalo, 955 Main Street, Buffalo, NY, 14203, USA; Department of Biomedical Engineering, State University of New York at Buffalo, 200 Lee Road, Buffalo, NY, 14228, USA
- Mohammad Mahdi Shiraz Bhurwani
- Canon Stroke and Vascular Research Center, 875 Ellicott Street, Buffalo, NY, 14203, USA; Department of Biomedical Engineering, State University of New York at Buffalo, 200 Lee Road, Buffalo, NY, 14228, USA
- Ciprian N Ionita
- Canon Stroke and Vascular Research Center, 875 Ellicott Street, Buffalo, NY, 14203, USA; Medical Physics Program, State University of New York at Buffalo, 955 Main Street, Buffalo, NY, 14203, USA; Department of Biomedical Engineering, State University of New York at Buffalo, 200 Lee Road, Buffalo, NY, 14228, USA
156
Xu Y, Zhang Y, Zhang M, Wang M, Xu W, Wang C, Sun Y, Wei P. Quantitative Analysis of Metallographic Image Using Attention-Aware Deep Neural Networks. Sensors 2020; 21:43. [PMID: 33374842] [PMCID: PMC7796035] [DOI: 10.3390/s21010043]
Abstract
As a detection tool for identifying metals and alloys, metallographic quantitative analysis has received increasing attention for its ability to support quality control and reveal mechanical properties. The detection procedure is still mainly performed manually to locate and characterize the constituents in metallographic images, and automatic detection remains a challenge even with the emergence of several excellent models. Benefiting from the development of deep learning, and working with two different metallurgical structural steel image datasets, we propose two attention-aware deep neural networks, the Modified Attention U-Net (MAUNet) and the Self-adaptive Attention-aware Soft Anchor-Point Detector (SASAPD), to identify structures and evaluate their performance. Specifically, for analyzing single-phase metallographic images, MAUNet exploits the difference between low-frequency and high-frequency components, prevents duplication of low-resolution information in the skip connections of a U-Net-like structure, and incorporates a spatial-channel attention module in the decoder to enhance the interpretability of features. For analyzing multi-phase metallographic images, SASAPD explores and ranks the importance of anchor points, forming soft-weighted samples in the subsequent loss design, and self-adaptively evaluates the contributions of attention-aware pyramid features to assist in detecting elements of different sizes. Extensive experiments on the above two datasets demonstrate the superiority and effectiveness of our two deep neural networks compared to state-of-the-art models on different metrics.
Affiliation(s)
- Yifei Xu
- School of Software, Xi’an Jiaotong University, Xi’an 710054, China
- Correspondence: ; Tel.: +86-150-2918-2592
- Yuewan Zhang
- School of Software, Xi’an Jiaotong University, Xi’an 710054, China
- Meizi Zhang
- School of Software, Xi’an Jiaotong University, Xi’an 710054, China
- Mian Wang
- School of Software, Xi’an Jiaotong University, Xi’an 710054, China
- Wujiang Xu
- School of Software, Xi’an Jiaotong University, Xi’an 710054, China
- Chaoyong Wang
- School of Software, Xi’an Jiaotong University, Xi’an 710054, China
- Yan Sun
- School of Software, Xi’an Jiaotong University, Xi’an 710054, China
- Pingping Wei
- State Key Laboratory for Manufacturing Systems Engineering, Xi’an Jiaotong University, Xi’an 710054, China
157
Moen TR, Chen B, Holmes DR, Duan X, Yu Z, Yu L, Leng S, Fletcher JG, McCollough CH. Low-dose CT image and projection dataset. Med Phys 2020; 48:902-911. [PMID: 33202055] [DOI: 10.1002/mp.14594]
Abstract
PURPOSE To describe a large, publicly available dataset comprising computed tomography (CT) projection data from patient exams, both at routine clinical doses and simulated lower doses. ACQUISITION AND VALIDATION METHODS The library was developed under local ethics committee approval. Projection and image data from 299 clinically performed patient CT exams were archived for three types of clinical exams: noncontrast head CT scans acquired for acute cognitive or motor deficit, low-dose noncontrast chest scans acquired to screen high-risk patients for pulmonary nodules, and contrast-enhanced CT scans of the abdomen acquired to look for metastatic liver lesions. Scans were performed on CT systems from two different CT manufacturers using routine clinical protocols. Projection data were validated by reconstructing the data using several different reconstruction algorithms and through use of the data in the 2016 Low Dose CT Grand Challenge. Reduced dose projection data were simulated for each scan using a validated noise-insertion method. Radiologists marked location and diagnosis for detected pathologies. Reference truth was obtained from the patient medical record, either from histology or subsequent imaging. DATA FORMAT AND USAGE NOTES Projection datasets were converted into the previously developed DICOM-CT-PD format, which is an extended DICOM format created to store CT projections and acquisition geometry in a nonproprietary format. Image data are stored in the standard DICOM image format and clinical data in a spreadsheet. Materials are provided to help investigators use the DICOM-CT-PD files, including a dictionary file, data reader, and user manual. The library is publicly available from The Cancer Imaging Archive (https://doi.org/10.7937/9npb-2637). 
POTENTIAL APPLICATIONS This CT data library will facilitate the development and validation of new CT reconstruction and/or denoising algorithms, including those associated with machine learning or artificial intelligence. The provided clinical information allows evaluation of task-based diagnostic performance.
Affiliation(s)
- Taylor R Moen
- Department of Radiology, Mayo Clinic, Rochester, MN, USA
- Baiyu Chen
- Department of Radiology, Mayo Clinic, Rochester, MN, USA
- David R Holmes
- Biomedical Imaging Resource, Mayo Clinic, Rochester, MN, USA
- Xinhui Duan
- Department of Radiology, Mayo Clinic, Rochester, MN, USA
- Zhicong Yu
- Department of Radiology, Mayo Clinic, Rochester, MN, USA
- Lifeng Yu
- Department of Radiology, Mayo Clinic, Rochester, MN, USA
- Shuai Leng
- Department of Radiology, Mayo Clinic, Rochester, MN, USA
158
Oh C, Kim D, Chung JY, Han Y, Park H. A k-space-to-image reconstruction network for MRI using recurrent neural network. Med Phys 2020; 48:193-203. [PMID: 33128235] [DOI: 10.1002/mp.14566]
Abstract
PURPOSE Reconstructing images from undersampled k-space data is an ill-posed inverse problem. As a solution to this problem, we propose a method to reconstruct magnetic resonance (MR) images directly from k-space data using a recurrent neural network. METHODS A novel neural network architecture named "ETER-net" is developed as a unified solution for reconstructing MR images from undersampled k-space data, in which two bi-RNNs and a convolutional neural network (CNN) perform domain transformation and de-aliasing. To demonstrate the practicality of the proposed method, we conducted model optimization, cross-validation, and network pruning using in-house data from a 3T MRI scanner and the public dataset "FastMRI." RESULTS The experimental results showed that the proposed method can accurately reconstruct images from undersampled k-space data. The model size was optimized and cross-validation was performed to show the robustness of the proposed method. For the in-house dataset (R = 4), the proposed method achieved nMSE = 1.09% and SSIM = 0.938. For the "FastMRI" dataset, it achieved nMSE = 1.05% and SSIM = 0.931 for R = 4, and nMSE = 3.12% and SSIM = 0.884 for R = 8. The performance of the pruned model, trained with a loss function including L2 regularization, remained consistent for pruning ratios of up to 70%. CONCLUSIONS The proposed method is an end-to-end MR image reconstruction method based on recurrent neural networks. It maps input k-space data directly to reconstructed images, operating as a unified solution applicable to various scanning trajectories.
Affiliation(s)
- Changheun Oh
- Department of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea; Gachon University, 191 Hambakmoe-ro, Yeonsu-gu, Incheon, 21565, Republic of Korea
- Dongchan Kim
- Gachon University, 191 Hambakmoe-ro, Yeonsu-gu, Incheon, 21565, Republic of Korea
- Jun-Young Chung
- Gachon University, 191 Hambakmoe-ro, Yeonsu-gu, Incheon, 21565, Republic of Korea
- Yeji Han
- Gachon University, 191 Hambakmoe-ro, Yeonsu-gu, Incheon, 21565, Republic of Korea
- HyunWook Park
- Department of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
159
Zhang T, Zhang L, Chen Z, Xing Y, Gao H. Fourier Properties of Symmetric-Geometry Computed Tomography and Its Linogram Reconstruction With Neural Network. IEEE Trans Med Imaging 2020; 39:4445-4457. [PMID: 32866095] [DOI: 10.1109/tmi.2020.3020720]
Abstract
In this work, we investigate the Fourier properties of a symmetric-geometry computed tomography (SGCT) with linearly distributed source and detector in a stationary configuration. A linkage between the 1D Fourier transform of a weighted projection from SGCT and the 2D Fourier transform of a deformed object is established in a simple mathematical form (i.e., the Fourier slice theorem for SGCT). Based on this theorem and the unique data sampling of SGCT in the Fourier space, a Linogram-based Fourier reconstruction method is derived for SGCT. We demonstrate that the entire Linogram reconstruction process can be embedded as known operators into an end-to-end neural network. As a learning-based approach, the proposed Linogram-Net is capable of improving CT image quality in non-ideal imaging scenarios, a limited-angle SGCT for instance, by combining weights learning in the projection domain with loss minimization in the image domain. Numerical simulations and physical experiments on an SGCT prototype platform showed that the proposed Linogram-based method achieves accurate reconstruction from a dual-SGCT scan and greatly reduces computational complexity compared with filtered backprojection-type reconstruction. Linogram-Net achieved accurate reconstruction when projection data were complete and significantly suppressed image artifacts from a limited-angle SGCT scan mimicked using a clinical CT dataset, with the average CT number error in the selected regions of interest reduced from 67.7 Hounsfield units (HU) to 28.7 HU, and the average normalized mean square error of the overall images reduced from 4.21e-3 to 2.65e-3.
160
Lu J, Millioz F, Garcia D, Salles S, Liu W, Friboulet D. Reconstruction for Diverging-Wave Imaging Using Deep Convolutional Neural Networks. IEEE Trans Ultrason Ferroelectr Freq Control 2020; 67:2481-2492. [PMID: 32286972] [DOI: 10.1109/tuffc.2020.2986166]
Abstract
In recent years, diverging wave (DW) ultrasound imaging has become a very promising methodology for cardiovascular imaging due to its high temporal resolution. However, when limited in number, DW transmits provide lower image quality compared with classical focused schemes. A conventional reconstruction approach consists of coherently summing series of ultrasound signals, at the expense of frame rate, data volume, and computation time. To deal with this limitation, we propose a convolutional neural network (CNN) architecture, Inception for DW Network (IDNet), for high-quality reconstruction of DW ultrasound images using a small number of transmissions. To cope with the specificities induced by the sectorial geometry associated with DW imaging, we adopted the inception model composed of the concatenation of multiscale convolution kernels. Incorporating inception modules aims at capturing different image features with multiscale receptive fields. A mapping between low-quality images and the corresponding high-quality compounded reconstruction was learned by training the network using in vitro and in vivo samples. The performance of the proposed approach was evaluated in terms of contrast ratio (CR), contrast-to-noise ratio (CNR), and lateral resolution (LR), and compared with the standard compounding method and conventional CNN methods. The results demonstrated that our method can produce high-quality images using only 3 DWs, yielding image quality equivalent to that obtained by compounding 31 DWs and outperforming more conventional CNN architectures in terms of complexity, inference time, and image quality.
161
Zhao W, Wang H, Gemmeke H, van Dongen KWA, Hopp T, Hesser J. Ultrasound transmission tomography image reconstruction with a fully convolutional neural network. Phys Med Biol 2020; 65:235021. [PMID: 33245050] [DOI: 10.1088/1361-6560/abb5c3]
Abstract
Image reconstruction for ultrasound computed tomography based on the wave equation shows much more structural detail than simpler ray-based reconstruction methods. However, inverting the wave-based forward model is computationally demanding. To address this problem, we develop an efficient, fully learned image reconstruction method based on a convolutional neural network. The image is reconstructed via one forward propagation of the network given the input sensor data, which is much faster than reconstruction with conventional iterative optimization methods. To transform the measured ultrasound data in the sensor domain into the reconstructed image in the image domain, we apply multiple down-scaling and up-scaling convolutional units to efficiently increase the number of hidden layers, with large receptive and projective fields that cover all elements of the inputs and outputs, respectively. For dataset generation, a paraxial-approximation forward model is used to simulate ultrasound measurement data. The neural network is trained on a dataset derived from natural images in ImageNet and tested on a dataset derived from medical images in the OA-Breast phantom dataset. Test results show the superior efficiency of the proposed network over other reconstruction algorithms, including popular neural networks. Compared with conventional iterative optimization algorithms, our network can reconstruct a 110 × 86 image more than 20 times faster on a CPU and 1000 times faster on a GPU with comparable image quality, and it is also more robust to noise.
Affiliation(s)
- Wenzhao Zhao
- Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
162
Han Y, Kim J, Ye JC. Differentiated Backprojection Domain Deep Learning for Conebeam Artifact Removal. IEEE Trans Med Imaging 2020; 39:3571-3582. [PMID: 32746105] [DOI: 10.1109/tmi.2020.3000341]
Abstract
Conebeam CT using a circular trajectory is often used in various applications due to its relatively simple geometry. For conebeam geometry, the Feldkamp-Davis-Kress algorithm is regarded as the standard reconstruction method, but it suffers from so-called conebeam artifacts as the cone angle increases. Various model-based iterative reconstruction methods have been developed to reduce these artifacts, but they usually require multiple applications of computationally expensive forward and backprojections. In this paper, we develop a novel deep learning approach for accurate conebeam artifact removal. In particular, our deep network, designed in the differentiated backprojection domain, performs a data-driven inversion of an ill-posed deconvolution problem associated with the Hilbert transform. The reconstruction results along the coronal and sagittal directions are then combined using a spectral blending technique to minimize spectral leakage. Experimental results under various conditions confirmed that our method generalizes well and outperforms existing iterative methods despite significantly reduced runtime complexity.
163
Li F, Nan Y, Hou X, Xie C, Wang J, Lv C, Xie G. Correlation-Guided Network for Fine-Grained Classification of Glomerular Lesions in Kidney Histopathology Images. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:5781-5784. [PMID: 33019288] [DOI: 10.1109/embc44109.2020.9176234]
Abstract
Chronic kidney disease has become a worldwide public health problem that demands careful assessment by pathologists. In this paper, we propose a novel architecture for fine-grained classification of glomerular lesions in renal pathology images sampled from patients with IgA nephropathy. An adversarial correlation loss is presented to guide a parallel convolutional neural network. In this loss function, the bias between the prediction and the label is taken into account while the relationships among different categories are well aligned. Glomerular lesions in this study are divided into five subcategories: Neg (negative samples such as tubules and arteries), SS (sclerosis involving a portion of the glomerular tuft), GS (sclerosis involving 100% of the tuft), C (build-up of more than two layers of cells within Bowman's space, often with fibrin and collagen deposition), and NOA (none of the above). Our model, with 93.0% accuracy and a 92.9% F1-score across these five categories, proved superior to other models in our experiments.
164
Madesta F, Sentker T, Gauer T, Werner R. Self-contained deep learning-based boosting of 4D cone-beam CT reconstruction. Med Phys 2020; 47:5619-5631. [DOI: 10.1002/mp.14441]
Affiliation(s)
- Frederic Madesta
- Department of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
- Thilo Sentker
- Department of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
- Department of Radiotherapy and Radio-Oncology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
- Tobias Gauer
- Department of Radiotherapy and Radio-Oncology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
- René Werner
- Department of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
165
Goudarzi S, Asif A, Rivaz H. High Frequency Ultrasound Image Recovery Using Tight Frame Generative Adversarial Networks. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:2035-2038. [PMID: 33018404] [DOI: 10.1109/embc44109.2020.9176101]
Abstract
In ultrasound imaging, physical limitations impose a trade-off between imaging depth and axial resolution. Increasing the center frequency of the transmitted ultrasound wave improves the axial resolution of the resulting image; however, high frequency (HF) ultrasound has a shallower depth of penetration. Herein, we propose a novel method based on a Generative Adversarial Network (GAN) for achieving high axial resolution without a reduction in imaging depth. Results on simulated phantoms show that a mapping function between low frequency (LF) and HF ultrasound images can be constructed.
166
167
Pham TH, Devalla SK, Ang A, Soh ZD, Thiery AH, Boote C, Cheng CY, Girard MJA, Koh V. Deep learning algorithms to isolate and quantify the structures of the anterior segment in optical coherence tomography images. Br J Ophthalmol 2020; 105:1231-1237. [PMID: 32980820] [DOI: 10.1136/bjophthalmol-2019-315723]
Abstract
BACKGROUND/AIMS Accurate isolation and quantification of intraocular dimensions in the anterior segment (AS) of the eye using optical coherence tomography (OCT) images is important in the diagnosis and treatment of many eye diseases, especially angle-closure glaucoma. METHODS In this study, we developed a deep convolutional neural network (DCNN) for localisation of the scleral spur and introduced an information-rich segmentation approach for this localisation problem. An ensemble of DCNNs was developed for segmentation of AS structures (iris, corneoscleral shell and anterior chamber). Based on the results of these two processes, an algorithm was created to automatically quantify clinically important measurements. 200 images from 58 patients (100 eyes) were used for testing. RESULTS With limited training data, the DCNN detected the scleral spur on unseen anterior segment OCT (ASOCT) images as accurately as an experienced ophthalmologist on the given test dataset and simultaneously isolated the AS structures with a Dice coefficient of 95.7%. We then automatically extracted eight clinically relevant ASOCT measurements and proposed an automated quality-check process that asserts the reliability of these measurements. When combined with an OCT machine capable of imaging multiple radial sections, the algorithms can provide a more complete objective assessment. The total segmentation and measurement time for a single scan is less than 2 s. CONCLUSION This is an essential step towards providing a robust automated framework for reliable quantification of ASOCT scans, for applications in the diagnosis and management of angle-closure glaucoma.
Collapse
Affiliation(s)
- Tan Hung Pham
- Department of Biomedical Engineering, National University of Singapore, Singapore.,Singapore Eye Research Institute, Singapore
| | | | - Aloysius Ang
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore
| | - Zhi-Da Soh
- Ocular Epidemiology Research Group, Singapore Eye Research Institute, Singapore
| | - Alexandre H Thiery
- Statistics and Applied Probability, National University of Singapore, Singapore
| | - Craig Boote
- Optometry and Vision Sciences, Cardiff University, Cardiff, South Glamorgan, UK
| | - Ching-Yu Cheng
- Ocular Epidemiology Research Group, Singapore Eye Research Institute, Singapore
| | - Michael J A Girard
- Ophthalmic Engineering and Innovation Laboratory (OEIL), Singapore Eye Research Institute, Singapore
| | - Victor Koh
- Department of Ophthalmology, National University Hospital, Singapore
| |
Collapse
|
168
|
Hoorali F, Khosravi H, Moradi B. Automatic Bacillus anthracis bacteria detection and segmentation in microscopic images using UNet+. J Microbiol Methods 2020; 177:106056. [PMID: 32931840 DOI: 10.1016/j.mimet.2020.106056] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2020] [Revised: 09/09/2020] [Accepted: 09/09/2020] [Indexed: 10/23/2022]
Abstract
Anthrax is one of the important diseases in humans and animals, caused by spores of the gram-positive bacterium Bacillus anthracis. The disease is still one of the health problems of developing countries. Owing to fatigue and decreased visual acuity, microscopic diagnosis of diseases by humans may not be of consistent quality. In this paper, for the first time, a system for automatic and rapid diagnosis of anthrax, with simultaneous detection and segmentation of B. anthracis bacteria in microscopic images, is proposed based on artificial intelligence and deep learning techniques. Two important deep neural network architectures, UNet and UNet++, have been used for detection and segmentation of the most important component of the image, i.e. the bacteria. Automated detection and segmentation of B. anthracis bacteria offers the same level of accuracy as a human diagnostic specialist and in some cases outperforms it. Experimental results show that these deep architectures, especially UNet++, can be used effectively and efficiently to automate B. anthracis segmentation of microscopic images obtained under different conditions. UNet++ produces outstanding results despite the many challenges in this field, such as large image dimensions, image artifacts, object crowding, and overlapping. We conducted our experiments on a privately prepared dataset and achieved an accuracy of 97% and a Dice score of 0.96 on the patch test images. The system was also tested on whole raw images, achieving a recall of 98% and an accuracy of 97%, which shows excellent performance in the bacteria segmentation task. The low cost and high speed of diagnosis and no need for a specialist are other benefits of the proposed system.
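The pixel-level accuracy and recall figures quoted above follow from the confusion counts between the predicted and true masks; a minimal illustration (toy masks, not the paper's data):

```python
import numpy as np

def pixel_accuracy_recall(pred: np.ndarray, truth: np.ndarray):
    """Pixel-wise accuracy and recall (sensitivity) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()   # true positives
    fn = np.logical_and(~pred, truth).sum()  # missed bacteria pixels
    accuracy = float((pred == truth).mean())
    recall = float(tp / (tp + fn)) if (tp + fn) > 0 else 1.0
    return accuracy, recall

# Toy masks: 8 true pixels, 6 detected, 2 missed, 1 false alarm
truth = np.zeros((4, 4), dtype=bool)
truth[:2, :] = True
pred = np.zeros((4, 4), dtype=bool)
pred[:2, :3] = True
pred[2, 0] = True
acc, rec = pixel_accuracy_recall(pred, truth)  # acc = 13/16, recall = 0.75
```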
Collapse
Affiliation(s)
- Fatemeh Hoorali
- Faculty of Electrical Engineering and Robotics, Shahrood University of Technology, Shahrood, Iran
| | - Hossein Khosravi
- Faculty of Electrical Engineering and Robotics, Shahrood University of Technology, Shahrood, Iran.
| | - Bagher Moradi
- Esfarayen Faculty of Medical Science, Esfarayen, Iran
| |
Collapse
|
169
|
Chen G, Zhao Y, Huang Q, Gao H. 4D-AirNet: a temporally-resolved CBCT slice reconstruction method synergizing analytical and iterative method with deep learning. Phys Med Biol 2020; 65:175020. [DOI: 10.1088/1361-6560/ab9f60] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
|
170
|
Clark DP, Schwartz FR, Marin D, Ramirez-Giraldo JC, Badea CT. Deep learning based spectral extrapolation for dual-source, dual-energy x-ray computed tomography. Med Phys 2020; 47:4150-4163. [PMID: 32531114 PMCID: PMC7722037 DOI: 10.1002/mp.14324] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2020] [Revised: 05/12/2020] [Accepted: 06/02/2020] [Indexed: 11/11/2022] Open
Abstract
PURPOSE Data completion is commonly employed in dual-source, dual-energy computed tomography (CT) when physical or hardware constraints limit the field of view (FoV) covered by one of two imaging chains. Practically, dual-energy data completion is accomplished by estimating missing projection data based on the imaging chain with the full FoV and then by appropriately truncating the analytical reconstruction of the data with the smaller FoV. While this approach works well in many clinical applications, there are applications which would benefit from spectral contrast estimates over the larger FoV (spectral extrapolation), e.g., model-based iterative reconstruction, contrast-enhanced abdominal imaging of large patients, interior tomography, and combined temporal and spectral imaging. METHODS To document the fidelity of spectral extrapolation and to prototype a deep learning algorithm to perform it, we assembled a data set of 50 dual-source, dual-energy abdominal x-ray CT scans (acquired at Duke University Medical Center with 5 Siemens Flash scanners; chain A: 50 cm FoV, 100 kV; chain B: 33 cm FoV, 140 kV + Sn; helical pitch: 0.8). Data sets were reconstructed using ReconCT (v14.1, Siemens Healthineers): 768 × 768 pixels per slice, 50 cm FoV, 0.75 mm slice thickness, "Dual-Energy - WFBP" reconstruction mode with dual-source data completion. A hybrid architecture consisting of a learned piecewise linear transfer function (PLTF) and a convolutional neural network (CNN) was trained using 40 scans (five scans reserved for validation, five for testing). The PLTF learned to map chain A spectral contrast to chain B spectral contrast voxel-wise, performing an image domain analog of dual-source data completion with approximate spectral reweighting.
The CNN with its U-net structure then learned to improve the accuracy of chain B contrast estimates by copying chain A structural information, by encoding prior chain A, chain B contrast relationships, and by generalizing feature-contrast associations. Training was supervised, using data from within the 33-cm chain B FoV to optimize and assess network performance. RESULTS Extrapolation performance on the testing data confirmed our network's robustness and ability to generalize to unseen data from different patients, yielding maximum extrapolation errors of 26 HU following the PLTF and 7.5 HU following the CNN (averaged per target organ). Degradation of network performance when applied to a geometrically simple phantom confirmed our method's reliance on feature-contrast relationships in correctly inferring spectral contrast. Integrating our image domain spectral extrapolation network into a standard dual-source, dual-energy processing pipeline for Siemens Flash scanner data yielded spectral CT data with adequate fidelity for the generation of both 50 keV monochromatic images and material decomposition images over a 30-cm FoV for chain B when only 20 cm of chain B data were available for spectral extrapolation. CONCLUSIONS Even with a moderate amount of training data, deep learning methods are capable of robustly inferring spectral contrast from feature-contrast relationships in spectral CT data, leading to spectral extrapolation performance well beyond what may be expected at face value. Future work reconciling spectral extrapolation results with original projection data is expected to further improve results in outlying and pathological cases.
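A piecewise linear transfer function of the kind described amounts to a voxel-wise 1-D interpolation between control points. A sketch with hypothetical knot values (in the paper these parameters are learned from training data; the numbers below are purely illustrative):

```python
import numpy as np

# Hypothetical, illustrative control points for a PLTF mapping
# chain A Hounsfield units to chain B Hounsfield units.
knots_a = np.array([-1000.0, -100.0, 0.0, 100.0, 1000.0])  # chain A HU
knots_b = np.array([-1000.0, -80.0, 0.0, 70.0, 800.0])     # chain B HU

def pltf(hu_a):
    """Voxel-wise piecewise linear map from chain A to chain B contrast."""
    return np.interp(hu_a, knots_a, knots_b)

mapped = pltf(np.array([0.0, 50.0]))  # 50 HU maps halfway between 0 and 70
```

Applied element-wise to a whole volume, this gives a fast first approximation of chain B contrast that the CNN then refines using structural information.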
Collapse
Affiliation(s)
- Darin P. Clark
- Center for In Vivo Microscopy, Department of Radiology, Duke University, Durham, NC, 27710, USA
| | | | - Daniele Marin
- Department of Radiology, Duke University, Durham, NC, 27710, USA
| | | | - Cristian T. Badea
- Center for In Vivo Microscopy, Department of Radiology, Duke University, Durham, NC, 27710, USA
| |
Collapse
|
171
|
Manimala MVR, Dhanunjaya Naidu C, Giri Prasad MN. Sparse MR Image Reconstruction Considering Rician Noise Models: A CNN Approach. WIRELESS PERSONAL COMMUNICATIONS 2020; 116:491-511. [PMID: 32836885 PMCID: PMC7417787 DOI: 10.1007/s11277-020-07725-0] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/06/2023]
Abstract
Compressive sensing (CS) provides a potential platform for acquiring slow and sequential data, as in magnetic resonance (MR) imaging. However, CS requires high computational time for reconstructing MR images from sparse k-space data, which restricts its usage for high speed online reconstruction and wireless communications. Another major challenge is the removal of Rician noise from magnitude MR images, which changes the image characteristics and thus affects clinical usefulness. The work carried out so far predominantly models MRI noise as Gaussian. The use of advanced noise models, primarily the Rician type, in the CS paradigm is less explored. In this work, we develop a novel framework to reconstruct MR images with high speed and visual quality from noisy sparse k-space data. The proposed algorithm employs a convolutional neural network (CNN) to denoise MR images corrupted with Rician noise. To extract local features, the algorithm exploits signal similarities by processing similar patches as a group. A substantial reduction in run time has been achieved, as the CNN has been trained on a GPU with the Convolutional Architecture for Fast Feature Embedding (Caffe) framework, making it suitable for online reconstruction. The CNN-based reconstruction also eliminates the need to optimize and predict the noise level while denoising, which is the major advantage over existing state-of-the-art techniques. Experiments have been carried out with various undersampling schemes, and the results demonstrate high accuracy and consistent peak signal-to-noise ratio even at 20-fold undersampling. High undersampling rates provide scope for wireless transmission of k-space data, and high speed reconstruction makes our algorithm applicable to remote health monitoring.
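Rician noise arises because magnitude MR images are formed by taking the absolute value of complex data whose real and imaginary parts each carry Gaussian noise. A minimal NumPy sketch of this noise model (sigma and image size are illustrative):

```python
import numpy as np

def add_rician_noise(image: np.ndarray, sigma: float, seed=None):
    """Magnitude-MR noise model: |image + n1 + i*n2|, n1, n2 ~ N(0, sigma^2)."""
    rng = np.random.default_rng(seed)
    n1 = rng.normal(0.0, sigma, image.shape)
    n2 = rng.normal(0.0, sigma, image.shape)
    return np.sqrt((image + n1) ** 2 + n2 ** 2)

# In the background (signal = 0) the magnitude follows a Rayleigh
# distribution with mean sigma * sqrt(pi / 2) ≈ 1.2533 * sigma,
# which is why Rician noise biases dark regions upward.
noisy = add_rician_noise(np.zeros((64, 64)), sigma=1.0, seed=0)
```

The signal-dependent, non-zero-mean character visible here is what makes Rician denoising harder than the Gaussian case the abstract contrasts it with.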
Collapse
|
172
|
Tian C, Fei L, Zheng W, Xu Y, Zuo W, Lin CW. Deep learning on image denoising: An overview. Neural Netw 2020; 131:251-275. [PMID: 32829002 DOI: 10.1016/j.neunet.2020.07.025] [Citation(s) in RCA: 197] [Impact Index Per Article: 39.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2020] [Revised: 06/17/2020] [Accepted: 07/21/2020] [Indexed: 01/19/2023]
Abstract
Deep learning techniques have received much attention in the area of image denoising. However, there are substantial differences among the various types of deep learning methods dealing with image denoising. Specifically, discriminative learning based on deep learning can ably address the issue of Gaussian noise. Optimization models based on deep learning are effective in estimating the real noise. However, there has thus far been little research summarizing the different deep learning techniques for image denoising. In this paper, we offer a comparative study of deep learning techniques in image denoising. We first classify the deep convolutional neural networks (CNNs) for additive white noisy images; the deep CNNs for real noisy images; the deep CNNs for blind denoising; and the deep CNNs for hybrid noisy images, which represents the combination of noisy, blurred and low-resolution images. Then, we analyze the motivations and principles of the different types of deep learning methods. Next, we compare the state-of-the-art methods on public denoising datasets in terms of quantitative and qualitative analyses. Finally, we point out some potential challenges and directions for future research.
Collapse
Affiliation(s)
- Chunwei Tian
- Bio-Computing Research Center, Harbin Institute of Technology, Shenzhen, Shenzhen, 518055, Guangdong, China; Shenzhen Key Laboratory of Visual Object Detection and Recognition, Shenzhen, 518055, Guangdong, China
| | - Lunke Fei
- School of Computers, Guangdong University of Technology, Guangzhou, 510006, Guangdong, China
| | - Wenxian Zheng
- Tsinghua Shenzhen International Graduate School, Shenzhen, 518055, Guangdong, China
| | - Yong Xu
- Bio-Computing Research Center, Harbin Institute of Technology, Shenzhen, Shenzhen, 518055, Guangdong, China; Shenzhen Key Laboratory of Visual Object Detection and Recognition, Shenzhen, 518055, Guangdong, China; Peng Cheng Laboratory, Shenzhen, 518055, Guangdong, China.
| | - Wangmeng Zuo
- School of Computer Science and Technology, Harbin Institute of Technology, Harbin, 150001, Heilongjiang, China; Peng Cheng Laboratory, Shenzhen, 518055, Guangdong, China
| | - Chia-Wen Lin
- Department of Electrical Engineering and the Institute of Communications Engineering, National Tsing Hua University, Hsinchu, Taiwan
| |
Collapse
|
173
|
Zhang Q, Hu Z, Jiang C, Zheng H, Ge Y, Liang D. Artifact removal using a hybrid-domain convolutional neural network for limited-angle computed tomography imaging. Phys Med Biol 2020; 65:155010. [PMID: 32369793 DOI: 10.1088/1361-6560/ab9066] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023]
Abstract
The suppression of streak artifacts in computed tomography with a limited-angle configuration is challenging. Conventional analytical algorithms, such as filtered backprojection (FBP), are not successful due to incomplete projection data. Moreover, model-based iterative total variation algorithms effectively reduce small streaks but do not work well at eliminating large streaks. In contrast, FBP mapping networks and deep-learning-based postprocessing networks are outstanding at removing large streak artifacts; however, these methods perform processing in separate domains, and the advantages of multiple deep learning algorithms operating in different domains have not been simultaneously explored. In this paper, we present a hybrid-domain convolutional neural network (hdNet) for the reduction of streak artifacts in limited-angle computed tomography. The network consists of three components: the first component is a convolutional neural network operating in the sinogram domain, the second is a domain transformation operation, and the last is a convolutional neural network operating in the CT image domain. After training the network, we can obtain artifact-suppressed CT images directly from the sinogram domain. Verification results based on numerical, experimental and clinical data confirm that the proposed method can significantly reduce serious artifacts.
Collapse
Affiliation(s)
- Qiyang Zhang
- Research Center for Medical Artificial Intelligence, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China. Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, People's Republic of China. Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, Guangdong 518055, People's Republic of China
Collapse
|
174
|
Khan S, Huh J, Ye JC. Adaptive and Compressive Beamforming Using Deep Learning for Medical Ultrasound. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2020; 67:1558-1572. [PMID: 32149628 DOI: 10.1109/tuffc.2020.2977202] [Citation(s) in RCA: 50] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
In ultrasound (US) imaging, various types of adaptive beamforming techniques have been investigated to improve the resolution and the contrast-to-noise ratio of delay and sum (DAS) beamformers. Unfortunately, the performance of these adaptive beamforming approaches degrades when the underlying model is not sufficiently accurate and the number of channels decreases. To address this problem, here, we propose a deep-learning-based beamformer to generate significantly improved images over widely varying measurement conditions and channel subsampling patterns. In particular, our deep neural network is designed to directly process full or subsampled radio frequency (RF) data acquired at various subsampling rates and detector configurations so that it can generate high-quality US images using a single beamformer. The origin of such input-dependent adaptivity is also theoretically analyzed. Experimental results using B-mode focused US confirm the efficacy of the proposed methods.
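For reference, the delay-and-sum (DAS) baseline that such learned beamformers improve on can be sketched for a single image point as follows. The geometry and sampling parameters below are illustrative (a plane-wave transmit reaching depth pz at time pz/c is assumed, with nearest-sample interpolation for brevity):

```python
import numpy as np

def delay_and_sum(rf, element_x, fs, c, point):
    """Beamform one image point from RF channel data of shape (n_ch, n_samples).

    Total delay per channel = transmit path (pz / c) + receive path
    (point-to-element distance / c); samples at those delays are summed.
    """
    px, pz = point
    value = 0.0
    for ch, xe in enumerate(element_x):
        t = (pz + np.hypot(px - xe, pz)) / c  # transmit + receive time
        idx = int(round(t * fs))
        if 0 <= idx < rf.shape[1]:
            value += rf[ch, idx]
    return value

# Synthetic check: an impulse placed at the expected round-trip sample
# on both channels sums coherently.
fs, c, z = 40e6, 1540.0, 0.0077           # 40 MHz sampling, 7.7 mm depth
rf = np.zeros((2, 1024))
rf[:, 400] = 1.0                           # round-trip delay 2z/c -> sample 400
focused = delay_and_sum(rf, [0.0, 0.0], fs, c, (0.0, z))
```

The model mismatch and channel-count sensitivity the abstract describes enter exactly here: the delays assume an idealized geometry and sound speed, and the sum degrades as channels are dropped.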
Collapse
|
175
|
Zhou Y, Yen GG, Yi Z. Evolutionary Compression of Deep Neural Networks for Biomedical Image Segmentation. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2020; 31:2916-2929. [PMID: 31536016 DOI: 10.1109/tnnls.2019.2933879] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Biomedical image segmentation is lately dominated by deep neural networks (DNNs) due to their surpassing expert-level performance. However, the existing DNN models for biomedical image segmentation are generally highly parameterized, which severely impedes their deployment on real-time platforms and portable devices. To tackle this difficulty, we propose an evolutionary compression method (ECDNN) to automatically discover efficient DNN architectures for biomedical image segmentation. Different from the existing studies, ECDNN can optimize network loss and number of parameters simultaneously during the evolution, and search for a set of Pareto-optimal solutions in a single run, which is useful for quantifying the tradeoff in satisfying different objectives, and flexible for compressing DNN when preference information is uncertain. In particular, a set of novel genetic operators is proposed for automatically identifying less important filters over the whole network. Moreover, a pruning operator is designed for eliminating convolutional filters from layers involved in feature map concatenation, which is commonly adopted in DNN architectures for capturing multi-level features from biomedical images. Experiments carried out on compressing DNN for retinal vessel and neuronal membrane segmentation tasks show that ECDNN can not only improve the performance without any retraining but also discover efficient network architectures that well maintain the performance. The superiority of the proposed method is further validated by comparison with the state-of-the-art methods.
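The Pareto-optimal set mentioned above is simply the subset of candidate networks not dominated in both objectives (loss and parameter count). A minimal non-domination filter, independent of the paper's genetic operators (candidate values below are made up for illustration):

```python
def pareto_front(points):
    """Non-dominated subset of (loss, n_params) tuples, both minimized.

    q dominates p when q is no worse in every objective and differs
    from p (hence strictly better in at least one).
    """
    front = []
    for p in points:
        dominated = any(
            q != p and all(qk <= pk for qk, pk in zip(q, p))
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Illustrative (loss, million-parameter) candidates from an evolution run
candidates = [(1.0, 5), (2.0, 3), (3.0, 4), (2.5, 2)]
front = pareto_front(candidates)  # (3.0, 4) is dominated by (2.0, 3)
```

Returning the whole front in one run, rather than a single compressed model, is what lets the method defer the accuracy-versus-size choice until preferences are known.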
Collapse
|
176
|
Kofler A, Haltmeier M, Schaeffter T, Kachelrieß M, Dewey M, Wald C, Kolbitsch C. Neural networks-based regularization for large-scale medical image reconstruction. Phys Med Biol 2020; 65:135003. [PMID: 32492660 DOI: 10.1088/1361-6560/ab990e] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
Abstract
In this paper we present a generalized Deep Learning-based approach for solving ill-posed large-scale inverse problems occurring in medical image reconstruction. Recently, Deep Learning methods using iterative neural networks (NNs) and cascaded NNs have been reported to achieve state-of-the-art results with respect to various quantitative quality measures such as PSNR, NRMSE and SSIM across different imaging modalities. However, the fact that these approaches employ the forward and adjoint operators repeatedly in the network architecture requires the network to process whole images or volumes at once, which for some applications is computationally infeasible. In this work, we follow a different reconstruction strategy by strictly separating the application of the NN, the regularization of the solution and the consistency with the measured data. The regularization is given in the form of an image prior obtained from the output of a previously trained NN, which is used in a Tikhonov regularization framework. By doing so, more complex and sophisticated network architectures can be used for the removal of artefacts or noise than is usually the case in iterative NNs. Due to the large scale of the considered problems and the resulting computational complexity of the employed networks, the priors are obtained by processing the images or volumes as patches or slices. We evaluated the method for 3D cone-beam low dose CT and undersampled 2D radial cine MRI and compared it to a total variation-minimization-based reconstruction algorithm as well as to a method with regularization based on learned overcomplete dictionaries. The proposed method outperformed all the reported methods with respect to all chosen quantitative measures and further accelerates the regularization step in the reconstruction by several orders of magnitude.
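A Tikhonov problem with an image prior of the form min_x ||Ax - y||^2 + λ||x - x_prior||^2 has the closed-form solution x = (AᵀA + λI)⁻¹(Aᵀy + λ·x_prior). A small dense-matrix sketch of this step, with the NN output standing in as x_prior (at clinical scale one would of course use an iterative solver rather than a direct factorization):

```python
import numpy as np

def tikhonov_with_prior(A, y, x_prior, lam):
    """Solve min_x ||A x - y||^2 + lam * ||x - x_prior||^2 in closed form:
    x = (A^T A + lam I)^(-1) (A^T y + lam * x_prior)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n),
                           A.T @ y + lam * x_prior)

# With A = I and a zero prior, lam = 1 simply halves the data term:
x = tikhonov_with_prior(np.eye(3), np.array([1.0, 2.0, 3.0]),
                        np.zeros(3), 1.0)  # -> [0.5, 1.0, 1.5]
```

The decoupling the abstract describes is visible here: the NN only produces x_prior, while data consistency with A and y is enforced separately by this solve.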
Collapse
Affiliation(s)
- A Kofler
- Department of Radiology, Charité - Universitätsmedizin Berlin, Berlin, Germany
Collapse
|
177
|
Guo P, Li D, Li X. Deep OCT image compression with convolutional neural networks. BIOMEDICAL OPTICS EXPRESS 2020; 11:3543-3554. [PMID: 33014550 PMCID: PMC7510896 DOI: 10.1364/boe.392882] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/16/2020] [Revised: 05/28/2020] [Accepted: 06/01/2020] [Indexed: 06/11/2023]
Abstract
We report an end-to-end image compression framework for retinal optical coherence tomography (OCT) images based on convolutional neural networks (CNNs), which achieved an image compression ratio as high as 80. Our compression scheme consists of three parts: data preprocessing, compression CNNs, and reconstruction CNNs. The preprocessing module was designed to reduce OCT speckle noise and segment out the region of interest. Skip connections with quantization were developed and added between the compression CNNs and the reconstruction CNNs to preserve the fine-structure information. The two networks were trained together, taking the semantically segmented images from the preprocessing module as input. To make the two networks sensitive to both low and high frequency information, we leveraged an objective function with two components: an adversarial discriminator to judge the high frequency information and a differentiable multi-scale structural similarity (MS-SSIM) penalty to evaluate the low frequency information. The proposed framework was trained and evaluated on ophthalmic OCT images with pathological information. The evaluation showed that reconstructed images can still achieve above 99% similarity in terms of MS-SSIM when the compression ratio reaches 40. Furthermore, the reconstructed images after 80-fold compression with the proposed framework presented quality comparable to those at a compression ratio of 20 from state-of-the-art methods. The test results showed that the proposed framework outperformed other methods in terms of both MS-SSIM and visualization, which was more obvious at higher compression ratios. Compression and reconstruction were fast, taking only about 0.015 seconds per image. The results suggest a promising potential of deep neural networks for customized medical image compression, particularly valuable for effective image storage and tele-transfer.
Collapse
Affiliation(s)
- Pengfei Guo
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Equal contribution
| | - Dawei Li
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
- Equal contribution
| | - Xingde Li
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
| |
Collapse
|
178
|
Simulation Study of Low-Dose Sparse-Sampling CT with Deep Learning-Based Reconstruction: Usefulness for Evaluation of Ovarian Cancer Metastasis. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10134446] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/22/2023]
Abstract
The usefulness of sparse-sampling CT with deep learning-based reconstruction for detection of metastasis of malignant ovarian tumors was evaluated. We obtained contrast-enhanced CT images (n = 141) of ovarian cancers from a public database, which were randomly divided into 71 training, 20 validation, and 50 test cases. Sparse-sampling CT images were calculated slice-by-slice by software simulation. Two deep-learning models for deep learning-based reconstruction were evaluated: Residual Encoder-Decoder Convolutional Neural Network (RED-CNN) and deeper U-net. For the 50 test cases, we evaluated the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) as quantitative measures. Two radiologists independently performed a qualitative evaluation of the following points: entire CT image quality; visibility of the iliac artery; and visibility of peritoneal dissemination, liver metastasis, and lymph node metastasis. The Wilcoxon signed-rank test and McNemar test were used to compare image quality and metastasis detectability between the two models, respectively. The mean PSNR and SSIM were better with deeper U-net than with RED-CNN. For all items of the visual evaluation, deeper U-net scored significantly better than RED-CNN. The metastasis detectability with deeper U-net was more than 95%. Sparse-sampling CT with deep learning-based reconstruction proved useful in detecting metastasis of malignant ovarian tumors and might contribute to reducing overall CT radiation exposure.
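PSNR, one of the two quantitative measures used above, is defined as 10·log10(R²/MSE) for intensity range R. A minimal sketch (toy images, not the study's data):

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, data_range: float) -> float:
    """Peak signal-to-noise ratio in dB for a given intensity range."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return float(10.0 * np.log10(data_range ** 2 / mse))

# A uniform error of 0.1 on a unit-range image gives MSE = 0.01 -> 20 dB
value = psnr(np.zeros((8, 8)), np.full((8, 8), 0.1), data_range=1.0)
```

Higher PSNR for deeper U-net over RED-CNN, as reported, means its reconstructions deviate less from the fully sampled reference in mean-squared terms.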
Collapse
|
179
|
Chen XL, Yan TY, Wang N, von Deneen KM. Rising role of artificial intelligence in image reconstruction for biomedical imaging. Artif Intell Med Imaging 2020; 1:1-5. [DOI: 10.35711/aimi.v1.i1.1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/07/2020] [Revised: 06/09/2020] [Accepted: 06/16/2020] [Indexed: 02/06/2023] Open
Abstract
In this editorial, we review recent progress in the applications of artificial intelligence (AI) to image reconstruction for biomedical imaging. Because it abandons the hand-crafted prior information of traditional designs and adopts a completely data-driven mode that obtains deeper prior information via learning, AI technology plays an increasingly important role in biomedical image reconstruction. The combination of AI technology and biomedical image reconstruction methods has become a hotspot in the field. Aided by AI, the performance of biomedical image reconstruction has been improved in terms of accuracy, resolution, imaging speed, etc. We specifically focus on how to use AI technology to improve the performance of biomedical image reconstruction, and propose possible future directions in this field.
Collapse
Affiliation(s)
- Xue-Li Chen
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an 710126, Shaanxi Province, China
| | - Tian-Yu Yan
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an 710126, Shaanxi Province, China
| | - Nan Wang
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an 710126, Shaanxi Province, China
| | - Karen M von Deneen
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an 710126, Shaanxi Province, China
| |
Collapse
|
180
|
Shan H, Jia X, Yan P, Li Y, Paganetti H, Wang G. Synergizing medical imaging and radiotherapy with deep learning. MACHINE LEARNING-SCIENCE AND TECHNOLOGY 2020. [DOI: 10.1088/2632-2153/ab869f] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
|
181
|
Kim Y, Kudo H. Nonlocal Total Variation Using the First and Second Order Derivatives and Its Application to CT Image Reconstruction. SENSORS 2020; 20:s20123494. [PMID: 32575760 PMCID: PMC7349404 DOI: 10.3390/s20123494] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/16/2020] [Revised: 06/15/2020] [Accepted: 06/18/2020] [Indexed: 11/17/2022]
Abstract
We propose a new class of nonlocal Total Variation (TV) in which the first and second derivatives are mixed. Since most existing TV formulations consider only the first-order derivative, they suffer from problems such as staircase artifacts and loss of smooth intensity changes for textures and low-contrast objects, which is a major limitation on improving image quality. The proposed nonlocal TV combines the first- and second-order derivatives to preserve smooth intensity changes well. Furthermore, to accelerate the iterative algorithm that minimizes the cost function using the proposed nonlocal TV, we propose a proximal splitting based on Passty’s framework. We demonstrate that the proposed nonlocal TV method achieves adequate image quality in both sparse-view CT and low-dose CT, through simulation studies using a brain CT image with a very narrow contrast range, for which it is rather difficult to preserve smooth intensity changes.
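A discrete sketch of a TV term mixing first- and second-order finite differences, in the spirit of (though much simpler than) the proposed nonlocal TV; the local anisotropic form and the mixing weight alpha are assumptions for illustration:

```python
import numpy as np

def mixed_order_tv(u: np.ndarray, alpha: float = 0.5) -> float:
    """Anisotropic TV mixing first- and second-order finite differences:
    (1 - alpha) * TV1(u) + alpha * TV2(u)."""
    tv1 = np.abs(np.diff(u, axis=0)).sum() + np.abs(np.diff(u, axis=1)).sum()
    tv2 = (np.abs(np.diff(u, n=2, axis=0)).sum()
           + np.abs(np.diff(u, n=2, axis=1)).sum())
    return float((1.0 - alpha) * tv1 + alpha * tv2)

# A linear ramp has constant first differences and zero second differences,
# so the second-order term does not penalize this smooth intensity change;
# this is the mechanism that counteracts staircase artifacts.
ramp = np.tile(np.arange(4.0), (4, 1))
cost = mixed_order_tv(ramp, alpha=0.5)  # tv1 = 12, tv2 = 0 -> 6.0
```

Pure first-order TV would charge the ramp its full gradient magnitude, pushing the minimizer toward piecewise-constant "staircase" solutions; the second-order term leaves linear trends free.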
Collapse
|
182
|
Shao W, Pomper MG, Du Y. A Learned Reconstruction Network for SPECT Imaging. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2020; 5:26-34. [PMID: 33403244 DOI: 10.1109/trpms.2020.2994041] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
A neural network designed specifically for SPECT image reconstruction was developed. The network reconstructed activity images directly from SPECT projection data. Training was performed on a corpus of data including digital phantoms generated with custom software and the corresponding projection data obtained from simulation. When using the network to reconstruct images, input projection data were initially fed to two fully connected (FC) layers to perform a basic reconstruction. Then the output of the FC layers and an attenuation map were delivered to five convolutional layers for signal-decay compensation and image optimization. To validate the system, data not used in training, simulated data from the Zubal human brain phantom, and clinical patient data were used to test reconstruction performance. Reconstructed images from the developed network proved closer to the truth, with higher resolution and quantitative accuracy than those from conventional OS-EM reconstruction. To better understand the operation of the network, intermediate results from hidden layers were investigated for each step of the processing. The network was also retrained with noisy projection data and compared with the version developed with noise-free data. The retrained network proved even more robust, having learned to filter noise. Finally, we showed that the network still provided sharp images when using reduced-view projection data (retrained with reduced-view data).
Affiliation(s)
- Wenyi Shao: Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Martin G Pomper: Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Yong Du: Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
|
183
|
Seo H, Huang C, Bassenne M, Xiao R, Xing L. Modified U-Net (mU-Net) With Incorporation of Object-Dependent High Level Features for Improved Liver and Liver-Tumor Segmentation in CT Images. IEEE Trans Med Imaging 2020; 39:1316-1325. [PMID: 31634827 PMCID: PMC8095064 DOI: 10.1109/tmi.2019.2948320]
Abstract
Segmentation of livers and liver tumors is one of the most important steps in radiation therapy of hepatocellular carcinoma. The task is often performed manually, making it tedious, labor intensive, and subject to intra-/inter-operator variation. While various algorithms for delineating organs-at-risk (OARs) and tumor targets have been proposed, automatic segmentation of livers and liver tumors remains intractable due to their low tissue contrast with respect to surrounding organs and their deformable shape in CT images. The U-Net has recently gained popularity for image analysis tasks and has shown promising results. Conventional U-Net architectures, however, suffer from three major drawbacks. First, skip connections allow the duplicated transfer of low-resolution feature-map information to improve learning efficiency, but this often blurs extracted image features. Second, high-level features extracted by the network often lack sufficient high-resolution edge information from the input, leading to greater uncertainty in tasks where high-resolution edges dominate the network's decisions, such as liver and liver-tumor segmentation. Third, it is generally difficult to optimize the number of pooling operations for extracting high-level global features, since the appropriate number depends on the object size. To address these problems, we added a residual path with deconvolution and activation operations to the skip connection of the U-Net to avoid duplicating low-resolution feature information. For small object inputs, features in the skip connection are not combined with features in the residual path. Furthermore, the proposed architecture adds convolution layers in the skip connection to extract high-level global features of small objects as well as high-level features of the high-resolution edge information of large objects.
Efficacy of the modified U-Net (mU-Net) was demonstrated on the public dataset of the Liver Tumor Segmentation (LiTS) Challenge 2017. For liver-tumor segmentation, a Dice similarity coefficient (DSC) of 89.72%, volume of error (VOE) of 21.93%, and relative volume difference (RVD) of -0.49% were obtained. For liver segmentation, a DSC of 98.51%, VOE of 3.07%, and RVD of 0.26% were obtained. On the public 3D Image Reconstruction for Comparison of Algorithm Database (3Dircadb), DSCs were 96.01% for liver and 68.14% for liver-tumor segmentation. The proposed mU-Net outperformed existing state-of-the-art networks.
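The three segmentation metrics reported above can be computed from binary masks as follows (a sketch assuming the usual formulas for DSC, VOE, and RVD, which the abstract does not restate; `seg_metrics` is an illustrative helper name):

```python
import numpy as np

def seg_metrics(pred, truth):
    """Dice similarity coefficient (DSC), volume of error (VOE), and
    relative volume difference (RVD) for binary segmentation masks.

    Standard definitions assumed:
      DSC = 2|P & T| / (|P| + |T|)
      VOE = 1 - |P & T| / |P | T|
      RVD = (|P| - |T|) / |T|
    """
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dsc = 2.0 * inter / (pred.sum() + truth.sum())
    voe = 1.0 - inter / union
    rvd = (pred.sum() - truth.sum()) / truth.sum()
    return dsc, voe, rvd
```

A perfect prediction gives DSC 1, VOE 0, and RVD 0; under-segmentation drives RVD negative, matching the sign convention of the -0.49% figure quoted above.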
|
184
|
Whiteley W, Luk WK, Gregor J. DirectPET: full-size neural network PET reconstruction from sinogram data. J Med Imaging (Bellingham) 2020; 7:032503. [PMID: 32206686 PMCID: PMC7048204 DOI: 10.1117/1.jmi.7.3.032503]
Abstract
Purpose: Neural network image reconstruction directly from measurement data is a relatively new field of research, which until now has been limited to producing small single-slice images (e.g., 1 × 128 × 128). We proposed a more efficient network design for positron emission tomography called DirectPET, which is capable of reconstructing multislice image volumes (i.e., 16 × 400 × 400) from sinograms. Approach: Large-scale direct neural network reconstruction is accomplished by addressing the associated memory space challenge through the introduction of a specially designed Radon inversion layer. Using patient data, we compare the proposed method to the benchmark ordered subsets expectation maximization (OSEM) algorithm using signal-to-noise ratio, bias, mean absolute error, and structural similarity measures. In addition, line profiles and full-width-at-half-maximum measurements are provided for a sample of lesions. Results: DirectPET is shown capable of producing images that are quantitatively and qualitatively similar to the OSEM target images in a fraction of the time. We also report on an experiment where DirectPET is trained to map low-count raw data to normal-count target images, demonstrating the method's ability to maintain image quality under a low-dose scenario. Conclusion: The ability of DirectPET to quickly reconstruct high-quality, multislice image volumes suggests potential clinical viability of the method. However, design parameters and performance boundaries need to be fully established before adoption can be considered.
Affiliation(s)
- William Whiteley: Department of Electrical Engineering and Computer Science, The University of Tennessee, Knoxville, Tennessee, United States; Siemens Medical Solutions USA, Inc., Knoxville, Tennessee, United States
- Wing K. Luk: Siemens Medical Solutions USA, Inc., Knoxville, Tennessee, United States
- Jens Gregor: Department of Electrical Engineering and Computer Science, The University of Tennessee, Knoxville, Tennessee, United States
|
185
|
Chen G, Hong X, Ding Q, Zhang Y, Chen H, Fu S, Zhao Y, Zhang X, Ji H, Wang G, Huang Q, Gao H. AirNet: Fused analytical and iterative reconstruction with deep neural network regularization for sparse-data CT. Med Phys 2020; 47:2916-2930. [DOI: 10.1002/mp.14170]
Affiliation(s)
- Gaoyu Chen: Department of Nuclear Medicine, Rui Jin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200240, China; School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China; Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, GA 30322, USA
- Xiang Hong: Department of Nuclear Medicine, Rui Jin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200240, China; School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Qiaoqiao Ding: Department of Mathematics, National University of Singapore, 119077, Singapore
- Yi Zhang: College of Computer Science, Sichuan University, Chengdu, Sichuan 610065, China
- Hu Chen: College of Computer Science, Sichuan University, Chengdu, Sichuan 610065, China
- Shujun Fu: School of Mathematics, Shandong University, Jinan, Shandong 250100, China
- Yunsong Zhao: Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, GA 30322, USA; School of Mathematical Sciences, Capital Normal University, Beijing 100048, China
- Xiaoqun Zhang: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Hui Ji: Department of Mathematics, National University of Singapore, 119077, Singapore
- Ge Wang: Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Qiu Huang: Department of Nuclear Medicine, Rui Jin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200240, China; School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Hao Gao: Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, GA 30322, USA
|
186
|
Han Y, Ye JC. One network to solve all ROIs: Deep learning CT for any ROI using differentiated backprojection. Med Phys 2020; 46:e855-e872. [PMID: 31811795 DOI: 10.1002/mp.13631]
Abstract
PURPOSE Computed tomography reconstruction of a region of interest (ROI) has advantages in reducing the x-ray dose and allowing the use of a small detector. However, standard analytic reconstruction methods such as filtered back projection (FBP) suffer from severe cupping artifacts, and existing model-based iterative reconstruction methods require extensive computation. Recently, we proposed a deep neural network to learn the cupping artifacts, but the network did not generalize well across different ROIs due to singularities in the corrupted images. There is therefore an increasing demand for a neural network that works well for any ROI size. METHOD Two types of neural networks are designed. The first type learns ROI size-specific cupping artifacts from FBP images, whereas the second type inverts the truncated Hilbert transform from truncated differentiated backprojection (DBP) data. Their generalizability across different ROI sizes, pixel sizes, detector pitches, and starting angles for a short scan is then investigated. RESULTS Experimental results show that the second type of network significantly outperforms existing iterative methods for all ROI sizes despite significantly lower runtime complexity. In addition, the performance improvement is consistent across different acquisition scenarios. CONCLUSIONS Since the proposed method consistently surpasses existing methods, it can be used as a general CT reconstruction engine for many practical applications, even in the presence of detector truncation.
Affiliation(s)
- Yoseob Han: BISPL - Bio Imaging, Signal Processing, and Learning Laboratory, Department of Bio and Brain Engineering, KAIST, Daejeon, Republic of Korea
- Jong Chul Ye: BISPL - Bio Imaging, Signal Processing, and Learning Laboratory, Department of Bio and Brain Engineering, KAIST, Daejeon, Republic of Korea
|
187
|
Liang K, Zhang L, Yang H, Yang Y, Chen Z, Xing Y. Metal artifact reduction for practical dental computed tomography by improving interpolation-based reconstruction with deep learning. Med Phys 2020; 46:e823-e834. [PMID: 31811792 DOI: 10.1002/mp.13644]
Abstract
PURPOSE Metal artifacts are a common problem in diagnostic dental computed tomography (CT) images. Due to the high attenuation of heavy materials such as metal, severe global artifacts can occur in reconstructions. Typical metal artifact reduction (MAR) techniques segment out the metal regions and estimate the corrupted projection data by various interpolation methods. However, interpolation is not accurate and can introduce new artifacts or even deform the teeth in the reconstructed image. This work presents a new strategy that harnesses deep learning for metal artifact reduction. METHOD The method starts from coarse reconstructions of simulated, locally interpolated data affected by metal fillings. A deep learning network is trained on the simulated data and then applied to practical data, yielding an easily implemented three-step MAR method. First, the acquired projection data are used to create a preliminary reconstruction in which the metal-related projections are replaced by linear interpolation. Second, a deep learning network removes the artifacts introduced by the linear interpolation and recovers information in the nonmetal region. Third, the ROI reconstruction of the metal regions is added back. The structures behind the shading artifacts in the direct filtered back-projection (FBP) reconstruction can be partially recovered by interpolation-based MAR (I-MAR), with the network further correcting the interpolation errors. The key to this method is that the linear interpolation reconstruction errors can easily be simulated to train a network, and the network's effectiveness readily generalizes to I-MAR results in real situations. RESULTS We trained a network on a simulation dataset and validated it against a separate simulation dataset. The network was then tested on simulation data that did not overlap with the training/validation datasets and on real patient datasets.
Both tests gave encouraging results, with accurate tooth-structure recovery and few artifacts. The relative root mean square error and structural similarity index were significantly improved in the tests. The method was also evaluated by two experienced dentists, who gave positive assessments. CONCLUSIONS This work presents a strategy for transferring learning from simulations to practical systems for metal artifact reduction using supervised deep learning. It transforms MAR into an interpolation-artifact reduction problem that recovers structural details from the coarse interpolation reconstruction. In this way, training data from simulations with ground-truth labels can model the similar features in real data, with I-MAR as the bridge, and the network handles simulated and real data seamlessly. The whole method is easily implemented with little computational cost. Test results demonstrated that this is an effective MAR method applicable to practical dental CT systems.
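The first step of an interpolation-based MAR pipeline, replacing metal-corrupted detector bins by linear interpolation within each projection view, can be sketched as follows (a minimal illustration; `interpolate_metal_trace` and its per-view `np.interp` scheme are assumptions, not the authors' exact implementation):

```python
import numpy as np

def interpolate_metal_trace(sinogram, metal_mask):
    """Fill metal-corrupted detector bins by linear interpolation
    along each projection view.

    sinogram:   (n_views, n_bins) projection data
    metal_mask: boolean array of the same shape, True where the ray
                passed through metal
    """
    out = sinogram.copy()
    bins = np.arange(sinogram.shape[1])
    for v in range(sinogram.shape[0]):
        bad = metal_mask[v]
        if bad.any() and not bad.all():
            # np.interp fills the corrupted bins from the neighboring
            # uncorrupted values in the same view
            out[v, bad] = np.interp(bins[bad], bins[~bad],
                                    sinogram[v, ~bad])
    return out
```

The reconstruction of this filled sinogram is the coarse I-MAR image whose residual interpolation errors the network is then trained to remove.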
Affiliation(s)
- Kaichao Liang: Department of Engineering Physics, Tsinghua University, Beijing 100084, China; Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education, Beijing, China
- Li Zhang: Department of Engineering Physics, Tsinghua University, Beijing 100084, China; Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education, Beijing, China
- Hongkai Yang: Department of Engineering Physics, Tsinghua University, Beijing 100084, China; Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education, Beijing, China
- Yirong Yang: Department of Engineering Physics, Tsinghua University, Beijing 100084, China
- Zhiqiang Chen: Department of Engineering Physics, Tsinghua University, Beijing 100084, China; Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education, Beijing, China
- Yuxiang Xing: Department of Engineering Physics, Tsinghua University, Beijing 100084, China; Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education, Beijing, China
|
188
|
Nakai H, Nishio M, Yamashita R, Ono A, Nakao KK, Fujimoto K, Togashi K. Quantitative and Qualitative Evaluation of Convolutional Neural Networks with a Deeper U-Net for Sparse-View Computed Tomography Reconstruction. Acad Radiol 2020; 27:563-574. [PMID: 31281082 DOI: 10.1016/j.acra.2019.05.016]
Abstract
RATIONALE AND OBJECTIVES To evaluate the utility of a convolutional neural network (CNN) with an increased number of contracting and expanding paths in U-net for sparse-view CT reconstruction. MATERIALS AND METHODS This study used 60 anonymized chest CT cases from a public database, "The Cancer Imaging Archive". Eight thousand images from 40 cases were used for training. Eight hundred images and 80 images from another 20 cases were used for quantitative and qualitative evaluation, respectively. Sparse-view CT images subsampled by a factor of 20 were simulated, and two CNNs were trained to create denoised images from the sparse-view CT. A CNN based on U-net with residual learning and four contracting and expanding paths (the preceding CNN) was compared with another CNN with eight contracting and expanding paths (the proposed CNN), both quantitatively (peak signal-to-noise ratio, structural similarity index) and qualitatively (scores given by two radiologists for anatomical visibility, artifact and noise, and overall image quality), using the Wilcoxon signed-rank test. Nodule and emphysema appearance were also evaluated qualitatively. RESULTS The proposed CNN was significantly better than the preceding CNN both quantitatively and qualitatively (overall image quality interquartile range, 3.0-3.5 versus 1.0-1.0 for the preceding CNN; p < 0.001). However, only 2 of 22 cases used for emphysema evaluation (2 CNNs for each of 11 cases with emphysema) had an average score of ≥ 2 (on a 3-point scale). CONCLUSION Increasing the number of contracting and expanding paths may be useful for sparse-view CT reconstruction with a CNN. However, the poor reproducibility of emphysema appearance should be noted.
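The peak signal-to-noise ratio used for the quantitative comparison can be computed as follows (standard definition assumed; the abstract does not state the exact data-range convention used):

```python
import numpy as np

def psnr(ref, test, data_range=None):
    """Peak signal-to-noise ratio in dB between a reference image and
    a test image. data_range defaults to the dynamic range of the
    reference, a common (but assumed) convention."""
    ref = np.asarray(ref, dtype=float)
    test = np.asarray(test, dtype=float)
    if data_range is None:
        data_range = ref.max() - ref.min()
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```

Higher PSNR means the denoised sparse-view reconstruction is closer to the fully sampled reference; a uniform error of 10% of the dynamic range corresponds to 20 dB.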
|
189
|
Feigin M, Freedman D, Anthony BW. A Deep Learning Framework for Single-Sided Sound Speed Inversion in Medical Ultrasound. IEEE Trans Biomed Eng 2020; 67:1142-1151. [DOI: 10.1109/tbme.2019.2931195]
|
190
|
Deng M, Li S, Goy A, Kang I, Barbastathis G. Learning to synthesize: robust phase retrieval at low photon counts. Light Sci Appl 2020; 9:36. [PMID: 32194950 PMCID: PMC7062747 DOI: 10.1038/s41377-020-0267-2]
Abstract
The quality of inverse problem solutions obtained through deep learning is limited by the nature of the priors learned from the examples presented during training. In quantitative phase retrieval in particular, spatial frequencies that are underrepresented in the training database, most often in the high band, tend to be suppressed in the reconstruction. Ad hoc solutions have been proposed, such as pre-amplifying the high spatial frequencies in the examples; however, while that strategy improves resolution, it also leads to high-frequency artefacts as well as low-frequency distortions in the reconstructions. Here, we present a new approach that learns separately how to handle the two frequency bands, low and high, and how to synthesize them into full-band reconstructions. We show that this "learning to synthesize" (LS) method yields artefact-free phase reconstructions of high spatial resolution and that it is resilient to high-noise conditions, e.g., very low photon flux. Beyond quantitative phase retrieval, the LS method is applicable, in principle, to any ill-posed inverse problem whose forward operator treats different frequency bands unevenly.
Affiliation(s)
- Mo Deng: Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Shuai Li: Sensebrain Technology Limited LLC, 2550 N 1st Street, Suite 300, San Jose, CA 95131, USA
- Alexandre Goy: Omnisens SA, Riond Bosson 3, 1110 Morges, VD, Switzerland
- Iksung Kang: Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- George Barbastathis: Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Singapore-MIT Alliance for Research and Technology (SMART) Centre, Singapore 117543, Singapore
|
191
|
Kofler A, Dewey M, Schaeffter T, Wald C, Kolbitsch C. Spatio-Temporal Deep Learning-Based Undersampling Artefact Reduction for 2D Radial Cine MRI With Limited Training Data. IEEE Trans Med Imaging 2020; 39:703-717. [PMID: 31403407 DOI: 10.1109/tmi.2019.2930318]
Abstract
In this work, we reduce undersampling artefacts in two-dimensional (2D) golden-angle radial cine cardiac MRI by applying a modified version of the U-net. The network is trained on 2D spatio-temporal slices extracted from the image sequences. We compare our approach to two 2D and one 3D deep learning-based post-processing methods, three iterative reconstruction methods, and two recently proposed methods for dynamic cardiac MRI based on 2D and 3D cascaded networks. Our method outperforms the 2D spatially trained U-net and the 2D spatio-temporal U-net. Compared to the 3D spatio-temporal U-net, our method delivers comparable results while requiring shorter training times and less training data. Compared to the compressed sensing-based method kt-FOCUSS and a total variation regularized reconstruction approach, our method improves image quality with respect to all reported metrics. Further, it achieves competitive results against an iterative reconstruction method based on adaptive regularization with dictionary learning and total variation, and against the cascaded-network methods, while requiring only a small fraction of the computational and training time. A persistent homology analysis demonstrates that the data manifold of the spatio-temporal domain has lower complexity than that of the spatial domain, which facilitates learning a projection-like mapping. Even when trained on a single subject without data augmentation, our approach yields results similar to those obtained on a large training dataset. This makes the method particularly suitable for training a network on limited data. Finally, in contrast to the spatial 2D U-net, our proposed method is naturally robust to image rotation in image space and almost achieves rotation equivariance without requiring data augmentation or a particular network design.
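The 2D spatio-temporal training slices mentioned above can be pictured as cuts through a cine series along time and one spatial axis; a minimal sketch (the function name `xt_slices`, the slicing convention, and the (t, x, y) array layout are assumptions for illustration, not taken from the paper):

```python
import numpy as np

def xt_slices(cine, axis="x"):
    """Extract 2D spatio-temporal slices from a cine series.

    cine: array of shape (t, x, y)
    Returns a list of (t, x) slices (one per y index) for axis="x",
    or (t, y) slices (one per x index) otherwise.
    """
    t, nx, ny = cine.shape
    if axis == "x":
        # fix a y position; the slice varies over time and x
        return [cine[:, :, j] for j in range(ny)]
    # fix an x position; the slice varies over time and y
    return [cine[:, i, :] for i in range(nx)]
```

Training on such slices multiplies the number of training samples per subject, which is one plausible reason the approach copes well with limited training data.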
|
192
|
Sunaguchi N, Shimao D, Yuasa T, Ichihara S, Nishimura R, Oshima R, Watanabe A, Niwa K, Ando M. Three-dimensional microanatomy of human nipple visualized by X-ray dark-field computed tomography. Breast Cancer Res Treat 2020; 180:397-405. [PMID: 32056054 DOI: 10.1007/s10549-020-05574-w]
Abstract
PURPOSE The three-dimensional (3D) structure of the human nipple has not been fully clarified. However, its importance has increased in recent years because it has become common practice to preoperatively explore the spread of breast cancer to the nipple with needle biopsy, ductoscopy, and/or ductal lavage for nipple-sparing mastectomy. Here, we demonstrated that X-ray dark-field computed tomography (XDFI-CT) is a powerful tool for reconstructing the 3D distribution pattern of human lactiferous ducts non-destructively, without contrast agent, and with high tissue contrast. METHODS Nipples amputated from mastectomy specimens of 51 patients with breast cancer were visualized three-dimensionally by XDFI-CT. First, CT images and conventionally stained tissue sections were compared to demonstrate that XDFI-CT provides 3D anatomical information. Next, the number of ducts in the nipple and the number of ducts sharing an ostium near the tip of the nipple were measured from the volume set of XDFI-CT. Finally, the 3D distribution pattern of the ducts was determined. RESULTS XDFI-CT can provide images almost equivalent to those of low-magnification light microscopy of conventional hematoxylin-eosin-stained histological sections. The mean number of ducts in all cases was 28.0. The total number of ducts sharing an ostium near the tip of the nipple was 525 of 1428. The 3D distribution patterns of the ducts were classified into three types that we defined as convergent (22%), straight (39%), or divergent (39%). CONCLUSIONS XDFI-CT is useful for exploring the microanatomy of the human nipple and might be used for non-invasive nipple diagnosis in the future.
Affiliation(s)
- Naoki Sunaguchi: Graduate School of Medicine, Nagoya University, Nagoya, Aichi 461-8673, Japan
- Daisuke Shimao: Department of Radiological Technology, Hokkaido University of Science, Sapporo, Hokkaido 006-8585, Japan
- Tetsuya Yuasa: Graduate School of Engineering and Science, Yamagata University, Yonezawa, Yamagata 992-8510, Japan
- Shu Ichihara: Department of Pathology, Nagoya Medical Center, Nagoya, Aichi 460-0001, Japan
- Rieko Nishimura: Department of Pathology, Nagoya Medical Center, Nagoya, Aichi 460-0001, Japan
- Risa Oshima: Department of Radiological Technology, Hokkaido University of Science, Sapporo, Hokkaido 006-8585, Japan
- Aya Watanabe: Graduate School of Medicine, Nagoya University, Nagoya, Aichi 461-8673, Japan
- Kikuko Niwa: Graduate School of Medicine, Nagoya University, Nagoya, Aichi 461-8673, Japan
- Masami Ando: Comprehensive Research Organization for Science and Society, Tsuchiura, Ibaraki 300-0811, Japan
|
193
|
Yang F, Zhang D, Zhang H, Huang K, Du Y, Teng M. Streaking artifacts suppression for cone-beam computed tomography with the residual learning in neural network. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.09.087]
|
194
|
Ge Y, Su T, Zhu J, Deng X, Zhang Q, Chen J, Hu Z, Zheng H, Liang D. ADAPTIVE-NET: deep computed tomography reconstruction network with analytical domain transformation knowledge. Quant Imaging Med Surg 2020; 10:415-427. [PMID: 32190567 DOI: 10.21037/qims.2019.12.12]
Abstract
Background Recently, the paradigm of computed tomography (CT) reconstruction has shifted as deep learning techniques have evolved. In this study, we propose a new convolutional neural network (called ADAPTIVE-NET) that reconstructs CT images directly from a sinogram by integrating analytical domain-transformation knowledge. Methods In the proposed ADAPTIVE-NET, a dedicated network layer with constant weights transforms the sinogram into the CT image domain via analytical back-projection. With this framework, feature extraction is performed simultaneously in both the sinogram domain and the CT image domain. The Mayo low-dose CT (LDCT) data were used to validate the new network, which was compared in particular with the previously proposed residual encoder-decoder (RED)-CNN network. For each network, the mean square error (MSE) loss with and without a VGG-based perceptual loss was compared. Furthermore, to evaluate image quality quantitatively, the noise correlation was measured via the noise power spectrum (NPS) on the reconstructed LDCT images for each method. Results CT images of clinically relevant dimensions (512×512) can easily be reconstructed from a sinogram by ADAPTIVE-NET on a single graphics processing unit (GPU) with moderate memory (e.g., 11 GB). With the same MSE loss function, the new network generates better results than RED-CNN. Moreover, the new network reconstructs natural-looking CT images with enhanced quality when the VGG loss is used jointly. Conclusions The newly proposed end-to-end supervised ADAPTIVE-NET is able to reconstruct high-quality LDCT images directly from a sinogram.
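A constant-weight analytical back-projection layer can be pictured as multiplication by a fixed matrix mapping sinogram values to pixels. Below is a toy parallel-beam, nearest-detector-bin version (the `backprojection_matrix` helper and its geometry are illustrative assumptions; the paper's layer and the CT geometry it encodes are more elaborate):

```python
import numpy as np

def backprojection_matrix(n, angles):
    """Dense back-projection operator for an n-by-n image and a list of
    projection angles (radians), with n detector bins per view.

    Each pixel receives the sinogram value of the detector bin its
    center projects onto (nearest-bin model, unit weights). The matrix
    is constant, i.e., it would be a frozen, non-trainable layer.
    """
    n_det = n
    A = np.zeros((n * n, len(angles) * n_det))
    c = (n - 1) / 2.0  # image/detector center offset
    for a, th in enumerate(angles):
        ct, st = np.cos(th), np.sin(th)
        for i in range(n):
            for j in range(n):
                x, y = j - c, c - i           # pixel center coordinates
                t = int(round(x * ct + y * st + c))  # detector bin
                if 0 <= t < n_det:
                    A[i * n + j, a * n_det + t] = 1.0
    return A
```

For a single view at angle 0, back-projecting a sinogram simply smears each detector value along its column, which is the expected behavior of an unfiltered back-projection.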
Affiliation(s)
- Yongshuai Ge: Research Center for Medical Artificial Intelligence, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen Institutes of Advanced Technology, Shenzhen 518055, China
- Ting Su: Research Center for Medical Artificial Intelligence, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Jiongtao Zhu: Research Center for Medical Artificial Intelligence, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Xiaolei Deng: Research Center for Medical Artificial Intelligence, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Qiyang Zhang: Research Center for Medical Artificial Intelligence, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Jianwei Chen: Research Center for Medical Artificial Intelligence, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Zhanli Hu: Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen Institutes of Advanced Technology, Shenzhen 518055, China
- Hairong Zheng: Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen Institutes of Advanced Technology, Shenzhen 518055, China
- Dong Liang: Research Center for Medical Artificial Intelligence, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen Institutes of Advanced Technology, Shenzhen 518055, China
|
195
|
Ravishankar S, Ye JC, Fessler JA. Image Reconstruction: From Sparsity to Data-adaptive Methods and Machine Learning. Proc IEEE 2020; 108:86-109. [PMID: 32095024 PMCID: PMC7039447 DOI: 10.1109/jproc.2019.2936204]
Abstract
The field of medical image reconstruction has seen roughly four types of methods. The first type tended to be analytical methods, such as filtered back-projection (FBP) for X-ray computed tomography (CT) and the inverse Fourier transform for magnetic resonance imaging (MRI), based on simple mathematical models for the imaging systems. These methods are typically fast, but have suboptimal properties such as poor resolution-noise trade-off for CT. A second type is iterative reconstruction methods based on more complete models for the imaging system physics and, where appropriate, models for the sensor statistics. These iterative methods improved image quality by reducing noise and artifacts. The FDA-approved methods among these have been based on relatively simple regularization models. A third type of methods has been designed to accommodate modified data acquisition methods, such as reduced sampling in MRI and CT to reduce scan time or radiation dose. These methods typically involve mathematical image models involving assumptions such as sparsity or low-rank. A fourth type of methods replaces mathematically designed models of signals and systems with data-driven or adaptive models inspired by the field of machine learning. This paper focuses on the two most recent trends in medical image reconstruction: methods based on sparsity or low-rank models, and data-driven methods based on machine learning techniques.
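As a concrete illustration of the sparsity-based iterative methods this review surveys (not code from the paper itself), the following sketch runs ISTA, a classic iterative shrinkage-thresholding algorithm, to recover a sparse signal from undersampled linear measurements; the system matrix, sparsity level, and regularization weight are all illustrative choices.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm (shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.05, n_iter=500):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by iterative shrinkage."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-fit gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the quadratic data term
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)   # 40 measurements, 100 unknowns
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]             # 3-sparse ground truth
y = A @ x_true
x_hat = ista(A, y)
print(np.round(np.linalg.norm(x_hat - x_true), 2))
```

Despite having far fewer measurements than unknowns, the l1 penalty recovers the sparse signal closely, which is the core idea behind the reduced-sampling CT/MRI reconstruction methods described in the abstract.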
Collapse
Affiliation(s)
- Saiprasad Ravishankar
- Departments of Computational Mathematics, Science and Engineering, and Biomedical Engineering at Michigan State University, East Lansing, MI, 48824 USA
| | - Jong Chul Ye
- Department of Bio and Brain Engineering and Department of Mathematical Sciences at the Korea Advanced Institute of Science & Technology (KAIST), Daejeon, South Korea
| | - Jeffrey A Fessler
- Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, 48109 USA
| |
Collapse
|
196
|
Shin H, Lee J, Eo T, Jun Y, Kim S, Hwang D. The Latest Trends in Attention Mechanisms and Their Application in Medical Imaging. JOURNAL OF THE KOREAN SOCIETY OF RADIOLOGY 2020; 81:1305-1333. [PMID: 36237722 PMCID: PMC9431827 DOI: 10.3348/jksr.2020.0150] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/13/2020] [Revised: 11/02/2020] [Accepted: 11/07/2020] [Indexed: 11/29/2022]
Abstract
Deep learning, powered by big data and computing power, has recently achieved remarkable results in radiology research. However, as deep learning networks grow deeper to improve performance, their internal computations become harder to interpret, which is a serious problem in medical decision-making, where patients' lives are at stake. To address this, "explainable artificial intelligence" techniques are being studied, one of which is the attention mechanism. This review covers two classes of techniques in detail: Post-hoc attention, for analyzing an already trained network, and Trainable attention, for further improving network performance. For each, we describe the methods, examples of their application to medical imaging research, and future prospects.
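As a minimal sketch of the mechanism underlying the trainable-attention methods this review covers (illustrative only, not taken from the review), the following computes scaled dot-product attention in plain numpy; all shapes and inputs are arbitrary.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) @ V."""
    d = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d))   # each query row sums to 1 over keys
    return weights @ V, weights

rng = np.random.default_rng(1)
Q = rng.standard_normal((4, 8))   # 4 query positions, feature dim 8
K = rng.standard_normal((6, 8))   # 6 key/value positions
V = rng.standard_normal((6, 8))
out, w = attention(Q, K, V)
print(out.shape, bool(np.allclose(w.sum(axis=1), 1.0)))
```

The weight matrix `w` is exactly the kind of per-location importance map that both post-hoc and trainable attention methods expose for interpreting a network's decision.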
Collapse
Affiliation(s)
- Hyungseob Shin
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea
| | - Jeongryong Lee
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea
| | - Taejoon Eo
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea
| | - Yohan Jun
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea
| | - Sewon Kim
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea
| | - Dosik Hwang
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea
| |
Collapse
|
197
|
Chen Z, Zhang Q, Zhou C, Zhang M, Yang Y, Liu X, Zheng H, Liang D, Hu Z. Low-dose CT reconstruction method based on prior information of normal-dose image. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2020; 28:1091-1111. [PMID: 33044223 DOI: 10.3233/xst-200716] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
BACKGROUND Radiation risk from computed tomography (CT) is always an issue for patients, especially those in clinical conditions in which repeated CT scanning is required. For patients undergoing repeated CT scanning, a low-dose protocol, such as sparse scanning, is often used, and consequently, an advanced reconstruction algorithm is also needed. OBJECTIVE To develop a novel algorithm for sparse-view CT reconstruction that exploits a prior image. METHODS A low-dose CT reconstruction method based on prior information of a normal-dose image (PI-NDI), involving a transformed model for the attenuation coefficients of the object to be reconstructed and application of prior information in the forward-projection process, was used to reconstruct CT images from sparse-view projection data. A digital extended cardiac-torso (XCAT) ventral phantom and a diagnostic head phantom were employed to evaluate the performance of the proposed PI-NDI method. The root-mean-square error (RMSE), peak signal-to-noise ratio (PSNR) and mean percent absolute error (MPAE) of the reconstructed images were measured for quantitative evaluation of the proposed PI-NDI method. RESULTS The images reconstructed from sparse-view projection data via the proposed PI-NDI method have higher quality by visual inspection than those reconstructed by the compared methods. In terms of quantitative evaluation, the RMSE measured on the images reconstructed by the PI-NDI method with sparse projection data is comparable to that by MLEM-TV, PWLS-TV and PWLS-PICCS with fully sampled projection data. When the projection data are very sparse, images reconstructed by the PI-NDI method have higher PSNR values and lower MPAE values than those from the compared algorithms. CONCLUSIONS This study presents a new low-dose CT reconstruction method based on prior information of a normal-dose image (PI-NDI) for sparse-view CT image reconstruction. The experimental results validate that the new method has superior performance over other state-of-the-art methods.
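The abstract does not give the PI-NDI update equations, but the general idea of regularizing a sparse-view reconstruction toward a registered normal-dose prior image can be sketched as follows (a generic prior-image penalty solved by gradient descent on a toy linear system, not the actual PI-NDI algorithm; all sizes and weights are assumptions).

```python
import numpy as np

def reconstruct(A, y, x_prior, beta=1.0, step=0.05, n_iter=2000):
    """Minimize 0.5*||A x - y||^2 + 0.5*beta*||x - x_prior||^2 by gradient descent."""
    x = x_prior.copy()                               # warm start from the prior image
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y) + beta * (x - x_prior)
        x -= step * grad
    return x

rng = np.random.default_rng(2)
n = 30
A = rng.standard_normal((10, n)) / np.sqrt(10)       # severely undersampled "views"
x_true = np.sin(np.linspace(0, 3, n))                # toy 1-D "object"
x_prior = x_true + 0.05 * rng.standard_normal(n)     # slightly mismatched prior image
y = A @ x_true                                       # sparse-view measurements
x_hat = reconstruct(A, y, x_prior)
rel_err = float(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
print(round(rel_err, 3))
```

With only 10 measurements for 30 unknowns, the data term alone is hopelessly underdetermined; the prior-image penalty supplies the missing information, which is the intuition behind prior-driven sparse-view methods of this kind.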
Collapse
Affiliation(s)
- Zixiang Chen
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Qiyang Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Chao Zhou
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
| | - Mengxi Zhang
- Department of Biomedical Engineering, University of California, Davis, CA, USA
| | - Yongfeng Yang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Xin Liu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Hairong Zheng
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| |
Collapse
|
198
|
Mileto A, Guimaraes LS, McCollough CH, Fletcher JG, Yu L. State of the Art in Abdominal CT: The Limits of Iterative Reconstruction Algorithms. Radiology 2019; 293:491-503. [DOI: 10.1148/radiol.2019191422] [Citation(s) in RCA: 77] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Affiliation(s)
- Achille Mileto
- From the Department of Radiology, University of Washington School of Medicine, Seattle, Wash (A.M.); Joint Department of Medical Imaging, Sinai Health System, University of Toronto, Toronto, Ontario, Canada (L.S.G.); and Department of Radiology, Mayo Clinic, 200 First St SW, Rochester, MN 55905 (C.H.M., J.G.F., L.Y.)
| | - Luis S. Guimaraes
- From the Department of Radiology, University of Washington School of Medicine, Seattle, Wash (A.M.); Joint Department of Medical Imaging, Sinai Health System, University of Toronto, Toronto, Ontario, Canada (L.S.G.); and Department of Radiology, Mayo Clinic, 200 First St SW, Rochester, MN 55905 (C.H.M., J.G.F., L.Y.)
| | - Cynthia H. McCollough
- From the Department of Radiology, University of Washington School of Medicine, Seattle, Wash (A.M.); Joint Department of Medical Imaging, Sinai Health System, University of Toronto, Toronto, Ontario, Canada (L.S.G.); and Department of Radiology, Mayo Clinic, 200 First St SW, Rochester, MN 55905 (C.H.M., J.G.F., L.Y.)
| | - Joel G. Fletcher
- From the Department of Radiology, University of Washington School of Medicine, Seattle, Wash (A.M.); Joint Department of Medical Imaging, Sinai Health System, University of Toronto, Toronto, Ontario, Canada (L.S.G.); and Department of Radiology, Mayo Clinic, 200 First St SW, Rochester, MN 55905 (C.H.M., J.G.F., L.Y.)
| | - Lifeng Yu
- From the Department of Radiology, University of Washington School of Medicine, Seattle, Wash (A.M.); Joint Department of Medical Imaging, Sinai Health System, University of Toronto, Toronto, Ontario, Canada (L.S.G.); and Department of Radiology, Mayo Clinic, 200 First St SW, Rochester, MN 55905 (C.H.M., J.G.F., L.Y.)
| |
Collapse
|
199
|
Bastiaannet R, van der Velden S, Lam MGEH, Viergever MA, de Jong HWAM. Fast and accurate quantitative determination of the lung shunt fraction in hepatic radioembolization. Phys Med Biol 2019; 64:235002. [DOI: 10.1088/1361-6560/ab4e49] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
|
200
|
Fluence-map generation for prostate intensity-modulated radiotherapy planning using a deep-neural-network. Sci Rep 2019; 9:15671. [PMID: 31666647 PMCID: PMC6821767 DOI: 10.1038/s41598-019-52262-x] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2019] [Accepted: 10/10/2019] [Indexed: 01/30/2023] Open
Abstract
A deep-neural-network (DNN) was successfully used to predict clinically-acceptable dose distributions from organ contours for intensity-modulated radiotherapy (IMRT). To provide the next step in DNN-based plan automation, we propose a DNN that directly generates beam fluence maps from the organ contours and volumetric dose distributions, without inverse planning. We collected 240 prostate IMRT plans and used them to train a DNN on organ contours and dose distributions. After training, we made 45 synthetic plans (SPs) using the generated fluence maps and compared them with clinical plans (CPs) using various plan quality metrics, including homogeneity and conformity indices for the target and dose constraints for organs at risk, including the rectum, bladder, and bowel. The network was able to generate fluence maps with small errors. The quality of the SPs was comparable to that of the corresponding CPs. The homogeneity index of the target was slightly worse in the SPs, but there was no difference in the conformity index of the target, the V60Gy of the rectum, the V60Gy of the bladder, or the V45Gy of the bowel. The time required to generate fluence maps and the quality of the SPs demonstrate that the proposed method will improve the efficiency of treatment planning and help maintain plan quality.
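For readers unfamiliar with the plan-quality metrics named in this abstract, a hedged sketch of two of them follows. Definitions vary between studies; here the homogeneity index follows ICRU Report 83, HI = (D2% - D98%) / D50%, and V60Gy is simply the fraction of sampled voxels receiving at least 60 Gy. The dose samples are synthetic placeholders, not data from the paper.

```python
import numpy as np

def dose_percentile(doses, pct):
    """D{pct}%: minimum dose received by the hottest pct% of the sampled volume."""
    return float(np.percentile(doses, 100 - pct))

def homogeneity_index(target_doses):
    """ICRU 83 homogeneity index: (D2% - D98%) / D50%; smaller is more homogeneous."""
    d2 = dose_percentile(target_doses, 2)
    d98 = dose_percentile(target_doses, 98)
    d50 = dose_percentile(target_doses, 50)
    return (d2 - d98) / d50

def v_dose(doses, threshold_gy):
    """V{threshold}Gy: fraction of sampled voxels receiving at least threshold_gy."""
    return float(np.mean(doses >= threshold_gy))

rng = np.random.default_rng(3)
target = rng.normal(78.0, 1.0, 10_000)    # toy target dose sample (Gy)
rectum = rng.normal(45.0, 12.0, 10_000)   # toy rectum dose sample (Gy)
hi = homogeneity_index(target)
v60 = v_dose(rectum, 60.0)
print(round(hi, 3), round(v60, 3))
```

In practice these quantities are computed from the full dose-volume histogram of each structure; the sketch above only shows how the definitions act on a dose sample.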
Collapse
|