51. Dual attention enhancement feature fusion network for segmentation and quantitative analysis of paediatric echocardiography. Med Image Anal 2021;71:102042. PMID: 33784600. DOI: 10.1016/j.media.2021.102042.
Abstract
Paediatric echocardiography is a standard method for screening congenital heart disease (CHD). The segmentation of paediatric echocardiography is essential for subsequent extraction of clinical parameters and interventional planning. However, it remains a challenging task due to (1) the considerable variation of key anatomic structures, (2) the poor lateral resolution affecting accurate boundary definition, and (3) the presence of speckle noise and artefacts in echocardiographic images. In this paper, we propose a novel deep network to address these challenges comprehensively. We first present a dual-path feature extraction module (DP-FEM) to extract rich features via a channel attention mechanism. A high- and low-level feature fusion module (HL-FFM) is devised based on spatial attention, which selectively fuses rich semantic information from high-level features with spatial cues from low-level features. In addition, a hybrid loss is designed to deal with pixel-level misalignment and boundary ambiguities. Based on the segmentation results, we derive key clinical parameters for diagnosis and treatment planning. We extensively evaluate the proposed method on 4,485 two-dimensional (2D) paediatric echocardiograms from 127 echocardiographic videos. The proposed method consistently achieves better segmentation performance than other state-of-the-art methods, which demonstrates its feasibility for automatic segmentation and quantitative analysis of paediatric echocardiography. Our code is publicly available at https://github.com/end-of-the-century/Cardiac.
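Note: as an illustration of the channel-attention idea underlying the DP-FEM, the following is a minimal squeeze-and-excitation-style block in PyTorch; the module name, reduction ratio, and tensor sizes are assumptions for illustration, not details from the paper.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention (illustrative sketch)."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: global context per channel
        self.fc = nn.Sequential(                       # excitation: per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                   # reweight feature channels

feats = torch.randn(2, 64, 128, 128)                   # e.g. an echocardiogram feature map
print(ChannelAttention(64)(feats).shape)               # torch.Size([2, 64, 128, 128])
```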
52. He X, Guo BJ, Lei Y, Tian S, Wang T, Curran WJ, Zhang LJ, Liu T, Yang X. Thyroid gland delineation in noncontrast-enhanced CTs using deep convolutional neural networks. Phys Med Biol 2021;66:055007. PMID: 33590826. DOI: 10.1088/1361-6560/abc5a6.
Abstract
The purpose of this study is to develop a deep learning method for thyroid delineation with high accuracy, efficiency, and robustness in noncontrast-enhanced head and neck CTs. The cross-sectional analysis consisted of six tests, including randomized cross-validation and hold-out experiments, comparisons of prediction accuracy between cancerous and benign cases, and cross-gender analysis, to evaluate the performance of the proposed deep-learning-based method. CT images of 1977 patients with suspected thyroid carcinoma were retrospectively investigated. The automatically segmented thyroid gland volume was compared against physician-approved clinical contours using quantitative metrics, the Pearson correlation, and Bland-Altman analysis. Quantitative metrics included the Dice similarity coefficient (DSC), sensitivity, specificity, Jaccard index (JAC), Hausdorff distance (HD), mean surface distance (MSD), residual mean square distance (RMSD), and the center of mass distance (CMD). The robustness of the proposed method was further tested using the nonparametric Kruskal-Wallis test to assess the equality of distribution of DSC values. The proposed method's accuracy remained high through all the tests, with median DSC, JAC, sensitivity, and specificity higher than 0.913, 0.839, 0.856, and 0.979, respectively. The proposed method also resulted in median MSD, RMSD, HD, and CMD of less than 0.31 mm, 0.48 mm, 2.06 mm, and 0.50 mm, respectively. The MSD and RMSD were 0.40 ± 0.29 mm and 0.70 ± 0.46 mm, respectively. Concurrent testing of the proposed method against 3D U-Net and V-Net showed that the proposed method had significantly improved performance. The proposed deep-learning method achieved accurate and robust performance through six cross-sectional analysis tests.
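Note: the overlap metrics reported above can be computed from binary masks as in this minimal sketch (toy masks, not study data).

```python
import numpy as np

def overlap_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """DSC, Jaccard, sensitivity, specificity for binary masks (sketch)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    return {
        "DSC": 2 * tp / (2 * tp + fp + fn),
        "JAC": tp / (tp + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

pred = np.zeros((64, 64), bool); pred[20:40, 20:40] = True   # toy thyroid mask
truth = np.zeros((64, 64), bool); truth[22:42, 22:42] = True
print(overlap_metrics(pred, truth))
```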
Affiliation(s)
- Xiuxiu He
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
53. Bajaj R, Huang X, Kilic Y, Jain A, Ramasamy A, Torii R, Moon J, Koh T, Crake T, Parker MK, Tufaro V, Serruys PW, Pugliese F, Mathur A, Baumbach A, Dijkstra J, Zhang Q, Bourantas CV. A deep learning methodology for the automated detection of end-diastolic frames in intravascular ultrasound images. Int J Cardiovasc Imaging 2021;37:1825-1837. PMID: 33590430. PMCID: PMC8255253. DOI: 10.1007/s10554-021-02162-x.
Abstract
Coronary luminal dimensions change during the cardiac cycle. However, contemporary volumetric intravascular ultrasound (IVUS) analysis is performed on non-gated images, as existing methods to acquire gated or to retrospectively gate IVUS images have failed to dominate in research. We developed a novel deep learning (DL) methodology for end-diastolic frame detection in IVUS and compared its efficacy against expert analysts and a previously established methodology, using electrocardiographic (ECG) estimations as the reference standard. Near-infrared spectroscopy-IVUS (NIRS-IVUS) data were prospectively acquired from 20 coronary arteries and co-registered with the concurrent ECG signal to identify end-diastolic frames. A DL methodology, which takes advantage of changes in intensity of corresponding pixels in consecutive NIRS-IVUS frames and consists of a network model designed in a bidirectional gated-recurrent-unit (Bi-GRU) structure, was trained to detect end-diastolic frames. The efficacy of the DL methodology in identifying end-diastolic frames was compared with two expert analysts and a conventional image-based (CIB) methodology that relies on detecting vessel movement to estimate phases of the cardiac cycle. A window of ±100 ms from the ECG estimations was used to define accurate end-diastolic frame detection. The ECG signal identified 3,167 end-diastolic frames. The mean difference between DL and ECG estimations was 3 ± 112 ms, while the mean differences between the 1st analyst and ECG, the 2nd analyst and ECG, and the CIB methodology and ECG were 86 ± 192 ms, 78 ± 183 ms, and 59 ± 207 ms, respectively. The DL methodology accurately detected 80.4% of end-diastolic frames, while the two analysts and the CIB methodology detected 39.0%, 43.4%, and 42.8%, respectively (P < 0.05). The DL methodology can identify NIRS-IVUS end-diastolic frames accurately and should be preferred over expert analysts and CIB methodologies, which have limited efficacy.
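Note: a minimal sketch of a Bi-GRU frame classifier of the kind described above, assuming per-frame features derived from inter-frame intensity changes; the feature dimensions and scoring head are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class EndDiastoleDetector(nn.Module):
    """Bi-GRU over per-frame features; emits an end-diastole score per frame (sketch)."""
    def __init__(self, feat_dim: int = 128, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:   # (batch, frames, feat_dim)
        out, _ = self.gru(feats)                               # context from both directions
        return torch.sigmoid(self.head(out)).squeeze(-1)      # (batch, frames)

# Toy input: per-frame features from inter-frame intensity differences (assumed preprocessing).
seq = torch.randn(1, 200, 128)
scores = EndDiastoleDetector()(seq)                            # peaks mark candidate ED frames
print(scores.shape)  # torch.Size([1, 200])
```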
Affiliation(s)
- Retesh Bajaj
- Department of Cardiology, Barts Heart Centre, Barts Health NHS Trust, West Smithfield, London, EC1A 7BE, UK; Centre for Cardiovascular Medicine and Devices, William Harvey Research Institute, Queen Mary University of London, London, UK
- Xingru Huang
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK
- Yakup Kilic
- Department of Cardiology, Barts Heart Centre, Barts Health NHS Trust, West Smithfield, London, EC1A 7BE, UK
- Ajay Jain
- Department of Cardiology, Barts Heart Centre, Barts Health NHS Trust, West Smithfield, London, EC1A 7BE, UK
- Anantharaman Ramasamy
- Department of Cardiology, Barts Heart Centre, Barts Health NHS Trust, West Smithfield, London, EC1A 7BE, UK; Centre for Cardiovascular Medicine and Devices, William Harvey Research Institute, Queen Mary University of London, London, UK
- Ryo Torii
- Department of Mechanical Engineering, University College London, London, UK
- James Moon
- Department of Cardiology, Barts Heart Centre, Barts Health NHS Trust, West Smithfield, London, EC1A 7BE, UK; Institute of Cardiovascular Sciences, University College London, London, UK
- Tat Koh
- Department of Cardiology, Barts Heart Centre, Barts Health NHS Trust, West Smithfield, London, EC1A 7BE, UK
- Tom Crake
- Department of Cardiology, Barts Heart Centre, Barts Health NHS Trust, West Smithfield, London, EC1A 7BE, UK
- Maurizio K Parker
- Centre for Cardiovascular Medicine and Devices, William Harvey Research Institute, Queen Mary University of London, London, UK
- Vincenzo Tufaro
- Department of Cardiology, Barts Heart Centre, Barts Health NHS Trust, West Smithfield, London, EC1A 7BE, UK; Centre for Cardiovascular Medicine and Devices, William Harvey Research Institute, Queen Mary University of London, London, UK
- Patrick W Serruys
- Faculty of Medicine, National Heart & Lung Institute, Imperial College London, London, UK
- Francesca Pugliese
- Department of Cardiology, Barts Heart Centre, Barts Health NHS Trust, West Smithfield, London, EC1A 7BE, UK; Centre for Cardiovascular Medicine and Devices, William Harvey Research Institute, Queen Mary University of London, London, UK
- Anthony Mathur
- Department of Cardiology, Barts Heart Centre, Barts Health NHS Trust, West Smithfield, London, EC1A 7BE, UK; Centre for Cardiovascular Medicine and Devices, William Harvey Research Institute, Queen Mary University of London, London, UK
- Andreas Baumbach
- Department of Cardiology, Barts Heart Centre, Barts Health NHS Trust, West Smithfield, London, EC1A 7BE, UK; Centre for Cardiovascular Medicine and Devices, William Harvey Research Institute, Queen Mary University of London, London, UK
- Jouke Dijkstra
- Department of Radiology, Division of Image Processing, Leiden University Medical Center, Leiden, The Netherlands
- Qianni Zhang
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK
- Christos V Bourantas
- Department of Cardiology, Barts Heart Centre, Barts Health NHS Trust, West Smithfield, London, EC1A 7BE, UK; Centre for Cardiovascular Medicine and Devices, William Harvey Research Institute, Queen Mary University of London, London, UK; Institute of Cardiovascular Sciences, University College London, London, UK
54. Xue C, Zhu L, Fu H, Hu X, Li X, Zhang H, Heng PA. Global guidance network for breast lesion segmentation in ultrasound images. Med Image Anal 2021;70:101989. PMID: 33640719. DOI: 10.1016/j.media.2021.101989.
Abstract
Automatic breast lesion segmentation in ultrasound helps to diagnose breast cancer, one of the dreadful diseases that affect women globally. Segmenting breast regions accurately from ultrasound images is a challenging task due to the inherent speckle artifacts, blurry breast lesion boundaries, and inhomogeneous intensity distributions inside the breast lesion regions. Recently, convolutional neural networks (CNNs) have demonstrated remarkable results in medical image segmentation tasks. However, the convolutional operations in a CNN often focus on local regions and have limited capability to capture long-range dependencies in the input ultrasound image, resulting in degraded breast lesion segmentation accuracy. In this paper, we develop a deep convolutional neural network equipped with a global guidance block (GGB) and breast lesion boundary detection (BD) modules to boost breast ultrasound lesion segmentation. The GGB utilizes the multi-layer integrated feature map as guidance information to learn long-range non-local dependencies from both the spatial and channel domains. The BD modules learn an additional breast lesion boundary map to refine the boundary quality of the segmentation result. Experimental results on a public dataset and a collected dataset show that our network outperforms other medical image segmentation methods and recent semantic segmentation methods on breast ultrasound lesion segmentation. Moreover, we also show the application of our network to ultrasound prostate segmentation, in which our method better identifies prostate regions than state-of-the-art networks.
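Note: a minimal sketch of a spatial non-local (self-attention) block of the flavor the GGB builds on; the layer names and sizes are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    """Simplified spatial non-local block capturing long-range dependencies (sketch)."""
    def __init__(self, channels: int):
        super().__init__()
        self.theta = nn.Conv2d(channels, channels // 2, 1)
        self.phi = nn.Conv2d(channels, channels // 2, 1)
        self.g = nn.Conv2d(channels, channels // 2, 1)
        self.out = nn.Conv2d(channels // 2, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).view(b, -1, h * w).transpose(1, 2)   # (b, hw, c/2)
        k = self.phi(x).view(b, -1, h * w)                     # (b, c/2, hw)
        v = self.g(x).view(b, -1, h * w).transpose(1, 2)       # (b, hw, c/2)
        attn = torch.softmax(q @ k, dim=-1)                    # pairwise affinities (b, hw, hw)
        y = (attn @ v).transpose(1, 2).view(b, -1, h, w)
        return x + self.out(y)                                 # residual connection

x = torch.randn(1, 32, 24, 24)
print(NonLocalBlock(32)(x).shape)  # torch.Size([1, 32, 24, 24])
```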
Affiliation(s)
- Cheng Xue
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Lei Zhu
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, UK
- Huazhu Fu
- Inception Institute of Artificial Intelligence, Abu Dhabi, UAE
- Xiaowei Hu
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Xiaomeng Li
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Hai Zhang
- Shenzhen People's Hospital, The Second Clinical College of Jinan University, The First Affiliated Hospital of Southern University of Science and Technology, Guangdong Province, China
- Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong; Shenzhen Key Laboratory of Virtual Reality and Human Interaction Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China
55. Shin Y, Yang J, Lee YH, Kim S. Artificial intelligence in musculoskeletal ultrasound imaging. Ultrasonography 2021;40:30-44. PMID: 33242932. PMCID: PMC7758096. DOI: 10.14366/usg.20080.
Abstract
Ultrasonography (US) is noninvasive and offers real-time, low-cost, and portable imaging that facilitates the rapid and dynamic assessment of musculoskeletal components. Significant technological improvements have contributed to the increasing adoption of US for musculoskeletal assessments, as artificial intelligence (AI)-based computer-aided detection and computer-aided diagnosis are being utilized to improve the quality, efficiency, and cost of US imaging. This review provides an overview of classical machine learning techniques and modern deep learning approaches for musculoskeletal US, with a focus on the key categories of detection and diagnosis of musculoskeletal disorders, predictive analysis with classification and regression, and automated image segmentation. Moreover, we outline challenges and a range of opportunities for AI in musculoskeletal US practice.
Affiliation(s)
- YiRang Shin
- Department of Radiology, Research Institute of Radiological Science, and Center for Clinical Imaging Data Science (CCIDS), Yonsei University College of Medicine, Seoul, Korea
- Jaemoon Yang
- Department of Radiology, Research Institute of Radiological Science, and Center for Clinical Imaging Data Science (CCIDS), Yonsei University College of Medicine, Seoul, Korea
- Systems Molecular Radiology at Yonsei (SysMolRaY), Seoul, Korea
- Severance Biomedical Science Institute (SBSI), Yonsei University College of Medicine, Seoul, Korea
- Young Han Lee
- Department of Radiology, Research Institute of Radiological Science, and Center for Clinical Imaging Data Science (CCIDS), Yonsei University College of Medicine, Seoul, Korea
- Sungjun Kim
- Department of Radiology, Research Institute of Radiological Science, and Center for Clinical Imaging Data Science (CCIDS), Yonsei University College of Medicine, Seoul, Korea
56. Automatic quantification of myocardium and pericardial fat from coronary computed tomography angiography: a multicenter study. Eur Radiol 2020;31:3826-3836. PMID: 33206226. DOI: 10.1007/s00330-020-07482-5.
Abstract
OBJECTIVES To develop a deep learning-based method for simultaneous myocardium and pericardial fat quantification from coronary computed tomography angiography (CCTA) for the diagnosis and treatment of cardiovascular disease (CVD). METHODS We retrospectively identified CCTA data obtained between May 2008 and July 2018 in a multicenter (six centers) CVD study. The proposed method was evaluated on 422 patients' data in two studies. The first, overall study involved training the model on CVD patients and testing on non-CVD patients, as well as training on non-CVD patients and testing on CVD patients. The second study was performed using the leave-center-out approach. The method's performance was evaluated using the Dice similarity coefficient (DSC), Jaccard index (JAC), 95% Hausdorff distance (HD95), mean surface distance (MSD), residual mean square distance (RMSD), and the center of mass distance (CMD). The robustness of the proposed method was tested using the nonparametric Kruskal-Wallis test and post hoc test to assess the equality of distribution of DSC values among different tests. RESULTS The automatic segmentation achieved a strong correlation with the manual contours (ICC and R > 0.97, p value < 0.001 throughout all tests). The accuracy of the proposed method remained high through all the tests, with the median DSC higher than 0.88 for pericardial fat and 0.96 for myocardium. The proposed method also resulted in mean MSD, RMSD, HD95, and CMD of less than 1.36 mm for pericardial fat and 1.00 mm for myocardium. CONCLUSIONS The proposed deep learning-based segmentation method enables accurate simultaneous quantification of myocardium and pericardial fat in a multicenter study. KEY POINTS • Deep learning-based myocardium and pericardial fat segmentation method tested on 422 patients' coronary computed tomography angiography in a multicenter study. • The proposed method provides segmentations with high volumetric accuracy (ICC and R > 0.97, p value < 0.001) and similar shape to manual annotation by experienced radiologists (median Dice similarity coefficient ≥ 0.88 for pericardial fat and 0.96 for myocardium).
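Note: the Kruskal-Wallis robustness check described above can be run with SciPy as in this sketch (placeholder DSC values, not study data).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-test DSC distributions (placeholder values, not study data).
dsc_test1 = rng.normal(0.96, 0.010, 50)
dsc_test2 = rng.normal(0.96, 0.012, 50)
dsc_test3 = rng.normal(0.955, 0.011, 50)

h, p = stats.kruskal(dsc_test1, dsc_test2, dsc_test3)
print(f"H = {h:.2f}, p = {p:.3f}")   # p > 0.05 suggests the DSC distributions are comparable
```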
57. Xia M, Yan W, Huang Y, Guo Y, Zhou G, Wang Y. Extracting membrane borders in IVUS images using a multi-scale feature aggregated U-Net. Annu Int Conf IEEE Eng Med Biol Soc 2020;2020:1650-1653. PMID: 33018312. DOI: 10.1109/embc44109.2020.9175970.
Abstract
Automatic extraction of the lumen-intima border (LIB) and the media-adventitia border (MAB) in intravascular ultrasound (IVUS) images is of high clinical interest. Despite the superior performance achieved by deep neural networks (DNNs) on various medical image segmentation tasks, there are few applications to IVUS images. The complicated pathological presentation and the lack of sufficient annotation in IVUS datasets make the learning process challenging. Several existing networks designed for IVUS segmentation train two groups of weights to detect the MAB and LIB separately. In this paper, we propose a multi-scale feature aggregated U-Net (MFAU-Net) to extract the two membrane borders simultaneously. The MFAU-Net integrates multi-scale inputs, deep supervision, and a bi-directional convolutional long short-term memory (BConvLSTM) unit. It is designed to sufficiently learn features from complicated IVUS images using a small number of training samples. Trained and tested on the publicly available IVUS datasets, the MFAU-Net achieves a 0.90 Jaccard measure (JM) for both MAB and LIB detection on the 20 MHz dataset. The corresponding metrics on the 40 MHz dataset are 0.85 and 0.84 JM, respectively. Comparative evaluations with state-of-the-art published results demonstrate the competitiveness of the proposed MFAU-Net.
58. Liu Y, Lei Y, Fu Y, Wang T, Zhou J, Jiang X, McDonald M, Beitler JJ, Curran WJ, Liu T, Yang X. Head and neck multi-organ auto-segmentation on CT images aided by synthetic MRI. Med Phys 2020;47:4294-4302. PMID: 32648602. PMCID: PMC11696540. DOI: 10.1002/mp.14378.
Abstract
PURPOSE Because the manual contouring process is labor-intensive and time-consuming, segmentation of organs-at-risk (OARs) is a weak link in the radiotherapy treatment planning process. Our goal was to develop a synthetic MR (sMR)-aided dual pyramid network (DPN) for rapid and accurate head and neck multi-organ segmentation in order to expedite the treatment planning process. METHODS Forty-five patients' CT images, MR images, and manual contours were included as our training dataset. Nineteen OARs were the target organs to be segmented. The proposed sMR-aided DPN method featured a deep attention strategy to effectively segment multiple organs. The performance of the sMR-aided DPN method was evaluated using five metrics: Dice similarity coefficient (DSC), Hausdorff distance 95% (HD95), mean surface distance (MSD), residual mean square distance (RMSD), and volume difference. Our method was further validated using the 2015 head and neck challenge data. RESULTS The contours generated by the proposed method closely resemble the ground-truth manual contours, as evidenced by encouraging quantitative results in terms of DSC on the 2015 head and neck challenge data. Mean DSC values of 0.91 ± 0.02, 0.73 ± 0.11, 0.96 ± 0.01, 0.78 ± 0.09/0.78 ± 0.11, 0.88 ± 0.04/0.88 ± 0.06, and 0.86 ± 0.08/0.85 ± 0.1 were achieved for the brain stem, chiasm, mandible, left/right optic nerve, left/right parotid, and left/right submandibular gland, respectively. CONCLUSIONS We demonstrated the feasibility of the sMR-aided DPN for head and neck multi-organ delineation on CT images. Our method outperformed the other methods on the 2015 head and neck challenge data. The proposed method could significantly expedite the treatment planning process by rapidly segmenting multiple OARs.
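Note: the 95% Hausdorff distance (HD95) metric used above can be computed from contour point sets as in this sketch (toy point sets, not study data).

```python
import numpy as np
from scipy.spatial.distance import cdist

def hd95(a_pts: np.ndarray, b_pts: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between surface point sets (sketch)."""
    d = cdist(a_pts, b_pts)                      # pairwise Euclidean distances
    a_to_b = d.min(axis=1)                       # each point in A to nearest point in B
    b_to_a = d.min(axis=0)
    return float(np.percentile(np.hstack([a_to_b, b_to_a]), 95))

# Toy surfaces (in mm); real use would extract contour voxels from the masks.
a = np.random.rand(200, 3) * 50
b = a + np.random.normal(0, 0.5, a.shape)
print(f"HD95 = {hd95(a, b):.2f} mm")
```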
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Jun Zhou
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiaojun Jiang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Mark McDonald
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Jonathan J. Beitler
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Walter J. Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
59. Shen CC, Yang JE. Estimation of ultrasound echogenicity map from B-mode images using convolutional neural network. Sensors (Basel) 2020;20:4931. PMID: 32878199. PMCID: PMC7506733. DOI: 10.3390/s20174931.
Abstract
In ultrasound B-mode imaging, speckle noise decreases the accuracy of estimating the tissue echogenicity of imaged targets from the amplitude of the echo signals. In addition, since the granular size of the speckle pattern is affected by the point spread function (PSF) of the imaging system, the resolution of the B-mode image remains limited, and the boundaries of tissue structures often become blurred. This study proposed a convolutional neural network (CNN) to remove speckle noise and improve spatial resolution in order to reconstruct the ultrasound tissue echogenicity map. The CNN model is trained using an in silico simulation dataset and tested with experimentally acquired images. Results indicate that the proposed CNN method can effectively eliminate the speckle noise in the background of the B-mode images while retaining the contours and edges of the tissue structures. The contrast and the contrast-to-noise ratio of the reconstructed echogenicity map increased from 0.22/2.72 to 0.33/44.14, and the lateral and axial resolutions also improved from 5.9/2.4 to 2.9/2.0, respectively. Compared with other post-processing filtering methods, the proposed CNN method provides a better approximation to the original tissue echogenicity by completely removing speckle noise and improving the image resolution, together with the capability for real-time implementation.
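Note: a sketch of one common way to compute the contrast and contrast-to-noise ratio reported above; the paper's exact ROI choices and definitions are not given here, so these formulas are assumptions.

```python
import numpy as np

def contrast_and_cnr(img: np.ndarray, lesion: np.ndarray, bg: np.ndarray):
    """Contrast and contrast-to-noise ratio from ROI masks (one common definition)."""
    mu_l, mu_b = img[lesion].mean(), img[bg].mean()
    sigma_b = img[bg].std()
    contrast = abs(mu_l - mu_b) / (mu_l + mu_b)   # Michelson-style contrast (assumed)
    cnr = abs(mu_l - mu_b) / sigma_b
    return contrast, cnr

img = np.random.normal(0.5, 0.05, (128, 128))
img[40:70, 40:70] += 0.3                          # bright inclusion
lesion = np.zeros_like(img, bool); lesion[45:65, 45:65] = True
bg = np.zeros_like(img, bool); bg[:30, :30] = True
print(contrast_and_cnr(img, lesion, bg))
```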
60. Zhang E, Seiler S, Chen M, Lu W, Gu X. BIRADS features-oriented semi-supervised deep learning for breast ultrasound computer-aided diagnosis. Phys Med Biol 2020;65:125005. PMID: 32155605. DOI: 10.1088/1361-6560/ab7e7d.
Abstract
We propose a novel BIRADS-SSDL network that integrates clinically approved breast lesion characteristics (BIRADS features) into task-oriented semi-supervised deep learning (SSDL) for accurate diagnosis of ultrasound (US) images with a small training dataset. Breast US images are converted to BIRADS-oriented feature maps (BFMs) using a distance transformation coupled with a Gaussian filter. The converted BFMs are then used as the input of an SSDL network, which performs unsupervised stacked convolutional auto-encoder (SCAE) image reconstruction guided by lesion classification. This integrated multi-task learning allows the SCAE to extract image features under the constraints of the lesion classification task, while the lesion classification is achieved by feeding the SCAE encoder features into a convolutional network. We trained the BIRADS-SSDL network with an alternating learning strategy that balances the reconstruction error and the classification label prediction error. To demonstrate the effectiveness of our approach, we evaluated it using two breast US image datasets. We compared the performance of the BIRADS-SSDL network with conventional SCAE and SSDL methods that use the original images as inputs, as well as with an SCAE that uses BFMs as inputs. The experimental results on the two breast US datasets show that BIRADS-SSDL ranked best among the four networks, with classification accuracies of around 94.23 ± 3.33% and 84.38 ± 3.11% on the two datasets. In experiments across two datasets collected from two different institutions and US devices, the developed BIRADS-SSDL generalized across the different US devices and institutions without overfitting to a single dataset and achieved satisfactory results. Furthermore, we investigated the performance of the proposed method while varying the model training strategies, lesion boundary accuracy, and Gaussian filter parameters. The experimental results showed that a pre-training strategy can help to speed up model convergence during training but does not improve the classification accuracy on the testing dataset. The classification accuracy decreases as the segmentation accuracy decreases. The proposed BIRADS-SSDL achieves the best results among the compared methods in each case and can handle multiple different datasets under one model. Compared with state-of-the-art methods, BIRADS-SSDL could be promising for effective breast US computer-aided diagnosis using small datasets.
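Note: a guess at the flavor of the BFM conversion described above (distance transformation coupled with a Gaussian filter), sketched with SciPy; this is not the paper's exact recipe.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter

def bfm_like_map(lesion_mask: np.ndarray, sigma: float = 3.0) -> np.ndarray:
    """Boundary-emphasizing feature map: distance transform + Gaussian smoothing.
    A guess at the flavor of the paper's BFM conversion, not its exact recipe."""
    inside = distance_transform_edt(lesion_mask)       # distance to boundary, inside
    outside = distance_transform_edt(~lesion_mask)     # distance to boundary, outside
    signed = inside - outside                          # signed distance field
    return gaussian_filter(signed.astype(np.float32), sigma)

mask = np.zeros((128, 128), bool)
mask[40:90, 50:100] = True                             # toy lesion region
print(bfm_like_map(mask).shape)                        # (128, 128)
```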
Affiliation(s)
- Erlei Zhang
- College of Information Science and Technology, Northwest University, Xi'an 710069, People's Republic of China; Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
61. He X, Guo BJ, Lei Y, Wang T, Fu Y, Curran WJ, Zhang LJ, Liu T, Yang X. Automatic segmentation and quantification of epicardial adipose tissue from coronary computed tomography angiography. Phys Med Biol 2020;65:095012. PMID: 32182595. DOI: 10.1088/1361-6560/ab8077.
Abstract
Epicardial adipose tissue (EAT) is a visceral fat deposit known for its association with factors such as obesity, diabetes mellitus, age, and hypertension. Fast and reproducible segmentation of the EAT is important for interpreting its role as an independent risk marker. However, EAT has a variable distribution, and various diseases may affect its volume, which increases the complexity of the already time-consuming manual segmentation work. We propose a 3D deep attention U-Net method to automatically segment the EAT from coronary computed tomography angiography (CCTA). Five-fold cross-validation and hold-out experiments were used to evaluate the proposed method through a retrospective investigation of 200 patients. The automatically segmented EAT volume was compared with physician-approved clinical contours. Quantitative metrics used were the Dice similarity coefficient (DSC), sensitivity, specificity, Jaccard index (JAC), Hausdorff distance (HD), mean surface distance (MSD), residual mean square distance (RMSD), and the center of mass distance (CMD). For cross-validation, the median DSC, sensitivity, and specificity were 92.7%, 91.1%, and 95.1%, respectively, with JAC, HD, CMD, MSD, and RMSD of 82.9% ± 8.8%, 3.77 ± 1.86 mm, 1.98 ± 1.50 mm, 0.37 ± 0.24 mm, and 0.65 ± 0.37 mm, respectively. For the hold-out test, the accuracy of the proposed method remained high. We developed a novel deep learning-based approach for the automated segmentation of the EAT on CCTA images. We demonstrated the high accuracy of the proposed learning-based segmentation method through comparison with ground-truth contours of 200 clinical patient cases using eight quantitative metrics, Pearson correlation, and Bland-Altman analysis. Our automatic EAT segmentation results show the potential of the proposed method to be used in computer-aided diagnosis of coronary artery disease (CAD) in clinical settings.
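Note: a minimal sketch of an additive attention gate of the kind used in attention U-Nets (after Oktay et al.); for simplicity the gating signal is assumed to be pre-upsampled to the skip connection's size, which is an illustrative simplification.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate for skip connections, after Oktay et al. (sketch)."""
    def __init__(self, skip_ch: int, gate_ch: int, inter_ch: int):
        super().__init__()
        self.w_x = nn.Conv3d(skip_ch, inter_ch, 1)
        self.w_g = nn.Conv3d(gate_ch, inter_ch, 1)
        self.psi = nn.Sequential(nn.Conv3d(inter_ch, 1, 1), nn.Sigmoid())

    def forward(self, x, g):                  # x: skip features, g: gating signal
        a = self.psi(torch.relu(self.w_x(x) + self.w_g(g)))   # attention coefficients
        return x * a                          # suppress irrelevant regions

x = torch.randn(1, 32, 16, 32, 32)            # skip connection features
g = torch.randn(1, 64, 16, 32, 32)            # decoder features, assumed pre-upsampled
print(AttentionGate(32, 64, 16)(x, g).shape)  # torch.Size([1, 32, 16, 32, 32])
```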
Affiliation(s)
- Xiuxiu He
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America. Co-first author
62. Kretz T, Mueller KR, Schaeffter T, Elster C. Mammography image quality assurance using deep learning. IEEE Trans Biomed Eng 2020;67:3317-3326. PMID: 32305886. DOI: 10.1109/tbme.2020.2983539.
Abstract
OBJECTIVE According to the European Reference Organization for Quality Assured Breast Cancer Screening and Diagnostic Services (EUREF), image quality in mammography is assessed by recording and analyzing a set of images of the CDMAM phantom. The EUREF procedure applies an automated analysis combining image registration, signal detection, and nonlinear fitting. We present a proof of concept for an end-to-end deep learning framework that assesses image quality on the basis of single images as an alternative. METHODS Virtual mammography is used to generate a database with known ground truth for training a regression convolutional neural network (CNN). Training is carried out by continuously extending the training data and applying transfer learning. RESULTS The trained net is shown to correctly predict the image quality of simulated and real images. Specifically, image quality predictions on the basis of single images are of similar quality to those obtained by applying the EUREF procedure with 16 images. Our results suggest that the trained CNN generalizes well. CONCLUSION Mammography image quality assessment can benefit from the proposed deep learning approach. SIGNIFICANCE Deep learning avoids cumbersome pre-processing and allows mammography image quality to be estimated reliably from single images.
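Note: a generic transfer-learning regression setup of the kind described above, sketched in PyTorch; the backbone, head, and input handling are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained backbone with a single-output regression head for an image-quality score.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)       # regress a scalar IQ value

# CDMAM-style phantom crops are grayscale; replicate to 3 channels for the backbone.
img = torch.randn(4, 1, 224, 224).repeat(1, 3, 1, 1)
pred_iq = backbone(img)                                    # (4, 1) predicted quality scores
loss = nn.MSELoss()(pred_iq, torch.rand(4, 1))             # fit against simulated ground truth
loss.backward()
```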
63. Lei Y, Fu Y, Wang T, Liu Y, Patel P, Curran WJ, Liu T, Yang X. 4D-CT deformable image registration using multiscale unsupervised deep learning. Phys Med Biol 2020;65:085003. PMID: 32097902. PMCID: PMC7775640. DOI: 10.1088/1361-6560/ab79c4.
Abstract
Deformable image registration (DIR) of 4D-CT images is important in multiple radiation therapy applications, including motion tracking of soft tissue or fiducial markers, target definition, image fusion, dose accumulation, and treatment response evaluation. It is very challenging to accurately and quickly register abdominal 4D-CT images due to their large appearance variation and bulky size. In this study, we proposed an accurate and fast multi-scale DIR network (MS-DIRNet) for abdominal 4D-CT registration. MS-DIRNet consists of a global network (GlobalNet) and a local network (LocalNet). GlobalNet was trained using down-sampled whole image volumes, while LocalNet was trained using sampled image patches. Each network consists of a generator and a discriminator. The generator was trained to directly predict a deformation vector field (DVF) from the moving and target images, and was implemented using convolutional neural networks with multiple attention gates. The discriminator was trained to differentiate the deformed images from the target images to provide additional DVF regularization. The loss function of MS-DIRNet includes three parts: an image similarity loss, an adversarial loss, and a DVF regularization loss. MS-DIRNet was trained in a completely unsupervised manner, meaning that ground-truth DVFs are not needed. Unlike traditional DIR methods that calculate the DVF iteratively, MS-DIRNet calculates the final DVF in a single forward prediction, which could significantly expedite the DIR process. MS-DIRNet was trained and tested on 25 patients' 4D-CT datasets using five-fold cross-validation. For registration accuracy evaluation, the target registration errors (TREs) of MS-DIRNet were compared to those of clinically used software. Our results showed that MS-DIRNet, with an average TRE of 1.2 ± 0.8 mm, outperformed the commercial software, with an average TRE of 2.5 ± 0.8 mm, in 4D-CT abdominal DIR, demonstrating the superior performance of our method in fiducial marker tracking and overall soft tissue alignment.
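Note: a minimal 2D sketch of the unsupervised registration objective described above (image similarity plus DVF smoothness, with warping via a spatial transformer); the paper works in 3D with adversarial and other loss terms, so this is a simplified illustration.

```python
import torch
import torch.nn.functional as F

def warp(moving: torch.Tensor, dvf: torch.Tensor) -> torch.Tensor:
    """Warp a 2D image with a dense displacement field in normalized coordinates (sketch)."""
    n, _, h, w = moving.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w),
                            indexing="ij")
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    grid = base + dvf.permute(0, 2, 3, 1)           # dvf: (n, 2, h, w), normalized units
    return F.grid_sample(moving, grid, align_corners=True)

def unsupervised_loss(moving, target, dvf, lam: float = 0.01):
    """Image similarity + DVF smoothness; no ground-truth DVF needed."""
    sim = F.mse_loss(warp(moving, dvf), target)
    smooth = (dvf[..., 1:, :] - dvf[..., :-1, :]).abs().mean() + \
             (dvf[..., 1:] - dvf[..., :-1]).abs().mean()   # finite-difference penalty
    return sim + lam * smooth

m, t = torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)
dvf = torch.zeros(1, 2, 64, 64, requires_grad=True)
unsupervised_loss(m, t, dvf).backward()                    # gradients flow to the DVF
```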
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA, 30322
64. Jun Guo B, He X, Lei Y, Harms J, Wang T, Curran WJ, Liu T, Jiang Zhang L, Yang X. Automated left ventricular myocardium segmentation using 3D deeply supervised attention U-net for coronary computed tomography angiography. Med Phys 2020;47:1775-1785. DOI: 10.1002/mp.14066.
Affiliation(s)
- Bang Jun Guo
- Department of Medical Imaging, Jinling Hospital, The First School of Clinical Medicine, Southern Medical University, Nanjing 210002, China
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiuxiu He
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Joseph Harms
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Walter J. Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Long Jiang Zhang
- Department of Medical Imaging, Jinling Hospital, The First School of Clinical Medicine, Southern Medical University, Nanjing 210002, China
- Department of Medical Imaging, Jinling Hospital, Medical School of Nanjing University, Nanjing 210002, China
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
65. Lei Y, Wang T, Tian S, Dong X, Jani AB, Schuster D, Curran WJ, Patel P, Liu T, Yang X. Male pelvic multi-organ segmentation aided by CBCT-based synthetic MRI. Phys Med Biol 2020;65:035013. PMID: 31851956. DOI: 10.1088/1361-6560/ab63bb.
Abstract
To develop an automated cone-beam computed tomography (CBCT) multi-organ segmentation method for a potential CBCT-guided adaptive radiation therapy workflow. The proposed method combines a deep learning-based image synthesis method, which generates magnetic resonance images (MRIs) with superior soft-tissue contrast from on-board setup CBCT images to aid CBCT segmentation, with a deep attention strategy, which focuses on learning discriminative features for differentiating organ margins. The whole segmentation method consists of three major steps. First, a cycle-consistent adversarial network (CycleGAN) was used to estimate a synthetic MRI (sMRI) from CBCT images. Second, a deep attention network was trained based on the sMRI and its corresponding manual contours. Third, the segmented contours for a query patient were obtained by feeding the patient's CBCT images into the trained sMRI estimation and segmentation models. In our retrospective study, we included 100 prostate cancer patients, each of whom had CBCT acquired with prostate, bladder, and rectum contoured by physicians with MRI guidance as the ground truth. We trained and tested our model with separate datasets among these patients. The resulting segmentations were compared with the physicians' manual contours. The Dice similarity coefficient and mean surface distance indices between our segmented contours and the physicians' manual contours (bladder, prostate, and rectum) were 0.95 ± 0.02, 0.44 ± 0.22 mm; 0.86 ± 0.06, 0.73 ± 0.37 mm; and 0.91 ± 0.04, 0.72 ± 0.65 mm, respectively. We have proposed a novel CBCT-only pelvic multi-organ segmentation strategy using CBCT-based sMRI and validated its accuracy against manual contours. This technique could provide accurate organ volumes for treatment planning without requiring MRI acquisition, greatly facilitating the routine clinical workflow.
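Note: the cycle-consistency constraint at the heart of the CycleGAN step can be sketched as below; real CycleGANs add adversarial losses and much deeper generators, so the toy generators here are placeholders.

```python
import torch
import torch.nn as nn

# Toy generators standing in for CBCT->sMRI (G) and sMRI->CBCT (F) mappings.
G = nn.Conv2d(1, 1, 3, padding=1)
F_ = nn.Conv2d(1, 1, 3, padding=1)
l1 = nn.L1Loss()

cbct = torch.randn(2, 1, 64, 64)
mri = torch.randn(2, 1, 64, 64)

# Cycle consistency: translating to the other domain and back should reproduce the input.
cycle_loss = l1(F_(G(cbct)), cbct) + l1(G(F_(mri)), mri)
cycle_loss.backward()
print(float(cycle_loss))
```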
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America. Co-first author
66. Lei Y, Dong X, Tian Z, Liu Y, Tian S, Wang T, Jiang X, Patel P, Jani AB, Mao H, Curran WJ, Liu T, Yang X. CT prostate segmentation based on synthetic MRI-aided deep attention fully convolution network. Med Phys 2020;47:530-540. PMID: 31745995. PMCID: PMC7764436. DOI: 10.1002/mp.13933.
Abstract
PURPOSE Accurate segmentation of the prostate on computed tomography (CT) for treatment planning is challenging due to CT's poor soft-tissue contrast. Magnetic resonance imaging (MRI) has been used to aid prostate delineation, but its final accuracy is limited by MRI-CT registration errors. We developed a deep attention-based segmentation strategy on CT-based synthetic MRI (sMRI) to address the CT prostate delineation challenge without MRI acquisition. METHODS AND MATERIALS We developed a prostate segmentation strategy that employs an sMRI-aided deep attention network to accurately segment the prostate on CT. Our method consists of three major steps. First, a cycle generative adversarial network was used to estimate an sMRI from CT images. Second, a deep attention fully convolution network was trained based on the sMRI and the prostate contours deformed from MRIs. Attention models were introduced to pay more attention to the prostate boundary. The prostate contour for a query patient was obtained by feeding the patient's CT images into the trained sMRI generation and segmentation models. RESULTS The segmentation technique was validated with a clinical study of 49 patients by leave-one-out experiments and validated with an additional 50 patients by a hold-out test. The Dice similarity coefficient, Hausdorff distance, and mean surface distance indices between our segmented and deformed MRI-defined prostate manual contours were 0.92 ± 0.09, 4.38 ± 4.66 mm, and 0.62 ± 0.89 mm, respectively, with leave-one-out experiments, and were 0.91 ± 0.07, 4.57 ± 3.03 mm, and 0.62 ± 0.65 mm, respectively, with the hold-out test. CONCLUSIONS We have proposed a novel CT-only prostate segmentation strategy using CT-based sMRI and validated its accuracy against prostate contours that were manually drawn on MRI images and deformed to CT images. This technique could provide accurate prostate volumes for treatment planning without requiring MRI acquisition, greatly facilitating the routine clinical workflow.
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Xue Dong
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Zhen Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Yingzi Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Sibo Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Xiaojun Jiang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Ashesh B Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Hui Mao
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
67. Dong X, Wang T, Lei Y, Higgins K, Liu T, Curran WJ, Mao H, Nye JA, Yang X. Synthetic CT generation from non-attenuation corrected PET images for whole-body PET imaging. Phys Med Biol 2019;64:215016. PMID: 31622962. DOI: 10.1088/1361-6560/ab4eb7.
Abstract
Attenuation correction (AC) of PET/MRI faces challenges including inter-scan motion, image artifacts such as truncation and distortion, and erroneous transformation of structural voxel intensities to PET mu-map values. We propose a deep learning-based method to derive synthetic CT (sCT) images from non-attenuation-corrected PET (NAC PET) images for AC in whole-body PET/MRI imaging. A 3D cycle-consistent generative adversarial network (CycleGAN) framework was employed to synthesize CT images from NAC PET. The method learns a transformation that minimizes the difference between the sCT, generated from NAC PET, and the true CT. It also learns an inverse transformation such that the cycle NAC PET image generated from the sCT is close to the true NAC PET image. A self-attention strategy was also utilized to identify the most informative components and mitigate the disturbance of noise. We conducted a retrospective study on a total of 119 sets of whole-body PET/CT, with 80 sets for training and 39 sets for testing and evaluation. The whole-body sCT images generated with the proposed method demonstrate great resemblance to the true CT images, and show good contrast on soft tissue, lung, and bony tissues. The mean absolute error (MAE) of the sCT over the true CT is less than 110 HU. Using the sCT for whole-body PET AC, the mean error of PET quantification is less than 1% and the normalized mean square error (NMSE) is less than 1.4%. The average normalized cross-correlation over the whole body is close to one, and the PSNR is larger than 42 dB. We proposed a deep learning-based approach to generate sCT from whole-body NAC PET for PET AC. The sCT generated with the proposed method shows great similarity to the true CT images both qualitatively and quantitatively, and demonstrates great potential for whole-body PET AC in the absence of structural information.
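Note: the MAE, NMSE, and PSNR figures quoted above correspond to standard image-comparison metrics; a sketch follows (toy volumes, and the PSNR data range is an assumption).

```python
import numpy as np

def image_metrics(pred: np.ndarray, truth: np.ndarray, data_range: float):
    """MAE, NMSE, and PSNR between a synthetic and a reference image (sketch)."""
    mae = np.abs(pred - truth).mean()
    nmse = ((pred - truth) ** 2).sum() / (truth ** 2).sum()
    mse = ((pred - truth) ** 2).mean()
    psnr = 10 * np.log10(data_range ** 2 / mse)
    return mae, nmse, psnr

# Toy CT-like volumes in HU; the data_range used for PSNR is an assumption.
truth = np.random.uniform(-1000, 1000, (32, 64, 64))
pred = truth + np.random.normal(0, 40, truth.shape)
print(image_metrics(pred, truth, data_range=2000.0))
```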
Affiliation(s)
- Xue Dong
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America. Co-author
68. Dong X, Lei Y, Tian S, Wang T, Patel P, Curran WJ, Jani AB, Liu T, Yang X. Synthetic MRI-aided multi-organ segmentation on male pelvic CT using cycle consistent deep attention network. Radiother Oncol 2019;141:192-199. PMID: 31630868. DOI: 10.1016/j.radonc.2019.09.028.
Abstract
BACKGROUND AND PURPOSE Manual contouring is labor-intensive and subject to variations in operator knowledge, experience, and technique. This work aims to develop an automated computed tomography (CT) multi-organ segmentation method for prostate cancer treatment planning. METHODS AND MATERIALS The proposed method exploits the superior soft-tissue information provided by synthetic MRI (sMRI) to aid multi-organ segmentation on pelvic CT images. A cycle generative adversarial network (CycleGAN) was used to estimate sMRIs from CT images. A deep attention U-Net (DAUnet) was trained on the sMRI and corresponding multi-organ contours for auto-segmentation. The deep attention strategy was introduced to identify the most relevant features for differentiating the organs. Deep supervision was incorporated into the DAUnet to enhance the features' discriminative ability. Segmented contours of a patient were obtained by feeding the CT image into the trained CycleGAN to generate the sMRI, which was then fed to the trained DAUnet to generate the organ contours. We trained and evaluated our model with 140 datasets from prostate patients. RESULTS The Dice similarity coefficient and mean surface distance between our segmented contours and the manual contours were 0.95 ± 0.03 and 0.52 ± 0.22 mm for the bladder; 0.87 ± 0.04 and 0.93 ± 0.51 mm for the prostate; and 0.89 ± 0.04 and 0.92 ± 1.03 mm for the rectum. CONCLUSION We proposed an sMRI-aided multi-organ automatic segmentation method for pelvic CT images. By integrating deep attention and deep supervision strategies, the proposed network provides accurate and consistent prostate, bladder, and rectum segmentation, and has the potential to facilitate routine prostate-cancer radiotherapy treatment planning.
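Note: a minimal sketch of a deep-supervision loss of the kind incorporated into the DAUnet, where side outputs at several decoder depths are supervised jointly; the weights and loss choice are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def deep_supervision_loss(side_outputs, target, weights=(0.25, 0.5, 1.0)):
    """Weighted sum of losses on multi-depth decoder outputs (sketch).
    Coarse outputs are upsampled to the target size before comparison."""
    total = 0.0
    for w, out in zip(weights, side_outputs):
        out = F.interpolate(out, size=target.shape[2:], mode="bilinear",
                            align_corners=False)
        total = total + w * F.binary_cross_entropy_with_logits(out, target)
    return total

target = (torch.rand(1, 1, 64, 64) > 0.5).float()
sides = [torch.randn(1, 1, s, s) for s in (16, 32, 64)]   # decoder side outputs
print(float(deep_supervision_loss(sides, target)))
```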
Affiliation(s)
- Xue Dong
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
- Sibo Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
- Ashesh B Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
69. Wang T, Lei Y, Tian Z, Dong X, Liu Y, Jiang X, Curran WJ, Liu T, Shu HK, Yang X. Deep learning-based image quality improvement for low-dose computed tomography simulation in radiation therapy. J Med Imaging (Bellingham) 2019;6:043504. PMID: 31673567. PMCID: PMC6811730. DOI: 10.1117/1.jmi.6.4.043504.
Abstract
Low-dose computed tomography (CT) is desirable for treatment planning and simulation in radiation therapy. Multiple rescans and replans during the treatment course, at a smaller total dose than a single conventional full-dose CT simulation, are a crucial step in adaptive radiation therapy. We developed a machine learning-based method to improve the image quality of low-dose CT for radiation therapy treatment simulation. We used a residual block concept and a self-attention strategy within a cycle-consistent adversarial network framework. A fully convolutional neural network with residual blocks and attention gates (AGs) was used in the generator to enable end-to-end transformation. We collected CT images from 30 patients treated with frameless brain stereotactic radiosurgery (SRS) for this study. These full-dose images were used to generate projection data, to which noise was then added to simulate the low-mAs scanning scenario. Low-dose CT images were reconstructed from this noise-contaminated projection data and were fed into our network along with the original full-dose CT images for training. The performance of our network was evaluated by quantitatively comparing the high-quality CT images generated by our method with the original full-dose images. When mAs is reduced to 0.5% of the original CT scan, the mean square error of the CT images obtained by our method is ∼1.6% with respect to the original full-dose images. The proposed method successfully brought the noise, contrast-to-noise ratio, and nonuniformity level close to those of full-dose CT images and outperforms a state-of-the-art iterative reconstruction method. Dosimetric studies show that the average differences of dose-volume histogram metrics are < 0.1 Gy (p > 0.05). These quantitative results strongly indicate that the denoised low-dose CT images produced by our method maintain image accuracy and quality and are accurate enough for dose calculation in current CT simulation of brain SRS treatment. We also demonstrate the great potential for low-dose CT in the process of simulation and treatment planning.
Affiliation(s)
- Tonghe Wang
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Yang Lei
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Zhen Tian
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Xue Dong
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Yingzi Liu
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Xiaojun Jiang
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Walter J. Curran
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Tian Liu
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Hui-Kuo Shu
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Xiaofeng Yang
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
70. Hammouche A, Cloutier G, Tardif JC, Hammouche K, Meunier J. Automatic IVUS lumen segmentation using a 3D adaptive helix model. Comput Biol Med 2019;107:58-72. DOI: 10.1016/j.compbiomed.2019.01.023.