1. Wu Y, Xia S, Liang Z, Chen R, Qi S. Artificial intelligence in COPD CT images: identification, staging, and quantitation. Respir Res 2024;25:319. [PMID: 39174978; PMCID: PMC11340084; DOI: 10.1186/s12931-024-02913-z]
Abstract
Chronic obstructive pulmonary disease (COPD) stands as a significant global health challenge, with its intricate pathophysiological manifestations often demanding advanced diagnostic strategies. The recent applications of artificial intelligence (AI) within medical imaging, especially computed tomography, present a promising avenue for transformative changes in COPD diagnosis and management. This review examines the capabilities and advancements of AI, particularly machine learning and deep learning, and their applications in COPD identification, staging, and imaging phenotypes. Emphasis is placed on AI-powered insights into emphysema, airway dynamics, and vascular structures. The challenges linked with data intricacies and the integration of AI into the clinical landscape are discussed. Lastly, the review offers a forward-looking perspective, highlighting emerging innovations in AI for COPD imaging and the potential of interdisciplinary collaborations, hinting at a future where AI does not just support but pioneers breakthroughs in COPD care. Through this review, we aim to provide a comprehensive understanding of the current state and future potential of AI in shaping the landscape of COPD diagnosis and management.
Affiliation(s)
- Yanan Wu
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
  - Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Shuyue Xia
  - Respiratory Department, Central Hospital Affiliated to Shenyang Medical College, Shenyang, China
  - Key Laboratory of Medicine and Engineering for Chronic Obstructive Pulmonary Disease in Liaoning Province, Shenyang, China
- Zhenyu Liang
  - State Key Laboratory of Respiratory Disease, National Clinical Research Center for Respiratory Disease, Guangzhou Institute of Respiratory Health, The National Center for Respiratory Medicine, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Rongchang Chen
  - State Key Laboratory of Respiratory Disease, National Clinical Research Center for Respiratory Disease, Guangzhou Institute of Respiratory Health, The National Center for Respiratory Medicine, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
  - Shenzhen Institute of Respiratory Disease, Shenzhen People's Hospital, Shenzhen, China
- Shouliang Qi
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
  - Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
2. Wu Y, Qi S, Wang M, Zhao S, Pang H, Xu J, Bai L, Ren H. Transformer-based 3D U-Net for pulmonary vessel segmentation and artery-vein separation from CT images. Med Biol Eng Comput 2023;61:2649-2663. [PMID: 37420036; DOI: 10.1007/s11517-023-02872-5]
Abstract
Transformer-based methods have revolutionized multiple computer vision tasks. Inspired by this, we propose a transformer-based network with a channel-enhanced attention module to explore contextual and spatial information in non-contrast (NC) and contrast-enhanced (CE) computed tomography (CT) images for pulmonary vessel segmentation and artery-vein separation. The proposed network employs a 3D contextual transformer module in the encoder and decoder and a double attention module in the skip connections to produce high-quality vessel and artery-vein segmentations. Extensive experiments are conducted on an in-house dataset and the ISICDM2021 challenge dataset. The in-house dataset includes 56 NC CT scans with vessel annotations, and the challenge dataset consists of 14 NC and 14 CE CT scans with vessel and artery-vein annotations. For vessel segmentation, the Dice score is 0.840 for CE CT and 0.867 for NC CT. For artery-vein separation, the proposed method achieves a Dice score of 0.758 on CE images and 0.602 on NC images. Quantitative and qualitative results demonstrate that the proposed method achieves high accuracy for pulmonary vessel segmentation and artery-vein separation, providing useful support for further research on the vascular system in CT images. The code is available at https://github.com/wuyanan513/Pulmonary-Vessel-Segmentation-and-Artery-vein-Separation.
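The Dice scores quoted above measure the overlap between a predicted binary vessel mask and its annotation. A minimal sketch of the metric, using hypothetical flat 0/1 lists in place of 3D CT voxel masks:

```python
# Dice similarity coefficient (DSC) between two binary masks:
# 2 * |intersection| / (|prediction| + |truth|).
# Hypothetical flat 0/1 lists stand in for 3D CT voxel masks.
def dice(pred, truth):
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0  # both empty -> perfect match

pred  = [1, 1, 0, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 1, 1, 0, 1, 0]
print(dice(pred, truth))  # -> 0.75
```

A Dice of 1.0 indicates perfect overlap and 0.0 indicates none; scores such as the paper's 0.867 mean most predicted vessel voxels coincide with annotated ones.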
Affiliation(s)
- Yanan Wu
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
  - Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
  - Department of Electronic Engineering, Faculty of Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Shouliang Qi
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
  - Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Meihuan Wang
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Shuiqing Zhao
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Haowen Pang
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Jiaxuan Xu
  - State Key Laboratory of Respiratory Disease, National Clinical Research Center for Respiratory Disease, The National Center for Respiratory Medicine, Guangzhou Institute of Respiratory Health, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Long Bai
  - Department of Electronic Engineering, Faculty of Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Hongliang Ren
  - Department of Electronic Engineering, Faculty of Engineering, The Chinese University of Hong Kong, Hong Kong, China
  - Department of Biomedical Engineering (BME), National University of Singapore, Singapore, Singapore
  - National University of Singapore (Suzhou) Research Institute, Suzhou, Jiangsu, China
  - Shun Hing Institute of Advanced Engineering, The Chinese University of Hong Kong (CUHK), Hong Kong, China
3. Pang H, Qi S, Wu Y, Wang M, Li C, Sun Y, Qian W, Tang G, Xu J, Liang Z, Chen R. NCCT-CECT image synthesizers and their application to pulmonary vessel segmentation. Comput Methods Programs Biomed 2023;231:107389. [PMID: 36739625; DOI: 10.1016/j.cmpb.2023.107389]
Abstract
BACKGROUND AND OBJECTIVES: Non-contrast CT (NCCT) and contrast-enhanced CT (CECT) are important diagnostic tools with distinct features and applications for chest diseases. We developed two synthesizers for the mutual synthesis of NCCT and CECT and evaluated their applications.
METHODS: Two synthesizers (S1 and S2) were proposed based on a generative adversarial network. S1 generated synthetic CECT (SynCECT) from NCCT, and S2 generated synthetic NCCT (SynNCCT) from CECT. A new training procedure for the synthesizers was proposed: initially, the synthesizers were pretrained using self-supervised learning (SSL) and dual-energy CT (DECT) and then fine-tuned using registered NCCT and CECT images. Pulmonary vessel segmentation from NCCT was used as an example to demonstrate the effectiveness of the synthesizers. Two strategies (ST1 and ST2) were proposed for pulmonary vessel segmentation. In ST1, CECT images were used to train a segmentation model (Model-CECT), NCCT images were converted to SynCECT through S1, and SynCECT was input to Model-CECT for testing. In ST2, CECT data were converted to SynNCCT through S2, SynNCCT and CECT-based annotations were used to train an additional model (Model-NCCT), and NCCT was input to Model-NCCT for testing. Three datasets, D1 (40 paired CTs), D2 (14 NCCTs and 14 CECTs), and D3 (49 paired DECTs), were used to evaluate the synthesizers and strategies.
RESULTS: For S1, the mean absolute error (MAE), mean squared error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) were 14.60 ± 2.19, 1644 ± 890, 34.34 ± 1.91, and 0.94 ± 0.02, respectively. For S2, they were 12.52 ± 2.59, 1460 ± 922, 35.08 ± 2.35, and 0.95 ± 0.02, respectively. Our synthesizers outperformed the counterparts of CycleGAN, Pix2Pix, and Pix2PixHD. Ablation studies showed that removing SSL pretraining, DECT pretraining, or fine-tuning worsened performance (for example, for S1, MAE increased to 16.53 ± 3.10, 17.98 ± 3.10, and 20.57 ± 3.75, respectively). Model-NCCT and Model-CECT achieved Dice similarity coefficients (DSC) of 0.77 and 0.86 on D1 and 0.77 and 0.72 on D2, respectively.
CONCLUSIONS: The proposed synthesizers realized mutual and high-quality synthesis between NCCT and CECT images; the training procedures, including SSL pretraining, DECT pretraining, and fine-tuning, were critical to their effectiveness. The results demonstrated the usefulness of the synthesizers for pulmonary vessel segmentation from NCCT images.
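The scalar quality numbers reported for S1 and S2 (MAE, MSE, PSNR) can be computed directly from paired images. A minimal sketch on tiny hypothetical 8-bit intensity lists standing in for registered slices (SSIM, which involves local image statistics, is omitted here):

```python
import math

# MAE, MSE, and PSNR between a synthesized image and its reference,
# computed over flattened intensities. The values below are
# hypothetical 8-bit pixels, not data from the paper.
def mae(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255.0):
    m = mse(a, b)
    # Identical images give zero error, hence infinite PSNR.
    return float("inf") if m == 0 else 10.0 * math.log10(peak * peak / m)

ref = [10, 50, 200, 120]
syn = [12, 48, 198, 125]
print(mae(ref, syn))  # -> 2.75
print(mse(ref, syn))  # -> 9.25
```

Lower MAE/MSE and higher PSNR indicate a closer match, which is why S2's 12.52 MAE and 35.08 dB PSNR read as slightly better than S1's.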
Affiliation(s)
- Haowen Pang
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
  - Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Shouliang Qi
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
  - Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Yanan Wu
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
  - Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Meihuan Wang
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
  - Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Chen Li
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
  - Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Yu Sun
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
  - Department of Radiology, General Hospital of Northern Theater Command, Shenyang, China
- Wei Qian
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
  - Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Guoyan Tang
  - State Key Laboratory of Respiratory Disease, National Clinical Research Center for Respiratory Disease, Guangzhou Institute of Respiratory Health, The National Center for Respiratory Medicine, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Jiaxuan Xu
  - State Key Laboratory of Respiratory Disease, National Clinical Research Center for Respiratory Disease, Guangzhou Institute of Respiratory Health, The National Center for Respiratory Medicine, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Zhenyu Liang
  - State Key Laboratory of Respiratory Disease, National Clinical Research Center for Respiratory Disease, Guangzhou Institute of Respiratory Health, The National Center for Respiratory Medicine, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Rongchang Chen
  - Key Laboratory of Respiratory Disease of Shenzhen, Shenzhen Institute of Respiratory Disease, Shenzhen People's Hospital (Second Affiliated Hospital of Jinan University, First Affiliated Hospital of South University of Science and Technology of China), Shenzhen, China
4. Wang M, Qi S, Wu Y, Sun Y, Chang R, Pang H, Qian W. CE-NC-VesselSegNet: supervised by contrast-enhanced CT images but utilized to segment pulmonary vessels from non-contrast-enhanced CT images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104565]
5. Wu R, Xin Y, Qian J, Dong Y. A multi-scale interactive U-Net for pulmonary vessel segmentation method based on transfer learning. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104407]
6. Meng J, Feng Z, Qian S, Wang C, Li X, Gao L, Ding Z, Qian J, Liu Z. Mapping physiological and pathological functions of cortical vasculature through aggregation-induced emission nanoprobes assisted quantitative, in vivo NIR-II imaging. Biomater Adv 2022;136:212760. [PMID: 35929291; DOI: 10.1016/j.bioadv.2022.212760]
Abstract
Cerebrovascular disease includes all disorders that affect the cerebral vasculature and cerebral circulation. Unfortunately, there is currently no systematic method to image blood vessels directly and quantify them accurately. Herein, we build a non-invasive, quantitative imaging and characterization system applicable to mapping physiological and pathological functions of the cortical vasculature. Assisted by aggregation-induced emission (AIE) luminogens with either excitation or emission in the near-infrared-II (NIR-II) region, large-depth and/or high signal-to-background-ratio images of cerebral blood vessels from mice and marmosets are captured, based on which we develop an optical metric of vessel thickness in an automated, pixel-wise manner in both two-dimensional (2D) and three-dimensional (3D) contexts. By monitoring time-dependent cerebrovascular images in marmosets, periodic changes in the diameter of vibrating cerebral blood vessels are found to be regulated mainly by the heartbeat. In a mouse photothrombosis model, vessel alterations throughout the whole process of thrombotic stroke are found to be stage-dependent. Over a large field of view, the distance-dependent variation in vessel thickness before and immediately after stroke is obtained away from the thrombus site. Importantly, a buffer zone exists immediately surrounding the lesion, indicating the inhomogeneity of vascular morphological changes. Biologically excretable AIE nanoparticles are used for assessing physiological and pathological functions, offering great potential for clinical translation.
Affiliation(s)
- Jia Meng
  - State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, International Research Center for Advanced Photonics, Zhejiang University, Hangzhou, Zhejiang 310027, China
- Zhe Feng
  - State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, International Research Center for Advanced Photonics, Zhejiang University, Hangzhou, Zhejiang 310027, China
- Shuhao Qian
  - State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, International Research Center for Advanced Photonics, Zhejiang University, Hangzhou, Zhejiang 310027, China
- Chuncheng Wang
  - State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, International Research Center for Advanced Photonics, Zhejiang University, Hangzhou, Zhejiang 310027, China
- Xinjian Li
  - Department of Neurology of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, Zhejiang 310027, China
- Lixia Gao
  - Department of Neurology of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, Zhejiang 310027, China
- Zhihua Ding
  - State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, International Research Center for Advanced Photonics, Zhejiang University, Hangzhou, Zhejiang 310027, China
- Jun Qian
  - State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, International Research Center for Advanced Photonics, Zhejiang University, Hangzhou, Zhejiang 310027, China
- Zhiyi Liu
  - State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, International Research Center for Advanced Photonics, Zhejiang University, Hangzhou, Zhejiang 310027, China
  - Intelligent Optics & Photonics Research Center, Jiaxing Research Institute, Zhejiang University, Jiaxing, Zhejiang 314000, China
7. Yang R, Wang Z, Jia Y, Li H, Mou Y. Comparison of Clinical Efficacy of Sodium Nitroprusside and Urapidil in the Treatment of Acute Hypertensive Cerebral Hemorrhage. J Healthc Eng 2022;2022:2209070. [PMID: 35388331; PMCID: PMC8979678; DOI: 10.1155/2022/2209070]
Abstract
This paper studies the clinical efficacy of sodium nitroprusside and urapidil in the treatment of acute hypertensive intracerebral hemorrhage and analyzes brain CT image detection based on a deep learning algorithm. A total of 132 cases of acute hypertension admitted to XXX hospital from XX 2019 to XX 2020 were retrospectively analyzed. The diseases of all patients were clinically confirmed, and patients were divided into groups according to treatment method: urapidil was used for group 1, sodium nitroprusside for group 2, and urapidil combined with sodium nitroprusside for group 3. A convolutional neural network was used to classify the patients' brain CT images, and the performance of AlexNet, GoogLeNet, and CNN3 was compared. The results show that GoogLeNet has the highest prediction accuracy at 0.83, followed by AlexNet with 0.80 and CNN3 with 0.74. The performance parameter curves show that GoogLeNet has the highest performance parameter at 0.89, followed by AlexNet and CNN3; the machine learning performance parameter curves are above 0.80. After five weeks of drug treatment, the hematoma volume was (3.8 ± 2.6) mL in group 1, (7.6 ± 2.8) mL in group 2, and (2.8 ± 1.5) mL in group 3. After 5 days of treatment, the patients' heart rate changed compared with before treatment. Compared with group 2, there were significant differences in groups 1 and 3 (P < 0.01), indicating that the therapeutic effect of the combination was significantly better than that of either drug alone. In summary, the combination of sodium nitroprusside and urapidil has a significantly better effect than urapidil alone, and a convolutional neural network based on deep learning improves the recognition accuracy of medical images.
Affiliation(s)
- Rui Yang
  - Department of Neurology, Gucheng Hospital, Hengshui 253800, China
- Zhenzhen Wang
  - Department of Neurology, Gucheng Hospital, Hengshui 253800, China
- Yanxun Jia
  - Department of Neurosurgery, Gucheng Hospital, Hengshui 253800, China
- Hao Li
  - Department of Neurology, Gucheng Hospital, Hengshui 253800, China
- Yating Mou
  - Department of Neurology, Gucheng Hospital, Hengshui 253800, China
8. Guo W, Zhang G, Gong Z, Li Q. Effective integration of object boundaries and regions for improving the performance of medical image segmentation by using two cascaded networks. Comput Methods Programs Biomed 2021;212:106423. [PMID: 34673377; DOI: 10.1016/j.cmpb.2021.106423]
Abstract
BACKGROUND AND OBJECTIVES: Existing CNN-based methods for object segmentation use the regions of objects alone as the labels for training networks, and the potentially useful boundaries annotated by radiologists are not used directly during training. We therefore proposed a framework of two cascaded networks that integrates both region and boundary information to improve the accuracy of object segmentation.
METHODS: The first network was used to extract the boundary from original images. The predicted, dilated boundary from the first network and the corresponding original image were employed to train the second network for final segmentation. Compared with the object regions alone, the boundaries may provide additional useful local information for improved object segmentation. The two cascaded networks were evaluated on three datasets: 40 CT scans for segmenting the esophagus, heart, trachea, and aorta; 247 chest radiographs for segmenting the lung, heart, and clavicle; and 101 retinal images for segmenting the optic disc and cup. Mean Dice, 90% Hausdorff distance, and Euclidean distance were employed to quantitatively evaluate the segmentation results.
RESULTS: Compared with the baseline of the conventional U-Net, the two cascaded networks consistently improved the mean Dice and reduced the mean 90% Hausdorff and Euclidean distances for all objects; the reduction in the 90% Hausdorff distance was as high as ten-fold for certain objects.
CONCLUSIONS: The boundary is very useful information for object segmentation, and integrating the object boundary and region improves segmentation results compared with using the object region alone.
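As context for the 90% Hausdorff distance used above: it replaces the maximum surface-to-surface distance with the 90th percentile, making the metric robust to a few outlier boundary points. A minimal sketch on hypothetical 2D contour points (the point sets and parameter names are illustrative, not from the paper):

```python
import math

# Robust (percentile) Hausdorff distance between two point sets.
# q=0.9 gives the 90% variant used for boundary evaluation;
# q=1.0 recovers the classic (maximum) Hausdorff distance.
def directed_hd(a, b, q=0.9):
    # For each point in a, distance to its nearest neighbor in b,
    # then take the q-th percentile of those distances.
    dists = sorted(min(math.dist(p, r) for r in b) for p in a)
    idx = max(0, math.ceil(q * len(dists)) - 1)
    return dists[idx]

def hd(a, b, q=0.9):
    # Symmetrize: the metric must not depend on argument order.
    return max(directed_hd(a, b, q), directed_hd(b, a, q))

a = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
b = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
print(hd(a, b))  # -> 1.0
```

Because the top 10% of nearest-neighbor distances are discarded, a single stray boundary pixel no longer dominates the score, which is why the percentile variant is preferred for segmentation evaluation.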
Affiliation(s)
- Wei Guo
  - Huazhong University of Science and Technology, China
  - Shenyang Aerospace University, China
- Qiang Li
  - Huazhong University of Science and Technology, China
9. Fu Y, Lei Y, Wang T, Curran WJ, Liu T, Yang X. A review of deep learning based methods for medical image multi-organ segmentation. Phys Med 2021;85:107-122. [PMID: 33992856; PMCID: PMC8217246; DOI: 10.1016/j.ejmp.2021.05.003]
Abstract
Deep learning has revolutionized image processing and achieved state-of-the-art performance in many medical image segmentation tasks. Many deep learning-based methods have been published to segment different parts of the body for different medical applications. It is therefore necessary to summarize the current state of development of deep learning in the field of medical image segmentation. In this paper, we aim to provide a comprehensive review with a focus on multi-organ image segmentation, which is crucial for radiotherapy, where the tumor and organs-at-risk need to be contoured for treatment planning. We grouped the surveyed methods into two broad categories: 'pixel-wise classification' and 'end-to-end segmentation'. Each category was divided into subgroups according to their network design. For each type, we listed the surveyed works, highlighted important contributions, and identified specific challenges. Following the detailed review, we discussed the achievements, shortcomings, and future potential of each category. To enable direct comparison, we listed the performance of the surveyed works that used thoracic and head-and-neck benchmark datasets.
Affiliation(s)
- Yabo Fu, Yang Lei, Tonghe Wang, Walter J Curran, Tian Liu, Xiaofeng Yang
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
10. The effect of pulmonary vessel suppression on computerized detection of nodules in chest CT scans. Med Phys 2020;47:4917-4927. [DOI: 10.1002/mp.14401]