1. Yuan H, Hong C, Tran NTA, Xu X, Liu N. Leveraging anatomical constraints with uncertainty for pneumothorax segmentation. Health Care Science 2024; 3:456-474. [PMID: 39735285; PMCID: PMC11671217; DOI: 10.1002/hcs2.119]
Abstract
Background Pneumothorax is a medical emergency caused by the abnormal accumulation of air in the pleural space, the potential space between the lungs and chest wall. On 2D chest radiographs, pneumothorax occurs within the thoracic cavity and outside of the mediastinum, and we refer to this area as "lung + space." While deep learning (DL) has increasingly been utilized to segment pneumothorax lesions in chest radiographs, many existing DL models employ an end-to-end approach. These models directly map chest radiographs to clinician-annotated lesion areas, often neglecting the vital domain knowledge that pneumothorax is inherently location-sensitive. Methods We propose a novel approach that incorporates the lung + space as a constraint during DL model training for pneumothorax segmentation on 2D chest radiographs. To circumvent the need for additional annotations and to prevent potential label leakage on the target task, our method utilizes external datasets and an auxiliary task of lung segmentation. This approach generates a specific lung + space constraint for each chest radiograph. Furthermore, we incorporated a discriminator to eliminate unreliable constraints caused by the domain shift between the auxiliary and target datasets. Results Our method yielded considerable improvements, with average performance gains of 4.6%, 3.6%, and 3.3% in intersection over union, Dice similarity coefficient, and Hausdorff distance, respectively. These results were consistent across six baseline models built on three architectures (U-Net, LinkNet, or PSPNet) and two backbones (VGG-11 or MobileOne-S0). We further conducted an ablation study to evaluate the contribution of each component of the proposed method and undertook several robustness studies on hyper-parameter selection to validate the stability of our method. Conclusions The integration of domain knowledge in DL models for medical applications has often been underemphasized. Our research underscores the significance of incorporating medical domain knowledge about the location-specific nature of pneumothorax to enhance DL-based lesion segmentation and further bolster clinicians' trust in DL tools. Beyond pneumothorax, our approach is promising for other thoracic conditions that possess location-relevant characteristics.
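The abstract does not give implementation details, but the core idea of steering a segmentation model with an anatomical "lung + space" region can be illustrated with a short, hedged sketch. The loss form, penalty weight, and tensor shapes below are illustrative assumptions rather than the authors' code; their full method additionally relies on an auxiliary lung-segmentation model trained on external data and a discriminator that filters unreliable constraints.

```python
# Illustrative sketch only (PyTorch): one plausible way to use a precomputed
# "lung + space" mask as a soft anatomical constraint during training.
# The penalty term and its weight are assumptions, not the authors' implementation.
import torch
import torch.nn.functional as F

def constrained_segmentation_loss(logits, target, lung_space_mask, penalty_weight=1.0):
    """logits, target, lung_space_mask: float tensors of shape (B, 1, H, W).
    lung_space_mask is 1 inside the anatomically plausible region, 0 elsewhere."""
    # Standard pixel-wise loss against clinician-annotated pneumothorax masks.
    seg_loss = F.binary_cross_entropy_with_logits(logits, target)

    # Penalize predicted probability mass that falls outside the lung + space region.
    probs = torch.sigmoid(logits)
    outside_mass = (probs * (1.0 - lung_space_mask)).mean()

    return seg_loss + penalty_weight * outside_mass
```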
Affiliation(s)
- Han Yuan
- Centre for Quantitative Medicine, Duke‐NUS Medical School, Singapore
- Chuan Hong
- Department of Biostatistics and Bioinformatics, Duke University, Durham, North Carolina, USA
- Xinxing Xu
- Institute of High Performance Computing, Agency for Science, Technology and Research, Singapore
- Nan Liu
- Centre for Quantitative Medicine, Duke‐NUS Medical School, Singapore
- Programme in Health Services and Systems Research, Duke‐NUS Medical School, Singapore
- Institute of Data Science, National University of Singapore, Singapore
2. Cao D, Zhang R, Zhang Y. MFLUnet: multi-scale fusion lightweight Unet for medical image segmentation. Biomedical Optics Express 2024; 15:5574-5591. [PMID: 39421782; PMCID: PMC11482190; DOI: 10.1364/boe.529505]
Abstract
Recently, the use of point-of-care medical devices has been increasing; however, U-Net and many of its latest variants have numerous parameters, high computational complexity, and slow inference speed, making them unsuitable for deployment on such point-of-care or mobile devices. To enable deployment in real medical environments, we propose a multi-scale fusion lightweight network (MFLUnet), a CNN-based lightweight medical image segmentation model. To improve the network's feature extraction ability and information utilization efficiency, we propose two modules, the MSBDCB and the EF module, which enable the model to effectively extract local and global features and integrate multi-scale and multi-stage information while maintaining low computational complexity. The proposed network is validated on three challenging medical image segmentation tasks: skin lesion segmentation, cell segmentation, and ultrasound image segmentation. The experimental results show that our network delivers excellent performance while consuming minimal computing resources. Ablation experiments confirm the effectiveness of the proposed encoder-decoder and skip connection modules. This study introduces a new method for medical image segmentation and promotes the application of medical image segmentation networks in real medical environments.
Affiliation(s)
- Dianlei Cao
- School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, Shandong 250014, China
- Rui Zhang
- School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, Shandong 250014, China
- Yunfeng Zhang
- School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, Shandong 250014, China
3. Slika B, Dornaika F, Merdji H, Hammoudi K. Lung pneumonia severity scoring in chest X-ray images using transformers. Med Biol Eng Comput 2024; 62:2389-2407. [PMID: 38589723; PMCID: PMC11289055; DOI: 10.1007/s11517-024-03066-3]
Abstract
To create robust and adaptable methods for lung pneumonia diagnosis and the assessment of its severity using chest X-rays (CXR), access to well-curated, extensive datasets is crucial. Many current severity quantification approaches require resource-intensive training for optimal results. Healthcare practitioners require efficient computational tools to swiftly identify COVID-19 cases and predict the severity of the condition. In this research, we introduce a novel image augmentation scheme as well as a neural network model founded on Vision Transformers (ViT) with a small number of trainable parameters for quantifying COVID-19 severity and other lung diseases. Our method, named Vision Transformer Regressor Infection Prediction (ViTReg-IP), leverages a ViT architecture and a regression head. To assess the model's adaptability, we evaluate its performance on diverse chest radiograph datasets from various open sources and conduct a comparative analysis against several competing deep learning methods. Our method achieved a minimum Mean Absolute Error (MAE) of 0.569 and 0.512 and a maximum Pearson Correlation Coefficient (PC) of 0.923 and 0.855 for the geographic extent score and the lung opacity score, respectively, when the CXRs from the RALO dataset were used in training. The experimental results reveal that our model delivers exceptional performance in severity quantification while maintaining robust generalizability, all with relatively modest computational requirements. The source code used in our work is publicly available at https://github.com/bouthainas/ViTReg-IP.
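As a rough illustration of the ViT-backbone-plus-regression-head design described above, the sketch below pairs a generic vision transformer with a single-output head and an MAE objective. The backbone name, input size, and training details are assumptions made for the example; the authors' actual ViTReg-IP implementation is available in the repository linked in the abstract.

```python
# Hedged sketch (PyTorch + timm): a ViT feature extractor with a scalar regression
# head for a radiographic severity score. Backbone choice and input size are
# assumptions; see the authors' repository for their actual ViTReg-IP model.
import torch
import torch.nn as nn
import timm

class SeverityRegressor(nn.Module):
    def __init__(self, backbone_name="vit_small_patch16_224"):
        super().__init__()
        # num_classes=0 makes timm return pooled features instead of class logits.
        self.backbone = timm.create_model(backbone_name, pretrained=False, num_classes=0)
        self.head = nn.Linear(self.backbone.num_features, 1)  # regression head

    def forward(self, x):  # x: (B, 3, 224, 224) chest radiographs
        return self.head(self.backbone(x)).squeeze(-1)

model = SeverityRegressor()
scores = model(torch.randn(2, 3, 224, 224))            # predicted severity scores
loss = nn.L1Loss()(scores, torch.tensor([3.0, 5.0]))    # MAE, matching the reported metric
```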
Affiliation(s)
- Bouthaina Slika
- University of the Basque Country UPV/EHU, San Sebastian, Spain
- Lebanese International University, Beirut, Lebanon
- Beirut International University, Beirut, Lebanon
- Fadi Dornaika
- University of the Basque Country UPV/EHU, San Sebastian, Spain
- IKERBASQUE, Basque Foundation for Science, Bilbao, Spain
- Hamid Merdji
- INSERM, UMR 1260, Regenerative Nanomedicine (RNM), CRBS, University of Strasbourg, Strasbourg, France
- Hôpital Universitaire de Strasbourg, Strasbourg, France
- Karim Hammoudi
- Université de Haute-Alsace, IRIMAS, Mulhouse, France
- University of Strasbourg, Strasbourg, France
4. Li Y, Zhang L, Yu H, Wang J, Wang S, Liu J, Zheng Q. A comprehensive segmentation of chest X-ray improves deep learning-based WHO radiologically confirmed pneumonia diagnosis in children. Eur Radiol 2024; 34:3471-3482. [PMID: 37930411; DOI: 10.1007/s00330-023-10367-y]
Abstract
OBJECTIVES To investigate a comprehensive segmentation of chest X-ray (CXR) in promoting deep learning-based World Health Organization (WHO) radiologically confirmed pneumonia diagnosis in children. METHODS A total of 4400 participants between January 2016 and June 2021 were identified for a cross-sectional study and divided into primary endpoint pneumonia (PEP), other infiltrates, and normal groups according to WHO's diagnostic criteria. The CXR was divided into six segments of left lung, right lung, mediastinum, diaphragm, ext-left lung, and ext-right lung by adopting the RA-UNet. To demonstrate the benefits of lung field segmentation in pneumonia diagnosis, the segmented images and the unsegmented images, which constituted seven segmentation combinations, were fed into the CBAM-ResNet for a three-category classification comparison. The interpretability of the CBAM-ResNet for pneumonia diagnosis was also examined by adopting a Grad-CAM module. RESULTS The RA-UNet achieved a high spatial overlap between manual and automatic segmentation (average DSC = 0.9639). The CBAM-ResNet fed with the six segments achieved superior three-category diagnosis performance (accuracy = 0.8243) over other segmentation combinations and deep learning models under comparison, with an increase of around 6% in accuracy, precision, specificity, sensitivity, and F1-score, and around 3% in AUC. Grad-CAM captured the pneumonia lesions more accurately, generating more interpretable visualizations and enhancing the reliability of our study in assisting pediatric pneumonia diagnosis. CONCLUSIONS The comprehensive segmentation of CXR could improve deep learning-based pneumonia diagnosis in children under the WHO's standardized radiological pneumonia classification, rather than the conventional dichotomy of bacterial versus viral pneumonia. CLINICAL RELEVANCE STATEMENT The comprehensive segmentation of chest X-ray improves deep learning-based WHO-confirmed pneumonia diagnosis in children, laying a strong foundation for the potential inclusion of computer-aided pediatric CXR readings in the precise classification of pneumonia and in PCV vaccine efficacy trials in children. KEY POINTS • The chest X-ray was comprehensively segmented into six anatomical structures: left lung, right lung, mediastinum, diaphragm, ext-left lung, and ext-right lung. • The comprehensive segmentation improved the three-category classification of primary endpoint pneumonia, other infiltrates, and normal, with an increase of around 6% in accuracy, precision, specificity, sensitivity, and F1-score, and around 3% in AUC. • The comprehensive segmentation produced more accurate and interpretable visualization results in capturing the pneumonia lesions.
Affiliation(s)
- Yuemei Li
- School of Computer and Control Engineering, Yantai University, Yantai, 264005, China
- Lin Zhang
- Department of Radiology, Xiamen Children's Hospital, Children's Hospital of Fudan University at Xiamen, Xiamen, Fujian, China
- Hu Yu
- School of Computer and Control Engineering, Yantai University, Yantai, 264005, China
- Jian Wang
- Department of Radiology, Xiamen Children's Hospital, Children's Hospital of Fudan University at Xiamen, Xiamen, Fujian, China
- Shuo Wang
- Yantai University Trier College of Sustainable Technology, Yantai, 264005, Shandong Province, China
- Trier University of Applied Sciences, D-54208, Trier, Germany
- Jungang Liu
- Department of Radiology, Xiamen Children's Hospital, Children's Hospital of Fudan University at Xiamen, Xiamen, Fujian, China
- Qiang Zheng
- School of Computer and Control Engineering, Yantai University, Yantai, 264005, China
5. Alam MS, Wang D, Liao Q, Sowmya A. A Multi-Scale Context Aware Attention Model for Medical Image Segmentation. IEEE J Biomed Health Inform 2023; 27:3731-3739. [PMID: 37015493; DOI: 10.1109/jbhi.2022.3227540]
Abstract
Medical image segmentation is critical for efficient diagnosis of diseases and treatment planning. In recent years, convolutional neural networks (CNN)-based methods, particularly U-Net and its variants, have achieved remarkable results on medical image segmentation tasks. However, they do not always work consistently on images with complex structures and large variations in regions of interest (ROI). This could be due to the fixed geometric structure of the receptive fields used for feature extraction and repetitive down-sampling operations that lead to information loss. To overcome these problems, the standard U-Net architecture is modified in this work by replacing the convolution block with a dilated convolution block to extract multi-scale context features with varying sizes of receptive fields, and adding a dilated inception block between the encoder and decoder paths to alleviate the problem of information recession and the semantic gap between features. Furthermore, the input of each dilated convolution block is added to the output through a squeeze and excitation unit, which alleviates the vanishing gradient problem and improves overall feature representation by re-weighting the channel-wise feature responses. The original inception block is modified by reducing the size of the spatial filter and introducing dilated convolution to obtain a larger receptive field. The proposed network was validated on three challenging medical image segmentation tasks with varying size ROIs: lung segmentation on chest X-ray (CXR) images, skin lesion segmentation on dermoscopy images and nucleus segmentation on microscopy cell images. Improved performance compared to state-of-the-art techniques demonstrates the effectiveness and generalisability of the proposed Dilated Convolution and Inception blocks-based U-Net (DCI-UNet).
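To make the architectural idea concrete, the following is a minimal PyTorch sketch of a dilated convolution block with squeeze-and-excitation channel re-weighting and a residual input connection, in the spirit of the description above. The dilation rates, channel counts, and fusion choices are assumptions for illustration, not the paper's exact DCI-UNet design.

```python
# Minimal sketch (PyTorch): parallel dilated convolutions fused and re-weighted by a
# squeeze-and-excitation unit, with a residual connection from the block input.
# Dilation rates, channel sizes, and the 1x1 fusion are illustrative assumptions.
import torch
import torch.nn as nn

class DilatedSEBlock(nn.Module):
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4), reduction=8):
        super().__init__()
        # Parallel dilated 3x3 convolutions give multi-scale receptive fields.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)
        # Squeeze-and-excitation: global pooling followed by channel re-weighting.
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch, max(out_ch // reduction, 1), kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(max(out_ch // reduction, 1), out_ch, kernel_size=1),
            nn.Sigmoid(),
        )
        self.skip = nn.Conv2d(in_ch, out_ch, kernel_size=1)  # residual projection

    def forward(self, x):
        fused = self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))
        # Channel-wise re-weighting plus the (projected) block input.
        return self.skip(x) + fused * self.se(fused)

block = DilatedSEBlock(in_ch=32, out_ch=64)
out = block(torch.randn(1, 32, 128, 128))  # -> (1, 64, 128, 128)
```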
6. Padash S, Mohebbian MR, Adams SJ, Henderson RDE, Babyn P. Pediatric chest radiograph interpretation: how far has artificial intelligence come? A systematic literature review. Pediatr Radiol 2022; 52:1568-1580. [PMID: 35460035; PMCID: PMC9033522; DOI: 10.1007/s00247-022-05368-w]
Abstract
Most artificial intelligence (AI) studies have focused primarily on adult imaging, with less attention to the unique aspects of pediatric imaging. The objectives of this study were to (1) identify all publicly available pediatric datasets and determine their potential utility and limitations for pediatric AI studies and (2) systematically review the literature to assess the current state of AI in pediatric chest radiograph interpretation. We searched PubMed, Web of Science and Embase to retrieve all studies from 1990 to 2021 that assessed AI for pediatric chest radiograph interpretation and abstracted the datasets used to train and test AI algorithms, approaches and performance metrics. Of 29 publicly available chest radiograph datasets, 2 datasets included solely pediatric chest radiographs, and 7 datasets included pediatric and adult patients. We identified 55 articles that implemented an AI model to interpret pediatric chest radiographs or pediatric and adult chest radiographs. Classification of chest radiographs as pneumonia was the most common application of AI, evaluated in 65% of the studies. Although many studies report high diagnostic accuracy, most algorithms were not validated on external datasets. Most AI studies for pediatric chest radiograph interpretation have focused on a limited number of diseases, and progress is hindered by a lack of large-scale pediatric chest radiograph datasets.
Affiliation(s)
- Sirwa Padash
- Department of Medical Imaging, University of Saskatchewan, 103 Hospital Drive, Saskatoon, Saskatchewan, S7N 0W8, Canada.
- Department of Radiology, Mayo Clinic, Rochester, MN, USA.
- Mohammad Reza Mohebbian
- Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
- Scott J Adams
- Department of Medical Imaging, University of Saskatchewan, 103 Hospital Drive, Saskatoon, Saskatchewan, S7N 0W8, Canada
- Robert D E Henderson
- Department of Medical Imaging, University of Saskatchewan, 103 Hospital Drive, Saskatoon, Saskatchewan, S7N 0W8, Canada
- Paul Babyn
- Department of Medical Imaging, University of Saskatchewan, 103 Hospital Drive, Saskatoon, Saskatchewan, S7N 0W8, Canada
7. Peng T, Wang C, Zhang Y, Wang J. H-SegNet: hybrid segmentation network for lung segmentation in chest radiographs using mask region-based convolutional neural network and adaptive closed polyline searching method. Phys Med Biol 2022; 67. [PMID: 35287125; DOI: 10.1088/1361-6560/ac5d74]
Abstract
Chest X-ray (CXR) is one of the most commonly used imaging techniques for the detection and diagnosis of pulmonary diseases. One critical component in many computer-aided systems, for either detection or diagnosis in digital CXR, is the accurate segmentation of the lung. Due to the low-intensity contrast around the lung boundary and large inter-subject variance, accurately segmenting the lungs in structural CXR images has been challenging. In this work, we propose an automatic Hybrid Segmentation Network (H-SegNet) for lung segmentation on CXR. The proposed H-SegNet consists of two key steps: (1) an image preprocessing step based on a deep learning model to automatically extract coarse lung contours; (2) a refinement step that fine-tunes the coarse segmentation results using an improved principal curve-based method coupled with an improved machine learning method. Experimental results on several public datasets show that the proposed method achieves superior segmentation results on lung CXRs compared with several state-of-the-art methods.
Affiliation(s)
- Tao Peng
- Department of Radiation Oncology, Medical Artificial Intelligence and Automation Laboratory, University of Texas Southwestern Medical Center, 2280 Inwood Road, Dallas, TX, United States of America
- Caishan Wang
- Department of Ultrasound, The Second Affiliated Hospital of Soochow University, Suzhou, Jiangsu, People's Republic of China
- You Zhang
- Department of Radiation Oncology, Medical Artificial Intelligence and Automation Laboratory, University of Texas Southwestern Medical Center, 2280 Inwood Road, Dallas, TX, United States of America
- Jing Wang
- Department of Radiation Oncology, Medical Artificial Intelligence and Automation Laboratory, University of Texas Southwestern Medical Center, 2280 Inwood Road, Dallas, TX, United States of America
8. Nino G, Molto J, Aguilar H, Zember J, Sanchez-Jacob R, Diez CT, Tabrizi PR, Mohammed B, Weinstock J, Xuchen X, Kahanowitch R, Arroyo M, Linguraru MG. Chest X-ray lung imaging features in pediatric COVID-19 and comparison with viral lower respiratory infections in young children. Pediatr Pulmonol 2021; 56:3891-3898. [PMID: 34487422; PMCID: PMC8661937; DOI: 10.1002/ppul.25661]
Abstract
RATIONALE Chest radiography (CXR) is a noninvasive imaging approach commonly used to evaluate lower respiratory tract infections (LRTIs) in children. However, the specific imaging patterns of pediatric coronavirus disease 2019 (COVID-19) on CXR, their relationship to clinical outcomes, and the possible differences from LRTIs caused by other viruses in children remain to be defined. METHODS This is a cross-sectional study of patients seen at a pediatric hospital with polymerase chain reaction (PCR)-confirmed severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) (n = 95). Patients were subdivided into infants (0-2 years, n = 27), children (3-10 years, n = 27), and adolescents (11-19 years, n = 41). A sample of young children (0-2 years, n = 68) with other viral LRTIs was included to compare their CXR features with the subset of infants (0-2 years) with COVID-19. RESULTS Forty-five percent of pediatric patients with COVID-19 were hospitalized and 20% required admission to the intensive care unit (ICU). The most common abnormalities identified were ground-glass opacifications (GGO)/consolidations (35%) and increased peribronchial markings/cuffing (33%). GGO/consolidations were more common in older individuals and perihilar markings were more common in younger subjects. Subjects requiring hospitalization or ICU admission had significantly more GGO/consolidations on CXR (p < .05). Typical CXR features of pediatric viral LRTI (e.g., hyperinflation) were more common in non-COVID-19 viral LRTI cases than in COVID-19 cases (p < .05). CONCLUSIONS CXR may be a complementary exam in the evaluation of moderate or severe pediatric COVID-19 cases. The severity of GGO/consolidations seen on CXR is predictive of clinically relevant outcomes. Hyperinflation could potentially aid clinical assessment in distinguishing COVID-19 from other types of viral LRTI in young children.
Affiliation(s)
- Gustavo Nino
- Division of Pediatric Pulmonary and Sleep Medicine, Children's National Hospital, Washington, District of Columbia, USA
- Department of Pediatrics, George Washington University School of Medicine, Washington, District of Columbia, USA
- Jose Molto
- Department of Radiology, George Washington University School of Medicine, Washington, District of Columbia, USA
- Hector Aguilar
- Division of Pediatric Pulmonary and Sleep Medicine, Children's National Hospital, Washington, District of Columbia, USA
- Department of Pediatrics, George Washington University School of Medicine, Washington, District of Columbia, USA
- Jonathan Zember
- Department of Radiology, George Washington University School of Medicine, Washington, District of Columbia, USA
- Ramon Sanchez-Jacob
- Department of Radiology, George Washington University School of Medicine, Washington, District of Columbia, USA
- Carlos T Diez
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, District of Columbia, USA
- Pooneh R Tabrizi
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, District of Columbia, USA
- Bilal Mohammed
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, District of Columbia, USA
- Jered Weinstock
- Division of Pediatric Pulmonary and Sleep Medicine, Children's National Hospital, Washington, District of Columbia, USA
- Department of Pediatrics, George Washington University School of Medicine, Washington, District of Columbia, USA
- Xilei Xuchen
- Division of Pediatric Pulmonary and Sleep Medicine, Children's National Hospital, Washington, District of Columbia, USA
- Department of Pediatrics, George Washington University School of Medicine, Washington, District of Columbia, USA
- Ryan Kahanowitch
- Division of Pediatric Pulmonary and Sleep Medicine, Children's National Hospital, Washington, District of Columbia, USA
- Department of Pediatrics, George Washington University School of Medicine, Washington, District of Columbia, USA
- Maria Arroyo
- Division of Pediatric Pulmonary and Sleep Medicine, Children's National Hospital, Washington, District of Columbia, USA
- Department of Pediatrics, George Washington University School of Medicine, Washington, District of Columbia, USA
- Marius G Linguraru
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, District of Columbia, USA
9. Çallı E, Sogancioglu E, van Ginneken B, van Leeuwen KG, Murphy K. Deep learning for chest X-ray analysis: A survey. Med Image Anal 2021; 72:102125. [PMID: 34171622; DOI: 10.1016/j.media.2021.102125]
Abstract
Recent advances in deep learning have led to promising performance in many medical image analysis tasks. As the most commonly performed radiological exam, chest radiographs are a particularly important modality for which a variety of applications have been researched. The release of multiple, large, publicly available chest X-ray datasets in recent years has encouraged research interest and boosted the number of publications. In this paper, we review all studies using deep learning on chest radiographs published before March 2021, categorizing works by task: image-level prediction (classification and regression), segmentation, localization, image generation and domain adaptation. Detailed descriptions of all publicly available datasets are included and commercial systems in the field are described. A comprehensive discussion of the current state of the art is provided, including caveats on the use of public datasets, the requirements of clinically useful systems and gaps in the current literature.
Affiliation(s)
- Erdi Çallı
- Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands.
- Ecem Sogancioglu
- Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
- Bram van Ginneken
- Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
- Kicky G van Leeuwen
- Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
- Keelin Murphy
- Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
10. Cerrolaza JJ, Picazo ML, Humbert L, Sato Y, Rueckert D, Ballester MÁG, Linguraru MG. Computational anatomy for multi-organ analysis in medical imaging: A review. Med Image Anal 2019; 56:44-67. [DOI: 10.1016/j.media.2019.04.002]