1. Moitra M, Alafeef M, Narasimhan A, Kakaria V, Moitra P, Pan D. Diagnosis of COVID-19 with simultaneous accurate prediction of cardiac abnormalities from chest computed tomographic images. PLoS One 2023; 18:e0290494. PMID: 38096254; PMCID: PMC10721010; DOI: 10.1371/journal.pone.0290494.
Abstract
COVID-19 has potential consequences for the pulmonary and cardiovascular health of millions of infected people worldwide. Chest computed tomographic (CT) imaging has remained the first line of diagnosis for individuals infected with SARS-CoV-2. However, differentiating COVID-19 from other types of pneumonia, and predicting associated cardiovascular complications from the same chest CT images, has remained challenging. In this study, we first used a transfer learning method to distinguish COVID-19 from other pneumonia and healthy cases with 99.2% accuracy. Next, we developed another CNN-based deep learning approach to automatically predict the risk of cardiovascular disease (CVD) in COVID-19 patients relative to normal subjects with 97.97% accuracy. Our model was further validated against cardiac CT-based markers, including the cardiothoracic ratio (CTR), the pulmonary artery to aorta ratio (PA/A), and the presence of calcified plaque. Thus, we demonstrate that CT-based deep learning algorithms can be employed as a dual screening diagnostic tool that diagnoses COVID-19, differentiates it from other pneumonia, and predicts the CVD risk associated with COVID-19 infection.
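The cardiac CT markers used for validation are simple ratios. A minimal sketch of how such markers could be computed and thresholded — the function names are ours, and the cut-offs (CTR > 0.5 for cardiomegaly, PA/A ≥ 0.9 for elevated pulmonary pressure) are commonly cited clinical values, not thresholds taken from this paper:

```python
def cardiothoracic_ratio(cardiac_width_mm: float, thoracic_width_mm: float) -> float:
    """Ratio of maximal transverse cardiac width to inner thoracic width."""
    return cardiac_width_mm / thoracic_width_mm

def pa_to_aorta_ratio(pa_diameter_mm: float, aorta_diameter_mm: float) -> float:
    """Ratio of main pulmonary artery diameter to ascending aorta diameter."""
    return pa_diameter_mm / aorta_diameter_mm

def flag_cvd_risk(ctr: float, pa_a: float, has_calcified_plaque: bool) -> bool:
    """Flag elevated CVD risk if any marker is abnormal.

    Thresholds are illustrative, commonly cited clinical cut-offs,
    assumed here rather than quoted from the paper.
    """
    return ctr > 0.5 or pa_a >= 0.9 or has_calcified_plaque
```

A learned model such as the one described would replace these hand-set thresholds, but validating against them requires exactly these ratio computations.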
Affiliation(s)
- Moumita Moitra
  - Center for Blood Oxygen Transport and Hemostasis, Department of Pediatrics, University of Maryland Baltimore School of Medicine, Baltimore, Maryland, United States of America
  - Department of Chemical, Biochemical and Environmental Engineering, University of Maryland Baltimore County, Baltimore, Maryland, United States of America
- Maha Alafeef
  - Center for Blood Oxygen Transport and Hemostasis, Department of Pediatrics, University of Maryland Baltimore School of Medicine, Baltimore, Maryland, United States of America
  - Department of Chemical, Biochemical and Environmental Engineering, University of Maryland Baltimore County, Baltimore, Maryland, United States of America
  - Biomedical Engineering Department, Jordan University of Science and Technology, Irbid, Jordan
  - Department of Nuclear Engineering, The Pennsylvania State University, State College, Pennsylvania, United States of America
- Arjun Narasimhan
  - Center for Blood Oxygen Transport and Hemostasis, Department of Pediatrics, University of Maryland Baltimore School of Medicine, Baltimore, Maryland, United States of America
- Vikram Kakaria
  - Center for Blood Oxygen Transport and Hemostasis, Department of Pediatrics, University of Maryland Baltimore School of Medicine, Baltimore, Maryland, United States of America
- Parikshit Moitra
  - Center for Blood Oxygen Transport and Hemostasis, Department of Pediatrics, University of Maryland Baltimore School of Medicine, Baltimore, Maryland, United States of America
  - Department of Nuclear Engineering, The Pennsylvania State University, State College, Pennsylvania, United States of America
- Dipanjan Pan
  - Center for Blood Oxygen Transport and Hemostasis, Department of Pediatrics, University of Maryland Baltimore School of Medicine, Baltimore, Maryland, United States of America
  - Department of Chemical, Biochemical and Environmental Engineering, University of Maryland Baltimore County, Baltimore, Maryland, United States of America
  - Department of Nuclear Engineering, The Pennsylvania State University, State College, Pennsylvania, United States of America
  - Department of Materials Science & Engineering, The Pennsylvania State University, State College, Pennsylvania, United States of America
  - Huck Institutes of the Life Sciences, State College, Pennsylvania, United States of America
2. Wang J, Sun H, Jiang K, Cao W, Chen S, Zhu J, Yang X, Zheng J. CAPNet: Context attention pyramid network for computer-aided detection of microcalcification clusters in digital breast tomosynthesis. Computer Methods and Programs in Biomedicine 2023; 242:107831. PMID: 37783114; DOI: 10.1016/j.cmpb.2023.107831.
Abstract
BACKGROUND AND OBJECTIVE Computer-aided detection (CADe) of microcalcification clusters (MCs) in digital breast tomosynthesis (DBT) is crucial in the early diagnosis of breast cancer. Although convolutional neural network (CNN)-based detection models have achieved excellent performance in medical lesion detection, they are subject to some limitations in MC detection: 1) most existing models employ the feature pyramid network (FPN) for multi-scale object detection; however, the rough feature sharing between adjacent layers in the FPN may limit the detection of small and low-contrast MCs; and 2) the MC region only accounts for a small part of the annotation box, so features extracted indiscriminately within the whole box may easily be affected by the background. In this paper, we develop a novel CNN-based CADe method to alleviate the impacts of the above limitations for the accurate and rapid detection of MCs in DBT. METHODS The proposed method has two parts: a novel context attention pyramid network (CAPNet) for intra-layer MC detection in two-dimensional (2D) slices and a three-dimensional (3D) aggregation procedure for aggregating 2D intra-layer MCs into a 3D result according to their connectivity in 3D space. The proposed CAPNet is based on an anchor-free, one-stage detection architecture and contains a context feature selection fusion (CFSF) module and a microcalcification response (MCR) branch. The CFSF module can efficiently enrich the features of shallow layers through the complementary selection of local context features, aiming to reduce the missed detection of small and low-contrast MCs. The MCR branch is a one-layer branch parallel to the classification branch, which can alleviate the influence of the background region within the annotation box on feature extraction and enhance the model's ability to distinguish MCs from normal breast tissue.
RESULTS We performed a comparison experiment on an in-house clinical dataset with 648 DBT volumes, and the proposed method achieved impressive performance with a sensitivity of 91.56% at 1 false positive per DBT volume (FP/volume) and 93.51% at 2 FPs/volume, outperforming other representative detection models. CONCLUSIONS The experimental results indicate that the proposed method is effective in the detection of MCs in DBT. This method can provide objective, accurate, and quick diagnostic suggestions for radiologists, presenting potential clinical value for early breast cancer screening.
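The 3D aggregation step — grouping per-slice 2D detections into 3D findings according to their connectivity in 3D space — can be sketched with a simple rule: two detections belong to the same cluster if they lie on the same or adjacent slices and their boxes overlap in-plane. The box format and the overlap test below are our own simplification, not the paper's exact procedure:

```python
def boxes_overlap(a, b):
    """Axis-aligned overlap test for (x1, y1, x2, y2) boxes."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def aggregate_3d(detections):
    """Group 2D detections into 3D clusters via connected components.

    detections: list of (slice_index, (x1, y1, x2, y2)).
    Detections are connected when their slice indices differ by at most
    one and their boxes overlap in-plane; union-find merges the chains.
    """
    n = len(detections)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i in range(n):
        for j in range(i + 1, n):
            zi, bi = detections[i]
            zj, bj = detections[j]
            if abs(zi - zj) <= 1 and boxes_overlap(bi, bj):
                union(i, j)

    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(detections[i])
    return list(clusters.values())
```

A real system would also fuse the per-slice confidence scores of each cluster, but the connectivity rule is the core of turning slice-wise output into volume-level findings.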
Affiliation(s)
- Jingkun Wang
  - School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China; Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Haotian Sun
  - School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China; Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Ke Jiang
  - Gusu School, Nanjing Medical University, Suzhou 215006, China; Department of Radiology, the Affiliated Suzhou Hospital of Nanjing Medical University, Suzhou 215000, China
- Weiwei Cao
  - School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China; Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Shuangqing Chen
  - Gusu School, Nanjing Medical University, Suzhou 215006, China; Department of Radiology, the Affiliated Suzhou Hospital of Nanjing Medical University, Suzhou 215000, China
- Jianbing Zhu
  - Suzhou Science & Technology Town Hospital, Gusu School, Nanjing Medical University, Suzhou 215153, China
- Xiaodong Yang
  - School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China; Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Jian Zheng
  - School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China; Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
3. Kim K, Lee JH, Oh SJ, Chung MJ. AI-based computer-aided diagnostic system of chest digital tomography synthesis: Demonstrating comparative advantage with X-ray-based AI systems. Computer Methods and Programs in Biomedicine 2023; 240:107643. PMID: 37348439; DOI: 10.1016/j.cmpb.2023.107643.
Abstract
BACKGROUND Compared with chest X-ray (CXR) imaging, which produces a single image projected from the front of the patient, chest digital tomosynthesis (CDTS) imaging can be more advantageous for lung lesion detection because it acquires multiple images projected from multiple angles. Various clinical comparative analysis and verification studies have been reported to demonstrate this, but no artificial intelligence (AI)-based comparative analysis studies exist. Existing AI-based computer-aided detection (CAD) systems for lung lesion diagnosis have been developed mainly on CXR images; a CAD system based on CDTS, which uses multi-angle images of the patient, has not been proposed or verified for its usefulness against CXR-based counterparts. OBJECTIVE This study develops and tests a CDTS-based AI CAD system for detecting lung lesions to demonstrate performance improvements over CXR-based AI CAD. METHODS We used multiple (five) projection images as input for the CDTS-based AI model and a single projection image as input for the CXR-based AI model to compare and evaluate performance between the models. The multiple/single projection input images were obtained by virtual projection onto the three-dimensional (3D) stack of computed tomography (CT) slices of each patient's lungs, from which the bed area was removed. The five images were projected from the front and from 30° and 60° to the left and right. The frontal projection was used as the input for the CXR-based AI model, while the CDTS-based AI model used all five projected images. The proposed CDTS-based AI model consists of five AI models, one receiving the images from each of the five directions, whose predictions are combined through an ensemble to obtain the final result. Each model used WideResNet-50.
To train and evaluate the CXR- and CDTS-based AI models, 500 healthy, 206 tuberculosis, and 242 pneumonia cases were used, and three-fold cross-validation was applied. RESULTS The proposed CDTS-based AI CAD system yielded sensitivities of 0.782 and 0.785 and accuracies of 0.895 and 0.837 for the (binary classification) detection of tuberculosis and pneumonia, respectively, against normal subjects. These results exceed the sensitivities of 0.728 and 0.698 and accuracies of 0.874 and 0.826 for detecting tuberculosis and pneumonia with the CXR-based AI CAD, which uses only a single projection image in the frontal direction. We found that CDTS-based AI CAD improved sensitivity for tuberculosis and pneumonia by 5.4% and 8.7%, respectively, compared to CXR-based AI CAD, without loss of accuracy. CONCLUSIONS This study comparatively demonstrates that CDTS-based AI CAD can outperform CXR-based AI CAD, suggesting that the clinical application of CDTS can be enhanced. Our code is available at https://github.com/kskim-phd/CDTS-CAD-P.
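The fusion step — combining five per-view models into one final prediction — can be sketched with stand-in per-view probability vectors instead of real WideResNet-50 outputs. Averaging the class probabilities and taking the argmax is one common ensembling rule, assumed here for illustration rather than quoted from the paper:

```python
def ensemble_predict(per_view_probs):
    """Average per-class probabilities across views, then take the argmax.

    per_view_probs: one probability vector per projection angle
    (e.g., front, left/right 30 and 60 degrees -> five vectors).
    Returns (predicted_class, mean_probability_vector).
    """
    n_views = len(per_view_probs)
    n_classes = len(per_view_probs[0])
    mean = [sum(p[c] for p in per_view_probs) / n_views
            for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: mean[c]), mean
```

The appeal of this design is that a lesion faint in the frontal projection can still dominate the average when the oblique views see it clearly, which matches the reported sensitivity gain over single-view CXR input.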
Affiliation(s)
- Kyungsu Kim
  - Medical AI Research Center, Research Institute for Future Medicine, Samsung Medical Center, Seoul 06351, Republic of Korea; Department of Data Convergence and Future Medicine, Sungkyunkwan University School of Medicine, Seoul 06351, Republic of Korea
- Ju Hwan Lee
  - Department of Health Sciences and Technology, SAIHST, Sungkyunkwan University, Seoul 06351, Republic of Korea
- Seong Je Oh
  - Department of Health Sciences and Technology, SAIHST, Sungkyunkwan University, Seoul 06351, Republic of Korea
- Myung Jin Chung
  - Medical AI Research Center, Research Institute for Future Medicine, Samsung Medical Center, Seoul 06351, Republic of Korea; Department of Data Convergence and Future Medicine, Sungkyunkwan University School of Medicine, Seoul 06351, Republic of Korea; Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul 06351, Republic of Korea
4. Mendes J, Matela N, Garcia N. Avoiding Tissue Overlap in 2D Images: Single-Slice DBT Classification Using Convolutional Neural Networks. Tomography 2023; 9:398-412. PMID: 36828384; PMCID: PMC9962912; DOI: 10.3390/tomography9010032.
Abstract
Breast cancer was the most diagnosed cancer around the world in 2020. Screening programs based on mammography aim to achieve early diagnosis, which is of extreme importance when it comes to cancer. There are several flaws associated with mammography, one of the most important being tissue overlap, which can result in both lesion masking and fake-lesion appearance. To overcome this, digital breast tomosynthesis takes images (slices) at different angles that are later reconstructed into a 3D image. Since the slices are planar images in which tissue overlap does not occur, the goal of this work was to develop a deep learning model that could, based on those slices, classify lesions as benign or malignant. The developed model was based on the work of Muduli et al., with a slight change in the fully connected layers and in the regularization. In total, 77 DBT volumes (39 benign and 38 malignant) were available. From each volume, nine slices were taken: the one where the lesion was most visible and the four above and four below it. To increase the quantity and the variability of the data, common data augmentation techniques (rotation, translation, mirroring) were applied to the original images three times; therefore, 2772 images were used for training. Data augmentation was then applied two more times, producing one set used for validation and one set used for testing. Our model achieved an accuracy of 93.2% on the testing set, while the values of sensitivity, specificity, precision, F1-score, and Cohen's kappa were 92%, 94%, 94%, 94%, and 0.86, respectively. Given these results, this work suggests that single-slice DBT can compare with state-of-the-art studies and hints that, with more data, better augmentation techniques, and transfer learning, this approach might surpass the use of mammograms in such studies.
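The augmentation pipeline described above (rotation, translation, mirroring) can be sketched on a toy image represented as a list of rows. The specific transforms and parameters below are illustrative choices of ours, not the exact ones used in the study:

```python
def mirror_horizontal(img):
    """Flip each row left-to-right."""
    return [row[::-1] for row in img]

def rotate_90(img):
    """Rotate the image 90 degrees clockwise."""
    return [list(col) for col in zip(*img[::-1])]

def translate_right(img, shift, fill=0):
    """Shift pixels right by `shift` columns, filling vacated pixels."""
    return [[fill] * shift + row[:len(row) - shift] for row in img]

def augment(img):
    """Produce three augmented variants of one image, mirroring the
    paper's 3x expansion of the training set (illustrative transforms)."""
    return [mirror_horizontal(img), rotate_90(img), translate_right(img, 1)]
```

Applied once to each of the 693 original slices (77 volumes × 9 slices), a 3x expansion like this plus the originals yields the 2772 training images reported above.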
Affiliation(s)
- João Mendes
  - Faculdade de Ciências, Instituto de Biofísica e Engenharia Biomédica, Universidade de Lisboa, 1749-016 Lisboa, Portugal
  - Faculdade de Ciências, LASIGE, Universidade de Lisboa, 1749-016 Lisboa, Portugal
- Nuno Matela (corresponding author)
  - Faculdade de Ciências, Instituto de Biofísica e Engenharia Biomédica, Universidade de Lisboa, 1749-016 Lisboa, Portugal
- Nuno Garcia
  - Faculdade de Ciências, LASIGE, Universidade de Lisboa, 1749-016 Lisboa, Portugal
5. Goldberg JE, Reig B, Lewin AA, Gao Y, Heacock L, Heller SL, Moy L. New Horizons: Artificial Intelligence for Digital Breast Tomosynthesis. Radiographics 2023; 43:e220060. DOI: 10.1148/rg.220060.
Affiliation(s)
- Julia E. Goldberg, Beatriu Reig, Alana A. Lewin, Yiming Gao, Laura Heacock, Samantha L. Heller, Linda Moy
  - Department of Radiology, NYU Langone Health, 550 1st Ave, New York, NY 10016
6. Demircioğlu A. Predictive performance of radiomic models based on features extracted from pretrained deep networks. Insights Imaging 2022; 13:187. PMID: 36484873; PMCID: PMC9733744; DOI: 10.1186/s13244-022-01328-y.
Abstract
OBJECTIVES In radiomics, generic texture and morphological features are often used for modeling. Recently, features extracted from pretrained deep networks have been used as an alternative. However, extracting deep features involves several decisions, and it is unclear how these affect the resulting models. Therefore, in this study, we considered the influence of such choices on predictive performance. METHODS On ten publicly available radiomic datasets, models were trained using feature sets that differed in the network architecture used, the layer of feature extraction, the set of slices used, the use of segmentation, and the aggregation method. The influence of these choices on predictive performance was measured using a linear mixed model. In addition, models with generic features were trained and compared in terms of predictive performance and correlation. RESULTS No single choice consistently led to the best-performing models. In the mixed model, the choice of architecture (AUC +0.016; p < 0.001), the layer of feature extraction (AUC +0.016; p < 0.001), and using all slices (AUC +0.023; p < 0.001) were highly significant; using segmentation had a lower influence (AUC +0.011; p = 0.023), while the aggregation method was not significant (p = 0.774). Models based on deep features were not significantly better than those based on generic features (p > 0.05 on all datasets). Deep feature sets correlated only moderately with each other (r = 0.4), in contrast to generic feature sets (r = 0.89). CONCLUSIONS Different choices have a significant effect on the predictive performance of the resulting models; for the highest performance, these choices should be optimized during cross-validation.
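The reported correlations between feature sets (r ≈ 0.4 for deep features vs. r ≈ 0.89 for generic features) are ordinary Pearson coefficients. A minimal stdlib sketch of the computation, with made-up vectors standing in for per-patient feature values:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A low correlation between two deep feature sets, as reported, means the extraction choices genuinely change what the features encode, which is exactly why the study treats them as tunable decisions.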
Affiliation(s)
- Aydin Demircioğlu
  - Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147 Essen, Germany
7. Automatic Classification of Simulated Breast Tomosynthesis Whole Images for the Presence of Microcalcification Clusters Using Deep CNNs. J Imaging 2022; 8:231. PMID: 36135397; PMCID: PMC9503015; DOI: 10.3390/jimaging8090231.
Abstract
Microcalcification clusters (MCs) are among the most important biomarkers for breast cancer, especially in cases of nonpalpable lesions. The vast majority of deep learning studies on digital breast tomosynthesis (DBT) focus on detecting and classifying lesions, especially soft-tissue lesions, in small, previously selected regions of interest. Only about 25% of the studies are specific to MCs, and all of them are based on the classification of small preselected regions. Classifying a whole image according to the presence or absence of MCs is a difficult task due to the small size of MCs and the amount of information present in an entire image. A completely automatic and direct classification, which receives the entire image without prior identification of any regions, is crucial for the usefulness of these techniques in a real clinical and screening environment. The main purpose of this work is to implement and evaluate the performance of convolutional neural networks (CNNs) on the automatic classification of a complete DBT image for the presence or absence of MCs, without any prior identification of regions. Four popular deep CNNs were trained and compared with a new architecture proposed by us, with the task of classifying DBT cases by the absence or presence of MCs. A public database of realistically simulated data was used, and the whole DBT image was taken as input. DBT data were considered both without and with preprocessing, to study the impact of noise reduction and contrast enhancement methods on the evaluation of MCs with CNNs. The area under the receiver operating characteristic curve (AUC) was used to evaluate performance. Very promising results were achieved, with a maximum AUC of 94.19% for GoogLeNet. The second-best AUC, 91.17%, was obtained with a newly implemented network, CNN-a, which was also the fastest, making it an interesting model to consider in other studies. Encouraging outcomes were thus achieved, with results similar to those of other studies for the detection of larger lesions such as masses. Moreover, given the difficulty of visualizing MCs, which are often spread over several slices, this work may have an important impact on the clinical analysis of DBT images.
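Performance above is reported as the area under the ROC curve (AUC). A minimal rank-based formulation — the probability that a randomly chosen positive case scores higher than a randomly chosen negative case, with ties counted as one half — using stdlib Python and made-up scores:

```python
def auc(labels, scores):
    """AUC as the Mann-Whitney statistic: P(score_pos > score_neg),
    counting ties as one half. labels are 0/1, scores are floats."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```

This pairwise formulation makes clear why AUC suits whole-image MC screening: it measures ranking quality across all decision thresholds rather than accuracy at a single operating point.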
8. Leong YS, Hasikin K, Lai KW, Mohd Zain N, Azizan MM. Microcalcification Discrimination in Mammography Using Deep Convolutional Neural Network: Towards Rapid and Early Breast Cancer Diagnosis. Front Public Health 2022; 10:875305. PMID: 35570962; PMCID: PMC9096221; DOI: 10.3389/fpubh.2022.875305.
Abstract
Breast cancer is among the most common cancers in women, and when it is misdiagnosed or treatment is delayed, the mortality risk is high. Breast microcalcifications are common in breast cancer patients and are an effective early indicator of breast cancer. However, microcalcifications are often missed or wrongly classified during screening due to their small size and scattered appearance in mammogram images. Motivated by this issue, this project proposes an adaptive transfer learning deep convolutional neural network for classifying breast mammogram images with calcification cases for early breast cancer diagnosis and intervention. Mammogram images of breast microcalcifications were used to train several deep neural network models, and their performance was compared. The region-of-interest images were filtered to remove possible artifacts and noise and to enhance image quality before training. Hyperparameters such as the number of epochs and the batch size were tuned to obtain the best possible result. In addition, the performance of the proposed fine-tuned ResNet50 was compared with other state-of-the-art networks, namely ResNet34, VGG16, and AlexNet, using confusion matrices. The results of this study show that the proposed ResNet50 achieves the highest accuracy at 97.58%, followed by ResNet34 at 97.35%, VGG16 at 96.97%, and finally AlexNet at 83.06%.
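The accuracy figures above are derived from confusion matrices. A minimal sketch of reading off accuracy, sensitivity, and specificity from binary counts; the counts in the usage note are made up for illustration, not taken from the study:

```python
def confusion_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts:
    tp/fp/tn/fn are true/false positives and true/false negatives."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),  # recall on the positive class
        "specificity": tn / (tn + fp),
    }
```

For a screening task like microcalcification discrimination, sensitivity and specificity read from the same matrix are often more informative than accuracy alone, since the classes are rarely balanced.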
Affiliation(s)
- Yew Sum Leong
  - Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia
- Khairunnisa Hasikin
  - Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia; Department of Biomedical Engineering, Center for Image and Signal Processing (CISIP), Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia
- Khin Wee Lai
  - Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia
- Norita Mohd Zain
  - Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia
- Muhammad Mokhzaini Azizan
  - Department of Electrical and Electronic Engineering, Faculty of Engineering and Built Environment, Universiti Sains Islam Malaysia, Nilai, Malaysia