1
Bani Ahmad AYA, Alzubi JA, Vasanthan M, Kondaveeti SB, Shreyas J, Priyanka TP. Efficient hybrid heuristic adopted deep learning framework for diagnosing breast cancer using thermography images. Sci Rep 2025; 15:13605. [PMID: 40253418 PMCID: PMC12009285 DOI: 10.1038/s41598-025-96827-5]
Abstract
Breast cancer is among the most dangerous forms of cancer; its aggressive nature and high death rates make the disease life-threatening, so early detection increases the patient's chances of survival. Mammography is the commonly recommended diagnostic technique, but it is expensive and exposes the patient to radiation. Thermography is a less invasive and more affordable technique that is becoming increasingly popular. Considering this, a deep learning-based breast cancer diagnosis approach using thermography images is presented. Initially, thermography images are collected from online sources. The collected images are preprocessed with Contrast Limited Adaptive Histogram Equalization (CLAHE) and contrast-enhancement methods to improve their quality and brightness. Then, optimal binary thresholding is applied to segment the preprocessed images, where the thresholding value is optimized using the developed Rock Hyraxes Dandelion Algorithm Optimization (RHDAO). A newly implemented deep learning structure, StackVRDNet, is used for the subsequent diagnosis from the thermography images. The segmented images are fed to the StackVRDNet framework, which is constructed from the Visual Geometry Group network (VGG16), ResNet, and DenseNet. The relevant features are extracted using VGG16, ResNet, and DenseNet, and a stacked weighted feature pool is obtained from the extracted features, with the weights optimized via RHDAO. The final classification is performed by StackVRDNet, with diagnosis results obtained at the final layers of VGG16, ResNet, and DenseNet; the output with the highest score is taken as the final diagnosis. The parameters within VGG16, ResNet, and DenseNet are likewise optimized via RHDAO to improve the diagnosis results. The simulation outcomes of the developed model achieve 97.05% accuracy and 86.86% precision. The effectiveness of the designed method is analyzed against conventional breast cancer diagnosis models in terms of various performance measures.
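As an illustration of the preprocessing and segmentation stages summarized above, the sketch below applies CLAHE and binary thresholding with OpenCV. This is a hedged reconstruction rather than the authors' code: the input array stands in for a real thermogram, and Otsu's method stands in for the RHDAO-optimized threshold.

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(240, 320), dtype=np.uint8)  # stand-in thermogram

# CLAHE: contrast-limited adaptive histogram equalization
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)

# Binary thresholding; the paper optimizes this value with RHDAO,
# Otsu's method is used here only as a simple stand-in.
thr, mask = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
segmented = cv2.bitwise_and(enhanced, enhanced, mask=mask)
print(f"threshold={thr:.1f}, foreground pixels={int(mask.sum()) // 255}")
```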
Affiliation(s)
- Ahmad Y A Bani Ahmad
  - Department of Accounting and Finance, Faculty of Business, Middle East University, Amman, 11831, Jordan
- Jafar A Alzubi
  - Faculty of Engineering, Al-Balqa Applied University, Salt, 19117, Jordan
- Manimaran Vasanthan
  - Department of Pharmaceutics, SRM College of Pharmacy, Medicine and Health Sciences, SRM Institute of Science and Technology Kattankulathur, Chennai, 603203, Tamilnadu, India
- Suresh Babu Kondaveeti
  - Department of Biochemistry, Symbiosis Medical College for Women, Symbiosis International (Deemed University), Pune, 412115, Maharashtra, India
- J Shreyas
  - Department of Information Technology, Manipal Institute of Technology Bengaluru, Manipal Academy of Higher Education, Manipal, 560064, Karnataka, India
- Thella Preethi Priyanka
  - Department of Computer Science and Engineering, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Thandalam, Chennai, 602105, Tamilnadu, India
2
Zhao B, Song K, Wei DQ, Xiong Y, Ding J. scCobra allows contrastive cell embedding learning with domain adaptation for single cell data integration and harmonization. Commun Biol 2025; 8:233. [PMID: 39948393 PMCID: PMC11825689 DOI: 10.1038/s42003-025-07692-x]
Abstract
The rapid advancement of single-cell technologies has created an urgent need for effective methods to integrate and harmonize single-cell data. Technical and biological variations across studies complicate data integration, while conventional tools often struggle with reliance on gene expression distribution assumptions and over-correction. Here, we present scCobra, a deep generative neural network designed to overcome these challenges through contrastive learning with domain adaptation. scCobra effectively mitigates batch effects, minimizes over-correction, and ensures biologically meaningful data integration without assuming specific gene expression distributions. It enables online label transfer across datasets with batch effects, allowing continuous integration of new data without retraining. Additionally, scCobra supports batch effect simulation, advanced multi-omic integration, and scalable processing of large datasets. By integrating and harmonizing datasets from similar studies, scCobra expands the available data for investigating specific biological problems, improving cross-study comparability, and revealing insights that may be obscured in isolated datasets.
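The training signal described above, contrastive embedding learning combined with domain adaptation, can be sketched in generic PyTorch as an NT-Xent contrastive loss on two augmented views plus a gradient-reversal batch classifier. The encoder sizes, batch count, and synthetic inputs are illustrative assumptions, not scCobra's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x
    @staticmethod
    def backward(ctx, g):
        return -g  # reverse gradients so the encoder confuses the batch classifier

def nt_xent(z1, z2, tau=0.5):
    """Contrastive loss between two augmented views of the same cells."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / tau
    n = z1.size(0)
    sim.fill_diagonal_(float("-inf"))              # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

encoder = nn.Sequential(nn.Linear(2000, 256), nn.ReLU(), nn.Linear(256, 32))
batch_clf = nn.Linear(32, 3)                       # e.g. 3 sequencing batches

x1, x2 = torch.randn(64, 2000), torch.randn(64, 2000)  # two augmented views
batch_ids = torch.randint(0, 3, (64,))
z1, z2 = encoder(x1), encoder(x2)
loss = nt_xent(z1, z2) + F.cross_entropy(batch_clf(GradReverse.apply(z1)), batch_ids)
loss.backward()
```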
Affiliation(s)
- Bowen Zhao
  - State Key Laboratory of Microbial Metabolism, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China
  - Meakins-Christie Laboratories, Department of Medicine, McGill University Health Centre, Montreal, QC, Canada
  - Division of Experimental Medicine, Department of Medicine, McGill University, Montreal, QC, Canada
- Kailu Song
  - Meakins-Christie Laboratories, Department of Medicine, McGill University Health Centre, Montreal, QC, Canada
  - Quantitative Life Sciences, McGill University, Montreal, QC, Canada
- Dong-Qing Wei
  - State Key Laboratory of Microbial Metabolism, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China
- Yi Xiong
  - State Key Laboratory of Microbial Metabolism, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China
- Jun Ding
  - Meakins-Christie Laboratories, Department of Medicine, McGill University Health Centre, Montreal, QC, Canada
  - Division of Experimental Medicine, Department of Medicine, McGill University, Montreal, QC, Canada
  - Quantitative Life Sciences, McGill University, Montreal, QC, Canada
  - School of Computer Science, McGill University, Montreal, QC, Canada
  - Mila-Quebec AI Institute, Montreal, QC, Canada
3
Martínez-Ramírez JM, Carmona C, Ramírez-Expósito MJ, Martínez-Martos JM. Extracting Knowledge from Machine Learning Models to Diagnose Breast Cancer. Life (Basel) 2025; 15:211. [PMID: 40003620 PMCID: PMC11856414 DOI: 10.3390/life15020211]
Abstract
This study explored the application of explainable machine learning models to enhance breast cancer diagnosis using serum biomarkers, in contrast to the many studies that focus on medical images and demographic data. The primary objective was to develop models that are not only accurate but also provide insights into the factors driving predictions, addressing the need for trustworthy AI in healthcare. Several classification models were evaluated, including OneR, JRIP, FURIA, J48, ADTree, and Random Forest, all of which are known for their explainability. The dataset included a variety of biomarkers, such as electrolytes, metal ions, marker proteins, enzymes, lipid profiles, peptide hormones, steroid hormones, and hormone receptors. The Random Forest model achieved the highest accuracy at 99.401%, followed closely by JRIP, FURIA, and ADTree at 98.802%; OneR and J48 achieved 98.204% accuracy. Notably, the models identified oxytocin as a key predictive biomarker, with most models featuring it in their rules. Other significant parameters included GnRH, β-endorphin, vasopressin, IRAP, and APB, as well as factors like iron, cholinesterase, total protein, progesterone, 5-nucleotidase, and BMI, which are considered clinically relevant to breast cancer pathogenesis. This study discusses the roles of the identified parameters in cancer development, underscoring the potential of explainable machine learning models for enhancing early breast cancer diagnosis by focusing on explainability and the use of serum biomarkers. The combination of both can lead to improved early detection and personalized treatments, emphasizing the potential of these methods in clinical settings. The identified markers also provide additional research and therapeutic targets for breast cancer pathogenesis and a deeper understanding of their interactions, advancing personalized approaches to breast cancer management.
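A minimal sketch of the workflow the study relies on (fit an interpretable classifier on serum biomarkers, then inspect which features drive its predictions) is shown below using scikit-learn's Random Forest; the synthetic data and marker names are placeholders for the real biomarker panel.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
markers = ["oxytocin", "GnRH", "beta_endorphin", "vasopressin", "IRAP", "APB",
           "iron", "cholinesterase", "total_protein", "progesterone",
           "nucleotidase_5", "BMI"]
X = pd.DataFrame(rng.normal(size=(167, len(markers))), columns=markers)
y = rng.integers(0, 2, len(X))                     # 0 = benign, 1 = malignant

rf = RandomForestClassifier(n_estimators=500, random_state=0)
print("CV accuracy:", cross_val_score(rf, X, y, cv=10).mean())

# Feature importances are the "knowledge extraction" step: on real data the
# top-ranked columns would correspond to markers such as oxytocin or GnRH.
rf.fit(X, y)
for name, imp in sorted(zip(markers, rf.feature_importances_),
                        key=lambda t: t[1], reverse=True)[:5]:
    print(f"{name:15s} {imp:.3f}")
```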
Affiliation(s)
- Cristobal Carmona
  - Department of Computer Science, University of Jaén, E-23071 Jaén, Spain
  - Andalusian Research Institute in Data Science and Computational Intelligence, DASCI, University of Jaén, E-23071 Jaén, Spain
  - Leicester School of Pharmacy, DeMontfort University, Leicester LE1 7RH, UK
- María Jesús Ramírez-Expósito
  - Experimental and Clinical Physiopathology Research Group CVI-1039, Department of Health Sciences, University of Jaén, E-23071 Jaén, Spain
- José Manuel Martínez-Martos
  - Experimental and Clinical Physiopathology Research Group CVI-1039, Department of Health Sciences, University of Jaén, E-23071 Jaén, Spain
4
Luo L, Wang X, Lin Y, Ma X, Tan A, Chan R, Vardhanabhuti V, Chu WC, Cheng KT, Chen H. Deep Learning in Breast Cancer Imaging: A Decade of Progress and Future Directions. IEEE Rev Biomed Eng 2025; 18:130-151. [PMID: 38265911 DOI: 10.1109/rbme.2024.3357877]
Abstract
Breast cancer has reached the highest incidence rate worldwide among all malignancies since 2020. Breast imaging plays a significant role in early diagnosis and intervention to improve the outcome of breast cancer patients. In the past decade, deep learning has shown remarkable progress in breast cancer imaging analysis, holding great promise in interpreting the rich information and complex context of breast imaging modalities. Considering the rapid improvement in deep learning technology and the increasing severity of breast cancer, it is critical to summarize past progress and identify future challenges to be addressed. This paper provides an extensive review of deep learning-based breast cancer imaging research, covering studies on mammograms, ultrasound, magnetic resonance imaging, and digital pathology images over the past decade. The major deep learning methods and applications on imaging-based screening, diagnosis, treatment response prediction, and prognosis are elaborated and discussed. Drawn from the findings of this survey, we present a comprehensive discussion of the challenges and potential avenues for future research in deep learning-based breast cancer imaging.
5
Shao Z, Cai Y, Hao Y, Hu C, Yu Z, Shen Y, Gao F, Zhang F, Ma W, Zhou Q, Chen J, Lu H. AI-based strategies in breast mass ≤ 2 cm classification with mammography and tomosynthesis. Breast 2024; 78:103805. [PMID: 39321503 PMCID: PMC11462177 DOI: 10.1016/j.breast.2024.103805]
Abstract
PURPOSE To evaluate the diagnostic performance of digital mammography (DM), digital breast tomosynthesis (DBT), and DM combined with DBT using AI-based strategies for breast masses ≤ 2 cm. MATERIALS AND METHODS DM and DBT images of 512 breast masses in 483 patients were acquired from November 2018 to November 2019. Malignant and benign tumours were determined by biopsy with histological analysis and by follow-up within 24 months. Radiomics and deep learning methods were employed to extract breast mass features from the images and to classify masses as benign or malignant. The DM, DBT, and combined DM + DBT images were fed into radiomics and deep learning pipelines to construct the corresponding models. The area under the receiver operating characteristic curve (AUC) was used to estimate model performance. An external dataset of 146 patients from another center, collected from March 2021 to December 2022, was enrolled for external validation. RESULTS In the internal testing dataset, the DM + DBT models based on radiomics and deep learning both showed statistically significantly higher AUCs than the corresponding DM-only and DBT-only models [0.810 (RA-DM), 0.823 (RA-DBT), and 0.869 (RA-DM + DBT), P ≤ 0.001; 0.867 (DL-DM), 0.871 (DL-DBT), and 0.908 (DL-DM + DBT), P = 0.001]. The deep learning models were superior to the radiomics models with only DM (0.867 vs 0.810, P = 0.001), only DBT (0.871 vs 0.823, P = 0.001), and DM + DBT (0.908 vs 0.869, P = 0.003). CONCLUSIONS DBT has clear additional value for diagnosing breast masses smaller than 2 cm compared with DM alone. AI-based methods, especially deep learning, can help achieve excellent performance.
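The modality-fusion comparison reported above can be outlined as follows: train one classifier per input set (DM, DBT, and concatenated DM + DBT features) and compare test AUCs. The random feature arrays are stand-ins for the study's radiomics and deep features, and logistic regression is a deliberate simplification of the models actually used, so AUCs here hover around chance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X_dm, X_dbt = rng.normal(size=(512, 30)), rng.normal(size=(512, 30))
y = rng.integers(0, 2, 512)                       # 0 = benign, 1 = malignant

for name, X in [("DM", X_dm), ("DBT", X_dbt),
                ("DM+DBT", np.hstack([X_dm, X_dbt]))]:
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)
    model = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    auc = roc_auc_score(yte, model.predict_proba(Xte)[:, 1])
    print(f"{name:7s} AUC = {auc:.3f}")
```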
Affiliation(s)
- Zhenzhen Shao
  - Department of Breast Imaging, Tianjin Medical University Cancer Institute & Hospital, National Clinical Research Center for Cancer, Tianjin's Clinical Research Center for Cancer, Key Laboratory of Breast Cancer Prevention and Therapy, Tianjin Medical University, Ministry of Education, Key Laboratory of Cancer Prevention and Therapy, Tianjin, PR China
- Yuxin Cai
  - Department of Breast Imaging, Tianjin Medical University Cancer Institute & Hospital, National Clinical Research Center for Cancer, Tianjin's Clinical Research Center for Cancer, Key Laboratory of Breast Cancer Prevention and Therapy, Tianjin Medical University, Ministry of Education, Key Laboratory of Cancer Prevention and Therapy, Tianjin, PR China
- Yujuan Hao
  - Department of Breast Imaging, Tianjin Medical University Cancer Institute & Hospital, National Clinical Research Center for Cancer, Tianjin's Clinical Research Center for Cancer, Key Laboratory of Breast Cancer Prevention and Therapy, Tianjin Medical University, Ministry of Education, Key Laboratory of Cancer Prevention and Therapy, Tianjin, PR China
- Congyi Hu
  - Department of Breast Imaging, Tianjin Medical University Cancer Institute & Hospital, National Clinical Research Center for Cancer, Tianjin's Clinical Research Center for Cancer, Key Laboratory of Breast Cancer Prevention and Therapy, Tianjin Medical University, Ministry of Education, Key Laboratory of Cancer Prevention and Therapy, Tianjin, PR China
- Ziling Yu
  - Department of Breast Imaging, Tianjin Medical University Cancer Institute & Hospital, National Clinical Research Center for Cancer, Tianjin's Clinical Research Center for Cancer, Key Laboratory of Breast Cancer Prevention and Therapy, Tianjin Medical University, Ministry of Education, Key Laboratory of Cancer Prevention and Therapy, Tianjin, PR China
- Yue Shen
  - Department of Breast Imaging, Tianjin Medical University Cancer Institute & Hospital, National Clinical Research Center for Cancer, Tianjin's Clinical Research Center for Cancer, Key Laboratory of Breast Cancer Prevention and Therapy, Tianjin Medical University, Ministry of Education, Key Laboratory of Cancer Prevention and Therapy, Tianjin, PR China
- Fei Gao
  - School of Computer Science, Peking University, Beijing, PR China
- Wenjuan Ma
  - Department of Breast Imaging, Tianjin Medical University Cancer Institute & Hospital, National Clinical Research Center for Cancer, Tianjin's Clinical Research Center for Cancer, Key Laboratory of Breast Cancer Prevention and Therapy, Tianjin Medical University, Ministry of Education, Key Laboratory of Cancer Prevention and Therapy, Tianjin, PR China
- Qian Zhou
  - Department of Breast Imaging, The Affiliated Hospital of Qingdao University, Qingdao, PR China
- Jingjing Chen
  - Department of Breast Imaging, The Affiliated Hospital of Qingdao University, Qingdao, PR China
- Hong Lu
  - Department of Breast Imaging, Tianjin Medical University Cancer Institute & Hospital, National Clinical Research Center for Cancer, Tianjin's Clinical Research Center for Cancer, Key Laboratory of Breast Cancer Prevention and Therapy, Tianjin Medical University, Ministry of Education, Key Laboratory of Cancer Prevention and Therapy, Tianjin, PR China
6
Abbas S, Asif M, Rehman A, Alharbi M, Khan MA, Elmitwally N. Emerging research trends in artificial intelligence for cancer diagnostic systems: A comprehensive review. Heliyon 2024; 10:e36743. [PMID: 39263113 PMCID: PMC11387343 DOI: 10.1016/j.heliyon.2024.e36743]
Abstract
This review article offers a comprehensive analysis of current developments in the application of machine learning for cancer diagnostic systems. The effectiveness of machine learning approaches has become evident in improving the accuracy and speed of cancer detection, addressing the complexities of large and intricate medical datasets. This review aims to evaluate modern machine learning techniques employed in cancer diagnostics, covering various algorithms, including supervised and unsupervised learning, as well as deep learning and federated learning methodologies. Data acquisition and preprocessing methods for different types of data, such as imaging, genomics, and clinical records, are discussed. The paper also examines feature extraction and selection techniques specific to cancer diagnosis. Model training, evaluation metrics, and performance comparison methods are explored. Additionally, the review provides insights into the applications of machine learning in various cancer types and discusses challenges related to dataset limitations, model interpretability, multi-omics integration, and ethical considerations. The emerging field of explainable artificial intelligence (XAI) in cancer diagnosis is highlighted, emphasizing specific XAI techniques proposed to improve cancer diagnostics. These techniques include interactive visualization of model decisions and feature importance analysis tailored for enhanced clinical interpretation, aiming to enhance both diagnostic accuracy and transparency in medical decision-making. The paper concludes by outlining future directions, including personalized medicine, federated learning, deep learning advancements, and ethical considerations. This review aims to guide researchers, clinicians, and policymakers in the development of efficient and interpretable machine learning-based cancer diagnostic systems.
Affiliation(s)
- Sagheer Abbas
  - Department of Computer Science, Prince Mohammad Bin Fahd University, Al-Khobar, KSA
- Muhammad Asif
  - Department of Computer Science, Education University Lahore, Attock Campus, Pakistan
- Abdur Rehman
  - School of Computer Science, National College of Business Administration and Economics, Lahore, 54000, Pakistan
- Meshal Alharbi
  - Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, 11942, Alkharj, Saudi Arabia
- Muhammad Adnan Khan
  - Riphah School of Computing & Innovation, Faculty of Computing, Riphah International University, Lahore Campus, Lahore, 54000, Pakistan
  - School of Computing, Skyline University College, University City Sharjah, 1797, Sharjah, United Arab Emirates
  - Department of Software, Faculty of Artificial Intelligence and Software, Gachon University, Seongnam-si, 13120, Republic of Korea
- Nouh Elmitwally
  - Department of Computer Science, Faculty of Computers and Artificial Intelligence, Cairo University, Giza, 12613, Egypt
  - School of Computing and Digital Technology, Birmingham City University, Birmingham, B4 7XG, UK
7
Gomi T, Ishihara K, Yamada S, Koibuchi Y. Pre-Reconstruction Processing with the Cycle-Consistent Generative Adversarial Network Combined with Attention Gate to Improve Image Quality in Digital Breast Tomosynthesis. Diagnostics (Basel) 2024; 14:1957. [PMID: 39272741 PMCID: PMC11394014 DOI: 10.3390/diagnostics14171957]
Abstract
The current study proposed and evaluated the "residual squeeze and excitation attention gate" (rSEAG), a novel network that can improve image quality by reducing distortion attributed to artifacts. The method was established by modifying the generator of a Cycle Generative Adversarial Network (cycleGAN) applied to projection data for pre-reconstruction processing in digital breast tomosynthesis. Residual squeeze and excitation was installed in the bridge of the generator network, and the attention gate was installed in the skip connections between the encoder and decoder. Based on the radiation dose indices (exposure index and deviation index) incident on the detector, the cases approved by the ethics committee and used for the study were classified as reference (675 projection images) and object (675 projection images); the unsupervised data contained a mixture of cases with and without masses. The networks were trained using cycleGAN with rSEAG and with the conventional generators (ResUNet and U-Net). For testing, prediction was performed on cases (60 projection images) that were not used for training. Images were generated by filtered backprojection reconstruction (kernel: Ramachandran and Lakshminarayanan) from the test projection data, with and without pre-reconstruction processing (evaluation: in-focus plane). Distortion was evaluated using perception-based image quality evaluation (PIQE) analysis, texture analysis (features: "Homogeneity" and "Contrast"), and a statistical model with a Gumbel distribution. rSEAG yielded a low PIQE value. Texture analysis showed that rSEAG and a network without cycleGAN were similar in terms of the "Contrast" feature; in dense breasts, ResUNet had the lowest "Contrast" feature and U-Net varied between cases. Regarding the maximal variations in the Gumbel plot, rSEAG reduced the high-frequency ripple artifacts. In this study, rSEAG could reduce distortion and ripple artifacts.
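The two components named in the method, squeeze-and-excitation and the attention gate, can be sketched in generic PyTorch as below. Channel sizes are illustrative, the gate assumes equal spatial sizes for simplicity, and this is not the authors' rSEAG implementation.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: global pooling -> bottleneck -> channel weights."""
    def __init__(self, ch, r=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(ch, ch // r), nn.ReLU(),
            nn.Linear(ch // r, ch), nn.Sigmoid())
    def forward(self, x):
        w = self.fc(x).view(x.size(0), -1, 1, 1)   # per-channel weights
        return x * w                               # channel re-weighting

class AttentionGate(nn.Module):
    """Additive attention on a skip connection, gated by the decoder signal."""
    def __init__(self, g_ch, x_ch, inter_ch):
        super().__init__()
        self.wg = nn.Conv2d(g_ch, inter_ch, 1)
        self.wx = nn.Conv2d(x_ch, inter_ch, 1)
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, 1), nn.Sigmoid())
    def forward(self, g, x):                       # g: decoder gate, x: skip
        a = self.psi(torch.relu(self.wg(g) + self.wx(x)))
        return x * a                               # suppress irrelevant regions

se, gate = SEBlock(256), AttentionGate(128, 128, 64)
print(se(torch.randn(1, 256, 16, 16)).shape,
      gate(torch.randn(1, 128, 32, 32), torch.randn(1, 128, 32, 32)).shape)
```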
Affiliation(s)
- Tsutomu Gomi
  - School of Allied Health Sciences, Kitasato University, Sagamihara 252-0373, Kanagawa, Japan
- Kotomi Ishihara
  - Department of Radiology, NHO Takasaki General Medical Center, Takasaki 370-0829, Gunma, Japan
- Satoko Yamada
  - School of Allied Health Sciences, Kitasato University, Sagamihara 252-0373, Kanagawa, Japan
- Yukio Koibuchi
  - Department of Breast and Endocrine Surgery, NHO Takasaki General Medical Center, Takasaki 370-0829, Gunma, Japan
8
Xu Q, Zhou LL, Xing C, Xu X, Feng Y, Lv H, Zhao F, Chen YC, Cai Y. Tinnitus classification based on resting-state functional connectivity using a convolutional neural network architecture. Neuroimage 2024; 290:120566. [PMID: 38467345 DOI: 10.1016/j.neuroimage.2024.120566]
Abstract
OBJECTIVES Many studies have investigated aberrant functional connectivity (FC) using resting-state functional MRI (rs-fMRI) in subjective tinnitus patients. However, no studies have verified the efficacy of resting-state FC as a diagnostic imaging marker. We established a convolutional neural network (CNN) model based on rs-fMRI FC to distinguish tinnitus patients from healthy controls, providing guidance and fast diagnostic tools for the clinical diagnosis of subjective tinnitus. METHODS A CNN architecture was trained on rs-fMRI data from 100 tinnitus patients and 100 healthy controls using an asymmetric convolutional layer. Additionally, a traditional machine learning model and a transfer learning model were included for comparison with the CNN, and each of the three models was tested on three different brain atlases. RESULTS Of the three models, the CNN model outperformed the other two models with the highest area under the curve, especially on the Dos_160 atlas (AUC = 0.944). Meanwhile, the model with the best classification performance highlights the crucial role of the default mode network, salience network, and sensorimotor network in distinguishing between normal controls and patients with subjective tinnitus. CONCLUSION Our CNN model could appropriately tackle the diagnosis of tinnitus patients using rs-fMRI and confirmed the diagnostic value of FC as measured by rs-fMRI.
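One plausible reading of the asymmetric convolutional layer mentioned above is a pair of row-wise (1×N) and column-wise (N×1) kernels over the N×N functional-connectivity matrix, sketched below; the layer sizes are assumptions, and N = 160 simply mirrors the Dos_160 atlas setting.

```python
import torch
import torch.nn as nn

N = 160
model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=(1, N)),  # 1xN kernel: mixes each row's edges
    nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=(N, 1)), # Nx1 kernel: mixes across regions
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(64, 2),                      # tinnitus vs. healthy control
)
fc_matrix = torch.randn(8, 1, N, N)        # batch of connectivity matrices
print(model(fc_matrix).shape)              # torch.Size([8, 2])
```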
Affiliation(s)
- Qianhui Xu
  - Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, 107 West Yanjiang Road, Guangzhou, Guangdong Province 510120, China
- Lei-Lei Zhou
  - Department of Radiology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing 210006, China
- Chunhua Xing
  - Department of Radiology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing 210006, China
- Xiaomin Xu
  - Department of Radiology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing 210006, China
- Yuan Feng
  - Department of Radiology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing 210006, China
- Han Lv
  - Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Fei Zhao
  - Department of Speech and Language Therapy and Hearing Science, Cardiff Metropolitan University, Cardiff, UK
- Yu-Chen Chen
  - Department of Radiology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing 210006, China
- Yuexin Cai
  - Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, 107 West Yanjiang Road, Guangzhou, Guangdong Province 510120, China
9
Sobiecki A, Hadjiiski LM, Chan HP, Samala RK, Zhou C, Stojanovska J, Agarwal PP. Detection of Severe Lung Infection on Chest Radiographs of COVID-19 Patients: Robustness of AI Models across Multi-Institutional Data. Diagnostics (Basel) 2024; 14:341. [PMID: 38337857 PMCID: PMC10855789 DOI: 10.3390/diagnostics14030341]
Abstract
The diagnosis of severe COVID-19 lung infection is important because it carries a higher risk for the patient and requires prompt treatment with oxygen therapy and hospitalization, while those with less severe lung infection often stay on observation. Also, severe infections are more likely to leave long-standing residual changes in the lungs and may need follow-up imaging. We have developed deep learning neural network models for classifying severe vs. non-severe lung infections in COVID-19 patients on chest radiographs (CXR). A deep learning U-Net model was developed to segment the lungs. Inception-v1 and Inception-v4 models were trained for the classification of severe vs. non-severe COVID-19 infection. Four CXR datasets from multi-country and multi-institutional sources were used to develop and evaluate the models. The combined dataset consisted of 5748 cases and 6193 CXR images with physicians' severity ratings as the reference standard. The area under the receiver operating characteristic curve (AUC) was used to evaluate model performance. We studied the reproducibility of classification performance using different combinations of training and validation data sets, and evaluated the generalizability of the trained deep learning models using both independent internal and external test sets. The Inception-v1 based models achieved AUCs ranging between 0.81 ± 0.02 and 0.84 ± 0.0, while the Inception-v4 models achieved AUCs in the range of 0.85 ± 0.06 to 0.89 ± 0.01 on the independent test sets. These results demonstrate the promise of using deep learning models in differentiating COVID-19 patients with severe from non-severe lung infection on chest radiographs.
Affiliation(s)
- André Sobiecki
  - Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Lubomir M. Hadjiiski
  - Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Heang-Ping Chan
  - Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Ravi K. Samala
  - Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Chuan Zhou
  - Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Prachi P. Agarwal
  - Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
10
Gao M, Fessler JA, Chan HP. Model-based deep CNN-regularized reconstruction for digital breast tomosynthesis with a task-based CNN image assessment approach. Phys Med Biol 2023; 68:245024. [PMID: 37988758 PMCID: PMC10719554 DOI: 10.1088/1361-6560/ad0eb4]
Abstract
Objective. Digital breast tomosynthesis (DBT) is a quasi-three-dimensional breast imaging modality that improves breast cancer screening and diagnosis because it reduces fibroglandular tissue overlap compared with 2D mammography. However, DBT suffers from noise and blur problems that can lower the detectability of subtle signs of cancers such as microcalcifications (MCs). Our goal is to improve the image quality of DBT in terms of image noise and MC conspicuity. Approach. We proposed a model-based deep convolutional neural network (deep CNN or DCNN) regularized reconstruction (MDR) for DBT. It combined a model-based iterative reconstruction (MBIR) method that models the detector blur and correlated noise of the DBT system and the learning-based DCNN denoiser using the regularization-by-denoising framework. To facilitate the task-based image quality assessment, we also proposed two DCNN tools for image evaluation: a noise estimator (CNN-NE) trained to estimate the root-mean-square (RMS) noise of the images, and an MC classifier (CNN-MC) as a DCNN model observer to evaluate the detectability of clustered MCs in human subject DBTs. Main results. We demonstrated the efficacies of CNN-NE and CNN-MC on a set of physical phantom DBTs. The MDR method achieved low RMS noise and the highest detection area under the receiver operating characteristic curve (AUC) rankings evaluated by CNN-NE and CNN-MC among the reconstruction methods studied on an independent test set of human subject DBTs. Significance. The CNN-NE and CNN-MC may serve as a cost-effective surrogate for human observers to provide task-specific metrics for image quality comparisons. The proposed reconstruction method shows the promise of combining physics-based MBIR and learning-based DCNNs for DBT image reconstruction, which may potentially lead to lower dose and higher sensitivity and specificity for MC detection in breast cancer screening and diagnosis.
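The regularization-by-denoising (RED) framework that MDR builds on admits a compact worked example: each iteration descends the data-fit gradient plus lam * (x - D(x)). The sketch below uses a toy linear forward operator and a Gaussian filter in place of the DBT system model and the trained DCNN denoiser.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def red_step(x, A, y, denoiser, eta=1e-3, lam=0.1):
    """One RED iteration: x <- x - eta * (A^T(Ax - y) + lam * (x - D(x)))."""
    grad_data = A.T @ (A @ x - y)            # least-squares data-fit gradient
    grad_prior = lam * (x - denoiser(x))     # RED regularization gradient
    return x - eta * (grad_data + grad_prior)

rng = np.random.default_rng(0)
A = rng.normal(size=(80, 100))               # toy forward (projection) operator
x_true = rng.normal(size=100)
y = A @ x_true + 0.01 * rng.normal(size=80)  # noisy measurements

x = np.zeros(100)
for _ in range(200):
    x = red_step(x, A, y, lambda v: gaussian_filter(v, sigma=1.0))
print("data residual:", np.linalg.norm(A @ x - y))
```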
Affiliation(s)
- Mingjie Gao
  - Department of Radiology, University of Michigan, Ann Arbor, MI 48109, United States of America
  - Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109, United States of America
- Jeffrey A Fessler
  - Department of Radiology, University of Michigan, Ann Arbor, MI 48109, United States of America
  - Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109, United States of America
- Heang-Ping Chan
  - Department of Radiology, University of Michigan, Ann Arbor, MI 48109, United States of America
11
Ahn JS, Shin S, Yang SA, Park EK, Kim KH, Cho SI, Ock CY, Kim S. Artificial Intelligence in Breast Cancer Diagnosis and Personalized Medicine. J Breast Cancer 2023; 26:405-435. [PMID: 37926067 PMCID: PMC10625863 DOI: 10.4048/jbc.2023.26.e45]
Abstract
Breast cancer is a significant cause of cancer-related mortality in women worldwide. Early and precise diagnosis is crucial, and clinical outcomes can be markedly enhanced. The rise of artificial intelligence (AI) has ushered in a new era, notably in image analysis, paving the way for major advancements in breast cancer diagnosis and individualized treatment regimens. In the diagnostic workflow for patients with breast cancer, the role of AI encompasses screening, diagnosis, staging, biomarker evaluation, prognostication, and therapeutic response prediction. Although its potential is immense, its complete integration into clinical practice is challenging. Particularly, these challenges include the imperatives for extensive clinical validation, model generalizability, navigating the "black-box" conundrum, and pragmatic considerations of embedding AI into everyday clinical environments. In this review, we comprehensively explored the diverse applications of AI in breast cancer care, underlining its transformative promise and existing impediments. In radiology, we specifically address AI in mammography, tomosynthesis, risk prediction models, and supplementary imaging methods, including magnetic resonance imaging and ultrasound. In pathology, our focus is on AI applications for pathologic diagnosis, evaluation of biomarkers, and predictions related to genetic alterations, treatment response, and prognosis in the context of breast cancer diagnosis and treatment. Our discussion underscores the transformative potential of AI in breast cancer management and emphasizes the importance of focused research to realize the full spectrum of benefits of AI in patient care.
Affiliation(s)
- Seokhwi Kim
  - Department of Pathology, Ajou University School of Medicine, Suwon, Korea
  - Department of Biomedical Sciences, Ajou University Graduate School of Medicine, Suwon, Korea
12
Kim K, Lee JH, Je Oh S, Chung MJ. AI-based computer-aided diagnostic system of chest digital tomography synthesis: Demonstrating comparative advantage with X-ray-based AI systems. Comput Methods Programs Biomed 2023; 240:107643. [PMID: 37348439 DOI: 10.1016/j.cmpb.2023.107643]
Abstract
BACKGROUND Compared with chest X-ray (CXR) imaging, which is a single image projected from the front of the patient, chest digital tomosynthesis (CDTS) imaging can be more advantageous for lung lesion detection because it acquires multiple images projected from multiple angles of the patient. Various clinical comparative analysis and verification studies have been reported to demonstrate this, but there are no artificial intelligence (AI)-based comparative analysis studies. Existing AI-based computer-aided detection (CAD) systems for lung lesion diagnosis have been developed mainly on CXR images; a CAD based on CDTS, which uses multi-angle images of the patient, has not been proposed and verified for its usefulness compared to CXR-based counterparts. OBJECTIVE This study develops and tests a CDTS-based AI CAD system to detect lung lesions to demonstrate performance improvements compared to CXR-based AI CAD. METHODS We used multiple (five) projection images as input for the CDTS-based AI model and a single projection image as input for the CXR-based AI model to compare and evaluate the performance between models. Multiple/single projection input images were obtained by virtual projection on the three-dimensional (3D) stack of computed tomography (CT) slices of each patient's lungs, from which the bed area was removed. The multiple images result from projections from the front and from 30°/60° to the left and right; the frontal projection was used as the input for the CXR-based AI model, while the CDTS-based AI model used all five projected images. The proposed CDTS-based AI model consisted of five AI models that received images from each of the five directions and obtained the final prediction through an ensemble of the five models. Each model used WideResNet-50. To train and evaluate the CXR- and CDTS-based AI models, 500 healthy, 206 tuberculosis, and 242 pneumonia cases were used, and three-fold cross-validation was applied. RESULTS The proposed CDTS-based AI CAD system yielded sensitivities of 0.782 and 0.785 and accuracies of 0.895 and 0.837 for the (binary classification) performance of detecting tuberculosis and pneumonia, respectively, against normal subjects. These results exceed the sensitivities of 0.728 and 0.698 and accuracies of 0.874 and 0.826 for detecting tuberculosis and pneumonia with the CXR-based AI CAD, which uses only a single frontal projection image. We found that CDTS-based AI CAD improved the sensitivity for tuberculosis and pneumonia by 5.4% and 8.7%, respectively, compared to CXR-based AI CAD, without loss of accuracy. CONCLUSIONS This study comparatively demonstrates that CDTS-based AI CAD can outperform CXR-based AI CAD. These results suggest that the clinical application of CDTS can be enhanced. Our code is available at https://github.com/kskim-phd/CDTS-CAD-P.
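The five-view ensemble described in METHODS can be sketched with torchvision as one WideResNet-50 per projection angle with averaged predictions; torchvision's wide_resnet50_2 stands in for the authors' exact backbone configuration, and the two-class head and input sizes are illustrative.

```python
import torch
import torch.nn as nn
from torchvision.models import wide_resnet50_2

def make_view_model(n_classes=2):
    m = wide_resnet50_2(weights=None)          # one backbone per view
    m.fc = nn.Linear(m.fc.in_features, n_classes)
    return m

views = nn.ModuleList(make_view_model() for _ in range(5))  # front, +/-30, +/-60 deg

def ensemble_predict(images):                  # images: list of 5 (B,3,H,W) tensors
    logits = torch.stack([m(x) for m, x in zip(views, images)])
    return logits.mean(dim=0).softmax(dim=1)   # average over the 5 projections

batch = [torch.randn(2, 3, 224, 224) for _ in range(5)]
print(ensemble_predict(batch).shape)           # torch.Size([2, 2])
```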
Affiliation(s)
- Kyungsu Kim
  - Medical AI Research Center, Research Institute for Future Medicine, Samsung Medical Center, Seoul 06351, Republic of Korea
  - Department of Data Convergence and Future Medicine, Sungkyunkwan University School of Medicine, Seoul 06351, Republic of Korea
- Ju Hwan Lee
  - Department of Health Sciences and Technology, SAIHST, Sungkyunkwan University, Seoul 06351, Republic of Korea
- Seong Je Oh
  - Department of Health Sciences and Technology, SAIHST, Sungkyunkwan University, Seoul 06351, Republic of Korea
- Myung Jin Chung
  - Medical AI Research Center, Research Institute for Future Medicine, Samsung Medical Center, Seoul 06351, Republic of Korea
  - Department of Data Convergence and Future Medicine, Sungkyunkwan University School of Medicine, Seoul 06351, Republic of Korea
  - Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul 06351, Republic of Korea
13
Ren Y, Liu X, Ge J, Liang Z, Xu X, Grimm LJ, Go J, Marks JR, Lo JY. Ipsilateral Lesion Detection Refinement for Tomosynthesis. IEEE Trans Med Imaging 2023; 42:3080-3090. [PMID: 37227903 PMCID: PMC11033619 DOI: 10.1109/tmi.2023.3280135]
Abstract
Computer-aided detection (CAD) frameworks for breast cancer screening have been researched for several decades. Early adoption of deep-learning models in CAD frameworks has shown greatly improved detection performance compared to traditional CAD on single-view images. Recently, studies have improved performance by merging information from multiple views within each screening exam. Clinically, the integration of lesion correspondence during screening is a complicated decision process that depends on the correct execution of several referencing steps. However, most multi-view CAD frameworks are deep-learning-based black-box techniques. Fully end-to-end designs make it very difficult to analyze model behaviors and fine-tune performance. More importantly, the black-box nature of the techniques discourages clinical adoption due to the lack of explicit reasoning for each multi-view referencing step. Therefore, there is a need for a multi-view detection framework that can not only detect cancers accurately but also provide step-by-step, multi-view reasoning. In this work, we present Ipsilateral-Matching-Refinement Networks (IMR-Net) for digital breast tomosynthesis (DBT) lesion detection across multiple views. Our proposed framework adaptively refines the single-view detection scores based on explicit ipsilateral lesion matching. IMR-Net is built on a robust, single-view detection CAD pipeline with a commercial development DBT dataset of 24675 DBT volumetric views from 8034 exams. Performance is measured using location-based, case-level receiver operating characteristic (ROC) and case-level free-response ROC (FROC) analysis.
14
Sun D, Hadjiiski L, Gormley J, Chan HP, Caoili EM, Cohan RH, Alva A, Gulani V, Zhou C. Survival Prediction of Patients with Bladder Cancer after Cystectomy Based on Clinical, Radiomics, and Deep-Learning Descriptors. Cancers (Basel) 2023; 15:4372. [PMID: 37686647 PMCID: PMC10486459 DOI: 10.3390/cancers15174372]
Abstract
Accurate survival prediction for bladder cancer patients who have undergone radical cystectomy can improve their treatment management. However, the existing predictive models do not take advantage of both clinical and radiological imaging data. This study aimed to fill this gap by developing an approach that leverages the strengths of clinical (C), radiomics (R), and deep-learning (D) descriptors to improve survival prediction. The dataset comprised 163 patients, including clinical and histopathological information and CT urography scans. The data were divided by patient into training, validation, and test sets. We analyzed the clinical data by a nomogram and the image data by radiomics and deep-learning models. The descriptors were input into a back-propagation neural network (BPNN) model for survival prediction. The AUCs on the test set were (C): 0.82 ± 0.06, (R): 0.73 ± 0.07, (D): 0.71 ± 0.07, (CR): 0.86 ± 0.05, (CD): 0.86 ± 0.05, and (CRD): 0.87 ± 0.05. The predictions based on D and CRD descriptors showed a significant difference (p = 0.007). For Kaplan-Meier survival analysis, the deceased and alive groups were stratified successfully by C (p < 0.001) and CRD (p < 0.001), with CRD predicting the alive group more accurately. The results highlight the potential of combining C, R, and D descriptors to accurately predict the survival of bladder cancer patients after cystectomy.
Affiliation(s)
- Di Sun
  - Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Lubomir Hadjiiski
  - Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- John Gormley
  - Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Heang-Ping Chan
  - Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Elaine M. Caoili
  - Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Richard H. Cohan
  - Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Ajjai Alva
  - Department of Internal Medicine-Hematology/Oncology, University of Michigan, Ann Arbor, MI 48109, USA
- Vikas Gulani
  - Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Chuan Zhou
  - Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
15
Müller-Franzes G, Niehues JM, Khader F, Arasteh ST, Haarburger C, Kuhl C, Wang T, Han T, Nolte T, Nebelung S, Kather JN, Truhn D. A multimodal comparison of latent denoising diffusion probabilistic models and generative adversarial networks for medical image synthesis. Sci Rep 2023; 13:12098. [PMID: 37495660 PMCID: PMC10372018 DOI: 10.1038/s41598-023-39278-0]
Abstract
Although generative adversarial networks (GANs) can produce large datasets, their limited diversity and fidelity have been recently addressed by denoising diffusion probabilistic models (DDPMs), which have demonstrated superiority in natural image synthesis. In this study, we introduce Medfusion, a conditional latent DDPM designed for medical image generation, and evaluate its performance against GANs, which currently represent the state-of-the-art. Medfusion was trained and compared with StyleGAN-3 using fundoscopy images from the AIROGS dataset, radiographs from the CheXpert dataset, and histopathology images from the CRCDX dataset. Based on previous studies, Progressively Growing GAN (ProGAN) and Conditional GAN (cGAN) were used as additional baselines on the CheXpert and CRCDX datasets, respectively. Medfusion exceeded GANs in terms of diversity (recall), achieving better scores of 0.40 compared to 0.19 in the AIROGS dataset, 0.41 compared to 0.02 (cGAN) and 0.24 (StyleGAN-3) in the CRCDX dataset, and 0.32 compared to 0.17 (ProGAN) and 0.08 (StyleGAN-3) in the CheXpert dataset. Furthermore, Medfusion exhibited equal or higher fidelity (precision) across all three datasets. Our study shows that Medfusion constitutes a promising alternative to GAN-based models for generating high-quality medical images, leading to improved diversity and fewer artifacts in the generated images.
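The fidelity (precision) and diversity (recall) scores quoted above are k-NN manifold estimates computed in a feature space. A toy reconstruction of that style of metric is sketched below on random vectors; k and the feature dimension are arbitrary choices, not values from the paper.

```python
import numpy as np
from scipy.spatial.distance import cdist

def knn_radii(feats, k=3):
    d = cdist(feats, feats)
    np.fill_diagonal(d, np.inf)
    return np.sort(d, axis=1)[:, k - 1]       # distance to k-th nearest neighbor

def coverage(a, b, k=3):
    """Fraction of points in `a` that fall inside the k-NN manifold of `b`."""
    return float((cdist(a, b) <= knn_radii(b, k)[None, :]).any(axis=1).mean())

rng = np.random.default_rng(0)
real = rng.normal(size=(500, 64))              # stand-in for real image features
fake = rng.normal(size=(500, 64))              # stand-in for generated features
print("precision:", coverage(fake, real))      # fidelity of generated samples
print("recall:   ", coverage(real, fake))      # diversity covered by the model
```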
Affiliation(s)
- Gustav Müller-Franzes
  - Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Firas Khader
  - Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Soroosh Tayebi Arasteh
  - Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Christiane Kuhl
  - Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Tianci Wang
  - Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Tianyu Han
  - Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Teresa Nolte
  - Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Sven Nebelung
  - Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Jakob Nikolas Kather
  - Department of Medicine III, University Hospital Aachen, Aachen, Germany
  - Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany
- Daniel Truhn
  - Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
16
Li H, Drukker K, Hu Q, Whitney HM, Fuhrman JD, Giger ML. Predicting intensive care need for COVID-19 patients using deep learning on chest radiography. J Med Imaging (Bellingham) 2023; 10:044504. [PMID: 37608852 PMCID: PMC10440543 DOI: 10.1117/1.jmi.10.4.044504]
Abstract
Purpose Image-based prediction of coronavirus disease 2019 (COVID-19) severity and resource needs can be an important means to address the COVID-19 pandemic. In this study, we propose an artificial intelligence/machine learning (AI/ML) COVID-19 prognosis method to predict patients' needs for intensive care by analyzing chest X-ray radiography (CXR) images using deep learning. Approach The dataset consisted of 8357 CXR exams from 5046 COVID-19-positive patients as confirmed by reverse transcription polymerase chain reaction (RT-PCR) tests for the SARS-CoV-2 virus with a training/validation/test split of 64%/16%/20% on a by patient level. Our model involved a DenseNet121 network with a sequential transfer learning technique employed to train on a sequence of gradually more specific and complex tasks: (1) fine-tuning a model pretrained on ImageNet using a previously established CXR dataset with a broad spectrum of pathologies; (2) refining on another established dataset to detect pneumonia; and (3) fine-tuning using our in-house training/validation datasets to predict patients' needs for intensive care within 24, 48, 72, and 96 h following the CXR exams. The classification performances were evaluated on our independent test set (CXR exams of 1048 patients) using the area under the receiver operating characteristic curve (AUC) as the figure of merit in the task of distinguishing between those COVID-19-positive patients who required intensive care following the imaging exam and those who did not. Results Our proposed AI/ML model achieved an AUC (95% confidence interval) of 0.78 (0.74, 0.81) when predicting the need for intensive care 24 h in advance, and at least 0.76 (0.73, 0.80) for 48 h or more in advance using predictions based on the AI prognostic marker derived from CXR images. Conclusions This AI/ML prediction model for patients' needs for intensive care has the potential to support both clinical decision-making and resource management.
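The sequential transfer-learning recipe in the Approach can be sketched as three successive fine-tuning stages of a DenseNet121. The dummy loaders below stand in for the broad-pathology CXR set, the pneumonia set, and the in-house ICU-need labels; epoch counts, head sizes, and the random initialization are assumptions made to keep the sketch self-contained.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import densenet121

model = densenet121(weights=None)  # in practice: weights="IMAGENET1K_V1"

def fine_tune(model, n_classes, loader, epochs=1, lr=1e-4):
    """Swap the classification head, then train the whole network."""
    model.classifier = nn.Linear(model.classifier.in_features, n_classes)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            nn.functional.cross_entropy(model(x), y).backward()
            opt.step()
    return model

def dummy_loader(n_classes):       # stand-in for the real CXR datasets
    x = torch.randn(8, 3, 224, 224)
    y = torch.randint(0, n_classes, (8,))
    return DataLoader(TensorDataset(x, y), batch_size=4)

model = fine_tune(model, 14, dummy_loader(14))  # stage 1: broad CXR pathologies
model = fine_tune(model, 2, dummy_loader(2))    # stage 2: pneumonia detection
model = fine_tune(model, 2, dummy_loader(2))    # stage 3: ICU need within 24-96 h
```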
Affiliation(s)
- Hui Li
  - The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Karen Drukker
  - The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Qiyuan Hu
  - The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Heather M. Whitney
  - The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Jordan D. Fuhrman
  - The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Maryellen L. Giger
  - The University of Chicago, Department of Radiology, Chicago, Illinois, United States
17
Murtas F, Landoni V, Ordòñez P, Greco L, Ferranti FR, Russo A, Perracchio L, Vidiri A. Clinical-radiomic models based on digital breast tomosynthesis images: a preliminary investigation of a predictive tool for cancer diagnosis. Front Oncol 2023; 13:1152158. [PMID: 37251915 PMCID: PMC10213670 DOI: 10.3389/fonc.2023.1152158]
Abstract
Objective This study aimed to develop a clinical-radiomic model based on radiomic features extracted from digital breast tomosynthesis (DBT) images and clinical factors that may help to discriminate between benign and malignant breast lesions. Materials and methods A total of 150 patients were included in this study. DBT images acquired in the setting of a screening protocol were used. Lesions were delineated by two expert radiologists. Malignancy was always confirmed by histopathological data. The data were randomly divided into training and validation sets with an 80:20 ratio. A total of 58 radiomic features were extracted from each lesion using the LIFEx software. Three key feature-selection methods were implemented in Python: (1) K best (KB), (2) sequential forward selection (SFS), and (3) Random Forest (RF). A model was then produced for each subset of seven variables using a machine-learning algorithm that exploits Random Forest classification based on the Gini index. Results All three clinical-radiomic models showed significant differences (p < 0.05) between malignant and benign tumors. The area under the curve (AUC) values of the models obtained with the three feature-selection methods were 0.72 [0.64, 0.80], 0.72 [0.64, 0.80], and 0.74 [0.66, 0.82] for KB, SFS, and RF, respectively. Conclusion The clinical-radiomic models developed using radiomic features from DBT images showed good discriminating power and hence may help radiologists in diagnosing breast tumors already at the first screening.
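The three feature-selection routes and the Gini-based Random Forest classifier can be reproduced in outline with scikit-learn. The toy arrays below stand in for the 58 LIFEx radiomic features; the subset size of seven follows the paper, while all other settings are library defaults.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, SequentialFeatureSelector, f_classif
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = rng.normal(size=(150, 58)), rng.integers(0, 2, 150)  # 58 radiomic features

kb = SelectKBest(f_classif, k=7).fit(X, y).get_support(indices=True)
sfs = SequentialFeatureSelector(
    RandomForestClassifier(n_estimators=50, random_state=0),
    n_features_to_select=7).fit(X, y).get_support(indices=True)
rf_imp = np.argsort(RandomForestClassifier(random_state=0)
                    .fit(X, y).feature_importances_)[-7:]

for name, idx in [("KB", kb), ("SFS", sfs), ("RF", rf_imp)]:
    clf = RandomForestClassifier(criterion="gini", random_state=0)
    auc = cross_val_score(clf, X[:, idx], y, cv=5, scoring="roc_auc").mean()
    print(name, sorted(idx.tolist()), f"AUC={auc:.2f}")
```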
Affiliation(s)
- Federica Murtas
  - Medical Physics Department, IRCCS Regina Elena National Cancer Institute, Rome, Italy
  - Department of Biomedicine and Prevention, University of Rome "Tor Vergata", Rome, Italy
- Valeria Landoni
  - Medical Physics Department, IRCCS Regina Elena National Cancer Institute, Rome, Italy
- Pedro Ordòñez
  - Medical Physics Department, IRCCS Regina Elena National Cancer Institute, Rome, Italy
- Laura Greco
  - Radiology and Diagnostic Imaging Department, IRCCS Regina Elena National Cancer Institute, Rome, Italy
- Francesca Romana Ferranti
  - Radiology and Diagnostic Imaging Department, IRCCS Regina Elena National Cancer Institute, Rome, Italy
- Andrea Russo
  - Pathology Department, IRCCS Regina Elena National Cancer Institute, Rome, Italy
- Letizia Perracchio
  - Pathology Department, IRCCS Regina Elena National Cancer Institute, Rome, Italy
- Antonello Vidiri
  - Radiology and Diagnostic Imaging Department, IRCCS Regina Elena National Cancer Institute, Rome, Italy
18
Li Y, He Z, Pan J, Zeng W, Liu J, Zeng Z, Xu W, Xu Z, Wang S, Wen C, Zeng H, Wu J, Ma X, Chen W, Lu Y. Atypical architectural distortion detection in digital breast tomosynthesis: a computer-aided detection model with adaptive receptive field. Phys Med Biol 2023; 68. [PMID: 36595312 DOI: 10.1088/1361-6560/acaba7]
Abstract
Objective. In digital breast tomosynthesis (DBT), architectural distortion (AD) is a breast lesion that is difficult to detect. Compared with typical ADs, which have radial patterns, identifying atypical ADs is more difficult. Most existing computer-aided detection (CADe) models focus on the detection of typical ADs. This study focuses on atypical ADs and develops a deep learning-based CADe model with an adaptive receptive field in DBT. Approach. Our proposed model uses a Gabor filter and a convergence measure to depict the distribution of fibroglandular tissues in DBT slices. Subsequently, two-dimensional (2D) detection is implemented using a deformable-convolution-based deep learning framework, in which an adaptive receptive field is introduced to extract global features in slices. Finally, 2D candidates are aggregated to form the three-dimensional (3D) AD detection results. The model is trained on 99 positive cases with ADs and evaluated on 120 AD-positive cases and 100 AD-negative cases. Main results. A convergence-measure-based model and a deep-learning model without an adaptive receptive field are reproduced as controls. Their mean true positive fractions (MTPF) from 0.05 to 4 false positives per volume are 0.3846 ± 0.0352 and 0.6501 ± 0.0380, respectively. Our proposed model achieves an MTPF of 0.7148 ± 0.0322, a significant improvement (p < 0.05) over both methods. In particular, our model detects more atypical ADs, which primarily contributes to the performance improvement. Significance. The adaptive receptive field helps the model improve atypical AD detection performance and can help radiologists identify more ADs in breast cancer screening.
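The Gabor-filter stage of the pipeline can be sketched with OpenCV as a small bank of oriented kernels whose per-pixel maximum emphasizes line-like fibroglandular structure. The kernel parameters and the random input slice are illustrative assumptions; the convergence measure and the detection network itself are not reproduced here.

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)
slice_img = rng.normal(size=(256, 256)).astype(np.float32)  # stand-in DBT slice

responses = []
for theta in np.arange(0, np.pi, np.pi / 8):          # 8 orientations
    kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                lambd=10.0, gamma=0.5, psi=0)
    responses.append(cv2.filter2D(slice_img, cv2.CV_32F, kernel))

# Per-pixel maximum over orientations: strong values mark oriented,
# line-like tissue that a convergence measure can then analyze.
orientation_map = np.max(np.stack(responses), axis=0)
print(orientation_map.shape, float(orientation_map.max()))
```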
Collapse
Affiliation(s)
- Yue Li
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, People's Republic of China; Guangdong Province Key Laboratory of Computational Science, Sun Yat-sen University, Guangzhou, People's Republic of China
| | - Zilong He
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, People's Republic of China
| | - Jiawei Pan
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, People's Republic of China; Guangdong Province Key Laboratory of Computational Science, Sun Yat-sen University, Guangzhou, People's Republic of China
| | - Weixiong Zeng
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, People's Republic of China
| | - Jialing Liu
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, People's Republic of China
| | - Zhaodong Zeng
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, People's Republic of China
| | - Weimin Xu
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, People's Republic of China
| | - Zeyuan Xu
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, People's Republic of China
| | - Sina Wang
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, People's Republic of China
| | - Chanjuan Wen
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, People's Republic of China
| | - Hui Zeng
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, People's Republic of China
| | - Jiefang Wu
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, People's Republic of China
| | - Xiangyuan Ma
- Department of Biomedical Engineering, College of Engineering, Shantou University, Shantou, People's Republic of China; Guangdong Province Key Laboratory of Computational Science, Sun Yat-sen University, Guangzhou, People's Republic of China
| | - Weiguo Chen
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, People's Republic of China
| | - Yao Lu
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, People's Republic of China; Guangdong Province Key Laboratory of Computational Science, Sun Yat-sen University, Guangzhou, People's Republic of China; Shanghai Key Laboratory of Molecular Imaging, Shanghai University of Medicine and Health Sciences, Shanghai, People's Republic of China
| |
Collapse
|
19
|
Chen C, Wang J, Pan J, Bian C, Zhang Z. GraphSKT: Graph-Guided Structured Knowledge Transfer for Domain Adaptive Lesion Detection. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:507-518. [PMID: 36201413 DOI: 10.1109/tmi.2022.3212784] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Adversarial-based adaptation has dominated the area of domain adaptive detection over the past few years. Despite its general efficacy for various tasks, the learned representations may not capture the intrinsic topological structures of whole images and are thus vulnerable to distributional shifts, especially in real-world applications such as geometric distortions across imaging devices in medical images. In this case, forcefully matching data distributions across domains cannot ensure precise knowledge transfer and is prone to negative transfer. In this paper, we explore the problem of domain adaptive lesion detection from the perspective of relational reasoning, and propose a Graph-Structured Knowledge Transfer (GraphSKT) framework to perform hierarchical reasoning by modeling both the intra- and inter-domain topological structures. Specifically, we utilize cross-domain correspondence to mine meaningful foreground regions for representing graph nodes and explicitly endow each node with contextual information. Then, the intra- and inter-domain graphs are built on top of instance-level features to achieve a high-level understanding of the lesion and the whole medical image, and to transfer structured knowledge from source to target domains. The contextual and semantic information is propagated through graph nodes methodically, enhancing the expressive power of learned features for lesion detection tasks. Extensive experiments on two types of challenging datasets demonstrate that the proposed GraphSKT significantly outperforms state-of-the-art approaches for the detection of polyps in colonoscopy images and of masses in mammographic images.
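One round of the node-level message passing at the heart of graph-based transfer can be caricatured as below; the node count, feature size, and similarity-based adjacency are illustrative, and GraphSKT's cross-domain node mining is not reproduced here.

```python
# Toy sketch of one message-passing step over instance-level features.
import torch
import torch.nn.functional as F

feats = torch.randn(6, 128)                       # 6 foreground-region nodes, 128-d each
sim = feats @ feats.t() / feats.shape[1] ** 0.5   # pairwise similarity as edge weights
adj = F.softmax(sim, dim=-1)                      # row-normalised soft adjacency
W = torch.nn.Linear(128, 128)

updated = F.relu(W(adj @ feats))  # each node aggregates context from its neighbours
```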
Collapse
|
20
|
Atasever S, Azginoglu N, Terzi DS, Terzi R. A comprehensive survey of deep learning research on medical image analysis with focus on transfer learning. Clin Imaging 2023; 94:18-41. [PMID: 36462229 DOI: 10.1016/j.clinimag.2022.11.003] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Revised: 10/17/2022] [Accepted: 11/01/2022] [Indexed: 11/13/2022]
Abstract
This survey aims to identify commonly used methods, datasets, future trends, knowledge gaps, constraints, and limitations in the field, to provide an overview of current solutions used in medical image analysis in parallel with the rapid developments in transfer learning (TL). Unlike previous studies, this survey grouped current studies from the period between January 2017 and February 2021 according to different anatomical regions and detailed the modality, medical task, TL method, source data, target data, and public or private datasets used in medical imaging. It also provides readers with detailed information on technical challenges, opportunities, and future research trends. In this way, an overview of recent developments is provided to help researchers select the most effective and efficient methods, access widely used and publicly available medical datasets, and identify research gaps and limitations in the available literature.
Collapse
Affiliation(s)
- Sema Atasever
- Computer Engineering Department, Nevsehir Hacı Bektas Veli University, Nevsehir, Turkey.
| | - Nuh Azginoglu
- Computer Engineering Department, Kayseri University, Kayseri, Turkey.
| | | | - Ramazan Terzi
- Computer Engineering Department, Amasya University, Amasya, Turkey.
| |
Collapse
|
21
|
Hadjiiski L, Cha K, Chan HP, Drukker K, Morra L, Näppi JJ, Sahiner B, Yoshida H, Chen Q, Deserno TM, Greenspan H, Huisman H, Huo Z, Mazurchuk R, Petrick N, Regge D, Samala R, Summers RM, Suzuki K, Tourassi G, Vergara D, Armato SG. AAPM task group report 273: Recommendations on best practices for AI and machine learning for computer-aided diagnosis in medical imaging. Med Phys 2023; 50:e1-e24. [PMID: 36565447 DOI: 10.1002/mp.16188] [Citation(s) in RCA: 23] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2022] [Revised: 11/13/2022] [Accepted: 11/22/2022] [Indexed: 12/25/2022] Open
Abstract
Rapid advances in artificial intelligence (AI) and machine learning, and specifically in deep learning (DL) techniques, have enabled broad application of these methods in health care. The promise of the DL approach has spurred further interest in computer-aided diagnosis (CAD) development and applications using both "traditional" machine learning methods and newer DL-based methods. We use the term CAD-AI to refer to this expanded clinical decision support environment that uses traditional and DL-based AI methods. Numerous studies have been published to date on the development of machine learning tools for computer-aided, or AI-assisted, clinical tasks. However, most of these machine learning models are not ready for clinical deployment. It is of paramount importance to ensure that a clinical decision support tool undergoes proper training and rigorous validation of its generalizability and robustness before adoption for patient care in the clinic. To address these important issues, the American Association of Physicists in Medicine (AAPM) Computer-Aided Image Analysis Subcommittee (CADSC) is charged, in part, to develop recommendations on practices and standards for the development and performance assessment of computer-aided decision support systems. The committee has previously published two opinion papers on the evaluation of CAD systems and issues associated with user training and quality assurance of these systems in the clinic. With machine learning techniques continuing to evolve and CAD applications expanding to new stages of the patient care process, the current task group report considers the broader issues common to the development of most, if not all, CAD-AI applications and their translation from the bench to the clinic. The goal is to bring attention to the proper training and validation of machine learning algorithms that may improve their generalizability and reliability and accelerate the adoption of CAD-AI systems for clinical decision support.
Collapse
Affiliation(s)
- Lubomir Hadjiiski
- Department of Radiology, University of Michigan, Ann Arbor, Michigan, USA
| | - Kenny Cha
- U.S. Food and Drug Administration, Silver Spring, Maryland, USA
| | - Heang-Ping Chan
- Department of Radiology, University of Michigan, Ann Arbor, Michigan, USA
| | - Karen Drukker
- Department of Radiology, University of Chicago, Chicago, Illinois, USA
| | - Lia Morra
- Department of Control and Computer Engineering, Politecnico di Torino, Torino, Italy
| | - Janne J Näppi
- 3D Imaging Research, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts, USA
| | - Berkman Sahiner
- U.S. Food and Drug Administration, Silver Spring, Maryland, USA
| | - Hiroyuki Yoshida
- 3D Imaging Research, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts, USA
| | - Quan Chen
- Department of Radiation Medicine, University of Kentucky, Lexington, Kentucky, USA
| | - Thomas M Deserno
- Peter L. Reichertz Institute for Medical Informatics of TU Braunschweig and Hannover Medical School, Braunschweig, Germany
| | - Hayit Greenspan
- Department of Biomedical Engineering, Faculty of Engineering, Tel Aviv University, Tel Aviv, Israel & Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
| | - Henkjan Huisman
- Radboud Institute for Health Sciences, Radboud University Medical Center, Nijmegen, The Netherlands
| | - Zhimin Huo
- Tencent America, Palo Alto, California, USA
| | - Richard Mazurchuk
- Division of Cancer Prevention, National Cancer Institute, National Institutes of Health, Bethesda, Maryland, USA
| | | | - Daniele Regge
- Radiology Unit, Candiolo Cancer Institute, FPO-IRCCS, Candiolo, Italy; Department of Surgical Sciences, University of Turin, Turin, Italy
| | - Ravi Samala
- U.S. Food and Drug Administration, Silver Spring, Maryland, USA
| | - Ronald M Summers
- Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Maryland, USA
| | - Kenji Suzuki
- Institute of Innovative Research, Tokyo Institute of Technology, Tokyo, Japan
| | | | - Daniel Vergara
- Department of Radiology, Yale New Haven Hospital, New Haven, Connecticut, USA
| | - Samuel G Armato
- Department of Radiology, University of Chicago, Chicago, Illinois, USA
| |
Collapse
|
22
|
Basu S, Gupta M, Rana P, Gupta P, Arora C. RadFormer: Transformers with global-local attention for interpretable and accurate Gallbladder Cancer detection. Med Image Anal 2023; 83:102676. [PMID: 36455424 DOI: 10.1016/j.media.2022.102676] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2021] [Revised: 09/17/2022] [Accepted: 10/27/2022] [Indexed: 11/21/2022]
Abstract
We propose a novel deep neural network architecture to learn interpretable representations for medical image analysis. Our architecture generates global attention for the region of interest, and then learns bag-of-words-style deep feature embeddings with local attention. The global and local feature maps are combined using a contemporary transformer architecture for highly accurate Gallbladder Cancer (GBC) detection from Ultrasound (USG) images. Our experiments indicate that the detection accuracy of our model exceeds even that of human radiologists, and advocate its use as a second reader for GBC diagnosis. Bag-of-words embeddings allow our model to be probed for generating interpretable explanations for GBC detection consistent with those reported in the medical literature. We show that the proposed model not only helps in understanding the decisions of neural network models but also aids in the discovery of new visual features relevant to the diagnosis of GBC. Source code is available at https://github.com/sbasu276/RadFormer.
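A loose analogue of the global-local fusion described above is sketched below with a standard transformer encoder; it is not the released RadFormer code (see the linked GitHub), and the token counts and dimensions are invented.

```python
# Schematic fusion of a global embedding with local patch embeddings.
import torch
import torch.nn as nn

global_feat = torch.randn(1, 1, 256)   # pooled whole-image ("global") embedding
local_feats = torch.randn(1, 49, 256)  # 7x7 grid of local "bag of words" embeddings

tokens = torch.cat([global_feat, local_feats], dim=1)  # 50 tokens in total
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True),
    num_layers=2,
)
fused = encoder(tokens)
logits = nn.Linear(256, 2)(fused[:, 0])  # classify from the global token
```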
Collapse
Affiliation(s)
- Soumen Basu
- Department of Computer Science, Indian Institute of Technology Delhi, New Delhi, India.
| | - Mayank Gupta
- Department of Computer Science, Indian Institute of Technology Delhi, New Delhi, India
| | - Pratyaksha Rana
- Department of Radiodiagnosis and Imaging, Postgraduate Institute of Medical Education & Research, Chandigarh, India
| | - Pankaj Gupta
- Department of Radiodiagnosis and Imaging, Postgraduate Institute of Medical Education & Research, Chandigarh, India
| | - Chetan Arora
- Department of Computer Science, Indian Institute of Technology Delhi, New Delhi, India
| |
Collapse
|
23
|
Dey S, Mitra S, Chakraborty S, Mondal D, Nasipuri M, Das N. GC-EnC: A Copula based ensemble of CNNs for malignancy identification in breast histopathology and cytology images. Comput Biol Med 2023; 152:106329. [PMID: 36473342 DOI: 10.1016/j.compbiomed.2022.106329] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2022] [Revised: 10/25/2022] [Accepted: 11/14/2022] [Indexed: 11/18/2022]
Abstract
In the present work, we have explored the potential of a Copula-based ensemble of CNNs (Convolutional Neural Networks) over individual classifiers for malignancy identification in histopathology and cytology images. A Copula-based model is proposed that integrates three of the best-performing CNN architectures, namely DenseNet-161/201, ResNet-101/34, and InceptionNet-V3. Also, the limitation of small datasets is circumvented using a fuzzy-template-based data augmentation technique that intelligently selects multiple regions of interest (ROIs) from an image. The proposed framework of data augmentation amalgamated with the ensemble technique showed gratifying performance in malignancy prediction, surpassing the individual CNNs' performance on breast cytology and histopathology datasets. The proposed method achieved accuracies of 84.37%, 97.32%, and 91.67% on the JUCYT, BreakHis, and BI datasets, respectively. This automated technique will serve as a useful guide to the pathologist in delivering the appropriate diagnostic decision with reduced time and effort. The relevant codes of the proposed ensemble model are publicly available on GitHub.
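A simplified stand-in for the ensemble step is shown below, averaging the softmax outputs of the three backbone families named above; the Copula coupling itself is not reproduced, and the untrained weights and input size are placeholders.

```python
# Minimal sketch: averaging the probabilities of three CNN backbones.
import torch
import torchvision.models as models

nets = [
    models.densenet161(weights=None),
    models.resnet101(weights=None),
    models.inception_v3(weights=None, aux_logits=False),
]
x = torch.randn(1, 3, 299, 299)  # Inception-V3 expects 299x299 inputs

probs = []
with torch.no_grad():
    for net in nets:
        net.eval()
        probs.append(torch.softmax(net(x), dim=1))

ensemble_prob = torch.stack(probs).mean(dim=0)  # plain average in place of the Copula
```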
Collapse
Affiliation(s)
- Soumyajyoti Dey
- Jadavpur University, Department of Computer Science & Engineering, Kolkata, West Bengal, India.
| | - Shyamali Mitra
- Jadavpur University, Department of Instrumentation & Electronics Engineering, Kolkata, West Bengal, India.
| | | | - Debashri Mondal
- Theism Medical Diagnostics Centre, Kolkata, West Bengal, India.
| | - Mita Nasipuri
- Jadavpur University, Department of Computer Science & Engineering, Kolkata, West Bengal, India.
| | - Nibaran Das
- Jadavpur University, Department of Computer Science & Engineering, Kolkata, West Bengal, India.
| |
Collapse
|
24
|
Du L, Yuan J, Gan M, Li Z, Wang P, Hou Z, Wang C. A comparative study between deep learning and radiomics models in grading liver tumors using hepatobiliary phase contrast-enhanced MR images. BMC Med Imaging 2022; 22:218. [DOI: 10.1186/s12880-022-00946-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2022] [Accepted: 12/02/2022] [Indexed: 12/15/2022] Open
Abstract
Purpose
To compare a deep learning model with a radiomics model in differentiating high-grade (LR-3, LR-4, LR-5) Liver Imaging Reporting and Data System (LI-RADS) liver tumors from low-grade (LR-1, LR-2) LI-RADS tumors based on contrast-enhanced magnetic resonance images.
Methods
Magnetic resonance imaging scans of 361 suspected hepatocellular carcinoma patients were retrospectively reviewed. Lesion volume segmentation was manually performed by two radiologists, resulting in 426 lesions in the training set and 83 lesions in the test set. The radiomics model was constructed using a support vector machine (SVM) with pre-defined features, which were first screened using the Chi-square test and then refined using binary least absolute shrinkage and selection operator (LASSO) regression. The deep learning model was based on DenseNet. Performance of the models was quantified by the area under the receiver-operating characteristic curve (AUC), accuracy, sensitivity, specificity, and F1-score.
Results
A set of the 8 most informative features was selected from 1049 features to train the SVM classifier. The AUCs of the radiomics model were 0.857 (95% confidence interval [CI] 0.816–0.888) for the training set and 0.879 (95% CI 0.779–0.935) for the test set. The deep learning method achieved AUCs of 0.838 (95% CI 0.799–0.871) for the training set and 0.717 (95% CI 0.601–0.814) for the test set. The performance difference between the two models was assessed by t-test, which showed that the differences in both the training and test sets were statistically significant.
Conclusion
The deep learning model can be trained end-to-end with little extra domain knowledge, whereas the radiomics model requires complex feature selection. In this study, however, that feature-selection process gave the radiomics model better performance at a smaller computational cost, along with greater potential for model interpretability.
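A hedged sketch of the two-step radiomic feature selection (Chi-square filter, then an L1-penalised "LASSO" refinement) feeding an SVM is given below; the data are synthetic, and L1 logistic regression stands in for the binary LASSO regression described above.

```python
# Minimal sketch of the radiomics pipeline under stated assumptions.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel, SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=426, n_features=1049, random_state=0)

pipe = Pipeline([
    ("scale", MinMaxScaler()),            # chi2 requires non-negative inputs
    ("chi2", SelectKBest(chi2, k=100)),   # coarse univariate filter
    ("lasso", SelectFromModel(            # L1 refinement down to 8 features
        LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
        max_features=8)),
    ("svm", SVC(kernel="rbf", probability=True)),
])
pipe.fit(X, y)
print("selected features:", pipe.named_steps["lasso"].get_support().sum())
```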
Collapse
|
25
|
Chen W, Gong M, Zhou D, Zhang L, Kong J, Jiang F, Feng S, Yuan R. CT-based deep learning radiomics signature for the preoperative prediction of the muscle-invasive status of bladder cancer. Front Oncol 2022; 12:1019749. [PMID: 36544709 PMCID: PMC9761839 DOI: 10.3389/fonc.2022.1019749] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2022] [Accepted: 10/17/2022] [Indexed: 12/07/2022] Open
Abstract
Objectives Although the preoperative assessment of whether a bladder cancer (BCa) exhibits muscular invasion is crucial for adequate treatment, the preoperative diagnosis of BCa with muscular invasion still presents several challenges. The aim of this study was to construct a deep learning radiomics signature (DLRS) for preoperatively predicting the muscle-invasion status of BCa. Methods A retrospective review covering 173 patients revealed 43 with pathologically proven muscle-invasive bladder cancer (MIBC) and 130 with non-muscle-invasive bladder cancer (non-MIBC). A total of 129 patients were randomly assigned to the training cohort and 44 to the test cohort. The Pearson correlation coefficient combined with the least absolute shrinkage and selection operator (LASSO) was utilized to reduce radiomic redundancy. To decrease the dimension of the deep learning features, principal component analysis (PCA) was adopted. Six machine learning classifiers were then constructed based on the deep learning radiomics features and used to predict the muscle-invasion status of bladder cancer. The area under the curve (AUC), accuracy, sensitivity, and specificity were used to evaluate the performance of the models. Results DLRS-based models performed best in predicting muscle-invasion status, with the MLP (train AUC: 0.973260 (95% CI 0.9488-0.9978); test AUC: 0.884298 (95% CI 0.7831-0.9855)) outperforming the other models. In the test cohort, the sensitivity, specificity, and accuracy of the MLP model were 0.91 (95% CI 0.551-0.873), 0.78 (95% CI 0.594-0.863), and 0.58 (95% CI 0.729-0.827), respectively. Decision curve analysis (DCA) indicated that the MLP model offered better clinical utility than the radiomics-only model. Conclusions A deep radiomics model constructed with CT images can accurately predict the muscle-invasion status of bladder cancer.
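A toy sketch of the PCA-plus-classifier stage described above follows, with an MLP standing in for one of the six classifiers; the feature matrix, component count, and 129/44 split are placeholders.

```python
# Illustrative sketch: PCA compression of deep features, then an MLP.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import roc_auc_score
from sklearn.neural_network import MLPClassifier

deep_feats = np.random.rand(173, 2048)   # e.g. CNN embeddings, one row per patient
labels = np.random.randint(0, 2, 173)    # 1 = muscle-invasive, 0 = non-invasive

z = PCA(n_components=32).fit_transform(deep_feats)  # reduce feature dimension
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(z[:129], labels[:129])
auc = roc_auc_score(labels[129:], clf.predict_proba(z[129:])[:, 1])
print(f"test AUC: {auc:.3f}")
```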
Collapse
Affiliation(s)
- Weitian Chen
- Department of Urology, Zhongshan People's Hospital, Zhongshan, China
| | - Mancheng Gong
- Department of Urology, Zhongshan People's Hospital, Zhongshan, China
| | - Dongsheng Zhou
- First Clinical Medical College, Guangdong Medical University, Zhanjiang, China
| | - Lijie Zhang
- First Clinical Medical College, Guangdong Medical University, Zhanjiang, China
| | - Jie Kong
- First Clinical Medical College, Guangdong Medical University, Zhanjiang, China
| | - Feng Jiang
- First Clinical Medical College, Guangdong Medical University, Zhanjiang, China
| | - Shengxing Feng
- First Clinical Medical College, Guangdong Medical University, Zhanjiang, China
| | - Runqiang Yuan
- Department of Urology, Zhongshan People's Hospital, Zhongshan, China
| |
Collapse
|
26
|
Muacevic A, Adler JR. Role of Artificial Intelligence and Machine Learning in Prediction, Diagnosis, and Prognosis of Cancer. Cureus 2022; 14:e31008. [PMID: 36475188 PMCID: PMC9717523 DOI: 10.7759/cureus.31008] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2022] [Accepted: 11/02/2022] [Indexed: 01/25/2023] Open
Abstract
Cancer is one of the most devastating, fatal, dangerous, and unpredictable ailments. To reduce the risk of fatality from this disease, we need ways to predict the disease, diagnose it faster and more precisely, and forecast the prognosis accurately. The incorporation of artificial intelligence (AI), machine learning (ML), and deep learning (DL) algorithms into the healthcare system has already proven to work wonders for patients. Artificial intelligence is a simulation of intelligence that uses data, rules, and information programmed into it to make predictions. Machine learning (ML) is the science of using data to enhance performance in a variety of activities and tasks. Deep learning (DL) is a larger family of machine learning techniques built on artificial neural networks and representation learning. In short, we require AI, ML, and DL to predict cancer risk, survival chances, cancer recurrence, cancer diagnosis, and cancer prognosis. All of these are required to improve patients' quality of life, increase their survival rates, decrease anxiety and fear to some extent, and create a properly personalized treatment plan for each patient. The survival rates of people with diffuse large B-cell lymphoma (DLBCL) can be forecasted. Both solid and non-solid tumors can be diagnosed precisely with the help of AI and ML algorithms. The prognosis of the disease can also be forecasted with AI and its approaches, such as deep learning. This improvement in cancer care is a turning point in advanced healthcare and will deeply impact patients' lives for good.
Collapse
|
27
|
Classification of Multiclass Histopathological Breast Images Using Residual Deep Learning. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:9086060. [PMID: 36262625 PMCID: PMC9576372 DOI: 10.1155/2022/9086060] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/09/2022] [Revised: 07/21/2022] [Accepted: 08/29/2022] [Indexed: 11/20/2022]
Abstract
Histopathological investigation demands considerable clinical experience and time from pathologists. AI may play a significant role in supporting pathologists and lead to more accurate and efficient histopathological diagnoses. Breast cancer is one of the most commonly diagnosed cancers in women worldwide. Breast cancer may be detected and diagnosed using imaging methods such as histopathological images. Since various tissues make up the breast, there is a wide range of textural intensity, making abnormality detection difficult. As a result, there is an urgent need to improve computer-aided diagnosis (CAD) systems that can serve as a second opinion for radiologists when they use medical images. A self-training learning method employing a deep neural network with residual learning is proposed to overcome the need for a large number of labeled images when training deep learning models for breast cancer histopathology image classification. The suggested model is built and trained from scratch.
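A minimal self-training loop in the spirit described above is sketched below with a stand-in linear classifier: a model trained on a small labelled pool pseudo-labels confident unlabelled samples, which are then added to the training set. The confidence threshold, round count, and synthetic features are assumptions.

```python
# Minimal self-training (pseudo-labeling) sketch on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_lab = rng.random((100, 64))      # small labelled pool (stand-in features)
y_lab = rng.integers(0, 2, 100)
X_unlab = rng.random((900, 64))    # large unlabelled pool

for _ in range(3):  # a few self-training rounds
    clf = LogisticRegression(max_iter=500).fit(X_lab, y_lab)
    proba = clf.predict_proba(X_unlab)
    confident = proba.max(axis=1) > 0.95  # keep only confident pseudo-labels
    X_lab = np.vstack([X_lab, X_unlab[confident]])
    y_lab = np.concatenate([y_lab, proba[confident].argmax(axis=1)])
    X_unlab = X_unlab[~confident]
    if len(X_unlab) == 0:
        break
```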
Collapse
|
28
|
Number of Convolution Layers and Convolution Kernel Determination and Validation for Multilayer Convolutional Neural Network: Case Study in Breast Lesion Screening of Mammographic Images. Processes (Basel) 2022. [DOI: 10.3390/pr10091867] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Mammography is a low-dose X-ray imaging technique that can detect breast tumors, cysts, and calcifications, which can aid in detecting potential breast cancer in the early stage and reduce the mortality rate. This study employed a multilayer convolutional neural network (MCNN) to screen breast lesions with mammographic images. Within the region of interest, a specific bounding box is used to extract feature maps before automatic image segmentation and feature classification are conducted. These include three classes, namely, normal, benign tumor, and malignant tumor. Multiconvolution processes with kernel convolution operations have noise removal and sharpening effects that are better than other image processing methods, which can strengthen the features of the desired object and contour and increase the classifier’s classification accuracy. However, excessive convolution layers and kernel convolution operations will increase the computational complexity, computational time, and training time for training the classifier. Thus, this study aimed to determine a suitable number of convolution layers and kernels to achieve a classifier with high learning performance and classification accuracy, with a case study in the breast lesion screening of mammographic images. The Mammographic Image Analysis Society Digital Mammogram Database (United Kingdom National Breast Screening Program) was used for experimental tests to determine the number of convolution layers and kernels. The optimal classifier’s performance is evaluated using accuracy (%), precision (%), recall (%), and F1 score to test and validate the most suitable MCNN model architecture.
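To make the search concrete, a toy sketch of the layer/kernel sweep is given below; the candidate depths, kernel counts, and single-channel input are assumptions, and the actual training and evaluation loop on the MIAS database is left out.

```python
# Sketch: build one small CNN per (depth, kernel-count) configuration.
import itertools
import torch.nn as nn

def build_mcnn(n_conv_layers: int, n_kernels: int, n_classes: int = 3) -> nn.Sequential:
    layers, in_ch = [], 1  # single-channel mammograms assumed
    for _ in range(n_conv_layers):
        layers += [nn.Conv2d(in_ch, n_kernels, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)]
        in_ch = n_kernels
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(n_kernels, n_classes)]
    return nn.Sequential(*layers)

for depth, kernels in itertools.product([2, 3, 4], [8, 16, 32]):
    model = build_mcnn(depth, kernels)
    # train/evaluate `model` here and record accuracy, precision, recall, F1
```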
Collapse
|
30
|
Aswiga RV, Shanthi AP. A Multilevel Transfer Learning Technique and LSTM Framework for Generating Medical Captions for Limited CT and DBT Images. J Digit Imaging 2022; 35:564-580. [PMID: 35217942 PMCID: PMC9156604 DOI: 10.1007/s10278-021-00567-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2021] [Revised: 09/24/2021] [Accepted: 12/06/2021] [Indexed: 12/15/2022] Open
Abstract
Medical image captioning has recently been attracting the attention of the medical community. Generating captions for images involving multiple organs is an even more challenging task, so any attempt toward such medical image captioning becomes the need of the hour. In recent years, rapid developments in deep learning approaches have made them an effective option for the analysis of medical images and automatic report generation. But analyzing medical images that are scarce and limited is hard even with machine learning approaches. The concept of transfer learning can be employed in such applications that suffer from insufficient training data. This paper presents an approach to develop a medical image captioning model based on a deep recurrent architecture that combines a Multi Level Transfer Learning (MLTL) framework with a Long Short-Term Memory (LSTM) model. A basic MLTL framework with three models is designed to detect and classify very limited datasets, using the knowledge acquired from easily available datasets. The first model, for the source domain, uses abundantly available non-medical images and learns generalized features. The acquired knowledge is then transferred to the second model, for the intermediate and auxiliary domain, which is related to the target domain. This information is then used for the final target domain, which consists of medical datasets that are very limited in nature. Therefore, the knowledge learned from a non-medical source domain is transferred to improve learning in the target domain that deals with medical images. Then, a novel LSTM model, of the kind used for sequence generation and machine translation, is proposed to generate captions for a given medical image from the MLTL framework. To further improve the captioning of the target sentence, an enhanced multi-input Convolutional Neural Network (CNN) model along with feature extraction techniques is proposed. This enhanced multi-input CNN model extracts the most important features of an image, which helps in generating a more precise and detailed caption of the medical image. Experimental results show that the proposed model performs well, with an accuracy of 96.90% and a BLEU score of 76.9%, even with very limited datasets, when compared to the work reported in the literature.
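The encoder-decoder pattern that the MLTL+LSTM pipeline builds on can be sketched as below; the ResNet-18 backbone, vocabulary size, and dimensions are placeholders, not the paper's configuration.

```python
# Bare-bones CNN encoder + LSTM decoder captioner (illustrative only).
import torch
import torch.nn as nn
import torchvision.models as models

class Captioner(nn.Module):
    def __init__(self, vocab_size=1000, embed=256, hidden=512):
        super().__init__()
        cnn = models.resnet18(weights=None)
        cnn.fc = nn.Linear(cnn.fc.in_features, embed)  # image -> embedding
        self.cnn = cnn
        self.embed = nn.Embedding(vocab_size, embed)
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, images, captions):
        img = self.cnn(images).unsqueeze(1)                  # (B, 1, E)
        seq = torch.cat([img, self.embed(captions)], dim=1)  # image as first "token"
        h, _ = self.lstm(seq)
        return self.out(h)                                   # per-step vocabulary logits

logits = Captioner()(torch.randn(2, 3, 224, 224), torch.randint(0, 1000, (2, 12)))
```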
Collapse
Affiliation(s)
- R. V. Aswiga
- Department of Computer Science & Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Chennai, 601103, Tamil Nadu, India
| | - A. P. Shanthi
- Department of Computer Science & Engineering, College of Engineering, Guindy (CEG), Anna University, Chennai, 600025, Tamil Nadu, India
| |
Collapse
|
31
|
Fan M, Yuan C, Huang G, Xu M, Wang S, Gao X, Li L. A framework for deep multitask learning with multiparametric magnetic resonance imaging for the joint prediction of histological characteristics in breast cancer. IEEE J Biomed Health Inform 2022; 26:3884-3895. [PMID: 35635826 DOI: 10.1109/jbhi.2022.3179014] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
The clinical management and decision-making process related to breast cancer are based on multiple histological indicators. This study aims to jointly predict the Ki-67 expression level, luminal A subtype, and histological grade molecular biomarkers using a new deep multitask learning method with multiparametric magnetic resonance imaging. A multitask learning network structure was proposed by introducing a common-task layer and task-specific layers to learn the high-level features that are common to all tasks and those related to a specific task, respectively. A network pretrained with knowledge from the ImageNet dataset was used and fine-tuned with MRI data. Information from multiparametric MR images was fused using strategies at the feature and decision levels. The area under the receiver operating characteristic curve (AUC) was used to measure model performance. For single-task learning using a single image series, the deep learning model generated AUCs of 0.752, 0.722, and 0.596 for the Ki-67, luminal A, and histological grade prediction tasks, respectively. The performance was improved by freezing the first 5 convolutional layers, using 20% shared layers, and fusing multiparametric series at the feature level, which achieved AUCs of 0.819, 0.799, and 0.747 for the Ki-67, luminal A, and histological grade prediction tasks, respectively. Our study showed the advantages of jointly predicting correlated clinical biomarkers using a deep multitask learning framework with an appropriate number of fine-tuned convolutional layers, taking full advantage of common and complementary imaging features. Multiparametric image series-based multitask learning could be a promising approach for the multiple clinical indicator-based management of breast cancer.
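A minimal rendering of the common-task/task-specific split described above is sketched below; the three heads, the feature dimension, and the equal loss weighting are illustrative assumptions.

```python
# Sketch: shared trunk ("common-task layer") plus task-specific heads.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, feat_dim=512):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU())  # common-task layer
        self.heads = nn.ModuleDict({
            "ki67": nn.Linear(256, 2),
            "luminal_a": nn.Linear(256, 2),
            "grade": nn.Linear(256, 2),
        })

    def forward(self, x):
        h = self.shared(x)
        return {task: head(h) for task, head in self.heads.items()}

outs = MultiTaskNet()(torch.randn(4, 512))
loss = sum(nn.functional.cross_entropy(v, torch.randint(0, 2, (4,))) for v in outs.values())
```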
Collapse
|
32
|
Zhou C, Chan HP, Hadjiiski LM, Chughtai A. Recursive Training Strategy for a Deep Learning Network for Segmentation of Pathology Nuclei With Incomplete Annotation. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2022; 10:49337-49346. [PMID: 35665366 PMCID: PMC9161776 DOI: 10.1109/access.2022.3172958] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
This study developed a recursive training strategy to train a deep learning model for nuclei detection and segmentation using incomplete annotation. A dataset of 141 H&E-stained breast cancer pathology images with incomplete annotation was randomly split into a training/validation set and a test set of 89 and 52 images, respectively. The positive training samples were extracted at each annotated cell and augmented with affine translation. The negative training samples were selected from the non-cellular regions free of nuclei using a histogram-based semi-automatic method. A U-Net model was initially trained by minimizing a custom loss function. After the first stage of training, the trained U-Net model was applied to the images in the training set in inference mode. High-quality objects segmented by the U-Net were selected by a semi-automated method. Combining the newly selected high-quality objects with the annotated nuclei and the previously generated negative samples, the U-Net model was retrained recursively until the stopping criteria were satisfied. For the 52 test images, the U-Net trained with and without our recursive training method achieved a sensitivity of 90.3% and 85.3% for nuclei detection, respectively. For nuclei segmentation, the average Dice coefficient and average Jaccard index were 0.831±0.213 and 0.750±0.217 with recursive training, and 0.780±0.270 and 0.697±0.264 without, respectively. The improvement achieved by our proposed method was statistically significant (P < 0.05). In conclusion, our recursive training method effectively enlarged the set of annotated objects for training the deep learning model and further improved the detection and segmentation performance.
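The two overlap metrics reported above (Dice coefficient and Jaccard index) can be computed on binary masks as below; the random masks are stand-ins for predicted and ground-truth segmentations.

```python
# Dice and Jaccard overlap metrics for binary segmentation masks.
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def jaccard(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    return inter / np.logical_or(pred, gt).sum()

pred = np.random.rand(64, 64) > 0.5  # stand-in predicted mask
gt = np.random.rand(64, 64) > 0.5    # stand-in ground-truth mask
print(dice(pred, gt), jaccard(pred, gt))
```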
Collapse
Affiliation(s)
- Chuan Zhou
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
| | - Heang-Ping Chan
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
| | | | - Aamer Chughtai
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
| |
Collapse
|
33
|
Sun L, Wen J, Wang J, Zhao Y, Zhang B, Wu J, Xu Y. Two‐view attention‐guided convolutional neural network for mammographic image classification. CAAI TRANSACTIONS ON INTELLIGENCE TECHNOLOGY 2022. [DOI: 10.1049/cit2.12096] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022] Open
Affiliation(s)
- Lilei Sun
- College of Computer Science and Technology, Guizhou University, Guiyang, China
- Shenzhen Key Laboratory of Visual Object Detection and Recognition, Harbin Institute of Technology, Shenzhen, China
| | - Jie Wen
- Shenzhen Key Laboratory of Visual Object Detection and Recognition, Harbin Institute of Technology, Shenzhen, China
- Harbin Institute of Technology, Shenzhen, China
| | - Junqian Wang
- Shenzhen Key Laboratory of Visual Object Detection and Recognition, Harbin Institute of Technology, Shenzhen, China
- Harbin Institute of Technology, Shenzhen, China
| | - Yong Zhao
- College of Computer Science and Technology, Guizhou University, Guiyang, China
- School of Electronic and Computer Engineering, Shenzhen Graduate School of Peking University, Shenzhen, China
| | - Bob Zhang
- Department of Computer and Information Science, University of Macau, Taipa, China
| | - Jian Wu
- Science for Life Laboratory, KTH Royal Institute of Technology, Stockholm, Sweden
| | - Yong Xu
- Shenzhen Key Laboratory of Visual Object Detection and Recognition, Harbin Institute of Technology, Shenzhen, China
- Harbin Institute of Technology, Shenzhen, China
| |
Collapse
|
34
|
Automatic Breast Tumor Screening of Mammographic Images with Optimal Convolutional Neural Network. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12084079] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
Mammography is a first-line imaging examination approach used for early breast tumor screening. Computational techniques based on deep-learning methods, such as the convolutional neural network (CNN), are routinely used as classifiers for rapid automatic breast tumor screening in mammography examination. Classifying multiple feature maps on two-dimensional (2D) digital images, a multilayer CNN has multiple convolutional-pooling layers and fully connected networks, which can increase the screening accuracy and reduce the error rate. However, this multilayer architecture presents some limitations, such as high computational complexity, large-scale training dataset requirements, and poor suitability for real-time clinical applications. Hence, this study designs an optimal multilayer architecture for a CNN-based classifier for automatic breast tumor screening, consisting of three convolutional layers, two pooling layers, a flattening layer, and a classification layer. In the first convolutional layer, the proposed classifier performs a fractional-order convolutional process to enhance the image and remove unwanted noise to obtain the desired object's edges; in the second and third convolutional-pooling layers, two kernel convolutional and pooling operations are used to ensure the continuous enhancement and sharpening of the feature patterns and to further extract the desired features at different scales and levels, while also reducing the dimensions of the feature patterns. In the classification layer, a multilayer network with an adaptive moment estimation algorithm is used to refine the classifier's network parameters for mammography classification by separating tumor-free feature patterns from tumor feature patterns. Images were selected from the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM), and K-fold cross-validations were performed. The experimental results indicate promising performance for automatic breast tumor screening in terms of recall (%), precision (%), accuracy (%), F1 score, and Youden's index.
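A rough PyTorch rendering of the described layer stack follows (three convolutional layers, two pooling layers, flattening, and a classification layer trained with Adam, i.e. adaptive moment estimation); the channel counts, input size, and the plain Conv2d standing in for the fractional-order convolution are all assumptions.

```python
# Sketch of the three-conv / two-pool / flatten / classify stack with Adam.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),                   # stand-in for the fractional-order stage
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, 2),                                 # tumor-free vs tumor
)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)             # adaptive moment estimation

x, y = torch.randn(4, 1, 224, 224), torch.randint(0, 2, (4,))
loss = nn.functional.cross_entropy(model(x), y)
opt.zero_grad()
loss.backward()
opt.step()
```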
Collapse
|
35
|
Kim HE, Cosa-Linan A, Santhanam N, Jannesari M, Maros ME, Ganslandt T. Transfer learning for medical image classification: a literature review. BMC Med Imaging 2022; 22:69. [PMID: 35418051 PMCID: PMC9007400 DOI: 10.1186/s12880-022-00793-7] [Citation(s) in RCA: 183] [Impact Index Per Article: 61.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2021] [Accepted: 03/30/2022] [Indexed: 02/07/2023] Open
Abstract
BACKGROUND Transfer learning (TL) with convolutional neural networks aims to improve performance on a new task by leveraging the knowledge of similar tasks learned in advance. It has made a major contribution to medical image analysis, as it overcomes the data scarcity problem and saves time and hardware resources. However, transfer learning has been arbitrarily configured in the majority of studies. This review paper attempts to provide guidance for selecting a model and TL approaches for the medical image classification task. METHODS 425 peer-reviewed articles were retrieved from two databases, PubMed and Web of Science, published in English, up until December 31, 2020. Articles were assessed by two independent reviewers, with the aid of a third reviewer in the case of discrepancies. We followed the PRISMA guidelines for the paper selection, and 121 studies were regarded as eligible for the scope of this review. We investigated articles focused on selecting backbone models and TL approaches including feature extractor, feature extractor hybrid, fine-tuning, and fine-tuning from scratch. RESULTS The majority of studies (n = 57) empirically evaluated multiple models, followed by deep models (n = 33) and shallow (n = 24) models. Inception, one of the deep models, was the most employed in the literature (n = 26). With respect to TL, the majority of studies (n = 46) empirically benchmarked multiple approaches to identify the optimal configuration. The rest of the studies applied only a single approach, for which feature extractor (n = 38) and fine-tuning from scratch (n = 27) were the two most favored approaches. Only a few studies applied feature extractor hybrid (n = 7) and fine-tuning (n = 3) with pretrained models. CONCLUSION The investigated studies demonstrated the efficacy of transfer learning despite the data scarcity. We encourage data scientists and practitioners to use deep models (e.g. ResNet or Inception) as feature extractors, which can save computational costs and time without degrading the predictive power.
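Two of the TL configurations the review distinguishes can be sketched with torchvision as below; the ResNet-50 backbone and binary head are illustrative choices, not recommendations from the review itself.

```python
# (a) Feature extractor vs (b) full fine-tuning, sketched with torchvision.
import torch.nn as nn
import torchvision.models as models

# (a) Feature extractor: freeze the pretrained backbone, train only a new head.
fe = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in fe.parameters():
    p.requires_grad = False
fe.fc = nn.Linear(fe.fc.in_features, 2)  # the replacement head stays trainable

# (b) Fine-tuning: start from the same weights but update every layer.
ft = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
ft.fc = nn.Linear(ft.fc.in_features, 2)
# ...train `ft` end-to-end, typically with a small learning rate
```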
Collapse
Affiliation(s)
- Hee E Kim
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany.
| | - Alejandro Cosa-Linan
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
| | - Nandhini Santhanam
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
| | - Mahboubeh Jannesari
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
| | - Mate E Maros
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
| | - Thomas Ganslandt
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Chair of Medical Informatics, Friedrich-Alexander-Universität Erlangen-Nürnberg, Wetterkreuz 15, 91058, Erlangen, Germany
| |
Collapse
|
36
|
Analysis of Sports Video Intelligent Classification Technology Based on Neural Network Algorithm and Transfer Learning. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:7474581. [PMID: 35371207 PMCID: PMC8970915 DOI: 10.1155/2022/7474581] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/28/2021] [Accepted: 01/28/2022] [Indexed: 11/17/2022]
Abstract
With the rapid development of information technology, digital content shows an explosive growth trend. Sports video classification is of great significance for archiving digital content on servers. Therefore, accurate classification of sports video categories is realized using deep neural network (DNN) algorithms, convolutional neural networks (CNN), and transfer learning. Block brightness comparison coding (BICC) and a block color histogram are proposed, which reflect the brightness relationships between different regions of a video and the color information within each region. The maximum mean discrepancy (MMD) algorithm is adopted to achieve the purpose of transfer learning. On the basis of the obtained sports video image features, a sports video image classification method based on a deep learning coding model is adopted to realize sports video classification. The results show that, for different types of sports videos, the overall classification effect of this method is clearly better than that of other current sports video classification methods, greatly improving the classification effect for sports videos.
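The MMD criterion named above has a compact kernel form; below is one standard RBF-kernel rendering (the bandwidth, batch shapes, and the biased estimator that keeps the diagonal terms are illustrative simplifications).

```python
# RBF-kernel maximum mean discrepancy between source and target batches.
import torch

def mmd_rbf(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    def k(a, b):
        d = torch.cdist(a, b) ** 2              # squared pairwise distances
        return torch.exp(-d / (2 * sigma ** 2))
    # biased estimator: mean within-domain similarity minus cross-domain similarity
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

source = torch.randn(32, 128)
target = torch.randn(32, 128) + 0.5  # shifted target domain
print(mmd_rbf(source, target))
```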
Collapse
|
37
|
High-performance medical image secret sharing using super-resolution for CAD systems. APPL INTELL 2022. [DOI: 10.1007/s10489-021-03095-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
|
38
|
TTCNN: A Breast Cancer Detection and Classification towards Computer-Aided Diagnosis Using Digital Mammography in Early Stages. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12073273] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
Breast cancer is a major research area in the medical image analysis field; it is a dangerous disease and a major cause of death among women. Early and accurate diagnosis of breast cancer based on digital mammograms can enhance disease detection accuracy. Medical imagery must be detected, segmented, and classified for computer-aided diagnosis (CAD) systems to help radiologists accurately diagnose breast lesions. Therefore, an accurate breast cancer detection and classification approach is proposed for the screening of mammograms. In this paper, we present a deep learning system that can identify breast cancer in mammogram screening images using an "end-to-end" training strategy that efficiently uses mammography images for computer-aided breast cancer recognition in the early stages. First, the proposed approach implements a modified contrast enhancement method to refine the edge detail of the source mammogram images. Next, the transferable texture convolutional neural network (TTCNN) is presented to enhance classification performance; an energy layer is integrated in this work to extract texture features from the convolutional layer. The proposed approach consists of only three layers of convolution and one energy layer, rather than a pooling layer. In the third stage, we analyzed the performance of TTCNN based on the deep features of convolutional neural network models (InceptionResNet-V2, Inception-V3, VGG-16, VGG-19, GoogLeNet, ResNet-18, ResNet-50, and ResNet-101). The deep features are extracted by determining the best layers, which enhances the classification accuracy. In the fourth stage, all the extracted feature vectors are fused using the convolutional sparse image decomposition approach and, finally, the best features are selected using the entropy-controlled firefly method. The proposed approach was evaluated on the DDSM, INbreast, and MIAS datasets and attained an average accuracy of 97.49%. Our proposed transferable texture CNN-based method for classifying screening mammograms outperformed prior methods. These findings demonstrate that automatic deep learning algorithms can be trained to achieve high accuracy in diverse mammography images, and offer great potential for improving clinical tools to minimize false positive and false negative screening mammography results.
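One plausible reading of the energy layer mentioned above is a per-channel mean-absolute-activation summary standing in for spatial pooling; the sketch below is an assumption for illustration, not the paper's implementation.

```python
# Hypothetical texture-energy layer: per-channel mean absolute activation.
import torch

def energy_layer(feature_maps: torch.Tensor) -> torch.Tensor:
    # feature_maps: (B, C, H, W) -> (B, C) texture-energy descriptor
    return feature_maps.abs().mean(dim=(2, 3))

desc = energy_layer(torch.randn(2, 64, 28, 28))  # (2, 64) descriptor
```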
Collapse
|
39
|
Wang A, Togo R, Ogawa T, Haseyama M. Defect Detection of Subway Tunnels Using Advanced U-Net Network. SENSORS 2022; 22:s22062330. [PMID: 35336501 PMCID: PMC8955254 DOI: 10.3390/s22062330] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/07/2022] [Revised: 03/08/2022] [Accepted: 03/13/2022] [Indexed: 12/02/2022]
Abstract
In this paper, we present a novel defect detection model based on an improved U-Net architecture. As a semantic segmentation task, defect detection involves the problems of background–foreground imbalance, multi-scale targets, and feature similarity between the background and defects in real-world data. Conventional, general convolutional neural network (CNN)-based networks mainly focus on natural-image tasks and are insensitive to the problems in our task. The proposed method has a network design for multi-scale segmentation based on the U-Net architecture, including an atrous spatial pyramid pooling (ASPP) module and an inception module, and can detect various types of defects, in contrast to conventional simple CNN-based methods. Through experiments using a real-world subway tunnel image dataset, the proposed method showed better performance than general semantic segmentation approaches, including state-of-the-art methods. Additionally, we showed that our method can achieve an excellent detection balance among multi-scale defects.
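A minimal ASPP block of the kind the improved U-Net adds is sketched below; the channel widths and dilation rates shown are common defaults, not necessarily the paper's.

```python
# Minimal atrous spatial pyramid pooling (ASPP) block.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch=256, out_ch=256, rates=(1, 6, 12, 18)):
        super().__init__()
        # parallel atrous convolutions at several dilation rates
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        )
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)  # fuse branches

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

y = ASPP()(torch.randn(1, 256, 32, 32))  # spatial size preserved at every rate
```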
Collapse
Affiliation(s)
- An Wang
- Graduate School of Information Science and Technology, Hokkaido University, N-14, W-9, Kita-ku, Sapporo 060-0814, Japan
| | - Ren Togo
- Faculty of Information Science and Technology, Hokkaido University, N-14, W-9, Kita-ku, Sapporo 060-0814, Japan; (R.T.); (T.O.); (M.H.)
| | - Takahiro Ogawa
- Faculty of Information Science and Technology, Hokkaido University, N-14, W-9, Kita-ku, Sapporo 060-0814, Japan; (R.T.); (T.O.); (M.H.)
| | - Miki Haseyama
- Faculty of Information Science and Technology, Hokkaido University, N-14, W-9, Kita-ku, Sapporo 060-0814, Japan; (R.T.); (T.O.); (M.H.)
| |
Collapse
|
40
|
Ayana G, Park J, Choe SW. Patchless Multi-Stage Transfer Learning for Improved Mammographic Breast Mass Classification. Cancers (Basel) 2022; 14:cancers14051280. [PMID: 35267587 PMCID: PMC8909211 DOI: 10.3390/cancers14051280] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2021] [Revised: 02/22/2022] [Accepted: 02/24/2022] [Indexed: 02/01/2023] Open
Abstract
Simple Summary In this study, we propose a novel deep-learning method based on multi-stage transfer learning (MSTL) from ImageNet and cancer cell line image pre-trained models to classify mammographic masses as either benign or malignant. The proposed method alleviates the challenge of obtaining large amounts of labeled mammogram training data by utilizing a large number of cancer cell line microscopic images as an intermediate domain of learning between the natural domain (ImageNet) and the medical domain (mammography). Moreover, our method does not utilize patch separation (to segment the region of interest before classification), which renders it computationally simple and fast compared to previous studies. The findings of this study are of crucial importance in the early diagnosis of breast cancer in young women with dense breasts, because mammography does not provide reliable diagnosis in such cases.
Abstract Despite great achievements in classifying mammographic breast-mass images via deep learning (DL), obtaining large amounts of training data and ensuring generalization across different datasets with robust and well-optimized algorithms remain a challenge. ImageNet-based transfer learning (TL) and patch classifiers have been utilized to address these challenges. However, researchers have been unable to achieve the desired performance for DL to be used as a standalone tool. In this study, we propose a novel multi-stage TL from ImageNet and cancer cell line image pre-trained models to classify mammographic breast masses as either benign or malignant. We trained our model on three public datasets: Digital Database for Screening Mammography (DDSM), INbreast, and Mammographic Image Analysis Society (MIAS). In addition, a mixed dataset of the images from these three datasets was used to train the model. We obtained an average five-fold cross-validation AUC of 1, 0.9994, 0.9993, and 0.9998 for DDSM, INbreast, MIAS, and the mixed dataset, respectively. Moreover, the observed performance improvement using our method against the patch-based method was statistically significant, with a p-value of 0.0029. Furthermore, our patchless approach performed better than patch- and whole-image-based methods, improving test accuracy by 8% (91.41% vs. 99.34%) on the INbreast dataset. The proposed method is of significant importance in addressing the need for a large training dataset as well as reducing the computational burden of training and implementing mammography-based deep-learning models for the early diagnosis of breast cancer.
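The multi-stage idea can be sketched as two successive fine-tuning passes over the same backbone; the ResNet-18 choice, class counts, and omitted training loops are placeholders, not the paper's configuration.

```python
# Sketch of multi-stage transfer: ImageNet -> cell-line images -> mammograms.
import torch.nn as nn
import torchvision.models as models

# Stage 0: start from ImageNet weights.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

def swap_head(net: nn.Module, n_classes: int) -> nn.Module:
    net.fc = nn.Linear(net.fc.in_features, n_classes)  # new classification head
    return net

# Stage 1: fine-tune on the intermediate cell-line domain (training loop omitted).
model = swap_head(model, n_classes=4)  # hypothetical number of cell-line classes

# Stage 2: fine-tune the same weights on mammographic masses.
model = swap_head(model, n_classes=2)  # benign vs malignant
```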
Collapse
Affiliation(s)
- Gelan Ayana
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea; (G.A.); (J.P.)
| | - Jinhyung Park
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea; (G.A.); (J.P.)
| | - Se-woon Choe
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea; (G.A.); (J.P.)
- Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
- Correspondence: Tel.: +82-54-478-7781; Fax: +82-54-462-1049
| |
Collapse
|
41
|
Gastounioti A, Desai S, Ahluwalia VS, Conant EF, Kontos D. Artificial intelligence in mammographic phenotyping of breast cancer risk: a narrative review. Breast Cancer Res 2022; 24:14. [PMID: 35184757 PMCID: PMC8859891 DOI: 10.1186/s13058-022-01509-z] [Citation(s) in RCA: 33] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2021] [Accepted: 02/08/2022] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Improved breast cancer risk assessment models are needed to enable personalized screening strategies that achieve a better harm-to-benefit ratio, based on earlier detection and better breast cancer outcomes than existing screening guidelines. Computational mammographic phenotypes have demonstrated a promising role in breast cancer risk prediction. With the recent exponential growth of computational efficiency, the artificial intelligence (AI) revolution, driven by the introduction of deep learning, has expanded the utility of imaging in predictive models. Consequently, AI-based imaging-derived data have led to some of the most promising tools for precision breast cancer screening. MAIN BODY This review aims to synthesize the current state-of-the-art applications of AI in mammographic phenotyping of breast cancer risk. We discuss the fundamentals of AI and explore the computing advancements that have made AI-based image analysis essential in refining breast cancer risk assessment. Specifically, we discuss the use of data derived from digital mammography as well as digital breast tomosynthesis. Different aspects of breast cancer risk assessment are targeted, including (a) robust and reproducible evaluations of breast density, a well-established breast cancer risk factor; (b) assessment of a woman's inherent breast cancer risk; and (c) identification of women who are likely to be diagnosed with breast cancers after a negative or routine screen due to masking or the rapid and aggressive growth of a tumor. Lastly, we discuss AI challenges unique to the computational analysis of mammographic imaging, as well as future directions for this promising research field. CONCLUSIONS We provide a useful reference for AI researchers investigating image-based breast cancer risk assessment, while indicating key priorities and challenges that, if properly addressed, could accelerate the implementation of AI-assisted risk stratification to further refine and individualize breast cancer screening strategies.
Affiliation(s)
- Aimilia Gastounioti
- Department of Radiology, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, 19104, USA; Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Shyam Desai
- Department of Radiology, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, 19104, USA
- Vinayak S Ahluwalia
- Department of Radiology, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, 19104, USA; Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Emily F Conant
- Department of Radiology, Hospital of the University of Pennsylvania, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Despina Kontos
- Department of Radiology, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, 19104, USA
42
Evaluation of a Generative Adversarial Network to Improve Image Quality and Reduce Radiation-Dose during Digital Breast Tomosynthesis. Diagnostics (Basel) 2022; 12:diagnostics12020495. [PMID: 35204582 PMCID: PMC8871529 DOI: 10.3390/diagnostics12020495] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2021] [Revised: 01/31/2022] [Accepted: 02/08/2022] [Indexed: 01/27/2023] Open
Abstract
In this study, we evaluated the improvement in digital breast tomosynthesis image quality under low-radiation-dose conditions achieved by pre-reconstruction processing with a conditional generative adversarial network [cGAN (pix2pix)]. Pix2pix pre-reconstruction processing followed by filtered back projection (FBP) was compared with pre-reconstruction processing with and without multiscale bilateral filtering (MSBF). Noise reduction and contrast preservation were compared using the full width at half-maximum (FWHM), contrast-to-noise ratio (CNR), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) in the in-focus plane of a BR3D phantom at various radiation doses [the reference dose (automatic exposure control reference dose: AECrd) and 50% and 75% reductions of AECrd] and phantom thicknesses (40 mm, 50 mm, and 60 mm). The overall performance of pix2pix pre-reconstruction processing was effective in terms of FWHM, PSNR, and SSIM. At ~50% radiation-dose reduction, FWHM yielded good results independently of the microcalcification size used in the BR3D phantom, with good noise reduction and preserved contrast. PSNR results showed that pix2pix pre-reconstruction processing gave the minimum error relative to the reference FBP images at approximately 50% radiation-dose reduction. SSIM analysis indicated that pix2pix pre-reconstruction processing yielded superior similarity compared with processing with and without MSBF at ~50% radiation-dose reduction, with features most similar to the reference FBP images. Thus, pix2pix pre-reconstruction processing is promising for reducing noise while preserving contrast and reducing the radiation dose in clinical practice.
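The figures of merit used in this evaluation (PSNR, SSIM, CNR) are standard and easy to reproduce; below is a minimal sketch using scikit-image. The file names and ROI coordinates are placeholders, and the CNR definition shown (mean difference over background standard deviation) is one common convention rather than necessarily the study's exact formula.

```python
# Sketch of the reported figures of merit (PSNR, SSIM, CNR), comparing a
# cGAN-processed low-dose slice against a reference-dose FBP slice. The file
# names and ROI coordinates are placeholders, and the CNR formula shown is
# one common convention, not necessarily the study's exact definition.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = np.load("fbp_reference_dose.npy")   # in-focus plane, reference dose
processed = np.load("pix2pix_low_dose.npy")     # cGAN-processed, ~50% dose
rng = reference.max() - reference.min()

psnr = peak_signal_noise_ratio(reference, processed, data_range=rng)
ssim = structural_similarity(reference, processed, data_range=rng)

def cnr(img, signal_roi, background_roi):
    """Contrast-to-noise ratio between a lesion ROI and a background ROI."""
    s, b = img[signal_roi], img[background_roi]
    return abs(s.mean() - b.mean()) / b.std()

signal = (slice(100, 120), slice(100, 120))       # simulated microcalcification
background = (slice(200, 240), slice(200, 240))   # homogeneous phantom region
print(f"PSNR={psnr:.2f} dB, SSIM={ssim:.4f}, "
      f"CNR={cnr(processed, signal, background):.2f}")
```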
43
Imagawa K, Shiomoto K. Performance change with the number of training data: A case study on the binary classification of COVID-19 chest X-ray by using convolutional neural networks. Comput Biol Med 2022; 142:105251. [PMID: 35093727 DOI: 10.1016/j.compbiomed.2022.105251] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2021] [Revised: 01/15/2022] [Accepted: 01/19/2022] [Indexed: 12/24/2022]
Abstract
A key feature of artificial intelligence/machine learning-based medical devices is their ability to learn from real-world data. However, obtaining a large amount of training data in the early phase is difficult, and device performance may change after first introduction to the market. To bring safe and effective devices to market in a timely manner, an appropriate post-market performance change plan must be established at the time of premarket approval. In this work, we evaluate how performance changes as the number of training data varies. Two publicly available datasets were used: one consisting of 4000 COVID-19 images and another comprising 4000 Normal images. The data were split into 7000 images for training and validation and 1000 images for testing, and the training and validation data were selected as 16 different datasets. Two convolutional neural networks, AlexNet and ResNet34, with and without fine-tuning, were used to classify the two image types. The area under the curve, sensitivity, and specificity were evaluated for each dataset. Our results show that all performance measures improved rapidly as the number of training data increased and then reached an equilibrium state. AlexNet outperformed ResNet34 when the number of images was small; this difference tended to decrease as the number of training data increased, and fine-tuning improved all performance measures. In conclusion, the appropriate model and method should be selected considering the intended performance and the available amount of data.
Affiliation(s)
- Kuniki Imagawa
- Tokyo City University, Faculty of Information Technology, 1-28-1 Tamazutsumi, Setagaya-ku, Tokyo, 158-8557, Japan
- Kohei Shiomoto
- Tokyo City University, Faculty of Information Technology, 1-28-1 Tamazutsumi, Setagaya-ku, Tokyo, 158-8557, Japan
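The experimental design of this entry, retraining the same architecture on progressively larger training subsets and tracking test AUC, can be sketched as follows. The directory names, subset sizes, and single-epoch training pass are illustrative placeholders, not the authors' protocol.

```python
# Sketch of the study design: fine-tune the same CNN on progressively larger
# training subsets and record the test AUC, tracing the performance curve
# described above. Directory names, subset sizes, and the single-epoch
# training pass are illustrative placeholders, not the authors' protocol.
import numpy as np
import torch
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, models, transforms
from sklearn.metrics import roc_auc_score

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
full_train = datasets.ImageFolder("cxr_train/", tfm)    # COVID-19 vs. Normal
test_loader = DataLoader(datasets.ImageFolder("cxr_test/", tfm), batch_size=64)

for n in [250, 500, 1000, 2000, 4000, 7000]:            # growing training sets
    idx = np.random.choice(len(full_train), size=n, replace=False)
    loader = DataLoader(Subset(full_train, idx), batch_size=32, shuffle=True)
    model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
    model.fc = torch.nn.Linear(model.fc.in_features, 2)  # binary head
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    model.train()
    for x, y in loader:                                  # one epoch for brevity
        opt.zero_grad()
        torch.nn.functional.cross_entropy(model(x), y).backward()
        opt.step()
    model.eval()
    scores, labels = [], []
    with torch.no_grad():
        for x, y in test_loader:
            scores += torch.softmax(model(x), 1)[:, 1].tolist()
            labels += y.tolist()
    print(f"{n} training images -> test AUC {roc_auc_score(labels, scores):.3f}")
```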
44
Ayana G, Park J, Jeong JW, Choe SW. A Novel Multistage Transfer Learning for Ultrasound Breast Cancer Image Classification. Diagnostics (Basel) 2022; 12:135. [PMID: 35054303 PMCID: PMC8775102 DOI: 10.3390/diagnostics12010135] [Citation(s) in RCA: 40] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2021] [Revised: 12/24/2021] [Accepted: 12/30/2021] [Indexed: 12/31/2022] Open
Abstract
Breast cancer diagnosis is one of the many areas that have taken advantage of artificial intelligence to achieve better performance, despite the fact that the availability of large medical image datasets remains a challenge. Transfer learning (TL) enables deep learning algorithms to overcome the shortage of training data in constructing an efficient model by transferring knowledge from a given source task to a target task. However, in most cases, ImageNet (natural image) pre-trained models that do not include medical images are utilized for transfer learning to medical images. Considering that microscopic cancer cell line images can be acquired in large amounts, we argue that learning from both natural and medical datasets improves performance in ultrasound breast cancer image classification. The proposed multistage transfer learning (MSTL) algorithm was implemented using three pre-trained models, EfficientNetB2, InceptionV3, and ResNet50, with three optimizers: Adam, Adagrad, and stochastic gradient descent (SGD). Dataset sizes of 20,400 cancer cell images, 200 ultrasound images from Mendeley, and 400 ultrasound images from the MT-Small-Dataset were used. ResNet50-Adagrad-based MSTL achieved a test accuracy of 99 ± 0.612% on the Mendeley dataset and 98.7 ± 1.1% on the MT-Small-Dataset, averaged over 5-fold cross validation. A p-value of 0.01191 was achieved when comparing MSTL against ImageNet-based TL on the Mendeley dataset. The result is a significant improvement in the performance of artificial intelligence methods for ultrasound breast cancer classification compared with state-of-the-art methods and could remarkably improve the early diagnosis of breast cancer in young women.
Affiliation(s)
- Gelan Ayana
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
- Jinhyung Park
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
- Jin-Woo Jeong
- Department of Data Science, Seoul National University of Science and Technology, Seoul 01811, Korea
- Se-Woon Choe
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
- Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
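The fold-wise comparison behind a p-value like the one reported here can be illustrated with a paired test over cross-validation folds. The accuracy values below are hypothetical placeholders, and the choice of a paired t-test is an assumption for illustration; the abstract does not state the exact test used.

```python
# Sketch of a fold-wise comparison like the one behind the reported p-value:
# per-fold accuracies of MSTL vs. plain ImageNet TL compared with a paired
# t-test. The accuracy values are hypothetical placeholders, and the choice
# of a paired t-test is an assumption for illustration.
import numpy as np
from scipy import stats

mstl_acc = np.array([0.990, 0.985, 0.995, 0.992, 0.988])      # hypothetical folds
imagenet_acc = np.array([0.960, 0.955, 0.970, 0.965, 0.958])  # hypothetical folds

t, p = stats.ttest_rel(mstl_acc, imagenet_acc)  # paired: same folds, two methods
print(f"MSTL mean={mstl_acc.mean():.3f}, TL mean={imagenet_acc.mean():.3f}, p={p:.5f}")
```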
45
Lee J, Nishikawa RM. Identifying Women With Mammographically-Occult Breast Cancer Leveraging GAN-Simulated Mammograms. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:225-236. [PMID: 34460371 PMCID: PMC8799372 DOI: 10.1109/tmi.2021.3108949] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Our objective is to show the feasibility of using simulated mammograms to detect mammographically-occult (MO) cancer in women with dense breasts and a normal screening mammogram, who could be triaged for additional screening with magnetic resonance imaging (MRI) or ultrasound. We developed a conditional generative adversarial network (CGAN) to simulate a mammogram with normal appearance using the opposite mammogram as the condition. We used a convolutional neural network (CNN) trained on Radon Cumulative Distribution Transform (RCDT)-processed mammograms to detect MO cancer. For training the CGAN, we used screening mammograms of 1366 women. For MO cancer detection, we used screening mammograms of 333 women (97 with MO cancer) with dense breasts. We simulated the right mammogram for normal controls and the cancer-side mammogram for MO cancer cases. We created two RCDT images, one from a real mammogram pair and another from a real-simulated mammogram pair. We fine-tuned a VGG16 on the resulting RCDT images to classify the women with MO cancer. We compared the classification performance of the CNN trained on fused RCDT images, CNNFused, to that of CNNs trained only on real RCDT images, CNNReal, and only on simulated RCDT images, CNNSimulated. The test AUC for CNNFused was 0.77 with a 95% confidence interval (95CI) of [0.71, 0.83], which was statistically better (p-value < 0.02) than the CNNReal AUC of 0.70 with a 95CI of [0.64, 0.77] and the CNNSimulated AUC of 0.68 with a 95CI of [0.62, 0.75]. These results show that CGAN-simulated mammograms can help MO cancer detection.
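The AUC-with-95CI style of reporting used above can be reproduced generically by case-level bootstrapping; a sketch follows with synthetic scores and labels. The bootstrap is an assumed interval estimator; the paper may have used a different method (e.g., DeLong's).

```python
# Sketch of AUC reporting with a 95% confidence interval via case-level
# bootstrapping, in the style of the CNNFused/CNNReal comparison above. The
# scores and labels are synthetic, and bootstrapping is an assumed interval
# estimator; the paper may have used another method (e.g., DeLong's).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=333)                  # 1 = MO cancer case
scores = np.clip(labels * 0.3 + rng.normal(0.5, 0.2, size=333), 0, 1)

auc = roc_auc_score(labels, scores)
boot = []
for _ in range(2000):                                  # resample cases
    idx = rng.integers(0, len(labels), size=len(labels))
    if labels[idx].min() == labels[idx].max():         # need both classes
        continue
    boot.append(roc_auc_score(labels[idx], scores[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC={auc:.2f}, 95% CI=[{lo:.2f}, {hi:.2f}]")
```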
46
Bailey JD, DeFulio A. Predicting Substance Use Treatment Failure with Transfer Learning. Subst Use Misuse 2022; 57:1982-1987. [PMID: 36128946 DOI: 10.1080/10826084.2022.2125272] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
Abstract
Transfer learning, which involves repurposing a model trained on a related task, may allow for better predictions with substance use data than models trained on the target data alone. This approach may also be useful for small clinical datasets. The current study examined a method of classifying substance use treatment success using transfer learning applied to data from a nationwide database. We trained a convolutional neural network on a heroin use treatment dataset, then trained and tested it on a smaller opioid use treatment dataset. We compared this model with a baseline model that did not benefit from transfer learning and with a tuned random forest (RF). The goal was to see whether model weights transfer across related substances and from large to small datasets. The transfer model outperformed both the RF model and the baseline model. These findings suggest that leveraging the power of large datasets for transfer learning may be an effective approach to predicting substance use disorder (SUD) treatment outcomes; with transfer learning, it is possible to achieve performance better than that of an RF.
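The source-to-target transfer setup described here can be sketched as pretraining a network on a large source dataset, fine-tuning it on the smaller target dataset, and comparing against a random forest. Everything below is synthetic and simplified: the features are random placeholders, and a small fully connected network stands in for the study's convolutional model.

```python
# Sketch of the transfer setup: pretrain on a large source dataset (heroin
# treatment episodes), fine-tune on a smaller target dataset (opioid
# treatment episodes), and compare with a tuned random forest. All features
# here are synthetic placeholders, and a small fully connected network
# stands in for the study's convolutional model.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X_src, y_src = rng.normal(size=(20000, 32)), rng.integers(0, 2, 20000)  # source
X_tgt, y_tgt = rng.normal(size=(1000, 32)), rng.integers(0, 2, 1000)    # target

def train(net, X, y, epochs=10, lr=1e-3):
    """Full-batch training, kept minimal for the sketch."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    X, y = torch.tensor(X, dtype=torch.float32), torch.tensor(y)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.cross_entropy(net(X), y).backward()
        opt.step()
    return net

net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 2))
net = train(net, X_src, y_src)               # pretrain on the large source task
net = train(net, X_tgt[:800], y_tgt[:800])   # fine-tune on the small target task

rf = RandomForestClassifier(n_estimators=300).fit(X_tgt[:800], y_tgt[:800])
with torch.no_grad():
    x_test = torch.tensor(X_tgt[800:], dtype=torch.float32)
    p_net = torch.softmax(net(x_test), 1)[:, 1].numpy()
print("transfer AUC:", roc_auc_score(y_tgt[800:], p_net))
print("RF AUC:", roc_auc_score(y_tgt[800:], rf.predict_proba(X_tgt[800:])[:, 1]))
```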
47
Luca AR, Ursuleanu TF, Gheorghe L, Grigorovici R, Iancu S, Hlusneac M, Grigorovici A. Impact of quality, type and volume of data used by deep learning models in the analysis of medical images. INFORMATICS IN MEDICINE UNLOCKED 2022. [DOI: 10.1016/j.imu.2022.100911] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022] Open
48
Liu G, Liao Y, Wang F, Zhang B, Zhang L, Liang X, Wan X, Li S, Li Z, Zhang S, Cui S. Medical-VLBERT: Medical Visual Language BERT for COVID-19 CT Report Generation With Alternate Learning. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2021; 32:3786-3797. [PMID: 34370672 DOI: 10.1109/tnnls.2021.3099165] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/14/2023]
Abstract
Medical imaging technologies, including computed tomography (CT) and chest X-ray (CXR), are widely employed to facilitate the diagnosis of COVID-19. Since manual report writing is usually too time-consuming, a more intelligent auxiliary medical system that can generate medical reports automatically and immediately is urgently needed. In this article, we propose the medical visual language BERT (Medical-VLBERT) model to identify abnormalities on COVID-19 scans and generate medical reports automatically based on the detected lesion regions. To produce more accurate medical reports and minimize the visual-linguistic differences, this model adopts an alternate learning strategy with two procedures: knowledge pretraining and transferring. More precisely, the knowledge pretraining procedure memorizes knowledge from medical texts, while the transferring procedure utilizes the acquired knowledge to generate professional medical sentences from observations of medical images. In practice, for automatic medical report generation on COVID-19 cases, we constructed a dataset of 368 medical findings in Chinese and 1104 chest CT scans from The First Affiliated Hospital of Jinan University, Guangzhou, China, and The Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, China. In addition, to alleviate the insufficiency of COVID-19 training samples, our model was first trained on the large-scale Chinese CX-CHR dataset and then transferred to the COVID-19 CT dataset for further fine-tuning. The experimental results showed that Medical-VLBERT achieved state-of-the-art performance on terminology prediction and report generation with the Chinese COVID-19 CT dataset and the CX-CHR dataset. The Chinese COVID-19 CT dataset is available at https://covid19ct.github.io/.
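The alternate learning strategy, interleaving a text-only knowledge-pretraining phase with an image-conditioned transferring phase over shared parameters, can be shown schematically. The toy model below (an embedding, a GRU decoder, and a linear projection of image features) is an assumed stand-in, not Medical-VLBERT's architecture, and all data are random tensors.

```python
# Schematic sketch of the alternate learning strategy: the model interleaves
# a text-only "knowledge pretraining" phase with an image-conditioned
# "transferring" phase over shared parameters. The toy decoder below (an
# embedding, a GRU, and a linear projection of image features) is an assumed
# stand-in, not Medical-VLBERT; all data are random tensors.
import torch
import torch.nn as nn

vocab, dim = 1000, 128
embed = nn.Embedding(vocab, dim)
decoder = nn.GRU(dim, dim, batch_first=True)
head = nn.Linear(dim, vocab)
visual = nn.Linear(2048, dim)        # maps CNN image features into token space
params = (list(embed.parameters()) + list(decoder.parameters())
          + list(head.parameters()) + list(visual.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)

def lm_loss(tokens, prefix=None):
    """Next-token loss; optionally conditioned on a visual prefix vector."""
    x = embed(tokens[:, :-1])
    if prefix is not None:                     # prepend image features
        x = torch.cat([prefix.unsqueeze(1), x], dim=1)
    out, _ = decoder(x)
    logits = head(out[:, -(tokens.size(1) - 1):])
    return nn.functional.cross_entropy(
        logits.reshape(-1, vocab), tokens[:, 1:].reshape(-1))

for step in range(100):
    opt.zero_grad()
    if step % 2 == 0:                          # knowledge pretraining: text only
        loss = lm_loss(torch.randint(0, vocab, (8, 20)))
    else:                                      # transferring: image-conditioned
        img_feat = torch.randn(8, 2048)        # placeholder CT features
        report = torch.randint(0, vocab, (8, 20))
        loss = lm_loss(report, prefix=visual(img_feat))
    loss.backward()
    opt.step()
```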
49
Chen J, Jiao J, He S, Han G, Qin J. Few-Shot Breast Cancer Metastases Classification via Unsupervised Cell Ranking. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2021; 18:1914-1923. [PMID: 31841420 DOI: 10.1109/tcbb.2019.2960019] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Detection of tumor metastases is of great importance for the treatment of breast cancer patients. Various CNN (convolutional neural network)-based methods achieve excellent performance in object detection/segmentation. However, the detection of metastases in hematoxylin and eosin (H&E)-stained whole-slide images (WSI) remains challenging, mainly for two reasons: (1) the resolution of the images is too large, and (2) labeled training data are lacking. Whole-slide images are generally stored in a multi-resolution structure with multiple downsampled tiles, and it is difficult to feed a whole image into memory without compression. Moreover, labeling images is time-consuming and expensive for pathologists. In this paper, we study the problem of detecting breast cancer metastases in pathological images at the patch level. To address the abovementioned challenges, we propose a few-shot learning method to classify whether an image patch contains tumor cells. Specifically, we propose a patch-level unsupervised cell-ranking approach, which relies only on images with limited labels. The main idea of the proposed method is that when cropping a patch A from the WSI and further cropping a sub-patch B from A, the cell number of A is always larger than that of B. Based on this observation, we make use of unlabeled images to learn the ranking information of cell counting and thereby extract abstract features. Experimental results show that our method effectively improves patch-level classification accuracy compared with traditional supervised methods. The source code is publicly available at https://github.com/fewshot-camelyon.
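The containment constraint behind the cell-ranking idea (a patch must contain at least as many cells as any sub-patch cropped from it) maps naturally onto a margin ranking loss over an unlabeled pair. The small counting network, crop geometry, and margin below are assumptions for a minimal sketch, not the paper's implementation.

```python
# Sketch of the unsupervised cell-ranking constraint: a patch A must receive
# a predicted cell-count score at least as high as a sub-patch B cropped from
# it, enforced with a margin ranking loss on unlabeled patches. The small
# counting network and crop geometry are stand-ins, not the paper's design.
import torch
import torch.nn as nn

counter = nn.Sequential(                       # scalar "cell count" scorer
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))
opt = torch.optim.Adam(counter.parameters(), lr=1e-4)
rank_loss = nn.MarginRankingLoss(margin=0.1)

for _ in range(100):
    patch_a = torch.rand(8, 3, 128, 128)       # unlabeled WSI patches (placeholder)
    patch_b = patch_a[:, :, 32:96, 32:96]      # sub-patch contained in each patch
    patch_b = nn.functional.interpolate(patch_b, size=128)  # match input size
    score_a = counter(patch_a).squeeze(1)
    score_b = counter(patch_b).squeeze(1)
    # target = 1 encodes the ranking constraint score_a >= score_b + margin
    loss = rank_loss(score_a, score_b, torch.ones_like(score_a))
    opt.zero_grad()
    loss.backward()
    opt.step()
```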
50
Breast DCE-MRI segmentation for lesion detection by multi-level thresholding using student psychological based optimization. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102925] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
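Multi-level thresholding itself is straightforward to demonstrate. In the sketch below, scikit-image's multi-Otsu stands in as the threshold optimizer, whereas the cited work tunes the threshold values with a student-psychology-based metaheuristic; the input file and the choice of four classes are placeholders.

```python
# Sketch of multi-level thresholding for lesion segmentation. The cited work
# optimizes the threshold values with a student-psychology-based metaheuristic;
# here scikit-image's multi-Otsu serves as a stand-in threshold optimizer, and
# the input file and number of classes are placeholders.
import numpy as np
from skimage.filters import threshold_multiotsu

image = np.load("dce_mri_slice.npy")                 # placeholder DCE-MRI slice
thresholds = threshold_multiotsu(image, classes=4)   # 3 thresholds -> 4 regions
regions = np.digitize(image, bins=thresholds)        # label each pixel 0..3
lesion_mask = regions == regions.max()               # brightest class as candidate
print("thresholds:", thresholds, "| candidate lesion pixels:", int(lesion_mask.sum()))
```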