1. Voon W, Hum YC, Tee YK, Yap WS, Lai KW, Nisar H, Mokayed H. IMAML-IDCG: Optimization-based meta-learning with ImageNet feature reusing for few-shot invasive ductal carcinoma grading. Expert Systems with Applications 2024; 257:124969. [DOI: 10.1016/j.eswa.2024.124969]

2. Rehman ZU, Ahmad Fauzi MF, Wan Ahmad WSHM, Abas FS, Cheah PL, Chiew SF, Looi LM. Deep-Learning-Based Approach in Cancer-Region Assessment from HER2-SISH Breast Histopathology Whole Slide Images. Cancers (Basel) 2024; 16:3794. [PMID: 39594748] [PMCID: PMC11593209] [DOI: 10.3390/cancers16223794]
Abstract
Fluorescence in situ hybridization (FISH) is widely regarded as the gold standard for evaluating human epidermal growth factor receptor 2 (HER2) status in breast cancer; however, it poses challenges such as the need for specialized training and issues related to signal degradation from dye quenching. Silver-enhanced in situ hybridization (SISH) serves as an automated alternative, employing permanent staining suitable for bright-field microscopy. Determining HER2 status involves distinguishing between "Amplified" and "Non-Amplified" regions by assessing HER2 and centromere 17 (CEN17) signals in SISH-stained slides. This study is the first to leverage deep learning for classifying Normal, Amplified, and Non-Amplified regions within HER2-SISH whole slide images (WSIs), which are notably more complex to analyze compared to hematoxylin and eosin (H&E)-stained slides. Our proposed approach consists of a two-stage process: first, we evaluate deep-learning models on annotated image regions, and then we apply the most effective model to WSIs for regional identification and localization. Subsequently, pseudo-color maps representing each class are overlaid, and the WSIs are reconstructed with these mapped regions. Using a private dataset of HER2-SISH breast cancer slides digitized at 40× magnification, we achieved a patch-level classification accuracy of 99.9% and a generalization accuracy of 78.8% by applying transfer learning with a Vision Transformer (ViT) model. The robustness of the model was further evaluated through k-fold cross-validation, yielding an average performance accuracy of 98%, with metrics reported alongside 95% confidence intervals to ensure statistical reliability. This method shows significant promise for clinical applications, particularly in assessing HER2 expression status in HER2-SISH histopathology images. It provides an automated solution that can aid pathologists in efficiently identifying HER2-amplified regions, thus enhancing diagnostic outcomes for breast cancer treatment.
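The patch-level stage of the two-stage pipeline above, fine-tuning a pretrained Vision Transformer on three region classes (Normal, Amplified, Non-Amplified), can be sketched in a few lines. A minimal illustration assuming the timm library, a generic ViT-Base variant, and an ImageFolder-style patch directory; the path and hyperparameters are placeholders, not the authors' setup:

```python
import torch
import timm
from torch import nn, optim
from torchvision import datasets, transforms

# Pretrained ViT backbone with a new 3-class head (Normal / Amplified / Non-Amplified).
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=3)

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),
])
train_ds = datasets.ImageFolder("her2_sish_patches/train", transform=tfm)  # hypothetical path
loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = optim.AdamW(model.parameters(), lr=1e-4)

model.train()
for images, labels in loader:          # one epoch shown; repeat as needed
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

The same fine-tuned classifier would then be slid over whole-slide-image patches to produce the pseudo-color class maps described in the abstract.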
Affiliations
- Zaka Ur Rehman: Faculty of Engineering, Multimedia University, Cyberjaya 63100, Malaysia
- Wan Siti Halimatul Munirah Wan Ahmad: Faculty of Engineering, Multimedia University, Cyberjaya 63100, Malaysia; Institute for Research, Development and Innovation, IMU University, Bukit Jalil, Kuala Lumpur 57000, Malaysia
- Fazly Salleh Abas: Faculty of Engineering and Technology, Multimedia University, Bukit Beruang, Melaka 75450, Malaysia
- Phaik-Leng Cheah: Department of Pathology, University Malaya-Medical Center, Kuala Lumpur 50603, Malaysia
- Seow-Fan Chiew: Department of Pathology, University Malaya-Medical Center, Kuala Lumpur 50603, Malaysia
- Lai-Meng Looi: Department of Pathology, University Malaya-Medical Center, Kuala Lumpur 50603, Malaysia

3. Liu W, Liang S, Qin X. A novel embedded kernel CNN-PCFF algorithm for breast cancer pathological image classification. Sci Rep 2024; 14:23758. [PMID: 39390003] [PMCID: PMC11467218] [DOI: 10.1038/s41598-024-74025-z]
Abstract
Early screening of breast cancer through image recognition technology can significantly increase the survival rate of patients, so breast cancer pathological images are of great significance for medical diagnosis and clinical research. In recent years, numerous deep learning models have been applied to breast cancer image classification, with deep CNNs being a typical representative. Because mainstream CNN architectures such as VGG and Inception use many small convolutional kernels at multiple depths, the resulting image features often have high dimensionality. Although high dimensionality can provide more fine-grained features, it also increases the computational complexity of subsequent classifiers and may even lead to the curse of dimensionality and overfitting. To address these issues, a novel embedded kernel CNN principal component feature fusion (CNN-PCFF) algorithm is proposed. The constructed kernel function is embedded in principal component analysis to form multi-kernel principal components. Multi-kernel principal component analysis fuses the high-dimensional features obtained from the convolutional base into a set of representative composite variables, called kernel principal components, thereby achieving dimensionality reduction. Any type of classifier can then be added on top of the multi-kernel principal components. Experimental analysis on two public breast cancer image datasets shows that the proposed algorithm improves the performance of current mainstream CNN architectures and subsequent classifiers. The proposed algorithm is therefore an effective tool for the classification of breast cancer pathological images.
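The core of CNN-PCFF, compressing high-dimensional CNN features with kernel PCA before any downstream classifier, can be illustrated with scikit-learn. This is a single-kernel simplification on synthetic features, not the authors' exact multi-kernel formulation; the feature matrix `X` stands in for features exported from a pretrained convolutional base:

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# X: (n_samples, n_features) CNN features, y: class labels (synthetic placeholders).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4096))
y = rng.integers(0, 2, size=200)

# Kernel PCA compresses the high-dimensional CNN features into a few
# kernel principal components; any classifier can follow (here an SVM).
clf = make_pipeline(
    KernelPCA(n_components=64, kernel="rbf", gamma=1e-4),
    SVC(kernel="rbf", C=1.0),
)
print(cross_val_score(clf, X, y, cv=5).mean())
```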
Affiliations
- Wenbo Liu: School of Mathematics and Statistics, Qiannan Normal University for Nationalities, Duyun 558000, Guizhou, China; Key Laboratory of Complex Systems and Intelligent Optimization of Guizhou Province, Duyun 558000, Guizhou, China
- Shengnan Liang: School of Mathematics and Statistics, Qiannan Normal University for Nationalities, Duyun 558000, Guizhou, China; Key Laboratory of Complex Systems and Intelligent Optimization of Guizhou Province, Duyun 558000, Guizhou, China
- Xiwen Qin: School of Mathematics and Statistics, Changchun University of Technology, Changchun 130012, Jilin, China

4. Șiancu P, Oprinca GC, Vulcu AC, Pătran M, Croitoru AE, Tănăsescu D, Bratu D, Boicean A, Tănăsescu C. The Significance of C-Reactive Protein Value and Tumor Grading for Malignant Tumors: A Systematic Review. Diagnostics (Basel) 2024; 14:2073. [PMID: 39335753] [PMCID: PMC11430861] [DOI: 10.3390/diagnostics14182073]
Abstract
BACKGROUND Malignant tumors represent a significant pathology with a profound global impact on the medical system. The fight against cancer represents a significant challenge, with multidisciplinary teams identifying numerous areas requiring improvement to enhance the prognosis. Facilitating the patient's journey from diagnosis to treatment represents one such area of concern. One area of research interest is the use of various biomarkers to accurately predict the outcome of these patients. A substantial body of research has been conducted over the years examining the relationship between C-reactive protein (CRP) and malignant tumors. The existing literature suggests that combining imaging diagnostic modalities with biomarkers, such as CRP, may enhance diagnostic accuracy. METHODS A systematic review was conducted on the PubMed and Web of Science platforms with the objective of documenting the interrelationship between CRP value and tumor grading for malignant tumors. After the application of the exclusion and inclusion criteria, 17 studies were identified, published between 2002 and 2024, comprising a total of 9727 patients. RESULTS These studies indicate this interrelationship for soft tissue sarcomas and for renal, colorectal, esophageal, pancreatic, brain, bronchopulmonary, ovarian, and mesenchymal tumors. CONCLUSIONS Elevated CRP levels are correlated with higher grading, thereby underscoring the potential utility of this biomarker in clinical prognostication.
Affiliations
- Paul Șiancu: Oncology Department, Sibiu County Emergency Clinical Hospital, 550245 Sibiu, Romania; Preclinical Department, Faculty of Medicine, Lucian Blaga University of Sibiu, 550169 Sibiu, Romania
- George-Călin Oprinca: Preclinical Department, Faculty of Medicine, Lucian Blaga University of Sibiu, 550169 Sibiu, Romania
- Monica Pătran: Oncology Department, Sibiu County Emergency Clinical Hospital, 550245 Sibiu, Romania
- Denisa Tănăsescu: Medical Clinical Department, Faculty of Medicine, Lucian Blaga University of Sibiu, 550169 Sibiu, Romania
- Dan Bratu: Surgical Clinical Department, Faculty of Medicine, Lucian Blaga University of Sibiu, 550169 Sibiu, Romania; Surgical Department, Sibiu County Emergency Clinical Hospital, 550245 Sibiu, Romania
- Adrian Boicean: Medical Clinical Department, Faculty of Medicine, Lucian Blaga University of Sibiu, 550169 Sibiu, Romania; Gastroenterology Department, Sibiu County Emergency Clinical Hospital, 550245 Sibiu, Romania
- Ciprian Tănăsescu: Surgical Clinical Department, Faculty of Medicine, Lucian Blaga University of Sibiu, 550169 Sibiu, Romania; Surgical Department, Sibiu County Emergency Clinical Hospital, 550245 Sibiu, Romania

5. Teoh CL, Tan XJ, Ab Rahman KS, Bakrin IH, Goh KM, Siet JJW, Wan Muhamad WZA. A Quantitative Measurement Method for Nuclear-Pleomorphism Scoring in Breast Cancer. Diagnostics (Basel) 2024; 14:2045. [PMID: 39335724] [PMCID: PMC11431806] [DOI: 10.3390/diagnostics14182045]
Abstract
BACKGROUND/OBJECTIVES Nuclear pleomorphism, a crucial determinant of breast cancer grading under the Nottingham Histopathology Grading (NHG) system, remains inadequately quantified in the existing literature. Motivated by this gap, our study seeks to investigate and establish correlations among morphological features across various scores of nuclear pleomorphism, as per the NHG system. We aim to quantify nuclear pleomorphism across these scores and validate our proposed measurement method against ground-truth data. METHODS Initially, we deconstruct the descriptions of nuclear pleomorphism into three core elements: size, shape, and appearance. These elements are subsequently mathematically modeled into equations, termed ESize, EShape, and EAppearance. These equations are then integrated into a unified model termed Harmonic Mean (HM). The HM equation yields a value approaching 1 for nuclei demonstrating characteristics of score-3 nuclear pleomorphism and near 0 for those exhibiting features of score-1 nuclear pleomorphism. RESULTS The proposed HM model demonstrates promising performance metrics, including Accuracy, Recall, Specificity, Precision, and F1-score, with values of 0.97, 0.96, 0.97, 0.94, and 0.95, respectively. CONCLUSIONS In summary, this study proposes the HM equation as a novel feature for the precise quantification of nuclear pleomorphism in breast cancer.
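The abstract does not reproduce the individual equations, but if ESize, EShape, and EAppearance are each normalized to [0, 1], the unified score presumably takes the standard harmonic-mean form:

\[ \mathrm{HM} = \frac{3}{\dfrac{1}{E_{\mathrm{Size}}} + \dfrac{1}{E_{\mathrm{Shape}}} + \dfrac{1}{E_{\mathrm{Appearance}}}} \]

Under that reading, HM approaches 1 only when all three element scores are simultaneously high (score-3 pleomorphism) and is pulled toward 0 whenever any single element score is low (score-1 pleomorphism).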
Affiliations
- Chai Ling Teoh: Department of Electrical and Electronics Engineering, Faculty of Engineering and Technology, Tunku Abdul Rahman University of Management and Technology (TAR UMT), Jalan Genting Kelang, Setapak, Kuala Lumpur 53300, Malaysia
- Xiao Jian Tan: Department of Electrical and Electronics Engineering, Faculty of Engineering and Technology, TAR UMT, Kuala Lumpur 53300, Malaysia; Biomedical and Bioinformatics Engineering (BBE) Research Group, Centre for Multimodal Signal Processing, TAR UMT, Kuala Lumpur 53300, Malaysia; Sports Engineering Research Centre (SERC), Universiti Malaysia Perlis (UniMAP), Arau 02600, Perlis, Malaysia
- Khairul Shakir Ab Rahman: Department of Pathology, Hospital Tuanku Fauziah, Jalan Tun Abdul Razak, Kangar 01000, Perlis, Malaysia
- Ikmal Hisyam Bakrin: Department of Pathology, Faculty of Medicine and Health Sciences, Universiti Putra Malaysia (UPM), Serdang 43400, Selangor, Malaysia
- Kam Meng Goh: Department of Electrical and Electronics Engineering, Faculty of Engineering and Technology, TAR UMT, Kuala Lumpur 53300, Malaysia; Biomedical and Bioinformatics Engineering (BBE) Research Group, Centre for Multimodal Signal Processing, TAR UMT, Kuala Lumpur 53300, Malaysia
- Joseph Jiun Wen Siet: Department of Electrical and Electronics Engineering, Faculty of Engineering and Technology, TAR UMT, Kuala Lumpur 53300, Malaysia
- Wan Zuki Azman Wan Muhamad: Sports Engineering Research Centre (SERC), UniMAP, Arau 02600, Perlis, Malaysia; Institute of Engineering Mathematics, UniMAP, Kampus Pauh Putra, Arau 02600, Perlis, Malaysia; Centre of Excellence for Advanced Computing (ADVCOMP), UniMAP, Arau 02600, Perlis, Malaysia

6. Carrasco K, Tomalá L, Ramírez Meza E, Meza Bolaños D, Ramírez Montalvan W. Computational Techniques in PET/CT Image Processing for Breast Cancer: A Systematic Mapping Review. ACM Computing Surveys 2024; 56:1-38. [DOI: 10.1145/3648359]
Abstract
The problem arises from the lack of sufficient and comprehensive information about the necessary computer techniques. These techniques are crucial for developing information systems that assist doctors in diagnosing breast cancer, especially those related to positron emission tomography and computed tomography (PET/CT). Despite global efforts in breast cancer prevention and control, the scarcity of literature poses an obstacle to a complete understanding in this area of interest. The methodologies studied were systematic mapping and systematic literature review. For each article, the journal, conference, year of publication, dataset, breast cancer characteristics, PET/CT processing techniques, metrics and diagnostic yield results were identified. Sixty-four articles were analyzed, 44 (68.75%) belong to journals and 20 (31.25%) belong to the conference category. A total of 102 techniques were identified, which were distributed in preprocessing with 7 (6.86%), segmentation with 15 (14.71%), feature extraction with 15 (14.71%), and classification with 65 (63.73%). The techniques with the highest incidence identified in each stage are: Gaussian Filter, SLIC, Local Binary Pattern, and Support Vector Machine with 4, 2, 7, and 35 occurrences, respectively. Support Vector Machine is the predominant technique in the classification stage, due to the fact that Artificial Intelligence is emerging in medical image processing and health care to make expert systems increasingly intelligent and obtain favorable results.

7. Koziarski M, Cyganek B, Niedziela P, Olborski B, Antosz Z, Żydak M, Kwolek B, Wąsowicz P, Bukała A, Swadźba J, Sitkowski P. DiagSet: a dataset for prostate cancer histopathological image classification. Sci Rep 2024; 14:6780. [PMID: 38514661] [PMCID: PMC10958036] [DOI: 10.1038/s41598-024-52183-4]
Abstract
Cancer diseases constitute one of the most significant societal challenges. In this paper, we introduce a novel histopathological dataset for prostate cancer detection. The proposed dataset, consisting of over 2.6 million tissue patches extracted from 430 fully annotated scans, 4675 scans with assigned binary diagnoses, and 46 scans with diagnoses independently provided by a group of histopathologists can be found at https://github.com/michalkoziarski/DiagSet . Furthermore, we propose a machine learning framework for detection of cancerous tissue regions and prediction of scan-level diagnosis, utilizing thresholding to abstain from the decision in uncertain cases. The proposed approach, composed of ensembles of deep neural networks operating on the histopathological scans at different scales, achieves 94.6% accuracy in patch-level recognition and is compared in a scan-level diagnosis with 9 human histopathologists showing high statistical agreement.
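The abstention mechanism mentioned above, withholding a scan-level diagnosis when the ensemble is not confident, reduces to thresholding the averaged class probabilities. A minimal numpy sketch; the threshold value and array names are illustrative, not taken from the paper:

```python
import numpy as np

THRESHOLD = 0.8  # illustrative confidence cut-off, not the paper's value

# member_probs: (n_models, n_scans, n_classes) softmax outputs of an ensemble.
rng = np.random.default_rng(0)
member_probs = rng.dirichlet(alpha=[2.0, 2.0], size=(5, 10))

mean_probs = member_probs.mean(axis=0)     # average over ensemble members
confidence = mean_probs.max(axis=1)        # top-class probability per scan
prediction = mean_probs.argmax(axis=1)     # 0 = benign, 1 = cancerous (example coding)

# Abstain (label -1) whenever the ensemble is not confident enough.
decision = np.where(confidence >= THRESHOLD, prediction, -1)
print(decision)
```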
Affiliations
- Michał Koziarski: Diagnostyka Consilio Sp. z o.o., Ul. Kosynierów Gdyńskich 61a, 93-357 Łódź, Poland; AGH University of Science and Technology, Al. Mickiewicza 30, 30-059 Kraków, Poland; Mila - Quebec AI Institute, 6666 Rue Saint-Urbain, Montréal, QC H2S 3H1, Canada
- Bogusław Cyganek: Diagnostyka Consilio Sp. z o.o., Łódź, Poland; AGH University of Science and Technology, Kraków, Poland
- Przemysław Niedziela: AGH University of Science and Technology, Kraków, Poland
- Bogusław Olborski: Diagnostyka Consilio Sp. z o.o., Łódź, Poland
- Zbigniew Antosz: Diagnostyka Consilio Sp. z o.o., Łódź, Poland
- Marcin Żydak: Diagnostyka Consilio Sp. z o.o., Łódź, Poland
- Bogdan Kwolek: Diagnostyka Consilio Sp. z o.o., Łódź, Poland; AGH University of Science and Technology, Kraków, Poland
- Paweł Wąsowicz: Diagnostyka Consilio Sp. z o.o., Łódź, Poland
- Andrzej Bukała: AGH University of Science and Technology, Kraków, Poland
- Jakub Swadźba: Diagnostyka Consilio Sp. z o.o., Łódź, Poland; Andrzej Frycz Modrzewski Krakow University, Gustawa Herlinga-Grudzińskiego 1, 30-705 Kraków, Poland
- Piotr Sitkowski: Diagnostyka Consilio Sp. z o.o., Łódź, Poland

8. Liu L, Wang Y, Zhang P, Qiao H, Sun T, Zhang H, Xu X, Shang H. Collaborative Transfer Network for Multi-Classification of Breast Cancer Histopathological Images. IEEE J Biomed Health Inform 2024; 28:110-121. [PMID: 37294651] [DOI: 10.1109/jbhi.2023.3283042]
Abstract
The incidence of breast cancer is increasing rapidly around the world. Accurate classification of the breast cancer subtype from hematoxylin and eosin images is the key to improve the precision of treatment. However, the high consistency of disease subtypes and uneven distribution of cancer cells seriously affect the performance of multi-classification methods. Furthermore, it is difficult to apply existing classification methods to multiple datasets. In this article, we propose a collaborative transfer network (CTransNet) for multi-classification of breast cancer histopathological images. CTransNet consists of a transfer learning backbone branch, a residual collaborative branch, and a feature fusion module. The transfer learning branch adopts the pre-trained DenseNet structure to extract image features from ImageNet. The residual branch extracts target features from pathological images in a collaborative manner. The feature fusion strategy of optimizing these two branches is used to train and fine-tune CTransNet. Experiments show that CTransNet achieves 98.29% classification accuracy on the public BreaKHis breast cancer dataset, exceeding the performance of state-of-the-art methods. Visual analysis is carried out under the guidance of oncologists. Based on the training parameters of the BreaKHis dataset, CTransNet achieves superior performance on other two public breast cancer datasets (breast-cancer-grade-ICT and ICIAR2018_BACH_Challenge), indicating that CTransNet has good generalization performance.

9. Kaur A, Kaushal C, Sandhu JK, Damaševičius R, Thakur N. Histopathological Image Diagnosis for Breast Cancer Diagnosis Based on Deep Mutual Learning. Diagnostics (Basel) 2023; 14:95. [PMID: 38201406] [PMCID: PMC10795733] [DOI: 10.3390/diagnostics14010095]
Abstract
Every year, millions of women across the globe are diagnosed with breast cancer (BC), an illness that is both common and potentially fatal. To provide effective therapy and enhance patient outcomes, it is essential to make an accurate diagnosis as soon as possible. In recent years, deep-learning (DL) approaches have shown great effectiveness in a variety of medical imaging applications, including the processing of histopathological images. The objective of this study is to improve the detection of BC by merging qualitative and quantitative data using DL techniques, with deep mutual learning (DML) as the core approach. In addition, a wide variety of breast cancer imaging modalities were investigated to assess the distinction between aggressive and benign BC. On this basis, deep convolutional neural networks (DCNNs) were established to assess histopathological images of BC. On the BreakHis-200×, BACH, and PUIH datasets, the trials indicate that the DML model achieves accuracies of 98.97%, 96.78%, and 96.34%, respectively, outperforming the other methodologies compared. More specifically, it improves localization results without compromising classification performance, which indicates its increased utility. We intend to proceed with the development of the diagnostic model to make it more applicable to clinical settings.
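Deep mutual learning, the core technique named here, trains peer networks that regularize each other with a KL-divergence term added to the usual cross-entropy loss. A minimal two-peer PyTorch sketch of that loss structure, with generic ResNet-18 backbones standing in for the authors' DCNNs:

```python
import torch
import torch.nn.functional as F
from torchvision import models

net1 = models.resnet18(num_classes=2)   # peer networks; any backbones work
net2 = models.resnet18(num_classes=2)
opt1 = torch.optim.Adam(net1.parameters(), lr=1e-4)
opt2 = torch.optim.Adam(net2.parameters(), lr=1e-4)

def dml_step(x, y):
    """One mutual-learning step: each peer matches the labels and the other peer."""
    logits1, logits2 = net1(x), net2(x)
    kl_1_from_2 = F.kl_div(F.log_softmax(logits1, dim=1),
                           F.softmax(logits2.detach(), dim=1), reduction="batchmean")
    kl_2_from_1 = F.kl_div(F.log_softmax(logits2, dim=1),
                           F.softmax(logits1.detach(), dim=1), reduction="batchmean")
    loss1 = F.cross_entropy(logits1, y) + kl_1_from_2
    loss2 = F.cross_entropy(logits2, y) + kl_2_from_1
    opt1.zero_grad(); loss1.backward(); opt1.step()
    opt2.zero_grad(); loss2.backward(); opt2.step()

x = torch.randn(4, 3, 224, 224)          # dummy histopathology patches
y = torch.randint(0, 2, (4,))            # dummy benign/malignant labels
dml_step(x, y)
```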
Affiliations
- Amandeep Kaur: Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, India
- Chetna Kaushal: Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, India
- Jasjeet Kaur Sandhu: Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, India
- Robertas Damaševičius: Department of Applied Informatics, Vytautas Magnus University, 53361 Akademija, Lithuania
- Neetika Thakur: Junior Laboratory Technician, Postgraduate Institute of Medical Education and Research, Chandigarh 160012, India

10. Mudeng V, Farid MN, Ayana G, Choe SW. Domain and Histopathology Adaptations-Based Classification for Malignancy Grading System. The American Journal of Pathology 2023; 193:2080-2098. [PMID: 37673327] [DOI: 10.1016/j.ajpath.2023.07.007]
Abstract
Accurate proliferation rate quantification can be used to devise an appropriate treatment for breast cancer. Pathologists use breast tissue biopsy glass slides stained with hematoxylin and eosin to obtain grading information. However, this manual evaluation may lead to high costs and be ineffective because diagnosis depends on the facility and the pathologists' insights and experiences. Convolutional neural network acts as a computer-based observer to improve clinicians' capacity in grading breast cancer. Therefore, this study proposes a novel scheme for automatic breast cancer malignancy grading from invasive ductal carcinoma. The proposed classifiers implement multistage transfer learning incorporating domain and histopathologic transformations. Domain adaptation using pretrained models, such as InceptionResNetV2, InceptionV3, NASNet-Large, ResNet50, ResNet101, VGG19, and Xception, was applied to classify the ×40 magnification BreaKHis data set into eight classes. Subsequently, InceptionV3 and Xception, which contain the domain and histopathology pretrained weights, were determined to be the best for this study and used to categorize the Databiox database into grades 1, 2, or 3. To provide a comprehensive report, this study offered a patchless automated grading system for magnification-dependent and magnification-independent classifications. With an overall accuracy (means ± SD) of 90.17% ± 3.08% to 97.67% ± 1.09% and an F1 score of 0.9013 to 0.9760 for magnification-dependent classification, the classifiers in this work achieved outstanding performance. The proposed approach could be used for breast cancer grading systems in clinical settings.
Affiliations
- Vicky Mudeng: Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea; Department of Electrical Engineering, Institut Teknologi Kalimantan, Balikpapan, Indonesia
- Mifta Nur Farid: Department of Electrical Engineering, Institut Teknologi Kalimantan, Balikpapan, Indonesia
- Gelan Ayana: Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea
- Se-Woon Choe: Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea; Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea

11. Voon W, Hum YC, Tee YK, Yap WS, Nisar H, Mokayed H, Gupta N, Lai KW. Evaluating the effectiveness of stain normalization techniques in automated grading of invasive ductal carcinoma histopathological images. Sci Rep 2023; 13:20518. [PMID: 37993544] [PMCID: PMC10665422] [DOI: 10.1038/s41598-023-46619-6]
Abstract
Debates persist regarding the impact of Stain Normalization (SN) on recent breast cancer histopathological studies. While some studies propose no influence on classification outcomes, others argue for improvement. This study aims to assess the efficacy of SN in breast cancer histopathological classification, specifically focusing on Invasive Ductal Carcinoma (IDC) grading using Convolutional Neural Networks (CNNs). The null hypothesis asserts that SN has no effect on the accuracy of CNN-based IDC grading, while the alternative hypothesis suggests the contrary. We evaluated six SN techniques, with five templates selected as target images for the conventional SN techniques. We also utilized seven ImageNet pre-trained CNNs for IDC grading. The performance of models trained with and without SN was compared to discern the influence of SN on classification outcomes. The analysis unveiled a p-value of 0.11, indicating no statistically significant difference in Balanced Accuracy Scores between models trained with StainGAN-normalized images, achieving a score of 0.9196 (the best-performing SN technique), and models trained with non-normalized images, which scored 0.9308. As a result, we did not reject the null hypothesis, indicating that we found no evidence to support a significant discrepancy in effectiveness between stain-normalized and non-normalized datasets for IDC grading tasks. This study demonstrates that SN has a limited impact on IDC grading, challenging the assumption of performance enhancement through SN.
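The hypothesis test described, comparing balanced accuracy with and without stain normalization, can be sketched as a paired test over repeated runs or folds. An illustration with SciPy on made-up scores; the paper's exact statistical procedure may differ:

```python
import numpy as np
from scipy import stats

# Balanced accuracy per fold/run for the same CNN trained on
# StainGAN-normalized vs. non-normalized images (illustrative numbers only).
bal_acc_normalized = np.array([0.921, 0.915, 0.924, 0.918, 0.920])
bal_acc_raw        = np.array([0.933, 0.928, 0.930, 0.929, 0.934])

# Paired t-test: H0 = stain normalization does not change balanced accuracy.
t_stat, p_value = stats.ttest_rel(bal_acc_normalized, bal_acc_raw)
alpha = 0.05
print(f"p = {p_value:.3f} ->", "reject H0" if p_value < alpha else "fail to reject H0")
```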
Affiliations
- Wingates Voon: Department of Mechatronics and Biomedical Engineering, Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kampar, Malaysia
- Yan Chai Hum: Department of Mechatronics and Biomedical Engineering, Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kampar, Malaysia
- Yee Kai Tee: Department of Mechatronics and Biomedical Engineering, Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kampar, Malaysia
- Wun-She Yap: Department of Electrical and Electronic Engineering, Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kampar, Malaysia
- Humaira Nisar: Department of Electronic Engineering, Faculty of Engineering and Green Technology, Universiti Tunku Abdul Rahman, 31900 Kampar, Malaysia
- Hamam Mokayed: Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, Luleå, Sweden
- Neha Gupta: School of Electronics Engineering, Vellore Institute of Technology, Amaravati, AP, India
- Khin Wee Lai: Department of Biomedical Engineering, Universiti Malaya, 50603 Kuala Lumpur, Malaysia

12. Chen CB, Wang Y, Fu X, Yang H. Recurrence Network Analysis of Histopathological Images for the Detection of Invasive Ductal Carcinoma in Breast Cancer. IEEE/ACM Transactions on Computational Biology and Bioinformatics 2023; 20:3234-3244. [PMID: 37276118] [DOI: 10.1109/tcbb.2023.3282798]
Abstract
The histopathological image analysis is one of the most crucial diagnostic procedures to identify Invasive ductal carcinoma (IDC) in breast cancers. However, this diagnosis process is currently time-consuming and heavily dependent on human expertise. Prior research has shown that different degrees of tumors present various microstructures in the histopathological images. However, very little has been done to utilize spatial recurrence features of microstructures for identifying IDC. This paper presents a novel recurrence analysis methodology for automatic image-guided IDC detection. We first utilize wavelet decomposition to delineate the subtle information in the images. Then, we model the patches with a weighted recurrence network approach to characterize the recurrence patterns of the histopathological images. Finally, we develop automated IDC detection models leveraging machine learning methods with spatial recurrence features extracted. The developed recurrence analysis models successfully characterize the complex microstructures of histopathological images and achieve the IDC detection performances of at least AUC = 0.96. This research developed a spatial recurrence analysis methodology to effectively identify IDC regions in histopathological images for BC. It shows a high potential to assist physicians in the decision-making process. The proposed methodology can further be applicable to image processing for other medical or biological applications.
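The recurrence idea underlying this method, comparing wavelet-derived patch features and treating strong similarities as recurrences in a weighted network, can be illustrated compactly. This is a generic interpretation with assumed feature choices (sub-band energies via `pywt`), not the authors' exact weighted recurrence network:

```python
import numpy as np
import pywt
from scipy.spatial.distance import pdist, squareform

def patch_features(patch):
    """Single-level 2-D wavelet decomposition -> simple sub-band energy features."""
    cA, (cH, cV, cD) = pywt.dwt2(patch, "db1")
    return np.array([np.mean(np.abs(b)) for b in (cA, cH, cV, cD)])

# Toy grayscale patches standing in for histopathology image patches.
rng = np.random.default_rng(0)
patches = rng.random((16, 64, 64))
feats = np.stack([patch_features(p) for p in patches])

# Weighted recurrence matrix: closer feature vectors recur more strongly.
dist = squareform(pdist(feats))
recurrence = np.exp(-dist / (dist.mean() + 1e-12))

# Network-style summary statistic usable as a classifier input.
recurrence_rate = (dist < np.quantile(dist, 0.1)).mean()
print(recurrence.shape, recurrence_rate)
```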

13. Pandey D, Wang H, Yin X, Wang K, Zhang Y, Shen J. Automatic breast lesion segmentation in phase preserved DCE-MRIs. Health Inf Sci Syst 2022; 10:9. [PMID: 35607433] [PMCID: PMC9123154] [DOI: 10.1007/s13755-022-00176-w]
Abstract
We offer a framework for automatically and accurately segmenting breast lesions from Dynamic Contrast Enhanced (DCE) MRI in this paper. The framework is built using max flow and min cut problems in the continuous domain over phase preserved denoised images. Three stages are required to complete the proposed approach. First, post-contrast and pre-contrast images are subtracted, followed by image registrations that benefit to enhancing lesion areas. Second, a phase preserved denoising and pixel-wise adaptive Wiener filtering technique is used, followed by max flow and min cut problems in a continuous domain. A denoising mechanism clears the noise in the images by preserving useful and detailed features such as edges. Then, lesion detection is performed using continuous max flow. Finally, a morphological operation is used as a post-processing step to further delineate the obtained results. A series of qualitative and quantitative trials employing nine performance metrics on 21 cases with two different MR image resolutions were used to verify the effectiveness of the proposed method. Performance results demonstrate the quality of segmentation obtained from the proposed method.
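The denoising and post-processing stages described here map onto standard library calls; the continuous max-flow/min-cut solver itself is not available in common Python packages, so the sketch below substitutes a simple threshold where that solver would run. A simplified illustration only, with assumed parameter values, not the authors' implementation:

```python
import numpy as np
from scipy.signal import wiener
from skimage import morphology

def segment_lesion(pre, post, threshold=0.5):
    """Simplified pipeline: subtraction -> adaptive Wiener denoising ->
    thresholding (stand-in for the continuous max-flow step) -> morphological cleanup."""
    diff = np.clip(post - pre, 0, None)              # enhance contrast-enhancing lesion areas
    denoised = wiener(diff, mysize=5)                # pixel-wise adaptive Wiener filter
    norm = (denoised - denoised.min()) / (np.ptp(denoised) + 1e-12)
    mask = norm > threshold                          # the paper solves a continuous max-flow here
    mask = morphology.binary_opening(mask, morphology.disk(2))
    mask = morphology.remove_small_objects(mask, min_size=30)
    return mask

rng = np.random.default_rng(0)
pre, post = rng.random((128, 128)), rng.random((128, 128))
print(segment_lesion(pre, post).sum())
```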
Affiliations
- Hua Wang: Victoria University, Melbourne, Australia
- Kate Wang: RMIT University, Melbourne, Australia
- Jing Shen: Radiology Department, Affiliated Zhongshan Hospital of Dalian University, Dalian, China

14. Voon W, Hum YC, Tee YK, Yap WS, Salim MIM, Tan TS, Mokayed H, Lai KW. Performance analysis of seven Convolutional Neural Networks (CNNs) with transfer learning for Invasive Ductal Carcinoma (IDC) grading in breast histopathological images. Sci Rep 2022; 12:19200. [PMID: 36357456] [PMCID: PMC9649772] [DOI: 10.1038/s41598-022-21848-3]
Abstract
Computer-aided Invasive Ductal Carcinoma (IDC) grading classification systems based on deep learning have shown that deep learning may achieve reliable accuracy in IDC grade classification using histopathology images. However, there is a dearth of comprehensive performance comparisons of Convolutional Neural Network (CNN) designs on IDC in the literature. As such, we conducted a comparative analysis of the performance of seven selected CNN models with transfer learning: EfficientNetB0, EfficientNetV2B0, EfficientNetV2B0-21k, ResNetV1-50, ResNetV2-50, MobileNetV1, and MobileNetV2. To implement each pre-trained CNN architecture, we deployed the corresponding feature vector available from TensorFlow Hub, integrating it with dropout and dense layers to form a complete CNN model. Our findings indicated that EfficientNetV2B0-21k (0.72B floating-point operations and 7.1 M parameters) outperformed the other CNN models in the IDC grading task. Nevertheless, we found that practically all selected CNN models perform well in the IDC grading task, with an average balanced accuracy of 0.936 ± 0.0189 on the cross-validation set and 0.9308 ± 0.0211 on the test set.
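The model assembly described, a TensorFlow Hub feature vector topped with dropout and dense layers, corresponds to a short Keras definition. A sketch under assumptions: the Hub handle below points to a MobileNetV2 feature vector (one of the seven model families studied), and the dropout rate and optimizer are illustrative rather than the paper's settings:

```python
import tensorflow as tf
import tensorflow_hub as hub

# Illustrative Hub handle for one of the studied backbones (MobileNetV2 feature vector);
# the exact module variants and hyperparameters in the paper may differ.
FEATURE_VECTOR = "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4"

model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(224, 224, 3)),
    hub.KerasLayer(FEATURE_VECTOR, trainable=False),   # frozen pre-trained feature extractor
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(3, activation="softmax"),    # IDC grade 1 / 2 / 3
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```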
Affiliations
- Wingates Voon: Department of Mechatronics and Biomedical Engineering, Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Sungai Long, Malaysia
- Yan Chai Hum: Department of Mechatronics and Biomedical Engineering, Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Sungai Long, Malaysia
- Yee Kai Tee: Department of Mechatronics and Biomedical Engineering, Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Sungai Long, Malaysia
- Wun-She Yap: Department of Electrical and Electronic Engineering, Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Sungai Long, Malaysia
- Maheza Irna Mohamad Salim: Diagnostic Research Group, School of Biomedical Engineering and Health Sciences, Faculty of Engineering, Universiti Teknologi Malaysia, 81300 Skudai, Johor, Malaysia
- Tian Swee Tan: BioInspired Device and Tissue Engineering Research Group, School of Biomedical Engineering and Health Sciences, Faculty of Engineering, Universiti Teknologi Malaysia, 81300 Skudai, Johor, Malaysia
- Hamam Mokayed: Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, Luleå, Sweden
- Khin Wee Lai: Department of Biomedical Engineering, Universiti Malaya, 50603 Kuala Lumpur, Malaysia

15. Ahmed AA, Abouzid M, Kaczmarek E. Deep Learning Approaches in Histopathology. Cancers (Basel) 2022; 14:5264. [PMID: 36358683] [PMCID: PMC9654172] [DOI: 10.3390/cancers14215264]
Abstract
The revolution of artificial intelligence and its impact on our daily life has led to tremendous interest in the field and its related subtypes: machine learning and deep learning. Scientists and developers have designed machine learning- and deep learning-based algorithms to perform various tasks related to tumor pathology, such as tumor detection, classification, grading with variant stages, diagnostic forecasting, recognition of pathological attributes, pathogenesis, and genomic mutations. Pathologists are interested in artificial intelligence to improve diagnostic precision and impartiality and to reduce the workload and time consumed, both of which affect the accuracy of the decisions taken. Regrettably, certain obstacles to artificial intelligence deployment remain to be overcome, such as the applicability and validation of algorithms and computational technologies, in addition to the ability to train pathologists and doctors to use these tools and their willingness to accept the results. This review paper provides a survey of how machine learning and deep learning methods could be implemented into health care providers' routine tasks, and of the obstacles and opportunities for artificial intelligence application in tumor morphology.
Affiliations
- Alhassan Ali Ahmed: Department of Bioinformatics and Computational Biology, Poznan University of Medical Sciences, 60-812 Poznan, Poland; Doctoral School, Poznan University of Medical Sciences, 60-812 Poznan, Poland
- Mohamed Abouzid: Doctoral School, Poznan University of Medical Sciences, 60-812 Poznan, Poland; Department of Physical Pharmacy and Pharmacokinetics, Faculty of Pharmacy, Poznan University of Medical Sciences, Rokietnicka 3 St., 60-806 Poznan, Poland
- Elżbieta Kaczmarek: Department of Bioinformatics and Computational Biology, Poznan University of Medical Sciences, 60-812 Poznan, Poland

16. Wetstein SC, de Jong VMT, Stathonikos N, Opdam M, Dackus GMHE, Pluim JPW, van Diest PJ, Veta M. Deep learning-based breast cancer grading and survival analysis on whole-slide histopathology images. Sci Rep 2022; 12:15102. [PMID: 36068311] [PMCID: PMC9448798] [DOI: 10.1038/s41598-022-19112-9]
Abstract
Breast cancer tumor grade is strongly associated with patient survival. In current clinical practice, pathologists assign tumor grade after visual analysis of tissue specimens. However, different studies show significant inter-observer variation in breast cancer grading. Computer-based breast cancer grading methods have been proposed but only work on specifically selected tissue areas and/or require labor-intensive annotations to be applied to new datasets. In this study, we trained and evaluated a deep learning-based breast cancer grading model that works on whole-slide histopathology images. The model was developed using whole-slide images from 706 young (< 40 years) invasive breast cancer patients with corresponding tumor grade (low/intermediate vs. high), and its constituents nuclear grade, tubule formation and mitotic rate. The performance of the model was evaluated using Cohen's kappa on an independent test set of 686 patients using annotations by expert pathologists as ground truth. The predicted low/intermediate (n = 327) and high (n = 359) grade groups were used to perform survival analysis. The deep learning system distinguished low/intermediate versus high tumor grade with a Cohen's Kappa of 0.59 (80% accuracy) compared to expert pathologists. In subsequent survival analysis the two groups predicted by the system were found to have a significantly different overall survival (OS) and disease/recurrence-free survival (DRFS/RFS) (p < 0.05). Univariate Cox hazard regression analysis showed statistically significant hazard ratios (p < 0.05). After adjusting for clinicopathologic features and stratifying for molecular subtype the hazard ratios showed a trend but lost statistical significance for all endpoints. In conclusion, we developed a deep learning-based model for automated grading of breast cancer on whole-slide images. The model distinguishes between low/intermediate and high grade tumors and finds a trend in the survival of the two predicted groups.
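The evaluation protocol in this abstract, agreement with pathologists via Cohen's kappa followed by survival comparisons between the two predicted grade groups, maps onto a few standard library calls. A hedged sketch with scikit-learn and lifelines on placeholder data; the column names and toy values are illustrative only:

```python
import pandas as pd
from sklearn.metrics import cohen_kappa_score
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

# Placeholder frame: one row per patient.
df = pd.DataFrame({
    "grade_pathologist": [0, 1, 1, 0, 1, 0, 1, 0],   # 0 = low/intermediate, 1 = high
    "grade_model":       [0, 1, 0, 0, 1, 0, 1, 1],
    "followup_years":    [5.1, 2.3, 4.0, 6.2, 1.8, 7.5, 3.3, 2.9],
    "event":             [0, 1, 0, 0, 1, 0, 1, 1],    # 1 = death/recurrence observed
})

# Agreement between model and expert grading.
kappa = cohen_kappa_score(df["grade_pathologist"], df["grade_model"])

# Log-rank test between the two model-predicted grade groups.
low, high = df[df["grade_model"] == 0], df[df["grade_model"] == 1]
lr = logrank_test(low["followup_years"], high["followup_years"],
                  event_observed_A=low["event"], event_observed_B=high["event"])

# Univariate Cox regression on the predicted grade.
cph = CoxPHFitter().fit(df[["grade_model", "followup_years", "event"]],
                        duration_col="followup_years", event_col="event")
print(kappa, lr.p_value, cph.hazard_ratios_["grade_model"])
```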
Affiliations
- Suzanne C Wetstein: Medical Image Analysis Group, Department of Biomedical Engineering, Eindhoven University of Technology, Groene Loper 5, 5612 AE Eindhoven, The Netherlands
- Vincent M T de Jong: Department of Molecular Pathology, Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, The Netherlands
- Nikolas Stathonikos: Department of Pathology, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- Mark Opdam: Department of Molecular Pathology, Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, The Netherlands
- Gwen M H E Dackus: Department of Molecular Pathology, Netherlands Cancer Institute, Amsterdam, The Netherlands; Department of Pathology, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- Josien P W Pluim: Medical Image Analysis Group, Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Paul J van Diest: Department of Pathology, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- Mitko Veta: Medical Image Analysis Group, Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands

17. Characterization of Nuclear Pleomorphism and Tubules in Histopathological Images of Breast Cancer. Sensors (Basel) 2022; 22:5649. [PMID: 35957203] [PMCID: PMC9371191] [DOI: 10.3390/s22155649]
Abstract
Breast cancer (BC) diagnosis is made by a pathologist who analyzes a portion of the breast tissue under the microscope and performs a histological evaluation. This evaluation aims to determine the grade of cellular differentiation and the aggressiveness of the tumor according to the Nottingham Grade Classification System (NGS). Nowadays, digital pathology is an innovative tool for pathologists in diagnosis and in acquiring new learning. However, a recurring problem in health services is the excessive workload across all medical services. For this reason, computational tools that assist histological evaluation are required. This work proposes a methodology for the quantitative analysis of BC tissue that follows the NGS. The proposed methodology is based on digital image processing techniques through which the BC tissue can be characterized automatically. The proposed nuclei characterization was helpful for grade differentiation in carcinoma images of BC tissue, reaching an accuracy of 0.84. In addition, a metric was proposed to assess the likelihood that a structure in the tissue corresponds to a tubule by considering spatial and geometrical characteristics between lumina and the surrounding nuclei, reaching an accuracy of 0.83. Tests were performed on different databases and under various magnification and staining contrast conditions, showing that the methodology is reliable for histological breast tissue analysis.

18. Yan R, Yang Z, Li J, Zheng C, Zhang F. Divide-and-Attention Network for HE-Stained Pathological Image Classification. Biology (Basel) 2022; 11:982. [PMID: 36101363] [PMCID: PMC9311575] [DOI: 10.3390/biology11070982]
Abstract
Since pathological images have some distinct characteristics that are different from natural images, the direct application of a general convolutional neural network cannot achieve good classification performance, especially for fine-grained classification problems (such as pathological image grading). Inspired by the clinical experience that decomposing a pathological image into different components is beneficial for diagnosis, in this paper, we propose a Divide-and-Attention Network (DANet) for Hematoxylin-and-Eosin (HE)-stained pathological image classification. The DANet utilizes a deep-learning method to decompose a pathological image into nuclei and non-nuclei parts. With such decomposed pathological images, the DANet first performs feature learning independently in each branch, and then focuses on the most important feature representation through the branch selection attention module. In this way, the DANet can learn representative features with respect to different tissue structures and adaptively focus on the most important ones, thereby improving classification performance. In addition, we introduce deep canonical correlation analysis (DCCA) constraints in the feature fusion process of different branches. The DCCA constraints play the role of branch fusion attention, so as to maximize the correlation of different branches and ensure that the fused branches emphasize specific tissue structures. The experimental results of three datasets demonstrate the superiority of the DANet, with an average classification accuracy of 92.5% on breast cancer classification, 95.33% on colorectal cancer grading, and 91.6% on breast cancer grading tasks.
Affiliations
- Rui Yan: High Performance Computer Research Center, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100045, China; University of Chinese Academy of Sciences, Beijing 101408, China
- Zhidong Yang: High Performance Computer Research Center, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100045, China
- Jintao Li: High Performance Computer Research Center, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100045, China
- Chunhou Zheng: School of Artificial Intelligence, Anhui University, Hefei 230093, China
- Fa Zhang: High Performance Computer Research Center, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100045, China

19. Beyond the colors: enhanced deep learning on invasive ductal carcinoma. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07478-w]

20. Yan R, Ren F, Li J, Rao X, Lv Z, Zheng C, Zhang F. Nuclei-Guided Network for Breast Cancer Grading in HE-Stained Pathological Images. Sensors (Basel) 2022; 22:4061. [PMID: 35684680] [PMCID: PMC9185232] [DOI: 10.3390/s22114061]
Abstract
Breast cancer grading methods based on hematoxylin-eosin (HE) stained pathological images can be summarized into two categories. The first category is to directly extract the pathological image features for breast cancer grading. However, unlike the coarse-grained problem of breast cancer classification, breast cancer grading is a fine-grained classification problem, so general methods cannot achieve satisfactory results. The second category is to apply the three evaluation criteria of the Nottingham Grading System (NGS) separately, and then integrate the results of the three criteria to obtain the final grading result. However, NGS is only a semiquantitative evaluation method, and there may be far more image features related to breast cancer grading. In this paper, we proposed a Nuclei-Guided Network (NGNet) for breast invasive ductal carcinoma (IDC) grading in pathological images. The proposed nuclei-guided attention module plays the role of nucleus attention, so as to learn more nuclei-related feature representations for breast IDC grading. In addition, the proposed nuclei-guided fusion module in the fusion process of different branches can further enable the network to focus on learning nuclei-related features. Overall, under the guidance of nuclei-related features, the entire NGNet can learn more fine-grained features for breast IDC grading. The experimental results show that the performance of the proposed method is better than that of state-of-the-art method. In addition, we released a well-labeled dataset with 3644 pathological images for breast IDC grading. This dataset is currently the largest publicly available breast IDC grading dataset and can serve as a benchmark to facilitate a broader study of breast IDC grading.
Affiliations
- Rui Yan: High Performance Computer Research Center, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100045, China; University of Chinese Academy of Sciences, Beijing 101408, China
- Fei Ren: High Performance Computer Research Center, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100045, China
- Jintao Li: High Performance Computer Research Center, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100045, China
- Xiaosong Rao: Department of Pathology, Boao Evergrande International Hospital, Qionghai 571435, China; Department of Pathology, Peking University International Hospital, Beijing 100084, China
- Zhilong Lv: High Performance Computer Research Center, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100045, China
- Chunhou Zheng: College of Computer Science and Technology, Anhui University, Hefei 230093, China
- Fa Zhang: High Performance Computer Research Center, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100045, China

21. Ukwuoma CC, Hossain MA, Jackson JK, Nneji GU, Monday HN, Qin Z. Multi-Classification of Breast Cancer Lesions in Histopathological Images Using DEEP_Pachi: Multiple Self-Attention Head. Diagnostics (Basel) 2022; 12:1152. [PMID: 35626307] [PMCID: PMC9139754] [DOI: 10.3390/diagnostics12051152]
Abstract
INTRODUCTION AND BACKGROUND Despite fast developments in the medical field, histological diagnosis is still regarded as the benchmark in cancer diagnosis. However, the image feature extraction used to determine the severity of cancer at various magnifications is challenging, since manual procedures are biased, time consuming, labor intensive, and error-prone. Current state-of-the-art deep learning approaches for breast histopathology image classification take features from entire images (generic features). Thus, they are likely to overlook essential image features in favor of unnecessary ones, resulting in incorrect diagnoses of breast histopathology images and, ultimately, avoidable mortality. METHODS This discrepancy prompted us to develop DEEP_Pachi for classifying breast histopathology images at various magnifications. The suggested DEEP_Pachi collects the global and regional features that are essential for effective breast histopathology image classification. The proposed model backbone is an ensemble of the DenseNet201 and VGG16 architectures. The ensemble model extracts global features (generic image information), whereas DEEP_Pachi extracts spatial information (regions of interest). The evaluation of the proposed model was performed on two publicly available datasets: BreakHis and the ICIAR 2018 Challenge dataset. RESULTS A detailed evaluation of the proposed model's accuracy, sensitivity, precision, specificity, and F1-score revealed the usefulness of the backbone model and the DEEP_Pachi model for image classification. The suggested technique outperformed state-of-the-art classifiers, achieving an accuracy of 1.0 for the benign class and 0.99 for the malignant class at all magnifications of the BreakHis dataset and an accuracy of 1.0 on the ICIAR 2018 Challenge dataset. CONCLUSIONS The findings were resilient and suggest that the system could assist experts at large medical institutions, supporting early breast cancer diagnosis and a reduction in the death rate.
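The described backbone, an ensemble of DenseNet201 and VGG16 for global features combined with self-attention heads over regions of interest, can be approximated with a short Keras graph. This is a simplified architectural sketch, not the authors' exact DEEP_Pachi implementation; the head count and layer sizes are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import DenseNet201, VGG16

inp = layers.Input(shape=(224, 224, 3))

# Two pre-trained backbones extract global (generic) features.
dense_feat = DenseNet201(include_top=False, weights="imagenet")(inp)   # (7, 7, 1920)
vgg_feat = VGG16(include_top=False, weights="imagenet")(inp)           # (7, 7, 512)

# Treat spatial positions as tokens and let self-attention pick out regions of interest.
tokens = layers.Reshape((49, -1))(layers.Concatenate()([dense_feat, vgg_feat]))
attended = layers.MultiHeadAttention(num_heads=4, key_dim=64)(tokens, tokens)

x = layers.GlobalAveragePooling1D()(attended)
x = layers.Dropout(0.3)(x)
out = layers.Dense(2, activation="softmax")(x)   # benign vs. malignant

model = Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```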
Affiliations
- Chiagoziem C. Ukwuoma: School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Md Altab Hossain: School of Management and Economics, University of Electronic Science and Technology of China, Chengdu 610054, China
- Jehoiada K. Jackson: School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Grace U. Nneji: School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Happy N. Monday: School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Zhiguang Qin: School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
22
|
Zhu J, Liu M, Li X. Progress on deep learning in digital pathology of breast cancer: a narrative review. Gland Surg 2022; 11:751-766. [PMID: 35531111 PMCID: PMC9068546 DOI: 10.21037/gs-22-11] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2022] [Accepted: 03/04/2022] [Indexed: 01/26/2024]
Abstract
BACKGROUND AND OBJECTIVE Pathology is the gold standard for breast cancer diagnosis and has important guiding value in formulating the clinical treatment plan and predicting the prognosis. However, traditional microscopic examinations of tissue sections are time consuming and labor intensive, with unavoidable subjective variations. Deep learning (DL) can evaluate and extract the most important information from images with less need for human instruction, providing a promising approach to assist in the pathological diagnosis of breast cancer. This review aims to provide an informative and up-to-date summary of DL-based diagnostic systems for breast cancer pathology image analysis and to discuss the advantages of, and challenges to, the routine clinical application of digital pathology. METHODS A PubMed search with keywords ("breast neoplasm" or "breast cancer") and ("pathology" or "histopathology") and ("artificial intelligence" or "deep learning") was conducted. Relevant publications in English published from January 2000 to October 2021 were screened manually by title, abstract, and, where necessary, full text to determine their true relevance. References from the searched articles and other supplementary articles were also studied. KEY CONTENT AND FINDINGS DL-based computerized image analysis has obtained impressive achievements in breast cancer pathology diagnosis, classification, grading, staging, and prognostic prediction, providing powerful methods for faster, more reproducible, and more precise diagnoses. However, all artificial intelligence (AI)-assisted pathology diagnostic models are still in the experimental stage. Improving their economic efficiency and clinical adaptability should be the focus of further research. CONCLUSIONS Having searched PubMed and other databases and summarized the application of DL-based AI models in breast cancer pathology, we conclude that DL is undoubtedly a promising tool for assisting pathologists in routine practice, but further studies are needed to realize the digitization and automation of clinical pathology.
Collapse
Affiliation(s)
- Jingjin Zhu
- School of Medicine, Nankai University, Tianjin, China
| | - Mei Liu
- Department of Pathology, Chinese People’s Liberation Army General Hospital, Beijing, China
| | - Xiru Li
- Department of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing, China
| |
Collapse
|
23
|
Tan K, Huang W, Liu X, Hu J, Dong S. A multi-modal fusion framework based on multi-task correlation learning for cancer prognosis prediction. Artif Intell Med 2022; 126:102260. [DOI: 10.1016/j.artmed.2022.102260] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2021] [Revised: 01/07/2022] [Accepted: 02/16/2022] [Indexed: 12/30/2022]
|
24
|
Tewary S, Mukhopadhyay S. AutoIHCNet: CNN architecture and decision fusion for automated HER2 scoring. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.108572] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
25
|
Shah SM, Khan RA, Arif S, Sajid U. Artificial intelligence for breast cancer analysis: Trends & directions. Comput Biol Med 2022; 142:105221. [PMID: 35016100 DOI: 10.1016/j.compbiomed.2022.105221] [Citation(s) in RCA: 32] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2021] [Revised: 01/03/2022] [Accepted: 01/03/2022] [Indexed: 12/18/2022]
Abstract
Breast cancer is one of the leading causes of death among women. Early detection of breast cancer can significantly improve the lives of millions of women across the globe. Given the importance of finding a solution/framework for early detection and diagnosis, many AI researchers have recently focused on automating this task. The other reasons for the surge in research activity in this direction are the advent of robust AI algorithms (deep learning), the availability of hardware that can run/train those robust and complex AI algorithms, and the accessibility of datasets large enough to train AI algorithms. Different imaging modalities that have been exploited by researchers to automate the task of breast cancer detection are mammograms, ultrasound, magnetic resonance imaging, histopathological images, or any combination of them. This article analyzes these imaging modalities and presents their strengths and limitations. It also lists resources from which their datasets can be accessed for research purposes. This article then summarizes AI and computer vision based state-of-the-art methods proposed in the last decade to detect breast cancer using various imaging modalities. Primarily, in this article we have focused on reviewing frameworks that have reported results using mammograms, as it is the most widely used breast imaging modality and serves as the first test that medical practitioners usually prescribe for the detection of breast cancer. Another reason for focusing on the mammogram imaging modality is the availability of its labelled datasets. Dataset availability is one of the most important aspects of the development of AI-based frameworks, as such algorithms are data hungry and, in general, dataset quality affects the performance of AI-based algorithms. In a nutshell, this research article will act as a primary resource for the research community working in the field of automated breast imaging analysis.
Collapse
Affiliation(s)
- Shahid Munir Shah
- Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
| | - Rizwan Ahmed Khan
- Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan.
| | - Sheeraz Arif
- Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
| | - Unaiza Sajid
- Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
| |
Collapse
|
26
|
Mridha MF, Hamid MA, Monowar MM, Keya AJ, Ohi AQ, Islam MR, Kim JM. A Comprehensive Survey on Deep-Learning-Based Breast Cancer Diagnosis. Cancers (Basel) 2021; 13:6116. [PMID: 34885225 PMCID: PMC8656730 DOI: 10.3390/cancers13236116] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2021] [Revised: 11/25/2021] [Accepted: 12/01/2021] [Indexed: 12/11/2022] Open
Abstract
Breast cancer is now the most frequently diagnosed cancer in women, and its percentage is gradually increasing. Optimistically, there is a good chance of recovery from breast cancer if identified and treated at an early stage. Therefore, several researchers have established deep-learning-based automated methods for their efficiency and accuracy in predicting the growth of cancer cells utilizing medical imaging modalities. As of yet, few review studies on breast cancer diagnosis are available that summarize some existing studies. However, these studies were unable to address emerging architectures and modalities in breast cancer diagnosis. This review focuses on the evolving architectures of deep learning for breast cancer detection. In what follows, this survey presents existing deep-learning-based architectures, analyzes the strengths and limitations of the existing studies, examines the used datasets, and reviews image pre-processing techniques. Furthermore, a concrete review of diverse imaging modalities, performance metrics and results, challenges, and research directions for future researchers is presented.
Collapse
Affiliation(s)
- Muhammad Firoz Mridha
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh; (M.F.M.); (A.J.K.); (A.Q.O.)
| | - Md. Abdul Hamid
- Department of Information Technology, Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia; (M.A.H.); (M.M.M.)
| | - Muhammad Mostafa Monowar
- Department of Information Technology, Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia; (M.A.H.); (M.M.M.)
| | - Ashfia Jannat Keya
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh; (M.F.M.); (A.J.K.); (A.Q.O.)
| | - Abu Quwsar Ohi
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh; (M.F.M.); (A.J.K.); (A.Q.O.)
| | - Md. Rashedul Islam
- Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh;
| | - Jong-Myon Kim
- Department of Electrical, Electronics, and Computer Engineering, University of Ulsan, Ulsan 680-749, Korea
| |
Collapse
|
27
|
Rashmi R, Prasad K, Udupa CBK. Breast histopathological image analysis using image processing techniques for diagnostic purposes: A methodological review. J Med Syst 2021; 46:7. [PMID: 34860316 PMCID: PMC8642363 DOI: 10.1007/s10916-021-01786-9] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2021] [Accepted: 10/21/2021] [Indexed: 12/24/2022]
Abstract
Breast cancer in women is the second most common cancer worldwide. Early detection of breast cancer can reduce the risk to human life. Non-invasive techniques such as mammograms and ultrasound imaging are popularly used to detect the tumour. However, histopathological analysis is necessary to determine the malignancy of the tumour as it analyses the image at the cellular level. Manual analysis of these slides is time consuming, tedious, subjective, and susceptible to human error. Also, at times the interpretation of these images is inconsistent between laboratories. Hence, a Computer-Aided Diagnostic system that can act as a decision support system is the need of the hour. Moreover, recent developments in computational power and memory capacity have led to the application of computer tools and medical image processing techniques to process and analyze breast cancer histopathological images. This review paper summarizes various traditional and deep learning based methods developed to analyze breast cancer histopathological images. Initially, the characteristics of breast cancer histopathological images are discussed. A detailed discussion on the various potential regions of interest, which are crucial for the development of Computer-Aided Diagnostic systems, is presented. We summarize the recent trends and choices made during the selection of medical image processing techniques. Finally, a detailed discussion on the various challenges involved in the analysis of breast cancer histopathological images (BCHI) is presented along with the future scope.
Collapse
Affiliation(s)
- R Rashmi
- Manipal School of Information Sciences, Manipal Academy of Higher Education, Manipal, India
| | - Keerthana Prasad
- Manipal School of Information Sciences, Manipal Academy of Higher Education, Manipal, India
| | | |
Collapse
|
28
|
|
29
|
R R, Prasad K, Udupa CBK. BCHisto-Net: Breast histopathological image classification by global and local feature aggregation. Artif Intell Med 2021; 121:102191. [PMID: 34763806 DOI: 10.1016/j.artmed.2021.102191] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2021] [Revised: 09/15/2021] [Accepted: 10/05/2021] [Indexed: 02/06/2023]
Abstract
Breast cancer among women is the second most common cancer worldwide. Non-invasive techniques such as mammograms and ultrasound imaging are used to detect the tumor. However, breast histopathological image analysis is inevitable for the detection of malignancy of the tumor. Manual analysis of breast histopathological images is subjective, tedious, laborious and is prone to human errors. Recent developments in computational power and memory have made automation a popular choice for the analysis of these images. One of the key challenges of breast histopathological image classification at 100× magnification is to extract the features of the potential regions of interest to decide on the malignancy of the tumor. The current state-of-the-art CNN based methods for breast histopathological image classification extract features from the entire image (global features) and thus may overlook the features of the potential regions of interest. This can lead to inaccurate diagnosis of breast histopathological images. This research gap has motivated us to propose BCHisto-Net to classify breast histopathological images at 100× magnification. The proposed BCHisto-Net extracts both global and local features required for the accurate classification of breast histopathological images. The global features extract abstract image features while local features focus on potential regions of interest. Furthermore, a feature aggregation branch is proposed to combine these features for the classification of 100× images. The proposed method is quantitatively evaluated on a private dataset (KMC) and the publicly available BreakHis dataset. An extensive evaluation of the proposed model showed the effectiveness of the local and global features for the classification of these images. The proposed method achieved an accuracy of 95% and 89% on the KMC and BreakHis datasets respectively, outperforming state-of-the-art classifiers.
Collapse
Affiliation(s)
- Rashmi R
- Manipal School of Information Sciences, Manipal Academy of Higher Education, Manipal, India
| | - Keerthana Prasad
- Manipal School of Information Sciences, Manipal Academy of Higher Education, Manipal, India.
| | - Chethana Babu K Udupa
- Department of Pathology, Kasturba Medical College, Manipal Academy of Higher Education, Manipal, India.
| |
Collapse
|
30
|
Lagree A, Shiner A, Alera MA, Fleshner L, Law E, Law B, Lu FI, Dodington D, Gandhi S, Slodkowska EA, Shenfield A, Jerzak KJ, Sadeghi-Naini A, Tran WT. Assessment of Digital Pathology Imaging Biomarkers Associated with Breast Cancer Histologic Grade. Curr Oncol 2021; 28:4298-4316. [PMID: 34898544 PMCID: PMC8628688 DOI: 10.3390/curroncol28060366] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2021] [Revised: 10/17/2021] [Accepted: 10/23/2021] [Indexed: 12/31/2022] Open
Abstract
Background: Evaluating histologic grade for breast cancer diagnosis is standard and associated with prognostic outcomes. Current challenges include the time required for manual microscopic evaluation and interobserver variability. This study proposes a computer-aided diagnostic (CAD) pipeline for grading tumors using artificial intelligence. Methods: There were 138 patients included in this retrospective study. Breast core biopsy slides were prepared using standard laboratory techniques, digitized, and pre-processed for analysis. Deep convolutional neural networks (CNNs) were developed to identify the regions of interest containing malignant cells and to segment tumor nuclei. Imaging-based features associated with spatial parameters were extracted from the segmented regions of interest (ROIs). Clinical datasets and pathologic biomarkers (estrogen receptor, progesterone receptor, and human epidermal growth factor receptor 2) were collected from all study subjects. Pathologic, clinical, and imaging-based features were input into machine learning (ML) models to classify histologic grade, and model performances were tested against ground-truth labels at the patient level. Classification performances were evaluated using receiver-operating characteristic (ROC) analysis. Results: Multiparametric feature sets, containing both clinical and imaging-based features, demonstrated high classification performance. Using imaging-derived markers alone, the classification performance demonstrated an area under the curve (AUC) of 0.745, while modeling these features with other pathologic biomarkers yielded an AUC of 0.836. Conclusion: These results demonstrate an association between tumor nuclear spatial features and tumor grade. If further validated, these systems may be implemented into pathology CADs and can assist pathologists to expeditiously grade tumors at the time of diagnosis and to help guide clinical decisions.
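As a pointer to how the final classification step in such a pipeline is typically evaluated, the toy sketch below fits a classifier on tabular features and reports the ROC AUC; the feature matrix, labels, and model are placeholders, not the study's data or models.

```python
# Toy illustration (placeholder data and model, not the study's pipeline):
# classify histologic grade from tabular features and evaluate with ROC AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((138, 10))                # imaging + pathologic features (placeholder)
y = rng.integers(0, 2, 138)              # low vs. high grade labels (placeholder)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```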
Collapse
Affiliation(s)
- Andrew Lagree
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada; (A.L.); (A.S.); (M.A.A.); (L.F.); (E.L.); (B.L.); (A.S.-N.)
- Biological Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Temerty Centre for AI Research and Education, University of Toronto, Toronto, ON M5S 1A8, Canada
- Radiogenomics Laboratory, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada; (F.-I.L.); (S.G.)
| | - Audrey Shiner
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada; (A.L.); (A.S.); (M.A.A.); (L.F.); (E.L.); (B.L.); (A.S.-N.)
- Biological Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Radiogenomics Laboratory, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada; (F.-I.L.); (S.G.)
| | - Marie Angeli Alera
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada; (A.L.); (A.S.); (M.A.A.); (L.F.); (E.L.); (B.L.); (A.S.-N.)
- Biological Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Radiogenomics Laboratory, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada; (F.-I.L.); (S.G.)
| | - Lauren Fleshner
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada; (A.L.); (A.S.); (M.A.A.); (L.F.); (E.L.); (B.L.); (A.S.-N.)
- Biological Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Radiogenomics Laboratory, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada; (F.-I.L.); (S.G.)
| | - Ethan Law
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada; (A.L.); (A.S.); (M.A.A.); (L.F.); (E.L.); (B.L.); (A.S.-N.)
- Biological Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Radiogenomics Laboratory, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada; (F.-I.L.); (S.G.)
| | - Brianna Law
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada; (A.L.); (A.S.); (M.A.A.); (L.F.); (E.L.); (B.L.); (A.S.-N.)
- Biological Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Radiogenomics Laboratory, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada; (F.-I.L.); (S.G.)
| | - Fang-I Lu
- Radiogenomics Laboratory, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada; (F.-I.L.); (S.G.)
- Department of Laboratory Medicine and Molecular Diagnostics, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada; (D.D.); (E.A.S.)
| | - David Dodington
- Department of Laboratory Medicine and Molecular Diagnostics, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada; (D.D.); (E.A.S.)
| | - Sonal Gandhi
- Radiogenomics Laboratory, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada; (F.-I.L.); (S.G.)
- Division of Medical Oncology, Department of Medicine, University of Toronto, Toronto, ON M5S 3H2, Canada;
| | - Elzbieta A. Slodkowska
- Department of Laboratory Medicine and Molecular Diagnostics, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada; (D.D.); (E.A.S.)
| | - Alex Shenfield
- Department of Engineering and Mathematics, Sheffield Hallam University, Howard St, Sheffield S1 1WB, UK;
| | - Katarzyna J. Jerzak
- Division of Medical Oncology, Department of Medicine, University of Toronto, Toronto, ON M5S 3H2, Canada;
| | - Ali Sadeghi-Naini
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada; (A.L.); (A.S.); (M.A.A.); (L.F.); (E.L.); (B.L.); (A.S.-N.)
- Department of Electrical Engineering and Computer Science, York University, Toronto, ON M3J 2S5, Canada
| | - William T. Tran
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada; (A.L.); (A.S.); (M.A.A.); (L.F.); (E.L.); (B.L.); (A.S.-N.)
- Biological Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Temerty Centre for AI Research and Education, University of Toronto, Toronto, ON M5S 1A8, Canada
- Radiogenomics Laboratory, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada; (F.-I.L.); (S.G.)
- Department of Radiation Oncology, University of Toronto, Toronto, ON M5T 1P5, Canada
- Correspondence: ; Tel.: +1-416-480-6100 (ext. 3746)
| |
Collapse
|
31
|
Barsha NA, Rahman A, Mahdy MRC. Automated detection and grading of Invasive Ductal Carcinoma breast cancer using ensemble of deep learning models. Comput Biol Med 2021; 139:104931. [PMID: 34666229 DOI: 10.1016/j.compbiomed.2021.104931] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2021] [Revised: 09/18/2021] [Accepted: 10/06/2021] [Indexed: 10/20/2022]
Abstract
Invasive ductal carcinoma (IDC) breast cancer is a significant health concern for women all around the world and early detection of the disease may increase the survival rate in patients. Therefore, Computer-Aided Diagnosis (CAD) based systems can assist pathologists to detect the disease early. In this study, we present an ensemble model to detect IDC using DenseNet-121 and DenseNet-169 followed by test time augmentation (TTA). The model achieved a balanced accuracy of 92.70% and an F1-score of 95.70%, outperforming the current state-of-the-art. Comparative analysis against various pre-trained deep learning models and preprocessing methods has been carried out. Qualitative analysis has also been conducted on the test dataset. After the detection of IDC breast cancer, it is important to grade it for further treatment. In our study, we also propose an ensemble model for the grading of IDC using the pre-trained DenseNet-121, DenseNet-201, ResNet-101v2, and ResNet-50 architectures. The model is evaluated on two validation cohorts. For the patch-level classification, the model yielded an overall accuracy of 69.31%, 75.07%, 61.85%, and 60.50% on one validation cohort and 62.44%, 79.14%, 76.62%, and 71.05% on the second validation cohort for 4×, 10×, 20×, and 40× magnified images respectively. The same architecture is further validated using a different IDC dataset where it achieved an overall accuracy of 90.07%. The performance of the models on the detection and grading of IDC shows that they can be useful to help pathologists detect and grade the disease.
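For readers unfamiliar with the two generic ingredients named here, the sketch below averages softmax outputs of several trained models over a few flipped views of the input (simple test-time augmentation). It is an illustrative, assumption-laden sketch of the general pattern, not the authors' DenseNet ensemble; the choice of flips as augmentations is an example.

```python
# Illustrative sketch of model ensembling + test-time augmentation (TTA);
# `models` is assumed to be a list of trained PyTorch classifiers.
import torch

def tta_ensemble_predict(models, image):
    """Average class probabilities over models and flipped views of `image`."""
    views = [image, torch.flip(image, dims=[-1]), torch.flip(image, dims=[-2])]
    probs = []
    for model in models:
        model.eval()
        with torch.no_grad():
            for view in views:
                probs.append(torch.softmax(model(view), dim=1))
    return torch.stack(probs).mean(dim=0)  # final ensemble + TTA prediction
```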
Collapse
Affiliation(s)
- Nusrat Ameen Barsha
- Department of Electrical & Computer Engineering, North South University, Bashundhara, Dhaka, 1229, Bangladesh.
| | - Aimon Rahman
- Department of Electrical & Computer Engineering, North South University, Bashundhara, Dhaka, 1229, Bangladesh.
| | - M R C Mahdy
- Department of Electrical & Computer Engineering, North South University, Bashundhara, Dhaka, 1229, Bangladesh.
| |
Collapse
|
32
|
Classification of breast cancer types, sub-types and grade from histopathological images using deep learning technique. HEALTH AND TECHNOLOGY 2021. [DOI: 10.1007/s12553-021-00592-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2023]
|
33
|
Chen J, Jiao J, He S, Han G, Qin J. Few-Shot Breast Cancer Metastases Classification via Unsupervised Cell Ranking. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2021; 18:1914-1923. [PMID: 31841420 DOI: 10.1109/tcbb.2019.2960019] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Tumor metastases detection is of great importance for the treatment of breast cancer patients. Various CNN (convolutional neural network) based methods get excellent performance in object detection/segmentation. However, the detection of metastases in hematoxylin and eosin (H&E) stained whole-slide images (WSI) is still challenging, mainly due to two aspects: (1) the resolution of the image is too large, and (2) labeled training data are lacking. Whole-slide images are generally stored in a multi-resolution structure with multiple downsampled tiles. It is difficult to feed the whole image into memory without compression. Moreover, labeling images is time-consuming and expensive for pathologists. In this paper, we study the problem of detecting breast cancer metastases in the pathological image on patch level. To address the abovementioned challenges, we propose a few-shot learning method to classify whether an image patch contains tumor cells. Specifically, we propose a patch-level unsupervised cell ranking approach, which only relies on images with limited labels. The main idea of the proposed method is that when cropping a patch A from the WSI and further cropping a sub-patch B from A, the cell number of A is always larger than that of B. Based on this observation, we make use of the unlabeled images to learn the ranking information of cell counting to extract the abstract features. Experimental results show that our method is effective to improve the patch-level classification accuracy, compared to the traditional supervised method. The source code is publicly available at https://github.com/fewshot-camelyon.
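The ranking constraint described here (a patch should score at least as high as any sub-patch cropped from it) can be expressed with a standard margin ranking loss. The sketch below is a hedged reconstruction of that idea only, not the released code, and assumes a scoring network that outputs one value per image.

```python
# Illustrative sketch of the patch-vs-sub-patch ranking idea using a standard
# margin ranking loss; not the authors' released implementation. `model` is
# assumed to map an image batch (N, 3, H, W) to one score per image (N, 1).
import torch
import torch.nn as nn
import torch.nn.functional as F

def cell_ranking_loss(model, patch_a, margin=1.0):
    n, _, h, w = patch_a.shape
    sub_b = patch_a[:, :, h // 4: 3 * h // 4, w // 4: 3 * w // 4]  # centred sub-patch B of A
    sub_b = F.interpolate(sub_b, size=(h, w), mode="bilinear", align_corners=False)
    score_a = model(patch_a).squeeze(1)
    score_b = model(sub_b).squeeze(1)
    target = torch.ones_like(score_a)  # require score(A) > score(B) by `margin`
    return nn.MarginRankingLoss(margin=margin)(score_a, score_b, target)
```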
Collapse
|
34
|
Das A, Nair MS, Peter SD. Computer-Aided Histopathological Image Analysis Techniques for Automated Nuclear Atypia Scoring of Breast Cancer: a Review. J Digit Imaging 2021; 33:1091-1121. [PMID: 31989390 DOI: 10.1007/s10278-019-00295-z] [Citation(s) in RCA: 34] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022] Open
Abstract
Breast cancer is the most common type of malignancy diagnosed in women. Through early detection and diagnosis, there is a great chance of recovery, thereby reducing the mortality rate. Many preliminary tests like non-invasive radiological diagnosis using ultrasound, mammography, and MRI are widely used for the diagnosis of breast cancer. However, histopathological analysis of breast biopsy specimens is inevitable and is considered to be the gold standard for the affirmation of cancer. With the advancements in digital computing capabilities, memory capacity, and imaging modalities, the development of computer-aided powerful analytical techniques for histopathological data has increased dramatically. These automated techniques help to alleviate the laborious work of the pathologist and to improve the reproducibility and reliability of the interpretation. This paper reviews and summarizes digital image computational algorithms applied on histopathological breast cancer images for nuclear atypia scoring and explores the future possibilities. The algorithms for nuclear pleomorphism scoring of breast cancer can be broadly grouped into two categories: handcrafted feature-based and learned feature-based. Handcrafted feature-based algorithms mainly include computational steps like pre-processing the images, segmenting the nuclei, extracting unique features, feature selection, and machine learning-based classification. However, most of the recent algorithms are based on learned features that extract high-level abstractions directly from the histopathological images utilizing deep learning techniques. In this paper, we discuss the various algorithms applied for the nuclear pleomorphism scoring of breast cancer, discuss the challenges to be dealt with, and outline the importance of benchmark datasets. A comparative analysis of some prominent works on breast cancer nuclear atypia scoring is done using a benchmark dataset, which enables quantitative measurement and comparison of the different features and algorithms used for breast cancer grading. Results show that improvements are still required to have an automated cancer grading system suitable for clinical applications.
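The handcrafted-feature route listed above (pre-processing, nuclei segmentation, feature extraction, feature selection, classification) maps naturally onto a simple machine-learning pipeline. The sketch below illustrates only the last two stages, under the assumption that nuclear descriptors have already been extracted from segmented nuclei; the feature matrix and atypia labels are placeholders.

```python
# Illustrative sketch of the handcrafted-feature route's final stages
# (feature selection + classification); data are placeholders, not real descriptors.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.random((150, 40))          # 40 handcrafted nuclear features per image (placeholder)
y = rng.integers(1, 4, 150)        # nuclear atypia scores 1-3 (placeholder)
scorer = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=10), SVC())
scorer.fit(X, y)
print(scorer.predict(X[:5]))
```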
Collapse
Affiliation(s)
- Asha Das
- Artificial Intelligence & Computer Vision Lab, Department of Computer Science, Cochin University of Science and Technology, Kochi, Kerala, 682022, India.
| | - Madhu S Nair
- Artificial Intelligence & Computer Vision Lab, Department of Computer Science, Cochin University of Science and Technology, Kochi, Kerala, 682022, India
| | - S David Peter
- Artificial Intelligence & Computer Vision Lab, Department of Computer Science, Cochin University of Science and Technology, Kochi, Kerala, 682022, India
| |
Collapse
|
35
|
Oliveira SP, Neto PC, Fraga J, Montezuma D, Monteiro A, Monteiro J, Ribeiro L, Gonçalves S, Pinto IM, Cardoso JS. CAD systems for colorectal cancer from WSI are still not ready for clinical acceptance. Sci Rep 2021; 11:14358. [PMID: 34257363 PMCID: PMC8277780 DOI: 10.1038/s41598-021-93746-z] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2021] [Accepted: 06/28/2021] [Indexed: 02/07/2023] Open
Abstract
Most oncological cases can be detected by imaging techniques, but diagnosis is based on pathological assessment of tissue samples. In recent years, the pathology field has evolved to a digital era where tissue samples are digitised and evaluated on screen. As a result, digital pathology opened up many research opportunities, allowing the development of more advanced image processing techniques, as well as artificial intelligence (AI) methodologies. Nevertheless, despite colorectal cancer (CRC) being the second deadliest cancer type worldwide, with increasing incidence rates, the application of AI for CRC diagnosis, particularly on whole-slide images (WSI), is still a young field. In this review, we analyse some relevant works published on this particular task and highlight the limitations that hinder the application of these works in clinical practice. We also empirically investigate the feasibility of using weakly annotated datasets to support the development of computer-aided diagnosis systems for CRC from WSI. Our study underscores the need for large datasets in this field and the use of an appropriate learning methodology to gain the most benefit from partially annotated datasets. The CRC WSI dataset used in this study, containing 1,133 colorectal biopsy and polypectomy samples, is available upon reasonable request.
Collapse
Affiliation(s)
- Sara P Oliveira
- INESCTEC, 4200-465, Porto, Portugal.
- Faculty of Engineering (FEUP), University of Porto, 4200-465, Porto, Portugal.
| | - Pedro C Neto
- INESCTEC, 4200-465, Porto, Portugal
- Faculty of Engineering (FEUP), University of Porto, 4200-465, Porto, Portugal
| | - João Fraga
- IMP Diagnostics, 4150-146, Porto, Portugal
| | - Diana Montezuma
- IMP Diagnostics, 4150-146, Porto, Portugal
- ICBAS, University of Porto, 4050-313, Porto, Portugal
- Cancer Biology and Epigenetics Group, IPO-Porto, 4200-072, Porto, Portugal
| | | | | | | | | | | | - Jaime S Cardoso
- INESCTEC, 4200-465, Porto, Portugal
- Faculty of Engineering (FEUP), University of Porto, 4200-465, Porto, Portugal
| |
Collapse
|
36
|
Liew XY, Hameed N, Clos J. A Review of Computer-Aided Expert Systems for Breast Cancer Diagnosis. Cancers (Basel) 2021; 13:2764. [PMID: 34199444 PMCID: PMC8199592 DOI: 10.3390/cancers13112764] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2021] [Revised: 05/25/2021] [Accepted: 05/28/2021] [Indexed: 11/18/2022] Open
Abstract
A computer-aided diagnosis (CAD) expert system is a powerful tool to efficiently assist a pathologist in achieving an early diagnosis of breast cancer. This process identifies the presence of cancer in breast tissue samples and the distinct type of cancer stages. In a standard CAD system, the main process involves image pre-processing, segmentation, feature extraction, feature selection, classification, and performance evaluation. In this review paper, we reviewed the existing state-of-the-art machine learning approaches applied at each stage involving conventional methods and deep learning methods, the comparisons within methods, and we provide technical details with advantages and disadvantages. The aims are to investigate the impact of CAD systems using histopathology images, investigate deep learning methods that outperform conventional methods, and provide a summary for future researchers to analyse and improve the existing techniques used. Lastly, we will discuss the research gaps of existing machine learning approaches for implementation and propose future direction guidelines for upcoming researchers.
Collapse
Affiliation(s)
- Xin Yu Liew
- Jubilee Campus, University of Nottingham, Wollaton Road, Nottingham NG8 1BB, UK; (N.H.); (J.C.)
| | | | | |
Collapse
|
37
|
Tewary S, Mukhopadhyay S. HER2 Molecular Marker Scoring Using Transfer Learning and Decision Level Fusion. J Digit Imaging 2021; 34:667-677. [PMID: 33742331 PMCID: PMC8329150 DOI: 10.1007/s10278-021-00442-5] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2020] [Revised: 01/13/2021] [Accepted: 03/01/2021] [Indexed: 01/28/2023] Open
Abstract
In breast cancer, the immunohistochemical (IHC) marker human epidermal growth factor receptor 2 (HER2) is used for prognostic evaluation. Accurate assessment of HER2-stained tissue samples is essential in therapeutic decision making for the patients. In regular clinical settings, expert pathologists assess the HER2-stained tissue slide under a microscope for manual scoring based on prior experience. Manual scoring is time consuming, tedious, and often prone to inter-observer variation among groups of pathologists. With the recent advancement in the area of computer vision and deep learning, medical image analysis has received significant attention. A number of deep learning architectures have been proposed for classification of different image groups. These networks are also used for transfer learning to classify other image classes. In the presented study, a number of transfer learning architectures are used for HER2 scoring. Five pre-trained architectures, viz. VGG16, VGG19, ResNet50, MobileNetV2, and NASNetMobile, with the fully connected layers truncated for 3-class classification, have been used for the comparative assessment of the networks as well as further scoring of stained tissue sample images based on statistical voting using the mode operator. The HER2 Challenge dataset from Warwick University is used in this study. A total of 2130 image patches were extracted to generate the training dataset from 300 training images corresponding to 30 training cases. The output model is then tested on 800 new test image patches from 100 test images acquired from 10 test cases (different from the training cases) to report the outcome results. The transfer learning models have shown significant accuracy, with VGG19 showing the best accuracy for the test images. The accuracy is found to be 93%, which increases to 98% for image-based scoring using the statistical voting mechanism. The results demonstrate a capable quantification pipeline for automated HER2 score generation.
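The image-level scoring step described here, taking the mode of the patch-level predictions, is simple to state in code. The sketch below is an illustrative reconstruction of that voting rule only, not the authors' full transfer-learning pipeline.

```python
# Illustrative sketch of the statistical (mode) voting step: the image-level
# HER2 score is the most frequent patch-level prediction. Not the authors' code.
from collections import Counter

def image_level_score(patch_scores):
    """patch_scores: per-patch HER2 class labels for one image, e.g. [0, 2, 2, 1, 2]."""
    return Counter(patch_scores).most_common(1)[0][0]

# Example: image_level_score([2, 2, 1, 2, 0]) -> 2
```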
Collapse
Affiliation(s)
- Suman Tewary
- School of Medical Science and Technology, Indian Institute of Technology Kharagpur, Kharagpur, India
- Computational Instrumentation, CSIR-Central Scientific Instruments Organisation, Chandigarh, India
| | - Sudipta Mukhopadhyay
- Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology Kharagpur, Kharagpur, India.
| |
Collapse
|
38
|
Mitotic nuclei analysis in breast cancer histopathology images using deep ensemble classifier. Med Image Anal 2021; 72:102121. [PMID: 34139665 DOI: 10.1016/j.media.2021.102121] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2021] [Revised: 05/20/2021] [Accepted: 05/24/2021] [Indexed: 02/06/2023]
Abstract
Mitotic nuclei estimation in breast tumour samples has a prognostic significance in analysing tumour aggressiveness and grading system. The automated assessment of mitotic nuclei is challenging because of their high similarity with non-mitotic nuclei and heteromorphic appearance. In this work, we have proposed a new Deep Convolutional Neural Network (CNN) based Heterogeneous Ensemble technique "DHE-Mit-Classifier" for analysis of mitotic nuclei in breast histopathology images. The proposed technique in the first step detects candidate mitotic patches from the histopathological biopsy regions, whereas, in the second step, these patches are classified into mitotic and non-mitotic nuclei using the proposed DHE-Mit-Classifier. For the development of a heterogeneous ensemble, five different deep CNNs are designed and used as base-classifiers. These deep CNNs have varying architectural designs to capture the structural, textural, and morphological properties of the mitotic nuclei. The developed base-classifiers exploit different ideas, including (i) region homogeneity and feature invariance, (ii) asymmetric split-transform-merge, (iii) dilated convolution based multi-scale transformation, (iv) spatial and channel attention, and (v) residual learning. Multi-layer-perceptron is used as a meta-classifier to develop a robust and accurate classifier for providing the final decision. The performance of the proposed ensemble "DHE-Mit-Classifier" is evaluated against state-of-the-art CNNs. The performance evaluation on the test set suggests the superiority of the proposed ensemble with an F-score (0.77), recall (0.71), precision (0.83), and area under the precision-recall curve (0.80). The good generalisation of the proposed ensemble with a considerably high F-score and precision suggests its potential use in the development of an assistance tool for pathologists.
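The ensemble design described here, base classifiers whose outputs are combined by a multi-layer-perceptron meta-classifier, is an instance of stacking. The sketch below shows that pattern with generic scikit-learn base learners standing in for the paper's five custom deep CNNs, on placeholder data; it illustrates the combination scheme only, not the DHE-Mit-Classifier itself.

```python
# Illustrative stacking sketch: heterogeneous base classifiers combined by an
# MLP meta-classifier. Base learners and data are placeholders, not the paper's
# five custom CNNs or its mitosis patches.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
X = rng.random((200, 32))               # placeholder patch features
y = rng.integers(0, 2, 200)             # mitotic vs. non-mitotic labels (placeholder)
ensemble = StackingClassifier(
    estimators=[("rf", RandomForestClassifier()), ("lr", LogisticRegression(max_iter=500))],
    final_estimator=MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000),
)
ensemble.fit(X, y)
```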
Collapse
|
39
|
Ruan J, Zhu Z, Wu C, Ye G, Zhou J, Yue J. A fast and effective detection framework for whole-slide histopathology image analysis. PLoS One 2021; 16:e0251521. [PMID: 33979398 PMCID: PMC8115773 DOI: 10.1371/journal.pone.0251521] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2021] [Accepted: 04/27/2021] [Indexed: 11/25/2022] Open
Abstract
Pathologists generally pan, focus, zoom and scan tissue biopsies either under microscopes or on digital images for diagnosis. With the rapid development of whole-slide digital scanners for histopathology, computer-assisted digital pathology image analysis has attracted increasing clinical attention. Thus, the working style of pathologists is also beginning to change. Computer-assisted image analysis systems have been developed to help pathologists perform basic examinations. This paper presents a novel lightweight detection framework for automatic tumor detection in whole-slide histopathology images. We develop the Double Magnification Combination (DMC) classifier, which is a modified DenseNet-40 to make patch-level predictions with only 0.3 million parameters. To improve the detection performance of multiple instances, we propose an improved adaptive sampling method with superpixel segmentation and introduce a new heuristic factor, local sampling density, as the convergence condition of iterations. In postprocessing, we use a CNN model with 4 convolutional layers to regulate the patch-level predictions based on the predictions of adjacent sampling points and use linear interpolation to generate a tumor probability heatmap. The entire framework was trained and validated using the dataset from the Camelyon16 Grand Challenge and Hubei Cancer Hospital. In our experiments, the average AUC was 0.95 in the test set for pixel-level detection.
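The post-processing step described here, turning sparse patch-level predictions into a dense tumor probability heatmap by linear interpolation, can be illustrated with SciPy's gridded interpolation. The coordinates and probabilities below are synthetic placeholders, not outputs of the authors' framework.

```python
# Illustrative sketch of heatmap generation by linear interpolation over sparse
# patch-level tumor probabilities; points and probabilities are synthetic.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(3)
points = rng.random((200, 2)) * 1000        # sampled patch centres (x, y) in slide coords
probs = rng.random(200)                     # patch-level tumor probabilities
grid_x, grid_y = np.mgrid[0:1000:256j, 0:1000:256j]   # 256 x 256 heatmap grid
heatmap = griddata(points, probs, (grid_x, grid_y), method="linear", fill_value=0.0)
```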
Collapse
Affiliation(s)
- Jun Ruan
- School of Information Engineering, Wuhan University of Technology, Wuhan, China
| | - Zhikui Zhu
- School of Information Engineering, Wuhan University of Technology, Wuhan, China
| | - Chenchen Wu
- School of Information Engineering, Wuhan University of Technology, Wuhan, China
| | - Guanglu Ye
- School of Information Engineering, Wuhan University of Technology, Wuhan, China
| | - Jingfan Zhou
- School of Information Engineering, Wuhan University of Technology, Wuhan, China
| | - Junqiu Yue
- Department of Pathology, Huazhong University of Science and Technology, Tongji Medical College, Hubei Cancer Hospital, Wuhan, China
- * E-mail:
| |
Collapse
|
40
|
Lagree A, Mohebpour M, Meti N, Saednia K, Lu FI, Slodkowska E, Gandhi S, Rakovitch E, Shenfield A, Sadeghi-Naini A, Tran WT. A review and comparison of breast tumor cell nuclei segmentation performances using deep convolutional neural networks. Sci Rep 2021; 11:8025. [PMID: 33850222 PMCID: PMC8044238 DOI: 10.1038/s41598-021-87496-1] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2020] [Accepted: 03/30/2021] [Indexed: 02/07/2023] Open
Abstract
Breast cancer is currently the second most common cause of cancer-related death in women. Presently, the clinical benchmark in cancer diagnosis is tissue biopsy examination. However, the manual process of histopathological analysis is laborious, time-consuming, and limited by the quality of the specimen and the experience of the pathologist. This study's objective was to determine if deep convolutional neural networks can be trained, with transfer learning, on a set of histopathological images independent of breast tissue to segment tumor nuclei of the breast. Various deep convolutional neural networks were evaluated for the study, including U-Net, Mask R-CNN, and a novel network (GB U-Net). The networks were trained on a set of Hematoxylin and Eosin (H&E)-stained images of eight diverse types of tissues. GB U-Net demonstrated superior performance in segmenting sites of invasive diseases (AJI = 0.53, mAP = 0.39 & AJI = 0.54, mAP = 0.38), validated on two hold-out datasets exclusively containing breast tissue images of approximately 7,582 annotated cells. The results of the networks, trained on images independent of breast tissue, demonstrated that tumor nuclei of the breast could be accurately segmented.
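As a minimal illustration of how segmentation masks are compared against annotations, the sketch below computes a plain intersection-over-union score. The AJI and mAP figures reported above are stricter, object-level metrics, so this is a simplification for orientation only.

```python
# Simplified illustration: binary intersection-over-union between a predicted
# nuclei mask and a ground-truth mask (much simpler than the object-level AJI).
import numpy as np

def mask_iou(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    union = np.logical_or(pred, true).sum()
    return float(np.logical_and(pred, true).sum() / union) if union else 1.0

# Example: mask_iou(np.ones((4, 4)), np.eye(4)) -> 0.25
```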
Collapse
Affiliation(s)
- Andrew Lagree
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, Canada
- Biological Sciences Platform, Sunnybrook Research Institute, Toronto, Canada
- Radiogenomics Laboratory, Sunnybrook Health Sciences Centre, Toronto, Canada
- Temerty Centre for AI Research and Education in Medicine, University of Toronto, Toronto, Canada
| | - Majidreza Mohebpour
- Biological Sciences Platform, Sunnybrook Research Institute, Toronto, Canada
| | - Nicholas Meti
- Radiogenomics Laboratory, Sunnybrook Health Sciences Centre, Toronto, Canada
- Division of Medical Oncology, Department of Medicine, University of Toronto, Toronto, Canada
| | - Khadijeh Saednia
- Department of Electrical Engineering and Computer Science, York University, Toronto, Canada
| | - Fang-I Lu
- Radiogenomics Laboratory, Sunnybrook Health Sciences Centre, Toronto, Canada
- Department of Laboratory Medicine and Molecular Diagnostics, Sunnybrook Health Sciences Centre, Toronto, Canada
| | - Elzbieta Slodkowska
- Department of Laboratory Medicine and Molecular Diagnostics, Sunnybrook Health Sciences Centre, Toronto, Canada
| | - Sonal Gandhi
- Radiogenomics Laboratory, Sunnybrook Health Sciences Centre, Toronto, Canada
- Division of Medical Oncology, Department of Medicine, University of Toronto, Toronto, Canada
| | - Eileen Rakovitch
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, Canada
| | - Alex Shenfield
- Department of Engineering and Mathematics, Sheffield Hallam University, Sheffield, UK
| | - Ali Sadeghi-Naini
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, Canada
- Temerty Centre for AI Research and Education in Medicine, University of Toronto, Toronto, Canada
- Physical Sciences Platform, Sunnybrook Research Institute, Toronto, Canada
- Department of Electrical Engineering and Computer Science, York University, Toronto, Canada
| | - William T Tran
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, Canada.
- Biological Sciences Platform, Sunnybrook Research Institute, Toronto, Canada.
- Radiogenomics Laboratory, Sunnybrook Health Sciences Centre, Toronto, Canada.
- Temerty Centre for AI Research and Education in Medicine, University of Toronto, Toronto, Canada.
- Department of Radiation Oncology, University of Toronto, Toronto, Canada.
- Department of Radiation Oncology, University of Toronto and Sunnybrook Health Sciences Centre, 2075 Bayview Avenue, TB 095, Toronto, ON, M4N 3M5, Canada.
| |
Collapse
|
41
|
On the Scale Invariance in State of the Art CNNs Trained on ImageNet. MACHINE LEARNING AND KNOWLEDGE EXTRACTION 2021. [DOI: 10.3390/make3020019] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/15/2023]
Abstract
The diffused practice of pre-training Convolutional Neural Networks (CNNs) on large natural image datasets such as ImageNet causes the automatic learning of invariance to object scale variations. This, however, can be detrimental in medical imaging, where pixel spacing has a known physical correspondence and size is crucial to the diagnosis, for example, the size of lesions, tumors or cell nuclei. In this paper, we use deep learning interpretability to identify at what intermediate layers such invariance is learned. We train and evaluate different regression models on the PASCAL-VOC (Pattern Analysis, Statistical modeling and ComputAtional Learning-Visual Object Classes) annotated data to (i) separate the effects of the closely related yet different notions of image size and object scale, (ii) quantify the presence of scale information in the CNN in terms of the layer-wise correlation between input scale and feature maps in InceptionV3 and ResNet50, and (iii) develop a pruning strategy that reduces the invariance to object scale of the learned features. Results indicate that scale information peaks at central CNN layers and drops close to the softmax, where the invariance is reached. Our pruning strategy uses this to obtain features that preserve scale information. We show that the pruning significantly improves the performance on medical tasks where scale is a relevant factor, for example for the regression of breast histology image magnification. These results show that the presence of scale information at intermediate layers legitimates transfer learning in applications that require scale covariance rather than invariance and that the performance on these tasks can be improved by pruning off the layers where the invariance is learned. All experiments are performed on publicly available data and the code is available on GitHub.
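A rough flavour of the layer-wise analysis described above can be obtained by hooking an intermediate layer and correlating its response with an input rescaling factor. The sketch below is a toy probe with random weights and input; it conflates image size and object scale, which the paper deliberately separates, and so only illustrates the mechanics, not the paper's regression-based method.

```python
# Toy probe (illustrative only): correlate an input rescaling factor with the
# mean activation of a mid-level ResNet50 layer captured via a forward hook.
import numpy as np
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50().eval()
acts = {}
model.layer3.register_forward_hook(lambda mod, inp, out: acts.update(mean=out.mean().item()))

img = torch.randn(1, 3, 224, 224)
scales, responses = [], []
for s in np.linspace(0.6, 1.4, 9):
    x = F.interpolate(img, scale_factor=float(s), mode="bilinear", align_corners=False)
    with torch.no_grad():
        model(x)
    scales.append(float(s))
    responses.append(acts["mean"])
print(np.corrcoef(scales, responses)[0, 1])  # correlation between scale and layer response
```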
Collapse
|
42
|
Stenzinger A, Alber M, Allgäuer M, Jurmeister P, Bockmayr M, Budczies J, Lennerz J, Eschrich J, Kazdal D, Schirmacher P, Wagner AH, Tacke F, Capper D, Müller KR, Klauschen F. Artificial intelligence and pathology: From principles to practice and future applications in histomorphology and molecular profiling. Semin Cancer Biol 2021; 84:129-143. [PMID: 33631297 DOI: 10.1016/j.semcancer.2021.02.011] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2020] [Revised: 01/29/2021] [Accepted: 02/16/2021] [Indexed: 02/07/2023]
Abstract
The complexity of diagnostic (surgical) pathology has increased substantially over the last decades with respect to histomorphological and molecular profiling. Pathology has steadily expanded its role in tumor diagnostics and beyond from disease entity identification via prognosis estimation to precision therapy prediction. It is therefore not surprising that pathology is among the disciplines in medicine with high expectations in the application of artificial intelligence (AI) or machine learning approaches given their capabilities to analyze complex data in a quantitative and standardized manner to further enhance scope and precision of diagnostics. While an obvious application is the analysis of histological images, recent applications for the analysis of molecular profiling data from different sources and clinical data support the notion that AI will enhance both histopathology and molecular pathology in the future. At the same time, current literature should not be misunderstood in a way that pathologists will likely be replaced by AI applications in the foreseeable future. Although AI will transform pathology in the coming years, recent studies reporting AI algorithms to diagnose cancer or predict certain molecular properties deal with relatively simple diagnostic problems that fall short of the diagnostic complexity pathologists face in clinical routine. Here, we review the pertinent literature of AI methods and their applications to pathology, and put the current achievements and what can be expected in the future in the context of the requirements for research and routine diagnostics.
Collapse
Affiliation(s)
- Albrecht Stenzinger
- Institute of Pathology, University Hospital Heidelberg, Im Neuenheimer Feld 224, Heidelberg, 69120, Germany; German Cancer Consortium (DKTK), Partner Site Heidelberg, and German Cancer Research Center (DKFZ), Heidelberg, Germany; German Center for Lung Research (DZL), Partner Site Heidelberg, Heidelberg, Germany.
| | - Maximilian Alber
- Institute of Pathology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health, Berlin, Germany; Aignostics GmbH, Schumannstr. 17, Berlin, 10117, Germany
| | - Michael Allgäuer
- Institute of Pathology, University Hospital Heidelberg, Im Neuenheimer Feld 224, Heidelberg, 69120, Germany
| | - Philipp Jurmeister
- Institute of Pathology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health, Berlin, Germany; German Cancer Consortium (DKTK), Partner Site Berlin, and German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Michael Bockmayr
- Institute of Pathology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health, Berlin, Germany; Department of Pediatric Hematology and Oncology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; Research Institute, Children's Cancer Center Hamburg, Hamburg, Germany
| | - Jan Budczies
- Institute of Pathology, University Hospital Heidelberg, Im Neuenheimer Feld 224, Heidelberg, 69120, Germany; German Cancer Consortium (DKTK), Partner Site Heidelberg, and German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Jochen Lennerz
- Department of Pathology, Center for Integrated Diagnostics, Harvard Medical School, Massachusetts General Hospital, Boston, MA, USA
| | - Johannes Eschrich
- Department of Hepatology & Gastroenterology, Charité University Medical Center, Berlin, Germany
| | - Daniel Kazdal
- Institute of Pathology, University Hospital Heidelberg, Im Neuenheimer Feld 224, Heidelberg, 69120, Germany; German Center for Lung Research (DZL), Partner Site Heidelberg, Heidelberg, Germany
| | - Peter Schirmacher
- Institute of Pathology, University Hospital Heidelberg, Im Neuenheimer Feld 224, Heidelberg, 69120, Germany; German Cancer Consortium (DKTK), Partner Site Heidelberg, and German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Alex H Wagner
- The Steve and Cindy Rasmussen Institute for Genomic Medicine, Nationwide Children's Hospital, Columbus, OH, 43205, USA; Department of Pediatrics, The Ohio State University, Columbus, OH, 43210, USA
| | - Frank Tacke
- Department of Hepatology & Gastroenterology, Charité University Medical Center, Berlin, Germany
| | - David Capper
- German Cancer Consortium (DKTK), Partner Site Berlin, and German Cancer Research Center (DKFZ), Heidelberg, Germany; Department of Neuropathology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, Berlin, Germany
| | - Klaus-Robert Müller
- Machine Learning Group, Technische Universität Berlin, Berlin, 10587, Germany; Department of Artificial Intelligence, Korea University, Seoul, 136-713, South Korea; Max-Planck-Institute for Informatics, Saarland Informatics Campus E1 4, Saarbrücken, 66123, Germany; Google Research, Brain Team, Berlin, Germany.
| | - Frederick Klauschen
- Institute of Pathology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health, Berlin, Germany; German Cancer Consortium (DKTK), Partner Site Berlin, and German Cancer Research Center (DKFZ), Heidelberg, Germany; Institute of Pathology, Ludwig-Maximilians-Universität München, Thalkirchner Strasse 36, München, 80337, Germany.
| |
Collapse
|
43
|
Murtaza G, Abdul Wahab AW, Raza G, Shuib L. A tree-based multiclassification of breast tumor histopathology images through deep learning. Comput Med Imaging Graph 2021; 89:101870. [PMID: 33545489 DOI: 10.1016/j.compmedimag.2021.101870] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2020] [Revised: 12/28/2020] [Accepted: 01/21/2021] [Indexed: 10/22/2022]
Abstract
Worldwide, the burden of cancer has increased drastically over the past few years. Among all types of cancers in women, breast cancer (BrC) is the main cause of unnatural deaths. For early diagnosis, histopathology (Hp) imaging is a gold standard for positive and detailed (at tissue level) diagnosis of breast tumor (BrT) compared to mammogram images. A large number of studies used BrT Hp images to solve binary or multiclassification problems using high computational resources. However, classification models' performance may be compromised due to the high correlation among various types of BrT in Hp images, which raises the misclassification rate. Thus, this paper aims to develop a tree-based BrT multiclassification model via deep learning (DL) to extract discriminative features to solve the multiclassification problem with better performance using less computational resources. The main contributions of this work are the creation of an ensemble, tree-based DL model that is pre-trained on the BreakHis dataset, and the implementation of a misclassification reduction algorithm. The ensemble, tree-based DL model extracts discriminative BrT features from Hp images. The target dataset (i.e., Bioimaging challenge 2015 breast histology) is small in size; thus, to avoid overfitting of the proposed model, pretraining is performed on the BreakHis dataset, and the misclassification reduction algorithm is implemented to enhance the performance of the classification model. The experimental results show that the proposed model outperformed the existing state-of-the-art baseline studies. The achieved classification accuracy ranges from 87.50% to 100% for four subtypes of BrT. Thus, the proposed model can assist doctors as a second opinion in any healthcare centre.
Collapse
Affiliation(s)
- Ghulam Murtaza
- Faculty of Computer Science and Information Technology, University of Malaya, 50603, Kuala Lumpur, Malaysia; Department of Computer Science, Sukkur IBA University, Sukkur, Pakistan.
| | - Ainuddin Wahid Abdul Wahab
- Faculty of Computer Science and Information Technology, University of Malaya, 50603, Kuala Lumpur, Malaysia.
| | - Ghulam Raza
- Our Lady of Lourdes Hospital Drogheda Ireland, Ireland.
| | - Liyana Shuib
- Faculty of Computer Science and Information Technology, University of Malaya, 50603, Kuala Lumpur, Malaysia.
| |
Collapse
|
44
|
ElOuassif B, Idri A, Hosni M, Abran A. Classification techniques in breast cancer diagnosis: A systematic literature review. COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING: IMAGING & VISUALIZATION 2021. [DOI: 10.1080/21681163.2020.1811159] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Affiliation(s)
- Bouchra ElOuassif
- Department of Web and Mobile Engineering, Software Project Management Research Team, ENSIAS, Mohammed V University, Rabat, Morocco
| | - Ali Idri
- Department of Web and Mobile Engineering, Software Project Management Research Team, ENSIAS, Mohammed V University, Rabat, Morocco
| | - Mohamed Hosni
- Department of Web and Mobile Engineering, Software Project Management Research Team, ENSIAS, Mohammed V University, Rabat, Morocco
| | - Alain Abran
- Department of Software Engineering and Information Technology, École de Technologie Supérieure, Université du Québec, Montreal, Canada
| |
Collapse
|
45
|
Adam A, Hadi Abd Rahman A, Samsiah Sani N, Abdi Alkareem Alyessari Z, Jumaadzan Zaleha Mamat N, Hasan B. Epithelial Layer Estimation Using Curvatures and Textural Features for Dysplastic Tissue Detection. COMPUTERS, MATERIALS & CONTINUA 2021; 67:761-777. [DOI: 10.32604/cmc.2021.014599] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/02/2020] [Accepted: 11/14/2020] [Indexed: 09/02/2023]
|
46
|
Dey P. The emerging role of deep learning in cytology. Cytopathology 2020; 32:154-160. [PMID: 33222315 DOI: 10.1111/cyt.12942] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2020] [Revised: 11/11/2020] [Accepted: 11/16/2020] [Indexed: 12/14/2022]
Abstract
Deep learning (DL) is a component or subset of artificial intelligence. DL has driven significant advances in feature extraction and image classification. Various algorithmic models are used in DL, such as convolutional neural networks (CNN), recurrent neural networks, restricted Boltzmann machines, deep belief networks and autoencoders. Of these, CNN is the most commonly used algorithm in the field of pathology for feature extraction and for building neural network models. In cytology, DL may be useful for tumour diagnosis, classification and grading. In this brief review, the basic concepts of DL and CNN are described. The applications, prospects and challenges of DL in cytology are also discussed.
Collapse
Affiliation(s)
- Pranab Dey
- Department of Cytology and Gynec Pathology, Post Graduate Institute of Medical Education and Research, Chandigarh, India
| |
Collapse
|
47
|
Wan T, Zhao L, Feng H, Li D, Tong C, Qin Z. Robust nuclei segmentation in histopathology using ASPPU-Net and boundary refinement. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.08.103] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/14/2023]
|
48
|
Weakly-Supervised Classification of HER2 Expression in Breast Cancer Haematoxylin and Eosin Stained Slides. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10144728] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
Human epidermal growth factor receptor 2 (HER2) evaluation commonly requires immunohistochemistry (IHC) tests on breast cancer tissue, in addition to the standard haematoxylin and eosin (H&E) staining tests. Additional costs and time spent on further testing might be avoided if HER2 overexpression could be effectively inferred from H&E stained slides, as a preliminary indication of the IHC result. In this paper, we propose the first method that aims to achieve this goal. The proposed method is based on multiple instance learning (MIL), using a convolutional neural network (CNN) that separately processes H&E stained slide tiles and outputs an IHC label. This CNN is pretrained on IHC stained slide tiles but does not use these data during inference/testing. H&E tiles are extracted from invasive tumour areas segmented with the HASHI algorithm. The individual tile labels are then combined to obtain a single label for the whole slide. The network was trained on slides from the HER2 Scoring Contest dataset (HER2SC) and tested on two disjoint subsets of slides from the HER2SC database and the TCGA-TCIA-BRCA (BRCA) collection. The proposed method attained 83.3 % classification accuracy on the HER2SC test set and 53.8 % on the BRCA test set. Although further efforts should be devoted to achieving improved performance, the obtained results are promising, suggesting that it is possible to perform HER2 overexpression classification on H&E stained tissue slides.
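As a purely illustrative sketch of the tile-to-slide aggregation step described above, per-tile CNN probabilities can be pooled into one slide-level HER2 label. The voting rule, the 0.5 threshold and the toy scores below are assumptions for illustration, not the exact procedure of the cited paper.

import numpy as np

def slide_label_from_tiles(tile_probs, threshold=0.5):
    """Combine per-tile HER2-positive probabilities into one slide label.

    tile_probs : 1-D sequence of CNN outputs for the H&E tiles of one slide.
    Majority vote over thresholded tiles is an illustrative choice only.
    """
    tile_labels = (np.asarray(tile_probs) >= threshold).astype(int)
    return "HER2-positive" if tile_labels.mean() > 0.5 else "HER2-negative"

# toy example: 6 tiles, 4 of which the CNN scores as likely HER2-positive
print(slide_label_from_tiles([0.9, 0.8, 0.7, 0.6, 0.2, 0.1]))  # -> HER2-positive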
Collapse
|
49
|
Jiang Y, Yang M, Wang S, Li X, Sun Y. Emerging role of deep learning-based artificial intelligence in tumor pathology. Cancer Commun (Lond) 2020; 40:154-166. [PMID: 32277744 PMCID: PMC7170661 DOI: 10.1002/cac2.12012] [Citation(s) in RCA: 222] [Impact Index Per Article: 44.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2020] [Accepted: 02/06/2020] [Indexed: 12/11/2022] Open
Abstract
The development of digital pathology and the progression of state-of-the-art computer vision algorithms have led to increasing interest in the use of artificial intelligence (AI), especially deep learning (DL)-based AI, in tumor pathology. DL-based algorithms have been developed for all kinds of work in tumor pathology, including tumor diagnosis, subtyping, grading, staging, and prognostic prediction, as well as the identification of pathological features, biomarkers and genetic changes. The applications of AI in pathology not only contribute to improving diagnostic accuracy and objectivity but also reduce the workload of pathologists, enabling them to spend more time on high-level decision-making tasks. In addition, AI helps pathologists meet the requirements of precision oncology. However, challenges to the implementation of AI remain, including algorithm validation and interpretability, computing systems, skepticism among pathologists, clinicians and patients, and regulatory and reimbursement issues. Herein, we present an overview of how AI-based approaches could be integrated into the workflow of pathologists and discuss the challenges and perspectives of implementing AI in tumor pathology.
Collapse
Affiliation(s)
- Yahui Jiang
- Department of Pathology, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, National Clinical Research Center for Cancer, Tianjin Cancer Institute and Hospital, Tianjin Medical University, Tianjin 300060, P. R. China
| | - Meng Yang
- Department of Epidemiology and Biostatistics, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, National Clinical Research Center for Cancer, Tianjin Cancer Institute and Hospital, Tianjin Medical University, Tianjin 300060, P. R. China
| | - Shuhao Wang
- Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing 100084, P. R. China
| | - Xiangchun Li
- Department of Epidemiology and Biostatistics, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, National Clinical Research Center for Cancer, Tianjin Cancer Institute and Hospital, Tianjin Medical University, Tianjin 300060, P. R. China
| | - Yan Sun
- Department of Pathology, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, National Clinical Research Center for Cancer, Tianjin Cancer Institute and Hospital, Tianjin Medical University, Tianjin 300060, P. R. China
| |
Collapse
|
50
|
Batch Mode Active Learning on the Riemannian Manifold for Automated Scoring of Nuclear Pleomorphism in Breast Cancer. Artif Intell Med 2020; 103:101805. [PMID: 32143801 DOI: 10.1016/j.artmed.2020.101805] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2019] [Revised: 01/12/2020] [Accepted: 01/13/2020] [Indexed: 11/22/2022]
Abstract
Breast cancer is the most prevalent invasive cancer among women. Its mortality can be reduced considerably through timely prognosis and appropriate treatment planning, supported by computer-aided detection and diagnosis techniques. With the advent of whole slide image (WSI) scanners for digitizing histopathological tissue samples, there has been a drastic increase in the availability of digital histopathological images. However, these samples are often unlabeled and must be annotated manually by domain experts and experienced pathologists. This annotation process, required to acquire a large, high-quality labeled training set for nuclear atypia scoring, is tedious, expensive and time-consuming. Active learning techniques have achieved widespread acceptance as a way of reducing the human effort of annotating data samples. In this paper, we explore the possibilities of active learning for nuclear pleomorphism scoring over a non-Euclidean framework, the Riemannian manifold. The active learning technique adopted for cancer grading follows a batch-mode framework that adaptively identifies both the batch size and the batch of instances to be queried, via submodular optimization. Samples for annotation are selected by considering the diversity and redundancy between pairs of samples, based on kernelized Riemannian distance measures such as the log-Euclidean metric and two Bregman divergences, the Stein and Jeffrey divergences. Results of the adaptive batch-mode active learning on the Riemannian manifold show superior performance compared with state-of-the-art techniques for breast cancer nuclear pleomorphism scoring, as the method exploits information from the unlabeled samples.
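For reference, the three symmetric positive definite (SPD) matrix dissimilarities named in the abstract have standard closed forms; the expressions below are reproduced from the general literature (conventions differ by constant factors across papers), not from the cited paper itself. For SPD matrices X, Y of size n x n, with log the matrix logarithm and \lVert\cdot\rVert_F the Frobenius norm:

\[
d_{\mathrm{LE}}(X,Y) = \left\lVert \log X - \log Y \right\rVert_F \quad \text{(log-Euclidean metric)}
\]
\[
D_{\mathrm{S}}(X,Y) = \log\det\!\left(\tfrac{X+Y}{2}\right) - \tfrac{1}{2}\log\det(XY) \quad \text{(Stein divergence)}
\]
\[
D_{\mathrm{J}}(X,Y) = \tfrac{1}{2}\,\mathrm{tr}\!\left(X^{-1}Y\right) + \tfrac{1}{2}\,\mathrm{tr}\!\left(Y^{-1}X\right) - n \quad \text{(Jeffrey divergence)}
\]

These dissimilarities quantify how redundant two candidate samples are, which is the quantity the submodular batch-selection step described above trades off against diversity.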
Collapse
|