101
Arunachalam P, Venkatakrishnan P, Janakiraman N, Sangeetha S. Detection of Structure Characteristics and Its Discontinuity Based Field Programmable Gate Array Processor in Cancer Cell by Wavelet Transform. Journal of Medical Imaging and Health Informatics 2021. [DOI: 10.1166/jmihi.2021.3902]
Abstract
Digital clinical histopathology is one of the crucial techniques for precise cancer-cell diagnosis in modern medicine. Synovial Sarcoma (SS) cancer cells appear as spindle-shaped cell (SSC) structures, and it is very difficult to identify the exact oval-shaped cell structure through a pathologist's visual inspection alone. At the same time, image data processing in large, complex networks must be carried out efficiently and reliably, and a Field Programmable Gate Array (FPGA) is well suited to this task. In this work, an FPGA-based cancer-cell classification scheme is developed. The regularity of SSC structures and their discontinuities are measured mathematically by the Hölder exponent (HE) function. HE values are determined by the Wavelet Transform Modulus Maxima (WTMM) and Wavelet Leader (WL) methods, using the Haar wavelet as the basis function, on an FPGA processor. Quantitative parameters, namely the Mean of Asymptotic Discontinuity (MAD), Mean of Removable Discontinuity (MRD), and Number of Discontinuity Points (NDPs), are used to compare discontinuity detection between the WTMM and WL methods. The significant difference in discontinuity-detection performance between the two methods is analyzed with receiver operating characteristic (ROC) curves. The experimental results show that the WL method is more practically feasible and gives satisfactory performance, with sensitivity and specificity of 80.56% and 59.46%, respectively, on the blue color component of the SNR 20 dB noise image.
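For readers unfamiliar with the technique, the wavelet-leader route to a Hölder exponent can be sketched in a few lines of plain Python. This is an illustrative reimplementation under simplifying assumptions (a single global leader per level, an orthonormal Haar filter, a least-squares log-log fit), not the paper's FPGA design:

```python
import numpy as np

def haar_details(x, max_level=8):
    """Orthonormal Haar detail coefficients, finest level first."""
    approx = np.asarray(x, dtype=float)
    details = []
    for _ in range(max_level):
        if approx.size < 2:
            break
        n = approx.size - (approx.size % 2)   # drop a trailing odd sample
        even, odd = approx[0:n:2], approx[1:n:2]
        details.append((even - odd) / np.sqrt(2.0))
        approx = (even + odd) / np.sqrt(2.0)
    return details

def holder_exponent(x, max_level=8):
    """Global Hoelder-exponent estimate from wavelet leaders.

    The leader at level l is the largest |detail| at that level; for an
    orthonormal wavelet its log2 grows roughly like (h + 1/2) * l near a
    singularity of exponent h, so the fitted slope minus 1/2 estimates h.
    """
    leaders = np.array([np.abs(d).max() for d in haar_details(x, max_level)])
    levels = np.arange(1, leaders.size + 1)
    slope = np.polyfit(levels, np.log2(leaders + 1e-300), 1)[0]
    return slope - 0.5
```

For a cusp of the form |t - t0|^h, a steeper (smoother) singularity yields a larger estimate, which is the property the MAD/MRD/NDP statistics build on.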
Affiliation(s)
- P. Arunachalam
- Electronics and Communication Engineering Department, KLN College of Engineering-Madurai, Affiliated to Centre for Research, Anna University-Chennai 630612, Tamilnadu, India
- P. Venkatakrishnan
- Electronics and Communication Engineering Department, CMR Technical Campus, Telangana 501401, India
- N. Janakiraman
- Electronics and Communication Engineering Department, KLN College of Engineering-Madurai, Affiliated to Centre for Research, Anna University-Chennai 630612, Tamilnadu, India
- S. Sangeetha
- Electrical and Electronics Engineering Department, CMR College of Engineering & Technology, Telangana 501401, India
102
103
Zheng Y, Jiang Z, Shi J, Xie F, Zhang H, Luo W, Hu D, Sun S, Jiang Z, Xue C. Encoding histopathology whole slide images with location-aware graphs for diagnostically relevant regions retrieval. Med Image Anal 2021; 76:102308. [PMID: 34856455] [DOI: 10.1016/j.media.2021.102308]
Abstract
Content-based histopathological image retrieval (CBHIR) has become popular in recent years in histopathological image analysis. CBHIR systems provide auxiliary diagnostic information for pathologists by searching a pre-established database for regions similar in content to a region of interest (ROI). Retrieving diagnostically relevant regions from a database of histopathological whole slide images (WSIs) is challenging yet significant for clinical applications. In this paper, we propose a novel framework for region retrieval from a WSI database based on location-aware graphs and deep hashing techniques. Compared to existing CBHIR frameworks, both the structural information and the global location information of ROIs in the WSI are preserved by graph convolution and self-attention operations, which makes the retrieval framework more sensitive to regions that are similar in tissue distribution. Moreover, benefiting from the graph structure, the proposed framework scales well with variation in both the size and the shape of ROIs, allowing the pathologist to define query regions with free curves according to the appearance of the tissue. Finally, retrieval is achieved with a hashing technique, which keeps the framework efficient and adequate for practical large-scale WSI databases. The proposed method was evaluated on an in-house endometrium dataset with 2650 WSIs and on the public ACDC-LungHP dataset. The experimental results demonstrate that the proposed method achieves a mean average precision above 0.667 on the endometrium dataset and above 0.869 on the ACDC-LungHP dataset for irregular region retrieval, which is superior to state-of-the-art methods. The average retrieval time from a database containing 1855 WSIs is 0.752 ms. The source code is available at https://github.com/zhengyushan/lagenet.
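The hashing step that makes such retrieval fast reduces, at query time, to ranking binary codes by Hamming distance. A minimal sketch (the function name and layout are illustrative, not the released LAGE-Net API):

```python
import numpy as np

def hash_retrieve(query_code, db_codes, top_k=3):
    """Rank database entries by Hamming distance to a binary query code."""
    q = np.asarray(query_code, dtype=bool)
    db = np.asarray(db_codes, dtype=bool)
    # Hamming distance = number of differing bits per database code.
    dists = np.count_nonzero(db != q[None, :], axis=1)
    order = np.argsort(dists, kind="stable")[:top_k]
    return order, dists[order]
```

In a real system the codes are produced by the learned hash layer and the XOR/popcount loop is what keeps millisecond-scale retrieval feasible over thousands of WSIs.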
Affiliation(s)
- Yushan Zheng
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China
- Zhiguo Jiang
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China; Image Processing Center, School of Astronautics, Beihang University, Beijing 102206, China
- Jun Shi
- School of Software, Hefei University of Technology, Hefei 230601, China
- Fengying Xie
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China; Image Processing Center, School of Astronautics, Beihang University, Beijing 102206, China
- Haopeng Zhang
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China; Image Processing Center, School of Astronautics, Beihang University, Beijing 102206, China
- Wei Luo
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China; Image Processing Center, School of Astronautics, Beihang University, Beijing 102206, China
- Dingyi Hu
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China; Image Processing Center, School of Astronautics, Beihang University, Beijing 102206, China
- Shujiao Sun
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China; Image Processing Center, School of Astronautics, Beihang University, Beijing 102206, China
- Zhongmin Jiang
- Department of Pathology, Tianjin Fifth Central Hospital, Tianjin 300450, China
- Chenghai Xue
- Wankangyuan Tianjin Gene Technology, Inc, Tianjin 300220, China; Tianjin Institute of Industrial Biotechnology, Chinese Academy of Sciences, Tianjin 300308, China
104
Sethy PK, Pandey C, Khan MR, Behera SK, Vijaykumar K, Panigrahi S. A cost-effective computer-vision based breast cancer diagnosis. Journal of Intelligent & Fuzzy Systems 2021; 41:5253-5263. [DOI: 10.3233/jifs-189848]
Abstract
In the last decade, the World Health Organization (WHO) has reported extensively on breast cancer: about 2.1 million women are affected every year, and it is the second leading cause of cancer death in women. Early detection and diagnosis of cancer appreciably increase the chance of saving lives and reduce treatment costs. In this paper, we survey the image processing, machine learning (ML), and deep learning (DL) techniques used in breast cancer detection and diagnosis. We also propose a novel, cost-effective computer-vision based method for breast cancer detection and diagnosis. Besides detecting and diagnosing breast cancer, the proposed method locates the abnormality within the breast, which can assist breast-conserving surgery or partial mastectomy. The proposed approach is simple and cost-effective and produced highly accurate and useful outcomes compared with existing approaches.
Affiliation(s)
- Chanki Pandey
- Department of ET&T Engineering, Government Engineering College, Jagdalpur, CG, India
- Mohammad Rafique Khan
- Department of ET&T Engineering, Government Engineering College, Jagdalpur, CG, India
- K. Vijaykumar
- Department of Computer Science & Engineering, St. Joseph's Institute of Technology, India
105
Davila Delgado JM, Oyedele L. Deep learning with small datasets: using autoencoders to address limited datasets in construction management. Appl Soft Comput 2021. [DOI: 10.1016/j.asoc.2021.107836]
106
Wang L, You ZH, Li JQ, Huang YA. IMS-CDA: Prediction of CircRNA-Disease Associations From the Integration of Multisource Similarity Information With Deep Stacked Autoencoder Model. IEEE Transactions on Cybernetics 2021; 51:5522-5531. [PMID: 33027025] [DOI: 10.1109/tcyb.2020.3022852]
Abstract
Emerging evidence indicates that circular RNA (circRNA) plays an indispensable role in the pathogenesis of human complex diseases and in many critical biological processes. Using circRNA as a molecular marker or therapeutic target opens a new avenue for the treatment and detection of human complex diseases. Traditional biological experiments, however, are usually limited in scale and time consuming, so the development of an effective and feasible computational approach for predicting circRNA-disease associations is increasingly favored. In this study, we propose a new computational method, called IMS-CDA, to predict potential circRNA-disease associations based on multisource biological information. More specifically, IMS-CDA combines information from disease semantic similarity and the Jaccard and Gaussian interaction profile kernel similarities of disease and circRNA, and extracts the hidden features using the stacked autoencoder (SAE) algorithm of deep learning. After training with the rotation forest (RF) classifier, IMS-CDA achieves 88.08% area under the ROC curve with 88.36% accuracy at a sensitivity of 91.38% on the CIRCR2Disease dataset. Compared with state-of-the-art support vector machine and K-nearest neighbor models and different descriptor models, IMS-CDA achieves the best overall performance. In the case studies, eight of the top 15 circRNA-disease associations with the highest prediction scores were confirmed by recent literature. These results indicate that IMS-CDA has an outstanding ability to predict new circRNA-disease associations and can provide reliable candidates for biological experiments.
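The Gaussian interaction profile (GIP) kernel named in the abstract has a compact definition: two circRNAs (or diseases) are similar when their binary association profiles are close under a Gaussian whose bandwidth is scaled to the data. A numpy sketch following the common convention (the normalization constant here is an assumption, not necessarily the paper's exact choice):

```python
import numpy as np

def gip_kernel(profiles, bandwidth=1.0):
    """Gaussian interaction-profile (GIP) kernel similarity.

    profiles: binary matrix with one association row per entity.
    gamma is normalised by the mean squared row norm, as is customary.
    """
    P = np.asarray(profiles, dtype=float)
    gamma = bandwidth / np.mean(np.sum(P ** 2, axis=1))
    sq = np.sum((P[:, None, :] - P[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * sq)
```

Identical profiles get similarity 1, and the matrix is symmetric by construction, which is what downstream feature-extraction stages such as a stacked autoencoder expect.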
107
Lagree A, Shiner A, Alera MA, Fleshner L, Law E, Law B, Lu FI, Dodington D, Gandhi S, Slodkowska EA, Shenfield A, Jerzak KJ, Sadeghi-Naini A, Tran WT. Assessment of Digital Pathology Imaging Biomarkers Associated with Breast Cancer Histologic Grade. Curr Oncol 2021; 28:4298-4316. [PMID: 34898544] [PMCID: PMC8628688] [DOI: 10.3390/curroncol28060366]
Abstract
Background: Evaluating histologic grade for breast cancer diagnosis is standard and associated with prognostic outcomes. Current challenges include the time required for manual microscopic evaluation and interobserver variability. This study proposes a computer-aided diagnostic (CAD) pipeline for grading tumors using artificial intelligence. Methods: There were 138 patients included in this retrospective study. Breast core biopsy slides were prepared using standard laboratory techniques, digitized, and pre-processed for analysis. Deep convolutional neural networks (CNNs) were developed to identify the regions of interest containing malignant cells and to segment tumor nuclei. Imaging-based features associated with spatial parameters were extracted from the segmented regions of interest (ROIs). Clinical datasets and pathologic biomarkers (estrogen receptor, progesterone receptor, and human epidermal growth factor receptor 2) were collected from all study subjects. Pathologic, clinical, and imaging-based features were input into machine learning (ML) models to classify histologic grade, and model performances were tested against ground-truth labels at the patient level. Classification performances were evaluated using receiver-operating characteristic (ROC) analysis. Results: Multiparametric feature sets, containing both clinical and imaging-based features, demonstrated high classification performance. Using imaging-derived markers alone, the classification performance demonstrated an area under the curve (AUC) of 0.745, while modeling these features with other pathologic biomarkers yielded an AUC of 0.836. Conclusion: These results demonstrate an association between tumor nuclear spatial features and tumor grade. If further validated, these systems may be implemented into pathology CAD systems and can help pathologists expeditiously grade tumors at the time of diagnosis and guide clinical decisions.
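The AUC values reported here can be computed without tracing the full ROC curve: the area equals the probability that a randomly chosen positive case outscores a randomly chosen negative one. A small self-contained implementation of that rank statistic (a generic sketch, not the study's evaluation code):

```python
import numpy as np

def roc_auc(labels, scores):
    """Area under the ROC curve via the rank (Mann-Whitney) statistic."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Probability that a random positive outscores a random negative,
    # counting ties as one half.
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)
```

For example, `roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` returns 0.75, since three of the four positive/negative pairs are ranked correctly.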
Affiliation(s)
- Andrew Lagree
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Biological Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Temerty Centre for AI Research and Education, University of Toronto, Toronto, ON M5S 1A8, Canada
- Radiogenomics Laboratory, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Audrey Shiner
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Biological Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Radiogenomics Laboratory, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Marie Angeli Alera
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Biological Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Radiogenomics Laboratory, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Lauren Fleshner
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Biological Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Radiogenomics Laboratory, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Ethan Law
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Biological Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Radiogenomics Laboratory, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Brianna Law
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Biological Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Radiogenomics Laboratory, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Fang-I Lu
- Radiogenomics Laboratory, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Department of Laboratory Medicine and Molecular Diagnostics, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- David Dodington
- Department of Laboratory Medicine and Molecular Diagnostics, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Sonal Gandhi
- Radiogenomics Laboratory, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Division of Medical Oncology, Department of Medicine, University of Toronto, Toronto, ON M5S 3H2, Canada
- Elzbieta A. Slodkowska
- Department of Laboratory Medicine and Molecular Diagnostics, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Alex Shenfield
- Department of Engineering and Mathematics, Sheffield Hallam University, Howard St, Sheffield S1 1WB, UK
- Katarzyna J. Jerzak
- Division of Medical Oncology, Department of Medicine, University of Toronto, Toronto, ON M5S 3H2, Canada
- Ali Sadeghi-Naini
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Department of Electrical Engineering and Computer Science, York University, Toronto, ON M3J 2S5, Canada
- William T. Tran
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Biological Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Temerty Centre for AI Research and Education, University of Toronto, Toronto, ON M5S 1A8, Canada
- Radiogenomics Laboratory, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, ON M5T 1P5, Canada
- Correspondence: Tel.: +1-416-480-6100 (ext. 3746)
108
Kaur J, Kaur P. Outbreak COVID-19 in Medical Image Processing Using Deep Learning: A State-of-the-Art Review. Archives of Computational Methods in Engineering 2021; 29:2351-2382. [PMID: 34690493] [PMCID: PMC8525064] [DOI: 10.1007/s11831-021-09667-7]
Abstract
Since December 2019, the outbreak of Coronavirus disease (COVID-19) has caused many deaths and affected every aspect of individual health, and COVID-19 has been designated a pandemic by the World Health Organization. The situation has placed serious strain on health systems worldwide and demands time-critical responses, with the number of positive COVID-19 cases increasing globally every day. The quantity of available diagnostic kits is limited because of the complications in detecting the illness, so fast and accurate diagnosis of COVID-19 is a timely requirement for preventing and controlling the pandemic through suitable isolation and medicinal treatment. The present work outlines deep learning techniques applied to medical imaging for outbreak prediction, indications of virus transmission, detection and treatment, and vaccine and remedy research. Abundant medical imaging resources, such as X-rays, computed tomography scans, and magnetic resonance imaging, allow high-quality deep learning methods to be brought to bear against the COVID-19 pandemic. The review presents a comprehensive picture of deep learning and its healthcare applications over the past decade. Finally, issues and challenges in controlling the health crisis and outbreaks are introduced, and the problems faced by radiologists during medical imaging, together with deep learning approaches for diagnosing COVID-19 infections, are discussed.
Affiliation(s)
- Jaspreet Kaur
- Department of Computer Engineering & Technology, Guru Nanak Dev University, Amritsar, Punjab, India
- Prabhpreet Kaur
- Department of Computer Engineering & Technology, Guru Nanak Dev University, Amritsar, Punjab, India
109
Zhou X, Gu M, Cheng Z. Local Integral Regression Network for Cell Nuclei Detection. Entropy 2021; 23:1336. [PMID: 34682060] [PMCID: PMC8535160] [DOI: 10.3390/e23101336]
Abstract
Nuclei detection is a fundamental task in histopathology image analysis and remains challenging due to cellular heterogeneity. Recent studies explore convolutional neural networks to either delineate nuclei with sophisticated boundaries (segmentation-based methods) or locate their centroids (counting-based approaches). Although both approaches have demonstrated considerable success, their fully supervised training demands extensive, laborious pixel-wise annotations manually labeled by pathology experts. To alleviate this tedious effort and reduce the annotation cost, we propose a novel local integral regression network (LIRNet) that supports both fully and weakly supervised learning (FSL/WSL) frameworks for nuclei detection. Furthermore, LIRNet outputs an exquisite density map of nuclei in which the localization of each nucleus is barely affected by post-processing algorithms. Quantitative experiments demonstrate that the FSL version of LIRNet achieves state-of-the-art performance compared to other counterparts, while the WSL version exhibits competitive detection performance while requiring only 17.5% of the annotation effort.
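Reading detections off a predicted density map, as described above, typically means thresholded local-maximum extraction. A minimal sketch (not LIRNet's actual post-processing; the 8-neighbour rule and the threshold are illustrative assumptions):

```python
import numpy as np

def detect_nuclei(density, threshold=0.5):
    """Read nucleus locations off a predicted density map.

    A pixel counts as a detection when it exceeds all 8 neighbours and
    is at least `threshold`; returns (row, col) tuples.
    """
    d = np.asarray(density, dtype=float)
    padded = np.pad(d, 1, constant_values=-np.inf)
    h, w = d.shape
    # Stack the 8 shifted views so each pixel sees its neighbourhood.
    neighbours = np.stack([padded[i:i + h, j:j + w]
                           for i in range(3) for j in range(3)
                           if (i, j) != (1, 1)])
    peaks = (d > neighbours.max(axis=0)) & (d >= threshold)
    return [tuple(p) for p in np.argwhere(peaks)]
```

The threshold suppresses spurious low-density maxima, which is one reason a sharp density map (as LIRNet aims to produce) makes localization robust to post-processing.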
110
A Pyramid Architecture-Based Deep Learning Framework for Breast Cancer Detection. BioMed Research International 2021; 2021:2567202. [PMID: 34631877] [PMCID: PMC8500767] [DOI: 10.1155/2021/2567202]
Abstract
Breast cancer diagnosis is a critical step in clinical decision making: a pathological slide is prepared and the final diagnostic decision is made by doctors, traditionally by visual inspection under the microscope. Whole-slide images (WSIs) support state-of-the-art diagnosis and have been accepted clinically as the gold standard. However, their review is time-consuming and labour-intensive, which limits the efficiency of decision making. Medical image processing methods have been applied to this task over the last decades and have obtained satisfactory results under some conditions; the deep learning era in particular has shown advantages over the earlier shallow-learning period. In this paper, we propose a novel breast cancer region mining framework based on a deep pyramid architecture operating on multilevel, multiscale breast pathological WSIs. We combine tissue- and cell-level information and integrate both into an LSTM model for the final sequence modelling, which preserves the integrity of the WSIs, a property not addressed by prevailing frameworks. The experimental results demonstrate that our proposed framework greatly improves detection accuracy over using tissue-level information alone.
111
Oyelade ON, Ezugwu AE. A bioinspired neural architecture search based convolutional neural network for breast cancer detection using histopathology images. Sci Rep 2021; 11:19940. [PMID: 34620891] [PMCID: PMC8497552] [DOI: 10.1038/s41598-021-98978-7]
Abstract
The design of neural architectures to detect abnormalities in histopathology images can leverage the gains made in the field of neural architecture search (NAS). A NAS model consists of a search space, a search strategy, and an evaluation strategy, and supports the automation of deep learning (DL) networks such as convolutional neural networks (CNNs). Automating CNN architecture engineering in this way allows the best-performing network for a specific domain and dataset to be found. However, the engineering process of NAS is often limited by the potential solutions in the search space and by the search strategy, which narrows the possibility of obtaining the best-performing networks for challenging tasks such as the classification of breast cancer in digital histopathological samples. This study proposes a NAS model with a novel search-space initialization algorithm and a new search strategy. We designed a block-based stochastic categorical-to-binary (BSCB) algorithm for generating potential CNN solutions in the search space, and applied and investigated a new bioinspired optimization algorithm, the Ebola optimization search algorithm (EOSA), as the search strategy. The evaluation strategy computes the loss function, architectural latency, and accuracy. Results obtained on images from the BACH and BreakHis databases show that our approach finds well-performing architectures, with the top-5 architectures yielding a significant detection rate and the top-1 CNN architecture demonstrating state-of-the-art classification accuracy. The NAS strategy applied in this study and the resulting candidate architectures provide researchers with suitable network configurations for digital histopathology.
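The three NAS components named above (search space, search strategy, evaluation strategy) can be made concrete with the simplest possible baseline: random search over a toy space. Everything here, the space, the score, and the loop, is a hypothetical stand-in for the paper's BSCB initialization and EOSA search:

```python
import random

# Hypothetical toy search space over CNN hyperparameters.
SEARCH_SPACE = {"blocks": [2, 4, 6], "filters": [16, 32, 64], "kernel": [3, 5]}

def sample_architecture(rng):
    """Search-space sampling: pick one option per architectural choice."""
    return {name: rng.choice(options) for name, options in SEARCH_SPACE.items()}

def evaluate(arch):
    """Stand-in evaluation strategy: reward capacity, penalise latency.

    A real NAS run would train the candidate and measure accuracy/latency.
    """
    accuracy_proxy = arch["blocks"] * arch["filters"]
    latency_proxy = arch["blocks"] * arch["kernel"]
    return accuracy_proxy - 2 * latency_proxy

def random_search(trials=100, seed=7):
    """Search strategy: sample candidates and keep the best-scoring one."""
    rng = random.Random(seed)
    candidates = [sample_architecture(rng) for _ in range(trials)]
    return max(candidates, key=evaluate)
```

Bioinspired strategies such as EOSA replace the blind sampling loop with population-based updates, but the space/strategy/evaluation decomposition stays the same.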
Affiliation(s)
- Olaide N. Oyelade
- School of Mathematics, Statistics, and Computer Science, University of KwaZulu-Natal, King Edward Avenue, Pietermaritzburg Campus, Pietermaritzburg, KwaZulu-Natal, 3201, South Africa
- Absalom E. Ezugwu
- School of Mathematics, Statistics, and Computer Science, University of KwaZulu-Natal, King Edward Avenue, Pietermaritzburg Campus, Pietermaritzburg, KwaZulu-Natal, 3201, South Africa
112
Autoencoder-based detection of the residues involved in G protein-coupled receptor signaling. Sci Rep 2021; 11:19867. [PMID: 34615896] [PMCID: PMC8494915] [DOI: 10.1038/s41598-021-99019-z]
Abstract
Regulator binding and mutations alter protein dynamics. The transmission of these alterations to distant sites through protein motion results in changes in protein expression and cell function. Detecting the residues involved in signal transmission contributes to elucidating mechanisms ranging from cellular function to disease pathogenesis. We developed an autoencoder (AE) based method that detects residues essential for signaling by comparing fluctuation data, particularly the time fluctuation of side-chain distances between residues during molecular dynamics simulations, between the ligand-bound and -unbound forms or the wild-type and mutant forms of proteins. Here, the AE-based method was applied to the G protein-coupled receptor (GPCR) system, particularly the class A GPCR CXCR4, to detect the essential residues involved in signaling. Among the residues involved in the signaling of the homolog CXCR2, extracted from the literature based on the complex structures of the ligand and G protein, our method detected more than half of the essential residues involved in G protein signaling, including those spanning the fifth and sixth transmembrane helices in the intracellular region, despite the lack of information about the interaction with the G protein in our CXCR4 models.
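Before any autoencoder enters the picture, the raw input of such an analysis is the fluctuation of inter-residue distances across simulation frames. A numpy sketch of comparing two ensembles this way (an illustrative simplification using one point per residue, not the paper's side-chain features or its AE model):

```python
import numpy as np

def fluctuation_shift(traj_a, traj_b):
    """Per-residue change in distance fluctuation between two ensembles.

    traj_*: arrays of shape (frames, residues, 3). For each trajectory we
    take the standard deviation over frames of every inter-residue
    distance, then average the absolute difference per residue.
    """
    def dist_std(traj):
        diff = traj[:, :, None, :] - traj[:, None, :, :]
        dists = np.linalg.norm(diff, axis=-1)   # (frames, R, R)
        return dists.std(axis=0)                # (R, R)

    return np.abs(dist_std(traj_a) - dist_std(traj_b)).mean(axis=1)
```

Residues whose distance fluctuations change most between, say, the ligand-bound and unbound ensembles are the candidates such a method would flag for involvement in signal transmission.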
113
Gupta V, Vasudev M, Doegar A, Sambyal N. Breast cancer detection from histopathology images using modified residual neural networks. Biocybern Biomed Eng 2021. [DOI: 10.1016/j.bbe.2021.08.011]
114
Abousamra S, Belinsky D, Van Arnam J, Allard F, Yee E, Gupta R, Kurc T, Samaras D, Saltz J, Chen C. Multi-Class Cell Detection Using Spatial Context Representation. Proceedings of the IEEE International Conference on Computer Vision (ICCV) 2021; 2021:3985-3994. [PMID: 38783989] [PMCID: PMC11114143] [DOI: 10.1109/iccv48922.2021.00397]
Abstract
In digital pathology, both detection and classification of cells are important for automatic diagnostic and prognostic tasks. Classifying cells into subtypes, such as tumor cells, lymphocytes, or stromal cells, is particularly challenging. Existing methods focus on the morphological appearance of individual cells, whereas in practice pathologists often infer cell classes through their spatial context. In this paper, we propose a novel method for both detection and classification that explicitly incorporates spatial contextual information. We use spatial statistical functions to describe local density in both a multi-class and a multi-scale manner. Through representation learning and deep clustering techniques, we learn cell representations that capture both appearance and spatial context. On various benchmarks, our method outperforms the state of the art, especially on the classification task. We also create a new dataset for multi-class cell detection and classification in breast cancer, and we make both our code and data publicly available.
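A multi-class, multi-scale local-density descriptor of the kind the abstract alludes to (in the spirit of Ripley's K-function) can be written directly. The function name and output layout here are illustrative assumptions, not the paper's released code:

```python
import numpy as np

def spatial_context_features(points, labels, radii):
    """Multi-class, multi-scale local density descriptor per cell.

    feature[i, c, r] counts neighbours of cell i belonging to class c
    within radius radii[r] (the cell itself is excluded).
    """
    pts = np.asarray(points, dtype=float)
    labels = np.asarray(labels)
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)   # a cell is not its own neighbour
    classes = np.unique(labels)
    feats = np.zeros((len(pts), len(classes), len(radii)), dtype=int)
    for ci, c in enumerate(classes):
        in_class = labels == c
        for ri, r in enumerate(radii):
            feats[:, ci, ri] = ((dists <= r) & in_class[None, :]).sum(axis=1)
    return feats
```

Stacking such counts over classes and radii gives each cell a context vector that a classifier can use alongside appearance features, which is the core intuition behind spatial-context cell classification.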
Affiliation(s)
- Eric Yee
- Stony Brook University, Stony Brook, NY 11794, USA
- Tahsin Kurc
- Stony Brook University, Stony Brook, NY 11794, USA
- Joel Saltz
- Stony Brook University, Stony Brook, NY 11794, USA
- Chao Chen
- Stony Brook University, Stony Brook, NY 11794, USA
115
Tan J, Xia D, Dong S, Zhu H, Xu B. Research on Pre-Training Method and Generalization Ability of Big Data Recognition Model of the Internet of Things. ACM Transactions on Asian and Low-Resource Language Information Processing 2021. [DOI: 10.1145/3433539]
Abstract
The Internet of Things and big data are currently hot concepts and research fields. The mining, classification, and recognition of big data in Internet of Things systems are key steps of wide current concern. The artificial neural network is well suited to multi-dimensional data classification and recognition because of its strong feature-extraction and self-learning abilities. Pre-training is an effective method to address the gradient-diffusion problem in deep neural networks and can result in better generalization. This article focuses on the performance of supervised pre-training that uses labelled data. In particular, this pre-training procedure simulates how judgment patterns progress from primary to mature within the human brain. In this article, the state of the art of neural network pre-training is reviewed. Then, the principles of the auto-encoder and supervised pre-training are introduced in detail. Furthermore, an extended structure of supervised pre-training is proposed. A set of experiments is carried out to compare the performance of different pre-training methods. These experiments include a comparison between the original and pre-trained networks as well as a comparison between networks with two types of sub-network structures. In addition, a homemade database is established to analyze the influence of pre-training on the generalization ability of neural networks. Finally, an ordinary convolutional neural network is used to verify the applicability of supervised pre-training.
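As an illustration of the pre-training idea (not the article's own code), a one-unit linear auto-encoder can be trained to reconstruct its input; the learned encoder weight could then initialise a layer of a supervised network. All names and values below are illustrative:

```python
def pretrain_autoencoder(data, lr=0.01, epochs=200):
    """Greedy pre-training in miniature: a 1-unit linear
    auto-encoder, h = w*x (encoder), x_hat = v*h (decoder),
    trained by gradient descent to reconstruct x."""
    w, v = 0.1, 0.1
    for _ in range(epochs):
        for x in data:
            h = w * x
            x_hat = v * h
            err = x_hat - x
            # gradients of the loss 0.5 * err**2 w.r.t. w and v
            gw, gv = err * v * x, err * h
            w -= lr * gw
            v -= lr * gv
    return w, v

data = [1.0, -0.5, 2.0, 0.7]
w, v = pretrain_autoencoder(data)
recon_err = sum((v * w * x - x) ** 2 for x in data)
print("reconstruction error:", round(recon_err, 6))
```

After pre-training, the product w*v is close to 1, i.e. the auto-encoder has learned an identity-like reconstruction; the encoder weight w would seed the supervised fine-tuning stage.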
Affiliation(s)
- Junyang Tan
- National Key Laboratory for Remanufacturing, Beijing, China and The Department of 63926 Troops, Beijing, China
- Dan Xia
- National Key Laboratory for Remanufacturing, Beijing, China
- Shiyun Dong
- National Key Laboratory for Remanufacturing, Beijing, China
- Honghao Zhu
- National Key Laboratory for Remanufacturing, Beijing, China
- Binshi Xu
- National Key Laboratory for Remanufacturing, Beijing, China
116
Song J, Xiao L, Molaei M, Lian Z. Sparse Coding Driven Deep Decision Tree Ensembles for Nucleus Segmentation in Digital Pathology Images. IEEE Trans Image Process 2021; 30:8088-8101. [PMID: 34534088] [DOI: 10.1109/tip.2021.3112057]
Abstract
Automating generalized nucleus segmentation has proven to be non-trivial and challenging in digital pathology. Most existing techniques in the field rely either on deep neural networks or on shallow learning-based cascading models. The former lacks theoretical understanding and tends to degrade in performance when only limited amounts of training data are available, while the latter often generalizes poorly. To address these issues, we propose sparse coding driven deep decision tree ensembles (ScD2TE), an easily trained yet powerful representation learning approach with performance highly competitive with deep neural networks on the generalized nucleus segmentation task. We explore the possibility of stacking several layers based on fast convolutional sparse coding-decision tree ensemble pairwise modules and generate a layer-wise encoder-decoder architecture with intra-decoder and inter-encoder dense connectivity patterns. Under this architecture, all the encoders share the same assumption across the different layers to represent images and interact with their decoders to give fast convergence. Compared with deep neural networks, our proposed ScD2TE does not require back-propagation computation and depends on fewer hyper-parameters. ScD2TE achieves fast end-to-end pixel-wise training in a layer-wise manner. We demonstrated the superiority of our segmentation method by evaluating it on a multi-disease-state and multi-organ dataset, where consistently higher performance was obtained in comparison with other state-of-the-art deep learning techniques and cascading methods with various connectivity patterns.
117
A Histogram-Based Low-Complexity Approach for the Effective Detection of COVID-19 Disease from CT and X-ray Images. Applied Sciences (Basel) 2021. [DOI: 10.3390/app11198867]
Abstract
The global COVID-19 pandemic has certainly posed one of the more difficult challenges for researchers in the current century. The development of an automatic diagnostic tool able to detect the disease in its early stage could undoubtedly offer a great advantage in the battle against the pandemic. In this regard, most research efforts have focused on the application of Deep Learning (DL) techniques to chest images, including traditional chest X-rays (CXRs) and Computed Tomography (CT) scans. Although these approaches have demonstrated their effectiveness in detecting COVID-19 disease, they have high computational complexity and require large datasets for training. In addition, large amounts of COVID-19 CXRs and CT scans may not be available to researchers. To this end, in this paper, we propose an approach based on the evaluation of the histogram from a common class of images that is considered as the target. A suitable inter-histogram distance measures how far this target histogram is from the histogram evaluated on a test image: if this distance is greater than a threshold, the test image is labeled as an anomaly, i.e., the scan belongs to a patient affected by COVID-19 disease. Extensive experimental results and comparisons with benchmark state-of-the-art methods support the effectiveness of the developed approach and demonstrate that, at least when the images of the considered datasets are homogeneous enough (i.e., only a few outliers are present), it is not really necessary to resort to complex-to-implement DL techniques to attain an effective detection of COVID-19 disease. Despite the simplicity of the proposed approach, all the considered metrics (i.e., accuracy, precision, recall, and F-measure) attain a value of 1.0 on the selected datasets, a result comparable to the corresponding state-of-the-art DNN approaches, but with remarkable computational simplicity.
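The approach is simple enough to sketch in a few lines of pure Python (a hypothetical minimal version, not the authors' implementation): build a normalised histogram for the target class, then flag a test image whose histogram distance exceeds a threshold:

```python
def grey_histogram(pixels, bins=8, max_val=256):
    """Normalised grey-level histogram of a flat pixel list."""
    h = [0] * bins
    for p in pixels:
        h[p * bins // max_val] += 1
    n = len(pixels)
    return [c / n for c in h]

def l1_distance(h1, h2):
    """A simple inter-histogram distance (sum of absolute differences)."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def is_anomaly(test_pixels, target_hist, threshold=0.5):
    """Flag the scan when its histogram is far from the target
    (normal-class) histogram."""
    return l1_distance(grey_histogram(test_pixels), target_hist) > threshold

# target histogram built from a 'normal' image; a much darker image is flagged
normal = [120, 130, 125, 140, 135, 128]
dark = [10, 20, 15, 5, 25, 12]
target = grey_histogram(normal)
print(is_anomaly(normal, target), is_anomaly(dark, target))   # → False True
```

The normal image matches its own class histogram, while the dark image lands in a different histogram bin entirely and is flagged as an anomaly.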
118
Takeshima H. Deep Learning and Its Application to Function Approximation for MR in Medicine: An Overview. Magn Reson Med Sci 2021; 21:553-568. [PMID: 34544924] [DOI: 10.2463/mrms.rev.2021-0040]
Abstract
This article presents an overview of deep learning (DL) and its applications to function approximation for MR in medicine. The aim of this article is to help readers develop various applications of DL. DL has made a large impact on the literature of many medical sciences, including MR. However, its technical details are not easily understandable for non-experts in machine learning (ML). The first part of this article presents an overview of DL and its related technologies, such as artificial intelligence (AI) and ML. AI is explained as a function that can receive many inputs and produce many outputs. ML is a process of fitting the function to training data. DL is a kind of ML which uses a composite of many functions to approximate the function of interest. This composite function is called a deep neural network (DNN), and the functions composited into a DNN are called layers. This first part also covers the underlying technologies required for DL, such as loss functions, optimization, initialization, linear layers, non-linearities, normalization, recurrent neural networks, regularization, data augmentation, residual connections, autoencoders, generative adversarial networks, model and data sizes, and complex-valued neural networks. The second part of this article presents an overview of the applications of DL in MR and explains how functions represented as DNNs are applied to various applications, such as RF pulse, pulse sequence, reconstruction, motion correction, spectroscopy, parameter mapping, image synthesis, and segmentation.
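The "composite of functions" view of a DNN described above can be made concrete in a toy sketch (illustrative only; the weights, shapes, and names are arbitrary, not from the article):

```python
def linear(w, b):
    """A linear layer as a function: x -> w @ x + b (lists, no libraries)."""
    return lambda x: [sum(wi * xi for wi, xi in zip(row, x)) + bi
                      for row, bi in zip(w, b)]

def relu(x):
    """A non-linearity, also just a function."""
    return [max(0.0, v) for v in x]

# a DNN is the composite of its layers, exactly as the overview describes
layer1 = linear([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0])
layer2 = linear([[1.0, 1.0]], [0.1])
dnn = lambda x: layer2(relu(layer1(x)))

print(dnn([2.0, 1.0]))
```

ML, in this picture, is fitting the numbers inside `layer1` and `layer2` to training data; DL is doing so with many such composed layers.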
Affiliation(s)
- Hidenori Takeshima
- Advanced Technology Research Department, Research and Development Center, Canon Medical Systems Corporation
119
Zhou J, Zhang Q, Zhang B. Two-phase non-invasive multi-disease detection via sublingual region. Comput Biol Med 2021; 137:104782. [PMID: 34520987] [DOI: 10.1016/j.compbiomed.2021.104782]
Abstract
Non-invasive multi-disease detection is an active technology that detects human diseases automatically. By observing images of the human body, computers can make inferences on disease detection based on artificial intelligence and computer vision techniques. The sublingual vein, lying on the lower part of the human tongue, is a critical identifier in non-invasive multi-disease detection, reflecting health status. However, few studies have fully investigated non-invasive multi-disease detection via the sublingual vein using a quantitative method. In this paper, a two-phase sublingual-based disease detection framework for non-invasive multi-disease detection is proposed. In this framework, sublingual vein region segmentation is performed on each image in the first phase to obtain the region with the highest probability of covering the sublingual vein. In the second phase, features in this region are extracted, and multi-class classification is applied to these features to output a detection result. To better characterise the obtained sublingual vein region, multi-feature representations of it were generated (based on color, texture, shape, and latent representation). The effectiveness of sublingual-based multi-disease detection was quantitatively evaluated; the proposed framework was assessed on 1103 sublingual vein images from patients in different health status categories. The best multi-feature representation, generated from color, texture, and latent representation features, achieved the highest accuracy of 98.05%.
Affiliation(s)
- Jianhang Zhou
- PAMI Research Group, Dept. of Computer and Information Science, University of Macau, Taipa, Macau, China; Shenzhen Research Institute of Big Data, Shenzhen, 518172, China
- Qi Zhang
- PAMI Research Group, Dept. of Computer and Information Science, University of Macau, Taipa, Macau, China
- Bob Zhang
- PAMI Research Group, Dept. of Computer and Information Science, University of Macau, Taipa, Macau, China
120

121
Huang P, Tan X, Zhou X, Liu S, Mercaldo F, Santone A. FABNet: Fusion Attention Block and Transfer Learning for Laryngeal Cancer Tumor Grading in P63 IHC Histopathology Images. IEEE J Biomed Health Inform 2021; 26:1696-1707. [PMID: 34469320] [DOI: 10.1109/jbhi.2021.3108999]
Abstract
Laryngeal cancer tumor (LCT) grading is a challenging task in P63 immunohistochemical (IHC) histopathology images due to the small differences between LCT levels in pathology images, the lack of precision in lesion regions of interest (LROIs), and the paucity of LCT pathology image samples. The key to solving the LCT grading problem is to transfer knowledge from other images and to identify more accurate LROIs, but two problems arise: 1) transferring knowledge without a priori experience often causes negative transfer and creates a heavy workload due to the abundance of image types, and 2) convolutional neural networks (CNNs) built by stacking layers cannot sufficiently identify LROIs, often deviate significantly from the LROIs focused on by experienced pathologists, and are prone to providing misleading second opinions. We therefore propose a novel fusion attention block network (FABNet) to address these problems. First, we propose a model transfer method based on clinical a priori experience and sample analysis (CPESA) that analyzes transferability by integrating clinical a priori experience using indicators such as the relationship between the cancer onset location and morphology and the texture and staining degree of cell nuclei in histopathology images; our method further validates these indicators through the probability distribution of cancer image samples. Then, we propose a fusion attention block (FAB) structure, which both provides an advanced non-uniform sparse representation of images and extracts spatial relationship information between nuclei; consequently, the LROIs can be more accurate and more relevant to pathologists. We conducted extensive experiments: compared with the best baseline model, classification accuracy improves by 25%, and FABNet is shown to perform better on different cancer pathology image datasets, outperforming other state-of-the-art (SOTA) models.
122
Wan Y, Yang P, Xu L, Yang J, Luo C, Wang J, Chen F, Wu Y, Lu Y, Ruan D, Niu T. Radiomics analysis combining unsupervised learning and handcrafted features: A multiple-disease study. Med Phys 2021; 48:7003-7015. [PMID: 34453332] [DOI: 10.1002/mp.15199]
Abstract
PURPOSE To study and investigate the synergistic benefit of incorporating both conventional handcrafted and learning-based features in disease identification across a wide range of clinical setups. METHODS AND MATERIALS In this retrospective study, we collected 170, 150, 209, and 137 patients with four different disease types and associated identification objectives: lymph node metastasis status of gastric cancer (GC), 5-year survival status of patients with high-grade osteosarcoma (HOS), early recurrence status of intrahepatic cholangiocarcinoma (ICC), and pathological grades of pancreatic neuroendocrine tumors (pNETs). Computed tomography (CT) and magnetic resonance imaging (MRI) were used to derive image features for GC/HOS/pNETs and ICC, respectively. In each study, 67 universal handcrafted features and study-specific features based on the sparse autoencoder (SAE) method were extracted and fed into the subsequent feature selection and learning model to predict the corresponding disease identification. Models using handcrafted alone, SAE alone, and hybrid features were optimized and their performance was compared. Prominent features were analyzed both qualitatively and quantitatively to generate study-specific and cross-study insight. In addition to direct performance gain assessment, correlation analysis was performed to assess the complementarity between handcrafted features and SAE features. RESULTS On the independent held-out test, predictions based on handcrafted, SAE, and hybrid features yielded areas under the curve of 0.761 versus 0.769 versus 0.829 for GC, 0.629 versus 0.740 versus 0.709 for HOS, 0.717 versus 0.718 versus 0.758 for ICC, and 0.739 versus 0.715 versus 0.771 for pNETs, respectively. In three out of the four studies, prediction using the hybrid features yielded the best performance, demonstrating the general benefit of using hybrid features. Prediction with SAE features alone had the best performance in the HOS study, which may be explained by the complexity of HOS prognosis and the possibility of a slight overfit due to higher correlation between handcrafted and SAE features. CONCLUSION This study demonstrated the general benefit of combining handcrafted and learning-based features in radiomics modeling. It also clearly illustrates the task-specific and data-specific dependency of the performance gain and suggests that while the common methodology of feature combination may be applied across various studies and tasks, study-specific feature selection and model optimization are still necessary to achieve high accuracy and robustness.
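The correlation analysis mentioned above can be illustrated with plain Pearson correlation between a handcrafted feature and a learned one (toy numbers, not the study's data): a weak correlation suggests the two features carry complementary information and are worth fusing, whereas a strong one hints at redundancy and possible overfitting:

```python
def pearson(a, b):
    """Pearson correlation coefficient of two equal-length feature vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

handcrafted = [1.0, 2.0, 3.0, 4.0]           # e.g. a texture feature across 4 patients
sae_feature = [0.9, 1.4, 0.2, 1.1]           # weakly related: a good fusion candidate
print(round(pearson(handcrafted, handcrafted), 3),
      round(pearson(handcrafted, sae_feature), 3))
```

A feature is perfectly correlated with itself (1.0); the low cross-correlation here is what the hybrid-feature argument relies on.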
Affiliation(s)
- Yidong Wan
- Institute of Translational Medicine, Zhejiang University, Hangzhou, China
- Pengfei Yang
- College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
- Lei Xu
- Institute of Translational Medicine, Zhejiang University, Hangzhou, China
- Jing Yang
- Institute of Translational Medicine, Zhejiang University, Hangzhou, China
- Chen Luo
- Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, China
- Jing Wang
- Institute of Translational Medicine, Zhejiang University, Hangzhou, China; Department of Radiation Oncology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Feng Chen
- Department of Radiology, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Yan Wu
- Department of Orthopaedics, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Yun Lu
- The Affiliated Hospital of Qingdao University, Qingdao, China
- Dan Ruan
- Department of Radiation Oncology, University of California-Los Angeles, Los Angeles, California, USA
- Tianye Niu
- Nuclear & Radiological Engineering and Medical Physics Programs, Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
123
Li GY, Wang CY, Lv J. Current status of deep learning in abdominal image reconstruction. Artif Intell Med Imaging 2021; 2:86-94. [DOI: 10.35711/aimi.v2.i4.86]
Affiliation(s)
- Guang-Yuan Li
- School of Computer and Control Engineering, Yantai University, Yantai 264000, Shandong Province, China
- Cheng-Yan Wang
- Human Phenome Institute, Fudan University, Shanghai 201203, China
- Jun Lv
- School of Computer and Control Engineering, Yantai University, Yantai 264000, Shandong Province, China
124
Zhang B, Zhou J. Multi-feature representation for burn depth classification via burn images. Artif Intell Med 2021; 118:102128. [PMID: 34412845] [DOI: 10.1016/j.artmed.2021.102128]
Abstract
Burns are a common and severe public health problem. Early and timely classification of burn depth helps patients receive targeted treatment, which can save their lives. However, identifying burn depth from burn images requires physicians to have substantial medical experience, and the speed and precision of diagnosis are not guaranteed given the high workload and cost for clinicians. Thus, smart burn depth classification methods are desired at present. In this paper, we propose a computerized method to automatically evaluate burn depth using multiple features extracted from burn images. Specifically, color features, texture features, and latent features are extracted from burn images, then concatenated together and fed to several classifiers, such as a random forest, to generate the burn level. A standard burn image dataset was evaluated by our proposed method, obtaining accuracies of 85.86% and 76.87% when classifying the burn images into two classes and three classes, respectively, outperforming conventional methods in burn depth identification. The results indicate our approach is effective and has the potential to aid medical experts in identifying different burn depths.
Affiliation(s)
- Bob Zhang
- PAMI Research Group, Department of Computer and Information Science, Faculty of Science and Technology, University of Macau, Macau
- Jianhang Zhou
- PAMI Research Group, Department of Computer and Information Science, Faculty of Science and Technology, University of Macau, Macau
125
Chen Y, Liang D, Bai X, Xu Y, Yang X. Cell Localization and Counting Using Direction Field Map. IEEE J Biomed Health Inform 2021; 26:359-368. [PMID: 34406952] [DOI: 10.1109/jbhi.2021.3105545]
Abstract
Automatic cell counting in pathology images is challenging due to blurred boundaries, low contrast, and overlapping between cells. In this paper, we train a convolutional neural network (CNN) to predict a two-dimensional direction field map and then use it to localize individual cells for counting. Specifically, we define a direction field on each pixel in the cell regions (obtained by dilating the original annotations of cell centers) as a two-dimensional unit vector pointing from the pixel to its corresponding cell center. Direction fields for adjacent pixels in different cells have opposite directions, departing from each other, while those in the same cell region point to the same center. This unique property is used to partition overlapped cells for localization and counting. To deal with blurred boundaries or low-contrast cells, we set the direction field of the background pixels to zero in the ground-truth generation. Thus, adjacent pixels belonging to cells and background will have an obvious difference in the predicted direction field. To further deal with cells of varying density and overlapping issues, we adopt a geometry-adaptive (varying) radius for cells of different densities in the generation of the ground-truth direction field map, which guides the CNN model to separate cells of different densities and overlapping cells. Extensive experimental results on three widely used datasets (i.e., the Cell, CRCHistoPhenotype2016, and MBM datasets) demonstrate the effectiveness of the proposed approach.
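The ground-truth construction described above can be sketched directly (an illustrative pure-Python version with hypothetical names and a fixed radius; the paper uses a geometry-adaptive radius and a CNN to predict the field):

```python
from math import hypot

def direction_field(h, w, centers, radius=2):
    """Ground-truth direction field: inside each cell disc, a unit
    vector pointing from the pixel to its nearest cell centre;
    zeros on the background, as prescribed above."""
    field = [[(0.0, 0.0)] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            cx, cy = min(centers, key=lambda c: hypot(c[0] - x, c[1] - y))
            d = hypot(cx - x, cy - y)
            if 0 < d <= radius:
                field[y][x] = ((cx - x) / d, (cy - y) / d)
    return field

f = direction_field(5, 5, centers=[(1, 1), (3, 3)])
print(f[1][0])   # pixel left of centre (1, 1) points right: (1.0, 0.0)
```

Pixels on opposite sides of a centre carry opposite vectors, which is exactly the discontinuity a post-processing step can exploit to split touching cells.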
126
Yu H, Zhang X, Song L, Jiang L, Huang X, Chen W, Zhang C, Li J, Yang J, Hu Z, Duan Q, Chen W, He X, Fan J, Jiang W, Zhang L, Qiu C, Gu M, Sun W, Zhang Y, Peng G, Shen W, Fu G. Large-scale gastric cancer screening and localization using multi-task deep neural network. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.03.006]
127
Chen Y, Yi Z. Adaptive sparse dropout: Learning the certainty and uncertainty in deep neural networks. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.04.047]
128
Deep learning with neighborhood preserving embedding regularization and its application for soft sensor in an industrial hydrocracking process. Inf Sci (N Y) 2021. [DOI: 10.1016/j.ins.2021.03.026]
129
A Cascade Deep Forest Model for Breast Cancer Subtype Classification Using Multi-Omics Data. Mathematics 2021. [DOI: 10.3390/math9131574]
Abstract
Automated diagnosis systems aim to reduce the cost of diagnosis while maintaining the same efficiency. Many methods have been used for breast cancer subtype classification. Some use a single data source, while others integrate many data sources, which improves accuracy at the expense of computational performance. Breast cancer data, especially biological data, is known for its imbalance, and extensive amounts of histopathological images are lacking as biological data. Recent studies have shown that the cascade Deep Forest ensemble model achieves competitive classification accuracy compared with alternatives such as general ensemble learning methods and conventional deep neural networks (DNNs), especially for imbalanced training sets, by learning hyper-representations through cascaded ensembles of decision trees. In this work, a cascade Deep Forest is employed to classify breast cancer subtypes, IntClust and Pam50, using multi-omics datasets and different configurations. The results record an accuracy of 83.45% for 5 subtypes and 77.55% for 10 subtypes. This work shows that using gene expression data alone with the cascade Deep Forest classifier achieves accuracy comparable to other techniques with higher computational performance, where the time recorded is about 5 s for 10 subtypes and 7 s for 5 subtypes.
130
Hao Y, Qiao S, Zhang L, Xu T, Bai Y, Hu H, Zhang W, Zhang G. Breast Cancer Histopathological Images Recognition Based on Low Dimensional Three-Channel Features. Front Oncol 2021; 11:657560. [PMID: 34195073] [PMCID: PMC8236881] [DOI: 10.3389/fonc.2021.657560]
Abstract
Breast cancer (BC) is the primary threat to women's health, and early diagnosis of breast cancer is imperative. Although there are many ways to diagnose breast cancer, the gold standard is still pathological examination. In this paper, a breast cancer histopathological image recognition method based on low-dimensional three-channel features is proposed to achieve fast and accurate benign/malignant recognition. Three-channel features of 10 descriptors were extracted: gray level co-occurrence matrix on one direction (GLCM1), gray level co-occurrence matrix on four directions (GLCM4), average pixel value of each channel (APVEC), Hu invariant moments (HIM), wavelet features, Tamura, completed local binary pattern (CLBP), local binary pattern (LBP), Gabor, and histogram of oriented gradients (Hog). Support vector machines (SVMs) were then used to assess their performance. Experiments on the BreaKHis dataset show that GLCM1, GLCM4 and APVEC achieved recognition accuracies of 90.2%-94.97% at the image level and 89.18%-94.24% at the patient level, better than many state-of-the-art methods, including many deep learning frameworks. The experimental results show that breast cancer recognition based on high-dimensional features increases recognition time without greatly improving recognition accuracy, whereas three-channel features enhance the recognizability of the image, achieving higher recognition accuracy than gray-level features.
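As an illustration of the GLCM1 descriptor named above (a toy sketch on a tiny quantised image, not the paper's code), a one-direction co-occurrence matrix and two standard texture statistics can be computed as:

```python
def glcm_features(img, levels=4):
    """Grey-level co-occurrence matrix on one direction (horizontal
    neighbour pairs, offset (0, 1)), then contrast and energy as
    example texture features derived from it."""
    glcm = [[0] * levels for _ in range(levels)]
    total = 0
    for row in img:
        for a, b in zip(row, row[1:]):   # co-occurring horizontal pairs
            glcm[a][b] += 1
            total += 1
    p = [[v / total for v in r] for r in glcm]   # normalise to probabilities
    contrast = sum(p[i][j] * (i - j) ** 2
                   for i in range(levels) for j in range(levels))
    energy = sum(v * v for r in p for v in r)
    return contrast, energy

# tiny 4-level image: mostly uniform patches, few level jumps
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [2, 2, 3, 3]]
print(glcm_features(img))
```

In the paper's setting the same statistics are computed per colour channel, giving the "three-channel" variant of the descriptor.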
Affiliation(s)
- Yan Hao
- School of Information and Communication Engineering, North University of China, Taiyuan, China
- Shichang Qiao
- Department of Mathematics, School of Science, North University of China, Taiyuan, China
- Li Zhang
- Department of Mathematics, School of Science, North University of China, Taiyuan, China
- Ting Xu
- Department of Mathematics, School of Science, North University of China, Taiyuan, China
- Yanping Bai
- Department of Mathematics, School of Science, North University of China, Taiyuan, China
- Hongping Hu
- Department of Mathematics, School of Science, North University of China, Taiyuan, China
- Wendong Zhang
- School of Instrument and Electronics, Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan, China
- Guojun Zhang
- School of Instrument and Electronics, Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan, China
131
Hoq M, Uddin MN, Park SB. Vocal Feature Extraction-Based Artificial Intelligent Model for Parkinson's Disease Detection. Diagnostics (Basel) 2021; 11:1076. [PMID: 34208330] [PMCID: PMC8231105] [DOI: 10.3390/diagnostics11061076]
Abstract
As a neurodegenerative disorder, Parkinson's disease (PD) affects the nerve cells of the human brain. Early detection and treatment can help to relieve the symptoms of PD. Recent PD studies have extracted features from vocal disorders as a harbinger for PD detection, as patients face vocal changes and impairments at the early stages of PD. In this study, two hybrid models based on a Support Vector Machine (SVM) integrated with a Principal Component Analysis (PCA) and a Sparse Autoencoder (SAE) are proposed to detect PD patients based on their vocal features. The first model extracted and reduced the principal components of vocal features based on the explained variance of each feature using PCA. For the first time, the second model used a novel Deep Neural Network (DNN), an SAE consisting of multiple hidden layers with L1 regularization, to compress the vocal features into a lower-dimensional latent space. In both models, the reduced features were fed into the SVM as inputs, which performed classification by learning hyperplanes while projecting the data into a higher dimension. An F1-score, a Matthews Correlation Coefficient (MCC), and a Precision-Recall curve were used, along with accuracy, to evaluate the proposed models on highly imbalanced data. With its highest accuracy of 0.935, F1-score of 0.951, and MCC value of 0.788, the results show that the proposed SAE-SVM model surpassed not only the PCA-SVM model and other standard models, including Multilayer Perceptron (MLP), Extreme Gradient Boosting (XGBoost), K-Nearest Neighbor (KNN), and Random Forest (RF), but also two recent studies using the same dataset. Oversampling and balancing the dataset with SMOTE boosted the performance of the models.
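The PCA reduction step of the first model can be sketched with power iteration for the leading principal component (illustrative pure Python with toy data; the study applies standard PCA over many vocal features before the SVM):

```python
def top_component(data, iters=100):
    """First principal component by power iteration on the covariance
    matrix, plus the 1-D projection of the centred data onto it."""
    n, d = len(data), len(data[0])
    mean = [sum(col) / n for col in zip(*data)]
    centred = [[x - m for x, m in zip(row, mean)] for row in data]
    cov = [[sum(r[i] * r[j] for r in centred) / n for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):                       # power iteration
        v = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    reduced = [[sum(r[i] * v[i] for i in range(d))] for r in centred]
    return v, reduced

# toy 2-D 'vocal features' varying almost entirely along the first axis
X = [[0.0, 0.1], [1.0, 0.0], [2.0, 0.1], [3.0, 0.0]]
v, reduced = top_component(X)
print([round(abs(x), 2) for x in v])
```

The recovered component points almost entirely along the first axis, so the 1-D projection retains nearly all the explained variance; those reduced features are what the SVM would consume.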
Affiliation(s)
- Muntasir Hoq
- Department of Computer Science and Engineering, East Delta University, Chattogram 4209, Bangladesh
- Mohammed Nazim Uddin
- Department of Computer Science and Engineering, East Delta University, Chattogram 4209, Bangladesh
- Seung-Bo Park
- Department of Software Convergence Engineering, Inha University, Incheon 22201, Korea
132
Liu X, Guo Z, Cao J, Tang J. MDC-net: A new convolutional neural network for nucleus segmentation in histopathology images with distance maps and contour information. Comput Biol Med 2021; 135:104543. [PMID: 34146800] [DOI: 10.1016/j.compbiomed.2021.104543]
Abstract
Accurate segmentation of nuclei in digital pathology images can assist doctors in diagnosing diseases and evaluating subsequent treatments. Manual segmentation of nuclei from pathology images is time-consuming because of the large number of nuclei and is also error-prone. Therefore, accurate and automatic nucleus segmentation methods are required. Owing to the large variations in the characterization of nuclei, it is difficult to accurately segment nuclei using traditional methods. In this study, we propose a new method for nucleus segmentation. The proposed method uses a deep fully convolutional neural network to perform end-to-end segmentation on pathological tissue slices. Multiple short residual connections were used to fuse feature maps from different scales to better utilize the context information. Dilated convolutions with different dilation ratios were used to increase the receptive fields. In addition, we incorporated the distance map and contour information into the segmentation method to segment touching nuclei, which is difficult via traditional segmentation methods. Finally, post-processing was used to improve the segmentation results. The results demonstrate that our segmentation method can obtain comparable or better performance than other state-of-the-art methods on the public nuclei histopathology datasets.
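As a minimal illustration of why the distance-map input helps with touching nuclei (our sketch, not MDC-net itself): the Euclidean distance transform of a binary mask peaks at nucleus centres and dips at the junction between merged blobs, providing seeds for a marker-based split such as watershed.

```python
import numpy as np
from scipy import ndimage

# Binary mask: two 5x5 "nuclei" joined by a thin bridge where they touch.
mask = np.zeros((9, 15), dtype=bool)
mask[2:7, 1:6] = True     # nucleus A
mask[2:7, 9:14] = True    # nucleus B
mask[4, 5:10] = True      # 1-pixel-high junction

# Euclidean distance of each foreground pixel to the nearest background pixel.
dist = ndimage.distance_transform_edt(mask)

# The map peaks at the two nucleus centres and dips at the junction, so the
# centres can serve as markers for splitting the merged region.
centre_a, bridge, centre_b = dist[4, 3], dist[4, 7], dist[4, 11]
```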
Affiliation(s)
- Xiaoming Liu
- School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, China; Hubei Province Key Laboratory of Intelligent Information Processing and Real-time Industrial System, Wuhan, China
- Zhengsheng Guo
- School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, China
- Jun Cao
- School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, China
- Jinshan Tang
- Department of Applied Computing, College of Computing, Michigan Technological University, Houghton, MI 49931, USA; Center for Biocomputing and Digital Health, Institute of Computing and Cybersystems, & Health Research Institute, Michigan Technological University, Houghton, MI 49931, USA
|
133
|
Li Y, Wang Y, Zhang Y, Zhang J. Diagnosis of Inter-turn Short Circuit of Permanent Magnet Synchronous Motor Based on Deep learning and Small Fault Samples. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.04.160] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
134
|
Mercan C, Aygunes B, Aksoy S, Mercan E, Shapiro LG, Weaver DL, Elmore JG. Deep Feature Representations for Variable-Sized Regions of Interest in Breast Histopathology. IEEE J Biomed Health Inform 2021; 25:2041-2049. [PMID: 33166257 PMCID: PMC8274968 DOI: 10.1109/jbhi.2020.3036734] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
OBJECTIVE Modeling variable-sized regions of interest (ROIs) in whole slide images using deep convolutional networks is a challenging task, as these networks typically require fixed-sized inputs that should contain sufficient structural and contextual information for classification. We propose a deep feature extraction framework that builds an ROI-level feature representation via weighted aggregation of the representations of variable numbers of fixed-sized patches sampled from nuclei-dense regions in breast histopathology images. METHODS First, the initial patch-level feature representations are extracted from both fully-connected layer activations and pixel-level convolutional layer activations of a deep network, and the weights are obtained from the class predictions of the same network trained on patch samples. Then, the final patch-level feature representations are computed by concatenation of weighted instances of the extracted feature activations. Finally, the ROI-level representation is obtained by fusion of the patch-level representations by average pooling. RESULTS Experiments using a well-characterized data set of 240 slides containing 437 ROIs marked by experienced pathologists with variable sizes and shapes result in an accuracy score of 72.65% in classifying ROIs into four diagnostic categories that cover the whole histologic spectrum. CONCLUSION The results show that the proposed feature representations are superior to existing approaches and provide accuracies that are higher than the average accuracy of another set of pathologists. SIGNIFICANCE The proposed generic representation that can be extracted from any type of deep convolutional architecture combines the patch appearance information captured by the network activations and the diagnostic relevance predicted by the class-specific scoring of patches for effective modeling of variable-sized ROIs.
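The weighted aggregation step can be sketched as follows. This is a simplified numpy illustration under our own assumptions; the paper concatenates weighted activations from several network layers before pooling, which is omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)
n_patches, feat_dim = 7, 16                           # one ROI, 7 sampled patches
patch_feats = rng.normal(size=(n_patches, feat_dim))  # patch-level features
class_scores = rng.uniform(size=n_patches)            # per-patch prediction weights

# Weight each patch's feature vector by its class-prediction score, then fuse
# the variable-length set into one fixed-length ROI vector by average pooling.
weighted = patch_feats * class_scores[:, None]
roi_repr = weighted.mean(axis=0)
```

Because the pooling averages over however many patches the ROI yields, the same fixed-length representation is obtained for ROIs of any size or shape.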
|
135
|
Gavrielides MA, Miller M, Hagemann IS, Abdelal H, Alipour Z, Chen JF, Salari B, Sun L, Zhou H, Seidman JD. Clinical Decision Support for Ovarian Carcinoma Subtype Classification: A Pilot Observer Study With Pathology Trainees. Arch Pathol Lab Med 2021; 144:869-877. [PMID: 31816269 DOI: 10.5858/arpa.2019-0390-oa] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/16/2019] [Indexed: 11/06/2022]
Abstract
CONTEXT.— Clinical decision support (CDS) systems could assist less experienced pathologists with certain diagnostic tasks for which subspecialty training or extensive experience is typically needed. The effect of decision support on pathologist performance for such diagnostic tasks has not been examined. OBJECTIVE.— To examine the impact of a CDS tool for the classification of ovarian carcinoma subtypes by pathology trainees in a pilot observer study using digital pathology. DESIGN.— Histologic review on 90 whole slide images from 75 ovarian cancer patients was conducted by 6 pathology residents using: (1) unaided review of whole slide images, and (2) aided review, where in addition to whole slide images observers used a CDS tool that provided information about the presence of 8 histologic features important for subtype classification that were identified previously by an expert in gynecologic pathology. The reference standard of ovarian subtype consisted of majority consensus from a panel of 3 gynecologic pathology experts. RESULTS.— Aided review improved pairwise concordance with the reference standard for 5 of 6 observers by 3.3% to 17.8% (for 2 observers, increase was statistically significant) and mean interobserver agreement by 9.2% (not statistically significant). Observers benefited the most when the CDS tool prompted them to look for missed histologic features that were definitive for a certain subtype. Observer performance varied widely across cases with unanimous and nonunanimous reference classification, supporting the need for balancing data sets in terms of case difficulty. CONCLUSIONS.— Findings showed the potential of CDS systems to close the knowledge gap between pathologists for complex diagnostic tasks.
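The pairwise-concordance metric reported above is simply the fraction of cases on which an observer's subtype call matches the consensus reference. A toy sketch (ours; the subtype labels and calls below are hypothetical placeholders, not cases from the study):

```python
def concordance(observer, reference):
    """Fraction of cases on which an observer agrees with the reference."""
    assert len(observer) == len(reference)
    return sum(o == r for o, r in zip(observer, reference)) / len(reference)

reference = ["HGSC", "LGSC", "CCC", "EC", "MC", "HGSC"]   # expert consensus
unaided   = ["HGSC", "HGSC", "CCC", "EC", "MC", "LGSC"]   # WSI review alone
aided     = ["HGSC", "LGSC", "CCC", "EC", "MC", "LGSC"]   # WSI review + CDS

gain = concordance(aided, reference) - concordance(unaided, reference)
```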
Affiliation(s)
- Marios A Gavrielides, Meghan Miller, Ian S Hagemann, Heba Abdelal, Zahra Alipour, Jie-Fu Chen, Behzad Salari, Lulu Sun, Huifang Zhou, Jeffrey D Seidman
- From the Division of Imaging, Diagnostics, and Software Reliability, Office of Engineering and Science Laboratories (Dr Gavrielides and Ms Miller), and the Office of In Vitro Diagnostics and Radiological Health, Division of Molecular Genetics and Pathology (Dr Seidman), Center for Devices and Radiological Health, US Food and Drug Administration, Silver Spring, Maryland; the Department of Bioengineering, University of Maryland, College Park (Ms Miller); and the Departments of Pathology and Immunology (Drs Hagemann, Abdelal, Alipour, Chen, Salari, Sun, and Zhou) and Obstetrics and Gynecology (Dr Hagemann), Washington University School of Medicine, St Louis, Missouri. Ms Miller is currently with PCTEST Engineering Laboratory, Columbia, Maryland
|
136
|
Muramatsu C. [9. Computerized Diagnostic Aid for Mammography Using Deep Learning]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2021; 77:489-496. [PMID: 34011792 DOI: 10.6009/jjrt.2021_jsrt_77.5.489] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
|
137
|
Li Z, Fan X, Shang Z, Zhang L, Zhen H, Fang C. Towards computational analytics of 3D neuron images using deep adversarial learning. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.03.129] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
|
138
|
Xu J, Lu H, Li H, Yan C, Wang X, Zang M, Rooij DGD, Madabhushi A, Xu EY. Computerized spermatogenesis staging (CSS) of mouse testis sections via quantitative histomorphological analysis. Med Image Anal 2021; 70:101835. [PMID: 33676102 PMCID: PMC8046964 DOI: 10.1016/j.media.2020.101835] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2019] [Revised: 07/20/2020] [Accepted: 07/28/2020] [Indexed: 01/16/2023]
Abstract
Spermatogenesis in mammals is a cyclic process of spermatogenic cell development in the seminiferous epithelium that can be subdivided into 12 subsequent stages. Histological staging analysis of testis sections, specifically of seminiferous tubule cross-sections, is the only effective method to evaluate the quality of the spermatogenic process and to determine developmental defects leading to infertility. Such staging analysis, however, is tedious and time-consuming, and it may take a long time to become proficient. We have now developed a Computerized Staging system of Spermatogenesis (CSS) for mouse testis sections by learning from an expert with decades of experience in mouse testis staging. The development of the CSS system comprised three major parts: 1) developing computational image analysis models for mouse testis sections; 2) automated classification of each seminiferous tubule cross-section into three stage groups: Early Stages (ES: stages I-V), Middle Stages (MS: stages VI-VIII), and Late Stages (LS: stages IX-XII); and 3) automated classification of MS into distinct stages VI, VII-mVIII, and late VIII based on newly developed histomorphological features. A cohort of 40 H&E-stained normal mouse testis sections was built according to three modules, in which 28 cross-sections were leveraged to develop the tubule region segmentation, spermatogenic cell type, and multi-concentric-layer segmentation models. The remaining 12 testis cross-sections, comprising approximately 2314 tubules whose stages were manually annotated by two expert testis histologists, served as the basis for developing the CSS system. 
The CSS system's accuracy, as mean and standard deviation (MSD), in identifying ES, MS, and LS was 0.93 ± 0.03, 0.94 ± 0.11, and 0.89 ± 0.05, versus 0.85 ± 0.12, 0.88 ± 0.07, and 0.96 ± 0.04 for a histologist with 5 years of experience. The CSS system's accuracy (MSD) in identifying stages VI, VII-mVIII, and late VIII was 0.74 ± 0.03, 0.85 ± 0.04, and 0.78 ± 0.06, versus 0.34 ± 0.18, 0.78 ± 0.16, and 0.44 ± 0.25 for the histologist with 5 years of experience. In terms of time, evaluating an entire testis section takes on average 3 hours for a histologist and 1.87 hours for the CSS system (computed on a PC with an i7-6800K 4.0 GHz CPU, 32 GB of RAM, and a 256 GB SSD, plus a Titan 1080Ti GPU). Therefore, the CSS system is more accurate and faster in staging than a human histologist, and further optimization and development will not only lead to complete staging of all 12 stages of mouse spermatogenesis but could also aid the future diagnosis of human infertility. Moreover, the top-ranking histomorphological features identified by the CSS classifier are consistent with the primary features used by histologists to discriminate stages VI, VII-mVIII, and late VIII.
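The first-level grouping the CSS system performs can be expressed as a simple mapping (our sketch; stages are given as Roman numerals, with the Late group taken as IX-XII):

```python
ROMAN = ["I", "II", "III", "IV", "V", "VI", "VII", "VIII", "IX", "X", "XI", "XII"]

def stage_group(stage: str) -> str:
    """Map a seminiferous-tubule stage (I-XII) to its coarse group:
    Early Stages I-V, Middle Stages VI-VIII, Late Stages IX-XII."""
    n = ROMAN.index(stage) + 1
    if n <= 5:
        return "ES"
    if n <= 8:
        return "MS"
    return "LS"
```

Within the "MS" group, the system then applies the finer classifier that distinguishes stages VI, VII-mVIII, and late VIII.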
Affiliation(s)
- Jun Xu
- Jiangsu Key Laboratory of Big Data Analysis Technique and CICAEET, Nanjing University of Information Science and Technology, Nanjing 210044, China; School of Automation, Nanjing University of Information Science and Technology, Nanjing 210044, China
- Haoda Lu
- Jiangsu Key Laboratory of Big Data Analysis Technique and CICAEET, Nanjing University of Information Science and Technology, Nanjing 210044, China; School of Automation, Nanjing University of Information Science and Technology, Nanjing 210044, China
- Haixin Li
- State Key Laboratory of Reproductive Medicine, Nanjing Medical University, Nanjing 211166, China
- Chaoyang Yan
- Jiangsu Key Laboratory of Big Data Analysis Technique and CICAEET, Nanjing University of Information Science and Technology, Nanjing 210044, China; School of Automation, Nanjing University of Information Science and Technology, Nanjing 210044, China
- Xiangxue Wang
- Department of Biomedical Engineering, Case Western Reserve University, OH 44106-7207, USA
- Min Zang
- State Key Laboratory of Reproductive Medicine, Nanjing Medical University, Nanjing 211166, China
- Dirk G de Rooij
- Reproductive Biology Group, Division of Developmental Biology, Dept. of Biology, Faculty of Science, Utrecht University, Utrecht 3584 CH, The Netherlands; Center for Reproductive Medicine, Academic Medical Center, University of Amsterdam, Amsterdam 1105 AZ, The Netherlands
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, OH 44106-7207, USA; Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, Ohio 44106-7207, USA
- Eugene Yujun Xu
- State Key Laboratory of Reproductive Medicine, Nanjing Medical University, Nanjing 211166, China; Department of Neurology, Center for Reproductive Sciences, Northwestern University Feinberg School of Medicine, IL 60611, USA
|
139
|
Zhou SK, Greenspan H, Davatzikos C, Duncan JS, van Ginneken B, Madabhushi A, Prince JL, Rueckert D, Summers RM. A review of deep learning in medical imaging: Imaging traits, technology trends, case studies with progress highlights, and future promises. PROCEEDINGS OF THE IEEE. INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS 2021; 109:820-838. [PMID: 37786449 PMCID: PMC10544772 DOI: 10.1109/jproc.2021.3054390] [Citation(s) in RCA: 266] [Impact Index Per Article: 66.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/04/2023]
Abstract
Since its renaissance, deep learning has been widely used in various medical imaging tasks and has achieved remarkable success in many medical imaging applications, thereby propelling us into the so-called artificial intelligence (AI) era. It is known that the success of AI is mostly attributed to the availability of big data with annotations for a single task and the advances in high performance computing. However, medical imaging presents unique challenges that confront deep learning approaches. In this survey paper, we first present traits of medical imaging, highlight both clinical needs and technical challenges in medical imaging, and describe how emerging trends in deep learning are addressing these issues. We cover the topics of network architecture, sparse and noisy labels, federated learning, interpretability, uncertainty quantification, etc. Then, we present several case studies that are commonly found in clinical practice, including digital pathology and chest, brain, cardiovascular, and abdominal imaging. Rather than presenting an exhaustive literature survey, we instead describe some prominent research highlights related to these case study applications. We conclude with a discussion and presentation of promising future directions.
Affiliation(s)
- S Kevin Zhou
- School of Biomedical Engineering, University of Science and Technology of China and Institute of Computing Technology, Chinese Academy of Sciences
- Hayit Greenspan
- Biomedical Engineering Department, Tel-Aviv University, Israel
- Christos Davatzikos
- Radiology Department and Electrical and Systems Engineering Department, University of Pennsylvania, USA
- James S Duncan
- Departments of Biomedical Engineering and Radiology & Biomedical Imaging, Yale University
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University and Louis Stokes Cleveland Veterans Administration Medical Center, USA
- Jerry L Prince
- Electrical and Computer Engineering Department, Johns Hopkins University, USA
- Daniel Rueckert
- Klinikum rechts der Isar, TU Munich, Germany and Department of Computing, Imperial College, UK
|
140
|
Puttagunta M, Ravi S. Medical image analysis based on deep learning approach. MULTIMEDIA TOOLS AND APPLICATIONS 2021; 80:24365-24398. [PMID: 33841033 PMCID: PMC8023554 DOI: 10.1007/s11042-021-10707-4] [Citation(s) in RCA: 55] [Impact Index Per Article: 13.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/25/2020] [Revised: 11/28/2020] [Accepted: 02/10/2021] [Indexed: 05/05/2023]
Abstract
Medical imaging plays a significant role in different clinical applications, such as procedures for early detection, monitoring, diagnosis, and treatment evaluation of various medical conditions. A grounding in the principles and implementations of artificial neural networks and deep learning is essential for understanding medical image analysis in computer vision. The Deep Learning Approach (DLA) to medical image analysis is a fast-growing research field, and DLA has been widely used in medical imaging to detect the presence or absence of disease. This paper presents the development of artificial neural networks and a comprehensive analysis of DLA, which delivers promising medical imaging applications. Most DLA implementations concentrate on X-ray images, computerized tomography, mammography images, and digital histopathology images. The paper provides a systematic review of articles on the classification, detection, and segmentation of medical images based on DLA, and guides researchers toward appropriate directions in DLA-based medical image analysis.
Affiliation(s)
- Muralikrishna Puttagunta
- Department of Computer Science, School of Engineering and Technology, Pondicherry University, Pondicherry, India
- S. Ravi
- Department of Computer Science, School of Engineering and Technology, Pondicherry University, Pondicherry, India
|
141
|
Sobhani F, Robinson R, Hamidinekoo A, Roxanis I, Somaiah N, Yuan Y. Artificial intelligence and digital pathology: Opportunities and implications for immuno-oncology. Biochim Biophys Acta Rev Cancer 2021; 1875:188520. [PMID: 33561505 PMCID: PMC9062980 DOI: 10.1016/j.bbcan.2021.188520] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2020] [Revised: 01/04/2021] [Accepted: 01/30/2021] [Indexed: 02/08/2023]
Abstract
The field of immuno-oncology has expanded rapidly over the past decade, but key questions remain. How does tumour-immune interaction regulate disease progression? How can we prospectively identify patients who will benefit from immunotherapy? Identifying measurable features of the tumour immune-microenvironment which have prognostic or predictive value will be key to making meaningful gains in these areas. Recent developments in deep learning enable big-data analysis of pathological samples. Digital approaches allow data to be acquired, integrated and analysed far beyond what is possible with conventional techniques, and to do so efficiently and at scale. This has the potential to reshape what can be achieved in terms of volume, precision and reliability of output, enabling data for large cohorts to be summarised and compared. This review examines applications of artificial intelligence (AI) to important questions in immuno-oncology (IO). We discuss general considerations that need to be taken into account before AI can be applied in any clinical setting. We describe AI methods that have been applied to the field of IO to date and present several examples of their use.
Affiliation(s)
- Faranak Sobhani
- Division of Molecular Pathology, The Institute of Cancer Research, London, UK; Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK
- Ruth Robinson
- Division of Radiotherapy and Imaging, Institute of Cancer Research, The Royal Marsden NHS Foundation Trust, London, UK
- Azam Hamidinekoo
- Division of Molecular Pathology, The Institute of Cancer Research, London, UK; Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK
- Ioannis Roxanis
- The Breast Cancer Now Toby Robins Research Centre, The Institute of Cancer Research, London, UK
- Navita Somaiah
- Division of Radiotherapy and Imaging, Institute of Cancer Research, The Royal Marsden NHS Foundation Trust, London, UK
- Yinyin Yuan
- Division of Molecular Pathology, The Institute of Cancer Research, London, UK; Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK
|
142
|
Alzubaidi L, Zhang J, Humaidi AJ, Al-Dujaili A, Duan Y, Al-Shamma O, Santamaría J, Fadhel MA, Al-Amidie M, Farhan L. Review of deep learning: concepts, CNN architectures, challenges, applications, future directions. JOURNAL OF BIG DATA 2021; 8:53. [PMID: 33816053 PMCID: PMC8010506 DOI: 10.1186/s40537-021-00444-8] [Citation(s) in RCA: 1011] [Impact Index Per Article: 252.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/21/2021] [Accepted: 03/22/2021] [Indexed: 05/04/2023]
Abstract
In the last few years, the deep learning (DL) computing paradigm has been deemed the gold standard in the machine learning (ML) community. It has gradually become the most widely used computational approach in the field of ML, achieving outstanding results on several complex cognitive tasks, matching or even beating human performance. One of the benefits of DL is its ability to learn from massive amounts of data. The DL field has grown quickly in recent years and has been used successfully to address a wide range of traditional applications. More importantly, DL has outperformed well-known ML techniques in many domains, e.g., cybersecurity, natural language processing, bioinformatics, robotics and control, and medical information processing, among many others. Although several works have reviewed the state of the art in DL, each of them tackles only one aspect of the field, which leads to an overall lack of knowledge about it. Therefore, this contribution proposes a more holistic approach in order to provide a more suitable starting point from which to develop a full understanding of DL. Specifically, this review attempts to provide a comprehensive survey of the most important aspects of DL, including those enhancements recently added to the field. In particular, the paper outlines the importance of DL and presents the types of DL techniques and networks. It then covers convolutional neural networks (CNNs), the most widely used type of DL network, and describes the development of CNN architectures together with their main features, starting with the AlexNet network and closing with the High-Resolution network (HR.Net). Finally, the paper presents the challenges and suggested solutions to help researchers understand the existing research gaps, followed by a list of the major DL applications. 
Computational tools including FPGAs, GPUs, and CPUs are summarized along with a description of their influence on DL. The paper ends with the evolution matrix, benchmark datasets, and a summary and conclusion.
Collapse
Affiliation(s)
- Laith Alzubaidi
- School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000 Australia
- AlNidhal Campus, University of Information Technology & Communications, Baghdad, 10001 Iraq
- Jinglan Zhang
- School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000, Australia
- Amjad J. Humaidi
- Control and Systems Engineering Department, University of Technology, Baghdad 10001, Iraq
- Ayad Al-Dujaili
- Electrical Engineering Technical College, Middle Technical University, Baghdad 10001, Iraq
- Ye Duan
- Faculty of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO 65211, USA
- Omran Al-Shamma
- AlNidhal Campus, University of Information Technology & Communications, Baghdad 10001, Iraq
- J. Santamaría
- Department of Computer Science, University of Jaén, 23071 Jaén, Spain
- Mohammed A. Fadhel
- College of Computer Science and Information Technology, University of Sumer, Thi Qar 64005, Iraq
- Muthana Al-Amidie
- Faculty of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO 65211, USA
- Laith Farhan
- School of Engineering, Manchester Metropolitan University, Manchester, M1 5GD, UK
143
Alzubaidi L, Al-Amidie M, Al-Asadi A, Humaidi AJ, Al-Shamma O, Fadhel MA, Zhang J, Santamaría J, Duan Y. Novel Transfer Learning Approach for Medical Imaging with Limited Labeled Data. Cancers (Basel) 2021; 13:1590. [PMID: 33808207] [PMCID: PMC8036379] [DOI: 10.3390/cancers13071590] [Citation(s) in RCA: 91] [Impact Index Per Article: 22.8] [Received: 02/20/2021] [Revised: 03/24/2021] [Accepted: 03/27/2021] [Indexed: 12/27/2022] Open
Abstract
Deep learning requires a large amount of data to perform well. However, the field of medical image analysis suffers from a lack of sufficient data for training deep learning models. Moreover, medical images require manual labeling, usually provided by human annotators coming from various backgrounds. More importantly, the annotation process is time-consuming, expensive, and prone to errors. Transfer learning was introduced to reduce the need for the annotation process by transferring deep learning models with knowledge from a previous task and then fine-tuning them on a relatively small dataset of the current task. Most medical image classification methods employ transfer learning from models pretrained on natural-image datasets such as ImageNet, which has been proven to be ineffective. This is due to the mismatch in learned features between natural images, e.g., ImageNet, and medical images. Additionally, it results in the utilization of unnecessarily elaborate models. In this paper, we propose a novel transfer learning approach that overcomes these drawbacks by first training the deep learning model on large unlabeled medical image datasets and then transferring the knowledge to train the model on a small amount of labeled medical images. Additionally, we propose a new deep convolutional neural network (DCNN) model that combines recent advancements in the field. We conducted several experiments on two challenging medical imaging scenarios dealing with skin and breast cancer classification tasks. According to the reported results, the proposed approach can significantly improve the performance of both classification scenarios. In terms of skin cancer, the proposed model achieved an F1-score of 89.09% when trained from scratch and 98.53% with the proposed approach.
In the breast cancer scenario, it achieved accuracy values of 85.29% when trained from scratch and 97.51% with the proposed approach. Finally, we concluded that our method can be applied to many medical imaging problems in which a substantial amount of unlabeled image data is available while labeled image data is limited. Moreover, it can be utilized to improve the performance of medical imaging tasks in the same domain. To demonstrate this, we used the pretrained skin cancer model to train on foot-skin images and classify them into two classes: normal or abnormal (diabetic foot ulcer (DFU)). It achieved an F1-score of 86.0% when trained from scratch, 96.25% using transfer learning, and 99.25% using double transfer learning.
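The two-stage pipeline this abstract describes (unsupervised pretraining on unlabeled images, then supervised fine-tuning of a head on a small labeled set) can be sketched in a few lines. The following is a minimal, self-contained numpy illustration with synthetic stand-in data; it is not the authors' DCNN implementation, and all sizes, data, and training details are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: a large "unlabeled" pool and a small labeled set.
n_u, n_l, d, h = 500, 20, 16, 4
X_u = rng.normal(size=(n_u, d))
X_l = rng.normal(size=(n_l, d))
y_l = (X_l[:, 0] > 0).astype(float)          # toy binary labels

# Stage 1: unsupervised pretraining -- a linear autoencoder fitted to the
# unlabeled pool by gradient descent on reconstruction error.
W_enc = rng.normal(scale=0.1, size=(d, h))
W_dec = rng.normal(scale=0.1, size=(h, d))

def recon_loss():
    return np.mean((X_u @ W_enc @ W_dec - X_u) ** 2)

loss_before = recon_loss()
for _ in range(300):
    Z = X_u @ W_enc                          # encode
    E = Z @ W_dec - X_u                      # reconstruction error
    W_dec -= 0.01 * 2 * Z.T @ E / n_u
    W_enc -= 0.01 * 2 * X_u.T @ (E @ W_dec.T) / n_u
loss_after = recon_loss()

# Stage 2: transfer -- freeze the pretrained encoder and fit only a small
# logistic head on the limited labeled data.
def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

Z_l = X_l @ W_enc                            # features from frozen encoder
w_head = np.zeros(h)
for _ in range(500):
    p = sigmoid(Z_l @ w_head)
    w_head -= 0.1 * Z_l.T @ (p - y_l) / n_l
train_acc = np.mean((sigmoid(Z_l @ w_head) > 0.5) == (y_l > 0.5))
```

The "double transfer learning" result in the abstract corresponds to repeating Stage 2 once more: the fine-tuned cancer model itself becomes the pretrained starting point for the DFU task.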
Affiliation(s)
- Laith Alzubaidi
- School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000, Australia;
- AlNidhal Campus, University of Information Technology & Communications, Baghdad 10001, Iraq;
- Muthana Al-Amidie
- Faculty of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO 65211, USA
- Ahmed Al-Asadi
- Faculty of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO 65211, USA
- Amjad J. Humaidi
- Control and Systems Engineering Department, University of Technology, Baghdad 10001, Iraq
- Omran Al-Shamma
- AlNidhal Campus, University of Information Technology & Communications, Baghdad 10001, Iraq
- Mohammed A. Fadhel
- College of Computer Science and Information Technology, University of Sumer, Thi Qar 64005, Iraq
- Jinglan Zhang
- School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000, Australia
- J. Santamaría
- Department of Computer Science, University of Jaén, 23071 Jaén, Spain
- Ye Duan
- Faculty of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO 65211, USA
144
Wang H, Pujos-Guillot E, Comte B, de Miranda JL, Spiwok V, Chorbev I, Castiglione F, Tieri P, Watterson S, McAllister R, de Melo Malaquias T, Zanin M, Rai TS, Zheng H. Deep learning in systems medicine. Brief Bioinform 2021; 22:1543-1559. [PMID: 33197934] [PMCID: PMC8382976] [DOI: 10.1093/bib/bbaa237] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Received: 05/13/2020] [Revised: 08/25/2020] [Accepted: 08/26/2020] [Indexed: 12/11/2022] Open
Abstract
Systems medicine (SM) has emerged as a powerful tool for studying the human body at the systems level, with the aim of improving our understanding, prevention and treatment of complex diseases. Being able to automatically extract relevant features needed for a given task from high-dimensional, heterogeneous data, deep learning (DL) holds great promise in this endeavour. This review paper addresses the main developments of DL algorithms and a set of general topics where DL is decisive within the SM landscape. It discusses how DL can be applied to SM, with an emphasis on applications to predictive, preventive and precision medicine. Several key challenges are highlighted, including delivering clinical impact and improving interpretability. We use prototypical examples to highlight the relevance and significance of adopting DL in SM, one of which involves the creation of a personalized Parkinson's disease model. The review offers valuable insights and informs research in DL and SM.
Affiliation(s)
- Estelle Pujos-Guillot
- metabolomic platform dedicated to metabolism studies in nutrition and health in the French National Research Institute for Agriculture, Food and Environment
- Blandine Comte
- French National Research Institute for Agriculture, Food and Environment
- Joao Luis de Miranda
- (ESTG/IPP) and a Researcher (CERENA/IST) in optimization methods and process systems engineering
- Vojtech Spiwok
- Molecular Modelling Researcher applying machine learning to accelerate molecular simulations
- Ivan Chorbev
- Faculty for Computer Science and Engineering, University Ss Cyril and Methodius in Skopje, North Macedonia, working in the area of eHealth and assistive technologies
- Paolo Tieri
- National Research Council of Italy (CNR) and a lecturer at Sapienza University in Rome, working in the field of network medicine and computational biology
- Roisin McAllister
- Research Associate working in CTRIC, University of Ulster, Derry, who has worked in clinical and academic roles in the fields of molecular diagnostics and biomarker discovery
- Massimiliano Zanin
- Researcher working in the Institute for Cross-Disciplinary Physics and Complex Systems, Spain, with an interest in data analysis and integration using statistical physics techniques
- Taranjit Singh Rai
- Lecturer in cellular ageing at the Centre for Stratified Medicine. Dr Rai's research interests are in cellular senescence, which is thought to promote cellular and tissue ageing in disease, and the development of senolytic compounds to restrict this process
- Huiru Zheng
- Professor of computer sciences at Ulster University
145
Wang Z, Li R, Wang M, Li A. GPDBN: deep bilinear network integrating both genomic data and pathological images for breast cancer prognosis prediction. Bioinformatics 2021; 37:2963-2970. [PMID: 33734318] [PMCID: PMC8479662] [DOI: 10.1093/bioinformatics/btab185] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Received: 12/03/2020] [Revised: 02/07/2021] [Accepted: 03/16/2021] [Indexed: 02/02/2023] Open
Abstract
MOTIVATION Breast cancer is a very heterogeneous disease, and there is an urgent need to design computational methods that can accurately predict its prognosis so that an appropriate therapeutic regimen can be selected. Recently, deep learning-based methods have achieved great success in prognosis prediction, but many of them directly combine features from different modalities and may thereby ignore the complex inter-modality relations. In addition, existing deep learning-based methods do not take into consideration intra-modality relations, which are also beneficial to prognosis prediction. Therefore, it is of great importance to develop a deep learning-based method that can take advantage of the complementary information between intra-modality and inter-modality relations by integrating data from different modalities for more accurate prognosis prediction of breast cancer. RESULTS We present a novel unified framework named genomic and pathological deep bilinear network (GPDBN) for prognosis prediction of breast cancer by effectively integrating both genomic data and pathological images. In GPDBN, an inter-modality bilinear feature encoding module is proposed to model complex inter-modality relations, fully exploiting the intrinsic relationship of the features across different modalities. Meanwhile, intra-modality relations, which are also beneficial to prognosis prediction, are captured by two intra-modality bilinear feature encoding modules. Moreover, to take advantage of the complementary information between inter-modality and intra-modality relations, GPDBN further combines the inter- and intra-modality bilinear features by using a multi-layer deep neural network for final prognosis prediction. Comprehensive experimental results demonstrate that the proposed GPDBN significantly improves the performance of breast cancer prognosis prediction and compares favorably with existing methods. AVAILABILITY AND IMPLEMENTATION GPDBN is freely available at https://github.com/isfj/GPDBN.
SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
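The bilinear feature encoding at the heart of this kind of fusion amounts to taking outer products of feature vectors: across modalities for inter-modality relations and within a modality for intra-modality relations. A minimal numpy sketch with hypothetical feature sizes (not GPDBN's actual dimensions or implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-patient feature vectors from the two modalities.
genomic = rng.normal(size=6)   # e.g., compressed gene-expression features
patho   = rng.normal(size=5)   # e.g., compressed pathological-image features

# Inter-modality bilinear encoding: the outer product captures every
# pairwise interaction between a genomic and an image feature.
inter = np.outer(genomic, patho).ravel()           # shape (30,)

# Intra-modality bilinear encodings: pairwise interactions within one modality.
intra_g = np.outer(genomic, genomic).ravel()       # shape (36,)
intra_p = np.outer(patho, patho).ravel()           # shape (25,)

# The fused representation that a downstream multi-layer network would consume.
fused = np.concatenate([inter, intra_g, intra_p])  # shape (91,)
```

Because the fused vector grows quadratically with feature dimension, practical systems typically compress each modality first, as the abstract's "feature encoding modules" suggest.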
Affiliation(s)
- Zhiqin Wang
- School of Information Science and Technology, University of Science and Technology of China, Hefei, AH 230027, China
- Ruiqing Li
- School of Information Science and Technology, University of Science and Technology of China, Hefei, AH 230027, China
- Minghui Wang
- School of Information Science and Technology, University of Science and Technology of China, Hefei, AH 230027, China; Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, AH 230027, China. To whom correspondence should be addressed.
- Ao Li
- School of Information Science and Technology, University of Science and Technology of China, Hefei, AH 230027, China; Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, AH 230027, China. To whom correspondence should be addressed.
146
Akay M, Du Y, Sershen CL, Wu M, Chen TY, Assassi S, Mohan C, Akay YM. Deep Learning Classification of Systemic Sclerosis Skin Using the MobileNetV2 Model. IEEE Open Journal of Engineering in Medicine and Biology 2021; 2:104-110. [PMID: 35402975] [PMCID: PMC8901014] [DOI: 10.1109/ojemb.2021.3066097] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Received: 02/01/2021] [Revised: 03/03/2021] [Accepted: 03/08/2021] [Indexed: 11/21/2022] Open
Abstract
Goal: Systemic sclerosis (SSc) is a rare autoimmune, systemic disease with prominent fibrosis of the skin and internal organs. Early diagnosis of the disease is crucial for designing effective therapy and management plans. Machine learning algorithms, especially deep learning, have been found to be greatly useful in biology, medicine, healthcare, and biomedical applications, particularly in medical image processing and speech recognition. However, the need for a large training dataset and the requirement for a graphics processing unit (GPU) have hindered the wide application of machine learning algorithms as a diagnostic tool in resource-constrained environments (e.g., clinics). Methods: In this paper, we propose a novel mobile deep learning network for the characterization of SSc skin. The proposed network architecture consists of the UNet, a dense-connectivity convolutional neural network (CNN) with added classifier layers that, when combined with limited training data, yields better image segmentation and more accurate classification, and a mobile training module. In addition, to improve computational efficiency and diagnostic accuracy, the highly efficient training model "MobileNetV2," which is designed for mobile and embedded applications, was used to train the network. Results: The proposed network was implemented using a standard laptop (2.5 GHz Intel Core i7). After fine-tuning, our results showed that the proposed network reached 100% accuracy on the training image set, 96.8% accuracy on the validation image set, and 95.2% on the testing image set. The training time was less than 5 hours. We also analyzed the same normal vs. SSc skin image sets using a CNN on the same laptop. The CNN reached 100% accuracy on the training image set, 87.7% accuracy on the validation image set, and 82.9% on the testing image set. Additionally, it took more than 14 hours to train the CNN architecture.
We also utilized the MobileNetV2 model to analyze an additional dataset of images and classified them as normal, early (mild and moderate) SSc, or late (severe) SSc skin images. The network reached 100% accuracy on the training image set, 97.2% on the validation set, and 94.8% on the testing image set. Using the same normal, early, and late phase SSc skin images, the CNN reached 100% accuracy on the training image set, 87.7% accuracy on the validation image set, and 82.9% on the testing image set. These results indicate that the MobileNetV2 architecture is more accurate and efficient than the CNN at classifying normal, early, and late phase SSc skin images. Conclusions: Our preliminary study, intended to show the efficacy of the proposed network architecture, holds promise for the characterization of SSc. We believe that the proposed network architecture could easily be implemented in a clinical setting, providing a simple, inexpensive, and accurate screening tool for SSc.
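MobileNet-style efficiency comes largely from depthwise separable convolutions, which factor a standard convolution into a per-channel spatial filter followed by a 1x1 pointwise channel mix. The following sketch compares parameter and multiply counts for a single layer; the layer sizes are illustrative and not taken from the paper.

```python
# Cost of one 3x3 conv layer mapping c_in -> c_out channels on an H x W map.
def standard_conv_cost(c_in, c_out, k=3, h=56, w=56):
    params = k * k * c_in * c_out          # one k x k x c_in kernel per output channel
    mults = params * h * w                 # applied at every spatial position
    return params, mults

def depthwise_separable_cost(c_in, c_out, k=3, h=56, w=56):
    # Depthwise k x k filter per input channel, then 1x1 pointwise mixing --
    # the factorization MobileNet-style blocks are built on.
    params = k * k * c_in + c_in * c_out
    mults = k * k * c_in * h * w + c_in * c_out * h * w
    return params, mults

p_std, m_std = standard_conv_cost(64, 128)
p_sep, m_sep = depthwise_separable_cost(64, 128)
ratio = m_std / m_sep   # ~ k^2 * c_out / (k^2 + c_out): roughly 8x fewer multiplies here
```

This multiply-count reduction is one plausible reason the MobileNetV2-based network trained in under 5 hours on a laptop while the plain CNN took over 14 hours, though the paper's exact architectures differ from this toy layer.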
Affiliation(s)
- Metin Akay
- Biomedical Engineering Department, University of Houston, Houston, TX 77204, USA
- Yong Du
- Biomedical Engineering Department, University of Houston, Houston, TX 77204, USA
- Cheryl L. Sershen
- Biomedical Engineering Department, University of Houston, Houston, TX 77204, USA
- Minghua Wu
- Division of Rheumatology and Clinical Immunogenetics, Department of Internal Medicine, UTHealth, Houston, TX 77030, USA
- Ting Y. Chen
- Biomedical Engineering Department, University of Houston, Houston, TX 77204, USA
- Shervin Assassi
- Division of Rheumatology and Clinical Immunogenetics, Department of Internal Medicine, UTHealth, Houston, TX 77030, USA
- Chandra Mohan
- Biomedical Engineering Department, University of Houston, Houston, TX 77204, USA
- Yasemin M. Akay
- Biomedical Engineering Department, University of Houston, Houston, TX 77204, USA
147
Rossi A, Hosseinzadeh M, Bianchini M, Scarselli F, Huisman H. Multi-Modal Siamese Network for Diagnostically Similar Lesion Retrieval in Prostate MRI. IEEE Transactions on Medical Imaging 2021; 40:986-995. [PMID: 33296302] [DOI: 10.1109/tmi.2020.3043641] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Indexed: 06/12/2023]
Abstract
Multi-parametric prostate MRI (mpMRI) is a powerful tool for diagnosing prostate cancer, though it is difficult to interpret even for experienced radiologists. A common radiological procedure is to compare a magnetic resonance image with similarly diagnosed cases. To assist the radiological image interpretation process, computerized Content-Based Image Retrieval systems (CBIRs) can therefore be employed to improve the reporting workflow and increase its accuracy. In this article, we propose a new supervised siamese deep learning architecture able to handle multi-modal and multi-view MR images with similar PI-RADS scores. An experimental comparison with well-established deep learning-based CBIRs (namely standard siamese networks and autoencoders) showed significantly improved performance with respect to both diagnostic (ROC-AUC) and information retrieval metrics (Precision-Recall, Discounted Cumulative Gain and Mean Average Precision). Finally, the newly proposed multi-view siamese network is general in design, facilitating broad use in diagnostic medical imaging retrieval.
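A siamese retrieval network of this kind applies one shared encoder to both images of a pair and is commonly trained with a contrastive objective: pairs with the same diagnostic label are pulled together in embedding space, different pairs are pushed at least a margin apart. A minimal numpy sketch of that signal; the toy encoder, weights, and margin are hypothetical and not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(2)

def embed(x, W):
    """Toy shared encoder standing in for each siamese branch."""
    return np.tanh(x @ W)

def contrastive_loss(z1, z2, same, margin=1.0):
    # Same-label pairs: penalize distance. Different-label pairs: penalize
    # only when closer than `margin` (the classic siamese training signal).
    d = np.linalg.norm(z1 - z2)
    return d ** 2 if same else max(0.0, margin - d) ** 2

W = rng.normal(scale=0.5, size=(8, 3))          # one weight matrix shared by both branches
a, b = rng.normal(size=8), rng.normal(size=8)   # two hypothetical lesion feature vectors
za, zb = embed(a, W), embed(b, W)

loss_same = contrastive_loss(za, zb, same=True)
loss_diff = contrastive_loss(za, za, same=False)  # identical pair labelled "different"
```

At retrieval time, the learned embedding distance itself serves as the similarity score: query lesions are ranked against the database by `np.linalg.norm(z_query - z_case)`.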
148
Mallow GM, Siyaji ZK, Galbusera F, Espinoza-Orías AA, Giers M, Lundberg H, Ames C, Karppinen J, Louie PK, Phillips FM, Pourzal R, Schwab J, Sciubba DM, Wang JC, Wilke HJ, Williams FMK, Mohiuddin SA, Makhni MC, Shepard NA, An HS, Samartzis D. Intelligence-Based Spine Care Model: A New Era of Research and Clinical Decision-Making. Global Spine J 2021; 11:135-145. [PMID: 33251858] [PMCID: PMC7882816] [DOI: 10.1177/2192568220973984] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Indexed: 12/14/2022] Open
Affiliation(s)
- G. Michael Mallow
- Department of Orthopedic Surgery, Rush University Medical Center, Chicago, IL, USA
- The International Spine Research and Innovation Initiative, Rush University Medical Center, Chicago, IL, USA
- Zakariah K. Siyaji
- Department of Orthopedic Surgery, Rush University Medical Center, Chicago, IL, USA
- The International Spine Research and Innovation Initiative, Rush University Medical Center, Chicago, IL, USA
- Alejandro A. Espinoza-Orías
- Department of Orthopedic Surgery, Rush University Medical Center, Chicago, IL, USA
- The International Spine Research and Innovation Initiative, Rush University Medical Center, Chicago, IL, USA
- Morgan Giers
- School of Chemical, Biological, and Environmental Engineering, Oregon State University, Corvallis, OR, USA
- Hannah Lundberg
- Department of Orthopedic Surgery, Rush University Medical Center, Chicago, IL, USA
- Christopher Ames
- Department of Neurosurgery, University of California San Francisco, CA, USA
- Jaro Karppinen
- Medical Research Center Oulu, University of Oulu and Oulu University Hospital, Oulu, Finland
- Frank M. Phillips
- Department of Orthopedic Surgery, Rush University Medical Center, Chicago, IL, USA
- The International Spine Research and Innovation Initiative, Rush University Medical Center, Chicago, IL, USA
- Robin Pourzal
- Department of Orthopedic Surgery, Rush University Medical Center, Chicago, IL, USA
- Joseph Schwab
- Department of Orthopaedic Surgery, Harvard Medical School, Boston, MA, USA
- Daniel M. Sciubba
- Department of Neurosurgery, Johns Hopkins University, Baltimore, MD, USA
- Jeffrey C. Wang
- Department of Orthopaedic Surgery, University of Southern California, Los Angeles, CA, USA
- Hans-Joachim Wilke
- Institute of Orthopaedic Research and Biomechanics, Centre for Trauma Research Ulm, Ulm University Medical Centre, Ulm, Germany
- Frances M. K. Williams
- Department of Twin Research and Genetic Epidemiology, King’s College London, London, United Kingdom
- Melvin C. Makhni
- Department of Orthopaedic Surgery, Harvard Medical School, Boston, MA, USA
- Nicholas A. Shepard
- Department of Orthopedic Surgery, Rush University Medical Center, Chicago, IL, USA
- The International Spine Research and Innovation Initiative, Rush University Medical Center, Chicago, IL, USA
- Howard S. An
- Department of Orthopedic Surgery, Rush University Medical Center, Chicago, IL, USA
- The International Spine Research and Innovation Initiative, Rush University Medical Center, Chicago, IL, USA
- Dino Samartzis
- Department of Orthopedic Surgery, Rush University Medical Center, Chicago, IL, USA
- The International Spine Research and Innovation Initiative, Rush University Medical Center, Chicago, IL, USA
149
Breath analysis based early gastric cancer classification from deep stacked sparse autoencoder neural network. Sci Rep 2021; 11:4014. [PMID: 33597551] [PMCID: PMC7889910] [DOI: 10.1038/s41598-021-83184-2] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Received: 09/15/2020] [Accepted: 01/29/2021] [Indexed: 01/16/2023] Open
Abstract
Deep learning is an emerging tool that is regularly used for disease diagnosis in the medical field. A new research direction has been developed for the detection of early-stage gastric cancer, where computer-aided diagnosis (CAD) systems reduce the mortality rate due to their effectiveness. In this study, we proposed a new feature-extraction method that uses a stacked sparse autoencoder to extract discriminative features from unlabeled breath-sample data. A Softmax classifier was then integrated into the proposed feature-extraction method to classify gastric cancer from the breath samples. Precisely, we identified fifty peaks in each spectrum to distinguish early gastric cancer (EGC), advanced gastric cancer (AGC), and healthy persons. This CAD system reduces the distance between the input and output by learning the features and preserves the structure of the input dataset of breath samples. After the completion of unsupervised training, the autoencoders were cascaded with a Softmax classifier to develop a deep stacked sparse autoencoder neural network. Finally, fine-tuning of the developed network was carried out with labeled training data to make the model more reliable and repeatable. The proposed deep stacked sparse autoencoder neural network architecture exhibits excellent results, with an overall accuracy of 98.7% for advanced gastric cancer classification and 97.3% for early gastric cancer detection using breath analysis. Moreover, the developed model produces excellent recall, precision, and F-score values, making it suitable for clinical application.
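The sparsity in a sparse autoencoder is commonly enforced with a KL-divergence penalty that pushes each hidden unit's mean activation toward a small target rho. The sketch below shows that standard penalty on hypothetical activation data; the abstract does not state the paper's exact regularizer, so this is a generic formulation, not the authors' code.

```python
import numpy as np

def kl_sparsity_penalty(activations, rho=0.05):
    """KL-divergence sparsity term used in sparse autoencoders: penalizes
    each hidden unit whose mean (sigmoid) activation drifts from rho."""
    rho_hat = np.clip(activations.mean(axis=0), 1e-8, 1 - 1e-8)
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

rng = np.random.default_rng(3)
# Hypothetical activations of 10 hidden units over 100 breath spectra.
dense = rng.uniform(0.4, 0.6, size=(100, 10))   # units active about half the time
sparse = rng.uniform(0.0, 0.1, size=(100, 10))  # units mostly silent, near rho

penalty_dense = kl_sparsity_penalty(dense)      # large: far from the target
penalty_sparse = kl_sparsity_penalty(sparse)    # small: close to the target
```

During training, this penalty is added (with a weight) to the reconstruction loss of each layer; the fine-tuning stage described in the abstract then adjusts the whole stack with the labeled data.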
150
Yu J, Liu G. Extracting and inserting knowledge into stacked denoising auto-encoders. Neural Netw 2021; 137:31-42. [PMID: 33545610] [DOI: 10.1016/j.neunet.2021.01.010] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Received: 12/29/2019] [Revised: 11/28/2020] [Accepted: 01/14/2021] [Indexed: 10/22/2022]
Abstract
Deep neural networks (DNNs) with a complex structure and multiple nonlinear processing units have achieved great success in feature learning for image and visualization analysis. Due to the "black box" interpretability problem of DNNs, however, there are still many obstacles to their application in various real-world cases. This paper proposes a new DNN model, knowledge-based deep stacked denoising auto-encoders (KBSDAE), which inserts knowledge (i.e., confidence and classification rules) into the deep network structure. This model not only offers a good understanding of the representations learned by the deep network but also improves the learning performance of the stacked denoising auto-encoder (SDAE). A knowledge discovery algorithm is proposed to extract confidence rules that interpret the layerwise network (i.e., the denoising auto-encoder (DAE)). A symbolic language is developed to describe the deep network and is shown to be suitable for representing quantitative reasoning in a deep network. Inserting confidence rules into the deep network improves the feature learning of DAEs, while the classification rules extracted from the data offer a novel method for knowledge insertion into the classification layer of the SDAE. Testing results of KBSDAE on various benchmark data indicate that the proposed method not only effectively extracts knowledge from the deep network but also shows better feature-learning performance than typical DNNs (e.g., SDAE).
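Rule extraction from a trained unit can be illustrated in the M-of-N style long used for knowledge extraction from neural networks: keep only strongly weighted inputs as antecedent literals, then estimate how often the rule's firing agrees with the unit's. The sketch below is purely illustrative (the unit's weights, thresholds, and rule template are invented, not KBSDAE's actual algorithm).

```python
import numpy as np

rng = np.random.default_rng(4)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Hypothetical weights of one DAE hidden unit over 8 binary inputs.
w = np.array([2.1, 1.9, 2.0, 0.1, -0.1, 0.05, -2.2, 0.0])
b = -3.0

# Keep only strongly weighted inputs as antecedent literals.
strong = np.where(np.abs(w) > 1.0)[0]     # indices 0, 1, 2 (positive), 6 (negative)
pos = [i for i in strong if w[i] > 0]

def unit_fires(x, thresh=0.5):
    return sigmoid(x @ w + b) > thresh

# Candidate rule: "at least 2 of the positive literals are on AND the
# negative literal is off => unit fires". Estimate its confidence on samples.
X = rng.integers(0, 2, size=(1000, 8)).astype(float)
rule = (X[:, pos].sum(axis=1) >= 2) & (X[:, 6] == 0)
fires = np.array([unit_fires(x) for x in X])
confidence = (rule & fires).sum() / max(rule.sum(), 1)
```

A rule whose confidence is high enough would be kept as a symbolic description of the unit; inserting knowledge works in the opposite direction, initializing or constraining weights so that a given rule holds.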
Affiliation(s)
- Jianbo Yu
- School of Mechanical Engineering, Tongji University, Shanghai 201804, China
- Guoliang Liu
- School of Mechanical Engineering, Tongji University, Shanghai 201804, China