1
Gunasundari C, Selva Bhuvaneswari K. A novel approach for the detection of brain tumor and its classification via independent component analysis. Sci Rep 2025;15:8252. PMID: 40064997; PMCID: PMC11894048; DOI: 10.1038/s41598-025-87934-4. Received 08/14/2024; accepted 01/23/2025. Open access.
Abstract
A brain tumor is regarded as one of the deadliest types of cancer due to its intricate nature, which is why patients need the best possible diagnosis and treatment options. With the help of machine vision, neurologists can now perform a more accurate and faster diagnosis. Suitable image-processing methods for brain segmentation have been lacking; recently, neural network models have been shown to outperform other approaches. Unfortunately, due to model complexity, accurate brain segmentation in real images has remained infeasible. The main objective is to develop a novel method for analyzing brain tumors using component analysis. The proposed model combines a deep neural network with an image-processing framework and is divided into several phases: a mapping stage, a data augmentation stage, and a tumor discovery stage. The data augmentation stage involves training a CNN to identify the regions of the image that overlap with the tumor space marker. The DCNN's predicted performance is compared with the test result. The third stage focuses on training a deep neural network and an SVM. The model achieved 99% accuracy and a sensitivity of 0.973, and it is primarily intended for identifying brain tumors.
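Figures like the 99% accuracy and 0.973 sensitivity quoted above come from confusion-matrix counts. A minimal sketch of how such metrics are computed (a generic illustration, not the authors' code; the labels below are purely hypothetical):

```python
def accuracy_and_sensitivity(y_true, y_pred, positive=1):
    """Overall accuracy and sensitivity (recall on the positive
    class, e.g. 'tumor') from paired label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    sensitivity = tp / (tp + fn)  # fraction of true tumor cases detected
    return accuracy, sensitivity

# Hypothetical labels: 1 = tumor, 0 = healthy
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 0, 1, 1]
acc, sens = accuracy_and_sensitivity(y_true, y_pred)
```

With the toy labels above, both metrics evaluate to 0.75; the paper's figures would come from its own test split.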
Affiliation(s)
- C Gunasundari
- School of Computing, SRM Institute of Science and Technology, Tiruchirappalli, India.
- K Selva Bhuvaneswari
- Department of Computer Science and Engineering, University College of Engineering Kancheepuram, Kancheepuram, India
2
Berghout T. The Neural Frontier of Future Medical Imaging: A Review of Deep Learning for Brain Tumor Detection. J Imaging 2024;11:2. PMID: 39852315; PMCID: PMC11766058; DOI: 10.3390/jimaging11010002. Received 12/05/2024; revised 12/21/2024; accepted 12/23/2024. Open access.
Abstract
Brain tumor detection is crucial in medical research due to high mortality rates and treatment challenges. Early and accurate diagnosis is vital for improving patient outcomes; however, traditional methods, such as manual Magnetic Resonance Imaging (MRI) analysis, are often time-consuming and error-prone. The rise of deep learning has led to advanced models for automated brain tumor feature extraction, segmentation, and classification. Despite these advancements, comprehensive reviews synthesizing recent findings remain scarce. By analyzing over 100 research papers from the past half-decade (2019-2024), this review fills that gap, exploring the latest methods and paradigms, summarizing key concepts, challenges, and datasets, and offering insights into future directions for brain tumor detection using deep learning. This review also incorporates an analysis of previous reviews and targets three main aspects: feature extraction, segmentation, and classification. The results revealed that research primarily focuses on Convolutional Neural Networks (CNNs) and their variants, with a strong emphasis on transfer learning using pre-trained models. Other methods, such as Generative Adversarial Networks (GANs) and Autoencoders, are used for feature extraction, while Recurrent Neural Networks (RNNs) are employed for time-sequence modeling. Some models integrate with Internet of Things (IoT) frameworks or federated learning for real-time diagnostics and privacy, often paired with optimization algorithms. However, the adoption of eXplainable AI (XAI) remains limited, despite its importance in building trust in medical diagnostics. Finally, this review outlines future opportunities, focusing on image quality, underexplored deep learning techniques, expanding datasets, and exploring deeper learning representations and model behavior, such as recurrent expansion, to advance medical imaging diagnostics.
Affiliation(s)
- Tarek Berghout
- Laboratory of Automation and Manufacturing Engineering, Department of Industrial Engineering, Batna 2 University, Batna 05000, Algeria
3
Li B, Liu H, Zhao M, Zhang X, Huang P, Chen X, Lin J. Carboxylesterase Activatable Molecular Probe for Personalized Treatment Guidance by Analyte-Induced Molecular Transformation. Angew Chem Int Ed Engl 2024;63:e202404093. PMID: 38727540; DOI: 10.1002/anie.202404093. Received 02/28/2024.
Abstract
Accurate visualization of the tumor microenvironment is of great significance for personalized medicine. Here, we develop a near-infrared (NIR) fluorescence/photoacoustic (FL/PA) dual-mode molecular probe (denoted NIR-CE) for distinguishing tumors based on carboxylesterase (CE) level by an analyte-induced molecular transformation (AIMT) strategy. The recognition moiety for CE activity is the acetyl unit of NIR-CE, generating the pre-product, NIR-CE-OH, which undergoes spontaneous hydrogen atom exchange between the nitrogen atom in the indole group and the phenol hydroxyl group, eventually transforming into NIR-CE-H. In cellular experiments and in vivo blind studies, human hepatoma cells and tumors with high levels of CE were successfully distinguished by both NIR FL and PA imaging. Our findings provide a new molecular imaging strategy for personalized treatment guidance.
Affiliation(s)
- Benhao Li
- Marshall Laboratory of Biomedical Engineering, International Cancer Center, Laboratory of Evolutionary Theranostics (LET), School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, 518055, China
- Departments of Diagnostic Radiology, Surgery, Chemical and Biomolecular Engineering, and Biomedical Engineering, Yong Loo Lin School of Medicine and Faculty of Engineering, National University of Singapore, Singapore, 119074, Singapore
- Clinical Imaging Research Centre, Centre for Translational Medicine, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, 117599, Singapore
- Nanomedicine Translational Research Program, NUS Centre for Nanomedicine, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, 117597, Singapore
- Hengke Liu
- Marshall Laboratory of Biomedical Engineering, International Cancer Center, Laboratory of Evolutionary Theranostics (LET), School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, 518055, China
- Mengyao Zhao
- Departments of Diagnostic Radiology, Surgery, Chemical and Biomolecular Engineering, and Biomedical Engineering, Yong Loo Lin School of Medicine and Faculty of Engineering, National University of Singapore, Singapore, 119074, Singapore
- Clinical Imaging Research Centre, Centre for Translational Medicine, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, 117599, Singapore
- Nanomedicine Translational Research Program, NUS Centre for Nanomedicine, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, 117597, Singapore
- Xinming Zhang
- Marshall Laboratory of Biomedical Engineering, International Cancer Center, Laboratory of Evolutionary Theranostics (LET), School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, 518055, China
- Peng Huang
- Marshall Laboratory of Biomedical Engineering, International Cancer Center, Laboratory of Evolutionary Theranostics (LET), School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, 518055, China
- Xiaoyuan Chen
- Departments of Diagnostic Radiology, Surgery, Chemical and Biomolecular Engineering, and Biomedical Engineering, Yong Loo Lin School of Medicine and Faculty of Engineering, National University of Singapore, Singapore, 119074, Singapore
- Clinical Imaging Research Centre, Centre for Translational Medicine, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, 117599, Singapore
- Nanomedicine Translational Research Program, NUS Centre for Nanomedicine, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, 117597, Singapore
- Theranostics Center of Excellence (TCE), Yong Loo Lin School of Medicine, National University of Singapore, 11 Biopolis Way, Helios, Singapore, 138667
- Institute of Molecular and Cell Biology, Agency for Science, Technology, and Research (A*STAR), Singapore, 138673, Singapore
- Jing Lin
- Marshall Laboratory of Biomedical Engineering, International Cancer Center, Laboratory of Evolutionary Theranostics (LET), School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, 518055, China
4
Lee JH, Lee U, Yoo JH, Lee TS, Jung JH, Kim HS. AraDQ: an automated digital phenotyping software for quantifying disease symptoms of flood-inoculated Arabidopsis seedlings. Plant Methods 2024;20:44. PMID: 38493119; PMCID: PMC10943777; DOI: 10.1186/s13007-024-01171-w. Received 02/07/2023; accepted 03/09/2024.
Abstract
BACKGROUND Plant scientists have largely relied on pathogen growth assays and/or transcript analysis of stress-responsive genes for quantification of disease severity and susceptibility. These methods are destructive to plants, labor-intensive, and time-consuming, thereby limiting their application in real-time, large-scale studies. Image-based plant phenotyping is an alternative approach that enables automated measurement of various symptoms. However, most of the currently available plant image analysis tools require a specific hardware platform and vendor-specific software packages, and thus are not suited for researchers who are not primarily focused on plant phenotyping. In this study, we aimed to develop a digital phenotyping tool to enhance the speed, accuracy, and reliability of disease quantification in Arabidopsis. RESULTS Here, we present the Arabidopsis Disease Quantification (AraDQ) image analysis tool for examination of flood-inoculated Arabidopsis seedlings grown on plates containing plant growth media. It is a cross-platform application program with a user-friendly graphical interface that contains highly accurate deep neural networks for object detection and segmentation. The only prerequisite is that the input image should contain a fixed-size 24-color balance card placed next to the objects of interest on a white background, to ensure reliable and reproducible results regardless of the image acquisition method. The image processing pipeline automatically calculates 10 different color and morphological parameters for individual seedlings in the given image, and disease-associated phenotypic changes can be easily assessed by comparing plant images captured before and after infection. We conducted two case studies involving bacterial and plant mutants with reduced virulence and disease resistance capabilities, respectively, and thereby demonstrated that AraDQ can capture subtle changes in plant color and morphology with a high level of sensitivity.
CONCLUSIONS AraDQ offers a simple, fast, and accurate approach for image-based quantification of plant disease symptoms using various parameters. Its fully automated pipeline neither requires prior image processing nor costly hardware setups, allowing easy implementation of the software by researchers interested in digital phenotyping of diseased plants.
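As a rough illustration of the kind of per-seedling measurements such a pipeline reports, the sketch below computes one color parameter (mean RGB) and one morphological parameter (pixel area) from a segmentation mask. This is a simplified, hypothetical example, not AraDQ's actual implementation; the image and mask values are invented:

```python
def seedling_parameters(image, mask):
    """Mean RGB color and area (pixel count) of one segmented seedling.

    image: H x W grid of (r, g, b) tuples; mask: H x W grid of 0/1 flags.
    A real pipeline would add further color and shape descriptors.
    """
    pixels = [image[i][j]
              for i in range(len(mask))
              for j in range(len(mask[0])) if mask[i][j]]
    area = len(pixels)  # morphological parameter: seedling size in pixels
    mean_rgb = tuple(sum(p[k] for p in pixels) / area for k in range(3))
    return mean_rgb, area

# Hypothetical 2x2 image with two seedling pixels in the left column
img = [[(100, 200, 50), (0, 0, 0)],
       [(110, 190, 60), (0, 0, 0)]]
msk = [[1, 0],
       [1, 0]]
```

Comparing such parameter vectors before and after infection is what surfaces the disease-associated color and morphology shifts described above.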
Grants
- Grant No. 2022R1C1C1012137, The National Research Foundation of Korea
- Grant No. 421002-04, The Korea Institute of Planning and Evaluation for Technology in Food, Agriculture, and Forestry (IPET) and Korea Smart Farm R&D (KosFarm) through the Smart Farm Innovation Technology Development Program, funded by the Ministry of Agriculture, Food and Rural Affairs (MAFRA) and Ministry of Science and ICT (MSIT), Rural Development Administration (RDA)
Affiliation(s)
- Jae Hoon Lee
- Department of Agricultural Biotechnology, Seoul National University, Seoul, 08826, Republic of Korea
- Research Institute of Agriculture and Life Sciences, Seoul National University, Seoul, 08826, Republic of Korea
- Unseok Lee
- Smart Farm Research Center, Korea Institute of Science and Technology, Gangneung, 25451, Republic of Korea
- Ji Hye Yoo
- Smart Farm Research Center, Korea Institute of Science and Technology, Gangneung, 25451, Republic of Korea
- Taek Sung Lee
- Smart Farm Research Center, Korea Institute of Science and Technology, Gangneung, 25451, Republic of Korea
- Je Hyeong Jung
- Smart Farm Research Center, Korea Institute of Science and Technology, Gangneung, 25451, Republic of Korea
- Hyoung Seok Kim
- Smart Farm Research Center, Korea Institute of Science and Technology, Gangneung, 25451, Republic of Korea.
5
Kifle N, Teti S, Ning B, Donoho DA, Katz I, Keating R, Cha RJ. Pediatric Brain Tissue Segmentation Using a Snapshot Hyperspectral Imaging (sHSI) Camera and Machine Learning Classifier. Bioengineering (Basel) 2023;10:1190. PMID: 37892919; PMCID: PMC10603997; DOI: 10.3390/bioengineering10101190. Received 09/16/2023; revised 10/06/2023; accepted 10/11/2023. Open access.
Abstract
Pediatric brain tumors are the second most common childhood cancer, accounting for one in four childhood cancer cases. Brain tumor resection surgery remains the most common treatment option for brain cancer. While assessing tumor margins intraoperatively, surgeons must send tissue samples for biopsy, which can be time-consuming and not always accurate or helpful. Snapshot hyperspectral imaging (sHSI) cameras can capture scenes beyond the human visual spectrum and provide real-time guidance; here, we aim to segment healthy brain tissue from lesions in pediatric patients undergoing brain tumor resection. With institutional review board approval (Pro00011028), 139 red-green-blue (RGB), 279 visible, and 85 infrared sHSI images were collected from four subjects with the system integrated into an operating microscope. A random forest classifier was used for data analysis. The RGB, infrared sHSI, and visible sHSI models achieved average intersection-over-union (IoU) scores of 0.76, 0.59, and 0.57, respectively; for tumor segmentation, the RGB model achieved a specificity of 0.996, followed by the infrared and visible sHSI models at 0.93 and 0.91, respectively. Despite the small dataset, expected for pediatric cases, our research leveraged sHSI technology to successfully segment healthy brain tissue from lesions with high specificity during pediatric brain tumor resection procedures.
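The IoU and specificity figures above follow the standard pixel-wise definitions. A small sketch of both metrics over flat binary label sequences (a generic illustration, not the study's code):

```python
def iou_and_specificity(pred, truth):
    """IoU and specificity for binary segmentation.

    pred, truth: flat sequences of 0/1 labels (1 = tumor pixel).
    """
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    tn = sum(1 for p, t in zip(pred, truth) if not p and not t)
    iou = tp / (tp + fp + fn)      # intersection over union
    specificity = tn / (tn + fp)   # true-negative rate
    return iou, specificity
```

For example, pred = [1, 1, 0, 0, 0] against truth = [1, 0, 0, 0, 0] gives an IoU of 0.5 and a specificity of 0.75.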
Affiliation(s)
- Naomi Kifle
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Hospital, Washington, DC 20010, USA; (N.K.); (I.K.)
- Saige Teti
- Department of Neurosurgery, Children’s National Hospital, Washington, DC 20010, USA; (S.T.); (D.A.D.)
- Bo Ning
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Hospital, Washington, DC 20010, USA; (N.K.); (I.K.)
- Daniel A. Donoho
- Department of Neurosurgery, Children’s National Hospital, Washington, DC 20010, USA; (S.T.); (D.A.D.)
- Itai Katz
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Hospital, Washington, DC 20010, USA; (N.K.); (I.K.)
- Robert Keating
- Department of Neurosurgery, Children’s National Hospital, Washington, DC 20010, USA; (S.T.); (D.A.D.)
- Richard Jaepyeong Cha
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Hospital, Washington, DC 20010, USA; (N.K.); (I.K.)
- Department of Pediatrics, George Washington University School of Medicine, Washington, DC 20010, USA
6
Multiclass tumor identification using combined texture and statistical features. Med Biol Eng Comput 2023;61:45-59. PMID: 36323980; DOI: 10.1007/s11517-022-02687-w. Received 04/30/2022; accepted 10/02/2022.
Abstract
Early detection and diagnosis of brain tumors are essential for early intervention and, eventually, successful treatment plans leading either to a full recovery or to an increase in the patient's lifespan. However, diagnosis of brain tumors is not an easy task, since it requires highly skilled professionals, making the procedure both costly and time-consuming. The diagnosis process relying on MR images becomes even harder in the presence of objects that are similar in density, size, and shape. No matter how skilled professionals are, their task is still prone to human error. The main aim of this work is to propose a system that can automatically classify and diagnose glioma brain tumors into one of four tumor types: (1) necrosis, (2) edema, (3) enhancing, and (4) non-enhancing. In this paper, we propose combined texture features, based on the discrete wavelet transform (DWT), and first- and second-order statistical features for the accurate classification and diagnosis of multiclass glioma tumors. Four well-known classifiers, namely support vector machines (SVM), random forest (RF), multilayer perceptron (MLP), and naïve Bayes (NB), are used for classification. The BraTS 2018 dataset is used for the experiments, and with the combined DWT and statistical features, the RF classifier achieved the highest average accuracy, whether for separate or combined modalities. The highest average accuracies reported in this paper are 89.59% and 90.28% for HGG and LGG, respectively. It has also been observed that the proposed method outperforms similar existing methods reported in the extant literature.
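To illustrate the feature family described above, the sketch below computes a single-level 2D Haar DWT (the simplest wavelet, chosen here for brevity; the paper does not specify this exact pipeline) and a few first-order statistics of a subband. Everything here is a generic, self-contained example, not the authors' code:

```python
import math

def haar_dwt2(img):
    """Single-level 2D Haar DWT of an even-sized grayscale image
    (list of row lists). Returns the LL, LH, HL, HH subbands."""
    def pairwise(vec):
        # Orthonormal Haar pair: averages (low-pass) and differences (high-pass)
        lo = [(vec[i] + vec[i + 1]) / math.sqrt(2) for i in range(0, len(vec), 2)]
        hi = [(vec[i] - vec[i + 1]) / math.sqrt(2) for i in range(0, len(vec), 2)]
        return lo, hi
    # Transform rows first, then columns of each intermediate result.
    rows_lo, rows_hi = zip(*(pairwise(row) for row in img))
    def cols(mat):
        lo_hi = [pairwise(col) for col in zip(*mat)]
        lo = list(zip(*(x[0] for x in lo_hi)))
        hi = list(zip(*(x[1] for x in lo_hi)))
        return lo, hi
    ll, lh = cols(rows_lo)
    hl, hh = cols(rows_hi)
    return ll, lh, hl, hh

def first_order_stats(band):
    """Mean, variance, and energy of one subband: a typical
    first-order feature triple fed to a classifier."""
    vals = [v for row in band for v in row]
    n = len(vals)
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / n
    energy = sum(v * v for v in vals)
    return mean, var, energy
```

In a pipeline like the one described, such per-subband statistics (plus second-order texture measures) would be concatenated into a feature vector and passed to the SVM/RF/MLP/NB classifiers.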
7
Tabari A, Chan SM, Omar OMF, Iqbal SI, Gee MS, Daye D. Role of Machine Learning in Precision Oncology: Applications in Gastrointestinal Cancers. Cancers (Basel) 2022;15:63. PMID: 36612061; PMCID: PMC9817513; DOI: 10.3390/cancers15010063. Received 11/23/2022; revised 12/14/2022; accepted 12/20/2022. Open access.
Abstract
Gastrointestinal (GI) cancers, consisting of a wide spectrum of pathologies, have become a prominent health issue globally. Although medical imaging plays a crucial role in the clinical workflow of cancers, standard evaluation of different imaging modalities may provide limited information. Accurate tumor detection, characterization, and monitoring remain a challenge. Progress in quantitative imaging analysis techniques has resulted in "radiomics", a promising methodical tool that helps to personalize diagnosis and treatment optimization. Radiomics, a sub-field of computer vision analysis, is a burgeoning area of interest, especially in this era of precision medicine. In the field of oncology, radiomics has been described as a tool to aid in the diagnosis, classification, and categorization of malignancies and to predict outcomes using various endpoints. In addition, machine learning is a technique for analysis and prediction that learns from sample data, finds patterns in it, and applies them to new data; it has been increasingly applied in this field, particularly in image-based diagnosis. This review assesses the current landscape of radiomics and methodological processes in GI cancers (including gastric, colorectal, liver, pancreatic, neuroendocrine, GI stromal, and rectal cancers). We explain in a stepwise fashion the process from data acquisition and curation to segmentation and feature extraction. Furthermore, the applications of radiomics for diagnosis, staging, and assessment of tumor prognosis and treatment response across GI cancer types are explored. Finally, we discuss the existing challenges and limitations of radiomics in abdominal cancers and investigate future opportunities.
Affiliation(s)
- Azadeh Tabari
- Department of Radiology, Massachusetts General Hospital, 55 Fruit Street, Boston, MA 02114, USA
- Harvard Medical School, Boston, MA 02115, USA
- Shin Mei Chan
- Yale University School of Medicine, 330 Cedar Street, New Haven, CT 06510, USA
- Omar Mustafa Fathy Omar
- Center for Vascular Biology, University of Connecticut Health Center, Farmington, CT 06030, USA
- Shams I. Iqbal
- Department of Radiology, Massachusetts General Hospital, 55 Fruit Street, Boston, MA 02114, USA
- Harvard Medical School, Boston, MA 02115, USA
- Michael S. Gee
- Department of Radiology, Massachusetts General Hospital, 55 Fruit Street, Boston, MA 02114, USA
- Harvard Medical School, Boston, MA 02115, USA
- Dania Daye
- Department of Radiology, Massachusetts General Hospital, 55 Fruit Street, Boston, MA 02114, USA
- Harvard Medical School, Boston, MA 02115, USA
8
Ghasempour E, Hesami S, Movahed E, Keshel SH, Doroudian M. Mesenchymal stem cell-derived exosomes as a new therapeutic strategy in the brain tumors. Stem Cell Res Ther 2022;13:527. PMID: 36536420; PMCID: PMC9764546; DOI: 10.1186/s13287-022-03212-4. Received 12/25/2021; accepted 12/05/2022. Open access.
Abstract
Brain tumors are among the deadliest cancers, leading to many deaths in children and adults. Surgery, chemotherapy, and radiotherapy are available options for brain tumor treatment. However, these methods are not able to eradicate cancer cells. The blood-brain barrier (BBB) is one of the most important obstacles in treating brain tumors, as it prevents adequate drug delivery to brain tissue. The connections between different brain regions are heterogeneous, which creates many challenges in treatment. Mesenchymal stem cells (MSCs) migrate to brain tumor cells and exert anti-tumor effects by delivering cytotoxic compounds. They possess strong regenerative properties and also support the immune system. MSC-based therapy involves cell replacement and the release of various vesicles, including exosomes. Exosomes have received particular attention due to their excellent stability and their lower immunogenicity and toxicity compared to cells. Exosomes derived from MSCs can form the basis of a powerful therapeutic strategy for different diseases and are a hopeful candidate for cell-based and cell-free regenerative medicine. These nanoparticles contain nucleic acids, proteins, lipids, microRNAs, and other biologically active substances. Many studies show that individual microRNAs can prevent angiogenesis, migration, and metastasis in glioblastoma. These exosomes can act as suitable nanoparticle carriers for therapeutic applications in brain tumors by passing through the BBB. In this review, we discuss potential applications of MSCs and their exosomes in the treatment of brain tumors.
Affiliation(s)
- Elham Ghasempour
- Department of Tissue Engineering and Applied Cell Sciences, School of Advanced Technologies in Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Shilan Hesami
- Department of Tissue Engineering and Applied Cell Sciences, School of Advanced Technologies in Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Elaheh Movahed
- Wadsworth Center, New York State Department of Health, Albany, NY, USA
- Saeed Heidari Keshel
- Department of Tissue Engineering and Applied Cell Sciences, School of Advanced Technologies in Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mohammad Doroudian
- Department of Cell and Molecular Sciences, Faculty of Biological Sciences, Kharazmi University, Tehran, Iran
9
Pan T, Yang Y. Design of a Classification Recognition Model for Bone and Muscle Anatomical Imaging Based on Convolutional Neural Network and 3D Magnetic Resonance. Appl Bionics Biomech 2022;2022:4393154. PMID: 35637747; PMCID: PMC9146807; DOI: 10.1155/2022/4393154. Received 03/03/2022; accepted 04/22/2022. Open access.
Abstract
In this paper, we use convolutional neural networks to conduct in-depth research and analysis on the classification and recognition of bone and muscle anatomical imaging from 3D magnetic resonance, and we design corresponding models for practical applications. A series of medical image segmentation models based on convolutional neural networks is proposed. First, a separated attention mechanism is introduced in the model, which divides the input data into multiple paths, applies self-attention weights to adjacent data paths, and finally fuses the weighted values to form the basic convolutional block. This structure has multiple parallel data paths, which increases the width of the network and therefore improves the feature-extraction capability of the model. Then, this paper proposes a bidirectional feature pyramid for the medical image segmentation task, which has top-down and bottom-up data paths and, together with skip connections, allows feature maps at different scales to interact fully. After that, a new activation function, Mish, is introduced, and its advantages over other activation functions are experimentally demonstrated. Finally, because medical image annotations are not easy to obtain, a semi-supervised learning method is introduced into the model training process, and its effectiveness is experimentally demonstrated. The joint network first denoises the input image, then performs super-resolution mapping on the denoised feature map, and finally produces the super-resolution 3D-MR image. We update the network by combining the denoising loss and the super-resolution loss during joint training. The experimental results show that the joint network that performs denoising first and then super-resolution outperforms joint networks with other task orderings, as well as methods that perform the two tasks separately; the proposed method achieves the best overall performance.
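The Mish activation mentioned above has a closed form, x·tanh(softplus(x)). A minimal sketch (numerically naive for very large |x|, where a real implementation would guard the exponential):

```python
import math

def mish(x):
    """Mish activation: x * tanh(softplus(x)), where
    softplus(x) = ln(1 + e^x). Smooth and non-monotonic,
    unbounded above and bounded below."""
    softplus = math.log1p(math.exp(x))
    return x * math.tanh(softplus)
```

It behaves like the identity for large positive inputs, passes through zero at the origin, and lets small negative signals through with a bounded negative dip, which is one reason it is often compared favorably to ReLU.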
Affiliation(s)
- Ting Pan
- Wuhan Fourth Hospital; Puai Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei 430000, China
- Yang Yang
- Wuhan Fourth Hospital; Puai Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei 430000, China
10
Öksüz C, Urhan O, Güllü MK. Brain tumor classification using the fused features extracted from expanded tumor region. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2021.103356.
11
Neighbourhood component analysis and deep feature-based diagnosis model for middle ear otoscope images. Neural Comput Appl 2022. DOI: 10.1007/s00521-021-06810-0.
12
Ali M, Hussain Shah J, Attique Khan M, Alhaisoni M, Tariq U, Akram T, Jin Kim Y, Chang B. Brain Tumor Detection and Classification Using PSO and Convolutional Neural Network. Comput Mater Contin 2022;73:4501-4518. DOI: 10.32604/cmc.2022.030392.
13
Aziz MJ, Amiri Tehrani Zade A, Farnia P, Alimohamadi M, Makkiabadi B, Ahmadian A, Alirezaie J. Accurate Automatic Glioma Segmentation in Brain MRI Images Based on CapsNet. Annu Int Conf IEEE Eng Med Biol Soc 2021;2021:3882-3885. PMID: 34892080; DOI: 10.1109/embc46164.2021.9630324.
Abstract
Glioma is a highly invasive type of brain tumor with an irregular morphology and blurred infiltrative borders that may affect different parts of the brain. Therefore, it is a challenging task to identify the exact boundaries of the tumor in an MR image. In recent years, deep learning-based Convolutional Neural Networks (CNNs) have gained popularity in the field of image processing and have been utilized for accurate image segmentation in medical applications. However, due to the inherent constraints of CNNs, tens of thousands of images are required for training, and collecting and annotating such a large number of images poses a serious challenge to their practical implementation. Here, for the first time, we have optimized a network based on the capsule neural network, called SegCaps, to achieve accurate glioma segmentation on MR images. We have compared our results with a similar experiment conducted using the commonly utilized U-Net. Both experiments were performed on the challenging BraTS2020 dataset. For U-Net, network training was performed on the entire dataset, whereas a subset containing only 20% of the whole dataset was used for SegCaps. To evaluate the results of our proposed method, the Dice Similarity Coefficient (DSC) was used. SegCaps and U-Net reached DSCs of 87.96% and 85.56% on glioma tumor core segmentation, respectively. SegCaps uses convolutional layers as its basic components and has the intrinsic capability to generalize to novel viewpoints. The network learns the spatial relationships between features using dynamic routing of capsules. These capabilities of the capsule neural network have led to a 3% improvement in glioma segmentation results with less data, while the network contains 95.4% fewer parameters than U-Net.
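The Dice Similarity Coefficient used for evaluation above is twice the overlap divided by the total size of the two masks. A minimal sketch over flat binary masks (a generic illustration, not the authors' code):

```python
def dice_coefficient(pred, truth):
    """Dice Similarity Coefficient between two binary masks,
    given as flat 0/1 sequences: 2|A ∩ B| / (|A| + |B|)."""
    intersection = sum(1 for p, t in zip(pred, truth) if p and t)
    total = sum(pred) + sum(truth)
    # Convention: two empty masks are treated as a perfect match.
    return 2.0 * intersection / total if total else 1.0
```

For example, masks [1, 1, 0, 0] and [1, 0, 1, 0] overlap in one pixel out of four foreground pixels total, giving a DSC of 0.5.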
14
Samani ZR, Parker D, Wolf R, Hodges W, Brem S, Verma R. Distinct tumor signatures using deep learning-based characterization of the peritumoral microenvironment in glioblastomas and brain metastases. Sci Rep 2021; 11:14469. [PMID: 34262079 PMCID: PMC8280204 DOI: 10.1038/s41598-021-93804-6] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Received: 03/29/2021] [Accepted: 06/30/2021] [Indexed: 11/25/2022]
Abstract
Tumor types are classically distinguished based on biopsies of the tumor itself, as well as a radiological interpretation using diverse MRI modalities. In the current study, the overarching goal is to demonstrate that primary (glioblastoma) and secondary (brain metastasis) malignancies can be differentiated based on the microstructure of the peritumoral region. This is achieved by exploiting the extracellular water differences between vasogenic edema and infiltrative tissue and training a convolutional neural network (CNN) on the Diffusion Tensor Imaging (DTI)-derived free water volume fraction. We obtained 85% accuracy in discriminating extracellular water differences between local patches in the peritumoral area of 66 glioblastoma and 40 metastatic patients in a cross-validation setting. On an independent test cohort consisting of 20 glioblastomas and 10 metastases, we achieved 93% accuracy in discriminating metastases from glioblastomas using majority voting on patches. This level of accuracy surpasses CNNs trained on other conventional DTI-based measures, such as fractional anisotropy (FA) and mean diffusivity (MD), which have been used in other studies. Additionally, the CNN captures peritumoral heterogeneity better than conventional texture features, including Gabor and radiomic features. Our results demonstrate that the extracellular water content of the peritumoral tissue, as captured by the free water volume fraction, best characterizes the differences between infiltrative and vasogenic peritumoral regions, paving the way for its use in classifying and benchmarking peritumoral tissue with varying degrees of infiltration.
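The patch-to-patient majority voting described above can be sketched in a few lines (an illustrative sketch, not the authors' code):

```python
from collections import Counter

def patient_level_call(patch_predictions):
    """Aggregate per-patch CNN labels into one patient-level diagnosis
    by majority voting (ties resolved by first-seen label)."""
    return Counter(patch_predictions).most_common(1)[0][0]

# e.g. 7 of 10 peritumoral patches flagged as glioblastoma
patches = ["glioblastoma"] * 7 + ["metastasis"] * 3
print(patient_level_call(patches))  # glioblastoma
```

Voting over many local patches makes the patient-level call robust to individual patch misclassifications.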
Affiliation(s)
- Zahra Riahi Samani
- Diffusion and Connectomics in Precision Healthcare Research Lab (DiCIPHR), Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Drew Parker
- Diffusion and Connectomics in Precision Healthcare Research Lab (DiCIPHR), Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Ronald Wolf
- Department of Radiology, Department of Neurosurgery, University of Pennsylvania, Philadelphia, PA, USA
- Wes Hodges
- Founder at Synaptive Medical, Toronto, ON, Canada
- Steven Brem
- Department of Radiology, Department of Neurosurgery, University of Pennsylvania, Philadelphia, PA, USA
- Ragini Verma
- Diffusion and Connectomics in Precision Healthcare Research Lab (DiCIPHR), Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA.
15
Sajjad M, Ramzan F, Khan MUG, Rehman A, Kolivand M, Fati SM, Bahaj SA. Deep convolutional generative adversarial network for Alzheimer's disease classification using positron emission tomography (PET) and synthetic data augmentation. Microsc Res Tech 2021; 84:3023-3034. [PMID: 34245203 DOI: 10.1002/jemt.23861] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Received: 10/10/2020] [Revised: 05/13/2021] [Accepted: 06/15/2021] [Indexed: 11/09/2022]
Abstract
With the evolution of deep learning technologies, computer vision tasks have achieved tremendous success in the biomedical domain. Supervised deep learning requires a large number of labeled examples, and obtaining such a large labeled dataset is challenging. This limited availability of data makes it difficult to build and improve an automated disease diagnosis model. To synthesize data and improve the diagnosis model's accuracy, we propose a novel approach for generating images of three different stages of Alzheimer's disease using deep convolutional generative adversarial networks. The proposed model outperforms existing approaches in synthesizing brain positron emission tomography images for all three stages of Alzheimer's disease: normal control (CN), mild cognitive impairment (MCI), and Alzheimer's disease (AD). Model performance was measured using a classification model that achieved an accuracy of 72% on synthetic images. We also report quantitative measures, namely the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM), achieving average PSNR scores of 82 for AD, 72 for CN, and 73 for MCI, and average SSIM scores of 25.6 for AD, 22.6 for CN, and 22.8 for MCI.
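The PSNR metric reported above compares a synthetic image against a real one; a minimal sketch on hypothetical flat pixel lists (illustrative, not the study's evaluation code):

```python
import math

def psnr(original, synthetic, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between a real and a synthetic
    image, each given as a flat list of pixel intensities."""
    mse = sum((o - s) ** 2 for o, s in zip(original, synthetic)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

print(round(psnr([100, 100], [110, 110]), 2))  # 28.13
```

Higher PSNR means the synthetic image deviates less, on average, from its real counterpart.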
Affiliation(s)
- Muhammad Sajjad
- National Center of Artificial Intelligence (NCAI), Al-Khawarizmi Institute of Computer Science (KICS), University of Engineering and Technology (UET), Lahore, Pakistan
- Farheen Ramzan
- Department of Computer Science, University of Engineering and Technology (UET), Lahore, Pakistan
- Muhammad Usman Ghani Khan
- National Center of Artificial Intelligence (NCAI), Al-Khawarizmi Institute of Computer Science (KICS), University of Engineering and Technology (UET), Lahore, Pakistan
- Department of Computer Science, University of Engineering and Technology (UET), Lahore, Pakistan
- Amjad Rehman
- Artificial Intelligence & Data Analytics (AIDA) Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Mahyar Kolivand
- Department of Medicine, University of Liverpool, Liverpool, UK
- Suliman Mohamed Fati
- Artificial Intelligence & Data Analytics (AIDA) Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Saeed Ali Bahaj
- MIS Department College of Business Administration, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia
16
Saba T, Akbar S, Kolivand H, Ali Bahaj S. Automatic detection of papilledema through fundus retinal images using deep learning. Microsc Res Tech 2021; 84:3066-3077. [PMID: 34236733 DOI: 10.1002/jemt.23865] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Received: 01/01/2021] [Revised: 04/22/2021] [Accepted: 05/29/2021] [Indexed: 11/09/2022]
Abstract
Papilledema is a condition of the retina in which the optic nerve is swollen owing to elevated intracranial pressure. Papilledema abnormalities such as retinal nerve fiber layer (RNFL) opacification may lead to blindness, and they can be observed in retinal images captured with a fundus camera. This paper presents a deep learning-based automated system that detects and grades papilledema through U-Net and Dense-Net architectures. The proposed approach has two main stages. First, the optic disc and its surrounding area in the fundus retinal image are localized and cropped as input to a Dense-Net, which classifies the optic disc as papilledema or normal. Second, each Dense-Net-classified papilledema image is preprocessed with a Gabor filter and fed to a U-Net to obtain the segmented vascular network, from which the vessel discontinuity index (VDI) and the vessel discontinuity index to disc proximity (VDIP) are calculated for grading. The VDI and VDIP are standard parameters for assessing the severity and grade of papilledema. The proposed system is evaluated on 60 papilledema and 40 normal fundus images taken from the STARE dataset. The experimental results for classification of papilledema through Dense-Net are strong in terms of sensitivity (98.63%), specificity (97.83%), and accuracy (99.17%). Similarly, the grading results for mild versus severe papilledema through U-Net are also strong in terms of sensitivity (99.82%), specificity (98.65%), and accuracy (99.89%). This deep learning-based automated detection and grading of papilledema for clinical purposes is, to the authors' knowledge, the first such effort in the state of the art.
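The reported sensitivity, specificity, and accuracy all derive from confusion-matrix counts; a sketch with hypothetical counts (not the study's actual confusion matrix):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # recall on the diseased class
    specificity = tn / (tn + fp)          # recall on the healthy class
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts for a 100-image test set (60 papilledema, 40 normal).
sens, spec, acc = diagnostic_metrics(tp=59, fp=1, tn=39, fn=1)
print(sens, spec, acc)
```

Reporting all three matters because accuracy alone can mask a poor score on the rarer class.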
Affiliation(s)
- Tanzila Saba
- Artificial Intelligence & Data Analytics (AIDA) Lab CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia
- Shahzad Akbar
- Department of Computing, Riphah International University, Faisalabad Campus, Faisalabad, 38000, Pakistan
- Hoshang Kolivand
- School of Computer Science and Mathematics, Liverpool John Moores University, Liverpool, United Kingdom
- School of Computing and Digital Technologies, Staffordshire University, Staffordshire, United Kingdom
- Saeed Ali Bahaj
- MIS Department, College of Business Administration, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia
17
Jia Y, Ying Y, Feng J. Multi-Parameter Magnetic Resonance Imaging Fusion Technology Assists in Bone Diagnosis and Rehabilitation of Prostate Cancer. J Med Imaging Health Inform 2021. [DOI: 10.1166/jmihi.2021.3700] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/23/2022]
Abstract
Multi-parameter magnetic resonance imaging (mpMRI) has been widely used in the diagnosis and evaluation of prostate cancer and provides important guidance for its clinical diagnosis and treatment. This article studies the value of transrectal multiparametric ultrasound (mpUSS) in the diagnosis of clinically meaningful prostate cancer. 102 patients with high risk factors for prostate cancer were examined by mpUSS and mpMRI. Transrectal systematic biopsy (SB) of the prostate was regarded as the gold standard, and the diagnostic value of mpUSS, mpMRI, and mpUSS combined with mpMRI for clinically meaningful prostate cancer was analyzed. Of the 102 patients undergoing SB, 58 were diagnosed with prostate cancer. Among them, 43 cases were detected by mpUSS, 50 by mpMRI, 42 by mpUSS combined with mpMRI in series, and 56 by mpUSS combined with mpMRI in parallel. Grouped by Gleason score, the detection rate for clinically significant prostate cancer was 83.74% for mpUSS and 93.5% for mpMRI; the difference between the two was not statistically significant (P > 0.05). When the two examination methods were combined, however, the detection rate was 97.8%, significantly higher than either method alone. We therefore conclude that mpUSS can be used as an imaging test for the diagnosis of prostate cancer and has high application value. The detection rate of mpUSS combined with mpMRI for clinically meaningful prostate cancer is significantly higher than that of mpMRI alone, so the combination can serve as a technique for early diagnosis and can guide clinicians' early diagnosis and treatment of meaningful prostate cancer.
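The series/parallel combination of the two tests can be made concrete with a small sketch (illustrative only, not the study's analysis code):

```python
def combined_positive(uss_positive, mri_positive, mode="parallel"):
    """Combine two diagnostic tests: 'series' calls a case positive only
    when both tests agree; 'parallel' when either test is positive."""
    if mode == "series":
        return uss_positive and mri_positive
    return uss_positive or mri_positive

# Parallel combination raises detections: a case missed by mpUSS but
# caught by mpMRI still counts as positive.
print(combined_positive(False, True, mode="parallel"))  # True
print(combined_positive(False, True, mode="series"))    # False
```

This is why the parallel combination (56/58 detected) outperformed the series combination (42/58) in the study: series trades sensitivity for specificity, parallel does the reverse.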
Affiliation(s)
- Yuzhu Jia
- Department of Radiology, Tongde Hospital of Zhejiang Province, Hangzhou Zhejiang, 310012, China
- Yibo Ying
- Department of Radiology, The Second Hospital of Yinzhou, Ningbo, Zhejiang, 315192, China
- Jianju Feng
- Department of Radiology, Zhuji Affiliated Hospital of Shaoxing University, Zhuji, Zhejiang, 311800, China
18
A Comparative Assessment of Different Approaches of Segmentation and Classification Methods on Childhood Medulloblastoma Images. J Med Biol Eng 2021. [DOI: 10.1007/s40846-021-00612-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/21/2022]
19
Attallah O. CoMB-Deep: Composite Deep Learning-Based Pipeline for Classifying Childhood Medulloblastoma and Its Classes. Front Neuroinform 2021; 15:663592. [PMID: 34122031 PMCID: PMC8193683 DOI: 10.3389/fninf.2021.663592] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Received: 02/08/2021] [Accepted: 04/26/2021] [Indexed: 12/28/2022]
Abstract
Childhood medulloblastoma (MB) is a threatening malignant tumor affecting children all over the globe. It is believed to be the most common pediatric brain tumor causing death. Early and accurate classification of childhood MB and its classes is of great importance to help doctors choose a suitable treatment and observation plan, avoid tumor progression, and lower death rates. The current gold standard for diagnosing MB is histopathology of biopsy samples. However, manual analysis of such images is complicated, costly, time-consuming, and highly dependent on the expertise and skills of pathologists, which might cause inaccurate results. This study introduces a reliable computer-assisted pipeline called CoMB-Deep to automatically classify MB and its classes with high accuracy from histopathological images. A key challenge of the study is the lack of childhood MB datasets, especially for its four categories (defined by the WHO), and the scarcity of related studies. All relevant works were based on either deep learning (DL) or textural-analysis feature extraction, employed distinct features to accomplish the classification procedure, and mostly extracted only spatial features. CoMB-Deep, by contrast, blends the advantages of textural-analysis feature extraction techniques and DL approaches. It consists of a composite of DL techniques. Initially, it extracts deep spatial features from 10 convolutional neural networks (CNNs). It then performs a feature fusion step using the discrete wavelet transform (DWT), a texture analysis method capable of reducing the dimension of the fused features. Next, CoMB-Deep explores the best combination of fused features, enhancing the performance of the classification process using two search strategies, and then employs two feature selection techniques on the fused feature sets selected in the previous step. Finally, a bi-directional long short-term memory (Bi-LSTM) network, a DL-based approach, is utilized for the classification phase. CoMB-Deep supports two classification categories: a binary category for distinguishing between abnormal and normal cases and a multi-class category to identify the subclasses of MB. The results for both classification categories prove that CoMB-Deep is reliable and indicate that the feature sets selected using both search strategies enhanced the performance of the Bi-LSTM compared with individual spatial deep features. CoMB-Deep was compared with related studies to verify its competitiveness, and this comparison confirmed its robustness and outperformance. Hence, CoMB-Deep can help pathologists perform accurate diagnoses, reduce the misdiagnosis risks that could occur with manual diagnosis, accelerate the classification procedure, and decrease diagnosis costs.
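The DWT-based fusion step reduces the dimension of concatenated deep features; a toy single-level Haar transform conveys the idea (a sketch on a plain list, not the CoMB-Deep implementation, which fuses CNN feature vectors):

```python
SQRT2 = 2 ** 0.5

def haar_approximation(fused_features):
    """One level of a Haar discrete wavelet transform: keep only the
    approximation coefficients, halving the fused feature dimension.
    Assumes an even-length feature vector."""
    return [(fused_features[i] + fused_features[i + 1]) / SQRT2
            for i in range(0, len(fused_features), 2)]

features = [0.2, 0.4, 1.0, 1.0, 0.0, 0.6]
print(len(haar_approximation(features)))  # 3
```

Discarding the detail coefficients keeps the smoothed trend of the feature vector while halving its length, which is the dimensionality-reduction effect exploited in the fusion step.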
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
20
Rehman A, Saba T, Tariq U, Ayesha N. Deep Learning-Based COVID-19 Detection Using CT and X-Ray Images: Current Analytics and Comparisons. IT Prof 2021; 23:63-68. [PMID: 35582037 PMCID: PMC8864950 DOI: 10.1109/mitp.2020.3036820] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Received: 05/11/2020] [Accepted: 10/23/2020] [Indexed: 06/04/2023]
Abstract
Currently, the world faces the challenge of the novel coronavirus disease 2019 (COVID-19), and infected cases are increasing exponentially. COVID-19, caused by the SARS-CoV-2 virus, was declared a pandemic by the WHO in March 2020. As of 10 March 2021, more than 150 million people had been infected and 3 million had died. Researchers strive to understand the virus and recommend effective actions. An unprecedented spread of the pathogen is occurring, and a major effort is being made to tackle the epidemic. This article presents deep learning-based COVID-19 detection using CT and X-ray images, together with data analytics on the disease's spread worldwide. The article's research structure builds on a recent analysis of COVID-19 data and prospective research to systematize current resources and help researchers and practitioners use deep learning methodologies to build solutions for the COVID-19 pandemic.
Affiliation(s)
- Amjad Rehman
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Saudi Arabia
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Saudi Arabia
- Usman Tariq
- College of Computer Engineering and Science, Prince Sattam bin Abdulaziz University, Saudi Arabia
- Noor Ayesha
- School of Clinical Medicine, Zhengzhou University, China
21
Sadad T, Khan AR, Hussain A, Tariq U, Fati SM, Bahaj SA, Munir A. Internet of medical things embedding deep learning with data augmentation for mammogram density classification. Microsc Res Tech 2021; 84:2186-2194. [PMID: 33908111 DOI: 10.1002/jemt.23773] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Received: 08/11/2020] [Revised: 03/14/2021] [Accepted: 03/29/2021] [Indexed: 11/09/2022]
Abstract
Females are approximately half of the total population worldwide, and many of them are victims of breast cancer (BC). Computer-aided diagnosis (CAD) frameworks can help radiologists to determine breast density (BD), which in turn helps in precise BC detection. This research detects BD automatically from mammogram images on Internet of Medical Things (IoMT)-supported devices. Two pretrained deep convolutional neural network models, DenseNet201 and ResNet50, were applied through a transfer learning approach. A total of 322 mammogram images containing 106 fatty, 112 dense, and 104 glandular cases were obtained from the Mammographic Image Analysis Society dataset. Pruning of irrelevant regions and enhancement of target regions are performed during preprocessing. An overall classification accuracy of 90.47% on the BD task was accomplished with the DenseNet201 model. Such a framework is beneficial for identifying BD more rapidly, assisting radiologists and patients without delay.
Affiliation(s)
- Tariq Sadad
- Department of Computer Science & Software Engineering, International Islamic University, Islamabad, Pakistan
- Amjad Rehman Khan
- Artificial Intelligence & Data Analytics Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Ayyaz Hussain
- Department of Computer Science, Quaid-i-Azam University, Islamabad, Pakistan
- Usman Tariq
- College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Alkharj, Saudi Arabia
- Suliman Mohamed Fati
- Artificial Intelligence & Data Analytics Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Saeed Ali Bahaj
- MIS Department College of Business Administration, Prince Sattam bin Abdulaziz University, Alkharj, Saudi Arabia
- Asim Munir
- Department of Computer Science & Software Engineering, International Islamic University, Islamabad, Pakistan
22
Lee SH, Cho HH, Kwon J, Lee HY, Park H. Are radiomics features universally applicable to different organs? Cancer Imaging 2021; 21:31. [PMID: 33827699 PMCID: PMC8028225 DOI: 10.1186/s40644-021-00400-y] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Received: 04/23/2020] [Accepted: 03/26/2021] [Indexed: 01/14/2023]
Abstract
BACKGROUND Many studies have successfully identified radiomics features reflecting macroscale tumor features and the tumor microenvironment for various organs. There is increased interest in applying radiomics features found in a given organ to other organs. Here, we explored whether common radiomics features could be identified across target organs in vastly different environments. METHODS Four datasets of three organs were analyzed. One radiomics model was constructed from the training set (lungs, n = 401) and was further evaluated in three independent test sets spanning three organs (lungs, n = 59; kidneys, n = 48; and brains, n = 43). Intensity histograms derived from the whole organ were compared to establish organ-level differences. We constructed a radiomics score based on features selected from the training lung data over the tumor region. A total of 143 features were computed for each tumor. We adopted a feature selection approach that favored stable features which can also capture survival. The radiomics score was applied to independent test data from lung, kidney, and brain tumors, and whether the score could be used to separate high- and low-risk groups was evaluated. RESULTS Each organ showed a distinct pattern in the histogram and the derived parameters (mean and median) at the organ level. The radiomics score trained on the lung data of the tumor region included seven features, and the score was only effective in stratifying survival for other lung data, not for other organs such as the kidney and brain. Eliminating the lung-specific feature (2.5 percentile) from the radiomics score led to similar results. There were no common features between the training and test sets, but a common category of features (the texture category) was identified.
CONCLUSION Although the possibility of a generally applicable model cannot be excluded, we suggest that radiomics score models for survival are mostly specific to a given organ; applying them to other organs would require careful consideration of organ-specific properties.
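Stratifying patients into high- and low-risk groups by a radiomics score is commonly done with a median split; an illustrative sketch (the study's exact cut-off procedure may differ):

```python
def median_split(radiomics_scores):
    """Stratify patients into low/high risk groups by the median score,
    the usual first step before comparing survival curves."""
    ordered = sorted(radiomics_scores)
    n = len(ordered)
    if n % 2 == 0:
        median = (ordered[n // 2 - 1] + ordered[n // 2]) / 2
    else:
        median = ordered[n // 2]
    return [("high" if s > median else "low") for s in radiomics_scores]

print(median_split([0.1, 0.9, 0.4, 0.8]))  # ['low', 'high', 'low', 'high']
```

The resulting groups would then be compared with a survival analysis (e.g., a log-rank test) to check whether the score actually stratifies outcome in the new organ.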
Affiliation(s)
- Seung-Hak Lee
- Department of Electronic, Electrical and Computer Engineering, Sungkyunkwan University, Suwon, 16419, South Korea
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon, 16419, South Korea
- Core Research & Development Center, Korea University Ansan Hospital, Ansan, 15355, South Korea
- Hwan-Ho Cho
- Department of Electronic, Electrical and Computer Engineering, Sungkyunkwan University, Suwon, 16419, South Korea
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon, 16419, South Korea
- Junmo Kwon
- Department of Electronic, Electrical and Computer Engineering, Sungkyunkwan University, Suwon, 16419, South Korea
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon, 16419, South Korea
- Ho Yun Lee
- Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-Ro, Gangnam-Gu, Seoul, 06351, South Korea.
- Department of Health Sciences and Technology, SAIHST, Sungkyunkwan University, Seoul, 06351, South Korea.
- Hyunjin Park
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon, 16419, South Korea.
- School of Electronic and Electrical Engineering, Center for Neuroscience Imaging Research, Sungkyunkwan University, Suwon, 16419, South Korea.
23
Khan AR, Khan S, Harouni M, Abbasi R, Iqbal S, Mehmood Z. Brain tumor segmentation using K-means clustering and deep learning with synthetic data augmentation for classification. Microsc Res Tech 2021; 84:1389-1399. [PMID: 33524220 DOI: 10.1002/jemt.23694] [Citation(s) in RCA: 42] [Impact Index Per Article: 10.5] [Received: 02/29/2020] [Revised: 11/11/2020] [Accepted: 11/27/2020] [Indexed: 12/19/2022]
Abstract
Image processing plays a major role in neurologists' clinical diagnosis. Several imaging modalities are used for diagnosis, tumor segmentation, and classification. Magnetic resonance imaging (MRI) is favored among all modalities due to its noninvasive nature and better representation of internal tumor information; indeed, early diagnosis may increase the chances of survival. However, manual dissection and classification of brain tumors based on MRI is error-prone, time-consuming, and a formidable task. Consequently, this article presents a deep learning approach to classify brain tumors from MRI data to assist practitioners. The recommended method comprises three main phases: preprocessing, brain tumor segmentation using k-means clustering, and, finally, classification of tumors into their respective categories (benign/malignant) through a fine-tuned VGG19 (19-layer Visual Geometry Group) model. Moreover, for better classification accuracy, synthetic data augmentation is introduced to increase the available data size for classifier training. The proposed approach was evaluated on the BraTS 2015 benchmark dataset through rigorous experiments. The results endorse the effectiveness of the proposed strategy, which achieved better accuracy than previously reported state-of-the-art techniques.
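The k-means clustering step groups voxels by intensity; a toy 1-D version conveys the idea (a sketch under simplifying assumptions, not the paper's implementation, which works on preprocessed 2-D slices):

```python
def kmeans_1d(pixels, k=2, iters=25):
    """Toy 1-D k-means on pixel intensities: alternate between assigning
    each pixel to its nearest centroid and recomputing centroids as
    cluster means."""
    # Spread the initial centroids evenly across the intensity range.
    lo, hi = min(pixels), max(pixels)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# Bright "tumor" pixels separate cleanly from the dark background.
print(kmeans_1d([10, 12, 11, 200, 210, 205]))  # [11.0, 205.0]
```

Thresholding each voxel by its nearest centroid then yields a rough tumor/background mask that the classifier stage refines.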
Affiliation(s)
- Amjad Rehman Khan
- Artificial Intelligence and Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
- Siraj Khan
- Department of Computer Science, Islamia College University, Peshawar, Pakistan
- Majid Harouni
- Department of Computer Engineering, Dolatabad Branch, Islamic Azad University, Isfahan, Iran
- Rashid Abbasi
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Sichuan, China
- Sajid Iqbal
- Department of Computer Science, Bahauddin Zakariya University, Multan, Pakistan
- Zahid Mehmood
- Department of Computer Engineering, University of Engineering and Technology, Taxila, Pakistan
24
Rehman A. Light microscopic iris classification using ensemble multi-class support vector machine. Microsc Res Tech 2021; 84:982-991. [PMID: 33438285 DOI: 10.1002/jemt.23659] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Received: 01/15/2020] [Revised: 10/24/2020] [Accepted: 11/06/2020] [Indexed: 02/04/2023]
Abstract
Similar to other biometric systems based on fingerprints, faces, or DNA, iris classification can assist law enforcement agencies in identifying humans by matching an individual's iris against iris datasets. However, iris classification is challenging in real environments due to the complex and highly variable texture of the human iris. Accordingly, this article presents an improved Oriented FAST and Rotated BRIEF (ORB) descriptor with a Bag-of-Words model to extract distinct and robust features from the iris image, followed by an ensemble multi-class SVM to classify the iris. The proposed methodology consists of four main steps: first, iris image normalization and enhancement; second, localization of the iris region; third, iris feature extraction; and finally, iris classification using an ensemble multi-class support vector machine. For preprocessing of input images, histogram equalization, a Gaussian mask, and median filters are applied. The proposed technique is tested on two benchmark databases, CASIA-v1 and an iris image database, and achieved higher accuracy than other existing techniques reported in the state of the art.
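Of the preprocessing steps listed, histogram equalization can be sketched for a flat 8-bit image (illustrative only, not the paper's code):

```python
def equalize_8bit(pixels, levels=256):
    """Histogram equalization over a flat list of 8-bit intensities:
    remap each intensity through the normalized cumulative histogram
    so the output uses the full dynamic range."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running)
    n = len(pixels)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:  # constant image: nothing to stretch
        return list(pixels)
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

# A low-contrast image is stretched across the full 0-255 range.
print(equalize_8bit([50, 50, 100, 200]))  # [0, 0, 128, 255]
```

Stretching the contrast this way makes the iris texture more pronounced before feature extraction.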
Affiliation(s)
- Amjad Rehman
- Artificial Intelligence and Data Analytics (AIDA) Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
25
Sadad T, Rehman A, Munir A, Saba T, Tariq U, Ayesha N, Abbasi R. Brain tumor detection and multi-classification using advanced deep learning techniques. Microsc Res Tech 2021; 84:1296-1308. [PMID: 33400339 DOI: 10.1002/jemt.23688] [Citation(s) in RCA: 38] [Impact Index Per Article: 9.5] [Received: 05/25/2020] [Revised: 10/14/2020] [Accepted: 12/06/2020] [Indexed: 11/11/2022]
Abstract
A brain tumor is an uncontrolled growth of cells in the brain that can progress to brain cancer if not detected at an early stage. Early brain tumor diagnosis plays a crucial role in treatment planning and patients' survival rate. Brain tumors have distinct forms, properties, and therapies; therefore, manual brain tumor detection is complicated, time-consuming, and vulnerable to error, and automated computer-assisted diagnosis at high precision is currently in demand. This article presents segmentation through a U-Net architecture with ResNet50 as a backbone on the Figshare dataset, achieving an intersection over union (IoU) of 0.9504. Preprocessing and data augmentation were introduced to enhance the classification rate. Multi-classification of brain tumors is performed using evolutionary algorithms and reinforcement learning through transfer learning; other deep learning methods such as ResNet50, DenseNet201, MobileNetV2, and InceptionV3 are also applied. The results show that the proposed research framework performed better than the reported state of the art. The CNN models applied for tumor classification, MobileNetV2, InceptionV3, ResNet50, DenseNet201, and NASNet, attained accuracies of 91.8, 92.8, 92.9, 93.1, and 99.6%, respectively, with NASNet exhibiting the highest accuracy.
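The IoU score reported for the segmentation stage measures mask overlap; a minimal sketch on binary masks (a generic illustration, not the paper's evaluation code):

```python
def intersection_over_union(pred, truth):
    """IoU between two binary masks given as flat 0/1 lists: the overlap
    divided by the combined area of the two masks."""
    intersection = sum(1 for p, t in zip(pred, truth) if p and t)
    union = sum(1 for p, t in zip(pred, truth) if p or t)
    return intersection / union if union else 1.0

# 2 pixels overlap out of 4 pixels covered by either mask.
print(intersection_over_union([1, 1, 0, 1], [1, 0, 1, 1]))  # 0.5
```

IoU is stricter than the Dice coefficient for the same masks, which is why segmentation papers report one or the other explicitly.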
Affiliation(s)
- Tariq Sadad
- Department of Computer Science, University of Central Punjab, Lahore, Pakistan
- Amjad Rehman
- Artificial Intelligence & Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
- Asim Munir
- Department of Computer Science and Software Engineering, International Islamic University, Islamabad, Pakistan
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
- Usman Tariq
- College of Computer Engineering and Science, Prince Sattam bin Abdulaziz University, Alkharj, Saudi Arabia
- Noor Ayesha
- School of Clinical Medicine, Zhengzhou University, Zhengzhou, Henan, China
- Rashid Abbasi
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China
26
Saba T. Computer vision for microscopic skin cancer diagnosis using handcrafted and non-handcrafted features. Microsc Res Tech 2021; 84:1272-1283. [PMID: 33399251 DOI: 10.1002/jemt.23686] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Received: 07/09/2020] [Revised: 11/15/2020] [Accepted: 11/30/2020] [Indexed: 12/31/2022]
Abstract
Skin covers the entire body and is its largest organ. Skin cancer is one of the most dreadful cancers and is primarily triggered by sensitivity to ultraviolet rays from the sun. Melanoma is the riskiest form, although it starts in a few different ways, and patients are often unable to recognize malignant skin growth at the initial stage. The literature shows that various handcrafted and automatic deep learning features are employed to diagnose skin cancer using traditional machine learning and deep learning techniques. The current research presents a comparison of skin cancer diagnosis techniques using handcrafted and non-handcrafted features. Additionally, clinical features such as the Menzies method, seven-point detection, asymmetry, border, color, and diameter, visual textures (GRC), local binary patterns, Gabor filters, Markov random fields, fractal dimension, and oriented histograms are explored in the process of skin cancer detection. Several parameters, such as the Jaccard index, accuracy, Dice coefficient, precision, sensitivity, and specificity, are compared on benchmark datasets to assess the reported techniques. Finally, publicly available skin cancer datasets are described and remaining issues are highlighted.
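Among the texture descriptors listed, local binary patterns are simple to sketch; an illustrative per-pixel LBP code (not from the survey itself):

```python
def lbp_code(center, neighbors):
    """8-bit local binary pattern code for one pixel: each of the 8
    neighbors (in a fixed clockwise order) contributes a bit that is set
    when the neighbor is at least as bright as the center pixel."""
    return sum((1 << i) for i, n in enumerate(neighbors) if n >= center)

# Only the first neighbor is brighter than the center pixel.
print(lbp_code(90, [120, 80, 70, 60, 50, 40, 30, 20]))  # 1
```

A histogram of these codes over a lesion patch then serves as a compact, illumination-robust texture feature for the classifier.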
Collapse
Affiliation(s)
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
| |
Collapse
|
27
|
Toğaçar M, Ergen B, Cömert Z. Tumor type detection in brain MR images of the deep model developed using hypercolumn technique, attention modules, and residual blocks. Med Biol Eng Comput 2021; 59:57-70. [PMID: 33222016 DOI: 10.1007/s11517-020-02290-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2020] [Accepted: 11/11/2020] [Indexed: 05/19/2023]
Abstract
Brain cancer is a disease caused by the growth of abnormal, aggressive cells in the brain outside of normal cells. Symptoms and diagnosis of brain cancer cases are producing more accurate results day by day in parallel with the development of technological opportunities. In this study, a deep learning model called BrainMRNet, developed for mass detection in open-source brain magnetic resonance images, was used. The BrainMRNet model includes three processing steps: attention modules, the hypercolumn technique, and residual blocks. To demonstrate the accuracy of the proposed model, three types of tumor leading to brain cancer were examined in this study: glioma, meningioma, and pituitary. In addition, a segmentation method was proposed that additionally determines in which lobe area of the brain the two classes of tumors that cause brain cancer are more concentrated. The classification accuracy rates achieved in the study were 98.18% for glioma tumors, 96.73% for meningioma tumors, and 98.18% for pituitary tumors. At the end of the experiment, using the subset of glioma and meningioma tumor images, it was determined at which brain lobe the tumor region was seen, and 100% success was achieved in this determination. In this study, a hybrid deep learning model is presented for the detection of brain tumors. In addition, open-source software was proposed that statistically determines in which lobe region of the human brain the tumor occurred. The methods applied and tested in the experiments have shown promising results with a high level of accuracy, precision, and specificity. These results demonstrate the applicability of the proposed approach in clinical settings to support medical decisions regarding brain tumor detection.
Collapse
Affiliation(s)
- Mesut Toğaçar
- Department of Computer Technology, Technical Sciences Vocational School, Fırat University, Elazig, Turkey.
| | - Burhan Ergen
- Department of Computer Technology, Technical Sciences Vocational School, Fırat University, Elazig, Turkey
| | - Zafer Cömert
- Department of Software Engineering, Faculty of Engineering, Samsun University, Samsun, Turkey
| |
Collapse
|
28
|
Rehman A, Khan MA, Saba T, Mehmood Z, Tariq U, Ayesha N. Microscopic brain tumor detection and classification using 3D CNN and feature selection architecture. Microsc Res Tech 2021; 84:133-149. [PMID: 32959422 DOI: 10.1002/jemt.23597] [Citation(s) in RCA: 83] [Impact Index Per Article: 20.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2019] [Revised: 08/10/2020] [Accepted: 08/31/2020] [Indexed: 12/20/2022]
Abstract
Brain tumors are among the most dreadful types of cancer and have caused a huge number of deaths among children and adults over the past few years. According to WHO figures, about 700,000 people are living with a brain tumor and around 86,000 have been diagnosed since 2019, while the total number of deaths due to brain tumors since 2019 is 16,830 and the average survival rate is 35%. Therefore, automated techniques are needed to grade brain tumors precisely from MRI scans. In this work, a new deep learning-based method is proposed for microscopic brain tumor detection and tumor type classification. A 3D convolutional neural network (CNN) architecture is designed in the first step to extract brain tumors, and the extracted tumors are passed to a pretrained CNN model for feature extraction. The extracted features are transferred to a correlation-based selection method, which outputs the best features. These selected features are validated through a feed-forward neural network for final classification. Three BraTS datasets (2015, 2017, and 2018) are utilized for experiments and validation, achieving accuracies of 98.32%, 96.97%, and 92.67%, respectively. A comparison with existing techniques shows the proposed design yields comparable accuracy.
Collapse
Affiliation(s)
- Amjad Rehman
- Artificial Intelligence & Data Analytics Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
| | | | - Tanzila Saba
- Artificial Intelligence & Data Analytics Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
| | - Zahid Mehmood
- Department of Computer Engineering, University of Engineering and Technology, Taxila, Pakistan
| | - Usman Tariq
- College of Computer Engineering and Science, Prince Sattam bin Abdulaziz University, Saudi Arabia
| | - Noor Ayesha
- School of Clinical Medicine, Zhengzhou University, Zhengzhou, China
| |
Collapse
|
29
|
Sadad T, Rehman A, Hussain A, Abbasi AA, Khan MQ. A Review on Multi-organ Cancer Detection Using Advanced Machine Learning Techniques. Curr Med Imaging 2020; 17:686-694. [PMID: 33334293 DOI: 10.2174/1573405616666201217112521] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2020] [Revised: 07/07/2020] [Accepted: 07/23/2020] [Indexed: 12/24/2022]
Abstract
Abnormal behaviors of tumors pose a risk to human survival. Thus, the detection of cancers at their initial stage is beneficial for patients and lowers the mortality rate. However, this can be difficult due to various factors related to imaging modalities, such as complex background, low contrast, brightness issues, poorly defined borders and the shape of the affected area. Recently, computer-aided diagnosis (CAD) models have been used to accurately diagnose tumors in different parts of the human body, especially breast, brain, lung, liver, skin and colon cancers. These cancers are diagnosed using various modalities, including computed tomography (CT), magnetic resonance imaging (MRI), colonoscopy, mammography, dermoscopy and histopathology. The aim of this review was to investigate existing approaches for the diagnosis of breast, brain, lung, liver, skin and colon tumors. The review focuses on decision-making systems, including handcrafted features and deep learning architectures for tumor detection.
Collapse
Affiliation(s)
- Tariq Sadad
- Department of Computer Science and Software Engineering, International Islamic University, Islamabad, Pakistan
| | - Amjad Rehman
- Artificial Intelligence & Data Analytics Lab CCIS Prince Sultan University, Riyadh 11586, Saudi Arabia
| | - Ayyaz Hussain
- Department of Computer Science, Quaid-i-Azam University, Islamabad, Pakistan
| | - Aaqif Afzaal Abbasi
- Department of Software Engineering, Foundation University, Islamabad, Pakistan
| | - Muhammad Qasim Khan
- Department of Computer Science, COMSATS University (Attock Campus) Islamabad, Pakistan
| |
Collapse
|
30
|
Tumor type detection in brain MR images of the deep model developed using hypercolumn technique, attention modules, and residual blocks. Med Biol Eng Comput 2020; 59:57-70. [PMID: 33222016 DOI: 10.1007/s11517-020-02290-x] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2020] [Accepted: 11/11/2020] [Indexed: 12/26/2022]
|
31
|
Saba T. Recent advancement in cancer detection using machine learning: Systematic survey of decades, comparisons and challenges. J Infect Public Health 2020; 13:1274-1289. [DOI: 10.1016/j.jiph.2020.06.033] [Citation(s) in RCA: 73] [Impact Index Per Article: 14.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2020] [Revised: 06/21/2020] [Accepted: 06/28/2020] [Indexed: 12/24/2022] Open
|
32
|
Lee H, Park BY, Byeon K, Won JH, Kim M, Kim SH, Park H. Multivariate association between brain function and eating disorders using sparse canonical correlation analysis. PLoS One 2020; 15:e0237511. [PMID: 32785278 PMCID: PMC7423138 DOI: 10.1371/journal.pone.0237511] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2019] [Accepted: 07/28/2020] [Indexed: 12/26/2022] Open
Abstract
Eating disorders are highly associated with obesity and are related to brain dysfunction as well. Still, the functional substrates of the brain associated with behavioral traits of eating disorders are underexplored. Existing neuroimaging studies have explored the association between eating disorders and brain function not by using all the information provided by eating disorder questionnaires but by adopting summary factors. Here, we aimed to investigate the multivariate association between brain function and eating disorders at the fine-grained question level. Our study is a retrospective secondary analysis that re-analyzed resting-state functional magnetic resonance imaging of 284 participants from the enhanced Nathan Kline Institute-Rockland Sample database. Leveraging sparse canonical correlation analysis, we associated the functional connectivity of all brain regions with all questions in the eating disorder questionnaires. We found that executive- and inhibitory control-related frontoparietal networks showed positive associations with questions on restrained eating, while brain regions involved in the reward system showed negative associations. Notably, inhibitory control-related brain regions showed a positive association with the degree of obesity. Findings were well replicated in an independent validation dataset (n = 34). The results of this study might contribute to a better understanding of brain function with respect to eating disorders.
Collapse
Affiliation(s)
- Hyebin Lee
- Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon, Korea
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon, Korea
| | - Bo-yong Park
- McConnell Brain Imaging Centre, Montreal Neurological Institute and Hospital, McGill University, Montreal, Quebec, Canada
| | - Kyoungseob Byeon
- Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon, Korea
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon, Korea
| | - Ji Hye Won
- Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon, Korea
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon, Korea
| | - Mansu Kim
- Department of Biostatistics, Epidemiology and Informatics, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
| | - Se-Hong Kim
- Department of Family Medicine, St. Vincent's Hospital, College of Medicine, The Catholic University of Korea, Suwon, Korea
| | - Hyunjin Park
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon, Korea
- School of Electronic and Electrical Engineering, Sungkyunkwan University, Suwon, Korea
| |
Collapse
|
33
|
Gaussian hybrid fuzzy clustering and radial basis neural network for automatic brain tumor classification in MRI images. EVOLUTIONARY INTELLIGENCE 2020. [DOI: 10.1007/s12065-020-00433-5] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|
34
|
Vadmal V, Junno G, Badve C, Huang W, Waite KA, Barnholtz-Sloan JS. MRI image analysis methods and applications: an algorithmic perspective using brain tumors as an exemplar. Neurooncol Adv 2020; 2:vdaa049. [PMID: 32642702 PMCID: PMC7236385 DOI: 10.1093/noajnl/vdaa049] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022] Open
Abstract
The use of magnetic resonance imaging (MRI) in healthcare and the emergence of radiology as a practice are both relatively new compared with the classical specialties in medicine. Having its naissance in the 1970s and later adoption in the 1980s, the use of MRI has grown exponentially, consequently engendering exciting new areas of research. One such development is the use of computational techniques to analyze MRI images much like the way a radiologist would. With the advent of affordable, powerful computing hardware and parallel developments in computer vision, MRI image analysis has also witnessed unprecedented growth. Due to the interdisciplinary and complex nature of this subfield, it is important to survey the current landscape and examine current approaches for analysis and the trends moving forward.
Collapse
Affiliation(s)
- Vachan Vadmal
- Department of Population Health and Quantitative Sciences, Case Western Reserve University School of Medicine, Cleveland, Ohio
| | - Grant Junno
- Department of Population Health and Quantitative Sciences, Case Western Reserve University School of Medicine, Cleveland, Ohio
| | - Chaitra Badve
- Department of Radiology, University Hospitals Health System (UHHS), Cleveland, Ohio
| | - William Huang
- Department of Population Health and Quantitative Sciences, Case Western Reserve University School of Medicine, Cleveland, Ohio
| | - Kristin A Waite
- Department of Population Health and Quantitative Sciences, Case Western Reserve University School of Medicine, Cleveland, Ohio.,Cleveland Center for Health Outcomes Research (CCHOR), Cleveland, Ohio.,Cleveland Institute for Computational Biology, Cleveland, Ohio
| | - Jill S Barnholtz-Sloan
- Department of Population Health and Quantitative Sciences, Case Western Reserve University School of Medicine, Cleveland, Ohio.,Cleveland Center for Health Outcomes Research (CCHOR), Cleveland, Ohio.,Research Health Analytics and Informatics, UHHS, Cleveland, Ohio.,Case Comprehensive Cancer Center, Cleveland, Ohio.,Cleveland Institute for Computational Biology, Cleveland, Ohio
| |
Collapse
|
35
|
Rehman A, Khan MA, Mehmood Z, Saba T, Sardaraz M, Rashid M. Microscopic melanoma detection and classification: A framework of pixel-based fusion and multilevel features reduction. Microsc Res Tech 2020; 83:410-423. [PMID: 31898863 DOI: 10.1002/jemt.23429] [Citation(s) in RCA: 51] [Impact Index Per Article: 10.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2018] [Revised: 11/26/2019] [Accepted: 12/15/2019] [Indexed: 11/06/2022]
Abstract
The number of patients diagnosed with melanoma is drastic, and melanoma contributes more deaths annually among young people. Approximately 192,310 new cases of skin cancer were diagnosed in 2019, which shows the importance of automated systems for the diagnosis process. Accordingly, this article presents an automated method for skin lesion detection and recognition using pixel-based seed-segmented image fusion and multilevel feature reduction. The proposed method involves four key steps: (a) a mean-based function is implemented and fed as input to top-hat and bottom-hat filters, which are later fused for contrast stretching; (b) lesions are segmented with seed region growing and a graph-cut method, and both segmented lesions are fused through pixel-based fusion; (c) multilevel features such as histogram of oriented gradients (HOG), speeded-up robust features (SURF), and color are extracted and simply concatenated; and (d) finally, variance precise entropy-based feature reduction and classification through an SVM with a cubic kernel function are performed. Two different experiments are performed to evaluate this method. The segmentation performance is evaluated on PH2, ISBI2016, and ISIC2017 with accuracies of 95.86%, 94.79%, and 94.92%, respectively. The classification performance is evaluated on the PH2 and ISBI2016 datasets with accuracies of 98.20% and 95.42%, respectively. The results of the proposed automated system are outstanding compared to the current techniques reported in the state of the art, which demonstrates the validity of the proposed method.
Collapse
Affiliation(s)
- Amjad Rehman
- Artificial Intelligence and Data Analytics (AIDA) Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
| | | | - Zahid Mehmood
- Department of Computer Engineering, University of Engineering and Technology, Taxila, Pakistan
| | - Tanzila Saba
- Artificial Intelligence and Data Analytics (AIDA) Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
| | - Muhammad Sardaraz
- Department of Computer Science, COMSATS University Islamabad, Attock, Pakistan
| | - Muhammad Rashid
- Department of Computer Engineering, Umm Al-Qura University, Makkah, Saudi Arabia
| |
Collapse
|
36
|
Nadeem MW, Ghamdi MAA, Hussain M, Khan MA, Khan KM, Almotiri SH, Butt SA. Brain Tumor Analysis Empowered with Deep Learning: A Review, Taxonomy, and Future Challenges. Brain Sci 2020; 10:brainsci10020118. [PMID: 32098333 PMCID: PMC7071415 DOI: 10.3390/brainsci10020118] [Citation(s) in RCA: 53] [Impact Index Per Article: 10.6] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2019] [Revised: 02/07/2020] [Accepted: 02/13/2020] [Indexed: 12/17/2022] Open
Abstract
Deep learning (DL) algorithms enable computational models consisting of multiple processing layers that represent data with multiple levels of abstraction. In recent years, the use of deep learning has rapidly proliferated in almost every domain, especially in medical image processing, medical image analysis, and bioinformatics. Consequently, deep learning has dramatically changed and improved the means of recognition, prediction, and diagnosis in numerous areas of healthcare, such as pathology, brain tumors, lung cancer, abdomen, cardiac, and retina. Considering the wide range of applications of deep learning, the objective of this article is to review the major deep learning concepts pertinent to brain tumor analysis (e.g., segmentation, classification, prediction, evaluation). A review conducted by summarizing a large number of scientific contributions to the field (i.e., deep learning in brain tumor analysis) is presented in this study. A coherent taxonomy of the research landscape from the literature has also been mapped, and the major aspects of this emerging field have been discussed and analyzed. A critical discussion section showing the limitations of deep learning techniques has been included at the end to elaborate open research challenges and directions for future work in this emergent area.
Collapse
Affiliation(s)
- Muhammad Waqas Nadeem
- Department of Computer Science, Lahore Garrison University, Lahore 54000, Pakistan; (M.A.K.); (K.M.K.)
- Department of Computer Science, School of Systems and Technology, University of Management and Technology, Lahore 54000, Pakistan;
- Correspondence:
| | - Mohammed A. Al Ghamdi
- Department of Computer Science, Umm Al-Qura University, Makkah 23500, Saudi Arabia; (M.A.A.G.); (S.H.A.)
| | - Muzammil Hussain
- Department of Computer Science, School of Systems and Technology, University of Management and Technology, Lahore 54000, Pakistan;
| | - Muhammad Adnan Khan
- Department of Computer Science, Lahore Garrison University, Lahore 54000, Pakistan; (M.A.K.); (K.M.K.)
| | - Khalid Masood Khan
- Department of Computer Science, Lahore Garrison University, Lahore 54000, Pakistan; (M.A.K.); (K.M.K.)
| | - Sultan H. Almotiri
- Department of Computer Science, Umm Al-Qura University, Makkah 23500, Saudi Arabia; (M.A.A.G.); (S.H.A.)
| | - Suhail Ashfaq Butt
- Department of Information Sciences, Division of Science and Technology, University of Education Township, Lahore 54700, Pakistan;
| |
Collapse
|
37
|
A Review of Electrical Impedance Characterization of Cells for Label-Free and Real-Time Assays. BIOCHIP JOURNAL 2019. [DOI: 10.1007/s13206-019-3401-6] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
|
38
|
A comparative study of features selection for skin lesion detection from dermoscopic images. ACTA ACUST UNITED AC 2019. [DOI: 10.1007/s13721-019-0209-1] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
|
39
|
Saba T, Khan MA, Rehman A, Marie-Sainte SL. Region Extraction and Classification of Skin Cancer: A Heterogeneous framework of Deep CNN Features Fusion and Reduction. J Med Syst 2019; 43:289. [PMID: 31327058 DOI: 10.1007/s10916-019-1413-3] [Citation(s) in RCA: 92] [Impact Index Per Article: 15.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2019] [Accepted: 07/03/2019] [Indexed: 01/12/2023]
Abstract
Cancer is one of the leading causes of death in the last two decades. It is diagnosed as either malignant or benign, depending upon the severity of the infection and the current stage. The conventional methods require a detailed physical inspection by an expert dermatologist, which is time-consuming and imprecise. Therefore, several computer vision methods have been introduced lately, which are cost-effective and somewhat accurate. In this work, we propose a new automated approach for skin lesion detection and recognition using a deep convolutional neural network (DCNN). The proposed cascaded design incorporates three fundamental steps: (a) contrast enhancement through fast local Laplacian filtering (FlLpF) along with HSV color transformation; (b) lesion boundary extraction using a color CNN approach followed by an XOR operation; (c) in-depth feature extraction by applying transfer learning using the Inception V3 model prior to feature fusion using a hamming distance (HD) approach. An entropy-controlled feature selection method is also introduced for the selection of the most discriminant features. The proposed method is tested on the PH2 and ISIC 2017 datasets, whereas the recognition phase is validated on the PH2, ISBI 2016, and ISBI 2017 datasets. From the results, it is concluded that the proposed method outperforms several existing methods, attaining an accuracy of 98.4% on the PH2 dataset, 95.1% on the ISBI 2016 dataset, and 94.8% on the ISBI 2017 dataset.
Collapse
Affiliation(s)
- Tanzila Saba
- College of Computer and Information Sciences, Prince Sultan University, Riyadh, 11586, Saudi Arabia
| | - Muhammad Attique Khan
- Department of Computer Science and Engineering, HITEC University, Museum Road, Taxila, Pakistan
| | - Amjad Rehman
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University, Riyadh, Saudi Arabia.
| | | |
Collapse
|
40
|
Saba T. Automated lung nodule detection and classification based on multiple classifiers voting. Microsc Res Tech 2019; 82:1601-1609. [DOI: 10.1002/jemt.23326] [Citation(s) in RCA: 32] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2018] [Revised: 03/30/2019] [Accepted: 06/08/2019] [Indexed: 01/06/2023]
Affiliation(s)
- Tanzila Saba
- College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
| |
Collapse
|
41
|
Khan MA, Akram T, Sharif M, Saba T, Javed K, Lali IU, Tanik UJ, Rehman A. Construction of saliency map and hybrid set of features for efficient segmentation and classification of skin lesion. Microsc Res Tech 2019; 82:741-763. [PMID: 30768826 DOI: 10.1002/jemt.23220] [Citation(s) in RCA: 55] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2018] [Revised: 11/09/2018] [Accepted: 12/29/2018] [Indexed: 01/22/2023]
Abstract
Skin cancer is one of the most deadly types of cancer and has grown extensively worldwide over the last decade. For accurate detection and classification of melanoma, several measures should be considered, including contrast stretching, irregularity measurement, selection of the most optimal features, and so forth. Poor lesion contrast affects segmentation accuracy and also increases classification error. To overcome this problem, an efficient model for accurate border detection and classification is presented. The proposed model improves segmentation accuracy in its preprocessing phase by enhancing the contrast of the lesion area relative to the background. The enhanced 2D blue channel is selected for the construction of a saliency map, at the end of which a threshold function produces the binary image. In addition, particle swarm optimization (PSO) based segmentation is also utilized for accurate border detection and refinement. Selected features, including shape, texture, local, and global features, are also extracted and later selected based on a genetic algorithm, which has the advantage of identifying the fittest chromosome. Finally, the optimized features are fed into a support vector machine (SVM) for classification. Comprehensive experiments have been carried out on three datasets named PH2, ISBI2016, and ISIC (i.e., ISIC MSK-1, ISIC MSK-2, and ISIC UDA). Improved accuracies of 97.9%, 99.1%, 98.4%, and 93.8% were obtained, respectively. The SVM outperforms on the selected datasets in terms of sensitivity, precision rate, accuracy, and FNR. Furthermore, the selection method performs well and successfully removes redundant features.
Collapse
Affiliation(s)
- Muhammad Attique Khan
- Department of Computer Science and Engineering, HITEC University, Museum Road, Taxila, Pakistan
| | - Tallha Akram
- Department of Electrical Engineering, COMSATS University Islamabad, Wah Campus, Pakistan
| | - Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
| | - Tanzila Saba
- College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
| | - Kashif Javed
- Department of Robotics, SMME NUST, Islamabad, Pakistan
| | - Ikram Ullah Lali
- Department of Computer Science, University of Gujrat, Gujrat, Pakistan
| | - Urcun John Tanik
- Computer Science and Information Systems, Texas A&M University-Commerce, USA
| | - Amjad Rehman
- Department of Information Systems, Al Yamamah University, Riyadh, Saudi Arabia
| |
Collapse
|
42
|
Khan MA, Lali IU, Rehman A, Ishaq M, Sharif M, Saba T, Zahoor S, Akram T. Brain tumor detection and classification: A framework of marker-based watershed algorithm and multilevel priority features selection. Microsc Res Tech 2019; 82:909-922. [PMID: 30801840 DOI: 10.1002/jemt.23238] [Citation(s) in RCA: 65] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2018] [Revised: 01/20/2019] [Accepted: 01/28/2019] [Indexed: 08/25/2024]
Abstract
Brain tumor identification using magnetic resonance images (MRI) is an important research domain in the field of medical imaging. The use of computerized techniques helps doctors in the diagnosis and treatment of brain cancer. In this article, an automated system is developed for tumor extraction and classification from MRI. It is based on marker-based watershed segmentation and feature selection. Five primary steps are involved in the proposed system: tumor contrast enhancement, tumor extraction, multimodel feature extraction, feature selection, and classification. A gamma contrast stretching approach is implemented to improve the contrast of a tumor. Then, segmentation is done using a marker-based watershed algorithm. Shape, texture, and point features are extracted in the next step, and only the top-ranked 70% of features are selected through a chi-square max conditional priority features approach. In the next step, the selected features are fused using a serial-based concatenation method before classification using a support vector machine. All the experiments are performed on three data sets: Harvard, BRATS 2013, and a privately collected MR images data set. Simulation results clearly reveal that the proposed system outperforms existing methods with greater precision and accuracy.
Collapse
Affiliation(s)
- Muhammad A Khan
- Department of Computer Science and Engineering, HITEC University Museum Road, Taxila, Pakistan
| | - Ikram U Lali
- Department of Computer Science, University of Gujrat, Gujrat, Pakistan
| | - Amjad Rehman
- College of Business Administration, Al Yamamah University, Riyadh 11512, Saudi Arabia
| | - Mubashar Ishaq
- Department of Computer Science, University of Gujrat, Gujrat, Pakistan
| | - Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Cantt, Pakistan
| | - Tanzila Saba
- College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
| | - Saliha Zahoor
- Department of Computer Science, University of Gujrat, Gujrat, Pakistan
| | - Tallha Akram
- Department of EE, COMSATS University Islamabad, Wah Cantt, Pakistan
| |
Collapse
|
43
|
Iqbal S, Ghani Khan MU, Saba T, Mehmood Z, Javaid N, Rehman A, Abbasi R. Deep learning model integrating features and novel classifiers fusion for brain tumor segmentation. Microsc Res Tech 2019; 82:1302-1315. [DOI: 10.1002/jemt.23281] [Citation(s) in RCA: 57] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2019] [Revised: 03/24/2019] [Accepted: 04/12/2019] [Indexed: 01/09/2023]
Affiliation(s)
- Sajid Iqbal
- Department of Computer Science, Bahauddin Zakariya University, Multan, Pakistan
- Department of Computer Science and Engineering, University of Engineering and Technology, Lahore, Pakistan
- Muhammad U. Ghani Khan
- Department of Computer Science and Engineering, University of Engineering and Technology, Lahore, Pakistan
- Tanzila Saba
- College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
- Zahid Mehmood
- Department of Computer Engineering, University of Engineering and Technology, Taxila, Pakistan
- Nadeem Javaid
- Department of Computer Science, COMSATS University Islamabad, Pakistan
- Amjad Rehman
- College of Computer and Information Systems, Al Yamamah University, Riyadh, Saudi Arabia
- Rashid Abbasi
- School of Computer and Technology, Anhui University, Hefei, China

44
Abbas N, Saba T, Rehman A, Mehmood Z, Javaid N, Tahir M, Khan NU, Ahmed KT, Shah R. Plasmodium species aware based quantification of malaria parasitemia in light microscopy thin blood smear. Microsc Res Tech 2019; 82:1198-1214. [DOI: 10.1002/jemt.23269] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2018] [Revised: 02/19/2019] [Accepted: 03/15/2019] [Indexed: 01/03/2023]
Affiliation(s)
- Naveed Abbas
- Department of Computer Science, Islamia College Peshawar, KPK, Pakistan
- Tanzila Saba
- College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
- Amjad Rehman
- College of Business Administration, Al Yamamah University, Riyadh, Saudi Arabia
- Zahid Mehmood
- Department of Computer Engineering, University of Engineering and Technology, Taxila, Pakistan
- Nadeem Javaid
- Department of Computer Science, COMSATS University Islamabad, Pakistan
- Muhammad Tahir
- Department of Computer Science, COMSATS University Islamabad, Attock Campus, Pakistan
- Roaider Shah
- Department of Computer Science, Islamia College Peshawar, KPK, Pakistan

45
Tahir B, Iqbal S, Usman Ghani Khan M, Saba T, Mehmood Z, Anjum A, Mahmood T. Feature enhancement framework for brain tumor segmentation and classification. Microsc Res Tech 2019; 82:803-811. [DOI: 10.1002/jemt.23224] [Citation(s) in RCA: 46] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2018] [Revised: 11/20/2018] [Accepted: 12/29/2018] [Indexed: 11/08/2022]
Affiliation(s)
- Bilal Tahir
- Department of Computer Science and Engineering, University of Engineering and Technology, Lahore, Pakistan
- Sajid Iqbal
- Department of Computer Science and Engineering, University of Engineering and Technology, Lahore, Pakistan
- Department of Computer Science, Bahauddin Zakariya University, Multan, Pakistan
- M. Usman Ghani Khan
- Department of Computer Science and Engineering, University of Engineering and Technology, Lahore, Pakistan
- Tanzila Saba
- Department of Information Systems, College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
- Zahid Mehmood
- Department of Software Engineering, University of Engineering and Technology, Taxila, Pakistan
- Adeel Anjum
- Department of Computer Science, COMSATS University Islamabad, Pakistan
- Toqeer Mahmood
- Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan

46
Saba T, Khan SU, Islam N, Abbas N, Rehman A, Javaid N, Anjum A. Cloud‐based decision support system for the detection and classification of malignant cells in breast cancer using breast cytology images. Microsc Res Tech 2019; 82:775-785. [DOI: 10.1002/jemt.23222] [Citation(s) in RCA: 38] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2018] [Revised: 11/14/2018] [Accepted: 12/30/2018] [Indexed: 12/16/2022]
Affiliation(s)
- Tanzila Saba
- College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
- Sana Ullah Khan
- Department of Computer Science, Islamia College University, Peshawar, KPK, Pakistan
- Naveed Islam
- Department of Computer Science, Islamia College University, Peshawar, KPK, Pakistan
- Naveed Abbas
- Department of Computer Science, Islamia College University, Peshawar, KPK, Pakistan
- Amjad Rehman
- MIS Department, COBA, Al Yamamah University, Riyadh, Saudi Arabia
- Nadeem Javaid
- Department of Computer Science, COMSATS University Islamabad, Pakistan
- Adeel Anjum
- Department of Computer Science, COMSATS University Islamabad, Pakistan

47
Ullah H, Saba T, Islam N, Abbas N, Rehman A, Mehmood Z, Anjum A. An ensemble classification of exudates in color fundus images using an evolutionary algorithm based optimal features selection. Microsc Res Tech 2019; 82:361-372. [DOI: 10.1002/jemt.23178] [Citation(s) in RCA: 37] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2018] [Revised: 10/13/2018] [Accepted: 10/31/2018] [Indexed: 11/10/2022]
Affiliation(s)
- Hidayat Ullah
- Department of Computer Science, Islamia College Peshawar, Khyber Pakhtunkhwa, Pakistan
- Tanzila Saba
- Information System, College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
- Naveed Islam
- Department of Computer Science, Islamia College Peshawar, Khyber Pakhtunkhwa, Pakistan
- Naveed Abbas
- Department of Computer Science, Islamia College Peshawar, Khyber Pakhtunkhwa, Pakistan
- Amjad Rehman
- Information System, College of Computer and Information Systems, Al Yamamah University, Riyadh, Saudi Arabia
- Zahid Mehmood
- Department of Software Engineering, University of Engineering and Technology, Taxila, Pakistan
- Adeel Anjum
- Department of Computer Science, COMSATS University Islamabad, Pakistan

48
Abbas N, Saba T, Rehman A, Mehmood Z, Kolivand H, Uddin M, Anjum A. Plasmodium life cycle stage classification based quantification of malaria parasitaemia in thin blood smears. Microsc Res Tech 2018; 82:283-295. [DOI: 10.1002/jemt.23170] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2018] [Revised: 08/28/2018] [Accepted: 10/14/2018] [Indexed: 11/11/2022]
Affiliation(s)
- Naveed Abbas
- Department of Computer Science, Islamia College Peshawar, Pakistan
- Tanzila Saba
- College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
- Amjad Rehman
- College of Computer and Information Systems, Al Yamamah University, Riyadh, Saudi Arabia
- Zahid Mehmood
- Department of Software Engineering, University of Engineering and Technology, Taxila, Pakistan
- Hoshang Kolivand
- Department of Computer Science, Liverpool John Moores University, Liverpool, UK
- Mueen Uddin
- Information System Department, College of Engineering, Effat University of Jeddah, Jeddah, Saudi Arabia
- Adeel Anjum
- Department of Computer Science, COMSATS University Islamabad, Islamabad, Pakistan

49
Rehman A, Abbas N, Saba T, Rahman SIU, Mehmood Z, Kolivand H. Classification of acute lymphoblastic leukemia using deep learning. Microsc Res Tech 2018; 81:1310-1317. [DOI: 10.1002/jemt.23139] [Citation(s) in RCA: 128] [Impact Index Per Article: 18.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2018] [Revised: 08/25/2018] [Accepted: 09/01/2018] [Indexed: 11/11/2022]
Affiliation(s)
- Amjad Rehman
- College of Computer and Information Systems, Al Yamamah University, Riyadh, Saudi Arabia
- Naveed Abbas
- Department of Computer Science, Islamia College University, Peshawar, Pakistan
- Tanzila Saba
- College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
- Zahid Mehmood
- Department of Software Engineering, University of Engineering and Technology, Taxila, Pakistan
- Hoshang Kolivand
- Department of Computer Science, Liverpool John Moores University, Liverpool, United Kingdom

50
Yonekura A, Kawanaka H, Prasath VBS, Aronow BJ, Takase H. Automatic disease stage classification of glioblastoma multiforme histopathological images using deep convolutional neural network. Biomed Eng Lett 2018; 8:321-327. [PMID: 30603216 PMCID: PMC6208537 DOI: 10.1007/s13534-018-0077-0] [Citation(s) in RCA: 30] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2017] [Revised: 05/28/2018] [Accepted: 06/17/2018] [Indexed: 02/07/2023] Open
Abstract
In the field of computational histopathology, computer-assisted diagnosis systems are important for obtaining patient-specific diagnoses of various diseases and for supporting precision medicine. Many studies on automatic analysis methods for digital pathology images have therefore been reported. In this work, we discuss an automatic feature extraction and disease stage classification method for glioblastoma multiforme (GBM) histopathological images. We use deep convolutional neural networks (deep CNNs) to acquire feature descriptors and a classification scheme simultaneously, and we undertake objective and quantitative comparisons with other popular CNNs on this challenging classification problem. Experiments using glioma images from The Cancer Genome Atlas show that our network obtains 96.5% average classification accuracy, and at higher cross-validation folds other networks perform similarly, with a higher accuracy of 98.0%. Deep CNNs could extract significant features from the GBM histopathology images with high accuracy. Overall, disease stage classification of GBM from histopathological images with deep CNNs is very promising, and with the availability of large-scale histopathological image data, deep CNNs are well suited to tackling this challenging problem.
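The feature extraction this abstract attributes to deep CNNs is built from stacked convolution, activation, and pooling layers. The NumPy sketch below shows that building block on a toy patch; it is illustrative only, not the authors' network, and the edge-detecting kernel is hand-set here whereas a trained CNN would learn its filters from the histology data.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2-D cross-correlation: the core feature-extraction step of a CNN layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear activation."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling; trailing rows/cols that do not fill a window are dropped."""
    h, w = x.shape[0] - x.shape[0] % size, x.shape[1] - x.shape[1] % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Toy stand-in for a histopathology patch: dark tissue on the left, bright on the right.
patch = np.zeros((8, 8))
patch[:, 4:] = 1.0

# Hand-set vertical-edge filter (a trained CNN learns such kernels).
kernel = np.array([[-1.0, 1.0]])

fmap = max_pool(relu(conv2d(patch, kernel)))
```

The pooled feature map responds only at the tissue boundary, which is the sense in which stacked layers of such filters turn raw pixels into discriminative descriptors for a downstream classifier.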
Affiliation(s)
- Asami Yonekura
- Graduate School of Engineering, Mie University, 1577 Kurima-machiya, Tsu, Mie 514-8507, Japan
- Hiroharu Kawanaka
- Graduate School of Engineering, Mie University, 1577 Kurima-machiya, Tsu, Mie 514-8507, Japan
- V. B. Surya Prasath
- Division of Biomedical Informatics, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH 45229, USA
- Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Cincinnati, OH 45267, USA
- Department of Electrical Engineering and Computer Science, University of Cincinnati, Cincinnati, OH 45221, USA
- Bruce J. Aronow
- Division of Biomedical Informatics, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH 45229, USA
- Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Cincinnati, OH 45267, USA
- Haruhiko Takase
- Graduate School of Engineering, Mie University, 1577 Kurima-machiya, Tsu, Mie 514-8507, Japan