1. Kiran A, Alsaadi M, Dutta AK, Raparthi M, Soni M, Alsubai S, Byeon H, Kulkarni MH, Asenso E. Bio-inspired Deep Learning-Personalized Ensemble Alzheimer's Diagnosis Model for Mental Well-being. SLAS Technol 2024:100161. PMID: 38901762; DOI: 10.1016/j.slast.2024.100161.
Abstract
Most classification models for Alzheimer's disease (AD) diagnosis apply no sample-specific strategy to individual inputs and therefore easily overlook personalized differences between samples. This research introduces a personalized dynamic ensemble convolutional neural network (PDECNN) that builds a specific integration strategy based on the distinctiveness of each sample. The proposed model dynamically revises the degenerated brain regions of interest for each input, allowing it to adapt to variation in brain-region degeneration across samples. This gives PDECNN additional diagnostic value in clinical settings, since it can identify sample-specific degenerated brain regions directly from the input. Concretely, the model accounts for the variability of degeneration levels between input samples, evaluates the degree of degeneration of specific brain regions with an attention mechanism, and selects and integrates brain-region features according to that degree. With this redesign, classification accuracy improves by 4%, 11%, and 8% in the respective evaluations, and the degenerated brain regions identified by the model show high consistency with the clinical manifestations of AD.
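The attention-based region scoring and selection described above can be sketched in a few lines; the scoring vector `w`, the choice of `k = 3` selected regions, and the softmax fusion below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def attention_region_ensemble(region_feats, w, k=3):
    """Score each brain region's feature vector, keep the k regions with the
    highest (most 'degenerated') scores, and fuse their features with
    softmax attention weights."""
    scores = region_feats @ w                    # one scalar score per region
    top = np.argsort(scores)[-k:]                # indices of the k top regions
    a = np.exp(scores[top] - scores[top].max())  # numerically stable softmax
    a /= a.sum()
    return a @ region_feats[top]                 # attention-weighted fusion

rng = np.random.default_rng(0)
feats = rng.normal(size=(10, 4))                 # 10 regions, 4-dim features each
fused = attention_region_ensemble(feats, rng.normal(size=4), k=3)
```

Because the scores are computed per input, different samples select different regions, which is what makes the ensemble strategy sample-specific.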
Affiliation(s)
- Ajmeera Kiran
- Dept. of Computer Science and Engineering, MLR Institute of Technology, Dundigal, Hyderabad, Telangana, 500043, India.
- Mahmood Alsaadi
- Department of Computer Science, Al-Maarif University College, Al Anbar, 31001, Iraq.
- Ashit Kumar Dutta
- Department of Computer Science and Information Systems, College of Applied Sciences, AlMaarefa University, Ad Diriyah, Riyadh, 13713, Kingdom of Saudi Arabia.
- Mohan Raparthi
- Software Engineer, Alphabet Life Science, Dallas, Texas, 75063, USA.
- Mukesh Soni
- Department of CSE, University Centre for Research & Development, Chandigarh University, Mohali, Punjab 140413, India.
- Shtwai Alsubai
- Department of Computer Science, College of Computer Engineering and Sciences in Al-Kharj, Prince Sattam bin Abdulaziz University, P.O. Box 151, Al-Kharj 11942, Saudi Arabia.
- Haewon Byeon
- Department of AI and Software, Inje University, Gimhae 50834, Republic of Korea.
2. Rai HM, Yoo J, Dashkevych S. Two-headed UNetEfficientNets for parallel execution of segmentation and classification of brain tumors: incorporating postprocessing techniques with connected component labelling. J Cancer Res Clin Oncol 2024; 150:220. PMID: 38684578; PMCID: PMC11058623; DOI: 10.1007/s00432-024-05718-1.
Abstract
PURPOSE The purpose of this study is to develop accurate, automated detection and segmentation methods for brain tumors, given their significant fatality rates: aggressive malignant tumors such as glioblastoma multiforme (GBM) have a five-year survival rate as low as 5 to 10%. This underscores the urgent need to improve diagnosis and treatment outcomes through innovative approaches in medical imaging and deep learning. METHODS We propose a novel two-headed UNetEfficientNets model for simultaneous segmentation and classification of brain tumors from magnetic resonance imaging (MRI) images, combining the strengths of EfficientNets and a modified two-headed UNet. We used a publicly available dataset of 3064 brain MR images in three tumor classes: meningioma, glioma, and pituitary. To enhance training, we applied 12 types of data augmentation to the training set. We evaluated six deep learning models, from UNetEfficientNet-B0 to UNetEfficientNet-B5, optimizing the segmentation head with binary cross-entropy (BCE) plus Dice loss and the classification head with BCE plus focal loss. Postprocessing techniques such as connected component labeling (CCL) and ensemble models were applied to improve segmentation outcomes. RESULTS The proposed UNetEfficientNet-B4 model achieved outstanding results, with an accuracy of 99.4% after postprocessing, along with a Dice score of 94.03%, precision of 98.67%, and recall of 99.00%. The ensemble technique further improved segmentation performance, with a global Dice score of 95.70% and a Jaccard index of 91.20%. CONCLUSION Our study demonstrates the high efficiency and accuracy of the proposed UNetEfficientNet-B4 model in the automatic, parallel detection and segmentation of brain tumors from MRI images. This approach holds promise for improving diagnosis and treatment planning for patients with brain tumors, potentially leading to better outcomes and prognosis.
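Connected component labelling (CCL) of the kind used here for postprocessing removes spurious detections from a predicted binary mask. A minimal pure-NumPy sketch with 4-connectivity and a keep-largest-component policy (both assumptions; the paper may instead filter components by a size threshold):

```python
import numpy as np
from collections import deque

def keep_largest_component(mask):
    """Label 4-connected components of a binary mask via BFS and keep only
    the largest one, discarding small spurious detections."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    sizes, cur = {}, 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and labels[i, j] == 0:
                cur += 1                          # start a new component
                sizes[cur] = 0
                q = deque([(i, j)])
                labels[i, j] = cur
                while q:                          # breadth-first flood fill
                    y, x = q.popleft()
                    sizes[cur] += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = cur
                            q.append((ny, nx))
    if not sizes:
        return mask
    return labels == max(sizes, key=sizes.get)

mask = np.zeros((8, 8), dtype=bool)
mask[1:5, 1:5] = True        # main predicted tumor region (16 pixels)
mask[6, 6] = True            # isolated 1-pixel false positive
cleaned = keep_largest_component(mask)
```

The isolated pixel forms its own small component and is removed, while the main region survives intact.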
Affiliation(s)
- Hari Mohan Rai
- School of Computing, Gachon University, 1342 Seongnam-daero, Sujeong-gu, Seongnam-si, 13120, Gyeonggi-do, Republic of Korea.
- Joon Yoo
- School of Computing, Gachon University, 1342 Seongnam-daero, Sujeong-gu, Seongnam-si, 13120, Gyeonggi-do, Republic of Korea.
- Serhii Dashkevych
- Department of Computer Engineering, Vistula University, Stokłosy 3, 02-787, Warszawa, Poland.
3. Lindroth H, Nalaie K, Raghu R, Ayala IN, Busch C, Bhattacharyya A, Moreno Franco P, Diedrich DA, Pickering BW, Herasevich V. Applied Artificial Intelligence in Healthcare: A Review of Computer Vision Technology Application in Hospital Settings. J Imaging 2024; 10:81. PMID: 38667979; PMCID: PMC11050909; DOI: 10.3390/jimaging10040081.
Abstract
Computer vision (CV), a type of artificial intelligence (AI) that uses digital videos or sequences of images to recognize content, has been used extensively across industries in recent years. In the healthcare industry, however, its applications are limited by factors such as privacy, safety, and ethical concerns. Despite this, CV has the potential to improve patient monitoring and system efficiencies while reducing workload. In contrast to previous reviews, we focus on the end-user applications of CV. First, we briefly review and categorize CV applications in other industries (job enhancement, surveillance and monitoring, automation, and augmented reality). We then review the development of CV in hospital, outpatient, and community settings, highlighting recent advances in monitoring delirium, pain and sedation, patient deterioration, mechanical ventilation, mobility, patient safety, surgical applications, quantification of hospital workload, and monitoring for patient events outside the hospital. To identify opportunities for future applications, we also completed journey mapping at different system levels. Lastly, we discuss the privacy, safety, and ethical considerations associated with CV and outline processes in algorithm development and testing that limit CV expansion in healthcare. This comprehensive review highlights CV applications and ideas for their expanded use in healthcare.
Affiliation(s)
- Heidi Lindroth
- Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA.
- Center for Aging Research, Regenstrief Institute, School of Medicine, Indiana University, Indianapolis, IN 46202, USA.
- Center for Health Innovation and Implementation Science, School of Medicine, Indiana University, Indianapolis, IN 46202, USA.
- Keivan Nalaie
- Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA.
- Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA.
- Roshini Raghu
- Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA.
- Ivan N. Ayala
- Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA.
- Charles Busch
- Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA.
- College of Engineering, University of Wisconsin-Madison, Madison, WI 53705, USA.
- Pablo Moreno Franco
- Department of Transplantation Medicine, Mayo Clinic, Jacksonville, FL 32224, USA.
- Daniel A. Diedrich
- Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA.
- Brian W. Pickering
- Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA.
- Vitaly Herasevich
- Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA.
4. Majumder S, Gautam N, Basu A, Sau A, Geem ZW, Sarkar R. MENet: A Mitscherlich function based ensemble of CNN models to classify lung cancer using CT scans. PLoS One 2024; 19:e0298527. PMID: 38466701; PMCID: PMC10927148; DOI: 10.1371/journal.pone.0298527.
Abstract
Lung cancer is one of the leading causes of cancer-related deaths worldwide. To reduce the mortality rate, early detection and proper treatment must be ensured. Computer-aided diagnosis methods analyze different modalities of medical images to increase diagnostic precision. In this paper, we propose an ensemble model, the Mitscherlich function-based Ensemble Network (MENet), which combines the prediction probabilities obtained from three deep learning models, namely Xception, InceptionResNetV2, and MobileNetV2, to improve the accuracy of lung cancer prediction. The ensemble approach is based on the Mitscherlich function, which produces a fuzzy rank used to combine the outputs of the base classifiers. The proposed method is trained and tested on two publicly available computed tomography (CT) scan datasets: the Iraq-Oncology Teaching Hospital/National Center for Cancer Diseases (IQ-OTH/NCCD) dataset and LIDC-IDRI. Results on standard metrics show that the proposed method performs better than state-of-the-art methods. The code for the proposed work is available at https://github.com/SuryaMajumder/MENet.
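The fuzzy-rank fusion idea can be sketched by passing each base classifier's class probabilities through a Mitscherlich growth curve and summing the transformed scores across classifiers; the constants `a`, `b`, `c` and the simple sum-then-argmax rule below are illustrative assumptions rather than MENet's exact formulation.

```python
import numpy as np

def mitscherlich(p, a=1.0, b=1.0, c=1.0):
    """Mitscherlich growth curve a*(1 - b*exp(-c*p)): a monotone transform
    of class probabilities into fuzzy rank scores."""
    return a * (1.0 - b * np.exp(-c * p))

def fuzzy_rank_fusion(prob_list):
    """Sum each model's transformed class scores and pick the class with
    the highest fused score."""
    fused = sum(mitscherlich(p) for p in prob_list)
    return int(np.argmax(fused))

# three hypothetical base classifiers voting over 3 classes
p1 = np.array([0.7, 0.2, 0.1])
p2 = np.array([0.6, 0.3, 0.1])
p3 = np.array([0.2, 0.5, 0.3])
winner = fuzzy_rank_fusion([p1, p2, p3])
```

The nonlinear transform compresses differences between already-confident predictions, so the fused decision is less dominated by a single overconfident model than a plain probability average would be.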
Affiliation(s)
- Surya Majumder
- Department of Computer Science and Engineering, Heritage Institute of Technology, Kolkata, India.
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, India.
- Nandita Gautam
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, India.
- Abhishek Basu
- Department of Computer Science and Engineering, National Institute of Technology Durgapur, Durgapur, India.
- Arup Sau
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, India.
- Zong Woo Geem
- College of IT Convergence, Gachon University, Seongnam, South Korea.
- Ram Sarkar
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, India.
5. Bhimavarapu U, Chintalapudi N, Battineni G. Brain Tumor Detection and Categorization with Segmentation of Improved Unsupervised Clustering Approach and Machine Learning Classifier. Bioengineering (Basel) 2024; 11:266. PMID: 38534540; DOI: 10.3390/bioengineering11030266.
Abstract
Brain tumors are without doubt one of the leading causes of death worldwide. A biopsy is considered the most important procedure in cancer diagnosis, but it comes with drawbacks, including low sensitivity, risks during the procedure, and a lengthy wait for results. Early identification gives patients a better prognosis and reduces treatment costs. Conventional methods of identifying brain tumors depend on the skill of medical professionals, so human error is possible, and the labor-intensive nature of these approaches makes healthcare resources expensive. A variety of imaging methods are available to detect brain tumors, including magnetic resonance imaging (MRI) and computed tomography (CT). Medical imaging research is being advanced by computer-aided diagnostic processes that enable visualization, and automatic tumor segmentation using clustering enables accurate tumor detection that reduces risk and supports effective treatment. This study proposes an improved fuzzy C-means (FCM) segmentation algorithm for MRI images. To reduce complexity, the most relevant shape, texture, and color features are selected. An improved extreme learning machine classifies the tumors with 98.56% accuracy, 99.14% precision, and 99.25% recall. The proposed classifier consistently demonstrates higher accuracy across all tumor classes than existing models, with improvements ranging from 1.21% to 6.23%; this consistent enhancement emphasizes the classifier's robust performance and suggests its potential for more accurate and reliable brain tumor classification. The improved algorithm achieved accuracy, precision, and recall of 98.47%, 98.59%, and 98.74% on the Figshare dataset and 99.42%, 99.75%, and 99.28% on the Kaggle dataset, respectively, surpassing competing algorithms, particularly in detecting glioma grades. Compared with existing models, accuracy improves by approximately 5.39% on the Figshare dataset and 6.22% on the Kaggle dataset. Despite challenges, including artifacts and computational complexity, the study's commitment to refining the technique and addressing limitations positions the improved FCM model as a noteworthy advancement in precise and efficient brain tumor identification.
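The core fuzzy C-means (FCM) loop alternates fuzzy membership updates with membership-weighted centroid updates. A minimal 1-D sketch with fuzzifier m = 2 follows; the paper's improved FCM variant and its shape, texture, and color features are not reproduced here, so treat this only as the baseline algorithm.

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, iters=50, seed=0):
    """Minimal fuzzy C-means on 1-D intensities: alternate membership and
    weighted-centroid updates with fuzzifier m."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)            # random fuzzy memberships
    for _ in range(iters):
        centers = (u ** m).T @ x / (u ** m).sum(axis=0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))              # inverse-distance memberships
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

x = np.array([0.0, 0.1, 0.2, 0.9, 1.0, 1.1])     # two intensity clusters
centers, u = fuzzy_c_means(x)
labels = u.argmax(axis=1)                        # hard labels for inspection
```

For segmentation, each pixel's intensity (or feature vector) would play the role of `x`, and the hardened memberships define the tumor/non-tumor regions.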
Affiliation(s)
- Usharani Bhimavarapu
- Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram 522302, India.
- Nalini Chintalapudi
- Clinical Research Centre, School of Medicinal and Health Products Sciences, University of Camerino, 62032 Camerino, Italy.
- Gopi Battineni
- Clinical Research Centre, School of Medicinal and Health Products Sciences, University of Camerino, 62032 Camerino, Italy.
6. Feng Y. An integrated machine learning-based model for joint diagnosis of ovarian cancer with multiple test indicators. J Ovarian Res 2024; 17:45. PMID: 38378582; PMCID: PMC10877874; DOI: 10.1186/s13048-024-01365-9.
Abstract
OBJECTIVE To construct a machine learning diagnostic model that integrates feature dimensionality reduction techniques with artificial neural network classifiers, developing the value of routine clinical blood indexes for the auxiliary diagnosis of ovarian cancer. METHODS Patients with clearly diagnosed ovarian cancer in our hospital were collected as a case group (n = 185), and three groups formed the overall control group: patients with other malignant otolaryngology tumors (n = 138), patients with benign otolaryngology diseases (n = 339), and subjects with normal physical examinations (n = 92). This paper also proposes a fully automated segmentation network for magnetic resonance images of ovarian cancer to improve the reproducibility of tumor segmentation results while effectively reducing the burden on radiologists. A pre-trained ResNet50 is used, and the three edge output modules are fused to obtain the final segmentation results. The segmentation results of the proposed network architecture are compared with those of a U-Net-based architecture, and the effects of different loss functions and region-of-interest (ROI) sizes on segmentation performance are analyzed. RESULTS The average Dice similarity coefficient, average sensitivity, average specificity, and average Hausdorff distance of the proposed network's segmentation results reached 83.62%, 89.11%, 96.37%, and 8.50, respectively, outperforming the U-Net-based method. For ROIs containing tumor tissue, smaller ROI sizes produced better segmentation, while the loss functions tested differed little. The area under the ROC curve of the machine learning diagnostic model reached 0.948, with a sensitivity of 91.9% and a specificity of 86.9%; its diagnostic efficacy was significantly better than detecting CA125 alone. The model accurately diagnosed ovarian cancer across disease stages and showed discriminative ability for ovarian cancer in all three control subgroups. CONCLUSION Using machine learning to integrate multiple conventional test indicators can effectively improve the diagnostic efficacy of ovarian cancer, providing a new approach for intelligent auxiliary diagnosis of ovarian cancer.
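The reported area under the ROC curve (AUC) equals the probability that a randomly chosen case receives a higher score than a randomly chosen control (the Mann-Whitney U formulation). A sketch with toy scores, not data from the study:

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (case, control) pairs where the case outscores the control,
    with half credit for ties."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

labels = np.array([1, 1, 1, 0, 0, 0])            # 3 cases, 3 controls
scores = np.array([0.9, 0.8, 0.4, 0.6, 0.3, 0.2])
auc = roc_auc(scores, labels)
```

Here 8 of the 9 case-control pairs are ranked correctly, so the AUC is 8/9 ≈ 0.889; a model with AUC 0.948, as reported, ranks about 95% of such pairs correctly.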
Affiliation(s)
- Yiwen Feng
- Departments of Obstetrics and Gynecology, Shanghai General Hospital, Shanghai Jiaotong University School of Medicine, Shanghai, 200080, P.R. China.
- Jiuquan Hospital, Shanghai General Hospital, Shanghai, 200003, China.
7. Jenul A, Stokmo HL, Schrunner S, Hjortland GO, Revheim ME, Tomic O. Novel ensemble feature selection techniques applied to high-grade gastroenteropancreatic neuroendocrine neoplasms for the prediction of survival. Comput Methods Programs Biomed 2024; 244:107934. PMID: 38016391; DOI: 10.1016/j.cmpb.2023.107934.
Abstract
BACKGROUND AND OBJECTIVE Determining the most informative features for predicting the overall survival of patients diagnosed with high-grade gastroenteropancreatic neuroendocrine neoplasms is crucial both for improving individual treatment plans and for the biological understanding of the disease. The main objective of this study is to evaluate modern ensemble feature selection techniques for this purpose with respect to (a) quantitative performance measures such as predictive performance, (b) clinical interpretability, and (c) the effect of integrating prior expert knowledge. METHODS The Repeated Elastic Net Technique for Feature Selection (RENT) and the User-Guided Bayesian Framework for Feature Selection (UBayFS) are recently developed ensemble feature selectors investigated in this work. Both allow the user to identify informative features in datasets with low sample sizes and focus on model interpretability. While RENT is purely data-driven, UBayFS can integrate expert knowledge a priori into the feature selection process. We compare both feature selectors on a dataset comprising 63 patients and 110 features from multiple sources, including baseline patient characteristics, baseline blood values, tumor histology, imaging, and treatment information. RESULTS Our experiments involve data-driven and expert-driven setups, as well as combinations of both. In a five-fold cross-validated experiment without expert knowledge, both feature selectors allow accurate predictions: a reduction from 110 to approximately 20 features (around 82%) delivers near-optimal predictive performance, with minor variations according to the choice of feature selector, predictive model, and fold. Thereafter, we use findings from the clinical literature as a source of expert knowledge. Expert knowledge has a stabilizing effect on the feature set (an increase in stability of approximately 40%), while its impact on predictive performance is limited. CONCLUSIONS The features WHO Performance Status, Albumin, Platelets, Ki-67, Tumor Morphology, Total MTV, Total TLG, and SUVmax are the most stable and predictive in our study. Overall, this study demonstrates the practical value of feature selection in medical applications, not only to improve quantitative performance but also to deliver potentially new insights to experts.
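The ensemble feature selection idea behind RENT can be sketched as: fit a sparse linear model on many random subsamples and keep only the features selected in a large fraction of runs. The sketch below substitutes a tiny coordinate-descent lasso for the elastic net and a single selection-frequency criterion for RENT's richer criteria; the regularization strength and cutoffs are illustrative assumptions.

```python
import numpy as np

def lasso_cd(X, y, lam=0.1, iters=200):
    """Tiny coordinate-descent lasso (stand-in for the elastic net):
    soft-threshold each coefficient in turn until convergence."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]       # residual excluding feature j
            rho = X[:, j] @ r
            w[j] = np.sign(rho) * max(abs(rho) - lam * n, 0.0) / col_sq[j]
    return w

def ensemble_select(X, y, runs=20, frac=0.8, keep=0.9, seed=0):
    """Refit on random subsamples; keep features that are nonzero in at
    least `keep` of the runs (selection-frequency criterion)."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(X.shape[1])
    for _ in range(runs):
        idx = rng.choice(len(X), int(frac * len(X)), replace=False)
        counts += lasso_cd(X[idx], y[idx]) != 0
    return np.where(counts / runs >= keep)[0]

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 8))
y = 3 * X[:, 0] - 2 * X[:, 3] + 0.1 * rng.normal(size=120)  # only features 0 and 3 matter
selected = ensemble_select(X, y)
```

Features that survive the frequency cutoff are exactly the "stable" features the abstract refers to; a prior like UBayFS's expert knowledge would act by reweighting these counts before thresholding.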
Affiliation(s)
- Anna Jenul
- Department of Data Science, Norwegian University of Life Sciences, Universitetstunet 3, 1433 Ås, Norway.
- Henning Langen Stokmo
- Department of Nuclear Medicine, Division of Radiology and Nuclear Medicine, Oslo University Hospital, Oslo, Norway; Institute of Clinical Medicine, University of Oslo, Oslo, Norway.
- Stefan Schrunner
- Department of Data Science, Norwegian University of Life Sciences, Universitetstunet 3, 1433 Ås, Norway.
- Mona-Elisabeth Revheim
- Department of Nuclear Medicine, Division of Radiology and Nuclear Medicine, Oslo University Hospital, Oslo, Norway; Institute of Clinical Medicine, University of Oslo, Oslo, Norway; The Intervention Centre, Division of Technology and Innovation, Oslo University Hospital, Oslo, Norway.
- Oliver Tomic
- Department of Data Science, Norwegian University of Life Sciences, Universitetstunet 3, 1433 Ås, Norway.
8. Khosravi P, Mohammadi S, Zahiri F, Khodarahmi M, Zahiri J. AI-Enhanced Detection of Clinically Relevant Structural and Functional Anomalies in MRI: Traversing the Landscape of Conventional to Explainable Approaches. J Magn Reson Imaging 2024. PMID: 38243677; DOI: 10.1002/jmri.29247.
Abstract
Anomaly detection in medical imaging, particularly within the realm of magnetic resonance imaging (MRI), stands as a vital area of research with far-reaching implications across various medical fields. This review meticulously examines the integration of artificial intelligence (AI) in anomaly detection for MR images, spotlighting its transformative impact on medical diagnostics. We delve into the forefront of AI applications in MRI, exploring advanced machine learning (ML) and deep learning (DL) methodologies that are pivotal in enhancing the precision of diagnostic processes. The review provides a detailed analysis of preprocessing, feature extraction, classification, and segmentation techniques, alongside a comprehensive evaluation of commonly used metrics. Further, this paper explores the latest developments in ensemble methods and explainable AI, offering insights into future directions and potential breakthroughs. This review synthesizes current insights, offering a valuable guide for researchers, clinicians, and medical imaging experts. It highlights AI's crucial role in improving the precision and speed of detecting key structural and functional irregularities in MRI. Our exploration of innovative techniques and trends furthers MRI technology development, aiming to refine diagnostics, tailor treatments, and elevate patient care outcomes. LEVEL OF EVIDENCE: 5. TECHNICAL EFFICACY: Stage 1.
Affiliation(s)
- Pegah Khosravi
- Department of Biological Sciences, New York City College of Technology, CUNY, New York City, New York, USA.
- The CUNY Graduate Center, City University of New York, New York City, New York, USA.
- Saber Mohammadi
- Department of Biological Sciences, New York City College of Technology, CUNY, New York City, New York, USA.
- Department of Biophysics, Tarbiat Modares University, Tehran, Iran.
- Fatemeh Zahiri
- Department of Cell and Molecular Sciences, Kharazmi University, Tehran, Iran.
- Javad Zahiri
- Department of Neuroscience, University of California San Diego, San Diego, California, USA.
9. Xu L, Guo C, Liu M. A weighted distance-based dynamic ensemble regression framework for gastric cancer survival time prediction. Artif Intell Med 2024; 147:102740. PMID: 38184344; DOI: 10.1016/j.artmed.2023.102740.
Abstract
Accurate prediction of gastric cancer patient survival time is essential for clinical decision-making, but unified static models lack specificity and flexibility owing to the varying survival outcomes among gastric cancer patients. We address these problems with an ensemble learning approach that adaptively assigns greater weights to similar patients, making more targeted predictions of an individual's survival time. Treating survival prediction as a regression problem, we introduce a weighted dynamic ensemble regression framework. To better identify similar patients, we devise a patient-similarity measure that accounts for the diverse impacts of features, and use it to design both a weighted K-means clustering method and a fuzzy K-means sampling technique to group patients and train corresponding base regressors. For a targeted prediction, we weight each base regressor by the similarity between the patient to be predicted and the patient clusters, then integrate the results. The model is validated on a dataset of 7791 patients, outperforming other models on three evaluation metrics: root mean square error, mean absolute error, and coefficient of determination. The weighted dynamic ensemble regression strategy improves the baseline model by 1.75%, 2.12%, and 13.45% on the three respective metrics while also mitigating the imbalanced survival time distribution issue. This enhanced performance has been statistically validated, even on six public datasets of different sizes. By considering feature variations, patients with distinct survival profiles can be effectively differentiated and predictive performance enhanced. The results of the proposed model can be invaluable in guiding decisions on treatment plans and resource allocation, and the model has potential for broader application to prognosis for other cancers and to similar regression problems in other domains.
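The cluster-then-weight idea can be sketched as: group training samples with K-means, fit one base regressor per cluster, and blend predictions with weights inversely proportional to the test sample's distance from each cluster centre. Plain K-means, the least-squares base regressors, and the inverse-distance weighting below are illustrative stand-ins for the paper's feature-weighted clustering and fuzzy sampling.

```python
import numpy as np

def fit_dynamic_ensemble(X, y, k=2, iters=30, seed=0):
    """Group samples with vanilla K-means, then fit one least-squares
    linear base regressor (with intercept) per cluster."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([X[lab == j].mean(0) if (lab == j).any() else centers[j]
                            for j in range(k)])
    models = [np.linalg.lstsq(np.c_[X[lab == j], np.ones((lab == j).sum())],
                              y[lab == j], rcond=None)[0] for j in range(k)]
    return centers, models

def predict(x, centers, models):
    """Blend the base predictions with inverse-distance weights, so the
    regressor trained on the most similar patients dominates."""
    d = np.sqrt(((centers - x) ** 2).sum(1)) + 1e-9
    w = (1 / d) / (1 / d).sum()
    return float(w @ [np.r_[x, 1.0] @ m for m in models])

rng = np.random.default_rng(1)
X = np.r_[rng.normal(0, 0.3, (30, 2)), rng.normal(5, 0.3, (30, 2))]
y = X[:, 0] + X[:, 1]                        # exactly linear toy target
centers, models = fit_dynamic_ensemble(X, y)
```

Because the blend weights are recomputed per test sample, the ensemble is "dynamic": two patients with different feature profiles receive predictions dominated by different base regressors.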
Affiliation(s)
- Liangchen Xu
- Institute of Systems Engineering, Dalian University of Technology, Dalian 116024, China.
- Chonghui Guo
- Institute of Systems Engineering, Dalian University of Technology, Dalian 116024, China.
- Mucan Liu
- Institute of Systems Engineering, Dalian University of Technology, Dalian 116024, China.
10. Rai HM, Yoo J. A comprehensive analysis of recent advancements in cancer detection using machine learning and deep learning models for improved diagnostics. J Cancer Res Clin Oncol 2023; 149:14365-14408. PMID: 37540254; DOI: 10.1007/s00432-023-05216-w.
Abstract
PURPOSE Millions of people lose their lives to fatal diseases, and cancer is among the most lethal, with risk factors including obesity, alcohol consumption, infections, ultraviolet radiation, smoking, and unhealthy lifestyles. Cancer is abnormal, uncontrolled tissue growth that may spread to body parts beyond where it originated. It is therefore essential to diagnose cancer at an early stage to provide correct and timely treatment; manual diagnosis and diagnostic error contribute to many patient deaths, so much research is devoted to automatic, accurate detection of cancer at an early stage. METHODS In this paper, we present a comparative analysis of recent advances in the detection of various cancer types using traditional machine learning (ML) and deep learning (DL) models. The study covers four cancer types (brain, lung, skin, and breast) and their detection using ML and DL techniques. This extensive review includes 130 studies, of which 56 concern ML-based and 74 DL-based cancer detection techniques. Only peer-reviewed research papers published in the recent five-year span (2018-2023) were included, analyzed by year of publication, features utilized, best model, dataset/images utilized, and best accuracy. We reviewed ML- and DL-based techniques separately and used accuracy as the performance evaluation metric to maintain homogeneity when verifying classifier efficiency. RESULTS Among all reviewed literature, DL techniques achieved the highest accuracy of 100%, while ML techniques achieved 99.89%. The lowest accuracies achieved with DL and ML approaches were 70% and 75.48%, respectively. The difference in accuracy between the highest- and lowest-performing models is about 28.8% for skin cancer detection. In addition, the key findings and challenges for each cancer type detected with ML and DL techniques are presented, and a comparative analysis between the best- and worst-performing models is provided for future research. Although the analysis is based on accuracy and the parameters above, the results indicate significant scope for improving classification efficiency. CONCLUSION Both ML and DL techniques hold promise for the early detection of various cancer types. However, the study identifies specific challenges that must be addressed before these techniques can be widely implemented in clinical settings. The presented results offer valuable guidance for future research in cancer detection, emphasizing the need for continued advancement in ML- and DL-based approaches to improve diagnostic accuracy and ultimately save more lives.
Affiliation(s)
- Hari Mohan Rai
- School of Computing, Gachon University, 1342 Seongnam-daero, Sujeong-gu, Seongnam-si, 13120, Gyeonggi-do, Republic of Korea.
- Joon Yoo
- School of Computing, Gachon University, 1342 Seongnam-daero, Sujeong-gu, Seongnam-si, 13120, Gyeonggi-do, Republic of Korea.
11
Xu J, Lodge T, Kingdon C, Strong JWL, Maclennan J, Lacerda E, Kujawski S, Zalewski P, Huang WE, Morten KJ. Developing a Blood Cell-Based Diagnostic Test for Myalgic Encephalomyelitis/Chronic Fatigue Syndrome Using Peripheral Blood Mononuclear Cells. ADVANCED SCIENCE (WEINHEIM, BADEN-WURTTEMBERG, GERMANY) 2023; 10:e2302146. [PMID: 37653608 PMCID: PMC10602530 DOI: 10.1002/advs.202302146] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/03/2023] [Revised: 07/12/2023] [Indexed: 09/02/2023]
Abstract
Myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) is characterized by debilitating fatigue that profoundly impacts patients' lives. Diagnosis of ME/CFS remains challenging, with most patients relying on self-report, questionnaires, and subjective measures to receive a diagnosis, and many never receiving a clear diagnosis at all. In this study, a single-cell Raman platform and artificial intelligence are utilized to analyze blood cells from 98 human subjects, including 61 ME/CFS patients of varying disease severity and 37 healthy and disease controls. These results demonstrate that Raman profiles of blood cells can distinguish between healthy individuals, disease controls, and ME/CFS patients with high accuracy (91%), and can further differentiate between mild, moderate, and severe ME/CFS patients (84%). Additionally, specific Raman peaks that correlate with ME/CFS phenotypes and have the potential to provide insights into biological changes and support the development of new therapeutics are identified. This study presents a promising approach for aiding in the diagnosis and management of ME/CFS and can be extended to other unexplained chronic diseases such as long COVID and post-treatment Lyme disease syndrome, which share many of the same symptoms as ME/CFS.
Affiliation(s)
- Jiabao Xu
- Department of Engineering Science, University of Oxford, Parks Road, Oxford OX1 3PJ, UK
- Division of Biomedical Engineering, James Watt School of Engineering, University of Glasgow, Glasgow G12 8LT, UK
- Tiffany Lodge
- Nuffield Department of Women's and Reproductive Health, University of Oxford, The Women Centre, John Radcliffe Hospital, Headley Way, Headington, Oxford OX3 9DU, UK
- Caroline Kingdon
- Faculty of Infectious Diseases, London School of Hygiene and Tropical Medicine, Keppel St, London WC1E 7HT, UK
- James W. L. Strong
- Nuffield Department of Women's and Reproductive Health, University of Oxford, The Women Centre, John Radcliffe Hospital, Headley Way, Headington, Oxford OX3 9DU, UK
- John Maclennan
- Soft Cell Biological Research, Attwood Innovation Center, 453 S 600 E, St. George, UT 84770, USA
- Eliana Lacerda
- Faculty of Infectious Diseases, London School of Hygiene and Tropical Medicine, Keppel St, London WC1E 7HT, UK
- Slawomir Kujawski
- Department of Exercise Physiology and Functional Anatomy, Collegium Medicum in Bydgoszcz, Nicolaus Copernicus University in Torun, Swietojanska 20, 85-077 Bydgoszcz, Poland
- Pawel Zalewski
- Department of Exercise Physiology and Functional Anatomy, Collegium Medicum in Bydgoszcz, Nicolaus Copernicus University in Torun, Swietojanska 20, 85-077 Bydgoszcz, Poland
- Department of Experimental and Clinical Physiology, Warsaw Medical University, Stefana Banacha 2a, 02-097 Warszawa, Poland
- Wei E. Huang
- Department of Engineering Science, University of Oxford, Parks Road, Oxford OX1 3PJ, UK
- Karl J. Morten
- Nuffield Department of Women's and Reproductive Health, University of Oxford, The Women Centre, John Radcliffe Hospital, Headley Way, Headington, Oxford OX3 9DU, UK
12
Mercaldo F, Brunese L, Martinelli F, Santone A, Cesarelli M. Explainable Convolutional Neural Networks for Brain Cancer Detection and Localisation. SENSORS (BASEL, SWITZERLAND) 2023; 23:7614. [PMID: 37688069 PMCID: PMC10490676 DOI: 10.3390/s23177614] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/10/2023] [Revised: 08/07/2023] [Accepted: 08/31/2023] [Indexed: 09/10/2023]
Abstract
Brain cancer is widely recognised as one of the most aggressive types of tumors; approximately 70% of patients diagnosed with this malignant cancer do not survive. In this paper, we propose a method for detecting and localising brain cancer, starting from the analysis of magnetic resonance images. The proposed method exploits deep learning, in particular convolutional neural networks and class activation mapping, to provide explainability by highlighting the areas of the medical image related to brain cancer (from the model's point of view). We evaluate the proposed method on 3000 magnetic resonance images from a freely available dataset. The results we obtained are encouraging: we reach an accuracy ranging from 97.83% to 99.67% in brain cancer detection with four different models (VGG16, ResNet50, AlexNet, and MobileNet), thus showing the effectiveness of the proposed method.
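The class-activation-mapping idea this abstract relies on reduces to a weighted sum: the CAM for a class is the class-weighted combination of the last convolutional layer's feature maps, rectified and normalised for overlay on the image. A minimal numpy sketch with toy activations (not the paper's trained VGG16/ResNet50 models; the feature maps and weights here are invented for illustration):

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Compute a CAM as the class-weighted sum of the final conv feature maps.

    feature_maps: (C, H, W) activations from the last convolutional layer
    class_weights: (C,) weights of the target class in the final dense layer
    """
    cam = np.tensordot(class_weights, feature_maps, axes=([0], [0]))  # (H, W)
    cam = np.maximum(cam, 0.0)        # keep only positive evidence for the class
    if cam.max() > 0:
        cam = cam / cam.max()         # normalise to [0, 1] for heat-map overlay
    return cam

# Toy example: 4 feature maps of size 8x8, one with a localized activation
rng = np.random.default_rng(0)
fmaps = rng.random((4, 8, 8)) * 0.1
fmaps[2, 3:5, 3:5] = 5.0              # strong response in map 2, region (3:5, 3:5)
weights = np.array([0.0, 0.0, 1.0, 0.0])   # the class relies on map 2 only
cam = class_activation_map(fmaps, weights)
```

The bright region of `cam` then marks where the model "looked", which is exactly the localisation signal the authors overlay on MRI slices.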
Affiliation(s)
- Francesco Mercaldo
- Department of Medicine and Health Sciences “Vincenzo Tiberio”, University of Molise, 86100 Campobasso, Italy
- Institute for Informatics and Telematics, National Research Council of Italy, 56121 Pisa, Italy
- Luca Brunese
- Department of Medicine and Health Sciences “Vincenzo Tiberio”, University of Molise, 86100 Campobasso, Italy
- Fabio Martinelli
- Institute for Informatics and Telematics, National Research Council of Italy, 56121 Pisa, Italy
- Antonella Santone
- Department of Medicine and Health Sciences “Vincenzo Tiberio”, University of Molise, 86100 Campobasso, Italy
- Mario Cesarelli
- Department of Engineering, University of Sannio, 82100 Benevento, Italy
13
Alzahrani N, Henry A, Clark A, Murray L, Nix M, Al-Qaisieh B. Geometric evaluations of CT and MRI based deep learning segmentation for brain OARs in radiotherapy. Phys Med Biol 2023; 68:175035. [PMID: 37579753 DOI: 10.1088/1361-6560/acf023] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2023] [Accepted: 08/14/2023] [Indexed: 08/16/2023]
Abstract
Objective. Deep-learning auto-contouring (DL-AC) promises standardisation of organ-at-risk (OAR) contouring, enhancing quality and improving efficiency in radiotherapy. No commercial models exist for OAR contouring based on brain magnetic resonance imaging (MRI). We trained and evaluated computed tomography (CT) and MRI OAR autosegmentation models in RayStation. To ascertain clinical usability, we investigated the geometric impact of contour editing before training on model quality. Approach. Retrospective glioma cases were randomly selected for training (n = 32, 47) and validation (n = 9, 10) for MRI and CT, respectively. Clinical contours were edited using international consensus (gold standard) based on MRI and CT. MRI models were trained (i) using the original clinical contours based on planning CT and rigidly registered T1-weighted gadolinium-enhanced MRI (MRIu), (ii) as (i), further edited based on CT anatomy to meet international consensus guidelines (MRIeCT), and (iii) as (i), further edited based on MRI anatomy (MRIeMRI). CT models were trained using (iv) original clinical contours (CTu) and (v) clinical contours edited based on CT anatomy (CTeCT). Auto-contours were geometrically compared to gold-standard validation contours (CTeCT or MRIeMRI) using the Dice Similarity Coefficient, sensitivity, and mean distance to agreement. Models' performances were compared using paired Student's t-testing. Main results. The edited autosegmentation models successfully generated more segmentations than the unedited models. Paired t-testing showed that editing the pituitary, orbits, optic nerves, lenses, and optic chiasm on MRI before training significantly improved at least one geometry metric. MRI-based DL-AC performed worse than CT-based in delineating the lacrimal gland, whereas the CT-based performed worse in delineating the optic chiasm. No significant differences were found between CTeCT and CTu except for the optic chiasm. Significance. T1w-MRI DL-AC could segment all brain OARs except the lacrimal glands, which cannot be easily visualized on T1w-MRI. Editing contours on MRI before model training improved geometric performance. MRI DL-AC in RT may improve consistency, quality, and efficiency but requires careful editing of training contours.
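The Dice Similarity Coefficient used in such geometric comparisons has a compact definition: twice the overlap of the two masks divided by the sum of their sizes. A minimal sketch on synthetic 2D masks (illustrative only, not the RayStation evaluation pipeline; the masks are invented):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice Similarity Coefficient between two binary segmentation masks:
    DSC = 2 * |A ∩ B| / (|A| + |B|), in [0, 1], 1 = perfect agreement."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Two overlapping contours: 6x6 squares shifted by one voxel in each axis
auto = np.zeros((10, 10), dtype=bool); auto[2:8, 2:8] = True
gold = np.zeros((10, 10), dtype=bool); gold[3:9, 3:9] = True
dsc = dice_coefficient(auto, gold)   # 2*25 / (36 + 36) ≈ 0.694
```

The same pairwise comparison, applied per OAR between auto-contour and gold-standard contour, yields the numbers the paired t-tests operate on.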
Affiliation(s)
- Nouf Alzahrani
- Department of Diagnostic Radiology, Faculty of Applied Medical Sciences, King Abdulaziz University, Jeddah, Saudi Arabia
- University of Leeds, School of Medicine, Leeds, United Kingdom
- St James's University Hospital, Department of Medical Physics and Engineering, Leeds Cancer Centre, Leeds, United Kingdom
- Ann Henry
- University of Leeds, School of Medicine, Leeds, United Kingdom
- St James's University Hospital, Department of Medical Physics and Engineering, Leeds Cancer Centre, Leeds, United Kingdom
- Anna Clark
- St James's University Hospital, Department of Medical Physics and Engineering, Leeds Cancer Centre, Leeds, United Kingdom
- Louise Murray
- University of Leeds, School of Medicine, Leeds, United Kingdom
- St James's University Hospital, Department of Medical Physics and Engineering, Leeds Cancer Centre, Leeds, United Kingdom
- Michael Nix
- St James's University Hospital, Department of Medical Physics and Engineering, Leeds Cancer Centre, Leeds, United Kingdom
- Bashar Al-Qaisieh
- St James's University Hospital, Department of Medical Physics and Engineering, Leeds Cancer Centre, Leeds, United Kingdom
14
Li Z, Li H, Ralescu AL, Dillman JR, Parikh NA, He L. A novel collaborative self-supervised learning method for radiomic data. Neuroimage 2023; 277:120229. [PMID: 37321358 PMCID: PMC10440826 DOI: 10.1016/j.neuroimage.2023.120229] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2023] [Revised: 05/19/2023] [Accepted: 06/12/2023] [Indexed: 06/17/2023] Open
Abstract
Computer-aided disease diagnosis from radiomic data is important in many medical applications. However, developing such a technique relies on labeling radiological images, which is a time-consuming, labor-intensive, and expensive process. In this work, we present the first collaborative self-supervised learning method to address the challenge of insufficient labeled radiomic data, whose characteristics differ from those of text and image data. To achieve this, we present two collaborative pretext tasks that explore the latent pathological or biological relationships between regions of interest and the similarity and dissimilarity of information between subjects. Our method collaboratively learns robust latent feature representations from radiomic data in a self-supervised manner to reduce human annotation effort, which benefits disease diagnosis. We compared our proposed method with other state-of-the-art self-supervised learning methods in a simulation study and on two independent datasets. Extensive experimental results demonstrate that our method outperforms other self-supervised learning methods on both classification and regression tasks. With further refinement, our method has potential advantages for automatic disease diagnosis when large-scale unlabeled data are available.
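The abstract describes its pretext tasks only at a high level. One common way to encode "similarity and dissimilarity of information between subjects" in self-supervised learning is an InfoNCE-style contrastive loss, sketched below as an illustration (this is a generic formulation, not the authors' exact objective; the toy embeddings are invented):

```python
import numpy as np

def contrastive_pretext_loss(z_anchor, z_positive, z_negatives, temperature=0.1):
    """InfoNCE-style loss: pull the anchor embedding toward a 'similar' subject
    and away from 'dissimilar' subjects (cosine similarity, softmax over pairs)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    sims = np.array([cos(z_anchor, z_positive)] +
                    [cos(z_anchor, z) for z in z_negatives]) / temperature
    sims = sims - sims.max()                      # numerical stability
    return -np.log(np.exp(sims[0]) / np.exp(sims).sum())

# Toy embeddings: the anchor matches the positive, not the negative,
# so the loss is close to zero
anchor   = np.array([1.0, 0.0])
positive = np.array([1.0, 0.0])
negative = np.array([0.0, 1.0])
loss = contrastive_pretext_loss(anchor, positive, [negative])
```

Minimising such a loss over unlabeled subjects is what lets the representation form without any human annotation.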
Affiliation(s)
- Zhiyuan Li
- Imaging Research Center, Department of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Computer Science, University of Cincinnati, Cincinnati, OH, USA
- Hailong Li
- Imaging Research Center, Department of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Artificial Intelligence Imaging Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Neurodevelopmental Disorders Prevention Center, Perinatal Institute, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Anca L Ralescu
- Department of Computer Science, University of Cincinnati, Cincinnati, OH, USA
- Jonathan R Dillman
- Imaging Research Center, Department of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Artificial Intelligence Imaging Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Nehal A Parikh
- Neurodevelopmental Disorders Prevention Center, Perinatal Institute, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Lili He
- Imaging Research Center, Department of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Artificial Intelligence Imaging Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Neurodevelopmental Disorders Prevention Center, Perinatal Institute, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Computer Science, University of Cincinnati, Cincinnati, OH, USA; Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH, USA
15
Kong W, Zhu J, Bi S, Huang L, Wu P, Zhu S. Adaptive best subset selection algorithm and genetic algorithm aided ensemble learning method identified a robust severity score of COVID-19 patients. IMETA 2023; 2:e126. [PMID: 38867930 PMCID: PMC10989835 DOI: 10.1002/imt2.126] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/19/2023] [Accepted: 05/31/2023] [Indexed: 06/14/2024]
Abstract
We used an integrated ensemble learning method to build a stable prediction model for severity in COVID-19 patients, which was validated in multicenter cohorts.
Affiliation(s)
- Weikaixin Kong
- Institute for Molecular Medicine Finland (FIMM), HiLIFE, University of Helsinki, Helsinki, Finland
- Jie Zhu
- Institute for Molecular Medicine Finland (FIMM), HiLIFE, University of Helsinki, Helsinki, Finland
- Suzhen Bi
- Institute of Translational Medicine, The Affiliated Hospital of Qingdao University, College of Medicine, Qingdao University, Qingdao, China
- Liting Huang
- Institute of Translational Medicine, The Affiliated Hospital of Qingdao University, College of Medicine, Qingdao University, Qingdao, China
- Peng Wu
- Cancer Biology Research Center (Key Laboratory of the Ministry of Education), Tongji Medical College, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
- Department of Gynecologic Oncology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Su-Jie Zhu
- Institute of Translational Medicine, The Affiliated Hospital of Qingdao University, College of Medicine, Qingdao University, Qingdao, China
16
Wang Y, Li L, Li C, Xi Y, Lin Y, Wang S. Expert knowledge guided manifold representation learning for magnetic resonance imaging-based glioma grading. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104876] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/09/2023]
17
Tabassum M, Suman AA, Suero Molina E, Pan E, Di Ieva A, Liu S. Radiomics and Machine Learning in Brain Tumors and Their Habitat: A Systematic Review. Cancers (Basel) 2023; 15:3845. [PMID: 37568660 PMCID: PMC10417709 DOI: 10.3390/cancers15153845] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2023] [Accepted: 07/24/2023] [Indexed: 08/13/2023] Open
Abstract
Radiomics is a rapidly evolving field that involves extracting and analysing quantitative features from medical images, such as computed tomography or magnetic resonance images. Radiomics has shown promise in brain tumor diagnosis and patient-prognosis prediction by providing more detailed and objective information about tumors' features than can be obtained from visual inspection of the images alone. Radiomics data can be analyzed to determine their correlation with a tumor's genetic status and grade, as well as in the assessment of its recurrence vs. therapeutic response, among other features. In consideration of the multi-parametric and high-dimensional space of features extracted by radiomics, machine learning can further improve tumor diagnosis, treatment-response assessment, and patients' prognoses. There is a growing recognition that tumors and their microenvironments (habitats) mutually influence each other: tumor cells can alter the microenvironment to increase their growth and survival, while habitats can also influence the behavior of tumor cells. In this systematic review, we investigate the current limitations and future developments of radiomics and machine learning in analysing brain tumors and their habitats.
Affiliation(s)
- Mehnaz Tabassum
- Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, Sydney, NSW 2109, Australia
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Macquarie University, Sydney, NSW 2109, Australia
- Abdulla Al Suman
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Macquarie University, Sydney, NSW 2109, Australia
- Eric Suero Molina
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Macquarie University, Sydney, NSW 2109, Australia
- Department of Neurosurgery, University Hospital of Münster, 48149 Münster, Germany
- Elizabeth Pan
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Macquarie University, Sydney, NSW 2109, Australia
- Faculty of Medicine and Health Sciences, Macquarie University, Sydney, NSW 2109, Australia
- Antonio Di Ieva
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Macquarie University, Sydney, NSW 2109, Australia
- Sidong Liu
- Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, Sydney, NSW 2109, Australia
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Macquarie University, Sydney, NSW 2109, Australia
18
Jiang X, Hu Z, Wang S, Zhang Y. Deep Learning for Medical Image-Based Cancer Diagnosis. Cancers (Basel) 2023; 15:3608. [PMID: 37509272 PMCID: PMC10377683 DOI: 10.3390/cancers15143608] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2023] [Revised: 07/10/2023] [Accepted: 07/10/2023] [Indexed: 07/30/2023] Open
Abstract
(1) Background: The application of deep learning technology to cancer diagnosis based on medical images is one of the research hotspots in artificial intelligence and computer vision. Because cancer diagnosis requires very high accuracy and timeliness, because medical imaging has inherent particularity and complexity, and because deep learning methods are developing rapidly, a comprehensive review of relevant studies is necessary to help readers better understand the current research status and ideas. (2) Methods: Five types of radiological images, namely X-ray, ultrasound (US), computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), as well as histopathological images, are reviewed in this paper. The basic architecture of deep learning and classical pretrained models are comprehensively reviewed. In particular, advanced neural networks emerging in recent years, including transfer learning, ensemble learning (EL), graph neural networks, and vision transformers (ViT), are introduced. Overfitting prevention methods are summarized, including batch normalization, dropout, weight initialization, and data augmentation. The application of deep learning technology in medical image-based cancer analysis is sorted out. (3) Results: Deep learning has achieved great success in medical image-based cancer diagnosis, showing good results in image classification, image reconstruction, image detection, image segmentation, image registration, and image synthesis. However, the lack of high-quality labeled datasets limits the role of deep learning, which also faces challenges in rare cancer diagnosis, multi-modal image fusion, model explainability, and generalization. (4) Conclusions: There is a need for more public standard databases for cancer. Pre-trained models based on deep neural networks have the potential to be improved, and special attention should be paid to research on multimodal data fusion and the supervised paradigm. Technologies such as ViT, ensemble learning, and few-shot learning will bring surprises to cancer diagnosis based on medical images.
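Of the overfitting-prevention methods the review summarizes, dropout is the easiest to show concretely: during training, activations are randomly zeroed and the survivors rescaled by 1/(1-p) so the expected activation is unchanged (inverted dropout). A minimal numpy sketch, not tied to any particular framework:

```python
import numpy as np

def dropout(x, p_drop, rng, train=True):
    """Inverted dropout: randomly zero activations during training and
    rescale survivors by 1/(1 - p_drop) so the expected value is unchanged.
    At inference (train=False) the input passes through untouched."""
    if not train or p_drop == 0.0:
        return x
    keep = rng.random(x.shape) >= p_drop
    return x * keep / (1.0 - p_drop)

rng = np.random.default_rng(42)
x = np.ones(1000)
y = dropout(x, 0.5, rng)
# Roughly half the units are zeroed; the survivors are scaled to 2.0,
# so the mean activation stays close to the original 1.0
```

The random masking forces the network not to rely on any single co-adapted unit, which is why it regularises.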
Grants
- RM32G0178B8 BBSRC
- MC_PC_17171 MRC, UK
- RP202G0230 Royal Society, UK
- AA/18/3/34220 BHF, UK
- RM60G0680 Hope Foundation for Cancer Research, UK
- P202PF11 GCRF, UK
- RP202G0289 Sino-UK Industrial Fund, UK
- P202ED10, P202RE969 LIAS, UK
- P202RE237 Data Science Enhancement Fund, UK
- 24NN201 Fight for Sight, UK
- OP202006 Sino-UK Education Fund, UK
- 2023SJZD125 Major project of philosophy and social science research in colleges and universities in Jiangsu Province, China
Affiliation(s)
- Xiaoyan Jiang
- School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Zuojin Hu
- School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Shuihua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Yudong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
19
Lee S, Lee SY, Jung JY, Nam Y, Jeon HJ, Jung CK, Shin SH, Chung YG. Ensemble learning-based radiomics with multi-sequence magnetic resonance imaging for benign and malignant soft tissue tumor differentiation. PLoS One 2023; 18:e0286417. [PMID: 37256875 DOI: 10.1371/journal.pone.0286417] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2022] [Accepted: 05/15/2023] [Indexed: 06/02/2023] Open
Abstract
Many previous studies have focused on differentiating between benign and malignant soft tissue tumors using radiomics models based on various magnetic resonance imaging (MRI) sequences, but it is still unclear how to set up the input radiomic features from multiple MRI sequences. Here, we evaluated two types of radiomics models generated using different feature-incorporation strategies. To differentiate between benign and malignant soft tissue tumors (STTs), we compared the diagnostic performance of an ensemble of random forest (R) models with single-sequence MRI inputs to R models with pooled multi-sequence MRI inputs. One hundred twenty-five STT patients with preoperative MRI were retrospectively included and divided into training (n = 100) and test (n = 25) sets. MRI included T1-weighted (T1-WI), T2-weighted (T2-WI), contrast-enhanced (CE)-T1-WI, diffusion-weighted images (DWIs, b = 800 s/mm2), and apparent diffusion coefficient (ADC) maps. After tumor segmentation on each sequence, 100 original radiomic features were extracted from each sequence image and divided into three feature sets: T features from T1- and T2-WI, CE features from CE-T1-WI, and D features from DWI and ADC maps. Four radiomics models were built using Lasso and R with four combinations of the three feature sets as inputs: T features (R-T), T+CE features (R-C), T+D features (R-D), and T+CE+D features (R-A) (Type-1 model). An ensemble model was built by soft voting of five single-sequence-based R models (Type-2 model). The AUC, sensitivity, specificity, and accuracy of each model were calculated with five-fold cross-validation. In the Type-1 models, AUC, sensitivity, specificity, and accuracy were 0.752, 71.8%, 61.1%, and 67.2% in R-T; 0.756, 76.1%, 70.4%, and 73.6% in R-C; 0.750, 77.5%, 63.0%, and 71.2% in R-D; and 0.749, 74.6%, 61.1%, and 68.8% in R-A models, respectively. The AUC, sensitivity, specificity, and accuracy of the Type-2 model were 0.774, 76.1%, 68.5%, and 72.8%. In conclusion, an ensemble method is beneficial for incorporating features from multi-sequence MRI and showed diagnostic robustness in differentiating malignant STTs.
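The Type-2 construction (train one random-forest model per MRI sequence, then soft-vote by averaging class probabilities) can be sketched with scikit-learn on synthetic data. The split of feature columns into "sequences" below is purely illustrative, standing in for the study's per-sequence radiomic feature sets:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for radiomic features from three MRI "sequences"
# (columns 0-9, 10-19, 20-29; an invented split, not the paper's features)
X, y = make_classification(n_samples=300, n_features=30,
                           n_informative=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

sequence_cols = [slice(0, 10), slice(10, 20), slice(20, 30)]
models = []
for cols in sequence_cols:
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    rf.fit(X_tr[:, cols], y_tr)          # one model per "sequence"
    models.append((cols, rf))

# Soft voting: average the class-probability outputs of the per-sequence models
proba = np.mean([rf.predict_proba(X_te[:, cols]) for cols, rf in models], axis=0)
ensemble_pred = proba.argmax(axis=1)
accuracy = (ensemble_pred == y_te).mean()
```

Averaging probabilities rather than hard votes is what makes the vote "soft": a confident model can outweigh two uncertain ones.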
Affiliation(s)
- Seungeun Lee
- Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- So-Yeon Lee
- Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Joon-Yong Jung
- Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Yoonho Nam
- Division of Biomedical Engineering, Hankuk University of Foreign Studies, Seoul, Gyeonggi-do, Republic of Korea
- Hyeon Jun Jeon
- Division of Biomedical Engineering, Hankuk University of Foreign Studies, Seoul, Gyeonggi-do, Republic of Korea
- Chan-Kwon Jung
- Department of Pathology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Seung-Han Shin
- Department of Orthopedic Surgery, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Yang-Guk Chung
- Department of Orthopedic Surgery, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
20
Rajinikanth V, Vincent PMDR, Gnanaprakasam CN, Srinivasan K, Chang CY. Brain Tumor Class Detection in Flair/T2 Modality MRI Slices Using Elephant-Herd Algorithm Optimized Features. Diagnostics (Basel) 2023; 13:diagnostics13111832. [PMID: 37296683 DOI: 10.3390/diagnostics13111832] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2023] [Revised: 05/08/2023] [Accepted: 05/19/2023] [Indexed: 06/12/2023] Open
Abstract
Advances in science and technology have brought several improvements to computing facilities, including the implementation of automation in multi-specialty hospitals. This research aims to develop an efficient deep-learning-based brain-tumor (BT) detection scheme to detect tumors in FLAIR- and T2-modality magnetic-resonance-imaging (MRI) slices. Axial-plane brain MRI slices are used to test and verify the scheme, and its reliability is also verified with clinically collected MRI slices. The proposed scheme involves the following stages: (i) pre-processing the raw MRI image, (ii) deep-feature extraction using pretrained schemes, (iii) watershed-algorithm-based BT segmentation and mining of shape features, (iv) feature optimization using the elephant-herding algorithm (EHA), and (v) binary classification and verification using three-fold cross-validation. The BT-classification task is accomplished using (a) individual features, (b) dual deep features, and (c) integrated features. Each experiment is conducted separately on the chosen BRATS and TCIA benchmark MRI slices. This research indicates that the integrated-feature-based scheme helps to achieve a classification accuracy of 99.6667% when a support-vector-machine (SVM) classifier is considered. Further, the performance of this scheme is verified using noise-attacked MRI slices, and better classification results are achieved.
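The final stage of the pipeline above, binary SVM classification under three-fold cross-validation, can be sketched with scikit-learn. The synthetic feature matrix stands in for the EHA-optimised deep and shape features (invented data, not the BRATS/TCIA features the paper uses):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the optimised feature vectors (binary tumor classes)
X, y = make_classification(n_samples=240, n_features=20,
                           n_informative=8, random_state=1)

# Scale features, then classify with an RBF-kernel SVM,
# evaluated by three-fold cross-validation as in stage (v)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=3)
mean_acc = scores.mean()
```

Reporting the mean of the three fold accuracies mirrors how a single cross-validated accuracy figure is obtained.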
Affiliation(s)
- Venkatesan Rajinikanth
- Department of Computer Science and Engineering, Division of Research and Innovation, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai 602105, India
- P M Durai Raj Vincent
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore 632014, India
- C N Gnanaprakasam
- Department of Electronics and Instrumentation Engineering, St. Joseph's College of Engineering, OMR, Chennai 600119, India
- Kathiravan Srinivasan
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore 632014, India
- Chuan-Yu Chang
- Department of Computer Science and Information Engineering, National Yunlin University of Science and Technology, Yunlin 64002, Taiwan
- Service Systems Technology Center, Industrial Technology Research Institute, Hsinchu 310401, Taiwan
21
Le NQK, Ho DKN, Ta HDK, Nguyen HT. Using ensemble learning and genetic algorithm on magnetic resonance imaging radiomics to classify molecular subtypes of breast cancer. PRECISION MEDICAL SCIENCES 2022. [DOI: 10.1002/prm2.12089] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Affiliation(s)
- Nguyen Quoc Khanh Le
- Professional Master Program in Artificial Intelligence in Medicine, College of Medicine, Taipei Medical University, Taipei, Taiwan
- Research Center for Artificial Intelligence in Medicine, Taipei Medical University, Taipei, Taiwan
- Translational Imaging Research Center, Taipei Medical University Hospital, Taipei, Taiwan
- Dang Khanh Ngan Ho
- School of Nutrition and Health Sciences, College of Nutrition, Taipei Medical University, Taipei, Taiwan
- Hoang Dang Khoa Ta
- Ph.D. Program for Cancer Molecular Biology and Drug Discovery, College of Medical Science and Technology, Taipei Medical University and Academia Sinica, Taipei, Taiwan
- Graduate Institute of Cancer Biology and Drug Discovery, College of Medical Science and Technology, Taipei Medical University, Taipei, Taiwan
- Hieu Trung Nguyen
- Department of Orthopedic and Trauma, Faculty of Medicine, University of Medicine and Pharmacy at Ho Chi Minh City, Ho Chi Minh City, Vietnam
22
Automatic detection of Alzheimer’s disease progression: An efficient information fusion approach with heterogeneous ensemble classifiers. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.09.009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
23
Klyuzhin IS, Xu Y, Ortiz A, Ferres JL, Hamarneh G, Rahmim A. Testing the Ability of Convolutional Neural Networks to Learn Radiomic Features. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 219:106750. [PMID: 35381490 DOI: 10.1016/j.cmpb.2022.106750] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/08/2021] [Revised: 02/27/2022] [Accepted: 03/10/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE Radiomics and deep learning have emerged as two distinct approaches to medical image analysis. However, their relative expressive power remains largely unknown. Theoretically, hand-crafted radiomic features represent a mere subset of the features that neural networks can approximate, making deep learning the more powerful approach. On the other hand, automated learning of hand-crafted features may require a prohibitively large number of training samples. Here we directly test the ability of convolutional neural networks (CNNs) to learn and predict the intensity, shape, and texture properties of tumors as defined by standardized radiomic features. METHODS Conventional 2D and 3D CNN architectures with an increasing number of convolutional layers were trained to predict the values of 16 standardized radiomic features from real and synthetic PET images of tumors, and then tested. In addition, several ImageNet-pretrained advanced networks were tested. A total of 4000 images were used for training, 500 for validation, and 500 for testing. RESULTS Features quantifying size and intensity were predicted with high accuracy, while shape irregularity and heterogeneity features had very high prediction errors and generalized poorly. For example, the mean normalized prediction error of tumor diameter with a 5-layer CNN was 4.23 ± 0.25, while the error for tumor sphericity was 15.64 ± 0.93. We additionally found that learning shape features required an order of magnitude more samples compared to intensity and size features. CONCLUSIONS Our findings imply that CNNs trained to perform various image-based clinical tasks may generally under-utilize the shape and texture information that is more easily captured by radiomics. We speculate that to improve CNN performance, shape and texture features could be computed explicitly and added as auxiliary variables to the networks, or supplied as synthetic inputs.
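The shape features that proved hardest for the CNNs to learn are themselves simple to compute analytically. As an illustrative sketch (not this paper's pipeline; standardized feature definitions live in packages such as pyradiomics), the two targets mentioned above, equivalent spherical diameter and sphericity, can be approximated from a binary tumor mask using a voxel-face surface-area estimate:

```python
import numpy as np

def mesh_free_shape_features(mask, voxel_size=1.0):
    """Approximate two radiomic shape targets from a 3D binary mask:
    equivalent spherical diameter and sphericity. The surface area is
    estimated by counting exposed voxel faces, which is exact for
    axis-aligned boxes but overestimates the area of smooth shapes."""
    mask = mask.astype(bool)
    volume = mask.sum() * voxel_size ** 3
    # A face is exposed wherever a foreground voxel meets background.
    padded = np.pad(mask, 1, mode="constant")
    exposed = 0
    for axis in range(3):
        diff = np.diff(padded.astype(np.int8), axis=axis)
        exposed += np.abs(diff).sum()
    area = exposed * voxel_size ** 2
    diameter = (6.0 * volume / np.pi) ** (1.0 / 3.0)
    sphericity = (np.pi ** (1.0 / 3.0)) * (6.0 * volume) ** (2.0 / 3.0) / area
    return diameter, sphericity

# A 9x9x9 cube: volume 729 voxels, voxel-face surface area 6 * 81 = 486.
cube = np.ones((9, 9, 9))
d, s = mesh_free_shape_features(cube)
```

For the cube this returns a sphericity near the analytic value for a cube (about 0.806), illustrating the kind of regression target the networks were trained against.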
Affiliation(s)
- Ivan S Klyuzhin
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada; Department of Radiology, University of British Columbia, Vancouver, BC, Canada; AI for Health, Microsoft, Redmond, WA, USA.
- Yixi Xu
- AI for Health, Microsoft, Redmond, WA, USA
- Ghassan Hamarneh
- Department of Computing Science, Simon Fraser University, Burnaby, BC, Canada
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada; Department of Radiology, University of British Columbia, Vancouver, BC, Canada; Department of Physics and Astronomy, University of British Columbia, Vancouver, BC, Canada
24
A Neural Network-Based Method for Respiratory Sound Analysis and Lung Disease Detection. Applied Sciences (Basel) 2022. [DOI: 10.3390/app12083877]
Abstract
Background: Respiratory sound analysis represents a research topic of growing interest in recent times. In fact, this area offers the potential to automatically detect abnormalities in the preliminary stages of lung dysfunction. Methods: In this paper, we propose a method to analyse respiratory sounds in an automatic way. The aim is to show the effectiveness of machine learning techniques in respiratory sound analysis. A feature vector is gathered directly from breath audio and, by exploiting supervised machine learning techniques, we detect whether the feature vector comes from a patient affected by a lung disease. Moreover, the proposed method is able to characterise the lung disease as asthma, bronchiectasis, bronchiolitis, chronic obstructive pulmonary disease, pneumonia, or a lower or upper respiratory tract infection. Results: A retrospective experimental analysis on 126 patients with 920 recording sessions showed the effectiveness of the proposed method. Conclusion: The experimental analysis demonstrated that it is possible to detect lung disease by exploiting machine learning techniques. We considered several supervised machine learning algorithms, obtaining the best performance with the neural network model, with an F-measure of 0.983 in lung disease detection and 0.923 in lung disease characterisation, improving on state-of-the-art performance.
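The detection and characterisation scores reported above are F-measures, the harmonic mean of precision and recall. A minimal, self-contained sketch of the metric for a binary detection task (the labels below are hypothetical, not the study's data):

```python
def f_measure(y_true, y_pred, positive=1):
    """Precision, recall and F-measure for a binary detection task,
    e.g. 'lung disease' vs 'healthy' from breath-audio features."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f

# Hypothetical predictions for 8 recordings (1 = disease, 0 = healthy)
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]
p, r, f = f_measure(y_true, y_pred)
```

With one false negative and one false positive out of eight, precision, recall, and F-measure all come out to 0.75.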
25
Tripathi PC, Bag S. A computer-aided grading of glioma tumor using deep residual networks fusion. Computer Methods and Programs in Biomedicine 2022; 215:106597. [PMID: 34974232 DOI: 10.1016/j.cmpb.2021.106597]
Abstract
BACKGROUND AND OBJECTIVES Among different cancer types, glioma is considered a potentially fatal brain cancer that arises from glial cells. Early diagnosis of glioma helps the physician offer effective treatment to patients. Magnetic Resonance Imaging (MRI)-based Computer-Aided Diagnosis for brain tumors has attracted a lot of attention in the literature in recent years. In this paper, we propose a novel deep learning-based computer-aided diagnosis for glioma tumors. METHODS The proposed method incorporates a two-level classification of gliomas. Firstly, the tumor is classified into low- or high-grade and secondly, the low-grade tumors are classified into two types based on the presence of chromosome arms 1p/19q. The feature representations of four residual networks have been used to solve the problem by utilizing a transfer learning approach. Furthermore, we have fused these trained models using a novel Dempster-Shafer Theory (DST)-based fusion scheme in order to enhance the classification performance. Extensive data augmentation strategies are also utilized to avoid over-fitting of the discrimination models. RESULTS Extensive experiments have been performed on an MRI dataset to show the effectiveness of the method. It has been found that our method achieves 95.87% accuracy for glioma classification. The results obtained by our method have also been compared with different existing methods. The comparative study reveals that our method not only outperforms traditional machine learning-based methods, but also produces better results than state-of-the-art deep learning-based methods. CONCLUSION The fusion of different residual networks enhances the tumor classification performance. The experimental findings indicate that the DST-based fusion technique produces superior performance in comparison to other fusion schemes.
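Dempster-Shafer fusion combines "mass functions" from multiple evidence sources and renormalizes away their conflict. The paper fuses four trained residual networks; the sketch below shows only the core combination rule for two hypothetical sources over the frame {low-grade, high-grade}, with all numbers invented for illustration:

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over the same frame of discernment
    using Dempster's rule. Masses are dicts mapping frozenset hypotheses
    to belief mass; mass assigned to conflicting pairs is renormalized."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    norm = 1.0 - conflict
    return {h: m / norm for h, m in combined.items()}

LOW, HIGH = frozenset({"low"}), frozenset({"high"})
BOTH = LOW | HIGH
# Hypothetical outputs of two networks, with a little mass left on "either"
net1 = {LOW: 0.7, HIGH: 0.2, BOTH: 0.1}
net2 = {LOW: 0.6, HIGH: 0.3, BOTH: 0.1}
fused = dempster_combine(net1, net2)
```

Because both sources lean toward "low-grade", the fused mass on LOW (about 0.82) exceeds either individual source, which is the behaviour that makes this rule attractive for classifier fusion.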
Affiliation(s)
- Prasun Chandra Tripathi
- Department of Computer Science and Engineering, Indian Institute of Technology (Indian School of Mines) Dhanbad, Dhanbad 826004, India.
- Soumen Bag
- Department of Computer Science and Engineering, Indian Institute of Technology (Indian School of Mines) Dhanbad, Dhanbad 826004, India.
26
Zhang X, Zhang Y, Zhang G, Qiu X, Tan W, Yin X, Liao L. Deep Learning With Radiomics for Disease Diagnosis and Treatment: Challenges and Potential. Front Oncol 2022; 12:773840. [PMID: 35251962 PMCID: PMC8891653 DOI: 10.3389/fonc.2022.773840]
Abstract
The high-throughput extraction of quantitative imaging features from medical images for the purpose of radiomic analysis, i.e., radiomics in a broad sense, is a rapidly developing and emerging research field that has been attracting increasing interest, particularly in multimodality and multi-omics studies. In this context, the quantitative analysis of multidimensional data plays an essential role in assessing the spatio-temporal characteristics of different tissues and organs and their microenvironment. Herein, recent developments in this method, including manually defined features, data acquisition and preprocessing, lesion segmentation, feature extraction, feature selection and dimension reduction, statistical analysis, and model construction, are reviewed. In addition, deep learning-based techniques for automatic segmentation and radiomic analysis are analyzed as a way to address limitations such as rigid workflows, manual or semi-automatic lesion annotation, inadequate feature criteria, and insufficient multicenter validation. Furthermore, a summary of the current state-of-the-art applications of this technology in disease diagnosis, treatment response, and prognosis prediction from the perspective of radiology images, multimodality images, histopathology images, and three-dimensional dose distribution data, particularly in oncology, is presented. The potential and value of radiomics in diagnostic and therapeutic strategies are also further analyzed, and for the first time, the advances and challenges associated with dosiomics in radiotherapy are summarized, highlighting the latest progress in radiomics. Finally, a robust framework for radiomic analysis is presented, and challenges and recommendations for future development are discussed, including but not limited to the factors that affect model stability (medical big data, multitype data, and expert medical knowledge), the limitations of data-driven processes (reproducibility and interpretability of studies, different treatment alternatives across institutions, and the need for prospective research and clinical trials), and thoughts on future directions (the path to clinical application and an open platform for radiomic analysis).
Affiliation(s)
- Xingping Zhang
- Institute of Advanced Cyberspace Technology, Guangzhou University, Guangzhou, China
- Department of New Networks, Peng Cheng Laboratory, Shenzhen, China
- Yanchun Zhang
- Institute of Advanced Cyberspace Technology, Guangzhou University, Guangzhou, China
- Department of New Networks, Peng Cheng Laboratory, Shenzhen, China
- Guijuan Zhang
- Department of Respiratory Medicine, First Affiliated Hospital of Gannan Medical University, Ganzhou, China
- Xingting Qiu
- Department of Radiology, First Affiliated Hospital of Gannan Medical University, Ganzhou, China
- Wenjun Tan
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, China
- Xiaoxia Yin
- Institute of Advanced Cyberspace Technology, Guangzhou University, Guangzhou, China
- Liefa Liao
- School of Information Engineering, Jiangxi University of Science and Technology, Ganzhou, China
27
Zhang J, Li C, Liu G, Min M, Wang C, Li J, Wang Y, Yan H, Zuo Z, Huang W, Chen H. A CNN-transformer hybrid approach for decoding visual neural activity into text. Computer Methods and Programs in Biomedicine 2022; 214:106586. [PMID: 34963092 DOI: 10.1016/j.cmpb.2021.106586]
Abstract
BACKGROUND AND OBJECTIVE Most studies used neural activities evoked by linguistic stimuli such as phrases or sentences to decode language structure. However, compared to linguistic stimuli, it is more common for the human brain to perceive the outside world through non-linguistic stimuli such as natural images, so relying on linguistic stimuli alone cannot fully capture the information perceived by the human brain. To address this, an end-to-end mapping model between visual neural activities evoked by non-linguistic stimuli and visual contents is needed. METHODS Inspired by the success of the Transformer network in neural machine translation and the convolutional neural network (CNN) in computer vision, a CNN-Transformer hybrid language decoding model is constructed here in an end-to-end fashion to decode functional magnetic resonance imaging (fMRI) signals evoked by natural images into descriptive texts about the visual stimuli. Specifically, this model first encodes a semantic sequence, extracted by a two-layer 1D CNN from the multi-time visual neural activity, into a multi-level abstract representation, then decodes this representation, step by step, into an English sentence. RESULTS Experimental results show that the decoded texts are semantically consistent with the corresponding ground-truth annotations. Additionally, by varying the encoding and decoding layers and modifying the original positional encoding of the Transformer, we found that a specific architecture of the Transformer is required in this work. CONCLUSIONS The study results indicate that the proposed model can decode the visual neural activities evoked by natural images into descriptive text about the visual stimuli in the form of sentences. Hence, it may be considered a potential computer-aided tool for neuroscientists to understand the neural mechanism of visual information processing in the human brain in the future.
Affiliation(s)
- Jiang Zhang
- College of Electrical Engineering, Sichuan University, Chengdu 610065, China
- Chen Li
- College of Electrical Engineering, Sichuan University, Chengdu 610065, China
- Ganwanming Liu
- College of Electrical Engineering, Sichuan University, Chengdu 610065, China
- Min Min
- College of Electrical Engineering, Sichuan University, Chengdu 610065, China
- Chong Wang
- The Center of Psychosomatic Medicine, Sichuan Provincial Center for Mental Health, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu 611731, China; High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
- Jiyi Li
- The Center of Psychosomatic Medicine, Sichuan Provincial Center for Mental Health, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu 611731, China
- Yuting Wang
- The Center of Psychosomatic Medicine, Sichuan Provincial Center for Mental Health, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu 611731, China; High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
- Hongmei Yan
- High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
- Zhentao Zuo
- State Key Laboratory of Brain and Cognitive Science, Beijing MR Center for Brain Research, Institute of Biophysics, Chinese Academy of Sciences, Beijing 100101, China
- Wei Huang
- The Center of Psychosomatic Medicine, Sichuan Provincial Center for Mental Health, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu 611731, China; High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China.
- Huafu Chen
- The Center of Psychosomatic Medicine, Sichuan Provincial Center for Mental Health, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu 611731, China; High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China.
28
Latif G, Yousif Al Anezi F, Iskandar DNFA, Bashar A, Alghazo J. Recent Advances in Classification of Brain Tumor from MR Images - State of the Art Review from 2017 to 2021. Curr Med Imaging 2022; 18:903-918. [PMID: 35040408 DOI: 10.2174/1573405618666220117151726]
Abstract
BACKGROUND The task of identifying a tumor in the brain is a complex problem that requires sophisticated skills and inference mechanisms to accurately locate the tumor region. The complex nature of brain tissue makes locating, segmenting, and ultimately classifying Magnetic Resonance (MR) images a difficult problem. The aim of this review paper is to consolidate the details of the most relevant and recent approaches proposed in this domain for the binary and multi-class classification of brain tumors using brain MR images. OBJECTIVE In this review paper, a detailed summary of the latest techniques used for brain MR image feature extraction and classification is presented. Many research papers have been published recently proposing various techniques for the correct recognition and diagnosis of brain MR images. The review allows researchers in the field to familiarize themselves with the latest developments and to propose novel techniques that have not yet been explored in this research domain. In addition, it will help researchers who are new to machine learning algorithms for brain tumor recognition to understand the basics of the field and pave the way for them to contribute to this vital field of medical research. RESULTS In this paper, the review covers all recently proposed methods for both feature extraction and classification. It also identifies the combinations of feature extraction and classification methods that would be most efficient for the recognition and diagnosis of brain tumors from MR images. In addition, the paper presents the performance metrics, particularly the recognition accuracy, of selected research published between 2017 and 2021.
Affiliation(s)
- Ghazanfar Latif
- College of Computer Engineering and Sciences, Prince Mohammad bin Fahd University, Khobar, Saudi Arabia.
- Université du Québec à Chicoutimi, 555 boulevard de l'Université, Chicoutimi, QC, G7H2B1, Canada
- Faisal Yousif Al Anezi
- Management Information Department, Prince Mohammad bin Fahd University, Khobar, Saudi Arabia
- D N F Awang Iskandar
- Faculty of Computer Science and Information Technology, Universiti Malaysia Sarawak, Malaysia
- Abul Bashar
- College of Computer Engineering and Sciences, Prince Mohammad bin Fahd University, Khobar, Saudi Arabia
- Jaafar Alghazo
- Department of Electrical and Computer Engineering, Virginia Military Institute, Lexington, VA, USA
29
Vamvakas A, Tsivaka D, Logothetis A, Vassiou K, Tsougos I. Breast Cancer Classification on Multiparametric MRI - Increased Performance of Boosting Ensemble Methods. Technol Cancer Res Treat 2022; 21:15330338221087828. [PMID: 35341421 PMCID: PMC8966070 DOI: 10.1177/15330338221087828]
Abstract
Introduction: This study aims to assess the utility of boosting ensemble classification methods for increasing the diagnostic performance of multiparametric Magnetic Resonance Imaging (mpMRI) radiomic models in differentiating benign and malignant breast lesions. Methods: The dataset includes mpMR images of 140 female patients with mass-like breast lesions (70 benign and 70 malignant), consisting of Dynamic Contrast Enhanced (DCE) and T2-weighted sequences, and the Apparent Diffusion Coefficient (ADC) calculated from the Diffusion Weighted Imaging (DWI) sequence. Tumor masks were manually defined in all consecutive slices of the respective MRI volumes, and 3D radiomic features were extracted with the Pyradiomics package. Feature dimensionality reduction was based on statistical tests and the Boruta wrapper. Hierarchical clustering on Spearman's rank correlation coefficients between features and Random Forest classification for obtaining feature importance were implemented for selecting the final feature subset. Adaptive Boosting (AdaBoost), Gradient Boosting (GB), Extreme Gradient Boosting (XGBoost), and Light Gradient Boosting Machine (LightGBM) classifiers were trained and tested with bootstrap validation in differentiating breast lesions. A Support Vector Machine (SVM) classifier was also exploited for comparison. Receiver Operating Characteristic (ROC) curves and DeLong's test were utilized to evaluate the classification performances. Results: The final feature subset consisted of 5 features derived from the lesion shape and the first-order histogram of the DCE and ADC image volumes. XGBoost and LightGBM achieved statistically significantly higher average classification performance [AUC = 0.95 and 0.94, respectively], followed by AdaBoost [AUC = 0.90], GB [AUC = 0.89], and SVM [AUC = 0.88]. Conclusion: Overall, the integration of ensemble learning methods within mpMRI radiomic analysis can improve the performance of computer-assisted diagnosis of breast cancer lesions.
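The model comparison described above can be sketched with scikit-learn stand-ins (a hedged sketch, not the authors' code: sklearn's GradientBoostingClassifier replaces the external XGBoost/LightGBM packages, and synthetic data replaces the 5 selected radiomic features):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the 5 selected radiomic features (140 lesions)
X, y = make_classification(n_samples=140, n_features=5, n_informative=4,
                           n_redundant=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

models = {
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "GB": GradientBoostingClassifier(random_state=0),
    "SVM": SVC(probability=True, random_state=0),
}
# AUC on the held-out split, the same metric the study compares
aucs = {name: roc_auc_score(y_te, m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
        for name, m in models.items()}
```

The study additionally uses bootstrap resampling and DeLong's test to decide whether AUC differences between classifiers are statistically significant; this sketch only computes the point estimates.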
Affiliation(s)
- Alexandros Vamvakas
- Medical Physics Department, Medical School, University of Thessaly, Larissa, Greece
- Dimitra Tsivaka
- Medical Physics Department, Medical School, University of Thessaly, Larissa, Greece
- Andreas Logothetis
- Medical Physics Laboratory, Medical School, National and Kapodistrian University of Athens, Athens, Greece
- Katerina Vassiou
- Department of Anatomy and Radiology, Medical School, University of Thessaly, Larissa, Greece
- Ioannis Tsougos
- Medical Physics Department, Medical School, University of Thessaly, Larissa, Greece
30
Yurttakal AH, Erbay H, İkizceli T, Karaçavuş S, Biçer C. Diagnosing breast cancer tumors using stacked ensemble model. Journal of Intelligent & Fuzzy Systems 2021. [DOI: 10.3233/jifs-219176]
Abstract
Breast cancer is the most common cancer among women, arising from cells in the breast tissue. Early-stage detection could reduce death rates significantly, and the detection stage determines the treatment process. Mammography is utilized to discover breast cancer at an early stage prior to any physical sign. However, mammography might return a false negative; in that case, if a lesion is suspected to have a greater than two percent chance of malignancy, a biopsy is recommended. About 30 percent of biopsies result in malignancy, which means the rate of unnecessary biopsies is high. To reduce unnecessary biopsies, Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI), with its excellent capability in soft tissue imaging, has recently been utilized to detect breast cancer. Nowadays, DCE-MRI is a highly recommended method not only to identify breast cancer but also to monitor its development and to interpret tumorous regions. However, in addition to being a time-consuming process, its accuracy depends on radiologists' experience. Radiomic data, on the other hand, are used in medical imaging and have the potential to extract disease characteristics that cannot be seen by the naked eye. Radiomics are hard-coded features and provide crucial information about the disease being imaged. Conversely, deep learning methods like convolutional neural networks (CNNs) learn features automatically from the dataset. Especially in medical imaging, CNNs' performance is better than that of hard-coded feature-based methods. However, combining the power of these two types of features increases accuracy significantly, which is especially critical in medicine. Herein, a stacked ensemble of gradient boosting and deep learning models was developed to classify breast tumors using DCE-MRI images. The model makes use of radiomics acquired from pixel information in breast DCE-MRI images. Prior to training the model, factor analysis was applied to the radiomics to refine the feature set and eliminate unuseful features. The performance metrics, as well as comparisons to some well-known machine learning methods, show that the ensemble model outperforms its counterparts. The ensemble model's accuracy is 94.87% and its AUC value is 0.9728. The recall and precision are 1.0 and 0.9130, respectively, whereas the F1-score is 0.9545.
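A compact scikit-learn analogue of such a stacked ensemble (a sketch under stated assumptions, not the authors' implementation: an MLP stands in for the deep model, synthetic features stand in for the DCE-MRI radiomics):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for radiomic features derived from DCE-MRI pixels
X, y = make_classification(n_samples=200, n_features=20, n_informative=8,
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=1)

# Level-0 learners: gradient boosting + a small neural network;
# level-1 meta-learner combines their cross-validated predictions.
stack = StackingClassifier(
    estimators=[
        ("gb", GradientBoostingClassifier(random_state=1)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                              random_state=1)),
    ],
    final_estimator=LogisticRegression(),
    cv=5,
)
acc = stack.fit(X_tr, y_tr).score(X_te, y_te)
```

The key design point of stacking is that the meta-learner is trained on out-of-fold predictions (`cv=5` here) rather than on predictions for data the base models have already seen, which prevents the meta-learner from simply rewarding the most overfit base model.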
Affiliation(s)
- Ahmet Haşim Yurttakal
- Computer Engineering Department, Engineering Faculty, Afyon Kocatepe University, Afyon, Turkey
- Hasan Erbay
- Computer Engineering Department, Engineering Faculty, University of Turkish Aeronautical Association, 06790 Etimesgut, Ankara, Turkey
- Türkan İkizceli
- Haseki Training and Research Hospital, Department of Radiology, University of Health Sciences, İstanbul, Turkey
- Seyhan Karaçavuş
- Kayseri Training and Research Hospital, Department of Nuclear Medicine, University of Health Sciences, Kayseri, Turkey
- Cenker Biçer
- Statistics Department, Arts & Science Faculty, Kırıkkale University, Kırıkkale, Turkey
31
Pasquini L, Napolitano A, Lucignani M, Tagliente E, Dellepiane F, Rossi-Espagnet MC, Ritrovato M, Vidiri A, Villani V, Ranazzi G, Stoppacciaro A, Romano A, Di Napoli A, Bozzao A. AI and High-Grade Glioma for Diagnosis and Outcome Prediction: Do All Machine Learning Models Perform Equally Well? Front Oncol 2021; 11:601425. [PMID: 34888226 PMCID: PMC8649764 DOI: 10.3389/fonc.2021.601425]
Abstract
Radiomic models outperform clinical data for outcome prediction in high-grade gliomas (HGG). However, lack of parameter standardization limits clinical applications. Many machine learning (ML) radiomic models employ single classifiers rather than ensemble learning, which is known to boost performance, and comparative analyses are lacking in the literature. We aimed to compare ML classifiers to predict clinically relevant tasks for HGG: overall survival (OS), isocitrate dehydrogenase (IDH) mutation, O-6-methylguanine-DNA-methyltransferase (MGMT) promoter methylation, epidermal growth factor receptor vIII (EGFR) amplification, and Ki-67 expression, based on radiomic features from conventional and advanced magnetic resonance imaging (MRI). Our objective was to identify the best algorithm for each task. One hundred fifty-six adult patients with a pathologic diagnosis of HGG were included. Three tumoral regions were manually segmented: contrast-enhancing tumor, necrosis, and non-enhancing tumor. Radiomic features were extracted with a custom version of Pyradiomics and selected through the Boruta algorithm. A Grid Search algorithm was applied when computing ten repetitions of K-fold cross-validation (K = 10) to get the highest mean and lowest spread of accuracy. Model performance was assessed as AUC-ROC curve mean values with 95% confidence intervals (CI). Extreme Gradient Boosting (xGB) obtained the highest accuracy for OS (74.5%), and AdaBoost (AB) for IDH mutation (87.5%), MGMT methylation (70.8%), Ki-67 expression (86%), and EGFR amplification (81%). Ensemble classifiers showed the best performance across tasks. High-scoring radiomic features shed light on possible correlations between MRI and tumor histology.
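The grid-search-within-repeated-cross-validation protocol can be sketched with scikit-learn (a hedged sketch: synthetic features stand in for the Boruta-selected radiomic features, the parameter grid is invented for illustration, and the repetition count is reduced from the paper's ten for speed):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold

# Synthetic stand-in for selected radiomic features (156 patients)
X, y = make_classification(n_samples=156, n_features=10, n_informative=5,
                           random_state=42)

# The paper repeats K-fold (K=10) ten times; 2 repeats keep this sketch fast.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=2, random_state=42)
grid = GridSearchCV(
    AdaBoostClassifier(random_state=42),
    param_grid={"n_estimators": [25, 50], "learning_rate": [0.5, 1.0]},
    cv=cv,
    scoring="accuracy",
)
grid.fit(X, y)
best_acc = grid.best_score_  # mean CV accuracy of the best parameter setting
```

Repeating the K-fold split with different shuffles is what lets the study report both the mean and the spread of accuracy for each candidate model rather than a single, split-dependent number.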
Affiliation(s)
- Luca Pasquini
- Neuroradiology Service, Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, United States
- Neuroradiology Unit, Neuroscience, Mental Health and Sensory Organs (NESMOS) Department, Sant’Andrea Hospital, La Sapienza University, Rome, Italy
- Antonio Napolitano
- Medical Physics Department, Bambino Gesù Children’s Hospital, Scientific Institute for Research, Hospitalization and Healthcare (IRCCS), Rome, Italy
- Martina Lucignani
- Medical Physics Department, Bambino Gesù Children’s Hospital, Scientific Institute for Research, Hospitalization and Healthcare (IRCCS), Rome, Italy
- Emanuela Tagliente
- Medical Physics Department, Bambino Gesù Children’s Hospital, Scientific Institute for Research, Hospitalization and Healthcare (IRCCS), Rome, Italy
- Francesco Dellepiane
- Neuroradiology Unit, Neuroscience, Mental Health and Sensory Organs (NESMOS) Department, Sant’Andrea Hospital, La Sapienza University, Rome, Italy
- Maria Camilla Rossi-Espagnet
- Neuroradiology Unit, Neuroscience, Mental Health and Sensory Organs (NESMOS) Department, Sant’Andrea Hospital, La Sapienza University, Rome, Italy
- Neuroradiology Unit, Imaging Department, Bambino Gesù Children’s Hospital, Scientific Institute for Research, Hospitalization and Healthcare (IRCCS), Rome, Italy
- Matteo Ritrovato
- Unit of Health Technology Assessment (HTA), Biomedical Technology Risk Manager, Bambino Gesù Children’s Hospital, Scientific Institute for Research, Hospitalization and Healthcare (IRCCS), Rome, Italy
- Antonello Vidiri
- Radiology and Diagnostic Imaging Department, Regina Elena National Cancer Institute, Scientific Institute for Research, Hospitalization and Healthcare (IRCCS), Rome, Italy
- Veronica Villani
- Neuro-Oncology Unit, Regina Elena National Cancer Institute, Scientific Institute for Research, Hospitalization and Healthcare (IRCCS), Rome, Italy
- Giulio Ranazzi
- Department of Clinical and Molecular Medicine, Surgical Pathology Units, Sant’Andrea Hospital, La Sapienza University, Rome, Italy
- Antonella Stoppacciaro
- Department of Clinical and Molecular Medicine, Surgical Pathology Units, Sant’Andrea Hospital, La Sapienza University, Rome, Italy
- Andrea Romano
- Neuroradiology Unit, Neuroscience, Mental Health and Sensory Organs (NESMOS) Department, Sant’Andrea Hospital, La Sapienza University, Rome, Italy
- Alberto Di Napoli
- Neuroradiology Unit, Neuroscience, Mental Health and Sensory Organs (NESMOS) Department, Sant’Andrea Hospital, La Sapienza University, Rome, Italy
- Radiology Department, Castelli Romani Hospital, Rome, Italy
- Alessandro Bozzao
- Neuroradiology Unit, Neuroscience, Mental Health and Sensory Organs (NESMOS) Department, Sant’Andrea Hospital, La Sapienza University, Rome, Italy
32
Prediction of oral squamous cell carcinoma based on machine learning of breath samples: a prospective controlled study. BMC Oral Health 2021; 21:500. [PMID: 34615514 PMCID: PMC8496028 DOI: 10.1186/s12903-021-01862-z]
Abstract
Background The aim of this study was to evaluate the possibility of breath testing as a method of cancer detection in patients with oral squamous cell carcinoma (OSCC). Methods Breath analysis was performed in 35 OSCC patients prior to surgery. In 22 patients, a subsequent breath test was carried out after surgery. Fifty healthy subjects were evaluated in the control group. Breath sampling was standardized regarding location and patient preparation. All analyses were performed using gas chromatography coupled with ion mobility spectrometry and machine learning. Results Differences in imaging as well as in pre- and postoperative findings of OSCC patients and healthy participants were observed. Specific volatile organic compound signatures were found in OSCC patients. Samples from patients and healthy individuals could be correctly assigned using machine learning with an average accuracy of 86–90%. Conclusions Breath analysis to determine OSCC in patients is promising, and the identification of patterns and the implementation of machine learning require further assessment and optimization. Larger prospective studies are required to use the full potential of machine learning to identify disease signatures in breath volatiles.
33
Park YW, Eom J, Kim S, Kim H, Ahn SS, Ku CR, Kim EH, Lee EJ, Kim SH, Lee SK. Radiomics With Ensemble Machine Learning Predicts Dopamine Agonist Response in Patients With Prolactinoma. J Clin Endocrinol Metab 2021; 106:e3069-e3077. [PMID: 33713414 DOI: 10.1210/clinem/dgab159] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/22/2020] [Indexed: 11/19/2022]
Abstract
CONTEXT Early identification of the response of prolactinoma patients to dopamine agonists (DA) is crucial in treatment planning. OBJECTIVE To develop a radiomics model using an ensemble machine learning classifier with conventional magnetic resonance images (MRIs) to predict the DA response in prolactinoma patients. DESIGN Retrospective study. SETTING Severance Hospital, Seoul, Korea. PATIENTS A total of 177 prolactinoma patients who underwent baseline MRI (109 DA responders and 68 DA nonresponders) were allocated to the training (n = 141) and test (n = 36) sets. Radiomic features (n = 107) were extracted from coronal T2-weighted MRIs. After feature selection, single models (random forest, light gradient boosting machine, extra-trees, quadratic discrimination analysis, and linear discrimination analysis) with oversampling methods were trained to predict the DA response. A soft voting ensemble classifier was used to achieve the final performance. The performance of the classifier was validated in the test set. RESULTS The ensemble classifier showed an area under the curve (AUC) of 0.81 [95% confidence interval (CI), 0.74-0.87] in the training set. In the test set, the ensemble classifier showed an AUC, accuracy, sensitivity, and specificity of 0.81 (95% CI, 0.67-0.96), 77.8%, 78.6%, and 77.3%, respectively. The ensemble classifier achieved the highest performance among all the individual models in the test set. CONCLUSIONS Radiomic features may be useful biomarkers to predict the DA response in prolactinoma patients.
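The soft-voting step described in this abstract can be illustrated with a minimal, dependency-free sketch: average each model's predicted class probabilities and take the argmax. The three probability vectors below are hypothetical stand-ins for the study's trained single models, not its actual outputs.

```python
# Minimal soft-voting illustration: average per-class probabilities
# from several models, then pick the class with the highest average.

def soft_vote(prob_lists):
    """Average per-class probabilities across models; return (label, averages)."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return avg.index(max(avg)), avg

# Hypothetical (responder, non-responder) probabilities from three models.
model_outputs = [[0.70, 0.30], [0.40, 0.60], [0.65, 0.35]]
label, avg = soft_vote(model_outputs)
# avg ≈ [0.583, 0.417] → class 0 ("responder") wins despite one dissenting model
```

Soft voting uses the full probability vectors, so a confident model can outweigh two lukewarm ones; hard (majority) voting would only count the three argmax labels.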
Affiliation(s)
- Yae Won Park
- Department of Radiology and Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, Seoul, Korea
- Pituitary Tumor Center, Severance Hospital, Seoul, Korea
- Jihwan Eom
- Department of Computer Science, Yonsei University, Seoul, Korea
- Sooyon Kim
- Department of Statistics and Data Science, Yonsei University, Seoul, Korea
- Hwiyoung Kim
- Department of Radiology and Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, Seoul, Korea
- Sung Soo Ahn
- Department of Radiology and Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, Seoul, Korea
- Pituitary Tumor Center, Severance Hospital, Seoul, Korea
- Cheol Ryong Ku
- Pituitary Tumor Center, Severance Hospital, Seoul, Korea
- Department of Endocrinology, Yonsei University College of Medicine, Seoul, Korea
- Eui Hyun Kim
- Pituitary Tumor Center, Severance Hospital, Seoul, Korea
- Department of Endocrinology, Yonsei University College of Medicine, Seoul, Korea
- Eun Jig Lee
- Pituitary Tumor Center, Severance Hospital, Seoul, Korea
- Department of Neurosurgery, Yonsei University College of Medicine, Seoul, Korea
- Sun Ho Kim
- Department of Neurosurgery, Ewha Womans University College of Medicine, Seoul, Korea
- Seung-Koo Lee
- Department of Radiology and Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, Seoul, Korea
- Pituitary Tumor Center, Severance Hospital, Seoul, Korea
34
Abstract
PURPOSE OF REVIEW Artificial intelligence has become popular in medical applications, specifically as a clinical support tool for computer-aided diagnosis. These tools are typically employed on medical data (i.e., images, molecular data, clinical variables, etc.) and use statistical and machine-learning methods to measure model performance. In this review, we summarize and discuss the most recent radiomic pipelines used for clinical analysis. RECENT FINDINGS Currently, the management of only a limited set of cancers benefits from artificial intelligence, mostly through computer-aided diagnosis that avoids biopsy analysis with its additional risks and costs. Most artificial intelligence tools are based on imaging features, known as radiomic analysis, that can be refined into predictive models on noninvasively acquired imaging data. This review explores the progress of artificial intelligence-based radiomic tools for clinical applications, with a brief description of the necessary technical steps. We describe new radiomic approaches based on deep-learning techniques and show how these models (deep radiomic analysis) can benefit from deep convolutional neural networks and be applied to limited data sets. SUMMARY Further investigations are recommended to involve deep learning in radiomic models, with additional validation steps on various cancer types.
Affiliation(s)
- Ahmad Chaddad
- School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin, China
- Yousef Katib
- Department of Radiology, Taibah University, Al-Madinah, Saudi Arabia
- Lama Hassan
- School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin, China
35
A framework for supporting ransomware detection and prevention based on hybrid analysis. JOURNAL OF COMPUTER VIROLOGY AND HACKING TECHNIQUES 2021. [DOI: 10.1007/s11416-021-00388-w] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
36
Artificial intelligence applications in medical imaging: A review of the medical physics research in Italy. Phys Med 2021; 83:221-241. [DOI: 10.1016/j.ejmp.2021.04.010] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/05/2020] [Revised: 03/31/2021] [Accepted: 04/03/2021] [Indexed: 02/06/2023] Open
37
MRI brain tumor medical images analysis using deep learning techniques: a systematic review. HEALTH AND TECHNOLOGY 2021. [DOI: 10.1007/s12553-020-00514-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023]
38
Mercaldo F, Santone A. Audio signal processing for Android malware detection and family identification. JOURNAL OF COMPUTER VIROLOGY AND HACKING TECHNIQUES 2021. [DOI: 10.1007/s11416-020-00376-6] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
39
Santone A, Brunese MC, Donnarumma F, Guerriero P, Mercaldo F, Reginelli A, Miele V, Giovagnoni A, Brunese L. Radiomic features for prostate cancer grade detection through formal verification. Radiol Med 2021; 126:688-697. [PMID: 33394366 DOI: 10.1007/s11547-020-01314-8] [Citation(s) in RCA: 49] [Impact Index Per Article: 16.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2020] [Accepted: 11/16/2020] [Indexed: 02/07/2023]
Abstract
AIM Prostate cancer is the most common cancer afflicting men. It may be asymptomatic at an early stage. In this paper, we propose a methodology to detect the prostate cancer grade by computing non-invasive shape-based radiomic features directly from magnetic resonance images. MATERIALS AND METHODS We use a freely available dataset composed of coronal magnetic resonance images belonging to 112 patients. We represent magnetic resonance slices in terms of a formal model, and we exploit model checking to verify whether a set of properties (formulated with the support of pathologists and radiologists) holds on the formal model. Each property is related to a different cancer grade, with the aim of covering all the cancer grade groups. RESULTS An average specificity of 0.97 and an average sensitivity of 1 were obtained with our methodology. CONCLUSION The experimental analysis demonstrates the effectiveness of radiomics and formal verification for Gleason grade group detection from magnetic resonance imaging.
Affiliation(s)
- Antonella Santone
- Department of Medicine and Health Sciences "Vincenzo Tiberio", University of Molise, Campobasso, Italy
- Maria Chiara Brunese
- Department of Medicine and Health Sciences "Vincenzo Tiberio", University of Molise, Campobasso, Italy
- Federico Donnarumma
- Department of Medicine and Health Sciences "Vincenzo Tiberio", University of Molise, Campobasso, Italy
- Pasquale Guerriero
- Department of Medicine and Health Sciences "Vincenzo Tiberio", University of Molise, Campobasso, Italy
- Francesco Mercaldo
- Department of Medicine and Health Sciences "Vincenzo Tiberio", University of Molise, Campobasso, Italy
- Alfonso Reginelli
- Department of Precision Medicine, University of Campania "Luigi Vanvitelli", Napoli, Italy
- Andrea Giovagnoni
- Department of Radiology, Ospedali Riuniti, Università Politecnica delle Marche, Ancona, Italy
- Luca Brunese
- Department of Medicine and Health Sciences "Vincenzo Tiberio", University of Molise, Campobasso, Italy
40
Ahsan MM, Ahad MT, Soma FA, Paul S, Chowdhury A, Luna SA, Yazdan MMS, Rahman A, Siddique Z, Huebner P. Detecting SARS-CoV-2 From Chest X-Ray Using Artificial Intelligence. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2021; 9:35501-35513. [PMID: 34976572 PMCID: PMC8675556 DOI: 10.1109/access.2021.3061621] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/14/2021] [Accepted: 02/16/2021] [Indexed: 05/19/2023]
Abstract
Chest radiographs (X-rays) combined with Deep Convolutional Neural Network (CNN) methods have been demonstrated to detect and diagnose the onset of COVID-19, the disease caused by the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2). However, questions remain regarding the accuracy of those methods, as they are often challenged by limited datasets and performance legitimacy on imbalanced data, and their results are typically reported without proper confidence intervals. Considering the opportunity to address these issues, in this study we propose and test six modified deep learning models, including VGG16, InceptionResNetV2, ResNet50, MobileNetV2, ResNet101, and VGG19, to detect SARS-CoV-2 infection from chest X-ray images. Results are evaluated in terms of accuracy, precision, recall, and F1-score using a small and balanced dataset (Study One) and a larger and imbalanced dataset (Study Two). With a 95% confidence interval, VGG16 and MobileNetV2 show that, on both datasets, the model could identify patients with COVID-19 symptoms with an accuracy of up to 100%. We also present a pilot test of VGG16 models on a multi-class dataset, showing promising results by achieving 91% accuracy in detecting COVID-19, normal, and pneumonia patients. Furthermore, we demonstrated that poorly performing models in Study One (ResNet50 and ResNet101) had their accuracy rise from 70% to 93% once trained with the comparatively larger dataset of Study Two. Still, models like InceptionResNetV2 and VGG19 demonstrated an accuracy of 97% on both datasets, which posits the effectiveness of our proposed methods, ultimately presenting a reasonable and accessible alternative to identify patients with COVID-19.
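The abstract's point about reporting accuracy with a proper 95% confidence interval can be sketched in a few lines. The counts below (279 correct out of a hypothetical 300-image test set) are illustrative only, and the Wald normal-approximation interval shown is just one common choice (Wilson or bootstrap intervals are often preferred for small samples).

```python
import math

def accuracy_with_ci(correct, total, z=1.96):
    """Point accuracy plus a normal-approximation (Wald) 95% CI, clipped to [0, 1]."""
    p = correct / total
    half = z * math.sqrt(p * (1 - p) / total)  # z * standard error of a proportion
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical test set: 279 of 300 X-rays classified correctly.
acc, lo, hi = accuracy_with_ci(279, 300)
# acc = 0.93, CI roughly (0.90, 0.96)
```

Reporting the interval alongside the point estimate makes clear how much a "93% accurate" claim could move on a different sample of the same size.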
Affiliation(s)
- Md Manjurul Ahsan
- School of Industrial and Systems Engineering, The University of Oklahoma, Norman, OK 73019, USA
- Md Tanvir Ahad
- School of Aerospace and Mechanical Engineering, The University of Oklahoma, Norman, OK 73019, USA
- Farzana Akter Soma
- Holy Family Red Crescent Medical College & Hospital, Dhaka 1000, Bangladesh
- Shuva Paul
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Ananna Chowdhury
- Z. H. Sikder Women's Medical College & Hospital, Dhaka 1212, Bangladesh
- Akhlaqur Rahman
- School of Industrial Automation and Electrical Engineering, Engineering Institute of Technology, Melbourne, VIC 3000, Australia
- Zahed Siddique
- School of Aerospace and Mechanical Engineering, The University of Oklahoma, Norman, OK 73019, USA
- Pedro Huebner
- School of Industrial and Systems Engineering, The University of Oklahoma, Norman, OK 73019, USA
41
Brunese L, Mercaldo F, Reginelli A, Santone A. Explainable Deep Learning for Pulmonary Disease and Coronavirus COVID-19 Detection from X-rays. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 196:105608. [PMID: 32599338 PMCID: PMC7831868 DOI: 10.1016/j.cmpb.2020.105608] [Citation(s) in RCA: 233] [Impact Index Per Article: 58.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/09/2020] [Accepted: 06/09/2020] [Indexed: 05/03/2023]
Abstract
BACKGROUND AND OBJECTIVE Coronavirus disease (COVID-19) is an infectious disease caused by a virus never before identified in humans. This virus causes respiratory disease, with symptoms such as cough, fever and, in severe cases, pneumonia. The test to detect the presence of this virus in humans is performed on sputum or blood samples, and the outcome is generally available within a few hours or, at most, days. On biomedical imaging, affected patients show signs of pneumonia. In this paper, with the aim of providing a fully automatic and faster diagnosis, we propose the adoption of deep learning for COVID-19 detection from X-rays. METHOD We propose an approach composed of three phases: the first detects whether a chest X-ray shows pneumonia; the second discerns between COVID-19 and other pneumonia; the last localises the areas of the X-ray symptomatic of the COVID-19 presence. RESULTS AND CONCLUSION Experimental analysis on 6,523 chest X-rays belonging to different institutions demonstrated the effectiveness of the proposed approach, with an average time for COVID-19 detection of approximately 2.5 seconds and an average accuracy of 0.97.
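The three-phase cascade described in this abstract can be sketched schematically. The stub classifiers and the `opacity_score`/`covid_score`/`activation` fields below are hypothetical placeholders for the paper's trained deep models, chosen only to show the control flow of a staged diagnosis.

```python
# Sketch of a three-phase diagnostic cascade (stub classifiers, not the
# paper's networks): pneumonia detection -> COVID-19 discrimination ->
# localisation of symptomatic regions.

def detect_pneumonia(xray):       # phase 1: pneumonia vs. healthy
    return xray["opacity_score"] > 0.5

def discern_covid(xray):          # phase 2: COVID-19 vs. other pneumonia
    return xray["covid_score"] > 0.5

def localise_regions(xray):       # phase 3: keep strongly activated regions
    return [r for r in xray["regions"] if r["activation"] > 0.8]

def diagnose(xray):
    if not detect_pneumonia(xray):
        return "healthy", []
    if not discern_covid(xray):
        return "other pneumonia", []
    return "COVID-19", localise_regions(xray)

sample = {"opacity_score": 0.9, "covid_score": 0.7,
          "regions": [{"name": "lower-left lobe", "activation": 0.92},
                      {"name": "upper-right lobe", "activation": 0.40}]}
label, regions = diagnose(sample)
# label == "COVID-19"; only the strongly activated region is reported
```

The cascade design means the expensive localisation step runs only for cases the first two phases flag, and each phase can be trained and validated independently.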
Affiliation(s)
- Luca Brunese
- Department of Medicine and Health Sciences "Vincenzo Tiberio", University of Molise, Campobasso, Italy
- Francesco Mercaldo
- Department of Medicine and Health Sciences "Vincenzo Tiberio", University of Molise, Campobasso, Italy; Institute for Informatics and Telematics, National Research Council of Italy (CNR), Pisa, Italy
- Alfonso Reginelli
- Department of Precision Medicine, University of Campania "Luigi Vanvitelli", Napoli, Italy
- Antonella Santone
- Department of Biosciences and Territory, University of Molise, Pesche (IS), Italy
42
Brunese L, Martinelli F, Mercaldo F, Santone A. Machine learning for coronavirus covid-19 detection from chest x-rays. PROCEDIA COMPUTER SCIENCE 2020; 176:2212-2221. [PMID: 33042308 PMCID: PMC7531990 DOI: 10.1016/j.procs.2020.09.258] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
Abstract
At the end of 2019, a new form of coronavirus, called COVID-19, spread widely around the world. To quickly screen patients for this new form of pulmonary disease, in this paper we propose a method to automatically detect the COVID-19 disease by analysing medical images. We exploit supervised machine learning techniques, building a model on a dataset of 85 chest X-rays freely available for research purposes. The experiment shows the effectiveness of the proposed method in discriminating between the COVID-19 disease and other pulmonary diseases.
Affiliation(s)
- Luca Brunese
- Department of Medicine and Health Sciences "Vincenzo Tiberio", University of Molise, Campobasso, Italy
- Fabio Martinelli
- Institute for Informatics and Telematics, National Research Council of Italy (CNR), Pisa, Italy
- Francesco Mercaldo
- Department of Medicine and Health Sciences "Vincenzo Tiberio", University of Molise, Campobasso, Italy
- Institute for Informatics and Telematics, National Research Council of Italy (CNR), Pisa, Italy
- Antonella Santone
- Department of Biosciences and Territory, University of Molise, Pesche (IS), Italy
43
Książek W, Hammad M, Pławiak P, Acharya UR, Tadeusiewicz R. Development of novel ensemble model using stacking learning and evolutionary computation techniques for automated hepatocellular carcinoma detection. Biocybern Biomed Eng 2020. [DOI: 10.1016/j.bbe.2020.08.007] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
44
Brunese L, Mercaldo F, Reginelli A, Santone A. Radiomics for Gleason Score Detection through Deep Learning. SENSORS (BASEL, SWITZERLAND) 2020; 20:E5411. [PMID: 32967291 PMCID: PMC7570598 DOI: 10.3390/s20185411] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/01/2020] [Accepted: 09/18/2020] [Indexed: 02/07/2023]
Abstract
Prostate cancer is classified into different stages, each related to a different Gleason score. Labeling a diagnosed prostate cancer is a task usually performed by radiologists. In this paper we propose a deep architecture, based on several convolutional layers, aimed at automatically assigning the Gleason score to the Magnetic Resonance Imaging (MRI) under analysis. We exploit a set of 71 radiomic features belonging to five categories: First Order, Shape, Gray Level Co-occurrence Matrix, Gray Level Run Length Matrix and Gray Level Size Zone Matrix. The radiomic features are gathered directly from segmented MRIs, using two freely available research datasets obtained from different institutions. The results, in terms of accuracy, are promising, ranging between 0.96 and 0.98 for Gleason score prediction.
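A few of the "First Order" radiomic features named in this abstract can be computed directly from a list of voxel intensities. The toy ROI below is hypothetical; real pipelines compute these (and the texture-matrix families) over segmented 3-D MRI volumes.

```python
import math
from collections import Counter

def first_order_features(intensities):
    """Four common first-order radiomic features: mean, variance,
    skewness, and Shannon entropy of the intensity histogram."""
    n = len(intensities)
    mean = sum(intensities) / n
    var = sum((x - mean) ** 2 for x in intensities) / n
    std = math.sqrt(var)
    skew = sum(((x - mean) / std) ** 3 for x in intensities) / n if std else 0.0
    counts = Counter(intensities)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return {"mean": mean, "variance": var, "skewness": skew, "entropy": entropy}

# Toy region of interest (made-up voxel intensities).
roi = [3, 3, 4, 5, 5, 5, 6, 9]
feats = first_order_features(roi)
# feats["mean"] == 5.0, feats["variance"] == 3.25; positive skew from the outlier 9
```

Features like these are what the abstract's convolutional layers consume in place of raw pixels, which is why the same architecture transfers across scanners once segmentation is consistent.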
Affiliation(s)
- Luca Brunese
- Department of Medicine and Health Sciences “Vincenzo Tiberio”, University of Molise, 86100 Campobasso, Italy
- Francesco Mercaldo
- Department of Medicine and Health Sciences “Vincenzo Tiberio”, University of Molise, 86100 Campobasso, Italy
- Institute for Informatics and Telematics, National Research Council of Italy, 56121 Pisa, Italy
- Alfonso Reginelli
- Department of Precision Medicine, University of Campania “Luigi Vanvitelli”, 80100 Napoli, Italy
- Antonella Santone
- Department of Medicine and Health Sciences “Vincenzo Tiberio”, University of Molise, 86100 Campobasso, Italy
45
Prediction of Glioma Grades Using Deep Learning with Wavelet Radiomic Features. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10186296] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
Gliomas are the most common primary brain tumors. They are classified into four grades (I-IV) according to the guidelines of the World Health Organization (WHO). The accurate grading of gliomas has clinical significance for planning prognostic treatments, pre-diagnosis, monitoring and administration of chemotherapy. The purpose of this study is to develop a deep learning-based classification method using radiomic features of brain tumor glioma grades with a deep neural network (DNN). The classifier was combined with the discrete wavelet transform (DWT), a powerful feature-extraction tool. This study primarily focuses on the four main aspects of the radiomic workflow, namely tumor segmentation, feature extraction, analysis, and classification. We evaluated data from 121 patients with brain tumors (Grade II, n = 77; Grade III, n = 44) from The Cancer Imaging Archive, and 744 radiomic features were obtained by applying low sub-band and high sub-band 3D wavelet transform filters to the 3D tumor images. Quantitative values were statistically analyzed with Mann-Whitney U tests, and 126 radiomic features with significant statistical properties were selected across eight different wavelet filters. Classification performances of the 3D wavelet transform filter groups were measured in terms of accuracy, sensitivity, F1 score, and specificity using the deep learning classifier model. The proposed model was highly effective in grading gliomas, with 96.15% accuracy, 94.12% precision, 100% recall, 96.97% F1 score, and 98.75% area under the ROC curve. As a result, deep learning and feature-selection techniques with wavelet transform filters can be accurately applied using the proposed method in glioma grade classification.
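The low/high sub-band decomposition behind the wavelet filters in this abstract can be illustrated with a single level of the 1-D Haar DWT (the study applies 3-D filters to tumor volumes; the 1-D case shows the same idea). The input row below is a made-up intensity profile.

```python
import math

def haar_step(signal):
    """One level of the 1-D Haar DWT: pairwise sums give the low-pass
    (approximation) sub-band, pairwise differences the high-pass (detail)
    sub-band, both scaled by 1/sqrt(2) to preserve energy."""
    s = 1 / math.sqrt(2)
    low = [(a + b) * s for a, b in zip(signal[0::2], signal[1::2])]
    high = [(a - b) * s for a, b in zip(signal[0::2], signal[1::2])]
    return low, high

row = [4.0, 6.0, 10.0, 12.0, 8.0, 8.0, 0.0, 2.0]
low, high = haar_step(row)
# low ≈ [7.07, 15.56, 11.31, 1.41]; high ≈ [-1.41, -1.41, 0.0, -1.41]
```

Radiomic features computed on `low` capture coarse structure while those on `high` capture edges and texture, which is why the study evaluates feature groups per filter combination.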
46
The texture analysis as a predictive method in the assessment of the cytological specimen of CT-guided FNAC of the lung cancer. Med Oncol 2020; 37:54. [PMID: 32424733 DOI: 10.1007/s12032-020-01375-9] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2020] [Accepted: 04/13/2020] [Indexed: 02/07/2023]
Abstract
Lung cancer is a principal cause of cancer deaths worldwide, and its prognosis in terms of 5-year overall survival is poor. Computed tomography (CT) provides much prognostic information, but interpretation of the findings is subjective. Computer-aided diagnosis/detection (CAD) can reduce the need for a second opinion. "Radiomics" extends CAD by combining the quantitative imaging data of CT texture analysis (CTTA) with clinical information, increasing the power and precision of decision-making on the path toward personalized medicine. The aim of this study is to describe the role of radiomics in the characterization of the pulmonary nodule. For this study, we retrospectively analyzed the images of 87 NSCLC patients with a waiver of informed consent from the Institutional Review Board (IRB) at the Campania University "Luigi Vanvitelli" of Naples. All tumors were semiautomatically segmented by a radiologist with 10 years of experience using three diameters (AW Server 3.2). The examinations were acquired using 128-MDCT (GSI CT, GE) with a peak tube voltage of 120 kVp, tube current of 100 or 200 mA, and rotation times of 0.5 or 0.8 s. To confirm the imaging results, FNAC was performed, and for every nodule the following parameters were extracted: the presence of a solid component (named = 1), papillary component (named = 2), or mixed component (named = 3). Feature calculation was performed using the HealthMyne software, an integrated platform that enables better patient-management decisions for oncology. The radiologist used the Rapid Precise Metrics (RPM)™ functionality to identify each lesion with the algorithm. The correlation between each feature and the tumor volume was calculated using a two-step cluster statistical analysis. In this retrospective study, over one year from 2018 to 2019, 20 patients with lung adenocarcinoma confirmed by FNAC were enrolled. The pathologic results were subdivided into three categories: solid architecture (group 1), papillary architecture (group 2), and mixed architecture (group 3). Nine lesions presented component 1, seven component 2, and three component 3. Eight females and 12 males were enrolled (median age 61 years; mean ± SD = 67.4 ± 9.7 years; range 39-73 years). The results suggest, with p < 0.05, that the GGO variable is a good discriminating estimator of the kurtosis variable: GGO = "no" implies a high kurtosis value, while GGO = "yes" implies a low value. The numerous data obtained from automatic analysis provide fertile ground on which to develop precision medicine. A limitation of this study is the small sample. For radiomics to mature as a discipline, larger-scale studies with rigorous evaluation criteria will be needed to consolidate and standardize the collected data.
47
Mercaldo F, Santone A. Deep learning for image-based mobile malware detection. JOURNAL OF COMPUTER VIROLOGY AND HACKING TECHNIQUES 2020. [DOI: 10.1007/s11416-019-00346-7] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]