1
Bose S, Banerjee S, Kumar S, Saha A, Nandy D, Hazra S. Review of applications of artificial intelligence (AI) methods in crop research. J Appl Genet 2024; 65:225-240. [PMID: 38216788] [DOI: 10.1007/s13353-023-00826-z] [Received: 08/13/2023] [Revised: 12/23/2023] [Accepted: 12/26/2023]
Abstract
Sophisticated, modern crop improvement techniques can help bridge the gap in feeding an ever-increasing population. Artificial intelligence (AI) refers to the simulation of human intelligence in machines through computational algorithms, including machine learning (ML) and deep learning (DL) techniques. These methods generalise patterns and relationships from historical data and employ mathematical optimisation to build prediction models that facilitate the selection of superior genotypes. They are less resource intensive and can solve problems through the analysis of large-scale phenotypic datasets. ML for genomic selection (GS) uses high-throughput genotyping technologies to gather genetic information on a large number of markers across the genome; the predictions of GS models are based on the mathematical relationship between genotypic and phenotypic data in the training population. ML techniques have also emerged as powerful tools for genome editing, analysing large-scale genomic data and enabling the development of accurate prediction models. Precise phenotyping is a prerequisite for advancing crop breeding to solve agricultural production issues, and ML algorithms address this by generating predictive models from large-scale phenotypic datasets; DL models likewise show promise for reliable, precise phenotyping. This review provides a comprehensive overview of various ML and DL models, their applications, and their potential to enhance the efficiency, specificity, and safety of advanced crop improvement protocols such as genomic selection and genome editing, along with phenotypic prediction, to promote accelerated breeding.
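The genomic-selection workflow this abstract describes — fitting a model on genotype/phenotype pairs from a training population, then predicting the phenotypes of unscored genotypes — can be sketched with ridge regression (rrBLUP-style) on synthetic SNP data. All data, sizes, and the shrinkage value below are illustrative assumptions, not from the paper:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic marker matrix: 200 genotypes x 500 biallelic SNPs coded 0/1/2.
n_geno, n_snp = 200, 500
X = rng.integers(0, 3, size=(n_geno, n_snp)).astype(float)

# Simulated additive phenotype: a small subset of SNPs carries true effects.
true_effects = np.zeros(n_snp)
true_effects[rng.choice(n_snp, 40, replace=False)] = rng.normal(0, 0.5, 40)
y = X @ true_effects + rng.normal(0, 1.0, n_geno)

# Training population -> fit model -> predict values of candidate genotypes.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# Shrinkage chosen roughly as noise-to-genetic-variance ratio (rrBLUP-style).
model = Ridge(alpha=50.0).fit(X_train, y_train)
pred = model.predict(X_test)

# Predictive ability: correlation between predicted and observed phenotypes.
r = np.corrcoef(pred, y_test)[0, 1]
print(f"predictive ability r = {r:.2f}")
```

In practice the marker matrix would come from genotyping-by-sequencing or SNP arrays, and the model choice (ridge/GBLUP, Bayesian alphabet, or DL) would be validated against the breeding program's own data.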
Affiliation(s)
- Suvojit Bose
  - Department of Vegetables and Spice Crops, Uttar Banga Krishi Viswavidyalaya, Pundibari, Cooch Behar, 736165, West Bengal, India
- Soumya Kumar
  - School of Agricultural Sciences, JIS University, Kolkata, 700109, West Bengal, India
- Akash Saha
  - School of Agricultural Sciences, JIS University, Kolkata, 700109, West Bengal, India
- Debalina Nandy
  - School of Agricultural Sciences, JIS University, Kolkata, 700109, West Bengal, India
- Soham Hazra
  - Department of Agriculture, Brainware University, Barasat, 700125, West Bengal, India
2
Ditmer S, Dwenger N, Jensen LN, Kim H, Boel RV, Ghaffari A, Rahbek O. Fully automatic system to detect and segment the proximal femur in pelvic radiographic images for Legg-Calvé-Perthes disease. J Orthop Res 2024; 42:1074-1085. [PMID: 38053300] [DOI: 10.1002/jor.25761] [Received: 05/26/2023] [Revised: 11/23/2023] [Accepted: 11/28/2023]
Abstract
This study aimed to develop a method using computer vision techniques to accurately detect and delineate the proximal femur in radiographs of Legg-Calvé-Perthes disease (LCPD) patients. Currently, evaluating femoral head deformity, a crucial predictor of LCPD outcomes, relies on unreliable categorical and qualitative classifications. To address this limitation, we employed the pretrained object detection model YOLOv5 to detect the proximal femur on over 2000 radiographs, including images of shoulders and chests, to enhance robustness and generalizability. Subsequently, we utilized the U-Net convolutional neural network architecture for image segmentation of the proximal femur in more than 800 manually annotated images of stage IV LCPD. The results demonstrate outstanding performance, with the object detection model achieving high accuracy (mean average precision of 0.99) and the segmentation model attaining an accuracy score of 91%, dice coefficient of 0.75, and binary IoU score of 0.85 on the held-out test set. The proposed fully automatic proximal femur detection and segmentation system offers a promising approach to accurately detect and delineate the proximal femoral bone contour in radiographic images, which is essential for further image analysis in LCPD patients. Clinical significance: This study highlights the potential of computer vision techniques for enhancing the reliability of Legg-Calvé-Perthes disease staging and outcome prediction.
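The Dice coefficient and binary IoU reported for the segmentation model are standard overlap metrics on binary masks. A minimal NumPy sketch on toy masks (not the paper's data) shows how each is computed:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum())

def binary_iou(pred: np.ndarray, target: np.ndarray) -> float:
    """IoU = |A ∩ B| / |A ∪ B| for binary masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union

# Toy 4x4 masks: the prediction hits 2 of 4 target pixels plus 1 false positive.
target = np.zeros((4, 4), dtype=bool)
target[1:3, 1:3] = True              # 4 target pixels
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:2] = True                # 2 pixels, both inside the target
pred[0, 0] = True                    # 1 false positive

print(dice_coefficient(pred, target))  # 2*2 / (3+4) = 4/7 ≈ 0.571
print(binary_iou(pred, target))        # 2 / 5 = 0.4
```

Dice weights the intersection twice, so it is always at least as large as IoU on the same pair of masks, which is why the paper can report a Dice of 0.75 alongside an IoU of 0.85 only because the two scores come from different averaging conventions or subsets.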
Affiliation(s)
- Sofie Ditmer
  - School of Communication and Culture, University of Aarhus, Aarhus, Denmark
- Nicole Dwenger
  - School of Communication and Culture, University of Aarhus, Aarhus, Denmark
- Louise N Jensen
  - School of Communication and Culture, University of Aarhus, Aarhus, Denmark
- Harry Kim
  - Scottish Rite for Children, Dallas, Texas, USA
- Rikke V Boel
  - Department of Interdisciplinary Orthopedics, Aalborg University Hospital, Aalborg, Denmark
- Arash Ghaffari
  - Department of Interdisciplinary Orthopedics, Aalborg University Hospital, Aalborg, Denmark
- Ole Rahbek
  - Department of Interdisciplinary Orthopedics, Aalborg University Hospital, Aalborg, Denmark
3
Xu Y, Cao L, Chen Y, Zhang Z, Liu W, Li H, Ding C, Pu J, Qian K, Xu W. Integrating Machine Learning in Metabolomics: A Path to Enhanced Diagnostics and Data Interpretation. Small Methods 2024:e2400305. [PMID: 38682615] [DOI: 10.1002/smtd.202400305] [Received: 03/02/2024] [Revised: 04/07/2024]
Abstract
Metabolomics, leveraging techniques like NMR and MS, is crucial for understanding biochemical processes in pathophysiological states. This field, however, faces challenges in metabolite sensitivity, data complexity, and omics data integration. Recent machine learning advancements have enhanced data analysis and disease classification in metabolomics. This study explores machine learning integration with metabolomics to improve metabolite identification, data efficiency, and diagnostic methods. Using deep learning and traditional machine learning, it presents advancements in metabolic data analysis, including novel algorithms for accurate peak identification, robust disease classification from metabolic profiles, and improved metabolite annotation. It also highlights multiomics integration, demonstrating machine learning's potential in elucidating biological phenomena and advancing disease diagnostics. This work contributes significantly to metabolomics by merging it with machine learning, offering innovative solutions to analytical challenges and setting new standards for omics data analysis.
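Peak identification, one of the analysis steps highlighted above, is commonly approached with prominence-based peak picking on a 1-D spectrum. The sketch below uses SciPy on a synthetic spectrum; the peak positions, widths, and thresholds are invented for illustration and do not reproduce the paper's algorithms:

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(1)

# Synthetic 1-D mass spectrum: three Gaussian peaks on a noisy baseline.
mz = np.linspace(100, 400, 3000)
centers, heights = [150.0, 220.0, 310.0], [1.0, 0.6, 0.8]
signal = sum(h * np.exp(-((mz - c) ** 2) / (2 * 1.5 ** 2))
             for c, h in zip(centers, heights))
spectrum = signal + rng.normal(0, 0.02, mz.size)

# Peak picking: require a minimum height and prominence to reject noise bumps.
idx, props = find_peaks(spectrum, height=0.3, prominence=0.3)
print("detected m/z:", np.round(mz[idx], 1))
```

The prominence criterion is what suppresses the small local maxima created by noise riding on a true peak; ML-based peak pickers replace these hand-tuned thresholds with learned models.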
Affiliation(s)
- Yudian Xu
  - Department of Traditional Chinese Medicine, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, 200127, P. R. China
- Linlin Cao
  - State Key Laboratory for Oncogenes and Related Genes, Division of Cardiology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, 160 Pujian Road, Shanghai, 200127, P. R. China
- Yifan Chen
  - State Key Laboratory for Oncogenes and Related Genes, Division of Cardiology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, 160 Pujian Road, Shanghai, 200127, P. R. China
- Ziyue Zhang
  - School of Biomedical Engineering, Institute of Medical Robotics and Med-X Research Institute, Shanghai Jiao Tong University, Shanghai, 200030, P. R. China
- Wanshan Liu
  - School of Biomedical Engineering, Institute of Medical Robotics and Med-X Research Institute, Shanghai Jiao Tong University, Shanghai, 200030, P. R. China
- He Li
  - Department of Traditional Chinese Medicine, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, 200127, P. R. China
- Chenhuan Ding
  - Department of Traditional Chinese Medicine, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, 200127, P. R. China
- Jun Pu
  - State Key Laboratory for Oncogenes and Related Genes, Division of Cardiology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, 160 Pujian Road, Shanghai, 200127, P. R. China
- Kun Qian
  - State Key Laboratory for Oncogenes and Related Genes, Division of Cardiology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, 160 Pujian Road, Shanghai, 200127, P. R. China
  - School of Biomedical Engineering, Institute of Medical Robotics and Med-X Research Institute, Shanghai Jiao Tong University, Shanghai, 200030, P. R. China
- Wei Xu
  - State Key Laboratory for Oncogenes and Related Genes, Division of Cardiology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, 160 Pujian Road, Shanghai, 200127, P. R. China
4
Lokaj B, Pugliese MT, Kinkel K, Lovis C, Schmid J. Barriers and facilitators of artificial intelligence conception and implementation for breast imaging diagnosis in clinical practice: a scoping review. Eur Radiol 2024; 34:2096-2109. [PMID: 37658895] [PMCID: PMC10873444] [DOI: 10.1007/s00330-023-10181-6] [Received: 03/02/2023] [Revised: 06/07/2023] [Accepted: 07/10/2023]
Abstract
OBJECTIVE: Although artificial intelligence (AI) has demonstrated promise in enhancing breast cancer diagnosis, the implementation of AI algorithms in clinical practice encounters various barriers. This scoping review aims to identify these barriers and facilitators in order to highlight key considerations for developing and implementing AI solutions in breast cancer imaging. METHOD: A literature search was conducted from 2012 to 2022 in six databases (PubMed, Web of Science, CINAHL, Embase, IEEE, and arXiv). Articles were included if they described barriers and/or facilitators in the conception or implementation of AI in breast clinical imaging. We excluded research focusing only on performance, or with data not acquired in a clinical radiology setup and not involving real patients. RESULTS: A total of 107 articles were included. We identified six major barriers related to data (B1), black box and trust (B2), algorithms and conception (B3), evaluation and validation (B4), legal, ethical, and economic issues (B5), and education (B6), and five major facilitators covering data (F1), clinical impact (F2), algorithms and conception (F3), evaluation and validation (F4), and education (F5). CONCLUSION: This scoping review highlights the need to carefully design, deploy, and evaluate AI solutions in clinical practice, involving all stakeholders, to yield improvements in healthcare. CLINICAL RELEVANCE STATEMENT: The identification of barriers and facilitators, with suggested solutions, can guide and inform future research and stakeholders to improve the design and implementation of AI for breast cancer detection in clinical practice. KEY POINTS: • Six major identified barriers were related to data; black box and trust; algorithms and conception; evaluation and validation; legal, ethical, and economic issues; and education. • Five major identified facilitators were related to data, clinical impact, algorithms and conception, evaluation and validation, and education. • Coordinated involvement of all stakeholders is required to improve breast cancer diagnosis with AI.
Affiliation(s)
- Belinda Lokaj
  - Geneva School of Health Sciences, HES-SO University of Applied Sciences and Arts Western Switzerland, Delémont, Switzerland
  - Faculty of Medicine, University of Geneva, Geneva, Switzerland
  - Division of Medical Information Sciences, Geneva University Hospitals, Geneva, Switzerland
- Marie-Thérèse Pugliese
  - Geneva School of Health Sciences, HES-SO University of Applied Sciences and Arts Western Switzerland, Delémont, Switzerland
- Karen Kinkel
  - Réseau Hospitalier Neuchâtelois, Neuchâtel, Switzerland
- Christian Lovis
  - Faculty of Medicine, University of Geneva, Geneva, Switzerland
  - Division of Medical Information Sciences, Geneva University Hospitals, Geneva, Switzerland
- Jérôme Schmid
  - Geneva School of Health Sciences, HES-SO University of Applied Sciences and Arts Western Switzerland, Delémont, Switzerland
5
Khan R, Xiao C, Liu Y, Tian J, Chen Z, Su L, Li D, Hassan H, Li H, Xie W, Zhong W, Huang B. Transformative Deep Neural Network Approaches in Kidney Ultrasound Segmentation: Empirical Validation with an Annotated Dataset. Interdiscip Sci 2024. [PMID: 38413547] [DOI: 10.1007/s12539-024-00620-3] [Received: 07/04/2023] [Revised: 01/06/2024] [Accepted: 02/05/2024]
Abstract
Kidney ultrasound (US) images are primarily employed for diagnosing different renal diseases. One such task is renal localization and detection, which can be carried out by segmenting the kidney in US images. However, kidney segmentation from US images is challenging due to low contrast, speckle noise, fluid, variations in kidney shape, and modality artifacts. Moreover, well-annotated US datasets for renal segmentation and detection are scarce. This study aims to build a novel, well-annotated dataset containing 44,880 US images. In addition, we propose a novel training scheme that utilizes the encoder and decoder parts of a state-of-the-art segmentation algorithm. In the pre-processing step, pixel intensity normalization improves contrast and facilitates model convergence. The modified encoder-decoder architecture incorporates pyramid-shaped hole pooling, cascaded multiple-hole convolutions, and batch normalization. The decoder gradually reconstructs spatial information, including complete object boundaries, and a post-processing module based on concave curvature reduces the false positive rate of the results. We present benchmark findings to validate the quality of the proposed training scheme and dataset. We applied six evaluation metrics and several baseline segmentation approaches to our novel kidney US dataset. Among the evaluated models, DeepLabv3+ performed best, achieving Dice, Hausdorff distance 95, accuracy, specificity, average symmetric surface distance, and recall scores of 89.76%, 9.91, 98.14%, 98.83%, 3.03, and 90.68%, respectively. The proposed training strategy aids state-of-the-art segmentation models, resulting in better-segmented predictions. Furthermore, the large, well-annotated kidney US public dataset will serve as a valuable baseline source for future medical image analysis research.
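The pixel intensity normalization pre-processing step mentioned above can be illustrated with a generic percentile-clipped min-max rescaling, a common choice for low-contrast ultrasound frames. This is a sketch of the general technique, not necessarily the paper's exact implementation; the toy frame and percentile choices are assumptions:

```python
import numpy as np

def normalize_intensity(img: np.ndarray, p_low=1.0, p_high=99.0) -> np.ndarray:
    """Percentile-clipped min-max normalization to [0, 1].

    Clipping to the 1st/99th percentiles suppresses outlier speckle values
    before rescaling, stretching the usable contrast range.
    """
    lo, hi = np.percentile(img, [p_low, p_high])
    clipped = np.clip(img.astype(np.float32), lo, hi)
    return (clipped - lo) / (hi - lo + 1e-8)

# Toy "ultrasound" frame: low-contrast values plus one bright speckle outlier.
rng = np.random.default_rng(0)
frame = rng.normal(80, 5, size=(64, 64)).astype(np.float32)
frame[10, 10] = 255.0  # speckle outlier

out = normalize_intensity(frame)
print(out.min(), out.max())
```

Normalizing every frame to a common [0, 1] range is what stabilizes gradient scales across a heterogeneous dataset and thereby "facilitates model convergence", as the abstract puts it.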
Affiliation(s)
- Rashid Khan
  - College of Big Data and Internet, Shenzhen Technology University, Shenzhen, 518188, China
  - College of Applied Sciences, Shenzhen University, Shenzhen, 518060, China
  - Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Health Science Center, Shenzhen, 518060, China
- Chuda Xiao
  - College of Big Data and Internet, Shenzhen Technology University, Shenzhen, 518188, China
  - Wuerzburg Dynamics Inc., Shenzhen, 518188, China
- Yang Liu
  - Department of Urology, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, 510120, China
- Jinyu Tian
  - Wuerzburg Dynamics Inc., Shenzhen, 518188, China
- Zhuo Chen
  - Wuerzburg Dynamics Inc., Shenzhen, 518188, China
- Liyilei Su
  - College of Big Data and Internet, Shenzhen Technology University, Shenzhen, 518188, China
  - College of Applied Sciences, Shenzhen University, Shenzhen, 518060, China
  - Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Health Science Center, Shenzhen, 518060, China
- Dan Li
  - Wuerzburg Dynamics Inc., Shenzhen, 518188, China
- Haseeb Hassan
  - College of Big Data and Internet, Shenzhen Technology University, Shenzhen, 518188, China
- Haoyu Li
  - College of Big Data and Internet, Shenzhen Technology University, Shenzhen, 518188, China
- Weiguo Xie
  - Wuerzburg Dynamics Inc., Shenzhen, 518188, China
- Wen Zhong
  - Department of Urology, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, 510120, China
- Bingding Huang
  - College of Big Data and Internet, Shenzhen Technology University, Shenzhen, 518188, China
6
Weerarathna IN, Kamble AR, Luharia A. Artificial Intelligence Applications for Biomedical Cancer Research: A Review. Cureus 2023; 15:e48307. [PMID: 38058345] [PMCID: PMC10697339] [DOI: 10.7759/cureus.48307] [Received: 10/27/2023] [Accepted: 11/05/2023]
Abstract
Artificial intelligence (AI) has rapidly evolved and demonstrated its potential in transforming biomedical cancer research, offering innovative solutions for cancer diagnosis, treatment, and overall patient care. Over the past two decades, AI has played a pivotal role in revolutionizing various facets of cancer clinical research. In this comprehensive review, we delve into the diverse applications of AI across the cancer care continuum, encompassing radiodiagnosis, radiotherapy, chemotherapy, immunotherapy, targeted therapy, surgery, and nanotechnology. AI has revolutionized cancer diagnosis, enabling early detection and precise characterization through advanced image analysis techniques. In radiodiagnosis, AI-driven algorithms enhance the accuracy of medical imaging, making it an invaluable tool for clinicians in the detection and assessment of cancer. AI has also transformed radiotherapy, facilitating precise tumor boundary delineation, optimizing treatment planning, and enabling real-time adjustments to improve therapeutic outcomes while minimizing collateral damage to healthy tissues. In chemotherapy, AI models have emerged as powerful tools for predicting patient responses to different treatment regimens, allowing for more personalized and effective strategies. In immunotherapy, AI analyzes genetic and imaging data to select ideal candidates for treatment and predict responses. Targeted therapy has seen great advancements with AI, aiding in the identification of specific molecular targets for tailored treatments. AI plays a vital role in surgery by offering real-time navigation and support, enhancing surgical precision. Moreover, the synergy between AI and nanotechnology promises the development of personalized nanomedicines, offering more efficient and targeted cancer treatments. While challenges related to data quality, interpretability, and ethical considerations persist, the future of AI in cancer research holds tremendous promise for improving patient outcomes through advanced and individualized care.
Affiliation(s)
- Induni N Weerarathna
  - Biomedical Sciences, School of Allied Health Sciences, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Aahash R Kamble
  - Artificial Intelligence and Data Science, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Anurag Luharia
  - Radiotherapy, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
7
Stabile AM, Pistilli A, Mariangela R, Rende M, Bartolini D, Di Sante G. New Challenges for Anatomists in the Era of Omics. Diagnostics (Basel) 2023; 13:2963. [PMID: 37761332] [PMCID: PMC10529314] [DOI: 10.3390/diagnostics13182963] [Received: 07/31/2023] [Revised: 09/08/2023] [Accepted: 09/10/2023]
Abstract
Anatomic studies have traditionally relied on macroscopic, microscopic, and histological techniques to investigate the structure of tissues and organs. Anatomic studies are essential in many fields, including medicine, biology, and veterinary science. Advances in technology, such as imaging techniques and molecular biology, continue to provide new insights into the anatomy of living organisms. Therefore, anatomy remains an active and important area in the scientific field. The consolidation in recent years of some omics technologies such as genomics, transcriptomics, proteomics, and metabolomics allows for a more complete and detailed understanding of the structure and function of cells, tissues, and organs. These have been joined more recently by "omics" such as radiomics, pathomics, and connectomics, supported by computer-assisted technologies such as neural networks, 3D bioprinting, and artificial intelligence. All these new tools, although some are still in the early stages of development, have the potential to strongly contribute to the macroscopic and microscopic characterization in medicine. For anatomists, it is time to hitch a ride and get on board omics technologies to sail to new frontiers and to explore novel scenarios in anatomy.
Affiliation(s)
- Anna Maria Stabile
  - Department of Medicine and Surgery, Section of Human, Clinical and Forensic Anatomy, University of Perugia, 60132 Perugia, Italy
- Alessandra Pistilli
  - Department of Medicine and Surgery, Section of Human, Clinical and Forensic Anatomy, University of Perugia, 60132 Perugia, Italy
- Ruggirello Mariangela
  - Department of Medicine and Surgery, Section of Human, Clinical and Forensic Anatomy, University of Perugia, 60132 Perugia, Italy
- Mario Rende
  - Department of Medicine and Surgery, Section of Human, Clinical and Forensic Anatomy, University of Perugia, 60132 Perugia, Italy
- Desirée Bartolini
  - Department of Medicine and Surgery, Section of Human, Clinical and Forensic Anatomy, University of Perugia, 60132 Perugia, Italy
  - Department of Pharmaceutical Sciences, University of Perugia, 06126 Perugia, Italy
- Gabriele Di Sante
  - Department of Medicine and Surgery, Section of Human, Clinical and Forensic Anatomy, University of Perugia, 60132 Perugia, Italy
8
Patel SK, Khan S, Dasari V, Gupta S. Beyond Pain Relief: An In-Depth Review of Vertebral Height Restoration After Balloon Kyphoplasty in Vertebral Compression Fractures. Cureus 2023; 15:e46124. [PMID: 37900521] [PMCID: PMC10612383] [DOI: 10.7759/cureus.46124] [Received: 08/26/2023] [Accepted: 09/26/2023]
Abstract
This comprehensive review delves into the intricate landscape of vertebral height restoration after balloon kyphoplasty in cases of vertebral compression fractures. With a comprehensive examination of procedural intricacies, radiological evaluations, clinical outcomes, and influential factors, a nuanced comprehension unfolds. Beyond its immediate alleviation of pain, vertebral height restoration emerges as a linchpin in enhancing spinal alignment, fostering functional recuperation, and augmenting the overall quality of life. This review underscores the pivotal role of balloon kyphoplasty, transcending its mere medical utility to become a conduit for renewed independence and well-being among individuals grappling with vertebral compression fractures. The ongoing advancements in medical science and the continued pursuit of research stand poised to amplify the significance of vertebral height restoration, manifesting a promising horizon for individuals seeking respite from pain, a revitalised capacity for movement, and a life unburdened by its constraints.
Affiliation(s)
- Siddharth K Patel
  - Orthopaedics, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Sohael Khan
  - Orthopaedics, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Venkatesh Dasari
  - Orthopaedics, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Suvarn Gupta
  - Orthopaedics, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
9
Vallée R, Vallée JN, Guillevin C, Lallouette A, Thomas C, Rittano G, Wager M, Guillevin R, Vallée A. Machine learning decision tree models for multiclass classification of common malignant brain tumors using perfusion and spectroscopy MRI data. Front Oncol 2023; 13:1089998. [PMID: 37614505] [PMCID: PMC10442801] [DOI: 10.3389/fonc.2023.1089998] [Received: 11/08/2022] [Accepted: 07/17/2023]
Abstract
Background: To investigate the contribution of machine learning decision tree models applied to perfusion and spectroscopy MRI for multiclass classification of lymphomas, glioblastomas, and metastases, and to identify the key pathophysiological processes underlying the hierarchy of the models' decision-making algorithms. Methods: From 2013 to 2020, 180 consecutive patients with histopathologically proven lymphomas (n = 77), glioblastomas (n = 45), and metastases (n = 58) who underwent MRI were included in the machine learning analysis. The perfusion parameters (rCBVmax, PSRmax) and spectroscopic concentration ratios (Lac/Cr, Cho/NAA, Cho/Cr, and Lip/Cr) were used to construct Classification and Regression Tree (CART) models for multiclass classification of these brain tumors. A 5-fold random cross-validation was performed on the dataset. Results: The decision tree model successfully classified all three tumor types, with AUCs of 0.98 for PCNSLs, 0.98 for GBMs, and 1.00 for METs. Model accuracy was 0.96, with an R-square of 0.887. Five classifier-combination rules were extracted, with predicted probabilities from 0.907 to 0.989 at the end nodes of the decision tree. In hierarchical order of importance, the root node (Cho/NAA) of the decision tree algorithm was primarily based on the proliferative, infiltrative, and neuronal destructive characteristics of the tumor; the internal node (PSRmax) on tumor tissue capillary permeability; and the end nodes (Lac/Cr or Cho/Cr) on glycolytic tumor energy metabolism (Warburg effect) or on membrane lipid tumor metabolism. Conclusion: Our study shows the potential of machine learning decision tree algorithms, based on a hierarchical, convenient, and personalized use of perfusion and spectroscopy MRI data, for multiclass classification of these brain tumors.
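The study's CART-with-5-fold-cross-validation design can be sketched with scikit-learn on synthetic stand-ins for the MRI features. Only the feature names and cohort sizes below come from the abstract; the feature distributions and class separations are invented purely for illustration:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical stand-ins for three of the paper's features per patient:
# [Cho/NAA, PSRmax, Lac/Cr]; classes 0=PCNSL, 1=GBM, 2=MET.
def sample(n, cho_naa, psr, lac):
    return np.column_stack([
        rng.normal(cho_naa, 0.5, n),
        rng.normal(psr, 10.0, n),
        rng.normal(lac, 0.4, n),
    ])

X = np.vstack([sample(77, 4.0, 90.0, 1.5),   # lymphoma-like cluster
               sample(45, 3.0, 60.0, 2.5),   # glioblastoma-like cluster
               sample(58, 2.0, 95.0, 1.0)])  # metastasis-like cluster
y = np.repeat([0, 1, 2], [77, 45, 58])

# CART classifier evaluated with 5-fold cross-validation, as in the design.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
scores = cross_val_score(tree, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

A fitted CART exposes its split hierarchy (root, internal, and end nodes), which is what allowed the authors to read off the relative importance of Cho/NAA, PSRmax, and the end-node ratios; `sklearn.tree.export_text(tree)` prints that hierarchy after fitting.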
Affiliation(s)
- Rodolphe Vallée
  - Interdisciplinary Laboratory in Neurosciences, Physiology and Psychology (LINP2), Université Paris Lumière (UPL), Paris Nanterre University, Nanterre, France
  - Laboratory of Mathematics and Applications (LMA) Centre National de la Recherche Scientifique - Unité Mixte de Recherche (CNRS UMR) 7348, i3M-DACTIM-MIH (Data Analysis and Computations Through Imaging Modeling - Mathematics, Image, Health), Poitiers University, Poitiers, France
  - Glaucoma Research Center, Swiss Visio Network, Lausanne, Switzerland
- Jean-Noël Vallée
  - Laboratory of Mathematics and Applications (LMA) Centre National de la Recherche Scientifique - Unité Mixte de Recherche (CNRS UMR) 7348, i3M-DACTIM-MIH (Data Analysis and Computations Through Imaging Modeling - Mathematics, Image, Health), Poitiers University, Poitiers, France
  - Diagnostic and Functional Neuroradiology and Brain Stimulation Department, 15-20 National Vision Hospital of Paris - Paris University Hospital Center, University of PARIS-SACLAY - UVSQ, Paris, France
- Carole Guillevin
  - Laboratory of Mathematics and Applications (LMA) Centre National de la Recherche Scientifique - Unité Mixte de Recherche (CNRS UMR) 7348, i3M-DACTIM-MIH (Data Analysis and Computations Through Imaging Modeling - Mathematics, Image, Health), Poitiers University, Poitiers, France
  - Radiology Department, Poitiers University Hospital, Poitiers University, Poitiers, France
- Clément Thomas
  - Laboratory of Mathematics and Applications (LMA) Centre National de la Recherche Scientifique - Unité Mixte de Recherche (CNRS UMR) 7348, i3M-DACTIM-MIH (Data Analysis and Computations Through Imaging Modeling - Mathematics, Image, Health), Poitiers University, Poitiers, France
  - Diagnostic and Functional Neuroradiology and Brain Stimulation Department, 15-20 National Vision Hospital of Paris - Paris University Hospital Center, University of PARIS-SACLAY - UVSQ, Paris, France
- Michel Wager
  - Neurosurgery Department, Poitiers University Hospital, Poitiers University, Poitiers, France
- Rémy Guillevin
  - Laboratory of Mathematics and Applications (LMA) Centre National de la Recherche Scientifique - Unité Mixte de Recherche (CNRS UMR) 7348, i3M-DACTIM-MIH (Data Analysis and Computations Through Imaging Modeling - Mathematics, Image, Health), Poitiers University, Poitiers, France
  - Radiology Department, Poitiers University Hospital, Poitiers University, Poitiers, France
- Alexandre Vallée
  - Department of Epidemiology and Public Health, Foch Hospital, Suresnes, France
10
Bhakar S, Sinwar D, Pradhan N, Dhaka VS, Cherrez-Ojeda I, Parveen A, Hassan MU. Computational Intelligence-Based Disease Severity Identification: A Review of Multidisciplinary Domains. Diagnostics (Basel) 2023; 13:1212. [PMID: 37046431] [PMCID: PMC10093052] [DOI: 10.3390/diagnostics13071212] [Received: 01/29/2023] [Revised: 03/06/2023] [Accepted: 03/08/2023]
Abstract
Disease severity identification using computational intelligence-based approaches is gaining popularity. Artificial intelligence and deep-learning-assisted approaches are proving significant in the rapid and accurate diagnosis of several diseases. In addition to disease identification, these approaches have the potential to identify the severity of a disease. The problem of disease severity identification can be considered a multi-class classification problem, where the class labels are the severity levels of the disease. Many computational intelligence-based solutions have been presented by researchers for severity identification. This paper presents a comprehensive review of recent approaches for identifying disease severity levels using computational intelligence-based approaches. We followed the PRISMA guidelines and compiled works from the last decade on the severity identification of multidisciplinary diseases from well-known publishers, such as MDPI, Springer, IEEE, and Elsevier. This article is chiefly devoted to the severity identification of two diseases, viz. Parkinson's disease and diabetic retinopathy. Severity identification of a few other diseases, such as COVID-19, autonomic nervous system dysfunction, tuberculosis, sepsis, sleep apnea, psychosis, traumatic brain injury, breast cancer, knee osteoarthritis, and Alzheimer's disease, is also briefly covered. Each work has been carefully examined with respect to its methodology, the dataset used, the type of disease, and several performance metrics such as accuracy and specificity. In addition, we present a few public repositories that can be utilized to conduct research on disease severity identification. We hope that this review not only acts as a compendium but also provides insights to researchers working on disease severity identification using computational intelligence-based approaches.
Affiliation(s)
- Suman Bhakar
- Department of Computer and Communication Engineering, Manipal University Jaipur, Dehmi Kalan, Jaipur 303007, Rajasthan, India
- Deepak Sinwar
- Department of Computer and Communication Engineering, Manipal University Jaipur, Dehmi Kalan, Jaipur 303007, Rajasthan, India
- Nitesh Pradhan
- Department of Computer Science and Engineering, Manipal University Jaipur, Dehmi Kalan, Jaipur 303007, Rajasthan, India
- Vijaypal Singh Dhaka
- Department of Computer and Communication Engineering, Manipal University Jaipur, Dehmi Kalan, Jaipur 303007, Rajasthan, India
- Ivan Cherrez-Ojeda
- Allergy and Pulmonology, Espíritu Santo University, Samborondón 0901-952, Ecuador
- Amna Parveen
- College of Pharmacy, Gachon University, Medical Campus, No. 191, Hambakmoero, Yeonsu-gu, Incheon 21936, Republic of Korea
- Muhammad Umair Hassan
- Department of ICT and Natural Sciences, Norwegian University of Science and Technology (NTNU), 6009 Ålesund, Norway
11
Systematic Review of Tumor Segmentation Strategies for Bone Metastases. Cancers (Basel) 2023; 15:1750. [PMID: 36980636 PMCID: PMC10046265 DOI: 10.3390/cancers15061750] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2023] [Revised: 03/09/2023] [Accepted: 03/10/2023] [Indexed: 03/18/2023] Open
Abstract
Purpose: To investigate segmentation approaches for bone metastases in differentiating benign from malignant bone lesions and characterizing malignant bone lesions. Method: The literature search was conducted in the Scopus, PubMed, IEEE, MEDLINE, and Web of Science electronic databases following the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). A total of 77 original articles, 24 review articles, and 1 comparison paper published between January 2010 and March 2022 were included in the review. Results: The results showed that, of the 77 original articles, most studies used neural network-based approaches (58.44%) and CT-based imaging (50.65%). However, the review highlights the lack of a gold standard for tumor boundaries and the need for manual correction of the segmentation output, which largely explains the absence of clinical translation studies. Moreover, only 19 studies (24.67%) specifically mentioned the feasibility of their proposed methods for use in clinical practice. Conclusion: The development of tumor segmentation techniques that combine anatomical information and metabolic activity is encouraging, even though no single segmentation method is optimal for all applications or able to compensate for all the difficulties inherent in data limitations.
12
Al Fryan LH, Shomo MI, Alazzam MB. Application of Deep Learning System Technology in Identification of Women's Breast Cancer. Medicina (Kaunas) 2023; 59:487. [PMID: 36984487 PMCID: PMC10052988 DOI: 10.3390/medicina59030487] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2023] [Revised: 01/30/2023] [Accepted: 02/03/2023] [Indexed: 03/06/2023] Open
Abstract
Background and Objectives: The classification of breast cancer is performed based on its histological subtypes using the degree of differentiation. However, there have been low levels of intra- and inter-observer agreement in the process. The use of convolutional neural networks (CNNs) in the field of radiology has shown potential in categorizing medical images, including the histological classification of malignant neoplasms. Materials and Methods: This study aimed to use CNNs to develop an automated approach to aid in the histological classification of breast cancer, with a focus on improving accuracy, reproducibility, and reducing subjectivity and bias. The study identified regions of interest (ROIs), filtered images with low representation of tumor cells, and trained the CNN to classify the images. Results: The major contribution of this research was the application of CNNs as a machine learning technique for histologically classifying breast cancer using medical images. The study resulted in the development of a low-cost, portable, and easy-to-use AI model that can be used by healthcare professionals in remote areas. Conclusions: This study aimed to use artificial neural networks to improve the accuracy and reproducibility of the process of histologically classifying breast cancer and reduce the subjectivity and bias that can be introduced by human observers. The results showed the potential for using CNNs in the development of an automated approach for the histological classification of breast cancer.
Affiliation(s)
- Latefa Hamad Al Fryan
- Department of Educational Technology, College of Education, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Mahasin Ibrahim Shomo
- Applied College, Curriculum and Instruction, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Malik Bader Alazzam
- Information Technology College, Ajloun National University, Ajloun 26873, Jordan
- Research Center, The University of Mashreq, Baghdad 11001, Iraq
- Correspondence:
13
Thong LT, Chou HS, Chew HSJ, Lau Y. Diagnostic test accuracy of artificial intelligence-based imaging for lung cancer screening: A systematic review and meta-analysis. Lung Cancer 2023; 176:4-13. [PMID: 36566582 DOI: 10.1016/j.lungcan.2022.12.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2022] [Revised: 12/04/2022] [Accepted: 12/08/2022] [Indexed: 12/23/2022]
Abstract
BACKGROUND Lung cancer is the principal cause of cancer-related deaths worldwide. Early detection of lung cancer with screening is indispensable to reduce the high morbidity and mortality rates. Artificial intelligence (AI) is widely utilised in healthcare, including in the assessment of medical images. A growing number of reviews have studied the application of AI in lung cancer screening, but no overarching meta-analysis has examined the diagnostic test accuracy (DTA) of AI-based imaging for lung cancer screening. OBJECTIVE To systematically review the DTA of AI-based imaging for lung cancer screening. METHODS PubMed, EMBASE, Cochrane Library, CINAHL, IEEE Xplore, Web of Science, ACM Digital Library, Scopus, PsycINFO, and ProQuest Dissertations and Theses were searched from inception to the present. Studies that were published in English and evaluated the performance of AI-based imaging for lung cancer screening were included. Two independent reviewers screened titles and abstracts and used the Quality Assessment of Diagnostic Accuracy Studies-2 tool to appraise the quality of the selected studies. The Grading of Recommendations Assessment, Development, and Evaluation (GRADE) approach for diagnostic tests was used to assess the certainty of the evidence. RESULTS Twenty-six studies comprising 150,721 images were included. A hierarchical summary receiver-operating characteristic model used for the meta-analysis demonstrated that the pooled sensitivity of AI-based imaging for lung cancer screening was 94.6% (95% CI: 91.4% to 96.7%) and the specificity was 93.6% (95% CI: 88.5% to 96.6%). Subgroup analyses revealed similar results across types of AI, regions, data sources, and years of publication, but the overall quality of evidence was very low. CONCLUSION AI-based imaging could effectively detect lung cancer and be incorporated into lung cancer screening programs. Further high-quality DTA studies on large lung cancer screening populations are required to validate AI's role in early lung cancer detection.
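The pooled sensitivity and specificity above come from a hierarchical summary receiver-operating characteristic model; as a greatly simplified illustration of the underlying idea, per-study proportions can be pooled on the logit scale with inverse-variance weights. The sketch below is not the HSROC model used in the review, and the three studies are invented:

```python
import math

def pooled_logit(props, ns):
    """Inverse-variance pooled proportion on the logit scale."""
    logits, weights = [], []
    for p, n in zip(props, ns):
        tp = p * n  # approximate event count in this study
        # variance of a logit-transformed proportion: 1/events + 1/non-events
        var = 1.0 / tp + 1.0 / (n - tp)
        logits.append(math.log(p / (1.0 - p)))
        weights.append(1.0 / var)
    pooled = sum(l * w for l, w in zip(logits, weights)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-pooled))  # back-transform to a proportion

# Three hypothetical studies: per-study sensitivity and number of cancer cases
sens = pooled_logit([0.92, 0.95, 0.97], [100, 250, 400])
print(round(sens, 3))  # ≈ 0.954
```

Larger studies receive proportionally more weight, which is why the pooled value sits closer to the estimates from the two bigger studies.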
Affiliation(s)
- Lay Teng Thong
- Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, National University of Singapore, Singapore.
- Hui Shan Chou
- Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, National University of Singapore, Singapore.
- Han Shi Jocelyn Chew
- Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, National University of Singapore, Singapore.
- Ying Lau
- Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, National University of Singapore, Singapore.
14
Shahnavazi M, Mohamadrahimi H. The application of artificial neural networks in the detection of mandibular fractures using panoramic radiography. Dent Res J (Isfahan) 2023; 20:27. [PMID: 36960025 PMCID: PMC10028573 DOI: 10.4103/1735-3327.369629] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2022] [Revised: 11/06/2022] [Accepted: 12/20/2022] [Indexed: 03/25/2023] Open
Abstract
Background Panoramic radiography is a standard diagnostic imaging method for dentists. However, it is challenging to detect mandibular trauma and fractures in panoramic radiographs due to the superimposed facial skeleton structures. The objective of this study was to develop a deep learning algorithm capable of detecting mandibular fractures and trauma automatically and to compare its performance with that of general dentists. Materials and Methods This is a retrospective diagnostic test accuracy study. This study used a two-stage deep learning framework. To train the model, 190 panoramic images were collected from four different sources. The mandible was first segmented using a U-net model. Then, to detect fractures, a Faster Region-based Convolutional Neural Network (Faster R-CNN) model was applied. Finally, the accuracy, specificity, and sensitivity of the artificial intelligence model were compared with those of general dentists in trauma diagnosis. Results The mAP50 and mAP75 for object detection were 98.66% and 57.90%, respectively. The classification accuracy of the model was 91.67%. The sensitivity and specificity of the model were 100% and 83.33%, respectively. In comparison, human-level diagnostic accuracy, sensitivity, and specificity were 87.22 ± 8.91, 82.22 ± 16.39, and 92.22 ± 6.33, respectively. Conclusion Our framework can outperform general dentists in diagnosing mandibular trauma and fractures.
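The three reported model metrics are mutually consistent with a small test set; for example, a hypothetical confusion matrix of 6 true positives, 0 false negatives, 5 true negatives and 1 false positive reproduces them exactly (the counts are illustrative and not taken from the paper):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Standard diagnostic-test metrics from confusion-matrix counts."""
    total = tp + fn + tn + fp
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

acc, sens, spec = diagnostic_metrics(tp=6, fn=0, tn=5, fp=1)
print(f"accuracy={acc:.4f}, sensitivity={sens:.4f}, specificity={spec:.4f}")
# accuracy 0.9167, sensitivity 1.0000, specificity 0.8333, matching the
# reported 91.67%, 100%, and 83.33%
```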
Affiliation(s)
- Maryam Shahnavazi
- Department of Oral and Maxillofacial Radiology, School of Dentistry, Aja University of Medical Sciences, Tehran, Iran
- Address for correspondence: Dr. Maryam Shahnavazi, School of Dentistry, Aja University of Medical Sciences, Misaq Complex, 13th East Street, Ajoudanieh, Tehran, Iran. E-mail:
- Hosein Mohamadrahimi
- Department of Orthodontics, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
15
Ghashghaei S, Wood DA, Sadatshojaei E, Jalilpoor M. Grayscale Image Statistical Attributes Effectively Distinguish the Severity of Lung Abnormalities in CT Scan Slices of COVID-19 Patients. SN Comput Sci 2023; 4:201. [PMID: 36789248 PMCID: PMC9912234 DOI: 10.1007/s42979-022-01642-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/15/2022] [Accepted: 12/27/2022] [Indexed: 02/12/2023]
Abstract
Grayscale statistical attributes analysed for 513 extracted images taken from pulmonary computed tomography (CT) scan slices of 57 individuals (49 confirmed COVID-19 positive; eight confirmed COVID-19 negative) are able to accurately predict a visual score (VS, from 0 to 4) used by a clinician to assess the severity of lung abnormalities in the patients. Some of these attributes can be used graphically to distinguish useful but overlapping distributions for the VS classes. Using machine and deep learning (ML/DL) algorithms with twelve grayscale image attributes as inputs enables the VS classes to be accurately distinguished. A convolutional neural network achieves this with better than 96% accuracy (only 18 images misclassified out of 513) on a supervised learning basis. Analysis of confusion matrices enables the VS prediction performance of the ML/DL algorithms to be explored in detail. Those matrices demonstrate that the best-performing ML/DL algorithms successfully distinguish between VS classes 0 and 1, which clinicians cannot readily do with the naked eye. Just five grayscale image attributes can also be used to generate an algorithmically defined scoring system (AS) that can graphically distinguish the degree of pulmonary impact in the dataset evaluated. The AS classification involves less overlap between its classes than the VS system and could be exploited as an automated expert system. The best-performing ML/DL models are able to predict the AS classes with better than 99% accuracy using twelve grayscale attributes as inputs. The decision tree and random forest algorithms accomplish that distinction with just one classification error in the 513 images tested.
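First-order grayscale attributes of the kind used as ML/DL inputs here are straightforward to compute; a minimal NumPy sketch on a synthetic slice (the paper's exact twelve-attribute set is not reproduced):

```python
import numpy as np

def grayscale_attributes(img):
    """A few first-order grayscale statistics of an image array."""
    flat = np.asarray(img, dtype=float).ravel()
    mean, std = flat.mean(), flat.std()
    # Fisher skewness of the intensity distribution
    skew = np.mean(((flat - mean) / std) ** 3) if std > 0 else 0.0
    # Shannon entropy of the 256-bin intensity histogram
    hist, _ = np.histogram(flat, bins=256, range=(0, 256))
    p = hist[hist > 0] / flat.size
    entropy = -np.sum(p * np.log2(p))
    return {"mean": mean, "std": std, "skew": skew, "entropy": entropy}

# Synthetic 8-bit "slice": dark background with one brighter patch
img = np.zeros((64, 64))
img[20:40, 20:40] = 200.0
attrs = grayscale_attributes(img)
print(attrs)
```

Feature vectors like this one, stacked over many slices, would then feed a conventional classifier such as a decision tree or random forest.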
Affiliation(s)
- Sara Ghashghaei
- Medical School, Shiraz University of Medical Sciences, Shiraz, Iran
- Erfan Sadatshojaei
- Department of Chemical Engineering, Shiraz University, Shiraz 71345, Iran
16
Kong Z, Ouyang H, Cao Y, Huang T, Ahn E, Zhang M, Liu H. Automated periodontitis bone loss diagnosis in panoramic radiographs using a bespoke two-stage detector. Comput Biol Med 2023; 152:106374. [PMID: 36512876 DOI: 10.1016/j.compbiomed.2022.106374] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2022] [Revised: 11/02/2022] [Accepted: 11/27/2022] [Indexed: 11/30/2022]
Abstract
Periodontitis is a serious oral disease that can lead to severe conditions such as bone loss and tooth loss if left untreated. Diagnosis of radiographic bone loss (RBL) is critical for the staging and treatment of periodontitis. Unfortunately, RBL diagnosis by examining panoramic radiographs is time-consuming. The demand for automated image analysis is urgent. However, existing deep learning methods have limited diagnostic accuracy and present certain implementation difficulties. Hence, we propose a novel two-stage periodontitis detection convolutional neural network (PDCNN), in which we optimize the detector with an anchor-free encoding that allows fast and accurate prediction. We also introduce a proposal-connection module in our detector that excludes less relevant regions of interest (ROIs), making the network focus on more relevant ROIs to improve detection accuracy. Furthermore, we introduce a large-scale, high-resolution panoramic radiograph dataset that captures various complex cases with professional periodontitis annotations. Experiments on our panoramic-image dataset show that the proposed approach achieved an RBL classification accuracy of 0.762, outperforming state-of-the-art detectors such as Faster R-CNN and YOLO-v4. We conclude that the proposed method successfully improves RBL detection performance. The dataset and our code have been released on GitHub (https://github.com/PuckBlink/PDCNN).
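The idea of excluding less relevant ROIs can be illustrated with a plain intersection-over-union filter; this is only a generic stand-in, as the paper's proposal-connection module uses a more involved relevance criterion:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def filter_rois(rois, anchor_box, threshold=0.3):
    """Keep only proposals that overlap the anchor region sufficiently."""
    return [r for r in rois if iou(r, anchor_box) >= threshold]

# Hypothetical boxes: one region of clinical interest, three proposals
tooth_region = (10, 10, 50, 50)
proposals = [(12, 12, 48, 48), (40, 40, 90, 90), (200, 200, 240, 240)]
print(filter_rois(proposals, tooth_region))  # keeps only the first proposal
```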
Affiliation(s)
- Zhengmin Kong
- School of Electrical Engineering and Automation, Wuhan University, Wuhan, 430072, China.
- Hui Ouyang
- School of Electrical Engineering and Automation, Wuhan University, Wuhan, 430072, China.
- Yiyuan Cao
- Department of Radiology, Zhongnan Hospital of Wuhan University, Wuhan, 430071, China.
- Tao Huang
- College of Science and Engineering, James Cook University, Queensland, Australia
- Euijoon Ahn
- College of Science and Engineering, James Cook University, Queensland, Australia
- Maoqi Zhang
- The State Key Laboratory Breeding Base of Basic Science of Stomatology & Key Laboratory for Oral Biomedicine of Ministry of Education, School and Hospital of Stomatology, Wuhan University, Wuhan, 430079, China; Taikang Center for Life and Medical Sciences, Wuhan University, Wuhan, 430079, China
- Huan Liu
- The State Key Laboratory Breeding Base of Basic Science of Stomatology & Key Laboratory for Oral Biomedicine of Ministry of Education, School and Hospital of Stomatology, Wuhan University, Wuhan, 430079, China; Taikang Center for Life and Medical Sciences, Wuhan University, Wuhan, 430079, China
17
Ding H, Yang Y, Li X, Cheung GSP, Matinlinna JP, Burrow M, Tsoi JKH. A simple AI-enabled method for quantifying bacterial adhesion on dental materials. Biomater Investig Dent 2022; 9:75-83. [PMID: 36081491 PMCID: PMC9448434 DOI: 10.1080/26415275.2022.2114479] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022] Open
Affiliation(s)
- Hao Ding
- Dental Materials Science, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Pokfulam, Hong Kong
- Yunzhen Yang
- Dental Materials Science, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Pokfulam, Hong Kong
- Xin Li
- Restorative Dental Sciences, Faculty of Dentistry, The University of Hong Kong, Pokfulam, Hong Kong
- Department of Stomatology, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen, China
- Gary Shun-Pan Cheung
- Restorative Dental Sciences, Faculty of Dentistry, The University of Hong Kong, Pokfulam, Hong Kong
- Jukka Pekka Matinlinna
- Dental Materials Science, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Pokfulam, Hong Kong
- Michael Burrow
- Restorative Dental Sciences, Faculty of Dentistry, The University of Hong Kong, Pokfulam, Hong Kong
- James Kit-Hon Tsoi
- Dental Materials Science, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Pokfulam, Hong Kong
18
Reddy PG, Ramashri T, Krishna KL. Brain Tumour Region Extraction Using Novel Self-Organising Map-Based KFCM Algorithm. Pertanika Journal of Science and Technology 2022. [DOI: 10.47836/pjst.31.1.33] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
Medical professionals need assistance in locating tumours in ground truth brain images because tumour location, contrast, intensity, size, and shape vary between images as a result of different acquisition methods, modalities, and patient ages. A medical examiner has difficulty manually separating a tumour from other parts of a Magnetic Resonance Imaging (MRI) image. Many semi- and fully automated brain tumour detection systems have been described in the literature, and they continue to improve. The segmentation literature has seen several transformations over the years, and an in-depth examination of these methods is the focus of this investigation. We survey, through several review papers, the most recent soft computing technologies used in MRI brain analysis. This study examines Self-Organising Maps (SOM) combined with K-means and with the kernel Fuzzy c-means (KFCM) method for segmentation. The proposed SOM networks were first compared to K-means analysis in an experiment based on datasets with well-known cluster solutions. The SOM was then combined with KFCM, reducing time complexity and producing more accurate results than other methods. Experiments show that skewed data improve network performance as more SOMs are used. Finally, performance measures on real-time datasets are analysed using machine learning approaches. The results show that the proposed algorithm has good sensitivity and better accuracy than K-means and other state-of-the-art methods.
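Kernel fuzzy c-means itself is compact to implement; the sketch below uses the standard Gaussian-kernel KFCM update rules on toy one-dimensional data rather than MRI intensities, and the SOM initialisation described in the abstract is omitted:

```python
import numpy as np

def kfcm(x, centers, m=2.0, sigma=2.0, iters=50, eps=1e-9):
    """Gaussian-kernel fuzzy c-means on 1-D data."""
    x, v = np.asarray(x, float), np.asarray(centers, float)
    for _ in range(iters):
        # Gaussian kernel between every centre (rows) and point (columns)
        k = np.exp(-(x[None, :] - v[:, None]) ** 2 / (2 * sigma ** 2))
        d = 1.0 - k + eps                  # kernel-space distance (up to a factor of 2)
        u = d ** (-1.0 / (m - 1))
        u /= u.sum(axis=0)                 # memberships sum to 1 over clusters
        w = (u ** m) * k
        v = (w @ x) / w.sum(axis=1)        # kernel-weighted centre update
    return v, u

# Two well-separated toy clusters around 0.25 and 10.25
data = [0.0, 0.5, 1.0, -0.5, 9.5, 10.0, 10.5, 11.0]
centers, memberships = kfcm(data, centers=[2.0, 8.0])
print(np.sort(centers))
```

The fuzzy memberships (rather than hard assignments) are what make the method tolerant of the overlapping intensity distributions typical of brain MRI.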
19
Zhang J, Liu Z, Ma Y, Zhao X, Yang B. Part-and-whole: A novel framework for deformable medical image registration. Appl Intell 2022. [DOI: 10.1007/s10489-022-04329-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
20
Marchetti L, Nifosì R, Martelli PL, Da Pozzo E, Cappello V, Banterle F, Trincavelli ML, Martini C, D’Elia M. Quantum computing algorithms: getting closer to critical problems in computational biology. Brief Bioinform 2022; 23:6758194. [PMID: 36220772 PMCID: PMC9677474 DOI: 10.1093/bib/bbac437] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2022] [Revised: 08/15/2022] [Accepted: 09/08/2022] [Indexed: 12/14/2022] Open
Abstract
Recent biotechnological progress has allowed life scientists and physicians to access an unprecedented, massive amount of data at all levels (molecular, supramolecular, cellular and so on) of biological complexity. So far, mostly classical computational efforts have been dedicated to the simulation, prediction or de novo design of biomolecules, in order to improve the understanding of their function or to develop novel therapeutics. At a higher level of complexity, the progress of omics disciplines (genomics, transcriptomics, proteomics and metabolomics) has prompted researchers to develop informatics means to describe and annotate new biomolecules identified with a resolution down to the single cell, but also with high-throughput speed. Machine learning approaches have been applied both to modelling studies and to the handling of biomedical data. Quantum computing (QC) approaches hold the promise to resolve, speed up or refine the analysis of a wide range of these computational problems. Here, we review and comment on recently developed QC algorithms for biocomputing, with a particular focus on multi-scale modelling and genomic analyses. Unlike other computational problems, such as protein structure prediction, these problems have been shown to map adequately onto quantum architectures, the main limit to their immediate use being the number of qubits and decoherence effects in the available quantum machines. Possible advantages over the classical counterparts are highlighted, along with a description of some hybrid classical/quantum approaches, which could be the closest to being realistically applied in biocomputation.
Affiliation(s)
- Pier Luigi Martelli
- Corresponding authors: Pier Luigi Martelli. Tel.: +39 0512094005; Fax: +39 0512094005; E-mail: ; Claudia Martini. Tel.: +39 0502219522; Fax: +39 050 2210680; E-mail:
- Eleonora Da Pozzo
- University of Pisa, Department of Pharmacy, via Bonanno 6, 56126 Pisa, Italy
- Valentina Cappello
- Italian Institute of Technology, Center for Materials Interfaces, Viale Rinaldo Piaggio 34, 56025 Pontedera (PI), Italy
- Claudia Martini
- Corresponding authors: Pier Luigi Martelli. Tel.: +39 0512094005; Fax: +39 0512094005; E-mail: ; Claudia Martini. Tel.: +39 0502219522; Fax: +39 050 2210680; E-mail:
- Massimo D’Elia
- University of Pisa, Department of Physics, Largo Bruno Pontecorvo 3, 56127 Pisa, Italy
- INFN, Sezione di Pisa, Largo Bruno Pontecorvo 3, I-56127 Pisa, Italy
21
Kline A, Wang H, Li Y, Dennis S, Hutch M, Xu Z, Wang F, Cheng F, Luo Y. Multimodal machine learning in precision health: A scoping review. NPJ Digit Med 2022; 5:171. [PMID: 36344814 PMCID: PMC9640667 DOI: 10.1038/s41746-022-00712-8] [Citation(s) in RCA: 59] [Impact Index Per Article: 29.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2022] [Accepted: 10/14/2022] [Indexed: 11/09/2022] Open
Abstract
Machine learning is frequently leveraged to tackle problems in the health sector, including clinical decision support. Its use has historically focused on single-modal data. In the biomedical field of machine learning, attempts to improve prediction and mimic the multimodal nature of clinical expert decision-making have been met by fusing disparate data. This review was conducted to summarize the current studies in this field and identify topics ripe for future research. We conducted this review in accordance with the PRISMA extension for Scoping Reviews to characterize multimodal data fusion in health. Search strings were established and used in the PubMed, Google Scholar, and IEEE Xplore databases from 2011 to 2021. A final set of 128 articles was included in the analysis. The most common health areas utilizing multimodal methods were neurology and oncology. Early fusion was the most common data merging strategy. Notably, there was an improvement in predictive performance when using data fusion. Lacking from the papers were clear clinical deployment strategies, FDA approval, and analysis of how multimodal approaches drawn from diverse sub-populations may mitigate biases and healthcare disparities. These findings provide a summary of multimodal data fusion as applied to health diagnosis/prognosis problems. Few papers compared the outputs of a multimodal approach with a unimodal prediction; however, those that did achieved an average increase of 6.4% in predictive accuracy. Multimodal machine learning, while more robust in its estimations than unimodal methods, has drawbacks in its scalability and the time-consuming nature of information concatenation.
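Early fusion, the most common merging strategy found by the review, simply concatenates per-modality feature vectors before any model sees them; a schematic sketch in which the modalities and feature dimensions are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-patient feature blocks from three modalities
imaging = rng.normal(size=(4, 16))   # e.g. radiomics features
notes = rng.normal(size=(4, 8))      # e.g. clinical-text embeddings
labs = rng.normal(size=(4, 5))       # e.g. laboratory values

# Early fusion: concatenate modalities into one feature matrix,
# then hand the fused matrix to any single downstream classifier.
fused = np.concatenate([imaging, notes, labs], axis=1)
print(fused.shape)  # (4, 29)
```

Late fusion, by contrast, would train one model per modality and combine their predictions, which is one reason early fusion is simpler to deploy but more sensitive to missing modalities.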
Affiliation(s)
- Adrienne Kline
- Department of Preventive Medicine, Northwestern University, Chicago, 60201, IL, USA
- Hanyin Wang
- Department of Preventive Medicine, Northwestern University, Chicago, 60201, IL, USA
- Yikuan Li
- Department of Preventive Medicine, Northwestern University, Chicago, 60201, IL, USA
- Saya Dennis
- Department of Preventive Medicine, Northwestern University, Chicago, 60201, IL, USA
- Meghan Hutch
- Department of Preventive Medicine, Northwestern University, Chicago, 60201, IL, USA
- Zhenxing Xu
- Department of Population Health Sciences, Cornell University, New York, 10065, NY, USA
- Fei Wang
- Department of Population Health Sciences, Cornell University, New York, 10065, NY, USA
- Feixiong Cheng
- Cleveland Clinic Lerner College of Medicine, Case Western Reserve University, Cleveland, 44195, OH, USA
- Yuan Luo
- Department of Preventive Medicine, Northwestern University, Chicago, 60201, IL, USA.
22
An evaluation of lightweight deep learning techniques in medical imaging for high precision COVID-19 diagnostics. Healthcare Analytics 2022. [PMID: 37520618 PMCID: PMC9396460 DOI: 10.1016/j.health.2022.100096] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
Abstract
Timely and rapid diagnoses are core to informing on optimum interventions that curb the spread of COVID-19. The use of medical images such as chest X-rays and CTs has been advocated to supplement the Reverse-Transcription Polymerase Chain Reaction (RT-PCR) test, which in turn has stimulated the application of deep learning techniques in the development of automated systems for the detection of infections. Decision support systems relax the challenges inherent to the physical examination of images, which is both time consuming and requires interpretation by highly trained clinicians. A review of relevant studies reported to date shows that the approaches utilised by most deep learning algorithms are not amenable to implementation on resource-constrained devices. Given that the rate of infections is increasing, rapid, trusted diagnoses are a central tool in managing the spread, mandating a need for low-cost, mobile point-of-care detection systems, especially in middle- and low-income nations. The paper presents the development and evaluation of the performance of a lightweight deep learning technique for the detection of COVID-19 using the MobileNetV2 model. Results demonstrate that the performance of the lightweight deep learning model is competitive with respect to heavyweight models but delivers a significant increase in deployment efficiency, notably in lowering the cost and memory requirements of computing resources.
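MobileNetV2's light weight stems largely from depthwise-separable convolutions; the parameter arithmetic behind that saving can be checked directly (the layer sizes below are illustrative, not the network's actual configuration):

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k filter per input channel, then 1 x 1 pointwise mixing."""
    return k * k * c_in + c_in * c_out

standard = conv_params(3, 256, 256)                  # 589,824 weights
separable = depthwise_separable_params(3, 256, 256)  # 67,840 weights
print(f"reduction factor: {standard / separable:.1f}x")
```

For a 3 x 3 layer with 256 channels in and out, the separable form needs roughly nine times fewer weights, which is why such models fit comfortably on resource-constrained devices.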
23
Dillman JR, Somasundaram E, Brady SL, He L. Current and emerging artificial intelligence applications for pediatric abdominal imaging. Pediatr Radiol 2022; 52:2139-2148. [PMID: 33844048 DOI: 10.1007/s00247-021-05057-0] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/27/2020] [Revised: 01/25/2021] [Accepted: 03/16/2021] [Indexed: 12/12/2022]
Abstract
Artificial intelligence (AI) uses computers to mimic cognitive functions of the human brain, allowing inferences to be made from generally large datasets. Traditional machine learning (e.g., decision tree analysis, support vector machines) and deep learning (e.g., convolutional neural networks) are two commonly employed AI approaches both outside and within the field of medicine. Such techniques can be used to evaluate medical images for the purposes of automated detection and segmentation, classification tasks (including diagnosis, lesion or tissue characterization, and prediction), and image reconstruction. In this review article we highlight recent literature describing current and emerging AI methods applied to abdominal imaging (e.g., CT, MRI and US) and suggest potential future applications of AI in the pediatric population.
Affiliation(s)
- Jonathan R Dillman
- Department of Radiology, Cincinnati Children's Hospital Medical Center, 3333 Burnet Ave., Cincinnati, OH, 45229, USA.
- Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH, USA.
- Elan Somasundaram
- Department of Radiology, Cincinnati Children's Hospital Medical Center, 3333 Burnet Ave., Cincinnati, OH, 45229, USA.
- Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Samuel L Brady
- Department of Radiology, Cincinnati Children's Hospital Medical Center, 3333 Burnet Ave., Cincinnati, OH, 45229, USA.
- Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Lili He
- Department of Radiology, Cincinnati Children's Hospital Medical Center, 3333 Burnet Ave., Cincinnati, OH, 45229, USA.
- Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH, USA
24
Santos A, Caramelo F, Melo JB, Castelo-Branco M. Dopaminergic Gene Dosage Reveals Distinct Biological Partitions between Autism and Developmental Delay as Revealed by Complex Network Analysis and Machine Learning Approaches. J Pers Med 2022; 12:1579. [PMID: 36294718 PMCID: PMC9604562 DOI: 10.3390/jpm12101579] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2022] [Revised: 09/11/2022] [Accepted: 09/20/2022] [Indexed: 11/21/2022] Open
Abstract
The neurobiological mechanisms underlying Autism Spectrum Disorders (ASD) remains controversial. One factor contributing to this debate is the phenotypic heterogeneity observed in ASD, which suggests that multiple system disruptions may contribute to diverse patterns of impairment which have been reported between and within study samples. Here, we used SFARI data to address genetic imbalances affecting the dopaminergic system. Using complex network analysis, we investigated the relations between phenotypic profiles, gene dosage and gene ontology (GO) terms related to dopaminergic neurotransmission from a polygenic point-of-view. We observed that the degree of distribution of the networks matched a power-law distribution characterized by the presence of hubs, gene or GO nodes with a large number of interactions. Furthermore, we identified interesting patterns related to subnetworks of genes and GO terms, which suggested applicability to separation of clinical clusters (Developmental Delay (DD) versus ASD). This has the potential to improve our understanding of genetic variability issues and has implications for diagnostic categorization. In ASD, we identified the separability of four key dopaminergic mechanisms disrupted with regard to receptor binding, synaptic physiology and neural differentiation, each belonging to particular subgroups of ASD participants, whereas in DD a more unitary biological pattern was found. Finally, network analysis was fed into a machine learning binary classification framework to differentiate between the diagnosis of ASD and DD. Subsets of 1846 participants were used to train a Random Forest algorithm. Our best classifier achieved, on average, a diagnosis-predicting accuracy of 85.18% (sd 1.11%) on the test samples of 790 participants using 117 genes. The achieved accuracy surpassed results using genetic data and closely matched imaging approaches addressing binary diagnostic classification. 
Importantly, we observed a similar prediction accuracy when the classifier used only 62 GO features. This result further corroborates the complex network analysis approach, suggesting that different genetic causes might converge on the dysregulation of the same set of biological mechanisms, leading to a similar disease phenotype. This new biology-driven ontological framework yields a less variable and more compact domain-related set of features with potential for mechanistic generalization. The proposed network analysis, allowing for the determination of a clear-cut biological distinction between ASD and DD (the latter presenting much lower modularity and heterogeneity), is amenable to machine learning approaches and provides an interesting avenue for future research.
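As an illustrative sketch of the classification protocol described above (Random Forest, 1846 training and 790 test participants, 117 gene features), the following uses synthetic stand-in data; all names, sizes, and the label-generating rule are assumptions, not the study's actual data:

```python
# Hypothetical sketch: binary diagnosis classification (ASD vs DD) with a
# Random Forest, mirroring the train/test sizes reported above.
# Data are synthetic; the gene-dosage features and label rule are stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_samples, n_genes = 2636, 117               # 1846 train + 790 test in the study
X = rng.normal(size=(n_samples, n_genes))    # stand-in for gene-dosage features
y = (X[:, :10].sum(axis=1) > 0).astype(int)  # stand-in diagnostic label

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=790, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
```

In practice one would repeat this over many resampled subsets, as the paper does, and report the mean and standard deviation of the test accuracy.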
Affiliation(s)
- André Santos
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), ICNAS, Faculty of Medicine, University of Coimbra, 3000-548 Coimbra, Portugal
- Francisco Caramelo
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), ICNAS, Faculty of Medicine, University of Coimbra, 3000-548 Coimbra, Portugal
- CIBB, iCBR, Faculty of Medicine, University of Coimbra, 3000-548 Coimbra, Portugal
- Joana Barbosa Melo
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), ICNAS, Faculty of Medicine, University of Coimbra, 3000-548 Coimbra, Portugal
- CIBB, iCBR, Faculty of Medicine, University of Coimbra, 3000-548 Coimbra, Portugal
- Miguel Castelo-Branco
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), ICNAS, Faculty of Medicine, University of Coimbra, 3000-548 Coimbra, Portugal
- Correspondence:
25
Droppelmann G, Tello M, García N, Greene C, Jorquera C, Feijoo F. Lateral elbow tendinopathy and artificial intelligence: Binary and multilabel findings detection using machine learning algorithms. Front Med (Lausanne) 2022; 9:945698. [PMID: 36213676 PMCID: PMC9537568 DOI: 10.3389/fmed.2022.945698] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2022] [Accepted: 08/29/2022] [Indexed: 11/13/2022] Open
Abstract
Background Ultrasound (US) is a valuable technique to detect degenerative findings and intrasubstance tears in lateral elbow tendinopathy (LET). Machine learning methods can support this radiological diagnosis. Aim To assess multilabel classification models using machine learning to detect degenerative findings and intrasubstance tears in US images with a LET diagnosis. Materials and methods A retrospective study was performed. US images and medical records from patients diagnosed with LET from January 1st, 2017, to December 30th, 2018, were selected. Datasets were built for training and testing the models. For image analysis, feature extraction, texture characteristics, intensity distribution, pixel-pixel co-occurrence patterns, and scale granularity were implemented. Six different supervised learning models were implemented for binary and multilabel classification. All models were trained to classify four tendon findings (hypoechogenicity, neovascularity, enthesopathy, and intrasubstance tear). Accuracy indicators and their confidence intervals (CI) were obtained for all models following a repeated k-fold cross-validation method. To measure multilabel prediction, multilabel accuracy, sensitivity, specificity, and receiver operating characteristic (ROC) curves with 95% CI were used. Results A total of 30,007 US images (4,324 exams, 2,917 patients) were included in the analysis. The random forest (RF) model presented the highest mean values of area under the curve (AUC), sensitivity, and specificity for each degenerative finding in the binary classification. AUC and sensitivity performed best for intrasubstance tear, with 0.991 [95% CI: 0.99, 0.99] and 0.775 [95% CI: 0.77, 0.77], respectively. Specificity, by contrast, was highest for hypoechogenicity, with 0.821 [95% CI: 0.82, 0.82]. In the multilabel classifier, RF also presented the highest performance.
The accuracy was 0.772 [95% CI: 0.771, 0.773], with macro- and micro-average AUC scores of 0.948 [95% CI: 0.94, 0.94] and 0.962 [95% CI: 0.96, 0.96], respectively. Diagnostic accuracy, sensitivity, and specificity with 95% CI were calculated. Conclusion Machine learning algorithms based on US images of LET presented high diagnostic accuracy. The random forest model in particular showed the best performance in both the binary and multilabel classifiers, particularly for intrasubstance tears.
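A minimal sketch of multilabel classification with macro- and micro-average AUC, loosely following the four-finding setup above; the features, label-generating rule, and sizes are synthetic assumptions, not the study's data:

```python
# Hedged sketch: multilabel random forest with per-label probabilities and
# macro/micro AUC, for four hypothetical tendon findings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(1200, 20))                # stand-in image features
W = rng.normal(size=(20, 4))
Y = (X @ W + rng.normal(scale=0.5, size=(1200, 4)) > 0).astype(int)  # 4 findings

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3, random_state=1)
clf = MultiOutputClassifier(
    RandomForestClassifier(n_estimators=200, random_state=1)).fit(X_tr, Y_tr)

# One probability column per label, then average AUC across labels.
proba = np.column_stack([p[:, 1] for p in clf.predict_proba(X_te)])
macro_auc = roc_auc_score(Y_te, proba, average="macro")
micro_auc = roc_auc_score(Y_te, proba, average="micro")
```

The macro average treats each finding equally, while the micro average pools all label decisions, matching the two AUC summaries reported above.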
Affiliation(s)
- Guillermo Droppelmann
- Research Center on Medicine, Exercise, Sport and Health, MEDS Clinic, Santiago, RM, Chile
- Health Sciences Ph.D. Program, Universidad Católica de Murcia UCAM, Murcia, Spain
- Principles and Practice of Clinical Research (PPCR), Harvard T.H. Chan School of Public Health, Boston, MA, United States
- Correspondence: Guillermo Droppelmann
- Manuel Tello
- School of Industrial Engineering, Pontificia Universidad Católica de Valparaíso, Valparaíso, Chile
- Nicolás García
- MSK Diagnostic and Interventional Radiology Department, MEDS Clinic, Santiago, RM, Chile
- Cristóbal Greene
- Hand and Elbow Unit, Department of Orthopaedic Surgery, MEDS Clinic, Santiago, RM, Chile
- Carlos Jorquera
- Facultad de Ciencias, Escuela de Nutrición y Dietética, Universidad Mayor, Santiago, RM, Chile
- Felipe Feijoo
- School of Industrial Engineering, Pontificia Universidad Católica de Valparaíso, Valparaíso, Chile
26
Hossain MZ, Daskalaki E, Brüstle A, Desborough J, Lueck CJ, Suominen H. The role of machine learning in developing non-magnetic resonance imaging based biomarkers for multiple sclerosis: a systematic review. BMC Med Inform Decis Mak 2022; 22:242. [PMID: 36109726 PMCID: PMC9476596 DOI: 10.1186/s12911-022-01985-5] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2021] [Accepted: 09/02/2022] [Indexed: 11/10/2022] Open
Abstract
Background
Multiple sclerosis (MS) is a neurological condition whose symptoms, severity, and progression over time vary enormously among individuals. Ideally, each person living with MS should be provided with an accurate prognosis at the time of diagnosis, precision in initial and subsequent treatment decisions, and improved timeliness in detecting the need to reassess treatment regimens. To manage these three components, discovering an accurate, objective measure of overall disease severity is essential. Machine learning (ML) algorithms can contribute to finding such a clinically useful biomarker of MS through their ability to search and analyze datasets about potential biomarkers at scale. Our aim was to conduct a systematic review to determine how, and in what way, ML has been applied to the study of MS biomarkers on data from sources other than magnetic resonance imaging.
Methods
Systematic searches through eight databases were conducted for literature published in 2014–2020 on MS and specified ML algorithms.
Results
Of the 1,052 returned papers, 66 met the inclusion criteria. All included papers addressed developing classifiers for MS identification or measuring its progression, typically using hold-out evaluation on subsets of fewer than 200 participants with MS. These classifiers focused on biomarkers of MS derived from omics and phenotypical data (34.5% clinical, 33.3% biological, 23.0% physiological, and 9.2% drug response). Algorithmic choices depended on both the amount of data available for supervised ML (91.5%; 49.2% classification and 42.3% regression) and the requirement to be able to justify the resulting decision-making principles in healthcare settings. Therefore, algorithms based on decision trees and support vector machines were commonly used, and the maximum average performance, 89.9% AUC, was found for random forests, compared with other ML algorithms.
Conclusions
ML is applicable to determining how candidate biomarkers perform in the assessment of disease severity. However, applying ML research to develop decision aids that help clinicians optimize treatment strategies and analyze treatment responses in individual patients calls for creating appropriate data resources and shared experimental protocols. These should target moving from segregated classification of signals or natural language to both holistic analyses across data modalities and clinically meaningful differentiation of disease.
27
Medical Data Classification Assisted by Machine Learning Strategy. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:9699612. [PMID: 36124172 PMCID: PMC9482495 DOI: 10.1155/2022/9699612] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/17/2022] [Revised: 07/25/2022] [Accepted: 08/02/2022] [Indexed: 11/18/2022]
Abstract
With the development of science and technology, data plays an increasingly important role in our daily life, and much attention has been paid to the field of data mining. Data classification is the premise of data mining, and how well the data is classified directly affects the performance of subsequent models. In the medical field in particular, data classification can help accurately determine the location of patients' lesions and reduce the workload of doctors during treatment. However, medical data is characterized by high noise, strong correlation, and high dimensionality, which poses great challenges to traditional classification models. It is therefore important to design an advanced model to improve the performance of medical data classification. In this context, this paper first introduces the structure and characteristics of the convolutional neural network (CNN) model and then demonstrates its advantages in medical data processing, especially in data classification. Secondly, we design a new medical data classification model based on the CNN. Finally, simulation results show that the proposed method achieves higher classification accuracy, faster model convergence, and lower training error than conventional machine learning methods, demonstrating its effectiveness for medical data classification.
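To make the CNN building blocks mentioned above concrete, here is a minimal NumPy sketch of the convolution → ReLU → pooling chain; the 1-D signal and difference filter are illustrative assumptions, not the paper's architecture:

```python
# Minimal numpy sketch of CNN building blocks: convolution, ReLU, max pooling.
import numpy as np

def conv1d(x, kernel):
    """Valid 1-D convolution (cross-correlation, as in most CNN libraries)."""
    k = len(kernel)
    return np.array([x[i:i + k] @ kernel for i in range(len(x) - k + 1)])

def relu(x):
    """Zero out negative activations."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling; truncates any trailing remainder."""
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(axis=1)

signal = np.array([0., 1., 2., 3., 2., 1., 0., 1.])
edge_kernel = np.array([1., -1.])   # simple difference filter
features = max_pool(relu(conv1d(signal, edge_kernel)))
# features → [0., 1., 1.]: the pooled map flags where the signal decreases
```

A real CNN stacks many such layers with learned kernels, but the data flow per layer is exactly this.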
28
Akinyelu AA, Zaccagna F, Grist JT, Castelli M, Rundo L. Brain Tumor Diagnosis Using Machine Learning, Convolutional Neural Networks, Capsule Neural Networks and Vision Transformers, Applied to MRI: A Survey. J Imaging 2022; 8:205. [PMID: 35893083 PMCID: PMC9331677 DOI: 10.3390/jimaging8080205] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2022] [Revised: 06/20/2022] [Accepted: 07/12/2022] [Indexed: 02/01/2023] Open
Abstract
Management of brain tumors is based on clinical and radiological information with presumed grade dictating treatment. Hence, a non-invasive assessment of tumor grade is of paramount importance to choose the best treatment plan. Convolutional Neural Networks (CNNs) represent one of the effective Deep Learning (DL)-based techniques that have been used for brain tumor diagnosis. However, they are unable to handle input modifications effectively. Capsule neural networks (CapsNets) are a novel type of machine learning (ML) architecture that was recently developed to address the drawbacks of CNNs. CapsNets are resistant to rotations and affine translations, which is beneficial when processing medical imaging datasets. Moreover, Vision Transformers (ViT)-based solutions have been very recently proposed to address the issue of long-range dependency in CNNs. This survey provides a comprehensive overview of brain tumor classification and segmentation techniques, with a focus on ML-based, CNN-based, CapsNet-based, and ViT-based techniques. The survey highlights the fundamental contributions of recent studies and the performance of state-of-the-art techniques. Moreover, we present an in-depth discussion of crucial issues and open challenges. We also identify some key limitations and promising future research directions. We envisage that this survey shall serve as a good springboard for further study.
Collapse
Affiliation(s)
- Andronicus A. Akinyelu
- NOVA Information Management School (NOVA IMS), Universidade NOVA de Lisboa, Campus de Campolide, 1070-312 Lisboa, Portugal;
- Department of Computer Science and Informatics, University of the Free State, Phuthaditjhaba 9866, South Africa
- Fulvio Zaccagna
- Department of Biomedical and Neuromotor Sciences, Alma Mater Studiorum-University of Bologna, 40138 Bologna, Italy;
- IRCCS Istituto delle Scienze Neurologiche di Bologna, Functional and Molecular Neuroimaging Unit, 40139 Bologna, Italy
- James T. Grist
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, UK;
- Department of Radiology, Oxford University Hospitals NHS Foundation Trust, Oxford OX3 9DU, UK
- Oxford Centre for Clinical Magnetic Research Imaging, University of Oxford, Oxford OX3 9DU, UK
- Institute of Cancer and Genomic Sciences, University of Birmingham, Birmingham B15 2SY, UK
- Mauro Castelli
- NOVA Information Management School (NOVA IMS), Universidade NOVA de Lisboa, Campus de Campolide, 1070-312 Lisboa, Portugal;
- Leonardo Rundo
- Department of Information and Electrical Engineering and Applied Mathematics, University of Salerno, 84084 Fisciano, Italy
29
Ghashghaei S, Wood DA, Sadatshojaei E, Jalilpoor M. Grayscale image statistics of COVID‐19 patient CT scans characterize lung condition with machine and deep learning. Chronic Dis Transl Med 2022; 8:191-206. [PMID: 35942198 PMCID: PMC9347876 DOI: 10.1002/cdt3.27] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Revised: 03/28/2022] [Accepted: 04/08/2022] [Indexed: 11/30/2022] Open
Abstract
Background Grayscale image attributes of computed tomography (CT) pulmonary scans contain valuable information about patients with respiratory ailments. These attributes were used to evaluate the severity of lung conditions in patients confirmed to be with and without COVID‐19. Method Five hundred thirteen CT images relating to 57 patients (49 with COVID‐19; 8 free of COVID‐19) were collected at Namazi Medical Centre (Shiraz, Iran) in 2020 and 2021. Five visual scores (VS: 0, 1, 2, 3, or 4) were clinically assigned to these images, with the score increasing with the severity of COVID‐19‐related lung conditions. Eleven deep learning and machine learning (DL/ML) techniques were used to distinguish the VS class based on 12 grayscale image attributes. Results The convolutional neural network achieved 96.49% VS accuracy (18 errors from 513 images), successfully distinguishing VS Classes 0 and 1 and outperforming clinicians' visual inspections. An algorithmic score (AS), involving just five grayscale image attributes, was developed independently of clinicians' assessments (99.81% AS accuracy; 1 error from 513 images). Conclusion Grayscale CT image attributes can successfully distinguish the severity of COVID‐19 lung damage. The AS technique developed provides a suitable basis for an automated system using ML/DL methods and 12 image attributes. Highlights:
- Grayscale image statistics of CT scans can effectively classify lung abnormalities
- Graphical trends of grayscale statistics distinguish visually assessed COVID‐19 classes
- Machine/deep learning algorithms predict severity from image grayscale attributes
- Algorithmic class systems can be established using just five grayscale attributes
- Confusion matrices provide detailed insight into algorithm prediction capabilities
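The kind of grayscale attributes described above can be sketched as simple statistics over pixel intensities; the specific attribute set below (mean, standard deviation, skewness, histogram entropy) and the random stand-in image are assumptions, not the paper's 12 attributes:

```python
# Illustrative grayscale-attribute extraction from a CT-like 8-bit image.
import numpy as np

def grayscale_stats(img):
    """Return a few common grayscale statistics of an 8-bit image."""
    x = img.astype(float).ravel()
    mean, std = x.mean(), x.std()
    skew = ((x - mean) ** 3).mean() / std ** 3 if std > 0 else 0.0
    # Histogram entropy in bits (256 unit-width bins -> density == probability).
    hist, _ = np.histogram(x, bins=256, range=(0, 256), density=True)
    p = hist[hist > 0]
    entropy = -(p * np.log2(p)).sum()
    return {"mean": mean, "std": std, "skewness": skew, "entropy": entropy}

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(64, 64))   # stand-in grayscale CT slice
stats = grayscale_stats(img)
```

Feature vectors like this, computed per image, are what the ML/DL models above consume.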
Affiliation(s)
- Sara Ghashghaei
- Medical School, Shiraz University of Medical Sciences, Shiraz, Iran
- David A. Wood
- Department of Research, DWA Energy Limited, Lincoln LN5 9JP, UK
30
Senthilkumar T, Kumarganesh S, Sivakumar P, Periyarselvam K. Primitive detection of Alzheimer’s disease using neuroimaging: A progression model for Alzheimer’s disease: Their applications, benefits, and drawbacks. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2022. [DOI: 10.3233/jifs-220628] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Alzheimer’s disease (A.D.) is the most widespread type of dementia, an incurable neurodegenerative disease that affects millions of older people. Researchers have used their understanding of Alzheimer’s disease risk variables to develop enrichment processes for longitudinal imaging studies, reducing sample sizes and study time. This paper describes the primitive detection of Alzheimer’s disease using neuroimaging techniques. Several preprocessing methods were used to ensure that the dataset was ready for subsequent feature extraction and categorization. Noise was reduced by converting many scan frames from real to DCT space and averaging them. Both sides of the averaged image were filtered and combined into a single shot after being converted back to real space. InceptionV3 and DenseNet201 are the two pre-trained models used in the suggested model. The PCA approach was used to select the traits, with a resulting explained variance ratio of 0.99. The Simons Foundation Autism Research Initiative (SFARI)—Simon’s Simplex Collection (SSC)—and UCI machine learning datasets showed that our method is faster and more successful at identifying complete long-risk patterns than existing methods.
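The PCA step described above (keep enough components to explain 99% of variance) can be sketched directly with scikit-learn; the synthetic correlated feature matrix is a stand-in for concatenated InceptionV3/DenseNet201 features:

```python
# Hedged sketch of variance-threshold PCA feature selection.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
# Correlated stand-in features (200 samples x 512 dimensions).
features = rng.normal(size=(200, 512)) @ rng.normal(size=(512, 512))

pca = PCA(n_components=0.99)            # keep 99% explained variance
reduced = pca.fit_transform(features)   # as many components as needed
kept_ratio = pca.explained_variance_ratio_.sum()
```

Passing a float in (0, 1) to `n_components` makes scikit-learn choose the smallest number of components whose cumulative explained variance reaches that threshold.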
Affiliation(s)
- T. Senthilkumar
- GRT Institute of Engineering and Technology, Tiruttani, Tamilnadu, India
- P. Sivakumar
- GRT Institute of Engineering and Technology, Tiruttani, Tamilnadu, India
- K. Periyarselvam
- GRT Institute of Engineering and Technology, Tiruttani, Tamilnadu, India
31
Artificial intelligence in gastrointestinal and hepatic imaging: past, present and future scopes. Clin Imaging 2022; 87:43-53. [DOI: 10.1016/j.clinimag.2022.04.007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2021] [Revised: 03/09/2022] [Accepted: 04/11/2022] [Indexed: 11/19/2022]
32
Artificial Intelligence: A New Diagnostic Software in Dentistry: A Preliminary Performance Diagnostic Study. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2022; 19:ijerph19031728. [PMID: 35162751 PMCID: PMC8835112 DOI: 10.3390/ijerph19031728] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/14/2021] [Revised: 12/18/2021] [Accepted: 01/27/2022] [Indexed: 02/01/2023]
Abstract
Background: Artificial intelligence (AI) has taken hold in public health because more and more people are looking to make a diagnosis using technology that allows them to work faster and more accurately, reducing costs and the number of medical errors. Methods: In the present study, 120 panoramic X-rays (OPGs) were randomly selected from the Department of Oral and Maxillofacial Sciences of Sapienza University of Rome, Italy. The OPGs were acquired and analyzed using Apox, which takes a panoramic X-ray and automatically returns the dental formula, the presence of dental implants, prosthetic crowns, fillings and root remnants. A descriptive analysis was performed presenting the categorical variables as absolute and relative frequencies. Results: In total, the number of true positive (TP) values was 2,195 (19.06%); true negative (TN), 8,908 (77.34%); false positive (FP), 132 (1.15%); and false negative (FN), 283 (2.46%). The overall sensitivity was 0.89, while the overall specificity was 0.98. Conclusions: The present study shows the latest achievements in dentistry, analyzing the application and credibility of a new diagnostic method to improve the work of dentists and patients’ care.
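The headline metrics above follow directly from the reported confusion-matrix counts, which can be recomputed in a few lines:

```python
# Recomputing sensitivity and specificity from the counts reported above
# (TP = 2,195; TN = 8,908; FP = 132; FN = 283).
tp, tn, fp, fn = 2195, 8908, 132, 283

sensitivity = tp / (tp + fn)               # ≈ 0.886, reported as 0.89
specificity = tn / (tn + fp)               # ≈ 0.985, reported as 0.98
accuracy = (tp + tn) / (tp + tn + fp + fn) # ≈ 0.964
```

Sensitivity is the fraction of true findings the software detected; specificity is the fraction of absent findings it correctly left unmarked.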
33
Wang S, Niu K, Chen L, Rao X. Method for counting labeled neurons in mouse brain regions based on image representation and registration. Med Biol Eng Comput 2022; 60:487-500. [PMID: 35015271 DOI: 10.1007/s11517-021-02495-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2021] [Accepted: 12/18/2021] [Indexed: 11/25/2022]
Abstract
An important step in brain image analysis is to delineate specific brain regions by matching brain slices to standard brain reference atlases and to perform statistical analysis on the labeled neurons in each region. Taking fluorescently labeled mouse brain slices as an example, the noise and distortion introduced during slice preparation, together with the modal differences from the standard brain atlas, mean that brain slices cannot directly establish an accurate one-to-one correspondence with the atlas, which in turn affects the accuracy of labeled-neuron counts in each brain region. This paper introduces the idea of image representation, uses neural networks to register mouse brain slices of different modalities to the brain atlas, completes the regional localization of the brain slices, and uses threshold segmentation to detect and count the labeled neurons in each brain region. The proposed method can effectively solve the problem of large deviations in neuron counts caused by inaccurate division of brain regions in strongly deformed brain slices, and can automatically and accurately count labeled neurons in each brain region. [Graphical abstract: the whole framework of the method for counting labeled neurons in mouse brain regions based on image representation and registration.]
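The counting step described above (threshold segmentation followed by counting the detected neurons) can be sketched with SciPy's connected-component labeling; the toy image, threshold value, and blob placement are illustrative assumptions:

```python
# Illustrative sketch: threshold segmentation + connected-component counting.
import numpy as np
from scipy import ndimage

# Toy fluorescence channel with three bright blobs standing in for neurons.
img = np.zeros((20, 20))
img[2:5, 2:5] = 1.0
img[10:13, 8:11] = 0.8
img[15:18, 15:18] = 0.9

mask = img > 0.5                          # threshold segmentation
labels, n_neurons = ndimage.label(mask)   # one label per connected component
```

In the paper this counting runs per atlas-registered region, so each region's mask is labeled and counted separately.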
Affiliation(s)
- Songwei Wang
- School of Electrical Engineering, Zhengzhou University, Zhengzhou, 450001, China
- Ke Niu
- School of Electrical Engineering, Zhengzhou University, Zhengzhou, 450001, China
- Liwei Chen
- School of Electrical Engineering, Zhengzhou University, Zhengzhou, 450001, China.
- Xiaoping Rao
- State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Key Laboratory of Magnetic Resonance in Biological Systems, Wuhan Center for Magnetic Resonance, Innovation Academy for Precision Measurement Science and Methodology, Chinese Academy of Sciences, Wuhan, 430071, China.
34

35
O'Connell AM, Bartolotta TV, Orlando A, Jung SH, Baek J, Parker KJ. Diagnostic Performance of an Artificial Intelligence System in Breast Ultrasound. JOURNAL OF ULTRASOUND IN MEDICINE : OFFICIAL JOURNAL OF THE AMERICAN INSTITUTE OF ULTRASOUND IN MEDICINE 2022; 41:97-105. [PMID: 33665833 DOI: 10.1002/jum.15684] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/03/2020] [Revised: 02/17/2021] [Accepted: 02/18/2021] [Indexed: 05/26/2023]
Abstract
OBJECTIVES We study the performance of an artificial intelligence (AI) program designed to assist radiologists in the diagnosis of breast cancer, relative to measures obtained from conventional readings by radiologists. METHODS A total of 10 radiologists read a curated, anonymized group of 299 breast ultrasound images that contained at least one suspicious lesion and for which a final diagnosis was independently determined. Separately, the AI program was initialized by a lead radiologist and the computed results compared against those of the radiologists. RESULTS The AI program's diagnoses of breast lesions had concordance with the 10 radiologists' readings across a number of BI-RADS descriptors. The sensitivity, specificity, and accuracy of the AI program's diagnosis of benign versus malignant was above 0.8, in agreement with the highest performing radiologists and commensurate with recent studies. CONCLUSION The trained AI program can contribute to accuracy of breast cancer diagnoses with ultrasound.
Affiliation(s)
- Avice M O'Connell
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, New York, USA
- Tommaso V Bartolotta
- Department of Radiology, University Hospital, Palermo, Italy
- Fondazione Istituto G. Giglio Hospital, Cefalù, Italy
- Alessia Orlando
- Department of Radiology, University Hospital, Palermo, Italy
- Sin-Ho Jung
- Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, North Carolina, USA
- Jihye Baek
- Department of Electrical and Computer Engineering, University of Rochester, Rochester, New York, USA
- Kevin J Parker
- Department of Electrical and Computer Engineering, University of Rochester, Rochester, New York, USA
36
Early Diagnosis of Alzheimer’s Disease Using Cerebral Catheter Angiogram Neuroimaging: A Novel Model Based on Deep Learning Approaches. BIG DATA AND COGNITIVE COMPUTING 2021. [DOI: 10.3390/bdcc6010002] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Neuroimaging refers to techniques that provide information about the neural structure of the human brain, which is utilized for diagnosis, treatment, and scientific research. Classifying neuroimages is one of the most important steps needed by medical staff to diagnose patients early by investigating the indicators of different neuroimaging types. Early diagnosis of Alzheimer’s disease is of great importance in preventing the deterioration of the patient’s condition. In this research, a novel approach was devised based on digital subtracted angiogram scans, which provide sufficient features of a new biomarker, cerebral blood flow. The dataset was acquired from the database of K.A.U.H hospital and contains digital subtracted angiograms of participants who were diagnosed with Alzheimer’s disease, as well as samples from normal controls. Since each scan included multiple frames for the left and right internal carotid arteries (ICAs), pre-processing steps were applied to prepare the dataset for the subsequent stages of feature extraction and classification. The multiple scan frames were transformed from real space into DCT space and averaged to remove noise. The averaged image was then transformed back to real space, and both sides were filtered with the Meijering filter and concatenated into a single image. The proposed model extracts features using two pre-trained models, InceptionV3 and DenseNet201. The PCA method was then utilized to select features with a 0.99 explained variance ratio, and the combination of selected features from both pre-trained models was fed into machine learning classifiers. Overall, the obtained experimental results are at least as good as other state-of-the-art approaches in the literature and more efficient according to recent medical standards, with a 99.14% level of accuracy, considering the difference in dataset samples and the cerebral blood flow biomarker used.
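The DCT-space frame-averaging step described above can be sketched as follows; because the DCT is linear, averaging transformed frames and inverting is equivalent to averaging the frames themselves, and the synthetic frames, noise level, and frame count below are illustrative assumptions:

```python
# Hedged sketch: denoise repeated frames by averaging in DCT space.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(4)
clean = np.outer(np.hanning(32), np.hanning(32))   # stand-in structure
frames = [clean + rng.normal(scale=0.2, size=clean.shape) for _ in range(16)]

# Transform each frame to DCT space, average the coefficients, invert.
avg_dct = np.mean([dctn(f, norm="ortho") for f in frames], axis=0)
denoised = idctn(avg_dct, norm="ortho")            # back to real space

noise_before = np.abs(frames[0] - clean).mean()
noise_after = np.abs(denoised - clean).mean()      # ~4x smaller for 16 frames
```

Averaging 16 independent noise realizations reduces the noise standard deviation by a factor of 4 (√16), which is what makes the averaged image cleaner.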
37
Zhang F, Petersen M, Johnson L, Hall J, O’Bryant SE. Accelerating Hyperparameter Tuning in Machine Learning for Alzheimer's Disease With High Performance Computing. Front Artif Intell 2021; 4:798962. [PMID: 34957393 PMCID: PMC8692864 DOI: 10.3389/frai.2021.798962] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2021] [Accepted: 11/15/2021] [Indexed: 11/27/2022] Open
Abstract
Driven by massive datasets that comprise biomarkers from both blood and magnetic resonance imaging (MRI), the need for advanced learning algorithms and accelerator architectures, such as GPUs and FPGAs, has increased. Machine learning (ML) methods have delivered remarkable predictions for the early diagnosis of Alzheimer's disease (AD). Although ML has improved the accuracy of AD prediction, the complexity of ML pipelines also grows, for example through hyperparameter tuning, which in turn increases computational cost. Thus, accelerating high performance ML for AD is an important research challenge. This work reports a multicore high performance support vector machine (SVM) hyperparameter tuning workflow with 100-times-repeated 5-fold cross-validation for speeding up ML for AD. For demonstration and evaluation purposes, the high performance hyperparameter tuning model was applied to public MRI data for AD and included demographic factors such as age, sex and education. Results showed that computational efficiency increased by 96%, which helps to shed light on future diagnostic AD biomarker applications. The high performance hyperparameter tuning model can also be applied to other ML algorithms such as random forest, logistic regression, xgboost, etc.
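The tuning workflow described above maps naturally onto scikit-learn's grid search with repeated stratified k-fold cross-validation, parallelized across cores via `n_jobs`; the data, grid, and repeat count below are scaled-down assumptions (the paper repeats 100 times):

```python
# Sketch: SVM hyperparameter tuning with repeated 5-fold CV on all cores.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# 5 folds repeated 3 times here; the workflow above uses 100 repeats.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=0)

search = GridSearchCV(
    SVC(),
    param_grid={"C": [0.1, 1, 10], "gamma": ["scale", 0.01]},
    cv=cv,
    n_jobs=-1,          # evaluate folds/candidates on all available cores
).fit(X, y)

best_params, best_score = search.best_params_, search.best_score_
```

Because every (parameter, fold, repeat) evaluation is independent, this search is embarrassingly parallel, which is exactly what the multicore speedup above exploits.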
Affiliation(s)
- Fan Zhang
- Institute for Translational Research, University of North Texas Health Science Center, Fort Worth, TX, United States
- Department of Family Medicine, University of North Texas Health Science Center, Fort Worth, TX, United States
- Melissa Petersen
- Institute for Translational Research, University of North Texas Health Science Center, Fort Worth, TX, United States
- Department of Family Medicine, University of North Texas Health Science Center, Fort Worth, TX, United States
- Leigh Johnson
- Institute for Translational Research, University of North Texas Health Science Center, Fort Worth, TX, United States
- Department of Pharmacology and Neuroscience, University of North Texas Health Science Center, Fort Worth, TX, United States
- James Hall
- Institute for Translational Research, University of North Texas Health Science Center, Fort Worth, TX, United States
- Department of Pharmacology and Neuroscience, University of North Texas Health Science Center, Fort Worth, TX, United States
- Sid E. O’Bryant
- Institute for Translational Research, University of North Texas Health Science Center, Fort Worth, TX, United States
- Department of Pharmacology and Neuroscience, University of North Texas Health Science Center, Fort Worth, TX, United States
38
Liu L, Parker KJ, Jung SH. Design and Analysis Methods for Trials with AI-Based Diagnostic Devices for Breast Cancer. J Pers Med 2021; 11:jpm11111150. [PMID: 34834502 PMCID: PMC8617855 DOI: 10.3390/jpm11111150] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2021] [Revised: 11/02/2021] [Accepted: 11/02/2021] [Indexed: 11/24/2022] Open
Abstract
Imaging is important in cancer diagnostics. It takes a long period of medical training and clinical experience for radiologists to be able to accurately interpret diagnostic images. With the advance of big data analysis, machine learning and AI-based devices are currently under development and taking a role in imaging diagnostics. If an AI-based imaging device can read the image as accurately as experienced radiologists, it may be able to help radiologists increase the accuracy of their reading and manage their workloads. In this paper, we consider two potential study objectives of a clinical trial to evaluate an AI-based device for breast cancer diagnosis by comparing its concordance with human radiologists. We propose statistical design and analysis methods for each study objective. Extensive numerical studies are conducted to show that the proposed statistical testing methods control the type I error rate accurately and the design methods provide required sample sizes with statistical powers close to pre-specified nominal levels. The proposed methods were successfully used to design and analyze a real device trial.
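As a hedged illustration of the kind of sample-size calculation such trial designs involve (not the paper's actual derivation, which is not given in the abstract), a one-sample normal-approximation test of a concordance proportion p0 against an alternative p1 gives:

```python
# Hypothetical sketch: sample size for a one-sample proportion test of
# concordance, via the standard normal approximation. All numbers are
# illustrative assumptions, not values from the paper.
from math import ceil, sqrt
from statistics import NormalDist

def sample_size(p0, p1, alpha=0.025, power=0.9):
    """n for one-sided test of H0: p = p0 vs H1: p = p1 (> p0)."""
    za = NormalDist().inv_cdf(1 - alpha)
    zb = NormalDist().inv_cdf(power)
    num = za * sqrt(p0 * (1 - p0)) + zb * sqrt(p1 * (1 - p1))
    return ceil((num / (p1 - p0)) ** 2)

# E.g. ruling out 80% concordance when the device truly achieves 90%.
n = sample_size(0.80, 0.90)
```

As expected, a larger effect (a higher true concordance) needs fewer readings to demonstrate.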
Affiliation(s)
- Lu Liu
- Department of Biostatistics and Bioinformatics, Duke University, Durham, NC 27710, USA
- Kevin J. Parker
- Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY 14627, USA
- Sin-Ho Jung
- Department of Biostatistics and Bioinformatics, Duke University, Durham, NC 27710, USA
39
Zhang Z, Mao S, Coyle J, Sejdić E. Automatic annotation of cervical vertebrae in videofluoroscopy images via deep learning. Med Image Anal 2021; 74:102218. [PMID: 34487983] [DOI: 10.1016/j.media.2021.102218]
Abstract
Judging swallowing kinematic impairments via videofluoroscopy represents the gold standard for the detection and evaluation of swallowing disorders. However, the efficiency and accuracy of such biomechanical kinematic analysis vary significantly among human judges, affected mainly by their training and experience. Here, we showed that a novel machine learning algorithm can automatically detect, with high accuracy and in real time, the key anatomical points needed for a routine swallowing assessment. We trained a novel two-stage convolutional neural network to localize and measure the vertebral bodies using 1518 swallowing videofluoroscopies from 265 patients. Our network model yielded high accuracy: the mean distance between predicted points and annotations was 4.20 ± 5.54 pixels, compared with a human inter-rater error of 4.35 ± 3.12 pixels. Furthermore, 93% of predicted points were less than five pixels from annotated pixels when tested on an independent dataset from 70 subjects. Our model offers more choices for speech-language pathologists in their routine clinical swallowing assessments, as it provides an efficient and accurate method for real-time anatomical landmark localization, a task previously accomplished with a time-consuming off-line procedure.
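The accuracy figures quoted above (mean predicted-to-annotated distance and the share of points within five pixels) are straightforward to compute once predictions exist. A toy NumPy sketch, with hypothetical coordinates in place of the paper's data:

```python
# Illustrative sketch (not the paper's network): the two evaluation metrics
# used above -- mean point-to-annotation pixel distance and the fraction of
# predictions within a 5-pixel tolerance -- on made-up landmark coordinates.
import numpy as np

def landmark_errors(pred, truth, tol=5.0):
    """pred, truth: (N, 2) arrays of (x, y) pixel coordinates."""
    d = np.linalg.norm(np.asarray(pred) - np.asarray(truth), axis=1)
    return d.mean(), d.std(), (d < tol).mean()

pred  = np.array([[10.0, 12.0], [40.0, 41.0], [70.0, 75.0]])
truth = np.array([[11.0, 12.0], [40.0, 44.0], [70.0, 75.0]])
mean_d, std_d, frac_within = landmark_errors(pred, truth)
```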
Affiliation(s)
- Zhenwei Zhang
- Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, 15261, USA
- Shitong Mao
- Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, 15261, USA
- James Coyle
- Department of Communication Science and Disorders, School of Health and Rehabilitation Science, University of Pittsburgh, Pittsburgh, PA, 15261, USA
- Ervin Sejdić
- The Edward S. Rogers Department of Electrical and Computer Engineering, Faculty of Applied Science and Engineering, University of Toronto, Toronto, Ontario, Canada; North York General Hospital, Toronto, Ontario, Canada
40
Malignant and nonmalignant classification of breast lesions in mammograms using convolutional neural networks. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102954]
41
Ferjaoui R, Cherni MA, Boujnah S, Kraiem NEH, Kraiem T. Machine learning for evolutive lymphoma and residual masses recognition in whole body diffusion weighted magnetic resonance images. Comput Methods Programs Biomed 2021; 209:106320. [PMID: 34390938] [DOI: 10.1016/j.cmpb.2021.106320]
Abstract
BACKGROUND After the treatment of patients with malignant lymphoma, lesions may persist that must be labeled either as evolutive lymphoma requiring new treatment or as residual masses. In this work, we present a machine learning-based computer-aided diagnosis (CAD) system applied to whole-body diffusion-weighted magnetic resonance images. METHODS The database consists of a total of 1005 MRI images with evolutive lymphoma and residual masses. More specifically, we propose a novel approach that leverages: (1) the complementarity of the functional and anatomical criteria of MRI images through a fusion step based on the discrete wavelet transform (DWT); (2) the automatic segmentation, localization, and enumeration of lesions using the Chan-Vese algorithm; (3) the generation of the parametric image containing the apparent diffusion coefficient values, known as the ADC map; (4) feature selection through the application of sequential forward selection (SFS), entropy, symmetric uncertainty, and gain ratio algorithms to 72 extracted features; and (5) the classification of lesions using five well-known supervised machine learning algorithms: the back-propagation artificial neural network (ANN), the support vector machine (SVM), K-nearest neighbours (K-NN), the relevance vector machine (RVM), and the random forest (RF), compared against deep learning based on a convolutional neural network (CNN). The classification was evaluated on 335 DW-MR images, with 80% used for training and the remaining 20% for testing. RESULTS Among the five classifiers, the back-propagation 3-9-1 ANN model showed a slight superiority, reaching 96.5% accuracy. In addition, we compared the proposed method to five other works from the literature; it gives much better results in terms of SE, SP, accuracy, F1-measure, and geometric mean, reaching 96.4%, 90.9%, 95.5%, 0.97, and 91.61%, respectively. CONCLUSIONS Our initial results suggest that combining functional, anatomical, and morphological features of ROIs yields very good accuracy (97.01%) for evolutive lymphoma and residual mass recognition with the proposed back-propagation 3-9-1 ANN model, although this machine learning approach remains slightly below the deep learning CNN, which reaches 98.5%.
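Step (4), sequential forward selection, can be sketched generically with scikit-learn; the synthetic data and the K-NN wrapper below are illustrative stand-ins for the paper's 72 MRI-derived features and its classifiers:

```python
# Generic sketch of sequential forward selection (SFS): greedily add the
# feature that most improves a cross-validated wrapper model. Synthetic
# data, not the paper's DW-MRI feature set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)
sfs = SequentialFeatureSelector(KNeighborsClassifier(n_neighbors=5),
                                n_features_to_select=5, direction="forward")
sfs.fit(X, y)
selected = np.flatnonzero(sfs.get_support())  # indices of the 5 kept features
```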
Affiliation(s)
- Radhia Ferjaoui
- University of Tunis El Manar, Research Laboratory of Biophysics and Medical Technologies (LRBTM), ISTMT, Tunis, 1006, Tunisia
- Mohamed Ali Cherni
- University of Tunis, LR13 ES03 SIME Laboratory, ENSIT, Montfleury 1008, Tunisia
- Sana Boujnah
- University of Tunis El Manar, National Engineering School of Tunis, Tunisia
- Tarek Kraiem
- University of Tunis El Manar, Faculty of Medicine of Tunis, Tunis, 1007, Tunisia; University of Tunis El Manar, Research Laboratory of Biophysics and Medical Technologies (LRBTM), ISTMT, Tunis, 1006, Tunisia
42
Hajjo R, Sabbah DA, Bardaweel SK, Tropsha A. Identification of Tumor-Specific MRI Biomarkers Using Machine Learning (ML). Diagnostics (Basel) 2021; 11:742. [PMID: 33919342] [PMCID: PMC8143297] [DOI: 10.3390/diagnostics11050742]
Abstract
The identification of reliable and non-invasive oncology biomarkers remains a top priority in healthcare. Only a few biomarkers have been approved as diagnostics for cancer. The most frequently used cancer biomarkers are derived from either biological materials or imaging data. Most cancer biomarkers suffer from a lack of high specificity. However, the latest advancements in machine learning (ML) and artificial intelligence (AI) have enabled the identification of highly predictive, disease-specific biomarkers. Such biomarkers can be used to diagnose cancer patients, to predict cancer prognosis, or even to predict treatment efficacy. Herein, we provide a summary of the current status of developing and applying magnetic resonance imaging (MRI) biomarkers in cancer care. We cover all aspects of MRI biomarkers, from MRI data collection, preprocessing, and machine learning methods through a summary of the types of existing biomarkers and their clinical applications in different cancer types.
Affiliation(s)
- Rima Hajjo
- Department of Pharmacy, Faculty of Pharmacy, Al-Zaytoonah University of Jordan, P.O. Box 130, Amman 11733, Jordan
- Laboratory for Molecular Modeling, Division of Chemical Biology and Medicinal Chemistry, Eshelman School of Pharmacy, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- National Center for Epidemics and Communicable Disease Control, Amman 11118, Jordan
- Dima A. Sabbah
- Department of Pharmacy, Faculty of Pharmacy, Al-Zaytoonah University of Jordan, P.O. Box 130, Amman 11733, Jordan
- Sanaa K. Bardaweel
- Department of Pharmaceutical Sciences, School of Pharmacy, University of Jordan, Amman 11942, Jordan
- Alexander Tropsha
- Laboratory for Molecular Modeling, Division of Chemical Biology and Medicinal Chemistry, Eshelman School of Pharmacy, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
43
Chakraborty T, Banik SK, Bhadra AK, Nandi D. Dynamically learned PSO based neighborhood influenced fuzzy c-means for pre-treatment and post-treatment organ segmentation from CT images. Comput Methods Programs Biomed 2021; 202:105971. [PMID: 33611030] [DOI: 10.1016/j.cmpb.2021.105971]
Abstract
BACKGROUND AND OBJECTIVE The accurate segmentation of pre-treatment and post-treatment organs is always perceived as a challenging task in the medical image analysis field. Especially in situations where the available data are limited, researchers are compelled to design unsupervised models for segmentation. In this paper, we propose a novel dynamically learned particle swarm optimization based neighborhood influenced fuzzy c-means (DLPSO-NIFCM) clustering method, an unsupervised learning model, for solving pre-treatment and post-treatment organ segmentation problems. The proposed segmentation technique has been successfully applied to segment the liver from abdominal computed tomography (CT) images and the lung parenchyma from lung CT images. METHODOLOGY In the proposed method, we formulate a primary convex objective function by considering the membership value of a pixel as well as the memberships of its neighboring pixels. We then apply a new algebraic transformation to the primary objective function to design a new, more suitable objective function without losing the convexity of the primary one. This new objective function is truly compatible with hybridization with any heuristic search technique. In this work, we propose a dynamically learned PSO to obtain the initial cluster centroids from the final objective function. Finally, we use a graph-based isolation mechanism to refine the segmentation results. RESULTS AND CONCLUSION This hybrid method, along with the restructured single-variable objective function of the distance, leads to accurate clustering results with shorter convergence time than state-of-the-art methods. The segmentation results, obtained through several experiments with real CT images, are encouraging. The numerical values of different performance metrics obtained over the same dataset confirm that the proposed algorithm outperforms state-of-the-art methods. Hence, the proposed method may be considered a promising tool for clustering and CT image segmentation in a computer-aided diagnosis (CAD) system.
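The base algorithm that DLPSO-NIFCM extends is plain fuzzy c-means. A minimal 1-D NumPy sketch with a simple deterministic initialisation; the paper's neighborhood term, PSO-learned centroids, and graph-based refinement are omitted:

```python
# Minimal plain fuzzy c-means (FCM) on 1-D intensities, the base algorithm
# the proposed DLPSO-NIFCM builds on. Alternates the standard membership
# and centroid updates for fuzzifier m.
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=50):
    # deterministic init: spread initial centers across the data range
    centers = np.linspace(X.min(), X.max(), c)
    U = np.full((c, X.size), 1.0 / c)
    for _ in range(iters):
        d = np.abs(X[None, :] - centers[:, None]) + 1e-12  # (c, n) distances
        U = 1.0 / d ** (2.0 / (m - 1.0))
        U /= U.sum(axis=0)                                 # memberships sum to 1
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1)                # fuzzy-weighted means
    return centers, U

X = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])  # two obvious 1-D clusters
centers, U = fuzzy_cmeans(X)
```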
Affiliation(s)
- Tiyasa Chakraborty
- Department of Computer Science and Engineering, National Institute of Technology Durgapur, India
- Samiran Kumar Banik
- Department of Computer Science and Engineering, National Institute of Technology Durgapur, India
- Debashis Nandi
- Department of Computer Science and Engineering, National Institute of Technology Durgapur, India
44
Automated Spleen Injury Detection Using 3D Active Contours and Machine Learning. Entropy 2021; 23:e23040382. [PMID: 33804831] [PMCID: PMC8063804] [DOI: 10.3390/e23040382]
Abstract
The spleen is one of the most frequently injured organs in blunt abdominal trauma. Computed tomography (CT) is the imaging modality of choice to assess patients with blunt spleen trauma, which may include lacerations, subcapsular or parenchymal hematomas, active hemorrhage, and vascular injuries. While computer-assisted diagnosis systems exist for other conditions assessed using CT scans, the current method to detect spleen injuries involves the manual review of scans by radiologists, which is a time-consuming and repetitive process. In this study, we propose an automated spleen injury detection method using machine learning. CT scans from patients experiencing traumatic injuries were collected from Michigan Medicine and the Crash Injury Research Engineering Network (CIREN) dataset. Ninety-nine scans of healthy and lacerated spleens were split into disjoint training and test sets, with random forest (RF), naive Bayes, SVM, k-nearest neighbors (k-NN) ensemble, and subspace discriminant ensemble models trained via 5-fold cross validation. Of these models, random forest performed the best, achieving an Area Under the receiver operating characteristic Curve (AUC) of 0.91 and an F1 score of 0.80 on the test set. These results suggest that an automated, quantitative assessment of traumatic spleen injury has the potential to enable faster triage and improve patient outcomes.
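The evaluation protocol described above (a random forest under 5-fold cross-validation, scored by AUC and F1) can be sketched with scikit-learn; the data below are synthetic stand-ins for the Michigan Medicine / CIREN CT features:

```python
# Sketch of the reported protocol: random forest, 5-fold cross-validation,
# AUC and F1 scoring. Synthetic 99-sample data mirrors the study's scan
# count but carries no clinical meaning.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

X, y = make_classification(n_samples=99, n_features=10, random_state=0)
cv = cross_validate(RandomForestClassifier(random_state=0), X, y,
                    cv=5, scoring=("roc_auc", "f1"))
mean_auc = cv["test_roc_auc"].mean()
mean_f1 = cv["test_f1"].mean()
```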
45
Cha DI, Kang TW, Min JH, Joo I, Sinn DH, Ha SY, Kim K, Lee G, Yi J. Deep learning-based automated quantification of the hepatorenal index for evaluation of fatty liver by ultrasonography. Ultrasonography 2021; 40:565-574. [PMID: 33966363] [PMCID: PMC8446496] [DOI: 10.14366/usg.20179]
Abstract
PURPOSE The aim of this study was to develop and validate a fully-automatic quantification of the hepatorenal index (HRI) calculated by a deep convolutional neural network (DCNN) comparable to the interpretations of radiologists experienced in ultrasound (US) imaging. METHODS In this retrospective analysis, DCNN-based organ segmentation with Gaussian mixture modeling for automated quantification of the HRI was developed using abdominal US images from a previous study. For validation, 294 patients who underwent abdominal US examination before living-donor liver transplantation were selected. Interobserver agreement for the measured brightness of the liver and kidney and the calculated HRI were analyzed between two board-certified radiologists and DCNN using intraclass correlation coefficients (ICCs). RESULTS Most patients had normal (n=95) or mild (n=198) fatty liver. The ICCs of hepatic and renal brightness measurements and the calculated HRI between the two radiologists were 0.892 (95% confidence interval [CI], 0.866 to 0.913), 0.898 (95% CI, 0.873 to 0.918), and 0.681 (95% CI, 0.615 to 0.738) for the first session and 0.920 (95% CI, 0.901 to 0.936), 0.874 (95% CI, 0.844 to 0.898), and 0.579 (95% CI, 0.497 to 0.650) for the second session, respectively; the results ranged from moderate to excellent agreement. Using the same task, the ICCs of the hepatic and renal measurements and the calculated HRI between the average values of the two radiologists and DCNN were 0.919 (95% CI, 0.899 to 0.935), 0.916 (95% CI, 0.895 to 0.932), and 0.734 (95% CI, 0.676 to 0.782), respectively, showing high to excellent agreement. CONCLUSION Automated quantification of HRI using DCNN can yield HRI measurements similar to those obtained by experienced radiologists in patients with normal or mild fatty liver.
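The quantity being automated is simple once regions of interest are in hand: the hepatorenal index is the ratio of mean liver brightness to mean renal-cortex brightness. A toy sketch with hypothetical ROI pixel values standing in for the DCNN segmentation; the cut-off mentioned in the comment is a common literature convention, not a value from this paper:

```python
# Toy sketch of the hepatorenal index (HRI): mean liver echogenicity over
# mean renal-cortex echogenicity, computed here from made-up ROI pixel
# arrays rather than DCNN-segmented ultrasound images.
import numpy as np

def hepatorenal_index(liver_roi, kidney_roi):
    """HRI above roughly 1.3-1.5 is often taken to suggest hepatic
    steatosis (an assumed literature convention, not from the paper)."""
    return float(np.mean(liver_roi) / np.mean(kidney_roi))

liver  = np.array([120, 130, 125, 128])  # brighter liver parenchyma
kidney = np.array([80, 78, 82, 80])
hri = hepatorenal_index(liver, kidney)
```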
Affiliation(s)
- Dong Ik Cha
- Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
- Tae Wook Kang
- Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
- Ji Hye Min
- Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
- Ijin Joo
- Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
- Dong Hyun Sinn
- Department of Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
- Sang Yun Ha
- Department of Pathology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
- Kyunga Kim
- Biomedical Statistics Center, Research Institute for Future Medicine, Samsung Medical Center, Seoul, Korea
- Gunwoo Lee
- Medical Imaging R&D Group, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seoul, Korea
- Jonghyon Yi
- Medical Imaging R&D Group, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seoul, Korea
46
Blaivas L, Blaivas M. Are Convolutional Neural Networks Trained on ImageNet Images Wearing Rose-Colored Glasses? A Quantitative Comparison of ImageNet, Computed Tomographic, Magnetic Resonance, Chest X-Ray, and Point-of-Care Ultrasound Images for Quality. J Ultrasound Med 2021; 40:377-383. [PMID: 32757235] [DOI: 10.1002/jum.15413]
Abstract
OBJECTIVES Deep learning for medical imaging analysis uses convolutional neural networks pretrained on ImageNet (Stanford Vision Lab, Stanford, CA). Little is known about how such color- and scene-rich standard training images compare quantitatively to medical images. We sought to quantitatively compare ImageNet images to point-of-care ultrasound (POCUS), computed tomographic (CT), magnetic resonance (MR), and chest x-ray (CXR) images. METHODS Using a quantitative image quality assessment technique (Blind/Referenceless Image Spatial Quality Evaluator), we compared images based on pixel complexity, relationships, variation, and distinguishing features. We compared 5500 ImageNet images to 2700 CXR, 2300 CT, 1800 MR, and 18,000 POCUS images. Image quality results ranged from 0 to 100 (worst). A 1-way analysis of variance was performed, and the standardized mean-difference effect size value (d) was calculated. RESULTS ImageNet images showed the best image quality rating of 21.7 (95% confidence interval [CI], 0.41) except for CXR at 13.2 (95% CI, 0.28), followed by CT at 35.1 (95% CI, 0.79), MR at 31.6 (95% CI, 0.75), and POCUS at 56.6 (95% CI, 0.21). The differences between ImageNet and all of the medical images were statistically significant (P ≤ .000001). The greatest difference in image quality was between ImageNet and POCUS (d = 2.38). CONCLUSIONS Point-of-care ultrasound (US) quality is significantly different from that of ImageNet and other medical images. This brings considerable implications for convolutional neural network training with medical images for various applications, which may be even more significant in the case of US images. Ultrasound deep learning developers should consider pretraining networks from scratch on US images, as training techniques used for CT, CXR, and MR images may not apply to US.
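The statistics reported above (a one-way ANOVA across image groups plus a standardized mean-difference effect size, Cohen's d) can be reproduced in outline with SciPy; the score samples below are synthetic stand-ins for real BRISQUE outputs:

```python
# Sketch of the reported analysis: one-way ANOVA plus Cohen's d on toy
# image-quality scores drawn around the paper's reported means (lower
# score = better quality); not real BRISQUE measurements.
import numpy as np
from scipy.stats import f_oneway

def cohens_d(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                     / (len(a) + len(b) - 2))
    return (b.mean() - a.mean()) / pooled

rng = np.random.default_rng(0)
imagenet = rng.normal(21.7, 5, 200)  # toy scores near the reported means
pocus = rng.normal(56.6, 5, 200)
F, p = f_oneway(imagenet, pocus)
d = cohens_d(imagenet, pocus)
```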
Affiliation(s)
- Laura Blaivas
- Michigan State University, East Lansing, Michigan, USA
- Michael Blaivas
- Department of Emergency Medicine, University of South Carolina School of Medicine, Columbia, South Carolina, USA
- St Francis Hospital, Columbus, Georgia, USA
47
Islam KT, Wijewickrema S, O'Leary S. A deep learning based framework for the registration of three dimensional multi-modal medical images of the head. Sci Rep 2021; 11:1860. [PMID: 33479305] [PMCID: PMC7820610] [DOI: 10.1038/s41598-021-81044-7]
Abstract
Image registration is a fundamental task in image analysis in which the transform that moves the coordinate system of one image to another is calculated. Registration of multi-modal medical images has important implications for clinical diagnosis, treatment planning, and image-guided surgery, as it provides the means of bringing together complementary information obtained from different image modalities. However, since different image modalities have different properties due to their different acquisition methods, it remains a challenging task to find a fast and accurate match between multi-modal images. Furthermore, due to reasons such as ethical issues and the need for human expert intervention, it is difficult to collect a large database of labelled multi-modal medical images. In addition, manual input is required to determine the fixed and moving images as input to registration algorithms. In this paper, we address these issues and introduce a registration framework that (1) creates synthetic data to augment existing datasets, (2) generates ground truth data to be used in the training and testing of algorithms, (3) registers multi-modal images in an accurate and fast manner using a combination of deep learning and conventional machine learning methods, and (4) automatically classifies the image modality so that the process of registration can be fully automated. We validate the performance of the proposed framework on CT and MRI images of the head obtained from a publicly available registration database.
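Ideas (1) and (2), generating ground-truth registration data synthetically, can be sketched by applying a known rigid transform to a fixed image so that the transform parameters themselves serve as the label. A toy SciPy version, not the paper's pipeline:

```python
# Toy sketch of synthetic registration ground truth: warp a fixed image by
# a known rotation and translation; the known parameters become the label
# a registration model can be trained and tested against.
import numpy as np
from scipy.ndimage import rotate, shift

def make_pair(fixed, angle_deg, dy, dx):
    """Return (moving image, ground-truth transform parameters)."""
    moving = rotate(fixed, angle_deg, reshape=False, order=1)
    moving = shift(moving, (dy, dx), order=1)
    return moving, {"angle": angle_deg, "dy": dy, "dx": dx}

fixed = np.zeros((64, 64))
fixed[20:40, 20:40] = 1.0  # a simple square "anatomy"
moving, gt = make_pair(fixed, angle_deg=10.0, dy=3.0, dx=-2.0)
```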
Affiliation(s)
- Kh Tohidul Islam
- Department of Surgery (Otolaryngology), Faculty of Medicine, Dentistry and Health Sciences, University of Melbourne, Melbourne, VIC, 3010, Australia
- Sudanthi Wijewickrema
- Department of Surgery (Otolaryngology), Faculty of Medicine, Dentistry and Health Sciences, University of Melbourne, Melbourne, VIC, 3010, Australia
- Stephen O'Leary
- Department of Surgery (Otolaryngology), Faculty of Medicine, Dentistry and Health Sciences, University of Melbourne, Melbourne, VIC, 3010, Australia
48
Drukker K, Yan P, Sibley A, Wang G. Biomedical imaging and analysis through deep learning. Artif Intell Med 2021. [DOI: 10.1016/b978-0-12-821259-2.00004-1]
49
Bowes MA, Kacena K, Alabas OA, Brett AD, Dube B, Bodick N, Conaghan PG. Machine-learning, MRI bone shape and important clinical outcomes in osteoarthritis: data from the Osteoarthritis Initiative. Ann Rheum Dis 2020; 80:502-508. [PMID: 33188042] [PMCID: PMC7958089] [DOI: 10.1136/annrheumdis-2020-217160]
Abstract
Objectives Osteoarthritis (OA) structural status is imperfectly classified using radiographic assessment. Statistical shape modelling (SSM), a form of machine learning, provides precise quantification of a characteristic 3D OA bone shape. We aimed to determine the benefits of this novel measure of OA status for assessing risks of clinically important outcomes. Methods The study used 4796 individuals from the Osteoarthritis Initiative cohort. SSM-derived femur bone shape (B-score) was measured from all 9433 baseline knee MRIs. We examined the relationship between B-score, radiographic Kellgren-Lawrence grade (KLG) and current and future pain and function as well as total knee replacement (TKR) up to 8 years. Results B-score repeatability supported 40 discrete grades. KLG and B-score were both associated with risk of current and future pain, functional limitation and TKR; logistic regression curves were similar. However, each KLG included a wide range of B-scores. For example, for KLG3, risk of pain was 34.4 (95% CI 31.7 to 37.0)%, but B-scores within KLG3 knees ranged from 0 to 6; for B-score 0, risk was 17.0 (16.1 to 17.9)% while for B-score 6, it was 52.1 (48.8 to 55.4)%. For TKR, KLG3 risk was 15.3 (13.3 to 17.3)%; while B-score 0 had negligible risk, B-score 6 risk was 35.6 (31.8 to 39.6)%. Age, sex and body mass index had negligible effects on association between B-score and symptoms. Conclusions B-score provides reader-independent quantification using a single time-point, providing unambiguous OA status with defined clinical risks across the whole range of disease including pre-radiographic OA. B-score heralds a step-change in OA stratification for interventions and improved personalised assessment, analogous to the T-score in osteoporosis.
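The risk-curve analysis described above amounts to logistic regression of a binary outcome (for example, TKR) on the discrete B-score grade. A sketch with fabricated data standing in for the Osteoarthritis Initiative cohort:

```python
# Sketch of the risk-curve idea: fit logistic regression of a binary
# outcome on a discrete 0-6 shape grade, using fabricated data in which
# risk rises with grade, then read off risk at selected grades.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
b_score = rng.integers(0, 7, size=500).reshape(-1, 1)  # grades 0..6
p_true = 1 / (1 + np.exp(-(b_score.ravel() - 4)))      # risk rises with grade
outcome = rng.random(500) < p_true

model = LogisticRegression().fit(b_score, outcome)
risk_at = {b: model.predict_proba([[b]])[0, 1] for b in (0, 3, 6)}
```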
Affiliation(s)
- Oras A Alabas
- Leeds Institute of Rheumatic and Musculoskeletal Medicine, University of Leeds, and NIHR Leeds Biomedical Research Centre, University of Leeds School of Medicine, Leeds, UK
- Bright Dube
- Leeds Institute of Rheumatic and Musculoskeletal Medicine, University of Leeds, and NIHR Leeds Biomedical Research Centre, University of Leeds School of Medicine, Leeds, UK
- Neil Bodick
- Clinical Research and Medical Affairs, Flexion Therapeutics, Inc., Burlington, Massachusetts, USA
- Philip G Conaghan
- Leeds Institute of Rheumatic and Musculoskeletal Medicine, University of Leeds, and NIHR Leeds Biomedical Research Centre, University of Leeds School of Medicine, Leeds, UK
50
Abstract
This paper presents a review of deep learning (DL)-based medical image registration methods. We summarized the latest developments and applications of DL-based registration methods in the medical field. These methods were classified into seven categories according to their methods, functions and popularity. A detailed review of each category was presented, highlighting important contributions and identifying specific challenges. A short assessment was presented following the detailed review of each category to summarize its achievements and future potential. We provided a comprehensive comparison among DL-based methods for lung and brain registration using benchmark datasets. Lastly, we analyzed the statistics of all the cited works from various aspects, revealing the popularity and future trend of DL-based medical image registration.
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America