201
Bouget D, Pedersen A, Jakola AS, Kavouridis V, Emblem KE, Eijgelaar RS, Kommers I, Ardon H, Barkhof F, Bello L, Berger MS, Conti Nibali M, Furtner J, Hervey-Jumper S, Idema AJS, Kiesel B, Kloet A, Mandonnet E, Müller DMJ, Robe PA, Rossi M, Sciortino T, Van den Brink WA, Wagemakers M, Widhalm G, Witte MG, Zwinderman AH, De Witt Hamer PC, Solheim O, Reinertsen I. Preoperative Brain Tumor Imaging: Models and Software for Segmentation and Standardized Reporting. Front Neurol 2022; 13:932219. [PMID: 35968292] [PMCID: PMC9364874] [DOI: 10.3389/fneur.2022.932219] [Citation(s) in RCA: 9] [Received: 04/29/2022] [Accepted: 06/23/2022]
Abstract
For patients with brain tumors, prognosis estimation and treatment decisions are made by a multidisciplinary team based on a set of preoperative MR scans. Currently, the lack of standardized and automatic methods for tumor detection and generation of clinical reports incorporating a wide range of tumor characteristics represents a major hurdle. In this study, we investigate the most common brain tumor types: glioblastomas, lower grade gliomas, meningiomas, and metastases, through four cohorts of up to 4,000 patients. Tumor segmentation models were trained using the AGU-Net architecture with different preprocessing steps and protocols. Segmentation performance was assessed in depth using a wide range of voxel-wise and patient-wise metrics covering volume, distance, and probabilistic aspects. Finally, two software solutions have been developed, enabling easy use of the trained models and standardized generation of clinical reports: Raidionics and Raidionics-Slicer. Segmentation performance was quite homogeneous across the four brain tumor types, with an average true positive Dice ranging between 80 and 90%, patient-wise recall between 88 and 98%, and patient-wise precision around 95%. In conjunction with Dice, the most relevant of the other metrics identified were the relative absolute volume difference, the variation of information, and the Hausdorff, Mahalanobis, and object average symmetric surface distances. With our Raidionics software, running on a desktop computer with CPU support, tumor segmentation can be performed in 16-54 s depending on the dimensions of the MRI volume. For the generation of a standardized clinical report, including the tumor segmentation and feature computation, 5-15 min are necessary. All trained models have been made open-access together with the source code for both software solutions and the validation metrics computation.
In the future, a method to convert results from a set of metrics into a final single score would be highly desirable for easier ranking across trained models. In addition, an automatic classification of the brain tumor type would be necessary to replace manual user input. Finally, the inclusion of post-operative segmentation in both software solutions will be key for generating complete post-operative standardized clinical reports.
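As an illustrative aside to the metrics discussed in this abstract, the voxel-wise Dice score and the relative absolute volume difference can each be computed from two binary masks in a few lines. This is a minimal sketch under our own naming; it is not taken from the Raidionics code base:

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Voxel-wise Dice overlap between two binary masks (1.0 = perfect match)."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def ravd(pred: np.ndarray, gt: np.ndarray) -> float:
    """Relative absolute volume difference: |V_pred - V_gt| / V_gt."""
    return abs(int(pred.sum()) - int(gt.sum())) / int(gt.sum())

# Toy 1D "volumes" standing in for 3D masks:
pred = np.array([0, 1, 1, 0], dtype=bool)
gt = np.array([0, 1, 0, 0], dtype=bool)
# intersection = 1 voxel, mask sizes 2 and 1 -> Dice = 2/3; RAVD = |2 - 1| / 1 = 1.0
```

The same functions apply unchanged to real 3D segmentation masks, since the reductions are over all voxels regardless of array shape.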
Affiliation(s)
- David Bouget
  - Department of Health Research, SINTEF Digital, Trondheim, Norway
- André Pedersen
  - Department of Health Research, SINTEF Digital, Trondheim, Norway
  - Department of Clinical and Molecular Medicine, Norwegian University of Science and Technology, Trondheim, Norway
  - Clinic of Surgery, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
- Asgeir S. Jakola
  - Department of Neurosurgery, Sahlgrenska University Hospital, Gothenburg, Sweden
  - Department of Clinical Neuroscience, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Vasileios Kavouridis
  - Department of Neurosurgery, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
- Kyrre E. Emblem
  - Division of Radiology and Nuclear Medicine, Department of Physics and Computational Radiology, Oslo University Hospital, Oslo, Norway
- Roelant S. Eijgelaar
  - Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, Amsterdam, Netherlands
  - Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, Amsterdam, Netherlands
- Ivar Kommers
  - Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, Amsterdam, Netherlands
  - Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, Amsterdam, Netherlands
- Hilko Ardon
  - Department of Neurosurgery, Twee Steden Hospital, Tilburg, Netherlands
- Frederik Barkhof
  - Department of Radiology and Nuclear Medicine, Amsterdam University Medical Centers, Vrije Universiteit, Amsterdam, Netherlands
  - Institutes of Neurology and Healthcare Engineering, University College London, London, United Kingdom
- Lorenzo Bello
  - Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Humanitas Research Hospital, Università degli Studi di Milano, Milan, Italy
- Mitchel S. Berger
  - Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, United States
- Marco Conti Nibali
  - Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Humanitas Research Hospital, Università degli Studi di Milano, Milan, Italy
- Julia Furtner
  - Department of Biomedical Imaging and Image-Guided Therapy, Medical University Vienna, Wien, Austria
- Shawn Hervey-Jumper
  - Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, United States
- Barbara Kiesel
  - Department of Neurosurgery, Medical University Vienna, Wien, Austria
- Alfred Kloet
  - Department of Neurosurgery, Haaglanden Medical Center, The Hague, Netherlands
- Domenique M. J. Müller
  - Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, Amsterdam, Netherlands
  - Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, Amsterdam, Netherlands
- Pierre A. Robe
  - Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, Netherlands
- Marco Rossi
  - Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Humanitas Research Hospital, Università degli Studi di Milano, Milan, Italy
- Tommaso Sciortino
  - Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Humanitas Research Hospital, Università degli Studi di Milano, Milan, Italy
- Michiel Wagemakers
  - Department of Neurosurgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Georg Widhalm
  - Department of Neurosurgery, Medical University Vienna, Wien, Austria
- Marnix G. Witte
  - Department of Radiation Oncology, Netherlands Cancer Institute, Amsterdam, Netherlands
- Aeilko H. Zwinderman
  - Department of Clinical Epidemiology and Biostatistics, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, Netherlands
- Philip C. De Witt Hamer
  - Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, Amsterdam, Netherlands
  - Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, Amsterdam, Netherlands
- Ole Solheim
  - Department of Neurosurgery, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
  - Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology, Trondheim, Norway
- Ingerid Reinertsen
  - Department of Health Research, SINTEF Digital, Trondheim, Norway
  - Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
202
A Deep Learning Method for Early Detection of Diabetic Foot Using Decision Fusion and Thermal Images. Appl Sci (Basel) 2022. [DOI: 10.3390/app12157524] [Citation(s) in RCA: 7]
Abstract
Diabetes mellitus (DM) is one of the major causes of death worldwide and leads to complications such as diabetic foot ulcers (DFU). Improper or delayed management of a diabetic foot can result in amputation of the patient's foot. Early DFU symptoms can be detected using thermal imaging with a computer-assisted classifier. A previous study of DFU detection using thermal images achieved only 97% accuracy, leaving room for improvement. This article proposes a novel framework for DFU classification based on thermal imaging using deep neural networks and decision fusion, where decision fusion combines the classification results of parallel classifiers. We used the convolutional neural network (CNN) models ShuffleNet and MobileNetV2 as the baseline classifiers. In developing the classifier model, MobileNetV2 and ShuffleNet were first trained on plantar thermogram datasets. The classification results of the two models were then fused using a novel decision fusion method to increase the accuracy rate. The proposed framework achieved 100% accuracy in classifying the DFU thermal images into binary positive and negative classes. The accuracy of the proposed Decision Fusion (DF) was about 3.4% higher than that of the baseline ShuffleNet and MobileNetV2. Overall, the proposed framework outperformed both state-of-the-art deep learning and traditional machine-learning-based classifiers.
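As an aside, the simplest form of decision fusion over two parallel classifiers is soft voting: a weighted average of their class-probability vectors followed by an argmax. The paper's fusion rule is described as novel and is not reproduced here; this sketch, with hypothetical names, shows only the plain weighted-averaging variant:

```python
import numpy as np

def fuse_decisions(p_a, p_b, w=0.5):
    """Soft-vote fusion: weighted average of two classifiers' class-probability
    vectors, followed by argmax over the fused probabilities."""
    fused = w * np.asarray(p_a, dtype=float) + (1 - w) * np.asarray(p_b, dtype=float)
    return int(np.argmax(fused)), fused

# One model leans "negative" ([0.6, 0.4]), the other "positive" ([0.3, 0.7]);
# equal-weight fusion gives [0.45, 0.55], so the fused decision is class 1.
label, probs = fuse_decisions([0.6, 0.4], [0.3, 0.7])
```

In practice the weight `w` would be tuned on a validation set, or replaced by a learned combination rule.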
203
An Efficient Multi-Scale Convolutional Neural Network Based Multi-Class Brain MRI Classification for SaMD. Tomography 2022; 8:1905-1927. [PMID: 35894026] [PMCID: PMC9330870] [DOI: 10.3390/tomography8040161] [Citation(s) in RCA: 5] [Received: 04/10/2022] [Revised: 06/28/2022] [Accepted: 07/13/2022]
Abstract
A brain tumor is a growth of abnormal cells in brain tissue with a high mortality rate; diagnosis therefore requires high precision, as a minor error of judgment can have severe consequences. Magnetic Resonance Imaging (MRI) serves as a non-invasive tool to detect the presence of a tumor. However, Rician noise is inevitably instilled during the image acquisition process, which degrades observation and interferes with treatment. Computer-Aided Diagnosis (CAD) systems can perform early diagnosis of the disease, potentially increasing the chances of survival and lessening the need for an expert to analyze the MRIs. Convolutional Neural Networks (CNN) have proven very effective for tumor detection in brain MRIs. There have been multiple studies dedicated to brain tumor classification; however, these techniques neither evaluate the impact of Rician noise on state-of-the-art deep learning techniques nor consider the effect of scale, even though the size, location, shape, and boundaries of tumors vary from image to image. Moreover, transfer-learning-based pre-trained models such as AlexNet and ResNet have been used for brain tumor detection, but these architectures have many trainable parameters and hence a high computational cost. This study proposes a two-fold solution: (a) a Multi-Scale CNN (MSCNN) architecture to develop a robust classification model for brain tumor diagnosis, and (b) minimizing the impact of Rician noise on the performance of the MSCNN. The proposed model is a multi-class classification solution that classifies MRIs into glioma, meningioma, pituitary, and non-tumor. The core objective is to develop a robust model that enhances the accuracy and efficiency of existing tumor detection systems.
Furthermore, MRIs are denoised using a Fuzzy Similarity-based Non-Local Means (FSNLM) filter to improve the classification results. Different evaluation metrics, such as accuracy, precision, recall, specificity, and F1-score, are employed to compare the performance of the proposed multi-scale CNN against state-of-the-art techniques such as AlexNet and ResNet. In addition, the trainable and non-trainable parameters of the proposed model and the existing techniques are compared to evaluate computational efficiency. The experimental results show that the proposed multi-scale CNN model outperforms AlexNet and ResNet in terms of accuracy and efficiency, at a lower computational cost. The proposed MCNN2 achieved an accuracy of 91.2% and an F1-score of 91%, significantly higher than the existing AlexNet and ResNet techniques. These findings suggest that the proposed model can effectively and efficiently facilitate clinical research and practice for MRI classification.
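For reference, every evaluation metric named in this abstract derives from the four binary confusion counts; a minimal sketch (function name is ours, not from the paper):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall (sensitivity), specificity and F1-score
    computed from the entries of a binary confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, specificity, f1

# A balanced toy confusion matrix (9 TP, 1 FP, 1 FN, 9 TN) scores 0.9 on all five.
metrics = classification_metrics(9, 1, 1, 9)
```

For multi-class problems such as glioma/meningioma/pituitary/non-tumor, these are typically computed per class (one-vs-rest) and then macro-averaged.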
204
Akinyelu AA, Zaccagna F, Grist JT, Castelli M, Rundo L. Brain Tumor Diagnosis Using Machine Learning, Convolutional Neural Networks, Capsule Neural Networks and Vision Transformers, Applied to MRI: A Survey. J Imaging 2022; 8:205. [PMID: 35893083] [PMCID: PMC9331677] [DOI: 10.3390/jimaging8080205] [Citation(s) in RCA: 24] [Received: 04/10/2022] [Revised: 06/20/2022] [Accepted: 07/12/2022]
Abstract
Management of brain tumors is based on clinical and radiological information with presumed grade dictating treatment. Hence, a non-invasive assessment of tumor grade is of paramount importance to choose the best treatment plan. Convolutional Neural Networks (CNNs) represent one of the effective Deep Learning (DL)-based techniques that have been used for brain tumor diagnosis. However, they are unable to handle input modifications effectively. Capsule neural networks (CapsNets) are a novel type of machine learning (ML) architecture that was recently developed to address the drawbacks of CNNs. CapsNets are resistant to rotations and affine translations, which is beneficial when processing medical imaging datasets. Moreover, Vision Transformers (ViT)-based solutions have been very recently proposed to address the issue of long-range dependency in CNNs. This survey provides a comprehensive overview of brain tumor classification and segmentation techniques, with a focus on ML-based, CNN-based, CapsNet-based, and ViT-based techniques. The survey highlights the fundamental contributions of recent studies and the performance of state-of-the-art techniques. Moreover, we present an in-depth discussion of crucial issues and open challenges. We also identify some key limitations and promising future research directions. We envisage that this survey shall serve as a good springboard for further study.
Affiliation(s)
- Andronicus A. Akinyelu
  - NOVA Information Management School (NOVA IMS), Universidade NOVA de Lisboa, Campus de Campolide, 1070-312 Lisboa, Portugal
  - Department of Computer Science and Informatics, University of the Free State, Phuthaditjhaba 9866, South Africa
- Fulvio Zaccagna
  - Department of Biomedical and Neuromotor Sciences, Alma Mater Studiorum-University of Bologna, 40138 Bologna, Italy
  - IRCCS Istituto delle Scienze Neurologiche di Bologna, Functional and Molecular Neuroimaging Unit, 40139 Bologna, Italy
- James T. Grist
  - Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, UK
  - Department of Radiology, Oxford University Hospitals NHS Foundation Trust, Oxford OX3 9DU, UK
  - Oxford Centre for Clinical Magnetic Research Imaging, University of Oxford, Oxford OX3 9DU, UK
  - Institute of Cancer and Genomic Sciences, University of Birmingham, Birmingham B15 2SY, UK
- Mauro Castelli
  - NOVA Information Management School (NOVA IMS), Universidade NOVA de Lisboa, Campus de Campolide, 1070-312 Lisboa, Portugal
- Leonardo Rundo
  - Department of Information and Electrical Engineering and Applied Mathematics, University of Salerno, 84084 Fisciano, Italy
205
Meshkov A, Khafizov A, Buzmakov A, Bukreeva I, Junemann O, Fratini M, Cedola A, Chukalina M, Yamaev A, Gigli G, Wilde F, Longo E, Asadchikov V, Saveliev S, Nikolaev D. Deep Learning-Based Segmentation of Post-Mortem Human’s Olfactory Bulb Structures in X-ray Phase-Contrast Tomography. Tomography 2022; 8:1854-1868. [PMID: 35894021] [PMCID: PMC9331385] [DOI: 10.3390/tomography8040156] [Citation(s) in RCA: 0] [Received: 05/02/2022] [Revised: 07/12/2022] [Accepted: 07/18/2022]
Abstract
The human olfactory bulb (OB) has a laminar structure. Segregating cell populations in OB images poses a significant challenge because of the indistinct boundaries of the layers. Standard 3D visualization tools usually have a low resolution and cannot provide the high accuracy required for morphometric analysis. X-ray phase-contrast tomography (XPCT) offers sufficient resolution and contrast to identify single cells in large volumes of the brain. The numerous microanatomical structures detectable in XPCT images of the OB, however, greatly complicate the manual delineation of OB neuronal cell layers. To address the challenging problem of fully automated segmentation of XPCT images of human OB morphological layers, we propose a new pipeline for tomographic data processing. Convolutional neural networks (CNN) were used to segment XPCT images of native, unstained human OB. Virtual segmentation of the whole OB and accurate delineation of each layer in a healthy, non-demented OB are mandatory first steps for assessing OB morphological changes in smell impairment research. In this framework, we propose an effective tool that could help shed light on OB layer-specific degeneration in patients with olfactory disorders.
Affiliation(s)
- Alexandr Meshkov
  - The Moscow Institute of Physics and Technology, 9 Institutskiy per., 141701 Moscow, Russia
- Anvar Khafizov
  - FSRC «Crystallography and Photonics» RAS, Leninskiy pr. 59, 119333 Moscow, Russia
  - Croc Inc. Company, Volochayevskaya Ulitsa 5/3, 111033 Moscow, Russia
- Alexey Buzmakov
  - FSRC «Crystallography and Photonics» RAS, Leninskiy pr. 59, 119333 Moscow, Russia
  - Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences, Vavilova Str. 44b2, 119333 Moscow, Russia
- Inna Bukreeva
  - Institute of Nanotechnology—CNR, c/o Department of Physics, La Sapienza University, Piazzale Aldo Moro 5, 00185 Rome, Italy
  - P.N. Lebedev Physical Institute, RAS, Leninskiy pr. 53, 119991 Moscow, Russia
- Olga Junemann
  - FSSI Research Institute of Human Morphology, Tsyurupy Str. 3, 117418 Moscow, Russia
- Michela Fratini
  - Institute of Nanotechnology—CNR, c/o Department of Physics, La Sapienza University, Piazzale Aldo Moro 5, 00185 Rome, Italy
  - IRCCS Santa Lucia Foundation, Via Ardeatina 306/354, 00142 Rome, Italy
- Alessia Cedola
  - Institute of Nanotechnology—CNR, c/o Department of Physics, La Sapienza University, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Marina Chukalina (corresponding author)
  - FSRC «Crystallography and Photonics» RAS, Leninskiy pr. 59, 119333 Moscow, Russia
  - Smart Engines Service LLC, 60-Letiya Oktyabrya pr. 9, 117312 Moscow, Russia
  - Institute for Information Transmission Problems of Russian Academy of Sciences (Kharkevich Institute), Bol’shoi Karetnii per. 19 Str. 1, 127051 Moscow, Russia
- Andrei Yamaev
  - Smart Engines Service LLC, 60-Letiya Oktyabrya pr. 9, 117312 Moscow, Russia
- Giuseppe Gigli
  - Institute of Nanotechnology—CNR, c/o Campus Ecotekne—Universita del Salento, Via Monteroni, 73100 Lecce, Italy
- Fabian Wilde
  - Institute of Materials Research, Helmholtz-Zentrum Hereon, Max-Planck-Str. 1, 21502 Geesthacht, Germany
- Elena Longo
  - Elettra-Sincrotrone Trieste S.C.p.A., 34149 Trieste, Italy
- Victor Asadchikov
  - FSRC «Crystallography and Photonics» RAS, Leninskiy pr. 59, 119333 Moscow, Russia
- Sergey Saveliev
  - FSSI Research Institute of Human Morphology, Tsyurupy Str. 3, 117418 Moscow, Russia
- Dmitry Nikolaev
  - Smart Engines Service LLC, 60-Letiya Oktyabrya pr. 9, 117312 Moscow, Russia
  - Institute for Information Transmission Problems of Russian Academy of Sciences (Kharkevich Institute), Bol’shoi Karetnii per. 19 Str. 1, 127051 Moscow, Russia
206
Almalki YE, Ali MU, Ahmed W, Kallu KD, Zafar A, Alduraibi SK, Irfan M, Basha MAA, Alshamrani HA, Alduraibi AK. Robust Gaussian and Nonlinear Hybrid Invariant Clustered Features Aided Approach for Speeded Brain Tumor Diagnosis. Life (Basel) 2022; 12:1084. [PMID: 35888172] [PMCID: PMC9315657] [DOI: 10.3390/life12071084] [Citation(s) in RCA: 0] [Received: 06/20/2022] [Revised: 07/14/2022] [Accepted: 07/17/2022]
Abstract
Brain tumors reduce life expectancy because no cure exists. Moreover, their diagnosis involves complex and costly procedures such as magnetic resonance imaging (MRI) and lengthy, careful examination to determine severity. Timely diagnosis of brain tumors in their early stages, however, may save a patient's life. This work therefore combines MRI with a machine learning approach to diagnose brain tumor severity (glioma, meningioma, no tumor, and pituitary) in a timely manner. Gaussian and nonlinear scale features are extracted from the MRIs because of their robustness to rotation, scaling, and noise, issues common to image-processing features such as texture, local binary patterns, and histograms of oriented gradients. To capture small details, each MRI is broken down into many small 8 × 8-pixel patches. To counter memory issues, the strongest features are selected by variance and grouped into 400 Gaussian and 400 nonlinear scale features, which are hybridized into one feature vector per MRI. Finally, classical machine learning classifiers are used to assess the performance of the proposed hybrid feature vector. An openly available online brain MRI dataset is used to validate the proposed approach. The results show that the support-vector-machine-trained model has the highest classification accuracy, 95.33%, with a low computational time. The results are also compared with the recent literature, showing that the proposed model can help clinicians with the early diagnosis of brain tumors.
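As an aside, the 8 × 8 patch extraction and variance-based selection described in this abstract can be sketched with NumPy. This is a hypothetical illustration under our own naming, not the authors' code:

```python
import numpy as np

def split_into_blocks(img, size=8):
    """Tile a 2D image into non-overlapping size x size blocks."""
    h, w = img.shape
    img = img[: h - h % size, : w - w % size]  # crop to a multiple of the block size
    return (img.reshape(img.shape[0] // size, size, -1, size)
               .swapaxes(1, 2)
               .reshape(-1, size, size))

def top_k_by_variance(blocks, k):
    """Keep the k blocks with the highest intensity variance (the 'strongest')."""
    var = blocks.reshape(len(blocks), -1).var(axis=1)
    return blocks[np.argsort(var)[::-1][:k]]

# A 16 x 16 image tiles into four 8 x 8 blocks; keep the two most variable ones.
img = np.arange(256, dtype=float).reshape(16, 16)
blocks = split_into_blocks(img)
strongest = top_k_by_variance(blocks, 2)
```

The actual Gaussian and nonlinear scale-space features are then computed per patch; only the selection step is shown here.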
Affiliation(s)
- Yassir Edrees Almalki
  - Division of Radiology, Department of Internal Medicine, Medical College, Najran University, Najran 61441, Saudi Arabia
- Muhammad Umair Ali
  - Department of Unmanned Vehicle Engineering, Sejong University, Seoul 05006, Korea
- Waqas Ahmed
  - Secret Minds, Entrepreneurial Organization, Islamabad 44000, Pakistan
- Karam Dad Kallu
  - Department of Robotics and Intelligent Machine Engineering (RIME), School of Mechanical and Manufacturing Engineering (SMME), National University of Sciences and Technology (NUST), H-12, Islamabad 44000, Pakistan
- Amad Zafar (corresponding author)
  - Department of Electrical Engineering, The Ibadat International University, Islamabad 54590, Pakistan
- Sharifa Khalid Alduraibi
  - Department of Radiology, College of Medicine, Qassim University, Buraidah 52571, Saudi Arabia
- Muhammad Irfan
  - Electrical Engineering Department, College of Engineering, Najran University, Najran 61441, Saudi Arabia
- Hassan A. Alshamrani
  - Radiological Sciences Department, College of Applied Medical Sciences, Najran University, Najran 61441, Saudi Arabia
- Alaa Khalid Alduraibi
  - Department of Radiology, College of Medicine, Qassim University, Buraidah 52571, Saudi Arabia
207
A novel automatic approach for glioma segmentation. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07583-w] [Citation(s) in RCA: 0]
208
Gryska E, Björkman-Burtscher I, Jakola AS, Dunås T, Schneiderman J, Heckemann RA. Deep learning for automatic brain tumour segmentation on MRI: evaluation of recommended reporting criteria via a reproduction and replication study. BMJ Open 2022; 12:e059000. [PMID: 35851016] [PMCID: PMC9297223] [DOI: 10.1136/bmjopen-2021-059000] [Citation(s) in RCA: 0]
Abstract
OBJECTIVES To determine the reproducibility and replicability of studies that develop and validate segmentation methods for brain tumours on MRI and that follow established reproducibility criteria, and to evaluate whether the reporting guidelines are sufficient. METHODS Two eligible validation studies of distinct deep learning (DL) methods were identified. We implemented the methods using published information and retraced the reported validation steps. We evaluated to what extent the description of the methods enabled reproduction of the results. We further attempted to replicate the reported findings on a clinical set of images acquired at our institute, consisting of high-grade glioma (HGG), low-grade glioma (LGG), and meningioma (MNG) cases. RESULTS We successfully reproduced one of the two tumour segmentation methods. Insufficient description of the preprocessing pipeline and our inability to replicate the pipeline resulted in failure to reproduce the second method. The replication of the first method showed promising results in terms of Dice similarity coefficient (DSC) and sensitivity (Sen) on HGG cases (DSC=0.77, Sen=0.88) and LGG cases (DSC=0.73, Sen=0.83); however, poorer performance was observed for MNG cases (DSC=0.61, Sen=0.71). Preprocessing errors were identified that contributed to low quantitative scores in some cases. CONCLUSIONS Established reproducibility criteria do not sufficiently emphasise description of the preprocessing pipeline. Discrepancies in preprocessing that result from insufficient reporting are likely to influence segmentation outcomes and hinder clinical utilisation. A detailed description of the whole processing chain, including preprocessing, is thus necessary to obtain stronger evidence of the generalisability of DL-based brain tumour segmentation methods and to facilitate translation of the methods into clinical practice.
Affiliation(s)
- Emilia Gryska
  - MedTech West at Sahlgrenska University Hospital, University of Gothenburg, Gothenburg, Sweden
  - Department of Medical Radiation Sciences, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Isabella Björkman-Burtscher
  - Department of Radiology, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
  - Department of Radiology, Sahlgrenska University Hospital, Gothenburg, Sweden
- Asgeir Store Jakola
  - Department of Clinical Neuroscience, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
  - Department of Neurosurgery, Sahlgrenska University Hospital, Gothenburg, Sweden
- Tora Dunås
  - Department of Clinical Neuroscience, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
  - Department of Psychiatry and Neurochemistry, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Justin Schneiderman
  - MedTech West at Sahlgrenska University Hospital, University of Gothenburg, Gothenburg, Sweden
  - Department of Clinical Neuroscience, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Rolf A Heckemann
  - MedTech West at Sahlgrenska University Hospital, University of Gothenburg, Gothenburg, Sweden
  - Department of Medical Radiation Sciences, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
209
Jin L, Chen Q, Shi A, Wang X, Ren R, Zheng A, Song P, Zhang Y, Wang N, Wang C, Wang N, Cheng X, Wang S, Ge H. Deep Learning for Automated Contouring of Gross Tumor Volumes in Esophageal Cancer. Front Oncol 2022; 12:892171. [PMID: 35924169] [PMCID: PMC9339638] [DOI: 10.3389/fonc.2022.892171] [Citation(s) in RCA: 7] [Received: 03/08/2022] [Accepted: 06/21/2022]
Abstract
Purpose The aim of this study was to propose and evaluate a novel mixed three-dimensional (3D) V-Net and two-dimensional (2D) U-Net architecture (VUMix-Net) for fully automatic and accurate delineation of the gross tumor volume (GTV) in esophageal cancer (EC). Methods We collected the computed tomography (CT) scans of 215 EC patients. 3D V-Net, 2D U-Net, and VUMix-Net were developed and applied to delineate GTVs. The Dice similarity coefficient (DSC) and 95th-percentile Hausdorff distance (95HD) were used as quantitative metrics to evaluate the performance of the three models on ECs from different segments. The CT data of 20 patients were randomly selected to provide ground truth (GT) masks, and the corresponding delineation results were generated by artificial intelligence (AI). Score differences between the two groups (GT versus AI) and the evaluation consistency were compared. Results Across all patients, there was a significant difference in the 2D DSCs from U-Net, V-Net, and VUMix-Net (p=0.01). In addition, VUMix-Net achieved better 3D-DSC and 95HD values. There was a significant difference among the 3D-DSC (mean ± STD) and 95HD values for upper-, middle-, and lower-segment EC (p<0.001), with the middle-segment values being the best. In middle-segment EC, VUMix-Net achieved the highest 2D-DSC values (p<0.001) and the lowest 95HD values (p=0.044). Conclusion The new model (VUMix-Net) showed certain advantages in delineating the GTVs of EC. Additionally, it can generate EC GTVs that meet clinical requirements and match the quality of human-generated contours. The system performed best on middle-segment ECs.
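As an aside, the 95th-percentile Hausdorff distance (95HD) used as a contour metric here can be sketched for two point sets, such as the surface voxels of a GT and an AI contour. The function name and point-set representation are illustrative, not the study's implementation:

```python
import numpy as np

def hd95(points_a, points_b):
    """Symmetric 95th-percentile Hausdorff distance between two point sets
    (e.g. the surface voxels of two contours), in coordinate units."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return max(np.percentile(d.min(axis=1), 95),   # each point of A -> nearest in B
               np.percentile(d.min(axis=0), 95))   # each point of B -> nearest in A
```

Taking the 95th percentile rather than the maximum makes the metric robust to a few outlier points on either contour, which is why it is preferred over the plain Hausdorff distance in contouring studies.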
Affiliation(s)
- Linzhi Jin
  - Department of Radiation Oncology, The Affiliated Cancer Hospital of Zhengzhou University & Henan Cancer Hospital, Zhengzhou, China
  - Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Qi Chen
  - Department of Research and Development, MedMind Technology Co, Ltd., Beijing, China
- Aiwei Shi
  - Department of Research and Development, MedMind Technology Co, Ltd., Beijing, China
- Xiaomin Wang
  - Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Runchuan Ren
  - Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Anping Zheng
  - Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Ping Song
  - Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Yaowen Zhang
  - Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Nan Wang
  - Department of Radiation Oncology, The Affiliated Cancer Hospital of Zhengzhou University & Henan Cancer Hospital, Zhengzhou, China
- Chenyu Wang
  - Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Nengchao Wang
  - Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Xinyu Cheng
  - Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Shaobin Wang
  - Department of Research and Development, MedMind Technology Co, Ltd., Beijing, China
- Hong Ge (corresponding author)
  - Department of Radiation Oncology, The Affiliated Cancer Hospital of Zhengzhou University & Henan Cancer Hospital, Zhengzhou, China
210
|
Ali MB, Bai X, Gu IYH, Berger MS, Jakola AS. A Feasibility Study on Deep Learning Based Brain Tumor Segmentation Using 2D Ellipse Box Areas. SENSORS (BASEL, SWITZERLAND) 2022; 22:5292. [PMID: 35890972 PMCID: PMC9317052 DOI: 10.3390/s22145292] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/11/2022] [Revised: 07/11/2022] [Accepted: 07/13/2022] [Indexed: 05/03/2023]
Abstract
In most deep learning-based brain tumor segmentation methods, training the deep network requires annotated tumor areas. However, accurate tumor annotation puts high demands on medical personnel. The aim of this study is to train a deep network for segmentation by using ellipse box areas surrounding the tumors. In the proposed method, the deep network is trained by using a large number of unannotated tumor images with foreground (FG) and background (BG) ellipse box areas surrounding the tumor and background, and a small number of patients (<20) with annotated tumors. The training is conducted by initial training on two ellipse boxes on unannotated MRIs, followed by refined training on a small number of annotated MRIs. We use a multi-stream U-Net, an extension of the conventional U-Net, for conducting our experiments. This enables the use of complementary information from multi-modality (e.g., T1, T1ce, T2, and FLAIR) MRIs. To test the feasibility of the proposed approach, experiments and evaluation were conducted on two datasets for glioma segmentation. Segmentation performance on the test sets is then compared with that of the same network trained entirely on annotated MRIs. Our experiments show that the proposed method obtained good tumor segmentation results on the test sets: the Dice score on tumor areas is (0.8407, 0.9104), and segmentation accuracy on tumor areas is (83.88%, 88.47%) for the MICCAI BraTS’17 and US datasets, respectively. Compared with the network trained on all annotated tumors, the drop in segmentation performance with the proposed approach is (0.0594, 0.0159) in Dice score and (8.78%, 2.61%) in segmented tumor accuracy for the MICCAI and US test sets, which is relatively small.
Our case studies have demonstrated that training the network for segmentation by using ellipse box areas in place of fully annotated tumors is feasible and can be considered an alternative, trading a small drop in segmentation performance for a saving in the time medical experts spend annotating tumors.
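The weak labels described above are ellipse areas rather than voxel-accurate masks. A hypothetical sketch of how such an FG (or BG) ellipse mask could be rasterized from a bounding box; the function name and box convention are assumptions for illustration, not from the paper:

```python
def ellipse_mask(h, w, box):
    """Binary h x w mask of the ellipse inscribed in an axis-aligned box.

    `box` = (r0, c0, r1, c1) is a hypothetical tumor (FG) or background
    (BG) bounding box; the inscribed ellipse serves as a coarse label in
    place of a voxel-accurate annotation.
    """
    r0, c0, r1, c1 = box
    cy, cx = (r0 + r1) / 2, (c0 + c1) / 2      # ellipse centre
    ry, rx = (r1 - r0) / 2, (c1 - c0) / 2      # semi-axes
    return [[1 if ((y - cy) / ry) ** 2 + ((x - cx) / rx) ** 2 <= 1 else 0
             for x in range(w)] for y in range(h)]

fg = ellipse_mask(7, 7, (1, 1, 5, 5))
print(sum(map(sum, fg)))   # number of pixels given the foreground label
```

In the paper's setting such coarse masks supervise the initial training stage, before refinement on the few fully annotated cases.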
Affiliation(s)
- Muhaddisa Barat Ali
- Department of Electrical Engineering, Chalmers University of Technology, 41296 Gothenburg, Sweden; (M.B.A.); (X.B.)
- Xiaohan Bai
- Department of Electrical Engineering, Chalmers University of Technology, 41296 Gothenburg, Sweden; (M.B.A.); (X.B.)
- Irene Yu-Hua Gu
- Department of Electrical Engineering, Chalmers University of Technology, 41296 Gothenburg, Sweden; (M.B.A.); (X.B.)
- Mitchel S. Berger
- Department of Neurological Surgery, University of California San Francisco, San Francisco, CA 94143-0112, USA;
- Asgeir Store Jakola
- Department of Clinical Neuroscience, University of Gothenburg, 40530 Gothenburg, Sweden;
- Department of Neurosurgery, Sahlgrenska University Hospital, 41345 Gothenburg, Sweden
|
211
|
Application of Smooth Fuzzy Model in Image Denoising and Edge Detection. MATHEMATICS 2022. [DOI: 10.3390/math10142421] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
In this paper, the bounded variation property of fuzzy models with smooth compositions has been studied and compared with the standard fuzzy composition (e.g., min–max). Moreover, the contribution of the bounded variation of the smooth fuzzy model to noise removal and edge preservation in digital images has been investigated. Different simulations on test images have been employed to verify the results. The performance index related to the detected edges of the smooth fuzzy models in the presence of both Gaussian and impulse (salt-and-pepper) noise of different densities has been found to be higher than that of the standard well-known fuzzy models (e.g., min–max composition), which demonstrates the efficiency of smooth compositions in comparison to the standard composition.
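The standard min–max composition that serves as the baseline in this comparison can be sketched directly; this is the textbook definition in plain Python, not code from the paper:

```python
def max_min_compose(R, S):
    """Standard max-min composition of fuzzy relations R (m x n) and
    S (n x p): (R o S)[i][j] = max over k of min(R[i][k], S[k][j])."""
    n, p = len(S), len(S[0])
    return [[max(min(R[i][k], S[k][j]) for k in range(n))
             for j in range(p)] for i in range(len(R))]

R = [[0.2, 0.8],
     [0.6, 0.4]]
S = [[0.5, 0.9],
     [0.7, 0.1]]
print(max_min_compose(R, S))   # [[0.7, 0.2], [0.5, 0.6]]
```

Smooth compositions replace the non-differentiable min and max with smooth surrogates, which is what gives the model the bounded variation property studied in the paper.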
|
212
|
Zeng Y, Long C, Zhao W, Liu J. Predicting the Severity of Neurological Impairment Caused by Ischemic Stroke Using Deep Learning Based on Diffusion-Weighted Images. J Clin Med 2022; 11:jcm11144008. [PMID: 35887776 PMCID: PMC9325315 DOI: 10.3390/jcm11144008] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2022] [Revised: 06/23/2022] [Accepted: 07/05/2022] [Indexed: 02/01/2023] Open
Abstract
Purpose: To develop a preliminary deep learning model that uses diffusion-weighted imaging (DWI) images to classify the severity of neurological impairment caused by ischemic stroke. Materials and Methods: This retrospective study included 851 ischemic stroke patients (711 patients in the training set and 140 patients in the test set). The patients’ NIHSS scores, which reflect the severity of neurological impairment, were reviewed upon admission and on Day 7 of hospitalization and were classified into two stages (stage 1 for NIHSS < 5 and stage 2 for NIHSS ≥ 5). A 3D-CNN was trained to predict the stage of NIHSS based on differently preprocessed DWI images. The performance in predicting the severity of anterior and posterior circulation stroke was also investigated. The AUC, specificity, and sensitivity were calculated to evaluate the performance of the model. Results: Our proposed model obtained better performance in predicting the NIHSS stage on Day 7 of hospitalization than that at admission (best AUC 0.895 vs. 0.846). Model D, trained with DWI images normalized with z-score and resized to 256 × 256 × 64 voxels, achieved the best AUC of 0.846 in predicting the NIHSS stage at admission. Model E, trained with DWI images normalized with maximum−minimum and resized to 128 × 128 × 32 voxels, achieved the best AUC of 0.895 in predicting the NIHSS stage on Day 7 of hospitalization. Our model also showed promising performance in predicting the NIHSS stage on Day 7 of hospitalization for anterior and posterior circulation stroke, with the best AUCs of 0.905 and 0.903, respectively. Conclusions: Our proposed 3D-CNN model can effectively predict the neurological severity of ischemic stroke using DWI images and performs better in predicting the NIHSS stage on Day 7 of hospitalization. The model also obtained promising performance in subgroup analysis, which can potentially help clinical decision making.
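Models D and E differ in their intensity preprocessing (z-score versus maximum–minimum normalization). The two schemes are sketched below in plain Python for a flat list of voxel intensities; the resizing step is omitted and the data are invented for illustration:

```python
def zscore(xs):
    """Z-score normalization (Model D's scheme): zero mean, unit variance."""
    mu = sum(xs) / len(xs)
    sd = (sum((x - mu) ** 2 for x in xs) / len(xs)) ** 0.5
    return [(x - mu) / sd for x in xs]

def minmax(xs):
    """Maximum-minimum normalization (Model E's scheme): rescale to [0, 1]."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

vox = [10.0, 20.0, 30.0, 40.0]   # toy intensity values
print(minmax(vox))               # endpoints map to 0.0 and 1.0
print(zscore(vox))               # mean 0, unit standard deviation
```

Z-score preserves relative contrast around the mean, while min–max bounds every volume to the same range; the study found the better choice differed by prediction time point.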
Affiliation(s)
- Ying Zeng
- Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha 410011, China;
- Department of Radiology, Xiangtan Central Hospital, Xiangtan 411199, China
- Chen Long
- Department of Stroke Unit, Xiangtan Central Hospital, Xiangtan 411199, China;
- Wei Zhao
- Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha 410011, China;
- Clinical Research Center for Medical Imaging, Changsha 410011, China
- Correspondence: (W.Z.); (J.L.)
- Jun Liu
- Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha 410011, China;
- Clinical Research Center for Medical Imaging, Changsha 410011, China
- Department of Radiology Quality Control Center, Changsha 410011, China
- Correspondence: (W.Z.); (J.L.)
|
213
|
Abstract
Brain tumor segmentation is one of the most challenging problems in medical image analysis. The goal of brain tumor segmentation is to generate accurate delineation of brain tumor regions. In recent years, deep learning methods have shown promising performance in solving various computer vision problems, such as image classification, object detection and semantic segmentation. A number of deep learning based methods have been applied to brain tumor segmentation and achieved promising results. Considering the remarkable breakthroughs made by state-of-the-art technologies, we provide this survey with a comprehensive study of recently developed deep learning based brain tumor segmentation techniques. More than 150 scientific papers are selected and discussed in this survey, extensively covering technical aspects such as network architecture design, segmentation under imbalanced conditions, and multi-modality processes. We also provide insightful discussions for future development directions.
|
214
|
Retinal Glaucoma Public Datasets: What Do We Have and What Is Missing? J Clin Med 2022; 11:jcm11133850. [PMID: 35807135 PMCID: PMC9267177 DOI: 10.3390/jcm11133850] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2022] [Revised: 06/29/2022] [Accepted: 06/30/2022] [Indexed: 11/16/2022] Open
Abstract
Public databases for glaucoma studies contain color images of the retina, emphasizing the optic papilla. These databases are intended for research and for standardized automated methodologies such as those using deep learning techniques. These techniques are used to solve complex problems in medical imaging, particularly in the automated screening of glaucomatous disease. The development of deep learning techniques has demonstrated potential for implementing protocols for large-scale glaucoma screening in the population, eliminating possible diagnostic doubts among specialists and benefiting early treatment to delay the onset of blindness. However, the images are obtained by different cameras, in distinct locations, and from various population groups, and are centered on multiple parts of the retina. Other limitations are the small amount of data and the lack of segmentation of the optic papilla and of the excavation (cup). This work is intended to offer contributions to the structure and presentation of public databases used in the automated screening of glaucomatous papillae, adding relevant information from a medical point of view. The gold standard public databases present images with segmentations of the disc and cup made by experts and a division between training and test groups, serving as a reference for use in deep learning architectures. However, the data offered are not interchangeable. The quality and presentation of images are heterogeneous. Moreover, the databases use different criteria for binary classification with and without glaucoma, do not offer simultaneous pictures of the two eyes, and do not contain elements for early diagnosis.
|
215
|
Sun H, Zhao C, Qin Y, Li C, Jia H, Yu B, Wang Z. In vivo detection of plaque erosion by intravascular optical coherence tomography using artificial intelligence. BIOMEDICAL OPTICS EXPRESS 2022; 13:3922-3938. [PMID: 35991920 PMCID: PMC9352282 DOI: 10.1364/boe.459623] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/29/2022] [Revised: 05/14/2022] [Accepted: 05/27/2022] [Indexed: 05/11/2023]
Abstract
Plaque erosion is one of the most common underlying mechanisms for acute coronary syndrome (ACS). Optical coherence tomography (OCT) allows in vivo diagnosis of plaque erosion. However, challenges remain due to high inter- and intra-observer variability. We developed an artificial intelligence method based on deep learning for fully automated detection of plaque erosion in vivo, which achieved a recall of 0.800 ± 0.175, a precision of 0.734 ± 0.254, and an area under the precision-recall curve (AUC) of 0.707. Our proposed method is in good agreement with physicians, and can help improve the clinical diagnosis of plaque erosion and the development of individualized treatment strategies for optimal management of ACS patients.
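The reported recall and precision follow the usual detection definitions. A minimal sketch with hypothetical frame counts (invented for illustration, not the study's data):

```python
def precision_recall(tp, fp, fn):
    """Frame-level precision and recall from detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# hypothetical counts for one OCT pullback: 8 erosion frames detected
# correctly, 3 false alarms, 2 erosion frames missed
p, r = precision_recall(tp=8, fp=3, fn=2)
print(round(p, 3), round(r, 3))   # 0.727 0.8
```

The area under the precision-recall curve summarizes these two quantities across all detection thresholds, which is why it suits rare-positive problems such as erosion detection.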
Affiliation(s)
- Haoyue Sun
- School of Electronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Contributed equally
- Chen Zhao
- Department of Cardiology, The 2nd Affiliated Hospital of Harbin Medical University, Harbin, China
- The Key Laboratory of Medical Ischemia, Chinese Ministry of Education, Harbin, China
- Contributed equally
- Yuhan Qin
- Department of Cardiology, The 2nd Affiliated Hospital of Harbin Medical University, Harbin, China
- The Key Laboratory of Medical Ischemia, Chinese Ministry of Education, Harbin, China
- Chao Li
- School of Electronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Haibo Jia
- Department of Cardiology, The 2nd Affiliated Hospital of Harbin Medical University, Harbin, China
- The Key Laboratory of Medical Ischemia, Chinese Ministry of Education, Harbin, China
- Bo Yu
- Department of Cardiology, The 2nd Affiliated Hospital of Harbin Medical University, Harbin, China
- The Key Laboratory of Medical Ischemia, Chinese Ministry of Education, Harbin, China
- Zhao Wang
- School of Electronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
|
216
|
Yang Y, Yan T, Jiang X, Xie R, Li C, Zhou T. MH-Net: Model-data-driven hybrid-fusion network for medical image segmentation. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.108795] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
|
217
|
Segmentation and classification of brain tumors from MRI images based on adaptive mechanisms and ELDP feature descriptor. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103704] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
|
218
|
Zhao C, Tang H, McGonigle D, He Z, Zhang C, Wang YP, Deng HW, Bober R, Zhou W. Development of an approach to extracting coronary arteries and detecting stenosis in invasive coronary angiograms. J Med Imaging (Bellingham) 2022; 9:044002. [PMID: 35875389 PMCID: PMC9295705 DOI: 10.1117/1.jmi.9.4.044002] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2021] [Accepted: 06/28/2022] [Indexed: 11/14/2022] Open
Abstract
Purpose: In stable coronary artery disease (CAD), reduction in mortality and/or myocardial infarction with revascularization over medical therapy has not been reliably achieved. Coronary arteries are usually extracted to perform stenosis detection. As such, developing accurate segmentation of vascular structures and quantification of coronary arterial stenosis in invasive coronary angiograms (ICA) is necessary. Approach: A multi-input and multiscale (MIMS) U-Net with a two-stage recurrent training strategy was proposed for the automatic vessel segmentation. The proposed model generated a refined prediction map with the following two training stages: (i) stage I coarsely segmented the major coronary arteries from preprocessed single-channel ICAs and generated the probability map of arteries; and (ii) during the stage II, a three-channel image consisting of the original preprocessed image, a generated probability map, and an edge-enhanced image generated from the preprocessed image was fed to the proposed MIMS U-Net to produce the final segmentation result. After segmentation, an arterial stenosis detection algorithm was developed to extract vascular centerlines and calculate arterial diameters to evaluate stenotic level. Results: Experimental results demonstrated that the proposed method achieved an average Dice similarity coefficient of 0.8329, an average sensitivity of 0.8281, and an average specificity of 0.9979 in our dataset with 294 ICAs obtained from 73 patients. Moreover, our stenosis detection algorithm achieved a true positive rate of 0.6668 and a positive predictive value of 0.7043. Conclusions: Our proposed approach has great promise for clinical use and could help physicians improve diagnosis and therapeutic decisions for CAD.
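After segmentation, the stenotic level is evaluated from centerline diameters. One simple way to turn a diameter profile into a percent stenosis is sketched below; the reference-diameter rule used here (mean of the largest samples as a stand-in for the healthy segment) is an assumption for illustration, not the paper's exact algorithm:

```python
def percent_stenosis(diameters, window=3):
    """Percent diameter stenosis along a vessel centerline.

    `diameters` holds the lumen diameter sampled along the extracted
    centerline. The reference diameter is taken as the mean of the
    `window` largest samples (an assumed rule standing in for the
    healthy segment), and stenosis is the relative narrowing of the
    smallest diameter with respect to that reference.
    """
    ref = sum(sorted(diameters, reverse=True)[:window]) / window
    d_min = min(diameters)
    return 100.0 * (1.0 - d_min / ref)

profile = [3.0, 3.1, 2.9, 1.5, 1.2, 1.6, 2.8, 3.0]   # toy diameter profile
print(round(percent_stenosis(profile), 1))           # severity in percent
```

A lesion would then be flagged when the percent stenosis exceeds a clinically chosen threshold.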
Affiliation(s)
- Chen Zhao
- Michigan Technological University, Department of Applied Computing, Houghton, Michigan, United States
- Haipeng Tang
- University of Southern Mississippi, School of Computing Sciences and Computer Engineering, Hattiesburg, Mississippi, United States
- Daniel McGonigle
- University of Southern Mississippi, School of Computing Sciences and Computer Engineering, Hattiesburg, Mississippi, United States
- Zhuo He
- Michigan Technological University, Department of Applied Computing, Houghton, Michigan, United States
- Chaoyang Zhang
- University of Southern Mississippi, School of Computing Sciences and Computer Engineering, Hattiesburg, Mississippi, United States
- Yu-Ping Wang
- Tulane University School of Public Health and Tropical Medicine, Tulane Center of Bioinformatics and Genomics, New Orleans, Louisiana, United States
- Hong-Wen Deng
- Tulane University School of Public Health and Tropical Medicine, Tulane Center of Bioinformatics and Genomics, New Orleans, Louisiana, United States
- Robert Bober
- Ochsner Medical Center, Department of Cardiology, New Orleans, Louisiana, United States
- Weihua Zhou
- Michigan Technological University, Department of Applied Computing, Houghton, Michigan, United States
- Michigan Technological University, Institute of Computing and Cybersystems, and Health Research Institute, Center of Biocomputing and Digital Health, Houghton, Michigan, United States
|
219
|
Optimal Superpixel Kernel-Based Kernel Low-Rank and Sparsity Representation for Brain Tumour Segmentation. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:3514988. [PMID: 35785083 PMCID: PMC9249491 DOI: 10.1155/2022/3514988] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/16/2021] [Revised: 02/27/2022] [Accepted: 05/18/2022] [Indexed: 11/18/2022]
Abstract
Given the need for quantitative measurement and 3D visualisation of brain tumours, more and more attention has been paid to the automatic segmentation of tumour regions from brain tumour magnetic resonance (MR) images. In view of the uneven grey distribution of MR images and the fuzzy boundaries of brain tumours, a representation model based on the joint constraints of kernel low-rank and sparsity (KLRR-SR) is proposed to mine the characteristics and structural prior knowledge of brain tumour image in the spectral kernel space. In addition, the optimal kernel based on superpixel uniform regions and multikernel learning (MKL) is constructed to improve the accuracy of the pairwise similarity measurement of pixels in the kernel space. By introducing the optimal kernel into KLRR-SR, the coefficient matrix can be solved, which allows brain tumour segmentation results to conform with the spatial information of the image. The experimental results demonstrate that the segmentation accuracy of the proposed method is superior to several existing methods under different indicators and that the sparsity constraint for the coefficient matrix in the kernel space, which is integrated into the kernel low-rank model, has certain effects in preserving the local structure and details of brain tumours.
|
220
|
Diagnosis and Nursing Intervention of Gynecological Ovarian Endometriosis with Magnetic Resonance Imaging under Artificial Intelligence Algorithm. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:3123310. [PMID: 35726287 PMCID: PMC9206576 DOI: 10.1155/2022/3123310] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/22/2022] [Accepted: 05/14/2022] [Indexed: 11/17/2022]
Abstract
This research aimed to study the application value of magnetic resonance imaging (MRI) diagnosis under artificial intelligence algorithms and the effect of nursing intervention on patients with gynecological ovarian endometriosis. 116 patients with ovarian endometriosis were randomly divided into a control group (routine nursing) and an experimental group (comprehensive nursing), with 58 cases in each group. The artificial intelligence fuzzy C-means (FCM) clustering algorithm was proposed and used in the MRI diagnosis of ovarian endometriosis. The application value of the FCM algorithm was evaluated through the accuracy, Dice, sensitivity, and specificity of the imaging diagnosis, and the nursing satisfaction and the incidence of adverse reactions were used to evaluate the effect of nursing intervention. The results showed that, compared with the traditional hard C-means (HCM) algorithm, the artificial intelligence FCM algorithm gave a significantly higher partition coefficient, and its partition entropy and running time were significantly reduced, with significant differences (P < 0.05). The average values of Dice, sensitivity, and specificity of patients' MRI images were 0.77, 0.73, and 0.72, respectively, when processed by the traditional HCM algorithm, while those obtained by the improved artificial intelligence FCM algorithm were 0.92, 0.90, and 0.93, respectively; all the values were significantly improved (P < 0.05). In addition, the accuracy of MRI diagnosis based on the artificial intelligence FCM algorithm was 94.32 ± 3.05%, which was significantly higher than the 81.39 ± 3.11% under the HCM algorithm (P < 0.05). The overall nursing satisfaction of the experimental group was 96.5%, which was significantly better than the 87.9% of the control group (P < 0.05).
The incidence of postoperative adverse reactions in the experimental group (7.9%) was markedly lower than that in the control group (24.1%), with a significant difference (P < 0.05). In short, MRI images under the artificial intelligence FCM algorithm could greatly improve the clinical diagnosis of ovarian endometriosis, and the comprehensive nursing intervention would also improve the prognosis and recovery of patients.
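Fuzzy C-means itself is compact enough to sketch. The following minimal one-dimensional implementation (illustrative only, not the study's code) alternates the two classic updates and reports the partition coefficient, one of the validity indices compared above; the initialization rule and toy data are assumptions:

```python
def fcm(xs, c=2, m=2.0, iters=50):
    """Minimal 1-D fuzzy C-means (illustrative sketch, not the study's code).

    Alternates the two classic updates: membership
    u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)), then center
    v_i = sum_k u_ik^m * x_k / sum_k u_ik^m. Returns the sorted centers
    and the partition coefficient (mean of squared memberships).
    """
    lo, hi = min(xs), max(xs)
    v = [lo + (hi - lo) * i / (c - 1) for i in range(c)]   # spread initial centers
    for _ in range(iters):
        u = []
        for x in xs:
            d = [abs(x - vi) + 1e-12 for vi in v]           # guard zero distance
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0)) for j in range(c))
                      for i in range(c)])
        v = [sum(u[k][i] ** m * xs[k] for k in range(len(xs))) /
             sum(u[k][i] ** m for k in range(len(xs))) for i in range(c)]
    pc = sum(u[k][i] ** 2 for k in range(len(xs)) for i in range(c)) / len(xs)
    return sorted(v), pc

data = [0.1, 0.2, 0.15, 0.9, 0.95, 1.0]   # two toy intensity clusters
centers, pc = fcm(data)
print([round(x, 2) for x in centers], round(pc, 2))
```

A partition coefficient close to 1 indicates nearly crisp memberships, which is the sense in which the study's FCM variant outperformed HCM on that index.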
|
221
|
Ali TM, Nawaz A, Ur Rehman A, Ahmad RZ, Javed AR, Gadekallu TR, Chen CL, Wu CM. A Sequential Machine Learning-cum-Attention Mechanism for Effective Segmentation of Brain Tumor. Front Oncol 2022; 12:873268. [PMID: 35719987 PMCID: PMC9202559 DOI: 10.3389/fonc.2022.873268] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2022] [Accepted: 04/18/2022] [Indexed: 12/21/2022] Open
Abstract
Magnetic resonance imaging is the most widely utilized imaging methodology, permitting radiologists to look inside the cerebrum using radio waves and magnets for tumor identification. However, identifying the tumorous and nontumorous regions is tedious and complex due to the complexity of the tumorous region. Therefore, reliable and automatic segmentation and prediction are necessary for the segmentation of brain tumors. This paper proposes a reliable and efficient neural network variant, i.e., an attention-based convolutional neural network for brain tumor segmentation. Specifically, the encoder part of the UNET is a pre-trained VGG19 network, followed by the adjacent decoder parts with an attention gate for segmentation noise induction and a denoising mechanism for avoiding overfitting. The dataset used for segmentation is BraTS’20, which comprises four different MRI modalities and one target mask file. The abovementioned algorithm resulted in a dice similarity coefficient of 0.83, 0.86, and 0.90 for enhancing, core, and whole tumors, respectively.
Affiliation(s)
- Tahir Mohammad Ali
- Department of Computer Science, GULF University for Science and Technology, Mishref, Kuwait
- Ali Nawaz
- Department of Computer Science, GULF University for Science and Technology, Mishref, Kuwait
- Attique Ur Rehman
- Department of Computer Science, GULF University for Science and Technology, Mishref, Kuwait
- Department of Software Engineering, University of Sialkot, Sialkot, Pakistan
- Rana Zeeshan Ahmad
- Department of Information Technology, University of Sialkot, Sialkot, Pakistan
- Thippa Reddy Gadekallu
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, India
- Chin-Ling Chen
- School of Information Engineering, Changchun Sci-Tech University, Changchun, China
- School of Computer and Information Engineering, Xiamen University of Technology, Xiamen, China
- Department of Computer Science and Information Engineering, Chaoyang University of Technology, Taichung, Taiwan
- Chih-Ming Wu
- School of Civil Engineering and Architecture, Xiamen University of Technology, Xiamen, China
|
222
|
Khodadadi Shoushtari F, Sina S, Dehkordi ANV. Automatic segmentation of glioblastoma multiform brain tumor in MRI images: Using Deeplabv3+ with pre-trained Resnet18 weights. Phys Med 2022; 100:51-63. [PMID: 35732092 DOI: 10.1016/j.ejmp.2022.06.007] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/19/2021] [Revised: 06/05/2022] [Accepted: 06/11/2022] [Indexed: 10/17/2022] Open
Abstract
PURPOSE To assess the effectiveness of deep learning algorithms in automated segmentation of magnetic resonance brain images for determining the enhanced tumor, peri-tumoral edema, necrotic/non-enhancing tumor, and normal tissue volumes. METHODS AND MATERIALS A new deep neural network algorithm, Deep-Net, was developed for semantic segmentation of glioblastoma tumors in MR images, using the Deeplabv3+ architecture and pre-trained Resnet18 initial weights. The MR image dataset used for training the network was taken from the BraTS 2020 training set, with the ground truth labels for different tumor subregions manually drawn by a group of expert neuroradiologists. In this work, two multi-modal MRI scans, i.e., T1ce and FLAIR, of 293 patients with high-grade glioma (HGG) were used for deep network training (Deep-Net). The performance of the network was assessed for different hyper-parameters to obtain the optimum set of parameters. The similarity scores were used for the evaluation of the optimized network. RESULTS According to the results of this study, epoch #37 is the optimum epoch, giving the best global accuracy (97.53%) and loss (0.14). The Deep-Net sensitivity in the delineation of the enhanced tumor is more than 90%. CONCLUSIONS The results indicate that Deep-Net was able to segment GBM tumors with high accuracy.
Affiliation(s)
- Sedigheh Sina
- Nuclear Engineering Department, Shiraz University, Shiraz, Iran; Radiation Research Center, Shiraz University, Shiraz, Iran
- Azimeh N V Dehkordi
- Department of Physics, Najafabad Branch, Islamic Azad University, Najafabad, Iran
|
223
|
Zhou R, Hu S, Ma B, Ma B. Automatic Segmentation of MRI of Brain Tumor Using Deep Convolutional Network. BIOMED RESEARCH INTERNATIONAL 2022; 2022:4247631. [PMID: 35757482 PMCID: PMC9217534 DOI: 10.1155/2022/4247631] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/17/2022] [Revised: 04/25/2022] [Accepted: 05/24/2022] [Indexed: 11/17/2022]
Abstract
Computer-aided multimodal magnetic resonance imaging (MRI) brain tumor segmentation has always been a significant topic in the field of medical image processing. Multimodal MRI brain tumor segmentation utilizes the characteristics of each modality in the MRI image to segment the whole tumor and tumor core regions and distinguish them from normal brain tissue. However, the grayscale similarity between brain tissues in various MRI images is very high, making it difficult to handle multimodal MRI brain tumor segmentation with traditional algorithms. Therefore, we employ deep learning to make full use of the complementary feature information between the modalities and conduct the following research: (i) build a network model suitable for brain tumor segmentation tasks based on the fully convolutional neural network framework and (ii) adopt an end-to-end training method, using two-dimensional slices of MRI images as network input data. The problem of unbalanced categories in brain tumor image data is overcome by introducing the Dice loss function into the network to calculate the training loss; at the same time, a parallel Dice loss is proposed to further improve the substructure segmentation. We propose a cascaded network model based on a fully convolutional neural network that improves segmentation accuracy for the tumor core and enhancing tumor regions and achieves good prediction results for substructure segmentation on the BraTS 2017 data set.
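The Dice loss used here to handle class imbalance can be written directly from the Dice coefficient. A minimal single-class sketch in plain Python (per-substructure losses would be computed the same way and combined; the smoothing constant is a common convention, not necessarily the paper's):

```python
def dice_loss(probs, target, eps=1.0):
    """Soft Dice loss for one class: 1 - 2|P intersect T| / (|P| + |T|).

    `probs` are network outputs in [0, 1] and `target` is the binary
    mask, both flattened; `eps` smooths the ratio so empty masks do not
    divide by zero. Minimizing this loss maximizes overlap directly,
    which is what makes it robust to foreground/background imbalance.
    """
    inter = sum(p * t for p, t in zip(probs, target))
    return 1.0 - (2.0 * inter + eps) / (sum(probs) + sum(target) + eps)

pred = [0.9, 0.8, 0.1, 0.2]   # predicted foreground probabilities
mask = [1, 1, 0, 0]           # ground-truth labels
print(round(dice_loss(pred, mask), 3))   # 0.12
```

Because the loss is a ratio of overlap to total size, a tiny foreground class contributes as strongly as a large background, unlike plain cross-entropy.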
Affiliation(s)
- Runwei Zhou
- Department of Radiology, Wenzhou Seventh People's Hospital, Ouhai District, Wenzhou City, Zhejiang Province 325006, China
- Shijun Hu
- Department of Radiology, Wenzhou Seventh People's Hospital, Ouhai District, Wenzhou City, Zhejiang Province 325006, China
- Baoxiang Ma
- Department of Radiology, Wenzhou Seventh People's Hospital, Ouhai District, Wenzhou City, Zhejiang Province 325006, China
- Bangcheng Ma
- Department of Radiology, Wenzhou Seventh People's Hospital, Ouhai District, Wenzhou City, Zhejiang Province 325006, China
224
De Asis-Cruz J, Krishnamurthy D, Jose C, Cook KM, Limperopoulos C. FetalGAN: Automated Segmentation of Fetal Functional Brain MRI Using Deep Generative Adversarial Learning and Multi-Scale 3D U-Net. Front Neurosci 2022; 16:887634. [PMID: 35747213 PMCID: PMC9209698 DOI: 10.3389/fnins.2022.887634] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2022] [Accepted: 05/16/2022] [Indexed: 01/02/2023] Open
Abstract
An important step in the preprocessing of resting state functional magnetic resonance images (rs-fMRI) is the separation of brain from non-brain voxels. Widely used imaging tools such as FSL's BET2 and AFNI's 3dSkullStrip accomplish this task effectively in children and adults. In fetal functional brain imaging, however, the presence of maternal tissue around the brain, coupled with the non-standard position of the fetal head, limits the usefulness of these tools. Accurate brain masks are thus generated manually, a time-consuming and tedious process that slows down preprocessing of fetal rs-fMRI. Recently, deep learning-based segmentation models such as convolutional neural networks (CNNs) have been increasingly used for automated segmentation of medical images, including the fetal brain. Here, we propose a computationally efficient end-to-end generative adversarial neural network (GAN) for segmenting the fetal brain. This method, which we call FetalGAN, yielded whole brain masks that closely approximated the manually labeled ground truth. FetalGAN performed better than the 3D U-Net model and BET2: FetalGAN, Dice score = 0.973 ± 0.013, precision = 0.977 ± 0.015; 3D U-Net, Dice score = 0.954 ± 0.054, precision = 0.967 ± 0.037; BET2, Dice score = 0.856 ± 0.084, precision = 0.758 ± 0.113. FetalGAN was also faster than 3D U-Net and the manual method (7.35 s vs. 10.25 s vs. ∼5 min/volume). To the best of our knowledge, this is the first successful implementation of a 3D CNN with a GAN on fetal fMRI brain images and represents a significant advance in fully automating the processing of rs-fMRI images.
Affiliation(s)
- Josepheen De Asis-Cruz
- Developing Brain Institute, Department of Diagnostic Radiology, Children’s National Hospital, Washington, DC, United States
- Dhineshvikram Krishnamurthy
- Developing Brain Institute, Department of Diagnostic Radiology, Children’s National Hospital, Washington, DC, United States
- Chris Jose
- Department of Computer Science, University of Maryland, College Park, MD, United States
- Kevin M. Cook
- Developing Brain Institute, Department of Diagnostic Radiology, Children’s National Hospital, Washington, DC, United States
- Catherine Limperopoulos
- Developing Brain Institute, Department of Diagnostic Radiology, Children’s National Hospital, Washington, DC, United States
225
Deep pattern-based tumor segmentation in brain MRIs. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07422-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
226
Seg Net and Salp Water Optimization-driven Deep Belief network for segmentation and classification of brain tumor. Gene Expr Patterns 2022; 45:119248. [PMID: 35667619 DOI: 10.1016/j.gep.2022.119248] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2021] [Revised: 03/19/2022] [Accepted: 05/28/2022] [Indexed: 11/21/2022]
Abstract
Classification of brain tumors in Magnetic Resonance Imaging (MRI) images is widely used in treatment planning, early diagnosis, and outcome evaluation. Classifying and diagnosing tumors across many images is very difficult, so an automatic prediction strategy is essential for classifying brain tumors as malignant, core, edema, or benign. In this research, a novel approach using a Salp Water Optimization-based Deep Belief Network (SWO-based DBN) is introduced to classify brain tumors. At the initial stage, the input image is pre-processed to eradicate artifacts. Following pre-processing, segmentation is executed by SegNet, where SegNet is trained using the proposed SWO. Moreover, Convolutional Neural Network (CNN) features are employed to extract features for further processing. Finally, the introduced SWO-based DBN technique efficiently categorizes the brain tumor based on the extracted features, and the output of the introduced SegNet + SWO-based DBN is used for brain tumor segmentation and classification. The developed technique produced better results, with highest accuracy of 0.933, specificity of 0.880, and sensitivity of 0.938 on the BraTS 2018 dataset, and accuracy of 0.921, specificity of 0.853, and sensitivity of 0.928 on the BraTS 2020 dataset.
227
Cao J, Lai H, Zhang J, Zhang J, Xie T, Wang H, Bu J, Feng Q, Huang M. 2D-3D cascade network for glioma segmentation in multisequence MRI images using multiscale information. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 221:106894. [PMID: 35613498 DOI: 10.1016/j.cmpb.2022.106894] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/01/2021] [Revised: 04/21/2022] [Accepted: 05/14/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE Glioma segmentation is an important procedure for the treatment planning and follow-up evaluation of patients with glioma. UNet-based networks are widely used in medical image segmentation tasks and have achieved state-of-the-art performance. However, context information along the third dimension is ignored in 2D convolutions, whereas the difference between z-axis and in-plane resolutions is large in 3D convolutions. Moreover, the original UNet structure cannot capture fine details because of the reduced resolution of feature maps near the bottleneck layers. METHODS To address these issues, a novel 2D-3D cascade network with a multiscale information module is proposed for the multiclass segmentation of gliomas in multisequence MRI images. First, a 2D network is applied to fully exploit potential intra-slice features. A variational autoencoder module is incorporated into 2D DenseUNet to regularize a shared encoder, extract useful information, and represent glioma heterogeneity. Second, we integrate 3D DenseUNet with the 2D network in cascade mode to extract useful inter-slice features and alleviate the influence of the large difference between z-axis and in-plane resolutions. Moreover, a multiscale information module is used in the 2D and 3D networks to further capture the fine details of gliomas. Finally, the whole 2D-3D cascade network is trained in an end-to-end manner, where the intra-slice and inter-slice features are fused and optimized jointly to take full advantage of 3D image information. RESULTS Our method is evaluated on publicly available and clinical datasets and achieves competitive performance on both. CONCLUSIONS These results indicate that the proposed method may be a useful tool for glioma segmentation.
Affiliation(s)
- Jianyun Cao
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Zhujiang Hospital, Southern Medical University, Guangzhou 510282, China
- Haoran Lai
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Jiawei Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Junde Zhang
- Zhujiang Hospital, Southern Medical University, Guangzhou 510282, China
- Tao Xie
- Zhujiang Hospital, Southern Medical University, Guangzhou 510282, China
- Heqing Wang
- Zhujiang Hospital, Southern Medical University, Guangzhou 510282, China
- Junguo Bu
- Zhujiang Hospital, Southern Medical University, Guangzhou 510282, China
- Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
- Meiyan Huang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
228
Mukherkjee D, Saha P, Kaplun D, Sinitca A, Sarkar R. Brain tumor image generation using an aggregation of GAN models with style transfer. Sci Rep 2022; 12:9141. [PMID: 35650252 PMCID: PMC9160042 DOI: 10.1038/s41598-022-12646-y] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2021] [Accepted: 05/11/2022] [Indexed: 12/21/2022] Open
Abstract
In the recent past, deep learning-based models have achieved tremendous success in computer vision-related tasks with the help of large-scale annotated datasets. An interesting application of deep learning is synthetic data generation, especially in the domain of medical image analysis. The need for such a task arises from the scarcity of original data; class imbalance is another reason for applying data augmentation techniques. Generative Adversarial Networks (GANs) are beneficial for synthetic image generation in various fields. However, a stand-alone GAN may capture only localized features in the latent representation of an image, whereas a combination of different GANs can capture distributed features. To this end, we have proposed AGGrGAN, an aggregation of three base GAN models: two variants of the Deep Convolutional Generative Adversarial Network (DCGAN) and a Wasserstein GAN (WGAN), to generate synthetic MRI scans of brain tumors. Further, we have applied the style transfer technique to enhance image resemblance. Our proposed model efficiently overcomes the limitation of data unavailability and can capture the information variance in multiple representations of the raw images. We have conducted all experiments on two publicly available datasets: the brain tumor dataset and the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2020 dataset. Results show that the proposed model can generate fine-quality images with maximum Structural Similarity Index Measure (SSIM) scores of 0.57 and 0.83 on these two datasets.
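The SSIM scores this entry reports are typically computed with local Gaussian windows; as a minimal sketch of what the measure captures, the single-window (global) simplification below applies the standard SSIM formula over the whole image. The implementation details (window, constants) are assumptions, not the authors' code:

```python
import numpy as np


def ssim_global(x, y, data_range=1.0):
    """Structural Similarity between two grayscale images of equal
    shape, computed over the whole image as one window.  The usual
    implementation averages this quantity over local 11x11 windows;
    c1 and c2 are the conventional stabilizing constants."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()  # cross-covariance of the two images
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

Identical images score 1.0, while unrelated noise images score near 0, which is why SSIM is a natural resemblance measure for synthetic-vs-real MRI slices.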
Affiliation(s)
- Debadyuti Mukherkjee
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, 700032, India
- Pritam Saha
- Department of Electrical Engineering, Jadavpur University, Kolkata, 700032, India
- Dmitry Kaplun
- Department of Automation and Control Processes, Saint Petersburg Electrotechnical University "LETI", Saint Petersburg, 197022, Russian Federation
- Aleksandr Sinitca
- Department of Automation and Control Processes, Saint Petersburg Electrotechnical University "LETI", Saint Petersburg, 197022, Russian Federation
- Ram Sarkar
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, 700032, India
229
Gupta RK, Bharti S, Kunhare N, Sahu Y, Pathik N. Brain Tumor Detection and Classification Using Cycle Generative Adversarial Networks. Interdiscip Sci 2022; 14:485-502. [PMID: 35137330 DOI: 10.1007/s12539-022-00502-6] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2021] [Revised: 01/10/2022] [Accepted: 01/18/2022] [Indexed: 11/30/2022]
Abstract
Brain cancer ranks tenth on the list of leading causes of death in both men and women. Biopsy is one of the most used methods for diagnosing cancer. However, the biopsy process is quite dangerous and takes a long time to reach a decision. Furthermore, as tumors can grow quickly, non-invasive, automatic diagnostic equipment is required that can detect a tumor and its stage precisely within a few seconds. Techniques based on Machine Learning and Deep Learning (DL) for detecting and classifying cancers have gained remarkable success in recent years. This paper suggests an ensemble method for detecting and classifying a brain tumor and its stage using brain Magnetic Resonance Imaging (MRI). A modified InceptionResNetV2 pre-trained model is used for tumor detection from MRI images. After tumor detection, a combination of InceptionResNetV2 and a Random Forest Tree (RFT) is used to determine the tumor category, i.e., glioma, meningioma, or pituitary cancer. Because the dataset is small, C-GAN (Cyclic Generative Adversarial Networks) is used to increase its size. The experimental results demonstrate that the suggested tumor detection and tumor classification models achieve accuracies of 99% and 98%, respectively.
230
Zhou J, Ye J, Liang Y, Zhao J, Wu Y, Luo S, Lai X, Wang J. scSE-NL V-Net: A Brain Tumor Automatic Segmentation Method Based on Spatial and Channel "Squeeze-and-Excitation" Network With Non-local Block. Front Neurosci 2022; 16:916818. [PMID: 35712454 PMCID: PMC9197379 DOI: 10.3389/fnins.2022.916818] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2022] [Accepted: 04/27/2022] [Indexed: 11/23/2022] Open
Abstract
Intracranial tumors are commonly known as brain tumors and can be life-threatening in severe cases. Magnetic resonance imaging (MRI) is widely used in diagnosing brain tumors because it is harmless to the human body and offers high image resolution. Because brain tumors are highly heterogeneous, their appearance in MRI is exceptionally irregular. How to accurately and quickly segment brain tumor MRI images is still one of the hottest topics in the medical image analysis community. However, most currently available brain tumor segmentation algorithms still operate on two-dimensional (2D) images and cannot effectively capture the spatial dependence between features. In this study, we propose an automatic brain tumor segmentation method called scSE-NL V-Net. We use three-dimensional (3D) data as the model input and process the data with 3D convolutions to capture relationships across dimensions. Meanwhile, we adopt a non-local block as the self-attention block, which can reduce inherent image noise interference and make up for the lack of spatial dependence in plain convolutions. To improve the accuracy of convolutional neural network (CNN) image recognition, we add the "Spatial and Channel Squeeze-and-Excitation" Network (scSE-Net) to V-Net. The dataset used in this paper is from the Brain Tumor Segmentation Challenge 2020 (BraTS2020) database. On the official BraTS2020 validation set, the Dice similarity coefficient is 0.65, 0.82, and 0.76 for the enhancing tumor (ET), whole tumor (WT), and tumor core (TC), respectively. Our model can therefore serve as an aid in the diagnosis of brain tumors.
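The scSE recalibration this entry builds on can be sketched outside a training framework. Below is a NumPy forward pass of a concurrent spatial and channel squeeze-and-excitation block on a (C, H, W) feature map; the weight matrices `w1`, `w2`, `w_spatial` stand in for learned parameters, and the element-wise maximum used to combine the two paths is one of several combination rules (addition is another common choice):

```python
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def scse(x, w1, w2, w_spatial):
    """Concurrent spatial and channel squeeze-and-excitation.

    cSE path: global-average-pool the spatial dims, pass the channel
    descriptor through a two-layer bottleneck (w1, w2) with ReLU,
    and rescale each channel by a sigmoid gate.
    sSE path: a 1x1 convolution (here a per-channel weight vector)
    produces a per-pixel sigmoid gate that rescales every location.
    """
    # channel squeeze-and-excitation
    z = x.mean(axis=(1, 2))                            # (C,) pooled descriptor
    gate_c = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))     # (C,) channel gates
    cse = x * gate_c[:, None, None]
    # spatial squeeze-and-excitation
    gate_s = sigmoid(np.tensordot(w_spatial, x, axes=([0], [0])))  # (H, W)
    sse = x * gate_s[None, :, :]
    return np.maximum(cse, sse)
```

The block leaves the feature-map shape unchanged, so it can be dropped after any convolutional stage of a V-Net-style encoder-decoder.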
Affiliation(s)
- Juhua Zhou
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou, China
- Jianming Ye
- The First Affiliated Hospital, Gannan Medical University, Ganzhou, China
- Yu Liang
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou, China
- Jialu Zhao
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou, China
- Yan Wu
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou, China
- Siyuan Luo
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou, China
- Xiaobo Lai
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou, China
- Jianqing Wang
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou, China
231
Ali MB, Gu IYH, Lidemar A, Berger MS, Widhalm G, Jakola AS. Prediction of glioma-subtypes: comparison of performance on a DL classifier using bounding box areas versus annotated tumors. BMC Biomed Eng 2022; 4:4. [PMID: 35590389 PMCID: PMC9118766 DOI: 10.1186/s42490-022-00061-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2021] [Accepted: 04/07/2022] [Indexed: 11/10/2022] Open
Abstract
Background For brain tumors, identifying the molecular subtypes from magnetic resonance imaging (MRI) is desirable but remains a challenging task. Recent machine learning and deep learning (DL) approaches may help the classification/prediction of tumor subtypes through MRIs. However, most of these methods require annotated data with ground truth (GT) tumor areas manually drawn by medical experts. Manual annotation is a time-consuming process with a high demand on medical personnel. Automatic segmentation is often used as an alternative, but it does not guarantee quality and can produce improper or failed boundaries, because segmentation is an ill-defined problem and MRI acquisition parameters differ across imaging centers. Analogous to visual object tracking and classification, this paper shifts the paradigm by training a classifier using tumor bounding box areas in MR images. The aim of our study is to see whether GT tumor areas can be replaced by tumor bounding box areas (e.g. ellipse-shaped boxes) for classification without a significant drop in performance. Method In patients with diffuse gliomas, a deep learning classifier for subtype prediction was trained using tumor regions of interest (ROIs) defined by ellipse bounding boxes versus manually annotated data. Experiments were conducted on two datasets (US and TCGA) consisting of multi-modality MRI scans, where the US dataset contained patients with diffuse low-grade gliomas (dLGG) exclusively. Results Prediction rates were obtained on two test datasets: 69.86% for 1p/19q codeletion status on the US dataset and 79.50% for IDH mutation/wild-type on the TCGA dataset. Comparison with training on annotated GT tumor data showed an average degradation of 3.0% (2.92% for 1p/19q codeletion status and 3.23% for IDH genotype).
Conclusion Using tumor ROIs, i.e., ellipse bounding box tumor areas, to replace annotated GT tumor areas for training a deep learning scheme causes only a modest decline in subtype prediction performance. As more data become available, this may be a reasonable trade-off where the decline in performance is counteracted by additional data.
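The paper does not publish how its ellipse-shaped boxes are derived from the annotations; one plausible construction is to inscribe an axis-aligned ellipse in the tumor's bounding box, sketched below. The slight radius padding is an illustrative choice so that thin masks still yield a non-degenerate ellipse:

```python
import numpy as np


def ellipse_roi(mask):
    """Replace a manually annotated binary tumor mask with an
    axis-aligned ellipse fitted to the tumor's bounding box.
    This is a hypothetical reading of the paper's 'ellipse shaped
    boxes'; the authors' exact construction may differ."""
    ys, xs = np.nonzero(mask)
    cy = (ys.min() + ys.max()) / 2.0          # bounding-box center
    cx = (xs.min() + xs.max()) / 2.0
    ry = (ys.max() - ys.min()) / 2.0 + 1.0    # semi-axes, padded by 1 px
    rx = (xs.max() - xs.min()) / 2.0 + 1.0
    yy, xx = np.indices(mask.shape)
    inside = ((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2 <= 1.0
    return inside.astype(np.uint8)
```

The resulting ROI deliberately includes peritumoral tissue and discards the fine contour, which is exactly the trade-off the study quantifies.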
Affiliation(s)
- Muhaddisa Barat Ali
- Department of Electrical Engineering, Chalmers University of Technology, Gothenburg, Sweden
- Irene Yu-Hua Gu
- Department of Electrical Engineering, Chalmers University of Technology, Gothenburg, Sweden
- Alice Lidemar
- Department of Clinical Neuroscience, University of Gothenburg, Gothenburg, Sweden
- Mitchel S Berger
- Department of Neurological Surgery, University of California San Francisco, San Francisco, USA
- Georg Widhalm
- Department of Neurosurgery, Medical University of Vienna, Vienna, Austria
- Asgeir Store Jakola
- Department of Clinical Neuroscience, University of Gothenburg, Gothenburg, Sweden; Department of Neurosurgery, Sahlgrenska University Hospital, Gothenburg, Sweden
232
Painuli D, Bhardwaj S, Köse U. Recent advancement in cancer diagnosis using machine learning and deep learning techniques: A comprehensive review. Comput Biol Med 2022; 146:105580. [PMID: 35551012 DOI: 10.1016/j.compbiomed.2022.105580] [Citation(s) in RCA: 49] [Impact Index Per Article: 16.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2022] [Revised: 04/14/2022] [Accepted: 04/30/2022] [Indexed: 02/07/2023]
Abstract
Being the second leading cause of mortality worldwide, cancer has been identified as a perilous disease, for which advanced-stage diagnosis may do little to safeguard patients from death. Thus, a sustainable architecture with proven cancer-prevention estimates and provision for early diagnosis is the need of the hour. The advent of machine learning methods has enriched cancer diagnosis with efficiency and error rates that can surpass those of humans. A significant revolution has been witnessed in the development of machine learning and deep learning assisted systems for the segmentation and classification of various cancers during the past decade. This paper reviews various types of cancer detection across different data modalities using machine learning and deep learning-based methods, along with different feature extraction techniques and the benchmark datasets utilized in studies from the past six years. The focus of this study is to review, analyse, classify, and address recent developments in the detection and diagnosis of six cancer types, i.e., breast, lung, liver, skin, brain, and pancreatic cancer, using machine learning and deep learning techniques. Various state-of-the-art techniques are clustered into groups, results are examined through key performance indicators such as accuracy, area under the curve, precision, sensitivity, and Dice score on benchmark datasets, and the paper concludes with future research challenges.
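Most of the key performance indicators the review compares derive from a binary confusion matrix; a minimal sketch follows (AUC is omitted, since it needs ranked prediction scores rather than counts):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard indicators for a binary cancer-detection model,
    computed from true/false positive and negative counts."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),        # a.k.a. recall
        "specificity": tn / (tn + fp),
        "precision": tp / (tp + fp),
        "dice": 2 * tp / (2 * tp + fp + fn),  # F1 / Dice overlap
    }
```

For example, 5 true positives, 5 false positives, 80 true negatives, and 10 false negatives give an accuracy of 0.85 but a Dice of only 0.4, which is why imbalance-sensitive metrics matter when tumors are rare.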
Affiliation(s)
- Deepak Painuli
- Department of Computer Science and Engineering, Gurukula Kangri Vishwavidyalaya, Haridwar, India
- Suyash Bhardwaj
- Department of Computer Science and Engineering, Gurukula Kangri Vishwavidyalaya, Haridwar, India
- Utku Köse
- Department of Computer Engineering, Suleyman Demirel University, Isparta, Turkey
233
Qin C, Tu P, Chen X, Troccaz J. A novel registration-based algorithm for prostate segmentation via the combination of SSM and CNN. Med Phys 2022; 49:5268-5282. [PMID: 35506596 DOI: 10.1002/mp.15698] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2021] [Revised: 04/18/2022] [Accepted: 04/22/2022] [Indexed: 11/12/2022] Open
Abstract
PURPOSE Precise determination of the target is an essential procedure in prostate interventions, such as prostate biopsy, lesion detection, and targeted therapy. However, prostate delineation can be difficult in some cases due to tissue ambiguity or the lack of a partial anatomical boundary. In this study, we propose a novel supervised registration-based algorithm for precise prostate segmentation, which combines a convolutional neural network (CNN) with a statistical shape model (SSM). METHODS The proposed network mainly consists of two branches. One, called the SSM-Net branch, was exploited to predict the shape transform matrix, shape control parameters, and shape fine-tuning vector for the generation of the prostate boundary. From the inferred boundary, a normalized distance map was then calculated as the output of SSM-Net. Another branch, named ResU-Net, was employed to predict a probability label map from the input images at the same time. Integrating the outputs of these two branches, the optimal weighted sum of the distance map and the probability map was regarded as the prostate segmentation. RESULTS Two public datasets, PROMISE12 and NCI-ISBI 2013, were utilized to evaluate the performance of the proposed algorithm. The results demonstrate that the segmentation algorithm achieved the best performance with an SSM of 9,500 nodes, obtaining a Dice of 0.907 and an average surface distance of 1.85 mm. Compared with other methods, our algorithm delineates the prostate region more accurately and efficiently. In addition, we verified the impact of model elasticity augmentation and the fine-tuning item on the network's segmentation capability; both factors improved delineation accuracy, increasing Dice by 10% and 7%, respectively. CONCLUSIONS Our segmentation method has the potential to be an effective and robust approach for prostate segmentation.
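The fusion step of the two branches reduces to a weighted sum followed by a threshold; the sketch below flattens the maps to 1-D for clarity. The mixing weight `alpha` and threshold are illustrative values, whereas the paper selects the optimal weighting empirically:

```python
def fuse_maps(distance_map, prob_map, alpha=0.5, thresh=0.5):
    """Fuse SSM-Net's normalized distance map with ResU-Net's
    probability map by a weighted sum, then threshold the result
    into a binary prostate mask."""
    fused = [alpha * d + (1.0 - alpha) * p
             for d, p in zip(distance_map, prob_map)]
    return [1 if v >= thresh else 0 for v in fused]
```

Keeping the shape-driven distance map in the sum is what lets the SSM regularize the CNN's per-pixel probabilities where the anatomical boundary is ambiguous.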
Affiliation(s)
- Chunxia Qin
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China; School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Puxun Tu
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Xiaojun Chen
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jocelyne Troccaz
- Univ. Grenoble Alpes, CNRS, Grenoble INP, TIMC, Grenoble, France
234
Segmentation Algorithm-Based Safety Analysis of Cardiac Computed Tomography Angiography to Evaluate Doctor-Nurse-Patient Integrated Nursing Management for Cardiac Interventional Surgery. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:2148566. [PMID: 35572833 PMCID: PMC9095376 DOI: 10.1155/2022/2148566] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/21/2022] [Revised: 04/03/2022] [Accepted: 04/05/2022] [Indexed: 11/17/2022]
Abstract
To deeply analyze the influence of doctor-nurse-patient integrated nursing management on cardiac interventional surgery, 120 patients with coronary heart disease undergoing cardiac interventional therapy were selected and randomly divided into two groups of 60 cases each. The experimental group received doctor-nurse-patient integrated nursing, while the control group received routine nursing. A Hessian-matrix enhanced-filter segmentation algorithm was used to process the patients' cardiac computed tomography angiography (CTA) images to assess the algorithm's performance and the safety of the nursing methods. The results showed that the Jaccard, Dice, sensitivity, and specificity of the processed cardiac CTA images were 0.86, 0.93, 0.94, and 0.95, respectively; the disease self-management ability and quality of life scores of the experimental group after the nursing intervention were significantly better than those before the intervention. The number of adverse vascular events in the experimental group was 3 cases, markedly lower than in the control group (15 cases). The diagnostic accuracy of the two groups after segmentation-algorithm processing was 0.87 and 0.88, respectively, clearly superior to the diagnostic accuracy of conventional CTA (0.58 and 0.61). In summary, segmentation-algorithm-based cardiac CTA evaluation of doctor-nurse-patient integrated nursing management for cardiac interventional surgery showed good safety and is worthy of further promotion in clinical cardiac interventional surgery.
235
Huang P, Li D, Jiao Z, Wei D, Cao B, Mo Z, Wang Q, Zhang H, Shen D. Common Feature Learning for Brain Tumor MRI Synthesis by Context-aware Generative Adversarial Network. Med Image Anal 2022; 79:102472. [DOI: 10.1016/j.media.2022.102472] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2021] [Revised: 02/18/2022] [Accepted: 05/03/2022] [Indexed: 11/28/2022]
236
Balwant M. A Review on Convolutional Neural Networks for Brain Tumor Segmentation: Methods, Datasets, Libraries, and Future Directions. Ing Rech Biomed 2022. [DOI: 10.1016/j.irbm.2022.05.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
237
Zhang J, Jiang Z, Liu D, Sun Q, Hou Y, Liu B. 3D asymmetric expectation-maximization attention network for brain tumor segmentation. NMR IN BIOMEDICINE 2022; 35:e4657. [PMID: 34859922 DOI: 10.1002/nbm.4657] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/19/2021] [Revised: 10/23/2021] [Accepted: 11/02/2021] [Indexed: 06/13/2023]
Abstract
Automatic brain tumor segmentation on MRI is a prerequisite to provide quantitative and intuitive assistance for clinical diagnosis and treatment. Meanwhile, 3D deep neural network brain tumor segmentation models have demonstrated considerable accuracy improvement over corresponding 2D methodologies. However, 3D brain tumor segmentation models generally suffer from high computation cost. Motivated by a recently proposed 3D dilated multi-fiber network (DMF-Net) architecture that emphasizes reducing computation cost, we present in this work a novel encoder-decoder neural network, i.e., a 3D asymmetric expectation-maximization attention network (AEMA-Net), to automatically segment brain tumors. We modify DMF-Net by introducing an asymmetric convolution block into the multi-fiber unit and the dilated multi-fiber unit to capture more powerful deep features for brain tumor segmentation. In addition, AEMA-Net further incorporates an expectation-maximization attention (EMA) module into DMF-Net by embedding the EMA block in the third stage of the skip connection, which focuses on capturing the long-range dependence of context. We extensively evaluate AEMA-Net on three MRI brain tumor segmentation benchmarks, the BraTS 2018, 2019, and 2020 datasets. Experimental results demonstrate that AEMA-Net outperforms both 3D U-Net and DMF-Net, and it achieves competitive performance compared with state-of-the-art brain tumor segmentation methods.
Affiliation(s)
- Jianxin Zhang
- School of Computer Science and Engineering, Dalian Minzu University, Dalian, China
- Key Lab of Advanced Design and Intelligent Computing (Ministry of Education), Dalian University, Dalian, China
- Zongkang Jiang
- Key Lab of Advanced Design and Intelligent Computing (Ministry of Education), Dalian University, Dalian, China
- Dongwei Liu
- School of Computer Science and Engineering, Dalian Minzu University, Dalian, China
- Qiule Sun
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Yaqing Hou
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Bin Liu
- International School of Information Science and Engineering (DUT-RUISE), Dalian University of Technology, Dalian, China
238
Al Zoubi O, Misaki M, Tsuchiyagaito A, Zotev V, White E, Paulus M, Bodurka J. Machine Learning Evidence for Sex Differences Consistently Influences Resting-State Functional Magnetic Resonance Imaging Fluctuations Across Multiple Independently Acquired Data Sets. Brain Connect 2022; 12:348-361. [PMID: 34269609 PMCID: PMC9131354 DOI: 10.1089/brain.2020.0878] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Background/Introduction: Sex classification using functional connectivity from resting-state functional magnetic resonance imaging (rs-fMRI) has shown promising results, suggesting that sex differences might also be embedded in blood-oxygen-level-dependent (BOLD) signal properties such as the amplitude of low-frequency fluctuation (ALFF) and the fraction of ALFF (fALFF). This study comprehensively investigates sex differences using a reliable and explainable machine learning (ML) pipeline. Five independent rs-fMRI cohorts with more than 5,500 samples were used to assess sex classification performance and map the spatial distribution of the important brain regions. Methods: ALFF and fALFF features were extracted from predefined brain parcellations of the five rs-fMRI samples and fed into an unbiased and explainable ML pipeline with a wide range of methods. The pipeline comprehensively assessed unbiased performance for within-sample and across-sample validation. In addition, the parcellation effect, classifier selection, scanning length, spatial distribution, reproducibility, and feature importance were analyzed and evaluated thoroughly. Results: The results demonstrated high sex classification accuracies for healthy adults (area under the curve >0.89), degrading for nonhealthy subjects. Sex classification showed moderate to good intraclass correlation coefficients across parcellations. Linear classifiers outperformed nonlinear classifiers. Sex differences could be detected even with a short rs-fMRI scan (e.g., 2 min). The spatial distribution of important features overlaps with results from previous studies. Discussion: Sex differences are consistent in rs-fMRI and should be considered seriously in any study design, analysis, or interpretation. Features that discriminate males and females were distributed across several brain regions, suggesting a complex mosaic of sex differences in rs-fMRI.
Impact statement The presented study showed that sex differences are embedded in the blood-oxygen-level-dependent (BOLD) signal and can be predicted using an unbiased and explainable machine learning pipeline. The study revealed that psychiatric disorders and demographics might influence the BOLD signal and interact with the classification of sex. The spatial distribution of the important features presented here supports the notion that the brain is a mosaic of male and female features. The findings emphasize the importance of controlling for sex when conducting brain imaging analysis. In addition, the presented framework can be adapted to classify other variables from resting-state BOLD signals.
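The ALFF and fALFF features the pipeline classifies are simple spectral quantities. A minimal NumPy sketch for a single voxel's time series, assuming the conventional 0.01-0.08 Hz band (the function name and epsilon are illustrative, not the study's code):

```python
import numpy as np

def alff_falff(ts, tr, low=0.01, high=0.08):
    """Sketch of ALFF/fALFF from one voxel's resting-state BOLD time series.

    ts: 1-D time series; tr: repetition time in seconds.
    ALFF sums spectral amplitudes within the low-frequency band; fALFF
    divides that by the amplitude summed over the whole spectrum.
    """
    ts = np.asarray(ts, dtype=float)
    amp = np.abs(np.fft.rfft(ts - ts.mean()))     # one-sided amplitude spectrum
    freqs = np.fft.rfftfreq(ts.size, d=tr)        # frequency of each bin
    band = (freqs >= low) & (freqs <= high)
    alff = amp[band].sum()
    falff = alff / (amp.sum() + 1e-12)
    return alff, falff
```

A 0.05 Hz oscillation would yield fALFF near 1, while a 0.2 Hz oscillation (outside the band) would yield fALFF near 0.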
Affiliation(s)
- Obada Al Zoubi
- Laureate Institute for Brain Research, Tulsa, Oklahoma, USA
- Department of Psychiatry, Harvard Medical School/McLean Hospital, Boston, Massachusetts, USA
- Masaya Misaki
- Laureate Institute for Brain Research, Tulsa, Oklahoma, USA
- Vadim Zotev
- Laureate Institute for Brain Research, Tulsa, Oklahoma, USA
- Evan White
- Laureate Institute for Brain Research, Tulsa, Oklahoma, USA
- Martin Paulus
- Laureate Institute for Brain Research, Tulsa, Oklahoma, USA
- Jerzy Bodurka
- Laureate Institute for Brain Research, Tulsa, Oklahoma, USA
- Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, Oklahoma, USA
239
Borwankar S, Verma JP, Jain R, Nayyar A. Improvise approach for respiratory pathologies classification with multilayer convolutional neural networks. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 81:39185-39205. [PMID: 35505670 PMCID: PMC9047583 DOI: 10.1007/s11042-022-12958-1] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Revised: 02/16/2022] [Accepted: 03/09/2022] [Indexed: 06/01/2023]
Abstract
Every respiratory-related checkup includes audio samples collected from the individual through different tools (sonograph, stethoscope). This audio is analyzed to identify pathology, which requires time and effort. The research work proposed in this paper aims to ease that task by diagnosing lung-related pathologies with a Convolutional Neural Network (CNN) operating on transformed features from the audio samples. The International Conference on Biomedical and Health Informatics (ICBHI) corpus dataset was used for lung sounds. A novel approach is proposed to pre-process the data and pass it through a newly proposed CNN architecture. The combination of the pre-processing steps MFCC, Mel-spectrogram, and Chroma CENS with the CNN improves the performance of the proposed system, helping to make an accurate diagnosis from lung sounds. The comparative analysis shows that the proposed approach outperforms previous state-of-the-art research approaches. It also shows that neither a wheeze nor a crackle needs to be present in the lung sound to classify respiratory pathologies.
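The three transformed features named above (MFCC, Mel-spectrogram, Chroma CENS) share a common front end: the audio is split into short overlapping frames and mapped onto a perceptual frequency scale. A hedged sketch of just that shared front end, assuming a full pipeline would use a library such as librosa (`frame_signal` and its parameters are illustrative):

```python
import numpy as np

def hz_to_mel(f):
    """Standard Hz-to-mel conversion used when building mel filterbanks."""
    return 2595.0 * np.log10(1.0 + f / 700.0)

def frame_signal(x, frame_len, hop):
    """Split an audio signal into overlapping frames, the first step of
    MFCC, mel-spectrogram, and chroma feature extraction."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])
```

Each frame would then be windowed, transformed with an FFT, and projected through mel or chroma filterbanks before being stacked into the image-like input the CNN consumes.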
Affiliation(s)
- Saumya Borwankar
- Institute of Technology, Nirma University, Ahmedabad, Gujarat, India
- Rachna Jain
- IT Department, Bhagwan Parshuram Institute of Technology, New Delhi, India
- Anand Nayyar
- Graduate School, Faculty of Information Technology, Duy Tan University, Da Nang 550000, Vietnam
240
Billardello R, Ntolkeras G, Chericoni A, Madsen JR, Papadelis C, Pearl PL, Grant PE, Taffoni F, Tamilia E. Novel User-Friendly Application for MRI Segmentation of Brain Resection following Epilepsy Surgery. Diagnostics (Basel) 2022; 12:diagnostics12041017. [PMID: 35454065 PMCID: PMC9032020 DOI: 10.3390/diagnostics12041017] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2022] [Revised: 04/10/2022] [Accepted: 04/13/2022] [Indexed: 11/16/2022] Open
Abstract
Delineation of resected brain cavities on magnetic resonance images (MRIs) of epilepsy surgery patients is essential for neuroimaging/neurophysiology studies investigating biomarkers of the epileptogenic zone. The gold standard to delineate the resection on MRI remains manual slice-by-slice tracing by experts. Here, we proposed and validated a semiautomated MRI segmentation pipeline, generating an accurate model of the resection and its anatomical labeling, and developed a graphical user interface (GUI) for user-friendly usage. We retrieved pre- and postoperative MRIs from 35 patients who had focal epilepsy surgery, implemented a region-growing algorithm to delineate the resection on postoperative MRIs, and tested its performance while varying different tuning parameters. Similarity between our output and hand-drawn gold standards was evaluated via the Dice similarity coefficient (DSC; range: 0-1). Additionally, the best segmentation pipeline was trained to provide an automated anatomical report of the resection (based on a presurgical brain atlas). We found that the best-performing set of parameters presented a DSC of 0.83 (0.72-0.85), high robustness to seed-selection variability, and 90% anatomical agreement with the clinical postoperative MRI report. We present a novel user-friendly open-source GUI that implements a semiautomated segmentation pipeline specifically optimized to generate resection models and their anatomical reports for epilepsy surgery patients while minimizing user interaction.
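The two core ingredients of the pipeline, seed-based region growing and the Dice similarity coefficient, can be sketched as follows. This is a minimal illustration assuming 6-connectivity and a fixed intensity tolerance, not the authors' optimized implementation:

```python
from collections import deque

import numpy as np

def region_grow(vol, seed, tol):
    """Sketch of seed-based region growing on a 3D volume.

    Voxels 6-connected to the growing region are added while their
    intensity stays within `tol` of the seed intensity.
    """
    vol = np.asarray(vol, dtype=float)
    mask = np.zeros(vol.shape, dtype=bool)
    seed_val = vol[seed]
    mask[seed] = True
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            p = (z + dz, y + dy, x + dx)
            if all(0 <= p[i] < vol.shape[i] for i in range(3)) and not mask[p]:
                if abs(vol[p] - seed_val) <= tol:   # accept similar voxels
                    mask[p] = True
                    queue.append(p)
    return mask

def dice(a, b):
    """Dice similarity coefficient between two boolean masks (range 0-1)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

The "tuning parameters" studied in the paper correspond to choices such as the tolerance and connectivity here; robustness to seed selection means `region_grow` should return near-identical masks for any seed inside the cavity.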
Affiliation(s)
- Roberto Billardello
- Fetal Neonatal Neuroimaging and Developmental Science Center (FNNDSC), Newborn Medicine Division, Department of Pediatrics, Boston Children’s Hospital, Boston, MA 02115, USA
- Advanced Robotics and Human-Centered Technologies-CREO Lab, Università Campus Bio-Medico di Roma, 00128 Rome, Italy
- Correspondence: (R.B.); (E.T.)
- Georgios Ntolkeras
- Fetal Neonatal Neuroimaging and Developmental Science Center (FNNDSC), Newborn Medicine Division, Department of Pediatrics, Boston Children’s Hospital, Boston, MA 02115, USA
- Baystate Children’s Hospital, Springfield, MA 01199, USA
- Assia Chericoni
- Fetal Neonatal Neuroimaging and Developmental Science Center (FNNDSC), Newborn Medicine Division, Department of Pediatrics, Boston Children’s Hospital, Boston, MA 02115, USA
- Advanced Robotics and Human-Centered Technologies-CREO Lab, Università Campus Bio-Medico di Roma, 00128 Rome, Italy
- Joseph R. Madsen
- Epilepsy Surgery Program, Department of Neurosurgery, Boston Children’s Hospital, Harvard Medical School, Boston, MA 02115, USA
- Christos Papadelis
- Jane and John Justin Neurosciences Center, Cook Children’s Health Care System, Fort Worth, TX 76104, USA
- Phillip L. Pearl
- Division of Epilepsy and Clinical Neurophysiology, Boston Children’s Hospital, Harvard Medical School, Boston, MA 02115, USA
- Patricia Ellen Grant
- Fetal Neonatal Neuroimaging and Developmental Science Center (FNNDSC), Newborn Medicine Division, Department of Pediatrics, Boston Children’s Hospital, Boston, MA 02115, USA
- Fabrizio Taffoni
- Advanced Robotics and Human-Centered Technologies-CREO Lab, Università Campus Bio-Medico di Roma, 00128 Rome, Italy
- Eleonora Tamilia
- Fetal Neonatal Neuroimaging and Developmental Science Center (FNNDSC), Newborn Medicine Division, Department of Pediatrics, Boston Children’s Hospital, Boston, MA 02115, USA
- Correspondence: (R.B.); (E.T.)
241
Hsu WW, Guo JM, Pei L, Chiang LA, Li YF, Hsiao JC, Colen R, Liu P. A weakly supervised deep learning-based method for glioma subtype classification using WSI and mpMRIs. Sci Rep 2022; 12:6111. [PMID: 35414643 PMCID: PMC9005548 DOI: 10.1038/s41598-022-09985-1] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2022] [Accepted: 03/30/2022] [Indexed: 11/09/2022] Open
Abstract
Accurate glioma subtype classification is critical for the treatment management of patients with brain tumors. Developing an automatic computer-aided algorithm for glioma subtype classification is challenging due to many factors, one of which is the label constraint: each case is labeled only with the glioma subtype, without precise annotations of lesion regions. In this paper, we propose a novel hybrid fully convolutional neural network (CNN)-based method for glioma subtype classification using both whole slide imaging (WSI) and multiparametric magnetic resonance images (mpMRIs). It comprises two methods: a WSI-based method and an mpMRIs-based method. For the WSI-based method, we categorize the glioma subtype using a 2D CNN on WSIs. To overcome the label constraint, we extract the truly representative patches for glioma subtype classification in a weakly supervised fashion. For the mpMRIs-based method, we develop a 3D CNN-based method comprising brain tumor segmentation and classification. Finally, to enhance the robustness of the predictions, we fuse the WSI-based and mpMRIs-based results guided by a confidence index. The experimental results on the validation dataset of the CPM-RadPath 2020 competition show that comprehensive judgments from the two modalities achieve better performance than using WSI or mpMRIs alone. Furthermore, the proposed method ranked third in CPM-RadPath 2020 in the testing phase. The proposed method demonstrates competitive performance, creditable to the success of the weakly supervised approach and the strategy of label agreement across multi-modality data.
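The confidence-guided late fusion step can be illustrated with a toy sketch. The abstract does not specify the exact form of the confidence index, so a normalized confidence-weighted average of the two per-class probability vectors is assumed here:

```python
def fuse_predictions(p_wsi, p_mri, conf_wsi, conf_mri):
    """Sketch of confidence-guided late fusion of two per-class probability
    vectors; the weighting scheme is an illustrative assumption, not the
    paper's exact confidence index."""
    total = conf_wsi + conf_mri
    return [(conf_wsi * a + conf_mri * b) / total for a, b in zip(p_wsi, p_mri)]
```

When one modality is confident and the other is not, the fused vector leans toward the confident branch while still summing to 1.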
Affiliation(s)
- Wei-Wen Hsu
- Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan, ROC
- Jing-Ming Guo
- Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan, ROC
- Linmin Pei
- Imaging and Visualization Group, ABCS, Frederick National Laboratory for Cancer Research, Frederick, MD 21702, USA
- Ling-An Chiang
- Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan, ROC
- Yao-Feng Li
- Department of Pathology, Tri-Service General Hospital and National Defense Medical Center, Taipei 11490, Taiwan, ROC
- Jui-Chien Hsiao
- Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan, ROC
- Rivka Colen
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15232, USA
- Hillman Cancer Center, University of Pittsburgh Medical Center, Pittsburgh, PA 15260, USA
- Peizhong Liu
- College of Engineering, Huaqiao University, Quanzhou, China
242
Das S, Nayak GK, Saba L, Kalra M, Suri JS, Saxena S. An artificial intelligence framework and its bias for brain tumor segmentation: A narrative review. Comput Biol Med 2022; 143:105273. [PMID: 35228172 DOI: 10.1016/j.compbiomed.2022.105273] [Citation(s) in RCA: 41] [Impact Index Per Article: 13.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2021] [Revised: 01/15/2022] [Accepted: 01/24/2022] [Indexed: 02/06/2023]
Abstract
BACKGROUND Artificial intelligence (AI) has become a prominent technique for medical diagnosis and plays an essential role in detecting brain tumors. Although AI-based models are widely used in brain lesion segmentation (BLS), understanding their effectiveness is challenging due to their complexity and diversity. Several reviews of brain tumor segmentation are available, but none describes a link between the threats due to risk-of-bias (RoB) in AI and its architectures. In our review, we focus on linking RoB to the different AI-based architectural clusters in popular DL frameworks. Further, given the variance in these designs and in the input data types in medical imaging, it is necessary to present a narrative review considering all facets of BLS. APPROACH The proposed study uses a PRISMA strategy based on 75 relevant studies found by searching PubMed, Scopus, and Google Scholar. Based on architectural evolution, DL studies were categorized into four classes: convolutional neural network (CNN)-based, encoder-decoder (ED)-based, transfer learning (TL)-based, and hybrid DL (HDL)-based architectures. These studies were then analyzed considering 32 AI attributes, with clusters covering AI architecture, imaging modalities, hyper-parameters, performance evaluation metrics, and clinical evaluation. After the studies were scored on all attributes, a composite score was computed, normalized, and ranked. Thereafter, a bias cutoff (AP(ai)Bias 1.0, AtheroPoint, Roseville, CA, USA) was established to detect low-, moderate-, and high-bias studies. CONCLUSION The four classes of architectures, from best- to worst-performing, are TL > ED > CNN > HDL. ED-based models had the lowest AI bias for BLS. This study presents a set of three primary and six secondary recommendations for lowering the RoB.
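The score-normalize-rank-cutoff procedure described in the approach can be sketched generically. The cutoff values and the direction of the scale below are assumptions for illustration; the actual AP(ai)Bias 1.0 thresholds are not given in the abstract:

```python
def bias_rank(scores, cutoffs=(0.5, 0.75)):
    """Sketch of scoring studies for risk-of-bias.

    scores: {study: raw composite score over the graded attributes}.
    Scores are min-max normalized, then labeled using hypothetical
    cutoffs, assuming a higher composite score means lower bias.
    """
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    out = {}
    for study, s in scores.items():
        norm = (s - lo) / span                        # normalize to [0, 1]
        label = "high-bias" if norm < cutoffs[0] else (
            "moderate-bias" if norm < cutoffs[1] else "low-bias")
        out[study] = (round(norm, 3), label)
    return out
```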
Affiliation(s)
- Suchismita Das
- CSE Department, International Institute of Information Technology, Bhubaneswar, Odisha, India
- CSE Department, KIIT Deemed to be University, Bhubaneswar, Odisha, India
- G K Nayak
- CSE Department, International Institute of Information Technology, Bhubaneswar, Odisha, India
- Luca Saba
- Department of Radiology, AOU, University of Cagliari, Cagliari, Italy
- Mannudeep Kalra
- Department of Radiology, Massachusetts General Hospital, 55 Fruit Street, Boston, MA, USA
- Jasjit S Suri
- Stroke Diagnostic and Monitoring Division, AtheroPoint™ LLC, Roseville, CA, USA
- Sanjay Saxena
- CSE Department, International Institute of Information Technology, Bhubaneswar, Odisha, India
243
Aljabri M, AlAmir M, AlGhamdi M, Abdel-Mottaleb M, Collado-Mesa F. Towards a better understanding of annotation tools for medical imaging: a survey. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 81:25877-25911. [PMID: 35350630 PMCID: PMC8948453 DOI: 10.1007/s11042-022-12100-1] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/06/2021] [Revised: 08/04/2021] [Accepted: 01/03/2022] [Indexed: 05/07/2023]
Abstract
Medical imaging refers to several different technologies that are used to view the human body to diagnose, monitor, or treat medical conditions. It requires significant expertise to efficiently and correctly interpret the images generated by each of these technologies, which among others include radiography, ultrasound, and magnetic resonance imaging. Deep learning and machine learning techniques provide different solutions for medical image interpretation, including those associated with detection and diagnosis. Despite the huge success of deep learning algorithms in image analysis, training algorithms to reach human-level performance in these tasks depends on the availability of large amounts of high-quality training data, including high-quality annotations to serve as ground truth. Different annotation tools have been developed to assist with the annotation process. In this survey, we present the currently available annotation tools for medical imaging, including descriptions of their graphical user interfaces (GUIs) and supporting instruments. The main contribution of this study is an intensive review of the popular annotation tools, showing their successful usage in annotating medical imaging datasets to guide researchers in this area.
Affiliation(s)
- Manar Aljabri
- Department of Computer Science, Umm Al-Qura University, Mecca, Saudi Arabia
- Manal AlAmir
- Department of Computer Science, Umm Al-Qura University, Mecca, Saudi Arabia
- Manal AlGhamdi
- Department of Computer Science, Umm Al-Qura University, Mecca, Saudi Arabia
- Fernando Collado-Mesa
- Department of Radiology, University of Miami Miller School of Medicine, Miami, FL, USA
244
Shah SY, Larijani H, Gibson RM, Liarokapis D. Random Neural Network Based Epileptic Seizure Episode Detection Exploiting Electroencephalogram Signals. SENSORS (BASEL, SWITZERLAND) 2022; 22:s22072466. [PMID: 35408080 PMCID: PMC9002775 DOI: 10.3390/s22072466] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/18/2022] [Revised: 03/14/2022] [Accepted: 03/17/2022] [Indexed: 06/12/2023]
Abstract
Epileptic seizures are caused by abnormal electrical activity in the brain that manifests itself in a variety of ways, including confusion and loss of awareness. Correct identification of epileptic seizures is critical in the treatment and management of patients with epileptic disorders. One in four patients presents resistance to treatment of seizure episodes and is in dire need of having these critical events detected through continuous monitoring in order to manage the disease. Epileptic seizures can be identified by reliably and accurately monitoring the patient's neural and muscular activity, cardiac activity, and oxygen saturation level using state-of-the-art sensing techniques, including electroencephalograms (EEGs), electromyography (EMG), electrocardiograms (ECGs), and motion or audio/video recording focused on the human head and body. EEG analysis provides a prominent means to distinguish the signals associated with epileptic episodes from normal signals; therefore, this work leverages the latest EEG dataset using cutting-edge deep learning algorithms, such as the random neural network (RNN), convolutional neural network (CNN), extremely random tree (ERT), and residual neural network (ResNet), to classify multiple variants of epileptic seizures from non-seizures. The results highlight that the RNN outperformed all other algorithms used, providing an overall accuracy of 97%, which was slightly improved after cross-validation.
Affiliation(s)
- Syed Yaseen Shah
- School of Computing, Engineering and Built Environment, Glasgow Caledonian University, Glasgow G4 0BA, UK
- Hadi Larijani
- SMART Technology Research Centre, Glasgow Caledonian University, Cowcaddens Road, Glasgow G4 0BA, UK
- Ryan M. Gibson
- School of Computing, Engineering and Built Environment, Glasgow Caledonian University, Glasgow G4 0BA, UK
- Dimitrios Liarokapis
- School of Computing, Engineering and Built Environment, Glasgow Caledonian University, Glasgow G4 0BA, UK
245
Wang J, Yu Z, Luan Z, Ren J, Zhao Y, Yu G. RDAU-Net: Based on a Residual Convolutional Neural Network With DFP and CBAM for Brain Tumor Segmentation. Front Oncol 2022; 12:805263. [PMID: 35311076 PMCID: PMC8924611 DOI: 10.3389/fonc.2022.805263] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2021] [Accepted: 01/14/2022] [Indexed: 12/20/2022] Open
Abstract
Due to the high heterogeneity of brain tumors, automatic brain tumor segmentation remains a challenging task. In this paper, we propose RDAU-Net, built by adding dilated feature pyramid blocks with 3D CBAM (Convolutional Block Attention Module) blocks and inserting 3D CBAM blocks after the skip-connection layers. A CBAM with channel attention and spatial attention facilitates the combination of more expressive feature information, leading to more efficient extraction of contextual information from images at various scales. The performance was evaluated on the Multimodal Brain Tumor Segmentation (BraTS) challenge data. Experimental results show that RDAU-Net achieves state-of-the-art performance: the Dice coefficient for the whole tumor (WT) on the BraTS 2019 dataset exceeded the baseline value by 9.2%.
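The channel-attention half of a CBAM block (the other half is spatial attention) can be sketched in NumPy: average- and max-pooled channel descriptors pass through a shared two-layer MLP whose summed, sigmoid-squashed output gates each channel. Shapes and weights below are illustrative, not RDAU-Net's:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def channel_attention(feat, w1, w2):
    """Sketch of CBAM-style channel attention on a (c, d, h, w) feature map.

    w1, w2: weights of the shared MLP (w1 squeezes channels, w2 restores
    them); both descriptors share the same MLP, as in CBAM.
    """
    c = feat.shape[0]
    flat = feat.reshape(c, -1)
    avg_desc = flat.mean(axis=1)                      # average-pooled descriptor (c,)
    max_desc = flat.max(axis=1)                       # max-pooled descriptor (c,)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)      # shared MLP with ReLU
    gate = sigmoid(mlp(avg_desc) + mlp(max_desc))     # per-channel weights in (0, 1)
    return feat * gate.reshape(c, 1, 1, 1)            # rescale each channel
```

In a real network `w1`/`w2` are learned; here they only demonstrate the data flow that lets the block emphasize informative channels before spatial attention is applied.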
Affiliation(s)
- Jingjing Wang
- College of Physics and Electronics Science, Shandong Normal University, Jinan, China
- Zishu Yu
- College of Physics and Electronics Science, Shandong Normal University, Jinan, China
- Zhenye Luan
- College of Physics and Electronics Science, Shandong Normal University, Jinan, China
- Jinwen Ren
- College of Physics and Electronics Science, Shandong Normal University, Jinan, China
- Yanhua Zhao
- Obstetrics and Gynecology, Tengzhou Xigang Central Health Center, Tengzhou, China
- Gang Yu
- College of Physics and Electronics Science, Shandong Normal University, Jinan, China
246
Umer M, Sadiq S, Karamti H, Karamti W, Majeed R, Nappi M. IoT Based Smart Monitoring of Patients with Acute Heart Failure. SENSORS (BASEL, SWITZERLAND) 2022; 22:2431. [PMID: 35408045 PMCID: PMC9003513 DOI: 10.3390/s22072431] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/03/2022] [Revised: 03/01/2022] [Accepted: 03/04/2022] [Indexed: 12/05/2022]
Abstract
The prediction of heart failure survival is a challenging task that helps medical professionals make the right decisions about patients. Expertise and experience of medical professionals are required to care for heart failure patients. Machine learning models can help with understanding symptoms of cardiac disease; however, manual feature engineering is challenging and requires expertise to select the appropriate technique. This study proposes a smart healthcare framework using Internet-of-Things (IoT) and cloud technologies that improves heart failure patients' survival prediction without manual feature engineering. The smart IoT-based framework monitors patients on the basis of real-time data and provides timely, effective, and quality healthcare services to heart failure patients. The proposed framework also investigates deep learning models for classifying heart failure patients as alive or deceased. The framework employs IoT-based sensors to obtain signals and send them to a cloud web server for processing, where deep learning models determine the state of patients. Patients' health records and processing results are shared with a medical professional, who will provide emergency help if required. The dataset used in this study contains 13 features and was obtained from the UCI repository (Heart Failure Clinical Records). The experimental results revealed that the CNN model is superior to the other deep learning and machine learning models, with an accuracy of 0.9289.
Affiliation(s)
- Muhammad Umer
- Department of Computer Science, Khwaja Fareed University of Engineering and Information Technology, Rahim Yar Khan 64200, Pakistan
- Department of Computer Science and Information Technology, The Islamia University of Bahawalpur, Bahawalpur 63100, Pakistan
- Saima Sadiq
- Department of Computer Science, Khwaja Fareed University of Engineering and Information Technology, Rahim Yar Khan 64200, Pakistan
- Hanen Karamti
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Walid Karamti
- Department of Computer Science, College of Computer, Qassim University, Buraydah 51452, Saudi Arabia
- Data Engineering and Semantics Research Unit, Faculty of Sciences of Sfax, University of Sfax, Sfax 3052, Tunisia
- Rizwan Majeed
- Directorate of Information Technology, The Islamia University of Bahawalpur, Bahawalpur 63100, Pakistan
- Michele Nappi
- Department of Computer Science, University of Salerno, 84084 Fisciano, Italy
247
Citation Context Analysis Using Combined Feature Embedding and Deep Convolutional Neural Network Model. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12063203] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/10/2022]
Abstract
Citation creates a link between the citing and the cited author, and citation frequency has been regarded as the basic element for measuring the impact of research and knowledge-based achievements. Citation frequency has been widely used to calculate the impact factor, H-index, i10-index, etc., of authors and journals. However, for a fair evaluation, the qualitative aspect should be considered along with the quantitative measures. The sentiments expressed in a citation play an important role in evaluating the quality of the research, because a citation may indicate appreciation, criticism, or a basis for carrying on research. In-text citation analysis is a challenging task, despite the use of machine learning models and automatic sentiment annotation. Additionally, the use of deep learning models and word embeddings has not been studied very well. This study performs several experiments with machine learning and deep learning models using fastText, fastText subword, global vectors (GloVe), and their blending for word representation to perform in-text sentiment analysis. A dimensionality reduction technique called principal component analysis (PCA) is utilized to reduce the feature vectors before passing them to the classifier. Additionally, a customized convolutional neural network (CNN) is presented to obtain higher classification accuracy. Results suggest that the deep learning CNN coupled with fastText word embedding produces the best results in terms of accuracy, precision, recall, and F1 measure.
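The PCA reduction step applied to the (possibly blended) embedding vectors can be sketched via the SVD; `pca_reduce` is an illustrative helper, not the study's code:

```python
import numpy as np

def pca_reduce(x, k):
    """Sketch of PCA dimensionality reduction over embedding feature vectors.

    x: (n_samples, n_features) matrix, e.g. blended fastText/GloVe vectors;
    returns the projection onto the top-k principal components.
    """
    xc = x - x.mean(axis=0)                 # center each feature
    u, s, vt = np.linalg.svd(xc, full_matrices=False)
    return xc @ vt[:k].T                    # scores on the first k components
```

Blending here typically means concatenating the per-word vectors from each embedding before reduction, so PCA also serves to bring the concatenated dimensionality back down for the classifier.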
248
Zhu X, Wu Y, Hu H, Zhuang X, Yao J, Ou D, Li W, Song M, Feng N, Xu D. Medical lesion segmentation by combining multi‐modal images with modality weighted UNet. Med Phys 2022; 49:3692-3704. [PMID: 35312077 DOI: 10.1002/mp.15610] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2021] [Revised: 02/25/2022] [Accepted: 03/04/2022] [Indexed: 11/09/2022] Open
Affiliation(s)
- Xiner Zhu
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Yichao Wu
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Haoji Hu
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Xianwei Zhuang
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Jincao Yao
- Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China
- Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, China
- Di Ou
- Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China
- Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, China
- Wei Li
- Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China
- Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, China
- Mei Song
- Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China
- Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, China
- Na Feng
- Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China
- Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, China
- Dong Xu
- Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China
- Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, China
249
Diagnosis System of Microscopic Hyperspectral Image of Hepatobiliary Tumors Based on Convolutional Neural Network. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:3794844. [PMID: 35341163 PMCID: PMC8947895 DOI: 10.1155/2022/3794844] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/05/2022] [Revised: 02/09/2022] [Accepted: 02/16/2022] [Indexed: 11/29/2022]
Abstract
Hepatobiliary tumors are among the common tumors and cancers in medicine and seriously affect people's lives, so accurate diagnosis is a serious problem. This article studies a diagnostic method for microscopic images of hepatobiliary tumors and proposes using a convolutional neural network to learn from hyperspectral images and diagnose them. It is found that the addition of the convolutional neural network greatly improves classification accuracy and effectively improves the success rate of treatment. The article also designs experiments to compare feature extraction performance and classification results. The experimental results show that the improved convolutional neural network-based diagnostic method reaches an accuracy of 85%–90%, 6%–8% higher than the traditional accuracy, thus effectively improving the clinical treatment of hepatobiliary tumors.
250
Konar D, Bhattacharyya S, Dey S, Panigrahi BK. Optimized activation for quantum-inspired self-supervised neural network based fully automated brain lesion segmentation. APPL INTELL 2022. [DOI: 10.1007/s10489-021-03108-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]