1
Gopichand G, Bhargavi KN, Ramprasad MVS, Kodavanti PV, Padmavathi M. An Intelligent Model of Segmentation and Classification Using Enhanced Optimization-Based Attentive Mask RCNN and Recurrent MobileNet With LSTM for Multiple Sclerosis Types With Clinical Brain MRI. NMR Biomed 2025; 38:e70036. [PMID: 40269999] [DOI: 10.1002/nbm.70036]
Abstract
In the healthcare sector, magnetic resonance imaging (MRI) is used for multiple sclerosis (MS) assessment, classification, and management. However, interpreting an MRI scan requires exceptional skill because abnormalities on scans are frequently inconsistent with clinical symptoms, making it difficult to translate findings into effective treatment strategies. Furthermore, MRI is an expensive procedure, and its frequent use to monitor an illness increases healthcare costs. To overcome these drawbacks, this research develops a deep learning system for classifying types of MS from clinical brain MRI scans. The model's main innovation is to combine an attention-based convolutional network with recurrent deep learning for classifying the disorder; an optimization algorithm is also proposed to tune the parameters and enhance performance. Initially, 3427 images are collected from a database, and the collected samples are divided into training and testing sets. Segmentation is carried out by an adaptive and attentive-based mask regional convolutional neural network (AA-MRCNN). In this phase, the MRCNN's parameters are finely tuned with an enhanced pine cone optimization algorithm (EPCOA) to guarantee high efficiency. The segmented image is then passed to a recurrent MobileNet with long short-term memory (RM-LSTM) to obtain the classification outcome. In experimental analysis, this deep learning model achieved 95.4% accuracy, 95.3% sensitivity, and 95.4% specificity. These results demonstrate its high potential for correctly classifying MS types.
Affiliation(s)
- G Gopichand
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- M V S Ramprasad
- Department of EECE, GITAM (Deemed to be University), Visakhapatnam, Andhra Pradesh, India
- M Padmavathi
- Department of Computer Science and Engineering, Swarna Bharathi Institute of Science & Technology, Khammam, India
2
Andrews M, Di Ieva A. Artificial intelligence for brain neuroanatomical segmentation in magnetic resonance imaging: A literature review. J Clin Neurosci 2025; 134:111073. [PMID: 39879724] [DOI: 10.1016/j.jocn.2025.111073]
Abstract
PURPOSE This literature review synthesises current research on the application of artificial intelligence (AI) to the segmentation of brain neuroanatomical structures in magnetic resonance imaging (MRI). METHODS A literature search was conducted using the databases Embase, Medline, Scopus, and Web of Science, and captured articles were assessed for inclusion in the review. Data extraction summarised the AI model used and the key findings of each article, and advantages and disadvantages were identified. RESULTS Following full-text screening, 21 articles were included in the review. The review covers segmentation models applied to the whole brain, cerebral cortex, subcortical structures, the cerebellum, blood vessels, perivascular spaces, and the ventricles. Accuracy of segmentation was generally high, particularly for neuroanatomical structures in healthy cohorts. CONCLUSION AI-based automatic brain segmentation is generally highly accurate and fast for all regions of the human brain. Challenges include robustness to anatomical variability and pathology, largely due to difficulties in accessing sufficient training data.
Affiliation(s)
- Mitchell Andrews
- Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, NSW, Australia
- Antonio Di Ieva
- Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, NSW, Australia; Computational NeuroSurgery (CNS) Lab, Macquarie University, NSW, Australia
3
Wu J, Qin F, Tian F, Li H, Yong X, Liu T, Zhang H, Wu D. Age-specific optimization of the T2-weighted MRI contrast in infant and toddler brain. Magn Reson Med 2025; 93:1014-1025. [PMID: 39428905] [DOI: 10.1002/mrm.30339]
Abstract
PURPOSE In 0-2-year-old brains, the T2-weighted (T2w) contrast between white matter (WM) and gray matter (GM) is weaker than in adult brains and changes rapidly with age. This study aims to design variable-flip-angle (VFA) trains in a 3D fast spin-echo sequence that adapt to the dynamically changing relaxation times to improve the contrast of T2w images of the developing brain. METHODS T1 and T2 relaxation times in 0-2-year-old brains were measured, and several age groups were defined according to the age-dependent pattern of T2 values. Based on the static pseudo-steady-state theory and the extended phase graph algorithm, VFA trains were designed for each age group to maximize WM/GM contrast, constrained by the maximum specific absorption rate and overall signal intensity. The optimized VFA trains were compared with the default train used for adult brains in terms of the relative contrast between WM and GM. The Dice coefficient was used to demonstrate the advantage of contrast-improved images as inputs for automatic tissue segmentation in infant brains. RESULTS The 0-2-year-old pool was divided into groups of 0-8 months, 8-12 months, and 12-24 months. The optimal VFA trains were tested in each age group against the default sequence. Quantitative analyses demonstrated 1.5-2.3-fold improvements in relative contrast in infant and toddler brains at different ages. The Dice coefficient for contrast-optimized images was improved compared with default images (p < 0.001). CONCLUSION An effective strategy was proposed to improve 3D T2w contrast in 0-2-year-old brains.
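The study's headline metric, relative WM/GM contrast, can be computed from mean tissue signals. A minimal sketch follows; the normalization used here (signal difference over mean signal) is an assumption, since the abstract does not give the exact formula:

```python
def relative_contrast(s_wm: float, s_gm: float) -> float:
    """Relative WM/GM contrast: |S_wm - S_gm| normalized by the mean signal.
    (Assumed definition; the paper's exact formula may differ.)"""
    return abs(s_wm - s_gm) / ((s_wm + s_gm) / 2.0)

# Hypothetical mean tissue signals: weak infant contrast vs. an optimized VFA train
baseline = relative_contrast(100.0, 115.0)   # weak default contrast
optimized = relative_contrast(100.0, 132.0)  # contrast after optimization
print(round(optimized / baseline, 1))        # roughly a twofold gain
```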
Affiliation(s)
- Jiani Wu
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, Zhejiang, China
- Fenjie Qin
- Department of Radiology, Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, Zhejiang, China
- Fengyu Tian
- Department of Radiology, Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, Zhejiang, China
- Haotian Li
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, Zhejiang, China
- Xingwang Yong
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, Zhejiang, China
- Tingting Liu
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, Zhejiang, China
- Hongxi Zhang
- Department of Radiology, Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, Zhejiang, China
- Dan Wu
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, Zhejiang, China
- Department of Radiology, Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, Zhejiang, China
4
Chau M, Vu H, Debnath T, Rahman MG. A scoping review of automatic and semi-automatic MRI segmentation in human brain imaging. Radiography (Lond) 2025; 31:102878. [PMID: 39892049] [DOI: 10.1016/j.radi.2025.01.013]
Abstract
INTRODUCTION AI-based segmentation techniques in brain MRI have revolutionized neuroimaging by enhancing the accuracy and efficiency of brain structure analysis. These techniques are pivotal for diagnosing neurodegenerative diseases, classifying psychiatric conditions, and predicting brain age. This scoping review synthesizes current methodologies, identifies key trends, and highlights gaps in the use of automatic and semi-automatic segmentation tools in brain MRI, focusing in particular on their application to healthy populations and their clinical utility. METHODS A scoping review was conducted following Arksey and O'Malley's framework and PRISMA-ScR guidelines. A comprehensive search was performed across six databases for studies published between 2014 and 2024. Studies focusing on AI-based brain segmentation in healthy populations, patients with neurodegenerative diseases, and patients with psychiatric disorders were included, while reviews, case series, and studies without human participants were excluded. RESULTS Thirty-two studies were included, employing various segmentation tools and AI models, such as convolutional neural networks, for segmenting gray matter, white matter, cerebrospinal fluid, and pathological regions. FreeSurfer, which utilizes algorithmic techniques, is also commonly used for automated segmentation. AI models demonstrated high accuracy in brain age prediction, neurodegenerative disease classification, and psychiatric disorder subtyping. Longitudinal studies tracked disease progression, while multimodal approaches integrating MRI with fMRI and PET enhanced diagnostic precision. CONCLUSION AI-based segmentation techniques provide scalable solutions for neuroimaging, advancing personalized brain health strategies and supporting early diagnosis of neurological and psychiatric conditions. However, challenges related to standardization, generalizability, and ethical considerations remain.
IMPLICATIONS FOR PRACTICE The integration of AI tools and algorithm-based methods into clinical workflows can enhance diagnostic accuracy and efficiency, but greater focus on model interpretability, standardization of imaging protocols, and patient consent processes is needed to ensure responsible adoption in practice.
Affiliation(s)
- M Chau
- Faculty of Science and Health, Charles Sturt University, Wagga Wagga, NSW 2678, Australia
- H Vu
- Allied Health and Human Performance Unit, University of South Australia, Adelaide, SA 5000, Australia
- T Debnath
- School of Computing, Mathematics and Engineering, Charles Sturt University, NSW, Australia
- M G Rahman
- School of Computing, Mathematics and Engineering, Charles Sturt University, NSW, Australia
5
Hendrickson TJ, Reiners P, Moore LA, Lundquist JT, Fayzullobekova B, Perrone AJ, Lee EG, Moser J, Day TKM, Alexopoulos D, Styner M, Kardan O, Chamberlain TA, Mummaneni A, Caldas HA, Bower B, Stoyell S, Martin T, Sung S, Fair EA, Carter K, Uriarte-Lopez J, Rueter AR, Yacoub E, Rosenberg MD, Smyser CD, Elison JT, Graham A, Fair DA, Feczko E. BIBSNet: A Deep Learning Baby Image Brain Segmentation Network for MRI Scans. bioRxiv 2025:2023.03.22.533696. [PMID: 36993540] [PMCID: PMC10055337] [DOI: 10.1101/2023.03.22.533696]
Abstract
Objectives Brain segmentation of infant magnetic resonance (MR) images is vitally important for studying typical and atypical brain development. The infant brain undergoes many changes throughout the first years of postnatal life, making tissue segmentation difficult for most existing algorithms. Here we introduce BIBSNet (Baby and Infant Brain Segmentation Neural Network), an open-source, community-driven deep neural network model for robust and generalizable brain segmentation, leveraging data augmentation and a large sample of manually annotated images. Experimental Design Model training and testing included MR brain images from 90 participants aged 0-8 months (median age 4.6 months). Using the BOBs repository of manually annotated real images along with synthetic segmentation images produced using SynthSeg, the model was trained with a 10-fold procedure. Segmentation performance was assessed by comparing BIBSNet- and joint label fusion (JLF)-inferred segmentations to ground-truth segmentations using the Dice Similarity Coefficient (DSC). Additionally, the MR data and FreeSurfer-compatible segmentations from ground truth, JLF, and BIBSNet were processed with the DCAN Labs infant-ABCD-BIDS processing pipeline to further assess model performance on derivative data, including cortical thickness, resting-state connectivity, and brain region volumes. Principal Observations BIBSNet segmentations outperform JLF across all regions based on DSC comparisons, and across nearly all derived metrics. Conclusions BIBSNet shows marked improvement over JLF across all age groups analyzed. The BIBSNet model is 600x faster than JLF, produces FreeSurfer-compatible segmentation labels, and can easily be included in other processing pipelines.
BIBSNet provides a viable alternative for segmenting the brain in the earliest stages of development.
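The Dice Similarity Coefficient (DSC) used above to score BIBSNet and JLF against ground truth is straightforward to compute from binary label masks; a minimal NumPy sketch on toy 1-D masks (not the paper's data):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice Similarity Coefficient between binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy "segmentations": 4 voxels overlap out of 5 predicted and 5 true
pred = np.array([1, 1, 1, 1, 1, 0, 0, 0])
truth = np.array([0, 1, 1, 1, 1, 1, 0, 0])
print(dice_coefficient(pred, truth))  # → 0.8
```

For multi-label segmentations, the same score is computed per label and averaged.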
6
Verma M, Mirza M, Sayal K, Shenoy S, Sahoo SS, Goel A, Kakkar R. Leveraging artificial intelligence to promote COVID-19 appropriate behaviour in a healthcare institution from north India: A feasibility study. Indian J Med Res 2025; 161:81-90. [PMID: 40036109] [DOI: 10.25259/ijmr_337_2024]
Abstract
Background & Objectives Non-pharmacological interventions (NPIs) were crucial in curbing the initial COVID-19 pandemic waves, but compliance was difficult. The primary aim of this study was to assess changes in compliance with NPIs in healthcare settings using artificial intelligence (AI) and to examine the barriers to and facilitators of using AI systems in healthcare. Methods A pre-post intervention study was conducted in a north-Indian hospital between April and July 2022. YOLO-V5 and 3D Cartesian distance algorithm-based AI modules were used to ascertain compliance through several parameters, such as confidence threshold, intersection-over-union threshold, image size, distance threshold (6 feet), and 3D Euclidean distance estimation. Validation was done by evaluating model performance on a labelled test dataset; accuracy was 91.3 per cent. Interventions included daily sensitization and health education for hospital staff and visitors, display of information, education and communication (IEC) materials, and administrative surveillance. In-depth interviews were conducted with stakeholders to assess feasibility issues. Flagged events during the three phases were compared using one-way ANOVA tests in SPSS. Results More social distancing (SD) compliance events were flagged by the module in the intervention phase than in the pre-intervention and post-intervention phases (P < 0.05). Mask non-compliance was significantly lower (P < 0.05) in the pre-intervention phase and highest in the post-intervention phase, with varying differences across intervention phases in the registration hall and medicine out-patient department (OPD). The modules' data safety, data transfer, and cost were the most common concerns. Interpretation & conclusions AI can supplement our efforts against the pandemic and offer indispensable help, with minimal feasibility issues that can be resolved through adequate sensitization and training.
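The social-distancing check described above reduces to thresholding pairwise 3D Euclidean distances between detected persons; a hedged sketch (coordinates and names are illustrative, not the authors' implementation):

```python
import math
from itertools import combinations

def flag_violations(positions, threshold=6.0):
    """Return index pairs whose 3D Euclidean distance falls below the threshold
    (6 feet, matching the study's distance threshold)."""
    flagged = []
    for (i, p), (j, q) in combinations(enumerate(positions), 2):
        if math.dist(p, q) < threshold:
            flagged.append((i, j))
    return flagged

# Hypothetical person centroids (x, y, z) in feet, e.g. from depth estimation
people = [(0, 0, 10), (4, 3, 10), (20, 0, 12)]
print(flag_violations(people))  # → [(0, 1)]: 5 ft apart, under the 6 ft rule
```

In the deployed system these centroids would come from YOLO-V5 detections plus depth estimation; only the thresholding step is shown here.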
Affiliation(s)
- Madhur Verma
- Department of Community Medicine and Family Medicine, All India Institute of Medical Sciences (AIIMS), Bathinda, Punjab, India
- Moonis Mirza
- Department of Hospital Administration, All India Institute of Medical Sciences (AIIMS), Bathinda, Punjab, India
- Karan Sayal
- Department of Machine Intelligence, iVIZZ-AI, Newark, United States
- Sukesh Shenoy
- Department of Machine Intelligence, iVIZZ-AI, New Delhi, India
- Soumya Swaroop Sahoo
- Department of Community Medicine and Family Medicine, All India Institute of Medical Sciences (AIIMS), Bathinda, Punjab, India
- Anil Goel
- Department of Radiation Oncology, All India Institute of Medical Sciences (AIIMS), Bathinda, Punjab, India
- Rakesh Kakkar
- Department of Community Medicine and Family Medicine, All India Institute of Medical Sciences (AIIMS), Bathinda, Punjab, India
7
Zhang H, Lu Z, Gong P, Zhang S, Yang X, Li X, Feng Z, Li A, Xiao C. High-throughput mesoscopic optical imaging data processing and parsing using differential-guided filtered neural networks. Brain Inform 2024; 11:32. [PMID: 39692944] [DOI: 10.1186/s40708-024-00246-7]
Abstract
High-throughput mesoscopic optical imaging technology has tremendously boosted the efficiency of procuring massive mesoscopic datasets from mouse brains. Constrained by the imaging field of view, the image strips obtained by such technologies typically require further processing, such as cross-sectional stitching, artifact removal, and signal-area cropping, to meet the requirements of subsequent analysis. However, a batch of raw array mouse brain data at a resolution of 0.65 × 0.65 × 3 μm³ can reach 220 TB, and the cropping of outer contour areas still relies on manual visual observation, which consumes substantial computational resources and labor. In this paper, we design an efficient deep differential guided filtering module (DDGF) by fusing multi-scale iterative differential guided filtering with deep learning, which effectively refines image details while mitigating background noise. By combining DDGF with a deep learning network, we then propose a lightweight deep differential guided filtering segmentation network (DDGF-SegNet), which demonstrates robust performance on our dataset, achieving a Dice of 0.92, precision of 0.98, recall of 0.91, and Jaccard index of 0.86. Building on the segmentation, we use connectivity analysis to ascertain the three-dimensional spatial orientation of each brain within the array. Furthermore, we streamline the entire workflow with an automated pipeline optimized for cluster-based Message Passing Interface (MPI) parallel computation, which reduces the processing time for a mouse brain dataset to a mere 1.1 h, improving manual efficiency 25-fold and overall data-processing efficiency 2.4-fold, paving the way for more efficient big-data processing and parsing in high-throughput mesoscopic optical imaging.
Affiliation(s)
- Hong Zhang
- Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Sanya, 572025, China
- Zhikang Lu
- Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Sanya, 572025, China
- Peicong Gong
- Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Sanya, 572025, China
- Shilong Zhang
- Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Sanya, 572025, China
- Xiaoquan Yang
- Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Sanya, 572025, China
- HUST-Suzhou Institute for Brainsmatics, JITRI, Suzhou, 215123, China
- Xiangning Li
- Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Sanya, 572025, China
- HUST-Suzhou Institute for Brainsmatics, JITRI, Suzhou, 215123, China
- Zhao Feng
- Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Sanya, 572025, China
- HUST-Suzhou Institute for Brainsmatics, JITRI, Suzhou, 215123, China
- Anan Li
- Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Sanya, 572025, China
- HUST-Suzhou Institute for Brainsmatics, JITRI, Suzhou, 215123, China
- Chi Xiao
- Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Sanya, 572025, China
8
Claros-Olivares CC, Clements RG, McIlvain G, Johnson CL, Brockmeier AJ. MRI-based whole-brain elastography and volumetric measurements to predict brain age. Biol Methods Protoc 2024; 10:bpae086. [PMID: 39902188] [PMCID: PMC11790219] [DOI: 10.1093/biomethods/bpae086]
Abstract
Brain age, a correlate of an individual's chronological age obtained from structural and functional neuroimaging data, enables assessing developmental or neurodegenerative pathology relative to the overall population. Accurately inferring brain age from brain magnetic resonance imaging (MRI) data requires imaging methods sensitive to tissue health and sophisticated statistical models to identify the underlying age-related brain changes. Magnetic resonance elastography (MRE) is a specialized MRI technique that has emerged as a reliable, non-invasive method to measure the brain's mechanical properties, such as viscoelastic shear stiffness and damping ratio. These mechanical properties have been shown to change across the life span, reflect neurodegenerative diseases, and are associated with individual differences in cognitive function. Here, we aim to develop a machine learning framework to accurately predict a healthy individual's chronological age from maps of brain mechanical properties. This framework can later be applied to understand neurostructural deviations from normal in individuals with neurodevelopmental or neurodegenerative conditions. Using 3D convolutional networks as deep learning models, alongside more traditional statistical models, we model chronological age as a function of multiple modalities of whole-brain measurements: stiffness, damping ratio, and volume. Evaluations on held-out subjects show that combining stiffness and volume in a multimodal approach achieves the most accurate predictions. Interpretation of the different models highlights important regions that are distinct between the modalities. The results demonstrate the complementary value of MRE measurements in brain age models, which, in future studies, could improve model sensitivity to brain integrity differences in individuals with neuropathology.
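The multimodal setup, combining stiffness and volume features before regressing on age, can be illustrated with a simple closed-form ridge regression on synthetic features; this is a stand-in sketch under stated assumptions, not the paper's 3D convolutional model:

```python
import numpy as np

def ridge_fit(X, y, lam=1e-3):
    """Closed-form ridge regression: w = (X'X + lam*I)^(-1) X'y."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ y)

# Synthetic stand-ins for regional MRE stiffness and volumetric features
rng = np.random.default_rng(0)
n = 200
stiffness = rng.normal(3.0, 0.3, (n, 4))  # e.g. regional shear stiffness (kPa)
volume = rng.normal(1.0, 0.1, (n, 4))     # e.g. normalized region volumes
age = 60 - 8 * stiffness[:, 0] - 20 * volume[:, 0] + rng.normal(0, 1.0, n)

# Multimodal model: concatenate both feature sets (plus an intercept column)
X = np.hstack([np.ones((n, 1)), stiffness, volume])
w = ridge_fit(X, age)
mae = np.abs(X @ w - age).mean()  # expect an error near the noise level
```

The point of the sketch is the feature concatenation: each modality contributes its own columns, and the fitted weights expose which modality drives the prediction.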
Affiliation(s)
- Rebecca G Clements
- Department of Biomedical Engineering, University of Delaware, Newark, DE 19713, United States
- Department of Biomedical Engineering, Northwestern University, Evanston, IL 60208, United States
- Department of Physical Therapy and Human Movement Sciences, Northwestern University, Chicago, IL 60611, United States
- Grace McIlvain
- Department of Biomedical Engineering, University of Delaware, Newark, DE 19713, United States
- Department of Biomedical Engineering, Columbia University, New York, NY 10027, United States
- Curtis L Johnson
- Department of Electrical & Computer Engineering, University of Delaware, Newark, DE 19716, United States
- Department of Biomedical Engineering, University of Delaware, Newark, DE 19713, United States
- Austin J Brockmeier
- Department of Electrical & Computer Engineering, University of Delaware, Newark, DE 19716, United States
- Department of Computer & Information Sciences, University of Delaware, Newark, DE 19716, United States
9
Zhang R, Du X, Li H. Application and performance enhancement of FAIMS spectral data for deep learning analysis using generative adversarial network reinforcement. Anal Biochem 2024; 694:115627. [PMID: 39033946] [DOI: 10.1016/j.ab.2024.115627]
Abstract
When high-field asymmetric ion mobility spectrometry (FAIMS) is used to process complex mixtures for deep learning analysis, recognition performance suffers from a lack of high-quality data and low sample diversity. In this paper, a generative adversarial network (GAN) method is introduced to simulate and generate highly realistic and diverse spectra, expanding a dataset of real mixture spectra from 15 classes collected by FAIMS. The mixed datasets were fed into VGG and ResNeXt for testing, and the experiments showed that the best recognition performance was achieved at a 1:4 ratio of real to generated data: accuracy improved by 24.19% and 6.43%; precision by 23.71% and 6.97%; recall by 21.08% and 7.09%; and F1-score by 24.50% and 8.23%. These results strongly demonstrate that GANs can effectively expand data volume and increase sample diversity without additional experimental cost, significantly enhancing FAIMS spectral analysis of complex mixtures.
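The dataset-mixing step behind the best-performing 1:4 real-to-generated ratio is independent of the GAN itself and can be sketched directly; the names and data shapes here are illustrative, not the authors' code:

```python
import random

def mix_dataset(real, generated, ratio=4, seed=0):
    """Combine real samples with GAN-generated samples at 1:ratio (real:generated),
    then shuffle so batches interleave both sources."""
    k = min(len(generated), ratio * len(real))
    rng = random.Random(seed)
    mixed = real + rng.sample(generated, k)
    rng.shuffle(mixed)
    return mixed

# Hypothetical spectra tagged by origin
real = [("real", i) for i in range(10)]
fake = [("gan", i) for i in range(100)]
train = mix_dataset(real, fake, ratio=4)
print(len(train))  # → 50: 10 real + 40 generated
```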
Affiliation(s)
- Ruilong Zhang
- School of Life and Environmental Sciences, GuiLin University of Electronic Technology, GuiLin, 541004, China
- Xiaoxia Du
- School of Life and Environmental Sciences, GuiLin University of Electronic Technology, GuiLin, 541004, China
- Hua Li
- School of Life and Environmental Sciences, GuiLin University of Electronic Technology, GuiLin, 541004, China
10
Diniz E, Santini T, Helmet K, Aizenstein HJ, Ibrahim TS. Cross-modality image translation of 3 Tesla Magnetic Resonance Imaging to 7 Tesla using Generative Adversarial Networks. medRxiv 2024:2024.10.16.24315609. [PMID: 39484249] [PMCID: PMC11527090] [DOI: 10.1101/2024.10.16.24315609]
Abstract
The rapid advancements in magnetic resonance imaging (MRI) technology have precipitated a new paradigm wherein cross-modality data translation across diverse imaging platforms, field strengths, and sites is increasingly challenging. This issue is particularly accentuated when transitioning from 3 Tesla (3T) to 7 Tesla (7T) MRI systems. This study proposes a novel solution to these challenges using generative adversarial networks (GANs), specifically the CycleGAN architecture, to create synthetic 7T images from 3T data. Employing a dataset of 1112 and 490 unpaired 3T and 7T MR images, respectively, we trained a 2-dimensional (2D) CycleGAN model, evaluating its performance on a paired dataset of 22 participants scanned at 3T and 7T. Independent testing on 22 distinct participants affirmed the model's proficiency in accurately predicting various tissue types, encompassing cerebrospinal fluid, gray matter, and white matter. Our approach provides a reliable and efficient methodology for synthesizing 7T images, achieving a median Dice of 6.82%, 7.63%, and 4.85% for cerebrospinal fluid (CSF), gray matter (GM), and white matter (WM), respectively, in the testing dataset, thereby significantly aiding in harmonizing heterogeneous datasets. Furthermore, it delineates the potential of GANs to amplify the contrast-to-noise ratio (CNR) relative to 3T, potentially enhancing the diagnostic capability of the images. While acknowledging the risk of model overfitting, our research underscores a promising progression towards harnessing the benefits of 7T MR systems in research investigations while preserving compatibility with existing 3T MR data. This work was previously presented at the ISMRM 2021 conference (Diniz, Helmet, Santini, Aizenstein, & Ibrahim, 2021).
Affiliation(s)
- Eduardo Diniz
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pennsylvania, United States
- Tales Santini
- Department of Bioengineering, University of Pittsburgh, Pennsylvania, United States
- Karim Helmet
- Department of Bioengineering, University of Pittsburgh, Pennsylvania, United States
- Department of Psychiatry, University of Pittsburgh, Pennsylvania, United States
- Howard J. Aizenstein
- Department of Bioengineering, University of Pittsburgh, Pennsylvania, United States
- Department of Psychiatry, University of Pittsburgh, Pennsylvania, United States
- Tamer S. Ibrahim
- Department of Bioengineering, University of Pittsburgh, Pennsylvania, United States
11
Jafrasteh B, Lubián-Gutiérrez M, Lubián-López SP, Benavente-Fernández I. Enhanced Spatial Fuzzy C-Means Algorithm for Brain Tissue Segmentation in T1 Images. Neuroinformatics 2024; 22:407-420. [PMID: 38656595] [PMCID: PMC11579192] [DOI: 10.1007/s12021-024-09661-x]
Abstract
Magnetic Resonance Imaging (MRI) plays an important role in neurology, particularly in the precise segmentation of brain tissues. Accurate segmentation is crucial for diagnosing brain injuries and neurodegenerative conditions. We introduce an Enhanced Spatial Fuzzy C-Means (esFCM) algorithm for 3D T1 MRI segmentation into three tissues: white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). The esFCM employs a weighted least-squares algorithm utilizing the Structural Similarity Index (SSIM) for polynomial bias field correction. It also takes advantage of information from the membership function of the last iteration to compute neighborhood influence. This refinement enhances the algorithm's adaptability to complex image structures, effectively addressing challenges such as intensity irregularities, and contributes to heightened segmentation accuracy. We compare the segmentation accuracy of esFCM against four variants of FCM, a Gaussian Mixture Model (GMM), and the FSL and ANTs algorithms on four different datasets, employing three measurement criteria. Comparative assessments underscore esFCM's superior performance, particularly in scenarios involving added noise and bias fields. The obtained results emphasize the significant potential of the proposed method for MRI image segmentation.
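At the heart of any FCM variant, including esFCM, is the membership update u_ik = 1 / Σ_j (d_ik / d_jk)^(2/(m-1)). A plain, non-spatial NumPy sketch on 1-D intensities follows; the spatial weighting and bias-field correction that distinguish esFCM are deliberately not reproduced:

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """Standard FCM membership update for 1-D intensities:
    u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)), rows sum to 1."""
    d = np.abs(X[:, None] - centers[None, :]) + 1e-12  # distance to each center
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

# Toy intensities against two tissue-like cluster centers
X = np.array([0.1, 0.5, 0.9])
centers = np.array([0.0, 1.0])
u = fcm_memberships(X, centers)  # shape (3 voxels, 2 clusters)
```

A full FCM iteration alternates this update with re-estimating the centers as membership-weighted means until convergence.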
Affiliation(s)
- Bahram Jafrasteh
- Biomedical Research and Innovation Institute of Cádiz (INiBICA) Research Unit, Puerta del Mar University Hospital, Cádiz, 11008, Spain.
- Manuel Lubián-Gutiérrez
- Biomedical Research and Innovation Institute of Cádiz (INiBICA) Research Unit, Puerta del Mar University Hospital, Cádiz, 11008, Spain
- Division of Neonatology, Department of Paediatrics, Puerta del Mar University Hospital, Cádiz, 11008, Spain
- Simón Pedro Lubián-López
- Biomedical Research and Innovation Institute of Cádiz (INiBICA) Research Unit, Puerta del Mar University Hospital, Cádiz, 11008, Spain
- Division of Neonatology, Department of Paediatrics, Puerta del Mar University Hospital, Cádiz, 11008, Spain
- Isabel Benavente-Fernández
- Biomedical Research and Innovation Institute of Cádiz (INiBICA) Research Unit, Puerta del Mar University Hospital, Cádiz, 11008, Spain
- Division of Neonatology, Department of Paediatrics, Puerta del Mar University Hospital, Cádiz, 11008, Spain
- Area of Paediatrics, Department of Child and Mother Health and Radiology, Medical School, University of Cádiz, Cádiz, 11003, Spain
12
Yousefpanah K, Ebadi MJ, Sabzekar S, Zakaria NH, Osman NA, Ahmadian A. An emerging network for COVID-19 CT-scan classification using an ensemble deep transfer learning model. Acta Trop 2024; 257:107277. [PMID: 38878849 DOI: 10.1016/j.actatropica.2024.107277] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2024] [Revised: 05/28/2024] [Accepted: 05/31/2024] [Indexed: 07/09/2024]
Abstract
Over the past few years, the widespread outbreak of COVID-19 has caused the death of millions of people worldwide. Early diagnosis of the virus is essential to control its spread and provide timely treatment. Artificial intelligence methods are often used as powerful tools to reach a COVID-19 diagnosis via computed tomography (CT) samples. In this paper, artificial intelligence-based methods are introduced to diagnose COVID-19. At first, a network called CT6-CNN is designed, and then two ensemble deep transfer learning models are developed based on Xception, ResNet-101, DenseNet-169, and CT6-CNN to reach a COVID-19 diagnosis from CT samples. The publicly available SARS-CoV-2 CT dataset, comprising 2481 CT scans, is utilized for our implementation. The dataset is separated into 2108, 248, and 125 images for training, validation, and testing, respectively. Based on experimental results, the CT6-CNN model achieved 94.66% accuracy, 94.67% precision, 94.67% sensitivity, and a 94.65% F1-score. Moreover, the ensemble learning models reached 99.2% accuracy. Experimental results affirm the effectiveness of the designed models, especially the ensemble deep learning models, in reaching a diagnosis of COVID-19.
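The ensemble step of combining several base models' predictions is commonly implemented as weighted soft voting over per-class probabilities. A minimal sketch under that assumption (the abstract does not publish the exact fusion rule, and the names here are ours):

```python
import numpy as np

def soft_vote(prob_sets, weights=None):
    """Weighted soft-voting ensemble.

    prob_sets: list of (n_samples, n_classes) probability arrays, one per model.
    weights: optional per-model weights; defaults to a uniform average.
    Returns (predicted class per sample, averaged probabilities).
    """
    probs = np.stack(prob_sets)                    # (n_models, n, c)
    if weights is None:
        weights = np.ones(len(prob_sets)) / len(prob_sets)
    avg = np.tensordot(weights, probs, axes=1)     # weighted mean -> (n, c)
    return avg.argmax(axis=1), avg
```

Soft voting lets a confident model outvote an uncertain one, which is one reason ensembles of diverse backbones (Xception, ResNet, DenseNet) often beat any single member.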
Affiliation(s)
- M J Ebadi
- Section of Mathematics, International Telematic University Uninettuno, Corso Vittorio Emanuele II, 39, 00186, Roma, Italy.
- Sina Sabzekar
- Civil Engineering Department, Sharif University of Technology, Tehran, Iran
- Nor Hidayati Zakaria
- Azman Hashim International Business School, Universiti Teknologi Malaysia, Kuala Lumpur, 54100, Malaysia
- Nurul Aida Osman
- Computer and Information Sciences Department, Faculty of Science and Information Technology, Universiti Teknologi Petronas, Malaysia
- Ali Ahmadian
- Decisions Lab, Mediterranea University of Reggio Calabria, Reggio Calabria, Italy; Faculty of Engineering and Natural Sciences, Istanbul Okan University, Istanbul, Turkey.
13
Castro K, Frye RE, Silva E, Vasconcelos C, Hoffmann L, Riesgo R, Vaz J. Feeding-Related Early Signs of Autism Spectrum Disorder: A Narrative Review. J Pers Med 2024; 14:823. [PMID: 39202014 PMCID: PMC11355084 DOI: 10.3390/jpm14080823] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2024] [Revised: 07/17/2024] [Accepted: 07/25/2024] [Indexed: 09/03/2024] Open
Abstract
Feeding difficulties are prevalent among individuals with autism spectrum disorder (ASD). Nevertheless, the knowledge about the association between feeding-related early signs and child development remains limited. This review aimed to describe the signs and symptoms related to feeding during child development and to explore their relevance to the diagnosis of ASD. Specialists in nutrition and/or ASD conducted a search of MEDLINE, PsycINFO, and Web of Science databases. Although studies in typically developing children demonstrate age-related variations in hunger and satiety cues, the literature about early feeding indicators in ASD is scarce. Challenges such as shortened breastfeeding duration, difficulties in introducing solid foods, and atypical mealtime behaviors are frequently observed in children with ASD. The eating difficulties experienced during childhood raise concerns for caregivers who base their feeding practices on their perceptions of food acceptance or refusal. Considering the observed associations between feeding difficulties and ASD, the importance of recognizing feeding-related signs according to developmental milestones is emphasized to alert medical professionals that deviation in the formation of feeding habits and skills could indicate the need for ASD diagnostic investigation.
Affiliation(s)
- Kamila Castro
- Serviço de Neuropediatria do Hospital de Clínicas de Porto Alegre, Porto Alegre 90035-903, RS, Brazil;
- Programa de Pós Graduação em Saúde da Criança e do Adolescente, Universidade Federal do Rio Grande do Sul, Porto Alegre 90610-000, RS, Brazil;
- Programa de Pós-Graduação em Nutrição e Alimentos, Universidade Federal de Pelotas, Pelotas 96010-610, RS, Brazil; (E.S.); (L.H.); (J.V.)
- Richard E Frye
- Autism Discovery and Treatment Foundation and Rossignol Medical Center, 4045 E Union Hills Rd, Phoenix, AZ 85050, USA;
- Eduarda Silva
- Programa de Pós-Graduação em Nutrição e Alimentos, Universidade Federal de Pelotas, Pelotas 96010-610, RS, Brazil; (E.S.); (L.H.); (J.V.)
- Cristiane Vasconcelos
- Programa de Pós Graduação em Saúde da Criança e do Adolescente, Universidade Federal do Rio Grande do Sul, Porto Alegre 90610-000, RS, Brazil;
- Laura Hoffmann
- Programa de Pós-Graduação em Nutrição e Alimentos, Universidade Federal de Pelotas, Pelotas 96010-610, RS, Brazil; (E.S.); (L.H.); (J.V.)
- Rudimar Riesgo
- Serviço de Neuropediatria do Hospital de Clínicas de Porto Alegre, Porto Alegre 90035-903, RS, Brazil;
- Programa de Pós Graduação em Saúde da Criança e do Adolescente, Universidade Federal do Rio Grande do Sul, Porto Alegre 90610-000, RS, Brazil;
- Juliana Vaz
- Programa de Pós-Graduação em Nutrição e Alimentos, Universidade Federal de Pelotas, Pelotas 96010-610, RS, Brazil; (E.S.); (L.H.); (J.V.)
- Faculdade de Nutrição, Universidade Federal de Pelotas, Pelotas 96010-610, RS, Brazil
14
Konar D, Bhattacharyya S, Gandhi TK, Panigrahi BK, Jiang R. 3-D Quantum-Inspired Self-Supervised Tensor Network for Volumetric Segmentation of Medical Images. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2024; 35:10312-10325. [PMID: 37022399 DOI: 10.1109/tnnls.2023.3240238] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
This article introduces a novel shallow 3-D self-supervised tensor neural network in quantum formalism for volumetric segmentation of medical images, with the merit of obviating training and supervision. The proposed network is referred to as the 3-D quantum-inspired self-supervised tensor neural network (3-D-QNet). The underlying architecture of 3-D-QNet is composed of a trinity of volumetric layers, viz., input, intermediate, and output layers, interconnected using an S-connected third-order neighborhood-based topology for voxelwise processing of 3-D medical image data, suitable for semantic segmentation. Each of the volumetric layers contains quantum neurons designated by qubits or quantum bits. The incorporation of tensor decomposition in quantum formalism leads to faster convergence of network operations, precluding the inherently slow convergence faced by classical supervised and self-supervised networks. The segmented volumes are obtained once the network converges. The suggested 3-D-QNet is tailored and tested extensively on the BRATS 2019 brain MR image dataset and the Liver Tumor Segmentation Challenge (LiTS17) dataset in our experiments. The 3-D-QNet has achieved promising dice similarity (DS) compared with time-intensive supervised convolutional neural network (CNN)-based models, such as 3-D-UNet, voxelwise residual network (VoxResNet), Dense-Res-Inception Net (DRINet), and 3-D-ESPNet, thereby showing a potential advantage of our self-supervised shallow network in facilitating semantic segmentation.
15
Faghihpirayesh R, Karimi D, Erdoğmuş D, Gholipour A. Fetal-BET: Brain Extraction Tool for Fetal MRI. IEEE OPEN JOURNAL OF ENGINEERING IN MEDICINE AND BIOLOGY 2024; 5:551-562. [PMID: 39157057 PMCID: PMC11329220 DOI: 10.1109/ojemb.2024.3426969] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2024] [Revised: 05/09/2024] [Accepted: 07/07/2024] [Indexed: 08/20/2024] Open
Abstract
Goal: In this study, we address the critical challenge of fetal brain extraction from MRI sequences. Fetal MRI has played a crucial role in prenatal neurodevelopmental studies and in advancing our knowledge of fetal brain development in-utero. Fetal brain extraction is a necessary first step in most computational fetal brain MRI pipelines. However, it poses significant challenges due to 1) non-standard fetal head positioning, 2) fetal movements during examination, and 3) vastly heterogeneous appearance of the developing fetal brain and the neighboring fetal and maternal anatomy across gestation, and with various sequences and scanning conditions. Development of a machine learning method to effectively address this task requires a large and rich labeled dataset that has not been previously available. Currently, there is no method for accurate fetal brain extraction on various fetal MRI sequences. Methods: In this work, we first built a large annotated dataset of approximately 72,000 2D fetal brain MRI images. Our dataset covers the three common MRI sequences including T2-weighted, diffusion-weighted, and functional MRI acquired with different scanners. These data include images of normal and pathological brains. Using this dataset, we developed and validated deep learning methods, by exploiting the power of the U-Net style architectures, the attention mechanism, feature learning across multiple MRI modalities, and data augmentation for fast, accurate, and generalizable automatic fetal brain extraction. Results: Evaluations on independent test data, including data available from other centers, show that our method achieves accurate brain extraction on heterogeneous test data acquired with different scanners, on pathological brains, and at various gestational stages. Conclusions: By leveraging rich information from diverse multi-modality fetal MRI data, our proposed deep learning solution enables precise delineation of the fetal brain on various fetal MRI sequences.
The robustness of our deep learning model underscores its potential utility for fetal brain imaging.
Affiliation(s)
- Razieh Faghihpirayesh
- Electrical and Computer Engineering Department, Northeastern University, Boston, MA 02115, USA
- Radiology Department, Boston Children's Hospital, and Harvard Medical School, Boston, MA 02115, USA
- Davood Karimi
- Radiology Department, Boston Children's Hospital, and Harvard Medical School, Boston, MA 02115, USA
- Deniz Erdoğmuş
- Electrical and Computer Engineering Department, Northeastern University, Boston, MA 02115, USA
- Ali Gholipour
- Radiology Department, Boston Children's Hospital, and Harvard Medical School, Boston, MA 02115, USA
16
Zhang L, Ning G, Liang H, Han B, Liao H. One-shot neuroanatomy segmentation through online data augmentation and confidence aware pseudo label. Med Image Anal 2024; 95:103182. [PMID: 38688039 DOI: 10.1016/j.media.2024.103182] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2022] [Revised: 11/26/2023] [Accepted: 04/18/2024] [Indexed: 05/02/2024]
Abstract
Recently, deep learning-based brain segmentation methods have achieved great success. However, most approaches focus on supervised segmentation, which requires many high-quality labeled images. In this paper, we pay attention to one-shot segmentation, aiming to learn from one labeled image and a few unlabeled images. We propose an end-to-end unified network that jointly performs deformation modeling and segmentation. Our network consists of a shared encoder, a deformation modeling head, and a segmentation head. In the training phase, the atlas and unlabeled images are input to the encoder to get multi-scale features. The features are then fed to the multi-scale deformation modeling module to estimate the atlas-to-image deformation field. The deformation modeling module implements the estimation at the feature level in a coarse-to-fine manner. Then, we employ the field to generate the augmented image pair through online data augmentation. We do not apply any appearance transformations because the shared encoder can capture appearance variations. Finally, we adopt a supervised segmentation loss for the augmented image. Considering that the unlabeled images still contain rich information, we introduce confidence-aware pseudo labels for them to further boost the segmentation performance. We validate our network on three benchmark datasets. Experimental results demonstrate that our network significantly outperforms other deep single-atlas-based and traditional multi-atlas-based segmentation methods. Notably, the second dataset was collected from multiple centers, and our network still achieves promising segmentation performance on both the seen and unseen test sets, revealing its robustness. The source code will be available at https://github.com/zhangliutong/brainseg.
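Confidence-aware pseudo labeling, as described above, amounts to keeping only those predictions on unlabeled data that the model makes with high certainty. A hypothetical minimal version of such a thresholding rule (not the paper's exact criterion; the function name and threshold are ours):

```python
import numpy as np

def confident_pseudo_labels(probs, threshold=0.9):
    """Select confident pseudo labels from softmax output on unlabeled data.

    probs: (n_voxels, n_classes) class probabilities.
    Returns (labels, mask): argmax labels, and a boolean mask marking
    voxels whose maximum probability meets the confidence threshold.
    Only masked voxels would contribute to the pseudo-label loss.
    """
    conf = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    return labels, conf >= threshold
```

Masking out low-confidence voxels keeps noisy early-training predictions from reinforcing their own mistakes, which is the usual motivation for confidence-aware schemes.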
Affiliation(s)
- Liutong Zhang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Guochen Ning
- School of Clinical Medicine, Tsinghua University, Beijing, China
- Hanying Liang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Boxuan Han
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Hongen Liao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China; School of Biomedical Engineering, Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China.
17
Al-Kadi OS, Al-Emaryeen R, Al-Nahhas S, Almallahi I, Braik R, Mahafza W. Empowering brain cancer diagnosis: harnessing artificial intelligence for advanced imaging insights. Rev Neurosci 2024; 35:399-419. [PMID: 38291768 DOI: 10.1515/revneuro-2023-0115] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2023] [Accepted: 12/10/2023] [Indexed: 02/01/2024]
Abstract
Artificial intelligence (AI) is increasingly being used in the medical field, specifically for brain cancer imaging. In this review, we explore how AI-powered medical imaging can impact the diagnosis, prognosis, and treatment of brain cancer. We discuss various AI techniques, including deep learning and causality learning, and their relevance. Additionally, we examine current applications that provide practical solutions for detecting, classifying, segmenting, and registering brain tumors. Although challenges such as data quality, availability, interpretability, transparency, and ethics persist, we emphasise the enormous potential of intelligent applications in standardising procedures and enhancing personalised treatment, leading to improved patient outcomes. Innovative AI solutions have the power to revolutionise neuro-oncology by enhancing the quality of routine clinical practice.
Affiliation(s)
- Omar S Al-Kadi
- King Abdullah II School for Information Technology, University of Jordan, Amman, 11942, Jordan
- Roa'a Al-Emaryeen
- King Abdullah II School for Information Technology, University of Jordan, Amman, 11942, Jordan
- Sara Al-Nahhas
- King Abdullah II School for Information Technology, University of Jordan, Amman, 11942, Jordan
- Isra'a Almallahi
- Department of Diagnostic Radiology, Jordan University Hospital, Amman, 11942, Jordan
- Ruba Braik
- Department of Diagnostic Radiology, Jordan University Hospital, Amman, 11942, Jordan
- Waleed Mahafza
- Department of Diagnostic Radiology, Jordan University Hospital, Amman, 11942, Jordan
18
Zhang Z, Han C, Wang X, Li H, Li J, Zeng J, Sun S, Wu W. Large field-of-view pine wilt disease tree detection based on improved YOLO v4 model with UAV images. FRONTIERS IN PLANT SCIENCE 2024; 15:1381367. [PMID: 38966144 PMCID: PMC11222607 DOI: 10.3389/fpls.2024.1381367] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/03/2024] [Accepted: 05/29/2024] [Indexed: 07/06/2024]
Abstract
Introduction: Pine wilt disease spreads rapidly, leading to the death of a large number of pine trees. Exploring the corresponding prevention and control measures for different stages of pine wilt disease is of great significance for its prevention and control. Methods: To address the issue of rapid detection of pine wilt in a large field of view, we used a drone to collect multiple sets of diseased tree samples at different times of the year, which made the deep learning model more generalizable. This research improved the YOLO v4 (You Only Look Once version 4) network for detecting pine wilt disease, and the channel attention mechanism module was used to improve the learning ability of the neural network. Results: The ablation experiment found that adding the attention mechanism SENet module, combined with the self-designed feature enhancement module based on the feature pyramid, had the best improvement effect, and the mAP of the improved model was 79.91%. Discussion: Comparing the improved YOLO v4 model with SSD, Faster RCNN, YOLO v3, and YOLO v5, it was found that the mAP of the improved YOLO v4 model was significantly higher than that of the other four models, which provides an efficient solution for intelligent diagnosis of pine wood nematode disease. The improved YOLO v4 model enables precise location and identification of pine wilt trees under changing light conditions. Deployment of the model on a UAV enables large-scale detection of pine wilt disease and helps to solve the challenges of rapid detection and prevention of pine wilt disease.
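The SENet channel-attention module cited in the ablation study can be illustrated with a plain NumPy squeeze-and-excitation gate. This is a generic sketch of the published SE block, not the authors' exact configuration; the weight shapes assume a reduction ratio r and the function name is ours:

```python
import numpy as np

def squeeze_excite(feature_map, w1, w2):
    """Squeeze-and-Excitation channel attention (Hu et al.) in NumPy.

    feature_map: (C, H, W) activations; w1: (C//r, C); w2: (C, C//r),
    where r is the channel reduction ratio.
    Returns the channel-recalibrated feature map, same shape as the input.
    """
    z = feature_map.mean(axis=(1, 2))         # squeeze: global average pool -> (C,)
    s = np.maximum(w1 @ z, 0.0)               # excitation: bottleneck FC + ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))    # expand FC + sigmoid -> per-channel gate
    return feature_map * gate[:, None, None]  # rescale each channel by its gate
```

The learned per-channel gates let the detector amplify channels that respond to discolored crowns and suppress background texture, which is the mechanism the ablation attributes the mAP gain to.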
Affiliation(s)
- Zhenbang Zhang
- College of Engineering, South China Agricultural University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Utilization and Conservation of Food and Medicinal Resources in Northern Region, Shaoguan University, Shaoguan, China
- College of Intelligent Engineering, Shaoguan University, Shaoguan, China
- Chongyang Han
- College of Engineering, South China Agricultural University, Guangzhou, China
- Xinrong Wang
- College of Plant Protection, South China Agricultural University, Guangzhou, China
- Haoxin Li
- College of Engineering, South China Agricultural University, Guangzhou, China
- Jie Li
- College of Artificial Intelligence, Nankai University, Tianjin, China
- Jinbin Zeng
- College of Engineering, South China Agricultural University, Guangzhou, China
- Si Sun
- College of Forestry and Landscape Architecture, South China Agricultural University, Guangzhou, China
- Weibin Wu
- College of Engineering, South China Agricultural University, Guangzhou, China
19
Bongratz F, Rickmann AM, Wachinger C. Neural deformation fields for template-based reconstruction of cortical surfaces from MRI. Med Image Anal 2024; 93:103093. [PMID: 38281362 DOI: 10.1016/j.media.2024.103093] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2023] [Revised: 12/19/2023] [Accepted: 01/22/2024] [Indexed: 01/30/2024]
Abstract
The reconstruction of cortical surfaces is a prerequisite for quantitative analyses of the cerebral cortex in magnetic resonance imaging (MRI). Existing segmentation-based methods separate the surface registration from the surface extraction, which is computationally inefficient and prone to distortions. We introduce Vox2Cortex-Flow (V2C-Flow), a deep mesh-deformation technique that learns a deformation field from a brain template to the cortical surfaces of an MRI scan. To this end, we present a geometric neural network that models the deformation-describing ordinary differential equation in a continuous manner. The network architecture comprises convolutional and graph-convolutional layers, which allows it to work with images and meshes at the same time. V2C-Flow is not only very fast, requiring less than two seconds to infer all four cortical surfaces, but also establishes vertex-wise correspondences to the template during reconstruction. In addition, V2C-Flow is the first approach for cortex reconstruction that models white matter and pial surfaces jointly, therefore avoiding intersections between them. Our comprehensive experiments on internal and external test data demonstrate that V2C-Flow results in cortical surfaces that are state-of-the-art in terms of accuracy. Moreover, we show that the established correspondences are more consistent than in FreeSurfer and that they can directly be utilized for cortex parcellation and group analyses of cortical thickness.
Affiliation(s)
- Fabian Bongratz
- Laboratory for Artificial Intelligence in Medical Imaging, Department of Radiology, Technical University of Munich, Munich 81675, Germany; Munich Center for Machine Learning, Munich, Germany.
- Anne-Marie Rickmann
- Laboratory for Artificial Intelligence in Medical Imaging, Department of Radiology, Technical University of Munich, Munich 81675, Germany; Department of Child and Adolescent Psychiatry, Ludwig-Maximilians-University, Munich 80336, Germany
- Christian Wachinger
- Laboratory for Artificial Intelligence in Medical Imaging, Department of Radiology, Technical University of Munich, Munich 81675, Germany; Department of Child and Adolescent Psychiatry, Ludwig-Maximilians-University, Munich 80336, Germany; Munich Center for Machine Learning, Munich, Germany
20
Kim MJ, Hong E, Yum MS, Lee YJ, Kim J, Ko TS. Deep learning-based, fully automated, pediatric brain segmentation. Sci Rep 2024; 14:4344. [PMID: 38383725 PMCID: PMC10881508 DOI: 10.1038/s41598-024-54663-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2023] [Accepted: 02/15/2024] [Indexed: 02/23/2024] Open
Abstract
The purpose of this study was to demonstrate the performance of a fully automated, deep learning-based brain segmentation (DLS) method in healthy controls and in patients under 11 years of age with a neurodevelopmental disorder caused by SCN1A mutation. The whole-brain, cortical, and subcortical volumes of 21 previously enrolled participants under 11 years of age with a SCN1A mutation and 42 healthy controls were obtained using the DLS method and compared to volumes measured by FreeSurfer with manual correction. Additionally, the volumes calculated with the DLS method were compared between the patients and the control group. The total brain gray and white matter volumes obtained with the DLS method were consistent with those measured by FreeSurfer with manual correction in healthy controls. Among the 68 parcellated cortical volumes, only 7 areas measured by the DLS method differed significantly from those measured by FreeSurfer with manual correction, and the differences decreased with increasing age in the subgroup analysis. The subcortical volumes measured by the DLS method were relatively smaller than those of the FreeSurfer volume analysis. Further, the DLS method could perfectly detect the reduced volume identified by FreeSurfer with manual correction in patients with SCN1A mutations, compared with healthy controls. In a pediatric population, this new, fully automated DLS method is compatible with the classic volumetric analysis using FreeSurfer with manual correction, and it can also detect brain morphological changes in children with a neurodevelopmental disorder.
Affiliation(s)
- Min-Jee Kim
- Department of Pediatrics, Asan Medical Center Children's Hospital, Ulsan University College of Medicine, 88, Olympic-ro 43-Gil, Songpa-Gu, Seoul, 05505, South Korea
- Mi-Sun Yum
- Department of Pediatrics, Asan Medical Center Children's Hospital, Ulsan University College of Medicine, 88, Olympic-ro 43-Gil, Songpa-Gu, Seoul, 05505, South Korea.
- Yun-Jeong Lee
- Department of Pediatrics, Kyungpook National University Hospital and School of Medicine, Kyungpook National University, Daegu, South Korea
- Tae-Sung Ko
- Department of Pediatrics, Asan Medical Center Children's Hospital, Ulsan University College of Medicine, 88, Olympic-ro 43-Gil, Songpa-Gu, Seoul, 05505, South Korea
21
Li J, Jiang P, An Q, Wang GG, Kong HF. Medical image identification methods: A review. Comput Biol Med 2024; 169:107777. [PMID: 38104516 DOI: 10.1016/j.compbiomed.2023.107777] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2023] [Revised: 10/30/2023] [Accepted: 11/28/2023] [Indexed: 12/19/2023]
Abstract
The identification of medical images is an essential task in computer-aided diagnosis and in medical image retrieval and mining. Medical image data mainly include electronic health record data and gene information data, etc. Although intelligent imaging provides a better scheme for medical image analysis than traditional methods that rely on handcrafted features, it remains challenging due to the diversity of imaging modalities and clinical pathologies. Many medical image identification methods provide good schemes for medical image analysis. The concepts pertinent to these methods, such as machine learning, deep learning, convolutional neural networks, transfer learning, and other image processing technologies for medical images, are analyzed and summarized in this paper. We reviewed these recent studies to provide a comprehensive overview of applying these methods in various medical image analysis tasks, such as object detection, image classification, image registration, and segmentation. In particular, we emphasized the latest progress and contributions of different methods in medical image analysis, summarized according to different application scenarios, including classification, segmentation, detection, and image registration. In addition, the applications of different methods are summarized across application areas, such as pulmonary, brain, digital pathology, skin, lung, renal, breast, neuromyelitis, vertebrae, and musculoskeletal imaging. Critical discussion of open challenges and directions for future research is finally summarized. In particular, excellent algorithms from computer vision, natural language processing, and autonomous driving will be applied to medical image recognition in the future.
Affiliation(s)
- Juan Li
- School of Information Engineering, Wuhan Business University, Wuhan, 430056, China; School of Artificial Intelligence, Wuchang University of Technology, Wuhan, 430223, China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, 130012, China
- Pan Jiang
- School of Information Engineering, Wuhan Business University, Wuhan, 430056, China
- Qing An
- School of Artificial Intelligence, Wuchang University of Technology, Wuhan, 430223, China
- Gai-Ge Wang
- School of Computer Science and Technology, Ocean University of China, Qingdao, 266100, China.
- Hua-Feng Kong
- School of Information Engineering, Wuhan Business University, Wuhan, 430056, China.
22
Valdes C, Nataraj P, Kisilewicz K, Simenson A, Leon G, Kang D, Nguyen D, Sura L, Bliznyuk N, Weiss M. Impact of Nutritional Status on Total Brain Tissue Volumes in Preterm Infants. CHILDREN (BASEL, SWITZERLAND) 2024; 11:121. [PMID: 38255433 PMCID: PMC10813841 DOI: 10.3390/children11010121] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/03/2023] [Revised: 01/04/2024] [Accepted: 01/11/2024] [Indexed: 01/24/2024]
Abstract
Preterm infants bypass the crucial in utero period of brain development and are at increased risk of malnutrition. We aimed to determine if their nutritional status is associated with brain tissue volumes at term equivalent age (TEA), applying recently published malnutrition guidelines for preterm infants. We performed a single center retrospective chart review of 198 infants < 30 weeks' gestation between 2018 and 2021. We primarily analyzed the relationship between the manually obtained neonatal MR-based brain tissue volumes with the maximum weight and length z-score. Significant positive linear associations between brain tissue volumes at TEA and weight and length z-scores were found (p < 0.05). Recommended nutrient intake for preterm infants is not routinely achieved despite efforts to optimize nutrition. Neonatal MR-based brain tissue volumes of preterm infants could serve as objective, quantitative and reproducible surrogate parameters of early brain development. Nutrition is a modifiable factor affecting neurodevelopment and these results could perhaps be used as reference data for future timely nutritional interventions to promote optimal brain volume.
Affiliation(s)
- Cyndi Valdes
- Division of Neonatology, Department of Pediatrics, University of Florida, Gainesville, FL 32608, USA; (C.V.); (P.N.); (K.K.); (L.S.)
- Parvathi Nataraj
- Division of Neonatology, Department of Pediatrics, University of Florida, Gainesville, FL 32608, USA; (C.V.); (P.N.); (K.K.); (L.S.)
- Katherine Kisilewicz
- Division of Neonatology, Department of Pediatrics, University of Florida, Gainesville, FL 32608, USA; (C.V.); (P.N.); (K.K.); (L.S.)
- Ashley Simenson
- College of Medicine, Gainesville Campus, University of Florida, Gainesville, FL 32608, USA; (A.S.); (G.L.); (D.K.)
- Gabriela Leon
- College of Medicine, Gainesville Campus, University of Florida, Gainesville, FL 32608, USA; (A.S.); (G.L.); (D.K.)
- Dahyun Kang
- College of Medicine, Gainesville Campus, University of Florida, Gainesville, FL 32608, USA; (A.S.); (G.L.); (D.K.)
- Dai Nguyen
- Department of Pediatrics, University of Florida, Gainesville, FL 32608, USA;
- Livia Sura
- Division of Neonatology, Department of Pediatrics, University of Florida, Gainesville, FL 32608, USA; (C.V.); (P.N.); (K.K.); (L.S.)
- Nikolay Bliznyuk
- Department of Agricultural & Biological Engineering, University of Florida, Gainesville, FL 32608, USA;
- Michael Weiss
- Division of Neonatology, Department of Pediatrics, University of Florida, Gainesville, FL 32608, USA; (C.V.); (P.N.); (K.K.); (L.S.)
23
Mu S, Lu W, Yu G, Zheng L, Qiu J. Deep learning-based grading of white matter hyperintensities enables identification of potential markers in multi-sequence MRI data. Comput Methods Programs Biomed 2024; 243:107904. [PMID: 37924768] [DOI: 10.1016/j.cmpb.2023.107904]
Abstract
BACKGROUND White matter hyperintensities (WMHs) are widely seen in the aging population and are associated with cerebrovascular risk factors and age-related cognitive decline. At present, the structural atrophy and functional alterations that coexist with WMHs lack comprehensive investigation. This study developed a WMHs risk prediction model to evaluate WMHs according to Fazekas scales and to locate potential high-risk regions across the entire brain. METHODS We developed a WMHs risk prediction model consisting of the following steps: the T2 fluid attenuated inversion recovery (T2-FLAIR) image of each participant was first partitioned into 1000 tiles of size 32 × 32 × 1; features were extracted from the tiles using a ResNet18-based feature extractor; and a 1D convolutional neural network (CNN) was then used to score all tiles based on the extracted features. Finally, a multi-layer perceptron (MLP) was constructed to predict the Fazekas scales from the tile scores. The proposed model was trained using T2-FLAIR images. After prediction, we selected tiles with abnormal scores in the test set and evaluated their corresponding gray matter (GM) volume, white matter (WM) volume, fractional anisotropy (FA), mean diffusivity (MD), and cerebral blood flow (CBF) via longitudinal and multi-sequence magnetic resonance imaging (MRI) data analysis. RESULTS The proposed WMHs risk prediction model predicted the Fazekas ratings from the tile scores of T2-FLAIR MRI images with accuracies of 0.656 and 0.621 in the training and test sets, respectively. The longitudinal MRI validation revealed that most of the high-risk tiles predicted by the model in the baseline images showed WMHs at the corresponding positions in the longitudinal images.
The validation on multi-sequence MRI demonstrated that WMHs were associated with GM and WM atrophy and with WM micro-structural and perfusion alterations in high-risk tiles, and multi-modal MRI measures of most high-risk tiles showed significant associations with Mini Mental State Examination (MMSE) scores. CONCLUSION Our proposed WMHs risk prediction model can not only evaluate WMH severity according to Fazekas scales, but can also uncover potential markers of WMHs across modalities. The model has the potential to be used for early detection of WMH-related alterations across the entire brain and of WMH-induced cognitive decline.
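The tile-based grading pipeline described above can be sketched with trivial stand-ins for the learned components; `tile_score` and `fazekas_grade` below are hypothetical placeholders for the paper's ResNet18 feature extractor, 1D CNN scorer, and MLP, and the tile size and thresholds are illustrative only.

```python
# Illustrative sketch (not the authors' code) of the tile-based grading pipeline.

def split_into_tiles(image, tile=4):
    """Split a 2D image (list of rows) into non-overlapping tile x tile patches.
    The paper uses 32 x 32 x 1 tiles; a small size keeps the demo readable."""
    h, w = len(image), len(image[0])
    tiles = []
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            tiles.append([row[c:c + tile] for row in image[r:r + tile]])
    return tiles

def tile_score(patch):
    """Stand-in for ResNet18 features + the 1D CNN scorer: mean intensity."""
    vals = [v for row in patch for v in row]
    return sum(vals) / len(vals)

def fazekas_grade(scores, hi=0.5):
    """Stand-in for the MLP: grade from the fraction of high-scoring tiles."""
    frac = sum(s > hi for s in scores) / len(scores)
    if frac < 0.1:
        return 0
    if frac < 0.3:
        return 1
    if frac < 0.6:
        return 2
    return 3

# A toy 8x8 "FLAIR slice" with one bright (hyperintense) quadrant.
img = [[0.9 if (r < 4 and c < 4) else 0.1 for c in range(8)] for r in range(8)]
scores = [tile_score(t) for t in split_into_tiles(img)]
print(fazekas_grade(scores))  # one bright tile out of four -> grade 1
```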
Affiliation(s)
- Si Mu: College of Mechanical and Electronic Engineering, Shandong Agricultural University, Tai'an, Shandong, 271000, China
- Weizhao Lu: Department of Radiology, the Second Affiliated Hospital of Shandong First Medical University, Tai'an, Shandong, 271000, China
- Guanghui Yu: Department of Radiology, the Second Affiliated Hospital of Shandong First Medical University, Tai'an, Shandong, 271000, China
- Lei Zheng: Department of Radiology, Rushan Hospital of Chinese Medicine, Rushan, Shandong, 264500, China
- Jianfeng Qiu: School of Radiology, Shandong First Medical University & Shandong Academy of Medicine Sciences, Tai'an, Shandong, 271000, China; Science and Technology Innovation Center, Shandong First Medical University & Shandong Academy of Medical Sciences, Jinan, 250000, China
24
Ma J, Kong D, Wu F, Bao L, Yuan J, Liu Y. Densely connected convolutional networks for ultrasound image based lesion segmentation. Comput Biol Med 2024; 168:107725. [PMID: 38006827] [DOI: 10.1016/j.compbiomed.2023.107725]
Abstract
Delineating lesion boundaries plays a central role in diagnosing thyroid and breast cancers, making related therapy plans, and evaluating therapeutic effects. However, manually annotating low-quality ultrasound (US) images is often time-consuming and error-prone, with limited reproducibility, given high speckle noise, heterogeneous appearances, ambiguous boundaries, etc., especially for nodular lesions with large intra-class variance. Accurate lesion segmentation from US images is hence desirable but challenging in clinical practice. In this study, we propose a new densely connected convolutional network (called MDenseNet) architecture to automatically segment nodular lesions from 2D US images, which is first pre-trained over the ImageNet database (called PMDenseNet) and then retrained upon the given US image datasets. We also designed a deep MDenseNet with a pre-training strategy (PDMDenseNet) for segmentation of thyroid and breast nodules by adding a dense block to increase the depth of our MDenseNet. Extensive experiments demonstrate that the proposed MDenseNet-based method can accurately extract multiple nodular lesions, even with complex shapes, from input thyroid and breast US images. Additional experiments show that the introduced MDenseNet-based method also outperforms three state-of-the-art convolutional neural networks in terms of accuracy and reproducibility. Meanwhile, promising results in nodular lesion segmentation from thyroid and breast US images illustrate its great potential in many other clinical segmentation tasks.
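The dense connectivity that MDenseNet builds on can be illustrated by tracking channel counts through one dense block: each layer takes the concatenation of all earlier feature maps and contributes a fixed number of new ones (the growth rate). The sizes below are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch of DenseNet-style connectivity: channel counts only,
# no actual convolutions.

def dense_block_channels(in_ch, growth, layers):
    """Channel count seen by each layer inside one dense block."""
    seen = []
    ch = in_ch
    for _ in range(layers):
        seen.append(ch)       # this layer concatenates everything so far...
        ch += growth          # ...and contributes `growth` new feature maps
    return seen, ch

seen, out_ch = dense_block_channels(in_ch=16, growth=12, layers=4)
print(seen, out_ch)  # [16, 28, 40, 52] 64
```

Adding a dense block, as in PDMDenseNet, deepens the network while keeping per-layer widths small, since every layer reuses all earlier features.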
Affiliation(s)
- Jinlian Ma: School of Integrated Circuits, Shandong University, Jinan 250101, China; Shenzhen Research Institute of Shandong University, A301 Virtual University Park in South District of Shenzhen, China; State Key Lab of CAD&CG, College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
- Dexing Kong: School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, China
- Fa Wu: School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, China
- Lingyun Bao: Department of Ultrasound, Hangzhou First Peoples Hospital, Zhejiang University, Hangzhou, China
- Jing Yuan: School of Mathematics and Statistics, Xidian University, China
- Yusheng Liu: State Key Lab of CAD&CG, College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
25
Deng Z, Huang G, Yuan X, Zhong G, Lin T, Pun CM, Huang Z, Liang Z. QMLS: quaternion mutual learning strategy for multi-modal brain tumor segmentation. Phys Med Biol 2023; 69:015014. [PMID: 38061066] [DOI: 10.1088/1361-6560/ad135e]
Abstract
Objective. Owing to its non-invasiveness and multimodality, magnetic resonance imaging (MRI)-based multi-modal brain tumor segmentation (MBTS) has attracted increasing attention in recent years. With the great success of convolutional neural networks in various computer vision tasks, many MBTS models have been proposed to address the technical challenges of MBTS. However, MBTS tasks usually suffer from limited data collection, so existing studies typically have difficulty fully exploiting the multi-modal MRI images to mine complementary information among different modalities. Approach. We propose a novel quaternion mutual learning strategy (QMLS), which consists of a voxel-wise lesion knowledge mutual learning (VLKML) mechanism and a quaternion multi-modal feature learning (QMFL) module. Specifically, the VLKML mechanism allows the networks to converge to a robust minimum so that aggressive data augmentation techniques can be applied to fully exploit the limited data. The quaternion-valued QMFL module treats different modalities as components of quaternions to sufficiently learn complementary information among modalities in the hypercomplex domain, while reducing the number of parameters by about 75%. Main results. Extensive experiments on the BraTS 2020 and BraTS 2019 datasets indicate that QMLS achieves superior results to current popular methods at lower computational cost. Significance. We propose a novel algorithm for the brain tumor segmentation task that achieves better performance with fewer parameters, which helps the clinical application of automatic brain tumor segmentation.
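The roughly 75% parameter reduction cited above follows from the quaternion representation itself. A sketch of the counting argument, using a fully connected layer for simplicity (an assumption; the QMFL module itself is convolutional): a real layer mapping 4n channels to 4n channels needs (4n)^2 weights, whereas a quaternion layer treats each group of 4 channels as one quaternion and reuses 4 real n x n component matrices across the Hamilton product.

```python
# Parameter-counting sketch for quaternion-valued layers (illustrative only).

def real_layer_params(n):
    """Dense real-valued layer: 4n inputs -> 4n outputs."""
    return (4 * n) ** 2

def quaternion_layer_params(n):
    """Quaternion layer: four shared n x n component matrices."""
    return 4 * n * n

n = 64  # i.e. 256 input and 256 output channels
reduction = 1 - quaternion_layer_params(n) / real_layer_params(n)
print(reduction)  # 0.75 -> the ~75% reduction cited above
```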
Affiliation(s)
- Zhengnan Deng: School of Computer Science and Technology, Guangdong University of Technology, Guangzhou, 510006, People's Republic of China
- Guoheng Huang: School of Computer Science and Technology, Guangdong University of Technology, Guangzhou, 510006, People's Republic of China
- Xiaochen Yuan: Faculty of Applied Sciences, Macao Polytechnic University, Macao, People's Republic of China
- Guo Zhong: School of Information Science and Technology, Guangdong University of Foreign Studies, Guangzhou, 510006, People's Republic of China
- Tongxu Lin: School of Automation, Guangdong University of Technology, Guangzhou, 510006, People's Republic of China
- Chi-Man Pun: Department of Computer and Information Science, University of Macau, Macao, People's Republic of China
- Zhixin Huang: Department of Neurology, Guangdong Second Provincial General Hospital, Guangzhou, 510317, People's Republic of China
- Zhixin Liang: Department of Nuclear Medicine, Jinshazhou Hospital, Guangzhou University of Chinese Medicine, Guangzhou, 510168, People's Republic of China
26
Wang K, Chen J, Martiniuk J, Ma X, Li Q, Measday V, Lu X. Species identification and strain discrimination of fermentation yeasts Saccharomyces cerevisiae and Saccharomyces uvarum using Raman spectroscopy and convolutional neural networks. Appl Environ Microbiol 2023; 89:e0167323. [PMID: 38038459] [PMCID: PMC10734496] [DOI: 10.1128/aem.01673-23]
Abstract
IMPORTANCE The use of S. cerevisiae and S. uvarum yeast starter cultures is common practice in the alcoholic beverage fermentation industry. As yeast strains, whether from the same or different species, have variable fermentation properties, rapid and reliable typing of yeast strains plays an important role in the final quality of the product. In this study, Raman spectroscopy combined with a CNN achieved accurate identification of S. cerevisiae and S. uvarum isolates at both the species and strain levels in a rapid, non-destructive, and easy-to-operate manner. This approach can be utilized to verify the identity of commercialized dry yeast products and to monitor the diversity of yeast strains during fermentation. It offers great benefits as a high-throughput screening method for the agri-food and alcoholic beverage fermentation industries. The proposed method has the potential to be a powerful tool for discriminating S. cerevisiae and S. uvarum strains in taxonomic and ecological studies and in fermentation applications.
Affiliation(s)
- Kaidi Wang: Food, Nutrition and Health Program, Faculty of Land and Food Systems, The University of British Columbia, Vancouver, British Columbia, Canada; Department of Food Science and Agricultural Chemistry, Faculty of Agricultural and Environmental Sciences, McGill University, Sainte-Anne-de-Bellevue, Quebec, Canada
- Jing Chen: Food, Nutrition and Health Program, Faculty of Land and Food Systems, The University of British Columbia, Vancouver, British Columbia, Canada
- Jay Martiniuk: Wine Research Centre, Faculty of Land and Food Systems, The University of British Columbia, Vancouver, British Columbia, Canada
- Xiangyun Ma: School of Precision Instrument and Optoelectronics Engineering, Tianjin University, Tianjin, China
- Qifeng Li: School of Precision Instrument and Optoelectronics Engineering, Tianjin University, Tianjin, China
- Vivien Measday: Wine Research Centre, Faculty of Land and Food Systems, The University of British Columbia, Vancouver, British Columbia, Canada
- Xiaonan Lu: Food, Nutrition and Health Program, Faculty of Land and Food Systems, The University of British Columbia, Vancouver, British Columbia, Canada; Department of Food Science and Agricultural Chemistry, Faculty of Agricultural and Environmental Sciences, McGill University, Sainte-Anne-de-Bellevue, Quebec, Canada
27
Mhlanga ST, Viriri S. Deep learning techniques for isointense infant brain tissue segmentation: a systematic literature review. Front Med (Lausanne) 2023; 10:1240360. [PMID: 38193036] [PMCID: PMC10773803] [DOI: 10.3389/fmed.2023.1240360]
Abstract
Introduction To improve understanding of early brain development in health and disease, it is essential to precisely segment infant brain magnetic resonance imaging (MRI) into white matter (WM) and gray matter (GM), along with cerebrospinal fluid (CSF). However, in the isointense phase (6-8 months of age), owing to ongoing myelination and development, WM and GM display similar intensity levels in both T1-weighted and T2-weighted MRI, making tissue segmentation extremely difficult. Methods This publication presents a comprehensive review of studies on isointense brain MRI segmentation approaches. The main aim and contribution of this study is to aid researchers by providing a thorough review that makes the search for isointense brain MRI segmentation methods easier. The systematic literature review is performed from four points of reference: (1) review of studies concerning isointense brain MRI segmentation; (2) research contributions, future work, and limitations; (3) frequently applied evaluation metrics and datasets; (4) findings of these studies. Results and discussion The review covers studies published between 2012 and 2022. A total of 19 primary studies of isointense brain MRI segmentation were selected to address the research questions stated in this review.
Affiliation(s)
- Serestina Viriri: School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban, South Africa
28
Jing S, Li R, Mu S, Shan S, Li L, Li J, Sun Y, Cui X. Aggregating Dual Attention Residual Network and Convolutional Sparse Autoencoder to Enhance the Diagnosis of Alzheimer's Disease. 2023 International Conference on Human-Centered Cognitive Systems (HCCS) 2023:1-6. [DOI: 10.1109/hccs59561.2023.10452464]
Affiliation(s)
- Shuiqing Jing: Qufu Normal University, School of Computer Science, Rizhao, China
- Ruohan Li: Qufu Normal University, School of Computer Science, Rizhao, China
- Shiguan Mu: Qufu Normal University, School of Computer Science, Rizhao, China
- Shixiao Shan: Qufu Normal University, School of Computer Science, Rizhao, China
- Lanlan Li: Second People's Hospital of Rongcheng City, Department of Radiology, Weihai, China
- Jianlong Li: Rizhao People's Hospital, Department of Radiology, Rizhao, China
- Yuan Sun: Digital Rizhao Company Limited, R&D Department, Rizhao, China
- Xinchun Cui: University of Health and Rehabilitation Science, School of Foundational Education, Qingdao, China
29
Neves Silva S, Aviles Verdera J, Tomi-Tricot R, Neji R, Uus A, Grigorescu I, Wilkinson T, Ozenne V, Lewin A, Story L, De Vita E, Rutherford M, Pushparajah K, Hajnal J, Hutter J. Real-time fetal brain tracking for functional fetal MRI. Magn Reson Med 2023; 90:2306-2320. [PMID: 37465882] [PMCID: PMC10952752] [DOI: 10.1002/mrm.29803]
Abstract
PURPOSE To improve the motion robustness of functional fetal MRI scans by developing an intrinsic real-time motion correction method. MRI provides an ideal tool to characterize fetal brain development and growth. It is, however, a relatively slow imaging technique and therefore extremely susceptible to subject motion, particularly in functional MRI experiments that acquire multiple echo-planar-imaging-based repetitions, for example, diffusion MRI or blood-oxygen-level-dependency MRI. METHODS A 3D UNet was trained on 125 fetal datasets to track the fetal brain position in each repetition of the scan in real time. This tracking, inserted into a Gadgetron pipeline on a clinical scanner, allows the position of the field of view to be updated in a modified echo-planar imaging sequence. The method was evaluated in real time in controlled-motion phantom experiments and ten fetal MR studies (17+4 to 34+3 gestational weeks) at 3T. The localization network was additionally tested retrospectively on 29 low-field (0.55T) datasets. RESULTS Our method achieved real-time fetal head tracking and prospective correction of the acquisition geometry. Localization achieved Dice scores of 84.4% and 82.3% on the unseen 1.5T/3T and 0.55T fetal data, respectively, with values higher for cephalic fetuses and increasing with gestational age. CONCLUSIONS Our technique was able to follow the fetal brain in real time at 3T, even for fetuses under 18 weeks GA, and was successfully applied "offline" to new cohorts at 0.55T. Next, it will be deployed to other modalities such as fetal diffusion MRI and to cohorts of pregnant participants diagnosed with pregnancy complications, for example, pre-eclampsia and congenital heart disease.
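The prospective-correction loop can be sketched as follows. The centroid-of-mask update and the toy 2D mask are assumptions for illustration, standing in for the paper's 3D UNet localizer and Gadgetron-based geometry update.

```python
# Hedged sketch of prospective motion correction: after each repetition, a
# localizer produces a brain mask, and the field-of-view centre is moved to
# the detected brain position before the next repetition is acquired.

def centroid(mask):
    """Centre of mass of a 2D boolean mask (stand-in for the UNet output)."""
    pts = [(r, c) for r, row in enumerate(mask) for c, v in enumerate(row) if v]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def update_fov(fov_centre, mask):
    """Shift the FOV centre to follow the detected brain position."""
    return centroid(mask)

# Toy "brain" occupying rows 2-4 and columns 3-5 of an 8x8 grid.
mask = [[(2 <= r <= 4) and (3 <= c <= 5) for c in range(8)] for r in range(8)]
print(update_fov((0.0, 0.0), mask))  # -> (3.0, 4.0)
```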
Affiliation(s)
- Sara Neves Silva: Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK; Biomedical Engineering Department, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Jordina Aviles Verdera: Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK; Biomedical Engineering Department, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Raphael Tomi-Tricot: Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK; Biomedical Engineering Department, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK; MR Research Collaborations, Siemens Healthcare Limited, Camberley, UK
- Radhouene Neji: Biomedical Engineering Department, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK; MR Research Collaborations, Siemens Healthcare Limited, Camberley, UK
- Alena Uus: Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK; Biomedical Engineering Department, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Irina Grigorescu: Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK; Biomedical Engineering Department, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Thomas Wilkinson: Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK; Biomedical Engineering Department, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Valery Ozenne: CNRS, CRMSB, UMR 5536, IHU Liryc, Université de Bordeaux, Bordeaux, France
- Alexander Lewin: Institute of Neuroscience and Medicine 11, INM-11, Forschungszentrum Jülich, Jülich, Germany; RWTH Aachen University, Aachen, Germany
- Lisa Story: Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK; Department of Women & Children's Health, King's College London, London, UK
- Enrico De Vita: Biomedical Engineering Department, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK; MRI Physics Group, Great Ormond Street Hospital, London, UK
- Mary Rutherford: Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK; Biomedical Engineering Department, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Kuberan Pushparajah: Biomedical Engineering Department, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Jo Hajnal: Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK; Biomedical Engineering Department, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Jana Hutter: Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK; Biomedical Engineering Department, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
30
Russo C, Pirozzi MA, Mazio F, Cascone D, Cicala D, De Liso M, Nastro A, Covelli EM, Cinalli G, Quarantelli M. Fully automated measurement of intracranial CSF and brain parenchyma volumes in pediatric hydrocephalus by segmentation of clinical MRI studies. Med Phys 2023; 50:7921-7933. [PMID: 37166045] [DOI: 10.1002/mp.16445]
Abstract
BACKGROUND Brain parenchyma (BP) and intracranial cerebrospinal fluid (iCSF) volumes measured by fully automated segmentation of clinical brain MRI studies may be useful for the diagnosis and follow-up of pediatric hydrocephalus. However, previously published segmentation techniques either rely on dedicated sequences, not routinely used in clinical practice, or on spatial normalization, which has limited accuracy when severe brain distortions, such as in hydrocephalic patients, are present. PURPOSE We developed a fully automated method to measure BP and iCSF volumes from clinical brain MRI studies of pediatric hydrocephalus patients, exploiting the complementary information contained in the T2- and T1-weighted images commonly used in clinical practice. METHODS The proposed procedure first performs skull-stripping of the combined volumes using a multiparametric method, to obtain a reliable definition of the inner skull profile. It then maximizes the CSF-to-parenchyma contrast by dividing the T2w volume by the T1w volume after full-scale dynamic rescaling, allowing separation of iCSF and BP through a simple thresholding routine. RESULTS Validation against manual tracing on 23 studies (four controls and 19 hydrocephalic patients) showed excellent concordance (ICC > 0.98) and spatial overlap (Dice coefficients ranging from 77.2% for iCSF to 96.8% for intracranial volume). Accuracy was comparable to the intra-operator reproducibility of manual segmentation, as measured in 14 studies processed twice by the same experienced neuroradiologist. Results of applying the algorithm to a dataset of 63 controls and 57 hydrocephalic patients (19 with parenchymal damage), measuring volume changes with normal development and in hydrocephalic patients, are also reported for demonstration purposes.
CONCLUSIONS The proposed approach allows fully automated segmentation of BP and iCSF in clinical studies, even in severely distorted brains, enabling assessment of age- and disease-related changes in intracranial tissue volume with an accuracy comparable to expert manual segmentation.
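The contrast-maximization step lends itself to a short sketch: after rescaling both volumes to a common range, dividing T2w by T1w boosts CSF (bright on T2w, dark on T1w) relative to parenchyma, so a single threshold separates the two classes. The rescaling bounds, threshold, and toy intensities below are assumptions for illustration, not values from the paper.

```python
# Hedged sketch of ratio-based CSF/parenchyma separation on flat voxel lists.

def rescale(vol, lo=0.01, hi=1.0):
    """Full-scale dynamic rescaling; lo > 0 avoids division by zero later."""
    vmin, vmax = min(vol), max(vol)
    return [lo + (v - vmin) * (hi - lo) / (vmax - vmin) for v in vol]

def csf_mask(t2w, t1w, thresh=2.0):
    """Threshold the T2w/T1w ratio to flag CSF-like voxels."""
    t2, t1 = rescale(t2w), rescale(t1w)
    ratio = [a / b for a, b in zip(t2, t1)]
    return [r > thresh for r in ratio]

# Toy voxels: two CSF-like and two parenchyma-like intensity pairs.
t2w = [0.95, 0.90, 0.30, 0.25]   # CSF bright on T2w
t1w = [0.20, 0.15, 0.70, 0.80]   # CSF dark on T1w
print(csf_mask(t2w, t1w))  # -> [True, True, False, False]
```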
Affiliation(s)
- Carmela Russo: Neuroradiology Unit, Department of Neuroscience, Santobono-Pausilipon Children's Hospital, Naples, Italy
- Maria Agnese Pirozzi: Institute of Biostructures and Bioimaging, National Research Council, Naples, Italy; Department of Advanced Medical and Surgical Sciences, University of Campania "Luigi Vanvitelli", Naples, Italy
- Federica Mazio: Neuroradiology Unit, Department of Neuroscience, Santobono-Pausilipon Children's Hospital, Naples, Italy
- Daniele Cascone: Neuroradiology Unit, Department of Neuroscience, Santobono-Pausilipon Children's Hospital, Naples, Italy
- Domenico Cicala: Neuroradiology Unit, Department of Neuroscience, Santobono-Pausilipon Children's Hospital, Naples, Italy
- Maria De Liso: Neuroradiology Unit, Department of Neuroscience, Santobono-Pausilipon Children's Hospital, Naples, Italy
- Anna Nastro: Neuroradiology Unit, Department of Neuroscience, Santobono-Pausilipon Children's Hospital, Naples, Italy
- Eugenio Maria Covelli: Neuroradiology Unit, Department of Neuroscience, Santobono-Pausilipon Children's Hospital, Naples, Italy
- Giuseppe Cinalli: Pediatric Neurosurgery Unit, Department of Neuroscience, Santobono-Pausilipon Children's Hospital, Naples, Italy
- Mario Quarantelli: Institute of Biostructures and Bioimaging, National Research Council, Naples, Italy
31
Yuan G, Lv B, Hao C. Application of artificial neural networks in reproductive medicine. Hum Fertil 2023; 26:1195-1201. [PMID: 36628627] [DOI: 10.1080/14647273.2022.2156301]
Abstract
With the emergence of the information age, the volume of data in reproductive medicine has grown immensely. Nonetheless, healthcare workers who wish to exploit the relevance and implied value of these data to aid clinical decision-making face the difficulty of statistically analysing such large datasets. The widespread application of artificial intelligence in recent years has emerged as a turning point in this regard. Artificial neural networks (ANNs) exhibit the beneficial characteristics of comprehensive analysis and autonomous learning, owing to which they are being applied to disease diagnosis, embryo quality assessment, and prediction of pregnancy outcomes. The present report aims to summarise the application of ANNs in the field of reproduction and analyse their further application potential.
Affiliation(s)
- Guanghui Yuan: Department of Qingdao Medical College, Qingdao University, Qingdao, Shandong, China
- Bohan Lv: Department of Intensive Care Unit, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
- Cuifang Hao: Department of Reproductive Medicine, The Affiliated Women and Children's Hospital of Qingdao University, Qingdao, Shandong, China
32
Keles E, Bagci U. The past, current, and future of neonatal intensive care units with artificial intelligence: a systematic review. NPJ Digit Med 2023; 6:220. [PMID: 38012349] [PMCID: PMC10682088] [DOI: 10.1038/s41746-023-00941-5]
Abstract
Machine learning and deep learning are two subsets of artificial intelligence that involve teaching computers to learn and make decisions from data. Most recent developments in artificial intelligence come from deep learning, which has proven revolutionary in almost all fields, from computer vision to the health sciences. The effects of deep learning in medicine have changed the conventional ways of clinical application significantly. Although some subfields of medicine, such as pediatrics, have been relatively slow in receiving the critical benefits of deep learning, related research in pediatrics has started to accumulate to a significant level, too. Hence, in this paper, we review recently developed machine learning and deep learning-based solutions for neonatology applications. We systematically evaluate the roles of both classical machine learning and deep learning in neonatology, define the methodologies, including algorithmic developments, and describe the remaining challenges in the assessment of neonatal diseases, following the PRISMA 2020 guidelines. To date, the primary areas of focus in neonatology regarding AI applications have included survival analysis, neuroimaging, analysis of vital parameters and biosignals, and retinopathy of prematurity diagnosis. We categorically summarize 106 research articles from 1996 to 2022 and discuss their pros and cons. We also discuss possible directions for new AI models and the future of neonatology with the rising power of AI, suggesting roadmaps for the integration of AI into neonatal intensive care units.
Affiliation(s)
- Elif Keles: Northwestern University, Feinberg School of Medicine, Department of Radiology, Chicago, IL, USA
- Ulas Bagci: Northwestern University, Feinberg School of Medicine, Department of Radiology, Chicago, IL, USA; Northwestern University, Department of Biomedical Engineering, Chicago, IL, USA; Department of Electrical and Computer Engineering, Chicago, IL, USA
33
Yang J, Jiao L, Shang R, Liu X, Li R, Xu L. EPT-Net: Edge Perception Transformer for 3D Medical Image Segmentation. IEEE Trans Med Imaging 2023; 42:3229-3243. [PMID: 37216246] [DOI: 10.1109/tmi.2023.3278461]
Abstract
The convolutional neural network has achieved remarkable results in most medical image segmentation applications. However, the intrinsic locality of the convolution operation limits its ability to model long-range dependencies. Although the Transformer, designed for sequence-to-sequence global prediction, was conceived to solve this problem, it may offer limited positioning capability due to insufficient low-level detail features. Low-level features carry rich fine-grained information, which greatly impacts edge segmentation decisions for different organs. However, a simple CNN module struggles to capture the edge information in fine-grained features, and the computation and memory consumed in processing high-resolution 3D features are costly. This paper proposes an encoder-decoder network, called EPT-Net, that effectively combines edge perception and a Transformer structure to segment medical images accurately. Within this framework, we propose a Dual Position Transformer to effectively enhance the 3D spatial positioning ability. In addition, as low-level features contain detailed information, we introduce an Edge Weight Guidance module that extracts edge information by minimizing an edge information function without adding network parameters. We verified the effectiveness of the proposed method on three datasets: SegTHOR 2019, Multi-Atlas Labeling Beyond the Cranial Vault, and the re-labeled KiTS19 dataset, which we call KiTS19-M. The experimental results show that EPT-Net improves significantly on state-of-the-art medical image segmentation methods.
|
34
|
Taylor A, Habib AR, Kumar A, Wong E, Hasan Z, Singh N. An artificial intelligence algorithm for the classification of sphenoid sinus pneumatisation on sinus computed tomography scans. Clin Otolaryngol 2023; 48:888-894. [PMID: 37488094 DOI: 10.1111/coa.14088] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2022] [Revised: 02/17/2023] [Accepted: 06/18/2023] [Indexed: 07/26/2023]
Abstract
BACKGROUND Classifying sphenoid pneumatisation is an important but often overlooked task in reporting sinus CT scans. Artificial intelligence (AI) and one of its key methods, convolutional neural networks (CNNs), can create algorithms that can learn from data without being programmed with explicit rules and have shown utility in radiological image classification. OBJECTIVE To determine if a trained CNN can accurately classify sphenoid sinus pneumatisation on CT sinus imaging. METHODS Sagittal slices through the natural ostium of the sphenoid sinus were extracted from retrospectively collected bone-window CT scans of the paranasal sinuses for consecutive patients over 6 years. Two blinded Otolaryngology residents reviewed each image and classified the sphenoid sinus pneumatisation as either conchal, presellar or sellar. An AI algorithm was developed using the Microsoft Azure Custom Vision deep learning platform to classify the pattern of pneumatisation. RESULTS Seven hundred eighty images from 400 patients were used to train the algorithm, which was then tested on a further 118 images from 62 patients. The algorithm achieved an accuracy of 93.2% (95% confidence interval [CI] 87.1-97.0), 87.3% (95% CI 79.9-92.7) and 85.6% (95% CI 78.0-91.4) in correctly identifying conchal, presellar and sellar sphenoid pneumatisation, respectively. The overall weighted accuracy of the CNN was 85.9%. CONCLUSION The CNN described demonstrated a moderately accurate classification of sphenoid pneumatisation subtypes on CT scans. The use of CNN-based assistive tools may enable surgeons to achieve safer operative planning through routine automated reporting allowing greater resources to be directed towards the identification of pathology.
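Accuracy figures with 95% confidence intervals, as quoted above, can be computed with a Wilson score interval. The sketch below is illustrative only: the counts are hypothetical (110 of 118 correct gives roughly the 93.2% accuracy reported), and the paper does not state which interval method its authors used:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """95% Wilson score confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# Hypothetical: 110 of 118 test images classified correctly (~93.2%)
lo, hi = wilson_ci(110, 118)  # → approximately (0.872, 0.965)
```

Unlike the simpler normal approximation, the Wilson interval stays inside [0, 1] and behaves sensibly for proportions near 1, which matters at the accuracy levels reported here.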
Affiliation(s)
- Alon Taylor
- Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, New South Wales, Australia
| | - Al-Rahim Habib
- Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, New South Wales, Australia
- Faculty of Medicine and Health, University of Sydney, Sydney, New South Wales, Australia
| | - Ashnil Kumar
- School of Biomedical Engineering, Faculty of Engineering, University of Sydney, Sydney, New South Wales, Australia
- ARC Training Centre for Innovative BioEngineering, Sydney, New South Wales, Australia
| | - Eugene Wong
- Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, New South Wales, Australia
- Westmead Clinical School, Faculty of Medicine and Health, University of Sydney, Sydney, New South Wales, Australia
| | - Zubair Hasan
- Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, New South Wales, Australia
| | - Narinder Singh
- Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, New South Wales, Australia
- Westmead Clinical School, Faculty of Medicine and Health, University of Sydney, Sydney, New South Wales, Australia
| |
|
35
|
Sun H, Yang S, Chen L, Liao P, Liu X, Liu Y, Wang N. Brain tumor image segmentation based on improved FPN. BMC Med Imaging 2023; 23:172. [PMID: 37904116 PMCID: PMC10617057 DOI: 10.1186/s12880-023-01131-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2022] [Accepted: 10/19/2023] [Indexed: 11/01/2023] Open
Abstract
PURPOSE Automatic segmentation of brain tumors by deep learning algorithms is one of the research hotspots in the field of medical image segmentation. An improved FPN network is proposed to improve brain tumor segmentation. MATERIALS AND METHODS To address the problem that the traditional fully convolutional neural network (FCN) has weak processing ability, which leads to the loss of details in tumor segmentation, this paper proposes a brain tumor image segmentation method based on an improved feature pyramid network (FPN) convolutional neural network. To improve the segmentation of brain tumors, we introduced the FPN structure into the U-Net structure, captured multi-scale context by combining the different-scale information in the U-Net model with the multi-receptive-field high-level features of the FPN, and improved the adaptability of the model to features at different scales. RESULTS Performance evaluation indicators show that the proposed improved FPN model achieves 99.1% accuracy, a 92% Dice score and an 86% Jaccard index, outperforming other segmentation models in each metric. In addition, the schematic diagram of the segmentation results shows that the results of our algorithm are closer to the ground truth and show more brain tumor details, while the results of other algorithms are smoother. CONCLUSIONS The experimental results show that this method can effectively segment brain tumor regions and generalizes reasonably well, with a better segmentation effect than other networks. It has positive significance for the clinical diagnosis of brain tumors.
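The Dice score and Jaccard index reported above are closely related overlap metrics (Dice = 2J/(1+J)). A minimal sketch of both on flattened binary masks, for illustration only and not the authors' evaluation code:

```python
def dice_jaccard(pred, truth):
    """Dice coefficient and Jaccard index for two binary masks (flat lists of 0/1)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    jacc = inter / union if union else 1.0
    return dice, jacc

d, j = dice_jaccard([1, 1, 0, 1], [1, 0, 0, 1])
# intersection 2, mask sizes 3 and 2 → Dice = 4/5 = 0.8; union 3 → Jaccard = 2/3
```

Because Dice weights the intersection twice, it is always at least as large as Jaccard on the same masks, which is why papers commonly report a higher Dice than Jaccard, as here (92% vs. 86%).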
Affiliation(s)
- Haitao Sun
- Department of Radiotherapy Room, Zhongshan Hospital of Traditional Chinese Medicine, Zhongshan, Guangdong Province, 528400, China
| | - Shuai Yang
- Department of Radiotherapy and Minimally Invasive Surgery, The Cancer Center of The Fifth Affiliated Hospital of Sun Yat-Sen University, Zhuhai, 519020, China
| | - Lijuan Chen
- Department of Radiotherapy Room, Zhongshan Hospital of Traditional Chinese Medicine, Zhongshan, Guangdong Province, 528400, China
| | - Pingyan Liao
- Department of Radiotherapy Room, Zhongshan Hospital of Traditional Chinese Medicine, Zhongshan, Guangdong Province, 528400, China
| | - Xiangping Liu
- Department of Radiotherapy Room, Zhongshan Hospital of Traditional Chinese Medicine, Zhongshan, Guangdong Province, 528400, China
| | - Ying Liu
- Department of Radiotherapy, The Fifth Affiliated Hospital of Guangzhou Medical University, Guangzhou, 510060, China
| | - Ning Wang
- Department of Radiotherapy Room, Zhongshan Hospital of Traditional Chinese Medicine, Zhongshan, Guangdong Province, 528400, China.
| |
|
36
|
You C, Shen Y, Sun S, Zhou J, Li J, Su G, Michalopoulou E, Peng W, Gu Y, Guo W, Cao H. Artificial intelligence in breast imaging: Current situation and clinical challenges. EXPLORATION (BEIJING, CHINA) 2023; 3:20230007. [PMID: 37933287 PMCID: PMC10582610 DOI: 10.1002/exp.20230007] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/07/2023] [Accepted: 04/30/2023] [Indexed: 11/08/2023]
Abstract
Breast cancer ranks among the most prevalent malignant tumours and is the primary contributor to cancer-related deaths in women. Breast imaging is essential for screening, diagnosis, and therapeutic surveillance. With the increasing demand for precision medicine, the heterogeneous nature of breast cancer makes it necessary to deeply mine and rationally utilize the tremendous amount of breast imaging information. With the rapid advancement of computer science, artificial intelligence (AI) has been noted to have great advantages in processing and mining of image information. Therefore, a growing number of scholars have started to focus on and research the utility of AI in breast imaging. Here, an overview of breast imaging databases and recent advances in AI research are provided, the challenges and problems in this field are discussed, and then constructive advice is further provided for ongoing scientific developments from the perspective of the National Natural Science Foundation of China.
Affiliation(s)
- Chao You
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
| | - Yiyuan Shen
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
| | - Shiyun Sun
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
| | - Jiayin Zhou
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
| | - Jiawei Li
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
| | - Guanhua Su
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Department of Breast Surgery, Key Laboratory of Breast Cancer in Shanghai, Fudan University Shanghai Cancer Center, Shanghai, China
| | | | - Weijun Peng
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
| | - Yajia Gu
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
| | - Weisheng Guo
- Department of Minimally Invasive Interventional Radiology, Key Laboratory of Molecular Target and Clinical Pharmacology, School of Pharmaceutical Sciences and The Second Affiliated Hospital, Guangzhou Medical University, Guangzhou, China
| | - Heqi Cao
- Department of Health Sciences, National Natural Science Foundation of China, Beijing, China
| |
|
37
|
Huang Z, Li W, Wang Y, Liu Z, Zhang Q, Jin Y, Wu R, Quan G, Liang D, Hu Z, Zhang N. MLNAN: Multi-level noise-aware network for low-dose CT imaging implemented with constrained cycle Wasserstein generative adversarial networks. Artif Intell Med 2023; 143:102609. [PMID: 37673577 DOI: 10.1016/j.artmed.2023.102609] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2022] [Revised: 05/17/2023] [Accepted: 06/06/2023] [Indexed: 09/08/2023]
Abstract
Low-dose CT techniques attempt to minimize patients' radiation exposure by estimating high-resolution normal-dose CT images, reducing the risk of radiation-induced cancer. In recent years, many deep learning methods have been proposed to solve this problem by building a mapping function between low-dose CT images and their normal-dose counterparts. However, most of these methods ignore the effect of different radiation doses on the final CT images, which results in large differences in the intensity of the noise observable in CT images. What's more, the noise intensity of low-dose CT images differs significantly across devices from different medical equipment manufacturers. In this paper, we propose a multi-level noise-aware network (MLNAN), implemented with constrained cycle Wasserstein generative adversarial networks, to recover low-dose CT images under uncertain noise levels. In particular, the noise level is classified and reused as a prior pattern in the generator networks, and the discriminator network introduces noise-level determination. Under two dose-reduction strategies, experiments evaluating the performance of the proposed method are conducted on two datasets: the simulated clinical AAPM challenge dataset and commercial CT datasets from United Imaging Healthcare (UIH). The experimental results illustrate the effectiveness of our proposed method in terms of noise suppression and structural detail preservation compared with several other deep-learning-based methods. Ablation studies validate the contribution of the individual components to the afforded performance improvement. Further research on practical clinical applications and other medical modalities is required in future work.
Affiliation(s)
- Zhenxing Huang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
| | - Wenbo Li
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Beijing 101408, China
| | - Yunling Wang
- Department of Radiology, First Affiliated Hospital of Xinjiang Medical University, Urumqi, 830011, China.
| | - Zhou Liu
- Department of Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, 518116, China
| | - Qiyang Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
| | - Yuxi Jin
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
| | - Ruodai Wu
- Department of Radiology, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen 518055, China
| | - Guotao Quan
- Shanghai United Imaging Healthcare, Shanghai 201807, China
| | - Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
| | - Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
| | - Na Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China.
| |
|
38
|
Ciceri T, Squarcina L, Giubergia A, Bertoldo A, Brambilla P, Peruzzo D. Review on deep learning fetal brain segmentation from Magnetic Resonance images. Artif Intell Med 2023; 143:102608. [PMID: 37673558 DOI: 10.1016/j.artmed.2023.102608] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2022] [Revised: 05/31/2023] [Accepted: 06/06/2023] [Indexed: 09/08/2023]
Abstract
Brain segmentation is often the first and most critical step in quantitative analysis of the brain for many clinical applications, including fetal imaging. Different aspects challenge the segmentation of the fetal brain in magnetic resonance imaging (MRI), such as the non-standard position of the fetus owing to his/her movements during the examination, rapid brain development, and the limited availability of imaging data. In recent years, several segmentation methods have been proposed for automatically partitioning the fetal brain from MR images. These algorithms aim to define regions of interest with different shapes and intensities, encompassing the entire brain, or isolating specific structures. Deep learning techniques, particularly convolutional neural networks (CNNs), have become a state-of-the-art approach in the field because they can provide reliable segmentation results over heterogeneous datasets. Here, we review the deep learning algorithms developed in the field of fetal brain segmentation and categorize them according to their target structures. Finally, we discuss the perceived research gaps in the literature of the fetal domain, suggesting possible future research directions that could impact the management of fetal MR images.
Affiliation(s)
- Tommaso Ciceri
- NeuroImaging Laboratory, Scientific Institute IRCCS Eugenio Medea, Bosisio Parini, Italy; Department of Information Engineering, University of Padua, Padua, Italy
| | - Letizia Squarcina
- Department of Pathophysiology and Transplantation, University of Milan, Milan, Italy
| | - Alice Giubergia
- NeuroImaging Laboratory, Scientific Institute IRCCS Eugenio Medea, Bosisio Parini, Italy; Department of Information Engineering, University of Padua, Padua, Italy
| | - Alessandra Bertoldo
- Department of Information Engineering, University of Padua, Padua, Italy; University of Padua, Padova Neuroscience Center, Padua, Italy
| | - Paolo Brambilla
- Department of Pathophysiology and Transplantation, University of Milan, Milan, Italy; Department of Neurosciences and Mental Health, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, Milan, Italy.
| | - Denis Peruzzo
- NeuroImaging Laboratory, Scientific Institute IRCCS Eugenio Medea, Bosisio Parini, Italy
| |
|
39
|
AlHaddad U, Basuhail A, Khemakhem M, Eassa FE, Jambi K. Ensemble Model Based on Hybrid Deep Learning for Intrusion Detection in Smart Grid Networks. SENSORS (BASEL, SWITZERLAND) 2023; 23:7464. [PMID: 37687919 PMCID: PMC10490611 DOI: 10.3390/s23177464] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/26/2023] [Revised: 08/19/2023] [Accepted: 08/22/2023] [Indexed: 09/10/2023]
Abstract
The Smart Grid aims to enhance the electric grid's reliability, safety, and efficiency by utilizing digital information and control technologies. Real-time analysis and state estimation methods are crucial for ensuring proper control implementation. However, the reliance of Smart Grid systems on communication networks makes them vulnerable to cyberattacks, posing a significant risk to grid reliability. To mitigate such threats, efficient intrusion detection and prevention systems are essential. This paper proposes a hybrid deep-learning approach to detect distributed denial-of-service attacks on the Smart Grid's communication infrastructure. Our method combines convolutional neural network and gated recurrent unit (GRU) algorithms. Two datasets were employed: the Intrusion Detection System dataset from the Canadian Institute for Cybersecurity and a custom dataset generated using the OMNeT++ simulator. We also developed a real-time Kafka-based monitoring dashboard to facilitate attack surveillance and resilience. Experimental and simulation results demonstrate that our proposed approach achieves a high accuracy rate of 99.86%.
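The gated recurrent unit at the core of such a hybrid model can be illustrated with a single scalar GRU step. This is a didactic sketch with made-up weight names, not the paper's network:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(h, x, w):
    """One gated-recurrent-unit step for scalar state h and input x.

    w maps the six (hypothetical) weight names to floats:
    wz/uz (update gate), wr/ur (reset gate), wh/uh (candidate state).
    """
    z = sigmoid(w["wz"] * x + w["uz"] * h)                 # update gate
    r = sigmoid(w["wr"] * x + w["ur"] * h)                 # reset gate
    h_tilde = math.tanh(w["wh"] * x + w["uh"] * (r * h))   # candidate state
    return (1 - z) * h + z * h_tilde                       # gated interpolation
```

The update gate z interpolates between keeping the old state and adopting the candidate, which is what lets a GRU carry information across long packet sequences in an intrusion-detection setting.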
Affiliation(s)
- Ulaa AlHaddad
- Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University (KAU), Jeddah 21589, Saudi Arabia; (M.K.); (F.E.E.); (K.J.)
| | - Abdullah Basuhail
- Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University (KAU), Jeddah 21589, Saudi Arabia; (M.K.); (F.E.E.); (K.J.)
| | | | | | | |
|
40
|
Xu C, Liao M, Wang C, Sun J, Lin H. Memristive competitive hopfield neural network for image segmentation application. Cogn Neurodyn 2023; 17:1061-1077. [PMID: 37522050 PMCID: PMC10374519 DOI: 10.1007/s11571-022-09891-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2021] [Revised: 09/06/2022] [Accepted: 09/18/2022] [Indexed: 11/30/2022] Open
Abstract
Image segmentation provides simplified and effective feature information about an image. Neural network algorithms have made significant progress in image segmentation tasks. However, few studies focus on hardware circuit implementations offering high-efficiency analog computation and parallel operation for the image segmentation problem. In this paper, a memristor-based competitive Hopfield neural network circuit is proposed to deal with the image segmentation problem. In this circuit, a memristive crossbar array is used to store synaptic weights and perform matrix operations. The competition module, based on the winner-take-all mechanism, is composed of competition neurons and a competition control circuit, which simplifies the energy function of the Hopfield neural network and realizes the output function. Operational amplifiers and ABM modules are used to perform integration and to process external input information, respectively. Based on these designs, the circuit can automatically iterate and update data. A series of PSPICE simulations is designed to verify the image segmentation capability of this circuit. Comparative experimental results and analysis show that this circuit improves both processing speed and segmentation accuracy compared with other methods. Moreover, the proposed circuit shows good robustness to noise and memristive variation.
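The winner-take-all competition the circuit realizes can be sketched in software as: each pixel's class activations compete, and only the strongest neuron fires. This is an illustrative model only; the paper's contribution is the analog memristive circuit, not this code:

```python
def winner_take_all(currents):
    """One competition step: only the neuron with the largest input fires.

    Ties resolve to the lowest index, mirroring a first-past-the-post latch.
    """
    w = max(range(len(currents)), key=lambda i: currents[i])
    return [1 if i == w else 0 for i in range(len(currents))]

def segment(pixels, centroids):
    """Assign each pixel intensity to its nearest class centroid via WTA."""
    labels = []
    for x in pixels:
        acts = [-(x - c) ** 2 for c in centroids]  # similarity = -squared distance
        labels.append(winner_take_all(acts).index(1))
    return labels

segment([10, 12, 200, 198], [11, 199])  # → [0, 0, 1, 1]
```

In the hardware version these "activations" are currents summed on crossbar columns, and the competition control circuit plays the role of the `max` here.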
Affiliation(s)
- Cong Xu
- College of Computer Science and Electronic Engineering, Hunan University, Changsha, 410082 China
| | - Meiling Liao
- College of Computer Science and Electronic Engineering, Hunan University, Changsha, 410082 China
| | - Chunhua Wang
- College of Computer Science and Electronic Engineering, Hunan University, Changsha, 410082 China
| | - Jingru Sun
- College of Computer Science and Electronic Engineering, Hunan University, Changsha, 410082 China
| | - Hairong Lin
- College of Computer Science and Electronic Engineering, Hunan University, Changsha, 410082 China
| |
|
41
|
Shen DD, Bao SL, Wang Y, Chen YC, Zhang YC, Li XC, Ding YC, Jia ZZ. An automatic and accurate deep learning-based neuroimaging pipeline for the neonatal brain. Pediatr Radiol 2023; 53:1685-1697. [PMID: 36884052 DOI: 10.1007/s00247-023-05620-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/20/2022] [Revised: 01/26/2023] [Accepted: 01/27/2023] [Indexed: 03/09/2023]
Abstract
BACKGROUND Accurate segmentation of neonatal brain tissues and structures is crucial for studying normal development and diagnosing early neurodevelopmental disorders. However, there is a lack of an end-to-end pipeline for automated segmentation and imaging analysis of the normal and abnormal neonatal brain. OBJECTIVE To develop and validate a deep learning-based pipeline for neonatal brain segmentation and analysis of structural magnetic resonance images (MRI). MATERIALS AND METHODS Two cohorts were enrolled in the study: cohort 1 (582 neonates from the developing Human Connectome Project) and cohort 2 (37 neonates imaged using a 3.0-tesla MRI scanner in our hospital). We developed a deep learning-based architecture capable of segmenting the brain into 9 tissues and 87 structures. Extensive validations were then performed for the accuracy, effectiveness, robustness and generality of the pipeline. Furthermore, regional volume and cortical surface estimates were measured through an in-house bash script implemented in FSL (Oxford Centre for Functional MRI of the Brain Software Library) to ensure the reliability of the pipeline. The Dice similarity coefficient (DSC), the 95th percentile Hausdorff distance (H95) and the intraclass correlation coefficient (ICC) were calculated to assess the quality of our pipeline. Finally, we fine-tuned and validated our pipeline on 2-dimensional thick-slice MRI in cohorts 1 and 2. RESULTS The deep learning-based model showed excellent performance for neonatal brain tissue and structural segmentation, with a best DSC of 0.96 and a best H95 of 0.99 mm. In terms of regional volume and cortical surface analysis, our model showed good agreement with ground truth. The ICC values for regional volume were all above 0.80. For the thick-slice image pipeline, the same trend was observed for brain segmentation and analysis; the best DSC and H95 were 0.92 and 3.00 mm, respectively. The regional volumes and surface curvature had ICC values just below 0.80. CONCLUSIONS We propose an automatic, accurate, stable and reliable pipeline for neonatal brain segmentation and analysis from thin- and thick-slice structural MRI. The external validation showed very good reproducibility of the pipeline.
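The H95 metric used above takes the 95th percentile of boundary distances rather than the maximum, making it robust to a few outlier points. A simplified point-set illustration, not the pipeline's implementation:

```python
import math

def hd95(a, b):
    """95th-percentile symmetric Hausdorff distance between two 2D point sets."""
    def directed(src, dst):
        # distance from each source point to its nearest destination point
        return sorted(min(math.dist(p, q) for q in dst) for p in src)

    d = directed(a, b) + directed(b, a)
    d.sort()
    idx = min(len(d) - 1, int(round(0.95 * (len(d) - 1))))
    return d[idx]

square_a = [(0.0, 0.0), (1.0, 0.0)]
square_b = [(0.0, 1.0), (1.0, 1.0)]
hd95(square_a, square_b)  # every nearest-neighbour distance is 1.0 → 1.0
```

Real pipelines compute this over segmentation surface voxels; the percentile step is the same.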
Affiliation(s)
- Dan Dan Shen
- Department of Medical Imaging, Affiliated Hospital and Medical School of Nantong University, NO.20 Xisi Road, Nantong, Jiangsu, 226001, People's Republic of China
| | - Shan Lei Bao
- Department of Nuclear Medicine, Affiliated Hospital and Medical School of Nantong University, Jiangsu, People's Republic of China
| | - Yan Wang
- Department of Medical Imaging, Affiliated Hospital and Medical School of Nantong University, NO.20 Xisi Road, Nantong, Jiangsu, 226001, People's Republic of China
| | - Ying Chi Chen
- Department of Medical Imaging, Affiliated Hospital and Medical School of Nantong University, NO.20 Xisi Road, Nantong, Jiangsu, 226001, People's Republic of China
| | - Yu Cheng Zhang
- Department of Medical Imaging, Affiliated Hospital and Medical School of Nantong University, NO.20 Xisi Road, Nantong, Jiangsu, 226001, People's Republic of China
| | - Xing Can Li
- Department of Medical Imaging, Affiliated Hospital and Medical School of Nantong University, NO.20 Xisi Road, Nantong, Jiangsu, 226001, People's Republic of China
| | - Yu Chen Ding
- Department of Medical Imaging, Affiliated Hospital and Medical School of Nantong University, NO.20 Xisi Road, Nantong, Jiangsu, 226001, People's Republic of China
| | - Zhong Zheng Jia
- Department of Medical Imaging, Affiliated Hospital and Medical School of Nantong University, NO.20 Xisi Road, Nantong, Jiangsu, 226001, People's Republic of China.
| |
|
42
|
Li X, Fang X, Yang G, Su S, Zhu L, Yu Z. TransU²-Net: An Effective Medical Image Segmentation Framework Based on Transformer and U²-Net. IEEE JOURNAL OF TRANSLATIONAL ENGINEERING IN HEALTH AND MEDICINE 2023; 11:441-450. [PMID: 37817826 PMCID: PMC10561737 DOI: 10.1109/jtehm.2023.3289990] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/07/2022] [Revised: 04/15/2023] [Accepted: 06/17/2023] [Indexed: 10/12/2023]
Abstract
BACKGROUND In the past few years, the U-Net based U-shaped architecture with skip-connections has made incredible progress in the field of medical image segmentation. U2-Net achieves good performance in computer vision; however, in medical image segmentation tasks, the heavily nested U2-Net is prone to overfitting. PURPOSE A 2D network structure, TransU2-Net, combining a transformer with a lighter-weight U2-Net, is proposed for automatic segmentation of brain tumor magnetic resonance images (MRI). METHODS The light-weight U2-Net architecture not only obtains multi-scale information but also reduces redundant feature extraction. Meanwhile, the transformer block embedded in the stacked convolutional layers captures more global information, and the transformer with skip-connection enhances spatial-domain information representation. A new multi-scale feature map fusion strategy is proposed as a postprocessing method for better fusing high- and low-dimensional spatial information. RESULTS The proposed TransU2-Net achieves better segmentation results: on the BraTS2021 dataset it reaches an average Dice coefficient of 88.17%, and on the publicly available MSD dataset it achieves a Dice coefficient of 74.69% for tumor evaluation. The TransU2-Net results are also compared with previously proposed 2D segmentation methods. CONCLUSIONS We propose an automatic medical image segmentation method combining transformers and U2-Net, which performs well and is of clinical importance. The experimental results show that the proposed method outperforms other 2D medical image segmentation methods. Clinical Translation Statement: We use the BraTS2021 dataset and the MSD dataset, which are publicly available databases. All experiments in this paper are in accordance with medical ethics.
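One common form of multi-scale feature map fusion is to upsample the coarse map and combine it with the fine one. The sketch below is an illustrative stand-in (nearest-neighbour upsampling, simple averaging); the paper's exact fusion strategy may differ:

```python
def upsample2x(fmap):
    """Nearest-neighbour 2x upsampling of a 2D feature map."""
    out = []
    for row in fmap:
        wide = [v for v in row for _ in (0, 1)]  # duplicate each column
        out.append(wide)
        out.append(list(wide))                   # duplicate each row
    return out

def fuse(high, low):
    """Fuse a high-resolution map with a 2x-upsampled low-resolution one."""
    up = upsample2x(low)
    return [[(h + u) / 2 for h, u in zip(hr, ur)] for hr, ur in zip(high, up)]

fuse([[1, 1], [1, 1]], [[3]])  # → [[2.0, 2.0], [2.0, 2.0]]
```

Averaging is the simplest choice; learned fusion typically replaces it with a 1x1 convolution over concatenated maps, but the resolution-matching step is the same.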
Affiliation(s)
- Xiang Li
- School of Safety Science and Engineering, Anhui University of Science and Technology, Huainan, 232000, China
| | - Xianjin Fang
- School of Computer Science and Engineering, Anhui University of Science and Technology, Huainan, 232000, China
- Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, 230009, China
| | - Gaoming Yang
- School of Computer Science and Engineering, Anhui University of Science and Technology, Huainan, 232000, China
| | - Shuzhi Su
- School of Computer Science and Engineering, Anhui University of Science and Technology, Huainan, 232000, China
| | - Li Zhu
- Shanghai Chest Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, 200030, China
| | - Zekuan Yu
- School of Computer Science and Engineering, Anhui University of Science and Technology, Huainan, 232000, China
- Academy for Engineering and Technology, Fudan University, Shanghai, 200433, China
| |
|
43
|
Murmu A, Kumar P. A novel Gateaux derivatives with efficient DCNN-Resunet method for segmenting multi-class brain tumor. Med Biol Eng Comput 2023:10.1007/s11517-023-02824-z. [PMID: 37338739 DOI: 10.1007/s11517-023-02824-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2022] [Accepted: 03/14/2023] [Indexed: 06/21/2023]
Abstract
In hospitals and pathology, observing the features and locations of brain tumors in magnetic resonance images (MRI) is a crucial task for assisting medical professionals in both treatment and diagnosis. Multi-class information about a brain tumor is often obtained from the patient's MRI dataset. However, this information may vary in shape and size across different brain tumors, making it difficult to detect their locations in the brain. To resolve these issues, a novel customized deep convolutional neural network (DCNN) based Residual-Unet (ResUnet) model with transfer learning (TL) is proposed for predicting the locations of brain tumors in an MRI dataset. The DCNN model is used to extract features from the input images and select the region of interest (ROI), with the TL technique used to speed up training. Furthermore, a min-max normalization approach is used to enhance the color intensity values at the ROI boundary edges in the brain tumor images. Specifically, the boundary edges of the brain tumors are detected using the Gateaux derivatives (GD) method to identify multi-class brain tumors precisely. The proposed scheme has been validated on two datasets, namely the brain tumor and Figshare MRI datasets, for multi-class brain tumor segmentation (BTS). The experimental results were analyzed using the evaluation metrics accuracy (99.78 and 99.03), Jaccard coefficient (93.04 and 94.95), Dice factor coefficient (DFC) (92.37 and 91.94), mean absolute error (MAE) (0.0019 and 0.0013), and mean squared error (MSE) (0.0085 and 0.0012). The proposed system outperforms state-of-the-art segmentation models on the MRI brain tumor dataset.
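The min-max normalization step mentioned above rescales intensities linearly into a fixed range. A minimal sketch, illustrative only and not the authors' code:

```python
def min_max_normalize(values, lo=0.0, hi=1.0):
    """Rescale intensities linearly into [lo, hi]; constant input maps to lo."""
    v_min, v_max = min(values), max(values)
    if v_max == v_min:
        return [lo for _ in values]  # avoid division by zero on flat regions
    scale = (hi - lo) / (v_max - v_min)
    return [lo + (v - v_min) * scale for v in values]

min_max_normalize([0, 50, 100])  # → [0.0, 0.5, 1.0]
```

Applied per image (or per ROI, as described above), this puts heterogeneous MRI intensity ranges on a common scale before edge detection or network input.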
Affiliation(s)
- Anita Murmu
- Computer Science and Engineering, National Institute of Technology Patna, Ashok Rajpath, Patna, 800005, Bihar, India.
| | - Piyush Kumar
- Computer Science and Engineering, National Institute of Technology Patna, Ashok Rajpath, Patna, 800005, Bihar, India
| |
Collapse
|
44
|
Iqbal S, N. Qureshi A, Li J, Mahmood T. On the Analyses of Medical Images Using Traditional Machine Learning Techniques and Convolutional Neural Networks. ARCHIVES OF COMPUTATIONAL METHODS IN ENGINEERING : STATE OF THE ART REVIEWS 2023; 30:3173-3233. [PMID: 37260910 PMCID: PMC10071480 DOI: 10.1007/s11831-023-09899-9] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/01/2022] [Accepted: 02/19/2023] [Indexed: 06/02/2023]
Abstract
The convolutional neural network (CNN) has shown impressive accomplishments in many areas, especially object detection, segmentation, 2D and 3D reconstruction, information retrieval, medical image registration, multi-lingual translation, natural language processing, anomaly detection in video, and speech recognition. The CNN is a special type of neural network with a compelling and effective ability to learn features at several stages during data augmentation. Recently, different interesting and inspiring ideas in Deep Learning (DL), such as new activation functions, hyperparameter optimization, regularization, momentum, and loss functions, have improved the performance, operation, and execution of CNNs. Different internal architectural innovations and representational styles of the CNN have also significantly improved performance. This survey focuses on the internal taxonomy of deep learning and different models of the convolutional neural network, especially the depth and width of models, as well as CNN components, applications, and current challenges of deep learning.
Affiliation(s)
- Saeed Iqbal
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab 54000 Pakistan
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124 Beijing China
- Adnan N. Qureshi
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab 54000 Pakistan
- Jianqiang Li
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124 Beijing China
- Beijing Engineering Research Center for IoT Software and Systems, Beijing University of Technology, Beijing, 100124 Beijing China
- Tariq Mahmood
- Artificial Intelligence and Data Analytics (AIDA) Lab, College of Computer & Information Sciences (CCIS), Prince Sultan University, Riyadh, 11586 Kingdom of Saudi Arabia

45
Generative adversarial feature learning for glomerulopathy histological classification. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104562] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023]
46
Zoetmulder R, Baak L, Khalili N, Marquering HA, Wagenaar N, Benders M, van der Aa NE, Išgum I. Brain segmentation in patients with perinatal arterial ischemic stroke. Neuroimage Clin 2023; 38:103381. [PMID: 36965456 PMCID: PMC10074207 DOI: 10.1016/j.nicl.2023.103381] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2022] [Revised: 02/20/2023] [Accepted: 03/14/2023] [Indexed: 03/19/2023]
Abstract
BACKGROUND Perinatal arterial ischemic stroke (PAIS) is associated with adverse neurological outcomes. Quantification of ischemic lesions and consequent brain development in newborn infants relies on labor-intensive manual assessment of brain tissues and ischemic lesions. Hence, we propose an automatic method utilizing convolutional neural networks (CNNs) to segment brain tissues and ischemic lesions in MRI scans of infants suffering from PAIS. MATERIALS AND METHODS This single-center retrospective study included 115 patients with PAIS who underwent MRI after stroke onset (baseline) and after three months (follow-up). Nine baseline and 12 follow-up MRI scans were manually annotated to provide reference segmentations (white matter, gray matter, basal ganglia and thalami, brainstem, ventricles, extra-ventricular cerebrospinal fluid, and cerebellum, and additionally, on the baseline scans, the ischemic lesions). Two CNNs were trained to perform automatic segmentation on the baseline and follow-up MRIs, respectively. Automatic segmentations were quantitatively evaluated using the Dice coefficient (DC) and the mean surface distance (MSD), and volumetric agreement between manually and automatically obtained segmentations was computed. Moreover, the automatic segmentations were qualitatively evaluated by two experts in a larger set of MRIs without manual annotation; the scan quality of these scans was also qualitatively evaluated to establish its impact on the automatic segmentation performance. RESULTS Automatic brain tissue segmentation led to a DC and MSD between 0.78-0.92 and 0.18-1.08 mm for baseline scans, and between 0.88-0.95 and 0.10-0.58 mm for follow-up scans, respectively. For the ischemic lesions at baseline, the DC and MSD were between 0.72-0.86 and 1.23-2.18 mm, respectively. Volumetric measurements indicated limited oversegmentation of the extra-ventricular cerebrospinal fluid in both the baseline and follow-up scans, oversegmentation of the ischemic lesions in the left hemisphere, and undersegmentation of the ischemic lesions in the right hemisphere. In scans without imaging artifacts, brain tissue segmentation was graded as excellent in more than 85% and 91% of cases for the baseline and follow-up scans, respectively; for the ischemic lesions at baseline, this was the case in 61%. CONCLUSIONS Automatic segmentation of brain tissue and ischemic lesions in MRI scans of patients with PAIS is feasible. The method may allow evaluation of brain development and treatment efficacy in large datasets.
Affiliation(s)
- Riaan Zoetmulder
- Department of Biomedical Engineering and Physics, Amsterdam University Medical Center, Location University of Amsterdam, Amsterdam, the Netherlands; Department of Radiology and Nuclear Medicine, Amsterdam University Medical Center, Location University of Amsterdam, Amsterdam, the Netherlands; Informatics Institute, University of Amsterdam, Amsterdam, the Netherlands
- Lisanne Baak
- Department of Neonatology and Utrecht Brain Center, University Medical Center Utrecht, Utrecht, the Netherlands
- Nadieh Khalili
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, the Netherlands
- Henk A Marquering
- Department of Biomedical Engineering and Physics, Amsterdam University Medical Center, Location University of Amsterdam, Amsterdam, the Netherlands; Department of Radiology and Nuclear Medicine, Amsterdam University Medical Center, Location University of Amsterdam, Amsterdam, the Netherlands
- Nienke Wagenaar
- Department of Neonatology and Utrecht Brain Center, University Medical Center Utrecht, Utrecht, the Netherlands
- Manon Benders
- Department of Neonatology and Utrecht Brain Center, University Medical Center Utrecht, Utrecht, the Netherlands
- Niek E van der Aa
- Department of Neonatology and Utrecht Brain Center, University Medical Center Utrecht, Utrecht, the Netherlands
- Ivana Išgum
- Department of Biomedical Engineering and Physics, Amsterdam University Medical Center, Location University of Amsterdam, Amsterdam, the Netherlands; Department of Radiology and Nuclear Medicine, Amsterdam University Medical Center, Location University of Amsterdam, Amsterdam, the Netherlands; Informatics Institute, University of Amsterdam, Amsterdam, the Netherlands; Amsterdam Cardiovascular Sciences, Amsterdam, the Netherlands.

47
Delayed Surgical Closure of the Patent Ductus Arteriosus: Does the Brain Pay the Price? J Pediatr 2023; 254:25-32. [PMID: 36241053 DOI: 10.1016/j.jpeds.2022.10.010] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/19/2022] [Revised: 10/03/2022] [Accepted: 10/07/2022] [Indexed: 11/07/2022]
Abstract
OBJECTIVE To investigate the relation between the duration of a hemodynamically significant patent ductus arteriosus (PDA), cerebral oxygenation, magnetic resonance imaging-determined brain growth, and 2-year neurodevelopmental outcome in a cohort of infants born preterm whose duct was closed surgically. STUDY DESIGN Infants born preterm at <30 weeks of gestational age who underwent surgical ductal closure between 2008 and 2018 (n = 106) were included in this observational study. Near-infrared spectroscopy-monitored cerebral oxygen saturation during and up to 24 hours after ductal closure, and a Bayley III developmental test at the corrected age of 2 years, are the institutional standard of care for this patient group. Infants also had magnetic resonance imaging at term-equivalent age. RESULTS In total, 90 infants fulfilled the inclusion criteria (median [range]: 25.9 weeks [24.0-28.9]; 856 g [540-1350]). Duration of a PDA ranged from 1 to 41 days. Multivariable linear regression analysis showed that the duration of a PDA negatively influenced cerebellar growth and motor and cognitive outcome at 2 years of corrected age. CONCLUSIONS A prolonged duration of a PDA in this surgical cohort is associated with reduced cerebellar growth and suboptimal neurodevelopmental outcome.
48
Urru A, Nakaki A, Benkarim O, Crovetto F, Segalés L, Comte V, Hahner N, Eixarch E, Gratacos E, Crispi F, Piella G, González Ballester MA. An automatic pipeline for atlas-based fetal and neonatal brain segmentation and analysis. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 230:107334. [PMID: 36682108 DOI: 10.1016/j.cmpb.2023.107334] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/06/2022] [Revised: 11/29/2022] [Accepted: 01/02/2023] [Indexed: 06/17/2023]
Abstract
BACKGROUND AND OBJECTIVE The automatic segmentation of perinatal brain structures in magnetic resonance imaging (MRI) is of utmost importance for the study of brain growth and related complications. While different methods exist for adult and pediatric MRI data, there is a lack of automatic tools for the analysis of perinatal imaging. METHODS In this work, a new pipeline for fetal and neonatal segmentation has been developed. We also report the creation of two new fetal atlases and their use within the pipeline for atlas-based segmentation, based on novel registration methods. The pipeline is also able to extract cortical and pial surfaces and compute features such as curvature, local gyrification index, sulcal depth, and thickness. RESULTS The results show that the introduction of the new templates, together with our segmentation strategy, leads to accurate results when compared with expert annotations, as well as better performance than a reference pipeline (the developing Human Connectome Project, dHCP), for both early- and late-onset fetal brains. CONCLUSIONS These findings show the potential of the presented atlases and the whole pipeline for application in fetal, neonatal, and longitudinal studies, which could lead to dramatic improvements in the understanding of perinatal brain development.
Affiliation(s)
- Andrea Urru
- BCN MedTech, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Ayako Nakaki
- BCNatal | Fetal Medicine Research Center (Hospital Clínic and Hospital Sant Joan de Déu), University of Barcelona, Barcelona, Spain; Institut d'Investigacions Biomédiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain
- Oualid Benkarim
- McConnell Brain Imaging Centre, Montreal Neurological Institute and Hospital, McGill University, Montreal, Quebec, Canada
- Francesca Crovetto
- BCNatal | Fetal Medicine Research Center (Hospital Clínic and Hospital Sant Joan de Déu), University of Barcelona, Barcelona, Spain
- Laura Segalés
- BCNatal | Fetal Medicine Research Center (Hospital Clínic and Hospital Sant Joan de Déu), University of Barcelona, Barcelona, Spain
- Valentin Comte
- BCN MedTech, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Nadine Hahner
- BCNatal | Fetal Medicine Research Center (Hospital Clínic and Hospital Sant Joan de Déu), University of Barcelona, Barcelona, Spain; Institut d'Investigacions Biomédiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain
- Elisenda Eixarch
- BCNatal | Fetal Medicine Research Center (Hospital Clínic and Hospital Sant Joan de Déu), University of Barcelona, Barcelona, Spain; Institut d'Investigacions Biomédiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain; Centre for Biomedical Research on Rare Diseases (CIBERER), Barcelona, Spain
- Eduard Gratacos
- BCNatal | Fetal Medicine Research Center (Hospital Clínic and Hospital Sant Joan de Déu), University of Barcelona, Barcelona, Spain; Institut d'Investigacions Biomédiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain; Centre for Biomedical Research on Rare Diseases (CIBERER), Barcelona, Spain
- Fàtima Crispi
- BCNatal | Fetal Medicine Research Center (Hospital Clínic and Hospital Sant Joan de Déu), University of Barcelona, Barcelona, Spain; Institut d'Investigacions Biomédiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain; Centre for Biomedical Research on Rare Diseases (CIBERER), Barcelona, Spain
- Gemma Piella
- BCN MedTech, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Miguel A González Ballester
- BCN MedTech, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain; ICREA, Barcelona, Spain.

49
Clements RG, Claros-Olivares CC, McIlvain G, Brockmeier AJ, Johnson CL. Mechanical Property Based Brain Age Prediction using Convolutional Neural Networks. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.02.12.528186. [PMID: 36824781 PMCID: PMC9948973 DOI: 10.1101/2023.02.12.528186] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/15/2023]
Abstract
Brain age is a quantitative estimate that describes an individual's structural and functional brain measurements relative to the overall population and is particularly valuable in describing differences related to developmental or neurodegenerative pathology. Accurately inferring brain age from brain imaging data requires sophisticated models that capture the underlying age-related brain changes. Magnetic resonance elastography (MRE) is a phase-contrast MRI technology that uses external palpation to measure brain mechanical properties. Mechanical property measures of viscoelastic shear stiffness and damping ratio have been found to change across the entire life span and to reflect brain health in neurodegenerative diseases and even individual differences in cognitive function. Here we develop and train a multi-modal 3D convolutional neural network (CNN) to model the relationship between age and whole-brain mechanical properties. After training, the network maps the measurements and other inputs to a brain age prediction. We found high performance using 3D maps of various mechanical properties to predict brain age. Stiffness maps alone predicted the ages of the test-group subjects with a mean absolute error (MAE) of 3.76 years, which is comparable to the single input of damping ratio (MAE: 3.82) and outperforms the single input of volume (MAE: 4.60). Combining stiffness and volume in a multimodal approach performed best, with an MAE of 3.60 years, whereas including damping ratio worsened model performance. Our results reflect previous MRE literature demonstrating that stiffness is more strongly related to chronological age than damping ratio. This machine learning model provides the first prediction of brain age from brain biomechanical data, an advancement towards sensitively describing brain integrity differences in individuals with neuropathology.
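The MAE figures quoted in this abstract compare predicted brain ages against chronological ages. A minimal sketch of that metric (the function name and example values are illustrative, not from the paper):

```python
import numpy as np

def mean_absolute_error(predicted_ages, true_ages):
    """Average absolute deviation between predicted and chronological ages (in years)."""
    predicted_ages = np.asarray(predicted_ages, dtype=float)
    true_ages = np.asarray(true_ages, dtype=float)
    return float(np.abs(predicted_ages - true_ages).mean())

# Two hypothetical subjects, off by 2 and 4 years respectively.
mae = mean_absolute_error([30.0, 40.0], [32.0, 44.0])  # (2 + 4) / 2 = 3.0
```

A reported MAE of 3.76 years thus means the model's age predictions deviated from chronological age by 3.76 years on average over the test group.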
50
Wang S, Zheng K, Kong W, Huang R, Liu L, Wen G, Yu Y. Multimodal data fusion based on IGERNNC algorithm for detecting pathogenic brain regions and genes in Alzheimer's disease. Brief Bioinform 2023; 24:6887308. [PMID: 36502428 DOI: 10.1093/bib/bbac515] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2022] [Revised: 09/28/2022] [Accepted: 10/30/2022] [Indexed: 12/14/2022] Open
Abstract
At present, the study of the pathogenesis of Alzheimer's disease (AD) through multimodal data fusion analysis has attracted wide attention. Multimodal medical data, however, often suffer from small sample sizes and high dimensionality. In view of these characteristics, the existing genetic evolution random neural network cluster (GERNNC) model combines a genetic evolution algorithm and neural networks for the classification of AD patients and the extraction of pathogenic factors. However, the model does not take into account the non-linear relationship between brain regions and genes, and the genetic evolution algorithm can fall into local optimal solutions, so the overall performance of the model is not satisfactory. To solve these two problems, this paper improves the construction of fusion features and the genetic evolution algorithm in the GERNNC model and proposes an improved genetic evolution random neural network cluster (IGERNNC) model. The IGERNNC model uses a mutual-information correlation analysis method to combine resting-state functional magnetic resonance imaging data with single nucleotide polymorphism data to construct fusion features. Building on the traditional genetic evolution algorithm, an elite retention strategy and a large-variation genetic algorithm are added to keep the model from falling into local optimal solutions. Through multiple independent experimental comparisons, the IGERNNC model more effectively identifies AD patients and extracts relevant pathogenic factors, and is expected to become an effective tool in the field of AD research.
Affiliation(s)
- Shuaiqun Wang
- School of Information Engineering, Shanghai Maritime University, Shanghai, China
- Kai Zheng
- School of Information Engineering, Shanghai Maritime University, Shanghai, China
- Wei Kong
- School of Information Engineering, Shanghai Maritime University, Shanghai, China
- Ruiwen Huang
- School of Information Engineering, Shanghai Maritime University, Shanghai, China
- Lulu Liu
- School of Information Engineering, Shanghai Maritime University, Shanghai, China
- Gen Wen
- School of Information Engineering, Shanghai Maritime University, Shanghai, China
- Yaling Yu
- School of Information Engineering, Shanghai Maritime University, Shanghai, China