1.
Sun Y, Wang L, Li G, Lin W, Wang L. A foundation model for enhancing magnetic resonance images and downstream segmentation, registration and diagnostic tasks. Nat Biomed Eng 2025; 9:521-538. [PMID: 39638876] [DOI: 10.1038/s41551-024-01283-7]
Abstract
In structural magnetic resonance (MR) imaging, motion artefacts, low resolution, imaging noise and variability in acquisition protocols frequently degrade image quality and confound downstream analyses. Here we report a foundation model for the motion correction, resolution enhancement, denoising and harmonization of MR images. Specifically, we trained a tissue-classification neural network to predict tissue labels, which are then leveraged by a 'tissue-aware' enhancement network to generate high-quality MR images. We validated the model's effectiveness on a large and diverse dataset comprising 2,448 deliberately corrupted images and 10,963 images spanning a wide age range (from foetuses to elderly individuals) acquired using a variety of clinical scanners across 19 public datasets. The model consistently outperformed state-of-the-art algorithms in improving the quality of MR images, handling pathological brains with multiple sclerosis or gliomas, generating 7-T-like images from 3 T scans and harmonizing images acquired from different scanners. The high-quality, high-resolution and harmonized images generated by the model can be used to enhance the performance of models for tissue segmentation, registration, diagnosis and other downstream tasks.
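The two-stage design described in this abstract (tissue labels predicted first, then consumed by a "tissue-aware" enhancement network) can be sketched as follows. This is a minimal illustration, not the authors' code; `classify` and `enhance` are hypothetical stand-ins for the two networks:

```python
import numpy as np

def tissue_aware_enhance(image, classify, enhance):
    """Two-stage pipeline sketched from the abstract: a
    tissue-classification network first predicts tissue labels,
    and a 'tissue-aware' enhancement network then consumes the
    image together with those labels. `classify` and `enhance`
    are placeholders, not the authors' API."""
    labels = classify(image)                    # stage 1: predict tissue labels
    return enhance(np.stack([image, labels]))   # stage 2: label-guided enhancement

# toy stand-ins for the two networks
classify = lambda img: (img > 0.5).astype(float)
enhance = lambda x: x.sum(axis=0)
out = tissue_aware_enhance(np.array([[0.2, 0.8]]), classify, enhance)
```

The point of the structure is that the enhancement stage sees an explicit tissue map alongside the raw intensities, rather than having to infer tissue identity implicitly.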
Affiliation(s)
- Yue Sun
- Developing Brain Computing Lab, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Joint Department of Biomedical Engineering, University of North Carolina at Chapel Hill and North Carolina State University, Chapel Hill, NC, USA
- Limei Wang
- Developing Brain Computing Lab, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Joint Department of Biomedical Engineering, University of North Carolina at Chapel Hill and North Carolina State University, Chapel Hill, NC, USA
- Gang Li
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Weili Lin
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Li Wang
- Developing Brain Computing Lab, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA.
2.
Sun L, Zhao T, Liang X, Xia M, Li Q, Liao X, Gong G, Wang Q, Pang C, Yu Q, Bi Y, Chen P, Chen R, Chen Y, Chen T, Cheng J, Cheng Y, Cui Z, Dai Z, Deng Y, Ding Y, Dong Q, Duan D, Gao JH, Gong Q, Han Y, Han Z, Huang CC, Huang R, Huo R, Li L, Lin CP, Lin Q, Liu B, Liu C, Liu N, Liu Y, Liu Y, Lu J, Ma L, Men W, Qin S, Qiu J, Qiu S, Si T, Tan S, Tang Y, Tao S, Wang D, Wang F, Wang J, Wang P, Wang X, Wang Y, Wei D, Wu Y, Xie P, Xu X, Xu Y, Xu Z, Yang L, Yuan H, Zeng Z, Zhang H, Zhang X, Zhao G, Zheng Y, Zhong S, He Y. Human lifespan changes in the brain's functional connectome. Nat Neurosci 2025; 28:891-901. [PMID: 40181189] [DOI: 10.1038/s41593-025-01907-4]
Abstract
Functional connectivity of the human brain changes through life. Here, we assemble task-free functional and structural magnetic resonance imaging data from 33,250 individuals at 32 weeks of postmenstrual age to 80 years from 132 global sites. We report critical inflection points in the nonlinear growth curves of the global mean and variance of the connectome, peaking in the late fourth and late third decades of life, respectively. After constructing a fine-grained, lifespan-wide suite of system-level brain atlases, we show distinct maturation timelines for functional segregation within different systems. Lifespan growth of regional connectivity is organized along a spatiotemporal cortical axis, transitioning from primary sensorimotor regions to higher-order association regions. These findings elucidate the lifespan evolution of the functional connectome and can serve as a normative reference for quantifying individual variation in development, aging and neuropsychiatric disorders.
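The global mean and variance of the connectome, whose lifespan growth curves this abstract describes, are conventionally computed over the unique region-pair entries of a functional connectivity (correlation) matrix. A minimal sketch under that assumption (not the authors' pipeline):

```python
import numpy as np

def connectome_summary(fc):
    """Return the global mean and variance of a symmetric
    functional-connectivity matrix, using only the unique
    off-diagonal (upper-triangle) edge weights."""
    iu = np.triu_indices_from(fc, k=1)  # indices strictly above the diagonal
    edges = fc[iu]
    return edges.mean(), edges.var()

fc = np.array([[1.0, 0.2, 0.4],
               [0.2, 1.0, 0.6],
               [0.4, 0.6, 1.0]])
mean_fc, var_fc = connectome_summary(fc)  # mean of {0.2, 0.4, 0.6} = 0.4
```

Restricting to the upper triangle avoids double-counting symmetric edges and excludes the uninformative unit diagonal.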
Affiliation(s)
- Lianglong Sun
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Tengda Zhao
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Xinyuan Liang
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Mingrui Xia
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Qiongling Li
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Xuhong Liao
- School of Systems Science, Beijing Normal University, Beijing, China
- Gaolang Gong
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Chinese Institute for Brain Research, Beijing, China
- Qian Wang
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Chenxuan Pang
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Qian Yu
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Yanchao Bi
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Chinese Institute for Brain Research, Beijing, China
- Pindong Chen
- Brainnetome Center & National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Rui Chen
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Yuan Chen
- Department of Magnetic Resonance Imaging, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Taolin Chen
- Department of Radiology, Huaxi MR Research Center (HMRRC), Institute of Radiology, Functional and Molecular Imaging Key Laboratory of Sichuan Province, West China Hospital of Sichuan University, Chengdu, China
- Xiamen Key Laboratory of Psychoradiology and Neuromodulation, Department of Radiology, West China Xiamen Hospital of Sichuan University, Xiamen, China
- Jingliang Cheng
- Department of Magnetic Resonance Imaging, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Yuqi Cheng
- Department of Psychiatry, First Affiliated Hospital of Kunming Medical University, Kunming, China
- Zaixu Cui
- Chinese Institute for Brain Research, Beijing, China
- Zhengjia Dai
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Yao Deng
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Yuyin Ding
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Qi Dong
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Dingna Duan
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Jia-Hong Gao
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
- Beijing City Key Laboratory for Medical Physics and Engineering, Institute of Heavy Ion Physics, School of Physics, Peking University, Beijing, China
- IDG/McGovern Institute for Brain Research, Peking University, Beijing, China
- Qiyong Gong
- Department of Radiology, Huaxi MR Research Center (HMRRC), Institute of Radiology, Functional and Molecular Imaging Key Laboratory of Sichuan Province, West China Hospital of Sichuan University, Chengdu, China
- Xiamen Key Laboratory of Psychoradiology and Neuromodulation, Department of Radiology, West China Xiamen Hospital of Sichuan University, Xiamen, China
- Ying Han
- Department of Neurology, Xuanwu Hospital of Capital Medical University, Beijing, China
- Zaizhu Han
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Chu-Chung Huang
- Key Laboratory of Brain Functional Genomics (Ministry of Education), Affiliated Mental Health Center (ECNU), School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Ruiwang Huang
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Ran Huo
- Department of Radiology, Peking University Third Hospital, Beijing, China
- Lingjiang Li
- Department of Psychiatry, and National Clinical Research Center for Mental Disorders, The Second Xiangya Hospital of Central South University, Changsha, China
- Mental Health Institute of Central South University, China National Technology Institute on Mental Disorders, Hunan Technology Institute of Psychiatry, Hunan Key Laboratory of Psychiatry and Mental Health, Hunan Medical Center for Mental Health, Changsha, China
- Ching-Po Lin
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China
- Institute of Neuroscience, National Yang Ming Chiao Tung University, Taipei, China
- Department of Education and Research, Taipei City Hospital, Taipei, China
- Qixiang Lin
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Bangshan Liu
- Department of Psychiatry, and National Clinical Research Center for Mental Disorders, The Second Xiangya Hospital of Central South University, Changsha, China
- Mental Health Institute of Central South University, China National Technology Institute on Mental Disorders, Hunan Technology Institute of Psychiatry, Hunan Key Laboratory of Psychiatry and Mental Health, Hunan Medical Center for Mental Health, Changsha, China
- Chao Liu
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Ningyu Liu
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Ying Liu
- Department of Radiology, Peking University Third Hospital, Beijing, China
- Yong Liu
- Center for Artificial Intelligence in Medical Imaging, School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China
- Jing Lu
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Leilei Ma
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Weiwei Men
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
- Beijing City Key Laboratory for Medical Physics and Engineering, Institute of Heavy Ion Physics, School of Physics, Peking University, Beijing, China
- Shaozheng Qin
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Chinese Institute for Brain Research, Beijing, China
- Jiang Qiu
- Key Laboratory of Cognition and Personality (SWU), Ministry of Education, Chongqing, China
- Department of Psychology, Southwest University, Chongqing, China
- Shijun Qiu
- Department of Radiology, The First Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- Tianmei Si
- Peking University Sixth Hospital, Peking University Institute of Mental Health, NHC Key Laboratory of Mental Health (Peking University), National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital), Peking University, Beijing, China
- Shuping Tan
- Beijing Huilongguan Hospital, Peking University Huilongguan Clinical Medical School, Beijing, China
- Yanqing Tang
- Department of Psychiatry, The First Affiliated Hospital of China Medical University, Shenyang, China
- Sha Tao
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Dawei Wang
- Department of Radiology, Qilu Hospital of Shandong University, Ji'nan, China
- Fei Wang
- Department of Psychiatry, The First Affiliated Hospital of China Medical University, Shenyang, China
- Jiali Wang
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Pan Wang
- Department of Neurology, Tianjin Huanhu Hospital, Tianjin University, Tianjin, China
- Xiaoqin Wang
- Key Laboratory of Cognition and Personality (SWU), Ministry of Education, Chongqing, China
- Department of Psychology, Southwest University, Chongqing, China
- Yanpei Wang
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Dongtao Wei
- Key Laboratory of Cognition and Personality (SWU), Ministry of Education, Chongqing, China
- Department of Psychology, Southwest University, Chongqing, China
- Yankun Wu
- Peking University Sixth Hospital, Peking University Institute of Mental Health, NHC Key Laboratory of Mental Health (Peking University), National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital), Peking University, Beijing, China
- Peng Xie
- Chongqing Key Laboratory of Neurobiology, Chongqing, China
- Department of Neurology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Xiufeng Xu
- Department of Psychiatry, First Affiliated Hospital of Kunming Medical University, Kunming, China
- Yuehua Xu
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Zhilei Xu
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Liyuan Yang
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Huishu Yuan
- Department of Radiology, Peking University Third Hospital, Beijing, China
- Zilong Zeng
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Haibo Zhang
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Xi Zhang
- Department of Neurology, the Second Medical Centre, National Clinical Research Centre for Geriatric Diseases, Chinese PLA General Hospital, Beijing, China
- Gai Zhao
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Yanting Zheng
- Department of Radiology, The First Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- Suyu Zhong
- Center for Artificial Intelligence in Medical Imaging, School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China
- Yong He
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China.
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China.
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China.
- Chinese Institute for Brain Research, Beijing, China.
3.
Wu J, Qin F, Tian F, Li H, Yong X, Liu T, Zhang H, Wu D. Age-specific optimization of the T2-weighted MRI contrast in infant and toddler brain. Magn Reson Med 2025; 93:1014-1025. [PMID: 39428905] [DOI: 10.1002/mrm.30339]
Abstract
PURPOSE In 0-2-year-old brains, the T2-weighted (T2w) contrast between white matter (WM) and gray matter (GM) is weaker compared with that in adult brains and rapidly changes with age. This study aims to design variable-flip-angle (VFA) trains in 3D fast spin-echo sequence that adapt to the dynamically changing relaxation times to improve the contrast in the T2w images of the developing brains. METHODS T1 and T2 relaxation times in 0-2-year-old brains were measured, and several age groups were defined according to the age-dependent pattern of T2 values. Based on the static pseudo-steady-state theory and the extended phase graph algorithm, VFA trains were designed for each age group to maximize WM/GM contrast, constrained by the maximum specific absorption rate and overall signal intensity. The optimized VFA trains were compared with the default one used for adult brains based on the relative contrast between WM and GM. Dice coefficient was used to demonstrate the advantage of contrast-improved images as inputs for automatic tissue segmentation in infant brains. RESULTS The 0-2-year-old pool was divided into groups of 0-8 months, 8-12 months, and 12-24 months. The optimal VFA trains were tested in each age group in comparison with the default sequence. Quantitative analyses demonstrated improved relative contrasts in infant and toddler brains by 1.5-2.3-fold at different ages. The Dice coefficient for contrast-optimized images was improved compared with default images (p < 0.001). CONCLUSION An effective strategy was proposed to improve the 3D T2w contrast in 0-2-year-old brains.
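The abstract's comparison criterion is the relative contrast between WM and GM. The paper's exact definition is not given here; a common convention is the absolute signal difference normalized by the mean signal, sketched below with illustrative signal values:

```python
def relative_contrast(s_wm, s_gm):
    """Relative WM/GM contrast: absolute signal difference
    normalized by the mean signal. This is one common definition;
    the paper's exact formula may differ."""
    return abs(s_wm - s_gm) / ((s_wm + s_gm) / 2.0)

# hypothetical mean tissue signals from a T2w image
rc = relative_contrast(120.0, 100.0)
```

Under this definition, the reported 1.5-2.3-fold improvement would mean the optimized VFA trains widen the normalized WM/GM signal gap by that factor relative to the default adult sequence.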
Affiliation(s)
- Jiani Wu
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, Zhejiang, China
- Fenjie Qin
- Department of Radiology, Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, Zhejiang, China
- Fengyu Tian
- Department of Radiology, Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, Zhejiang, China
- Haotian Li
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, Zhejiang, China
- Xingwang Yong
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, Zhejiang, China
- Tingting Liu
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, Zhejiang, China
- Hongxi Zhang
- Department of Radiology, Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, Zhejiang, China
- Dan Wu
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, Zhejiang, China
- Department of Radiology, Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, Zhejiang, China
4.
Dénes-Fazakas L, Kovács L, Eigner G, Szilágyi L. Enhanced U-Net for Infant Brain MRI Segmentation: A (2+1)D Convolutional Approach. Sensors (Basel) 2025; 25:1531. [PMID: 40096351] [PMCID: PMC11902485] [DOI: 10.3390/s25051531]
Abstract
BACKGROUND Infant brain tissue segmentation from MRI data is a critical task in medical imaging, particularly challenging due to the evolving nature of tissue contrasts in the early months of life. The difficulty increases as gray matter (GM) and white matter (WM) intensities converge, making accurate delineation difficult. This study aims to develop an improved U-net-based model to enhance the precision of automatic segmentation of cerebrospinal fluid (CSF), GM, and WM in 10 infant brain MRIs using the iSeg-2017 dataset. METHODS The proposed method utilizes a U-net architecture with (2+1)D convolutional layers and skip connections. Preprocessing includes intensity normalization using histogram alignment to standardize MRI data across different records. The model was trained on the iSeg-2017 dataset, which comprises T1-weighted and T2-weighted MRI data from ten infant subjects. Cross-validation was performed to evaluate the model's segmentation performance. RESULTS The model achieved an average accuracy of 92.2%, improving on previous methods by 0.7%. Sensitivity, precision, and Dice similarity scores were used to evaluate the performance, showing high levels of accuracy across different tissue types. The model demonstrated a slight bias toward misclassifying GM and WM, indicating areas for potential improvement. CONCLUSIONS The results suggest that the U-net architecture is highly effective in segmenting infant brain tissues from MRI data. Future work will explore enhancements such as attention mechanisms and dual-network processing to further improve segmentation accuracy.
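The Dice similarity score used to evaluate this model (and several of the other entries here) is computed per tissue class between predicted and reference label maps. A minimal sketch, assuming integer label maps with the iSeg convention 1=CSF, 2=GM, 3=WM:

```python
import numpy as np

def dice_per_class(pred, truth, labels=(1, 2, 3)):
    """Dice similarity coefficient for each tissue label
    (e.g. 1=CSF, 2=GM, 3=WM) between two integer label maps:
    DSC = 2|P ∩ T| / (|P| + |T|)."""
    scores = {}
    for c in labels:
        p, t = (pred == c), (truth == c)
        denom = p.sum() + t.sum()
        scores[c] = 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0
    return scores

pred = np.array([1, 1, 2, 2, 3, 3])
truth = np.array([1, 2, 2, 2, 3, 1])
d = dice_per_class(pred, truth)
```

The empty-class fallback (returning 1.0 when neither map contains the label) is one common convention; evaluation scripts differ on this edge case.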
Affiliation(s)
- Lehel Dénes-Fazakas
- Physiological Controls Research Center, University Research and Innovation Center, Obuda University, 1034 Budapest, Hungary; (L.D.-F.); (L.K.); (L.S.)
- Biomatics and Applied Artificial Intelligence Institute, John von Neumann Faculty of Informatics, Obuda University, 1034 Budapest, Hungary
- Doctoral School of Applied Informatics and Applied Mathematics, Obuda University, 1034 Budapest, Hungary
- Levente Kovács
- Physiological Controls Research Center, University Research and Innovation Center, Obuda University, 1034 Budapest, Hungary; (L.D.-F.); (L.K.); (L.S.)
- Biomatics and Applied Artificial Intelligence Institute, John von Neumann Faculty of Informatics, Obuda University, 1034 Budapest, Hungary
- György Eigner
- Physiological Controls Research Center, University Research and Innovation Center, Obuda University, 1034 Budapest, Hungary; (L.D.-F.); (L.K.); (L.S.)
- Biomatics and Applied Artificial Intelligence Institute, John von Neumann Faculty of Informatics, Obuda University, 1034 Budapest, Hungary
- László Szilágyi
- Physiological Controls Research Center, University Research and Innovation Center, Obuda University, 1034 Budapest, Hungary; (L.D.-F.); (L.K.); (L.S.)
- Biomatics and Applied Artificial Intelligence Institute, John von Neumann Faculty of Informatics, Obuda University, 1034 Budapest, Hungary
- Computational Intelligence Research Group, Sapientia Hungarian University of Transylvania, 547366 Targu Mures, Romania
5.
Liu J, Liu F, Nie D, Gu Y, Sun Y, Shen D. Structure-Aware Brain Tissue Segmentation for Isointense Infant MRI Data Using Multi-Phase Multi-Scale Assistance Network. IEEE J Biomed Health Inform 2025; 29:1297-1307. [PMID: 39302775] [DOI: 10.1109/jbhi.2024.3452310]
Abstract
Accurate and automatic brain tissue segmentation is crucial for tracking brain development and diagnosing brain disorders. However, due to inherently ongoing myelination and maturation during the first postnatal year, the intensity distributions of gray matter and white matter in the infant brain MRI at the age of around 6 months old (a.k.a. isointense phase) are highly overlapped, which makes tissue segmentation very challenging, even for experts. To address this issue, in this study, we propose a multi-phase multi-scale assistance segmentation framework, which comprises a structure-preserved generative adversarial network (SPGAN) and a multi-phase multi-scale assisted segmentation network (MASN). SPGAN bi-directionally synthesizes isointense and adult-like data. The synthetic isointense data essentially augment the training dataset, combined with high-quality annotations transferred from its adult-like counterpart. By contrast, the synthetic adult-like data offers clear tissue structures and is concatenated with isointense data to serve as the input of MASN. In particular, MASN is designed with two-branch networks, which simultaneously segment tissues with two phases (isointense and adult-like) and two scales by also preserving their correspondences. We further propose a boundary refinement module to extract maximum gradients from local feature maps to indicate tissue boundaries, prompting MASN to focus more on boundaries where segmentation errors are prone to occur. Extensive experiments on the National Database for Autism Research and Baby Connectome Project datasets quantitatively and qualitatively demonstrate the superiority of our proposed framework compared with seven state-of-the-art methods.
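The boundary refinement module described above extracts maximum gradients from local feature maps to indicate tissue boundaries. The idea can be illustrated crudely with a gradient-magnitude map over a 2D feature map; this sketch only conveys the principle, not the paper's module:

```python
import numpy as np

def boundary_indicator(feature_map):
    """Crude boundary indicator: gradient magnitude of a 2D
    feature map, normalized to [0, 1]. Illustrates the idea of
    using local gradients to highlight tissue boundaries where
    segmentation errors are prone to occur."""
    gy, gx = np.gradient(feature_map.astype(float))  # per-axis finite differences
    mag = np.hypot(gx, gy)                           # gradient magnitude
    return mag / mag.max() if mag.max() > 0 else mag

fmap = np.zeros((4, 4))
fmap[:, 2:] = 1.0          # a vertical tissue boundary between columns 1 and 2
b = boundary_indicator(fmap)
```

Pixels adjacent to the step receive the maximal response, so a loss weighted by such a map concentrates training pressure on boundary voxels.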
6.
Trinh MN, Tran TT, Nham DHN, Lo MT, Pham VT. GLAC-Unet: Global-Local Active Contour Loss with an Efficient U-Shaped Architecture for Multiclass Medical Image Segmentation. J Imaging Inform Med 2025. [PMID: 39821780] [DOI: 10.1007/s10278-025-01387-9]
Abstract
The field of medical image segmentation powered by deep learning has recently received substantial attention, with a significant focus on developing novel architectures and designing effective loss functions. Traditional loss functions, such as Dice loss and Cross-Entropy loss, predominantly rely on global metrics to compare predictions with labels. However, these global measures often struggle to address challenges such as occlusion and nonuniform intensity. To overcome these issues, in this study, we propose a novel loss function, termed Global-Local Active Contour (GLAC) loss, which integrates both global and local image features, reformulated within the Mumford-Shah framework and extended for multiclass segmentation. This approach enables the neural network model to be trained end-to-end while simultaneously segmenting multiple classes. In addition, we enhance the U-Net architecture by incorporating Dense Layers, Convolutional Block Attention Modules, and DropBlock. These improvements enable the model to more effectively combine contextual information across layers, capture richer semantic details, and mitigate overfitting, resulting in more precise segmentation outcomes. We validate our proposed method, namely GLAC-Unet, which utilizes the GLAC loss in conjunction with our modified U-shaped architecture, on three biomedical segmentation datasets that span a range of modalities, including two-dimensional and three-dimensional images, such as dermoscopy, cardiac magnetic resonance imaging, and brain magnetic resonance imaging. Extensive experiments demonstrate the promising performance of our approach, achieving a Dice score (DSC) of 0.9125 on the ISIC-2018 dataset, 0.9260 on the Automated Cardiac Diagnosis Challenge (ACDC) 2017, and 0.927 on the Infant Brain MRI Segmentation Challenge 2019.
Furthermore, statistical significance testing with p-values consistently smaller than 0.05 on the ISIC-2018 and ACDC datasets confirms the superior performance of the proposed method compared to other state-of-the-art models. These results highlight the robustness and effectiveness of our multiclass segmentation technique, underscoring its potential for biomedical image analysis. Our code will be made available at https://github.com/minhnhattrinh312/Active-Contour-Loss-based-on-Global-and-Local-Intensity.
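The exact GLAC formulation is given in the paper; as a rough, illustrative sketch of the family of region-based active contour losses it builds on (a Chan-Vese-style global fitting term over a soft foreground mask, here in NumPy; the function and variable names are assumptions, not the authors' code):

```python
import numpy as np

def region_active_contour_loss(pred, image, eps=1e-8):
    """Illustrative Chan-Vese-style region loss: a soft prediction `pred`
    (values in [0, 1]) partitions `image` into foreground/background; the
    loss penalizes intensity deviation from each region's mean (the global
    fitting term of a Mumford-Shah-type energy)."""
    # Region means, weighted by the soft mask.
    c_in = (pred * image).sum() / (pred.sum() + eps)
    c_out = ((1 - pred) * image).sum() / ((1 - pred).sum() + eps)
    # Fitting energies: squared deviation from each region's mean.
    loss_in = (pred * (image - c_in) ** 2).mean()
    loss_out = ((1 - pred) * (image - c_out) ** 2).mean()
    return loss_in + loss_out
```

A perfect mask on a piecewise-constant image drives both fitting terms to zero; the paper's full loss additionally incorporates local intensity terms and a multiclass extension.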
Affiliation(s)
- Minh-Nhat Trinh
- Center of Marine Sciences, University of Algarve, Faro, Portugal
- Thi-Thao Tran
- School of Electrical and Electronic Engineering, Hanoi University of Science and Technology, Hanoi, Vietnam
- Do-Hai-Ninh Nham
- Department of Mathematics, The University of Kaiserslautern-Landau (RPTU), Kaiserslautern, Germany
- Men-Tzung Lo
- Department of Biomedical Sciences and Engineering, National Central University, Taoyuan City, Taiwan
- Van-Truong Pham
- School of Electrical and Electronic Engineering, Hanoi University of Science and Technology, Hanoi, Vietnam.
|
7
|
Sarafraz I, Agahi H, Mahmoodzadeh A. Convolutional neural network (CNN) configuration using a learning automaton model for neonatal brain image segmentation. PLoS One 2025; 20:e0315538. [PMID: 39823471 PMCID: PMC11741644 DOI: 10.1371/journal.pone.0315538] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2024] [Accepted: 11/26/2024] [Indexed: 01/19/2025] Open
Abstract
CNNs are considered efficient tools for brain image segmentation. However, neonatal brain images require specific methods due to their nature and structural differences from adult brain images. Hence, it is necessary to determine the optimal structure and parameters for these models to achieve the desired results. In this article, an adaptive method for automatic CNN configuration for neonatal brain image segmentation is presented, based on an encoder-decoder structure in which the network's hyperparameters (the size, length, and width of the filter in each layer, along with the type of pooling function) are determined with a reinforcement learning approach using learning automaton (LA) models. These LA models determine the optimal configuration for the CNN model using the Dice and average surface distance (ASD) segmentation quality criteria, so that segmentation quality can be maximized with respect to the goal criteria. The effectiveness of the proposed method has been evaluated using a database of infant MRI images and the results have been compared with previous methods. The results show that the proposed method can segment neonatal brain images with higher quality and accuracy.
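The paper's exact automaton scheme is not reproduced here; as a minimal sketch of how a learning automaton can drive such a hyperparameter search, assuming a linear reward-inaction (L_RI) update over a candidate set of filter sizes (all names and the candidate set are illustrative):

```python
import random

class LearningAutomaton:
    """Minimal L_RI (linear reward-inaction) automaton: keeps a probability
    distribution over candidate actions (e.g. filter sizes) and shifts mass
    toward actions that earn a reward (e.g. an improved Dice/ASD score)."""
    def __init__(self, actions, lr=0.1):
        self.actions = list(actions)
        self.p = [1.0 / len(actions)] * len(actions)
        self.lr = lr

    def choose(self, rng=random):
        # Sample an action index according to the current probabilities.
        return rng.choices(range(len(self.actions)), weights=self.p)[0]

    def reward(self, i):
        # L_RI update: rewarded action i gains probability, others shrink;
        # under inaction (no reward) the distribution is left unchanged.
        for j in range(len(self.p)):
            if j == i:
                self.p[j] += self.lr * (1.0 - self.p[j])
            else:
                self.p[j] *= (1.0 - self.lr)
```

In the paper's setting, the reward would come from evaluating a trained candidate CNN with the Dice/ASD criteria; a repeatedly rewarded action absorbs almost all of the probability mass, fixing that hyperparameter choice.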
Affiliation(s)
- Iran Sarafraz
- Department of Electrical Engineering, Shiraz Branch, Islamic Azad University, Shiraz, Iran
- Hamed Agahi
- Department of Electrical Engineering, Shiraz Branch, Islamic Azad University, Shiraz, Iran
- Azar Mahmoodzadeh
- Department of Electrical Engineering, Shiraz Branch, Islamic Azad University, Shiraz, Iran
|
8
|
Gaillard L, Tjaberinga MC, Dremmen MHG, Mathijssen IMJ, Vrooman HA. Brain volume in infants with metopic synostosis: Less white matter volume with an accelerated growth pattern in early life. J Anat 2024; 245:894-902. [PMID: 38417842 PMCID: PMC11547220 DOI: 10.1111/joa.14028] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2023] [Revised: 01/30/2024] [Accepted: 02/05/2024] [Indexed: 03/01/2024] Open
Abstract
Metopic synostosis patients are at risk for neurodevelopmental disorders despite a negligible risk of intracranial hypertension. To gain insight into the underlying pathophysiology of metopic synostosis and associated neurodevelopmental disorders, we aimed to investigate brain volumes of non-syndromic metopic synostosis patients using preoperative MRI brain scans. MRI brain scans were processed with HyperDenseNet to calculate total intracranial volume (TIV), total brain volume (TBV), total grey matter volume (TGMV), total white matter volume (TWMV) and total cerebrospinal fluid volume (TCBFV). We compared global brain volumes of patients with controls corrected for age and sex using linear regression. Lobe-specific grey matter volumes were assessed in secondary analyses. We included 45 metopic synostosis patients and 14 controls (median age at MRI 0.56 years [IQR 0.36] and 1.1 years [IQR 0.47], respectively). We found no significant differences in TIV, TBV, TGMV or TCBFV in patients compared to controls. TWMV was significantly smaller in patients (-62,233 mm3 [95% CI = -96,968; -27,498], Holm-corrected p = 0.004), and raw data show an accelerated growth pattern of white matter in metopic synostosis patients. Grey matter volume analyses per lobe indicated increased cingulate (1378 mm3 [95% CI = 402; 2355]) and temporal grey matter (4747 [95% CI = 178; 9317]) volumes in patients compared to controls. To conclude, we found smaller TWMV with an accelerated white matter growth pattern in metopic synostosis patients, similar to white matter growth patterns seen in autism. TIV, TBV, TGMV and TCBFV were comparable in patients and controls. Secondary analyses suggest larger cingulate and temporal lobe volumes. These findings suggest a generalized intrinsic brain anomaly in the pathophysiology of neurodevelopmental disorders associated with metopic synostosis.
Affiliation(s)
- L. Gaillard
- Department of Plastic and Reconstructive Surgery and Hand Surgery, Erasmus MC—Sophia Children's Hospital, University Medical Center Rotterdam, Rotterdam, The Netherlands
- M. C. Tjaberinga
- Department of Plastic and Reconstructive Surgery and Hand Surgery, Erasmus MC—Sophia Children's Hospital, University Medical Center Rotterdam, Rotterdam, The Netherlands
- M. H. G. Dremmen
- Department of Radiology and Nuclear Medicine, Erasmus MC—Sophia Children's Hospital, University Medical Center Rotterdam, Rotterdam, The Netherlands
- I. M. J. Mathijssen
- Department of Plastic and Reconstructive Surgery and Hand Surgery, Erasmus MC—Sophia Children's Hospital, University Medical Center Rotterdam, Rotterdam, The Netherlands
- H. A. Vrooman
- Department of Radiology and Nuclear Medicine, Erasmus MC—Sophia Children's Hospital, University Medical Center Rotterdam, Rotterdam, The Netherlands
|
9
|
Kumar A, Jiang H, Imran M, Valdes C, Leon G, Kang D, Nataraj P, Zhou Y, Weiss MD, Shao W. A flexible 2.5D medical image segmentation approach with in-slice and cross-slice attention. Comput Biol Med 2024; 182:109173. [PMID: 39317055 DOI: 10.1016/j.compbiomed.2024.109173] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2024] [Revised: 08/18/2024] [Accepted: 09/17/2024] [Indexed: 09/26/2024]
Abstract
Deep learning has become the de facto method for medical image segmentation, with 3D segmentation models excelling in capturing complex 3D structures and 2D models offering high computational efficiency. However, segmenting 2.5D images, characterized by high in-plane resolution but lower through-plane resolution, presents significant challenges. While applying 2D models to individual slices of a 2.5D image is feasible, it fails to capture the spatial relationships between slices. On the other hand, 3D models face challenges such as resolution inconsistencies in 2.5D images, along with computational complexity and susceptibility to overfitting when trained with limited data. In this context, 2.5D models, which capture inter-slice correlations using only 2D neural networks, emerge as a promising solution due to their reduced computational demand and simplicity in implementation. In this paper, we introduce CSA-Net, a flexible 2.5D segmentation model capable of processing 2.5D images with an arbitrary number of slices. CSA-Net features an innovative Cross-Slice Attention (CSA) module that effectively captures 3D spatial information by learning long-range dependencies between the center slice (for segmentation) and its neighboring slices. Moreover, CSA-Net utilizes the self-attention mechanism to learn correlations among pixels within the center slice. We evaluated CSA-Net on three 2.5D segmentation tasks: (1) multi-class brain MR image segmentation, (2) binary prostate MR image segmentation, and (3) multi-class prostate MR image segmentation. CSA-Net outperformed leading 2D, 2.5D, and 3D segmentation methods across all three tasks, achieving average Dice coefficients and HD95 values of 0.897 and 1.40 mm for the brain dataset, 0.921 and 1.06 mm for the prostate dataset, and 0.659 and 2.70 mm for the ProstateX dataset, demonstrating its efficacy and superiority. Our code is publicly available at: https://github.com/mirthAI/CSA-Net.
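The CSA module itself is defined in the paper and its repository; as a rough NumPy sketch of the underlying mechanism, scaled dot-product attention in which flattened center-slice features act as queries over neighboring-slice features (shapes, names, and the flattening assumption are mine, not the authors'):

```python
import numpy as np

def cross_slice_attention(center, neighbors):
    """Illustrative scaled dot-product attention: features of the center
    slice (queries) attend to features of neighboring slices (keys/values).
    center: (n, d) flattened center-slice features; neighbors: (m, d)."""
    d = center.shape[-1]
    scores = center @ neighbors.T / np.sqrt(d)       # (n, m) similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over neighbors
    return weights @ neighbors                       # (n, d) aggregated
```

Because each softmax row sums to one, every center-slice feature becomes a convex combination of neighbor-slice features, which is how inter-slice (3D) context reaches the 2D segmentation of the center slice.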
Affiliation(s)
- Amarjeet Kumar
- Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL, 32610, United States
- Hongxu Jiang
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, 32610, United States
- Muhammad Imran
- Department of Medicine, University of Florida, Gainesville, FL, 32610, United States
- Cyndi Valdes
- Department of Pediatrics, University of Florida, Gainesville, FL, 32610, United States
- Gabriela Leon
- College of Medicine, University of Florida, Gainesville, FL, 32610, United States
- Dahyun Kang
- College of Medicine, University of Florida, Gainesville, FL, 32610, United States
- Parvathi Nataraj
- Department of Pediatrics, University of Florida, Gainesville, FL, 32610, United States
- Yuyin Zhou
- Department of Computer Science and Engineering, University of California, Santa Cruz, CA, 95064, United States
- Michael D Weiss
- Department of Pediatrics, University of Florida, Gainesville, FL, 32610, United States
- Wei Shao
- Department of Medicine, University of Florida, Gainesville, FL, 32610, United States; Intelligent Clinical Care Center, University of Florida, Gainesville, FL, 32610, United States.
|
10
|
Liu Z, Kainth K, Zhou A, Deyer TW, Fayad ZA, Greenspan H, Mei X. A review of self-supervised, generative, and few-shot deep learning methods for data-limited magnetic resonance imaging segmentation. NMR Biomed 2024; 37:e5143. [PMID: 38523402 DOI: 10.1002/nbm.5143] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/01/2023] [Revised: 02/15/2024] [Accepted: 02/16/2024] [Indexed: 03/26/2024]
Abstract
Magnetic resonance imaging (MRI) is a ubiquitous medical imaging technology with applications in disease diagnostics, intervention, and treatment planning. Accurate MRI segmentation is critical for diagnosing abnormalities, monitoring diseases, and deciding on a course of treatment. With the advent of advanced deep learning frameworks, fully automated and accurate MRI segmentation is advancing. Traditional supervised deep learning techniques have advanced tremendously, reaching clinical-level accuracy in the field of segmentation. However, these algorithms still require a large amount of annotated data, which is oftentimes unavailable or impractical. One way to circumvent this issue is to utilize algorithms that exploit a limited amount of labeled data. This paper aims to review such state-of-the-art algorithms that use a limited number of annotated samples. We explain the fundamental principles of self-supervised learning, generative models, few-shot learning, and semi-supervised learning and summarize their applications in cardiac, abdominal, and brain MRI segmentation. Throughout this review, we highlight algorithms that can be employed based on the quantity of annotated data available. We also present a comprehensive list of notable publicly available MRI segmentation datasets. To conclude, we discuss possible future directions of the field, including emerging algorithms such as contrastive language-image pretraining and potential combinations across the methods discussed, that can further increase the efficacy of image segmentation with limited labels.
Affiliation(s)
- Zelong Liu
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Komal Kainth
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Alexander Zhou
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Timothy W Deyer
- East River Medical Imaging, New York, New York, USA
- Department of Radiology, Cornell Medicine, New York, New York, USA
- Zahi A Fayad
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Department of Diagnostic, Molecular, and Interventional Radiology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Hayit Greenspan
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Department of Diagnostic, Molecular, and Interventional Radiology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Xueyan Mei
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Department of Diagnostic, Molecular, and Interventional Radiology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
|
11
|
Nerella S, Bandyopadhyay S, Zhang J, Contreras M, Siegel S, Bumin A, Silva B, Sena J, Shickel B, Bihorac A, Khezeli K, Rashidi P. Transformers and large language models in healthcare: A review. Artif Intell Med 2024; 154:102900. [PMID: 38878555 PMCID: PMC11638972 DOI: 10.1016/j.artmed.2024.102900] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2023] [Revised: 05/28/2024] [Accepted: 05/30/2024] [Indexed: 08/09/2024]
Abstract
With Artificial Intelligence (AI) increasingly permeating various aspects of society, including healthcare, the adoption of the Transformer neural network architecture is rapidly changing many applications. The Transformer is a deep learning architecture initially developed to solve general-purpose Natural Language Processing (NLP) tasks that has subsequently been adapted in many fields, including healthcare. In this survey paper, we provide an overview of how this architecture has been adopted to analyze various forms of healthcare data, including clinical NLP, medical imaging, structured Electronic Health Records (EHR), social media, bio-physiological signals, and biomolecular sequences. Furthermore, we also include articles that used the transformer architecture to generate surgical instructions and to predict adverse outcomes after surgery under the umbrella of critical care. Under diverse settings, these models have been used for clinical diagnosis, report generation, data reconstruction, and drug/protein synthesis. Finally, we discuss the benefits and limitations of using transformers in healthcare and examine issues such as computational cost, model interpretability, fairness, alignment with human values, ethical implications, and environmental impact.
Affiliation(s)
- Subhash Nerella
- Department of Biomedical Engineering, University of Florida, Gainesville, United States
- Jiaqing Zhang
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, United States
- Miguel Contreras
- Department of Biomedical Engineering, University of Florida, Gainesville, United States
- Scott Siegel
- Department of Biomedical Engineering, University of Florida, Gainesville, United States
- Aysegul Bumin
- Department of Computer and Information Science and Engineering, University of Florida, Gainesville, United States
- Brandon Silva
- Department of Computer and Information Science and Engineering, University of Florida, Gainesville, United States
- Jessica Sena
- Department of Computer Science, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil
- Benjamin Shickel
- Department of Medicine, University of Florida, Gainesville, United States
- Azra Bihorac
- Department of Medicine, University of Florida, Gainesville, United States
- Kia Khezeli
- Department of Biomedical Engineering, University of Florida, Gainesville, United States
- Parisa Rashidi
- Department of Biomedical Engineering, University of Florida, Gainesville, United States.
|
12
|
Zhong T, Wu X, Liang S, Ning Z, Wang L, Niu Y, Yang S, Kang Z, Feng Q, Li G, Zhang Y. nBEST: Deep-learning-based non-human primates Brain Extraction and Segmentation Toolbox across ages, sites and species. Neuroimage 2024; 295:120652. [PMID: 38797384 DOI: 10.1016/j.neuroimage.2024.120652] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2024] [Revised: 05/21/2024] [Accepted: 05/22/2024] [Indexed: 05/29/2024] Open
Abstract
Accurate processing and analysis of non-human primate (NHP) brain magnetic resonance imaging (MRI) serves an indispensable role in understanding brain evolution, development, aging, and diseases. Despite the accumulation of diverse NHP brain MRI datasets at various developmental stages and from various imaging sites/scanners, existing computational tools designed for human MRI typically perform poorly on NHP data, due to large differences in brain sizes, morphologies, and imaging appearances across species, sites, and ages, highlighting the need for NHP-specialized MRI processing tools. To address this issue, in this paper, we present a robust, generic, and fully automated computational pipeline, called the non-human primates Brain Extraction and Segmentation Toolbox (nBEST), whose main functionality includes brain extraction, non-cerebrum removal, and tissue segmentation. Building on cutting-edge deep learning techniques, employing lifelong learning to flexibly integrate data from diverse NHP populations, and innovatively constructing a 3D U-NeXt architecture, nBEST can handle structural NHP brain MR images across multiple species, sites, and developmental stages (from neonates to the elderly). We extensively validated nBEST on what is, to our knowledge, the largest assembled dataset in NHP brain studies, encompassing 1,469 scans of 11 species (e.g., rhesus macaques, cynomolgus macaques, chimpanzees, marmosets, squirrel monkeys, etc.) from 23 independent datasets. nBEST outperforms alternative tools in precision, applicability, robustness, comprehensiveness, and generalizability, greatly benefiting downstream longitudinal, cross-sectional, and cross-species quantitative analyses. We have made nBEST an open-source toolbox (https://github.com/TaoZhong11/nBEST) and we are committed to its continual refinement through lifelong learning with incoming data to greatly contribute to the research field.
Affiliation(s)
- Tao Zhong
- School of Biomedical Engineering, Guangdong Provincial Key Laboratory of Medical Image Processing and Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China
- Xueyang Wu
- School of Biomedical Engineering, Guangdong Provincial Key Laboratory of Medical Image Processing and Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China
- Shujun Liang
- School of Biomedical Engineering, Guangdong Provincial Key Laboratory of Medical Image Processing and Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China
- Zhenyuan Ning
- School of Biomedical Engineering, Guangdong Provincial Key Laboratory of Medical Image Processing and Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China
- Li Wang
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Yuyu Niu
- Yunnan Key Laboratory of Primate Biomedical Research, Institute of Primate Translational Medicine, Kunming University of Science and Technology, Kunming, China
- Shihua Yang
- College of Veterinary Medicine, South China Agricultural University, Guangzhou, China
- Zhuang Kang
- Department of Radiology, The Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Qianjin Feng
- School of Biomedical Engineering, Guangdong Provincial Key Laboratory of Medical Image Processing and Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China
- Gang Li
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, USA.
- Yu Zhang
- School of Biomedical Engineering, Guangdong Provincial Key Laboratory of Medical Image Processing and Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China.
|
13
|
Sun L, Zhao T, Liang X, Xia M, Li Q, Liao X, Gong G, Wang Q, Pang C, Yu Q, Bi Y, Chen P, Chen R, Chen Y, Chen T, Cheng J, Cheng Y, Cui Z, Dai Z, Deng Y, Ding Y, Dong Q, Duan D, Gao JH, Gong Q, Han Y, Han Z, Huang CC, Huang R, Huo R, Li L, Lin CP, Lin Q, Liu B, Liu C, Liu N, Liu Y, Liu Y, Lu J, Ma L, Men W, Qin S, Qiu J, Qiu S, Si T, Tan S, Tang Y, Tao S, Wang D, Wang F, Wang J, Wang P, Wang X, Wang Y, Wei D, Wu Y, Xie P, Xu X, Xu Y, Xu Z, Yang L, Yuan H, Zeng Z, Zhang H, Zhang X, Zhao G, Zheng Y, Zhong S, He Y. Functional connectome through the human life span. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2023.09.12.557193. [PMID: 37745373 PMCID: PMC10515818 DOI: 10.1101/2023.09.12.557193] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/26/2023]
Abstract
The lifespan growth of the functional connectome remains unknown. Here, we assemble task-free functional and structural magnetic resonance imaging data from 33,250 individuals aged 32 postmenstrual weeks to 80 years from 132 global sites. We report critical inflection points in the nonlinear growth curves of the global mean and variance of the connectome, peaking in the late fourth and late third decades of life, respectively. After constructing a fine-grained, lifespan-wide suite of system-level brain atlases, we show distinct maturation timelines for functional segregation within different systems. Lifespan growth of regional connectivity is organized along a primary-to-association cortical axis. These connectome-based normative models reveal substantial individual heterogeneities in functional brain networks in patients with autism spectrum disorder, major depressive disorder, and Alzheimer's disease. These findings elucidate the lifespan evolution of the functional connectome and can serve as a normative reference for quantifying individual variation in development, aging, and neuropsychiatric disorders.
Affiliation(s)
- Lianglong Sun
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Tengda Zhao
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Xinyuan Liang
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Mingrui Xia
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Qiongling Li
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Xuhong Liao
- School of Systems Science, Beijing Normal University, Beijing, China
- Gaolang Gong
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Chinese Institute for Brain Research, Beijing, China
- Qian Wang
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Chenxuan Pang
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Qian Yu
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Yanchao Bi
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Chinese Institute for Brain Research, Beijing, China
- Pindong Chen
- Brainnetome Center & National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Rui Chen
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Yuan Chen
- Department of Magnetic Resonance Imaging, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Taolin Chen
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital, Sichuan University, Chengdu, China
- Jingliang Cheng
- Department of Magnetic Resonance Imaging, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Yuqi Cheng
- Department of Psychiatry, First Affiliated Hospital of Kunming Medical University, Kunming, China
- Zaixu Cui
- Chinese Institute for Brain Research, Beijing, China
- Zhengjia Dai
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Yao Deng
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Yuyin Ding
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Qi Dong
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Dingna Duan
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Jia-Hong Gao
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
- Beijing City Key Laboratory for Medical Physics and Engineering, Institute of Heavy Ion Physics, School of Physics, Peking University, Beijing, China
- IDG/McGovern Institute for Brain Research, Peking University, Beijing, China
- Qiyong Gong
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital, Sichuan University, Chengdu, China
- Research Unit of Psychoradiology, Chinese Academy of Medical Sciences, Chengdu, China
- Ying Han
- Department of Neurology, Xuanwu Hospital of Capital Medical University, Beijing, China
- Zaizhu Han
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Chu-Chung Huang
- Key Laboratory of Brain Functional Genomics (Ministry of Education), Affiliated Mental Health Center (ECNU), School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Ruiwang Huang
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Ran Huo
- Department of Radiology, Peking University Third Hospital, Beijing, China
- Lingjiang Li
- Department of Psychiatry, and National Clinical Research Center for Mental Disorders, The Second Xiangya Hospital of Central South University, Changsha, China
- Mental Health Institute of Central South University, China National Technology Institute on Mental Disorders, Hunan Technology Institute of Psychiatry, Hunan Key Laboratory of Psychiatry and Mental Health, Hunan Medical Center for Mental Health, Changsha, China
- Ching-Po Lin
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China
- Institute of Neuroscience, National Yang Ming Chiao Tung University, Taipei, China
- Department of Education and Research, Taipei City Hospital, Taipei, China
- Qixiang Lin
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Bangshan Liu
- Department of Psychiatry, and National Clinical Research Center for Mental Disorders, The Second Xiangya Hospital of Central South University, Changsha, China
- Mental Health Institute of Central South University, China National Technology Institute on Mental Disorders, Hunan Technology Institute of Psychiatry, Hunan Key Laboratory of Psychiatry and Mental Health, Hunan Medical Center for Mental Health, Changsha, China
- Chao Liu
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Ningyu Liu
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Ying Liu
- Department of Radiology, Peking University Third Hospital, Beijing, China
- Yong Liu
- Center for Artificial Intelligence in Medical Imaging, School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China
- Jing Lu
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Leilei Ma
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Weiwei Men
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
- Beijing City Key Laboratory for Medical Physics and Engineering, Institute of Heavy Ion Physics, School of Physics, Peking University, Beijing, China
- Shaozheng Qin
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Chinese Institute for Brain Research, Beijing, China
- Jiang Qiu
- Key Laboratory of Cognition and Personality (SWU), Ministry of Education, Chongqing, China
- Department of Psychology, Southwest University, Chongqing, China
- Shijun Qiu
- Department of Radiology, The First Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- Tianmei Si
- Peking University Sixth Hospital, Peking University Institute of Mental Health, NHC Key Laboratory of Mental Health (Peking University), National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital), Peking University, Beijing, China
| | - Shuping Tan
- Beijing Huilongguan Hospital, Peking University Huilongguan Clinical Medical School, Beijing, China
| | - Yanqing Tang
- Department of Psychiatry, The First Affiliated Hospital of China Medical University, Shenyang, China
| | - Sha Tao
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
| | - Dawei Wang
- Department of Radiology, Qilu Hospital of Shandong University, Ji’nan, China
| | - Fei Wang
- Department of Psychiatry, The First Affiliated Hospital of China Medical University, Shenyang, China
| | - Jiali Wang
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
| | - Pan Wang
- Department of Neurology, Tianjin Huanhu Hospital, Tianjin University, Tianjin, China
| | - Xiaoqin Wang
- Key Laboratory of Cognition and Personality (SWU), Ministry of Education, Chongqing, China
- Department of Psychology, Southwest University, Chongqing, China
| | - Yanpei Wang
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
| | - Dongtao Wei
- Key Laboratory of Cognition and Personality (SWU), Ministry of Education, Chongqing, China
- Department of Psychology, Southwest University, Chongqing, China
| | - Yankun Wu
- Peking University Sixth Hospital, Peking University Institute of Mental Health, NHC Key Laboratory of Mental Health (Peking University), National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital), Peking University, Beijing, China
| | - Peng Xie
- Chongqing Key Laboratory of Neurobiology, Chongqing, China
- Department of Neurology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
| | - Xiufeng Xu
- Department of Psychiatry, First Affiliated Hospital of Kunming Medical University, Kunming, China
| | - Yuehua Xu
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
| | - Zhilei Xu
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
| | - Liyuan Yang
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
| | - Huishu Yuan
- Department of Radiology, Peking University Third Hospital, Beijing, China
| | - Zilong Zeng
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
| | - Haibo Zhang
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
| | - Xi Zhang
- Department of Neurology, the Second Medical Centre, National Clinical Research Centre for Geriatric Diseases, Chinese PLA General Hospital, Beijing, China
| | - Gai Zhao
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
| | - Yanting Zheng
- Department of Radiology, The First Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Suyu Zhong
- Center for Artificial Intelligence in Medical Imaging, School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China
| | - Yong He
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Chinese Institute for Brain Research, Beijing, China
| |
Collapse
|
14
|
Henschel L, Kügler D, Zöllei L, Reuter M. VINNA for neonates: Orientation independence through latent augmentations. IMAGING NEUROSCIENCE (CAMBRIDGE, MASS.) 2024; 2:1-26. [PMID: 39575178 PMCID: PMC11576933 DOI: 10.1162/imag_a_00180] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/21/2023] [Revised: 02/16/2024] [Accepted: 04/19/2024] [Indexed: 11/24/2024]
Abstract
A robust, fast, and accurate segmentation of neonatal brain images is highly desired to better understand and detect changes during development and disease, specifically considering the rise in imaging studies for this cohort. Yet, the limited availability of ground truth datasets, lack of standardized acquisition protocols, and wide variations of head positioning in the scanner pose challenges for method development. A few automated image analysis pipelines exist for newborn brain Magnetic Resonance Image (MRI) segmentation, but they often rely on time-consuming non-linear spatial registration procedures and require resampling to a common resolution, subject to loss of information due to interpolation and down-sampling. Without registration and image resampling, variations with respect to head positions and voxel resolutions have to be addressed differently. In deep learning, external augmentations such as rotation, translation, and scaling are traditionally used to artificially expand the representation of spatial variability, which subsequently increases both the training dataset size and robustness. However, these transformations in the image space still require resampling, reducing accuracy specifically in the context of label interpolation. We recently introduced the concept of resolution-independence with the Voxel-size Independent Neural Network framework, VINN. Here, we extend this concept by additionally shifting all rigid transforms into the network architecture with a four-degree-of-freedom (4-DOF) transform module, enabling resolution-aware internal augmentations (VINNA) for deep learning. In this work, we show that VINNA (i) significantly outperforms state-of-the-art external augmentation approaches, (ii) effectively addresses the head variations present specifically in newborn datasets, and (iii) retains high segmentation accuracy across a range of resolutions (0.5-1.0 mm). Furthermore, the 4-DOF transform module together with internal augmentations is a powerful, general approach to implement spatial augmentation without requiring image or label interpolation. The specific network application to newborns will be made publicly available as VINNA4neonates.
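As an illustration of the idea of moving rigid transforms into the network, the four degrees of freedom (in-plane rotation, two translations, and an isotropic scale) can be packed into a single homogeneous matrix. This parameterization is our own sketch, not VINNA's actual transform module:

```python
import numpy as np

def four_dof_transform(angle_deg, tx, ty, scale):
    """Homogeneous 2-D matrix combining four augmentation degrees of
    freedom: rotation, two translations and an isotropic scale.
    Hypothetical sketch; VINNA applies its transform to latent feature
    maps rather than to input images."""
    t = np.radians(angle_deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[scale * c, -scale * s, tx],
                     [scale * s,  scale * c, ty],
                     [0.0,        0.0,       1.0]])

# Rotate the point (1, 0) by 90 degrees, then translate by (1, 0).
m = four_dof_transform(90.0, 1.0, 0.0, 1.0)
p = m @ np.array([1.0, 0.0, 1.0])
```

Composing the augmentation as one matrix means only a single interpolation is needed when the transform is eventually applied, which is the motivation for avoiding repeated image and label resampling.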
Collapse
Affiliation(s)
- Leonie Henschel
- German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
| | - David Kügler
- German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
| | - Lilla Zöllei
- A.A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
| | - Martin Reuter
- German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
- A.A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
| |
Collapse
|
15
|
Abstract
Objective Accurate infant brain parcellation is crucial for understanding early brain development; however, it is challenging due to the inherent low tissue contrast, high noise, and severe partial volume effects in infant magnetic resonance images (MRIs). The aim of this study was to develop an end-to-end pipeline that enables accurate parcellation of infant brain MRIs. Methods We proposed an end-to-end pipeline that employs a two-stage global-to-local approach for accurate parcellation of infant brain MRIs. Specifically, in the global regions of interest (ROIs) localization stage, a combination of transformer and convolution operations was employed to capture both global spatial features and fine texture features, enabling an approximate localization of the ROIs across the whole brain. In the local ROIs refinement stage, leveraging the position priors from the first stage along with the raw MRIs, the boundaries of the ROIs are refined for a more accurate parcellation. Results We used the Dice ratio to evaluate the accuracy of the parcellation results. Results on 263 subjects from the National Database for Autism Research (NDAR), the Baby Connectome Project (BCP) and cross-site datasets demonstrated the superior accuracy and robustness of our method compared with other competing methods. Conclusion Our end-to-end pipeline may be capable of accurately parcellating 6-month-old infant brain MRIs.
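The Dice ratio used for evaluation is a standard overlap measure between a predicted and a reference label map; a minimal NumPy sketch for one labeled ROI (the function name and the toy label maps are illustrative, not the authors' code):

```python
import numpy as np

def dice_ratio(pred, target, label):
    """Dice overlap between predicted and reference label maps for one ROI:
    2 * |P ∩ T| / (|P| + |T|)."""
    p = (pred == label)
    t = (target == label)
    denom = p.sum() + t.sum()
    if denom == 0:
        return 1.0  # both empty: count as perfect agreement
    return 2.0 * np.logical_and(p, t).sum() / denom

# Tiny worked example: a 4x4 "parcellation" with two ROI labels.
pred   = np.array([[1, 1, 0, 0],
                   [1, 1, 0, 0],
                   [2, 2, 0, 0],
                   [2, 2, 0, 0]])
target = np.array([[1, 1, 1, 0],
                   [1, 1, 0, 0],
                   [2, 2, 0, 0],
                   [0, 2, 0, 0]])
```

A per-ROI Dice of 1.0 means perfect overlap; values are typically averaged across ROIs and subjects when comparing parcellation methods.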
Collapse
Affiliation(s)
- Limei Wang
- Department of Radiology and Biomedical Research Imaging Center, UNC-Chapel Hill, NC 27599, USA
| | - Yue Sun
- Department of Radiology and Biomedical Research Imaging Center, UNC-Chapel Hill, NC 27599, USA
| | - Weili Lin
- Department of Radiology and Biomedical Research Imaging Center, UNC-Chapel Hill, NC 27599, USA
| | - Gang Li
- Department of Radiology and Biomedical Research Imaging Center, UNC-Chapel Hill, NC 27599, USA
| | - Li Wang
- Department of Radiology and Biomedical Research Imaging Center, UNC-Chapel Hill, NC 27599, USA
| |
Collapse
|
16
|
He K, Peng B, Yu W, Liu Y, Liu S, Cheng J, Dai Y. A Novel Mis-Seg-Focus Loss Function Based on a Two-Stage nnU-Net Framework for Accurate Brain Tissue Segmentation. Bioengineering (Basel) 2024; 11:427. [PMID: 38790294 PMCID: PMC11118222 DOI: 10.3390/bioengineering11050427] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2024] [Revised: 04/14/2024] [Accepted: 04/24/2024] [Indexed: 05/26/2024] Open
Abstract
Brain tissue segmentation plays a critical role in the diagnosis, treatment, and study of brain diseases. Accurately identifying tissue boundaries is essential for improving segmentation accuracy. However, distinguishing boundaries between different brain tissues can be challenging, as they often overlap. Existing deep learning methods primarily calculate the overall segmentation results without adequately addressing local regions, leading to error propagation and mis-segmentation along boundaries. In this study, we propose a novel mis-segmentation-focused loss function based on a two-stage nnU-Net framework. Our approach aims to enhance the model's ability to handle ambiguous boundaries and overlapping anatomical structures, thereby achieving more accurate brain tissue segmentation results. Specifically, the first stage targets the identification of mis-segmentation regions using a global loss function, while the second stage defines a mis-segmentation loss function that adaptively adjusts the model, thus improving its capability to handle ambiguous boundaries and overlapping anatomical structures. Experimental evaluations on two datasets demonstrate that our proposed method outperforms existing approaches both quantitatively and qualitatively.
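The abstract does not give the exact loss; the following sketch only illustrates the general idea of up-weighting voxels that a first-stage model mis-segmented. The weighting scheme, names, and toy data are our assumptions, not the authors' implementation:

```python
import numpy as np

def mis_seg_focus_loss(probs, target, mis_mask, alpha=2.0, eps=1e-8):
    """Voxel-wise cross-entropy, up-weighted on voxels flagged as
    mis-segmented by a first stage (mis_mask == 1).
    probs: per-class probabilities, shape (num_classes, *spatial)."""
    # Probability assigned to the true class at every voxel.
    picked = np.take_along_axis(probs, target[None], axis=0)[0]
    ce = -np.log(picked + eps)
    weights = 1.0 + alpha * mis_mask  # focus on boundary errors
    return float((weights * ce).sum() / weights.sum())

probs = np.array([[[0.9, 0.6],
                   [0.4, 0.8]],
                  [[0.1, 0.4],
                   [0.6, 0.2]]])          # two classes, 2x2 image
target = np.array([[0, 0],
                   [1, 1]])
mis_mask = np.array([[0, 1],
                     [0, 1]])             # voxels flagged by stage one
loss = mis_seg_focus_loss(probs, target, mis_mask)
```

When the flagged voxels are the low-confidence ones, the weighted loss exceeds the plain cross-entropy mean, steering training toward the ambiguous boundary regions.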
Collapse
Affiliation(s)
- Keyi He
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; (K.H.); (B.P.); (Y.L.); (S.L.)
- The School of Electrical and Electronic Engineering, Changchun University of Technology, Changchun 130012, China;
| | - Bo Peng
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; (K.H.); (B.P.); (Y.L.); (S.L.)
| | - Weibo Yu
- The School of Electrical and Electronic Engineering, Changchun University of Technology, Changchun 130012, China;
| | - Yan Liu
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; (K.H.); (B.P.); (Y.L.); (S.L.)
| | - Surui Liu
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; (K.H.); (B.P.); (Y.L.); (S.L.)
| | - Jian Cheng
- State Key Laboratory of Complex & Critical Software Environment, Beihang University, Beijing 100191, China
- International Innovation Institute, Beihang University, 166 Shuanghongqiao Street, Pingyao Town, Yuhang District, Hangzhou 311115, China
| | - Yakang Dai
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; (K.H.); (B.P.); (Y.L.); (S.L.)
| |
Collapse
|
17
|
Adamson CL, Alexander B, Kelly CE, Ball G, Beare R, Cheong JLY, Spittle AJ, Doyle LW, Anderson PJ, Seal ML, Thompson DK. Updates to the Melbourne Children's Regional Infant Brain Software Package (M-CRIB-S). Neuroinformatics 2024; 22:207-223. [PMID: 38492127 PMCID: PMC11021251 DOI: 10.1007/s12021-024-09656-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/01/2023] [Indexed: 03/18/2024]
Abstract
The delineation of cortical areas on magnetic resonance images (MRI) is important for understanding the complexities of the developing human brain. The previous version of the Melbourne Children's Regional Infant Brain software package (M-CRIB-S) (Adamson et al., Scientific Reports, 10(1), 10, 2020) performs whole-brain segmentation, cortical surface extraction and parcellation of the neonatal brain. The cortical parcellation schemes available in M-CRIB-S are the adult-compatible Desikan-Killiany (DK) and Desikan-Killiany-Tourville (DKT) schemes, with 34 and 31 regions per hemisphere, respectively. We present a major update to the software package which achieves two aims: 1) to provide voxel-based segmentation outputs derived from the FreeSurfer-compatible M-CRIB scheme, and 2) to improve the accuracy of whole-brain segmentation and cortical surface extraction. Cortical surface extraction has been improved with additional steps that improve penetration of the inner surface into thin gyri. The improved cortical surface extraction is shown to increase the robustness of measures such as surface area, cortical thickness, and cortical volume.
Collapse
Affiliation(s)
- Chris L Adamson
- Royal Children's Hospital, Murdoch Children's Research Institute, Flemington Road, Parkville, Victoria, 3052, Australia
| | - Bonnie Alexander
- Royal Children's Hospital, Murdoch Children's Research Institute, Flemington Road, Parkville, Victoria, 3052, Australia
| | - Claire E Kelly
- Royal Children's Hospital, Murdoch Children's Research Institute, Flemington Road, Parkville, Victoria, 3052, Australia
- Turner Institute for Brain and Mental Health, School of Psychological Sciences, Monash University, Melbourne, Australia
| | - Gareth Ball
- Royal Children's Hospital, Murdoch Children's Research Institute, Flemington Road, Parkville, Victoria, 3052, Australia
| | - Richard Beare
- Royal Children's Hospital, Murdoch Children's Research Institute, Flemington Road, Parkville, Victoria, 3052, Australia
- Department of Medicine, Monash University, Melbourne, Australia
| | - Jeanie L Y Cheong
- Royal Children's Hospital, Murdoch Children's Research Institute, Flemington Road, Parkville, Victoria, 3052, Australia
- Neonatal Services, The Royal Women's Hospital, Melbourne, Australia
- Department of Obstetrics and Gynaecology, The University of Melbourne, Melbourne, Australia
| | - Alicia J Spittle
- Royal Children's Hospital, Murdoch Children's Research Institute, Flemington Road, Parkville, Victoria, 3052, Australia
- Neonatal Services, The Royal Women's Hospital, Melbourne, Australia
- Department of Physiotherapy, The University of Melbourne, Melbourne, Australia
| | - Lex W Doyle
- Royal Children's Hospital, Murdoch Children's Research Institute, Flemington Road, Parkville, Victoria, 3052, Australia
- Neonatal Services, The Royal Women's Hospital, Melbourne, Australia
- Department of Obstetrics and Gynaecology, The University of Melbourne, Melbourne, Australia
- Department of Paediatrics, The University of Melbourne, Melbourne, Australia
| | - Peter J Anderson
- Royal Children's Hospital, Murdoch Children's Research Institute, Flemington Road, Parkville, Victoria, 3052, Australia
- Turner Institute for Brain and Mental Health, School of Psychological Sciences, Monash University, Melbourne, Australia
- Department of Paediatrics, The University of Melbourne, Melbourne, Australia
| | - Marc L Seal
- Royal Children's Hospital, Murdoch Children's Research Institute, Flemington Road, Parkville, Victoria, 3052, Australia
- Department of Paediatrics, The University of Melbourne, Melbourne, Australia
| | - Deanne K Thompson
- Royal Children's Hospital, Murdoch Children's Research Institute, Flemington Road, Parkville, Victoria, 3052, Australia.
- Turner Institute for Brain and Mental Health, School of Psychological Sciences, Monash University, Melbourne, Australia.
- Department of Paediatrics, The University of Melbourne, Melbourne, Australia.
- Florey Department of Neuroscience and Mental Health, The University of Melbourne, Melbourne, Australia.
| |
Collapse
|
18
|
Lu S, Yan Z, Chen W, Cheng T, Zhang Z, Yang G. Dual consistency regularization with subjective logic for semi-supervised medical image segmentation. Comput Biol Med 2024; 170:107991. [PMID: 38242016 DOI: 10.1016/j.compbiomed.2024.107991] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2023] [Revised: 12/18/2023] [Accepted: 01/13/2024] [Indexed: 01/21/2024]
Abstract
Semi-supervised learning plays a vital role in computer vision tasks, particularly in medical image analysis, where it significantly reduces the time and cost involved in labeling data. Current methods primarily focus on consistency regularization and the generation of pseudo labels. However, due to the model's poor awareness of unlabeled data, these methods may misguide the model. To alleviate this problem, we propose dual consistency regularization with subjective logic for semi-supervised medical image segmentation. Specifically, we introduce subjective logic into our semi-supervised medical image segmentation task to estimate uncertainty, and, based on the consistency hypothesis, we construct dual consistency regularization under weak and strong perturbations to guide the model's learning from unlabeled data. To evaluate the performance of the proposed method, we performed experiments on three widely used datasets: ACDC, LA, and Pancreas. The experiments show that the proposed method achieved improvements over other state-of-the-art (SOTA) methods.
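A hedged sketch of the two ingredients named in the abstract: subjective-logic uncertainty (non-negative per-class evidence mapped to Dirichlet parameters, with uncertainty mass u = K/S) and a consistency term between predictions under weak and strong perturbations. The weighting and names are illustrative, not the authors' implementation:

```python
import numpy as np

def subjective_logic(evidence):
    """Map non-negative per-class evidence to belief masses and an
    uncertainty mass: alpha = e + 1, S = sum(alpha), b = e/S, u = K/S."""
    evidence = np.asarray(evidence, dtype=float)
    k = evidence.shape[0]                # number of classes
    alpha = evidence + 1.0               # Dirichlet parameters
    s = alpha.sum(axis=0)                # Dirichlet strength per voxel
    belief = evidence / s
    uncertainty = k / s
    return belief, uncertainty

def dual_consistency(p_weak, p_strong, uncertainty):
    """Uncertainty-weighted MSE between weak- and strong-perturbation
    predictions: confident voxels contribute more to the penalty."""
    return float((((p_weak - p_strong) ** 2) * (1.0 - uncertainty)).mean())

evidence = np.array([[8.0, 0.0],
                     [0.0, 0.0]])   # (classes, voxels): voxel 1 has no evidence
belief, u = subjective_logic(evidence)
```

By construction the belief masses and the uncertainty mass sum to one per voxel, so voxels with no supporting evidence are maximally uncertain and are down-weighted in the consistency term.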
Collapse
Affiliation(s)
- Shanfu Lu
- Perception Vision Medical Technologies Co., Ltd, Guangzhou, 510530, China.
| | - Ziye Yan
- Perception Vision Medical Technologies Co., Ltd, Guangzhou, 510530, China
| | - Wei Chen
- The Radiotherapy Department, Second People's Hospital, Neijiang, 641000, China
| | - Tingting Cheng
- Department of Oncology, National Clinical Research Center for Geriatric Disorders and Xiangya Lung Cancer Center, Xiangya Hospital, Central South University, Changsha, 41000, China.
| | - Zijian Zhang
- Department of Oncology, National Clinical Research Center for Geriatric Disorders and Xiangya Lung Cancer Center, Xiangya Hospital, Central South University, Changsha, 41000, China.
| | - Guang Yang
- Bioengineering Department and Imperial-X, Imperial College London, London, UK; National Heart and Lung Institute, Imperial College London, London, UK; Cardiovascular Research Centre, Royal Brompton Hospital, London, UK; School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
| |
Collapse
|
19
|
Mhlanga ST, Viriri S. Deep learning techniques for isointense infant brain tissue segmentation: a systematic literature review. Front Med (Lausanne) 2023; 10:1240360. [PMID: 38193036 PMCID: PMC10773803 DOI: 10.3389/fmed.2023.1240360] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2023] [Accepted: 11/01/2023] [Indexed: 01/10/2024] Open
Abstract
Introduction To improve understanding of early brain development in health and disease, it is essential to precisely segment infant brain magnetic resonance imaging (MRI) into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). However, in the isointense phase (6-8 months of age), owing to ongoing myelination and maturation, WM and GM exhibit similar intensity levels in both T1-weighted and T2-weighted MRI, making tissue segmentation extremely difficult. Methods This publication presents a comprehensive review of studies on isointense brain MRI segmentation approaches. The main aim and contribution of this study is to aid researchers by providing a thorough review that eases the search for isointense brain MRI segmentation methods. The systematic literature review is organized around four points of reference: (1) review of studies concerning isointense brain MRI segmentation; (2) research contributions, future work and limitations; (3) frequently applied evaluation metrics and datasets; (4) findings of the reviewed studies. Results and discussion The systematic review covers studies published between 2012 and 2022. A total of 19 primary studies of isointense brain MRI segmentation were selected to address the research question stated in this review.
Collapse
Affiliation(s)
| | - Serestina Viriri
- School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban, South Africa
| |
Collapse
|
20
|
Liu Z, Lv Q, Yang Z, Li Y, Lee CH, Shen L. Recent progress in transformer-based medical image analysis. Comput Biol Med 2023; 164:107268. [PMID: 37494821 DOI: 10.1016/j.compbiomed.2023.107268] [Citation(s) in RCA: 23] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2023] [Revised: 05/30/2023] [Accepted: 07/16/2023] [Indexed: 07/28/2023]
Abstract
The transformer is primarily used in the field of natural language processing. Recently, it has been adopted and shows promise in the computer vision (CV) field. Medical image analysis (MIA), as a critical branch of CV, also greatly benefits from this state-of-the-art technique. In this review, we first recap the core component of the transformer, the attention mechanism, and the detailed structures of the transformer. After that, we depict the recent progress of the transformer in the field of MIA. We organize the applications in a sequence of different tasks, including classification, segmentation, captioning, registration, detection, enhancement, localization, and synthesis. The mainstream classification and segmentation tasks are further divided into eleven medical image modalities. A large number of experiments studied in this review illustrate that the transformer-based method outperforms existing methods through comparisons with multiple evaluation metrics. Finally, we discuss the open challenges and future opportunities in this field. This task-modality review with the latest contents, detailed information, and comprehensive comparison may greatly benefit the broad MIA community.
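The attention mechanism recapped by the review is the standard scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V; a self-contained NumPy sketch:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.
    q, k, v: (num_tokens, d) arrays for a single head."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)          # pairwise similarities
    weights = softmax(scores, axis=-1)        # rows sum to 1
    return weights @ v, weights

# Two orthogonal queries/keys: each query attends mostly to its own key.
q = np.eye(2)
k = np.eye(2)
v = np.array([[1.0, 0.0],
              [0.0, 1.0]])
out, w = scaled_dot_product_attention(q, k, v)
```

Multi-head attention, the building block of the transformers surveyed in this review, runs several such maps in parallel on linearly projected Q, K and V, then concatenates the outputs.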
Collapse
Affiliation(s)
- Zhaoshan Liu
- Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore.
| | - Qiujie Lv
- Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore; School of Intelligent Systems Engineering, Sun Yat-sen University, No. 66, Gongchang Road, Guangming District, 518107, China.
| | - Ziduo Yang
- Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore; School of Intelligent Systems Engineering, Sun Yat-sen University, No. 66, Gongchang Road, Guangming District, 518107, China.
| | - Yifan Li
- Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore.
| | - Chau Hung Lee
- Department of Radiology, Tan Tock Seng Hospital, 11 Jalan Tan Tock Seng, Singapore, 308433, Singapore.
| | - Lei Shen
- Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore.
| |
Collapse
|
21
|
Wang F, Zhang H, Wu Z, Hu D, Zhou Z, Girault JB, Wang L, Lin W, Li G. Fine-grained functional parcellation maps of the infant cerebral cortex. eLife 2023; 12:e75401. [PMID: 37526293 PMCID: PMC10393291 DOI: 10.7554/elife.75401] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2021] [Accepted: 07/17/2023] [Indexed: 08/02/2023] Open
Abstract
Resting-state functional MRI (rs-fMRI) is widely used to examine the dynamic brain functional development of infants, but these studies typically require precise cortical parcellation maps, which cannot be directly borrowed from adult-based functional parcellation maps due to the substantial differences in functional brain organization between infants and adults. Creating infant-specific cortical parcellation maps is thus highly desired but remains challenging due to difficulties in acquiring and processing infant brain MRIs. In this study, we leveraged 1064 high-resolution longitudinal rs-fMRIs from 197 typically developing infants and toddlers from birth to 24 months who participated in the Baby Connectome Project to develop the first set of infant-specific, fine-grained, surface-based cortical functional parcellation maps. To establish meaningful cortical functional correspondence across individuals, we performed cortical co-registration using both the cortical folding geometric features and the local gradient of functional connectivity (FC). Then we generated both age-related and age-independent cortical parcellation maps with over 800 fine-grained parcels during infancy based on aligned and averaged local gradient maps of FC across individuals. These parcellation maps reveal complex functional developmental patterns, such as changes in local gradient, network size, and local efficiency, especially during the first 9 postnatal months. Our generated fine-grained infant cortical functional parcellation maps are publicly available at https://www.nitrc.org/projects/infantsurfatlas/ for advancing the pediatric neuroimaging field.
Collapse
Affiliation(s)
- Fan Wang
- Key Laboratory of Biomedical Information Engineering of Ministry of Education, Department of Biomedical Engineering, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an, China
- Department of Radiology and Biomedical Research Imaging Center, the University of North Carolina at Chapel Hill, Chapel Hill, United States
| | - Han Zhang
- Department of Radiology and Biomedical Research Imaging Center, the University of North Carolina at Chapel Hill, Chapel Hill, United States
| | - Zhengwang Wu
- Department of Radiology and Biomedical Research Imaging Center, the University of North Carolina at Chapel Hill, Chapel Hill, United States
| | - Dan Hu
- Department of Radiology and Biomedical Research Imaging Center, the University of North Carolina at Chapel Hill, Chapel Hill, United States
| | - Zhen Zhou
- Department of Radiology and Biomedical Research Imaging Center, the University of North Carolina at Chapel Hill, Chapel Hill, United States
| | - Jessica B Girault
- Department of Psychiatry, the University of North Carolina at Chapel Hill, Chapel Hill, United States
| | - Li Wang
- Department of Radiology and Biomedical Research Imaging Center, the University of North Carolina at Chapel Hill, Chapel Hill, United States
| | - Weili Lin
- Department of Radiology and Biomedical Research Imaging Center, the University of North Carolina at Chapel Hill, Chapel Hill, United States
| | - Gang Li
- Department of Radiology and Biomedical Research Imaging Center, the University of North Carolina at Chapel Hill, Chapel Hill, United States
| |
Collapse
|
22
|
Zhao Y, Wang S, Zhang Y, Qiao S, Zhang M. WRANet: wavelet integrated residual attention U-Net network for medical image segmentation. COMPLEX INTELL SYST 2023:1-13. [PMID: 37361970 PMCID: PMC10248349 DOI: 10.1007/s40747-023-01119-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2023] [Accepted: 05/16/2023] [Indexed: 06/28/2023]
Abstract
Medical image segmentation is crucial for the diagnosis and analysis of disease. Deep convolutional neural network methods have achieved great success in medical image segmentation. However, they are highly susceptible to noise interference during the propagation of the network, where weak noise can dramatically alter the network output. As the network deepens, it can face problems such as gradient explosion and vanishing. To improve the robustness and segmentation performance of the network, we propose a wavelet residual attention network (WRANet) for medical image segmentation. We replace the standard downsampling modules (e.g., maximum pooling and average pooling) in CNNs with discrete wavelet transform, decompose the features into low- and high-frequency components, and remove the high-frequency components to eliminate noise. At the same time, the problem of feature loss can be effectively addressed by introducing an attention mechanism. The combined experimental results show that our method can effectively perform aneurysm segmentation, achieving a Dice score of 78.99%, an IoU score of 68.96%, a precision of 85.21%, and a sensitivity score of 80.98%. In polyp segmentation, a Dice score of 88.89%, an IoU score of 81.74%, a precision rate of 91.32%, and a sensitivity score of 91.07% were achieved. Furthermore, our comparison with state-of-the-art techniques demonstrates the competitiveness of the WRANet network.
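The abstract's key step, replacing standard pooling with a discrete wavelet transform and discarding the high-frequency bands, can be sketched with a one-level 2-D Haar DWT (averaging normalization; a simplified stand-in for the paper's actual module, not its implementation):

```python
import numpy as np

def haar_dwt2(x):
    """One level of a 2-D Haar DWT (averaging normalization): returns
    the low-frequency (LL) approximation and three detail bands."""
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # approximation (kept)
    lh = (a - b + c - d) / 4.0   # horizontal detail
    hl = (a + b - c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, (lh, hl, hh)

def wavelet_downsample(x):
    """Keep only the LL band: a noise-suppressing replacement for
    max/average pooling, as described in the abstract."""
    ll, _ = haar_dwt2(x)
    return ll

x = np.arange(16, dtype=float).reshape(4, 4)
ll = wavelet_downsample(x)
```

Halving each spatial dimension while dropping the high-frequency bands removes exactly the components where weak noise concentrates, which is the robustness argument the abstract makes.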
Affiliation(s)
- Yawu Zhao
- School of Computer Science and Technology, China University of Petroleum, Qingdao, Shandong, China
- Shudong Wang
- School of Computer Science and Technology, China University of Petroleum, Qingdao, Shandong, China
- Yulin Zhang
- College of Mathematics and System Science, Shandong University of Science and Technology, Qingdao, Shandong, China
- Sibo Qiao
- School of Computer Science and Technology, China University of Petroleum, Qingdao, Shandong, China
- Mufei Zhang
- Inspur Cloud Information Technology Co., Inspur, Jinan, Shandong, China
23
Wu F, Zhuang X. Minimizing Estimated Risks on Unlabeled Data: A New Formulation for Semi-Supervised Medical Image Segmentation. IEEE Trans Pattern Anal Mach Intell 2023; 45:6021-6036. [PMID: 36251907] [DOI: 10.1109/tpami.2022.3215186]
Abstract
Supervised segmentation can be costly, particularly in biomedical image analysis, where large-scale manual annotations from experts are generally too expensive to obtain. Semi-supervised segmentation, which learns from both labeled and unlabeled images, can be an efficient and effective alternative in such scenarios. In this work, we propose a new formulation based on risk minimization that makes full use of the unlabeled images. Unlike most existing approaches, which explicitly minimize only the prediction risks on the labeled training images, the new formulation also considers the risks on unlabeled images; in particular, this is achieved via an unbiased estimator, on which we build a general framework for semi-supervised image segmentation. We validate this framework on three medical image segmentation tasks: cardiac segmentation on ACDC2017, optic cup and disc segmentation on the REFUGE dataset, and 3D whole-heart segmentation on the MM-WHS dataset. Results show that the proposed estimator is effective, and the segmentation method achieves superior performance and great potential compared with other state-of-the-art approaches. Our code and data will be released via https://zmiclab.github.io/projects.html once the manuscript is accepted for publication.
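The general shape of such a formulation, a supervised risk on labeled data plus an estimated risk on unlabeled data, can be sketched in a few lines. Note the unlabeled term below is a simple entropy-style stand-in chosen for illustration; it is not the paper's unbiased estimator:

```python
import numpy as np

def cross_entropy(p, y):
    """Supervised risk: mean cross-entropy of predicted
    probabilities p (N, C) against integer labels y (N,)."""
    return -np.mean(np.log(p[np.arange(len(y)), y] + 1e-12))

def expected_unlabeled_risk(p):
    """Estimated risk on unlabeled data: the expected per-sample
    cross-entropy under the model's own posterior (an illustrative
    stand-in for the paper's unbiased estimator)."""
    return -np.mean(np.sum(p * np.log(p + 1e-12), axis=1))

def semi_supervised_risk(p_lab, y_lab, p_unl, lam=0.5):
    """Combined objective: labeled risk plus a weighted
    estimated risk over the unlabeled images."""
    return cross_entropy(p_lab, y_lab) + lam * expected_unlabeled_risk(p_unl)
```

Confident, correct predictions drive both terms toward zero, while uncertain unlabeled predictions raise the second term, so the unlabeled pool contributes a training signal.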
24
Chen L, Wu Z, Zhao F, Wang Y, Lin W, Wang L, Li G. An attention-based context-informed deep framework for infant brain subcortical segmentation. Neuroimage 2023; 269:119931. [PMID: 36746299] [PMCID: PMC10241225] [DOI: 10.1016/j.neuroimage.2023.119931]
Abstract
Precise segmentation of subcortical structures from infant brain magnetic resonance (MR) images plays an essential role in studying early subcortical structural and functional developmental patterns and diagnosis of related brain disorders. However, due to the dynamic appearance changes, low tissue contrast, and tiny subcortical size in infant brain MR images, infant subcortical segmentation is a challenging task. In this paper, we propose a context-guided, attention-based, coarse-to-fine deep framework to precisely segment the infant subcortical structures. At the coarse stage, we aim to directly predict the signed distance maps (SDMs) from multi-modal intensity images, including T1w, T2w, and the ratio of T1w and T2w images, with an SDM-Unet, which can leverage the spatial context information, including the structural position information and the shape information of the target structure, to generate high-quality SDMs. At the fine stage, the predicted SDMs, which encode spatial-context information of each subcortical structure, are integrated with the multi-modal intensity images as the input to a multi-source and multi-path attention Unet (M2A-Unet) for achieving refined segmentation. Both the 3D spatial and channel attention blocks are added to guide the M2A-Unet to focus more on the important subregions and channels. We additionally incorporate the inner and outer subcortical boundaries as extra labels to help precisely estimate the ambiguous boundaries. We validate our method on an infant MR image dataset and on an unrelated neonatal MR image dataset. Compared to eleven state-of-the-art methods, the proposed framework consistently achieves higher segmentation accuracy in both qualitative and quantitative evaluations of infant MR images and also exhibits good generalizability in the neonatal dataset.
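The signed distance maps (SDMs) predicted in the coarse stage have a simple definition: negative distance to the boundary inside a structure, positive outside. A brute-force numpy sketch for tiny 2D masks (illustrative only; real pipelines use fast distance transforms such as the Euclidean distance transform):

```python
import numpy as np

def signed_distance_map(mask):
    """Brute-force SDM of a 2D binary mask: for each voxel, the
    Euclidean distance to the nearest voxel of the opposite label,
    negated inside the structure. O(n^2) per pixel, so only
    suitable for small illustrative grids."""
    h, w = mask.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    inside = pts[mask.ravel() == 1]
    outside = pts[mask.ravel() == 0]
    sdm = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            p = np.array([i, j], dtype=float)
            if mask[i, j] == 1:
                # negative distance to the nearest background voxel
                sdm[i, j] = -np.sqrt(((outside - p) ** 2).sum(axis=1)).min()
            else:
                # positive distance to the nearest foreground voxel
                sdm[i, j] = np.sqrt(((inside - p) ** 2).sum(axis=1)).min()
    return sdm
```

The zero level set of the SDM traces the structure boundary, which is what makes it a useful spatial-context target for the coarse network.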
Affiliation(s)
- Liangjun Chen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Zhengwang Wu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Fenqiang Zhao
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Ya Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Weili Lin
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Li Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Gang Li
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
25
He Q, Dong M, Summerfield N, Glide-Hurst C. MAGNET: A Modality-Agnostic Network for 3D Medical Image Segmentation. Proc IEEE Int Symp Biomed Imaging 2023. [PMID: 38169907] [PMCID: PMC10760993] [DOI: 10.1109/isbi53787.2023.10230587]
Abstract
In this paper, we propose MAGNET, a novel modality-agnostic network for 3D medical image segmentation. Unlike existing learning methods, MAGNET is specifically designed for real clinical situations in which multiple modalities/sequences are available during model training but fewer are available or used at inference time. Our results on multiple datasets show that MAGNET, trained on multi-modality data, has the unique ability to make predictions from any subset of the training imaging modalities. It outperforms individually trained uni-modality models, while further boosting performance when more modalities are available at testing.
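One common way to train a network that tolerates arbitrary modality subsets, shown here purely as an illustration (MAGNET's actual mechanism may differ in detail), is modality dropout: randomly zeroing whole modality channels during training while always keeping at least one:

```python
import numpy as np

rng = np.random.default_rng(0)

def modality_dropout(x, keep_prob=0.7):
    """Randomly zero out entire modality channels of a training
    sample so the model learns to predict from any subset of
    modalities. x: (M, H, W) stack of M co-registered modalities.
    Always keeps at least one channel. Illustrative sketch."""
    m = x.shape[0]
    keep = rng.random(m) < keep_prob
    if not keep.any():
        keep[rng.integers(m)] = True   # never drop everything
    out = x.copy()
    out[~keep] = 0.0
    return out, keep
```

At test time the same network can then be fed whichever modalities the clinic actually acquired, with the missing ones zeroed.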
Affiliation(s)
- Qisheng He
- Department of Computer Science, Wayne State University, 5057 Woodward Ave, Detroit, MI 48202, USA
- Ming Dong
- Department of Computer Science, Wayne State University, 5057 Woodward Ave, Detroit, MI 48202, USA
- Nicholas Summerfield
- Department of Human Oncology and Department of Medical Physics, University of Wisconsin-Madison, 600 Highland Ave, Madison, WI 53792, USA
- Carri Glide-Hurst
- Department of Human Oncology and Department of Medical Physics, University of Wisconsin-Madison, 600 Highland Ave, Madison, WI 53792, USA
26
Li J, Chen J, Tang Y, Wang C, Landman BA, Zhou SK. Transforming medical imaging with Transformers? A comparative review of key properties, current progresses, and future perspectives. Med Image Anal 2023; 85:102762. [PMID: 36738650] [PMCID: PMC10010286] [DOI: 10.1016/j.media.2023.102762]
Abstract
The Transformer, one of the latest technological advances in deep learning, has gained prevalence in natural language processing and computer vision. Since medical imaging bears some resemblance to computer vision, it is natural to ask about the status quo of Transformers in medical imaging: can Transformer models transform medical imaging? In this paper, we attempt to answer this question. After a brief introduction to the fundamentals of Transformers, especially in comparison with convolutional neural networks (CNNs), and a highlight of their key defining properties, we offer a comprehensive review of state-of-the-art Transformer-based approaches for medical imaging, covering current research progress in medical image segmentation, recognition, detection, registration, reconstruction, enhancement, and beyond. What distinguishes our review is its organization around the Transformer's key defining properties, mostly derived from comparing Transformers and CNNs, and around architecture type, which specifies the manner in which Transformer and CNN components are combined, all helping readers to understand the rationale behind the reviewed approaches. We conclude with a discussion of future perspectives.
Affiliation(s)
- Jun Li
- Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China
- Junyu Chen
- Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Medical Institutes, Baltimore, MD, USA
- Yucheng Tang
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
- Ce Wang
- Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China
- Bennett A Landman
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
- S Kevin Zhou
- Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China; School of Biomedical Engineering & Suzhou Institute for Advanced Research, Center for Medical Imaging, Robotics, and Analytic Computing & Learning (MIRACLE), University of Science and Technology of China, Suzhou 215123, China
27
Reis HC, Turk V, Khoshelham K, Kaya S. MediNet: transfer learning approach with MediNet medical visual database. Multimedia Tools and Applications 2023; 82:1-44. [PMID: 37362724] [PMCID: PMC10025796] [DOI: 10.1007/s11042-023-14831-1]
Abstract
The rapid development of machine learning has increased interest in deep learning methods in medical research, where they are used for disease detection and classification in the clinical decision-making process. Large amounts of labeled data are often required to train deep neural networks; however, in the medical field, the lack of sufficiently large image datasets and the difficulties encountered during data collection are among the main problems. In this study, we propose MediNet, a new 10-class visual dataset consisting of Röntgen (X-ray), computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, and histopathological images, with classes such as calcaneal normal, calcaneal tumor, colon benign, colon adenocarcinoma, brain normal, brain tumor, breast benign, breast malignant, chest normal, and chest pneumonia. The AlexNet, VGG19-BN, Inception V3, DenseNet 121, ResNet 101, EfficientNet B0, Nested-LSTM + CNN, and proposed RdiNet deep learning algorithms are used for pre-training and classification in a transfer learning setting, where transfer learning aims to apply previously learned knowledge to a new task. Seven algorithms were trained on the MediNet dataset, and the resulting models (feature vectors) were recorded. The pre-trained models were then used for classification studies on chest X-ray, diabetic retinopathy, and COVID-19 datasets with the transfer learning technique. On the chest X-ray dataset, the InceptionV3 model achieved an accuracy of 94.84% in the traditional classification study, which increased to 98.71% after transfer learning was applied. On the COVID-19 dataset, the classification accuracy of the DenseNet121 model was 88% before pre-training and 92% after transfer with MediNet. On the diabetic retinopathy dataset, the accuracy of the Nested-LSTM + CNN model was 79.35% before pre-training and 81.52% after transfer with MediNet. Comparison of the experimental results shows that the proposed method produces more successful results.
Affiliation(s)
- Hatice Catal Reis
- Department of Geomatics Engineering, Gumushane University, 2900 Gumushane, Turkey
- Veysel Turk
- Department of Computer Engineering, University of Harran, Sanliurfa, Turkey
- Kourosh Khoshelham
- Department of Infrastructure Engineering, The University of Melbourne, Parkville, 3052, Australia
- Serhat Kaya
- Department of Mining Engineering, Dicle University, Diyarbakir, Turkey
28
Wang L, Wu Z, Chen L, Sun Y, Lin W, Li G. iBEAT V2.0: a multisite-applicable, deep learning-based pipeline for infant cerebral cortical surface reconstruction. Nat Protoc 2023; 18:1488-1509. [PMID: 36869216] [DOI: 10.1038/s41596-023-00806-x]
Abstract
The human cerebral cortex undergoes dramatic and critical development during early postnatal stages. Benefiting from advances in neuroimaging, many infant brain magnetic resonance imaging (MRI) datasets have been collected from multiple imaging sites with different scanners and imaging protocols for the investigation of normal and abnormal early brain development. However, it is extremely challenging to precisely process and quantify infant brain development with these multisite imaging data because infant brain MRI scans exhibit (a) extremely low and dynamic tissue contrast caused by ongoing myelination and maturation and (b) inter-site data heterogeneity resulting from the use of diverse imaging protocols/scanners. Consequently, existing computational tools and pipelines typically perform poorly on infant MRI data. To address these challenges, we propose a robust, multisite-applicable, infant-tailored computational pipeline that leverages powerful deep learning techniques. The main functionality of the proposed pipeline includes preprocessing, brain skull stripping, tissue segmentation, topology correction, cortical surface reconstruction and measurement. Our pipeline can handle both T1w and T2w structural infant brain MR images well in a wide age range (from birth to 6 years of age) and is effective for different imaging protocols/scanners, despite being trained only on the data from the Baby Connectome Project. Extensive comparisons with existing methods on multisite, multimodal and multi-age datasets demonstrate superior effectiveness, accuracy and robustness of our pipeline. We have maintained a website, iBEAT Cloud, for users to process their images with our pipeline ( http://www.ibeat.cloud ), which has successfully processed over 16,000 infant MRI scans from more than 100 institutions with various imaging protocols/scanners.
Affiliation(s)
- Li Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Zhengwang Wu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Liangjun Chen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Yue Sun
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Weili Lin
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Gang Li
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
29
Zeng Z, Zhao T, Sun L, Zhang Y, Xia M, Liao X, Zhang J, Shen D, Wang L, He Y. 3D-MASNet: 3D mixed-scale asymmetric convolutional segmentation network for 6-month-old infant brain MR images. Hum Brain Mapp 2023; 44:1779-1792. [PMID: 36515219] [PMCID: PMC9921327] [DOI: 10.1002/hbm.26174]
Abstract
Precise segmentation of infant brain magnetic resonance (MR) images into gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) is essential for studying neuroanatomical hallmarks of early brain development. However, for 6-month-old infants, the extremely low intensity contrast caused by ongoing myelination hinders accurate tissue segmentation. Existing convolutional neural network (CNN)-based segmentation models for this task generally employ single-scale symmetric convolutions, which are inefficient for encoding the isointense tissue boundaries in baby brain images. Here, we propose a 3D mixed-scale asymmetric convolutional segmentation network (3D-MASNet) framework for brain MR images of 6-month-old infants. We replaced the traditional convolutional layers of an existing to-be-trained network with 3D mixed-scale convolution blocks consisting of asymmetric kernels (MixACB) during the training phase and then equivalently converted the result back into the original network. Five canonical CNN segmentation models were evaluated using both T1- and T2-weighted images of 23 6-month-old infants from the iSeg-2019 dataset, which contains manual labels as ground truth. MixACB significantly enhanced the average accuracy of all five models, with the largest improvement in the fully convolutional network model (CC-3D-FCN) and the highest performance in the Dense U-Net model. This approach achieved Dice coefficients of 0.931, 0.912, and 0.961 for GM, WM, and CSF, respectively, ranking first among 30 teams on the validation dataset of the iSeg-2019 Grand Challenge. Thus, the proposed 3D-MASNet can improve the accuracy of existing CNN-based segmentation models as a plug-and-play solution, offering a promising technique for future infant brain MRI studies.
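The "equivalent conversion" of asymmetric branches back into the original network rests on the linearity of convolution: parallel square and asymmetric kernels can be folded into a single kernel after training. A 2D numpy sketch of this fusion (illustrative, not the authors' 3D implementation):

```python
import numpy as np

def conv2d(x, k):
    """Valid-mode 2D cross-correlation."""
    kh, kw = k.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (x[i:i + kh, j:j + kw] * k).sum()
    return out

def pad_to_3x3(k):
    """Zero-pad a 1x3 or 3x1 kernel to 3x3, centred, so all
    branches share one receptive-field alignment."""
    out = np.zeros((3, 3))
    kh, kw = k.shape
    r0, c0 = (3 - kh) // 2, (3 - kw) // 2
    out[r0:r0 + kh, c0:c0 + kw] = k
    return out

rng = np.random.default_rng(1)
x = rng.standard_normal((6, 6))
k33 = rng.standard_normal((3, 3))   # square kernel
k13 = rng.standard_normal((1, 3))   # horizontal asymmetric kernel
k31 = rng.standard_normal((3, 1))   # vertical asymmetric kernel

xp = np.pad(x, 1)                   # 'same' padding for 3x3 kernels
# Training-time block: three parallel branches, summed.
branch_sum = (conv2d(xp, k33)
              + conv2d(xp, pad_to_3x3(k13))
              + conv2d(xp, pad_to_3x3(k31)))
# Deployment: fold all branches into one equivalent 3x3 kernel.
fused = k33 + pad_to_3x3(k13) + pad_to_3x3(k31)
fused_out = conv2d(xp, fused)
```

Because the two computations agree, the enriched training-time block adds no cost at inference, which is what makes the approach plug-and-play.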
Affiliation(s)
- Zilong Zeng
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Tengda Zhao
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Lianglong Sun
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Yihe Zhang
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Mingrui Xia
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Xuhong Liao
- School of Systems Science, Beijing Normal University, Beijing, China
- Jiaying Zhang
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Shanghai Clinical Research and Trial Center, Shanghai, China
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Li Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- Yong He
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Chinese Institute for Brain Research, Beijing, China
30
Qureshi I, Yan J, Abbas Q, Shaheed K, Riaz AB, Wahid A, Khan MWJ, Szczuko P. Medical image segmentation using deep semantic-based methods: A review of techniques, applications and emerging trends. Information Fusion 2023. [DOI: 10.1016/j.inffus.2022.09.031]
31
Cabeza-Ruiz R, Velázquez-Pérez L, Pérez-Rodríguez R, Reetz K. ConvNets for automatic detection of polyglutamine SCAs from brain MRIs: state of the art applications. Med Biol Eng Comput 2023; 61:1-24. [PMID: 36385616] [DOI: 10.1007/s11517-022-02714-w]
Abstract
Polyglutamine spinocerebellar ataxias (polyQ SCAs) are a group of clinically and genetically heterogeneous neurodegenerative diseases characterized by loss of balance and motor coordination due to dysfunction of the cerebellum and its connections. The diagnosis of each type of polyQ SCA, alongside genetic tests, includes medical image analysis, and its automation may help specialists distinguish between the types. Convolutional neural networks (ConvNets or CNNs) have recently been used for medical image processing, with outstanding results. In this work, we present the main clinical and imaging features of polyglutamine SCAs and the basics of CNNs. Finally, we review studies that have used this approach to automatically process brain medical images and that may be applied to SCA detection. We conclude by discussing possible limitations and opportunities of using ConvNets for SCA diagnosis in the future.
Affiliation(s)
- Luis Velázquez-Pérez
- Cuban Academy of Sciences, La Habana, Cuba
- Center for the Research and Rehabilitation of Hereditary Ataxias, Holguín, Cuba
- Roberto Pérez-Rodríguez
- CAD/CAM Study Center, University of Holguín, Holguín, Cuba
- Cuban Academy of Sciences, La Habana, Cuba
- Kathrin Reetz
- Department of Neurology, RWTH Aachen University, Aachen, Germany
32
Li M, Jiang Z, Shen W, Liu H. Deep learning in bladder cancer imaging: A review. Front Oncol 2022; 12:930917. [PMID: 36338676] [PMCID: PMC9631317] [DOI: 10.3389/fonc.2022.930917]
Abstract
Deep learning (DL) is a rapidly developing field in machine learning (ML). The concept of deep learning originates from research on artificial neural networks and is an upgrade of traditional neural networks. It has achieved great success in various domains and has shown potential in solving medical problems, particularly when using medical images. Bladder cancer (BCa) is the tenth most common cancer in the world. Imaging, as a safe, noninvasive, and relatively inexpensive technique, is a powerful tool to aid in the diagnosis and treatment of bladder cancer. In this review, we provide an overview of the latest progress in the application of deep learning to the imaging assessment of bladder cancer. First, we review the current deep learning approaches used for bladder segmentation. We then provide examples of how deep learning helps in the diagnosis, staging, and treatment management of bladder cancer using medical images. Finally, we summarize the current limitations of deep learning and provide suggestions for future improvements.
Affiliation(s)
- Mingyang Li
- Department of Urology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Zekun Jiang
- Ministry of Education (MoE) Key Lab of Artificial Intelligence, Artificial Intelligence (AI) Institute, Shanghai Jiao Tong University, Shanghai, China
- Wei Shen
- Ministry of Education (MoE) Key Lab of Artificial Intelligence, Artificial Intelligence (AI) Institute, Shanghai Jiao Tong University, Shanghai, China
- Haitao Liu
- Department of Urology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
33
Liu H, Zhuang Y, Song E, Xu X, Hung CC. A bidirectional multilayer contrastive adaptation network with anatomical structure preservation for unpaired cross-modality medical image segmentation. Comput Biol Med 2022; 149:105964. [PMID: 36007288] [DOI: 10.1016/j.compbiomed.2022.105964]
Abstract
Multi-modal medical image segmentation has achieved great success with supervised deep learning networks. However, because of domain shift and limited annotation information, unpaired cross-modality segmentation remains challenging. Unsupervised domain adaptation (UDA) methods can alleviate the performance degradation of cross-modality segmentation through knowledge transfer between domains, but current methods still suffer from model collapse, unstable adversarial training, and mismatched anatomical structures. To tackle these issues, we propose a bidirectional multilayer contrastive adaptation network (BMCAN) for unpaired cross-modality segmentation. A shared encoder is first adopted to learn modality-invariant encoding representations for image synthesis and segmentation simultaneously. Second, to retain anatomical structure consistency in cross-modality image synthesis, we present a structure-constrained cross-modality image translation approach for image alignment. Third, we construct a bidirectional multilayer contrastive learning approach to preserve anatomical structures and enhance encoding representations, using two groups of domain-specific multilayer perceptron (MLP) networks to learn modality-specific features. Finally, a semantic information adversarial learning approach is designed to learn structural similarities of semantic outputs for output-space alignment. The proposed method was tested on three cross-modality segmentation tasks: brain tissue, brain tumor, and cardiac substructure segmentation. Compared with other UDA methods, experimental results show that BMCAN achieves state-of-the-art segmentation performance on all three tasks, with fewer training components and better feature representations for overcoming overfitting and domain shift. Our method can efficiently reduce the annotation burden of radiologists in cross-modality image analysis.
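The contrastive component of such methods can be illustrated with a generic InfoNCE-style loss, shown here as a standard-form sketch rather than BMCAN's exact multilayer objective: an anchor feature is pulled toward its positive and pushed away from negatives.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style contrastive loss on L2-normalised feature
    vectors. anchor, positive: (D,); negatives: (K, D);
    tau: temperature. Generic formulation for illustration."""
    def unit(v):
        return v / (np.linalg.norm(v, axis=-1, keepdims=True) + 1e-12)
    a, p, n = unit(anchor), unit(positive), unit(negatives)
    logits = np.concatenate(([a @ p], n @ a)) / tau
    logits = logits - logits.max()            # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0] + 1e-12)          # positive should win
```

Applying such a loss at several encoder depths (as a multilayer scheme does) encourages the shared encoder to keep corresponding anatomy close across modalities at every level of abstraction.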
Affiliation(s)
- Hong Liu
- Center for Biomedical Imaging and Bioinformatics, School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, 430074, China
- Yuzhou Zhuang
- Institute of Artificial Intelligence, Huazhong University of Science and Technology, Wuhan, 430074, China
- Enmin Song
- Center for Biomedical Imaging and Bioinformatics, School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, 430074, China
- Xiangyang Xu
- Center for Biomedical Imaging and Bioinformatics, School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, 430074, China
- Chih-Cheng Hung
- Center for Machine Vision and Security Research, Kennesaw State University, Marietta, GA, 30060, USA
34
Fully Convolutional Neural Network for Improved Brain Segmentation. Arabian Journal for Science and Engineering 2022. [DOI: 10.1007/s13369-022-07169-7]
35
Khaled A, Han JJ, Ghaleb TA. Learning to detect boundary information for brain image segmentation. BMC Bioinformatics 2022; 23:332. [PMID: 35953776] [PMCID: PMC9367147] [DOI: 10.1186/s12859-022-04882-w]
Abstract
MRI brain images are always of low contrast, which makes it difficult to identify which region the information at the boundary of a brain image belongs to. This makes feature extraction at the boundary more challenging, since such features can be misleading, mixing properties of different brain regions. Hence, image boundary detection plays a vital role in medical image segmentation, and in brain segmentation in particular, as unclear boundaries can worsen segmentation results. Yet, given the low quality of brain images, boundary detection in the context of brain image segmentation remains challenging. Despite the research invested in improving boundary detection and brain segmentation, these two problems have largely been addressed independently; little attention has been paid to applying boundary detection to brain segmentation tasks. Therefore, in this paper, we propose a boundary detection-based model for brain image segmentation. To this end, we first design a boundary segmentation network for detecting and segmenting brain tissue images. Then, we design a boundary information module (BIM) to distinguish the boundaries between the three brain tissues. After that, we add a boundary attention gate (BAG) to the encoder output layers of our transformer to capture more informative local details. We evaluate our proposed model on two datasets of brain tissue images, covering infant and adult brains. Extensive evaluation experiments show better performance (a Dice Coefficient (DC) accuracy of up to 5.3% compared to the state-of-the-art models) in detecting and segmenting brain tissue images.
Collapse
Affiliation(s)
- Afifa Khaled
- School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China.
| | - Jian-Jun Han
- School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China
| | - Taher A Ghaleb
- School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, Canada
| |
Collapse
|
36
|
Cao X, Chen H, Li Y, Peng Y, Zhou Y, Cheng L, Liu T, Shen D. Auto-DenseUNet: Searchable neural network architecture for mass segmentation in 3D automated breast ultrasound. Med Image Anal 2022; 82:102589. [DOI: 10.1016/j.media.2022.102589] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2021] [Revised: 07/18/2022] [Accepted: 08/17/2022] [Indexed: 11/15/2022]
|
37
|
Niyas S, Pawan S, Anand Kumar M, Rajan J. Medical image segmentation with 3D convolutional neural networks: A survey. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.04.065] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
|
38
|
Zhang Y, Liao Q, Ding L, Zhang J. Bridging 2D and 3D segmentation networks for computation-efficient volumetric medical image segmentation: An empirical study of 2.5D solutions. Comput Med Imaging Graph 2022; 99:102088. [DOI: 10.1016/j.compmedimag.2022.102088] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2020] [Revised: 05/24/2022] [Accepted: 05/26/2022] [Indexed: 11/28/2022]
|
39
|
Wei D, Ahmad S, Guo Y, Chen L, Huang Y, Ma L, Wu Z, Li G, Wang L, Lin W, Yap PT, Shen D, Wang Q. Recurrent Tissue-Aware Network for Deformable Registration of Infant Brain MR Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1219-1229. [PMID: 34932474 PMCID: PMC9064923 DOI: 10.1109/tmi.2021.3137280] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Deformable registration is fundamental to longitudinal and population-based image analyses. However, it is challenging to precisely align longitudinal infant brain MR images of the same subject, as well as cross-sectional infant brain MR images of different subjects, due to fast brain development during infancy. In this paper, we propose a recurrently usable deep neural network for the registration of infant brain MR images. There are three main highlights of our proposed method. (i) We use brain tissue segmentation maps for registration, instead of intensity images, to tackle the issue of rapid contrast changes of brain tissues during the first year of life. (ii) A single registration network is trained in a one-shot manner, and then recurrently applied multiple times during inference, such that the complex deformation field can be recovered incrementally. (iii) We also incorporate both an adaptive smoothing layer and a tissue-aware anti-folding constraint into the registration network to ensure the physiological plausibility of estimated deformations without degrading the registration accuracy. Experimental results, in comparison to the state-of-the-art registration methods, indicate that our proposed method achieves the highest registration accuracy while still preserving the smoothness of the deformation field. The implementation of our proposed registration network is available online https://github.com/Barnonewdm/ACTA-Reg-Net.
Collapse
|
40
|
Wei J, Wu Z, Wang L, Bui TD, Qu L, Yap PT, Xia Y, Li G, Shen D. A cascaded nested network for 3T brain MR image segmentation guided by 7T labeling. PATTERN RECOGNITION 2022; 124:108420. [PMID: 38469076 PMCID: PMC10927017 DOI: 10.1016/j.patcog.2021.108420] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/13/2024]
Abstract
Accurate segmentation of the brain into gray matter, white matter, and cerebrospinal fluid using magnetic resonance (MR) imaging is critical for visualization and quantification of brain anatomy. Compared to 3T MR images, 7T MR images exhibit higher tissue contrast, which is conducive to accurate tissue delineation for training segmentation models. In this paper, we propose a cascaded nested network (CaNes-Net) for segmentation of 3T brain MR images, trained by tissue labels delineated from the corresponding 7T images. We first train a nested network (Nes-Net) for a rough segmentation. The second Nes-Net uses tissue-specific geodesic distance maps as contextual information to refine the segmentation. This process is iterated to build CaNes-Net with a cascade of Nes-Net modules to gradually refine the segmentation. To alleviate the misalignment between 3T and corresponding 7T MR images, we incorporate a correlation coefficient map to allow well-aligned voxels to play a more important role in supervising the training process. We compared CaNes-Net with SPM and FSL tools, as well as four deep learning models on 18 adult subjects and the ADNI dataset. Our results indicate that CaNes-Net reduces segmentation errors caused by the misalignment and improves segmentation accuracy substantially over the competing methods.
Collapse
Affiliation(s)
- Jie Wei
- National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi’an 710072, China
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Zhengwang Wu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Li Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Toan Duc Bui
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Liangqiong Qu
- Department of Biomedical Data Science at Stanford University, Stanford, CA 94305, USA
| | - Pew-Thian Yap
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Yong Xia
- National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi’an 710072, China
| | - Gang Li
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
| |
Collapse
|
41
|
Largent A, De Asis-Cruz J, Kapse K, Barnett SD, Murnick J, Basu S, Andersen N, Norman S, Andescavage N, Limperopoulos C. Automatic brain segmentation in preterm infants with post-hemorrhagic hydrocephalus using 3D Bayesian U-Net. Hum Brain Mapp 2022; 43:1895-1916. [PMID: 35023255 PMCID: PMC8933325 DOI: 10.1002/hbm.25762] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2021] [Revised: 12/08/2021] [Accepted: 12/11/2021] [Indexed: 12/17/2022] Open
Abstract
Post‐hemorrhagic hydrocephalus (PHH) is a severe complication of intraventricular hemorrhage (IVH) in very preterm infants. PHH monitoring and treatment decisions rely heavily on manual and subjective two‐dimensional measurements of the ventricles. Automatic and reliable three‐dimensional (3D) measurements of the ventricles may provide a more accurate assessment of PHH, and lead to improved monitoring and treatment decisions. To accurately and efficiently obtain these 3D measurements, automatic segmentation of the ventricles can be explored. However, this segmentation is challenging due to the large ventricular anatomical shape variability in preterm infants diagnosed with PHH. This study aims to (a) propose a Bayesian U‐Net method using 3D spatial concrete dropout for automatic brain segmentation (with uncertainty assessment) of preterm infants with PHH; and (b) compare the Bayesian method to three reference methods: DenseNet, U‐Net, and ensemble learning using DenseNets and U‐Nets. A total of 41 T2‐weighted MRIs from 27 preterm infants were manually segmented into lateral ventricles, external CSF, white and cortical gray matter, brainstem, and cerebellum. These segmentations were used as ground truth for model evaluation. All methods were trained and evaluated using 4‐fold cross‐validation and segmentation endpoints, with additional uncertainty endpoints for the Bayesian method. In the lateral ventricles, segmentation endpoint values for the DenseNet, U‐Net, ensemble learning, and Bayesian U‐Net methods were mean Dice score = 0.814 ± 0.213, 0.944 ± 0.041, 0.942 ± 0.042, and 0.948 ± 0.034 respectively. Uncertainty endpoint values for the Bayesian U‐Net were mean recall = 0.953 ± 0.037, mean negative predictive value = 0.998 ± 0.005, mean accuracy = 0.906 ± 0.032, and mean AUC = 0.949 ± 0.031. To conclude, the Bayesian U‐Net showed the best segmentation results across all methods and provided accurate uncertainty maps. This method may be used in clinical practice for automatic brain segmentation of preterm infants with PHH, and lead to better PHH monitoring and more informed treatment decisions.
Collapse
Affiliation(s)
- Axel Largent
- Developing Brain Institute, Department of Diagnostic Imaging and Radiology, Children's National Hospital, Washington, District of Columbia, USA
| | - Josepheen De Asis-Cruz
- Developing Brain Institute, Department of Diagnostic Imaging and Radiology, Children's National Hospital, Washington, District of Columbia, USA
| | - Kushal Kapse
- Developing Brain Institute, Department of Diagnostic Imaging and Radiology, Children's National Hospital, Washington, District of Columbia, USA
| | - Scott D Barnett
- Developing Brain Institute, Department of Diagnostic Imaging and Radiology, Children's National Hospital, Washington, District of Columbia, USA
| | - Jonathan Murnick
- Developing Brain Institute, Department of Diagnostic Imaging and Radiology, Children's National Hospital, Washington, District of Columbia, USA
| | - Sudeepta Basu
- Developing Brain Institute, Department of Diagnostic Imaging and Radiology, Children's National Hospital, Washington, District of Columbia, USA
| | - Nicole Andersen
- Developing Brain Institute, Department of Diagnostic Imaging and Radiology, Children's National Hospital, Washington, District of Columbia, USA
| | - Stephanie Norman
- Developing Brain Institute, Department of Diagnostic Imaging and Radiology, Children's National Hospital, Washington, District of Columbia, USA
| | - Nickie Andescavage
- Developing Brain Institute, Department of Diagnostic Imaging and Radiology, Children's National Hospital, Washington, District of Columbia, USA.,Department of Neonatology, Children's National Hospital, Washington, District of Columbia, USA
| | - Catherine Limperopoulos
- Developing Brain Institute, Department of Diagnostic Imaging and Radiology, Children's National Hospital, Washington, District of Columbia, USA.,Departments of Radiology and Pediatrics, George Washington University, Washington, District of Columbia, USA.,Neurology School of Medicine and Health Sciences, George Washington University, Washington, District of Columbia, USA
| |
Collapse
|
42
|
Gao K, Sun Y, Niu S, Wang L. Unified framework for early stage status prediction of autism based on infant structural magnetic resonance imaging. Autism Res 2021; 14:2512-2523. [PMID: 34643325 PMCID: PMC8665129 DOI: 10.1002/aur.2626] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2021] [Revised: 09/04/2021] [Accepted: 09/24/2021] [Indexed: 11/25/2022]
Abstract
Autism, or autism spectrum disorder (ASD), is a developmental disability that is diagnosed at about 2 years of age based on abnormal behaviors. Existing neuroimaging‐based methods for the prediction of ASD typically focus on functional magnetic resonance imaging (fMRI); however, most of these fMRI‐based studies include subjects older than 5 years of age. Due to challenges in the application of fMRI for infants, structural magnetic resonance imaging (sMRI) has increasingly received attention in the field for early status prediction of ASD. In this study, we propose an automated prediction framework based on infant sMRI at about 24 months of age. Specifically, by leveraging an infant‐dedicated pipeline, iBEAT V2.0 Cloud, we derived segmentation and parcellation maps from infant sMRI. We employed a convolutional neural network to extract features from pairwise maps and a Siamese network to distinguish whether paired subjects were from the same or different classes. As compared to T1w imaging without segmentation and parcellation maps, our proposed approach with segmentation and parcellation maps yielded greater sensitivity, specificity, and accuracy of ASD prediction, which was validated using two datasets with different imaging protocols/scanners and was confirmed by receiver operating characteristic analysis. Furthermore, comparison with state‐of‐the‐art methods demonstrated the superior effectiveness and robustness of the proposed method. Finally, attention maps were generated to identify subject‐specific autism effects, supporting the reasonability of the predictive results. Collectively, these findings demonstrate the utility of our unified framework for the early‐stage status prediction of ASD by sMRI.
Collapse
Affiliation(s)
- Kun Gao
- Developing Brain Computing Lab, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
| | - Yue Sun
- Developing Brain Computing Lab, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
| | - Sijie Niu
- Developing Brain Computing Lab, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA.,School of Information Science and Engineering, University of Jinan, Jinan, China
| | - Li Wang
- Developing Brain Computing Lab, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
| |
Collapse
|
43
|
HybridCTrm: Bridging CNN and Transformer for Multimodal Brain Image Segmentation. JOURNAL OF HEALTHCARE ENGINEERING 2021; 2021:7467261. [PMID: 34630994 PMCID: PMC8500745 DOI: 10.1155/2021/7467261] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/09/2021] [Revised: 09/03/2021] [Accepted: 09/16/2021] [Indexed: 11/17/2022]
Abstract
Multimodal medical image segmentation is a critical problem in medical image analysis. Traditional deep learning methods utilize fully convolutional networks (CNNs) for encoding given images, thus leading to a deficiency of long-range dependencies and poor generalization performance. Recently, a sequence of Transformer-based methodologies has emerged in the field of image processing, bringing strong generalization and performance in various tasks. On the other hand, traditional CNNs have their own advantages, such as rapid convergence and local representations. Therefore, we analyze a hybrid multimodal segmentation method based on Transformers and CNNs and propose a novel architecture, the HybridCTrm network. We conduct experiments using HybridCTrm on two benchmark datasets and compare it with HyperDenseNet, a network based on fully convolutional networks. Results show that our HybridCTrm outperforms HyperDenseNet on most of the evaluation metrics. Furthermore, we analyze the influence of the depth of the Transformer on the performance. Besides, we visualize the results and carefully explore how our hybrid methods improve the segmentations.
Collapse
|
44
|
Wang S, Li C, Wang R, Liu Z, Wang M, Tan H, Wu Y, Liu X, Sun H, Yang R, Liu X, Chen J, Zhou H, Ben Ayed I, Zheng H. Annotation-efficient deep learning for automatic medical image segmentation. Nat Commun 2021; 12:5915. [PMID: 34625565 PMCID: PMC8501087 DOI: 10.1038/s41467-021-26216-9] [Citation(s) in RCA: 63] [Impact Index Per Article: 15.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2021] [Accepted: 09/22/2021] [Indexed: 01/17/2023] Open
Abstract
Automatic medical image segmentation plays a critical role in scientific research and medical care. Existing high-performance deep learning methods typically rely on large training datasets with high-quality manual annotations, which are difficult to obtain in many clinical applications. Here, we introduce Annotation-effIcient Deep lEarning (AIDE), an open-source framework to handle imperfect training datasets. Methodological analyses and empirical evaluations are conducted, and we demonstrate that AIDE surpasses conventional fully-supervised models by presenting better performance on open datasets possessing scarce or noisy annotations. We further test AIDE in a real-life case study for breast tumor segmentation. Three datasets containing 11,852 breast images from three medical centers are employed, and AIDE, utilizing 10% training annotations, consistently produces segmentation maps comparable to those generated by fully-supervised counterparts or provided by independent radiologists. The 10-fold enhanced efficiency in utilizing expert labels has the potential to promote a wide range of biomedical applications.
Collapse
Affiliation(s)
- Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China.
- Peng Cheng Laboratory, Shenzhen, Guangdong, China.
- Pazhou Laboratory, Guangzhou, Guangdong, China.
| | - Cheng Li
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China.
| | - Rongpin Wang
- Department of Medical Imaging, Guizhou Provincial People's Hospital, Guiyang, Guizhou, China
| | - Zaiyi Liu
- Department of Medical Imaging, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong, China
| | - Meiyun Wang
- Department of Medical Imaging, Henan Provincial People's Hospital & the People's Hospital of Zhengzhou University, Zhengzhou, Henan, China
| | - Hongna Tan
- Department of Medical Imaging, Henan Provincial People's Hospital & the People's Hospital of Zhengzhou University, Zhengzhou, Henan, China
| | - Yaping Wu
- Department of Medical Imaging, Henan Provincial People's Hospital & the People's Hospital of Zhengzhou University, Zhengzhou, Henan, China
| | - Xinfeng Liu
- Department of Medical Imaging, Guizhou Provincial People's Hospital, Guiyang, Guizhou, China
| | - Hui Sun
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
| | - Rui Yang
- Department of Urology, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
| | - Xin Liu
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
| | - Jie Chen
- Peng Cheng Laboratory, Shenzhen, Guangdong, China
- School of Electronic and Computer Engineering, Shenzhen Graduate School, Peking University, Shenzhen, Guangdong, China
| | - Huihui Zhou
- Brain Cognition and Brain Disease Institute, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
| | | | - Hairong Zheng
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China.
| |
Collapse
|
45
|
Delisle PL, Anctil-Robitaille B, Desrosiers C, Lombaert H. Realistic image normalization for multi-Domain segmentation. Med Image Anal 2021; 74:102191. [PMID: 34509168 DOI: 10.1016/j.media.2021.102191] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2020] [Revised: 06/22/2021] [Accepted: 07/19/2021] [Indexed: 11/16/2022]
Abstract
Image normalization is a building block in medical image analysis. Conventional approaches are customarily employed on a per-dataset basis. This strategy, however, prevents the current normalization algorithms from fully exploiting the complex joint information available across multiple datasets. Consequently, ignoring such joint information has a direct impact on the processing of segmentation algorithms. This paper proposes to revisit the conventional image normalization approach by, instead, learning a common normalizing function across multiple datasets. Jointly normalizing multiple datasets is shown to yield consistent normalized images as well as an improved image segmentation when intensity shifts are large. To do so, a fully automated adversarial and task-driven normalization approach is employed as it facilitates the training of realistic and interpretable images while keeping performance on par with the state-of-the-art. The adversarial training of our network aims at finding the optimal transfer function to jointly improve both the segmentation accuracy and the generation of realistic images. We have evaluated the performance of our normalizer on both infant and adult brain images from the iSEG, MRBrainS and ABIDE datasets. The results indicate that our contribution does provide an improved realism to the normalized images, while retaining a segmentation accuracy on par with the state-of-the-art learnable normalization approaches.
Collapse
Affiliation(s)
| | | | | | - Herve Lombaert
- Department of Computer and Software Engineering, ETS Montreal, Canada
| |
Collapse
|
46
|
Sun Y, Gao K, Lin W, Li G, Niu S, Wang L. Multi-Scale Self-Supervised Learning for Multi-Site Pediatric Brain MR Image Segmentation with Motion/Gibbs Artifacts. MACHINE LEARNING IN MEDICAL IMAGING. MLMI (WORKSHOP) 2021; 12966:171-179. [PMID: 35528703 PMCID: PMC9077100 DOI: 10.1007/978-3-030-87589-3_18] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Accurate tissue segmentation of large-scale pediatric brain MR images from multiple sites is essential to characterize early brain development. Due to imaging motion/Gibbs artifacts and multi-site issue (or domain shift issue), it remains a challenge to accurately segment brain tissues from multi-site pediatric MR images. In this paper, we present a multi-scale self-supervised learning (M-SSL) framework to accurately segment tissues for multi-site pediatric brain MR images with artifacts. Specifically, we first work on the downsampled images to estimate coarse tissue probabilities and build a global anatomic guidance. We then train another segmentation model based on the original images to estimate fine tissue probabilities, which are further integrated with the global anatomic guidance to refine the segmentation results. In the testing stage, to alleviate the multi-site issue, we propose an iterative self-supervised learning strategy to train a site-specific segmentation model based on a set of reliable training samples automatically generated for a to-be-segmented site. The experimental results on pediatric brain MR images with real artifacts and multi-site subjects from the iSeg2019 challenge demonstrate that our M-SSL method achieves better performance compared with several state-of-the-art methods.
Collapse
Affiliation(s)
- Yue Sun
- Department of Shandong Provincial Key Laboratory of Network based Intelligent Computing, University of Jinan, Jinan 250022, China
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, USA
| | - Kun Gao
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, USA
| | - Weili Lin
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, USA
| | - Gang Li
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, USA
| | - Sijie Niu
- Department of Shandong Provincial Key Laboratory of Network based Intelligent Computing, University of Jinan, Jinan 250022, China
| | - Li Wang
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, USA
| |
Collapse
|
47
|
Basnet R, Ahmad MO, Swamy M. A deep dense residual network with reduced parameters for volumetric brain tissue segmentation from MR images. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.103063] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
48
|
Chen J, Sun Y, Fang Z, Lin W, Li G, Wang L. Harmonized neonatal brain MR image segmentation model for cross-site datasets. Biomed Signal Process Control 2021; 69. [DOI: 10.1016/j.bspc.2021.102810] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/07/2023]
|
49
|
Le N, Bui T, Vo-Ho VK, Yamazaki K, Luu K. Narrow Band Active Contour Attention Model for Medical Segmentation. Diagnostics (Basel) 2021; 11:1393. [PMID: 34441327 PMCID: PMC8393587 DOI: 10.3390/diagnostics11081393] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2021] [Revised: 07/06/2021] [Accepted: 07/09/2021] [Indexed: 11/16/2022] Open
Abstract
Medical image segmentation is one of the most challenging tasks in medical image analysis and is widely developed for many clinical applications. While deep learning-based approaches have achieved impressive performance in semantic segmentation, they are limited to pixel-wise settings with imbalanced-class data problems and weak boundary object segmentation in medical images. In this paper, we tackle those limitations by developing a new two-branch deep network architecture which takes both higher level features and lower level features into account. The first branch extracts higher level features as region information by a common encoder-decoder network structure such as Unet and FCN, whereas the second branch focuses on lower level features as support information around the boundary and processes in parallel to the first branch. Our key contribution is the second branch, named the Narrow Band Active Contour (NB-AC) attention model, which treats the object contour as a hyperplane and all data inside a narrow band as support information that influences the position and orientation of the hyperplane. Our proposed NB-AC attention model incorporates the contour length with the region energy involving a fixed-width band around the curve or surface. The proposed network loss contains two fitting terms: (i) a high level feature (i.e., region) fitting term from the first branch; (ii) a lower level feature (i.e., contour) fitting term from the second branch including the (ii1) length of the object contour and (ii2) regional energy functional formed by the homogeneity criterion of both the inner band and outer band neighboring the evolving curve or surface. The proposed NB-AC loss can be incorporated into both 2D and 3D deep network architectures. The proposed network has been evaluated on different challenging medical image datasets, including DRIVE, iSeg17, MRBrainS18 and Brats18. The experimental results have shown that the proposed NB-AC loss outperforms other mainstream loss functions: Cross Entropy, Dice, Focal on two common segmentation frameworks, Unet and FCN. Our 3D network, which is built upon the proposed NB-AC loss and 3DUnet framework, achieved state-of-the-art results on multiple volumetric datasets.
Collapse
Affiliation(s)
- Ngan Le
- Department of CSCE, University of Arkansas, Fayetteville, AR 72701, USA; (V.-K.V.-H.); (K.Y.); (K.L.)
| | - Toan Bui
- Vin-AI Research, HaNoi 100000, Vietnam;
| | - Viet-Khoa Vo-Ho
- Department of CSCE, University of Arkansas, Fayetteville, AR 72701, USA; (V.-K.V.-H.); (K.Y.); (K.L.)
| | - Kashu Yamazaki
- Department of CSCE, University of Arkansas, Fayetteville, AR 72701, USA; (V.-K.V.-H.); (K.Y.); (K.L.)
| | - Khoa Luu
- Department of CSCE, University of Arkansas, Fayetteville, AR 72701, USA; (V.-K.V.-H.); (K.Y.); (K.L.)
| |
Collapse
|
50
|
Luan X, Zheng X, Li W, Liu L, Shu Y, Guo Y. Rubik-Net: Learning Spatial Information via Rotation-Driven Convolutions for Brain Segmentation. IEEE J Biomed Health Inform 2021; 26:289-300. [PMID: 34242176 DOI: 10.1109/jbhi.2021.3095846] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
The accurate segmentation of brain tissue in Magnetic Resonance Image (MRI) slices is essential for assessing neurological conditions and brain diseases. However, it is challenging to segment MRI slices because of the low contrast between different brain tissues and the partial volume effect. A 2-Dimensional (2-D) convolutional network cannot handle such volumetric medical image data well because it overlooks spatial information between MRI slices. Although 3-Dimensional (3-D) convolutions capture volumetric spatial information, they have not been fully exploited to enhance the representational ability of deep networks; moreover, they may lead to overfitting in the case of insufficient training data. In this paper, we propose a novel convolutional mechanism, termed Rubik convolution, to capture multi-dimensional information between MRI slices. Rubik convolution rotates the axis of a set of consecutive slices, which enables a 2-D convolution kernel to extract features of each axial plane simultaneously. Next, feature maps are rotated back to fuse multi-dimensional information via the Max-View-Maps. Furthermore, we propose an efficient 2-D convolutional network, namely Rubik-Net, where the residual connections and the bottleneck structure are used to enhance information transmission and reduce the number of network parameters. The proposed Rubik-Net shows promising results on the iSeg2017, iSeg2019, IBSR and Brainweb datasets in terms of segmentation accuracy. In particular, we achieved the best results in 95th-percentile Hausdorff distance and average surface distance in cerebrospinal fluid segmentation on the most challenging iSeg2019 dataset. The experiments indicate that Rubik-Net improves the accuracy and efficiency of medical image segmentation. Moreover, Rubik convolution can be easily embedded into existing 2-D convolutional networks.
Collapse
|