1. Chen X, Liu Q, Deng HH, Kuang T, Lin HHY, Xiao D, Gateno J, Xia JJ, Yap PT. Improving Image Segmentation with Contextual and Structural Similarity. Pattern Recognit 2024; 152:110489. PMID: 38645435; PMCID: PMC11027435; DOI: 10.1016/j.patcog.2024.110489.
Abstract
Deep learning models for medical image segmentation are usually trained with voxel-wise losses, e.g., cross-entropy loss, focusing on unary supervision without considering inter-voxel relationships. This oversight potentially leads to semantically inconsistent predictions. Here, we propose a contextual similarity loss (CSL) and a structural similarity loss (SSL) to explicitly and efficiently incorporate inter-voxel relationships for improved performance. The CSL promotes consistency in predicted object categories for each image sub-region compared to ground truth. The SSL enforces compatibility between the predictions of voxel pairs by computing pair-wise distances between them, ensuring that voxels of the same class are close together whereas those from different classes are separated by a wide margin in the distribution space. The effectiveness of the CSL and SSL is evaluated using a clinical cone-beam computed tomography (CBCT) dataset of patients with various craniomaxillofacial (CMF) deformities and a public pancreas dataset. Experimental results show that the CSL and SSL outperform state-of-the-art regional loss functions in preserving segmentation semantics.
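The pair-wise mechanism described above can be illustrated with a toy contrastive-style loss. This sketch is a hypothetical simplification, not the authors' implementation: the function name, the margin value, and the use of Euclidean distance over softmax outputs are all illustrative assumptions.

```python
import numpy as np

def structural_similarity_loss(probs, labels, margin=1.0):
    """Toy pair-wise loss in the spirit of the SSL above: predictions of
    same-class voxel pairs are pulled together, while different-class
    pairs are pushed at least `margin` apart in the output space.
    probs: (N, C) class-probability vectors; labels: (N,) ground truth."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels)
    n = len(labels)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(probs[i] - probs[j])
            if labels[i] == labels[j]:
                total += d ** 2                     # pull same-class pairs together
            else:
                total += max(0.0, margin - d) ** 2  # push different-class pairs apart
            pairs += 1
    return total / pairs
```

Sharp, class-consistent predictions incur a lower loss than flat, ambiguous ones, which is the behavior the pair-wise supervision is meant to reward.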
Affiliation(s)
- Xiaoyang Chen
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC 27599, USA
- Qin Liu
- Department of Computer Science, University of North Carolina, Chapel Hill, NC 27599, USA
- Hannah H. Deng
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA
- Tianshu Kuang
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA
- Henry Hung-Ying Lin
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA
- Deqiang Xiao
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC 27599, USA
- Jaime Gateno
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA
- Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, NY 10065, USA
- James J. Xia
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA
- Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, NY 10065, USA
- Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC 27599, USA
2. Guan H, Yap PT, Bozoki A, Liu M. Federated learning for medical image analysis: A survey. Pattern Recognit 2024; 151:110424. PMID: 38559674; PMCID: PMC10976951; DOI: 10.1016/j.patcog.2024.110424.
Abstract
Machine learning in medical imaging often faces a fundamental dilemma, namely, the small sample size problem. Many recent studies suggest using multi-domain data pooled from different acquisition sites/centers to improve statistical power. However, medical images from different sites cannot be easily shared to build large datasets for model training due to privacy protection reasons. As a promising solution, federated learning, which enables collaborative training of machine learning models based on data from different sites without cross-site data sharing, has attracted considerable attention recently. In this paper, we conduct a comprehensive survey of the recent development of federated learning methods in medical image analysis. We have systematically gathered research papers on federated learning and its applications in medical image analysis published between 2017 and 2023. Our search and compilation were conducted using databases from IEEE Xplore, ACM Digital Library, Science Direct, Springer Link, Web of Science, Google Scholar, and PubMed. In this survey, we first introduce the background of federated learning for dealing with privacy protection and collaborative learning issues. We then present a comprehensive review of recent advances in federated learning methods for medical image analysis. Specifically, existing methods are categorized based on three critical aspects of a federated learning system: the client end, the server end, and communication techniques. In each category, we summarize the existing federated learning methods according to specific research problems in medical image analysis and also provide insights into the motivations of different approaches. In addition, we review existing benchmark medical imaging datasets and software platforms for current federated learning research. We also conduct an experimental study to empirically evaluate typical federated learning methods for medical image analysis. This survey can help to better understand the current research status, challenges, and potential research opportunities in this promising research field.
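Server-side aggregation, one of the three system aspects the survey categorizes, can be sketched with FedAvg-style weighted parameter averaging. This is a generic textbook example, not a method taken from the survey; the function name and dict-of-arrays weight format are assumptions.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg-style aggregation: the server averages client model
    parameters, weighting each client by its local sample count, so no
    raw images ever leave a site.
    client_weights: list of {param_name: ndarray}; client_sizes: samples per client."""
    total = sum(client_sizes)
    aggregated = {}
    for name in client_weights[0]:
        aggregated[name] = sum(w[name] * (n / total)
                               for w, n in zip(client_weights, client_sizes))
    return aggregated
```

A client holding three times the data pulls the average three times as hard, mirroring how a pooled dataset would weight it.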
Affiliation(s)
- Hao Guan
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Andrea Bozoki
- Department of Neurology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Mingxia Liu
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
3. Fang Y, Yap PT, Lin W, Zhu H, Liu M. Source-free unsupervised domain adaptation: A survey. Neural Netw 2024; 174:106230. PMID: 38490115; PMCID: PMC11015964; DOI: 10.1016/j.neunet.2024.106230.
Abstract
Unsupervised domain adaptation (UDA) via deep learning has attracted considerable attention for tackling domain-shift problems caused by distribution discrepancy across different domains. Existing UDA approaches highly depend on the accessibility of source domain data, which is usually limited in practical scenarios due to privacy protection, data storage and transmission costs, and computation burden. To tackle this issue, many source-free unsupervised domain adaptation (SFUDA) methods have been proposed recently, which perform knowledge transfer from a pre-trained source model to the unlabeled target domain without access to source data. A comprehensive review of these works on SFUDA is of great significance. In this paper, we provide a timely and systematic literature review of existing SFUDA approaches from a technical perspective. Specifically, we categorize current SFUDA studies into two groups, i.e., white-box SFUDA and black-box SFUDA, and further divide them into finer subcategories based on the different learning strategies they use. We also investigate the challenges of methods in each subcategory, discuss the advantages/disadvantages of white-box and black-box SFUDA methods, review the commonly used benchmark datasets, and summarize the popular techniques for improved generalizability of models learned without using source data. We finally discuss several promising future directions in this field.
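A common building block of white-box SFUDA is self-training on pseudo-labels produced by the frozen source model on unlabeled target data. The following is a minimal illustrative sketch of the confidence-filtering step, not code from any surveyed method; the function name and threshold are hypothetical.

```python
import numpy as np

def confident_pseudo_labels(probs, threshold=0.9):
    """Keep only target samples the (frozen) source model predicts with
    high confidence, and return their indices and pseudo-labels. These
    pairs would then drive target-side fine-tuning without source data.
    probs: (N, C) softmax outputs of the source model on target data."""
    probs = np.asarray(probs, dtype=float)
    confidence = probs.max(axis=1)
    keep = confidence >= threshold
    return np.flatnonzero(keep), probs.argmax(axis=1)[keep]
```

Low-confidence samples are simply dropped here; surveyed methods refine this with class balancing, prototypes, or denoised labels.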
Affiliation(s)
- Yuqi Fang
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Weili Lin
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Hongtu Zhu
- Department of Biostatistics and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Mingxia Liu
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
4. Huynh K, Chang WT, Wu Y, Yap PT. Optimal shrinkage denoising breaks the noise floor in high-resolution diffusion MRI. Patterns (N Y) 2024; 5:100954. PMID: 38645765; PMCID: PMC11026978; DOI: 10.1016/j.patter.2024.100954.
Abstract
The spatial resolution attainable in diffusion magnetic resonance (MR) imaging is inherently limited by noise. The weaker signal associated with a smaller voxel size, especially at a high level of diffusion sensitization, is often buried under the noise floor owing to the non-Gaussian nature of the MR magnitude signal. Here, we show how the noise floor can be suppressed remarkably via optimal shrinkage of singular values associated with noise in complex-valued k-space data from multiple receiver channels. We explore and compare different low-rank signal matrix recovery strategies to utilize the inherently redundant information from multiple channels. In combination with background phase removal, the optimal strategy reduces the noise floor by 11 times. Our framework enables imaging with substantially improved resolution for precise characterization of tissue microstructure and white matter pathways without relying on expensive hardware upgrades and time-consuming acquisition repetitions, outperforming other related denoising methods.
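The singular-value idea can be sketched with a simplified stand-in. Note that the paper applies optimal shrinkage to complex-valued multi-channel k-space data; this toy version merely hard-thresholds singular values of a real matrix at the noise edge predicted by random-matrix theory, so it conveys the low-rank recovery principle rather than the actual method.

```python
import numpy as np

def shrinkage_denoise(X, sigma):
    """Simplified low-rank denoising: singular values below the
    noise-only edge sigma*(sqrt(m)+sqrt(n)) are discarded (the paper
    uses optimal shrinkage instead of this hard threshold).
    X: (m, n) noisy data matrix; sigma: noise standard deviation."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    m, n = X.shape
    edge = sigma * (np.sqrt(m) + np.sqrt(n))  # largest singular value of pure noise
    s = np.where(s > edge, s, 0.0)            # keep only signal-bearing components
    return (U * s) @ Vt
```

On a low-rank signal buried in Gaussian noise, the retained components reconstruct the signal with far less residual error than the raw data.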
Affiliation(s)
- Khoi Huynh
- Department of Radiology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Wei-Tang Chang
- Department of Radiology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Ye Wu
- Department of Radiology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Pew-Thian Yap
- Department of Radiology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
5. Lyu W, Wu Y, Huynh KM, Ahmad S, Yap PT. A multimodal submillimeter MRI atlas of the human cerebellum. Sci Rep 2024; 14:5622. PMID: 38453991; PMCID: PMC10920891; DOI: 10.1038/s41598-024-55412-y.
Abstract
The human cerebellum is engaged in a broad array of tasks related to motor coordination, cognition, language, attention, memory, and emotional regulation. A detailed cerebellar atlas can facilitate the investigation of the structural and functional organization of the cerebellum. However, existing cerebellar atlases are typically limited to a single imaging modality with insufficient characterization of tissue properties. Here, we introduce a multifaceted cerebellar atlas based on high-resolution multimodal MRI, facilitating the understanding of the neurodevelopment and neurodegeneration of the cerebellum based on cortical morphology, tissue microstructure, and intra-cerebellar and cerebello-cerebral connectivity.
Affiliation(s)
- Wenjiao Lyu
- Department of Radiology, University of North Carolina, Chapel Hill, NC, USA
- Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC, USA
- Ye Wu
- Department of Radiology, University of North Carolina, Chapel Hill, NC, USA
- Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC, USA
- Khoi Minh Huynh
- Department of Radiology, University of North Carolina, Chapel Hill, NC, USA
- Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC, USA
- Sahar Ahmad
- Department of Radiology, University of North Carolina, Chapel Hill, NC, USA
- Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC, USA
- Pew-Thian Yap
- Department of Radiology, University of North Carolina, Chapel Hill, NC, USA
- Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC, USA
6. Liu S, Yap PT. Learning multi-site harmonization of magnetic resonance images without traveling human phantoms. Commun Eng 2024; 3:6. PMID: 38420332; PMCID: PMC10898625; DOI: 10.1038/s44172-023-00140-w.
Abstract
Harmonization improves Magn. Reson. Imaging (MRI) data consistency and is central to effective integration of diverse imaging data acquired across multiple sites. Recent deep learning techniques for harmonization are predominantly supervised in nature and hence require imaging data of the same human subjects to be acquired at multiple sites. Data collection as such requires the human subjects to travel across sites and is hence challenging, costly, and impractical, more so when sufficient sample size is needed for reliable network training. Here we show how harmonization can be achieved with a deep neural network that does not rely on traveling human phantom data. Our method disentangles site-specific appearance information and site-invariant anatomical information from images acquired at multiple sites and then employs the disentangled information to generate the image of each subject for any target site. We demonstrate with more than 6,000 multi-site T1- and T2-weighted images that our method is remarkably effective in generating images with realistic site-specific appearances without altering anatomical details. Our method allows retrospective harmonization of data in a wide range of existing modern large-scale imaging studies, conducted via different scanners and protocols, without additional data collection.
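As rough intuition for the disentanglement idea, here is a deliberately linear toy: each image is modeled as site-invariant content plus a per-site intensity offset, and harmonization re-targets every image to a chosen site's offset. The paper learns this decomposition with a deep network; this sketch, with its assumed function name and scalar-offset model, only conveys the decompose-and-retarget principle.

```python
import numpy as np

def harmonize_to_target(images, site_ids, target_site):
    """Linear toy harmonization: estimate each site's mean intensity
    offset ("appearance"), strip it from every image ("content"), and
    add back the target site's offset.
    images: (N, D) flattened images; site_ids: (N,) site label per image."""
    images = np.asarray(images, dtype=float)
    site_ids = np.asarray(site_ids)
    site_mean = {s: images[site_ids == s].mean() for s in np.unique(site_ids)}
    offsets = np.array([site_mean[s] for s in site_ids])
    return images - offsets[:, None] + site_mean[target_site]
```

After re-targeting, within-image contrasts (the "anatomy" in this toy) are preserved while every image carries the target site's appearance offset.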
Affiliation(s)
- Siyuan Liu
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
7. Lyu W, Wu Y, Huang H, Chen Y, Tan X, Liang Y, Ma X, Feng Y, Wu J, Kang S, Qiu S, Yap PT. Aberrant dynamic functional network connectivity in type 2 diabetes mellitus individuals. Cogn Neurodyn 2023; 17:1525-1539. PMID: 37969945; PMCID: PMC10640562; DOI: 10.1007/s11571-022-09899-8.
Abstract
An increasing number of recent brain imaging studies are dedicated to understanding the neural mechanisms of cognitive impairment in type 2 diabetes mellitus (T2DM) individuals. In contrast to efforts to date that are limited to static functional connectivity, here we investigate abnormal connectivity in T2DM individuals by characterizing the time-varying properties of brain functional networks. Using group independent component analysis (GICA), sliding-window analysis, and k-means clustering, we extracted thirty-one intrinsic connectivity networks (ICNs) and estimated four recurring brain states. We observed significant group differences in fraction time (FT) and mean dwell time (MDT), and significant negative correlations between the Montreal Cognitive Assessment (MoCA) scores and FT/MDT. We found that in the T2DM group the inter- and intra-network connectivity decreases and increases, respectively, for the default mode network (DMN) and task-positive network (TPN). We also found alterations in the precuneus network (PCUN) and enhanced connectivity between the salience network (SN) and the TPN. Our study provides evidence of alterations of large-scale resting networks in T2DM individuals and sheds light on the fundamental mechanisms of neurocognitive deficits in T2DM.
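The sliding-window step of such dynamic-FC pipelines can be sketched as follows; the window and step sizes are illustrative, and the GICA and k-means state-clustering stages of the study are omitted.

```python
import numpy as np

def sliding_window_fc(ts, win, step):
    """Sliding-window functional connectivity: for each window, compute
    the region-by-region correlation matrix and vectorize its upper
    triangle, yielding one connectivity vector per window (the inputs
    to subsequent k-means state clustering).
    ts: (T, R) time series for R regions."""
    T, R = ts.shape
    iu = np.triu_indices(R, k=1)
    vecs = []
    for start in range(0, T - win + 1, step):
        c = np.corrcoef(ts[start:start + win].T)  # (R, R) window correlation
        vecs.append(c[iu])
    return np.array(vecs)  # (num_windows, R*(R-1)/2)
```

Each row is one time-resolved connectivity snapshot; clustering these rows yields the recurring "brain states" whose fraction and dwell times the study compares.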
Affiliation(s)
- Wenjiao Lyu
- Department of Radiology, The First Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, Guangdong, China
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Ye Wu
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, Jiangsu, China
- Haoming Huang
- Department of Radiology, The First Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, Guangdong, China
- Yuna Chen
- Department of Endocrinology, The First Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, Guangdong, China
- Xin Tan
- Department of Radiology, The First Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, Guangdong, China
- Yi Liang
- Department of Radiology, The First Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, Guangdong, China
- Xiaomeng Ma
- Department of Radiology, Jingzhou First People’s Hospital of Hubei Province, Jingzhou, Hubei, China
- Yue Feng
- Department of Radiology, The First Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, Guangdong, China
- Jinjian Wu
- Department of Radiology, The First Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, Guangdong, China
- Shangyu Kang
- Department of Radiology, The First Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, Guangdong, China
- Shijun Qiu
- Department of Radiology, The First Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, Guangdong, China
- Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
8. Huang Y, Ahmad S, Han L, Wang S, Wu Z, Lin W, Li G, Wang L, Yap PT. Longitudinal Prediction of Postnatal Brain Magnetic Resonance Images via a Metamorphic Generative Adversarial Network. Pattern Recognit 2023; 143:109715. PMID: 37425426; PMCID: PMC10327994; DOI: 10.1016/j.patcog.2023.109715.
Abstract
Missing scans are inevitable in longitudinal studies due to either subject dropout or failed scans. In this paper, we propose a deep learning framework to predict missing scans from acquired scans, catering to longitudinal infant studies. Prediction of infant brain MRI is challenging owing to the rapid contrast and structural changes, particularly during the first year of life. We introduce a trustworthy metamorphic generative adversarial network (MGAN) for translating infant brain MRI from one time-point to another. MGAN has three key features: (i) image translation leveraging spatial and frequency information for detail-preserving mapping; (ii) a quality-guided learning strategy that focuses attention on challenging regions; and (iii) a multi-scale hybrid loss function that improves translation of image contents. Experimental results indicate that MGAN outperforms existing GANs by accurately predicting both tissue contrasts and anatomical details.
Affiliation(s)
- Yunzhi Huang
- School of Artificial Intelligence (School of Future Technology), Nanjing University of Information Science and Technology, Nanjing 210044, China
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, USA
- Sahar Ahmad
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, USA
- Luyi Han
- Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Geert Grooteplein 10, 6525 GA, Nijmegen, The Netherlands
- Shuai Wang
- Department of Computer Science, Shandong University (Weihai), China
- Zhengwang Wu
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, USA
- Weili Lin
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, USA
- Gang Li
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, USA
- Li Wang
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, USA
- Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, USA
9. Chen X, Pang Y, Ahmad S, Royce T, Wang A, Lian J, Yap PT. Organ-aware CBCT enhancement via dual path learning for prostate cancer treatment. Med Phys 2023; 50:6931-6942. PMID: 37751497; DOI: 10.1002/mp.16752.
Abstract
BACKGROUND: Cone-beam computed tomography (CBCT) plays a crucial role in the intensity modulated radiotherapy (IMRT) of prostate cancer. However, poor image contrast and fuzzy organ boundaries pose challenges to precise targeting for dose delivery and plan reoptimization for adaptive therapy.
PURPOSE: In this work, we aim to enhance pelvic CBCT images by translating them to high-quality CT images, with a particular focus on the anatomical structures important for radiotherapy.
METHODS: We develop a novel dual-path learning framework, covering both global and local information, for organ-aware enhancement of the prostate, bladder, and rectum. The global path learns coarse inter-modality translation at the image level. The local path learns organ-aware translation at the regional level. This dual-path learning architecture can serve as a plug-and-play module adaptable to other medical image-to-image translation frameworks.
RESULTS: We evaluated the performance of the proposed method both quantitatively and qualitatively. The training dataset consists of 40 unpaired CBCT and 40 CT scans, the validation dataset consists of 5 paired CBCT-CT scans, and the testing dataset consists of 10 paired CBCT-CT scans. The peak signal-to-noise ratio (PSNR) between enhanced CBCT and reference CT images is 27.22 ± 1.79, and the structural similarity (SSIM) between enhanced CBCT and reference CT images is 0.71 ± 0.03. We also compared our method with state-of-the-art image-to-image translation methods, where our method achieves the best performance. Moreover, statistical analysis confirms that the improvements achieved by our method are statistically significant.
CONCLUSIONS: The proposed method demonstrates its superiority in enhancing pelvic CBCT images, especially at the organ level, compared to relevant methods.
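PSNR, one of the two metrics reported in the results, has a standard closed form; this is the textbook definition, not code from the paper.

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 10*log10(data_range^2 / MSE).
    Higher is better; identical images give infinite PSNR.
    ref, img: arrays of equal shape; data_range: max possible intensity."""
    ref = np.asarray(ref, dtype=float)
    img = np.asarray(img, dtype=float)
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```

For example, a uniform error of 0.1 on a unit intensity range gives an MSE of 0.01 and hence a PSNR of exactly 20 dB.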
Affiliation(s)
- Xu Chen
- College of Computer Science and Technology, Huaqiao University, Xiamen, China
- Key Laboratory of Computer Vision and Machine Learning (Huaqiao University), Fujian Province University, Xiamen, China
- Xiamen Key Laboratory of Computer Vision and Pattern Recognition, Huaqiao University, Xiamen, China
- Yunkui Pang
- Department of Computer Science, University of North Carolina, Chapel Hill, North Carolina, USA
- Sahar Ahmad
- Department of Radiology, University of North Carolina, Chapel Hill, North Carolina, USA
- Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, North Carolina, USA
- Trevor Royce
- Department of Radiation Oncology, University of North Carolina, Chapel Hill, North Carolina, USA
- Andrew Wang
- Department of Radiation Oncology, University of North Carolina, Chapel Hill, North Carolina, USA
- Jun Lian
- Department of Radiation Oncology, University of North Carolina, Chapel Hill, North Carolina, USA
- Pew-Thian Yap
- Department of Radiology, University of North Carolina, Chapel Hill, North Carolina, USA
- Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, North Carolina, USA
10. Jiang W, Zhou Z, Li G, Yin W, Wu Z, Wang L, Ghanbari M, Li G, Yap PT, Howell BR, Styner MA, Yacoub E, Hazlett H, Gilmore JH, Keith Smith J, Ugurbil K, Elison JT, Zhang H, Shen D, Lin W. Mapping the evolution of regional brain network efficiency and its association with cognitive abilities during the first twenty-eight months of life. Dev Cogn Neurosci 2023; 63:101284. PMID: 37517139; PMCID: PMC10400876; DOI: 10.1016/j.dcn.2023.101284.
Abstract
The human brain undergoes rapid growth during the first few years of life. While previous research has employed graph theory to study early brain development, it has mostly focused on the topological attributes of the whole brain. However, examining regional graph-theory features may provide unique insights into the development of cognitive abilities. Utilizing a large longitudinal rsfMRI dataset from the UNC/UMN Baby Connectome Project, we investigated the developmental trajectories of regional efficiency and evaluated the relationships between these changes and cognitive abilities, measured using the Mullen Scales of Early Learning, during the first twenty-eight months of life. Our results revealed a complex and spatiotemporally heterogeneous pattern of development of regional global and local efficiency during this age period. Furthermore, we found that the trajectories of regional global efficiency at the left temporal occipital fusiform and bilateral occipital fusiform gyri were positively associated with cognitive abilities, including visual reception, expressive language, receptive language, and early learning composite scores (P < 0.05, FDR corrected). However, these associations weakened with age. These findings offer new insights into the regional developmental features of brain topologies and their associations with cognition, and provide evidence of ongoing optimization of brain networks at both whole-brain and regional levels.
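Regional (per-node) global efficiency, the central graph measure here, is the mean inverse shortest-path length from a node to every other node. A small sketch for binary undirected graphs, using breadth-first search; this is the standard definition, not the study's pipeline.

```python
import numpy as np
from collections import deque

def nodal_global_efficiency(adj):
    """Per-node global efficiency of a binary undirected graph:
    eff[i] = mean over j != i of 1/d(i, j), with 1/inf = 0 for
    unreachable nodes. adj: (n, n) 0/1 adjacency matrix."""
    adj = np.asarray(adj)
    n = len(adj)
    eff = np.zeros(n)
    for s in range(n):
        dist = np.full(n, -1)   # -1 marks unreached nodes
        dist[s] = 0
        queue = deque([s])
        while queue:            # breadth-first search from s
            u = queue.popleft()
            for v in range(n):
                if adj[u, v] and dist[v] < 0:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        eff[s] = sum(1.0 / d for d in dist if d > 0) / (n - 1)
    return eff
```

In a fully connected graph every node attains the maximum efficiency of 1; on a 3-node path, the endpoints score (1 + 1/2)/2 = 0.75 while the middle node scores 1.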
Affiliation(s)
- Weixiong Jiang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Zhen Zhou
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Guoshi Li
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Weiyan Yin
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Zhengwang Wu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Li Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Maryam Ghanbari
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Gang Li
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Pew-Thian Yap
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Martin A Styner
- Department of Psychiatry, University of North Carolina at Chapel Hill, USA
- Essa Yacoub
- Center for Magnetic Resonance Research, University of Minnesota, USA
- Heather Hazlett
- Department of Psychiatry, University of North Carolina at Chapel Hill, USA; Department of Radiology, University of North Carolina at Chapel Hill, USA
- John H Gilmore
- Department of Psychiatry, University of North Carolina at Chapel Hill, USA
- J Keith Smith
- Department of Radiology, University of North Carolina at Chapel Hill, USA
- Kamil Ugurbil
- Center for Magnetic Resonance Research, University of Minnesota, USA
- Jed T Elison
- Institute of Child Development, University of Minnesota, USA; Department of Pediatrics, University of Minnesota, USA
- Han Zhang
- Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Dinggang Shen
- Biomedical Engineering, ShanghaiTech University, Shanghai, China; Shanghai Clinical Research and Trial Center, Shanghai 201210, China
- Weili Lin
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
11. Wu M, Zhang L, Yap PT, Lin W, Zhu H, Liu M. Structural MRI Harmonization via Disentangled Latent Energy-Based Style Translation. Mach Learn Med Imaging 2023; 14348:1-11. PMID: 38389805; PMCID: PMC10883146; DOI: 10.1007/978-3-031-45673-2_1.
Abstract
Multi-site brain magnetic resonance imaging (MRI) has been widely used in clinical and research domains, but is usually sensitive to non-biological variations caused by site effects (e.g., field strengths and scanning protocols). Several retrospective data harmonization methods have shown promising results in removing these non-biological variations at the feature or whole-image level. Most existing image-level harmonization methods are implemented through generative adversarial networks, which are generally computationally expensive and generalize poorly on independent data. To this end, this paper proposes a disentangled latent energy-based style translation (DLEST) framework for image-level structural MRI harmonization. Specifically, DLEST disentangles site-invariant image generation and site-specific style translation via a latent autoencoder and an energy-based model. The autoencoder learns to encode images into a low-dimensional latent space and generates faithful images from latent codes. The energy-based model is placed between the encoding and generation steps, implicitly facilitating style translation from a source domain to a target domain. This allows highly generalizable image generation and efficient style translation through the latent space. We train our model on 4,092 T1-weighted MRIs and evaluate it in three tasks: histogram comparison, acquisition site classification, and brain tissue segmentation. Qualitative and quantitative results demonstrate the superiority of our approach, which generally outperforms several state-of-the-art methods.
Affiliation(s)
- Mengqi Wu
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Joint Department of Biomedical Engineering, University of North Carolina at Chapel Hill and North Carolina State University, Chapel Hill, NC 27599, USA
- Lintao Zhang
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Weili Lin
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Hongtu Zhu
- Department of Biostatistics and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Mingxia Liu
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
12
Liu Y, Li J, Pang Y, Nie D, Yap PT. The Devil is in the Upsampling: Architectural Decisions Made Simpler for Denoising with Deep Image Prior. Proc IEEE Int Conf Comput Vis 2023; 2023:12374-12383. PMID: 38726039. PMCID: PMC11078028. DOI: 10.1109/iccv51070.2023.01140.
Abstract
Deep Image Prior (DIP) shows that some network architectures inherently tend towards generating smooth images while resisting noise, a phenomenon known as spectral bias. Image denoising is a natural application of this property. Although denoising with DIP mitigates the need for large training sets, two often intertwined practical challenges need to be overcome: architectural design and noise fitting. Owing to a limited understanding of how architectural choices affect the denoising outcome, existing methods either handcraft or search for suitable architectures in a vast design space. In this study, we demonstrate from a frequency perspective that unlearnt upsampling is the main driving force behind the denoising phenomenon in DIP. This finding leads to straightforward strategies for identifying a suitable architecture for every image without laborious search. Extensive experiments show that the estimated architectures achieve superior denoising results compared to existing methods, with up to 95% fewer parameters. Thanks to this under-parameterization, the resulting architectures are less prone to noise fitting.
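As a toy illustration of the abstract's central claim (this is not code from the paper), the sketch below shows why a fixed, unlearnt upsampler resists noise: linear interpolation acts as a low-pass filter, suppressing the high-frequency band where noise lives.

```python
import numpy as np

def linear_upsample(x, factor=2):
    """Fixed (unlearnt) 1-D linear-interpolation upsampling."""
    n = len(x)
    grid_out = np.linspace(0, n - 1, factor * n)
    return np.interp(grid_out, np.arange(n), x)

def highband_fraction(sig):
    """Fraction of spectral energy in the upper half of the band."""
    power = np.abs(np.fft.rfft(sig)) ** 2
    return power[len(power) // 2:].sum() / power.sum()

rng = np.random.default_rng(0)
noise = rng.standard_normal(256)   # white noise: roughly flat spectrum
up = linear_upsample(noise)        # upsampled to 512 samples

# The fixed interpolation kernel low-passes the output, so the
# upsampled signal carries far less high-frequency energy.
```

Printing `highband_fraction(noise)` versus `highband_fraction(up)` makes the spectral suppression concrete; the 2-D convolutional upsamplers discussed in the paper behave analogously.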
Affiliation(s)
- Yilin Liu
- University of North Carolina at Chapel Hill
- Jiang Li
- University of North Carolina at Chapel Hill
- Dong Nie
- University of North Carolina at Chapel Hill
13
Girard G, Rafael-Patiño J, Truffet R, Aydogan DB, Adluru N, Nair VA, Prabhakaran V, Bendlin BB, Alexander AL, Bosticardo S, Gabusi I, Ocampo-Pineda M, Battocchio M, Piskorova Z, Bontempi P, Schiavi S, Daducci A, Stafiej A, Ciupek D, Bogusz F, Pieciak T, Frigo M, Sedlar S, Deslauriers-Gauthier S, Kojčić I, Zucchelli M, Laghrissi H, Ji Y, Deriche R, Schilling KG, Landman BA, Cacciola A, Basile GA, Bertino S, Newlin N, Kanakaraj P, Rheault F, Filipiak P, Shepherd TM, Lin YC, Placantonakis DG, Boada FE, Baete SH, Hernández-Gutiérrez E, Ramírez-Manzanares A, Coronado-Leija R, Stack-Sánchez P, Concha L, Descoteaux M, Mansour L S, Seguin C, Zalesky A, Marshall K, Canales-Rodríguez EJ, Wu Y, Ahmad S, Yap PT, Théberge A, Gagnon F, Massi F, Fischi-Gomez E, Gardier R, Haro JLV, Pizzolato M, Caruyer E, Thiran JP. Tractography passes the test: Results from the diffusion-simulated connectivity (DiSCo) challenge. Neuroimage 2023; 277:120231. PMID: 37330025. PMCID: PMC10771037. DOI: 10.1016/j.neuroimage.2023.120231.
Abstract
Estimating structural connectivity from diffusion-weighted magnetic resonance imaging is a challenging task, partly due to the presence of false-positive connections and the misestimation of connection weights. Building on previous efforts, the MICCAI-CDMRI Diffusion-Simulated Connectivity (DiSCo) challenge was carried out to evaluate state-of-the-art connectivity methods using novel large-scale numerical phantoms. The diffusion signal for the phantoms was obtained from Monte Carlo simulations. The results of the challenge suggest that the methods selected by the 14 participating teams can provide high correlations between estimated and ground-truth connectivity weights in complex numerical environments. Additionally, the methods used by the participating teams were able to accurately identify the binary connectivity of the numerical dataset. However, specific false-positive and false-negative connections were consistently estimated across all methods. Although the challenge dataset does not capture the complexity of a real brain, it provides unique data with known macrostructural and microstructural ground-truth properties to facilitate the development of connectivity estimation methods.
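For readers unfamiliar with the metric the abstract refers to, the snippet below sketches one standard way to score such a submission: the Pearson correlation between estimated and ground-truth connectivity weights over all region pairs. This is a generic illustration on synthetic matrices, not the official DiSCo evaluation code.

```python
import numpy as np

def connectome_correlation(est, gt):
    """Pearson correlation between estimated and ground-truth
    connectivity weights, taken over the upper triangle (i < j)."""
    iu = np.triu_indices_from(gt, k=1)
    return np.corrcoef(est[iu], gt[iu])[0, 1]

rng = np.random.default_rng(1)
gt = rng.random((16, 16))
gt = (gt + gt.T) / 2                         # symmetric ground-truth weights
est = gt + 0.05 * rng.standard_normal(gt.shape)
est = (est + est.T) / 2                      # a mildly noisy "submission"

r = connectome_correlation(est, gt)          # high r for a good estimate
```

Binary connectivity can be evaluated the same way after thresholding both matrices, e.g., with sensitivity/specificity over the upper triangle.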
Affiliation(s)
- Gabriel Girard
- CIBM Center for Biomedical Imaging, Switzerland; Radiology Department, Centre Hospitalier Universitaire Vaudois and University of Lausanne, Lausanne, Switzerland; Signal Processing Laboratory (LTS5), École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Jonathan Rafael-Patiño
- Radiology Department, Centre Hospitalier Universitaire Vaudois and University of Lausanne, Lausanne, Switzerland; Signal Processing Laboratory (LTS5), École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Raphaël Truffet
- Univ Rennes, Inria, CNRS, Inserm, IRISA UMR 6074, Empenn ERL U-1228, Rennes, France
- Dogu Baran Aydogan
- A.I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland; Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; Department of Psychiatry, Helsinki University Hospital, Helsinki, Finland
- Nagesh Adluru
- Waisman Center, University of Wisconsin-Madison, Madison, WI, United States; Department of Radiology, University of Wisconsin-Madison, Madison, WI, United States
- Veena A Nair
- Department of Radiology, University of Wisconsin-Madison, Madison, WI, United States
- Vivek Prabhakaran
- Department of Radiology, University of Wisconsin-Madison, Madison, WI, United States
- Barbara B Bendlin
- Department of Medicine, University of Wisconsin-Madison, Madison, WI, United States
- Andrew L Alexander
- Waisman Center, University of Wisconsin-Madison, Madison, WI, United States; Department of Medical Physics, University of Wisconsin-Madison, Madison, WI, United States; Department of Psychiatry, University of Wisconsin-Madison, Madison, WI, United States
- Sara Bosticardo
- Diffusion Imaging and Connectivity Estimation (DICE) Lab, Department of Computer Science, University of Verona, Verona, Italy; Translational Imaging in Neurology (ThINk), Department of Biomedical Engineering, University Hospital Basel and University of Basel, Basel, Switzerland
- Ilaria Gabusi
- Diffusion Imaging and Connectivity Estimation (DICE) Lab, Department of Computer Science, University of Verona, Verona, Italy; Department of Advanced Biomedical Sciences, University of Naples Federico II, Naples, Italy
- Mario Ocampo-Pineda
- Diffusion Imaging and Connectivity Estimation (DICE) Lab, Department of Computer Science, University of Verona, Verona, Italy
- Matteo Battocchio
- Diffusion Imaging and Connectivity Estimation (DICE) Lab, Department of Computer Science, University of Verona, Verona, Italy; Sherbrooke Connectivity Imaging Laboratory (SCIL), Department of Computer Science, University of Sherbrooke, Sherbrooke, QC, Canada
- Zuzana Piskorova
- Diffusion Imaging and Connectivity Estimation (DICE) Lab, Department of Computer Science, University of Verona, Verona, Italy; Department of Mathematics, Faculty of Electrical Engineering and Communication, Brno University of Technology, Brno, Czech Republic
- Pietro Bontempi
- Diffusion Imaging and Connectivity Estimation (DICE) Lab, Department of Computer Science, University of Verona, Verona, Italy
- Simona Schiavi
- Department of Neuroscience, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DINOGMI), University of Genoa, Genoa, Italy
- Alessandro Daducci
- Diffusion Imaging and Connectivity Estimation (DICE) Lab, Department of Computer Science, University of Verona, Verona, Italy
- Dominika Ciupek
- Sano Centre for Computational Personalised Medicine, Kraków, Poland
- Fabian Bogusz
- AGH University of Science and Technology, Kraków, Poland
- Tomasz Pieciak
- AGH University of Science and Technology, Kraków, Poland; Laboratorio de Procesado de Imagen (LPI), ETSI Telecomunicación, Universidad de Valladolid, Valladolid, Spain
- Matteo Frigo
- Athena Project Team, Centre Inria d'Université Côte d'Azur, France
- Sara Sedlar
- Athena Project Team, Centre Inria d'Université Côte d'Azur, France
- Ivana Kojčić
- Athena Project Team, Centre Inria d'Université Côte d'Azur, France
- Mauro Zucchelli
- Athena Project Team, Centre Inria d'Université Côte d'Azur, France
- Hiba Laghrissi
- Athena Project Team, Centre Inria d'Université Côte d'Azur, France; Institut de Biologie de Valrose, Université Côte d'Azur, Nice, France
- Yang Ji
- Athena Project Team, Centre Inria d'Université Côte d'Azur, France
- Rachid Deriche
- Athena Project Team, Centre Inria d'Université Côte d'Azur, France
- Kurt G Schilling
- Department of Radiology and Radiological Sciences, Vanderbilt University Medical Center, Nashville, TN, United States
- Bennett A Landman
- Department of Radiology and Radiological Sciences, Vanderbilt University Medical Center, Nashville, TN, United States; Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, United States
- Alberto Cacciola
- Brain Mapping Lab, Department of Biomedical, Dental Sciences and Morphological and Functional Images, University of Messina, Messina, Italy; Center for Complex Network Intelligence (CCNI), Tsinghua Laboratory of Brain and Intelligence (THBI), Tsinghua University, Beijing, China; Department of Biomedical Engineering, Tsinghua University, Beijing, China
- Gianpaolo Antonio Basile
- Brain Mapping Lab, Department of Biomedical, Dental Sciences and Morphological and Functional Images, University of Messina, Messina, Italy
- Salvatore Bertino
- Brain Mapping Lab, Department of Biomedical, Dental Sciences and Morphological and Functional Images, University of Messina, Messina, Italy
- Nancy Newlin
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, United States
- Praitayini Kanakaraj
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, United States
- Francois Rheault
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, United States
- Patryk Filipiak
- Center for Advanced Imaging Innovation and Research (CAI2R), Department of Radiology, NYU Langone Health, New York, NY, United States
- Timothy M Shepherd
- Center for Advanced Imaging Innovation and Research (CAI2R), Department of Radiology, NYU Langone Health, New York, NY, United States
- Ying-Chia Lin
- Center for Advanced Imaging Innovation and Research (CAI2R), Department of Radiology, NYU Langone Health, New York, NY, United States
- Dimitris G Placantonakis
- Department of Neurosurgery, Perlmutter Cancer Center, Neuroscience Institute, Kimmel Center for Stem Cell Biology, NYU Langone Health, New York, NY, United States
- Fernando E Boada
- Department of Radiology, Stanford University, Stanford, CA, United States
- Steven H Baete
- Center for Advanced Imaging Innovation and Research (CAI2R), Department of Radiology, NYU Langone Health, New York, NY, United States
- Erick Hernández-Gutiérrez
- Sherbrooke Connectivity Imaging Laboratory (SCIL), Department of Computer Science, University of Sherbrooke, Sherbrooke, QC, Canada
- Ricardo Coronado-Leija
- Center for Advanced Imaging Innovation and Research (CAI2R), Department of Radiology, NYU Langone Health, New York, NY, United States
- Pablo Stack-Sánchez
- Computer Science Department, Centro de Investigación en Matemáticas A.C, Guanajuato, México
- Luis Concha
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Juriquilla, Querétaro, México
- Maxime Descoteaux
- Sherbrooke Connectivity Imaging Laboratory (SCIL), Department of Computer Science, University of Sherbrooke, Sherbrooke, QC, Canada
- Sina Mansour L
- Department of Biomedical Engineering, The University of Melbourne, Parkville, Victoria, Australia; Melbourne Neuropsychiatry Centre, Department of Psychiatry, The University of Melbourne and Melbourne Health, Parkville, Victoria, Australia
- Caio Seguin
- Melbourne Neuropsychiatry Centre, Department of Psychiatry, The University of Melbourne and Melbourne Health, Parkville, Victoria, Australia; School of Biomedical Engineering, The University of Sydney, Sydney, Australia; Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, United States
- Andrew Zalesky
- Department of Biomedical Engineering, The University of Melbourne, Parkville, Victoria, Australia; Melbourne Neuropsychiatry Centre, Department of Psychiatry, The University of Melbourne and Melbourne Health, Parkville, Victoria, Australia
- Kenji Marshall
- Signal Processing Laboratory (LTS5), École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland; McGill University, Montréal, QC, Canada
- Erick J Canales-Rodríguez
- Signal Processing Laboratory (LTS5), École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Ye Wu
- Department of Radiology and Biomedical Research Imaging Center (BRIC), The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States; School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
- Sahar Ahmad
- Department of Radiology and Biomedical Research Imaging Center (BRIC), The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center (BRIC), The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Antoine Théberge
- Sherbrooke Connectivity Imaging Laboratory (SCIL), Department of Computer Science, University of Sherbrooke, Sherbrooke, QC, Canada
- Florence Gagnon
- Sherbrooke Connectivity Imaging Laboratory (SCIL), Department of Computer Science, University of Sherbrooke, Sherbrooke, QC, Canada
- Frédéric Massi
- Sherbrooke Connectivity Imaging Laboratory (SCIL), Department of Computer Science, University of Sherbrooke, Sherbrooke, QC, Canada
- Elda Fischi-Gomez
- CIBM Center for Biomedical Imaging, Switzerland; Radiology Department, Centre Hospitalier Universitaire Vaudois and University of Lausanne, Lausanne, Switzerland; Signal Processing Laboratory (LTS5), École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Rémy Gardier
- Signal Processing Laboratory (LTS5), École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Juan Luis Villarreal Haro
- Signal Processing Laboratory (LTS5), École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Marco Pizzolato
- Signal Processing Laboratory (LTS5), École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland; Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kgs. Lyngby, Denmark
- Emmanuel Caruyer
- Univ Rennes, Inria, CNRS, Inserm, IRISA UMR 6074, Empenn ERL U-1228, Rennes, France
- Jean-Philippe Thiran
- CIBM Center for Biomedical Imaging, Switzerland; Radiology Department, Centre Hospitalier Universitaire Vaudois and University of Lausanne, Lausanne, Switzerland; Signal Processing Laboratory (LTS5), École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
14
Chen X, Qu L, Xie Y, Ahmad S, Yap PT. A paired dataset of T1- and T2-weighted MRI at 3 Tesla and 7 Tesla. Sci Data 2023; 10:489. PMID: 37500686. PMCID: PMC10374655. DOI: 10.1038/s41597-023-02400-y.
Abstract
Brain magnetic resonance imaging (MRI) provides detailed soft-tissue contrasts that are critical for disease diagnosis and neuroscience research. Higher MRI resolution typically comes at the cost of signal-to-noise ratio (SNR) and tissue contrast, particularly for the more common 3 Tesla (3T) MRI scanners. At ultra-high magnetic field strength, 7 Tesla (7T) MRI allows higher resolution with greater tissue contrast and SNR. However, the prohibitively high cost of 7T MRI scanners deters their widespread adoption in clinical and research centers. To obtain higher-quality images without 7T MRI scanners, algorithms that synthesize 7T MR images from 3T MR images are under active development. Here, we make available a dataset of paired T1-weighted and T2-weighted MR images of 10 healthy subjects, acquired at both 3T and 7T, to facilitate the development and evaluation of 3T-to-7T MR image synthesis models. The quality of the dataset is assessed using image quality metrics implemented in MRIQC.
Affiliation(s)
- Xiaoyang Chen
- Department of Radiology, University of North Carolina, Chapel Hill, NC, 27599, USA
- Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC, 27599, USA
- Liangqiong Qu
- Department of Radiology, University of North Carolina, Chapel Hill, NC, 27599, USA
- Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC, 27599, USA
- Yifang Xie
- McAllister Heart Institute, University of North Carolina, Chapel Hill, NC, 27599, USA
- Sahar Ahmad
- Department of Radiology, University of North Carolina, Chapel Hill, NC, 27599, USA
- Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC, 27599, USA
- Pew-Thian Yap
- Department of Radiology, University of North Carolina, Chapel Hill, NC, 27599, USA
- Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC, 27599, USA
15
Chen G, Hong Y, Huynh KM, Yap PT. Deep learning prediction of diffusion MRI data with microstructure-sensitive loss functions. Med Image Anal 2023; 85:102742. PMID: 36682154. PMCID: PMC9974781. DOI: 10.1016/j.media.2023.102742.
Abstract
Deep learning prediction of diffusion MRI (DMRI) data relies on effective loss functions. Existing losses typically measure the signal-wise differences between the predicted and target DMRI data without considering the quality of the derived diffusion scalars that are eventually used to quantify tissue microstructure. Here, we propose two novel loss functions, the microstructural loss and the spherical variance loss, to explicitly consider the quality of both the predicted DMRI data and the derived diffusion scalars. We apply these loss functions to the prediction of multi-shell data and the enhancement of angular resolution. Evaluation based on infant and adult DMRI data indicates that both the microstructural loss and the spherical variance loss improve the quality of the derived diffusion scalars.
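A minimal sketch of the idea described above (not the paper's implementation): augment a plain signal-wise loss with a penalty on errors in a derived per-voxel scalar. Here `spherical_variance` is a simple stand-in for the derived diffusion scalars used in the paper.

```python
import numpy as np

def signal_loss(pred, target):
    """Conventional signal-wise loss (voxel- and direction-wise MSE)."""
    return np.mean((pred - target) ** 2)

def spherical_variance(dwi):
    """Per-voxel variance of the signal across gradient directions
    (last axis); a toy stand-in for a derived diffusion scalar."""
    return dwi.var(axis=-1)

def microstructure_aware_loss(pred, target, weight=0.5):
    """Signal loss plus a penalty on errors in the derived scalar map."""
    scalar_err = np.mean(
        (spherical_variance(pred) - spherical_variance(target)) ** 2)
    return signal_loss(pred, target) + weight * scalar_err

# Toy check: a constant offset hurts the signal term but leaves the
# direction-wise variance, and hence the scalar term, unchanged.
target = np.zeros((4, 4, 6))         # 4x4 voxels, 6 gradient directions
pred = target + 0.1
perfect = microstructure_aware_loss(target, target)  # → 0.0
```

In a real training loop the same structure would be written with autograd tensors so that gradients flow through both terms.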
Affiliation(s)
- Geng Chen
- Department of Radiology, University of North Carolina, Chapel Hill, NC, USA; Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC, USA
- Yoonmi Hong
- Department of Radiology, University of North Carolina, Chapel Hill, NC, USA; Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC, USA
- Khoi Minh Huynh
- Department of Radiology, University of North Carolina, Chapel Hill, NC, USA; Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC, USA
- Pew-Thian Yap
- Department of Radiology, University of North Carolina, Chapel Hill, NC, USA; Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC, USA
16
Chung SH, Huynh KM, Goralski JL, Chen Y, Yap PT, Ceppe AS, Powell MZ, Donaldson SH, Lee YZ. Feasibility of free-breathing 19F MRI image acquisition to characterize ventilation defects in CF and healthy volunteers at wash-in. Magn Reson Med 2023; 90:79-89. PMID: 36912481. PMCID: PMC10149612. DOI: 10.1002/mrm.29630.
Abstract
PURPOSE: To explore the feasibility of measuring ventilation defect percentage (VDP) using 19F MRI during free-breathing wash-in of a fluorinated gas mixture with post-acquisition denoising, and to compare these results with those obtained through traditional Cartesian breath-hold acquisitions. METHODS: Eight adults with cystic fibrosis and 5 healthy volunteers completed a single MR session on a Siemens 3T Prisma. 1H ultrashort-TE MRI sequences were used for registration and masking, and ventilation images with 19F MRI were obtained while the subjects breathed a normoxic mixture of 79% perfluoropropane and 21% oxygen (O2). 19F MRI was performed during breath holds and while free breathing, with one overlapping spiral scan at breath hold for VDP value comparison. The 19F spiral data were denoised using a low-rank matrix recovery approach. RESULTS: VDP measured using 19F VIBE and 19F spiral images were highly correlated (r = 0.84) at 10 wash-in breaths. Second-breath VDPs were also highly correlated (r = 0.88). Denoising greatly increased SNR (pre-denoising spiral SNR, 2.46 ± 0.21; post-denoising spiral SNR, 33.91 ± 6.12; breath-hold SNR, 17.52 ± 2.08). CONCLUSION: Free-breathing 19F lung MRI VDP analysis was feasible and highly correlated with breath-hold measurements. Free-breathing methods are expected to increase patient comfort and extend ventilation MRI to patients who are unable to perform breath holds, including younger subjects and those with more severe lung disease.
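Once a lung mask is available, a VDP value like the ones compared above reduces to a simple voxel count. The toy sketch below is an illustration only: it assumes a median-based threshold, which is not necessarily the thresholding used in the paper.

```python
import numpy as np

def ventilation_defect_percent(vent, lung_mask, thresh_frac=0.3):
    """VDP: percentage of lung-mask voxels whose ventilation signal
    falls below a fraction (assumed here) of the median lung signal."""
    lung = vent[lung_mask]
    threshold = thresh_frac * np.median(lung)
    return 100.0 * np.mean(lung < threshold)

# Toy example: a 10x10 "lung" slice with one unventilated row.
vent = np.ones((10, 10))
vent[0, :] = 0.0                      # 10 defect voxels out of 100
mask = np.ones_like(vent, dtype=bool)
vdp = ventilation_defect_percent(vent, mask)  # → 10.0
```

The same function applies unchanged to 3-D volumes, since the mask indexing flattens the lung voxels either way.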
Affiliation(s)
- Sang Hun Chung
- Department of Biomedical Engineering, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- Khoi Minh Huynh
- Department of Biomedical Engineering, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- Jennifer L Goralski
- Division of Pulmonary and Critical Care Medicine, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA; Marsico Lung Institute/UNC Cystic Fibrosis Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA; Division of Pediatric Pulmonology, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- Yong Chen
- Department of Radiology, Case Western Reserve University, Cleveland, Ohio, USA
- Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- Agathe S Ceppe
- Division of Pulmonary and Critical Care Medicine, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA; Marsico Lung Institute/UNC Cystic Fibrosis Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- Margret Z Powell
- Marsico Lung Institute/UNC Cystic Fibrosis Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- Scott H Donaldson
- Division of Pulmonary and Critical Care Medicine, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA; Marsico Lung Institute/UNC Cystic Fibrosis Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- Yueh Z Lee
- Division of Pulmonary and Critical Care Medicine, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA; Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
17
Ghanbari M, Li G, Hsu LM, Yap PT. Accumulation of network redundancy marks the early stage of Alzheimer's disease. Hum Brain Mapp 2023; 44:2993-3006. PMID: 36896755. PMCID: PMC10171535. DOI: 10.1002/hbm.26257.
Abstract
Brain wiring redundancy counteracts aging-related cognitive decline by reserving additional communication channels as a neuroprotective mechanism. Such a mechanism plays a potentially important role in maintaining cognitive function during the early stages of neurodegenerative disorders such as Alzheimer's disease (AD). AD is characterized by severe cognitive decline and involves a long prodromal stage of mild cognitive impairment (MCI). Since MCI subjects are at high risk of converting to AD, identifying MCI individuals is essential for early intervention. To delineate the redundancy profile during AD progression and enable better MCI diagnosis, we define a metric that reflects redundant disjoint connections between brain regions and extract redundancy features in three high-order brain networks (the medial frontal, frontoparietal, and default mode networks) based on dynamic functional connectivity (dFC) captured by resting-state functional magnetic resonance imaging (rs-fMRI). We show that redundancy increases significantly from normal control (NC) to MCI individuals and decreases slightly from MCI to AD individuals. We further demonstrate that statistical features of redundancy are highly discriminative and yield state-of-the-art accuracy of up to 96.8 ± 1.0% in support vector machine (SVM) classification between NC and MCI individuals. This study provides evidence supporting the notion that redundancy serves as a crucial neuroprotective mechanism in MCI.
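As a toy graph-theoretic analogue of "redundant disjoint connections" between two regions (this is not the paper's metric, which is built on dynamic functional connectivity), the sketch below counts edge-disjoint paths between two nodes via unit-capacity max-flow:

```python
from collections import defaultdict, deque

def edge_disjoint_paths(edges, s, t):
    """Count edge-disjoint paths between nodes s and t in an
    undirected graph, via unit-capacity max-flow (Edmonds-Karp)."""
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v in edges:            # undirected: capacity 1 each way
        cap[(u, v)] += 1
        cap[(v, u)] += 1
        adj[u].add(v)
        adj[v].add(u)
    flow = 0
    while True:
        parent = {s: None}        # BFS for a shortest augmenting path
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        v = t                     # augment along the path found
        while parent[v] is not None:
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1
```

For a triangle `[(0, 1), (1, 2), (0, 2)]`, nodes 0 and 2 have two disjoint routes (direct, and via node 1), so losing either one still leaves the pair connected; this is the intuition behind redundancy as a reserve of communication channels.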
Affiliation(s)
- Maryam Ghanbari
- Department of Radiology, University of North Carolina, Chapel Hill, North Carolina, USA; Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, North Carolina, USA
- Guoshi Li
- Department of Radiology, University of North Carolina, Chapel Hill, North Carolina, USA; Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, North Carolina, USA
- Li-Ming Hsu
- Department of Radiology, University of North Carolina, Chapel Hill, North Carolina, USA; Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, North Carolina, USA
- Pew-Thian Yap
- Department of Radiology, University of North Carolina, Chapel Hill, North Carolina, USA; Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, North Carolina, USA
18
Hering A, Hansen L, Mok TCW, Chung ACS, Siebert H, Hager S, Lange A, Kuckertz S, Heldmann S, Shao W, Vesal S, Rusu M, Sonn G, Estienne T, Vakalopoulou M, Han L, Huang Y, Yap PT, Brudfors M, Balbastre Y, Joutard S, Modat M, Lifshitz G, Raviv D, Lv J, Li Q, Jaouen V, Visvikis D, Fourcade C, Rubeaux M, Pan W, Xu Z, Jian B, De Benetti F, Wodzinski M, Gunnarsson N, Sjolund J, Grzech D, Qiu H, Li Z, Thorley A, Duan J, Grosbrohmer C, Hoopes A, Reinertsen I, Xiao Y, Landman B, Huo Y, Murphy K, Lessmann N, van Ginneken B, Dalca AV, Heinrich MP. Learn2Reg: Comprehensive Multi-Task Medical Image Registration Challenge, Dataset and Evaluation in the Era of Deep Learning. IEEE Trans Med Imaging 2023; 42:697-712. PMID: 36264729. DOI: 10.1109/tmi.2022.3213983.
Abstract
Image registration is a fundamental medical image analysis task, and a wide variety of approaches have been proposed. However, only a few studies have comprehensively compared medical image registration approaches on a wide range of clinically relevant tasks. This limits the development of registration methods, the adoption of research advances into practice, and fair benchmarking across competing approaches. The Learn2Reg challenge addresses these limitations by providing a multi-task medical image registration dataset for the comprehensive characterisation of deformable registration algorithms. Continuous evaluation is possible at https://learn2reg.grand-challenge.org. Learn2Reg covers a wide range of anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MR), availability of annotations, as well as intra- and inter-patient registration evaluation. We established an easily accessible framework for the training and validation of 3D registration methods, which enabled the compilation of results from over 65 individual method submissions by more than 20 unique teams. We used a complementary set of metrics, including robustness, accuracy, plausibility, and runtime, enabling unique insight into the current state of the art in medical image registration. This paper describes the datasets, tasks, evaluation methods, and results of the challenge, as well as further analysis of transferability to new datasets, the importance of label supervision, and resulting bias. While no single approach worked best across all tasks, many methodological aspects could be identified that push the performance of medical image registration to a new state of the art. Furthermore, we demystified the common belief that conventional registration methods have to be much slower than deep-learning-based methods.
19
Cheng F, Liu Y, Chen Y, Yap PT. High-Resolution 3D Magnetic Resonance Fingerprinting With a Graph Convolutional Network. IEEE Trans Med Imaging 2023; 42:674-683. PMID: 36269931. PMCID: PMC10081960. DOI: 10.1109/tmi.2022.3216527.
Abstract
Magnetic resonance fingerprinting (MRF) is a novel quantitative imaging framework for rapid and simultaneous quantification of multiple tissue properties. 3D MRF allows higher through-plane resolution, but the acquisition process is slow when whole-brain coverage is needed. Existing methods for acceleration mainly rely on GRAPPA for k-space interpolation in the partition-encoding direction, limiting the acceleration factor to 2 or 3. In this work, we replace GRAPPA with a deep learning approach for accurate tissue quantification with greater acceleration. Specifically, a graph convolutional network (GCN) is developed to cater to the non-Cartesian spiral sampling trajectories typical of MRF acquisition. The GCN maintains high quantification accuracy with up to 6-fold acceleration and allows 1 mm isotropic whole-brain 3D MRF data to be acquired in 3 min and submillimeter (0.8 mm) 3D MRF data in 5 min, greatly improving the feasibility of MRF in clinical settings.
20. Ma L, Xiao D, Kim D, Lian C, Kuang T, Liu Q, Deng H, Yang E, Liebschner MAK, Gateno J, Xia JJ, Yap PT. Simulation of Postoperative Facial Appearances via Geometric Deep Learning for Efficient Orthognathic Surgical Planning. IEEE Trans Med Imaging 2023; 42:336-345. [PMID: 35657829; PMCID: PMC10037541; DOI: 10.1109/tmi.2022.3180078]
Abstract
Orthognathic surgery corrects jaw deformities to improve aesthetics and function. Due to the complexity of the craniomaxillofacial (CMF) anatomy, orthognathic surgery requires precise surgical planning, which involves predicting postoperative changes in facial appearance. Conventional methods simulate these changes with biomechanical models, which are labor-intensive and computationally expensive. Here we introduce a learning-based framework to speed up the simulation of postoperative facial appearances. Specifically, we introduce a facial shape change prediction network (FSC-Net) to learn the nonlinear mapping from bony shape changes to facial shape changes. FSC-Net is a point transform network weakly supervised by paired preoperative and postoperative data without point-wise correspondence. In FSC-Net, a distance-guided shape loss places more emphasis on the jaw region. A local point constraint loss restricts point displacements to preserve the topology and smoothness of the surface mesh after point transformation. Evaluation results indicate that FSC-Net achieves a 15× speedup with accuracy comparable to a state-of-the-art (SOTA) finite-element modeling (FEM) method.
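The idea of a distance-guided shape loss, weighting per-point errors by proximity to a region of interest such as the jaw, can be sketched as below. The Gaussian weighting, the `roi_center` parameter, and the sigma value are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def distance_guided_loss(pred, target, roi_center, sigma=10.0):
    """Weighted mean squared point error: points near the region of
    interest (e.g., the jaw) receive larger weights and therefore
    contribute more to the loss."""
    d = np.linalg.norm(target - roi_center, axis=1)   # distance to ROI center
    w = np.exp(-(d / sigma) ** 2)                     # Gaussian emphasis weights
    err = np.sum((pred - target) ** 2, axis=1)        # per-point squared error
    return float(np.sum(w * err) / np.sum(w))

pred = np.zeros((3, 3))
target = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [50.0, 0.0, 0.0]])                 # last point far from ROI
loss = distance_guided_loss(pred, target, roi_center=np.zeros(3))
```

The far-away point, despite a huge raw error, barely affects the loss, which is the intended "emphasis" behavior.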
21. Ma L, Lian C, Kim D, Xiao D, Wei D, Liu Q, Kuang T, Ghanbari M, Li G, Gateno J, Shen SGF, Wang L, Shen D, Xia JJ, Yap PT. Bidirectional prediction of facial and bony shapes for orthognathic surgical planning. Med Image Anal 2023; 83:102644. [PMID: 36272236; PMCID: PMC10445637; DOI: 10.1016/j.media.2022.102644]
Abstract
This paper proposes a deep learning framework to encode subject-specific transformations between facial and bony shapes for orthognathic surgical planning. Our framework involves a bidirectional point-to-point convolutional network (P2P-Conv) to predict the transformations between facial and bony shapes. P2P-Conv is an extension of the state-of-the-art P2P-Net and leverages dynamic point-wise convolution (i.e., PointConv) to capture local-to-global spatial information. Data augmentation is carried out in the training of P2P-Conv with multiple point subsets from the facial and bony shapes. During inference, network outputs generated for multiple point subsets are combined into a dense transformation. Finally, non-rigid registration using the coherent point drift (CPD) algorithm is applied to generate surface meshes based on the predicted point sets. Experimental results on real-subject data demonstrate that our method substantially improves the prediction of facial and bony shapes over state-of-the-art methods.
Affiliation(s)
- Lei Ma, Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Chunfeng Lian, Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Daeseung Kim, Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX 77030, USA
- Deqiang Xiao, Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Dongming Wei, Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Qin Liu, Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Tianshu Kuang, Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX 77030, USA
- Maryam Ghanbari, Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Guoshi Li, Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Jaime Gateno, Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX 77030, USA; Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, NY 10065, USA
- Steve G F Shen, Shanghai Ninth Hospital, Shanghai Jiaotong University College of Medicine, Shanghai 200025, China
- Li Wang, Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Dinggang Shen, Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- James J Xia, Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX 77030, USA; Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, NY 10065, USA
- Pew-Thian Yap, Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
22. Ahmad S, Wu Y, Wu Z, Thung KH, Liu S, Lin W, Li G, Wang L, Yap PT. Multifaceted atlases of the human brain in its infancy. Nat Methods 2023; 20:55-64. [PMID: 36585454; PMCID: PMC9834057; DOI: 10.1038/s41592-022-01703-z]
Abstract
Brain atlases are spatial references for integrating, processing, and analyzing brain features gathered from different individuals, sources, and scales. Here we introduce a collection of joint surface-volume atlases that chart postnatal development of the human brain in a spatiotemporally dense manner from two weeks to two years of age. Our month-specific atlases chart normative patterns and capture key traits of early brain development and are therefore conducive to identifying aberrations from normal developmental trajectories. These atlases will enhance our understanding of early structural and functional development by facilitating the mapping of diverse features of the infant brain to a common reference frame for precise multifaceted quantification of cortical and subcortical changes.
Affiliation(s)
- Sahar Ahmad, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC, USA
- Ye Wu, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC, USA
- Zhengwang Wu, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC, USA
- Kim-Han Thung, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC, USA
- Siyuan Liu, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC, USA
- Weili Lin, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC, USA
- Gang Li, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC, USA
- Li Wang, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC, USA
- Pew-Thian Yap, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC, USA
23. Guan H, Yue L, Yap PT, Xiao S, Bozoki A, Liu M. Attention-Guided Autoencoder for Automated Progression Prediction of Subjective Cognitive Decline With Structural MRI. IEEE J Biomed Health Inform 2023; PP. [PMID: 37030725; DOI: 10.1109/jbhi.2023.3257081]
Abstract
Subjective cognitive decline (SCD) is a preclinical stage of Alzheimer's disease (AD) that occurs even earlier than mild cognitive impairment (MCI). Progressive SCD converts to MCI, with the potential of further evolving into AD. Therefore, early identification of progressive SCD with neuroimaging techniques (e.g., structural MRI) is of great clinical value for early intervention in AD. However, existing MRI-based machine/deep learning methods usually suffer from the small-sample-size problem and lack interpretability. To this end, we propose an interpretable autoencoder model with domain transfer learning (IADT) for progression prediction of SCD. First, the proposed model can leverage MRIs from both the target domain (e.g., SCD) and auxiliary domains (e.g., AD and NC) for progressive SCD identification. Besides, it can automatically locate disease-related brain regions of interest (defined in brain atlases) through an attention mechanism, which offers good interpretability. In addition, the IADT model is straightforward to train and test, taking only 5-10 seconds on CPUs, and is suitable for medical tasks with small datasets. Extensive experiments on the publicly available ADNI dataset and a private CLAS dataset demonstrate the effectiveness of the proposed method.
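The attention mechanism that highlights disease-related ROIs can be sketched as a softmax weighting over regional features; the weights double as an interpretability map. This is a generic attention-pooling illustration with invented toy data, not the IADT architecture:

```python
import numpy as np

def roi_attention(features, scores):
    """Softmax-normalize ROI relevance scores and return the
    attention-weighted feature vector together with the weights,
    which indicate how much each brain region contributed."""
    e = np.exp(scores - scores.max())   # numerically stable softmax
    w = e / e.sum()
    return w @ features, w

feats = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [10.0, 10.0]])        # 3 ROIs, 2 features each
scores = np.array([0.0, 0.0, 5.0])     # third ROI scored most relevant
pooled, weights = roi_attention(feats, scores)
```

Inspecting `weights` after training is what gives this style of model its region-level interpretability.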
Affiliation(s)
- Hao Guan, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina Chapel Hill, Chapel Hill, NC, USA
- Ling Yue, Department of Geriatric Psychiatry, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Pew-Thian Yap, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina Chapel Hill, Chapel Hill, NC, USA
- Shifu Xiao, Department of Geriatric Psychiatry, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Andrea Bozoki, Department of Neurology, University of North Carolina, Chapel Hill, NC, USA
- Mingxia Liu, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina Chapel Hill, Chapel Hill, NC, USA
24. Chen X, Kuang T, Deng H, Fung SH, Gateno J, Xia JJ, Yap PT. Dual Adversarial Attention Mechanism for Unsupervised Domain Adaptive Medical Image Segmentation. IEEE Trans Med Imaging 2022; 41:3445-3453. [PMID: 35759585; PMCID: PMC9748599; DOI: 10.1109/tmi.2022.3186698]
Abstract
Domain adaptation techniques have been demonstrated to be effective in addressing label deficiency in medical image segmentation. However, conventional domain-adaptation-based approaches often concentrate on matching global marginal distributions between domains in a class-agnostic fashion. In this paper, we present a dual-attention domain-adaptive segmentation network (DADASeg-Net) for cross-modality medical image segmentation. The key contribution of DADASeg-Net is a novel dual adversarial attention mechanism, which regularizes the domain adaptation module with two attention maps, from the spatial and class perspectives respectively. Specifically, the spatial attention map guides the domain adaptation module to focus on regions that are challenging to align. The class attention map encourages the domain adaptation module to capture class-specific rather than class-agnostic knowledge for distribution alignment. DADASeg-Net shows superior performance in two challenging medical image segmentation tasks.
25. Lang Y, Lian C, Xiao D, Deng H, Thung KH, Yuan P, Gateno J, Kuang T, Alfi DM, Wang L, Shen D, Xia JJ, Yap PT. Localization of Craniomaxillofacial Landmarks on CBCT Images Using 3D Mask R-CNN and Local Dependency Learning. IEEE Trans Med Imaging 2022; 41:2856-2866. [PMID: 35544487; PMCID: PMC9673501; DOI: 10.1109/tmi.2022.3174513]
Abstract
Cephalometric analysis relies on accurate detection of craniomaxillofacial (CMF) landmarks from cone-beam computed tomography (CBCT) images. However, due to the complexity of CMF bony structures, it is difficult to localize landmarks efficiently and accurately. In this paper, we propose a deep learning framework that tackles this challenge by jointly digitizing 105 CMF landmarks on CBCT images. By explicitly learning the local geometrical relationships between landmarks, our approach extends Mask R-CNN for end-to-end prediction of landmark locations. Specifically, we first apply a detection network to a down-sampled 3D image, leveraging global contextual information to predict the approximate locations of the landmarks. We then leverage local information provided by higher-resolution image patches to refine the landmark locations. On patients with varying non-syndromic jaw deformities, our method achieves an average detection accuracy of 1.38 ± 0.95 mm, outperforming a related state-of-the-art method.
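The coarse-to-fine strategy described above, a coarse estimate on a downsampled volume refined within a local high-resolution window, reduces in 1D to the following toy sketch. The heatmap, downsampling factor, and search radius are illustrative assumptions, not values from the paper:

```python
import numpy as np

def coarse_to_fine_peak(heatmap, factor=4, radius=6):
    """Locate a landmark: argmax on a downsampled heatmap gives a
    coarse position; a local search on the full-resolution map
    within +/- radius then refines it."""
    coarse = int(heatmap[::factor].argmax()) * factor   # coarse estimate
    lo = max(coarse - radius, 0)
    hi = min(coarse + radius + 1, len(heatmap))
    return lo + int(heatmap[lo:hi].argmax())            # refined estimate

x = np.arange(100, dtype=float)
heatmap = np.exp(-0.5 * ((x - 37.0) / 2.0) ** 2)        # true peak at index 37
loc = coarse_to_fine_peak(heatmap)
```

The coarse pass can only land on a multiple of the downsampling factor (here 36); the refinement recovers the exact peak.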
26. Xu T, Wu Y, Hong Y, Ahmad S, Huynh KM, Wang Z, Lin W, Chang WT, Yap PT. Rapid Diffusion Magnetic Resonance Imaging Using Slice-Interleaved Encoding. Med Image Anal 2022; 81:102548. [PMID: 35917693; PMCID: PMC9988327; DOI: 10.1016/j.media.2022.102548]
Abstract
In this paper, we present a robust reconstruction scheme for diffusion MRI (dMRI) data acquired using slice-interleaved diffusion encoding (SIDE). When combined with SIDE undersampling and simultaneous multi-slice (SMS) imaging, our reconstruction strategy is capable of significantly reducing the amount of data that needs to be acquired, enabling high-speed diffusion imaging for pediatric, elderly, and claustrophobic individuals. In contrast to the conventional approach of acquiring a full diffusion-weighted (DW) volume per diffusion wavevector, SIDE acquires in each repetition time (TR) a volume that consists of interleaved slice groups, each group corresponding to a different diffusion wavevector. This strategy allows SIDE to rapidly acquire data covering a large number of wavevectors within a short period of time. The proposed reconstruction method uses a diffusion spectrum model and multi-dimensional total variation to recover full DW images from DW volumes that are slice-undersampled due to unacquired SIDE volumes. We formulate an inverse problem that can be solved efficiently using the alternating direction method of multipliers (ADMM). Experimental results demonstrate that DW images can be reconstructed with high fidelity even when the acquisition is accelerated 25-fold.
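The ADMM splitting pattern used for this kind of total-variation-regularized inverse problem can be shown on a minimal 1D analogue, TV-regularized denoising, which is not the authors' formulation but exhibits the same three-step structure: a linear solve for the image, a soft-threshold for the auxiliary variable, and a dual update:

```python
import numpy as np

def tv_denoise_admm(y, lam=1.0, rho=1.0, n_iter=200):
    """ADMM for min_x 0.5*||x - y||^2 + lam*||Dx||_1 with split z = Dx:
    x-update is a linear solve, z-update a soft-threshold, u the
    scaled dual variable."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)          # finite-difference operator
    z = np.zeros(n - 1)
    u = np.zeros(n - 1)
    Q = np.eye(n) + rho * D.T @ D           # constant system matrix
    for _ in range(n_iter):
        x = np.linalg.solve(Q, y + rho * D.T @ (z - u))          # x-update
        Dx = D @ x
        z = np.sign(Dx + u) * np.maximum(np.abs(Dx + u) - lam / rho, 0.0)  # z-update
        u = u + Dx - z                                            # dual update
    return x

y = np.concatenate([np.zeros(20), np.ones(20)]) + 0.01  # step signal
x = tv_denoise_admm(y, lam=0.1)
```

The recovered signal is piecewise constant, as the L1 penalty on differences demands; in the paper the data term instead involves the slice-undersampling operator and the TV is multi-dimensional.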
Affiliation(s)
- Tiantian Xu, Department of Computer Science, University of North Carolina, Chapel Hill, NC 27599, USA
- Ye Wu, Department of Radiology, University of North Carolina, Chapel Hill, NC 27599, USA; Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC 27599, USA
- Yoonmi Hong, Department of Radiology, University of North Carolina, Chapel Hill, NC 27599, USA; Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC 27599, USA
- Sahar Ahmad, Department of Radiology, University of North Carolina, Chapel Hill, NC 27599, USA; Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC 27599, USA
- Khoi Minh Huynh, Department of Biomedical Engineering, University of North Carolina, Chapel Hill, NC 27599, USA
- Zhixing Wang, Department of Biomedical Engineering, University of Virginia, Charlottesville, VA 22904, USA
- Weili Lin, Department of Radiology, University of North Carolina, Chapel Hill, NC 27599, USA; Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC 27599, USA
- Wei-Tang Chang, Department of Radiology, University of North Carolina, Chapel Hill, NC 27599, USA; Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC 27599, USA
- Pew-Thian Yap, Department of Computer Science, University of North Carolina, Chapel Hill, NC 27599, USA; Department of Biomedical Engineering, University of North Carolina, Chapel Hill, NC 27599, USA; Department of Radiology, University of North Carolina, Chapel Hill, NC 27599, USA; Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC 27599, USA
27. Chen G, Jiang H, Liu J, Ma J, Cui H, Xia Y, Yap PT. Hybrid Graph Transformer for Tissue Microstructure Estimation with Undersampled Diffusion MRI Data. Med Image Comput Comput Assist Interv 2022; 13431:113-122. [PMID: 37126477; PMCID: PMC10141974; DOI: 10.1007/978-3-031-16431-6_11]
Abstract
Advanced contemporary diffusion models for tissue microstructure often require diffusion MRI (DMRI) data with sufficiently dense sampling in the diffusion wavevector space for reliable model fitting, which might not always be feasible in practice. A potential remedy to this problem is using deep learning techniques to predict high-quality diffusion microstructural indices from sparsely sampled data. However, existing methods are either agnostic to the data geometry in the diffusion wavevector space (q-space) or limited to leveraging information from only local neighborhoods in the physical coordinate space (x-space). Here, we propose a hybrid graph transformer (HGT) to explicitly consider the q-space geometric structure with a graph neural network (GNN) and make full use of spatial information with a novel residual dense transformer (RDT). The RDT consists of multiple densely connected transformer layers and a residual connection to facilitate model training. Extensive experiments on data from the Human Connectome Project (HCP) demonstrate that our method significantly improves the quality of microstructural estimations over existing state-of-the-art methods.
Affiliation(s)
- Geng Chen, National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an, China; Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Haotian Jiang, Department of Software Engineering, Heilongjiang University, Harbin, China
- Jiannan Liu, Department of Computer Science and Technology, Heilongjiang University, Harbin, China
- Jiquan Ma, Department of Computer Science and Technology, Heilongjiang University, Harbin, China
- Hui Cui, Department of Computer Science and Information Technology, La Trobe University, Melbourne, Australia
- Yong Xia, National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an, China
- Pew-Thian Yap, Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
28. Ahmad S, Nan F, Wu Y, Wu Z, Lin W, Wang L, Li G, Wu D, Yap PT. Harmonization of Multi-site Cortical Data Across the Human Lifespan. Mach Learn Med Imaging 2022; 13583:220-229. [PMID: 37126478; PMCID: PMC10134963; DOI: 10.1007/978-3-031-21014-3_23]
Abstract
Neuroimaging data harmonization has become a prerequisite in integrative data analytics for standardizing a wide variety of data collected from multiple studies and enabling interdisciplinary research. The lack of standardized image acquisition and computational procedures introduces non-biological variability and inconsistency in multi-site data, complicating downstream statistical analyses. Here, we propose a novel statistical technique to retrospectively harmonize multi-site cortical data collected longitudinally and cross-sectionally between birth and 100 years. We demonstrate that our method can effectively eliminate non-biological disparities from cortical thickness and myelination measurements, while preserving biological variation across the entire lifespan. Our harmonization method will foster large-scale population studies by providing comparable data required for investigating developmental and aging processes.
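A minimal location-scale sketch of retrospective harmonization, in the spirit of ComBat-style methods but not the authors' statistical technique, is to fit a pooled biological trend (here a linear age model), then rescale each site's residuals to zero mean and the pooled variance, removing site offsets while preserving the age effect. All data below are simulated for illustration:

```python
import numpy as np

def harmonize(values, ages, sites):
    """Location-scale harmonization sketch: fit a pooled linear age
    trend, then standardize each site's residuals against the pooled
    residual scale, so site-specific shifts and scalings are removed."""
    A = np.column_stack([np.ones_like(ages), ages])
    beta, *_ = np.linalg.lstsq(A, values, rcond=None)   # pooled age model
    fitted = A @ beta
    resid = values - fitted
    out = np.empty_like(values)
    for s in np.unique(sites):
        m = sites == s
        out[m] = fitted[m] + (resid[m] - resid[m].mean()) \
                 / (resid[m].std() + 1e-12) * resid.std()
    return out

rng = np.random.default_rng(0)
ages = rng.uniform(0, 100, 200)
sites = np.repeat([0, 1], 100)
# site 1 carries a non-biological +0.5 offset on top of the age trend
values = 2.0 + 0.01 * ages + 0.5 * sites + rng.normal(0, 0.1, 200)
h = harmonize(values, ages, sites)
```

After harmonization, each site's residuals around the pooled age trend have zero mean, i.e., the site offset is gone while the age slope survives.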
Affiliation(s)
- Sahar Ahmad, Department of Radiology and Biomedical Research Imaging Center (BRIC), The University of North Carolina at Chapel Hill, NC, USA
- Fang Nan, Department of Biostatistics, University of Washington, Seattle, WA, USA
- Ye Wu, Department of Radiology and Biomedical Research Imaging Center (BRIC), The University of North Carolina at Chapel Hill, NC, USA
- Zhengwang Wu, Department of Radiology and Biomedical Research Imaging Center (BRIC), The University of North Carolina at Chapel Hill, NC, USA
- Weili Lin, Department of Radiology and Biomedical Research Imaging Center (BRIC), The University of North Carolina at Chapel Hill, NC, USA
- Li Wang, Department of Radiology and Biomedical Research Imaging Center (BRIC), The University of North Carolina at Chapel Hill, NC, USA
- Gang Li, Department of Radiology and Biomedical Research Imaging Center (BRIC), The University of North Carolina at Chapel Hill, NC, USA
- Di Wu, Department of Biostatistics, Gillings School of Global Public Health, The University of North Carolina at Chapel Hill, NC, USA; Division of Oral and Craniofacial Health Research, Adams School of Dentistry, The University of North Carolina at Chapel Hill, NC, USA
- Pew-Thian Yap, Department of Radiology and Biomedical Research Imaging Center (BRIC), The University of North Carolina at Chapel Hill, NC, USA
29. Li G, Yap PT. From descriptive connectome to mechanistic connectome: Generative modeling in functional magnetic resonance imaging analysis. Front Hum Neurosci 2022; 16:940842. [PMID: 36061504; PMCID: PMC9428697; DOI: 10.3389/fnhum.2022.940842]
Abstract
As a newly emerging field, connectomics has greatly advanced our understanding of the wiring diagram and organizational features of the human brain. Generative modeling-based connectome analysis, in particular, plays a vital role in deciphering the neural mechanisms of cognitive functions in health and dysfunction in diseases. Here we review the foundation and development of major generative modeling approaches for functional magnetic resonance imaging (fMRI) and survey their applications to cognitive or clinical neuroscience problems. We argue that conventional structural and functional connectivity (FC) analysis alone is not sufficient to reveal the complex circuit interactions underlying observed neuroimaging data and should be supplemented with generative modeling-based effective connectivity and simulation, a fruitful practice that we term "mechanistic connectome." The transformation from descriptive connectome to mechanistic connectome will open up promising avenues to gain mechanistic insights into the delicate operating principles of the human brain and their potential impairments in diseases, which facilitates the development of effective personalized treatments to curb neurological and psychiatric disorders.
Affiliation(s)
- Guoshi Li (correspondence), Department of Radiology, University of North Carolina, Chapel Hill, NC, United States; Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC, United States
- Pew-Thian Yap, Department of Radiology, University of North Carolina, Chapel Hill, NC, United States; Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC, United States
30. Wei D, Ahmad S, Guo Y, Chen L, Huang Y, Ma L, Wu Z, Li G, Wang L, Lin W, Yap PT, Shen D, Wang Q. Recurrent Tissue-Aware Network for Deformable Registration of Infant Brain MR Images. IEEE Trans Med Imaging 2022; 41:1219-1229. [PMID: 34932474; PMCID: PMC9064923; DOI: 10.1109/tmi.2021.3137280]
Abstract
Deformable registration is fundamental to longitudinal and population-based image analyses. However, it is challenging to precisely align longitudinal infant brain MR images of the same subject, as well as cross-sectional infant brain MR images of different subjects, due to fast brain development during infancy. In this paper, we propose a recurrently usable deep neural network for the registration of infant brain MR images. There are three main highlights of our proposed method. (i) We use brain tissue segmentation maps for registration, instead of intensity images, to tackle the issue of rapid contrast changes of brain tissues during the first year of life. (ii) A single registration network is trained in a one-shot manner and then applied recurrently multiple times during inference, such that the complex deformation field can be recovered incrementally. (iii) We also incorporate an adaptive smoothing layer and a tissue-aware anti-folding constraint into the registration network to ensure the physiological plausibility of estimated deformations without degrading registration accuracy. Experimental results, in comparison to state-of-the-art registration methods, indicate that our proposed method achieves the highest registration accuracy while still preserving the smoothness of the deformation field. The implementation of our registration network is available at https://github.com/Barnonewdm/ACTA-Reg-Net.
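Recovering a complex deformation incrementally by applying the same network repeatedly amounts to composing displacement fields: the new displacement at x is u(x) plus the incremental displacement sampled at the already-warped location x + u(x). A schematic 1D version, with linear interpolation standing in for the network and a constant toy increment, looks like this:

```python
import numpy as np

def compose(u, u_inc, x):
    """Compose an incremental displacement field u_inc with the
    current field u: sample u_inc at the warped locations x + u(x)
    (linear interpolation) and add it to u."""
    return u + np.interp(x + u, x, u_inc)

x = np.linspace(0.0, 1.0, 11)
u = np.zeros_like(x)                 # identity deformation to start
u_inc = np.full_like(x, 0.05)        # small constant shift per iteration
for _ in range(3):                   # three recurrent applications
    u = compose(u, u_inc, x)
```

Each recurrent pass only needs to predict a small, easy-to-regularize increment, which is why the total deformation can stay smooth and folding-free.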
31. Zhang H, Alexander DC, Shen D, Yap PT. Special issue on machine learning and deep learning in magnetic resonance. NMR Biomed 2022; 35:e4713. [PMID: 35253294; DOI: 10.1002/nbm.4713]
Affiliation(s)
- Hui Zhang, Centre for Medical Image Computing and Department of Computer Science, University College London, London, UK
- Daniel C Alexander, Centre for Medical Image Computing and Department of Computer Science, University College London, London, UK
- Dinggang Shen, Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, North Carolina, USA
- Pew-Thian Yap, Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, North Carolina, USA
32. Wei J, Wu Z, Wang L, Bui TD, Qu L, Yap PT, Xia Y, Li G, Shen D. A cascaded nested network for 3T brain MR image segmentation guided by 7T labeling. Pattern Recognit 2022; 124:108420. [PMID: 38469076; PMCID: PMC10927017; DOI: 10.1016/j.patcog.2021.108420]
Abstract
Accurate segmentation of the brain into gray matter, white matter, and cerebrospinal fluid using magnetic resonance (MR) imaging is critical for visualization and quantification of brain anatomy. Compared to 3T MR images, 7T MR images exhibit higher tissue contrast that is conducive to accurate tissue delineation for training segmentation models. In this paper, we propose a cascaded nested network (CaNes-Net) for segmentation of 3T brain MR images, trained with tissue labels delineated from the corresponding 7T images. We first train a nested network (Nes-Net) for a rough segmentation. A second Nes-Net uses tissue-specific geodesic distance maps as contextual information to refine the segmentation. This process is iterated to build CaNes-Net as a cascade of Nes-Net modules that gradually refine the segmentation. To alleviate the misalignment between 3T and corresponding 7T MR images, we incorporate a correlation coefficient map that allows well-aligned voxels to play a more important role in supervising the training process. We compared CaNes-Net with the SPM and FSL tools, as well as four deep learning models, on 18 adult subjects and the ADNI dataset. Our results indicate that CaNes-Net reduces segmentation errors caused by the misalignment and improves segmentation accuracy substantially over the competing methods.
Affiliation(s)
- Jie Wei, National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an 710072, China; Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Zhengwang Wu, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Li Wang, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Toan Duc Bui, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Liangqiong Qu, Department of Biomedical Data Science, Stanford University, Stanford, CA 94305, USA
- Pew-Thian Yap, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Yong Xia, National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an 710072, China
- Gang Li, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Dinggang Shen, School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China; Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
Collapse
|
33
|
Ghanbari M, Soussia M, Jiang W, Wei D, Yap PT, Shen D, Zhang H. Alterations of dynamic redundancy of functional brain subnetworks in Alzheimer's disease and major depression disorders. Neuroimage Clin 2021; 33:102917. [PMID: 34929585 PMCID: PMC8688702 DOI: 10.1016/j.nicl.2021.102917] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2021] [Revised: 12/05/2021] [Accepted: 12/13/2021] [Indexed: 11/15/2022]
Abstract
The human brain is not only efficiently but also "redundantly" connected. The redundancy design could help the brain maintain resilience to disease attacks. This paper explores subnetwork-level redundancy dynamics and the potential of such metrics in disease studies. As such, we looked into specific functional subnetworks, including those associated with high-level functions. We investigated how the subnetwork redundancy dynamics change along with Alzheimer's disease (AD) progression and with major depressive disorder (MDD), two major disorders that could share similar subnetwork alterations. We found an increased dynamic redundancy of the subcortical-cerebellum subnetwork and its connections to other high-order subnetworks in the mild cognitive impairment (MCI) and AD compared to the normal control (NC). With gained spatial specificity, we found such a redundancy index was sensitive to disease symptoms and could act as a protective mechanism to prevent the collapse of the brain network and functions. The dynamic redundancy of the medial frontal subnetwork and its connections to the frontoparietal subnetwork was also found decreased in MDD compared to NC. The spatial specificity of the redundancy dynamics changes may provide essential knowledge for a better understanding of shared neural substrates in AD and MDD.
Collapse
Affiliation(s)
- Maryam Ghanbari
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Mayssa Soussia
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Weixiong Jiang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Dongming Wei
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Pew-Thian Yap
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Han Zhang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA.
| |
Collapse
|
34
|
Chen X, Lian C, Deng HH, Kuang T, Lin HY, Xiao D, Gateno J, Shen D, Xia JJ, Yap PT. Fast and Accurate Craniomaxillofacial Landmark Detection via 3D Faster R-CNN. IEEE Trans Med Imaging 2021; 40:3867-3878. [PMID: 34310293 PMCID: PMC8686670 DOI: 10.1109/tmi.2021.3099509] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
Automatic craniomaxillofacial (CMF) landmark localization from cone-beam computed tomography (CBCT) images is challenging, considering that 1) the number of landmarks in the images may change due to varying deformities and traumatic defects, and 2) the CBCT images used in clinical practice are typically large. In this paper, we propose a two-stage, coarse-to-fine deep learning method to tackle these challenges with both speed and accuracy in mind. Specifically, we first use a 3D faster R-CNN to roughly locate landmarks in down-sampled CBCT images that have varying numbers of landmarks. By converting the landmark point detection problem to a generic object detection problem, our 3D faster R-CNN is formulated to detect virtual, fixed-size objects in small boxes with centers indicating the approximate locations of the landmarks. Based on the rough landmark locations, we then crop 3D patches from the high-resolution images and send them to a multi-scale UNet for the regression of heatmaps, from which the refined landmark locations are finally derived. We evaluated the proposed approach by detecting up to 18 landmarks on a real clinical dataset of CMF CBCT images with various conditions. Experiments show that our approach achieves state-of-the-art accuracy of 0.89 ± 0.64mm in an average time of 26.2 seconds per volume.
Collapse
|
35
|
Ghanbari M, Zhou Z, Hsu LM, Han Y, Sun Y, Yap PT, Zhang H, Shen D. Altered Connectedness of the Brain Chronnectome During the Progression to Alzheimer's Disease. Neuroinformatics 2021; 20:391-403. [PMID: 34837154 DOI: 10.1007/s12021-021-09554-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/03/2021] [Indexed: 11/24/2022]
Abstract
Graph theory has been extensively used to investigate brain network topology and its changes in disease cohorts. However, many graph theoretic analysis-based brain network studies focused on the shortest paths or, more generally, cost-efficiency. In this work, we use two new concepts, connectedness and 2-connectedness, to measure different global properties compared to the previously widely adopted ones. We apply them to unravel interesting characteristics in the brain, such as redundancy design and further conduct a time-varying brain functional network analysis for characterizing the progression of Alzheimer's disease (AD). Specifically, we define different connectedness and 2-connectedness states and evaluate their dynamics in AD and its preclinical stage, mild cognitive impairment (MCI), compared to the normal controls (NC). Results indicate that, compared to MCI and NC, brain networks of AD tend to be more frequently connected at a sparse level. For MCI, we found that their brains are more likely to be 2-connected in the minimal connected state as well indicating increasing redundancy in brain connectivity. Such a redundant design could ensure maintained connectedness of the MCI's brain network in the case that pathological damages break down any link or silenced any node, making it possible to preserve cognitive abilities. Our study suggests that the redundancy in the brain functional chronnectome could be altered in the preclinical stage of AD. The findings can be successfully replicated in a retest study and with an independent MCI dataset. Characterizing redundancy design in the brain chronnectome using connectedness and 2-connectedness analysis provides a unique viewpoint for understanding disease affected brain networks.
Collapse
Affiliation(s)
- Maryam Ghanbari
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - Zhen Zhou
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - Li-Ming Hsu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - Ying Han
- Department of National Clinical Research Center for Geriatric Disorders, Beijing, 100053, China
- Center of Alzheimer's Disease, Beijing Institute for Brain Disorders, Beijing, 100053, China
- Beijing Institute of Geriatrics, Beijing, 100053, China
- Department of Neurology, Xuanwu Hospital of Capital Medical University, Beijing, 100053, China
| | - Yu Sun
- Department of Neurology, Xuanwu Hospital of Capital Medical University, Beijing, 100053, China
| | - Pew-Thian Yap
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - Han Zhang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA.
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China.
| | - Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA.
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China.
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China.
| |
Collapse
|
36
|
Abstract
Magnetic resonance fingerprinting (MRF) is increasingly being used to evaluate brain development and differentiate normal and pathologic tissues in children. MRF can provide reliable and accurate intrinsic tissue properties, such as T1 and T2 relaxation times. MRF is a powerful tool in evaluating brain disease in pediatric population. MRF is a new quantitative MR imaging technique for rapid and simultaneous quantification of multiple tissue properties.
Collapse
Affiliation(s)
- Sheng-Che Hung
- Department of Radiology, School of Medicine, University of North Carolina at Chapel Hill, 2006 Old Clinic, CB#7510, 101 Manning Dr, Chapel Hill, NC 27599, USA; Biomedical Research Imaging Center, School of Medicine, University of North Carolina at Chapel Hill, 125 Mason Farm Road, Marsico Hall, suite 1200, Chapel Hill, NC 27599, USA
| | - Yong Chen
- Department of Radiology, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH 44106, USA
| | - Pew-Thian Yap
- Department of Radiology, School of Medicine, University of North Carolina at Chapel Hill, 2006 Old Clinic, CB#7510, 101 Manning Dr, Chapel Hill, NC 27599, USA; Biomedical Research Imaging Center, School of Medicine, University of North Carolina at Chapel Hill, 125 Mason Farm Road, Marsico Hall, suite 1200, Chapel Hill, NC 27599, USA
| | - Weili Lin
- Department of Radiology, School of Medicine, University of North Carolina at Chapel Hill, 2006 Old Clinic, CB#7510, 101 Manning Dr, Chapel Hill, NC 27599, USA; Biomedical Research Imaging Center, School of Medicine, University of North Carolina at Chapel Hill, 125 Mason Farm Road, Marsico Hall, suite 1200, Chapel Hill, NC 27599, USA.
| |
Collapse
|
37
|
Szalkowski G, Nie D, Zhu T, Yap PT, Lian J. Synthetic digital reconstructed radiographs for MR-only robotic stereotactic radiation therapy: A proof of concept. Comput Biol Med 2021; 138:104917. [PMID: 34688037 DOI: 10.1016/j.compbiomed.2021.104917] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2021] [Revised: 09/16/2021] [Accepted: 09/29/2021] [Indexed: 10/20/2022]
Abstract
PURPOSE To create synthetic CTs and digital reconstructed radiographs (DRRs) from MR images that allow for fiducial visualization and accurate dose calculation for MR-only radiosurgery. METHODS We developed a machine learning model to create synthetic CTs from pelvic MRs for prostate treatments. This model has been previously proven to generate synthetic CTs with accuracy on par or better than alternate methods, such as atlas-based registration. Our dataset consisted of 11 paired CT and conventional MR (T2) images used for previous CyberKnife (Accuray, Inc) radiotherapy treatments. The MR images were pre-processed to mimic the appearance of fiducial-enhancing images. Two models were trained for each parameter case, using a sub-set of the available image pairs, with the remaining images set aside for testing and validation of the model to identify the optimal patch size and number of image pairs used for training. Four models were then trained using the identified parameters and used to generate synthetic CTs, which in turn were used to generate DRRs at angles 45° and 315°, as would be used for a CyberKnife treatment. The synthetic CTs and DRRs were compared visually and using the mean squared error and peak signal-to-noise ratio against the ground-truth images to evaluate their similarity. RESULTS The synthetic CTs, as well as the DRRs generated from them, gave similar visualization of the fiducial markers in the prostate as the true counterparts. There was no significant difference found for the fiducial localization for the CTs and DRRs. Across the 8 DRRs analyzed, the mean MSE between the normalized true and synthetic DRRs was 0.66 ± 0.42% and the mean PSNR for this region was 22.9 ± 3.7 dB. For the full CTs, the mean MAE was 72.9 ± 88.1 HU and the mean PSNR was 31.2 ± 2.2 dB. 
CONCLUSIONS Our machine learning-based method provides a proof of concept of a way to generate synthetic CTs and DRRs for accurate dose calculation and fiducial localization for use in radiation treatment of the prostate.
Collapse
Affiliation(s)
- Gregory Szalkowski
- Department of Radiation Oncology, University of North Carolina, Chapel Hill, NC, USA
| | - Dong Nie
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC, USA
| | - Tong Zhu
- Department of Radiation Oncology, University of North Carolina, Chapel Hill, NC, USA
| | - Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC, USA.
| | - Jun Lian
- Department of Radiation Oncology, University of North Carolina, Chapel Hill, NC, USA.
| |
Collapse
|
38
|
Jiaerken Y, Lian C, Huang P, Yu X, Zhang R, Wang S, Hong H, Luo X, Yap PT, Shen D, Zhang M. Dilated perivascular space is related to reduced free-water in surrounding white matter among healthy adults and elderlies but not in patients with severe cerebral small vessel disease. J Cereb Blood Flow Metab 2021; 41:2561-2570. [PMID: 33818186 PMCID: PMC8504939 DOI: 10.1177/0271678x211005875] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
Abstract
Perivascular space facilitates cerebral interstitial water clearance. However, it is unclear how dilated perivascular space (dPVS) affects the interstitial water of surrounding white matter. We aimed to determine the presence and extent of changes in normal-appearing white matter water components around dPVS in different populations. Twenty healthy elderly subjects and 15 elderly subjects with severe cerebral small vessel disease (CSVD, with lacunar infarction 6 months before the scan) were included in our study. And other 28 healthy adult subjects were enrolled under a different scanning parameter to see if the results are comparable. The normal-appearing white matter around dPVS was categorized into 10 layers (1 mm thickness each) based on their distance to dPVS. We evaluated the mean isotropic-diffusing water volume fraction in each layer. We discovered a significantly reduced free-water content in the layers closely adjacent to the dPVS in the healthy elderlies. however, this reduction around dPVS was weaker in the CSVD subjects. We also discovered an elevated free-water content within dPVS. DPVS played different roles in healthy subjects or CSVD subjects. The reduced water content around dPVS in healthy subjects suggests these MR-visible PVSs are not always related to the stagnation of fluid.
Collapse
Affiliation(s)
- Yeerfan Jiaerken
- Department of Radiology, School of Medicine, Second Affiliated Hospital of Zhejiang University, Zhejiang, China.,Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - Chunfeng Lian
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - Peiyu Huang
- Department of Radiology, School of Medicine, Second Affiliated Hospital of Zhejiang University, Zhejiang, China
| | - Xinfeng Yu
- Department of Radiology, School of Medicine, Second Affiliated Hospital of Zhejiang University, Zhejiang, China
| | - Ruiting Zhang
- Department of Radiology, School of Medicine, Second Affiliated Hospital of Zhejiang University, Zhejiang, China
| | - Shuyue Wang
- Department of Radiology, School of Medicine, Second Affiliated Hospital of Zhejiang University, Zhejiang, China
| | - Hui Hong
- Department of Radiology, School of Medicine, Second Affiliated Hospital of Zhejiang University, Zhejiang, China
| | - Xiao Luo
- Department of Radiology, School of Medicine, Second Affiliated Hospital of Zhejiang University, Zhejiang, China
| | - Pew-Thian Yap
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China.,Shanghai United Imaging Intelligence Co., Ltd, Shanghai, China.,Department of Artificial Intelligence, Korea University, Seoul, Republic of Korea
| | - Minming Zhang
- Department of Radiology, School of Medicine, Second Affiliated Hospital of Zhejiang University, Zhejiang, China
| |
Collapse
|
39
|
Abstract
Magnetic resonance fingerprinting (MRF) is a relatively new multi-parametric quantitative imaging method that involves a two-step process: (i) reconstructing a series of time frames from highly-undersampled non-Cartesian spiral k-space data and (ii) pattern matching using the time frames to infer tissue properties (e.g., T 1 and T 2 relaxation times). In this paper, we introduce a novel end-to-end deep learning framework to seamlessly map the tissue properties directly from spiral k-space MRF data, thereby avoiding time-consuming processing such as the non-uniform fast Fourier transform (NUFFT) and the dictionary-based fingerprint matching. Our method directly consumes the non-Cartesian k -space data, performs adaptive density compensation, and predicts multiple tissue property maps in one forward pass. Experiments on both 2D and 3D MRF data demonstrate that quantification accuracy comparable to state-of-the-art methods can be accomplished within 0.5 s, which is 1,100 to 7,700 times faster than the original MRF framework. The proposed method is thus promising for facilitating the adoption of MRF in clinical settings.
Collapse
Affiliation(s)
- Yilin Liu
- Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, USA
| | - Yong Chen
- Department of Radiology, Case Western Reserve University, Cleveland, USA
| | - Pew-Thian Yap
- Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, USA
| |
Collapse
|
40
|
Wu Y, Ahmad S, Yap PT. Highly Reproducible Whole Brain Parcellation in Individuals via Voxel Annotation with Fiber Clusters. Med Image Comput Comput Assist Interv 2021; 12907:477-486. [PMID: 36200667 PMCID: PMC9531918 DOI: 10.1007/978-3-030-87234-2_45] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
A central goal in systems neuroscience is to parcellate the brain into discrete units that are neurobiologically coherent. Here, we propose a strategy for consistent whole-brain parcellation of white matter (WM) and gray matter (GM) in individuals. We parcellate the brain into coherent parcels using non-negative matrix factorization based on voxel annotation using fiber clusters. Tractography is performed using an algorithm that mitigates gyral bias, allowing full gyral and sulcal coverage for reliable parcellation of the cortical ribbon. Experimental results indicate that parcellation using our approach is highly reproducible with 100% test-retest parcel identification rate and is highly consistent with significantly lower inter-subject variability than FreeSurfer parcellation. This implies that reproducible parcellation can be obtained for subject-specific investigation of brain structure and function.
Collapse
Affiliation(s)
- Ye Wu
- Department of Radiology and Biomedical Research Imaging Center (BRIC), The University of North Carolina at Chapel Hill, NC, USA
| | - Sahar Ahmad
- Department of Radiology and Biomedical Research Imaging Center (BRIC), The University of North Carolina at Chapel Hill, NC, USA
| | - Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center (BRIC), The University of North Carolina at Chapel Hill, NC, USA
| |
Collapse
|
41
|
Hong Y, Ahmad S, Wu Y, Liu S, Yap PT. Vox2Surf: Implicit Surface Reconstruction from Volumetric Data. Med Image Comput Comput Assist Interv 2021; 12966:644-653. [PMID: 36222819 PMCID: PMC9542254 DOI: 10.1007/978-3-030-87589-3_66] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
Surface reconstruction from volumetric T1-weighted and T2-weighted images is a time-consuming multi-step process that often involves careful parameter fine-tuning, hindering a more wide-spread utilization of surface-based analysis particularly in large-scale studies. In this work, we propose a fast surface reconstruction method that is based on directly learning a continuous-valued signed distance function (SDF) as implicit surface representation. This continuous representation implicitly encodes the boundary of the surface as the zero isosurface. Given the predicted SDF, the target 3D surface is reconstructed by applying the marching cubes algorithm. Our implicit reconstruction method concurrently predicts the surfaces of the brain parenchyma, the white matter and pial surfaces, the subcortical structures, and the ventricles. Evaluation based on data from the Human Connectome Project indicates that surface reconstruction of a total of 22 cortical and subcortical structures can be completed in less than 20 min.
Collapse
Affiliation(s)
- Yoonmi Hong
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC, USA
| | - Sahar Ahmad
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC, USA
| | - Ye Wu
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC, USA
| | - Siyuan Liu
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC, USA
| | - Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC, USA
| |
Collapse
|
42
|
Lang Y, Deng HH, Xiao D, Lian C, Kuang T, Gateno J, Yap PT, Xia JJ. DLLNet: An Attention-Based Deep Learning Method for Dental Landmark Localization on High-Resolution 3D Digital Dental Models. Med Image Comput Comput Assist Interv 2021; 12904:478-487. [PMID: 34927177 PMCID: PMC8675275 DOI: 10.1007/978-3-030-87202-1_46] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Dental landmark localization is a fundamental step to analyzing dental models in the planning of orthodontic or orthognathic surgery. However, current clinical practices require clinicians to manually digitize more than 60 landmarks on 3D dental models. Automatic methods to detect landmarks can release clinicians from the tedious labor of manual annotation and improve localization accuracy. Most existing landmark detection methods fail to capture local geometric contexts, causing large errors and misdetections. We propose an end-to-end learning framework to automatically localize 68 landmarks on high-resolution dental surfaces. Our network hierarchically extracts multi-scale local contextual features along two paths: a landmark localization path and a landmark area-of-interest segmentation path. Higher-level features are learned by combining local-to-global features from the two paths by feature fusion to predict the landmark heatmap and the landmark area segmentation map. An attention mechanism is then applied to the two maps to refine the landmark position. We evaluated our framework on a real-patient dataset consisting of 77 high-resolution dental surfaces. Our approach achieves an average localization error of 0.42 mm, significantly outperforming related start-of-the-art methods.
Collapse
Affiliation(s)
- Yankun Lang
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - Hannah H Deng
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX, USA
| | - Deqiang Xiao
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - Chunfeng Lian
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - Tianshu Kuang
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX, USA
| | - Jaime Gateno
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX, USA
- Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, NY, USA
| | - Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - James J Xia
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX, USA
- Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, NY, USA
| |
Collapse
|
43
|
Wu Y, Hong Y, Ahmad S, Yap PT. Active Cortex Tractography. Med Image Comput Comput Assist Interv 2021; 12907:467-476. [PMID: 35939282 PMCID: PMC9355463 DOI: 10.1007/978-3-030-87234-2_44] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Most existing diffusion tractography algorithms are affected by gyral bias, causing the termination of streamlines at gyral crowns instead of sulcal banks. In this paper, we propose a tractography technique, called active cortex tractography (ACT), to overcome gyral bias by enabling fiber streamlines to curve naturally into the cortex. We show that the cortex can play an active role in cortical tractography by providing anatomical information to overcome orientation ambiguities as the streamlines enter the superficial white matter in gyral blades and approach the cortex. This is achieved by devising a direction scouting mechanism that takes into account the white matter surface normal vectors. The scouting mechanism allows probing of directions further in space to prepare the streamlines to turn at appropriate angles. The surface normal vectors guide the streamlines to turn into the cortex, perpendicular to the white-gray matter interface. Evaluation using synthetic, macaque and human data with different streamline seeding schemes demonstrates that ACT improves cortical tractography.
Collapse
Affiliation(s)
- Ye Wu
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, USA
| | - Yoonmi Hong
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, USA
| | - Sahar Ahmad
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, USA
| | - Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, USA
| |
Collapse
|
44
|
Liu Q, Deng H, Lian C, Chen X, Xiao D, Ma L, Chen X, Kuang T, Gateno J, Yap PT, Xia JJ. SkullEngine: A Multi-Stage CNN Framework for Collaborative CBCT Image Segmentation and Landmark Detection. Mach Learn Med Imaging 2021; 12966:606-614. [PMID: 34964046 DOI: 10.1007/978-3-030-87589-3_62] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Abstract
Accurate bone segmentation and landmark detection are two essential preparation tasks in computer-aided surgical planning for patients with craniomaxillofacial (CMF) deformities. Surgeons typically have to complete the two tasks manually, spending ~12 hours for each set of CBCT or ~5 hours for CT. To tackle these problems, we propose a multi-stage coarse-to-fine CNN-based framework, called SkullEngine, for high-resolution segmentation and large-scale landmark detection through a collaborative, integrated, and scalable JSD model and three segmentation and landmark detection refinement models. We evaluated our framework on a clinical dataset consisting of 170 CBCT/CT images for the task of segmenting 2 bones (midface and mandible) and detecting 175 clinically common landmarks on bones, teeth, and soft tissues. Experimental results show that SkullEngine significantly improves segmentation quality, especially in regions where the bone is thin. In addition, SkullEngine also efficiently and accurately detect all of the 175 landmarks. Both tasks were completed simultaneously within 3 minutes regardless of CBCT or CT with high segmentation quality. Currently, SkullEngine has been integrated into a clinical workflow to further evaluate its clinical efficiency.
Collapse
Affiliation(s)
- Qin Liu
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC, USA
| | - Han Deng
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, TX, USA
| | - Chunfeng Lian
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC, USA
| | - Xiaoyang Chen
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC, USA
| | - Deqiang Xiao
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC, USA
| | - Lei Ma
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC, USA
| | - Xu Chen
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC, USA
| | - Tianshu Kuang
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, TX, USA
| | - Jaime Gateno
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, TX, USA
| | - Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC, USA
| | - James J Xia
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, TX, USA
| |
Collapse
|
45
|
Ahmad S, Wu Y, Yap PT. Surface-Guided Image Fusion for Preserving Cortical Details in Human Brain Templates. Med Image Comput Comput Assist Interv 2021; 12907:390-399. [PMID: 35403173 PMCID: PMC8986340] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Human brain templates are a basis for comparing brain features across individuals. They should ideally capture anatomical details at both coarse and fine scales to facilitate comparison at varying granularity. Brain template construction typically involves spatial normalization and image fusion. While significant effort has been devoted to improving brain templates with sophisticated spatial normalization algorithms, image fusion is typically carried out using intensity-based averaging, causing blurring of anatomical structures. Here, we present an image fusion method that exploits cortical surfaces as guidance to help preserve details in brain templates. Our method encodes the cortical boundary information given by a cortical surface mesh in a signed distance function (SDF) map. We use the SDF map to help determine the localized contributions of the individual images, especially at cortical boundaries, during image fusion. Experimental results demonstrate that our method significantly improves the preservation of fine gyral and sulcal details, resulting in detailed brain templates with good surface-volume agreement.
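The SDF-guided weighting described in this abstract can be sketched as follows. This is a minimal numpy illustration of the idea, not the authors' implementation: the function name, the exponential weighting, and the `tau` parameter are all assumptions made for the sketch. Voxels whose SDF magnitude is small (i.e., close to the cortical boundary encoded by the surface mesh) receive higher weight, so boundary contrast is emphasized rather than averaged away.

```python
import numpy as np

def sdf_weighted_fusion(images, sdfs, tau=2.0):
    """Fuse spatially normalized images using per-image SDF maps.

    images : list of ndarray, aligned intensity volumes
    sdfs   : list of ndarray, signed distance maps (same shape)
    tau    : softness of the boundary emphasis (illustrative parameter)
    """
    images = np.stack(images)                    # (N, ...) subject stack
    sdfs = np.stack(sdfs)
    weights = np.exp(-np.abs(sdfs) / tau)        # high weight near boundary
    weights /= weights.sum(axis=0, keepdims=True)  # normalize across subjects
    return (weights * images).sum(axis=0)          # weighted fusion
```

With uniform weights this reduces to the intensity averaging the abstract criticizes; the SDF term is what localizes each image's contribution near its own cortical boundary.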
Collapse
Affiliation(s)
- Sahar Ahmad
- Department of Radiology and Biomedical Research Imaging Center (BRIC), The University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - Ye Wu
- Department of Radiology and Biomedical Research Imaging Center (BRIC), The University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center (BRIC), The University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| |
Collapse
|
46
|
Liu Q, Lian C, Xiao D, Ma L, Deng H, Chen X, Shen D, Yap PT, Xia JJ. Skull Segmentation from CBCT Images via Voxel-Based Rendering. Mach Learn Med Imaging 2021; 12966:615-623. [PMID: 34927174 PMCID: PMC8675180 DOI: 10.1007/978-3-030-87589-3_63] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Skull segmentation from three-dimensional (3D) cone-beam computed tomography (CBCT) images is critical for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. Convolutional neural network (CNN)-based methods currently dominate volumetric image segmentation, but they are constrained by limited GPU memory and large image sizes (e.g., 512 × 512 × 448). Typical ad-hoc strategies, such as down-sampling or patch cropping, degrade segmentation accuracy because they fail to capture either local fine details or global contextual information. Other methods, such as Global-Local Networks (GLNet), focus on improving the neural network itself, aiming to combine local details and global contextual information in a GPU memory-efficient manner. However, all of these methods operate on regular grids, which are computationally inefficient for volumetric image segmentation. In this work, we propose a novel VoxelRend-based network (VR-U-Net) that combines a memory-efficient variant of 3D U-Net with a voxel-based rendering (VoxelRend) module, which refines local details via voxel-based predictions on non-regular grids. Building on relatively coarse feature maps, the VoxelRend module significantly improves segmentation accuracy with only a fraction of the GPU memory consumption. We evaluate the proposed VR-U-Net on the skull segmentation task using a high-resolution CBCT dataset collected from local hospitals. Experimental results show that VR-U-Net yields high-quality segmentation results in a memory-efficient manner, highlighting the practical value of our method.
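The core VoxelRend idea (adapted from PointRend for 2D images) can be sketched as selecting only the most uncertain voxels of a coarse prediction and re-predicting there, so refinement cost does not scale with the full grid. A minimal numpy sketch under stated assumptions: `fine_fn` stands in for the voxel-head MLP of the paper and is purely a placeholder, and the uncertainty score (distance of the probability from 0.5) is one common choice, not necessarily the authors' exact criterion.

```python
import numpy as np

def voxelrend_refine(coarse_prob, fine_fn, k=100):
    """Re-predict the k most uncertain voxels of a coarse probability map.

    coarse_prob : ndarray of foreground probabilities
    fine_fn     : callable taking voxel coordinates, returning refined
                  probabilities (placeholder for the voxel head)
    """
    uncertainty = -np.abs(coarse_prob - 0.5)          # largest near p = 0.5
    top = np.argsort(uncertainty.ravel())[::-1][:k]   # k most uncertain voxels
    coords = np.unravel_index(top, coarse_prob.shape)
    refined = coarse_prob.copy()
    refined[coords] = fine_fn(coords)                 # refine only there
    return refined
```

Confident voxels are left untouched, which is why memory use stays nearly flat while accuracy improves at thin-bone boundaries, where predictions are least certain.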
Collapse
Affiliation(s)
- Qin Liu
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - Chunfeng Lian
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - Deqiang Xiao
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - Lei Ma
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - Han Deng
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX, USA
| | - Xu Chen
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - James J Xia
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX, USA
| |
Collapse
|
47
|
Xiao D, Deng H, Kuang T, Ma L, Liu Q, Chen X, Lian C, Lang Y, Kim D, Gateno J, Shen SG, Shen D, Yap PT, Xia JJ. A Self-Supervised Deep Framework for Reference Bony Shape Estimation in Orthognathic Surgical Planning. Med Image Comput Comput Assist Interv 2021; 12904:469-477. [PMID: 34927176 PMCID: PMC8674926 DOI: 10.1007/978-3-030-87202-1_45] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Virtual orthognathic surgical planning involves simulating surgical corrections of jaw deformities on 3D facial bony shape models. Due to the lack of necessary guidance, the planning procedure is highly experience-dependent and the planning results are often suboptimal. A reference facial bony shape model representing normal anatomy can provide objective guidance to improve planning accuracy. Therefore, we propose a self-supervised deep framework to automatically estimate reference facial bony shape models. Our framework is an end-to-end trainable network, consisting of a simulator and a corrector. In the training stage, the simulator maps jaw deformities of a patient bone to a normal bone to generate a simulated deformed bone. The corrector then restores the simulated deformed bone back to normal. In the inference stage, the trained corrector is applied to generate a patient-specific normal-looking reference bone from a real deformed bone. The proposed framework was evaluated using a clinical dataset and compared with a state-of-the-art method based on a supervised point-cloud network. Experimental results show that the shape models estimated by our approach are clinically acceptable and significantly more accurate than those of the competing method.
Collapse
Affiliation(s)
- Deqiang Xiao
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC 27599, USA
| | - Hannah Deng
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, TX 77030, USA
| | - Tianshu Kuang
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, TX 77030, USA
| | - Lei Ma
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC 27599, USA
| | - Qin Liu
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC 27599, USA
| | - Xu Chen
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC 27599, USA
| | - Chunfeng Lian
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC 27599, USA
| | - Yankun Lang
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC 27599, USA
| | - Daeseung Kim
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, TX 77030, USA
| | - Jaime Gateno
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, TX 77030, USA
- Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, NY 10065, USA
| | - Steve Guofang Shen
- Oral and Craniomaxillofacial Surgery at Shanghai Ninth Hospital, Shanghai Jiaotong University College of Medicine, Shanghai 200011, China
| | - Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC 27599, USA
| | - Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC 27599, USA
| | - James J Xia
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, TX 77030, USA
- Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, NY 10065, USA
| |
Collapse
|
48
|
Ma L, Kim D, Lian C, Xiao D, Kuang T, Liu Q, Lang Y, Deng HH, Gateno J, Wu Y, Yang E, Liebschner MAK, Xia JJ, Yap PT. Deep Simulation of Facial Appearance Changes Following Craniomaxillofacial Bony Movements in Orthognathic Surgical Planning. Med Image Comput Comput Assist Interv 2021; 12904:459-468. [PMID: 34966912 PMCID: PMC8713535 DOI: 10.1007/978-3-030-87202-1_44] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
Facial appearance changes with the movements of bony segments in orthognathic surgery of patients with craniomaxillofacial (CMF) deformities. Conventional biomechanical methods for simulating such changes, such as finite element modeling (FEM), are labor-intensive and computationally expensive, preventing their use in clinical settings. To overcome these limitations, we propose a deep learning framework to predict post-operative facial changes. Specifically, FC-Net, a facial appearance change simulation network, is developed to predict the point displacement vectors associated with a facial point cloud. FC-Net learns the point displacements of a pre-operative facial point cloud from the bony movement vectors between pre-operative and simulated post-operative bony models. FC-Net is a weakly-supervised point displacement network trained using paired data with strict point-to-point correspondence. To preserve the topology of the facial model during point transformation, we employ a local-point-transform loss to constrain the local movements of points. Experimental results on real patient data show that the proposed framework predicts post-operative facial appearance changes remarkably faster than a state-of-the-art FEM method, with comparable prediction accuracy.
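The local-point-transform loss mentioned above can be illustrated as penalizing how much each point's displacement differs from those of its nearest neighbors, which discourages topology-breaking local distortions. This is a numpy sketch of the general idea only; the function name, the neighborhood size `k`, and the brute-force neighbor search are assumptions, and the paper implements its loss inside the training graph rather than in numpy.

```python
import numpy as np

def local_point_transform_loss(points, disp, k=3):
    """Mean squared difference between each point's displacement and
    those of its k nearest neighbors (locally smooth motion).

    points : (N, 3) point-cloud coordinates
    disp   : (N, 3) predicted displacement vectors
    """
    # pairwise squared distances, brute force for illustration
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d2, axis=1)[:, 1:k + 1]   # skip self at index 0
    diff = disp[:, None, :] - disp[nn]         # (N, k, 3) displacement gaps
    return np.mean((diff ** 2).sum(-1))
```

A rigid translation of the whole face (identical displacement everywhere) incurs zero loss, while a point moving against its neighbors is penalized.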
Collapse
Affiliation(s)
- Lei Ma
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Daeseung Kim
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX, USA
| | - Chunfeng Lian
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Deqiang Xiao
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Tianshu Kuang
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX, USA
| | - Qin Liu
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Yankun Lang
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Hannah H Deng
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX, USA
| | - Jaime Gateno
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX, USA
- Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, Ithaca, NY, USA
| | - Ye Wu
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Erkun Yang
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - James J Xia
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX, USA
- Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, Ithaca, NY, USA
| | - Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| |
Collapse
|
49
|
Minh Huynh K, Chang WT, Hun Chung S, Chen Y, Lee Y, Yap PT. Noise Mapping and Removal in Complex-Valued Multi-Channel MRI via Optimal Shrinkage of Singular Values. Med Image Comput Comput Assist Interv 2021; 2021:191-200. [PMID: 35994030 PMCID: PMC9390971 DOI: 10.1007/978-3-030-87231-1_19] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
In magnetic resonance imaging (MRI), noise is a limiting factor for higher spatial resolution and a major cause of prolonged scan time, owing to the need for repeated scans. Improving the signal-to-noise ratio is therefore key to faster and higher-resolution MRI. Here we propose a method for mapping and reducing noise in MRI by leveraging the inherent redundancy in complex-valued multi-channel MRI data. Our method leverages a provably optimal strategy for shrinking the singular values of a data matrix, allowing it to outperform state-of-the-art methods such as Marchenko-Pastur PCA in noise reduction. Our method reduces the noise floor in brain diffusion MRI by 5-fold and remarkably improves the contrast of spiral lung 19F MRI. Our framework is fast and does not require training and hyper-parameter tuning, therefore providing a convenient means for improving SNR in MRI.
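The singular-value shrinkage at the heart of this abstract can be sketched on a Casorati matrix (voxels × channels). As a minimal stand-in, the sketch below uses the Gavish-Donoho hard-threshold rule (2.858 × median singular value, exact only for square matrices); the paper applies the full optimal shrinkage function rather than hard thresholding, so treat the threshold constant and function name as illustrative assumptions.

```python
import numpy as np

def denoise_patch(X):
    """Denoise a (possibly complex) Casorati matrix by shrinking
    its singular values.

    X : (voxels, channels) data matrix from a local patch
    """
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    tau = 2.858 * np.median(s)              # noise-adaptive threshold
    s_shrunk = np.where(s > tau, s, 0.0)    # keep only signal components
    return (U * s_shrunk) @ Vh              # low-rank reconstruction
```

Because multi-channel MRI patches are highly redundant, the signal concentrates in a few large singular values while noise spreads across the rest, so suppressing the small ones reduces the noise floor.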
Collapse
Affiliation(s)
- Khoi Minh Huynh
- Department of Biomedical Engineering, University of North Carolina, Chapel Hill, U.S.A
| | - Wei-Tang Chang
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, U.S.A
| | - Sang Hun Chung
- Department of Biomedical Engineering, University of North Carolina, Chapel Hill, U.S.A
| | - Yong Chen
- Department of Radiology, Case Western Reserve University, Cleveland, U.S.A
| | - Yueh Lee
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, U.S.A
- Department of Biomedical Engineering, University of North Carolina, Chapel Hill, U.S.A
| | - Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, U.S.A
- Department of Biomedical Engineering, University of North Carolina, Chapel Hill, U.S.A
| |
Collapse
|
50
|
Xiao D, Lian C, Deng H, Kuang T, Liu Q, Ma L, Kim D, Lang Y, Chen X, Gateno J, Shen SG, Xia JJ, Yap PT. Estimating Reference Bony Shape Models for Orthognathic Surgical Planning Using 3D Point-Cloud Deep Learning. IEEE J Biomed Health Inform 2021; 25:2958-2966. [PMID: 33497345 DOI: 10.1109/jbhi.2021.3054494] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Orthognathic surgical outcomes rely heavily on the quality of surgical planning. Automatic estimation of a reference facial bone shape significantly reduces experience-dependent variability and improves planning accuracy and efficiency. We propose an end-to-end deep learning framework to estimate patient-specific reference bony shape models for patients with orthognathic deformities. Specifically, we apply a point-cloud network to learn a vertex-wise deformation field from a patient's deformed bony shape, represented as a point cloud. The estimated deformation field is then used to correct the deformed bony shape to output a patient-specific reference bony surface model. To train our network effectively, we introduce a simulation strategy to synthesize deformed bones from any given normal bone, producing a relatively large and diverse dataset of shapes for training. Our method was evaluated using both synthetic and real patient data. Experimental results show that our framework estimates realistic reference bony shape models for patients with varying deformities. The performance of our method is consistently better than an existing method and several deep point-cloud networks. Our end-to-end estimation framework based on geometric deep learning shows great potential for improving clinical workflows.
Collapse
|