51. Niu K, Guo Z, Peng X, Pei S. P-ResUnet: Segmentation of brain tissue with Purified Residual Unet. Comput Biol Med 2022; 151:106294. [PMID: 36435055] [DOI: 10.1016/j.compbiomed.2022.106294]
Abstract
Brain tissue in Magnetic Resonance Imaging (MRI) can be precisely segmented and quantified, which aids the diagnosis of neurological diseases such as epilepsy, Alzheimer's disease, and multiple sclerosis. Recently, UNet-like architectures have been widely used for medical image segmentation, achieving promising performance by using skip connections to fuse low-level and high-level information. However, in the process of integrating low-level and high-level information, non-object information (noise) is also added, which reduces the accuracy of medical image segmentation. The same problem exists in the residual unit: since the output and input of the residual unit are fused, the non-object information (noise) in the input of the residual unit enters the fusion as well. To address this challenging problem, in this paper we propose a Purified Residual U-net for the segmentation of brain tissue. The model encodes the image to obtain deep semantic information, purifies the information of the low-level features and the residual unit, and produces the result through a decoder. We use the Dilated Pyramid Separate Block (DPSB) as the first block to purify the features in every encoder layer except the first, which expands the receptive field of the convolution kernel while adding only a few parameters. In the first layer, the best performance was achieved with a DPB, since the initial image contains the most non-object information (noise) and accuracy benefits from exchanging information to the maximum degree there. We conducted experiments on the widely used IBSR-18 dataset, composed of T1-weighted MRI volumes from 18 subjects. The results show that, compared with some cutting-edge methods, our method enhances segmentation performance, with a mean Dice score of 91.093% and a mean Hausdorff distance of 3.2606.
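The abstract's claim that dilated convolutions enlarge the receptive field with only a few added parameters is easy to quantify; a small generic sketch (not the paper's DPSB code), assuming stride-1 convolution layers:

```python
def receptive_field(layers):
    """Receptive field of a stack of stride-1 convolutions.

    layers: list of (kernel_size, dilation) tuples.
    Each layer adds (kernel_size - 1) * dilation to the field.
    """
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d
    return rf

# Three 3x3 layers with dilations 1, 2, 4 have the same parameter
# count as three plain 3x3 layers, but more than double the field.
plain = receptive_field([(3, 1), (3, 1), (3, 1)])    # 7
dilated = receptive_field([(3, 1), (3, 2), (3, 4)])  # 15
```

The pyramid of dilation rates is what buys the larger context window at constant parameter cost.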
Affiliation(s)
- Ke Niu
- Beijing Information Science and Technology University, Beijing, China.
- Zhongmin Guo
- Beijing Information Science and Technology University, Beijing, China.
- Xueping Peng
- Australian Artificial Intelligence Institute, Faculty of Engineering and Information Technology, University of Technology Sydney, Australia.
- Su Pei
- Beijing Information Science and Technology University, Beijing, China.
52. Yao X, Wang X, Wang SH, Zhang YD. A comprehensive survey on convolutional neural network in medical image analysis. Multimedia Tools and Applications 2022; 81:41361-41405. [DOI: 10.1007/s11042-020-09634-7]
53. Nguyen HC, Xuan CD, Nguyen LT, Nguyen HD. A new framework for APT attack detection based on network traffic. Journal of Intelligent & Fuzzy Systems 2022. [DOI: 10.3233/jifs-221055]
Abstract
Advanced Persistent Threat (APT) attack detection and monitoring has attracted considerable attention recently, as this type of cyber-attack grows in both number and severity. In this paper, a new APT attack detection model is proposed that combines three different neural network layers: a Multi-layer Perceptron (MLP), an Inference (I) layer, and Graph Convolutional Networks (GCN); the model is named MIG for short. In this model, the MLP layer aggregates and extracts properties of IPs based on network flows in the traffic, while the Inference layer builds IP information profiles by grouping and concatenating the flows generated by the same IP. Finally, the GCN layer analyzes and reconstructs IP features based on the behaviors extracted from the IP information records. This network traffic-based APT detection method using the MIG model is new and has not previously been proposed or applied elsewhere. Its novelty lies in combining several data mining techniques to compute, extract, and represent the relationships and correlations among APT attack behaviors in network traffic. In the MIG model, many meaningful anomalous properties and behaviors of APT attacks are synthesized and extracted, which helps improve detection performance. The experimental results show that the proposed method is meaningful in both theory and practice, since the MIG model not only improves the ability to correctly detect APT attacks in network traffic but also minimizes false alarms.
Affiliation(s)
- Hoa Cuong Nguyen
- Faculty of Information Science and Engineering, Yunnan University, China
- Faculty of Information Security, Posts and Telecommunications Institute of Technology, Hanoi, Vietnam
- Cho Do Xuan
- Faculty of Information Security, Posts and Telecommunications Institute of Technology, Hanoi, Vietnam
- Long Thanh Nguyen
- Faculty of Information Technology, Posts and Telecommunications Institute of Technology, Hanoi, Vietnam
- Hoa Dinh Nguyen
- Faculty of Information Technology, Posts and Telecommunications Institute of Technology, Hanoi, Vietnam
54. Rao VM, Wan Z, Arabshahi S, Ma DJ, Lee PY, Tian Y, Zhang X, Laine AF, Guo J. Improving across-dataset brain tissue segmentation for MRI imaging using transformer. Frontiers in Neuroimaging 2022; 1:1023481. [PMID: 37555170] [PMCID: PMC10406272] [DOI: 10.3389/fnimg.2022.1023481]
Abstract
Brain tissue segmentation has demonstrated great utility in quantifying MRI data by serving as a precursor to further post-processing analysis. However, manual segmentation is highly labor-intensive, and automated approaches, including convolutional neural networks (CNNs), have struggled to generalize well due to properties inherent to MRI acquisition, leaving a great need for an effective segmentation tool. This study introduces a novel CNN-Transformer hybrid architecture designed to improve brain tissue segmentation by taking advantage of the increased performance and generality that Transformers confer on 3D medical image segmentation tasks. We first demonstrate the superior performance of our model on various T1w MRI datasets. We then rigorously validate our model's generality across four multi-site T1w MRI datasets covering different vendors, field strengths, scan parameters, and neuropsychiatric conditions. Finally, we highlight the reliability of our model on test-retest scans taken at different time points. In all situations, our model achieved the greatest generality and reliability compared to the benchmarks. As such, our method is inherently robust and can serve as a valuable tool for brain-related T1w MRI studies. The code for the TABS network is available at: https://github.com/raovish6/TABS.
Affiliation(s)
- Vishwanatha M. Rao
- Department of Biomedical Engineering, Columbia University, New York, NY, United States
- Zihan Wan
- Department of Applied Mathematics, Columbia University, New York, NY, United States
- Soroush Arabshahi
- Department of Biomedical Engineering, Columbia University, New York, NY, United States
- David J. Ma
- Department of Biomedical Engineering, Columbia University, New York, NY, United States
- Pin-Yu Lee
- Department of Biomedical Engineering, Columbia University, New York, NY, United States
- Ye Tian
- Department of Biomedical Engineering, Columbia University, New York, NY, United States
- Xuzhe Zhang
- Department of Biomedical Engineering, Columbia University, New York, NY, United States
- Andrew F. Laine
- Department of Biomedical Engineering, Columbia University, New York, NY, United States
- Jia Guo
- Department of Psychiatry, Columbia University, New York, NY, United States
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States
55. Chai L, Wang Z, Chen J, Zhang G, Alsaadi FE, Alsaadi FE, Liu Q. Synthetic augmentation for semantic segmentation of class imbalanced biomedical images: A data pair generative adversarial network approach. Comput Biol Med 2022; 150:105985. [PMID: 36137319] [DOI: 10.1016/j.compbiomed.2022.105985]
Abstract
In recent years, deep learning (DL) has proven very useful in the semantic segmentation of biomedical images. Such applications, however, are significantly hindered by the lack of pixel-wise annotations. In this work, we propose a data pair generative adversarial network (DPGAN) for synthesizing, concurrently, diverse biomedical images and their segmentation labels from random latent vectors. First, a hierarchical structure is constructed consisting of three variational auto-encoder generative adversarial networks (VAEGANs) with an extra discriminator. Subsequently, to alleviate the influence of the imbalance between lesion and non-lesion areas in biomedical segmentation datasets, we divide the DPGAN into three stages, namely a background stage, a mask stage, and an advanced stage, with each stage deploying a VAEGAN. In this way, a large number of new segmentation data pairs are generated from random latent vectors and then used to augment the original datasets. Finally, to validate the effectiveness of the proposed DPGAN, experiments are carried out on a vestibular schwannoma dataset, a kidney tumor dataset, and a skin cancer dataset. The results indicate that, in comparison to other state-of-the-art GAN-based methods, the proposed DPGAN achieves better generative quality and, at the same time, an effective boost in semantic segmentation of class-imbalanced biomedical images.
Affiliation(s)
- Lu Chai
- Department of Computer Science and Technology, Tongji University, Shanghai 201804, China
- Zidong Wang
- Department of Computer Science, Brunel University London, Uxbridge, Middlesex, UB8 3PH, United Kingdom
- Jianqing Chen
- Department of Otolaryngology, Head & Neck Surgery, Shanghai Ninth People's Hospital, Shanghai 200041, China
- Guokai Zhang
- Department of Computer Science and Technology, University of Shanghai for Science and Technology, Shanghai 200093, China
- Fawaz E Alsaadi
- Department of Information Technology, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Fuad E Alsaadi
- Department of Electrical and Computer Engineering, Faculty of Engineering, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Qinyuan Liu
- Department of Computer Science and Technology, Tongji University, Shanghai 201804, China
56. Udupa JK, Liu T, Jin C, Zhao L, Odhner D, Tong Y, Agrawal V, Pednekar G, Nag S, Kotia T, Goodman M, Wileyto EP, Mihailidis D, Lukens JN, Berman AT, Stambaugh J, Lim T, Chowdary R, Jalluri D, Jabbour SK, Kim S, Reyhan M, Robinson CG, Thorstad WL, Choi JI, Press R, Simone CB, Camaratta J, Owens S, Torigian DA. Combining natural and artificial intelligence for robust automatic anatomy segmentation: Application in neck and thorax auto-contouring. Med Phys 2022; 49:7118-7149. [PMID: 35833287] [PMCID: PMC10087050] [DOI: 10.1002/mp.15854]
Abstract
BACKGROUND Automatic segmentation of 3D objects in computed tomography (CT) is challenging. Current methods, based mainly on artificial intelligence (AI) and end-to-end deep learning (DL) networks, are weak at garnering high-level anatomic information, which leads to compromised efficiency and robustness. This can be overcome by incorporating natural intelligence (NI) into AI methods via computational models of human anatomic knowledge. PURPOSE We formulate a hybrid intelligence (HI) approach that integrates the complementary strengths of NI and AI for organ segmentation in CT images and illustrate its performance for radiation therapy (RT) planning via multisite clinical evaluation. METHODS The system employs five modules: (i) body region recognition, which automatically trims a given image to a precisely defined target body region; (ii) NI-based automatic anatomy recognition object recognition (AAR-R), which performs object recognition in the trimmed image without DL and outputs a localized fuzzy model for each object; (iii) DL-based recognition (DL-R), which refines the coarse recognition results of AAR-R and outputs a stack of 2D bounding boxes (BBs) for each object; (iv) model morphing (MM), which deforms the AAR-R fuzzy model of each object guided by the BBs output by DL-R; and (v) DL-based delineation (DL-D), which employs the object containment information provided by MM to delineate each object. NI from (ii), AI from (i), (iii), and (v), and their combination in (iv) facilitate the HI system. RESULTS The HI system was tested on 26 organs in the neck and thorax body regions on CT images obtained prospectively from 464 patients in a study involving four RT centers. Datasets from one separate, independent institution involving 125 patients were employed in training/model building for each of the two body regions, whereas 104 and 110 datasets from the four RT centers were utilized for testing on neck and thorax, respectively. In the testing datasets, 83% of the images had limitations such as streak artifacts, poor contrast, shape distortion, pathology, or implants. The contours output by the HI system were compared to contours drawn in clinical practice at the four RT centers, using an independently established ground-truth set of contours as reference. Three sets of measures were employed: accuracy via the Dice coefficient (DC) and Hausdorff boundary distance (HD), subjective clinical acceptability via a blinded reader study, and efficiency by measuring the human contouring time saved by the HI system. Overall, the HI system achieved a mean DC of 0.78 and 0.87 and a mean HD of 2.22 and 4.53 mm for neck and thorax, respectively. It significantly outperformed clinical contouring in accuracy and saved overall 70% of human contouring time, whereas acceptability scores varied significantly from site to site for both auto-contours and clinically drawn contours. CONCLUSIONS The HI system behaves like an expert human in the robustness of its contouring but is vastly more efficient. It appears to use NI where image information alone does not suffice, first for the correct localization of the object and then for the precise delineation of its boundary.
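The Dice coefficient (DC) used above as the accuracy measure quantifies overlap between two segmentations; a minimal sketch over binary masks stored as sets of voxel coordinates (an illustration, not the study's evaluation pipeline):

```python
def dice_coefficient(pred, truth):
    """Dice coefficient between two binary masks given as sets of
    voxel coordinates: 2|A ∩ B| / (|A| + |B|), ranging 0 to 1."""
    if not pred and not truth:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(pred & truth) / (len(pred) + len(truth))

pred = {(0, 0), (0, 1), (1, 1)}
truth = {(0, 1), (1, 1), (1, 0)}
score = dice_coefficient(pred, truth)  # 2*2 / (3+3) = 0.666...
```

A mean DC of 0.87, as reported for the thorax, corresponds to a high degree of voxel-wise overlap with the ground-truth contours.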
Affiliation(s)
- Jayaram K. Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Tiange Liu
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- School of Information Science and Engineering, Yanshan University, Qinhuangdao, China
- Chao Jin
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Liming Zhao
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Dewey Odhner
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Vibhu Agrawal
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Gargi Pednekar
- Quantitative Radiology Solutions, Philadelphia, Pennsylvania, USA
- Sanghita Nag
- Quantitative Radiology Solutions, Philadelphia, Pennsylvania, USA
- Tarun Kotia
- Quantitative Radiology Solutions, Philadelphia, Pennsylvania, USA
- E. Paul Wileyto
- Department of Biostatistics and Epidemiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Dimitris Mihailidis
- Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- John Nicholas Lukens
- Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Abigail T. Berman
- Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Joann Stambaugh
- Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Tristan Lim
- Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Rupa Chowdary
- Department of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Dheeraj Jalluri
- Department of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Salma K. Jabbour
- Department of Radiation Oncology, Rutgers University, New Brunswick, New Jersey, USA
- Sung Kim
- Department of Radiation Oncology, Rutgers University, New Brunswick, New Jersey, USA
- Meral Reyhan
- Department of Radiation Oncology, Rutgers University, New Brunswick, New Jersey, USA
- Wade L. Thorstad
- Department of Radiation Oncology, Washington University, St. Louis, Missouri, USA
- Joe Camaratta
- Quantitative Radiology Solutions, Philadelphia, Pennsylvania, USA
- Steve Owens
- Quantitative Radiology Solutions, Philadelphia, Pennsylvania, USA
- Drew A. Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
57. Ha S, Lyu I. SPHARM-Net: Spherical Harmonics-Based Convolution for Cortical Parcellation. IEEE Transactions on Medical Imaging 2022; 41:2739-2751. [PMID: 35436188] [DOI: 10.1109/tmi.2022.3168670]
Abstract
We present a spherical harmonics-based convolutional neural network (CNN) for cortical parcellation, which we call SPHARM-Net. Recent advances in CNNs offer cortical parcellation on a fine-grained triangle mesh of the cortex. Yet most CNNs designed for cortical parcellation employ spatial convolution, which depends on extensive data augmentation and allows only predefined neighborhoods on a specific spherical tessellation. A rotation-equivariant convolutional filter, by contrast, avoids data augmentation, and rotational equivariance can be achieved in spectral convolution independent of a neighborhood definition. Nevertheless, the limited resources of a modern machine permit only a finite set of spectral components, which may lose geometric details. In this paper, we propose (1) a constrained spherical convolutional filter that supports an infinite set of spectral components and (2) an end-to-end framework without data augmentation. The proposed filter encodes all the spectral components without the full expansion of spherical harmonics. We show that rotational equivariance drastically reduces the training time while achieving accurate cortical parcellation. Furthermore, the proposed convolution is composed entirely of matrix transformations, which offers efficient and fast spectral processing. In the experiments, we validate SPHARM-Net on two public datasets with manual labels: Mindboggle-101 (N=101) and NAMIC (N=39). The experimental results show that the proposed method outperforms state-of-the-art methods on both datasets, even with fewer learnable parameters and without rigid alignment or data augmentation. Our code is publicly available at https://github.com/Shape-Lab/SPHARM-Net.
58. Chen Q, Xie W, Zhou P, Zheng C, Wu D. Multi-Crop Convolutional Neural Networks for Fast Lung Nodule Segmentation. IEEE Transactions on Emerging Topics in Computational Intelligence 2022. [DOI: 10.1109/tetci.2021.3051910]
Affiliation(s)
- Quan Chen
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Wei Xie
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, China
- Pan Zhou
- Hubei Engineering Research Center on Big Data Security, School of Cyber Science and Engineering, Huazhong University of Science and Technology, China
- Chuansheng Zheng
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Dapeng Wu
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, USA
59. Richter L, Fetit AE. Accurate segmentation of neonatal brain MRI with deep learning. Front Neuroinform 2022; 16:1006532. [PMID: 36246394] [PMCID: PMC9554654] [DOI: 10.3389/fninf.2022.1006532]
Abstract
An important step toward delivering an accurate connectome of the human brain is robust segmentation of 3D Magnetic Resonance Imaging (MRI) scans, which is particularly challenging when carried out on perinatal data. In this paper, we present an automated, deep learning-based pipeline for accurate segmentation of tissues from neonatal brain MRI and extend it by introducing an age prediction pathway. A major constraint to using deep learning techniques on developing brain data is the need to collect large numbers of ground truth labels. We therefore also investigate two practical approaches that can help alleviate the problem of label scarcity without loss of segmentation performance. First, we examine the efficiency of different strategies of distributing a limited budget of annotated 2D slices over 3D training images. In the second approach, we compare the segmentation performance of pre-trained models with different strategies of fine-tuning on a small subset of preterm infants. Our results indicate that distributing labels over a larger number of brain scans can improve segmentation performance. We also show that even partial fine-tuning can be superior in performance to a model trained from scratch, highlighting the relevance of transfer learning strategies under conditions of label scarcity. We illustrate our findings on large, publicly available T1- and T2-weighted MRI scans (n = 709, range of ages at scan: 26–45 weeks) obtained retrospectively from the Developing Human Connectome Project (dHCP) cohort.
Affiliation(s)
- Leonie Richter
- Department of Computing, Imperial College London, London, United Kingdom
- Correspondence: Leonie Richter
- Ahmed E. Fetit
- Department of Computing, Imperial College London, London, United Kingdom
- UKRI CDT in Artificial Intelligence for Healthcare, Imperial College London, London, United Kingdom
60. Joo L, Shim WH, Suh CH, Lim SJ, Heo H, Kim WS, Hong E, Lee D, Sung J, Lim JS, Lee JH, Kim SJ. Diagnostic performance of deep learning-based automatic white matter hyperintensity segmentation for classification of the Fazekas scale and differentiation of subcortical vascular dementia. PLoS One 2022; 17:e0274562. [PMID: 36107961] [PMCID: PMC9477348] [DOI: 10.1371/journal.pone.0274562]
Abstract
Purpose To validate the diagnostic performance of a commercially available, deep learning-based automatic white matter hyperintensity (WMH) segmentation algorithm for classifying the grades of the Fazekas scale and differentiating subcortical vascular dementia. Methods This retrospective, observational, single-institution study investigated the diagnostic performance of deep learning-based automatic WMH volume segmentation for classifying the grades of the Fazekas scale and differentiating subcortical vascular dementia. VUNO Med-DeepBrain was used as the WMH segmentation system. The segmentation system was designed with convolutional neural networks, in which the input was a pre-processed axial FLAIR image and the output was a segmented WMH mask and its volume. Patients who presented with memory complaints between March 2017 and June 2018 were included and were split into a training set (March 2017-March 2018, n = 596) and an internal validation test set (April 2018-June 2018, n = 204). Results Optimal cut-off values to categorize WMH volume as normal vs. mild/moderate/severe, normal/mild vs. moderate/severe, and normal/mild/moderate vs. severe were 3.4 mL, 9.6 mL, and 17.1 mL, respectively, and the AUCs were 0.921, 0.956, and 0.960, respectively. When differentiating normal/mild vs. moderate/severe using WMH volume in the test set, sensitivity, specificity, and accuracy were 96.4%, 89.9%, and 91.7%, respectively. For distinguishing subcortical vascular dementia from other conditions using WMH volume, sensitivity, specificity, and accuracy were 83.3%, 84.3%, and 84.3%, respectively. Conclusion Deep learning-based automatic WMH segmentation may be an accurate and promising method for classifying the grades of the Fazekas scale and differentiating subcortical vascular dementia.
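The three volume cut-offs reported above imply a simple banding rule; a hypothetical sketch (the function name, band labels, and the handling of values exactly at a cut-off are illustrative assumptions, not the study's grading procedure, which used radiologist-read Fazekas grades):

```python
def wmh_severity_band(wmh_volume_ml):
    """Map a segmented WMH volume in mL to a severity band using the
    cut-off values reported in the abstract: 3.4, 9.6, and 17.1 mL.
    Boundary handling (strict '<') is an assumption for illustration."""
    if wmh_volume_ml < 3.4:
        return "normal"
    if wmh_volume_ml < 9.6:
        return "mild"
    if wmh_volume_ml < 17.1:
        return "moderate"
    return "severe"
```

The point of such a rule is that a continuous, automatically measured volume can stand in for an ordinal visual rating scale.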
Affiliation(s)
- Leehi Joo
- Department of Radiology, Korea University Guro Hospital, Seoul, Republic of Korea
- Woo Hyun Shim
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Chong Hyun Suh
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Su Jin Lim
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Hwon Heo
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Ulsan, Republic of Korea
- Woo Seok Kim
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Jae-Sung Lim
- Department of Neurology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Jae-Hong Lee
- Department of Neurology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Sang Joon Kim
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
61. Noothout JMH, Lessmann N, van Eede MC, van Harten LD, Sogancioglu E, Heslinga FG, Veta M, van Ginneken B, Išgum I. Knowledge distillation with ensembles of convolutional neural networks for medical image segmentation. J Med Imaging (Bellingham) 2022; 9:052407. [PMID: 35692896] [PMCID: PMC9142841] [DOI: 10.1117/1.jmi.9.5.052407]
Abstract
Purpose: Ensembles of convolutional neural networks (CNNs) often outperform a single CNN in medical image segmentation tasks, but inference is computationally more expensive, which makes ensembles unattractive for some applications. We compared the performance of differently constructed ensembles with that of CNNs derived from these ensembles using knowledge distillation, a technique for reducing the footprint of large models such as ensembles. Approach: We investigated two types of ensembles: diverse ensembles of networks with three different architectures and two different loss functions, and uniform ensembles of networks with the same architecture but initialized with different random seeds. For each ensemble, a single student network was additionally trained to mimic the class probabilities predicted by the teacher model, the ensemble. We evaluated the performance of each network, the ensembles, and the corresponding distilled networks across three publicly available datasets: chest computed tomography scans with four annotated organs of interest, brain magnetic resonance imaging (MRI) with six annotated brain structures, and cardiac cine-MRI with three annotated heart structures. Results: Both uniform and diverse ensembles obtained better results than any of the individual networks in the ensemble. Furthermore, applying knowledge distillation resulted in a single network that was smaller and faster without compromising performance compared with the ensemble it learned from. The distilled networks significantly outperformed the same network trained with reference segmentations instead of knowledge distillation. Conclusion: Knowledge distillation can compress segmentation ensembles of uniform or diverse composition into a single CNN while maintaining the performance of the ensemble.
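The student network above is trained to mimic the teacher ensemble's class probabilities; the core of that objective can be sketched as a cross-entropy against temperature-softened teacher probabilities (a generic distillation sketch with an assumed temperature hyperparameter, not the paper's training code):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy between the teacher's and student's softened
    class probabilities for one voxel/pixel. Minimized when the
    student reproduces the teacher's distribution exactly."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return -sum(pt * math.log(ps) for pt, ps in zip(p_teacher, p_student))
```

A higher temperature exposes the teacher's "dark knowledge", the relative probabilities of the wrong classes, which hard reference labels discard.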
Affiliation(s)
- Julia M. H. Noothout
- Amsterdam University Medical Center, University of Amsterdam, Department of Biomedical Engineering and Physics, Amsterdam, The Netherlands
- Nikolas Lessmann
- Radboud University Medical Center, Department of Medical Imaging, Nijmegen, The Netherlands
- Matthijs C. van Eede
- Amsterdam University Medical Center, University of Amsterdam, Department of Biomedical Engineering and Physics, Amsterdam, The Netherlands
- Louis D. van Harten
- Amsterdam University Medical Center, University of Amsterdam, Department of Biomedical Engineering and Physics, Amsterdam, The Netherlands
- Ecem Sogancioglu
- Radboud University Medical Center, Department of Medical Imaging, Nijmegen, The Netherlands
- Friso G. Heslinga
- Eindhoven University of Technology, Department of Biomedical Engineering, Eindhoven, The Netherlands
- Mitko Veta
- Eindhoven University of Technology, Department of Biomedical Engineering, Eindhoven, The Netherlands
- Bram van Ginneken
- Radboud University Medical Center, Department of Medical Imaging, Nijmegen, The Netherlands
- Ivana Išgum
- Amsterdam University Medical Center, University of Amsterdam, Department of Biomedical Engineering and Physics, Amsterdam, The Netherlands
- Amsterdam University Medical Center, University of Amsterdam, Department of Radiology and Nuclear Medicine, Amsterdam, The Netherlands
- Amsterdam University Medical Center, University of Amsterdam, Amsterdam Cardiovascular Sciences, Heart Failure & Arrhythmias, Amsterdam, The Netherlands
- University of Amsterdam, Informatics Institute, Amsterdam, The Netherlands
62
Yang J, Su Y, He Y, Zhou P, Xu L, Su Z. Machine learning-enabled resolution-lossless tomography for composite structures with a restricted sensing capability. Ultrasonics 2022; 125:106801. [PMID: 35830747] [DOI: 10.1016/j.ultras.2022.106801]
Abstract
Construction of a precise ultrasound tomographic image is guaranteed only when the sensor network for signal acquisition is sufficiently dense. On the other hand, machine learning (ML), as represented by artificial neural networks and convolutional neural networks (CNNs), has emerged as a prevalent data-driven technique for modeling high degrees of complexity and abstraction. A new tomographic imaging approach, facilitated by ML and based on the algebraic reconstruction technique (ART), is developed to implement in-situ ultrasound tomography and monitor the structural health of composites under a restricted sensing capability caused by an insufficient number of sensors in the network. The blurry ART images, used as inputs to train a CNN with an encoder-decoder architecture, are segmented using convolution and max-pooling to extract defect-modulated image features, and max-unpooling with transposed convolution boosts the resolution of the ART images. For validation, a carbon fibre-reinforced polymer laminate is prepared with an implanted piezoresistive sensor network whose sensing capability is purposely restricted. Results demonstrate that the developed approach accurately images artificial anomalies and delamination in the laminate despite the inadequate training data from the restricted sensor network, while minimizing false alarms by eliminating image artifacts.
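The ART baseline that this approach starts from is, at its core, the Kaczmarz iteration: cycle through the ray-sum equations and project the current image estimate onto the hyperplane defined by each. A minimal pure-Python sketch on a toy 2-pixel system, not the paper's imaging setup:

```python
def art_reconstruct(A, b, n_iters=50, relax=1.0):
    """Algebraic reconstruction technique (Kaczmarz): sequentially
    project the image estimate x onto each ray-sum constraint
    a_i . x = b_i, scaled by a relaxation factor."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(n_iters):
        for a_i, b_i in zip(A, b):
            norm2 = sum(a * a for a in a_i)
            if norm2 == 0:
                continue
            resid = (b_i - sum(a * v for a, v in zip(a_i, x))) / norm2
            x = [v + relax * resid * a for v, a in zip(x, a_i)]
    return x

# Toy 2-pixel "image" observed by two rays (ray-path weights in A,
# measured ray sums in b); the consistent solution is [2, 3].
A = [[1.0, 0.0], [1.0, 1.0]]
b = [2.0, 5.0]
x = art_reconstruct(A, b)
```

With few rays (a "restricted sensing capability"), the system is underdetermined and the reconstruction is blurry, which is exactly the degradation the paper's CNN is trained to undo.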
Affiliation(s)
- Jianwei Yang
- Department of Mechanical Engineering, The Hong Kong Polytechnic University, Kowloon, Hong Kong Special Administrative Region
- Yiyin Su
- Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou 311121, People's Republic of China
- Yi He
- Department of Mechanical Engineering, The Hong Kong Polytechnic University, Kowloon, Hong Kong Special Administrative Region
- Pengyu Zhou
- Department of Mechanical Engineering, The Hong Kong Polytechnic University, Kowloon, Hong Kong Special Administrative Region
- Lei Xu
- Robotics and Artificial Intelligence Division, Hong Kong Productivity Council, Kowloon, Hong Kong Special Administrative Region
- Zhongqing Su
- Department of Mechanical Engineering, The Hong Kong Polytechnic University, Kowloon, Hong Kong Special Administrative Region; The Hong Kong Polytechnic University Shenzhen Research Institute, Shenzhen 518057, People's Republic of China; School of Astronautics, Northwestern Polytechnical University, Xi'an 710072, People's Republic of China
63
Fully Convolutional Neural Network for Improved Brain Segmentation. Arabian Journal for Science and Engineering 2022. [DOI: 10.1007/s13369-022-07169-7]
64
Deep learning models and traditional automated techniques for brain tumor segmentation in MRI: a review. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10245-x]
65
Kumar A. Study and analysis of different segmentation methods for brain tumor MRI application. Multimedia Tools and Applications 2022; 82:7117-7139. [PMID: 35991584] [PMCID: PMC9379244] [DOI: 10.1007/s11042-022-13636-y]
Abstract
Magnetic Resonance Imaging (MRI) is one of the preferred imaging methods for brain tumor diagnosis, providing detailed information on tumor type, location, size, identification, and detection. Segmentation divides an image into multiple segments, separating the suspicious region from pre-processed MRI images to produce a simpler image that is more meaningful and easier to examine. There are many segmentation methods that can be embedded in detection devices, and the response of each method differs. This study compares the performance of several image segmentation algorithms for brain tumor diagnosis: Otsu's method, watershed, level set, K-means, Haar Discrete Wavelet Transform (DWT), and Convolutional Neural Network (CNN). All of the techniques are simulated in MATLAB using images from the Brain Tumor Image Segmentation Benchmark (BraTS) 2018 dataset. The performance of these methods is analyzed based on response time and measures such as recall, precision, F-measure, and accuracy. The measured accuracy of the Otsu's, watershed, level set, K-means, DWT, and CNN methods is 71.42%, 78.26%, 80.45%, 84.34%, 86.95%, and 91.39%, respectively. The response time of CNN is 2.519 s in the MATLAB simulation environment for the designed algorithm. Among all compared methods, CNN proved the best algorithm for brain tumor image segmentation. The simulated and estimated parameters give researchers direction for choosing a specific algorithm for embedded hardware solutions and for developing optimal machine-learning models, as industry is looking for optimal CNN- and deep learning-based hardware models for brain tumor analysis.
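Of the compared methods, Otsu's thresholding is the simplest to state: pick the gray level that maximizes the between-class variance of the background/foreground split. A pure-Python sketch on a made-up bimodal intensity list (not BraTS data):

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: return the gray level t that maximizes the
    between-class variance w_b * w_f * (m_b - m_f)^2, where w and m
    are the weights and means of the two classes split at t."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_b = sum_b = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_b += hist[t]            # background pixel count
        if w_b == 0:
            continue
        w_f = total - w_b         # foreground pixel count
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b         # background mean
        m_f = (sum_all - sum_b) / w_f  # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy "slice": dark background near 20, bright region near 200.
pixels = [18, 20, 22, 25, 19, 198, 200, 205, 202]
t = otsu_threshold(pixels)
```

The returned threshold falls at the upper edge of the dark cluster, cleanly separating the two intensity modes.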
Affiliation(s)
- Adesh Kumar
- Department of Electrical and Electronics Engineering, School of Engineering, University of Petroleum and Energy Studies, Dehradun, India
66
MRF-IUNet: A Multiresolution Fusion Brain Tumor Segmentation Network Based on Improved Inception U-Net. Computational and Mathematical Methods in Medicine 2022; 2022:6305748. [PMID: 35966244] [PMCID: PMC9371863] [DOI: 10.1155/2022/6305748]
Abstract
Automatic segmentation of MRI brain tumors uses computer technology to segment and label tumor areas and normal tissues, which plays an important role in assisting doctors in the clinical diagnosis and treatment of brain tumors. This paper proposes a multiresolution-fusion MRI brain tumor segmentation algorithm based on an improved inception U-Net, named MRF-IUNet (multiresolution fusion inception U-Net). By replacing the original convolution modules in U-Net with inception modules, the width and depth of the network are increased. The inception module connects convolution kernels of different sizes in parallel to obtain receptive fields of different sizes, which can extract features at different scales. To reduce the loss of detailed information during downsampling, atrous convolutions are introduced in the inception module to expand the receptive field. Multiresolution feature fusion modules are connected between the encoder and decoder of the proposed network to fuse the semantic features learned by the deeper layers with the spatial detail features learned by the early layers, which improves the recognition and segmentation of local detail features and effectively raises segmentation accuracy. Experimental results on the BraTS (Multimodal Brain Tumor Segmentation Challenge) dataset show that the Dice similarity coefficient (DSC) obtained by the proposed method is 0.94 for the enhanced tumor area, 0.83 for the whole tumor area, and 0.93 for the tumor core area, demonstrating improved segmentation accuracy.
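The receptive-field gain from atrous (dilated) convolution mentioned above follows a simple formula: a kernel of size k with dilation d covers k + (k - 1)(d - 1) input positions, and stacked layers compound this. A small illustrative calculation; the layer configuration below is hypothetical, not the paper's exact network:

```python
def effective_kernel(k, dilation):
    """An atrous convolution with kernel size k and dilation d spans
    k + (k - 1) * (d - 1) input positions without adding parameters."""
    return k + (k - 1) * (dilation - 1)

def stacked_receptive_field(layers):
    """Receptive field of a stack of convolutions, each given as
    (kernel_size, dilation, stride): rf grows by (k_eff - 1) times
    the cumulative stride ('jump') of the preceding layers."""
    rf, jump = 1, 1
    for k, d, s in layers:
        rf += (effective_kernel(k, d) - 1) * jump
        jump *= s
    return rf

# Three 3x3 convolutions with dilations 1, 2, 4 (stride 1), as in an
# atrous pyramid: the receptive field reaches 15 pixels per axis.
rf = stacked_receptive_field([(3, 1, 1), (3, 2, 1), (3, 4, 1)])
```

The same three layers without dilation would only reach a receptive field of 7, which is why atrous kernels are attractive for recovering context lost to downsampling.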
67
Multi-class nucleus detection and classification using deep convolutional neural network with enhanced high dimensional dissimilarity translation model on cervical cells. Biocybern Biomed Eng 2022. [DOI: 10.1016/j.bbe.2022.06.003]
68
A layer-level multi-scale architecture for lung cancer classification with fluorescence lifetime imaging endomicroscopy. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07481-1]
Abstract
In this paper, we introduce our unique dataset of fluorescence lifetime imaging endo/microscopy (FLIM), containing over 100,000 FLIM images collected from 18 pairs of cancer/non-cancer human lung tissues of 18 patients with our custom fibre-based FLIM system. We provide this dataset so that more researchers from relevant fields can push forward this particular area of research. We then describe the image post-processing practice best suited to the dataset. In addition, we propose a novel hierarchically aggregated multi-scale architecture to improve the binary classification performance of classic CNNs. The proposed model integrates the advantages of multi-scale feature extraction at different levels, where layer-wise global information is aggregated with branch-wise local information. We integrate the proposal, named ResNetZ, into ResNet and appraise it on the FLIM dataset. Since ResNetZ can be configured with a shortcut connection and with aggregation by addition or concatenation, we first evaluate the impact of the different configurations on performance. We thoroughly examine various ResNetZ variants to demonstrate their superiority, and we also compare our model with a feature-level multi-scale model to illustrate the advantages and disadvantages of multi-scale architectures at different levels.
69
Suganyadevi S, Seethalakshmi V. CVD-HNet: Classifying Pneumonia and COVID-19 in Chest X-ray Images Using Deep Network. Wireless Personal Communications 2022; 126:3279-3303. [PMID: 35756172] [PMCID: PMC9206838] [DOI: 10.1007/s11277-022-09864-y]
Abstract
The use of computer-assisted analysis to improve image interpretation has been a long-standing challenge in the medical imaging industry. In terms of image comprehension, continuous advances in artificial intelligence (AI), predominantly in deep learning (DL) techniques, are supporting the classification, detection, and quantification of anomalies in medical images. DL techniques are the most rapidly evolving branch of AI and have recently been applied successfully in a variety of fields, including medicine. This paper provides a classification method for COVID-19-infected X-ray images based on new deep CNN models. For COVID-19-specific pneumonia analysis, two new customized CNN architectures, CVD-HNet1 (COVID-HybridNetwork1) and CVD-HNet2 (COVID-HybridNetwork2), have been designed. The suggested method systematically combines boundary- and region-based operations with convolution processes. In comparison to existing CNNs, the suggested classification method achieves an excellent accuracy of 98 percent, an F-score of 0.99, and an MCC of 0.97. These results indicate impressive classification accuracy on a limited dataset; with more training examples, much better results can be achieved. Overall, our CVD-HNet model could be a useful tool for radiologists in diagnosing and detecting COVID-19 cases early.
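The reported accuracy, F-score, and MCC are all derived from the binary confusion matrix. A pure-Python sketch with hypothetical counts (not the paper's actual confusion matrix):

```python
import math

def classification_metrics(tp, fp, fn, tn):
    """Accuracy, F-score, and Matthews correlation coefficient (MCC)
    computed from binary confusion-matrix counts."""
    acc = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    # MCC balances all four cells, so it stays informative on
    # imbalanced datasets where accuracy alone can mislead.
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return acc, f_score, mcc

# Hypothetical confusion counts for a COVID-19 vs. non-COVID classifier:
acc, f_score, mcc = classification_metrics(tp=95, fp=2, fn=3, tn=100)
```

Note that an F-score near 0.99 with an MCC near 0.97, as reported above, requires very few false positives and false negatives simultaneously.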
Affiliation(s)
- S. Suganyadevi
- Department of Electronics and Communication Engineering, KPR Institute of Engineering and Technology, Coimbatore, Tamil Nadu 641407, India
- V. Seethalakshmi
- Department of Electronics and Communication Engineering, KPR Institute of Engineering and Technology, Coimbatore, Tamil Nadu 641407, India
70
Jafar A, Hameed MT, Akram N, Waqas U, Kim HS, Naqvi RA. CardioNet: Automatic Semantic Segmentation to Calculate the Cardiothoracic Ratio for Cardiomegaly and Other Chest Diseases. J Pers Med 2022; 12:988. [PMID: 35743771] [PMCID: PMC9225197] [DOI: 10.3390/jpm12060988]
Abstract
Semantic segmentation for diagnosing chest-related diseases such as cardiomegaly, emphysema, pleural effusion, and pneumothorax is a critical yet understudied tool for identifying the chest anatomy. Among these, cardiomegaly is a dangerous disease carrying a high risk of sudden death. An expert medical practitioner can diagnose cardiomegaly early using a chest radiograph (CXR). Cardiomegaly is a heart-enlargement disease that can be analyzed by calculating the transverse cardiac diameter (TCD) and the cardiothoracic ratio (CTR). However, manual estimation of the CTR and other chest-related measures requires much time from medical experts. Artificial intelligence can instead estimate cardiomegaly and related diseases by segmenting CXRs based on their anatomical semantics. Unfortunately, owing to poor-quality images and variations in intensity, automatic segmentation of the lungs and heart in CXRs is challenging. Deep learning-based methods are being used for chest anatomy segmentation, but most consider only lung segmentation and require a great deal of training. This work presents a multiclass concatenation-based automatic semantic segmentation network, CardioNet, explicitly designed to perform fine segmentation using fewer parameters than a conventional deep learning scheme. Furthermore, CardioNet is used for the semantic segmentation of other chest-related diseases. CardioNet is evaluated on the JSRT (Japanese Society of Radiological Technology) dataset, which is publicly available and contains multiclass segmentations of the heart, lungs, and clavicle bones. In addition, our study examined lung segmentation using another publicly available dataset, Montgomery County (MC). The experimental results show that the proposed CardioNet model achieved acceptable accuracy and competitive results across all datasets.
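The cardiothoracic ratio that such segmentations feed into is the widest transverse cardiac diameter divided by the widest internal thoracic diameter. A toy sketch on hand-made binary masks; the mask shapes and helper names are illustrative, not from the paper:

```python
def cardiothoracic_ratio(heart_mask, thorax_mask):
    """CTR = maximal left-right extent of the heart mask divided by the
    maximal left-right extent of the thorax mask. Masks are row-major
    lists of 0/1 values; each row is one transverse line of the CXR."""
    def max_width(mask):
        widths = []
        for row in mask:
            cols = [j for j, v in enumerate(row) if v]
            if cols:
                widths.append(max(cols) - min(cols) + 1)
        return max(widths) if widths else 0
    return max_width(heart_mask) / max_width(thorax_mask)

# 4x8 toy masks: the heart spans 4 columns at its widest, the thorax 8.
heart = [[0] * 8,
         [0, 0, 1, 1, 1, 0, 0, 0],
         [0, 0, 1, 1, 1, 1, 0, 0],
         [0] * 8]
thorax = [[1] * 8 for _ in range(4)]
ctr = cardiothoracic_ratio(heart, thorax)
```

A CTR above roughly 0.5 on an adult posteroanterior CXR is the conventional screening cut-off for cardiomegaly, which is why accurate heart and lung masks matter.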
Affiliation(s)
- Abbas Jafar
- Department of Computer Engineering, Myongji University, Yongin 03674, Korea
- Muhammad Talha Hameed
- Department of Primary and Secondary Healthcare, Lahore 54000, Pakistan
- Nadeem Akram
- Department of Primary and Secondary Healthcare, Lahore 54000, Pakistan
- Umer Waqas
- Research and Development, AItheNutrigene, Seoul 06132, Korea
- Hyung Seok Kim
- School of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Korea
- Rizwan Ali Naqvi
- Department of Unmanned Vehicle Engineering, Sejong University, Seoul 05006, Korea
71
Patel NR, Setya K, Pradhan S, Lu M, Demer LL, Tintut Y. Microarchitectural Changes of Cardiovascular Calcification in Response to In Vivo Interventions Using Deep-Learning Segmentation and Computed Tomography Radiomics. Arterioscler Thromb Vasc Biol 2022; 42:e228-e241. [PMID: 35708025] [PMCID: PMC9339530] [DOI: 10.1161/atvbaha.122.317761]
Abstract
BACKGROUND Coronary calcification associates closely with cardiovascular risk, but its progression is accelerated in response to some interventions widely used to reduce risk. This paradox suggests that qualitative, not just quantitative, changes in calcification may affect plaque stability. To determine whether the microarchitecture of calcification varies with aging, a Western diet, statin therapy, and high-intensity progressive exercise, we assessed changes in a priori selected computed tomography radiomic features (intensity, size, shape, and texture). METHODS Longitudinal computed tomography scans of mice (Apoe-/-) exposed to each of these conditions were autosegmented by deep-learning segmentation, and radiomic features of the largest deposits were analyzed. RESULTS Over 20 weeks of aging, intensity and most size parameters increased, but the surface-area-to-volume ratio (a measure of porosity) decreased, suggesting stabilization. However, texture features (coarseness, cluster tendency, and nonuniformity) increased, suggesting heterogeneity and likely destabilization. Shape parameters showed no significant changes, except sphericity, which decreased. The Western diet had significant effects on radiomic features related to size and texture, but not intensity or shape. In mice undergoing either pravastatin treatment or exercise, the selected radiomic features of their computed tomography scans were not significantly different from those of their respective controls. Interestingly, the total number of calcific deposits increased significantly less in the two intervention groups than in their respective controls, suggesting more coalescence and/or fewer de novo deposits. CONCLUSIONS Thus, aging and standard interventions alter the microarchitectural features of vascular calcium deposits in ways that may alter plaque biomechanical stability.
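The surface-area-to-volume ratio used above as a porosity proxy can be computed directly from a voxelized deposit by counting exposed voxel faces. A minimal sketch; this is a simplification of what radiomics packages actually compute, shown only to make the "compact deposits score lower" intuition concrete:

```python
def surface_area_to_volume(voxels):
    """Surface-area-to-volume ratio of a voxelized deposit: count faces
    not shared with another occupied voxel (exposed surface, in face
    units) and divide by the number of occupied voxels (volume)."""
    occ = set(voxels)
    faces = 0
    for (x, y, z) in occ:
        for dx, dy, dz in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
            if (x + dx, y + dy, z + dz) not in occ:
                faces += 1
    return faces / len(occ)

# A lone voxel exposes all 6 faces; a solid 2x2x2 cube exposes 24 faces
# over 8 voxels, so its ratio drops to 3: growth into compact deposits
# lowers SA:V, consistent with the stabilization reading above.
single = surface_area_to_volume([(0, 0, 0)])
cube = surface_area_to_volume([(x, y, z) for x in (0, 1)
                               for y in (0, 1) for z in (0, 1)])
```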
Affiliation(s)
- Nikhil Rajesh Patel
- Department of Medicine, University of California, Los Angeles (N.R.P., K.S., S.P., M.L., L.L.D., Y.T.)
- Kulveer Setya
- Department of Medicine, University of California, Los Angeles (N.R.P., K.S., S.P., M.L., L.L.D., Y.T.)
- Stuti Pradhan
- Department of Medicine, University of California, Los Angeles (N.R.P., K.S., S.P., M.L., L.L.D., Y.T.)
- Mimi Lu
- Department of Medicine, University of California, Los Angeles (N.R.P., K.S., S.P., M.L., L.L.D., Y.T.)
- Linda L. Demer
- Department of Medicine, University of California, Los Angeles (N.R.P., K.S., S.P., M.L., L.L.D., Y.T.); Department of Bioengineering, University of California, Los Angeles (L.L.D.); Department of Physiology, University of California, Los Angeles (L.L.D., Y.T.); VA Greater Los Angeles Healthcare System, CA (L.L.D., Y.T.)
- Yin Tintut
- Department of Medicine, University of California, Los Angeles (N.R.P., K.S., S.P., M.L., L.L.D., Y.T.); Department of Physiology, University of California, Los Angeles (L.L.D., Y.T.); Department of Orthopaedic Surgery, University of California, Los Angeles (Y.T.); VA Greater Los Angeles Healthcare System, CA (L.L.D., Y.T.)
72
Lee J, Li C, Liu CSJ, Shiroishi M, Carmichael JD, Zada G, Patel V. Ultra-high field 7 T MRI localizes regional brain volume recovery following corticotroph adenoma resection and hormonal remission in Cushing's disease: A case series. Surg Neurol Int 2022; 13:239. [PMID: 35855134] [PMCID: PMC9282752] [DOI: 10.25259/sni_787_2021]
Abstract
Background Cushing's disease (CD) is defined by glucocorticoid excess secondary to increased secretion of corticotropin by a pituitary adenoma. Magnetic resonance imaging (MRI) studies performed at 1.5 or 3 Tesla (T) have demonstrated correlations between regional changes in brain structure and the progression of CD. In this report, we examine the changes in brain volume following corticotroph pituitary adenoma resection using ultra-high field 7 T MRI to increase the accuracy of our volumetric analyses. Methods Thirteen patients were referred to the endocrinology clinic at our institution from 2017 to 2020 with symptoms of cortisol excess and were diagnosed with ACTH-dependent endogenous Cushing syndrome. Five patients underwent follow-up 7 T imaging at varying time points after transsphenoidal resection. Results The symmetrized percent change in regional volumes demonstrated a postoperative increase in cortical volume that was relatively larger than that of cerebral white matter or subcortical gray matter (percent changes = 0.0172%, 0.0052%, and 0.0120%, respectively). In the left cerebral hemisphere, the medial orbitofrontal, lateral orbitofrontal, and pars opercularis cortical regions showed the most robust postoperative percent increases (0.0166%, 0.0122%, and 0.0068%, respectively). In the right cerebral hemisphere, the largest percent increases were observed in the pars triangularis, rostral middle frontal gyrus, and superior frontal gyrus (0.0156%, 0.0120%, and 0.0158%, respectively). Conclusion Cerebral volume recovery following pituitary adenoma resection is driven by changes in cortical thickness, predominantly in the frontal lobe, while subcortical white and gray matter volumes increase more modestly.
Affiliation(s)
- Jonathan Lee
- Mark and Mary Stevens Neuroimaging and Informatics Institute, University of Southern California
- Charles Li
- Departments of Radiology, University of Southern California, Los Angeles, California, United States
- Chia-Shang J. Liu
- Departments of Radiology, University of Southern California, Los Angeles, California, United States
- Mark Shiroishi
- Departments of Radiology, University of Southern California, Los Angeles, California, United States
- John D. Carmichael
- Medicine, Division of Endocrinology, University of Southern California, Los Angeles, California, United States
- Gabriel Zada
- Neurological Surgery, University of Southern California, Los Angeles, California, United States
- Vishal Patel
- Mark and Mary Stevens Neuroimaging and Informatics Institute, University of Southern California; Departments of Radiology, University of Southern California, Los Angeles, California, United States. Corresponding author.
73
Kaur I, Goyal LM, Ghansiyal A, Hemanth DJ. Efficient Approach for Rhopalocera Classification Using Growing Convolutional Neural Network. Int J Uncertain Fuzz 2022. [DOI: 10.1142/s0218488522400189]
Abstract
Artificial intelligence-based techniques are now among the most prominent ways to classify images and can be conveniently leveraged in real-world scenarios. This technology can be extremely beneficial to lepidopterists, assisting them in classifying the diverse species of Rhopalocera, commonly called butterflies. In this article, image classification is performed on a dataset of various butterfly species, using the feature-extraction process of a Convolutional Neural Network (CNN) along with additional, independently calculated features to train the model. The classification models deployed for this purpose include K-Nearest Neighbors (KNN), Random Forest, and Support Vector Machine (SVM). However, each of these methods tends to focus on one specific class of features. Therefore, an ensemble of multiple classes of features is implemented for image classification. This paper discusses the results of classification based on two different classes of features, structure and texture. The amalgamation of the two classes of features forms a combined dataset, which is then used to train the Growing Convolutional Neural Network (GCNN), resulting in higher classification accuracy. The experiment yielded promising outcomes, with TP rate, FP rate, precision, recall, and F-measure values of 0.9690, 0.0034, 0.9889, 0.9692, and 0.9686, respectively. Furthermore, an accuracy of 96.98% was observed with the proposed methodology.
Affiliation(s)
- Iqbaldeep Kaur
- Department of Computer Science, CGC, Landran, Mohali, India
- Lalit Mohan Goyal
- Department of Computer Engineering, J C Bose University of Science and Technology, YMCA, Faridabad, India
- D. Jude Hemanth
- Department of ECE, Karunya Institute of Technology and Sciences, Coimbatore, India
74
Backtracking Reconstruction Network for Three-Dimensional Compressed Hyperspectral Imaging. Remote Sensing 2022. [DOI: 10.3390/rs14102406]
Abstract
Compressed sensing (CS) has been widely used in hyperspectral (HS) imaging to obtain hyperspectral data at a sub-Nyquist sampling rate, improving the efficiency of data acquisition. Yet, reconstructing the acquired HS data via iterative algorithms is time-consuming, which hinders the real-time application of compressed HS imaging. To alleviate this problem, this paper makes the first attempt to adopt convolutional neural networks (CNNs) to reconstruct three-dimensional compressed HS data by backtracking the entire imaging process, leading to a simple yet effective network dubbed the backtracking reconstruction network (BTR-Net). Concretely, we leverage the divide-and-conquer method to divide the imaging process of a coded aperture tunable filter (CATF) spectral imager into steps and build a subnetwork for each step to specialize in its reverse process. Consequently, BTR-Net comprises multiple built-in networks which perform spatial initialization, spatial enhancement, spectral initialization, and spatial-spectral enhancement in an independent and sequential manner. Extensive experiments show that BTR-Net can reconstruct compressed HS data quickly and accurately, outperforming leading iterative algorithms both quantitatively and visually, while having superior resistance to noise.
75
Sadeghibakhi M, Pourreza H, Mahyar H. Multiple Sclerosis Lesions Segmentation Using Attention-Based CNNs in FLAIR Images. IEEE Journal of Translational Engineering in Health and Medicine 2022; 10:1800411. [PMID: 35711337] [PMCID: PMC9191687] [DOI: 10.1109/jtehm.2022.3172025]
Abstract
Objective: Multiple Sclerosis (MS) is an autoimmune and demyelinating disease that leads to lesions in the central nervous system. The disease can be tracked and diagnosed using Magnetic Resonance Imaging (MRI). A multitude of multimodality automatic biomedical approaches are used to segment the lesions, which is not beneficial for patients in terms of cost, time, and usability. The authors of the present paper propose a method employing just one modality (the FLAIR image) to segment MS lesions accurately. Methods: A patch-based Convolutional Neural Network (CNN) is designed, inspired by 3D-ResNet and a spatial-channel attention module, to segment MS lesions. The proposed method consists of three stages: (1) Contrast-Limited Adaptive Histogram Equalization (CLAHE) is applied to the original images and concatenated to the extracted edges to create 4D images; (2) patches of size [Formula: see text] are randomly selected from the 4D images; and (3) the extracted patches are passed into an attention-based CNN which segments the lesions. Finally, the proposed method was compared to previous studies on the same dataset. Results: The current study evaluates the model with a test set of ISBI challenge data. Experimental results illustrate that the proposed approach significantly surpasses existing methods in Dice similarity and Absolute Volume Difference while using just one modality (FLAIR) to segment the lesions. Conclusion: The authors have introduced an automated approach to segment the lesions based on, at most, two modalities as an input. The proposed architecture comprises convolution, deconvolution, and an SCA-VoxRes module as an attention module. The results show that the proposed method performs well compared to other methods.
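The two reported metrics are easy to state precisely: Dice measures overlap between the predicted and reference lesion masks, and absolute volume difference compares their total sizes. A pure-Python sketch on toy flattened masks (the mask values are made up):

```python
def dice(seg, ref):
    """Dice similarity coefficient between two binary masks,
    given as flat 0/1 lists: 2|A∩B| / (|A| + |B|)."""
    inter = sum(1 for s, r in zip(seg, ref) if s and r)
    return 2 * inter / (sum(seg) + sum(ref))

def absolute_volume_difference(seg, ref):
    """Absolute volume difference as a fraction of the reference
    lesion volume: | |A| - |B| | / |B|."""
    return abs(sum(seg) - sum(ref)) / sum(ref)

# Toy flattened masks: 3 of 4 predicted lesion voxels overlap the reference.
seg = [1, 1, 1, 0, 0, 0, 1, 0]
ref = [1, 1, 0, 0, 0, 0, 1, 1]
d = dice(seg, ref)
avd = absolute_volume_difference(seg, ref)
```

Note the two metrics are complementary: here the volumes match exactly (AVD = 0) even though the overlap is imperfect (Dice = 0.75), which is why lesion-segmentation challenges report both.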
Affiliation(s)
- Mehdi Sadeghibakhi
- MV Laboratory, Department of Computer Engineering, Faculty of Engineering, Ferdowsi University of Mashhad, Mashhad 9177948974, Iran
- Hamidreza Pourreza
- MV Laboratory, Department of Computer Engineering, Faculty of Engineering, Ferdowsi University of Mashhad, Mashhad 9177948974, Iran
- Hamidreza Mahyar
- Faculty of Engineering, W Booth School of Engineering Practice and Technology, McMaster University, Hamilton, ON L8S 4L8, Canada
76
Chen Y, He K, Hao B, Weng Y, Chen Z. FractureNet: A 3D Convolutional Neural Network Based on the Architecture of m-Ary Tree for Fracture Type Identification. IEEE Transactions on Medical Imaging 2022; 41:1196-1207. [PMID: 34890325] [DOI: 10.1109/tmi.2021.3134650]
Abstract
To address the problem of automatic identification of fine-grained fracture types, in this paper we propose a novel framework using a 3D convolutional neural network (CNN) to learn fracture features from voxelized bone models, which are obtained by establishing an isomorphic mapping from fractured bones to a voxelized template. The network, named FractureNet, consists of four discriminators forming a multi-stage hierarchy. Each discriminator includes multiple sub-classifiers. These sub-classifiers are chained by two kinds of feature chains (a feature map chain and a classification feature chain) in the form of a full m-ary tree to perform multi-stage classification tasks. The features learned and classification results obtained at previous stages serve as prior knowledge for the current learning and classification. All sub-classifiers are jointly learned in an end-to-end network via a multi-stage loss function integrating the losses of the four discriminators. To make FractureNet more robust and accurate, a data augmentation strategy termed r-combination with constraints is further proposed, on the basis of an adjacency relation and a continuity relation between voxels, to create a large-scale fracture dataset of voxel models. Extensive experiments show that the proposed method can recognize various fracture types in patients accurately and effectively, and enables significant improvements over the state of the art on a variety of fracture recognition tasks. Moreover, ancillary experiments on the CIFAR-10 and PadChest datasets at large scales further support the superior performance of the proposed FractureNet.
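The "r-combination with constraints" augmentation can be illustrated in miniature: enumerate r-voxel subsets and keep only those satisfying adjacency and continuity. This sketch uses brute-force enumeration on a tiny rod of voxels and is only illustrative of the constraint idea; the paper's strategy operates at dataset scale with a more elaborate construction:

```python
from itertools import combinations

def adjacent(a, b):
    """Two voxels are face-adjacent if they differ by 1 along one axis."""
    return sum(abs(x - y) for x, y in zip(a, b)) == 1

def connected(subset):
    """The continuity constraint: the subset's adjacency graph must be
    connected (checked here by a simple flood fill)."""
    subset = list(subset)
    seen, frontier = {subset[0]}, [subset[0]]
    while frontier:
        v = frontier.pop()
        for u in subset:
            if u not in seen and adjacent(u, v):
                seen.add(u)
                frontier.append(u)
    return len(seen) == len(subset)

def r_combinations_with_constraints(voxels, r):
    """Keep only the r-voxel combinations that form a contiguous
    fragment -- constraint-filtered augmentation candidates."""
    return [c for c in combinations(voxels, r) if connected(c)]

# A 1x1x4 voxel rod: of the 6 possible pairs, only the 3 contiguous
# runs of 2 neighboring voxels survive the constraints.
rod = [(0, 0, z) for z in range(4)]
frags = r_combinations_with_constraints(rod, 2)
```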
77
Shiohama T, Tsujimura K. Quantitative Structural Brain Magnetic Resonance Imaging Analyses: Methodological Overview and Application to Rett Syndrome. Front Neurosci 2022; 16:835964. [PMID: 35450016] [PMCID: PMC9016334] [DOI: 10.3389/fnins.2022.835964]
Abstract
Congenital genetic disorders often present with neurological manifestations such as neurodevelopmental disorders, motor developmental retardation, epilepsy, and involuntary movement. Qualitative morphometric evaluation of neuroimaging studies has identified remarkable structural abnormalities in these disorders, such as lissencephaly, polymicrogyria, white matter lesions, and cortical tubers, whereas in a large proportion of patients no structural abnormalities are identified in clinical settings. Recent advances in data analysis programs have led to significant progress in the quantitative analysis of anatomical structural magnetic resonance imaging (MRI) and diffusion-weighted MRI tractography, and these approaches have been used to investigate psychological and congenital genetic disorders. Evaluation of morphometric brain characteristics may contribute to the identification of neuroimaging biomarkers for early diagnosis and response evaluation in patients with congenital genetic diseases. This mini-review focuses on the methodologies and attempts employed to study Rett syndrome using quantitative structural brain MRI analyses, including voxel- and surface-based morphometry and diffusion-weighted MRI tractography, and aims to deepen our understanding of how neuroimaging studies are used to examine congenital genetic disorders.
Affiliation(s)
- Tadashi Shiohama
- Department of Pediatrics, Chiba University Hospital, Chiba, Japan
- Keita Tsujimura
- Group of Brain Function and Development, Nagoya University Neuroscience Institute of the Graduate School of Science, Nagoya, Japan
- Research Unit for Developmental Disorders, Institute for Advanced Research, Nagoya University, Nagoya, Japan
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, United States
78
Mamatha SK, Krishnappa HK, Shalini N. Graph Theory Based Segmentation of Magnetic Resonance Images for Brain Tumor Detection. Pattern Recognit Image Anal 2022. [DOI: 10.1134/s1054661821040167]
79
Diagnosis System of Microscopic Hyperspectral Image of Hepatobiliary Tumors Based on Convolutional Neural Network. Comput Intell Neurosci 2022; 2022:3794844. [PMID: 35341163] [PMCID: PMC8947895] [DOI: 10.1155/2022/3794844]
Abstract
Hepatobiliary tumors are among the common tumors and cancers in medicine and seriously affect patients' lives, so accurate diagnosis is a pressing problem. This article studies a diagnostic method based on microscopic hyperspectral images of hepatobiliary tumors, proposing a convolutional neural network to learn from and classify these images. The study finds that the convolutional neural network greatly improves classification accuracy on the hyperspectral images and can effectively raise the success rate of treatment. Experiments were also designed to compare the feature extraction performance and classification behavior of the method. The experimental results show that the improved diagnostic method based on the convolutional neural network achieves an accuracy of 85%–90%, which is 6%–8% higher than the traditional approach, thereby helping to address clinical problems in hepatobiliary tumor treatment.
80
Thakur SP, Schindler MK, Bilello M, Bakas S. Clinically Deployed Computational Assessment of Multiple Sclerosis Lesions. Front Med (Lausanne) 2022; 9:797586. [PMID: 35372431] [PMCID: PMC8968446] [DOI: 10.3389/fmed.2022.797586]
Abstract
Multiple Sclerosis (MS) is a demyelinating disease of the central nervous system that affects nearly 1 million adults in the United States. Magnetic Resonance Imaging (MRI) plays a vital role in diagnosis and treatment monitoring in MS patients. In particular, follow-up MRI with T2-FLAIR images of the brain, depicting white matter lesions, is the mainstay for monitoring disease activity and making treatment decisions. In this article, we present a computational approach that has been deployed and integrated into a real-world routine clinical workflow, focusing on two tasks: (a) detecting new disease activity in MS patients, and (b) determining the necessity for injecting Gadolinium-Based Contrast Agents (GBCAs). This computer-aided detection (CAD) software has been utilized for the former task on more than 19,000 patients over the course of 10 years, while its added function of identifying patients who need GBCA injection has been operative for the past 3 years, with >85% sensitivity. The benefits of this approach are: (1) offering a reproducible and accurate clinical assessment of MS lesions, (2) reducing the adverse effects of GBCAs (and the deposition of GBCAs in the patient's brain) by identifying the patients who may benefit from injection, and (3) reducing healthcare costs, patients' discomfort, and caregivers' workload.
Affiliation(s)
- Siddhesh P. Thakur
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, PA, United States; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States; Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Matthew K. Schindler
- Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Michel Bilello
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, PA, United States; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Spyridon Bakas
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, PA, United States; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States; Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
81
Chen S, Zhao S, Lan Q. Residual Block Based Nested U-Type Architecture for Multi-Modal Brain Tumor Image Segmentation. Front Neurosci 2022; 16:832824. [PMID: 35356052] [PMCID: PMC8959850] [DOI: 10.3389/fnins.2022.832824]
Abstract
Multi-modal magnetic resonance imaging (MRI) segmentation of brain tumors has been a hot topic in brain tumor processing research in recent years; it can make full use of the feature information of the different modalities in MRI images so that tumors are segmented more effectively. In this article, convolutional neural networks (CNNs) are used as a tool to improve the efficiency and effectiveness of segmentation. On this basis, Dense-ResUNet, a multi-modal MRI image segmentation model for brain tumors, is created. Dense-ResUNet consists of a series of nested dense convolutional blocks and a U-Net-shaped model with residual connections. The nested dense convolutional blocks bridge the semantic disparity between the feature maps of the encoder and decoder before fusion and make full use of different levels of features. The residual blocks and skip connections preserve pixel-level information from the image and alleviate the degradation problem of traditional deep CNNs. The experimental results show that Dense-ResUNet can effectively help to extract brain tumors and has great value for clinical research and application.
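As a concrete illustration of the residual connections this abstract refers to, the sketch below implements the generic y = x + F(x) pattern, with two small dense layers standing in for the learned residual F. The layer shapes and random weights are illustrative assumptions, not the Dense-ResUNet architecture (which uses convolutional blocks).

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = x + F(x): the block learns a residual F instead of a full mapping,
    so gradients can flow unchanged through the identity path."""
    f = relu(x @ w1) @ w2   # F(x): two linear layers with a ReLU in between
    return x + f            # identity skip connection

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))           # a batch of 4 feature vectors
w1 = rng.normal(size=(8, 16)) * 0.1   # illustrative weights
w2 = rng.normal(size=(16, 8)) * 0.1
y = residual_block(x, w1, w2)
print(y.shape)  # (4, 8): output keeps the input shape, as the skip requires
```

Note that if F collapses to zero (all-zero weights), the block reduces to the identity, which is exactly why very deep stacks of such blocks remain trainable.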
Affiliation(s)
- Sirui Chen
- School of Software Engineering, Tongji University, Shanghai, China
- Shengjie Zhao
- School of Software Engineering, Tongji University, Shanghai, China
- Quan Lan
- Department of Neurology, First Affiliated Hospital of Xiamen University, Xiamen, China
82
Jiang Y, Li X, Luo H, Yin S, Kaynak O. Quo vadis artificial intelligence? Discov Artif Intell 2022; 2:4. [DOI: 10.1007/s44163-022-00022-8]
Abstract
The study of artificial intelligence (AI) has been a continuous endeavor of scientists and engineers for over 65 years. The simple contention is that human-created machines can do more than just labor-intensive work; they can develop human-like intelligence. Aware of it or not, AI has penetrated our daily lives, playing novel roles in industry, healthcare, transportation, education, and many more areas close to the general public. AI is believed to be one of the major drivers of socio-economic change. In another respect, AI contributes to the advancement of state-of-the-art technologies in many fields of study, as a helpful tool for groundbreaking research. However, the prosperity of AI as we witness it today was not established smoothly. During the past decades, AI has struggled through several historical stages and winters. Therefore, at this juncture, to enlighten future development, it is time to discuss the past and present, and to take an outlook on AI. In this article, we discuss from a historical perspective the challenges faced on the path of revolution of both AI tools and AI systems. In addition to the technical development of AI in the short to mid-term, thoughts and insights are also presented regarding the symbiotic relationship of AI and humans in the long run.
83
Optimization of deep learning based segmentation method. Soft Comput 2022. [DOI: 10.1007/s00500-021-06711-3]
84
Wu X, Huang W, Wu X, Wu S, Huang J. Classification of thermal image of clinical burn based on incremental reinforcement learning. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-05772-7]
85
Stegeman R, Sprong MCA, Breur JMPJ, Groenendaal F, de Vries LS, Haas F, van der Net J, Jansen NJG, Benders MJNL, Claessens NHP. Early motor outcomes in infants with critical congenital heart disease are related to neonatal brain development and brain injury. Dev Med Child Neurol 2022; 64:192-199. [PMID: 34416027] [PMCID: PMC9290970] [DOI: 10.1111/dmcn.15024]
Abstract
AIM To assess the relationship of neonatal brain development and injury with early motor outcomes in infants with critical congenital heart disease (CCHD). METHOD Neonatal brain magnetic resonance imaging was performed after open-heart surgery with cardiopulmonary bypass. Cortical grey matter (CGM), unmyelinated white matter, and cerebellar volumes, as well as white matter motor tract fractional anisotropy and mean diffusivity, were assessed. White matter injury (WMI) and arterial ischaemic stroke (AIS) with corticospinal tract (CST) involvement were scored. Associations with motor outcomes at 3, 9, and 18 months were corrected for repeated cardiac surgery. RESULTS Fifty-one infants (31 males, 20 females) were included prospectively. Median age at neonatal surgery and postoperative brain magnetic resonance imaging was 7 days (interquartile range [IQR] 5-11d) and 15 days (IQR 12-21d) respectively. Smaller CGM and cerebellar volumes were associated with lower fine motor scores at 9 months (CGM regression coefficient=0.51, 95% confidence interval [CI]=0.15-0.86; cerebellum regression coefficient=3.08, 95% CI=1.07-5.09) and 18 months (cerebellum regression coefficient=2.08, 95% CI=0.47-5.12). The fractional anisotropy and mean diffusivity of white matter motor tracts were not related to motor scores. WMI was related to lower gross motor scores at 9 months (mean difference -0.8SD, 95% CI=-1.5 to -0.2). AIS with CST involvement increased the risk of gross motor problems and muscle tone abnormalities. Cerebral palsy (n=3) was preceded by severe ischaemic brain injury. INTERPRETATION Neonatal brain development and injury are associated with less favourable early motor outcomes in infants with CCHD.
Affiliation(s)
- Raymond Stegeman
- Neonatology, Wilhelmina Children’s Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands; Pediatric Cardiology, Wilhelmina Children’s Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands; Pediatric Intensive Care, Wilhelmina Children’s Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands; Congenital Cardiothoracic Surgery, Wilhelmina Children’s Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands; Brain Center, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
- Maaike C A Sprong
- Center for Child Development, Exercise and Physical Literacy, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
- Johannes M P J Breur
- Pediatric Cardiology, Wilhelmina Children’s Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
- Floris Groenendaal
- Neonatology, Wilhelmina Children’s Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
- Linda S de Vries
- Neonatology, Wilhelmina Children’s Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
- Felix Haas
- Congenital Cardiothoracic Surgery, Wilhelmina Children’s Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
- Janjaap van der Net
- Center for Child Development, Exercise and Physical Literacy, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
- Nicolaas J G Jansen
- Pediatric Intensive Care, Wilhelmina Children’s Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands; Department of Pediatrics, University Medical Center Groningen, Groningen, the Netherlands
- Manon J N L Benders
- Neonatology, Wilhelmina Children’s Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands; Brain Center, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
- Nathalie H P Claessens
- Neonatology, Wilhelmina Children’s Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands; Pediatric Cardiology, Wilhelmina Children’s Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands; Pediatric Intensive Care, Wilhelmina Children’s Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands; Congenital Cardiothoracic Surgery, Wilhelmina Children’s Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands; Brain Center, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
86
Lesion Volume Quantification Using Two Convolutional Neural Networks in MRIs of Multiple Sclerosis Patients. Diagnostics (Basel) 2022; 12:230. [PMID: 35204321] [PMCID: PMC8870921] [DOI: 10.3390/diagnostics12020230]
Abstract
Background: Multiple sclerosis (MS) is a neurologic disease of the central nervous system which affects almost three million people worldwide. MS is characterized by a demyelination process that leads to brain lesions, allowing these affected areas to be visualized with magnetic resonance imaging (MRI). Deep learning techniques, especially computational algorithms based on convolutional neural networks (CNNs), have become a frequently used algorithm that performs feature self-learning and enables segmentation of structures in the image useful for quantitative analysis of MRIs, including quantitative analysis of MS. To obtain quantitative information about lesion volume, it is important to perform proper image preprocessing and accurate segmentation. Therefore, we propose a method for volumetric quantification of lesions on MRIs of MS patients using automatic segmentation of the brain and lesions by two CNNs. Methods: We used CNNs at two different moments: the first to perform brain extraction, and the second for lesion segmentation. This study includes four independent MRI datasets: one for training the brain segmentation models, two for training the lesion segmentation model, and one for testing. Results: The proposed brain detection architecture using binary cross-entropy as the loss function achieved a 0.9786 Dice coefficient, 0.9969 accuracy, 0.9851 precision, 0.9851 sensitivity, and 0.9985 specificity. In the second proposed framework for brain lesion segmentation, we obtained a 0.8893 Dice coefficient, 0.9996 accuracy, 0.9376 precision, 0.8609 sensitivity, and 0.9999 specificity. After quantifying the lesion volume of all patients from the test group using our proposed method, we obtained a mean value of 17,582 mm3. Conclusions: We concluded that the proposed algorithm achieved accurate lesion detection and segmentation with reproducibility corresponding to state-of-the-art software tools and manual segmentation. 
We believe that this quantification method can add value to treatment monitoring and routine clinical evaluation of MS patients.
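The two quantities reported above, overlap with a reference mask (Dice, precision, sensitivity) and lesion volume in mm³, can be computed directly from binary segmentation masks. A minimal sketch, using toy masks and an assumed voxel size rather than any data from the study:

```python
import numpy as np

def dice(pred, ref):
    """Dice = 2|P ∩ R| / (|P| + |R|) on binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * inter / denom if denom else 1.0

def lesion_volume_mm3(mask, voxel_dims_mm=(1.0, 1.0, 1.0)):
    """Volume = number of lesion voxels x volume of one voxel."""
    return float(mask.astype(bool).sum() * np.prod(voxel_dims_mm))

# Toy 4x4x4 volume: reference lesion of 8 voxels, prediction missing one.
ref = np.zeros((4, 4, 4), dtype=int)
ref[1:3, 1:3, 1:3] = 1
pred = ref.copy()
pred[1, 1, 1] = 0

print(round(dice(pred, ref), 3))                 # 2*7/(7+8) ≈ 0.933
print(lesion_volume_mm3(ref, (1.0, 1.0, 2.0)))   # 8 voxels x 2 mm^3 = 16.0
```

The voxel dimensions come from the scan header in practice; total lesion load is then the sum over all connected lesion components.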
87
Yan Y, Balbastre Y, Brudfors M, Ashburner J. Factorisation-Based Image Labelling. Front Neurosci 2022; 15:818604. [PMID: 35110992] [PMCID: PMC8801908] [DOI: 10.3389/fnins.2021.818604]
Abstract
Segmentation of brain magnetic resonance images (MRI) into anatomical regions is a useful task in neuroimaging. Manual annotation is time consuming and expensive, so a fully automated, general-purpose brain segmentation algorithm is highly desirable. To this end, we propose a patch-based label propagation approach based on a generative model with latent variables. Once trained, our Factorisation-based Image Labelling (FIL) model is able to label target images with a variety of image contrasts. We compare the effectiveness of the proposed model against the state of the art using data from the MICCAI 2012 Grand Challenge and Workshop on Multi-Atlas Labelling. As the approach is intended to be general purpose, we also assess how well it handles domain shift by labelling images of the same subjects acquired with different MR contrasts.
Affiliation(s)
- Yu Yan
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
- Yaël Balbastre
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States
- Mikael Brudfors
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
- John Ashburner
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
88
Albishri AA, Shah SJH, Kang SS, Lee Y. AM-UNet: automated mini 3D end-to-end U-net based network for brain claustrum segmentation. Multimed Tools Appl 2022; 81:36171-36194. [PMID: 35035265] [PMCID: PMC8742670] [DOI: 10.1007/s11042-021-11568-7]
Abstract
Recent advances in deep learning (DL) have provided promising solutions to medical image segmentation. Among existing segmentation approaches, U-Net-based methods are widely used. However, very few U-Net-based studies have addressed automatic segmentation of the human brain claustrum (CL). CL segmentation is challenging due to the thin, sheet-like structure of the claustrum, the heterogeneity of image modalities and formats, imperfect labels, and data imbalance. We propose an automatic, optimized U-Net-based 3D segmentation model, called AM-UNet, designed as an end-to-end pipeline of pre- and post-processing techniques around a U-Net model for CL segmentation. It is a lightweight and scalable solution that has achieved state-of-the-art accuracy for automatic CL segmentation on 3D magnetic resonance images (MRI). On the combined T1/T2 MRI CL dataset, AM-UNet obtained excellent results, with Dice, Intersection over Union (IoU), and Intraclass Correlation Coefficient (ICC) scores of 82%, 70%, and 90%, respectively. We conducted a comparative evaluation of AM-UNet against pre-existing segmentation models on the MRI CL dataset, and medical experts confirmed the superiority of the proposed model for automatic CL segmentation. The source code and model of the AM-UNet project are publicly available on GitHub: https://github.com/AhmedAlbishri/AM-UNET.
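Since the abstract reports both Dice (82%) and IoU (70%) for the same segmentations, it is worth noting that the two overlap scores are deterministically related for any pair of masks, which makes such paired figures easy to sanity-check:

```python
# For any two binary masks, IoU (Jaccard) and Dice are related by
# IoU = Dice / (2 - Dice) and, inversely, Dice = 2*IoU / (1 + IoU).

def iou_from_dice(d):
    return d / (2.0 - d)

def dice_from_iou(j):
    return 2.0 * j / (1.0 + j)

# A Dice of 0.82 implies an IoU of about 0.695, consistent with the
# reported 70% once rounding is taken into account.
print(round(iou_from_dice(0.82), 3))
```

ICC is a different kind of score (agreement between volume measurements rather than voxel overlap), so no analogous closed-form check applies to it.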
Affiliation(s)
- Ahmed Awad Albishri
- School of Computing and Engineering, University of Missouri-Kansas City, Kansas City, MO 64110 USA
- College of Computing and Informatics, Saudi Electronic University, Riyadh, Saudi Arabia
- Syed Jawad Hussain Shah
- School of Computing and Engineering, University of Missouri-Kansas City, Kansas City, MO 64110 USA
- Seung Suk Kang
- Department of Psychiatry Biomedical Sciences, School of Medicine, University of Missouri-Kansas City, Kansas City, MO 64110 USA
- Yugyung Lee
- School of Computing and Engineering, University of Missouri-Kansas City, Kansas City, MO 64110 USA
89
Song J, Yuan L. Brain tissue segmentation via non-local fuzzy c-means clustering combined with Markov random field. Math Biosci Eng 2022; 19:1891-1908. [PMID: 35135234] [DOI: 10.3934/mbe.2022089]
Abstract
The segmentation and extraction of brain tissue in magnetic resonance imaging (MRI) is a meaningful task because it provides a basis for diagnosis and treatment: observing brain tissue development, delineating lesions, and planning surgery. However, MRI images are often degraded by factors such as noise, low contrast, and intensity inhomogeneity, which seriously affect segmentation accuracy. A non-local fuzzy c-means clustering framework incorporating a Markov random field (MRF) for brain tissue segmentation is proposed in this paper. First, exploiting the fact that an MRF can effectively describe the local spatial correlation of an image, a new distance metric with neighborhood constraints is constructed by combining probabilistic statistical information. Second, a non-local regularization term is integrated into the objective function to exploit the global structure of the image, so that both local and global information are taken into account. In addition, a linear model of intensity inhomogeneity is built to estimate the bias field in brain MRI, thereby overcoming the intensity inhomogeneity. The proposed model fully considers the randomness and fuzziness of the image segmentation problem and obtains prior knowledge of the image reasonably, which reduces the influence of low contrast in MRI images. The experimental results demonstrate that the proposed method can suppress noise and intensity inhomogeneity in MRI images and effectively improve segmentation accuracy.
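The base algorithm this paper extends is standard fuzzy c-means (FCM), whose alternating center/membership updates can be sketched compactly. This is plain FCM on 1-D intensities with illustrative data; the paper's MRF neighborhood constraint, non-local regularization, and bias-field model are not included.

```python
import numpy as np

def fcm(x, c=2, m=2.0, iters=50, seed=0):
    """Plain fuzzy c-means on a 1-D array x with c clusters and fuzzifier m."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                        # memberships sum to 1 per point
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1)     # membership-weighted means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = 1.0 / ((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1))).sum(axis=1)
    return centers, u

# Two clearly separated intensity clusters
x = np.array([0.0, 0.1, 0.2, 0.9, 1.0, 1.1])
centers, u = fcm(x)
labels = u.argmax(axis=0)
print(np.sort(centers))   # converges near the cluster means, ≈ 0.1 and 1.0
```

The paper's contribution is to replace the plain distance `d` with a neighborhood-constrained metric and to add non-local and bias-field terms to the objective, while keeping this alternating update scheme.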
Affiliation(s)
- Jianhua Song
- The Key Laboratory of Intelligent Optimization and Information Processing, Minnan Normal University, Zhangzhou, 363000, China
- College of Physics and Information Engineering, Minnan Normal University, Zhangzhou, 363000, China
- Lei Yuan
- College of Physics and Information Engineering, Minnan Normal University, Zhangzhou, 363000, China
90
Nalepa J. AIM and Brain Tumors. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_284]
91
Suganyadevi S, Seethalakshmi V, Balasamy K. A review on deep learning in medical image analysis. Int J Multimed Inf Retr 2022; 11:19-38. [PMID: 34513553] [PMCID: PMC8417661] [DOI: 10.1007/s13735-021-00218-1]
Abstract
Ongoing improvements in AI, particularly in deep learning techniques, are helping to identify, classify, and quantify patterns in clinical images. Deep learning is the fastest developing field in artificial intelligence and has lately been applied effectively in many areas, including medicine. A brief outline is given of studies carried out by region of application: brain and neuro imaging, retinal, pulmonary, digital pathology, breast, cardiac, bone, stomach, and musculoskeletal. For information exploration, knowledge deployment, and knowledge-based prediction, deep learning networks can be successfully applied to big data. This paper presents fundamental information and state-of-the-art approaches with deep learning in the field of medical image processing and analysis. Its primary goals are to present research on medical image processing and to define and implement the key guidelines that are identified and addressed.
Affiliation(s)
- S. Suganyadevi
- Department of ECE, KPR Institute of Engineering and Technology, Coimbatore, India
- V. Seethalakshmi
- Department of ECE, KPR Institute of Engineering and Technology, Coimbatore, India
- K. Balasamy
- Department of IT, Dr. Mahalingam College of Engineering and Technology, Coimbatore, India
92
Kelly CJ, Brown APY, Taylor JA. Artificial Intelligence in Pediatrics. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_316]
93
Patel G, Dolz J. Weakly supervised segmentation with cross-modality equivariant constraints. Med Image Anal 2022; 77:102374. [DOI: 10.1016/j.media.2022.102374]
94
Myocardial Pathology Segmentation of Multi-modal Cardiac MR Images with a Simple but Efficient Siamese U-shaped Network. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103174]
95
Inamdar MA, Raghavendra U, Gudigar A, Chakole Y, Hegde A, Menon GR, Barua P, Palmer EE, Cheong KH, Chan WY, Ciaccio EJ, Acharya UR. A Review on Computer Aided Diagnosis of Acute Brain Stroke. Sensors (Basel) 2021; 21:8507. [PMID: 34960599] [PMCID: PMC8707263] [DOI: 10.3390/s21248507]
Abstract
Stroke is among the top three causes of death globally, affecting over 100 million people worldwide annually. There are two classes of stroke: ischemic stroke (due to impairment of blood supply, accounting for ~70% of all strokes) and hemorrhagic stroke (due to bleeding), both of which can result, if untreated, in permanently damaged brain tissue. The discovery that affected brain tissue (the 'ischemic penumbra') can be salvaged from permanent damage, together with the burgeoning growth of computer aided diagnosis, has led to major advances in stroke management. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, we surveyed a total of 177 research papers published between 2010 and 2021 to highlight the current status of, and challenges faced by, computer aided diagnosis (CAD), machine learning (ML), and deep learning (DL) based techniques for CT and MRI as the prime modalities for stroke detection and lesion region segmentation. This work concludes by showcasing the current requirements of this domain, the preferred modality, and prospective research areas.
Affiliation(s)
- Mahesh Anil Inamdar
- Department of Mechatronics, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Udupi Raghavendra
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Anjan Gudigar
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Yashas Chakole
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Ajay Hegde
- Department of Neurosurgery, Kasturba Medical College, Manipal Academy of Higher Education, Manipal 576104, India
- Girish R. Menon
- Department of Neurosurgery, Kasturba Medical College, Manipal Academy of Higher Education, Manipal 576104, India
- Prabal Barua
- School of Management & Enterprise, University of Southern Queensland, Toowoomba, QLD 4350, Australia
- Faculty of Engineering and Information Technology, University of Technology, Sydney, NSW 2007, Australia
- Cogninet Brain Team, Cogninet Australia, Sydney, NSW 2010, Australia
- Elizabeth Emma Palmer
- School of Women’s and Children’s Health, University of New South Wales, Sydney, NSW 2052, Australia
- Kang Hao Cheong
- Science, Mathematics and Technology Cluster, Singapore University of Technology and Design, Singapore 487372, Singapore
- Wai Yee Chan
- Department of Biomedical Imaging, Research Imaging Centre, University of Malaya, Kuala Lumpur 59100, Malaysia
- Edward J. Ciaccio
- Department of Medicine, Columbia University, New York, NY 10032, USA
- U. Rajendra Acharya
- Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur 50603, Malaysia
- School of Engineering, Ngee Ann Polytechnic, Singapore 599489, Singapore
- Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore 599491, Singapore
- Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung 41354, Taiwan
|
96
|
Fernandes FE, Yen GG. Automatic Searching and Pruning of Deep Neural Networks for Medical Imaging Diagnostic. IEEE Trans Neural Netw Learn Syst 2021; 32:5664-5674. [PMID: 33048758] [DOI: 10.1109/tnnls.2020.3027308]
Abstract
The field of medical imaging diagnostics makes use of a variety of imaging tests, e.g., X-rays, ultrasounds, computed tomographies, and magnetic resonance imaging, to assist physicians with the diagnosis of patients' illnesses. Due to their state-of-the-art results in many challenging image classification tasks, deep neural networks (DNNs) are suitable tools for physicians to use for diagnostic support when dealing with medical images. To further advance the field, the present work proposes a two-phase algorithm, called here DNNDeepeningPruning, capable of automatically generating compact DNN architectures for a given database. In the first phase, also called the deepening phase, the algorithm grows a DNN by adding blocks of residual layers one after another until the model overfits the given data. In the second phase, called the pruning phase, the algorithm prunes the DNN model created in the first phase, guided by a user-given preference, to produce a DNN requiring a small number of floating-point operations. The proposed algorithm unifies the two separate fields of DNN architecture search and pruning under a single framework, and it is tested on two medical imaging data sets with satisfactory results.
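The deepen-then-prune control flow described in this abstract can be sketched in a few lines. This is a deliberately simplified illustration, not the authors' implementation: the function names, the validation-error stopping rule (a stand-in for the paper's overfitting criterion), and the importance-based channel ranking are all assumptions.

```python
def deepen(val_err, max_depth=20):
    """Phase 1 (deepening): keep adding residual blocks while the
    validation error still improves; stop at the first sign of overfit."""
    depth, best = 1, val_err(1)
    while depth < max_depth:
        nxt = val_err(depth + 1)
        if nxt >= best:          # no improvement: the model has begun to overfit
            break
        depth, best = depth + 1, nxt
    return depth

def prune(channels, importance, keep_ratio):
    """Phase 2 (pruning): keep only the highest-importance channels,
    trading a little accuracy for fewer floating-point operations."""
    k = max(1, int(len(channels) * keep_ratio))
    ranked = sorted(channels, key=importance, reverse=True)
    return sorted(ranked[:k])

# Toy run: validation error bottoms out at depth 5, so deepening stops there;
# pruning with keep_ratio=0.5 retains the top half of 8 channels.
print(deepen(lambda d: abs(d - 5)))             # -> 5
print(prune(list(range(8)), lambda c: c, 0.5))  # -> [4, 5, 6, 7]
```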
|
97
|
Ji S, Jeong J, Oh SH, Nam Y, Choi SH, Shin HG, Shin D, Jung W, Lee J. Quad-Contrast Imaging: Simultaneous Acquisition of Four Contrast-Weighted Images (PD-Weighted, T₂-Weighted, PD-FLAIR and T₂-FLAIR Images) With Synthetic T₁-Weighted Image, T₁- and T₂-Maps. IEEE Trans Med Imaging 2021; 40:3617-3626. [PMID: 34191724] [DOI: 10.1109/tmi.2021.3093617]
Abstract
Magnetic resonance imaging (MRI) can provide multiple contrast-weighted images using different pulse sequences and protocols. However, the long acquisition time of these images is a major challenge. To address this limitation, a new pulse sequence referred to as quad-contrast imaging is presented. The quad-contrast sequence enables the simultaneous acquisition of four contrast-weighted images (proton density (PD)-weighted, T2-weighted, PD-fluid attenuated inversion recovery (FLAIR), and T2-FLAIR), along with synthetic T1-weighted images and T1- and T2-maps, in a single scan. The scan time is less than 6 min and is further reduced to 2 min 50 s using a deep learning-based parallel imaging reconstruction. The natively acquired quad contrasts demonstrate high image quality, comparable to that of conventional scans. The deep learning-based reconstruction successfully reconstructed highly accelerated data (acceleration factor 6), reporting smaller normalized root mean squared errors (NRMSEs) and higher structural similarities (SSIMs) than conventional generalized autocalibrating partially parallel acquisitions (GRAPPA) reconstruction (mean NRMSE of 4.36% vs. 10.54% and mean SSIM of 0.990 vs. 0.953). In particular, the FLAIR contrast is natively acquired and does not suffer from lesion-like artifacts at the boundary of tissue and cerebrospinal fluid, differentiating the proposed method from synthetic imaging methods. Quad-contrast imaging may have the potential to be used in clinical routine as a rapid diagnostic tool.
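For reference, the NRMSE and SSIM figures quoted above can be computed as sketched below. This is a generic sketch under stated assumptions: NRMSE is normalized here by the reference range (one common convention; the paper's exact normalizer may differ), and the SSIM shown is the single-window global form, whereas standard libraries such as scikit-image average a sliding window.

```python
import math

def nrmse_percent(ref, test):
    """Root mean squared error, normalized by the reference range, in percent."""
    rmse = math.sqrt(sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref))
    return 100.0 * rmse / (max(ref) - min(ref))

def ssim_global(x, y, data_range=1.0, k1=0.01, k2=0.03):
    """Single-window SSIM over the whole signal (global means/variances)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

# A perfect reconstruction gives 0% NRMSE and SSIM = 1.
img = [0.1, 0.5, 0.9, 0.3]
print(nrmse_percent(img, img), ssim_global(img, img))
```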
|
98
|
Li J, Udupa JK, Odhner D, Tong Y, Torigian DA. SOMA: Subject-, object-, and modality-adapted precision atlas approach for automatic anatomy recognition and delineation in medical images. Med Phys 2021; 48:7806-7825. [PMID: 34668207] [PMCID: PMC8678400] [DOI: 10.1002/mp.15308]
Abstract
PURPOSE In the multi-atlas segmentation (MAS) method, an atlas set large enough to cover the complete spectrum of population patterns of the target object benefits segmentation quality. However, the difficulty of obtaining and generating such a large atlas set, and the computational burden the segmentation procedure then requires, make this approach impractical. In this paper, we propose a method called SOMA that selects subject-, object-, and modality-adapted precision atlases for automatic anatomy recognition in medical images with pathology. It follows the idea that different regions of the target object in a novel image can be recognized by different atlases with regionally best similarity, so that effective atlases need to be neither globally similar to the target subject nor similar to the target object overall. METHODS The SOMA method consists of three main components: atlas building, object recognition, and object delineation. To limit computational complexity, we utilize an all-to-template strategy that aligns all images to a common image space belonging to a root image, determined by a minimum spanning tree (MST) strategy over a subset of radiologically near-normal images. The object recognition process is composed of two stages: rough recognition and refined recognition. In rough recognition, subimage matching is conducted between the test image and each image of the whole atlas set, and only the atlas corresponding to the best-matched subimage contributes to the recognition map regionally. The frequency of best matches for each atlas is recorded by a counter, and the atlases with the highest frequencies are selected as the precision atlases. In refined recognition, only the precision atlases are examined, and the subimage matching is conducted with a nonlocal search to further increase the accuracy of boundary matching. Delineation is based on a U-net-style deep learning network, where the original gray-scale image together with the fuzzy map from refined recognition forms a two-channel input, and the output is a segmentation map of the target object. RESULTS Experiments are conducted on computed tomography (CT) images of varying quality in two body regions - head and neck (H&N) and thorax - from 298 subjects with nine objects and 241 subjects with six objects, respectively. Most objects achieve a localization error within two voxels after refined recognition, with marked improvements in localization accuracy from rough to refined recognition of 0.6-3 mm in H&N and 0.8-4.9 mm in thorax, and in delineation accuracy (Dice coefficient) from refined recognition to delineation of 0.01-0.11 in H&N and 0.01-0.18 in thorax. CONCLUSIONS The SOMA method shows high accuracy and robustness in anatomy recognition and delineation. The improvements from rough to refined recognition and further to delineation, as well as the immunity of recognition accuracy to varying image and object qualities, demonstrate the core principles of SOMA, whereby segmentation accuracy increases with precision atlases and gradually refined object matching.
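The Dice coefficient used above to report delineation accuracy has a direct set-overlap definition, 2|A∩B| / (|A| + |B|). A minimal sketch on flattened binary masks (a generic version of the metric, not the paper's evaluation code):

```python
def dice(mask_a, mask_b):
    """Dice coefficient 2|A∩B| / (|A| + |B|) for flattened 0/1 masks."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    return 2.0 * inter / (sum(mask_a) + sum(mask_b))

# One overlapping foreground voxel, masks with 2 and 1 foreground voxels:
# Dice = 2*1 / (2+1) = 0.667
print(dice([1, 1, 0, 0], [1, 0, 0, 0]))
```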
Affiliation(s)
- Jieyu Li
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Shanghai, China
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Jayaram K. Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Dewey Odhner
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Drew A. Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
|
99
|
Stegeman R, Feldmann M, Claessens NHP, Jansen NJG, Breur JMPJ, de Vries LS, Logeswaran T, Reich B, Knirsch W, Kottke R, Hagmann C, Latal B, Simpson J, Pushparajah K, Bonthrone AF, Kelly CJ, Arulkumaran S, Rutherford MA, Counsell SJ, Benders MJNL. A Uniform Description of Perioperative Brain MRI Findings in Infants with Severe Congenital Heart Disease: Results of a European Collaboration. AJNR Am J Neuroradiol 2021; 42:2034-2039. [PMID: 34674999] [PMCID: PMC8583253] [DOI: 10.3174/ajnr.a7328]
Abstract
BACKGROUND AND PURPOSE A uniform description of brain MR imaging findings in infants with severe congenital heart disease to assess risk factors, predict outcome, and compare centers is lacking. Our objective was to uniformly describe the spectrum of perioperative brain MR imaging findings in infants with congenital heart disease. MATERIALS AND METHODS Prospective observational studies were performed at 3 European centers between 2009 and 2019. Brain MR imaging was performed preoperatively and/or postoperatively in infants with transposition of the great arteries, single-ventricle physiology, or left ventricular outflow tract obstruction undergoing cardiac surgery within the first 6 weeks of life. Brain injury was assessed on T1, T2, DWI, SWI, and MRV. A subsample of images was assessed jointly to reach a consensus. RESULTS A total of 348 MR imaging scans (180 preoperatively, 168 postoperatively, 146 pre- and postoperatively) were obtained in 202 infants. Preoperative, new postoperative, and cumulative postoperative white matter injury was identified in 25%, 30%, and 36%; arterial ischemic stroke, in 6%, 10%, and 14%; hypoxic-ischemic watershed injury, in 2%, 1%, and 1%; intraparenchymal cerebral hemorrhage, in 0%, 4%, and 5%; cerebellar hemorrhage, in 6%, 2%, and 6%; intraventricular hemorrhage, in 14%, 6%, and 13%; subdural hemorrhage, in 29%, 17%, and 29%; and cerebral sinovenous thrombosis, in 0%, 10%, and 10%, respectively. CONCLUSIONS A broad spectrum of perioperative brain MR imaging findings was found in infants with severe congenital heart disease. We propose an MR imaging protocol including T1-, T2-, diffusion-, and susceptibility-weighted imaging, and MRV to identify the ischemic, hemorrhagic, and thrombotic lesions observed in this patient group.
Affiliation(s)
- R Stegeman
- From the Departments of Neonatology (R.S., N.H.P.C., L.S.d.V., M.J.N.L.B.)
- Pediatric Intensive Care (R.S., N.H.P.C., N.J.G.J.)
- Pediatric Cardiology (R.S., N.H.P.C., J.M.P.J.B.), Wilhelmina Children's Hospital, UMC Utrecht, Utrecht, the Netherlands
- Utrecht Brain Center (R.S., L.S.d.V., M.J.N.L.B.), UMC Utrecht, Utrecht University, Utrecht, the Netherlands
- N H P Claessens
- From the Departments of Neonatology (R.S., N.H.P.C., L.S.d.V., M.J.N.L.B.)
- Pediatric Intensive Care (R.S., N.H.P.C., N.J.G.J.)
- Pediatric Cardiology (R.S., N.H.P.C., J.M.P.J.B.), Wilhelmina Children's Hospital, UMC Utrecht, Utrecht, the Netherlands
- N J G Jansen
- Pediatric Intensive Care (R.S., N.H.P.C., N.J.G.J.)
- Department of Pediatrics (N.J.G.J.), Beatrix Children's Hospital, UMC Groningen, Groningen, the Netherlands
- J M P J Breur
- Pediatric Cardiology (R.S., N.H.P.C., J.M.P.J.B.), Wilhelmina Children's Hospital, UMC Utrecht, Utrecht, the Netherlands
- L S de Vries
- From the Departments of Neonatology (R.S., N.H.P.C., L.S.d.V., M.J.N.L.B.)
- Utrecht Brain Center (R.S., L.S.d.V., M.J.N.L.B.), UMC Utrecht, Utrecht University, Utrecht, the Netherlands
- T Logeswaran
- Pediatric Heart Center (T.L., B.R.), University Hospital Giessen, Justus-Liebig-University Giessen, Giessen, Germany
- B Reich
- Pediatric Heart Center (T.L., B.R.), University Hospital Giessen, Justus-Liebig-University Giessen, Giessen, Germany
- W Knirsch
- Division of Pediatric Cardiology (W.K.), Pediatric Heart Center
- R Kottke
- Department of Diagnostic Imaging (R.K.)
- C Hagmann
- Department of Neonatology and Pediatric Intensive Care (C.H.), University Children's Hospital Zurich, Zurich, Switzerland
- B Latal
- Child Development Center (M.F., B.L.)
- J Simpson
- Department of Pediatric Cardiology (J.S., K.P.), Evelina Children's Hospital London, London, UK
- K Pushparajah
- Department of Pediatric Cardiology (J.S., K.P.), Evelina Children's Hospital London, London, UK
- Centre for the Developing Brain (K.P., A.F.B., C.J.K., S.A., M.A.R., S.J.C.), School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- A F Bonthrone
- Centre for the Developing Brain (K.P., A.F.B., C.J.K., S.A., M.A.R., S.J.C.), School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- C J Kelly
- Centre for the Developing Brain (K.P., A.F.B., C.J.K., S.A., M.A.R., S.J.C.), School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- S Arulkumaran
- Centre for the Developing Brain (K.P., A.F.B., C.J.K., S.A., M.A.R., S.J.C.), School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- M A Rutherford
- Centre for the Developing Brain (K.P., A.F.B., C.J.K., S.A., M.A.R., S.J.C.), School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- S J Counsell
- Centre for the Developing Brain (K.P., A.F.B., C.J.K., S.A., M.A.R., S.J.C.), School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- M J N L Benders
- From the Departments of Neonatology (R.S., N.H.P.C., L.S.d.V., M.J.N.L.B.)
- Utrecht Brain Center (R.S., L.S.d.V., M.J.N.L.B.), UMC Utrecht, Utrecht University, Utrecht, the Netherlands
|
100
|
Hai J, Qiao K, Chen J, Liang N, Zhang L, Yan B. Multi-view features integrated 2D\3D Net for glomerulopathy histologic types classification using ultrasound images. Comput Methods Programs Biomed 2021; 212:106439. [PMID: 34695734] [DOI: 10.1016/j.cmpb.2021.106439]
Abstract
BACKGROUND AND OBJECTIVE Early diagnosis and rational therapeutics of glomerulopathy can control progression and improve prognosis. The gold standard for diagnosing glomerulopathy is pathology by renal biopsy, which is invasive and has many contraindications. We aim to use renal ultrasonography for histologic classification of glomerulopathy. METHODS Ultrasonography can present multi-view sections of the kidney; we therefore propose a multi-view, cross-domain integration strategy (CD-ConcatNet) to obtain more effective features and improve diagnostic accuracy. We apply 2D group convolution and 3D convolution to process multiple 2D ultrasound images and extract multi-view features from renal ultrasound images. Cross-domain concatenation at each spatial resolution of the feature maps is applied for more informative feature learning. RESULTS A total of 76 adult patients were enrolled and divided into a training dataset (56 cases with 515 images) and a validation dataset (20 cases with 180 images). We obtained a best mean accuracy of 0.83 and an AUC of 0.8667 on the validation dataset. CONCLUSION Comparison experiments demonstrate that the designed CD-ConcatNet achieves the best classification performance and is clearly superior for diagnosing histologic types. The results also show that integrating multi-view ultrasound images is beneficial for histologic classification and that ultrasound images can indeed provide discriminating information for histologic diagnosis.
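The AUC reported above can be read via the rank-sum (Mann-Whitney) formulation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A generic sketch of that computation (not the authors' evaluation code):

```python
def auc(labels, scores):
    """AUC as the fraction of (positive, negative) pairs ranked correctly,
    counting ties as half a win (Mann-Whitney U / (n_pos * n_neg))."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# 3 of the 4 positive/negative pairs are ordered correctly -> AUC = 0.75
print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))
```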
Affiliation(s)
- Jinjin Hai
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategy Support Force Information Engineering University, China
- Kai Qiao
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategy Support Force Information Engineering University, China
- Jian Chen
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategy Support Force Information Engineering University, China
- Ningning Liang
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategy Support Force Information Engineering University, China
- Lijie Zhang
- Department of Nephrology, First Affiliated Hospital of Zhengzhou University, China
- Bin Yan
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategy Support Force Information Engineering University, China
|