101
Rai HM, Chatterjee K. 2D MRI image analysis and brain tumor detection using deep learning CNN model LeU-Net. Multimedia Tools and Applications 2021; 80:36111-36141. [DOI: 10.1007/s11042-021-11504-9]
102
Verma SS, Prasad A, Kumar A. CovXmlc: High performance COVID-19 detection on X-ray images using Multi-Model classification. Biomed Signal Process Control 2021; 71:103272. [PMID: 34691234] [PMCID: PMC8526503] [DOI: 10.1016/j.bspc.2021.103272]
Abstract
The Coronavirus Disease 2019 (COVID-19) outbreak has had a devastating impact on health and the economy globally, making rapid diagnosis of positive cases critical. Currently, the most effective test to detect COVID-19 is reverse transcription-polymerase chain reaction (RT-PCR), which is time-consuming, expensive, and sometimes inaccurate. Many studies have found radiography promising, extracting diagnostic features from X-rays, which motivates the use of deep learning to detect COVID-19 patients rapidly. This paper classifies X-ray images into COVID-19 and normal using a multi-model classification process that incorporates a Support Vector Machine (SVM) in the last layer of a VGG16 convolutional network. For synchronization between VGG16 and the SVM, we added one more layer of convolution, pooling, and dense units between them. Further, for transformation and discovering the best result, we used the Radial Basis Function. CovXmlc is compared with five existing models using different parameters and metrics. The results show that the proposed CovXmlc reached accuracy of up to 95% with a minimal dataset, significantly higher than the existing models. It also performs better on other metrics such as recall, precision and F-score.
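As a rough illustration of the Radial Basis Function transformation mentioned in the abstract, the Gaussian RBF kernel over CNN feature vectors can be sketched as follows (the feature values below are hypothetical stand-ins for pooled VGG16 activations, not the authors' data):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.1):
    # Gaussian RBF kernel: k(x, y) = exp(-gamma * ||x - y||^2)
    sq_dists = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq_dists)

# Hypothetical pooled CNN features for three X-ray images
feats = np.array([[0.2, 1.1], [0.3, 0.9], [2.5, 0.1]])
K = rbf_kernel(feats, feats)  # 3x3 similarity matrix fed to the SVM
```

Images with similar feature vectors (the first two rows) receive a higher kernel similarity than dissimilar ones.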
Affiliation(s)
- Ajay Prasad
- SCS, University of Petroleum and Energy Studies, Dehradun, India
103
Segmentation of Overlapping Cervical Cells with Mask Region Convolutional Neural Network. Computational and Mathematical Methods in Medicine 2021; 2021:3890988. [PMID: 34646333] [PMCID: PMC8505098] [DOI: 10.1155/2021/3890988]
Abstract
The task of segmenting cytoplasm in cytology images is one of the most challenging tasks in cervical cytological analysis due to the presence of fuzzy and highly overlapping cells. Deep learning-based diagnostic technology has proven effective in segmenting complex medical images. We present a two-stage framework based on Mask RCNN to automatically segment overlapping cells. In stage one, candidate cytoplasm bounding boxes are proposed. In stage two, pixel-to-pixel alignment is used to refine the boundary and category classification is performed. The performance of the proposed method is evaluated on the publicly available ISBI 2014 and 2015 datasets. The experimental results demonstrate that our method outperforms other state-of-the-art approaches, with a DSC of 0.92 and an FPRp of 0.0008 at a DSC threshold of 0.8. These results indicate that our Mask RCNN-based segmentation method can be effective in cytological analysis.
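The DSC figure quoted above is the Dice similarity coefficient; a minimal sketch of how it is computed for two binary masks (the pixel sets here are illustrative):

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks given as sets of pixel coordinates."""
    if not pred and not truth:
        return 1.0  # two empty masks agree perfectly
    overlap = len(pred & truth)
    return 2.0 * overlap / (len(pred) + len(truth))

# Toy predicted and ground-truth cytoplasm masks (pixel coordinates)
pred = {(0, 0), (0, 1), (1, 0), (1, 1)}
truth = {(0, 1), (1, 0), (1, 1), (2, 1)}
score = dice_coefficient(pred, truth)  # 2*3 / (4+4) = 0.75
```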
104
Ghalati MK, Nunes A, Ferreira H, Serranho P, Bernardes R. Texture Analysis and its Applications in Biomedical Imaging: A Survey. IEEE Rev Biomed Eng 2021; 15:222-246. [PMID: 34570709] [DOI: 10.1109/rbme.2021.3115703]
Abstract
Texture analysis describes a variety of image analysis techniques that quantify the variation in intensity and pattern. This paper provides an overview of several texture analysis approaches, addressing the rationale supporting them, their advantages, drawbacks, and applications. This survey's emphasis is on collecting and categorising over five decades of active research on texture analysis. Brief descriptions of different approaches are presented along with application examples. From a broad range of texture analysis applications, this survey's final focus is on biomedical image analysis. An up-to-date list of biological tissues and organs in which disorders produce texture changes that may be used to spot disease onset and progression is provided. Finally, the role of texture analysis methods as biomarkers of disease is summarised.
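One classical statistical texture descriptor covered by surveys of this kind is the gray-level co-occurrence matrix (GLCM); a minimal sketch for a single pixel offset (the tiny 4-level image is illustrative):

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy)."""
    M = np.zeros((levels, levels), dtype=int)
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            M[image[y, x], image[y + dy, x + dx]] += 1
    return M

# Toy 4x4 image with 4 gray levels
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
M = glcm(img)
p = M / M.sum()  # normalize counts to co-occurrence probabilities
# Contrast texture feature: sum over (i, j) of (i - j)^2 * p(i, j)
contrast = sum((i - j) ** 2 * p[i, j] for i in range(4) for j in range(4))
```

Statistics such as contrast, energy, and homogeneity computed from the GLCM are typical hand-crafted texture features.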
105
Lotlikar VS, Satpute N, Gupta A. Brain Tumor Detection Using Machine Learning and Deep Learning: A Review. Curr Med Imaging 2021; 18:604-622. [PMID: 34561990] [DOI: 10.2174/1573405617666210923144739]
Abstract
According to the International Agency for Research on Cancer (IARC), the mortality rate due to brain tumors is 76%. Brain tumors must be detected as early as possible so that the patient can receive the required treatment and fatal outcomes can be avoided. With recent advancements in technology, it is possible to automatically detect tumors from images such as magnetic resonance imaging (MRI) and computed tomography scans using computer-aided diagnosis. Machine learning and deep learning techniques, especially convolutional neural networks (CNNs), have gained significance among researchers in medical fields due to their ability to analyze large amounts of complex image data and perform classification. The objective of this review article is to present an exhaustive study of the preprocessing, machine learning, and deep learning techniques adopted in the last 15 years and, based on it, a detailed comparative analysis. The challenges researchers have encountered in tumor detection are discussed, along with directions that can be taken as future work. Clinical challenges, which are missing from existing review articles, are also discussed.
Affiliation(s)
- Venkatesh S Lotlikar
- MTech scholar, Department of E&TC Engineering, College of Engineering Pune, India
- Nitin Satpute
- Electrical and Computer Engineering, Aarhus University, Denmark
- Aditya Gupta
- Adjunct Faculty, Department of E&TC Engineering, College of Engineering Pune, India
106
Sui B, Lv J, Tong X, Li Y, Wang C. Simultaneous image reconstruction and lesion segmentation in accelerated MRI using multitasking learning. Med Phys 2021; 48:7189-7198. [PMID: 34542180] [DOI: 10.1002/mp.15213]
Abstract
PURPOSE: Magnetic resonance imaging (MRI) is an important medical imaging modality for a variety of clinical applications, but long imaging times limit its wide use. In addition, prolonged scan times cause patient discomfort and motion, leading to severe image artifacts. Meanwhile, manual lesion segmentation is time consuming, and algorithm-based automatic segmentation remains challenging, especially for low-quality accelerated imaging. METHODS: In this paper, we propose a multitask learning-based method, called "RecSeg", that performs image reconstruction and lesion segmentation simultaneously. Our hypothesis is that both tasks benefit from the combined model. In the experiments, we validated the proposed multitask model on MR k-space data with different acceleration factors (2×, 4×, and 6×). Two connected U-nets were used for the tasks of liver and renal image reconstruction and segmentation. A total of 50 healthy subjects and 100 patients with hepatocellular carcinoma were included for training and testing. For the segmentation part, healthy subjects were used to verify organ segmentation, and hepatocellular carcinoma patients to verify lesion segmentation. The organs and lesions were manually contoured by an experienced radiologist. RESULTS: Experimental results show that the proposed RecSeg yielded the highest PSNR (RecSeg: 32.39 ± 1.64 vs. KSVD: 29.53 ± 2.74 and single U-net: 31.18 ± 1.68, p < 0.05) and the highest structural similarity index measure (SSIM) (RecSeg: 0.93 ± 0.01 vs. KSVD: 0.88 ± 0.02 and single U-net: 0.90 ± 0.01, p < 0.05) under 6× acceleration. Moreover, in the task of lesion segmentation, the proposed RecSeg produced the highest Dice score (RecSeg: 0.86 ± 0.01 vs. KSVD: 0.82 ± 0.01 and single U-net: 0.84 ± 0.01, p < 0.05).
CONCLUSIONS: This study focused on the simultaneous reconstruction of medical images and the segmentation of organs and lesions. The multitask learning-based method is observed to improve the performance of both image reconstruction and lesion segmentation.
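The PSNR values reported above follow the standard peak signal-to-noise ratio definition; a minimal sketch (the 2×2 images are illustrative, not the study's data):

```python
import numpy as np

def psnr(reference, reconstruction, max_val=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and a reconstruction."""
    mse = np.mean((reference - reconstruction) ** 2)
    return 10 * np.log10(max_val ** 2 / mse)

# Toy reference image and slightly degraded reconstruction, intensities in [0, 1]
ref = np.array([[0.0, 0.5], [0.5, 1.0]])
rec = np.array([[0.1, 0.5], [0.5, 0.9]])
value = psnr(ref, rec)  # MSE = 0.005, so 10*log10(1/0.005) ≈ 23.01 dB
```

Higher PSNR means the reconstruction is closer to the fully sampled reference.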
Affiliation(s)
- Bin Sui
- School of Computer and Control Engineering, Yantai University, Yantai, China
- Jun Lv
- School of Computer and Control Engineering, Yantai University, Yantai, China
- Xiangrong Tong
- School of Computer and Control Engineering, Yantai University, Yantai, China
- Yan Li
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Chengyan Wang
- Human Phenome Institute, Fudan University, Shanghai, China
107
Zhang YN, Xia KR, Li CY, Wei BL, Zhang B. Review of Breast Cancer Pathological Image Processing. BioMed Research International 2021; 2021:1994764. [PMID: 34595234] [PMCID: PMC8478535] [DOI: 10.1155/2021/1994764]
Abstract
Breast cancer is one of the most common malignancies. Pathological image processing of the breast has become an important means for early diagnosis of breast cancer, and using medical image processing to assist doctors in detecting potential breast cancer as early as possible has long been a hot topic in medical image diagnosis. In this paper, a breast cancer recognition method based on image processing is systematically expounded from four aspects: breast cancer detection, image segmentation, image registration, and image fusion. The achievements and scope of application of supervised learning, unsupervised learning, deep learning, CNNs, and related techniques in breast cancer examination are expounded. The prospects of unsupervised learning and transfer learning for breast cancer diagnosis are discussed. Finally, the privacy protection of breast cancer patients is addressed.
Affiliation(s)
- Ya-nan Zhang
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin 150080, China
- HRG International Institute (Hefei) of Research and Innovation, Hefei 230000, China
- Ke-rui Xia
- HRG International Institute (Hefei) of Research and Innovation, Hefei 230000, China
- Chang-yi Li
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin 150080, China
- Ben-li Wei
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin 150080, China
- Bing Zhang
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin 150080, China
108
Vieira PM, Freitas NR, Lima VB, Costa D, Rolanda C, Lima CS. Multi-pathology detection and lesion localization in WCE videos by using the instance segmentation approach. Artif Intell Med 2021; 119:102141. [PMID: 34531016] [DOI: 10.1016/j.artmed.2021.102141]
Abstract
Most current systems for automatic diagnosis consider the detection of a single, previously known pathology. For the diagnosis of lesions in the small bowel using endoscopic capsule images specifically, very few systems consider the possible existence of more than one pathology, and those that do are mainly detection-based and therefore unable to localize the suspected lesions. Such systems do not fully satisfy the medical community, which needs a system that detects any pathology, including several coexisting ones. Besides diagnostic capability, localizing lesions in the image is of great interest to the medical community, mainly for training medical personnel, so the inclusion of lesion location in automatic diagnostic systems is now practically mandatory. Multi-pathology detection can be seen as a multi-object detection task, and since each frame can contain different instances of the same lesion, instance segmentation is appropriate for the purpose. We therefore argue that a multi-pathology system benefits from the instance segmentation approach, since classification and segmentation modules complement each other in lesion detection and localization. To the best of our knowledge, such a system does not yet exist for the detection of WCE pathologies. This paper proposes a multi-pathology system for WCE images that uses the Mask Improved RCNN (MI-RCNN), a new mask subnet scheme shown to significantly improve the mask predictions of the high-performing state-of-the-art Mask-RCNN and PANet systems. A novel training strategy based on the second momentum is also proposed for the first time for training Mask-RCNN- and PANet-based systems. These approaches were tested on the public KID database; the included pathologies were bleeding, angioectasias, polyps, and inflammatory lesions.
Experimental results show significant improvements for the proposed versions, reaching increases of almost 7% over the PANet model when the new training approach was employed.
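The "second momentum" mentioned above suggests an Adam-style second-moment estimate (an exponential moving average of squared gradients); the standard Adam update is shown here as a generic sketch, not the authors' exact scheme:

```python
def adam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: first (m) and second (v) moment estimates with bias correction."""
    m = beta1 * m + (1 - beta1) * grad          # first momentum (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2     # second momentum (mean of squared gradients)
    m_hat = m / (1 - beta1 ** t)                # bias-corrected estimates at step t
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v

# One update of a single scalar parameter with a hypothetical gradient
p, m, v = 1.0, 0.0, 0.0
p, m, v = adam_step(p, grad=0.5, m=m, v=v, t=1)
```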
Affiliation(s)
- Pedro M Vieira
- CMEMS-UMinho Research Unit, Universidade do Minho, Guimarães, Portugal
- Nuno R Freitas
- CMEMS-UMinho Research Unit, Universidade do Minho, Guimarães, Portugal
- Veríssimo B Lima
- CMEMS-UMinho Research Unit, Universidade do Minho, Guimarães, Portugal; School of Engineering (ISEP), Polytechnic Institute of Porto (P.PORTO), Porto, Portugal
- Dalila Costa
- Life and Health Sciences Research Institute, University of Minho, Campus Gualtar, 4710-057 Braga, Portugal; ICVS/3Bs - PT Government Associate Laboratory, Braga/Guimarães, Portugal; Department of Gastroenterology, Hospital de Braga, Braga, Portugal
- Carla Rolanda
- Life and Health Sciences Research Institute, University of Minho, Campus Gualtar, 4710-057 Braga, Portugal; ICVS/3Bs - PT Government Associate Laboratory, Braga/Guimarães, Portugal; Department of Gastroenterology, Hospital de Braga, Braga, Portugal
- Carlos S Lima
- CMEMS-UMinho Research Unit, Universidade do Minho, Guimarães, Portugal
109
Li B, You X, Wang J, Peng Q, Yin S, Qi R, Ren Q, Hong Z. IAS-NET: Joint intraclassly adaptive GAN and segmentation network for unsupervised cross-domain in neonatal brain MRI segmentation. Med Phys 2021; 48:6962-6975. [PMID: 34494276] [DOI: 10.1002/mp.15212]
Abstract
PURPOSE: In neonatal brain magnetic resonance image (MRI) segmentation, a model trained on the training set (source domain) often performs poorly in clinical practice (target domain). As labels for target-domain images are unavailable, this cross-domain segmentation needs unsupervised domain adaptation (UDA) to make the model adapt to the target domain. However, the shape and intensity distribution of neonatal brain MRI images across the domains differ greatly from adults'. Current UDA methods aim to make synthesized images similar to the target domain as a whole, but the regional misalignment caused by the cross-domain difference makes it impossible to synthesize images with intraclass similarity, so intraclassly incorrect intensity information is generated from target-domain images. To address this issue, we propose IAS-NET, a framework joining an intraclassly adaptive generative adversarial network (IA-NET) with a segmentation network to bridge the gap between the two domains for intraclass alignment. METHODS: The proposed IAS-NET is a learning framework that transfers the appearance of images across the domains from both the image and feature perspectives. It consists of the proposed IA-NET and a segmentation network (S-NET). IA-NET is a GAN-based adaptive network containing one generator (two encoders and one shared decoder) and four discriminators for cross-domain transfer. The two encoders extract original-image, mean, and variance features from the source and target domains. The proposed local adaptive instance normalization algorithm performs intraclass feature alignment to the target domain at the feature-map level. S-NET is a U-net structure network that provides a semantic constraint through a segmentation loss for the training of IA-NET; meanwhile, it offers pseudo-label images for calculating intraclass features of the target domain.
Source code (in TensorFlow) is available at https://github.com/lb-whu/RAS-NET/. RESULTS: Extensive experiments were carried out on two different data sets (NeoBrainS12 and dHCP). There are great differences in the shape, size, and intensity distribution of the magnetic resonance (MR) images in the two databases. Compared to the baseline, we improve the average Dice score of all tissues on NeoBrainS12 by 6% through adaptive training with unlabeled dHCP images. We also conduct experiments on dHCP and improve the average Dice score by 4%. Quantitative analysis of the mean and variance of the synthesized images shows that the images synthesized by the proposed method are closer to the target domain, both over the full brain and within each class, than those of the compared methods. CONCLUSIONS: The proposed IAS-NET effectively improves the performance of S-NET through its intraclass feature alignment in the target domain. Compared to current UDA methods, the images synthesized by IAS-NET are more intraclassly similar to the target domain for neonatal brain MR images. It therefore achieves state-of-the-art results among the compared UDA models for the segmentation task.
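The local adaptive instance normalization described above builds on the standard adaptive instance normalization (AdaIN) operation; a global per-channel sketch (the authors' variant is class-local, which is not reproduced here):

```python
import numpy as np

def adaptive_instance_norm(content, target_mean, target_std, eps=1e-5):
    """Align a feature map's per-channel mean/std to target statistics (AdaIN-style)."""
    mu = content.mean(axis=(1, 2), keepdims=True)    # per-channel mean
    sigma = content.std(axis=(1, 2), keepdims=True)  # per-channel std
    normalized = (content - mu) / (sigma + eps)
    return normalized * target_std + target_mean

# One-channel 8x8 feature map with arbitrary source-domain statistics
feat = np.random.default_rng(0).normal(5.0, 2.0, size=(1, 8, 8))
out = adaptive_instance_norm(feat, target_mean=0.0, target_std=1.0)
```

After the operation, the feature map carries the target domain's first- and second-order statistics while keeping its spatial structure.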
Affiliation(s)
- Bo Li
- School of Electronic Information and Communication, Huazhong University of Science and Technology, Wuhan, China
- Xinge You
- School of Electronic Information and Communication, Huazhong University of Science and Technology, Wuhan, China; Shenzhen Research Institute, Huazhong University of Science and Technology, Shenzhen, China
- Jing Wang
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Qinmu Peng
- School of Electronic Information and Communication, Huazhong University of Science and Technology, Wuhan, China; Shenzhen Research Institute, Huazhong University of Science and Technology, Shenzhen, China
- Shi Yin
- School of Electronic Information and Communication, Huazhong University of Science and Technology, Wuhan, China
- Ruinan Qi
- Department of Radiology, Huazhong University of Science and Technology Hospital, Wuhan, China
- Qianqian Ren
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan, China
- Ziming Hong
- School of Electronic Information and Communication, Huazhong University of Science and Technology, Wuhan, China
110
GSCFN: A graph self-construction and fusion network for semi-supervised brain tissue segmentation in MRI. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.05.047]
111
Niyas S, Chethana Vaisali S, Show I, Chandrika T, Vinayagamani S, Kesavadas C, Rajan J. Segmentation of focal cortical dysplasia lesions from magnetic resonance images using 3D convolutional neural networks. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102951]
112
Krishna Priya R, Chacko S. Improved particle swarm optimized deep convolutional neural network with super-pixel clustering for multiple sclerosis lesion segmentation in brain MRI imaging. International Journal for Numerical Methods in Biomedical Engineering 2021; 37:e3506. [PMID: 34181310] [DOI: 10.1002/cnm.3506]
Abstract
Multiple sclerosis (MS) is a central nervous system (CNS) disease affecting the insulating myelin sheaths around the brain axons. Today, MS is extensively diagnosed and monitored using MRI because of structural MRI's sensitivity to the dissemination of white matter lesions in space and time. The main aim of this study is multiple sclerosis lesion segmentation in brain MRI using an optimized deep convolutional neural network and super-pixel clustering. The proposed methodology includes three stages: (a) preprocessing, (b) super-pixel segmentation, and (c) super-pixel classification. In the first stage, image enhancement and skull stripping are performed. In the second stage, MS-lesion and non-MS-lesion regions are segmented by applying the SLICO algorithm over each slice of the volume. In the third stage, CNN training and classification are performed using these segmented lesion and non-lesion regions. To handle this complex task, a newly developed Improved Particle Swarm Optimization (IPSO) based optimized convolutional neural network classifier is applied. On clinical MS data, the approach exhibits a significant increase in accuracy in segmenting WM lesions compared with the other evaluated methods.
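The particle swarm optimization underlying the IPSO classifier updates each particle with inertia, cognitive, and social terms; a sketch of one standard PSO step (the specific improvements of IPSO are not detailed here, and the one-dimensional search space is a hypothetical stand-in for CNN hyperparameters):

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5,
             rng=random.Random(42)):
    """One particle swarm optimization step: inertia + cognitive + social terms."""
    new_pos, new_vel = [], []
    for x, v, pb in zip(positions, velocities, pbest):
        v = w * v + c1 * rng.random() * (pb - x) + c2 * rng.random() * (gbest - x)
        new_vel.append(v)
        new_pos.append(x + v)
    return new_pos, new_vel

# Three particles searching a 1-D space; global best currently at 0.5
pos = [2.0, -1.0, 0.5]
vel = [0.0, 0.0, 0.0]
pos, vel = pso_step(pos, vel, pbest=pos, gbest=0.5)
```

A particle already sitting on both its personal and the global best (the third one) receives zero attraction and stays put.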
Affiliation(s)
- R Krishna Priya
- Department of Electrical and Communication Engineering, National University of Science and Technology, Oman
- Susamma Chacko
- Department of Quality Enhancement and Assurance, National University of Science and Technology, Oman
113
Basnet R, Ahmad MO, Swamy M. A deep dense residual network with reduced parameters for volumetric brain tissue segmentation from MR images. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.103063]
114
Homayoun H, Ebrahimpour-komleh H. Automated Segmentation of Abnormal Tissues in Medical Images. J Biomed Phys Eng 2021; 11:415-424. [PMID: 34458189] [PMCID: PMC8385212] [DOI: 10.31661/jbpe.v0i0.958]
Abstract
Nowadays, medical image modalities are available almost everywhere and form the basis of diagnosis for various diseases, each sensitive to a specific tissue type. In diagnostic procedures, physicians usually look for abnormalities in these modalities. The count and volume of abnormalities are very important for optimal treatment of patients, and segmentation is a preliminary step for these measurements and for further analysis. Manual segmentation of abnormalities is cumbersome, error prone, and subjective; as a result, automated segmentation of abnormal tissue is needed. In this study, representative techniques for the segmentation of abnormal tissues are reviewed, with a main focus on the segmentation of multiple sclerosis lesions, breast cancer masses, lung nodules, and skin lesions. As experimental results demonstrate, methods based on deep learning techniques perform better than other methods, which are usually based on hand-crafted feature engineering. Finally, the most common measures used to evaluate automated abnormal tissue segmentation methods are reported.
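The common evaluation measures such a review typically reports can be computed from pixel-level confusion counts; a minimal sketch (the counts are illustrative):

```python
def segmentation_metrics(tp, fp, fn, tn):
    """Common overlap/detection measures for a binary segmentation mask,
    from pixel-level confusion counts."""
    dice = 2 * tp / (2 * tp + fp + fn)       # overlap, weights agreement twice
    jaccard = tp / (tp + fp + fn)            # intersection over union
    sensitivity = tp / (tp + fn)             # fraction of true lesion found
    specificity = tn / (tn + fp)             # fraction of background kept clean
    return dice, jaccard, sensitivity, specificity

# Hypothetical counts for one segmented lesion in a 1000-pixel image
dice, jaccard, sens, spec = segmentation_metrics(tp=80, fp=10, fn=20, tn=890)
```

Note that the Jaccard index is always less than or equal to the Dice score for the same mask pair.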
Affiliation(s)
- Hassan Homayoun
- PhD, Department of Computer Engineering, Faculty of Electrical and Computer Engineering, University of Kashan, Kashan, Iran
- Hossein Ebrahimpour-komleh
- PhD, Department of Computer Engineering, Faculty of Electrical and Computer Engineering, University of Kashan, Kashan, Iran
115
Palraj K, Kalaivani V. Deep learning methods for predicting brain abnormalities and compute human cognitive power using fMRI. Journal of Intelligent & Fuzzy Systems 2021. [DOI: 10.3233/jifs-202069]
Abstract
In modern times, digital medical images play a significant role in clinical diagnosis, allowing patients to be treated earlier and lives to be saved. Magnetic Resonance Imaging (MRI) is one of the most advanced medical imaging modalities; it facilitates scanning various parts of the human body, such as the head, chest, abdomen, and pelvis, and identifying diseases. Numerous studies in this discipline have proposed different algorithms, techniques, and methods for analyzing medical digital images, especially MRI, and most have focused mainly on identifying and classifying images as either normal or abnormal. Computing brainpower is essential to understanding and handling various brain diseases efficiently in critical situations. This paper sets out to design and implement a computer-aided framework that enhances the identification of humans' cognitive power from their MRI images. The proposed framework converts 3D DICOM images into 2D medical images and performs preprocessing, enhancement, learning, and extraction of various image information to classify each image as normal or abnormal and provide the brain's cognitive power. This study widens the efficient use of machine learning methods, using a Voxel Residual Network (VRN) with a multimodality fusion architecture to learn and analyze the images, classify them, and predict cognitive power. The experimental results show that the proposed framework performs better than existing approaches.
Affiliation(s)
- K. Palraj
- AP, CSE, Srividya College of Engineering & Technology, Virudhunagar, Tamilnadu, India
- V. Kalaivani
- CSE, National Engineering College, Kovilpatti, Tamilnadu, India
116
Mishro PK, Agrawal S, Panda R, Abraham A. A Novel Type-2 Fuzzy C-Means Clustering for Brain MR Image Segmentation. IEEE Transactions on Cybernetics 2021; 51:3901-3912. [PMID: 32568716] [DOI: 10.1109/tcyb.2020.2994235]
Abstract
The fuzzy C-means (FCM) clustering procedure is an unsupervised way of grouping the homogeneous pixels of an image in the feature space into clusters. A brain magnetic resonance (MR) image is affected by noise and intensity inhomogeneity (IIH) during the acquisition process. FCM has been used in MR brain tissue segmentation; however, it does not consider the neighboring pixels when computing the membership values, thereby misclassifying noisy pixels. The inaccurate cluster centers obtained in FCM do not address the problem of IIH, and a fixed value of the fuzzifier (m) brings uncertainty in controlling the fuzziness of the extracted clusters. To resolve these issues, we suggest a novel type-2 adaptive weighted spatial FCM (AWSFCM) clustering algorithm for MR brain tissue segmentation. The idea of type-2 FCM applied to the problem at hand is new and is reported in this article. The proposed technique replaces the fixed fuzzifier value with a fuzzy linguistic fuzzifier value (M). The introduction of spatial information into the membership function reduces the misclassification of noisy pixels, and the incorporation of adaptive weights into the cluster-center update function improves the accuracy of the final cluster centers, thereby reducing the effect of IIH. The suggested algorithm is evaluated on T1-weighted, T2-weighted, and proton density (PD) brain MR image slices, and its performance is justified in terms of qualitative and quantitative measures followed by statistical analysis. The outcomes demonstrate the superiority and robustness of the algorithm in comparison to state-of-the-art methods and its usefulness for cybernetics applications.
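For context, the classical type-1 FCM membership update that the proposed type-2 variant generalizes can be sketched as follows (1-D pixel intensities for brevity; the fuzzifier m is the fixed quantity the paper replaces with a linguistic value):

```python
def fcm_memberships(pixels, centers, m=2.0):
    """Standard FCM membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))."""
    U = []
    for x in pixels:
        dists = [abs(x - c) for c in centers]
        row = []
        for d in dists:
            if d == 0:  # pixel coincides with a center: crisp membership
                row = [1.0 if dd == 0 else 0.0 for dd in dists]
                break
            row.append(1.0 / sum((d / dj) ** (2.0 / (m - 1)) for dj in dists))
        U.append(row)
    return U

# Two toy pixel intensities clustered against two centers
U = fcm_memberships(pixels=[0.1, 0.9], centers=[0.0, 1.0])
```

Each pixel's memberships sum to one, and each pixel leans strongly toward its nearest cluster center.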
117
Weiss DA, Saluja R, Xie L, Gee JC, Sugrue LP, Pradhan A, Nick Bryan R, Rauschecker AM, Rudie JD. Automated multiclass tissue segmentation of clinical brain MRIs with lesions. NeuroImage: Clinical 2021; 31:102769. [PMID: 34333270] [PMCID: PMC8346689] [DOI: 10.1016/j.nicl.2021.102769]
Abstract
Highlights: A U-Net incorporating spatial prior information can successfully segment six brain tissue types; it segmented gray and white matter in the presence of lesions, surpassed the performance of its source algorithm in an external dataset, and produced segmentations in a hundredth of the time of its predecessor algorithm.
Delineation and quantification of normal and abnormal brain tissues on Magnetic Resonance Images is fundamental to the diagnosis and longitudinal assessment of neurological diseases. Here we sought to develop a convolutional neural network for automated multiclass tissue segmentation of brain MRIs that was robust at typical clinical resolutions and in the presence of a variety of lesions. We trained a 3D U-Net for full brain multiclass tissue segmentation from a prior atlas-based segmentation method on an internal dataset that consisted of 558 clinical T1-weighted brain MRIs (453/52/53; training/validation/test) of patients with one of 50 different diagnostic entities (n = 362) or with a normal brain MRI (n = 196). We then used transfer learning to refine our model on an external dataset that consisted of 7 patients with hand-labeled tissue types. We evaluated the tissue-wise and intra-lesion performance with different loss functions and spatial prior information in the validation set and applied the best performing model to the internal and external test sets. The network achieved an average overall Dice score of 0.87 and volume similarity of 0.97 in the internal test set. Further, the network achieved a median intra-lesion tissue segmentation accuracy of 0.85 inside lesions within white matter and 0.61 inside lesions within gray matter. After transfer learning, the network achieved an average overall Dice score of 0.77 and volume similarity of 0.96 in the external dataset compared to human raters. The network had equivalent or better performance than the original atlas-based method on which it was trained across all metrics and produced segmentations in a hundredth of the time. We anticipate that this pipeline will be a useful tool for clinical decision support and quantitative analysis of clinical brain MRIs in the presence of lesions.
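The volume similarity reported above is commonly defined as one minus the relative volume difference between predicted and reference segmentations; a minimal sketch (the voxel counts are illustrative):

```python
def volume_similarity(v_pred, v_truth):
    """Volume similarity: VS = 1 - |V_pred - V_truth| / (V_pred + V_truth)."""
    return 1.0 - abs(v_pred - v_truth) / (v_pred + v_truth)

# Hypothetical voxel counts for one tissue class in prediction vs. reference
vs = volume_similarity(970, 1000)
```

Unlike the Dice score, volume similarity ignores spatial overlap entirely, so a high value only means the overall volumes agree.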
Affiliation(s)
- David A Weiss
- University of Pennsylvania, United States; University of California, San Francisco, United States.
- Long Xie
- University of Pennsylvania, United States
- Leo P Sugrue
- University of California, San Francisco, United States
|
118
|
Zopes J, Platscher M, Paganucci S, Federau C. Multi-Modal Segmentation of 3D Brain Scans Using Neural Networks. Front Neurol 2021; 12:653375. [PMID: 34335436 PMCID: PMC8318570 DOI: 10.3389/fneur.2021.653375] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2021] [Accepted: 06/17/2021] [Indexed: 11/17/2022] Open
Abstract
Anatomical segmentation of brain scans is highly relevant for diagnostics and neuroradiology research. Conventionally, segmentation is performed on T1-weighted MRI scans, due to the strong soft-tissue contrast. In this work, we report on a comparative study of automated, learning-based brain segmentation on various other contrasts of MRI and also computed tomography (CT) scans and investigate the anatomical soft-tissue information contained in these imaging modalities. A large database of in total 853 MRI/CT brain scans enables us to train convolutional neural networks (CNNs) for segmentation. We benchmark the CNN performance on four different imaging modalities and 27 anatomical substructures. For each modality we train a separate CNN based on a common architecture. We find average Dice scores of 86.7 ± 4.1% (T1-weighted MRI), 81.9 ± 6.7% (fluid-attenuated inversion recovery MRI), 80.8 ± 6.6% (diffusion-weighted MRI) and 80.7 ± 8.2% (CT), respectively. The performance is assessed relative to labels obtained using the widely-adopted FreeSurfer software package. The segmentation pipeline uses dropout sampling to identify corrupted input scans or low-quality segmentations. Full segmentation of 3D volumes with more than 2 million voxels requires <1s of processing time on a graphical processing unit.
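The dropout-sampling quality check described above follows the usual Monte-Carlo dropout recipe: repeat stochastic forward passes and flag voxels (or whole scans) where the passes disagree. A toy sketch with a made-up stochastic predictor (`toy_forward` is hypothetical, standing in for a network with dropout left active):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(forward, x, n_samples=20):
    """Average n stochastic forward passes and return
    (mean class probabilities, voxel-wise predictive entropy)."""
    probs = np.stack([forward(x) for _ in range(n_samples)])  # (T, voxels, classes)
    mean = probs.mean(axis=0)
    entropy = -(mean * np.log(mean + 1e-12)).sum(axis=-1)
    return mean, entropy

def toy_forward(x):
    """Hypothetical model: confident on voxel 0, noisy on voxel 1."""
    p_confident = np.array([0.98, 0.02])
    u = rng.uniform(0.2, 0.8)
    p_noisy = np.array([u, 1 - u])
    return np.stack([p_confident, p_noisy])

mean, ent = mc_dropout_predict(toy_forward, None, n_samples=200)
# The noisy voxel accumulates high predictive entropy -> flag for review.
```

Thresholding the aggregate entropy over a scan is one plausible way to realize the paper's "identify corrupted input scans or low-quality segmentations" step.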
Affiliation(s)
- Jonathan Zopes
- Institute for Biomedical Engineering, ETH Zürich, Zurich, Switzerland
- Moritz Platscher
- Institute for Biomedical Engineering, ETH Zürich, Zurich, Switzerland
- Silvio Paganucci
- Institute for Biomedical Engineering, ETH Zürich, Zurich, Switzerland
- Christian Federau
- Institute for Biomedical Engineering, ETH Zürich, Zurich, Switzerland
|
119
|
Luan X, Zheng X, Li W, Liu L, Shu Y, Guo Y. Rubik-Net: Learning Spatial Information via Rotation-Driven Convolutions for Brain Segmentation. IEEE J Biomed Health Inform 2021; 26:289-300. [PMID: 34242176 DOI: 10.1109/jbhi.2021.3095846] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
The accurate segmentation of brain tissue in Magnetic Resonance Image (MRI) slices is essential for assessing neurological conditions and brain diseases. However, it is challenging to segment MRI slices because of the low contrast between different brain tissues and the partial volume effect. A 2-Dimensional (2-D) convolutional network cannot handle such volumetric medical image data well because it overlooks spatial information between MRI slices. Although 3-Dimensional (3-D) convolutions capture volumetric spatial information, they have not been fully exploited to enhance the representative ability of deep networks; moreover, they may lead to overfitting in the case of insufficient training data. In this paper, we propose a novel convolutional mechanism, termed Rubik convolution, to capture multi dimensional information between MRI slices. Rubik convolution rotates the axis of a set of consecutive slices, which enables a 2-D convolution kernel to extract features of each axial plane simultaneously. Next, feature maps are rotated back to fuse multidimensional information by the Max-View-Maps. Furthermore, we propose an efficient 2-D convolutional network, namely Rubik-Net, where the residual connections and the bottleneck structure are used to enhance information transmission and reduce the number of network parameters. The proposed Rubik-Net shows promising results on iSeg2017, iSeg2019, IBSR and Brainweb datasets in terms of segmentation accuracy. In particular, we achieved the best results in 95th-percentile Hausdorff distance and average surface distance in cerebrospinal fluid segmentation on the most challenging iSeg2019 dataset. The experiments indicate that Rubik-Net improves the accuracy and efficiency of medical image segmentation. Moreover, Rubik convolution can be easily embedded into existing 2-D convolutional networks.
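The rotate-then-2D-convolve idea can be sketched in a few lines of NumPy; this is an illustrative reading of Rubik convolution and Max-View-Maps, not the authors' implementation:

```python
import numpy as np

def conv2d_same(img, k):
    """Naive 'same'-padded 2D cross-correlation (odd-sized kernels)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    p = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (p[i:i + kh, j:j + kw] * k).sum()
    return out

def rubik_conv(vol, k):
    """Rotate the volume so each of the three axes in turn becomes the
    slice axis, run the same 2D kernel on every slice, rotate back, and
    fuse the three views voxel-wise by max (cf. the Max-View-Maps)."""
    views = []
    for axis in range(3):
        v = np.moveaxis(vol, axis, 0)                   # slices along `axis`
        out = np.stack([conv2d_same(s, k) for s in v])  # 2D conv per slice
        views.append(np.moveaxis(out, 0, axis))         # rotate back
    return np.max(views, axis=0)
```

A single 2D kernel thus sees all three anatomical planes, which is the mechanism the paper credits for capturing inter-slice spatial information without resorting to full 3D convolutions.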
|
120
|
Zhuang Y, Liu H, Song E, Ma G, Xu X, Hung CC. APRNet: A 3D Anisotropic Pyramidal Reversible Network with Multi-modal Cross-Dimension Attention for Brain Tissue Segmentation in MR Images. IEEE J Biomed Health Inform 2021; 26:749-761. [PMID: 34197331 DOI: 10.1109/jbhi.2021.3093932] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
Brain tissue segmentation in multi-modal magnetic resonance (MR) images is significant for the clinical diagnosis of brain diseases. Due to blurred boundaries, low contrast, and intricate anatomical relationships between brain tissue regions, automatic brain tissue segmentation without prior knowledge is still challenging. This paper presents a novel 3D fully convolutional network (FCN) for brain tissue segmentation, called APRNet. In this network, we first propose a 3D anisotropic pyramidal convolutional reversible residual sequence (3DAPC-RRS) module to integrate the intra-slice information with the inter-slice information without significant memory consumption; secondly, we design a multi-modal cross-dimension attention (MCDA) module to automatically capture the effective information in each dimension of multi-modal images; then, we apply 3DAPC-RRS modules and MCDA modules to a 3D FCN with multiple encoded streams and one decoded stream for constituting the overall architecture of APRNet. We evaluated APRNet on two benchmark challenges, namely MRBrainS13 and iSeg-2017. The experimental results show that APRNet yields state-of-the-art segmentation results on both benchmark challenge datasets and achieves the best segmentation performance on the cerebrospinal fluid region. Compared with other methods, our proposed approach exploits the complementary information of different modalities to segment brain tissue regions in both adult and infant MR images, and it achieves the average Dice coefficient of 87.22% and 93.03% on the MRBrainS13 and iSeg-2017 testing data, respectively. The proposed method is beneficial for quantitative brain analysis in the clinical study, and our code is made publicly available.
|
121
|
Wang H, Cao J, Feng J, Xie Y, Yang D, Chen B. Mixed 2D and 3D convolutional network with multi-scale context for lesion segmentation in breast DCE-MRI. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102607] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
|
122
|
Jiang Y, Li M, Zhang P, Tan X, Song W. Hierarchical fusion convolutional neural networks for SAR image segmentation. Pattern Recognit Lett 2021. [DOI: 10.1016/j.patrec.2021.04.005] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|
123
|
Recent Radiomics Advancements in Breast Cancer: Lessons and Pitfalls for the Next Future. ACTA ACUST UNITED AC 2021; 28:2351-2372. [PMID: 34202321 PMCID: PMC8293249 DOI: 10.3390/curroncol28040217] [Citation(s) in RCA: 34] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2021] [Revised: 06/14/2021] [Accepted: 06/21/2021] [Indexed: 12/13/2022]
Abstract
Radiomics is an emerging translational field of medicine based on the extraction of high-dimensional data from radiological images, with the purpose to reach reliable models to be applied into clinical practice for the purposes of diagnosis, prognosis and evaluation of disease response to treatment. We aim to provide the basic information on radiomics to radiologists and clinicians who are focused on breast cancer care, encouraging cooperation with scientists to mine data for a better application in clinical practice. We investigate the workflow and clinical application of radiomics in breast cancer care, as well as the outlook and challenges based on recent studies. Currently, radiomics has the potential ability to distinguish between benign and malignant breast lesions, to predict breast cancer’s molecular subtypes, the response to neoadjuvant chemotherapy and the lymph node metastases. Even though radiomics has been used in tumor diagnosis and prognosis, it is still in the research phase and some challenges need to be faced to obtain a clinical translation. In this review, we discuss the current limitations and promises of radiomics for improvement in further research.
|
124
|
Fletcher E, DeCarli C, Fan AP, Knaack A. Convolutional Neural Net Learning Can Achieve Production-Level Brain Segmentation in Structural Magnetic Resonance Imaging. Front Neurosci 2021; 15:683426. [PMID: 34234642 PMCID: PMC8255694 DOI: 10.3389/fnins.2021.683426] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2021] [Accepted: 05/27/2021] [Indexed: 01/18/2023] Open
Abstract
Deep learning implementations using convolutional neural nets have recently demonstrated promise in many areas of medical imaging. In this article we lay out the methods by which we have achieved consistently high quality, high throughput computation of intra-cranial segmentation from whole head magnetic resonance images, an essential but typically time-consuming bottleneck for brain image analysis. We refer to this output as “production-level” because it is suitable for routine use in processing pipelines. Training and testing with an extremely large archive of structural images, our segmentation algorithm performs uniformly well over a wide variety of separate national imaging cohorts, giving Dice metric scores exceeding those of other recent deep learning brain extractions. We describe the components involved to achieve this performance, including size, variety and quality of ground truth, and appropriate neural net architecture. We demonstrate the crucial role of appropriately large and varied datasets, suggesting a less prominent role for algorithm development beyond a threshold of capability.
Affiliation(s)
- Evan Fletcher
- Department of Neurology, University of California, Davis, Davis, CA, United States
- Charles DeCarli
- Department of Neurology, University of California, Davis, Davis, CA, United States
- Audrey P Fan
- Department of Neurology, University of California, Davis, Davis, CA, United States; Department of Biomedical Engineering, University of California, Davis, Davis, CA, United States
- Alexander Knaack
- Department of Neurology, University of California, Davis, Davis, CA, United States
|
125
|
Uçar E, Atila Ü, Uçar M, Akyol K. Automated detection of Covid-19 disease using deep fused features from chest radiography images. Biomed Signal Process Control 2021; 69:102862. [PMID: 34131433 PMCID: PMC8192891 DOI: 10.1016/j.bspc.2021.102862] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2020] [Revised: 04/12/2021] [Accepted: 06/07/2021] [Indexed: 12/30/2022]
Abstract
The health systems of many countries are desperate in the face of Covid-19, which has become a pandemic worldwide and caused the death of hundreds of thousands of people. In order to keep Covid-19, which has a very high propagation rate, under control, it is necessary to develop faster, low-cost and highly accurate methods, rather than a costly Polymerase Chain Reaction test that can yield results in a few hours. In this study, a deep learning-based approach that can detect Covid-19 quickly and with high accuracy on X-ray images, which are common in every hospital and can be obtained at low cost, was proposed. Deep features were extracted from X-Ray images in RGB, CIE Lab and RGB CIE color spaces using DenseNet121 and EfficientNet B0 pre-trained deep learning architectures and then obtained features were fed into a two-stage classifier approach. Each of the classifiers in the proposed approach performed binary classification. In the first stage, healthy and infected samples were separated, and in the second stage, infected samples were detected as Covid-19 or pneumonia. In the experiments, Bi-LSTM network and well-known ensemble approaches such as Gradient Boosting, Random Forest and Extreme Gradient Boosting were used as the classifier model and it was seen that the Bi-LSTM network had a superior performance than other classifiers with 92.489% accuracy.
Affiliation(s)
- Emine Uçar
- Department of Management Information Systems, Faculty of Business and Management Science, Iskenderun Technical University, Hatay, Turkey
- Ümit Atila
- Department of Computer Engineering, Faculty of Engineering, Gazi University, Ankara, Turkey
- Murat Uçar
- Department of Management Information Systems, Faculty of Business and Management Science, Iskenderun Technical University, Hatay, Turkey
- Kemal Akyol
- Department of Computer Engineering, Faculty of Engineering and Architecture, Kastamonu University, Kastamonu, Turkey
|
126
|
Heidenreich JF, Gassenmaier T, Ankenbrand MJ, Bley TA, Wech T. Self-configuring nnU-net pipeline enables fully automatic infarct segmentation in late enhancement MRI after myocardial infarction. Eur J Radiol 2021; 141:109817. [PMID: 34144308 DOI: 10.1016/j.ejrad.2021.109817] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2020] [Revised: 05/07/2021] [Accepted: 06/07/2021] [Indexed: 10/21/2022]
Abstract
PURPOSE: To fully automatically derive quantitative parameters from late gadolinium enhancement (LGE) cardiac MR (CMR) in patients with myocardial infarction and to investigate if phase sensitive or magnitude reconstructions or a combination of both results in best segmentation accuracy.
METHODS: In this retrospective single center study, a convolutional neural network with a U-Net architecture with a self-configuring framework ("nnU-net") was trained for segmentation of left ventricular myocardium and infarct zone in LGE-CMR. A database of 170 examinations from 78 patients with history of myocardial infarction was assembled. Separate fitting of the model was performed, using phase sensitive inversion recovery, the magnitude reconstruction or both contrasts as input channels. Manual labelling served as ground truth. In a subset of 10 patients, the performance of the trained models was evaluated and quantitatively compared by determination of the Sørensen-Dice similarity coefficient (DSC) and volumes of the infarct zone compared with the manual ground truth using Pearson's r correlation and Bland-Altman analysis.
RESULTS: The model achieved high similarity coefficients for myocardium and scar tissue. No significant difference was observed between using PSIR, magnitude reconstruction or both contrasts as input (PSIR and MAG; mean DSC: 0.83 ± 0.03 for myocardium and 0.72 ± 0.08 for scars). A strong correlation for volumes of infarct zone was observed between manual and model-based approach (r = 0.96), with a significant underestimation of the volumes obtained from the neural network.
CONCLUSION: The self-configuring nnU-net achieves predictions with strong agreement compared to manual segmentation, proving the potential as a promising tool to provide fully automatic quantitative evaluation of LGE-CMR.
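The volume agreement analysis reported here, Pearson's r alongside Bland-Altman bias and limits of agreement, is straightforward to reproduce; the paired volumes below are invented for illustration and are not the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement between
    paired measurements."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    half = 1.96 * d.std(ddof=1)
    return bias, bias - half, bias + half

# Hypothetical paired infarct volumes in mL (NOT the study's data):
# the automatic values sit slightly below the manual ones, mimicking
# the systematic underestimation the authors report.
manual = np.array([12.1, 30.5, 8.2, 22.0, 15.3])
network = np.array([11.0, 29.2, 7.5, 20.8, 14.6])

r = np.corrcoef(manual, network)[0, 1]        # Pearson correlation
bias, lo, hi = bland_altman(network, manual)  # negative bias = underestimation
```

A high r with a non-zero bias, as in the paper, is exactly the pattern Bland-Altman analysis exists to expose: strong linear agreement can coexist with a systematic offset.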
Affiliation(s)
- Julius F Heidenreich
- Department of Diagnostic and Interventional Radiology, University Hospital Würzburg, Germany.
- Tobias Gassenmaier
- Department of Diagnostic and Interventional Radiology, University Hospital Würzburg, Germany
- Markus J Ankenbrand
- Department of Cellular and Molecular Imaging, Comprehensive Heart Failure Center, University Hospital Würzburg, Germany; Center for Computational and Theoretical Biology, University of Würzburg, Germany
- Thorsten A Bley
- Department of Diagnostic and Interventional Radiology, University Hospital Würzburg, Germany
- Tobias Wech
- Department of Diagnostic and Interventional Radiology, University Hospital Würzburg, Germany
|
127
|
Sheng N, Liu D, Zhang J, Che C, Zhang J. Second-order ResU-Net for automatic MRI brain tumor segmentation. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2021; 18:4943-4960. [PMID: 34517471 DOI: 10.3934/mbe.2021251] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Tumor segmentation using magnetic resonance imaging (MRI) plays a significant role in assisting brain tumor diagnosis and treatment. Recently, U-Net architecture with its variants have become prevalent in the field of brain tumor segmentation. However, the existing U-Net models mainly exploit coarse first-order features for tumor segmentation, and they seldom consider the more powerful second-order statistics of deep features. Therefore, in this work, we aim to explore the effectiveness of second-order statistical features for brain tumor segmentation application, and further propose a novel second-order residual brain tumor segmentation network, i.e., SoResU-Net. SoResU-Net utilizes a number of second-order modules to replace the original skip connection operations, thus augmenting the series of transformation operations and increasing the non-linearity of the segmentation network. Extensive experimental results on the BraTS 2018 and BraTS 2019 datasets demonstrate that SoResU-Net outperforms its baseline, especially on core tumor and enhancing tumor segmentation, illuminating the effectiveness of second-order statistical features for the brain tumor segmentation application.
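A common concrete form of "second-order statistics of deep features" is channel covariance pooling; a minimal sketch of that operation (SoResU-Net's second-order modules may differ in detail):

```python
import numpy as np

def second_order_pool(feat):
    """Channel covariance of a (C, H, W) feature map: a simple instance
    of the second-order statistics such modules exploit, capturing how
    feature channels co-vary across spatial positions."""
    c = feat.shape[0]
    x = feat.reshape(c, -1)                     # (C, H*W)
    x = x - x.mean(axis=1, keepdims=True)       # center per channel
    return (x @ x.T) / (x.shape[1] - 1)         # (C, C) covariance

feat = np.random.default_rng(0).normal(size=(4, 8, 8))
cov = second_order_pool(feat)
```

Inserting such a transform on a skip connection, as the abstract describes, replaces a plain identity copy with a non-linear, channel-interaction-aware feature, which is the claimed source of the segmentation gains.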
Affiliation(s)
- Ning Sheng
- Key Lab of Advanced Design and Intelligent Computing (Ministry of Education), Dalian University, Dalian 116622, China
- Dongwei Liu
- School of Computer Science and Engineering, Dalian Minzu University, Dalian 116600, China
- Jianxia Zhang
- School of Intelligent Engineering, Henan Institute of Technology, Xinxiang 453003, China
- Chao Che
- Key Lab of Advanced Design and Intelligent Computing (Ministry of Education), Dalian University, Dalian 116622, China
- Jianxin Zhang
- School of Computer Science and Engineering, Dalian Minzu University, Dalian 116600, China
|
128
|
Si L, Liu B, Fu Y. Unmanned aerial vehicle reconnaissance image recognition based on convolutional neural network. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2021. [DOI: 10.3233/jifs-219086] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
The strategic importance of military UAVs and the broad application of civil UAVs across many fields together mark the arrival of the era of unmanned aerial vehicles. In the field of image research, recognition and real-time tracking of specific objects remains a technology that many scholars continue to study in depth and that requires further work. Image recognition and real-time tracking technology has been widely used in UAV aerial photography. Through analysis of the convolutional neural network algorithm and comparison of image recognition techniques, the convolutional neural network algorithm is improved to strengthen image recognition performance. In this paper, a target detection technique based on an improved Faster R-CNN is proposed. The algorithm model is implemented, and the classification accuracy is improved through Faster R-CNN network optimization. To address false detections of small targets and scale differences in aerial datasets, this paper designs the network structure of the RPN and the optimization scheme of the related algorithms. The structure of Faster R-CNN is adjusted by improving the embedding of the CNN and the OHEM algorithm, and the accuracy of small-target and multi-target detection is improved overall. The experimental results show that, compared with LeNet-5, the recognition accuracy of the proposed algorithm is significantly improved, and as the number of samples increases, the accuracy of the algorithm reaches 98.9%.
Affiliation(s)
- Lipeng Si
- School of Computer Science and Engineering, Xi'an Technological University, Xi'an, Shaanxi, China
- Baolong Liu
- School of Computer Science and Engineering, Xi'an Technological University, Xi'an, Shaanxi, China
- Yanfang Fu
- School of Computer Science and Engineering, Xi'an Technological University, Xi'an, Shaanxi, China
|
129
|
Shaari H, Kevrić J, Jukić S, Bešić L, Jokić D, Ahmed N, Rajs V. Deep Learning-Based Studies on Pediatric Brain Tumors Imaging: Narrative Review of Techniques and Challenges. Brain Sci 2021; 11:brainsci11060716. [PMID: 34071202 PMCID: PMC8230188 DOI: 10.3390/brainsci11060716] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2021] [Revised: 05/10/2021] [Accepted: 05/17/2021] [Indexed: 11/16/2022] Open
Abstract
Diagnosis of brain tumors in children is a scientific concern owing to the rapid anatomical, metabolic, and functional changes arising in the brain and to non-specific or conflicting imaging results. In clinical practice, pediatric brain tumor diagnosis typically centers on diagnostic clues such as child age, tumor location and incidence, clinical history, and imaging (magnetic resonance imaging, MRI; computed tomography, CT) findings. The implementation of deep learning has propagated rapidly into almost every field in recent years, particularly the evaluation of medical images. Given the vast spectrum of other applications of deep learning, this review addresses only critical deep learning issues specific to pediatric brain tumor imaging research. The purpose of this review is to provide a detailed summary, first giving a succinct guide to the types of pediatric brain tumors and to pediatric brain tumor imaging techniques. We then summarize the scientific contributions to the field of pediatric brain tumor image processing and analysis. Finally, to establish open research issues and guidance for potential study in this emerging area, we discuss the medical and technical limitations of the deep learning-based approach.
Affiliation(s)
- Hala Shaari
- Department of Information Technologies, Faculty of Engineering and Natural Sciences, International BURCH University, 71000 Sarajevo, Bosnia and Herzegovina
- Jasmin Kevrić
- Faculty of Engineering and Natural Sciences, International BURCH University, 71000 Sarajevo, Bosnia and Herzegovina
- Samed Jukić
- Faculty of Engineering and Natural Sciences, International BURCH University, 71000 Sarajevo, Bosnia and Herzegovina
- Larisa Bešić
- Faculty of Engineering and Natural Sciences, International BURCH University, 71000 Sarajevo, Bosnia and Herzegovina
- Dejan Jokić
- Faculty of Engineering and Natural Sciences, International BURCH University, 71000 Sarajevo, Bosnia and Herzegovina
- Nuredin Ahmed
- Control Department, Technical Computer College Tripoli, Tripoli 00218, Libya
- Vladimir Rajs
- Department of Power, Electronics and Telecommunication Engineering, Faculty of Technical Science, University of Novi Sad, 21000 Novi Sad, Serbia
- Correspondence:
|
130
|
Bandyk MG, Gopireddy DR, Lall C, Balaji KC, Dolz J. MRI and CT bladder segmentation from classical to deep learning based approaches: Current limitations and lessons. Comput Biol Med 2021; 134:104472. [PMID: 34023696 DOI: 10.1016/j.compbiomed.2021.104472] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2021] [Revised: 04/29/2021] [Accepted: 05/02/2021] [Indexed: 10/21/2022]
Abstract
Precise determination and assessment of bladder cancer (BC) extent of muscle invasion involvement guides proper risk stratification and personalized therapy selection. In this context, segmentation of both bladder walls and cancer are of pivotal importance, as it provides invaluable information to stage the primary tumor. Hence, multiregion segmentation on patients presenting with symptoms of bladder tumors using deep learning heralds a new level of staging accuracy and prediction of the biologic behavior of the tumor. Nevertheless, despite the success of these models in other medical problems, progress in multiregion bladder segmentation, particularly in MRI and CT modalities, is still at a nascent stage, with just a handful of works tackling a multiregion scenario. Furthermore, most existing approaches systematically follow prior literature in other clinical problems, without casting a doubt on the validity of these methods on bladder segmentation, which may present different challenges. Inspired by this, we provide an in-depth look at bladder cancer segmentation using deep learning models. The critical determinants for accurate differentiation of muscle invasive disease, current status of deep learning based bladder segmentation, lessons and limitations of prior work are highlighted.
Affiliation(s)
- Mark G Bandyk
- Department of Urology, University of Florida, Jacksonville, FL, USA.
- Chandana Lall
- Department of Radiology, University of Florida, Jacksonville, FL, USA
- K C Balaji
- Department of Urology, University of Florida, Jacksonville, FL, USA
|
131
|
Learning U-Net Based Multi-Scale Features in Encoding-Decoding for MR Image Brain Tissue Segmentation. SENSORS 2021; 21:s21093232. [PMID: 34067101 PMCID: PMC8124734 DOI: 10.3390/s21093232] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/15/2021] [Revised: 04/27/2021] [Accepted: 04/28/2021] [Indexed: 11/17/2022]
Abstract
Accurate brain tissue segmentation of MRI is vital to diagnosis aiding, treatment planning, and neurologic condition monitoring. As an excellent convolutional neural network (CNN), U-Net is widely used in MR image segmentation as it usually generates high-precision features. However, the performance of U-Net is considerably restricted due to the variable shapes of the segmented targets in MRI and the information loss of down-sampling and up-sampling operations. Therefore, we propose a novel network by introducing spatial and channel dimensions-based multi-scale feature information extractors into its encoding-decoding framework, which is helpful in extracting rich multi-scale features while highlighting the details of higher-level features in the encoding part, and recovering the corresponding localization to a higher resolution layer in the decoding part. Concretely, we propose two information extractors, multi-branch pooling, called MP, in the encoding part, and multi-branch dense prediction, called MDP, in the decoding part, to extract multi-scale features. Additionally, we designed a new multi-branch output structure with MDP in the decoding part to form more accurate edge-preserving predicting maps by integrating the dense adjacent prediction features at different scales. Finally, the proposed method is tested on datasets MRbrainS13, IBSR18, and ISeg2017. We find that the proposed network performs higher accuracy in segmenting MRI brain tissues and it is better than the leading method of 2018 at the segmentation of GM and CSF. Therefore, it can be a useful tool for diagnostic applications, such as brain MRI segmentation and diagnosing.
|
132
|
Chen X, Zhang X, Xie H, Tao X, Wang FL, Xie N, Hao T. A bibliometric and visual analysis of artificial intelligence technologies-enhanced brain MRI research. MULTIMEDIA TOOLS AND APPLICATIONS 2021; 80:17335-17363. [DOI: 10.1007/s11042-020-09062-7] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/03/2020] [Revised: 03/23/2020] [Accepted: 05/08/2020] [Indexed: 01/03/2025]
|
133
|
Fernandes FE, Yen GG. Pruning of generative adversarial neural networks for medical imaging diagnostics with evolution strategy. Inf Sci (N Y) 2021. [DOI: 10.1016/j.ins.2020.12.086] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
134
|
Li Y, Li H, Fan Y. ACEnet: Anatomical context-encoding network for neuroanatomy segmentation. Med Image Anal 2021; 70:101991. [PMID: 33607514 PMCID: PMC8044013 DOI: 10.1016/j.media.2021.101991] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2020] [Revised: 12/30/2020] [Accepted: 02/02/2021] [Indexed: 11/25/2022]
Abstract
Segmentation of brain structures from magnetic resonance (MR) scans plays an important role in the quantification of brain morphology. Since 3D deep learning models suffer from high computational cost, 2D deep learning methods are favored for their computational efficiency. However, existing 2D deep learning methods are not equipped to effectively capture 3D spatial contextual information that is needed to achieve accurate brain structure segmentation. In order to overcome this limitation, we develop an Anatomical Context-Encoding Network (ACEnet) to incorporate 3D spatial and anatomical contexts in 2D convolutional neural networks (CNNs) for efficient and accurate segmentation of brain structures from MR scans, consisting of 1) an anatomical context encoding module to incorporate anatomical information in 2D CNNs and 2) a spatial context encoding module to integrate 3D image information in 2D CNNs. In addition, a skull stripping module is adopted to guide the 2D CNNs to attend to the brain. Extensive experiments on three benchmark datasets have demonstrated that our method achieves promising performance compared with state-of-the-art alternative methods for brain structure segmentation in terms of both computational efficiency and segmentation accuracy.
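One generic way to give a 2D network 3D spatial context, in the spirit of the spatial context encoding described above, is to stack neighbouring slices as input channels. This is a 2.5D sketch of the idea, not ACEnet's exact module:

```python
import numpy as np

def stack_context(vol, z, k=1):
    """Build a 2.5D input for a 2D network: the target slice plus its k
    neighbours on each side as channels, with indices clamped at the
    volume edges so boundary slices still get 2k+1 channels."""
    idx = np.clip(np.arange(z - k, z + k + 1), 0, vol.shape[0] - 1)
    return vol[idx]                      # shape: (2k+1, H, W)

# Toy volume whose voxel values equal their slice index, to make the
# channel ordering easy to check.
vol = np.arange(5, dtype=float)[:, None, None] * np.ones((5, 3, 3))
x = stack_context(vol, 0, k=1)           # edge case: lower neighbour clamped
```

The 2D network then keeps its computational efficiency while each prediction is conditioned on a local 3D neighbourhood, which is the trade-off the abstract is addressing.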
Affiliation(s)
- Yuemeng Li: Center for Biomedical Image Computing and Analytics and the Department of Radiology, the Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA 19104, USA
- Hongming Li: Center for Biomedical Image Computing and Analytics and the Department of Radiology, the Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA 19104, USA
- Yong Fan: Center for Biomedical Image Computing and Analytics and the Department of Radiology, the Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA 19104, USA
|
135
|
|
136
|
Li J, Udupa JK, Tong Y, Wang L, Torigian DA. Segmentation evaluation with sparse ground truth data: Simulating true segmentations as perfect/imperfect as those generated by humans. Med Image Anal 2021; 69:101980. [PMID: 33588116 PMCID: PMC7933105 DOI: 10.1016/j.media.2021.101980] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2020] [Revised: 01/19/2021] [Accepted: 01/20/2021] [Indexed: 10/22/2022]
Abstract
Fully annotated data sets play an important role in medical image segmentation and evaluation. Expense and imprecision are the two main issues in generating ground truth (GT) segmentations. In this paper, in an attempt to overcome these two issues jointly, we propose a method, named SparseGT, which exploits variability among human segmenters to maximally reduce the manual workload of GT generation for evaluating actual segmentations by algorithms. Pseudo ground truth (p-GT) segmentations are created with only a small fraction of the workload and with human-level perfection/imperfection, and they can be used in practice as a substitute for fully manual GT in evaluating segmentation algorithms at the same precision. p-GT segmentations are generated by first selecting slices sparsely, conducting manual contouring only on these sparse slices, and subsequently filling in segmentations on the other slices automatically. By creating p-GT with different levels of sparseness, we determine the largest workload reduction achievable for each considered object such that the variability of the generated p-GT is statistically indistinguishable from inter-segmenter differences in fully manual GT segmentations for that object. Furthermore, we investigate the segmentation evaluation errors introduced by variability in manual GT by applying p-GT in the evaluation of actual segmentations by an algorithm. Experiments are conducted on ∼500 computed tomography (CT) studies involving six objects in two body regions, Head & Neck and Thorax, where the optimal sparseness and corresponding evaluation errors are determined for each object and each strategy. Our results indicate that creating p-GT by the concatenated strategy of uniformly selecting sparse slices and filling in segmentations via a deep-learning (DL) network shows the highest manual workload reduction, ∼80-96%, without sacrificing evaluation accuracy compared to fully manual GT.
Nevertheless, other strategies also make clear contributions in particular situations. A non-uniform slice-selection strategy shows its advantage for objects whose shape changes irregularly from slice to slice. An interpolation strategy for filling in segmentations can achieve ∼60-90% workload reduction in simulating human-level GT without the need for an actual training stage, and shows potential for enlarging the data sets used to train p-GT generation networks. We conclude not only that over 90% reduction in workload is feasible without sacrificing evaluation accuracy, but also that the suitable strategy and the optimal achievable sparseness level for creating p-GT are object- and application-specific.
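The interpolation strategy for filling unannotated slices can be sketched very simply: blend the two nearest annotated binary masks and threshold. This is a crude illustrative stand-in (the paper's filling step is learned or interpolation-based, not this exact blend), with our own function name:

```python
import numpy as np

def fill_between(mask_a, mask_b, n_missing):
    """Linearly blend two annotated binary slices and threshold at 0.5
    to fill the n_missing unannotated slices between them."""
    filled = []
    for i in range(1, n_missing + 1):
        w = i / (n_missing + 1)
        blend = (1 - w) * mask_a.astype(float) + w * mask_b.astype(float)
        filled.append(blend >= 0.5)
    return filled

a = np.zeros((5, 5), bool); a[1:4, 1:4] = True   # 3x3 square on one slice
b = np.zeros((5, 5), bool); b[2, 2] = True       # single pixel on the next
mid = fill_between(a, b, 1)[0]
print(mid.sum())  # 9
```

Sparser slice selection means more slices must be filled this way, which is exactly the sparseness/accuracy trade-off the paper quantifies.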
Affiliation(s)
- Jieyu Li: Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, 800 Dongchuan RD, Shanghai, 200240, China; Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard building, 3710 Hamilton Walk, Philadelphia, PA, 19104, United States
- Jayaram K Udupa: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard building, 3710 Hamilton Walk, Philadelphia, PA, 19104, United States
- Yubing Tong: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard building, 3710 Hamilton Walk, Philadelphia, PA, 19104, United States
- Lisheng Wang: Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, 800 Dongchuan RD, Shanghai, 200240, China
- Drew A Torigian: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard building, 3710 Hamilton Walk, Philadelphia, PA, 19104, United States
|
137
|
Li J, Jin P, Zhu J, Zou H, Xu X, Tang M, Zhou M, Gan Y, He J, Ling Y, Su Y. Multi-scale GCN-assisted two-stage network for joint segmentation of retinal layers and discs in peripapillary OCT images. BIOMEDICAL OPTICS EXPRESS 2021; 12:2204-2220. [PMID: 33996224 PMCID: PMC8086482 DOI: 10.1364/boe.417212] [Citation(s) in RCA: 33] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/11/2020] [Revised: 03/15/2021] [Accepted: 03/16/2021] [Indexed: 05/03/2023]
Abstract
An accurate and automated tissue segmentation algorithm for retinal optical coherence tomography (OCT) images is crucial for the diagnosis of glaucoma. However, due to the presence of the optic disc, the anatomical structure of the peripapillary region of the retina is complicated and is challenging for segmentation. To address this issue, we develop a novel graph convolutional network (GCN)-assisted two-stage framework to simultaneously label the nine retinal layers and the optic disc. Specifically, a multi-scale global reasoning module is inserted between the encoder and decoder of a U-shape neural network to exploit anatomical prior knowledge and perform spatial reasoning. We conduct experiments on human peripapillary retinal OCT images. We also provide public access to the collected dataset, which might contribute to the research in the field of biomedical image processing. The Dice score of the proposed segmentation network is 0.820 ± 0.001 and the pixel accuracy is 0.830 ± 0.002, both of which outperform those from other state-of-the-art techniques.
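The Dice score reported above is a standard overlap metric; a minimal reference implementation (not the authors' code) is:

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice(a, b), 3))  # 2*2/(3+3) ≈ 0.667
```

A Dice of 1.0 means perfect overlap with the ground-truth layer or disc label; 0.820 indicates substantial but imperfect agreement.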
Affiliation(s)
- Jiaxuan Li: John Hopcroft Center for Computer Science, Shanghai Jiao Tong University, Shanghai 200240, China
- Peiyao Jin: Department of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai 200040, China; Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai 200080, China
- Jianfeng Zhu: Department of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai 200040, China
- Haidong Zou: Department of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai 200040, China; Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai 200080, China
- Xun Xu: Department of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai 200040, China; Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai 200080, China
- Min Tang: Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai 200080, China
- Minwen Zhou: Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai 200080, China
- Yu Gan: Department of Electrical and Computer Engineering, The University of Alabama, AL 35487, USA
- Jiangnan He: Department of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai 200040, China
- Yuye Ling: John Hopcroft Center for Computer Science, Shanghai Jiao Tong University, Shanghai 200240, China
- Yikai Su: State Key Lab of Advanced Optical Communication Systems and Networks, Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
|
138
|
Alzubaidi L, Zhang J, Humaidi AJ, Al-Dujaili A, Duan Y, Al-Shamma O, Santamaría J, Fadhel MA, Al-Amidie M, Farhan L. Review of deep learning: concepts, CNN architectures, challenges, applications, future directions. JOURNAL OF BIG DATA 2021; 8:53. [PMID: 33816053 PMCID: PMC8010506 DOI: 10.1186/s40537-021-00444-8] [Citation(s) in RCA: 1024] [Impact Index Per Article: 256.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/21/2021] [Accepted: 03/22/2021] [Indexed: 05/04/2023]
Abstract
In the last few years, the deep learning (DL) computing paradigm has been deemed the gold standard in the machine learning (ML) community. Moreover, it has gradually become the most widely used computational approach in the field of ML, achieving outstanding results on several complex cognitive tasks, matching or even beating human performance. One of the benefits of DL is the ability to learn from massive amounts of data. The DL field has grown rapidly in the last few years and has been used successfully to address a wide range of traditional applications. More importantly, DL has outperformed well-known ML techniques in many domains, e.g., cybersecurity, natural language processing, bioinformatics, robotics and control, and medical information processing, among many others. Although several works have reviewed the state of the art in DL, each tackles only one aspect of it, which leads to an overall lack of coverage. Therefore, in this contribution, we take a more holistic approach in order to provide a more suitable starting point from which to develop a full understanding of DL. Specifically, this review attempts a more comprehensive survey of the most important aspects of DL, including the enhancements recently added to the field. In particular, this paper outlines the importance of DL and presents the types of DL techniques and networks. It then presents convolutional neural networks (CNNs), the most utilized DL network type, and describes the development of CNN architectures together with their main features, starting with the AlexNet network and closing with the High-Resolution network (HR.Net). Finally, we present the challenges and suggested solutions to help researchers understand the existing research gaps, followed by a list of the major DL applications. Computational tools including FPGAs, GPUs, and CPUs are summarized along with a description of their influence on DL. The paper ends with the evolution matrix, benchmark datasets, and a summary and conclusion.
Affiliation(s)
- Laith Alzubaidi: School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000, Australia; AlNidhal Campus, University of Information Technology & Communications, Baghdad, 10001, Iraq
- Jinglan Zhang: School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000, Australia
- Amjad J. Humaidi: Control and Systems Engineering Department, University of Technology, Baghdad, 10001, Iraq
- Ayad Al-Dujaili: Electrical Engineering Technical College, Middle Technical University, Baghdad, 10001, Iraq
- Ye Duan: Faculty of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO 65211, USA
- Omran Al-Shamma: AlNidhal Campus, University of Information Technology & Communications, Baghdad, 10001, Iraq
- J. Santamaría: Department of Computer Science, University of Jaén, 23071 Jaén, Spain
- Mohammed A. Fadhel: College of Computer Science and Information Technology, University of Sumer, Thi Qar, 64005, Iraq
- Muthana Al-Amidie: Faculty of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO 65211, USA
- Laith Farhan: School of Engineering, Manchester Metropolitan University, Manchester, M1 5GD, UK
|
139
|
Moura LM, Ferreira VLDR, Loureiro RM, de Paiva JPQ, Rosa-Ribeiro R, Amaro E, Soares MBP, Machado BS. The Neurobiology of Zika Virus: New Models, New Challenges. Front Neurosci 2021; 15:654078. [PMID: 33897363 PMCID: PMC8059436 DOI: 10.3389/fnins.2021.654078] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2021] [Accepted: 03/08/2021] [Indexed: 12/21/2022] Open
Abstract
The Zika virus (ZIKV) attracted attention due to one striking characteristic: the ability to cross the placental barrier and infect the fetus, possibly causing severe neurodevelopmental disruptions included in Congenital Zika Syndrome (CZS). A few years after the epidemic, CZS incidence has begun to decline. However, how ZIKV causes such a diversity of outcomes is far from understood. The outcome is probably driven by a chain of complex events that relies on the interaction between ZIKV and environmental and physiological variables. In this review, we address open questions that might lead to an ill-defined diagnosis of CZS. This inaccuracy underestimates a large spectrum of apparently normocephalic cases that remain underdiagnosed, comprising several subtle brain abnormalities frequently masked by a normal head circumference. Therefore, new models using neuroimaging and artificial intelligence are needed to improve our understanding of the neurobiology of ZIKV and its true impact on neurodevelopment.
Affiliation(s)
- Edson Amaro: Hospital Israelita Albert Einstein, São Paulo, Brazil
- Milena Botelho Pereira Soares: Gonçalo Moniz Institute, Oswaldo Cruz Foundation (IGM-FIOCRUZ), Bahia, Brazil; University Center SENAI CIMATEC, SENAI Institute of Innovation (ISI) in Advanced Health Systems (CIMATEC ISI SAS), National Service of Industrial Learning - SENAI, Bahia, Brazil
|
140
|
Hortensius LM, van den Hooven EH, Dudink J, Tataranno ML, van Elburg RM, Benders MJNL. NutriBrain: protocol for a randomised, double-blind, controlled trial to evaluate the effects of a nutritional product on brain integrity in preterm infants. BMC Pediatr 2021; 21:132. [PMID: 33731062 PMCID: PMC7968155 DOI: 10.1186/s12887-021-02570-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/25/2020] [Accepted: 02/24/2021] [Indexed: 11/23/2022] Open
Abstract
Background The gut microbiota and the brain are connected through different mechanisms. Bacterial colonisation of the gut plays a substantial role in normal brain development, providing opportunities for nutritional neuroprotective interventions that target the gut microbiome. Preterm infants are at risk for brain injury, especially white matter injury, mediated by inflammation and infection. Probiotics, prebiotics and L-glutamine are nutritional components that have individually already demonstrated beneficial effects in preterm infants, mostly by reducing infections or modulating the inflammatory response. The NutriBrain study aims to evaluate the benefits of a combination of probiotics, prebiotics and L-glutamine on white matter microstructure integrity (i.e., development of white matter tracts) at term equivalent age in very and extremely preterm born infants. Methods This study is a double-blind, randomised, controlled, parallel-group, single-center study. Eighty-eight infants born between 24 + 0 and < 30 + 0 weeks gestational age and less than 72 h old will be randomised after parental informed consent to receive either active study product or placebo. Active study product consists of a combination of Bifidobacterium breve M-16V, short-chain galacto-oligosaccharides, long-chain fructo-oligosaccharides and L-glutamine and will be given enterally in addition to regular infant feeding from 48 to 72 h after birth until 36 weeks postmenstrual age. The primary study outcome of white matter microstructure integrity will be measured as fractional anisotropy, assessed using magnetic resonance diffusion tensor imaging at term equivalent age and analysed using Tract-Based Spatial Statistics. Secondary outcomes are white matter injury, brain tissue volumes and cortical morphology, serious neonatal infections, serum inflammatory markers and neurodevelopmental outcome. 
Discussion This study will be the first to evaluate the effect of a combination of probiotics, prebiotics and L-glutamine on brain development in preterm infants. It may give new insights in the development and function of the gut microbiota and immune system in relation to brain development and provide a new, safe treatment possibility to improve brain development in the care for preterm infants. Trial registration ISRCTN, ISRCTN96620855. Date assigned: 10/10/2017. Supplementary Information The online version contains supplementary material available at 10.1186/s12887-021-02570-x.
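The primary outcome, fractional anisotropy (FA), has a closed-form definition from the three eigenvalues of the diffusion tensor. A minimal sketch of that standard formula (not code from the trial protocol):

```python
import numpy as np

def fractional_anisotropy(evals):
    """FA from the three eigenvalues of a diffusion tensor:
    FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||."""
    evals = np.asarray(evals, float)
    md = evals.mean()                              # mean diffusivity
    num = np.sqrt(((evals - md) ** 2).sum())
    den = np.sqrt((evals ** 2).sum())
    return np.sqrt(1.5) * num / den

print(round(fractional_anisotropy([1.0, 1.0, 1.0]), 3))  # 0.0 (isotropic)
print(round(fractional_anisotropy([1.0, 0.0, 0.0]), 3))  # 1.0 (fully anisotropic)
```

Higher FA along white matter tracts is the marker of microstructural integrity the trial uses as its endpoint.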
Affiliation(s)
- Lisa M Hortensius: Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands; University Medical Center Utrecht Brain Center, Utrecht University, Utrecht, The Netherlands
- Jeroen Dudink: Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands; University Medical Center Utrecht Brain Center, Utrecht University, Utrecht, The Netherlands
- Maria Luisa Tataranno: Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands; University Medical Center Utrecht Brain Center, Utrecht University, Utrecht, The Netherlands
- Ruurd M van Elburg: Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands; Emma Children's Hospital, Amsterdam University Medical Center, University of Amsterdam, Amsterdam, The Netherlands
- Manon J N L Benders: Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands; University Medical Center Utrecht Brain Center, Utrecht University, Utrecht, The Netherlands
|
141
|
Abd El Kader I, Xu G, Shuai Z, Saminu S, Javaid I, Salim Ahmad I. Differential Deep Convolutional Neural Network Model for Brain Tumor Classification. Brain Sci 2021; 11:352. [PMID: 33801994 PMCID: PMC8001442 DOI: 10.3390/brainsci11030352] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2021] [Revised: 03/01/2021] [Accepted: 03/03/2021] [Indexed: 02/02/2023] Open
Abstract
The classification of brain tumors is a difficult task in the field of medical image analysis. Improvements in algorithms and machine learning technology help radiologists diagnose tumors without surgical intervention. In recent years, deep learning techniques have made excellent progress in the field of medical image processing and analysis. However, classifying brain tumors from magnetic resonance imaging remains difficult: first, because of the complexity of brain structure and the intertwining of tissues within it; and second, because of the high-density nature of the brain. We propose a differential deep convolutional neural network model (differential deep-CNN) to classify different types of brain tumors, including abnormal and normal magnetic resonance (MR) images. Using differential operators in the differential deep-CNN architecture, we derive additional differential feature maps from the original CNN feature maps. The derivation process leads to an improvement in the performance of the proposed approach, as reflected in the evaluation parameters used. The advantage of the differential deep-CNN model is its analysis of directional pixel patterns in images using contrast calculations and its high ability to classify a large database of images with high accuracy and without technical problems. Therefore, the proposed approach gives an excellent overall performance. To train and test this model, we used a dataset consisting of 25,000 brain magnetic resonance imaging (MRI) images, including abnormal and normal images. The experimental results showed that the proposed model achieved an accuracy of 99.25%. This study demonstrates that the proposed differential deep-CNN model can be used to facilitate the automatic classification of brain tumors.
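The idea of deriving extra feature maps by applying differential operators to existing CNN feature maps can be sketched with a simple gradient kernel. This is an illustrative example of the general technique (using a Sobel kernel; the paper's exact operators and names are not reproduced here):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)

def conv2d(img, kernel):
    """'Valid' 2D correlation, enough to illustrate the idea."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def differential_maps(feature_maps):
    """Augment each feature map with a horizontal-gradient map,
    showing how differential operators derive additional maps."""
    return [conv2d(f, SOBEL_X) for f in feature_maps]

fmap = np.tile(np.arange(5, dtype=float), (5, 1))  # columns 0..4 in every row
grad = differential_maps([fmap])[0]
print(grad.shape, grad[0, 0])  # (3, 3) 8.0
```

The derived maps encode directional pixel patterns that the plain feature maps do not make explicit, which is the intuition behind the differential deep-CNN.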
Affiliation(s)
- Isselmou Abd El Kader: State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin 300130, China
- Guizhi Xu: State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin 300130, China
|
142
|
Chen Y, Goorden MC, Beekman FJ. Automatic attenuation map estimation from SPECT data only for brain perfusion scans using convolutional neural networks. Phys Med Biol 2021; 66:065006. [PMID: 33571975 DOI: 10.1088/1361-6560/abe557] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
In clinical brain SPECT, correction for photon attenuation in the patient is essential to obtain images that provide quantitative information on the regional activity concentration per unit volume (kBq·ml⁻¹). This correction generally requires an attenuation map (μ map) denoting the attenuation coefficient at each voxel, which is often derived from a CT or MRI scan. However, such an additional scan is not always available, and the method may suffer from registration errors. Therefore, we propose a SPECT-only strategy for μ map estimation, which we apply to a stationary multi-pinhole clinical SPECT system (G-SPECT-I) for 99mTc-HMPAO brain perfusion imaging. The method is based on a convolutional neural network (CNN) and was validated with Monte Carlo simulated scans. Data acquired in list mode were used so that the energy information of both primary and scattered photons could be exploited to extract as much information about tissue attenuation as possible. Multiple SPECT reconstructions were performed from different energy windows over a large energy range. Locally extracted 4D SPECT patches (three spatial dimensions plus one energy dimension) were used as input to the CNN, which was trained to predict the attenuation coefficient of the corresponding central voxel of the patch. Results show that attenuation correction using the ground-truth μ maps (GT-AC) or the CNN-estimated μ maps (CNN-AC) achieves comparable accuracy. This was confirmed by visual assessment as well as quantitative comparison; the mean deviation from the GT-AC when using the CNN-AC is within 1.8% for the standardized uptake values in all brain regions. Therefore, our results indicate that a CNN-based method can be an automatic and accurate tool for SPECT attenuation correction that is independent of attenuation data from other imaging modalities or human interpretations of head contours.
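The 4D patch input described above (three spatial dimensions plus one energy dimension) can be sketched as a slicing operation over a stack of per-energy-window reconstructions. This is a minimal illustration with our own function name, not the authors' pipeline:

```python
import numpy as np

def extract_4d_patch(recons, center, spatial=3):
    """Extract an (E, s, s, s) patch around `center` from `recons`
    of shape (E, X, Y, Z), one reconstruction per energy window; a
    CNN would predict the attenuation coefficient of the central
    voxel from this patch."""
    r = spatial // 2
    x, y, z = center
    return recons[:, x - r:x + r + 1, y - r:y + r + 1, z - r:z + r + 1]

recons = np.random.rand(6, 16, 16, 16)   # e.g. 6 energy windows
patch = extract_4d_patch(recons, (8, 8, 8), spatial=3)
print(patch.shape)  # (6, 3, 3, 3)
```

Sliding this extraction over every voxel yields one training sample per voxel, paired with the known attenuation coefficient in simulated data.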
Affiliation(s)
- Yuan Chen: Section Biomedical Imaging, Department of Radiation Science and Technology, Delft University of Technology, Delft, The Netherlands
|
143
|
Ghosal P, Chowdhury T, Kumar A, Bhadra AK, Chakraborty J, Nandi D. MhURI: A Supervised Segmentation Approach to Leverage Salient Brain Tissues in Magnetic Resonance Images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 200:105841. [PMID: 33221057 PMCID: PMC9096474 DOI: 10.1016/j.cmpb.2020.105841] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/01/2020] [Accepted: 11/07/2020] [Indexed: 05/09/2023]
Abstract
BACKGROUND AND OBJECTIVES Accurate segmentation of critical tissues from a brain MRI is pivotal for characterization and quantitative pattern analysis of the human brain, and thereby for identifying the earliest signs of various neurodegenerative diseases. To date, in most cases, it is done manually by radiologists. The overwhelming workload in some densely populated nations may cause exhaustion and interruptions for the doctors, posing a continuing threat to patient safety. A novel fusion method called U-Net inception, based on 3D convolutions and transition layers, is proposed to address this issue. METHODS A 3D deep learning method called Multi-headed U-Net with Residual Inception (MhURI), accompanied by a morphological gradient channel, is proposed for brain tissue segmentation; it incorporates the Residual Inception 2-Residual (RI2R) module as its basic building block. The model exploits the benefits of morphological pre-processing for structural enhancement of MR images. A multi-path data-encoding pipeline is introduced on top of the U-Net backbone, which encapsulates initial global features and captures the information from each MRI modality. RESULTS The proposed model achieves encouraging results, performing well on several established quality metrics when compared with state-of-the-art methods on two popular publicly available data sets. CONCLUSION The model is entirely automatic and able to segment gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) from brain MRI effectively and with sufficient accuracy. Hence, it may be considered a potential computer-aided diagnostic (CAD) tool for radiologists and other medical practitioners in their clinical diagnosis workflow.
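The morphological gradient channel mentioned in the METHODS is a standard operation: dilation minus erosion, which highlights tissue boundaries. A minimal NumPy sketch of that pre-processing step (not the authors' implementation):

```python
import numpy as np

def morphological_gradient(img, k=3):
    """Morphological gradient (dilation minus erosion with a k x k
    square element); bright where intensity changes, zero in flat
    regions."""
    r = k // 2
    padded = np.pad(img, r, mode="edge")
    h, w = img.shape
    dil = np.empty((h, w))
    ero = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            win = padded[i:i + k, j:j + k]
            dil[i, j], ero[i, j] = win.max(), win.min()
    return dil - ero

img = np.zeros((6, 6)); img[2:4, 2:4] = 1.0
grad = morphological_gradient(img)
print(grad.max(), grad[0, 0])  # 1.0 at tissue edges, 0.0 in flat background
```

Feeding this gradient image as an extra channel gives the network an explicit structural-boundary cue alongside the raw MR intensities.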
Affiliation(s)
- Palash Ghosal: Department of Computer Science and Engineering, National Institute of Technology Durgapur-713209, West Bengal, India
- Tamal Chowdhury: Department of Electronics and Communication Engineering, National Institute of Technology Durgapur-713209, West Bengal, India
- Amish Kumar: Department of Computer Science and Engineering, National Institute of Technology Durgapur-713209, West Bengal, India
- Ashok Kumar Bhadra: Department of Radiology, KPC Medical College and Hospital, Jadavpur, 700032, West Bengal, India
- Jayasree Chakraborty: Department of Hepatopancreatobiliary Service, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Debashis Nandi: Department of Computer Science and Engineering, National Institute of Technology Durgapur-713209, West Bengal, India
|
144
|
Arani A, Manduca A, Ehman RL, Huston J III. Harnessing brain waves: a review of brain magnetic resonance elastography for clinicians and scientists entering the field. Br J Radiol 2021; 94:20200265. [PMID: 33605783 PMCID: PMC8011257 DOI: 10.1259/bjr.20200265] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/03/2023] Open
Abstract
Brain magnetic resonance elastography (MRE) is an imaging technique capable of accurately and non-invasively measuring the mechanical properties of the living human brain. Recent studies have shown that MRE has potential to provide clinically useful information in patients with intracranial tumors, demyelinating disease, neurodegenerative disease, elevated intracranial pressure, and altered functional states. The objectives of this review are: (1) to give a general overview of the types of measurements that have been obtained with brain MRE in patient populations, (2) to survey the tools currently being used to make these measurements possible, and (3) to highlight brain MRE-based quantitative biomarkers that have the highest potential of being adopted into clinical use within the next 5 to 10 years. The specifics of MRE methodology strategies are described, from wave generation to material parameter estimations. The potential clinical role of MRE for characterizing and planning surgical resection of intracranial tumors and assessing diffuse changes in brain stiffness resulting from diffuse neurological diseases and altered intracranial pressure are described. In addition, the emerging technique of functional MRE, the role of artificial intelligence in MRE, and promising applications of MRE in general neuroscience research are presented.
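Under the simplest homogeneous, isotropic, elastic assumption, MRE stiffness estimation reduces to relating shear-wave speed to shear modulus. This toy calculation illustrates that relationship only; the density, frequency, and wavelength below are illustrative round numbers, not values from the review:

```python
def shear_stiffness(density, frequency_hz, wavelength_m):
    """Simplest MRE stiffness estimate: mu = rho * (f * lambda)^2,
    valid only under homogeneous, isotropic, elastic assumptions."""
    speed = frequency_hz * wavelength_m    # shear-wave speed, m/s
    return density * speed ** 2            # shear modulus, Pa

# Illustrative brain-like numbers (assumed, not from the paper):
mu = shear_stiffness(density=1000.0, frequency_hz=60.0, wavelength_m=0.029)
print(round(mu / 1000.0, 2), "kPa")  # 3.03 kPa
```

Real brain MRE inversions are far more sophisticated (accounting for attenuation, heterogeneity, and boundary effects), which is precisely what the reviewed parameter-estimation methods address.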
Affiliation(s)
- Arvin Arani: Department of Radiology, Mayo Clinic, Rochester, MN, USA
- Armando Manduca: Physiology and Biomedical Engineering, Mayo Clinic, Rochester, MN, USA
|
145
|
Image Segmentation Using Encoder-Decoder with Deformable Convolutions. SENSORS 2021; 21:s21051570. [PMID: 33668156 PMCID: PMC7956600 DOI: 10.3390/s21051570] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/04/2021] [Revised: 02/15/2021] [Accepted: 02/19/2021] [Indexed: 11/25/2022]
Abstract
Image segmentation is an essential step in image analysis that brings meaning to the pixels in the image. Nevertheless, it is also a difficult task, due to the lack of a generally suitable approach to the problem and the use of real-life pictures that can suffer from noise or object obstruction. This paper proposes an architecture for semantic segmentation using a convolutional neural network based on the Xception model, which was previously used for classification. Different experiments were performed to find the best configuration of the model (e.g., different resolutions and depths of the network, and data augmentation techniques). Additionally, the network was improved by adding a deformable convolution module. The proposed architecture obtained a mean IoU of 76.8 on the Pascal VOC 2012 dataset and 58.1 on the Cityscapes dataset. It outperforms the SegNet and U-Net networks, both of which have considerably more parameters and higher inference times.
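The mean IoU figures quoted above follow the standard per-class intersection-over-union definition; a minimal reference implementation (not the authors' evaluation code) is:

```python
import numpy as np

def mean_iou(pred, truth, n_classes):
    """Mean intersection-over-union across classes, the metric the
    segmentation results above are reported in."""
    ious = []
    for c in range(n_classes):
        p, t = pred == c, truth == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent from both masks; skip it
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

pred = np.array([[0, 0, 1], [1, 1, 1]])
truth = np.array([[0, 0, 0], [1, 1, 1]])
print(round(mean_iou(pred, truth, 2), 3))  # (2/3 + 3/4) / 2 ≈ 0.708
```

A score of 76.8 on Pascal VOC 2012 thus means the predicted and ground-truth masks overlap by about 77% on average across classes.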
|
146
|
A-DenseUNet: Adaptive Densely Connected UNet for Polyp Segmentation in Colonoscopy Images with Atrous Convolution. SENSORS 2021; 21:s21041441. [PMID: 33669539 PMCID: PMC7922083 DOI: 10.3390/s21041441] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/12/2021] [Revised: 02/14/2021] [Accepted: 02/17/2021] [Indexed: 01/05/2023]
Abstract
Colon carcinoma is one of the leading causes of cancer-related death in both men and women. Automatic colorectal polyp segmentation and detection in colonoscopy videos help endoscopists identify colorectal disease more easily, making it a promising method to prevent colon cancer. In this study, we developed a fully automated pixel-wise polyp segmentation model named A-DenseUNet. The proposed architecture adapts to different datasets, adjusting for the unknown required depth of the network by sharing multiscale encoding information with the different levels of the decoder side. We also used multiple dilated convolutions with various atrous rates to observe a large field of view without increasing the computational cost and without the loss of spatial information caused by dimensionality reduction. We utilized an attention mechanism to remove noise and inappropriate information, leading to the comprehensive re-establishment of contextual features. Our experiments demonstrated that the proposed architecture achieved significant segmentation results on public datasets. A-DenseUNet achieved a 90% Dice coefficient score on the Kvasir-SEG dataset and a 91% Dice coefficient score on the CVC-612 dataset, both of which were higher than the scores of other deep learning models such as UNet++, ResUNet, U-Net, PraNet, and ResUNet++ for segmenting polyps in colonoscopy images.
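The claim that dilated (atrous) convolutions enlarge the field of view without extra cost can be made concrete with the standard receptive-field recurrence: each layer adds (k - 1) * dilation. A small sketch of that arithmetic (illustrative rates, not the paper's exact configuration):

```python
def receptive_field(kernel_size, dilation_rates):
    """Receptive field of a stack of dilated (atrous) convolutions:
    rf = 1 + sum over layers of (k - 1) * dilation."""
    rf = 1
    for d in dilation_rates:
        rf += (kernel_size - 1) * d
    return rf

# Three 3x3 layers with growing atrous rates vs. plain convolutions:
print(receptive_field(3, [1, 2, 4]))  # 15
print(receptive_field(3, [1, 1, 1]))  # 7
```

Both stacks have identical parameter counts and compute, but the dilated stack sees more than twice the context, with no pooling and hence no loss of spatial resolution.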
|
147
|
Martin M, Sciolla B, Sdika M, Quétin P, Delachartre P. Automatic segmentation and location learning of neonatal cerebral ventricles in 3D ultrasound data combining CNN and CPPN. Comput Biol Med 2021; 131:104268. [PMID: 33639351 DOI: 10.1016/j.compbiomed.2021.104268] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2020] [Revised: 02/09/2021] [Accepted: 02/09/2021] [Indexed: 10/22/2022]
Abstract
Preterm neonates are highly likely to suffer from ventriculomegaly, a dilation of the Cerebral Ventricular System (CVS). This condition can develop into life-threatening hydrocephalus and is correlated with future neuro-developmental impairments. Consequently, it must be detected and monitored by physicians. In clinical routine, manual 2D measurements are performed on 2D ultrasound (US) images to estimate the CVS volume, but this practice is imprecise due to the unavailability of 3D information. A way to tackle this problem would be to develop automatic CVS segmentation algorithms for 3D US data. In this paper, we investigate the potential of 2D and 3D Convolutional Neural Networks (CNN) to solve this complex task and propose to use a Compositional Pattern Producing Network (CPPN) to enable Fully Convolutional Networks (FCN) to learn CVS location. Our database was composed of 25 3D US volumes collected from 21 preterm neonates at the age of 35.8±1.6 gestational weeks. We found that the CPPN enables the networks to encode CVS location, which increases the accuracy of the CNNs when they have few layers. Accuracy of the 2D and 3D FCNs reached intraobserver variability (IOV) in the case of dilated ventricles with Dice of 0.893±0.008 and 0.886±0.004 respectively (IOV = 0.898±0.008) and with volume errors of 0.45±0.42 cm3 and 0.36±0.24 cm3 respectively (IOV = 0.41±0.05 cm3). 3D FCNs were more accurate than 2D FCNs in the case of normal ventricles with Dice of 0.797±0.041 against 0.776±0.038 (IOV = 0.816±0.009) and volume errors of 0.35±0.29 cm3 against 0.35±0.24 cm3 (IOV = 0.2±0.11 cm3). The best segmentation time for volumes of size 320×320×320 was obtained by a 2D FCN, at 3.5±0.2 s.
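The CPPN's role here is to hand the FCN explicit location information. The same idea in its simplest form (a CoordConv-style sketch, not the authors' CPPN) appends normalized coordinate maps to the feature tensor so that subsequent convolutions can condition on position:

```python
import numpy as np

def add_coord_channels(features):
    """Append normalized (y, x) coordinate maps in [-1, 1] to a
    (C, H, W) feature tensor so later convolutions see position."""
    c, h, w = features.shape
    ys = np.linspace(-1.0, 1.0, h).reshape(h, 1).repeat(w, axis=1)
    xs = np.linspace(-1.0, 1.0, w).reshape(1, w).repeat(h, axis=0)
    return np.concatenate([features, ys[None], xs[None]], axis=0)

feat = np.zeros((4, 8, 8))
out = add_coord_channels(feat)
print(out.shape)                    # (6, 8, 8)
print(out[4, 0, 0], out[4, -1, 0])  # -1.0 1.0 (top vs bottom row)
```

A CPPN goes further by learning a nonlinear function of the coordinates, but the mechanism, extra input channels that vary with position, is the same.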
Affiliation(s)
- Matthieu Martin
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, F-69621, LYON, France
- Bruno Sciolla
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, F-69621, LYON, France
- Michaël Sdika
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, F-69621, LYON, France
- Philippe Delachartre
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, F-69621, LYON, France
|
148
|
Gao Y, Li Z, Song C, Li L, Li M, Schmall J, Liu H, Yuan J, Wang Z, Zeng T, Hu L, Chen Q, Zhang Y. Automatic rat brain image segmentation using triple cascaded convolutional neural networks in a clinical PET/MR. Phys Med Biol 2021; 66:04NT01. [PMID: 33527911 DOI: 10.1088/1361-6560/abd2c5] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
The purpose of this work was to develop and evaluate a deep learning approach for automatic rat brain image segmentation of magnetic resonance imaging (MRI) images in a clinical PET/MR, providing a useful tool for analyzing studies of the pathology and progression of neurological disease and for validating new radiotracers and therapeutic agents. Rat brain PET/MR images (N = 56) were collected from a clinical PET/MR system using a dedicated small-animal imaging phased array coil. A segmentation method based on a triple cascaded convolutional neural network (CNN) was developed, where, for a rectangular region of interest covering the whole brain, the entire brain volume was outlined using a CNN, then the outlined brain was fed into the cascaded network to segment both the cerebellum and cerebrum, and finally the sub-cortical structures within the cerebrum, including the hippocampus, thalamus, striatum, lateral ventricles and prefrontal cortex, were segmented out using the last cascaded CNN. The Dice similarity coefficient (DSC) between manually drawn labels and predicted labels was used to quantitatively evaluate the segmentation accuracy. The proposed method achieved a mean DSC of 0.965, 0.927, 0.858, 0.594, 0.847, 0.674 and 0.838 for whole brain, cerebellum, hippocampus, lateral ventricles, striatum, prefrontal cortex and thalamus, respectively. Compared with the segmentation results reported in previous publications using atlas-based methods, the proposed method demonstrated improved performance in whole brain and cerebellum segmentation. In conclusion, the proposed method achieved high accuracy for rat brain segmentation in MRI images from a clinical PET/MR and enabled the possibility of automatic rat brain image processing for small animal neurological research.
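The triple cascade amounts to three segmenters, each restricted to the mask produced by the previous stage. A schematic sketch with placeholder stage functions (threshold-based stand-ins on a toy 1-D "volume", not the paper's networks):

```python
import numpy as np

def cascade_segment(volume, brain_net, cerebrum_net, subcortical_net):
    """Stage 1 outlines the whole brain; stage 2 runs only inside the
    brain mask to split cerebellum/cerebrum; stage 3 runs only inside
    the cerebrum mask to label sub-cortical structures."""
    brain = brain_net(volume)                       # binary brain mask
    regions = cerebrum_net(volume * brain)          # 0 bg, 1 cerebellum, 2 cerebrum
    sub = subcortical_net(volume * (regions == 2))  # labels within cerebrum
    return brain, regions, sub

vol = np.array([0.0, 0.2, 0.9, 0.8, 0.7, 0.1])
brain, regions, sub = cascade_segment(
    vol,
    brain_net=lambda v: (v > 0.15).astype(int),
    cerebrum_net=lambda v: np.where(v > 0.75, 2, np.where(v > 0.15, 1, 0)),
    subcortical_net=lambda v: (v > 0.85).astype(int),
)
print(brain)    # [0 1 1 1 1 0]
print(regions)  # [0 1 2 2 1 0]
print(sub)      # [0 0 1 0 0 0]
```

The benefit of cascading is that each later stage sees a progressively smaller, pre-localized input, which simplifies its learning problem.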
Affiliation(s)
- Ya Gao
- First Affiliated Hospital of Dalian Medical University, Dalian 116044, People's Republic of China
- Zaisheng Li
- Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai 201210, People's Republic of China; School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, People's Republic of China; University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China
- Cheng Song
- First Affiliated Hospital of Dalian Medical University, Dalian 116044, People's Republic of China
- Lei Li
- First Affiliated Hospital of Dalian Medical University, Dalian 116044, People's Republic of China
- Mengmeng Li
- First Affiliated Hospital of Dalian Medical University, Dalian 116044, People's Republic of China
- Hui Liu
- Shanghai United Imaging Healthcare Co., Ltd, Shanghai 201807, People's Republic of China
- Jianmin Yuan
- Shanghai United Imaging Healthcare Co., Ltd, Shanghai 201807, People's Republic of China
- Zhe Wang
- Shanghai United Imaging Healthcare Co., Ltd, Shanghai 201807, People's Republic of China
- Tianyi Zeng
- Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai 201210, People's Republic of China; University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China; Shanghai United Imaging Healthcare Co., Ltd, Shanghai 201807, People's Republic of China
- Lingzhi Hu
- UIH America Inc., Houston 77054, United States of America; Shanghai United Imaging Healthcare Co., Ltd, Shanghai 201807, People's Republic of China
- Qun Chen
- Shanghai United Imaging Healthcare Co., Ltd, Shanghai 201807, People's Republic of China
- Yanjun Zhang
- First Affiliated Hospital of Dalian Medical University, Dalian 116044, People's Republic of China
|
149
|
Zhang Y, Jiang K, Jiang W, Wang N, Wright AJ, Liu A, Wang J. Multi-task convolutional neural network-based design of radio frequency pulse and the accompanying gradients for magnetic resonance imaging. NMR IN BIOMEDICINE 2021; 34:e4443. [PMID: 33200468 DOI: 10.1002/nbm.4443] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/23/2020] [Revised: 10/21/2020] [Accepted: 10/21/2020] [Indexed: 06/11/2023]
Abstract
Modern MRI systems usually load predesigned RFs and the accompanying gradients during clinical scans, with minimal adaptation to the specific requirements of each scan. Here, we describe a neural network-based method for real-time design of excitation RF pulses and the accompanying gradient waveforms to achieve spatially two-dimensional selectivity. Nine thousand sets of radio frequency (RF) and gradient waveforms with two-dimensional spatial selectivity were generated as the training dataset using the Shinnar-Le Roux (SLR) method. Neural networks were created and trained with five strategies (TS-1 to TS-5). The neural network-designed RF and gradients were compared with their SLR-designed counterparts and underwent Bloch simulation and phantom imaging to investigate their performance in spin manipulations. We demonstrate a convolutional neural network (TS-5) with multi-task learning that yields both the RF pulses and the accompanying two channels of gradient waveforms that comply with the SLR design, and these design results also provide excitation spatial profiles comparable with SLR pulses in both simulation (normalized root mean square error [NRMSE] of 0.0075 ± 0.0038 over the 400 sets of testing data between TS-5 and SLR) and phantom imaging. The output RF and gradient waveforms between the neural network and SLR methods were also compared, and the joint NRMSE, with both RF and the two channels of gradient waveforms considered, was 0.0098 ± 0.0024 between TS-5 and SLR. The RF and gradients were generated on a commercially available workstation, which took ~130 ms for TS-5. In conclusion, we present a convolutional neural network with multi-task learning, trained with SLR transformation pairs, that is capable of simultaneously generating RF and two channels of gradient waveforms, given the desired spatially two-dimensional excitation profiles.
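The NRMSE figures quoted above compare network-designed waveforms against SLR ground truth. One common definition normalizes the RMSE by the reference's range; the paper's exact normalization may differ, and the waveforms below are stand-ins for illustration:

```python
import numpy as np

def nrmse(predicted, reference):
    """Root mean square error normalized by the reference range."""
    rmse = np.sqrt(np.mean((predicted - reference) ** 2))
    span = reference.max() - reference.min()
    return rmse / span

t = np.linspace(0, 1, 256)
ref = np.sin(2 * np.pi * 4 * t)            # stand-in "SLR waveform"
pred = ref + 0.01 * np.cos(2 * np.pi * t)  # small deterministic error
print(float(nrmse(pred, ref)))
```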
Affiliation(s)
- Yajing Zhang
- MR Clinical Science, Philips Healthcare (Suzhou), Suzhou, China
- Ke Jiang
- MSC Clinical & Technical Solutions, Philips Healthcare, Beijing, China
- Weiwei Jiang
- MR Clinical Science, Philips Healthcare (Suzhou), Suzhou, China
- Nan Wang
- Department of Radiology, the First Affiliated Hospital of Dalian Medical University, Dalian, China
- Alan J Wright
- Cancer Research UK Cambridge Institute, University of Cambridge, Li Ka Shing Centre, Cambridge, UK
- Ailian Liu
- Department of Radiology, the First Affiliated Hospital of Dalian Medical University, Dalian, China
- Jiazheng Wang
- MSC Clinical & Technical Solutions, Philips Healthcare, Beijing, China
|
150
|
Aranguren I, Valdivia A, Morales-Castañeda B, Oliva D, Abd Elaziz M, Perez-Cisneros M. Improving the segmentation of magnetic resonance brain images using the LSHADE optimization algorithm. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2020.102259] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|