251
Pagnozzi AM, Fripp J, Rose SE. Quantifying deep grey matter atrophy using automated segmentation approaches: A systematic review of structural MRI studies. Neuroimage 2019; 201:116018. [PMID: 31319182 DOI: 10.1016/j.neuroimage.2019.116018]
Abstract
The deep grey matter (DGM) nuclei of the brain play a crucial role in learning, behaviour, cognition, movement and memory. Although automated segmentation strategies can provide insight into the impact of multiple neurological conditions affecting these structures, such as Multiple Sclerosis (MS), Huntington's disease (HD), Alzheimer's disease (AD), Parkinson's disease (PD) and Cerebral Palsy (CP), a number of technical challenges limit accurate automated segmentation of the DGM: namely, the insufficient contrast of T1 sequences to completely identify the boundaries of these structures, as well as the presence of iso-intense white matter lesions or extensive tissue loss caused by brain injury. Therefore, in this systematic review, 269 eligible studies were analysed and compared to determine the optimal approaches for addressing these technical challenges. The automated approaches used among the reviewed studies fall into three broad categories: atlas-based approaches focusing on the accurate alignment of atlas priors, algorithmic approaches that utilise intensity information to a greater extent, and learning-based approaches that require an annotated training set. Studies that utilise freely available software packages such as FIRST, FreeSurfer and LesionTOADS were also eligible, and their performance was compared. Overall, deep learning approaches achieved the best performance; however, these strategies are currently hampered by the lack of large-scale annotated data. Improving model generalisability to new datasets could be achieved in future studies with data augmentation and transfer learning. Multi-atlas approaches provided the second-best performance overall, and may be utilised to construct a "silver standard" annotated training set for deep learning.
To address the technical challenges, robustness to injury can be improved by using multiple channels, by using highly elastic diffeomorphic transformations such as LDDMM, and by following atlas-based approaches with an intensity-driven refinement of the segmentation, as has been done with the Expectation Maximisation (EM) and level-set methods. Potential lesions should be accounted for with a separate lesion segmentation approach, as in LesionTOADS. Finally, to address the issue of limited contrast, R2*, T2* and QSM sequences could be used to better highlight the DGM due to its higher iron content. Future studies could additionally acquire these sequences by retaining the phase information from standard structural scans, or alternatively acquire them for only a training set, allowing models to learn the "improved" segmentation from T1 sequences alone.
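The multi-atlas fusion strategy highlighted in this review can be sketched with its simplest label-fusion rule, a per-voxel majority vote over candidate segmentations propagated from registered atlases. This is a minimal illustrative sketch, not any specific reviewed pipeline, and it assumes the atlas labels are already aligned to the target image.

```python
import numpy as np

def majority_vote_fusion(propagated_labels):
    """Fuse candidate segmentations from several registered atlases
    by a per-voxel majority vote (a simple label-fusion baseline)."""
    stack = np.stack(propagated_labels)            # (n_atlases, *image_shape)
    n_labels = int(stack.max()) + 1
    # Count votes for each label at every voxel, then pick the winner.
    votes = np.stack([(stack == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)

# Three toy "atlas" segmentations of a 2x2 image (0 = background, 1 = DGM).
atlases = [np.array([[1, 0], [1, 1]]),
           np.array([[1, 1], [0, 1]]),
           np.array([[1, 0], [0, 1]])]
fused = majority_vote_fusion(atlases)
print(fused)  # voxels (0,0) and (1,1) win 3 and 2 votes for label 1
```

More sophisticated fusion schemes (e.g. locally weighted voting) replace the uniform vote count with similarity-based weights.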
Affiliation(s)
- Alex M Pagnozzi
- CSIRO Health and Biosecurity, The Australian e-Health Research Centre, Brisbane, Australia.
- Jurgen Fripp
- CSIRO Health and Biosecurity, The Australian e-Health Research Centre, Brisbane, Australia
- Stephen E Rose
- CSIRO Health and Biosecurity, The Australian e-Health Research Centre, Brisbane, Australia
252
Fatnassi C, Zaidi H. Fast and accurate pseudo multispectral technique for whole-brain MRI tissue classification. Phys Med Biol 2019; 64:145005. [PMID: 31117058 DOI: 10.1088/1361-6560/ab239e]
Abstract
Numerous strategies have been proposed to classify brain tissues into gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF). However, many of them fail when classifying specific regions with low contrast between tissues. In this work, we propose an alternative pseudo multispectral classification (PMC) technique using the CIE LAB colour space instead of grayscale T1-weighted MPRAGE images, combined with a new preprocessing technique for contrast enhancement and an optimized iterative K-means clustering. To improve the accuracy of the classification process, grayscale images were converted to multispectral CIE LAB data by applying several transformation matrices, thus increasing the amount of information associated with each image voxel. The image contrast was then enhanced by applying a real-time function that separates brain tissue distributions and improves image contrast in certain brain regions. The data were then classified using an optimized iterative and convergent K-means classifier. The performance of the proposed approach was assessed using simulated and in vivo human studies through comparison with three common software packages used for brain MR image segmentation, namely FSL, SPM8 and K-means clustering. In the presence of high SNR, all four algorithms achieved a good classification. Conversely, in the presence of low SNR, PMC was shown to outperform the other methods by accurately recovering all tissue volumes. The quantitative assessment of brain tissue classification for simulated studies showed that the PMC algorithm resulted in a mean Jaccard index (JI) of 0.74, compared to 0.75 for FSL, 0.7 for SPM and 0.8 for K-means. The in vivo human studies showed that the PMC algorithm resulted in a mean JI of 0.92, which reflects a good spatial overlap between segmented and actual volumes, compared to 0.84 for FSL, 0.78 for SPM and 0.66 for K-means.
The proposed algorithm presents a high potential for improving the accuracy of automatic brain tissue classification and was found to be accurate even in the presence of high noise levels.
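The Jaccard index (JI) reported above is a standard overlap measure between a segmentation and its reference. A minimal sketch for binary masks (toy arrays, not the study's data):

```python
import numpy as np

def jaccard_index(seg, ref):
    """Jaccard index between two binary masks: |A ∩ B| / |A ∪ B|."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    union = np.logical_or(seg, ref).sum()
    return inter / union if union else 1.0

seg = np.array([1, 1, 1, 0, 0, 1])   # predicted tissue mask (flattened)
ref = np.array([1, 1, 0, 0, 1, 1])   # reference tissue mask
ji = jaccard_index(seg, ref)
print(ji)  # 3 overlapping voxels / 5 in the union = 0.6
```

A JI of 1.0 means perfect overlap; values like the 0.92 quoted above indicate strong but imperfect agreement.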
Affiliation(s)
- Chemseddine Fatnassi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
253
Dabiri S, Popuri K, Cespedes Feliciano EM, Caan BJ, Baracos VE, Beg MF. Muscle segmentation in axial computed tomography (CT) images at the lumbar (L3) and thoracic (T4) levels for body composition analysis. Comput Med Imaging Graph 2019; 75:47-55. [PMID: 31132616 PMCID: PMC6620151 DOI: 10.1016/j.compmedimag.2019.04.007]
Abstract
In diseases such as cancer, patients suffer from degenerative loss of skeletal muscle (cachexia). Muscle wasting and loss of muscle function/performance (sarcopenia) can also occur during advanced aging. Assessing skeletal muscle mass in sarcopenia and cachexia is therefore of clinical interest for risk stratification. In comparison with fat, body fluids and bone, quantifying the skeletal muscle mass is more challenging. Computed tomography (CT) is one of the gold standard techniques for cancer diagnostics and analysis of progression, and therefore a valuable source of imaging for in vivo quantification of skeletal muscle mass. In this paper, we design a novel deep neural network-based algorithm for the automated segmentation of skeletal muscle in axial CT images at the third lumbar (L3) and the fourth thoracic (T4) levels. A two-branch network with two training steps is investigated. The network's performance is evaluated for three trained models on separate datasets. These datasets were generated by different CT devices and data acquisition settings. To ensure the model's robustness, each trained model was tested on all three available test sets. Errors and the effect of labeling protocol in these cases were analyzed and reported. The best performance of the proposed algorithm was achieved on 1327 L3 test samples with an overlap Jaccard score of 98% and sensitivity and specificity greater than 99%.
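The sensitivity and specificity figures quoted above are voxel-wise rates computed against a reference mask. A small sketch with toy masks (not the study's data):

```python
import numpy as np

def sensitivity_specificity(pred, truth):
    """Voxel-wise sensitivity (true-positive rate) and specificity
    (true-negative rate) of a binary segmentation vs. a reference."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    return tp / (tp + fn), tn / (tn + fp)

pred  = np.array([1, 1, 0, 0, 1, 0, 0, 0])   # predicted muscle voxels
truth = np.array([1, 1, 1, 0, 0, 0, 0, 0])   # reference muscle voxels
sens, spec = sensitivity_specificity(pred, truth)
print(round(sens, 3), round(spec, 3))  # 0.667 0.8
```

Values above 0.99 for both, as reported, mean almost no missed or spurious muscle voxels.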
Affiliation(s)
- Setareh Dabiri
- School of Engineering Science, Simon Fraser University, Canada.
- Karteek Popuri
- School of Engineering Science, Simon Fraser University, Canada
- Bette J Caan
- Division of Research, Kaiser Permanente Northern California, USA
254
Automatic detection and localization of Focal Cortical Dysplasia lesions in MRI using fully convolutional neural network. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2019.04.024]
255
Ribalta Lorenzo P, Nalepa J, Bobek-Billewicz B, Wawrzyniak P, Mrukwa G, Kawulok M, Ulrych P, Hayball MP. Segmenting brain tumors from FLAIR MRI using fully convolutional neural networks. Comput Methods Programs Biomed 2019; 176:135-148. [PMID: 31200901 DOI: 10.1016/j.cmpb.2019.05.006]
Abstract
BACKGROUND AND OBJECTIVE Magnetic resonance imaging (MRI) is an indispensable tool in diagnosing brain-tumor patients. Automated tumor segmentation is being widely researched to accelerate MRI analysis and allow clinicians to precisely plan treatment; accurate delineation of brain tumors is a critical step in assessing their volume, shape, boundaries, and other characteristics. However, it is still a very challenging task due to inherent MR data characteristics and high variability, e.g., in tumor sizes or shapes. We present a new deep learning approach for accurate brain tumor segmentation which can be trained from small and heterogeneous datasets annotated by a human reader (providing high-quality ground-truth segmentation is very costly in practice). METHODS In this paper, we present a new deep learning technique for segmenting brain tumors from fluid-attenuated inversion recovery (FLAIR) MRI. Our technique exploits fully convolutional neural networks, and it is equipped with a battery of augmentation techniques that make the algorithm robust against low data quality and the heterogeneity of small training sets. We train our models using only positive (tumorous) examples, due to the limited amount of available data. RESULTS Our algorithm was tested on a set of stage II-IV brain-tumor patients (image data collected using a MAGNETOM Prisma 3T scanner, Siemens). Rigorous experiments, backed up with statistical tests, revealed that our approach outperforms the state-of-the-art approach (utilizing hand-crafted features) in terms of segmentation accuracy, and offers very fast training and instant segmentation (analysis of an image takes less than a second). Building our deep model is 1.3 times faster than extracting features for extremely randomized trees, and this training time can be controlled.
Finally, we showed that overly aggressive data augmentation may deteriorate model performance, especially under fixed-budget training (with a maximum number of training epochs). CONCLUSIONS Our method outperforms the state-of-the-art method that utilizes hand-crafted features. In addition, our deep network can be effectively applied to difficult (small, imbalanced, and heterogeneous) datasets, offers controllable training time, and infers in real time.
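Geometric augmentation of the kind described above can be sketched minimally: image and mask must be transformed in lock-step so labels stay aligned. This is an illustrative sketch of the general idea (random flips and right-angle rotations), not the paper's specific augmentation battery.

```python
import numpy as np

def augment(image, mask, rng):
    """Randomly flip and rotate an image/mask pair in lock-step,
    a minimal geometric-augmentation sketch for small training sets."""
    if rng.random() < 0.5:                   # random horizontal flip
        image, mask = image[:, ::-1], mask[:, ::-1]
    k = rng.integers(0, 4)                   # 0/90/180/270 degree rotation
    return np.rot90(image, k), np.rot90(mask, k)

rng = np.random.default_rng(0)
img = np.arange(9, dtype=float).reshape(3, 3)
msk = (img > 4).astype(int)                  # toy "tumor" mask
aug_img, aug_msk = augment(img, msk, rng)
# The mask transformation always tracks the image transformation.
assert ((aug_img > 4).astype(int) == aug_msk).all()
```

The paper's caution applies: stacking too many such transforms can hurt a fixed-budget training run.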
Affiliation(s)
- Jakub Nalepa
- Future Processing, Bojkowska 37A, 44-100 Gliwice, Poland; Institute of Informatics, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland
- Barbara Bobek-Billewicz
- Maria Sklodowska-Curie Memorial Cancer Center and Institute of Oncology, Wybrzeze Armii Krajowej 15, 44-102 Gliwice, Poland
- Pawel Wawrzyniak
- Maria Sklodowska-Curie Memorial Cancer Center and Institute of Oncology, Wybrzeze Armii Krajowej 15, 44-102 Gliwice, Poland
- Michal Kawulok
- Future Processing, Bojkowska 37A, 44-100 Gliwice, Poland; Institute of Informatics, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland
- Pawel Ulrych
- Maria Sklodowska-Curie Memorial Cancer Center and Institute of Oncology, Wybrzeze Armii Krajowej 15, 44-102 Gliwice, Poland
256
Cui L, Feng J, Yang L. Towards Fine Whole-Slide Skeletal Muscle Image Segmentation through Deep Hierarchically Connected Networks. J Healthc Eng 2019; 2019:5191630. [PMID: 31346401 PMCID: PMC6620852 DOI: 10.1155/2019/5191630]
Abstract
Automatic skeletal muscle image segmentation (MIS) is crucial in the diagnosis of muscle-related diseases. However, accurate methods often suffer from expensive computations, which do not scale to large-scale, whole-slide muscle images. In this paper, we present a fast and accurate method to enable the more clinically meaningful whole-slide MIS. Leveraging the recently popular convolutional neural network (CNN), we train our network in an end-to-end manner to directly perform pixelwise classification. Our deep network comprises encoder and decoder modules. The encoder module captures rich and hierarchical representations through a series of convolutional and max-pooling layers. Multiple decoders then utilize the multilevel representations to perform multiscale predictions, which are combined to generate a more robust dense segmentation as the network output. The decoder modules have independent loss functions, which are jointly trained with a weighted loss function to address fine-grained pixelwise prediction. We also propose a two-stage transfer learning strategy to effectively train such a deep network. Extensive experiments on a challenging muscle image dataset demonstrate the significantly improved efficiency and accuracy of our method compared with recent state-of-the-art methods.
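The jointly trained, weighted multi-decoder loss described above can be sketched as a weighted sum of per-decoder losses. This is a minimal sketch assuming binary masks and binary cross-entropy; the paper's exact loss and weights are not specified here.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Mean binary cross-entropy between a predicted probability map
    and a binary target mask."""
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def joint_multiscale_loss(preds, target, weights):
    """Weighted sum of per-decoder losses: each decoder's prediction is
    penalized independently, then combined with scale weights."""
    return sum(w * bce(p, target) for p, w in zip(preds, weights))

target = np.array([[1.0, 0.0], [0.0, 1.0]])
preds = [np.full((2, 2), 0.5),                 # coarse decoder: uncertain
         np.array([[0.9, 0.1], [0.2, 0.8]])]   # fine decoder: confident
loss = joint_multiscale_loss(preds, target, weights=[0.3, 0.7])
```

Weighting the finer decoder more heavily pulls the joint loss toward the sharper prediction while still supervising the coarse one.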
Affiliation(s)
- Lei Cui
- Department of Information Science and Technology, Northwest University, Xi'an, China
- Jun Feng
- Department of Information Science and Technology, Northwest University, Xi'an, China
- Lin Yang
- The College of Life Sciences, Northwest University, Xi'an, China
257
Lin X, Li X. Image Based Brain Segmentation: From Multi-Atlas Fusion to Deep Learning. Curr Med Imaging 2019; 15:443-452. [DOI: 10.2174/1573405614666180817125454]
Abstract
Background: This review aims to trace the development of algorithms for brain tissue and structure segmentation in MRI images.
Discussion: Starting from the results of the Grand Challenges on brain tissue and structure segmentation held at Medical Image Computing and Computer-Assisted Intervention (MICCAI), this review analyses the development of the algorithms and discusses the shift from multi-atlas label fusion to deep learning. The intrinsic characteristics of the winning algorithms of the Grand Challenges from 2012 to 2018 are analyzed and their results compared carefully.
Conclusion: Although deep learning has achieved higher rankings in the challenges, it has not yet met expectations in terms of accuracy. More effective and specialized work should be done in the future.
Affiliation(s)
- Xiangbo Lin
- Faculty of Electronic Information and Electrical Engineering, School of Information and Communication Engineering, Dalian University of Technology, Dalian, LiaoNing Province, China
- Xiaoxi Li
- Faculty of Electronic Information and Electrical Engineering, School of Information and Communication Engineering, Dalian University of Technology, Dalian, LiaoNing Province, China
258
Age-specific optimization of T1-weighted brain MRI throughout infancy. Neuroimage 2019; 199:387-395. [PMID: 31154050 DOI: 10.1016/j.neuroimage.2019.05.075]
Abstract
The infant brain undergoes drastic morphological and functional development during the first year of life. Three-dimensional T1-weighted Magnetic Resonance Imaging (3D T1w-MRI) is a major tool for characterizing brain anatomy, which, however, manifests inherently low and rapidly changing contrast between white matter (WM) and gray matter (GM) in the infant brain (0-12 months old). Despite prior efforts to maximize tissue contrast in neonatal brains (≤1 month), optimization of imaging methods for the rest of infancy (1-12 months) has not been fully addressed, even though brains in this period exhibit even more challenging contrast. Here, we performed a systematic investigation to improve the contrast between cortical GM and subcortical WM throughout infancy. We first performed simultaneous T1 and proton density mapping in a normally developing infant cohort at 3T (n = 57). Based on the evolution of T1 relaxation times, we defined three age groups and simulated the relative tissue contrast between WM and GM in each group. Age-specific imaging strategies were proposed according to the Bloch simulation: an inversion time (TI) around 800 ms for the 0-3 month-old group, dual TIs at 500 ms and 700 ms for the 3-7 month-old group, and a TI around 700 ms for the 7-12 month-old group, using a centrically encoded 3D-MPRAGE sequence at 3T. Experimental results with varying TIs in each group confirmed improved contrast at the proposed optimal TIs, even in 3-7 month-old infants who had nearly isointense contrast. We further demonstrated the advantage of the improved relative contrast for segmenting neonatal brains using a multi-atlas segmentation method. The proposed age-specific optimization strategies can be easily adapted to routine clinical examinations, and the improved image contrast will facilitate quantitative analysis of infant brain development.
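The idea behind the TI optimization above can be illustrated with a toy inversion-recovery model, signal ∝ |1 − 2·exp(−TI/T1)|, neglecting TR recovery and readout effects, and maximizing the WM-GM signal difference over a grid of TIs. The T1 values below are hypothetical placeholders, not the cohort's measured values, so the resulting optimum does not reproduce the paper's numbers.

```python
import numpy as np

def ir_signal(ti_ms, t1_ms):
    """Simplified inversion-recovery signal magnitude |1 - 2*exp(-TI/T1)|,
    ignoring TR recovery and readout effects."""
    return np.abs(1.0 - 2.0 * np.exp(-ti_ms / t1_ms))

# Hypothetical infant T1 values (ms) for WM and GM at 3T.
t1_wm, t1_gm = 1500.0, 1800.0
tis = np.arange(300, 1101, 50, dtype=float)       # candidate TIs (ms)
contrast = np.abs(ir_signal(tis, t1_wm) - ir_signal(tis, t1_gm))
best_ti = tis[contrast.argmax()]                   # TI maximizing WM-GM contrast
```

Sweeping TI this way mirrors the Bloch-simulation-driven, age-specific TI choices proposed in the study.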
259
Jang HJ, Cho KO. Applications of deep learning for the analysis of medical data. Arch Pharm Res 2019; 42:492-504. [PMID: 31140082 DOI: 10.1007/s12272-019-01162-9]
Abstract
Over the past decade, deep learning has demonstrated superior performance in solving many problems in various fields of medicine compared with other machine learning methods. To understand how deep learning has surpassed traditional machine learning techniques, in this review we briefly explore the basic learning algorithms underlying deep learning. In addition, the procedures for building deep learning-based classifiers for seizure electroencephalograms and gastric tissue slides are described as examples to demonstrate the simplicity and effectiveness of deep learning applications. Finally, we review the clinical applications of deep learning in radiology, pathology, and drug discovery, where deep learning has been actively adopted. Given the great advantages of deep learning techniques, deep learning will be increasingly and widely utilized across a wide variety of areas of medicine in the coming decades.
Affiliation(s)
- Hyun-Jong Jang
- Department of Physiology, Department of Biomedicine & Health Sciences, Catholic Neuroscience Institute, College of Medicine, The Catholic University of Korea, Seoul, 06591, South Korea
- Kyung-Ok Cho
- Department of Pharmacology, Department of Biomedicine & Health Sciences, Catholic Neuroscience Institute, Institute of Aging and Metabolic Diseases, College of Medicine, The Catholic University of Korea, 222 Banpo-Daero, Seocho-Gu, Seoul, 06591, South Korea
260
Wang M, Li P, Liu F. Multi-atlas active contour segmentation method using template optimization algorithm. BMC Med Imaging 2019; 19:42. [PMID: 31126254 PMCID: PMC6534882 DOI: 10.1186/s12880-019-0340-6]
Abstract
Background Brain image segmentation is the basis and key to brain disease diagnosis, treatment planning and tissue 3D reconstruction. The accuracy of segmentation directly affects the therapeutic effect. Manual segmentation of these images is time-consuming and subjective. Therefore, it is important to research semi-automatic and automatic image segmentation methods. In this paper, we propose a semi-automatic image segmentation method that combines a multi-atlas registration method and an active contour model (ACM). Method We propose a multi-atlas active contour segmentation method using a template optimization algorithm. First, a multi-atlas registration method is used to obtain the prior shape information of the target tissue, and a label fusion algorithm is then used to generate the initial template. Second, a template optimization algorithm is used to reduce the multi-atlas registration errors and generate the initial active contour (IAC). Finally, an ACM is used to segment the target tissue. Results The proposed method was applied to the challenging, publicly available MR datasets IBSR and MRBrainS13. In the MRBrainS13 datasets, we obtained an average thalamus Dice similarity coefficient of 0.927 ± 0.014 and an average Hausdorff distance (HD) of 2.92 ± 0.53. In the IBSR datasets, we obtained a white matter (WM) average Dice similarity coefficient of 0.827 ± 0.04 and a gray matter (GM) average Dice similarity coefficient of 0.853 ± 0.03. Conclusion In this paper, we propose a semi-automatic brain image segmentation method. The main contributions of this paper are as follows: 1) Our method uses a multi-atlas registration method based on affine transformation, which effectively reduces the multi-atlas registration time compared to complex nonlinear registration methods. The average registration time per target image is 255 s in the IBSR datasets and 409 s in the MRBrainS13 datasets.
2) We used a template optimization algorithm to reduce registration error and generate a continuous IAC. 3) Finally, we used an ACM to segment the target tissue and obtain a smooth, continuous target contour.
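The Dice similarity coefficient quoted in the results above is the other standard overlap metric alongside the Jaccard index. A minimal sketch for binary masks (toy arrays, not the study's data):

```python
import numpy as np

def dice(seg, ref):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    denom = seg.sum() + ref.sum()
    return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0

seg = np.array([[1, 1, 0], [0, 1, 0]])   # predicted thalamus mask
ref = np.array([[1, 1, 0], [0, 0, 1]])   # reference thalamus mask
dsc = dice(seg, ref)
print(round(dsc, 3))  # 2*2 / (3+3) ≈ 0.667
```

Dice weights the overlap against the mean mask size, so it is slightly more forgiving than the Jaccard index for the same masks.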
Affiliation(s)
- Monan Wang
- School of Mechanical & Power Engineering, Harbin University of Science and Technology, Xue Fu Road No. 52, Nangang District, Harbin City, Heilongjiang Province, 150080, People's Republic of China.
- Pengcheng Li
- School of Mechanical & Power Engineering, Harbin University of Science and Technology, Xue Fu Road No. 52, Nangang District, Harbin City, Heilongjiang Province, 150080, People's Republic of China
- Fengjie Liu
- School of Mechanical & Power Engineering, Harbin University of Science and Technology, Xue Fu Road No. 52, Nangang District, Harbin City, Heilongjiang Province, 150080, People's Republic of China
261
Ker J, Singh SP, Bai Y, Rao J, Lim T, Wang L. Image Thresholding Improves 3-Dimensional Convolutional Neural Network Diagnosis of Different Acute Brain Hemorrhages on Computed Tomography Scans. Sensors 2019; 19:2167. [PMID: 31083289 PMCID: PMC6539746 DOI: 10.3390/s19092167]
Abstract
Intracranial hemorrhage is a medical emergency that requires urgent diagnosis and immediate treatment to improve patient outcome. Machine learning algorithms can be used to perform medical image classification and assist clinicians in diagnosing radiological scans. In this paper, we apply 3-dimensional convolutional neural networks (3D CNN) to classify computed tomography (CT) brain scans into normal scans (N) and abnormal scans containing subarachnoid hemorrhage (SAH), intraparenchymal hemorrhage (IPH), acute subdural hemorrhage (ASDH) and brain polytrauma hemorrhage (BPH). The dataset used consists of 399 volumetric CT brain scans, representing approximately 12,000 images, from the National Neuroscience Institute, Singapore. We used a 3D CNN to perform both 2-class (normal versus a specific abnormal class) and 4-class classification (between normal, SAH, IPH and ASDH). We apply image thresholding at the pre-processing step, which improves 3D CNN classification accuracy and performance by accentuating the pixel intensities that contribute most to feature discrimination. For 2-class classification, the F1 scores for various pairs of medical diagnoses ranged from 0.706 to 0.902 without thresholding. With thresholding implemented, the F1 scores improved, ranging from 0.919 to 0.952. Our results are comparable to, and in some cases exceed, the results published in other work applying 3D CNNs to CT or magnetic resonance imaging (MRI) brain scan classification. This work represents a direct application of a 3D CNN to a real hospital scenario involving a medically emergent CT brain diagnosis.
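Intensity thresholding of the kind described can be sketched as clipping CT values to a window and rescaling, so the network sees only the range where acute blood is conspicuous. The 0-80 HU window below is a hypothetical choice for illustration, not the paper's exact threshold.

```python
import numpy as np

def threshold_window(ct_hu, lo=0.0, hi=80.0):
    """Clip CT intensities to a window (hypothetical 0-80 HU 'brain
    window') and rescale to [0, 1], accentuating the intensity range
    where acute blood is conspicuous."""
    clipped = np.clip(ct_hu, lo, hi)
    return (clipped - lo) / (hi - lo)

# Toy 2x2 slice in Hounsfield units: air, brain tissue, acute blood, bone.
slice_hu = np.array([[-1000.0, 30.0], [60.0, 400.0]])
out = threshold_window(slice_hu)
print(out)  # air -> 0.0, tissue -> 0.375, blood -> 0.75, bone -> 1.0
```

Air and bone collapse to the window edges, leaving the soft-tissue/blood range to dominate the dynamic range.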
Affiliation(s)
- Justin Ker
- Neurosurgery, National Neuroscience Institute, Singapore 308433, Singapore.
- Satya P Singh
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Yeqi Bai
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Jai Rao
- Neurosurgery, National Neuroscience Institute, Singapore 308433, Singapore
- Tchoyoson Lim
- Neuroradiology, National Neuroscience Institute, Singapore 308433, Singapore
- Lipo Wang
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
262
Tong N, Gou S, Yang S, Cao M, Sheng K. Shape constrained fully convolutional DenseNet with adversarial training for multiorgan segmentation on head and neck CT and low-field MR images. Med Phys 2019; 46:2669-2682. [PMID: 31002188 DOI: 10.1002/mp.13553]
Abstract
PURPOSE Image-guided radiotherapy provides images not only for patient positioning but also for online adaptive radiotherapy. Accurate delineation of organs-at-risk (OARs) on Head and Neck (H&N) CT and MR images is valuable for both initial treatment planning and adaptive planning, but manual contouring is laborious and inconsistent. A novel method based on the generative adversarial network (GAN) with shape constraint (SC-GAN) is developed for fully automated H&N OARs segmentation on CT and low-field MRI. METHODS AND MATERIALS A deeply supervised fully convolutional DenseNet is employed as the segmentation network for voxel-wise prediction. A convolutional neural network (CNN)-based discriminator network is then utilized to correct predicted errors and image-level inconsistency between the prediction and ground truth. An additional shape representation loss between the prediction and ground truth in the latent shape space is integrated into the segmentation and adversarial loss functions to reduce false positives and constrain the predicted shapes. The proposed segmentation method was first benchmarked on a public H&N CT database including 32 patients, and then on 25 0.35T MR images obtained from an MR-guided radiotherapy system. The OARs include the brainstem, optical chiasm, larynx (MR only), mandible, pharynx (MR only), parotid glands (both left and right), optical nerves (both left and right), and submandibular glands (both left and right, CT only). The performance of the proposed SC-GAN was compared with GAN alone and with GAN with the shape constraint (SC) but without the DenseNet (SC-GAN-ResNet) to quantify the contributions of the shape constraint and DenseNet in the deep neural network segmentation. RESULTS The proposed SC-GAN slightly but consistently improves the segmentation accuracy on the benchmark H&N CT images compared with our previous deep segmentation network, which outperformed other published methods on the same or similar CT H&N datasets.
On the low-field MR dataset, the following average Dice indices were obtained using the improved SC-GAN: 0.916 (brainstem), 0.589 (optical chiasm), 0.816 (mandible), 0.703 (optical nerves), 0.799 (larynx), 0.706 (pharynx), and 0.845 (parotid glands). The average surface distances ranged from 0.68 mm (brainstem) to 1.70 mm (larynx). The 95% surface distance ranged from 1.48 mm (left optical nerve) to 3.92 mm (larynx). Compared with CT, using the 95% surface distance evaluation, the automated segmentation accuracy is higher on MR for the brainstem, optical chiasm, optical nerves and parotids, and lower for the mandible. The SC-GAN performance is superior to SC-GAN-ResNet, which is more accurate than GAN alone on both the CT and MR datasets. The segmentation time for one patient is 14 seconds using a single GPU. CONCLUSION The performance of our previous shape-constrained fully convolutional network for H&N segmentation is further improved by incorporating GAN and DenseNet. With the novel segmentation method, we showed that the low-field MR images acquired on an MR-guided radiotherapy system can support accurate and fully automated segmentation of both bony and soft-tissue OARs for adaptive radiotherapy.
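The "95% surface distance" used above replaces the maximum in the Hausdorff distance with the 95th percentile, damping the effect of single outlier points. A minimal sketch for contours given as point sets (toy contours, purely illustrative):

```python
import numpy as np

def surface_distances(a_pts, b_pts):
    """Symmetric point-to-contour distances between two contours given
    as (N, d) arrays of surface points."""
    d_ab = np.linalg.norm(a_pts[:, None] - b_pts[None], axis=-1).min(axis=1)
    d_ba = np.linalg.norm(b_pts[:, None] - a_pts[None], axis=-1).min(axis=1)
    return np.concatenate([d_ab, d_ba])

# Two toy contours: b roughly parallels a, but has one outlier point.
a_pts = np.array([[0, 0], [1, 0], [2, 0], [3, 0]], dtype=float)
b_pts = np.array([[0, 1], [1, 1], [2, 1], [3, 5]], dtype=float)
d = surface_distances(a_pts, b_pts)
hd = d.max()                   # classic Hausdorff distance, outlier-driven
p95 = np.percentile(d, 95)     # 95% surface distance, outlier-robust
```

Here the single stray point dominates `hd` but only partially inflates `p95`, which is why the percentile variant is preferred for reporting.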
Affiliation(s)
- Nuo Tong
- Key Lab of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, Xi'an, Shaanxi, 710071, China; Department of Radiation Oncology, University of California-Los Angeles, Los Angeles, CA, 90095, USA
- Shuiping Gou
- Key Lab of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, Xi'an, Shaanxi, 710071, China
- Shuyuan Yang
- Key Lab of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, Xi'an, Shaanxi, 710071, China
- Minsong Cao
- Department of Radiation Oncology, University of California-Los Angeles, Los Angeles, CA, 90095, USA
- Ke Sheng
- Department of Radiation Oncology, University of California-Los Angeles, Los Angeles, CA, 90095, USA
263
Dolz J, Gopinath K, Yuan J, Lombaert H, Desrosiers C, Ben Ayed I. HyperDense-Net: A Hyper-Densely Connected CNN for Multi-Modal Image Segmentation. IEEE Trans Med Imaging 2019; 38:1116-1126. [PMID: 30387726 DOI: 10.1109/tmi.2018.2878669]
Abstract
Recently, dense connections have attracted substantial attention in computer vision because they facilitate gradient flow and implicit deep supervision during training. In particular, DenseNet, which connects each layer to every other layer in a feed-forward fashion, has shown impressive performance in natural image classification tasks. We propose HyperDenseNet, a 3-D fully convolutional neural network that extends the definition of dense connectivity to multi-modal segmentation problems. Each imaging modality has a path, and dense connections occur not only between the pairs of layers within the same path but also between those across different paths. This contrasts with existing multi-modal CNN approaches, in which modeling several modalities relies entirely on a single joint layer (or level of abstraction) for fusion, typically either at the input or at the output of the network. Therefore, the proposed network has total freedom to learn more complex combinations between the modalities, within and in-between all the levels of abstraction, which significantly enriches the learned representations. We report extensive evaluations over two different and highly competitive multi-modal brain tissue segmentation challenges, iSEG 2017 and MRBrainS 2013, with the former focusing on six-month infant data and the latter on adult images. HyperDenseNet yielded significant improvements over many state-of-the-art segmentation networks, ranking at the top on both benchmarks. We further provide a comprehensive experimental analysis of feature reuse, which confirms the importance of hyper-dense connections in multi-modal representation learning. Our code is publicly available.
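The hyper-dense connectivity described above, where every layer receives the concatenated outputs of all earlier layers from both modality paths, can be illustrated with a toy numpy forward pass. Random linear "layers" stand in for the real 3-D convolutions, and all shapes and channel counts are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, out_ch):
    """Toy 1x1 'convolution': a random linear map over the channel axis, then ReLU."""
    w = rng.standard_normal((x.shape[-1], out_ch)) * 0.1
    return np.maximum(x @ w, 0.0)

def hyper_dense_forward(t1, t2, out_ch=8, depth=3):
    """Two modality paths (e.g. T1w and T2w). Every layer in either path
    receives the concatenation of ALL previous feature maps from BOTH
    paths -- the hyper-dense connectivity pattern."""
    feats_a, feats_b = [t1], [t2]
    for _ in range(depth):
        shared = np.concatenate(feats_a + feats_b, axis=-1)  # dense, cross-path input
        feats_a.append(layer(shared, out_ch))
        feats_b.append(layer(shared, out_ch))
    return np.concatenate(feats_a + feats_b, axis=-1)
```

With single-channel inputs, `depth=3` and `out_ch=8`, the output stacks 2 × (1 + 3·8) = 50 channels, showing how the concatenated feature pool grows with depth.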
|
264
|
A gentle introduction to deep learning in medical image processing. Z Med Phys 2019; 29:86-101. [DOI: 10.1016/j.zemedi.2018.12.003] [Citation(s) in RCA: 229] [Impact Index Per Article: 38.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2018] [Revised: 12/20/2018] [Accepted: 12/21/2018] [Indexed: 02/07/2023]
|
265
|
Liu F. SUSAN: segment unannotated image structure using adversarial network. Magn Reson Med 2019; 81:3330-3345. [PMID: 30536427 PMCID: PMC7140982 DOI: 10.1002/mrm.27627] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2018] [Revised: 11/13/2018] [Accepted: 11/13/2018] [Indexed: 12/20/2022]
Abstract
PURPOSE To describe and evaluate a segmentation method using a joint adversarial and segmentation convolutional neural network to achieve accurate segmentation using unannotated MR image datasets. THEORY AND METHODS A segmentation pipeline was built using a joint adversarial and segmentation network. A convolutional neural network technique called the cycle-consistent generative adversarial network (CycleGAN) was applied as the core of the method to perform unpaired image-to-image translation between different MR image datasets. A joint segmentation network was incorporated into the adversarial network to provide additional functionality for semantic segmentation. The fully automated segmentation method, termed SUSAN, was tested for segmenting bone and cartilage on 2 clinical knee MR image datasets using images and annotated segmentation masks from an online publicly available knee MR image dataset. The segmentation results were compared, using quantitative segmentation metrics, with the results from a supervised U-Net segmentation method and 2 registration methods. The Wilcoxon signed-rank test was used to evaluate differences in the quantitative metrics between methods. RESULTS The proposed method SUSAN provided high segmentation accuracy, with results comparable to the supervised U-Net segmentation method (most quantitative metrics having P > 0.05) and significantly better than a multiatlas registration method (all quantitative metrics having P < 0.001) and a direct registration method (all quantitative metrics having P < 0.0001) for the clinical knee image datasets. SUSAN also demonstrated applicability for segmenting knee MR images with different tissue contrasts. CONCLUSION SUSAN performed rapid and accurate tissue segmentation for multiple MR image datasets without the need for sequence-specific segmentation annotation. The joint adversarial and segmentation network and training strategy have promising potential applications in medical image segmentation.
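The cycle-consistency constraint at the core of the CycleGAN component can be written as a short generic loss sketch; the generators `G` and `F` here are placeholder callables, not the paper's networks:

```python
import numpy as np

def cycle_consistency_loss(x, y, G, F):
    """CycleGAN cycle-consistency term for unpaired translation:
    L_cyc = E||F(G(x)) - x||_1 + E||G(F(y)) - y||_1,
    where G maps domain X -> Y and F maps Y -> X. Translating a sample
    to the other domain and back should recover the original."""
    forward = np.abs(F(G(x)) - x).mean()   # X -> Y -> X reconstruction error
    backward = np.abs(G(F(y)) - y).mean()  # Y -> X -> Y reconstruction error
    return forward + backward
```

In training, this term is added to the adversarial losses so the unpaired translation stays anchored to the input anatomy, which is what makes the downstream joint segmentation possible.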
Affiliation(s)
- Fang Liu
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, 600 Highland Avenue, Madison, Wisconsin 53705-2275
|
266
|
Zhang Z, Sejdić E. Radiological images and machine learning: Trends, perspectives, and prospects. Comput Biol Med 2019; 108:354-370. [PMID: 31054502 PMCID: PMC6531364 DOI: 10.1016/j.compbiomed.2019.02.017] [Citation(s) in RCA: 77] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2018] [Revised: 02/19/2019] [Accepted: 02/19/2019] [Indexed: 01/18/2023]
Abstract
The application of machine learning to radiological images is an increasingly active research area that is expected to grow in the next five to ten years. Recent advances in machine learning have the potential to recognize and classify complex patterns from different radiological imaging modalities such as x-ray, computed tomography, magnetic resonance imaging and positron emission tomography imaging. In many applications, machine learning-based systems have shown performance comparable to human decision-making. Such applications will be key ingredients of future clinical decision-making and monitoring systems. This review covers the fundamental concepts behind various machine learning techniques and their applications in several radiological imaging areas, such as medical image segmentation, brain function studies and neurological disease diagnosis, as well as computer-aided systems, image registration, and content-based image retrieval systems. We also briefly discuss current challenges and future directions regarding the application of machine learning in radiological imaging. By giving insight into how to take advantage of machine learning-powered applications, we expect that clinicians can prevent and diagnose diseases more accurately and efficiently.
Affiliation(s)
- Zhenwei Zhang
- Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, 15261, USA
- Ervin Sejdić
- Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, 15261, USA
|
267
|
Cizmeci MN, Khalili N, Claessens NHP, Groenendaal F, Liem KD, Heep A, Benavente-Fernández I, van Straaten HLM, van Wezel-Meijler G, Steggerda SJ, Dudink J, Išgum I, Whitelaw A, Benders MJNL, de Vries LS, Woerdeman P, ter Horst H, Dijkman K, Ley D, Fellman V, de Haan T, Brouwer A, van ‘t Verlaat E, Govaert P, Smit B, Agut Quijano T, Barcik U, Mathur A, Graca A. Assessment of Brain Injury and Brain Volumes after Posthemorrhagic Ventricular Dilatation: A Nested Substudy of the Randomized Controlled ELVIS Trial. J Pediatr 2019; 208:191-197.e2. [PMID: 30878207 DOI: 10.1016/j.jpeds.2018.12.062] [Citation(s) in RCA: 35] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/31/2018] [Revised: 11/26/2018] [Accepted: 12/31/2018] [Indexed: 10/27/2022]
Abstract
OBJECTIVE To compare the effect of early and late intervention for posthemorrhagic ventricular dilatation on additional brain injury and ventricular volume using term-equivalent age-MRI. STUDY DESIGN In the Early vs Late Ventricular Intervention Study (ELVIS) trial, 126 preterm infants ≤34 weeks of gestation with posthemorrhagic ventricular dilatation were randomized to low-threshold (ventricular index >p97 and anterior horn width >6 mm) or high-threshold (ventricular index >p97 + 4 mm and anterior horn width >10 mm) groups. In 88 of those (80%) with a term-equivalent age-MRI, the Kidokoro Global Brain Abnormality Score and the frontal and occipital horn ratio were measured. Automatic segmentation was used for volumetric analysis. RESULTS The total Kidokoro score of the infants in the low-threshold group (n = 44) was lower than in the high-threshold group (n = 44; median, 8 [IQR, 5-12] vs median 12 [IQR, 9-17], respectively; P < .001). More infants in the low-threshold group had a normal or mildly increased score vs more infants in the high-threshold group with a moderately or severely increased score (46% vs 11% and 89% vs 54%, respectively; P = .002). The frontal and occipital horn ratio was lower in the low-threshold group (median, 0.42 [IQR, 0.34-0.63]) than the high-threshold group (median 0.48 [IQR, 0.37-0.68], respectively; P = .001). Ventricular cerebrospinal fluid volumes could be calculated in 47 infants and were smaller in the low-threshold group (P = .03). CONCLUSIONS More brain injury and larger ventricular volumes were demonstrated in the high vs the low-threshold group. These results support the positive effects of early intervention for posthemorrhagic ventricular dilatation. TRIAL REGISTRATION ISRCTN43171322.
Affiliation(s)
- Mehmet N Cizmeci
- Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht, The Netherlands; Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht, The Netherlands
- Nadieh Khalili
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
- Nathalie H P Claessens
- Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht, The Netherlands; Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht, The Netherlands
- Floris Groenendaal
- Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht, The Netherlands; Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht, The Netherlands
- Kian D Liem
- Department of Neonatology, Amalia Children's Hospital, Radboud University Medical Center, Nijmegen, The Netherlands
- Axel Heep
- Department of Neonatology, Southmead Hospital, School of Clinical Science, University of Bristol, Bristol, United Kingdom
- Gerda van Wezel-Meijler
- Department of Neonatology, Isala Women and Children's Hospital, Zwolle, The Netherlands; Department of Neonatology, Leiden University Medical Center, Leiden, The Netherlands
- Sylke J Steggerda
- Department of Neonatology, Leiden University Medical Center, Leiden, The Netherlands
- Jeroen Dudink
- Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht, The Netherlands; Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht, The Netherlands
- Ivana Išgum
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
- Andrew Whitelaw
- Department of Neonatology, Southmead Hospital, School of Clinical Science, University of Bristol, Bristol, United Kingdom
- Manon J N L Benders
- Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht, The Netherlands; Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht, The Netherlands
- Linda S de Vries
- Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht, The Netherlands; Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht, The Netherlands
|
268
|
Zhu H, Shi F, Wang L, Hung SC, Chen MH, Wang S, Lin W, Shen D. Dilated Dense U-Net for Infant Hippocampus Subfield Segmentation. Front Neuroinform 2019; 13:30. [PMID: 31068797 PMCID: PMC6491864 DOI: 10.3389/fninf.2019.00030] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2019] [Accepted: 04/02/2019] [Indexed: 01/16/2023] Open
Abstract
Accurate and automatic segmentation of infant hippocampal subfields from magnetic resonance (MR) images is an important step in studying memory-related infant neurological diseases. However, existing hippocampal subfield segmentation methods were generally designed for adult subjects and compromise performance when applied to infant subjects, due to insufficient tissue contrast and the fast-changing structural patterns of early hippocampal development. In this paper, we propose a new fully convolutional network (FCN) for infant hippocampal subfield segmentation by embedding a dilated dense network in the U-net, namely DUnet. The embedded dilated dense network can generate multi-scale features while keeping high spatial resolution, which is useful in fusing the low-level features in the contracting path with the high-level features in the expanding path. To further improve the performance, we group every pair of convolutional layers with one residual connection in the DUnet, obtaining the Residual DUnet (ResDUnet). Experimental results show that our proposed DUnet and ResDUnet improve the average Dice coefficient by 2.1% and 2.5%, respectively, for infant hippocampal subfield segmentation when compared with the classic 3D U-net. The results also demonstrate that our methods outperform other state-of-the-art methods.
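The dilation mechanism the DUnet relies on, enlarging the receptive field while keeping spatial resolution, can be illustrated with a minimal 1-D dilated convolution in numpy; this is a generic sketch of the operation, not the network's actual layers:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """'Same'-padded 1-D dilated convolution: kernel taps are spaced
    `dilation` samples apart, so the receptive field grows as
    (k - 1) * dilation + 1 without downsampling the signal."""
    k = len(kernel)
    span = (k - 1) * dilation                       # extent of the dilated kernel
    xp = np.pad(x, (span // 2, span - span // 2))   # symmetric zero padding
    return np.array([
        sum(kernel[j] * xp[i + j * dilation] for j in range(k))
        for i in range(len(x))
    ])
```

Stacking such layers with increasing dilation rates yields the multi-scale, full-resolution features that the dilated dense blocks concatenate across scales.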
Affiliation(s)
- Hancan Zhu
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- School of Mathematics, Physics and Information, Shaoxing University, Shaoxing, China
- Feng Shi
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Li Wang
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Sheng-Che Hung
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Meng-Hsiang Chen
- Department of Diagnostic Radiology, Kaohsiung Chang Gung Memorial Hospital, Chang Gung University College of Medicine, Kaohsiung, Taiwan
- Shuai Wang
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Weili Lin
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
|
269
|
Shen N, Li X, Zheng S, Zhang L, Fu Y, Liu X, Li M, Li J, Guo S, Zhang H. Automated and accurate quantification of subcutaneous and visceral adipose tissue from magnetic resonance imaging based on machine learning. Magn Reson Imaging 2019; 64:28-36. [PMID: 31004712 DOI: 10.1016/j.mri.2019.04.007] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2019] [Revised: 04/02/2019] [Accepted: 04/17/2019] [Indexed: 02/07/2023]
Abstract
Accurate measurement of subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) is vital for the research of many diseases. The localization and quantification of SAT and VAT by computed tomography (CT) expose patients to harmful ionizing radiation, whereas magnetic resonance imaging (MRI) is a safe and painless test. The aim of this paper is to explore a practical method for the segmentation of SAT and VAT based on the iterative decomposition of water and fat with echo asymmetry and least-squares estimation iron quantification (IDEAL-IQ) technology and machine learning. The approach involves two main steps. First, a deep network is designed to segment the inner and outer boundaries of SAT in fat images and the peritoneal cavity contour in water images. Second, after mapping the peritoneal cavity contour onto the fat images, the assumption-free K-means++ with Markov chain Monte Carlo (AFK-MC2) clustering method is used to obtain the VAT content. An MRI data set from 75 subjects is utilized to construct and evaluate the new strategy. The Dice coefficients between the SAT and VAT content obtained from the proposed method and the manual measurements performed by experts are 0.96 and 0.97, respectively. The experimental results indicate that the proposed method and the manual measurements exhibit high reliability.
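The AFK-MC2 step approximates classical k-means++ D²-sampling seeding with a Markov chain so that no full pass over the data is needed per centre. The exact seeding it approximates can be sketched in plain numpy (a generic illustration, not the paper's code):

```python
import numpy as np

def kmeanspp_seed(X, k, rng=np.random.default_rng(0)):
    """k-means++ seeding: each new centre is drawn with probability
    proportional to its squared distance to the nearest centre chosen
    so far (D^2 sampling). AFK-MC^2 replaces the full D^2 pass with a
    short Markov chain over candidate points."""
    centres = [X[rng.integers(len(X))]]          # first centre: uniform
    for _ in range(k - 1):
        # squared distance of every point to its nearest existing centre
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centres], axis=0)
        probs = d2 / d2.sum()
        centres.append(X[rng.choice(len(X), p=probs)])
    return np.stack(centres)
```

Because far-away points get proportionally higher probability, well-separated clusters each tend to receive a seed, which is why the subsequent Lloyd iterations converge to a good VAT/non-VAT partition.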
Affiliation(s)
- Ning Shen
- State Key Laboratory on Integrated Optoelectronics, College of Electronic Science and Engineering, Jilin University, 130012 Changchun, China
- Xueyan Li
- State Key Laboratory on Integrated Optoelectronics, College of Electronic Science and Engineering, Jilin University, 130012 Changchun, China
- Shuang Zheng
- Department of Radiology, the First Hospital of Jilin University, 130021 Changchun, China
- Lei Zhang
- Department of Radiology, the First Hospital of Jilin University, 130021 Changchun, China
- Yu Fu
- Department of Radiology, the First Hospital of Jilin University, 130021 Changchun, China
- Xiaoming Liu
- State Key Laboratory on Integrated Optoelectronics, College of Electronic Science and Engineering, Jilin University, 130012 Changchun, China
- Mingyang Li
- State Key Laboratory on Integrated Optoelectronics, College of Electronic Science and Engineering, Jilin University, 130012 Changchun, China
- Jiasheng Li
- State Key Laboratory on Integrated Optoelectronics, College of Electronic Science and Engineering, Jilin University, 130012 Changchun, China
- Shuxu Guo
- State Key Laboratory on Integrated Optoelectronics, College of Electronic Science and Engineering, Jilin University, 130012 Changchun, China
- Huimao Zhang
- Department of Radiology, the First Hospital of Jilin University, 130021 Changchun, China
|
270
|
Claessens NHP, Khalili N, Isgum I, Ter Heide H, Steenhuis TJ, Turk E, Jansen NJG, de Vries LS, Breur JMPJ, de Heus R, Benders MJNL. Brain and CSF Volumes in Fetuses and Neonates with Antenatal Diagnosis of Critical Congenital Heart Disease: A Longitudinal MRI Study. AJNR Am J Neuroradiol 2019; 40:885-891. [PMID: 30923087 DOI: 10.3174/ajnr.a6021] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2018] [Accepted: 02/27/2019] [Indexed: 12/15/2022]
Abstract
BACKGROUND AND PURPOSE Fetuses and neonates with critical congenital heart disease are at risk of delayed brain development and neurodevelopmental impairments. Our aim was to investigate the association between fetal and neonatal brain volumes and neonatal brain injury in a longitudinally scanned cohort with an antenatal diagnosis of critical congenital heart disease, and to relate fetal and neonatal brain volumes to postmenstrual age and type of congenital heart disease. MATERIALS AND METHODS This was a prospective, longitudinal study including 61 neonates with critical congenital heart disease undergoing surgery with cardiopulmonary bypass <30 days after birth, with MR imaging of the brain antenatally (33 weeks postmenstrual age), neonatally preoperatively (first week), and postoperatively (7 days after surgery). Twenty-six had 3 MR imaging scans; 61 had at least 1 fetal and/or neonatal MR imaging scan. Volumes (cubic centimeters) were calculated for total brain volume, unmyelinated white matter, cortical gray matter, cerebellum, extracerebral CSF, and ventricular CSF. MR images were reviewed for ischemic brain injury. RESULTS Total fetal brain volume, cortical gray matter, and unmyelinated white matter positively correlated with preoperative neonatal total brain volume, cortical gray matter, and unmyelinated white matter (r = 0.5-0.58); fetal ventricular CSF and extracerebral CSF correlated with neonatal ventricular CSF and extracerebral CSF (r = 0.64 and 0.82). Fetal cortical gray matter, unmyelinated white matter, and the cerebellum were negatively correlated with neonatal ischemic injury (r = -0.46 to -0.41); fetal extracerebral CSF and ventricular CSF were positively correlated with neonatal ischemic injury (r = 0.40 and 0.23). The unmyelinated white matter:total brain volume ratio decreased with increasing postmenstrual age, with a parallel increase of the cortical gray matter:total brain volume and cerebellum:total brain volume ratios.
Fetal ventricular CSF:intracranial volume and extracerebral CSF:intracranial volume ratios decreased with increasing postmenstrual age; however, neonatal ventricular CSF:intracranial volume and extracerebral CSF:intracranial volume ratios increased with postmenstrual age. CONCLUSIONS This study reveals that fetal brain volumes relate to neonatal brain volumes in critical congenital heart disease, with a negative correlation between fetal brain volumes and neonatal ischemic injury. Fetal brain imaging has the potential to provide early neurologic biomarkers.
Affiliation(s)
- N H P Claessens
- From the Departments of Neonatology (N.H.P.C., E.T., L.S.d.V., M.J.N.L.B.); Pediatric Cardiology (N.H.P.C., H.t.H., T.J.S., J.M.P.J.B.); Pediatric Intensive Care (N.H.P.C., N.J.G.J.)
- N Khalili
- Image Sciences Institute (N.K., I.I.), University Medical Center Utrecht, Utrecht, the Netherlands
- I Isgum
- Image Sciences Institute (N.K., I.I.), University Medical Center Utrecht, Utrecht, the Netherlands
- H Ter Heide
- Pediatric Cardiology (N.H.P.C., H.t.H., T.J.S., J.M.P.J.B.)
- T J Steenhuis
- Pediatric Cardiology (N.H.P.C., H.t.H., T.J.S., J.M.P.J.B.)
- E Turk
- From the Departments of Neonatology (N.H.P.C., E.T., L.S.d.V., M.J.N.L.B.)
- N J G Jansen
- Pediatric Intensive Care (N.H.P.C., N.J.G.J.); Department of Pediatrics (N.J.G.J.), Beatrix Children's Hospital, University Medical Center Groningen, Groningen, the Netherlands
- L S de Vries
- From the Departments of Neonatology (N.H.P.C., E.T., L.S.d.V., M.J.N.L.B.)
- J M P J Breur
- Pediatric Cardiology (N.H.P.C., H.t.H., T.J.S., J.M.P.J.B.)
- R de Heus
- Obstetrics (R.d.H.), Wilhelmina Children's Hospital, Utrecht, the Netherlands
- M J N L Benders
- From the Departments of Neonatology (N.H.P.C., E.T., L.S.d.V., M.J.N.L.B.)
|
271
|
Yu Q, Shi Y, Sun J, Gao Y, Zhu J, Dai Y. Crossbar-Net: A Novel Convolutional Neural Network for Kidney Tumor Segmentation in CT Images. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2019; 28:4060-4074. [PMID: 30892206 DOI: 10.1109/tip.2019.2905537] [Citation(s) in RCA: 38] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Due to its unpredictable location, fuzzy texture and diverse shape, accurate segmentation of the kidney tumor in CT images is an important yet challenging task. To this end, in this paper we present a cascaded trainable segmentation model termed Crossbar-Net. Our method combines two novel schemes: (1) we propose crossbar patches, each consisting of two orthogonal non-square patches (i.e., a vertical patch and a horizontal patch). The crossbar patches are able to capture both the global and local appearance information of kidney tumors from the vertical and horizontal directions simultaneously. (2) With the obtained crossbar patches, we iteratively train two sub-models (i.e., a horizontal sub-model and a vertical sub-model) in a cascaded manner. During training, the sub-models are encouraged to automatically focus more on the difficult parts of the tumor (i.e., mis-segmented regions). Specifically, the vertical (horizontal) sub-model is required to help segment the mis-segmented regions for the horizontal (vertical) sub-model. Thus, the two sub-models complement each other and improve each other until convergence. In the experiments, we evaluate our method on a real CT kidney tumor dataset collected from 94 patients and comprising 3,500 CT slices. Compared with state-of-the-art segmentation methods, the results demonstrate the superior performance of our method in terms of Dice similarity coefficient, true positive fraction, centroid distance and Hausdorff distance. Moreover, to test generalization to other segmentation tasks, we also extend Crossbar-Net to two related tasks: (1) cardiac segmentation in MR images and (2) breast mass segmentation in X-ray images, with promising results for both. Our implementation is released at https://github.com/Qianyu1226/Crossbar-Net.
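The crossbar-patch idea, two orthogonal non-square patches centred on the same pixel, can be sketched with plain numpy slicing; the patch sizes here are illustrative, not the paper's settings:

```python
import numpy as np

def crossbar_patches(img, cy, cx, long_side=64, short_side=16):
    """Extract a crossbar patch pair centred on (cy, cx): a tall narrow
    vertical strip and a short wide horizontal strip. Together they see
    far along both axes (global context) while staying thin (local detail)."""
    h, w = long_side // 2, short_side // 2
    vertical = img[cy - h:cy + h, cx - w:cx + w]    # long in y, short in x
    horizontal = img[cy - w:cy + w, cx - h:cx + h]  # short in y, long in x
    return vertical, horizontal
```

Each strip would then be fed to its corresponding sub-model (vertical or horizontal) in the cascaded training scheme described above.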
|
272
|
Abstract
In recent years, deep learning has garnered tremendous success in a variety of application domains. This new field of machine learning has been growing rapidly and has been applied to most traditional application domains, as well as some new areas that present more opportunities. Different methods have been proposed based on different categories of learning, including supervised, semi-supervised, and unsupervised learning. Experimental results show state-of-the-art performance using deep learning compared to traditional machine learning approaches in the fields of image processing, computer vision, speech recognition, machine translation, art, medical imaging, medical information processing, robotics and control, bioinformatics, natural language processing, cybersecurity, and many others. This paper presents a brief survey of the advances that have occurred in the area of Deep Learning (DL), starting with the Deep Neural Network (DNN). The survey goes on to cover the Convolutional Neural Network (CNN), the Recurrent Neural Network (RNN), including Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU), the Auto-Encoder (AE), the Deep Belief Network (DBN), the Generative Adversarial Network (GAN), and Deep Reinforcement Learning (DRL). Additionally, we discuss recent developments, such as advanced variant DL techniques based on these approaches. This work considers most of the papers published after 2012, when the modern era of deep learning began. Furthermore, DL approaches that have been explored and evaluated in different application domains are also included in this survey, as are recently developed frameworks, SDKs, and benchmark datasets used for implementing and evaluating deep learning approaches. Some surveys have been published on DL using neural networks, and there is a survey on Reinforcement Learning (RL); however, those papers have not discussed individual advanced techniques for training large-scale deep learning models or the recently developed methods of generative models.
|
273
|
Nie D, Wang L, Adeli E, Lao C, Lin W, Shen D. 3-D Fully Convolutional Networks for Multimodal Isointense Infant Brain Image Segmentation. IEEE TRANSACTIONS ON CYBERNETICS 2019; 49:1123-1136. [PMID: 29994385 PMCID: PMC6230311 DOI: 10.1109/tcyb.2018.2797905] [Citation(s) in RCA: 46] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Accurate segmentation of infant brain images into different regions of interest is one of the most important fundamental steps in studying early brain development. In the isointense phase (approximately 6-8 months of age), white matter and gray matter exhibit similar levels of intensity in magnetic resonance (MR) images, due to the ongoing myelination and maturation. This results in extremely low tissue contrast and thus makes tissue segmentation very challenging. Existing methods for tissue segmentation in this isointense phase usually employ patch-based sparse labeling on a single modality. To address the challenge, we propose a novel 3-D multimodal fully convolutional network (FCN) architecture for segmentation of isointense-phase brain MR images. Specifically, we extend the conventional FCN architectures from 2-D to 3-D and, rather than directly using the FCN, we integrate coarse (naturally high-resolution) and dense (highly semantic) feature maps to better model tiny tissue regions. In addition, we propose a transformation module to better connect the aggregating layers and a fusion module to better serve the fusion of feature maps. We compare the performance of our approach with several baseline and state-of-the-art methods on two sets of isointense-phase brain images. The comparison results show that our proposed 3-D multimodal FCN model outperforms all previous methods by a large margin in terms of segmentation accuracy. In addition, the proposed framework also achieves faster segmentation compared to all other methods. Our experiments further demonstrate that: 1) carefully integrating coarse and dense feature maps can considerably improve the segmentation performance; 2) batch normalization can speed up the convergence of the networks, especially when hierarchical feature aggregations occur; and 3) integrating multimodal information can further boost the segmentation performance.
|
274
|
Wang L, Nie D, Li G, Puybareau É, Dolz J, Zhang Q, Wang F, Xia J, Wu Z, Chen J, Thung KH, Bui TD, Shin J, Zeng G, Zheng G, Fonov VS, Doyle A, Xu Y, Moeskops P, Pluim JP, Desrosiers C, Ayed IB, Sanroma G, Benkarim OM, Casamitjana A, Vilaplana V, Lin W, Li G, Shen D. Benchmark on Automatic 6-month-old Infant Brain Segmentation Algorithms: The iSeg-2017 Challenge. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:10.1109/TMI.2019.2901712. [PMID: 30835215 PMCID: PMC6754324 DOI: 10.1109/tmi.2019.2901712] [Citation(s) in RCA: 74] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
Accurate segmentation of infant brain magnetic resonance (MR) images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) is an indispensable foundation for early study of brain growth patterns and of morphological changes in neurodevelopmental disorders. Nevertheless, in the isointense phase (approximately 6-9 months of age), due to the inherent myelination and maturation process, WM and GM exhibit similar levels of intensity in both T1-weighted (T1w) and T2-weighted (T2w) MR images, making tissue segmentation very challenging. Although many efforts have been devoted to brain segmentation, only a few studies have focused on the segmentation of 6-month infant brain images. With the goal of boosting methodological development in the community, the iSeg-2017 challenge (http://iseg2017.web.unc.edu) provides a set of 6-month infant subjects with manual labels for training and testing the participating methods. Among the 21 automatic segmentation methods participating in iSeg-2017, we review the 8 top-ranked teams, in terms of Dice ratio, modified Hausdorff distance and average surface distance, and introduce their pipelines, implementations, and source codes. We further discuss limitations and possible future directions. We hope the dataset in iSeg-2017 and this review article can provide insights into methodological development for the community.
Affiliation(s)
- Li Wang, Dong Nie, Guannan Li, Qian Zhang, Fan Wang, Jing Xia, Zhengwang Wu, Jiawei Chen, Kim-Han Thung, Weili Lin, Gang Li: Department of Radiology and Biomedical Research Imaging Center, UNC-Chapel Hill, NC, 27599 USA
- Élodie Puybareau, Yongchao Xu: EPITA Research and Development Laboratory (LRDE), Le Kremlin-Bicêtre, France
- Jose Dolz, Christian Desrosiers, Ismail Ben Ayed: Laboratory for Imagery, Vision and Artificial Intelligence (LIVIA), Ecole de Technologie Supérieure, Montreal, Canada
- Toan Duc Bui, Jitae Shin: Media System Lab., School of Electronic and Electrical Eng., Sungkyunkwan University (SKKU), Korea
- Guodong Zeng, Guoyan Zheng: Information Processing in Medical Intervention Lab., University of Bern, Switzerland
- Vladimir S. Fonov: NeuroImaging and Surgical Technologies Lab, Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada
- Andrew Doyle: McGill Centre for Integrative Neuroscience, Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada
- Pim Moeskops, Josien P.W. Pluim: Medical Image Analysis Group, Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- Gerard Sanroma, Oualid M. Benkarim: Simulation, Imaging and Modelling for Biomedical Systems (SIMBIOsys), Universitat Pompeu Fabra, Spain
- Dinggang Shen: Department of Radiology and Biomedical Research Imaging Center, UNC-Chapel Hill, NC, USA, and also Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
|
275
|
Liu L, Chen S, Zhang F, Wu FX, Pan Y, Wang J. Deep convolutional neural network for automatically segmenting acute ischemic stroke lesion in multi-modality MRI. Neural Comput Appl 2019. [DOI: 10.1007/s00521-019-04096-x] [Citation(s) in RCA: 33] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
276
|
Abstract
In brain magnetic resonance (MR) images, image quality is often degraded by noise and outliers, which makes it difficult for doctors to segment and extract brain tissue accurately. In this paper, a modified robust fuzzy c-means (MRFCM) algorithm for brain MR image segmentation is proposed. Based on the gray-level information of the pixels in the local neighborhood, the deviation of each adjacent pixel from the neighborhood median is calculated in kernel space, yielding a normalized adaptive weighted measure for each pixel. Both impulse noise and Gaussian noise are thereby effectively suppressed, while the detail and edge information of the brain MR image is better preserved. In addition, the gray-level histogram is used in place of single pixels during the clustering process. The segmentation results of MRFCM are compared with state-of-the-art fuzzy-clustering algorithms; the proposed algorithm shows stronger noise suppression, better robustness to various types of noise and higher segmentation accuracy.
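For orientation, the membership/centroid alternation that MRFCM builds on is the standard fuzzy c-means update; the sketch below shows that unmodified baseline on 1-D intensities, with the paper's kernel-space, median-weighted distance deliberately left out (an illustration of the core iteration, not the proposed MRFCM):

```python
import numpy as np

def fcm(x, c=2, m=2.0, iters=60, seed=0):
    """Plain fuzzy c-means on a 1-D intensity vector.

    MRFCM keeps this alternation but replaces the squared Euclidean
    distance with a kernel-space measure weighted by median-based
    neighborhood deviations, which is what suppresses impulse noise.
    """
    x = np.asarray(x, float)
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                       # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        v = um @ x / um.sum(axis=1)          # centroid update
        d = (x[None, :] - v[:, None]) ** 2 + 1e-12
        u = d ** (-1.0 / (m - 1))            # membership update
        u /= u.sum(axis=0)
    return v, u
```

On a histogram-based variant, as in the paper, `x` would be bin centers and the updates would carry bin counts as weights.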
|
277
|
Lessmann N, van Ginneken B, de Jong PA, Išgum I. Iterative fully convolutional neural networks for automatic vertebra segmentation and identification. Med Image Anal 2019; 53:142-155. [PMID: 30771712 DOI: 10.1016/j.media.2019.02.005] [Citation(s) in RCA: 120] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2018] [Revised: 01/19/2019] [Accepted: 02/11/2019] [Indexed: 12/28/2022]
Abstract
Precise segmentation and anatomical identification of the vertebrae provides the basis for automatic analysis of the spine, such as detection of vertebral compression fractures or other abnormalities. Most dedicated spine CT and MR scans, as well as scans of the chest, abdomen or neck, cover only part of the spine. Segmentation and identification should therefore not rely on the visibility of certain vertebrae or a certain number of vertebrae. We propose an iterative instance segmentation approach that uses a fully convolutional neural network to segment and label vertebrae one after the other, independently of the number of visible vertebrae. This instance-by-instance segmentation is enabled by combining the network with a memory component that retains information about already segmented vertebrae. The network iteratively analyzes image patches, using information from both image and memory to search for the next vertebra. To traverse the image efficiently, we include the prior knowledge that vertebrae are always located next to each other, which is used to follow the vertebral column. The network concurrently performs multiple tasks: segmentation of a vertebra, regression of its anatomical label, and prediction of whether the vertebra is completely visible in the image, which allows incompletely visible vertebrae to be excluded from further analyses. The predicted anatomical labels of the individual vertebrae are additionally refined with a maximum likelihood approach, choosing the overall most likely labeling when all detected vertebrae are taken into account. The method was evaluated with five diverse datasets, including multiple modalities (CT and MR), various fields of view and coverages of different sections of the spine, and a particularly challenging set of low-dose chest CT scans. For vertebra segmentation, the average Dice score was 94.9 ± 2.1% with an average absolute symmetric surface distance of 0.2 ± 10.1 mm. The anatomical identification had an accuracy of 93%, corresponding to a single case with mislabeled vertebrae. Vertebrae were classified as completely or incompletely visible with an accuracy of 97%. The proposed iterative segmentation method compares favorably with state-of-the-art methods and is fast, flexible and generalizable.
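The maximum-likelihood label refinement described above can be sketched as follows: because vertebrae are consecutive, the full labeling is fixed by the label of the first detection, so it suffices to score every feasible start label jointly (the per-detection probability vectors below are illustrative stand-ins for the network's regression output, not values from the paper):

```python
import math

def refine_labels(probs, n_labels=24):
    """Jointly relabel consecutively detected vertebrae by maximum likelihood.

    probs[i][k] is the probability that detection i carries anatomical
    label k. Each candidate start label s implies the labeling
    s, s+1, ..., s+n-1; the s with the highest joint log-likelihood wins.
    """
    n = len(probs)
    best_s, best_ll = 0, -math.inf
    for s in range(n_labels - n + 1):
        ll = sum(math.log(probs[i][s + i] + 1e-12) for i in range(n))
        if ll > best_ll:
            best_s, best_ll = s, ll
    return list(range(best_s, best_s + n))
```

Note how a per-detection argmax can yield an inconsistent sequence (e.g. two detections both claiming the same label) that the joint refinement repairs.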
Affiliation(s)
- Nikolas Lessmann, Ivana Išgum: Image Sciences Institute, University Medical Center Utrecht, Room Q.02.4.45, 3508 GA Utrecht, P.O. Box 85500, The Netherlands
- Bram van Ginneken: Diagnostic Image Analysis Group, Radboud University Medical Center Nijmegen, The Netherlands
- Pim A de Jong: Department of Radiology, University Medical Center Utrecht, The Netherlands; Utrecht University, The Netherlands
|
278
|
Yurtsever M, Yurtsever U. Use of a convolutional neural network for the classification of microbeads in urban wastewater. CHEMOSPHERE 2019; 216:271-280. [PMID: 30384295 DOI: 10.1016/j.chemosphere.2018.10.084] [Citation(s) in RCA: 30] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/23/2018] [Revised: 10/08/2018] [Accepted: 10/14/2018] [Indexed: 06/08/2023]
Abstract
Scientists are searching for a practical model that can serve as a standard for sorting, identifying, and characterizing microplastics, which are common in water sources and wastewaters. The microbeads (MBs) used in cosmetics and discharged into sewer systems after use cause substantial microplastics pollution in the receiving waters. Today, the use of plastic microbeads in cosmetics is banned, and existing uses are to be discontinued within a few years. Yet there are no restrictions on the use of microbeads in a number of industries, cleaning products, pharmaceuticals and medical practices. In this context, the determination and classification of the MBs that have been, and continue to be, discharged into water sources represent crucial problems. In this work, we examined a new approach for classifying MBs based on microscopic images. A Convolutional Neural Network (CNN), a deep learning algorithm, was employed for classification, with the GoogLeNet architecture serving as the model. The network was built from scratch, trained, and then tested on a total of 42,928 images containing MBs from 5 distinct cleansers. The CNN achieved a classification performance of 89% for MBs in wastewater.
Affiliation(s)
- Meral Yurtsever: Department of Environmental Engineering, Sakarya University, 54187, Sakarya, Turkey
- Ulaş Yurtsever: Department of Computer and Information Engineering, Sakarya University, 54187, Sakarya, Turkey
|
279
|
Guha Roy A, Conjeti S, Navab N, Wachinger C. QuickNAT: A fully convolutional network for quick and accurate segmentation of neuroanatomy. Neuroimage 2019; 186:713-727. [DOI: 10.1016/j.neuroimage.2018.11.042] [Citation(s) in RCA: 68] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2018] [Revised: 10/29/2018] [Accepted: 11/23/2018] [Indexed: 01/27/2023] Open
|
280
|
Wang C, Tyagi N, Rimner A, Hu YC, Veeraraghavan H, Li G, Hunt M, Mageras G, Zhang P. Segmenting lung tumors on longitudinal imaging studies via a patient-specific adaptive convolutional neural network. Radiother Oncol 2019; 131:101-107. [PMID: 30773175 PMCID: PMC6615045 DOI: 10.1016/j.radonc.2018.10.037] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2018] [Revised: 10/25/2018] [Accepted: 10/29/2018] [Indexed: 10/27/2022]
Abstract
PURPOSE To design a deep learning algorithm that automatically delineates lung tumors seen on weekly magnetic resonance imaging (MRI) scans acquired during radiotherapy and facilitates the analysis of geometric tumor changes. METHODS This longitudinal imaging study comprised 9 lung cancer patients who had 6-7 weekly T2-weighted MRI scans during radiotherapy. Tumors on all scans were manually contoured as the ground truth. A patient-specific adaptive convolutional neural network (A-net) was developed to simulate the workflow of adaptive radiotherapy and to utilize past weekly MRIs and tumor contours to segment the tumor on the current weekly MRI. To augment the training data, each voxel inside the volume of interest was expanded into a 3 × 3 cm patch used as the input, with the classification of the corresponding patch (background or tumor) as the output. Training was updated weekly to incorporate the latest MRI scan. For comparison, a population-based neural network was implemented, trained, and validated with a leave-one-out scheme. Both algorithms were evaluated by their precision, Dice coefficient, and root mean square surface distance between the manual and computerized segmentations. RESULTS Training of A-net converged well within 2 h on a computer cluster. A-net segmented the weekly MRIs with a precision, Dice coefficient, and root mean square surface distance of 0.81 ± 0.10, 0.82 ± 0.10, and 2.4 ± 1.4 mm, outperforming the population-based algorithm at 0.63 ± 0.21, 0.64 ± 0.19, and 4.1 ± 3.0 mm, respectively. CONCLUSION A-net can be feasibly integrated into the clinical workflow of a longitudinal imaging study and become a valuable tool to facilitate decision-making in adaptive radiotherapy.
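The voxel-to-patch training-data expansion can be illustrated in 2-D as below; the patch size and the 2-D slice handling are assumptions of this sketch, not details from the paper:

```python
import numpy as np

def voxelwise_patches(image, mask, half=1):
    """Expand every pixel of a 2-D slice into a training pair.

    The patch around a pixel is the network input; the pixel's manual
    contour label (0 = background, 1 = tumor) is the output, mirroring
    the A-net augmentation where each voxel in the volume of interest
    becomes one training sample.
    """
    pad = np.pad(image, half, mode="edge")   # replicate borders so every pixel gets a full patch
    patches, labels = [], []
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            patches.append(pad[i:i + 2 * half + 1, j:j + 2 * half + 1])
            labels.append(int(mask[i, j]))
    return np.stack(patches), np.array(labels)
```

Weekly retraining then simply re-runs this extraction on the newest scan and appends the pairs to the training set.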
Affiliation(s)
- Chuang Wang, Neelam Tyagi, Andreas Rimner, Yu-Chi Hu, Harini Veeraraghavan, Guang Li, Margie Hunt, Gig Mageras, Pengpeng Zhang: Department of Medical Physics and Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, USA
|
281
|
Kong Z, Li T, Luo J, Xu S. Automatic Tissue Image Segmentation Based on Image Processing and Deep Learning. JOURNAL OF HEALTHCARE ENGINEERING 2019; 2019:2912458. [PMID: 30838122 PMCID: PMC6374831 DOI: 10.1155/2019/2912458] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/04/2018] [Revised: 09/17/2018] [Accepted: 12/20/2018] [Indexed: 11/17/2022]
Abstract
Image segmentation plays an important role in multimodality imaging, especially in fusing structural images from CT and MRI with functional images collected by optical or other novel imaging technologies. Image segmentation also provides a detailed structural description for quantitative visualization of therapeutic light distribution in the human body when combined with 3D light transport simulation methods. Here, we first use preprocessing methods such as wavelet denoising to extract accurate contours of different tissues, such as skull, cerebrospinal fluid (CSF), grey matter (GM), and white matter (WM), on 5 MRI head image datasets. We then perform automatic image segmentation with deep learning, using a convolutional neural network accelerated by parallel computing. These approaches greatly reduce processing time compared to manual and semiautomatic segmentation and are of great importance for improving speed and accuracy as more and more samples are learned. The volumes of the segmented grey and white matter are computed automatically, which indicates the potential of this segmentation technology for quantitatively diagnosing cerebral atrophy. We demonstrate the great potential of such combined image processing and deep learning for automatic tissue segmentation in neurological medicine.
Affiliation(s)
- Ting Li, Shengpu Xu: Institute of Biomedical Engineering, Chinese Academy of Medical Science and Peking Union, Tianjin 300192, China
- Junyi Luo: University of Electronic Science and Technology of China, Chengdu, China
|
282
|
Soffer S, Ben-Cohen A, Shimon O, Amitai MM, Greenspan H, Klang E. Convolutional Neural Networks for Radiologic Images: A Radiologist's Guide. Radiology 2019; 290:590-606. [PMID: 30694159 DOI: 10.1148/radiol.2018180547] [Citation(s) in RCA: 308] [Impact Index Per Article: 51.3] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Deep learning has rapidly advanced in various fields within the past few years and has recently gained particular attention in the radiology community. This article provides an introduction to deep learning technology and presents the stages entailed in the design process of deep learning radiology research. In addition, the article details the results of a survey of the application of deep learning, specifically convolutional neural networks, to radiologic imaging, focused on the following five major organ systems: chest, breast, brain, musculoskeletal system, and abdomen and pelvis. The survey of the studies is followed by a discussion of current challenges and future trends and their potential implications for radiology. This article may be used as a guide for radiologists planning research in the field of radiologic image analysis using convolutional neural networks.
Affiliation(s)
- Shelly Soffer, Avi Ben-Cohen, Orit Shimon, Michal Marianne Amitai, Hayit Greenspan, Eyal Klang: From the Department of Diagnostic Imaging, Sheba Medical Center, Emek HaEla St 1, Ramat Gan, Israel (S.S., M.M.A., E.K.); Faculty of Engineering, Department of Biomedical Engineering, Medical Image Processing Laboratory, Tel Aviv University, Tel Aviv, Israel (A.B., H.G.); and Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel (S.S., O.S.)
|
283
|
Qin P, Zhang J, Zeng J, Liu H, Cui Y. A framework combining DNN and level-set method to segment brain tumor in multi-modalities MR image. Soft comput 2019. [DOI: 10.1007/s00500-019-03778-x] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/02/2023]
|
284
|
Hai J, Qiao K, Chen J, Tan H, Xu J, Zeng L, Shi D, Yan B. Fully Convolutional DenseNet with Multiscale Context for Automated Breast Tumor Segmentation. JOURNAL OF HEALTHCARE ENGINEERING 2019; 2019:8415485. [PMID: 30774849 PMCID: PMC6350548 DOI: 10.1155/2019/8415485] [Citation(s) in RCA: 34] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/29/2018] [Revised: 10/20/2018] [Accepted: 11/25/2018] [Indexed: 11/17/2022]
Abstract
Breast tumor segmentation plays a crucial role in subsequent disease diagnosis, and most algorithms require interactive priors to first locate tumors and then perform segmentation based on tumor-centric candidates. In this paper, we propose a fully convolutional network that achieves automatic segmentation of breast tumors in an end-to-end manner. Considering the diversity of shape and size of malignant tumors in digital mammograms, we introduce multiscale image information into a fully convolutional dense network architecture to improve segmentation precision. Atrous convolutions at multiple sampling rates are concatenated to acquire different fields of view of image features without adding parameters, thereby avoiding overfitting. A weighted loss function, set according to the proportion of tumor pixels in the entire image, is also employed during training to mitigate the class-imbalance problem. Qualitative and quantitative comparisons demonstrate that the proposed algorithm achieves automatic tumor segmentation with high precision for tumors of various sizes and shapes, without preprocessing or postprocessing.
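A plausible form of the proportion-based weighted loss described above is an inverse-frequency weighted binary cross-entropy; the exact weighting in the paper may differ, so treat this as an illustration of the idea:

```python
import math

def weighted_bce(pred, target, eps=1e-7):
    """Binary cross-entropy with the tumor class up-weighted by the
    inverse of its pixel proportion, so that the few tumor pixels are
    not drowned out by the background during training.
    """
    p_tumor = max(sum(target) / len(target), eps)   # fraction of tumor pixels
    loss = 0.0
    for p, t in zip(pred, target):
        p = min(max(p, eps), 1 - eps)               # numerical safety
        w = 1.0 / p_tumor if t == 1 else 1.0 / (1.0 - p_tumor)
        loss += -w * (t * math.log(p) + (1 - t) * math.log(1 - p))
    return loss / len(target)
```

With this weighting, missing the rare tumor pixels is penalized far more heavily than misclassifying the same number of background pixels.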
Affiliation(s)
- Jinjin Hai, Kai Qiao, Jian Chen, Jingbo Xu, Lei Zeng, Bin Yan: National Digital Switching System Engineering and Technological Research Center, Zhengzhou, Henan Province, China
- Hongna Tan, Dapeng Shi: Department of Radiology, Henan Provincial People's Hospital, Zhengzhou, Henan Province, China
|
285
|
Dolz J, Desrosiers C, Ben Ayed I. IVD-Net: Intervertebral Disc Localization and Segmentation in MRI with a Multi-modal UNet. LECTURE NOTES IN COMPUTER SCIENCE 2019. [DOI: 10.1007/978-3-030-13736-6_11] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/04/2022]
|
286
|
Sahiner B, Pezeshk A, Hadjiiski LM, Wang X, Drukker K, Cha KH, Summers RM, Giger ML. Deep learning in medical imaging and radiation therapy. Med Phys 2019; 46:e1-e36. [PMID: 30367497 PMCID: PMC9560030 DOI: 10.1002/mp.13264] [Citation(s) in RCA: 398] [Impact Index Per Article: 66.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2018] [Revised: 09/18/2018] [Accepted: 10/09/2018] [Indexed: 12/15/2022] Open
Abstract
The goals of this review paper on deep learning (DL) in medical imaging and radiation therapy are to (a) summarize what has been achieved to date; (b) identify common and unique challenges, and strategies that researchers have taken to address these challenges; and (c) identify some of the promising avenues for the future both in terms of applications as well as technical innovations. We introduce the general principles of DL and convolutional neural networks, survey five major areas of application of DL in medical imaging and radiation therapy, identify common themes, discuss methods for dataset expansion, and conclude by summarizing lessons learned, remaining challenges, and future directions.
Affiliation(s)
- Berkman Sahiner, Aria Pezeshk, Kenny H. Cha: DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Xiaosong Wang, Ronald M. Summers: Imaging Biomarkers and Computer-aided Diagnosis Lab, Radiology and Imaging Sciences, NIH Clinical Center, Bethesda, MD 20892-1182, USA
- Karen Drukker: Department of Radiology, University of Chicago, Chicago, IL 60637, USA
|
287
|
|
288
|
Sun L, Fan Z, Ding X, Huang Y, Paisley J. Joint CS-MRI Reconstruction and Segmentation with a Unified Deep Network. LECTURE NOTES IN COMPUTER SCIENCE 2019. [DOI: 10.1007/978-3-030-20351-1_38] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
|
289
|
Ali HM, Kaiser MS, Mahmud M. Application of Convolutional Neural Network in Segmenting Brain Regions from MRI Data. Brain Inform 2019. [DOI: 10.1007/978-3-030-37078-7_14] [Citation(s) in RCA: 41] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022] Open
|
290
|
Van Opbroek A, Achterberg HC, Vernooij MW, De Bruijne M. Transfer Learning for Image Segmentation by Combining Image Weighting and Kernel Learning. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:213-224. [PMID: 30047874 DOI: 10.1109/tmi.2018.2859478] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Many medical image segmentation methods are based on the supervised classification of voxels. Such methods generally perform well when provided with a training set that is representative of the test images to be segmented. However, problems may arise when training and test data follow different distributions, for example due to differences in scanners, scanning protocols, or patient groups. Under such conditions, weighting training images according to distribution similarity has been shown to greatly improve performance. However, this assumes that part of the training data is representative of the test data; it does not make unrepresentative data more similar. We therefore investigate kernel learning as a way to reduce differences between training and test data and explore the added value of kernel learning for image weighting. We also propose a new image-weighting method that minimizes the maximum mean discrepancy (MMD) between training and test data, which enables the joint optimization of image weights and kernel. Experiments on brain tissue, white matter lesion, and hippocampus segmentation show that both kernel learning and image weighting, when used separately, greatly improve performance on heterogeneous data, with MMD weighting obtaining performance similar to previously proposed image-weighting methods. Combining image weighting and kernel learning, optimized either individually or jointly, can give a small additional improvement in performance.
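The maximum mean discrepancy that the proposed weighting minimizes can be estimated with an RBF kernel as below (the standard biased MMD² estimator over two sample sets, not the authors' implementation):

```python
import numpy as np

def mmd_rbf(x, y, gamma=1.0):
    """Squared maximum mean discrepancy between samples x and y.

    x, y are (n_samples, n_features) arrays; MMD² is zero when the two
    empirical distributions coincide and grows as they diverge, which is
    exactly the discrepancy the image weights are chosen to shrink.
    """
    x = np.atleast_2d(np.asarray(x, float))
    y = np.atleast_2d(np.asarray(y, float))

    def k(a, b):  # RBF (Gaussian) kernel matrix
        d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)

    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()
```

In the image-weighting setting, each row would be a feature vector summarizing one training or test image, and the weights rescale the training rows' contribution to the first and third kernel means.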
|
291
|
Dolz J, Ben Ayed I, Desrosiers C. Dense Multi-path U-Net for Ischemic Stroke Lesion Segmentation in Multiple Image Modalities. BRAINLESION: GLIOMA, MULTIPLE SCLEROSIS, STROKE AND TRAUMATIC BRAIN INJURIES 2019. [DOI: 10.1007/978-3-030-11723-8_27] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
|
292
|
Kim YC. Fast upper airway magnetic resonance imaging for assessment of speech production and sleep apnea. PRECISION AND FUTURE MEDICINE 2018. [DOI: 10.23838/pfm.2018.00100] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/02/2023] Open
|
293
|
Hashemi SR, Salehi SSM, Erdogmus D, Prabhu SP, Warfield SK, Gholipour A. Asymmetric Loss Functions and Deep Densely Connected Networks for Highly Imbalanced Medical Image Segmentation: Application to Multiple Sclerosis Lesion Detection. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2018; 7:1721-1735. [PMID: 31528523 PMCID: PMC6746414 DOI: 10.1109/access.2018.2886371] [Citation(s) in RCA: 70] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/07/2023]
Abstract
Fully convolutional deep neural networks have been asserted to be fast and precise frameworks with great potential in image segmentation. One of the major challenges in training such networks raises when data is unbalanced, which is common in many medical imaging applications such as lesion segmentation where lesion class voxels are often much lower in numbers than non-lesion voxels. A trained network with unbalanced data may make predictions with high precision and low recall, being severely biased towards the non-lesion class which is particularly undesired in most medical applications where false negatives are actually more important than false positives. Various methods have been proposed to address this problem including two step training, sample re-weighting, balanced sampling, and more recently similarity loss functions, and focal loss. In this work we trained fully convolutional deep neural networks using an asymmetric similarity loss function to mitigate the issue of data imbalance and achieve much better trade-off between precision and recall. To this end, we developed a 3D fully convolutional densely connected network (FC-DenseNet) with large overlapping image patches as input and an asymmetric similarity loss layer based on Tversky index (using F β scores). We used large overlapping image patches as inputs for intrinsic and extrinsic data augmentation, a patch selection algorithm, and a patch prediction fusion strategy using B-spline weighted soft voting to account for the uncertainty of prediction in patch borders. We applied this method to multiple sclerosis (MS) lesion segmentation based on two different datasets of MSSEG 2016 and ISBI longitudinal MS lesion segmentation challenge, where we achieved average Dice similarity coefficients of 69.9% and 65.74%, respectively, achieving top performance in both challenges. We compared the performance of our network trained with F β loss, focal loss, and generalized Dice loss (GDL) functions. 
Through September 2018, our network trained with focal loss ranked first according to the ISBI challenge overall score and achieved the lowest reported lesion false positive rate among all submitted methods. Our network trained with the asymmetric similarity loss yielded the lowest surface distance and the best lesion true positive rate, arguably the most important performance metric in a clinical decision support system for lesion detection. The asymmetric similarity loss function based on F_β scores allows training networks that strike a better balance between precision and recall in highly unbalanced image segmentation. We achieved superior performance in MS lesion segmentation using a patchwise 3D FC-DenseNet with a patch prediction fusion strategy, trained with asymmetric similarity loss functions.
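The asymmetric similarity loss described in this abstract is built on the Tversky index, which generalises the Dice score by weighting false negatives and false positives differently. A minimal NumPy sketch of this idea (the `beta` value and function shape here are illustrative assumptions, not the authors' exact implementation):

```python
import numpy as np

def tversky_loss(pred, target, beta=0.7, eps=1e-7):
    """Asymmetric similarity loss based on the Tversky index.

    beta > 0.5 penalises false negatives more than false positives,
    trading precision for recall -- desirable when missing a lesion
    is costlier than a spurious detection.
    """
    pred = pred.ravel().astype(float)      # predicted lesion probabilities
    target = target.ravel().astype(float)  # binary ground-truth mask
    tp = np.sum(pred * target)             # soft true positives
    fn = np.sum((1.0 - pred) * target)     # soft false negatives
    fp = np.sum(pred * (1.0 - target))     # soft false positives
    tversky = tp / (tp + beta * fn + (1.0 - beta) * fp + eps)
    return 1.0 - tversky
```

With beta = 0.5 this reduces to the familiar soft Dice (F1) loss; increasing beta shifts the operating point towards the higher lesion true positive rate the authors report.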
Affiliation(s)
- Seyed Raein Hashemi
- Computational Radiology Laboratory, Boston Children's Hospital and Harvard Medical School, Boston, MA 02115
- Computer and Information Science Department, Northeastern University, Boston, MA 02115
- Seyed Sadegh Mohseni Salehi
- Computational Radiology Laboratory, Boston Children's Hospital and Harvard Medical School, Boston, MA 02115
- Electrical and Computer Engineering Department, Northeastern University, Boston, MA 02115
- Deniz Erdogmus
- Electrical and Computer Engineering Department, Northeastern University, Boston, MA 02115
- Sanjay P Prabhu
- Computational Radiology Laboratory, Boston Children's Hospital and Harvard Medical School, Boston, MA 02115
- Simon K Warfield
- Computational Radiology Laboratory, Boston Children's Hospital and Harvard Medical School, Boston, MA 02115
- Ali Gholipour
- Computational Radiology Laboratory, Boston Children's Hospital and Harvard Medical School, Boston, MA 02115
|
294
|
Trivizakis E, Manikis GC, Nikiforaki K, Drevelegas K, Constantinides M, Drevelegas A, Marias K. Extending 2-D Convolutional Neural Networks to 3-D for Advancing Deep Learning Cancer Classification With Application to MRI Liver Tumor Differentiation. IEEE J Biomed Health Inform 2018; 23:923-930. [PMID: 30561355 DOI: 10.1109/jbhi.2018.2886276] [Citation(s) in RCA: 53] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
Abstract
Deep learning (DL) architectures have opened new horizons in medical image analysis, attaining unprecedented performance in tasks such as tissue classification and segmentation as well as prediction of several clinical outcomes. In this paper, we propose and evaluate a novel three-dimensional (3-D) convolutional neural network (CNN) designed for tissue classification in medical imaging, applied to discriminating between primary and metastatic liver tumors from diffusion weighted MRI (DW-MRI) data. The proposed network consists of four consecutive strided 3-D convolutional layers with 3 × 3 × 3 kernel size and rectified linear unit (ReLU) activation, followed by a fully connected layer with 2048 neurons and a Softmax layer for binary classification. A dataset comprising 130 DW-MRI scans was used for training and validation of the network. To the best of our knowledge, this is the first DL solution for this specific clinical problem and the first 3-D CNN for cancer classification operating directly on whole 3-D tomographic data without the need for any preprocessing step such as region cropping, annotation, or detection of regions of interest. The classification results, 83% accuracy (3-D) versus 69.6% and 65.2% (2-D), demonstrate a significant improvement in tissue classification accuracy compared to two 2-D CNNs of different architectures designed for the same clinical problem on the same dataset. These results suggest that the proposed 3-D CNN architecture can bring significant benefit to DW-MRI liver discrimination and, potentially, to numerous other tissue classification problems based on tomographic data, especially in size-limited, disease-specific clinical datasets.
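Each of the four strided 3 × 3 × 3 convolutions described above shrinks the spatial grid according to the standard convolution output-size formula. The sketch below traces shapes through such a stack; the input volume size, stride of 2 and padding of 1 are assumptions for illustration, since the abstract does not state them:

```python
def strided_conv3d_out(shape, kernel=3, stride=2, pad=1):
    """Per-dimension output size of one strided 3-D convolution:
    floor((d + 2*pad - kernel) / stride) + 1."""
    return tuple((d + 2 * pad - kernel) // stride + 1 for d in shape)

# Hypothetical DW-MRI volume (depth, height, width) passed through
# four consecutive strided 3x3x3 convolutional layers:
shape = (32, 64, 64)
for layer in range(4):
    shape = strided_conv3d_out(shape)
    print(f"after conv {layer + 1}: {shape}")
```

After the fourth layer the grid is small enough to be flattened into the 2048-neuron fully connected layer.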
|
295
|
Diniz PHB, Valente TLA, Diniz JOB, Silva AC, Gattass M, Ventura N, Muniz BC, Gasparetto EL. Detection of white matter lesion regions in MRI using SLIC0 and convolutional neural network. Comput Methods Programs Biomed 2018; 167:49-63. [PMID: 29706405 DOI: 10.1016/j.cmpb.2018.04.011] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/01/2017] [Revised: 02/12/2018] [Accepted: 04/17/2018] [Indexed: 05/06/2023]
Abstract
BACKGROUND AND OBJECTIVE White matter lesions are non-static brain lesions with a prevalence of up to 98% in the elderly population. Because they may be associated with several brain diseases, it is important that they are detected as soon as possible. Magnetic Resonance Imaging (MRI) provides three-dimensional data with the possibility to detect and emphasize contrast differences in soft tissues, providing rich information about human soft tissue anatomy. However, the amount of data in these images is far too large for manual analysis/interpretation, which is a difficult and time-consuming task for specialists. This work presents a computational methodology capable of detecting white matter lesion regions of the brain in FLAIR MRI. The techniques highlighted in this methodology are SLIC0 clustering for candidate segmentation and convolutional neural networks for candidate classification. METHODS The methodology proposed here consists of four steps: (1) image acquisition, (2) image preprocessing, (3) candidate segmentation and (4) candidate classification. RESULTS The methodology was applied to 91 magnetic resonance images provided by DASA and achieved an accuracy of 98.73%, a specificity of 98.77% and a sensitivity of 78.79%, with a false-positive rate of 0.005, without any false-positive reduction technique, in the detection of white matter lesion regions. CONCLUSIONS These results demonstrate the feasibility of analyzing brain MRI with SLIC0 and convolutional neural network techniques to detect white matter lesion regions.
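The accuracy, specificity and sensitivity figures reported above follow from the standard confusion-matrix definitions; a minimal sketch with hypothetical counts (not the paper's actual confusion matrix):

```python
def detection_metrics(tp, tn, fp, fn):
    """Standard confusion-matrix metrics for a binary detector."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)  # fraction of correct decisions
    specificity = tn / (tn + fp)                # true negative rate
    sensitivity = tp / (tp + fn)                # true positive rate (recall)
    return accuracy, specificity, sensitivity

# Illustrative counts only:
acc, spec, sens = detection_metrics(tp=78, tn=9877, fp=123, fn=21)
```

The gap between the high specificity and lower sensitivity in the paper reflects the class imbalance typical of lesion detection: non-lesion candidates vastly outnumber lesion candidates.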
Affiliation(s)
- Pedro Henrique Bandeira Diniz
- Pontifical Catholic University of Rio de Janeiro - PUC-Rio, R. São Vicente, 225, Gávea, Rio de Janeiro, RJ, 22453-900, Brazil.
- Thales Levi Azevedo Valente
- Pontifical Catholic University of Rio de Janeiro - PUC-Rio, R. São Vicente, 225, Gávea, Rio de Janeiro, RJ, 22453-900, Brazil.
- João Otávio Bandeira Diniz
- Federal University of Maranhão - UFMA, Applied Computing Group - NCA, Av. dos Portugueses, SN, Bacanga, São Luís, MA, 65085-580, Brazil.
- Aristófanes Corrêa Silva
- Federal University of Maranhão - UFMA, Applied Computing Group - NCA, Av. dos Portugueses, SN, Bacanga, São Luís, MA, 65085-580, Brazil.
- Marcelo Gattass
- Pontifical Catholic University of Rio de Janeiro - PUC-Rio, R. São Vicente, 225, Gávea, Rio de Janeiro, RJ, 22453-900, Brazil.
- Nina Ventura
- Paulo Niemeyer State Brain Institute - IEC, R. Lobo Júnior, 2293, Penha, Rio de Janeiro, RJ, 21070-060, Brazil.
- Bernardo Carvalho Muniz
- Paulo Niemeyer State Brain Institute - IEC, R. Lobo Júnior, 2293, Penha, Rio de Janeiro, RJ, 21070-060, Brazil.
|
296
|
Minnema J, van Eijnatten M, Kouw W, Diblen F, Mendrik A, Wolff J. CT image segmentation of bone for medical additive manufacturing using a convolutional neural network. Comput Biol Med 2018; 103:130-139. [DOI: 10.1016/j.compbiomed.2018.10.012] [Citation(s) in RCA: 72] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2018] [Revised: 10/11/2018] [Accepted: 10/13/2018] [Indexed: 11/16/2022]
|
297
|
Bai X, Zhang Y, Liu H, Wang Y. Intuitionistic Center-Free FCM Clustering for MR Brain Image Segmentation. IEEE J Biomed Health Inform 2018; 23:2039-2051. [PMID: 30507540 DOI: 10.1109/jbhi.2018.2884208] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
In this paper, an intuitionistic center-free fuzzy c-means clustering method (ICFFCM) is proposed for magnetic resonance (MR) brain image segmentation. First, to suppress the effect of noise in MR brain images, a pixel-to-pixel similarity with spatial information is defined. Then, to handle the vagueness in MR brain images as well as the uncertainty in the clustering process, a pixel-to-cluster similarity measure is defined using the intuitionistic fuzzy membership function. These two similarities are used to modify center-free FCM, improving the method's suitability for MR brain image segmentation. Second, on the basis of the improved center-free FCM method, a local information term, which is also intuitionistic and center-free, is appended to the objective function. This yields the final proposed ICFFCM. The consideration of local information further enhances the robustness of ICFFCM to noise in MR brain images. Experimental results on simulated and real MR brain image datasets show that ICFFCM is effective and robust. Moreover, ICFFCM outperforms several fuzzy-clustering-based methods and achieves results comparable to standard published methods such as statistical parametric mapping and the FMRIB automated segmentation tool.
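The center-free and intuitionistic modifications in this abstract are not fully specified, but they build on the classical FCM membership update, which can be sketched as background; `fcm_memberships` below is an illustrative helper on 1-D intensities, not the authors' ICFFCM:

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0, eps=1e-9):
    """Classical FCM membership update on 1-D intensities:
    u[i, k] = 1 / sum_j (d[i, k] / d[j, k]) ** (2 / (m - 1)),
    where d[i, k] is the distance from pixel k to cluster center i."""
    d = np.abs(centers[:, None] - X[None, :]) + eps   # (clusters, pixels)
    ratio = (d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=1)                    # columns sum to 1
```

ICFFCM replaces the pixel-to-center distance above with its intuitionistic pixel-to-cluster similarity and drops explicit centers altogether; this update is only the standard starting point the paper modifies.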
|
298
|
Abstract
Existing deep-learning object detection algorithms struggle to balance accuracy and speed for multi-object detection in complex, wide traffic scenes. To address this, we improve the object detection framework SSD (Single Shot MultiBox Detector) and propose a new detection framework, AP-SSD (Adaptive Perceive). We design a feature-extraction convolution kernel library composed of multi-shape Gabor and color Gabor kernels, then train and screen the optimal feature-extraction kernels to replace the low-level convolution kernels of the original network, improving detection accuracy. We then combine the single-image detection framework with convolutional long short-term memory (LSTM) networks, using a Bottleneck-LSTM memory layer to refine and propagate feature maps between frames. This establishes the temporal association of frame-level information in the network, reduces computational cost, succeeds in tracking and identifying targets affected by strong interference in video, and, with an added adaptive threshold strategy, reduces both the missed-alarm and false-alarm rates. Moreover, we design a dynamic region amplification network framework to improve detection and recognition accuracy for low-resolution small objects. Experiments on the improved AP-SSD show that the new algorithm achieves better detection results when small objects, multiple objects, cluttered backgrounds and large-area occlusion are involved, giving the algorithm good prospects for engineering application.
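The multi-shape Gabor kernels used to replace the low-level convolution kernels follow the standard Gabor form: a Gaussian envelope modulating a sinusoidal carrier. A sketch with illustrative parameter values (the abstract does not give the authors' parameterisation):

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=8.0, psi=0.0, gamma=0.5):
    """Real part of a 2-D Gabor filter at orientation theta and wavelength lam."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lam + psi)
    return envelope * carrier
```

A bank of such kernels at several orientations and wavelengths (plus per-channel "color Gabor" variants) yields the orientation- and scale-selective responses the authors train and screen before substituting them into the network.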
|
299
|
McBee MP, Awan OA, Colucci AT, Ghobadi CW, Kadom N, Kansagra AP, Tridandapani S, Auffermann WF. Deep Learning in Radiology. Acad Radiol 2018; 25:1472-1480. [PMID: 29606338 DOI: 10.1016/j.acra.2018.02.018] [Citation(s) in RCA: 248] [Impact Index Per Article: 35.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2017] [Revised: 02/22/2018] [Accepted: 02/23/2018] [Indexed: 02/07/2023]
Abstract
As radiology is inherently a data-driven specialty, it is especially conducive to data processing techniques. One such technique, deep learning (DL), has become a remarkably powerful tool for image processing in recent years. In this work, the Association of University Radiologists Radiology Research Alliance Task Force on Deep Learning provides an overview of DL for the radiologist. This article aims to present an overview of DL in a manner that is understandable to radiologists; to examine past, present, and future applications; and to evaluate how radiologists may benefit from this remarkable new tool. We describe several areas within radiology in which DL techniques are having the most significant impact: lesion or disease detection, classification, quantification, and segmentation. The legal and ethical hurdles to implementation are also discussed. By taking advantage of this powerful tool, radiologists can become increasingly accurate in their interpretations, with fewer errors, and spend more time focusing on patient care.
Affiliation(s)
- Morgan P McBee
- Department of Radiology and Medical Imaging, Cincinnati Children's Hospital, Cincinnati, Ohio
- Omer A Awan
- Department of Radiology, Temple University Hospital, Philadelphia, Pennsylvania
- Andrew T Colucci
- Department of Radiology, Beth Israel Deaconess Medical Center, Boston, Massachusetts
- Nadja Kadom
- Department of Radiology and Imaging Sciences, Children's Healthcare of Atlanta (Egleston), Emory University School of Medicine, Atlanta, Georgia
- Akash P Kansagra
- Mallinckrodt Institute of Radiology and Departments of Neurological Surgery and Neurology, Washington University School of Medicine, Saint Louis, Missouri
- Srini Tridandapani
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Electrical & Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia
- William F Auffermann
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, 1365 Clifton Road NE, Atlanta, GA 30322.
|
300
|
|