151
Automatic segmentation of white matter hyperintensities from brain magnetic resonance images in the era of deep learning and big data - A systematic review. Comput Med Imaging Graph 2021; 88:101867. [PMID: 33508567] [DOI: 10.1016/j.compmedimag.2021.101867]
Abstract
BACKGROUND White matter hyperintensities (WMH) of presumed vascular origin are visible and quantifiable neuroradiological markers of brain parenchymal change. These changes may range from damage secondary to inflammation and other neurological conditions through to healthy ageing. Fully automatic WMH quantification methods are promising, but traditional semi-automatic methods still seem to be preferred in clinical research. We systematically reviewed the literature for fully automatic methods developed in the last five years, to assess what are considered state-of-the-art techniques, as well as trends in the analysis of WMH of presumed vascular origin. METHOD We registered the systematic review protocol with the International Prospective Register of Systematic Reviews (PROSPERO), registration number CRD42019132200. We searched Medline, ScienceDirect, IEEE Xplore, and Web of Science for fully automatic methods developed from 2015 to July 2020. We assessed risk of bias and applicability of the studies using QUADAS-2. RESULTS The search yielded 2327 papers after removing 104 duplicates. After screening titles, abstracts and full text, 37 were selected for detailed analysis. Of these, 16 proposed a supervised segmentation method, 10 an unsupervised segmentation method, and 11 a deep learning segmentation method. Average DSC values ranged from 0.538 to 0.91, with the highest value obtained by an unsupervised segmentation method. Only four studies validated their method in longitudinal samples, and eight performed an additional validation using clinical parameters. Only 8 of 37 studies made their methods available in public repositories. CONCLUSIONS We found no evidence favouring deep learning methods over the more established k-NN, linear regression and unsupervised methods in this task.
Data and code availability, bias in study design and ground truth generation influence the wider validation and applicability of these methods in clinical research.
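As an editorial aside: the Dice similarity coefficient (DSC) used throughout this review to score WMH segmentations can be computed from two binary masks in a few lines. The NumPy sketch below is illustrative only, not code from any of the reviewed methods; the toy masks `a` and `b` are invented.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient (DSC) between two binary masks.

    DSC = 2*|A n B| / (|A| + |B|), ranging from 0 (no overlap) to 1.
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:          # both masks empty: define DSC as 1
        return 1.0
    return 2.0 * intersection / denom

# Toy example: two overlapping 1-D "lesion" masks
a = np.array([0, 1, 1, 1, 0])
b = np.array([0, 0, 1, 1, 1])
print(round(dice_coefficient(a, b), 3))   # 2*2 / (3+3) -> 0.667
```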
152

153
Sarvamangala DR, Kulkarni RV. Convolutional neural networks in medical image understanding: a survey. Evolutionary Intelligence 2021; 15:1-22. [PMID: 33425040] [PMCID: PMC7778711] [DOI: 10.1007/s12065-020-00540-3]
Abstract
Imaging techniques are used to capture anomalies of the human body. The captured images must be understood for diagnosis, prognosis and treatment planning of the anomalies. Medical image understanding is generally performed by skilled medical professionals. However, the scarcity of human experts, together with their fatigue and the approximate nature of manual assessment, limits the effectiveness of image understanding performed by humans. Convolutional neural networks (CNNs) are effective tools for image understanding and have outperformed human experts in many image understanding tasks. This article aims to provide a comprehensive survey of applications of CNNs in medical image understanding. The underlying objective is to motivate medical image understanding researchers to extensively apply CNNs in their research and diagnosis. A brief introduction to CNNs is presented, along with a discussion of CNNs and their various award-winning frameworks. The major medical image understanding tasks, namely image classification, segmentation, localization and detection, are introduced. Applications of CNNs in medical image understanding of ailments of the brain, breast, lung and other organs are surveyed critically and comprehensively. A critical discussion of some of the challenges is also presented.
154
Uçar M, Akyol K, Atila Ü, Uçar E. Classification of Different Tympanic Membrane Conditions Using Fused Deep Hypercolumn Features and Bidirectional LSTM. Ing Rech Biomed 2021. [DOI: 10.1016/j.irbm.2021.01.001]
155
Artificial Intelligence in Pediatrics. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_316-1]
156

157
Nalepa J. AIM and Brain Tumors. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_284-1]
158
Adegun AA, Viriri S, Ogundokun RO. Deep Learning Approach for Medical Image Analysis. Computational Intelligence and Neuroscience 2021; 2021. [DOI: 10.1155/2021/6215281]
Abstract
Localization of regions of interest (ROIs) is paramount in the analysis of medical images, assisting in the identification and detection of diseases. In this research, we explore the application of a deep learning approach to the analysis of medical images. Traditional methods have been restricted by the coarse and granulated appearance of most of these images. Recently, deep learning techniques have produced promising results in the segmentation of medical images for the diagnosis of diseases. This research experiments on medical images using a robust deep learning architecture based on the Fully Convolutional Network (FCN) U-Net method for the segmentation of three types of medical images: skin lesion, retinal, and brain Magnetic Resonance Imaging (MRI) images. The proposed method can efficiently identify the ROI in these images to assist in the diagnosis of diseases such as skin cancer, eye defects and diabetes, and brain tumors. The system was evaluated on publicly available databases, including the International Symposium on Biomedical Imaging (ISBI) skin lesion images, retina images, and brain tumor datasets, with over 90% accuracy and Dice coefficient.
159
Deep Learning Approach for Generating MRA Images From 3D Quantitative Synthetic MRI Without Additional Scans. Invest Radiol 2020; 55:249-256. [PMID: 31977603] [DOI: 10.1097/rli.0000000000000628]
Abstract
OBJECTIVES Quantitative synthetic magnetic resonance imaging (MRI) enables synthesis of various contrast-weighted images as well as simultaneous quantification of T1 and T2 relaxation times and proton density. However, to date, it has been challenging to generate magnetic resonance angiography (MRA) images with synthetic MRI. The purpose of this study was to develop a deep learning algorithm to generate MRA images based on 3D synthetic MRI raw data. MATERIALS AND METHODS Eleven healthy volunteers and 4 patients with intracranial aneurysms were included in this study. All participants underwent a time-of-flight (TOF) MRA sequence and a 3D-QALAS synthetic MRI sequence. The 3D-QALAS sequence acquires 5 raw images, which were used as the input for a deep learning network. The input was converted to its corresponding MRA images by a combination of a single-convolution and a U-net model with 5-fold cross-validation, which was then compared with a simple linear combination model. Image quality was evaluated by calculating the peak signal-to-noise ratio (PSNR), structural similarity index measurement (SSIM), and high-frequency error norm (HFEN). These calculations were performed for deep learning MRA (DL-MRA) and linear combination MRA (linear-MRA), relative to TOF-MRA, and compared with each other using a nonparametric Wilcoxon signed-rank test. Overall image quality and branch visualization, each scored on a 5-point Likert scale, were blindly and independently rated by 2 board-certified radiologists. RESULTS Deep learning MRA was successfully obtained in all subjects. The mean PSNR, SSIM, and HFEN of the DL-MRA were significantly higher, higher, and lower, respectively, than those of the linear-MRA (PSNR, 35.3 ± 0.5 vs 34.0 ± 0.5, P < 0.001; SSIM, 0.93 ± 0.02 vs 0.82 ± 0.02, P < 0.001; HFEN, 0.61 ± 0.08 vs 0.86 ± 0.05, P < 0.001).
The overall image quality of DL-MRA was comparable to that of TOF-MRA (4.2 ± 0.7 vs 4.4 ± 0.7, P = 0.99), and both were superior to linear-MRA (1.5 ± 0.6; P < 0.001 for both). No significant differences were identified between DL-MRA and TOF-MRA in the branch visibility of intracranial arteries, except for the ophthalmic artery (1.2 ± 0.5 vs 2.3 ± 1.2, P < 0.001). CONCLUSIONS Magnetic resonance angiography generated by deep learning from 3D synthetic MRI data visualized major intracranial arteries as effectively as TOF-MRA, with inherently aligned quantitative maps and multiple contrast-weighted images. Our proposed algorithm may be useful as a screening tool for intracranial aneurysms without requiring additional scanning time.
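Of the three image-quality metrics reported above, PSNR is the simplest to reproduce; SSIM and HFEN need considerably more machinery. The NumPy sketch below is illustrative only (the study's evaluation pipeline is not reproduced here), and the random `ref`/`noisy` images are synthetic stand-ins, not MRA data.

```python
import numpy as np

def psnr(reference, test_img, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 10*log10(MAX^2 / MSE)."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test_img, float)) ** 2)
    if mse == 0:
        return np.inf          # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                                   # synthetic "reference" image
noisy = np.clip(ref + rng.normal(0, 0.05, ref.shape), 0, 1)  # degraded copy
print(f"PSNR: {psnr(ref, noisy):.1f} dB")
```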
160
Burgos N, Bottani S, Faouzi J, Thibeau-Sutre E, Colliot O. Deep learning for brain disorders: from data processing to disease treatment. Brief Bioinform 2020; 22:1560-1576. [PMID: 33316030] [DOI: 10.1093/bib/bbaa310]
Abstract
In order to reach precision medicine and improve patients' quality of life, machine learning is increasingly used in medicine. Brain disorders are often complex and heterogeneous, and several modalities such as demographic, clinical, imaging, genetic and environmental data have been studied to improve their understanding. Deep learning, a subfield of machine learning, provides complex algorithms that can learn from such varied data. It has become state of the art in numerous fields, including computer vision and natural language processing, and is also increasingly applied in medicine. In this article, we review the use of deep learning for brain disorders. More specifically, we identify the main applications, the disorders concerned, and the types of architectures and data used. Finally, we provide guidelines to bridge the gap between research studies and clinical routine.
161
Echtioui A, Zouch W, Ghorbel M, Mhiri C, Hamam H. Detection Methods of COVID-19. SLAS Technol 2020; 25:566-572. [PMID: 32997560] [PMCID: PMC7533467] [DOI: 10.1177/2472630320962002]
Abstract
Since being first detected in China, coronavirus disease 2019 (COVID-19) has spread rapidly across the world, triggering a global pandemic with no viable cure in sight. As a result, national responses have focused on effectively minimizing the spread. Border control measures and travel restrictions have been implemented in a number of countries to limit the import and export of the virus. The detection of COVID-19 is a key task for physicians. The erroneous results of early laboratory tests, and their delays, led researchers to focus on different options. Information obtained from computed tomography (CT) and radiological images is important for clinical diagnosis. It is therefore worth developing a rapid method for detecting viral diseases through the analysis of radiographic images. We propose a novel method for the detection of COVID-19, with the purpose of providing clinical decision support to healthcare workers and researchers. This article also aims to support researchers working on the early detection of COVID-19 as well as similar viral diseases.
Affiliation(s)
- Amira Echtioui: ATMS Lab, Advanced Technologies for Medicine and Signals, ENIS, Sfax University, Sfax, Tunisia
- Wassim Zouch: King Abdulaziz University (KAU), Jeddah, Saudi Arabia
- Mohamed Ghorbel: ATMS Lab, Advanced Technologies for Medicine and Signals, ENIS, Sfax University, Sfax, Tunisia
- Chokri Mhiri: Department of Neurology, Habib Bourguiba University Hospital, Sfax, Tunisia; Neuroscience Laboratory "LR-12-SP-19," Faculty of Medicine, Sfax University, Sfax, Tunisia
- Habib Hamam: Faculty of Engineering, Moncton University, Moncton, NB, Canada
162
Yildirim O, Talo M, Ciaccio EJ, Tan RS, Acharya UR. Accurate deep neural network model to detect cardiac arrhythmia on more than 10,000 individual subject ECG records. Comput Methods Programs Biomed 2020; 197:105740. [PMID: 32932129] [PMCID: PMC7477611] [DOI: 10.1016/j.cmpb.2020.105740]
Abstract
BACKGROUND AND OBJECTIVE Cardiac arrhythmia, an abnormal heart rhythm, is a common clinical problem in cardiology. Detection of arrhythmia on an extended-duration electrocardiogram (ECG) is done based on initial algorithmic software screening, with final visual validation by cardiologists. This is a time-consuming and subjective process. Therefore, fully automated computer-assisted detection systems with a high degree of accuracy have an essential role in this task. In this study, we proposed an effective deep neural network (DNN) model to detect different rhythm classes from a new ECG database. METHODS Our DNN model was designed for high performance on all ECG leads. The proposed model, which included both representation learning and sequence learning tasks, showed promising results on all 12-lead inputs. Convolutional layers and sub-sampling layers were used in the representation learning phase. The sequence learning part involved a long short-term memory (LSTM) unit placed after the representation learning layers. RESULTS We evaluated two class scenarios, comprising reduced rhythms (seven rhythm types) and merged rhythms (four rhythm types), according to the records in the database. Our trained DNN model achieved 92.24% and 96.13% accuracy for the reduced and merged rhythm classes, respectively. CONCLUSION Recently, deep learning algorithms have been found to be useful because of their high performance. The main challenge is the scarcity of appropriate training and testing resources, because model performance depends on the quality and quantity of case samples. In this study, we used a new public arrhythmia database comprising more than 10,000 records and constructed an efficient DNN model for automated detection of arrhythmia using these records.
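The representation-learning primitives named in the abstract (convolutional layers and sub-sampling layers) can be illustrated on a toy 1-D signal. This NumPy sketch is not the authors' DNN, which is learned end to end and also includes an LSTM; the hand-picked edge-detector kernel and the simulated ECG are invented purely for illustration.

```python
import numpy as np

def conv1d(signal, kernel):
    """'Valid' 1-D cross-correlation, the convolutional-layer primitive.

    np.convolve flips its second argument, so flipping the kernel first
    yields cross-correlation, the form used in deep learning.
    """
    return np.convolve(signal, kernel[::-1], mode="valid")

def max_pool(x, size=2):
    """Non-overlapping max pooling (sub-sampling layer)."""
    n = len(x) - len(x) % size
    return x[:n].reshape(-1, size).max(axis=1)

# Toy ECG-like input: a sinusoid with one sharp simulated "R peak"
t = np.linspace(0, 1, 200)
ecg = np.sin(2 * np.pi * 5 * t)
ecg[100] += 3.0                            # simulated R peak

edge_kernel = np.array([-1.0, 0.0, 1.0])   # crude transient detector
features = max_pool(np.abs(conv1d(ecg, edge_kernel)))
print(int(np.argmax(features)))            # pooled position of the strongest transient
```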
Affiliation(s)
- Ozal Yildirim: Department of Computer Engineering, Munzur University, Tunceli, 62000, Turkey
- Muhammed Talo: Department of Software Engineering, Firat University, Elazig, Turkey
- Edward J Ciaccio: Department of Medicine, Division of Cardiology, Columbia University Medical Center, New York, NY 10032, USA
- Ru San Tan: National Heart Centre Singapore, Singapore; Duke-NUS Medical School, Singapore
- U Rajendra Acharya: Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore; Department of Bioinformatics and Medical Engineering, Asia University, Taichung, Taiwan; School of Management and Enterprise, University of Southern Queensland, Springfield, Australia
163
Awasthi N, Jain G, Kalva SK, Pramanik M, Yalavarthy PK. Deep Neural Network-Based Sinogram Super-Resolution and Bandwidth Enhancement for Limited-Data Photoacoustic Tomography. IEEE Trans Ultrason Ferroelectr Freq Control 2020; 67:2660-2673. [PMID: 32142429] [DOI: 10.1109/tuffc.2020.2977210]
Abstract
Photoacoustic tomography (PAT) is a noninvasive imaging modality combining the benefits of optical contrast with ultrasonic resolution. Analytical reconstruction algorithms for photoacoustic (PA) signals require a large number of data points for accurate image reconstruction. In practical scenarios, however, data are collected with a limited number of transducers and are often corrupted by noise, yielding only qualitative images. Furthermore, the collected boundary data are band-limited due to the limited bandwidth (BW) of the transducer, further restricting limited-data PA imaging to qualitative use. In this work, a deep neural network-based model with a scaled root-mean-squared-error loss function was proposed for super-resolution, denoising, and BW enhancement of the PA signals collected at the boundary of the domain. The proposed network was compared with traditional as well as other popular deep learning methods in numerical and experimental cases, and is shown to improve the collected boundary data, in turn providing a superior-quality reconstructed PA image. The improvement obtained in the Pearson correlation, structural similarity index metric, and root-mean-square error was as high as 35.62%, 33.81%, and 41.07%, respectively, for phantom cases, and the signal-to-noise ratio improvement in the reconstructed PA images was as high as 11.65 dB for in vivo cases, compared with the reconstructed image obtained using the original limited-BW data. Code is available at https://sites.google.com/site/sercmig/home/dnnpat.
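The loss above is described only as a "scaled root-mean-squared error"; the exact scaling is not specified in the abstract. The sketch below normalizes the RMSE by the target signal's own RMS, one common choice, purely for illustration; the toy `target`/`pred` arrays are invented.

```python
import numpy as np

def scaled_rmse(pred, target, eps=1e-12):
    """RMSE scaled by the RMS of the target signal (illustrative choice;
    the paper's exact scaling is not given in the abstract)."""
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    rmse = np.sqrt(np.mean((pred - target) ** 2))
    scale = np.sqrt(np.mean(target ** 2)) + eps   # eps guards division by zero
    return rmse / scale

target = np.array([1.0, 2.0, 3.0, 4.0])
pred = target * 1.1                     # uniform 10% overestimate
print(round(scaled_rmse(pred, target), 3))   # a 10% error scales to ~0.1
```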
164
Yu Y, Gao Y, Wei J, Liao F, Xiao Q, Zhang J, Yin W, Lu B. A Three-Dimensional Deep Convolutional Neural Network for Automatic Segmentation and Diameter Measurement of Type B Aortic Dissection. Korean J Radiol 2020; 22:168-178. [PMID: 33236538] [PMCID: PMC7817629] [DOI: 10.3348/kjr.2020.0313]
Abstract
OBJECTIVE To provide an automatic method for segmentation and diameter measurement of type B aortic dissection (TBAD). MATERIALS AND METHODS Aortic computed tomography angiographic images from 139 patients with TBAD were consecutively collected. We implemented a deep learning method based on a three-dimensional (3D) deep convolutional neural network (CNN), which realizes automatic segmentation and measurement of the entire aorta (EA), true lumen (TL), and false lumen (FL). The accuracy, stability, and measurement time were compared between the deep learning and manual methods. The intra- and inter-observer reproducibility of the manual method was also evaluated. RESULTS The mean Dice coefficient scores were 0.958, 0.961, and 0.932 for the EA, TL, and FL, respectively. There was a linear relationship between the reference standard and measurement by the manual and deep learning methods (r = 0.964 and 0.991, respectively). The average measurement error of the deep learning method was less than that of the manual method (EA, 1.64% vs. 4.13%; TL, 2.46% vs. 11.67%; FL, 2.50% vs. 8.02%). Bland-Altman plots revealed that the deviations of the diameters between the deep learning method and the reference standard were -0.042 mm (-3.412 to 3.330 mm), -0.376 mm (-3.328 to 2.577 mm), and 0.026 mm (-3.040 to 3.092 mm) for the EA, TL, and FL, respectively. For the manual method, the corresponding deviations were -0.166 mm (-1.419 to 1.086 mm), -0.050 mm (-0.970 to 1.070 mm), and -0.085 mm (-1.010 to 0.084 mm). Intra- and inter-observer differences were found in measurements with the manual method, but not with the deep learning method. The measurement time with the deep learning method was markedly shorter than with the manual method (21.7 ± 1.1 vs. 82.5 ± 16.1 minutes, p < 0.001). CONCLUSION Segmentation and diameter measurement of TBAD based on the 3D deep CNN was both accurate and stable.
This method is promising for evaluating aortic morphology automatically and alleviating the workload of radiologists in the near future.
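The Bland-Altman figures quoted above (a bias with its 95% limits of agreement) follow a standard computation: mean difference ± 1.96 times the SD of the differences. The sketch below uses invented paired diameter measurements, not the study's data.

```python
import numpy as np

def bland_altman_limits(method, reference):
    """Mean difference (bias) and 95% limits of agreement (bias +/- 1.96 SD)."""
    d = np.asarray(method, float) - np.asarray(reference, float)
    bias = d.mean()
    sd = d.std(ddof=1)           # sample SD of the paired differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired aortic diameter measurements (mm)
auto = np.array([30.1, 25.4, 28.0, 31.2, 27.5])
ref  = np.array([30.0, 25.9, 27.6, 31.5, 27.2])
bias, lo, hi = bland_altman_limits(auto, ref)
print(f"bias {bias:+.3f} mm, LoA [{lo:.3f}, {hi:.3f}] mm")
```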
Affiliation(s)
- Yitong Yu: Department of Radiology, Fuwai Hospital, Peking Union Medical College & Chinese Academy of Medical Sciences; State Key Lab and National Center for Cardiovascular Diseases, Beijing, China
- Yang Gao: Department of Radiology, Fuwai Hospital, Peking Union Medical College & Chinese Academy of Medical Sciences; State Key Lab and National Center for Cardiovascular Diseases, Beijing, China
- Jianyong Wei: ShuKun (BeiJing) Technology Co., Ltd., Beijing, China
- Fangzhou Liao: Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China
- Jie Zhang: Department of Radiology, Fuwai Hospital, Peking Union Medical College & Chinese Academy of Medical Sciences; State Key Lab and National Center for Cardiovascular Diseases, Beijing, China
- Weihua Yin: Department of Radiology, Fuwai Hospital, Peking Union Medical College & Chinese Academy of Medical Sciences; State Key Lab and National Center for Cardiovascular Diseases, Beijing, China
- Bin Lu: Department of Radiology, Fuwai Hospital, Peking Union Medical College & Chinese Academy of Medical Sciences; State Key Lab and National Center for Cardiovascular Diseases, Beijing, China
165
Blade Rub-Impact Fault Identification Using Autoencoder-Based Nonlinear Function Approximation and a Deep Neural Network. Sensors 2020; 20:6265. [PMID: 33153120] [PMCID: PMC7662213] [DOI: 10.3390/s20216265]
Abstract
A blade rub-impact fault is one of the complex and frequently occurring faults in turbines. Due to their nonlinear and nonstationary nature, complex signal analysis techniques, which are expensive in terms of computation time, are required to extract valuable fault information from the vibration signals collected from rotor systems. In this work, a novel method for diagnosing blade rub-impact faults of different severity levels is proposed. Specifically, a deep undercomplete denoising autoencoder is first used to estimate the nonlinear function of the system under normal operating conditions. Next, the residual signals, obtained as the difference between the original signals and their estimates by the autoencoder, are computed. Finally, these residual signals are used as inputs to a deep neural network to determine the current state of the rotor system. The experimental results demonstrate that the amplitudes of the residual signals reflect changes in the state of the rotor system and the fault severity levels. Furthermore, these residual signals in combination with the deep neural network demonstrated promising fault identification results when applied to a complex nonlinear fault such as blade rubbing. To test the effectiveness of the proposed nonlinear fault diagnosis algorithm, the technique is compared with the autoregressive with external input (ARX) Laguerre proportional-integral observer, a linear fault diagnosis observation technique.
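The residual-signal idea above (model normal behavior, then flag inputs the model cannot reconstruct) can be caricatured with a linear projection standing in for the paper's deep undercomplete denoising autoencoder. Everything below (the 1-D subspace, noise levels, threshold factor) is invented for illustration and is not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Normal operation": signals living near a 1-D subspace, a crude linear
# stand-in for what an undercomplete autoencoder learns from healthy data
basis = rng.normal(size=(16, 1))
basis /= np.linalg.norm(basis)
normal = (basis @ rng.normal(size=(1, 200))).T + 0.01 * rng.normal(size=(200, 16))

project = basis @ basis.T          # reconstruct through the 1-D bottleneck

def residual_rms(x):
    """RMS of the residual x - reconstruction(x); large values flag faults."""
    return np.sqrt(np.mean((x - x @ project.T) ** 2, axis=-1))

# Threshold: worst residual seen on healthy data, with a safety margin
threshold = residual_rms(normal).max() * 1.5

fault = rng.normal(size=16)        # off-subspace signal = simulated rub-impact
print(residual_rms(fault) > threshold)
```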
166
Chaves H, Dorr F, Costa ME, Serra MM, Slezak DF, Farez MF, Sevlever G, Yañez P, Cejas C. Brain volumes quantification from MRI in healthy controls: Assessing correlation, agreement and robustness of a convolutional neural network-based software against FreeSurfer, CAT12 and FSL. J Neuroradiol 2020; 48:147-156. [PMID: 33137334] [DOI: 10.1016/j.neurad.2020.10.001]
Abstract
BACKGROUND AND PURPOSE There are instances in which an estimate of the brain volume should be obtained from MRI in clinical practice. Our objective was to assess the cross-sectional robustness of a convolutional neural network (CNN) based software (Entelai Pic) for brain volume estimation and to compare it with traditional software packages such as FreeSurfer, CAT12 and FSL in healthy controls (HC). MATERIALS AND METHODS Sixteen HC were scanned four times, on two different days and on two different MRI scanners (1.5 T and 3 T). Volumetric T1-weighted images were acquired and post-processed with FreeSurfer v6.0.0, Entelai Pic v2, CAT12 v12.5 and FSL v5.0.9. Whole-brain, grey matter (GM), white matter (WM) and cerebrospinal fluid (CSF) volumes were calculated. Correlation and agreement between methods were assessed using the intraclass correlation coefficient (ICC) and Bland-Altman plots. Robustness was assessed using the coefficient of variation (CV). RESULTS Whole-brain volume estimation had better correlation between FreeSurfer and Entelai Pic (ICC (95% CI) 0.96 (0.94-0.97)) than between FreeSurfer and CAT12 (0.92 (0.88-0.96)) or FSL (0.87 (0.79-0.91)). WM, GM and CSF showed a similar trend. Compared with FreeSurfer, Entelai Pic provided similarly robust segmentations of brain volumes both on same-scanner (mean CV 1.07, range 0.20-3.13% vs. mean CV 1.05, range 0.21-3.20%, p = 0.86) and on different-scanner measurements (mean CV 3.84, range 2.49-5.91% vs. mean CV 3.84, range 2.62-5.13%, p = 0.96). Mean post-processing times were 480, 5, 40 and 5 min for FreeSurfer, Entelai Pic, CAT12 and FSL, respectively. CONCLUSION Based on robustness and processing times, our CNN-based model is suitable for cross-sectional volumetry in clinical practice.
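The robustness metric used above, the coefficient of variation across repeated scans of the same subject, is straightforward to reproduce. The volumes below are hypothetical stand-ins, not study data.

```python
import numpy as np

def coefficient_of_variation(volumes):
    """Within-subject CV (%): sample SD of repeated measurements over their mean."""
    v = np.asarray(volumes, float)
    return 100.0 * v.std(ddof=1) / v.mean()

# Hypothetical whole-brain volumes (mL) of one subject scanned four times
scans = [1180.0, 1192.0, 1185.0, 1179.0]
print(round(coefficient_of_variation(scans), 2))   # ~0.5% scan-rescan variability
```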
Affiliation(s)
- Hernán Chaves: Diagnostic Imaging Department, Fleni, Buenos Aires, Argentina; Entelai, Buenos Aires, Argentina
- María Mercedes Serra: Diagnostic Imaging Department, Fleni, Buenos Aires, Argentina; Entelai, Buenos Aires, Argentina
- Diego Fernández Slezak: Entelai, Buenos Aires, Argentina; Departamento de Computación, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Buenos Aires, Argentina; Instituto en Ciencias de la Computación (ICC), CONICET-Universidad de Buenos Aires, Buenos Aires, Argentina
- Mauricio F Farez: Entelai, Buenos Aires, Argentina; Neurology Department, Fleni, Buenos Aires, Argentina; Center for Research on Neuroimmunological Diseases (CIEN), Fleni, Buenos Aires, Argentina; Center for Biostatistics, Epidemiology and Public Health (CEBES), Fleni, Buenos Aires, Argentina
- Paulina Yañez: Diagnostic Imaging Department, Fleni, Buenos Aires, Argentina
- Claudia Cejas: Diagnostic Imaging Department, Fleni, Buenos Aires, Argentina
167
Küstner T, Hepp T, Fischer M, Schwartz M, Fritsche A, Häring HU, Nikolaou K, Bamberg F, Yang B, Schick F, Gatidis S, Machann J. Fully Automated and Standardized Segmentation of Adipose Tissue Compartments via Deep Learning in 3D Whole-Body MRI of Epidemiologic Cohort Studies. Radiol Artif Intell 2020; 2:e200010. [PMID: 33937847] [PMCID: PMC8082356] [DOI: 10.1148/ryai.2020200010]
Abstract
PURPOSE To enable fast and reliable assessment of subcutaneous and visceral adipose tissue compartments derived from whole-body MRI. MATERIALS AND METHODS Quantification and localization of different adipose tissue compartments derived from whole-body MR images is of high interest in research concerning metabolic conditions. For correct identification and phenotyping of individuals at increased risk for metabolic diseases, a reliable automated segmentation of adipose tissue into subcutaneous and visceral adipose tissue is required. In this work, a three-dimensional (3D) densely connected convolutional neural network (DCNet) is proposed to provide robust and objective segmentation. In this retrospective study, 1000 cases (average age, 66 years ± 13 [standard deviation]; 523 women) from the Tuebingen Family Study database and the German Center for Diabetes research database and 300 cases (average age, 53 years ± 11; 152 women) from the German National Cohort (NAKO) database were collected for model training, validation, and testing, with transfer learning between the cohorts. These datasets included variable imaging sequences, imaging contrasts, receiver coil arrangements, scanners, and imaging field strengths. The proposed DCNet was compared to a similar 3D U-Net segmentation in terms of sensitivity, specificity, precision, accuracy, and Dice overlap. RESULTS Fast (range, 5-7 seconds) and reliable adipose tissue segmentation can be performed with high Dice overlap (0.94), sensitivity (96.6%), specificity (95.1%), precision (92.1%), and accuracy (98.4%) from 3D whole-body MRI datasets (field of view coverage, 450 × 450 × 2000 mm). Segmentation masks and adipose tissue profiles are automatically reported back to the referring physician. 
CONCLUSION Automated adipose tissue segmentation is feasible in 3D whole-body MRI datasets and is generalizable to different epidemiologic cohort studies with the proposed DCNet. Supplemental material is available for this article. © RSNA, 2020.
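The five figures reported above (Dice overlap, sensitivity, specificity, precision, accuracy) all derive from a voxel-wise confusion matrix between the predicted and reference masks. An illustrative NumPy sketch with invented toy masks, not the study's evaluation code:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Voxel-wise sensitivity, specificity, precision, accuracy and Dice
    from two binary masks, via the confusion-matrix counts."""
    p = np.asarray(pred, bool).ravel()
    t = np.asarray(truth, bool).ravel()
    tp = np.sum(p & t);  tn = np.sum(~p & ~t)
    fp = np.sum(p & ~t); fn = np.sum(~p & t)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision":   tp / (tp + fp),
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "dice":        2 * tp / (2 * tp + fp + fn),
    }

pred  = np.array([1, 1, 0, 0, 1, 0, 0, 0])   # toy predicted mask
truth = np.array([1, 1, 1, 0, 0, 0, 0, 0])   # toy reference mask
print(segmentation_metrics(pred, truth))
```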
168
Peña-Solórzano CA, Albrecht DW, Bassed RB, Burke MD, Dimmock MR. Findings from machine learning in clinical medical imaging applications - Lessons for translation to the forensic setting. Forensic Sci Int 2020; 316:110538. [PMID: 33120319] [PMCID: PMC7568766] [DOI: 10.1016/j.forsciint.2020.110538]
Abstract
Machine learning (ML) techniques are increasingly being used in clinical medical imaging to automate distinct processing tasks. In post-mortem forensic radiology, the use of these algorithms presents significant challenges due to variability in organ position, structural changes from decomposition, inconsistent body placement in the scanner, and the presence of foreign bodies. Existing ML approaches in clinical imaging can likely be transferred to the forensic setting with careful consideration of the increased variability and the temporal factors that affect the data used to train these algorithms. Additional steps are required to deal with these issues, either by incorporating the possible variability into the training data through data augmentation, or by using atlases as a pre-processing step to account for death-related factors. A key application of ML would then be to highlight anatomical and gross pathological features of interest, or to present information that helps optimally determine the cause of death. In this review, we highlight the results and limitations of applications in clinical medical imaging that use ML, to determine the key implications for their application in the forensic setting.
Affiliation(s)
- Carlos A Peña-Solórzano
- Department of Medical Imaging and Radiation Sciences, Monash University, Wellington Rd, Clayton, Melbourne, VIC 3800, Australia.
- David W Albrecht
- Clayton School of Information Technology, Monash University, Wellington Rd, Clayton, Melbourne, VIC 3800, Australia.
- Richard B Bassed
- Victorian Institute of Forensic Medicine, 57-83 Kavanagh St., Southbank, Melbourne, VIC 3006, Australia; Department of Forensic Medicine, Monash University, Wellington Rd, Clayton, Melbourne, VIC 3800, Australia.
- Michael D Burke
- Victorian Institute of Forensic Medicine, 57-83 Kavanagh St., Southbank, Melbourne, VIC 3006, Australia; Department of Forensic Medicine, Monash University, Wellington Rd, Clayton, Melbourne, VIC 3800, Australia.
- Matthew R Dimmock
- Department of Medical Imaging and Radiation Sciences, Monash University, Wellington Rd, Clayton, Melbourne, VIC 3800, Australia.
169
Cizmeci MN, Groenendaal F, Liem KD, van Haastert IC, Benavente-Fernández I, van Straaten HLM, Steggerda S, Smit BJ, Whitelaw A, Woerdeman P, Heep A, de Vries LS. Randomized Controlled Early versus Late Ventricular Intervention Study in Posthemorrhagic Ventricular Dilatation: Outcome at 2 Years. J Pediatr 2020; 226:28-35.e3. [PMID: 32800815 DOI: 10.1016/j.jpeds.2020.08.014] [Citation(s) in RCA: 55] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/25/2020] [Revised: 07/16/2020] [Accepted: 08/06/2020] [Indexed: 11/17/2022]
Abstract
OBJECTIVE To compare the effect of intervention at a low versus a high threshold of ventriculomegaly in preterm infants with posthemorrhagic ventricular dilatation on death or severe neurodevelopmental disability. STUDY DESIGN In this multicenter randomized controlled trial, lumbar punctures were initiated after either a low threshold (ventricular index of >p97 and anterior horn width of >6 mm) or a high threshold (ventricular index of >p97 + 4 mm and anterior horn width of >10 mm) was reached. The composite adverse outcome was defined as death or cerebral palsy or Bayley composite cognitive/motor scores <-2 SDs at 24 months corrected age. RESULTS Outcomes were assessed in 113 of 126 infants. The composite adverse outcome was seen in 20 of 58 infants (35%) in the low threshold group and 28 of 55 (51%) in the high threshold group (P = .07). The low threshold intervention was associated with a decreased risk of an adverse outcome after correcting for gestational age, severity of intraventricular hemorrhage, and cerebellar hemorrhage (aOR, 0.24; 95% CI, 0.07-0.87; P = .03). Infants with a favorable outcome had a smaller fronto-occipital horn ratio (crude mean difference, -0.06; 95% CI, -0.09 to -0.03; P < .001) at term-equivalent age. Infants in the low threshold group with a ventriculoperitoneal shunt had cognitive and motor scores similar to those without (P = .3 for both), whereas in the high threshold group those with a ventriculoperitoneal shunt had significantly lower scores than those without (P = .01 and P = .004, respectively). CONCLUSIONS In a post hoc analysis, earlier intervention was associated with lower odds of death or severe neurodevelopmental disability in preterm infants with progressive posthemorrhagic ventricular dilatation. TRIAL REGISTRATION ISRCTN43171322.
Collapse
Affiliation(s)
- Mehmet N Cizmeci
- Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht; Utrecht Brain Center, University Medical Center Utrecht, Utrecht, the Netherlands; Division of Neonatology, Department of Pediatrics, The Hospital for Sick Children, University of Toronto, Toronto, Canada.
- Floris Groenendaal
- Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht; Utrecht Brain Center, University Medical Center Utrecht, Utrecht, the Netherlands
- Kian D Liem
- Department of Neonatology, Amalia Children's Hospital, Radboud University Medical Center, Nijmegen, the Netherlands
- Ingrid C van Haastert
- Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht; Utrecht Brain Center, University Medical Center Utrecht, Utrecht, the Netherlands
- Sylke Steggerda
- Department of Neonatology, Leiden University Medical Center, Leiden, the Netherlands
- Bert J Smit
- Directorate Quality & Patient Care, Erasmus MC, University Medical Center Rotterdam, the Netherlands
- Andrew Whitelaw
- Neonatal Intensive Care Unit, Southmead Hospital and Neonatal Neuroscience, University of Bristol, Bristol, United Kingdom
- Peter Woerdeman
- Division of Neuroscience, Department of Neurosurgery, University Medical Center Utrecht, Utrecht, the Netherlands
- Axel Heep
- Neonatal Intensive Care Unit, Southmead Hospital and Neonatal Neuroscience, University of Bristol, Bristol, United Kingdom
- Linda S de Vries
- Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht; Utrecht Brain Center, University Medical Center Utrecht, Utrecht, the Netherlands.
170
Liu L, Wu FX, Wang YP, Wang J. Multi-Receptive-Field CNN for Semantic Segmentation of Medical Images. IEEE J Biomed Health Inform 2020; 24:3215-3225. [DOI: 10.1109/jbhi.2020.3016306] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
171
Xi X, Meng X, Qin Z, Nie X, Yin Y, Chen X. IA-net: informative attention convolutional neural network for choroidal neovascularization segmentation in OCT images. Biomed Opt Express 2020; 11:6122-6136. [PMID: 33282479 PMCID: PMC7687935 DOI: 10.1364/boe.400816] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/22/2020] [Revised: 09/22/2020] [Accepted: 09/22/2020] [Indexed: 05/08/2023]
Abstract
Choroidal neovascularization (CNV) is a characteristic feature of wet age-related macular degeneration (AMD). Quantification of CNV is useful to clinicians in the diagnosis and treatment of CNV disease. Before quantification, the CNV lesion should be delineated by automatic CNV segmentation technology. Recently, deep learning methods have achieved significant success in medical image segmentation. However, some CNVs are small objects that are hard to discriminate, resulting in performance degradation. In addition, it is difficult to train an effective network for accurate segmentation because of the complicated characteristics of CNV in OCT images. To tackle these two challenges, this paper proposes a novel Informative Attention Convolutional Neural Network (IA-net) for automatic CNV segmentation in OCT images. Because the attention mechanism can enhance the discriminative power of the interesting regions in the feature maps, an attention enhancement block is developed by introducing an additional attention constraint. It forces the model to pay high attention to CNV in the learned feature maps, improving the discriminative ability of the learned CNV features, which is useful for improving segmentation performance on small CNV. For accurate pixel classification, a novel informative loss is proposed that incorporates an informative attention map. It focuses training on a set of informative samples that are difficult to predict, so the trained model learns enough information to classify these informative samples, further improving performance. The experimental results on our database demonstrate that the proposed method outperforms traditional CNV segmentation methods.
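The core of the "informative loss" idea described above — weighting each pixel's loss by an attention map so that hard-to-predict pixels dominate training — can be sketched as a plain attention-weighted binary cross-entropy. This is an illustrative reconstruction under that assumption, not the authors' code; how IA-net actually builds its informative attention map is not reproduced here:

```python
import math

def weighted_bce(probs, labels, attn, eps=1e-7):
    """Attention-weighted binary cross-entropy over flat pixel lists.

    probs:  predicted foreground probabilities in [0, 1]
    labels: ground-truth 0/1 labels
    attn:   attention weights; larger values make a pixel's error
            count more, steering training toward 'informative' pixels
    """
    total, weight_sum = 0.0, 0.0
    for p, y, w in zip(probs, labels, attn):
        p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
        ce = -(y * math.log(p) + (1 - y) * math.log(1 - p))
        total += w * ce
        weight_sum += w
    return total / weight_sum

# The second pixel is badly predicted (p=0.2 for a positive label);
# tripling its attention weight pulls the average loss toward it.
loss = weighted_bce(probs=[0.9, 0.2], labels=[1, 1], attn=[1.0, 3.0])
```

With uniform weights the function reduces to ordinary mean cross-entropy, which is the sense in which the attention map only re-focuses, rather than replaces, the underlying loss.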
Affiliation(s)
- Xiaoming Xi
- School of Computer Science and Technology, Shandong Jianzhu University, 250101, China
- Xianjing Meng
- School of Computer Science and Technology, Shandong University of Finance and Economics, 250014, China
- Zheyun Qin
- School of Software, Shandong University, 250101, China
- Xiushan Nie
- School of Computer Science and Technology, Shandong Jianzhu University, 250101, China
- Yilong Yin
- School of Software, Shandong University, 250101, China
- Xinjian Chen
- School of Electronic and Information Engineering, Soochow University, 215006, China
172
Gray Matter Segmentation of Brain MRI Using Hybrid Enhanced Independent Component Analysis in Noisy and Noise Free Environment. J Biomimetics Biomater Biomed Eng 2020. [DOI: 10.4028/www.scientific.net/jbbbe.47.75] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
Abstract
Medical image segmentation is the primary task performed to diagnose abnormalities in the human body. The brain is a complex organ, and anatomical segmentation of brain tissues is a challenging task. In this paper, we used enhanced independent component analysis to perform the segmentation of gray matter. We used modified k-means, expectation maximization, and a hidden Markov random field to provide better spatial correlation that overcomes inhomogeneity, noise, and low contrast. Our objective is achieved in two steps: (i) unwanted tissues are first clipped from the MRI image using a skull-stripping algorithm; (ii) enhanced independent component analysis is then used to perform the segmentation of gray matter. We applied the proposed method to both T1w and T2w MRI to perform segmentation of gray matter in different noisy environments. We evaluated the performance of our proposed system with the Jaccard index, Dice coefficient, and accuracy, and further compared it with existing frameworks. Our proposed method gives better segmentation of gray matter, useful for diagnosing neurodegenerative disorders.
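The Jaccard index and Dice coefficient used for evaluation here (and in most of the segmentation papers in this list) are simple overlap ratios between a predicted and a ground-truth binary mask. A minimal sketch, with masks flattened to 0/1 sequences for illustration:

```python
def dice_jaccard(pred, truth):
    """Dice coefficient and Jaccard index for two binary masks.

    Dice    = 2|A∩B| / (|A| + |B|)
    Jaccard = |A∩B| / |A∪B|
    Both are 1.0 for perfect overlap and 0.0 for disjoint masks.
    """
    inter = sum(p & t for p, t in zip(pred, truth))
    a, b = sum(pred), sum(truth)
    union = a + b - inter
    dice = 2 * inter / (a + b) if (a + b) else 1.0      # empty masks agree
    jaccard = inter / union if union else 1.0
    return dice, jaccard

# inter=2, |A|=3, |B|=2  →  Dice = 4/5 = 0.8, Jaccard = 2/3
dice, jac = dice_jaccard([1, 1, 0, 1], [1, 0, 0, 1])
```

The two metrics are monotonically related (J = D / (2 − D)), so they rank methods identically; papers report one or both mainly by convention.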
173
Ahmad HM, Khan MJ, Yousaf A, Ghuffar S, Khurshid K. Deep Learning: A Breakthrough in Medical Imaging. Curr Med Imaging 2020; 16:946-956. [PMID: 33081657 DOI: 10.2174/1573405615666191219100824] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2019] [Revised: 11/25/2019] [Accepted: 12/06/2019] [Indexed: 02/08/2023]
Abstract
Deep learning has attracted great attention in the medical imaging community as a promising solution for automated, fast, and accurate medical image analysis, which is mandatory for quality healthcare. Convolutional neural networks and their variants have become the most preferred and widely used deep learning models in medical image analysis. In this paper, concise overviews of the modern deep learning models applied in medical image analysis are provided, and the key tasks performed by deep learning models, i.e., classification, segmentation, retrieval, detection, and registration, are reviewed in detail. Recent research has shown that deep learning models can outperform medical experts in certain tasks. With the significant breakthroughs made by deep learning methods, it is expected that patients will soon be able to safely and conveniently interact with AI-based medical systems, and such intelligent systems will actually improve patient healthcare. There are various complexities and challenges involved in deep learning-based medical image analysis, such as limited datasets, but researchers are actively working to mitigate these challenges and further improve healthcare with AI.
Affiliation(s)
- Hafiz Mughees Ahmad
- Artificial Intelligence and Computer Vision (iVision) Lab, Department of Electrical Engineering, Institute of Space Technology, Islamabad, Pakistan
- Muhammad Jaleed Khan
- Artificial Intelligence and Computer Vision (iVision) Lab, Department of Electrical Engineering, Institute of Space Technology, Islamabad, Pakistan
- Adeel Yousaf
- Artificial Intelligence and Computer Vision (iVision) Lab, Department of Electrical Engineering, Institute of Space Technology, Islamabad, Pakistan; Department of Avionics Engineering, Institute of Space Technology, Islamabad, Pakistan
- Sajid Ghuffar
- Artificial Intelligence and Computer Vision (iVision) Lab, Department of Electrical Engineering, Institute of Space Technology, Islamabad, Pakistan; Department of Space Science, Institute of Space Technology, Islamabad, Pakistan
- Khurram Khurshid
- Artificial Intelligence and Computer Vision (iVision) Lab, Department of Electrical Engineering, Institute of Space Technology, Islamabad, Pakistan
174
Rasse TM, Hollandi R, Horvath P. OpSeF: Open Source Python Framework for Collaborative Instance Segmentation of Bioimages. Front Bioeng Biotechnol 2020; 8:558880. [PMID: 33117778 PMCID: PMC7576117 DOI: 10.3389/fbioe.2020.558880] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2020] [Accepted: 09/15/2020] [Indexed: 11/13/2022] Open
Abstract
Various pre-trained deep learning models for the segmentation of bioimages have been made available as developer-to-end-user solutions. They are optimized for ease of use and usually require neither knowledge of machine learning nor coding skills. However, individually testing these tools is tedious and success is uncertain. Here, we present the Open Segmentation Framework (OpSeF), a Python framework for deep learning-based instance segmentation. OpSeF aims at facilitating the collaboration of biomedical users with experienced image analysts. It builds on the analysts' knowledge of Python, machine learning, and workflow design to solve complex analysis tasks at any scale in a reproducible, well-documented way. OpSeF defines standard inputs and outputs, thereby facilitating modular workflow design and interoperability with other software. Users play an important role in problem definition, quality control, and manual refinement of results. OpSeF semi-automates preprocessing, convolutional neural network (CNN)-based segmentation in 2D or 3D, and postprocessing. It facilitates benchmarking of multiple models in parallel and streamlines the optimization of pre- and postprocessing parameters such that an available model may frequently be used without retraining. Even if sufficiently good results are not achievable with this approach, intermediate results can inform the analysts in selecting the most promising CNN architecture in which the biomedical user might invest the effort of manually labeling training data. We provide Jupyter notebooks that document sample workflows based on various image collections. Analysts may find these notebooks useful to illustrate common segmentation challenges, as they prepare the advanced user for gradually taking over some of their tasks and completing their projects independently. The notebooks may also be used to explore the analysis options available within OpSeF in an interactive way and to document and share final workflows. Currently, three mechanistically distinct CNN-based segmentation methods, the U-Net implementation used in CellProfiler 3.0, StarDist, and Cellpose, have been integrated within OpSeF. The addition of new networks requires little coding; the addition of new models requires no coding skills. Thus, OpSeF might soon become an interactive model repository, in which pre-trained models might be shared, evaluated, and reused with ease.
Affiliation(s)
- Tobias M. Rasse
- Scientific Service Group Microscopy, Max Planck Institute for Heart and Lung Research, Bad Nauheim, Germany
- Réka Hollandi
- Synthetic and Systems Biology Unit, Biological Research Center (BRC), Szeged, Hungary
- Peter Horvath
- Synthetic and Systems Biology Unit, Biological Research Center (BRC), Szeged, Hungary
- Institute for Molecular Medicine Finland (FIMM), University of Helsinki, Helsinki, Finland
175
Henschel L, Conjeti S, Estrada S, Diers K, Fischl B, Reuter M. FastSurfer - A fast and accurate deep learning based neuroimaging pipeline. Neuroimage 2020; 219:117012. [PMID: 32526386 PMCID: PMC7898243 DOI: 10.1016/j.neuroimage.2020.117012] [Citation(s) in RCA: 222] [Impact Index Per Article: 44.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2020] [Revised: 05/29/2020] [Accepted: 05/31/2020] [Indexed: 02/01/2023] Open
Abstract
Traditional neuroimage analysis pipelines involve computationally intensive, time-consuming optimization steps and thus do not scale well to large cohort studies with thousands or tens of thousands of individuals. In this work we propose a fast and accurate deep learning based neuroimaging pipeline for the automated processing of structural human brain MRI scans, replicating FreeSurfer's anatomical segmentation including surface reconstruction and cortical parcellation. To this end, we introduce an advanced deep learning architecture capable of whole-brain segmentation into 95 classes. The network architecture incorporates local and global competition via competitive dense blocks and competitive skip pathways, as well as multi-slice information aggregation, specifically tailoring network performance towards accurate segmentation of both cortical and subcortical structures. Further, we perform fast cortical surface reconstruction and thickness analysis by introducing a spectral spherical embedding and by directly mapping the cortical labels from the image to the surface. This approach provides a full FreeSurfer alternative for volumetric analysis (in under 1 min) and surface-based thickness analysis (within only around 1 h runtime). For sustainability of this approach, we perform extensive validation: we assert high segmentation accuracy on several unseen datasets, measure generalizability, and demonstrate increased test-retest reliability and high sensitivity to group differences in dementia.
Affiliation(s)
- Leonie Henschel
- German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
- Sailesh Conjeti
- German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
- Santiago Estrada
- German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
- Kersten Diers
- German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
- Bruce Fischl
- A.A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA; Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- Martin Reuter
- German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany; A.A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA.
176
Gieraerts C, Dangis A, Janssen L, Demeyere A, De Bruecker Y, De Brucker N, van Den Bergh A, Lauwerier T, Heremans A, Frans E, Laurent M, Ector B, Roosen J, Smismans A, Frans J, Gillis M, Symons R. Prognostic Value and Reproducibility of AI-assisted Analysis of Lung Involvement in COVID-19 on Low-Dose Submillisievert Chest CT: Sample Size Implications for Clinical Trials. Radiol Cardiothorac Imaging 2020; 2:e200441. [PMID: 33778634 PMCID: PMC7586438 DOI: 10.1148/ryct.2020200441] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
Abstract
PURPOSE To compare the prognostic value and reproducibility of visual versus AI-assisted analysis of lung involvement on submillisievert low-dose chest CT in COVID-19 patients. MATERIALS AND METHODS This was a HIPAA-compliant, institutional review board-approved retrospective study. From March 15 to June 1, 2020, 250 RT-PCR-confirmed COVID-19 patients were studied with low-dose chest CT at admission. Visual and AI-assisted analysis of lung involvement was performed by using a semi-quantitative CT score and a quantitative percentage of lung involvement. Adverse outcome was defined as intensive care unit (ICU) admission or death. Cox regression analysis, Kaplan-Meier curves, and cross-validated receiver operating characteristic curve analysis with area under the curve (AUC) were performed to compare model performance. Intraclass correlation coefficients (ICCs) and Bland-Altman analysis were used to assess intra- and interreader reproducibility. RESULTS Adverse outcome occurred in 39 patients (11 deaths, 28 ICU admissions). AUC values from AI-assisted analysis were significantly higher than those from visual analysis for both semi-quantitative CT scores and percentages of lung involvement (all P<0.001). Intrareader and interreader agreement rates were significantly higher for AI-assisted analysis than for visual analysis (all ICC ≥0.960 versus ≥0.885). AI-assisted variability for the quantitative percentage of lung involvement was 17.2% (coefficient of variation) versus 34.7% for visual analysis. The sample size required to detect a 5% change in lung involvement with 90% power and an α error of 0.05 was 250 patients with AI-assisted analysis and 1014 patients with visual analysis. CONCLUSION AI-assisted analysis of lung involvement on submillisievert low-dose chest CT outperformed conventional visual analysis in predicting outcome in COVID-19 patients while reducing CT variability. Lung involvement on chest CT could be used as a reliable metric in future clinical trials.
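The reported sample sizes can be sanity-checked with the standard two-group formula n = 2·(z₁₋α/₂ + z₁₋β)²·(σ/δ)², taking the measurement variabilities (17.2% and 34.7%) as σ and the 5% detectable change as δ, both in percentage points. This is a back-of-the-envelope reconstruction under those assumptions, not the authors' exact computation; it lands within a patient or two of the reported 250 and 1014:

```python
import math
from statistics import NormalDist

def sample_size(sigma, delta, alpha=0.05, power=0.90):
    """Patients needed to detect a mean change `delta`, two-sided test
    at significance `alpha` with the given power, for variability `sigma`."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # ≈ 1.960 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)          # ≈ 1.282 for power = 0.90
    return math.ceil(2 * (z_a + z_b) ** 2 * (sigma / delta) ** 2)

n_ai = sample_size(sigma=17.2, delta=5.0)      # AI-assisted variability
n_visual = sample_size(sigma=34.7, delta=5.0)  # visual-analysis variability
```

The quadratic dependence on σ/δ is why halving the measurement variability cuts the required trial size roughly fourfold, which is the paper's central sample-size argument.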
Affiliation(s)
- Christopher Gieraerts
- From the Department of Radiology – Imelda Hospital, Bonheiden, Belgium (C.G., A.D., L.J., A.D., Y.D.B., R.S.); Department of Pulmonology – Imelda Hospital, Bonheiden, Belgium (N.D.B., A.V.D.B., T.L., A.H., E.F.); Department of Intensive Care Medicine – Imelda Hospital, Bonheiden, Belgium (E.F.); Department of Geriatrics – Imelda Hospital, Bonheiden, Belgium (M.L.); Department of Cardiology – Imelda Hospital, Bonheiden, Belgium (B.E., J.R.); Department of Medical Microbiology – Imelda Hospital, Bonheiden, Belgium (A.S., J.F.); Department of Emergency Medicine – Imelda Hospital, Bonheiden, Belgium (M.G.)
- Anthony Dangis
- Lode Janssen
- Annick Demeyere
- Yves De Bruecker
- Nele De Brucker
- Annelies van Den Bergh
- Tine Lauwerier
- André Heremans
- Eric Frans
- Michaël Laurent
- Bavo Ector
- John Roosen
- Annick Smismans
- Johan Frans
- Marc Gillis
- Rolf Symons
177
|
Deep Learning Signature Based on Staging CT for Preoperative Prediction of Sentinel Lymph Node Metastasis in Breast Cancer. Acad Radiol 2020; 27:1226-1233. [PMID: 31818648 DOI: 10.1016/j.acra.2019.11.007] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2019] [Revised: 11/10/2019] [Accepted: 11/13/2019] [Indexed: 12/14/2022]
Abstract
RATIONALE AND OBJECTIVES To evaluate the noninvasive predictive performance of deep learning features based on staging CT for sentinel lymph node (SLN) metastasis of breast cancer. MATERIALS AND METHODS A total of 348 breast cancer patients were enrolled in this study, with their SLN metastases pathologically confirmed. All patients underwent preoperative contrast-enhanced CT examinations, and the CT images were segmented and analyzed to extract deep features. After feature selection, a deep learning signature was built with the selected key features. The performance of the deep learning signature was assessed with respect to discrimination, calibration, and clinical usefulness in the primary cohort (184 patients from January 2016 to March 2017) and then validated in the independent validation cohort (164 patients from April 2017 to December 2018). RESULTS Ten deep learning features were automatically selected in the primary cohort to establish the deep learning signature of SLN metastasis. The deep learning signature showed favorable discriminative ability, with an area under the curve of 0.801 (95% confidence interval: 0.736-0.867) in the primary cohort and 0.817 (95% confidence interval: 0.751-0.884) in the validation cohort. To further distinguish the number of metastatic SLNs (1-2 vs. more than two), another deep learning signature was constructed and also showed moderate performance (area under the curve 0.770). CONCLUSION We developed deep learning signatures for preoperative prediction of SLN metastasis status and number (1-2 vs. more than two metastatic SLNs) in patients with breast cancer. The deep learning signature may provide a noninvasive approach to assist clinicians in predicting SLN metastasis in patients with breast cancer.
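The discrimination metric this abstract reports, the area under the ROC curve, can be sketched via its pairwise (Wilcoxon-Mann-Whitney) formulation; the scores and labels below are illustrative, not the study's data.

```python
# Hedged sketch: AUC as the fraction of (positive, negative) pairs the
# model ranks correctly; toy scores/labels, not the paper's cohort.

def auc(scores, labels):
    """Fraction of (positive, negative) pairs ranked correctly (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.4, 0.3]   # model output per patient (illustrative)
labels = [1,   0,   1,   0]     # pathologically confirmed SLN status
print(auc(scores, labels))      # 3 of 4 pairs ordered correctly -> 0.75
```

An AUC of 0.801, as reported in the primary cohort, means roughly 80% of such metastasis-positive/negative patient pairs are ranked correctly by the signature.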
178
|
Bin-Nun A, Kassirer Y, Mimouni FB, Shchors I, Hammerman C. Head Circumference Growth Is Enhanced by SMOFlipid in Preterm Neonates. Am J Perinatol 2020; 37:1130-1133. [PMID: 31167235 DOI: 10.1055/s-0039-1692390] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/04/2023]
Abstract
BACKGROUND Suboptimal fat intake during the early postnatal weeks significantly affects brain growth and maturation. Studies to date have focused on the quantity rather than the quality of fat intake. OBJECTIVE We hypothesized that early nutrition of premature neonates should also include optimization of the type of fat intake, and thus those receiving SMOFlipid, a balanced multicomponent lipid emulsion, would have improved head growth as measured by head circumference (HC) at discharge. STUDY DESIGN We retrospectively reviewed HC in infants weighing <1,500 g who were hospitalized for two or more weeks during a 20-month period in which all preterm infants received fat as Lipofundin, and the following 20-month period in which all such infants received SMOFlipid. Lipids were dosed up to 3 g/kg/day and reduced as enteral nutrition progressed. Parenteral fish oil (Omegaven) was permitted as rescue therapy during both periods. RESULTS Period 2 infants had better head growth (0.79 [0.69, 0.90] vs. 0.75 [0.64, 0.86] cm/week; p = 0.0158). More infants reached discharge with an HC at or above the 50th percentile (51 vs. 31%; p = 0.0007), and fewer infants had an HC at or below the 3rd percentile (11 vs. 14%; p = 0.023). Median length of stay was reduced by more than 1 week. A multivariable regression was performed using the weekly increase in HC as the dependent variable, and the time epoch, birth weight, gestational age, hospitalization days, and gender as independent variables. Only the time epoch and days of hospitalization were significant (both p < 0.0001). CONCLUSION Our data offer preliminary evidence of improved brain growth in those receiving a balanced lipid emulsion as compared with a soybean oil emulsion.
Affiliation(s)
- Alona Bin-Nun
- Department of Neonatology, Shaare Zedek Medical Center, Jerusalem, Israel.,Faculty of Medicine, Hebrew University, Jerusalem, Israel
| | - Yair Kassirer
- Department of Neonatology, Shaare Zedek Medical Center, Jerusalem, Israel
| | - Francis B Mimouni
- Department of Neonatology, Shaare Zedek Medical Center, Jerusalem, Israel.,Sackler Faculty of Medicine, Tel Aviv University Tel Aviv, Tel Aviv, Israel
| | - Irina Shchors
- Department of Neonatology, Shaare Zedek Medical Center, Jerusalem, Israel
| | - Cathy Hammerman
- Department of Neonatology, Shaare Zedek Medical Center, Jerusalem, Israel.,Faculty of Medicine, Hebrew University, Jerusalem, Israel
| |
179
|
Bearing Fault Identification Using Machine Learning and Adaptive Cascade Fault Observer. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10175827] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
In this work, a hybrid procedure for bearing fault identification using machine learning and an adaptive cascade observer is explained. To design an adaptive cascade observer, the first step is approximation of the normal signal. Therefore, the fuzzy orthonormal regressive (FOR) technique was developed to approximate the acoustic emission (AE) and vibration (non-stationary and nonlinear) bearing signals in normal conditions. After approximating the normal bearing signal with the FOR technique, the adaptive cascade observer is modeled in four steps. First, the linear observation technique using a FOR proportional-integral (PI) observer (FOR-PIO) is developed. In the second step, to increase the uncertainty-rejection power (robustness) of the FOR-PIO, the structure procedure is applied serially. Next, the fuzzy-like observer is selected to increase the accuracy of the FOR structure PI observer (FOR-SPIO). Moreover, the adaptive technique is used to improve the reliability of the cascade (fuzzy-structure PI) observer. For fault identification, a machine-learning algorithm using a support vector machine (SVM) is recommended. The effectiveness of the adaptive cascade observer with the SVM fault identifier was validated on vibration and AE datasets. Based on the results, the average vibration and AE fault-diagnosis accuracies using the adaptive cascade observer with the SVM fault identifier are 97.8% and 97.65%, respectively.
180
|
Majdi MS, Keerthivasan MB, Rutt BK, Zahr NM, Rodriguez JJ, Saranathan M. Automated thalamic nuclei segmentation using multi-planar cascaded convolutional neural networks. Magn Reson Imaging 2020; 73:45-54. [PMID: 32828985 DOI: 10.1016/j.mri.2020.08.005] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2020] [Revised: 07/25/2020] [Accepted: 08/17/2020] [Indexed: 12/15/2022]
Abstract
PURPOSE To develop a fast and accurate convolutional neural network based method for segmentation of thalamic nuclei. METHODS A cascaded multi-planar scheme with a modified residual U-Net architecture was used to segment thalamic nuclei on conventional and white-matter-nulled (WMn) magnetization prepared rapid gradient echo (MPRAGE) data. A single network was optimized to work with images from healthy controls and patients with multiple sclerosis (MS) and essential tremor (ET), acquired at both 3 T and 7 T field strengths. WMn-MPRAGE images were manually delineated by a trained neuroradiologist using the Morel histological atlas as a guide to generate reference ground truth labels. The Dice similarity coefficient and volume similarity index (VSI) were used to evaluate performance. Clinical utility was demonstrated by applying this method to study the effect of MS on thalamic nuclei atrophy. RESULTS Segmentation of each thalamus into twelve nuclei was achieved in under a minute. For 7 T WMn-MPRAGE, the proposed method outperforms the current state-of-the-art on patients with ET, with statistically significant improvements in Dice for five nuclei (increases in the range of 0.05-0.18) and in VSI for four nuclei (increases in the range of 0.05-0.19), while performing comparably for healthy and MS subjects. Dice and VSI achieved using 7 T WMn-MPRAGE data are comparable to those using 3 T WMn-MPRAGE data. For conventional MPRAGE, the proposed method shows a statistically significant Dice improvement in the range of 0.14-0.63 over FreeSurfer for all nuclei and disease types. An analysis of the effect of noise on network performance shows robustness to images with an SNR as low as half the baseline SNR. Atrophy of four thalamic nuclei and of the whole thalamus was observed for MS patients compared to healthy control subjects, after controlling for the effects of parallel imaging, intracranial volume, gender, and age (p < 0.004).
CONCLUSION The proposed segmentation method is fast and accurate, performs well across disease types and field strengths, and shows great potential for improving our understanding of thalamic nuclei involvement in neurological diseases.
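The Dice similarity coefficient and volume similarity index used for evaluation here are standard overlap and volume-agreement measures; a minimal sketch, assuming the common definitions rather than the authors' exact implementation:

```python
# Hedged sketch of the two evaluation metrics over binary label masks
# (flattened to lists of 0/1); toy masks, not the paper's data.

def dice(a, b):
    """DSC = 2|A ∩ B| / (|A| + |B|): spatial overlap between two masks."""
    inter = sum(x and y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2.0 * inter / total if total else 1.0

def vsi(a, b):
    """VSI = 1 - |V_A - V_B| / (V_A + V_B): volume agreement only."""
    va, vb = sum(a), sum(b)
    return 1.0 - abs(va - vb) / (va + vb) if (va + vb) else 1.0

pred  = [1, 1, 1, 0, 0, 0]
truth = [0, 1, 1, 1, 0, 0]
print(dice(pred, truth))  # 2 overlapping voxels, 3+3 total -> ~0.667
print(vsi(pred, truth))   # equal volumes -> 1.0
```

Note that a segmentation can score a perfect VSI (equal volumes) while overlapping poorly, which is why the paper reports both metrics.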
Affiliation(s)
- Mohammad S Majdi
- Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ, United States of America
- Mahesh B Keerthivasan
- Department of Medical Imaging, University of Arizona, Tucson, AZ, United States of America; Siemens Healthcare, Tucson, AZ, USA
- Brian K Rutt
- Department of Radiology, Stanford University, Stanford, CA, United States of America
- Natalie M Zahr
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, United States of America
- Jeffrey J Rodriguez
- Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ, United States of America
- Manojkumar Saranathan
- Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ, United States of America; Department of Medical Imaging, University of Arizona, Tucson, AZ, United States of America
181
|
Renard F, Guedria S, Palma ND, Vuillerme N. Variability and reproducibility in deep learning for medical image segmentation. Sci Rep 2020; 10:13724. [PMID: 32792540 PMCID: PMC7426407 DOI: 10.1038/s41598-020-69920-0] [Citation(s) in RCA: 82] [Impact Index Per Article: 16.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2019] [Accepted: 07/11/2020] [Indexed: 12/11/2022] Open
Abstract
Medical image segmentation is an important tool for current clinical applications. It is the backbone of numerous clinical diagnosis methods, oncological treatments and computer-integrated surgeries. A new class of machine learning algorithms, deep learning algorithms, outperforms classical segmentation methods in terms of accuracy. However, these techniques are complex and can have a high range of variability, calling the reproducibility of the results into question. In this article, through a literature review, we propose an original overview of the sources of variability to better understand the challenges and issues of reproducibility related to deep learning for medical image segmentation. Finally, we propose three main recommendations to address these potential issues: (1) an adequate description of the framework of deep learning, (2) a suitable analysis of the different sources of variability in the framework of deep learning, and (3) an efficient system for evaluating the segmentation results.
Affiliation(s)
- Félix Renard
- Univ. Grenoble Alpes, CNRS, Grenoble INP, LIG, 38000, Grenoble, France.
- Univ. Grenoble Alpes, AGEIS, 38000, Grenoble, France.
| | - Soulaimane Guedria
- Univ. Grenoble Alpes, CNRS, Grenoble INP, LIG, 38000, Grenoble, France
- Univ. Grenoble Alpes, AGEIS, 38000, Grenoble, France
| | - Noel De Palma
- Univ. Grenoble Alpes, CNRS, Grenoble INP, LIG, 38000, Grenoble, France
| | - Nicolas Vuillerme
- Univ. Grenoble Alpes, AGEIS, 38000, Grenoble, France
- Institut Universitaire de France, Paris, France
| |
182
|
Automatic segmentation of brain MRI using a novel patch-wise U-net deep architecture. PLoS One 2020; 15:e0236493. [PMID: 32745102 PMCID: PMC7398543 DOI: 10.1371/journal.pone.0236493] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2019] [Accepted: 07/07/2020] [Indexed: 12/22/2022] Open
Abstract
Accurate segmentation of brain magnetic resonance imaging (MRI) is an essential step in quantifying changes in brain structure. Deep learning has been extensively used in recent years for brain image segmentation, with highly promising performance. In particular, the U-net architecture has been widely used for segmentation in various biomedical fields. In this paper, we propose a patch-wise U-net architecture for the automatic segmentation of brain structures in structural MRI. In the proposed brain segmentation method, the non-overlapping patch-wise U-net is used to overcome the drawbacks of the conventional U-net, with greater retention of local information. In our proposed method, the slices from an MRI scan are divided into non-overlapping patches that are fed into the U-net model along with their corresponding patches of ground truth so as to train the network. The experimental results show that the proposed patch-wise U-net model achieves a Dice similarity coefficient (DSC) score of 0.93 on average and outperforms the conventional U-net and SegNet-based methods by 3% and 10%, respectively, on the Open Access Series of Imaging Studies (OASIS) and Internet Brain Segmentation Repository (IBSR) datasets.
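The non-overlapping patch extraction described in this abstract can be sketched as follows; the 2x2 patch size, the toy 4x4 slice, and the function name are illustrative assumptions, not the paper's values.

```python
# Hedged sketch: splitting a 2-D slice into non-overlapping patches, the
# data-preparation step of the patch-wise U-net pipeline.

def to_patches(slice2d, ph, pw):
    """Split an H x W slice (list of lists) into non-overlapping ph x pw
    patches, in row-major order. H and W are assumed divisible by the
    patch size for simplicity."""
    h, w = len(slice2d), len(slice2d[0])
    patches = []
    for i in range(0, h, ph):
        for j in range(0, w, pw):
            patches.append([row[j:j + pw] for row in slice2d[i:i + ph]])
    return patches

slice2d = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 toy slice
patches = to_patches(slice2d, 2, 2)
print(len(patches))   # 4 patches, each 2x2
print(patches[0])     # [[0, 1], [4, 5]]
```

At training time each intensity patch would be paired with the ground-truth label patch cut at the same coordinates, so the network sees aligned input/target pairs.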
183
|
Dong X, Xu S, Liu Y, Wang A, Saripan MI, Li L, Zhang X, Lu L. Multi-view secondary input collaborative deep learning for lung nodule 3D segmentation. Cancer Imaging 2020; 20:53. [PMID: 32738913 PMCID: PMC7395980 DOI: 10.1186/s40644-020-00331-0] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/24/2019] [Accepted: 07/19/2020] [Indexed: 01/17/2023] Open
Abstract
BACKGROUND Convolutional neural networks (CNNs) have been extensively applied to two-dimensional (2D) medical image segmentation, yielding excellent performance. However, their application to three-dimensional (3D) nodule segmentation remains a challenge. METHODS In this study, we propose a multi-view secondary input residual (MV-SIR) convolutional neural network model for 3D lung nodule segmentation using the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset of chest computed tomography (CT) images. Lung nodule cubes are prepared from the sample CT images. Further, from the axial, coronal, and sagittal perspectives, multi-view patches are generated with randomly selected voxels in the lung nodule cubes as centers. Our model consists of six submodels, which enable learning of features from 3D lung nodules sliced into three views; each submodel extracts voxel heterogeneity and shape heterogeneity features. We convert the segmentation of 3D lung nodules into voxel classification by inputting the multi-view patches into the model and determining whether each voxel belongs to the nodule. The structure of the secondary input residual submodel comprises a residual block followed by a secondary input module. We integrate the six submodels to classify whether voxel points belong to nodules, and then reconstruct the segmentation image. RESULTS The results of tests conducted using our model and comparison with other existing CNN models indicate that the MV-SIR model achieves excellent results in the 3D segmentation of pulmonary nodules, with a Dice coefficient of 0.926 and an average surface distance of 0.072. CONCLUSION Our MV-SIR model can accurately perform 3D segmentation of lung nodules with the same segmentation accuracy as the U-net model.
Affiliation(s)
- Xianling Dong
- Present Address: Department of Biomedical Engineering, Chengde Medical University, Chengde City, Hebei Province, China
- Shiqi Xu
- Present Address: Department of Biomedical Engineering, Chengde Medical University, Chengde City, Hebei Province, China
- Yanli Liu
- Present Address: Department of Biomedical Engineering, Chengde Medical University, Chengde City, Hebei Province, China
- Aihui Wang
- Department of Nuclear Medicine, Affiliated Hospital, Chengde Medical University, Chengde City, China
- M Iqbal Saripan
- Faculty of Engineering, Universiti Putra Malaysia, Serdang, Malaysia
- Li Li
- Present Address: Department of Biomedical Engineering, Chengde Medical University, Chengde City, Hebei Province, China
- Xiaolei Zhang
- Present Address: Department of Biomedical Engineering, Chengde Medical University, Chengde City, Hebei Province, China
- Lijun Lu
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
184
|
Hassan M, Ali S, Alquhayz H, Safdar K. Developing intelligent medical image modality classification system using deep transfer learning and LDA. Sci Rep 2020; 10:12868. [PMID: 32732962 PMCID: PMC7393510 DOI: 10.1038/s41598-020-69813-2] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2019] [Accepted: 07/19/2020] [Indexed: 01/07/2023] Open
Abstract
Rapid advancement in imaging technology generates an enormous amount of heterogeneous medical data for disease diagnosis and the rehabilitation process. Radiologists may require related clinical cases from medical archives for analysis and disease diagnosis. It is challenging to retrieve the associated clinical cases automatically, efficiently and accurately from a substantial medical image archive due to the diversity of diseases and imaging modalities. We propose an efficient and accurate approach for medical image modality classification that can be used for retrieval of clinical cases from large medical repositories. The proposed approach is developed using the transfer learning concept with a pre-trained ResNet50 deep learning model for optimized feature extraction, followed by linear discriminant analysis classification (TLRN-LDA). Extensive experiments are performed on the challenging standard benchmark ImageCLEF-2012 dataset of 31 classes. The developed approach yields an improved average classification accuracy of 87.91%, up to 10% higher than state-of-the-art approaches on the same dataset. Moreover, hand-crafted features are extracted for comparison. The performance of the TLRN-LDA system demonstrates its effectiveness over state-of-the-art systems. The developed approach may be deployed to diagnostic centers to assist practitioners with accurate and efficient clinical case retrieval and disease diagnosis.
Affiliation(s)
- Mehdi Hassan
- Department of Computer Science, Air University, PAF Complex Sector E-9, Islamabad, Pakistan
- Safdar Ali
- Directorate General National Repository, Islamabad, Pakistan
- Hani Alquhayz
- Department of Computer Science and Information, College of Science in Zulfi, Majmaah University, Al-Majmaah, 11952, Saudi Arabia
- Khushbakht Safdar
- Al Nafees Medical College and Teaching Hospital, ISRA University, Lehtrar Road, Islamabad, Pakistan
185
|
Sheng K. Artificial intelligence in radiotherapy: a technological review. Front Med 2020; 14:431-449. [PMID: 32728877 DOI: 10.1007/s11684-020-0761-1] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2019] [Accepted: 02/14/2020] [Indexed: 12/19/2022]
Abstract
Radiation therapy (RT) is widely used to treat cancer. Technological advances in RT have occurred over the past 30 years. These advances, such as three-dimensional image guidance, intensity modulation, and robotics, created challenges and opportunities for the next breakthrough, in which artificial intelligence (AI) will possibly play important roles. AI will replace certain repetitive and labor-intensive tasks and improve the accuracy and consistency of others, particularly those with increased complexity because of technological advances. The improvement in efficiency and consistency is important for managing the increasing burden of cancer patients on society. Furthermore, AI may provide new functionalities that facilitate satisfactory RT. These functionalities include superior images for real-time intervention and adaptive and personalized RT. AI may effectively synthesize and analyze big data for such purposes. This review describes the RT workflow and identifies areas, including imaging, treatment planning, quality assurance, and outcome prediction, that benefit from AI. This review primarily focuses on deep-learning techniques, although conventional machine-learning techniques are also mentioned.
Affiliation(s)
- Ke Sheng
- Department of Radiation Oncology, University of California, Los Angeles, CA, 90095, USA
186
|
Machicado JD, Koay EJ, Krishna SG. Radiomics for the Diagnosis and Differentiation of Pancreatic Cystic Lesions. Diagnostics (Basel) 2020; 10:505. [PMID: 32708348 PMCID: PMC7399814 DOI: 10.3390/diagnostics10070505] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2020] [Revised: 07/20/2020] [Accepted: 07/20/2020] [Indexed: 12/12/2022] Open
Abstract
Radiomics, also known as quantitative imaging or texture analysis, involves extracting a large number of features traditionally unmeasured in conventional radiological cross-sectional images and converting them into mathematical models. This review describes this approach and its use in the evaluation of pancreatic cystic lesions (PCLs). This discipline has the potential of more accurately assessing, classifying, risk stratifying, and guiding the management of PCLs. Existing studies have provided important insight into the role of radiomics in managing PCLs. Although these studies are limited by the use of retrospective design, single center data, and small sample sizes, radiomic features in combination with clinical data appear to be superior to the current standard of care in differentiating cyst type and in identifying mucinous PCLs with high-grade dysplasia. Combining radiomic features with other novel endoscopic diagnostics, including cyst fluid molecular analysis and confocal endomicroscopy, can potentially optimize the predictive accuracy of these models. There is a need for multicenter prospective studies to elucidate the role of radiomics in the management of PCLs.
Affiliation(s)
- Jorge D. Machicado
- Division of Gastroenterology and Hepatology, Mayo Clinic Health System, Eau Claire, WI 54703, USA
- Eugene J. Koay
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Somashekar G. Krishna
- Division of Gastroenterology, Hepatology and Nutrition, The Ohio State University Wexner Medical Center, Columbus, OH 43210, USA
187
|
Essa E, Aldesouky D, Hussein SE, Rashad MZ. Neuro-fuzzy patch-wise R-CNN for multiple sclerosis segmentation. Med Biol Eng Comput 2020; 58:2161-2175. [PMID: 32681214 DOI: 10.1007/s11517-020-02225-6] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2019] [Accepted: 06/29/2020] [Indexed: 12/21/2022]
Abstract
The segmentation of lesions plays a core role in the diagnosis and monitoring of multiple sclerosis (MS). Magnetic resonance imaging (MRI) is the image modality most frequently used to evaluate such lesions. Because of the massive amount of data, manual segmentation cannot be achieved within a reasonable time, which restricts the use of accurate quantitative measurement in clinical practice. Therefore, the need for effective automated segmentation techniques is critical. However, the large spatial variability in the structure of brain lesions makes this more challenging. Recently, convolutional neural networks (CNNs), in particular the region-based CNN (R-CNN), have attained tremendous progress in object recognition because of their ability to learn and represent features, and have also gained attention in brain imaging, especially in tissue and brain segmentation. In this paper, an automated technique for MS lesion segmentation is proposed, built on a 3D patch-wise R-CNN. The proposed system includes two stages: first, MS lesions are segmented in T2-w and FLAIR sequences using R-CNNs; then an adaptive neuro-fuzzy inference system (ANFIS) is applied to fuse the results of the two modalities. To evaluate the performance of the proposed method, the public MICCAI2008 MS challenge dataset is employed. The experimental results show competitive performance compared with state-of-the-art MS lesion segmentation methods, with an average total score of 83.25 and an average sensitivity of 61.8% on the MICCAI2008 testing set. Graphical Abstract The proposed system overview: first, the two input modalities, FLAIR and T2, are pre-processed to remove the skull and correct the bias field. Then 3D patches for lesion and non-lesion tissues are extracted and fed to the R-CNNs.
Each R-CNN produces a probability map of the segmentation result, which is provided to ANFIS to fuse the results and obtain the final MS lesion segmentation. The MS lesions are shown on a pre-processed FLAIR image.
Affiliation(s)
- Ehab Essa
- Computer Science Department, Faculty of Computers and Information, Mansoura University, Mansoura, Dakahlia Governorate, Egypt
- Doaa Aldesouky
- Computer Science Department, Faculty of Computers and Information, Mansoura University, Mansoura, Dakahlia Governorate, Egypt
- Sherif E Hussein
- Computer Engineering and Systems Department, Faculty of Engineering, Mansoura University, Mansoura, Dakahlia Governorate, Egypt
- M Z Rashad
- Computer Science Department, Faculty of Computers and Information, Mansoura University, Mansoura, Dakahlia Governorate, Egypt
188
|
Dou Q, Liu Q, Heng PA, Glocker B. Unpaired Multi-Modal Segmentation via Knowledge Distillation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:2415-2425. [PMID: 32012001 DOI: 10.1109/tmi.2019.2963882] [Citation(s) in RCA: 67] [Impact Index Per Article: 13.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Multi-modal learning is typically performed with network architectures containing modality-specific layers and shared layers, utilizing co-registered images of different modalities. We propose a novel learning scheme for unpaired cross-modality image segmentation, with a highly compact architecture achieving superior segmentation accuracy. In our method, we heavily reuse network parameters, by sharing all convolutional kernels across CT and MRI, and only employ modality-specific internal normalization layers which compute respective statistics. To effectively train such a highly compact model, we introduce a novel loss term inspired by knowledge distillation, by explicitly constraining the KL-divergence of our derived prediction distributions between modalities. We have extensively validated our approach on two multi-class segmentation problems: i) cardiac structure segmentation, and ii) abdominal organ segmentation. Different network settings, i.e., 2D dilated network and 3D U-net, are utilized to investigate our method's general efficacy. Experimental results on both tasks demonstrate that our novel multi-modal learning scheme consistently outperforms single-modal training and previous multi-modal approaches.
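The distillation-inspired loss this abstract describes constrains the KL-divergence between per-modality prediction distributions; a minimal pure-Python sketch, in which the toy logits and the symmetric pairing are assumptions for illustration rather than the paper's exact formulation:

```python
import math

# Hedged sketch: KL-divergence between the class-probability outputs of a
# CT path and an MRI path for the same spatial location.

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl(p, q):
    """KL(p || q) = sum_i p_i * log(p_i / q_i)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

ct_probs  = softmax([2.0, 0.5, 0.1])   # per-class prediction, CT path (toy)
mri_probs = softmax([1.8, 0.7, 0.2])   # same location, MRI path (toy)
loss = 0.5 * (kl(ct_probs, mri_probs) + kl(mri_probs, ct_probs))  # symmetrised
print(loss)   # small positive value; zero iff the two distributions match
```

Minimizing such a term pushes the shared-kernel network toward modality-consistent predictions, which is what allows the compact architecture to be trained on unpaired CT and MRI.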
189
|
Kumar S, Mankame DP. Optimization driven Deep Convolution Neural Network for brain tumor classification. Biocybern Biomed Eng 2020. [DOI: 10.1016/j.bbe.2020.05.009] [Citation(s) in RCA: 35] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
190
|
Coupé P, Mansencal B, Clément M, Giraud R, Denis de Senneville B, Ta VT, Lepetit V, Manjon JV. AssemblyNet: A large ensemble of CNNs for 3D whole brain MRI segmentation. Neuroimage 2020; 219:117026. [PMID: 32522665 DOI: 10.1016/j.neuroimage.2020.117026] [Citation(s) in RCA: 71] [Impact Index Per Article: 14.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2019] [Revised: 05/28/2020] [Accepted: 06/04/2020] [Indexed: 10/24/2022] Open
Abstract
Whole brain segmentation of fine-grained structures using deep learning (DL) is a very challenging task, since the number of anatomical labels is very high compared to the number of available training images. To address this problem, previous DL methods proposed to use a single convolutional neural network (CNN) or a few independent CNNs. In this paper, we present a novel ensemble method based on a large number of CNNs processing different overlapping brain areas. Inspired by parliamentary decision-making systems, we propose a framework called AssemblyNet, made of two "assemblies" of U-Nets. Such a parliamentary system is capable of dealing with complex decisions and unseen problems and of reaching a relevant consensus. AssemblyNet introduces sharing of knowledge among neighboring U-Nets, an "amendment" procedure made by the second assembly at higher resolution to refine the decision taken by the first one, and a final decision obtained by majority voting. During our validation, AssemblyNet showed competitive performance compared to state-of-the-art methods such as U-Net, joint label fusion and SLANT. Moreover, we investigated the scan-rescan consistency and the robustness to disease effects of our method. These experiments demonstrated the reliability of AssemblyNet. Finally, we showed the benefit of using semi-supervised learning to improve the performance of our method.
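The final majority-voting step over the ensemble's outputs can be sketched per voxel; the flat label maps and the function name below are illustrative assumptions, not AssemblyNet's implementation.

```python
from collections import Counter

# Hedged sketch: per-voxel majority vote across an ensemble of label maps,
# the final decision rule named in the AssemblyNet abstract.

def majority_vote(label_maps):
    """Return, for each voxel position, the most common label across the
    ensemble's flat label maps (ties broken by first occurrence)."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*label_maps)]

net_a = [1, 2, 2, 0]   # toy outputs from three ensemble members
net_b = [1, 2, 3, 0]
net_c = [1, 3, 3, 0]
print(majority_vote([net_a, net_b, net_c]))  # [1, 2, 3, 0]
```

Disagreements between members (here at voxels 1 and 2) are resolved by the majority, which is what makes the consensus robust to individual network errors.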
Affiliation(s)
- Pierrick Coupé
- CNRS, Univ. Bordeaux, Bordeaux INP, LABRI, UMR5800, F-33400, Talence, France
- Boris Mansencal
- CNRS, Univ. Bordeaux, Bordeaux INP, LABRI, UMR5800, F-33400, Talence, France
- Michaël Clément
- CNRS, Univ. Bordeaux, Bordeaux INP, LABRI, UMR5800, F-33400, Talence, France
- Rémi Giraud
- Bordeaux INP, Univ. Bordeaux, CNRS, IMS, UMR 5218, F-33400, Talence, France
- Vinh-Thong Ta
- CNRS, Univ. Bordeaux, Bordeaux INP, LABRI, UMR5800, F-33400, Talence, France
- Vincent Lepetit
- CNRS, Univ. Bordeaux, Bordeaux INP, LABRI, UMR5800, F-33400, Talence, France
- José V Manjon
- ITACA, Universitat Politècnica de València, 46022, Valencia, Spain
191
Khan Z, Yahya N, Alsaih K, Ali SSA, Meriaudeau F. Evaluation of Deep Neural Networks for Semantic Segmentation of Prostate in T2W MRI. Sensors 2020; 20:3183. [PMID: 32503330 PMCID: PMC7309110 DOI: 10.3390/s20113183] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/09/2020] [Revised: 04/04/2020] [Accepted: 04/12/2020] [Indexed: 12/23/2022]
Abstract
In this paper, we present an evaluation of four encoder–decoder CNNs for segmentation of the prostate gland in T2W magnetic resonance imaging (MRI) images. The four selected CNNs are FCN, SegNet, U-Net, and DeepLabV3+, which were originally proposed for the segmentation of road scenes, biomedical images, and natural images. Segmentation of the prostate in T2W MRI images is an important step in the automatic diagnosis of prostate cancer, enabling better lesion detection and staging. Therefore, many research efforts have been conducted to improve the segmentation of the prostate gland in MRI images. The main challenges of prostate gland segmentation are the blurry prostate boundary and the variability of prostate anatomical structure. In this work, we investigated the performance of encoder–decoder CNNs for segmentation of the prostate gland in T2W MRI. Image pre-processing techniques, including image resizing, center-cropping, and intensity normalization, are applied to address inter-patient and inter-scanner variability as well as the dominance of background pixels over prostate pixels. In addition, to enrich the network with more data, increase data variation, and improve accuracy, patch extraction and data augmentation are applied prior to training the networks. Furthermore, because background pixels greatly outnumber prostate pixels, the class imbalance problem is addressed by using a weighted cross-entropy loss function during training of the CNN models, avoiding biased networks. The performance of the CNNs is evaluated in terms of the Dice similarity coefficient (DSC), and our experimental results show that patch-wise DeepLabV3+ gives the best performance, with a DSC of 92.8%. This is the highest DSC score among the FCN, SegNet, and U-Net models, and it is competitive with a recently published state-of-the-art prostate segmentation method.
Affiliation(s)
- Zia Khan
- Centre for Intelligent Signal and Imaging Research (CISIR), Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, Seri Iskandar 32610, Malaysia
- Norashikin Yahya
- Centre for Intelligent Signal and Imaging Research (CISIR), Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, Seri Iskandar 32610, Malaysia
- Correspondence:
- Khaled Alsaih
- Centre for Intelligent Signal and Imaging Research (CISIR), Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, Seri Iskandar 32610, Malaysia
- Syed Saad Azhar Ali
- Centre for Intelligent Signal and Imaging Research (CISIR), Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, Seri Iskandar 32610, Malaysia
192
Seo H, Khuzani MB, Vasudevan V, Huang C, Ren H, Xiao R, Jia X, Xing L. Machine learning techniques for biomedical image segmentation: An overview of technical aspects and introduction to state-of-art applications. Med Phys 2020; 47:e148-e167. [PMID: 32418337 PMCID: PMC7338207 DOI: 10.1002/mp.13649] [Citation(s) in RCA: 109] [Impact Index Per Article: 21.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2019] [Revised: 05/22/2019] [Accepted: 05/30/2019] [Indexed: 12/13/2022] Open
Abstract
In recent years, significant progress has been made in developing more accurate and efficient machine learning algorithms for segmentation of medical and natural images. In this review article, we highlight the imperative role of machine learning algorithms in enabling efficient and accurate segmentation in the field of medical imaging. We specifically focus on several key studies pertaining to the application of machine learning methods to biomedical image segmentation. We review classical machine learning algorithms such as Markov random fields, k-means clustering, and random forests. Although such classical learning models are often less accurate than deep-learning techniques, they are typically more sample-efficient and have a less complex structure. We also review different deep-learning architectures, such as artificial neural networks (ANNs), convolutional neural networks (CNNs), and recurrent neural networks (RNNs), and present the segmentation results attained by those learning models that were published in the past 3 yr. We highlight the successes and limitations of each machine learning paradigm. In addition, we discuss several challenges related to the training of different machine learning models, and we present some heuristics to address those challenges.
Affiliation(s)
- Hyunseok Seo
- Medical Physics Division in the Department of Radiation Oncology, School of Medicine, Stanford University, Stanford, CA, 94305-5847, USA
- Masoud Badiei Khuzani
- Medical Physics Division in the Department of Radiation Oncology, School of Medicine, Stanford University, Stanford, CA, 94305-5847, USA
- Varun Vasudevan
- Institute for Computational and Mathematical Engineering, School of Engineering, Stanford University, Stanford, CA, 94305-4042, USA
- Charles Huang
- Department of Bioengineering, School of Engineering and Medicine, Stanford University, Stanford, CA, 94305-4245, USA
- Hongyi Ren
- Medical Physics Division in the Department of Radiation Oncology, School of Medicine, Stanford University, Stanford, CA, 94305-5847, USA
- Ruoxiu Xiao
- Medical Physics Division in the Department of Radiation Oncology, School of Medicine, Stanford University, Stanford, CA, 94305-5847, USA
- Xiao Jia
- Medical Physics Division in the Department of Radiation Oncology, School of Medicine, Stanford University, Stanford, CA, 94305-5847, USA
- Lei Xing
- Medical Physics Division in the Department of Radiation Oncology, School of Medicine, Stanford University, Stanford, CA, 94305-5847, USA
193
White AE, Dikow RB, Baugh M, Jenkins A, Frandsen PB. Generating segmentation masks of herbarium specimens and a data set for training segmentation models using deep learning. Appl Plant Sci 2020; 8:e11352. [PMID: 32626607 PMCID: PMC7328659 DOI: 10.1002/aps3.11352] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/03/2019] [Accepted: 02/03/2020] [Indexed: 05/03/2023]
Abstract
PREMISE Digitized images of herbarium specimens are highly diverse with many potential sources of visual noise and bias. The systematic removal of noise and minimization of bias must be achieved in order to generate biological insights based on the plants rather than the digitization and mounting practices involved. Here, we develop a workflow and data set of high-resolution image masks to segment plant tissues in herbarium specimen images and remove background pixels using deep learning. METHODS AND RESULTS We generated 400 curated, high-resolution masks of ferns using a combination of automatic and manual tools for image manipulation. We used those images to train a U-Net-style deep learning model for image segmentation, achieving a final Sørensen-Dice coefficient of 0.96. The resulting model can automatically, efficiently, and accurately segment massive data sets of digitized herbarium specimens, particularly for ferns. CONCLUSIONS The application of deep learning in herbarium sciences requires transparent and systematic protocols for generating training data so that these labor-intensive resources can be generalized to other deep learning applications. Segmentation ground-truth masks are hard-won data, and we share these data and the model openly in the hopes of furthering model training and transfer learning opportunities for broader herbarium applications.
Affiliation(s)
- Alexander E. White
- Data Science Lab, Office of the Chief Information Officer, Smithsonian Institution, Washington, D.C., USA
- Department of Botany, National Museum of Natural History, Smithsonian Institution, Washington, D.C., USA
- Rebecca B. Dikow
- Data Science Lab, Office of the Chief Information Officer, Smithsonian Institution, Washington, D.C., USA
- Makinnon Baugh
- Department of Plant and Wildlife Sciences, Brigham Young University, Provo, Utah, USA
- Abigail Jenkins
- Department of Plant and Wildlife Sciences, Brigham Young University, Provo, Utah, USA
- Paul B. Frandsen
- Data Science Lab, Office of the Chief Information Officer, Smithsonian Institution, Washington, D.C., USA
- Department of Plant and Wildlife Sciences, Brigham Young University, Provo, Utah, USA
194
Monshi MMA, Poon J, Chung V. Deep learning in generating radiology reports: A survey. Artif Intell Med 2020; 106:101878. [PMID: 32425358 PMCID: PMC7227610 DOI: 10.1016/j.artmed.2020.101878] [Citation(s) in RCA: 62] [Impact Index Per Article: 12.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2019] [Revised: 04/30/2020] [Accepted: 05/10/2020] [Indexed: 12/27/2022]
Abstract
Substantial progress has been made towards implementing automated radiology reporting models based on deep learning (DL), owing to the introduction of large medical text/image datasets. Generating coherent radiology paragraphs that go beyond traditional medical image annotation, or single-sentence description, has been the subject of recent academic attention. This presents a more practical and challenging application and moves towards bridging visual medical features and radiologist text. So far, the most common approach has been to utilize publicly available datasets and develop DL models that integrate convolutional neural networks (CNN) for image analysis alongside recurrent neural networks (RNN) for natural language processing (NLP) and natural language generation (NLG). This is an area of research that we anticipate will grow in the near future. We focus our investigation on the following critical challenges: understanding radiology text/image structures and datasets, applying DL algorithms (mainly CNN and RNN), generating radiology text, and improving existing DL-based models and evaluation metrics. Lastly, we include a critical discussion and future research recommendations. This survey will be useful for researchers interested in DL, particularly those interested in applying DL to radiology reporting.
Affiliation(s)
- Maram Mahmoud A Monshi
- School of Computer Science, University of Sydney, Sydney, Australia; Department of Information Technology, Taif University, Taif, Saudi Arabia
- Josiah Poon
- School of Computer Science, University of Sydney, Sydney, Australia
- Vera Chung
- School of Computer Science, University of Sydney, Sydney, Australia
195
Liu Y, Nacewicz BM, Zhao G, Adluru N, Kirk GR, Ferrazzano PA, Styner MA, Alexander AL. A 3D Fully Convolutional Neural Network With Top-Down Attention-Guided Refinement for Accurate and Robust Automatic Segmentation of Amygdala and Its Subnuclei. Front Neurosci 2020; 14:260. [PMID: 32508558 PMCID: PMC7253589 DOI: 10.3389/fnins.2020.00260] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2019] [Accepted: 03/09/2020] [Indexed: 12/17/2022] Open
Abstract
Recent advances in deep learning have improved the segmentation accuracy of subcortical brain structures, which would be useful in neuroimaging studies of many neurological disorders. However, most existing deep learning-based approaches in neuroimaging do not investigate the specific difficulties of segmenting extremely small but important brain regions such as the subnuclei of the amygdala. To tackle this challenging task, we developed a dual-branch dilated residual 3D fully convolutional network with parallel convolutions to extract more global context and alleviate the class imbalance issue by maintaining a small receptive field that is just the size of the regions of interest (ROIs). We also conduct multi-scale feature fusion in both parallel and series to compensate for the potential information loss during convolutions, which has been shown to be important for small objects. The serial feature fusion enabled by residual connections is further enhanced by a proposed top-down attention-guided refinement unit, where high-resolution low-level spatial details are selectively integrated to complement the high-level but coarse semantic information, enriching the final feature representations. As a result, the segmentations produced by our method are more accurate both volumetrically and morphologically than those of other deep learning-based approaches. To the best of our knowledge, this work is the first deep learning-based approach that targets the subregions of the amygdala. We also demonstrated the feasibility of using a cycle-consistent generative adversarial network (CycleGAN) to harmonize multi-site MRI data, and show that our method generalizes well to challenging traumatic brain injury (TBI) datasets collected from multiple centers. This appears to be a promising strategy for image segmentation in multi-site studies and in the presence of increased morphological variability from significant brain pathology.
Affiliation(s)
- Yilin Liu
- Waisman Brain Imaging Laboratory, University of Wisconsin-Madison, Madison, WI, United States
- Brendon M. Nacewicz
- Department of Psychiatry, University of Wisconsin-Madison, Madison, WI, United States
- Gengyan Zhao
- Department of Medical Physics, University of Wisconsin-Madison, Madison, WI, United States
- Nagesh Adluru
- Waisman Brain Imaging Laboratory, University of Wisconsin-Madison, Madison, WI, United States
- Gregory R. Kirk
- Waisman Brain Imaging Laboratory, University of Wisconsin-Madison, Madison, WI, United States
- Peter A. Ferrazzano
- Waisman Brain Imaging Laboratory, University of Wisconsin-Madison, Madison, WI, United States
- Department of Pediatrics, University of Wisconsin-Madison, Madison, WI, United States
- Martin A. Styner
- Department of Psychiatry, University of North Carolina-Chapel Hill, Chapel Hill, NC, United States
- Department of Computer Science, University of North Carolina-Chapel Hill, Chapel Hill, NC, United States
- Andrew L. Alexander
- Waisman Brain Imaging Laboratory, University of Wisconsin-Madison, Madison, WI, United States
- Department of Psychiatry, University of Wisconsin-Madison, Madison, WI, United States
- Department of Medical Physics, University of Wisconsin-Madison, Madison, WI, United States
196
Ataloglou D, Dimou A, Zarpalas D, Daras P. Fast and Precise Hippocampus Segmentation Through Deep Convolutional Neural Network Ensembles and Transfer Learning. Neuroinformatics 2020; 17:563-582. [PMID: 30877605 DOI: 10.1007/s12021-019-09417-y] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
Abstract
Automatic segmentation of the hippocampus from 3D magnetic resonance imaging has mostly relied on multi-atlas registration methods. In this work, we exploit recent advances in deep learning to design and implement a fully automatic segmentation method, offering both superior accuracy and fast results. The proposed method is based on deep Convolutional Neural Networks (CNNs) and incorporates distinct segmentation and error correction steps. Segmentation masks are produced by an ensemble of three independent models, operating on orthogonal slices of the input volume, while erroneous labels are subsequently corrected by a combination of Replace and Refine networks. We explore different training approaches and demonstrate how, in CNN-based segmentation, multiple datasets can be effectively combined through transfer learning techniques, allowing for improved segmentation quality. The proposed method was evaluated using two different public datasets and compared favorably to existing methodologies. In the EADC-ADNI HarP dataset, the correspondence between the method's output and the available ground-truth manual tracings yielded a mean Dice value of 0.9015, while the required segmentation time for an entire MRI volume was 14.8 seconds. In the MICCAI dataset, the mean Dice value increased to 0.8835 through transfer learning from the larger EADC-ADNI HarP dataset.
Affiliation(s)
- Dimitrios Ataloglou
- Information Technologies Institute (ITI), Centre for Research and Technology HELLAS, 1st km Thermi - Panorama, 57001, Thessaloniki, Greece
- Anastasios Dimou
- Information Technologies Institute (ITI), Centre for Research and Technology HELLAS, 1st km Thermi - Panorama, 57001, Thessaloniki, Greece
- Dimitrios Zarpalas
- Information Technologies Institute (ITI), Centre for Research and Technology HELLAS, 1st km Thermi - Panorama, 57001, Thessaloniki, Greece
- Petros Daras
- Information Technologies Institute (ITI), Centre for Research and Technology HELLAS, 1st km Thermi - Panorama, 57001, Thessaloniki, Greece
198
Mahata N, Sing JK. A novel fuzzy clustering algorithm by minimizing global and spatially constrained likelihood-based local entropies for noisy 3D brain MR image segmentation. Appl Soft Comput 2020. [DOI: 10.1016/j.asoc.2020.106171] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
200
Singh D, Kumar V, Kaur M. Classification of COVID-19 patients from chest CT images using multi-objective differential evolution-based convolutional neural networks. Eur J Clin Microbiol Infect Dis 2020; 39:1379-1389. [PMID: 32337662 PMCID: PMC7183816 DOI: 10.1007/s10096-020-03901-z] [Citation(s) in RCA: 252] [Impact Index Per Article: 50.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2020] [Accepted: 04/07/2020] [Indexed: 12/23/2022]
Abstract
Early classification of 2019 novel coronavirus disease (COVID-19) is essential for disease cure and control. Compared with reverse-transcription polymerase chain reaction (RT-PCR), chest computed tomography (CT) imaging may be a significantly more trustworthy, useful, and rapid technique to classify and evaluate COVID-19, specifically in the epidemic region. Almost all hospitals have CT imaging machines; therefore, chest CT images can be utilized for early classification of COVID-19 patients. However, chest CT-based COVID-19 classification requires a radiology expert and considerable time, which is valuable when COVID-19 infection is growing at a rapid rate. Therefore, automated analysis of chest CT images is desirable to save medical professionals' precious time. In this paper, a convolutional neural network (CNN) is used to classify COVID-19 patients as infected (+ve) or not (-ve). Additionally, the initial parameters of the CNN are tuned using multi-objective differential evolution (MODE). Extensive experiments are performed comparing the proposed model with competitive machine learning techniques on chest CT images. The analysis shows that the proposed model can classify chest CT images with a good accuracy rate.
Affiliation(s)
- Dilbag Singh
- Computer Science and Engineering Department, School of Computing and Information Technology, Manipal University Jaipur, Jaipur, India
- Vijay Kumar
- Computer Science and Engineering Department, National Institute of Technology, Hamirpur, Himachal Pradesh, India
- Manjit Kaur
- Computer and Communication Engineering Department, School of Computing and Information Technology, Manipal University Jaipur, Jaipur, India