1
Gu X, Yu X, Shi G, Li Y, Yang L. Can PD-L1 expression be predicted by contrast-enhanced CT in patients with gastric adenocarcinoma? A preliminary retrospective study. Abdom Radiol (NY) 2023; 48:220-8. [PMID: 36271155] [DOI: 10.1007/s00261-022-03709-9]
Abstract
BACKGROUND This study aimed to construct a computed tomography (CT) radiomics model to predict programmed cell death-ligand 1 (PD-L1) expression in patients with gastric adenocarcinoma. METHODS A total of 169 patients with gastric adenocarcinoma were studied retrospectively and randomly divided into training and testing datasets, and their clinical data were recorded. Radiomics features were extracted to construct a radiomics model, and the random forest-based Boruta algorithm was used to screen the features of the training dataset. A receiver operating characteristic (ROC) curve was used to evaluate the predictive performance of the model. RESULTS Four radiomics features were selected to construct the radiomics model. The radiomics signature showed good efficacy in predicting PD-L1 expression, with an area under the ROC curve (AUC) of 0.786 (p < 0.001), a sensitivity of 0.681, and a specificity of 0.826. The model achieved AUCs of 0.786 in the training dataset and 0.774 in the testing dataset. Its calibration curves showed good calibration in both datasets, and its net clinical benefit was high. CONCLUSION CT radiomics has important value in predicting the expression of PD-L1 in patients with gastric adenocarcinoma.
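The AUC, sensitivity, and specificity reported above can be illustrated with a minimal, self-contained sketch; this is not the authors' code, and the labels, scores, and threshold below are invented for illustration.

```python
# Hedged toy sketch of ROC-style evaluation (not the study's pipeline).

def roc_auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney) formulation: the probability that
    a randomly chosen positive case scores above a random negative case."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5   # ties count half
    return wins / (len(pos) * len(neg))

def sensitivity_specificity(labels, scores, threshold):
    """Sensitivity and specificity at one operating point, as reported
    alongside the AUC in the abstract."""
    tp = sum(1 for l, s in zip(labels, scores) if l == 1 and s >= threshold)
    fn = sum(1 for l, s in zip(labels, scores) if l == 1 and s < threshold)
    tn = sum(1 for l, s in zip(labels, scores) if l == 0 and s < threshold)
    fp = sum(1 for l, s in zip(labels, scores) if l == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)
```

On a toy set such as `labels = [1, 1, 0, 0]` with `scores = [0.9, 0.4, 0.6, 0.1]`, `roc_auc` returns 0.75, and the threshold 0.5 gives sensitivity 0.5 and specificity 0.5.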
2
Aruna Kumar SV, Yaghoubi E, Proença H. A Fuzzy Consensus Clustering Algorithm for MRI Brain Tissue Segmentation. Applied Sciences 2022; 12:7385. [DOI: 10.3390/app12157385]
Abstract
Brain tissue segmentation is an important component of the clinical diagnosis of brain diseases using multi-modal magnetic resonance imaging (MRI). Many unsupervised methods have been developed for brain tissue segmentation in the literature; the most commonly used are K-Means, Expectation-Maximization, and Fuzzy Clustering. Fuzzy clustering methods offer considerable benefits compared with the aforementioned methods, as they can handle brain images that are complex, largely uncertain, and imprecise. However, this approach suffers from the intrinsic noise and intensity inhomogeneity (IIH) introduced by the acquisition process. To resolve these issues, we propose a fuzzy consensus clustering algorithm that defines a membership function resulting from a voting schema to cluster the pixels. In particular, we first pre-process the MRI data and employ several segmentation techniques based on traditional fuzzy sets and intuitionistic sets. Then, we adopt a voting schema to fuse the results of the applied clustering methods. Finally, to evaluate the proposed method, we use well-known performance measures (boundary, overlap, and volume measures) on two publicly available datasets (OASIS and IBSR18). The experimental results show the superior performance of the proposed method in comparison with the recent state of the art. The performance of the proposed method is also demonstrated on a real-world Autism Spectrum Disorder detection problem, with better accuracy than other existing methods.
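The voting step described above can be sketched minimally. This is not the authors' implementation: a real consensus method must first align cluster identities across base clusterings, whereas this toy assumes the base label maps already use corresponding labels.

```python
# Hedged toy sketch of consensus voting over several base clusterings.
from collections import Counter

def consensus_vote(clusterings):
    """clusterings: list of equal-length label lists, one per base method.
    Returns (consensus_labels, memberships), where memberships[i][k] is the
    fraction of methods voting label k for pixel i - a membership function
    induced by the voting schema."""
    n_methods = len(clusterings)
    consensus, memberships = [], []
    for i in range(len(clusterings[0])):
        votes = Counter(c[i] for c in clusterings)   # tally per-pixel votes
        label, _ = votes.most_common(1)[0]           # majority label wins
        consensus.append(label)
        memberships.append({k: v / n_methods for k, v in votes.items()})
    return consensus, memberships
```

For three base clusterings `[[0,0,1],[0,1,1],[0,0,1]]`, the consensus is `[0,0,1]` and the fuzzy membership of pixel 1 in label 0 is 2/3.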
3
Abstract
The application of artificial intelligence (AI) technology to medical imaging has resulted in great breakthroughs. Given the unique position of ultrasound (US) in prenatal screening, research on AI in prenatal US has practical significance: applying it to prenatal US diagnosis can improve work efficiency, provide quantitative assessments, standardize measurements, improve diagnostic accuracy, and automate image quality control. This review provides an overview of recent studies that have applied AI technology to prenatal US diagnosis and explains the challenges encountered in these applications.
Affiliation(s)
- Lizhu Chen
- Department of Ultrasound, Shengjing Hospital of China Medical University, Shenyang, China
4
Bouvier C, Souedet N, Levy J, Jan C, You Z, Herard AS, Mergoil G, Rodriguez BH, Clouchoux C, Delzescaux T. Reduced and stable feature sets selection with random forest for neurons segmentation in histological images of macaque brain. Sci Rep 2021; 11:22973. [PMID: 34836996] [PMCID: PMC8626511] [DOI: 10.1038/s41598-021-02344-6]
Abstract
In preclinical research, histology images are produced using powerful optical microscopes to digitize entire sections at cell scale. Quantification of stained tissue relies on machine-learning-driven segmentation. However, such methods require additional information, or features, which increases the quantity of data to process. As a result, the number of features to handle is a drawback for processing large series or massive histological images rapidly and robustly. Existing feature selection methods can reduce the amount of required information, but the selected subsets lack reproducibility. We propose a novel methodology operating on high performance computing (HPC) infrastructures and aiming at finding small and stable sets of features for fast and robust segmentation of high-resolution histological images. The selection has two steps: (1) selection at the scale of feature families (an intermediate pool of features, between feature spaces and individual features) and (2) feature selection performed on the pre-selected feature families. We show that the selected sets of features are stable for two different neuron stainings. To test different configurations, one of these datasets is mono-subject and the other is multi-subject. Furthermore, the feature selection results in a significant reduction of computation time and memory cost. This methodology will allow exhaustive histological studies at high resolution on HPC infrastructures for both preclinical and clinical research.
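The stability idea above — keeping only features that survive repeated selection — can be sketched in a few lines. This is a hedged toy, not the paper's HPC pipeline: the scoring function (absolute class-mean difference) stands in for the random-forest importance used in the paper, and the data in the usage note are invented.

```python
# Hedged toy sketch of stable feature selection via bootstrap intersection.
import random

def score_feature(X, y, j):
    """Absolute difference of per-class means for feature j (toy importance)."""
    a = [row[j] for row, l in zip(X, y) if l == 1]
    b = [row[j] for row, l in zip(X, y) if l == 0]
    return abs(sum(a) / len(a) - sum(b) / len(b))

def stable_select(X, y, k, n_runs=20, seed=0):
    """Re-run top-k selection on bootstrap resamples and keep only the
    features chosen in every run, so the final subset is reproducible."""
    rng = random.Random(seed)
    n = len(X)
    kept = None
    for _ in range(n_runs):
        idx = [rng.randrange(n) for _ in range(n)]   # bootstrap resample
        Xb, yb = [X[i] for i in idx], [y[i] for i in idx]
        if len(set(yb)) < 2:                         # skip single-class draws
            continue
        scores = [(score_feature(Xb, yb, j), j) for j in range(len(X[0]))]
        top = {j for _, j in sorted(scores, reverse=True)[:k]}
        kept = top if kept is None else kept & top   # intersect across runs
    return sorted(kept)
```

On a toy dataset where feature 0 separates the classes and feature 1 is constant, every resample selects feature 0, so the stable set is `[0]`.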
Affiliation(s)
- C Bouvier
- CEA, CNRS, MIRCen, Laboratoire Des Maladies Neurodégénératives, Université Paris-Saclay, Fontenay-aux-Roses, France
- Witsee, Paris, France
- N Souedet
- CEA, CNRS, MIRCen, Laboratoire Des Maladies Neurodégénératives, Université Paris-Saclay, Fontenay-aux-Roses, France
- J Levy
- Service de Médecine Physique Et de Réadaptation - APHP Hôpital Raymond Poincaré, Garches, France
- UMR 1179, Handicap Neuromusculaire - INSERM-UVSQ, Montigny le Bretonneux, France
- C Jan
- CEA, CNRS, MIRCen, Laboratoire Des Maladies Neurodégénératives, Université Paris-Saclay, Fontenay-aux-Roses, France
- Z You
- CEA, CNRS, MIRCen, Laboratoire Des Maladies Neurodégénératives, Université Paris-Saclay, Fontenay-aux-Roses, France
- Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, China
- A-S Herard
- CEA, CNRS, MIRCen, Laboratoire Des Maladies Neurodégénératives, Université Paris-Saclay, Fontenay-aux-Roses, France
- C Clouchoux
- CEA, CNRS, MIRCen, Laboratoire Des Maladies Neurodégénératives, Université Paris-Saclay, Fontenay-aux-Roses, France
- Witsee, Paris, France
- T Delzescaux
- CEA, CNRS, MIRCen, Laboratoire Des Maladies Neurodégénératives, Université Paris-Saclay, Fontenay-aux-Roses, France
5
Ghosal P, Chowdhury T, Kumar A, Bhadra AK, Chakraborty J, Nandi D. MhURI: A Supervised Segmentation Approach to Leverage Salient Brain Tissues in Magnetic Resonance Images. Comput Methods Programs Biomed 2021; 200:105841. [PMID: 33221057] [PMCID: PMC9096474] [DOI: 10.1016/j.cmpb.2020.105841]
Abstract
BACKGROUND AND OBJECTIVES Accurate segmentation of critical tissues from a brain MRI is pivotal for characterization and quantitative pattern analysis of the human brain, and thereby for identifying the earliest signs of various neurodegenerative diseases. To date, in most cases, this is done manually by radiologists. The overwhelming workload in some densely populated nations may cause exhaustion and interruptions for doctors, which may pose a continuing threat to patient safety. A novel fusion method, called U-Net inception, based on 3D convolutions and transition layers is proposed to address this issue. METHODS A 3D deep learning method called Multi-headed U-Net with Residual Inception (MhURI), accompanied by a morphological gradient channel, is proposed for brain tissue segmentation; it incorporates the Residual Inception 2-Residual (RI2R) module as the basic building block. The model exploits the benefits of morphological pre-processing for structural enhancement of MR images. A multi-path data encoding pipeline is introduced on top of the U-Net backbone, which encapsulates initial global features and captures the information from each MRI modality. RESULTS The proposed model achieves encouraging outcomes on established quality metrics compared with state-of-the-art methods when evaluated on two popular publicly available datasets. CONCLUSION The model is entirely automatic and able to segment gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) from brain MRI effectively and with sufficient accuracy. Hence, it may be considered a potential computer-aided diagnostic (CAD) tool for radiologists and other medical practitioners in their clinical diagnosis workflow.
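Segmentation results like the GM/WM/CSF maps above are commonly scored with the Dice similarity coefficient. The sketch below is a generic illustration, not the paper's evaluation code; the label encoding (1=GM, 2=WM, 3=CSF) and the flat-list image representation are assumptions for illustration.

```python
# Hedged sketch of per-class Dice evaluation for tissue segmentation.

def dice(pred, truth, label):
    """Dice = 2|A∩B| / (|A| + |B|) for one tissue label."""
    a = [p == label for p in pred]
    b = [t == label for t in truth]
    inter = sum(1 for x, y in zip(a, b) if x and y)
    size = sum(a) + sum(b)
    return 1.0 if size == 0 else 2.0 * inter / size

def dice_per_class(pred, truth, labels=(1, 2, 3)):
    """One Dice score per tissue class (assumed: 1=GM, 2=WM, 3=CSF)."""
    return {l: dice(pred, truth, l) for l in labels}
```

For `pred = [1,1,2,3]` against `truth = [1,2,2,3]`, GM and WM each score 2/3 and CSF scores 1.0.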
Affiliation(s)
- Palash Ghosal
- Department of Computer Science and Engineering, National Institute of Technology Durgapur-713209, West Bengal, India
- Tamal Chowdhury
- Department of Electronics and Communication Engineering, National Institute of Technology Durgapur-713209, West Bengal, India
- Amish Kumar
- Department of Computer Science and Engineering, National Institute of Technology Durgapur-713209, West Bengal, India
- Ashok Kumar Bhadra
- Department of Radiology, KPC Medical College and Hospital, Jadavpur, 700032, West Bengal, India
- Jayasree Chakraborty
- Department of Hepatopancreatobiliary Service, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Debashis Nandi
- Department of Computer Science and Engineering, National Institute of Technology Durgapur-713209, West Bengal, India
6
Abstract
Accurate measurement of fetal biometrics in ultrasound at different trimesters is essential in assisting clinicians to conduct pregnancy diagnosis. However, the accuracy of manual segmentation for measurement is highly user-dependent. Here, we design a general framework for automatically segmenting fetal anatomical structures in two-dimensional (2D) ultrasound (US) images and thus make objective biometric measurements available. We first introduce structured random forests (SRFs) as the core discriminative predictor to recognize the region of fetal anatomical structures with a primary classification map. The patch-wise joint labeling presented by SRFs has inherent advantages in identifying ambiguous/fuzzy boundaries and reconstructing incomplete anatomical boundaries in US. Then, to get a more accurate and smooth classification map, a scale-aware auto-context model is injected to enhance the contour details of the classification map from various visual levels. The final segmentation can be obtained from the converged classification map with thresholding. Our framework is validated on two important biometric measurements: fetal head circumference (HC) and abdominal circumference (AC). The final results illustrate that our proposed method outperforms state-of-the-art methods in terms of segmentation accuracy.
Affiliation(s)
- Xin Yang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, China
- Haoming Li
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, China
- Li Liu
- Department of Electronic Engineering, the Chinese University of Hong Kong, Hong Kong, China
- Dong Ni
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, China
7
8
Sun L, Ma W, Ding X, Huang Y, Liang D, Paisley J. A 3D Spatially Weighted Network for Segmentation of Brain Tissue From MRI. IEEE Trans Med Imaging 2020; 39:898-909. [PMID: 31449009] [DOI: 10.1109/tmi.2019.2937271]
Abstract
The segmentation of brain tissue in MRI is valuable for extracting brain structure to aid diagnosis, treatment, and tracking of the progression of different neurologic diseases. Medical image data are volumetric, and some neural network models for medical image segmentation have addressed this using a 3D convolutional architecture. However, this volumetric spatial information has not been fully exploited to enhance the representative ability of deep networks, and these networks have not fully addressed the practical issues facing the analysis of multimodal MRI data. In this paper, we propose a spatially weighted 3D network (SW-3D-UNet) for brain tissue segmentation of single-modality MRI and extend it to multimodality MRI data. We validate our model on the MRBrainS13 and MALC12 datasets. At the time of submission, the model ranked first on the leaderboard of the MRBrainS13 challenge.
9
Rose RA, Annadhason A. GHT based automatic kidney image segmentation using modified AAM and GBDT. Health Technol 2020; 10:353-362. [DOI: 10.1007/s12553-019-00297-5]
10
Lei Y, Shu HK, Tian S, Wang T, Liu T, Mao H, Shim H, Curran WJ, Yang X. Pseudo CT Estimation using Patch-based Joint Dictionary Learning. Annu Int Conf IEEE Eng Med Biol Soc 2019; 2018:5150-5153. [PMID: 30441499] [DOI: 10.1109/embc.2018.8513475]
Abstract
Magnetic resonance (MR) simulators have recently gained popularity because they avoid the unnecessary radiation exposure associated with computed tomography (CT) when used for radiation therapy planning. We propose a method for pseudo CT estimation from MR images based on joint dictionary learning. Patient-specific anatomical features were extracted from the aligned training images and adopted as signatures for each voxel. The most relevant and informative features were identified to train the joint dictionary learning-based model. The well-trained dictionary was used to predict the pseudo CT of a new patient. This prediction technique was validated in a clinical study of 12 patients with MR and CT images of the brain. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and normalized cross-correlation (NCC) indices were used to quantify the prediction accuracy. We compared our proposed method with a state-of-the-art dictionary learning method; overall, our method significantly improves the prediction accuracy. We have investigated a novel joint dictionary learning-based approach to predict CT images from routine MRIs and demonstrated its reliability. This CT prediction technique could be a useful tool for MRI-based radiation treatment planning or for attenuation correction when quantifying PET images in PET/MR imaging.
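The three accuracy indices named above (MAE, PSNR, NCC) are standard and easy to state precisely. The sketch below is a generic illustration, not the authors' code; images are flat lists of voxel intensities, and the PSNR dynamic range is a caller-supplied assumption.

```python
# Hedged sketch of the MAE / PSNR / NCC indices for pseudo-CT evaluation.
import math

def mae(pred, truth):
    """Mean absolute error between predicted and reference intensities."""
    return sum(abs(p - t) for p, t in zip(pred, truth)) / len(pred)

def psnr(pred, truth, data_range):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    mse = sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(pred)
    return float("inf") if mse == 0 else 10.0 * math.log10(data_range ** 2 / mse)

def ncc(pred, truth):
    """Normalized cross-correlation (Pearson correlation of intensities)."""
    n = len(pred)
    mp, mt = sum(pred) / n, sum(truth) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(pred, truth))
    den = math.sqrt(sum((p - mp) ** 2 for p in pred)
                    * sum((t - mt) ** 2 for t in truth))
    return num / den
```

A perfectly scaled prediction gives NCC = 1.0 even when MAE is non-zero, which is why the three indices are reported together.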
11
Ryou H, Yaqub M, Cavallaro A, Papageorghiou AT, Alison Noble J. Automated 3D ultrasound image analysis for first trimester assessment of fetal health. Phys Med Biol 2019; 64:185010. [PMID: 31408850] [DOI: 10.1088/1361-6560/ab3ad1]
Abstract
The first trimester fetal ultrasound scan is important to confirm fetal viability, to estimate the gestational age of the fetus, and to detect fetal anomalies early in pregnancy. First trimester ultrasound images have a different appearance from second trimester images, reflecting the different stage of fetal development. There is limited literature on automation of image-based assessment for this earlier trimester, and most of it focuses on one specific fetal anatomy. In this paper, we consider automation to support first trimester fetal assessment of multiple fetal anatomies, including both visualization and measurement from a single 3D ultrasound scan. We present a deep learning and image processing solution (i) to perform semantic segmentation of the whole fetus, (ii) to estimate plane orientation for standard biometry views, (iii) to localize and automatically estimate biometry, and (iv) to detect fetal limbs from a 3D first trimester volume. Computational analysis methods were built using a real-world dataset (n = 44 volumes). An evaluation on a further independent clinical dataset (n = 21 volumes) showed that the automated methods approached human expert assessment of a 3D volume.
Affiliation(s)
- Hosuk Ryou
- Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, Oxford, United Kingdom (author to whom correspondence should be addressed)
12
Yang X, Yu L, Li S, Wen H, Luo D, Bian C, Qin J, Ni D, Heng PA. Towards Automated Semantic Segmentation in Prenatal Volumetric Ultrasound. IEEE Trans Med Imaging 2019; 38:180-193. [PMID: 30040635] [DOI: 10.1109/tmi.2018.2858779]
Abstract
Volumetric ultrasound is rapidly emerging as a viable imaging modality for routine prenatal examinations. Biometrics obtained from volumetric segmentation shed light on precise maternal and fetal health monitoring. However, poor image quality, low contrast, boundary ambiguity, and complex anatomical shapes conspire toward a great lack of efficient segmentation tools, making 3-D ultrasound difficult to interpret and hindering its widespread use in obstetrics. In this paper, we address the problem of semantic segmentation in prenatal ultrasound volumes. Our contribution is threefold: 1) we propose the first fully automatic framework to simultaneously segment multiple anatomical structures of intensive clinical interest, including fetus, gestational sac, and placenta, which remains a rarely studied and arduous challenge; 2) we propose a composite architecture for dense labeling, in which a customized 3-D fully convolutional network explores spatial intensity concurrency for initial labeling, while a multi-directional recurrent neural network (RNN) encodes spatial sequentiality to combat boundary ambiguity for significant refinement; and 3) we introduce a hierarchical deep supervision mechanism to boost the information flow within the RNN, fit the latent sequence hierarchy at fine scales, and further improve the segmentation results. Extensively verified on large in-house datasets, our method shows superior segmentation performance, decent agreement with expert measurements, and high reproducibility against scanning variations, and is thus promising for advancing prenatal ultrasound examinations.
13
Xiang D, Chen G, Shi F, Zhu W, Liu Q, Yuan S, Chen X. Automatic Retinal Layer Segmentation of OCT Images With Central Serous Retinopathy. IEEE J Biomed Health Inform 2019; 23:283-295. [DOI: 10.1109/jbhi.2018.2803063]
14
Lei Y, Shu HK, Tian S, Jeong JJ, Liu T, Shim H, Mao H, Wang T, Jani AB, Curran WJ, Yang X. Magnetic resonance imaging-based pseudo computed tomography using anatomic signature and joint dictionary learning. J Med Imaging (Bellingham) 2018; 5:034001. [PMID: 30155512] [DOI: 10.1117/1.jmi.5.3.034001]
Abstract
Magnetic resonance imaging (MRI) provides a number of advantages over computed tomography (CT) for radiation therapy treatment planning; however, MRI lacks the key electron density information necessary for accurate dose calculation. We propose a dictionary-learning-based method to derive electron density information from MRIs. Specifically, we first partition a given MR image into a set of patches, for which we use a joint dictionary learning method to directly predict a CT patch as a structured output. Then a feature selection method is used to ensure prediction robustness. Finally, we combine all the predicted CT patches to obtain the final prediction for the given MR image. This prediction technique was validated for a clinical application using 14 patients with brain MR and CT images. The peak signal-to-noise ratio (PSNR), mean absolute error (MAE), and normalized cross-correlation (NCC) indices, along with the similarity index (SI) for air, soft-tissue, and bone regions, were used to quantify the prediction accuracy. The mean ± std of PSNR, MAE, and NCC were 22.4±1.9 dB, 82.6±26.1 HU, and 0.91±0.03 for the 14 patients. The SIs for air, soft-tissue, and bone regions were 0.98±0.01, 0.88±0.03, and 0.69±0.08. These indices demonstrate the CT prediction accuracy of the proposed learning-based method. This CT image prediction technique could be used as a tool for MRI-based radiation treatment planning, or for PET attenuation correction in a PET/MRI scanner.
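The partition-into-patches / recombine-predictions pipeline above can be sketched in miniature. This is a hedged toy, not the paper's dictionary-learning code: a 1D signal stands in for a volume, the identity mapping stands in for the learned dictionary prediction, and overlapping predictions are fused by averaging (the stride is assumed to cover every position).

```python
# Hedged toy sketch of patch extraction and overlap-average recombination.

def extract_patches(signal, size, stride):
    """Slide a window of `size` over the signal; return (start, patch) pairs."""
    return [(i, signal[i:i + size])
            for i in range(0, len(signal) - size + 1, stride)]

def reconstruct(patches, length):
    """Recombine (possibly overlapping) predicted patches by averaging
    every prediction that covers each position."""
    acc = [0.0] * length
    cnt = [0] * length
    for start, patch in patches:
        for k, v in enumerate(patch):
            acc[start + k] += v
            cnt[start + k] += 1
    return [a / c for a, c in zip(acc, cnt)]
```

With the identity as the per-patch "prediction", extracting overlapping patches and recombining them recovers the original signal exactly, which is a useful sanity check before plugging in a learned patch predictor.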
Affiliation(s)
- Yang Lei
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Hui-Kuo Shu
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Sibo Tian
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Jiwoong Jason Jeong
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Tian Liu
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Hyunsuk Shim
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Emory University, Winship Cancer Institute, Department of Radiology and Imaging Sciences, Atlanta, Georgia, United States
- Hui Mao
- Emory University, Winship Cancer Institute, Department of Radiology and Imaging Sciences, Atlanta, Georgia, United States
- Tonghe Wang
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Ashesh B Jani
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Walter J Curran
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Xiaofeng Yang
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
15
Yaqub M, Kelly B, Papageorghiou AT, Noble JA. A Deep Learning Solution for Automatic Fetal Neurosonographic Diagnostic Plane Verification Using Clinical Standard Constraints. Ultrasound Med Biol 2017; 43:2925-2933. [PMID: 28958729] [DOI: 10.1016/j.ultrasmedbio.2017.07.013]
Abstract
During routine ultrasound assessment of the fetal brain for biometry estimation and detection of fetal abnormalities, accurate imaging planes must be found by sonologists following a well-defined imaging protocol or clinical standard, which can be difficult for non-experts to do well. Meeting this standard helps provide accurate biometry estimation and the detection of possible brain abnormalities. We describe a machine-learning method to automatically assess whether transventricular ultrasound images of the fetal brain have been correctly acquired and meet the required clinical standard. We propose a deep learning solution that breaks the problem down into three stages: (i) accurate localization of the fetal brain, (ii) detection of regions that contain structures of interest, and (iii) learning the acoustic patterns in those regions that enable plane verification. We evaluate the developed methodology on a large real-world clinical dataset of 2-D mid-gestation fetal images and show that the automatic verification method approaches human expert assessment.
Affiliation(s)
- Mohammad Yaqub
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Brenda Kelly
- Nuffield Department of Obstetrics & Gynaecology, University of Oxford, Oxford, UK
- Aris T Papageorghiou
- Nuffield Department of Obstetrics & Gynaecology, University of Oxford, Oxford, UK
- J Alison Noble
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
16
Jin C, Shi F, Xiang D, Zhang L, Chen X. Fast segmentation of kidney components using random forests and ferns. Med Phys 2017; 44:6353-6363. [DOI: 10.1002/mp.12594]
Affiliation(s)
- Chao Jin
- School of Electronic and Information Engineering, Soochow University, Suzhou 215000, China
- Fei Shi
- School of Electronic and Information Engineering, Soochow University, Suzhou 215000, China
- Dehui Xiang
- School of Electronic and Information Engineering, Soochow University, Suzhou 215000, China
- Lichun Zhang
- School of Electronic and Information Engineering, Soochow University, Suzhou 215000, China
- Xinjian Chen
- School of Electronic and Information Engineering, Soochow University, Suzhou 215000, China
17
Vishnuvarthanan A, Rajasekaran MP, Govindaraj V, Zhang Y, Thiyagarajan A. An automated hybrid approach using clustering and nature inspired optimization technique for improved tumor and tissue segmentation in magnetic resonance brain images. Appl Soft Comput 2017. [DOI: 10.1016/j.asoc.2017.04.023]
18
Pereira S, Pinto A, Oliveira J, Mendrik AM, Correia JH, Silva CA. Automatic brain tissue segmentation in MR images using Random Forests and Conditional Random Fields. J Neurosci Methods 2016; 270:111-123. [DOI: 10.1016/j.jneumeth.2016.06.017]
19
Yaqub M, Rueda S, Kopuri A, Melo P, Papageorghiou AT, Sullivan PB, McCormick K, Noble JA. Plane Localization in 3-D Fetal Neurosonography for Longitudinal Analysis of the Developing Brain. IEEE J Biomed Health Inform 2016; 20:1120-8. [DOI: 10.1109/jbhi.2015.2435651]
20
Kang J, Gao Y, Shi F, Lalush DS, Lin W, Shen D. Prediction of standard-dose brain PET image by using MRI and low-dose brain [18F]FDG PET images. Med Phys 2016; 42:5301-9. [PMID: 26328979] [DOI: 10.1118/1.4928400]
Abstract
PURPOSE Positron emission tomography (PET) is a nuclear medical imaging technology that produces 3D images reflecting tissue metabolic activity in human body. PET has been widely used in various clinical applications, such as in diagnosis of brain disorders. High-quality PET images play an essential role in diagnosing brain diseases/disorders. In practice, in order to obtain high-quality PET images, a standard-dose radionuclide (tracer) needs to be used and injected into a living body. As a result, it will inevitably increase the patient's exposure to radiation. One solution to solve this problem is predicting standard-dose PET images using low-dose PET images. As yet, no previous studies with this approach have been reported. Accordingly, in this paper, the authors propose a regression forest based framework for predicting a standard-dose brain [(18)F]FDG PET image by using a low-dose brain [(18)F]FDG PET image and its corresponding magnetic resonance imaging (MRI) image. METHODS The authors employ a regression forest for predicting the standard-dose brain [(18)F]FDG PET image by low-dose brain [(18)F]FDG PET and MRI images. Specifically, the proposed method consists of two main steps. First, based on the segmented brain tissues (i.e., cerebrospinal fluid, gray matter, and white matter) in the MRI image, the authors extract features for each patch in the brain image from both low-dose PET and MRI images to build tissue-specific models that can be used to initially predict standard-dose brain [(18)F]FDG PET images. Second, an iterative refinement strategy, via estimating the predicted image difference, is used to further improve the prediction accuracy. RESULTS The authors evaluated their algorithm on a brain dataset, consisting of 11 subjects with MRI, low-dose PET, and standard-dose PET images, using leave-one-out cross-validations. 
The proposed algorithm gives promising results, producing a well-estimated standard-dose brain [(18)F]FDG PET image with substantially enhanced image quality compared with the low-dose brain [(18)F]FDG PET image. CONCLUSIONS In this paper, the authors propose a framework for generating a standard-dose brain [(18)F]FDG PET image from low-dose brain [(18)F]FDG PET and MRI images. Both the visual and quantitative results indicate that the standard-dose brain [(18)F]FDG PET image can be well predicted from MRI and low-dose brain [(18)F]FDG PET images.
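The patch-based prediction step can be sketched with scikit-learn's random forest regressor. Everything below is a synthetic stand-in: the volumes, the patch size, the assumed low-dose/standard-dose relation, and the use of a single forest instead of tissue-specific models are all invented for illustration, and the iterative refinement step is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def extract_patch_features(low_pet, mri, centers, half=2):
    """Concatenate cubic patches from the low-dose PET and MRI around each voxel."""
    feats = []
    for z, y, x in centers:
        lp = low_pet[z - half:z + half + 1, y - half:y + half + 1, x - half:x + half + 1]
        mp = mri[z - half:z + half + 1, y - half:y + half + 1, x - half:x + half + 1]
        feats.append(np.concatenate([lp.ravel(), mp.ravel()]))
    return np.asarray(feats)

rng = np.random.default_rng(0)
# Synthetic co-registered volumes standing in for MRI, standard-dose and low-dose PET.
mri = rng.random((16, 16, 16))
std_pet = 2.0 * mri + 0.5                                 # assumed relation, illustration only
low_pet = std_pet + rng.normal(0.0, 0.3, std_pet.shape)   # noisy "low-dose" version

centers = [(z, y, x) for z in range(2, 14) for y in range(2, 14) for x in range(2, 14)]
X = extract_patch_features(low_pet, mri, centers)
y = np.array([std_pet[c] for c in centers])

# The paper trains one model per tissue class; a single forest is used here.
forest = RandomForestRegressor(n_estimators=50, random_state=0).fit(X[::2], y[::2])
pred = forest.predict(X[1::2])
rmse_pred = float(np.sqrt(np.mean((pred - y[1::2]) ** 2)))
rmse_low = float(np.sqrt(np.mean((np.array([low_pet[c] for c in centers])[1::2] - y[1::2]) ** 2)))
print(rmse_pred < rmse_low)   # the learned estimate should beat the raw low-dose intensity
```

In the paper the residual between the prediction and the standard-dose target is itself predicted and added back iteratively; here a single pass suffices to show the regression setup.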
Affiliation(s)
- Jiayin Kang: School of Electronics Engineering, Huaihai Institute of Technology, Lianyungang, Jiangsu 222005, China; IDEA Laboratory, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599
- Yaozong Gao: IDEA Laboratory, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599; Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599
- Feng Shi: IDEA Laboratory, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599
- David S Lalush: Joint UNC-NCSU Department of Biomedical Engineering, North Carolina State University, Raleigh, North Carolina 27695
- Weili Lin: MRI Laboratory, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599
- Dinggang Shen: IDEA Laboratory, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599; Department of Brain and Cognitive Engineering, Korea University, Seoul 136-713, South Korea
21
Jin C, Shi F, Xiang D, Jiang X, Zhang B, Wang X, Zhu W, Gao E, Chen X. 3D Fast Automatic Segmentation of Kidney Based on Modified AAM and Random Forest. IEEE Trans Med Imaging 2016; 35:1395-407. [PMID: 26742124] [DOI: 10.1109/tmi.2015.2512606]
Abstract
In this paper, a fully automatic method is proposed to segment the kidney into multiple components (renal cortex, renal column, renal medulla, and renal pelvis) in clinical 3D abdominal CT images. The proposed fast automatic segmentation method consists of two main parts: localization of the renal cortex and segmentation of the kidney components. In the localization phase, a method that combines the 3D Generalized Hough Transform (GHT) and 3D Active Appearance Models (AAM) is applied to localize the renal cortex. In the segmentation phase, a modified Random Forests (RF) method is proposed to segment the kidney into four components based on the result of the localization phase. During implementation, multithreading is applied to speed up the segmentation process. The proposed method was evaluated on a clinical abdominal CT dataset of 37 contrast-enhanced volumes using a leave-one-out strategy. The true-positive and false-positive volume fractions were 93.15% and 0.37% for renal cortex segmentation; 83.09% and 0.97% for renal column segmentation; 81.92% and 0.55% for renal medulla segmentation; and 80.28% and 0.30% for renal pelvis segmentation, respectively. Segmenting the kidney into four components took 20 seconds on average.
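The voxel-classification part of such a pipeline can be illustrated with a plain random forest. The per-component intensity statistics, feature design, and four-way labels below are invented for the sketch; the paper's modified RF, the GHT/AAM localization stage, and the multithreading are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Synthetic stand-in for voxels of the four components (cortex, column,
# medulla, pelvis) inside an already-localized kidney region of interest.
n = 400
labels = rng.integers(0, 4, n)
means = np.array([100.0, 140.0, 180.0, 220.0])   # invented per-component intensity means
intensity = means[labels] + rng.normal(0.0, 10.0, n)
# Simple "contextual" features: the voxel intensity plus two jittered probes.
X = np.column_stack([intensity,
                     intensity + rng.normal(0.0, 5.0, n),
                     intensity + rng.normal(0.0, 5.0, n)])

# Train on 300 voxels, evaluate four-way component classification on the rest.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:300], labels[:300])
acc = clf.score(X[300:], labels[300:])
print(acc > 0.8)
```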
22
Wang K, Ma C. A robust statistics driven volume-scalable active contour for segmenting anatomical structures in volumetric medical images with complex conditions. Biomed Eng Online 2016; 15:39. [PMID: 27074891] [PMCID: PMC4831199] [DOI: 10.1186/s12938-016-0153-6]
Abstract
BACKGROUND Accurate segmentation of anatomical structures in medical images is a critical step in the development of computer-assisted intervention systems. However, complex image conditions, such as intensity inhomogeneity, noise, and weak object boundaries, often cause considerable difficulties in medical image segmentation. To cope with these difficulties, we propose a novel robust statistics driven volume-scalable active contour framework to extract the desired object boundary from magnetic resonance (MR) and computed tomography (CT) imagery in 3D. METHODS We define an energy functional in terms of the initial seeded labels and two fitting functions derived from the object's local robust statistics features. This energy is then incorporated into a level set scheme that drives the active contour to evolve and converge at the desired object boundary. Owing to the local robust statistics and the volume scaling function in the energy fitting term, the object features in local volumes are learned adaptively to guide the motion of the contours, which guarantees the method's ability to cope with intensity inhomogeneity, noise, and weak boundaries. In addition, initialization of the active contour is simplified to selecting several seeds in the object and/or background, which eliminates the sensitivity to initialization. RESULTS The proposed method was applied to extensive publicly available volumetric medical images with challenging image conditions. The segmentation results for various anatomical structures, such as white matter (WM), atrium, caudate nucleus, and brain tumor, were evaluated quantitatively by comparison with the corresponding ground truths.
The proposed method achieved consistent and coherent segmentation accuracy, with Dice similarity coefficients for the overlap between the algorithm result and the ground truth of 0.9246 ± 0.0068 for WM, 0.9043 ± 0.0131 for liver tumors, 0.8725 ± 0.0374 for caudate nucleus, and 0.8802 ± 0.0595 for brain tumors, among others. Further comparative experiments showed that the proposed method performs favorably against several well-known segmentation methods in terms of accuracy and robustness. CONCLUSION We proposed an approach to accurately segment volumetric medical images with complex conditions. The segmentation accuracy and the robustness to noise and contour initialization were validated on extensive MR and CT volumes.
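A minimal sketch of the robust-statistics fitting idea, assuming a synthetic 2D image and seed regions: median/MAD fitting functions and MAD-normalised distances stand in for the paper's energy, and no level-set evolution or volume scaling is performed.

```python
import numpy as np

def robust_stats(values):
    """Median and MAD, robust analogues of mean and standard deviation."""
    med = np.median(values)
    mad = np.median(np.abs(values - med)) + 1e-6
    return med, mad

rng = np.random.default_rng(2)
# Synthetic 2D slice: a bright object on a noisy, inhomogeneous background.
img = rng.normal(50.0, 8.0, (64, 64))
img[20:44, 20:44] += 60.0
img += np.linspace(0.0, 15.0, 64)[None, :]     # intensity inhomogeneity

# Seeds are small user-selected regions inside the object and the background.
obj_med, obj_mad = robust_stats(img[28:36, 28:36])
bg_med, bg_mad = robust_stats(np.concatenate([img[:8, :8].ravel(),
                                              img[56:, 56:].ravel()]))

# Label each pixel by its robust (MAD-normalised) distance to the two fitting
# functions; the paper evolves a level-set contour on such an energy instead.
mask = np.abs(img - obj_med) / obj_mad < np.abs(img - bg_med) / bg_mad

truth = np.zeros((64, 64), dtype=bool)
truth[20:44, 20:44] = True
dice = 2 * (mask & truth).sum() / (mask.sum() + truth.sum())
print(dice > 0.9)
```

Because the median and MAD are insensitive to outliers, the two fitting functions stay stable under the added noise and the slow intensity drift, which is the property the paper exploits.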
Affiliation(s)
- Kuanquan Wang: School of Computer Science and Technology, Biocomputing Research Center, Harbin Institute of Technology, Harbin, China
- Chao Ma: School of Computer Science and Technology, Biocomputing Research Center, Harbin Institute of Technology, Harbin, China
23
Ryou H, Yaqub M, Cavallaro A, Roseman F, Papageorghiou A, Noble JA. Automated 3D Ultrasound Biometry Planes Extraction for First Trimester Fetal Assessment. Machine Learning in Medical Imaging 2016. [DOI: 10.1007/978-3-319-47157-0_24]
24
Rueda S, Knight CL, Papageorghiou AT, Noble JA. Feature-based fuzzy connectedness segmentation of ultrasound images with an object completion step. Med Image Anal 2015; 26:30-46. [PMID: 26319973] [PMCID: PMC4686006] [DOI: 10.1016/j.media.2015.07.002]
Abstract
Medical ultrasound (US) image segmentation and quantification can be challenging due to signal dropout, missing boundaries, and the presence of speckle, which can give images of similar objects quite different appearances. Typically, purely intensity-based methods do not lead to a good segmentation of the structures of interest. Prior work has shown that local phase and feature asymmetry, derived from the monogenic signal, extract structural information from US images. This paper proposes a new US segmentation approach based on the fuzzy connectedness framework. The approach uses local phase and feature asymmetry to define a novel affinity function, which drives the segmentation algorithm, incorporates a shape-based object completion step, and regularises the result by mean curvature flow. To assess the accuracy and robustness of the methodology across clinical data of varying appearance and quality, a novel entropy-based quantitative image quality assessment of the different regions of interest is introduced. The new method is applied to 81 US images of the fetal arm acquired at multiple gestational ages, as a means to define a new automated image-based biomarker of fetal nutrition. Quantitative and qualitative evaluation shows that the segmentation method is comparable to manual delineation and robust across image qualities typical of clinical practice.
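The fuzzy connectedness core (max-min connectedness from a seed, computed by Dijkstra-style propagation) can be sketched as follows. A simple intensity-difference affinity replaces the paper's phase/feature-asymmetry affinity, and the object completion and curvature regularisation steps are omitted; the image and thresholds are invented.

```python
import heapq
import numpy as np

def fuzzy_connectedness(img, seed, sigma=10.0):
    """Max-min fuzzy connectedness map from a seed pixel.

    The connectedness of a pixel is the best path from the seed, scored by
    the weakest affinity along the path. Affinity here is a Gaussian of the
    intensity difference; the paper derives it from local phase features.
    """
    h, w = img.shape
    conn = np.zeros((h, w))
    conn[seed] = 1.0
    heap = [(-1.0, seed)]
    while heap:
        c, (y, x) = heapq.heappop(heap)
        c = -c
        if c < conn[y, x]:
            continue                      # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                aff = np.exp(-((img[y, x] - img[ny, nx]) ** 2) / (2 * sigma ** 2))
                nc = min(c, aff)          # path strength = weakest link
                if nc > conn[ny, nx]:
                    conn[ny, nx] = nc
                    heapq.heappush(heap, (-nc, (ny, nx)))
    return conn

rng = np.random.default_rng(3)
img = rng.normal(40.0, 3.0, (32, 32))
img[8:24, 8:24] += 50.0                   # bright object containing the seed
conn = fuzzy_connectedness(img, (16, 16))
mask = conn > 0.5

truth = np.zeros((32, 32), dtype=bool)
truth[8:24, 8:24] = True
dice = 2 * (mask & truth).sum() / (mask.sum() + truth.sum())
print(dice > 0.8)
```

The strong intensity step at the object boundary gives near-zero affinity, so every path out of the object is weak and the thresholded connectedness map recovers the object.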
Affiliation(s)
- Sylvia Rueda: Centre of Excellence in Personalised Healthcare, Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Old Road Campus Research Building, Headington, OX3 7DQ Oxford, UK
- Caroline L Knight: Centre of Excellence in Personalised Healthcare, Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Old Road Campus Research Building, Headington, OX3 7DQ Oxford, UK; Nuffield Department of Obstetrics & Gynaecology, University of Oxford, Oxford, UK
- Aris T Papageorghiou: Nuffield Department of Obstetrics & Gynaecology, University of Oxford, Oxford, UK; Oxford Maternal & Perinatal Health Institute, Green Templeton College, University of Oxford, Oxford, UK
- J Alison Noble: Centre of Excellence in Personalised Healthcare, Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Old Road Campus Research Building, Headington, OX3 7DQ Oxford, UK
25
Liu Y, Dawant BM. Automatic localization of the anterior commissure, posterior commissure, and midsagittal plane in MRI scans using regression forests. IEEE J Biomed Health Inform 2015; 19:1362-74. [PMID: 25955855] [PMCID: PMC4519399] [DOI: 10.1109/jbhi.2015.2428672]
Abstract
Localizing the anterior and posterior commissures (AC/PC) and the midsagittal plane (MSP) is crucial in stereotactic and functional neurosurgery, human brain mapping, and medical image processing. We present a learning-based method for automatic and efficient localization of these landmarks and the plane using regression forests. Given a point in an image, we first extract a set of multiscale long-range contextual features. We then build random forest models to learn a nonlinear relationship between these features and the probability of the point being a landmark or lying in the plane. Three-stage coarse-to-fine models are trained for the AC, PC, and MSP separately, using images downsampled by a factor of 4, images downsampled by a factor of 2, and the original images. Localization is performed hierarchically, starting with a rough estimate that is progressively refined. We evaluate our method using a leave-one-out approach on 100 clinical T1-weighted images and compare it to state-of-the-art methods, including an atlas-based approach with six nonrigid registration algorithms and a model-based approach for the AC and PC, and a global symmetry-based approach for the MSP. Our method yields an overall error of 0.55 ± 0.30 mm for the AC, 0.56 ± 0.28 mm for the PC, and, for the MSP, 1.08° ± 0.66° in the plane's normal direction and 1.22 ± 0.73 voxels in average distance; it performs significantly better than four of the registration algorithms and the model-based method for the AC and PC, and than the global symmetry-based method for the MSP. We also evaluate the sensitivity of our method to image quality and parameter values, and show that it is robust to asymmetry, noise, and rotation. Computation time is 25 s.
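The regression-forest localization idea can be sketched in 2D: sampled points predict their displacement to the landmark from contextual intensity probes, and the votes are averaged. The blob image, probe offsets, and error threshold below are all invented for illustration; the paper's multiscale features, coarse-to-fine stages, and probability formulation are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
SIZE = 32

def make_image(cy, cx):
    """A smooth blob whose peak plays the role of the landmark."""
    yy, xx = np.mgrid[:SIZE, :SIZE]
    return np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 200.0)

def probes(img, py, px):
    """Contextual intensity probes around a point (offsets invented for the sketch)."""
    def at(y, x):
        return img[min(max(y, 0), SIZE - 1), min(max(x, 0), SIZE - 1)]
    return [at(py, px),
            at(py + 3, px), at(py - 3, px), at(py, px + 3), at(py, px - 3),
            at(py + 9, px), at(py - 9, px), at(py, px + 9), at(py, px - 9)]

X, Y = [], []
for _ in range(80):                                # training "images"
    cy, cx = rng.uniform(8, 24, 2)
    img = make_image(cy, cx)
    for _ in range(40):                            # sampled points per image
        py, px = rng.integers(0, SIZE, 2)
        X.append(probes(img, py, px))
        Y.append([cy - py, cx - px])               # displacement to the landmark

forest = RandomForestRegressor(n_estimators=80, random_state=0).fit(X, Y)

# At test time, sampled points vote for the landmark; the mean vote is the estimate.
cy, cx = 18.3, 13.6
img = make_image(cy, cx)
votes = []
for _ in range(60):
    py, px = rng.integers(0, SIZE, 2)
    dy, dx = forest.predict([probes(img, py, px)])[0]
    votes.append((py + dy, px + dx))
est = np.mean(votes, axis=0)
err = float(np.hypot(est[0] - cy, est[1] - cx))
print(err < 4.0)
```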
26
Song Y, Cai W, Huang H, Zhou Y, Feng DD, Fulham MJ, Chen M. Large Margin Local Estimate With Applications to Medical Image Classification. IEEE Trans Med Imaging 2015; 34:1362-1377. [PMID: 25616009] [DOI: 10.1109/tmi.2015.2393954]
Abstract
Medical images usually exhibit large intra-class variation and inter-class ambiguity in the feature space, which can affect classification accuracy. To tackle this issue, we propose a new Large Margin Local Estimate (LMLE) classification model with sub-categorization-based sparse representation. We first sub-categorize the reference sets of the different classes into multiple clusters, reducing feature variation within each subcategory compared to the entire reference set. Local estimates are generated for the test image using sparse representation with the reference subcategories as dictionaries. The similarity between the test image and each class is then computed by fusing the distances to the local estimates in a learning-based large margin aggregation construct, alleviating the problem of inter-class ambiguity. The derived similarities are finally used to determine the class label. We demonstrate that our LMLE model is generally applicable to different imaging modalities, applying it to three tasks: interstitial lung disease (ILD) classification on high-resolution computed tomography (HRCT) images, and phenotype binary classification and continuous regression on brain magnetic resonance (MR) images. Our experimental results show statistically significant performance improvements over existing popular classifiers.
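A toy version of the LMLE pipeline, with k-means sub-categorization and least-squares local estimates standing in for the paper's sparse coding and learned large-margin fusion; the synthetic two-class, two-subcategory data are invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
DIM, PER_SUB = 20, 6

# Each class is a mixture of two subcategories, mimicking large intra-class variation.
protos = {c: [rng.normal(0.0, 1.0, DIM) for _ in range(2)] for c in (0, 1)}
refs = {c: np.vstack([p + rng.normal(0.0, 0.3, (PER_SUB, DIM)) for p in protos[c]])
        for c in (0, 1)}

def class_distance(x, ref):
    """Best residual of a least-squares local estimate over k-means subcategories.

    The paper generates local estimates by sparse representation and fuses the
    distances with a learned large-margin construct; min-residual is a stand-in.
    """
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(ref)
    best = np.inf
    for k in (0, 1):
        D = ref[labels == k].T                     # subcategory dictionary (atoms as columns)
        alpha, *_ = np.linalg.lstsq(D, x, rcond=None)
        best = min(best, float(np.linalg.norm(D @ alpha - x)))
    return best

test = protos[0][1] + rng.normal(0.0, 0.3, DIM)    # a sample from class 0's second subcategory
pred = min(refs, key=lambda c: class_distance(test, refs[c]))
print(pred)
```

Sub-categorizing first matters because the test sample lies close to only one mode of its class; a single dictionary for the whole class would mix both modes and weaken the local estimate.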
27
Song Y, Cai W, Huang H, Zhou Y, Wang Y, Feng DD. Locality-constrained Subcluster Representation Ensemble for lung image classification. Med Image Anal 2015; 22:102-13. [PMID: 25839422] [DOI: 10.1016/j.media.2015.03.003]
Abstract
In this paper, we propose a new Locality-constrained Subcluster Representation Ensemble (LSRE) model to classify high-resolution computed tomography (HRCT) images of interstitial lung diseases (ILDs). Medical images normally exhibit large intra-class variation and inter-class ambiguity in the feature space. Modelling the separation between different classes in feature space is thus problematic, and this affects classification performance. Our LSRE model tackles this issue in an ensemble classification construct. The image set is first partitioned into subclusters based on spectral clustering with an approximation-based affinity matrix. Basis representations of the test image are then generated with sparse approximation from the subclusters. These basis representations are finally fused with approximation- and distribution-based weights to classify the test image. Our experimental results on a large HRCT database show good performance improvement over existing popular classifiers.
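A toy version of the LSRE pipeline: spectral clustering partitions the reference set into subclusters, least-squares basis representations stand in for sparse approximation, and a simple residual-based weight replaces the learned approximation/distribution weighting. All data and parameters are invented for illustration.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(6)
DIM = 20
# Labelled reference images from two classes, each with two subclusters.
protos = [rng.normal(0.0, 1.0, DIM) for _ in range(4)]
X = np.vstack([p + rng.normal(0.0, 0.3, (6, DIM)) for p in protos])
y = np.repeat([0, 0, 1, 1], 6)

# Partition the whole reference set into subclusters (the paper uses an
# approximation-based affinity matrix; a plain RBF affinity is used here).
labels = SpectralClustering(n_clusters=4, affinity="rbf",
                            random_state=0).fit_predict(X)

def classify(x):
    """Fuse per-subcluster basis representations with residual-based weights."""
    score = np.zeros(2)
    for k in range(4):
        idx = labels == k
        D, yk = X[idx].T, y[idx]
        alpha, *_ = np.linalg.lstsq(D, x, rcond=None)
        resid = float(np.linalg.norm(D @ alpha - x))
        w = np.exp(-resid)                # locality: near subclusters weigh more
        score[int(round(yk.mean()))] += w # credit the subcluster's majority class
    return int(np.argmax(score))

test = protos[2] + rng.normal(0.0, 0.3, DIM)       # a class-1 sample
pred = classify(test)
print(pred)
```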
Affiliation(s)
- Yang Song: Biomedical and Multimedia Information Technology (BMIT) Research Group, School of IT, University of Sydney, NSW 2006, Australia
- Weidong Cai: Biomedical and Multimedia Information Technology (BMIT) Research Group, School of IT, University of Sydney, NSW 2006, Australia
- Heng Huang: Department of Computer Science and Engineering, University of Texas, Arlington, TX 76019, USA
- Yun Zhou: Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Yue Wang: Bradley Department of Electrical and Computer Engineering, Virginia Polytechnic Institute and State University, Arlington, VA 22203, USA
- David Dagan Feng: Biomedical and Multimedia Information Technology (BMIT) Research Group, School of IT, University of Sydney, NSW 2006, Australia
28
Yaqub M, Kelly B, Papageorghiou AT, Noble JA. Guided Random Forests for Identification of Key Fetal Anatomy and Image Categorization in Ultrasound Scans. Lecture Notes in Computer Science 2015. [DOI: 10.1007/978-3-319-24574-4_82]
29
Yaqub M, Kopuri A, Rueda S, Sullivan PB, Mccormick K, Noble JA. A Constrained Regression Forests Solution to 3D Fetal Ultrasound Plane Localization for Longitudinal Analysis of Brain Growth and Maturation. Machine Learning in Medical Imaging 2014. [DOI: 10.1007/978-3-319-10581-9_14]