1
Keramati H, de Vecchi A, Rajani R, Niederer SA. Using Gaussian process for velocity reconstruction after coronary stenosis applicable in positron emission particle tracking: An in-silico study. PLoS One 2023; 18:e0295789. [PMID: 38096169] [PMCID: PMC10721050] [DOI: 10.1371/journal.pone.0295789]
Abstract
Accurate velocity reconstruction is essential for assessing coronary artery disease. We propose a Gaussian process method to reconstruct the velocity profile from the sparse data of positron emission particle tracking (PEPT) in a biological environment, in which the measured velocities of tracer particles are used to infer the fluid velocity field. We investigated the influence of tracer particle quantity and detection time interval on flow reconstruction accuracy. Three models were used to represent different levels of stenosis and anatomical complexity: a narrowed straight tube, an idealized coronary bifurcation with stenosis, and patient-specific coronary arteries with a stenotic left circumflex artery. Computational fluid dynamics (CFD), particle tracking, and Gaussian process regression (kriging) were employed to simulate and reconstruct the pulsatile flow field. The study examined the error and uncertainty in velocity profile reconstruction after the stenosis by comparing particle-derived flow velocity with the CFD solution. Using 600 particles (15 batches of 40 particles) released in the main coronary artery, the time-averaged error in velocity reconstruction ranged from 13.4% (no occlusion) to 161% (70% occlusion) in the patient-specific anatomy. The error in maximum cross-sectional velocity at peak flow was consistently below 10% in all cases. PEPT and kriging tended to overestimate area-averaged velocity in higher occlusion cases but accurately predicted maximum cross-sectional velocity, particularly at peak flow. Kriging was thus useful for estimating the maximum velocity after the stenosis in the absence of negative near-wall velocities.
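For orientation, a minimal kriging sketch of the kind of reconstruction step described above: sparse, noisy velocity samples across a cross-section are regressed with a Gaussian process (scikit-learn). The parabolic profile, sample count, and noise level are illustrative assumptions, not the study's PEPT data or code.

```python
# Minimal kriging sketch: reconstruct a 1D cross-sectional velocity profile
# from sparse, noisy "particle" samples (illustrative stand-in for PEPT data).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Ground-truth profile: parabolic (Poiseuille-like) velocity across the lumen
r = np.linspace(-1.0, 1.0, 200)                    # normalized radial position
u_true = 1.0 - r**2                                # peak velocity 1.0 at the centre

# Sparse particle-derived samples with measurement noise
idx = rng.choice(r.size, size=15, replace=False)
r_obs = r[idx].reshape(-1, 1)
u_obs = u_true[idx] + rng.normal(scale=0.05, size=idx.size)

# Gaussian process (kriging) regression with an RBF kernel plus a noise term
kernel = 1.0 * RBF(length_scale=0.3) + WhiteKernel(noise_level=0.05**2)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(r_obs, u_obs)

# Posterior mean and uncertainty over the full cross-section
u_hat, u_std = gp.predict(r.reshape(-1, 1), return_std=True)
print(f"max velocity: true={u_true.max():.2f}, kriged={u_hat.max():.2f} "
      f"(+/- {u_std[np.argmax(u_hat)]:.2f})")
```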
Affiliation(s)
- Hamed Keramati
- School of Bioengineering and Imaging Sciences, King’s College London, London, United Kingdom
- National Heart and Lung Institute, Imperial College London, London, United Kingdom
- Adelaide de Vecchi
- School of Bioengineering and Imaging Sciences, King’s College London, London, United Kingdom
- Ronak Rajani
- School of Bioengineering and Imaging Sciences, King’s College London, London, United Kingdom
- Cardiology Department, Guy’s and St Thomas’ Hospital, London, United Kingdom
- Steven A. Niederer
- School of Bioengineering and Imaging Sciences, King’s College London, London, United Kingdom
- National Heart and Lung Institute, Imperial College London, London, United Kingdom
- Turing Research and Innovation Cluster in Digital Twins (TRIC: DT), The Alan Turing Institute, London, United Kingdom
2
Mu X, Wang S, Jiang P, Wu Y. Estimation of surface ozone concentration over Jiangsu province using a high-performance deep learning model. J Environ Sci (China) 2023; 132:122-133. [PMID: 37336603] [DOI: 10.1016/j.jes.2022.09.032]
Abstract
Recently, the global background concentration of ozone (O3) has demonstrated a rising trend. Among various methods, ground-based monitoring of O3 concentrations is highly reliable for research analysis. To obtain information on the spatial characteristics of O3 concentrations, however, ground monitoring sites must be deployed in sufficient density. In recent years, many researchers have used machine learning models to estimate surface O3 concentrations, but such models cannot fully exploit the spatial and temporal information contained in a sample dataset. To address this problem, the current study utilized a deep learning model called the Residual connection Convolutional Long Short-Term Memory network (R-ConvLSTM) to estimate the daily maximum 8-hr average (MDA8) O3 over Jiangsu province, China during 2020. The R-ConvLSTM model not only captures the spatiotemporal information of MDA8 O3, but also uses residual connections to avoid gradient explosion and vanishing gradients as the network layers deepen. We utilized the TROPOMI total O3 column retrieved from Sentinel-5 Precursor, ERA5 reanalysis meteorological data, and other supplementary data to build the training dataset. The R-ConvLSTM model achieved an overall sample-based cross-validation (CV) R2 of 0.955 with a root mean square error (RMSE) of 9.372 µg/m3. Model estimation also showed a city-based CV R2 of 0.896 with an RMSE of 14.029 µg/m3; the highest MDA8 O3 occurred in spring (122.60 ± 31.60 µg/m3) and the lowest in winter (69.93 ± 18.48 µg/m3).
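As a rough illustration of the architecture named above, a toy Keras sketch of a ConvLSTM block with a residual (shortcut) connection follows; the input shape, channel counts, and 1x1 projection are assumptions for illustration, not the authors' R-ConvLSTM configuration.

```python
# Toy sketch of a residual ConvLSTM regression model (TensorFlow/Keras).
from tensorflow.keras import layers, Model

def residual_convlstm_block(x, filters):
    """ConvLSTM layer with a 1x1-projected shortcut added back (residual connection)."""
    y = layers.ConvLSTM2D(filters, kernel_size=3, padding="same",
                          return_sequences=True)(x)
    shortcut = layers.TimeDistributed(layers.Conv2D(filters, 1, padding="same"))(x)
    return layers.Add()([y, shortcut])

# Input: 8 daily frames of a 32x32 grid with 5 predictor channels
# (e.g. total-column O3 and meteorological fields) -- shapes are illustrative.
inputs = layers.Input(shape=(8, 32, 32, 5))
x = residual_convlstm_block(inputs, 16)
x = residual_convlstm_block(x, 16)
outputs = layers.ConvLSTM2D(1, kernel_size=3, padding="same",
                            return_sequences=False)(x)  # gridded O3 estimate
model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
model.summary()
```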
Affiliation(s)
- Xi Mu
- School of Resources and Environmental Engineering, Anhui University, Hefei 230601, China
- Sichen Wang
- School of Resources and Environmental Engineering, Anhui University, Hefei 230601, China
- Peng Jiang
- School of Resources and Environmental Engineering, Anhui University, Hefei 230601, China; Information Materials and Intelligent Sensing Laboratory of Anhui Province, Hefei 230601, China; Anhui Province Engineering Laboratory for Mine Ecological Remediation, Anhui University, Hefei 230601, China
- Yanlan Wu
- School of Resources and Environmental Engineering, Anhui University, Hefei 230601, China; Information Materials and Intelligent Sensing Laboratory of Anhui Province, Hefei 230601, China
3
Rogers W, Keek SA, Beuque M, Lavrova E, Primakov S, Wu G, Yan C, Sanduleanu S, Gietema HA, Casale R, Occhipinti M, Woodruff HC, Jochems A, Lambin P. Towards texture accurate slice interpolation of medical images using PixelMiner. Comput Biol Med 2023; 161:106701. [PMID: 37244145] [DOI: 10.1016/j.compbiomed.2023.106701]
Abstract
Quantitative image analysis models are used for medical imaging tasks such as registration, classification, object detection, and segmentation. For these models to make accurate predictions, they need valid and precise information. We propose PixelMiner, a convolution-based deep-learning model for interpolating computed tomography (CT) imaging slices. PixelMiner was designed to produce texture-accurate slice interpolations by trading off pixel accuracy for texture accuracy. PixelMiner was trained on a dataset of 7829 CT scans and validated using an external dataset. We demonstrated the model's effectiveness using the structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), and the root mean squared error (RMSE) of extracted texture features. Additionally, we developed and used a new metric, the mean squared mapped feature error (MSMFE). The performance of PixelMiner was compared to four other interpolation methods: (tri-)linear, (tri-)cubic, windowed sinc (WS), and nearest neighbor (NN). PixelMiner produced the significantly lowest average texture error of all methods, with a normalized root mean squared error (NRMSE) of 0.11 (p < .01), and the significantly highest reproducibility, with a concordance correlation coefficient (CCC) ≥ 0.85 (p < .01). PixelMiner was not only shown to better preserve texture features; it was also validated in an ablation study by removing auto-regression from the model, and was shown to improve segmentations on interpolated slices.
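The evaluation metrics mentioned above can be sketched as follows; the arrays are synthetic stand-ins, not PixelMiner outputs, and scikit-image is assumed for SSIM and PSNR.

```python
# Sketch of slice-interpolation evaluation metrics (SSIM, PSNR, RMSE) for a
# predicted slice vs. a held-out true slice, using synthetic data.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

rng = np.random.default_rng(0)
true_slice = rng.normal(size=(128, 128))                           # held-out slice (toy)
pred_slice = true_slice + rng.normal(scale=0.1, size=(128, 128))   # interpolated slice

data_range = true_slice.max() - true_slice.min()
ssim = structural_similarity(true_slice, pred_slice, data_range=data_range)
psnr = peak_signal_noise_ratio(true_slice, pred_slice, data_range=data_range)
rmse = np.sqrt(np.mean((true_slice - pred_slice) ** 2))

print(f"SSIM={ssim:.3f}  PSNR={psnr:.1f} dB  RMSE={rmse:.3f}")
```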
Affiliation(s)
- W Rogers
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Developmental Biology, Maastricht University, Maastricht, the Netherlands
- S A Keek
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Developmental Biology, Maastricht University, Maastricht, the Netherlands
- M Beuque
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Developmental Biology, Maastricht University, Maastricht, the Netherlands
- E Lavrova
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Developmental Biology, Maastricht University, Maastricht, the Netherlands; GIGA Cyclotron Research Centre in Vivo Imaging, University of Liège, Liège, Belgium
- S Primakov
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Developmental Biology, Maastricht University, Maastricht, the Netherlands
- G Wu
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Developmental Biology, Maastricht University, Maastricht, the Netherlands
- C Yan
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Developmental Biology, Maastricht University, Maastricht, the Netherlands
- S Sanduleanu
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Developmental Biology, Maastricht University, Maastricht, the Netherlands
- H A Gietema
- Department of Radiology and Nuclear Medicine, GROW - School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, the Netherlands
- R Casale
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Developmental Biology, Maastricht University, Maastricht, the Netherlands; Department of Radiology, Institut Jules Bordet, Université Libre de Bruxelles, Brussels, Belgium
- M Occhipinti
- Radiomics, Clos Chanmurly 13, 4000, Liege, Belgium
- H C Woodruff
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Developmental Biology, Maastricht University, Maastricht, the Netherlands; Department of Radiology and Nuclear Medicine, GROW - School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, the Netherlands
- A Jochems
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Developmental Biology, Maastricht University, Maastricht, the Netherlands
- P Lambin
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Developmental Biology, Maastricht University, Maastricht, the Netherlands; Department of Radiology and Nuclear Medicine, GROW - School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, the Netherlands.
4
Machine Learning Model Based on Optimized Radiomics Feature from 18F-FDG-PET/CT and Clinical Characteristics Predicts Prognosis of Multiple Myeloma: A Preliminary Study. J Clin Med 2023; 12:2280. [PMID: 36983281] [PMCID: PMC10059677] [DOI: 10.3390/jcm12062280]
Abstract
Objectives: To evaluate the prognostic value of radiomics features extracted from 18F-FDG-PET/CT images and integrated with clinical characteristics and conventional PET/CT metrics in newly diagnosed multiple myeloma (NDMM) patients. Methods: We retrospectively reviewed baseline clinical information and 18F-FDG-PET/CT imaging data of MM patients. Multivariate Cox regression models involving different combinations of features were constructed, and stepwise regression was performed: (1) radiomics features of PET/CT alone (Rad Model); (2) clinical data (clinical/laboratory parameters and conventional PET/CT metrics) alone (Cli Model); (3) radiomics features combined with clinical data (Cli-Rad Model). Model performance was evaluated by the C-index and the Net Reclassification Index (NRI). Results: Ninety-eight patients with NDMM who underwent 18F-FDG-PET/CT between 2014 and 2019 were included in this study. Combining radiomics features from PET/CT with clinical data yielded higher prognostic performance than models with radiomics features or clinical data alone (C-index 0.790 vs. 0.675 vs. 0.736 in the training cohort; 0.698 vs. 0.651 vs. 0.563 in the validation cohort; AUC 0.761, sensitivity 56.7%, specificity 85.7%, p < 0.05 in the training cohort; AUC 0.650, sensitivity 80.0%, specificity 78.6%, p < 0.05 in the validation cohort). When clinical data were combined with radiomics, an increase in model performance was observed (NRI > 0). Conclusions: Radiomics features extracted from the PET and CT components of baseline 18F-FDG-PET/CT images may become an effective complement to provide prognostic information; therefore, radiomics features combined with clinical characteristics may provide clinical value for MM prognosis prediction.
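A minimal sketch of a combined clinical-plus-radiomics Cox model evaluated with the C-index, assuming the lifelines package; the feature names and synthetic data are purely illustrative, not the study's variables or results.

```python
# Sketch of a Cox proportional hazards model on combined radiomics + clinical
# features, with concordance (C-index) evaluation. Data are synthetic.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
n = 98
df = pd.DataFrame({
    "radiomics_pc1": rng.normal(size=n),         # e.g. first PET radiomics component (assumed name)
    "suv_max": rng.lognormal(mean=1.0, size=n),  # conventional PET metric (assumed name)
    "lab_marker": rng.normal(size=n),            # clinical/laboratory parameter (assumed name)
    "time": rng.exponential(scale=36, size=n),   # months to event or censoring
    "event": rng.integers(0, 2, size=n),         # 1 = event observed, 0 = censored
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")

risk = cph.predict_partial_hazard(df)            # higher value = higher predicted risk
print("C-index:", concordance_index(df["time"], -risk, df["event"]))
```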
5
Grahovac M, Spielvogel CP, Krajnc D, Ecsedi B, Traub-Weidinger T, Rasul S, Kluge K, Zhao M, Li X, Hacker M, Haug A, Papp L. Machine learning predictive performance evaluation of conventional and fuzzy radiomics in clinical cancer imaging cohorts. Eur J Nucl Med Mol Imaging 2023; 50:1607-1620. [PMID: 36738311] [PMCID: PMC10119059] [DOI: 10.1007/s00259-023-06127-1]
Abstract
BACKGROUND Hybrid imaging has become an instrumental part of medical imaging, particularly of cancer imaging in clinical routine. To date, several radiomic and machine learning studies have investigated the feasibility of in vivo tumor characterization, with variable outcomes. This study aims to investigate the effect of recently proposed fuzzy radiomics and compare its predictive performance to conventional radiomics in cancer imaging cohorts. In addition, lesion vs. lesion+surrounding fuzzy and conventional radiomic analyses were conducted. METHODS Previously published 11C-Methionine (MET) positron emission tomography (PET) glioma, 18F-FDG PET/computed tomography (CT) lung, and 68Ga-PSMA-11 PET/magnetic resonance imaging (MRI) prostate cancer retrospective cohorts were included in the analysis to predict their respective clinical endpoints. Four delineation methods, including a manually defined reference binary (Ref-B), its smoothed, fuzzified version (Ref-F), as well as an extended binary (Ext-B) and its fuzzified version (Ext-F), were used to extract imaging biomarker standardization initiative (IBSI)-conform radiomic features from each cohort. Machine learning for the four delineation approaches was performed with a Monte Carlo cross-validation scheme to estimate their predictive performance. RESULTS Reference fuzzy (Ref-F) delineation outperformed its binary (Ref-B) counterpart in all cohorts within a volume range of 938-354987 mm3, with relative cross-validation area under the receiver operating characteristic curve (AUC) gains of +4.7-10.4. Compared to Ref-B, the highest AUC performance difference was observed for the Ref-F delineation in the glioma cohort (Ref-F: 0.74 vs. Ref-B: 0.70) and in the prostate cohort for Ref-F and Ext-F (Ref-F: 0.84, Ext-F: 0.86 vs. Ref-B: 0.80). In addition, fuzzy radiomics decreased feature redundancy by approx. 20%. CONCLUSIONS Fuzzy radiomics has the potential to increase predictive performance compared to conventional binary radiomics in PET, particularly for small lesion sizes. We hypothesize that this effect is due to the ability of fuzzy radiomics to model partial volume effects and delineation uncertainties at small lesion boundaries. In addition, we consider that the lower redundancy of fuzzy radiomic features supports the identification of imaging biomarkers in future studies. Future studies should consider systematically analyzing lesions and their surroundings with fuzzy and binary radiomics.
Affiliation(s)
- M Grahovac
- Division of Nuclear Medicine, Medical University of Vienna, Vienna, Austria
- C P Spielvogel
- Division of Nuclear Medicine, Medical University of Vienna, Vienna, Austria
- Christian Doppler Laboratory for Applied Metabolomics, Medical University of Vienna, Vienna, Austria
- D Krajnc
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Waehringer Guertel 18-20, AT-1090, Vienna, Austria
- B Ecsedi
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Waehringer Guertel 18-20, AT-1090, Vienna, Austria
- T Traub-Weidinger
- Division of Nuclear Medicine, Medical University of Vienna, Vienna, Austria
- S Rasul
- Division of Nuclear Medicine, Medical University of Vienna, Vienna, Austria
- K Kluge
- Division of Nuclear Medicine, Medical University of Vienna, Vienna, Austria
- M Zhao
- Department of Nuclear Medicine, Peking University Third Hospital, Beijing, People's Republic of China
- X Li
- Division of Nuclear Medicine, Medical University of Vienna, Vienna, Austria
- M Hacker
- Division of Nuclear Medicine, Medical University of Vienna, Vienna, Austria
- A Haug
- Division of Nuclear Medicine, Medical University of Vienna, Vienna, Austria
- Christian Doppler Laboratory for Applied Metabolomics, Medical University of Vienna, Vienna, Austria
- Laszlo Papp
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Waehringer Guertel 18-20, AT-1090, Vienna, Austria.
6
Zhao M, Kluge K, Papp L, Grahovac M, Yang S, Jiang C, Krajnc D, Spielvogel CP, Ecsedi B, Haug A, Wang S, Hacker M, Zhang W, Li X. Multi-lesion radiomics of PET/CT for non-invasive survival stratification and histologic tumor risk profiling in patients with lung adenocarcinoma. Eur Radiol 2022; 32:7056-7067. [PMID: 35896836] [DOI: 10.1007/s00330-022-08999-7]
Abstract
OBJECTIVES This study investigates the ability of machine learning (ML) models trained on clinical data and 2-deoxy-2-[18F]fluoro-D-glucose (FDG) positron emission tomography/computed tomography (PET/CT) radiomics to predict overall survival (OS), tumor grade (TG), and histologic growth pattern risk (GPR) in lung adenocarcinoma (LUAD) patients. METHODS A total of 421 treatment-naive patients with histologically proven LUAD and available FDG PET/CT imaging were retrospectively included. Four cohorts were assessed for predicting 4-year OS (n = 276), 3-year OS (n = 280), TG (n = 298), and GPR (n = 265). FDG-avid lesions were delineated, and 2082 radiomics features were extracted and combined with endpoint-specific clinical parameters. ML models were built for the prediction of 4-year OS (M4OS), 3-year OS (M3OS), tumor grading (MTG), and histologic growth pattern risk (MGPR). A 100-fold Monte Carlo cross-validation with an 80:20 training-to-validation split was employed to evaluate the performance of all models. The association of the M4OS and M3OS predictions with OS was assessed by Kaplan-Meier survival analysis. RESULTS The area under the receiver operating characteristic curve (AUC) was highest for M4OS (AUC 0.88, 95% confidence interval (CI) 86.7-88.7), followed by M3OS (AUC 0.84, CI 82.9-84.9), while MTG and MGPR performed equally well (AUC 0.76, CI 74.4-77.9 and CI 74.6-78, respectively). Predictions of M4OS (hazard ratio (HR) -2.4, CI -2.47 to -1.64, p < 0.05) and M3OS (HR -2.36, CI -2.79 to -1.93, p < 0.05) were independently associated with OS. CONCLUSION ML models are able to predict long-term survival outcomes in LUAD patients with high accuracy. Furthermore, histologic grade and predominant growth pattern risk can be predicted with satisfactory accuracy. KEY POINTS • Machine learning models trained on pre-therapeutic PET/CT radiomics enable highly accurate long-term survival prediction of patients with lung adenocarcinoma. • Highly accurate survival predictions are achieved in lung adenocarcinoma patients despite heterogeneous histologies and treatment regimens. • Radiomic machine learning models are able to predict lung adenocarcinoma tumor grade and histologic growth pattern risk with satisfactory accuracy.
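A compact sketch of the Monte Carlo cross-validation scheme described above (repeated random 80:20 splits with AUC aggregation); the classifier and synthetic feature matrix are placeholders, not the authors' pipeline.

```python
# Sketch of Monte Carlo cross-validation: repeated random 80:20 splits with
# per-split AUC, aggregated into a mean and an empirical interval.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=280, n_features=50, n_informative=10,
                           random_state=0)          # stand-in for radiomics features

aucs = []
for seed in range(100):                             # 100-fold Monte Carlo CV
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2,
                                              stratify=y, random_state=seed)
    clf = RandomForestClassifier(n_estimators=200, random_state=seed).fit(X_tr, y_tr)
    aucs.append(roc_auc_score(y_va, clf.predict_proba(X_va)[:, 1]))

print(f"AUC = {np.mean(aucs):.2f} "
      f"({np.percentile(aucs, 2.5):.2f}-{np.percentile(aucs, 97.5):.2f})")
```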
Affiliation(s)
- Meixin Zhao
- Department of Nuclear Medicine, Peking University Third Hospital, 49 North Garden Road, Haidian District, Beijing, 100191, People's Republic of China
- Kilian Kluge
- Department of Biomedical Imaging and Image-guided Therapy, Division of Nuclear Medicine, Medical University of Vienna, Währinger Gürtel 18-20, Floor 3L, 1090, Vienna, Austria; Christian Doppler Laboratory for Applied Metabolomics (CDLAM), Vienna, Austria
- Laszlo Papp
- QIMP Team, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Marko Grahovac
- Department of Biomedical Imaging and Image-guided Therapy, Division of Nuclear Medicine, Medical University of Vienna, Währinger Gürtel 18-20, Floor 3L, 1090, Vienna, Austria
- Shaomin Yang
- Department of Pathology, Peking University Health Science Center, Beijing, China
- Chunting Jiang
- Department of Nuclear Medicine, Peking University Third Hospital, 49 North Garden Road, Haidian District, Beijing, 100191, People's Republic of China
- Denis Krajnc
- QIMP Team, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Clemens P Spielvogel
- Department of Biomedical Imaging and Image-guided Therapy, Division of Nuclear Medicine, Medical University of Vienna, Währinger Gürtel 18-20, Floor 3L, 1090, Vienna, Austria; Christian Doppler Laboratory for Applied Metabolomics (CDLAM), Vienna, Austria
- Boglarka Ecsedi
- QIMP Team, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Alexander Haug
- Department of Biomedical Imaging and Image-guided Therapy, Division of Nuclear Medicine, Medical University of Vienna, Währinger Gürtel 18-20, Floor 3L, 1090, Vienna, Austria; Christian Doppler Laboratory for Applied Metabolomics (CDLAM), Vienna, Austria
- Shiwei Wang
- Evomics Medical Technology Co., Ltd., Shanghai, China
- Marcus Hacker
- Department of Biomedical Imaging and Image-guided Therapy, Division of Nuclear Medicine, Medical University of Vienna, Währinger Gürtel 18-20, Floor 3L, 1090, Vienna, Austria
- Weifang Zhang
- Department of Nuclear Medicine, Peking University Third Hospital, 49 North Garden Road, Haidian District, Beijing, 100191, People's Republic of China
- Xiang Li
- Department of Biomedical Imaging and Image-guided Therapy, Division of Nuclear Medicine, Medical University of Vienna, Währinger Gürtel 18-20, Floor 3L, 1090, Vienna, Austria.
7
Papp L, Spielvogel CP, Grubmüller B, Grahovac M, Krajnc D, Ecsedi B, Sareshgi RAM, Mohamad D, Hamboeck M, Rausch I, Mitterhauser M, Wadsak W, Haug AR, Kenner L, Mazal P, Susani M, Hartenbach S, Baltzer P, Helbich TH, Kramer G, Shariat SF, Beyer T, Hartenbach M, Hacker M. Supervised machine learning enables non-invasive lesion characterization in primary prostate cancer with [68Ga]Ga-PSMA-11 PET/MRI. Eur J Nucl Med Mol Imaging 2021; 48:1795-1805. [PMID: 33341915] [PMCID: PMC8113201] [DOI: 10.1007/s00259-020-05140-y]
Abstract
PURPOSE Risk classification of primary prostate cancer in clinical routine is mainly based on prostate-specific antigen (PSA) levels, Gleason scores from biopsy samples, and tumor-nodes-metastasis (TNM) staging. This study aimed to investigate the diagnostic performance of positron emission tomography/magnetic resonance imaging (PET/MRI) in vivo models for predicting low-vs-high lesion risk (LH) as well as biochemical recurrence (BCR) and overall patient risk (OPR) with machine learning. METHODS Fifty-two patients who underwent multi-parametric dual-tracer [18F]FMC and [68Ga]Ga-PSMA-11 PET/MRI as well as radical prostatectomy between 2014 and 2015 were included as part of a single-center pilot of a randomized prospective trial (NCT02659527). Radiomics in combination with ensemble machine learning was applied to the [68Ga]Ga-PSMA-11 PET, apparent diffusion coefficient, and transverse relaxation time-weighted MRI scans of each patient to establish a low-vs-high risk lesion prediction model (MLH). Furthermore, MBCR and MOPR predictive model schemes were built by combining MLH, PSA, and clinical stage values of patients. Performance evaluation of the established models was performed with 1000-fold Monte Carlo (MC) cross-validation. Results were additionally compared to conventional [68Ga]Ga-PSMA-11 standardized uptake value (SUV) analyses. RESULTS The area under the receiver operating characteristic curve (AUC) of the MLH model (0.86) was higher than the AUC of the [68Ga]Ga-PSMA-11 SUVmax analysis (0.80). MC cross-validation revealed 89% and 91% accuracies with 0.90 and 0.94 AUCs for the MBCR and MOPR models, respectively, while standard routine analysis based on PSA, biopsy Gleason score, and TNM staging resulted in 69% and 70% accuracies to predict BCR and OPR, respectively. CONCLUSION Our results demonstrate the potential of PET/MRI radiomics and machine learning to enhance risk classification in primary prostate cancer patients without biopsy sampling.
Affiliation(s)
- L Papp
- QIMP Team, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- C P Spielvogel
- Department of Biomedical Imaging and Image-guided Therapy, Division of Nuclear Medicine, Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria
- Christian Doppler Laboratory for Applied Metabolomics, Vienna, Austria
- B Grubmüller
- Department of Urology, Medical University of Vienna, Vienna, Austria
- M Grahovac
- Department of Biomedical Imaging and Image-guided Therapy, Division of Nuclear Medicine, Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria
- D Krajnc
- QIMP Team, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- B Ecsedi
- QIMP Team, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- R A M Sareshgi
- Department of Biomedical Imaging and Image-guided Therapy, Division of Nuclear Medicine, Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria
- D Mohamad
- Department of Biomedical Imaging and Image-guided Therapy, Division of Nuclear Medicine, Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria
- M Hamboeck
- Department of Biomedical Imaging and Image-guided Therapy, Division of Nuclear Medicine, Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria
- I Rausch
- QIMP Team, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- M Mitterhauser
- Department of Biomedical Imaging and Image-guided Therapy, Division of Nuclear Medicine, Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria
- Ludwig Boltzmann Institute Applied Diagnostics, Vienna, Austria
- W Wadsak
- Department of Biomedical Imaging and Image-guided Therapy, Division of Nuclear Medicine, Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria
- A R Haug
- Department of Biomedical Imaging and Image-guided Therapy, Division of Nuclear Medicine, Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria
- Christian Doppler Laboratory for Applied Metabolomics, Vienna, Austria
- L Kenner
- Christian Doppler Laboratory for Applied Metabolomics, Vienna, Austria
- Clinical Institute of Pathology, Medical University of Vienna, Vienna, Austria
- P Mazal
- Clinical Institute of Pathology, Medical University of Vienna, Vienna, Austria
- M Susani
- Clinical Institute of Pathology, Medical University of Vienna, Vienna, Austria
- P Baltzer
- Department of Biomedical Imaging and Image-guided Therapy, Division of Common General and Pediatric Radiology, Medical University of Vienna, Vienna, Austria
- T H Helbich
- Department of Biomedical Imaging and Image-guided Therapy, Division of Common General and Pediatric Radiology, Medical University of Vienna, Vienna, Austria
- G Kramer
- Department of Urology, Medical University of Vienna, Vienna, Austria
- S F Shariat
- Department of Urology, Medical University of Vienna, Vienna, Austria
- T Beyer
- QIMP Team, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- M Hartenbach
- Department of Biomedical Imaging and Image-guided Therapy, Division of Nuclear Medicine, Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria
- M Hacker
- Department of Biomedical Imaging and Image-guided Therapy, Division of Nuclear Medicine, Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria.
8
El-Torky DMS, Al-Berry MN, Salem MAM, Roushdy MI. 3D Visualization of Brain Tumors Using MR Images: A Survey. Curr Med Imaging 2020; 15:353-361. [PMID: 31989903] [DOI: 10.2174/1573405614666180111142055]
Abstract
BACKGROUND Three-dimensional visualization of brain tumors is very useful in both the diagnosis and treatment stages of brain cancer. DISCUSSION It helps the oncologist/neurosurgeon make the best decision regarding radiotherapy and/or surgical resection techniques. 3D visualization involves two main steps: tumor segmentation and 3D modeling. CONCLUSION In this article, we illustrate the most widely used segmentation and 3D modeling techniques for brain tumor visualization. We also survey the public databases available for evaluation of the mentioned techniques.
Affiliation(s)
- Maryam Nabil Al-Berry
- Department of Basic Sciences, Faculty of Computers and Information Science, Ain Shams University, Cairo, Egypt
- Mohammed Abdel-Megeed Salem
- Department of Basic Sciences, Faculty of Computers and Information Science, Ain Shams University, Cairo, Egypt
- Mohamed Ismail Roushdy
- Department of Basic Sciences, Faculty of Computers and Information Science, Ain Shams University, Cairo, Egypt
9
Chao Z, Kim HJ. Slice interpolation of medical images using enhanced fuzzy radial basis function neural networks. Comput Biol Med 2019; 110:66-78. [PMID: 31129416] [DOI: 10.1016/j.compbiomed.2019.05.013]
Abstract
Volume data composed of complete slice images play an indispensable role in medical diagnosis. However, system or human factors often lead to the loss of slice images. In recent years, various interpolation algorithms have been proposed to solve this problem. Although these algorithms are effective, the interpolated images have some shortcomings, such as less accurate recovery and missing details. In this study, we propose a new method based on an enhanced fuzzy radial basis function neural network to improve the performance of slice interpolation. The neural network includes an input layer (six input neurons), three hidden layers of neurons, and an output layer (one output neuron), and we propose a patch matching method to select the input variables of the network, using the two adjacent images to be interpolated between as the input. The final output data are obtained by applying the trained neural network. In examining four groups of medical images, the proposed method outperforms five other methods, achieving the highest image similarity (ESSIM) values of 0.96, 0.95, 0.94, and 0.92 and the lowest mean squared difference (MSD) values of 35.5, 41.2, 50.9, and 47.1. In addition, for a whole MRI brain volume data experiment, the average (MSD, ESSIM) values of the proposed method and the other methods are (41.62, 0.95) and (57.13, 0.90), respectively. These results indicate that the proposed method is superior to the other methods.
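For context, a minimal (non-fuzzy) radial basis function network sketch with k-means centres and a least-squares output layer; it only illustrates the RBF-network idea and does not reproduce the paper's enhanced fuzzy variant or its patch-matching input selection.

```python
# Plain Gaussian-RBF network sketch: k-means centres, fixed width, and an output
# layer fitted by linear least squares. Inputs/targets are synthetic.
import numpy as np
from sklearn.cluster import KMeans

def rbf_design_matrix(X, centers, sigma):
    # Gaussian activations of every sample w.r.t. every centre
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 6))            # 6 inputs, as in the described network
y = np.sin(X).sum(axis=1) + rng.normal(scale=0.05, size=500)   # toy regression target

centers = KMeans(n_clusters=30, n_init=10, random_state=0).fit(X).cluster_centers_
sigma = 0.5
Phi = rbf_design_matrix(X, centers, sigma)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # output weights by least squares

y_hat = Phi @ w
print("training RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```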
Affiliation(s)
- Zhen Chao
- Department of Radiation Convergence Engineering, College of Health Science, Yonsei University, 1Yonseidae-gil, Wonju, Gangwon, 220-710, South Korea
- Hee-Joung Kim
- Department of Radiation Convergence Engineering, College of Health Science, Yonsei University, 1Yonseidae-gil, Wonju, Gangwon, 220-710, South Korea; Department of Radiological Science, College of Health Science, Yonsei University, 1Yonseidae-gil, Wonju, Gangwon, 220-710, South Korea.
10
Kocev B, Hahn HK, Linsen L, Wells WM, Kikinis R. Uncertainty-aware asynchronous scattered motion interpolation using Gaussian process regression. Comput Med Imaging Graph 2019; 72:1-12. [PMID: 30654093] [PMCID: PMC6433137] [DOI: 10.1016/j.compmedimag.2018.12.001]
Abstract
We address the problem of interpolating randomly, non-uniformly, spatiotemporally scattered uncertain motion measurements, which arises in the context of soft tissue motion estimation. Soft tissue motion estimation is of great interest in the field of image-guided soft-tissue intervention and surgery navigation, because it enables the registration of pre-interventional/pre-operative navigation information on deformable soft-tissue organs. To formally define the measurements as spatiotemporally scattered motion signal samples, we propose a novel motion field representation. To interpolate the motion measurements in an uncertainty-aware, optimal, unbiased fashion, we devise a novel Gaussian process (GP) regression model with a non-constant-mean prior and an anisotropic covariance function and show through an extensive evaluation that it outperforms the state-of-the-art GP models that have been deployed previously for similar tasks. The use of GP regression enables the quantification of uncertainty in the interpolation result, allowing the amount of uncertainty present in the registered navigation information to be conveyed to the surgeon or intervention specialist whose decisions it governs.
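A small sketch of uncertainty-aware GP interpolation of scattered motion samples, approximating a non-constant prior mean by regressing out a simple linear trend and using an anisotropic RBF covariance over (x, y, t) with scikit-learn; the data, mean model, and length scales are illustrative assumptions, not the authors' model.

```python
# GP interpolation of scattered spatiotemporal motion samples with an
# anisotropic covariance and a fitted (non-constant) prior mean.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Scattered measurement sites (x, y, t) and one motion component (toy data)
P = rng.uniform(0, 1, size=(60, 3))
d = 0.3 * P[:, 0] + 0.1 * np.sin(2 * np.pi * P[:, 2]) + rng.normal(scale=0.01, size=60)

# Non-constant prior mean: a linear trend fitted to the measurements
mean_model = LinearRegression().fit(P, d)
residual = d - mean_model.predict(P)

# Anisotropic covariance: one length scale per dimension (x, y, t)
kernel = 1.0 * RBF(length_scale=[0.2, 0.2, 0.05]) + WhiteKernel(noise_level=1e-4)
gp = GaussianProcessRegressor(kernel=kernel).fit(P, residual)

# Interpolate at a new point and report the predictive uncertainty
q = np.array([[0.5, 0.5, 0.4]])
mu, std = gp.predict(q, return_std=True)
print("displacement:", (mean_model.predict(q) + mu)[0], "+/-", std[0])
```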
Affiliation(s)
- Bojan Kocev
- Department of Mathematics and Computer Science, University of Bremen, Bremen, Germany; Fraunhofer Institute for Medical Image Computing MEVIS, Bremen, Germany; Department of Computer Science and Electrical Engineering, Jacobs University Bremen, Bremen, Germany.
- Horst Karl Hahn
- Fraunhofer Institute for Medical Image Computing MEVIS, Bremen, Germany; Department of Computer Science and Electrical Engineering, Jacobs University Bremen, Bremen, Germany
- Lars Linsen
- Institute of Computer Science, Westfälische Wilhelms-Universität Münster, Germany
- William M Wells
- Department of Radiology, Harvard Medical School and Brigham and Women's Hospital, Boston, MA 02115, USA
- Ron Kikinis
- Department of Mathematics and Computer Science, University of Bremen, Bremen, Germany; Fraunhofer Institute for Medical Image Computing MEVIS, Bremen, Germany; Department of Radiology, Harvard Medical School and Brigham and Women's Hospital, Boston, MA 02115, USA
11
Hadagali P, Peters JR, Balasubramanian S. Morphing the feature-based multi-blocks of normative/healthy vertebral geometries to scoliosis vertebral geometries: development of personalized finite element models. Comput Methods Biomech Biomed Engin 2018. [PMID: 29528253] [DOI: 10.1080/10255842.2018.1448391]
Abstract
Personalized finite element (FE) models and hexahedral elements are preferred for biomechanical investigations. Feature-based multi-block methods are used to develop anatomically accurate personalized FE models with hexahedral mesh. It is tedious to manually construct multi-blocks for a large number of geometries on an individual basis to develop personalized FE models. Mesh-morphing methods mitigate the tedium of meshing personalized geometries every time, but lead to element warping and loss of geometrical data. Such issues increase in magnitude when a normative spine FE model is morphed to a scoliosis-affected spinal geometry. The only way to bypass hex-mesh distortion or loss of geometry as a result of morphing is to rely on manually constructing the multi-blocks for the scoliosis-affected spine geometry of each individual, which is time intensive. A method to semi-automate the construction of multi-blocks on the geometry of scoliosis vertebrae from the existing multi-blocks of normative vertebrae is demonstrated in this paper. High-quality hexahedral elements were generated on the scoliosis vertebrae from the morphed multi-blocks of normative vertebrae. Constructing the multi-blocks took 3 months for the normative spine and less than a day for the scoliosis spine. The effort required to construct multi-blocks on personalized scoliosis spinal geometries is thus significantly reduced by morphing existing multi-blocks.
Affiliation(s)
- Prasannaah Hadagali
- Orthopedic Biomechanics Laboratory, School of Biomedical Engineering Science and Health Systems, Drexel University, Philadelphia, PA, USA
- James R Peters
- Orthopedic Biomechanics Laboratory, School of Biomedical Engineering Science and Health Systems, Drexel University, Philadelphia, PA, USA
- Sriram Balasubramanian
- Orthopedic Biomechanics Laboratory, School of Biomedical Engineering Science and Health Systems, Drexel University, Philadelphia, PA, USA
12
Papp L, Pötsch N, Grahovac M, Schmidbauer V, Woehrer A, Preusser M, Mitterhauser M, Kiesel B, Wadsak W, Beyer T, Hacker M, Traub-Weidinger T. Glioma Survival Prediction with Combined Analysis of In Vivo 11C-MET PET Features, Ex Vivo Features, and Patient Features by Supervised Machine Learning. J Nucl Med 2017; 59:892-899. [DOI: 10.2967/jnumed.117.202267]
13
Geometry reconstruction method for patient-specific finite element models for the assessment of tibia fracture risk in osteogenesis imperfecta. Med Biol Eng Comput 2016; 55:549-560. [DOI: 10.1007/s11517-016-1526-5]
14
Wachinger C, Fritscher K, Sharp G, Golland P. Contour-Driven Atlas-Based Segmentation. IEEE Trans Med Imaging 2015; 34:2492-2505. [PMID: 26068202] [PMCID: PMC4756595] [DOI: 10.1109/tmi.2015.2442753]
Abstract
We propose new methods for automatic segmentation of images based on an atlas of manually labeled scans and contours in the image. First, we introduce a Bayesian framework for creating initial label maps from manually annotated training images. Within this framework, we model various registration- and patch-based segmentation techniques by changing the deformation field prior. Second, we perform contour-driven regression on the created label maps to refine the segmentation. Image contours and image parcellations give rise to non-stationary kernel functions that model the relationship between image locations. Setting the kernel to the covariance function in a Gaussian process establishes a distribution over label maps supported by image structures. Maximum a posteriori estimation of the distribution over label maps conditioned on the outcome of the atlas-based segmentation yields the refined segmentation. We evaluate the segmentation in two clinical applications: the segmentation of parotid glands in head and neck CT scans and the segmentation of the left atrium in cardiac MR angiography images.
15
Abstract
Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods.
16
Lalonde NM, Petit Y, Aubin CE, Wagnac E, Arnoux PJ. Method to Geometrically Personalize a Detailed Finite-Element Model of the Spine. IEEE Trans Biomed Eng 2013; 60:2014-2021. [DOI: 10.1109/tbme.2013.2246865]
17
Aissiou M, Périé D, Gervais J, Trochu F. Development of a progressive dual kriging technique for 2D and 3D multi-parametric MRI data interpolation. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2013. [DOI: 10.1080/21681163.2013.765712]
18
Indhumathi C, Cai YY, Guan YQ, Opas M, Zheng J. Adaptive-weighted cubic B-spline using lookup tables for fast and efficient axial resampling of 3D confocal microscopy images. Microsc Res Tech 2011; 75:20-27. [PMID: 21618651] [DOI: 10.1002/jemt.21017]
Abstract
Confocal laser scanning microscopy has become a most powerful tool to visualize and analyze the dynamic behavior of cellular molecules. Photobleaching of fluorochromes is a major problem in confocal image acquisition that leads to intensity attenuation. The photobleaching effect can be reduced by optimizing the collection efficiency of the confocal image through fast z-scanning. However, such images suffer from distortions, particularly in the z dimension, which cause the voxel dimensions in the x, y, and z directions of the original image stacks to differ. As a result, reliable segmentation and feature extraction of these images may be difficult or even impossible. Image interpolation is especially needed to correct the undersampling artifact in the axial plane of three-dimensional images generated by a confocal microscope, in order to obtain cubic voxels. In this work, we present an adaptive cubic B-spline-based interpolation, aided by lookup tables, in which adaptive weights are derived from local gradients for the sampling nodes in the interpolation formulae. The proposed method thus enhances the axial resolution of confocal images by improving the accuracy of the interpolated values while greatly reducing computational cost. Numerical experimental results confirm the effectiveness of the proposed interpolation approach and demonstrate its superiority in both accuracy and speed compared to other interpolation algorithms.
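As a baseline for comparison, a plain cubic-spline axial resampling sketch with SciPy; the voxel spacings are illustrative assumptions, and the paper's adaptive gradient-based weights and lookup tables are not reproduced here.

```python
# Plain (non-adaptive) cubic-spline axial resampling: upsample a confocal stack
# along z so that voxels become approximately cubic.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
stack = rng.random((20, 256, 256))       # (z, y, x) stack with coarse z sampling (toy data)

z_spacing, xy_spacing = 2.0, 0.5         # microns; illustrative acquisition geometry
zoom_z = z_spacing / xy_spacing          # upsampling factor needed for cubic voxels

iso = ndimage.zoom(stack, zoom=(zoom_z, 1.0, 1.0), order=3)  # cubic spline along z
print(stack.shape, "->", iso.shape)
```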
Affiliation(s)
- C Indhumathi
- School of Mechanical and Aerospace Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore
19
Research on Interpolation Methods in Medical Image Processing. J Med Syst 2010; 36:777-807. [DOI: 10.1007/s10916-010-9544-6]
20
Abstract
Three-dimensional (3D) imaging was developed to provide both qualitative and quantitative information about an object or object system from images obtained with multiple modalities including digital radiography, computed tomography, magnetic resonance imaging, positron emission tomography, single photon emission computed tomography, and ultrasonography. Three-dimensional imaging operations may be classified under four basic headings: preprocessing, visualization, manipulation, and analysis. Preprocessing operations (volume of interest, filtering, interpolation, registration, segmentation) are aimed at extracting or improving the extraction of object information in given images. Visualization operations facilitate seeing and comprehending objects in their full dimensionality and may be either scene-based or object-based. Manipulation may be either rigid or deformable and allows alteration of object structures and of relationships between objects. Analysis operations, like visualization operations, may be either scene-based or object-based and deal with methods of quantifying object information. There are many challenges involving matters of precision, accuracy, and efficiency in 3D imaging. Nevertheless, 3D imaging is an exciting technology that promises to offer an expanding number and variety of applications.
Affiliation(s)
- J K Udupa
- Department of Radiology, University of Pennsylvania, Philadelphia 19104-6021, USA
21
Grevera GJ, Udupa JK, Miki Y. A task-specific evaluation of three-dimensional image interpolation techniques. IEEE Trans Med Imaging 1999; 18:137-143. [PMID: 10232670] [DOI: 10.1109/42.759116]
Abstract
Image interpolation is an important operation that is widely used in medical imaging, image processing, and computer graphics. A variety of interpolation methods are available in the literature. However, their systematic evaluation is lacking. In a previous paper, we presented a framework for the task-independent comparison of interpolation methods based on certain image-derived figures of merit using a variety of medical image data pertaining to different parts of the human body taken from different modalities. In this work, we present an objective task-specific framework for evaluating interpolation techniques. The task considered is how the interpolation methods influence the accuracy of quantification of the total volume of lesions in the brain of multiple sclerosis (MS) patients. Sixty lesion-detection experiments coming from ten patient studies, two subsampling techniques and the original data, and three interpolation methods are carried out, along with a statistical analysis of the results.
Affiliation(s)
- G J Grevera
- Department of Radiology, University of Pennsylvania Health System, Philadelphia 19104, USA
22
Grevera GJ, Udupa JK. An objective comparison of 3-D image interpolation methods. IEEE Trans Med Imaging 1998; 17:642-652. [PMID: 9845319] [DOI: 10.1109/42.730408]
Abstract
To aid in the display, manipulation, and analysis of biomedical image data, the data usually need to be converted to an isotropic discretization through the process of interpolation. Traditional techniques consist of direct interpolation of the grey values. When user interaction is called for in image segmentation, as a consequence of these interpolation methods, the user needs to segment a much greater (typically 4-10x) amount of data. To mitigate this problem, a method called shape-based interpolation of binary data was developed [2]. Besides significantly reducing user time, this method has been shown to provide more accurate results than grey-level interpolation. We proposed an approach for the interpolation of grey data of arbitrary dimensionality that generalized the shape-based method from binary to grey data. This method has characteristics similar to those of the binary shape-based method. In particular, we showed preliminary evidence that it produces more accurate results than conventional grey-level interpolation methods. In this paper, concentrating on the three-dimensional (3-D) interpolation problem, we compare statistically the accuracy of eight different methods: nearest-neighbor, linear grey-level, grey-level cubic spline, grey-level modified cubic spline, Goshtasby et al., and three methods from the grey-level shape-based class. A population of patient magnetic resonance and computed tomography images, corresponding to different parts of the human anatomy and coming from different three-dimensional imaging applications, is utilized for the comparison. Each slice in these data sets is estimated by each interpolation method and compared to the original slice at the same location using three measures: mean-squared difference, number of sites of disagreement, and largest difference. The methods are statistically compared pairwise based on these measures. The shape-based methods statistically significantly outperformed all other methods in all measures in all applications considered here, with a statistical relevance ranging from 10% to 32% (mean = 15%) for mean-squared difference.
Affiliation(s)
- G J Grevera
- Department of Radiology, University of Pennsylvania, Philadelphia 19104-6021, USA.
23
Choi SM, Lee JE, Kim J, Kim MH. Volumetric object reconstruction using the 3D-MRF model-based segmentation. IEEE Trans Med Imaging 1997; 16:887-892. [PMID: 9533588] [DOI: 10.1109/42.650884]
Abstract
A number of segmentation algorithms have been developed, but those algorithms are not effective for volume reconstruction because they are limited to operating only on two-dimensional (2-D) images. In this paper, we propose a volumetric object reconstruction method using three-dimensional Markov random field (3D-MRF) model-based segmentation. The 3D-MRF model is known to be an efficient way to model spatial contextual information. The method is compared with the 2-D region growing scheme under three types of interpolation. The results show that the proposed method is better than the other methods in terms of image quality.
Affiliation(s)
- S M Choi
- Department of Computer Science and Engineering, Ewha Womans University, Seoul, Korea
24
Grevera GJ, Udupa JK. Shape-based interpolation of multidimensional grey-level images. IEEE Trans Med Imaging 1996; 15:881-892. [PMID: 18215967] [DOI: 10.1109/42.544506]
Abstract
Shape-based interpolation as applied to binary images causes the interpolation process to be influenced by the shape of the object. It accomplishes this by first applying a distance transform to the data. This results in the creation of a grey-level data set in which the value at each point represents the minimum distance from that point to the surface of the object. (By convention, points inside the object are assigned positive values; points outside are assigned negative values.) This distance transformed data set is then interpolated using linear or higher-order interpolation and is then thresholded at a distance value of zero to produce the interpolated binary data set. Here, the authors describe a new method that extends shape-based interpolation to grey-level input data sets. This generalization consists of first lifting the n-dimensional (n-D) image data to represent it as a surface, or equivalently as a binary image, in an (n+1)-dimensional [(n+1)-D] space. The binary shape-based method is then applied to this image to create an (n+1)-D binary interpolated image. Finally, this image is collapsed (inverse of lifting) to create the n-D interpolated grey-level data set. The authors have conducted several evaluation studies involving patient computed tomography (CT) and magnetic resonance (MR) data as well as mathematical phantoms. They all indicate that the new method produces more accurate results than commonly used grey-level linear interpolation methods, although at the cost of increased computation.
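The binary shape-based scheme summarized above can be sketched directly: signed distance transforms of two adjacent binary slices are averaged and thresholded at zero to obtain the intermediate slice. The toy circle masks below are illustrative inputs, not the authors' implementation or data.

```python
# Binary shape-based interpolation sketch: signed distance transform of each
# slice, linear interpolation of the distance maps, threshold at zero.
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    # positive inside the object, negative outside (the convention described above)
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(~mask)
    return inside - outside

def shape_based_midslice(mask_a, mask_b):
    d = 0.5 * signed_distance(mask_a) + 0.5 * signed_distance(mask_b)
    return d > 0

# Toy example: two circles of different radii on adjacent slices
yy, xx = np.mgrid[:64, :64]
slice_a = (yy - 32) ** 2 + (xx - 32) ** 2 < 10 ** 2
slice_b = (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2
mid = shape_based_midslice(slice_a, slice_b)
print("interpolated object area:", int(mid.sum()))   # lies between the two input areas
```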
Affiliation(s)
- G J Grevera
- Dept. of Radiol., Pennsylvania Univ., Philadelphia, PA