1. Chang Y, Li Z, Xu W. CGNet: A Correlation-Guided Registration Network for Unsupervised Deformable Image Registration. IEEE Transactions on Medical Imaging 2025; 44:1468-1479. [PMID: 40030290] [DOI: 10.1109/tmi.2024.3505853]
Abstract
Deformable medical image registration plays a significant role in medical image analysis. With the advancement of deep neural networks, learning-based deformable registration methods have made great strides, owing to fast end-to-end registration and performance competitive with traditional methods. However, these methods primarily improve registration performance by replacing specific layers of encoder-decoder architectures designed for segmentation tasks with advanced network structures such as Transformers, overlooking the crucial difference between the two tasks: feature matching. In this paper, we propose a novel correlation-guided registration network (CGNet) designed specifically for deformable medical image registration, which achieves accurate registration through three main components: a dual-stream encoder, a correlation learning module, and a coarse-to-fine decoder. Specifically, the dual-stream encoder independently extracts hierarchical features from a moving image and a fixed image. The correlation learning module calculates correlation maps, enabling explicit feature matching between input image pairs. The coarse-to-fine decoder outputs deformation sub-fields at each decoding layer in a coarse-to-fine manner, facilitating accurate estimation of the final deformation field. Extensive experiments on four 3D brain MRI datasets show that the proposed method achieves state-of-the-art performance on three evaluation metrics compared to twelve learning-based registration methods, demonstrating the potential of the model for deformable medical image registration.
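The correlation learning module described above can be illustrated with a minimal sketch: at each spatial location, the fixed-image feature vector is correlated with moving-image features over a small displacement window, producing a cost volume that makes feature matching explicit. This is a generic cost-volume construction assumed from the abstract's description, not the authors' exact formulation:

```python
import numpy as np

def correlation_map(feat_fixed, feat_moving, radius=1):
    """Local correlation between fixed and moving feature maps.

    feat_fixed, feat_moving: (C, H, W) feature arrays.
    Returns a ((2r+1)^2, H, W) volume: for each spatial location, the
    normalized inner product of the fixed feature with moving features
    at every displacement within `radius`.
    """
    C, H, W = feat_fixed.shape
    r = radius
    padded = np.pad(feat_moving, ((0, 0), (r, r), (r, r)), mode="constant")
    out = np.empty(((2 * r + 1) ** 2, H, W), dtype=float)
    k = 0
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            shifted = padded[:, dy:dy + H, dx:dx + W]  # moving features at offset (dy-r, dx-r)
            out[k] = (feat_fixed * shifted).sum(axis=0) / C
            k += 1
    return out
```

In a network like CGNet this volume would be computed at each encoder level and fed to the decoder; here it is shown as a standalone NumPy function for clarity.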
2. Im JE, Khalifa M, Gregory AV, Erickson BJ, Kline TL. A Systematic Review on the Use of Registration-Based Change Tracking Methods in Longitudinal Radiological Images. Journal of Imaging Informatics in Medicine 2024. [PMID: 39578321] [DOI: 10.1007/s10278-024-01333-1]
Abstract
Registration is the process of spatially and/or temporally aligning different images. It is a critical tool for automatically tracking pathological changes detected in radiological images and for aligning images captured by different imaging systems or acquired with different acquisition parameters. Longitudinal analysis of clinical changes plays a significant role in helping clinicians evaluate disease progression and determine the most suitable course of treatment for patients. This study provides a comprehensive review of the role registration-based approaches play in automated change tracking in radiological imaging. It covers the three types of registration (rigid, affine, and nonrigid), as well as the two main methods of detecting and quantifying changes in registered longitudinal images: the intensity-based approach and the deformation-based approach. After providing an overview and background, we highlight the clinical applications of these methods, focusing on computed tomography (CT) and magnetic resonance imaging (MRI) of tumors and multiple sclerosis (MS), two of the most heavily studied areas in automated change tracking. We conclude with a discussion and recommendations for future directions.
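The intensity-based approach mentioned above can be sketched as a voxelwise difference of the registered image pair. This is a generic illustration, not taken from the review; the threshold and any noise preprocessing are assumptions:

```python
import numpy as np

def intensity_change_map(baseline, followup_registered, threshold=0.1):
    """Voxelwise intensity difference after registration (intensity-based
    change detection). Positive values flag signal increase, negative
    values a decrease; `threshold` suppresses small differences
    attributable to noise or residual misalignment."""
    diff = followup_registered.astype(float) - baseline.astype(float)
    change_mask = np.abs(diff) > threshold
    return diff, change_mask
```

The deformation-based alternative instead analyzes the registration transform itself (e.g. its Jacobian determinant) rather than residual intensities.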
Affiliation(s)
- Jeeho E Im
- Department of Radiology, Mayo Clinic, 200 First St SW, Rochester, MN, 55905, USA
- Muhammed Khalifa
- Department of Radiology, Mayo Clinic, 200 First St SW, Rochester, MN, 55905, USA
- Adriana V Gregory
- Department of Radiology, Mayo Clinic, 200 First St SW, Rochester, MN, 55905, USA
- Bradley J Erickson
- Department of Radiology, Mayo Clinic, 200 First St SW, Rochester, MN, 55905, USA
- Timothy L Kline
- Department of Radiology, Mayo Clinic, 200 First St SW, Rochester, MN, 55905, USA
3. Fang J, Lv N, Li J, Zhang H, Wen J, Yang W, Wu J, Wen Z. Decoupled learning for brain image registration. Front Neurosci 2023; 17:1246769. [PMID: 37694117] [PMCID: PMC10485259] [DOI: 10.3389/fnins.2023.1246769]
Abstract
Image registration is an important component of medical image processing and intelligent analysis, and registration accuracy strongly affects all subsequent processing and analysis. This paper addresses deep-learning-based brain image registration and proposes unsupervised methods built on model decoupling and regularization learning. Specifically, we first decompose the highly ill-conditioned inverse problem of brain image registration into two simpler sub-problems to reduce model complexity. Two lightweight neural networks are then constructed to approximate the solutions of the two sub-problems, and an alternating-iteration training strategy is used to solve the overall problem. The algorithms are evaluated on brain MRI images from the LPBA40 dataset, and the experimental results demonstrate the superiority of the proposed approach over conventional learning methods for brain image registration.
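The alternating-iteration strategy described above can be sketched as a generic driver that interleaves a data-fidelity update with a regularization update. The two callables below stand in for the paper's two light networks; the denoising demo in the usage is purely illustrative:

```python
import numpy as np

def alternating_minimization(step_data, step_reg, u0, iters=10):
    """Generic alternating-iteration solver: repeatedly apply a
    data-fit update and a regularization update, as in decoupled
    registration where each sub-problem has its own (light) solver."""
    u = u0
    for _ in range(iters):
        u = step_data(u)  # sub-problem 1: improve data fidelity
        u = step_reg(u)   # sub-problem 2: enforce smoothness/regularity
    return u
```

For example, with `step_data` pulling a signal toward a noisy observation and `step_reg` applying neighborhood smoothing, the iteration suppresses high-frequency noise while staying close to the data, which mirrors the data-term/regularizer split of the registration energy.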
Affiliation(s)
- Jinwu Fang
- Institute of Infectious Disease and Biosecurity, School of Public Health, Fudan University, Shanghai, China
- China Academy of Information and Communication Technology, Beijing, China
- Industrial Internet Innovation Center (Shanghai) Co., Ltd., Shanghai, China
- Na Lv
- School of Health and Social Care, Shanghai Urban Construction Vocational College, Shanghai, China
- Jia Li
- Institute of Infectious Disease and Biosecurity, School of Public Health, Fudan University, Shanghai, China
- Hao Zhang
- Department of Mathematics, School of Science, Shanghai University, Shanghai, China
- Jiayuan Wen
- College of Intelligence and Computing, Tianjin University, Tianjin, China
- Wan Yang
- Department of Mathematics, School of Science, Shanghai University, Shanghai, China
- Jingfei Wu
- School of Economics, Shanghai University, Shanghai, China
- Zhijie Wen
- Department of Mathematics, School of Science, Shanghai University, Shanghai, China
4. Ho TT, Kim WJ, Lee CH, Jin GY, Chae KJ, Choi S. An unsupervised image registration method employing chest computed tomography images and deep neural networks. Comput Biol Med 2023; 154:106612. [PMID: 36738711] [DOI: 10.1016/j.compbiomed.2023.106612]
Abstract
BACKGROUND Deformable image registration is crucial for multiple radiation therapy applications. Fast registration of computed tomography (CT) lung images is challenging because of the large and nonlinear deformation between inspiration and expiration. With advancements in deep learning techniques, learning-based registration methods are considered efficient alternatives to traditional methods in terms of accuracy and computational cost. METHOD In this study, an unsupervised lung registration network (LRN) with cycle-consistent training is proposed to align two CT-derived lung datasets acquired during breath-holds at inspiratory and expiratory levels, without utilizing any ground-truth registration results. The LRN model uses three loss functions: image similarity, regularization, and Jacobian determinant. LRN was trained on the CT datasets of 705 subjects and tested using 10 pairs of public CT DIR-Lab datasets. Furthermore, to evaluate the effectiveness of the registration technique, target registration errors (TREs) of the LRN model were compared with those of a conventional algorithm (sum of squared tissue volume difference; SSTVD) and a state-of-the-art unsupervised registration method (VoxelMorph). RESULTS The results showed that the LRN, with an average TRE of 1.78 ± 1.56 mm, outperformed VoxelMorph (average TRE of 2.43 ± 2.43 mm) and was comparable to SSTVD (average TRE of 1.66 ± 1.49 mm). In addition, estimating the displacement vector field without any folding voxels took less than 2 s, demonstrating that the learning-based method delivers fiducial marker tracking and overall soft-tissue alignment at near real-time speed. CONCLUSIONS This proposed method therefore shows significant potential for use in time-sensitive pulmonary studies, such as lung motion tracking and image-guided surgery.
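TRE, the metric used above to compare LRN, SSTVD, and VoxelMorph, is the Euclidean distance between corresponding anatomical landmarks after the estimated transform has been applied, as in the DIR-Lab evaluation. A minimal sketch; the landmark format and spacing handling here are assumptions:

```python
import numpy as np

def target_registration_error(landmarks_fixed, landmarks_moving_warped,
                              spacing=(1.0, 1.0, 1.0)):
    """Mean and std of Euclidean distance (mm) between corresponding
    landmarks after the moving landmarks have been mapped through the
    estimated deformation.

    Both landmark sets are (N, 3) arrays of voxel indices; `spacing`
    converts index differences to millimeters.
    """
    d = (np.asarray(landmarks_fixed, float)
         - np.asarray(landmarks_moving_warped, float)) * np.asarray(spacing)
    errors = np.linalg.norm(d, axis=1)
    return errors.mean(), errors.std()
```

A perfect registration maps every moving landmark exactly onto its fixed counterpart, giving a TRE of 0 mm.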
Affiliation(s)
- Thao Thi Ho
- School of Mechanical Engineering, Kyungpook National University, Daegu, South Korea
- Woo Jin Kim
- Department of Internal Medicine and Environmental Health Center, Kangwon National University Hospital, School of Medicine, Kangwon National University, Chuncheon, South Korea
- Chang Hyun Lee
- Department of Radiology, Seoul National University, College of Medicine, Seoul National University Hospital, Seoul, South Korea; Department of Radiology, College of Medicine, The University of Iowa, Iowa City, IA, USA
- Gong Yong Jin
- Department of Radiology, Research Institute of Clinical Medicine of Jeonbuk National University, Biomedical Research Institute of Jeonbuk National University Hospital, Jeonju, South Korea
- Kum Ju Chae
- Department of Radiology, Research Institute of Clinical Medicine of Jeonbuk National University, Biomedical Research Institute of Jeonbuk National University Hospital, Jeonju, South Korea
- Sanghun Choi
- School of Mechanical Engineering, Kyungpook National University, Daegu, South Korea
5. Lei Y, Fu Y, Wang T, Liu Y, Patel P, Curran WJ, Liu T, Yang X. 4D-CT deformable image registration using multiscale unsupervised deep learning. Phys Med Biol 2020; 65:085003. [PMID: 32097902] [PMCID: PMC7775640] [DOI: 10.1088/1361-6560/ab79c4]
Abstract
Deformable image registration (DIR) of 4D-CT images is important in multiple radiation therapy applications, including motion tracking of soft tissue or fiducial markers, target definition, image fusion, dose accumulation, and treatment response evaluation. Registering 4D-CT abdominal images accurately and quickly is very challenging due to their large appearance variations and bulky sizes. In this study, we proposed an accurate and fast multi-scale DIR network (MS-DIRNet) for abdominal 4D-CT registration. MS-DIRNet consists of a global network (GlobalNet) and a local network (LocalNet). GlobalNet was trained using down-sampled whole image volumes, while LocalNet was trained using sampled image patches; both networks consist of a generator and a discriminator. The generator was trained to directly predict a deformation vector field (DVF) based on the moving and target images and was implemented using convolutional neural networks with multiple attention gates. The discriminator was trained to differentiate the deformed images from the target images to provide additional DVF regularization. The loss function of MS-DIRNet comprises three parts: image similarity loss, adversarial loss, and DVF regularization loss. MS-DIRNet was trained in a completely unsupervised manner, meaning that ground-truth DVFs are not needed. Unlike traditional DIR methods that calculate the DVF iteratively, MS-DIRNet calculates the final DVF in a single forward prediction, which significantly expedites the DIR process. MS-DIRNet was trained and tested on 25 patients' 4D-CT datasets using five-fold cross validation. For registration accuracy evaluation, target registration errors (TREs) of MS-DIRNet were compared to clinically used software.
Our results showed that the MS-DIRNet with an average TRE of 1.2 ± 0.8 mm outperformed the commercial software with an average TRE of 2.5 ± 0.8 mm in 4D-CT abdominal DIR, demonstrating the superior performance of our method in fiducial marker tracking and overall soft tissue alignment.
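Two of the three loss terms above, image similarity and DVF regularization, can be sketched directly; MSE similarity and a diffusion (squared-gradient) regularizer are stand-ins, since the abstract does not name the exact similarity metric, and the adversarial term would come from a discriminator network omitted here:

```python
import numpy as np

def dvf_smoothness(dvf):
    """Diffusion regularizer: mean squared spatial gradient of the
    displacement vector field, dvf shape (3, D, H, W)."""
    loss = 0.0
    for axis in (1, 2, 3):  # finite differences along each spatial axis
        g = np.diff(dvf, axis=axis)
        loss += (g ** 2).mean()
    return loss

def registration_loss(warped, fixed, dvf, lam=0.01):
    """Unsupervised composite loss: image similarity (MSE here) plus
    weighted DVF regularization. No ground-truth DVF appears anywhere,
    which is what makes the training unsupervised."""
    similarity = ((warped - fixed) ** 2).mean()
    return similarity + lam * dvf_smoothness(dvf)
```

During training, the generator's predicted DVF warps the moving image, and this scalar is minimized; the weighting `lam` trades alignment accuracy against deformation smoothness.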
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA, 30322
6. Developing global image feature analysis models to predict cancer risk and prognosis. Vis Comput Ind Biomed Art 2019; 2:17. [PMID: 32190407] [PMCID: PMC7055572] [DOI: 10.1186/s42492-019-0026-5]
Abstract
To develop precision or personalized medicine, identifying new quantitative imaging markers and building machine learning models to predict cancer risk and prognosis have been attracting broad research interest recently. Most of these approaches use concepts similar to those of conventional computer-aided detection schemes for medical images, which include detecting and segmenting suspicious regions or tumors, followed by training machine learning models on the fusion of multiple image features computed from the segmented regions or tumors. However, due to the heterogeneity and boundary fuzziness of suspicious regions or tumors, segmenting subtle regions is often difficult and unreliable. Additionally, ignoring global and/or background parenchymal tissue characteristics may also be a limitation of the conventional approaches. In our recent studies, we investigated the feasibility of developing new computer-aided schemes implemented with machine learning models trained on global image features to predict cancer risk and prognosis. We trained and tested several models using images obtained from full-field digital mammography, magnetic resonance imaging, and computed tomography of breast, lung, and ovarian cancers. Study results showed that many of these new models yielded higher performance than other approaches used in current clinical practice. Furthermore, the computed global image features contain information complementary to the features computed from segmented regions or tumors in predicting cancer prognosis. Therefore, global image features can be used alone to develop new case-based prediction models or can be added to current tumor-based models to increase their discriminatory power.
7. Zargari A, Du Y, Heidari M, Thai TC, Gunderson CC, Moore K, Mannel RS, Liu H, Zheng B, Qiu Y. Prediction of chemotherapy response in ovarian cancer patients using a new clustered quantitative image marker. Phys Med Biol 2018; 63:155020. [PMID: 30010611] [DOI: 10.1088/1361-6560/aad3ab]
Abstract
This study aimed to investigate the feasibility of integrating image features computed from both the spatial and frequency domains to better describe tumor heterogeneity for precise prediction of tumor response to postsurgical chemotherapy in patients with advanced-stage ovarian cancer. A computer-aided scheme was applied to first compute 133 features from five categories: shape and density, fast Fourier transform, discrete cosine transform (DCT), wavelet, and gray level difference method. An optimal feature cluster was then determined by the scheme using the particle swarm optimization algorithm, aiming to achieve a discrimination power unattainable with single features. The scheme was tested using a balanced dataset (responders and non-responders defined using 6-month PFS) retrospectively collected from 120 ovarian cancer patients. Among the five categories, the DCT features achieved higher predictive accuracy than the features in the other groups. By comparison, a quantitative image marker generated from the optimal feature cluster yielded an area under the ROC curve (AUC) of 0.86, while the top-performing single feature had an AUC of only 0.74. Furthermore, the features computed from the frequency domain proved as important as those computed from the spatial domain. In conclusion, this study demonstrates the potential of the proposed quantitative image marker, fusing features computed from both the spatial and frequency domains, for reliable prediction of tumor response to postsurgical chemotherapy.
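Frequency-domain texture features of the kind named above (DCT category) can be sketched as the low-frequency block of a 2-D DCT of the tumor ROI. This is a generic construction; the paper's exact DCT feature definitions are not given in the abstract:

```python
import numpy as np
from scipy.fft import dctn

def dct_features(roi, k=4):
    """Low-frequency DCT texture features from a tumor ROI: the top-left
    k x k block of 2-D DCT-II coefficients, flattened into a vector.
    Low-frequency coefficients summarize coarse intensity structure;
    discarding the rest acts as a crude dimensionality reduction."""
    coeffs = dctn(roi.astype(float), norm="ortho")
    return coeffs[:k, :k].ravel()
```

Such vectors, computed per tumor, would then enter the feature pool alongside the spatial-domain (shape/density, texture) features before cluster selection.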
Affiliation(s)
- Abolfazl Zargari
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, United States of America. These authors contributed equally to this work.
8. Riyahi S, Choi W, Liu CJ, Zhong H, Wu AJ, Mechalakos JG, Lu W. Quantifying local tumor morphological changes with Jacobian map for prediction of pathologic tumor response to chemo-radiotherapy in locally advanced esophageal cancer. Phys Med Biol 2018; 63:145020. [PMID: 29911659] [PMCID: PMC6064042] [DOI: 10.1088/1361-6560/aacd22]
Abstract
We proposed a framework to detect and quantify local tumor morphological changes due to chemo-radiotherapy (CRT) using a Jacobian map, and to extract quantitative radiomic features from the Jacobian map to predict pathologic tumor response in locally advanced esophageal cancer patients. In 20 patients who underwent CRT, a multi-resolution BSpline deformable registration was performed to register the follow-up (post-CRT) CT to the baseline CT image. The Jacobian map (J) was computed as the determinant of the gradient of the deformation vector field. The Jacobian map measures the ratio of local tumor volume change, where J < 1 indicates tumor shrinkage and J > 1 denotes expansion. The tumor was manually delineated and corresponding anatomical landmarks were generated on the baseline and follow-up images. Intensity, texture, and geometry features were then extracted from the Jacobian map of the tumor to quantify tumor morphological changes. The importance of each Jacobian feature in predicting pathologic tumor response was evaluated by both univariate and multivariate analysis. We constructed a multivariate prediction model using a support vector machine (SVM) classifier coupled with least absolute shrinkage and selection operator (LASSO) feature selection. The SVM-LASSO model was evaluated using ten-times repeated 10-fold cross-validation (10 × 10-fold CV). After registration, the average target registration error was 4.30 ± 1.09 mm (LR: 1.63 mm, AP: 1.59 mm, SI: 3.05 mm), indicating that the registration error was within two voxels and close to the 4 mm slice thickness. Visually, the Jacobian map showed smoothly varying local shrinkage and expansion regions in a tumor. Quantitatively, the average median Jacobian was 0.80 ± 0.10 for responder tumors and 1.05 ± 0.15 for non-responder tumors, indicating that on average responder tumors had 20% median volume shrinkage while non-responder tumors had 5% median volume expansion.
In univariate analysis, the minimum Jacobian (p = 0.009, AUC = 0.98) and median Jacobian (p = 0.004, AUC = 0.95) were the most significant predictors. The SVM-LASSO model achieved the highest accuracy when these two features were selected (sensitivity = 94.4%, specificity = 91.8%, AUC = 0.94). Novel features extracted from the Jacobian map quantified local tumor morphological changes using only baseline tumor contour without post-treatment tumor segmentation. The SVM-LASSO model using the median Jacobian and minimum Jacobian achieved high accuracy in predicting pathologic tumor response. The Jacobian map showed great potential for longitudinal evaluation of tumor response.
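The Jacobian map above is computed directly from the displacement field u as J(x) = det(I + ∇u(x)), the determinant of the gradient of the deformation x → x + u(x). A minimal sketch assuming a dense 3-D displacement field on a regular grid:

```python
import numpy as np

def jacobian_determinant_map(dvf, spacing=(1.0, 1.0, 1.0)):
    """Jacobian determinant of the transform x -> x + u(x).

    dvf: displacement field of shape (3, D, H, W) in physical units.
    Returns a (D, H, W) map where J < 1 indicates local shrinkage,
    J > 1 local expansion, and J <= 0 folding (a non-physical transform).
    """
    # grads[i][j] = du_i / dx_j, estimated by central finite differences
    grads = [np.gradient(dvf[i], *spacing) for i in range(3)]
    # Jacobian matrix of identity + displacement: delta_ij + du_i/dx_j
    a = [[grads[i][j] + (1.0 if i == j else 0.0) for j in range(3)]
         for i in range(3)]
    # 3x3 determinant, expanded along the first row
    return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
            - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
            + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
```

Median, minimum, and other statistics of this map inside the baseline tumor contour give exactly the kind of features (median Jacobian, minimum Jacobian) that the study found most predictive.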
Affiliation(s)
- Sadegh Riyahi
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Wookjin Choi
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Chia-Ju Liu
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Hualiang Zhong
- Department of Radiation Oncology, Henry Ford Hospital, Detroit, MI 48202, USA
- Abraham J. Wu
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- James G. Mechalakos
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Wei Lu
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
9. Danala G, Thai T, Gunderson CC, Moxley KM, Moore K, Mannel RS, Liu H, Zheng B, Qiu Y. Applying Quantitative CT Image Feature Analysis to Predict Response of Ovarian Cancer Patients to Chemotherapy. Acad Radiol 2017; 24:1233-1239. [PMID: 28554551] [PMCID: PMC5875685] [DOI: 10.1016/j.acra.2017.04.014]
Abstract
RATIONALE AND OBJECTIVES The study aimed to investigate the role of applying quantitative image features computed from computed tomography (CT) images for early prediction of tumor response to chemotherapy in the clinical trials for treating ovarian cancer patients. MATERIALS AND METHODS A dataset involving 91 patients was retrospectively assembled. Each patient had two sets of pre- and post-therapy CT images. A computer-aided detection scheme was applied to segment metastatic tumors previously tracked by radiologists on CT images and computed image features. Two initial feature pools were built using image features computed from pre-therapy CT images only and image feature difference computed from both pre- and post-therapy images. A feature selection method was applied to select optimal features, and an equal-weighted fusion method was used to generate a new quantitative imaging marker from each pool to predict 6-month progression-free survival. The prediction accuracy between quantitative imaging markers and the Response Evaluation Criteria in Solid Tumors (RECIST) criteria was also compared. RESULTS The highest areas under the receiver operating characteristic curve are 0.684 ± 0.056 and 0.771 ± 0.050 when using a single image feature computed from pre-therapy CT images and feature difference computed from pre- and post-therapy CT images, respectively. Using two corresponding fusion-based image markers, the areas under the receiver operating characteristic curve significantly increased to 0.810 ± 0.045 and 0.829 ± 0.043 (P < 0.05), respectively. Overall prediction accuracy levels are 71.4%, 80.2%, and 74.7% when using two imaging markers and RECIST, respectively. CONCLUSIONS This study demonstrated the feasibility of predicting patients' response to chemotherapy using quantitative imaging markers computed from pre-therapy CT images. However, using image feature difference computed between pre- and post-therapy CT images yielded higher prediction accuracy.
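The equal-weighted fusion step mentioned above can be sketched as min-max normalizing each selected feature across cases and averaging the normalized values into one marker score per case. A generic illustration; the paper's exact normalization is not stated in the abstract:

```python
import numpy as np

def fused_marker(feature_matrix):
    """Equal-weighted fusion of selected image features.

    feature_matrix: (n_cases, n_features) array of the selected optimal
    features. Each feature is min-max normalized to [0, 1] across cases,
    then the normalized features are averaged into a single quantitative
    marker score per case."""
    x = np.asarray(feature_matrix, float)
    lo, hi = x.min(axis=0), x.max(axis=0)
    rng = np.where(hi > lo, hi - lo, 1.0)  # guard against constant features
    normed = (x - lo) / rng
    return normed.mean(axis=1)
```

The resulting per-case scores are what gets evaluated against 6-month PFS labels via ROC analysis.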
Affiliation(s)
- Gopichandh Danala
- School of Electrical and Computer Engineering, University of Oklahoma, 101 David L. Boren Blvd, Norman, OK 73019
- Theresa Thai
- Health Science Center of University of Oklahoma, Oklahoma City, Oklahoma
- Katherine M Moxley
- Health Science Center of University of Oklahoma, Oklahoma City, Oklahoma
- Kathleen Moore
- Health Science Center of University of Oklahoma, Oklahoma City, Oklahoma
- Robert S Mannel
- Health Science Center of University of Oklahoma, Oklahoma City, Oklahoma
- Hong Liu
- School of Electrical and Computer Engineering, University of Oklahoma, 101 David L. Boren Blvd, Norman, OK 73019
- Bin Zheng
- School of Electrical and Computer Engineering, University of Oklahoma, 101 David L. Boren Blvd, Norman, OK 73019
- Yuchen Qiu
- School of Electrical and Computer Engineering, University of Oklahoma, 101 David L. Boren Blvd, Norman, OK 73019
10. Abdel-Nasser M, Moreno A, Rashwan HA, Puig D. Analyzing the evolution of breast tumors through flow fields and strain tensors. Pattern Recognit Lett 2017. [DOI: 10.1016/j.patrec.2016.11.003]
11. Wang Y, Qiu Y, Thai T, Moore K, Liu H, Zheng B. Applying a computer-aided scheme to detect a new radiographic image marker for prediction of chemotherapy outcome. BMC Med Imaging 2016; 16:52. [PMID: 27581075] [PMCID: PMC5006425] [DOI: 10.1186/s12880-016-0157-5]
Abstract
Background To investigate the feasibility of automatically segmenting visceral and subcutaneous fat areas from computed tomography (CT) images of ovarian cancer patients and applying the computed adiposity-related image features to predict chemotherapy outcome. Methods A computerized image processing scheme was developed to segment visceral and subcutaneous fat areas and compute adiposity-related image features. Logistic regression models were then applied to analyze the association between the scheme-generated assessment scores and progression-free survival (PFS), using a leave-one-case-out cross-validation method and a dataset involving 32 patients. Results The correlation coefficients between automated and radiologist's manual segmentation of visceral and subcutaneous fat areas were 0.76 and 0.89, respectively. The scheme-generated prediction scores using adiposity-related radiographic image features were significantly associated with patients' PFS (p < 0.01). Conclusion The computerized scheme enables more efficient and robust segmentation of visceral and subcutaneous fat areas, and the computed adiposity-related image features have the potential to improve accuracy in predicting chemotherapy outcome.
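A first step of such fat segmentation can be sketched as thresholding the CT slice in a fat Hounsfield-unit window. The [-190, -30] HU range is a commonly used adipose window and an assumption here, since the paper's exact thresholds are not stated in the abstract; separating visceral from subcutaneous fat additionally requires delineating the abdominal wall, which this sketch omits:

```python
import numpy as np

def fat_mask(ct_hu, lo=-190, hi=-30):
    """Adipose-tissue mask from a CT slice in Hounsfield units,
    via a simple HU window (assumed thresholds, see note above)."""
    return (ct_hu >= lo) & (ct_hu <= hi)

def fat_area_cm2(mask, pixel_spacing_mm=(0.7, 0.7)):
    """Fat area in cm^2 from a binary mask and in-plane pixel spacing (mm)."""
    px_area_mm2 = pixel_spacing_mm[0] * pixel_spacing_mm[1]
    return mask.sum() * px_area_mm2 / 100.0  # 100 mm^2 per cm^2
```

Per-slice areas like these, summed or averaged over the scan, are the kind of adiposity measure from which the study's image features would be derived.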
Affiliation(s)
- Yunzhi Wang
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK, 73019, USA
- Yuchen Qiu
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK, 73019, USA
- Theresa Thai
- Health Science Center of University of Oklahoma, Oklahoma City, OK, 73104, USA
- Kathleen Moore
- Health Science Center of University of Oklahoma, Oklahoma City, OK, 73104, USA
- Hong Liu
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK, 73019, USA
- Bin Zheng
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK, 73019, USA