1. Wang M, Jiang H. PST-Radiomics: a PET/CT lymphoma classification method based on pseudo spatial-temporal radiomic features and structured atrous recurrent convolutional neural network. Phys Med Biol 2023;68:235014. PMID: 37956448. DOI: 10.1088/1361-6560/ad0c0f.
Abstract
Objective. Existing radiomic methods tend to treat each isolated tumor as an inseparable whole when extracting radiomic features. As a result, they may discard critical intra-tumor metabolic heterogeneity (ITMH) information that helps distinguish tumor subtypes. To improve lymphoma classification performance, we propose a pseudo spatial-temporal radiomic method (PST-Radiomics) based on positron emission tomography/computed tomography (PET/CT). Approach. To exploit ITMH, we first construct a multi-threshold gross tumor volume sequence (GTVS). Next, we extract 1D radiomic features from the PET images for each volume in the GTVS, creating a pseudo spatial-temporal feature sequence (PSTFS) tightly interwoven with ITMH. We then reshape the PSTFS into 2D pseudo spatial-temporal feature maps (PSTFM), whose columns are the elements of the PSTFS. Finally, to learn from the PSTFM in an end-to-end manner, we build a lightweight pseudo spatial-temporal radiomic network (PSTR-Net), in which a structured atrous recurrent convolutional neural network serves as the PET branch to exploit the strong local dependencies in the PSTFM, and a residual convolutional neural network serves as the CT branch to exploit conventional radiomic features extracted from CT volumes. Main results. We validate PST-Radiomics on a PET/CT lymphoma subtype classification task. Experimental results quantitatively demonstrate its superiority over existing radiomic methods. Significance. Feature map visualization shows that the method performs complex feature selection while extracting hierarchical feature maps, qualitatively demonstrating its superiority.
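The multi-threshold volume sequence described in this abstract can be illustrated with a small sketch (all values and features here are illustrative, not those of the paper): thresholding a PET SUV map at increasing fractions of SUVmax yields a nested sequence of shrinking sub-volumes, and the per-volume 1D feature vectors stacked as columns form a 2D feature map.

```python
# Hedged sketch of a multi-threshold gross tumor volume sequence (GTVS):
# threshold a PET SUV map at increasing fractions of SUVmax, extract a toy
# 1D feature vector per sub-volume, and stack the vectors as columns of a
# 2D pseudo spatial-temporal feature map (PSTFM).

def gtv_sequence(suv, fractions):
    """Return one binary mask (set of voxel indices) per threshold fraction."""
    suv_max = max(suv.values())
    return [{v for v, s in suv.items() if s >= f * suv_max} for f in fractions]

def features_1d(suv, mask):
    """Toy radiomic features for one sub-volume: volume, mean SUV, max SUV."""
    vals = [suv[v] for v in mask]
    return [len(vals), sum(vals) / len(vals), max(vals)]

def pstfm(suv, fractions):
    """Columns of the 2D map are the 1D feature vectors along the sequence."""
    cols = [features_1d(suv, m) for m in gtv_sequence(suv, fractions)]
    return [list(row) for row in zip(*cols)]  # rows: features, cols: thresholds

# A tiny 1D "image": voxel index -> SUV value.
suv = {0: 1.0, 1: 2.0, 2: 4.0, 3: 8.0, 4: 8.0}
seq = gtv_sequence(suv, [0.25, 0.5, 1.0])
print([len(m) for m in seq])  # nested sub-volumes shrink: [4, 3, 2]
```

The nesting is what encodes the heterogeneity: hotter sub-regions survive higher thresholds, so feature trajectories along the columns trace the metabolic gradient inside the tumor.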
Affiliation(s)
- Meng Wang
- Software College, Northeastern University, Shenyang 110819, People's Republic of China
- Huiyan Jiang
- Software College, Northeastern University, Shenyang 110819, People's Republic of China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang 110819, People's Republic of China
2. Chen M, Guo Y, Wang P, Chen Q, Bai L, Wang S, Su Y, Wang L, Gong G. An Effective Approach to Improve the Automatic Segmentation and Classification Accuracy of Brain Metastasis by Combining Multi-phase Delay Enhanced MR Images. J Digit Imaging 2023;36:1782-1793. PMID: 37259008. PMCID: PMC10406988. DOI: 10.1007/s10278-023-00856-3.
Abstract
The objective of this study was to analyse the diffusion rule of the contrast media in multi-phase delayed enhanced magnetic resonance (MR) T1 images using radiomics and to construct an automatic classification and segmentation model of brain metastases (BM) based on a support vector machine (SVM) and Dpn-UNet. A total of 189 BM patients with 1047 metastases were enrolled. Contrast-enhanced MR images were obtained at 1, 3, 5, 10, 18, and 20 min following contrast medium injection. The tumour target volume was delineated, and the radiomics features were extracted and analysed. BM segmentation and classification models for the MR images at the different enhancement phases were constructed using Dpn-UNet and SVM, and differences between the models across enhancement times were compared. (1) The signal intensity for BM decreased with time delay and peaked at 3 min. (2) Among the 144 optimal radiomics features, 22 showed strong correlation with time (highest R-value = 0.82), while 41 showed strong correlation with volume (highest R-value = 0.99). (3) For the automatic segmentation of BM, the average dice similarity coefficients of both the training and test sets were highest at 10 min, reaching 0.92 and 0.82, respectively. (4) The area under the curve (AUC) for the classification of BM pathology type using single-phase MRI was highest at 10 min, reaching 0.674, whereas the AUC using the six-phase image combination was the highest overall, reaching 0.9596, a 42.3% improvement over single-phase imaging at 10 min. Multi-phase delayed enhancement based on radiomics can reflect the dynamic diffusion of contrast media in BM, more objectively reflect the pathological types, and significantly improve the accuracy of BM segmentation and classification.
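The dice similarity coefficient used above to score the automatic segmentations is a simple overlap measure, DSC = 2|A ∩ B| / (|A| + |B|), between the predicted and reference masks; a minimal sketch:

```python
# Dice similarity coefficient between two binary masks represented as
# sets of voxel coordinates: DSC = 2*|A & B| / (|A| + |B|).

def dice(pred, ref):
    """Dice overlap between predicted and reference segmentation masks."""
    if not pred and not ref:
        return 1.0  # convention: two empty masks agree perfectly
    return 2 * len(pred & ref) / (len(pred) + len(ref))

pred = {(0, 0), (0, 1), (1, 0), (1, 1)}
ref  = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(dice(pred, ref))  # 3 shared voxels out of 4+4 -> 0.75
```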
Affiliation(s)
- Mingming Chen
- Department of Radiation Physics, Shandong First Medical University Affiliated Cancer Hospital, Shandong Cancer Hospital and Institute (Shandong Cancer Hospital), Jinan, 250117, China
- College of Radiology, Shandong First Medical University & Shandong Academy of Medical Sciences, Jinan, 250117, China
- Yujie Guo
- Department of Radiation Physics, Shandong First Medical University Affiliated Cancer Hospital, Shandong Cancer Hospital and Institute (Shandong Cancer Hospital), Jinan, 250117, China
- Pengcheng Wang
- College of Radiology, Shandong First Medical University & Shandong Academy of Medical Sciences, Jinan, 250117, China
- Qi Chen
- MedMind Technology Co., Ltd, 100084, Beijing, China
- Lu Bai
- MedMind Technology Co., Ltd, 100084, Beijing, China
- Shaobin Wang
- MedMind Technology Co., Ltd, 100084, Beijing, China
- Ya Su
- Department of Radiation Physics, Shandong First Medical University Affiliated Cancer Hospital, Shandong Cancer Hospital and Institute (Shandong Cancer Hospital), Jinan, 250117, China
- Lizhen Wang
- Department of Radiation Physics, Shandong First Medical University Affiliated Cancer Hospital, Shandong Cancer Hospital and Institute (Shandong Cancer Hospital), Jinan, 250117, China
- Guanzhong Gong
- Department of Radiation Physics, Shandong First Medical University Affiliated Cancer Hospital, Shandong Cancer Hospital and Institute (Shandong Cancer Hospital), Jinan, 250117, China
- Department of Engineering Physics, Tsinghua University, Beijing, 100084, China
3. Lee W, Park HJ, Lee HJ, Jun E, Song KB, Hwang DW, Lee JH, Lim K, Kim N, Lee SS, Byun JH, Kim HJ, Kim SC. Preoperative data-based deep learning model for predicting postoperative survival in pancreatic cancer patients. Int J Surg 2022;105:106851. PMID: 36049618. DOI: 10.1016/j.ijsu.2022.106851.
Abstract
BACKGROUND Pancreatic ductal adenocarcinoma (PDAC) has a poor prognosis even after curative resection. A deep learning-based stratification of postoperative survival in the preoperative setting may aid treatment decisions and improve prognosis. This study aimed to develop a deep learning model based on preoperative data for predicting postoperative survival. METHODS Patients who underwent surgery for PDAC between January 2014 and May 2015 were included. Clinical data-based machine learning models and computed tomography (CT) data-based deep learning models were developed separately, and ensemble learning was used to combine the two models. The primary outcomes were the prediction of 2-year overall survival (OS) and 1-year recurrence-free survival (RFS). Model performance was measured by the area under the receiver operating characteristic curve (AUC) and compared with that of the American Joint Committee on Cancer (AJCC) 8th edition stage. RESULTS The median OS and RFS were 23 and 10 months in the training dataset (n = 229) and 22 and 11 months in the test dataset (n = 53), respectively. The AUCs of the ensemble model for predicting 2-year OS and 1-year RFS in the test dataset were 0.76 and 0.74, respectively. The performance of the ensemble model was comparable to that of the AJCC stage in predicting 2-year OS (AUC, 0.67; P = 0.35) and superior to it in predicting 1-year RFS (AUC, 0.54; P = 0.049). CONCLUSION AND RELEVANCE Our ensemble model based on routine preoperative variables showed good performance in predicting prognosis for PDAC patients after surgery.
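The simplest form of the ensemble step described here is combining the event probabilities of the clinical-data model and the imaging model. The sketch below averages the two (the weights and probabilities are illustrative; the paper's actual combination rule is not reproduced here):

```python
# Hedged sketch of probability-level ensembling: a weighted average of the
# survival-event probabilities from a clinical model and an imaging model.
# Weights are illustrative, not the paper's.

def ensemble_prob(p_clinical, p_imaging, w_clinical=0.5):
    """Weighted average of two models' predicted event probabilities."""
    return w_clinical * p_clinical + (1 - w_clinical) * p_imaging

# Per-patient probabilities of 2-year overall survival from each model.
p_clin = [0.80, 0.30, 0.55]
p_img  = [0.60, 0.40, 0.65]
combined = [round(ensemble_prob(c, i), 6) for c, i in zip(p_clin, p_img)]
print(combined)  # [0.7, 0.35, 0.6]
```

In practice the weight would be tuned on the training set, and the combined probabilities fed into the AUC evaluation described in the results.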
Affiliation(s)
- Woohyung Lee
- Division of Hepatobiliary and Pancreatic Surgery, Department of Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Hyo Jung Park
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Hack-Jin Lee
- R&D Team, DoAI Inc., Seongnam-si, Gyeonggi-do, Republic of Korea
- Eunsung Jun
- Division of Hepatobiliary and Pancreatic Surgery, Department of Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Ki Byung Song
- Division of Hepatobiliary and Pancreatic Surgery, Department of Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Dae Wook Hwang
- Division of Hepatobiliary and Pancreatic Surgery, Department of Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Jae Hoon Lee
- Division of Hepatobiliary and Pancreatic Surgery, Department of Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Kyongmook Lim
- R&D Team, DoAI Inc., Seongnam-si, Gyeonggi-do, Republic of Korea
- Namkug Kim
- Department of Convergence Medicine and Radiology, Research Institute of Radiology and Institute of Biomedical Engineering, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Seung Soo Lee
- Division of Hepatobiliary and Pancreatic Surgery, Department of Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Jae Ho Byun
- Division of Hepatobiliary and Pancreatic Surgery, Department of Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Hyoung Jung Kim
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Song Cheol Kim
- Division of Hepatobiliary and Pancreatic Surgery, Department of Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
4. Schuurmans M, Alves N, Vendittelli P, Huisman H, Hermans J. Setting the Research Agenda for Clinical Artificial Intelligence in Pancreatic Adenocarcinoma Imaging. Cancers (Basel) 2022;14:3498. PMID: 35884559. PMCID: PMC9316850. DOI: 10.3390/cancers14143498.
Abstract
Simple Summary Pancreatic ductal adenocarcinoma (PDAC) is one of the deadliest cancers worldwide, associated with a 98% loss of life expectancy and a 30% increase in disability-adjusted life years. Image-based artificial intelligence (AI) can help improve outcomes for PDAC, given that current clinical guidelines are non-uniform and lack evidence-based consensus. However, research on image-based AI for PDAC is too scattered and of insufficient quality to be incorporated into clinical workflows. In this review, an international, multi-disciplinary team of the world's leading experts in pancreatic cancer breaks down the patient pathway and pinpoints the current clinical touchpoints at each stage. The available PDAC imaging AI literature addressing each pathway stage is then rigorously analyzed, and current performance and pitfalls are identified in a comprehensive overview. Finally, a future research agenda for clinically relevant, image-driven AI in PDAC is proposed. Abstract Pancreatic ductal adenocarcinoma (PDAC), estimated to become the second leading cause of cancer deaths in western societies by 2030, was flagged as a neglected cancer by the European Commission and the United States Congress. Due to a lack of investment in research and development, combined with a complex and aggressive tumour biology, PDAC overall survival has not significantly improved over the past decades. Cross-sectional imaging and histopathology play a crucial role throughout the patient pathway. However, current clinical guidelines for diagnostic workup, patient stratification, treatment response assessment, and follow-up are non-uniform and lack evidence-based consensus. Artificial intelligence (AI) can leverage multimodal data to improve patient outcomes, but PDAC AI research is too scattered and lacking in quality to be incorporated into clinical workflows. This review describes the patient pathway and derives touchpoints for image-based AI research in collaboration with a multi-disciplinary, multi-institutional expert panel. The literature exploring AI to address these touchpoints is thoroughly retrieved and analysed to identify existing trends and knowledge gaps. The results show an absence of multi-institutional, well-curated datasets, an essential building block for robust AI applications. Furthermore, most research is unimodal, does not use state-of-the-art AI techniques, and lacks reliable ground truth. On this basis, a future research agenda for clinically relevant, image-driven AI in PDAC is proposed.
Affiliation(s)
- Megan Schuurmans (corresponding author)
- Diagnostic Image Analysis Group, Radboud University Medical Center, 6500 HB Nijmegen, The Netherlands
- Natália Alves (corresponding author)
- Diagnostic Image Analysis Group, Radboud University Medical Center, 6500 HB Nijmegen, The Netherlands
- Pierpaolo Vendittelli
- Diagnostic Image Analysis Group, Radboud University Medical Center, 6500 HB Nijmegen, The Netherlands
- Henkjan Huisman
- Diagnostic Image Analysis Group, Radboud University Medical Center, 6500 HB Nijmegen, The Netherlands
- John Hermans
- Department of Medical Imaging, Radboud University Medical Center, 6500 HB Nijmegen, The Netherlands
5. A Semi-Unsupervised Segmentation Methodology Based on Texture Recognition for Radiomics: A Preliminary Study on Brain Tumours. Electronics 2022. DOI: 10.3390/electronics11101573.
Abstract
Because of the intrinsic anatomic complexity of the brain structures, brain tumors have high mortality and disability rates, and an early diagnosis is mandatory to contain damage. The commonly used biopsy is the diagnostic gold standard, but it is invasive and, due to intratumoral heterogeneity, biopsies may lead to an incorrect result. Moreover, some tumors are not resectable if located in critical eloquent areas. On the other hand, medical imaging procedures can evaluate the entire tumor in a non-invasive and reproducible way. Radiomics is an emerging diagnostic technique based on quantitative medical image analysis, which makes use of data provided by non-invasive diagnostic techniques such as X-ray, computed tomography (CT), magnetic resonance (MR), and positron emission tomography (PET). Radiomics techniques require the comprehensive analysis of huge numbers of medical images to extract a large number of useful phenotypic features (usually called radiomic biomarkers). The goal is to explore and obtain the associations between tumor features, diagnoses, and patients' prognoses in order to choose the best treatments and maximize the patient's survival rate. Current radiomics techniques are not standardized in terms of segmentation, feature extraction, and selection; moreover, the decision on suitable therapies still requires the supervision of an expert doctor. In this paper, we propose a semi-automatic methodology aimed at helping the identification and segmentation of malignant tissues by combining binary texture recognition, a growing-area algorithm, and machine learning techniques. In particular, the proposed method not only helps to better identify pathologic tissues but also permits fast analysis of the huge amount of data, in DICOM format, provided by non-invasive diagnostic techniques. A preliminary experimental assessment was conducted on a real MRI database of brain tumors. The method was compared with the segmentation tools of the 3D Slicer software. The obtained results are quite promising and demonstrate the potential of the proposed semi-unsupervised segmentation methodology.
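The growing-area step mentioned in this abstract can be sketched as a breadth-first expansion from a seed voxel to neighbours whose intensity stays close to the seed's (the texture-recognition component the paper combines it with is omitted here, and the tolerance rule is illustrative):

```python
# Minimal region-growing sketch: starting from a seed pixel, breadth-first
# expand to 4-connected neighbours whose intensity lies within +/- tol of
# the seed intensity.
from collections import deque

def region_grow(image, seed, tol):
    """Return the set of pixels reachable from seed within the tolerance."""
    h, w = len(image), len(image[0])
    base = image[seed[0]][seed[1]]
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in region \
                    and abs(image[nr][nc] - base) <= tol:
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

image = [
    [9, 9, 1, 1],
    [9, 9, 1, 1],
    [1, 1, 1, 1],
]
print(len(region_grow(image, (0, 0), tol=2)))  # the 2x2 bright patch: 4
```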
6.
Abstract
Basic pancreatic lesions are characterized by location, size, shape, number, capsule, calcification/calculi, hemorrhage, cystic degeneration, fibrosis, pancreatic duct alterations, and microvessels. One or more basic lesions form a particular pancreatic disease. Recognizing the characteristic imaging features of basic pancreatic lesions and their relationships with pathology therefore aids in differentiating the variety of pancreatic diseases. The purpose of this study is to review the pathological and imaging features of the basic pancreatic lesions.
7. Chaddad A, Hassan L, Desrosiers C. Deep Radiomic Analysis for Predicting Coronavirus Disease 2019 in Computerized Tomography and X-Ray Images. IEEE Trans Neural Netw Learn Syst 2022;33:3-11. PMID: 34669582. DOI: 10.1109/TNNLS.2021.3119071.
Abstract
This article proposes to encode the distribution of features learned by a convolutional neural network (CNN) using a Gaussian mixture model (GMM). These parametric features, called GMM-CNN, are derived from chest computed tomography (CT) and X-ray scans of patients with coronavirus disease 2019 (COVID-19). We use the proposed GMM-CNN features as input to a robust classifier based on random forests (RFs) to differentiate between COVID-19 and other pneumonia cases. Our experiments assess the advantage of GMM-CNN features compared with standard CNN classification on test images. Using an RF classifier (80% of samples for training; 20% for testing), GMM-CNN features encoded with two mixture components provided significantly better performance than standard CNN classification. Specifically, our method achieved an accuracy in the range of 96.00%-96.70% and an area under the receiver operating characteristic (ROC) curve in the range of 99.29%-99.45%, with the best performance obtained by combining GMM-CNN features from both CT and X-ray images. Our results suggest that the proposed GMM-CNN features could improve the prediction of COVID-19 in chest CT and X-ray scans.
8. Li X, Gao H, Zhu J, Huang Y, Zhu Y, Huang W, Li Z, Sun K, Liu Z, Tian J, Li B. 3D Deep Learning Model for the Pretreatment Evaluation of Treatment Response in Esophageal Carcinoma: A Prospective Study (ChiCTR2000039279). Int J Radiat Oncol Biol Phys 2021;111:926-935. PMID: 34229050. DOI: 10.1016/j.ijrobp.2021.06.033.
Abstract
PURPOSE To develop and validate a pretreatment computed tomography (CT)-based deep-learning (DL) model for predicting the treatment response to concurrent chemoradiation therapy (CCRT) among patients with locally advanced thoracic esophageal squamous cell carcinoma (TESCC). METHODS AND MATERIALS We conducted a prospective, multicenter study on the therapeutic efficacy of CCRT among TESCC patients across 9 hospitals in China (ChiCTR2000039279). A total of 306 patients with locally advanced TESCC diagnosed by histopathology from August 2015 to May 2020 were included in this study. A 3-dimensional DL radiomics model (3D-DLRM) was developed and validated based on pretreatment CT images to predict the response to CCRT. Furthermore, the prediction performance of the newly developed 3D-DLRM was analyzed according to 3 categories: radiation therapy plan, radiation field, and prescription dose used. RESULTS The 3D-DLRM achieved good prediction performance, with areas under the receiver operating characteristic curve of 0.897 (95% confidence interval, 0.840-0.959) for the training cohort and 0.833 (95% confidence interval, 0.654-1.000) for the validation cohort. Specifically, the 3D-DLRM accurately predicted patients who would not respond to CCRT, with a positive predictive value (PPV) of 100% for the validation cohort. Moreover, the 3D-DLRM performed well in all 3 categories, each with areas under the receiver operating characteristic curve of >0.8 and positive predictive values of approximately 100%. CONCLUSION The proposed pretreatment CT-based 3D-DLRM provides a potential tool for predicting the response to CCRT among patients with locally advanced TESCC. With the help of precise pretreatment prediction, we may guide the individualized treatment of patients and improve survival.
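AUC figures like those reported above can be computed directly from per-patient scores via the Mann-Whitney formulation: the AUC is the probability that a randomly chosen responder receives a higher score than a randomly chosen non-responder, with ties counting half. A minimal sketch (scores and labels are hypothetical):

```python
# Rank-based (Mann-Whitney) computation of the area under the ROC curve.

def auc(scores, labels):
    """AUC = P(random positive scored above random negative); ties count 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model scores and true response labels (1 = responder).
scores = [0.9, 0.7, 0.4, 0.7, 0.3, 0.2]
labels = [1,   1,   1,   0,   0,   0]
print(round(auc(scores, labels), 3))  # 7.5 wins out of 9 pairs -> 0.833
```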
Affiliation(s)
- Xiaoqin Li
- Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China; Shandong Medical Imaging and Radiotherapy Engineering Center (SMIREC), Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Han Gao
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine and Engineering, Beihang University, Beijing, China; CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Jian Zhu
- Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China; Shandong Medical Imaging and Radiotherapy Engineering Center (SMIREC), Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Yong Huang
- Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China; Shandong Medical Imaging and Radiotherapy Engineering Center (SMIREC), Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Yongbei Zhu
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine and Engineering, Beihang University, Beijing, China; CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Wei Huang
- Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China; Shandong Medical Imaging and Radiotherapy Engineering Center (SMIREC), Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Zhenjiang Li
- Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China; Shandong Medical Imaging and Radiotherapy Engineering Center (SMIREC), Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Kai Sun
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine and Engineering, Beihang University, Beijing, China; Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, Shaanxi, China
- Zhenyu Liu
- CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China; CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Jie Tian
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine and Engineering, Beihang University, Beijing, China; CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, Shaanxi, China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology, Beijing, China
- Baosheng Li
- Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China; Shandong Medical Imaging and Radiotherapy Engineering Center (SMIREC), Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
9.
Abstract
PURPOSE OF REVIEW Artificial intelligence has become popular in medical applications, specifically as a clinical support tool for computer-aided diagnosis. These tools are typically applied to medical data (i.e. images, molecular data, clinical variables, etc.), with statistical and machine-learning methods used to measure model performance. In this review, we summarize and discuss the most recent radiomic pipelines used for clinical analysis. RECENT FINDINGS Currently, the management of only a limited set of cancers benefits from artificial intelligence, mostly through computer-aided diagnosis, which avoids a biopsy analysis that presents additional risks and costs. Most artificial intelligence tools are based on imaging features, known as radiomic analysis, which can be refined into predictive models built on noninvasively acquired imaging data. This review explores the progress of artificial intelligence-based radiomic tools for clinical applications, with a brief description of the necessary technical steps. We describe new radiomic approaches based on deep-learning techniques and explain how deep radiomic models can benefit from deep convolutional neural networks and be applied to limited data sets. SUMMARY Before radiomic algorithms can be adopted, further investigations are recommended to involve deep learning in radiomic models, with additional validation steps on various cancer types.
Affiliation(s)
- Ahmad Chaddad
- School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin, China
- Yousef Katib
- Department of Radiology, Taibah University, Al-Madinah, Saudi Arabia
- Lama Hassan
- School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin, China
10. Magnetic Resonance Imaging Based Radiomic Models of Prostate Cancer: A Narrative Review. Cancers (Basel) 2021;13:552. PMID: 33535569. PMCID: PMC7867056. DOI: 10.3390/cancers13030552.
Abstract
Simple Summary The increasing interest in implementing artificial intelligence in radiomic models has occurred alongside advancement in the tools used for computer-aided diagnosis. Such tools typically apply both statistical and machine learning methodologies to assess the various modalities used in medical image analysis. Specific to prostate cancer, the radiomics pipeline has multiple facets that are amenable to improvement. This review discusses the steps of a magnetic resonance imaging based radiomics pipeline. Present successes, existing opportunities for refinement, and the most pertinent pending steps leading to clinical validation are highlighted. Abstract The management of prostate cancer (PCa) depends on biomarkers of biological aggression, including an invasive biopsy to facilitate a histopathological assessment of the tumor's grade. This review explores the technical processes of applying magnetic resonance imaging based radiomic models to the evaluation of PCa. By exploring how a deep radiomics approach further optimizes the prediction of a PCa's grade group, it will be clear how this integration of artificial intelligence mitigates the major technological challenges faced by a traditional radiomic model: image acquisition, small data sets, image processing, labeling/segmentation, informative features, predicting molecular features, and incorporating predictive models. Other potential impacts of artificial intelligence on the personalized treatment of PCa are also discussed. The role of deep radiomics analysis, a deep texture analysis that extracts features from convolutional neural network layers, is highlighted. Existing clinical work and upcoming clinical trials are reviewed, directing investigators to pertinent future directions in the field. For future progress to result in clinical translation, the field will likely require multi-institutional collaboration in producing prospectively populated and expertly labeled imaging libraries.