1. Chen M, Wu S, Zhao W, Zhou Y, Zhou Y, Wang G. Application of deep learning to auto-delineation of target volumes and organs at risk in radiotherapy. Cancer Radiother 2021; 26:494-501. [PMID: 34711488] [DOI: 10.1016/j.canrad.2021.08.020]
Abstract
Technological advances have heralded the arrival of precision radiotherapy (RT), increasing the therapeutic ratio and reducing treatment side effects. Contouring of target volumes (TV) and organs at risk (OARs) in RT is a complicated process. In recent years, automatic contouring of TV and OARs has developed rapidly thanks to advances in deep learning (DL). This technology has the potential to save time and to reduce intra- and inter-observer variability. In this paper, the authors provide an overview of RT, introduce the concept of DL, summarize the data characteristics of the included literature, outline the challenges DL is likely to face, and discuss possible research directions.
Affiliation(s)
- M Chen: Department of Radiation Oncology, First Affiliated Hospital, Bengbu Medical College, Bengbu, Anhui 233004, China
- S Wu: Department of Radiation Oncology, First Affiliated Hospital, Bengbu Medical College, Bengbu, Anhui 233004, China
- W Zhao: Bengbu Medical College, Bengbu, Anhui 233030, China
- Y Zhou: Department of Radiation Oncology, First Affiliated Hospital, Bengbu Medical College, Bengbu, Anhui 233004, China
- Y Zhou: Department of Radiation Oncology, First Affiliated Hospital, Bengbu Medical College, Bengbu, Anhui 233004, China
- G Wang: Department of Radiation Oncology, First Affiliated Hospital, Bengbu Medical College, Bengbu, Anhui 233004, China
2. Yin S, Peng Q, Li H, Zhang Z, You X, Fischer K, Furth SL, Fan Y, Tasian GE. Multi-instance deep learning of ultrasound imaging data for pattern classification of congenital abnormalities of the kidney and urinary tract in children. Urology 2020; 142:183-189. [PMID: 32445770] [PMCID: PMC7387180] [DOI: 10.1016/j.urology.2020.05.019]
Abstract
OBJECTIVE To reliably and quickly diagnose children with posterior urethral valves (PUV), we developed a multi-instance deep learning method to automate image analysis. METHODS We built a robust pattern classifier to distinguish 86 children with PUV from 71 children with mild unilateral hydronephrosis based on ultrasound images (3504 in sagittal view and 2558 in transverse view) obtained during routine clinical care. RESULTS The multi-instance deep learning classifier performed better than classifiers built on either single sagittal images or single transverse images. In particular, the deep learning classifiers built on single images in the sagittal view and single images in the transverse view obtained area under the receiver operating characteristic curve (AUC) values of 0.796 ± 0.064 and 0.815 ± 0.071, respectively. AUC values of the multi-instance deep learning classifiers built on images in the sagittal and transverse views with a mean pooling operation were 0.949 ± 0.035 and 0.954 ± 0.033, respectively. The multi-instance deep learning classifier built on images in both the sagittal and transverse views with a mean pooling operation obtained an AUC of 0.961 ± 0.026, with a classification rate of 0.925 ± 0.060, specificity of 0.986 ± 0.032, and sensitivity of 0.873 ± 0.120. Discriminative regions of the kidney located using classification activation mapping demonstrated that the deep learning techniques could identify meaningful anatomical features from ultrasound images. CONCLUSION The multi-instance deep learning method provides an automatic and accurate means to extract informative features from ultrasound images and discriminate infants with PUV from male children with unilateral hydronephrosis.
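The mean pooling operation described in this abstract treats each patient as a "bag" of ultrasound images and averages per-image feature vectors before bag-level classification. The following is a minimal sketch of that aggregation step only; the feature dimensions, weights, and logistic bag classifier are hypothetical stand-ins, not the paper's CNN.

```python
import numpy as np

def mean_pool_bag(instance_features):
    """Aggregate per-image feature vectors of one patient (a 'bag') by mean pooling."""
    return np.mean(instance_features, axis=0)

def bag_probability(instance_features, w, b):
    """Bag-level class probability: a logistic model on the pooled feature vector
    (a simplified stand-in for the paper's network head)."""
    pooled = mean_pool_bag(instance_features)
    return 1.0 / (1.0 + np.exp(-(pooled @ w + b)))

# Toy example: one patient with 3 ultrasound images, each summarized by a
# 4-dimensional feature vector (in the paper these would come from a CNN).
bag = np.array([[0.2, 1.0, 0.5, 0.1],
                [0.4, 0.8, 0.6, 0.0],
                [0.3, 0.9, 0.4, 0.2]])
w = np.array([1.5, -0.5, 2.0, 0.0])   # hypothetical classifier weights
b = -0.5
p = bag_probability(bag, w, b)        # single probability for the whole bag
```

Because pooling happens before classification, the prediction uses evidence from all of a patient's images at once, which is consistent with the reported AUC gain over single-image classifiers.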
Affiliation(s)
- Shi Yin: School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, China; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
- Qinmu Peng: School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, China
- Hongming Li: Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
- Zhengqiang Zhang: School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, China
- Xinge You: School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, China
- Katherine Fischer: Department of Surgery, Division of Pediatric Urology, The Children's Hospital of Philadelphia, Philadelphia, PA; Center for Pediatric Clinical Effectiveness, The Children's Hospital of Philadelphia, Philadelphia, PA
- Susan L Furth: Department of Pediatrics, Division of Pediatric Nephrology, The Children's Hospital of Philadelphia, Philadelphia, PA
- Yong Fan: Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
- Gregory E Tasian: Department of Surgery, Division of Pediatric Urology, The Children's Hospital of Philadelphia, Philadelphia, PA; Center for Pediatric Clinical Effectiveness, The Children's Hospital of Philadelphia, Philadelphia, PA; Department of Biostatistics, Epidemiology, and Informatics, The University of Pennsylvania, Philadelphia, PA
3. Luo Y, Chen S, Valdes G. Machine learning for radiation outcome modeling and prediction. Med Phys 2020; 47:e178-e184. [DOI: 10.1002/mp.13570]
Affiliation(s)
- Yi Luo: Department of Radiation Oncology, University of Michigan, Ann Arbor, MI 48103, USA
- Shifeng Chen: Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD 21201, USA
- Gilmer Valdes: Department of Radiation Oncology, University of California, San Francisco, CA 94158, USA
4. Liu H, Li H, Habes M, Li Y, Boimel P, Janopaul-Naylor J, Xiao Y, Ben-Josef E, Fan Y. Robust collaborative clustering of subjects and radiomic features for cancer prognosis. IEEE Trans Biomed Eng 2020; 67:2735-2744. [PMID: 31995474] [DOI: 10.1109/tbme.2020.2969839]
Abstract
Feature dimensionality reduction plays an important role in radiomic studies with a large number of features. However, conventional radiomic approaches may suffer from noise, and feature dimensionality reduction techniques are not equipped to exploit latent supervision information in the patient data under study, such as differences between patients, to learn discriminative low-dimensional representations. To achieve robustness to noise and feature dimensionality reduction with improved discriminative power, we develop a robust collaborative clustering method that simultaneously clusters patients and radiomic features into distinct groups under adaptive sparse regularization. Our method is built upon matrix tri-factorization enhanced by adaptive sparsity regularization for simultaneous feature dimensionality reduction and denoising. In particular, latent grouping information of patients with distinct radiomic features is learned and used as supervision to guide the feature dimensionality reduction, and noise in the radiomic features is adaptively isolated in a Bayesian framework under a general assumption of Laplacian distributions of transform-domain coefficients. Experiments on synthetic data have demonstrated the effectiveness of the proposed approach in data clustering, and evaluation on an FDG-PET/CT dataset of rectal cancer patients has demonstrated that the proposed method outperforms alternative methods in terms of both patient stratification and prediction of patient clinical outcomes.
5. Liu H, Li H, Boimel P, Janopaul-Naylor J, Zhong H, Xiao Y, Ben-Josef E, Fan Y. Collaborative clustering of subjects and radiomic features for predicting clinical outcomes of rectal cancer patients. Proceedings. IEEE International Symposium on Biomedical Imaging 2019; 2019:1303-1306. [PMID: 31803347] [DOI: 10.1109/isbi.2019.8759512]
Abstract
Most machine learning approaches in radiomics studies ignore the underlying differences between radiomic features computed from heterogeneous groups of patients, and intrinsic correlations among the features have not yet been fully exploited. To better predict clinical outcomes of cancer patients, we adopt an unsupervised machine learning method that simultaneously stratifies cancer patients into distinct risk groups based on their radiomic features and learns low-dimensional representations of those features for robust prediction of clinical outcomes. Based on nonnegative matrix tri-factorization techniques, the proposed method applies collaborative clustering to the radiomic features of cancer patients to obtain clusters of both the patients and their radiomic features, so that patients with distinct imaging patterns are stratified into different risk groups and highly correlated radiomic features are grouped in the same feature clusters. Experiments on an FDG-PET/CT dataset of rectal cancer patients have demonstrated that the proposed method facilitates better stratification of patients with distinct survival patterns and learning of more effective low-dimensional feature representations, ultimately leading to accurate prediction of patient survival and outperforming conventional methods under comparison.
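The collaborative clustering in this abstract rests on nonnegative matrix tri-factorization (NMTF): a patients-by-features matrix X is approximated as F S Gᵀ, where the rows of F cluster patients and the rows of G cluster features. The sketch below uses generic multiplicative updates on toy data to illustrate the idea only; it omits the paper's regularization and survival analysis, and all dimensions and data are invented.

```python
import numpy as np

def nmtf(X, k, l, n_iter=500, eps=1e-9, seed=0):
    """Nonnegative matrix tri-factorization X ~ F @ S @ G.T via standard
    multiplicative updates. Rows of X are patients, columns are radiomic
    features; argmax over rows of F / G gives patient / feature cluster
    assignments. A generic NMTF sketch, not the paper's regularized method."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    F = rng.random((m, k)) + eps   # patient cluster indicators (m x k)
    S = rng.random((k, l)) + eps   # block association matrix (k x l)
    G = rng.random((n, l)) + eps   # feature cluster indicators (n x l)
    for _ in range(n_iter):
        F *= (X @ G @ S.T) / (F @ S @ G.T @ G @ S.T + eps)
        S *= (F.T @ X @ G) / (F.T @ F @ S @ G.T @ G + eps)
        G *= (X.T @ F @ S) / (G @ S.T @ F.T @ F @ S + eps)
    return F, S, G

# Toy data: two patient groups, each expressing a distinct block of features.
X = np.array([[5., 5., 0., 0.],
              [4., 6., 0., 0.],
              [0., 0., 5., 5.],
              [0., 0., 6., 4.]])
F, S, G = nmtf(X, k=2, l=2)
patient_clusters = F.argmax(axis=1)   # risk-group assignment per patient
feature_clusters = G.argmax(axis=1)   # cluster assignment per radiomic feature
rel_err = np.linalg.norm(X - F @ S @ G.T) / np.linalg.norm(X)
```

On this block-structured toy matrix the factorization recovers the two patient groups and the two feature blocks jointly, which is the "collaborative" aspect the abstract describes.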
Affiliation(s)
- Hangfan Liu: Center for Biomedical Image Computing and Analysis, Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Hongming Li: Center for Biomedical Image Computing and Analysis, Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Pamela Boimel: Department of Radiation Oncology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- James Janopaul-Naylor: Department of Radiation Oncology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Haoyu Zhong: Department of Radiation Oncology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Ying Xiao: Department of Radiation Oncology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Edgar Ben-Josef: Department of Radiation Oncology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Yong Fan: Center for Biomedical Image Computing and Analysis, Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
6. Meyer P, Noblet V, Mazzara C, Lallement A. Survey on deep learning for radiotherapy. Comput Biol Med 2018; 98:126-146. [PMID: 29787940] [DOI: 10.1016/j.compbiomed.2018.05.018]
Abstract
More than 50% of cancer patients are treated with radiotherapy, either exclusively or in combination with other methods. The planning and delivery of radiotherapy treatment is a complex process, but it can now be greatly facilitated by artificial intelligence technology. Deep learning is the fastest-growing field in artificial intelligence and has been successfully used in recent years in many domains, including medicine. In this article, we first explain the concept of deep learning, placing it in the broader context of machine learning. The most common network architectures are presented, with a more specific focus on convolutional neural networks. We then review the published works on deep learning methods applicable to radiotherapy, classified into seven categories related to the patient workflow, which can provide some insight into potential future applications. We have attempted to make this paper accessible to both the radiotherapy and deep learning communities, and hope that it will inspire new collaborations between these two communities to develop dedicated radiotherapy applications.
Affiliation(s)
- Philippe Meyer: Department of Medical Physics, Paul Strauss Center, Strasbourg, France