1
Nguyen D, Balagopal A, Bai T, Dohopolski M, Lin MH, Jiang S. Prior guided deep difference meta-learner for fast adaptation to stylized segmentation. Mach Learn Sci Technol 2025;6:025016. [PMID: 40247921] [PMCID: PMC12001319] [DOI: 10.1088/2632-2153/adc970]
Abstract
Radiotherapy treatment planning requires segmenting anatomical structures in various styles, influenced by guidelines, protocols, preferences, or dose planning needs. Deep learning-based auto-segmentation models, trained on anatomical definitions, may not match local clinicians' styles at new institutions, and adapting these models can be challenging without sufficient resources. We hypothesize that consistent differences between segmentation styles and anatomical definitions can be learned from the first few patients and applied to pre-trained models for more precise segmentation. We propose a Prior-guided deep difference meta-learner (DDL) to learn and adapt these differences. We collected data from 440 patients for model development and 30 for testing. The dataset includes contours of the prostate clinical target volume (CTV), parotid, and rectum. We developed a deep learning framework that segments new images in a matching style using example styles as a prior, without model retraining. The pre-trained segmentation models were adapted to three different clinician styles for post-operative prostate CTV, parotid gland, and rectum segmentation. We tested the model's ability to learn unseen styles and compared its performance with transfer learning, using varying amounts of prior patient style data (0-10 patients). Performance was quantitatively evaluated using the Dice similarity coefficient (DSC) and Hausdorff distance. With exposure to only three patients, the average DSC (%) improved from 78.6, 71.9, 63.0, 69.6, 52.2, and 46.3 to 84.4, 77.8, 73.0, 77.8, 70.5, and 68.1 for CTVstyle1, CTVstyle2, CTVstyle3, Parotidsuperficial, Rectumsuperior, and Rectumposterior, respectively. The proposed Prior-guided DDL is a fast and effortless network for adapting a structure to new styles. The improved segmentation accuracy may reduce contour editing time, providing a more efficient and streamlined clinical workflow.
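The two metrics reported above are standard and easy to reproduce. A minimal sketch, assuming 3D binary NumPy masks; illustrative only, not the authors' code:

```python
# Illustrative sketch only (not the authors' code): DSC and Hausdorff
# distance between two 3D binary masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_similarity(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient (%) between two binary masks."""
    intersection = np.logical_and(pred, gt).sum()
    return 100.0 * 2.0 * intersection / (pred.sum() + gt.sum())

def hausdorff_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the two mask point sets."""
    p, g = np.argwhere(pred), np.argwhere(gt)  # voxel coordinates
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])
```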
Affiliation(s)
- Dan Nguyen, Anjali Balagopal, Ti Bai, Michael Dohopolski, Mu-Han Lin, Steve Jiang
- Medical Artificial Intelligence and Automation (MAIA) Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
2
Arjmandi N, Mosleh-Shirazi MA, Mohebbi S, Nasseri S, Mehdizadeh A, Pishevar Z, Hosseini S, Tehranizadeh AA, Momennezhad M. Evaluating the dosimetric impact of deep-learning-based auto-segmentation in prostate cancer radiotherapy: Insights into real-world clinical implementation and inter-observer variability. J Appl Clin Med Phys 2025;26:e14569. [PMID: 39616629] [PMCID: PMC11905246] [DOI: 10.1002/acm2.14569]
Abstract
PURPOSE This study aimed to investigate the dosimetric impact of deep-learning-based auto-contouring for clinical target volume (CTV) and organs-at-risk (OARs) delineation in prostate cancer radiotherapy planning. Additionally, we compared the geometric accuracy of the auto-contouring system with the variability observed between human experts. METHODS We evaluated 28 planning CT volumes, each with three contour sets: reference original contours (OC), auto-segmented contours (AC), and expert-defined manual contours (EC). We generated 3D-CRT and intensity-modulated radiation therapy (IMRT) plans for each contour set and compared their dosimetric characteristics using dose-volume histograms (DVHs), homogeneity index (HI), conformity index (CI), and gamma pass rate (3%/3 mm). RESULTS The geometric differences between the automated contours and both their original manual reference contours and a second set of manually generated contours were smaller than the differences between the two manually contoured sets for the bladder, right femoral head (RFH), and left femoral head (LFH). Furthermore, dose distributions based on planning target volumes (PTVs) derived from automatically contoured CTVs and auto-contoured OARs were consistent with plans based on reference contours across all evaluated cases for both 3D-CRT and IMRT plans. For example, in IMRT plans, the average D95 for PTVs was 77.71 ± 0.53 Gy for EC plans, 77.58 ± 0.69 Gy for OC plans, and 77.62 ± 0.38 Gy for AC plans. Automated contouring significantly reduced contouring time, averaging 0.53 ± 0.08 min compared with 24.9 ± 4.5 min for manual delineation. CONCLUSION Our automated contouring system can reduce inter-expert variability and achieve dosimetric accuracy comparable to gold-standard reference contours, highlighting its potential for streamlining clinical workflows. The quantitative analysis revealed no consistent trend of increasing or decreasing doses to PTVs derived from automatically contoured CTVs or to OARs due to automated contours, indicating minimal impact on treatment outcomes. These findings support the clinical feasibility of our deep-learning-based auto-contouring model for prostate cancer radiotherapy planning.
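For reference, a hedged sketch of two of the plan-quality indices used above, using one common formulation of each; exact definitions of HI and CI vary between studies, so these formulas are an assumption:

```python
# Hedged sketch: one common formulation of HI and CI; definitions vary.
import numpy as np

def homogeneity_index(target_doses: np.ndarray) -> float:
    """HI = (D2% - D98%) / D50%, from the per-voxel doses in the target."""
    d2, d50, d98 = np.percentile(target_doses, [98, 50, 2])
    return (d2 - d98) / d50

def conformity_index(dose: np.ndarray, target: np.ndarray, rx: float) -> float:
    """Fraction of the target volume receiving at least the prescription."""
    covered = np.logical_and(dose >= rx, target).sum()
    return covered / target.sum()
```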
Affiliation(s)
- Najmeh Arjmandi
- Department of Medical Physics, Faculty of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
- Mohammad Amin Mosleh-Shirazi
- Physics Unit, Department of Radio-Oncology, Shiraz University of Medical Sciences, Shiraz, Iran
- Ionizing and Non-Ionizing Radiation Protection Research Center, School of Paramedical Sciences, Shiraz University of Medical Sciences, Shiraz, Iran
- Shahrokh Nasseri
- Department of Medical Physics, Faculty of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
- Medical Physics Research Center, Faculty of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
- Alireza Mehdizadeh
- Ionizing and Non-Ionizing Radiation Protection Research Center, School of Paramedical Sciences, Shiraz University of Medical Sciences, Shiraz, Iran
- Zohreh Pishevar
- Department of Radiation Oncology, Mashhad University of Medical Sciences, Mashhad, Iran
- Sare Hosseini
- Department of Radiation Oncology, Mashhad University of Medical Sciences, Mashhad, Iran
- Cancer Research Center, Mashhad University of Medical Sciences, Mashhad, Iran
- Amin Amiri Tehranizadeh
- Department of Medical Informatics, Faculty of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
- Mehdi Momennezhad
- Department of Medical Physics, Faculty of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
- Medical Physics Research Center, Faculty of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
3
Arjmandi N, Nasseri S, Momennezhad M, Mehdizadeh A, Hosseini S, Mohebbi S, Tehranizadeh AA, Pishevar Z. Automated contouring of CTV and OARs in planning CT scans using novel hybrid convolution-transformer networks for prostate cancer radiotherapy. Discov Oncol 2024;15:323. [PMID: 39085488] [PMCID: PMC11555176] [DOI: 10.1007/s12672-024-01177-9]
Abstract
PURPOSE Manual contouring of the prostate region in planning computed tomography (CT) images is a challenging task due to factors such as low contrast in soft tissues, inter- and intra-observer variability, and variations in organ size and shape. Automated contouring methods can therefore offer significant advantages. In this study, we investigated automated male pelvic multi-organ contouring in multi-center planning CT images using a hybrid convolutional neural network-vision transformer (CNN-ViT) that combines convolutional and ViT techniques. MATERIALS/METHODS We used retrospective data from 104 localized prostate cancer patients, with delineations of the clinical target volume (CTV) and critical organs at risk (OARs) for external beam radiotherapy. We introduced a novel attention-based fusion module that merges detailed features extracted through convolution with the global features obtained through the ViT. RESULTS The average Dice similarity coefficients (DSCs) achieved by VGG16-UNet-ViT for the prostate, bladder, rectum, right femoral head (RFH), and left femoral head (LFH) were 91.75%, 95.32%, 87.00%, 96.30%, and 96.34%, respectively. Experiments on multi-center planning CT images indicate that combining the ViT structure with the CNN network yielded superior performance for all organs compared with pure CNN and transformer architectures. Furthermore, the proposed method achieves more precise contours than state-of-the-art techniques. CONCLUSION The results demonstrate that integrating ViT into CNN architectures significantly improves segmentation performance. These results show promise as a reliable and efficient tool to facilitate prostate radiotherapy treatment planning.
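A minimal PyTorch sketch of an attention-based fusion of local CNN features with global transformer features, in the spirit of the module described above; the gating design, layer names, and shapes are illustrative assumptions, not the paper's implementation:

```python
# Speculative sketch of an attention-based CNN/ViT feature fusion; the
# gating design and shapes are assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, cnn_feat: torch.Tensor, vit_feat: torch.Tensor):
        # vit_feat is assumed already reshaped to the same (B, C, H, W) grid.
        attn = self.gate(torch.cat([cnn_feat, vit_feat], dim=1))
        return attn * cnn_feat + (1.0 - attn) * vit_feat  # gated blend
```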
Affiliation(s)
- Najmeh Arjmandi
- Department of Medical Physics, Faculty of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
- Shahrokh Nasseri
- Department of Medical Physics, Faculty of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
- Medical Physics Research Center, Faculty of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
- Mehdi Momennezhad
- Department of Medical Physics, Faculty of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
- Medical Physics Research Center, Faculty of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
- Alireza Mehdizadeh
- Ionizing and Non-Ionizing Radiation Protection Research Center, School of Paramedical Sciences, Shiraz University of Medical Sciences, Shiraz, Iran
- Sare Hosseini
- Department of Radiation Oncology, Mashhad University of Medical Sciences, Mashhad, Iran
- Cancer Research Center, Mashhad University of Medical Sciences, Mashhad, Iran
- Shokoufeh Mohebbi
- Medical Physics Department, Reza Radiation Oncology Center, Mashhad, Iran
- Amin Amiri Tehranizadeh
- Department of Medical Informatics, Faculty of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
- Zohreh Pishevar
- Department of Radiation Oncology, Mashhad University of Medical Sciences, Mashhad, Iran
4
Berenato S, Williams M, Woodley O, Möhler C, Evans E, Millin AE, Wheeler PA. Novel dosimetric validation of a commercial CT scanner based deep learning automated contour solution for prostate radiotherapy. Phys Med 2024;122:103339. [PMID: 38718703] [DOI: 10.1016/j.ejmp.2024.103339]
Abstract
PURPOSE OAR delineation accuracy influences both (i) a patient's optimised dose distribution (PD) and (ii) the reported doses (RD) presented at approval, which represent plan quality. This study used a novel dosimetric validation methodology to comprehensively evaluate a new CT-scanner-based AI contouring solution in terms of PD and RD within an automated planning workflow. METHODS Twenty prostate patients were selected to evaluate AI contouring for the rectum, bladder, and proximal femurs. Five planning 'pipelines' were considered, three using AI contours with differing levels of manual editing: nominally none (AIStd), minor editing in specific regions (AIMinEd), and fully corrected (AIFullEd). The remaining pipelines were manual delineations from two observers (MDOb1, MDOb2). Automated radiotherapy plans were generated for each pipeline. Geometric and dosimetric agreement of contour sets AIStd, AIMinEd, AIFullEd and MDOb2 was evaluated against the reference set MDOb1. Non-inferiority of the AI pipelines was assessed, hypothesising that, compared with MDOb1, absolute deviations in metrics for AI contouring were no greater than those from MDOb2. RESULTS Compared with MDOb1, organ delineation time was reduced by 24.9 min (96%), 21.4 min (79%) and 12.2 min (45%) for AIStd, AIMinEd and AIFullEd, respectively. All pipelines exhibited generally good dosimetric agreement with MDOb1. For RD, median deviations were within ±1.8 cm3, ±1.7% and ±0.6 Gy for absolute volume, relative volume and mean dose metrics, respectively. For PD, the respective values were within ±0.4 cm3, ±0.5% and ±0.2 Gy. Statistically (p < 0.05), AIMinEd and AIFullEd were dosimetrically non-inferior to MDOb2. CONCLUSIONS This novel dosimetric validation demonstrated that, following targeted minor editing (AIMinEd), AI contours were dosimetrically non-inferior to manual delineations while reducing delineation time by 79%.
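A sketch of the non-inferiority comparison described above: per-patient absolute deviations of an AI pipeline from the reference observer are tested against the second observer's deviations. The specific test and margin handling are assumptions, not the paper's statistical method:

```python
# Assumed test choice: a one-sided Wilcoxon signed-rank test on the paired
# per-patient differences of absolute deviations from the reference.
import numpy as np
from scipy.stats import wilcoxon

def non_inferiority_p(dev_ai: np.ndarray, dev_ob2: np.ndarray,
                      margin: float = 0.0) -> float:
    """p-value for H1: |AI deviation| - |Ob2 deviation| < margin."""
    diff = np.abs(dev_ai) - np.abs(dev_ob2) - margin
    _, p = wilcoxon(diff, alternative="less")
    return p
```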
Affiliation(s)
- Salvatore Berenato
- Velindre Cancer Centre, Radiotherapy Physics Department, Cardiff, Wales, United Kingdom
- Matthew Williams
- Velindre Cancer Centre, Radiotherapy Physics Department, Cardiff, Wales, United Kingdom
- Owain Woodley
- Velindre Cancer Centre, Radiotherapy Physics Department, Cardiff, Wales, United Kingdom
- Elin Evans
- Velindre Cancer Centre, Medical Directorate, Cardiff, Wales, United Kingdom
- Anthony E Millin
- Velindre Cancer Centre, Radiotherapy Physics Department, Cardiff, Wales, United Kingdom
- Philip A Wheeler
- Velindre Cancer Centre, Radiotherapy Physics Department, Cardiff, Wales, United Kingdom
5
Ji W, Chung ACS. Unsupervised Domain Adaptation for Medical Image Segmentation Using Transformer With Meta Attention. IEEE Trans Med Imaging 2024;43:820-831. [PMID: 37801381] [DOI: 10.1109/tmi.2023.3322581]
Abstract
Image segmentation is essential to medical image analysis as it provides the labeled regions of interest for subsequent diagnosis and treatment. However, fully-supervised segmentation methods require high-quality annotations produced by experts, which is laborious and expensive. In addition, when performing segmentation on another unlabeled image modality, performance is adversely affected by domain shift. Unsupervised domain adaptation (UDA) is an effective way to tackle these problems, but the performance of existing methods still leaves room for improvement. Moreover, despite the effectiveness of recent Transformer-based methods in medical image segmentation, the adaptability of Transformers is rarely investigated. In this paper, we present a novel UDA framework using a Transformer for building a cross-modality segmentation method with the advantages of learning long-range dependencies and transferring attentive information. To fully utilize the attention learned by the Transformer in UDA, we propose Meta Attention (MA) and use it to perform a fully attention-based alignment scheme, which can learn the hierarchical consistencies of attention and transfer more discriminative information between two modalities. We conducted extensive experiments on cross-modality segmentation using three datasets: a whole heart segmentation dataset (MMWHS), an abdominal organ segmentation dataset, and a brain tumor segmentation dataset. The promising results show that our method significantly improves performance compared with state-of-the-art UDA methods.
6
Zhang J, Yang Y, Fang M, Xu Y, Ji Y, Chen M. A research on the improved rotational robustness for thoracic organ delineation by using joint learning of segmenting spatially-correlated organs: A U-net based comparison. J Appl Clin Med Phys 2023;24:e14096. [PMID: 37469242] [PMCID: PMC10647980] [DOI: 10.1002/acm2.14096]
Abstract
PURPOSE To study the improved rotational robustness obtained by joint learning of spatially-correlated organ segmentation (SCOS) for thoracic organ delineation; the network structure itself is not the focus. METHODS SCOS was implemented in a U-net-like model (SCOS-net) and evaluated on unseen rotated test sets. Two hundred sixty-seven patients with thoracic tumors (232 without rotation and 35 with rotation) were enrolled. The training and validation images came from 61 randomly chosen unrotated patients. The test data comprised two sets: the first consisted of 3000 slices from the remaining 171 unrotated patients, which we rotated by -30° to 30°; the second comprised the images from the 35 rotated patients. The lung, heart, and spinal cord were delineated by experienced radiation oncologists and regarded as ground truth. The SCOS-net was compared with its single-task learning counterparts, two published multiple-learning-task settings, and rotation augmentation. Dice, three distance metrics (maximum and 95th percentile of Hausdorff distance and average surface distance (ASD)), and the number of cases where ASD = infinity were adopted, and the results were analyzed using visualization techniques. RESULTS Without augmentation, the SCOS-net achieves the best lung and spinal cord segmentations and comparable heart delineation. With augmentation, SCOS performs better in some cases. CONCLUSION The proposed SCOS can improve rotational robustness and is promising for clinical applications owing to its low network capacity and computational cost.
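The rotated-test-set protocol above can be sketched as follows, assuming 2D image/mask tensors; the interpolation settings are assumptions (nearest-neighbour keeps label values intact):

```python
# Assumed protocol sketch: apply a random in-plane rotation in [-30, 30]
# degrees to an image/mask pair before evaluation.
import random
import torchvision.transforms.functional as TF

def rotate_for_robustness_test(image, mask):
    angle = random.uniform(-30.0, 30.0)
    # Default nearest-neighbour interpolation keeps mask labels intact.
    return TF.rotate(image, angle), TF.rotate(mask, angle)
```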
Affiliation(s)
- Jie Zhang, Yiwei Yang, Min Fang, Yujin Xu, Yongling Ji, Ming Chen
- The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, Zhejiang, China
- Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, Zhejiang, China
7
Cubero L, García-Elcano L, Mylona E, Boue-Rafle A, Cozzarini C, Ubeira Gabellini MG, Rancati T, Fiorino C, de Crevoisier R, Acosta O, Pascau J. Deep learning-based segmentation of prostatic urethra on computed tomography scans for treatment planning. Phys Imaging Radiat Oncol 2023;26:100431. [PMID: 37007914] [PMCID: PMC10064422] [DOI: 10.1016/j.phro.2023.100431]
Abstract
Background and purpose The intraprostatic urethra is an organ at risk in prostate cancer radiotherapy, but its segmentation in computed tomography (CT) is challenging. This work sought to: i) propose an automatic pipeline for intraprostatic urethra segmentation in CT, ii) analyze the dose to the urethra, iii) compare the predictions to magnetic resonance (MR) contours. Materials and methods First, we trained Deep Learning networks to segment the rectum, bladder, prostate, and seminal vesicles. Then, the proposed Deep Learning Urethra Segmentation model was trained with the bladder and prostate distance transforms and 44 labeled CTs with visible catheters. The evaluation was performed on 11 datasets, calculating centerline distance (CLD) and the percentage of the centerline within 3.5 and 5 mm. We applied this method to a dataset of 32 patients treated with intensity-modulated radiation therapy (IMRT) to quantify the urethral dose. Finally, we compared predicted intraprostatic urethra contours to manual delineations in MR for 15 patients without a catheter. Results A mean CLD of 1.6 ± 0.8 mm for the whole urethra and 1.7 ± 1.4, 1.5 ± 0.9, and 1.7 ± 0.9 mm for the top, middle, and bottom thirds were obtained in CT. On average, 94% and 97% of the segmented centerlines were within a 3.5 mm and 5 mm radius, respectively. In IMRT, the urethra received a higher dose than the overall prostate. We also found a slight deviation between the predicted and manual MR delineations. Conclusion A fully-automatic segmentation pipeline was validated to delineate the intraprostatic urethra in CT images.
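A hedged sketch of the centerline-based evaluation above: mean distance from each predicted centerline point to the reference centerline, plus the percentage of points within a given radius. Treating centerlines as point sets in millimetres is an assumption:

```python
# Assumes centerlines are given as (N, 3) point arrays in millimetres.
import numpy as np
from scipy.spatial import cKDTree

def centerline_metrics(pred_pts: np.ndarray, ref_pts: np.ndarray,
                       radii=(3.5, 5.0)):
    d, _ = cKDTree(ref_pts).query(pred_pts)  # nearest reference point
    cld = d.mean()                           # mean centerline distance (mm)
    within = {r: 100.0 * np.mean(d <= r) for r in radii}  # % within radius
    return cld, within
```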
Affiliation(s)
- Lucía Cubero
- Departamento de Bioingeniería, Universidad Carlos III de Madrid, Madrid, Spain
- Université Rennes, CLCC Eugène Marquis, Inserm, LTSI - UMR 1099, F-35000 Rennes, France
- Laura García-Elcano
- Departamento de Bioingeniería, Universidad Carlos III de Madrid, Madrid, Spain
- Adrien Boue-Rafle
- Université Rennes, CLCC Eugène Marquis, Inserm, LTSI - UMR 1099, F-35000 Rennes, France
- Cesare Cozzarini
- Department of Radiation Oncology, San Raffaele Scientific Institute - IRCCS, Milan, Italy
- Tiziana Rancati
- Science Unit, Fondazione IRCCS Istituto Nazionale dei Tumori, Milan, Italy
- Claudio Fiorino
- Department of Medical Physics, San Raffaele Scientific Institute - IRCCS, Milan, Italy
- Renaud de Crevoisier
- Université Rennes, CLCC Eugène Marquis, Inserm, LTSI - UMR 1099, F-35000 Rennes, France
- Oscar Acosta
- Université Rennes, CLCC Eugène Marquis, Inserm, LTSI - UMR 1099, F-35000 Rennes, France
- Javier Pascau
- Departamento de Bioingeniería, Universidad Carlos III de Madrid, Madrid, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
- Corresponding author at: Departamento de Bioingeniería, Universidad Carlos III de Madrid, Madrid, Spain
8
Semi-Supervised Medical Image Segmentation Guided by Bi-Directional Constrained Dual-Task Consistency. Bioengineering (Basel) 2023;10:225. [PMID: 36829720] [PMCID: PMC9952498] [DOI: 10.3390/bioengineering10020225]
Abstract
BACKGROUND Medical image processing tasks such as multi-object segmentation are of great significance for surgical planning, robot-assisted surgery, and surgical safety. However, the exceptionally low contrast among tissues and the limited available annotated data make developing an automatic segmentation algorithm for pelvic CT challenging. METHODS A bi-directionally constrained dual-task consistency model named PICT is proposed to improve segmentation quality by leveraging free unlabeled data. First, to learn more features of unlabeled data, it encourages the model prediction of the interpolated image to be consistent with the interpolation of the model predictions at the pixel, model, and data levels. Moreover, to constrain erroneous predictions under interpolation interference, PICT designs an auxiliary pseudo-supervision task that focuses on the underlying information of non-interpolated data. Finally, an effective loss algorithm for both consistency tasks is designed to ensure they complement each other and produce more reliable predictions. RESULTS Quantitative experiments show that the proposed PICT achieves 87.18%, 96.42%, and 79.41% mean DSC scores on ACDC, CTPelvic1k, and an individual multi-tissue pelvis dataset, with gains of around 0.8%, 0.5%, and 1% over the state-of-the-art semi-supervised method. Compared with the baseline supervised method, PICT brings improvements of over 3-9%. CONCLUSIONS The developed PICT model can effectively leverage unlabeled data to improve the segmentation quality of low-contrast medical images. The segmentation results could improve the precision of surgical path planning and provide input for robot-assisted surgery.
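A minimal sketch of the pixel-level interpolation consistency idea described above: the prediction for an interpolated image is encouraged to match the interpolation of the individual predictions. The mixing coefficient and loss choice are assumptions:

```python
# Sketch under assumptions: MSE consistency between the prediction for a
# mixed image and the mixture of the individual predictions.
import torch
import torch.nn.functional as F

def interpolation_consistency_loss(model, x1, x2, alpha: float = 0.3):
    xm = alpha * x1 + (1.0 - alpha) * x2        # interpolated unlabeled input
    with torch.no_grad():                       # targets from single images
        target = alpha * model(x1) + (1.0 - alpha) * model(x2)
    return F.mse_loss(model(xm), target)
```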
9
Ramajayam S, Rajavel S, Samidurai R, Cao Y. Finite-Time Synchronization for T-S Fuzzy Complex-Valued Inertial Delayed Neural Networks Via Decomposition Approach. Neural Process Lett 2023. [DOI: 10.1007/s11063-022-11117-9]
10
Xia W, Ameri G, Fakim D, Akhuanzada H, Raza MZ, Shobeiri SA, McLean L, Chen ECS. Automatic Plane of Minimal Hiatal Dimensions Extraction From 3D Female Pelvic Floor Ultrasound. IEEE Trans Med Imaging 2022;41:3873-3883. [PMID: 35984794] [DOI: 10.1109/tmi.2022.3199968]
Abstract
There is an increasing interest in the applications of 3D ultrasound imaging of the pelvic floor to improve the diagnosis, treatment, and surgical planning of female pelvic floor dysfunction (PFD). Pelvic floor biometrics are obtained on an oblique image plane known as the plane of minimal hiatal dimensions (PMHD). Identifying this plane requires the detection of two anatomical landmarks, the pubic symphysis and the anorectal angle. Manual detection of these landmarks and the PMHD in 3D pelvic ultrasound requires expert knowledge of pelvic floor anatomy, and is challenging, time-consuming, and subject to human error. These challenges have hindered the adoption of such quantitative analysis in the clinic. This work presents an automatic approach to identify the anatomical landmarks and extract the PMHD from 3D pelvic ultrasound volumes. To demonstrate clinical utility and a complete automated clinical task, automatic segmentation of the levator ani muscle on the extracted PMHD images was also performed. Experiments using 73 test images of patients during a pelvic muscle resting state showed that this algorithm can accurately identify the PMHD, with an average Dice of 0.89 and an average mean boundary distance of 2.25 mm. Further evaluation of the PMHD detection algorithm using 35 images of patients performing pelvic muscle contraction resulted in an average Dice of 0.88 and an average mean boundary distance of 2.75 mm. This work has the potential to pave the way towards the adoption of ultrasound in the clinic and the development of personalized treatment for PFD.
11
Mao L, Ren F, Yang D, Zhang R. ChaInNet: Deep Chain Instance Segmentation Network for Panoptic Segmentation. Neural Process Lett 2022. [DOI: 10.1007/s11063-022-10899-2]
12
ACP: Automatic Channel Pruning Method by Introducing Additional Loss for Deep Neural Networks. Neural Process Lett 2022. [DOI: 10.1007/s11063-022-10926-2]
13
Liu J, Shi J, Hao F, Dai M, Zhang Z. Arctangent entropy: a new fast threshold segmentation entropy for light colored character image on semiconductor chip surface. Pattern Anal Appl 2022. [DOI: 10.1007/s10044-022-01079-y]
14
Tang Z, Li Z, Yang J, Qi F. P&GGD: A Joint-Way Model Optimization Strategy Based on Filter Pruning and Filter Grafting For Tea Leaves Classification. Neural Process Lett 2022. [DOI: 10.1007/s11063-022-10813-w]
15
Semantic Image Segmentation with Feature Fusion Based on Laplacian Pyramid. Neural Process Lett 2022. [DOI: 10.1007/s11063-022-10801-0]
16
Liu Y, Du J, Vong CM, Yue G, Yu J, Wang Y, Lei B, Wang T. Scale-adaptive super-feature based MetricUNet for brain tumor segmentation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103442]
17
Nivetha S, Inbarani HH. Neighborhood Rough Neural Network Approach for COVID-19 Image Classification. Neural Process Lett 2022;54:1919-1941. [PMID: 35079228] [PMCID: PMC8776386] [DOI: 10.1007/s11063-021-10712-6]
Abstract
The rapid spread of the new coronavirus, COVID-19, causes serious symptoms in humans and can be fatal. A COVID-19 infected person can experience a dry cough, muscle pain, headache, fever, sore throat, and mild to moderate respiratory illness, according to a clinical report. A chest X-ray (also known as radiography) or a chest CT scan is a more effective imaging technique for diagnosing lung cancer. Computed Tomography (CT) scan images allow for fast and precise COVID-19 screening. In this paper, a novel hybridized approach based on the Neighborhood Rough Set Classification method (NRSC) and a Backpropagation Neural Network (BPN) is proposed to classify COVID and NON-COVID images. The proposed classification algorithm is compared with other existing benchmark approaches such as Neighborhood Rough Set, Backpropagation Neural Network, Decision Tree, Random Forest Classifier, Naive Bayes Classifier, K-Nearest Neighbor, and Support Vector Machine. Various classification accuracy measures are used to assess the efficacy of the classification algorithms.
Affiliation(s)
- S. Nivetha
- Department of Computer Science, Periyar University, Salem, Tamil Nadu, India
- H. Hannah Inbarani
- Department of Computer Science, Periyar University, Salem, Tamil Nadu, India
18
Hu Y, Zhang B, Zhang Y, Jiang C, Chen Z. A feature-level full-reference image denoising quality assessment method based on joint sparse representation. Appl Intell 2022. [DOI: 10.1007/s10489-021-03052-4]
19
Luo A, Yan X, Luo J. A Novel Chinese Points of Interest Classification Method Based on Weighted Quadratic Surface Support Vector Machine. Neural Process Lett 2022. [DOI: 10.1007/s11063-021-10725-1]
20
21
Distributed Analysis Dictionary Learning Using a Diffusion Strategy. Neural Process Lett 2022. [DOI: 10.1007/s11063-021-10729-x]
22
Estimating pose from pressure data for smart beds with deep image-based pose estimators. Appl Intell 2022. [DOI: 10.1007/s10489-021-02418-y]
23
Huang H, Zheng H, Lin L, Cai M, Hu H, Zhang Q, Chen Q, Iwamoto Y, Han X, Chen YW, Tong R. Medical Image Segmentation With Deep Atlas Prior. IEEE Trans Med Imaging 2021;40:3519-3530. [PMID: 34129495] [DOI: 10.1109/tmi.2021.3089661]
Abstract
Organ segmentation from medical images is one of the most important pre-processing steps in computer-aided diagnosis, but it is a challenging task because of limited annotated data, low contrast and non-homogeneous textures. Compared with natural images, organs in medical images have obvious anatomical prior knowledge (e.g., organ shape and position), which can be used to improve segmentation accuracy. In this paper, we propose a novel segmentation framework which integrates medical image anatomical priors into deep learning models through the loss function. The proposed prior loss function is based on a probabilistic atlas and is called the deep atlas prior (DAP). It includes prior location and shape information of organs, which is important prior information for accurate organ segmentation. Further, we combine the proposed deep atlas prior loss with conventional likelihood losses, such as Dice loss and focal loss, into an adaptive Bayesian loss within a Bayesian framework, which consists of a prior and a likelihood. The adaptive Bayesian loss dynamically adjusts the ratio of the DAP loss to the likelihood loss across training epochs for better learning. The proposed loss function is universal and can be combined with a wide variety of existing deep segmentation models to further enhance their performance. We verify the significance of our proposed framework with some state-of-the-art models, including fully-supervised and semi-supervised segmentation models, on a public dataset (ISBI LiTS 2017 Challenge) for liver segmentation and a private dataset for spleen segmentation.
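A hedged sketch of combining an atlas-based prior loss with a Dice likelihood loss under an epoch-dependent weight, in the spirit of the adaptive Bayesian loss described above; the exact prior term and weighting schedule here are assumptions:

```python
# Assumed schedule: linearly shift weight from the atlas prior term to the
# Dice likelihood term over training.
import torch

def dice_loss(prob, gt, eps=1e-6):
    inter = (prob * gt).sum()
    return 1.0 - (2.0 * inter + eps) / (prob.sum() + gt.sum() + eps)

def adaptive_bayesian_loss(prob, gt, atlas, epoch, total_epochs):
    # Prior term: penalize foreground probability where the probabilistic
    # atlas says the organ is unlikely to appear.
    prior = (prob * (1.0 - atlas)).mean()
    lam = 1.0 - epoch / total_epochs
    return lam * prior + (1.0 - lam) * dice_loss(prob, gt)
```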
24
Kalantar R, Lin G, Winfield JM, Messiou C, Lalondrelle S, Blackledge MD, Koh DM. Automatic Segmentation of Pelvic Cancers Using Deep Learning: State-of-the-Art Approaches and Challenges. Diagnostics (Basel) 2021;11:1964. [PMID: 34829310] [PMCID: PMC8625809] [DOI: 10.3390/diagnostics11111964]
Abstract
The recent rise of deep learning (DL) and its promising capabilities in capturing non-explicit detail from large datasets have attracted substantial research attention in the field of medical image processing. DL provides grounds for technological development of computer-aided diagnosis and segmentation in radiology and radiation oncology. Amongst the anatomical locations where recent auto-segmentation algorithms have been employed, the pelvis remains one of the most challenging due to large intra- and inter-patient soft-tissue variabilities. This review provides a comprehensive, non-systematic and clinically-oriented overview of 74 DL-based segmentation studies, published between January 2016 and December 2020, for bladder, prostate, cervical and rectal cancers on computed tomography (CT) and magnetic resonance imaging (MRI), highlighting the key findings, challenges and limitations.
Affiliation(s)
- Reza Kalantar
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Gigin Lin
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou and Chang Gung University, 5 Fuhsing St., Guishan, Taoyuan 333, Taiwan
- Jessica M. Winfield
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Christina Messiou
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Susan Lalondrelle
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Matthew D. Blackledge
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Dow-Mu Koh
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
25
A Hybrid Approach using the Fuzzy Logic System and the Modified Genetic Algorithm for Prediction of Skin Cancer. Neural Process Lett 2021. [DOI: 10.1007/s11063-021-10656-x]
26
27
28
Abstract
Early screening of COVID-19 is essential for pandemic control, and thus relieves stress on the health care system. Lung segmentation from chest X-ray (CXR) is a promising method for early diagnosis of pulmonary diseases. Recently, deep learning has achieved great success in supervised lung segmentation. However, how to effectively utilize the lung region in screening COVID-19 remains a challenge due to domain shift and the lack of manual pixel-level annotations. We hereby propose a multi-appearance COVID-19 screening framework that uses lung region priors derived from CXR images. First, we propose a multi-scale adversarial domain adaptation network (MS-AdaNet) to boost the cross-domain lung segmentation task, providing prior knowledge to the classification network. Then, we construct a multi-appearance network (MA-Net), composed of three sub-networks that realize multi-appearance feature extraction and fusion using lung region priors. Finally, we obtain predictions for the normal, viral pneumonia, and COVID-19 classes using the proposed MA-Net. We extend the proposed MS-AdaNet to the lung segmentation task on three different public CXR datasets. The results suggest that MS-AdaNet outperforms competing methods in cross-domain lung segmentation. Moreover, experiments reveal that the proposed MA-Net achieves an accuracy of 98.83% and an F1-score of 98.71% on COVID-19 screening, indicating that it can obtain significant performance on this task.
29
Yan C, Lu JJ, Chen K, Wang L, Lu H, Yu L, Sun M, Xu J. Scale- and Slice-aware Net (S²aNet) for 3D segmentation of organs and musculoskeletal structures in pelvic MRI. Magn Reson Med 2021;87:431-445. [PMID: 34337773] [DOI: 10.1002/mrm.28939]
Abstract
PURPOSE MRI of organs and musculoskeletal structures in the female pelvis presents a unique display of pelvic anatomy. Automated segmentation of pelvic structures plays an important role in personalized diagnosis and treatment of pelvic structure disease. Pelvic organ systems are very complicated, making 3D segmentation of massive pelvic structures on MRI a challenging task. METHODS A new Scale- and Slice-aware Net (S²aNet) is presented for 3D dense segmentation of 54 organs and musculoskeletal structures in female pelvic MR images. A Scale-aware module is designed to capture the spatial and semantic information of different-scale structures. A Slice-aware module is introduced to model the similar spatial relationships of consecutive slices in 3D data. Moreover, S²aNet leverages a weight-adaptive loss optimization strategy to reinforce the supervision with more discriminative capability on hard samples and categories. RESULTS Experiments were performed on a pelvic MRI cohort of 27 MR images from 27 patient cases. Across the cohort and the 54 categories of organs and musculoskeletal structures manually delineated, S²aNet was shown to outperform the UNet framework and other state-of-the-art fully convolutional networks in terms of sensitivity, Dice similarity coefficient and relative volume difference. CONCLUSION The experimental results on the pelvic 3D MR dataset show that the proposed S²aNet achieves excellent segmentation results compared with other state-of-the-art models. To our knowledge, S²aNet is the first model to achieve 3D dense segmentation of 54 musculoskeletal structures on pelvic MRI, and it will be extended to clinical application with the support of more cases in the future.
Affiliation(s)
- Chaoyang Yan
- Institute for AI in Medicine, School of Automation, Nanjing University of Information Science and Technology, Nanjing, China
- Jing-Jing Lu
- Department of Radiology, Beijing United Family Hospital, Beijing, China
- Department of Radiology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China
- Kang Chen
- Eight-Year Program of Clinical Medicine, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China
- Lei Wang
- Institute for AI in Medicine, School of Automation, Nanjing University of Information Science and Technology, Nanjing, China
- Haoda Lu
- Institute for AI in Medicine, School of Automation, Nanjing University of Information Science and Technology, Nanjing, China
- Li Yu
- Institute for AI in Medicine, School of Automation, Nanjing University of Information Science and Technology, Nanjing, China
- Mengyan Sun
- Department of Radiology, Beijing Chest Hospital, Capital Medical University, Beijing, China
- Beijing Tuberculosis and Thoracic Tumor Institute, Beijing, China
- Jun Xu
- Institute for AI in Medicine, School of Automation, Nanjing University of Information Science and Technology, Nanjing, China
30
He K, Lian C, Zhang B, Zhang X, Cao X, Nie D, Gao Y, Zhang J, Shen D. HF-UNet: Learning Hierarchically Inter-Task Relevance in Multi-Task U-Net for Accurate Prostate Segmentation in CT Images. IEEE Trans Med Imaging 2021;40:2118-2128. [PMID: 33848243] [DOI: 10.1109/tmi.2021.3072956]
Abstract
Accurate segmentation of the prostate is a key step in external beam radiation therapy treatments. In this paper, we tackle the challenging task of prostate segmentation in CT images with a two-stage network: 1) a first stage to quickly localize, and 2) a second stage to accurately segment the prostate. To precisely segment the prostate in the second stage, we formulate prostate segmentation as a multi-task learning framework, which includes a main task to segment the prostate and an auxiliary task to delineate the prostate boundary. Here, the second task is applied to provide additional guidance for the unclear prostate boundary in CT images. Conventional multi-task deep networks typically share most of the parameters (i.e., feature representations) across all tasks, which may limit their data-fitting ability, as the specificity of different tasks is inevitably ignored. By contrast, we solve this with a hierarchically-fused U-Net structure, namely HF-UNet. The HF-UNet has two complementary branches for the two tasks, with the novel proposed attention-based task consistency learning block to communicate at each level between the two decoding branches. Therefore, HF-UNet endows the ability to learn hierarchically the shared representations for different tasks and to preserve the specificity of the learned representations for different tasks simultaneously. We performed extensive evaluations of the proposed method on a large planning CT image dataset and a benchmark prostate zonal dataset. The experimental results show that HF-UNet outperforms conventional multi-task network architectures and state-of-the-art methods.
31
Kitrungrotsakul T, Chen Q, Wu H, Iwamoto Y, Hu H, Zhu W, Chen C, Xu F, Zhou Y, Lin L, Tong R, Li J, Chen YW. Attention-RefNet: Interactive Attention Refinement Network for Infected Area Segmentation of COVID-19. IEEE J Biomed Health Inform 2021;25:2363-2373. [PMID: 34033549] [PMCID: PMC8545076] [DOI: 10.1109/jbhi.2021.3082527]
Abstract
COVID-19 pneumonia is a disease that causes an existential health crisis in many people by directly affecting and damaging lung cells. The segmentation of infected areas from computed tomography (CT) images can be used to assist and provide useful information for COVID-19 diagnosis. Although several deep learning-based segmentation methods have been proposed for COVID-19 segmentation and have achieved state-of-the-art results, the segmentation accuracy is still not high enough (approximately 85%) due to the variations of COVID-19 infected areas (such as shape and size variations) and the similarities between COVID-19 and non-COVID-infected areas. To improve the segmentation accuracy of COVID-19 infected areas, we propose an interactive attention refinement network (Attention RefNet). The interactive attention refinement network can be connected with any segmentation network and trained with the segmentation network in an end-to-end fashion. We propose a skip connection attention module to improve the important features in both segmentation and refinement networks and a seed point module to enhance the important seeds (positions) for interactive refinement. The effectiveness of the proposed method was demonstrated on public datasets (COVID-19CTSeg and MICCAI) and our private multicenter dataset. The segmentation accuracy was improved to more than 90%. We also confirmed the generalizability of the proposed network on our multicenter dataset. The proposed method can still achieve high segmentation accuracy.
32
Samarasinghe G, Jameson M, Vinod S, Field M, Dowling J, Sowmya A, Holloway L. Deep learning for segmentation in radiation therapy planning: a review. J Med Imaging Radiat Oncol 2021;65:578-595. [PMID: 34313006] [DOI: 10.1111/1754-9485.13286]
Abstract
Segmentation of organs and structures, as either targets or organs-at-risk, has a significant influence on the success of radiation therapy. Manual segmentation is a tedious and time-consuming task for clinicians, and inter-observer variability can affect the outcomes of radiation therapy. The recent hype over deep neural networks has added many powerful auto-segmentation methods as variations of convolutional neural networks (CNN). This paper presents a descriptive review of the literature on deep learning techniques for segmentation in radiation therapy planning. The most common CNN architecture across the four clinical subsites considered was U-net, with the majority of deep learning segmentation articles focused on head and neck normal tissue structures. The most common data sets were CT images from an in-house source, along with some public data sets. N-fold cross-validation was commonly employed; however, not all work separated training, test and validation data sets. This area of research is expanding rapidly. To facilitate comparisons of proposed methods and benchmarking, consistent use of appropriate metrics and independent validation should be carefully considered.
Affiliation(s)
- Gihan Samarasinghe
- School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia
- Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia
- Michael Jameson
- GenesisCare, Sydney, New South Wales, Australia
- St Vincent's Clinical School, University of New South Wales, Sydney, New South Wales, Australia
- Shalini Vinod
- Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia
- Liverpool Cancer Therapy Centre, Liverpool Hospital, Liverpool, New South Wales, Australia
- Matthew Field
- Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia
- Liverpool Cancer Therapy Centre, Liverpool Hospital, Liverpool, New South Wales, Australia
- Jason Dowling
- Commonwealth Scientific and Industrial Research Organisation, Australian E-Health Research Centre, Herston, Queensland, Australia
- Arcot Sowmya
- School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia
- Lois Holloway
- Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia
- Liverpool Cancer Therapy Centre, Liverpool Hospital, Liverpool, New South Wales, Australia
33
Xu X, Lian C, Wang S, Zhu T, Chen RC, Wang AZ, Royce TJ, Yap PT, Shen D, Lian J. Asymmetric multi-task attention network for prostate bed segmentation in computed tomography images. Med Image Anal 2021;72:102116. [PMID: 34217953] [DOI: 10.1016/j.media.2021.102116]
Abstract
Post-prostatectomy radiotherapy requires accurate annotation of the prostate bed (PB), i.e., the residual tissue after the operative removal of the prostate gland, to minimize side effects on surrounding organs-at-risk (OARs). However, PB segmentation in computed tomography (CT) images is a challenging task, even for experienced physicians, because the PB is almost a "virtual" target with non-contrast boundaries and highly variable shapes depending on neighboring OARs. In this work, we propose an asymmetric multi-task attention network (AMTA-Net) for the concurrent segmentation of the PB and surrounding OARs. Our AMTA-Net mimics experts in delineating the non-contrast PB by explicitly leveraging its critical dependency on the neighboring OARs (i.e., the bladder and rectum), which are relatively easy to distinguish in CT images. Specifically, we first adopt a U-Net as the backbone network for the low-level (or prerequisite) task of OAR segmentation. Then, we build an attention sub-network upon the backbone U-Net with a series of cascaded attention modules, which can hierarchically transfer the OAR features and adaptively learn discriminative representations for the high-level (or primary) task of PB segmentation. We comprehensively evaluate the proposed AMTA-Net on a clinical dataset of 186 CT images. According to the experimental results, our AMTA-Net significantly outperforms the current clinical state of the art (atlas-based segmentation methods), indicating the value of our method in reducing time and labor in the clinical workflow. Our AMTA-Net also performs better than the technical state of the art (deep learning-based segmentation methods), especially for the most indistinguishable and clinically critical part of the PB boundaries. Source code is released at https://github.com/superxuang/amta-net.
Affiliation(s)
- Xuanang Xu
- Department of Radiology and Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Chunfeng Lian
- Department of Radiology and Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China
- Shuai Wang
- Department of Radiology and Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai, Shandong 264209, China
- Tong Zhu
- Department of Radiation Oncology, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Ronald C Chen
- Department of Radiation Oncology, University of Kansas Medical Center, Kansas City, KS 66160, USA
- Andrew Z Wang
- Department of Radiation Oncology, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Trevor J Royce
- Department of Radiation Oncology, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200030, China
- Department of Artificial Intelligence, Korea University, Seoul 02841, Republic of Korea
- Jun Lian
- Department of Radiation Oncology, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
34
Balagopal A, Nguyen D, Morgan H, Weng Y, Dohopolski M, Lin MH, Barkousaraie AS, Gonzalez Y, Garant A, Desai N, Hannan R, Jiang S. A deep learning-based framework for segmenting invisible clinical target volumes with estimated uncertainties for post-operative prostate cancer radiotherapy. Med Image Anal 2021;72:102101. [PMID: 34111573] [DOI: 10.1016/j.media.2021.102101]
Abstract
In post-operative radiotherapy for prostate cancer, precisely contouring the clinical target volume (CTV) to be irradiated is challenging, because the cancerous prostate gland has been surgically removed, so the CTV encompasses the microscopic spread of tumor cells, which cannot be visualized in clinical images such as computed tomography or magnetic resonance imaging. In current clinical practice, physicians segment CTVs manually based on their relationship with nearby organs and other clinical information, but this leads to large inter-physician variability. Automating post-operative prostate CTV segmentation with traditional image segmentation methods has yielded suboptimal results. We propose using deep learning to accurately segment post-operative prostate CTVs. The proposed model is trained using labels that were clinically approved and used for patient treatment. To segment the CTV, we first segment the nearby organs, then use their relationship with the CTV to assist CTV segmentation. To ease the encoding of distance-based features, which are important for learning both the CTV contours' overlap with the surrounding OARs and the distance from their borders, we add distance prediction as an auxiliary task to the CTV network. To make the DL model practical for clinical use, we use Monte Carlo dropout (MCDO) to estimate model uncertainty. Using MCDO, we estimate and visualize the 95% upper and lower confidence bounds for each prediction, which inform physicians of areas that might require correction. The proposed model achieves an average Dice similarity coefficient (DSC) of 0.87 on a holdout test dataset, much better than established methods such as atlas-based methods (DSC < 0.7). The predicted contours agree with physician contours better than medical resident contours do. A reader study showed that the clinical acceptability of the automatically segmented CTV contours is equal to that of approved clinical contours manually drawn by physicians. Our deep learning model can accurately segment CTVs with the help of surrounding organ masks. Because the DL framework can outperform residents, it can be implemented practically in a clinical workflow to generate initial CTV contours or to guide residents in generating these contours for physicians to review and revise. Providing physicians with the 95% confidence bounds could streamline the review process for an efficient clinical workflow, as it would enable physicians to concentrate their inspection and editing efforts on the large uncertain areas.
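The MCDO procedure lends itself to a short sketch: dropout stays active at test time, and per-voxel percentiles over repeated stochastic passes give the 95% bounds described above. The toy network, dropout rate, and number of passes below are placeholder assumptions, not the paper's model.

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Dropout2d(p=0.5),                      # kept stochastic at inference
    nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid(),
)

def mc_dropout_bounds(model, x, passes=20):
    model.train()  # train mode keeps dropout active during inference
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(passes)])
    lower = torch.quantile(preds, 0.025, dim=0)  # per-voxel 2.5th percentile
    upper = torch.quantile(preds, 0.975, dim=0)  # per-voxel 97.5th percentile
    return preds.mean(dim=0), lower, upper

mean, lo, hi = mc_dropout_bounds(model, torch.randn(1, 1, 64, 64))
print(mean.shape, (hi - lo).mean())  # mean prediction and average bound width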
Affiliation(s)
- Anjali Balagopal
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, United States
| | - Dan Nguyen
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, United States
| | - Howard Morgan
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, United States
| | - Yaochung Weng
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, United States
| | - Michael Dohopolski
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, United States
| | - Mu-Han Lin
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, United States
| | - Azar Sadeghnejad Barkousaraie
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, United States
| | - Yesenia Gonzalez
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, United States
| | - Aurelie Garant
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, United States
| | - Neil Desai
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, United States
| | - Raquibul Hannan
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, United States
| | - Steve Jiang
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, United States.
| |
|
35
|
Fu Y, Lei Y, Wang T, Curran WJ, Liu T, Yang X. A review of deep learning based methods for medical image multi-organ segmentation. Phys Med 2021; 85:107-122. [PMID: 33992856 PMCID: PMC8217246 DOI: 10.1016/j.ejmp.2021.05.003] [Citation(s) in RCA: 89] [Impact Index Per Article: 22.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/27/2020] [Revised: 03/12/2021] [Accepted: 05/03/2021] [Indexed: 12/12/2022] Open
Abstract
Deep learning has revolutionized image processing and achieved state-of-the-art performance in many medical image segmentation tasks. Many deep learning-based methods have been published to segment different parts of the body for different medical applications, so it is necessary to summarize the current state of development of deep learning in medical image segmentation. In this paper, we aim to provide a comprehensive review with a focus on multi-organ image segmentation, which is crucial for radiotherapy, where the tumor and organs-at-risk must be contoured for treatment planning. We grouped the surveyed methods into two broad categories: 'pixel-wise classification' and 'end-to-end segmentation'. Each category was divided into subgroups according to network design. For each type, we listed the surveyed works, highlighted important contributions, and identified specific challenges. Following the detailed review, we discussed the achievements, shortcomings, and future potential of each category. To enable direct comparison, we listed the performance of the surveyed works that used thoracic and head-and-neck benchmark datasets.
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA.
| |
|
36
|
Sun H, Lu Z, Fan R, Xiong W, Xie K, Ni X, Yang J. Research on obtaining pseudo CT images based on stacked generative adversarial network. Quant Imaging Med Surg 2021; 11:1983-2000. [PMID: 33936980 DOI: 10.21037/qims-20-1019] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Background To investigate the feasibility of using a stacked generative adversarial network (sGAN) to synthesize pseudo computed tomography (CT) images based on ultrasound (US) images. Methods The pre-radiotherapy US and CT images of 75 patients with cervical cancer were selected for the training set of pseudo-image synthesis. In the first stage, labeled US images were used as the input of the first conditional GAN to obtain low-resolution pseudo CT images; in the second stage, a super-resolution reconstruction GAN was used. The pseudo CT image obtained in the first stage served as its input, yielding a high-resolution pseudo CT image with clear texture and accurate grayscale information. Five-fold cross-validation was performed to validate our model. The mean absolute error (MAE) was used to compare each pseudo CT with the same patient's real CT image. Another 10 patients with cervical cancer, imaged before radiotherapy, were selected for testing, and the pseudo CT images obtained using the neural style transfer (NSF) and CycleGAN methods were compared with those obtained using the sGAN method proposed in this study. Finally, the dosimetric accuracy of the pseudo CT images was verified by phantom experiments. Results The MAE values between the pseudo CT obtained with sGAN and the real CT in five-fold cross-validation were 66.82 ± 1.59 HU, 66.36 ± 1.85 HU, 67.26 ± 2.37 HU, 66.34 ± 1.75 HU, and 67.22 ± 1.30 HU, respectively. The normalized mutual information (NMI), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) between the pseudo CT images obtained using the sGAN method and the ground-truth CT (CTgt) images were compared with those of the other two methods via the paired t-test, and the differences were statistically significant. The dice similarity coefficient (DSC) measurements showed that the pseudo CT images obtained using the sGAN method were more similar to the CTgt images for organs at risk. The dosimetric phantom experiments also showed that the dose distribution of the pseudo CT images synthesized by the new method was similar to that of the CTgt images. Conclusions Compared with the NSF and CycleGAN methods, the sGAN method can obtain more accurate pseudo CT images, thereby providing a new method for image guidance in radiotherapy for cervical cancer.
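The two-stage composition described in the Methods can be sketched schematically; both generators below are toy stand-ins (the real conditional GAN, super-resolution network, discriminators, and losses are all omitted), and only the staging, low-resolution synthesis followed by super-resolution refinement, is the point.

import torch
import torch.nn as nn

stage1 = nn.Sequential(               # stand-in conditional generator (low-res pseudo CT)
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
stage2 = nn.Sequential(               # stand-in super-resolution generator
    nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

us = torch.randn(1, 1, 64, 64)        # labeled ultrasound input
low_res_ct = stage1(us)               # stage-1 output
pseudo_ct = stage2(low_res_ct)        # stage-2 high-resolution pseudo CT
real_ct = torch.randn_like(pseudo_ct) # placeholder for the patient's real CT
mae = (pseudo_ct - real_ct).abs().mean()  # MAE, the paper's primary metric
print(pseudo_ct.shape, mae.item())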
Affiliation(s)
- Hongfei Sun
- School of Automation, Northwestern Polytechnical University, Xi'an, China
| | - Zhengda Lu
- Department of Radiotherapy, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, China; The Center of Medical Physics, Nanjing Medical University, Changzhou, China; The Key Laboratory of Medical Physics, Changzhou, China
| | - Rongbo Fan
- School of Automation, Northwestern Polytechnical University, Xi'an, China
| | - Wenjun Xiong
- School of Automation, Northwestern Polytechnical University, Xi'an, China
| | - Kai Xie
- Department of Radiotherapy, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, China; The Center of Medical Physics, Nanjing Medical University, Changzhou, China; The Key Laboratory of Medical Physics, Changzhou, China
| | - Xinye Ni
- Department of Radiotherapy, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, China; The Center of Medical Physics, Nanjing Medical University, Changzhou, China; The Key Laboratory of Medical Physics, Changzhou, China
| | - Jianhua Yang
- School of Automation, Northwestern Polytechnical University, Xi'an, China
| |
|
37
|
He K, Zhao W, Xie X, Ji W, Liu M, Tang Z, Shi Y, Shi F, Gao Y, Liu J, Zhang J, Shen D. Synergistic learning of lung lobe segmentation and hierarchical multi-instance classification for automated severity assessment of COVID-19 in CT images. PATTERN RECOGNITION 2021; 113:107828. [PMID: 33495661 PMCID: PMC7816595 DOI: 10.1016/j.patcog.2021.107828] [Citation(s) in RCA: 34] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/16/2020] [Revised: 12/10/2020] [Accepted: 12/22/2020] [Indexed: 05/03/2023]
Abstract
Understanding chest CT imaging of the coronavirus disease 2019 (COVID-19) will help detect infections early and assess disease progression. In particular, automated severity assessment of COVID-19 in CT images plays an essential role in identifying cases that are in great need of intensive clinical care. However, it is often challenging to accurately assess the severity of this disease in CT images, due to variable infection regions in the lungs, similar imaging biomarkers, and large inter-case variations. To this end, we propose a synergistic learning framework for automated severity assessment of COVID-19 in 3D CT images, by jointly performing lung lobe segmentation and multi-instance classification. Considering that only a few infection regions in a CT image are related to the severity assessment, we first represent each input image by a bag that contains a set of 2D image patches (each cropped from a specific slice). A multi-task multi-instance deep network (called M2UNet) is then developed to simultaneously assess the severity of COVID-19 patients and segment the lung lobes. Our M2UNet consists of a patch-level encoder, a segmentation sub-network for lung lobe segmentation, and a classification sub-network for severity assessment (with a unique hierarchical multi-instance learning strategy). Here, the context information provided by segmentation can be implicitly employed to improve the performance of severity assessment. Extensive experiments were performed on a real COVID-19 CT image dataset consisting of 666 chest CT images, with results suggesting the effectiveness of our proposed method compared to several state-of-the-art methods.
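A toy rendering of the multi-instance severity branch: each CT volume becomes a bag of 2D patches, a shared encoder embeds every patch, and bag-level pooling feeds the severity classifier. The pooling choice (a plain mean), layer sizes, and class count are illustrative assumptions; the paper's hierarchical multi-instance strategy and joint segmentation branch are omitted for brevity.

import torch
import torch.nn as nn

class PatchBagClassifier(nn.Module):
    """Hypothetical bag-of-patches severity classifier."""
    def __init__(self, embed_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(  # shared patch-level encoder
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, embed_dim), nn.ReLU(),
        )
        self.classifier = nn.Linear(embed_dim, 2)  # e.g., severe vs. non-severe

    def forward(self, bag):           # bag: (num_patches, 1, H, W)
        feats = self.encoder(bag)     # per-instance embeddings
        bag_feat = feats.mean(dim=0)  # pool instances into one bag embedding
        return self.classifier(bag_feat)

bag = torch.randn(12, 1, 32, 32)      # 12 patches cropped from one CT volume
print(PatchBagClassifier()(bag))      # severity logits for the whole bag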
Affiliation(s)
- Kelei He
- Medical School of Nanjing University, Nanjing, China
- National Institute of Healthcare Data Science at Nanjing University, China
| | - Wei Zhao
- Department of Radiology, the Second Xiangya Hospital, Central South University, Changsha, Hunan, China
| | - Xingzhi Xie
- Department of Radiology, the Second Xiangya Hospital, Central South University, Changsha, Hunan, China
| | - Wen Ji
- National Institute of Healthcare Data Science at Nanjing University, China
- State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
| | - Mingxia Liu
- Biomedical Research Imaging Center and the Department of Radiology, University of North Carolina, Chapel Hill, NC, USA
| | - Zhenyu Tang
- Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, Beijing, China
| | - Yinghuan Shi
- National Institute of Healthcare Data Science at Nanjing University, China
- State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
| | - Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Yang Gao
- National Institute of Healthcare Data Science at Nanjing University, China
- State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
| | - Jun Liu
- Department of Radiology, the Second Xiangya Hospital, Central South University, Changsha, Hunan, China
- Department of Radiology Quality Control Center, Changsha, China
| | - Junfeng Zhang
- Medical School of Nanjing University, Nanjing, China
- National Institute of Healthcare Data Science at Nanjing University, China
| | - Dinggang Shen
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Department of Artificial Intelligence, Korea University, Seoul, Republic of Korea
| |
|
38
|
He K, Lian C, Adeli E, Huo J, Gao Y, Zhang B, Zhang J, Shen D. MetricUNet: Synergistic image- and voxel-level learning for precise prostate segmentation via online sampling. Med Image Anal 2021; 71:102039. [PMID: 33831595 DOI: 10.1016/j.media.2021.102039] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2020] [Revised: 02/13/2021] [Accepted: 03/09/2021] [Indexed: 10/21/2022]
Abstract
Fully convolutional networks (FCNs), including UNet and VNet, are widely used network architectures for semantic segmentation in recent studies. However, a conventional FCN is typically trained with the cross-entropy or Dice loss, which only calculates the error between predictions and ground-truth labels for pixels individually. This often results in non-smooth neighborhoods in the predicted segmentation. The problem becomes more serious in CT prostate segmentation, as CT images are usually of low tissue contrast. To address this problem, we propose a two-stage framework, with the first stage quickly localizing the prostate region and the second stage precisely segmenting the prostate with a multi-task UNet architecture. We introduce a novel online metric learning module through voxel-wise sampling in the multi-task network. The proposed network therefore has a dual-branch architecture that tackles two tasks: (1) a segmentation sub-network aiming to generate the prostate segmentation, and (2) a voxel-metric learning sub-network aiming to improve the quality of the learned feature space, supervised by a metric loss. Specifically, the voxel-metric learning sub-network samples tuples (including triplets and pairs) at the voxel level from the intermediate feature maps. Unlike conventional deep metric learning methods that generate triplets or pairs at the image level before the training phase, our proposed voxel-wise tuples are sampled in an online manner and operated in an end-to-end fashion via multi-task learning. To evaluate the proposed method, we conducted extensive experiments on a real CT image dataset consisting of 339 patients. The ablation studies show that our method can effectively learn more representative voxel-level features compared with conventional learning methods using the cross-entropy or Dice loss. The comparisons show that the proposed method outperforms the state-of-the-art methods by a reasonable margin.
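The online voxel-wise sampling can be sketched as follows: anchor and positive voxels share a label, negatives differ, and tuples are drawn from the intermediate feature map during the forward pass rather than precomputed at the image level. Sampling counts and the margin are arbitrary here; this is a sketch of the idea, not the authors' module.

import torch
import torch.nn.functional as F

def voxel_triplet_loss(feat, labels, n_triplets=64, margin=1.0):
    # feat: (C, H, W) intermediate feature map; labels: (H, W) binary mask
    c = feat.shape[0]
    feat = feat.reshape(c, -1).t()          # (H*W, C) voxel embeddings
    labels = labels.reshape(-1)
    pos_idx = torch.nonzero(labels == 1).squeeze(1)
    neg_idx = torch.nonzero(labels == 0).squeeze(1)
    # Draw random anchor/positive voxels from the organ, negatives from outside.
    a = pos_idx[torch.randint(len(pos_idx), (n_triplets,))]
    p = pos_idx[torch.randint(len(pos_idx), (n_triplets,))]
    n = neg_idx[torch.randint(len(neg_idx), (n_triplets,))]
    return F.triplet_margin_loss(feat[a], feat[p], feat[n], margin=margin)

feat = torch.randn(16, 32, 32, requires_grad=True)
labels = (torch.rand(32, 32) > 0.5).long()
print(voxel_triplet_loss(feat, labels))  # differentiable metric loss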
Affiliation(s)
- Kelei He
- Medical School of Nanjing University, Nanjing, China; National Institute of Healthcare Data Science at Nanjing University, Nanjing, China
| | - Chunfeng Lian
- School of Mathematics and Statistics, Xi'an Jiaotong University, Shanxi, China
| | - Ehsan Adeli
- Department of Psychiatry and Behavioral Sciences and the Department of Computer Science, Stanford University, CA, USA
| | - Jing Huo
- State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
| | - Yang Gao
- National Institute of Healthcare Data Science at Nanjing University, Nanjing, China; State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
| | - Bing Zhang
- Department of Radiology, Nanjing Drum Tower Hospital, Nanjing University Medical School, Nanjing, China
| | - Junfeng Zhang
- Medical School of Nanjing University, Nanjing, China; National Institute of Healthcare Data Science at Nanjing University, Nanjing, China.
| | - Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Department of Artificial Intelligence, Korea University, Seoul 02841, Republic of Korea.
| |
|
39
|
Sun H, Fan R, Li C, Lu Z, Xie K, Ni X, Yang J. Imaging Study of Pseudo-CT Synthesized From Cone-Beam CT Based on 3D CycleGAN in Radiotherapy. Front Oncol 2021; 11:603844. [PMID: 33777746 PMCID: PMC7994515 DOI: 10.3389/fonc.2021.603844] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2020] [Accepted: 02/01/2021] [Indexed: 11/13/2022] Open
Abstract
Purpose To propose a synthesis method for pseudo-CT (CTCycleGAN) images based on an improved 3D cycle generative adversarial network (CycleGAN), addressing the limitation that cone-beam CT (CBCT) cannot be directly applied to the correction of radiotherapy plans. Methods An improved U-Net with residual connections and attention gates was used as the generator, and the discriminator was a fully convolutional network (FCN). The imaging quality of pseudo-CT images was improved by adding a 3D gradient loss function. Five-fold cross-validation was performed to validate our model. Each generated pseudo CT was compared against the real CT image (ground truth CT, CTgt) of the same patient based on mean absolute error (MAE) and structural similarity index (SSIM). The dice similarity coefficient (DSC) was used to evaluate the segmentation results of pseudo CT and real CT. 3D CycleGAN performance was compared to 2D CycleGAN based on normalized mutual information (NMI) and peak signal-to-noise ratio (PSNR) metrics between the pseudo-CT and CTgt images. The dosimetric accuracy of pseudo-CT images was evaluated by gamma analysis. Results The MAE values between CTCycleGAN and the real CT in five-fold cross-validation were 52.03 ± 4.26 HU, 50.69 ± 5.25 HU, 52.48 ± 4.42 HU, 51.27 ± 4.56 HU, and 51.65 ± 3.97 HU, and the SSIM values were 0.87 ± 0.02, 0.86 ± 0.03, 0.85 ± 0.02, 0.85 ± 0.03, and 0.87 ± 0.03, respectively. The DSC values for the segmentation of bladder, cervix, rectum, and bone between CTCycleGAN and real CT images were 91.58 ± 0.45, 88.14 ± 1.26, 87.23 ± 2.01, and 92.59 ± 0.33, respectively. Compared with 2D CycleGAN, the 3D CycleGAN-based pseudo-CT images were closer to the real images, with an NMI value of 0.90 ± 0.01 and a PSNR value of 30.70 ± 0.78. The gamma pass rate of the dose distribution between CTCycleGAN and CTgt was 97.0% (2%/2 mm). Conclusion The pseudo-CT images obtained based on the improved 3D CycleGAN have more accurate electron density and anatomical structure.
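A minimal version of a 3D gradient loss of the kind mentioned in the Methods: finite differences along each volume axis are matched between prediction and ground truth, encouraging sharp, correctly placed edges. The exact formulation and weighting in the paper may differ; this is an assumption-laden sketch.

import torch

def gradient_loss_3d(pred, target):
    # pred, target: (N, C, D, H, W) volumes
    loss = 0.0
    for dim in (2, 3, 4):  # depth, height, width axes
        dp = pred.diff(dim=dim)     # finite differences of the prediction
        dt = target.diff(dim=dim)   # finite differences of the ground truth
        loss = loss + (dp - dt).abs().mean()
    return loss

pred = torch.randn(1, 1, 8, 32, 32, requires_grad=True)
target = torch.randn(1, 1, 8, 32, 32)
print(gradient_loss_3d(pred, target))  # scalar, differentiable w.r.t. pred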
Affiliation(s)
- Hongfei Sun
- School of Automation, Northwestern Polytechnical University, Xi'an, China
| | - Rongbo Fan
- School of Automation, Northwestern Polytechnical University, Xi'an, China
| | - Chunying Li
- Department of Radiotherapy, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, China; The Center of Medical Physics, Nanjing Medical University, Changzhou, China; The Key Laboratory of Medical Physics, Changzhou, China
| | - Zhengda Lu
- Department of Radiotherapy, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, China; The Center of Medical Physics, Nanjing Medical University, Changzhou, China; The Key Laboratory of Medical Physics, Changzhou, China
| | - Kai Xie
- Department of Radiotherapy, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, China; The Center of Medical Physics, Nanjing Medical University, Changzhou, China; The Key Laboratory of Medical Physics, Changzhou, China
| | - Xinye Ni
- Department of Radiotherapy, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, China; The Center of Medical Physics, Nanjing Medical University, Changzhou, China; The Key Laboratory of Medical Physics, Changzhou, China
| | - Jianhua Yang
- School of Automation, Northwestern Polytechnical University, Xi'an, China
| |
|
40
|
Mandel W, Oulbacha R, Roy-Beaudry M, Parent S, Kadoury S. Image-Guided Tethering Spine Surgery With Outcome Prediction Using Spatio-Temporal Dynamic Networks. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:491-502. [PMID: 33048671 DOI: 10.1109/tmi.2020.3030741] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Recent fusionless surgical techniques for corrective spine surgery, such as Anterior Vertebral Body Growth Modulation (AVBGM), make it possible to treat mild to severe spinal deformations by tethering vertebral bodies together, helping to preserve lower back flexibility. Forecasting the outcome of AVBGM in skeletally immature patients remains elusive, with several factors involved in corrective vertebral tethering, but could help orthopaedic surgeons plan and tailor AVBGM procedures prior to surgery. We introduce an intra-operative framework forecasting the outcomes of AVBGM surgery in scoliosis patients. The method is based on spatio-temporal corrective networks, which learn the similarity in segmental corrections between patients and integrate a long-term shifting mechanism designed to cope with timing differences in onset-to-surgery dates between patients in the training set. The model captures dynamic geometric dependencies in scoliosis patients, ensuring long-term dependency with temporal dynamics in curve evolution, and integrates features from inter-vertebral disks extracted from T2-w MRI. The loss function of the network introduces a regularization term based on a learned group-average piecewise-geodesic path to ensure the generated corrective transformations are coherent with regard to the observed evolution of spine corrections at follow-up exams. The network was trained on 695 3D spine models and tested on 72 operative patients, using a set of 3D spine reconstructions as inputs. The spatio-temporal network predicted outputs with errors of 1.8 ± 0.8 mm in 3D anatomical landmarks, yielding geometries similar to ground-truth spine reconstructions obtained at one- and two-year follow-ups, with significant improvements over comparative deep learning and biomechanical models.
|
41
|
Gonzalez Y, Shen C, Jung H, Nguyen D, Jiang SB, Albuquerque K, Jia X. Semi-automatic sigmoid colon segmentation in CT for radiation therapy treatment planning via an iterative 2.5-D deep learning approach. Med Image Anal 2021; 68:101896. [PMID: 33383333 PMCID: PMC7847132 DOI: 10.1016/j.media.2020.101896] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2020] [Revised: 11/03/2020] [Accepted: 11/04/2020] [Indexed: 10/22/2022]
Abstract
Automatic sigmoid colon segmentation in CT for radiotherapy treatment planning is challenging due to complex organ shape, close proximity to other organs, and large variations in size, shape, and filling status. The patient bowel is often not evacuated, and CT contrast enhancement is not used, which further increases the difficulty of the problem. Deep learning (DL) has demonstrated its power in many segmentation problems. However, standard 2-D approaches cannot handle the sigmoid segmentation problem due to incomplete geometry information, and 3-D approaches often encounter the challenge of a limited training data size. Motivated by the way humans segment the sigmoid slice by slice while considering connectivity between adjacent slices, we propose an iterative 2.5-D DL approach to solve this problem. We constructed a network that takes an axial CT slice, the sigmoid mask in this slice, and an adjacent CT slice as input, and outputs the predicted mask on the adjacent slice. We also considered other organ masks as prior information. We trained the iterative network on 50 patient cases using five-fold cross-validation. The trained network was repeatedly applied to generate masks slice by slice. The method achieved average Dice similarity coefficients of 0.82 ± 0.06 and 0.88 ± 0.02 in 10 test cases without and with prior information, respectively.
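The iterative slice-by-slice application can be sketched as a short loop: given the mask on one axial slice, the network predicts the mask on the adjacent slice, and the loop walks through the volume. The placeholder network below takes (current slice, current mask, next slice) as a 3-channel input; the architecture, threshold, and single sweep direction are illustrative assumptions.

import torch
import torch.nn as nn

net = nn.Sequential(  # stand-in for the trained 2.5-D network
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid(),
)

def propagate_masks(volume, init_mask, start=0):
    # volume: (D, H, W) CT volume; init_mask: (H, W) mask on slice `start`
    masks = {start: init_mask}
    for z in range(start, volume.shape[0] - 1):
        # Channels: current slice, current mask, adjacent slice to segment.
        x = torch.stack([volume[z], masks[z], volume[z + 1]]).unsqueeze(0)
        with torch.no_grad():
            masks[z + 1] = (net(x)[0, 0] > 0.5).float()
    return masks

vol = torch.randn(10, 64, 64)
masks = propagate_masks(vol, (torch.rand(64, 64) > 0.5).float())
print(len(masks))  # one mask per slice from the start slice onward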
Affiliation(s)
- Yesenia Gonzalez
- innovative Technology of Radiotherapy Computation and Hardware (iTORCH) Laboratory. Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA; Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
| | - Chenyang Shen
- innovative Technology of Radiotherapy Computation and Hardware (iTORCH) Laboratory. Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA; Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA.
| | - Hyunuk Jung
- innovative Technology of Radiotherapy Computation and Hardware (iTORCH) Laboratory. Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA; Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
| | - Dan Nguyen
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
| | - Steve B Jiang
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
| | - Kevin Albuquerque
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
| | - Xun Jia
- innovative Technology of Radiotherapy Computation and Hardware (iTORCH) Laboratory. Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA; Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA.
| |
|
42
|
Ben Tamou A, Benzinou A, Nasreddine K. Multi-stream fish detection in unconstrained underwater videos by the fusion of two convolutional neural network detectors. APPL INTELL 2021. [DOI: 10.1007/s10489-020-02155-8] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
43
|
Wang S, Liu M, Lian J, Shen D. Boundary Coding Representation for Organ Segmentation in Prostate Cancer Radiotherapy. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:310-320. [PMID: 32956051 PMCID: PMC8202780 DOI: 10.1109/tmi.2020.3025517] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
Accurate segmentation of the prostate and organs at risk (OARs, e.g., bladder and rectum) in male pelvic CT images is a critical step for prostate cancer radiotherapy. Unfortunately, unclear organ boundaries and large shape variations make the segmentation task very challenging. Previous studies usually used representations defined directly on the unclear boundaries as context information to guide segmentation. Such boundary representations may not be discriminative enough, resulting in limited performance improvement. To this end, we propose a novel boundary coding network (BCnet) to learn a discriminative representation of the organ boundary and use it as context information to guide the segmentation. Specifically, we design a two-stage learning strategy in the proposed BCnet: 1) Boundary coding representation learning. Two sub-networks, supervised by the dilation and erosion masks transformed from the manually delineated organ mask, are first trained separately to learn the spatial-semantic context near the organ boundary. We then encode the organ boundary based on the predictions of these two sub-networks and design a multi-atlas based refinement strategy that transfers knowledge from the training data to inference. 2) Organ segmentation. The boundary coding representation, as context information, is used together with the image patches to train the final segmentation network. Experimental results on a large and diverse male pelvic CT dataset show that our method achieves superior performance compared with several state-of-the-art methods.
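The dilation and erosion supervision masks in stage 1 are straightforward to construct with binary morphology; a sketch follows, with the number of iterations chosen arbitrarily for illustration. The band between the two masks brackets the organ boundary that the two sub-networks learn to characterize.

import numpy as np
from scipy import ndimage

def boundary_supervision_masks(organ_mask, iterations=3):
    # Morphological dilation/erosion of the manually delineated organ mask.
    dilated = ndimage.binary_dilation(organ_mask, iterations=iterations)
    eroded = ndimage.binary_erosion(organ_mask, iterations=iterations)
    boundary_band = dilated & ~eroded  # region bracketing the organ boundary
    return dilated, eroded, boundary_band

mask = np.zeros((64, 64), dtype=bool)
mask[20:44, 20:44] = True              # toy organ delineation
d, e, band = boundary_supervision_masks(mask)
print(band.sum(), "voxels in the boundary band")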
|
44
|
Kiljunen T, Akram S, Niemelä J, Löyttyniemi E, Seppälä J, Heikkilä J, Vuolukka K, Kääriäinen OS, Heikkilä VP, Lehtiö K, Nikkinen J, Gershkevitsh E, Borkvel A, Adamson M, Zolotuhhin D, Kolk K, Pang EPP, Tuan JKL, Master Z, Chua MLK, Joensuu T, Kononen J, Myllykangas M, Riener M, Mokka M, Keyriläinen J. A Deep Learning-Based Automated CT Segmentation of Prostate Cancer Anatomy for Radiation Therapy Planning-A Retrospective Multicenter Study. Diagnostics (Basel) 2020; 10:E959. [PMID: 33212793 PMCID: PMC7697786 DOI: 10.3390/diagnostics10110959] [Citation(s) in RCA: 42] [Impact Index Per Article: 8.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2020] [Revised: 11/06/2020] [Accepted: 11/13/2020] [Indexed: 12/24/2022] Open
Abstract
A commercial deep learning (DL)-based automated segmentation tool (AST) for computed tomography (CT) was evaluated for accuracy and efficiency gains in prostate cancer patients. Thirty patients from six clinics were reviewed with manual (MC), automated (AC), and automated-and-edited (AEC) contouring methods. In the AEC group, the automatically created contours (prostate, seminal vesicles, bladder, rectum, femoral heads, and penile bulb) were edited, whereas the MC group started from empty datasets. In one clinic, lymph node CTV delineations were evaluated for interobserver variability. Compared to MC, the mean time saved using the AST was 12 min for the whole dataset (46%) and 12 min for the lymph node CTV (60%), respectively. The delineation consistency between the MC and AEC groups, according to the Dice similarity coefficient (DSC), improved from 0.78 to 0.94 for the whole dataset and from 0.76 to 0.91 for the lymph nodes. The mean DSCs between MC and AC across all six clinics were 0.82 for prostate, 0.72 for seminal vesicles, 0.93 for bladder, 0.84 for rectum, 0.69 for femoral heads, and 0.51 for penile bulb. This study shows that using a general DL-based AST for CT images saves time and improves consistency.
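For reference, the Dice similarity coefficient used throughout these comparisons has a one-line definition, DSC = 2|A∩B| / (|A| + |B|); a plain binary-mask implementation is sketched below.

import numpy as np

def dice(a, b, eps=1e-8):
    # a, b: binary masks of the same shape; eps guards against empty masks
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

x = np.zeros((32, 32), bool); x[8:24, 8:24] = True
y = np.zeros((32, 32), bool); y[10:26, 10:26] = True
print(round(dice(x, y), 3))  # overlap of two offset squares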
Affiliation(s)
- Timo Kiljunen
- Docrates Cancer Center, Saukonpaadenranta 2, FI-00180 Helsinki, Finland; (T.J.); (J.K.); (M.M.); (M.R.)
| | - Saad Akram
- MVision Ai, c/o Terkko Health hub, Haartmaninkatu 4, FI-00290 Helsinki, Finland; (S.A.); (J.N.)
| | - Jarkko Niemelä
- MVision Ai, c/o Terkko Health hub, Haartmaninkatu 4, FI-00290 Helsinki, Finland; (S.A.); (J.N.)
| | - Eliisa Löyttyniemi
- Department of Biostatistics, University of Turku, Kiinamyllynkatu 10, FI-20014 Turku, Finland;
| | - Jan Seppälä
- Kuopio University Hospital, Center of Oncology, Kelkkailijantie 7, FI-70210 Kuopio, Finland; (J.S.); (J.H.); (K.V.); (O.-S.K.)
| | - Janne Heikkilä
- Kuopio University Hospital, Center of Oncology, Kelkkailijantie 7, FI-70210 Kuopio, Finland; (J.S.); (J.H.); (K.V.); (O.-S.K.)
| | - Kristiina Vuolukka
- Kuopio University Hospital, Center of Oncology, Kelkkailijantie 7, FI-70210 Kuopio, Finland; (J.S.); (J.H.); (K.V.); (O.-S.K.)
| | - Okko-Sakari Kääriäinen
- Kuopio University Hospital, Center of Oncology, Kelkkailijantie 7, FI-70210 Kuopio, Finland; (J.S.); (J.H.); (K.V.); (O.-S.K.)
| | - Vesa-Pekka Heikkilä
- Oulu University Hospital, Department of Oncology and Radiotherapy, Kajaanintie 50, FI-90220 Oulu, Finland; (V.-P.H.); (K.L.); (J.N.)
- University of Oulu, Research Unit of Medical Imaging, Physics and Technology, Aapistie 5 A, FI-90220 Oulu, Finland
| | - Kaisa Lehtiö
- Oulu University Hospital, Department of Oncology and Radiotherapy, Kajaanintie 50, FI-90220 Oulu, Finland; (V.-P.H.); (K.L.); (J.N.)
| | - Juha Nikkinen
- Oulu University Hospital, Department of Oncology and Radiotherapy, Kajaanintie 50, FI-90220 Oulu, Finland; (V.-P.H.); (K.L.); (J.N.)
- University of Oulu, Research Unit of Medical Imaging, Physics and Technology, Aapistie 5 A, FI-90220 Oulu, Finland
| | - Eduard Gershkevitsh
- North Estonia Medical Centre, J. Sütiste tee 19, 13419 Tallinn, Estonia; (E.G.); (A.B.); (M.A.); (D.Z.); (K.K.)
| | - Anni Borkvel
- North Estonia Medical Centre, J. Sütiste tee 19, 13419 Tallinn, Estonia; (E.G.); (A.B.); (M.A.); (D.Z.); (K.K.)
| | - Merve Adamson
- North Estonia Medical Centre, J. Sütiste tee 19, 13419 Tallinn, Estonia; (E.G.); (A.B.); (M.A.); (D.Z.); (K.K.)
| | - Daniil Zolotuhhin
- North Estonia Medical Centre, J. Sütiste tee 19, 13419 Tallinn, Estonia; (E.G.); (A.B.); (M.A.); (D.Z.); (K.K.)
| | - Kati Kolk
- North Estonia Medical Centre, J. Sütiste tee 19, 13419 Tallinn, Estonia; (E.G.); (A.B.); (M.A.); (D.Z.); (K.K.)
| | - Eric Pei Ping Pang
- National Cancer Centre Singapore, Division of Radiation Oncology, 11 Hospital Crescent, Singapore 169610, Singapore; (E.P.P.P); (J.K.L.T); (Z.M.); (M.L.K.C)
| | - Jeffrey Kit Loong Tuan
- National Cancer Centre Singapore, Division of Radiation Oncology, 11 Hospital Crescent, Singapore 169610, Singapore; (E.P.P.P); (J.K.L.T); (Z.M.); (M.L.K.C)
- Oncology Academic Programme, Duke-NUS Medical School, Singapore 169857, Singapore
| | - Zubin Master
- National Cancer Centre Singapore, Division of Radiation Oncology, 11 Hospital Crescent, Singapore 169610, Singapore; (E.P.P.P); (J.K.L.T); (Z.M.); (M.L.K.C)
| | - Melvin Lee Kiang Chua
- National Cancer Centre Singapore, Division of Radiation Oncology, 11 Hospital Crescent, Singapore 169610, Singapore; (E.P.P.P); (J.K.L.T); (Z.M.); (M.L.K.C)
- Oncology Academic Programme, Duke-NUS Medical School, Singapore 169857, Singapore
- National Cancer Centre Singapore, Division of Medical Sciences, Singapore 169610, Singapore
| | - Timo Joensuu
- Docrates Cancer Center, Saukonpaadenranta 2, FI-00180 Helsinki, Finland; (T.J.); (J.K.); (M.M.); (M.R.)
| | - Juha Kononen
- Docrates Cancer Center, Saukonpaadenranta 2, FI-00180 Helsinki, Finland; (T.J.); (J.K.); (M.M.); (M.R.)
| | - Mikko Myllykangas
- Docrates Cancer Center, Saukonpaadenranta 2, FI-00180 Helsinki, Finland; (T.J.); (J.K.); (M.M.); (M.R.)
| | - Maigo Riener
- Docrates Cancer Center, Saukonpaadenranta 2, FI-00180 Helsinki, Finland; (T.J.); (J.K.); (M.M.); (M.R.)
| | - Miia Mokka
- Turku University Hospital, Department of Oncology and Radiotherapy, Hämeentie 11, FI-20521 Turku, Finland; (M.M.); (J.K.)
| | - Jani Keyriläinen
- Turku University Hospital, Department of Oncology and Radiotherapy, Hämeentie 11, FI-20521 Turku, Finland; (M.M.); (J.K.)
- Turku University Hospital, Department of Medical Physics, Hämeentie 11, FI-20521 Turku, Finland
| |
|
45
|
Fang X, Yan P. Multi-Organ Segmentation Over Partially Labeled Datasets With Multi-Scale Feature Abstraction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:3619-3629. [PMID: 32746108 PMCID: PMC7665851 DOI: 10.1109/tmi.2020.3001036] [Citation(s) in RCA: 66] [Impact Index Per Article: 13.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
A shortage of fully annotated datasets has been a limiting factor in developing deep learning-based image segmentation algorithms, and the problem becomes more pronounced in multi-organ segmentation. In this paper, we propose a unified training strategy that enables a novel multi-scale deep neural network to be trained on multiple partially labeled datasets for multi-organ segmentation. In addition, a new network architecture for multi-scale feature abstraction is proposed to integrate pyramid input and feature analysis into a U-shape pyramid structure. To bridge the semantic gap caused by directly merging features from different scales, an equal convolutional depth mechanism is introduced. Furthermore, we employ a deep supervision mechanism to refine the outputs at different scales. To fully leverage the segmentation features from all scales, we design an adaptive weighting layer to fuse the outputs automatically. All these mechanisms together are integrated into a Pyramid Input Pyramid Output Feature Abstraction Network (PIPO-FAN). Our proposed method was evaluated on four publicly available datasets (BTCV, LiTS, KiTS, and Spleen), where very promising performance was achieved. The source code of this work is publicly shared at https://github.com/DIAL-RPI/PIPO-FAN to help others reproduce the work and build their own models using the introduced mechanisms.
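The adaptive weighting layer can be sketched as a learned softmax fusion of per-scale outputs resized to a common resolution; the parameterization and resizing choices below are illustrative assumptions rather than the released PIPO-FAN code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveScaleFusion(nn.Module):
    """Hypothetical fusion layer: learned weights over per-scale logits."""
    def __init__(self, n_scales):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(n_scales))  # learned fusion weights

    def forward(self, logits_per_scale):
        size = logits_per_scale[0].shape[-2:]   # fuse at the finest resolution
        resized = [F.interpolate(l, size=size, mode='bilinear',
                                 align_corners=False)
                   for l in logits_per_scale]
        w = torch.softmax(self.weights, dim=0)  # weights sum to one
        return sum(wi * li for wi, li in zip(w, resized))

outs = [torch.randn(1, 2, 64, 64), torch.randn(1, 2, 32, 32),
        torch.randn(1, 2, 16, 16)]
print(AdaptiveScaleFusion(3)(outs).shape)  # fused logits at full resolution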
|
46
|
Liu QM, Jia RS, Liu YB, Sun HB, Yu JZ, Sun HM. Infrared image super-resolution reconstruction by using generative adversarial network with an attention mechanism. APPL INTELL 2020. [DOI: 10.1007/s10489-020-01987-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
|
47
|
Xu X, Lian C, Wang S, Wang A, Royce T, Chen R, Lian J, Shen D. Asymmetrical Multi-task Attention U-Net for the Segmentation of Prostate Bed in CT Image. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2020; 12264:470-479. [PMID: 34179897 PMCID: PMC8221064 DOI: 10.1007/978-3-030-59719-1_46] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/07/2023]
Abstract
Segmentation of the prostate bed, the residual tissue after removal of the prostate gland, is an essential prerequisite for post-prostatectomy radiotherapy, but also a challenging task due to its non-contrast boundaries and highly variable shapes that depend on neighboring organs. In this work, we propose a novel deep learning-based method to automatically segment this "invisible target". The main idea of our design is to take reference from the surrounding normal structures (bladder and rectum) and leverage this information to facilitate the prostate bed segmentation. To achieve this goal, we first use a U-Net as the backbone network to perform the bladder and rectum segmentation, which serves as a low-level task that provides references for the high-level task of prostate bed segmentation. Based on the backbone network, we build a novel attention network with a series of cascaded attention modules to further extract discriminative features for the high-level prostate bed segmentation task. Since the attention network has a one-sided dependency on the backbone network, mirroring the clinical workflow in which normal structures guide the segmentation of the radiotherapy target, we name the final composite model the asymmetrical multi-task attention U-Net. Extensive experiments on a clinical dataset consisting of 186 CT images demonstrate the effectiveness of this new design and the superior performance of the model in comparison to conventional atlas-based methods for prostate bed segmentation. The source code is publicly available at https://github.com/superxuang/amta-net.
Affiliation(s)
- Xuanang Xu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Chunfeng Lian
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Shuai Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Andrew Wang
- Department of Radiation Oncology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Trevor Royce
- Department of Radiation Oncology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Ronald Chen
- Department of Radiation Oncology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Jun Lian
- Department of Radiation Oncology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| |
|
48
|
Hypergraph membrane system based F2 fully convolutional neural network for brain tumor segmentation. Appl Soft Comput 2020. [DOI: 10.1016/j.asoc.2020.106454] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
|
49
|
Deep Learning in Radiation Oncology Treatment Planning for Prostate Cancer: A Systematic Review. J Med Syst 2020; 44:179. [DOI: 10.1007/s10916-020-01641-3] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2020] [Accepted: 08/05/2020] [Indexed: 12/11/2022]
|
50
|
Feng F, Ashton-Miller JA, DeLancey JOL, Luo J. Convolutional neural network-based pelvic floor structure segmentation using magnetic resonance imaging in pelvic organ prolapse. Med Phys 2020; 47:4281-4293. [DOI: 10.1002/mp.14377] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2019] [Revised: 06/18/2020] [Accepted: 06/22/2020] [Indexed: 11/11/2022] Open
Affiliation(s)
- Fei Feng
- University of Michigan-Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai 200240, China
| | | | - John O. L. DeLancey
- Department of Obstetrics and Gynecology, University of Michigan, Ann Arbor, MI 48109, USA
| | - Jiajia Luo
- Biomedical Engineering Department, Peking University, Beijing 100191, China
| |
|