1
Aleong AM, Berlin A, Borg J, Helou J, Beiki-Ardakani A, Rink A, Raman S, Chung P, Weersink RA. Rapid multi-catheter segmentation for magnetic resonance image-guided catheter-based interventions. Med Phys 2024; 51:5361-5373. [PMID: 38713919] [DOI: 10.1002/mp.17117]
Abstract
BACKGROUND Magnetic resonance imaging (MRI) is the gold standard for delineating cancerous lesions in soft tissue. Catheter-based interventions require the accurate placement of multiple long, flexible catheters at the target site. Manual segmentation of catheters in MR images is a challenging and time-consuming task, so automated catheter segmentation is needed to improve the efficiency of MR-guided procedures. PURPOSE To develop and assess a machine learning algorithm for detecting multiple catheters in magnetic resonance images acquired during catheter-based interventions. METHODS A 3D U-Net was trained to retrospectively segment catheters in scans acquired during clinical MR-guided high dose rate (HDR) prostate brachytherapy cases. To assess confidence in the segmentation, multiple AI models were trained. On clinical test cases, the averaged segmentation results were used to plan the brachytherapy delivery, and dosimetric parameters were compared to the original clinical plan. Data were obtained from 35 patients who underwent HDR prostate brachytherapy for focal disease, comprising 214 image volumes in total. 185 image volumes from 30 patients were used for training, with a five-fold cross-validation split dividing the data into training and validation sets. To generate confidence measures of segmentation accuracy, five trained models were generated. The remaining five patients (29 volumes) were used to test the performance of the trained model by comparison to manual segmentations by three independent observers and assessment of the dosimetric impact on the final clinical brachytherapy plans. RESULTS The network successfully identified 95% of catheters in the test set at a rate of 0.89 s per volume. The multi-model method identified the small number of cases where AI segmentation of individual catheters was poor, flagging the need for user input. AI-based segmentation performed as well as segmentations by independent observers, and plan dosimetry using AI-segmented catheters was comparable to the original plan. CONCLUSION The vast majority of catheters were accurately identified by AI segmentation, with minimal impact on plan outcomes. The use of multiple AI models provided confidence in the segmentation accuracy and identified catheter segmentations that required further manual assessment. Real-time AI catheter segmentation can be used during MR-guided insertions to assess deflections and for rapid planning of prostate brachytherapy.
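The multi-model confidence idea in this abstract — training several networks and flagging catheters where their outputs disagree — can be sketched with a simple agreement score. Everything below (function names, the 0.7 threshold, the toy masks) is illustrative, not the paper's implementation:

```python
import numpy as np

def pairwise_dice(masks):
    """Mean pairwise Dice overlap across an ensemble of binary masks.

    High agreement suggests a confident segmentation; low agreement
    flags a catheter for manual review.
    """
    scores = []
    for i in range(len(masks)):
        for j in range(i + 1, len(masks)):
            a, b = masks[i].astype(bool), masks[j].astype(bool)
            denom = a.sum() + b.sum()
            scores.append(2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0)
    return float(np.mean(scores))

def flag_for_review(masks, threshold=0.7):
    """Return True when ensemble agreement falls below the threshold."""
    return pairwise_dice(masks) < threshold

# Five toy "model outputs" for one catheter: four agree, one is empty.
base = np.zeros((8, 8, 8), dtype=bool)
base[2:6, 4, 4] = True
masks = [base.copy() for _ in range(4)] + [np.zeros_like(base)]
print(flag_for_review(masks))  # low agreement -> flagged
```

The disagreeing fifth mask drags the mean pairwise Dice down to 0.6, so this catheter would be routed to a human, which matches the workflow the abstract describes.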
Affiliation(s)
- Amanda M Aleong
  - Institute of Biomedical Engineering, University of Toronto, Toronto, Ontario, Canada
- Alejandro Berlin
  - Department of Radiation Medicine, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
  - Department of Radiation Oncology, University of Toronto, Toronto, Ontario, Canada
- Jette Borg
  - Department of Radiation Medicine, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
  - Department of Radiation Oncology, University of Toronto, Toronto, Ontario, Canada
- Joelle Helou
  - Department of Radiation Medicine, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
  - Department of Radiation Oncology, University of Toronto, Toronto, Ontario, Canada
- Akbar Beiki-Ardakani
  - Department of Radiation Medicine, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Alexandra Rink
  - Department of Radiation Medicine, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
  - Department of Radiation Oncology, University of Toronto, Toronto, Ontario, Canada
  - Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- Srinivas Raman
  - Department of Radiation Medicine, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
  - Department of Radiation Oncology, University of Toronto, Toronto, Ontario, Canada
- Peter Chung
  - Department of Radiation Medicine, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
  - Department of Radiation Oncology, University of Toronto, Toronto, Ontario, Canada
- Robert A Weersink
  - Institute of Biomedical Engineering, University of Toronto, Toronto, Ontario, Canada
  - Department of Radiation Medicine, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
  - Department of Radiation Oncology, University of Toronto, Toronto, Ontario, Canada
  - Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
2
Xu D, Ma TM, Savjani R, Pham J, Cao M, Yang Y, Kishan AU, Scalzo F, Sheng K. Fully automated segmentation of prostatic urethra for MR-guided radiation therapy. Med Phys 2023; 50:354-364. [PMID: 36106703] [DOI: 10.1002/mp.15983]
Abstract
PURPOSE Accurate delineation of the urethra is a prerequisite for urethral dose reduction in prostate radiotherapy. However, even in magnetic resonance-guided radiation therapy (MRgRT), consistent delineation of the urethra is challenging, particularly in online adaptive radiotherapy. This paper presents a fully automatic MRgRT-based prostatic urethra segmentation framework. METHODS Twenty-eight prostate cancer patients were included in this study. In-house 3D half-Fourier single-shot turbo spin-echo (HASTE) and turbo spin-echo (TSE) sequences were used to image the Foley-free urethra on a 0.35 T MRgRT system. The segmentation pipeline uses 3D nnU-Net as the base and innovatively combines the ground truth and its corresponding radial distance (RD) map during training supervision. Additionally, we evaluated the benefit of incorporating a convolutional long short-term memory (LSTM-Conv) layer and a spatial recurrent convolution layer (RCL) into nnU-Net. A novel slice-by-slice simple exponential smoothing (SEPS) method designed specifically for tubular structures was used to post-process the segmentation results. RESULTS The experimental results show that nnU-Net trained using a combination of Dice, cross-entropy, and RD losses achieved a Dice score of 77.1 ± 2.3% on the testing dataset. With SEPS, the Hausdorff distance (HD) and 95% HD were reduced to 2.95 ± 0.17 mm and 1.84 ± 0.11 mm, respectively. The LSTM-Conv and RCL layers only minimally improved segmentation precision. CONCLUSION We present the first Foley-free MRgRT-based automated urethra segmentation study. Our method is built on a data-driven neural network with novel cost functions and a post-processing step designed for tubular structures. The performance is consistent with the needs of online and offline urethra dose reduction in prostate radiotherapy.
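The radial distance (RD) supervision named in the methods relies on a distance transform of the ground-truth mask. Below is a minimal, numpy-only sketch for toy volumes; a real pipeline would use scipy.ndimage.distance_transform_edt, and the function name and example mask are illustrative:

```python
import numpy as np

def radial_distance_map(mask):
    """Brute-force Euclidean distance transform of a binary mask:
    inside the structure, the distance to the nearest background voxel;
    outside, 0. A stand-in for the radial-distance (RD) map used as
    training supervision in the abstract."""
    mask = np.asarray(mask, dtype=bool)
    bg = np.argwhere(~mask)                 # background coordinates
    rd = np.zeros(mask.shape, dtype=float)
    for idx in np.argwhere(mask):           # foreground voxels only
        rd[tuple(idx)] = np.sqrt(((bg - idx) ** 2).sum(axis=1)).min()
    return rd

# A 1D toy "urethra": the centre voxel is deepest inside the structure.
m = np.array([0, 1, 1, 1, 0], dtype=bool)
print(radial_distance_map(m))  # [0. 1. 2. 1. 0.]
```

The RD map grows toward the structure's medial axis, which is why pairing it with Dice and cross-entropy penalizes predictions that miss the tube's centreline.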
Affiliation(s)
- Di Xu
  - Department of Computer Science, University of California, Los Angeles, California, USA
  - Department of Radiation Oncology, University of California, Los Angeles, California, USA
- Ting Martin Ma
  - Department of Radiation Oncology, University of California, Los Angeles, California, USA
- Ricky Savjani
  - Department of Radiation Oncology, University of California, Los Angeles, California, USA
- Jonathan Pham
  - Department of Radiation Oncology, University of California, Los Angeles, California, USA
- Minsong Cao
  - Department of Radiation Oncology, University of California, Los Angeles, California, USA
- Yingli Yang
  - Department of Radiation Oncology, University of California, Los Angeles, California, USA
- Amar U Kishan
  - Department of Radiation Oncology, University of California, Los Angeles, California, USA
- Fabien Scalzo
  - Department of Computer Science, Pepperdine University, Los Angeles, California, USA
- Ke Sheng
  - Department of Radiation Oncology, University of California, Los Angeles, California, USA
3
4
Rodgers JR, Hrinivich WT, Surry K, Velker V, D'Souza D, Fenster A. A semiautomatic segmentation method for interstitial needles in intraoperative 3D transvaginal ultrasound images for high-dose-rate gynecologic brachytherapy of vaginal tumors. Brachytherapy 2020; 19:659-668. [PMID: 32631651] [DOI: 10.1016/j.brachy.2020.05.006]
Abstract
PURPOSE The purpose of this study was to evaluate a semiautomatic algorithm for simultaneously segmenting multiple high-dose-rate (HDR) gynecologic interstitial brachytherapy (ISBT) needles in three-dimensional (3D) transvaginal ultrasound (TVUS) images, with the aim of providing a clinically useful tool for intraoperative implant assessment. METHODS AND MATERIALS A needle segmentation algorithm previously developed for HDR prostate brachytherapy was adapted and extended to 3D TVUS images from gynecologic ISBT patients with vaginal tumors. Two patients were used for refining and validating the modified algorithm, and five patients (8-12 needles/patient) were reserved as an unseen test data set. The algorithm filtered the images to enhance needle edges, used intensity peaks to generate feature points, and leveraged the randomized 3D Hough transform to identify candidate needle trajectories. Algorithmic segmentations were compared against manual segmentations, and the calculated dwell positions were evaluated. RESULTS All 50 test data set needles were successfully segmented, with 96% of algorithmically segmented needles having angular differences <3° compared with manually segmented needles, and a maximum Euclidean distance <2.1 mm. The median distance between corresponding dwell positions was 0.77 mm, with 86% of needles having maximum differences <3 mm. The mean segmentation time using the algorithm was <30 s/patient. CONCLUSIONS We successfully segmented multiple needles simultaneously in intraoperative 3D TVUS images from gynecologic HDR-ISBT patients with vaginal tumors and demonstrated the robustness of the algorithmic approach to image artifacts. This method provided accurate segmentations within a clinically efficient timeframe and has the potential to be translated into intraoperative clinical use for implant assessment.
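The core of the trajectory search — generating candidate 3D lines from feature points and keeping the best-supported one — can be sketched with a RANSAC-style sampler in the same spirit as the randomized 3D Hough transform the abstract names (random point pairs "vote" for candidate lines). The names, tolerances, and toy points below are illustrative, not the paper's implementation:

```python
import numpy as np

def randomized_line_fit(points, n_trials=200, inlier_tol=0.5, rng=None):
    """Estimate a dominant 3D line from feature points by repeatedly
    sampling two points, forming a candidate line, and keeping the
    candidate supported by the most inliers (points within inlier_tol
    of the line)."""
    rng = np.random.default_rng(rng)
    pts = np.asarray(points, dtype=float)
    best = (0, None, None)
    for _ in range(n_trials):
        i, j = rng.choice(len(pts), size=2, replace=False)
        d = pts[j] - pts[i]
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d = d / norm
        # Perpendicular distance of every point to the candidate line.
        diff = pts - pts[i]
        dist = np.linalg.norm(diff - np.outer(diff @ d, d), axis=1)
        inliers = int((dist < inlier_tol).sum())
        if inliers > best[0]:
            best = (inliers, pts[i], d)
    return best  # (inlier count, point on line, unit direction)

# Ten feature points along a "needle" on the z-axis, plus two outliers.
pts = [(0, 0, k) for k in range(10)] + [(5, 5, 1), (7, 2, 3)]
count, point, direction = randomized_line_fit(pts, rng=0)
print(count, direction)  # the z-axis needle wins the vote
```

Repeating this while removing each found needle's inliers would extend the sketch to the multi-needle case the paper addresses.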
MESH Headings
- Adenocarcinoma, Clear Cell/radiotherapy
- Adenocarcinoma, Clear Cell/secondary
- Aged
- Aged, 80 and over
- Algorithms
- Brachytherapy/instrumentation
- Brachytherapy/methods
- Carcinoma, Endometrioid/radiotherapy
- Carcinoma, Endometrioid/secondary
- Carcinoma, Squamous Cell/pathology
- Carcinoma, Squamous Cell/radiotherapy
- Carcinoma, Squamous Cell/secondary
- Endometrial Neoplasms/pathology
- Female
- Humans
- Image Processing, Computer-Assisted
- Imaging, Three-Dimensional/methods
- Middle Aged
- Needles
- Ovarian Neoplasms/pathology
- Prostate/diagnostic imaging
- Radiotherapy Planning, Computer-Assisted
- Ultrasonography/methods
- Vaginal Neoplasms/pathology
- Vaginal Neoplasms/radiotherapy
- Vaginal Neoplasms/secondary
Affiliation(s)
- Jessica Robin Rodgers
  - School of Biomedical Engineering, The University of Western Ontario, London, Ontario, Canada
  - Robarts Research Institute, The University of Western Ontario, London, Ontario, Canada
- William Thomas Hrinivich
  - Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, MD
- Kathleen Surry
  - Department of Medical Physics, London Regional Cancer Program, London, Ontario, Canada
- Vikram Velker
  - Department of Radiation Oncology, London Regional Cancer Program, London, Ontario, Canada
- David D'Souza
  - Department of Radiation Oncology, London Regional Cancer Program, London, Ontario, Canada
- Aaron Fenster
  - School of Biomedical Engineering, The University of Western Ontario, London, Ontario, Canada
  - Robarts Research Institute, The University of Western Ontario, London, Ontario, Canada
5
Dai X, Lei Y, Zhang Y, Qiu RLJ, Wang T, Dresser SA, Curran WJ, Patel P, Liu T, Yang X. Automatic multi-catheter detection using deeply supervised convolutional neural network in MRI-guided HDR prostate brachytherapy. Med Phys 2020; 47:4115-4124. [PMID: 32484573] [DOI: 10.1002/mp.14307]
Abstract
PURPOSE High-dose-rate (HDR) brachytherapy is an established technique used either as monotherapy or as a focal boost in conjunction with external beam radiation therapy (EBRT) for treating prostate cancer. Radiation source path reconstruction is a critical procedure in HDR treatment planning, but manually identifying the source path is labor intensive and time-consuming. In recent years, magnetic resonance imaging (MRI) has become a valuable imaging modality for image-guided HDR prostate brachytherapy due to its superb soft-tissue contrast for target delineation and normal tissue contouring. The purpose of this study is to investigate a deep-learning-based method to automatically reconstruct multiple catheters in MRI for prostate cancer HDR brachytherapy treatment planning. METHODS An attention-gated U-Net model incorporating total variation (TV) regularization was developed for multi-catheter segmentation in MRI. The attention gates were used to improve the accuracy of identifying small catheter points, while TV regularization was adopted to encode the natural spatial continuity of catheters into the model. The model was trained using binary catheter annotation images provided by experienced physicists as ground truth, paired with the original MR images. After the network was trained, MR images of a new prostate cancer patient receiving HDR brachytherapy were fed into the model to predict the locations and shapes of all the catheters. Quantitative assessment of our proposed method was based on catheter shaft and tip errors relative to the ground truth. RESULTS Our method detected 299 catheters from 20 patients receiving HDR prostate brachytherapy with a catheter tip error of 0.37 ± 1.68 mm and a catheter shaft error of 0.93 ± 0.50 mm. For catheter tip detection, 87% of tips were localized within an error of 2.0 mm, and more than 71% within an absolute error of 1.0 mm. For catheter shaft localization, 97% of catheters were detected with an error of <2.0 mm, while 63% were within 1.0 mm. CONCLUSIONS In this study, we proposed a novel multi-catheter detection method to precisely localize the tips and shafts of catheters in three-dimensional MR images for HDR prostate brachytherapy. It paves the way for elevating the quality and outcome of MRI-guided HDR prostate brachytherapy.
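The tip and shaft errors used for evaluation above can be sketched as distances between predicted and reference catheter polylines. The paper's exact definitions may differ; in particular, the convention that each polyline starts at the catheter tip is an assumption of this sketch:

```python
import numpy as np

def tip_error(pred, ref):
    """Euclidean distance between predicted and reference tip points
    (this sketch assumes each polyline's first point is the tip)."""
    return float(np.linalg.norm(np.asarray(pred[0], float) - np.asarray(ref[0], float)))

def shaft_error(pred, ref):
    """Mean distance from each predicted shaft point to its nearest
    reference point -- a simple stand-in for the shaft metric."""
    pred = np.asarray(pred, dtype=float)
    ref = np.asarray(ref, dtype=float)
    # Pairwise distance matrix via broadcasting, then nearest neighbour.
    d = np.linalg.norm(pred[:, None, :] - ref[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

pred = [[0, 0, 10], [0, 0, 5], [0, 0, 0]]  # predicted catheter points (mm)
ref = [[0, 1, 10], [0, 0, 5], [0, 0, 0]]   # reference points (mm)
print(tip_error(pred, ref))  # 1.0
```

In practice both polylines would be resampled at matched spacing before comparison so that the shaft metric is not biased by point density.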
Affiliation(s)
- Xianjin Dai
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
- Yang Lei
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
- Yupei Zhang
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
- Richard L J Qiu
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
- Tonghe Wang
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
- Sean A Dresser
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
- Walter J Curran
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
- Pretesh Patel
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
- Tian Liu
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
- Xiaofeng Yang
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
6
Xing Z. An improved emperor penguin optimization based multilevel thresholding for color image segmentation. Knowl Based Syst 2020. [DOI: 10.1016/j.knosys.2020.105570]
7
Danilov VV, Skirnevskiy IP, Manakov RA, Gerget OM, Melgani F. Feature selection algorithm based on PDF/PMF area difference. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2019.101681]
8
Simultaneous reconstruction of multiple stiff wires from a single X-ray projection for endovascular aortic repair. Int J Comput Assist Radiol Surg 2019; 14:1891-1899. [PMID: 31440962] [DOI: 10.1007/s11548-019-02052-7]
Abstract
PURPOSE Endovascular repair of aortic aneurysms (EVAR) can be supported by fusing pre- and intraoperative data to allow for improved navigation and to reduce the amount of contrast agent needed during the intervention. However, stiff wires and delivery devices can deform the vasculature severely, which reduces the accuracy of the fusion. Knowledge about the 3D position of the inserted instruments can help to transfer these deformations to the preoperative information. METHOD We propose a method to simultaneously reconstruct the stiff wires in both iliac arteries based on only a single monoplane acquisition, thereby avoiding interference with the clinical workflow. In the available X-ray projection, the 2D course of the wire is extracted. Then, a virtual second view of each wire orthogonal to the real projection is estimated using the preoperative vessel anatomy from a computed tomography angiography as prior information. Based on the real and virtual 2D wire courses, the wires can then be reconstructed in 3D using epipolar geometry. RESULTS We achieve a mean modified Hausdorff distance of 4.2 mm between the estimated 3D position and the true wire course for the contralateral side and 4.5 mm for the ipsilateral side. CONCLUSION The accuracy and speed of the proposed method allow for use in an intraoperative setting of deformation correction for EVAR.
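The modified Hausdorff distance used as this paper's accuracy metric is, in its common Dubuisson–Jain (1994) form, the larger of the two directed mean nearest-neighbour distances between point sets. A minimal sketch (the function name and toy wire courses are illustrative):

```python
import numpy as np

def modified_hausdorff(a, b):
    """Modified Hausdorff distance (Dubuisson & Jain): the larger of the
    two mean nearest-neighbour distances between point sets a and b.
    Less outlier-sensitive than the classic max-based Hausdorff distance."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    # Pairwise distances via broadcasting: d[i, j] = ||a_i - b_j||.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return float(max(d.min(axis=1).mean(), d.min(axis=0).mean()))

# Two sampled wire courses running parallel, 1 mm apart everywhere.
wire_est = [[0, 0, z] for z in range(5)]
wire_true = [[1, 0, z] for z in range(5)]
print(modified_hausdorff(wire_est, wire_true))  # 1.0
```

Because it averages nearest-neighbour distances in both directions, a single stray point on the estimated wire shifts the score only slightly, which suits noisy intraoperative reconstructions.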
9
Zaffino P, Pernelle G, Mastmeyer A, Mehrtash A, Zhang H, Kikinis R, Kapur T, Francesca Spadea M. Fully automatic catheter segmentation in MRI with 3D convolutional neural networks: application to MRI-guided gynecologic brachytherapy. Phys Med Biol 2019; 64:165008. [PMID: 31272095] [DOI: 10.1088/1361-6560/ab2f47]
Abstract
External-beam radiotherapy followed by high dose rate (HDR) brachytherapy is the standard of care for treating gynecologic cancers. The enhanced soft-tissue contrast provided by magnetic resonance imaging (MRI) makes it a valuable imaging modality for diagnosing and treating these cancers. However, in contrast to computed tomography (CT) imaging, the appearance of the brachytherapy catheters, through which radiation sources are later inserted to reach the cancerous tissue, is often variable across images. This paper reports, for the first time, a new deep-learning-based method for fully automatic segmentation of multiple closely spaced brachytherapy catheters in intraoperative MRI. The data represent 50 gynecologic cancer patients treated by MRI-guided HDR brachytherapy; for each patient, a single intraoperative MRI was used. The 826 catheters in the images were manually segmented by an expert radiation physicist who is also a trained radiation oncologist. The number of catheters per patient ranged between 10 and 35. A deep 3D convolutional neural network (CNN) model was developed and trained. To make the learning process more robust, the network was trained 5 times, each time using a different combination of training patients. Each test case was then processed by the five networks, and the final segmentation was generated by voting on the five candidate segmentations. 4-fold validation was executed and all the patients were segmented. An average distance error of 2.0 ± 3.4 mm was achieved; false positive and false negative catheter rates were 6.7% and 1.5%, respectively, and the average Dice score was 0.60 ± 0.17. The algorithm is available in the open source software platform 3D Slicer, allowing for wide-scale testing and research discussion. In conclusion, to the best of our knowledge, fully automatic segmentation of multiple closely spaced catheters from intraoperative MR images was achieved for the first time in gynecologic brachytherapy.
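The five-network voting step described above can be sketched per voxel. The majority threshold of 3-of-5 is an assumption of this sketch; the abstract only says the final segmentation was generated by voting on the five candidates:

```python
import numpy as np

def vote(segmentations, majority=3):
    """Fuse candidate segmentations from an ensemble by per-voxel voting:
    a voxel is foreground when at least `majority` models agree."""
    stack = np.stack([np.asarray(s, dtype=bool) for s in segmentations])
    return stack.sum(axis=0) >= majority

# Three of five toy models mark the central voxel, so it survives voting.
segs = [np.zeros((3, 3, 3), dtype=bool) for _ in range(5)]
for s in segs[:3]:
    s[1, 1, 1] = True
fused = vote(segs)
print(bool(fused[1, 1, 1]), int(fused.sum()))  # True 1
```

Voting suppresses voxels that only one or two networks hallucinate, which is one plausible reason the reported false-positive rate stays low despite closely spaced catheters.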
Affiliation(s)
- Paolo Zaffino (corresponding author)
  - Department of Experimental and Clinical Medicine, Magna Graecia University, 88100, Catanzaro, Italy
10
Kath N, Handels H, Mastmeyer A. Robust GPU-based virtual reality simulation of radio-frequency ablations for various needle geometries and locations. Int J Comput Assist Radiol Surg 2019; 14:1825-1835. [PMID: 31338680] [DOI: 10.1007/s11548-019-02033-w]
Abstract
PURPOSE Radio-frequency ablations play an important role in the therapy of malignant liver lesions. Navigating a needle to the lesion poses a challenge for both trainees and intervening physicians. METHODS This publication presents a new, accurate GPU-based method for simulating radio-frequency ablations of lesions at the needle tip, both in general and within an existing visuo-haptic 4D VR simulator. The method is implemented to run in real time using Nvidia CUDA. RESULTS It outperforms a method from the literature with respect to the theoretical characteristic of monotonic convergence of the bioheat PDE and an in vitro gold standard, with significant improvements ([Formula: see text]) in terms of Pearson correlations. It shows no failure modes or theoretically inconsistent individual simulation results after the initial phase of 10 s. On the Nvidia 1080 Ti GPU, it achieves a very high frame rendering performance of >480 Hz. CONCLUSION Our method provides a more robust and safer real-time ablation planning and intraoperative guidance technique, especially by avoiding overestimation of the ablated tissue death zone, which is risky for the patient in terms of tumor recurrence. Future in vitro measurements and optimization shall further improve the conservative estimate.
Affiliation(s)
- Niclas Kath
  - Institute of Medical Informatics, University of Lübeck, Lübeck, Germany
- Heinz Handels
  - Institute of Medical Informatics, University of Lübeck, Lübeck, Germany
- Andre Mastmeyer
  - Institute of Medical Informatics, University of Lübeck, Lübeck, Germany
11
Mehrtash A, Ghafoorian M, Pernelle G, Ziaei A, Heslinga FG, Tuncali K, Fedorov A, Kikinis R, Tempany CM, Wells WM, Abolmaesumi P, Kapur T. Automatic Needle Segmentation and Localization in MRI With 3-D Convolutional Neural Networks: Application to MRI-Targeted Prostate Biopsy. IEEE Trans Med Imaging 2019; 38:1026-1036. [PMID: 30334789] [PMCID: PMC6450731] [DOI: 10.1109/tmi.2018.2876796]
Abstract
Image guidance improves tissue sampling during biopsy by allowing the physician to visualize the tip and trajectory of the biopsy needle relative to the target in MRI, CT, ultrasound, or other relevant imagery. This paper reports a system for fast automatic needle tip and trajectory localization and visualization in MRI that has been developed and tested in the context of an active clinical research program in prostate biopsy. To the best of our knowledge, this is the first reported system for this clinical application, and also the first reported system that leverages deep neural networks for segmentation and localization of needles in MRI across biomedical applications. Needle tip and trajectory were annotated on 583 T2-weighted intra-procedural MRI scans acquired after needle insertion for 71 patients who underwent a transperineal MRI-targeted biopsy procedure at our institution. The images were divided into two independent training-validation and test sets at the patient level. A deep 3-D fully convolutional neural network model was developed, trained, and deployed on these samples. The accuracy of the proposed method, as tested on previously unseen data, was an average of 2.80 mm for needle tip detection and 0.98° for needle trajectory angle. An observer study was designed in which independent annotations by a second observer, blinded to the original observer, were compared with the output of the proposed method. The resultant error was comparable to the measured inter-observer concordance, reinforcing the clinical acceptability of the proposed method. The proposed system has the potential for deployment in clinical routine.
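The trajectory-angle error reported above (0.98° on average) is the angle between the estimated and annotated needle direction vectors. A sketch, treating the needle axis as unsigned (the function name and example vectors are illustrative):

```python
import numpy as np

def trajectory_angle_deg(u, v):
    """Angle in degrees between two needle trajectory vectors, ignoring
    sign: a needle axis has no preferred direction, so u and -u are the
    same trajectory."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    c = abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip guards against c drifting just above 1 from rounding.
    return float(np.degrees(np.arccos(np.clip(c, 0.0, 1.0))))

print(trajectory_angle_deg([0, 0, 1], [0, 1, 1]))  # ~45 degrees
```

Taking the absolute dot product makes antiparallel vectors score 0°, which matches how insertion trajectories are usually compared.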
Affiliation(s)
- Alireza Mehrtash
  - Department of Electrical and Computer Engineering, The University of British Columbia, Vancouver, BC, V6T 1Z4, Canada
  - Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, 02115, USA
- Alireza Ziaei
  - Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, 02115, USA
- Friso G. Heslinga
  - Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, 02115, USA
- Kemal Tuncali
  - Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, 02115, USA
- Andriy Fedorov
  - Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, 02115, USA
- Ron Kikinis
  - Department of Computer Science, University of Bremen, Bremen, Germany
  - Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, 02115, USA
- Clare M. Tempany
  - Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, 02115, USA
- William M. Wells
  - Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, 02115, USA
- Purang Abolmaesumi
  - Department of Electrical and Computer Engineering, The University of British Columbia, Vancouver, BC, V6T 1Z4, Canada
- Tina Kapur
  - Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, 02115, USA
|
12
|
Meyer P, Noblet V, Mazzara C, Lallement A. Survey on deep learning for radiotherapy. Comput Biol Med 2018; 98:126-146. [PMID: 29787940 DOI: 10.1016/j.compbiomed.2018.05.018] [Citation(s) in RCA: 168] [Impact Index Per Article: 24.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2018] [Revised: 05/15/2018] [Accepted: 05/15/2018] [Indexed: 12/17/2022]
Abstract
More than 50% of cancer patients are treated with radiotherapy, either exclusively or in combination with other methods. The planning and delivery of radiotherapy treatment is a complex process, but it can now be greatly facilitated by artificial intelligence technology. Deep learning is the fastest-growing field in artificial intelligence and has been successfully applied in recent years to many domains, including medicine. In this article, we first explain the concept of deep learning, addressing it in the broader context of machine learning. The most common network architectures are presented, with a more specific focus on convolutional neural networks. We then review the published work on deep learning methods applicable to radiotherapy, classified into seven categories related to the patient workflow, and offer some insight into potential future applications. We have attempted to make this paper accessible to both the radiotherapy and deep learning communities, and hope that it will inspire new collaborations between these two communities to develop dedicated radiotherapy applications.
Affiliation(s)
- Philippe Meyer
  - Department of Medical Physics, Paul Strauss Center, Strasbourg, France
13
Appearance Constrained Semi-Automatic Segmentation from DCE-MRI is Reproducible and Feasible for Breast Cancer Radiomics: A Feasibility Study. Sci Rep 2018; 8:4838. [PMID: 29556054] [PMCID: PMC5859113] [DOI: 10.1038/s41598-018-22980-9]
Abstract
We present a segmentation approach that combines GrowCut (GC) with a cancer-specific multi-parametric Gaussian mixture model (GCGMM) to produce accurate and reproducible segmentations. We evaluated GCGMM using a retrospectively collected set of 75 invasive ductal carcinomas comprising ERPR+ HER2− (n = 15), triple negative (TN) (n = 9), and ER-HER2+ (n = 57) cancers with variable presentation (mass and non-mass enhancement) and background parenchymal enhancement (mild and marked). Expert-delineated manual contours were used to assess segmentation performance using the Dice coefficient (DSC), mean surface distance (mSD), Hausdorff distance, and volume ratio (VR). GCGMM segmentations were significantly more accurate than GrowCut (GC) and fuzzy c-means clustering (FCM). GCGMM's segmentations, and the texture features computed from them, were the most reproducible compared with manual delineations and the other analyzed segmentation methods. Finally, a random forest (RF) classifier trained with leave-one-out cross-validation using features extracted from GCGMM segmentations resulted in the best accuracy for ER-HER2+ vs. ERPR+/TN (GCGMM 0.95, expert 0.95, GC 0.90, FCM 0.92) and for ERPR + HER2− vs. TN (GCGMM 0.92, expert 0.91, GC 0.77, FCM 0.83).
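Two of the agreement metrics reported above have simple closed forms; a sketch of the Dice coefficient and volume ratio on binary masks (function names and toy masks are illustrative):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return float(2.0 * np.logical_and(a, b).sum() / denom) if denom else 1.0

def volume_ratio(a, b):
    """Segmented-to-reference volume ratio; 1.0 means equal volumes."""
    return float(np.asarray(a, dtype=bool).sum() / np.asarray(b, dtype=bool).sum())

seg = np.array([1, 1, 1, 1], dtype=bool)  # toy algorithm mask
ref = np.array([1, 1, 0, 0], dtype=bool)  # toy expert mask
print(dice(seg, ref), volume_ratio(seg, ref))
```

Note that the two metrics answer different questions: Dice penalizes spatial disagreement, while volume ratio only detects systematic over- or under-segmentation, which is why studies like this one report both.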