1. Faust JF, Krafft AJ, Polak D, Speier P, Behl NGR, Ooms N, Roll J, Krieger J, Ladd ME, Maier F. Rapid CNN-based needle localization for automatic slice alignment in MR-guided interventions using 3D undersampled radial white-marker imaging. Med Phys 2024; 51:8018-8033. PMID: 39292615. DOI: 10.1002/mp.17376.
Abstract
BACKGROUND: In MR-guided in-bore percutaneous needle interventions, 2D interactive real-time imaging is typically used to navigate the needle to the target. Misaligned 2D imaging planes can cause loss of needle visibility in the 2D images, which impedes successful targeting. The iterative manual slice adjustment this necessitates can prolong interventional workflows, so rapid automatic alignment of the imaging planes with the needle would be preferable.
PURPOSE: To investigate rapid 3D localization of needles in MR-guided interventions, for the purpose of automatic imaging slice alignment, via a convolutional neural network (CNN)-based localization algorithm applied to an undersampled white-marker (WM) contrast acquisition.
METHODS: A radial 3D RF-spoiled gradient-echo MR pulse sequence with WM encoding was implemented, and a CNN-based localization algorithm was employed to extract the position and orientation of an aspiration needle from the undersampled WM images. The CNN was trained on porcine tissue phantoms (257 needle trajectories, four-fold data augmentation, 90%/10% split into training and validation datasets). Achievable localization times and accuracy were evaluated retrospectively in an ex vivo study (109 needle trajectories) for needle orientations between 78° and 90° relative to the B0 field. A proof-of-concept in vivo experiment was performed in two porcine animal models, and the feasibility of automatic imaging slice alignment was evaluated retrospectively.
RESULTS: Ex vivo needle localization from fully sampled WM images (resolution of (4 mm)³, 6434 acquired radial k-space spokes, acquisition time of 80.4 s, field of view of (256 mm)³) achieved a median localization accuracy of 1.9 mm (distance from needle tip to detected needle axis) and a median angular deviation of 2.6° for needle orientations between 86° and 90° to the B0 field. Localization accuracy decreased with increasing undersampling and as the needle trajectory became increasingly aligned with B0. For needle orientations between 86° and 90° to the B0 field, a highly accelerated acquisition of only 32 k-space spokes (acquisition time of 0.4 s) yielded a median localization accuracy of 3.1 mm and a median angular deviation of 4.7°. For needle orientations between 78° and 82°, a median accuracy of 3.5 mm and a median angular deviation of 6.8° could still be achieved with 64 sampled spokes (acquisition time of 0.8 s). In vivo, a localization accuracy of 1.4 mm and an angular deviation of 3.4° were achieved by sampling 32 k-space spokes (acquisition time of 0.48 s) with the needle oriented at 87.7° to the B0 field. For a needle oriented at 77.6° to the B0 field, a localization accuracy of 5.3 mm and an angular deviation of 6.8° were still achieved by sampling 128 k-space spokes (acquisition time of 1.92 s), allowing retrospective slice alignment.
CONCLUSION: The investigated approach enables passive 3D localization of biopsy needles. Accelerating the localization to real-time applicability is feasible for needle orientations approximately perpendicular to B0. The method can potentially facilitate MR-guided needle interventions by enabling automatic alignment of the imaging slice with the needle.
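For a concrete picture of the localization step, the following is a minimal, illustrative PyTorch sketch of the general idea: a small 3D CNN that regresses a needle tip position and a unit axis direction from a reconstructed WM volume. The architecture, the 64-cubed input size, and the six-parameter output are assumptions for illustration, not the authors' published network.

```python
import torch
import torch.nn as nn

class NeedleLocNet(nn.Module):
    """Hypothetical 3D CNN: WM volume in, needle tip (x, y, z) and axis out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(64, 6)  # 3 values for tip position, 3 for axis

    def forward(self, x):
        z = self.features(x).flatten(1)
        out = self.head(z)
        tip, axis = out[:, :3], out[:, 3:]
        # Normalize the direction so it is a unit vector
        axis = axis / axis.norm(dim=1, keepdim=True).clamp_min(1e-8)
        return tip, axis

volume = torch.randn(1, 1, 64, 64, 64)  # stand-in for an undersampled WM volume
tip, axis = NeedleLocNet()(volume)
```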
Affiliation(s)
- Jonas Frederik Faust
- Faculty of Physics and Astronomy, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
- Siemens Healthineers AG, Erlangen, Germany
- Nathan Ooms
- Cook Advanced Technologies, West Lafayette, Indiana, USA
- School of Health Sciences, Purdue University, West Lafayette, Indiana, USA
- Jesse Roll
- Cook Advanced Technologies, West Lafayette, Indiana, USA
- Joshua Krieger
- Cook Advanced Technologies, West Lafayette, Indiana, USA
- Mark Edward Ladd
- Faculty of Physics and Astronomy, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
- Medical Physics in Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Medicine, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
2. Zhou W, Li X, Zabihollahy F, Lu DS, Wu HH. Deep learning-based automatic pipeline for 3D needle localization on intra-procedural 3D MRI. Int J Comput Assist Radiol Surg 2024; 19:2227-2237. PMID: 38520646. PMCID: PMC11541278. DOI: 10.1007/s11548-024-03077-3.
Abstract
PURPOSE: Accurate and rapid needle localization on 3D magnetic resonance imaging (MRI) is critical for MRI-guided percutaneous interventions. The current workflow requires manual needle localization on 3D MRI, which is time-consuming and cumbersome. Automatic methods using 2D deep learning networks for needle segmentation require manual image plane localization, while 3D networks are challenged by the need for sufficient training datasets. This work aimed to develop an automatic deep learning-based pipeline for accurate and rapid 3D needle localization on in vivo intra-procedural 3D MRI using a limited training dataset.
METHODS: The proposed automatic pipeline adopted Shifted Window (Swin) Transformers and employed a coarse-to-fine segmentation strategy: (1) initial 3D needle feature segmentation with a 3D Swin UNEt TRansformers (UNETR) model; (2) generation of a 2D reformatted image containing the needle feature; (3) fine 2D needle feature segmentation with a 2D Swin Transformer and calculation of the 3D needle tip position and axis orientation. Pre-training and data augmentation were performed to improve network training. The pipeline was evaluated via cross-validation with 49 in vivo intra-procedural 3D MR images from preclinical pig experiments. The needle tip and axis localization errors were compared with human intra-reader variation using the Wilcoxon signed-rank test, with p < 0.05 considered significant.
RESULTS: The average end-to-end computational time for the pipeline was 6 s per 3D volume. The median Dice scores of the 3D Swin UNETR and 2D Swin Transformer in the pipeline were 0.80 and 0.93, respectively. The median 3D needle tip and axis localization errors were 1.48 mm (1.09 pixels) and 0.98°, respectively. Needle tip localization errors were significantly smaller than human intra-reader variation (median 1.70 mm; p < 0.01).
CONCLUSION: The proposed automatic pipeline achieved rapid pixel-level 3D needle localization on intra-procedural 3D MRI without requiring a large 3D training dataset and has the potential to assist MRI-guided percutaneous interventions.
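The final step of such a pipeline, deriving a 3D tip position and axis orientation from a segmented needle feature, is commonly done by fitting a line to the segmented voxels. A sketch under that assumption (not the authors' code; the sign convention for which end is the tip is arbitrary here):

```python
import numpy as np

def needle_tip_and_axis(mask, spacing=(1.0, 1.0, 1.0)):
    """mask: 3D boolean array of needle voxels; spacing: voxel size in mm."""
    coords = np.argwhere(mask) * np.asarray(spacing)   # (N, 3) points in mm
    centroid = coords.mean(axis=0)
    # First principal component of the voxel point cloud = needle axis
    _, _, vt = np.linalg.svd(coords - centroid, full_matrices=False)
    axis = vt[0]
    # Tip = segmented point farthest along the axis (sign convention assumed)
    proj = (coords - centroid) @ axis
    tip = coords[np.argmax(proj)]
    return tip, axis

mask = np.zeros((64, 64, 64), dtype=bool)
mask[10:50, 32, 32] = True                 # synthetic straight needle
tip, axis = needle_tip_and_axis(mask)
```

On real data, the spacing argument should carry the image voxel size so that tip and axis errors come out in millimeters and degrees.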
Affiliation(s)
- Wenqi Zhou
- Department of Radiological Sciences, University of California Los Angeles, 300 UCLA Medical Plaza, Suite B119, Los Angeles, CA, 90095, USA
- Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, USA
- Xinzhou Li
- Department of Radiological Sciences, University of California Los Angeles, 300 UCLA Medical Plaza, Suite B119, Los Angeles, CA, 90095, USA
- Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, USA
- Fatemeh Zabihollahy
- Department of Radiological Sciences, University of California Los Angeles, 300 UCLA Medical Plaza, Suite B119, Los Angeles, CA, 90095, USA
- Joint Department of Medical Imaging, Sinai Health System and University of Toronto, Toronto, Canada
- David S Lu
- Department of Radiological Sciences, University of California Los Angeles, 300 UCLA Medical Plaza, Suite B119, Los Angeles, CA, 90095, USA
- Holden H Wu
- Department of Radiological Sciences, University of California Los Angeles, 300 UCLA Medical Plaza, Suite B119, Los Angeles, CA, 90095, USA
- Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, USA
3. Aleong AM, Berlin A, Borg J, Helou J, Beiki-Ardakani A, Rink A, Raman S, Chung P, Weersink RA. Rapid multi-catheter segmentation for magnetic resonance image-guided catheter-based interventions. Med Phys 2024; 51:5361-5373. PMID: 38713919. DOI: 10.1002/mp.17117.
Abstract
BACKGROUND: Magnetic resonance imaging (MRI) is the gold standard for delineating cancerous lesions in soft tissue. Catheter-based interventions require the accurate placement of multiple long, flexible catheters at the target site. Manual segmentation of catheters in MR images is a challenging and time-consuming task, so automated catheter segmentation is needed to improve the efficiency of MR-guided procedures.
PURPOSE: To develop and assess a machine learning algorithm for the detection of multiple catheters in magnetic resonance images acquired during catheter-based interventions.
METHODS: A 3D U-Net was trained to retrospectively segment catheters in scans acquired during clinical MR-guided high-dose-rate (HDR) prostate brachytherapy cases. To assess confidence in the segmentation, multiple AI models were trained; on clinical test cases, the averaged segmentation results were used to plan the brachytherapy delivery, and dosimetric parameters were compared to the original clinical plan. Data were obtained from 35 patients who underwent HDR prostate brachytherapy for focal disease, comprising a total of 214 image volumes. 185 image volumes from 30 patients were used for training and validation with a five-fold cross-validation split, and five trained models were generated to provide confidence measures of segmentation accuracy. The remaining five patients (29 volumes) were used to test the performance of the trained model by comparison to manual segmentations of three independent observers and by assessment of the dosimetric impact on the final clinical brachytherapy plans.
RESULTS: The network successfully identified 95% of catheters in the test set at a rate of 0.89 s per volume. The multi-model method identified the small number of cases where AI segmentation of individual catheters was poor, flagging the need for user input. AI-based segmentation performed as well as segmentations by independent observers, and plan dosimetry using AI-segmented catheters was comparable to the original plan.
CONCLUSION: The vast majority of catheters were accurately identified by AI segmentation, with minimal impact on plan outcomes. The use of multiple AI models provided confidence in the segmentation accuracy and identified catheter segmentations that required further manual assessment. Real-time AI catheter segmentation can be used during MR-guided insertions to assess deflections and for rapid planning of prostate brachytherapy.
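The multi-model confidence idea can be sketched as follows: average the probability maps of the independently trained models and use their disagreement to flag catheters for manual review. Everything here (shapes, the 0.5 mask threshold, the disagreement threshold) is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def ensemble_segmentation(prob_maps, agree_thresh=0.2):
    """prob_maps: (n_models, D, H, W) predicted catheter probabilities."""
    mean_prob = prob_maps.mean(axis=0)
    seg = mean_prob > 0.5
    # Inter-model standard deviation as a per-voxel disagreement measure
    disagreement = prob_maps.std(axis=0)
    needs_review = disagreement[seg].mean() > agree_thresh if seg.any() else True
    return seg, needs_review

probs = np.random.rand(5, 32, 64, 64)   # placeholder outputs of five models
seg, flag = ensemble_segmentation(probs)
```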
Affiliation(s)
- Amanda M Aleong
- Institute of Biomedical Engineering, University of Toronto, Toronto, Ontario, Canada
- Alejandro Berlin
- Department of Radiation Medicine, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, Ontario, Canada
- Jette Borg
- Department of Radiation Medicine, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, Ontario, Canada
- Joelle Helou
- Department of Radiation Medicine, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, Ontario, Canada
- Akbar Beiki-Ardakani
- Department of Radiation Medicine, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Alexandra Rink
- Department of Radiation Medicine, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, Ontario, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- Srinivas Raman
- Department of Radiation Medicine, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, Ontario, Canada
- Peter Chung
- Department of Radiation Medicine, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, Ontario, Canada
- Robert A Weersink
- Institute of Biomedical Engineering, University of Toronto, Toronto, Ontario, Canada
- Department of Radiation Medicine, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, Ontario, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
4. Glielmo P, Fusco S, Gitto S, Zantonelli G, Albano D, Messina C, Sconfienza LM, Mauri G. Artificial intelligence in interventional radiology: state of the art. Eur Radiol Exp 2024; 8:62. PMID: 38693468. PMCID: PMC11063019. DOI: 10.1186/s41747-024-00452-2.
Abstract
Artificial intelligence (AI) has demonstrated great potential in a wide variety of applications in interventional radiology (IR). Support for decision-making and outcome prediction, as well as new functions and improvements in fluoroscopy, ultrasound, computed tomography, and magnetic resonance imaging, specifically in the field of IR, have all been investigated. Furthermore, AI represents a significant boost for fusion imaging and simulated reality, robotics, touchless software interactions, and virtual biopsy. The procedural nature, heterogeneity, and lack of standardisation of IR slow down the adoption of AI. Research in AI is in its early stages, as the current literature is based on pilot or proof-of-concept studies, and the full range of possibilities is yet to be explored.
Relevance statement: Exploring AI's transformative potential, this article assesses its current applications and challenges in IR, offering insights into decision support and outcome prediction, imaging enhancements, robotics, and touchless interactions, shaping the future of patient care.
Key points:
• AI adoption in IR is more complex than in diagnostic radiology.
• The current literature on AI in IR is in its early stages.
• AI has the potential to revolutionise every aspect of IR.
Affiliation(s)
- Pierluigi Glielmo
- Dipartimento di Scienze Biomediche per la Salute, Università degli Studi di Milano, Via Mangiagalli, 31, 20133, Milan, Italy
- Stefano Fusco
- Dipartimento di Scienze Biomediche per la Salute, Università degli Studi di Milano, Via Mangiagalli, 31, 20133, Milan, Italy
- Salvatore Gitto
- Dipartimento di Scienze Biomediche per la Salute, Università degli Studi di Milano, Via Mangiagalli, 31, 20133, Milan, Italy
- IRCCS Istituto Ortopedico Galeazzi, Via Cristina Belgioioso, 173, 20157, Milan, Italy
- Giulia Zantonelli
- Dipartimento di Scienze Biomediche per la Salute, Università degli Studi di Milano, Via Mangiagalli, 31, 20133, Milan, Italy
- Domenico Albano
- IRCCS Istituto Ortopedico Galeazzi, Via Cristina Belgioioso, 173, 20157, Milan, Italy
- Dipartimento di Scienze Biomediche, Chirurgiche ed Odontoiatriche, Università degli Studi di Milano, Via della Commenda, 10, 20122, Milan, Italy
- Carmelo Messina
- Dipartimento di Scienze Biomediche per la Salute, Università degli Studi di Milano, Via Mangiagalli, 31, 20133, Milan, Italy
- IRCCS Istituto Ortopedico Galeazzi, Via Cristina Belgioioso, 173, 20157, Milan, Italy
- Luca Maria Sconfienza
- Dipartimento di Scienze Biomediche per la Salute, Università degli Studi di Milano, Via Mangiagalli, 31, 20133, Milan, Italy
- IRCCS Istituto Ortopedico Galeazzi, Via Cristina Belgioioso, 173, 20157, Milan, Italy
- Giovanni Mauri
- Divisione di Radiologia Interventistica, IEO, IRCCS Istituto Europeo di Oncologia, Milan, Italy
5. Gómez FM, Van der Reijd DJ, Panfilov IA, Baetens T, Wiese K, Haverkamp-Begemann N, Lam SW, Runge JH, Rice SL, Klompenhouwer EG, Maas M, Helmberger T, Beets-Tan RG. Imaging in interventional oncology, the better you see, the better you treat. J Med Imaging Radiat Oncol 2023; 67:895-902. PMID: 38062853. DOI: 10.1111/1754-9485.13610.
Abstract
Imaging and image processing are the fundamental pillars of interventional oncology, on which diagnosis, procedure planning, treatment, and follow-up rest. Knowing all the possibilities that the different imaging modalities can offer is essential for selecting the most appropriate and accurate guidance for interventional procedures. Although physician preference and the availability of imaging modalities for guiding interventional procedures vary widely, it is important to recognize the advantages and limitations of each. In this review, we aim to provide an overview of the most frequently used image guidance modalities for interventional procedures and their typical and future applications, including angiography, computed tomography (CT) and spectral CT, magnetic resonance imaging, ultrasound, and the use of hybrid systems. Finally, we summarize the possible role of image-based artificial intelligence in patient selection, treatment, and follow-up.
Affiliation(s)
- Fernando M Gómez
- Grupo de Investigación Biomédica en Imagen, Instituto de Investigación Sanitaria La Fe, Valencia, Spain
- Área Clínica de Imagen Médica, Hospital Universitario y Politécnico La Fe, Valencia, Spain
- Department of Radiology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Ilia A Panfilov
- Department of Radiology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Tarik Baetens
- Department of Radiology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Kevin Wiese
- Department of Radiology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Siu W Lam
- Department of Radiology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Jurgen H Runge
- Department of Radiology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Samuel L Rice
- Radiology, Interventional Radiology Section, UT Southwestern Medical Center, Dallas, TX, USA
- Monique Maas
- Department of Radiology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Thomas Helmberger
- Institut für Radiologie, Neuroradiologie und Minimal-Invasive Therapie, München Klinik Bogenhausen, Munich, Germany
- Regina Gh Beets-Tan
- Department of Radiology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- GROW School for Oncology and Developmental Biology, University of Maastricht, Maastricht, The Netherlands
6. Uppot RN, Wah TM, Mueller PR. Percutaneous treatment of renal tumours. J Med Imaging Radiat Oncol 2023; 67:853-861. PMID: 37417722. DOI: 10.1111/1754-9485.13553.
Abstract
Image-guided ablation is an accepted treatment option in the management of renal cell carcinoma. Percutaneous renal ablation offers the possibility of minimally invasive treatment while attempting to preserve renal function. Over the past several years there have been advances in tools and techniques that have improved procedure safety and patient outcomes. This article provides an updated comprehensive review of percutaneous ablation in the management of renal cell carcinoma.
Affiliation(s)
- Raul N Uppot
- Massachusetts General Hospital, Boston, Massachusetts, USA
- Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
- Tze Min Wah
- Department of Interventional Radiology, Faculty of Medicine, Leeds Institute of Medical Research, University of Leeds, Leeds, UK
- Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Peter R Mueller
- Massachusetts General Hospital, Boston, Massachusetts, USA
- Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
7. Liang D, Zhang S, Zhao Z, Wang G, Sun J, Zhao J, Li W, Xu LX. Two-stage generative adversarial networks for metal artifact reduction and visualization in ablation therapy of liver tumors. Int J Comput Assist Radiol Surg 2023; 18:1991-2000. PMID: 37391537. DOI: 10.1007/s11548-023-02986-z.
Abstract
PURPOSE: The strong metal artifacts produced by the electrode needle cause poor image quality, preventing physicians from observing the surgical situation during the puncture process. To address this issue, we propose a metal artifact reduction and visualization framework for CT-guided ablation therapy of liver tumors.
METHODS: Our framework contains a metal artifact reduction model and an ablation therapy visualization model. A two-stage generative adversarial network is proposed to reduce the metal artifacts of intraoperative CT images while avoiding image blurring. To visualize the puncture process, the axis and tip of the needle are localized, and the needle is then rebuilt in 3D space intraoperatively.
RESULTS: Experiments show that our proposed metal artifact reduction method achieves higher SSIM (0.891) and PSNR (26.920) values than state-of-the-art methods. The ablation needle is reconstructed with an average error of 2.76 mm in needle tip localization and 1.64° in needle axis localization.
CONCLUSION: We propose a novel metal artifact reduction and ablation therapy visualization framework for CT-guided ablation therapy of liver cancer. The experimental results indicate that our approach can reduce metal artifacts and improve image quality. Furthermore, the proposed method demonstrates the potential for displaying the relative position of the tumor and the needle intraoperatively.
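The reported image-quality figures are standard full-reference metrics. As a reference point, they can be computed with scikit-image as below (evaluation only; the GAN itself is not reproduced, and the images are placeholders):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = np.random.rand(256, 256)                  # artifact-free CT slice (placeholder)
corrected = reference + 0.05 * np.random.randn(256, 256)  # network output (placeholder)

psnr = peak_signal_noise_ratio(reference, corrected, data_range=1.0)
ssim = structural_similarity(reference, corrected, data_range=1.0)
print(f"PSNR = {psnr:.3f}, SSIM = {ssim:.3f}")
```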
Affiliation(s)
- Duan Liang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Shunan Zhang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Ziqi Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Guangzhi Wang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Jianqi Sun
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Jun Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Wentao Li
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, 200240, China
- Lisa X Xu
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
8. Moreira P, Tuncali K, Tempany C, Tokuda J. AI-Based Isotherm Prediction for Focal Cryoablation of Prostate Cancer. Acad Radiol 2023; 30 Suppl 1:S14-S20. PMID: 37236896. PMCID: PMC10524864. DOI: 10.1016/j.acra.2023.04.016.
Abstract
RATIONALE AND OBJECTIVES: Focal therapies have emerged as minimally invasive alternatives for patients with localized low-risk prostate cancer (PCa) and those with postradiation recurrence. Among the available focal treatment methods for PCa, cryoablation offers several technical advantages, including the visibility of the boundaries of frozen tissue on intraprocedural images, access to anterior lesions, and a proven ability to treat postradiation recurrence. However, predicting the final volume of the frozen tissue is challenging, as it depends on several patient-specific factors, such as proximity to heat sources and the thermal properties of the prostatic tissue.
MATERIALS AND METHODS: This paper presents a convolutional neural network model based on 3D U-Net to predict the frozen isotherm boundaries (iceball) resulting from a given cryo-needle placement. Intraprocedural magnetic resonance images acquired during 38 cases of focal cryoablation of PCa were retrospectively used to train and validate the model. Model accuracy was assessed and compared against a vendor-provided geometrical model, which is used as a guideline in routine procedures.
RESULTS: The mean Dice Similarity Coefficient was 0.79 ± 0.08 (mean ± SD) using the proposed model vs 0.72 ± 0.06 using the geometrical model (P < .001).
CONCLUSION: The model provided an accurate iceball boundary prediction in less than 0.4 s and has proven feasible for implementation in an intraprocedural planning algorithm.
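The Dice Similarity Coefficient used for this comparison has a standard definition, sketched here on synthetic volumes (not the paper's code):

```python
import numpy as np

def dice(pred, ref):
    """Dice Similarity Coefficient between two binary volumes."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * inter / denom if denom else 1.0

pred_iceball = np.zeros((64, 64, 64), bool); pred_iceball[20:40, 20:40, 20:40] = True
true_iceball = np.zeros((64, 64, 64), bool); true_iceball[22:42, 20:40, 20:40] = True
print(f"DSC = {dice(pred_iceball, true_iceball):.2f}")   # -> 0.90
```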
Affiliation(s)
- Pedro Moreira
- Brigham and Women's Hospital, 75 Francis St, Boston, MA 02115 (P.M., K.T., C.T., J.T.)
- Harvard Medical School, 25 Shattuck St, Boston, MA 02115 (P.M., K.T., C.T., J.T.)
- Kemal Tuncali
- Brigham and Women's Hospital, 75 Francis St, Boston, MA 02115 (P.M., K.T., C.T., J.T.)
- Harvard Medical School, 25 Shattuck St, Boston, MA 02115 (P.M., K.T., C.T., J.T.)
- Clare Tempany
- Brigham and Women's Hospital, 75 Francis St, Boston, MA 02115 (P.M., K.T., C.T., J.T.)
- Harvard Medical School, 25 Shattuck St, Boston, MA 02115 (P.M., K.T., C.T., J.T.)
- Junichi Tokuda
- Brigham and Women's Hospital, 75 Francis St, Boston, MA 02115 (P.M., K.T., C.T., J.T.)
- Harvard Medical School, 25 Shattuck St, Boston, MA 02115 (P.M., K.T., C.T., J.T.)
9. Ebrahimi A, Sefati S, Gehlbach P, Taylor RH, Iordachita I. Simultaneous Online Registration-Independent Stiffness Identification and Tip Localization of Surgical Instruments in Robot-assisted Eye Surgery. IEEE Trans Robot 2023; 39:1373-1387. PMID: 37377922. PMCID: PMC10292740. DOI: 10.1109/tro.2022.3201393.
Abstract
Notable challenges during retinal surgery lend themselves to robotic assistance, which has proven beneficial in providing safe, steady-hand manipulation. Efficient assistance from the robots relies heavily on accurate sensing of surgical states (e.g., instrument tip localization and tool-to-tissue interaction forces). Many existing tool tip localization methods require preoperative frame registrations or instrument calibrations. In this study, using an iterative approach and combining vision- and force-based methods, we develop calibration- and registration-independent (RI) algorithms to provide online estimates of instrument stiffness (least-squares and adaptive). The estimates are then combined with a state-space model based on the forward kinematics (FWK) of the Steady-Hand Eye Robot (SHER) and Fiber Bragg Grating (FBG) sensor measurements. This is accomplished using a Kalman filtering (KF) approach to improve the estimation of the deflected instrument tip position during robot-assisted eye surgery. The conducted experiments demonstrate that when the online RI stiffness estimates are used, the instrument tip localization results surpass those obtained from preoperative offline stiffness calibrations.
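The fusion step can be pictured as a standard linear Kalman filter update: predict the tip state from the state-space model, then correct it with the sensor-derived measurement. The matrices below are placeholder assumptions, not the paper's identified model:

```python
import numpy as np

def kalman_update(x, P, z, F, Q, H, R):
    # Predict from the state-space model (e.g., robot forward kinematics)
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Correct with the measurement (e.g., an FBG-based tip position estimate)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

x = np.zeros(3); P = np.eye(3)        # tip position state and covariance
F = np.eye(3); Q = 1e-4 * np.eye(3)   # static motion model (assumed)
H = np.eye(3); R = 1e-2 * np.eye(3)   # direct position measurement (assumed)
x, P = kalman_update(x, P, z=np.array([0.1, 0.0, -0.05]), F=F, Q=Q, H=H, R=R)
```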
Affiliation(s)
- Ali Ebrahimi
- Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, 21218, USA
- Shahriar Sefati
- Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, 21218, USA
- Peter Gehlbach
- Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, MD, 21287, USA
- Russell H Taylor
- Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, 21218, USA
- Department of Computer Science and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, 21218, USA
- Iulian Iordachita
- Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, 21218, USA
10. Kobayashi S, King F, Hata N. Automatic segmentation of prostate and extracapsular structures in MRI to predict needle deflection in percutaneous prostate intervention. Int J Comput Assist Radiol Surg 2023; 18:449-460. PMID: 36152168. PMCID: PMC9974805. DOI: 10.1007/s11548-022-02757-2.
Abstract
PURPOSE: Understanding the three-dimensional anatomy of percutaneous intervention in prostate cancer is essential to avoid complications. Recently, attempts have been made to use machine learning to automate the segmentation of functional structures such as the prostate gland, rectum, and bladder. However, little material is available for segmenting the extracapsular structures that are known to cause needle deflection during percutaneous interventions. This research aims to explore the feasibility of automatic segmentation of the prostate and extracapsular structures to predict needle deflection.
METHODS: Using pelvic magnetic resonance images (MRIs), a 3D U-Net was trained and optimized for the prostate and extracapsular structures (bladder, rectum, pubic bone, pelvic diaphragm muscle, bulbospongiosus muscle, bulb of the penis, ischiocavernosus muscle, crus of the penis, transverse perineal muscle, obturator internus muscle, and seminal vesicle). Segmentation accuracy was validated by feeding intra-procedural MRIs into the 3D U-Net to segment the prostate and extracapsular structures in the image. The segmented structures were then used to predict the deflected needle path in in-bore MRI-guided biopsy using a model-based approach.
RESULTS: The 3D U-Net yielded Dice scores of 0.61-0.83 for parenchymal organs such as the prostate, bladder, rectum, bulb of the penis, and crus of the penis, but lower scores for muscle structures (0.03-0.31), except the obturator internus muscle (0.71). The 3D U-Net showed higher Dice scores for functional structures (p < 0.001) and complication-related structures (p < 0.001). The segmentation of extracapsular anatomies helped to predict the deflected needle path in MRI-guided prostate interventions with an accuracy of 0.9 to 4.9 mm.
CONCLUSION: Our segmentation method using 3D U-Net provided an accurate anatomical understanding of the prostate and extracapsular structures, and was suitable for segmenting functional and complication-related structures. Finally, 3D images of the prostate and extracapsular structures could be used to simulate the needle pathway and predict needle deflections.
Affiliation(s)
- Satoshi Kobayashi
- National Center for Image Guided Therapy, Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, 75 Francis Street, Boston, MA, 02115, USA
- Urology, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 8128582, Japan
- Franklin King
- National Center for Image Guided Therapy, Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, 75 Francis Street, Boston, MA, 02115, USA
- Nobuhiko Hata
- National Center for Image Guided Therapy, Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, 75 Francis Street, Boston, MA, 02115, USA
11. Asheghan MM, Javadikasgari H, Attary T, Rouhollahi A, Straughan R, Willi JN, Awal R, Sabe A, de la Cruz KI, Nezami FR. Predicting one-year left ventricular mass index regression following transcatheter aortic valve replacement in patients with severe aortic stenosis: A new era is coming. Front Cardiovasc Med 2023; 10:1130152. PMID: 37082454. PMCID: PMC10111021. DOI: 10.3389/fcvm.2023.1130152.
Abstract
Aortic stenosis (AS) is the most common valvular heart disease in the Western world, particularly worrisome in an ever-aging population, wherein the postoperative outcome of aortic valve replacement is strongly related to the timing of surgery in the natural course of the disease. Yet guidelines for therapy planning overlook insightful, quantified measures from medical imaging that could inform clinical decisions. Herein, we leverage statistical shape analysis (SSA) techniques combined with customized machine learning methods to extract latent information from segmented left ventricle (LV) shapes. This enabled us to predict left ventricular mass index (LVMI) regression one year after transcatheter aortic valve replacement (TAVR). LVMI regression is an expected phenomenon in patients who have undergone aortic valve replacement and is reported to be tightly correlated with survival one and five years after the intervention. In brief, LV geometries were extracted from medical images of a cohort of AS patients using deep learning tools and then analyzed to create a set of statistical shape models (SSMs). Supervised shape features were then extracted and fed to a support vector regression (SVR) model to predict LVMI regression. The accuracy of the predictions was validated against clinical measurements by calculating the root mean square error and R² score, which yielded satisfactory values of 0.28 and 0.67, respectively, on test data. Our work reveals the promising capability of advanced mathematical and bioinformatics approaches, such as SSA and machine learning, to improve medical outcome prediction and treatment planning.
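The final regression stage maps shape features to an LVMI-regression value. A minimal scikit-learn sketch with synthetic stand-in features follows (the real pipeline's SSM features and hyperparameters are not reproduced here):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))                        # placeholder SSM shape features
y = 0.5 * X[:, 0] + rng.normal(scale=0.3, size=120)   # synthetic LVMI-regression target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = SVR(kernel="rbf").fit(X_tr, y_tr)
pred = model.predict(X_te)
rmse = mean_squared_error(y_te, pred) ** 0.5
print(f"RMSE = {rmse:.2f}, R^2 = {r2_score(y_te, pred):.2f}")
```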
Affiliation(s)
- Mohammad Mostafa Asheghan
- Division of Thoracic and Cardiac Surgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, United States
- Hoda Javadikasgari
- Division of Thoracic and Cardiac Surgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, United States
- Taraneh Attary
- Bio-Intelligence Unit, Sharif Brain Center, Electrical Engineering Department, Sharif University of Technology, Tehran, Iran
- Amir Rouhollahi
- Division of Thoracic and Cardiac Surgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, United States
- Ross Straughan
- Division of Thoracic and Cardiac Surgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, United States
- James Noel Willi
- Division of Thoracic and Cardiac Surgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, United States
- Rabina Awal
- Mechanical Engineering Department, University of Louisiana at Lafayette, Lafayette, LA, United States
- Ashraf Sabe
- Division of Thoracic and Cardiac Surgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, United States
- Kim I. de la Cruz
- Division of Thoracic and Cardiac Surgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, United States
- Farhad R. Nezami
- Division of Thoracic and Cardiac Surgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, United States
- Correspondence: Farhad R. Nezami
12. Proceedings from the Society of Interventional Radiology Foundation Research Consensus Panel on Artificial Intelligence in Interventional Radiology: From Code to Bedside. J Vasc Interv Radiol 2022; 33:1113-1120. PMID: 35871021. DOI: 10.1016/j.jvir.2022.06.003.
Abstract
Artificial intelligence (AI)-based technologies are the most rapidly growing field of innovation in healthcare, with the promise of achieving substantial improvements in the delivery of patient care across all disciplines of medicine. Recent advances in imaging technology, along with the marked expansion of readily available advanced health information and data, offer a unique opportunity for interventional radiology (IR) to reinvent itself as a data-driven specialty. Additionally, the growth of AI-based applications in diagnostic imaging is expected to have downstream effects on all image-guidance modalities. Therefore, the Society of Interventional Radiology Foundation called upon 13 key opinion leaders in the field of IR to develop research priorities for clinical applications of AI in IR. The objectives of the assembled research consensus panel were to assess the availability and applicability of AI for IR, estimate current needs and clinical use cases, and assemble a list of research priorities for the development of AI in IR. Individual panel members proposed consensus statements, and all participants voted upon them to rank them according to their overall impact for IR. The results identify the top priorities for the IR research community and provide organizing principles for innovative academic-industrial research collaborations that will leverage both clinical expertise and cutting-edge technology to benefit patient care in IR.
13. Li Y, Yang C, Bahl A, Persad R, Melhuish C. A review on the techniques used in prostate brachytherapy. Cognitive Computation and Systems 2022. DOI: 10.1049/ccs2.12067.
Affiliation(s)
- Yanlei Li
- Bristol Robotics Laboratory, University of the West of England, Bristol, UK
- Chenguang Yang
- Bristol Robotics Laboratory, University of the West of England, Bristol, UK
- Amit Bahl
- University Hospitals Bristol and Weston NHS Trust, and Bristol Robotics Laboratory, University of the West of England, Bristol, UK
- Raj Persad
- University Hospitals Bristol and Weston NHS Trust, and Bristol Robotics Laboratory, University of the West of England, Bristol, UK
- Chris Melhuish
- Bristol Robotics Laboratory, University of the West of England, Bristol, UK
14. Shaaer A, Paudel M, Smith M, Tonolete F, Ravi A. Deep-learning-assisted algorithm for catheter reconstruction during MR-only gynecological interstitial brachytherapy. J Appl Clin Med Phys 2021; 23:e13494. PMID: 34889509. PMCID: PMC8833281. DOI: 10.1002/acm2.13494.
Abstract
Magnetic resonance imaging (MRI) offers excellent soft-tissue contrast, enabling the contouring of targets and organs at risk during gynecological interstitial brachytherapy procedures. Despite this advantage, one of the main obstacles preventing a transition to an MRI-only workflow is that implanted plastic catheters are not reliably visualized on MR images. This study aims to evaluate the feasibility of a deep-learning-based algorithm for semiautomatic reconstruction of interstitial catheters during an MR-only workflow. MR images of 20 gynecological patients were used. In total, 360 catheters were reconstructed on T1- and T2-weighted images by five experienced brachytherapy planners. The mean of the five reconstructed paths was used for training (257 catheters), validation (15 catheters), and testing/evaluation (88 catheters). To automatically identify and localize the catheters, a two-dimensional (2D) U-Net was used to find their approximate location in each image slice. Once localized, thresholding was applied to those regions to find the intensity extrema, as catheters appear as bright and dark regions in T1- and T2-weighted images, respectively. The localized dwell positions of the proposed algorithm were compared to the ground-truth reconstruction, and the reconstruction time was also evaluated. A total of 34,009 catheter dwell positions were compared between the algorithm and all planners to estimate the reconstruction variability. The average variation was 0.97 ± 0.66 mm. The average reconstruction time for this approach was 11 ± 1 min, compared with 46 ± 10 min for the expert planners. This study suggests that the proposed deep-learning, MR-based framework has the potential to replace conventional manual catheter reconstruction. Adoption of this approach in the brachytherapy workflow is expected to improve treatment efficiency while reducing planning time, resources, and human error.
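The localization-then-thresholding step described above can be sketched as follows: within the U-Net-localized region of each slice, take the intensity maximum on T1-weighted images or the minimum on T2-weighted images as the catheter point. The shapes and synthetic data are assumptions for illustration, not the study's implementation:

```python
import numpy as np

def catheter_point(slice_img, region_mask, contrast="T1"):
    """Return (row, col) of the catheter within the localized region.

    Catheters appear bright on T1-weighted and dark on T2-weighted images.
    """
    vals = np.where(region_mask, slice_img, np.nan)  # NaN outside the region
    idx = np.nanargmax(vals) if contrast == "T1" else np.nanargmin(vals)
    return np.unravel_index(idx, slice_img.shape)

img = np.random.rand(128, 128)
mask = np.zeros_like(img, bool); mask[60:70, 60:70] = True  # U-Net-localized box
img[64, 64] = 2.0                                           # synthetic bright catheter
print(catheter_point(img, mask, "T1"))                      # -> (64, 64)
```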
Affiliation(s)
- Amani Shaaer
- Department of Physics, Ryerson University, Toronto, Ontario, Canada
- Department of Biomedical Physics, King Faisal Specialist Hospital and Research Centre, Riyadh, Saudi Arabia
- Moti Paudel
- Department of Medical Physics, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada
- Department of Medical Physics, University of Toronto, Toronto, Ontario, Canada
- Mackenzie Smith
- Department of Radiation Therapy, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada
- Frances Tonolete
- Department of Radiation Therapy, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada
- Ananth Ravi
- Department of Medical Physics, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada
- Department of Medical Physics, University of Toronto, Toronto, Ontario, Canada
15. Shen D, Pathrose A, Sarnari R, Blake A, Berhane H, Baraboo JJ, Carr JC, Markl M, Kim D. Automated segmentation of biventricular contours in tissue phase mapping using deep learning. NMR Biomed 2021; 34:e4606. PMID: 34476863. PMCID: PMC8795858. DOI: 10.1002/nbm.4606.
Abstract
Tissue phase mapping (TPM) is an MRI technique for quantification of regional biventricular myocardial velocities. Despite its potential, clinical use is limited by the requisite labor-intensive manual segmentation of cardiac contours for all time frames. The purpose of this study was to develop a deep learning (DL) network for automated segmentation of TPM images without significant loss in segmentation and myocardial velocity quantification accuracy compared with manual segmentation. We implemented a multi-channel 3D (2D + time) dense U-Net trained on magnitude and phase images, combining cross-entropy, Dice, and Hausdorff distance loss terms to improve segmentation accuracy and suppress unnatural boundaries. The dense U-Net was trained and tested with 150 multi-slice, multi-phase TPM scans (114 scans for training, 36 for testing) from 99 heart transplant patients (44 females, 1-4 scans/patient), where the magnitude and velocity-encoded (Vx, Vy, Vz) images were used as input and the corresponding manual segmentation masks as reference. The accuracy of DL segmentation was evaluated using quantitative metrics (Dice scores, Hausdorff distance) and linear regression and Bland-Altman analyses of the resulting peak radial and longitudinal velocities (Vr and Vz). The mean segmentation time was about 2 h per patient for manual segmentation and 1.9 ± 0.3 s for DL. Our network produced good accuracy (median Dice = 0.85 for the left ventricle (LV), 0.64 for the right ventricle (RV); Hausdorff distance = 3.17 pixels) compared with manual segmentation. Peak Vr and Vz measured from manual and DL segmentations were strongly correlated (R ≥ 0.88) and in good agreement with manual analysis (mean difference and limits of agreement for Vz and Vr were -0.05 ± 0.98 cm/s and -0.06 ± 1.18 cm/s for the LV, and -0.21 ± 2.33 cm/s and 0.46 ± 4.00 cm/s for the RV, respectively). The proposed multi-channel 3D dense U-Net was capable of reducing the segmentation time by 3,600-fold without significant loss in accuracy of tissue velocity measurements.
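A combined segmentation loss in the spirit described above can be sketched as cross-entropy plus soft Dice; the paper's additional Hausdorff-distance term is omitted here for brevity, and the 2D shapes are illustrative:

```python
import torch
import torch.nn.functional as F

def combined_loss(logits, target, eps=1e-6):
    """logits: (B, C, H, W) raw scores; target: (B, H, W) integer class labels."""
    ce = F.cross_entropy(logits, target)
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, probs.shape[1]).permute(0, 3, 1, 2).float()
    # Soft Dice over all classes
    inter = (probs * onehot).sum(dim=(0, 2, 3))
    denom = probs.sum(dim=(0, 2, 3)) + onehot.sum(dim=(0, 2, 3))
    dice = 1.0 - ((2 * inter + eps) / (denom + eps)).mean()
    return ce + dice

logits = torch.randn(2, 3, 64, 64)          # 3 classes: background, LV, RV (assumed)
target = torch.randint(0, 3, (2, 64, 64))
loss = combined_loss(logits, target)
```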
Affiliation(s)
- Daming Shen
- Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, USA
- Biomedical Engineering, Northwestern University McCormick School of Engineering and Applied Science, Evanston, USA
- Ashitha Pathrose
- Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, USA
- Roberto Sarnari
- Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, USA
- Allison Blake
- Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, USA
- Haben Berhane
- Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, USA
- Biomedical Engineering, Northwestern University McCormick School of Engineering and Applied Science, Evanston, USA
- Justin J Baraboo
- Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, USA
- Biomedical Engineering, Northwestern University McCormick School of Engineering and Applied Science, Evanston, USA
- James C Carr
- Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, USA
- Michael Markl
- Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, USA
- Biomedical Engineering, Northwestern University McCormick School of Engineering and Applied Science, Evanston, USA
- Daniel Kim
- Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, USA
- Biomedical Engineering, Northwestern University McCormick School of Engineering and Applied Science, Evanston, USA
16. Xiao X, Wu Y, Wu Q, Ren H. Concurrently bendable and rotatable continuum tubular robot for omnidirectional multi-core transurethral prostate biopsy. Med Biol Eng Comput 2021; 60:229-238. PMID: 34813020. DOI: 10.1007/s11517-021-02434-7.
Abstract
A transurethral prostate biopsy device is proposed in this paper, which can shoot a biopsy needle at different angles to take samples from multiple locations within the prostate. First, the traditional prostate biopsy methods, transrectal and transperineal biopsy, are introduced and compared. Then, the working principles of the new prostate biopsy procedure are illustrated. The designs of the needle bending system and the flexible needle are presented, and a proof-of-concept study of the robotic biopsy device is demonstrated, covering design parameters, material selection, and the control unit. Experiments are carried out to test and demonstrate the functions of the prototype, and theoretical and measured bending angles are compared and analyzed. The bending system can effectively bend the biopsy needle to any angle between 15° and 45°. The penetration force of the biopsy needle decreases as the bending angle increases. The range of rotation of the bending system on one hemisphere is ±25°. Together with the translational motion, the biopsy needle can reach any point within the workspace. Finally, a phantom test and a cadaver experiment were carried out to simulate the biopsy.
Affiliation(s)
- Xiao Xiao
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, 518055, China
- Department of Biomedical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore
- Yifan Wu
- Department of Biomedical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore
- Qinghui Wu
- Department of Urology, National University Hospital, Singapore, 119074, Singapore
- Hongliang Ren
- Department of Biomedical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore
- Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong, 999077, China
- NUS (Suzhou) Research Institute (NUSRI), Suzhou, 215123, China
17. Meyer A, Mehrtash A, Rak M, Bashkanov O, Langbein B, Ziaei A, Kibel AS, Tempany CM, Hansen C, Tokuda J. Domain adaptation for segmentation of critical structures for prostate cancer therapy. Sci Rep 2021; 11:11480. PMID: 34075061. PMCID: PMC8169882. DOI: 10.1038/s41598-021-90294-4.
Abstract
Preoperative assessment of the proximity of critical structures to the tumor is crucial to avoiding unnecessary damage during prostate cancer treatment. A patient-specific 3D anatomical model of those structures, namely the neurovascular bundles (NVB) and the external urethral sphincter (EUS), can enable physicians to perform such assessments intuitively. As a crucial step toward generating a patient-specific anatomical model from preoperative MRI in clinical routine, we propose multi-class automatic segmentation based on an anisotropic convolutional network. Our specific challenge is to train the network model on a unique source dataset available only at a single clinical site and deploy it to another target site without sharing the original images or labels. As network models trained on data from a single source suffer from quality loss due to domain shift, we propose a semi-supervised domain adaptation (DA) method to refine the model's performance in the target domain. Our DA method combines transfer learning (TL) and uncertainty-guided self-learning based on deep ensembles. Experiments on the segmentation of the prostate, NVB, and EUS show significant performance gain with the combination of these techniques compared to pure TL and to the combination of TL with simple self-learning (significant for all structures using a Wilcoxon signed-rank test). Results on a different task and data (pancreas CT segmentation) demonstrate our method's generic applicability. Our method has the advantage that it does not require any further data from the source domain, unlike the majority of recent domain adaptation strategies. This makes our method suitable for clinical applications, where the sharing of patient data is restricted.
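Uncertainty-guided self-learning with a deep ensemble can be sketched as follows: pseudo-label target-domain voxels only where the ensemble members agree, and fine-tune on those voxels. The variance threshold and shapes are assumptions, not the paper's settings:

```python
import numpy as np

def pseudo_labels(ensemble_probs, var_thresh=0.05):
    """ensemble_probs: (n_models, C, D, H, W) softmax outputs on target data."""
    mean_p = ensemble_probs.mean(axis=0)
    labels = mean_p.argmax(axis=0)                     # (D, H, W) pseudo-labels
    # Per-voxel uncertainty: variance of the winning class probability
    var = ensemble_probs.var(axis=0)
    conf_mask = np.take_along_axis(var, labels[None], axis=0)[0] < var_thresh
    return labels, conf_mask                           # fine-tune only where conf_mask

# Placeholder ensemble outputs: 5 models, 4 classes, an 8x16x16 volume
probs = np.random.dirichlet(np.ones(4), size=(5, 8, 16, 16)).transpose(0, 4, 1, 2, 3)
labels, mask = pseudo_labels(probs)
```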
Affiliation(s)
- Anneke Meyer
- Department of Simulation and Graphics and Research Campus STIMULATE, University of Magdeburg, Magdeburg, Germany
- Alireza Mehrtash
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Marko Rak
- Department of Simulation and Graphics and Research Campus STIMULATE, University of Magdeburg, Magdeburg, Germany
- Oleksii Bashkanov
- Department of Simulation and Graphics and Research Campus STIMULATE, University of Magdeburg, Magdeburg, Germany
- Bjoern Langbein
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Alireza Ziaei
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Adam S Kibel
- Division of Urology, Department of Surgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Clare M Tempany
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Christian Hansen
- Department of Simulation and Graphics and Research Campus STIMULATE, University of Magdeburg, Magdeburg, Germany
- Junichi Tokuda
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
18. D'Amore B, Smolinski-Zhao S, Daye D, Uppot RN. Role of Machine Learning and Artificial Intelligence in Interventional Oncology. Curr Oncol Rep 2021; 23:70. PMID: 33880651. DOI: 10.1007/s11912-021-01054-6.
Abstract
PURPOSE OF REVIEW: The purpose of this review is to highlight the current role of machine learning and artificial intelligence in the field of interventional oncology.
RECENT FINDINGS: With advancements in technology, there is a significant amount of research on the application of artificial intelligence and machine learning in medicine. Interventional oncology is a field that can benefit greatly from this research through enhanced image analysis and intraprocedural guidance. These software developments can increase the detection of cancers through routine screening and improve diagnostic accuracy in classifying tumors. They may also aid in selecting the most effective treatment for the patient by predicting outcomes based on a combination of clinical and radiologic factors. Furthermore, machine learning and artificial intelligence can advance intraprocedural guidance for the interventional oncologist through more accurate needle tracking and image fusion technology, minimizing damage to nearby healthy tissue and maximizing treatment of the tumor. While there are several exciting developments, this review also discusses limitations that must be addressed before incorporating machine learning and artificial intelligence into the field of interventional oncology. These include data capture and processing, lack of transparency among developers, model validation, workflow integration, and ethical challenges. In summary, machine learning and artificial intelligence have the potential to positively impact interventional oncologists and how they provide cancer care.
Affiliation(s)
- Brian D'Amore
- Drexel University College of Medicine, 2900 W Queen Lane, Philadelphia, PA, 19129, USA
- Sara Smolinski-Zhao
- Division of Interventional Radiology, Harvard Medical School, Massachusetts General Hospital, 55 Fruit Street; Gray #290, Boston, MA, 02114, USA
- Dania Daye
- Division of Interventional Radiology, Harvard Medical School, Massachusetts General Hospital, 55 Fruit Street; Gray #290, Boston, MA, 02114, USA
- Raul N Uppot
- Division of Interventional Radiology, Harvard Medical School, Massachusetts General Hospital, 55 Fruit Street; Gray #290, Boston, MA, 02114, USA
19. Ervin B, Rozhkov L, Buroker J, Leach JL, Mangano FT, Greiner HM, Holland KD, Arya R. Fast Automated Stereo-EEG Electrode Contact Identification and Labeling Ensemble. Stereotact Funct Neurosurg 2021; 99:393-404. PMID: 33849046. DOI: 10.1159/000515090.
Abstract
INTRODUCTION Stereotactic electroencephalography (SEEG) has emerged as the preferred modality for intracranial monitoring in drug-resistant epilepsy (DRE) patients being evaluated for neurosurgery. After implantation of SEEG electrodes, it is important to determine the neuroanatomic locations of electrode contacts (ECs) to localize ictal onset and propagation and to integrate functional information to facilitate surgical decisions. Although there are tools for coregistration of preoperative MRI and postoperative CT scans, identification, sorting, and labeling of SEEG ECs is often performed manually, which is resource intensive. We report the development and validation of a software package named Fast Automated SEEG Electrode Contact Identification and Labeling Ensemble (FASCILE). METHODS FASCILE is written in Python 3.8.3 and employs a novel automated method for identifying ECs, assigning them to their respective SEEG electrodes, and labeling them. We compared FASCILE with our clinical process of identifying, sorting, and labeling ECs by computing the localization error in the anteroposterior, superoinferior, and lateral dimensions. We also measured mean Euclidean distances between ECs identified by FASCILE and by the clinical method. We compared the time taken for EC identification, sorting, and labeling by the software developer using FASCILE, a first-time clinical user using FASCILE, and the conventional clinical process. RESULTS Validation in 35 consecutive DRE patients showed a mean overall localization error of 0.73 ± 0.15 mm. FASCILE required 10.7 ± 5.5 min/patient for identifying, sorting, and labeling ECs by a first-time clinical user, compared to 3.3 ± 0.7 h/patient for the conventional clinical process. CONCLUSION Given its accuracy, speed, and ease of use, we expect FASCILE to be used frequently for SEEG-driven epilepsy surgery. It is freely available for noncommercial use. FASCILE is specifically designed to expedite localization of ECs, assigning them to their respective SEEG electrodes (sorting), and labeling them; it is not intended for coregistration of CT and MRI data, as commercial software is available for that purpose.
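The error metrics reported above reduce to simple vector arithmetic on corresponding contact coordinates. As a rough illustration (not FASCILE's actual code), per-axis and Euclidean localization errors could be computed as follows; the function name and toy coordinates are hypothetical:

```python
import numpy as np

def localization_errors(auto_xyz: np.ndarray, manual_xyz: np.ndarray):
    """auto_xyz, manual_xyz: (N, 3) arrays of corresponding contact
    coordinates in mm (anteroposterior, superoinferior, lateral)."""
    diff = auto_xyz - manual_xyz
    per_axis = np.abs(diff).mean(axis=0)      # mean |error| per dimension
    euclidean = np.linalg.norm(diff, axis=1)  # per-contact distance
    return per_axis, float(euclidean.mean())

# Toy example with made-up coordinates:
auto = np.array([[10.2, 5.1, 3.0], [12.0, 6.2, 2.8]])
manual = np.array([[10.0, 5.0, 3.1], [12.3, 6.0, 3.0]])
print(localization_errors(auto, manual))
```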
Affiliation(s)
- Brian Ervin: Division of Neurology, Comprehensive Epilepsy Center, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA; Department of Electrical Engineering and Computer Science, University of Cincinnati, Cincinnati, Ohio, USA
- Leonid Rozhkov: Division of Neurology, Comprehensive Epilepsy Center, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA
- Jason Buroker: Division of Neurology, Comprehensive Epilepsy Center, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA
- James L Leach: Division of Neuro-Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA
- Francesco T Mangano: Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA; Division of Neurosurgery, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA
- Hansel M Greiner: Division of Neurology, Comprehensive Epilepsy Center, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA
- Katherine D Holland: Division of Neurology, Comprehensive Epilepsy Center, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA
- Ravindra Arya: Division of Neurology, Comprehensive Epilepsy Center, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA
20. Mehrtash A, Wells WM, Tempany CM, Abolmaesumi P, Kapur T. Confidence Calibration and Predictive Uncertainty Estimation for Deep Medical Image Segmentation. IEEE Trans Med Imaging 2020; 39:3868-3878. [PMID: 32746129 PMCID: PMC7704933 DOI: 10.1109/tmi.2020.3006437]
Abstract
Fully convolutional neural networks (FCNs), and in particular U-Nets, have achieved state-of-the-art results in semantic segmentation for numerous medical imaging applications. Moreover, batch normalization and Dice loss have been used successfully to stabilize and accelerate training. However, these networks are poorly calibrated i.e. they tend to produce overconfident predictions for both correct and erroneous classifications, making them unreliable and hard to interpret. In this paper, we study predictive uncertainty estimation in FCNs for medical image segmentation. We make the following contributions: 1) We systematically compare cross-entropy loss with Dice loss in terms of segmentation quality and uncertainty estimation of FCNs; 2) We propose model ensembling for confidence calibration of the FCNs trained with batch normalization and Dice loss; 3) We assess the ability of calibrated FCNs to predict segmentation quality of structures and detect out-of-distribution test examples. We conduct extensive experiments across three medical image segmentation applications of the brain, the heart, and the prostate to evaluate our contributions. The results of this study offer considerable insight into the predictive uncertainty estimation and out-of-distribution detection in medical image segmentation and provide practical recipes for confidence calibration. Moreover, we consistently demonstrate that model ensembling improves confidence calibration.
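The two core ingredients here, probability averaging over an ensemble and a calibration metric, are straightforward to sketch. The following is a minimal illustration under our own naming, not the authors' implementation; the expected calibration error (ECE) shown is one common way to quantify the miscalibration the abstract describes:

```python
import numpy as np

def ensemble_probs(member_probs):
    """member_probs: list of (N, C) softmax outputs, one per trained model.
    Averaging the members' probabilities is the ensembling step."""
    return np.mean(member_probs, axis=0)

def expected_calibration_error(probs, labels, n_bins=10):
    """Bin predictions by confidence and compare accuracy to confidence."""
    conf = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(correct[in_bin].mean() - conf[in_bin].mean())
    return ece
```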
21. Zhang Y, Tian Z, Lei Y, Wang T, Patel P, Jani AB, Curran WJ, Liu T, Yang X. Automatic multi-needle localization in ultrasound images using large margin mask RCNN for ultrasound-guided prostate brachytherapy. Phys Med Biol 2020; 65:205003. [PMID: 32640435 PMCID: PMC11758238 DOI: 10.1088/1361-6560/aba410]
Abstract
Multi-needle localization in ultrasound (US) images is a crucial step of treatment planning for US-guided prostate brachytherapy. However, current computer-aided technologies are mostly focused on single-needle digitization, while manual digitization is labor intensive and time consuming. In this paper, we propose a deep learning-based workflow for fast automatic multi-needle digitization, comprising needle shaft detection and needle tip detection. The workflow is composed of two components: a large margin mask R-CNN model (LMMask R-CNN), which adopts a large margin loss to reformulate Mask R-CNN for needle shaft localization, and a needle-based density-based spatial clustering of applications with noise (DBSCAN) algorithm, which integrates needle priors iteratively to refine needle shafts and detect needle tips. In addition, we use skip connections in the neural network architecture to improve supervision in the hidden layers. Our workflow was evaluated on 23 patients who underwent US-guided high-dose-rate (HDR) prostate brachytherapy, with 339 needles tested in total. Our method detected 98% of the needles with a shaft error of 0.091 ± 0.043 mm and a tip error of 0.330 ± 0.363 mm. Compared with using only Mask R-CNN or only LMMask R-CNN, the proposed method achieves a significant improvement in both shaft error and tip error. The proposed method automatically digitizes all needles of a patient within a second. It streamlines the workflow of transrectal ultrasound-guided HDR prostate brachytherapy and paves the way for the development of real-time treatment planning systems that are expected to further elevate the quality and outcome of HDR prostate brachytherapy.
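The clustering stage maps naturally onto an off-the-shelf DBSCAN. As a hedged sketch (the parameter values and the SVD-based axis fit are our own illustrative choices, not the paper's):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def group_needles(points_xyz, eps=2.0, min_samples=10):
    """points_xyz: (N, 3) candidate shaft points from a detection mask.
    Returns one (center, axis) pair per clustered needle."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_xyz)
    needles = []
    for k in set(labels) - {-1}:          # label -1 marks noise points
        pts = points_xyz[labels == k]
        center = pts.mean(axis=0)
        # First right-singular vector = principal direction of the cluster,
        # which approximates the needle axis.
        _, _, vt = np.linalg.svd(pts - center)
        needles.append((center, vt[0]))
    return needles
```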
Affiliation(s)
- Yupei Zhang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
22. Gillies DJ, Rodgers JR, Gyacskov I, Roy P, Kakani N, Cool DW, Fenster A. Deep learning segmentation of general interventional tools in two-dimensional ultrasound images. Med Phys 2020; 47:4956-4970. [DOI: 10.1002/mp.14427]
Affiliation(s)
- Derek J. Gillies: Department of Medical Biophysics, Western University, London, Ontario N6A 3K7, Canada; Robarts Research Institute, Western University, London, Ontario N6A 3K7, Canada
- Jessica R. Rodgers: Robarts Research Institute, Western University, London, Ontario N6A 3K7, Canada; School of Biomedical Engineering, Western University, London, Ontario N6A 3K7, Canada
- Igor Gyacskov: Robarts Research Institute, Western University, London, Ontario N6A 3K7, Canada
- Priyanka Roy: Department of Medical Biophysics, Western University, London, Ontario N6A 3K7, Canada; Robarts Research Institute, Western University, London, Ontario N6A 3K7, Canada
- Nirmal Kakani: Department of Radiology, Manchester Royal Infirmary, Manchester M13 9WL, UK
- Derek W. Cool: Department of Medical Imaging, Western University, London, Ontario N6A 3K7, Canada
- Aaron Fenster: Department of Medical Biophysics, Western University, London, Ontario N6A 3K7, Canada; Robarts Research Institute, Western University, London, Ontario N6A 3K7, Canada; School of Biomedical Engineering, Western University, London, Ontario N6A 3K7, Canada; Department of Medical Imaging, Western University, London, Ontario N6A 3K7, Canada
23. Li X, Young AS, Raman SS, Lu DS, Lee YH, Tsao TC, Wu HH. Automatic needle tracking using Mask R-CNN for MRI-guided percutaneous interventions. Int J Comput Assist Radiol Surg 2020; 15:1673-1684. [PMID: 32676870 DOI: 10.1007/s11548-020-02226-8]
Abstract
PURPOSE Accurate needle tracking provides essential information for MRI-guided percutaneous interventions. Passive needle tracking using MR images is challenged by variations of the needle-induced signal void feature in different situations. This work aimed to develop an automatic needle tracking algorithm for MRI-guided interventions based on the Mask Region Proposal-Based Convolutional Neural Network (R-CNN). METHODS Mask R-CNN was adapted and trained to segment the needle feature using 250 intra-procedural images from 85 MRI-guided prostate biopsy cases and 180 real-time images from MRI-guided needle insertion in ex vivo tissue. The segmentation masks were passed into the needle feature localization algorithm to extract the needle feature tip location and axis orientation. The proposed algorithm was tested using 208 intra-procedural images from 40 MRI-guided prostate biopsy cases and 3 real-time MRI datasets in ex vivo tissue. The algorithm results were compared with human-annotated references. RESULTS In the prostate datasets, the proposed algorithm achieved a needle feature tip localization error with a median Euclidean distance (dxy) of 0.71 mm and a median difference in axis orientation angle (dθ) of 1.28°. In the 3 real-time MRI datasets, the proposed algorithm achieved consistent dynamic needle feature tracking performance with a processing time of 75 ms/image: (a) median dxy = 0.90 mm, median dθ = 1.53°; (b) median dxy = 1.31 mm, median dθ = 1.9°; (c) median dxy = 1.09 mm, median dθ = 0.91°. CONCLUSIONS The proposed algorithm using Mask R-CNN can accurately track the needle feature tip and axis on MR images from in vivo intra-procedural prostate biopsy cases and ex vivo real-time MRI experiments under a range of different conditions. The algorithm achieved pixel-level tracking accuracy in real time and has potential to assist MRI-guided percutaneous interventions.
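Once a segmentation mask is available, a tip location and axis orientation can be derived with basic linear algebra. A minimal sketch of one plausible post-processing step (our illustration, not the authors' localization algorithm; deciding which end of the axis is the tip would need an additional prior):

```python
import numpy as np

def tip_and_angle(mask):
    """mask: (H, W) binary needle-feature segmentation.
    Returns an extreme pixel along the principal axis and the axis angle."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    center = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - center)
    axis = vt[0]                          # principal direction of the mask
    proj = (pts - center) @ axis
    tip = pts[np.argmax(proj)]            # farthest pixel along the axis
    angle = np.degrees(np.arctan2(axis[1], axis[0]))
    return tip, angle
```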
Affiliation(s)
- Xinzhou Li: Department of Radiological Sciences, University of California Los Angeles, 300 UCLA Medical Plaza, Suite B119, Los Angeles, CA 90095, USA; Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, USA
- Adam S Young: Department of Radiological Sciences, University of California Los Angeles, 300 UCLA Medical Plaza, Suite B119, Los Angeles, CA 90095, USA
- Steven S Raman: Department of Radiological Sciences, University of California Los Angeles, 300 UCLA Medical Plaza, Suite B119, Los Angeles, CA 90095, USA
- David S Lu: Department of Radiological Sciences, University of California Los Angeles, 300 UCLA Medical Plaza, Suite B119, Los Angeles, CA 90095, USA
- Yu-Hsiu Lee: Department of Mechanical and Aerospace Engineering, University of California Los Angeles, Los Angeles, CA, USA
- Tsu-Chin Tsao: Department of Mechanical and Aerospace Engineering, University of California Los Angeles, Los Angeles, CA, USA
- Holden H Wu: Department of Radiological Sciences, University of California Los Angeles, 300 UCLA Medical Plaza, Suite B119, Los Angeles, CA 90095, USA; Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, USA
24. Park I, Kim HK, Chung WK, Kim K. Deep Learning Based Real-Time OCT Image Segmentation and Correction for Robotic Needle Insertion Systems. IEEE Robot Autom Lett 2020. [DOI: 10.1109/lra.2020.3001474]
25. Zaffino P, Moccia S, De Momi E, Spadea MF. A Review on Advances in Intra-operative Imaging for Surgery and Therapy: Imagining the Operating Room of the Future. Ann Biomed Eng 2020; 48:2171-2191. [PMID: 32601951 DOI: 10.1007/s10439-020-02553-6]
Abstract
With the advent of Minimally Invasive Surgery (MIS), intra-operative imaging has become crucial for surgery and therapy guidance, allowing partial compensation for the lack of information typical of MIS. This paper reviews advancements in both classical (i.e., ultrasound, X-ray, optical coherence tomography, and magnetic resonance imaging) and more recent (i.e., multispectral, photoacoustic, and Raman imaging) intra-operative imaging modalities. Each imaging modality was analyzed, focusing on benefits and disadvantages in terms of compatibility with the operating room, costs, acquisition time, and image characteristics. Tables are included to summarize this information. New generations of hybrid surgical rooms and algorithms for real-time, in-room image processing were also investigated. Each imaging modality has its own (site- and procedure-specific) peculiarities in terms of spatial and temporal resolution, field of view, and contrasted tissues. Besides the benefits that each technique offers for guidance, operator and patient risk, costs, and the extra time required for surgical procedures have to be considered. The current trend is to equip surgical rooms with multimodal imaging systems, so as to integrate multiple sources of information for real-time data extraction and computer-assisted processing. The future of surgery is to enhance the surgeon's eye to minimize intra- and post-surgery adverse events and to provide surgeons with all possible support to objectify and optimize the care-delivery process.
Affiliation(s)
- Paolo Zaffino: Department of Experimental and Clinical Medicine, Università della Magna Graecia, Catanzaro, Italy
- Sara Moccia: Department of Information Engineering (DII), Università Politecnica delle Marche, via Brecce Bianche 12, 60131 Ancona, Italy
- Elena De Momi: Department of Electronics, Information and Bioengineering (DEIB), Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano, Italy
- Maria Francesca Spadea: Department of Experimental and Clinical Medicine, Università della Magna Graecia, Catanzaro, Italy
26. Dai X, Lei Y, Zhang Y, Qiu RLJ, Wang T, Dresser SA, Curran WJ, Patel P, Liu T, Yang X. Automatic multi-catheter detection using deeply supervised convolutional neural network in MRI-guided HDR prostate brachytherapy. Med Phys 2020; 47:4115-4124. [PMID: 32484573 DOI: 10.1002/mp.14307]
Abstract
PURPOSE High-dose-rate (HDR) brachytherapy is an established technique used as a monotherapy option or as a focal boost in conjunction with external beam radiation therapy (EBRT) for treating prostate cancer. Radiation source path reconstruction is a critical procedure in HDR treatment planning. Manually identifying the source path is labor intensive and time consuming. In recent years, magnetic resonance imaging (MRI) has become a valuable imaging modality for image-guided HDR prostate brachytherapy due to its superb soft-tissue contrast for target delineation and normal tissue contouring. The purpose of this study is to investigate a deep-learning-based method to automatically reconstruct multiple catheters in MRI for prostate cancer HDR brachytherapy treatment planning. METHODS An attention-gated U-Net model incorporating total variation (TV) regularization was developed for multi-catheter segmentation in MRI. The attention gates were used to improve the accuracy of identifying small catheter points, while TV regularization was adopted to encode the natural spatial continuity of catheters into the model. The model was trained using binary catheter annotation images provided by experienced physicists as ground truth, paired with the original MRI images. After the network was trained, MR images of a new prostate cancer patient receiving HDR brachytherapy were fed into the model to predict the locations and shapes of all the catheters. Quantitative assessments of our proposed method were based on catheter shaft and tip errors compared to the ground truth. RESULTS Our method detected 299 catheters from 20 patients receiving HDR prostate brachytherapy with a catheter tip error of 0.37 ± 1.68 mm and a catheter shaft error of 0.93 ± 0.50 mm. For detection of catheter tips, our method localized 87% of the catheter tips within an error of less than ±2.0 mm, and more than 71% of the tips within an absolute error of no more than 1.0 mm. For catheter shaft localization, 97% of catheters were detected with an error of less than 2.0 mm, while 63% were within 1.0 mm. CONCLUSIONS In this study, we proposed a novel multi-catheter detection method to precisely localize the tips and shafts of catheters in three-dimensional MR images for HDR prostate brachytherapy. It paves the way for elevating the quality and outcome of MRI-guided HDR prostate brachytherapy.
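TV regularization of a probability map penalizes differences between neighboring voxels, which encodes the expected spatial continuity of a catheter. A minimal sketch of such a penalty in PyTorch (the weight `lam` and the exact formulation are illustrative assumptions, not the paper's hyperparameters):

```python
import torch

def tv_loss_3d(p: torch.Tensor) -> torch.Tensor:
    """p: (B, 1, D, H, W) predicted catheter probability map.
    Sums mean absolute differences along each spatial axis."""
    dz = (p[:, :, 1:] - p[:, :, :-1]).abs().mean()
    dy = (p[:, :, :, 1:] - p[:, :, :, :-1]).abs().mean()
    dx = (p[:, :, :, :, 1:] - p[:, :, :, :, :-1]).abs().mean()
    return dz + dy + dx

def total_loss(pred, target, base_loss, lam=0.1):
    # base_loss would be e.g. Dice or cross-entropy in a segmentation setup.
    return base_loss(pred, target) + lam * tv_loss_3d(pred)
```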
Affiliation(s)
- Xianjin Dai, Yang Lei, Yupei Zhang, Richard L J Qiu, Tonghe Wang, Sean A Dresser, Walter J Curran, Pretesh Patel, Tian Liu, Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30332, USA
27. Convolutional Neural Networks for Immediate Surgical Needle Automatic Detection in Craniofacial X-Ray Images. J Craniofac Surg 2020; 31:1647-1650. [PMID: 32516217 DOI: 10.1097/scs.0000000000006594]
Abstract
PURPOSE Immediate X-ray examination is necessary when a surgical needle is lost during an operation. In this study, a convolutional neural network (CNN) model was introduced for automatic surgical needle detection in craniofacial X-ray images. MATERIALS AND METHODS A craniofacial surgical needle (5-0, ETHICON, USA) was placed in 8 different anatomic regions of 2 pig heads, each examined separately with bilateral X-rays. Thirty-two images were ultimately obtained, which were cropped into fragmented images and divided into a training dataset and a test dataset. A CNN model for immediate needle detection was then developed and trained. Its performance was quantitatively evaluated using the precision rate, the recall rate, and the f2-score. An 8-fold cross-validation experiment was performed. The detection rate and the time taken were calculated to quantify the difference between automatic detection and manual detection by 3 experienced clinicians. RESULTS The precision rate, recall rate, and f2-score of the CNN model on fragmented images were 98.99%, 92.67%, and 93.85%, respectively. In the 8-fold cross-validation experiments, the needle position was marked correctly in 26 of the 32 X-ray images (detection rate of 81.25%). The average time to automatically process one image was 5.8 seconds. For the 3 clinicians, 65 of the 32 × 3 images were checked correctly (detection rate of 67.7%), with an average time of 33 seconds per image. CONCLUSION In summary, after training with a large dataset, the CNN model showed potential for immediate automatic surgical needle detection in craniofacial X-ray images, with better detection accuracy and efficiency than the conventional manual method.
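The f2-score reported here is the F-beta measure with beta = 2, which weights recall above precision, a sensible choice when missing a retained needle is costlier than a false alarm. A one-line check against the reported numbers:

```python
def f_beta(precision: float, recall: float, beta: float = 2.0) -> float:
    """F-beta score; beta > 1 emphasizes recall over precision."""
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Approximately reproduces the reported f2-score of 93.85%:
print(f_beta(0.9899, 0.9267))  # ~0.9387
```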
28. Zhang Y, Lei Y, Qiu RLJ, Wang T, Wang H, Jani AB, Curran WJ, Patel P, Liu T, Yang X. Multi-needle Localization with Attention U-Net in US-guided HDR Prostate Brachytherapy. Med Phys 2020; 47:2735-2745. [PMID: 32155666 DOI: 10.1002/mp.14128]
Abstract
PURPOSE Ultrasound (US)-guided high dose rate (HDR) prostate brachytherapy requires clinicians to place HDR needles (catheters) into the prostate gland under transrectal US (TRUS) guidance in the operating room. The quality of the subsequent radiation treatment plan is largely dictated by the needle placements, which vary with the experience level of the clinicians and the procedure protocols. Real-time plan dose distribution, if available, could be a vital tool providing a more objective assessment of the needle placements, hence potentially improving the radiation plan quality and the treatment outcome. However, due to the low signal-to-noise ratio (SNR) of US imaging, real-time multi-needle segmentation in 3D TRUS, the major obstacle to real-time dose mapping, has not been realized to date. In this study, we propose a deep learning-based method that enables accurate and real-time digitization of multiple needles in the 3D TRUS images of HDR prostate brachytherapy. METHODS A deep learning model based on the U-Net architecture was developed to segment multiple needles in 3D TRUS images. Attention gates were included in our model to improve the prediction of the small needle points. Furthermore, the spatial continuity of needles was encoded into our model with total variation (TV) regularization. The combined network was trained on 3D TRUS patches with a deep supervision strategy, where binary needle annotation images were provided as ground truth. The trained network was then used to localize and segment the HDR needles in a new patient's TRUS images. We evaluated our proposed method based on needle shaft and tip errors against manually defined ground truth and compared our method with other state-of-the-art methods (U-Net and deeply supervised attention U-Net). RESULTS Our method detected 96% of 339 needles from 23 HDR prostate brachytherapy patients with a shaft error of 0.290 ± 0.236 mm and a tip error of 0.442 ± 0.831 mm. For shaft localization, our method resulted in 96% of localizations with less than 0.8 mm error (the needle diameter is 1.67 mm), while for tip localization, 75% of needles had 0 mm error and 21% had 2 mm error (the TRUS image slice thickness is 2 mm). No significant difference was observed (P = 0.83) between our tip localizations and the ground truth. Compared with U-Net and deeply supervised attention U-Net, the proposed method delivers a significant improvement in both shaft error and tip error (P < 0.05). CONCLUSIONS We proposed a new segmentation method to precisely localize the tips and shafts of multiple needles in 3D TRUS images of HDR prostate brachytherapy. The 3D rendering of the needles could help clinicians evaluate the needle placements. It paves the way for the development of real-time plan dose assessment tools that can further elevate the quality and outcome of HDR prostate brachytherapy.
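Attention gates of the kind used here weight skip-connection features by a learned spatial mask before they reach the decoder, which helps tiny structures such as needle cross-sections survive the upsampling path. A minimal additive-attention sketch in PyTorch (after Oktay et al.'s attention U-Net, not this paper's exact module; channel sizes are illustrative, and we assume the gating signal has already been resampled to the skip features' spatial size):

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate for 3D U-Net skip connections."""
    def __init__(self, in_ch: int, gate_ch: int, inter_ch: int):
        super().__init__()
        self.theta = nn.Conv3d(in_ch, inter_ch, kernel_size=1)
        self.phi = nn.Conv3d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv3d(inter_ch, 1, kernel_size=1)

    def forward(self, x: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        # x: skip-connection features; g: gating signal from a coarser level,
        # assumed here to match x spatially.
        attn = torch.sigmoid(self.psi(torch.relu(self.theta(x) + self.phi(g))))
        return x * attn   # suppress responses outside the attended regions
```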
Affiliation(s)
- Yupei Zhang, Yang Lei, Richard L J Qiu, Tonghe Wang, Ashesh B Jani, Walter J Curran, Pretesh Patel, Tian Liu, Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Hesheng Wang: Department of Radiation Oncology, New York University, New York, NY, USA
29. Herz C, MacNeil K, Behringer PA, Tokuda J, Mehrtash A, Mousavi P, Kikinis R, Fennessy FM, Tempany CM, Tuncali K, Fedorov A. Open Source Platform for Transperineal In-Bore MRI-Guided Targeted Prostate Biopsy. IEEE Trans Biomed Eng 2020; 67:565-576. [PMID: 31135342 PMCID: PMC6874712 DOI: 10.1109/tbme.2019.2918731]
Abstract
OBJECTIVE Accurate biopsy sampling of suspected lesions is critical for the diagnosis and clinical management of prostate cancer. Transperineal in-bore MRI-guided prostate biopsy (tpMRgBx) is a targeted biopsy technique that was shown to be safe, efficient, and accurate. Our goal was to develop an open source software platform to support the evaluation, refinement, and translation of this biopsy approach. METHODS We developed SliceTracker, a 3D Slicer extension to support tpMRgBx. We followed a modular design in the implementation to enable customization of the interface and interchange of image segmentation and registration components, so as to assess their effect on the processing time, precision, and accuracy of the biopsy needle placement. The platform and supporting documentation were developed to enable use of the software by an operator with minimal technical training, to facilitate translation. Retrospective evaluation studied registration accuracy, the effect of the prostate segmentation approach, and the re-identification time of biopsy targets. Prospective evaluation focused on the total procedure time and the biopsy targeting error (BTE). RESULTS Evaluation utilized data from 73 retrospective and ten prospective tpMRgBx cases. The mean landmark registration error in the retrospective evaluation was 1.88 ± 2.63 mm and was not sensitive to the approach used for prostate gland segmentation. Prospectively, we observed a target re-identification time of 4.60 ± 2.40 min and a BTE of 2.40 ± 0.98 mm. CONCLUSION SliceTracker is a modular and extensible open source platform supporting the image processing aspects of the tpMRgBx procedure. It has been successfully utilized to support clinical research procedures at our site.
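A landmark registration error of the kind reported above is typically the mean residual distance between corresponding landmarks after a rigid (Kabsch/Procrustes) alignment. A generic sketch of that computation, offered as an assumption about the metric rather than SliceTracker's own code:

```python
import numpy as np

def landmark_registration_error(src: np.ndarray, dst: np.ndarray) -> float:
    """src, dst: (N, 3) corresponding landmarks in mm.
    Rigidly aligns src to dst (Kabsch), then returns the mean residual."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    u, _, vt = np.linalg.svd((src - sc).T @ (dst - dc))
    if np.linalg.det(vt.T @ u.T) < 0:   # guard against reflections
        vt[-1] *= -1
    r = vt.T @ u.T
    aligned = (src - sc) @ r.T + dc
    return float(np.linalg.norm(aligned - dst, axis=1).mean())
```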
30. Park SC, Cha JH, Lee S, Jang W, Lee CS, Lee JK. Deep Learning-Based Deep Brain Stimulation Targeting and Clinical Applications. Front Neurosci 2019; 13:1128. [PMID: 31708729 PMCID: PMC6821714 DOI: 10.3389/fnins.2019.01128]
Abstract
Background The purpose of the present study was to evaluate deep learning-based image-guided surgical planning for deep brain stimulation (DBS). We developed deep learning semantic segmentation-based DBS targeting and prospectively applied the method clinically. Methods T2∗ fast gradient-echo images from 102 patients were used for training and validation. Manually drawn ground truth information was prepared for the subthalamic and red nuclei with an axial cut ∼4 mm below the anterior–posterior commissure line. A fully convolutional neural network (FCN-VGG-16) was used to ensure margin identification by semantic segmentation. Image contrast augmentation was performed nine times. Up to 102 original images and 918 augmented images were used for training and validation. The accuracy of semantic segmentation was measured in terms of mean accuracy and mean intersection over the union. Targets were calculated based on their relative distance from these segmented anatomical structures considering the Bejjani target. Results Mean accuracies and mean intersection over the union values were high: 0.904 and 0.813, respectively, for the 62 training images, and 0.911 and 0.821, respectively, for the 558 augmented training images when 360 augmented validation images were used. The Dice coefficient converted from the intersection over the union was 0.902 when 720 training and 198 validation images were used. Semantic segmentation was adaptive to high anatomical variations in size, shape, and asymmetry. For clinical application, two patients were assessed: one with essential tremor and another with bradykinesia and gait disturbance due to Parkinson’s disease. Both improved without complications after surgery, and microelectrode recordings showed subthalamic nuclei signals in the latter patient. Conclusion The accuracy of deep learning-based semantic segmentation may surpass that of previous methods. DBS targeting and its clinical application were made possible using accurate deep learning-based semantic segmentation, which is adaptive to anatomical variations.
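For a single binary segmentation, the Dice coefficient and the intersection over the union (IoU) are interchangeable via Dice = 2·IoU/(1 + IoU), which is presumably the conversion referred to above:

```python
def iou_to_dice(iou: float) -> float:
    """Convert intersection-over-union to the Dice coefficient."""
    return 2.0 * iou / (1.0 + iou)

# For example, a mean IoU of 0.821 corresponds to a Dice of about 0.902:
print(iou_to_dice(0.821))  # ~0.9017
```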
Affiliation(s)
- Seong-Cheol Park: Department of Neurosurgery, Seoul Metropolitan Government - Seoul National University Boramae Medical Center, Seoul, South Korea; Department of Neurosurgery, Gangneung Asan Hospital, University of Ulsan, Gangneung, South Korea
- Joon Hyuk Cha: Department of Neurosurgery, Seoul Metropolitan Government - Seoul National University Boramae Medical Center, Seoul, South Korea; School of Medicine, Inha University, Incheon, South Korea
- Seonhwa Lee: Department of Neurosurgery, Seoul Metropolitan Government - Seoul National University Boramae Medical Center, Seoul, South Korea; Department of Bio-Convergence Engineering, College of Health Science, Korea University, Seoul, South Korea
- Wooyoung Jang: Department of Neurology, Gangneung Asan Hospital, University of Ulsan, Gangneung, South Korea
- Chong Sik Lee: Department of Neurology, Asan Medical Center, University of Ulsan, Seoul, South Korea
- Jung Kyo Lee: Department of Neurosurgery, Asan Medical Center, University of Ulsan, Seoul, South Korea
31. Simultaneous reconstruction of multiple stiff wires from a single X-ray projection for endovascular aortic repair. Int J Comput Assist Radiol Surg 2019; 14:1891-1899. [PMID: 31440962 DOI: 10.1007/s11548-019-02052-7]
Abstract
PURPOSE Endovascular repair of aortic aneurysms (EVAR) can be supported by fusing pre- and intraoperative data to allow for improved navigation and to reduce the amount of contrast agent needed during the intervention. However, stiff wires and delivery devices can deform the vasculature severely, which reduces the accuracy of the fusion. Knowledge about the 3D position of the inserted instruments can help to transfer these deformations to the preoperative information. METHOD We propose a method to simultaneously reconstruct the stiff wires in both iliac arteries based on only a single monoplane acquisition, thereby avoiding interference with the clinical workflow. In the available X-ray projection, the 2D course of the wire is extracted. Then, a virtual second view of each wire orthogonal to the real projection is estimated using the preoperative vessel anatomy from a computed tomography angiography as prior information. Based on the real and virtual 2D wire courses, the wires can then be reconstructed in 3D using epipolar geometry. RESULTS We achieve a mean modified Hausdorff distance of 4.2 mm between the estimated 3D position and the true wire course for the contralateral side and 4.5 mm for the ipsilateral side. CONCLUSION The accuracy and speed of the proposed method allow for use in an intraoperative setting of deformation correction for EVAR.
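The modified Hausdorff distance used above to score the reconstructions is commonly defined (Dubuisson and Jain, 1994) as the larger of the two mean closest-point distances between the point sets. A compact sketch:

```python
import numpy as np
from scipy.spatial.distance import cdist

def modified_hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """a: (N, 3) and b: (M, 3) sampled wire points in mm."""
    d = cdist(a, b)                     # pairwise distance matrix
    return max(d.min(axis=1).mean(),    # mean closest distance a -> b
               d.min(axis=0).mean())    # mean closest distance b -> a
```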
32. Zaffino P, Pernelle G, Mastmeyer A, Mehrtash A, Zhang H, Kikinis R, Kapur T, Francesca Spadea M. Fully automatic catheter segmentation in MRI with 3D convolutional neural networks: application to MRI-guided gynecologic brachytherapy. Phys Med Biol 2019; 64:165008. [PMID: 31272095 DOI: 10.1088/1361-6560/ab2f47]
Abstract
External-beam radiotherapy followed by high dose rate (HDR) brachytherapy is the standard-of-care for treating gynecologic cancers. The enhanced soft-tissue contrast provided by magnetic resonance imaging (MRI) makes it a valuable imaging modality for diagnosing and treating these cancers. However, in contrast to computed tomography (CT) imaging, the appearance of the brachytherapy catheters, through which radiation sources are later inserted to reach the cancerous tissue, is often variable across images. This paper reports, for the first time, a new deep-learning-based method for fully automatic segmentation of multiple closely spaced brachytherapy catheters in intraoperative MRI. The data represent 50 gynecologic cancer patients treated by MRI-guided HDR brachytherapy, with a single intraoperative MRI used for each patient. 826 catheters in the images were manually segmented by an expert radiation physicist who is also a trained radiation oncologist. The number of catheters per patient ranged between 10 and 35. A deep 3D convolutional neural network (CNN) model was developed and trained. To make the learning process more robust, the network was trained 5 times, each time using a different combination of training patients. Each test case was then processed by the five networks, and the final segmentation was generated by voting on the five candidate segmentations. Four-fold validation was executed and all patients were segmented. An average distance error of 2.0 ± 3.4 mm was achieved. False positive and false negative catheter rates were 6.7% and 1.5%, respectively. The average Dice score was 0.60 ± 0.17. The algorithm is available for use in the open source software platform 3D Slicer, allowing for wide-scale testing and research discussion. In conclusion, to the best of our knowledge, fully automatic segmentation of multiple closely spaced catheters from intraoperative MR images was achieved for the first time in gynecologic brachytherapy.
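The fusion step described above amounts to per-voxel majority voting across the five candidate masks. A minimal sketch, assuming the candidates are stacked binary arrays:

```python
import numpy as np

def majority_vote(candidates: np.ndarray) -> np.ndarray:
    """candidates: (5, D, H, W) binary masks from the five trained networks.
    A voxel is kept if at least 3 of the 5 networks marked it."""
    return (candidates.sum(axis=0) >= 3).astype(np.uint8)
```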
Affiliation(s)
- Paolo Zaffino: Department of Experimental and Clinical Medicine, Magna Graecia University, 88100 Catanzaro, Italy (corresponding author)
33. Hsieh YZ, Luo YC, Pan C, Su MC, Chen CJ, Hsieh KLC. Cerebral Small Vessel Disease Biomarkers Detection on MRI-Sensor-Based Image and Deep Learning. Sensors 2019; 19:2573. [PMID: 31174277 PMCID: PMC6603587 DOI: 10.3390/s19112573]
Abstract
Magnetic resonance imaging (MRI) offers the most detailed brain structure images available today; it can identify tiny lesions or cerebral cortical abnormalities. The primary purpose of the procedure is to confirm whether there is a structural variation that causes epilepsy, such as hippocampal sclerosis, focal cerebral cortical dysplasia, or cavernous hemangioma. Cerebrovascular disease, the second most common cause of death in the world, is also the fourth leading cause of death in Taiwan, with stroke being its most common manifestation. Among the most common causes are large vascular atherosclerotic lesions, small vascular lesions, and cardiac emboli. The purpose of this study is to establish a computer-aided diagnosis system for small blood vessel lesions in MRI images, using convolutional neural networks and deep learning to detect vascular occlusions in brain MRI images; finding these blockages can help clinicians more quickly determine the probability and severity of stroke in patients. We analyzed MRI data from 50 patients, including 30 patients with stroke, 17 patients with occlusion but no stroke, and 3 patients with dementia. The system mainly helps doctors determine whether there are cerebral small vessel lesions in brain MRI images and outputs the findings as labeled images. The labels include the position coordinates of the small blood vessel blockage, its extent, its area, and whether it may cause a stroke. Finally, all the MRI images of the patient are synthesized into a 3D display of the small blood vessels in the brain to assist the doctor in making a diagnosis and to provide an accurate lesion location for the patient.
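Extracting per-lesion position, extent, and area from a binary lesion mask is a standard connected-component computation. A hedged sketch of that reporting step (our illustration, not the authors' pipeline; the voxel area is a placeholder):

```python
import numpy as np
from scipy import ndimage

def lesion_report(mask: np.ndarray, voxel_area_mm2: float = 1.0):
    """mask: binary lesion mask for one slice or volume.
    Returns centroid, bounding box, and area per connected component."""
    labels, n = ndimage.label(mask)
    report = []
    for k in range(1, n + 1):
        coords = np.argwhere(labels == k)
        report.append({
            "centroid": coords.mean(axis=0).tolist(),
            "bbox": (coords.min(axis=0).tolist(), coords.max(axis=0).tolist()),
            "area_mm2": float(len(coords) * voxel_area_mm2),
        })
    return report
```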
Affiliation(s)
- Yi-Zeng Hsieh: Department of Electrical Engineering, National Taiwan Ocean University, Keelung 20224, Taiwan; Institute of Food Safety and Risk Management, National Taiwan Ocean University, Keelung 20224, Taiwan; Center of Excellence for Ocean Engineering, National Taiwan Ocean University, Keelung 20224, Taiwan
- Yu-Cin Luo: Department of Electrical Engineering, National Taiwan Ocean University, Keelung 20224, Taiwan
- Chen Pan: Department of Electrical Engineering, National Taiwan Ocean University, Keelung 20224, Taiwan
- Mu-Chun Su: Department of Computer Science & Information Engineering, National Central University, Taoyuan City 32001, Taiwan
- Chi-Jen Chen: Department of Radiology, Shuang Ho Hospital, New Taipei City 23561, Taiwan
- Kevin Li-Chun Hsieh: Department of Medical Imaging, Taipei Medical University Hospital, Taipei City 110, Taiwan; Translational Imaging Research Center, College of Medicine, Taipei Medical University, Taipei City 110, Taiwan; Department of Radiology, School of Medicine, College of Medicine, Taipei Medical University, Taipei City 110, Taiwan