1. Fechter T, Sachpazidis I, Baltas D. The use of deep learning in interventional radiotherapy (brachytherapy): A review with a focus on open source and open data. Z Med Phys 2024;34:180-196. [PMID: 36376203] [PMCID: PMC11156786] [DOI: 10.1016/j.zemedi.2022.10.005]
Abstract
Deep learning has advanced to become one of the most important technologies in almost all medical fields, and it plays a particularly large role in areas related to medical imaging. In interventional radiotherapy (brachytherapy), however, deep learning is still at an early stage. In this review, we first investigated and scrutinised the role of deep learning in all processes of interventional radiotherapy and directly related fields, and summarised the most recent developments. For better understanding, we provide explanations of key terms and approaches to solving common deep learning problems. Because reproducing the results of deep learning algorithms requires both source code and training data to be available, a second focus of this work is the analysis of the availability of open source, open data and open models. Our analysis shows that deep learning already plays a major role in some areas of interventional radiotherapy but is still hardly present in others. Nevertheless, its impact is increasing with the years, partly self-propelled but also influenced by closely related fields. Open source, data and models are growing in number but are still scarce and unevenly distributed among research groups. The reluctance to publish code, data and models limits reproducibility and restricts evaluation to mono-institutional datasets. We conclude that deep learning can positively change the workflow of interventional radiotherapy, but there is still room for improvement when it comes to reproducible results and standardised evaluation methods.
Affiliation(s)
- Tobias Fechter
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
- Ilias Sachpazidis
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
- Dimos Baltas
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
2. Hui X, Rajendran P, Ling T, Dai X, Xing L, Pramanik M. Ultrasound-guided needle tracking with deep learning: A novel approach with photoacoustic ground truth. Photoacoustics 2023;34:100575. [PMID: 38174105] [PMCID: PMC10761306] [DOI: 10.1016/j.pacs.2023.100575]
Abstract
Accurate needle guidance is crucial for safe and effective clinical diagnosis and treatment procedures. Conventional ultrasound (US)-guided needle insertion often struggles to visualize the needle consistently and precisely, necessitating reliable methods to track it. As a powerful tool in image processing, deep learning has shown promise for enhancing needle visibility in US images, although its dependence on manual annotation or simulated data as ground truth can introduce bias or hinder generalization to real US images. Photoacoustic (PA) imaging has demonstrated its capability for high-contrast needle visualization. In this study, we explore the potential of PA imaging as a reliable ground truth for deep learning network training without the need for expert annotation. Our network (UIU-Net), trained on ex vivo tissue image datasets, localizes needles in US images with high precision. Needle segmentation performance was evaluated on previously unseen ex vivo data and on in vivo human data collected from an open-source data repository. For the human data, the Modified Hausdorff Distance (MHD) is approximately 3.73 and the targeting error is around 2.03, indicating strong similarity and only a small orientation deviation between the predicted and actual needle locations. A key advantage of our method is that it is not limited to US images captured from specific imaging systems, but extends to images from other US imaging systems.
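As a concrete illustration of the MHD metric quoted above, here is a minimal Python sketch of the standard Modified Hausdorff Distance (Dubuisson and Jain's formulation); the point sets are invented for demonstration and are not data from the study:

```python
import math

def modified_hausdorff(a, b):
    """Modified Hausdorff Distance: the maximum of the two directed
    mean nearest-neighbour distances between point sets a and b."""
    def directed(p, q):
        # mean, over points in p, of the distance to the closest point in q
        return sum(min(math.dist(x, y) for y in q) for x in p) / len(p)
    return max(directed(a, b), directed(b, a))

# Illustrative points sampled along a predicted and a true needle shaft
pred = [(0.0, 0.0), (1.0, 1.1), (2.0, 2.2)]
true = [(0.0, 0.2), (1.0, 1.0), (2.0, 2.0)]
mhd = modified_hausdorff(pred, true)
```

Unlike the classic Hausdorff distance, the mean inside each directed term makes the metric robust to a single outlier pixel in a segmentation mask.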
Affiliation(s)
- Xie Hui
- School of Chemistry, Chemical Engineering and Biotechnology, Nanyang Technological University, Singapore 637459, Singapore
- Praveenbalaji Rajendran
- Stanford University, Department of Radiation Oncology, Stanford, California 94305, United States
- Tong Ling
- School of Chemistry, Chemical Engineering and Biotechnology, Nanyang Technological University, Singapore 637459, Singapore
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 637459, Singapore
- Xianjin Dai
- Stanford University, Department of Radiation Oncology, Stanford, California 94305, United States
- Lei Xing
- Stanford University, Department of Radiation Oncology, Stanford, California 94305, United States
- Manojit Pramanik
- Department of Electrical and Computer Engineering, Iowa State University, Ames, IA 50011, United States
3. Park CKS, Trumpour T, Aziz A, Bax JS, Tessier D, Gardi L, Fenster A. Cost-effective, portable, patient-dedicated three-dimensional automated breast ultrasound for point-of-care breast cancer screening. Sci Rep 2023;13:14390. [PMID: 37658125] [PMCID: PMC10474273] [DOI: 10.1038/s41598-023-41424-7]
Abstract
Breast cancer screening has substantially reduced mortality across screening populations. However, a clinical need persists for more accessible, cost-effective, and robust approaches for increased-risk and diverse patient populations, especially those with dense breasts where screening mammography is suboptimal. We developed and validated a cost-effective, portable, patient-dedicated three-dimensional (3D) automated breast ultrasound (ABUS) system for point-of-care breast cancer screening. The 3D ABUS system contains a wearable, rapid-prototype 3D-printed dam assembly, a compression assembly, and a computer-driven 3DUS scanner, adaptable to any commercially available US machine and transducer. Acquisition is operator-agnostic, involves a 40-second scan time, and provides multiplanar 3D visualization for whole-breast assessment. Geometric reconstruction accuracy was evaluated with a 3D grid phantom and tissue-mimicking breast phantoms, demonstrating linear measurement and volumetric reconstruction errors < 0.2 mm and < 3%, respectively. The system's capability was demonstrated in a healthy male volunteer and two healthy female volunteers, representing diverse patient geometries and breast sizes. The system enables comfortable ultrasonic coupling and tissue stabilization, with adjustable compression to improve image quality while alleviating discomfort. Moreover, the system effectively mitigates breathing and motion, since its assembly affixes directly onto the patient. While future studies are still required to evaluate the impact on current clinical practices and workflow, the 3D ABUS system shows potential for adoption as an alternative, cost-effective, dedicated point-of-care breast cancer screening approach for increased-risk populations and limited-resource settings.
Affiliation(s)
- Claire Keun Sun Park
- Department of Medical Biophysics, Schulich School of Medicine and Dentistry, Western University, London, ON, N6A 3K7, Canada
- Robarts Research Institute, 1151 Richmond St. N., London, ON, N6A 5B7, Canada
- Tiana Trumpour
- Department of Medical Biophysics, Schulich School of Medicine and Dentistry, Western University, London, ON, N6A 3K7, Canada
- Robarts Research Institute, 1151 Richmond St. N., London, ON, N6A 5B7, Canada
- Amal Aziz
- Robarts Research Institute, 1151 Richmond St. N., London, ON, N6A 5B7, Canada
- School of Biomedical Engineering, Faculty of Engineering, Western University, London, ON, N6A 3K7, Canada
- Jeffrey Scott Bax
- Robarts Research Institute, 1151 Richmond St. N., London, ON, N6A 5B7, Canada
- David Tessier
- Robarts Research Institute, 1151 Richmond St. N., London, ON, N6A 5B7, Canada
- Lori Gardi
- Robarts Research Institute, 1151 Richmond St. N., London, ON, N6A 5B7, Canada
- Aaron Fenster
- Department of Medical Biophysics, Schulich School of Medicine and Dentistry, Western University, London, ON, N6A 3K7, Canada
- Robarts Research Institute, 1151 Richmond St. N., London, ON, N6A 5B7, Canada
- Division of Imaging Sciences, Department of Medical Imaging, Schulich School of Medicine and Dentistry, Western University, London, ON, N6A 3K7, Canada
4. Masoumi N, Rivaz H, Hacihaliloglu I, Ahmad MO, Reinertsen I, Xiao Y. The Big Bang of Deep Learning in Ultrasound-Guided Surgery: A Review. IEEE Trans Ultrason Ferroelectr Freq Control 2023;70:909-919. [PMID: 37028313] [DOI: 10.1109/tuffc.2023.3255843]
Abstract
Ultrasound (US) imaging is a paramount modality in many image-guided surgeries and percutaneous interventions, thanks to its high portability, temporal resolution, and cost-efficiency. However, due to its imaging principles, US images are often noisy and difficult to interpret. Appropriate image processing can greatly enhance the applicability of the modality in clinical practice. Compared with classic iterative optimization and machine learning (ML) approaches, deep learning (DL) algorithms have shown strong accuracy and efficiency for US processing. In this work, we conduct a comprehensive review of deep learning algorithms in the applications of US-guided interventions, summarize the current trends, and suggest future directions on the topic.
5. Amiri Tehrani Zade A, Jalili Aziz M, Majedi H, Mirbagheri A, Ahmadian A. Spatiotemporal analysis of speckle dynamics to track invisible needle in ultrasound sequences using convolutional neural networks: a phantom study. Int J Comput Assist Radiol Surg 2023;18:1373-1382. [PMID: 36745339] [DOI: 10.1007/s11548-022-02812-y]
Abstract
PURPOSE Accurate needle placement at the target point is critical for ultrasound interventions such as biopsies and epidural injections. However, aligning the needle to the thin plane of the transducer is challenging, as misalignment causes the needle to fade from view. We therefore developed a CNN-based framework to track the needle using the spatiotemporal features of speckle dynamics. METHODS Three key techniques optimize the network for our application. First, we used Gunnar-Farneback (GF), a traditional motion-field estimation technique, to augment the model input with spatiotemporal features extracted from a stack of consecutive frames. We also designed an efficient network based on the state-of-the-art Yolo framework (nYolo). Lastly, an Assisted Excitation (AE) module was added at the neck of the network to handle the class-imbalance problem. RESULTS Fourteen freehand ultrasound sequences were collected by inserting an injection needle steeply into Ultrasound Compatible Lumbar Epidural Simulator and Femoral Vascular Access Ezono test phantoms. We divided the dataset into two sub-categories. In the second, more challenging category, in which the needle is totally invisible, the angle and tip localization errors were 2.43 ± 1.14° and 2.3 ± 1.76 mm using Yolov3+GF+AE, and 2.08 ± 1.18° and 2.12 ± 1.43 mm using nYolo+GF+AE. CONCLUSION The proposed method has the potential to track the needle more reliably than other state-of-the-art methods and can accurately localize it in 2D B-mode US images in real time, allowing it to be used in current ultrasound intervention procedures.
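The angle and tip-localization errors quoted above are standard needle-tracking metrics. A hedged sketch of how such errors can be computed from a predicted and a ground-truth needle pose (the coordinates below are made up for illustration and do not come from the study):

```python
import math

def needle_errors(tip_pred, tip_true, dir_pred, dir_true):
    """Return (angle error in degrees, tip error in mm) between a
    predicted and a ground-truth 2D needle pose (tips in mm)."""
    tip_err = math.dist(tip_pred, tip_true)
    # Angle between the two shaft direction vectors; abs() makes the
    # comparison sign-agnostic (a shaft has no preferred orientation).
    dot = dir_pred[0] * dir_true[0] + dir_pred[1] * dir_true[1]
    norm = math.hypot(*dir_pred) * math.hypot(*dir_true)
    angle_err = math.degrees(math.acos(min(1.0, abs(dot) / norm)))
    return angle_err, tip_err

# Hypothetical predicted vs. true tip positions and shaft directions
ang, tip = needle_errors((10.0, 5.0), (10.5, 5.2), (1.0, 1.0), (1.0, 0.9))
```

The `min(1.0, ...)` clamp guards `acos` against floating-point values marginally above 1 when the two directions are nearly identical.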
Affiliation(s)
- Amin Amiri Tehrani Zade
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran, Iran
- Image-Guided Surgery Group, Research Centre for Biomedical Technologies and Robotics (RCBTR), Tehran University of Medical Sciences, Tehran, Iran
- Maryam Jalili Aziz
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran, Iran
- Image-Guided Surgery Group, Research Centre for Biomedical Technologies and Robotics (RCBTR), Tehran University of Medical Sciences, Tehran, Iran
- Hossein Majedi
- Pain Research Center, Neuroscience Institute, Tehran University of Medical Sciences, Tehran, Iran
- Department of Anesthesiology, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Alireza Mirbagheri
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran, Iran
- Robotic Group, Research Centre for Biomedical Technologies and Robotics (RCBTR), Tehran University of Medical Sciences, Tehran, Iran
- Alireza Ahmadian
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran, Iran
- Image-Guided Surgery Group, Research Centre for Biomedical Technologies and Robotics (RCBTR), Tehran University of Medical Sciences, Tehran, Iran
6. Zhao JZ, Ni R, Chow R, Rink A, Weersink R, Croke J, Raman S. Artificial intelligence applications in brachytherapy: A literature review. Brachytherapy 2023;22:429-445. [PMID: 37248158] [DOI: 10.1016/j.brachy.2023.04.003]
Abstract
PURPOSE Artificial intelligence (AI) has the potential to simplify and optimize various steps of the brachytherapy workflow, and this literature review aims to provide an overview of the work done in this field. METHODS AND MATERIALS We conducted a literature search in June 2022 on PubMed, Embase, and Cochrane for papers that proposed AI applications in brachytherapy. RESULTS A total of 80 papers satisfied the inclusion/exclusion criteria. These papers were categorized as follows: segmentation (24), registration and image processing (6), preplanning (13), dose prediction and treatment planning (11), applicator/catheter/needle reconstruction (16), and quality assurance (10). AI techniques ranged from classical models, such as support vector machines and decision tree-based learning, to newer techniques, such as U-Net and deep reinforcement learning, and were applied to facilitate small steps of a process (e.g., optimizing applicator selection) or to automate an entire workflow step (e.g., end-to-end preplanning). Many of these algorithms demonstrated human-level performance and offered significant improvements in speed. CONCLUSIONS AI has the potential to augment, automate, and/or accelerate many steps of the brachytherapy workflow. We recommend that future studies adhere to standard reporting guidelines. We also stress the importance of using larger sample sizes and reporting results using clinically interpretable measures.
Affiliation(s)
- Jonathan ZL Zhao
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Ruiyan Ni
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Canada
- Ronald Chow
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Temerty Faculty of Medicine, University of Toronto, Toronto, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, Canada
- Alexandra Rink
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Canada
- Robert Weersink
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, Canada
- Jennifer Croke
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada
- Srinivas Raman
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada
7. Yan W, Ding Q, Chen J, Yan K, Tang RSY, Cheng SS. Learning-based needle tip tracking in 2D ultrasound by fusing visual tracking and motion prediction. Med Image Anal 2023;88:102847. [PMID: 37307759] [DOI: 10.1016/j.media.2023.102847]
Abstract
Visual trackers are the most commonly adopted approach for needle tip tracking in ultrasound (US)-based procedures. However, they often perform unsatisfactorily in biological tissues due to significant background noise and anatomical occlusion. This paper presents a learning-based needle tip tracking system, which consists of not only a visual tracking module but also a motion prediction module. In the visual tracking module, two sets of masks are designed to improve the tracker's discriminability, and a template update submodule keeps up to date with the needle tip's current appearance. In the motion prediction module, a Transformer network-based prediction architecture estimates the target's current position from its historical position data, tackling the problem of the target's temporary disappearance. A data fusion module then integrates the results from the visual tracking and motion prediction modules to provide robust and accurate tracking results. Our proposed tracking system showed distinct improvement over other state-of-the-art trackers during motorized needle insertion experiments in both gelatin phantom and biological tissue environments (e.g. 78% against <60% tracking success rate in the most challenging "In-plane-static" scenario of the tissue experiments). Its robustness was also verified in manual needle insertion experiments under varying needle velocities and directions and occasional temporary needle tip disappearance, with its tracking success rate being >18% higher than the second-best performing tracking system. The proposed tracking system, with its computational efficiency, tracking robustness, and tracking accuracy, will lead to safer targeting during existing clinical practice of US-guided needle operations and could potentially be integrated into a tissue biopsy robotic system.
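The fusion idea described above, falling back on a motion-predicted position when the visual tracker loses the tip, can be sketched in toy form. The constant-velocity predictor and confidence threshold below are illustrative assumptions, not the paper's Transformer-based design:

```python
def fuse(visual_pos, visual_conf, history, conf_threshold=0.5):
    """Blend a visual-tracker tip estimate with a motion-predicted one.
    history: list of past (x, y) tip positions, most recent last."""
    # Constant-velocity prediction from the last two confirmed positions
    (x1, y1), (x2, y2) = history[-2], history[-1]
    predicted = (2 * x2 - x1, 2 * y2 - y1)
    if visual_pos is None or visual_conf < conf_threshold:
        return predicted  # tip temporarily invisible: trust the motion model
    # Otherwise weight the two estimates by the visual confidence
    w = visual_conf
    return (w * visual_pos[0] + (1 - w) * predicted[0],
            w * visual_pos[1] + (1 - w) * predicted[1])

track = [(0.0, 0.0), (1.0, 0.5)]
fused = fuse((2.1, 1.1), 0.9, track)  # confident visual detection
lost = fuse(None, 0.0, track)         # occluded frame: pure prediction
```

The design choice mirrors the paper's motivation: a motion model supplies continuity through occlusion, while a confident visual detection dominates whenever the tip is actually visible.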
Affiliation(s)
- Wanquan Yan
- Department of Mechanical and Automation Engineering and T Stone Robotics Institute, The Chinese University of Hong Kong, Hong Kong
- Qingpeng Ding
- Department of Mechanical and Automation Engineering and T Stone Robotics Institute, The Chinese University of Hong Kong, Hong Kong
- Jianghua Chen
- Department of Mechanical and Automation Engineering and T Stone Robotics Institute, The Chinese University of Hong Kong, Hong Kong
- Kim Yan
- Department of Mechanical and Automation Engineering and T Stone Robotics Institute, The Chinese University of Hong Kong, Hong Kong
- Raymond Shing-Yan Tang
- Department of Medicine and Therapeutics and Institute of Digestive Disease, The Chinese University of Hong Kong, Hong Kong
- Shing Shin Cheng
- Department of Mechanical and Automation Engineering and T Stone Robotics Institute, The Chinese University of Hong Kong, Hong Kong; Institute of Medical Intelligence and XR, Multi-scale Medical Robotics Center, and Shun Hing Institute of Advanced Engineering, The Chinese University of Hong Kong, Hong Kong
8. Orlando N, Snir J, Barker K, D'Souza D, Velker V, Mendez LC, Fenster A, Hoover DA. A power Doppler ultrasound method for improving intraoperative tip localization for visually obstructed needles in interstitial prostate brachytherapy. Med Phys 2023;50:2649-2661. [PMID: 36846880] [DOI: 10.1002/mp.16336]
Abstract
PURPOSE High-dose-rate (HDR) interstitial brachytherapy (BT) is a common treatment technique for localized intermediate- to high-risk prostate cancer. Transrectal ultrasound (US) imaging is typically used to guide needle insertion, including localization of the needle tip, which is critical for treatment planning. However, image artifacts can limit needle tip visibility in standard brightness (B)-mode US, potentially leading to dose delivery that deviates from the plan. To improve intraoperative tip visualization of visually obstructed needles, we propose a power Doppler (PD) US method that utilizes a novel wireless mechanical oscillator, validated in phantom experiments and clinical HDR-BT cases as part of a feasibility clinical trial. METHODS Our wireless oscillator contains a DC motor housed in a 3D-printed case and is powered by a rechargeable battery, allowing the device to be operated by one person with no additional equipment required in the operating room. The oscillator end-piece features a cylindrical shape designed for BT applications to fit on top of the commonly used cylindrical needle mandrins. Phantom validation was completed using tissue-equivalent agar phantoms with the clinical US system and both plastic and metal needles. Our PD method was tested using a needle implant pattern matching a standard HDR-BT procedure, as well as an implant pattern designed to maximize needle shadowing artifacts. Needle tip localization accuracy was assessed using the clinical method based on ideal reference needles and by comparison to computed tomography (CT) as a gold standard. Clinical validation was completed in five patients who underwent standard HDR-BT as part of a feasibility clinical trial. Needle tip positions were identified using B-mode US and PD US with perturbation from our wireless oscillator.
RESULTS Absolute mean ± standard deviation tip error for B-mode alone, PD alone, and B-mode combined with PD was respectively: 0.3 ± 0.3 mm, 0.6 ± 0.5 mm, and 0.4 ± 0.2 mm for the mock HDR-BT needle implant; 0.8 ± 1.7 mm, 0.4 ± 0.6 mm, and 0.3 ± 0.5 mm for the explicit shadowing implant with plastic needles; and 0.5 ± 0.2 mm, 0.5 ± 0.3 mm, and 0.6 ± 0.2 mm for the explicit shadowing implant with metal needles. The total mean absolute tip error for all five patients in the feasibility clinical trial was 0.9 ± 0.7 mm using B-mode US alone and 0.8 ± 0.5 mm when including PD US, with increased benefit observed for needles classified as visually obstructed. CONCLUSIONS Our proposed PD needle tip localization method is easy to implement and requires no modifications or additions to the standard clinical equipment or workflow. We have demonstrated decreased tip localization error and variation for visually obstructed needles in both phantom and clinical cases, including providing the ability to visualize needles previously not visible using B-mode US alone. This method has the potential to improve needle visualization in challenging cases without burdening the clinical workflow, potentially improving treatment accuracy in HDR-BT and more broadly in any minimally invasive needle-based procedure.
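Tip errors like those above are conventionally aggregated per needle set as mean ± standard deviation of the absolute errors; a small sketch of that aggregation (the per-needle error values are invented for illustration, not the trial's data):

```python
from statistics import mean, stdev

def summarize_tip_errors(errors_mm):
    """Return (mean, sample standard deviation) of absolute tip errors,
    i.e. the 'mean ± std' summary commonly reported for needle sets."""
    return mean(errors_mm), stdev(errors_mm)

# Invented per-needle absolute tip localization errors in mm
errors = [0.4, 0.9, 1.5, 0.6, 1.1]
m, s = summarize_tip_errors(errors)
```

Note that `statistics.stdev` is the sample (n-1) standard deviation; papers occasionally use the population form instead, so the convention is worth checking when comparing reported values.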
Affiliation(s)
- Nathan Orlando
- Department of Medical Biophysics, Western University, London, Ontario, Canada; Robarts Research Institute, Western University, London, Ontario, Canada
- Jonatan Snir
- London Health Sciences Centre, London, Ontario, Canada
- Kevin Barker
- Robarts Research Institute, Western University, London, Ontario, Canada
- David D'Souza
- London Health Sciences Centre, London, Ontario, Canada; Department of Oncology, Western University, London, Ontario, Canada
- Vikram Velker
- London Health Sciences Centre, London, Ontario, Canada; Department of Oncology, Western University, London, Ontario, Canada
- Lucas C Mendez
- London Health Sciences Centre, London, Ontario, Canada; Department of Oncology, Western University, London, Ontario, Canada
- Aaron Fenster
- Department of Medical Biophysics, Western University, London, Ontario, Canada; Robarts Research Institute, Western University, London, Ontario, Canada; Department of Oncology, Western University, London, Ontario, Canada
- Douglas A Hoover
- Department of Medical Biophysics, Western University, London, Ontario, Canada; London Health Sciences Centre, London, Ontario, Canada; Department of Oncology, Western University, London, Ontario, Canada
10. Yang H, Shan C, Kolen AF, de With PHN. Medical instrument detection in ultrasound: a review. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10287-1]
Abstract
Medical instrument detection is essential for computer-assisted interventions, since it helps clinicians find instruments efficiently and interpret images more reliably, thereby improving clinical outcomes. This article reviews image-based medical instrument detection methods for ultrasound-guided (US-guided) operations. Literature was selected based on an exhaustive search in different sources, including Google Scholar, PubMed, and Scopus. We first discuss the key clinical applications of medical instrument detection in US imaging, including regional anesthesia delivery, biopsy taking, prostate brachytherapy, and catheterization. Then, we present a comprehensive review of instrument detection methodologies, covering both non-machine-learning and machine-learning methods; the conventional non-machine-learning methods were extensively studied before the era of machine learning. The principal issues and potential research directions for future studies are summarized for the computer-assisted intervention community. In conclusion, although promising results have been obtained by current (non-)machine-learning methods for different clinical applications, thorough clinical validations are still required.
11. Shi M, Zhao T, West SJ, Desjardins AE, Vercauteren T, Xia W. Improving needle visibility in LED-based photoacoustic imaging using deep learning with semi-synthetic datasets. Photoacoustics 2022;26:100351. [PMID: 35495095] [PMCID: PMC9048160] [DOI: 10.1016/j.pacs.2022.100351]
Abstract
Photoacoustic imaging has shown great potential for guiding minimally invasive procedures by accurate identification of critical tissue targets and invasive medical devices (such as metallic needles). The use of light emitting diodes (LEDs) as the excitation light sources accelerates its clinical translation owing to its high affordability and portability. However, needle visibility in LED-based photoacoustic imaging is compromised primarily due to its low optical fluence. In this work, we propose a deep learning framework based on U-Net to improve the visibility of clinical metallic needles with a LED-based photoacoustic and ultrasound imaging system. To address the complexity of capturing ground truth for real data and the poor realism of purely simulated data, this framework included the generation of semi-synthetic training datasets combining both simulated data to represent features from the needles and in vivo measurements for tissue background. Evaluation of the trained neural network was performed with needle insertions into blood-vessel-mimicking phantoms, pork joint tissue ex vivo and measurements on human volunteers. This deep learning-based framework substantially improved the needle visibility in photoacoustic imaging in vivo compared to conventional reconstruction by suppressing background noise and image artefacts, achieving 5.8 and 4.5 times improvements in terms of signal-to-noise ratio and the modified Hausdorff distance, respectively. Thus, the proposed framework could be helpful for reducing complications during percutaneous needle insertions by accurate identification of clinical needles in photoacoustic imaging.
Affiliation(s)
- Mengjie Shi
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London SE1 7EH, United Kingdom
- Tianrui Zhao
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London SE1 7EH, United Kingdom
- Simeon J. West
- Department of Anaesthesia, University College Hospital, London NW1 2BU, United Kingdom
- Adrien E. Desjardins
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, United Kingdom
- Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, United Kingdom
- Tom Vercauteren
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London SE1 7EH, United Kingdom
- Wenfeng Xia
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London SE1 7EH, United Kingdom
12. Daoud MI, Abu-Hani AF, Shtaiyat A, Ali MZ, Alazrai R. Needle detection using ultrasound B-mode and power Doppler analyses. Med Phys 2022;49:4999-5013. [PMID: 35608237] [DOI: 10.1002/mp.15725]
Abstract
BACKGROUND Ultrasound is employed in needle interventions to visualize anatomical structures and track the needle. Nevertheless, needle detection in ultrasound images is a difficult task, particularly at steep insertion angles. PURPOSE A new method is presented to enable effective needle detection using ultrasound B-mode and power Doppler analyses. METHODS A small buzzer is used to excite the needle, and an ultrasound system is utilized to acquire B-mode and power Doppler images of the needle. The B-mode and power Doppler images are processed using the Radon transform and local phase analysis to obtain an initial detection of the needle axis. This detection is refined by processing the power Doppler image using alpha shape analysis to define a region of interest (ROI) that contains the needle. A set of feature maps is then extracted from the ROI in the B-mode image and processed by a machine learning classifier to construct a likelihood image that visualizes the posterior needle likelihood of each pixel. The Radon transform is applied to the likelihood image to achieve an improved needle axis detection. Additionally, the region in the B-mode image surrounding the needle axis is analyzed to identify the needle tip using a custom-made probabilistic approach. Our method was used to detect needles inserted in ex vivo animal tissues at shallow [20°-40°), moderate [40°-60°), and steep [60°-85°] angles. RESULTS Our method detected the needles with a 0% failure rate and mean angle, axis, and tip errors less than or equal to 0.7°, 0.6 mm, and 0.7 mm, respectively. Additionally, our method achieved favorable results compared with two recently introduced needle detection methods. CONCLUSIONS The results indicate the potential of our method for effective needle detection in ultrasound images.
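The Radon-transform step for needle-axis detection can be illustrated with a toy Hough-style sketch (purely illustrative, not the authors' implementation; the synthetic "needle" points and one-degree angle grid are assumptions):

```python
import numpy as np

def hough_line_angle(points, angles_deg):
    """Estimate the dominant line orientation of a set of (x, y) points with
    a coarse Hough transform: for each candidate normal angle, project the
    points onto that normal and count the most populated 1-pixel rho bin;
    the angle whose bin collects the most collinear points wins."""
    best_angle, best_votes = None, -1
    xs, ys = points[:, 0], points[:, 1]
    for ang in angles_deg:
        t = np.deg2rad(ang)
        rho_idx = np.round(xs * np.cos(t) + ys * np.sin(t)).astype(int)
        votes = np.bincount(rho_idx - rho_idx.min()).max()
        if votes > best_votes:
            best_votes, best_angle = votes, ang
    return (best_angle - 90) % 180  # convert normal angle to line direction

# Synthetic "needle": pixels along a line at 30 degrees to the x-axis.
xs = np.arange(0, 50, dtype=float)
pts = np.stack([xs, np.tan(np.deg2rad(30.0)) * xs], axis=1)
print(hough_line_angle(pts, range(0, 180)))  # 30
```

The paper's pipeline additionally pre-filters with local phase analysis and restricts the transform to a Doppler-derived ROI; this sketch only shows why a Radon/Hough projection recovers the axis orientation of a roughly linear structure.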
Affiliation(s)
- Mohammad I Daoud
- Department of Computer Engineering, German Jordanian University, Amman, 11180, Jordan
- Ayah F Abu-Hani
- Department of Electrical and Computer Engineering, Technical University of Munich, Munich, 80333, Germany
- Ahmad Shtaiyat
- Department of Computer Engineering, German Jordanian University, Amman, 11180, Jordan
- Mostafa Z Ali
- Department of Computer Information Systems, Jordan University of Science and Technology, Irbid, 22110, Jordan
- Rami Alazrai
- Department of Computer Engineering, German Jordanian University, Amman, 11180, Jordan
13
Pfannenstiel A, Iannuccilli J, Cornelis FH, Dupuy DE, Beard WL, Prakash P. Shaping the future of microwave tumor ablation: a new direction in precision and control of device performance. Int J Hyperthermia 2022; 39:664-674. [DOI: 10.1080/02656736.2021.1991012] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/16/2023] Open
Affiliation(s)
- Austin Pfannenstiel
- Precision Microwave Inc, Manhattan, KS, USA
- Department of Electrical and Computer Engineering, Kansas State University, Manhattan, KS, USA
- Jason Iannuccilli
- Department of Diagnostic Imaging, Division of Interventional Oncology, Rhode Island Hospital, Providence, RI, USA
- Francois H. Cornelis
- Interventional Radiology Service, Memorial Sloan Kettering Cancer Center, NY, USA
- Damian E. Dupuy
- Diagnostic Imaging, Brown University, Radiology, Cape Cod Hospital, MA, USA
- Warren L. Beard
- Department of Clinical Sciences, Kansas State University, Manhattan, KS, USA
- Punit Prakash
- Department of Electrical and Computer Engineering, Kansas State University, Manhattan, KS, USA
14
Approaching automated applicator digitization from a new angle: Using sagittal images to improve deep learning accuracy and robustness in high-dose-rate prostate brachytherapy. Brachytherapy 2022; 21:520-531. [DOI: 10.1016/j.brachy.2022.02.005] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2021] [Revised: 02/07/2022] [Accepted: 02/26/2022] [Indexed: 11/17/2022]
15
Maneas E, Hauptmann A, Alles EJ, Xia W, Vercauteren T, Ourselin S, David AL, Arridge S, Desjardins AE. Deep Learning for Instrumented Ultrasonic Tracking: From Synthetic Training Data to In Vivo Application. IEEE Trans Ultrason Ferroelectr Freq Control 2022; 69:543-552. [PMID: 34748488 DOI: 10.1109/tuffc.2021.3126530] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
Instrumented ultrasonic tracking is used to improve needle localization during ultrasound guidance of minimally invasive percutaneous procedures. Here, it is implemented with transmitted ultrasound pulses from a clinical ultrasound imaging probe, which are detected by a fiber-optic hydrophone integrated into a needle. The detected transmissions are then reconstructed to form the tracking image. The current implementation of ultrasonic tracking faces two challenges. First, tracking transmissions are interleaved with the acquisition of B-mode images, so the effective B-mode frame rate is reduced. Second, accurate localization of the needle tip is difficult when the signal-to-noise ratio is low. To address these challenges, we present a framework based on a convolutional neural network (CNN) that maintains spatial resolution with fewer tracking transmissions and enhances signal quality. A major component of the framework is the generation of realistic synthetic training data. The trained network was applied to unseen synthetic data and to experimental in vivo tracking data. Needle localization performance was investigated when reconstruction was performed with up to eightfold fewer tracking transmissions. CNN-based processing of conventional reconstructions showed that axial and lateral spatial resolution could be improved even with an eightfold reduction in tracking transmissions. The framework presented in this study will significantly improve the performance of ultrasonic tracking, leading to faster image acquisition rates and increased localization accuracy.
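The frame-rate trade-off described in this abstract can be made concrete with a back-of-envelope sketch; all numbers here are hypothetical, chosen only to illustrate why reducing interleaved tracking transmissions raises the effective B-mode rate:

```python
def effective_bmode_rate(pulse_rate_hz: float, bmode_lines: int, tracking_tx: int) -> float:
    """Hypothetical interleaving model: each B-mode frame costs `bmode_lines`
    imaging transmissions plus `tracking_tx` interleaved tracking transmissions,
    all fired at a fixed pulse repetition rate `pulse_rate_hz`."""
    return pulse_rate_hz / (bmode_lines + tracking_tx)

full = effective_bmode_rate(10_000, 128, 128)    # one tracking tx per image line
reduced = effective_bmode_rate(10_000, 128, 16)  # eightfold fewer tracking tx
print(round(full, 1), round(reduced, 1))         # 39.1 69.4
```

Under this simplified model, an eightfold reduction in tracking transmissions nearly doubles the achievable B-mode frame rate, which is the motivation for recovering localization quality with a CNN instead of with more transmissions.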
16
De Jesus-Rodriguez HJ, Morgan MA, Sagreiya H. Deep Learning in Kidney Ultrasound: Overview, Frontiers, and Challenges. Adv Chronic Kidney Dis 2021; 28:262-269. [PMID: 34906311 DOI: 10.1053/j.ackd.2021.07.004] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2020] [Revised: 07/06/2021] [Accepted: 07/06/2021] [Indexed: 12/19/2022]
Abstract
Ultrasonography is a practical imaging technique used in numerous health care settings. It is relatively inexpensive, portable, and safe, and it has dynamic capabilities that make it an invaluable tool for a wide variety of diagnostic and interventional studies. Recently, there has been a revolution in medical imaging using artificial intelligence (AI). A particularly potent form of AI is deep learning, in which the computer learns to recognize pixel or written data on its own without the selection of predetermined features, usually through a specific neural network architecture. Neural networks vary in architecture depending on their task, and key design considerations include the number of layers and complexity, data available, technical requirements, and domain knowledge. Deep learning models offer the potential for promising innovations to workflow, image quality, and vision tasks in sonography. However, there are key limitations and challenges in creating reliable and safe AI models for patients and clinicians.
17
Deriving Non-Cloud Contaminated Sentinel-2 Images with RGB and Near-Infrared Bands from Sentinel-1 Images Based on a Conditional Generative Adversarial Network. Remote Sensing 2021. [DOI: 10.3390/rs13081512] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Sentinel-2 images have been widely used in studying land surface phenomena and processes, but they inevitably suffer from cloud contamination. To solve this critical optical data availability issue, it is ideal to fuse Sentinel-1 and Sentinel-2 images to create fused, cloud-free Sentinel-2-like images for facilitating land surface applications. In this paper, we propose a new data fusion model, the Multi-channels Conditional Generative Adversarial Network (MCcGAN), based on the conditional generative adversarial network, which is able to convert images from Domain A to Domain B. With the model, we were able to generate fused, cloud-free Sentinel-2-like images for a target date by using a pair of reference Sentinel-1/Sentinel-2 images and target-date Sentinel-1 images as inputs. In order to demonstrate the superiority of our method, we also compared it with other state-of-the-art methods using the same data. To make the evaluation more objective and reliable, we calculated the root-mean-square error (RMSE), R2, Kling–Gupta efficiency (KGE), structural similarity index (SSIM), spectral angle mapper (SAM), and peak signal-to-noise ratio (PSNR) of the simulated Sentinel-2 images generated by different methods. The results show that the simulated Sentinel-2 images generated by the MCcGAN have a higher quality and accuracy than those produced via the previous methods.
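Two of the metrics listed in this abstract, RMSE and PSNR, have simple closed forms; a minimal numpy sketch with a toy unit-range image (illustrative only, not the paper's evaluation code) is:

```python
import numpy as np

def rmse(x: np.ndarray, y: np.ndarray) -> float:
    """Root-mean-square error between two equally shaped arrays."""
    return float(np.sqrt(np.mean((x - y) ** 2)))

def psnr(x: np.ndarray, y: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB: 20 * log10(max_val / RMSE)."""
    return 20.0 * np.log10(max_val / rmse(x, y))

ref = np.zeros((8, 8))
sim = ref + 0.1           # uniform 0.1 error on a unit-range image
print(rmse(ref, sim))     # ≈ 0.1
print(psnr(ref, sim))     # ≈ 20 dB
```

Higher PSNR (and lower RMSE) indicates that the simulated Sentinel-2 image is closer to the reference, which is how the paper ranks the compared fusion methods on these two metrics.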
18
Park CKS, Bax JS, Gardi L, Knull E, Fenster A. Development of a mechatronic guidance system for targeted ultrasound-guided biopsy under high-resolution positron emission mammography localization. Med Phys 2021; 48:1859-1873. [PMID: 33577113 DOI: 10.1002/mp.14768] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2020] [Revised: 01/20/2021] [Accepted: 02/05/2021] [Indexed: 11/10/2022] Open
Abstract
PURPOSE Image-guided needle biopsy of small, detectable lesions is crucial for early-stage diagnosis, treatment planning, and management of breast cancer. High-resolution positron emission mammography (PEM) is a dedicated functional imaging modality that can detect breast cancer independent of breast tissue density, but anatomical context and real-time needle visualization are not yet available to guide biopsy. We propose a mechatronic guidance system integrating ultrasound (US)-guided core-needle biopsy (CNB) with high-resolution PEM localization to improve the spatial sampling of breast lesions. This paper presents the benchtop testing and phantom studies performed to evaluate the accuracy of the system and its constituent components for targeted PEM-US-guided biopsy under simulated high-resolution PEM localization. METHODS A mechatronic guidance system was developed to operate with the Radialis PEM system and a conventional US system. The system includes a user-operated guidance arm and an end-effector biopsy device, integrating a US transducer and CNB gun, with its needle focused on a remote center of motion (RCM). Custom software modules were developed to track, display, and guide the end-effector biopsy device. Registration of the mechatronic guidance system to a simulated PEM detector plate was performed using a landmark-based method. Testing was performed with fiducials positioned in the peripheral and central regions of the simulated detector plate, and registration error was quantified. Breast phantom experiments were performed under ideal detection and localization to evaluate bias in the end-effector biopsy device. The accuracy of the complete mechatronic guidance system in performing targeted breast biopsy was assessed using breast phantoms with simulated lesions. Three-dimensional positioning error was quantified, and principal component analysis was used to assess directional trends in 3D space within 95% prediction intervals. Targeted breast biopsies with test phantoms were performed, and the overall in-plane needle targeting error was quantified. RESULTS The mean registration errors were 0.63 mm (N = 44) and 0.73 mm (N = 72) in the peripheral and central regions of the simulated PEM detector plate, respectively. A 3D 95% prediction ellipsoid shows an error volume <2.0 mm in diameter, centered on the mean registration error. Under ideal detection and localization, targets <1.0 mm in diameter can be sampled with 95% confidence. The complete mechatronic guidance system successfully spatially sampled simulated breast lesions, 4 mm and 6 mm in diameter and height (N = 20), at known 3D positions in the PEM image coordinate space. The 3D positioning error was 0.85 mm (N = 20), with 0.64 mm in-plane and 0.44 mm cross-plane component errors. Targeted breast biopsies resulted in a mean in-plane needle targeting error of 1.08 mm (N = 15), allowing targets 1.32 mm in radius to be sampled with 95% confidence. CONCLUSIONS We demonstrated the utility of our mechatronic guidance system for targeted breast biopsy under high-resolution PEM localization. Breast phantom studies showed the ability to accurately guide, position, and target breast lesions, with the accuracy to spatially sample targets <3.0 mm in diameter with 95% confidence. Future work will integrate the developed system with the Radialis PEM system toward combined PEM-US-guided breast biopsy.
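The style of error analysis described in this abstract, a mean 3D positioning error plus a PCA-derived 95% prediction ellipsoid, can be sketched as follows; the error samples below are randomly generated stand-ins, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for measured 3D targeting errors (mm), shape (N, 3).
errors = rng.normal(0.0, [0.6, 0.4, 0.3], size=(20, 3))

# Mean 3D error magnitude across the N targeting attempts.
mean_err = np.linalg.norm(errors, axis=1).mean()

# PCA of the error cloud: eigenvectors of the covariance matrix give the
# principal directions; under a Gaussian assumption, 1.96 * sqrt(eigenvalue)
# is the 95% semi-axis length of the prediction ellipsoid along each direction.
cov = np.cov(errors, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
semi_axes_95 = 1.96 * np.sqrt(eigvals)
print(mean_err, semi_axes_95)
```

An ellipsoid with all semi-axes below some radius r means targets of radius r can be sampled with roughly 95% confidence, which is the form of the claims quoted above.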
Affiliation(s)
- Claire Keun Sun Park
- Department of Medical Biophysics, Schulich School of Medicine and Dentistry, Western University, London, Ontario, N6A 3K7, Canada; Imaging Research Laboratories, Robarts Research Institute, London, Ontario, N6A 5B7, Canada
- Jeffrey Scott Bax
- Imaging Research Laboratories, Robarts Research Institute, London, Ontario, N6A 5B7, Canada
- Lori Gardi
- Imaging Research Laboratories, Robarts Research Institute, London, Ontario, N6A 5B7, Canada
- Eric Knull
- Imaging Research Laboratories, Robarts Research Institute, London, Ontario, N6A 5B7, Canada; School of Biomedical Engineering, Faculty of Engineering, Western University, London, Ontario, N6A 3K7, Canada
- Aaron Fenster
- Department of Medical Biophysics, Schulich School of Medicine and Dentistry, Western University, London, Ontario, N6A 3K7, Canada; Imaging Research Laboratories, Robarts Research Institute, London, Ontario, N6A 5B7, Canada; School of Biomedical Engineering, Faculty of Engineering, Western University, London, Ontario, N6A 3K7, Canada