1. Wang H, Wu H, Wang Z, Yue P, Ni D, Heng PA, Wang Y. A Narrative Review of Image Processing Techniques Related to Prostate Ultrasound. Ultrasound Med Biol 2025; 51:189-209. [PMID: 39551652] [DOI: 10.1016/j.ultrasmedbio.2024.10.005]
Abstract
Prostate cancer (PCa) poses a significant threat to men's health, and early diagnosis is crucial for improving prognosis and reducing mortality. Transrectal ultrasound (TRUS) plays a vital role in the diagnosis and image-guided intervention of PCa. To support physicians with more accurate and efficient computer-assisted diagnosis and interventions, many TRUS image processing algorithms have been proposed and have achieved state-of-the-art performance in several tasks, including prostate gland segmentation, prostate image registration, PCa classification and detection, and interventional needle detection. The rapid development of these algorithms over the past two decades calls for a comprehensive summary. Consequently, this survey provides a narrative review of the field, outlining the evolution of image processing methods in TRUS image analysis and highlighting their key contributions. The survey also discusses current challenges and suggests future research directions to advance the field further.
Affiliation(s)
- Haiqiao Wang
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Hong Wu
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Zhuoyuan Wang
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Peiyan Yue
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Dong Ni
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Yi Wang
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
2. Rodler S, Kidess MA, Westhofen T, Kowalewski KF, Belenchon IR, Taratkin M, Puliatti S, Gómez Rivas J, Veccia A, Piazza P, Checcucci E, Stief CG, Cacciamani GE. A Systematic Review of New Imaging Technologies for Robotic Prostatectomy: From Molecular Imaging to Augmented Reality. J Clin Med 2023; 12:5425. [PMID: 37629467] [PMCID: PMC10455161] [DOI: 10.3390/jcm12165425]
Abstract
New imaging technologies play a pivotal role in the current management of patients with prostate cancer. Robot-assisted radical prostatectomy (RARP) is a standard of care for localized disease, and its imaging-based console makes it a natural subject of research into combining new imaging technologies with RARP and assessing their impact on surgical outcomes. We therefore aimed to provide a comprehensive analysis of the currently available literature on new imaging technologies for RARP. On 24 January 2023, we performed a systematic review of the literature in PubMed, Scopus and Web of Science according to the PRISMA guidelines and Oxford levels of evidence. A total of 46 studies were identified, of which 19 focus on imaging of the primary tumor, 12 on intraoperative detection of tumor-bearing lymph nodes and 15 on the training of surgeons. While the feasibility of combined approaches using new imaging technologies, including MRI, PSMA PET/CT and intraoperatively applied radioactive and fluorescent dyes, has been demonstrated, prospective confirmation of improvements in surgical outcomes is still ongoing.
Affiliation(s)
- Severin Rodler
- Department of Urology, University Hospital of Munich, 81377 Munich, Germany
- Marc Anwar Kidess
- Department of Urology, University Hospital of Munich, 81377 Munich, Germany
- Thilo Westhofen
- Department of Urology, University Hospital of Munich, 81377 Munich, Germany
- Ines Rivero Belenchon
- Urology and Nephrology Department, Virgen del Rocío University Hospital, Manuel Siurot s/n, 41013 Seville, Spain
- Mark Taratkin
- Institute for Urology and Reproductive Health, Sechenov University, 117418 Moscow, Russia
- Stefano Puliatti
- Department of Urology, University of Modena and Reggio Emilia, 42122 Modena, Italy
- Juan Gómez Rivas
- Department of Urology, Hospital Clinico San Carlos, 28040 Madrid, Spain
- Alessandro Veccia
- Urology Unit, Azienda Ospedaliera Universitaria Integrata Verona, 37126 Verona, Italy
- Pietro Piazza
- Division of Urology, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy
- Enrico Checcucci
- Department of Surgery, Candiolo Cancer Institute, FPO-IRCCS, Candiolo, 10060 Turin, Italy
- Christian Georg Stief
- Department of Urology, University Hospital of Munich, 81377 Munich, Germany
3. Somers P, Schule J, Veil C, Sawodny O, Tarin C. Geometric Mapping Evaluation for Real-Time Local Sensor Simulation. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:609-612. [PMID: 36086634] [DOI: 10.1109/embc48229.2022.9871932]
Abstract
Medical augmented reality and simulated test environments struggle to accurately simulate local sensor measurements across large spatial domains while maintaining the required information resolution and real-time capability. Here, a simple method for real-time simulation of intraoperative sensors is presented to aid medical sensor development and professional training. During a surgical intervention, the interaction between medical sensor systems and tissue leads to mechanical deformation of the tissue. By incorporating detailed finite element simulations into a real-time augmented reality system, the presented method allows more accurate simulation of intraoperative sensor measurements that are independent of the mechanical state of the tissue. The concept uses a coarse, macro-level deformation mesh to maintain both computational speed and the illusion of reality, and a simple geometric point mapping method to include detailed fine-mesh information. The resulting system allows flexible simulation of different types of localized sensor measurement techniques. Preliminary results, obtained in a real-time capable simulation environment, demonstrate the feasibility of the method.
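The coarse-mesh-plus-point-mapping idea lends itself to a compact illustration. The following sketch (our own, not the authors' code) embeds fine-detail points in a coarse deformation mesh via barycentric coordinates computed once on the rest configuration, so that at run time only the coarse finite-element nodes need to be simulated; all names and values are illustrative.

```python
# Illustrative sketch: map fine-detail points through a coarse deformation mesh using
# barycentric coordinates. The weights are computed once on the rest mesh and reused
# at run time, so only the coarse nodes require real-time simulation.
import numpy as np

def barycentric(p, tri):
    """Barycentric coordinates of point p with respect to a 2D triangle (3x2 array)."""
    a, b, c = tri
    T = np.column_stack((b - a, c - a))   # 2x2 edge matrix
    l1, l2 = np.linalg.solve(T, p - a)    # local coordinates
    return np.array([1.0 - l1 - l2, l1, l2])

# Rest configuration: one coarse triangle and two embedded fine-mesh points.
coarse_rest = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
fine_rest = np.array([[0.2, 0.2], [0.5, 0.1]])

# Precompute barycentric weights once (offline step).
weights = np.array([barycentric(p, coarse_rest) for p in fine_rest])

# At run time the coarse FE simulation moves the coarse nodes; fine points follow.
coarse_deformed = coarse_rest + np.array([[0.0, 0.0], [0.05, 0.0], [0.0, -0.03]])
fine_deformed = weights @ coarse_deformed
print(fine_deformed)
```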
4. Tu P, Qin C, Guo Y, Li D, Lungu AJ, Wang H, Chen X. Ultrasound image guided and mixed reality-based surgical system with real-time soft tissue deformation computing for robotic cervical pedicle screw placement. IEEE Trans Biomed Eng 2022; 69:2593-2603. [PMID: 35157575] [DOI: 10.1109/tbme.2022.3150952]
Abstract
Cervical pedicle screw (CPS) placement surgery remains technically demanding because of the complicated anatomy and adjacent neurovascular structures. State-of-the-art surgical navigation and robotic systems still suffer from problems of hand-eye coordination and soft tissue deformation. In this study, we aim to track intraoperative soft tissue deformation, construct a virtual-physical fusion surgical scene, and integrate both into a robotic system for CPS placement surgery. First, we propose a real-time deformation computation method based on a prior shape model and intraoperative partial information acquired from ultrasound images; the structural representation of the deformed target tissue is continuously updated according to the generated posterior shape. Second, a hand-tremor compensation method is proposed to improve the accuracy and robustness of the virtual-physical calibration procedure, and a mixed reality-based surgical scene is constructed for CPS placement surgery. Third, we integrate the soft tissue deformation and virtual-physical fusion methods into our previously proposed surgical robotic system and describe the surgical workflow for CPS placement. We conducted phantom and animal experiments to evaluate the feasibility and accuracy of the proposed system, which yielded a mean surface distance error of 1.52 ± 0.43 mm for soft tissue deformation computing and an average distance deviation of 1.04 ± 0.27 mm for CPS placement. These results indicate substantial clinical potential and show that the proposed system promotes the efficiency and safety of CPS placement surgery.
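The posterior-shape step can be pictured as fitting a low-dimensional prior shape model to the sparse structures visible in ultrasound. The sketch below is a generic, hedged illustration of that idea using a PCA-style shape model and a regularized least-squares fit to a partial set of landmarks; the dimensions, data and regularization are assumptions, not details from the paper.

```python
# Hedged sketch: fit the coefficients of a prior (PCA-style) shape model to a sparse set
# of observed points, then reconstruct the full deformed shape from those coefficients.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_modes = 200, 5
mean_shape = rng.normal(size=3 * n_points)                            # flattened mean shape
modes = np.linalg.qr(rng.normal(size=(3 * n_points, n_modes)))[0]     # orthonormal deformation modes

def posterior_shape(observed_idx, observed_xyz, reg=1e-2):
    """Reconstruct the full shape from a partial observation of some landmarks."""
    rows = np.ravel([[3 * i, 3 * i + 1, 3 * i + 2] for i in observed_idx])
    A, y = modes[rows], observed_xyz.ravel() - mean_shape[rows]
    # Ridge-regularized fit of the mode coefficients to the observed partial data.
    coeffs = np.linalg.solve(A.T @ A + reg * np.eye(n_modes), A.T @ y)
    return (mean_shape + modes @ coeffs).reshape(-1, 3)

# Example: 20 observed surface points recover an estimate of all 200 points.
idx = rng.choice(n_points, size=20, replace=False)
full_true = (mean_shape + modes @ rng.normal(size=n_modes)).reshape(-1, 3)
estimate = posterior_shape(idx, full_true[idx])
print(np.abs(estimate - full_true).max())
```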
5. Mezheritsky T, Romaguera LV, Le W, Kadoury S. Population-based 3D respiratory motion modelling from convolutional autoencoders for 2D ultrasound-guided radiotherapy. Med Image Anal 2021; 75:102260. [PMID: 34670149] [DOI: 10.1016/j.media.2021.102260]
Abstract
Radiotherapy is a widely used treatment modality for various types of cancer. A challenge for precise delivery of radiation to the treatment site is the management of internal motion caused by the patient's breathing, especially around abdominal organs such as the liver. Current image-guided radiation therapy (IGRT) solutions rely on ionising imaging modalities such as X-ray or CBCT, which do not allow real-time target tracking. Ultrasound (US) imaging, on the other hand, is relatively inexpensive, portable and non-ionising. Although 2D US can be acquired at a sufficient temporal frequency, it does not allow target tracking in multiple planes, while 3D US acquisitions are not suited to real-time use. In this work, a novel deep learning-based motion modelling framework is presented for ultrasound IGRT. Our solution includes an image similarity-based rigid alignment module combined with a deep deformable motion model. Leveraging the representational capabilities of convolutional autoencoders, our deformable motion model associates complex 3D deformations with 2D surrogate US images through a common learned low-dimensional representation. The model is trained on a variety of deformations and anatomies, which enables it to generate the 3D motion experienced by the liver of a previously unseen subject. During inference, the framework requires only two pre-treatment 3D volumes of the liver at extreme breathing phases and a live 2D surrogate image representing the current state of the organ. The presented model is evaluated on a 3D+t US data set of 20 volunteers in terms of image similarity and anatomical target tracking performance. We report results that surpass comparable methodologies in both metric categories, with a mean tracking error of 3.5 ± 2.4 mm, demonstrating the potential of this technique for IGRT.
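The core coupling between the live 2D surrogate and the 3D deformation can be sketched as an encoder-decoder network with a shared low-dimensional code. The following PyTorch snippet is a minimal illustration under assumed input sizes and layer choices, not the published architecture.

```python
# Minimal sketch (assumed architecture): encode a 2D surrogate ultrasound frame to a
# low-dimensional code, then decode that code into a dense 3D displacement field that
# could warp the pre-treatment volume.
import torch
import torch.nn as nn

class SurrogateToDeformation(nn.Module):
    def __init__(self, latent_dim=32, vol_shape=(32, 32, 32)):
        super().__init__()
        self.vol_shape = vol_shape
        # 2D encoder for the surrogate image (1 x 64 x 64 assumed input size).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),
        )
        # Decoder from the shared latent code to a 3-channel 3D displacement field.
        self.decoder = nn.Linear(latent_dim, 3 * vol_shape[0] * vol_shape[1] * vol_shape[2])

    def forward(self, surrogate_2d):
        z = self.encoder(surrogate_2d)                       # low-dimensional representation
        flow = self.decoder(z).view(-1, 3, *self.vol_shape)  # dense 3D displacement field
        return flow

model = SurrogateToDeformation()
fake_frame = torch.randn(1, 1, 64, 64)     # one live 2D surrogate US frame
print(model(fake_frame).shape)             # torch.Size([1, 3, 32, 32, 32])
```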
Affiliation(s)
- Tal Mezheritsky
- MedICAL Laboratory, École Polytechnique de Montréal, Montréal, Canada.
- Samuel Kadoury
- MedICAL Laboratory, École Polytechnique de Montréal, Montréal, Canada; CHUM Research Center, Montréal, Canada
6. Prostate brachytherapy intraoperative dosimetry using a combination of radiographic seed localization with a C-arm and deformed ultrasound prostate contours. Brachytherapy 2020; 19:589-598. [PMID: 32682777] [DOI: 10.1016/j.brachy.2020.06.003]
Abstract
PURPOSE: To assess the feasibility of performing intraoperative dosimetry for permanent prostate brachytherapy by combining transrectal ultrasound (TRUS) and fluoroscopy/cone-beam CT (CBCT) images and accounting for the effect of prostate deformation.
METHODS AND MATERIALS: Thirteen patients underwent TRUS and multiview two-dimensional fluoroscopic imaging partway through the implant, repeat fluoroscopic imaging with the TRUS probe inserted and retracted, and finally three-dimensional CBCT imaging at the end of the implant. The locations of all implanted seeds were obtained from the fluoroscopy/CBCT images and registered to prostate contours delineated on the TRUS images based on a common subset of seeds identified on both image sets. Prostate contours were also deformed, using a finite-element model, to account for TRUS probe pressure. Prostate dosimetry parameters were obtained for the fluoroscopic and CBCT dosimetry approaches and compared with standard-of-care Day-0 postimplant CT dosimetry.
RESULTS: High linear correlation (R² > 0.8) was observed between the two intraoperative dosimetry approaches for prostate D90%, V100%, and V150%. Prostate D90% and V100% obtained from the intraoperative dosimetry methods were in agreement with postimplant CT dosimetry. Only prostate V150% was on average 4.1% (p < 0.05) higher in the CBCT dosimetry approach and 6.7% (p < 0.05) higher in postimplant CT dosimetry compared with the fluoroscopic dosimetry approach. Deformation of the prostate by the ultrasound probe appeared to have a minimal effect on prostate dosimetry.
CONCLUSIONS: Both proposed dosimetric evaluation approaches show potential for real-time intraoperative dosimetry.
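Registering seed clouds through a common subset of seeds is, at its core, a rigid point-set alignment problem. The sketch below shows a generic least-squares (Kabsch) fit on simulated seed coordinates; it illustrates the kind of alignment involved rather than the authors' actual pipeline.

```python
# Illustrative sketch: rigid (Kabsch) alignment of seed positions reconstructed from
# fluoroscopy/CBCT onto the TRUS frame, using a subset of seeds identified in both.
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t mapping src points onto dst points."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# Simulated example: a subset of seeds identified in both the CBCT and TRUS images.
rng = np.random.default_rng(1)
seeds_cbct = rng.uniform(-20, 20, size=(12, 3))                 # mm, CBCT frame
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
seeds_trus = seeds_cbct @ R_true.T + np.array([5.0, -3.0, 10.0])

R, t = rigid_fit(seeds_cbct, seeds_trus)
residual = np.linalg.norm(seeds_cbct @ R.T + t - seeds_trus, axis=1)
print(residual.max())                                           # ~0 for noise-free seeds
```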
7.
Abstract
Artificial intelligence (AI) - the ability of a machine to perform cognitive tasks to achieve a particular goal based on provided data - is revolutionizing and reshaping our health-care systems. The current availability of ever-increasing computational power, highly developed pattern recognition algorithms and advanced image processing software working at very high speeds has led to the emergence of computer-based systems that are trained to perform complex tasks in bioinformatics, medical imaging and medical robotics. Accessibility to 'big data' enables the 'cognitive' computer to scan billions of bits of unstructured information, extract the relevant information and recognize complex patterns with increasing confidence. Computer-based decision-support systems based on machine learning (ML) have the potential to revolutionize medicine by performing complex tasks that are currently assigned to specialists to improve diagnostic accuracy, increase efficiency of throughputs, improve clinical workflow, decrease human resource costs and improve treatment choices. These characteristics could be especially helpful in the management of prostate cancer, with growing applications in diagnostic imaging, surgical interventions, skills training and assessment, digital pathology and genomics. Medicine must adapt to this changing world, and urologists, oncologists, radiologists and pathologists, as high-volume users of imaging and pathology, need to understand this burgeoning science and acknowledge that the development of highly accurate AI-based decision-support applications of ML will require collaboration between data scientists, computer researchers and engineers.
8. Qian L, Wu JY, DiMaio SP, Navab N, Kazanzides P. A Review of Augmented Reality in Robotic-Assisted Surgery. IEEE Trans Med Robot Bionics 2020. [DOI: 10.1109/tmrb.2019.2957061]
9. Kalia M, Mathur P, Navab N, Salcudean SE. Marker-less real-time intra-operative camera and hand-eye calibration procedure for surgical augmented reality. Healthc Technol Lett 2019; 6:255-260. [PMID: 32038867] [PMCID: PMC6952262] [DOI: 10.1049/htl.2019.0094]
Abstract
Accurate medical augmented reality (AR) rendering requires two calibrations: estimation of the camera intrinsic matrix and of the hand-eye transformation. We present a unified, practical, marker-less, real-time system to estimate both transformations during surgery. For camera calibration, we perform calibrations at multiple distances from the endoscope pre-operatively to parametrize the camera intrinsic matrix as a function of distance. Intra-operatively, we retrieve the camera parameters by estimating the distance of the surgical site from the endoscope in less than 1 s; unlike prior work, our method does not require the endoscope to be taken out of the patient. For the hand-eye calibration, as opposed to conventional methods that require identification of a marker, we use a tool-tip rendered in 3D: as the surgeon moves the instrument and observes the offset between the actual and the rendered tool-tip, they can select points of high visual error and manually bring the instrument tip to match the virtual rendered tool-tip. To evaluate the hand-eye calibration, 5 subjects carried out the procedure on a da Vinci robot. An average target registration error of approximately 7 mm was achieved with just three data points.
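The distance-parametrized intrinsics can be illustrated with a simple interpolation over pre-operative calibrations. In the sketch below, the calibration distances and parameter values are made up for illustration; only the interpolation idea reflects the described approach.

```python
# Sketch with hypothetical values: pre-operative calibrations at several working distances
# give one intrinsic matrix per distance; intra-operatively, the intrinsics for the
# estimated surgical-site distance are obtained by interpolating the calibrated values.
import numpy as np

calib_distances = np.array([40.0, 60.0, 80.0, 100.0])   # mm from endoscope tip (assumed)
calib_params = np.array([                                # [fx, fy, cx, cy] per distance
    [820.0, 818.0, 640.0, 360.0],
    [805.0, 803.0, 642.0, 358.0],
    [790.0, 789.0, 641.0, 359.0],
    [778.0, 776.0, 643.0, 357.0],
])

def intrinsics_at(distance_mm):
    """Interpolate a 3x3 camera intrinsic matrix at the given working distance."""
    fx, fy, cx, cy = (np.interp(distance_mm, calib_distances, calib_params[:, k])
                      for k in range(4))
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

# Intra-operative use: estimate the distance to the surgical site, then look up K.
print(intrinsics_at(72.5))
```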
Affiliation(s)
- Megha Kalia
- Robotics and Control Lab, Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver, BC V6T 1Z4, Canada; Computer Aided Medical Procedures, Technical University of Munich, Boltzmannstraße 15, 85748 Garching bei München, Germany
- Prateek Mathur
- Robotics and Control Lab, Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver, BC V6T 1Z4, Canada
- Nassir Navab
- Computer Aided Medical Procedures, Technical University of Munich, Boltzmannstraße 15, 85748 Garching bei München, Germany
- Septimiu E Salcudean
- Robotics and Control Lab, Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver, BC V6T 1Z4, Canada
10. A partial augmented reality system with live ultrasound and registered preoperative MRI for guiding robot-assisted radical prostatectomy. Med Image Anal 2019; 60:101588. [PMID: 31739281] [DOI: 10.1016/j.media.2019.101588]
Abstract
We propose an image guidance system for robot-assisted laparoscopic radical prostatectomy (RALRP). A virtual 3D reconstruction of the surgical scene is displayed underneath the endoscope's feed on the surgeon's console. This scene consists of an annotated preoperative magnetic resonance image (MRI) registered to intraoperative 3D transrectal ultrasound (TRUS), real-time sagittal 2D TRUS images of the prostate, and 3D models of the prostate, the surgical instrument and the TRUS transducer. These components are displayed with accurate real-time coordinates with respect to the robot system. Since the scene is rendered from the viewpoint of the endoscope, given correct camera parameters, an augmented scene can be overlaid on the video output. The surgeon can rotate the ultrasound transducer and determine the position of the projected axial plane in the MRI using one of the registered da Vinci instruments. The system was tested in the laboratory on custom-made agar prostate phantoms, achieving an average total registration accuracy of 3.2 ± 1.3 mm. We also report the successful application of this system in the operating room in 12 patients. For the last 8 patients, the average registration error between the TRUS and the da Vinci system was 1.4 ± 0.3 mm and the average target registration error was 2.1 ± 0.8 mm, resulting in an in vivo overall robot-system-to-MRI mean registration error of 3.5 mm or less, consistent with our laboratory studies.
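Overlaying TRUS-derived geometry on the endoscope video amounts to chaining rigid transforms into the camera frame and applying the pinhole projection. The sketch below uses placeholder transforms and intrinsics purely to illustrate this composition; it is not the system's calibration data.

```python
# Illustrative sketch with made-up transforms: project a point defined in the TRUS frame
# into endoscope pixel coordinates by chaining TRUS->robot and robot->camera transforms.
import numpy as np

def rt(R, t):
    """Assemble a 4x4 homogeneous transform from a rotation matrix and translation."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

K = np.array([[800.0, 0.0, 640.0],                    # assumed endoscope intrinsics
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
T_robot_trus = rt(np.eye(3), [30.0, -10.0, 120.0])    # TRUS frame -> robot frame (mm)
T_cam_robot = rt(np.eye(3), [-5.0, 2.0, 40.0])        # robot frame -> camera frame (mm)

def project(p_trus):
    """Project a 3D point given in the TRUS frame into endoscope pixel coordinates."""
    p_cam = (T_cam_robot @ T_robot_trus @ np.append(p_trus, 1.0))[:3]
    u, v, w = K @ p_cam
    return np.array([u / w, v / w])

# Example: a prostate surface point segmented in the TRUS volume.
print(project(np.array([0.0, 5.0, 10.0])))
```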
11. Mathur P, Samei G, Tsang K, Lobo J, Salcudean S. On the feasibility of transperineal 3D ultrasound image guidance for robotic radical prostatectomy. Int J Comput Assist Radiol Surg 2019; 14:923-931. [PMID: 30863982] [DOI: 10.1007/s11548-019-01938-w]
Abstract
PURPOSE: Prostate cancer is the most prevalent male-specific cancer. Robot-assisted laparoscopic radical prostatectomy (RALRP) using the da Vinci surgical robot has become the gold-standard treatment for organ-confined prostate cancer. To improve intraoperative visualization of anatomical structures, many groups have developed techniques integrating transrectal ultrasound (TRUS) into the surgical workflow. TRUS, however, is intrusive and does not provide real-time volumetric imaging.
METHODS: We propose a proof-of-concept system offering an alternative, noninvasive transperineal view of the prostate and surrounding structures using 3D ultrasound (US), allowing full-volume imaging in any desired anatomical plane. The system aims to automatically track da Vinci surgical instruments and display a real-time US image registered to preoperative MRI. We evaluate the approach using a custom prostate phantom, an iU22 (Philips Healthcare, Bothell, WA) US machine with an xMATRIX X6-1 transducer, and a custom probe fixture. A novel registration method between the da Vinci kinematic frame and 3D US is presented. To evaluate the entire registration pipeline, we use a previously developed MRI-to-US deformable registration algorithm.
RESULTS: Our US calibration technique yielded a registration error of 0.84 mm, compared with 1.76 mm for existing methods. We evaluated overall system error with a prostate phantom, achieving a target registration error of 2.55 mm.
CONCLUSION: Transperineal imaging using 3D US is a promising approach for image guidance during RALRP. Preliminary results suggest the system is comparable to existing guidance systems using TRUS. With further development and testing, we believe it has the potential to improve patient outcomes by imaging anatomical structures and prostate cancer in real time.
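A target registration error such as the reported 2.55 mm is typically computed as the mean distance between registered target points and their ground-truth positions. The snippet below shows this calculation on placeholder values; the transform and points are illustrative only.

```python
# Sketch with placeholder data: apply an estimated rigid registration to known target
# points (e.g., phantom fiducials) and report the mean Euclidean distance to their
# ground-truth positions, i.e., the target registration error (TRE).
import numpy as np

def target_registration_error(targets_src, targets_ref, R, t):
    """Mean Euclidean distance (in the units of the points) after applying R, t."""
    mapped = targets_src @ R.T + t
    return np.linalg.norm(mapped - targets_ref, axis=1).mean()

R_est = np.eye(3)                                   # estimated rotation (placeholder)
t_est = np.array([0.5, -0.2, 0.1])                  # estimated translation in mm (placeholder)
targets_us = np.array([[10.0, 20.0, 30.0], [15.0, 25.0, 28.0], [12.0, 18.0, 35.0]])
targets_davinci = targets_us + np.array([0.6, -0.1, 0.3])   # "ground-truth" positions

print(target_registration_error(targets_us, targets_davinci, R_est, t_est))  # mm
```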
Affiliation(s)
- Prateek Mathur
- Department of Electrical and Computer Engineering, University of British Columbia, 2332 Main Mall, Vancouver, BC, V6T 1Z4, Canada.
- Golnoosh Samei
- Department of Electrical and Computer Engineering, University of British Columbia, 2332 Main Mall, Vancouver, BC, V6T 1Z4, Canada
- Keith Tsang
- Department of Electrical and Computer Engineering, University of British Columbia, 2332 Main Mall, Vancouver, BC, V6T 1Z4, Canada
- Julio Lobo
- Department of Electrical and Computer Engineering, University of British Columbia, 2332 Main Mall, Vancouver, BC, V6T 1Z4, Canada
- Septimiu Salcudean
- Department of Electrical and Computer Engineering, University of British Columbia, 2332 Main Mall, Vancouver, BC, V6T 1Z4, Canada