1
Cepeda S, Esteban-Sinovas O, Romero R, Singh V, Shetty P, Moiyadi A, Zemmoura I, Giammalva GR, Del Bene M, Barbotti A, DiMeco F, West TR, Nahed BV, Arrese I, Hornero R, Sarabia R. Real-time brain tumor detection in intraoperative ultrasound: From model training to deployment in the operating room. Comput Biol Med 2025; 193:110481. [PMID: 40449046] [DOI: 10.1016/j.compbiomed.2025.110481]
Abstract
Intraoperative ultrasound (ioUS) is a valuable tool in brain tumor surgery due to its versatility, affordability, and seamless integration into the surgical workflow. However, its adoption remains limited, primarily because of the challenges associated with image interpretation and the steep learning curve required for effective use. This study aimed to enhance the interpretability of ioUS images by developing a real-time brain tumor detection system deployable in the operating room. We collected 2D ioUS images from the BraTioUS and ReMIND datasets, annotated with expert-refined tumor labels. Using the YOLO11 architecture and its variants, we trained object detection models to identify brain tumors. The dataset included 1732 images from 192 patients, divided into training, validation, and test sets. Data augmentation expanded the training set to 11,570 images. In the test dataset, YOLO11s achieved the best balance of precision and computational efficiency, with a mAP@50 of 0.95, mAP@50-95 of 0.65, and a processing speed of 34.16 frames per second. The proposed solution was prospectively validated in a cohort of 20 consecutively operated patients diagnosed with brain tumors. Neurosurgeons confirmed its seamless integration into the surgical workflow, with real-time predictions accurately delineating tumor regions. These findings highlight the potential of real-time object detection algorithms to enhance ioUS-guided brain tumor surgery, addressing key challenges in interpretation and providing a foundation for future development of computer vision-based tools for neuro-oncological surgery.
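The mAP@50 reported above counts a predicted box as a true positive when its intersection-over-union (IoU) with the ground-truth box is at least 0.5. A minimal sketch of that matching criterion, using hypothetical box coordinates rather than values from the study:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Hypothetical predicted vs. ground-truth tumor boxes (pixel coordinates).
pred, truth = (10, 10, 50, 50), (15, 15, 55, 55)
overlap = iou(pred, truth)   # 1225 / 1975 ≈ 0.62
hit_at_50 = overlap >= 0.5   # True: counted as a true positive for mAP@50
```

mAP@50-95 repeats this match at IoU thresholds from 0.5 to 0.95 and averages the resulting precision, which is why it is the stricter of the two figures.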
Affiliation(s)
- Santiago Cepeda
- Department of Neurosurgery, Rio Hortega University Hospital, Dulzaina 2, 47014 Valladolid, Spain; Specialized Group in Biomedical Imaging and Computational Analysis (GEIBAC), Instituto de Investigacion Biosanitaria de Valladolid (IBioVALL), Dulzaina 2, 47014 Valladolid, Spain
- Olga Esteban-Sinovas
- Department of Neurosurgery, Rio Hortega University Hospital, Dulzaina 2, 47014 Valladolid, Spain; Specialized Group in Biomedical Imaging and Computational Analysis (GEIBAC), Instituto de Investigacion Biosanitaria de Valladolid (IBioVALL), Dulzaina 2, 47014 Valladolid, Spain
- Roberto Romero
- Biomedical Engineering Group, University of Valladolid, P. de Belen 15, 47011 Valladolid, Spain
- Vikas Singh
- Department of Neurosurgery, Tata Memorial Centre, Homi Bhabha National Institute, Parel East, Mumbai 400012, Maharashtra, India
- Prakash Shetty
- Department of Neurosurgery, Tata Memorial Centre, Homi Bhabha National Institute, Parel East, Mumbai 400012, Maharashtra, India
- Aliasgar Moiyadi
- Department of Neurosurgery, Tata Memorial Centre, Homi Bhabha National Institute, Parel East, Mumbai 400012, Maharashtra, India
- Ilyess Zemmoura
- UMR 1253, iBrain, Université de Tours, Inserm, 10 Bd Tonnelle, 37000 Tours, France; Department of Neurosurgery, CHRU de Tours, 2 Bd Tonnelle, 37000 Tours, France
- Giuseppe Roberto Giammalva
- Department of Neurosurgery, ARNAS Civico Di Cristina Benfratelli Hospital, P.za Leotta Nicola, 90127 Palermo, Italy
- Massimiliano Del Bene
- Department of Neurosurgery, Fondazione IRCCS Istituto Neurologico Carlo Besta, Via Celoria 11, 20133 Milan, Italy; Department of Pharmacological and Biomolecular Sciences, University of Milan, Via Festa del Perdono 7, 20122 Milan, Italy
- Arianna Barbotti
- Department of Neurosurgery, Fondazione IRCCS Istituto Neurologico Carlo Besta, Via Celoria 11, 20133 Milan, Italy
- Francesco DiMeco
- Department of Neurosurgery, Fondazione IRCCS Istituto Neurologico Carlo Besta, Via Celoria 11, 20133 Milan, Italy; Department of Oncology and Hematology-Oncology, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy; Department of Neurological Surgery, Johns Hopkins Medical School, 733 N Broadway, Baltimore, MD 21205, USA
- Timothy R West
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, 55 Fruit St, Boston, MA 02114, USA
- Brian V Nahed
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, 55 Fruit St, Boston, MA 02114, USA
- Ignacio Arrese
- Department of Neurosurgery, Rio Hortega University Hospital, Dulzaina 2, 47014 Valladolid, Spain; Specialized Group in Biomedical Imaging and Computational Analysis (GEIBAC), Instituto de Investigacion Biosanitaria de Valladolid (IBioVALL), Dulzaina 2, 47014 Valladolid, Spain
- Roberto Hornero
- Specialized Group in Biomedical Imaging and Computational Analysis (GEIBAC), Instituto de Investigacion Biosanitaria de Valladolid (IBioVALL), Dulzaina 2, 47014 Valladolid, Spain; Center for Biomedical Research in Network of Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Av. Monforte de Lemos, 3-5, 47011 Valladolid, Spain; Institute for Research in Mathematics (IMUVA), University of Valladolid, P. de Belen 15, 47011 Valladolid, Spain
- Rosario Sarabia
- Department of Neurosurgery, Rio Hortega University Hospital, Dulzaina 2, 47014 Valladolid, Spain; Specialized Group in Biomedical Imaging and Computational Analysis (GEIBAC), Instituto de Investigacion Biosanitaria de Valladolid (IBioVALL), Dulzaina 2, 47014 Valladolid, Spain
2
Cepeda S, Esteban-Sinovas O, Singh V, Shetty P, Moiyadi A, Dixon L, Weld A, Anichini G, Giannarou S, Camp S, Zemmoura I, Giammalva GR, Del Bene M, Barbotti A, DiMeco F, West TR, Nahed BV, Romero R, Arrese I, Hornero R, Sarabia R. Deep Learning-Based Glioma Segmentation of 2D Intraoperative Ultrasound Images: A Multicenter Study Using the Brain Tumor Intraoperative Ultrasound Database (BraTioUS). Cancers (Basel) 2025; 17:315. [PMID: 39858097] [PMCID: PMC11763412] [DOI: 10.3390/cancers17020315]
Abstract
Background: Intraoperative ultrasound (ioUS) provides real-time imaging during neurosurgical procedures, with advantages such as portability and cost-effectiveness. Accurate tumor segmentation has the potential to substantially enhance the interpretability of ioUS images; however, its implementation is limited by persistent challenges, including noise, artifacts, and anatomical variability. This study aims to develop a convolutional neural network (CNN) model for glioma segmentation in ioUS images via a multicenter dataset. Methods: We retrospectively collected data from the BraTioUS and ReMIND datasets, including histologically confirmed gliomas with high-quality B-mode images. For each patient, the tumor was manually segmented on the 2D slice with its largest diameter. A CNN was trained using the nnU-Net framework. The dataset was stratified by center and divided into training (70%) and testing (30%) subsets, with external validation performed on two independent cohorts: the RESECT-SEG database and the Imperial College NHS Trust London cohort. Performance was evaluated using metrics such as the Dice similarity coefficient (DSC), average symmetric surface distance (ASSD), and 95th percentile Hausdorff distance (HD95). Results: The training cohort consisted of 197 subjects, 56 of whom were in the hold-out testing set and 53 in the external validation cohort. In the hold-out testing set, the model achieved a median DSC of 0.90, ASSD of 8.51, and HD95 of 29.08. On external validation, the model achieved a DSC of 0.65, ASSD of 14.14, and HD95 of 44.02 on the RESECT-SEG database and a DSC of 0.93, ASSD of 8.58, and HD95 of 28.81 on the Imperial-NHS cohort. Conclusions: This study supports the feasibility of CNN-based glioma segmentation in ioUS across multiple centers. Future work should enhance segmentation detail and explore real-time clinical implementation, potentially expanding ioUS's role in neurosurgical resection.
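The Dice similarity coefficient (DSC) reported above is twice the overlap between the predicted and reference masks divided by their total size. A minimal illustration on toy binary masks (not data from the study):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient of two binary masks (nested lists of 0/1)."""
    inter = sum(a & b for row_a, row_b in zip(mask_a, mask_b)
                for a, b in zip(row_a, row_b))
    total = sum(map(sum, mask_a)) + sum(map(sum, mask_b))
    return 2 * inter / total if total else 1.0

# Toy 3x3 prediction vs. reference masks.
pred  = [[0, 1, 1],
         [0, 1, 1],
         [0, 0, 0]]
truth = [[0, 1, 1],
         [0, 1, 0],
         [0, 0, 0]]
score = dice(pred, truth)  # 2 * 3 / (4 + 3) ≈ 0.857
```

DSC measures volumetric overlap only; the ASSD and HD95 figures complement it by measuring boundary distances, which is why all three are reported together.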
Affiliation(s)
- Santiago Cepeda
- Department of Neurosurgery, Río Hortega University Hospital, 47014 Valladolid, Spain
- Olga Esteban-Sinovas
- Department of Neurosurgery, Río Hortega University Hospital, 47014 Valladolid, Spain
- Vikas Singh
- Department of Neurosurgery, Tata Memorial Centre, Homi Bhabha National Institute, Mumbai 400012, Maharashtra, India
- Prakash Shetty
- Department of Neurosurgery, Tata Memorial Centre, Homi Bhabha National Institute, Mumbai 400012, Maharashtra, India
- Aliasgar Moiyadi
- Department of Neurosurgery, Tata Memorial Centre, Homi Bhabha National Institute, Mumbai 400012, Maharashtra, India
- Luke Dixon
- Department of Imaging, Charing Cross Hospital, Fulham Palace Rd, London W6 8RF, UK
- Alistair Weld
- Hamlyn Centre, Imperial College London, Exhibition Rd, London SW7 2AZ, UK
- Giulio Anichini
- Department of Neurosurgery, Charing Cross Hospital, Fulham Palace Rd, London W6 8RF, UK
- Stamatia Giannarou
- Hamlyn Centre, Imperial College London, Exhibition Rd, London SW7 2AZ, UK
- Sophie Camp
- Department of Neurosurgery, Charing Cross Hospital, Fulham Palace Rd, London W6 8RF, UK
- Ilyess Zemmoura
- UMR 1253, iBrain, Université de Tours, Inserm, 37000 Tours, France
- Department of Neurosurgery, CHRU de Tours, 37000 Tours, France
- Massimiliano Del Bene
- Department of Neurosurgery, Fondazione IRCCS Istituto Neurologico Carlo Besta, Via Celoria 11, 20133 Milan, Italy
- Department of Pharmacological and Biomolecular Sciences, University of Milan, 20122 Milan, Italy
- Arianna Barbotti
- Department of Neurosurgery, Fondazione IRCCS Istituto Neurologico Carlo Besta, Via Celoria 11, 20133 Milan, Italy
- Francesco DiMeco
- Department of Neurosurgery, Fondazione IRCCS Istituto Neurologico Carlo Besta, Via Celoria 11, 20133 Milan, Italy
- Department of Oncology and Hematology-Oncology, Università degli Studi di Milano, 20122 Milan, Italy
- Department of Neurological Surgery, Johns Hopkins Medical School, Baltimore, MD 21205, USA
- Timothy Richard West
- Department of Neurosurgery, Massachusetts General Hospital, Mass General Brigham, Harvard Medical School, Boston, MA 02114, USA
- Brian Vala Nahed
- Department of Neurosurgery, Massachusetts General Hospital, Mass General Brigham, Harvard Medical School, Boston, MA 02114, USA
- Roberto Romero
- Biomedical Engineering Group, Universidad de Valladolid, 47011 Valladolid, Spain
- Ignacio Arrese
- Department of Neurosurgery, Río Hortega University Hospital, 47014 Valladolid, Spain
- Roberto Hornero
- Biomedical Engineering Group, Universidad de Valladolid, 47011 Valladolid, Spain
- Center for Biomedical Research in Network of Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), 47011 Valladolid, Spain
- Institute for Research in Mathematics (IMUVA), University of Valladolid, 47011 Valladolid, Spain
- Rosario Sarabia
- Department of Neurosurgery, Río Hortega University Hospital, 47014 Valladolid, Spain
3
Behboodi B, Carton FX, Chabanas M, de Ribaupierre S, Solheim O, Munkvold BKR, Rivaz H, Xiao Y, Reinertsen I. Open access segmentations of intraoperative brain tumor ultrasound images. Med Phys 2024; 51:6525-6532. [PMID: 39047165] [DOI: 10.1002/mp.17317]
Abstract
PURPOSE Registration and segmentation of magnetic resonance (MR) and ultrasound (US) images could play an essential role in surgical planning and the resection of brain tumors. However, validating these techniques is challenging due to the scarcity of publicly accessible sources with high-quality ground truth information. To this end, we propose a unique set of segmentations (RESECT-SEG) of cerebral structures from the previously published RESECT dataset to encourage more rigorous development and assessment of image-processing techniques for neurosurgery. ACQUISITION AND VALIDATION METHODS The RESECT database consists of MR and intraoperative US (iUS) images of 23 patients who underwent brain tumor resection surgeries. The proposed RESECT-SEG dataset contains segmentations of tumor tissues, sulci, falx cerebri, and resection cavities in the RESECT iUS images. Two highly experienced neurosurgeons validated the quality of the segmentations. DATA FORMAT AND USAGE NOTES Segmentations are provided in 3D NIfTI format on the OSF open-science platform: https://osf.io/jv8bk. POTENTIAL APPLICATIONS The proposed RESECT-SEG dataset includes segmentations of real-world clinical US brain images that could be used to develop and evaluate segmentation and registration methods. Eventually, this dataset could further improve the quality of image guidance in neurosurgery.
Affiliation(s)
- Bahareh Behboodi
- Department of Electrical and Computer Engineering, Concordia University, Montreal, Canada
- School of Health, Concordia University, Montreal, Canada
- Matthieu Chabanas
- Université Grenoble Alpes, CNRS, Grenoble INP, TIMC, Grenoble, France
- Sandrine de Ribaupierre
- Department of Clinical Neurological Sciences, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Ole Solheim
- Department of Neurosurgery, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
- Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- Bodil K R Munkvold
- Department of Neurosurgery, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
- Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- Hassan Rivaz
- Department of Electrical and Computer Engineering, Concordia University, Montreal, Canada
- School of Health, Concordia University, Montreal, Canada
- Yiming Xiao
- School of Health, Concordia University, Montreal, Canada
- Department of Computer Science and Software Engineering, Concordia University, Montreal, Canada
- Ingerid Reinertsen
- Department of Health Research, SINTEF Digital, Trondheim, Norway
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
4
Masoumi N, Rivaz H, Hacihaliloglu I, Ahmad MO, Reinertsen I, Xiao Y. The Big Bang of Deep Learning in Ultrasound-Guided Surgery: A Review. IEEE Trans Ultrason Ferroelectr Freq Control 2023; 70:909-919. [PMID: 37028313] [DOI: 10.1109/tuffc.2023.3255843]
Abstract
Ultrasound (US) imaging is a paramount modality in many image-guided surgeries and percutaneous interventions, thanks to its high portability, temporal resolution, and cost-efficiency. However, due to its imaging principles, US images are often noisy and difficult to interpret. Appropriate image processing can greatly enhance the applicability of the modality in clinical practice. Compared with classic iterative optimization and machine learning (ML) approaches, deep learning (DL) algorithms have shown great performance in terms of accuracy and efficiency for US processing. In this work, we conduct a comprehensive review of DL algorithms in US-guided interventions, summarize the current trends, and suggest future directions on the topic.
5
Seetohul J, Shafiee M, Sirlantzis K. Augmented Reality (AR) for Surgical Robotic and Autonomous Systems: State of the Art, Challenges, and Solutions. Sensors (Basel) 2023; 23:6202. [PMID: 37448050] [DOI: 10.3390/s23136202]
Abstract
Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), most devices remain focused on improving end-effector dexterity and precision, as well as improved access to minimally invasive surgeries. This paper aims to provide a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions for increased user perception of the augmented world. Researchers in the field have long faced issues with low accuracy in tool placement around complex trajectories, pose estimation, and difficulty in depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We outline the shortcomings of current optimization algorithms for surgical robots (such as YOLO and LSTM) while providing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.
Affiliation(s)
- Jenna Seetohul
- Mechanical Engineering Group, School of Engineering, University of Kent, Canterbury CT2 7NT, UK
- Mahmood Shafiee
- Mechanical Engineering Group, School of Engineering, University of Kent, Canterbury CT2 7NT, UK
- School of Mechanical Engineering Sciences, University of Surrey, Guildford GU2 7XH, UK
- Konstantinos Sirlantzis
- School of Engineering, Technology and Design, Canterbury Christ Church University, Canterbury CT1 1QU, UK
- Intelligent Interactions Group, School of Engineering, University of Kent, Canterbury CT2 7NT, UK
6
Canalini L, Klein J, Waldmannstetter D, Kofler F, Cerri S, Hering A, Heldmann S, Schlaeger S, Menze BH, Wiestler B, Kirschke J, Hahn HK. Quantitative evaluation of the influence of multiple MRI sequences and of pathological tissues on the registration of longitudinal data acquired during brain tumor treatment. Front Neuroimaging 2022; 1:977491. [PMID: 37555157] [PMCID: PMC10406206] [DOI: 10.3389/fnimg.2022.977491]
Abstract
Registration methods facilitate the comparison of multiparametric magnetic resonance images acquired at different stages of brain tumor treatment. Image-based registration solutions are influenced by the sequences chosen to compute the distance measure and by the lack of image correspondences due to resection cavities and pathological tissues. Nonetheless, an evaluation of the impact of these input parameters on the registration of longitudinal data is still missing. This work evaluates the influence of multiple sequences, namely T1-weighted (T1), T2-weighted (T2), contrast-enhanced T1-weighted (T1-CE), and T2 Fluid Attenuated Inversion Recovery (FLAIR), and of the exclusion of pathological tissues on the non-rigid registration of pre- and post-operative images. We investigate two types of registration methods: an iterative approach and a convolutional neural network (CNN) solution based on a 3D U-Net. We employ two test sets to compute the mean target registration error (mTRE) based on corresponding landmarks. In the first set, markers are positioned exclusively in the surroundings of the pathology. The methods employing T1-CE achieve the lowest mTREs, with an improvement of up to 0.8 mm for the iterative solution. The errors are higher than the baseline when using the FLAIR sequence. When excluding the pathology, lower mTREs are observable for most of the methods. In the second test set, corresponding landmarks are located throughout the brain volumes. Both solutions employing T1-CE obtain the lowest mTREs, with a decrease of up to 1.16 mm for the iterative method, whereas the results worsen using FLAIR. When excluding the pathology, an improvement is observable for the CNN method using T1-CE. Both approaches utilizing the T1-CE sequence obtain the best mTREs, whereas FLAIR is the least informative for guiding the registration process. Moreover, the exclusion of pathology from the distance measure computation improves the registration of the brain tissues surrounding the tumor. Thus, this work provides the first numerical evaluation of the influence of these parameters on the registration of longitudinal magnetic resonance images, and it can be helpful for developing future algorithms.
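The mean target registration error (mTRE) used in this evaluation is the average Euclidean distance between corresponding landmarks after registration. A small sketch with hypothetical landmark coordinates in millimeters (not values from the paper):

```python
import math

def mtre(fixed, warped):
    """Mean target registration error: average Euclidean distance between
    corresponding landmark pairs after registration (here in mm)."""
    return sum(math.dist(f, w) for f, w in zip(fixed, warped)) / len(fixed)

# Hypothetical 3D landmark positions in mm.
fixed  = [(10.0, 20.0, 30.0), (40.0, 50.0, 60.0)]
warped = [(10.0, 20.0, 31.0), (40.0, 53.0, 64.0)]
error = mtre(fixed, warped)  # (1.0 + 5.0) / 2 = 3.0 mm
```

Because mTRE averages distances at specific anatomical points, it rewards accuracy exactly where landmarks are placed, which is why the two landmark sets (peri-tumoral vs. whole-brain) can rank the methods differently.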
Affiliation(s)
- Luca Canalini
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Jan Klein
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Diana Waldmannstetter
- Image-Based Biomedical Modeling, Department of Informatics, Technical University of Munich, Munich, Germany
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Florian Kofler
- Image-Based Biomedical Modeling, Department of Informatics, Technical University of Munich, Munich, Germany
- Department of Neuroradiology, Technical University of Munich (TUM) School of Medicine, Klinikum Rechts Der Isar, Technical University of Munich, Munich, Germany
- TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Munich, Germany
- Helmholtz AI, Helmholtz Zentrum Munich, Munich, Germany
- Stefano Cerri
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
- Alessa Hering
- Fraunhofer Institute for Digital Medicine MEVIS, Lübeck, Germany
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, Netherlands
- Stefan Heldmann
- Fraunhofer Institute for Digital Medicine MEVIS, Lübeck, Germany
- Sarah Schlaeger
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Munich, Germany
- Bjoern H. Menze
- Image-Based Biomedical Modeling, Department of Informatics, Technical University of Munich, Munich, Germany
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Benedikt Wiestler
- Department of Neuroradiology, Technical University of Munich (TUM) School of Medicine, Klinikum Rechts Der Isar, Technical University of Munich, Munich, Germany
- TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Munich, Germany
- Jan Kirschke
- Department of Neuroradiology, Technical University of Munich (TUM) School of Medicine, Klinikum Rechts Der Isar, Technical University of Munich, Munich, Germany
- TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Munich, Germany
- Horst K. Hahn
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
7
An C, Wang Y, Zhang J, Nguyen TQ. Self-Supervised Rigid Registration for Multimodal Retinal Images. IEEE Trans Image Process 2022; 31:5733-5747. [PMID: 36040946] [PMCID: PMC11211857] [DOI: 10.1109/tip.2022.3201476]
Abstract
The ability to accurately overlay one retinal image modality onto another is critical in ophthalmology. Our previous framework achieved state-of-the-art results for multimodal retinal image registration. However, it requires human-annotated labels due to its supervised approach. In this paper, we propose a self-supervised multimodal retina registration method to alleviate the time and expense of preparing training data; that is, we aim to automatically register multimodal retinal images without any human annotations. Specifically, we focus on registering color fundus images with infrared reflectance and fluorescein angiography images, and compare the results with several conventional methods as well as supervised and unsupervised deep learning methods. In the experiments, the proposed self-supervised framework achieves accuracy comparable to the state-of-the-art supervised learning method in terms of registration accuracy and Dice coefficient.
8
Slice imputation: Multiple intermediate slices interpolation for anisotropic 3D medical image segmentation. Comput Biol Med 2022; 147:105667. [DOI: 10.1016/j.compbiomed.2022.105667]
9
Chel H, Bora PK, Ramchiary KK. A fast technique for hyper-echoic region separation from brain ultrasound images using patch based thresholding and cubic B-spline based contour smoothing. Ultrasonics 2021; 111:106304. [PMID: 33360770] [DOI: 10.1016/j.ultras.2020.106304]
Abstract
Ultrasound image guided brain surgery (UGBS) requires an automatic and fast image segmentation method. Level-set and active contour based algorithms have been found useful for obtaining topology-independent boundaries between different image regions, but slow convergence limits their use in online US image segmentation, and their performance deteriorates on US images because of intensity inhomogeneity. This paper proposes an effective region-driven method for the segmentation of hyper-echoic (HE) regions that suppresses the hypo-echoic and anechoic regions in brain US images. An automatic threshold estimation scheme is developed with a modified Niblack's approach. The separation of the hyper-echoic and non-hyper-echoic (NHE) regions is performed by successively applying patch based intensity thresholding and boundary smoothing. First, a patch based segmentation is performed, which roughly separates the two regions. The patch based approach reduces the effect of intensity heterogeneity within an HE region. An iterative boundary correction step with decreasing patch size further improves the regional topology and refines the boundary regions. To avoid slope and curvature discontinuities and obtain distinct boundaries between the HE and NHE regions, a cubic B-spline model of curve smoothing is applied. The proposed method is 50-100 times faster than other level-set based image segmentation algorithms. The segmentation performance and convergence speed of the proposed method are compared with four competing level-set based algorithms. The computational results show that the proposed approach outperforms the other level-set based techniques both subjectively and objectively.
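Classic Niblack thresholding, which the paper modifies, sets a local threshold per patch from the patch mean and standard deviation, T = m + k·s. A minimal per-patch sketch (the intensities and k are illustrative defaults, not the paper's tuned variant):

```python
import statistics

def niblack_threshold(patch, k=-0.2):
    """Classic Niblack threshold for one image patch: T = mean + k * std.
    (k = -0.2 is a common textbook default, not the value from the paper.)"""
    flat = [p for row in patch for p in row]
    return statistics.fmean(flat) + k * statistics.pstdev(flat)

# Illustrative 3x3 intensity patch with one bright (hyper-echoic-like) speckle.
patch = [[120, 130, 125],
         [128, 200, 122],
         [119, 127, 131]]
t = niblack_threshold(patch)
binary = [[1 if p >= t else 0 for p in row] for row in patch]
```

Computing the threshold per patch rather than globally is what makes the scheme robust to the intensity inhomogeneity the abstract describes: each patch's statistics adapt to its local brightness.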
Affiliation(s)
- Haradhan Chel
- Department of Electronics and Communication, Central Institute of Technology Kokrajhar, Assam 783370, India; City Clinic and Research Centre, Kokrajhar, Assam, India
- P K Bora
- Department of EEE, Indian Institute of Technology Guwahati, Assam, India
- K K Ramchiary
- City Clinic and Research Centre, Kokrajhar, Assam, India
10
Gerard IJ, Kersten-Oertel M, Hall JA, Sirhan D, Collins DL. Brain Shift in Neuronavigation of Brain Tumors: An Updated Review of Intra-Operative Ultrasound Applications. Front Oncol 2021; 10:618837. [PMID: 33628733] [PMCID: PMC7897668] [DOI: 10.3389/fonc.2020.618837]
Abstract
Neuronavigation using pre-operative imaging data for neurosurgical guidance is a ubiquitous tool for the planning and resection of oncologic brain disease. These systems are rendered unreliable when brain shift invalidates the patient-image registration. Our previous review in 2015, Brain shift in neuronavigation of brain tumours: A review, offered a new taxonomy, classification system, and a historical perspective on the causes, measurement, and pre- and intra-operative compensation of this phenomenon. Here we present an updated review using the same taxonomy and framework, focused on the developments of intra-operative ultrasound-based brain shift research from 2015 to the present (2020). The review was performed using PubMed to identify articles since 2015 with the specific words and phrases: “Brain shift” AND “Ultrasound”. Since 2015, the rate of publication of intra-operative ultrasound based articles in the context of brain shift has increased from 2-3 per year to 8-10 per year. This efficient, low-cost technology and increasing comfort among clinicians and researchers have opened unique avenues of development. Since 2015, there has been a trend towards more mathematical advancement in the field, which is often validated on publicly available datasets from early intra-operative ultrasound research and may not fairly represent the intra-operative imaging landscape in modern image-guided neurosurgery. Vessel-based registration and virtual and augmented reality paradigms have gained traction, offering new perspectives to overcome some of the pitfalls of ultrasound based technologies. Unfortunately, clinical adoption and evaluation have not seen a comparable publication boost. Brain shift continues to be a highly prevalent pitfall in maintaining accuracy throughout oncologic neurosurgical intervention and remains an area of active research. Intra-operative ultrasound continues to show promise as an effective, efficient, and low-cost solution for intra-operative accuracy management. A major drawback of the current research landscape is that mathematical tool validation based on retrospective data outpaces prospective clinical evaluation, decreasing the strength of the evidence. Newer and more publicly available clinical datasets will be instrumental in more reliable validation of these methods, reflecting modern intra-operative imaging in these procedures.
Affiliation(s)
- Ian J Gerard
- Department of Radiation Oncology, McGill University Health Centre, Montreal, QC, Canada
- Jeffery A Hall
- Department of Neurology and Neurosurgery, McGill University, Montreal, QC, Canada
- Denis Sirhan
- Department of Neurology and Neurosurgery, McGill University, Montreal, QC, Canada
- D Louis Collins
- Department of Neurology and Neurosurgery, McGill University, Montreal, QC, Canada
11
Reinertsen I, Collins DL, Drouin S. The Essential Role of Open Data and Software for the Future of Ultrasound-Based Neuronavigation. Front Oncol 2021; 10:619274. [PMID: 33604299 PMCID: PMC7884817 DOI: 10.3389/fonc.2020.619274]
Abstract
With the recent developments in machine learning and modern graphics processing units (GPUs), there is a marked shift in the way intra-operative ultrasound (iUS) images can be processed and presented during surgery. Real-time processing of images to highlight important anatomical structures, combined with in-situ display, has the potential to greatly facilitate the acquisition and interpretation of iUS images when guiding an operation. In order to take full advantage of the recent advances in machine learning, large amounts of high-quality annotated training data are necessary to develop and validate the algorithms. To ensure efficient collection of a sufficient number of patient images and external validity of the models, training data should be collected at several centers by different neurosurgeons, and stored in a standard format directly compatible with the most commonly used machine learning toolkits and libraries. In this paper, we argue that such an effort to collect and organize large-scale multi-center datasets should be based on common open-source software and databases. We first describe the development of existing open-source ultrasound-based neuronavigation systems and how these systems have contributed to enhanced neurosurgical guidance over the last 15 years. We review the impact of the large number of projects worldwide that have benefited from the publicly available datasets "Brain Images of Tumors for Evaluation" (BITE) and "Retrospective evaluation of Cerebral Tumors" (RESECT), which include MR and US data from brain tumor cases. We also describe the need for continuous data collection and how this effort can be organized through the use of a well-adapted and user-friendly open-source software platform that integrates both continually improved guidance and automated data collection functionalities.
Affiliation(s)
- Ingerid Reinertsen
- Department of Health Research, SINTEF Digital, Trondheim, Norway
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- D Louis Collins
- NIST Laboratory, McConnell Brain Imaging Center, Montreal Neurological Institute and Hospital, McGill University, Montréal, QC, Canada
- Simon Drouin
- Laboratoire Multimédia, École de Technologie Supérieure, Montréal, QC, Canada
12
Navigated 3D Ultrasound in Brain Metastasis Surgery: Analyzing the Differences in Object Appearances in Ultrasound and Magnetic Resonance Imaging. Appl Sci (Basel) 2020. [DOI: 10.3390/app10217798]
Abstract
Background: Implementation of intraoperative 3D ultrasound (i3D US) into modern neuronavigational systems offers the possibility of live imaging and subsequent imaging updates. However, different modalities, image acquisition strategies, and timing of imaging influence object appearances. We analyzed the differences in object appearances in ultrasound (US) and magnetic resonance imaging (MRI) in 35 cases of brain metastasis, which were operated on in a multimodal navigational setup after intraoperative computed tomography-based (iCT) registration. Method: Registration accuracy was determined using the target registration error (TRE). Lesions segmented in preoperative magnetic resonance imaging (preMRI) and i3D US were compared, focusing on object size, location, and similarity. Results: The mean ± standard deviation (SD) of the TRE was 0.84 ± 0.36 mm. Objects were similar in size (mean ± SD in preMRI: 13.6 ± 16.0 cm3 vs. i3D US: 13.5 ± 16.0 cm3). The Dice coefficient was 0.68 ± 0.22 (mean ± SD), the Hausdorff distance 8.1 ± 2.9 mm (mean ± SD), and the Euclidean distance of the centers of gravity 3.7 ± 2.5 mm (mean ± SD). Conclusion: i3D US clearly delineates tumor boundaries and allows live updating of imaging to compensate for brain shift, which can already be detected to a significant extent before dural opening.
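The object-similarity metrics reported above (Dice coefficient and Euclidean distance between the centers of gravity of the MRI and US segmentations) can be sketched in a few lines of numpy; the function names and the toy masks are illustrative, not taken from the paper:

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice overlap between two binary volumes: 2|A∩B| / (|A|+|B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def centroid_distance(a, b, spacing=(1.0, 1.0, 1.0)):
    """Euclidean distance (in mm, given voxel spacing) between the
    centers of gravity of two binary masks."""
    ca = np.array(np.nonzero(a)).mean(axis=1) * spacing
    cb = np.array(np.nonzero(b)).mean(axis=1) * spacing
    return float(np.linalg.norm(ca - cb))

# Toy example: two 10x10x10 boxes, one shifted by 2 voxels along the first axis.
mri = np.zeros((20, 20, 20), dtype=bool)
us = mri.copy()
mri[5:15, 5:15, 5:15] = True
us[7:17, 5:15, 5:15] = True
print(dice_coefficient(mri, us))   # 0.8
print(centroid_distance(mri, us))  # 2.0
```

With 1 mm isotropic spacing the centroid distance is directly comparable to the 3.7 ± 2.5 mm figure reported in the abstract.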
13
Canalini L, Klein J, Miller D, Kikinis R. Enhanced registration of ultrasound volumes by segmentation of resection cavity in neurosurgical procedures. Int J Comput Assist Radiol Surg 2020; 15:1963-1974. [PMID: 33029677 PMCID: PMC7671994 DOI: 10.1007/s11548-020-02273-1]
Abstract
PURPOSE Neurosurgeons can gain a better understanding of surgical procedures by comparing ultrasound images obtained at different phases of the tumor resection. However, establishing a direct mapping between subsequent acquisitions is challenging due to the anatomical changes happening during surgery. We propose here a method to improve the registration of ultrasound volumes by excluding the resection cavity from the registration process. METHODS The first step of our approach is the automatic segmentation of the resection cavities in ultrasound volumes acquired during and after resection, using a convolutional neural network inspired by the 3D U-Net. Then, subsequent ultrasound volumes are registered while excluding the contribution of the resection cavity. RESULTS Regarding the segmentation of the resection cavity, the proposed method achieved a mean Dice index of 0.84 on 27 volumes. Concerning the registration of the subsequent ultrasound acquisitions, we reduced the mean target registration error (mTRE) of the volumes acquired before and during resection from 3.49 to 1.22 mm. For the set of volumes acquired before and after removal, the mTRE improved from 3.55 to 1.21 mm. CONCLUSIONS We proposed a registration algorithm to compensate for the brain shift affecting ultrasound volumes obtained at subsequent phases of neurosurgical procedures. To the best of our knowledge, our method is the first to exclude automatically segmented resection cavities from the registration of ultrasound volumes in neurosurgery.
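The mTRE values quoted above are landmark-based: the mean Euclidean distance between corresponding anatomical landmarks in the two volumes after applying the estimated registration. A minimal sketch of that score (the helper name and the identity-transform default are illustrative, not from the paper):

```python
import numpy as np

def mean_tre(fixed_landmarks, moving_landmarks, transform=lambda p: p):
    """Mean target registration error: average Euclidean distance (in mm)
    between corresponding landmark pairs after mapping the moving
    landmarks through the registration transform."""
    moving = np.asarray(moving_landmarks, dtype=float)
    moved = np.apply_along_axis(transform, 1, moving)  # transform each landmark
    dists = np.linalg.norm(np.asarray(fixed_landmarks, dtype=float) - moved, axis=1)
    return float(dists.mean())

# Toy example: every landmark carries a residual 3 mm shift along x.
fixed = np.array([[10., 20., 30.], [40., 50., 60.], [70., 80., 90.]])
moving = fixed + np.array([3., 0., 0.])
print(mean_tre(fixed, moving))                                   # 3.0 (no correction)
print(mean_tre(fixed, moving, lambda p: p - np.array([3., 0., 0.])))  # 0.0 (perfect correction)
```

A registration that lowers this score from 3.49 to 1.22 mm, as reported, leaves an average residual misalignment of about a millimeter at the landmarks.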
Affiliation(s)
- Luca Canalini
- Fraunhofer MEVIS, Institute for Digital Medicine, Bremen, Germany.
- Medical Imaging Computing, University of Bremen, Bremen, Germany.
- Jan Klein
- Fraunhofer MEVIS, Institute for Digital Medicine, Bremen, Germany
- Dorothea Miller
- Department of Neurosurgery, University Hospital Knappschaftskrankenhaus, Bochum, Germany
- Ron Kikinis
- Surgical Planning Laboratory, Brigham and Women's Hospital, Harvard Medical School, Boston, USA
14
Carton FX, Chabanas M, Le Lann F, Noble JH. Automatic segmentation of brain tumor resections in intraoperative ultrasound images using U-Net. J Med Imaging (Bellingham) 2020; 7:031503. [PMID: 32090137 PMCID: PMC7026519 DOI: 10.1117/1.jmi.7.3.031503]
Abstract
To compensate for the intraoperative brain tissue deformation, computer-assisted intervention methods have been used to register preoperative magnetic resonance images with intraoperative images. In order to model the deformation due to tissue resection, the resection cavity needs to be segmented in intraoperative images. We present an automatic method to segment the resection cavity in intraoperative ultrasound (iUS) images. We trained and evaluated two-dimensional (2-D) and three-dimensional (3-D) U-Net networks on two datasets of 37 and 13 cases that contain images acquired from different ultrasound systems. The best-performing method overall was the 3-D network, which achieved a mean Dice score of 0.72 and a median of 0.88 over the whole dataset. The 2-D network also produced good results at lower computation time, with a median Dice score over 0.8. We also evaluated the sensitivity of network performance to training and testing with images from different ultrasound systems and image fields of view. In this application, we found specialized networks to be more accurate for processing similar images than a general network trained with all the data. Overall, promising results were obtained for both datasets using specialized networks. This motivates further studies with additional clinical data, to enable training and validation of a clinically viable deep-learning model for automated delineation of the tumor resection cavity in iUS images.
Affiliation(s)
- François-Xavier Carton
- University of Grenoble Alpes, CNRS, Grenoble INP, TIMC-IMAG, Grenoble, France
- Vanderbilt University, Department of Electrical Engineering and Computer Science, Nashville, Tennessee, United States
- Matthieu Chabanas
- University of Grenoble Alpes, CNRS, Grenoble INP, TIMC-IMAG, Grenoble, France
- Vanderbilt University, Department of Electrical Engineering and Computer Science, Nashville, Tennessee, United States
- Florian Le Lann
- Grenoble Alpes University Hospital, Department of Neurosurgery, Grenoble, France
- Jack H. Noble
- Vanderbilt University, Department of Electrical Engineering and Computer Science, Nashville, Tennessee, United States