1. Jaspers TJM, Boers TGW, Kusters CHJ, Jong MR, Jukema JB, de Groof AJ, Bergman JJ, de With PHN, van der Sommen F. Robustness evaluation of deep neural networks for endoscopic image analysis: Insights and strategies. Med Image Anal 2024; 94:103157. PMID: 38574544. DOI: 10.1016/j.media.2024.103157.
Abstract
Computer-aided detection and diagnosis systems (CADe/CADx) in endoscopy are commonly trained on high-quality imagery, which is not representative of the heterogeneous input typically encountered in clinical practice. In endoscopy, image quality heavily depends on both the skills and experience of the endoscopist and the specifications of the system used for screening. Factors such as poor illumination, motion blur, and specific post-processing settings can significantly alter the quality and general appearance of these images. The impact of this so-called domain gap, between the data used to develop a system and the data it encounters after deployment, on the performance of the deep neural networks (DNNs) underlying endoscopic CAD systems remains largely unexplored. As many such systems, e.g. for polyp detection, are already being rolled out in clinical practice, this poses severe patient risks, particularly in community hospitals, where both the imaging equipment and operator experience are subject to considerable variation. Therefore, this study aims to evaluate the impact of this domain gap on the clinical performance of CADe/CADx for various endoscopic applications. For this, we leverage two publicly available data sets (KVASIR-SEG and GIANA) and two in-house data sets. We investigate the performance of commonly used DNN architectures under synthetic, clinically calibrated image degradations and on a prospectively collected dataset of 342 endoscopic images of lower subjective quality. Additionally, we assess the influence of DNN architecture and complexity, data augmentation, and pretraining techniques on robustness. The results reveal a considerable decline in performance of 11.6% (±1.5), compared to the reference, within the clinically calibrated boundaries of image degradations. Nevertheless, employing more advanced DNN architectures and self-supervised in-domain pre-training effectively mitigates this drop to 7.7% (±2.03).
Additionally, these enhancements yield the highest performance on the manually collected test set including images with lower subjective quality. By comprehensively assessing the robustness of popular DNN architectures and training strategies across multiple datasets, this study provides valuable insights into their performance and limitations for endoscopic applications. The findings highlight the importance of including robustness evaluation when developing DNNs for endoscopy applications and propose strategies to mitigate performance loss.
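The evaluation protocol described above, measuring the relative performance decline under synthetic image degradations, can be sketched as follows. The degradation parameters and the relative-drop metric here are illustrative assumptions, not the paper's clinically calibrated values:

```python
import numpy as np

def degrade(image, gain=0.7, noise_sigma=0.05, seed=0):
    """Apply simple synthetic degradations to a float image in [0, 1]:
    reduced illumination/contrast plus additive sensor noise."""
    rng = np.random.default_rng(seed)
    out = image * gain + rng.normal(0.0, noise_sigma, image.shape)
    return np.clip(out, 0.0, 1.0)

def relative_drop(score_clean, score_degraded):
    """Performance decline (%) relative to the clean-image reference score."""
    return 100.0 * (score_clean - score_degraded) / score_clean

# Example: a score of 0.86 on clean images falling to 0.76 on degraded
# ones corresponds to a relative drop of about 11.6%.
```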
Affiliation(s)
- Tim J M Jaspers
- Department of Electrical Engineering, Video Coding & Architectures, Eindhoven University of Technology, Eindhoven, The Netherlands
- Tim G W Boers
- Department of Electrical Engineering, Video Coding & Architectures, Eindhoven University of Technology, Eindhoven, The Netherlands
- Carolus H J Kusters
- Department of Electrical Engineering, Video Coding & Architectures, Eindhoven University of Technology, Eindhoven, The Netherlands
- Martijn R Jong
- Department of Gastroenterology and Hepatology, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, The Netherlands
- Jelmer B Jukema
- Department of Gastroenterology and Hepatology, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, The Netherlands
- Albert J de Groof
- Department of Gastroenterology and Hepatology, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, The Netherlands
- Jacques J Bergman
- Department of Gastroenterology and Hepatology, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, The Netherlands
- Peter H N de With
- Department of Electrical Engineering, Video Coding & Architectures, Eindhoven University of Technology, Eindhoven, The Netherlands
- Fons van der Sommen
- Department of Electrical Engineering, Video Coding & Architectures, Eindhoven University of Technology, Eindhoven, The Netherlands
2. Bakker FHA, de Nijs JV, Jaspers TJM, de With PHN, Beulens AJW, van der Poel H, van der Sommen F, Brinkman WM. Estimating Surgical Urethral Length on Intraoperative Robot-Assisted Prostatectomy Images using Artificial Intelligence Anatomy Recognition. J Endourol 2024. PMID: 38613819. DOI: 10.1089/end.2023.0697.
Abstract
Objective: To construct a convolutional neural network (CNN) model that can recognize and delineate anatomic structures on intraoperative video frames of robot-assisted radical prostatectomy (RARP) and to use these annotations to predict the surgical urethral length (SUL). Background: Urethral dissection during RARP impacts patient urinary incontinence (UI) outcomes and requires extensive training. Large differences exist between the incontinence outcomes of different urologists and hospitals, and surgeon experience and education are critical for optimal outcomes; therefore, new approaches are warranted. SUL is associated with UI. Artificial intelligence (AI) surgical image segmentation using a CNN could automate SUL estimation and contribute towards future AI-assisted RARP and surgeon guidance. Methods: Eighty-eight intraoperative RARP videos recorded between June 2009 and September 2014 were collected from a single center, and 264 frames were annotated for four classes: prostate, urethra, ligated plexus, and catheter. Thirty annotated images from different RARP videos were used as a test dataset. The Dice coefficient (DSC) and 95th percentile Hausdorff distance (Hd95) were used to determine model performance. SUL was calculated using the catheter as a reference. Results: The DSC values of the best-performing model were 0.735 and 0.755 for the catheter and urethra classes, respectively, with Hd95 values of 29.27 and 72.62, respectively. The model performed moderately on the ligated plexus and prostate classes. The predicted SUL showed a mean difference of 0.64-1.86 mm versus human annotators, but with substantial deviation (SD 3.28-3.56). Conclusion: This study shows that an AI image segmentation model can predict vital structures during RARP urethral dissection with moderate to fair accuracy. The SUL estimation derived from it showed large deviations and outliers compared to human annotators, but with a very small mean difference (<2 mm).
This is a promising development for further research on AI-assisted RARP. Keywords: Prostate cancer, anatomy recognition, artificial intelligence, continence, urethral length.
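The Dice coefficient (DSC) used above to score the segmentation model is a standard overlap measure; a minimal implementation for binary masks (array encoding assumed for illustration):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), defined as 1.0 when both masks are empty."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```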
Affiliation(s)
- Joris V de Nijs
- Eindhoven University of Technology, Electrical Engineering, Eindhoven, Noord-Brabant, Netherlands
- Tim J M Jaspers
- Eindhoven University of Technology, Electrical Engineering, Eindhoven, Noord-Brabant, Netherlands
- Peter H N de With
- Eindhoven University of Technology, Electrical Engineering, Eindhoven, Noord-Brabant, Netherlands
- Henk van der Poel
- Antoni van Leeuwenhoek, Urology, Amsterdam, Noord-Holland, Netherlands
- Fons van der Sommen
- Eindhoven University of Technology, Electrical Engineering, Eindhoven, Noord-Brabant, Netherlands
- Willem M Brinkman
- Universitair Medisch Centrum Utrecht, Urology, Heidelberglaan 100, 3584 CG Utrecht, Netherlands
3. van der Zander QEW, Schreuder RM, Thijssen A, Kusters CHJ, Dehghani N, Scheeve T, Winkens B, van der Ende-van Loon MCM, de With PHN, van der Sommen F, Masclee AAM, Schoon EJ. Artificial intelligence for characterization of diminutive colorectal polyps: A feasibility study comparing two computer-aided diagnosis systems. Artif Intell Gastrointest Endosc 2024; 5:90574. DOI: 10.37126/aige.v5.i1.90574.
Abstract
BACKGROUND Artificial intelligence (AI) has potential in the optical diagnosis of colorectal polyps.
AIM To evaluate the feasibility of real-time use of the computer-aided diagnosis system (CADx) AI for ColoRectal Polyps (AI4CRP) for the optical diagnosis of diminutive colorectal polyps and to compare its performance with CAD EYE (Fujifilm, Tokyo, Japan). The influence of CADx on the optical diagnosis of an expert endoscopist was also investigated.
METHODS AI4CRP was developed in-house, whereas CAD EYE is proprietary software provided by Fujifilm; both CADx systems exploit convolutional neural networks. Colorectal polyps were characterized as benign or premalignant, with histopathology as the gold standard. AI4CRP provided an objective assessment of its characterization by presenting a calibrated confidence value (range 0.0-1.0). A predefined cut-off value of 0.6 was set, with values < 0.6 indicating benign and values ≥ 0.6 indicating premalignant colorectal polyps. Low-confidence characterizations were defined as values within 40% around the cut-off value of 0.6 (between 0.36 and 0.76). The diagnostic performance of self-critical AI4CRP excluded these low-confidence characterizations.
RESULTS AI4CRP use was feasible; it was applied in 30 patients with 51 colorectal polyps. Self-critical AI4CRP, which excluded 14 low-confidence characterizations [27.5% (14/51)], had a diagnostic accuracy of 89.2%, sensitivity of 89.7%, and specificity of 87.5%, all higher than standard AI4CRP. CAD EYE had an 83.7% diagnostic accuracy, 74.2% sensitivity, and 100.0% specificity. The diagnostic performance of the endoscopist alone (before AI) increased non-significantly after reviewing the CADx characterizations of both AI4CRP and CAD EYE (AI-assisted endoscopist). The AI-assisted endoscopist outperformed both CADx systems, except on specificity, for which CAD EYE performed best.
CONCLUSION Real-time use of AI4CRP was feasible. The objective confidence values provided by a CADx are novel, and self-critical AI4CRP showed higher diagnostic performance than standard AI4CRP.
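The self-critical decision rule the abstract describes can be sketched as a simple thresholding step. The band bounds follow the abstract's figures (0.36 and 0.76 around the 0.6 cut-off); treat the exact boundary handling as an assumption:

```python
def characterize(confidence, cutoff=0.6, band=(0.36, 0.76)):
    """Return (label, is_low_confidence) for a calibrated confidence value
    in [0, 1]. Values inside the band around the cut-off are flagged so a
    'self-critical' system can abstain from characterizing them."""
    label = "premalignant" if confidence >= cutoff else "benign"
    low = band[0] <= confidence <= band[1]
    return label, low
```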
Affiliation(s)
- Quirine Eunice Wennie van der Zander
- Department of Gastroenterology and Hepatology, Maastricht University Medical Center, Maastricht 6202 AZ, Netherlands
- GROW, School for Oncology and Reproduction, Maastricht University, Maastricht 6200 MD, Netherlands
- Ramon M Schreuder
- Division of Gastroenterology and Hepatology, Catharina Hospital Eindhoven, Eindhoven 5602 ZA, Netherlands
- Ayla Thijssen
- Department of Gastroenterology and Hepatology, Maastricht University Medical Center, Maastricht 6202 AZ, Netherlands
- GROW, School for Oncology and Reproduction, Maastricht University, Maastricht 6200 MD, Netherlands
- Carolus H J Kusters
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven 5600 MB, Netherlands
- Nikoo Dehghani
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven 5600 MB, Netherlands
- Thom Scheeve
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven 5600 MB, Netherlands
- Bjorn Winkens
- Department of Methodology and Statistics, Maastricht University, Postbus 616, 6200 MD Maastricht, Netherlands
- School for Public Health and Primary Care, Maastricht University, Maastricht 6200 MD, Netherlands
- Peter H N de With
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven 5600 MB, Netherlands
- Fons van der Sommen
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven 5600 MB, Netherlands
- Ad A M Masclee
- Department of Gastroenterology and Hepatology, Maastricht University Medical Center, Maastricht 6202 AZ, Netherlands
- Erik J Schoon
- GROW, School for Oncology and Reproduction, Maastricht University, Maastricht 6200 MD, Netherlands
- Division of Gastroenterology and Hepatology, Catharina Hospital Eindhoven, Eindhoven 5602 ZA, Netherlands
4. Litjens G, Broekmans JPEA, Boers T, Caballo M, van den Hurk MHF, Ozdemir D, van Schaik CJ, Janse MHA, van Geenen EJM, van Laarhoven CJHM, Prokop M, de With PHN, van der Sommen F, Hermans JJ. Computed Tomography-Based Radiomics Using Tumor and Vessel Features to Assess Resectability in Cancer of the Pancreatic Head. Diagnostics (Basel) 2023; 13:3198. PMID: 37892019. PMCID: PMC10606005. DOI: 10.3390/diagnostics13203198.
Abstract
The preoperative prediction of the resectability of pancreatic ductal adenocarcinoma (PDAC) is challenging. This retrospective single-center study examined tumor and vessel radiomics to predict the resectability of PDAC in chemo-naïve patients. The tumor and adjacent arteries and veins were segmented in the portal-venous phase of contrast-enhanced CT scans, and radiomic features were extracted. Features were selected via stability testing, collinearity testing, and application of the least absolute shrinkage and selection operator (LASSO). Three models, using tumor features, vessel features, and a combination of both, were trained on the training set (N = 86) to predict resectability. The results were validated with the test set (N = 15) and compared to the performance of the multidisciplinary team (MDT). The vessel-features-only model performed best, with an AUC of 0.92 and sensitivity and specificity of 97% and 73%, respectively. Test set validation showed a sensitivity and specificity of 100% and 88%, respectively. The combined model was as good as the vessel model (AUC = 0.91), whereas the tumor model showed poor performance (AUC = 0.76). The MDT's prediction reached a sensitivity and specificity of 97% and 84% for the training set and 88% and 100% for the test set, respectively. Our clinician-independent vessel-based radiomics model can aid in predicting resectability and shows performance comparable to that of the MDT. With these encouraging results, improved, automated, and generalizable models can be developed that reduce workload and can be applied in non-expert hospitals.
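One step of the feature-selection pipeline, the collinearity test, can be illustrated with a greedy correlation filter. The threshold and greedy keep order are illustrative assumptions; the study's exact criteria are not reproduced here:

```python
import numpy as np

def drop_collinear(X, names, thresh=0.9):
    """Greedily keep features (columns of X) whose absolute Pearson
    correlation with every already-kept feature stays below `thresh`."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    kept = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < thresh for k in kept):
            kept.append(j)
    return [names[j] for j in kept]
```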
Affiliation(s)
- Geke Litjens
- Department of Medical Imaging, Radboud Institute for Health Sciences, Radboud University Medical Center, 6525 GA Nijmegen, The Netherlands
- Joris P. E. A. Broekmans
- Department of Electrical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Tim Boers
- Department of Electrical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Marco Caballo
- Department of Medical Imaging, Radboud Institute for Health Sciences, Radboud University Medical Center, 6525 GA Nijmegen, The Netherlands
- Maud H. F. van den Hurk
- Department of Plastic and Reconstructive Surgery, Saint Vincent’s University Hospital, D04 T6F4 Dublin, Ireland
- Dilek Ozdemir
- Department of Medical Imaging, Radboud Institute for Health Sciences, Radboud University Medical Center, 6525 GA Nijmegen, The Netherlands
- Caroline J. van Schaik
- Department of Medical Imaging, Radboud Institute for Health Sciences, Radboud University Medical Center, 6525 GA Nijmegen, The Netherlands
- Markus H. A. Janse
- Image Sciences Institute, University Medical Center Utrecht, 3584 CX Utrecht, The Netherlands
- Erwin J. M. van Geenen
- Department of Gastroenterology and Hepatology, Radboud Institute for Molecular Life Sciences, Radboud University Medical Center, 6525 GA Nijmegen, The Netherlands
- Cees J. H. M. van Laarhoven
- Department of Surgery, Radboud Institute for Health Sciences, Radboud University Medical Center, 6525 GA Nijmegen, The Netherlands
- Mathias Prokop
- Department of Medical Imaging, Radboud Institute for Health Sciences, Radboud University Medical Center, 6525 GA Nijmegen, The Netherlands
- Peter H. N. de With
- Department of Electrical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Fons van der Sommen
- Department of Electrical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- John J. Hermans
- Department of Medical Imaging, Radboud Institute for Health Sciences, Radboud University Medical Center, 6525 GA Nijmegen, The Netherlands
5. van der Laan JJH, van der Putten JA, Zhao X, Karrenbeld A, Peters FTM, Westerhof J, de With PHN, van der Sommen F, Nagengast WB. Optical Biopsy of Dysplasia in Barrett's Oesophagus Assisted by Artificial Intelligence. Cancers (Basel) 2023; 15:1950. PMID: 37046611. PMCID: PMC10093622. DOI: 10.3390/cancers15071950.
Abstract
Optical biopsy in Barrett's oesophagus (BE) using endocytoscopy (EC) could optimize endoscopic screening. However, the identification of dysplasia is challenging due to the complex interpretation of the highly detailed images. Therefore, we assessed whether using artificial intelligence (AI) as a second assessor could help gastroenterologists interpret endocytoscopic BE images. First, we prospectively videotaped 52 BE patients with EC. Then we trained and tested the AI on distinct datasets drawn from 83,277 frames, developed an endocytoscopic BE classification system, and designed online training and testing modules. We invited two successive cohorts for these online modules: 10 endoscopists to validate the classification system and 12 gastroenterologists to evaluate AI as a second assessor, providing six of them with the option to request AI assistance. Training the endoscopists in the classification system established an improved sensitivity of 90.0% (+32.67%, p < 0.001) and an accuracy of 77.67% (+13.0%, p = 0.020) compared with the baseline. However, these values deteriorated at follow-up (-16.67%, p < 0.001 and -8.0%, p = 0.009). In contrast, the AI-assisted gastroenterologists maintained high sensitivity and accuracy at follow-up, subsequently outperforming the unassisted gastroenterologists (+20.0%, p = 0.025 and +12.22%, p = 0.05). Thus, the best diagnostic scores for the identification of dysplasia emerged through human-machine collaboration between trained gastroenterologists and AI as the second assessor. Therefore, AI could support the clinical implementation of optical biopsies through EC.
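The sensitivity and accuracy changes reported above are standard confusion-matrix quantities; for reference, computed from raw counts:

```python
def sensitivity(tp, fn):
    """True-positive rate: share of dysplastic cases correctly identified."""
    return tp / (tp + fn)

def accuracy(tp, tn, fp, fn):
    """Share of all assessments that were correct."""
    return (tp + tn) / (tp + tn + fp + fn)
```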
Affiliation(s)
- Jouke J H van der Laan
- Department of Gastroenterology and Hepatology, University Medical Center Groningen, University of Groningen, 9713 GZ Groningen, The Netherlands
- Joost A van der Putten
- Department of Electrical Engineering, Video Coding and Architectures, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Xiaojuan Zhao
- Department of Gastroenterology and Hepatology, University Medical Center Groningen, University of Groningen, 9713 GZ Groningen, The Netherlands
- Arend Karrenbeld
- Department of Pathology, University Medical Center Groningen, University of Groningen, 9713 GZ Groningen, The Netherlands
- Frans T M Peters
- Department of Gastroenterology and Hepatology, University Medical Center Groningen, University of Groningen, 9713 GZ Groningen, The Netherlands
- Jessie Westerhof
- Department of Gastroenterology and Hepatology, University Medical Center Groningen, University of Groningen, 9713 GZ Groningen, The Netherlands
- Peter H N de With
- Department of Electrical Engineering, Video Coding and Architectures, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Fons van der Sommen
- Department of Electrical Engineering, Video Coding and Architectures, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Wouter B Nagengast
- Department of Gastroenterology and Hepatology, University Medical Center Groningen, University of Groningen, 9713 GZ Groningen, The Netherlands
6. Klomp SR, Wijnhoven RGJ, de With PHN. Performance-Efficiency Comparisons of Channel Attention Modules for ResNets. Neural Process Lett 2023. DOI: 10.1007/s11063-023-11161-z.
Abstract
Attention modules can be added to neural network architectures to improve performance. This work presents an extensive comparison of several efficient attention modules for image classification and object detection, in addition to proposing a novel Attention Bias module with lower computational overhead. All measured attention modules have been efficiently re-implemented, which allows an objective comparison and evaluation of the relationship between accuracy and inference time. Our measurements show that single-image inference time increases far more (5–50%) than the increase in FLOPs suggests (0.2–3%) for a limited gain in accuracy, making computation cost an important selection criterion. Despite this increase in inference time, adding an attention module can outperform a deeper baseline ResNet in both speed and accuracy. Finally, we investigate the potential of adding attention modules to pretrained networks and show that fine-tuning is possible and superior to training from scratch. The choice of the best attention module strongly depends on the specific ResNet architecture, input resolution, batch size, and inference framework.
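As a point of reference for what a channel attention module computes, here is a minimal NumPy sketch of squeeze-and-excitation-style gating. The paper's Attention Bias module is not reproduced here, and the weights `w1`/`w2` are placeholder parameters:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """SE-style channel attention on a (C, H, W) feature map:
    global average pool -> bottleneck MLP with ReLU -> sigmoid gates."""
    z = feat.mean(axis=(1, 2))            # squeeze: per-channel statistic, (C,)
    s = np.maximum(w1 @ z, 0.0)           # excitation, hidden bottleneck layer
    g = 1.0 / (1.0 + np.exp(-(w2 @ s)))   # per-channel gates in (0, 1)
    return feat * g[:, None, None]        # rescale each channel
```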
7. Lai M, Skyrman S, Kor F, Homan R, El-Hajj VG, Babic D, Edström E, Elmi-Terander A, Hendriks BHW, de With PHN. Development of a CT-Compatible, Anthropomorphic Skull and Brain Phantom for Neurosurgical Planning, Training, and Simulation. Bioengineering (Basel) 2022; 9:537. PMID: 36290503. PMCID: PMC9598361. DOI: 10.3390/bioengineering9100537.
Abstract
Background: Neurosurgical procedures are complex and require years of training and experience. Traditional training on human cadavers is expensive, requires facilities and planning, and raises ethical concerns; therefore, anthropomorphic phantoms could be an excellent substitute. The aim of this study was to design and develop a patient-specific 3D skull and brain model with realistic CT attenuation, suitable for conventional and augmented reality (AR)-navigated neurosurgical simulations. Methods: The radiodensities of the materials considered for the skull and brain phantoms were investigated using cone beam CT (CBCT) and compared to the radiodensities of the human skull and brain. The mechanical properties of the materials considered were tested in the laboratory and subsequently evaluated by clinically active neurosurgeons. Optimization of the phantom for the intended purposes was performed in a feedback cycle of tests and improvements. Results: The skull, including a complete representation of the nasal cavity and skull base, was 3D printed using polylactic acid with calcium carbonate. The brain was cast using a mixture of water and coolant, with 4 wt% polyvinyl alcohol and 0.1 wt% barium sulfate, in a mold obtained from segmentation of CBCT and T1-weighted MR images from a cadaver. The experiments revealed that the radiodensities of the skull and brain phantoms were 547 and 38 Hounsfield units (HU), respectively, as compared to values of around 1300 and 30 HU for real skull bone and brain tissue. As for the mechanical properties, the brain phantom exhibited an elasticity similar to that of real brain tissue. The phantom was subsequently evaluated by neurosurgeons in simulations of endonasal skull-base surgery, brain biopsies, and external ventricular drain (EVD) placement, and was found to fulfill the requirements of a surgical phantom.
Conclusions: A realistic and CT-compatible anthropomorphic head phantom was designed and successfully used for simulated augmented reality-led neurosurgical procedures. The anatomic details of the skull base and brain were realistically reproduced. This phantom can easily be manufactured and used for surgical training at a low cost.
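The Hounsfield values quoted above follow the standard CT scale, which expresses a material's linear attenuation coefficient relative to water; as a quick reference:

```python
def hounsfield(mu, mu_water):
    """Standard CT number: HU = 1000 * (mu - mu_water) / mu_water,
    so water maps to 0 HU and air (mu ≈ 0) to about -1000 HU."""
    return 1000.0 * (mu - mu_water) / mu_water
```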
Affiliation(s)
- Marco Lai
- Philips Research, High Tech Campus 34, 5656 Eindhoven, The Netherlands
- Department of Engineering, Eindhoven University of Technology (TU/e), 5612 Eindhoven, The Netherlands
| | - Simon Skyrman
- Department of Clinical Neuroscience, Karolinska Institutet, 17177 Stockholm, Sweden
- Department of Neurosurgery, Karolinska University Hospital, 17164 Stockholm, Sweden
| | - Flip Kor
- Department of Biomechanical Engineering, Delft University of Technology, Mekelweg 2, 2628 CD Delft, The Netherlands
| | | | - Victor Gabriel El-Hajj
- Department of Clinical Neuroscience, Karolinska Institutet, 17177 Stockholm, Sweden
- Department of Neurosurgery, Karolinska University Hospital, 17164 Stockholm, Sweden
| | - Drazenko Babic
- Philips Research, High Tech Campus 34, 5656 Eindhoven, The Netherlands
| | - Erik Edström
- Department of Clinical Neuroscience, Karolinska Institutet, 17177 Stockholm, Sweden
- Department of Neurosurgery, Karolinska University Hospital, 17164 Stockholm, Sweden
| | - Adrian Elmi-Terander
- Department of Clinical Neuroscience, Karolinska Institutet, 17177 Stockholm, Sweden
- Department of Neurosurgery, Karolinska University Hospital, 17164 Stockholm, Sweden
- Correspondence:
| | - Benno H. W. Hendriks
- Philips Research, High Tech Campus 34, 5656 Eindhoven, The Netherlands
- Department of Biomechanical Engineering, Delft University of Technology, Mekelweg 2, 2628 CD Delft, The Netherlands
| | - Peter H. N. de With
- Department of Engineering, Eindhoven University of Technology (TU/e), 5612 Eindhoven, The Netherlands
| |
8. Yang H, Shan C, Kolen AF, de With PHN. Medical instrument detection in ultrasound: a review. Artif Intell Rev 2022. DOI: 10.1007/s10462-022-10287-1.
Abstract
Medical instrument detection is essential for computer-assisted interventions, since it helps clinicians find instruments efficiently and interpret them better, thereby improving clinical outcomes. This article reviews image-based medical instrument detection methods for ultrasound-guided (US-guided) operations. Literature was selected based on an exhaustive search of different sources, including Google Scholar, PubMed, and Scopus. We first discuss the key clinical applications of medical instrument detection in US imaging, including the delivery of regional anesthesia, biopsy taking, prostate brachytherapy, and catheterization. Then, we present a comprehensive review of instrument detection methodologies, covering both non-machine-learning and machine-learning methods; the conventional non-machine-learning methods were extensively studied before the era of machine learning. The principal issues and potential research directions for future studies are summarized for the computer-assisted intervention community. In conclusion, although promising results have been obtained by current (non-)machine-learning methods for different clinical applications, thorough clinical validation is still required.
9. Zavala-Mondragon LA, Rongen P, Bescos JO, de With PHN, van der Sommen F. Noise Reduction in CT Using Learned Wavelet-Frame Shrinkage Networks. IEEE Trans Med Imaging 2022; 41:2048-2066. PMID: 35201984. DOI: 10.1109/tmi.2022.3154011.
Abstract
Encoding-decoding (ED) CNNs have demonstrated state-of-the-art performance for noise reduction over the past years. This has triggered the pursuit of a better understanding of the inner workings of such architectures, which has led to the theory of deep convolutional framelets (TDCF), revealing important links between signal processing and CNNs. Specifically, the TDCF demonstrates that ReLU CNNs induce low-rankness, since these models often do not satisfy the redundancy necessary to achieve perfect reconstruction (PR). In contrast, this paper explores CNNs that do meet the PR conditions. We demonstrate that in this type of CNN, soft shrinkage and PR can be assumed. Furthermore, based on our explorations, we propose the learned wavelet-frame shrinkage network, or LWFSN, and its residual counterpart, the rLWFSN. The ED path of the (r)LWFSN complies with the PR conditions, while the shrinkage stage is based on the linear expansion of thresholds proposed by Blu and Luisier. In addition, the LWFSN has only a fraction (<1%) of the training parameters of conventional CNNs, very small inference times, and a low memory footprint, while still achieving performance close to state-of-the-art alternatives such as the tight-frame (TF) U-Net and FBPConvNet in low-dose CT denoising.
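The two ingredients this paper builds on, a perfect-reconstruction (PR) wavelet transform and soft shrinkage, can be illustrated with a single-level Haar transform. This is a generic sketch, not the LWFSN's learned filters:

```python
import numpy as np

def haar_analysis(x):
    """Single-level orthonormal Haar decomposition of an even-length signal."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # approximation (low-pass)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # detail (high-pass)
    return a, d

def haar_synthesis(a, d):
    """Inverse transform; perfectly reconstructs the analysis input."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft_shrink(c, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

# Denoising sketch: shrink only the detail coefficients, then reconstruct.
```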
10. Lopes RR, Mamprin M, Zelis JM, Tonino PAL, van Mourik MS, Vis MM, Zinger S, de Mol BAJM, de With PHN, Marquering HA. Local and Distributed Machine Learning for Inter-hospital Data Utilization: An Application for TAVI Outcome Prediction. Front Cardiovasc Med 2021; 8:787246. PMID: 34869698. PMCID: PMC8632813. DOI: 10.3389/fcvm.2021.787246.
Abstract
Background: Machine learning (ML) models have been developed for numerous medical prognostic purposes. These models are commonly developed using data from single centers or regional registries. Including data from multiple centers improves the robustness and accuracy of prognostic models; however, data sharing between multiple centers is complex, mainly because of regulations and patient privacy issues. Objective: We aim to overcome data sharing impediments by using distributed ML and local learning followed by model integration. We applied these techniques to develop 1-year mortality estimation models for transcatheter aortic valve implantation (TAVI) with data from two centers, without sharing any data. Methods: A distributed ML technique and local learning followed by model integration were used to develop models to predict 1-year mortality after TAVI. We included two populations with 1,160 (center A) and 631 (center B) patients, and implemented five traditional ML algorithms. The results were compared to models created individually at each center. Results: The combined learning techniques outperformed the mono-center models. For center A, the combined local XGBoost achieved an AUC of 0.67 (compared to a mono-center AUC of 0.65), and for center B, a distributed neural network achieved an AUC of 0.68 (compared to a mono-center AUC of 0.64). Conclusion: This study shows that distributed ML and combined local model techniques can overcome data sharing limitations and result in more accurate models for TAVI mortality estimation. We have shown improved prognostic accuracy for both centers, and these techniques can also serve as an alternative when the amount of data available for creating prognostic models is limited.
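The model-integration step can be as simple as a sample-weighted average of per-center parameters. This is a FedAvg-style sketch under that assumption; the study's actual distributed-ML implementation is not specified here:

```python
import numpy as np

def integrate_models(weight_sets, n_samples):
    """Combine parameter vectors trained at different centers into one
    model by averaging, weighted by each center's number of patients."""
    w = np.asarray(n_samples, dtype=float)
    w /= w.sum()
    return sum(wi * np.asarray(p, dtype=float) for wi, p in zip(w, weight_sets))

# Example: with 1,160 patients at center A and 631 at center B, center A
# would contribute roughly 65% of the averaged parameters.
```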
Affiliation(s)
- Ricardo R Lopes
- Department of Biomedical Engineering and Physics, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, Netherlands; Department of Radiology and Nuclear Medicine, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, Netherlands
- Marco Mamprin
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- Jo M Zelis
- Department of Cardiology, Catharina Hospital, Eindhoven, Netherlands
- Pim A L Tonino
- Department of Cardiology, Catharina Hospital, Eindhoven, Netherlands
- Martijn S van Mourik
- Heart Centre, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, Netherlands
- Marije M Vis
- Heart Centre, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, Netherlands
- Svitlana Zinger
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- Bas A J M de Mol
- Heart Centre, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, Netherlands
- Peter H N de With
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- Henk A Marquering
- Department of Biomedical Engineering and Physics, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, Netherlands; Department of Radiology and Nuclear Medicine, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, Netherlands
11
van der Zander QEW, Schreuder RM, Fonollà R, Scheeve T, van der Sommen F, Winkens B, Aepli P, Hayee B, Pischel AB, Stefanovic M, Subramaniam S, Bhandari P, de With PHN, Masclee AAM, Schoon EJ. Optical diagnosis of colorectal polyp images using a newly developed computer-aided diagnosis system (CADx) compared with intuitive optical diagnosis. Endoscopy 2021; 53:1219-1226. [PMID: 33368056] [DOI: 10.1055/a-1343-1597]
Abstract
BACKGROUND Optical diagnosis of colorectal polyps remains challenging. Image-enhancement techniques such as narrow-band imaging and blue-light imaging (BLI) can improve optical diagnosis. We developed and prospectively validated a computer-aided diagnosis system (CADx) using high-definition white-light (HDWL) and BLI images, and compared the system with the optical diagnosis of expert and novice endoscopists. METHODS CADx characterized colorectal polyps by exploiting artificial neural networks. Six experts and 13 novices optically diagnosed 60 colorectal polyps based on intuition. After 4 weeks, the same set of images was permuted and optically diagnosed using the BLI Adenoma Serrated International Classification (BASIC). RESULTS CADx had a diagnostic accuracy of 88.3 % using HDWL images and 86.7 % using BLI images. The overall diagnostic accuracy combining HDWL and BLI (multimodal imaging) was 95.0 %, which was significantly higher than that of experts (81.7 %, P = 0.03) and novices (66.7 %, P < 0.001). Sensitivity was also higher for CADx (95.6 % vs. 61.1 % and 55.4 %), whereas specificity was higher for experts compared with CADx and novices (95.6 % vs. 93.3 % and 93.2 %). For endoscopists, diagnostic accuracy did not increase when using BASIC, either for experts (intuition 79.5 % vs. BASIC 81.7 %, P = 0.14) or for novices (intuition 66.7 % vs. BASIC 66.5 %, P = 0.95). CONCLUSION CADx had a significantly higher diagnostic accuracy than experts and novices for the optical diagnosis of colorectal polyps. Multimodal imaging, incorporating both HDWL and BLI, improved the diagnostic accuracy of CADx. BASIC did not increase the diagnostic accuracy of endoscopists compared with intuitive optical diagnosis.
Affiliation(s)
- Quirine E W van der Zander
- Division of Gastroenterology and Hepatology, Maastricht University Medical Center+, Maastricht, the Netherlands; GROW, School for Oncology and Developmental Biology, Maastricht University, Maastricht, the Netherlands
- Ramon M Schreuder
- Division of Gastroenterology and Hepatology, Catharina Hospital Eindhoven, Eindhoven, the Netherlands
- Roger Fonollà
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands
- Thom Scheeve
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands
- Fons van der Sommen
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands
- Bjorn Winkens
- Department of Methodology and Statistics, CAPHRI, Care and Public Health Research Institute, Maastricht University, Maastricht, the Netherlands
- Patrick Aepli
- Division of Gastroenterology and Hepatology, Luzerner Kantonsspital, Lucerne, Switzerland
- Bu'Hussain Hayee
- Division of Gastroenterology and Hepatology, King's College Hospital, London, United Kingdom
- Andreas B Pischel
- Division of Gastroenterology and Hepatology, University Hospital Gothenburg, Gothenburg, Sweden
- Milan Stefanovic
- Division of Gastroenterology and Hepatology, Diagnostični Center Bled, Ljubljana, Slovenia
- Sharmila Subramaniam
- Division of Gastroenterology and Hepatology, Queen Alexandra Hospital, Portsmouth, United Kingdom
- Pradeep Bhandari
- Division of Gastroenterology and Hepatology, Queen Alexandra Hospital, Portsmouth, United Kingdom
- Peter H N de With
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands
- Ad A M Masclee
- Division of Gastroenterology and Hepatology, Maastricht University Medical Center+, Maastricht, the Netherlands
- Erik J Schoon
- GROW, School for Oncology and Developmental Biology, Maastricht University, Maastricht, the Netherlands; Division of Gastroenterology and Hepatology, Catharina Hospital Eindhoven, Eindhoven, the Netherlands
12
Zavala-Mondragon LA, de With PHN, van der Sommen F. Image Noise Reduction Based on a Fixed Wavelet Frame and CNNs Applied to CT. IEEE Trans Image Process 2021; 30:9386-9401. [PMID: 34757905] [DOI: 10.1109/tip.2021.3125489]
Abstract
Radiation exposure in CT imaging leads to increased patient risk. This motivates the pursuit of reduced-dose scanning protocols, in which noise reduction processing is indispensable to warrant clinically acceptable image quality. Convolutional Neural Networks (CNNs) have received significant attention as an alternative to conventional noise reduction and are able to achieve state-of-the-art results. However, the internal signal processing in such networks is often unknown, leading to sub-optimal network architectures. The need for better signal preservation and more transparency motivates the use of Wavelet Shrinkage Networks (WSNs), in which the Encoding-Decoding (ED) path is the fixed wavelet frame known as the Overcomplete Haar Wavelet Transform (OHWT) and the noise reduction stage is data-driven. In this work, we considerably extend the WSN framework by focusing on three main improvements. First, we simplify the computation of the OHWT so that it can be easily reproduced. Second, we update the architecture of the shrinkage stage by further incorporating knowledge of conventional wavelet shrinkage methods. Finally, we extensively test its performance and generalization by comparing it with the RED and FBPConvNet CNNs. Our results show that the proposed architecture achieves similar performance to the reference in terms of MSSIM (0.667, 0.662 and 0.657 for DHSN2, FBPConvNet and RED, respectively) and achieves excellent quality when visualizing patches of clinically important structures. Furthermore, we demonstrate the enhanced generalization and further advantages of the signal flow by showing two additional potential applications, in which the new DHSN2 is used as a regularizer: (1) iterative reconstruction and (2) ground-truth-free training of the proposed noise reduction architecture. The presented results prove that the tight integration of signal processing and deep learning leads to simpler models with improved generalization.
13
Fonollà R, van der Zander QEW, Schreuder RM, Subramaniam S, Bhandari P, Masclee AAM, Schoon EJ, van der Sommen F, de With PHN. Automatic image and text-based description for colorectal polyps using BASIC classification. Artif Intell Med 2021; 121:102178. [PMID: 34763800] [DOI: 10.1016/j.artmed.2021.102178]
Abstract
Colorectal polyps (CRP) are precursor lesions of colorectal cancer (CRC). Correct identification of CRPs during in-vivo colonoscopy is supported by the endoscopist's expertise and medical classification models. A recently developed classification model is the Blue light imaging Adenoma Serrated International Classification (BASIC), which describes the differences between non-neoplastic and neoplastic lesions acquired with blue light imaging (BLI). Computer-aided detection (CADe) and diagnosis (CADx) systems are efficient at visually assisting with medical decisions but fall short at translating decisions into relevant clinical information. The communication between machine and medical expert is of crucial importance to improve diagnosis of CRP during in-vivo procedures. In this work, the combination of a polyp image classification model and a language model is proposed to develop a CADx system that automatically generates text comparable to the human language employed by endoscopists. The developed system generates sentences equivalent to the human reference and describes CRP images acquired with white light (WL), blue light imaging (BLI) and linked color imaging (LCI). An image feature encoder and a BERT module are employed to build the AI model, and an external test set is used to evaluate the results and compute the linguistic metrics. The experimental results show the construction of complete sentences with established metric scores of BLEU-1 = 0.67, ROUGE-L = 0.83 and METEOR = 0.50. The developed CADx system for automatic CRP image captioning facilitates future advances towards automatic reporting and may help reduce time-consuming histology assessment.
Affiliation(s)
- Roger Fonollà
- Department of Electrical Engineering, Video Coding and Architectures (VCA), Eindhoven University of Technology, Eindhoven, Noord-Brabant, the Netherlands
- Quirine E W van der Zander
- Division of Gastroenterology and Hepatology, Maastricht University Medical Center, Maastricht, the Netherlands; GROW, School for Oncology and Developmental Biology, Maastricht University, Maastricht, the Netherlands
- Ramon M Schreuder
- Department of Gastroenterology and Hepatology, Catharina Hospital, Eindhoven, Noord-Brabant, the Netherlands
- Sharmila Subramaniam
- Department of Gastroenterology, Portsmouth Hospitals University NHS Trust, Portsmouth, United Kingdom
- Pradeep Bhandari
- Department of Gastroenterology, Portsmouth Hospitals University NHS Trust, Portsmouth, United Kingdom
- Ad A M Masclee
- Division of Gastroenterology and Hepatology, Maastricht University Medical Center, Maastricht, the Netherlands; NUTRIM, School of Nutrition & Translational Research in Metabolism, Maastricht University, Maastricht, the Netherlands
- Erik J Schoon
- Department of Gastroenterology and Hepatology, Catharina Hospital, Eindhoven, Noord-Brabant, the Netherlands
- Fons van der Sommen
- Department of Electrical Engineering, Video Coding and Architectures (VCA), Eindhoven University of Technology, Eindhoven, Noord-Brabant, the Netherlands
- Peter H N de With
- Department of Electrical Engineering, Video Coding and Architectures (VCA), Eindhoven University of Technology, Eindhoven, Noord-Brabant, the Netherlands
14
Kusters KC, Zavala-Mondragon LA, Bescos JO, Rongen P, de With PHN, van der Sommen F. Conditional Generative Adversarial Networks for low-dose CT image denoising aiming at preservation of critical image content. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:2682-2687. [PMID: 34891804] [DOI: 10.1109/embc46164.2021.9629600]
Abstract
X-ray Computed Tomography (CT) is an imaging modality where patients are exposed to potentially harmful ionizing radiation. To limit patient risk, reduced-dose protocols are desirable, which inherently lead to an increased noise level in the reconstructed CT scans. Consequently, noise reduction algorithms are indispensable in the reconstruction processing chain. In this paper, we propose to leverage a conditional Generative Adversarial Network (cGAN) model to translate CT images from low to routine dose. However, when aiming to produce realistic images, such generative models may alter critical image content. Therefore, we propose to employ a frequency-based separation of the input prior to applying the cGAN model, in order to limit the cGAN to high-frequency bands, while leaving low-frequency bands untouched. The results of the proposed method are compared to a state-of-the-art model, both within the cGAN framework and in a single-network setting. The proposed method generates visually superior results compared to the single-network model and the cGAN model in terms of quality of texture and preservation of fine structural details. It also appeared that the PSNR, SSIM and TV metrics are less important than a careful visual evaluation of the results. The obtained results demonstrate the relevance of defining and separating the input image into desired and undesired content, rather than blindly denoising entire images. This study shows promising results for further investigation of generative models towards finding a reliable deep learning-based noise reduction algorithm for low-dose CT acquisition.
15
Yang H, Shan C, Bouwman A, Dekker LRC, Kolen AF, de With PHN. Medical Instrument Segmentation in 3D US by Hybrid Constrained Semi-Supervised Learning. IEEE J Biomed Health Inform 2021; 26:762-773. [PMID: 34347611] [DOI: 10.1109/jbhi.2021.3101872]
Abstract
Medical instrument segmentation in 3D ultrasound is essential for image-guided intervention. However, to train a successful deep neural network for instrument segmentation, a large number of labeled images is required, which is expensive and time-consuming to obtain. In this article, we propose a semi-supervised learning (SSL) framework for instrument segmentation in 3D US, which requires much less annotation effort than the existing methods. To achieve SSL, a Dual-UNet is proposed to segment the instrument. The Dual-UNet leverages unlabeled data using a novel hybrid loss function, consisting of uncertainty and contextual constraints. Specifically, the uncertainty constraints leverage the uncertainty estimation of the predictions of the UNet and therefore improve the unlabeled information for SSL training. In addition, contextual constraints exploit the contextual information of the training images, which is used as complementary information for voxel-wise uncertainty estimation. Extensive experiments on multiple ex-vivo and in-vivo datasets show that our proposed method achieves a Dice score of about 68.6%-69.1% with an inference time of about 1 second per volume. These results are better than the state-of-the-art SSL methods, and the inference time is comparable to the supervised approaches.
16
Sun Y, Hu J, Wang W, He M, de With PHN. Camera-based discomfort detection using multi-channel attention 3D-CNN for hospitalized infants. Quant Imaging Med Surg 2021; 11:3059-3069. [PMID: 34249635] [DOI: 10.21037/qims-20-1302]
Abstract
Background Detecting discomfort in infants is an important topic for their well-being and development. In this paper, we present an automatic and continuous video-based system for monitoring and detecting discomfort in infants. Methods The proposed system employs a novel and efficient 3D convolutional neural network (CNN), which achieves an end-to-end solution without the conventional face detection and tracking steps. In this study, we thoroughly investigate the video characteristics (e.g., intensity images and motion images) and CNN architectures (e.g., 2D and 3D) for infant discomfort detection. The realized improvements of the 3D-CNN are based on capturing both the motion and the facial expression information of the infants. Results The performance of the system is assessed using videos recorded from 24 hospitalized infants by visualizing receiver operating characteristic (ROC) curves and measuring the values of the area under the ROC curve (AUC). Additional performance metrics (labeling accuracy) are also calculated. Experimental results show that the proposed system achieves an AUC of 0.99, while the overall labeling accuracy is 0.98. Conclusions These results confirm the robustness of the 3D-CNN for infant discomfort monitoring, capturing both motion and facial expressions simultaneously.
Affiliation(s)
- Yue Sun
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Jingjing Hu
- Department of Electrical Engineering, Hunan University, Changsha, China
- Wenjin Wang
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Min He
- Department of Electrical Engineering, Hunan University, Changsha, China
- Peter H N de With
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
17
Mamprin M, Lopes RR, Zelis JM, Tonino PAL, van Mourik MS, Vis MM, Zinger S, de Mol BAJM, de With PHN. Machine Learning for Predicting Mortality in Transcatheter Aortic Valve Implantation: An Inter-Center Cross Validation Study. J Cardiovasc Dev Dis 2021; 8:65. [PMID: 34199892] [PMCID: PMC8227005] [DOI: 10.3390/jcdd8060065]
Abstract
Current prognostic risk scores for transcatheter aortic valve implantation (TAVI) do not yet benefit from modern machine learning techniques, which can improve risk stratification of one-year mortality of patients before TAVI. Despite the advancement of machine learning in healthcare, data sharing regulations are very strict and typically prevent exchanging patient data without the involvement of ethical committees. A robust validation approach, including 1300 and 631 patients per center, was performed to validate the machine learning model of one center at the other, external center with their data, in a mutual fashion. This was achieved without any data exchange, but solely by exchanging the models and the data processing pipelines. A dedicated exchange protocol was designed to evaluate and quantify the model's robustness on the data of the external center. Models developed with the larger dataset offered similar or higher prediction accuracy on the external validation. Logistic regression, random forest and CatBoost led to areas under the ROC curve of 0.65, 0.67 and 0.65 for the internal validation and of 0.62, 0.66 and 0.68 for the external validation, respectively. We propose a scalable exchange protocol which can be further extended to other TAVI centers, and more generally to any other clinical scenario that could benefit from this validation approach.
Affiliation(s)
- Marco Mamprin
- Department of Electrical Engineering, Eindhoven University of Technology, 5612 AE Eindhoven, The Netherlands
- Ricardo R. Lopes
- Department of Biomedical Engineering and Physics, Amsterdam UMC, University of Amsterdam, 1105 AZ Amsterdam, The Netherlands
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, University of Amsterdam, 1105 AZ Amsterdam, The Netherlands
- Jo M. Zelis
- Department of Cardiology, Catharina Hospital, 5623 EJ Eindhoven, The Netherlands
- Pim A. L. Tonino
- Department of Cardiology, Catharina Hospital, 5623 EJ Eindhoven, The Netherlands
- Martijn S. van Mourik
- Heart Centre, Amsterdam UMC, University of Amsterdam, 1105 AZ Amsterdam, The Netherlands
- Marije M. Vis
- Heart Centre, Amsterdam UMC, University of Amsterdam, 1105 AZ Amsterdam, The Netherlands
- Svitlana Zinger
- Department of Electrical Engineering, Eindhoven University of Technology, 5612 AE Eindhoven, The Netherlands
- Bas A. J. M. de Mol
- Heart Centre, Amsterdam UMC, University of Amsterdam, 1105 AZ Amsterdam, The Netherlands
- Peter H. N. de With
- Department of Electrical Engineering, Eindhoven University of Technology, 5612 AE Eindhoven, The Netherlands
18
Struyvenberg MR, de Groof AJ, Fonollà R, van der Sommen F, de With PHN, Schoon EJ, Weusten BLAM, Leggett CL, Kahn A, Trindade AJ, Ganguly EK, Konda VJA, Lightdale CJ, Pleskow DK, Sethi A, Smith MS, Wallace MB, Wolfsen HC, Tearney GJ, Meijer SL, Vieth M, Pouw RE, Curvers WL, Bergman JJ. Prospective development and validation of a volumetric laser endomicroscopy computer algorithm for detection of Barrett's neoplasia. Gastrointest Endosc 2021; 93:871-879. [PMID: 32735947] [DOI: 10.1016/j.gie.2020.07.052]
Abstract
BACKGROUND AND AIMS Volumetric laser endomicroscopy (VLE) is an advanced imaging modality used to detect Barrett's esophagus (BE) dysplasia. However, real-time interpretation of VLE scans is complex and time-consuming. Computer-aided detection (CAD) may help in the process of VLE image interpretation. Our aim was to train and validate a CAD algorithm for VLE-based detection of BE neoplasia. METHODS The multicenter VLE PREDICT study prospectively enrolled 47 patients with BE. In total, 229 nondysplastic BE and 89 neoplastic (high-grade dysplasia/esophageal adenocarcinoma) targets were laser marked under VLE guidance and subsequently underwent a biopsy for histologic diagnosis. Deep convolutional neural networks were used to construct a CAD algorithm for differentiation between nondysplastic and neoplastic BE tissue. The CAD algorithm was trained on a set consisting of the first 22 patients (134 nondysplastic BE and 38 neoplastic targets) and validated on a separate test set from patients 23 to 47 (95 nondysplastic BE and 51 neoplastic targets). The performance of the algorithm was benchmarked against the performance of 10 VLE experts. RESULTS Using the training set to construct the algorithm resulted in an accuracy of 92%, sensitivity of 95%, and specificity of 92%. When performance was assessed on the test set, accuracy, sensitivity, and specificity were 85%, 91%, and 82%, respectively. The algorithm outperformed all 10 VLE experts, who demonstrated an overall accuracy of 77%, sensitivity of 70%, and specificity of 81%. CONCLUSIONS We developed, validated, and benchmarked a VLE CAD algorithm for detection of BE neoplasia using prospectively collected and biopsy-correlated VLE targets. The algorithm detected neoplasia with high accuracy and outperformed 10 VLE experts. (Netherlands National Trials Registry (NTR) number: NTR 6728.)
Affiliation(s)
- Maarten R Struyvenberg
- Department of Gastroenterology and Hepatology, Amsterdam UMC, location AMC, Amsterdam, the Netherlands
- Albert J de Groof
- Department of Gastroenterology and Hepatology, Amsterdam UMC, location AMC, Amsterdam, the Netherlands
- Roger Fonollà
- Department of Electrical Engineering, VCA group, Eindhoven University of Technology, Eindhoven, the Netherlands
- Fons van der Sommen
- Department of Electrical Engineering, VCA group, Eindhoven University of Technology, Eindhoven, the Netherlands
- Peter H N de With
- Department of Electrical Engineering, VCA group, Eindhoven University of Technology, Eindhoven, the Netherlands
- Erik J Schoon
- Department of Gastroenterology and Hepatology, Catharina Hospital, Eindhoven, the Netherlands
- Bas L A M Weusten
- Department of Gastroenterology and Hepatology, St. Antonius Hospital, Nieuwegein, the Netherlands
- Cadman L Leggett
- Division of Gastroenterology and Hepatology, Mayo Clinic, Rochester, Minnesota, USA
- Allon Kahn
- Division of Gastroenterology and Hepatology, Mayo Clinic, Scottsdale, Arizona, USA
- Arvind J Trindade
- Division of Gastroenterology and Hepatology, Zucker School of Medicine at Hofstra/Northwell, Long Island Jewish Medical Center, New Hyde Park, New York, USA
- Eric K Ganguly
- Department of Gastroenterology and Hepatology, University of Vermont Medical Center, Burlington, Vermont, USA
- Vani J A Konda
- Department of Gastroenterology and Hepatology, Baylor University Medical Center at Dallas, Dallas, Texas, USA
- Charles J Lightdale
- Division of Gastroenterology and Hepatology, New York-Presbyterian Hospital, New York, New York, USA
- Douglas K Pleskow
- Department of Gastroenterology and Hepatology, Beth Israel Deaconess Medical Center, Boston, Massachusetts, USA
- Amrita Sethi
- Department of Gastroenterology and Hepatology, Columbia University Medical Center, New York, New York, USA
- Michael S Smith
- Division of Gastroenterology and Hepatology, Mount Sinai West & Mount Sinai St. Luke's Hospitals, New York, New York, USA
- Michael B Wallace
- Division of Gastroenterology and Hepatology, Mayo Clinic, Jacksonville, Florida, USA
- Herbert C Wolfsen
- Division of Gastroenterology and Hepatology, Mayo Clinic, Jacksonville, Florida, USA
- Gary J Tearney
- Department of Pathology, Wellman Center for Photomedicine, Massachusetts General Hospital, Boston, Massachusetts, USA
- Sybren L Meijer
- Department of Pathology, Amsterdam UMC, location AMC, Amsterdam, the Netherlands
- Michael Vieth
- Institute of Pathology, Bayreuth Clinic, Bayreuth, Germany
- Roos E Pouw
- Department of Gastroenterology and Hepatology, Amsterdam UMC, location AMC, Amsterdam, the Netherlands
- Wouter L Curvers
- Department of Gastroenterology and Hepatology, Catharina Hospital, Eindhoven, the Netherlands
- Jacques J Bergman
- Department of Gastroenterology and Hepatology, Amsterdam UMC, location AMC, Amsterdam, the Netherlands
19
Abstract
Ultrasound-guided procedures have been applied in many clinical therapies, such as cardiac catheterization and regional anesthesia. Medical instrument detection in 3D Ultrasound (US) is highly desired, but the existing approaches are far from real-time performance. Our objective is to investigate an efficient instrument detection method in 3D US for practical clinical use. We propose a novel Multi-dimensional Mixed Network for efficient instrument detection in 3D US, which extracts the discriminating features at 3D full-image level by a 3D encoder, and then applies a specially designed dimension reduction block to reduce the spatial complexity of the feature maps by projecting from 3D space into 2D space. A 2D decoder is adopted to detect the instrument along the specified axes. By projecting the predicted 2D outputs, the instrument is detected or visualized in the 3D volume. Furthermore, to enable the network to better learn the discriminative information, we propose a multi-level loss function to capture both pixel- and image-level differences. We carried out extensive experiments on two datasets for two tasks: (1) catheter detection for cardiac RF-ablation and (2) needle detection for regional anesthesia. Our experiments show that our proposed method achieves a detection error of 2-3 voxels with an efficiency of about 0.12 sec per 3D US volume. The proposed method is 3-8 times faster than the state-of-the-art methods, leading to real-time performance. The results show that our proposed method has significant clinical value for real-time 3D US-guided intervention.
20
Mamprin M, Zelis JM, Tonino PAL, Zinger S, de With PHN. Decision Trees for Predicting Mortality in Transcatheter Aortic Valve Implantation. Bioengineering (Basel) 2021; 8:22. [PMID: 33572063] [PMCID: PMC7915084] [DOI: 10.3390/bioengineering8020022]
Abstract
Current prognostic risk scores in cardiac surgery do not yet benefit from machine learning (ML). This research aims to create a machine learning model to predict one-year mortality of a patient after transcatheter aortic valve implantation (TAVI). We adopt a modern gradient boosting on decision trees (GBDT) classifier, specifically designed for categorical features. In combination with a recent technique for model interpretation, we developed a feature analysis and selection stage, enabling the identification of the most important features for the prediction. We base our prediction model on the most relevant features, after interpreting and discussing the feature analysis results with clinical experts. We validated our model on 270 consecutive TAVI cases, reaching a C-statistic of 0.83 with CI [0.82, 0.84]. The model achieved a positive predictive value ranging from 57% to 64%, suggesting that the patient selection made by the heart team of professionals can be further improved by taking into consideration the clinical data we identified as important and by exploiting ML approaches in the development of clinical risk scores. Our approach has also shown promising predictive potential with respect to widespread prognostic risk scores, such as the logistic European system for cardiac operative risk evaluation (EuroSCORE II) and the Society of Thoracic Surgeons (STS) risk score, which are broadly adopted by cardiologists worldwide.
Affiliation(s)
- Marco Mamprin, Department of Electrical Engineering, Eindhoven University of Technology, 5612 AJ Eindhoven, The Netherlands
- Jo M. Zelis, Department of Cardiology, Catharina Hospital, 5623 EJ Eindhoven, The Netherlands
- Pim A. L. Tonino, Department of Cardiology, Catharina Hospital, 5623 EJ Eindhoven, The Netherlands
- Sveta Zinger, Department of Electrical Engineering, Eindhoven University of Technology, 5612 AJ Eindhoven, The Netherlands
- Peter H. N. de With, Department of Electrical Engineering, Eindhoven University of Technology, 5612 AJ Eindhoven, The Netherlands
21
Manni F, Mamprin M, Holthuizen R, Shan C, Burström G, Elmi-Terander A, Edström E, Zinger S, de With PHN. Multi-view 3D skin feature recognition and localization for patient tracking in spinal surgery applications. Biomed Eng Online 2021; 20:6. [PMID: 33413426] [PMCID: PMC7792004] [DOI: 10.1186/s12938-020-00843-7]
Abstract
BACKGROUND Minimally invasive spine surgery is dependent on accurate navigation. Computer-assisted navigation is increasingly used in minimally invasive surgery (MIS), but current solutions require the use of reference markers in the surgical field for both patient and instrument tracking. PURPOSE To improve reliability and facilitate the clinical workflow, this study proposes a new marker-free tracking framework based on skin feature recognition. METHODS Maximally Stable Extremal Regions (MSER) and Speeded Up Robust Features (SURF) algorithms are applied for skin feature detection. The proposed tracking framework is based on a multi-camera setup for obtaining multi-view acquisitions of the surgical area. Features can then be accurately detected using MSER and SURF and afterward localized by triangulation. The triangulation error is used for assessing the localization quality in 3D. RESULTS The framework was tested on a cadaver dataset and in eight clinical cases. The detected features across the entire patient dataset had an overall triangulation error of 0.207 mm for MSER and 0.204 mm for SURF. The localization accuracy was compared to a system with conventional markers, serving as ground truth. An average accuracy of 0.627 mm and 0.622 mm was achieved for MSER and SURF, respectively. CONCLUSIONS This study demonstrates that skin feature localization for patient tracking in a surgical setting is feasible. The technology shows promising results in terms of detected features and localization accuracy. In the future, the framework may be further improved by exploiting extended feature processing using modern optical imaging techniques for clinical applications where patient tracking is crucial.
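The triangulation error used here as a 3D localization-quality measure can be illustrated with the classic two-ray midpoint method: the matched feature is placed halfway between the two camera rays at their closest approach, and the residual gap between the rays is the error. This numpy sketch is a generic illustration, not the paper's implementation; calibration and feature matching are assumed to have already produced the two rays.

```python
import numpy as np

def triangulate(o1, d1, o2, d2):
    """Midpoint triangulation of two camera rays o + t*d.
    Returns the midpoint of closest approach and the residual
    gap between the rays, usable as a triangulation error."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Least squares for ray parameters t1, t2 minimizing
    # |(o1 + t1*d1) - (o2 + t2*d2)|
    A = np.stack([d1, -d2], axis=1)
    t, *_ = np.linalg.lstsq(A, o2 - o1, rcond=None)
    p1 = o1 + t[0] * d1
    p2 = o2 + t[1] * d2
    return 0.5 * (p1 + p2), float(np.linalg.norm(p1 - p2))
```

For perfectly matched features the rays intersect and the error is zero; calibration and matching noise make the rays skew, and the sub-millimeter errors reported above indicate how small that residual is in practice.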
Affiliation(s)
- Francesca Manni, Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Marco Mamprin, Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Caifeng Shan, Shandong University of Science and Technology, Qingdao, China
- Gustav Burström, Department of Neurosurgery, Karolinska University Hospital and Department of Clinical Neuroscience, Karolinska Institute, Stockholm, Sweden
- Adrian Elmi-Terander, Department of Neurosurgery, Karolinska University Hospital and Department of Clinical Neuroscience, Karolinska Institute, Stockholm, Sweden
- Erik Edström, Department of Neurosurgery, Karolinska University Hospital and Department of Clinical Neuroscience, Karolinska Institute, Stockholm, Sweden
- Svitlana Zinger, Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Peter H N de With, Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
22
Struyvenberg MR, de Groof AJ, Bergman JJ, van der Sommen F, de With PHN, Konda VJA, Curvers WL. Advanced Imaging and Sampling in Barrett's Esophagus: Artificial Intelligence to the Rescue? Gastrointest Endosc Clin N Am 2021; 31:91-103. [PMID: 33213802] [DOI: 10.1016/j.giec.2020.08.006]
Abstract
Because the current Barrett's esophagus (BE) surveillance protocol suffers from sampling error of random biopsies and a high miss-rate of early neoplastic lesions, many new endoscopic imaging and sampling techniques have been developed. None of these techniques, however, have significantly increased the diagnostic yield of BE neoplasia. In fact, these techniques have led to an increase in the amount of visible information, yet endoscopists and pathologists inevitably suffer from variations in intra- and interobserver agreement. Artificial intelligence systems have the potential to overcome these endoscopist-dependent limitations.
Affiliation(s)
- Maarten R Struyvenberg, Department of Gastroenterology and Hepatology, Amsterdam UMC, University of Amsterdam, Meibergdreef 9, 1105 AZ Amsterdam, the Netherlands
- Albert J de Groof, Department of Gastroenterology and Hepatology, Amsterdam UMC, University of Amsterdam, Meibergdreef 9, 1105 AZ Amsterdam, the Netherlands
- Jacques J Bergman, Department of Gastroenterology and Hepatology, Amsterdam UMC, University of Amsterdam, Meibergdreef 9, 1105 AZ Amsterdam, the Netherlands
- Fons van der Sommen, Department of Electrical Engineering, VCA group, Eindhoven University of Technology, Groene Loper 19, 5612 AP Eindhoven, the Netherlands
- Peter H N de With, Department of Electrical Engineering, VCA group, Eindhoven University of Technology, Groene Loper 19, 5612 AP Eindhoven, the Netherlands
- Vani J A Konda, Department of Gastroenterology and Hepatology, Baylor University Medical Center, 3500 Gaston Ave, Dallas, TX 75246, USA
- Wouter L Curvers, Department of Gastroenterology and Hepatology, Catharina Hospital Eindhoven, Michelangelolaan 2, 5623 EJ Eindhoven, the Netherlands
23
Manni F, van der Sommen F, Fabelo H, Zinger S, Shan C, Edström E, Elmi-Terander A, Ortega S, Marrero Callicó G, de With PHN. Hyperspectral Imaging for Glioblastoma Surgery: Improving Tumor Identification Using a Deep Spectral-Spatial Approach. Sensors (Basel) 2020; 20:E6955. [PMID: 33291409] [PMCID: PMC7730670] [DOI: 10.3390/s20236955]
Abstract
The primary treatment for malignant brain tumors is surgical resection. While gross total resection improves the prognosis, a supratotal resection may result in neurological deficits. On the other hand, accurate intraoperative identification of the tumor boundaries may be very difficult, resulting in subtotal resections. Histological examination of biopsies can be used repeatedly to help achieve gross total resection but this is not practically feasible due to the turn-around time of the tissue analysis. Therefore, intraoperative techniques to recognize tissue types are investigated to expedite the clinical workflow for tumor resection and improve outcome by aiding in the identification and removal of the malignant lesion. Hyperspectral imaging (HSI) is an optical imaging technique with the power of extracting additional information from the imaged tissue. Because HSI images cannot be visually assessed by human observers, we instead exploit artificial intelligence techniques and leverage a Convolutional Neural Network (CNN) to investigate the potential of HSI in twelve in vivo specimens. The proposed framework consists of a 3D-2D hybrid CNN-based approach to create a joint extraction of spectral and spatial information from hyperspectral images. A comparison study was conducted exploiting a 2D CNN, a 1D DNN and two conventional classification methods (SVM, and the SVM classifier combined with the 3D-2D hybrid CNN) to validate the proposed network. An overall accuracy of 80% was found when tumor, healthy tissue and blood vessels were classified, clearly outperforming the state-of-the-art approaches. These results can serve as a basis for brain tumor classification using HSI, and may open future avenues for image-guided neurosurgical applications.
Affiliation(s)
- Francesca Manni, Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands
- Fons van der Sommen, Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands
- Himar Fabelo, Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), 35017 Las Palmas de Gran Canaria, Spain
- Svitlana Zinger, Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands
- Caifeng Shan, College of Electrical Engineering and Automation, Shandong University of Science and Technology, Qingdao 266590, China
- Erik Edström, Department of Neurosurgery, Karolinska University Hospital and Department of Clinical Neuroscience, Karolinska Institutet, SE-171 46 Stockholm, Sweden
- Adrian Elmi-Terander, Department of Neurosurgery, Karolinska University Hospital and Department of Clinical Neuroscience, Karolinska Institutet, SE-171 46 Stockholm, Sweden
- Samuel Ortega, Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), 35017 Las Palmas de Gran Canaria, Spain
- Gustavo Marrero Callicó, Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), 35017 Las Palmas de Gran Canaria, Spain
- Peter H. N. de With, Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands
24
Manni F, Fonolla R, van der Sommen F, Zinger S, Shan C, Kho E, de Koning SB, Ruers T, de With PHN. Hyperspectral imaging for colon cancer classification in surgical specimens: towards optical biopsy during image-guided surgery. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:1169-1173. [PMID: 33018195] [DOI: 10.1109/embc44109.2020.9176543]
Abstract
The main curative treatment for localized colon cancer is surgical resection. However, when tumor residuals are left behind, positive margins are found during histological examination and additional treatment is needed to inhibit recurrence. Hyperspectral imaging (HSI) can offer non-invasive surgical guidance with the potential of optimizing surgical effectiveness. In this paper, we investigate the capability of HSI for automated colon cancer detection in six ex-vivo specimens, employing a spectral-spatial patch-based classification approach. The results demonstrate the feasibility of assessing the benign and malignant boundaries of the lesion with a sensitivity of 0.88 and a specificity of 0.78. The results are compared with state-of-the-art deep learning-based approaches. The method with a new hybrid CNN outperforms the state-of-the-art approaches (AUC 0.82 vs. 0.74). This study paves the way for further investigation towards improving surgical outcomes with HSI.
25
Manni F, Elmi-Terander A, Burström G, Persson O, Edström E, Holthuizen R, Shan C, Zinger S, van der Sommen F, de With PHN. Towards Optical Imaging for Spine Tracking without Markers in Navigated Spine Surgery. Sensors (Basel) 2020; 20:E3641. [PMID: 32610555] [PMCID: PMC7374436] [DOI: 10.3390/s20133641]
Abstract
Surgical navigation systems are increasingly used for complex spine procedures to avoid neurovascular injuries and minimize the risk of reoperations. Accurate patient tracking is one of the prerequisites for optimal motion compensation and navigation. Most current optical tracking systems use dynamic reference frames (DRFs) attached to the spine for patient movement tracking. However, the spine itself is subject to intrinsic movements, which can impact the accuracy of the navigation system. In this study, we aimed to detect actual patient spine features in different image views captured by the optical cameras of an augmented reality surgical navigation (ARSN) system. Using optical images from open spinal surgery cases, acquired by two gray-scale cameras, spinal landmarks were identified and matched in different camera views. A computer vision framework was created for preprocessing of the spine images, and for detecting and matching local invariant image regions. We compared four feature detection algorithms, Speeded Up Robust Features (SURF), Maximally Stable Extremal Regions (MSER), Features from Accelerated Segment Test (FAST), and Oriented FAST and Rotated BRIEF (ORB), to elucidate the best approach. The framework was validated in 23 patients, and the 3D triangulation error of the matched features was <0.5 mm. Thus, the findings indicate that spine feature detection can be used for accurate tracking in navigated surgery.
Affiliation(s)
- Francesca Manni, Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands
- Adrian Elmi-Terander, Department of Clinical Neuroscience, Karolinska Institutet, and Department of Neurosurgery, Karolinska University Hospital, SE-171 46 Stockholm, Sweden
- Gustav Burström, Department of Clinical Neuroscience, Karolinska Institutet, and Department of Neurosurgery, Karolinska University Hospital, SE-171 46 Stockholm, Sweden
- Oscar Persson, Department of Clinical Neuroscience, Karolinska Institutet, and Department of Neurosurgery, Karolinska University Hospital, SE-171 46 Stockholm, Sweden
- Erik Edström, Department of Clinical Neuroscience, Karolinska Institutet, and Department of Neurosurgery, Karolinska University Hospital, SE-171 46 Stockholm, Sweden
- Caifeng Shan, Philips Research, High Tech Campus 36, 5656 AE Eindhoven, The Netherlands
- Svitlana Zinger, Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands
- Fons van der Sommen, Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands
- Peter H. N. de With, Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands
26
Sun Y, de With PHN, Kommers D, Wang W, Joshi R, Shan C, Tan T, Aarts RM, van Pul C, Andriessen P. Automatic and Continuous Discomfort Detection for Premature Infants in a NICU Using Video-Based Motion Analysis. Annu Int Conf IEEE Eng Med Biol Soc 2019; 2019:5995-5999. [PMID: 31947213] [DOI: 10.1109/embc.2019.8857597]
Abstract
Frequent pain and discomfort in premature infants can lead to long-term adverse neurodevelopmental outcomes. Video-based monitoring is considered a promising contactless method for identification of discomfort moments. In this study, we propose a video-based method for automated detection of infant discomfort. The method is based on analyzing facial and body motion. To this end, motion trajectories are estimated from frame to frame using optical flow. For each video segment, we further calculate the motion acceleration rate and extract 18 time- and frequency-domain features characterizing motion patterns. A support vector machine (SVM) classifier is then applied to video sequences to recognize the infant's status of comfort or discomfort. The method is evaluated using 183 video segments from 11 infants during 17 heel-prick events. Experimental results show an AUC of 0.94 for discomfort detection and an average accuracy of 0.86 when combining all proposed features, which is promising for clinical use.
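The time-domain part of such motion descriptors can be sketched as follows: differentiate per-frame positions twice to obtain an acceleration signal, then summarize its magnitude. This is an illustrative reduction, not the paper's 18-feature set, and the feature names are invented for the example.

```python
import numpy as np

def acceleration_features(traj, fps=30.0):
    """Simple time-domain descriptors of a per-frame (x, y) motion
    trajectory, based on the magnitude of the motion acceleration.
    traj: sequence of at least three (x, y) positions."""
    traj = np.asarray(traj, dtype=float)
    vel = np.diff(traj, axis=0) * fps   # finite-difference velocity
    acc = np.diff(vel, axis=0) * fps    # finite-difference acceleration
    mag = np.linalg.norm(acc, axis=1)   # acceleration magnitude per frame
    return {"mean": float(mag.mean()),
            "std": float(mag.std()),
            "max": float(mag.max())}
```

Feature vectors of this kind, concatenated with frequency-domain statistics, are what the SVM classifier would consume.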
27
Langenhuizen PPJH, Sebregts SHP, Zinger S, Leenstra S, Verheul JB, de With PHN. Prediction of transient tumor enlargement using MRI tumor texture after radiosurgery on vestibular schwannoma. Med Phys 2020; 47:1692-1701. [PMID: 31975523] [PMCID: PMC7217023] [DOI: 10.1002/mp.14042]
Abstract
PURPOSE Vestibular schwannomas (VSs) are uncommon benign brain tumors, generally treated using Gamma Knife radiosurgery (GKRS). However, due to the possible adverse effect of transient tumor enlargement (TTE), large VS tumors are often surgically removed instead of treated radiosurgically. Since microsurgery is highly invasive and carries a significantly increased risk of complications, GKRS is generally preferred. Therefore, prediction of TTE for large VS tumors can improve overall VS treatment and enable physicians to select the optimal treatment strategy on an individual basis. Currently, no clinical factors are known to be predictive of TTE. In this research, we aim at predicting TTE following GKRS using texture features extracted from MRI scans. METHODS We analyzed clinical data of patients with VSs treated at our Gamma Knife center. The data were collected prospectively and included patient- and treatment-related characteristics and MRI scans obtained at the day of treatment and at follow-up visits 6, 12, 24 and 36 months after treatment. The correlations of the patient- and treatment-related characteristics with TTE were investigated using statistical tests. From the treatment scans, we extracted the following MRI image features: first-order statistics, Minkowski functionals (MFs), and three-dimensional gray-level co-occurrence matrices (GLCMs). These features were applied in a machine learning environment for classification of TTE, using support vector machines. RESULTS In a clinical data set containing 61 patients presenting obvious non-TTE and 38 patients presenting obvious TTE, we determined that patient- and treatment-related characteristics do not show any correlation with TTE. Furthermore, first-order statistical MRI features and MFs did not show significant prognostic value in support vector machine classification. However, utilizing a set of 4 GLCM features, we achieved a sensitivity of 0.82 and a specificity of 0.69, demonstrating their prognostic value for TTE. Moreover, these results improved for larger tumor volumes, with a sensitivity of 0.77 and a specificity of 0.89 for tumors larger than 6 cm³. CONCLUSIONS The results found in this research clearly show that MRI tumor texture provides information that can be employed for predicting TTE. This can form a basis for individual VS treatment selection, further improving overall treatment results. Particularly in patients with large VSs, where the phenomenon of TTE is most relevant and our predictive model performs best, these findings can be implemented in a clinical workflow such that the optimal treatment strategy can be determined for each patient.
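A gray-level co-occurrence matrix counts how often pairs of intensity levels co-occur at a fixed spatial offset; Haralick-style statistics such as contrast are then computed from the normalized matrix. The study above uses three-dimensional GLCMs; the following minimal 2D numpy sketch is a simplified illustration of the same construction, not the paper's feature set.

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one offset.
    img: 2D array of integer gray levels in [0, levels)."""
    img = np.asarray(img)
    M = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            M[img[y, x], img[y + dy, x + dx]] += 1
    return M / M.sum()

def glcm_contrast(M):
    """Haralick contrast: sum over i, j of P(i, j) * (i - j)**2."""
    i, j = np.indices(M.shape)
    return float(np.sum(M * (i - j) ** 2))
```

Several such scalar statistics per offset, computed inside the tumor volume, form the texture feature vector fed to the SVM classifier.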
Affiliation(s)
- Patrick P J H Langenhuizen, Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands; Gamma Knife Center Tilburg, Department of Neurosurgery, ETZ Hospital, Tilburg, The Netherlands
- Sander H P Sebregts, Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Svetlana Zinger, Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Sieger Leenstra, Department of Neurosurgery, Erasmus Medical Center, Rotterdam, The Netherlands
- Jeroen B Verheul, Gamma Knife Center Tilburg, Department of Neurosurgery, ETZ Hospital, Tilburg, The Netherlands
- Peter H N de With, Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
28
Lai M, Skyrman S, Shan C, Babic D, Homan R, Edström E, Persson O, Burström G, Elmi-Terander A, Hendriks BHW, de With PHN. Correction: Fusion of augmented reality imaging with the endoscopic view for endonasal skull base surgery; a novel application for surgical navigation based on intraoperative cone beam computed tomography and optical tracking. PLoS One 2020; 15:e0229454. [PMID: 32053699] [PMCID: PMC7018132] [DOI: 10.1371/journal.pone.0229454]
29
Manni F, Edstrom E, de With PHN, Liu X, Holthuizen R, Zinger S, van der Sommen F, Shan C, Mamprin M, Burstrom G, Elmi-Terander A. Towards non-invasive patient tracking: optical image analysis for spine tracking during spinal surgery procedures. Annu Int Conf IEEE Eng Med Biol Soc 2019; 2019:3909-3914. [PMID: 31946727] [DOI: 10.1109/embc.2019.8856304]
Abstract
Surgical navigation systems can enhance surgeon vision and form a reliable image-guided tool for complex interventions such as spinal surgery. The main prerequisite is successful patient tracking, which implies optimal motion compensation. Nowadays, optical tracking systems can satisfy the need to detect patient position during surgery, allowing navigation without the risk of damaging neurovascular structures. However, the spine is subject to vertebral movements, which can impact the accuracy of the system. The aim of this paper is to investigate the feasibility of a novel approach that relates directly to movements of the spinal vertebrae during surgery. To this end, we detect and track patient spine features between different image views, captured by several optical cameras, for reconstruction of vertebral rotation and displacement. We analyze patient images acquired in a real surgical scenario by two gray-scale cameras embedded in the flat-panel detector of the C-arm. Spine segmentation is performed, and anatomical landmarks are designated and tracked between different views, while experimenting with several feature detection algorithms (e.g., SURF and MSER). The 3D positions of the matched features are reconstructed, and the triangulation errors are computed for an accuracy assessment. The analysis of the triangulation accuracy reveals a mean error of 0.38 mm, which demonstrates the feasibility of spine tracking and strengthens the clinical application of optical imaging for spinal navigation.
30
Abstract
OBJECTIVE Detecting the discomfort status of infants is clinically relevant. Late treatment of infant discomfort can lead to adverse outcomes such as abnormal brain development, central nervous system damage and changes in the responsiveness of the neuroendocrine and immune systems to stress at maturity. In this study, we exploit deep convolutional neural network (CNN) algorithms to address the problem of discomfort detection for infants by analyzing their facial expressions. APPROACH A dataset of 55 videos of facial expressions, recorded from 24 infants, is used in our study. Given the limited data available for training, we employ a pre-trained CNN model, which is then fine-tuned using a public dataset with labeled facial expressions (the shoulder-pain dataset). The CNNs are further refined with our data of infants. MAIN RESULTS Using two-fold cross-validation, we achieve an area under the curve (AUC) of 0.96, which is substantially higher than the result without any pre-training steps (AUC = 0.77). Our method also achieves better results than an existing method based on handcrafted features. By fusing individual frame results, the AUC is further improved from 0.96 to 0.98. SIGNIFICANCE The proposed system has great potential for continuous discomfort and pain monitoring in clinical practice.
Affiliation(s)
- Yue Sun, Eindhoven University of Technology, Eindhoven, 5612 WH, The Netherlands
31
van der Putten J, van der Sommen F, de Groof J, Struyvenberg M, Zinger S, Curvers W, Schoon E, Bergman J, de With PHN. Modeling clinical assessor intervariability using deep hypersphere encoder–decoder networks. Neural Comput Appl 2019. [DOI: 10.1007/s00521-019-04607-w]
Abstract
In medical imaging, a proper gold-standard ground truth, e.g., segmentations annotated by assessors or experts, is often lacking or only scarcely available, and the available annotations suffer from large inter-observer variability. Most state-of-the-art segmentation models do not take inter-observer variability into account and are fully deterministic in nature. In this work, we propose hypersphere encoder–decoder networks in combination with dynamic leaky ReLUs as a new method to explicitly incorporate inter-observer variability into a segmentation model. With this model, we can then generate multiple proposals based on the inter-observer agreement. As a result, the output segmentations of the proposed model can be tuned to the typical margins inherent to the ambiguity in the data. For experimental validation, we provide a proof of concept on a toy data set and show improved segmentation results on two medical data sets. The proposed method has several advantages over current state-of-the-art segmentation models, such as interpretability of the uncertainty in segmentation borders. Experiments with a medical localization problem show that it offers improved biopsy localizations, which are on average 12% closer to the optimal biopsy location.
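The building block named above, a leaky ReLU whose negative slope is left as a free parameter at inference time, can be sketched as follows. How the slope maps onto observer-agreement margins is specific to the paper and not reproduced here; this is only an illustration of the activation itself.

```python
def dynamic_leaky_relu(xs, alpha):
    """Leaky ReLU with a tunable negative slope alpha.
    Varying alpha after training lets more or less negative
    evidence pass through, an illustrative stand-in for the
    paper's dynamic leaky ReLU used to tune segmentation margins."""
    return [x if x >= 0.0 else alpha * x for x in xs]
```

Sweeping alpha over a range and thresholding the resulting outputs is one way such a model could emit multiple segmentation proposals of varying tightness.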
32
Klomp SR, van de Wouw DWJM, de With PHN. Real-time Small-object Change Detection from Ground Vehicles Using a Siamese Convolutional Neural Network. J Imaging Sci Technol 2019. [DOI: 10.2352/j.imagingsci.technol.2019.63.6.060402]
33
Ghazvinian Zanjani F, Zinger S, Piepers B, Mahmoudpour S, Schelkens P, de With PHN. Impact of JPEG 2000 compression on deep convolutional neural networks for metastatic cancer detection in histopathological images. J Med Imaging (Bellingham) 2019; 6:027501. [PMID: 31037247] [DOI: 10.1117/1.jmi.6.2.027501]
Abstract
The availability of massive amounts of data in histopathological whole-slide images (WSIs) has enabled the application of deep learning models, especially convolutional neural networks (CNNs), which have shown high potential for improving cancer diagnosis. However, storage and transmission of large amounts of data such as gigapixel histopathological WSIs are challenging. Exploiting lossy compression algorithms for medical images is controversial but, as long as the clinical diagnosis is not affected, acceptable. We study the impact of JPEG 2000 compression on our proposed CNN-based algorithm, which has produced performance comparable to that of pathologists and was ranked second in the CAMELYON17 challenge. Detection of tumor metastases in hematoxylin and eosin-stained tissue sections of breast lymph nodes is evaluated and compared with the pathologists' diagnoses in three different experimental setups. Our experiments show that the CNN model is robust against compression ratios up to 24:1 when it is trained on uncompressed high-quality images. We demonstrate that a model trained on lower-quality images, i.e., lossy compressed images, shows significantly improved classification performance at the corresponding compression ratio. Moreover, such a model performs equally well on all higher-quality images. These properties can help in the design of cloud-based computer-aided diagnosis (CAD) systems, e.g., for telemedicine, that employ deep CNN models more robust to image-quality variations caused by the compression required to address data storage and transmission constraints. However, the results presented are specific to the CAD system and application described, and further work is needed to examine whether they generalize to other systems and applications.
Affiliation(s)
- Svitlana Zinger, Eindhoven University of Technology, SPS-VCA, Eindhoven, The Netherlands
- Saeed Mahmoudpour, Vrije Universiteit Brussel, Department of Electronics and Informatics, Brussels, Belgium; IMEC, Leuven, Belgium
- Peter Schelkens, Vrije Universiteit Brussel, Department of Electronics and Informatics, Brussels, Belgium; IMEC, Leuven, Belgium
- Peter H N de With, Eindhoven University of Technology, SPS-VCA, Eindhoven, The Netherlands
34
Yang H, Shan C, Kolen AF, de With PHN. Catheter localization in 3D ultrasound using voxel-of-interest-based ConvNets for cardiac intervention. Int J Comput Assist Radiol Surg 2019; 14:1069-1077. [PMID: 30968351] [PMCID: PMC6544608] [DOI: 10.1007/s11548-019-01960-y]
Abstract
PURPOSE Efficient image-based catheter localization in 3D US during cardiac interventions is highly desired, since it facilitates the procedure, reduces patient risk, and improves the outcome. Current image-based catheter localization methods are not efficient or accurate enough for real clinical use. METHODS We propose a catheter localization method for 3D cardiac ultrasound (US). The catheter candidate voxels are first pre-selected by the Frangi vesselness filter with adaptive thresholding, after which a triplanar-based ConvNet classifies the remaining voxels as catheter or not. We propose a Share-ConvNet for 3D US, which reduces computational complexity by sharing a single ConvNet across all orthogonal slices. To boost the performance of the ConvNet, we also employ two-stage training with weighted cross-entropy. Using the classified voxels, the catheter is localized by a model-fitting algorithm. RESULTS To validate our method, we have collected challenging ex vivo datasets. Extensive experiments show that the proposed method outperforms state-of-the-art methods and can localize the catheter with an average error of 2.1 mm in around 10 s per volume. CONCLUSION Our method can automatically localize the cardiac catheter in challenging 3D cardiac US images. The efficiency and localization accuracy of the proposed method are promising for catheter detection and localization during clinical interventions.
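The two-stage design in this abstract — a cheap filter pre-selects candidate voxels with an adaptive threshold, so the expensive classifier only sees the survivors — can be sketched generically. The scalar "filter response" here is a placeholder for the Frangi vesselness output, and the scores, keep fraction, and dummy classifier are all illustrative assumptions.

```python
def adaptive_threshold(scores, keep_frac):
    # Keep only the top fraction of voxels by filter response, so the
    # expensive second stage runs on a small candidate set.
    cutoff = sorted(scores.values(), reverse=True)[int(len(scores) * keep_frac) - 1]
    return [v for v, s in scores.items() if s >= cutoff]

def two_stage(scores, classifier, keep_frac=0.4):
    # Stage 1: cheap pre-selection; Stage 2: classify remaining candidates.
    candidates = adaptive_threshold(scores, keep_frac)
    return [v for v in candidates if classifier(v)]

# Hypothetical filter responses per voxel index, plus a dummy classifier
# that rejects voxel 5 as a false positive of the filter stage.
scores = {0: 0.9, 1: 0.1, 2: 0.8, 3: 0.2, 4: 0.05, 5: 0.7, 6: 0.3, 7: 0.15}
classifier = lambda v: v != 5
detected = two_stage(scores, classifier)
```

The point of the cascade is cost: the classifier is invoked for 3 of 8 voxels here, and in a full 3D volume the pre-selection typically discards the overwhelming majority of background voxels.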
Affiliation(s)
- Hongxu Yang
- Eindhoven University of Technology, Eindhoven, The Netherlands.

35
Yang H, Shan C, Pourtaherian A, Kolen AF, de With PHN. Catheter segmentation in three-dimensional ultrasound images by feature fusion and model fitting. J Med Imaging (Bellingham) 2019; 6:015001. [PMID: 30662926 DOI: 10.1117/1.jmi.6.1.015001] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2018] [Accepted: 12/14/2018] [Indexed: 11/14/2022] Open
Abstract
Ultrasound (US) has been increasingly used during interventions, such as cardiac catheterization. To accurately identify the catheter inside US images, extra training for physicians and sonographers is needed. Consequently, automated segmentation of the catheter in US images, combined with optimized presentation of the viewing plane to the physician, can improve the efficiency, safety, and outcome of interventions. For cardiac catheterization, three-dimensional (3-D) US imaging is attractive because it involves no radiation and provides richer spatial information. However, due to the limited spatial resolution of 3-D cardiac US and the complex anatomical structures inside the heart, image-based catheter segmentation is challenging. We propose a cardiac catheter segmentation method in 3-D US data based on image processing techniques. Our method first applies a voxel-based classification using newly designed multiscale and multidefinition features, which provide a robust catheter voxel segmentation in 3-D US. Second, a modified catheter model fitting is applied to segment the curved catheter in 3-D US images. The proposed method is validated with extensive experiments on different in-vitro, ex-vivo, and in-vivo datasets. It segments the catheter with an average tip-point error smaller than the catheter diameter (1.9 mm) in the volumetric images. Based on automated catheter segmentation combined with optimal viewing, physicians do not have to interpret US images and can focus on the procedure itself, improving the quality of cardiac intervention.
Affiliation(s)
- Hongxu Yang
- Eindhoven University of Technology, VCA Research Group, Eindhoven, The Netherlands
- Caifeng Shan
- Philips Research, In-Body Systems, Eindhoven, The Netherlands
- Arash Pourtaherian
- Eindhoven University of Technology, VCA Research Group, Eindhoven, The Netherlands
- Peter H N de With
- Eindhoven University of Technology, VCA Research Group, Eindhoven, The Netherlands

36
Zanjani FG, Moin DA, Claessen F, Cherici T, Parinussa S, Pourtaherian A, Zinger S, de With PHN. Mask-MCNet: Instance Segmentation in 3D Point Cloud of Intra-oral Scans. Lecture Notes in Computer Science 2019. [DOI: 10.1007/978-3-030-32254-0_15] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
37
Langenhuizen PPJH, Zinger S, Hanssens PEJ, Kunst HPM, Mulder JJS, Leenstra S, de With PHN, Verheul JB. Influence of pretreatment growth rate on Gamma Knife treatment response for vestibular schwannoma: a volumetric analysis. J Neurosurg 2018; 131:1-8. [PMID: 30497177 DOI: 10.3171/2018.6.jns18516] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2018] [Accepted: 06/12/2018] [Indexed: 11/06/2022]
Abstract
OBJECTIVE The aim of this study was to gain insight into the influence of the pretreatment growth rate on the volumetric tumor response and tumor control rates after Gamma Knife radiosurgery (GKRS) for incidental vestibular schwannoma (VS). METHODS All patients treated with GKRS at the Gamma Knife Center, ETZ Hospital, who exhibited confirmed radiological progression of their VS after an initial observation period were included. Pre- and posttreatment MRI scans were volumetrically evaluated, and the volume doubling times (VDTs) prior to treatment were calculated. Posttreatment volumes were used to create an objective mathematical failure definition: 2 consecutive significant increases in tumor volume among 3 consecutive follow-up MRI scans. Spearman correlation, Kaplan-Meier survival analysis, and Cox proportional hazards regression analysis were used to determine the influence of the VDT on the volumetric treatment response. RESULTS The resulting cohort contained 311 patients in whom the VDT was calculated, with a median follow-up time of 60 months after GKRS. Of these 311 patients, 35 experienced loss of tumor control after GKRS. The pretreatment growth rate and the relative volume changes, calculated at 6 months and 1, 2, and 3 years following treatment, showed no statistically significant correlation. Kaplan-Meier analysis revealed that slow-growing tumors, with a VDT equal to or longer than the median VDT of 15 months, had calculated 5- and 10-year control rates of 97.3% and 86.0%, respectively, whereas fast-growing tumors, with a VDT shorter than the median, had control rates of 85.5% and 67.6%, respectively (log-rank, p = 0.001). The influence of the VDT on tumor control was also determined by Cox regression analysis; the resulting model showed a significant (p = 0.045) effect of the VDT on the hazard rate of loss of tumor control. CONCLUSIONS By employing a unique, large database with long follow-up times, the authors were able to accurately investigate the influence of the pretreatment VS growth rate on the volumetric GKRS treatment response. The resulting predictive model illustrates the negative influence of the pretreatment VS growth rate on the efficacy of radiosurgery, and the tumor control rates confirm the high efficacy of GKRS for slow-growing VS. However, fast-growing tumors showed significantly lower control rates; for these cases, different treatment strategies may be considered.
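The volume doubling time that this study stratifies on follows from the standard exponential-growth model: VDT = Δt · ln 2 / ln(V₂/V₁). The abstract does not spell out its computation, so the formula and the numbers below are an illustrative sketch, with the 15-month median split taken from the abstract.

```python
import math

def volume_doubling_time(v1, v2, dt_months):
    # Standard exponential-growth estimate: VDT = dt * ln(2) / ln(V2 / V1).
    # Assumes v2 > v1 > 0, i.e. a tumor with confirmed growth.
    return dt_months * math.log(2) / math.log(v2 / v1)

# A tumor that exactly doubles over 15 months has VDT = 15 months,
# which sits right at the study's median split between slow and fast growers.
vdt = volume_doubling_time(1.0, 2.0, 15.0)
slow_growing = vdt >= 15.0  # study's dichotomization at the median VDT
```

With this definition, a tumor growing from 2.0 to 2.5 cm³ in a year has a VDT of about 37 months (slow), while one growing from 2.0 to 4.5 cm³ has a VDT of about 10 months (fast).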
Affiliation(s)
- Patrick P J H Langenhuizen
- Gamma Knife Center Tilburg, Department of Neurosurgery, ETZ Hospital, Tilburg
- Eindhoven University of Technology, Eindhoven
- Henricus P M Kunst
- Department of Otolaryngology, Radboud Institute of Health Sciences, Radboud University Medical Center, Nijmegen
- Jef J S Mulder
- Department of Otolaryngology, Radboud Institute of Health Sciences, Radboud University Medical Center, Nijmegen
- Sieger Leenstra
- Department of Neurosurgery, Erasmus Medical Center, Rotterdam, The Netherlands
- Jeroen B Verheul
- Gamma Knife Center Tilburg, Department of Neurosurgery, ETZ Hospital, Tilburg
38
Pourtaherian A, Ghazvinian Zanjani F, Zinger S, Mihajlovic N, Ng GC, Korsten HHM, de With PHN. Robust and semantic needle detection in 3D ultrasound using orthogonal-plane convolutional neural networks. Int J Comput Assist Radiol Surg 2018; 13:1321-1333. [PMID: 29855770 PMCID: PMC6132402 DOI: 10.1007/s11548-018-1798-3] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2018] [Accepted: 05/21/2018] [Indexed: 12/30/2022]
Abstract
PURPOSE During needle interventions, successful automated detection of the needle immediately after insertion is necessary to allow the physician to identify and correct any misalignment of the needle and the target at an early stage, which reduces needle passes and improves health outcomes. METHODS We present a novel approach to localize partially inserted needles in a 3D ultrasound volume with high precision using convolutional neural networks. We propose two methods, based on patch classification and on semantic segmentation of the needle from orthogonal 2D cross-sections extracted from the volume. For patch classification, each voxel is classified from locally extracted raw data of three orthogonal planes centered on it. We propose a bootstrap resampling approach to enhance training on our highly imbalanced data. For semantic segmentation, parts of the needle are detected in cross-sections perpendicular to the lateral and elevational axes. We propose to exploit the structural information in the data with a novel thick-slice processing approach for efficient modeling of the context. RESULTS Our methods successfully detect 17 G and 22 G needles with a single trained network, showing a robust, generalized approach. Extensive ex-vivo evaluations on chicken breast and porcine leg datasets show F1-scores of 80% and 84%, respectively. Furthermore, very short needles are detected with tip localization errors of less than 0.7 mm for lengths of only 5 and 10 mm at 0.2 and 0.36 mm voxel sizes, respectively. CONCLUSION Our method is able to accurately detect even very short needles, ensuring that the needle and its tip are maximally visible in the visualized plane during the entire intervention, thereby eliminating the need for advanced bi-manual coordination of the needle and transducer.
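The bootstrap-resampling idea this abstract mentions for highly imbalanced data — a 3D needle volume contains vastly more background voxels than needle voxels — can be sketched as class-balanced resampling with replacement. This is a generic illustration of the technique, not the authors' exact scheme; the data and sample counts are hypothetical.

```python
import random

def balanced_bootstrap(samples, labels, n_per_class, seed=0):
    # Resample (with replacement) the same number of examples from each class,
    # so the rare needle class is not swamped by background during training.
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    out = []
    for y, xs in sorted(by_class.items()):
        out += [(rng.choice(xs), y) for _ in range(n_per_class)]
    return out

# Hypothetical, heavily imbalanced voxel data: 1 needle voxel vs 7 background.
samples = list(range(8))
labels = [1, 0, 0, 0, 0, 0, 0, 0]
batch = balanced_bootstrap(samples, labels, n_per_class=4)
pos = sum(1 for _, y in batch if y == 1)  # half the batch is now needle class
```

Sampling with replacement is what makes this work for the minority class: the single needle voxel above is simply drawn repeatedly until the batch is balanced.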
Affiliation(s)
- Arash Pourtaherian
- Eindhoven University of Technology, 5612 AJ, Eindhoven, The Netherlands.
- Svitlana Zinger
- Eindhoven University of Technology, 5612 AJ, Eindhoven, The Netherlands
- Gary C Ng
- Philips Healthcare, Bothell, WA, 98021, USA
- Peter H N de With
- Eindhoven University of Technology, 5612 AJ, Eindhoven, The Netherlands

39
Camps SM, Verhaegen F, Vanneste BGL, de With PHN, Fontanarosa D. Automated patient-specific transperineal ultrasound probe setups for prostate cancer patients undergoing radiotherapy. Med Phys 2018; 45:3185-3195. [PMID: 29757474 DOI: 10.1002/mp.12972] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2017] [Revised: 05/04/2018] [Accepted: 05/04/2018] [Indexed: 11/09/2022] Open
Abstract
PURPOSE The use of ultrasound imaging is not widespread in prostate cancer radiotherapy workflows, despite several advantages (e.g., allowing real-time volumetric organ tracking). This can be partially attributed to the need for a trained operator during acquisition and interpretation of the images. We introduce and evaluate an algorithm that can propose a patient-specific transperineal ultrasound probe setup, based on a CT scan and anatomical structure delineations. The use of this setup during the simulation and treatment stages could improve the usability of ultrasound imaging for relatively untrained operators (radiotherapists with less than 1 yr of experience with ultrasound). METHODS The internal perineum boundaries of three prostate cancer patients were identified based on bone masks extracted from their CT scans. Projecting these boundaries onto the skin and excluding specific areas yielded a skin area accessible for transperineal ultrasound probe placement in clinical practice. Several possible probe setups on this area were proposed by the algorithm, and the optimal setup was automatically selected. Finally, this optimal setup was evaluated by comparison with a corresponding transperineal ultrasound volume acquired by a radiation oncologist. RESULTS The algorithm-proposed setups allowed visualization of 100% of the clinically required anatomical structures, including the whole prostate and seminal vesicles, as well as the adjacent edges of the bladder and rectum. In addition, these setups allowed visualization of 94% of the anatomical structures that were also visualized by the physician during the acquisition of an actual ultrasound volume. CONCLUSION Provided that the ultrasound probe setup proposed by the algorithm is properly reproduced on the patient, it allows visualization of all clinically required structures for image-guided radiotherapy purposes. Future work should validate these results on a patient population and optimize the workflow to enable a relatively untrained operator to perform the procedure.
Affiliation(s)
- Saskia Maria Camps
- Faculty of Electrical Engineering, Eindhoven University of Technology, 5600 MB, Eindhoven, The Netherlands; Oncology Solutions Department, Philips Research, 5656 AE, Eindhoven, The Netherlands
- Frank Verhaegen
- Department of Radiation Oncology (MAASTRO), GROW - School for Oncology and Developmental Biology, 6229 ET, Maastricht, The Netherlands
- Ben G L Vanneste
- Department of Radiation Oncology (MAASTRO), GROW - School for Oncology and Developmental Biology, 6229 ET, Maastricht, The Netherlands
- Peter H N de With
- Faculty of Electrical Engineering, Eindhoven University of Technology, 5600 MB, Eindhoven, The Netherlands
- Davide Fontanarosa
- School of Clinical Sciences, Queensland University of Technology, Brisbane, Qld, 4000, Australia; Institute of Health & Biomedical Innovation, Queensland University of Technology, Brisbane, Qld, 4059, Australia

40
van der Sommen F, Klomp SR, Swager AF, Zinger S, Curvers WL, Bergman JJGHM, Schoon EJ, de With PHN. Predictive features for early cancer detection in Barrett's esophagus using Volumetric Laser Endomicroscopy. Comput Med Imaging Graph 2018; 67:9-20. [PMID: 29684663 DOI: 10.1016/j.compmedimag.2018.02.007] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2017] [Revised: 01/22/2018] [Accepted: 02/27/2018] [Indexed: 02/07/2023]
Abstract
The incidence of Barrett cancer is increasing rapidly, and current screening protocols often miss the disease at an early, treatable stage. Volumetric Laser Endomicroscopy (VLE) is a promising new tool for finding this type of cancer early, capturing a full circumferential scan of Barrett's Esophagus (BE) up to 3-mm depth. However, the interpretation of these VLE scans can be complicated, due to the large number of cross-sectional images and the subtle grayscale variations. Therefore, algorithms for automated analysis of VLE data can offer a valuable contribution to its overall interpretation. In this study, we broadly investigate the potential of Computer-Aided Detection (CADe) for the identification of early Barrett's cancer using VLE. We employ a histopathologically validated set of ex-vivo VLE images for evaluating and comparing a considerable set of widely used image features and machine learning algorithms. In addition, we show that incorporating clinical knowledge in feature design leads to superior classification performance and additional benefits, such as low complexity and fast computation time. Furthermore, we identify an optimal tissue depth for classification of 0.5-1.0 mm and propose an extension to the evaluated features that exploits this phenomenon, improving their predictive properties for cancer detection in VLE data. Finally, we compare the performance of the CADe methods with the classification accuracy of two VLE experts. With a maximum Area Under the Curve (AUC) in the range of 0.90-0.93 for the evaluated features and machine learning methods, versus an AUC of 0.81 for the medical experts, our experiments show that computer-aided methods can achieve considerably better performance than trained human observers in the analysis of VLE data.
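The AUC figures used here to compare CADe methods against VLE experts have a convenient rank interpretation: the probability that a randomly chosen positive receives a higher score than a randomly chosen negative, with ties counting half. A minimal sketch of that computation (the scores and labels below are hypothetical, not the study's data):

```python
def auc(scores, labels):
    # Area under the ROC curve via its rank interpretation: the fraction of
    # positive/negative pairs in which the positive outscores the negative.
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical per-image cancer scores with histopathology labels.
scores = [0.9, 0.4, 0.6, 0.2, 0.5]
labels = [1, 1, 0, 0, 1]
value = auc(scores, labels)
```

This pairwise form is threshold-free, which is why AUC suits the abstract's comparison: the experts and the algorithms can be ranked without fixing an operating point.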
Affiliation(s)
- Fons van der Sommen
- Department of Electrical Engineering, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands; Department of Gastroenterology, Academic Medical Center, Postbus 22660, 1100 DD Amsterdam, The Netherlands.
- Sander R Klomp
- Department of Electrical Engineering, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands.
- Anne-Fré Swager
- Department of Gastroenterology, Academic Medical Center, Postbus 22660, 1100 DD Amsterdam, The Netherlands.
- Svitlana Zinger
- Department of Electrical Engineering, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands.
- Wouter L Curvers
- Department of Gastroenterology, Academic Medical Center, Postbus 22660, 1100 DD Amsterdam, The Netherlands; Department of Gastroenterology and Hepatology, Catharina Hospital, P.O. Box 1350, 5602 ZA Eindhoven, The Netherlands.
- Jacques J G H M Bergman
- Department of Gastroenterology, Academic Medical Center, Postbus 22660, 1100 DD Amsterdam, The Netherlands.
- Erik J Schoon
- Department of Gastroenterology and Hepatology, Catharina Hospital, P.O. Box 1350, 5602 ZA Eindhoven, The Netherlands.
- Peter H N de With
- Department of Electrical Engineering, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands.

41
Bernas A, Barendse EM, Aldenkamp AP, Backes WH, Hofman PAM, Hendriks MPH, Kessels RPC, Willems FMJ, de With PHN, Zinger S, Jansen JFA. Brain resting-state networks in adolescents with high-functioning autism: Analysis of spatial connectivity and temporal neurodynamics. Brain Behav 2018; 8:e00878. [PMID: 29484255 PMCID: PMC5822569 DOI: 10.1002/brb3.878] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/05/2016] [Revised: 07/20/2017] [Accepted: 10/10/2017] [Indexed: 12/16/2022] Open
Abstract
Introduction Autism spectrum disorder (ASD) is mainly characterized by functional and communication impairments as well as restrictive and repetitive behavior. The leading hypothesis for the neural basis of autism postulates globally abnormal brain connectivity, which can be assessed using functional magnetic resonance imaging (fMRI). Even in the absence of a task, the brain exhibits a high degree of functional connectivity, known as intrinsic, or resting-state, connectivity. Global default connectivity in individuals with autism versus controls is not well characterized, especially for a high-functioning young population. The aim of this study is to test whether high-functioning adolescents with ASD (HFA) have abnormal resting-state functional connectivity. Materials and Methods We performed spatial and temporal analyses of resting-state networks (RSNs) in 13 HFA adolescents and 13 IQ- and age-matched controls. For the spatial analysis, we used probabilistic independent component analysis (ICA) and a permutation statistical method to reveal RSN differences between the groups. For the temporal analysis, we applied Granger causality to find differences in temporal neurodynamics. Results Controls and HFA adolescents display very similar patterns and strengths of resting-state connectivity, and we found no significant differences between the groups in spatial resting-state connectivity. In the temporal dynamics of this connectivity, however, we did find differences in the causal-effect properties of RSNs originating in temporal and prefrontal cortices. Conclusion The results show a difference between HFA adolescents and controls in the temporal neurodynamics from the ventral attention network to the salience-executive network: a pathway involving cognitive, executive, and emotion-related cortices. We hypothesize that this weaker dynamic pathway is due to a subtle trigger challenging the cognitive state prior to the resting state.
Affiliation(s)
- Antoine Bernas
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Evelien M. Barendse
- Department of Neurology, Maastricht University Medical Center, Maastricht, The Netherlands
- Department of Behavioral Sciences, Epilepsy Center Kempenhaeghe, Heeze, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Albert P. Aldenkamp
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Department of Neurology, Maastricht University Medical Center, Maastricht, The Netherlands
- Department of Behavioral Sciences, Epilepsy Center Kempenhaeghe, Heeze, The Netherlands
- School for Mental Health and Neuroscience, Maastricht University Medical Center, Maastricht, The Netherlands
- Walter H. Backes
- School for Mental Health and Neuroscience, Maastricht University Medical Center, Maastricht, The Netherlands
- Department of Radiology, Maastricht University Medical Center, Maastricht, The Netherlands
- Paul A. M. Hofman
- School for Mental Health and Neuroscience, Maastricht University Medical Center, Maastricht, The Netherlands
- Department of Radiology, Maastricht University Medical Center, Maastricht, The Netherlands
- Marc P. H. Hendriks
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Roy P. C. Kessels
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Department of Medical Psychology, Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands
- Frans M. J. Willems
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Peter H. N. de With
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Svitlana Zinger
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Department of Behavioral Sciences, Epilepsy Center Kempenhaeghe, Heeze, The Netherlands
- Jacobus F. A. Jansen
- School for Mental Health and Neuroscience, Maastricht University Medical Center, Maastricht, The Netherlands
- Department of Radiology, Maastricht University Medical Center, Maastricht, The Netherlands

42
Pourtaherian A, Scholten HJ, Kusters L, Zinger S, Mihajlovic N, Kolen AF, Zuo F, Ng GC, Korsten HHM, de With PHN. Medical Instrument Detection in 3-Dimensional Ultrasound Data Volumes. IEEE Trans Med Imaging 2017; 36:1664-1675. [PMID: 28410101 DOI: 10.1109/tmi.2017.2692302] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Ultrasound-guided medical interventions are broadly applied in diagnostics and therapy, e.g., regional anesthesia or ablation. A guided intervention using 2-D ultrasound is challenging due to poor instrument visibility, a limited field of view, and the multi-fold coordination of the medical instrument and the ultrasound plane. Recent 3-D ultrasound transducers can improve the quality of the image-guided intervention if automated detection of the needle is used. In this paper, we present a novel method for detecting medical instruments in 3-D ultrasound data that is based solely on image processing techniques and is validated on various ex vivo and in vivo data sets. In the proposed procedure, the physician places the 3-D transducer at the desired position, and the image processing automatically detects the best instrument view, so that the physician can focus entirely on the intervention. Our method is based on the classification of instrument voxels using volumetric structure directions and robust approximation of the primary tool axis. A novel normalization method is proposed for the shape and intensity consistency of instruments to improve the detection. Moreover, a novel 3-D Gabor wavelet transformation is introduced and optimally designed for revealing the instrument voxels in the volume, while remaining generic to several medical instruments and transducer types. Experiments on diverse data sets, including in vivo data from patients, show that for a given transducer and instrument type, high detection accuracies are achieved, with average position errors smaller than the instrument diameter, in the 0.5-1.5 mm range.
43
van der Sommen F, Zinger S, Curvers WL, Bisschops R, Pech O, Weusten BLAM, Bergman JJGHM, de With PHN, Schoon EJ. Computer-aided detection of early neoplastic lesions in Barrett's esophagus. Endoscopy 2016; 48:617-24. [PMID: 27100718 DOI: 10.1055/s-0042-105284] [Citation(s) in RCA: 113] [Impact Index Per Article: 14.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
BACKGROUND AND STUDY AIMS Early neoplasia in Barrett's esophagus is difficult to detect and often overlooked during Barrett's surveillance. An automatic detection system could be beneficial, by assisting endoscopists with detection of early neoplastic lesions. The aim of this study was to assess the feasibility of a computer system to detect early neoplasia in Barrett's esophagus. PATIENTS AND METHODS Based on 100 images from 44 patients with Barrett's esophagus, a computer algorithm, which employed specific texture, color filters, and machine learning, was developed for the detection of early neoplastic lesions in Barrett's esophagus. The evaluation by one endoscopist, who extensively imaged and endoscopically removed all early neoplastic lesions and was not blinded to the histological outcome, was considered the gold standard. For external validation, four international experts in Barrett's neoplasia, who were blinded to the pathology results, reviewed all images. RESULTS The system identified early neoplastic lesions on a per-image analysis with a sensitivity and specificity of 0.83. At the patient level, the system achieved a sensitivity and specificity of 0.86 and 0.87, respectively. A trade-off between the two performance metrics could be made by varying the percentage of training samples that showed neoplastic tissue. CONCLUSION The automated computer algorithm developed in this study was able to identify early neoplastic lesions with reasonable accuracy, suggesting that automated detection of early neoplasia in Barrett's esophagus is feasible. Further research is required to improve the accuracy of the system and prepare it for real-time operation, before it can be applied in clinical practice.
Affiliation(s)
- Fons van der Sommen
- Department of Electrical Engineering, Eindhoven University of Technology, the Netherlands
- Svitlana Zinger
- Department of Electrical Engineering, Eindhoven University of Technology, the Netherlands
- Wouter L Curvers
- Department of Gastroenterology, Catharina Hospital, Eindhoven, the Netherlands
- Raf Bisschops
- Department of Gastroenterology, University Hospitals Leuven, KU Leuven, Leuven, Belgium
- Oliver Pech
- Gastroenterology and Interventional Endoscopy, St. John of God Hospital, Regensburg, Germany
- Bas L A M Weusten
- Department of Gastroenterology, St. Antonius Hospital, Nieuwegein, the Netherlands
- Peter H N de With
- Department of Electrical Engineering, Eindhoven University of Technology, the Netherlands
- Erik J Schoon
- Department of Gastroenterology, Catharina Hospital, Eindhoven, the Netherlands

44
van de Wouw DWJM, Dubbelman G, de With PHN. Hierarchical 2.5-D Scene Alignment for Change Detection With Large Viewpoint Differences. IEEE Robot Autom Lett 2016. [DOI: 10.1109/lra.2016.2520561] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
45
Snoeren RM, Söderman M, Kroon JN, Roijers RB, de With PHN, Babic D. High-resolution 3D X-ray imaging of intracranial nitinol stents. Neuroradiology 2012; 54:155-62. [PMID: 21331601 PMCID: PMC3261414 DOI: 10.1007/s00234-011-0839-1] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2010] [Accepted: 01/26/2011] [Indexed: 11/17/2022]
Abstract
INTRODUCTION To assess an optimized 3D imaging protocol for intracranial nitinol stents in 3D C-arm flat-detector imaging, an image quality simulation and an in vitro study were carried out. METHODS Nitinol stents of various brands were placed inside an anthropomorphic head phantom, using iodine contrast. Experiments with objects were preceded by image quality and dose simulations. We varied X-ray imaging parameters in a commercial interventional X-ray system to set 3D image quality in the contrast-noise-sharpness space. Beam quality was varied to evaluate the contrast of the stents while keeping the absorbed dose below recommended values. Two detector formats were used, paired with an appropriate pixel size and X-ray focus size. Zoomed reconstructions were carried out and snapshot images acquired. High-contrast spatial resolution was assessed with a CT phantom. RESULTS We found an optimal protocol for imaging intracranial nitinol stents. Contrast resolution was optimized for nickel-titanium stents, and a spatial resolution greater than 2.1 lp/mm allows struts to be visualized. We obtained images of stents of various brands, and a representative set of images is shown. Independent of the make, struts can be imaged with virtually continuous strokes. Measured absorbed doses are lower than a 50 mGy Computed Tomography Dose Index (CTDI). CONCLUSION By balancing the modulation transfer of the imaging components and tuning the high-contrast imaging capabilities, we have shown that thin nitinol stent wires can be reconstructed with a high contrast-to-noise ratio and good detail, while keeping radiation doses within recommended values. Experimental results compare well with imaging simulations.
Affiliation(s)
- Rudolph M Snoeren
- Faculty of Electrical Engineering, Signal Processing Systems group (SPS), Eindhoven University of Technology (TU/e), Laplace Building 028, Postbox 513, 5600 MB, Eindhoven, The Netherlands.