1
Amiri S, Karimzadeh R, Vrtovec T, Gudmann Steuble Brandt E, Thomsen HS, Brun Andersen M, Felix Müller C, Bertil Rodell A, Ibragimov B. Centerline-guided reinforcement learning model for pancreatic duct identifications. J Med Imaging (Bellingham) 2024; 11:064002. [PMID: 39525832] [PMCID: PMC11543826] [DOI: 10.1117/1.jmi.11.6.064002]
Abstract
Purpose: Pancreatic ductal adenocarcinoma is forecast to become the second most significant cause of cancer mortality as the number of patients with cancer in the main duct of the pancreas grows, and measurement of the pancreatic duct diameter from medical images has been identified as relevant for its early diagnosis.
Approach: We propose an automated pancreatic duct centerline tracing method from computed tomography (CT) images that is based on deep reinforcement learning, which employs an artificial agent to interact with the environment and calculates rewards by combining the distances from the target and the centerline. A deep neural network is implemented to forecast step-wise values for each potential action. With the help of this mechanism, the agent can probe along the pancreatic duct centerline using the best possible navigational path. To enhance the tracing accuracy, we employ landmark-based registration, which enables the generation of a probability map of the pancreatic duct. Subsequently, we utilize a gradient-based method on the registered data to extract a probability map specifically indicating the centerline of the pancreatic duct.
Results: Three datasets with a total of 115 CT images were used to evaluate the proposed method. Using image hold-out from the first two datasets, the method performance was 2.0, 4.0, and 2.1 mm measured in terms of the mean detection error, Hausdorff distance (HD), and root mean squared error (RMSE), respectively. Using the first two datasets for training and the third one for testing, the method accuracy was 2.2, 4.9, and 2.6 mm measured in terms of the mean detection error, HD, and RMSE, respectively.
Conclusions: We present an algorithm for automated pancreatic duct centerline tracing using deep reinforcement learning. Validation on an external dataset confirms the potential for practical utilization of the presented method.
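The reward described in this abstract combines the distance from the target with the distance from the centerline. A minimal, hypothetical sketch of such a reward with a greedy policy on a 2D grid is shown below; the grid, weights, and the greedy stand-in for the trained value network are illustrative assumptions, not the authors' implementation.

```python
import math

ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # candidate grid moves

def reward(pos, target, centerline, w_target=0.5, w_center=0.5):
    # Higher (less negative) when closer to both the target point and
    # the nearest centerline point; the weights are illustrative.
    d_target = math.dist(pos, target)
    d_center = min(math.dist(pos, c) for c in centerline)
    return -(w_target * d_target + w_center * d_center)

def greedy_step(pos, target, centerline):
    # Stand-in for the trained deep network: pick the action whose
    # resulting position maximizes the combined reward.
    return max(((pos[0] + dx, pos[1] + dy) for dx, dy in ACTIONS),
               key=lambda p: reward(p, target, centerline))

# Trace a short synthetic duct: the agent starts off-axis, is pulled
# onto the centerline, then moves along it toward the target.
centerline = [(0, 0), (1, 0), (2, 0), (3, 0)]
pos, target = (0, 1), (3, 0)
path = [pos]
for _ in range(4):
    pos = greedy_step(pos, target, centerline)
    path.append(pos)
```

In this toy setting the combined reward first snaps the agent onto the centerline and then advances it toward the target, which mirrors the intuition the abstract describes.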
Affiliation(s)
- Sepideh Amiri
  - University of Copenhagen, Department of Computer Science, Copenhagen, Denmark
- Reza Karimzadeh
  - University of Copenhagen, Department of Computer Science, Copenhagen, Denmark
- Tomaž Vrtovec
  - University of Ljubljana, Faculty of Electrical Engineering, Ljubljana, Slovenia
- Henrik S. Thomsen
  - Copenhagen University Hospital, Herlev Gentofte Hospital, Department of Radiology, Copenhagen, Denmark
- Michael Brun Andersen
  - Copenhagen University Hospital, Herlev Gentofte Hospital, Department of Radiology, Copenhagen, Denmark
  - Copenhagen University, Department of Clinical Medicine, Copenhagen, Denmark
- Christoph Felix Müller
  - Copenhagen University Hospital, Herlev Gentofte Hospital, Department of Radiology, Copenhagen, Denmark
- Bulat Ibragimov
  - University of Copenhagen, Department of Computer Science, Copenhagen, Denmark
  - University of Ljubljana, Faculty of Electrical Engineering, Ljubljana, Slovenia
2
Hong J, Hnatyshyn R, Santos EAD, Maciejewski R, Isenberg T. A Survey of Designs for Combined 2D+3D Visual Representations. IEEE Transactions on Visualization and Computer Graphics 2024; 30:2888-2902. [PMID: 38648152] [DOI: 10.1109/tvcg.2024.3388516]
Abstract
We examine visual representations of data that make use of combinations of both 2D and 3D data mappings. Combining 2D and 3D representations is a common technique that allows viewers to understand multiple facets of the data with which they are interacting. While 3D representations focus on the spatial character of the data or the dedicated 3D data mapping, 2D representations often show abstract data properties and take advantage of the unique benefits of mapping to a plane. Many systems have used unique combinations of both types of data mappings effectively. Yet there are no systematic reviews of methods for linking 2D and 3D representations. We systematically survey the relationships between 2D and 3D visual representations in major visualization publications (IEEE VIS, IEEE TVCG, and EuroVis) from 2012 to 2022. We closely examined 105 articles in which 2D and 3D representations are connected visually, interactively, or through animation. These approaches are characterized by their visual environment, the relationships between their visual representations, and their possible layouts. Through our analysis, we introduce a design space as well as provide design guidelines for effectively linking 2D and 3D visual representations.
3
Jadhav S, Dmitriev K, Marino J, Barish M, Kaufman AE. 3D Virtual Pancreatography. IEEE Transactions on Visualization and Computer Graphics 2022; 28:1457-1468. [PMID: 32870794] [PMCID: PMC8884473] [DOI: 10.1109/tvcg.2020.3020958]
Abstract
We present 3D virtual pancreatography (VP), a novel visualization procedure and application for non-invasive diagnosis and classification of pancreatic lesions, the precursors of pancreatic cancer. Currently, non-invasive screening of patients is performed through visual inspection of 2D axis-aligned CT images, though the relevant features are often neither clearly visible nor automatically detectable. VP is an end-to-end visual diagnosis system that includes: a machine learning-based automatic segmentation of the pancreatic gland and the lesions, a semi-automatic approach to extract the primary pancreatic duct, a machine learning-based automatic classification of lesions into four prominent types, and specialized 3D and 2D exploratory visualizations of the pancreas, lesions, and surrounding anatomy. We combine volume rendering with pancreas- and lesion-centric visualizations and measurements for effective diagnosis. We designed VP through close collaboration with and feedback from expert radiologists, and evaluated it on multiple real-world CT datasets with various pancreatic lesions and case studies examined by the expert radiologists.
4
Mathew S, Nadeem S, Kaufman A. FoldIt: Haustral Folds Detection and Segmentation in Colonoscopy Videos. Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2021; 12903:221-230. [PMID: 35403172] [PMCID: PMC8993167] [DOI: 10.1007/978-3-030-87199-4_21]
Abstract
Haustral folds are colon wall protrusions implicated in the high polyp miss rate during optical colonoscopy procedures. If segmented accurately, haustral folds can allow for better estimation of the missed surface area and can also serve as valuable landmarks for registering pre-treatment virtual (CT) and optical colonoscopies, to guide navigation toward the anomalies found in pre-treatment scans. We present a novel generative adversarial network, FoldIt, for feature-consistent image translation of optical colonoscopy videos to virtual colonoscopy renderings with haustral fold overlays. A new transitive loss is introduced in order to leverage ground truth information between haustral fold annotations and virtual colonoscopy renderings. We demonstrate the effectiveness of our model on challenging real optical colonoscopy videos as well as on textured virtual colonoscopy videos with clinician-verified haustral fold annotations. All code and scripts to reproduce the experiments of this paper will be made available via our Computational Endoscopy Platform at https://github.com/nadeemlab/CEP.
Affiliation(s)
- Shawn Mathew
  - Department of Computer Science, Stony Brook University
- Saad Nadeem
  - Department of Medical Physics, Memorial Sloan Kettering Cancer Center
- Arie Kaufman
  - Department of Computer Science, Stony Brook University
5
Alam S, Thor M, Rimner A, Tyagi N, Zhang SY, Kuo LC, Nadeem S, Lu W, Hu YC, Yorke E, Zhang P. Quantification of accumulated dose and associated anatomical changes of esophagus using weekly Magnetic Resonance Imaging acquired during radiotherapy of locally advanced lung cancer. Physics and Imaging in Radiation Oncology 2020; 13:36-43. [PMID: 32411833] [PMCID: PMC7224352] [DOI: 10.1016/j.phro.2020.03.002]
Abstract
- MRI is suited for tracking volumetric changes/accumulating doses in the esophagus.
- Introduced medial axis of esophagus to calculate inter-fraction positional uncertainty.
- Planned and accumulated esophagus dose-volume parameter differences are significant.
- Longitudinal expansion of esophagus may link to acute esophagitis.
Background and purpose: Minimizing acute esophagitis (AE) in locally advanced non-small cell lung cancer (LA-NSCLC) is critical given the proximity between the esophagus and the tumor. In this pilot study, we developed a clinical platform for quantification of accumulated doses and volumetric changes of the esophagus via weekly Magnetic Resonance Imaging (MRI) for adaptive radiotherapy (RT).
Material and methods: Eleven patients treated via intensity-modulated RT to 60–70 Gy in 2–3 Gy fractions with concurrent chemotherapy underwent weekly MRIs. Eight patients developed AE grade 2 (AE2), 3–6 weeks after RT started. First, weekly MRI esophagus contours were rigidly propagated to the planning CT and the distances between the medial esophageal axes were calculated as positional uncertainties. Then, the weekly MRIs were deformably registered to the planning CT and the total dose delivered to the esophagus was accumulated. Weekly Maximum Esophagus Expansion (MEex) was calculated using the Jacobian map. Eventually, esophageal dose parameters (Mean Esophagus Dose (MED), V90%, and D5cc) between the planned and accumulated dose were compared.
Results: Positional esophagus uncertainties were 6.8 ± 1.8 mm across patients. For the entire cohort at the end of RT, the median accumulated MED was significantly higher than the planned dose (24 Gy vs. 21 Gy, p = 0.006). The median V90% and D5cc were 12.5 cm3 vs. 11.5 cm3 (p = 0.05) and 61 Gy vs. 60 Gy (p = 0.01), for accumulated and planned dose, respectively. The median MEex was 24% and was significantly associated with AE2 (p = 0.008).
Conclusions: MRI is well suited for tracking esophagus volumetric changes and accumulating doses. Longitudinal esophagus expansion could reflect radiation-induced inflammation that may link to AE.
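The Jacobian-map expansion measure underlying MEex can be sketched in a few lines; the dense 2D displacement field and the uniform-dilation example below are illustrative assumptions, not the study's deformable-registration pipeline.

```python
import numpy as np

def jacobian_determinant(u):
    """Per-voxel determinant of the Jacobian of the mapping x -> x + u(x).

    u has shape (2, H, W): displacement components along axes 0 and 1.
    det > 1 marks local expansion, det < 1 local shrinkage.
    """
    du0_d0, du0_d1 = np.gradient(u[0])  # derivatives of component 0
    du1_d0, du1_d1 = np.gradient(u[1])  # derivatives of component 1
    # 2D determinant of (I + grad u)
    return (1 + du0_d0) * (1 + du1_d1) - du0_d1 * du1_d0

# Uniform 10% dilation: x -> 1.1 * x, i.e. u(x) = 0.1 * x, so the
# determinant should be 1.1 * 1.1 = 1.21 at every voxel.
ax0, ax1 = np.mgrid[0:8, 0:8].astype(float)
det = jacobian_determinant(0.1 * np.stack([ax0, ax1]))
max_expansion_pct = (det.max() - 1.0) * 100  # analogous in spirit to MEex
```

A determinant above 1 at a voxel means the registration locally stretched the tissue there, which is how a Jacobian map distinguishes expansion from shrinkage.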
Affiliation(s)
- Sadegh Alam
  - Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, United States
- Maria Thor
  - Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, United States
- Andreas Rimner
  - Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, United States
- Neelam Tyagi
  - Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, United States
- Si-Yuan Zhang
  - Department of Radiation Oncology, Peking University Cancer Hospital & Institute, Beijing, China
- Li Cheng Kuo
  - Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, United States
- Saad Nadeem
  - Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, United States
- Wei Lu
  - Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, United States
- Yu-Chi Hu
  - Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, United States
- Ellen Yorke
  - Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, United States
- Pengpeng Zhang
  - Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, United States
6
Nadeem S, Zhang P, Rimner A, Sonke JJ, Deasy JO, Tannenbaum A. LDeform: Longitudinal deformation analysis for adaptive radiotherapy of lung cancer. Med Phys 2020; 47:132-141. [PMID: 31693764] [PMCID: PMC7295163] [DOI: 10.1002/mp.13907]
Abstract
Purpose: Conventional radiotherapy for large lung tumors is given over several weeks, during which the tumor typically regresses in a highly nonuniform and variable manner. Adaptive radiotherapy would ideally follow these shape changes, but we need an accurate method to extrapolate tumor shape changes. We propose a computationally efficient algorithm to quantitate tumor surface shape changes that makes minimal assumptions, identifies fixed points, and can be used to predict future tumor geometrical response.
Methods: A novel combination of nonrigid iterative closest point (ICP) and local shape-preserving map algorithms, LDeform, is developed to enable visualization, prediction, and categorization of both diffeomorphic and nondiffeomorphic tumor deformations during an extended course of radiotherapy.
Results: We tested and validated our technique on 31 longitudinal CT/MRI subjects, with five to nine time points each. Based on this tumor deformation analysis, regions of local growth, shrinkage, and anchoring are identified and tracked across multiple time points. This categorization in turn represents a rational biomarker of local response. Results demonstrate useful predictive power, with an averaged Dice coefficient and surface mean-squared error of 0.85 and 2.8 mm, respectively, over all images.
Conclusions: We conclude that the LDeform algorithm can facilitate the adaptive decision-making process during lung cancer radiotherapy.
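The Dice coefficient reported above is a standard overlap measure between a predicted and a reference segmentation. A minimal sketch with synthetic masks (not the study's data) is:

```python
import numpy as np

def dice(a, b):
    # Dice coefficient 2|A ∩ B| / (|A| + |B|) between boolean masks;
    # 1.0 is perfect overlap, 0.0 is no overlap.
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = int(a.sum()) + int(b.sum())
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

# Two synthetic tumor masks offset by one row: each covers 8 voxels,
# 4 of which overlap, so Dice = 2*4 / (8+8) = 0.5.
a = np.zeros((4, 4), bool); a[:2] = True
b = np.zeros((4, 4), bool); b[1:3] = True
score = dice(a, b)
```

An averaged Dice of 0.85, as in the abstract, thus indicates substantially better overlap than this half-shifted toy example.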
Affiliation(s)
- Saad Nadeem
  - Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY 10065, USA
- Pengpeng Zhang
  - Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY 10065, USA
- Andreas Rimner
  - Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, NY 10065, USA
- Jan-Jakob Sonke
  - Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Joseph O. Deasy
  - Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY 10065, USA
- Allen Tannenbaum
  - Departments of Computer Science and Applied Mathematics & Statistics, Stony Brook University, Stony Brook, NY 11794, USA
7
Mirhosseini S, Gutenko I, Ojal S, Marino J, Kaufman A. Immersive Virtual Colonoscopy. IEEE Transactions on Visualization and Computer Graphics 2019; 25:2011-2021. [PMID: 30762554] [DOI: 10.1109/tvcg.2019.2898763]
Abstract
Virtual colonoscopy (VC) is a non-invasive screening tool for colorectal polyps which employs volume visualization of a colon model reconstructed from a CT scan of the patient's abdomen. We present an immersive analytics system for VC which enhances and improves the traditional desktop VC through the use of VR technologies. Our system, using a head-mounted display (HMD), includes all of the standard VC features, such as the volume-rendered endoluminal fly-through, measurement tool, bookmark modes, electronic biopsy, and slice views. The use of VR immersion, stereo, and wider fields of view and regard has a positive effect on polyp search and analysis tasks in our immersive VC system, a volumetric-based immersive analytics application. Navigation includes enhanced automatic speed and direction controls, based on the user's head orientation, in conjunction with physical navigation for exploration of the local proximity. To accommodate the resolution and frame-rate requirements of HMDs, new rendering techniques have been developed, including mesh-assisted volume raycasting and a novel lighting paradigm. Feedback and further suggestions from expert radiologists show the promise of our system for immersive analysis in VC and encourage new avenues for exploring the use of VR in visualization systems for medical diagnosis.
8
Nadeem S, Gu X, Kaufman AE. LMap: Shape-Preserving Local Mappings for Biomedical Visualization. IEEE Transactions on Visualization and Computer Graphics 2018; 24:3111-3122. [PMID: 29990124] [PMCID: PMC6309451] [DOI: 10.1109/tvcg.2017.2772237]
Abstract
Visualization of medical organs and biological structures is a challenging task because of their complex geometry and the resultant occlusions. Global spherical and planar mapping techniques simplify the complex geometry and resolve the occlusions to aid in visualization. However, while resolving occlusions, these techniques do not preserve the geometric context, making them less suitable for mission-critical biomedical visualization tasks. In this paper, we present a shape-preserving local mapping technique for resolving occlusions locally while preserving the overall geometric context. More specifically, we present a novel visualization algorithm, LMap, for conformally parameterizing and deforming a selected local region-of-interest (ROI) on an arbitrary surface. The resulting shape-preserving local mappings help visualize complex surfaces while preserving the overall geometric context. The algorithm is based on the robust and efficient extrinsic Ricci flow technique, and uses the dynamic Ricci flow algorithm to guarantee the existence of a local map for a selected ROI on an arbitrary surface. We show the effectiveness of our method in three challenging use cases: (1) multimodal brain visualization, (2) optimal coverage of virtual colonoscopy centerline flythrough, and (3) molecular surface visualization.