1. Li J, Lai C, Wang Y, Luo A, Yuan X. SpectrumVA: Visual Analysis of Astronomical Spectra for Facilitating Classification Inspection. IEEE Trans Vis Comput Graph 2024; 30:5386-5403. PMID: 37440386; DOI: 10.1109/tvcg.2023.3294958.
Abstract
In astronomical spectral analysis, class recognition is essential and fundamental for subsequent scientific research. The experts often perform the visual inspection after automatic classification to deal with low-quality spectra to improve accuracy. However, given the enormous spectral volume and inadequacy of the current inspection practice, such inspection is tedious and time-consuming. This article presents a visual analytics system named SpectrumVA to promote the efficiency of visual inspection while guaranteeing accuracy. We abstract inspection as a visual parameter space analysis process, using redshifts and spectral lines as parameters. Different navigation strategies are employed in the "selection-inspection-promotion" workflow. At the selection stage, we help the experts identify a spectrum of interest through spectral representations and auxiliary information. Several possible redshifts and corresponding important spectral lines are also recommended through a global-to-local strategy to provide an appropriate entry point for the inspection. The inspection stage adopts a variety of instant visual feedback to help the experts adjust the redshift and select spectral lines in an informed trial-and-error manner. Similar spectra to the inspected one rather than different ones are visualized at the promotion stage, making the inspection process more fluent. We demonstrate the effectiveness of SpectrumVA through a quantitative algorithmic assessment, a case study, interviews with domain experts, and a user study.

2. Humer C, Nicholls R, Heberle H, Heckmann M, Pühringer M, Wolf T, Lübbesmeyer M, Heinrich J, Hillenbrand J, Volpin G, Streit M. CIME4R: Exploring iterative, AI-guided chemical reaction optimization campaigns in their parameter space. J Cheminform 2024; 16:51. PMID: 38730469; PMCID: PMC11636728; DOI: 10.1186/s13321-024-00840-1.
Abstract
Chemical reaction optimization (RO) is an iterative process that results in large, high-dimensional datasets. Current tools allow for only limited analysis and understanding of parameter spaces, making it hard for scientists to review or follow changes throughout the process. With the recent emergence of using artificial intelligence (AI) models to aid RO, another level of complexity has been added. Helping to assess the quality of a model's prediction and understand its decision is critical to supporting human-AI collaboration and trust calibration. To address this, we propose CIME4R, an open-source interactive web application for analyzing RO data and AI predictions. CIME4R supports users in (i) comprehending a reaction parameter space, (ii) investigating how an RO process developed over iterations, (iii) identifying critical factors of a reaction, and (iv) understanding model predictions. This facilitates making informed decisions during the RO process and helps users to review a completed RO process, especially in AI-guided RO. CIME4R aids decision-making through the interaction between humans and AI by combining the strengths of expert experience and high computational precision. We developed and tested CIME4R with domain experts and verified its usefulness in three case studies. Using CIME4R the experts were able to produce valuable insights from past RO campaigns and to make informed decisions on which experiments to perform next. We believe that CIME4R is the beginning of an open-source community project with the potential to improve the workflow of scientists working in the reaction optimization domain.
Scientific contribution: To the best of our knowledge, CIME4R is the first open-source interactive web application tailored to the peculiar analysis requirements of reaction optimization (RO) campaigns. Due to the growing use of AI in RO, we developed CIME4R with a special focus on facilitating human-AI collaboration and understanding of AI models. We developed and evaluated CIME4R in collaboration with domain experts to verify its practical usefulness.
Affiliation(s)
- Rachel Nicholls: Division Crop Science, Bayer AG, Monheim am Rhein, 40789, Germany
- Henry Heberle: Division Crop Science, Bayer AG, Monheim am Rhein, 40789, Germany
- Thomas Wolf: Division Crop Science, Bayer AG, Frankfurt, 65926, Germany
- Julian Heinrich: Division Crop Science, Bayer AG, Monheim am Rhein, 40789, Germany
- Giulio Volpin: Division Crop Science, Bayer AG, Frankfurt, 65926, Germany
- Marc Streit: Johannes Kepler University Linz, Linz, 4040, Austria; datavisyn GmbH, Linz, 4040, Austria

3. Bayat HC, Waldner M, Raidou RG, Potel M. A Workflow to Visually Assess Interobserver Variability in Medical Image Segmentation. IEEE Comput Graph Appl 2024; 44:86-94. PMID: 38271155; DOI: 10.1109/mcg.2023.3333475.
Abstract
We introduce a workflow for the visual assessment of interobserver variability in medical image segmentation. Image segmentation is a crucial step in the diagnosis, prognosis, and treatment of many diseases. Despite the advancements in autosegmentation, clinical practice widely relies on manual delineations performed by radiologists. Our work focuses on designing a solution for understanding the radiologists' thought processes during segmentation and for unveiling reasons that lead to interobserver variability. To this end, we propose a visual analysis tool connecting multiple radiologists' delineation processes with their outcomes, and we demonstrate its potential in a case study.

4. Piccolotto N, Bögl M, Miksch S. Visual Parameter Space Exploration in Time and Space. Comput Graph Forum 2023; 42:e14785. PMID: 38505647; PMCID: PMC10947302; DOI: 10.1111/cgf.14785.
Abstract
Computational models, such as simulations, are central to a wide range of fields in science and industry. Those models take input parameters and produce some output. To fully exploit their utility, relations between parameters and outputs must be understood. These include, for example, which parameter setting produces the best result (optimization) or which ranges of parameter settings produce a wide variety of results (sensitivity). Such tasks are often difficult to achieve for various reasons, for example, the size of the parameter space, and are therefore supported with visual analytics. In this paper, we survey visual parameter space exploration (VPSE) systems involving spatial and temporal data. We focus on interactive visualizations and user interfaces. Through thematic analysis of the surveyed papers, we identify common workflow steps and approaches to support them. We also identify topics for future work that will help enable VPSE on a greater variety of computational models.
Affiliation(s)
- Nikolaus Piccolotto: TU Wien, Institute of Visual Computing and Human-Centered Technology, Wien, Austria
- Markus Bögl: TU Wien, Institute of Visual Computing and Human-Centered Technology, Wien, Austria
- Silvia Miksch: TU Wien, Institute of Visual Computing and Human-Centered Technology, Wien, Austria

5. Younesy H, Pober J, Möller T, Karimi MM. ModEx: a general purpose computer model exploration system. Front Bioinform 2023; 3:1153800. PMID: 37304402; PMCID: PMC10249055; DOI: 10.3389/fbinf.2023.1153800.
Abstract
We present a general purpose visual analysis system that can be used for exploring parameters of a variety of computer models. Our proposed system offers key components of a visual parameter analysis framework including parameter sampling, deriving output summaries, and an exploration interface. It also provides an API for rapid development of parameter space exploration solutions as well as the flexibility to support custom workflows for different application domains. We evaluate the effectiveness of our system by demonstrating it in three domains: data mining, machine learning, and a specific application in bioinformatics.
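The core loop described here (sample parameters, run the model, derive output summaries for exploration) can be illustrated with a small Python sketch; toy_model and the chosen summaries are placeholders for illustration only, not part of ModEx or its API.

    import itertools
    import numpy as np

    def toy_model(alpha, beta):
        """Hypothetical stand-in for a computer model: returns a 1D output curve."""
        x = np.linspace(0.0, 1.0, 50)
        return np.sin(alpha * x) + beta * x

    # Sample the parameter space on a regular grid (one of several sampling options).
    alphas = np.linspace(0.5, 5.0, 10)
    betas = np.linspace(-1.0, 1.0, 5)
    records = []
    for a, b in itertools.product(alphas, betas):
        y = toy_model(a, b)
        # Derive scalar output summaries from the raw model output.
        records.append({"alpha": a, "beta": b, "mean": y.mean(), "max": y.max(), "range": np.ptp(y)})

    # The resulting table of (parameters, summaries) is what an exploration
    # interface would then visualize, e.g. as scatter plots or parallel coordinates.
    best = max(records, key=lambda r: r["max"])
    print(f"highest peak at alpha={best['alpha']:.2f}, beta={best['beta']:.2f}")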
Affiliation(s)
- Hamid Younesy: School of Computing Science, Simon Fraser University, Burnaby, BC, Canada
- Torsten Möller: Research Network Data Science and Faculty of Computer Science, University of Vienna, Vienna, Austria
- Mohammad M. Karimi: Comprehensive Cancer Centre, School of Cancer and Pharmaceutical Sciences, Faculty of Life Sciences and Medicine, King's College London, London, United Kingdom

6. Rydow E, Borgo R, Fang H, Torsney-Weir T, Swallow B, Porphyre T, Turkay C, Chen M. Development and Evaluation of Two Approaches of Visual Sensitivity Analysis to Support Epidemiological Modeling. IEEE Trans Vis Comput Graph 2023; 29:1255-1265. PMID: 36173770; DOI: 10.1109/tvcg.2022.3209464.
Abstract
Computational modeling is a commonly used technology in many scientific disciplines and has played a noticeable role in combating the COVID-19 pandemic. Modeling scientists conduct sensitivity analysis frequently to observe and monitor the behavior of a model during its development and deployment. The traditional algorithmic ranking of sensitivity of different parameters usually does not provide modeling scientists with sufficient information to understand the interactions between different parameters and model outputs, while modeling scientists need to observe a large number of model runs in order to gain actionable information for parameter optimization. To address the above challenge, we developed and compared two visual analytics approaches, namely: algorithm-centric and visualization-assisted, and visualization-centric and algorithm-assisted. We evaluated the two approaches based on a structured analysis of different tasks in visual sensitivity analysis as well as the feedback of domain experts. While the work was carried out in the context of epidemiological modeling, the two approaches developed in this work are directly applicable to a variety of modeling processes featuring time series outputs, and can be extended to work with models with other types of outputs.
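For context, the Python sketch below shows a simple one-at-a-time sensitivity ranking over a toy compartmental model: each parameter is varied over its range while the others stay at baseline, and the spread of a scalar output serves as the sensitivity score. The model, parameter names, and ranges are illustrative assumptions and do not reproduce the paper's algorithmic ranking or its two visual approaches.

    import numpy as np

    def epidemic_model(beta, gamma, days=100):
        """Toy SIR-style simulator (hypothetical, for illustration only)."""
        s, i, r = 0.99, 0.01, 0.0
        infected = []
        for _ in range(days):
            new_inf = beta * s * i
            new_rec = gamma * i
            s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
            infected.append(i)
        return np.array(infected)

    baseline = {"beta": 0.3, "gamma": 0.1}
    ranges = {"beta": (0.1, 0.6), "gamma": (0.05, 0.3)}

    # One-at-a-time sweep: score each parameter by the spread of peak prevalence.
    scores = {}
    for name, (lo, hi) in ranges.items():
        peaks = []
        for value in np.linspace(lo, hi, 20):
            params = dict(baseline, **{name: value})
            peaks.append(epidemic_model(**params).max())
        scores[name] = np.std(peaks)

    for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{name}: sensitivity score {score:.4f}")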

7. Kumpf A, Stumpfegger J, Hartl PF, Westermann R. Visual Analysis of Multi-Parameter Distributions Across Ensembles of 3D Fields. IEEE Trans Vis Comput Graph 2022; 28:3530-3545. PMID: 33625986; DOI: 10.1109/tvcg.2021.3061925.
Abstract
For an ensemble of 3D multi-parameter fields, we present a visual analytics workflow to analyse whether, and which, parts of a selected multi-parameter distribution are present in all ensemble members. Supported by a parallel coordinate plot, a multi-parameter brush is applied to all ensemble members to select data points with a similar multi-parameter distribution. By a combination of spatial sub-division and a covariance analysis of partitioned sub-sets of data points, a tight partition in multi-parameter space with a reduced number of selected data points is obtained. To assess the representativeness of the selected multi-parameter distribution across the ensemble, we propose a novel extension of violin plots that can show multiple parameter distributions simultaneously. We investigate the visual design that effectively conveys (dis-)similarities in multi-parameter distributions, and demonstrate that users can quickly comprehend parameter-specific differences regarding distribution shape and representativeness from a side-by-side view of these plots. In a 3D spatial view, users can analyse and compare the spatial distribution of selected data points in different ensemble members via interval-based isosurface raycasting. In two real-world application cases we show how our approach is used to analyse the multi-parameter distributions across an ensemble of 3D fields.

8. Dunne M, Mohammadi H, Challenor P, Borgo R, Porphyre T, Vernon I, Firat EE, Turkay C, Torsney-Weir T, Goldstein M, Reeve R, Fang H, Swallow B. Complex model calibration through emulation, a worked example for a stochastic epidemic model. Epidemics 2022; 39:100574. PMID: 35617882; PMCID: PMC9109972; DOI: 10.1016/j.epidem.2022.100574.
Abstract
Uncertainty quantification is a formal paradigm of statistical estimation that aims to account for all uncertainties inherent in the modelling process of real-world complex systems. The methods are directly applicable to stochastic models in epidemiology; however, they have thus far not been widely used in this context. In this paper, we provide a tutorial on uncertainty quantification of stochastic epidemic models, aiming to facilitate the use of the uncertainty quantification paradigm for practitioners with other complex stochastic simulators of applied systems. We provide a formal workflow including the important decisions and considerations that need to be taken, and illustrate the methods over a simple stochastic epidemic model of UK SARS-CoV-2 transmission and patient outcome. We also present new approaches to visualisation of outputs from sensitivity analyses and uncertainty quantification more generally in high input and/or output dimensions.
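A minimal emulation sketch in Python, assuming scikit-learn: a Gaussian process is fitted to a handful of runs of a stand-in stochastic simulator and then used as a cheap surrogate with predictive uncertainty. This illustrates the general emulator idea only, not the paper's calibration workflow.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def expensive_simulator(theta):
        """Stand-in for a stochastic epidemic model run; returns a scalar output."""
        beta, gamma = theta
        rng = np.random.default_rng()
        return beta / gamma + rng.normal(scale=0.05)  # noisy summary quantity

    # Design points: a small number of simulator runs across the input space.
    rng = np.random.default_rng(1)
    X_train = rng.uniform([0.1, 0.05], [0.6, 0.3], size=(40, 2))
    y_train = np.array([expensive_simulator(theta) for theta in X_train])

    # Fit a GP emulator; the WhiteKernel term absorbs stochastic simulator noise.
    kernel = RBF(length_scale=[0.1, 0.05]) + WhiteKernel()
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(X_train, y_train)

    # The cheap emulator (with uncertainty) can now replace the simulator in
    # calibration or sensitivity studies.
    X_new = rng.uniform([0.1, 0.05], [0.6, 0.3], size=(5, 2))
    mean, std = gp.predict(X_new, return_std=True)
    print(np.c_[X_new, mean, std])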
Affiliation(s)
- Michael Dunne: College of Engineering, Mathematics and Physical Sciences, University of Exeter, Exeter, UK
- Hossein Mohammadi: College of Engineering, Mathematics and Physical Sciences, University of Exeter, Exeter, UK
- Peter Challenor: College of Engineering, Mathematics and Physical Sciences, University of Exeter, Exeter, UK
- Rita Borgo: Department of Informatics, King's College London, London, UK
- Thibaud Porphyre: Laboratoire de Biométrie et Biologie Evolutive, VetAgro Sup, Marcy l'Etoile, France
- Ian Vernon: Department of Mathematical Sciences, Durham University, Durham, UK
- Elif E Firat: Department of Computer Science, University of Nottingham, Nottingham, UK
- Cagatay Turkay: Centre for Interdisciplinary Methodologies, University of Warwick, Coventry, UK
- Thomas Torsney-Weir: VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Vienna, Austria
- Richard Reeve: Boyd Orr Centre for Population and Ecosystem Health, Institute of Biodiversity, Animal Health and Comparative Medicine, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, UK
- Hui Fang: Department of Computer Science, Loughborough University, Loughborough, UK
- Ben Swallow: School of Mathematics and Statistics, University of Glasgow, Glasgow, UK

9. Victor VS, Schmeiser A, Leitte H, Gramsch S. Visual Parameter Space Analysis for Optimizing the Quality of Industrial Nonwovens. IEEE Comput Graph Appl 2022; 42:56-67. PMID: 35239477; DOI: 10.1109/mcg.2022.3155867.
Abstract
Technical textiles, in particular, nonwovens used, for example, in medical masks, have become increasingly important in our daily lives. The quality of these textiles depends on the manufacturing process parameters that cannot be easily optimized in live settings. In this article, we present a visual analytics framework that enables interactive parameter space exploration and parameter optimization in industrial production processes of nonwovens. Therefore, we survey analysis strategies used in optimizing industrial production processes of nonwovens and support them in our tool. To enable real-time interaction, we augment the digital twin with a machine learning surrogate model for rapid quality computations. In addition, we integrate mechanisms for sensitivity analysis that ensure consistent product quality under mild parameter changes. In our case study, we explore the finding of optimal parameter sets, investigate the input-output relationship between parameters, and conduct a sensitivity analysis to find settings that result in robust quality.

10. He W, Wang J, Guo H, Wang KC, Shen HW, Raj M, Nashed YSG, Peterka T. InSituNet: Deep Image Synthesis for Parameter Space Exploration of Ensemble Simulations. IEEE Trans Vis Comput Graph 2020; 26:23-33. PMID: 31425097; DOI: 10.1109/tvcg.2019.2934312.
Abstract
We propose InSituNet, a deep learning based surrogate model to support parameter space exploration for ensemble simulations that are visualized in situ. In situ visualization, generating visualizations at simulation time, is becoming prevalent in handling large-scale simulations because of the I/O and storage constraints. However, in situ visualization approaches limit the flexibility of post-hoc exploration because the raw simulation data are no longer available. Although multiple image-based approaches have been proposed to mitigate this limitation, those approaches lack the ability to explore the simulation parameters. Our approach allows flexible exploration of parameter space for large-scale ensemble simulations by taking advantage of the recent advances in deep learning. Specifically, we design InSituNet as a convolutional regression model to learn the mapping from the simulation and visualization parameters to the visualization results. With the trained model, users can generate new images for different simulation parameters under various visualization settings, which enables in-depth analysis of the underlying ensemble simulations. We demonstrate the effectiveness of InSituNet in combustion, cosmology, and ocean simulations through quantitative and qualitative evaluations.
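A toy sketch of the underlying idea, assuming PyTorch: a small decoder network regresses from a parameter vector to an image, so that new parameter settings can be previewed without re-running the simulation. The architecture, tensor sizes, and training data below are placeholders and far simpler than InSituNet itself.

    import torch
    from torch import nn

    class ParamToImage(nn.Module):
        """Toy surrogate: regress from a parameter vector to a small RGB image."""
        def __init__(self, n_params=5):
            super().__init__()
            self.fc = nn.Linear(n_params, 128 * 8 * 8)
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),  # 8x8 -> 16x16
                nn.ReLU(),
                nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1),    # 16x16 -> 32x32
                nn.Sigmoid(),
            )

        def forward(self, params):
            x = self.fc(params).view(-1, 128, 8, 8)
            return self.decoder(x)

    model = ParamToImage()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Training pairs would come from images rendered in situ for ensemble runs;
    # random tensors stand in for them here.
    params = torch.randn(16, 5)
    images = torch.rand(16, 3, 32, 32)
    for _ in range(5):
        optimizer.zero_grad()
        loss = loss_fn(model(params), images)
        loss.backward()
        optimizer.step()

    # Once trained, new parameter settings can be "rendered" by the surrogate.
    with torch.no_grad():
        preview = model(torch.randn(1, 5))
    print(preview.shape)  # torch.Size([1, 3, 32, 32])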

11. Argüello D, Sánchez Acevedo HG, González-Estrada OA. Comparison of segmentation tools for structural analysis of bone tissues by finite elements. J Phys Conf Ser 2019; 1386:012113. DOI: 10.1088/1742-6596/1386/1/012113.

12. Jadhav S, Nadeem S, Kaufman A. FeatureLego: Volume Exploration Using Exhaustive Clustering of Super-Voxels. IEEE Trans Vis Comput Graph 2019; 25:2725-2737. PMID: 30028709; PMCID: PMC6703906; DOI: 10.1109/tvcg.2018.2856744.
Abstract
We present a volume exploration framework, FeatureLego, that uses a novel voxel clustering approach for efficient selection of semantic features. We partition the input volume into a set of compact super-voxels that represent the finest selection granularity. We then perform an exhaustive clustering of these super-voxels using a graph-based clustering method. Unlike the prevalent brute-force parameter sampling approaches, we propose an efficient algorithm to perform this exhaustive clustering. By computing an exhaustive set of clusters, we aim to capture as many boundaries as possible and ensure that the user has sufficient options for efficiently selecting semantically relevant features. Furthermore, we merge all the computed clusters into a single tree of meta-clusters that can be used for hierarchical exploration. We implement an intuitive user-interface to interactively explore volumes using our clustering approach. Finally, we show the effectiveness of our framework on multiple real-world datasets of different modalities.

13. Visual Analytics for the Representation, Exploration, and Analysis of High-Dimensional, Multi-faceted Medical Data. Adv Exp Med Biol 2019; 1138:137-162. PMID: 31313263; DOI: 10.1007/978-3-030-14227-8_10.
Abstract
Medicine is among those research fields with a significant impact on humans and their health. Already for decades, medicine has established a tight coupling with the visualization domain, proving the importance of developing visualization techniques, designed exclusively for this research discipline. However, medical data is steadily increasing in complexity with the appearance of heterogeneous, multi-modal, multi-parametric, cohort or population, as well as uncertain data. To deal with this kind of complex data, the field of Visual Analytics has emerged. In this chapter, we discuss the many dimensions and facets of medical data. Based on this classification, we provide a general overview of state-of-the-art visualization systems and solutions dealing with high-dimensional, multi-faceted data. Our particular focus will be on multi-modal, multi-parametric data, on data from cohort or population studies and on uncertain data, especially with respect to Visual Analytics applications for the representation, exploration, and analysis of high-dimensional, multi-faceted medical data.

14. Taveira LFR, Kurc T, Melo ACMA, Kong J, Bremer E, Saltz JH, Teodoro G. Multi-objective Parameter Auto-tuning for Tissue Image Segmentation Workflows. J Digit Imaging 2019; 32:521-533. PMID: 30402669; PMCID: PMC6499855; DOI: 10.1007/s10278-018-0138-z.
Abstract
We propose a software platform that integrates methods and tools for multi-objective parameter auto-tuning in tissue image segmentation workflows. The goal of our work is to provide an approach for improving the accuracy of nucleus/cell segmentation pipelines by tuning their input parameters. The shape, size, and texture features of nuclei in tissue are important biomarkers for disease prognosis, and accurate computation of these features depends on accurate delineation of boundaries of nuclei. Input parameters in many nucleus segmentation workflows affect segmentation accuracy and have to be tuned for optimal performance. This is a time-consuming and computationally expensive process; automating this step facilitates more robust image segmentation workflows and enables more efficient application of image analysis in large image datasets. Our software platform adjusts the parameters of a nuclear segmentation algorithm to maximize the quality of image segmentation results while minimizing the execution time. It implements several optimization methods to search the parameter space efficiently. In addition, the methodology is developed to execute on high-performance computing systems to reduce the execution time of the parameter tuning phase. These capabilities are packaged in a Docker container for easy deployment and can be used through a friendly interface extension in 3D Slicer. Our results using three real-world image segmentation workflows demonstrate that the proposed solution is able to (1) search a small fraction (about 100 points) of the parameter space, which contains billions to trillions of points, and improve the quality of segmentation output by 1.20×, 1.29×, and 1.29×, on average; (2) decrease the execution time of a segmentation workflow by up to 11.79× while improving output quality; and (3) effectively use parallel systems to accelerate parameter tuning and segmentation phases.
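The multi-objective flavour of this tuning problem can be sketched as a random search followed by a Pareto filter over (segmentation quality, execution time); run_segmentation, its parameter names, and the scoring formula below are hypothetical stand-ins for illustration, not the platform's interface or its optimization methods.

    import numpy as np

    rng = np.random.default_rng(42)

    def run_segmentation(threshold, min_size):
        """Hypothetical stand-in: returns (dice_score, runtime_seconds) for one setting."""
        dice = 0.7 + 0.2 * np.exp(-((threshold - 0.55) ** 2) / 0.02) - 0.0005 * min_size
        runtime = 5.0 + 0.02 * min_size + rng.normal(scale=0.2)
        return dice, runtime

    # Randomly sample a tiny fraction of the parameter space and evaluate both objectives.
    samples = [(rng.uniform(0.3, 0.8), rng.integers(10, 500)) for _ in range(100)]
    results = [(*p, *run_segmentation(*p)) for p in samples]

    # Keep the Pareto front: settings not dominated in (higher dice, lower runtime).
    def dominated(a, b):
        return b[2] >= a[2] and b[3] <= a[3] and (b[2] > a[2] or b[3] < a[3])

    pareto = [r for r in results if not any(dominated(r, other) for other in results)]
    for thr, ms, dice, rt in sorted(pareto, key=lambda r: -r[2]):
        print(f"threshold={thr:.2f} min_size={ms} dice={dice:.3f} runtime={rt:.1f}s")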
Affiliation(s)
- Luis F R Taveira: Department of Computer Science, University of Brasília, Brasília, Brazil
- Tahsin Kurc: Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA; Scientific Data Group, Oak Ridge National Laboratory, Oak Ridge, TN, USA
- Alba C M A Melo: Department of Computer Science, University of Brasília, Brasília, Brazil
- Jun Kong: Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, GA, USA; Department of Biomedical Engineering, Emory - Georgia Institute of Technology, Atlanta, GA, USA; Department of Mathematics and Statistics, Georgia State University, Atlanta, GA, USA
- Erich Bremer: Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA
- Joel H Saltz: Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA
- George Teodoro: Department of Computer Science, University of Brasília, Brasília, Brazil; Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, GA, USA

15. Orban D, Keefe DF, Biswas A, Ahrens J, Rogers D. Drag and Track: A Direct Manipulation Interface for Contextualizing Data Instances within a Continuous Parameter Space. IEEE Trans Vis Comput Graph 2018; 25:256-266. PMID: 30136980; DOI: 10.1109/tvcg.2018.2865051.
Abstract
We present a direct manipulation technique that allows material scientists to interactively highlight relevant parameterized simulation instances located in dimensionally reduced spaces, enabling a user-defined understanding of a continuous parameter space. Our goals are two-fold: first, to build a user-directed intuition of dimensionally reduced data, and second, to provide a mechanism for creatively exploring parameter relationships in parameterized simulation sets, called ensembles. We start by visualizing ensemble data instances in dimensionally reduced scatter plots. To understand these abstract views, we employ user-defined virtual data instances that, through direct manipulation, search an ensemble for similar instances. Users can create multiple of these direct manipulation queries to visually annotate the spaces with sets of highlighted ensemble data instances. User-defined goals are therefore translated into custom illustrations that are projected onto the dimensionally reduced spaces. Combined forward and inverse searches of the parameter space follow naturally allowing for continuous parameter space prediction and visual query comparison in the context of an ensemble. The potential for this visualization technique is confirmed via expert user feedback for a shock physics application and synthetic model analysis.

16. Harrison DG, Efford ND, Fisher QJ, Ruddle RA. PETMiner: A Visual Analysis Tool for Petrophysical Properties of Core Sample Data. IEEE Trans Vis Comput Graph 2018; 24:1728-1741. PMID: 28320668; DOI: 10.1109/tvcg.2017.2682865.
Abstract
The aim of the PETMiner software is to reduce the time and monetary cost of analysing petrophysical data that is obtained from reservoir sample cores. Analysis of these data requires tacit knowledge to fill 'gaps' so that predictions can be made for incomplete data. Through discussions with 30 industry and academic specialists, we identified three analysis use cases that exemplified the limitations of current petrophysics analysis tools. We used those use cases to develop nine core requirements for PETMiner, which is innovative because of its ability to display detailed images of the samples as data points, directly plot multiple sample properties and derived measures for comparison, and substantially reduce interaction cost. An 11-month evaluation demonstrated benefits across all three use cases by allowing a consultant to: (1) generate more accurate reservoir flow models, (2) discover a previously unknown relationship between one easy-to-measure property and another that is costly, and (3) make a 100-fold reduction in the time required to produce plots for a report.

17. The semiotics of medical image segmentation. Med Image Anal 2018; 44:54-71. DOI: 10.1016/j.media.2017.11.007.

18. Muhlbacher T, Linhardt L, Moller T, Piringer H. TreePOD: Sensitivity-Aware Selection of Pareto-Optimal Decision Trees. IEEE Trans Vis Comput Graph 2018; 24:174-183. PMID: 28866575; DOI: 10.1109/tvcg.2017.2745158.
Abstract
Balancing accuracy gains with other objectives such as interpretability is a key challenge when building decision trees. However, this process is difficult to automate because it involves know-how about the domain as well as the purpose of the model. This paper presents TreePOD, a new approach for sensitivity-aware model selection along trade-offs. TreePOD is based on exploring a large set of candidate trees generated by sampling the parameters of tree construction algorithms. Based on this set, visualizations of quantitative and qualitative tree aspects provide a comprehensive overview of possible tree characteristics. Along trade-offs between two objectives, TreePOD provides efficient selection guidance by focusing on Pareto-optimal tree candidates. TreePOD also conveys the sensitivities of tree characteristics on variations of selected parameters by extending the tree generation process with a full-factorial sampling. We demonstrate how TreePOD supports a variety of tasks involved in decision tree selection and describe its integration in a holistic workflow for building and selecting decision trees. For evaluation, we illustrate a case study for predicting critical power grid states, and we report qualitative feedback from domain experts in the energy sector. This feedback suggests that TreePOD enables users with and without statistical background a confident and efficient identification of suitable decision trees.

19. Liu J, Dwyer T, Marriott K, Millar J, Haworth A. Understanding the Relationship Between Interactive Optimisation and Visual Analytics in the Context of Prostate Brachytherapy. IEEE Trans Vis Comput Graph 2018; 24:319-329. PMID: 28866546; DOI: 10.1109/tvcg.2017.2744418.
Abstract
The fields of operations research and computer science have long sought to find automatic solver techniques that can find high-quality solutions to difficult real-world optimisation problems. The traditional workflow is to exactly model the problem and then enter this model into a general-purpose "black-box" solver. In practice, however, many problems cannot be solved completely automatically, but require a "human-in-the-loop" to iteratively refine the model and give hints to the solver. In this paper, we explore the parallels between this interactive optimisation workflow and the visual analytics sense-making loop. We assert that interactive optimisation is essentially a visual analytics task and propose a problem-solving loop analogous to the sense-making loop. We explore these ideas through an in-depth analysis of a use-case in prostate brachytherapy, an application where interactive optimisation may be able to provide significant assistance to practitioners in creating prostate cancer treatment plans customised to each patient's tumour characteristics. However, current brachytherapy treatment planning is usually a careful, mostly manual process involving multiple professionals. We developed a prototype interactive optimisation tool for brachytherapy that goes beyond current practice in supporting focal therapy - targeting tumour cells directly rather than simply seeking coverage of the whole prostate gland. We conducted semi-structured interviews, in two stages, with seven radiation oncology professionals in order to establish whether they would prefer to use interactive optimisation for treatment planning and whether such a tool could improve their trust in the novel focal therapy approach and in machine generated solutions to the problem.

20. Turkay C, Slingsby A, Lahtinen K, Butt S, Dykes J. Supporting theoretically-grounded model building in the social sciences through interactive visualisation. Neurocomputing 2017. DOI: 10.1016/j.neucom.2016.11.087.

21. von Landesberger T, Fellner DW, Ruddle RA. Visualization System Requirements for Data Processing Pipeline Design and Optimization. IEEE Trans Vis Comput Graph 2017; 23:2028-2041. PMID: 28113376; DOI: 10.1109/tvcg.2016.2603178.
Abstract
The rising quantity and complexity of data creates a need to design and optimize data processing pipelines: the set of data processing steps, parameters and algorithms that perform operations on the data. Visualization can support this process but, although there are many examples of systems for visual parameter analysis, there remains a need to systematically assess users' requirements and match those requirements to exemplar visualization methods. This article presents a new characterization of the requirements for pipeline design and optimization. This characterization is based on both a review of the literature and first-hand assessment of eight application case studies. We also match these requirements with exemplar functionality provided by existing visualization tools. Thus, we provide end-users and visualization developers with a way of identifying functionality that addresses data processing problems in an application. We also identify seven future challenges for visualization research that are not met by the capabilities of today's systems.

22. Teodoro G, Kurç TM, Taveira LFR, Melo ACMA, Gao Y, Kong J, Saltz JH. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines. Bioinformatics 2017; 33:1064-1072. PMID: 28062445; PMCID: PMC5409344; DOI: 10.1093/bioinformatics/btw749.
Abstract
Motivation: Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows.
Results: The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations.
Conclusions: Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines.
Availability and implementation: Source code: https://github.com/SBU-BMI/region-templates/.
Supplementary information: Supplementary data are available at Bioinformatics online.
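The Dice and Jaccard metrics cited as quality measures can be computed for binary masks as in this short Python sketch; it is a generic illustration of the metrics only, not part of the framework described above.

    import numpy as np

    def dice(mask_a, mask_b):
        """Dice coefficient: 2|A∩B| / (|A| + |B|) for boolean masks."""
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    def jaccard(mask_a, mask_b):
        """Jaccard index: |A∩B| / |A∪B| for boolean masks."""
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        union = np.logical_or(a, b).sum()
        return np.logical_and(a, b).sum() / union if union else 1.0

    # Compare a candidate segmentation against a reference (ground-truth) mask.
    reference = np.zeros((64, 64), dtype=bool)
    reference[20:40, 20:40] = True
    candidate = np.zeros((64, 64), dtype=bool)
    candidate[22:42, 18:38] = True

    print(f"Dice:    {dice(candidate, reference):.3f}")
    print(f"Jaccard: {jaccard(candidate, reference):.3f}")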
Affiliation(s)
- George Teodoro: Department of Computer Science, University of Brasília, Brasília 70910-900, Brazil; Biomedical Informatics Department, Stony Brook University, Stony Brook, NY 11794-8322, USA
- Tahsin M Kurç: Biomedical Informatics Department, Stony Brook University, Stony Brook, NY 11794-8322, USA; Scientific Data Group, Oak Ridge National Laboratory, Oak Ridge, TN, USA
- Luís F R Taveira: Department of Computer Science, University of Brasília, Brasília 70910-900, Brazil
- Alba C M A Melo: Department of Computer Science, University of Brasília, Brasília 70910-900, Brazil
- Yi Gao: Biomedical Informatics Department, Stony Brook University, Stony Brook, NY 11794-8322, USA
- Jun Kong: Biomedical Informatics Department, Emory University, Atlanta, GA 30322, USA
- Joel H Saltz: Biomedical Informatics Department, Stony Brook University, Stony Brook, NY 11794-8322, USA

23. Liu S, Maljovec D, Wang B, Bremer PT, Pascucci V. Visualizing High-Dimensional Data: Advances in the Past Decade. IEEE Trans Vis Comput Graph 2017; 23:1249-1268. PMID: 28113321; DOI: 10.1109/tvcg.2016.2640960.
Abstract
Massive simulations and arrays of sensing devices, in combination with increasing computing resources, have generated large, complex, high-dimensional datasets used to study phenomena across numerous fields of study. Visualization plays an important role in exploring such datasets. We provide a comprehensive survey of advances in high-dimensional data visualization that focuses on the past decade. We aim at providing guidance for data practitioners to navigate through a modular view of the recent advances, inspiring the creation of new visualizations along the enriched visualization pipeline, and identifying future opportunities for visualization research.

24. Torsney-Weir T, Bergner S, Bingham D, Moller T. Predicting the Interactive Rendering Time Threshold of Gaussian Process Models With HyperSlice. IEEE Trans Vis Comput Graph 2017; 23:1111-1123. PMID: 26915126; DOI: 10.1109/tvcg.2016.2532333.
Abstract
In this paper we present a method for predicting the rendering time to display multi-dimensional data for the analysis of computer simulations using the HyperSlice [36] method with Gaussian process model reconstruction. Our method relies on a theoretical understanding of how the data points are drawn on slices and then fits the formula to a user's machine using practical experiments. We also describe the typical characteristics of data when analyzing deterministic computer simulations as described by the statistics community. We then show the advantage of carefully considering how many data points can be drawn in real time by proposing two approaches of how this predictive formula can be used in a real-world system.

25. Xie C, Zhong W, Mueller K. A Visual Analytics Approach for Categorical Joint Distribution Reconstruction from Marginal Projections. IEEE Trans Vis Comput Graph 2017; 23:51-60. PMID: 27514059; DOI: 10.1109/tvcg.2016.2598479.
Abstract
Oftentimes multivariate data are not available as sets of equally multivariate tuples, but only as sets of projections into subspaces spanned by subsets of these attributes. For example, one may find data with five attributes stored in six tables of two attributes each, instead of a single table of five attributes. This prohibits the visualization of these data with standard high-dimensional methods, such as parallel coordinates or MDS, and there is hence the need to reconstruct the full multivariate (joint) distribution from these marginal ones. Most of the existing methods designed for this purpose use an iterative procedure to estimate the joint distribution. With insufficient marginal distributions and domain knowledge, they lead to results whose joint errors can be large. Moreover, enforcing smoothness for regularizations in the joint space is not applicable if the attributes are not numerical but categorical. We propose a visual analytics approach that integrates both anecdotal data and human experts to iteratively narrow down a large set of plausible solutions. The solution space is populated using a Monte Carlo procedure which uniformly samples the solution space. A level-of-detail high dimensional visualization system helps the user understand the patterns and the uncertainties. Constraints that narrow the solution space can then be added by the user interactively during the iterative exploration, and eventually a subset of solutions with narrow uncertainty intervals emerges.
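The idea of uniformly populating a solution space consistent with given marginals can be illustrated with a toy Monte Carlo sketch for a 3x2 categorical joint distribution; the numbers and tolerance below are invented, and this is not the authors' sampling procedure or visualization.

    import numpy as np

    rng = np.random.default_rng(7)

    # Observed marginal distributions of two categorical attributes (3 and 2 categories).
    marginal_a = np.array([0.5, 0.3, 0.2])
    marginal_b = np.array([0.6, 0.4])
    tolerance = 0.02

    # Monte Carlo: draw candidate joint tables uniformly from the probability
    # simplex and keep those whose projections match the observed marginals.
    joints = rng.dirichlet(np.ones(6), size=200_000).reshape(-1, 3, 2)
    row_ok = np.abs(joints.sum(axis=2) - marginal_a).max(axis=1) < tolerance
    col_ok = np.abs(joints.sum(axis=1) - marginal_b).max(axis=1) < tolerance
    solutions = joints[row_ok & col_ok]

    print(f"{len(solutions)} plausible joint tables kept")
    # Cell-wise spread across solutions indicates where the reconstruction is uncertain;
    # added constraints (from users or anecdotal data) would narrow this set further.
    print("cell-wise std:\n", solutions.std(axis=0).round(3))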

26. Pajer S, Streit M, Torsney-Weir T, Spechtenhauser F, Muller T, Piringer H. WeightLifter: Visual Weight Space Exploration for Multi-Criteria Decision Making. IEEE Trans Vis Comput Graph 2017; 23:611-620. PMID: 27875176; DOI: 10.1109/tvcg.2016.2598589.
Abstract
A common strategy in Multi-Criteria Decision Making (MCDM) is to rank alternative solutions by weighted summary scores. Weights, however, are often abstract to the decision maker and can only be set by vague intuition. While previous work supports a point-wise exploration of weight spaces, we argue that MCDM can benefit from a regional and global visual analysis of weight spaces. Our main contribution is WeightLifter, a novel interactive visualization technique for weight-based MCDM that facilitates the exploration of weight spaces with up to ten criteria. Our technique enables users to better understand the sensitivity of a decision to changes of weights, to efficiently localize weight regions where a given solution ranks high, and to filter out solutions which do not rank high enough for any plausible combination of weights. We provide a comprehensive requirement analysis for weight-based MCDM and describe an interactive workflow that meets these requirements. For evaluation, we describe a usage scenario of WeightLifter in automotive engineering and report qualitative feedback from users of a deployed version as well as preliminary feedback from decision makers in multiple domains. This feedback confirms that WeightLifter increases both the efficiency of weight-based MCDM and the awareness of uncertainty in the ultimate decisions.
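A minimal sketch of the weighted-sum ranking and of a global view of the weight space, assuming nothing beyond NumPy: sample the weight simplex and estimate how often each alternative ranks first. The alternatives, criteria, and sample count are invented for illustration; this is not WeightLifter's algorithm.

    import numpy as np

    rng = np.random.default_rng(3)

    # Rows = alternative solutions, columns = criteria scores normalized to [0, 1].
    scores = np.array([
        [0.9, 0.2, 0.6],   # alternative A
        [0.5, 0.8, 0.5],   # alternative B
        [0.4, 0.6, 0.9],   # alternative C
    ])

    # Sample weight vectors uniformly from the simplex (weights sum to 1).
    weights = rng.dirichlet(np.ones(scores.shape[1]), size=50_000)

    # Weighted summary score of each alternative under each sampled weighting.
    totals = weights @ scores.T          # shape: (n_weight_samples, n_alternatives)
    winners = totals.argmax(axis=1)

    # Fraction of the weight space in which each alternative ranks first: a global
    # view of how sensitive the decision is to the (often vague) choice of weights.
    for idx, label in enumerate("ABC"):
        share = (winners == idx).mean()
        print(f"alternative {label} ranks first in {share:.1%} of sampled weightings")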

27. von Landesberger T, Basgier D, Becker M. Comparative Local Quality Assessment of 3D Medical Image Segmentations with Focus on Statistical Shape Model-Based Algorithms. IEEE Trans Vis Comput Graph 2016; 22:2537-2549. PMID: 26595923; DOI: 10.1109/tvcg.2015.2501813.
Abstract
The quality of automatic 3D medical segmentation algorithms needs to be assessed on test datasets comprising several 3D images (i.e., instances of an organ). The experts need to compare the segmentation quality across the dataset in order to detect systematic segmentation problems. However, such comparative evaluation is not supported well by current methods. We present a novel system for assessing and comparing segmentation quality in a dataset with multiple 3D images. The data is analyzed and visualized in several views. We detect and show regions with systematic segmentation quality characteristics. For this purpose, we extended a hierarchical clustering algorithm with a connectivity criterion. We combine quality values across the dataset for determining regions with characteristic segmentation quality across instances. Using our system, the experts can also identify 3D segmentations with extraordinary quality characteristics. While we focus on algorithms based on statistical shape models, our approach can also be applied to cases, where landmark correspondences among instances can be established. We applied our approach to three real datasets: liver, cochlea and facial nerve. The segmentation experts were able to identify organ regions with systematic segmentation characteristics as well as to detect outlier instances.

28. Khan AUM, Mikut R, Reischl M. A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines. PLoS One 2016; 11:e0165180. PMID: 27764213; PMCID: PMC5072585; DOI: 10.1371/journal.pone.0165180.
Abstract
The parametrization of automatic image processing routines is time-consuming if a lot of image processing parameters are involved. An expert can tune parameters sequentially to get desired results. This may not be productive for applications with difficult image analysis tasks, e.g. when high noise and shading levels in an image are present or images vary in their characteristics due to different acquisition conditions. Parameters are required to be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is proposed to be evaluated by a benchmark data set that contains challenging image distortions in an increasing fashion. This promptly enables us to compare different standard image segmentation algorithms in a feedback vs. feedforward implementation by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts.
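The feedback idea can be sketched as a simple proportional controller that adapts a segmentation threshold until an abstract property of the output (here, the foreground area fraction) matches a target; the routine, quality criterion, and gain below are illustrative assumptions, not the paper's method.

    import numpy as np

    rng = np.random.default_rng(5)
    image = rng.normal(loc=0.4, scale=0.15, size=(128, 128))
    image[40:90, 30:100] += 0.35          # bright synthetic "object" region

    def segment(img, threshold):
        """Trivial image processing routine with one tunable parameter."""
        return img > threshold

    # Feedback criterion: the expected foreground fraction (abstract ground truth).
    target_fraction = 0.21
    threshold = 0.5                        # initial guess
    gain = 0.5                             # step size of the feedback controller

    for step in range(30):
        mask = segment(image, threshold)
        error = mask.mean() - target_fraction
        if abs(error) < 0.005:             # close enough to the desired output property
            break
        # Too much foreground -> raise the threshold; too little -> lower it.
        threshold += gain * error

    final = segment(image, threshold).mean()
    print(f"adapted threshold={threshold:.3f}, foreground fraction={final:.3f}")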
Affiliation(s)
- Arif ul Maula Khan: Institute for Applied Computer Science, Image and Data Analysis Group, Karlsruhe Institute of Technology, Karlsruhe, Baden-Wuerttemberg, Germany
- Ralf Mikut: Institute for Applied Computer Science, Image and Data Analysis Group, Karlsruhe Institute of Technology, Karlsruhe, Baden-Wuerttemberg, Germany
- Markus Reischl: Institute for Applied Computer Science, Image and Data Analysis Group, Karlsruhe Institute of Technology, Karlsruhe, Baden-Wuerttemberg, Germany

29. Zhou L, Hansen CD. A Survey of Colormaps in Visualization. IEEE Trans Vis Comput Graph 2016; 22:2051-2069. PMID: 26513793; PMCID: PMC4959790; DOI: 10.1109/tvcg.2015.2489649.
Abstract
Colormaps are a vital method for users to gain insights into data in a visualization. With a good choice of colormaps, users are able to acquire information in the data more effectively and efficiently. In this survey, we attempt to provide readers with a comprehensive review of colormap generation techniques and provide readers a taxonomy which is helpful for finding appropriate techniques to use for their data and applications. Specifically, we first briefly introduce the basics of color spaces including color appearance models. In the core of our paper, we survey colormap generation techniques, including the latest advances in the field by grouping these techniques into four classes: procedural methods, user-study based methods, rule-based methods, and data-driven methods; we also include a section on methods that are beyond pure data comprehension purposes. We then classify colormapping techniques into a taxonomy for readers to quickly identify the appropriate techniques they might use. Furthermore, a representative set of visualization techniques that explicitly discuss the use of colormaps is reviewed and classified based on the nature of the data in these applications. Our paper is also intended to be a reference of colormap choices for readers when they are faced with similar data and/or tasks.
Affiliation(s)
- Liang Zhou: Visualisierungsinstitut, Universität Stuttgart (VISUS), Stuttgart, Germany
- Charles D. Hansen: Scientific Computing and Imaging Institute and the School of Computing, University of Utah, Salt Lake City, UT 84112

30. Pretorius AJ, Zhou Y, Ruddle RA. Visual parameter optimisation for biomedical image processing. BMC Bioinformatics 2015; 16 Suppl 11:S9. PMID: 26329538; PMCID: PMC4547193; DOI: 10.1186/1471-2105-16-s11-s9.
Abstract
Background: Biomedical image processing methods require users to optimise input parameters to ensure high-quality output. This presents two challenges. First, it is difficult to optimise multiple input parameters for multiple input images. Second, it is difficult to achieve an understanding of underlying algorithms, in particular, relationships between input and output.
Results: We present a visualisation method that transforms users' ability to understand algorithm behaviour by integrating input and output, and by supporting exploration of their relationships. We discuss its application to a colour deconvolution technique for stained histology images and show how it enabled a domain expert to identify suitable parameter values for the deconvolution of two types of images, and metrics to quantify deconvolution performance. It also enabled a breakthrough in understanding by invalidating an underlying assumption about the algorithm.
Conclusions: The visualisation method presented here provides analysis capability for multiple inputs and outputs in biomedical image processing that is not supported by previous analysis software. The analysis supported by our method is not feasible with conventional trial-and-error approaches.

31. Kainz B, Steinberger M, Wein W, Kuklisova-Murgasova M, Malamateniou C, Keraudren K, Torsney-Weir T, Rutherford M, Aljabar P, Hajnal JV, Rueckert D. Fast Volume Reconstruction From Motion Corrupted Stacks of 2D Slices. IEEE Trans Med Imaging 2015; 34:1901-1913. PMID: 25807565; PMCID: PMC7115883; DOI: 10.1109/tmi.2015.2415453.
Abstract
Capturing an enclosing volume of moving subjects and organs using fast individual image slice acquisition has shown promise in dealing with motion artefacts. Motion between slice acquisitions results in spatial inconsistencies that can be resolved by slice-to-volume reconstruction (SVR) methods to provide high quality 3D image data. Existing algorithms are, however, typically very slow, specialised to specific applications and rely on approximations, which impedes their potential clinical use. In this paper, we present a fast multi-GPU accelerated framework for slice-to-volume reconstruction. It is based on optimised 2D/3D registration, super-resolution with automatic outlier rejection and an additional (optional) intensity bias correction. We introduce a novel and fully automatic procedure for selecting the image stack with least motion to serve as an initial registration target. We evaluate the proposed method using artificial motion corrupted phantom data as well as clinical data, including tracked freehand ultrasound of the liver and fetal Magnetic Resonance Imaging. We achieve speed-up factors greater than 30 compared to a single CPU system and greater than 10 compared to currently available state-of-the-art multi-core CPU methods. We ensure high reconstruction accuracy by exact computation of the point-spread function for every input data point, which has not previously been possible due to computational limitations. Our framework and its implementation is scalable for available computational infrastructures and tests show a speed-up factor of 1.70 for each additional GPU. This paves the way for the online application of image based reconstruction methods during clinical examinations. The source code for the proposed approach is publicly available.
Affiliation(s)
- Markus Steinberger: Institute for Computer Graphics and Vision at Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria
- Wolfgang Wein: ImFusion GmbH and the Chair for Computer Aided Medical Procedures & Augmented Reality at TU Munich, Agnes-Pockels-Bogen 1, 80992 Munich, Germany
- Maria Kuklisova-Murgasova: Department of Perinatal Imaging and Health within the Division of Imaging Sciences and Biomedical Engineering at King's College London, Strand, London WC2R 2LS, UK
- Christina Malamateniou: Department of Perinatal Imaging and Health within the Division of Imaging Sciences and Biomedical Engineering at King's College London, Strand, London WC2R 2LS, UK
- Kevin Keraudren: Department of Computing, Imperial College London, 180 Queen's Gate, London SW7 2AZ, UK
- Thomas Torsney-Weir: Visualization and Data Analysis group within the Faculty of Computer Science at the University of Vienna, Waehringer Strasse 29, 1090 Vienna, Austria
- Mary Rutherford: Department of Perinatal Imaging and Health within the Division of Imaging Sciences and Biomedical Engineering at King's College London, Strand, London WC2R 2LS, UK
- Paul Aljabar: Department of Perinatal Imaging and Health within the Division of Imaging Sciences and Biomedical Engineering at King's College London, Strand, London WC2R 2LS, UK
- Joseph V. Hajnal: Department of Perinatal Imaging and Health within the Division of Imaging Sciences and Biomedical Engineering at King's College London, Strand, London WC2R 2LS, UK
- Daniel Rueckert: Department of Computing, Imperial College London, 180 Queen's Gate, London SW7 2AZ, UK
|
32
|
Lin IC, Lan YC, Cheng PW. SI-Cut: Structural Inconsistency Analysis for Image Foreground Extraction. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2015; 21:860-872. [PMID: 26357247 DOI: 10.1109/tvcg.2015.2396063] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
This paper presents a novel approach for extracting foreground objects from an image. Existing methods separate the foreground and background mainly according to their color distributions and neighbor similarities. This paper proposes a more discriminative strategy, structural inconsistency analysis, in which the localities of color and texture are considered. Given a user-indicated rectangle, the proposed system iteratively maximizes the consensus regions between the original image and the structures predicted from the known background. The object contour can then be extracted according to the inconsistency between the predicted background and foreground structures. The proposed method includes an efficient image completion technique for structural prediction. Experiments show that the extraction accuracy of the proposed method is higher than that of related methods for structured scenes and comparable to that of related methods in less structured situations.
|
33
|
Sedlmair M, Heinzl C, Bruckner S, Piringer H, Möller T. Visual Parameter Space Analysis: A Conceptual Framework. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2014; 20:2161-2170. [PMID: 26356930 DOI: 10.1109/tvcg.2014.2346321] [Citation(s) in RCA: 64] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
Various case studies in different application domains have shown the great potential of visual parameter space analysis to support validating and using simulation models. In order to guide and systematize research endeavors in this area, we provide a conceptual framework for visual parameter space analysis problems. The framework is based on our own experience and a structured analysis of the visualization literature. It contains three major components: (1) a data flow model that helps to abstractly describe visual parameter space analysis problems independent of their application domain; (2) a set of four navigation strategies of how parameter space analysis can be supported by visualization tools; and (3) a characterization of six analysis tasks. Based on our framework, we analyze and classify the current body of literature, and identify three open research gaps in visual parameter space analysis. The framework and its discussion are meant to support visualization designers and researchers in characterizing parameter space analysis problems and to guide their design and evaluation processes.
|
34
|
Konev A, Waser J, Sadransky B, Cornel D, Perdigão RAP, Horváth Z, Gröller ME. Run Watchers: Automatic Simulation-Based Decision Support in Flood Management. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2014; 20:1873-1882. [PMID: 26356901 DOI: 10.1109/tvcg.2014.2346930] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
In this paper, we introduce a simulation-based approach to designing protection plans for flood events. Existing solutions either require a lot of computation time for an exhaustive search or demand time-consuming expert supervision and steering. We present a faster alternative based on the automated control of multiple parallel simulation runs. Run Watchers are dedicated system components authorized to monitor simulation runs, terminate them, and start new runs originating from existing ones according to domain-specific rules. This approach allows for a more efficient traversal of the search space and overall performance improvements due to the re-use of simulated states and the early termination of failed runs. In the course of the search, Run Watchers generate large and complex decision trees. We visualize the entire set of decisions made by Run Watchers using interactive, clustered timelines. In addition, we present visualizations to explain the resulting response plans. Run Watchers automatically generate storyboards to convey plan details and to justify the underlying decisions, including those which leave particular buildings unprotected. We evaluate our solution with domain experts.
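The paper's rule engine is not specified in the abstract; the following minimal Python sketch only illustrates the general Run-Watcher control pattern of monitoring, terminating, and branching runs, with hypothetical method names (should_terminate, branch_actions, is_done):

def run_watcher(initial_state, simulate_step, rules, max_events=100):
    # Minimal sketch of a Run-Watcher-style controller: monitor active runs,
    # terminate those a domain rule marks as failed, and branch new runs from
    # promising intermediate states instead of restarting from scratch.
    # 'rules' is a hypothetical object exposing should_terminate(state) and
    # branch_actions(state); 'state' exposes is_done().
    frontier = [initial_state]          # states of active runs
    decision_log = []                   # decisions recorded for later visualization
    finished = []
    while frontier and len(decision_log) < max_events:
        state = frontier.pop()
        if rules.should_terminate(state):
            decision_log.append(("terminate", state))
            continue
        if state.is_done():
            finished.append(state)
            decision_log.append(("finish", state))
            continue
        for action in rules.branch_actions(state):
            frontier.append(simulate_step(state, action))  # re-uses the simulated state
            decision_log.append(("branch", state, action))
    return finished, decision_log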
|
35
|
Arezoomand S, Lee WS, Rakhra KS, Beaulé PE. A 3D active model framework for segmentation of proximal femur in MR images. Int J Comput Assist Radiol Surg 2014; 10:55-66. [PMID: 25370312 DOI: 10.1007/s11548-014-1125-6] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2014] [Accepted: 10/22/2014] [Indexed: 11/29/2022]
Abstract
PURPOSE Segmentation of osseous structures from clinical MR images is difficult due to acquisition artifacts and variable signal intensity of bones. Segmentation of the femoral head is required for evaluation of hip joint abnormalities such as cam-type femoroacetabular impingement. A parametric deformable model (PDM) framework was developed for segmentation of 3D magnetic resonance (MR) images of the hip. METHOD A two-phase segmentation scheme was implemented: (i) Radial basis function interpolation was performed for semi-automatic piecewise registration of a proximal femur atlas model to an MRI scan region of interest. User-defined control points on the mesh model were registered to the corresponding landmarks on the image. (ii) An active PDM was then used for coarse-to-fine segmentation. The segmentation technique was tested using 3D synthetic image data and clinical MR scans of the hip with varying resolution. RESULTS The segmentation method provided a mean target overlap of 0.95 and a misclassification error of 0.035 for the synthetic data. The average target overlap was 0.88, and the misclassification error rate was 0.12 for the clinical MRI data sets. CONCLUSION A framework for segmentation of the proximal femur in hip MRI scans was developed and tested. This method is robust to artifacts and intensity inhomogeneity and resistant to leakage into adjacent tissues. In comparison with slicewise segmentation techniques, this method features inter-slice consistency, which results in a smooth model of the proximal femur in hip MRI scans.
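As an illustration of step (i), a minimal sketch using SciPy's RBFInterpolator to spread user-defined control-point displacements over the atlas mesh could look as follows; the thin-plate-spline kernel and the function names are assumptions, not the authors' implementation:

import numpy as np
from scipy.interpolate import RBFInterpolator

def warp_atlas(mesh_vertices, control_points, landmark_targets):
    # Interpolate the displacements of user-defined control points (on the atlas
    # mesh) towards their landmarks (in the image) and apply the resulting smooth
    # deformation to every mesh vertex.  The kernel choice is an assumption.
    mesh_vertices = np.asarray(mesh_vertices, dtype=float)
    control_points = np.asarray(control_points, dtype=float)
    displacements = np.asarray(landmark_targets, dtype=float) - control_points
    rbf = RBFInterpolator(control_points, displacements, kernel="thin_plate_spline")
    return mesh_vertices + rbf(mesh_vertices)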
Affiliation(s)
- Sadaf Arezoomand
- School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, ON, Canada
|
36
|
Gibson KH, Vorkel D, Meissner J, Verbavatz JM. Fluorescing the electron: strategies in correlative experimental design. Methods Cell Biol 2014; 124:23-54. [PMID: 25287835 DOI: 10.1016/b978-0-12-801075-4.00002-1] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/01/2023]
Abstract
Correlative light and electron microscopy (CLEM) encompasses a growing number of imaging techniques that aim to combine the benefits of light microscopy, which allows routine labeling of molecules and live-cell imaging of fluorescently tagged proteins, with the resolution and ultrastructural detail provided by electron microscopy (EM). Here we review three different strategies that are commonly used in CLEM and illustrate each approach with one detailed example of its application. The focus is on different options for sample preparation, with their respective benefits, as well as on the imaging workflows that can be used. The three strategies cover: (1) the combination of live-cell imaging with the high resolution of EM (time-resolved CLEM), (2) the need to identify a fluorescent cell of interest for further exploration by EM (cell sorting), and (3) the subcellular correlation of a fluorescent feature in a cell with its associated ultrastructural features (spatial CLEM). Finally, we discuss future directions for CLEM, exploring the possibilities for combining super-resolution microscopy with EM.
Affiliation(s)
- Kimberley H Gibson
- Max Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany
- Daniela Vorkel
- Max Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany
- Jana Meissner
- Max Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany
- Jean-Marc Verbavatz
- Max Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany
|
37
|
Schmidt J, Gröller ME, Bruckner S. VAICo: visual analysis for image comparison. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2013; 19:2090-2099. [PMID: 24051775 DOI: 10.1109/tvcg.2013.213] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
Scientists, engineers, and analysts are confronted with ever larger and more complex sets of data, whose analysis poses special challenges. In many situations it is necessary to compare two or more datasets. Hence there is a need for comparative visualization tools to help analyze differences or similarities among datasets. In this paper an approach for comparative visualization for sets of images is presented. Well-established techniques for comparing images frequently place them side-by-side. A major drawback of such approaches is that they do not scale well. Other image comparison methods encode differences in images by abstract parameters like color. In this case information about the underlying image data gets lost. This paper introduces a new method for visualizing differences and similarities in large sets of images which preserves contextual information, but also allows the detailed analysis of subtle variations. Our approach identifies local changes and applies cluster analysis techniques to embed them in a hierarchy. The results of this process are then presented in an interactive web application which allows users to rapidly explore the space of differences and drill-down on particular features. We demonstrate the flexibility of our approach by applying it to multiple distinct domains.
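A minimal sketch of the underlying idea (find pixels that vary across a registered image set, group them into regions, and organize the regions hierarchically) is given below; the per-pixel variability test and the region features are illustrative assumptions, not the paper's algorithm:

import numpy as np
from scipy import ndimage
from scipy.cluster.hierarchy import linkage, fcluster

def local_change_hierarchy(images, threshold=0.1, n_groups=5):
    # Find pixels that vary across a set of registered images, group connected
    # changed regions, and organize them hierarchically by simple region features
    # (centroid and size).  The variability test and the features are assumptions.
    stack = np.stack([np.asarray(img, dtype=float) for img in images])
    changed = stack.std(axis=0) > threshold            # pixels that differ across images
    labels, n = ndimage.label(changed)                 # connected changed regions
    if n < 2:
        return labels, np.ones(n, dtype=int)
    feats = []
    for region in range(1, n + 1):
        ys, xs = np.nonzero(labels == region)
        feats.append([ys.mean(), xs.mean(), np.log1p(ys.size)])
    tree = linkage(np.asarray(feats), method="ward")   # hierarchy over regions
    return labels, fcluster(tree, t=min(n_groups, n), criterion="maxclust")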
|
38
|
Mühlbacher T, Piringer H. A partition-based framework for building and validating regression models. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2013; 19:1962-1971. [PMID: 24051762 DOI: 10.1109/tvcg.2013.125] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
Regression models play a key role in many application domains for analyzing or predicting a quantitative dependent variable based on one or more independent variables. Automated approaches for building regression models are typically limited with respect to incorporating domain knowledge in the process of selecting input variables (also known as feature subset selection). Other limitations include the identification of local structures, transformations, and interactions between variables. The contribution of this paper is a framework for building regression models addressing these limitations. The framework combines a qualitative analysis of relationship structures by visualization and a quantification of relevance for ranking any number of features and pairs of features which may be categorical or continuous. A central aspect is the local approximation of the conditional target distribution by partitioning 1D and 2D feature domains into disjoint regions. This enables a visual investigation of local patterns and largely avoids structural assumptions for the quantitative ranking. We describe how the framework supports different tasks in model building (e.g., validation and comparison), and we present an interactive workflow for feature subset selection. A real-world case study illustrates the step-wise identification of a five-dimensional model for natural gas consumption. We also report feedback from domain experts after two months of deployment in the energy sector, indicating a significant effort reduction for building and improving regression models.
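The abstract does not give the exact relevance measure; a minimal sketch of partition-based feature ranking, assuming equal-frequency bins and within-bin variance reduction as the score, could read:

import numpy as np

def partition_relevance(x, y, n_bins=10):
    # Partition a 1D feature into equal-frequency bins and score its relevance as
    # the relative reduction of target variance within the bins (an assumed
    # measure; the paper's own quantification may differ).
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))
    bins = np.clip(np.digitize(x, edges[1:-1]), 0, n_bins - 1)
    within = sum(y[bins == b].var() * (bins == b).sum()
                 for b in range(n_bins) if (bins == b).any())
    return 1.0 - within / (y.var() * y.size + 1e-12)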
|
39
|
Schultz T, Kindlmann GL. Open-box spectral clustering: applications to medical image analysis. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2013; 19:2100-2108. [PMID: 24051776 DOI: 10.1109/tvcg.2013.181] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
Spectral clustering is a powerful and versatile technique, whose broad range of applications includes 3D image analysis. However, its practical use often involves a tedious and time-consuming process of tuning parameters and making application-specific choices. In the absence of training data with labeled clusters, help from a human analyst is required to decide the number of clusters, to determine whether hierarchical clustering is needed, and to define the appropriate distance measures, parameters of the underlying graph, and type of graph Laplacian. We propose to simplify this process via an open-box approach, in which an interactive system visualizes the involved mathematical quantities, suggests parameter values, and provides immediate feedback to support the required decisions. Our framework focuses on applications in 3D image analysis, and links the abstract high-dimensional feature space used in spectral clustering to the three-dimensional data space. This provides a better understanding of the technique, and helps the analyst predict how well specific parameter settings will generalize to similar tasks. In addition, our system supports filtering outliers and labeling the final clusters in such a way that user actions can be recorded and transferred to different data in which the same structures are to be found. Our system supports a wide range of inputs, including triangular meshes, regular grids, and point clouds. We use our system to develop segmentation protocols in chest CT and brain MRI that are then successfully applied to other datasets in an automated manner.
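A minimal scikit-learn sketch of the machinery the system exposes (nearest-neighbor affinity graph, graph Laplacian embedding, k-means labeling) is shown below; the parameter values are placeholders for exactly the choices the interactive system is meant to help an analyst make:

import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_voxels(features, n_clusters=4, n_neighbors=10):
    # Build a nearest-neighbor affinity graph over voxel feature vectors, embed
    # it via the graph Laplacian, and label the embedding with k-means.  The
    # parameter values are placeholders for what an analyst would tune.
    model = SpectralClustering(n_clusters=n_clusters,
                               affinity="nearest_neighbors",
                               n_neighbors=n_neighbors,
                               assign_labels="kmeans",
                               random_state=0)
    return model.fit_predict(np.asarray(features, dtype=float))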
|
40
|
Coffey D, Lin CL, Erdman AG, Keefe DF. Design by dragging: an interface for creative forward and inverse design with simulation ensembles. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2013; 19:2783-2791. [PMID: 24051845 PMCID: PMC4126190 DOI: 10.1109/tvcg.2013.147] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
We present an interface for exploring large design spaces as encountered in simulation-based engineering, design of visual effects, and other tasks that require tuning parameters of computationally-intensive simulations and visually evaluating results. The goal is to enable a style of design with simulations that feels as-direct-as-possible so users can concentrate on creative design tasks. The approach integrates forward design via direct manipulation of simulation inputs (e.g., geometric properties, applied forces) in the same visual space with inverse design via 'tugging' and reshaping simulation outputs (e.g., scalar fields from finite element analysis (FEA) or computational fluid dynamics (CFD)). The interface includes algorithms for interpreting the intent of users' drag operations relative to parameterized models, morphing arbitrary scalar fields output from FEA and CFD simulations, and in-place interactive ensemble visualization. The inverse design strategy can be extended to use multi-touch input in combination with an as-rigid-as-possible shape manipulation to support rich visual queries. The potential of this new design approach is confirmed via two applications: medical device engineering of a vacuum-assisted biopsy device and visual effects design using a physically based flame simulation.
Affiliation(s)
- Dane Coffey
- Department of Computer Science and Engineering, University of Minnesota
- Chi-Lun Lin
- Department of Mechanical Engineering, University of Minnesota
- Daniel F. Keefe
- Department of Computer Science and Engineering, University of Minnesota
|
41
|
Reh A, Gusenbauer C, Kastner J, Gröller ME, Heinzl C. MObjects--a novel method for the visualization and interactive exploration of defects in industrial XCT data. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2013; 19:2906-2915. [PMID: 24051858 DOI: 10.1109/tvcg.2013.177] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
This paper describes an advanced visualization method for the analysis of defects in industrial 3D X-Ray Computed Tomography (XCT) data. We present a novel way to explore a high number of individual objects in a dataset, e.g., pores, inclusions, particles, fibers, and cracks, demonstrated on the application area of pore extraction in carbon fiber reinforced polymers (CFRP). After calculating the individual object properties (volume, dimensions, and shape factors), all objects are clustered into a mean object (MObject). The resulting MObject parameter space can be explored interactively. To do so, we introduce the visualization of mean object sets (MObject Sets) in a radial and a parallel arrangement. Each MObject may be split up into sub-classes by selecting a specific property, e.g., volume or shape factor, and the desired number of classes. Applying this interactive selection iteratively leads to the intended classifications and visualizations of MObjects along the selected analysis path. The different scaling factors of the MObjects along the analysis path are conveyed through a visual linking approach. Furthermore, the representative MObjects are exported as volumetric datasets to serve as input for successive calculations and simulations. In the field of porosity determination in CFRP, non-destructive testing practitioners use representative MObjects to improve ultrasonic calibration curves. Representative pores also serve as input for heat conduction simulations in active thermography. For a fast overview of the pore properties in a dataset, we propose a local MObjects visualization in combination with a color-coded homogeneity visualization of cells. The advantages of our novel approach are demonstrated using real-world CFRP specimens. The results were evaluated through a questionnaire in order to determine the practicality of the MObjects visualization as a supportive tool for domain specialists.
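As a rough illustration of MObject aggregation and the interactive splitting step, the following sketch assumes a per-defect property table and equal-frequency (quantile) splitting along one selected property, which may differ from the authors' implementation:

import numpy as np

def mean_object(properties):
    # Aggregate per-defect property vectors (rows: objects; columns: volume,
    # dimensions, shape factors, ...) into a single mean object.
    return np.asarray(properties, dtype=float).mean(axis=0)

def split_mobject(properties, column, n_classes=3):
    # One interactive refinement step: split the defect population into
    # sub-classes along one selected property and return a mean object per class.
    # Equal-frequency (quantile) splitting is an assumption for illustration.
    props = np.asarray(properties, dtype=float)
    values = props[:, column]
    edges = np.quantile(values, np.linspace(0.0, 1.0, n_classes + 1))
    classes = np.clip(np.digitize(values, edges[1:-1]), 0, n_classes - 1)
    return [props[classes == c].mean(axis=0)
            for c in range(n_classes) if (classes == c).any()]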
Affiliation(s)
- Andreas Reh
- University of Applied Sciences Upper Austria, Campus Wels
|
42
|
Bergner S, Sedlmair M, Möller T, Abdolyousefi SN, Saad A. ParaGlide: interactive parameter space partitioning for computer simulations. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2013; 19:1499-1512. [PMID: 23846095 DOI: 10.1109/tvcg.2013.61] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
In this paper, we introduce ParaGlide, a visualization system designed for interactive exploration of parameter spaces of multidimensional simulation models. To get the right parameter configuration, model developers frequently have to go back and forth between setting input parameters and qualitatively judging the outcomes of their model. Current state-of-the-art tools and practices, however, fail to provide a systematic way of exploring these parameter spaces, making informed decisions about parameter configurations a tedious and workload-intensive task. ParaGlide endeavors to overcome this shortcoming by guiding data generation using a region-based user interface for parameter sampling and then dividing the model's input parameter space into partitions that represent distinct output behavior. In particular, we found that parameter space partitioning can help model developers to better understand qualitative differences among possibly high-dimensional model outputs. Further, it provides information on parameter sensitivity and facilitates comparison of models. We developed ParaGlide in close collaboration with experts from three different domains, who all were involved in developing new models for their domain. We first analyzed current practices of six domain experts and derived a set of tasks and design requirements, then engaged in a user-centered design process, and finally conducted three longitudinal in-depth case studies underlining the usefulness of our approach.
Affiliation(s)
- Steven Bergner
- Department of Computing Science, Simon Fraser University, 8888 University Drive, Burnaby, BC V5A 1S6, Canada.
|
43
|
Unger A, Schulte S, Klemann V, Dransch D. A Visual Analysis Concept for the Validation of Geoscientific Simulation Models. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2012; 18:2216-2225. [PMID: 26357129 DOI: 10.1109/tvcg.2012.190] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
Geoscientific modeling and simulation helps to improve our understanding of the complex Earth system. During the modeling process, validation of the geoscientific model is an essential step. In validation, it is determined whether the model output shows sufficient agreement with observation data. Measures for this agreement are called goodness of fit. In the geosciences, analyzing the goodness of fit is challenging due to its manifold dependencies: 1) The goodness of fit depends on the model parameterization, whose precise values are not known. 2) The goodness of fit varies in space and time due to the spatio-temporal dimension of geoscientific models. 3) The significance of the goodness of fit is affected by resolution and preciseness of available observational data. 4) The correlation between goodness of fit and underlying modeled and observed values is ambiguous. In this paper, we introduce a visual analysis concept that targets these challenges in the validation of geoscientific models - specifically focusing on applications where observation data is sparse, unevenly distributed in space and time, and imprecise, which hinders a rigorous analytical approach. Our concept, developed in close cooperation with Earth system modelers, addresses the four challenges by four tailored visualization components. The tight linking of these components supports a twofold interactive drill-down in model parameter space and in the set of data samples, which facilitates the exploration of the numerous dependencies of the goodness of fit. We exemplify our visualization concept for geoscientific modeling of glacial isostatic adjustments in the last 100,000 years, validated against sea levels indicators - a prominent example for sparse and imprecise observation data. An initial use case and feedback from Earth system modelers indicate that our visualization concept is a valuable complement to the range of validation methods.
Affiliation(s)
- A Unger
- GFZ German Research Centre for Geosciences, Potsdam, Germany.
|
44
|
Ip CY, Varshney A, JaJa J. Hierarchical Exploration of Volumes Using Multilevel Segmentation of the Intensity-Gradient Histograms. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2012; 18:2355-2363. [PMID: 26357143 DOI: 10.1109/tvcg.2012.231] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
Visual exploration of volumetric datasets to discover the embedded features and spatial structures is a challenging and tedious task. In this paper we present a semi-automatic approach to this problem that works by visually segmenting the intensity-gradient 2D histogram of a volumetric dataset into an exploration hierarchy. Our approach mimics user exploration behavior by analyzing the histogram with the normalized-cut multilevel segmentation technique. Unlike previous work in this area, our technique segments the histogram into a reasonable set of intuitive components that are mutually exclusive and collectively exhaustive. We use information-theoretic measures of the volumetric data segments to guide the exploration. This provides a data-driven coarse-to-fine hierarchy for a user to interactively navigate the volume in a meaningful manner.
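A minimal sketch of the first step, building the intensity-gradient 2D histogram of a volume, is given below; the normalized-cut multilevel segmentation of that histogram is not reproduced here:

import numpy as np

def intensity_gradient_histogram(volume, bins=256):
    # Build the 2D intensity-vs-gradient-magnitude histogram that is then
    # segmented hierarchically (the normalized-cut step itself is not shown).
    vol = np.asarray(volume, dtype=float)
    gx, gy, gz = np.gradient(vol)
    gmag = np.sqrt(gx**2 + gy**2 + gz**2)
    hist, i_edges, g_edges = np.histogram2d(vol.ravel(), gmag.ravel(), bins=bins)
    return np.log1p(hist), i_edges, g_edges   # log scale is a common display choice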
Affiliation(s)
- Cheuk Yiu Ip
- Institute for Advanced Computer Studies, University of Maryland, College Park, USA.
|
45
|
Shepherd T, Owenius R. Gaussian process models of dynamic PET for functional volume definition in radiation oncology. IEEE TRANSACTIONS ON MEDICAL IMAGING 2012; 31:1542-1556. [PMID: 22498690 DOI: 10.1109/tmi.2012.2193896] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
In routine oncologic positron emission tomography (PET), dynamic information is discarded by time-averaging the signal to produce static images of the "standardised uptake value" (SUV). Defining functional volumes of interest (VOIs) in terms of SUV is flawed, as values are affected by confounding factors and the chosen time window, and SUV images are not sensitive to functional heterogeneity of pathological tissues. Also, SUV iso-contours are highly affected by the choice of threshold, and no threshold or other SUV-based segmentation method is universally accepted for a given VOI type. Gaussian Process (GP) time series models describe macro-scale dynamic behavior arising from countless interacting micro-scale processes, as is the case for PET signals from heterogeneous tissue. We use GPs to model time-activity curves (TACs) from dynamic PET and to define functional volumes for PET oncology. Probabilistic methods of tissue discrimination are presented along with novel contouring methods for functional VOI segmentation. We demonstrate the value of GP models for voxel classification and VOI contouring of diseased and metastatic tissues with functional heterogeneity in prostate PET. Classification experiments reveal superior sensitivity and specificity over SUV calculation and over a TAC-based method proposed in recent literature. Contouring experiments reveal differences in shape between gold-standard and GP VOIs, and correlation with kinetic models shows that the novel VOIs contain extra clinically relevant information compared to SUVs alone. We conclude that the proposed models offer a principled data analysis technique that improves on SUVs for oncologic VOI definition. Continuing research will generalize GP models for different oncology tracers and imaging protocols, with the ultimate goal of clinical use including treatment planning.
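As a rough sketch of the modeling step, the following fits a GP to a single voxel's time-activity curve with scikit-learn; the kernel choice and the use of the log marginal likelihood as a per-class score are assumptions, since the abstract does not give the exact formulation:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_tac_gp(times, activity):
    # Fit a GP to one voxel's time-activity curve.  The RBF + noise kernel is an
    # assumed stand-in for the paper's model; the fitted log marginal likelihood
    # could serve as a per-class score when reference GPs are trained per tissue class.
    t = np.asarray(times, dtype=float).reshape(-1, 1)
    y = np.asarray(activity, dtype=float)
    kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=0.1)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, y)
    return gp, gp.log_marginal_likelihood_value_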
Affiliation(s)
- Tony Shepherd
- Turku PET Centre and Department of Oncology and Radiotherapy, Turku University Hospital, 20521 Turku, Finland.
|
46
|
Weber B, Greenan G, Prohaska S, Baum D, Hege HC, Müller-Reichert T, Hyman AA, Verbavatz JM. Automated tracing of microtubules in electron tomograms of plastic embedded samples of Caenorhabditis elegans embryos. J Struct Biol 2011; 178:129-38. [PMID: 22182731 DOI: 10.1016/j.jsb.2011.12.004] [Citation(s) in RCA: 60] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2011] [Revised: 11/23/2011] [Accepted: 12/05/2011] [Indexed: 01/15/2023]
Abstract
The ability to rapidly assess microtubule number in 3D image stacks from electron tomograms is essential for collecting statistically meaningful data sets. Here we implement microtubule tracing using 3D template matching. We evaluate our results by comparing the automatically traced centerlines to manual tracings in a large number of electron tomograms of the centrosome of the early Caenorhabditis elegans embryo. Furthermore, we give a qualitative description of the tracing results for three other types of samples. For dual-axis tomograms, the automatic tracing yields 4% false negatives and 8% false positives on average. For single-axis tomograms, the accuracy of tracing is lower (16% false negatives and 14% false positives) due to the missing wedge in electron tomography. We also implemented an editor specifically designed for correcting the automatic tracing; in addition, this editor can be used for annotating microtubules. Automatic tracing together with manual correction significantly reduces the amount of manual labor needed to trace microtubule centerlines, making large-scale analysis of microtubule network properties feasible.
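A minimal sketch of the matching step, assuming a simple cylindrical template and plain (un-normalized) FFT-based cross-correlation rather than the authors' full pipeline, could look like:

import numpy as np
from scipy.signal import fftconvolve

def cylinder_template(radius=3, length=11):
    # A simple bright-cylinder template along z as a stand-in for a microtubule
    # model (the template actually used in the paper may differ).
    z, y, x = np.mgrid[:length, -2 * radius:2 * radius + 1, -2 * radius:2 * radius + 1]
    tmpl = (np.sqrt(x ** 2 + y ** 2) <= radius).astype(float)
    return tmpl - tmpl.mean()                       # zero mean for correlation

def match_template_3d(tomogram, template):
    # Un-normalized 3D cross-correlation via FFT; peaks in the returned map are
    # candidate microtubule points that a tracer could link into centerlines.
    vol = np.asarray(tomogram, dtype=float)
    vol = vol - vol.mean()
    flipped = template[::-1, ::-1, ::-1]            # correlation = convolution with flipped kernel
    return fftconvolve(vol, flipped, mode="same")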
Affiliation(s)
- Britta Weber
- Zuse Institute Berlin, Department of Visualization and Data Analysis, Takustrasse 7, 14195 Berlin, Germany.
|