1. Zhang M, Chen L, Li Q, Yuan X, Yong J. Uncertainty-Oriented Ensemble Data Visualization and Exploration using Variable Spatial Spreading. IEEE Transactions on Visualization and Computer Graphics 2021; 27:1808-1818. [PMID: 33048703] [DOI: 10.1109/tvcg.2020.3030377]
Abstract
As an important method of handling potential uncertainties in numerical simulations, ensemble simulation has been widely applied in many disciplines. Visualization is a promising and powerful method for analyzing ensemble simulations. However, conventional visualization methods mainly aim at data simplification and at highlighting important information based on domain expertise, rather than providing a flexible mechanism for data exploration and intervention, so trial-and-error procedures have to be conducted repeatedly. To resolve this issue, we propose a new perspective on ensemble data analysis that uses the attribute variable dimension as the primary analysis dimension. In particular, we propose a variable uncertainty calculation method based on variable spatial spreading. Building on this method, we design an interactive analysis framework that provides flexible exploration of ensemble data. The proposed spreading curve view, region stability heat map view, and temporal analysis view, together with the commonly used 2D map view, jointly support uncertainty distribution perception, region selection, and temporal analysis, among other analysis requirements. We verify our approach by analyzing a real-world ensemble simulation dataset. Feedback collected from domain experts confirms the efficacy of our framework.
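The abstract does not spell out the spreading formulation itself, so as a rough, hypothetical illustration of the idea only: one way to quantify "variable spatial spreading" is to locate each variable's high-value region in every ensemble member and measure how much that region's location scatters across members. The threshold-based region extraction and the centroid-scatter score below are assumptions for the sketch, not the paper's method.

```python
import numpy as np

def spatial_spread_uncertainty(ensemble, threshold):
    """Toy spread score: for each ensemble member, find the grid cells
    where the variable exceeds `threshold`, take their centroid, and
    report how widely those centroids scatter across members."""
    centroids = []
    for member in ensemble:  # member: 2D array (ny, nx)
        ys, xs = np.nonzero(member > threshold)
        if len(xs) == 0:
            continue  # feature absent in this member
        centroids.append((xs.mean(), ys.mean()))
    centroids = np.asarray(centroids)
    # Scatter of per-member centroids = ensemble disagreement on location.
    return float(np.linalg.norm(centroids.std(axis=0)))
```

A score of zero means all members agree on where the feature lies; larger values indicate higher spatial uncertainty for that variable.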
2. Johnson S, Samsel F, Abram G, Olson D, Solis AJ, Herman B, Wolfram PJ, Lenglet C, Keefe DF. Artifact-Based Rendering: Harnessing Natural and Traditional Visual Media for More Expressive and Engaging 3D Visualizations. IEEE Transactions on Visualization and Computer Graphics 2020; 26:492-502. [PMID: 31403430] [DOI: 10.1109/tvcg.2019.2934260]
Abstract
We introduce Artifact-Based Rendering (ABR), a framework of tools, algorithms, and processes that makes it possible to produce real, data-driven 3D scientific visualizations with a visual language derived entirely from colors, lines, textures, and forms created using traditional physical media or found in nature. A theory and process for ABR is presented to address three current needs: (i) designing better visualizations by making it possible for non-programmers to rapidly design and critique many alternative data-to-visual mappings; (ii) expanding the visual vocabulary used in scientific visualizations to depict increasingly complex multivariate data; (iii) bringing a more engaging, natural, and human-relatable handcrafted aesthetic to data visualization. New tools and algorithms to support ABR include front-end applets for constructing artifact-based colormaps, optimizing 3D scanned meshes for use in data visualization, and synthesizing textures from artifacts. These are complemented by an interactive rendering engine with custom algorithms and interfaces that demonstrate multiple new visual styles for depicting point, line, surface, and volume data. A within-the-research-team design study provides early evidence of the shift in visualization design processes that ABR is believed to enable when compared to traditional scientific visualization systems. Qualitative user feedback on applications to climate science and brain imaging supports the utility of ABR for scientific discovery and public communication.
3. Jadhav S, Nadeem S, Kaufman A. FeatureLego: Volume Exploration Using Exhaustive Clustering of Super-Voxels. IEEE Transactions on Visualization and Computer Graphics 2019; 25:2725-2737. [PMID: 30028709] [PMCID: PMC6703906] [DOI: 10.1109/tvcg.2018.2856744]
Abstract
We present a volume exploration framework, FeatureLego, that uses a novel voxel clustering approach for efficient selection of semantic features. We partition the input volume into a set of compact super-voxels that represent the finest selection granularity. We then perform an exhaustive clustering of these super-voxels using a graph-based clustering method. Unlike the prevalent brute-force parameter sampling approaches, we propose an efficient algorithm to perform this exhaustive clustering. By computing an exhaustive set of clusters, we aim to capture as many boundaries as possible and ensure that the user has sufficient options for efficiently selecting semantically relevant features. Furthermore, we merge all the computed clusters into a single tree of meta-clusters that can be used for hierarchical exploration. We implement an intuitive user-interface to interactively explore volumes using our clustering approach. Finally, we show the effectiveness of our framework on multiple real-world datasets of different modalities.
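The core merge step that an approach like this relies on can be sketched with a union-find over a super-voxel adjacency graph. This is a simplified stand-in, not FeatureLego's algorithm: here a single dissimilarity threshold produces one clustering, whereas the paper computes an exhaustive family of clusterings efficiently; sweeping the threshold would give a coarse approximation of that family.

```python
class DSU:
    """Union-find over super-voxel ids, with path halving."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def cluster_supervoxels(n, edges, threshold):
    """Merge every pair of adjacent super-voxels whose boundary
    dissimilarity falls below `threshold`.

    edges: iterable of (i, j, dissimilarity) for adjacent super-voxels.
    Returns a cluster label (root id) per super-voxel."""
    dsu = DSU(n)
    for i, j, d in edges:
        if d < threshold:
            dsu.union(i, j)
    return [dsu.find(i) for i in range(n)]
```

Repeating the call over a sweep of thresholds yields nested clusterings that could be merged into a single tree of meta-clusters for hierarchical exploration, in the spirit of the abstract.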
4. Berger M, Li J, Levine JA. A Generative Model for Volume Rendering. IEEE Transactions on Visualization and Computer Graphics 2019; 25:1636-1650. [PMID: 29993811] [DOI: 10.1109/tvcg.2018.2816059]
Abstract
We present a technique to synthesize and analyze volume-rendered images using generative models. We use the Generative Adversarial Network (GAN) framework to compute a model from a large collection of volume renderings, conditioned on (1) viewpoint and (2) transfer functions for opacity and color. Our approach facilitates tasks for volume analysis that are challenging to achieve using existing rendering techniques such as ray casting or texture-based methods. We show how to guide the user in transfer function editing by quantifying expected change in the output image. Additionally, the generative model transforms transfer functions into a view-invariant latent space specifically designed to synthesize volume-rendered images. We use this space directly for rendering, enabling the user to explore the space of volume-rendered images. As our model is independent of the choice of volume rendering process, we show how to analyze volume-rendered images produced by direct and global illumination lighting, for a variety of volume datasets.
5. Tao J, Wang C. Semi-Automatic Generation of Stream Surfaces via Sketching. IEEE Transactions on Visualization and Computer Graphics 2018; 24:2622-2635. [PMID: 28922122] [DOI: 10.1109/tvcg.2017.2750681]
Abstract
We present a semi-automatic approach for stream surface generation. Our approach is based on the conjecture that good seeding curves can be inferred from a set of streamlines. Given a set of densely traced streamlines over the flow field, we design a sketch-based interface that allows users to describe their perceived flow patterns through drawing simple strokes directly on top of the streamline visualization results. Based on the 2D stroke, we identify a 3D seeding curve and generate a stream surface that captures the flow pattern of streamlines at the outermost layer. Then, we remove the streamlines whose patterns are covered by the stream surface. Repeating this process, users can peel the flow by replacing the streamlines with customized surfaces layer by layer. Furthermore, we propose an optimization scheme to identify the optimal seeding curve in the neighborhood of an original seeding curve based on surface quality measures. To support interactive optimization, we design a parallel surface quality estimation strategy that estimates the quality of a seeding curve without generating the surface. Our sketch-based interface leverages an intuitive painting metaphor which most users are familiar with. We present results using multiple data sets to show the effectiveness of our approach.
6. Khan NM, Ksantini R, Guan L. A Novel Image-Centric Approach Toward Direct Volume Rendering. ACM Transactions on Intelligent Systems and Technology 2018. [DOI: 10.1145/3152875]
Abstract
Transfer function (TF) generation is a fundamental problem in direct volume rendering (DVR). A TF maps voxels to color and opacity values to reveal inner structures. Existing TF tools are complex and unintuitive for the users who are more likely to be medical professionals than computer scientists. In this article, we propose a novel image-centric method for TF generation where instead of complex tools, the user directly manipulates volume data to generate DVR. The user’s work is further simplified by presenting only the most informative volume slices for selection. Based on the selected parts, the voxels are classified using our novel sparse nonparametric support vector machine classifier, which combines both local and near-global distributional information of the training data. The voxel classes are mapped to aesthetically pleasing and distinguishable color and opacity values using harmonic colors. Experimental results on several benchmark datasets and a detailed user survey show the effectiveness of the proposed method.
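The abstract's "most informative volume slices" step can be illustrated with a simple proxy: ranking slices by the Shannon entropy of their intensity histograms. The entropy criterion and the 32-bin histogram are assumptions for this sketch; the paper's actual informativeness measure may differ.

```python
import numpy as np

def most_informative_slices(volume, k):
    """Rank axial slices of a 3D volume by the Shannon entropy of their
    intensity histograms and return the indices of the top-k.
    Higher entropy = more varied intensities = (here) more informative."""
    scores = []
    for z in range(volume.shape[0]):
        hist, _ = np.histogram(volume[z], bins=32)
        p = hist / hist.sum()
        p = p[p > 0]                      # drop empty bins before log
        scores.append(-np.sum(p * np.log2(p)))
    return np.argsort(scores)[::-1][:k]
```

Presenting only the top-ranked slices keeps the user's selection effort low, which matches the abstract's goal of simplifying the user's work.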
7. Quan TM, Choi J, Jeong H, Jeong WK. An Intelligent System Approach for Probabilistic Volume Rendering Using Hierarchical 3D Convolutional Sparse Coding. IEEE Transactions on Visualization and Computer Graphics 2018; 24:964-973. [PMID: 28866519] [DOI: 10.1109/tvcg.2017.2744078]
Abstract
In this paper, we propose a novel machine learning-based voxel classification method for highly-accurate volume rendering. Unlike conventional voxel classification methods that incorporate intensity-based features, the proposed method employs dictionary-based features learned directly from the input data using hierarchical multi-scale 3D convolutional sparse coding, a novel extension of the state-of-the-art learning-based sparse feature representation method. The proposed approach automatically generates high-dimensional feature vectors in up to 75 dimensions, which are then fed into an intelligent system built on a random forest classifier for accurately classifying voxels from only a handful of selection scribbles made directly on the input data by the user. We apply the probabilistic transfer function to further customize and refine the rendered result. The proposed method is more intuitive to use and more robust to noise in comparison with conventional intensity-based classification methods. We evaluate the proposed method using several synthetic and real-world volume datasets, and demonstrate the method's usability through a user study.
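The scribble-to-probability pipeline described above can be sketched end to end, with one major simplification: the per-voxel features here are just intensity and gradient magnitude, simple stand-ins for the paper's learned sparse-coding dictionary features. The random-forest stage and the per-voxel class probabilities (the "probabilistic transfer function" input) follow the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classify_voxels(volume, scribble_idx, scribble_labels):
    """Train a random forest on user-scribbled voxels, then predict a
    class-probability vector for every voxel in the volume.

    volume: 3D array; scribble_idx: flat indices of scribbled voxels;
    scribble_labels: their class labels.
    Returns an (n_voxels, n_classes) probability array."""
    gx, gy, gz = np.gradient(volume.astype(float))
    grad = np.sqrt(gx**2 + gy**2 + gz**2)
    # Per-voxel features: intensity + gradient magnitude (stand-ins for
    # the paper's 75-dimensional sparse-coding features).
    feats = np.stack([volume.ravel(), grad.ravel()], axis=1)
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(feats[scribble_idx], scribble_labels)
    return clf.predict_proba(feats)
```

The returned probabilities can then be mapped to opacity and color per class, which is where a probabilistic transfer function would take over.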
8. Kol TR, Klehm O, Seidel HP, Eisemann E. Expressive Single Scattering for Light Shaft Stylization. IEEE Transactions on Visualization and Computer Graphics 2017; 23:1753-1766. [PMID: 27101611] [DOI: 10.1109/tvcg.2016.2554114]
Abstract
Light scattering in participating media is a natural phenomenon that is increasingly featured in movies and games, as it is visually pleasing and lends realism to a scene. In art, it may further be used to express a certain mood or emphasize objects. Here, artists often rely on stylization when creating scattering effects, not only because of the complexity of physically correct scattering, but also to increase expressiveness. Little research, however, focuses on artistically influencing the simulation of the scattering process in a virtual 3D scene. We propose novel stylization techniques, enabling artists to change the appearance of single scattering effects such as light shafts. Users can add, remove, or enhance light shafts using occluder manipulation. The colors of the light shafts can be stylized and animated using easily modifiable transfer functions. Alternatively, our system can optimize a light map given a simple user input for a number of desired views in the 3D world. Finally, we enable artists to control the heterogeneity of the underlying medium. Our stylized scattering solution is easy to use and compatible with standard rendering pipelines. It works for animated scenes and can be executed in real time to provide the artist with quick feedback.
9. Opacity specification based on visibility ratio and occlusion vector in direct volume rendering. Biomedical Signal Processing and Control 2017. [DOI: 10.1016/j.bspc.2017.01.018]
10. Kwon BC, Kim H, Wall E, Choo J, Park H, Endert A. AxiSketcher: Interactive Nonlinear Axis Mapping of Visualizations through User Drawings. IEEE Transactions on Visualization and Computer Graphics 2017; 23:221-230. [PMID: 27514048] [DOI: 10.1109/tvcg.2016.2598446]
Abstract
Visual analytics techniques help users explore high-dimensional data. However, it is often challenging for users to express their domain knowledge in order to steer the underlying data model, especially when they have little attribute-level knowledge. Furthermore, users' complex, high-level domain knowledge, compared to low-level attributes, poses even greater challenges. To overcome these challenges, we introduce a technique to interpret a user's drawings with an interactive, nonlinear axis mapping approach called AxiSketcher. This technique enables users to impose their domain knowledge on a visualization by allowing interaction with data entries rather than with data attributes. The proposed interaction is performed by directly sketching lines over the visualization. Using this technique, users can draw lines over selected data points, and the system forms the axes that represent a nonlinear, weighted combination of multidimensional attributes. In this paper, we describe our techniques in three areas: 1) the design space of sketching methods for eliciting users' nonlinear domain knowledge; 2) the underlying model that translates users' input, extracts patterns behind the selected data points, and results in nonlinear axes reflecting users' complex intent; and 3) the interactive visualization for viewing, assessing, and reconstructing the newly formed, nonlinear axes.
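The core fitting step — turning a drawn stroke over selected points into attribute weights — can be illustrated with a linear least-squares stand-in. This is a deliberate simplification: AxiSketcher's axes are nonlinear, while the sketch below only recovers a single weighted combination of attributes from the user's ordering along the stroke.

```python
import numpy as np

def axis_from_sketch(X, t):
    """Fit per-attribute weights w so that the new axis coordinate
    X @ w approximates t, the position of each selected data point
    along the user's drawn stroke (a linear stand-in for the paper's
    nonlinear axis mapping).

    X: (n_points, n_attributes) data matrix of the selected points.
    t: (n_points,) stroke positions in [0, 1]."""
    w, *_ = np.linalg.lstsq(X, t, rcond=None)
    return w
```

Projecting all data points onto `w` then yields a derived axis whose ordering reflects the user's sketched intent over the selected points.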
11. Jönsson D, Falk M, Ynnerman A. Intuitive Exploration of Volumetric Data Using Dynamic Galleries. IEEE Transactions on Visualization and Computer Graphics 2016; 22:896-905. [PMID: 26390481] [DOI: 10.1109/tvcg.2015.2467294]
Abstract
In this work we present a volume exploration method designed to be used by novice users and visitors to science centers and museums. The volumetric digitization of artifacts in museums is of rapidly increasing interest, since interactive data visualization can greatly enhance the visitor experience. This is, however, a challenging task, since the vast majority of visitors are not familiar with the concepts commonly used in data exploration, such as mapping visual properties from values in the data domain using transfer functions. Interacting in the data domain is an effective way to filter away undesired information, but it is difficult to predict where the values lie in the spatial domain. In this work we make extensive use of dynamic previews instantly generated as the user explores the data domain. The previews allow the user to predict what effect changes in the data domain will have on the rendered image without being aware that visual parameters are set in the data domain. Each preview represents a subrange of the data domain, where overview and details are given on demand through zooming and panning. The method has been designed with touch interfaces as the target platform for interaction. We provide a qualitative evaluation performed with visitors to a science center to show the utility of the approach.
12. Schroeder D, Keefe DF. Visualization-by-Sketching: An Artist's Interface for Creating Multivariate Time-Varying Data Visualizations. IEEE Transactions on Visualization and Computer Graphics 2016; 22:877-885. [PMID: 26529734] [DOI: 10.1109/tvcg.2015.2467153]
Abstract
We present Visualization-by-Sketching, a direct-manipulation user interface for designing new data visualizations. The goals are twofold: First, make the process of creating real, animated, data-driven visualizations of complex information more accessible to artists, graphic designers, and other visual experts with traditional, non-technical training. Second, support and enhance the role of human creativity in visualization design, enabling visual experimentation and workflows similar to what is possible with traditional artistic media. The approach is to conceive of visualization design as a combination of processes that are already closely linked with visual creativity: sketching, digital painting, image editing, and reacting to exemplars. Rather than studying and tweaking low-level algorithms and their parameters, designers create new visualizations by painting directly on top of a digital data canvas, sketching data glyphs, and arranging and blending together multiple layers of animated 2D graphics. This requires new algorithms and techniques to interpret painterly user input relative to data "under" the canvas, balance artistic freedom with the need to produce accurate data visualizations, and interactively explore large (e.g., terabyte-sized) multivariate datasets. Results demonstrate that a variety of multivariate data visualization techniques can be rapidly recreated using the interface. More importantly, results and feedback from artists support the potential for interfaces in this style to attract new, creative users to the challenging task of designing more effective data visualizations and to help these users stay "in the creative zone" as they work.
13. Intuitive transfer function design for photographic volumes. Journal of Visualization 2014. [DOI: 10.1007/s12650-014-0267-5]
14. Qin H, Ye B, He R. The voxel visibility model: an efficient framework for transfer function design. Computerized Medical Imaging and Graphics 2014; 40:138-146. [PMID: 25510474] [DOI: 10.1016/j.compmedimag.2014.11.014]
Abstract
Volume visualization is an important task in medical imaging and surgical planning. However, determining an ideal transfer function remains challenging because of the lack of measurable metrics for the quality of a volume visualization. In this paper, we present the voxel visibility model as such a quality metric: users design the desired visibility for voxels instead of designing transfer functions directly, and transfer functions are obtained by minimizing the distance between the desired visibility distribution and the actual visibility distribution. The voxel visibility model is a mapping function from the feature attributes of voxels to their visibility. To account for between-class and within-class information simultaneously, the voxel visibility model is described as a Gaussian mixture model. To highlight important features, the matched result can be obtained by changing the parameters of the voxel visibility model through a simple and effective interface. We also propose an algorithm for transfer function optimization. The effectiveness of this method is demonstrated through experimental results on several volumetric datasets.
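The visibility-matching idea can be sketched in miniature. Assumptions in this sketch: the desired visibility is given directly as a per-intensity-bin vector (the paper models it with a Gaussian mixture), "actual visibility" is computed by front-to-back compositing along a single ray, and the optimizer is a crude coordinate search rather than the paper's algorithm.

```python
import numpy as np

def bin_visibility(opacity, ray_bins):
    """Front-to-back compositing along one ray of intensity-bin ids:
    a sample's visibility is its opacity times the transmittance of
    everything in front of it, accumulated per bin."""
    vis = np.zeros_like(opacity)
    transmittance = 1.0
    for b in ray_bins:
        vis[b] += opacity[b] * transmittance
        transmittance *= (1.0 - opacity[b])
    return vis

def fit_opacity(ray_bins, desired, n_bins, iters=50):
    """Coordinate search over per-bin opacities minimizing the squared
    distance between actual and desired visibility distributions."""
    opacity = np.full(n_bins, 0.5)
    candidates = np.linspace(0.0, 1.0, 21)
    for _ in range(iters):
        for b in range(n_bins):
            opacity[b] = min(candidates, key=lambda a: float(np.sum(
                (bin_visibility(
                    np.where(np.arange(n_bins) == b, a, opacity),
                    ray_bins) - desired) ** 2)))
    return opacity
```

Note how occlusion is handled automatically: to make a rear bin as visible as requested, the optimizer must raise its opacity to compensate for the transmittance lost to bins in front of it.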
Affiliation(s)
- Hongxing Qin
- Chongqing Key Laboratory of Computational Intelligence, Chongqing 400065, China; College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China.
- Bin Ye
- Chongqing Key Laboratory of Computational Intelligence, Chongqing 400065, China; College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Rui He
- Chongqing Key Laboratory of Computational Intelligence, Chongqing 400065, China; College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
15. Chen H, Chen W, Mei H, Liu Z, Zhou K, Chen W, Gu W, Ma KL. Visual Abstraction and Exploration of Multi-class Scatterplots. IEEE Transactions on Visualization and Computer Graphics 2014; 20:1683-1692. [PMID: 26356882] [DOI: 10.1109/tvcg.2014.2346594]
Abstract
Scatterplots are widely used to visualize scattered datasets for exploring outliers, clusters, local trends, and correlations. Depicting multi-class scattered points within a single scatterplot view, however, may suffer from heavy overdraw, making the view inefficient for data analysis. This paper presents a new visual abstraction scheme that employs a hierarchical multi-class sampling technique to produce a feature-preserving simplification. To enhance the density contrast, the colors of the multiple classes are optimized by taking the multi-class point distributions into account. We design a visual exploration system that supports visual inspection and quantitative analysis from different perspectives. We have applied our system to several challenging datasets, and the results demonstrate the efficiency of our approach.
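The sampling idea can be illustrated with a flat, density-proportional stand-in for the paper's hierarchical multi-class sampling: allocate a point budget across classes in proportion to their share of the data, so relative class densities survive the overdraw-reducing subsampling. The proportional quota rule is an assumption of this sketch.

```python
import numpy as np

def multiclass_sample(points, labels, budget, rng=None):
    """Subsample a multi-class scatterplot down to `budget` points
    while keeping each class's share of the plot roughly intact.

    points: (n, 2) coordinates; labels: per-point class ids."""
    rng = rng or np.random.default_rng(0)
    labels = np.asarray(labels)
    keep = []
    classes, counts = np.unique(labels, return_counts=True)
    for cls, cnt in zip(classes, counts):
        # Budget share proportional to the class's point count.
        quota = max(1, round(budget * cnt / len(labels)))
        idx = np.flatnonzero(labels == cls)
        keep.extend(rng.choice(idx, size=min(quota, cnt), replace=False))
    return np.asarray(points)[keep], labels[keep]
```

The `max(1, ...)` floor keeps rare classes visible, a small nod to feature preservation; the paper's hierarchical scheme goes much further by preserving local density structure within each class.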
16. Ren D, Höllerer T, Yuan X. iVisDesigner: Expressive Interactive Design of Information Visualizations. IEEE Transactions on Visualization and Computer Graphics 2014; 20:2092-2101. [PMID: 26356923] [DOI: 10.1109/tvcg.2014.2346291]
Abstract
We present the design, implementation and evaluation of iVisDesigner, a web-based system that enables users to design information visualizations for complex datasets interactively, without the need for textual programming. Our system achieves high interactive expressiveness through conceptual modularity, covering a broad information visualization design space. iVisDesigner supports the interactive design of interactive visualizations, such as provisioning for responsive graph layouts and different types of brushing and linking interactions. We present the system design and implementation, exemplify it through a variety of illustrative visualization designs and discuss its limitations. A performance analysis and an informal user study are presented to evaluate the system.
17. Mindek P, Gröller M, Bruckner S. Managing Spatial Selections With Contextual Snapshots. Computer Graphics Forum 2014; 33:132-144. [PMID: 25821284] [PMCID: PMC4372140] [DOI: 10.1111/cgf.12406]
Abstract
Spatial selections are a ubiquitous concept in visualization. By localizing particular features, they can be analysed and compared in different views. However, the semantics of such selections often depend on specific parameter settings and it can be difficult to reconstruct them without additional information. In this paper, we present the concept of contextual snapshots as an effective means for managing spatial selections in visualized data. The selections are automatically associated with the context in which they have been created. Contextual snapshots can also be used as the basis for interactive integrated and linked views, which enable in-place investigation and comparison of multiple visual representations of data. Our approach is implemented as a flexible toolkit with well-defined interfaces for integration into existing systems. We demonstrate the power and generality of our techniques by applying them to several distinct scenarios such as the visualization of simulation data, the analysis of historical documents and the display of anatomical data.
Affiliation(s)
- P Mindek
- Vienna University of Technology, Austria
- M E Gröller
- Vienna University of Technology, Austria
- VRVis Research Center, Austria
20. Shan G, Xie M, Li F, Gao Y, Chi X. Interactive visual exploration of halos in large-scale cosmology simulation. Journal of Visualization 2014. [DOI: 10.1007/s12650-014-0206-5]
21. Klehm O, Ihrke I, Seidel HP, Eisemann E. Property and Lighting Manipulations for Static Volume Stylization Using a Painting Metaphor. IEEE Transactions on Visualization and Computer Graphics 2014; 20:983-995. [PMID: 26357355] [DOI: 10.1109/tvcg.2014.13]
Abstract
Although volumetric phenomena are important for realistic rendering and can even be a crucial component in the image, the artistic control of the volume's appearance is challenging. Appropriate tools to edit volume properties are missing, which can make it necessary to use simulation results directly. Alternatively, high-level modifications that are rarely intuitive, e.g., the tweaking of noise function parameters, can be utilized. Our work introduces a solution to stylize single-scattering volumetric effects in static volumes. Hereby, an artistic and intuitive control of emission, scattering and extinction becomes possible, while ensuring a smooth and coherent appearance when changing the viewpoint. Our method is based on tomographic reconstruction, which we link to the volumetric rendering equation. It analyzes a number of target views provided by the artist and adapts the volume properties to match the appearance for the given perspectives. Additionally, we describe how we can optimize for the environmental lighting to match a desired scene appearance, while keeping volume properties constant. Finally, both techniques can be combined. We demonstrate several use cases of our approach and illustrate its effectiveness.
22. Adeshina AM, Hashim R, Khalid NEA, Abidin SZZ. Multimodal 3-D reconstruction of human anatomical structures using SurLens Visualization System. Interdisciplinary Sciences: Computational Life Sciences 2013; 5:23-36. [PMID: 23605637] [DOI: 10.1007/s12539-013-0155-z]
Abstract
In medical diagnosis and treatment planning, radiologists and surgeons rely heavily on the slices produced by medical imaging devices. Unfortunately, these image scanners can only present the 3-D human anatomical structure in 2-D. Traditionally, this requires the medical professionals concerned to study and analyze the 2-D images based on their expert experience. This is tedious, time-consuming, and prone to error, especially when certain features occlude the desired region of interest. Reconstruction procedures were proposed earlier to handle such situations. However, 3-D reconstruction systems require high-performance computation and longer processing times, and integrating an efficient reconstruction system into clinical procedures entails high cost. Previously, reconstruction of the brain's blood vessels from MRA was achieved using the SurLens Visualization System. However, adapting such a system to other image modalities, applicable to the entire human anatomical structure, would be a meaningful contribution towards a resourceful system for medical diagnosis and disease therapy. This paper attempts to adapt SurLens to the visualization of abnormalities in human anatomical structures using CT and MR images. The study was evaluated with brain MR images from the Department of Surgery, University of North Carolina, United States, and abdominal-pelvic CT images from the Swedish National Infrastructure for Computing. The MR images contain around 109 datasets each of T1-FLASH, T2-weighted, DTI, and T1-MPRAGE. Significantly, visualization of the human anatomical structure was achieved without prior segmentation. SurLens was adapted to visualize and display abnormalities, such as indications of Waldenström's macroglobulinemia, stroke, and penetrating brain injury, in the human brain using Magnetic Resonance (MR) images. Moreover, possible abnormalities in the abdominal pelvis were also visualized using Computed Tomography (CT) slices. The study demonstrates SurLens' functionality as a 3-D multimodal visualization system.
Affiliation(s)
- A M Adeshina
- Faculty of Computer Science & Information Technology, Universiti Tun Hussein Onn Malaysia, 86400, Parit Raja, Batu Pahat, Johor, Malaysia.