1. Tang K, Wang C. StyleRF-VolVis: Style Transfer of Neural Radiance Fields for Expressive Volume Visualization. IEEE Trans Vis Comput Graph 2025; 31:613-623. [PMID: 39255154] [DOI: 10.1109/tvcg.2024.3456342]
Abstract
In volume visualization, visualization synthesis has attracted much attention due to its ability to generate novel visualizations without following the conventional rendering pipeline. However, existing solutions based on generative adversarial networks often require many training images and significant training time, and issues of low quality, poor consistency, and limited flexibility persist. This paper introduces StyleRF-VolVis, an innovative style transfer framework for expressive volume visualization (VolVis) via neural radiance field (NeRF). The expressiveness of StyleRF-VolVis is upheld by its ability to accurately separate the underlying scene geometry (i.e., content) and color appearance (i.e., style), conveniently modify the color, opacity, and lighting of the original rendering while maintaining visual content consistency across views, and effectively transfer arbitrary styles from reference images to the reconstructed 3D scene. To achieve these goals, we design a base NeRF model for scene geometry extraction, a palette color network that classifies regions of the radiance field for photorealistic editing, and an unrestricted color network that lifts the color palette constraint via knowledge distillation for non-photorealistic editing. We demonstrate the superior quality, consistency, and flexibility of StyleRF-VolVis by experimenting with various volume rendering scenes and reference images and by comparing StyleRF-VolVis against other image-based (AdaIN), video-based (ReReVST), and NeRF-based (ARF and SNeRF) style rendering solutions.
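The palette color network is the piece that makes photorealistic recoloring tractable, and its core constraint is easy to illustrate. Below is a minimal PyTorch sketch of a palette-constrained color head, assuming per-sample geometry features from the base NeRF; the class name, layer sizes, and editing call are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class PaletteColorHead(nn.Module):
    """Convex-combination color head; palette entries stay directly editable."""
    def __init__(self, feat_dim=64, num_palette=6):
        super().__init__()
        # Editable palette: num_palette RGB anchors, squashed to [0, 1] by sigmoid.
        self.palette = nn.Parameter(torch.rand(num_palette, 3))
        # Predicts per-sample palette weights from base-NeRF geometry features.
        self.weight_mlp = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, num_palette),
        )

    def forward(self, feats):
        # feats: (N, feat_dim) features for N radiance-field samples.
        weights = torch.softmax(self.weight_mlp(feats), dim=-1)  # convex weights
        colors = torch.sigmoid(self.palette)                     # (P, 3)
        return weights @ colors                                  # (N, 3) RGB

head = PaletteColorHead()
rgb = head(torch.randn(1024, 64))
# Photorealistic recoloring: overwrite one palette entry; every sample whose
# weights favor that entry shifts color consistently across all views.
with torch.no_grad():
    head.palette[0] = torch.logit(torch.tensor([0.9, 0.1, 0.1]))
```

Because every sample's color is a convex combination of palette entries, editing one entry recolors its region without disturbing the scene geometry.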
2. Han J, Zheng H, Bi C. KD-INR: Time-Varying Volumetric Data Compression via Knowledge Distillation-Based Implicit Neural Representation. IEEE Trans Vis Comput Graph 2024; 30:6826-6838. [PMID: 38127599] [DOI: 10.1109/tvcg.2023.3345373]
Abstract
Traditional deep learning algorithms assume that all data is available during training, which presents challenges when handling large-scale time-varying data. To address this issue, we propose a data reduction pipeline called knowledge distillation-based implicit neural representation (KD-INR) for compressing large-scale time-varying data. The approach consists of two stages: spatial compression and model aggregation. In the first stage, each time step is compressed using an implicit neural representation with bottleneck layers and a sampling strategy that preserves features of interest. In the second stage, we utilize an offline knowledge distillation algorithm to extract knowledge from the trained models and aggregate it into a single model. We evaluated our approach on a variety of time-varying volumetric data sets. Both quantitative and qualitative results, such as PSNR, LPIPS, and rendered images, demonstrate that KD-INR surpasses state-of-the-art approaches, including learning-based (i.e., CoordNet, NeurComp, and SIREN) and lossy compression (i.e., SZ3, ZFP, and TTHRESH) methods, at compression ratios ranging from hundreds to ten thousand.
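The two-stage pipeline lends itself to a compact sketch. The following PyTorch toy, with assumed layer sizes and a plain MLP standing in for the paper's bottleneck INR, shows stage-2 offline distillation: the student only ever queries the frozen per-timestep teachers, never the raw data.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_inr(in_dim, hidden=128, out_dim=1):
    # Plain coordinate MLP standing in for the paper's INR with bottleneck layers.
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )

# Stage 1 (assumed already done): one trained INR per time step.
teachers = [make_inr(3) for _ in range(4)]
# Stage 2: a single student conditioned on (x, y, z, t) absorbs all teachers.
student = make_inr(4)
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

for step in range(100):
    xyz = torch.rand(2048, 3) * 2 - 1                  # coords in [-1, 1]^3
    k = torch.randint(len(teachers), (1,)).item()      # pick a time step
    with torch.no_grad():
        target = teachers[k](xyz)                      # teacher "knowledge"
    t = torch.full((xyz.shape[0], 1), k / (len(teachers) - 1))
    loss = F.mse_loss(student(torch.cat([xyz, t], dim=1)), target)
    opt.zero_grad(); loss.backward(); opt.step()
```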
3. He X, Tao Y, Yang S, Dai H, Lin H. voxel2vec: A Natural Language Processing Approach to Learning Distributed Representations for Scientific Data. IEEE Trans Vis Comput Graph 2023; 29:4296-4311. [PMID: 35797320] [DOI: 10.1109/tvcg.2022.3189094]
Abstract
Relationships in scientific data, such as the numerical and spatial distribution relations of features in univariate data, the relations among scalar-value combinations in multivariate data, and the association of volumes in time-varying and ensemble data, are intricate and complex. This paper presents voxel2vec, a novel unsupervised representation learning model, which is used to learn distributed representations of scalar values/scalar-value combinations in a low-dimensional vector space. Its basic assumption is that if two scalar values/scalar-value combinations have similar contexts, they usually have high similarity in terms of features. By representing scalar values/scalar-value combinations as symbols, voxel2vec learns the similarity between them in the context of spatial distribution and then allows us to explore the overall association between volumes by transfer prediction. We demonstrate the usefulness and effectiveness of voxel2vec by comparing it with the isosurface similarity map of univariate data and by applying the learned distributed representations to feature classification for multivariate data and to association analysis for time-varying and ensemble data.
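The word2vec analogy maps directly to code. Here is a minimal skip-gram-with-negative-sampling sketch of the idea, where quantized scalar values play the role of words and spatially neighboring voxels supply their context; the bin count, neighborhood choice, and sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

num_bins, dim = 256, 32                               # assumed quantization/width
center_emb = nn.Embedding(num_bins, dim)
context_emb = nn.Embedding(num_bins, dim)
opt = torch.optim.Adam(
    list(center_emb.parameters()) + list(context_emb.parameters()), lr=1e-3)

vol = torch.randint(num_bins, (32, 32, 32))           # toy quantized volume

for step in range(100):
    # Sample center voxels and one axis-aligned spatial neighbor as context.
    idx = torch.randint(1, 31, (512, 3))
    center = vol[idx[:, 0], idx[:, 1], idx[:, 2]]
    context = vol[idx[:, 0] + 1, idx[:, 1], idx[:, 2]]
    negative = torch.randint(num_bins, (512,))        # noise "words"
    c = center_emb(center)
    pos = (c * context_emb(context)).sum(-1)
    neg = (c * context_emb(negative)).sum(-1)
    # Negative sampling: pull co-occurring values together, push noise apart.
    loss = -(F.logsigmoid(pos) + F.logsigmoid(-neg)).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

Scalar values that tend to share spatial neighborhoods end up with nearby embedding vectors, which is what the similarity analyses rely on.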
4. Shi N, Xu J, Li H, Guo H, Woodring J, Shen HW. VDL-Surrogate: A View-Dependent Latent-based Model for Parameter Space Exploration of Ensemble Simulations. IEEE Trans Vis Comput Graph 2023; 29:820-830. [PMID: 36166538] [DOI: 10.1109/tvcg.2022.3209413]
Abstract
We propose VDL-Surrogate, a view-dependent neural-network-latent-based surrogate model for parameter space exploration of ensemble simulations that allows high-resolution visualizations and user-specified visual mappings. Surrogate-enabled parameter space exploration allows domain scientists to preview simulation results without having to run a large number of computationally costly simulations. Limited by computational resources, however, existing surrogate models may not produce previews with sufficient resolution for visualization and analysis. To improve the efficient use of computational resources and support high-resolution exploration, we perform ray casting from different viewpoints to collect samples and produce compact latent representations. This latent encoding process reduces the cost of surrogate model training while maintaining the output quality. In the model training stage, we select viewpoints to cover the whole viewing sphere and train corresponding VDL-Surrogate models for the selected viewpoints. In the model inference stage, we predict the latent representations at the previously selected viewpoints and decode them to data space. For any given viewpoint, we interpolate the decoded data from the selected viewpoints and generate visualizations with user-specified visual mappings. We show the effectiveness and efficiency of VDL-Surrogate in cosmological and ocean simulations with quantitative and qualitative evaluations. Source code is publicly available at https://github.com/trainsn/VDL-Surrogate.
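The inference path (predict latents at selected viewpoints, decode, then interpolate for a query viewpoint) can be sketched briefly. In the toy below all shapes are assumed, and the angular-proximity blending is an illustrative choice rather than the paper's exact interpolation scheme.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, out_dim, n_views = 64, 128 * 128, 8       # assumed sizes
# Surrogate: simulation parameters -> one latent per selected viewpoint.
surrogate = nn.Sequential(nn.Linear(3, 128), nn.ReLU(),
                          nn.Linear(128, n_views * latent_dim))
# Decoder: latent -> view-dependent data (flattened image-space samples).
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                        nn.Linear(256, out_dim))
view_dirs = F.normalize(torch.randn(n_views, 3), dim=1)  # selected viewpoints

def render_preview(params, query_dir, k=3):
    z = surrogate(params).view(n_views, latent_dim)   # predicted latents
    decoded = decoder(z)                              # decode at each viewpoint
    sim = view_dirs @ F.normalize(query_dir, dim=0)   # angular proximity
    w, idx = torch.topk(sim, k)                       # k nearest viewpoints
    w = torch.softmax(w, dim=0)
    return (w[:, None] * decoded[idx]).sum(0)         # blended preview

preview = render_preview(torch.tensor([0.1, 0.5, -0.3]),
                         torch.tensor([0.0, 0.0, 1.0]))
```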
5. Shen J, Li H, Xu J, Biswas A, Shen HW. IDLat: An Importance-Driven Latent Generation Method for Scientific Data. IEEE Trans Vis Comput Graph 2023; 29:679-689. [PMID: 36166537] [DOI: 10.1109/tvcg.2022.3209419]
Abstract
Deep learning based latent representations have been widely used for numerous scientific visualization applications, such as isosurface similarity analysis, volume rendering, flow field synthesis, and data reduction, just to name a few. However, existing latent representations are mostly generated from raw data in an unsupervised manner, which makes it difficult to incorporate domain interest to control the size of the latent representations and the quality of the reconstructed data. In this paper, we present a novel importance-driven latent representation to facilitate domain-interest-guided scientific data visualization and analysis. We utilize spatial importance maps to represent various scientific interests and take them as the input to a feature transformation network to guide latent generation. We further reduce the latent size with a lossless entropy encoding algorithm trained together with the autoencoder, improving storage and memory efficiency. We qualitatively and quantitatively evaluate the effectiveness and efficiency of latent representations generated by our method with data from multiple scientific visualization applications.
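The importance-driven idea reduces to two touches on a standard autoencoder: feed the importance map in alongside the data and weight the loss by it. A minimal 3D sketch with assumed channel counts follows; the paper's feature transformation network and learned entropy coder are omitted.

```python
import torch
import torch.nn as nn

class ImportanceAE(nn.Module):
    def __init__(self):
        super().__init__()
        # 2-channel input: scalar field + spatial importance map.
        self.enc = nn.Sequential(
            nn.Conv3d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 8, 3, stride=2, padding=1),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(8, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, field, importance):
        z = self.enc(torch.cat([field, importance], dim=1))
        return self.dec(z), z

model = ImportanceAE()
field = torch.rand(1, 1, 32, 32, 32)
imp = torch.rand(1, 1, 32, 32, 32)                 # domain-interest map
recon, latent = model(field, imp)
# Importance-weighted reconstruction: errors in high-interest voxels cost more,
# so the latent capacity is spent where the domain scientist cares.
loss = (imp * (recon - field) ** 2).mean()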
6. Shi N, Xu J, Wurster SW, Guo H, Woodring J, Van Roekel LP, Shen HW. GNN-Surrogate: A Hierarchical and Adaptive Graph Neural Network for Parameter Space Exploration of Unstructured-Mesh Ocean Simulations. IEEE Trans Vis Comput Graph 2022; 28:2301-2313. [PMID: 35389867] [DOI: 10.1109/tvcg.2022.3165345]
Abstract
We propose GNN-Surrogate, a graph neural network-based surrogate model to explore the parameter space of ocean climate simulations. Parameter space exploration is important for domain scientists to understand the influence of input parameters (e.g., wind stress) on the simulation output (e.g., temperature). Such exploration requires scientists to cover a complicated parameter space by running batches of computationally expensive simulations. Our approach improves the efficiency of parameter space exploration with a surrogate model that predicts the simulation outputs accurately and efficiently. Specifically, GNN-Surrogate predicts the output field for given simulation parameters, so scientists can explore the simulation parameter space with visualizations from user-specified visual mappings. Moreover, our graph-based techniques are designed for unstructured meshes, making the exploration of simulation outputs on irregular grids efficient. For efficient training, we generate hierarchical graphs and use adaptive resolutions. We give quantitative and qualitative evaluations on the MPAS-Ocean simulation to demonstrate the effectiveness and efficiency of GNN-Surrogate. Source code is publicly available at https://github.com/trainsn/GNN-Surrogate.
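A toy version of the surrogate's forward pass fits in a few lines: lift the simulation parameters to per-node features on the mesh graph, propagate over neighbors, and read out the field. The sketch below uses plain mean aggregation on a random stand-in graph; the paper's hierarchical graphs and adaptive resolutions are omitted, and all sizes are assumptions.

```python
import torch
import torch.nn as nn

n_nodes, hidden = 500, 32                             # toy mesh size
edges = torch.randint(n_nodes, (2, 2000))             # stand-in mesh edges
adj = torch.sparse_coo_tensor(edges, torch.ones(edges.shape[1]),
                              (n_nodes, n_nodes))
deg = torch.sparse.sum(adj, dim=1).to_dense().clamp(min=1).unsqueeze(1)

lift = nn.Linear(3, hidden)                           # params -> node features
layers = nn.ModuleList([nn.Linear(hidden, hidden) for _ in range(3)])
head = nn.Linear(hidden, 1)                           # per-node output field

def predict_field(params):
    h = lift(params).repeat(n_nodes, 1)               # broadcast to all nodes
    for layer in layers:
        # Mean aggregation over mesh neighbors, then a learned update.
        h = torch.relu(layer(torch.sparse.mm(adj, h) / deg))
    return head(h).squeeze(1)                         # e.g., temperature per node

field = predict_field(torch.tensor([0.2, -0.1, 0.7]))  # e.g., wind-stress params
```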
7. Han J, Wang C. SSR-TVD: Spatial Super-Resolution for Time-Varying Data Analysis and Visualization. IEEE Trans Vis Comput Graph 2022; 28:2445-2456. [PMID: 33074824] [DOI: 10.1109/tvcg.2020.3032123]
Abstract
We present SSR-TVD, a novel deep learning framework that produces coherent spatial super-resolution (SSR) of time-varying data (TVD) using adversarial learning. In scientific visualization, SSR-TVD is the first work that applies the generative adversarial network (GAN) to generate high-resolution volumes for three-dimensional time-varying data sets. The design of SSR-TVD includes a generator and two discriminators (spatial and temporal discriminators). The generator takes a low-resolution volume as input and outputs a synthesized high-resolution volume. To capture spatial and temporal coherence in the volume sequence, the two discriminators take the synthesized high-resolution volume(s) as input and produce a score indicating the realness of the volume(s). Our method can work in the in situ visualization setting by downscaling volumetric data from selected time steps as the simulation runs and upscaling downsampled volumes to their original resolution during postprocessing. To demonstrate the effectiveness of SSR-TVD, we show quantitative and qualitative results with several time-varying data sets of different characteristics and compare our method against volume upscaling using bicubic interpolation and a solution solely based on CNN.
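The generator-plus-two-discriminators design can be summarized in a short sketch. Channel counts, the 2x scale factor, and the loss form below are assumptions; the point is only that the spatial discriminator scores single volumes while the temporal discriminator scores short windows stacked along the channel axis.

```python
import torch
import torch.nn as nn

G = nn.Sequential(                                  # LR volume -> 2x SR volume
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2, mode='trilinear', align_corners=False),
    nn.Conv3d(16, 1, 3, padding=1),
)
D_spatial = nn.Sequential(                          # realness of one volume
    nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv3d(16, 1, 3, stride=2, padding=1),
)
D_temporal = nn.Sequential(                         # realness of a 3-step window
    nn.Conv3d(3, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv3d(16, 1, 3, stride=2, padding=1),
)

lr_seq = torch.rand(3, 1, 16, 16, 16)               # three LR time steps
hr_fake = G(lr_seq)                                 # (3, 1, 32, 32, 32)
score_s = D_spatial(hr_fake[0:1]).mean()            # spatial coherence signal
score_t = D_temporal(hr_fake.transpose(0, 1)).mean()  # temporal coherence signal
g_loss = -(score_s + score_t)                       # illustrative generator loss
```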
8. Festag S, Denzler J, Spreckelsen C. Generative Adversarial Networks for Biomedical Time Series Forecasting and Imputation: A Systematic Review. J Biomed Inform 2022; 129:104058. [DOI: 10.1016/j.jbi.2022.104058]
9. Han J, Zheng H, Chen DZ, Wang C. STNet: An End-to-End Generative Framework for Synthesizing Spatiotemporal Super-Resolution Volumes. IEEE Trans Vis Comput Graph 2022; 28:270-280. [PMID: 34587051] [DOI: 10.1109/tvcg.2021.3114815]
Abstract
We present STNet, an end-to-end generative framework that synthesizes spatiotemporal super-resolution volumes with high fidelity for time-varying data. STNet includes two modules: a generator and a spatiotemporal discriminator. The generator takes two low-resolution volumes at the ends of a time interval as input and outputs the intermediate and the two end spatiotemporal super-resolution volumes. The spatiotemporal discriminator, leveraging convolutional long short-term memory, accepts a spatiotemporal super-resolution sequence as input and predicts a conditional score for each volume based on its spatial (the volume itself) and temporal (the previous volumes) information. We propose an unsupervised pre-training stage using cycle loss to improve the generalization of STNet. Once trained, STNet can generate spatiotemporal super-resolution volumes from low-resolution ones, offering scientists an option to save data storage (i.e., sparsely sampling the simulation output in both spatial and temporal dimensions). We compare STNet with the baseline bicubic+linear interpolation, two deep learning solutions (SSR+TSF, STD), and a state-of-the-art tensor compression solution (TTHRESH) to show the effectiveness of STNet.
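The unsupervised cycle-loss pre-training admits a compact illustration: with no high-resolution ground truth, the generator's output is downsampled back and compared against its own low-resolution inputs. Shapes and the tiny generator below are assumptions, not the STNet architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySTGen(nn.Module):
    """Maps two end LR volumes to three (start, middle, end) SR volumes."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode='trilinear', align_corners=False),
            nn.Conv3d(16, 3, 3, padding=1),
        )

    def forward(self, lr_pair):                     # (B, 2, D, H, W)
        return self.net(lr_pair)                    # (B, 3, 2D, 2H, 2W)

G = TinySTGen()
lr_pair = torch.rand(1, 2, 16, 16, 16)
sr = G(lr_pair)
# Cycle loss: downsample the two end SR volumes and match the LR inputs.
sr_ends = sr[:, [0, 2]]
cycle = F.mse_loss(F.interpolate(sr_ends, scale_factor=0.5,
                                 mode='trilinear', align_corners=False),
                   lr_pair)
```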
10. Dai H, Tao Y, He X, Lin H. IsoExplorer: An Isosurface-Driven Framework for 3D Shape Analysis of Biomedical Volume Data. J Vis (Tokyo) 2021; 24:1253-1266. [PMID: 34429686] [PMCID: PMC8376112] [DOI: 10.1007/s12650-021-00770-2]
Abstract
The high-resolution scanning devices developed in recent decades provide biomedical volume datasets that support the study of molecular structure and drug design. Isosurface analysis is an important tool in these studies, and the key is to construct suitable description vectors to support subsequent tasks, such as classification and retrieval. Traditional methods based on handcrafted features are insufficient for dealing with complex structures, while deep learning-based approaches have high memory and computation costs when dealing directly with volume data. To address these problems, we propose IsoExplorer, an isosurface-driven framework for 3D shape analysis of biomedical volume data. We first extract isosurfaces from volume data and split them into individual 3D shapes according to their connectivity. Then, we utilize octree-based convolution to design a variational autoencoder model that learns the latent representations of the shape. Finally, these latent representations are used for low-dimensional isosurface representation and shape retrieval. We demonstrate the effectiveness and usefulness of IsoExplorer via isosurface similarity analysis, shape retrieval of real-world data, and comparison with existing methods.
Affiliation(s)
- Haoran Dai, State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China
- Yubo Tao, State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China
- Xiangyang He, State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China
- Hai Lin, State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China
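A sketch of the representation-learning stage described in the abstract above: each isosurface component, voxelized to an occupancy grid, is encoded by a VAE whose latent mean drives retrieval. Note that the paper uses octree-based convolutions; the dense 3D CNN and all sizes here are stand-in assumptions.

```python
import torch
import torch.nn as nn

class ShapeVAE(nn.Module):
    def __init__(self, latent=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.mu = nn.Linear(32 * 8 ** 3, latent)
        self.logvar = nn.Linear(32 * 8 ** 3, latent)
        self.dec = nn.Sequential(
            nn.Linear(latent, 32 * 8 ** 3), nn.Unflatten(1, (32, 8, 8, 8)),
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, occ):                         # (B, 1, 32, 32, 32)
        h = self.enc(occ)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar

vae = ShapeVAE()
occ = (torch.rand(2, 1, 32, 32, 32) > 0.5).float()  # toy occupancy grids
recon, mu, logvar = vae(occ)
# Retrieval: rank shapes by cosine similarity of their latent means.
sim = nn.functional.cosine_similarity(mu[0:1], mu[1:2])
```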