1. de Silva A, Zhao M, Stewart D, Khan FH, Dusek G, Davis J, Pang A. RipViz: Finding Rip Currents by Learning Pathline Behavior. IEEE Transactions on Visualization and Computer Graphics 2024; 30:3930-3944. PMID: 37022897. DOI: 10.1109/tvcg.2023.3243834.
Abstract
We present a hybrid machine learning and flow analysis feature detection method, RipViz, to extract rip currents from stationary videos. Rip currents are dangerous, strong currents that can drag beachgoers out to sea. Most people are either unaware of them or do not know what they look like. In some instances, even trained personnel such as lifeguards have difficulty identifying them. RipViz produces a simple, easy-to-understand visualization of rip location overlaid on the source video. With RipViz, we first obtain an unsteady 2D vector field from the stationary video using optical flow. Movement at each pixel is analyzed over time. At each seed point, sequences of short pathlines, rather than a single long pathline, are traced across the frames of the video to better capture the quasi-periodic flow behavior of wave activity. Because of motion on the beach, in the surf zone, and in the surrounding areas, these pathlines may still appear very cluttered and incomprehensible. Furthermore, lay audiences are not familiar with pathlines and may not know how to interpret them. To address this, we treat rip currents as a flow anomaly in an otherwise normal flow. To learn the normal flow behavior, we train an LSTM autoencoder with pathline sequences from normal ocean, foreground, and background movements. At test time, we use the trained LSTM autoencoder to detect anomalous pathlines (i.e., those in the rip zone). The origination points of such anomalous pathlines, over the course of the video, are then presented as points within the rip zone. RipViz is fully automated and does not require user input. Feedback from domain experts suggests that RipViz has the potential for wider use.
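The detector described above is a standard reconstruction-based anomaly setup. Below is a minimal PyTorch sketch of an LSTM autoencoder of this kind, assuming pathlines are resampled to fixed-length sequences of 2D points; the class, sizes, and training data are hypothetical stand-ins, not the authors' implementation. Pathlines with high reconstruction error would be flagged as rip-zone candidates.

```python
import torch
import torch.nn as nn

class PathlineAutoencoder(nn.Module):
    """LSTM autoencoder: encode a pathline sequence, then reconstruct it."""
    def __init__(self, point_dim=2, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(point_dim, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, point_dim)

    def forward(self, x):                      # x: (batch, seq_len, 2)
        _, (h, _) = self.encoder(x)            # h: (1, batch, hidden)
        # Repeat the final hidden state as the decoder input at every step.
        z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)
        y, _ = self.decoder(z)
        return self.out(y)                     # reconstructed pathline

def anomaly_scores(model, pathlines):
    """Per-pathline mean squared reconstruction error."""
    with torch.no_grad():
        recon = model(pathlines)
        return ((recon - pathlines) ** 2).mean(dim=(1, 2))

# Train on "normal" pathlines only; at test time, high scores mark anomalies.
model = PathlineAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
normal = torch.randn(256, 40, 2)               # placeholder for traced pathlines
for _ in range(5):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(normal), normal)
    loss.backward()
    opt.step()
```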
2. Ye S, Chen Z, Chu X, Li K, Luo J, Li Y, Geng G, Wu Y. PuzzleFixer: A Visual Reassembly System for Immersive Fragments Restoration. IEEE Transactions on Visualization and Computer Graphics 2023; 29:429-439. PMID: 36179001. DOI: 10.1109/tvcg.2022.3209388.
Abstract
We present PuzzleFixer, an immersive interactive system for experts to rectify defective reassembled 3D objects. Reassembling the fragments of a broken object to restore its original state is a prerequisite for many analytical tasks, such as cultural relics analysis and forensic reasoning. While existing computer-aided methods can automatically reassemble fragments, they often derive incorrect objects due to the complex and ambiguous fragment shapes. Thus, experts usually need to refine the object manually. Prior advances in immersive technologies provide benefits for realistic perception and direct interaction when visualizing and manipulating 3D fragments. However, few studies have investigated the refinement of reassembled objects. The specific challenges include: 1) the fragment combination set is too large to determine the correct matches, and 2) the geometry of the fragments is too complex to align them properly. To tackle the first challenge, PuzzleFixer leverages dimensionality reduction and clustering techniques, allowing users to review possible match categories, select the matches with reasonable shapes, and drill down to individual shapes to correct the corresponding faces. For the second challenge, PuzzleFixer embeds the object with node-link networks to augment the perception of match relations. Specifically, it instantly visualizes matches with graph edges and provides force feedback to make alignment interactions more efficient. To demonstrate the effectiveness of PuzzleFixer, we conducted an expert evaluation based on two cases with real-world artifacts and collected feedback through post-study interviews. The results suggest that our system is suitable and efficient for experts to refine incorrectly reassembled objects.
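The abstract does not name the specific dimensionality reduction or clustering algorithms, so the sketch below uses PCA and k-means from scikit-learn as generic stand-ins, with synthetic match descriptors as placeholders. It only illustrates how candidate fragment matches could be grouped into reviewable categories.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical per-match descriptors (e.g., contact area, curvature statistics).
rng = np.random.default_rng(0)
match_features = rng.normal(size=(500, 32))

# Project to 2D so candidate matches can be laid out for visual review...
embedding = PCA(n_components=2).fit_transform(match_features)

# ...and cluster them into match categories the expert can inspect in turn.
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(match_features)
for k in range(8):
    print(f"category {k}: {(labels == k).sum()} candidate matches")
```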
3. Shen J, Li H, Xu J, Biswas A, Shen HW. IDLat: An Importance-Driven Latent Generation Method for Scientific Data. IEEE Transactions on Visualization and Computer Graphics 2023; 29:679-689. PMID: 36166537. DOI: 10.1109/tvcg.2022.3209419.
Abstract
Deep learning based latent representations have been widely used for numerous scientific visualization applications, such as isosurface similarity analysis, volume rendering, flow field synthesis, and data reduction, to name a few. However, existing latent representations are mostly generated from raw data in an unsupervised manner, which makes it difficult to incorporate domain interest to control the size of the latent representations and the quality of the reconstructed data. In this paper, we present a novel importance-driven latent representation to facilitate domain-interest-guided scientific data visualization and analysis. We utilize spatial importance maps to represent various scientific interests and take them as the input to a feature transformation network to guide latent generation. We further reduce the latent size with a lossless entropy encoding algorithm trained together with the autoencoder, improving storage and memory efficiency. We qualitatively and quantitatively evaluate the effectiveness and efficiency of the latent representations generated by our method with data from multiple scientific visualization applications.
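One simple way to condition a latent representation on domain interest is to feed the importance map to the encoder alongside the data and weight the reconstruction loss by it. The PyTorch sketch below does exactly that; the architecture, sizes, and loss weighting are illustrative assumptions, not the IDLat network.

```python
import torch
import torch.nn as nn

class ImportanceGuidedAE(nn.Module):
    """Autoencoder whose encoder sees the scalar field plus an importance map."""
    def __init__(self):
        super().__init__()
        # Input: 2 channels (data + importance), 3D volumes.
        self.encoder = nn.Sequential(
            nn.Conv3d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 4, 3, stride=2, padding=1),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(4, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, volume, importance):
        z = self.encoder(torch.cat([volume, importance], dim=1))
        return self.decoder(z), z

vol = torch.randn(1, 1, 32, 32, 32)
imp = torch.rand(1, 1, 32, 32, 32)     # e.g., higher near features of interest
recon, latent = ImportanceGuidedAE()(vol, imp)
# Weighting the reconstruction loss by the importance map pushes the model
# to reconstruct high-interest regions more faithfully.
loss = ((recon - vol) ** 2 * imp).mean()
```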
4. Application of boundary-fitted convolutional neural network to simulate non-Newtonian fluid flow behavior in eccentric annulus. Neural Comput Appl 2022. DOI: 10.1007/s00521-022-07092-w.
5. Han J, Zheng H, Chen DZ, Wang C. STNet: An End-to-End Generative Framework for Synthesizing Spatiotemporal Super-Resolution Volumes. IEEE Transactions on Visualization and Computer Graphics 2022; 28:270-280. PMID: 34587051. DOI: 10.1109/tvcg.2021.3114815.
Abstract
We present STNet, an end-to-end generative framework that synthesizes spatiotemporal super-resolution volumes with high fidelity for time-varying data. STNet includes two modules: a generator and a spatiotemporal discriminator. The generator takes as input two low-resolution volumes at the two ends of a time interval and outputs super-resolution volumes for both ends and the intermediate time steps. The spatiotemporal discriminator, leveraging convolutional long short-term memory, accepts a spatiotemporal super-resolution sequence as input and predicts a conditional score for each volume based on its spatial (the volume itself) and temporal (the previous volumes) information. We propose an unsupervised pre-training stage using a cycle loss to improve the generalization of STNet. Once trained, STNet can generate spatiotemporal super-resolution volumes from low-resolution ones, offering scientists an option to save data storage (i.e., sparsely sampling the simulation output in both the spatial and temporal dimensions). We compare STNet with a baseline (bicubic+linear interpolation), two deep learning solutions (SSR+TSF, STD), and a state-of-the-art tensor compression solution (TTHRESH) to show the effectiveness of STNet.
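The sketch below illustrates only the generator's input/output contract: two low-resolution end volumes in, a short super-resolved sequence out (time steps stacked as channels). The tiny network, trilinear upsampling, and all sizes are assumptions for illustration; STNet's actual generator, ConvLSTM discriminator, and adversarial training are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySTGenerator(nn.Module):
    """Toy generator: two low-res end volumes in, t_out upscaled frames out."""
    def __init__(self, t_out=3, scale=2):
        super().__init__()
        self.scale = scale
        self.net = nn.Sequential(
            nn.Conv3d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, t_out, 3, padding=1),
        )

    def forward(self, v_start, v_end):                 # each: (N, 1, D, H, W)
        x = torch.cat([v_start, v_end], dim=1)         # stack ends as channels
        x = F.interpolate(x, scale_factor=self.scale,
                          mode="trilinear", align_corners=False)
        return self.net(x)                             # (N, t_out, 2D, 2H, 2W)

lo_a = torch.randn(1, 1, 16, 16, 16)
lo_b = torch.randn(1, 1, 16, 16, 16)
frames = TinySTGenerator()(lo_a, lo_b)                 # 3 frames at 32^3
```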
6. He X, Tao Y, Yang S, Chen C, Lin H. ScalarGCN: scalar-value association analysis of volumes based on graph convolutional network. J Vis (Tokyo) 2021. DOI: 10.1007/s12650-021-00779-7.
7. Dai H, Tao Y, He X, Lin H. IsoExplorer: an isosurface-driven framework for 3D shape analysis of biomedical volume data. J Vis (Tokyo) 2021; 24:1253-1266. PMID: 34429686. PMCID: PMC8376112. DOI: 10.1007/s12650-021-00770-2.
Abstract
The high-resolution scanning devices developed in recent decades provide biomedical volume datasets that support the study of molecular structure and drug design. Isosurface analysis is an important tool in these studies, and the key is to construct suitable description vectors to support subsequent tasks, such as classification and retrieval. Traditional methods based on handcrafted features are insufficient for dealing with complex structures, while deep learning based approaches have high memory and computation costs when dealing directly with volume data. To address these problems, we propose IsoExplorer, an isosurface-driven framework for 3D shape analysis of biomedical volume data. We first extract isosurfaces from volume data and split them into individual 3D shapes according to their connectivity. Then, we utilize octree-based convolution to design a variational autoencoder model that learns the latent representations of the shapes. Finally, these latent representations are used for low-dimensional isosurface representation and shape retrieval. We demonstrate the effectiveness and usefulness of IsoExplorer via isosurface similarity analysis, shape retrieval of real-world data, and comparison with existing methods.
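The front end of this pipeline, isosurface extraction followed by connectivity-based splitting, can be sketched with standard libraries as below; the synthetic two-blob volume and the choice of scikit-image and trimesh are illustrative assumptions, and the octree-based VAE itself is not shown.

```python
import numpy as np
from skimage import measure
import trimesh

# Synthetic volume standing in for a biomedical scan (two separate blobs).
x, y, z = np.mgrid[-2:2:64j, -2:2:64j, -2:2:64j]
volume = np.exp(-((x - 1) ** 2 + y ** 2 + z ** 2) * 4) \
       + np.exp(-((x + 1) ** 2 + y ** 2 + z ** 2) * 4)

# Extract the isosurface at a chosen isovalue...
verts, faces, _, _ = measure.marching_cubes(volume, level=0.5)

# ...and split it into individual 3D shapes by mesh connectivity.
mesh = trimesh.Trimesh(vertices=verts, faces=faces)
shapes = mesh.split(only_watertight=False)
print(f"{len(shapes)} connected shapes extracted")
```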
Affiliations
- Haoran Dai, Yubo Tao, Xiangyang He, Hai Lin: State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China
8. Tkachev G, Frey S, Ertl T. Local Prediction Models for Spatiotemporal Volume Visualization. IEEE Transactions on Visualization and Computer Graphics 2021; 27:3091-3108. PMID: 31880555. DOI: 10.1109/tvcg.2019.2961893.
Abstract
We present a machine learning based approach for detecting and visualizing complex behavior in spatiotemporal volumes. For this, we train models to predict future data values at a given position based on the past values in its neighborhood, capturing common temporal behavior in the data. We then evaluate the model's predictions on the same data. A high prediction error means that the local behavior was too complex, unique, or uncertain to be accurately captured during training, indicating spatiotemporal regions with interesting behavior. By training several models of varying capacity, we are able to detect spatiotemporal regions of various complexities. We aggregate the obtained prediction errors into a time series or spatial volumes and visualize them together to highlight regions of unpredictable behavior and how they differ between the models. We demonstrate two further volumetric applications: adaptive timestep selection and analysis of ensemble dissimilarity. We apply our technique to datasets from multiple application domains and demonstrate that we are able to produce meaningful results while making minimal assumptions about the underlying data.
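The train-then-re-evaluate loop described above can be shown in a few lines. In the PyTorch sketch below, a small regressor (a hypothetical stand-in for the paper's models) predicts each voxel at time t from its 3x3x3 neighborhood at t-1; re-evaluating on the training data yields a per-voxel error volume whose peaks mark unpredictable regions.

```python
import torch
import torch.nn as nn

# Toy spatiotemporal volume: (T, D, H, W).
data = torch.randn(20, 16, 16, 16)

def neighborhoods(v):
    """Return (N, 27) feature rows: the 3x3x3 neighborhood of every voxel."""
    p = nn.functional.pad(v[None, None], (1,) * 6, mode="replicate")[0, 0]
    d, h, w = v.shape
    cols = [p[i:i + d, j:j + h, k:k + w].reshape(-1)
            for i in range(3) for j in range(3) for k in range(3)]
    return torch.stack(cols, dim=1)

model = nn.Sequential(nn.Linear(27, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(3):                         # train on the data itself
    for t in range(1, data.shape[0]):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(neighborhoods(data[t - 1])),
                                      data[t].reshape(-1, 1))
        loss.backward()
        opt.step()

# Re-evaluate on the same data: high per-voxel error marks regions whose
# temporal behavior the model failed to capture.
with torch.no_grad():
    pred = model(neighborhoods(data[0]))
    error_volume = ((pred - data[1].reshape(-1, 1)) ** 2).reshape(data.shape[1:])
```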
9. Zhang J, Tao J, Wang JX, Wang C. SurfRiver: Flattening Stream Surfaces for Comparative Visualization. IEEE Transactions on Visualization and Computer Graphics 2021; 27:2783-2795. PMID: 33881996. DOI: 10.1109/tvcg.2021.3074585.
Abstract
We present SurfRiver, a new visual transformation approach that flattens stream surfaces in 3D to rivers in 2D for comparative visualization. Leveraging a TextFlow-like visual metaphor, SurfRiver untangles convoluted individual stream surfaces along the flow direction and maps them along the horizontal direction of the abstract river view. It stacks multiple surfaces along the vertical direction of the river view. This visual mapping makes it easy for users to track along the flow direction and align stream surfaces for comparative study. Through brushing and linking, the river view is connected to the spatial surface view for collective reasoning. SurfRiver can be used to examine a single stream surface, investigate the seeding sensitivity or variability of a family of surfaces from a group of related seeding curves, or explore a collection of representative surfaces. We describe our optimization solution for achieving the desired mapping, present the SurfRiver interface and interactions, and report results from different flow fields to demonstrate its efficacy. Feedback from a domain expert also indicates the promise of SurfRiver.
10. Han J, Zheng H, Xing Y, Chen DZ, Wang C. V2V: A Deep Learning Approach to Variable-to-Variable Selection and Translation for Multivariate Time-Varying Data. IEEE Transactions on Visualization and Computer Graphics 2021; 27:1290-1300. PMID: 33074812. DOI: 10.1109/tvcg.2020.3030346.
Abstract
We present V2V, a novel deep learning framework, as a general-purpose solution to the variable-to-variable (V2V) selection and translation problem for multivariate time-varying data (MTVD) analysis and visualization. V2V leverages a representation learning algorithm to identify transferable variables and utilizes Kullback-Leibler divergence to determine the source and target variables. It then uses a generative adversarial network (GAN) to learn the mapping from the source variable to the target variable via adversarial, volumetric, and feature losses. V2V takes pairs of time steps of the source and target variables as input for training. Once trained, it can infer unseen time steps of the target variable given the corresponding time steps of the source variable. Several multivariate time-varying data sets of different characteristics are used to demonstrate the effectiveness of V2V, both quantitatively and qualitatively. We compare V2V against histogram matching and two other deep learning solutions (Pix2Pix and CycleGAN).
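The abstract does not detail how the KL divergence enters the selection step, so the sketch below makes a simple assumption: compare the value distributions of candidate variables via histograms and rank pairs by divergence. Variable names and data are synthetic placeholders.

```python
import numpy as np
from scipy.stats import entropy

def kl_between_variables(a, b, bins=64):
    """KL divergence between the value distributions of two variables."""
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    p, _ = np.histogram(a, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(b, bins=bins, range=(lo, hi), density=True)
    eps = 1e-12                      # avoid zero bins in the divergence
    return entropy(p + eps, q + eps)

rng = np.random.default_rng(1)
variables = {"pressure": rng.normal(0.0, 1.0, 100_000),
             "temperature": rng.normal(0.2, 1.1, 100_000),
             "velocity_mag": rng.gamma(2.0, 1.0, 100_000)}

# Rank variable pairs; a lower divergence suggests an easier translation.
names = list(variables)
for i, s in enumerate(names):
    for t in names[i + 1:]:
        d = kl_between_variables(variables[s], variables[t])
        print(f"KL({s} -> {t}) = {d:.3f}")
```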
11. Jakob J, Gross M, Gunther T. A Fluid Flow Data Set for Machine Learning and its Application to Neural Flow Map Interpolation. IEEE Transactions on Visualization and Computer Graphics 2021; 27:1279-1289. PMID: 33026993. DOI: 10.1109/tvcg.2020.3028947.
Abstract
In recent years, deep learning has opened countless research opportunities across many different disciplines. At present, visualization is mainly applied to explore and explain neural networks. Its counterpart, the application of deep learning to visualization problems, requires us to share data more openly in order to enable more scientists to engage in data-driven research. In this paper, we construct a large fluid flow data set and apply it to a deep learning problem in scientific visualization. Parameterized by the Reynolds number, the data set covers a wide spectrum of laminar and turbulent fluid flow regimes. The full data set was simulated on a high-performance compute cluster and contains 8,000 time-dependent 2D vector fields, accumulating to more than 16 TB in size. Using our public fluid data set, we trained deep convolutional neural networks to set a benchmark for improved post hoc Lagrangian fluid flow analysis. In in situ settings, flow maps are exported and interpolated in order to assess the transport characteristics of time-dependent fluids. Using deep learning, we improve the accuracy of flow map interpolation, allowing a more precise flow analysis at a reduced memory and I/O footprint.
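A flow map records where a particle seeded at position x and time t0 ends up after being advected for a duration tau; these are the samples a neural interpolator would be trained on. The sketch below computes one such map with fourth-order Runge-Kutta integration over an analytic, gyre-like stand-in field (the real data set uses simulated flows).

```python
import numpy as np

def velocity(pos, t):
    """Analytic stand-in for a time-dependent 2D flow field (gyre-like)."""
    x, y = pos[..., 0], pos[..., 1]
    u = -np.pi * np.sin(np.pi * x) * np.cos(np.pi * y) * np.cos(0.5 * t)
    v = np.pi * np.cos(np.pi * x) * np.sin(np.pi * y) * np.cos(0.5 * t)
    return np.stack([u, v], axis=-1)

def flow_map(seeds, t0, tau, steps=100):
    """End positions of particles seeded at t0 and advected for tau (RK4)."""
    pos, h, t = seeds.astype(float), tau / steps, t0
    for _ in range(steps):
        k1 = velocity(pos, t)
        k2 = velocity(pos + 0.5 * h * k1, t + 0.5 * h)
        k3 = velocity(pos + 0.5 * h * k2, t + 0.5 * h)
        k4 = velocity(pos + h * k3, t + h)
        pos = pos + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return pos

xs, ys = np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
seeds = np.stack([xs, ys], axis=-1).reshape(-1, 2)
phi = flow_map(seeds, t0=0.0, tau=1.0)   # one flow map sample phi(x; t0, tau)
```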
12. Han J, Wang C. TSR-TVD: Temporal Super-Resolution for Time-Varying Data Analysis and Visualization. IEEE Transactions on Visualization and Computer Graphics 2020; 26:205-215. PMID: 31425081. DOI: 10.1109/tvcg.2019.2934255.
Abstract
We present TSR-TVD, a novel deep learning framework that generates temporal super-resolution (TSR) of time-varying data (TVD) using adversarial learning. TSR-TVD is the first work that applies a recurrent generative network (RGN), a combination of the recurrent neural network (RNN) and generative adversarial network (GAN), to generate temporal high-resolution volume sequences from low-resolution ones. The design of TSR-TVD includes a generator and a discriminator. The generator takes a pair of volumes as input and outputs the synthesized intermediate volume sequence through forward and backward predictions. The discriminator takes the synthesized intermediate volumes as input and produces a score indicating the realness of the volumes. Our method also handles multivariate data, where the network trained on one variable is applied to generate TSR for another variable. To demonstrate the effectiveness of TSR-TVD, we show quantitative and qualitative results with several time-varying multivariate data sets and compare our method against standard linear interpolation and solutions based solely on RNN or CNN.
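The abstract mentions combining forward and backward predictions but not how they are fused, so the sketch below assumes a simple linear blend that favors the forward pass near the start of the gap and the backward pass near its end; TSR-TVD's actual fusion may be learned and differ from this.

```python
import torch

def blend_predictions(fwd, bwd):
    """Blend forward- and backward-predicted sequences with linear weights.

    fwd, bwd: (T, D, H, W) volume sequences predicted from the start and end
    volumes, respectively. The weight favors the forward prediction early in
    the gap and the backward prediction late in the gap.
    """
    T = fwd.shape[0]
    w = torch.linspace(1.0, 0.0, T).view(T, 1, 1, 1)
    return w * fwd + (1.0 - w) * bwd

fwd = torch.randn(4, 16, 16, 16)   # placeholder forward-predicted volumes
bwd = torch.randn(4, 16, 16, 16)   # placeholder backward-predicted volumes
blended = blend_predictions(fwd, bwd)
```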