1. Zhao L, Isenberg T, Xie F, Liang HN, Yu L. SpatialTouch: Exploring Spatial Data Visualizations in Cross-Reality. IEEE Transactions on Visualization and Computer Graphics 2025;31:897-907. PMID: 39255119. DOI: 10.1109/tvcg.2024.3456368.
Abstract
We propose and study a novel cross-reality environment that seamlessly integrates a monoscopic 2D surface (an interactive screen with touch and pen input) with a stereoscopic 3D space (an augmented reality HMD) to jointly host spatial data visualizations. This approach combines the best of two conventional methods of displaying and manipulating spatial 3D data, enabling users to fluidly explore diverse visual forms using tailored interaction techniques. Providing such effective 3D data exploration techniques is pivotal for conveying the data's intricate spatial structures, which often span multiple spatial or semantic scales and require diverse visual representations across application domains. To understand user reactions to our new environment, we began with an elicitation user study in which we captured their responses and interactions. We observed that users adapted their interaction approaches based on perceived visual representations, with natural transitions in spatial awareness and actions while navigating across the physical surface. Our findings then informed the development of a design space for spatial data exploration in cross-reality. We thus developed cross-reality environments tailored to three distinct domains: 3D molecular structure data, 3D point cloud data, and 3D anatomical data. In particular, we designed interaction techniques that account for the inherent features of interactions in both spaces, facilitating various forms of interaction, including mid-air gestures, touch interactions, pen interactions, and combinations thereof, to enhance the users' sense of presence and engagement. We assessed the usability of our environment with biologists, focusing on its use for domain research. In addition, we evaluated our interaction transition designs with virtual and mixed-reality experts to gather further insights. As a result, we provide design suggestions for cross-reality environments, emphasizing interaction with diverse visual representations and seamless interaction transitions between 2D and 3D spaces.
2. Rani S, Dhar SB, Khajuria A, Gupta D, Jaiswal PK, Singla N, Kaur M, Singh G, Barnwal RP. Advanced Overview of Biomarkers and Techniques for Early Diagnosis of Alzheimer's Disease. Cell Mol Neurobiol 2023;43:2491-2523. PMID: 36847930. PMCID: PMC11410160. DOI: 10.1007/s10571-023-01330-y.
Abstract
The development of early non-invasive diagnostic methods and the identification of novel biomarkers are necessary for managing Alzheimer's disease (AD) and facilitating effective prognosis and treatment. AD has a multi-factorial nature and involves complex molecular mechanisms that cause neuronal degeneration. The primary challenges in early AD detection include patient heterogeneity and the lack of precise diagnosis at the preclinical stage. Several cerebrospinal fluid (CSF) and blood biomarkers have shown excellent diagnostic ability for AD by identifying tau pathology and cerebral amyloid beta (Aβ). Intense research endeavors are being made to develop ultrasensitive detection techniques and find potent biomarkers for early AD diagnosis. To mitigate AD worldwide, understanding the various CSF biomarkers, blood biomarkers, and techniques that can be used for early diagnosis is imperative. This review provides information on AD pathophysiology, genetic and non-genetic factors associated with AD, and several potential blood and CSF biomarkers, such as neurofilament light, neurogranin, Aβ, and tau, along with biomarkers under development for AD detection. In addition, numerous techniques that are being explored to aid early AD detection, such as neuroimaging, spectroscopic techniques, biosensors, and neuroproteomics, are discussed. The insights thus gained would help in finding potential biomarkers and suitable techniques for the accurate diagnosis of early AD before cognitive dysfunction.
Affiliation(s)
- Shital Rani: Department of Biophysics, Panjab University, Chandigarh, 160014, India
- Sudhrita Basu Dhar: University Institute of Pharmaceutical Sciences, Panjab University, Chandigarh, 160014, India
- Akhil Khajuria: University Institute of Pharmaceutical Sciences, Panjab University, Chandigarh, 160014, India
- Dikshi Gupta: JoyScore Inc., 2440 Cerritos Ave, Signal Hill, CA, 90755, USA
- Pradeep Kumar Jaiswal: Department of Biochemistry and Biophysics, Texas A & M University, College Station, TX, 77843, USA
- Neha Singla: Department of Biophysics, Panjab University, Chandigarh, 160014, India
- Mandeep Kaur: Department of Biophysics, Panjab University, Chandigarh, 160014, India
- Gurpal Singh: University Institute of Pharmaceutical Sciences, Panjab University, Chandigarh, 160014, India
3. Ye S, Chen Z, Chu X, Li K, Luo J, Li Y, Geng G, Wu Y. PuzzleFixer: A Visual Reassembly System for Immersive Fragments Restoration. IEEE Transactions on Visualization and Computer Graphics 2023;29:429-439. PMID: 36179001. DOI: 10.1109/tvcg.2022.3209388.
Abstract
We present PuzzleFixer, an immersive interactive system that helps experts rectify defectively reassembled 3D objects. Reassembling the fragments of a broken object to restore its original state is a prerequisite for many analytical tasks, such as cultural relic analysis and forensic reasoning. While existing computer-aided methods can automatically reassemble fragments, they often derive incorrect objects due to complex and ambiguous fragment shapes, so experts usually need to refine the object manually. Prior advances in immersive technologies offer realistic perception and direct interaction for visualizing and manipulating 3D fragments, yet few studies have investigated the refinement of reassembled objects. The specific challenges are: 1) the set of fragment combinations is too large to determine the correct matches, and 2) the geometry of the fragments is too complex to align them properly. To tackle the first challenge, PuzzleFixer leverages dimensionality reduction and clustering techniques, allowing users to review possible match categories, select the matches with reasonable shapes, and drill down to individual shapes to correct the corresponding faces. For the second challenge, PuzzleFixer embeds the object with node-link networks to augment the perception of match relations. Specifically, it instantly visualizes matches with graph edges and provides force feedback to make alignment interactions more efficient. To demonstrate the effectiveness of PuzzleFixer, we conducted an expert evaluation based on two cases with real-world artifacts and collected feedback through post-study interviews. The results suggest that our system is suitable and efficient for experts refining incorrectly reassembled objects.
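The dimensionality-reduction-plus-clustering step described in this abstract, projecting high-dimensional match descriptors and grouping them into reviewable categories, can be illustrated with a minimal PCA and k-means pass. This is a generic sketch only: the function names, the descriptor input, and the choice of PCA/k-means are assumptions, not PuzzleFixer's actual implementation.

```python
import numpy as np

def pca_project(features, n_components=2):
    """Project high-dimensional match descriptors to a low-dimensional
    space for review (SVD-based PCA; rows of vt are principal axes)."""
    X = np.asarray(features, dtype=float)
    X = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:n_components].T

def kmeans(points, k, n_iter=50, seed=0):
    """Tiny k-means to group candidate matches into categories."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    centers = pts[rng.choice(len(pts), size=k, replace=False)]
    for _ in range(n_iter):
        # assign each point to its nearest center
        labels = np.argmin(((pts[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # move each center to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pts[labels == j].mean(axis=0)
    return labels, centers
```

In a reviewing workflow, each candidate match would first be encoded as a feature vector, projected with `pca_project`, and then grouped with `kmeans` so that implausible match categories can be discarded wholesale.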
4. Hou Y, Zhu H, Liang HN, Yu L. A study of the effect of star glyph parameters on value estimation and comparison. J Vis (Tokyo) 2022. DOI: 10.1007/s12650-022-00888-x.
5. Meng X, Liu J, Fan X, Bian C, Wei Q, Wang Z, Liu W, Jiao Z. Multi-Modal Neuroimaging Neural Network-Based Feature Detection for Diagnosis of Alzheimer's Disease. Front Aging Neurosci 2022;14:911220. PMID: 35651528. PMCID: PMC9149574. DOI: 10.3389/fnagi.2022.911220.
Abstract
Alzheimer's disease (AD) is a neurodegenerative brain disease, and it is challenging to mine features that distinguish AD from healthy controls (HC) across multiple datasets. Brain network modeling for AD using single-modal images often lacks supplementary information from multiple source resolutions and has poor spatiotemporal sensitivity. In this study, we proposed a novel multi-modal LassoNet framework with a neural network for AD-related feature detection and classification. Specifically, data from two modalities, resting-state functional magnetic resonance imaging (rs-fMRI) and diffusion tensor imaging (DTI), were adopted to predict pathological brain areas related to AD. The results of 10 repeated experiments and validation experiments in three groups show that our proposed framework performs well in classification performance, generalization, and reproducibility. We also found discriminative brain regions, such as Hippocampus, Frontal_Inf_Orb_L, Parietal_Sup_L, Putamen_L, and Fusiform_R. These discoveries provide a novel approach for AD research, and the experimental study demonstrates that the framework will further improve our understanding of the mechanisms underlying the development of AD.
Affiliation(s)
- Xianglian Meng: School of Computer Information and Engineering, Changzhou Institute of Technology, Changzhou, China
- Junlong Liu: School of Computer Information and Engineering, Changzhou Institute of Technology, Changzhou, China
- Xiang Fan: School of Computer Information and Engineering, Changzhou Institute of Technology, Changzhou, China
- Chenyuan Bian: Shandong Provincial Key Laboratory of Digital Medicine and Computer-Assisted Surgery, Affiliated Hospital of Qingdao University, Qingdao, China
- Qingpeng Wei: School of Computer Information and Engineering, Changzhou Institute of Technology, Changzhou, China
- Ziwei Wang: School of Computer Information and Engineering, Changzhou Institute of Technology, Changzhou, China
- Wenjie Liu (corresponding author): School of Computer Information and Engineering, Changzhou Institute of Technology, Changzhou, China
- Zhuqing Jiao (corresponding author): School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou, China
6. Wu A, Wang Y, Zhou M, He X, Zhang H, Qu H, Zhang D. MultiVision: Designing Analytical Dashboards with Deep Learning Based Recommendation. IEEE Transactions on Visualization and Computer Graphics 2022;28:162-172. PMID: 34587058. DOI: 10.1109/tvcg.2021.3114826.
Abstract
We contribute a deep-learning-based method that assists in designing analytical dashboards for analyzing a data table. Given a data table, data workers usually go through a tedious and time-consuming process to select meaningful combinations of data columns for creating charts. This process is further complicated by the need to create dashboards composed of multiple views that unveil different perspectives on the data. Existing automated approaches for recommending multiple-view visualizations mainly build on manually crafted design rules, producing sub-optimal or irrelevant suggestions. To address this gap, we present a deep learning approach for selecting data columns and recommending multiple charts. More importantly, we integrate the deep learning models into a mixed-initiative system: our model can make recommendations given optional user selections of data columns, and it in turn learns offline from the provenance data of authoring logs. We compare our deep learning model with existing methods for visualization recommendation and conduct a user study to evaluate the usefulness of the system.
7. Han J, Zheng H, Chen DZ, Wang C. STNet: An End-to-End Generative Framework for Synthesizing Spatiotemporal Super-Resolution Volumes. IEEE Transactions on Visualization and Computer Graphics 2022;28:270-280. PMID: 34587051. DOI: 10.1109/tvcg.2021.3114815.
Abstract
We present STNet, an end-to-end generative framework that synthesizes spatiotemporal super-resolution volumes with high fidelity for time-varying data. STNet includes two modules: a generator and a spatiotemporal discriminator. The input to the generator is two low-resolution volumes at the two ends of a time interval, and the output is the intermediate and the two end spatiotemporal super-resolution volumes. The spatiotemporal discriminator, leveraging convolutional long short-term memory, accepts a spatiotemporal super-resolution sequence as input and predicts a conditional score for each volume based on its spatial (the volume itself) and temporal (the previous volumes) information. We propose an unsupervised pre-training stage using a cycle loss to improve the generalization of STNet. Once trained, STNet can generate spatiotemporal super-resolution volumes from low-resolution ones, offering scientists an option to save data storage (i.e., sparsely sampling the simulation output in both the spatial and temporal dimensions). We compare STNet with the baseline bicubic+linear interpolation, two deep learning solutions (SSR+TSF, STD), and a state-of-the-art tensor compression solution (TTHRESH) to show the effectiveness of STNet.
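The interpolation baseline this abstract compares against, spatial upsampling of the two end volumes followed by linear temporal interpolation of the intermediate steps, can be sketched in a few lines. This is an illustrative simplification: nearest-neighbor repetition stands in for the bicubic spatial filter of the actual baseline, and the function names are assumptions.

```python
import numpy as np

def upsample_nearest(vol, factor):
    """Spatial upsampling by repetition along every axis
    (a stand-in for the baseline's bicubic filtering)."""
    for axis in range(vol.ndim):
        vol = np.repeat(vol, factor, axis=axis)
    return vol

def interpolate_sequence(vol_start, vol_end, n_intermediate, factor):
    """Baseline spatiotemporal super-resolution: upsample the two
    end volumes spatially, then linearly interpolate the
    intermediate time steps between them."""
    hi_start = upsample_nearest(vol_start, factor)
    hi_end = upsample_nearest(vol_end, factor)
    seq = [hi_start]
    for i in range(1, n_intermediate + 1):
        t = i / (n_intermediate + 1)  # normalized time in (0, 1)
        seq.append((1.0 - t) * hi_start + t * hi_end)
    seq.append(hi_end)
    return np.stack(seq)
```

A learned model like STNet aims to beat exactly this kind of baseline by recovering high-frequency spatial detail and nonlinear temporal dynamics that per-voxel linear blending cannot.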
8. Han J, Zheng H, Xing Y, Chen DZ, Wang C. V2V: A Deep Learning Approach to Variable-to-Variable Selection and Translation for Multivariate Time-Varying Data. IEEE Transactions on Visualization and Computer Graphics 2021;27:1290-1300. PMID: 33074812. DOI: 10.1109/tvcg.2020.3030346.
Abstract
We present V2V, a novel deep learning framework, as a general-purpose solution to the variable-to-variable (V2V) selection and translation problem for multivariate time-varying data (MTVD) analysis and visualization. V2V leverages a representation learning algorithm to identify transferable variables and utilizes Kullback-Leibler divergence to determine the source and target variables. It then uses a generative adversarial network (GAN) to learn the mapping from the source variable to the target variable via adversarial, volumetric, and feature losses. V2V takes pairs of time steps of the source and target variables as input for training. Once trained, it can infer unseen time steps of the target variable given the corresponding time steps of the source variable. Several multivariate time-varying data sets with different characteristics are used to demonstrate the effectiveness of V2V, both quantitatively and qualitatively. We compare V2V against histogram matching and two other deep learning solutions (Pix2Pix and CycleGAN).
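A histogram-based Kullback-Leibler divergence, as mentioned in this abstract for choosing source and target variables, can be sketched as follows. This is a generic illustration, not the paper's implementation: the function names, binning scheme, and pairwise ranking are assumptions.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL divergence between two discrete distributions
    (both renormalized after epsilon smoothing)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def rank_variable_pairs(fields, bins=64):
    """Rank variable pairs of a multivariate field by the KL
    divergence of their value histograms; lower divergence
    suggests a more transferable source/target pair."""
    hists = {name: np.histogram(v, bins=bins, density=True)[0]
             for name, v in fields.items()}
    names = sorted(fields)
    pairs = [(kl_divergence(hists[a], hists[b]), a, b)
             for i, a in enumerate(names) for b in names[i + 1:]]
    return sorted(pairs)
```

Under this heuristic, two variables with near-identical value distributions rank first, flagging them as candidates for learned translation.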
9. Ye S, Chen Z, Chu X, Wang Y, Fu S, Shen L, Zhou K, Wu Y. ShuttleSpace: Exploring and Analyzing Movement Trajectory in Immersive Visualization. IEEE Transactions on Visualization and Computer Graphics 2021;27:860-869. PMID: 33048712. DOI: 10.1109/tvcg.2020.3030392.
Abstract
We present ShuttleSpace, an immersive analytics system that assists experts in analyzing trajectory data in badminton. Trajectories in sports, such as the movement of players and balls, contain rich information on player behavior and thus have been widely analyzed by coaches and analysts to improve players' performance. However, existing visual analytics systems often present trajectories in court diagrams that are abstractions of reality, making it difficult for experts to imagine the situation on the court and understand why a player acted in a certain way. With recent developments in immersive technologies such as virtual reality (VR), experts increasingly have the opportunity to see, feel, explore, and understand these 3D trajectories from the player's perspective. Yet, little research has studied how to support immersive analysis of sports data from such a perspective. Specific challenges are rooted in data presentation (e.g., how to seamlessly combine 2D and 3D visualizations) and interaction (e.g., how to naturally interact with data without a keyboard and mouse) in VR. To address these challenges, we worked closely with domain experts who have worked for a top national badminton team to design ShuttleSpace. Our system leverages 1) peripheral vision to combine the 2D and 3D visualizations and 2) the VR controller to support natural interactions via a stroke metaphor. We demonstrate the effectiveness of ShuttleSpace through three case studies conducted by the experts, which yielded useful insights. We further conducted interviews with the experts, whose feedback confirms that our first-person immersive analytics system is suitable and useful for analyzing badminton data.
10. Jakob J, Gross M, Gunther T. A Fluid Flow Data Set for Machine Learning and its Application to Neural Flow Map Interpolation. IEEE Transactions on Visualization and Computer Graphics 2021;27:1279-1289. PMID: 33026993. DOI: 10.1109/tvcg.2020.3028947.
Abstract
In recent years, deep learning has opened countless research opportunities across many different disciplines. At present, visualization is mainly applied to explore and explain neural networks. Its counterpart, the application of deep learning to visualization problems, requires us to share data more openly in order to enable more scientists to engage in data-driven research. In this paper, we construct a large fluid flow data set and apply it to a deep learning problem in scientific visualization. Parameterized by the Reynolds number, the data set contains a wide spectrum of laminar and turbulent fluid flow regimes. The full data set was simulated on a high-performance compute cluster and contains 8000 time-dependent 2D vector fields, accumulating to more than 16 TB in size. Using our public fluid data set, we trained deep convolutional neural networks to set a benchmark for improved post hoc Lagrangian fluid flow analysis. In in-situ settings, flow maps are exported and interpolated in order to assess the transport characteristics of time-dependent fluids. Using deep learning, we improve the accuracy of flow map interpolation, allowing more precise flow analysis at a reduced memory I/O footprint.
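A flow map, as used in this abstract, records where seed particles end up after being advected through a time-dependent velocity field. A standard way to compute one is fourth-order Runge-Kutta (RK4) integration; the sketch below is illustrative only, and the `velocity(points, t)` callable interface is an assumption.

```python
import numpy as np

def flow_map(velocity, seeds, t0, t1, n_steps=100):
    """Compute a Lagrangian flow map: advect 2D seed points through
    a time-dependent velocity field v(points, t) with RK4 steps,
    returning their positions at time t1."""
    dt = (t1 - t0) / n_steps
    pts = np.array(seeds, dtype=float)
    t = t0
    for _ in range(n_steps):
        # classic RK4 stage evaluations of the velocity field
        k1 = velocity(pts, t)
        k2 = velocity(pts + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = velocity(pts + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = velocity(pts + dt * k3, t + dt)
        pts = pts + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return pts
```

In an in-situ setting, such maps would be exported at coarse temporal spacing; the paper's neural approach then interpolates between them more accurately than the per-step numerical alternative at the same storage budget.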