1. Mishra A, Singh H, Parnami A, Shukla J. MobiTangibles: Enabling Physical Manipulation Experiences of Virtual Precision Hand-Held Tools' Miniature Control in VR. IEEE Transactions on Visualization and Computer Graphics 2024; 30:7321-7331. PMID: 39269808. DOI: 10.1109/tvcg.2024.3456191.
Abstract
Realistic simulation of miniature control interactions, which are characterized by precise and confined motions and are commonly found in precision hand-held tools such as calipers, powered engravers, and retractable knives, is beneficial for skill training with these kinds of tools in virtual reality (VR) environments. However, existing approaches that aim to simulate hand-held tools' miniature control manipulation experiences in VR entail prototyping complexity and require expertise, posing challenges for novice users and individuals with limited resources. Addressing this challenge, we introduce MobiTangibles, proxies for precision hand-held tools' miniature control interactions that use smartphone-based magnetic field sensing. MobiTangibles passively replicate fundamental miniature control experiences associated with hand-held tools, such as single-axis translation and rotation, enabling quick and easy use in diverse VR scenarios without extensive technical knowledge. We conducted a comprehensive technical evaluation to validate the functionality of MobiTangibles across diverse settings, including evaluations of electromagnetic interference within indoor environments. In a user-centric evaluation involving 15 participants across bare-hands, VR-controller, and MobiTangibles conditions, we further assessed the quality of miniaturized manipulation experiences in VR. Our findings indicate that MobiTangibles outperformed the conventional methods in terms of perceived realism and fatigue and received positive feedback.
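The abstract does not detail the sensing pipeline, but as a rough illustration of how smartphone magnetometer readings could be turned into single-axis translation and rotation estimates for a magnet-equipped control, a minimal sketch follows. All function names, the dipole-falloff assumption, and the calibration constant are hypothetical, not the authors' implementation.

```python
import math

def baseline_magnitude(samples):
    """Average field magnitude (uT) with the tool's magnet at its rest position.
    `samples` is a list of (bx, by, bz) magnetometer readings."""
    return sum(math.sqrt(bx*bx + by*by + bz*bz) for bx, by, bz in samples) / len(samples)

def translation_estimate(bx, by, bz, baseline, k=1.0):
    """Rough single-axis displacement of a sliding magnet: a dipole's field
    magnitude falls off roughly with the cube of distance, so distance is
    taken proportional to the cube root of k over the field change.
    `k` is a per-prototype calibration constant."""
    delta = max(math.sqrt(bx*bx + by*by + bz*bz) - baseline, 1e-6)
    return (k / delta) ** (1.0 / 3.0)

def rotation_estimate(bx, by):
    """Rotation of a diametrically magnetized knob, read from the direction of
    the field component in the phone's screen plane, in degrees."""
    return math.degrees(math.atan2(by, bx))
```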
2. Hong J, Hnatyshyn R, Santos EAD, Maciejewski R, Isenberg T. A Survey of Designs for Combined 2D+3D Visual Representations. IEEE Transactions on Visualization and Computer Graphics 2024; 30:2888-2902. PMID: 38648152. DOI: 10.1109/tvcg.2024.3388516.
Abstract
We examine visual representations of data that make use of combinations of both 2D and 3D data mappings. Combining 2D and 3D representations is a common technique that allows viewers to understand multiple facets of the data with which they are interacting. While 3D representations focus on the spatial character of the data or on a dedicated 3D data mapping, 2D representations often show abstract data properties and take advantage of the unique benefits of mapping to a plane. Many systems have used unique combinations of both types of data mappings effectively, yet there has been no systematic review of methods for linking 2D and 3D representations. We systematically survey the relationships between 2D and 3D visual representations in major visualization publications (IEEE VIS, IEEE TVCG, and EuroVis) from 2012 to 2022. We closely examined 105 articles in which 2D and 3D representations are connected visually, interactively, or through animation, and we characterize these approaches by their visual environment, the relationships between their visual representations, and their possible layouts. Through our analysis, we introduce a design space and provide design guidelines for effectively linking 2D and 3D visual representations.
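As an illustration of how a design-space characterization of this kind could be encoded when coding survey entries, a small sketch follows. The field names and enumerated values are placeholders chosen for illustration, not the survey's exact taxonomy.

```python
from dataclasses import dataclass
from typing import Literal

# Placeholder value sets for the three axes the survey analyzes.
Environment = Literal["desktop", "immersive", "hybrid"]
Linking = Literal["visual", "interactive", "animated"]

@dataclass
class Combined2D3DDesign:
    environment: Environment   # where the 2D and 3D views live
    linking: Linking           # how the two representations are connected
    layout: str                # spatial arrangement, e.g. "side-by-side" or "embedded"

example = Combined2D3DDesign(environment="immersive",
                             linking="interactive",
                             layout="embedded")
print(example)
```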
3. Dai S, Smiley J, Dwyer T, Ens B, Besançon L. RoboHapalytics: A Robot Assisted Haptic Controller for Immersive Analytics. IEEE Transactions on Visualization and Computer Graphics 2023; 29:451-461. PMID: 36155467. DOI: 10.1109/tvcg.2022.3209433.
Abstract
Immersive environments offer new possibilities for exploring three-dimensional volumetric or abstract data. However, typical mid-air interaction offers little guidance to the user in interacting with the resulting visuals. Previous work has explored the use of haptic controls to give users tangible affordances for interacting with the data, but these controls have either been limited in their range and resolution, been spatially fixed, or required users to manually align them with the data space. We explore the use of a robot arm with hand tracking to align tangible controls under the user's fingers as they reach out to interact with data affordances. We begin with a study evaluating the effectiveness of a robot-extended slider control compared to a large fixed physical slider and a purely virtual mid-air slider. We find that the robot slider has accuracy similar to the physical slider and is significantly more accurate than mid-air interaction. Further, the robot slider can be arbitrarily reoriented, opening up many new possibilities for tangible haptic interaction with immersive visualisations. We demonstrate these possibilities through three use cases: selection in a time-series chart, interactive slicing of CT scans, and exploration of a scatter plot depicting time-varying socio-economic data.
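As a purely illustrative sketch of the kind of computation such a system needs, placing the robot-held slider track under a tracked fingertip and reading a normalised value back from the handle, the snippet below assumes a simple world-frame convention and hypothetical function names; it is not the authors' implementation.

```python
import numpy as np

def slider_target_pose(fingertip, slider_axis, approach_offset=0.02):
    """Place the robot-held slider track just below an outstretched fingertip.
    `fingertip` is the tracked fingertip position (metres, world frame) and
    `slider_axis` the desired sliding direction; both are length-3 arrays."""
    axis = np.asarray(slider_axis, dtype=float)
    axis /= np.linalg.norm(axis)
    position = np.asarray(fingertip, dtype=float) - np.array([0.0, approach_offset, 0.0])
    return position, axis

def slider_value(handle_pos, track_origin, axis, track_length):
    """Map the handle's position along the track to a normalised 0..1 value."""
    t = np.dot(np.asarray(handle_pos) - np.asarray(track_origin), np.asarray(axis))
    return float(np.clip(t / track_length, 0.0, 1.0))

pos, axis = slider_target_pose([0.3, 1.1, 0.5], [1.0, 0.0, 0.0])
print(slider_value([0.35, 1.08, 0.5], pos, axis, track_length=0.2))
```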
4. Understanding and Creating Spatial Interactions with Distant Displays Enabled by Unmodified Off-The-Shelf Smartphones. Multimodal Technologies and Interaction 2022. DOI: 10.3390/mti6100094.
Abstract
Over decades, many researchers have developed complex in-lab systems with the overall goal of tracking multiple body parts of the user for richer and more powerful 2D/3D interaction with a distant display. In this work, we introduce a novel smartphone-based tracking approach that eliminates the need for complex tracking systems. Relying on simultaneous use of the front and rear smartphone cameras, our solution enables rich spatial interactions with distant displays by combining touch input with hand-gesture input, body and head motion, and eye-gaze input. In this paper, we first present a taxonomy for classifying distant-display interactions, providing an overview of enabling technologies, input modalities, and interaction techniques spanning from 2D to 3D interactions. We then provide details about our implementation using off-the-shelf smartphones. Finally, we validate our system in a user study covering a variety of 2D and 3D multimodal interaction techniques, including input refinement.
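The "input refinement" the abstract mentions typically combines a coarse pointing estimate with relative touch input for the final adjustment. The sketch below illustrates that general pattern only; the function name, gain value, and coordinate convention are assumptions, not the paper's API.

```python
def refine_selection(coarse_xy, touch_deltas, gain=0.25):
    """Combine a coarse on-screen target estimate (e.g. from head or eye-gaze
    tracking via the front camera) with relative touch deltas used for final
    refinement. Coordinates are normalised display coordinates in [0, 1]."""
    x, y = coarse_xy
    for dx, dy in touch_deltas:
        x += gain * dx
        y += gain * dy
    return min(max(x, 0.0), 1.0), min(max(y, 0.0), 1.0)

# Coarse gaze estimate near the display centre, nudged by two touch strokes.
print(refine_selection((0.52, 0.48), [(0.10, -0.05), (0.02, 0.00)]))
```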
5. Sereno M, Wang X, Besançon L, McGuffin MJ, Isenberg T. Collaborative Work in Augmented Reality: A Survey. IEEE Transactions on Visualization and Computer Graphics 2022; 28:2530-2549. PMID: 33085619. DOI: 10.1109/tvcg.2020.3032761.
Abstract
In Augmented Reality (AR), users perceive virtual content anchored in the real world. AR is used in medicine, education, games, navigation, maintenance, product design, and visualization, in both single-user and multi-user scenarios. Multi-user AR has received limited attention from researchers, even though AR has been in development for more than two decades. We present the state of existing work at the intersection of AR and Computer-Supported Collaborative Work (AR-CSCW) by combining a systematic survey approach with an exploratory, opportunistic literature search. We categorize 65 papers along the dimensions of space, time, role symmetry (whether the roles of users are symmetric), technology symmetry (whether the hardware platforms of users are symmetric), and output and input modalities. We derive design considerations for collaborative AR environments and identify under-explored research topics, including heterogeneous hardware configurations and 3D data exploration. This survey is useful for newcomers to the field, readers interested in an overview of CSCW in AR applications, and domain experts seeking up-to-date information.
6. Besançon L, Rönnberg N, Löwgren J, Tennant JP, Cooper M. Open up: a survey on open and non-anonymized peer reviewing. Res Integr Peer Rev 2020; 5:8. PMID: 32607252. PMCID: PMC7318523. DOI: 10.1186/s41073-020-00094-z.
Abstract
BACKGROUND: Our aim is to highlight the benefits and limitations of open and non-anonymized peer review. Our argument is based on the literature and on responses to a survey on the reviewing process of alt.chi, a more or less open review track within the Computer-Human Interaction (CHI) conference, the predominant conference in the field of human-computer interaction. This track is currently the only implementation of an open peer review process in the field of human-computer interaction, while, with the recent increase in interest in open scientific practices, open review is now being considered and used in other fields.
METHODS: We ran an online survey with 30 responses from alt.chi authors and reviewers, collecting quantitative data using multiple-choice questions and Likert scales. Qualitative data were collected using open questions.
RESULTS: Our main quantitative result is that respondents are more positive toward open and non-anonymous reviewing for alt.chi than for other parts of the CHI conference. The qualitative data specifically highlight the benefits of open and transparent academic discussions. The data and scripts are available at https://osf.io/vuw7h/, and the figures and follow-up work at http://tiny.cc/OpenReviews.
CONCLUSION: While the benefits are quite clear and the system is generally well liked by alt.chi participants, they remain reluctant to see it used in other venues. This concurs with a number of recent studies that suggest a divergence between support for a more open review process and its practical implementation.
Affiliation(s)
- Lonni Besançon
- Linköping University, Norrköping, Sweden
- Université Paris Sud, Orsay, France
- Jonathan P. Tennant
- Southern Denmark University Library, Campusvej 55, Odense, 5230 Denmark
- Center for Research and Interdisciplinarity, Université de Paris, Rue Charles V, Paris, France
- Institute for Globally Distributed Open Research and Education, Ubud, Indonesia
7. Saktheeswaran A, Srinivasan A, Stasko J. Touch? Speech? or Touch and Speech? Investigating Multimodal Interaction for Visual Network Exploration and Analysis. IEEE Transactions on Visualization and Computer Graphics 2020; 26:2168-2179. PMID: 32012017. DOI: 10.1109/tvcg.2020.2970512.
Abstract
Interaction plays a vital role during visual network exploration as users need to engage with both elements in the view (e.g., nodes, links) and interface controls (e.g., sliders, drop-down menus). Particularly as the size and complexity of a network grow, interactive displays supporting multimodal input (e.g., touch, speech, pen, gaze) have the potential to facilitate fluid interaction during visual network exploration and analysis. While multimodal interaction with network visualization seems like a promising idea, many open questions remain. For instance, do users actually prefer multimodal input over unimodal input, and if so, why? Does it enable them to interact more naturally, or does having multiple modes of input confuse them? To answer such questions, we conducted a qualitative user study in the context of a network visualization tool, comparing speech- and touch-based unimodal interfaces to a multimodal interface combining the two. Our results confirm that participants strongly prefer multimodal input over unimodal input, attributing their preference to: 1) the freedom of expression, 2) the complementary nature of speech and touch, and 3) the integrated interactions afforded by the combination of the two modalities. We also describe the interaction patterns participants employed to perform common network visualization operations and highlight themes for future multimodal network visualization systems to consider.
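The "integrated interactions" the study describes typically rest on fusing a spoken command with the touch context it refers to. The toy sketch below illustrates that fusion idea only, resolving a deictic word against the currently touched nodes; the function, its parameters, and the command vocabulary are illustrative, not the study system's API.

```python
def interpret_command(speech_text, touched_nodes, graph):
    """Resolve a spoken network-exploration command against touch context.
    `graph` is a dict of adjacency lists; `touched_nodes` are node ids
    currently under the user's fingers."""
    text = speech_text.lower()
    deictic = any(word in text for word in ("this", "these", "them"))
    targets = touched_nodes if (deictic and touched_nodes) else list(graph)
    if "neighbors" in text or "neighbours" in text:
        return {n: sorted(graph.get(n, [])) for n in targets}
    if "hide" in text:
        return {"hide": targets}
    return {"select": targets}

graph = {"a": ["b", "c"], "b": ["a"], "c": ["a"]}
# Speech supplies the operation, touch supplies the referent.
print(interpret_command("show neighbors of these", ["a"], graph))
```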
8. Bruckner S, Isenberg T, Ropinski T, Wiebel A. A Model of Spatial Directness in Interactive Visualization. IEEE Transactions on Visualization and Computer Graphics 2019; 25:2514-2528. PMID: 29994478. DOI: 10.1109/tvcg.2018.2848906.
Abstract
We discuss the concept of directness in the context of spatial interaction with visualization. In particular, we propose a model that allows practitioners to analyze and describe the spatial directness of interaction techniques, ultimately to better understand interaction issues that may affect usability. To reach these goals, we distinguish between different types of directness, each of which depends on a particular mapping between different spaces: the data space, the visualization space, the output space, the user space, the manipulation space, and the interaction space. In addition to introducing the model itself, we show how to apply it to several real-world interaction scenarios in visualization and discuss the resulting types of spatial directness, without recommending either more direct or more indirect interaction techniques. In particular, we demonstrate descriptive and evaluative usage of the proposed model and briefly discuss its generative usage.
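To make the chain of spaces concrete, the sketch below scores an interaction setup by the fraction of adjacent space-to-space mappings that are spatially direct. This scoring is a deliberate simplification for illustration, not the authors' definition of directness, and the ordering of spaces into a single chain is an assumption.

```python
SPACES = ["data", "visualization", "output", "user", "manipulation", "interaction"]

def directness(direct_pairs):
    """Fraction of adjacent space-to-space mappings considered spatially direct.
    `direct_pairs` is the set of adjacent (a, b) pairs judged to be identity-like."""
    pairs = list(zip(SPACES, SPACES[1:]))
    return sum(1 for p in pairs if p in direct_pairs) / len(pairs)

# Example: a touch screen where output, user, and interaction spaces roughly coincide.
print(directness({("visualization", "output"),
                  ("output", "user"),
                  ("manipulation", "interaction")}))
```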
9.
Abstract
The indoor climate is closely related to human health, well-being, and comfort, so an understanding of the indoor climate is vital. One way to improve the indoor climate is to place an aesthetically pleasing active plant wall in the environment. By collecting data with sensors placed in and around the plant wall, both the indoor climate and the status of the plant wall can be monitored and analyzed. This manuscript presents a user study with domain experts in this field, focusing on the representation of such data. The experts explored the data with a Line graph, a Horizon graph, and a Stacked area graph to better understand the status of the active plant wall and the indoor climate. Qualitative measures were collected with a think-aloud protocol and semi-structured interviews. The study resulted in four categories of analysis tasks: Overview, Detail, Perception, and Complexity. The Line graph was preferred for providing an overview and the Horizon graph for detailed analysis, revealing patterns, and showing discernible trends, while the Stacked area graph was generally not preferred. Based on these findings, directions for future research are discussed and formulated. The results and future directions of this research can facilitate the analysis of multivariate temporal data, both for domain users and for visualization researchers.
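For readers unfamiliar with the Horizon graph compared in the study, a minimal sketch of its standard band decomposition follows: the series is folded into stacked slices of equal height that a renderer would overlay in increasingly saturated colours. The function name and band count are illustrative and unrelated to the study's implementation.

```python
import numpy as np

def horizon_bands(values, n_bands=3):
    """Split a zero-centred series into the layered bands of a horizon graph.
    Returns (sign, band_index, clipped_values) triples, one per band."""
    values = np.asarray(values, dtype=float)
    band_height = np.abs(values).max() / n_bands
    bands = []
    for sign in (1, -1):                      # positive and mirrored negative halves
        half = np.clip(sign * values, 0, None)
        for i in range(n_bands):
            lo = i * band_height
            bands.append((sign, i, np.clip(half - lo, 0, band_height)))
    return bands

for sign, i, vals in horizon_bands([0.2, -0.5, 1.0, -1.2]):
    print(sign, i, vals)
```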
10. Mirhosseini S, Gutenko I, Ojal S, Marino J, Kaufman A. Immersive Virtual Colonoscopy. IEEE Transactions on Visualization and Computer Graphics 2019; 25:2011-2021. PMID: 30762554. DOI: 10.1109/tvcg.2019.2898763.
Abstract
Virtual colonoscopy (VC) is a non-invasive screening tool for colorectal polyps which employs volume visualization of a colon model reconstructed from a CT scan of the patient's abdomen. We present an immersive analytics system for VC which enhances and improves traditional desktop VC through the use of VR technologies. Our system, using a head-mounted display (HMD), includes all of the standard VC features, such as the volume-rendered endoluminal fly-through, measurement tool, bookmark modes, electronic biopsy, and slice views. The use of VR immersion, stereo, and a wider field of view and field of regard has a positive effect on polyp search and analysis tasks in our immersive VC system, a volumetric immersive analytics application. Navigation includes enhanced automatic speed and direction controls based on the user's head orientation, in conjunction with physical navigation for exploration of the local proximity. To accommodate the resolution and frame-rate requirements of HMDs, new rendering techniques have been developed, including mesh-assisted volume raycasting and a novel lighting paradigm. Feedback and further suggestions from expert radiologists show the promise of our system for immersive analysis in VC and encourage new avenues for exploring the use of VR in visualization systems for medical diagnosis.
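The head-orientation-driven navigation the abstract mentions can be pictured as a simple control heuristic: looking along the fly-through direction speeds travel, looking away slows it, and yaw steers. The sketch below is only that heuristic under assumed angle conventions; it is not the authors' control law.

```python
import math

def flythrough_velocity(head_pitch_deg, head_yaw_deg, max_speed=0.05):
    """Illustrative orientation-driven navigation: forward speed scales with how
    closely the head faces the travel direction (pitch near zero), and yaw
    produces a steering term. Angles in degrees, speed in metres per frame."""
    forward = max(math.cos(math.radians(head_pitch_deg)), 0.0)
    speed = max_speed * forward
    steer = math.sin(math.radians(head_yaw_deg))
    return speed, steer

# Looking slightly down and to the right of the centreline.
print(flythrough_velocity(head_pitch_deg=-10.0, head_yaw_deg=15.0))
```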
11. Hurter C, Riche NH, Drucker SM, Cordeil M, Alligier R, Vuillemot R. FiberClay: Sculpting Three Dimensional Trajectories to Reveal Structural Insights. IEEE Transactions on Visualization and Computer Graphics 2018; 25:704-714. PMID: 30136994. DOI: 10.1109/tvcg.2018.2865191.
Abstract
Visualizing 3D trajectories to extract insights about their similarities and spatial configuration is a critical task in several domains. Air traffic controllers, for example, deal with large quantities of aircraft routes to optimize safety in airspace, and neuroscientists attempt to understand neuronal pathways in the human brain by visualizing bundles of fibers from DTI images. Extracting insights from masses of 3D trajectories is challenging: the multiple three-dimensional lines have complex geometries and may overlap, cross, or even merge with each other, making it impossible to follow individual ones in dense areas. As trajectories are inherently spatial and three-dimensional, we propose FiberClay, a system to display and interact with 3D trajectories in immersive environments. FiberClay renders a large quantity of trajectories in real time using GP-GPU techniques. FiberClay also introduces a new set of interactive techniques for composing complex queries in 3D space, leveraging immersive environment controllers and user position. These techniques enable an analyst to select and compare sets of trajectories with specific geometries and data properties. We conclude by discussing insights found using FiberClay with domain experts in air traffic control and neurology.
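A controller-driven 3D query of the kind described ultimately amounts to a spatial test over trajectory geometry. The sketch below shows one such test, selecting trajectories that intersect a spherical brush; in the real system this would run on the GPU over very large data, whereas this is plain NumPy for illustration, with hypothetical names.

```python
import numpy as np

def select_by_brush(trajectories, brush_centre, brush_radius):
    """Return indices of trajectories that pass through a spherical 3D brush.
    Each trajectory is an (n, 3) array of points in world coordinates."""
    centre = np.asarray(brush_centre, dtype=float)
    selected = []
    for idx, traj in enumerate(trajectories):
        d = np.linalg.norm(np.asarray(traj, dtype=float) - centre, axis=1)
        if np.any(d <= brush_radius):
            selected.append(idx)
    return selected

trajectories = [np.array([[0, 0, 0], [1, 1, 1]]),
                np.array([[5, 5, 5], [6, 6, 6]])]
print(select_by_brush(trajectories, brush_centre=[1, 1, 1], brush_radius=0.5))
```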
12. Laha B, Bowman DA, Socha JJ. Bare-Hand Volume Cracker for Raw Volume Data Analysis. Front Robot AI 2016. DOI: 10.3389/frobt.2016.00056.