1. Cao A, Xie X, Zhang R, Tian Y, Fan M, Zhang H, Wu Y. Team-Scouter: Simulative Visual Analytics of Soccer Player Scouting. IEEE Transactions on Visualization and Computer Graphics 2025; 31:1-11. PMID: 39255095. DOI: 10.1109/tvcg.2024.3456216.
Abstract
In soccer, player scouting aims to find players suitable for a team to increase the winning chance in future matches. To scout suitable players, coaches and analysts need to consider whether the players will perform well in a new team, which is hard to learn directly from their historical performances. Match simulation methods have been introduced to scout players by estimating their expected contributions to a new team. However, they usually focus on the simulation of match results and hardly support interactive analysis for navigating potential target players and comparing their fine-grained simulated behaviors. In this work, we propose a visual analytics method to assist soccer player scouting based on match simulation. We construct a two-level match simulation framework for estimating both match results and player behaviors when a player comes to a new team. Based on the framework, we develop a visual analytics system, Team-Scouter, to facilitate the simulation-based soccer player scouting process through player navigation, comparison, and investigation. With our system, coaches and analysts can find potential players suitable for the team and compare them on historical and expected performances. For an in-depth investigation of the players' expected performances, the system provides a visual comparison between the simulated behaviors of the player and the actual ones. The usefulness and effectiveness of the system are demonstrated by two case studies on a real-world dataset and an expert interview.

2. Liu PX, Pan TY, Lin HS, Chu HK, Hu MC. VisionCoach: Design and Effectiveness Study on VR Vision Training for Basketball Passing. IEEE Transactions on Visualization and Computer Graphics 2024; 30:6665-6677. PMID: 38015694. DOI: 10.1109/tvcg.2023.3335312.
Abstract
Vision training is important for basketball players, who must effectively search for teammates who have wide-open opportunities to shoot, observe the defenders around those teammates, and quickly choose a proper way to pass the ball to the most suitable one. We develop an immersive virtual reality (VR) system called VisionCoach to simulate the player's viewing perspective and generate three systematically designed vision training tasks to support the training procedure. By recording the player's eye gaze and dribbling video sequence, the proposed system can analyze vision-related behavior to understand the training effectiveness. To demonstrate that the proposed VR training system can facilitate the cultivation of vision ability, we recruited 14 experienced players to participate in a 6-week between-subject study comparing the most frequently used 2D vision training method, the Vision Performance Enhancement (VPE) program, with the proposed system. Qualitative experiences and quantitative training results show that the proposed immersive VR training system can effectively improve players' vision ability in terms of gaze behavior and dribbling stability. Furthermore, training in the VR-VisionCoach condition transferred the learned abilities to real scenarios more easily than training in the 2D-VPE condition.

3. Hong J, Hnatyshyn R, Santos EAD, Maciejewski R, Isenberg T. A Survey of Designs for Combined 2D+3D Visual Representations. IEEE Transactions on Visualization and Computer Graphics 2024; 30:2888-2902. PMID: 38648152. DOI: 10.1109/tvcg.2024.3388516.
Abstract
We examine visual representations of data that make use of combinations of both 2D and 3D data mappings. Combining 2D and 3D representations is a common technique that allows viewers to understand multiple facets of the data with which they are interacting. While 3D representations focus on the spatial character of the data or the dedicated 3D data mapping, 2D representations often show abstract data properties and take advantage of the unique benefits of mapping to a plane. Many systems have used unique combinations of both types of data mappings effectively. Yet there are no systematic reviews of the methods in linking 2D and 3D representations. We systematically survey the relationships between 2D and 3D visual representations in major visualization publications (IEEE VIS, IEEE TVCG, and EuroVis) from 2012 to 2022. We closely examined 105 articles where 2D and 3D representations are connected visually, interactively, or through animation. These approaches are designed based on their visual environment, the relationships between their visual representations, and their possible layouts. Through our analysis, we introduce a design space as well as provide design guidelines for effectively linking 2D and 3D visual representations.

4. Yao L, Vuillemot R, Bezerianos A, Isenberg P. Designing for Visualization in Motion: Embedding Visualizations in Swimming Videos. IEEE Transactions on Visualization and Computer Graphics 2024; 30:1821-1836. PMID: 38090861. DOI: 10.1109/tvcg.2023.3341990.
Abstract
We report on challenges and considerations for supporting design processes for visualizations in motion embedded in sports videos. We derive our insights from analyzing swimming race visualizations and motion-related data, building a technology probe, as well as a study with designers. Understanding how to design situated visualizations in motion is important for a variety of contexts. Competitive sports coverage, in particular, increasingly includes information on athlete or team statistics and records. Although moving visual representations attached to athletes or other targets are starting to appear, systematic investigations on how to best support their design process in the context of sports videos are still missing. Our work makes several contributions in identifying opportunities for visualizations to be added to swimming competition coverage but, most importantly, in identifying requirements and challenges for designing situated visualizations in motion. Our investigations include the analysis of a survey with swimming enthusiasts on their motion-related information needs, an ideation workshop to collect designs and elicit design challenges, the design of a technology probe that allows designers to create embedded visualizations in motion based on real data, and an evaluation with visualization designers that aimed to understand the benefits of designing directly on videos.

5. Jin N, Zhan X. Big data analytics for image processing and computer vision technologies in sports health management. Technol Health Care 2024; 32:3167-3187. PMID: 38820030. DOI: 10.3233/thc-231875.
Abstract
BACKGROUND: Sports visualization has great potential for further development because the field is changing quickly and sports increasingly depend on data. Presently, conventional systems fail to track athletes' dynamic health data accurately with a low error rate, and they cannot precisely distinguish players' health data and visualize them. The data visualization technology that arose in the age of big data analytics is an excellent starting point for building fitness solutions based on computer vision. OBJECTIVE: This research presents a Big Data Analytic assisted Computer Vision Model (BD-CVM) for effective management of athletes' healthcare data with improved accuracy and precision. METHODS: The fitness and health of professional athletes are analyzed using information from a publicly available sports visualization dataset. A machine learning-assisted computer vision dynamic algorithm is used for effective image featuring and classification by categorizing sports videos through temporal and geographical data. RESULTS: Data screened during a sporting event can be analyzed and processed effectively with a low error rate. The proposed BD-CVM also includes an error analysis module that can be embedded in the design to ensure the accuracy requirements of data processing from sports videos. CONCLUSION: The research findings of this paper demonstrate that the presented strategy can potentially improve accuracy and precision and optimize mean square error in sports data classification and visualization.
Affiliation(s)
- Ning Jin: College of Sports, South-Central MinZu University, Wuhan, Hubei, China
- Xiao Zhan: College of Computer Science, South-Central MinZu University, Wuhan, Hubei, China

6. Lin T, Aouididi A, Chen Z, Beyer J, Pfister H, Wang JH. VIRD: Immersive Match Video Analysis for High-Performance Badminton Coaching. IEEE Transactions on Visualization and Computer Graphics 2024; 30:458-468. PMID: 37878442. DOI: 10.1109/tvcg.2023.3327161.
Abstract
Badminton is a fast-paced sport that requires a strategic combination of spatial, temporal, and technical tactics. To gain a competitive edge at high-level competitions, badminton professionals frequently analyze match videos to gain insights and develop game strategies. However, the current process for analyzing matches is time-consuming and relies heavily on manual note-taking, due to the lack of automatic data collection and appropriate visualization tools. As a result, there is a gap in effectively analyzing matches and communicating insights among badminton coaches and players. This work proposes an end-to-end immersive match analysis pipeline designed in close collaboration with badminton professionals, including Olympic and national coaches and players. We present VIRD, a VR Bird (i.e., shuttle) immersive analysis tool that supports interactive badminton game analysis in an immersive environment based on 3D reconstructed game views of the match video. We propose a top-down analytic workflow that allows users to seamlessly move from a high-level match overview to a detailed game view of individual rallies and shots, using situated 3D visualizations and video. We collect 3D spatial and dynamic shot data and player poses with computer vision models and visualize them in VR. Through immersive visualizations, coaches can interactively analyze situated spatial data (player positions, poses, and shot trajectories) with flexible viewpoints while navigating between shots and rallies effectively with embodied interaction. We evaluated the usefulness of VIRD with Olympic and national-level coaches and players in real matches. Results show that immersive analytics supports effective badminton match analysis with reduced context-switching costs and enhances spatial understanding with a high sense of presence.

7. Chen Z, Chiappalupi D, Lin T, Yang Y, Beyer J, Pfister H. RL-LABEL: A Deep Reinforcement Learning Approach Intended for AR Label Placement in Dynamic Scenarios. IEEE Transactions on Visualization and Computer Graphics 2023; PP:1347-1357. PMID: 37871050. DOI: 10.1109/tvcg.2023.3326568.
Abstract
Labels are widely used in augmented reality (AR) to display digital information. Ensuring the readability of AR labels requires placing them in an occlusion-free manner while keeping visual links legible, especially when multiple labels exist in the scene. Although existing optimization-based methods, such as force-based methods, are effective in managing AR labels in static scenarios, they often struggle in dynamic scenarios with constantly moving objects. This is due to their focus on generating layouts optimal for the current moment, neglecting future moments and leading to sub-optimal or unstable layouts over time. In this work, we present RL-LABEL, a deep reinforcement learning-based method intended for managing the placement of AR labels in scenarios involving moving objects. RL-LABEL considers both the current and predicted future states of objects and labels, such as positions and velocities, as well as the user's viewpoint, to make informed decisions about label placement. It balances the trade-offs between immediate and long-term objectives. We tested RL-LABEL in simulated AR scenarios on two real-world datasets, showing that it effectively learns the decision-making process for long-term optimization, outperforming two baselines (i.e., no view management and a force-based method) by minimizing label occlusions, line intersections, and label movement distance. Additionally, a user study involving 18 participants indicates that, within our simulated environment, RL-LABEL excels over the baselines in aiding users to identify, compare, and summarize data on labels in dynamic scenes.
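
As a rough illustration of why a long-horizon objective matters for dynamic label placement, the sketch below defines a per-frame layout cost (occlusion plus label movement) and the discounted return an RL policy would minimize rather than greedily optimizing each frame. This is not the authors' method: the cost terms, weights, and toy data are invented, and the paper additionally penalizes leader-line intersections.

```python
import numpy as np

def rect_overlap(a, b):
    """Overlap area of two axis-aligned rectangles given as (x, y, w, h)."""
    dx = min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0])
    dy = min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1])
    return max(dx, 0.0) * max(dy, 0.0)

def step_cost(labels, prev_labels, objects, w_occ=1.0, w_move=0.1):
    """Per-frame layout cost: label/object occlusion plus label movement
    since the previous frame. Weights are illustrative only."""
    occ = sum(rect_overlap(l, o) for l in labels for o in objects)
    move = sum(np.hypot(l[0] - p[0], l[1] - p[1])
               for l, p in zip(labels, prev_labels))
    return w_occ * occ + w_move * move

def discounted_return(step_costs, gamma=0.95):
    """Long-horizon objective a policy would minimize, instead of
    optimizing only the current frame."""
    return sum((gamma ** t) * c for t, c in enumerate(step_costs))

# toy example: two labels tracking two moving objects over two frames
objects = [(0.0, 0.0, 1.0, 1.0), (3.0, 0.0, 1.0, 1.0)]
frame0 = [(0.5, 1.0, 1.0, 0.5), (3.5, 1.0, 1.0, 0.5)]
frame1 = [(0.6, 1.1, 1.0, 0.5), (3.4, 1.1, 1.0, 0.5)]
costs = [step_cost(frame0, frame0, objects), step_cost(frame1, frame0, objects)]
print(discounted_return(costs))
```

A purely force-based layout would minimize only the first term of each step; the discounted sum is what lets a learned policy accept a slightly worse layout now to avoid larger occlusions or jitter later.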

8. Seebacher D, Polk T, Janetzko H, Keim DA, Schreck T, Stein M. Investigating the Sketchplan: A Novel Way of Identifying Tactical Behavior in Massive Soccer Datasets. IEEE Transactions on Visualization and Computer Graphics 2023; 29:1920-1936. PMID: 34898435. DOI: 10.1109/tvcg.2021.3134814.
Abstract
Coaches and analysts prepare for upcoming matches by identifying common patterns in the positioning and movement of the competing teams in specific situations. Existing approaches in this domain typically rely on manual video analysis and formation discussion using whiteboards; or expert systems that rely on state-of-the-art video and trajectory visualization techniques and advanced user interaction. We bridge the gap between these approaches by contributing a light-weight, simplified interaction and visualization system, which we conceptualized in an iterative design study with the coaching team of a European first league soccer team. Our approach is walk-up usable by all domain stakeholders, and at the same time, can leverage advanced data retrieval and analysis techniques: a virtual magnetic tactic-board. Users place and move digital magnets on a virtual tactic-board, and these interactions get translated to spatio-temporal queries, used to retrieve relevant situations from massive team movement data. Despite such seemingly imprecise query input, our approach is highly usable, supports quick user exploration, and retrieval of relevant results via query relaxation. Appropriate simplified result visualization supports in-depth analyses to explore team behavior, such as formation detection, movement analysis, and what-if analysis. We evaluated our approach with several experts from European first league soccer clubs. The results show that our approach makes the complex analytical processes needed for the identification of tactical behavior directly accessible to domain experts for the first time, demonstrating our support of coaches in preparation for future encounters.

9. Lan J, Zhou Z, Wang J, Zhang H, Xie X, Wu Y. SimuExplorer: Visual Exploration of Game Simulation in Table Tennis. IEEE Transactions on Visualization and Computer Graphics 2023; 29:1719-1732. PMID: 34818191. DOI: 10.1109/tvcg.2021.3130422.
Abstract
We propose SimuExplorer, a visualization system to help analysts explore how player behaviors impact scoring rates in table tennis. Such analysis is indispensable for analysts and coaches, who aim to formulate training plans that can help players improve. However, it is challenging to identify the impacts of individual behaviors, as well as to understand how these impacts are generated and accumulated gradually over the course of a game. To address these challenges, we worked closely with experts who work for a top national table tennis team to design SimuExplorer. The SimuExplorer system integrates a Markov chain model to simulate individual and cumulative impacts of particular behaviors. It then provides flow and matrix views to help users visualize and interpret these impacts. We demonstrate the usefulness of the system with case studies and expert interviews. The experts think highly of the system and have obtained insights into players' behaviors using it.
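
The Markov-chain idea behind this kind of simulation can be illustrated with a minimal sketch: rally states with absorbing win/lose outcomes, and a scoring rate estimated by simulation before and after a behavior change. The states and transition probabilities below are invented for illustration and are not the model used in SimuExplorer, which derives its chain from real stroke data.

```python
import numpy as np

# Simplified rally states; rows of P are transition probabilities.
states = ["serve", "attack", "defend", "win_point", "lose_point"]
P = np.array([
    # serve  attack  defend  win   lose
    [0.00,   0.55,   0.35,   0.05, 0.05],   # serve
    [0.00,   0.30,   0.30,   0.30, 0.10],   # attack
    [0.00,   0.25,   0.40,   0.10, 0.25],   # defend
    [0.00,   0.00,   0.00,   1.00, 0.00],   # win_point (absorbing)
    [0.00,   0.00,   0.00,   0.00, 1.00],   # lose_point (absorbing)
])

def scoring_rate(P, n_rallies=20_000, rng=np.random.default_rng(0)):
    """Estimate the probability of winning a point by simulating rallies."""
    wins = 0
    for _ in range(n_rallies):
        s = 0                       # every rally starts at "serve"
        while s < 3:                # states 3 and 4 are absorbing
            s = rng.choice(5, p=P[s])
        wins += (s == 3)
    return wins / n_rallies

base = scoring_rate(P)
# "What if" a behavior change: attacks are converted slightly more often.
P2 = P.copy(); P2[1] = [0.00, 0.28, 0.30, 0.34, 0.08]
print(base, scoring_rate(P2) - base)   # cumulative impact on the scoring rate
```

Comparing the two estimates is the kind of individual-versus-cumulative impact question the system's flow and matrix views are built to answer.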

10. Ye S, Chen Z, Chu X, Li K, Luo J, Li Y, Geng G, Wu Y. PuzzleFixer: A Visual Reassembly System for Immersive Fragments Restoration. IEEE Transactions on Visualization and Computer Graphics 2023; 29:429-439. PMID: 36179001. DOI: 10.1109/tvcg.2022.3209388.
Abstract
We present PuzzleFixer, an immersive interactive system for experts to rectify defective reassembled 3D objects. Reassembling the fragments of a broken object to restore its original state is the prerequisite of many analytical tasks such as cultural relics analysis and forensics reasoning. While existing computer-aided methods can automatically reassemble fragments, they often derive incorrect objects due to the complex and ambiguous fragment shapes. Thus, experts usually need to refine the object manually. Prior advances in immersive technologies provide benefits for realistic perception and direct interactions to visualize and interact with 3D fragments. However, few studies have investigated the reassembled object refinement. The specific challenges include: 1) the fragment combination set is too large to determine the correct matches, and 2) the geometry of the fragments is too complex to align them properly. To tackle the first challenge, PuzzleFixer leverages dimensionality reduction and clustering techniques, allowing users to review possible match categories, select the matches with reasonable shapes, and drill down to shapes to correct the corresponding faces. For the second challenge, PuzzleFixer embeds the object with node-link networks to augment the perception of match relations. Specifically, it instantly visualizes matches with graph edges and provides force feedback to facilitate the efficiency of alignment interactions. To demonstrate the effectiveness of PuzzleFixer, we conducted an expert evaluation based on two cases on real-world artifacts and collected feedback through post-study interviews. The results suggest that our system is suitable and efficient for experts to refine incorrect reassembled objects.

11. Chen Z, Yang Q, Xie X, Beyer J, Xia H, Wu Y, Pfister H. Sporthesia: Augmenting Sports Videos Using Natural Language. IEEE Transactions on Visualization and Computer Graphics 2023; 29:918-928. PMID: 36197856. DOI: 10.1109/tvcg.2022.3209497.
Abstract
Augmented sports videos, which combine visualizations and video effects to present data in actual scenes, can communicate insights engagingly and thus have been increasingly popular for sports enthusiasts around the world. Yet, creating augmented sports videos remains a challenging task, requiring considerable time and video editing skills. On the other hand, sports insights are often communicated using natural language, such as in commentaries, oral presentations, and articles, but usually lack visual cues. Thus, this work aims to facilitate the creation of augmented sports videos by enabling analysts to directly create visualizations embedded in videos using insights expressed in natural language. To achieve this goal, we propose a three-step approach: 1) detecting visualizable entities in the text, 2) mapping these entities into visualizations, and 3) scheduling these visualizations to play with the video. We analyzed 155 sports video clips and the accompanying commentaries for accomplishing these steps. Informed by our analysis, we have designed and implemented Sporthesia, a proof-of-concept system that takes racket-based sports videos and textual commentaries as the input and outputs augmented videos. We demonstrate Sporthesia's applicability in two exemplar scenarios, i.e., authoring augmented sports videos using text and augmenting historical sports videos based on auditory comments. A technical evaluation shows that Sporthesia achieves high accuracy (F1-score of 0.9) in detecting visualizable entities in the text. An expert evaluation with eight sports analysts suggests high utility, effectiveness, and satisfaction with our language-driven authoring method and provides insights for future improvement and opportunities.

12. Tong W, Chen Z, Xia M, Lo LYH, Yuan L, Bach B, Qu H. Exploring Interactions with Printed Data Visualizations in Augmented Reality. IEEE Transactions on Visualization and Computer Graphics 2023; 29:418-428. PMID: 36166542. DOI: 10.1109/tvcg.2022.3209386.
Abstract
This paper presents a design space of interaction techniques to engage with visualizations that are printed on paper and augmented through Augmented Reality. Paper sheets are widely used to deploy visualizations and provide a rich set of tangible affordances for interactions, such as touch, folding, tilting, or stacking. At the same time, augmented reality can dynamically update visualization content to provide commands such as pan, zoom, filter, or detail on demand. This paper is the first to provide a structured approach to mapping possible actions with the paper to interaction commands. This design space and the findings of a controlled user study have implications for future designs of augmented reality systems involving paper sheets and visualizations. Through workshops (N=20) and ideation, we identified 81 interactions that we classify in three dimensions: 1) commands that can be supported by an interaction, 2) the specific parameters provided by an (inter)action with paper, and 3) the number of paper sheets involved in an interaction. We tested user preference and viability of 11 of these interactions with a prototype implementation in a controlled study (N=12, HoloLens 2) and found that most of the interactions are intuitive and engaging to use. We summarized interactions (e.g., tilt to pan) that have strong affordance to complement "point" for data exploration, physical limitations and properties of paper as a medium, cases requiring redundancy and shortcuts, and other implications for design.

13. Wu Y, Deng D, Xie X, He M, Xu J, Zhang H, Zhang H, Wu Y. OBTracker: Visual Analytics of Off-ball Movements in Basketball. IEEE Transactions on Visualization and Computer Graphics 2023; 29:929-939. PMID: 36166529. DOI: 10.1109/tvcg.2022.3209373.
Abstract
In a basketball play, players who are not in possession of the ball (i.e., off-ball players) can still effectively contribute to the team's offense, such as making a sudden move to create scoring opportunities. Analyzing the movements of off-ball players can thus facilitate the development of effective strategies for coaches. However, common basketball statistics (e.g., points and assists) primarily focus on what happens around the ball and are mostly result-oriented, making it challenging to objectively assess and fully understand the contributions of off-ball movements. To address these challenges, we collaborate closely with domain experts and summarize the multi-level requirements for off-ball movement analysis in basketball. We first establish an assessment model to quantitatively evaluate the offensive contribution of an off-ball movement considering both the position of players and the team cooperation. Based on the model, we design and develop a visual analytics system called OBTracker to support the multifaceted analysis of off-ball movements. OBTracker enables users to identify the frequency and effectiveness of off-ball movement patterns and learn the performance of different off-ball players. A tailored visualization based on the Voronoi diagram is proposed to help users interpret the contribution of off-ball movements from a temporal perspective. We conduct two case studies based on the tracking data from NBA games and demonstrate the effectiveness and usability of OBTracker through expert feedback.
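
A crude proxy for the space an off-ball movement creates is the change in a player's Voronoi-controlled court area. The sketch below discretizes the court and assigns every grid cell to its nearest player; it is only an illustration of the Voronoi idea mentioned above, not the paper's assessment model, which also accounts for defenders and team cooperation. Court size and positions are invented.

```python
import numpy as np

def controlled_area(players, court=(28.65, 15.24), res=0.25):
    """Approximate each player's Voronoi cell area (in square meters) by
    assigning every grid cell of the court to its nearest player."""
    xs = np.arange(0, court[0], res)
    ys = np.arange(0, court[1], res)
    gx, gy = np.meshgrid(xs, ys)
    grid = np.stack([gx.ravel(), gy.ravel()], axis=1)          # (cells, 2)
    d = np.linalg.norm(grid[:, None, :] - players[None, :, :], axis=2)
    owner = d.argmin(axis=1)                                    # nearest player per cell
    return np.bincount(owner, minlength=len(players)) * res * res

# positions (x, y) of five offensive players before and after an off-ball cut
before = np.array([[5, 3], [8, 10], [12, 7], [20, 4], [22, 12]], float)
after = before.copy()
after[2] = [16, 9]                                              # player 2 cuts toward the wing
gain = controlled_area(after) - controlled_area(before)
print(gain[2])   # space gained (or lost) by the moving player
```

Summing such gains over time, per player and per movement pattern, is the kind of temporal contribution the system's Voronoi-based view is designed to make interpretable.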

14. Deng Z, Weng D, Liu S, Tian Y, Xu M, Wu Y. A survey of urban visual analytics: Advances and future directions. Computational Visual Media 2022; 9:3-39. PMID: 36277276. PMCID: PMC9579670. DOI: 10.1007/s41095-022-0275-7.
Abstract
Developing effective visual analytics systems demands care in characterization of domain problems and integration of visualization techniques and computational models. Urban visual analytics has already achieved remarkable success in tackling urban problems and providing fundamental services for smart cities. To promote further academic research and assist the development of industrial urban analytics systems, we comprehensively review urban visual analytics studies from four perspectives. In particular, we identify 8 urban domains and 22 types of popular visualization, analyze 7 types of computational method, and categorize existing systems into 4 types based on their integration of visualization techniques and computational models. We conclude with potential research directions and opportunities.
Affiliation(s)
- Zikun Deng: State Key Lab of CAD & CG, Zhejiang University, Hangzhou, 310058 China
- Di Weng: Microsoft Research Asia, Beijing, 100080 China
- Shuhan Liu: State Key Lab of CAD & CG, Zhejiang University, Hangzhou, 310058 China
- Yuan Tian: State Key Lab of CAD & CG, Zhejiang University, Hangzhou, 310058 China
- Mingliang Xu: School of Information Engineering, Zhengzhou University, Zhengzhou, China; Henan Institute of Advanced Technology, Zhengzhou University, Zhengzhou, 450001 China
- Yingcai Wu: State Key Lab of CAD & CG, Zhejiang University, Hangzhou, 310058 China

15. Yao L, Bezerianos A, Vuillemot R, Isenberg P. Visualization in Motion: A Research Agenda and Two Evaluations. IEEE Transactions on Visualization and Computer Graphics 2022; 28:3546-3562. PMID: 35727779. DOI: 10.1109/tvcg.2022.3184993.
Abstract
We contribute a research agenda for visualization in motion and two experiments to understand how well viewers can read data from moving visualizations. We define visualizations in motion as visual data representations that are used in contexts that exhibit relative motion between a viewer and an entire visualization. Sports analytics, video games, wearable devices, or data physicalizations are example contexts that involve different types of relative motion between a viewer and a visualization. To analyze the opportunities and challenges for designing visualization in motion, we show example scenarios and outline a first research agenda. Motivated primarily by the prevalence of and opportunities for visualizations in sports and video games, we started to investigate a small aspect of our research agenda: the impact of two important characteristics of motion, speed and trajectory, on a stationary viewer's ability to read data from moving donut and bar charts. We found that increasing speed and trajectory complexity negatively affected the accuracy of reading values from the charts and that bar charts were more negatively impacted. In practice, however, this impact was small: both charts were still read fairly accurately.

16. Design and Implementation of a Multidimensional Visualization Reconstruction System for Old Urban Spaces Based on Neural Networks. Computational Intelligence and Neuroscience 2022; 2022:4253128. PMID: 35694601. PMCID: PMC9184188. DOI: 10.1155/2022/4253128.
Abstract
This article presents an in-depth study of constructing a convolutional neural network model and a multidimensional visualization system for old urban spaces, and proposes the design of a multidimensional visualization reconstruction system for old urban spaces based on a neural network. It quantitatively analyzes the essential spatial attributes of urban shadow areas as nodes of the overall urban dynamic network along three dimensions (spatial connection strength, spatial connection distance, and spatial connection direction), summarizes the characteristics of old urban spatial structure from a dynamic-network perspective, and then proposes a model of old urban spatial design from that perspective. A shallow network structure is used to reduce the number of parameters learned by the reconfigurable convolutional neural network so that the model learns more general features. When the dataset is small, data augmentation is used to expand it and improve the recognition accuracy of the reconfigurable convolutional neural network. A real-time update method for multidimensional data visualization in big-data scenarios is proposed and implemented to reduce the network load and latency caused by charts of changing multidimensional data, reduce the data error rate, and maintain system stability in concurrent old-urban-space scenarios.

17. Deng Z, Weng D, Xie X, Bao J, Zheng Y, Xu M, Chen W, Wu Y. Compass: Towards Better Causal Analysis of Urban Time Series. IEEE Transactions on Visualization and Computer Graphics 2022; 28:1051-1061. PMID: 34596550. DOI: 10.1109/tvcg.2021.3114875.
Abstract
The spatial time series generated by city sensors allow us to observe urban phenomena like environmental pollution and traffic congestion at an unprecedented scale. However, recovering causal relations from these observations to explain the sources of urban phenomena remains a challenging task because these causal relations tend to be time-varying and demand proper time series partitioning for effective analyses. Prior approaches extract one causal graph given long-time observations, which cannot be directly applied to capturing, interpreting, and validating dynamic urban causality. This paper presents Compass, a novel visual analytics approach for in-depth analyses of the dynamic causality in urban time series. To develop Compass, we identify and address three challenges: detecting urban causality, interpreting dynamic causal relations, and unveiling suspicious causal relations. First, multiple causal graphs over time among urban time series are obtained with a causal detection framework extended from the Granger causality test. Then, a dynamic causal graph visualization is designed to reveal the time-varying causal relations across these causal graphs and facilitate the exploration of the graphs over time. Finally, a tailored multi-dimensional visualization is developed to support the identification of spurious causal relations, thereby improving the reliability of causal analyses. The effectiveness of Compass is evaluated with two case studies conducted on real-world urban datasets, including the air pollution and traffic speed datasets, and positive feedback was received from domain experts.
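
Time-varying causal edges of this kind can be approximated by running pairwise Granger tests over sliding windows. The sketch below uses statsmodels on an invented toy series and is only a simplified stand-in for the paper's extended causal detection framework (window length, lags, and threshold are arbitrary).

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

def sliding_granger(cause, effect, window=168, step=24, maxlag=3, alpha=0.05):
    """Test, within each time window, whether `cause` Granger-causes `effect`.
    Returns a list of (window_start, min_p_value, significant)."""
    out = []
    for start in range(0, len(cause) - window + 1, step):
        # statsmodels tests whether column 2 Granger-causes column 1
        seg = np.column_stack([effect[start:start + window],
                               cause[start:start + window]])
        res = grangercausalitytests(seg, maxlag=maxlag, verbose=False)
        p = min(res[lag][0]["ssr_ftest"][1] for lag in range(1, maxlag + 1))
        out.append((start, p, p < alpha))
    return out

# toy hourly series: traffic speed lags two steps behind an "incident" signal
rng = np.random.default_rng(1)
incident = rng.normal(size=1000)
speed = np.roll(incident, 2) * 0.8 + rng.normal(scale=0.5, size=1000)
edges = sliding_granger(incident, speed)
print(sum(sig for *_, sig in edges), "of", len(edges), "windows show a causal edge")
```

Each window where the test is significant contributes one directed edge to that window's causal graph, which is the raw material a dynamic causal graph visualization would then lay out over time.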

18. Chu X, Xie X, Ye S, Lu H, Xiao H, Yuan Z, Zhu-Tian C, Zhang H, Wu Y. TIVEE: Visual Exploration and Explanation of Badminton Tactics in Immersive Visualizations. IEEE Transactions on Visualization and Computer Graphics 2022; 28:118-128. PMID: 34596547. DOI: 10.1109/tvcg.2021.3114861.
Abstract
Tactic analysis is a major issue in badminton, as the effective use of tactics is the key to winning. A tactic in badminton is defined as a sequence of consecutive strokes. Most existing methods use statistical models to find sequential patterns of strokes and apply 2D visualizations such as glyphs and statistical charts to explore and analyze the discovered patterns. However, in badminton, spatial information like the shuttle trajectory, which is inherently 3D, is the core of a tactic. The lack of sufficient spatial awareness in 2D visualizations largely limits the tactic analysis of badminton. In this work, we collaborate with domain experts to study the tactic analysis of badminton in a 3D environment and propose an immersive visual analytics system, TIVEE, to assist users in exploring and explaining badminton tactics at multiple levels. Users can first explore various tactics from the third-person perspective using an unfolded visual presentation of stroke sequences. By selecting a tactic of interest, users can turn to the first-person perspective to perceive the detailed kinematic characteristics and explain its effects on the game result. The effectiveness and usefulness of TIVEE are demonstrated by case studies and an expert interview.

19. Wu J, Liu D, Guo Z, Xu Q, Wu Y. TacticFlow: Visual Analytics of Ever-Changing Tactics in Racket Sports. IEEE Transactions on Visualization and Computer Graphics 2022; 28:835-845. PMID: 34587062. DOI: 10.1109/tvcg.2021.3114832.
Abstract
Event sequence mining is often used to summarize patterns from hundreds of sequences but faces special challenges when handling racket sports data. In racket sports (e.g., tennis and badminton), a player hitting the ball is considered a multivariate event consisting of multiple attributes (e.g., hit technique and ball position). A rally (i.e., a series of consecutive hits beginning with one player serving the ball and ending with one player winning a point) thereby can be viewed as a multivariate event sequence. Mining frequent patterns and depicting how patterns change over time is instructive and meaningful to players who want to learn more short-term competitive strategies (i.e., tactics) that encompass multiple hits. However, players in racket sports usually change their tactics rapidly according to the opponent's reaction, resulting in ever-changing tactic progression. In this work, we introduce a tailored visualization system built on a novel multivariate sequence pattern mining algorithm to facilitate explorative identification and analysis of various tactics and tactic progression. The algorithm can mine multiple non-overlapping multivariate patterns from hundreds of sequences effectively. Based on the mined results, we propose a glyph-based Sankey diagram to visualize the ever-changing tactic progression and support interactive data exploration. Through two case studies with four domain experts in tennis and badminton, we demonstrate that our system can effectively obtain insights about tactic progression in most racket sports. We further discuss the strengths and the limitations of our system based on domain experts' feedback.
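
As a much-simplified stand-in for the multivariate sequence pattern mining described above, the sketch below counts contiguous fixed-length subsequences of (technique, ball position) events over a few toy rallies. The paper's algorithm is considerably more sophisticated (non-overlapping, variable-length multivariate patterns mined across hundreds of sequences); the event vocabulary here is invented.

```python
from collections import Counter

# Each rally is a sequence of multivariate hit events: (technique, ball position).
rallies = [
    [("serve", "short"), ("flick", "back"), ("smash", "mid"), ("block", "net")],
    [("serve", "short"), ("flick", "back"), ("drop", "net"), ("lift", "back")],
    [("serve", "long"), ("clear", "back"), ("smash", "mid")],
    [("serve", "short"), ("flick", "back"), ("smash", "mid")],
]

def frequent_tactics(rallies, length=3, top=3):
    """Count contiguous multivariate subsequences of a fixed length and
    return the most frequent ones (exact matching over sliding windows)."""
    counts = Counter()
    for rally in rallies:
        for i in range(len(rally) - length + 1):
            counts[tuple(rally[i:i + length])] += 1
    return counts.most_common(top)

for pattern, freq in frequent_tactics(rallies):
    print(freq, "x", " -> ".join(f"{tech}@{pos}" for tech, pos in pattern))
```

Tracking how the frequencies of such patterns shift from one game phase to the next is the "tactic progression" that the glyph-based Sankey diagram is meant to make visible.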

20. Ying L, Tang T, Luo Y, Shen L, Xie X, Yu L, Wu Y. GlyphCreator: Towards Example-based Automatic Generation of Circular Glyphs. IEEE Transactions on Visualization and Computer Graphics 2022; 28:400-410. PMID: 34596552. DOI: 10.1109/tvcg.2021.3114877.
Abstract
Circular glyphs are used across disparate fields to represent multidimensional data. However, although these glyphs are extremely effective, creating them is often laborious, even for those with professional design skills. This paper presents GlyphCreator, an interactive tool for the example-based generation of circular glyphs. Given an example circular glyph and multidimensional input data, GlyphCreator promptly generates a list of design candidates, any of which can be edited to satisfy the requirements of a particular representation. To develop GlyphCreator, we first derive a design space of circular glyphs by summarizing relationships between different visual elements. With this design space, we build a circular glyph dataset and develop a deep learning model for glyph parsing. The model can deconstruct a circular glyph bitmap into a series of visual elements. Next, we introduce an interface that helps users bind the input data attributes to visual elements and customize visual styles. We evaluate the parsing model through a quantitative experiment, demonstrate the use of GlyphCreator through two use scenarios, and validate its effectiveness through user interviews.

21. Tang J, Zhou Y, Tang T, Weng D, Xie B, Yu L, Zhang H, Wu Y. A Visualization Approach for Monitoring Order Processing in E-Commerce Warehouse. IEEE Transactions on Visualization and Computer Graphics 2022; 28:857-867. PMID: 34596553. DOI: 10.1109/tvcg.2021.3114878.
Abstract
The efficiency of warehouses is vital to e-commerce. Fast order processing at the warehouses ensures timely deliveries and improves customer satisfaction. However, monitoring, analyzing, and manipulating order processing in the warehouses in real time are challenging for traditional methods due to the sheer volume of incoming orders, the fuzzy definition of delayed order patterns, and the complex decision-making of order handling priorities. In this paper, we adopt a data-driven approach and propose OrderMonitor, a visual analytics system that assists warehouse managers in analyzing and improving order processing efficiency in real time based on streaming warehouse event data. Specifically, the order processing pipeline is visualized with a novel pipeline design based on the sedimentation metaphor to facilitate real-time order monitoring and suggest potentially abnormal orders. We also design a novel visualization that depicts order timelines based on the Gantt charts and Marey's graphs. Such a visualization helps the managers gain insights into the performance of order processing and find major blockers for delayed orders. Furthermore, an evaluating view is provided to assist users in inspecting order details and assigning priorities to improve the processing performance. The effectiveness of OrderMonitor is evaluated with two case studies on a real-world warehouse dataset.

22. Tang T, Wu Y, Wu Y, Yu L, Li Y. VideoModerator: A Risk-aware Framework for Multimodal Video Moderation in E-Commerce. IEEE Transactions on Visualization and Computer Graphics 2022; 28:846-856. PMID: 34587029. DOI: 10.1109/tvcg.2021.3114781.
Abstract
Video moderation, which refers to removing deviant or explicit content from e-commerce livestreams, has become prevalent owing to the social and engaging nature of livestreaming. However, this task is tedious and time-consuming due to the difficulties associated with watching and reviewing multimodal video content, including video frames and audio clips. To ensure effective video moderation, we propose VideoModerator, a risk-aware framework that seamlessly integrates human knowledge with machine insights. This framework incorporates a set of advanced machine learning models to extract risk-aware features from multimodal video content and discover potentially deviant videos. Moreover, this framework introduces an interactive visualization interface with three views, namely, a video view, a frame view, and an audio view. In the video view, we adopt a segmented timeline and highlight high-risk periods that may contain deviant information. In the frame view, we present a novel visual summarization method that combines risk-aware features and video context to enable quick video navigation. In the audio view, we employ a storyline-based design to provide a multi-faceted overview which can be used to explore audio content. Furthermore, we report the usage of VideoModerator through a case scenario and conduct experiments and a controlled user study to validate its effectiveness.

23. Sun G, Li T, Liang R. SurVizor: visualizing and understanding the key content of surveillance videos. J Vis (Tokyo) 2021. DOI: 10.1007/s12650-021-00803-w.

24.
Abstract
In sports data analysis and visualization, understanding collective tactical behavior has become an integral part. Interactive and automatic data analysis is instrumental in making use of growing amounts of compound information. In professional team sports, gathering and analyzing athlete monitoring data is common practice, with the goals of evaluating fatigue and subsequent adaptation responses, analyzing performance potential, and reducing injury and illness risk. Data visualization technology born in the era of big data analytics provides a good foundation for further developing fitness tools based on artificial intelligence (AI). Hence, this study proposed a video-based effective visualization framework (VEVF) based on artificial intelligence and big data analytics. The study uses machine learning to categorize sports videos by extracting both their temporal and spatial features. Our system is based on convolutional neural networks combined with temporal pooling layers. The experimental outcomes demonstrate that the recommended VEVF model achieves an accuracy of 98.7%, a recall of 94.5%, an F1-score of 97.9%, a precision of 96.7%, an error rate of 29.1%, a performance ratio of 95.2%, and an efficiency ratio of 96.1% compared to other existing models.
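
The "convolutional neural networks combined with temporal pooling layers" idea can be sketched as a per-frame CNN whose features are mean-pooled over time before classification. The architecture, layer sizes, and clip shape below are invented for illustration and are not the VEVF model.

```python
import torch
import torch.nn as nn

class FrameCNNTemporalPool(nn.Module):
    """Per-frame CNN features, temporal mean pooling, and a linear
    classifier, for clips shaped (batch, time, channels, height, width)."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.frame_cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # -> (N, 32)
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, clip):
        b, t, c, h, w = clip.shape
        feats = self.frame_cnn(clip.reshape(b * t, c, h, w))  # per-frame features
        feats = feats.reshape(b, t, -1).mean(dim=1)           # temporal pooling
        return self.classifier(feats)

model = FrameCNNTemporalPool(n_classes=5)
clip = torch.randn(2, 8, 3, 112, 112)    # 2 clips of 8 frames each
print(model(clip).shape)                  # torch.Size([2, 5])
```

Mean pooling over the time axis is the simplest way to aggregate frame-level evidence into a single clip-level prediction; max pooling or a recurrent layer are common alternatives.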

25. VeLight: A 3D virtual reality tool for CT-based anatomy teaching and training. J Vis (Tokyo) 2021. DOI: 10.1007/s12650-021-00790-y.

26. Wang J, Cai X, Su J, Liao Y, Wu Y. What makes a scatterplot hard to comprehend: data size and pattern salience matter. J Vis (Tokyo) 2021. DOI: 10.1007/s12650-021-00778-8.

27. RallyComparator: visual comparison of the multivariate and spatial stroke sequence in table tennis rally. J Vis (Tokyo) 2021. DOI: 10.1007/s12650-021-00772-0.

28. Zheng F, Wen J, Zhang X, Chen Y, Zhang X, Liu Y, Xu T, Chen X, Wang Y, Su W, Zhou Z. Visual abstraction of large-scale geographical point data with credible spatial interpolation. J Vis (Tokyo) 2021. DOI: 10.1007/s12650-021-00777-9.

29. Wang J, Wu J, Cao A, Zhou Z, Zhang H, Wu Y. Tac-Miner: Visual Tactic Mining for Multiple Table Tennis Matches. IEEE Transactions on Visualization and Computer Graphics 2021; 27:2770-2782. PMID: 33891553. DOI: 10.1109/tvcg.2021.3074576.
Abstract
In table tennis, tactics specified by three consecutive strokes represent the high-level competition strategies in matches. Effective detection and analysis of tactics can reveal the playing styles of players, as well as their strengths and weaknesses. However, tactical analysis in table tennis is challenging as the analysts can often be overwhelmed by the large quantity and high dimension of the data. Statistical charts have been extensively used by researchers to explore and visualize table tennis data. However, these charts cannot support efficient comparative and correlation analysis of complicated tactic attributes. Besides, existing studies are limited to the analysis of one match. However, one player's strategy can change along with his/her opponents in different matches. Therefore, the data of multiple matches can support a more comprehensive tactical analysis. To address these issues, we introduced a visual analytics system called Tac-Miner to allow analysts to effectively analyze, explore, and compare tactics of multiple matches based on the advanced embedding and dimension reduction algorithms along with an interactive glyph. We evaluate our glyph's usability through a user study and demonstrate the system's usefulness through a case study with insights approved by coaches and domain experts.

30. Xie X, Wang J, Liang H, Deng D, Cheng S, Zhang H, Chen W, Wu Y. PassVizor: Toward Better Understanding of the Dynamics of Soccer Passes. IEEE Transactions on Visualization and Computer Graphics 2021; 27:1322-1331. PMID: 33048693. DOI: 10.1109/tvcg.2020.3030359.
Abstract
In soccer, passing is the most frequent interaction between players and plays a significant role in creating scoring chances. Experts are interested in analyzing players' passing behavior to learn passing tactics, i.e., how players build up an attack with passing. Various approaches have been proposed to facilitate the analysis of passing tactics. However, the dynamic changes of a team's employed tactics over a match have not been comprehensively investigated. To address the problem, we closely collaborate with domain experts and characterize requirements to analyze the dynamic changes of a team's passing tactics. To characterize the passing tactic employed for each attack, we propose a topic-based approach that provides a high-level abstraction of complex passing behaviors. Based on the model, we propose a glyph-based design to reveal the multi-variate information of passing tactics within different phases of attacks, including player identity, spatial context, and formation. We further design and develop PassVizor, a visual analytics system, to support the comprehensive analysis of passing dynamics. With the system, users can detect the changing patterns of passing tactics and examine the detailed passing process for evaluating passing tactics. We invite experts to conduct analysis with PassVizor and demonstrate the usability of the system through an expert interview.
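
One plausible way to realize a topic-based abstraction of passing behavior, offered purely as an illustration since the paper does not specify this exact model, is to treat each attack as a "document" of pass tokens and fit a standard topic model; the zone tokens and attacks below are invented.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Each attack phase becomes a document of pass tokens (origin zone -> target zone).
attacks = [
    "LB_LW LW_CM CM_ST",            # build-up down the left
    "LB_CM CM_LW LW_ST",
    "RB_RW RW_CM CM_ST",            # build-up down the right
    "RB_CM CM_RW RW_ST",
    "GK_CB CB_CM CM_ST",            # central build-up
    "GK_CB CB_CM CM_RW RW_ST",
]

vec = CountVectorizer(token_pattern=r"\S+")
X = vec.fit_transform(attacks)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-3:][::-1]]
    print(f"passing 'topic' {k}:", top)     # characteristic pass combinations
print(lda.transform(X)[0])                  # topic mixture of the first attack
```

Labeling each attack with its dominant topic, and watching how that labeling drifts across the phases of a match, is the kind of high-level abstraction that makes the dynamics of passing tactics visible.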

31. Rubab S, Tang J, Wu Y. Examining interaction techniques in data visualization authoring tools from the perspective of goals and human cognition: a survey. J Vis (Tokyo) 2021. DOI: 10.1007/s12650-020-00705-3.