1
Socialistic 3D tracking of humans from a mobile robot for a ‘human following robot’ behaviour. Robotica 2023. [DOI: 10.1017/s0263574722001795]
Abstract
Robotic guides take visitors on a tour of a facility. Such robots must always know the position of the visitor for decision-making. Current tracking algorithms largely assume that the person will nearly always be visible. In the robotic guide application, however, visibility can be lost for prolonged periods, especially when the robot is rounding a corner or making a sharp turn; in such cases, the person cannot quickly re-enter the limited field of view of the rear camera. We propose a new algorithm that can track people for prolonged periods under such conditions. The algorithm benefits from an application-level heuristic that the person is nearly always following the robot, which can be used to predict their motion. The proposed work uses a Particle Filter with a ‘follow-the-robot’ motion model, and tracking is performed in 3D using a monocular camera. Unlike approaches in the literature, the proposed work observes from a moving base, which is especially challenging since a rotation of the robot can cause a large, sudden change in the position of the human in the image plane that existing approaches would filter out; tracking in 3D can resolve such errors. The proposed approach is tested in three different indoor scenarios. The results show that the approach is significantly better than the baselines, including tracking in the image plane and projecting into 3D, tracking with a randomized (non-social) motion model, tracking with a Kalman Filter, and using an LSTM for trajectory prediction.
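To make the ‘follow-the-robot’ idea concrete, the following is a minimal sketch (not the authors’ code) of a particle filter whose motion model pulls each 3D particle toward a point at a fixed following distance behind the robot, and whose measurement update reprojects particles onto the image plane of a monocular camera. The camera intrinsics, follow distance, and noise scales are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500                                        # number of particles
FOLLOW_DIST = 1.2                              # assumed preferred following distance (m)
fx, fy, cx, cy = 525.0, 525.0, 320.0, 240.0    # assumed pinhole intrinsics of the rear camera

# Particles are 3D positions of the person in the rear-camera frame (z = depth).
particles = rng.normal([0.0, 0.0, 2.0], 0.5, size=(N, 3))
weights = np.full(N, 1.0 / N)

def predict(particles, robot_delta, dt):
    """'Follow-the-robot' motion model: compensate the robot's ego-motion, then
    pull each particle toward a point FOLLOW_DIST straight out from the rear
    camera, plus process noise."""
    goal = np.array([0.0, 0.0, FOLLOW_DIST])
    particles = particles - robot_delta             # robot moved; person shifts in the camera frame
    particles += 0.5 * dt * (goal - particles)      # social pull toward the follow point
    particles += rng.normal(0.0, 0.05, particles.shape)
    return particles

def update(particles, weights, detection_uv):
    """Weight particles by how well their image-plane projection matches the
    person detection (u, v) from the monocular camera."""
    u = fx * particles[:, 0] / particles[:, 2] + cx
    v = fy * particles[:, 1] / particles[:, 2] + cy
    err = np.hypot(u - detection_uv[0], v - detection_uv[1])
    weights = weights * np.exp(-0.5 * (err / 20.0) ** 2) + 1e-300
    return weights / weights.sum()

def resample(particles, weights):
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# One tracking step: always predict; update only when the detector actually
# sees the person, so prolonged occlusions fall back on the social motion model.
particles = predict(particles, robot_delta=np.array([0.0, 0.0, 0.1]), dt=0.1)
weights = update(particles, weights, detection_uv=(330.0, 250.0))
particles, weights = resample(particles, weights)
print("estimated 3D position:", particles.mean(axis=0))
```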
2
Afzal S, Ghani S, Hittawe MM, Rashid SF, Knio OM, Hadwiger M, Hoteit I. Visualization and Visual Analytics Approaches for Image and Video Datasets: A Survey. ACM Transactions on Interactive Intelligent Systems 2023. [DOI: 10.1145/3576935]
Abstract
Image and video data analysis has become an increasingly important research area with applications in different domains such as security surveillance, healthcare, augmented and virtual reality, video and image editing, activity analysis and recognition, synthetic content generation, distance education, telepresence, remote sensing, sports analytics, art, non-photorealistic rendering, search engines, and social media. Recent advances in Artificial Intelligence (AI) and particularly deep learning have sparked new research challenges and led to significant advancements, especially in image and video analysis. These advancements have also resulted in significant research and development in other areas such as visualization and visual analytics, and have created new opportunities for future lines of research. In this survey paper, we present the current state of the art at the intersection of visualization and visual analytics, and image and video data analysis. We categorize the visualization papers included in our survey based on different taxonomies used in visualization and visual analytics research. We review these papers in terms of task requirements, tools, datasets, and application areas. We also discuss insights based on our survey results, trends and patterns, the current focus of visualization research, and opportunities for future research.
Affiliation(s)
- Shehzad Afzal, King Abdullah University of Science & Technology, Saudi Arabia
- Sohaib Ghani, King Abdullah University of Science & Technology, Saudi Arabia
- Omar M Knio, King Abdullah University of Science & Technology, Saudi Arabia
- Markus Hadwiger, King Abdullah University of Science & Technology, Saudi Arabia
- Ibrahim Hoteit, King Abdullah University of Science & Technology, Saudi Arabia
3
Ying L, Shu X, Deng D, Yang Y, Tang T, Yu L, Wu Y. MetaGlyph: Automatic Generation of Metaphoric Glyph-based Visualization. IEEE Transactions on Visualization and Computer Graphics 2023; 29:331-341. [PMID: 36179002] [DOI: 10.1109/tvcg.2022.3209447]
Abstract
Glyph-based visualization achieves an impressive graphic design when associated with comprehensive visual metaphors, which help audiences effectively grasp the conveyed information by revealing data semantics. However, creating such metaphoric glyph-based visualizations (MGVs) is not an easy task, as it requires not only a deep understanding of data but also professional design skills. This paper proposes MetaGlyph, an automatic system for generating MGVs from a spreadsheet. To develop MetaGlyph, we first conduct a qualitative analysis to understand the design of current MGVs from the perspectives of metaphor embodiment and glyph design. Based on the results, we introduce a novel framework for generating MGVs through metaphoric image selection and MGV construction. Specifically, MetaGlyph automatically selects metaphors, with corresponding images from online resources, based on the input data semantics. We then integrate a Monte Carlo tree search algorithm that explores the design of an MGV by associating visual elements with data dimensions given the data importance, semantic relevance, and glyph non-overlap. The system also provides editing feedback that allows users to customize the MGVs according to their design preferences. We demonstrate the use of MetaGlyph through a set of examples and a usage scenario, and validate its effectiveness through a series of expert interviews.
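As a rough illustration of the kind of objective such a search explores, the toy sketch below scores assignments of glyph visual elements to data dimensions by weighted data importance and semantic relevance, minus an overlap penalty, and picks the best assignment by exhaustive enumeration (a simple stand-in for the Monte Carlo tree search). All element names, dimensions, and scores are hypothetical, not MetaGlyph's data.

```python
from itertools import permutations

# Hypothetical glyph elements (one more than needed, so selection matters) and data dimensions.
elements = ["petal_length", "petal_color", "center_size", "stem_angle"]
dimensions = ["sales", "rating", "category"]

importance = {"sales": 0.9, "rating": 0.6, "category": 0.4}   # assumed data importance
# Assumed semantic relevance of (element, dimension) pairs, e.g. from image/text matching.
relevance = {
    ("petal_length", "sales"): 0.8, ("petal_length", "rating"): 0.5, ("petal_length", "category"): 0.2,
    ("petal_color", "sales"): 0.3,  ("petal_color", "rating"): 0.4,  ("petal_color", "category"): 0.9,
    ("center_size", "sales"): 0.6,  ("center_size", "rating"): 0.7,  ("center_size", "category"): 0.3,
    ("stem_angle", "sales"): 0.4,   ("stem_angle", "rating"): 0.6,   ("stem_angle", "category"): 0.1,
}
# Assumed penalty applied when two visually colliding elements are both used.
overlap = {frozenset(("petal_length", "center_size")): 0.4}

def score(chosen):
    """chosen: tuple of elements, aligned position-by-position with `dimensions`."""
    s = sum(importance[d] * relevance[(e, d)] for e, d in zip(chosen, dimensions))
    s -= sum(p for pair, p in overlap.items() if pair <= set(chosen))
    return s

best = max(permutations(elements, len(dimensions)), key=score)
print(dict(zip(dimensions, best)), round(score(best), 2))
```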
4
Wang J, Ma J, Hu K, Zhou Z, Zhang H, Xie X, Wu Y. Tac-Trainer: A Visual Analytics System for IoT-based Racket Sports Training. IEEE Transactions on Visualization and Computer Graphics 2023; 29:951-961. [PMID: 36179004] [DOI: 10.1109/tvcg.2022.3209352]
Abstract
Conventional racket sports training relies heavily on coaches' knowledge and experience, leading to biases in the guidance. To address this problem, smart wearable devices based on Internet of Things (IoT) technology have been extensively investigated to support data-driven training. Many studies have introduced methods to extract valuable information from the sensor data collected by IoT devices. However, this information cannot provide actionable insights for coaches due to the large data volume and high data dimensionality. We propose an IoT + VA framework, Tac-Trainer, that integrates the sensor data, the derived information, and coaches' knowledge to facilitate racket sports training. Tac-Trainer consists of four components: device configuration, data interpretation, training optimization, and result visualization. These components collect trainees' kinematic data through IoT devices, transform the data into attributes and indicators, generate training suggestions, and provide an interactive visualization interface for exploration, respectively. We further discuss new research opportunities and challenges inspired by our work from two perspectives: VA for IoT and IoT for VA.
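As a purely hypothetical illustration of the data-interpretation component, the sketch below turns a raw wrist-gyroscope stream into a few simple training indicators; the indicator definitions, sampling rate, and threshold are assumptions for illustration, not Tac-Trainer's actual attributes.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 100                                         # assumed sampling rate (Hz)
gyro = rng.normal(0, 60, size=(fs * 60, 3))      # fake 1 minute of wrist angular velocity (deg/s)

speed = np.linalg.norm(gyro, axis=1)             # angular speed magnitude per sample
swing_mask = speed > 200.0                       # assumed threshold separating swings from rest
swings = int(np.sum(np.diff(swing_mask.astype(int)) == 1))   # rising edges = swing count

indicators = {
    "swing_count_per_min": swings,
    "peak_angular_speed_dps": float(speed.max()),
    "mean_swing_speed_dps": float(speed[swing_mask].mean()) if swing_mask.any() else 0.0,
}
print(indicators)
```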
6
NE-Motion: Visual Analysis of Stroke Patients Using Motion Sensor Networks. Sensors 2021; 21:4482. [PMID: 34208996] [PMCID: PMC8271972] [DOI: 10.3390/s21134482]
Abstract
A large number of stroke survivors suffer from a significant decrease in upper extremity (UE) function, requiring rehabilitation therapy to boost recovery of UE motion. Assessing the efficacy of treatment strategies is a challenging problem in this context and is typically accomplished by observing the performance of patients during their execution of daily activities. A more detailed assessment of UE impairment can be undertaken with a clinical bedside test, the UE Fugl-Meyer Assessment, but it fails to examine compensatory movements of functioning body segments that are used to bypass impairment. In this work, we use a graph learning method to build a visualization tool tailored to support the analysis of stroke patients. Called NE-Motion, or Network Environment for Motion Capture Data Analysis, the proposed analytic tool handles a set of time series captured by motion sensors worn by patients, providing visual analytic resources to identify abnormalities in movement patterns. Developed in close collaboration with domain experts, NE-Motion is capable of uncovering important phenomena, such as compensation, while revealing differences between stroke patients and healthy individuals. The effectiveness of NE-Motion is shown in two case studies designed to analyze particular patients and to compare groups of subjects.
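The paper's graph learning method is not reproduced here; as a simplified stand-in, the sketch below builds a sensor graph whose edge weights are correlations between body-worn sensor signals, so unusually strong couplings (for example, trunk movement compensating for an impaired arm) would show up as heavy edges. Sensor names, signals, and the threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
sensors = ["trunk", "upper_arm", "forearm", "hand"]     # hypothetical sensor placement
signals = rng.normal(size=(len(sensors), 500))          # fake motion time series, one per sensor
signals[0] += 0.8 * signals[1]                          # inject a trunk / upper-arm coupling

corr = np.corrcoef(signals)                             # sensor-by-sensor correlation matrix
edges = [(sensors[i], sensors[j], round(float(corr[i, j]), 2))
         for i in range(len(sensors)) for j in range(i + 1, len(sensors))
         if abs(corr[i, j]) > 0.3]                      # assumed edge threshold
print(edges)                                            # heavy edges hint at compensation patterns
```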
7
Chen S, Zhao X, Luo B, Sun Z. Visual Browse and Exploration in Motion Capture Data with Phylogenetic Tree of Context-Aware Poses. Sensors 2020; 20:5224. [PMID: 32933203] [PMCID: PMC7570504] [DOI: 10.3390/s20185224]
Abstract
Visual browsing and exploration of motion capture data treat resource acquisition as a human-computer interaction problem and are an essential approach for target motion search. This paper presents a progressive scheme that starts with pose browsing, then locates a region of interest, and finally switches to online exploration of relevant motions. It mainly addresses three core issues. First, to alleviate the contradiction between the limited visual space and the ever-increasing size of real-world databases, it applies affinity propagation to a numerical pose similarity measure to perform data abstraction and obtain representative poses for each cluster. Second, to construct a meaningful neighborhood for user browsing, it further merges logical pose similarity measures with weight quartets and casts the isolated representative poses into a phylogenetic tree structure. Third, to support online motion exploration, including motion ranking and clustering, a biLSTM-based auto-encoder is proposed to encode the high-dimensional pose context into a compact latent space. Experimental results on CMU's motion capture data verify the effectiveness of the proposed method.
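A small sketch of the data-abstraction step, using scikit-learn's affinity propagation to pick exemplar poses from a pose feature matrix; the random features stand in for real joint-angle or joint-position descriptors, and the biLSTM auto-encoder used for the later exploration step is omitted.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(3)
poses = rng.normal(size=(300, 45))        # placeholder: 300 frames x (15 joints * 3 coords)

# Affinity propagation picks exemplars directly from the data; these serve as
# the representative poses shown to the user for browsing.
ap = AffinityPropagation(damping=0.9, max_iter=1000, random_state=0).fit(poses)
exemplars = ap.cluster_centers_indices_   # frame indices of the representative poses
print(len(exemplars), "representative poses selected")
```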
Affiliation(s)
- Songle Chen, Key Lab of Broadband Wireless Communication and Sensor Network Technology of Ministry of Education, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
- Xuejian Zhao, Key Lab of Broadband Wireless Communication and Sensor Network Technology of Ministry of Education, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
- Bingqing Luo, Jiangsu Key Laboratory of Big Data Security & Intelligent Processing, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
- Zhixin Sun, Key Lab of Broadband Wireless Communication and Sensor Network Technology of Ministry of Education, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
8
Chan GYY, Nonato LG, Chu A, Raghavan P, Aluru V, Silva CT. Motion Browser: Visualizing and Understanding Complex Upper Limb Movement Under Obstetrical Brachial Plexus Injuries. IEEE Transactions on Visualization and Computer Graphics 2019; 26:981-990. [PMID: 31449022] [DOI: 10.1109/tvcg.2019.2934280]
Abstract
The brachial plexus is a complex network of peripheral nerves that enables sensing from, and control of, the movements of the arm and hand. The coordination between muscles required to generate even simple movements is still not well understood, hindering knowledge of how best to treat patients with this type of peripheral nerve injury. To acquire enough information for medical data analysis, physicians conduct motion analysis assessments with patients to produce a rich dataset of electromyographic signals from multiple muscles, recorded together with joint movements during real-world tasks. However, tools for analyzing and visualizing these data in a succinct and interpretable manner are currently not available. Without the ability to integrate, compare, and compute multiple data sources in one platform, physicians can only compute simple statistical values that describe a patient's behavior vaguely, which limits the ability to answer clinical questions and generate hypotheses for research. To address this challenge, we have developed Motion Browser, an interactive visual analytics system that provides an efficient framework to extract and compare muscle activity patterns from the patient's limbs, with coordinated views that help users analyze muscle signals, motion data, and video information across different tasks. The system was developed as a result of a collaborative endeavor between computer scientists and orthopedic surgery and rehabilitation physicians. We present case studies showing how physicians can utilize the displayed information to understand how individuals coordinate their muscles, initiate appropriate treatment, and generate new hypotheses for future research.
9
Wagner M, Slijepcevic D, Horsak B, Rind A, Zeppelzauer M, Aigner W. KAVAGait: Knowledge-Assisted Visual Analytics for Clinical Gait Analysis. IEEE Transactions on Visualization and Computer Graphics 2019; 25:1528-1542. [PMID: 29993807] [DOI: 10.1109/tvcg.2017.2785271]
Abstract
In 2014, more than 10 million people in the US were affected by an ambulatory disability. Gait rehabilitation is therefore a crucial part of health care systems. The quantification of human locomotion enables clinicians to describe and analyze a patient's gait performance in detail and allows them to base clinical decisions on objective data. These assessments generate a vast amount of complex data that needs to be interpreted in a short period of time. We conducted a design study in cooperation with gait analysis experts to develop a novel Knowledge-Assisted Visual Analytics solution for clinical Gait analysis (KAVAGait). KAVAGait allows the clinician to store and inspect the complex data derived during clinical gait analysis. The system incorporates innovative and interactive visual interface concepts, which were developed based on the needs of clinicians. Additionally, an explicit knowledge store (EKS) allows the externalization and storage of implicit knowledge from clinicians, making this information available to others and supporting data inspection and clinical decision making. We validated our system by conducting expert reviews, a user study, and a case study. The results suggest that KAVAGait is able to support clinicians during clinical practice by visualizing complex gait data and providing the knowledge of other clinicians.
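As a purely hypothetical sketch of how an explicit knowledge store could be queried, the snippet below stores clinician-defined value ranges for spatio-temporal gait parameters per category and reports, for a new patient, the fraction of parameters falling inside each category's ranges. Categories, parameters, and ranges are illustrative, not KAVAGait's actual data.

```python
# Clinician-defined value ranges per gait category (illustrative numbers only).
EKS = {
    "healthy":      {"stride_length_m": (1.2, 1.6), "cadence_steps_min": (100, 120)},
    "hip_disorder": {"stride_length_m": (0.8, 1.2), "cadence_steps_min": (80, 105)},
}

def match_categories(patient, eks):
    """Return, per category, the fraction of the patient's parameters that fall
    inside the stored ranges."""
    scores = {}
    for category, ranges in eks.items():
        hits = sum(lo <= patient[p] <= hi for p, (lo, hi) in ranges.items() if p in patient)
        scores[category] = hits / len(ranges)
    return scores

patient = {"stride_length_m": 1.05, "cadence_steps_min": 98}
print(match_categories(patient, EKS))     # {'healthy': 0.0, 'hip_disorder': 1.0}
```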
10
Andrienko G, Andrienko N, Fuchs G, Garcia JMC. Clustering Trajectories by Relevant Parts for Air Traffic Analysis. IEEE Transactions on Visualization and Computer Graphics 2018; 24:34-44. [PMID: 28866540] [DOI: 10.1109/tvcg.2017.2744322]
Abstract
Clustering of trajectories of moving objects by similarity is an important technique in movement analysis. Existing distance functions assess the similarity between trajectories based on properties of the trajectory points or segments. The properties may include the spatial positions, times, and thematic attributes. There may be a need to focus the analysis on certain parts of trajectories, i.e., points and segments that have particular properties. According to the analysis focus, the analyst may need to cluster trajectories by similarity of their relevant parts only. Throughout the analysis process, the focus may change, and different parts of trajectories may become relevant. We propose an analytical workflow in which interactive filtering tools are used to attach relevance flags to elements of trajectories, clustering is done using a distance function that ignores irrelevant elements, and the resulting clusters are summarized for further analysis. We demonstrate how this workflow can be useful for different analysis tasks in three case studies with real data from the domain of air traffic. We propose a suite of generic techniques and visualization guidelines to support movement data analysis by means of relevance-aware trajectory clustering.
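The sketch below illustrates the core idea under assumed interfaces (not the authors' implementation): each trajectory carries per-point relevance flags produced by interactive filtering, a distance function compares only the flagged parts, and standard hierarchical clustering runs on the resulting distance matrix.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def relevant_distance(traj_a, flags_a, traj_b, flags_b, n_samples=20):
    """Mean distance between the relevant parts only, resampled to a common length."""
    a, b = traj_a[flags_a], traj_b[flags_b]
    if len(a) == 0 or len(b) == 0:
        return np.inf                       # no relevant part to compare
    ia = np.linspace(0, len(a) - 1, n_samples).astype(int)
    ib = np.linspace(0, len(b) - 1, n_samples).astype(int)
    return float(np.linalg.norm(a[ia] - b[ib], axis=1).mean())

rng = np.random.default_rng(4)
# Three fake 2D trajectories with relevance flags (e.g. "points inside sector X only").
trajs = [rng.normal(i, 0.1, size=(50, 2)).cumsum(axis=0) for i in range(3)]
flags = [rng.random(50) > 0.3 for _ in trajs]

n = len(trajs)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = relevant_distance(trajs[i], flags[i], trajs[j], flags[j])

labels = fcluster(linkage(squareform(D), method="average"), t=2, criterion="maxclust")
print("cluster labels:", labels)
```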