1
Wang Z, Dong Y, Wang S, Zhang X. Synchronizing controlled logistics terminals between simulated and visualized production lines using an ASTAK method. Sci Rep 2025; 15:14574. PMID: 40281110; PMCID: PMC12032037; DOI: 10.1038/s41598-025-99483-x.
Abstract
In a fully automated factory, the Visualized Production Line is a crucial tool that helps personnel monitor and manage the manufacturing process. The synchronization between the visualized line and the actual production line significantly affects the efficiency of production supervision. This article proposes a method for controlling the logistics terminals that comprises three steps: animation simplification, timing alignment, and keyframe synchronization (hereinafter referred to as ASTAK). The method aims to achieve precise synchronization between the Simulated Production Line and the Visualized Production Line when the process data of the simulated line cannot be accessed directly. Experiments demonstrate that the proposed method reduces the time difference between the simulated and visualized production lines to an average of 0.08 s with a synchronization rate of 99.97%, verifying its effectiveness and its advantage over other state-of-the-art methods.
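The paper's ASTAK pipeline is not reproduced here, but its timing-alignment and keyframe-synchronization steps can be pictured with a minimal sketch: remove the constant lag between the two keyframe timelines, then measure the residual per-keyframe difference. The function, the tolerance, and the alignment strategy below are all hypothetical.

```python
# Illustrative sketch only, not the authors' ASTAK implementation.
# Align the visualized line's keyframes to the simulated line's timeline
# and report the residual time difference and synchronization rate.

def synchronize_keyframes(sim_times, vis_times, tolerance=0.1):
    """Remove the constant lag between two keyframe timelines (timing
    alignment), then measure per-keyframe residuals (synchronization)."""
    assert len(sim_times) == len(vis_times)
    lag = sum(v - s for s, v in zip(sim_times, vis_times)) / len(sim_times)
    aligned = [v - lag for v in vis_times]
    diffs = [abs(a - s) for a, s in zip(aligned, sim_times)]
    mean_diff = sum(diffs) / len(diffs)
    sync_rate = sum(d <= tolerance for d in diffs) / len(diffs)
    return mean_diff, sync_rate

# Example: keyframe timestamps (seconds) for both lines.
sim = [0.0, 1.5, 3.0, 4.5]
vis = [0.3, 1.82, 3.28, 4.81]
print(synchronize_keyframes(sim, vis))
```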
Affiliation(s)
- Zixiao Wang
- School of Information and Communication Engineering, Communication University of China, Beijing, 100024, China.
- Yue Dong
- School of Information and Communication Engineering, Communication University of China, Beijing, 100024, China
- Shengguo Wang
- China Ordnance Industry Survey and Geotechnical Institute Co., Ltd, Beijing, 100053, China
- Xinxiang Zhang
- Department of Electrical and Computer Engineering, Southern Methodist University, Dallas, TX, 75205, USA
2
Afzaal H, Alim U. Evaluating Force-Based Haptics for Immersive Tangible Interactions with Surface Visualizations. IEEE Transactions on Visualization and Computer Graphics 2025; 31:886-896. PMID: 39255113; DOI: 10.1109/tvcg.2024.3456316.
Abstract
Haptic feedback provides a sensory stimulus crucial for interacting with and analyzing three-dimensional spatio-temporal phenomena on surface visualizations. Given its ability to provide enhanced spatial perception and scene maneuverability, virtual reality (VR) catalyzes haptic interactions on surface visualizations. Various interaction modes, encompassing both mid-air and on-surface interactions, with or without assisting force stimuli, have been explored using haptic force feedback devices. In this paper, we evaluate on-surface and assisted on-surface haptic interaction modes against a no-haptic interaction mode. A force-based haptic stylus is used for all three modalities; the on-surface mode uses collision-based forces, whereas the assisted on-surface mode adds a snapping force. We conducted a within-subjects user study involving fundamental interaction tasks performed on surface visualizations. Keeping a consistent visual design across all three modes, our study incorporates tasks that require locating the highest, lowest, and random points on surfaces, and tasks that focus on brushing curves on surfaces with varying complexity and occlusion levels. Our findings show that participants took almost the same time to brush curves in all the interaction modes. They drew smoother curves with the on-surface interaction modes than with the no-haptic mode; moreover, the assisted on-surface mode provided better accuracy than the on-surface mode. The on-surface mode was slower in point localization, but accuracy depended on the visual cues and occlusions associated with the tasks. Finally, we discuss participant feedback on using haptic force feedback as a tangible input modality and share takeaways to aid the design of haptics-based tangible interactions for surface visualizations.
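The abstract describes the assisted mode as adding a snapping force; one common way to realize such a force (not necessarily the authors') is a spring pulling the stylus tip toward the surface. The stiffness value and height-field surface below are hypothetical.

```python
# Illustrative sketch, not the study's implementation. A snapping force can
# be modeled as a spring (Hooke's law) pulling the stylus tip toward the
# surface; the target here is approximated by the height-field point
# directly below/above the tip.

import numpy as np

def snapping_force(tip, surface_fn, stiffness=120.0):
    """Spring force pulling the stylus tip toward the height-field
    surface z = surface_fn(x, y)."""
    x, y, _ = tip
    target = np.array([x, y, surface_fn(x, y)])
    return stiffness * (target - np.asarray(tip))

# Example: a gentle sinusoidal height field.
surface = lambda x, y: 0.1 * np.sin(x) * np.cos(y)
print(snapping_force((0.5, 0.2, 0.3), surface))
```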
3
Machuca MDB, Israel JH, Keefe DF, Stuerzlinger W. Toward More Comprehensive Evaluations of 3D Immersive Sketching, Drawing, and Painting. IEEE Transactions on Visualization and Computer Graphics 2024; 30:4648-4664. PMID: 37186537; DOI: 10.1109/tvcg.2023.3276291.
Abstract
To understand current practice and explore the potential for more comprehensive evaluations of 3D immersive sketching, drawing, and painting, we present a survey of evaluation methodologies used in existing 3D sketching research, a breakdown and discussion of important phases (sub-tasks) in the 3D sketching process, and a framework that suggests how these factors can inform evaluation strategies in future 3D sketching research. Existing evaluations identified in the survey are organized and discussed within three high-level categories: 1) evaluating the 3D sketching activity, 2) evaluating 3D sketching tools, and 3) evaluating 3D sketching artifacts. The new framework suggests targeting evaluations to one or more of these categories and identifying relevant user populations. In addition, building upon the discussion of the different phases of the 3D sketching process, the framework suggests evaluating relevant sketching tasks, which may range from low-level perception and hand movements to high-level conceptual design. Finally, we discuss limitations and challenges that arise when evaluating 3D sketching, including a lack of standardization of evaluation methods and multiple, potentially conflicting, ways to evaluate the same task and user interface usability; we also identify opportunities for more holistic evaluations. We hope the results can contribute to accelerating research in this domain and, ultimately, to broad adoption of immersive sketching systems.
4
Luo Q, Gao X, Jiang B, Yan X, Liu W, Ge J. A review of fine-grained sketch image retrieval based on deep learning. Mathematical Biosciences and Engineering 2023; 20:21186-21210. PMID: 38124593; DOI: 10.3934/mbe.2023937.
Abstract
Sketch image retrieval is an important branch of the image retrieval field that relies on sketch images as queries for content search. Sketch images are relatively easy to acquire, and in some scenarios, such as when photos of real objects cannot be obtained, they offer unique practical value, which has attracted the attention of many researchers. Traditional, category-level sketch image retrieval also has limitations in practice: merely retrieving images from the same category may not identify the specific target the user desires. Fine-grained sketch image retrieval therefore merits further exploration, as it offers more precise and targeted retrieval than the traditional approach. We comprehensively review deep-learning-based fine-grained sketch image retrieval and its applications, and conduct an in-depth analysis and summary of the research literature of recent years. We also introduce three fine-grained sketch image retrieval datasets in detail: Queen Mary University of London (QMUL) ShoeV2, ChairV2, and PKU Sketch Re-ID; list common evaluation metrics in the sketch image retrieval field; and report the best performance achieved on these datasets. Finally, we discuss the existing challenges, unresolved issues, and potential research directions in this field, aiming to provide guidance and inspiration for future research.
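A recurring building block in the fine-grained retrieval methods this review covers is a shared embedding trained with a triplet loss, so a sketch embeds closer to its matching photo than to any other. The sketch below is a generic illustration, not code from any surveyed paper; PyTorch and the margin value are our assumptions.

```python
# Generic illustration, not from the review: a triplet loss trains a shared
# embedding so a sketch lies closer to its matching photo than to a
# non-matching one.

import torch
import torch.nn.functional as F

def triplet_loss(sketch, pos_photo, neg_photo, margin=0.2):
    """Hinge on the gap between matching and non-matching distances."""
    d_pos = F.pairwise_distance(sketch, pos_photo)
    d_neg = F.pairwise_distance(sketch, neg_photo)
    return F.relu(d_pos - d_neg + margin).mean()

# Example with random 64-d embeddings for a batch of 8 triplets.
s, p, n = (torch.randn(8, 64) for _ in range(3))
print(triplet_loss(s, p, n))
```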
Affiliation(s)
- Qing Luo
- Yuxi Power Supply Bureau, Yunnan Power Grid Co., Ltd., Yuxi, China
- Xiang Gao
- Yuxi Power Supply Bureau, Yunnan Power Grid Co., Ltd., Yuxi, China
- Bo Jiang
- Yuxi Power Supply Bureau, Yunnan Power Grid Co., Ltd., Yuxi, China
- Xueting Yan
- Yuxi Power Supply Bureau, Yunnan Power Grid Co., Ltd., Yuxi, China
- Wanyuan Liu
- Yuxi Power Supply Bureau, Yunnan Power Grid Co., Ltd., Yuxi, China
- Junchao Ge
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China
5
Xu X, Zhou Y, Shao B, Feng G, Yu C. GestureSurface: VR Sketching through Assembling Scaffold Surface with Non-Dominant Hand. IEEE Transactions on Visualization and Computer Graphics 2023; 29:2499-2507. PMID: 37027702; DOI: 10.1109/tvcg.2023.3247059.
Abstract
3D sketching in virtual reality (VR) provides an immersive drawing experience for design. However, because VR lacks depth perception cues, scaffolding surfaces that constrain strokes to 2D are commonly used as visual guides to make accurate strokes easier to draw. When the dominant hand is occupied by the pen tool, the efficiency of scaffolding-based sketching can be improved by using gesture input to engage the otherwise idle non-dominant hand. This paper presents GestureSurface, a bi-manual interface in which the non-dominant hand performs gestures to operate scaffolding surfaces while the dominant hand draws with a controller. We designed a set of non-dominant-hand gestures to create and manipulate scaffolding surfaces, which are assembled automatically from five predefined primitive surfaces. We evaluated GestureSurface through a 20-person user study and found that scaffolding-based sketching with non-dominant-hand gestures offers high efficiency and low fatigue.
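For a planar scaffold, constraining a stroke sample to the surface reduces to a point-to-plane projection; the sketch below illustrates that geometry and is not the paper's implementation.

```python
# Illustrative geometry, not the paper's code: a stroke sample is constrained
# to a planar scaffold surface by projecting it onto the plane.

import numpy as np

def project_to_plane(point, plane_origin, plane_normal):
    """Project a 3D controller sample onto a planar scaffold surface."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return point - np.dot(point - plane_origin, n) * n

# Example: snap a sample onto the plane z = 1.
print(project_to_plane(np.array([0.4, 0.7, 1.6]),
                       np.array([0.0, 0.0, 1.0]),
                       np.array([0.0, 0.0, 1.0])))
```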
6
Pallot M, Fleury S, Poussard B, Richir S. What are the Challenges and Enabling Technologies to Implement the Do-It-Together Approach Enhanced by Social Media, its Benefits and Drawbacks? Journal of Innovation Economics & Management 2023. DOI: 10.3917/jie.pr1.0132.
7
Xu P, Hospedales TM, Yin Q, Song YZ, Xiang T, Wang L. Deep Learning for Free-Hand Sketch: A Survey. IEEE Transactions on Pattern Analysis and Machine Intelligence 2023; 45:285-312. PMID: 35130149; DOI: 10.1109/tpami.2022.3148853.
Abstract
Free-hand sketches are highly illustrative, and have been widely used by humans to depict objects or stories from ancient times to the present. The recent prevalence of touchscreen devices has made sketch creation a much easier task than ever and consequently made sketch-oriented applications increasingly popular. The progress of deep learning has immensely benefited free-hand sketch research and applications. This paper presents a comprehensive survey of the deep learning techniques oriented at free-hand sketch data, and the applications that they enable. The main contents of this survey include: (i) A discussion of the intrinsic traits and unique challenges of free-hand sketch, to highlight the essential differences between sketch data and other data modalities, e.g., natural photos. (ii) A review of the developments of free-hand sketch research in the deep learning era, by surveying existing datasets, research topics, and the state-of-the-art methods through a detailed taxonomy and experimental evaluation. (iii) Promotion of future work via a discussion of bottlenecks, open problems, and potential research directions for the community.
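One trait the survey highlights is that a sketch is naturally a stroke sequence rather than a pixel grid. The widely used stroke-3 encoding from the QuickDraw/Sketch-RNN line of work stores per-point offsets plus a pen-lift flag; a minimal illustrative encoder (not from the survey) follows.

```python
# Minimal illustrative encoder, not from the survey: the "stroke-3" format
# represents a sketch as (dx, dy, pen_lifted) triples.

def to_stroke3(strokes):
    """Encode absolute (x, y) polylines as (dx, dy, pen_lifted) triples,
    with pen_lifted = 1 after the last point of each stroke."""
    seq, prev = [], (0, 0)
    for stroke in strokes:               # each stroke: list of (x, y) points
        for i, (x, y) in enumerate(stroke):
            lifted = 1 if i == len(stroke) - 1 else 0
            seq.append((x - prev[0], y - prev[1], lifted))
            prev = (x, y)
    return seq

# Example: two short strokes.
print(to_stroke3([[(0, 0), (3, 4)], [(5, 5), (6, 7)]]))
```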
8
Ye H, Kwan KC, Fu H. 3D Curve Creation on and Around Physical Objects With Mobile AR. IEEE Transactions on Visualization and Computer Graphics 2022; 28:2809-2821. PMID: 33400650; DOI: 10.1109/tvcg.2020.3049006.
Abstract
The recent advance in motion tracking (e.g., Visual Inertial Odometry) allows the use of a mobile phone as a 3D pen, thus significantly benefiting various mobile Augmented Reality (AR) applications based on 3D curve creation. However, when creating 3D curves on and around physical objects with mobile AR, tracking might be less robust or even lost due to camera occlusion or textureless scenes. This motivates us to study how to achieve natural interaction with minimum tracking errors during close interaction between a mobile phone and physical objects. To this end, we contribute an elicitation study on input point and phone grip, and a quantitative study on tracking errors. Based on the results, we present a system for direct 3D drawing with an AR-enabled mobile phone as a 3D pen, and interactive correction of 3D curves with tracking errors in mobile AR. We demonstrate the usefulness and effectiveness of our system for two applications: in-situ 3D drawing, and direct 3D measurement.
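Using an AR-tracked phone as a 3D pen amounts to transforming a fixed tip offset, expressed in the phone's local frame, by the tracked pose each frame. The sketch below illustrates that idea with a hypothetical offset; it is not the authors' system.

```python
# Illustrative sketch, not the paper's system: derive the world-space pen tip
# from the phone's tracked pose. The 7 cm tip offset is a hypothetical value.

import numpy as np

def pen_tip_world(rotation, position, tip_offset=(0.0, 0.07, 0.0)):
    """World-space pen tip from the phone's tracked pose.
    rotation: 3x3 rotation matrix; position: world-space 3-vector."""
    return position + rotation @ np.asarray(tip_offset)

# Example: phone at the origin with identity orientation.
print(pen_tip_world(np.eye(3), np.zeros(3)))
```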
9
Steed A, Takala TM, Archer D, Lages W, Lindeman RW. Directions for 3D User Interface Research from Consumer VR Games. IEEE Transactions on Visualization and Computer Graphics 2021; 27:4171-4182. PMID: 34449366; DOI: 10.1109/tvcg.2021.3106431.
Abstract
With the continuing development of affordable immersive virtual reality (VR) systems, there is now a growing market for consumer content. The current form of consumer systems is not dissimilar to the lab-based VR systems of the past 30 years: the primary input mechanism is a head-tracked display and one or two tracked hands with buttons and joysticks on hand-held controllers. Over those 30 years, a very diverse academic literature has emerged that covers design and ergonomics of 3D user interfaces (3DUIs). However, the growing consumer market has engaged a very broad range of creatives that have built a very diverse set of designs. Sometimes these designs adopt findings from the academic literature, but other times they experiment with completely novel or counter-intuitive mechanisms. In this paper and its online adjunct, we report on novel 3DUI design patterns that are interesting from both design and research perspectives: they are highly novel, potentially broadly re-usable and/or suggest interesting avenues for evaluation. The supplemental material, which is a living document, is a crowd-sourced repository of interesting patterns. This paper is a curated snapshot of those patterns that were considered to be the most fruitful for further elaboration.
10
Johnson S, Orban D, Runesha HB, Meng L, Juhnke B, Erdman A, Samsel F, Keefe DF. Bento Box: An Interactive and Zoomable Small Multiples Technique for Visualizing 4D Simulation Ensembles in Virtual Reality. Front Robot AI 2019; 6:61. PMID: 33501076; PMCID: PMC7805880; DOI: 10.3389/frobt.2019.00061.
Abstract
We present Bento Box, a virtual reality data visualization technique and bimanual 3D user interface for exploratory analysis of 4D data ensembles. Bento Box helps scientists and engineers make detailed comparative judgments about multiple time-varying data instances that make up a data ensemble (e.g., a group of 10 parameterized simulation runs). The approach is to present an organized set of complementary volume visualizations juxtaposed in a grid arrangement, where each column visualizes a single data instance and each row provides a new view of the volume from a different perspective and/or scale. A novel bimanual interface enables users to select a sub-volume of interest to create a new row on-the-fly, scrub through time, and quickly navigate through the resulting virtual "bento box." The technique is evaluated through a real-world case study, supporting a team of medical device engineers and computational scientists using in-silico testing (supercomputer simulations) to redesign cardiac leads. The engineers confirmed hypotheses and developed new insights using a Bento Box visualization. An evaluation of the technical performance demonstrates that the proposed combination of data sampling strategies and clipped volume rendering is successful in displaying a juxtaposed visualization of fluid-structure-interaction simulation data (39 GB of raw data) at interactive VR frame rates.
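The grid logic can be pictured as a small data structure: columns are ensemble members and rows are sub-volume views applied uniformly across all columns. The sketch below illustrates that layout only; it is not the Bento Box implementation, and all names are hypothetical.

```python
# Illustration of the layout, not the Bento Box implementation: every
# (run, view) pair corresponds to one volume rendering in the grid.

from dataclasses import dataclass, field

@dataclass
class View:
    center: tuple   # sub-volume of interest (world coordinates)
    radius: float   # extent of the clipped region
    scale: float    # zoom applied in every cell of this row

@dataclass
class BentoGrid:
    runs: list                                  # columns: ensemble members
    views: list = field(default_factory=list)   # rows, added on the fly

    def add_row(self, center, radius, scale=1.0):
        self.views.append(View(center, radius, scale))

    def cells(self):
        """Yield one (run, view) pair per grid cell."""
        for view in self.views:
            for run in self.runs:
                yield run, view

# Example: a 10-run ensemble with one zoomed sub-volume row.
grid = BentoGrid(runs=[f"sim_{i}" for i in range(10)])
grid.add_row(center=(0.2, 0.5, 0.1), radius=0.05, scale=4.0)
print(sum(1 for _ in grid.cells()))  # 10 cells in the new row
```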
Affiliation(s)
- Seth Johnson
- Interactive Visualization Lab, Department of Computer Science, University of Minnesota, Minneapolis, MN, United States
- Daniel Orban
- Interactive Visualization Lab, Department of Computer Science, University of Minnesota, Minneapolis, MN, United States
- Hakizumwami Birali Runesha
- Research Computing Center, University of Chicago, Chicago, IL, United States
- Lingyu Meng
- Research Computing Center, University of Chicago, Chicago, IL, United States
- Bethany Juhnke
- Department of Mechanical Engineering, Earl E. Bakken Medical Devices Center, University of Minnesota, Minneapolis, MN, United States
- Arthur Erdman
- Department of Mechanical Engineering, Earl E. Bakken Medical Devices Center, University of Minnesota, Minneapolis, MN, United States
- Francesca Samsel
- Texas Advanced Computing Center, University of Texas, Austin, TX, United States
- Daniel F Keefe
- Interactive Visualization Lab, Department of Computer Science, University of Minnesota, Minneapolis, MN, United States