1. Dagnino G, Kundrat D, Moreira P, Wurdemann HA, Abayazid M. Editorial: Translational research in medical robotics-challenges and opportunities. Front Robot AI 2023; 10:1270823. PMID: 37860632; PMCID: PMC10582951; DOI: 10.3389/frobt.2023.1270823.
Affiliations:
- Giulio Dagnino: Robotics and Mechatronics, University of Twente, Enschede, Netherlands
- Dennis Kundrat: Robotics and Mechatronics, University of Twente, Enschede, Netherlands
- Pedro Moreira: Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, United States
- Helge Arne Wurdemann: Department of Mechanical Engineering, University College London, London, United Kingdom
- Momen Abayazid: Robotics and Mechatronics, University of Twente, Enschede, Netherlands
2. Scherf L, Schmidt A, Pal S, Koert D. Interactively learning behavior trees from imperfect human demonstrations. Front Robot AI 2023; 10:1152595. PMID: 37501742; PMCID: PMC10368948; DOI: 10.3389/frobt.2023.1152595.
Abstract
Introduction: In Interactive Task Learning (ITL), an agent learns a new task through natural interaction with a human instructor. Behavior Trees (BTs) offer a reactive, modular, and interpretable way of encoding task descriptions but have so far seen little use in robotic ITL settings. Most existing approaches that learn a BT from human demonstrations require the user to specify each action step by step, or do not allow a learned BT to be adapted without repeating the entire teaching process from scratch. Method: We propose a new framework that learns a BT directly from only a few human task demonstrations recorded as RGB-D video streams. We automatically extract continuous pre- and post-conditions for BT action nodes from visual features and use a Backchaining approach to build a reactive BT. In a user study on how non-experts provide and vary demonstrations, we identify three common failure cases of a BT learned from potentially imperfect initial human demonstrations. We offer a way to interactively resolve these failure cases by refining the existing BT through interaction with a user over a web interface. Specifically, failure cases or unknown states are detected automatically during execution of a learned BT, and the initial BT is adjusted or extended according to the provided user input. Evaluation and results: We evaluate our approach on a robotic trash-disposal task with 20 human participants and demonstrate that our method can learn reactive BTs from only a few human demonstrations and interactively resolve possible failure cases at runtime.
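The Backchaining construction mentioned in this abstract lends itself to a compact illustration. The sketch below is our own simplified model, not the paper's implementation: actions are described by boolean pre- and post-conditions, and the tree is grown backward from the goal, wrapping each condition in a fallback that either confirms the condition already holds or runs the action that achieves it, after recursively achieving that action's own pre-conditions.

```python
# Minimal sketch of Backchaining for building a reactive Behavior Tree.
# Simplified illustration (ours, not the paper's code): each action has
# boolean pre- and post-conditions; nodes are nested tuples of the forms
# ("cond", fact), ("act", name), ("fallback", [...]), ("sequence", [...]).

def backchain(goal, actions):
    """Return a BT that achieves `goal` by working backward from it."""
    for act in actions:
        if goal in act["post"]:
            # Recursively achieve the action's pre-conditions, then act.
            seq = [backchain(p, actions) for p in act["pre"]]
            seq.append(("act", act["name"]))
            # Fallback: skip the work if the goal already holds.
            return ("fallback", [("cond", goal), ("sequence", seq)])
    return ("cond", goal)  # no action achieves it: leave as a plain check

# Toy domain loosely inspired by the trash-disposal task in the abstract.
actions = [
    {"name": "grasp_trash", "pre": ["trash_visible"], "post": ["holding_trash"]},
    {"name": "move_to_bin", "pre": [], "post": ["at_bin"]},
    {"name": "drop_in_bin", "pre": ["holding_trash", "at_bin"],
     "post": ["trash_disposed"]},
]
tree = backchain("trash_disposed", actions)
```

Re-evaluating the root of such a tree on every tick yields the reactivity the abstract describes: if a condition unexpectedly becomes false during execution, the matching fallback re-triggers the action that restores it.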
Affiliations:
- Lisa Scherf: Interactive AI & Cognitive Models for Human-AI Interaction (IKIDA), Technische Universität Darmstadt, Darmstadt, Germany; Centre of Cognitive Science, Technische Universität Darmstadt, Darmstadt, Germany
- Aljoscha Schmidt: Interactive AI & Cognitive Models for Human-AI Interaction (IKIDA), Technische Universität Darmstadt, Darmstadt, Germany
- Suman Pal: Telekinesis, Intelligent Autonomous Systems Group, Department of Computer Science, Technische Universität Darmstadt, Darmstadt, Germany
- Dorothea Koert: Interactive AI & Cognitive Models for Human-AI Interaction (IKIDA), Technische Universität Darmstadt, Darmstadt, Germany; Centre of Cognitive Science, Technische Universität Darmstadt, Darmstadt, Germany
3. Constantin A, Atkinson M, Bernabeu MO, Buckmaster F, Dhillon B, McTrusty A, Strang N, Williams R. Optometrists' Perspectives Regarding Artificial Intelligence Aids and Contributing Retinal Images to a Repository: Web-Based Interview Study. JMIR Hum Factors 2023; 10:e40887. PMID: 37227761; DOI: 10.2196/40887.
Abstract
BACKGROUND A repository of retinal images for research is being established in Scotland. It will permit researchers to validate, tune, and refine artificial intelligence (AI) decision-support algorithms to accelerate their safe deployment in Scottish optometry and beyond. Research demonstrates the potential of AI systems in optometry and ophthalmology, though they are not yet widely adopted. OBJECTIVE In this study, 18 optometrists were interviewed to (1) identify their expectations and concerns about the national image research repository and their use of AI decision support and (2) gather their suggestions for improving eye health care. The goal was to clarify attitudes among optometrists delivering primary eye care toward contributing their patients' images and toward using AI assistance; these attitudes are less well studied in primary care contexts. Five ophthalmologists were also interviewed to explore their interactions with optometrists. METHODS Between March and August 2021, 23 semistructured interviews were conducted online, each lasting 30-60 minutes. Transcribed and pseudonymized recordings were analyzed using thematic analysis. RESULTS All optometrists supported contributing retinal images to form an extensive and long-running research repository. Our main findings are as follows. Optometrists were willing to share images of their patients' eyes but expressed concern about technical difficulties, lack of standardization, and the effort involved. Those interviewed thought that sharing digital images would improve collaboration between optometrists and ophthalmologists, for example during referral to secondary health care. Optometrists welcomed an expanded primary care role in the diagnosis and management of diseases by exploiting new technologies, and anticipated significant health benefits. They welcomed AI assistance but insisted that it should not reduce their role and responsibilities.
CONCLUSIONS Our investigation focusing on optometrists is novel because most similar studies of AI assistance were performed in hospital settings. Our findings are consistent with those of studies of professionals in ophthalmology and other medical disciplines, showing near-universal willingness to use AI to improve health care alongside concerns over training, costs, responsibilities, skill retention, data sharing, and disruptions to professional practices. Our study of optometrists' willingness to contribute images to a research repository adds a new aspect: they hope that a digital image-sharing infrastructure will facilitate service integration.
Affiliations:
- Aurora Constantin: School of Informatics, University of Edinburgh, Edinburgh, United Kingdom
- Malcolm Atkinson: School of Informatics, University of Edinburgh, Edinburgh, United Kingdom
- Miguel Oscar Bernabeu: Centre for Medical Informatics, Usher Institute, University of Edinburgh, Edinburgh, United Kingdom
- Fiona Buckmaster: Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom
- Baljean Dhillon: Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom
- Alice McTrusty: Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom
- Niall Strang: Department of Vision Sciences, Glasgow Caledonian University, Glasgow, United Kingdom
- Robin Williams: School of Social and Political Science, University of Edinburgh, Edinburgh, United Kingdom
4. Chirico Scheele S, Hartmann C, Siegrist M, Binks M, Egan PF. Consumer Assessment of 3D-Printed Food Shape, Taste, and Fidelity Using Chocolate and Marzipan Materials. 3D Print Addit Manuf 2022; 9:473-482. PMID: 36660745; PMCID: PMC9831564; DOI: 10.1089/3dp.2020.0271.
Abstract
Additive manufacturing enables the production of complex structures, and emerging approaches show great promise in the food industry for design customization. Three-dimensional food printing offers consumers benefits in personalized health and shape fabrication. Past studies have demonstrated positive consumer perceptions of 3D food printing, but the technology still needs consumer validation through the consumption and rating of fabricated 3D-printed foods. This article measures consumer responses regarding shape, taste, and fidelity for 3D-printed food designs. Participants (N = 28) were presented with a series of designs differing in shape complexity and ingredients (marzipan and chocolate) and provided ratings on a visual analog scale (100 mm line). Shapes with higher complexity were preferred (8.8 ± 0.3) over lower-complexity shapes (5.5 ± 0.4). Taste preference depended primarily on material selection, with chocolate (8.2 ± 0.5) preferred over marzipan (6.0 ± 0.5). Participants also preferred 3D-printed shapes that recreated their computer-aided design (CAD) with high fidelity (7.3 ± 0.3) over low-fidelity prints (5.5 ± 0.5). These findings provide first measurements of 3D food printing from a consumer perspective and a foundation for future studies on personalized manufacturing and nutrition.
Affiliations:
- Christina Hartmann: Department of Environmental Systems Science, ETH Zürich, Zürich, Switzerland
- Michael Siegrist: Department of Health Sciences and Technology, ETH Zürich, Zürich, Switzerland
- Martin Binks: Department of Nutritional Sciences, Texas Tech University, Lubbock, Texas, USA
- Paul F. Egan: Department of Mechanical Engineering, Texas Tech University, Lubbock, Texas, USA
5. Schultheiß S, Lewandowski D. Data set of a representative online survey on search engines with a focus on search engine optimization (SEO): a cross-sectional study. F1000Res 2022; 11:376. PMID: 36250002; PMCID: PMC9551386; DOI: 10.12688/f1000research.109662.2.
Abstract
Representative online surveys are a fruitful approach to gaining a better understanding of users' knowledge and perceptions of search engines. In 2020, we conducted an online survey with a sample representative of the German online population aged 16 through 69 (N = 2,012). The survey comprised 12 search engine-related sections, covering topics such as usage behavior, self-assessed search engine literacy, trust in search engines, knowledge of ads and search engine optimization (SEO), the ability to distinguish ads from organic results, assessments and opinions regarding SEO, and personalization of search results. SEO is the specific focus of the survey, which was conducted as part of the SEO Effect project, dealing with issues such as the role of SEO from the user perspective. This data set contains the complete data from the online survey; it will allow further analyses as well as comparisons with follow-up studies.
Affiliations:
- Sebastian Schultheiß: Department of Information, Hamburg University of Applied Sciences, Hamburg, 22081, Germany
- Dirk Lewandowski: Department of Information, Hamburg University of Applied Sciences, Hamburg, 22081, Germany
6. Zhang Z, Joy K, Harris R, Ozkaynak M, Adelgais K, Munjal K. Applications and User Perceptions of Smart Glasses in Emergency Medical Services: Semistructured Interview Study. JMIR Hum Factors 2022; 9:e30883. PMID: 35225816; PMCID: PMC8922155; DOI: 10.2196/30883.
Abstract
Background Smart glasses have been gaining momentum as a novel technology because of their advantages in enabling hands-free operation and see-what-I-see remote consultation. Researchers have primarily evaluated this technology in hospital settings; however, limited research has investigated its application in prehospital operations. Objective The aim of this study is to understand the potential of smart glasses to support the work practices of prehospital providers, such as emergency medical services (EMS) personnel. Methods We conducted semistructured interviews with 13 EMS providers recruited from 4 hospital-based EMS agencies in an urban area in the east coast region of the United States. The interview questions covered EMS workflow, challenges encountered, technology needs, and users’ perceptions of smart glasses in supporting daily EMS work. During the interviews, we demonstrated a system prototype to elicit more accurate and comprehensive insights regarding smart glasses. Interviews were transcribed verbatim and analyzed using the open coding technique. Results We identified four potential application areas for smart glasses in EMS: enhancing teleconsultation between distributed prehospital and hospital providers, semiautomating patient data collection and documentation in real time, supporting decision-making and situation awareness, and augmenting quality assurance and training. Compared with the built-in touch pad, voice commands and hand gestures were indicated as the most preferred and suitable interaction mechanisms. EMS providers expressed positive attitudes toward using smart glasses during prehospital encounters. However, several potential barriers and user concerns need to be considered and addressed before implementing and deploying smart glasses in EMS practice. They are related to hardware limitations, human factors, reliability, workflow, interoperability, and privacy. 
Conclusions Smart glasses can be a suitable technological means for supporting EMS work. We conclude this paper by discussing several design considerations for realizing the full potential of this hands-free technology.
Affiliations:
- Zhan Zhang: School of Computer Science and Information Systems, Pace University, New York, NY, United States
- Karen Joy: School of Computer Science and Information Systems, Pace University, New York, NY, United States
- Richard Harris: School of Computer Science and Information Systems, Pace University, New York, NY, United States
- Mustafa Ozkaynak: College of Nursing, University of Colorado, Aurora, CO, United States
- Kathleen Adelgais: School of Medicine, University of Colorado, Aurora, CO, United States
- Kevin Munjal: Department of Emergency Medicine, Mount Sinai Medical Center, New York, NY, United States
7. Gantenbein J, Dittli J, Meyer JT, Gassert R, Lambercy O. Intention Detection Strategies for Robotic Upper-Limb Orthoses: A Scoping Review Considering Usability, Daily Life Application, and User Evaluation. Front Neurorobot 2022; 16:815693. PMID: 35264940; PMCID: PMC8900616; DOI: 10.3389/fnbot.2022.815693.
Abstract
Wearable robotic upper-limb orthoses (ULO) are promising tools to assist or enhance the upper-limb function of their users. While the functionality of these devices has continuously increased, robust and reliable detection of the user's intention to control the available degrees of freedom remains a major challenge and a barrier to acceptance. As the information interface between device and user, the intention detection strategy (IDS) has a crucial impact on the usability of the overall device. Yet this aspect, and its impact on device usability, is only rarely evaluated with respect to the context of use of ULO. A scoping literature review was conducted to identify non-invasive IDS applied to ULO that have been evaluated with human participants, with a specific focus on evaluation methods and findings related to functionality and usability, and on their appropriateness for specific contexts of use in daily life. A total of 93 studies were identified, describing 29 different IDS, which are summarized and classified according to a four-level classification scheme. The predominant user input signal was electromyography (35.6%), followed by manual triggers such as buttons, touchscreens, or joysticks (16.7%), and isometric force generated by residual movement in upper-limb segments (15.1%). We identify and discuss the strengths and weaknesses of IDS with respect to specific contexts of use and highlight a trade-off between performance and complexity in selecting an optimal IDS. Examining how the usability of IDS is evaluated, the included studies revealed that primarily objective and quantitative usability attributes related to effectiveness or efficiency were assessed. The review also underlined the lack of a systematic way to determine whether the usability of an IDS is sufficiently high for daily life applications. This work highlights the importance of a user- and application-specific selection and evaluation of non-invasive IDS for ULO. For technology developers in the field, it provides recommendations on the selection of IDS and on the design of corresponding evaluation protocols.
Affiliations:
- Jessica Gantenbein: Rehabilitation Engineering Laboratory, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
- Jan Dittli: Rehabilitation Engineering Laboratory, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
- Jan Thomas Meyer: Rehabilitation Engineering Laboratory, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
- Roger Gassert: Rehabilitation Engineering Laboratory, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland; Future Health Technologies, Singapore-ETH Centre, Campus for Research Excellence and Technological Enterprise (CREATE), Singapore, Singapore
- Olivier Lambercy: Rehabilitation Engineering Laboratory, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland; Future Health Technologies, Singapore-ETH Centre, Campus for Research Excellence and Technological Enterprise (CREATE), Singapore, Singapore
8. Daronnat S, Azzopardi L, Halvey M, Dubiel M. Inferring Trust From Users' Behaviours; Agents' Predictability Positively Affects Trust, Task Performance and Cognitive Load in Human-Agent Real-Time Collaboration. Front Robot AI 2021; 8:642201. PMID: 34307467; PMCID: PMC8295498; DOI: 10.3389/frobt.2021.642201.
Abstract
Collaborative virtual agents help human operators perform tasks in real time. For this collaboration to be effective, human operators must appropriately trust the agent(s) they are interacting with. Multiple factors influence trust, such as the context of interaction, prior experience with automated systems, and the quality of the help offered by agents in terms of its transparency and performance. Most of the literature on trust in automation identifies the performance of the agent as a key factor influencing trust. However, other work has shown that the agent's behavior, the type of its errors, and the predictability of its actions can influence the likelihood of the user's reliance on the agent and the efficiency of task completion. Our work focuses on how an agent's predictability affects cognitive load, performance, and users' trust in a real-time human-agent collaborative task. We used an interactive aiming task in which participants collaborated with agents that varied in predictability and performance. The setup captured behavioral information (such as task performance and reliance on the agent) as well as standardized survey instruments to estimate participants' reported trust in the agent, cognitive load, and perceived task difficulty. Thirty participants took part in our lab-based study. Our results showed that agents with more predictable behaviors had a more positive impact on task performance, reliance, and trust while reducing cognitive workload. In addition, we investigated the human-agent trust relationship by creating models that could predict participants' trust ratings from interaction data. We found that we could reliably estimate participants' reported trust in the agents using information related to performance, task difficulty, and reliance. This study provides insights into the behavioral factors that are most meaningful for anticipating complacent or distrusting attitudes toward automation. With this work, we seek to pave the way for trust-aware agents capable of responding more appropriately to users by monitoring the components of the human-agent relationship that are most salient for trust calibration.
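The trust-modeling step described in this abstract (predicting reported trust from interaction data) can be illustrated with a deliberately simple sketch. The data, feature set, and linear model below are our assumptions for illustration, not the study's actual model or results: trust ratings are regressed on performance, task difficulty, and reliance features via ordinary least squares.

```python
import numpy as np

# Illustrative sketch (not the study's model or data): estimate reported
# trust from interaction features with ordinary least squares.
rng = np.random.default_rng(0)

# Synthetic interaction logs: columns = task performance, task
# difficulty, and reliance (fraction of trials the user accepted the
# agent's help). Effect directions are assumed for the demo.
X = rng.uniform(0.0, 1.0, size=(200, 3))
true_w = np.array([0.6, -0.3, 0.5])
trust = X @ true_w + 0.05 * rng.standard_normal(200)

# Fit weights plus a bias term.
A = np.hstack([X, np.ones((200, 1))])
w, *_ = np.linalg.lstsq(A, trust, rcond=None)

# Goodness of fit on the training data.
pred = A @ w
r2 = 1 - np.sum((trust - pred) ** 2) / np.sum((trust - trust.mean()) ** 2)
```

Even this toy model shows the mechanism the abstract relies on: if trust ratings are systematically related to logged behavior, a model fitted on those logs can anticipate complacent (over-trusting) or distrusting users without asking them directly.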
Affiliations:
- Sylvain Daronnat: Department of Computer and Information Sciences, University of Strathclyde, Glasgow, United Kingdom
- Leif Azzopardi: Department of Computer and Information Sciences, University of Strathclyde, Glasgow, United Kingdom
- Martin Halvey: Department of Computer and Information Sciences, University of Strathclyde, Glasgow, United Kingdom
- Mateusz Dubiel: Department of Computer and Information Sciences, University of Strathclyde, Glasgow, United Kingdom
9. Koert D, Kircher M, Salikutluk V, D'Eramo C, Peters J. Multi-Channel Interactive Reinforcement Learning for Sequential Tasks. Front Robot AI 2021; 7:97. PMID: 33501264; PMCID: PMC7805623; DOI: 10.3389/frobt.2020.00097.
Abstract
The ability to learn new tasks by sequencing already known skills is an important requirement for future robots. Reinforcement learning is a powerful tool for this, as it allows a robot to learn and improve how to combine skills for sequential tasks. However, in real robotic applications, the cost of sample collection and exploration prevents the application of reinforcement learning to a variety of tasks. To overcome these limitations, human input during reinforcement learning can be beneficial to speed up learning, guide exploration, and prevent the choice of disastrous actions. Nevertheless, there is a lack of experimental evaluations of multi-channel interactive reinforcement learning systems solving robotic tasks with input from inexperienced human users, in particular for cases where human input might be partially wrong. Therefore, in this paper, we present an approach that incorporates multiple human input channels for interactive reinforcement learning in a unified framework and evaluate it on two robotic tasks with 20 inexperienced human subjects. To enable the robot to also handle potentially incorrect human input, we incorporate a novel concept of self-confidence, which allows the robot to question human input after an initial learning phase. The second robotic task is specifically designed to investigate whether this self-confidence can enable the robot to achieve learning progress even if the human input is partially incorrect. Further, we evaluate how humans react to the robot's suggestions once it notices that human input might be wrong. Our experimental evaluations show that our approach can successfully incorporate human input to accelerate the learning process in both robotic tasks even if it is partially wrong. However, not all humans were willing to accept the robot's suggestions or its questioning of their input, particularly if they did not understand the learning process and the reasons behind the robot's suggestions. We believe that these findings can inform the future design of algorithms and interfaces for interactive reinforcement learning systems used by inexperienced users.
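The interplay of human advice and self-confidence described above can be sketched in a few lines. This is our own minimal illustration under assumed details (a one-dimensional world, a visit-count-based confidence, a simulated instructor that is right 80% of the time), not the paper's algorithm: advice steers action selection only while the agent's confidence in a state is still low; afterwards the agent relies on its own Q-values.

```python
import random

# Minimal sketch (our illustration, not the paper's method) of
# interactive Q-learning with a self-confidence gate on human advice.
random.seed(0)
N, GOAL = 5, 4              # line world: states 0..4, reach state 4
ACTIONS = [-1, +1]
q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
visits = {s: 0 for s in range(N)}

def confidence(s, k=10):
    """Self-confidence grows with experience in a state."""
    return visits[s] / (visits[s] + k)

def human_advice(s):
    """Simulated instructor: usually right (+1), sometimes wrong."""
    return +1 if random.random() < 0.8 else -1

def choose(s, eps=0.1, conf_thresh=0.6):
    # Follow advice only while the agent is not yet confident here.
    if confidence(s) < conf_thresh:
        return human_advice(s)
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(s, a)])

for _ in range(300):                      # episodes
    s = 0
    for _ in range(20):                   # steps per episode
        a = choose(s)
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == GOAL else -0.01
        q[(s, a)] += 0.5 * (r + 0.9 * max(q[(s2, b)] for b in ACTIONS)
                            - q[(s, a)])
        visits[s] += 1
        s = s2
        if s == GOAL:
            break

policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N)}
```

Once confidence passes the threshold, partially wrong advice no longer drives behavior, which mirrors the abstract's point that self-confidence lets learning progress despite imperfect human input.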
Affiliations:
- Dorothea Koert: Intelligent Autonomous Systems Group, Department of Computer Science, Technische Universität Darmstadt, Darmstadt, Germany; Center for Cognitive Science, Technische Universität Darmstadt, Darmstadt, Germany
- Maximilian Kircher: Intelligent Autonomous Systems Group, Department of Computer Science, Technische Universität Darmstadt, Darmstadt, Germany
- Vildan Salikutluk: Center for Cognitive Science, Technische Universität Darmstadt, Darmstadt, Germany; Models of Higher Cognition Group, Department of Psychology, Technische Universität Darmstadt, Darmstadt, Germany
- Carlo D'Eramo: Intelligent Autonomous Systems Group, Department of Computer Science, Technische Universität Darmstadt, Darmstadt, Germany
- Jan Peters: Intelligent Autonomous Systems Group, Department of Computer Science, Technische Universität Darmstadt, Darmstadt, Germany; Robot Learning Group, Max Planck Institute for Intelligent Systems, Tübingen, Germany
10. Piumsomboon T, Dey A, Ens B, Lee G, Billinghurst M. The Effects of Sharing Awareness Cues in Collaborative Mixed Reality. Front Robot AI 2019; 6:5. PMID: 33501022; PMCID: PMC7805624; DOI: 10.3389/frobt.2019.00005.
Abstract
Augmented and Virtual Reality provide unique capabilities for Mixed Reality collaboration. This paper explores how different combinations of virtual awareness cues can provide users with valuable information about their collaborator's attention and actions. In a user study (n = 32, 16 pairs), we compared different combinations of three cues: Field-of-View (FoV) frustum, Eye-gaze ray, and Head-gaze ray against a baseline condition showing only virtual representations of each collaborator's head and hands. Through a collaborative object finding and placing task, the results showed that awareness cues significantly improved user performance, usability, and subjective preferences, with the combination of the FoV frustum and the Head-gaze ray being best. This work establishes the feasibility of room-scale MR collaboration and the utility of providing virtual awareness cues.
Affiliations:
- Thammathip Piumsomboon: Empathic Computing Laboratory, University of South Australia, Mawson Lakes, SA, Australia; School of Product Design, University of Canterbury, Christchurch, New Zealand
- Arindam Dey: Empathic Computing Laboratory, University of South Australia, Mawson Lakes, SA, Australia; Co-Innovation Group, University of Queensland, Brisbane, QLD, Australia
- Barrett Ens: Empathic Computing Laboratory, University of South Australia, Mawson Lakes, SA, Australia; Immersive Analytics Lab, Monash University, Melbourne, VIC, Australia
- Gun Lee: Empathic Computing Laboratory, University of South Australia, Mawson Lakes, SA, Australia
- Mark Billinghurst: Empathic Computing Laboratory, University of South Australia, Mawson Lakes, SA, Australia
11. Verhulst A, Normand JM, Lombart C, Sugimoto M, Moreau G. Influence of Being Embodied in an Obese Virtual Body on Shopping Behavior and Products Perception in VR. Front Robot AI 2018; 5:113. PMID: 33500992; PMCID: PMC7806053; DOI: 10.3389/frobt.2018.00113.
Abstract
Research in Virtual Reality (VR) has shown that embodiment can influence participants' perceptions and behavior when they are embodied in a different yet plausible virtual body. In this paper, we study the effect an obese virtual body has on product perception (e.g., taste) and purchase behavior (e.g., number of products purchased) in an immersive virtual retail store. Participants (of a normal BMI on average) were embodied in a normal (N) or an obese (OB) virtual body and were asked to buy and evaluate food products in the immersive virtual store. Based on stereotypes classically associated with obese people, we expected that the group embodied in obese avatars would show a more unhealthy diet (i.e., buy more food products, and more products high in energy or saturated fat) and would rate unhealthy food as tastier and healthier than participants embodied in "normal weight" avatars. Our participants also rated the perception of their virtual body: the OB group perceived their virtual body as significantly heavier and older. They further rated their sense of embodiment and presence within the immersive virtual store; these measures did not show any significant difference between groups. Finally, we asked them to rate different food products in terms of tastiness, healthiness, sustainability, and price. The only difference we noticed is that participants embodied in an obese avatar (OB group) rated the coke as significantly tastier and the apple as significantly healthier. Nevertheless, while we hypothesized that participants embodied in a virtual body with obesity would show differences in their shopping patterns (e.g., more "unhealthy" products bought), there were no significant differences between the groups. Stereotype activation failed for our participants embodied in obese avatars, who did not exhibit shopping behavior following the (negative) stereotypes related to obese people. Conversely, while the opposite hypothesis (that participants embodied in obese avatars would buy significantly more healthy products in order to "transform" their virtual bodies) could also have been made, it was not supported either. We discuss these results and propose hypotheses as to why the behavior of the manipulated group differed from the one we expected. Unlike previous research, our participants were embodied in virtual avatars that differed greatly from their real bodies, and an obese avatar modifies more than visual characteristics such as hair or skin color. We hypothesize that an obese virtual body may require additional non-visual stimuli, e.g., the sensation of the extra weight or of the change in body size; this difference could explain why we did not notice any important modification of participants' behavior and perception of food products. We also hypothesize that the absence of stereotype activation, and thus of a statistical difference between our N and OB groups, might be due to higher-level cognitive processes involved while purchasing food products: our participants might have rejected their virtual bodies when performing the shopping task (even though the embodiment and presence ratings did not show significant differences) and purchased products based on their real (non-obese) bodies. This could mean that stereotype activation is more complex than previously thought.
Collapse
Affiliation(s)
- Adrien Verhulst
- CRENAU, AAU UMR CNRS 1563, Computer Science and Mathematics Department, École Centrale de Nantes, Nantes, France
| | - Jean-Marie Normand
- CRENAU, AAU UMR CNRS 1563, Computer Science and Mathematics Department, École Centrale de Nantes, Nantes, France; Hybrid, Inria, Rennes, France
| | - Cindy Lombart
- In Situ Lab, Marketing Department, Audencia Business School, Nantes, France
| | - Maki Sugimoto
- Interactive Media Lab, Department of Information and Computer Science, Faculty of Science and Technology, Keio University, Kanagawa, Japan
| | - Guillaume Moreau
- CRENAU, AAU UMR CNRS 1563, Computer Science and Mathematics Department, École Centrale de Nantes, Nantes, France; Hybrid, Inria, Rennes, France
| |
Collapse
|
12
|
Dey A, Billinghurst M, Lindeman RW, Swan JE. A Systematic Review of 10 Years of Augmented Reality Usability Studies: 2005 to 2014. Front Robot AI 2018; 5:37. [PMID: 33500923 PMCID: PMC7805955 DOI: 10.3389/frobt.2018.00037] [Citation(s) in RCA: 62] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2017] [Accepted: 03/19/2018] [Indexed: 11/13/2022] Open
Abstract
Augmented Reality (AR) interfaces have been studied extensively over the last few decades, with a growing number of user-based experiments. In this paper, we systematically review 10 years of the most influential AR user studies, from 2005 to 2014. A total of 291 papers with 369 individual user studies have been reviewed and classified based on their application areas. The primary contribution of the review is to present the broad landscape of user-based AR research, and to provide a high-level view of how that landscape has changed. We summarize the high-level contributions from each category of papers, and present examples of the most influential user studies. We also identify areas where there have been few user studies, and opportunities for future research. Among other things, we find that there is a growing trend toward handheld AR user studies, and that most studies are conducted in laboratory settings and do not involve pilot testing. This research will be useful for AR researchers who want to follow best practices in designing their own AR user studies.
Collapse
Affiliation(s)
- Arindam Dey
- Empathic Computing Laboratory, University of South Australia, Mawson Lakes, SA, Australia
| | - Mark Billinghurst
- Empathic Computing Laboratory, University of South Australia, Mawson Lakes, SA, Australia
| | - Robert W Lindeman
- Human Interface Technology Lab New Zealand (HIT Lab NZ), University of Canterbury, Christchurch, New Zealand
| | - J Edward Swan
- Mississippi State University, Starkville, MS, United States
| |
Collapse
|
13
|
Ho BJ, Nikzad N, Balaji B, Srivastava M. Emu: Engagement Modeling for User Studies. Proc ACM Int Conf Ubiquitous Comput 2017; 2017:959-964. [PMID: 29629432 PMCID: PMC5889142 DOI: 10.1145/3123024.3124568] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Mobile technologies that drive just-in-time ecological momentary assessments and interventions provide an unprecedented view into user behaviors and opportunities to manage chronic conditions. The success of these methods relies on engaging the user at the appropriate moment, so as to maximize questionnaire and task completion rates. However, mobile operating systems provide little support to precisely specify the contextual conditions in which to notify and engage the user, and study designers often lack the expertise to build context-aware software themselves. To address this problem, we have developed Emu, a framework that eases the development of context-aware study applications by providing a concise and powerful interface for specifying temporal and contextual constraints for task notifications. In this paper we present the design of the Emu API and demonstrate its use in capturing a range of scenarios common to smartphone-based study applications.
Collapse
Affiliation(s)
- Bo-Jhang Ho
- University of California, Los Angeles, Los Angeles, CA 90095, USA
| | - Nima Nikzad
- Scripps Translational Science Institute, La Jolla, CA 92037, USA
| | - Bharathan Balaji
- University of California, Los Angeles, Los Angeles, CA 90095, USA
| | - Mani Srivastava
- University of California, Los Angeles, Los Angeles, CA 90095, USA
| |
Collapse
|
14
|
Berendt B, Preibusch S. Toward Accountable Discrimination-Aware Data Mining: The Importance of Keeping the Human in the Loop-and Under the Looking Glass. Big Data 2017; 5:135-152. [PMID: 28586238 DOI: 10.1089/big.2016.0055] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
"Big Data" and data-mined inferences are affecting more and more of our lives, and concerns about their possible discriminatory effects are growing. Methods for discrimination-aware data mining and fairness-aware data mining aim at keeping decision processes supported by information technology free from unjust grounds. However, these formal approaches alone are not sufficient to solve the problem. In the present article, we describe reasons why discrimination with data can and typically does arise through the combined effects of human and machine-based reasoning, and argue that this requires a deeper understanding of the human side of decision-making with data mining. We describe results from a large-scale human-subjects experiment that investigated such decision-making, analyzing the reasoning that participants reported during their task to assess whether a loan request should or would be granted. We derive data protection by design strategies for making decision-making discrimination-aware in an accountable way, grounding these requirements in the accountability principle of the European Union General Data Protection Regulation, and outline how their implementations can integrate algorithmic, behavioral, and user interface factors.
Collapse
Affiliation(s)
- Bettina Berendt
- Department of Computer Science, KU Leuven, Leuven, Belgium
| | | |
Collapse
|
15
|
Klasnja P, Hekler EB, Korinek EV, Harlow J, Mishra SR. Toward Usable Evidence: Optimizing Knowledge Accumulation in HCI Research on Health Behavior Change. Proc SIGCHI Conf Hum Factor Comput Syst 2017; 2017:3071-3082. [PMID: 30272059 DOI: 10.1145/3025453.3026013] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
Over the last ten years, HCI researchers have introduced a range of novel ways to support health behavior change, from glanceable displays to sophisticated game dynamics. Yet, this research has not had as much impact as its originality warrants. A key reason for this is that common forms of evaluation used in HCI make it difficult to effectively accumulate, and use, knowledge across research projects. This paper proposes a strategy for HCI research on behavior change that retains the field's focus on novel technical contributions while enabling the accumulation of evidence that can increase the impact of individual research projects, both in HCI and in the broader behavior-change science. The core of this strategy is an emphasis on the discovery of causal effects of individual components of behavior-change technologies, and of the precise ways in which those effects vary with individual differences, design choices, and the contexts in which those technologies are used.
Collapse
|
16
|
Brox E, Konstantinidis ST, Evertsen G. User-Centered Design of Serious Games for Older Adults Following 3 Years of Experience With Exergames for Seniors: A Study Design. JMIR Serious Games 2017; 5:e2. [PMID: 28077348 PMCID: PMC5266825 DOI: 10.2196/games.6254] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2016] [Revised: 11/30/2016] [Accepted: 11/30/2016] [Indexed: 11/30/2022] Open
Abstract
Background Seniors need sufficient balance and strength to manage in daily life, and sufficient physical activity is required to achieve and maintain these abilities. This can be a challenge, but fun and motivating exergames can help. However, most commercial games are not suited for this age group for several reasons. Many usability studies and user-centered design (UCD) protocols have been developed and applied, but to the best of our knowledge none of them focus on seniors' use of games for physical activity. In GameUp, a European co-funded project, prototype Kinect exergames to enhance the mobility of seniors were developed using a user-centered approach. Objective In this paper we aim to record the lessons learned in 3 years of experience with exergames for seniors, considering both the needs of older adults regarding user-centered development of exergames and their participation in UCD. We also provide a UCD protocol for exergames tailored to seniors' needs. Methods An initial UCD protocol was formed based on the literature on previous research outcomes. Senior users participated in UCD following the initial protocol. The users formed a steady group that met every second week for 3 years to play exergames and participate in the UCD during the 4 phases of the protocol. Several methods were applied in the 4 different phases of the UCD protocol; the most important were structured and semistructured interviews, observations, and group discussions. Results A total of 16 seniors with an average age above 80 years participated for 3 years in UCD in order to develop the GameUp exergames. As a result of the lessons learned by applying the different methodologies of the UCD protocol, we propose an adjusted UCD protocol with explanations of how it should be applied with seniors as users.
Questionnaires should be turned into semistructured and structured interviews while user consultation sessions should be repeated with the same theme to ensure that the UCD methods produce a valid outcome. By first following the initial and gradually the adjusted UCD protocol, the project resulted in exergame functionalities and interface features for seniors. Conclusions The main lessons learned during 3 years of experience with exergames for seniors applying UCD are that devoting time to seniors is a key element of success so that trust can be gained, communication can be established, and users’ opinions can be recorded. All different game elements should be taken into consideration during the design of exergames for seniors even if they seem obvious. Despite the limitations of this study, one might argue that it provides a best practice guide to the development of serious games for physical activity targeting seniors.
Collapse
Affiliation(s)
- Ellen Brox
- Norut Northern Research Institute, Tromsoe, Norway
| | - Stathis Th Konstantinidis
- Norut Northern Research Institute, Tromsoe, Norway; School of Health Sciences, The University of Nottingham, Nottingham, United Kingdom
| | | |
Collapse
|
17
|
Koscher A, Dittenberger S, Stainer-Hochgatterer A. ICT Inexperienced Elderlies: What Would Attract Elderlies to Use Items of Technology? Stud Health Technol Inform 2017; 242:72-75. [PMID: 28873779] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
This paper presents the findings of the first end-user research study with seniors who are not familiar with operating ICT devices, conducted as part of the EU Active and Assisted Living (AAL) research project Kith & Kin. This project aims at developing an ICT device for these seniors by building on their needs and real capabilities, encouraging communication and fostering social inclusion.
Collapse
|
18
|
Peck TC, Fuchs H, Whitton MC. Evaluation of reorientation techniques and distractors for walking in large virtual environments. IEEE Trans Vis Comput Graph 2009; 15:383-94. [PMID: 19282546 PMCID: PMC2844119 DOI: 10.1109/tvcg.2008.191] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
Virtual Environments (VEs) that use a real-walking locomotion interface have typically been restricted in size to the area of the tracked lab space. Techniques proposed to lift this size constraint, enabling real walking in VEs that are larger than the tracked lab space, all require reorientation techniques (ROTs) in the worst-case situation: when a user is close to walking out of the tracked space. We propose a new ROT using visual and audial distractors (objects in the VE that the user focuses on while the VE rotates) and compare our method to current ROTs through three user studies. ROTs using distractors were preferred and ranked as more natural by users. Our findings also suggest that improving visual realism and adding sound increased a user's feeling of presence. Users were also less aware of the rotating VE when ROTs with distractors were used.
Collapse
|