1
Gyrard A, Tabeau K, Fiorini L, Kung A, Senges E, De Mul M, Giuliani F, Lefebvre D, Hoshino H, Fabbricotti I, Sancarlo D, D’Onofrio G, Cavallo F, Guiot D, Arzoz-Fernandez E, Okabe Y, Tsukamoto M. Knowledge Engineering Framework for IoT Robotics Applied to Smart Healthcare and Emotional Well-Being. Int J Soc Robot 2023;15:445-472. [PMID: 34804257] [PMCID: PMC8594653] [DOI: 10.1007/s12369-021-00821-6]
Abstract
Social companion robots are receiving increasing attention as a way to help elderly people stay independent at home and to reduce their social isolation. When developing such solutions, one remaining challenge is to design the right applications, ones that are usable by elderly people. For this purpose, co-creation methodologies involving multiple stakeholders and a multidisciplinary researcher team (e.g., elderly people, medical professionals, and computer scientists such as roboticists or IoT engineers) were designed within the ACCRA (Agile Co-Creation of Robots for Ageing) project. This paper addresses the research question: How can Internet of Robotic Things (IoRT) technology and co-creation methodologies help to design emotion-based robotic applications? This work is supported by the ACCRA project, which develops advanced social robots to support active and healthy ageing, co-created by various stakeholders such as ageing people and physicians. We demonstrate this with three robots, Buddy, ASTRO, and RoboHon, used for daily life, mobility, and conversation. The three robots understand and convey emotions in real time using Internet of Things and Artificial Intelligence technologies (e.g., knowledge-based reasoning).
Affiliation(s)
- Kasia Tabeau
- Erasmus School of Health Policy and Management, Erasmus University Rotterdam, Rotterdam, The Netherlands
- Laura Fiorini
- Department of Industrial Engineering, University of Florence, Florence, Italy
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
- Eloise Senges
- Trialog, Paris, France
- Erasmus School of Health Policy and Management, Erasmus University Rotterdam, Rotterdam, The Netherlands
- Department of Industrial Engineering, University of Florence, Florence, Italy
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
- BlueFrog Robotics, Paris, France
- Université Paris-Dauphine, Paris, France
- Fondazione IRCCS Casa Sollievo della Sofferenza, San Giovanni Rotondo, Italy
- Kyoto University, Kyoto, Japan
- Kobe University, Kobe, Japan
- Marleen De Mul
- Erasmus School of Health Policy and Management, Erasmus University Rotterdam, Rotterdam, The Netherlands
- Francesco Giuliani
- Hiroshi Hoshino
- Isabelle Fabbricotti
- Erasmus School of Health Policy and Management, Erasmus University Rotterdam, Rotterdam, The Netherlands
- Daniele Sancarlo
- Fondazione IRCCS Casa Sollievo della Sofferenza, San Giovanni Rotondo, Italy
- Grazia D’Onofrio
- Fondazione IRCCS Casa Sollievo della Sofferenza, San Giovanni Rotondo, Italy
- Filippo Cavallo
- Department of Industrial Engineering, University of Florence, Florence, Italy
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
2
Emotion Detection for Social Robots Based on NLP Transformers and an Emotion Ontology. Sensors 2021;21(4):1322. [PMID: 33668412] [PMCID: PMC7917797] [DOI: 10.3390/s21041322]
Abstract
For social robots, knowledge of human emotional states is an essential part of adapting their behavior or associating emotions with other entities. Robots gather the information from which emotions are detected via different media, such as text, speech, images, or videos. The multimedia content is then processed to recognize emotions/sentiments, for example, by analyzing faces and postures in images/videos with machine learning techniques or by converting speech into text to perform emotion detection with natural language processing (NLP) techniques. Keeping this information in semantic repositories offers a wide range of possibilities for implementing smart applications. We propose a framework that allows social robots to detect emotions and store this information in a semantic repository, based on EMONTO (an EMotion ONTOlogy), an ontology to represent emotions. As a proof of concept, we develop a first version of this framework focused on emotion detection in text, which can be obtained directly as text or by converting speech to text. We tested the implementation with a case study of tour-guide robots for museums that relies on a speech-to-text converter based on the Google Application Programming Interface (API) and a Python library, a neural network to label the emotions in texts based on NLP transformers, and EMONTO integrated with an ontology for museums; thus, it is possible to register the emotions that artworks produce in visitors. We evaluated the classification model, obtaining results equivalent to a state-of-the-art transformer-based model, with a clear roadmap for improvement.
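The pipeline this abstract describes (label the emotion in a visitor's utterance, then register it in a semantic repository) can be sketched as follows. This is a minimal illustrative sketch only: the keyword matcher stands in for the paper's transformer-based classifier, the tuple list stands in for the EMONTO repository, and all names (`detect_emotion`, `register_emotion`, `visitor42`) are hypothetical, not the authors' API.

```python
# Toy stand-in for the transformer emotion classifier: keyword overlap.
EMOTION_KEYWORDS = {
    "joy": {"love", "wonderful", "beautiful", "happy"},
    "sadness": {"sad", "cry", "lonely"},
    "anger": {"hate", "angry", "awful"},
}

def detect_emotion(text: str) -> str:
    """Return the emotion whose keyword set best matches the text."""
    words = {w.strip(".,!?") for w in text.lower().split()}
    scores = {emo: len(words & kws) for emo, kws in EMOTION_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

def register_emotion(store, visitor, artwork, text):
    """Record the detected emotion as subject-predicate-object triples,
    the way a semantic repository would link visitor, emotion, artwork."""
    emotion = detect_emotion(text)
    store.append((visitor, "feels", emotion))
    store.append((emotion, "triggeredBy", artwork))
    return emotion

store = []
emotion = register_emotion(store, "visitor42", "MonaLisa",
                           "This painting is wonderful, I love it")
print(emotion)   # joy
```

In the actual system a transformer model produces the label and the triples go into an EMONTO-based ontology store rather than a Python list; the structure of the data flow is the same.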
3
Amith M, Lin R, Cunningham R, Wu QL, Savas LS, Gong Y, Boom JA, Tang L, Tao C. Examining Potential Usability and Health Beliefs Among Young Adults Using a Conversational Agent for HPV Vaccine Counseling. AMIA Jt Summits Transl Sci Proc 2020;2020:43-52. [PMID: 32477622] [PMCID: PMC7233050]
Abstract
The human papillomavirus (HPV) vaccine is the most effective way to prevent HPV-related cancers. Integrating provider vaccine counseling is crucial to improving HPV vaccine completion rates. Automating the counseling experience through a conversational agent could help improve HPV vaccine coverage and reduce the burden of vaccine counseling for providers. In a previous study, we tested a simulated conversational agent that provided HPV vaccine counseling for parents using the Wizard of Oz protocol. In the current study, we assessed the conversational agent among young college adults (n=24), a population that may have missed the HPV vaccine during adolescence, when vaccination is recommended. We also administered surveys on system and voice usability and on health beliefs concerning the HPV vaccine. Participants rated the agent's usability as equivalent to or slightly better than other voice-interactive interfaces, and there is some evidence that the agent affected their beliefs concerning the harms, uncertainty, and risk denials for the HPV vaccine. Overall, this study demonstrates the potential of conversational agents as an impactful tool for health promotion.
Affiliation(s)
- Muhammad Amith
- The University of Texas Health Science Center at Houston, Houston, TX
- Lara S. Savas
- The University of Texas Health Science Center at Houston, Houston, TX
- Yang Gong
- The University of Texas Health Science Center at Houston, Houston, TX
- Lu Tang
- Texas A&M University, College Station, TX
- Cui Tao
- The University of Texas Health Science Center at Houston, Houston, TX
4
Amith M, Zhu A, Cunningham R, Lin R, Savas L, Shay L, Chen Y, Gong Y, Boom J, Roberts K, Tao C. Early Usability Assessment of a Conversational Agent for HPV Vaccination. Stud Health Technol Inform 2019;257:17-23. [PMID: 30741166] [PMCID: PMC6677560]
Abstract
With the emerging use of speech technology in consumer goods, we experimented with conversational agents for communicating health information about the HPV vaccine. Research has shown that one-to-one contact between providers and patients has a variety of positive influences on patients' perceptions of vaccines, even leading to uptake, compared to paper-based methods. We implemented a Wizard of Oz experiment that counsels adults with children (n=18) on the HPV vaccine, using an iPad tablet and a dialogue script developed by public health collaborators, as an early test of a prospective conversational agent in this area. Our early results show that non-vaccine-hesitant parents believed the agent was easy to use and had the capabilities needed, despite a desire for additional features. Our future work will involve developing a dialogue engine to provide automated dialogue interaction, along with further improvements to and experimentation with the speech interface.
Affiliation(s)
- Muhammad Amith
- The University of Texas Health Science Center at Houston
- Lara Savas
- The University of Texas Health Science Center at Houston
- Laura Shay
- The University of Texas Health Science Center at San Antonio
- Yang Gong
- The University of Texas Health Science Center at Houston
- Kirk Roberts
- The University of Texas Health Science Center at Houston
- Cui Tao
- The University of Texas Health Science Center at Houston
5
He Z, Tao C, Bian J, Zhang R, Huang J. Introduction: selected extended articles from the 2nd International Workshop on Semantics-Powered Data Analytics (SEPDA 2017). BMC Med Inform Decis Mak 2018;18:56. [PMID: 30066636] [PMCID: PMC6069756] [DOI: 10.1186/s12911-018-0624-8]
Abstract
In this editorial, we first summarize the 2nd International Workshop on Semantics-Powered Data Analytics (SEPDA 2017) held on November 13, 2017 in Kansas City, Missouri, U.S.A., and then briefly introduce 13 research articles included in this supplement issue, covering topics such as Semantic Integration, Deep Learning, Knowledge Base Construction, and Natural Language Processing.
Affiliation(s)
- Zhe He
- School of Information, Florida State University, 142 Collegiate Loop, Tallahassee, FL 32306, USA
- Cui Tao
- School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX, USA
- Jiang Bian
- Department of Health Outcomes and Biomedical Informatics, University of Florida, Gainesville, FL, USA
- Rui Zhang
- Institute for Health Informatics and College of Pharmacy, University of Minnesota, Minneapolis, MN, USA
- Jingshan Huang
- School of Computing, University of South Alabama, Mobile, AL, USA
6
Amith M, Lin R, Liang C, Gong Y, Tao C. VEO-Engine: Interfacing and Reasoning with an Emotion Ontology for Device Visual Expression. In: HCI International 2018 - Posters' Extended Abstracts (20th International Conference, HCI International 2018, Las Vegas, NV, USA, July 15-20, 2018), Proceedings, Part I. 2018;851:349-355. [PMID: 30701263] [PMCID: PMC6349229] [DOI: 10.1007/978-3-319-92279-9_47]
Abstract
In order for machines to understand or express emotion to users, the specific emotions must be formally defined and the software coded to express those emotions. This is particularly important if devices or computer-based tools are used in clinical settings, which may be stressful for patients and where emotions can dominate their decision making. We previously reported the development and feasibility results of an ontology, the Visualized Emotion Ontology (VEO), that links specific emotions to abstract visualizations that express them. Here, we present the VEO-Engine, a software API package that interfaces with the VEO. The VEO-Engine was developed in Java using Apache Jena and the OWL API. The software package was tested on a Raspberry Pi machine with a small touchscreen display that linked each visualization to an emotion. The VEO-Engine takes input parameters describing emotional situations and valences to reason about and interpret users' emotions using the ontology-based reasoner. With this software, devices can be interfaced wirelessly, so smart devices with visual displays can interact with the ontology. By means of the VEO-Engine, we show the portability and usability of the VEO in human-computer interaction.
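The flow this abstract describes (situation and valence parameters go in, a defined emotion class comes out, and each class is linked to a visualization) can be sketched as below. This is an illustrative sketch only: the toy decision rules stand in for the VEO's OWL class definitions and reasoner, and every name (`Situation`, `classify`, the `veo/*.svg` paths) is a hypothetical placeholder, not the actual VEO axioms or VEO-Engine API.

```python
from dataclasses import dataclass

@dataclass
class Situation:
    valence: str      # "positive" or "negative"
    expected: bool    # was the outcome anticipated?
    about_self: bool  # does it concern the user or another agent?

def classify(s: Situation) -> str:
    """Toy rules standing in for ontology-based classification:
    map situation parameters to a defined emotion class."""
    if s.valence == "positive":
        return "joy" if s.about_self else "admiration"
    return "distress" if s.expected else "shock"

# Each emotion class links to an abstract visualization asset,
# mirroring how the VEO links emotions to visual expressions.
VISUALIZATION = {
    "joy": "veo/joy.svg",
    "admiration": "veo/admiration.svg",
    "distress": "veo/distress.svg",
    "shock": "veo/shock.svg",
}

emotion = classify(Situation(valence="positive", expected=True, about_self=True))
print(emotion, VISUALIZATION[emotion])   # joy veo/joy.svg
```

In the real system the classification step is performed by an OWL reasoner over the VEO's axioms (via Apache Jena and the OWL API in Java), and the display device fetches the visualization over the wireless interface.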
Affiliation(s)
- Muhammad Amith
- University of Texas Health Science Center, Houston, TX 77030, USA
- Rebecca Lin
- Johns Hopkins University, Baltimore, MD 21218, USA
- Chen Liang
- Louisiana Tech University, Ruston, LA 71272, USA
- Yang Gong
- University of Texas Health Science Center, Houston, TX 77030, USA
- Cui Tao
- University of Texas Health Science Center, Houston, TX 77030, USA