1 | Video Processing from a Virtual Unmanned Aerial Vehicle: Comparing Two Approaches to Using OpenCV in Unity. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12125958] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/16/2022]
Abstract
Virtual reality (VR) simulators enable the evaluation of engineering systems and robotic solutions in safe and realistic environments. To do so, VR simulators must run algorithms in real time to accurately recreate the expected behaviour of real-life processes. This work aimed to determine a suitable configuration for processing images taken from a virtual unmanned aerial vehicle developed in Unity using OpenCV. To this end, it focused on comparing two approaches to integrating video processing in order to avoid potential pitfalls such as delays and bottlenecks. The first approach used a dynamic link library (DLL) programmed in C++, and the second an external module programmed in Python. The native DLL ran internally on the same Unity thread, whereas the Python module ran in parallel to the main process and communicated with Unity through the Message Queue Telemetry Transport (MQTT) protocol. Pre-transmission processing, data transmission and video processing were evaluated for two typical image-processing tasks: colour detection and face detection. The analysis confirmed that running the Python module in parallel does not overload the main Unity thread and achieves better performance than the C++ plugin in real-time simulation.
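The colour-detection task on which the two integrations were benchmarked can be sketched with a plain NumPy threshold mask, the equivalent of OpenCV's `cv2.inRange` followed by a mask count. The BGR bounds and the synthetic test frame below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def detect_colour(frame_bgr, lower, upper):
    """Fraction of pixels whose BGR values fall inside [lower, upper],
    mimicking cv2.inRange followed by counting the mask."""
    lower = np.asarray(lower, dtype=np.uint8)
    upper = np.asarray(upper, dtype=np.uint8)
    mask = np.all((frame_bgr >= lower) & (frame_bgr <= upper), axis=-1)
    return mask.mean()

# Synthetic 4x4 BGR frame: left half pure red, right half black.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[:, :2] = (0, 0, 255)  # red in BGR channel order
red_fraction = detect_colour(frame, (0, 0, 200), (60, 60, 255))
```

In the parallel configuration described above, a function like this would run inside the external Python module, with frames arriving over MQTT rather than from a local array.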
2 | PlatypOUs-A Mobile Robot Platform and Demonstration Tool Supporting STEM Education. SENSORS 2022; 22:2284. [PMID: 35336455 PMCID: PMC8949973 DOI: 10.3390/s22062284] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Received: 12/05/2021] [Revised: 03/07/2022] [Accepted: 03/08/2022] [Indexed: 11/17/2022]
Abstract
Given the rising popularity of robotics, student-driven robot development projects play a key role in attracting more people to engineering and science studies. This article presents the early development of an open-source mobile robot platform named PlatypOUs, which can be remotely controlled via electromyography (EMG) signals acquired with the MindRove brain–computer interface (BCI) headset. The gathered bio-signals are classified by a support vector machine (SVM), and the classification results are translated into motion commands for the mobile platform. Alongside the physical robot, a virtual environment with the same capabilities as the real-world device was implemented using Gazebo (an open-source 3D robotics simulator) within the Robot Operating System (ROS) framework; it can be used for development and testing. The main goal of the PlatypOUs project is to create a tool for STEM education and extracurricular activities, particularly laboratory practices and demonstrations. With the physical robot, the aim is to improve awareness of STEM outside and beyond the scope of regular education programmes. The project spans several disciplines, including system design, control engineering, mobile robotics and machine learning, each with several application aspects. Using the PlatypOUs platform and the simulator gives students and self-learners firsthand exercise and teaches them to deal with complex engineering problems in a professional yet engaging way.
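The EMG-to-motion pipeline described above can be sketched with scikit-learn's `SVC`. The toy feature vectors, the two-class set and the command mapping are assumptions for illustration only; the paper does not publish its feature extraction, labels or kernel settings.

```python
import numpy as np
from sklearn.svm import SVC

# Toy EMG feature vectors (e.g. mean absolute value on 4 channels).
rng = np.random.default_rng(0)
rest = rng.normal(0.1, 0.02, size=(20, 4))    # relaxed muscle
clench = rng.normal(0.8, 0.05, size=(20, 4))  # clenched muscle
X = np.vstack([rest, clench])
y = [0] * 20 + [1] * 20                       # 0 = stop, 1 = forward

clf = SVC(kernel="rbf").fit(X, y)

COMMANDS = {0: "stop", 1: "forward"}

def emg_to_command(features):
    """Classify one feature vector and map the class to a motion command."""
    return COMMANDS[int(clf.predict([features])[0])]

cmd = emg_to_command([0.82, 0.79, 0.81, 0.78])
```

In the real system the resulting command string would be forwarded to the ROS-side motor controller rather than returned to the caller.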
3 | Khan N, Muhammad K, Hussain T, Nasir M, Munsif M, Imran AS, Sajjad M. An Adaptive Game-Based Learning Strategy for Children Road Safety Education and Practice in Virtual Space. SENSORS 2021; 21:3661. [PMID: 34070237 PMCID: PMC8197389 DOI: 10.3390/s21113661] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Received: 03/18/2021] [Revised: 05/14/2021] [Accepted: 05/18/2021] [Indexed: 11/16/2022]
Abstract
Virtual reality (VR) has been widely used to let people learn about and simulate situations that are too dangerous or risky to practice in real life, one of which is road safety training for children. Traditional video- and presentation-based road safety training yields only average results, as it lacks physical practice and children's active involvement, and offers no practical test of a child's learned abilities before exposure to real-world environments. Therefore, in this paper we propose a realistic, open-ended 3D VR and Kinect-sensor-based training setup built with the Unity game engine, in which children are educated through, and involved in, road safety exercises. The proposed system applies VR concepts in a game-like setting so that children can learn traffic rules and practice them at home without the risks of the outside environment. With this interactive and immersive training environment, we aim to minimize road accidents involving children and thereby contribute to the broader domain of healthcare. Furthermore, the proposed framework evaluates students' overall performance in the virtual environment (VE) to develop their road-awareness skills. To ensure safety, the system adds an extra examination layer for evaluating children's abilities: a child is considered fit for real-world practice only after fulfilling certain criteria by achieving set scores. To show the robustness and stability of the proposed system, we conducted four types of subjective activities with a group of ten students of average academic standing. The experimental results show the positive effect of the proposed system in improving children's road-crossing behaviour.
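The examination layer described above amounts to a per-activity score gate. A minimal sketch follows; the activity names and cut-off scores are assumptions, not values from the paper.

```python
# Hypothetical pass criteria for the examination layer: the activity names
# and minimum scores below are illustrative assumptions.
PASS_THRESHOLDS = {
    "signal_recognition": 70,
    "safe_crossing": 80,
    "traffic_rules_quiz": 60,
    "hazard_reaction": 75,
}

def fit_for_real_world(scores):
    """A child passes only if every activity meets its minimum score."""
    return all(scores.get(activity, 0) >= minimum
               for activity, minimum in PASS_THRESHOLDS.items())

result = fit_for_real_world({"signal_recognition": 85, "safe_crossing": 90,
                             "traffic_rules_quiz": 72, "hazard_reaction": 80})
```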
Affiliation(s)
- Noman Khan
- Visual Analytics for Knowledge Laboratory, Department of Software, Sejong University, Seoul 143-747, Korea; (N.K.); (K.M.)
- Digital Image Processing Laboratory, Department of Computer Science, Islamia College University Peshawar, Peshawar 25000, Pakistan; (M.N.); (M.M.)
- Khan Muhammad
- Visual Analytics for Knowledge Laboratory, Department of Software, Sejong University, Seoul 143-747, Korea; (N.K.); (K.M.)
- Tanveer Hussain
- Department of Software, Sejong University, Seoul 143-747, Korea
- Mansoor Nasir
- Digital Image Processing Laboratory, Department of Computer Science, Islamia College University Peshawar, Peshawar 25000, Pakistan; (M.N.); (M.M.)
- Muhammad Munsif
- Digital Image Processing Laboratory, Department of Computer Science, Islamia College University Peshawar, Peshawar 25000, Pakistan; (M.N.); (M.M.)
- Ali Shariq Imran
- Norwegian Colour and Visual Computing Laboratory, Department of Computer Science (IDI), Norwegian University of Science and Technology (NTNU), 2815 Gjøvik, Norway
- Muhammad Sajjad
- Visual Analytics for Knowledge Laboratory, Department of Software, Sejong University, Seoul 143-747, Korea; (N.K.); (K.M.)
- Digital Image Processing Laboratory, Department of Computer Science, Islamia College University Peshawar, Peshawar 25000, Pakistan; (M.N.); (M.M.)
- Norwegian Colour and Visual Computing Laboratory, Department of Computer Science (IDI), Norwegian University of Science and Technology (NTNU), 2815 Gjøvik, Norway
- Correspondence:
4 | Facial Emotion Recognition from an Unmanned Flying Social Robot for Home Care of Dependent People. ELECTRONICS 2021. [DOI: 10.3390/electronics10070868] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Indexed: 11/16/2022]
Abstract
This work is part of an ongoing research project to develop an unmanned flying social robot that monitors dependent people at home in order to detect the person's state and provide the necessary assistance. In this sense, this paper focuses on the description of a virtual reality (VR) simulation platform in which an avatar in a virtual home is monitored by a rotary-wing autonomous unmanned aerial vehicle (UAV). The platform is based on a distributed architecture composed of three modules that communicate through the Message Queue Telemetry Transport (MQTT) protocol: the UAV Simulator implemented in MATLAB/Simulink, the VR Visualiser developed in Unity, and the new emotion recognition (ER) system developed in Python. Using a face detection algorithm and a convolutional neural network (CNN), the ER System detects the person's face in the image captured by the UAV's on-board camera and classifies the emotion among seven possibilities (surprise, fear, happiness, sadness, disgust, anger, or neutral). The experimental results demonstrate the correct integration of this new computer vision module within the VR platform, as well as the good performance of the designed CNN, which reached an F1-score (the harmonic mean of the model's precision and recall) of around 85%. The developed emotion detection system can be used in a future implementation of the assistance UAV that monitors dependent people in a real environment, since the methodology used is valid for images of real people.
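The final classification step of the ER System can be sketched as a softmax plus argmax over the CNN's seven-way output. The logit values below are made up for illustration, and the CNN and face detector are stubbed out; only the emotion label set comes from the abstract.

```python
import numpy as np

EMOTIONS = ["surprise", "fear", "happiness", "sadness",
            "disgust", "anger", "neutral"]

def classify_emotion(logits):
    """Map a 7-way CNN output vector to an emotion label and its probability
    via a numerically stable softmax followed by argmax."""
    logits = np.asarray(logits, dtype=float)
    exp = np.exp(logits - logits.max())   # shift for numerical stability
    probs = exp / exp.sum()
    return EMOTIONS[int(np.argmax(probs))], float(probs.max())

label, confidence = classify_emotion([0.2, 0.1, 3.5, 0.3, 0.1, 0.2, 1.0])
```

In the full platform, the input to this step would be the cropped face region found by the face detector in the UAV's camera frame.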