1. New techniques and methods for prevention and treatment of symptomatic traumatic neuroma: A systematic review. Front Neurol 2023; 14:1086806. PMID: 36873443; PMCID: PMC9978738; DOI: 10.3389/fneur.2023.1086806.
Abstract
After injury, axons at the central (proximal) stump of a severed nerve sprout toward the distal stump. When these sprouts fail to reach the distal end of the severed nerve, they form a traumatic neuroma. Traumatic neuromas cause a complex set of symptoms, including neuropathic pain, skin abnormalities, skeletal abnormalities, hearing loss, and visceral damage. To date, the most promising and practical clinical treatments are drug induction and surgery, but both have limitations. Exploring new ways to prevent and treat traumatic neuroma by regulating and remodeling the microenvironment of the nerve injury is therefore likely to become the dominant approach. This work first summarizes the pathogenesis of traumatic neuroma and then analyzes the standard methods for its prevention and treatment. We focus on three essential areas, advanced functional biomaterial therapy, stem cell therapy, and human-computer interface therapy, to assess their availability and value for preventing and treating traumatic neuroma. Finally, we consider future developments in prevention and treatment and discuss how existing advanced functional materials, stem cells, and artificial-intelligence robotics might be translated into practical clinical techniques for high-quality nerve repair and neuroma prevention as soon as possible.
2. Mental workload assessments of aerial photography missions performed by novice unmanned aerial vehicle operators. Work 2022; 75:181-193. PMID: 36591669; DOI: 10.3233/wor-211222.
Abstract
BACKGROUND: Mental workload is an important variable in understanding human performance in drone operation. OBJECTIVE: To test the effects of gender, age group, flight route, and altitude on the flight performance and mental workload of novice drone operators. METHODS: Ten male and ten female participants without prior drone operating experience were recruited and split into two age groups. After training, the participants operated a drone to perform photo-taking missions under different flight route and altitude conditions. The weighted NASA task load index (TLX), the modified Cooper-Harper (MCH) scale, heart rate, and interbeat interval were measured to assess mental workload. Flight time to complete the mission served as the measure of flight performance. RESULTS: Age group had a significant effect (p < 0.05) on flight time, weighted TLX score, and MCH score. Flight route and altitude had no significant effect on the two subjective ratings or the two cardiac measures. CONCLUSION: The flight performance of younger participants was significantly better than that of their older counterparts. The effects of flight route and altitude on the perceived mental workload of the drone operators were not significant. Both the weighted NASA TLX and the MCH scale were appropriate for measuring the mental workload of novice drone operators.
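The weighted NASA TLX score mentioned above follows a standard procedure (Hart and Staveland): fifteen pairwise comparisons between the six subscales yield per-subscale weights summing to 15, and the overall score is the weighted mean of the 0-100 subscale ratings. A minimal sketch, with illustrative numbers rather than the study's data:

```python
# Standard weighted NASA-TLX computation: six subscale ratings (0-100)
# weighted by tallies from 15 pairwise comparisons (weights sum to 15).
# The ratings and weights below are illustrative, not study data.

SCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def weighted_tlx(ratings, weights):
    """Return the overall weighted TLX workload score (0-100)."""
    assert sum(weights.values()) == 15, "pairwise tallies must sum to 15"
    return sum(ratings[s] * weights[s] for s in SCALES) / 15.0

ratings = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 40, "effort": 60, "frustration": 35}
weights = {"mental": 5, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 3, "frustration": 1}

score = weighted_tlx(ratings, weights)
print(round(score, 1))  # → 55.3
```

The weighting step is what distinguishes the weighted TLX from the simpler "raw TLX" average of the six ratings.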
3. Editorial: Methods and applications in neurorobotics. Front Neurorobot 2022; 16:1111877. PMID: 36590080; PMCID: PMC9798275; DOI: 10.3389/fnbot.2022.1111877.
4. Simulator evaluation of an intersection maneuver assist system with connected and automated vehicle technologies. Ergonomics 2022:1-16. PMID: 36062830; DOI: 10.1080/00140139.2022.2121006.
Abstract
Intersection crashes can potentially be mitigated through vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) safety management systems. It is important, however, to consider the human factors aspects of such systems to maximise potential safety benefits. In this study, Intersection Manoeuvre Assistance Systems were conceptualised and evaluated in a driving simulator. The systems were designed to assist drivers with intersection manoeuvres by making use of connected infrastructure and providing real-time feedback, guidance, and active vehicle controls. The study compared drivers' confidence, workload, glances at the instrument panel, and hazard anticipation when driving with three systems: System A (no alert or assist), System B (alert only), and System C (alert and assist). Study results show differences in drivers' confidence in such systems and potentially degraded visual gaze behaviours. Practitioner summary: Connected infrastructure-based intersection management assistance systems can potentially reduce crashes. This experimental driving simulation study evaluated drivers' perceptions of and reactions to intersection management systems. Results indicate reduced confidence in automated systems, reduced visual scanning for external hazards at intersections, and increased off-road glances towards the instrument panel.
5. A New Labeling Approach for Proportional Electromyographic Control. Sensors 2022; 22:1368. PMID: 35214267; PMCID: PMC8962987; DOI: 10.3390/s22041368.
Abstract
Different control strategies are available for human-machine interfaces based on electromyography (EMG) to map voluntary muscle signals to control signals of a remote-controlled device. Complex systems such as robots or multi-fingered hands require natural commanding, which can be realized with proportional and simultaneous control schemes. Machine learning approaches and methods based on regression are often used to realize the desired functionality. Training procedures often include the tracking of visual stimuli on a screen or additional sensors, such as cameras or force sensors, to create labels for decoder calibration. In certain scenarios where ground truth, such as additional sensor data, cannot be measured, e.g., with people suffering from physical disabilities, these methods come with the challenge of generating appropriate labels. We introduce a new approach that uses the EMG feature stream recorded during a simple training procedure to generate continuous labels. The method avoids synchronization mismatches in the labels and has no need for additional sensor data. Furthermore, we investigated the influence of the transient phase of the muscle contraction when using the new labeling approach. For this purpose, we performed a user study in which 10 subjects performed online 2D goal-reaching and tracking tasks on a screen. In total, five different labeling methods were tested, including three variations of the new approach as well as methods based on binary labels, which served as a baseline. The evaluation showed that the introduced labeling approach, in combination with the transient phase, leads to a proportional command that is more accurate than using only binary labels. In summary, this work presents a new labeling approach for proportional EMG control without the need for a complex training procedure or additional sensors.
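The paper's exact labeling algorithm is not given in the abstract; as a hedged illustration of the general idea, continuous labels can be derived from the EMG feature stream itself, for example by taking a mean-absolute-value envelope over a recorded ramp contraction and min-max normalising it to [0, 1], with no extra sensors or on-screen stimuli. All signals and window sizes below are assumptions for illustration:

```python
import numpy as np

# Illustrative sketch (not the paper's exact method): derive a continuous
# proportional label in [0, 1] from the EMG feature stream of a simple
# ramp contraction, avoiding additional sensors or visual stimuli.

def mav_envelope(emg, win=100):
    """Mean-absolute-value envelope over non-overlapping windows."""
    n = len(emg) // win
    return np.abs(emg[: n * win]).reshape(n, win).mean(axis=1)

def continuous_labels(emg, win=100):
    """Min-max normalise the envelope so rest -> 0 and peak effort -> 1."""
    env = mav_envelope(emg, win)
    return (env - env.min()) / (env.max() - env.min())

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 5000)
emg = rng.standard_normal(5000) * (0.05 + t)  # synthetic ramp contraction
labels = continuous_labels(emg)
print(labels.min(), labels.max())             # spans [0, 1] by construction
```

Such labels can then be paired frame-by-frame with the feature stream to calibrate a regression decoder, sidestepping the synchronization mismatches that stimulus-tracking labels introduce.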
6. User Evaluation of Passenger Assistance System Concepts on Public Highways. Front Psychol 2021; 12:725808. PMID: 34955955; PMCID: PMC8696277; DOI: 10.3389/fpsyg.2021.725808.
Abstract
There is ample research on assistance systems for drivers in conventional and automated vehicles. In the past, those systems were developed to increase safety but also to increase driver comfort. Since many common risks have by now been mitigated through such systems, the research and development focus has expanded to include comfort-related assistance. However, the passenger has rarely been taken into account explicitly, although it has been shown that passenger discomfort is a relevant problem. Therefore, this work investigated the potential of passenger assistance systems to reduce such discomfort. Three different passenger assistance system prototypes were tested in a driving study on a public highway with N = 19 participants. The systems provided information about parameters related to the performance of the driver, and one additionally provided a communicative means of influence. Two of the passenger assistance systems significantly reduced passenger discomfort in at least a subset of the evaluated situations. The majority of participants rated one or more of the assistance systems as more comfortable than a ride without assistance. The system providing information about the attentiveness of the driver was most effective in reducing discomfort and was rated the most helpful. The results show that explicitly considering the situation of passengers in the design of assistance systems can positively impact their comfort. This can be achieved by making information from common driver assistance systems available to the passenger.
7. Editorial: Somatosensory Integration in Human Movement: Perspectives for Neuromechanics, Modelling and Rehabilitation. Front Bioeng Biotechnol 2021; 9:725603. PMID: 34336813; PMCID: PMC8317207; DOI: 10.3389/fbioe.2021.725603.
8. Improved Mutual Understanding for Human-Robot Collaboration: Combining Human-Aware Motion Planning with Haptic Feedback Devices for Communicating Planned Trajectory. Sensors 2021; 21:3673. PMID: 34070528; PMCID: PMC8198032; DOI: 10.3390/s21113673.
Abstract
In a collaborative scenario, communication between humans and robots is fundamental to achieving good efficiency and ergonomics in task execution. Much research has addressed enabling a robot system to understand and predict human behaviour, allowing the robot to adapt its motion to avoid collisions with human workers. When the production task has a high degree of variability, the robot's movements can be difficult to predict, leading to a feeling of anxiety in the worker when the robot changes its trajectory and approaches, since the worker has no information about the robot's planned movement. Additionally, without information about the robot's movement, the human worker cannot effectively plan their own activity without forcing the robot to constantly replan its movement. We propose a novel approach to communicating the robot's intentions to a human worker: haptic feedback devices that notify the worker about the currently planned robot trajectory and changes in its status. To verify the effectiveness of the developed human-machine interface in a shared collaborative workspace, a user study was designed and conducted among 16 participants, whose objective was to accurately recognise the goal position of the robot during its movement. Data collected during the experiment included both objective and subjective parameters. Statistically significant results indicated that participants improved their task completion time by over 45% and were generally more subjectively satisfied when completing the task with the haptic feedback devices equipped. The results also suggest the usefulness of the developed notification system, since it improved users' awareness of the robot's motion plan.
9. Wheelchair control for disabled patients using EMG/EOG based human machine interface: a review. J Med Eng Technol 2020; 45:61-74. PMID: 33302770; DOI: 10.1080/03091902.2020.1853838.
Abstract
The human-machine interface (HMI) and bio-signals have been used to control rehabilitation equipment and improve the lives of people with severe disabilities. This paper reviews electromyogram (EMG) and electrooculogram (EOG) signal-based control systems for driving wheelchairs for the disabled. For a paralysed person, EOG is one of the most useful signals, enabling successful communication with the environment through eye movements. In the case of amputation, selecting muscles according to their power and frequency distribution contributes strongly to specific wheelchair motions. Taking into account the day-to-day activities of persons with disabilities, both technologies are being used to design EMG- or EOG-based wheelchairs. This review examines a total of 70 EMG studies and 25 EOG studies published from 2000 to 2019. In addition, it covers current technologies used in wheelchair systems for signal capture, filtering, characterisation, and classification, including control commands such as left and right turns, forward and reverse motion, acceleration, deceleration, and wheelchair stop.
10. An Integrated Multi-Sensor Approach for the Remote Monitoring of Parkinson's Disease. Sensors (Basel) 2019; 19:E4764. PMID: 31684020; PMCID: PMC6864792; DOI: 10.3390/s19214764.
Abstract
The increasing prevalence of neurological diseases due to population aging demands new strategies for disease management. In Parkinson's disease (PD), these strategies should aim at improving diagnostic accuracy and the frequency of clinical follow-up by means of decentralized, cost-effective solutions. In this context, a system suitable for the remote monitoring of PD subjects is presented. It integrates two approaches investigated in our previous works, each appropriate for the movement analysis of specific parts of the body: low-cost optical devices for the upper limbs and wearable sensors for the lower ones. The system performs automated assessment of six motor tasks of the Unified Parkinson's Disease Rating Scale and is equipped with a gesture-based human-machine interface designed to facilitate user interaction and system management. The usability of the system was evaluated by means of standard questionnaires, and the accuracy of the automated assessment was verified experimentally. The results demonstrate that the proposed solution represents a substantial improvement in PD assessment with respect to the former two approaches treated separately, and a new example of an accurate, feasible, and cost-effective means for the decentralized management of PD.
11. AR DriveSim: An Immersive Driving Simulator for Augmented Reality Head-Up Display Research. Front Robot AI 2019; 6:98. PMID: 33501113; PMCID: PMC7805674; DOI: 10.3389/frobt.2019.00098.
Abstract
Optical see-through automotive head-up displays (HUDs) are a form of augmented reality (AR) that is quickly gaining penetration into the consumer market. Despite increasing adoption, demand, and competition among manufacturers to deliver higher quality HUDs with increased fields of view, little work has been done to understand how best to design and assess AR HUD user interfaces, and how to quantify their effects on driver behavior, performance, and ultimately safety. This paper reports on a novel, low-cost, immersive driving simulator created using a myriad of custom hardware and software technologies specifically to examine basic and applied research questions related to AR HUDs usage when driving. We describe our experiences developing simulator hardware and software and detail a user study that examines driver performance, visual attention, and preferences using two AR navigation interfaces. Results suggest that conformal AR graphics may not be inherently better than other HUD interfaces. We include lessons learned from our simulator development experiences, results of the user study and conclude with limitations and future work.
12. Towards a Multimodal Model of Cognitive Workload Through Synchronous Optical Brain Imaging and Eye Tracking Measures. Front Hum Neurosci 2019; 13:375. PMID: 31708760; PMCID: PMC6820355; DOI: 10.3389/fnhum.2019.00375.
Abstract
Recent advances in neuroimaging technologies have rendered multimodal analysis of operators’ cognitive processes in complex task settings and environments increasingly more practical. In this exploratory study, we utilized optical brain imaging and mobile eye tracking technologies to investigate the behavioral and neurophysiological differences among expert and novice operators while they operated a human-machine interface in normal and adverse conditions. In congruence with related work, we observed that experts tended to have lower prefrontal oxygenation and exhibit gaze patterns that are better aligned with the optimal task sequence with shorter fixation durations as compared to novices. These trends reached statistical significance only in the adverse condition where the operators were prompted with an unexpected error message. Comparisons between hemodynamic and gaze measures before and after the error message indicated that experts’ neurophysiological response to the error involved a systematic increase in bilateral dorsolateral prefrontal cortex (dlPFC) activity accompanied with an increase in fixation durations, which suggests a shift in their attentional state, possibly from routine process execution to problem detection and resolution. The novices’ response was not as strong as that of experts, including a slight increase only in the left dlPFC with a decreasing trend in fixation durations, which is indicative of visual search behavior for possible cues to make sense of the unanticipated situation. A linear discriminant analysis model capitalizing on the covariance structure among hemodynamic and eye movement measures could distinguish experts from novices with 91% accuracy. Despite the small sample size, the performance of the linear discriminant analysis combining eye fixation and dorsolateral oxygenation measures before and after an unexpected event suggests that multimodal approaches may be fruitful for distinguishing novice and expert performance in similar neuroergonomic applications in the field.
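A two-class linear discriminant of the kind described can be sketched compactly: project feature vectors onto the direction that maximises between-class separation relative to pooled within-class covariance, then threshold at the midpoint of the projected class means. The synthetic "expert" and "novice" features below are assumptions for illustration, not the study's measurements:

```python
import numpy as np

# Minimal Fisher linear discriminant on synthetic two-dimensional features
# (e.g. fixation duration and dlPFC oxygenation change). Data are
# illustrative only; the study's features and sample sizes differ.

def fit_lda(X0, X1):
    """Return (weight vector, threshold) for Fisher's discriminant."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # pooled scatter
    w = np.linalg.solve(Sw, m1 - m0)
    thr = w @ (m0 + m1) / 2.0          # midpoint between projected means
    return w, thr

def predict(X, w, thr):
    return (X @ w > thr).astype(int)   # 1 = class of X1 ("expert")

rng = np.random.default_rng(1)
novices = rng.normal([0.0, 0.0], 0.5, size=(40, 2))
experts = rng.normal([1.5, 1.0], 0.5, size=(40, 2))
w, thr = fit_lda(novices, experts)
X = np.vstack([novices, experts])
y = np.r_[np.zeros(40), np.ones(40)]
acc = (predict(X, w, thr) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The direction `w` is exactly the covariance-structure exploitation the abstract refers to: correlated hemodynamic and gaze features are down-weighted jointly rather than treated independently.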
13. Continuous Finger Gesture Recognition Based on Flex Sensors. Sensors 2019; 19:3986. PMID: 31540184; PMCID: PMC6766835; DOI: 10.3390/s19183986.
Abstract
The goal of this work is to present a novel continuous finger gesture recognition system based on flex sensors. The system is able to carry out accurate recognition of a sequence of gestures. Wireless smart gloves equipped with flex sensors were implemented for the collection of the training and testing sets. Given the sensory data acquired from the smart gloves, the gated recurrent unit (GRU) algorithm was adopted for gesture spotting. During the training process for the GRU, the movements associated with different fingers and the transitions between two successive gestures were taken into consideration. On the basis of the gesture spotting results, maximum a posteriori (MAP) estimation was carried out for the final gesture classification. Because of the effectiveness of the proposed spotting scheme, accurate gesture recognition was achieved even for complicated transitions between successive gestures. The experimental results show that the proposed system is an effective alternative for robust recognition of a sequence of finger gestures.
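The final MAP step can be sketched independently of the GRU: given per-frame class probabilities emitted by a spotting network over one detected gesture segment, combine them with class priors in the log domain and take the argmax. The class count, priors, and frame probabilities below are hypothetical, not the paper's:

```python
import numpy as np

# Sketch of the MAP classification step only: per-frame class probabilities
# (from a spotting network such as a GRU) for one detected gesture segment
# are combined with class priors in the log domain. Numbers illustrative.

def map_classify(frame_probs, priors):
    """frame_probs: (T, C) per-frame class probabilities; priors: (C,)."""
    log_post = np.log(priors) + np.log(frame_probs).sum(axis=0)
    return int(np.argmax(log_post))

priors = np.array([0.5, 0.3, 0.2])                # hypothetical class priors
frame_probs = np.array([[0.2, 0.7, 0.1],          # noisy frames of a segment
                        [0.3, 0.6, 0.1],
                        [0.4, 0.5, 0.1],
                        [0.1, 0.8, 0.1]])
print(map_classify(frame_probs, priors))          # → 1
```

Aggregating over the whole spotted segment is what makes the final decision robust to individual noisy frames near gesture transitions.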
14. Improvement of driver active interventions during automated driving by displaying trajectory pointers: a driving simulator study. Traffic Injury Prevention 2019; 20:S152-S156. PMID: 31381449; DOI: 10.1080/15389588.2019.1610170.
Abstract
Objective: The handover of vehicle control from automated to manual operation is a critical aspect of interaction between drivers and automated driving systems (ADS). In some cases, it is possible that the ADS may fail to detect an object. In this event, the driver must be aware of the situation and resume control of the vehicle without assistance from the system. Consequently, the driver must fulfill the following 2 main roles while driving: (1) monitor the vehicle trajectory and surrounding traffic environment and (2) actively take over vehicle control if the driver identifies a potential issue along the trajectory. An effective human-machine interface (HMI) is required that enables the driver to fulfill these roles. This article proposes an HMI that constantly indicates the future position of the vehicle. Methods: This research used the Toyota Dynamic Driving Simulator to evaluate the effect of the proposed HMI and compares the proposed HMI with an HMI that notifies the driver when the vehicle trajectory changes. A total of 48 test subjects were divided into 2 groups of 24: One group used the HMI that constantly indicated the future position of the vehicle and the other group used the HMI that provided information when the vehicle trajectory changed. The following instructions were given to the test subjects: (1) to not hold the steering wheel and to allow the vehicle to drive itself, (2) to constantly monitor the surrounding traffic environment because the functions of the ADS are limited, and (3) to take over driving if necessary. The driving simulator experiments were composed of an initial 10-min acclimatization period and a 10-min evaluation period. Approximately 10 min after the start of the evaluation period, a scenario occurred in which the ADS failed to detect an object on the vehicle trajectory, potentially resulting in a collision if the driver did not actively take over control and manually avoid the object. Results: The collision avoidance rate of the HMI that constantly indicated the future position of the vehicle was higher than that of the HMI that notified the driver of trajectory changes, χ2 = 6.38, P < .05. The steering wheel hands-on and steering override timings were also faster with the proposed HMI (t test; P < .05). Conclusions: This research confirmed that constantly indicating the position of the vehicle several seconds in the future facilitates active driver intervention when an ADS is in operation.
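The reported χ2 comes from comparing avoidance rates between the two HMI groups. A Pearson chi-square on a 2x2 contingency table can be sketched directly; the cell counts below are hypothetical placeholders, since the abstract reports only the statistic (χ2 = 6.38), not the raw counts:

```python
import numpy as np

# Pearson chi-square for a 2x2 table of avoided vs. collided outcomes under
# two HMIs. Counts are hypothetical placeholders (the abstract gives only
# the statistic, chi2 = 6.38, not the underlying cell counts).

def chi_square_2x2(table):
    table = np.asarray(table, dtype=float)
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row @ col / table.sum()     # expected counts under independence
    return ((table - expected) ** 2 / expected).sum()

observed = [[20, 4],    # constant future-position HMI: avoided, collided
            [12, 12]]   # change-notification HMI:      avoided, collided
print(round(chi_square_2x2(observed), 2))  # → 6.0
```

With 1 degree of freedom, a statistic above 3.84 corresponds to P < .05, which is the criterion the study applied.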
15. A Piezoresistive Sensor to Measure Muscle Contraction and Mechanomyography. Sensors 2018; 18:2553. PMID: 30081541; PMCID: PMC6111775; DOI: 10.3390/s18082553.
Abstract
Measurement of muscle contraction is mainly achieved through electromyography (EMG) and is an area of interest for many biomedical applications, including prosthesis control and human-machine interfaces. However, EMG has some drawbacks, and there are alternative methods for measuring muscle activity, such as monitoring the mechanical variations that occur during contraction. In this study, a new, simple, non-invasive sensor based on a force-sensitive resistor (FSR) which is able to measure muscle contraction is presented. The sensor, applied on the skin through a rigid dome, senses the mechanical force exerted by the underlying contracting muscles. Although FSR creep causes output drift, it was found that appropriate FSR conditioning reduces the drift by fixing the voltage across the FSR and provides a voltage output proportional to force. In addition to the larger contraction signal, the sensor was able to detect the mechanomyogram (MMG), i.e., the small vibrations that occur during muscle contraction. The frequency response of the FSR sensor was found to be large enough to correctly measure the MMG. Simultaneous recordings from the flexor carpi ulnaris showed a high correlation (Pearson's r > 0.9) between the FSR output and the EMG linear envelope. Preliminary validation tests on healthy subjects showed the ability of the FSR sensor, used instead of EMG, to proportionally control a hand prosthesis, achieving comparable performance.
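Why fixing the voltage across the FSR linearises the output can be shown numerically. In a classic inverting op-amp stage the FSR sees a constant reference voltage, so the output is proportional to FSR conductance, and FSR conductance rises roughly linearly with applied force. This is a generic sketch of that conditioning principle, not the paper's exact circuit; all component values and the force-conductance constant are illustrative:

```python
# Sketch of fixed-voltage FSR conditioning: with the FSR held at a constant
# reference voltage in an inverting op-amp stage, Vout = -Vref * (Rf / Rfsr)
# is proportional to conductance, and FSR conductance grows roughly linearly
# with force. All values are illustrative, not from the paper.

VREF = -0.5    # volts held across the FSR (negative reference)
RF = 10_000.0  # feedback resistor, ohms

def fsr_resistance(force_n, k=2e-4):
    """Toy model: conductance = k * force (k in siemens per newton)."""
    return 1.0 / (k * force_n)

def vout(force_n):
    return -VREF * RF / fsr_resistance(force_n)

for f in (1.0, 2.0, 4.0):
    print(f, round(vout(f), 2))   # output doubles as force doubles
```

A plain resistor-divider readout would instead give an output that saturates with force, which is one reason the fixed-voltage topology is preferred for proportional control.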
16. Concurrent surface electromyography and force myography classification during times of prosthetic socket shift and user fatigue. J Rehabil Assist Technol Eng 2017; 4:2055668317708731. PMID: 31186928; PMCID: PMC6453103; DOI: 10.1177/2055668317708731.
Abstract
Objective: Surface electromyography has been a long-standing source of signals for control of powered prosthetic devices. By contrast, force myography is a more recent alternative that has the potential to enhance reliability and avoid the operational challenges of surface electromyography during use. In this paper, we report on experiments conducted to assess improvements in classification of surface electromyography signals through the addition of collocated force myography consisting of piezo-resistive sensors. Methods: Force sensors detect intrasocket pressure changes upon muscle activation due to changes in muscle volume during activities of daily living. A heterogeneous sensor configuration with four surface electromyography-force myography pairs was investigated as a control input for a powered upper limb prosthesis. Two multilayer perceptron networks were trained for classification on data gathered during experiments simulating socket shift and muscle fatigue. Results: Intrasocket pressure data used in conjunction with surface electromyography data can improve classification of human intent and control of a powered prosthetic device compared to traditional, surface-electromyography-only systems. Significance: The additional sensors lead to significantly better signal classification during times of user fatigue, poor socket fit, and radial and ulnar wrist deviation. Results from experimentally obtained training data sets are presented.
17. Multi-modal demands of a smartphone used to place calls and enter addresses during highway driving relative to two embedded systems. Ergonomics 2016; 59:1565-1585. PMID: 27110964; PMCID: PMC5215240; DOI: 10.1080/00140139.2016.1154189.
Abstract
There is limited research on trade-offs in demand between manual and voice interfaces of embedded and portable technologies. Mehler et al. identified differences in driving performance, visual engagement and workload between two contrasting embedded vehicle system designs (Chevrolet MyLink and Volvo Sensus). The current study extends this work by comparing these embedded systems with a smartphone (Samsung Galaxy S4). None of the voice interfaces eliminated visual demand. Relative to placing calls manually, both embedded voice interfaces resulted in less eyes-off-road time than the smartphone. Errors were most frequent when calling contacts using the smartphone. The smartphone and MyLink allowed addresses to be entered using compound voice commands resulting in shorter eyes-off-road time compared with the menu-based Sensus but with many more errors. Driving performance and physiological measures indicated increased demand when performing secondary tasks relative to 'just driving', but were not significantly different between the smartphone and embedded systems. Practitioner Summary: The findings show that embedded system and portable device voice interfaces place fewer visual demands on the driver than manual interfaces, but they also underscore how differences in system designs can significantly affect not only the demands placed on drivers, but also the successful completion of tasks.
18. A Secure, Intelligent, and Smart-Sensing Approach for Industrial System Automation and Transmission over Unsecured Wireless Networks. Sensors 2016; 16:322. PMID: 26950129; PMCID: PMC4813897; DOI: 10.3390/s16030322.
Abstract
In industrial supervisory control and data acquisition (SCADA) systems, the pseudo-transport layer of the distributed network protocol (DNP3) performs the functions of the transport and network layers of the open systems interconnection (OSI) model. This study used a simulated water pumping system in which the network nodes are directly and wirelessly connected with sensors and are monitored by the main controller as part of the wireless SCADA system. It focuses on the security issues inherent in the pseudo-transport layer of the DNP3 protocol. During disassembly and reassembly, the pseudo-transport layer keeps track of the byte sequence. However, no mechanism is available to verify the message or maintain the integrity of the bytes received from or transmitted to the data link layer, or of the exchanges between the main controller and the sensors. To keep track of the bytes properly and sequentially, a mechanism is required that can perform verification while bytes are exchanged with the lower layer of the DNP3 protocol or with the field sensors. For security and byte-verification purposes, we propose a mechanism for the pseudo-transport layer that employs a cryptographic algorithm. A dynamic-choice security buffer (SB) is designed and employed during the security development. To achieve the goals of the study, a pseudo-transport layer stack model is designed using the DNP3 protocol open library, and the security is deployed and tested without changing the original design.
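The paper's security-buffer design is not reproduced here; as a generic sketch of the underlying idea, byte integrity and ordering of a transport segment can be protected by appending a keyed MAC that covers both the sequence number and the payload, so the receiver can verify each segment before reassembly. The key, framing, and field sizes below are assumptions for illustration, not DNP3 specifics:

```python
import hmac, hashlib, struct

# Generic sketch (not the paper's exact "security buffer"): append a
# truncated HMAC-SHA256 tag covering the sequence number and payload of
# each pseudo-transport segment, so the receiver can verify byte integrity
# and ordering before reassembly. Key and framing are illustrative.

KEY = b"shared-secret-key"          # pre-shared between controller and node
TAG_LEN = 16                        # truncated tag length, bytes

def protect(seq, payload):
    header = struct.pack(">H", seq)                     # 2-byte sequence no.
    tag = hmac.new(KEY, header + payload, hashlib.sha256).digest()[:TAG_LEN]
    return header + payload + tag

def verify(segment):
    header, payload, tag = segment[:2], segment[2:-TAG_LEN], segment[-TAG_LEN:]
    expect = hmac.new(KEY, header + payload, hashlib.sha256).digest()[:TAG_LEN]
    if not hmac.compare_digest(tag, expect):
        raise ValueError("segment integrity check failed")
    return struct.unpack(">H", header)[0], payload

seq, data = verify(protect(7, b"pump status: ON"))
print(seq, data)   # → 7 b'pump status: ON'
```

Covering the sequence number with the tag is what prevents an attacker from silently reordering or replaying otherwise-valid segments.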
19.
Abstract
In the Turing test, a computer model is deemed to "think intelligently" if it can generate answers that are not distinguishable from those of a human. However, this test is limited to the linguistic aspects of machine intelligence. A salient function of the brain is the control of movement, and the movement of the human hand is a sophisticated demonstration of this function. Therefore, we propose a Turing-like handshake test for machine motor intelligence. We administer the test through a telerobotic system in which the interrogator holds a robotic stylus and interacts with another party (human or artificial). Instead of asking the interrogator whether the other party is a person or a computer program, we employ a two-alternative forced-choice method and ask which of two systems is more human-like. We extract a quantitative grade for each model according to its resemblance to the human handshake motion and name it the "Model Human-Likeness Grade" (MHLG). We present three methods to estimate the MHLG: (i) calculating the proportion of the interrogator's answers indicating that the model is more human-like than the human; (ii) comparing two weighted sums of human and model handshakes, fitting a psychometric curve, and extracting the point of subjective equality (PSE); (iii) comparing a given model with a weighted sum of human and random signals, fitting a psychometric curve to the answers of the interrogator, and extracting the PSE for the weight of the human in the weighted sum. Altogether, we provide a protocol to test computational models of the human handshake. We believe that building a model is a necessary step in understanding any phenomenon and, in this case, in understanding the neural mechanisms responsible for the generation of the human handshake.
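The PSE extraction in methods (ii) and (iii) amounts to fitting a psychometric function to the proportion of "more human-like" answers at each blend weight and reading off the 50% point. A minimal sketch with a logistic function and a coarse grid-search fit; the response proportions below are synthetic, not the study's data:

```python
import numpy as np

# Sketch of PSE extraction: fit a logistic psychometric function to the
# proportion of "more human-like" answers at each human-weight level in the
# stimulus blend, then read off the 50% point. Data below are synthetic.

def logistic(x, pse, slope):
    return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

def fit_pse(weights, p_humanlike):
    """Coarse grid-search least-squares fit; returns the estimated PSE."""
    best = (np.inf, None)
    for pse in np.linspace(0, 1, 101):
        for slope in np.linspace(0.02, 0.5, 50):
            err = np.sum((logistic(weights, pse, slope) - p_humanlike) ** 2)
            if err < best[0]:
                best = (err, pse)
    return best[1]

weights = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])   # human weight in blend
p_humanlike = np.array([0.05, 0.15, 0.40, 0.65, 0.90, 0.97])
print(round(fit_pse(weights, p_humanlike), 2))
```

A higher PSE means a larger share of human signal is needed before the interrogator judges the blend as human-like as the comparison, which maps directly onto a lower MHLG for the model under test.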