1. Evaluation of Prompts to Simplify Cardiovascular Disease Information Generated Using a Large Language Model: Cross-Sectional Study. J Med Internet Res 2024; 26:e55388. PMID: 38648104; DOI: 10.2196/55388.
Abstract
In this cross-sectional study, we evaluated the completeness, readability, and syntactic complexity of cardiovascular disease prevention information produced by GPT-4 in response to 4 kinds of prompts.

2. A Perspective on Crowdsourcing and Human-in-the-Loop Workflows in Precision Health. J Med Internet Res 2024; 26:e51138. PMID: 38602750; PMCID: PMC11046386; DOI: 10.2196/51138.
Abstract
Modern machine learning approaches have led to performant diagnostic models for a variety of health conditions. Several machine learning approaches, such as decision trees and deep neural networks, can, in principle, approximate any function. However, this power is both a gift and a curse, as the propensity toward overfitting is magnified when the input data are heterogeneous and high dimensional and the output class is highly nonlinear. This issue can especially plague diagnostic systems that predict behavioral and psychiatric conditions, which are diagnosed with subjective criteria. An emerging solution is crowdsourcing, where crowd workers annotate complex behavioral features in return for monetary compensation or a gamified experience. These labels can then be used to derive a diagnosis, either directly or as inputs to a diagnostic machine learning model. This viewpoint describes existing work on crowd-powered diagnostic systems, a nascent field of study, and discusses its ongoing challenges and opportunities. With the correct considerations, adding crowdsourcing to human-in-the-loop machine learning workflows for the prediction of complex and nuanced health conditions can accelerate screening, diagnostics, and ultimately access to care.
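The label-aggregation step this abstract describes — many crowd annotations merged into a single training label that feeds a downstream diagnostic model — can be sketched minimally with majority voting (the function name, item ids, and labels below are invented for illustration):

```python
from collections import Counter

def aggregate_crowd_labels(annotations):
    """Majority-vote aggregation of crowd annotations.

    annotations maps an item id to the list of labels collected for it;
    the consensus label can then be used directly or as a model input.
    """
    return {item: Counter(labels).most_common(1)[0][0]
            for item, labels in annotations.items()}

votes = {"clip_01": ["typical", "atypical", "typical"],
         "clip_02": ["atypical", "atypical", "typical"]}
consensus = aggregate_crowd_labels(votes)
# consensus == {'clip_01': 'typical', 'clip_02': 'atypical'}
```

Real crowd pipelines usually weight workers by estimated reliability rather than counting every vote equally, but the majority vote is the common baseline.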

3. A Deep Learning Based Platform for Remote Sensing Images Change Detection Integrating Crowdsourcing and Active Learning. Sensors (Basel) 2024; 24:1509. PMID: 38475044; DOI: 10.3390/s24051509.
Abstract
Remote sensing image change detection has become a popular tool for monitoring the change type, area, and distribution of land cover, including cultivated land, forest land, photovoltaic installations, roads, and buildings. However, traditional methods, which rely on pre-annotation and on-site verification, are time-consuming and struggle to meet timeliness requirements. With the emergence of artificial intelligence, this paper proposes an automatic change detection model and a crowdsourcing collaborative framework. The framework uses human-in-the-loop technology and an active learning approach to transform manual interpretation into human-machine collaborative intelligent interpretation. This low-cost, high-efficiency framework aims to solve the problem of weak model generalization caused by the lack of annotated data in change detection. It can effectively incorporate expert domain knowledge and reduce the cost of data annotation while improving model performance. To ensure data quality, a crowdsourcing quality control model is constructed to evaluate the annotators' qualifications and check their annotation results. Furthermore, a prototype of an automatic detection and crowdsourcing collaborative annotation management platform is developed, which integrates annotation, crowdsourcing quality control, and change detection applications. The proposed framework and platform can help natural resource departments monitor land cover changes efficiently and effectively.
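The active-learning loop described above hinges on choosing which image patches humans should annotate next. A minimal uncertainty-sampling sketch, not the paper's actual model (the probability values are hypothetical model outputs):

```python
import numpy as np

def select_for_annotation(probs, budget):
    """Uncertainty sampling: pick the samples whose predicted
    change-probability is closest to 0.5, i.e. where the model
    is least confident, and queue them for human annotation."""
    uncertainty = -np.abs(probs - 0.5)          # higher = less confident
    return np.argsort(uncertainty)[-budget:][::-1]

# Change probabilities for 6 candidate patches (hypothetical values).
probs = np.array([0.02, 0.48, 0.91, 0.55, 0.10, 0.70])
queue = select_for_annotation(probs, budget=2)
# queue contains indices 1 and 3 (probs 0.48 and 0.55)
```

The annotated patches are then added to the training set and the model is retrained, closing the human-in-the-loop cycle.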

4. Who are the publics engaging in AI? Public Underst Sci 2024:9636625231219853. PMID: 38282355; DOI: 10.1177/09636625231219853.
Abstract
Despite the importance of public engagement in governments' adoption of artificial intelligence systems, artificial intelligence researchers and practitioners spend little time reflecting on who those publics are. Classifying publics affects the assumptions and affordances attributed to the publics' ability to contribute to policy or knowledge production. Further complicating definitions is the publics' role in artificial intelligence production and optimization. Our structured analysis of the corpus used a mixed method, where algorithmic generation of search terms allowed us to examine approximately 2500 articles and provided the foundation for an extensive systematic literature review of approximately 100 documents. Results show the multiplicity of ways publics are framed, revealing different semantic nuances, affordances, political and expertise lenses, and, finally, a lack of definitions. We conclude that categorizing publics represents an act of power, politics, and truth-seeking in artificial intelligence.

5. Sensors for Digital Transformation in Smart Forestry. Sensors (Basel) 2024; 24:798. PMID: 38339515; PMCID: PMC10857223; DOI: 10.3390/s24030798.
Abstract
Smart forestry, an innovative approach leveraging artificial intelligence (AI), aims to enhance forest management while minimizing the environmental impact. The efficacy of AI in this domain is contingent upon the availability of extensive, high-quality data, underscoring the pivotal role of sensor-based data acquisition in the digital transformation of forestry. However, the complexity and challenging conditions of forest environments often impede data collection efforts. Achieving the full potential of smart forestry necessitates a comprehensive integration of sensor technologies throughout the process chain, ensuring the production of standardized, high-quality data essential for AI applications. This paper highlights the symbiotic relationship between human expertise and the digital transformation in forestry, particularly under challenging conditions. We emphasize the human-in-the-loop approach, which allows experts to directly influence data generation, enhancing adaptability and effectiveness in diverse scenarios. A critical aspect of this integration is the deployment of autonomous robotic systems in forests, functioning both as data collectors and processing hubs. These systems are instrumental in facilitating sensor integration and generating substantial volumes of quality data. We present our universal sensor platform, detailing our experiences and the critical importance of the initial phase in digital transformation: the generation of comprehensive, high-quality data. The selection of appropriate sensors is a key factor in this process, and our findings underscore its significance in advancing smart forestry.

6. A Generative Model to Embed Human Expressivity into Robot Motions. Sensors (Basel) 2024; 24:569. PMID: 38257661; PMCID: PMC10819644; DOI: 10.3390/s24020569.
Abstract
This paper presents a model for generating expressive robot motions based on human expressive movements. The proposed data-driven approach combines variational autoencoders and a generative adversarial network framework to extract the essential features of human expressive motion and generate expressive robot motion accordingly. The primary objective was to transfer the underlying expressive features from human to robot motion. The input to the model consists of the robot task defined by the robot's linear velocities and angular velocities and the expressive data defined by the movement of a human body part, represented by the acceleration and angular velocity. The experimental results show that the model can effectively recognize and transfer expressive cues to the robot, producing new movements that incorporate the expressive qualities derived from the human input. Furthermore, the generated motions exhibited variability with different human inputs, highlighting the ability of the model to produce diverse outputs.

7. Development and Evaluation of Machine Learning in Whole-Body Magnetic Resonance Imaging for Detecting Metastases in Patients With Lung or Colon Cancer: A Diagnostic Test Accuracy Study. Invest Radiol 2023; 58:823-831. PMID: 37358356; PMCID: PMC10662596; DOI: 10.1097/rli.0000000000000996.
Abstract
OBJECTIVES Whole-body magnetic resonance imaging (WB-MRI) has been demonstrated to be efficient and cost-effective for cancer staging. The study aim was to develop a machine learning (ML) algorithm to improve radiologists' sensitivity and specificity for metastasis detection and reduce reading times. MATERIALS AND METHODS A retrospective analysis of 438 prospectively collected WB-MRI scans from multicenter Streamline studies (February 2013-September 2016) was undertaken. Disease sites were manually labeled using the Streamline reference standard. Whole-body MRI scans were randomly allocated to training and testing sets. A model for malignant lesion detection was developed based on convolutional neural networks and a 2-stage training strategy. The final algorithm generated lesion probability heat maps. Using a concurrent reader paradigm, 25 radiologists (18 experienced, 7 inexperienced in WB-MRI) were randomly allocated WB-MRI scans with or without ML support to detect malignant lesions over 2 or 3 reading rounds. Reads were undertaken in the setting of a diagnostic radiology reading room between November 2019 and March 2020. Reading times were recorded by a scribe. Prespecified analysis included sensitivity, specificity, interobserver agreement, and reading time of radiology readers to detect metastases with or without ML support. Reader performance for detection of the primary tumor was also evaluated. RESULTS Four hundred thirty-three evaluable WB-MRI scans were allocated to algorithm training (245) or radiology testing (50 patients with metastases, from primary colon [n = 117] or lung [n = 71] cancer). Among a total of 562 reads by experienced radiologists over 2 reading rounds, per-patient specificity was 86.2% (ML) and 87.7% (non-ML) (-1.5% difference; 95% confidence interval [CI], -6.4%, 3.5%; P = 0.39). Sensitivity was 66.0% (ML) and 70.0% (non-ML) (-4.0% difference; 95% CI, -13.5%, 5.5%; P = 0.344).
Among 161 reads by inexperienced readers, per-patient specificity in both groups was 76.3% (0% difference; 95% CI, -15.0%, 15.0%; P = 0.613), with sensitivity of 73.3% (ML) and 60.0% (non-ML) (13.3% difference; 95% CI, -7.9%, 34.5%; P = 0.313). Per-site specificity was high (>90%) for all metastatic sites and experience levels. There was high sensitivity for the detection of primary tumors (lung cancer detection rate of 98.6% with and without ML [0.0% difference; 95% CI, -2.0%, 2.0%; P = 1.00], colon cancer detection rate of 89.0% with and 90.6% without ML [-1.7% difference; 95% CI, -5.6%, 2.2%; P = 0.65]). When combining all reads from rounds 1 and 2, reading times fell by 6.2% (95% CI, -22.8%, 10.0%) when using ML. Round 2 read times fell by 32% (95% CI, 20.8%, 42.8%) compared with round 1. Within round 2, there was a significant decrease in read time when using ML support, estimated as 286 seconds (or 11%) quicker (P = 0.0281), using regression analysis to account for reader experience, read round, and tumor type. Interobserver variance suggests moderate agreement: Cohen κ = 0.64 (95% CI, 0.47, 0.81) with ML, and Cohen κ = 0.66 (95% CI, 0.47, 0.81) without ML. CONCLUSIONS There was no evidence of a significant difference in per-patient sensitivity and specificity for detecting metastases or the primary tumor using concurrent ML compared with standard WB-MRI. Radiology read times with or without ML support fell for round 2 reads compared with round 1, suggesting that readers familiarized themselves with the study reading method. During the second reading round, there was a significant reduction in reading time when using ML support.
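The per-patient accuracy figures reported above are standard 2x2 confusion-table arithmetic. A short sketch with illustrative counts (not the study's raw data, which the abstract does not give at this granularity):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Diagnostic test accuracy from a 2x2 confusion table:
    sensitivity = TP / (TP + FN), specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts only.
sens, spec = sensitivity_specificity(tp=33, fn=17, tn=50, fp=8)
# sens = 0.66, spec ~= 0.862
```

In a concurrent-reader design these counts are computed per read, then compared between ML-assisted and unassisted reads as the abstract describes.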

8. Machine Learning and Explainable Artificial Intelligence Using Counterfactual Explanations for Evaluating Posture Parameters. Bioengineering (Basel) 2023; 10:511. PMID: 37237581; DOI: 10.3390/bioengineering10050511.
Abstract
Postural deficits such as hyperlordosis (hollow back) or hyperkyphosis (hunchback) are relevant health issues. Diagnoses depend on the experience of the examiner and are, therefore, often subjective and prone to errors. Machine learning (ML) methods in combination with explainable artificial intelligence (XAI) tools have proven useful for providing an objective, data-based orientation. However, only a few works have considered posture parameters, leaving the potential for more human-friendly XAI interpretations still untouched. Therefore, the present work proposes an objective, data-driven ML system for medical decision support that enables especially human-friendly interpretations using counterfactual explanations (CFs). The posture data for 1151 subjects were recorded by means of stereophotogrammetry. An expert-based classification of the subjects regarding the presence of hyperlordosis or hyperkyphosis was initially performed. Using a Gaussian process classifier, the models were trained and interpreted using CFs. The label errors were flagged and re-evaluated using confident learning. Very good classification performances were found for both hyperlordosis and hyperkyphosis, whereby the re-evaluation and correction of the test labels led to a significant improvement (MPRAUC = 0.97). A statistical evaluation showed that the CFs seemed to be plausible, in general. In the context of personalized medicine, the present study's approach could be of importance for reducing diagnostic errors and thereby improving the individual adaptation of therapeutic measures. Likewise, it could be a basis for the development of apps for preventive posture assessment.
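A counterfactual explanation answers "what is the closest input the model would classify differently?" A minimal nearest-neighbour stand-in for that search, not the paper's method (the posture parameters and values below are invented):

```python
import numpy as np

def nearest_counterfactual(x, X, y, target_class):
    """Return the stored instance of target_class closest to x.
    A crude proxy for CF search: 'what is the most similar posture
    profile that is labeled non-pathological?'"""
    candidates = X[y == target_class]
    dists = np.linalg.norm(candidates - x, axis=1)
    return candidates[np.argmin(dists)]

X = np.array([[40.0, 12.0], [55.0, 14.0], [62.0, 18.0]])  # posture params
y = np.array([0, 0, 1])                                   # 1 = hyperlordosis
cf = nearest_counterfactual(np.array([60.0, 17.0]), X, y, target_class=0)
# closest non-pathological profile: [55., 14.]
```

Proper CF methods additionally constrain the counterfactual to be plausible and to change as few parameters as possible, which is what makes the explanation human-friendly.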

9. Adaptive cueing strategy for gait modification: A case study using auditory cues. Front Neurorobot 2023; 17:1127033. PMID: 37033414; PMCID: PMC10076772; DOI: 10.3389/fnbot.2023.1127033.
Abstract
People with Parkinson's (PwP) experience gait impairments that can be improved through cue training, where visual, auditory, or haptic cues are provided to guide the walker's cadence or step length. There are two types of cueing strategies: open and closed-loop. Closed-loop cueing may be more effective in addressing habituation and cue dependency, but has to date been rarely validated with PwP. In this study, we adapt a human-in-the-loop framework to conduct preliminary analysis with four PwP. The closed-loop framework learns an individualized model of the walker's responsiveness to cues and generates an optimized cue based on the model. In this feasibility study, we determine whether participants in early stages of Parkinson's can respond to the novel cueing framework, and compare the performance of the framework to two alternative cueing strategies (fixed/proportional approaches) in changing the participant's cadence to two target cadences (speed up/slow down). The preliminary results show that the selection of the target cadence has an impact on the participant's gait performance. With the appropriate target, the framework and the fixed approaches perform similarly in slowing the participants' cadence. However, the proposed framework demonstrates better efficiency, explainability, and robustness across participants. Participants also have the highest retention rate in the absence of cues with the proposed framework. Finally, there is no clear benefit of using the proportional approach.
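The two baseline strategies the framework is compared against can be stated compactly: a fixed cue always plays the target cadence, while a proportional cue nudges the walker relative to their current cadence. A sketch (the gain value and cadences in steps/min are illustrative, not the study's parameters):

```python
def next_cue_rate(current_cadence, target_cadence, mode, gain=0.5):
    """Cue-rate selection for the two baseline cueing strategies.
    'fixed' cues at the target cadence; 'proportional' moves the cue
    part-way from the current cadence toward the target."""
    if mode == "fixed":
        return target_cadence
    return current_cadence + gain * (target_cadence - current_cadence)

fixed_cue = next_cue_rate(110, 100, "fixed")          # 100
prop_cue = next_cue_rate(110, 100, "proportional")    # 105.0
```

The paper's closed-loop framework goes further by learning an individualized response model and optimizing the cue against it, rather than using a fixed rule like either baseline.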

10. Automatic synchrotron tomographic alignment schemes based on genetic algorithms and human-in-the-loop software. J Synchrotron Radiat 2023; 30:169-178. PMID: 36601935; PMCID: PMC9814067; DOI: 10.1107/s1600577522011067.
Abstract
Tomography imaging methods at synchrotron light sources keep evolving, pushing multi-modal characterization capabilities to high spatial and temporal resolutions. To achieve this goal, small probe sizes and multi-dimensional scanning schemes are used more often at the beamlines, raising the complexity of the experimental setup process. To avoid spending a significant amount of human effort and beam time on aligning the X-ray probe, sample, and detector for data acquisition, most attention has been drawn to realigning the systems at the data processing stage. However, post-processing cannot correct everything and is not time efficient. Here we present automatic schemes for aligning the rotational axis and sample before and during the data acquisition process, using a software approach that combines the advantages of genetic algorithms and human intelligence. Our approach shows excellent sub-pixel alignment efficiency for both tasks in a short time, and therefore holds great potential for application in the data acquisition systems of future scanning tomography experiments.
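The genetic-algorithm part of such a scheme can be sketched as a population search over a single alignment parameter. The fitness function and the "true" axis offset below are stand-ins for the beamline's real alignment-error metric, not the paper's implementation:

```python
import numpy as np

def misalignment(offset, true_offset=1.7):
    """Toy fitness: alignment error as a function of the assumed
    rotation-axis offset (true value of 1.7 px is hypothetical)."""
    return (offset - true_offset) ** 2

def ga_align(pop_size=20, generations=40, span=(-5.0, 5.0), seed=0):
    """Minimal elitist GA: keep the best half, mutate it, repeat."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(*span, pop_size)
    for _ in range(generations):
        fitness = misalignment(pop)
        parents = pop[np.argsort(fitness)[: pop_size // 2]]   # selection
        children = parents + rng.normal(0, 0.1, parents.size)  # mutation
        pop = np.concatenate([parents, children])
    return pop[np.argmin(misalignment(pop))]

best = ga_align()   # converges near the true offset of 1.7
```

In the paper's human-in-the-loop design, an operator can inspect and steer such a search rather than letting it run fully unattended.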

11. Leveraging explanations in interactive machine learning: An overview. Front Artif Intell 2023; 6:1066049. PMID: 36909207; PMCID: PMC9995896; DOI: 10.3389/frai.2023.1066049.
Abstract
Explanations have gained an increasing level of interest in the AI and Machine Learning (ML) communities as a way to improve model transparency and allow users to form a mental model of a trained ML model. However, explanations can go beyond this one-way communication and serve as a mechanism to elicit user control, because once users understand, they can provide feedback. The goal of this paper is to present an overview of research where explanations are combined with interactive capabilities as a means to learn new models from scratch and to edit and debug existing ones. To this end, we draw a conceptual map of the state of the art, grouping relevant approaches based on their intended purpose and on how they structure the interaction, highlighting similarities and differences between them. We also discuss open research issues and outline possible directions forward, with the hope of spurring further research on this blooming topic.

12. A human-in-the-loop based Bayesian network approach to improve imbalanced radiation outcomes prediction for hepatocellular cancer patients with stereotactic body radiotherapy. Front Oncol 2022; 12:1061024. PMID: 36568208; PMCID: PMC9782976; DOI: 10.3389/fonc.2022.1061024.
Abstract
Background: Imbalanced outcomes are a common characteristic of oncology datasets, and current machine learning approaches have limitations in learning from such datasets. Here, we propose to resolve this problem by utilizing a human-in-the-loop (HITL) approach, which we hypothesize will also lead to more accurate and explainable outcome prediction models. Methods: A total of 119 hepatocellular carcinoma (HCC) patients with 163 tumors were included in the study. Of these, 81 patients with 104 tumors from the University of Michigan Hospital treated with stereotactic body radiotherapy (SBRT) served as the discovery dataset for building radiation outcome models. The external testing dataset included 59 tumors from 38 patients treated with SBRT at Princess Margaret Hospital. In the discovery dataset, 100 tumors from 77 patients had local control (LC) (96% of 104 tumors) and 23 patients had at least one grade increment of ALBI (I-ALBI) during six-month follow-up (28% of 81 patients). Each patient had a total of 110 features, of which 15 or 20 were identified by physicians as expert knowledge features (EKFs) for LC or I-ALBI prediction, respectively. We propose a HITL-based Bayesian network (HITL-BN) approach that enhances the selection of important features from imbalanced data, in terms of accuracy and explainability, by integrating feature importance ranking and Markov blanket algorithms with human participation. A pure data-driven Bayesian network (PD-BN) method was applied to the same discovery dataset as a benchmark. Results: In the training and testing phases, the areas under the receiver operating characteristic curves of the HITL-BN models for LC or I-ALBI prediction during SBRT were 0.85 (95% confidence interval: 0.75-0.95) or 0.89 (0.81-0.95) and 0.77 or 0.78, respectively. They significantly outperformed the during-treatment PD-BN model in predicting LC or I-ALBI on the discovery cross-validation and testing datasets (DeLong tests).
Conclusion: By allowing the human expert to be part of the model-building process, the HITL-BN approach yielded significantly improved accuracy as well as better explainability when dealing with imbalanced outcomes in the prediction of post-SBRT treatment response of HCC patients, compared with the PD-BN method.
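The human-in-the-loop feature step the abstract describes — merging expert knowledge features with a data-driven importance ranking before the Bayesian-network structure search — can be sketched as a simple order-preserving union (the feature names and k are invented for illustration):

```python
def candidate_features(ranked, expert, k):
    """Union of the top-k data-ranked features and the expert
    knowledge features (EKFs), preserving order and dropping
    duplicates. This seeds the network structure search."""
    return list(dict.fromkeys(ranked[:k] + expert))

ranked = ["gtv_dose", "albumin", "age", "platelets", "mean_liver_dose"]
expert = ["albumin", "baseline_albi", "cirrhosis"]
feats = candidate_features(ranked, expert, k=3)
# ['gtv_dose', 'albumin', 'age', 'baseline_albi', 'cirrhosis']
```

The actual HITL-BN pipeline additionally applies Markov blanket filtering and lets physicians veto or add variables, which is the step this sketch abstracts away.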

13. Mist and Edge Computing Cyber-Physical Human-Centered Systems for Industry 5.0: A Cost-Effective IoT Thermal Imaging Safety System. Sensors (Basel) 2022; 22:8500. PMID: 36366192; PMCID: PMC9658932; DOI: 10.3390/s22218500.
Abstract
While many companies worldwide are still striving to adjust to Industry 4.0 principles, the transition to Industry 5.0 is already underway. Under such a paradigm, Cyber-Physical Human-centered Systems (CPHSs) have emerged to leverage operator capabilities in order to meet the goals of complex manufacturing systems towards human-centricity, resilience and sustainability. This article first describes the essential concepts for the development of Industry 5.0 CPHSs and then analyzes the latest CPHSs, identifying their main design requirements and key implementation components. Moreover, the major challenges for the development of such CPHSs are outlined. Next, to illustrate the previously described concepts, a real-world Industry 5.0 CPHS is presented. Such a CPHS enables increased operator safety and operation tracking in manufacturing processes that rely on collaborative robots and heavy machinery. Specifically, the proposed use case consists of a workshop where a smarter use of resources is required, and human proximity detection determines when machinery should be working or not in order to avoid incidents or accidents involving such machinery. The proposed CPHS makes use of a hybrid edge computing architecture with smart mist computing nodes that processes thermal images and reacts to prevent industrial safety issues. The performed experiments show that, in the selected real-world scenario, the developed CPHS algorithms are able to detect human presence with low-power devices (with a Raspberry Pi 3B) in a fast and accurate way (in less than 10 ms with a 97.04% accuracy), thus being an effective solution (e.g., a good trade-off between cost, accuracy, resilience and computational efficiency) that can be integrated into many Industry 5.0 applications. 
Finally, this article provides specific guidelines that will help future developers and managers to overcome the challenges that will arise when deploying the next generation of CPHSs for smart and sustainable manufacturing.
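The core of the safety use case — deciding from a thermal frame whether a person is near the machinery — can be sketched as band-thresholding on pixel temperatures. The temperature band and pixel count below are illustrative stand-ins, not the paper's calibrated thresholds:

```python
import numpy as np

def presence_detected(frame_c, temp_band=(30.0, 38.0), min_pixels=12):
    """Flag human presence when enough pixels of a thermal frame (degC)
    fall inside the skin-temperature band. Cheap enough to run on an
    edge node such as a Raspberry Pi 3B."""
    band = (frame_c >= temp_band[0]) & (frame_c <= temp_band[1])
    return int(band.sum()) >= min_pixels

frame = np.full((8, 8), 22.0)   # ambient workshop temperature
frame[2:6, 2:6] = 34.0          # warm 4x4 region, e.g. a hand near machinery
detected = presence_detected(frame)   # True
```

The paper's CPHS wraps a decision like this in a mist/edge architecture so machinery can be stopped within milliseconds of a detection.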

14. Deep Learning-Based Energy Expenditure Estimation in Assisted and Non-Assisted Gait Using Inertial, EMG, and Heart Rate Wearable Sensors. Sensors (Basel) 2022; 22:7913. PMID: 36298264; PMCID: PMC9607229; DOI: 10.3390/s22207913.
Abstract
Energy expenditure is a key rehabilitation outcome and is starting to be used in robotics-based rehabilitation through human-in-the-loop control to tailor robot assistance towards reducing patients' energy effort. However, it is usually assessed by indirect calorimetry, which entails a certain degree of invasiveness and provides delayed data, which is not suitable for controlling robotic devices. This work proposes a deep learning-based tool for steady-state energy expenditure estimation based on more ergonomic sensors than indirect calorimetry. The study innovates by estimating the energy expenditure in assisted and non-assisted conditions and at slow gait speeds similar to those of impaired subjects. This work explores and benchmarks the long short-term memory (LSTM) and convolutional neural network (CNN) architectures as deep learning regressors. As inputs, we fused inertial data, electromyography, and heart rate signals measured by on-body sensors from eight healthy volunteers walking with and without assistance from an ankle-foot exoskeleton at 0.22, 0.33, and 0.44 m/s. LSTM and CNN were compared against indirect calorimetry using a leave-one-subject-out cross-validation technique. Results showed the suitability of this tool, especially the CNN, which demonstrated a root-mean-squared error of 0.36 W/kg and a high correlation (ρ > 0.85) between target and estimation (mean R² = 0.79). The CNN was able to discriminate the energy expenditure between assisted and non-assisted gait, and between basal and walking energy expenditure, across the three slow gait speeds. The CNN regressor driven by kinematic and physiological data was shown to be a more ergonomic technique for estimating the energy expenditure, contributing to clinical assessment in slow and robot-assisted gait and to future research concerning human-in-the-loop control.

15. Human Control Model Estimation in Physical Human-Machine Interaction: A Survey. Sensors (Basel) 2022; 22:1732. PMID: 35270878; PMCID: PMC8914850; DOI: 10.3390/s22051732.
Abstract
The study of human-machine interaction as a unique control system was one of the first research interests in the engineering field, with almost a century having passed since the first works appeared in this area. At the same time, it is a crucial aspect of the most recent technological developments made in application fields such as collaborative robotics and artificial intelligence. Learning the processes and dynamics underlying human control strategies when interacting with controlled elements or objects of a different nature has been the subject of research in neuroscience, aerospace, robotics, and artificial intelligence. The cross-domain nature of this field of study can cause difficulties in finding a guiding line that links motor control theory, modelling approaches in physiological control systems, and identifying human-machine general control models in manipulative tasks. The discussed models have varying levels of complexity, from the first quasi-linear model in the frequency domain to the successive optimal control model. These models include detailed descriptions of physiologic subsystems and biomechanics. The motivation behind this work is to provide a complete view of the linear models that could be easily handled both in the time domain and in the frequency domain by using a well-established methodology in the classical linear systems and control theory.
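The first quasi-linear frequency-domain model the survey refers to is commonly written in crossover form: near the crossover frequency the combined human-plus-plant open loop behaves like a delayed integrator, Y(jω) = K·e^(-jωτ)/(jω). A small frequency-response sketch, with illustrative gain and delay values:

```python
import numpy as np

def crossover_model(omega, K=4.0, tau=0.2):
    """Quasi-linear crossover model of the human-plus-plant open loop:
    Y(jw) = K * exp(-j*w*tau) / (j*w). K (gain) and tau (effective
    delay, s) are illustrative, not fitted values."""
    jw = 1j * omega
    return K * np.exp(-jw * tau) / jw

w = np.array([1.0, 4.0, 10.0])          # rad/s
mag = np.abs(crossover_model(w))        # magnitude K/w: [4.0, 1.0, 0.4]
```

Because the delay term has unit magnitude, the gain rolls off at -20 dB/decade, the signature integrator-like behavior that model-fitting methods in the survey exploit.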

16. Quantum Materials Manufacturing. Adv Mater 2022:e2109892. PMID: 35195312; DOI: 10.1002/adma.202109892.
Abstract
The quantum age is just around the corner. As quantum systems become more stable, robust, and mainstream, tackling the challenge of high-throughput manufacturing will require further developments in materials synthesis, characterization, assembly, and diagnostics. As the building blocks of future technologies scale down to atomic and molecular scales, a paradigm shift in manufacturing will begin to take shape. Inspired by a quantum manufacturing world that elevates the Materials Genome Initiative to the next level, a "human-in-the-loop" framework for high-throughput manufacturing, which addresses key opportunities and challenges to be overcome, is outlined.

17. Healing Hands: The Tactile Internet in Future Tele-Healthcare. Sensors (Basel) 2022; 22:1404. PMID: 35214306; PMCID: PMC8963047; DOI: 10.3390/s22041404.
Abstract
In the early 2020s, the coronavirus pandemic brought the notion of remotely connected care to the general population across the globe. Oftentimes, the timely provision of access to care and the implementation of affordable care are the drivers behind tele-healthcare initiatives. Tele-healthcare had already garnered significant momentum in research and implementations in the years preceding the worldwide challenge of 2020, supported by the emerging capabilities of communication networks. The Tactile Internet (TI) with human-in-the-loop is one of those developments, leading to a democratization of skills and expertise that will significantly shape the long-term development of care provision. However, significant challenges remain that require today's communication networks to adapt in order to support the ultra-low latencies required. This latency challenge necessitates transdisciplinary research efforts combining psychophysiological and technological solutions to achieve round-trip times of one millisecond and below. The objective of this paper is to provide an overview of the benefits enabled by solving this network latency reduction challenge by employing state-of-the-art Time-Sensitive Networking (TSN) devices in a testbed, realizing the service differentiation required for the multi-modal human-machine interface. With completely new types of services and use cases resulting from the TI, we describe the potential impacts on remote surgery and remote rehabilitation as examples, with a focus on the future of tele-healthcare in rural settings.
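A back-of-the-envelope calculation shows why the 1 ms target is so demanding: propagation delay alone bounds how far apart the human and the teleoperated machine can be. The sketch below uses an approximate physical constant (light in optical fibre travels roughly 200 km per millisecond) and is not drawn from the paper:

```python
def max_one_way_distance_km(rtt_budget_ms, v_fibre_km_per_ms=200.0):
    """Upper bound on the one-way distance a control loop can span under a
    round-trip latency budget, counting propagation delay only and ignoring
    all processing, switching, and queueing delay."""
    return (rtt_budget_ms / 2.0) * v_fibre_km_per_ms

# A 1 ms Tactile Internet budget confines the loop to roughly 100 km even
# before any sensing, actuation, or network processing delay is accounted for.
limit_km = max_one_way_distance_km(1.0)
```

This is why service differentiation in the network (e.g. via TSN) matters: every non-propagation millisecond spent in queues eats directly into the achievable operating range.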
Collapse
|
18
|
New Approach to Accelerated Image Annotation by Leveraging Virtual Reality and Cloud Computing. FRONTIERS IN BIOINFORMATICS 2022; 1:777101. [PMID: 36303792 PMCID: PMC9580868 DOI: 10.3389/fbinf.2021.777101] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2021] [Accepted: 12/15/2021] [Indexed: 01/02/2023] Open
Abstract
Three-dimensional imaging is at the core of medical imaging and is becoming a standard in biological research. As a result, there is an increasing need to visualize, analyze, and interact with data in a natural three-dimensional context. By combining stereoscopy and motion tracking, commercial virtual reality (VR) headsets provide a solution to this critical visualization challenge by allowing users to view volumetric image stacks in a highly intuitive fashion. While optimizing the visualization and interaction process in VR remains an active topic, one of the most pressing issues is how to use VR for annotation and analysis of data. Annotating data is often a required step for training machine learning algorithms, and the ability to annotate complex three-dimensional data is especially valuable in biological research, where newly acquired data may come in limited quantities. Similarly, medical data annotation is often time-consuming and requires expert knowledge to identify structures of interest correctly. Moreover, simultaneous data analysis and visualization in VR is computationally demanding. Here, we introduce a new procedure to visualize, interact with, annotate, and analyze data by combining VR with cloud computing. VR is leveraged to provide natural interactions with volumetric representations of experimental imaging data. In parallel, cloud computing performs costly computations to accelerate data annotation with minimal input required from the user. We demonstrate multiple proof-of-concept applications of our approach on volumetric fluorescence microscopy images of mouse neurons and on tumor and organ annotations in medical images.
Collapse
|
19
|
Fault Tolerant Strategies for Automated Insulin Delivery Considering the Human Component: Current and Future Perspectives. J Diabetes Sci Technol 2021; 15:1224-1231. [PMID: 34286613 PMCID: PMC8655284 DOI: 10.1177/19322968211029297] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Abstract
Automated insulin delivery (AID) systems are developed for daily use by people with type 1 diabetes (T1D). To ensure the safety of users, it is essential to consider how the human factor affects the performance and safety of these devices. While there are numerous publications on hardware-related failures of AID systems, there are few studies on the human component of the system. From a control point of view, people with T1D using AID systems are at the same time the plant to be controlled and the plant operator. Therefore, users may induce faults in the controller, sensors, actuators, and the plant itself. Strategies to cope with human interaction in AID systems are needed for further development of the technology. In this paper, we present an analysis of potential faults introduced by AID users when the system is under normal operation. This is followed by a review of current fault-tolerant control (FTC) approaches to identify missing areas of research. The paper concludes with a discussion of future directions for the new generation of FTC AID systems.
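One common FTC building block, in the spirit of the residual-based approaches such reviews cover, is to flag a fault when the mismatch between model-predicted and measured glucose grows unusually large. The sketch below is a hypothetical illustration only: the window, threshold, and function name are invented and not clinically validated:

```python
import statistics

def residual_fault_flag(residuals, window=6, threshold=2.5):
    """Flag a potential user- or hardware-induced fault (e.g. an unannounced
    meal or a detached infusion set) when the mean of the last `window`
    residuals (predicted minus measured glucose, mg/dL) exceeds `threshold`
    standard deviations of the earlier residual history."""
    if len(residuals) <= window:
        return False  # not enough history to judge
    history, recent = residuals[:-window], residuals[-window:]
    sigma = statistics.pstdev(history) or 1.0  # guard against zero spread
    return abs(statistics.fmean(recent)) > threshold * sigma
```

A flag like this would typically trigger a conservative fallback control mode or a user alert rather than an automatic correction.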
Collapse
|
20
|
Haptic Devices Based on Real-Time Dynamic Models of Multibody Systems. SENSORS 2021; 21:s21144794. [PMID: 34300535 PMCID: PMC8309802 DOI: 10.3390/s21144794] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/07/2021] [Revised: 07/09/2021] [Accepted: 07/12/2021] [Indexed: 11/16/2022]
Abstract
Multibody modeling of mechanical systems serves a wide range of applications. Human-in-the-loop interfaces represent a growing research field, in which more and more devices include a dynamic multibody model to emulate the system physics in real time. In this scope, reliable and highly dynamic sensors, both to validate those models and to measure the behavior of the physical system in real time, have become crucial. In this paper, a multibody modeling approach in relative coordinates is proposed, based on symbolic equations of the physical system. The model runs in a ROS environment, which interacts with sensors and actuators. Two real-time applications with haptic feedback are presented: a piano key and a car simulator. In the present work, several sensors are used to characterize and validate the multibody model, to measure the system kinematics and dynamics within the human-in-the-loop process, and ultimately to validate the haptic device behavior. Experimental results for both devices confirm the value of an embedded multibody model for enhancing haptic feedback performance. Moreover, variations of model parameters during the experiments illustrate the wide range of possibilities that such model-based, configurable haptic devices can offer.
Collapse
|
21
|
Emotion-Driven Analysis and Control of Human-Robot Interactions in Collaborative Applications. SENSORS 2021; 21:s21144626. [PMID: 34300366 PMCID: PMC8309492 DOI: 10.3390/s21144626] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/27/2021] [Revised: 06/27/2021] [Accepted: 07/01/2021] [Indexed: 11/23/2022]
Abstract
The use of robotic systems has increased over the last decade, driven by advances in the computational capabilities, communication systems, and information systems of manufacturing, as reflected in the concept of Industry 4.0. Furthermore, robotic systems are continuously required to address new challenges in the industrial and manufacturing domain, such as keeping humans in the loop. Briefly, the human-in-the-loop concept focuses on closing the gap between humans and machines by introducing a safe and trustworthy environment in which human workers can work side by side with robots and machines. It aims to increase the engagement of the human as the automation level increases, rather than replacing the human, which can be nearly impossible in some applications. Collaborative robots (cobots) have consequently been created to allow physical interaction with the human worker. However, these cobots still lack the ability to recognize the human emotional state. In this regard, this paper presents an approach for adapting cobot parameters to the emotional state of the human worker. The approach uses electroencephalography (EEG) to digitize and interpret the human emotional state. The parameters of the cobot are then adjusted in real time to keep the human emotional state in a desirable range, which increases the confidence and trust between the human and the cobot. In addition, the paper includes a review of technologies and methods for emotional sensing and recognition. Finally, the approach is tested on an ABB YuMi cobot with a commercially available EEG headset.
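The band-keeping logic described above can be sketched as a simple rule that nudges the cobot's speed whenever an EEG-derived stress index leaves a comfort band. This is a hypothetical simplification: the index scale, band limits, and step size below are invented for illustration, and the paper's actual adaptation law may differ:

```python
def adapt_cobot_speed(stress_index, speed, step=0.1, low=0.4, high=0.7,
                      v_min=0.1, v_max=1.0):
    """Slow the cobot when the operator's stress index (0..1) is above the
    comfort band [low, high], speed it up when below, and hold it otherwise.
    Speed is expressed as a fraction of the cobot's maximum."""
    if stress_index > high:
        return max(v_min, speed - step)
    if stress_index < low:
        return min(v_max, speed + step)
    return speed
```

Calling this once per EEG evaluation cycle yields the kind of continuous, trust-preserving adjustment the abstract describes.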
Collapse
|
22
|
The human-in-the-loop: an evaluation of pathologists' interaction with artificial intelligence in clinical practice. Histopathology 2021; 79:210-218. [PMID: 33590577 DOI: 10.1111/his.14356] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2020] [Revised: 01/24/2021] [Accepted: 02/14/2021] [Indexed: 12/21/2022]
Abstract
AIMS One of the major drivers of the adoption of digital pathology in clinical practice is the possibility of introducing digital image analysis (DIA) to assist with diagnostic tasks. This offers potential increases in accuracy, reproducibility, and efficiency. Whereas stand-alone DIA has great potential benefit for research, little is known about the effect of DIA assistance in clinical use. The aim of this study was to investigate the clinical use characteristics of a DIA application for Ki67 proliferation assessment. Specifically, the human-in-the-loop interplay between DIA and pathologists was studied. METHODS AND RESULTS We retrospectively investigated breast cancer Ki67 areas assessed with human-in-the-loop DIA and compared them with visual and automatic approaches. The results, expressed as standard deviation of the error in the Ki67 index, showed that visual estimation ('eyeballing') (14.9 percentage points) performed significantly worse (P < 0.05) than DIA alone (7.2 percentage points) and DIA with human-in-the-loop corrections (6.9 percentage points). At the overall level, no improvement resulting from the addition of human-in-the-loop corrections to the automatic DIA results could be seen. For individual cases, however, human-in-the-loop corrections could address major DIA errors in terms of poor thresholding of faint staining and incorrect tumour-stroma separation. CONCLUSION The findings indicate that the primary value of human-in-the-loop corrections is to address major weaknesses of a DIA application, rather than fine-tuning the DIA quantifications.
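The comparison metric used above, the standard deviation of the error in the Ki67 index, can be computed directly from paired method estimates and reference values. A minimal sketch with made-up numbers (not the study's data):

```python
import statistics

def ki67_error_sd(estimates, reference):
    """Standard deviation of the estimation error, in percentage points,
    between a method's Ki67 indices and the reference values. This is the
    metric used to compare eyeballing, DIA, and DIA with corrections."""
    errors = [e - r for e, r in zip(estimates, reference)]
    return statistics.stdev(errors)
```

Computing this per method over the same case set gives directly comparable figures like the 14.9, 7.2, and 6.9 percentage points reported.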
Collapse
|
23
|
Generative Adversarial Networks-Enabled Human-Artificial Intelligence Collaborative Applications for Creative and Design Industries: A Systematic Review of Current Approaches and Trends. Front Artif Intell 2021; 4:604234. [PMID: 33997773 PMCID: PMC8113684 DOI: 10.3389/frai.2021.604234] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2020] [Accepted: 02/18/2021] [Indexed: 11/13/2022] Open
Abstract
The future of work and the workplace is very much in flux. A vast amount has been written about artificial intelligence (AI) and its impact on work, with much of it focused on automation and potential job losses. This review addresses one area where AI is being added to creative and design practitioners' toolboxes to enhance their creativity, productivity, and design horizons. A designer's primary purpose is to create, or generate, an optimal artifact or prototype given a set of constraints. We have seen AI encroach on this space with the advent of generative networks, and generative adversarial networks (GANs) in particular. This area has become one of the most active research fields in machine learning over the past several years, and a number of these techniques, particularly those around plausible image generation, have garnered considerable media attention. We look beyond automatic techniques and solutions and examine how GANs are being incorporated into user pipelines for design practitioners. A systematic review of publications indexed in ScienceDirect, SpringerLink, Web of Science, Scopus, IEEExplore, and ACM DigitalLibrary was conducted, covering 2015 to 2020. Results are reported according to the PRISMA statement. From 317 search results, 34 studies (including two identified through snowball sampling) are reviewed, highlighting key trends in this area. The studies' limitations are presented, particularly a lack of user studies and the prevalence of toy examples or implementations that are unlikely to scale. Areas for future study are also identified.
Collapse
|
24
|
Predictive Modelling and Its Visualization for Telehealth Data - Concept and Implementation of an Interactive Viewer. Stud Health Technol Inform 2019; 260:234-241. [PMID: 31118343] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
BACKGROUND Predictive modelling is becoming increasingly important in the healthcare sector. A comprehensive understanding of obtained models and their predictions is indispensable for the development and later acceptance of such systems. OBJECTIVES A general concept of a toolset that supports data scientists in the development of predictive models in the telehealth context had to be developed and subsequently implemented. METHODS Based on surveys, the user requirements were determined. The concept development was based on the data model of the 'HerzMobil Tirol' telehealth program. The implementation was conducted in MATLAB. RESULTS A list of requirements was identified, based on which a viewer was implemented. CONCLUSION The developed viewer concept and its implementation facilitate deeper insight into, and a better understanding of, the development process of predictive models in the telehealth context.
Collapse
|
25
|
Bio-Cooperative Approach for the Human-in-the-Loop Control of an End-Effector Rehabilitation Robot. Front Neurorobot 2018; 12:67. [PMID: 30364325 PMCID: PMC6193510 DOI: 10.3389/fnbot.2018.00067] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2018] [Accepted: 09/20/2018] [Indexed: 11/13/2022] Open
Abstract
The design of patient-tailored rehabilitative protocols is one of the crucial factors influencing motor recovery mechanisms such as neuroplasticity. This approach, which includes the patient in the control loop and is characterized by a control strategy adaptable to the user's requirements, is expected to significantly improve functional recovery in robot-aided rehabilitation. In this paper, a novel 3D bio-cooperative robotic platform is developed. A new arm-weight support system is integrated into an operational robotic platform for 3D upper-limb robot-aided rehabilitation. The platform is capable of adapting therapy characteristics to specific patient needs through biomechanical and physiological measurements, thus closing the control loop around the subject. The level of arm-weight support and the level of assistance provided by the end-effector robot are varied on the basis of muscular fatigue and biomechanical indicators. An assistance-as-needed approach is applied to provide the appropriate amount of assistance. The proposed platform was experimentally validated on 10 healthy subjects, who performed 3D point-to-point tasks in two different conditions, i.e., with and without assistance-as-needed. The results demonstrated the capability of the proposed system to adapt appropriately to the real needs of patients. Moreover, the assistance provided was shown to reduce muscular fatigue without negatively influencing motion execution.
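The assistance-as-needed idea can be sketched as a weighted blend of the fatigue and biomechanical indicators the platform measures. The weights and normalisation below are invented for illustration; the platform's actual adaptation rule is not reproduced here:

```python
def assistance_level(fatigue_index, kinematic_error, w_fatigue=0.6, w_error=0.4):
    """Map a muscular-fatigue indicator and a kinematic performance error
    (both normalised to [0, 1]) to a robot assistance level in [0, 1]:
    more fatigue or larger movement errors call for more assistance."""
    level = w_fatigue * fatigue_index + w_error * kinematic_error
    return min(1.0, max(0.0, level))
```

Re-evaluating this each control cycle lets the robot supply only the assistance the subject currently needs, rather than a fixed amount.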
Collapse
|
26
|
Human's Capability to Discriminate Spatial Forces at the Big Toe. Front Neurorobot 2018; 12:13. [PMID: 29692718 PMCID: PMC5902537 DOI: 10.3389/fnbot.2018.00013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2017] [Accepted: 03/08/2018] [Indexed: 11/13/2022] Open
Abstract
A key factor for reliable object manipulation is the tactile information provided by the skin of our hands. As this sensory information is so essential in our daily life, it should also be provided during teleoperation of robotic devices or in the control of myoelectric prostheses. It is well known that feeding tactile information back to the user can lead to a more natural and intuitive control of robotic devices. However, in some applications it is difficult to use the hands as natural feedback channels, since they may already be overloaded with other tasks or, e.g., in the case of hand prostheses, not accessible at all. Many alternatives for tactile feedback to the human hand have already been investigated. In particular, one approach shows that humans can integrate uni-directional (normal) force feedback at the toe into their sensorimotor control loop. Extending this work, we investigate the human capability to discriminate spatial forces at the bare front side of the toe. A state-of-the-art haptic feedback device was used to apply forces with three different amplitudes (2 N, 5 N, and 8 N) to subjects' right big toes. During the experiments, different force stimuli were presented, i.e., the direction of the applied force was changed such that tangential components occurred. In total, four directions were tested: up (distal), down (proximal), left (medial), and right (lateral). The proportion of the tangential force was varied, corresponding to a directional change of 5° to 25° with respect to the normal force. Given these force stimuli, the subjects' task was to identify the direction of the force change. We found that both the force amplitude and the proportion of tangential force had a significant influence on the success rate. Furthermore, the direction right showed a success rate significantly different from all other directions. Stimuli with a force amplitude of 8 N achieved success rates over 89% in all directions. The results of the user study provide evidence that the subjects were able to discriminate spatial forces at their toe within the defined force amplitudes and tangential proportions.
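The reported directional changes of 5° to 25° translate into tangential force magnitudes via simple trigonometry. Assuming the normal component is held fixed while the force vector tilts (one plausible reading of the protocol, not confirmed by the abstract), the added tangential component is:

```python
import math

def tangential_component(f_normal, angle_deg):
    """Tangential force (N) produced when a fixed normal force `f_normal` (N)
    is tilted by `angle_deg` degrees away from the surface normal."""
    return f_normal * math.tan(math.radians(angle_deg))

# For the largest stimulus (8 N) and the largest tested tilt (25 degrees),
# the tangential component is a little under half the normal force.
f_t = tangential_component(8.0, 25.0)
```

This makes clear why both the force amplitude and the tilt angle matter: the discriminable tangential cue scales with their product.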
Collapse
|
27
|
Using Active Learning to Identify Health Information Technology Related Patient Safety Events. Appl Clin Inform 2017; 8:35-46. [PMID: 28097287 DOI: 10.4338/aci-2016-09-cr-0148] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2016] [Accepted: 11/09/2016] [Indexed: 11/23/2022] Open
Abstract
The widespread adoption of health information technology (HIT) has led to new patient safety hazards that are often difficult to identify. Patient safety event reports, which are self-reported descriptions of safety hazards, provide one view of potential HIT-related safety events. However, identifying HIT-related reports can be challenging, as they are often categorized under other, more predominant clinical categories. This challenge is exacerbated by the increasing number and complexity of reports, which pose difficulties for the human annotators who must review them manually. In this paper, we apply active learning techniques to support the classification of patient safety event reports as HIT-related. We evaluated different strategies and demonstrated a 30% increase in average precision with a confirmatory sampling strategy over a baseline without active learning after 10 learning iterations.
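One iteration of the kind of active-learning loop described can be sketched as follows. Here "confirmatory sampling" is read as querying the unlabelled reports the current model scores as most likely HIT-related so an annotator can confirm them; the function names and scoring interface are invented for illustration:

```python
def confirmatory_sample(unlabelled, score, k=5):
    """Select the k unlabelled reports the current classifier is most
    confident are HIT-related, to be sent to a human annotator for
    confirmation in the next labelling round."""
    return sorted(unlabelled, key=score, reverse=True)[:k]

# Toy usage: score a report by how many HIT-suggestive keywords it contains
reports = ["ekg alert missed", "fall in hallway", "interface froze mid-order"]
score = lambda r: sum(word in r for word in ("alert", "interface"))
queried = confirmatory_sample(reports, score, k=2)
```

After the annotator labels the queried reports, the classifier is retrained and the loop repeats, which is how gains like the reported 30% in average precision accumulate over iterations.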
Collapse
|
28
|
Feedback-controlled robotics-assisted treadmill exercise to assess and influence aerobic capacity early after stroke: a proof-of-concept study. Disabil Rehabil Assist Technol 2013; 9:271-8. [PMID: 23597319 DOI: 10.3109/17483107.2013.785038] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
PURPOSE The majority of post-stroke individuals suffer from low exercise capacity as a secondary reaction to immobility. The aim of this study was to prove the concept of feedback-controlled robotics-assisted treadmill exercise (RATE) to assess aerobic capacity and guide cardiovascular exercise in severely impaired individuals early after stroke. METHOD Subjects underwent constant load and incremental exercise testing using a human-in-the-loop feedback system within a robotics-assisted exoskeleton (Lokomat, Hocoma AG, CH). Inclusion criteria were: stroke onset ≤8 weeks, stable medical condition, non-ambulatory status, moderate motor control of the lower limbs and appropriate cognitive function. Outcome measures included oxygen uptake kinetics, peak oxygen uptake (VO2peak), gas exchange threshold (GET), peak heart rate (HRpeak), peak work rate (Ppeak) and accuracy of reaching target work rate (P-RMSE). RESULTS Three subjects (18-42 d post-stroke) were included. Oxygen uptake kinetics during constant load ranged from 42.0 to 60.2 s. Incremental exercise testing showed: VO2peak range 19.7-28.8 ml/min/kg, GET range 11.6-12.7 ml/min/kg, and HRpeak range 115-161 bpm. Ppeak range was 55.2-110.9 W and P-RMSE range was 3.8-7.5 W. CONCLUSIONS The concept of feedback-controlled RATE for assessment of aerobic capacity and guidance of cardiovascular exercise is feasible. Further research is warranted to validate the method on a larger scale. IMPLICATIONS FOR REHABILITATION Aerobic capacity is seriously reduced in post-stroke individuals as a secondary reaction to immobility. Robotics-assisted walking devices may have substantial clinical relevance regarding assessment and improvement of aerobic capacity early after stroke. Feedback-controlled robotics-assisted treadmill exercise represents a new concept for cardiovascular assessment and intervention protocols for severely impaired individuals.
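The tracking-accuracy outcome above, P-RMSE, is the root-mean-square deviation between the achieved and target work rate. A minimal sketch with illustrative numbers (not the study's recordings):

```python
import math

def p_rmse(actual_w, target_w):
    """Root-mean-square error (W) between achieved and target work rate,
    the feedback-control accuracy measure reported in the study."""
    n = len(actual_w)
    return math.sqrt(sum((a - t) ** 2 for a, t in zip(actual_w, target_w)) / n)
```

Values in the study's reported 3.8-7.5 W range indicate the feedback controller held subjects close to the prescribed work rate throughout the test.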
Collapse
|