1. Rafiei MH, Gauthier LV, Adeli H, Takabi D. Self-Supervised Learning for Near-Wild Cognitive Workload Estimation. J Med Syst 2024;48:107. PMID: 39576291; DOI: 10.1007/s10916-024-02122-7.
Abstract
Feedback on cognitive workload may reduce decision-making mistakes. Machine learning-based models can produce such feedback from physiological data such as electroencephalography (EEG) and electrocardiography (ECG). Supervised machine learning requires large training data sets that are (1) relevant and decontaminated and (2) carefully labeled, a costly and tedious procedure. Commercial over-the-counter devices are low-cost solutions for real-time collection of physiological signals, but they produce significant artifacts when used outside laboratory settings, compromising machine learning accuracy. In addition, it is unknown which physiological modalities best support machine approximation of cognitive workload in everyday settings. To address these challenges, a first-ever hybrid implementation of feature selection and self-supervised machine learning techniques is introduced. The model is applied to data collected outside controlled laboratory settings to (1) identify, from a seven-modality repository, the physiological modalities relevant to machine approximation of six levels of cognitive-physical workload and (2) design limited-labeling experiments and machine-approximate mental-physical workloads using self-supervised learning techniques.
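The hybrid pipeline described above combines feature selection with self-supervised pretraining and limited-label fine-tuning. The following is a minimal, hypothetical sketch of such a workflow; the synthetic data, feature dimensions, masked-reconstruction pretext task, and network sizes are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a feature-selection + self-supervised workflow (assumed
# details; not the authors' code). Synthetic features stand in for the seven
# physiological modalities; labels are six cognitive-physical workload levels.
import numpy as np
import torch
import torch.nn as nn
from sklearn.feature_selection import SelectKBest, mutual_info_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 70)).astype("float32")  # e.g., 7 modalities x 10 features
y = rng.integers(0, 6, size=2000)                  # 6 workload levels (synthetic)

# 1) Feature selection: keep the most informative features across modalities.
selector = SelectKBest(mutual_info_classif, k=32).fit(X[:300], y[:300])
Xs = selector.transform(X).astype("float32")

# 2) Self-supervised pretraining: reconstruct randomly masked features (no labels).
encoder = nn.Sequential(nn.Linear(32, 16), nn.ReLU())
decoder = nn.Linear(16, 32)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
xb = torch.from_numpy(Xs)
for _ in range(200):
    mask = (torch.rand_like(xb) > 0.3).float()     # randomly hide ~30% of features
    loss = ((decoder(encoder(xb * mask)) - xb) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# 3) Fine-tune a small classifier head using only a limited labeled subset.
head = nn.Linear(16, 6)
opt2 = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)
xl, yl = torch.from_numpy(Xs[:300]), torch.from_numpy(y[:300])
ce = nn.CrossEntropyLoss()
for _ in range(200):
    loss = ce(head(encoder(xl)), yl)
    opt2.zero_grad(); loss.backward(); opt2.step()
```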
Affiliation(s)
- Mohammad H Rafiei
- Whiting School of Engineering, Johns Hopkins University, 21218, Baltimore, MD, USA
- Lynne V Gauthier
- Department of Physical Therapy and Kinesiology, University of Massachusetts Lowell, 01854, Lowell, MA, USA
- Hojjat Adeli
- Departments of Biomedical Informatics and Neuroscience, The Ohio State University, 43210, Columbus, OH, USA
- Daniel Takabi
- School of Cybersecurity, Old Dominion University, 23529, Norfolk, VA, USA
2. Baskaran P, Adams JA. Multi-dimensional task recognition for human-robot teaming: literature review. Front Robot AI 2023;10:1123374. PMID: 37609665; PMCID: PMC10440956; DOI: 10.3389/frobt.2023.1123374.
Abstract
Human-robot teams collaborating to achieve tasks under various conditions, especially in unstructured, dynamic environments, will require robots to adapt autonomously to a human teammate's state. An important element of such adaptation is the robot's ability to infer the human teammate's tasks. Environmentally embedded sensors (e.g., motion capture and cameras) are infeasible for task recognition in such environments, but wearable sensors are a viable alternative. Human-robot teams will perform a wide variety of composite and atomic tasks involving multiple activity components (i.e., gross motor, fine-grained motor, tactile, visual, cognitive, speech, and auditory) that may occur concurrently. A robot's ability to recognize the human's composite, concurrent tasks is a key requirement for successful teaming. Over a hundred task recognition algorithms across multiple activity components are evaluated against six criteria: sensitivity, suitability, generalizability, composite factor, concurrency, and anomaly awareness. The majority of the reviewed algorithms are not viable for human-robot teams in unstructured, dynamic environments, as they detect tasks from only a subset of activity components, rely on non-wearable sensors, and rarely detect composite, concurrent tasks across multiple activity components.
Affiliation(s)
- Prakash Baskaran
- Collaborative Robotics and Intelligent Systems Institute, Oregon State University, Corvallis, OR, United States
4. Wang Y, Wan C, Zhang Y, Zhou Y, Wang H, Yan F, Song D, Du R, Wang Q, Huang L. Detecting Connected Consciousness During Propofol-Induced Anesthesia Using EEG Based Brain Decoding. Int J Neural Syst 2021;31:2150021. PMID: 33970056; DOI: 10.1142/s0129065721500210.
Abstract
Connected consciousness refers to the state in which external stimuli can enter the stream of conscious experience. Emerging evidence suggests that although patients may not respond behaviorally to external stimuli during anesthesia, they may still be aware of their surroundings. In this work, we investigated whether EEG-based brain decoding could detect connected consciousness in the absence of behavioral responses during propofol infusion. A total of 14 subjects participated in our experiment. Subjects were asked to discriminate two types of auditory stimuli with a finger press during an ultraslow propofol infusion. We trained an EEG-based brain decoding model on data collected in the awake state using the same auditory stimuli and tested the model on data collected during the propofol infusion. The model provided a correct classification rate (CCR) of [Formula: see text]% when subjects were able to respond to the stimuli during the propofol infusion. The CCR dropped to [Formula: see text]% when subjects ceased responding and decreased further to [Formula: see text]% when we increased the propofol concentration by another 0.2 µg/ml. After terminating the propofol infusion, the CCR rebounded to [Formula: see text]% before the subjects regained consciousness. These classification results provide evidence that loss of consciousness is a gradual process that may progress from full consciousness to connected consciousness and then to disconnected consciousness.
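A minimal sketch of the train-on-awake / test-under-anesthesia decoding setup described above, assuming precomputed per-epoch EEG features; the LDA decoder and the synthetic arrays are illustrative stand-ins, not the authors' model.

```python
# Hypothetical sketch (not the authors' model): fit a decoder on awake-state EEG
# epochs, then report the correct classification rate (CCR) on epochs recorded
# during propofol infusion. Features and labels below are synthetic placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
# Synthetic stand-ins: (epochs, channels x features) per recording state.
X_awake, y_awake = rng.normal(size=(300, 64)), rng.integers(0, 2, 300)    # two auditory stimuli
X_anesth, y_anesth = rng.normal(size=(120, 64)), rng.integers(0, 2, 120)  # infusion epochs

decoder = LinearDiscriminantAnalysis().fit(X_awake, y_awake)  # trained on awake data only
ccr = accuracy_score(y_anesth, decoder.predict(X_anesth))     # CCR under propofol
print(f"CCR during infusion: {ccr:.1%}")
```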
Affiliation(s)
- Yubo Wang
- School of Life Science and Technology, Xidian University, Xi'an, P. R. China
- Chenghao Wan
- School of Life Science and Technology, Xidian University, Xi'an, P. R. China
- Yun Zhang
- School of Life Science and Technology, Xidian University, Xi'an, P. R. China
- Yu Zhou
- School of Life Science and Technology, Xidian University, Xi'an, P. R. China
- Haidong Wang
- School of Life Science and Technology, Xidian University, Xi'an, P. R. China
- Fei Yan
- Department of Anesthesiology and Center for Brain Science, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, P. R. China
- Dawei Song
- Department of Anesthesiology and Center for Brain Science, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, P. R. China
- Ruini Du
- Department of Anesthesiology and Center for Brain Science, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, P. R. China
- Qiang Wang
- Department of Anesthesiology and Center for Brain Science, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, P. R. China
- Liyu Huang
- School of Life Science and Technology, Xidian University, Xi'an, P. R. China
5. Zheng Y, Hu X. Concurrent Prediction of Finger Forces Based on Source Separation and Classification of Neuron Discharge Information. Int J Neural Syst 2021;31:2150010. PMID: 33541251; DOI: 10.1142/s0129065721500106.
Abstract
A reliable neural-machine interface is essential for humans to interact intuitively with advanced robotic hands in an unconstrained environment. Existing neural decoding approaches rely on either discrete hand-gesture pattern recognition or continuous force decoding of one finger at a time. We developed a neural decoding technique that allows continuous and concurrent prediction of the forces of different fingers based on spinal motoneuron firing information. High-density skin-surface electromyogram (HD-EMG) signals of the finger extensor muscle were recorded while human participants produced isometric flexion forces in a dexterous manner (i.e., produced varying forces using either a single finger or multiple fingers concurrently). Motoneuron firing information was extracted from the EMG signals using a blind source separation technique, and each identified neuron was classified as associated with a given finger. The forces of individual fingers were then predicted concurrently from the pooled firing frequency of the motoneurons assigned to each finger. Compared with conventional approaches, our technique led to better prediction performance, i.e., a higher correlation ([Formula: see text] versus [Formula: see text]), a lower prediction error ([Formula: see text]% MVC versus [Formula: see text]% MVC), and a higher accuracy in finger state (rest/active) prediction ([Formula: see text]% versus [Formula: see text]%). Our decoding method demonstrated the possibility of classifying motoneurons by finger, which significantly alleviated the cross-talk issue of EMG recordings from neighboring hand muscles and allowed finger forces to be decoded individually and concurrently. The outcomes offer a robust neural-machine interface that could allow users to control robotic hands intuitively and dexterously.
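The decoding pipeline sketched below mirrors the sequence the abstract describes (source separation, neuron-to-finger assignment, force regression from pooled firing rates), but with assumed substitutes: FastICA stands in for the paper's blind source separation, and all signals, thresholds, and neuron assignments are synthetic.

```python
# Hypothetical sketch (not the authors' pipeline): separate sources from HD-EMG,
# assign each source to a finger, then regress per-finger forces on pooled,
# smoothed firing rates. Data, thresholds, and assignments are assumptions.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
emg = rng.normal(size=(5000, 64))                  # synthetic HD-EMG: samples x channels

# 1) Source separation (ICA as a stand-in for the paper's decomposition method).
sources = FastICA(n_components=8, random_state=0).fit_transform(emg)

# 2) Binarize sources into spike trains and pool them per (assumed) finger.
spikes = (np.abs(sources) > 2.5 * np.abs(sources).std(axis=0)).astype(float)
finger_of_source = rng.integers(0, 2, size=8)      # assumed neuron-to-finger assignment
rates = np.stack([spikes[:, finger_of_source == f].sum(axis=1) for f in (0, 1)], axis=1)

# 3) Smooth pooled rates and regress finger forces on them.
kernel = np.ones(200) / 200                        # window-averaged firing frequency
rates_smooth = np.column_stack([np.convolve(r, kernel, mode="same") for r in rates.T])
forces = rates_smooth @ np.array([[0.8, 0.1], [0.1, 0.9]]) + rng.normal(0, 0.05, (5000, 2))
model = LinearRegression().fit(rates_smooth, forces)  # concurrent two-finger prediction
print("R^2:", model.score(rates_smooth, forces))
```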
Affiliation(s)
- Yang Zheng
- Joint Department of Biomedical Engineering, University of North Carolina - Chapel Hill and North Carolina State University, Raleigh, NC, USA
- Xiaogang Hu
- Joint Department of Biomedical Engineering, University of North Carolina - Chapel Hill and North Carolina State University, Raleigh, NC, USA
6. Chong HH, Yang L, Sheng RF, Yu YL, Wu DJ, Rao SX, Yang C, Zeng MS. Multi-scale and multi-parametric radiomics of gadoxetate disodium-enhanced MRI predicts microvascular invasion and outcome in patients with solitary hepatocellular carcinoma ≤ 5 cm. Eur Radiol 2021;31:4824-4838. PMID: 33447861; PMCID: PMC8213553; DOI: 10.1007/s00330-020-07601-2.
Abstract
Objectives: To develop radiomics-based nomograms for preoperative prediction of microvascular invasion (MVI) and recurrence-free survival (RFS) in patients with solitary hepatocellular carcinoma (HCC) ≤ 5 cm.
Methods: Between March 2012 and September 2019, 356 patients with pathologically confirmed solitary HCC ≤ 5 cm who underwent preoperative gadoxetate disodium-enhanced MRI were retrospectively enrolled. MVI was graded as M0, M1, or M2 according to the number and distribution of invaded vessels. Radiomics features were extracted from DWI, arterial, portal venous, and hepatobiliary phase images in regions of the entire tumor, the peritumoral area ≤ 10 mm, and randomly selected liver tissue. Multivariate analysis identified independent predictors of MVI and RFS, and nomograms visualized the final predictive models.
Results: Elevated alpha-fetoprotein, total bilirubin and radiomics values, peritumoral enhancement, and incomplete or absent capsule enhancement were independent risk factors for MVI. In the validation cohort (n = 106), the AUC of the MVI nomogram reached 0.920 (95% CI: 0.861–0.979) using random forest and 0.879 (95% CI: 0.820–0.938) using logistic regression analysis. With a 5-year RFS rate of 68.4%, the median RFS of MVI-positive (M2 and M1) and MVI-negative (M0) patients was 30.5 (11.9 and 40.9) months and > 96.9 months, respectively (p < 0.001). Age, histologic MVI, alkaline phosphatase, and alanine aminotransferase independently predicted recurrence, yielding an AUC of 0.654 (95% CI: 0.538–0.769, n = 99) in the RFS validation cohort. In place of histologic MVI, the MVI predicted preoperatively by the random forest MVI nomogram achieved comparable accuracy in MVI stratification and RFS prediction.
Conclusions: A preoperative radiomics-based nomogram using random forest is a potential biomarker for MVI and RFS prediction in solitary HCC ≤ 5 cm.
Key Points:
• The radiomics score was the predominant independent predictor of MVI, which was the primary independent risk factor for postoperative recurrence.
• The radiomics-based nomogram, using either random forest or logistic regression analysis, achieved the best preoperative prediction of MVI in HCC patients to date.
• As a substitute for invasive histologic MVI, the MVI predicted preoperatively by the random forest nomogram (MVI-RF) achieved comparable accuracy in MVI stratification and outcome prediction, reinforcing the radiologic understanding of HCC angioinvasion and progression.
Supplementary Information: The online version contains supplementary material available at 10.1007/s00330-020-07601-2.
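For the MVI model comparison reported above, a minimal sketch of fitting random-forest and logistic-regression classifiers to radiomics-plus-clinical features and comparing validation AUCs; the synthetic data, feature count, and split sizes below are assumptions for illustration only, not the study's data or nomogram.

```python
# Hypothetical sketch (not the study's model): compare random-forest and
# logistic-regression classifiers for MVI prediction by validation AUC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
X = rng.normal(size=(356, 20))   # e.g., radiomics score, AFP, bilirubin, ... (synthetic)
y = rng.integers(0, 2, size=356) # MVI-positive vs MVI-negative (synthetic labels)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=106, random_state=0, stratify=y)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

for name, clf in [("random forest", rf), ("logistic regression", lr)]:
    auc = roc_auc_score(y_va, clf.predict_proba(X_va)[:, 1])
    print(f"{name} validation AUC: {auc:.3f}")
```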
Affiliation(s)
- Huan-Huan Chong
- Shanghai Institute of Medical Imaging, 180 Fenglin Road, Shanghai, China; Department of Radiology, Zhongshan Hospital, Fudan University, 180 Fenglin Road, Shanghai, 200032, China
- Li Yang
- Department of Radiology, Zhongshan Hospital, Fudan University, 180 Fenglin Road, Shanghai, 200032, China
- Ruo-Fan Sheng
- Department of Radiology, Zhongshan Hospital, Fudan University, 180 Fenglin Road, Shanghai, 200032, China
- Yang-Li Yu
- Department of Radiology, Zhongshan Hospital, Fudan University, 180 Fenglin Road, Shanghai, 200032, China
- Di-Jia Wu
- Shanghai United Imaging Intelligence Co., Ltd, Shanghai, China
- Sheng-Xiang Rao
- Shanghai Institute of Medical Imaging, 180 Fenglin Road, Shanghai, China; Department of Radiology, Zhongshan Hospital, Fudan University, 180 Fenglin Road, Shanghai, 200032, China
- Chun Yang
- Shanghai Institute of Medical Imaging, 180 Fenglin Road, Shanghai, China; Department of Radiology, Zhongshan Hospital, Fudan University, 180 Fenglin Road, Shanghai, 200032, China
- Meng-Su Zeng
- Shanghai Institute of Medical Imaging, 180 Fenglin Road, Shanghai, China; Department of Radiology, Zhongshan Hospital, Fudan University, 180 Fenglin Road, Shanghai, 200032, China; Department of Medical Imaging, Shanghai Medical College, Fudan University, Shanghai, China