1. Yu K, Chen J, Ding X, Zhang D. Exploring cognitive load through neuropsychological features: an analysis using fNIRS-eye tracking. Med Biol Eng Comput 2025; 63:45-57. PMID: 39107650. DOI: 10.1007/s11517-024-03178-w.
Abstract
Cognition is crucial to brain function, and accurately classifying cognitive load is essential for understanding the psychological processes underlying different tasks. This paper combines functional near-infrared spectroscopy (fNIRS) with eye tracking technology to classify cognitive load at the neurocognitive level. This integration overcomes the limitations of a single modality, addressing challenges such as feature selection, high dimensionality, and limited sample sizes. We employ fNIRS-eye tracking to collect neural activity and eye tracking data during various cognitive tasks, followed by preprocessing. Using the maximum relevance minimum redundancy algorithm, we extract the most relevant features and evaluate their impact on the classification task. We evaluate classification performance by building models (naive Bayes, support vector machine, K-nearest neighbors, and random forest) with cross-validation. The results demonstrate the effectiveness of fNIRS-eye tracking, the maximum relevance minimum redundancy algorithm, and machine learning techniques in discriminating cognitive load levels. The study also highlights the impact of the number of features on performance, underscoring the need for an optimal feature set to improve accuracy. These findings advance our understanding of neuroscientific features related to cognitive load and hold significant implications for future cognitive science.
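The maximum relevance minimum redundancy step named in this abstract can be sketched as a greedy loop: start from the most relevant feature, then repeatedly add the candidate maximizing relevance minus mean redundancy with the already-chosen set. This is a minimal illustration on synthetic data, not the authors' implementation; the histogram-based mutual information estimator and the bin count are assumptions.

```python
import numpy as np

def mutual_info(x, y, bins=8):
    # Plug-in MI estimate from a 2-D histogram (in nats); crude but self-contained.
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def mrmr(X, y, k):
    # Greedy mRMR: seed with the most relevant feature, then add the feature
    # maximizing relevance minus mean redundancy with the selected set.
    relevance = np.array([mutual_info(X[:, j], y) for j in range(X.shape[1])])
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        candidates = [j for j in range(X.shape[1]) if j not in selected]
        scores = [relevance[j]
                  - np.mean([mutual_info(X[:, j], X[:, s]) for s in selected])
                  for j in candidates]
        selected.append(candidates[int(np.argmax(scores))])
    return selected

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500).astype(float)
X = np.column_stack([y + 0.3 * rng.normal(size=500),   # informative
                     y + 0.3 * rng.normal(size=500),   # informative, redundant with col 0
                     rng.normal(size=500)])            # pure noise
chosen = mrmr(X, y, 2)
```

On this toy data the first pick is one of the two informative columns; whether the second pick is the redundant twin or the noise column depends on how the redundancy penalty trades off against relevance, which is exactly the feature-count sensitivity the abstract emphasizes.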
Affiliation(s)
- Kaiwei Yu
- Research Center of Optical Instrument and System, Ministry of Education and Shanghai Key Lab of Modern Optical System, University of Shanghai for Science and Technology, No. 516 Jungong Road, Shanghai, 200093, China
- Jiafa Chen
- Research Center of Optical Instrument and System, Ministry of Education and Shanghai Key Lab of Modern Optical System, University of Shanghai for Science and Technology, No. 516 Jungong Road, Shanghai, 200093, China.
- Xian Ding
- Research Center of Optical Instrument and System, Ministry of Education and Shanghai Key Lab of Modern Optical System, University of Shanghai for Science and Technology, No. 516 Jungong Road, Shanghai, 200093, China
- Dawei Zhang
- Research Center of Optical Instrument and System, Ministry of Education and Shanghai Key Lab of Modern Optical System, University of Shanghai for Science and Technology, No. 516 Jungong Road, Shanghai, 200093, China.
2. Li X, Xu J, Cheng H. Functional sufficient dimension reduction through information maximization with application to classification. J Appl Stat 2024; 51:3059-3101. PMID: 39512594. PMCID: PMC11539930. DOI: 10.1080/02664763.2024.2335570.
Abstract
Considering the case where the response variable is categorical and the predictor is a random function, two novel functional sufficient dimension reduction (FSDR) methods are proposed based on mutual information and square loss mutual information. Compared to classical FSDR methods, such as functional sliced inverse regression and functional sliced average variance estimation, the proposed methods are appealing because they can estimate multiple effective dimension reduction directions when the number of categories is relatively small, especially for binary responses. Moreover, the proposed methods require neither the restrictive linear conditional mean assumption nor the constant covariance assumption, and they avoid the inverse problem of the covariance operator that is often encountered in functional sufficient dimension reduction. Functional principal component analysis with truncation is used as a regularization mechanism. Under some mild conditions, the statistical consistency of the proposed methods is established. Simulation studies and real data analyses are used to evaluate the finite-sample properties of our methods.
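The FPCA-with-truncation regularization can be illustrated on discretised curves. The sketch below uses an SVD of the centred data matrix as a stand-in for a proper basis-expansion FPCA; the synthetic two-component curves and the truncation level are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 100)
# 200 synthetic curves driven by two random coefficients plus small noise.
curves = (rng.normal(size=(200, 1)) * np.sin(np.pi * t)
          + rng.normal(size=(200, 1)) * np.cos(np.pi * t)
          + 0.05 * rng.normal(size=(200, 100)))
centred = curves - curves.mean(axis=0)
# SVD of the discretised curves: the right singular vectors play the role of
# estimated functional principal components.
_, s, Vt = np.linalg.svd(centred, full_matrices=False)
k = 2                                  # truncation level (an assumption)
scores = centred @ Vt[:k].T            # finite-dimensional summary of each curve
explained = float((s[:k] ** 2).sum() / (s ** 2).sum())
```

Each infinite-dimensional predictor is thus replaced by a few principal-component scores, which is what makes the downstream dimension-reduction step well-posed without inverting the covariance operator.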
Affiliation(s)
- Xinyu Li
- International Institute of Finance, School of Management, University of Science and Technology of China, Hefei, Anhui, People's Republic of China
- Jianjun Xu
- School of Mathematics, Hefei University of Technology, Hefei, Anhui, People's Republic of China
- Haoyang Cheng
- College of Electrical and Information Engineering, Quzhou University, Quzhou, Zhejiang, People's Republic of China
3. Peng X, Li H, Yuan F, Razul SG, Chen Z, Lin Z. An extreme learning machine for unsupervised online anomaly detection in multivariate time series. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.06.042.
4. Teji JS, Jain S, Gupta SK, Suri JS. NeoAI 1.0: Machine learning-based paradigm for prediction of neonatal and infant risk of death. Comput Biol Med 2022; 147:105639. DOI: 10.1016/j.compbiomed.2022.105639.
5. Sakai T, Niu G, Sugiyama M. Information-Theoretic Representation Learning for Positive-Unlabeled Classification. Neural Comput 2020; 33:244-268. PMID: 33080157. DOI: 10.1162/neco_a_01337.
Abstract
Recent advances in weakly supervised classification allow us to train a classifier from only positive and unlabeled (PU) data. However, existing PU classification methods typically require an accurate estimate of the class-prior probability, a critical bottleneck particularly for high-dimensional data. This problem has been commonly addressed by applying principal component analysis in advance, but such unsupervised dimension reduction can collapse the underlying class structure. In this letter, we propose a novel representation learning method from PU data based on the information-maximization principle. Our method does not require class-prior estimation and thus can be used as a preprocessing method for PU classification. Through experiments, we demonstrate that our method, combined with deep neural networks, highly improves the accuracy of PU class-prior estimation, leading to state-of-the-art PU classification performance.
Affiliation(s)
- Tomoya Sakai
- University of Tokyo, Kashiwa, Chiba 277-8561, Japan
- Gang Niu
- RIKEN Center for Advanced Intelligence Project, Chuo-ku, Tokyo 103-0027, Japan
- Masashi Sugiyama
- RIKEN Center for Advanced Intelligence Project, Chuo-ku, Tokyo 103-0027, Japan, and University of Tokyo, Kashiwa, Chiba 277-8561, Japan
6. Sha ZC, Liu ZM, Ma C, Chen J. Feature selection for multi-label classification by maximizing full-dimensional conditional mutual information. Appl Intell 2020. DOI: 10.1007/s10489-020-01822-0.
7. Yasaei Sekeh S, Hero AO. Geometric Estimation of Multivariate Dependency. Entropy (Basel) 2019; 21:e21080787. PMID: 33267500. PMCID: PMC7515316. DOI: 10.3390/e21080787.
Abstract
This paper proposes a geometric estimator of dependency between a pair of multivariate random variables. The proposed estimator is based on a randomly permuted geometric graph (the minimal spanning tree) over the two multivariate samples. This estimator converges to a quantity that we call the geometric mutual information (GMI), which is equivalent to the Henze-Penrose divergence between the joint distribution of the multivariate samples and the product of the marginals. The GMI has many of the same properties as standard MI but can be estimated from empirical data without density estimation, making it scalable to large datasets. The proposed empirical estimator of GMI is simple to implement, involving the construction of a minimal spanning tree (MST) spanning both the original data and a randomly permuted version of this data. We establish asymptotic convergence of the estimator and convergence rates of the bias and variance for smooth multivariate density functions belonging to a Hölder class. We demonstrate the advantages of the proposed geometric dependency estimator in a series of experiments.
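A crude version of the permutation-plus-MST construction can be written down directly: pool the joint sample with a dependence-breaking permuted copy, build one MST over the pooled points, and count edges that link the two samples (few cross edges indicate strong dependence). The 1 - R/n normalisation and the clipping at zero below are simplifying assumptions in the spirit of the Friedman-Rafsky-style Henze-Penrose estimator for equal sample sizes, not the paper's exact estimator.

```python
import numpy as np
from scipy.spatial import distance_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def gmi_estimate(X, Y, rng):
    # Pool the joint sample with a permuted (dependence-broken) copy, build one
    # MST over everything, and count edges linking the two samples.
    n = X.shape[0]
    joint = np.hstack([X, Y])
    product = np.hstack([X, Y[rng.permutation(n)]])   # breaks the dependence
    Z = np.vstack([joint, product])
    labels = np.r_[np.zeros(n), np.ones(n)]
    mst = minimum_spanning_tree(distance_matrix(Z, Z)).tocoo()
    cross = int(np.sum(labels[mst.row] != labels[mst.col]))
    return max(0.0, 1.0 - cross / n)   # simplified normalisation, clipped at zero

rng = np.random.default_rng(0)
x = rng.normal(size=(400, 1))
dep = gmi_estimate(x, x + 0.1 * rng.normal(size=(400, 1)), rng)   # strong dependence
indep = gmi_estimate(x, rng.normal(size=(400, 1)), rng)           # independence
```

For dependent pairs the joint sample concentrates away from the permuted copy, so few MST edges cross between the two samples and the estimate is large; for independent pairs roughly half the edges cross and the estimate collapses towards zero.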
8. Sechidis K, Brown G. Simple strategies for semi-supervised feature selection. Mach Learn 2018; 107:357-395. PMID: 31983804. PMCID: PMC6954040. DOI: 10.1007/s10994-017-5648-2.
Abstract
What is the simplest thing you can do to solve a problem? In the context of semi-supervised feature selection, we tackle exactly this—how much we can gain from two simple classifier-independent strategies. If we have some binary labelled data and some unlabelled, we could assume the unlabelled data are all positives, or assume them all negatives. These minimalist, seemingly naive, approaches have not previously been studied in depth. However, with theoretical and empirical studies, we show they provide powerful results for feature selection, via hypothesis testing and feature ranking. Combining them with some “soft” prior knowledge of the domain, we derive two novel algorithms (Semi-JMI, Semi-IAMB) that outperform significantly more complex competing methods, showing particularly good performance when the labels are missing-not-at-random. We conclude that simple approaches to this problem can work surprisingly well, and in many situations we can provably recover the exact feature selection dynamics, as if we had labelled the entire dataset.
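The "assume them all negatives" strategy is easy to sketch for mutual-information feature ranking: build a surrogate label that marks the few known positives as 1 and everything unlabelled as 0, then rank features by MI with that surrogate. This is a toy illustration of the idea, not the Semi-JMI or Semi-IAMB algorithms; the histogram MI estimator and the synthetic data are assumptions.

```python
import numpy as np

def mutual_info(x, y, bins=8):
    # Plug-in MI estimate from a 2-D histogram (in nats).
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
n = 2000
y_true = rng.integers(0, 2, n)                           # mostly hidden labels
X = np.column_stack([y_true + 0.5 * rng.normal(size=n),  # informative feature
                     rng.normal(size=n)])                # noise feature
known_pos = (y_true == 1) & (rng.random(n) < 0.3)        # a few labelled positives
surrogate = known_pos.astype(float)                      # unlabelled assumed negative
scores = [mutual_info(X[:, j], surrogate) for j in range(X.shape[1])]
ranking = list(np.argsort(scores)[::-1])
```

Even though the surrogate mislabels many true positives as negatives, the informative feature still outranks the noise feature, which is the sense in which the "naive" assumption preserves the feature-ranking dynamics.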
Affiliation(s)
- Gavin Brown
- School of Computer Science, University of Manchester, Manchester, M13 9PL UK
9. Sechidis K, Sperrin M, Petherick ES, Luján M, Brown G. Dealing with under-reported variables: An information theoretic solution. Int J Approx Reason 2017. DOI: 10.1016/j.ijar.2017.04.002.
10.
11. Irie K, Sugiyama M, Tomono M. Dependence maximization localization: a novel approach to 2D street-map-based robot localization. Adv Robot 2016. DOI: 10.1080/01691864.2016.1222915.
12. A Novel Method for Speech Acquisition and Enhancement by 94 GHz Millimeter-Wave Sensor. Sensors 2015; 16:s16010050. PMID: 26729126. PMCID: PMC4732083. DOI: 10.3390/s16010050.
Abstract
In order to improve the speech acquisition ability of a non-contact method, a 94 GHz millimeter wave (MMW) radar sensor was employed to detect speech signals. This novel non-contact speech acquisition method was shown to have high directional sensitivity, and to be immune to strong acoustical disturbance. However, MMW radar speech is often degraded by combined sources of noise, which mainly include harmonic, electrical circuit and channel noise. In this paper, an algorithm combining empirical mode decomposition (EMD) and mutual information entropy (MIE) was proposed for enhancing the perceptibility and intelligibility of radar speech. Firstly, the radar speech signal was adaptively decomposed into oscillatory components called intrinsic mode functions (IMFs) by EMD. Secondly, MIE was used to determine the number of reconstructive components, and then an adaptive threshold was employed to remove the noise from the radar speech. The experimental results show that human speech can be effectively acquired by a 94 GHz MMW radar sensor when the detection distance is 20 m. Moreover, the noise of the radar speech is greatly suppressed and the speech sounds become more pleasant to human listeners after being enhanced by the proposed algorithm, suggesting that this novel speech acquisition and enhancement method will provide a promising alternative for various applications associated with speech detection.
13. Yamada M, Sigal L, Raptis M, Toyoda M, Chang Y, Sugiyama M. Cross-Domain Matching with Squared-Loss Mutual Information. IEEE Trans Pattern Anal Mach Intell 2015; 37:1764-1776. PMID: 26353125. DOI: 10.1109/tpami.2014.2388235.
Abstract
The goal of cross-domain matching (CDM) is to find correspondences between two sets of objects in different domains in an unsupervised way. CDM has various interesting applications, including photo album summarization, where photos are automatically aligned into a designed frame expressed in the Cartesian coordinate system, and temporal alignment, which aligns sequences such as videos that are potentially expressed using different features. In this paper, we propose an information-theoretic CDM framework based on squared-loss mutual information (SMI). The proposed approach can directly handle non-linearly related objects/sequences with different dimensions, and its hyper-parameters can be objectively optimized by cross-validation. We apply the proposed method to several real-world problems, including image matching, unpaired voice conversion, photo album summarization, cross-feature video alignment, cross-domain video-to-mocap alignment, and Kinect-based action recognition, and experimentally demonstrate that the proposed method is a promising alternative to state-of-the-art CDM methods.
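SMI-based CDM handles non-linearly related objects of different dimensions; as a much cruder stand-in for the underlying idea (choose the correspondence that makes the two sets most dependent), one can solve an assignment problem on a Euclidean similarity, which recovers a hidden pairing when the two domains happen to live in the same space. Everything below is an assumption for illustration, not the paper's method.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(4)
A = rng.normal(size=(30, 2))                         # domain-A objects
true_perm = rng.permutation(30)
B = A[true_perm] + 0.01 * rng.normal(size=(30, 2))   # shuffled, slightly noisy copies
# Best one-to-one matching: maximise similarity == minimise total squared distance.
cost = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
rows, cols = linear_sum_assignment(cost)
```

The recovered column permutation is the inverse of the hidden shuffle; SMI-based CDM generalises this by scoring candidate correspondences with an estimated statistical dependence rather than a raw distance.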
14. Calandriello D, Niu G, Sugiyama M. Semi-supervised information-maximization clustering. Neural Netw 2014; 57:103-111. PMID: 24975502. DOI: 10.1016/j.neunet.2014.05.016.
Abstract
Semi-supervised clustering aims to introduce prior knowledge in the decision process of a clustering algorithm. In this paper, we propose a novel semi-supervised clustering algorithm based on the information-maximization principle. The proposed method is an extension of a previous unsupervised information-maximization clustering algorithm based on squared-loss mutual information to effectively incorporate must-links and cannot-links. The proposed method is computationally efficient because the clustering solution can be obtained analytically via eigendecomposition. Furthermore, the proposed method allows systematic optimization of tuning parameters such as the kernel width, given the degree of belief in the must-links and cannot-links. The usefulness of the proposed method is demonstrated through experiments.
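The must-link/cannot-link idea can be sketched as a kernel-matrix modification applied before the eigendecomposition-based clustering step: overwrite the similarity of constrained pairs. The paper weights this by the degree of belief in each link; the full-belief overwrite below is a simplifying assumption.

```python
import numpy as np

def apply_links(K, must_links, cannot_links, belief=1.0):
    # Overwrite pairwise similarities for constrained pairs before the
    # eigendecomposition-based clustering step.
    K = K.copy()
    for i, j in must_links:
        K[i, j] = K[j, i] = belief   # full belief in the must-link
    for i, j in cannot_links:
        K[i, j] = K[j, i] = 0.0      # full belief in the cannot-link
    return K

# Toy similarity matrix over four points on a line.
K = np.exp(-np.abs(np.subtract.outer(np.arange(4.0), np.arange(4.0))))
K2 = apply_links(K, must_links=[(0, 3)], cannot_links=[(1, 2)])
```

Because only the kernel matrix changes, the downstream clustering step remains an analytic eigendecomposition, which is what keeps the semi-supervised variant computationally efficient.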
Affiliation(s)
- Gang Niu
- Tokyo Institute of Technology, Tokyo, Japan.
15. Sugiyama M, Niu G, Yamada M, Kimura M, Hachiya H. Information-maximization clustering based on squared-loss mutual information. Neural Comput 2013; 26:84-131. PMID: 24102125. DOI: 10.1162/neco_a_00534.
Abstract
Information-maximization clustering learns a probabilistic classifier in an unsupervised manner so that mutual information between feature vectors and cluster assignments is maximized. A notable advantage of this approach is that it involves only continuous optimization of model parameters, which is substantially simpler than discrete optimization of cluster assignments. However, existing methods still involve nonconvex optimization problems, and therefore finding a good local optimal solution is not straightforward in practice. In this letter, we propose an alternative information-maximization clustering method based on a squared-loss variant of mutual information. This novel approach gives a clustering solution analytically in a computationally efficient way via kernel eigenvalue decomposition. Furthermore, we provide a practical model selection procedure that allows us to objectively optimize tuning parameters included in the kernel function. Through experiments, we demonstrate the usefulness of the proposed approach.
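The "clustering solution via eigendecomposition" idea can be sketched as follows: take the leading eigenvectors of a Gaussian kernel matrix and assign each point to the eigenvector carrying its largest-magnitude entry. This omits SMIC's kernel normalisation and SMI-based model selection; the kernel width and the synthetic data are assumptions.

```python
import numpy as np

def cluster_by_eigendecomposition(X, n_clusters, sigma=1.0):
    # Leading eigenvectors of a Gaussian kernel matrix; each point is assigned
    # to the eigenvector holding its largest-magnitude entry.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    _, vecs = np.linalg.eigh(K)            # eigenvalues in ascending order
    top = vecs[:, -n_clusters:]            # eigenvectors of the largest eigenvalues
    return np.argmax(np.abs(top), axis=1)

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-3.0, 0.5, (50, 2)),
               rng.normal(3.0, 0.5, (50, 2))])   # two well-separated blobs
labels = cluster_by_eigendecomposition(X, 2)
```

For well-separated blobs the kernel matrix is nearly block diagonal, so the leading eigenvectors act as (approximate) cluster indicators; the full method additionally tunes sigma by maximizing an SMI-based criterion, which is the model selection step the abstract highlights.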