1
Retamales G, Gavidia ME, Bausch B, Montanari AN, Husch A, Goncalves J. Towards automatic home-based sleep apnea estimation using deep learning. NPJ Digit Med 2024; 7:144. [PMID: 38824175] [DOI: 10.1038/s41746-024-01139-z]
Abstract
Apnea and hypopnea are common sleep disorders characterized by the obstruction of the airways. Polysomnography (PSG) is a sleep study typically used to compute the Apnea-Hypopnea Index (AHI), the number of times a person has apnea or certain types of hypopnea per hour of sleep, and diagnose the severity of the sleep disorder. Early detection and treatment of apnea can significantly reduce morbidity and mortality. However, long-term PSG monitoring is unfeasible as it is costly and uncomfortable for patients. To address these issues, we propose a method, named DRIVEN, to estimate AHI at home from wearable devices and detect when apnea, hypopnea, and periods of wakefulness occur throughout the night. The method can therefore assist physicians in diagnosing the severity of apneas. Patients can wear a single sensor or a combination of sensors that can be easily measured at home: abdominal movement, thoracic movement, or pulse oximetry. For example, using only two sensors, DRIVEN correctly classifies 72.4% of all test patients into one of the four AHI classes, with 99.3% either correctly classified or placed one class away from the true one. This is a reasonable trade-off between the model's performance and the patient's comfort. We use publicly available data from three large sleep studies with a total of 14,370 recordings. DRIVEN consists of a combination of deep convolutional neural networks and a light-gradient-boost machine for classification. It can be implemented for automatic estimation of AHI in unsupervised long-term home monitoring systems, reducing costs to healthcare systems and improving patient care.
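The final step that DRIVEN automates can be illustrated with a minimal sketch: converting detected apnea/hypopnea events into an AHI value and one of the standard clinical severity classes. The thresholds below are the conventional AHI categories; the detection pipeline itself (CNNs plus light-gradient-boost machine) is not reproduced here.

```python
def ahi_class(events: int, hours_of_sleep: float) -> tuple[float, str]:
    """Compute AHI (events per hour of sleep) and map it to a severity class."""
    ahi = events / hours_of_sleep
    if ahi < 5:
        label = "normal"
    elif ahi < 15:
        label = "mild"
    elif ahi < 30:
        label = "moderate"
    else:
        label = "severe"
    return ahi, label

print(ahi_class(56, 7.0))  # 56 events over 7 h -> AHI of 8, "mild"
```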
Affiliation(s)
- Gabriela Retamales
- Luxembourg Centre for Systems Biomedicine, University of Luxembourg, L-4367, Belvaux, Luxembourg
- Marino E Gavidia
- Luxembourg Centre for Systems Biomedicine, University of Luxembourg, L-4367, Belvaux, Luxembourg
- Ben Bausch
- Luxembourg Centre for Systems Biomedicine, University of Luxembourg, L-4367, Belvaux, Luxembourg
- Arthur N Montanari
- Luxembourg Centre for Systems Biomedicine, University of Luxembourg, L-4367, Belvaux, Luxembourg
- Department of Physics and Astronomy, Northwestern University, Evanston, IL, 60208, USA
- Andreas Husch
- Luxembourg Centre for Systems Biomedicine, University of Luxembourg, L-4367, Belvaux, Luxembourg
- Jorge Goncalves
- Luxembourg Centre for Systems Biomedicine, University of Luxembourg, L-4367, Belvaux, Luxembourg.
- Department of Plant Sciences, University of Cambridge, Cambridge, CB2 3EA, UK.
2
Tu J, Yan J, Ji X, Liu Q, Qing X. Damage Severity Assessment of Multi-Layer Complex Structures Based on a Damage Information Extraction Method with Ladder Feature Mining. Sensors (Basel) 2024; 24:2950. [PMID: 38733057] [PMCID: PMC11086110] [DOI: 10.3390/s24092950]
Abstract
Multi-layer complex structures are widely used in large-scale engineering structures because of their diverse combinations of properties and excellent overall performance. However, multi-layer complex structures are prone to interlaminar debonding damage during use. Therefore, it is necessary to monitor debonding damage in engineering applications to determine structural integrity. In this paper, a damage information extraction method with ladder feature mining for Lamb waves is proposed. The method is able to optimize and screen effective damage information through ladder-type damage extraction. It is suitable for evaluating the severity of debonding damage in aluminum-foamed silicone rubber, a novel multi-layer complex structure. The proposed method contains ladder feature mining stages of damage information selection and damage feature fusion, realizing a multi-level damage information extraction process from coarse to fine. The results show that the accuracy of damage severity assessment by the damage information extraction method with ladder feature mining is improved by more than 5% compared to other methods. The effectiveness and accuracy of the method in assessing the damage severity of multi-layer complex structures are demonstrated, providing a new perspective and solution for damage monitoring of multi-layer complex structures.
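The coarse-to-fine "ladder" idea described above (first screen candidate damage features, then fuse the survivors into a refined feature) can be sketched as follows. This is a heavily simplified illustration under assumed choices: the correlation-based ranking criterion, the averaging fusion rule, and the function names are not the paper's actual method.

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def ladder_extract(features, labels, keep=2):
    """features: dict name -> per-specimen values; labels: damage severities.

    Stage 1 (coarse): keep the features most correlated with severity.
    Stage 2 (fine): fuse the survivors into a single damage feature.
    """
    ranked = sorted(features, key=lambda n: abs(pearson(features[n], labels)),
                    reverse=True)
    survivors = ranked[:keep]
    n = len(labels)
    return [mean(features[name][i] for name in survivors) for i in range(n)]
```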
Affiliation(s)
- Xinlin Qing
- School of Aerospace Engineering, Xiamen University, Xiamen 361102, China; (J.T.); (J.Y.); (X.J.); (Q.L.)
3
Shankarnarayan SA, Charlebois DA. Machine learning to identify clinically relevant Candida yeast species. Med Mycol 2024; 62:myad134. [PMID: 38130236] [DOI: 10.1093/mmy/myad134]
Abstract
Fungal infections, especially those due to Candida species, are on the rise. Multi-drug resistant organisms such as Candida auris are difficult and time-consuming to identify accurately. Machine learning is increasingly being used in health care, especially in medical imaging. In this study, we evaluated the effectiveness of six convolutional neural networks (CNNs) at identifying four clinically important Candida species. Wet-mounted images were captured using bright-field live-cell microscopy; single-cell, budding-cell, and cell-group images were then separated and subjected to different machine learning algorithms (a custom CNN, VGG16, ResNet50, InceptionV3, EfficientNetB0, and EfficientNetB7) to learn and predict Candida species. Among the six algorithms tested, the InceptionV3 model performed best at predicting Candida species from microscopy images. All models performed poorly on raw images obtained directly from the microscope, but the performance of all models increased when they were trained on single-cell and budding-cell images. The InceptionV3 model identified budding cells of C. albicans, C. auris, C. glabrata (Nakaseomyces glabrata), and C. haemulonii in 97.0%, 74.0%, 68.0%, and 66.0% of cases, respectively. For single cells of the same species, InceptionV3 identified 97.0%, 73.0%, 69.0%, and 73.0% of cases, respectively. The sensitivity and specificity of InceptionV3 were 77.1% and 92.4%, respectively. Overall, this study provides proof of concept that microscopy images from wet-mounted slides can be used to identify Candida yeast species quickly and accurately using machine learning.
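The sensitivity and specificity figures reported above are standard confusion-matrix statistics. A minimal sketch of how such values are computed from counts of true/false positives and negatives (this is the textbook definition, not the authors' code):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts chosen only to mirror the reported percentages.
sens, spec = sensitivity_specificity(tp=77, fn=23, tn=92, fp=8)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")
```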
Affiliation(s)
- Daniel A Charlebois
- Department of Physics, University of Alberta, Edmonton, Alberta, T6G-2E1, Canada
- Department of Physics, Department of Biological Sciences, University of Alberta, Edmonton, Alberta, T6G-2E9, Canada
4
Lin N, Wu S, Ji S. A Morphologically Individualized Deep Learning Brain Injury Model. J Neurotrauma 2023; 40:2233-2247. [PMID: 37212255] [DOI: 10.1089/neu.2022.0413]
Abstract
The brain injury modeling community has recommended improving model subject specificity and simulation efficiency. Here, we extend an instantaneous (<1 s) convolutional neural network (CNN) brain model based on the anisotropic Worcester Head Injury Model (WHIM) V1.0 to account for strain differences due to individual morphological variations. Linear scaling factors relative to the generic WHIM along the three anatomical axes are used as additional CNN inputs. To generate training samples, the WHIM is randomly scaled and paired with augmented head impacts randomly generated from real-world data for simulation. An estimation of the voxelized peak maximum principal strain of the whole brain is considered successful when the linear regression slope and Pearson's correlation coefficient relative to the directly simulated strain do not deviate from 1.0 (their value when the two are identical) by more than 0.1. Despite a modest training dataset (N = 1363 vs. ~5.7 k previously), the individualized CNN achieves a success rate of 86.2% in cross-validation for scaled model responses, and 92.1% in independent generic model testing for impacts considered complete captures of kinematic events. Using 11 scaled subject-specific models (with scaling factors determined from pre-established regression models based on head dimensions, sex, and age, and notably without neuroimages), the morphologically individualized CNN remains accurate for impacts that also yield successful estimations for the generic WHIM. The individualized CNN instantly estimates subject-specific and spatially detailed peak strains of the entire brain, superseding models that report only a scalar peak strain value incapable of indicating where the peak occurs. This tool could be especially useful for youths and females, whose morphological differences relative to the generic model are anticipated to be greater, even without the need for individual neuroimages. It has potential for a wide range of injury mitigation applications and the design of head protective gear. The voxelized strains also allow for convenient data sharing and promote collaboration among research groups.
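The success criterion stated in the abstract (regression slope and Pearson correlation between estimated and directly simulated strains both within 0.1 of 1.0) can be sketched directly; the helper name is illustrative:

```python
from statistics import mean

def success(estimated, simulated, tol=0.1):
    """True if regression slope and Pearson r vs. simulation are within tol of 1.0."""
    mx, my = mean(simulated), mean(estimated)
    sxx = sum((x - mx) ** 2 for x in simulated)
    sxy = sum((x - mx) * (y - my) for x, y in zip(simulated, estimated))
    syy = sum((y - my) ** 2 for y in estimated)
    slope = sxy / sxx                  # regression of estimated on simulated
    r = sxy / (sxx * syy) ** 0.5       # Pearson correlation coefficient
    return abs(slope - 1.0) <= tol and abs(r - 1.0) <= tol
```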
Affiliation(s)
- Nan Lin
- Department of Biomedical Engineering, Worcester Polytechnic Institute, Worcester, Massachusetts, USA
- Shaoju Wu
- Department of Biomedical Engineering, Worcester Polytechnic Institute, Worcester, Massachusetts, USA
- Songbai Ji
- Department of Biomedical Engineering, Worcester Polytechnic Institute, Worcester, Massachusetts, USA
- Department of Mechanical Engineering, Worcester Polytechnic Institute, Worcester, Massachusetts, USA
5
Hernández-Hernández DJ, Perez-Lizaur AB, Palacios-González B, Morales-Luna G. Machine learning accurately predicts food exchange list and the exchangeable portion. Front Nutr 2023; 10:1231873. [PMID: 37637952] [PMCID: PMC10449541] [DOI: 10.3389/fnut.2023.1231873]
Abstract
Introduction: Food Exchange Lists (FELs) are a user-friendly tool developed to help individuals adopt healthy eating habits and follow a specific diet plan. Given the rapidly increasing number of new products and newly accessible foods, one of the biggest challenges for FELs is becoming outdated. Supervised machine learning algorithms could facilitate keeping FELs up to date. The present study aimed to generate an algorithm that predicts a food's classification and calculates its equivalent portion.
Methods: Data mining techniques were used to generate the algorithm: processing and analyzing the information to find patterns, trends, or repetitive rules that explain the behavior of the data in a food database. The problem was approached through a vector formulation (over 9 nutrient dimensions), which led to classifiers such as Spherical K-Means (SKM); developing this idea, the limits of the classifier were smoothed with a Multilayer Perceptron (MLP). These were compared with two other machine learning algorithms, Random Forest and XGBoost.
Results: The algorithm proposed in this study can classify and calculate the equivalent portion of a single food or a list of foods. It categorizes more than one thousand foods with a confidence level of 97% within the first three predicted classes. The algorithm also indicates which foods exceed the established limits on sodium, sugar, and/or fat content and shows their equivalents.
Discussion: Accurate and robust FELs could improve implementation of, and adherence to, the recommended diet. Compared with manual categorization and calculation, machine learning has several advantages: it reduces the time needed to categorize foods and compute equivalent portions for many food products. Since food composition databases of various populations are accessible, our algorithm could be adapted and applied to other databases, offering an even greater diversity of regional products and foods. In conclusion, machine learning is a promising method for automating the generation of FELs. This study provides evidence of a large-scale, accurate, real-time processing algorithm that can be useful for designing meal plans tailored to the foods consumed by a population. Our model allowed us not only to distinguish and classify foods within a group or subgroup but also to calculate an equivalent food. As a neural network, the model could be trained on other food databases and thus improve its predictive capacity. Although the performance of the SKM model was lower than that of the other classifiers, our model selects an equivalent food not from a group previously classified by machine learning but with a fully interpretable algorithm, cosine similarity, for comparing foods.
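The interpretable matching step the discussion describes can be sketched as follows: each food is a 9-dimensional nutrient vector, and an equivalent food is the one with the highest cosine similarity, the same angular measure that underlies spherical k-means. The function names and toy 3-dimensional vectors are illustrative only.

```python
def cosine(u, v):
    """Cosine similarity between two nutrient vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def closest_food(query, foods):
    """foods: dict name -> nutrient vector; returns the most similar food."""
    return max(foods, key=lambda name: cosine(query, foods[name]))
```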
Affiliation(s)
- Ana Bertha Perez-Lizaur
- Departamento de Salud, Universidad Iberoamericana Ciudad de México, Ciudad de México, Mexico
- Berenice Palacios-González
- Laboratorio de Envejecimiento Saludable, Centro de Investigación Sobre Envejecimiento (CIE-CINVESTAV Sur), Instituto Nacional de Medicina Genómica, Ciudad de México, Mexico
- Gesuri Morales-Luna
- Departamento de Física y Matemáticas, Universidad Iberoamericana Ciudad de México, Ciudad de México, Mexico
6
Zhang Y, Che J, Hu Y, Cui J, Cui J. Real-Time Ocean Current Compensation for AUV Trajectory Tracking Control Using a Meta-Learning and Self-Adaptation Hybrid Approach. Sensors (Basel) 2023; 23:6417. [PMID: 37514711] [PMCID: PMC10386089] [DOI: 10.3390/s23146417]
Abstract
Autonomous underwater vehicles (AUVs) may deviate from their predetermined trajectory in underwater currents due to the complex effects of hydrodynamics on their maneuverability. Model-based control methods are commonly employed to address this problem, but they suffer from time-varying parameters and inaccurate mathematical models. To address these limitations, a meta-learning and self-adaptation hybrid approach is proposed in this paper to enable an underwater robot to adapt to ocean currents. Instead of using a traditional complex mathematical model, a deep neural network (DNN) serving as the basis function is trained offline to learn a high-order hydrodynamic model; a set of linear coefficients is then adjusted dynamically by an adaptive law online. By combining these two strategies for real-time thrust compensation, the proposed method leverages the potent representational capacity of the DNN along with the rapid response of adaptive control. In simulations, this combination achieves a significant enhancement in tracking performance over alternative controllers, substantiating that the AUV can adapt to new ocean-current speeds.
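The two-level structure the abstract describes (an offline-learned basis combined with online-adapted linear coefficients) can be sketched with a toy stand-in. Here a fixed polynomial basis replaces the offline-trained DNN, and a simple least-mean-squares update replaces the paper's adaptive law; the vehicle dynamics and the actual law are not reproduced.

```python
def basis(x):
    """Toy fixed basis, standing in for the offline-trained DNN outputs."""
    return [1.0, x, x * x]

def adapt(samples, lr=0.1, epochs=2000):
    """Online gradient (LMS) update of linear coefficients over the basis."""
    a = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, target in samples:
            phi = basis(x)
            err = sum(ai * pi for ai, pi in zip(a, phi)) - target
            a = [ai - lr * err * pi for ai, pi in zip(a, phi)]
    return a
```

With measurements of an unknown disturbance f(x) = 2x, the adapted coefficients reproduce the disturbance closely at the training points, illustrating how the linear layer tracks a changing environment while the basis stays fixed.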
Affiliation(s)
- Yiqiang Zhang
- College of Computer Science and Technology, Jilin University, Changchun 130012, China
- Jiaxing Che
- Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China (UESTC), Shenzhen 518110, China
- Yijun Hu
- School of Automation Science and Engineering, South China University of Technology, Guangzhou 510641, China
- Jiankuo Cui
- College of Computer Science and Technology, Jilin University, Changchun 130012, China
- Junhong Cui
- College of Computer Science and Technology, Jilin University, Changchun 130012, China
- Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China (UESTC), Shenzhen 518110, China
7
Midya A, Chakraborty J, Srouji R, Narayan RR, Boerner T, Zheng J, Pak LM, Creasy JM, Escobar LA, Harrington KA, Gonen M, D'Angelica MI, Kingham TP, Do RKG, Jarnagin WR, Simpson AL. Computerized Diagnosis of Liver Tumors From CT Scans Using a Deep Neural Network Approach. IEEE J Biomed Health Inform 2023; 27:2456-2464. [PMID: 37027632] [PMCID: PMC10245221] [DOI: 10.1109/jbhi.2023.3248489]
Abstract
The liver is a frequent site of benign and malignant, primary and metastatic tumors. Hepatocellular carcinoma (HCC) and intrahepatic cholangiocarcinoma (ICC) are the most common primary liver cancers, and colorectal liver metastasis (CRLM) is the most common secondary liver cancer. Although imaging characterization of these tumors is central to optimal clinical management, it relies on imaging features that are often non-specific, overlapping, and subject to inter-observer variability. Thus, in this study, we aimed to categorize liver tumors automatically from CT scans using a deep learning approach that objectively extracts discriminating features not visible to the naked eye. Specifically, we used a modified Inception v3 network-based classification model to classify HCC, ICC, CRLM, and benign tumors from pretreatment portal venous phase computed tomography (CT) scans. Using a multi-institutional dataset of 814 patients, this method achieved an overall accuracy of 96% on an independent dataset, with sensitivities of 96%, 94%, 99%, and 86% for HCC, ICC, CRLM, and benign tumors, respectively. These results demonstrate the feasibility of the proposed computer-assisted system as a novel non-invasive diagnostic tool for objectively classifying the most common liver tumors.
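The per-class sensitivities and overall accuracy quoted above are derived from a multi-class confusion matrix (rows = true labels, columns = predictions). A minimal sketch of those standard definitions, not the authors' evaluation code, with a hypothetical matrix:

```python
def per_class_sensitivity(confusion):
    """Sensitivity (recall) per class: correct / total true cases of that class."""
    return [row[i] / sum(row) for i, row in enumerate(confusion)]

def overall_accuracy(confusion):
    """Fraction of all cases on the diagonal (correctly classified)."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total
```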