1.
Carpal Tunnel Syndrome Automated Diagnosis: A Motor vs. Sensory Nerve Conduction-Based Approach. Bioengineering (Basel) 2024; 11:175. PMID: 38391661; PMCID: PMC10886232; DOI: 10.3390/bioengineering11020175.
Abstract
The objective of this study was to evaluate the effectiveness of machine learning classification techniques applied to nerve conduction studies (NCS) of motor and sensory signals for the automatic diagnosis of carpal tunnel syndrome (CTS). Two methodologies were tested. In the first, motor signals recorded from the patients' median nerve were transformed into time-frequency spectrograms using the short-time Fourier transform (STFT). These spectrograms were then used as input to a deep two-dimensional convolutional neural network (CONV2D) for classification into two categories: patients and controls. In the second, sensory signals from the patients' median and ulnar nerves were subjected to multilevel wavelet decomposition (MWD), and statistical and non-statistical features were extracted from the decomposed signals. These features were utilized to train and test classifiers. The classification target was set to three categories, based on conventional electrodiagnosis results: normal subjects (controls), patients with mild CTS, and patients with moderate to severe CTS. The classification analysis demonstrated that both methodologies surpassed previous attempts at automatic CTS diagnosis. The classification models utilizing the motor signals transformed into time-frequency spectrograms exhibited excellent performance, with an average accuracy of 94%. Similarly, the classifiers based on the sensory signals and the features extracted by multilevel wavelet decomposition distinguished between controls, patients with mild CTS, and patients with moderate to severe CTS with an accuracy of 97.1%. The findings highlight the efficacy of incorporating machine learning algorithms into the diagnostic processes of NCS, providing a valuable tool for clinicians in the diagnosis and management of neuropathies such as CTS.
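The spectrogram front end described in this abstract can be illustrated with a short sketch. This is an assumption-laden illustration, not the authors' code: the sampling rate, window length, and overlap below are hypothetical choices, and `scipy.signal.stft` stands in for whatever STFT implementation the study used.

```python
import numpy as np
from scipy.signal import stft

def signal_to_spectrogram(x, fs=10000.0, nperseg=64, noverlap=32):
    """Convert a 1-D nerve-conduction trace into a normalized
    log-magnitude time-frequency spectrogram usable as 2-D CNN input."""
    f, t, Z = stft(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
    S = 20 * np.log10(np.abs(Z) + 1e-12)  # dB scale; epsilon avoids log(0)
    # Normalize to [0, 1] so all inputs share one intensity range
    S = (S - S.min()) / (S.max() - S.min() + 1e-12)
    return S  # shape: (frequency bins, time frames)

# Example: a synthetic 20 ms damped "motor response" sampled at 10 kHz
fs = 10000.0
t = np.arange(0, 0.02, 1.0 / fs)
x = np.sin(2 * np.pi * 300 * t) * np.exp(-t / 0.005)
S = signal_to_spectrogram(x, fs=fs)
print(S.shape)
```

Each such image would then be fed to the two-dimensional convolutional network for the patients-versus-controls decision.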
2.
Contribution of Deep Learning in the Investigation of Possible Dual LOX-3 Inhibitors/DPPH Scavengers: The Case of Recently Synthesized Compounds. Bioengineering (Basel) 2022; 9:800. PMID: 36551006; PMCID: PMC9774961; DOI: 10.3390/bioengineering9120800.
Abstract
Even though non-steroidal anti-inflammatory drugs are the most effective treatment for inflammatory conditions, they have been linked to negative side effects. A promising approach to mitigating the potential risks is the development of new compounds able to combine anti-inflammatory with antioxidant activity, enhancing efficacy while reducing toxicity. The implication of reactive oxygen species in inflammatory conditions has been extensively studied, based on the pro-inflammatory properties of generated free radicals. Drugs with dual activity (i.e., inhibiting inflammation-related enzymes such as LOX-3 and scavenging free radicals such as DPPH) could find various therapeutic applications, for example in cardiovascular or neurodegenerative disorders. The challenge we embarked on using deep learning was the creation of appropriate classification and regression models to discriminate pharmacological activity and selectivity, as well as to discover future compounds with dual activity prior to synthesis. An accurate filter algorithm was established, based on knowledge from compounds already evaluated in vitro, that can separate compounds with low, moderate, or high activity. In this study, we constructed a customized, highly effective one-dimensional convolutional neural network (CONV1D), with accuracy scores up to 95.2%, that was able to identify dual-active compounds, being LOX-3 inhibitors and DPPH scavengers, as an indication of simultaneous anti-inflammatory and antioxidant activity. Additionally, we created a highly accurate regression model that predicted the exact effectiveness value of a set of recently synthesized compounds with anti-inflammatory activity, scoring a root mean square error of 0.8. Finally, we were able to observe how those newly synthesized compounds differ from each other with respect to a specific pharmacological target, using deep learning algorithms.
3.
Prediction of programmed ventricular stimulation inducibility using machine learning in post-myocardial infarction patients at risk for sudden cardiac arrest with preserved ejection fraction ≥40%. Eur Heart J 2022. DOI: 10.1093/eurheartj/ehac544.681.
Abstract
Introduction
Sudden cardiac death (SCD) in post-myocardial infarction (post-MI) patients with a relatively preserved left ventricular ejection fraction (LVEF ≥40%) has an annual incidence of 1%. In the PRESERVE-EF study, we used a two-step SCD risk stratification approach to detect patients with LVEF ≥40% at risk for major arrhythmic events. Seven noninvasive risk factors (NIRFs) were extracted from ambulatory electrocardiography. Patients with at least one NIRF present were referred for invasive programmed ventricular stimulation (PVS), and inducible patients received an implantable cardioverter-defibrillator (ICD).
Purpose
The present study examines the performance of machine learning for the prediction of the inducible patients in the PRESERVE-EF study.
Methods
After first-step screening with NIRFs, 152 of 575 patients underwent PVS, and 41 of them were inducible. For the present analysis, data from these 152 patients were analysed. We used machine learning on the NIRFs to predict the inducible high-risk patients. After experimentation with several classifiers, we selected the Nearest Neighbour (NN) algorithm, which classifies each subject according to the class of its N nearest neighbours. For each subject, we created a vector with the following 7 features: SAECG late potentials; ventricular premature beats ≥30/hour; non-sustained ventricular tachycardia ≥1 episode(s)/24 hours; Fridericia-corrected QT interval ≥450 ms; SDNN (HRV) ≤75 ms; T-wave alternans ≥65 μV; and the combined criterion of deceleration capacity (DC) ≤4.5 ms, heart rate turbulence onset (TO) ≥0%, and heart rate turbulence slope (TS) ≤2.5 ms.
Results
The achieved accuracy reached up to 72.2% when N was set to 7; results were similar for other values of N. In total there were 144 samples, 41 of which were inducible high-risk patients. To ensure independence of the train and test sets, we employed 10-fold cross-validation.
Conclusions
Patients inducible on PVS in the PRESERVE-EF study were predicted with machine learning classification of the NIRFs.
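As a toy illustration of the nearest-neighbour step over binary NIRF vectors (not the study's code: the data, the decision rule generating labels, and the distance choice below are all synthetic assumptions):

```python
import numpy as np

def knn_predict(train_X, train_y, x, k=7):
    """Classify x by majority vote among its k nearest training vectors
    (Euclidean distance over 7 binary risk-factor features)."""
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(d)[:k]          # indices of the k closest subjects
    votes = train_y[nearest]
    return int(np.round(votes.mean()))   # majority class for 0/1 labels

# Toy data: rows are 7 binary risk-factor indicators, label 1 = inducible
rng = np.random.default_rng(0)
train_X = rng.integers(0, 2, size=(40, 7)).astype(float)
train_y = (train_X.sum(axis=1) >= 4).astype(int)  # synthetic labeling rule
x = np.ones(7)  # a subject with all risk factors present
print(knn_predict(train_X, train_y, x))
```

With k = 7 this mirrors the N = 7 setting reported in the results, though the real study worked on the clinical feature vectors rather than this synthetic sample.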
Funding Acknowledgement
Type of funding sources: None.
4.
Automatic Electrodiagnosis of Carpal Tunnel Syndrome Using Machine Learning. Bioengineering (Basel) 2021; 8:181. PMID: 34821747; PMCID: PMC8615235; DOI: 10.3390/bioengineering8110181.
Abstract
Recent literature has revealed a long discussion about the importance and necessity of nerve conduction studies in carpal tunnel syndrome management. The purpose of this study was to investigate the possibility of automatically detecting median nerve mononeuropathy and supporting decision making about carpal tunnel syndrome, based on common electrodiagnostic criteria used in everyday clinical practice as well as on new features selected based on physiology and mathematics. The study included 38 volunteers, examined prospectively. Machine learning techniques were used to combine the examined characteristics for a stable and accurate diagnosis. Automatic electrodiagnosis reached an accuracy of 95% compared to the standard neurophysiological diagnosis of the physicians with nerve conduction studies, and 89% compared to the clinical diagnosis. The results show that the automatic detection of carpal tunnel syndrome is possible and can be employed in decision making, excluding human error. It is also shown that the novel features investigated can be used for the detection of the syndrome, complementary to the commonly used ones, increasing the accuracy of the method.
5.
A Two-Steps-Ahead Estimator for Bubble Entropy. Entropy (Basel) 2021; 23:761. PMID: 34208771; PMCID: PMC8235094; DOI: 10.3390/e23060761.
Abstract
Aims: Bubble entropy (bEn) is an entropy metric with a limited dependence on parameters. bEn does not directly quantify the conditional entropy of the series, but it assesses the change in entropy of the ordering of portions of its samples of length m, when adding an extra element. The analytical formulation of bEn for autoregressive (AR) processes shows that, for this class of processes, the relation between the first autocorrelation coefficient and bEn changes for odd and even values of m. While this is not an issue, per se, it triggered ideas for further investigation. Methods: Using theoretical considerations on the expected values for AR processes, we examined a two-steps-ahead estimator of bEn, which considered the cost of ordering two additional samples. We first compared it with the original bEn estimator on a simulated series. Then, we tested it on real heart rate variability (HRV) data. Results: The experiments showed that both examined alternatives showed comparable discriminating power. However, for values of 10
6.
Estimation of HRV Based on Low Frequency Data Transmission. Stud Health Technol Inform 2020; 273:255-257. PMID: 33087622; DOI: 10.3233/shti200651.
Abstract
Smart devices, including the popular smart watches, often collect information on the heart beat rhythm and transmit it to a central server for storage or further processing. A factor introducing important limitations in the amount of data collected, transmitted and finally processed is the battery life of the mobile device or smart watch. Some devices transmit the mean heart rate over relatively long periods of time to save power. Heart rate variability (HRV) analysis gives useful information about the human heart by examining only the heart rate time series. Its discriminating capability is affected by the amount of information available for processing; ideally, the whole RR interval time series should be used. We investigate here how this discriminating capability is affected when the analysis is based on mean heart rate values transmitted over relatively long time periods. We show that useful information can still be obtained, and the discriminating power remains remarkable, even when the amount of available data is relatively small.
7.
Abstract
Objective: A critical point in any definition of entropy is the selection of the parameters employed to obtain an estimate in practice. We propose a new definition of entropy aiming to reduce the significance of this selection. Methods: We call the new definition Bubble Entropy. Bubble Entropy is based on permutation entropy, where the vectors in the embedding space are ranked. We use the bubble sort algorithm for the ordering procedure and count instead the number of swaps performed for each vector. Doing so, we create a more coarse-grained distribution and then compute the entropy of this distribution. Results: Experimental results with both real and synthetic HRV signals showed that bubble entropy presents remarkable stability and exhibits increased descriptive and discriminating power compared to all other definitions, including the most popular ones. Conclusion: The definition proposed is almost free of parameters. The most common ones are the scale factor r and the embedding dimension m. In our definition, the scale factor is totally eliminated and the importance of m is significantly reduced. The proposed method presents increased stability and discriminating power. Significance: After the extensive use of some entropy measures in physiological signals, typical values for their parameters have been suggested, or at least widely used. However, the parameters are still there, application and dataset dependent, influencing the computed value and affecting the descriptive power. Reducing their significance or eliminating them alleviates the problem, decoupling the method from the data and the application, and eliminating subjective factors.
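A minimal sketch of the swap-counting idea, written from the description above: each embedded vector is scored by the number of bubble-sort swaps needed to order it, and the entropy of the swap-count distribution is compared between dimensions m and m + 1. The Rényi order-2 entropy, the normalization term, and the parameter choices here are this sketch's assumptions, not the paper's reference implementation.

```python
import math
from collections import Counter

def bubble_sort_swaps(v):
    """Number of swaps bubble sort needs to order v ascending."""
    v = list(v)
    swaps = 0
    for i in range(len(v)):
        for j in range(len(v) - 1 - i):
            if v[j] > v[j + 1]:
                v[j], v[j + 1] = v[j + 1], v[j]
                swaps += 1
    return swaps

def renyi2(counts, total):
    """Rényi entropy of order 2 of the swap-count distribution."""
    return -math.log(sum((c / total) ** 2 for c in counts.values()))

def bubble_entropy(x, m=10):
    """Entropy growth of the swap distribution when the embedding
    dimension goes from m to m + 1 (assumed normalization)."""
    def H(mm):
        swaps = [bubble_sort_swaps(x[i:i + mm]) for i in range(len(x) - mm + 1)]
        return renyi2(Counter(swaps), len(swaps))
    return (H(m + 1) - H(m)) / math.log((m + 1) / (m - 1))

# Deterministic test series (pseudo-random but reproducible)
series = [((i * 7919) % 101) / 101 for i in range(300)]
print(round(bubble_entropy(series, m=10), 3))
```

Note how the scale factor r never appears: only the ordering of samples matters, which is the parameter reduction the abstract emphasizes.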
8.
Low Computational Cost for Sample Entropy. Entropy (Basel) 2018; 20:61. PMID: 33265148; PMCID: PMC7512258; DOI: 10.3390/e20010061.
Abstract
Sample Entropy is the most popular definition of entropy and is widely used as a measure of the regularity/complexity of a time series. On the other hand, it is a computationally expensive method which may require a large amount of time when used in long series or with a large number of signals. The computationally intensive part is the similarity check between points in m dimensional space. In this paper, we propose new algorithms or extend already proposed ones, aiming to compute Sample Entropy quickly. All algorithms return exactly the same value for Sample Entropy, and no approximation techniques are used. We compare and evaluate them using cardiac inter-beat (RR) time series. We investigate three algorithms. The first one is an extension of the kd-trees algorithm, customized for Sample Entropy. The second one is an extension of an algorithm initially proposed for Approximate Entropy, again customized for Sample Entropy, but also improved to present even faster results. The last one is a completely new algorithm, presenting the fastest execution times for specific values of m, r, time series length, and signal characteristics. These algorithms are compared with the straightforward implementation, directly resulting from the definition of Sample Entropy, in order to give a clear image of the speedups achieved. All algorithms assume the classical approach to the metric, in which the maximum norm is used. The key idea of the two last suggested algorithms is to avoid unnecessary comparisons by detecting them early. We use the term unnecessary to refer to those comparisons for which we know a priori that they will fail at the similarity check. The number of avoided comparisons is proved to be very large, resulting in an analogous large reduction of execution time, making them the fastest algorithms available today for the computation of Sample Entropy.
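The "avoid unnecessary comparisons" idea can be shown in a naive Sample Entropy routine: under the max norm, the similarity check can stop at the first coordinate pair exceeding r, skipping work that is known a priori to fail. This is a simplified illustration of that one idea, not the paper's optimized algorithms (kd-trees and the bucket-based scheme are far more elaborate); m, r, and the test series are arbitrary.

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """Naive Sample Entropy with the max (Chebyshev) norm.
    The innermost loop aborts a similarity check as soon as one
    coordinate pair differs by more than r."""
    n = len(x)
    def count_matches(mm):
        count = 0
        for i in range(n - mm):
            for j in range(i + 1, n - mm):
                for k in range(mm):                  # early abort on first miss
                    if abs(x[i + k] - x[j + k]) > r:
                        break
                else:                                # no break: all mm pairs matched
                    count += 1
        return count
    B = count_matches(m)       # template matches of length m
    A = count_matches(m + 1)   # template matches of length m + 1
    return -math.log(A / B) if A > 0 and B > 0 else float("inf")

series = [math.sin(0.5 * i) for i in range(200)]
print(round(sample_entropy(series, m=2, r=0.2), 3))
```

The exact value returned matches the straightforward definition; only the order and number of elementary comparisons change, which is what the paper's speedups exploit at scale.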
9.
P5521 Impaired circadian properties of Beat to Beat Deceleration Capacity of heart rate predict arrhythmic events in heart failure patients. Eur Heart J 2017. DOI: 10.1093/eurheartj/ehx493.p5521.
10.
Deceleration Capacity of Heart Rate Predicts Arrhythmic and Total Mortality in Heart Failure Patients. Ann Noninvasive Electrocardiol 2016; 21:508-18. PMID: 27038287; DOI: 10.1111/anec.12343.
Abstract
BACKGROUND Deceleration capacity (DC) of heart rate has proved to be an independent mortality predictor in post-myocardial infarction patients. The original method (DCorig) may produce negative values (9% in our analyzed sample). We aimed to improve the method and to investigate whether DC also predicts arrhythmic mortality. METHODS Time series from 221 heart failure patients were analyzed with DCorig and a new variant, DCsgn, in which decelerations are characterized based on windows of four consecutive beats and not on anchors. After 41.2 months, 69 patients experienced sudden cardiac death (SCD) surrogate end points, while 61 died. RESULTS (SCD+ vs SCD- group) DCorig: 3.7 ± 1.6 ms versus 4.6 ± 2.6 ms (P = 0.020) and DCsgn: 4.9 ± 1.7 ms versus 6.1 ± 2.2 ms (P < 0.001). After Cox regression (gender, age, left ventricular ejection fraction, filtered QRS, NSVT ≥1/24 h, VPBs ≥240/24 h, mean 24-h QTc, and each DC index added to the model separately), DCsgn (continuous) was an independent SCD predictor (hazard ratio [HR]: 0.742, 95% confidence interval [CI]: 0.631-0.871, P < 0.001). DCsgn ≤ 5.373 (dichotomous) presented an HR of 1.815 for SCD (95% CI: 1.080-3.049, P = 0.024); areas under the receiver operating characteristic curves (AUC/ROC): 0.62 (DCorig) and 0.66 (DCsgn), P = 0.190 (chi-square). Results for the deceased versus alive group: DCorig: 3.2 ± 2.0 ms versus 4.8 ± 2.4 ms (P < 0.001) and DCsgn: 4.6 ± 1.4 ms versus 6.2 ± 2.2 ms (P < 0.001). In Cox regression, DCsgn (continuous) presented an HR of 0.686 (95% CI: 0.546-0.862, P = 0.001) and DCsgn ≤ 5.373 (dichotomous) an HR of 2.443 for total mortality (TM) (95% CI: 1.269-4.703, P = 0.008); AUC/ROC: 0.71 (DCorig) and 0.73 (DCsgn), P = 0.402. CONCLUSIONS DC predicts both SCD and TM. DCsgn avoids the negative values, improving the method, although not at a statistically significant level.
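For orientation, the anchor-based DCorig that the variant modifies can be sketched as follows. This is the textbook single-scale phase-rectified form of deceleration capacity; the abstract describes DCsgn's four-beat windowing only loosely, so it is not reproduced here.

```python
import numpy as np

def deceleration_capacity(rr_ms):
    """Anchor-based deceleration capacity over an R-R series (ms).
    Anchors are beats longer than their predecessor; DC averages
    (X0 + X1 - X-1 - X-2) / 4 over all anchors. May be negative,
    which is the shortcoming the abstract's DCsgn variant addresses."""
    rr = np.asarray(rr_ms, dtype=float)
    vals = []
    for i in range(2, len(rr) - 1):
        if rr[i] > rr[i - 1]:            # deceleration anchor
            vals.append((rr[i] + rr[i + 1] - rr[i - 1] - rr[i - 2]) / 4.0)
    return float(np.mean(vals)) if vals else 0.0

# Tiny worked example: one anchor at index 3 gives (820+800-790-810)/4 = 5 ms
print(deceleration_capacity([800, 810, 790, 820, 800, 830]))
```

In the six-beat example only index 3 qualifies as an anchor within the valid range, so DC equals that single averaged difference, 5.0 ms.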
11.
Alignment of R-R interval signals using the circadian heart rate rhythm. Annu Int Conf IEEE Eng Med Biol Soc 2015; 2015:3347-50. PMID: 26737009; DOI: 10.1109/embc.2015.7319109.
Abstract
R-R interval signals that come from different subjects are regularly aligned and averaged according to the horological starting time of the recordings. We argue that the horological time is a faulty alignment criterion and provide evidence in the form of a new alignment method. Our main motivation is that the human heart rate (HR) rhythm follows a circadian cycle, whose pattern can vary among different classes of people. We propose two novel alignment algorithms that consider the HR circadian rhythm, the Puzzle Piece Alignment Algorithm (PPA) and the Event Based Alignment Algorithm (EBA). First, we convert the R-R interval signal into a series of time windows and compute the mean HR per window. Then our algorithms search for matching circadian patterns to align the signals. We conduct experiments using R-R interval signals extracted from two databases in the Physionet Data Bank. Both algorithms are able to align the signals with respect to the circadian rhythmicity of HR. Furthermore, our findings confirm the presence of more than one pattern in the circadian HR rhythm. We suggest an automatic classification of signals according to the three most prominent patterns.
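A simplified sketch of the shared first step (R-R series to mean HR per time window) and of alignment by matching HR patterns at a best-fitting lag. The window length, lag search, and mean-squared-error criterion are assumptions of this sketch; the PPA and EBA algorithms themselves are more elaborate.

```python
import numpy as np

def rr_to_hr_windows(rr_s, window_s=300.0):
    """Convert an R-R interval series (seconds) into mean heart rate
    (beats/min) per fixed time window."""
    t = np.cumsum(rr_s)
    hr, start, edge = [], 0, window_s
    for i, ti in enumerate(t):
        if ti >= edge:
            seg = rr_s[start:i]
            if len(seg):
                hr.append(60.0 / np.mean(seg))
            start, edge = i, edge + window_s
    return np.array(hr)

def best_lag(a, b, max_lag=12):
    """Align two mean-HR profiles by the lag (in windows) that
    minimizes the mean squared difference over their overlap."""
    best, best_err = 0, np.inf
    for lag in range(-max_lag, max_lag + 1):
        x, y = (a[lag:], b) if lag >= 0 else (a, b[-lag:])
        n = min(len(x), len(y))
        if n < 3:
            continue                      # require a minimal overlap
        err = np.mean((x[:n] - y[:n]) ** 2)
        if err < best_err:
            best, best_err = lag, err
    return best

# A shifted copy of an HR profile should align at the known lag
a = np.array([60.0, 62, 65, 70, 75, 72, 68, 64, 60, 58, 57, 60, 63, 66])
print(best_lag(a, a[3:]))
```

Aligning on the HR profile rather than on clock time is exactly the departure from horological alignment that the abstract argues for.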
12.

13.
Elevated nighttime heart rate due to insufficient circadian adaptation detects heart failure patients prone for malignant ventricular arrhythmias. Int J Cardiol 2013; 172:e154-6. PMID: 24411917; DOI: 10.1016/j.ijcard.2013.12.075.
14.

15.
Arrhythmic sudden cardiac death: substrate, mechanisms and current risk stratification strategies for the post-myocardial infarction patient. Hellenic J Cardiol 2013; 54:301-315. PMID: 23912922.
16.
The use of emergency department thoracotomy for traumatic cardiopulmonary arrest. Injury 2012; 43:1355-61. PMID: 22560130; DOI: 10.1016/j.injury.2012.04.011.
Abstract
Despite the establishment of evidence-based guidelines for the resuscitation of critically injured patients who have sustained cardiopulmonary arrest, rapid decisions regarding patient salvageability in these situations remain difficult even for experienced physicians. Regardless, survival after traumatic cardiopulmonary arrest is limited. One applicable, well-described resuscitative technique is the emergency department thoracotomy, a procedure that, when applied correctly, is effective in saving a small but significant number of critically injured patients. By understanding the indications, technical details, and predictors of survival, along with the inherent risks and costs of emergency department thoracotomy, the physician is better equipped to make rapid futile-versus-salvageable decisions for this most severely injured subset of patients.
17.
Automated diagnosis of diseases based on classification: dynamic determination of the number of trees in random forests algorithm. IEEE Trans Inf Technol Biomed 2011; 16:615-22. PMID: 22106154; DOI: 10.1109/titb.2011.2175938.
Abstract
The accurate diagnosis of diseases with high prevalence rates, such as Alzheimer's disease, Parkinson's disease, diabetes, breast cancer, and heart disease, is one of the most important biomedical problems, and its management is imperative. In this paper, we present a new method for the automated diagnosis of diseases based on an improvement of the random forests classification algorithm. More specifically, we address the dynamic determination of the optimum number of base classifiers composing the random forest. The proposed method differs from most methods reported in the literature, which follow an overproduce-and-choose strategy, where the members of the ensemble are selected from a pool of classifiers known a priori. In our case, the number of classifiers is determined during the growing procedure of the forest. Additionally, the proposed method produces an ensemble that is not only accurate but also diverse, ensuring the two important properties that should characterize an ensemble classifier. The method is based on an online fitting procedure and is evaluated using eight biomedical datasets and five versions of the random forests algorithm (40 cases). The method correctly decided the number of trees in 90% of the test cases.
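The grow-until-stable idea (deciding the ensemble size during growth rather than from a pre-built pool) can be approximated with scikit-learn's warm-start forests. The out-of-bag stopping rule, step size, and tolerance below are assumptions of this sketch, not the paper's algorithm, which uses its own online fitting procedure.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

def grow_until_stable(X, y, step=25, tol=0.005, patience=3, max_trees=300):
    """Grow a random forest incrementally; stop when the out-of-bag
    error stops improving by more than tol for `patience` rounds."""
    rf = RandomForestClassifier(n_estimators=step, oob_score=True,
                                warm_start=True, random_state=0)
    rf.fit(X, y)
    best_err, still = 1.0 - rf.oob_score_, 0
    while rf.n_estimators < max_trees:
        rf.n_estimators += step          # warm_start: only new trees are fit
        rf.fit(X, y)
        err = 1.0 - rf.oob_score_
        if best_err - err < tol:
            still += 1
            if still >= patience:
                break
        else:
            best_err, still = err, 0
    return rf

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
rf = grow_until_stable(X, y)
print(rf.n_estimators)
```

The OOB estimate plays the role of a free validation signal, so the forest size is data-driven rather than fixed in advance.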
18.
Poster Session 1. Europace 2011. DOI: 10.1093/europace/eur220.
19.

20.
Endoscopic vein harvest in lower-extremity bypass—Is it preferable to prosthetic bypass or standard vein harvest? Int J Angiol 2011. DOI: 10.1007/s00547-005-2036-8.
21.
Poster presentation. Europace 2011. DOI: 10.1093/europace/euq492.
22.
Sudden Cardiac Death II. Europace 2011. DOI: 10.1093/europace/euq479.
23.
A six stage approach for the diagnosis of the Alzheimer's disease based on fMRI data. J Biomed Inform 2009; 43:307-20. PMID: 19883796; DOI: 10.1016/j.jbi.2009.10.004.
Abstract
The aim of this work is to present an automated method that assists in the diagnosis of Alzheimer's disease and also supports the monitoring of the progression of the disease. The method is based on features extracted from the data acquired during an fMRI experiment. It consists of six stages: (a) preprocessing of fMRI data, (b) modeling of fMRI voxel time series using a Generalized Linear Model, (c) feature extraction from the fMRI data, (d) feature selection, (e) classification using classical and improved variations of the Random Forests algorithm and Support Vector Machines, and (f) conversion of the trees, of the Random Forest, to rules which have physical meaning. The method is evaluated using a dataset of 41 subjects. The results of the proposed method indicate the validity of the method in the diagnosis (accuracy 94%) and monitoring of the Alzheimer's disease (accuracy 97% and 99%).
24.
Comparison of the most common HRV computation algorithms from the systems designer point of view. J Med Eng Technol 2009; 33:110-8. DOI: 10.1080/03091900701292265.
25.

26.
Poster Session 4: ECG. Europace 2009. DOI: 10.1093/europace/euq237.
27.
Fast computation of approximate entropy. Comput Methods Programs Biomed 2008; 91:48-54. PMID: 18423927; DOI: 10.1016/j.cmpb.2008.02.008.
Abstract
Approximate entropy (ApEn) is a measure of system complexity. The implementation of the method is computationally expensive and requires execution time proportional to the square of the size of the input signal. We propose here a fast algorithm which speeds up the computation of approximate entropy by detecting early those vectors that are not similar and excluding them from the similarity test. Experimental analysis with various biomedical signals revealed a significant improvement in execution times.
28.
Assessment of the classification capability of prediction and approximation methods for HRV analysis. Comput Biol Med 2007; 37:642-54. PMID: 16904097; DOI: 10.1016/j.compbiomed.2006.06.008.
Abstract
The goal of this paper is to examine the classification capabilities of various prediction and approximation methods and to suggest which are most likely to be suitable for the clinical setting. Various prediction and approximation methods are applied in order to detect and extract those which provide the best differentiation between control and patient data, as well as between members of different age groups. The prediction methods are local linear prediction, local exponential prediction, the delay times method, autoregressive prediction, and neural networks. Approximation is computed with local linear approximation, least squares approximation, neural networks, and the wavelet transform. These methods were chosen because each has a different physical basis and thus extracts and uses time series information in a different way.
29.
Open aneurysm repair in elderly patients not candidates for endovascular repair (EVAR): Comparison with patients undergoing EVAR or preferential open repair. Vasc Endovascular Surg 2006; 40:95-101. PMID: 16598356; DOI: 10.1177/153857440604000202.
Abstract
The authors reviewed a 2-year experience with abdominal aortic aneurysm (AAA) repair to determine if patients who were excluded from endovascular aneurysm repair (EVAR) because of anatomic criteria (Group III) represented a higher risk for subsequent open aneurysm repair than either patients undergoing EVAR (Group II) or those patients who preferentially underwent open repair (Group I). Between January 2001 and December 2003, 107 patients underwent AAA repair. Open repair was recommended in patients <70 years of age and without significant comorbidities (Group I). There were 35 patients in Group I; 72 patients were evaluated for EVAR; 29 patients underwent EVAR (Group II), and 43 were excluded and underwent open repair (Group III). Exclusion criteria were those recommended by the graft manufacturers. Patients in Group I were significantly younger than those in Groups II and III (p < 0.0001). Gender, incidence of diabetes, and hypertension were similar in all groups. Patients in Group III had a greater incidence of coronary artery disease (CAD) than those in Groups I and II, trending toward statistical significance (p = 0.06). Aneurysm size in Group II was statistically smaller than in Group I or III. Group III had significantly more complications (25.6% vs 5.7% and 6.9%) than either Group I or II (p < 0.015). Cardiac complications were similar in all groups. Three patients in Group III required prolonged intubation and 3 in Group III developed renal insufficiency. A history of CAD was predictive of complications (21.8% vs 5.8%, p < 0.024), as was inclusion in Group III. There were 2 deaths in this series, both in Group III. Length of stay was significantly less in Group II (4.17 ± 2.36 days) than in Group I (6.57 ± 1.84 days) or Group III (12.30 ± 9.82 days) (p = 0.0001). Open aneurysm repair can be safely performed in younger good-risk patients (Group I) with results equivalent to EVAR (Group II) but with slightly longer length of stay (LOS). In older patients with suitable anatomy, EVAR can be performed with minimal morbidity and short LOS. Older patients not suitable for EVAR (Group III) constitute a higher-risk group of patients because of increased incidence of CAD and the need for more complex repairs. However, the mortality rate in this group was only 4.6%.
|
30
|
Robustness of support vector machine-based classification of heart rate signals. Conf Proc IEEE Eng Med Biol Soc 2006; 2006:2159-2162. [PMID: 17945696 DOI: 10.1109/iembs.2006.260550] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
In this study, we discuss the use of support vector machine (SVM) learning to classify heart rate signals. Each signal is represented by an attribute vector containing a set of statistical measures for the respective signal. First, the SVM classifier is trained on data (attribute vectors) with known ground truth. The learned classifier parameters can then be used to categorize new signals that do not belong to the training set. We have experimented with both real and artificial signals, and the SVM classifier performs very well even on signals exhibiting a very low signal-to-noise ratio, which is not the case for other standard methods proposed in the literature.
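A minimal sketch of the workflow the abstract describes: summarize each signal as a vector of statistical measures, train on labelled vectors, then categorize new signals. The specific statistics below are assumptions (the abstract does not list them), and a nearest-centroid rule stands in for the SVM, which in practice would come from a dedicated library.

```python
import math
import statistics

def attribute_vector(signal):
    """Statistical measures used as the signal's feature vector.

    Illustrative choice of attributes (mean, standard deviation,
    RMS of successive differences); the paper only states that a
    set of statistical measures is used.
    """
    mean = statistics.fmean(signal)
    sd = statistics.pstdev(signal)
    diffs = [b - a for a, b in zip(signal, signal[1:])]
    rmssd = math.sqrt(statistics.fmean([d * d for d in diffs]))
    return [mean, sd, rmssd]

def nearest_centroid(train, labels, sample):
    """Classify by distance to each class's mean attribute vector.

    A lightweight stand-in for the SVM: it demonstrates the
    train-then-categorize workflow, not the margin optimisation.
    """
    centroids = {}
    for lab in set(labels):
        vecs = [v for v, l in zip(train, labels) if l == lab]
        centroids[lab] = [statistics.fmean(c) for c in zip(*vecs)]

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    return min(centroids, key=lambda lab: dist(centroids[lab], sample))
```

The same two-phase shape (fit on ground-truth vectors, then predict) carries over directly when the centroid rule is replaced by an actual SVM.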
|
31
|
Experimental analysis of heart rate variability of long-recording electrocardiograms in normal subjects and patients with coronary artery disease and normal left ventricular function. J Biomed Inform 2004; 36:202-17. [PMID: 14615229 DOI: 10.1016/j.jbi.2003.09.001] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Abstract
The heart rate signal contains valuable information about cardiac health, which cannot be extracted without the use of appropriate computerized methods. This paper presents an analysis of various electrocardiograms, the aim of which is to categorize them into two distinct groups. Group A represents young male subjects with no prior occurrence of coronary disease events and Group B represents middle-aged male subjects who have symptomatic coronary artery disease without myocardial infarction and whose 12-lead ECGs do not contain any abnormalities, thus wrongly indicating a normal subject. Electrocardiographic recordings are approximately 2h in length and acquired under conditions that favor the stationarity of collected data. Linear and nonlinear characteristics are studied by applying several techniques including Fourier analysis, Correlation Dimension Estimation, Approximate Entropy, and the Discrete Wavelet Transform. The small variations of the diagnostic information given by each one of the methods as well as the slightly different conclusions among similar studies indicate the necessity of further investigation, combined use, and complementary application of different approaches.
|
32
|
Abstract
A decrease in heart rate variability is an indication of abnormal heart function. Proposed here is a hardware design of a standalone system that calculates and evaluates heart rate variability, distinguishing healthy from unhealthy subjects. Previous approaches are mostly based on the fast Fourier transform and power spectral analysis. Described is an alternative approach that follows a recently proposed idea: the analysis of heart rate signals with the use of wavelets. The proposed system follows a simple gated architecture and is composed of four main units: a processing unit that prepares the input signal for analysis, a unit that manages control signals, the wavelet computation unit, and the wavelet coefficient evaluation unit. The hardware design is cost-effective, simple, and easy to implement. Experimental results proved that this system is efficient and produces a clean and accurate separation between the healthy and unhealthy groups of patients for the first nine scales of wavelet analysis.
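The per-scale decomposition such a wavelet computation unit produces can be sketched in software with the Haar wavelet. Haar is chosen here only for simplicity; the abstract does not name the wavelet family implemented on chip.

```python
def haar_step(signal):
    """One level of the Haar wavelet transform.

    Splits the signal into a half-length approximation (scaled
    pairwise sums) and detail (scaled pairwise differences) --
    the per-scale split a hardware wavelet unit computes level
    by level.
    """
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        s = signal[i] + signal[i + 1]
        d = signal[i] - signal[i + 1]
        approx.append(s / 2 ** 0.5)
        detail.append(d / 2 ** 0.5)
    return approx, detail

def wavelet_scales(signal, levels):
    """Detail coefficients for the first `levels` scales."""
    details = []
    approx = list(signal)
    for _ in range(levels):
        approx, d = haar_step(approx)
        details.append(d)
    return details
```

With a 512-sample input, nine levels yield detail bands of lengths 256 down to 1, matching the "first nine scales" evaluated by the system; the orthonormal scaling preserves signal energy across each split.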
|