1. Saleh H, ElRashidy N, Elaziz MA, Aseeri AO, El-Sappagh S. Genetic algorithms based optimized hybrid deep learning model for explainable Alzheimer's prediction based on temporal multimodal cognitive data. [DOI: 10.21203/rs.3.rs-3250006/v1]
Abstract
Alzheimer's Disease (AD) is an irreversible neurodegenerative disease, and early detection is crucial to stop its progression at an early stage. Most deep learning (DL) literature has focused on neuroimage analysis, yet these studies have had little noticeable impact in real clinical environments; model robustness, cost, and interpretability are the main reasons for this limitation. Physicians' medical intuition is to evaluate a patient's clinical biomarkers first and then examine their neuroimages. Cognitive scores provide a medically acceptable and cost-effective alternative to neuroimages for predicting AD progression. Each score is calculated from a collection of sub-scores that provide deeper insight into a patient's condition. No study in the literature has explored the role of these multimodal time-series sub-scores in predicting AD progression.
We propose a hybrid CNN-LSTM DL model for predicting AD progression based on the fusion of four longitudinal cognitive sub-score modalities. A Bayesian optimizer was used to select the best DL architecture, and a genetic-algorithm-based feature selection step was added to the pipeline to select the best features from the deep representations extracted by the CNN-LSTM. The SoftMax classifier was replaced by a robust, optimized random forest classifier. Extensive experiments on the ADNI dataset investigated the role of each optimization step, and the proposed model achieved the best results compared with other DL and classical machine learning models. The resulting model is robust, but it is a black box whose decision logic is difficult to understand; trustworthy AI models must be both robust and explainable. We therefore used SHAP and LIME to provide explainability for the proposed model. The resulting trustworthy model has great potential to provide decision support in real clinical environments.
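The following minimal sketch illustrates the pipeline shape this abstract describes: a CNN-LSTM over longitudinal sub-scores, deep features fed to a random forest in place of the SoftMax head, and a toy genetic-algorithm-style search over feature masks. All data, layer sizes, and search settings are illustrative assumptions, not the authors' configuration.

```python
# Sketch only: CNN-LSTM feature extractor + random forest head + toy GA-style feature search.
import numpy as np
from tensorflow.keras import layers, models
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6, 20)).astype("float32")   # 200 subjects, 6 visits, 20 sub-scores (synthetic)
y = rng.integers(0, 3, size=200)                       # CN / MCI / AD labels (synthetic)

# CNN-LSTM backbone: temporal convolution over visits, then an LSTM summary.
inp = layers.Input(shape=(6, 20))
h = layers.Conv1D(32, kernel_size=2, activation="relu", padding="same")(inp)
h = layers.LSTM(64)(h)
out = layers.Dense(3, activation="softmax")(h)
net = models.Model(inp, out)
net.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
net.fit(X, y, epochs=3, batch_size=32, verbose=0)

# Deep representations from the LSTM layer feed the downstream classifier.
encoder = models.Model(inp, h)
feats = encoder.predict(X, verbose=0)

# Toy genetic-algorithm-style search over binary feature masks
# (fitness = cross-validated random-forest accuracy); the paper's GA is more elaborate.
def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    return cross_val_score(rf, feats[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(10, feats.shape[1]))
for _ in range(5):                                      # a few generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-4:]]              # keep the fittest masks
    children = parents[rng.integers(0, 4, size=(6,))].copy()
    flips = rng.random(children.shape) < 0.05           # bit-flip mutation
    children[flips] ^= 1
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
final_rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(
    feats[:, best.astype(bool)], y)
```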
Affiliation(s)
- Hager Saleh: Faculty of Computers and Artificial Intelligence, South Valley University, Hurghada, Egypt
- Nora ElRashidy: Machine Learning and Information Retrieval Department, Faculty of Artificial Intelligence, Kafrelsheikh University, Kafrelsheikh, 13518, Egypt
- Mohamed Abd Elaziz: Faculty of Computer Science and Engineering, Galala University, Suez, 435611, Egypt
- Ahmad O. Aseeri: Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj, 11942, Saudi Arabia
- Shaker El-Sappagh: Faculty of Computer Science and Engineering, Galala University, Suez, 435611, Egypt

2. Stocks J, Popuri K, Heywood A, Tosun D, Alpert K, Beg MF, Rosen H, Wang L; for the Alzheimer's Disease Neuroimaging Initiative. Network-wise concordance of multimodal neuroimaging features across the Alzheimer's disease continuum. Alzheimer's & Dementia (Amsterdam, Netherlands) 2022; 14:e12304. [PMID: 35496375] [PMCID: PMC9043119] [DOI: 10.1002/dad2.12304]
Abstract
Background Concordance between cortical atrophy and cortical glucose hypometabolism within distributed brain networks was evaluated among cerebrospinal fluid (CSF) biomarker-defined amyloid/tau/neurodegeneration (A/T/N) groups. Method We computed correlations between cortical thickness and fluorodeoxyglucose metabolism within 12 functional brain networks. Differences among A/T/N groups (biomarker normal [BN], Alzheimer's disease [AD] continuum, suspected non-AD pathologic change [SNAP]) in network concordance and relationships to longitudinal change in cognition were assessed. Results Network-wise markers of concordance distinguish SNAP subjects from BN subjects within the posterior multimodal and language networks. AD-continuum subjects showed increased concordance in 9/12 networks assessed compared to BN subjects, as well as widespread atrophy and hypometabolism. Baseline network concordance was associated with longitudinal change in a composite memory variable in both SNAP and AD-continuum subjects. Conclusions Our novel study investigates the interrelationships between atrophy and hypometabolism across brain networks in A/T/N groups, helping disentangle the structure-function relationships that contribute to both clinical outcomes and diagnostic uncertainty in AD.
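A minimal sketch of the core "network-wise concordance" idea, assuming a simple per-subject Pearson correlation between cortical thickness and FDG uptake across the regions of each network; the parcellation, region counts, and data below are synthetic placeholders rather than the study's measures.

```python
# Sketch only: per-subject thickness-metabolism concordance within each functional network.
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_regions = 50, 360
thickness = rng.normal(size=(n_subjects, n_regions))
fdg = 0.4 * thickness + rng.normal(size=(n_subjects, n_regions))   # synthetic coupling

# Hypothetical assignment of regions to 12 networks (the study uses a published parcellation).
network_of_region = rng.integers(0, 12, size=n_regions)

concordance = np.zeros((n_subjects, 12))
for net in range(12):
    idx = network_of_region == net
    for s in range(n_subjects):
        # Pearson correlation of thickness vs. metabolism across the network's regions
        concordance[s, net] = np.corrcoef(thickness[s, idx], fdg[s, idx])[0, 1]
```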
Affiliation(s)
- Jane Stocks: Department of Psychiatry and Behavioral Sciences, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA
- Karteek Popuri: School of Engineering Science, Simon Fraser University, Burnaby, British Columbia, Canada
- Ashley Heywood: Department of Psychiatry and Behavioral Sciences, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA
- Duygu Tosun: School of Medicine, University of California, San Francisco, California, USA
- Kate Alpert: Department of Psychiatry and Behavioral Sciences, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA
- Mirza Faisal Beg: School of Engineering Science, Simon Fraser University, Burnaby, British Columbia, Canada
- Howard Rosen: School of Medicine, University of California, San Francisco, California, USA
- Lei Wang: Department of Psychiatry and Behavioral Sciences, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA; Department of Psychiatry and Behavioral Health, Ohio State University Wexner Medical Center, Columbus, Ohio, USA

3. Zhang J, Wang H, Zhao Y, Guo L, Du L. Identification of multimodal brain imaging association via a parameter decomposition based sparse multi-view canonical correlation analysis method. BMC Bioinformatics 2022; 23:128. [PMID: 35413798] [PMCID: PMC9006414] [DOI: 10.1186/s12859-022-04669-z]
Abstract
BACKGROUND With the development of noninvasive imaging technology, collecting different imaging measurements of the same brain has become increasingly easy. These multimodal imaging data carry complementary information about the same brain, with specific and shared information intertwined. Within these multimodal data, it is essential to discriminate the specific information from the shared information, since doing so helps characterize brain diseases comprehensively. Because most existing methods cannot do this, we propose a parameter decomposition based sparse multi-view canonical correlation analysis (PDSMCCA) method. PDSMCCA can identify both modality-shared and modality-specific information in multimodal data, leading to an in-depth understanding of the complex pathology of brain disease. RESULTS Compared with the SMCCA method, our method obtains higher correlation coefficients and better canonical weights on both synthetic data and real neuroimaging data. This indicates that, coupled with modality-shared and -specific feature selection, PDSMCCA improves multi-view association identification and shows meaningful, interpretable feature selection capability. CONCLUSIONS The novel PDSMCCA confirms that parameter decomposition is a suitable strategy for identifying both modality-shared and modality-specific imaging features. The multimodal associations and the diverse information in multimodal imaging data enable us to better understand brain diseases such as Alzheimer's disease.
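A much-simplified two-view sparse CCA sketch in the spirit of (but far simpler than) PDSMCCA: alternating soft-thresholded updates of the canonical weights, assuming centered data and ignoring the paper's decomposition into shared and specific parameters. The penalty value and the random data are illustrative assumptions.

```python
# Sketch only: toy sparse CCA via alternating soft-thresholded weight updates.
import numpy as np

def soft(v, lam):
    # Soft-thresholding operator used to induce sparsity in the canonical weights.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

rng = np.random.default_rng(2)
n = 100
X1 = rng.normal(size=(n, 30))   # e.g. one imaging modality's features
X2 = rng.normal(size=(n, 25))   # e.g. another modality's features
X1 -= X1.mean(0)
X2 -= X2.mean(0)

w1 = rng.normal(size=30); w1 /= np.linalg.norm(w1)
w2 = rng.normal(size=25); w2 /= np.linalg.norm(w2)
for _ in range(100):
    w1 = soft(X1.T @ (X2 @ w2), 0.5)
    if np.linalg.norm(w1) > 0:
        w1 /= np.linalg.norm(w1)
    w2 = soft(X2.T @ (X1 @ w1), 0.5)
    if np.linalg.norm(w2) > 0:
        w2 /= np.linalg.norm(w2)

canonical_corr = np.corrcoef(X1 @ w1, X2 @ w2)[0, 1]
```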
Affiliation(s)
- Jin Zhang: School of Automation, Northwestern Polytechnical University, Xi'an, China
- Huiai Wang: School of Automation, Northwestern Polytechnical University, Xi'an, China
- Ying Zhao: School of Automation, Northwestern Polytechnical University, Xi'an, China
- Lei Guo: School of Automation, Northwestern Polytechnical University, Xi'an, China
- Lei Du: School of Automation, Northwestern Polytechnical University, Xi'an, China

4. Venugopalan J, Tong L, Hassanzadeh HR, Wang MD. Multimodal deep learning models for early detection of Alzheimer's disease stage. Sci Rep 2021; 11:3254. [PMID: 33547343] [PMCID: PMC7864942] [DOI: 10.1038/s41598-020-74399-w]
Abstract
Most current Alzheimer's disease (AD) and mild cognitive impairment (MCI) studies use a single data modality to make predictions such as AD stage. Fusing multiple data modalities can provide a more holistic view of AD staging. We therefore use deep learning (DL) to integrally analyze imaging (magnetic resonance imaging (MRI)), genetic (single nucleotide polymorphisms (SNPs)), and clinical test data to classify patients into AD, MCI, and controls (CN). We use stacked denoising auto-encoders to extract features from clinical and genetic data, and 3D convolutional neural networks (CNNs) for imaging data. We also develop a novel data interpretation method that identifies the top-performing features learned by the deep models using clustering and perturbation analysis. Using the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, we demonstrate that deep models outperform shallow models, including support vector machines, decision trees, random forests, and k-nearest neighbors. We also demonstrate that integrating multimodal data outperforms single-modality models in terms of accuracy, precision, recall, and mean F1 scores. Our models identified the hippocampus, the amygdala, and the Rey Auditory Verbal Learning Test (RAVLT) as the most distinguishing features, consistent with the known AD literature.
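A minimal sketch of the intermediate-fusion idea described here: a denoising autoencoder embeds clinical/genetic features, the embedding is concatenated with image-derived features, and a small classifier head is trained on the fused vector. Sizes, epochs, and the synthetic data are assumptions, and the paper's 3D-CNN branch is replaced by a random placeholder.

```python
# Sketch only: denoising-autoencoder embeddings fused with image features before classification.
import numpy as np
from tensorflow.keras import layers, models

rng = np.random.default_rng(3)
clin = rng.normal(size=(300, 40)).astype("float32")        # clinical/genetic features (synthetic)
img_feats = rng.normal(size=(300, 128)).astype("float32")  # stand-in for 3D-CNN image features
y = rng.integers(0, 3, size=300)                            # CN / MCI / AD

# Denoising autoencoder: reconstruct clean inputs from noise-corrupted ones.
noisy = clin + 0.2 * rng.normal(size=clin.shape).astype("float32")
inp = layers.Input(shape=(40,))
code = layers.Dense(16, activation="relu")(inp)
recon = layers.Dense(40)(code)
dae = models.Model(inp, recon)
dae.compile(optimizer="adam", loss="mse")
dae.fit(noisy, clin, epochs=5, batch_size=32, verbose=0)

encoder = models.Model(inp, code)
clin_code = encoder.predict(clin, verbose=0)

# Feature-level fusion: concatenate modality embeddings and train a small classifier head.
fused = np.concatenate([clin_code, img_feats], axis=1)
f_in = layers.Input(shape=(fused.shape[1],))
out = layers.Dense(3, activation="softmax")(layers.Dense(32, activation="relu")(f_in))
clf = models.Model(f_in, out)
clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
clf.fit(fused, y, epochs=5, verbose=0)
```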
Affiliation(s)
- Janani Venugopalan: Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Li Tong: Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Hamid Reza Hassanzadeh: School of Computational Science and Engineering, Georgia Institute of Technology, Atlanta, GA, USA
- May D Wang: Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA; School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA; Winship Cancer Institute, Parker H. Petit Institute for Bioengineering and Biosciences, Institute of People and Technology, Georgia Institute of Technology and Emory University, Atlanta, GA, USA

5. Abrol A, Bhattarai M, Fedorov A, Du Y, Plis S, Calhoun V. Deep residual learning for neuroimaging: An application to predict progression to Alzheimer's disease. J Neurosci Methods 2020; 339:108701. [PMID: 32275915] [PMCID: PMC7297044] [DOI: 10.1016/j.jneumeth.2020.108701]
Abstract
BACKGROUND The unparalleled performance of deep learning approaches in generic image processing has motivated their extension to neuroimaging data. These approaches learn abstract neuroanatomical and functional brain alterations that could enable exceptional performance in classifying brain disorders, predicting disease progression, and localizing brain abnormalities. NEW METHOD This work investigates the suitability of a modified form of deep residual neural networks (ResNet) for studying neuroimaging data in the specific application of predicting progression from mild cognitive impairment (MCI) to Alzheimer's disease (AD). Prediction was conducted first by training the deep models on MCI individuals only, followed by a domain transfer learning version that additionally trained on AD and control subjects. We also demonstrate a network-occlusion-based method to localize abnormalities. RESULTS The implemented framework captured non-linear features that successfully predicted AD progression and also conformed to the spectrum of various clinical scores. In a repeated cross-validated setup, the learnt predictive models showed highly similar peak activations that corresponded to previous AD reports. COMPARISON WITH EXISTING METHODS The implemented architecture achieved a significant performance improvement over the classical support vector machine and stacked autoencoder frameworks (p < 0.005), numerically better than the state-of-the-art performance using sMRI data alone (>7% higher than the second-best performing method) and within 1% of the state-of-the-art performance achieved using multiple neuroimaging modalities. CONCLUSIONS The explored frameworks reflect the high potential of deep learning architectures for learning subtle predictive features and their utility in critical applications such as predicting and understanding disease progression.
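A minimal sketch of a 3D residual block of the kind such a ResNet stacks for structural MRI, with a global-average-pooled binary head for progressive vs. stable MCI. Channel counts, depth, and the input size are illustrative assumptions, not the modified architecture the paper evaluates.

```python
# Sketch only: one 3D residual block and a binary classification head for T1 volumes.
from tensorflow.keras import layers, models

def residual_block_3d(x, filters):
    shortcut = x
    h = layers.Conv3D(filters, 3, padding="same", activation="relu")(x)
    h = layers.Conv3D(filters, 3, padding="same")(h)
    if shortcut.shape[-1] != filters:            # project the skip path to match channels
        shortcut = layers.Conv3D(filters, 1, padding="same")(shortcut)
    return layers.Activation("relu")(layers.Add()([h, shortcut]))

inp = layers.Input(shape=(64, 64, 64, 1))        # downsampled T1 volume (assumed size)
h = layers.Conv3D(8, 3, padding="same", activation="relu")(inp)
h = residual_block_3d(h, 16)
h = layers.GlobalAveragePooling3D()(h)
out = layers.Dense(1, activation="sigmoid")(h)   # progressive vs. stable MCI
model = models.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy")
```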
Affiliation(s)
- Anees Abrol
- Joint (GSU/GaTech/Emory) Center for Transational Research in Neuroimaging and Data Science, Atlanta, GA, 30303, USA; The Mind Research Network, 1101 Yale Blvd NE, Albuquerque, NM, 87106, USA; Department of Electrical and Computer Engineering, The University of New Mexico, Albuquerque, NM, 87131, USA.
| | - Manish Bhattarai
- Department of Electrical and Computer Engineering, The University of New Mexico, Albuquerque, NM, 87131, USA; Los Alamos National Laboratory, Los Alamos, NM, 87545, USA
| | - Alex Fedorov
- Joint (GSU/GaTech/Emory) Center for Transational Research in Neuroimaging and Data Science, Atlanta, GA, 30303, USA; The Mind Research Network, 1101 Yale Blvd NE, Albuquerque, NM, 87106, USA; Department of Electrical and Computer Engineering, The University of New Mexico, Albuquerque, NM, 87131, USA
| | - Yuhui Du
- Joint (GSU/GaTech/Emory) Center for Transational Research in Neuroimaging and Data Science, Atlanta, GA, 30303, USA; The Mind Research Network, 1101 Yale Blvd NE, Albuquerque, NM, 87106, USA; School of Computer and Information Technology, Shanxi University, Taiyuan, China
| | - Sergey Plis
- Joint (GSU/GaTech/Emory) Center for Transational Research in Neuroimaging and Data Science, Atlanta, GA, 30303, USA; The Mind Research Network, 1101 Yale Blvd NE, Albuquerque, NM, 87106, USA
| | - Vince Calhoun
- Joint (GSU/GaTech/Emory) Center for Transational Research in Neuroimaging and Data Science, Atlanta, GA, 30303, USA; The Mind Research Network, 1101 Yale Blvd NE, Albuquerque, NM, 87106, USA; Department of Electrical and Computer Engineering, The University of New Mexico, Albuquerque, NM, 87131, USA
| |

6. Probabilistic disease progression modeling to characterize diagnostic uncertainty: Application to staging and prediction in Alzheimer's disease. Neuroimage 2019; 190:56-68. [DOI: 10.1016/j.neuroimage.2017.08.059]

7. Guo Z, Li X, Huang H, Guo N, Li Q. Deep Learning-based Image Segmentation on Multimodal Medical Imaging. IEEE Transactions on Radiation and Plasma Medical Sciences 2019; 3:162-169. [PMID: 34722958] [PMCID: PMC8553020] [DOI: 10.1109/trpms.2018.2890359]
Abstract
Multi-modality medical imaging techniques have been increasingly applied in clinical practice and research studies. Corresponding multi-modal image analysis and ensemble learning schemes have seen rapid growth and bring unique value to medical applications. Motivated by the recent success of applying deep learning methods to medical image processing, we first propose an algorithmic architecture for supervised multi-modal image analysis with cross-modality fusion at the feature learning level, classifier level, and decision-making level. We then design and implement an image segmentation system based on deep Convolutional Neural Networks (CNN) to contour the lesions of soft tissue sarcomas using multi-modal images, including those from Magnetic Resonance Imaging (MRI), Computed Tomography (CT) and Positron Emission Tomography (PET). The network trained with multi-modal images shows superior performance compared to networks trained with single-modal images. For the task of tumor segmentation, performing image fusion within the network (i.e. fusing at convolutional or fully connected layers) is generally better than fusing images at the network output (i.e. voting). This study provides empirical guidance for the design and application of multi-modal image analysis.
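A toy sketch contrasting two of the fusion levels discussed in this abstract: feature-level fusion (modalities stacked as input channels of one network) versus decision-level fusion (separate per-modality networks whose pixelwise predictions are averaged). The tiny untrained networks and random volumes below are placeholders, not the paper's segmentation models.

```python
# Sketch only: feature-level vs. decision-level fusion for a toy 2-D segmentation setup.
import numpy as np
from tensorflow.keras import layers, models

def tiny_segnet(in_channels):
    inp = layers.Input(shape=(64, 64, in_channels))
    h = layers.Conv2D(8, 3, padding="same", activation="relu")(inp)
    out = layers.Conv2D(1, 1, activation="sigmoid")(h)    # per-pixel lesion probability
    return models.Model(inp, out)

mri = np.random.rand(4, 64, 64, 1).astype("float32")
pet = np.random.rand(4, 64, 64, 1).astype("float32")

# Feature-level fusion: one network sees both modalities as channels.
fused_net = tiny_segnet(in_channels=2)
fused_pred = fused_net.predict(np.concatenate([mri, pet], axis=-1), verbose=0)

# Decision-level fusion: separate networks, predictions averaged (a simple "vote").
mri_net, pet_net = tiny_segnet(1), tiny_segnet(1)
voted_pred = 0.5 * (mri_net.predict(mri, verbose=0) + pet_net.predict(pet, verbose=0))
```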
Affiliation(s)
- Zhe Guo: School of Information and Electronics, Beijing Institute of Technology, China
- Xiang Li: Massachusetts General Hospital, USA
- Heng Huang: Department of Electrical and Computer Engineering, University of Pittsburgh, USA
- Ning Guo: Massachusetts General Hospital, USA

8. Zhu X, Suk HI, Lee SW, Shen D. Discriminative self-representation sparse regression for neuroimaging-based Alzheimer's disease diagnosis. Brain Imaging Behav 2019; 13:27-40. [PMID: 28624881] [PMCID: PMC5811409] [DOI: 10.1007/s11682-017-9731-x]
Abstract
In this paper, we propose a novel feature selection method by jointly considering (1) 'task-specific' relations between response variables (e.g., clinical labels in this work) and neuroimaging features and (2) 'self-representation' relations among neuroimaging features in a sparse regression framework. Specifically, the task-specific relation is devised to learn the relative importance of features for representation of response variables by a linear combination of the input features in a supervised manner, while the self-representation relation is used to take into account the inherent information among neuroimaging features such that any feature can be represented by a weighted sum of the other features, regardless of the label information, in an unsupervised manner. By integrating these two different relations along with a group sparsity constraint, we formulate a new sparse linear regression model for class-discriminative feature selection. The selected features are used to train a support vector machine for classification. To validate the effectiveness of the proposed method, we conducted experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset; experimental results showed superiority of the proposed method over the state-of-the-art methods considered in this work.
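A rough stand-in for the overall pipeline shape, not the authors' formulation: a row-sparse multi-task regression from neuroimaging features to response variables selects features, which then train a linear SVM. The self-representation term and the exact group-sparsity structure of the paper are omitted; the data and penalty value are synthetic assumptions.

```python
# Sketch only: row-sparse regression (l2,1-style) for feature selection, then an SVM.
import numpy as np
from sklearn.linear_model import MultiTaskLasso
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.normal(size=(150, 90))                      # regional neuroimaging features (synthetic)
Y = X[:, :5] @ rng.normal(size=(5, 2)) + 0.5 * rng.normal(size=(150, 2))  # response variables
labels = rng.integers(0, 2, size=150)               # diagnostic labels (synthetic)

mtl = MultiTaskLasso(alpha=0.1).fit(X, Y)           # whole coefficient rows shrink to zero
selected = np.linalg.norm(mtl.coef_, axis=0) > 1e-8  # keep features with nonzero weight rows
svm = SVC(kernel="linear").fit(X[:, selected], labels)
```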
Affiliation(s)
- Xiaofeng Zhu: Department of Radiology and BRIC, The University of North Carolina at Chapel Hill, Chapel Hill, USA
- Heung-Il Suk: Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
- Seong-Whan Lee: Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
- Dinggang Shen: Department of Radiology and BRIC, The University of North Carolina at Chapel Hill, Chapel Hill, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea

9. Seiler C, Green T, Hong D, Chromik L, Huffman L, Holmes S, Reiss AL. Multi-Table Differential Correlation Analysis of Neuroanatomical and Cognitive Interactions in Turner Syndrome. Neuroinformatics 2017; 16:81-93. [PMID: 29270892] [DOI: 10.1007/s12021-017-9351-z]
Abstract
Girls and women with Turner syndrome (TS) have a completely or partially missing X chromosome. Extensive studies on the impact of TS on neuroanatomy and cognition have been conducted. However, integrating neuroanatomical and cognitive information into one consistent analysis through multi-table methods is difficult, and most standard tests are underpowered. We propose a new two-sample testing procedure that compares associations between two tables across two groups. The procedure combines multi-table methods with permutation tests; in particular, we construct cluster-size test statistics that incorporate spatial dependencies. We apply the new procedure to a newly collected dataset comprising structural brain scans and cognitive test scores from girls with TS and age- and sex-matched healthy control participants. We measure neuroanatomy with Tensor-Based Morphometry (TBM) and cognitive function with Wechsler IQ and NEuroPSYchological tests (NEPSY-II). We compare our multi-table testing procedure to a single-table analysis. The new procedure reports differential correlations between two voxel clusters and a wide range of cognitive tests, whereas the single-table analysis reports no differences. Our findings are consistent with the hypothesis that girls with TS have a different brain-cognition association structure than healthy controls.
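A minimal sketch of the permutation-testing idea underlying the procedure: compute a brain-cognition correlation in each group, take the difference as the test statistic, and build its null distribution by permuting group labels. The paper's statistic is a multi-table, cluster-size extension of this; the single brain summary, single cognitive score, and group sizes here are assumptions.

```python
# Sketch only: permutation test for a group difference in one brain-cognition correlation.
import numpy as np

rng = np.random.default_rng(5)
n1, n2 = 40, 40
brain = rng.normal(size=n1 + n2)           # e.g. one TBM voxel-cluster summary per subject
iq = rng.normal(size=n1 + n2)              # e.g. one cognitive test score per subject
group = np.array([0] * n1 + [1] * n2)      # 0 = TS, 1 = control

def corr_diff(g):
    # Difference between within-group Pearson correlations: the differential-correlation statistic.
    r0 = np.corrcoef(brain[g == 0], iq[g == 0])[0, 1]
    r1 = np.corrcoef(brain[g == 1], iq[g == 1])[0, 1]
    return r0 - r1

observed = corr_diff(group)
null = np.array([corr_diff(rng.permutation(group)) for _ in range(2000)])
p_value = np.mean(np.abs(null) >= np.abs(observed))
```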
Affiliation(s)
- Christof Seiler: Department of Statistics, Stanford University, Stanford, CA, USA
- Tamar Green: Center for Interdisciplinary Brain Sciences Research, Stanford University School of Medicine, Stanford, CA, USA; Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- David Hong: Center for Interdisciplinary Brain Sciences Research, Stanford University School of Medicine, Stanford, CA, USA
- Lindsay Chromik: Center for Interdisciplinary Brain Sciences Research, Stanford University School of Medicine, Stanford, CA, USA
- Lynne Huffman: Department of Pediatrics, Stanford University School of Medicine, Stanford, CA, USA
- Susan Holmes: Department of Statistics, Stanford University, Stanford, CA, USA
- Allan L Reiss: Center for Interdisciplinary Brain Sciences Research, Stanford University School of Medicine, Stanford, CA, USA; Departments of Radiology, Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA, USA

10. Ramírez J, Górriz JM, Ortiz A, Martínez-Murcia FJ, Segovia F, Salas-Gonzalez D, Castillo-Barnes D, Illán IA, Puntonet CG. Ensemble of random forests One vs. Rest classifiers for MCI and AD prediction using ANOVA cortical and subcortical feature selection and partial least squares. J Neurosci Methods 2017; 302:47-57. [PMID: 29242123] [DOI: 10.1016/j.jneumeth.2017.12.005]
Abstract
BACKGROUND Alzheimer's disease (AD) is the most common cause of dementia in the elderly and affects approximately 30 million individuals worldwide. Mild cognitive impairment (MCI) is very frequently a prodromal phase of AD, and existing studies suggest that people with MCI progress to AD at a rate of about 10-15% per year. However, the ability of clinicians and machine learning systems to predict AD from MRI biomarkers at an early stage remains a challenging problem whose solution could have a great impact on improving treatment. METHOD The proposed system, developed by the SiPBA-UGR team for this challenge, is based on feature standardization, ANOVA feature selection, partial least squares feature dimension reduction, and an ensemble of One vs. Rest random forest classifiers. To improve its performance when discriminating healthy controls (HC) from MCI, a second binary classification level was introduced that reconsiders the HC and MCI predictions of the first level. RESULTS The system was trained and evaluated on an ADNI dataset consisting of T1-weighted MRI morphological measurements from HC, stable MCI, converter MCI, and AD subjects. The proposed system yields a 56.25% classification score on the test subset, which consists of 160 real subjects. COMPARISON WITH EXISTING METHOD(S) The classifier yielded the best performance when compared to: (i) One vs. One (OvO), One vs. Rest (OvR), and error correcting output codes (ECOC) as strategies for reducing the multiclass task to multiple binary classification problems, (ii) support vector machines, gradient boosting, and random forests as base binary classifiers, and (iii) bagging ensemble learning. CONCLUSIONS A robust method has been proposed for the international challenge on MCI prediction based on MRI data. The system yielded the second-best performance during the competition, with an accuracy of 56.25% on the real subjects of the test set.
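A minimal sketch of the pipeline shape described in METHOD: ANOVA (F-test) feature selection, partial least squares dimension reduction, and a One-vs-Rest ensemble of random forests. Feature counts, component numbers, and the synthetic data are assumptions; the second-level HC/MCI reclassifier is omitted.

```python
# Sketch only: ANOVA selection -> PLS reduction -> One-vs-Rest random forest ensemble.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.cross_decomposition import PLSRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(6)
X = rng.normal(size=(240, 300))            # cortical/subcortical morphological measures (synthetic)
y = rng.integers(0, 4, size=240)           # HC, stable MCI, converter MCI, AD

X_sel = SelectKBest(f_classif, k=100).fit_transform(X, y)   # ANOVA feature selection
pls = PLSRegression(n_components=10).fit(X_sel, y)          # supervised dimension reduction
X_red = pls.transform(X_sel)

clf = OneVsRestClassifier(RandomForestClassifier(n_estimators=200, random_state=0))
clf.fit(X_red, y)                           # one random forest per class, combined One-vs-Rest
```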
Affiliation(s)
- J Ramírez: Dept. of Signal Theory, Networking and Communications, University of Granada, Spain
- J M Górriz: Dept. of Signal Theory, Networking and Communications, University of Granada, Spain; Department of Psychiatry, University of Cambridge, Cambridge, United Kingdom
- A Ortiz: Dept. of Communications Engineering, University of Málaga, Spain
- F J Martínez-Murcia: Dept. of Signal Theory, Networking and Communications, University of Granada, Spain
- F Segovia: Dept. of Signal Theory, Networking and Communications, University of Granada, Spain
- D Salas-Gonzalez: Dept. of Signal Theory, Networking and Communications, University of Granada, Spain
- D Castillo-Barnes: Dept. of Signal Theory, Networking and Communications, University of Granada, Spain
- I A Illán: Dept. of Signal Theory, Networking and Communications, University of Granada, Spain
- C G Puntonet: Dept. of Architecture and Computer Technology, University of Granada, Spain