1. Khalili H, Wimmer MA. Towards Improved XAI-Based Epidemiological Research into the Next Potential Pandemic. Life (Basel) 2024;14:783. [PMID: 39063538] [PMCID: PMC11278356] [DOI: 10.3390/life14070783]
Abstract
Applied to a variety of pandemic-relevant data, artificial intelligence (AI) has substantially supported the control of the spread of the SARS-CoV-2 virus, and epidemiological machine learning studies of SARS-CoV-2 have been published frequently. While these models can be perceived as precise and policy-relevant enough to guide governments towards optimal containment policies, their black-box nature can hamper building trust in, and relying confidently on, the prescriptions they propose. This paper focuses on interpretable AI-based epidemiological models in the context of the recent SARS-CoV-2 pandemic. We systematically review existing studies that jointly incorporate AI, SARS-CoV-2 epidemiology, and explainable AI (XAI) approaches. First, we propose a conceptual framework by synthesizing the main methodological features of existing AI pipelines for SARS-CoV-2. Building on this framework and analyzing the selected epidemiological studies, we reflect on current gaps in epidemiological AI toolboxes and how to fill them to generate enhanced policy support in the next potential pandemic.
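For orientation only, the snippet below is a minimal sketch of one widely used XAI technique in this line of work (SHAP-style feature attribution on a tabular epidemiological model). The feature names, model choice, and data are illustrative assumptions and are not taken from the review.

```python
# Illustrative only: SHAP feature attribution for a tabular epidemiological regressor.
# The "features" (mobility, vaccination, testing, policy indices) are made up.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 4))                                   # hypothetical covariates
y = X @ np.array([2.0, -1.0, 0.5, 0.0]) + 0.1 * rng.standard_normal(200)

model = RandomForestRegressor(n_estimators=100).fit(X, y)
explainer = shap.TreeExplainer(model)                      # per-feature contributions
shap_values = explainer.shap_values(X[:10])                # shape: (10, 4)
print(shap_values.shape)
```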
Affiliation(s)
- Hamed Khalili
- Research Group E-Government, Faculty of Computer Science, University of Koblenz, D-56070 Koblenz, Germany;
2. Menon S, Mangalagiri J, Galita J, Morris M, Saboury B, Yesha Y, Yesha Y, Nguyen P, Gangopadhyay A, Chapman D. CCS-GAN: COVID-19 CT Scan Generation and Classification with Very Few Positive Training Images. J Digit Imaging 2023;36:1376-1389. [PMID: 37069451] [PMCID: PMC10109233] [DOI: 10.1007/s10278-023-00811-2]
Abstract
We present a novel algorithm that is able to generate deep synthetic COVID-19 pneumonia CT scan slices using a very small sample of positive training images in tandem with a larger number of normal images. This generative algorithm produces images of sufficient accuracy to enable a DNN classifier to achieve high classification accuracy using as few as 10 positive training slices (from 10 positive cases), which to the best of our knowledge is one order of magnitude fewer than the next closest published work at the time of writing. Deep learning with extremely small positive training volumes is a very difficult problem and has been an important topic during the COVID-19 pandemic, because for quite some time it was difficult to obtain large volumes of COVID-19-positive images for training. Algorithms that can learn to screen for diseases from few examples are an important area of research. Furthermore, algorithms that produce deep synthetic images from smaller data volumes have the added benefit of reducing the barriers to data sharing between healthcare institutions. We present the cycle-consistent segmentation-generative adversarial network (CCS-GAN). CCS-GAN combines style transfer with pulmonary segmentation and relevant transfer learning from negative images in order to create a larger volume of synthetic positive images for the purpose of improving diagnostic classification performance. A VGG-19 classifier combined with CCS-GAN was trained and evaluated using a small sample of positive image slices, ranging from at most 50 down to as few as 10 COVID-19-positive CT scan images. CCS-GAN achieves high accuracy with few positive images and thereby greatly reduces the barrier to acquiring large training volumes for training a diagnostic classifier for COVID-19.
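As a rough illustration of the augmentation pattern described above (not the authors' CCS-GAN implementation), the sketch below mixes a handful of real positive slices with generator-synthesized positives before adapting a VGG-19 classifier. The `generator`, tensor shapes, and sample counts are assumptions.

```python
# Hedged sketch: GAN-augmented training set for a binary COVID-19 CT classifier.
# `generator` stands in for an already-trained image-to-image GAN; it is assumed here.
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset
from torchvision.models import vgg19

def build_training_set(real_pos, real_neg, generator, n_synth=200):
    """Combine a few real positive slices with synthetic positives made from normals."""
    with torch.no_grad():
        synth_pos = generator(real_neg[:n_synth])          # normal -> pneumonia-like slices
    images = torch.cat([real_pos, synth_pos, real_neg])
    labels = torch.cat([
        torch.ones(len(real_pos) + len(synth_pos), dtype=torch.long),   # positive class
        torch.zeros(len(real_neg), dtype=torch.long),                   # negative class
    ])
    return TensorDataset(images, labels)

# Classifier head: VGG-19 adapted to two classes, mirroring the paper's evaluation setup.
model = vgg19(weights=None)
model.classifier[-1] = nn.Linear(4096, 2)
```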
Affiliation(s)
- Sumeet Menon
- University of Maryland, 1000 Hilltop Circle, 21250, Baltimore, MD, USA.
- Josh Galita
- University of Maryland, 1000 Hilltop Circle, 21250, Baltimore, MD, USA
- Michael Morris
- University of Maryland, 1000 Hilltop Circle, 21250, Baltimore, MD, USA
- Institute for Data Science and Computing, University of Miami, 33124, Coral Gables, FL, USA
- University of Miami Miller School of Medicine, Miami, FL, USA
- Networking Health, Oak Manor Drive, Suite 201, 21061, Glen Burnie, MD, USA
- National Institutes of Health Clinical Center, 9000 Rockville Pike, Building 10, Room 1C455, Bethesda, MD, USA
- Babak Saboury
- University of Maryland, 1000 Hilltop Circle, 21250, Baltimore, MD, USA
- Institute for Data Science and Computing, University of Miami, 33124, Coral Gables, FL, USA
- National Institutes of Health Clinical Center, 9000 Rockville Pike, Building 10, Room 1C455, Bethesda, MD, USA
- Yaacov Yesha
- University of Maryland, 1000 Hilltop Circle, 21250, Baltimore, MD, USA
- Yelena Yesha
- University of Maryland, 1000 Hilltop Circle, 21250, Baltimore, MD, USA
- Institute for Data Science and Computing, University of Miami, 33124, Coral Gables, FL, USA
- Phuong Nguyen
- University of Maryland, 1000 Hilltop Circle, 21250, Baltimore, MD, USA
- David Chapman
- University of Maryland, 1000 Hilltop Circle, 21250, Baltimore, MD, USA
3. Garcea F, Serra A, Lamberti F, Morra L. Data augmentation for medical imaging: A systematic literature review. Comput Biol Med 2023;152:106391. [PMID: 36549032] [DOI: 10.1016/j.compbiomed.2022.106391]
Abstract
Recent advances in Deep Learning have largely benefited from larger and more diverse training sets. However, collecting large datasets for medical imaging is still a challenge due to privacy concerns and labeling costs. Data augmentation makes it possible to greatly expand the amount and variety of data available for training without actually collecting new samples. Data augmentation techniques range from simple yet surprisingly effective transformations such as cropping, padding, and flipping, to complex generative models. Depending on the nature of the input and the visual task, different data augmentation strategies are likely to perform differently. For this reason, it is conceivable that medical imaging requires specific augmentation strategies that generate plausible data samples and enable effective regularization of deep neural networks. Data augmentation can also be used to augment specific classes that are underrepresented in the training set, e.g., to generate artificial lesions. The goal of this systematic literature review is to investigate which data augmentation strategies are used in the medical domain and how they affect the performance of clinical tasks such as classification, segmentation, and lesion detection. To this end, a comprehensive analysis of more than 300 articles published in recent years (2018-2022) was conducted. The results highlight the effectiveness of data augmentation across organs, modalities, tasks, and dataset sizes, and suggest potential avenues for future research.
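To make the "simple yet surprisingly effective" transformations concrete, here is a minimal torchvision-style pipeline covering padding, cropping, and flipping. The parameter values are illustrative and are not taken from the review.

```python
# Illustrative basic augmentation pipeline (padding, random cropping, flipping).
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.Pad(8),                        # padding before cropping
    transforms.RandomCrop(224),               # random spatial crop
    transforms.RandomHorizontalFlip(p=0.5),   # horizontal flip half the time
    transforms.ToTensor(),
])
```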
Affiliation(s)
- Fabio Garcea
- Dipartimento di Automatica e Informatica, Politecnico di Torino, C.so Duca degli Abruzzi, 24, Torino, 10129, Italy
- Alessio Serra
- Dipartimento di Automatica e Informatica, Politecnico di Torino, C.so Duca degli Abruzzi, 24, Torino, 10129, Italy
- Fabrizio Lamberti
- Dipartimento di Automatica e Informatica, Politecnico di Torino, C.so Duca degli Abruzzi, 24, Torino, 10129, Italy
- Lia Morra
- Dipartimento di Automatica e Informatica, Politecnico di Torino, C.so Duca degli Abruzzi, 24, Torino, 10129, Italy.
4. Celard P, Iglesias EL, Sorribes-Fdez JM, Romero R, Vieira AS, Borrajo L. A survey on deep learning applied to medical images: from simple artificial neural networks to generative models. Neural Comput Appl 2022;35:2291-2323. [PMID: 36373133] [PMCID: PMC9638354] [DOI: 10.1007/s00521-022-07953-4]
Abstract
Deep learning techniques, in particular generative models, have taken on great importance in medical image analysis. This paper surveys fundamental deep learning concepts related to medical image generation. It provides concise overviews of studies that apply some of the latest state-of-the-art models to medical images of different body areas or organs affected by disease (e.g., brain tumors and COVID-19 lung pneumonia). The motivation for this study is to offer a comprehensive overview of artificial neural networks (NNs) and deep generative models in medical imaging, so that more groups and authors who are not yet familiar with deep learning consider its use in medical work. We review the use of generative models, such as generative adversarial networks and variational autoencoders, as techniques to achieve semantic segmentation, data augmentation, and better classification algorithms, among other purposes. In addition, a collection of widely used public medical datasets containing magnetic resonance (MR) images, computed tomography (CT) scans, and common pictures is presented. Finally, we summarize the current state of generative models in medical imaging, including key features, current challenges, and future research paths.
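As a minimal illustration of one building block this survey covers, the snippet below shows the variational autoencoder reparameterization step used when sampling the latent code. It is generic and not tied to any particular model in the survey.

```python
# Generic VAE reparameterization: z = mu + sigma * eps, with eps ~ N(0, I).
import torch

def reparameterize(mu: torch.Tensor, log_var: torch.Tensor) -> torch.Tensor:
    std = torch.exp(0.5 * log_var)   # sigma recovered from the log-variance
    eps = torch.randn_like(std)      # standard normal noise
    return mu + std * eps            # differentiable sample of the latent code
```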
Affiliation(s)
- P. Celard
- Computer Science Department, Universidade de Vigo, Escuela Superior de Ingeniería Informática, Campus Universitario As Lagoas, 32004 Ourense, Spain
- CINBIO - Biomedical Research Centre, Universidade de Vigo, Campus Universitario Lagoas-Marcosende, 36310 Vigo, Spain
- SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, Vigo, Spain
- E. L. Iglesias
- Computer Science Department, Universidade de Vigo, Escuela Superior de Ingeniería Informática, Campus Universitario As Lagoas, 32004 Ourense, Spain
- CINBIO - Biomedical Research Centre, Universidade de Vigo, Campus Universitario Lagoas-Marcosende, 36310 Vigo, Spain
- SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, Vigo, Spain
- J. M. Sorribes-Fdez
- Computer Science Department, Universidade de Vigo, Escuela Superior de Ingeniería Informática, Campus Universitario As Lagoas, 32004 Ourense, Spain
- CINBIO - Biomedical Research Centre, Universidade de Vigo, Campus Universitario Lagoas-Marcosende, 36310 Vigo, Spain
- SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, Vigo, Spain
- R. Romero
- Computer Science Department, Universidade de Vigo, Escuela Superior de Ingeniería Informática, Campus Universitario As Lagoas, 32004 Ourense, Spain
- CINBIO - Biomedical Research Centre, Universidade de Vigo, Campus Universitario Lagoas-Marcosende, 36310 Vigo, Spain
- SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, Vigo, Spain
- A. Seara Vieira
- Computer Science Department, Universidade de Vigo, Escuela Superior de Ingeniería Informática, Campus Universitario As Lagoas, 32004 Ourense, Spain
- CINBIO - Biomedical Research Centre, Universidade de Vigo, Campus Universitario Lagoas-Marcosende, 36310 Vigo, Spain
- SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, Vigo, Spain
- L. Borrajo
- Computer Science Department, Universidade de Vigo, Escuela Superior de Ingeniería Informática, Campus Universitario As Lagoas, 32004 Ourense, Spain
- CINBIO - Biomedical Research Centre, Universidade de Vigo, Campus Universitario Lagoas-Marcosende, 36310 Vigo, Spain
- SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, Vigo, Spain
5. Latif G, Morsy H, Hassan A, Alghazo J. Novel Coronavirus and Common Pneumonia Detection from CT Scans Using Deep Learning-Based Extracted Features. Viruses 2022;14:1667. [PMID: 36016288] [PMCID: PMC9414828] [DOI: 10.3390/v14081667]
Abstract
COVID-19, which was declared a pandemic on 11 March 2020, is still infecting millions to date, as the vaccines that have been developed do not prevent the disease but rather reduce the severity of the symptoms. Until a vaccine is developed that can prevent COVID-19 infection, the testing of individuals will remain a continuous process. Medical personnel must monitor and treat all health conditions; hence, the time-consuming process of monitoring and testing all individuals for COVID-19 becomes an impossible task, especially as COVID-19 shares similar symptoms with the common cold and pneumonia. Some over-the-counter tests have been developed and sold, but they are unreliable and add an additional burden because false-positive cases have to visit hospitals and undergo specialized diagnostic tests to confirm the diagnosis. Therefore, the need for systems that can detect and diagnose COVID-19 automatically without human intervention is still an urgent priority and will remain so, because the same technology can be used for future pandemics and other health conditions. In this paper, we propose a modified machine learning (ML) process that integrates deep learning (DL) algorithms for feature extraction and well-known classifiers that can accurately detect and diagnose COVID-19 from chest CT scans. Publicly available datasets provided by the China Consortium for Chest CT Image Investigation (CC-CCII) were used. The highest average accuracy obtained was 99.9% using the modified ML process when 2000 features were extracted using GoogleNet and ResNet18 and classified with the support vector machine (SVM) classifier. The results obtained using the modified ML process were higher than those of similar methods reported in the extant literature using the same datasets or different datasets of similar size; thus, this study adds value to the current body of knowledge. Further research in this field is required to develop methods that can be applied in hospitals and can better equip mankind to be prepared for any future pandemics.
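The general pipeline this paper describes, deep features from a pretrained CNN fed to a classical SVM, can be sketched as follows. The backbone choice, preprocessing, and the 2000-feature selection step are simplified assumptions rather than the authors' exact procedure.

```python
# Hedged sketch: CNN feature extraction followed by SVM classification of CT slices.
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights
from sklearn.svm import SVC

backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()              # expose the 512-d penultimate features
backbone.eval()

def extract_features(batch: torch.Tensor):
    """batch: (N, 3, 224, 224) preprocessed CT slices -> (N, 512) feature array."""
    with torch.no_grad():
        return backbone(batch).cpu().numpy()

# With slices and labels assumed to be loaded from a dataset such as CC-CCII:
# feats = extract_features(train_slices)
# clf = SVC(kernel="rbf").fit(feats, train_labels)
# preds = clf.predict(extract_features(test_slices))
```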
Affiliation(s)
- Ghazanfar Latif
- Computer Science Department, Prince Mohammad Bin Fahd University, Khobar 34754, Saudi Arabia
- Department of Computer Sciences and Mathematics, Université du Québec à Chicoutimi, 555 Boulevard de l’Université, Chicoutimi, QC G7H 2B1, Canada
- Hamdy Morsy
- Department of Applied Natural Sciences, College of Community, Qassim University, Buraydah 52571, Saudi Arabia;
- Department of Electronics and communications, College of Engineering, Helwan University, Cairo 11792, Egypt
- Asmaa Hassan
- Faculty of Medicine, Helwan University, Helwan 11795, Egypt;
- Jaafar Alghazo
- Department of Electrical and Computer Engineering, Virginia Military Institute, Lexington, VA 24450, USA;
6. Ali H, Shah Z. Combating COVID-19 Using Generative Adversarial Networks and Artificial Intelligence for Medical Images: Scoping Review. JMIR Med Inform 2022;10:e37365. [PMID: 35709336] [PMCID: PMC9246088] [DOI: 10.2196/37365]
Abstract
BACKGROUND Research on the diagnosis of COVID-19 using lung images is limited by the scarcity of imaging data. Generative adversarial networks (GANs) are popular for synthesis and data augmentation. GANs have been explored for data augmentation to enhance the performance of artificial intelligence (AI) methods for the diagnosis of COVID-19 within lung computed tomography (CT) and X-ray images. However, the role of GANs in overcoming data scarcity for COVID-19 is not well understood. OBJECTIVE This review presents a comprehensive study on the role of GANs in addressing the challenges related to COVID-19 data scarcity and diagnosis. It is the first review that summarizes different GAN methods and lung imaging data sets for COVID-19. It attempts to answer the questions related to applications of GANs, popular GAN architectures, frequently used image modalities, and the availability of source code. METHODS A search was conducted on 5 databases, namely PubMed, IEEE Xplore, Association for Computing Machinery (ACM) Digital Library, Scopus, and Google Scholar. The search was conducted from October 11 to 13, 2021, using intervention keywords, such as "generative adversarial networks" and "GANs," and application keywords, such as "COVID-19" and "coronavirus." The review was performed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) guidelines for systematic and scoping reviews. Only studies that reported GAN-based methods for analyzing chest X-ray, chest CT, or chest ultrasound images were included; studies that used deep learning methods but did not use GANs were excluded. No restrictions were imposed on the country of publication, study design, or outcomes. Only studies in English published from 2020 to 2022 were included. RESULTS This review included 57 full-text studies that reported the use of GANs for different applications in COVID-19 lung imaging data. Most of the studies (n=42, 74%) used GANs for data augmentation to enhance the performance of AI techniques for COVID-19 diagnosis. Other popular applications of GANs were segmentation of lungs and superresolution of lung images. The cycleGAN and the conditional GAN were the most commonly used architectures, used in 9 studies each. In addition, 29 (51%) studies used chest X-ray images, while 21 (37%) studies used CT images for the training of GANs. For the majority of the studies (n=47, 82%), the experiments were conducted and results were reported using publicly available data. A secondary evaluation of the results by radiologists/clinicians was reported by only 2 (4%) studies. CONCLUSIONS Studies have shown that GANs have great potential to address the data scarcity challenge for lung images in COVID-19. Data synthesized with GANs have been helpful in improving the training of convolutional neural network (CNN) models for the diagnosis of COVID-19. In addition, GANs have also contributed to enhancing CNN performance through superresolution of the images and segmentation. This review also identified key limitations for the potential translation of GAN-based methods into clinical applications.
Affiliation(s)
- Hazrat Ali
- College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Zubair Shah
- College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
7. de Vente C, Boulogne LH, Venkadesh KV, Sital C, Lessmann N, Jacobs C, Sanchez CI, van Ginneken B. Automated COVID-19 Grading With Convolutional Neural Networks in Computed Tomography Scans: A Systematic Comparison. IEEE Trans Artif Intell 2022;3:129-138. [PMID: 35582210] [PMCID: PMC9014473] [DOI: 10.1109/tai.2021.3115093]
Abstract
Amidst the ongoing pandemic, the assessment of computed tomography (CT) images for COVID-19 presence can exceed the workload capacity of radiologists. Several studies addressed this issue by automating COVID-19 classification and grading from CT images with convolutional neural networks (CNNs). Many of these studies reported initial results of algorithms that were assembled from commonly used components. However, the choice of the components of these algorithms was often pragmatic rather than systematic, and systems were not compared to each other across papers in a fair manner. We systematically investigated the effectiveness of using 3-D CNNs instead of 2-D CNNs for seven commonly used architectures, including DenseNet, Inception, and ResNet variants. For the architecture that performed best, we furthermore investigated the effect of initializing the network with pretrained weights, providing automatically computed lesion maps as additional network input, and predicting a continuous instead of a categorical output. A 3-D DenseNet-201 with these components achieved an area under the receiver operating characteristic curve (AUC) of 0.930 on our test set of 105 CT scans and an AUC of 0.919 on a publicly available set of 742 CT scans, a substantial improvement over a previously published 2-D CNN. This article provides insights into the performance benefits of various components for COVID-19 classification and grading systems. We have created a challenge on grand-challenge.org to allow for a fair comparison between the results of this and future research.
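The core 2-D versus 3-D distinction investigated in the study can be sketched as follows; the tensor shapes and layer sizes are illustrative and do not reproduce the study's actual architectures.

```python
# Illustrative contrast: per-slice 2-D convolution vs volumetric 3-D convolution on a CT scan.
import torch
import torch.nn as nn

volume = torch.randn(1, 1, 64, 224, 224)    # (batch, channel, slices, height, width)

conv2d = nn.Conv2d(1, 8, kernel_size=3, padding=1)
conv3d = nn.Conv3d(1, 8, kernel_size=3, padding=1)

# 2-D: treat each slice independently by folding slices into the batch dimension.
slices = volume.permute(0, 2, 1, 3, 4).reshape(-1, 1, 224, 224)
per_slice = conv2d(slices)                   # (64, 8, 224, 224)

# 3-D: convolve jointly across slices, height, and width.
per_volume = conv3d(volume)                  # (1, 8, 64, 224, 224)
```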
Affiliation(s)
- Coen de Vente
- Radboud University Medical Center, Donders Institute for Brain, Cognition and Behaviour, Department of Medical Imaging, 6525 GA Nijmegen, The Netherlands
- Informatics Institute, Faculty of Science, University of Amsterdam, 1012 WX Amsterdam, The Netherlands
- Luuk H Boulogne
- Radboud University Medical Center, Radboud Institute for Health Sciences, Department of Medical Imaging, 6525 GA Nijmegen, The Netherlands
- Kiran Vaidhya Venkadesh
- Radboud University Medical Center, Radboud Institute for Health Sciences, Department of Medical Imaging, 6525 GA Nijmegen, The Netherlands
- Cheryl Sital
- Radboud University Medical Center, Radboud Institute for Health Sciences, Department of Medical Imaging, 6525 GA Nijmegen, The Netherlands
- Nikolas Lessmann
- Radboud University Medical Center, Radboud Institute for Health Sciences, Department of Medical Imaging, 6525 GA Nijmegen, The Netherlands
- Colin Jacobs
- Radboud University Medical Center, Radboud Institute for Health Sciences, Department of Medical Imaging, 6525 GA Nijmegen, The Netherlands
- Clara I Sanchez
- Informatics Institute, Faculty of Science, University of Amsterdam, 1012 WX Amsterdam, The Netherlands
- Bram van Ginneken
- Radboud University Medical Center, Radboud Institute for Health Sciences, Department of Medical Imaging, 6525 GA Nijmegen, The Netherlands
8. Li Z, Zhang J, Li B, Gu X, Luo X. COVID-19 diagnosis on CT scan images using a generative adversarial network and concatenated feature pyramid network with an attention mechanism. Med Phys 2021;48:4334-4349. [PMID: 34117783] [PMCID: PMC8420535] [DOI: 10.1002/mp.15044]
Abstract
OBJECTIVE Coronavirus disease 2019 (COVID-19) has caused hundreds of thousands of infections and deaths. Efficient diagnostic methods could help curb its global spread. The purpose of this study was to develop and evaluate a method for accurately diagnosing COVID-19 based on computed tomography (CT) scans in real time. METHODS We propose an architecture named "concatenated feature pyramid network" ("Concat-FPN") with an attention mechanism, which concatenates feature maps at multiple scales. The proposed architecture is then used to form two networks, COVID-CT-GAN and COVID-CT-DenseNet, the former for data augmentation and the latter for classification. RESULTS The proposed method is evaluated on COVID-19 CT datasets of three different orders of magnitude in size. Compared with the method without GAN-based data augmentation or with the original auxiliary classifier generative adversarial network, COVID-CT-GAN increases the accuracy by 2% to 3%, the recall by 2% to 4%, the precision by 1% to 3%, the F1-score by 1% to 3%, and the area under the curve by 1% to 4%. Compared with the original network DenseNet-201, COVID-CT-DenseNet increases the accuracy by 1% to 3%, the recall by 4% to 9%, the precision by 1%, the F1-score by 1% to 3%, and the area under the curve by 2%. CONCLUSION The experimental results show that our method improves the efficiency of diagnosing COVID-19 on CT images and helps overcome the problem of limited training data when using deep learning methods to diagnose COVID-19. SIGNIFICANCE Our method can help clinicians build deep learning models on their private datasets to achieve automatic diagnosis of COVID-19 with high precision.
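To illustrate the general idea of concatenating multi-scale feature maps and reweighting them with an attention mechanism, here is a hedged sketch. The channel counts and the squeeze-and-excitation-style channel attention are assumptions for illustration, not the authors' Concat-FPN definition.

```python
# Hedged sketch: upsample and concatenate multi-scale feature maps, apply channel
# attention, then fuse. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConcatPyramidBlock(nn.Module):
    def __init__(self, in_channels=(64, 128, 256), out_channels=128):
        super().__init__()
        total = sum(in_channels)
        self.attention = nn.Sequential(            # squeeze-and-excitation-style gate
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(total, total // 8, 1), nn.ReLU(),
            nn.Conv2d(total // 8, total, 1), nn.Sigmoid(),
        )
        self.fuse = nn.Conv2d(total, out_channels, 1)

    def forward(self, feats):
        target = feats[0].shape[-2:]               # spatial size of the finest map
        ups = [F.interpolate(f, size=target, mode="bilinear", align_corners=False)
               for f in feats]
        x = torch.cat(ups, dim=1)                  # channel-wise concatenation
        return self.fuse(x * self.attention(x))    # attend, then fuse to one map

# Example with three assumed backbone feature maps of decreasing resolution.
feats = [torch.randn(1, 64, 56, 56), torch.randn(1, 128, 28, 28), torch.randn(1, 256, 14, 14)]
fused = ConcatPyramidBlock()(feats)                # (1, 128, 56, 56)
```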
Affiliation(s)
- Zonggui Li
- School of Information Science and Engineering, Yunnan University, Kunming, China
- Junhua Zhang
- School of Information Science and Engineering, Yunnan University, Kunming, China
- Bo Li
- School of Information Science and Engineering, Yunnan University, Kunming, China
- Xiaoying Gu
- School of Information Science and Engineering, Yunnan University, Kunming, China
- Xudong Luo
- School of Information Science and Engineering, Yunnan University, Kunming, China