1. Wang Z, Lin Y, Zhu X. Transfer Contrastive Learning for Raman Spectroscopy Skin Cancer Tissue Classification. IEEE J Biomed Health Inform 2024;28:7332-7344. PMID: 39208055. DOI: 10.1109/jbhi.2024.3451950.
Abstract
Using Raman spectroscopy (RS) signals for skin cancer tissue classification has recently drawn significant attention because RS is a non-invasive optical technique that probes molecular structures and conformations within biological tissue for diagnosis. In practice, RS signals are noisy and unstable for training machine learning models, and the scarcity of tissue samples makes it challenging to learn reliable deep-learning networks for clinical use. In this paper, we advocate a Transfer Contrastive Learning Paradigm (TCLP) to address the scarcity and noise of RS data for skin cancer tissue classification. To overcome the challenge of limited samples, TCLP leverages transfer learning to pre-train deep learning models using RS data from similar domains (collected with different RS equipment for other tasks). To tackle the noisy nature of RS signals, TCLP uses contrastive learning to augment the signals and learn reliable feature representations for the final classification. Experiments and comparisons, including statistical tests, demonstrate that TCLP outperforms existing deep learning baselines for RS signal-based skin cancer tissue classification.
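The abstract does not spell out which augmentations TCLP applies. As a rough illustration of the contrastive idea, two randomly perturbed views of the same spectrum should remain close under a similarity measure; the additive-noise and intensity-scaling augmentations below are assumptions for illustration, not the paper's exact choices:

```python
import numpy as np

def augment_spectrum(x, rng, noise_sd=0.01, scale_range=(0.9, 1.1)):
    """Return a randomly perturbed view of a 1-D spectrum:
    additive Gaussian noise plus a global intensity rescaling."""
    scale = rng.uniform(*scale_range)
    return scale * x + rng.normal(0.0, noise_sd, size=x.shape)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
spectrum = np.sin(np.linspace(0, 6 * np.pi, 500)) + 1.5  # toy stand-in for an RS signal
view1 = augment_spectrum(spectrum, rng)
view2 = augment_spectrum(spectrum, rng)
sim = cosine_similarity(view1, view2)  # two views of the same signal stay highly similar
```

In a full contrastive setup these view pairs would feed an encoder trained with a loss such as NT-Xent; the high cosine similarity of the raw views simply shows that the augmentations preserve spectral identity.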
2. Christina Dally E, Banu Rekha B. Automated Chronic Obstructive Pulmonary Disease (COPD) detection and classification using Mayfly optimization with deep belief network model. Biomed Signal Process Control 2024;96:106488. DOI: 10.1016/j.bspc.2024.106488.
3. Wu Y, Xia S, Liang Z, Chen R, Qi S. Artificial intelligence in COPD CT images: identification, staging, and quantitation. Respir Res 2024;25:319. PMID: 39174978; PMCID: PMC11340084. DOI: 10.1186/s12931-024-02913-z.
Abstract
Chronic obstructive pulmonary disease (COPD) stands as a significant global health challenge, with its intricate pathophysiological manifestations often demanding advanced diagnostic strategies. Recent applications of artificial intelligence (AI) in medical imaging, especially computed tomography, present a promising avenue for transformative changes in COPD diagnosis and management. This review examines the capabilities and advancements of AI, particularly machine learning and deep learning, and their applications in COPD identification, staging, and imaging phenotypes. Emphasis is placed on AI-powered insights into emphysema, airway dynamics, and vascular structures. The challenges linked with data intricacies and the integration of AI into the clinical landscape are discussed. Lastly, the review casts a forward-looking perspective, highlighting emerging innovations in AI for COPD imaging and the potential of interdisciplinary collaborations, hinting at a future where AI does not merely support but pioneers breakthroughs in COPD care. Through this review, we aim to provide a comprehensive understanding of the current state and future potential of AI in shaping the landscape of COPD diagnosis and management.
Affiliation(s)
- Yanan Wu: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Shuyue Xia: Respiratory Department, Central Hospital Affiliated to Shenyang Medical College, Shenyang, China; Key Laboratory of Medicine and Engineering for Chronic Obstructive Pulmonary Disease in Liaoning Province, Shenyang, China
- Zhenyu Liang: State Key Laboratory of Respiratory Disease, National Clinical Research Center for Respiratory Disease, Guangzhou Institute of Respiratory Health, The National Center for Respiratory Medicine, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Rongchang Chen: State Key Laboratory of Respiratory Disease, National Clinical Research Center for Respiratory Disease, Guangzhou Institute of Respiratory Health, The National Center for Respiratory Medicine, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China; Shenzhen Institute of Respiratory Disease, Shenzhen People's Hospital, Shenzhen, China
- Shouliang Qi: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
4. Zhao M, Wu Y, Li Y, Zhang X, Xia S, Xu J, Chen R, Liang Z, Qi S. Learning and depicting lobe-based radiomics feature for COPD severity staging in low-dose CT images. BMC Pulm Med 2024;24:294. PMID: 38915049; PMCID: PMC11197240. DOI: 10.1186/s12890-024-03109-3.
Abstract
BACKGROUND Chronic obstructive pulmonary disease (COPD) is a prevalent and debilitating respiratory condition that imposes a significant healthcare burden worldwide. Accurate staging of COPD severity is crucial for patient management and treatment planning. METHODS This retrospective study included 530 hospital patients. A lobe-based radiomics method was proposed to classify COPD severity using computed tomography (CT) images. First, the lung lobes were segmented with a convolutional neural network model. Second, radiomic features were extracted from each lung lobe, the features of the five lobes were merged, and feature selection was performed using a variance threshold, the t-test, and the least absolute shrinkage and selection operator (LASSO). Finally, COPD severity was classified by a support vector machine (SVM) classifier. RESULTS A total of 104 features were selected for staging COPD according to the Global Initiative for Chronic Obstructive Lung Disease (GOLD), with the SVM classifier achieving an accuracy of 0.63. An additional set of 132 features was selected to distinguish milder (GOLD I + GOLD II) from more severe (GOLD III + GOLD IV) COPD, for which the SVM achieved an accuracy of 0.87. CONCLUSIONS The results show that the lobe-based radiomics method can significantly contribute to the refinement of COPD severity staging. Merging radiomic features from each lung lobe yields a more comprehensive feature set and captures the CT radiomic characteristics of the lung better than treating the lung as a single whole.
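The feature-selection and classification chain described in METHODS (variance threshold, univariate testing, LASSO, then an SVM) maps naturally onto a scikit-learn pipeline. The sketch below uses synthetic data and an ANOVA F-test in place of the paper's t-test (equivalent for two classes); the threshold, k, and alpha values are illustrative assumptions:

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel, SelectKBest, VarianceThreshold, f_classif
from sklearn.linear_model import Lasso
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 120))  # stand-in for merged per-lobe radiomic features
y = (X[:, 0] + X[:, 1] + 0.5 * rng.normal(size=200) > 0).astype(int)  # binary severity label

pipe = Pipeline([
    ("variance", VarianceThreshold(threshold=1e-3)),        # drop near-constant features
    ("scale", StandardScaler()),
    ("univariate", SelectKBest(f_classif, k=60)),           # ANOVA F-test (t-test analogue)
    ("lasso_select", SelectFromModel(Lasso(alpha=0.05))),   # sparse LASSO-based selection
    ("svm", SVC(kernel="rbf")),                             # final severity classifier
])
pipe.fit(X, y)
acc = pipe.score(X, y)  # training accuracy on the toy data
```

In practice the radiomic feature matrix would come from a package such as pyradiomics, and accuracy would be estimated with cross-validation rather than on the training set.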
Affiliation(s)
- Meng Zhao: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Yanan Wu: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Yifu Li: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Xiaoyu Zhang: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Shuyue Xia: Respiratory Department, Central Hospital Affiliated to Shenyang Medical College, Shenyang, China
- Jiaxuan Xu: State Key Laboratory of Respiratory Disease, National Clinical Research Center for Respiratory Disease, Guangzhou Institute of Respiratory Health, The National Center for Respiratory Medicine, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Rongchang Chen: State Key Laboratory of Respiratory Disease, National Clinical Research Center for Respiratory Disease, Guangzhou Institute of Respiratory Health, The National Center for Respiratory Medicine, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China; Key Laboratory of Respiratory Disease of Shenzhen, Shenzhen Institute of Respiratory Disease, Shenzhen People's Hospital (Second Affiliated Hospital of Jinan University, First Affiliated Hospital of South University of Science and Technology of China), Shenzhen, China
- Zhenyu Liang: State Key Laboratory of Respiratory Disease, National Clinical Research Center for Respiratory Disease, Guangzhou Institute of Respiratory Health, The National Center for Respiratory Medicine, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Shouliang Qi: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
5. Yang X. Application and Prospects of Artificial Intelligence Technology in Early Screening of Chronic Obstructive Pulmonary Disease at Primary Healthcare Institutions in China. Int J Chron Obstruct Pulmon Dis 2024;19:1061-1067. PMID: 38765765; PMCID: PMC11102166. DOI: 10.2147/copd.s458935.
Abstract
Chronic Obstructive Pulmonary Disease (COPD), one of the major global health threats, has a particularly high prevalence and mortality rate in China. Early diagnosis is crucial for controlling disease progression and improving patient prognosis; however, because early symptoms are absent or mild, the awareness and diagnosis rates of COPD remain low. Against this background, primary healthcare institutions play a key role in identifying high-risk groups and making early diagnoses. With the development of Artificial Intelligence (AI) technology, its potential to enhance the efficiency and accuracy of COPD screening is evident. This paper discusses the characteristics of high-risk groups for COPD, current screening methods, and the application of AI technology in various aspects of screening. It also highlights challenges in AI application, such as data privacy, algorithm accuracy, and interpretability. Suggestions for improvement, including enhancing AI technology dissemination, improving data quality, promoting interdisciplinary cooperation, and strengthening policy and financial support, aim to further enhance the effectiveness and prospects of AI technology in COPD screening at primary healthcare institutions in China.
Affiliation(s)
- Xu Yang: Department of General Practice, Donghuashi Community Health Service Center, Beijing, People's Republic of China
6. Guo Z, Wu T, Lockhart TE, Soangra R, Yoon H. Correlation enhanced distribution adaptation for prediction of fall risk. Sci Rep 2024;14:3477. PMID: 38347050; PMCID: PMC10861595. DOI: 10.1038/s41598-024-54053-5.
Abstract
With technological advancements in diagnostic imaging, smart sensing, and wearables, a multitude of heterogeneous sources or modalities are available to proactively monitor the health of the elderly. Because fall risk increases with age, early diagnostic tools are crucial for preventing future falls. However, during the early stage of diagnosis, there is often limited or no labeled data (expert-confirmed diagnostic information) available in the target domain (new cohort) to determine the proper treatment for older adults. Instead, there are multiple related but non-identical labeled datasets from existing cohorts or different institutions. Integrating different data sources with labeled and unlabeled samples to predict a patient's condition poses a significant challenge. Traditional machine learning models assume that data for new patients follow a similar distribution; when this assumption is violated, the trained models fall short of the expected accuracy, raising the risk of misdiagnosis. To address this issue, we utilize domain adaptation (DA) techniques, which employ labeled data from one or more related source domains to tackle discrepancies across data sources and achieve a robust diagnosis for new patients. In our research, we developed an unsupervised DA model that aligns two domains by creating a domain-invariant feature representation, and then built a robust fall-risk prediction model on these new feature representations. Results from simulation studies and real-world applications demonstrate that the proposed approach outperforms existing models.
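The abstract describes aligning two domains via a domain-invariant feature representation but does not give the exact formulation. One classic correlation-based alignment, CORAL, matches second-order statistics by whitening the source features and re-colouring them with the target covariance; the sketch below shows that idea on synthetic data and is not necessarily this paper's method:

```python
import numpy as np

def coral_align(Xs, Xt, eps=1e-6):
    """CORAL-style alignment: whiten the source features, then
    re-colour them with the target covariance so that second-order
    statistics match across the two domains."""
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])

    def mat_pow(C, p):  # matrix power via eigendecomposition (C is symmetric PSD)
        w, V = np.linalg.eigh(C)
        return (V * np.power(w, p)) @ V.T

    return (Xs - Xs.mean(0)) @ mat_pow(Cs, -0.5) @ mat_pow(Ct, 0.5) + Xt.mean(0)

rng = np.random.default_rng(1)
Xs = rng.normal(size=(300, 4)) @ np.diag([1.0, 2.0, 0.5, 3.0])  # source cohort features
Xt = rng.normal(size=(300, 4))                                  # target cohort features
Xs_aligned = coral_align(Xs, Xt)
gap_before = np.linalg.norm(np.cov(Xs, rowvar=False) - np.cov(Xt, rowvar=False))
gap_after = np.linalg.norm(np.cov(Xs_aligned, rowvar=False) - np.cov(Xt, rowvar=False))
```

After alignment the source covariance matches the target's, so a fall-risk classifier trained on the aligned source data sees target-like feature statistics.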
Affiliation(s)
- Ziqi Guo: Department of Systems Science and Industrial Engineering, The State University of New York at Binghamton, Binghamton, USA
- Teresa Wu: School of Computing and Augmented Intelligence, Arizona State University, Tempe, USA
- Thurmon E Lockhart: School of Biological and Health Systems Engineering, Arizona State University, Tempe, USA
- Rahul Soangra: Department of Physical Therapy, Chapman University, Orange, USA
- Hyunsoo Yoon: Department of Industrial Engineering, Yonsei University, Seoul, Korea
7. Kumar S, Bhagat V, Sahu P, Chaube MK, Behera AK, Guizani M, Gravina R, Di Dio M, Fortino G, Curry E, Alsamhi SH. A novel multimodal framework for early diagnosis and classification of COPD based on CT scan images and multivariate pulmonary respiratory diseases. Comput Methods Programs Biomed 2024;243:107911. PMID: 37981453. DOI: 10.1016/j.cmpb.2023.107911.
Abstract
BACKGROUND AND OBJECTIVE Chronic Obstructive Pulmonary Disease (COPD) is one of the world's most burdensome diseases. Its early diagnosis with existing methods, such as statistical machine learning techniques, medical diagnostic tools, and conventional medical procedures, is challenging: these methods are prone to misclassification, and accurate prediction takes a long time. Given the severe consequences of COPD, detection and accurate diagnosis at an early stage are essential. This paper presents the design and development of a multimodal framework for early diagnosis and accurate prediction of COPD based on prepared Computerized Tomography (CT) scan images and lung sound/cough (audio) samples, using machine learning techniques. METHOD The proposed multimodal framework extracts texture, histogram intensity, chroma, Mel-Frequency Cepstral Coefficients (MFCCs), and Gaussian scale space features from the prepared CT images and lung sound/cough samples. Data from the All India Institute of Medical Sciences (AIIMS), Raipur, India, and an open dataset of respiratory CT images and lung sound/cough (audio) samples validate the proposed framework. Discriminatory features are selected from the extracted feature sets using unsupervised ML techniques, and customized ensemble learning techniques are applied to perform early classification and assess the severity levels of COPD patients. RESULTS The proposed framework achieved 97.50%, 98%, and 95.30% accuracy for early diagnosis of COPD patients with the fusion technique, the CT diagnostic model, and the cough sample model, respectively. CONCLUSION Finally, we compare the performance of the proposed framework with existing methods, current approaches, and conventional benchmark techniques for early diagnosis.
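At a high level, the multimodal fusion step amounts to concatenating the CT-derived and audio-derived feature vectors and feeding them to an ensemble classifier. The sketch below illustrates this with random stand-in features and a simple soft-voting ensemble; the feature dimensions and estimators are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 200
ct_features = rng.normal(size=(n, 20))     # stand-in: texture/histogram features from CT
audio_features = rng.normal(size=(n, 13))  # stand-in: MFCC summary features from cough audio
y = (ct_features[:, 0] + audio_features[:, 0] > 0).astype(int)  # toy COPD label

X = np.hstack([ct_features, audio_features])  # early fusion by concatenation
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=50, random_state=0))],
    voting="soft",  # average predicted probabilities across base models
)
ensemble.fit(X, y)
acc = ensemble.score(X, y)  # training accuracy on the toy data
```

Real pipelines would replace the random matrices with actual CT and MFCC feature extractors and evaluate with held-out data.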
Affiliation(s)
- Santosh Kumar: Department of Computer Science and Engineering, IIIT-Naya Raipur, Chhattisgarh, India
- Vijesh Bhagat: Department of Computer Science and Engineering, IIIT-Naya Raipur, Chhattisgarh, India
- Prakash Sahu: Department of Computer Science and Engineering, IIIT-Naya Raipur, Chhattisgarh, India
- Ajoy Kumar Behera: Department of Pulmonary Medicine & TB, All India Institute of Medical Sciences (AIIMS), Raipur, Chhattisgarh, India
- Mohsen Guizani: Machine Learning Department, Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI), Abu Dhabi, United Arab Emirates
- Raffaele Gravina: Department of Informatics, Modeling, Electronic, and System Engineering, University of Calabria, 87036 Rende, Italy
- Michele Di Dio: Department of Informatics, Modeling, Electronic, and System Engineering, University of Calabria, 87036 Rende, Italy; Annunziata Hospital, Cosenza, Italy
- Giancarlo Fortino: Department of Informatics, Modeling, Electronic, and System Engineering, University of Calabria, 87036 Rende, Italy
- Edward Curry: Insight Centre for Data Analytics, University of Galway, Galway, Ireland
- Saeed Hamood Alsamhi: Insight Centre for Data Analytics, University of Galway, Galway, Ireland; Faculty of Engineering, IBB University, Ibb, Yemen
8. Wu Y, Du R, Feng J, Qi S, Pang H, Xia S, Qian W. Deep CNN for COPD identification by Multi-View snapshot integration of 3D airway tree and lung field. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104162.
9. Gu T, Lee PH, Duan R. COMMUTE: Communication-efficient transfer learning for multi-site risk prediction. J Biomed Inform 2023;137:104243. PMID: 36403757; PMCID: PMC9868117. DOI: 10.1016/j.jbi.2022.104243.
Abstract
OBJECTIVES We propose a communication-efficient transfer learning approach (COMMUTE) that effectively incorporates multi-site healthcare data for training a risk prediction model in a target population of interest, accounting for challenges including population heterogeneity and data sharing constraints across sites. METHODS We first train population-specific source models locally within each site. Using data from a given target population, COMMUTE learns a calibration term for each source model, which adjusts for potential data heterogeneity through flexible distance-based regularizations. In a centralized setting where multi-site data can be directly pooled, all data are combined to train the target model after calibration. When individual-level data are not shareable at some sites, COMMUTE requests only the locally trained models from these sites, with which it generates heterogeneity-adjusted synthetic data for training the target model. We evaluate COMMUTE via extensive simulation studies and an application to multi-site data from the electronic Medical Records and Genomics (eMERGE) Network to predict extreme obesity. RESULTS Simulation studies show that COMMUTE outperforms methods that do not adjust for population heterogeneity and methods trained in a single population over a broad spectrum of settings. Using eMERGE data, COMMUTE achieves an area under the receiver operating characteristic curve (AUC) around 0.80, outperforming other benchmark methods with AUCs ranging from 0.51 to 0.70. CONCLUSION COMMUTE improves risk prediction in a target population with limited samples and safeguards against negative transfer when some source populations are highly different from the target. In a federated setting, it is highly communication efficient, as each site shares model parameter estimates only once and no iterative communication or higher-order terms are needed.
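The calibration idea, adjusting a pre-trained source model toward the target data through a distance-based regularization, can be sketched as a ridge-style problem with a penalty on the deviation from the source coefficients. This is a minimal one-source illustration of the general idea, not COMMUTE's full procedure:

```python
import numpy as np

def calibrated_fit(X, y, beta_source, lam):
    """Fit target coefficients with a distance-based penalty that shrinks
    them toward a pre-trained source model:
        argmin_b ||y - X b||^2 + lam * ||b - beta_source||^2
    Closed form: (X'X + lam I)^-1 (X'y + lam beta_source)."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y + lam * beta_source)

rng = np.random.default_rng(2)
beta_true = np.array([1.0, -2.0, 0.5])
beta_source = beta_true + 0.1          # a related but slightly shifted source model
X = rng.normal(size=(30, 3))           # small target sample
y = X @ beta_true + 0.3 * rng.normal(size=30)

beta_small_lam = calibrated_fit(X, y, beta_source, lam=1e-8)  # ~ ordinary least squares
beta_large_lam = calibrated_fit(X, y, beta_source, lam=1e8)   # ~ keeps the source model
```

The regularization weight interpolates between ignoring the source model and copying it, which is how a well-chosen penalty guards against negative transfer.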
Affiliation(s)
- Tian Gu: Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, United States
- Phil H Lee: Department of Psychiatry, Harvard Medical School, Boston, MA, United States; Center for Genomic Medicine, Massachusetts General Hospital, Boston, MA, United States; Stanley Center for Psychiatric Research, Broad Institute of MIT and Harvard, Cambridge, MA, United States
- Rui Duan: Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, United States
10. Sendra-Balcells C, Campello VM, Martín-Isla C, Viladés D, Descalzo ML, Guala A, Rodríguez-Palomares JF, Lekadir K. Domain generalization in deep learning for contrast-enhanced imaging. Comput Biol Med 2022;149:106052. DOI: 10.1016/j.compbiomed.2022.106052.
11. Majumdar SS, Jain S, Tourni IC, Mustafin A, Lteif D, Sclaroff S, Saenko K, Bargal SA. Ani-GIFs: A benchmark dataset for domain generalization of action recognition from GIFs. Front Comput Sci 2022. DOI: 10.3389/fcomp.2022.876846.
Abstract
Deep learning models perform remarkably well on a task under the assumption that test data come from the same distribution as the training data. However, this assumption is generally violated in practice, mainly due to differences in data acquisition techniques and a lack of information about the underlying source of new data. Domain generalization targets the ability to generalize to test data of an unseen domain; while this problem is well studied for images, such studies are significantly lacking for spatiotemporal visual content such as videos and GIFs. This is due to (1) the challenging nature of misaligned temporal features and the varying appearance/motion of actors and actions in different domains, and (2) spatiotemporal datasets being laborious to collect and annotate for multiple domains. We collect and present Ani-GIFs, the first synthetic video dataset of animated GIFs for domain generalization, which is used to study the domain gap between videos and GIFs, and between animated and real GIFs, for the task of action recognition. We provide a training and testing setting for Ani-GIFs, and extend two domain generalization baseline approaches, based on data augmentation and explainability, to the spatiotemporal domain to catalyze research in this direction.
12. Saat P, Nogovitsyn N, Hassan MY, Ganaie MA, Souza R, Hemmati H. A domain adaptation benchmark for T1-weighted brain magnetic resonance image segmentation. Front Neuroinform 2022;16:919779. PMID: 36213544; PMCID: PMC9538795. DOI: 10.3389/fninf.2022.919779.
Abstract
Accurate brain segmentation is critical for magnetic resonance imaging (MRI) analysis pipelines, and machine-learning-based brain MR image segmentation methods are among the state-of-the-art techniques for this task. Nevertheless, the segmentations produced by machine learning models often degrade in the presence of domain shifts between the training and test data distributions. These shifts are expected due to several factors, such as scanner hardware and software differences, technology updates, and differences in MRI acquisition parameters. Domain adaptation (DA) methods can make machine learning models more resilient to these domain shifts. This paper proposes a benchmark for investigating DA techniques for brain MR image segmentation using data collected across sites with scanners from different vendors (Philips, Siemens, and General Electric). Our work provides labeled data, publicly available source code for a set of baseline and DA models, and a benchmark for assessing different brain MR image segmentation techniques. We applied the proposed benchmark to evaluate two segmentation tasks: skull-stripping, and white-matter, gray-matter, and cerebrospinal fluid segmentation; the benchmark can be extended to other brain structures. Our main findings during the development of this benchmark are that no single DA technique consistently outperforms the others, and that hyperparameter tuning and computational cost remain challenges to the broader adoption of these methods in clinical practice.
Affiliation(s)
- Parisa Saat: Electrical and Software Engineering, Schulich School of Engineering, University of Calgary, Calgary, AB, Canada
- Nikita Nogovitsyn: Centre for Depression and Suicide Studies, St. Michael's Hospital, Toronto, ON, Canada; Mood Disorders Program, Department of Psychiatry and Behavioural Neurosciences, McMaster University, Hamilton, ON, Canada
- Muhammad Yusuf Hassan: Electrical and Software Engineering, Schulich School of Engineering, University of Calgary, Calgary, AB, Canada; Electrical Engineering, Indian Institute of Technology, Gandhinagar, Gujarat, India
- Muhammad Athar Ganaie: Electrical and Software Engineering, Schulich School of Engineering, University of Calgary, Calgary, AB, Canada; Chemical Engineering, Indian Institute of Technology, Kharagpur, West Bengal, India
- Roberto Souza: Electrical and Software Engineering, Schulich School of Engineering, University of Calgary, Calgary, AB, Canada; Hotchkiss Brain Institute, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
- Hadi Hemmati: Electrical and Software Engineering, Schulich School of Engineering, University of Calgary, Calgary, AB, Canada; Electrical Engineering and Computer Science, Lassonde School of Engineering, York University, Toronto, ON, Canada
13. Li Z, Huang K, Liu L, Zhang Z. Early detection of COPD based on graph convolutional network and small and weakly labeled data. Med Biol Eng Comput 2022;60:2321-2333. PMID: 35750976; PMCID: PMC9244127. DOI: 10.1007/s11517-022-02589-x.
Abstract
Chronic obstructive pulmonary disease (COPD) is a common disease with high morbidity and mortality, for which early detection benefits the population. However, the early diagnosis rate of COPD is low because early symptoms are absent or slight. In this paper, a novel method based on a graph convolutional network (GCN) for early detection of COPD is proposed, which uses small and weakly labeled chest computed tomography image data from the publicly available Danish Lung Cancer Screening Trial database. The key idea is to construct a graph from regions of interest randomly selected from the segmented lung parenchyma and input it into the GCN model for COPD detection. In this way, the model extracts not only the feature information of each region of interest but also the topological (graph structure) information between regions of interest. The proposed GCN model achieves an acceptable performance, with an accuracy of 0.77 and an area under the curve of 0.81, which is higher than previous studies on the same dataset. The GCN model also outperforms several state-of-the-art methods trained in the same setting. To our knowledge, this is also the first time a GCN model has been used on this dataset for COPD detection.
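The core operation the abstract relies on, propagating ROI features over the graph so that each node also sees its neighbours' information, is the standard graph-convolution layer. A minimal NumPy version (toy graph and random weights; the paper's actual architecture is not specified here) looks like:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer (Kipf-Welling form):
    H' = ReLU( D^-1/2 (A + I) D^-1/2  H  W )."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(1))  # inverse sqrt of node degrees
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)    # propagate, project, ReLU

# toy graph: 4 ROIs in a chain, 8-dim features per ROI
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(3)
H = rng.normal(size=(4, 8))   # per-ROI feature vectors
W = rng.normal(size=(8, 5))   # learnable projection
H_out = gcn_layer(A, H, W)
graph_embedding = H_out.mean(axis=0)  # pooled representation for bag-level classification
```

Stacking such layers and pooling the node embeddings gives a scan-level representation that a final classifier can map to a COPD label.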
Affiliation(s)
- Zongli Li: Department of Pulmonary and Critical Care Medicine, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, 100020, People's Republic of China; Beijing Institute of Respiratory Medicine, Beijing, 100020, People's Republic of China; Department of Respiratory, Shijingshan Teaching Hospital of Capital Medical University, Beijing Shijingshan Hospital, Beijing, 100043, People's Republic of China
- Kewu Huang: Department of Pulmonary and Critical Care Medicine, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, 100020, People's Republic of China; Beijing Institute of Respiratory Medicine, Beijing, 100020, People's Republic of China
- Ligong Liu: Department of Enterprise Management, China Energy Engineering Corporation Limited, Beijing, 100022, People's Republic of China
- Zuoqing Zhang: Department of Respiratory, Shijingshan Teaching Hospital of Capital Medical University, Beijing Shijingshan Hospital, Beijing, 100043, People's Republic of China
14. Automated classification of emphysema using data augmentation and effective pixel location estimation with multi-scale residual network. Neural Comput Appl 2022. DOI: 10.1007/s00521-022-07566-x.
15. Aswiga RV, Shanthi AP. A Multilevel Transfer Learning Technique and LSTM Framework for Generating Medical Captions for Limited CT and DBT Images. J Digit Imaging 2022;35:564-580. PMID: 35217942; PMCID: PMC9156604. DOI: 10.1007/s10278-021-00567-7.
Abstract
Medical image captioning has recently been attracting the attention of the medical community, and generating captions for images involving multiple organs is an even more challenging task, so any attempt toward such medical image captioning becomes the need of the hour. In recent years, rapid developments in deep learning have made it an effective option for the analysis of medical images and automatic report generation. However, analyzing medical images that are scarce and limited is hard, even with machine learning approaches. The concept of transfer learning can be employed in such applications that suffer from insufficient training data. This paper presents an approach to develop a medical image captioning model based on a deep recurrent architecture that combines a Multi Level Transfer Learning (MLTL) framework with a Long Short-Term Memory (LSTM) model. A basic MLTL framework with three models is designed to detect and classify very limited datasets, using knowledge acquired from easily available datasets. The first model, for the source domain, uses abundantly available non-medical images and learns generalized features. The acquired knowledge is then transferred to the second model, for the intermediate auxiliary domain, which is related to the target domain. This information is then used for the final target domain, which consists of medical datasets that are very limited in nature. The knowledge learned from a non-medical source domain is thus transferred to improve learning in the target domain of medical images. An LSTM model, of the kind used for sequence generation and machine translation, is then proposed to generate captions for a given medical image from the MLTL framework. To further improve the captioning, an enhanced multi-input Convolutional Neural Network (CNN) model with feature extraction techniques is proposed; it extracts the most important features of an image, helping to generate a more precise and detailed caption. Experimental results show that the proposed model performs well, with an accuracy of 96.90% and a BLEU score of 76.9%, even with very limited datasets, when compared to the work reported in the literature.
Collapse
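The staged transfer idea in this abstract (source domain, then auxiliary domain, then scarce target domain) can be illustrated with a toy sketch that is not from the paper: a plain numpy logistic regression whose weights, trained on an abundant "source" task, warm-start training on a tiny related "target" set. All data, dimensions, and the decision boundaries are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, w_init=None, lr=0.1, epochs=200):
    """Plain gradient-descent logistic regression; w_init enables transfer."""
    w = np.zeros(X.shape[1]) if w_init is None else w_init.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float(np.mean((X @ w > 0) == (y == 1)))

# Source domain: abundant data (stand-in for easily available images).
Xs = rng.normal(size=(500, 10))
w_true = rng.normal(size=10)
ys = (Xs @ w_true > 0).astype(float)
w_source = train_logreg(Xs, ys)

# Target domain: only 20 samples, with a related decision boundary.
Xt = rng.normal(size=(20, 10))
yt = (Xt @ (w_true + 0.3 * rng.normal(size=10)) > 0).astype(float)

w_scratch = train_logreg(Xt, yt, epochs=20)                     # cold start
w_transfer = train_logreg(Xt, yt, w_init=w_source, epochs=20)   # warm start

print(accuracy(w_scratch, Xt, yt), accuracy(w_transfer, Xt, yt))
```

The warm start carries the generalized features of the source task into the data-poor target task, which is the essence of the multilevel scheme (the paper chains two such transfers).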
Affiliation(s)
- R. V. Aswiga
- Department of Computer Science & Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Tamil Nadu, Chennai, 601103 India
| | - A. P. Shanthi
- Department of Computer Science & Engineering, College of Engineering, Guindy (CEG), Anna University, Tamil Nadu, Chennai, 600025 India
| |
Collapse
|
16
|
Multiple instance learning for lung pathophysiological findings detection using CT scans. Med Biol Eng Comput 2022; 60:1569-1584. [PMID: 35386027 DOI: 10.1007/s11517-022-02526-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2021] [Accepted: 01/17/2022] [Indexed: 10/18/2022]
Abstract
Lung diseases affect the lives of billions of people worldwide, and 4 million people die prematurely each year due to these conditions. These pathologies are characterized by specific imagiological findings in CT scans. Traditional Computer-Aided Diagnosis (CAD) approaches have shown promising results in helping clinicians; however, CADs normally consider only a small part of the medical image for analysis, excluding possibly relevant information for clinical evaluation. The Multiple Instance Learning (MIL) approach takes into consideration different small pieces that are relevant for the final classification and creates a comprehensive analysis of pathophysiological changes. This study uses MIL-based approaches to identify the presence of lung pathophysiological findings in CT scans for the characterization of lung disease development. This work focused on the detection of the following: Fibrosis, Emphysema, Satellite Nodules in the Primary Lesion Lobe, Nodules in the Contralateral Lung, and Ground Glass, with Fibrosis and Emphysema showing the most outstanding results, reaching an Area Under the Curve (AUC) of 0.89 and 0.72, respectively. Additionally, the MIL-based approach was used for EGFR mutation status prediction - the most relevant oncogene in lung cancer - with an AUC of 0.69. The results showed that this comprehensive approach can be a useful tool for lung pathophysiological characterization.
Collapse
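The MIL setting described above can be sketched minimally (this is not the authors' implementation): under the standard MIL assumption, a CT scan is a bag of patch instances and the bag is labelled positive if any instance scores positive. The scorer, weights, and patches below are hypothetical stand-ins for a learned model.

```python
import numpy as np

def instance_score(x, w):
    """Hypothetical per-patch scorer (stand-in for a model on a CT patch)."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

def bag_predict(bag, w, threshold=0.5):
    """Standard MIL assumption: a bag (scan) is positive if any instance
    (patch) is positive, implemented as a max over instance scores."""
    return float(np.max([instance_score(x, w) for x in bag]) > threshold)

w = np.array([2.0, -1.0])                                   # toy weights
neg_bag = [np.array([-1.0, 1.0]), np.array([-2.0, 0.5])]    # all patches benign
pos_bag = neg_bag + [np.array([3.0, 0.0])]                  # one suspicious patch

print(bag_predict(neg_bag, w), bag_predict(pos_bag, w))
```

This is why MIL suits findings like fibrosis or satellite nodules: a single relevant region anywhere in the scan is enough to flag the whole volume, without patch-level labels.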
|
17
|
Li Z, Liu L, Zhang Z, Yang X, Li X, Gao Y, Huang K. A Novel CT-Based Radiomics Features Analysis for Identification and Severity Staging of COPD. Acad Radiol 2022; 29:663-673. [PMID: 35151548 DOI: 10.1016/j.acra.2022.01.004] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2021] [Revised: 12/22/2021] [Accepted: 01/05/2022] [Indexed: 12/12/2022]
Abstract
RATIONALE AND OBJECTIVES To evaluate the role of radiomics based on chest Computed Tomography (CT) in the identification and severity staging of chronic obstructive pulmonary disease (COPD). MATERIALS AND METHODS This retrospective analysis included 322 participants (249 COPD patients and 73 control subjects). In total, 1395 chest CT-based radiomics features were extracted from each participant's CT images. Three feature selection methods, including variance threshold, Select K Best, and least absolute shrinkage and selection operator (LASSO), and two classification methods, including support vector machine (SVM) and logistic regression (LR), were used for the identification and severity classification of COPD. Performance was compared by AUC, accuracy, sensitivity, specificity, precision, and F1-score. RESULTS 38 and 10 features were selected to construct radiomics models to detect and stage COPD, respectively. For COPD identification, the SVM classifier achieved AUCs of 0.992 and 0.970, while the LR classifier achieved AUCs of 0.993 and 0.972, in the training and test sets, respectively. For the severity staging of COPD, the two machine learning classifiers could better differentiate the less severe (GOLD1 + GOLD2) group from the more severe (GOLD3 + GOLD4) group. The AUCs of SVM and LR were 0.907 and 0.903 in the training set, and 0.799 and 0.797 in the test set. CONCLUSION The present study showed that the novel radiomics approach based on chest CT images can be used for COPD identification and severity classification, and the constructed radiomics model demonstrated acceptable performance.
Collapse
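As a rough illustration of the selection pipeline described above (a variance threshold followed by a Select-K-Best-style ranking), here is a numpy-only sketch on synthetic data. The feature matrix, label, and thresholds are invented; the real study used 1395 CT radiomics features and also applied LASSO, which is omitted here for brevity.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for a radiomics matrix: rows = patients, cols = CT features.
X = rng.normal(size=(60, 30))
X[:, 5] = 0.001 * rng.normal(size=60)      # a near-constant feature
y = (X[:, 0] - X[:, 1] > 0).astype(int)    # synthetic COPD label

# Step 1: variance threshold drops (near-)constant features.
keep = X.var(axis=0) > 0.1
Xv = X[:, keep]

# Step 2: univariate ranking (a simple stand-in for Select K Best):
# keep the K features with the largest |correlation| to the label.
K = 10
corr = np.abs([np.corrcoef(Xv[:, j], y)[0, 1] for j in range(Xv.shape[1])])
Xk = Xv[:, np.argsort(corr)[-K:]]

print(X.shape, Xv.shape, Xk.shape)
```

The surviving K-column matrix is what would then be fed to an SVM or LR classifier, as in the paper.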
Affiliation(s)
- Zongli Li
- Department of Pulmonary and Critical Care Medicine (Z.L., K.H.), Beijing Chao-Yang Hospital, Capital Medical University, No 8 Gongti South Road, Beijing, 100020, People's Republic of China; Department of Pulmonary and Critical Care Medicine (Z.L., K.H.), Beijing Institute of Respiratory Medicine, Beijing, People's Republic of China; Department of Pulmonary and Critical Care Medicine (Z.L., Z.Z.), Shijingshan Teaching Hospital of Capital Medical University, Beijing Shijingshan Hospital, Beijing, China; Department of Enterprise, Beijing e-Hualu Information Technology Corporation Limited (L. L.), Beijing, China; Dongsheng Science and Technology Park (X.Y.), Huiying Medical Technology Co., Ltd, Beijing, China; Department of Respiratory (X.L.), Third Affiliated Hospital of Jinzhou Medical University, Jinzhou, Liaoning, China; Department of Radiology (Y.G.), Beijing Chao-Yang Hospital, Capital Medical University, Beijing, People's Republic of China
| | - Ligong Liu
| | - Zuoqing Zhang
| | - Xuhong Yang
| | - Xuanyi Li
| | - Yanli Gao
| | - Kewu Huang
| |
Collapse
|
18
|
Abstract
Machine learning techniques used in computer-aided medical image analysis usually suffer from the domain shift problem caused by different distributions between source/reference data and target data. As a promising solution, domain adaptation has attracted considerable attention in recent years. The aim of this paper is to survey the recent advances of domain adaptation methods in medical image analysis. We first present the motivation of introducing domain adaptation techniques to tackle domain heterogeneity issues for medical image analysis. Then we provide a review of recent domain adaptation models in various medical image analysis tasks. We categorize the existing methods into shallow and deep models, and each of them is further divided into supervised, semi-supervised and unsupervised methods. We also provide a brief summary of the benchmark medical image datasets that support current domain adaptation research. This survey will enable researchers to gain a better understanding of the current status, challenges and future directions of this energetic research field.
Collapse
|
19
|
Chen J, Zhang H, Mohiaddin R, Wong T, Firmin D, Keegan J, Yang G. Adaptive Hierarchical Dual Consistency for Semi-Supervised Left Atrium Segmentation on Cross-Domain Data. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:420-433. [PMID: 34534077 DOI: 10.1109/tmi.2021.3113678] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Semi-supervised learning is of great significance for left atrium (LA) segmentation model learning when labelled data are insufficient. Generalising semi-supervised learning to cross-domain data is of high importance for further improving model robustness. However, the widely existing distribution difference and sample mismatch between different data domains hinder the generalisation of semi-supervised learning. In this study, we alleviate these problems by proposing an Adaptive Hierarchical Dual Consistency (AHDC) framework for semi-supervised LA segmentation on cross-domain data. The AHDC mainly consists of a Bidirectional Adversarial Inference (BAI) module and a Hierarchical Dual Consistency (HDC) learning module. The BAI overcomes the difference in distributions and the sample mismatch between two different domains. It mainly learns two mapping networks adversarially to obtain two matched domains through mutual adaptation. The HDC investigates a hierarchical dual learning paradigm for cross-domain semi-supervised segmentation based on the obtained matched domains. It mainly builds two dual-modelling networks for mining complementary information both intra-domain and inter-domain. For intra-domain learning, a consistency constraint is applied to the dual-modelling targets to exploit the complementary modelling information. For inter-domain learning, a consistency constraint is applied to the LAs modelled by the two dual-modelling networks to exploit the complementary knowledge among different data domains. We demonstrated the performance of our proposed AHDC on four 3D late gadolinium enhancement cardiac MR (LGE-CMR) datasets from different centres and a 3D CT dataset. Compared to other state-of-the-art methods, our proposed AHDC achieved higher segmentation accuracy, which indicated its capability for cross-domain semi-supervised LA segmentation.
Collapse
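A consistency constraint of the kind used in this abstract is typically a penalty on the disagreement between two networks' predictions on the same input. Below is a minimal sketch, assuming a mean-squared form; the abstract does not specify the exact loss, so this is an illustrative stand-in.

```python
import numpy as np

def dual_consistency_loss(seg_a, seg_b):
    """Hypothetical consistency term: mean squared disagreement between
    the soft segmentation maps of two dual-modelling networks."""
    return float(np.mean((seg_a - seg_b) ** 2))

# Two toy 2x2 soft segmentation maps from the two dual networks.
a = np.array([[0.9, 0.1], [0.2, 0.8]])
b = np.array([[0.8, 0.2], [0.2, 0.9]])

print(dual_consistency_loss(a, b))  # small when the duals mostly agree
```

Minimizing such a term on unlabelled (or cross-domain) inputs pushes the two networks toward consistent segmentations, which is how unlabelled data contributes to training.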
|
20
|
Chen J, Yang G, Khan H, Zhang H, Zhang Y, Zhao S, Mohiaddin R, Wong T, Firmin D, Keegan J. JAS-GAN: Generative Adversarial Network Based Joint Atrium and Scar Segmentations on Unbalanced Atrial Targets. IEEE J Biomed Health Inform 2022; 26:103-114. [PMID: 33945491 DOI: 10.1109/jbhi.2021.3077469] [Citation(s) in RCA: 31] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Automated and accurate segmentations of the left atrium (LA) and atrial scars from late gadolinium-enhanced cardiac magnetic resonance (LGE CMR) images are in high demand for quantifying atrial scars. Previous quantification of atrial scars relies on a two-phase segmentation for LA and atrial scars due to their large volume difference (unbalanced atrial targets). In this paper, we propose an inter-cascade generative adversarial network, namely JAS-GAN, to segment the unbalanced atrial targets from LGE CMR images automatically and accurately in an end-to-end way. First, JAS-GAN investigates an adaptive attention cascade to automatically correlate the segmentation tasks of the unbalanced atrial targets. The adaptive attention cascade mainly models the inclusion relationship of the two unbalanced atrial targets, where the estimated LA acts as an attention map to adaptively focus on the small atrial scars roughly. Then, an adversarial regularization is applied to the segmentation tasks of the unbalanced atrial targets to make the optimization consistent. It mainly forces the estimated joint distribution of LA and atrial scars to match the real one. We evaluated the performance of our JAS-GAN on a 3D LGE CMR dataset with 192 scans. Compared with state-of-the-art methods, our proposed approach yielded better segmentation performance (average Dice Similarity Coefficient (DSC) values of 0.946 and 0.821 for LA and atrial scars, respectively), which indicated the effectiveness of our proposed approach for segmenting unbalanced atrial targets.
Collapse
|
21
|
|
22
|
CyCMIS: Cycle-consistent Cross-domain Medical Image Segmentation via diverse image augmentation. Med Image Anal 2021; 76:102328. [PMID: 34920236 DOI: 10.1016/j.media.2021.102328] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Revised: 11/15/2021] [Accepted: 12/01/2021] [Indexed: 01/26/2023]
Abstract
Domain shift, a phenomenon in which there is a distribution discrepancy between the training dataset (source domain) and the test dataset (target domain), is very common in practical applications and may cause significant performance degradation, which hinders the effective deployment of deep learning models in clinical settings. Adaptation algorithms that improve model generalizability from the source domain to the target domain therefore have significant practical value. In this paper, we investigate unsupervised domain adaptation (UDA) techniques to train a cross-domain segmentation method that is robust to domain shift and does not require any annotations on the test domain. To this end, we propose Cycle-consistent Cross-domain Medical Image Segmentation, referred to as CyCMIS, integrating online diverse image translation via disentangled representation learning and semantic consistency regularization into one network. Different from learning a one-to-one mapping, our method characterizes the complex relationship between domains as a many-to-many mapping. A novel diverse inter-domain semantic consistency loss is then proposed to regularize the cross-domain segmentation process. We additionally introduce an intra-domain semantic consistency loss to encourage segmentation consistency between the original input and the image after cross-cycle reconstruction. We conduct comprehensive experiments on two publicly available datasets to evaluate the effectiveness of the proposed method. Results demonstrate the efficacy of the present approach.
Collapse
|
23
|
Qi S, Xu C, Li C, Tian B, Xia S, Ren J, Yang L, Wang H, Yu H. DR-MIL: deep represented multiple instance learning distinguishes COVID-19 from community-acquired pneumonia in CT images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 211:106406. [PMID: 34536634 PMCID: PMC8426140 DOI: 10.1016/j.cmpb.2021.106406] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Accepted: 09/02/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND AND OBJECTIVE Given that the novel coronavirus disease 2019 (COVID-19) has become a pandemic, a method to accurately distinguish COVID-19 from community-acquired pneumonia (CAP) is urgently needed. However, the spatial uncertainty and morphological diversity of COVID-19 lesions in the lungs, and subtle differences with respect to CAP, make differential diagnosis non-trivial. METHODS We propose a deep represented multiple instance learning (DR-MIL) method to fulfill this task. A 3D volumetric CT scan of one patient is treated as one bag and ten CT slices are selected as the initial instances. For each instance, deep features are extracted from the pre-trained ResNet-50 with fine-tuning and represented as one deep represented instance score (DRIS). Each bag with a DRIS for each initial instance is then input into a citation k-nearest neighbor search to generate the final prediction. A total of 141 COVID-19 and 100 CAP CT scans were used. The performance of DR-MIL is compared with other potential strategies and state-of-the-art models. RESULTS DR-MIL displayed an accuracy of 95% and an area under the curve (AUC) of 0.943, which were superior to those observed for comparable methods. COVID-19 and CAP exhibited significant differences in both the DRIS and the spatial pattern of lesions (p<0.001). As a means of content-based image retrieval, DR-MIL can identify images used as key instances, references, and citers for visual interpretation. CONCLUSIONS DR-MIL can effectively represent the deep characteristics of COVID-19 lesions in CT images and accurately distinguish COVID-19 from CAP in a weakly supervised manner. The resulting DRIS is a useful supplement to visual interpretation of the spatial pattern of lesions when screening for COVID-19.
Collapse
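The bag-level classification step can be sketched as follows, with random numbers standing in for the deep represented instance scores (DRIS) that the paper obtains from a fine-tuned ResNet-50, and a plain kNN vote replacing the full citation-kNN (which additionally counts "citers"). All values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Each scan is summarized by 10 per-slice scores (stand-ins for DRIS).
covid_bags = rng.normal(0.8, 0.1, size=(20, 10))
cap_bags = rng.normal(0.3, 0.1, size=(20, 10))
bags = np.vstack([covid_bags, cap_bags])
labels = np.array([1] * 20 + [0] * 20)    # 1 = COVID-19, 0 = CAP

def knn_predict(query, bags, labels, k=3):
    """Plain k-nearest-neighbour vote over bag descriptors (citation-kNN
    would also count bags that cite the query among their neighbours)."""
    d = np.linalg.norm(bags - query, axis=1)
    return int(np.round(labels[np.argsort(d)[:k]].mean()))

print(knn_predict(np.full(10, 0.8), bags, labels))
print(knn_predict(np.full(10, 0.3), bags, labels))
```

Because the prediction is driven by nearest bags, the same machinery doubles as content-based retrieval: the neighbours themselves are the "references" a reader can inspect.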
Affiliation(s)
- Shouliang Qi
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
| | - Caiwen Xu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
| | - Chen Li
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
| | - Bin Tian
- Department of Radiology, The Second People's Hospital of Guiyang, Guiyang, China
| | - Shuyue Xia
- Department of Respiratory Medicine, Central Hospital Affiliated to Shenyang Medical College, Shenyang, China
| | - Jigang Ren
- Department of Radiology, The Affiliated Hospital of Guizhou Medical University, Guiyang, China
| | - Liming Yang
- Department of Radiology, The Affiliated Hospital of Guizhou Medical University, Guiyang, China
| | - Hanlin Wang
- Department of Radiology, General Hospital of the Yangtze River Shipping, Wuhan, China.
| | - Hui Yu
- Department of Radiology, The Affiliated Hospital of Guizhou Medical University, Guiyang, China.
| |
Collapse
|
24
|
Guan S, Wang T, Sun K, Meng C. Transfer Learning for Nonrigid 2D/3D Cardiovascular Images Registration. IEEE J Biomed Health Inform 2021; 25:3300-3309. [PMID: 33347417 DOI: 10.1109/jbhi.2020.3045977] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Cardiovascular image registration is an essential approach for combining the advantages of preoperative 3D computed tomography angiography (CTA) images and intraoperative 2D X-ray/digital subtraction angiography (DSA) images in minimally invasive vascular interventional surgery (MIVI). Recent studies have shown that convolutional neural network (CNN) regression models can register these two modalities of vascular images with fast speed and satisfactory accuracy. However, a CNN regression model trained on tens of thousands of images from one patient often cannot be applied to another patient, due to the large differences and deformations of vascular structure between patients. To overcome this challenge, we evaluate the ability of transfer learning (TL) for the registration of 2D/3D deformable cardiovascular images. The choice of frozen weights in the convolutional layers was optimized to find the best common feature extractors for TL. After TL, the training dataset size was reduced to 200 for a randomly selected patient while still obtaining accurate registration results. We compared the effectiveness of our proposed nonrigid registration model after TL not only with the model without TL but also with traditional intensity-based methods, and found that our nonrigid model after TL performs better on deformable cardiovascular image registration.
Collapse
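The effect of freezing shared layers and refitting only a small patient-specific part can be shown with a toy two-layer model that is not the paper's CNN: with the first layer frozen as a common feature extractor, "fine-tuning" on the new patient reduces to a small least-squares fit of the head. All shapes, mappings, and noise levels are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# W1 plays the role of the pre-trained convolutional feature extractor
# (shared across patients); the head W2 is the patient-specific part.
W1 = rng.normal(size=(6, 4))
W2_a = rng.normal(size=(4, 1))            # head learned for patient A

# Patient B: only 200 samples, generated by a perturbed mapping.
Xb = rng.normal(size=(200, 6))
H = np.maximum(Xb @ W1, 0.0)              # frozen ReLU features
W2_b_true = W2_a + 0.5 * rng.normal(size=(4, 1))
yb = H @ W2_b_true + 0.01 * rng.normal(size=(200, 1))

# With W1 frozen, fine-tuning is just refitting the small head:
W2_b, *_ = np.linalg.lstsq(H, yb, rcond=None)
err = float(np.mean((H @ W2_b - yb) ** 2))
print(err)
```

The point mirrors the abstract: because only the small head is re-estimated, a couple of hundred samples from the new patient suffice instead of tens of thousands.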
|
25
|
Chen J, Chen Y, Li J, Wang J, Lin Z, Nandi AK. Stroke Risk Prediction with Hybrid Deep Transfer Learning Framework. IEEE J Biomed Health Inform 2021; 26:411-422. [PMID: 34115602 DOI: 10.1109/jbhi.2021.3088750] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Stroke has become a leading cause of death and long-term disability in the world, and there is no effective treatment. Deep learning-based approaches have the potential to outperform existing stroke risk prediction models, but they rely on large, well-labeled data. Due to strict privacy protection policies in health-care systems, stroke data are usually distributed among different hospitals in small pieces. In addition, the positive and negative instances of such data are extremely imbalanced. Transfer learning addresses the small-data issue by exploiting the knowledge of a correlated domain, especially when multiple sources are available. In this work, we propose a novel Hybrid Deep Transfer Learning-based Stroke Risk Prediction (HDTL-SRP) scheme to exploit the knowledge structure of multiple correlated sources (i.e., external stroke data and chronic disease data, such as hypertension and diabetes). The proposed framework has been extensively tested in synthetic and real-world scenarios, and it outperforms state-of-the-art stroke risk prediction models. It also shows the potential for real-world deployment among multiple hospitals aided by 5G/B5G infrastructures.
Collapse
|
26
|
Sugimori H, Shimizu K, Makita H, Suzuki M, Konno S. A Comparative Evaluation of Computed Tomography Images for the Classification of Spirometric Severity of the Chronic Obstructive Pulmonary Disease with Deep Learning. Diagnostics (Basel) 2021; 11:diagnostics11060929. [PMID: 34064240 PMCID: PMC8224354 DOI: 10.3390/diagnostics11060929] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2021] [Revised: 05/17/2021] [Accepted: 05/19/2021] [Indexed: 12/03/2022] Open
Abstract
Recently, deep learning has been widely applied in medical imaging. However, whether it is sufficient to simply input the entire image, or whether it is necessary to preprocess the supervised images, has not been sufficiently studied. This study aimed to create classifiers, trained with and without preprocessing, for the Global Initiative for Chronic Obstructive Lung Disease (GOLD) classification using CT images, and to evaluate the classification accuracy of the GOLD classification by confusion matrix. Eighty patients were divided into four groups (n = 20) according to former GOLD 0, GOLD 1, GOLD 2, and GOLD 3 or 4. The classification models were created by transfer learning of the ResNet50 network architecture and were evaluated by confusion matrix and AUC. Moreover, the rearranged confusion matrix for former stages 0 and ≥1 was evaluated by the same procedure. The AUCs of the original and threshold images for the four-class analysis were 0.61 ± 0.13 and 0.64 ± 0.10, respectively, and the AUCs for the two-class analysis of former GOLD 0 and GOLD ≥ 1 were 0.64 ± 0.06 and 0.68 ± 0.12, respectively. In the two-class classification with threshold images, recall and precision were over 0.8 for GOLD ≥ 1, and the McNemar–Bowker test showed some symmetry. The results suggest that the preprocessed threshold image could possibly be used as a screening tool for GOLD classification without pulmonary function tests, rather than inputting the normal image into the convolutional neural network (CNN) for CT image learning.
Collapse
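The two-class evaluation described above (former GOLD 0 vs GOLD ≥ 1, with recall and precision read off a confusion matrix) can be reproduced mechanically. The counts below are hypothetical, chosen only to show how per-class recall and precision are derived; they are not the study's numbers.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes=2):
    """Rows = true class, columns = predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def recall_precision(cm, cls):
    recall = cm[cls, cls] / cm[cls, :].sum()       # of true cls, found
    precision = cm[cls, cls] / cm[:, cls].sum()    # of predicted cls, correct
    return recall, precision

# Hypothetical split: 20 former GOLD 0 patients, 60 GOLD >= 1 patients.
y_true = np.array([0] * 20 + [1] * 60)
y_pred = np.array([0] * 15 + [1] * 5 + [1] * 50 + [0] * 10)

cm = confusion_matrix(y_true, y_pred)
print(cm)
print(recall_precision(cm, 1))
```

With these toy counts, class 1 recall is 50/60 and precision is 50/55, both above the 0.8 level the abstract reports for GOLD ≥ 1.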
Affiliation(s)
- Hiroyuki Sugimori
- Faculty of Health Sciences, Hokkaido University, Sapporo 060-0812, Japan;
| | - Kaoruko Shimizu
- Department of Respiratory Medicine, Faculty of Medicine, Hokkaido University, Sapporo 060-8648, Japan; (H.M.); (M.S.); (S.K.)
- Correspondence: ; Tel.: +81-11-706-5911
| | - Hironi Makita
- Department of Respiratory Medicine, Faculty of Medicine, Hokkaido University, Sapporo 060-8648, Japan; (H.M.); (M.S.); (S.K.)
- Hokkaido Medical Research Institute for Respiratory Diseases, Sapporo 064-0807, Japan
| | - Masaru Suzuki
- Department of Respiratory Medicine, Faculty of Medicine, Hokkaido University, Sapporo 060-8648, Japan; (H.M.); (M.S.); (S.K.)
| | - Satoshi Konno
- Department of Respiratory Medicine, Faculty of Medicine, Hokkaido University, Sapporo 060-8648, Japan; (H.M.); (M.S.); (S.K.)
| |
Collapse
|
27
|
Fang C, Bai S, Chen Q, Zhou Y, Xia L, Qin L, Gong S, Xie X, Zhou C, Tu D, Zhang C, Liu X, Chen W, Bai X, Torr PHS. Deep learning for predicting COVID-19 malignant progression. Med Image Anal 2021; 72:102096. [PMID: 34051438 PMCID: PMC8112895 DOI: 10.1016/j.media.2021.102096] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2020] [Revised: 03/23/2021] [Accepted: 04/27/2021] [Indexed: 01/08/2023]
Abstract
As COVID-19 is highly infectious, many patients can simultaneously flood into hospitals for diagnosis and treatment, which has greatly challenged public medical systems. Treatment priority is often determined by symptom severity at first assessment. However, clinical observation suggests that some patients with mild symptoms may deteriorate quickly. Hence, it is crucial to identify early deterioration in patients to optimize treatment strategy. To this end, we develop an early-warning system with deep learning techniques to predict COVID-19 malignant progression. Our method leverages CT scans and the clinical data of outpatients and achieves an AUC of 0.920 in the single-center study. We also propose a domain adaptation approach to improve the generalization of our model, achieving an average AUC of 0.874 in the multicenter study. Moreover, our model automatically identifies crucial indicators that contribute to malignant progression, including Troponin, Brain natriuretic peptide, White cell count, Aspartate aminotransferase, Creatinine, and Hypersensitive C-reactive protein.
Affiliation(s)
- Cong Fang: School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China
- Song Bai: Department of Engineering Science, University of Oxford, Parks Road, Oxford OX1 3PJ, United Kingdom
- Qianlan Chen: Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Yu Zhou: School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China
- Liming Xia: Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Lixin Qin: Department of Radiology, Wuhan Pulmonary Hospital, Wuhan 430030, China
- Shi Gong: School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China
- Xudong Xie: School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China
- Chunhua Zhou: Department of Radiology, Wuhan Pulmonary Hospital, Wuhan 430030, China
- Dandan Tu: HUST-HW Joint Innovation Lab, Wuhan 430074, China
- Xiaowu Liu: HUST-HW Joint Innovation Lab, Wuhan 430074, China
- Weiwei Chen: Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Xiang Bai: School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China
- Philip H S Torr: Department of Engineering Science, University of Oxford, Parks Road, Oxford OX1 3PJ, United Kingdom
|
28
|
Aswiga RV, Aishwarya R, Shanthi AP. Augmenting Transfer Learning with Feature Extraction Techniques for Limited Breast Imaging Datasets. J Digit Imaging 2021; 34:618-629. [PMID: 33973065 DOI: 10.1007/s10278-021-00456-z] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2020] [Revised: 02/25/2021] [Accepted: 04/27/2021] [Indexed: 11/24/2022] Open
Abstract
Computer aided detection (CADe) and computer aided diagnosis (CADx) systems are ongoing research areas for identifying lesions among complex inner structures with different pixel intensities, and for medical image classification. Several techniques are available for breast cancer detection and diagnosis using CADe and CADx systems. However, some of these systems are not accurate enough or suffer from a lack of sufficient data. For example, mammography is the most commonly used breast cancer detection technique, and several CADe and CADx systems are based on mammography because of the huge datasets that are publicly available. Yet the number of cancers escaping detection with mammography is substantial, particularly in dense-breasted women. Digital breast tomosynthesis (DBT), on the other hand, is a newer imaging technique that alleviates the limitations of mammography. However, collecting large amounts of DBT images is difficult, as they are not publicly available. In such cases, the concept of transfer learning can be employed: knowledge learned from a trained source domain task, whose dataset is readily available, is transferred to improve learning in the target domain task, whose dataset may be scarce. In this paper, a two-level framework is developed for the classification of DBT datasets. A basic multilevel transfer learning (MLTL) based framework is proposed to use the knowledge learned from general non-medical image datasets and the mammography dataset to train and classify the target DBT dataset. A feature extraction based transfer learning (FETL) framework is proposed to further improve the classification performance of the MLTL based framework; it examines three different feature extraction techniques to augment the MLTL framework's performance.
An area under the receiver operating characteristic (ROC) curve of 0.89 is obtained with just 2.08% of the source domain (non-medical) dataset, 5.09% of the intermediate domain (mammography) dataset, and 3.94% of the target domain (DBT) dataset, compared to the dataset sizes reported in the literature.
Affiliation(s)
- Aswiga R V: Department of Computer Science and Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Chennai, Tamil Nadu, India
- Aishwarya R: Department of Computer Science & Engineering, Anna University, Chennai-600025, Tamil Nadu, India
- Shanthi A P: Department of Computer Science & Engineering, Anna University, Chennai-600025, Tamil Nadu, India
|
29
|
Chen H, Guo S, Hao Y, Fang Y, Fang Z, Wu W, Liu Z, Li S. Auxiliary Diagnosis for COVID-19 with Deep Transfer Learning. J Digit Imaging 2021; 34:231-241. [PMID: 33634413 PMCID: PMC7906243 DOI: 10.1007/s10278-021-00431-8] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2020] [Revised: 01/21/2021] [Accepted: 02/02/2021] [Indexed: 12/30/2022] Open
Abstract
To assist physicians in identifying COVID-19 and its manifestations through automatic COVID-19 recognition and classification in chest CT images with deep transfer learning. In this retrospective study, the chest CT image dataset covered 422 subjects, including 72 confirmed COVID-19 subjects (260 studies, 30,171 images), 252 other pneumonia subjects (252 studies, 26,534 images) comprising 158 viral pneumonia subjects and 94 pulmonary tuberculosis subjects, and 98 normal subjects (98 studies, 29,838 images). In the experiment, subjects were split into training (70%), validation (15%) and testing (15%) sets. We utilized the convolutional blocks of ResNets pretrained on public social image collections and modified the top fully connected layer to suit our task (COVID-19 recognition). In addition, we tested the proposed method on a fine-grained classification task; that is, the images of COVID-19 were further split into 3 main manifestations (ground-glass opacity with 12,924 images, consolidation with 7,418 images and fibrotic streaks with 7,338 images). Similarly, the data partitioning strategy of 70%-15%-15% was adopted. The best performance, obtained by the pretrained ResNet50 model, was 94.87% sensitivity, 88.46% specificity, and 91.21% accuracy for COVID-19 versus all other groups, and an overall accuracy of 89.01% for the three-category classification on the testing set. Consistent performance was observed on the image-level COVID-19 manifestation classification task, where the best overall accuracy of 94.08% and AUC of 0.993 were obtained by the pretrained ResNet18 (P < 0.05). All proposed models achieved satisfactory performance and are thus promising in practice. Transfer learning is worth exploring for the recognition and classification of COVID-19 on CT images with limited training data. 
It not only achieved higher sensitivity (COVID-19 vs the rest) but also took far less time than radiologists, and is thus expected to provide auxiliary diagnosis and reduce radiologists' workload.
Affiliation(s)
- Hongtao Chen: The Cancer Center of The Fifth Affiliated Hospital of Sun Yat-Sen University, Zhuhai, 519000, Guangdong, China
- Shuanshuan Guo: The Cancer Center of The Fifth Affiliated Hospital of Sun Yat-Sen University, Zhuhai, 519000, Guangdong, China
- Yanbin Hao: School of Data Science, University of Science and Technology of China, Hefei, 230026, Anhui, China; Department of Computer Science, City University of Hong Kong, Hong Kong, 999077, China
- Yijie Fang: Department of Radiology, The Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, 519000, Guangdong, China
- Zhaoxiong Fang: The Cancer Center of The Fifth Affiliated Hospital of Sun Yat-Sen University, Zhuhai, 519000, Guangdong, China
- Wenhao Wu: Department of Radiology, The Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, 519000, Guangdong, China
- Zhigang Liu: The Cancer Center of The Fifth Affiliated Hospital of Sun Yat-Sen University, Zhuhai, 519000, Guangdong, China
- Shaolin Li: Department of Radiology, The Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, 519000, Guangdong, China
|
30
|
Wang Y, Zhang Y, Liu Y, Tian J, Zhong C, Shi Z, Zhang Y, He Z. Does non-COVID-19 lung lesion help? investigating transferability in COVID-19 CT image segmentation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 202:106004. [PMID: 33662804 PMCID: PMC7899930 DOI: 10.1016/j.cmpb.2021.106004] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/06/2020] [Accepted: 02/11/2021] [Indexed: 05/13/2023]
Abstract
BACKGROUND AND OBJECTIVE Coronavirus disease 2019 (COVID-19) is a highly contagious disease spreading all around the world. Deep learning has been adopted as an effective technique to aid COVID-19 detection and segmentation in computed tomography (CT) images. The major challenge lies in the inadequacy of public COVID-19 datasets. Recently, transfer learning has become a widely used technique that leverages the knowledge gained while solving one problem and applies it to a different but related problem. However, it remains unclear whether various non-COVID-19 lung lesions could contribute to segmenting COVID-19 infection areas and how to best conduct this transfer procedure. This paper provides a way to understand the transferability of non-COVID-19 lung lesions and a better strategy to train a robust deep learning model for COVID-19 infection segmentation. METHODS Based on a publicly available COVID-19 CT dataset and three public non-COVID-19 datasets, we evaluate four transfer learning methods using 3D U-Net as a standard encoder-decoder method. i) We introduce multi-task learning to obtain a multi-lesion pre-trained model for COVID-19 infection. ii) We propose and compare four transfer learning strategies with various performance gains and training time costs. Our proposed Hybrid-encoder Learning strategy introduces a Dedicated-encoder and an Adapted-encoder to extract COVID-19 infection features and general lung lesion features, respectively. An attention-based Selective Fusion unit is designed for dynamic feature selection and aggregation. RESULTS Experiments show that, trained with limited data, the proposed Hybrid-encoder strategy based on the multi-lesion pre-trained model achieves a mean DSC, NSD, Sensitivity, F1-score, Accuracy and MCC of 0.704, 0.735, 0.682, 0.707, 0.994 and 0.716, respectively, with better generalization and lower over-fitting risk for segmenting COVID-19 infection. 
CONCLUSIONS The results reveal the benefits of transferring knowledge from non-COVID-19 lung lesions: learning from multiple lung lesion datasets extracts more general features, leading to accurate and robust pre-trained models. We further show the capability of the encoder to learn feature representations of lung lesions, which improves segmentation accuracy and facilitates training convergence. In addition, our proposed Hybrid-encoder learning method incorporates transferred lung lesion features from non-COVID-19 datasets effectively and achieves significant improvement. These findings provide new insights into transfer learning for COVID-19 CT image segmentation, which can also be generalized to other medical tasks.
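The abstract describes the Selective Fusion unit only as attention-based dynamic selection between the Dedicated-encoder and Adapted-encoder features. Under that assumption, a generic gated-fusion sketch (the gate weights, shapes, and data here are made up, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(1)

def selective_fusion(f_dedicated, f_adapted, W_gate, b_gate):
    """Attention-style fusion of two feature streams: a sigmoid gate,
    computed from both inputs, convexly mixes them channel by channel."""
    both = np.concatenate([f_dedicated, f_adapted], axis=-1)
    gate = 1.0 / (1.0 + np.exp(-(both @ W_gate + b_gate)))
    return gate * f_dedicated + (1.0 - gate) * f_adapted, gate

C = 8
f1 = rng.normal(size=(4, C))       # COVID-19-specific encoder features
f2 = rng.normal(size=(4, C))       # general lung-lesion encoder features
W = rng.normal(size=(2 * C, C)) * 0.1
fused, gate = selective_fusion(f1, f2, W, np.zeros(C))
```

Because the gate is in (0, 1), each fused value lies between the two encoder activations, so neither feature stream can be overwritten entirely.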
Affiliation(s)
- Yixin Wang: Institute of Computing Technology, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Beijing, China; AI Lab, Lenovo Research, Beijing, China
- Yao Zhang: Institute of Computing Technology, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Beijing, China; AI Lab, Lenovo Research, Beijing, China
- Yang Liu: Institute of Computing Technology, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Beijing, China; AI Lab, Lenovo Research, Beijing, China
- Yang Zhang: Institute of Computing Technology, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Beijing, China; Lenovo Corporate Research & Development, Lenovo Ltd., Beijing, China
- Zhiqiang He: Institute of Computing Technology, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Beijing, China; Lenovo Corporate Research & Development, Lenovo Ltd., Beijing, China
|
31
|
Ram S, Hoff BA, Bell AJ, Galban S, Fortuna AB, Weinheimer O, Wielpütz MO, Robinson TE, Newman B, Vummidi D, Chughtai A, Kazerooni EA, Johnson TD, Han MK, Hatt CR, Galban CJ. Improved detection of air trapping on expiratory computed tomography using deep learning. PLoS One 2021; 16:e0248902. [PMID: 33760861 PMCID: PMC7990199 DOI: 10.1371/journal.pone.0248902] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2020] [Accepted: 02/26/2021] [Indexed: 11/29/2022] Open
Abstract
BACKGROUND Radiologic evidence of air trapping (AT) on expiratory computed tomography (CT) scans is associated with early pulmonary dysfunction in patients with cystic fibrosis (CF). However, standard techniques for quantitative assessment of AT are highly variable, resulting in limited efficacy for monitoring disease progression. OBJECTIVE To investigate the effectiveness of a convolutional neural network (CNN) model for quantifying and monitoring AT, and to compare it with other quantitative AT measures obtained from threshold-based techniques. MATERIALS AND METHODS Paired volumetric whole lung inspiratory and expiratory CT scans were obtained at four time points (0, 3, 12 and 24 months) in 36 subjects with mild CF lung disease. A densely connected CNN (DN) was trained using AT segmentation maps generated from a personalized threshold-based method (PTM). Quantitative AT (QAT) values, presented as the relative volume of AT over the lungs, from the DN approach were compared to QAT values from the PTM method. Radiographic assessment, spirometric measures, and clinical scores were correlated to the DN QAT values using a linear mixed effects model. RESULTS QAT values from the DN increased from 8.65% ± 1.38% to 21.38% ± 1.82% over the two-year period. Comparison of CNN model results to intensity-based measures demonstrated a systematic drop in the Dice coefficient over time (from 0.86 ± 0.03 to 0.45 ± 0.04). The trends observed in DN QAT values were consistent with clinical scores for AT, bronchiectasis, and mucus plugging. In addition, the DN approach was found to be less susceptible to variations in expiratory deflation levels than the threshold-based approach. CONCLUSION The CNN model effectively delineated AT on expiratory CT scans, providing an automated and objective approach for assessing and monitoring AT in CF patients.
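Two quantities in this abstract have simple closed forms: the Dice coefficient between segmentation masks, and QAT as the relative AT volume over the lungs. A sketch on toy binary masks (the helper names are illustrative, not from the paper):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    twice the intersection over the sum of the two mask volumes."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def qat_percent(at_mask, lung_mask):
    """Quantitative air trapping: AT voxels as a percentage of lung voxels."""
    return 100.0 * np.logical_and(at_mask, lung_mask).sum() / lung_mask.sum()

a = np.array([1, 1, 0, 0]); b = np.array([1, 0, 1, 0])
print(dice(a, b))             # 0.5
lung = np.ones(10, dtype=bool)
at = np.zeros(10, dtype=bool); at[:2] = True
print(qat_percent(at, lung))  # 20.0
```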
Affiliation(s)
- Sundaresh Ram: Department of Radiology, Michigan Medicine, University of Michigan, Ann Arbor, Michigan, United States of America; Department of Biomedical Engineering, Michigan Medicine, University of Michigan, Ann Arbor, Michigan, United States of America
- Benjamin A. Hoff: Department of Radiology, Michigan Medicine, University of Michigan, Ann Arbor, Michigan, United States of America
- Alexander J. Bell: Department of Radiology, Michigan Medicine, University of Michigan, Ann Arbor, Michigan, United States of America
- Stefanie Galban: Department of Radiology, Michigan Medicine, University of Michigan, Ann Arbor, Michigan, United States of America
- Aleksa B. Fortuna: Department of Radiology, Michigan Medicine, University of Michigan, Ann Arbor, Michigan, United States of America
- Oliver Weinheimer: Department of Diagnostic and Interventional Radiology, University Hospital of Heidelberg, Heidelberg, Germany; Translational Lung Research Center Heidelberg (TLRC), German Lung Research Center (DZL), Heidelberg, Germany
- Mark O. Wielpütz: Department of Diagnostic and Interventional Radiology, University Hospital of Heidelberg, Heidelberg, Germany; Translational Lung Research Center Heidelberg (TLRC), German Lung Research Center (DZL), Heidelberg, Germany
- Terry E. Robinson: Department of Pediatrics, Center of Excellence in Pulmonary Biology, Stanford University School of Medicine, Stanford, California, United States of America
- Beverley Newman: Department of Pediatric Radiology, Lucile Packard Children’s Hospital at Stanford, Stanford, California, United States of America
- Dharshan Vummidi: Department of Radiology, Michigan Medicine, University of Michigan, Ann Arbor, Michigan, United States of America
- Aamer Chughtai: Department of Radiology, Michigan Medicine, University of Michigan, Ann Arbor, Michigan, United States of America
- Ella A. Kazerooni: Department of Radiology, Michigan Medicine, University of Michigan, Ann Arbor, Michigan, United States of America; Department of Internal Medicine, Michigan Medicine, University of Michigan, Ann Arbor, Michigan, United States of America
- Timothy D. Johnson: Department of Biostatistics, University of Michigan, School of Public Health, Ann Arbor, Michigan, United States of America
- MeiLan K. Han: Department of Internal Medicine, Michigan Medicine, University of Michigan, Ann Arbor, Michigan, United States of America
- Charles R. Hatt: Department of Radiology, Michigan Medicine, University of Michigan, Ann Arbor, Michigan, United States of America; Imbio LLC, Minneapolis, Minnesota, United States of America
- Craig J. Galban: Department of Radiology, Michigan Medicine, University of Michigan, Ann Arbor, Michigan, United States of America; Department of Biomedical Engineering, Michigan Medicine, University of Michigan, Ann Arbor, Michigan, United States of America
|
32
|
Kouw WM, Loog M. A Review of Domain Adaptation without Target Labels. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2021; 43:766-785. [PMID: 31603771 DOI: 10.1109/tpami.2019.2945942] [Citation(s) in RCA: 112] [Impact Index Per Article: 28.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2023]
Abstract
Domain adaptation has become a prominent problem setting in machine learning and related fields. This review asks the question: how can a classifier learn from a source domain and generalize to a target domain? We present a categorization of approaches, divided into what we refer to as sample-based, feature-based, and inference-based methods. Sample-based methods focus on weighting individual observations during training based on their importance to the target domain. Feature-based methods revolve around mapping, projecting, and representing features such that a source classifier performs well on the target domain, and inference-based methods incorporate adaptation into the parameter estimation procedure, for instance through constraints on the optimization procedure. Additionally, we review a number of conditions that allow for formulating bounds on the cross-domain generalization error. Our categorization highlights recurring ideas and raises questions important to further research.
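The sample-based category described here is often realized as importance weighting: each source observation gets weight w(x) = p_target(x) / p_source(x). A minimal sketch under the assumption that both densities are fit as one-dimensional Gaussians (the survey itself covers far more general estimators):

```python
import numpy as np

rng = np.random.default_rng(2)

# Source and target domains: same concept, shifted input distribution.
x_s = rng.normal(0.0, 1.0, size=500)
x_t = rng.normal(1.0, 1.0, size=500)

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Importance weights w(x) = p_target(x) / p_source(x), with each density
# estimated by fitting a Gaussian to its own sample.
mu_s, sd_s = x_s.mean(), x_s.std()
mu_t, sd_t = x_t.mean(), x_t.std()
w = gaussian_pdf(x_s, mu_t, sd_t) / gaussian_pdf(x_s, mu_s, sd_s)

# Source points that resemble target points receive larger weights; these
# weights would then rescale the per-sample loss of a source classifier.
```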
|
33
|
Xu C, Qi S, Feng J, Xia S, Kang Y, Yao Y, Qian W. DCT-MIL: Deep CNN transferred multiple instance learning for COPD identification using CT images. Phys Med Biol 2020; 65:145011. [PMID: 32235077 DOI: 10.1088/1361-6560/ab857d] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
While many pre-defined computed tomographic (CT) measures have been utilized to characterize chronic obstructive pulmonary disease (COPD), it is still challenging to represent pathological alterations of multiple dimensions and high spatial heterogeneity. Deep CNN transferred multiple instance learning (DCT-MIL) is proposed to identify COPD via CT images. After the lung is divided into eight sections along the axial direction, one random axial CT image is taken from each section as one instance. With one instance as the input, the activations of neural layers of an AlexNet trained on natural images are extracted as features. After dimension reduction through principal component analysis, the features of all instances are input into three MIL methods: Citation k-Nearest-Neighbor (Citation-KNN), multiple instance support vector machine, and expectation-maximization diverse density. Moreover, the dependence of the resulting models' performance on the depth of the neural layer from which activations are extracted and on the number of features is investigated. The proposed DCT-MIL achieves exceptional performance, with an accuracy of 99.29% and area under the curve of 0.9826, while using 100 principal components of features extracted from the fourth convolutional layer and Citation-KNN. It outperforms not only DCT-MIL models using other settings and the pre-trained AlexNet fine-tuned on montages of eight lung CT images, but also other state-of-the-art methods. Deep CNN transferred multiple instance learning is well suited for identification of COPD using CT images. It can help find subgroups at high risk of COPD in large populations through CT scans ordered for lung cancer screening.
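Of the three MIL back-ends compared, Citation-kNN is the most self-contained: bags are compared with the minimal Hausdorff distance, and the query collects votes both from its nearest training bags (references) and from training bags that would rank the query among their own nearest neighbours (citers). A compact sketch on toy bags, not CT features; parameter choices are illustrative:

```python
import numpy as np

def min_hausdorff(bag_a, bag_b):
    """Minimal Hausdorff distance: the smallest pairwise distance
    between any instance of one bag and any instance of the other."""
    d = np.linalg.norm(bag_a[:, None, :] - bag_b[None, :, :], axis=-1)
    return d.min()

def citation_knn(train_bags, train_labels, query, r=2, c=2):
    """Citation-kNN: majority vote over r references plus all citers."""
    dists = np.array([min_hausdorff(query, b) for b in train_bags])
    votes = list(train_labels[np.argsort(dists)[:r]])
    for i, bag in enumerate(train_bags):
        d_i = np.array([min_hausdorff(bag, b) for b in train_bags])
        d_i[i] = np.inf                                  # exclude self
        c_th = np.sort(d_i)[min(c, len(d_i)) - 1]        # c-th nearest distance
        if min_hausdorff(bag, query) <= c_th:            # bag "cites" the query
            votes.append(train_labels[i])
    return int(np.mean(votes) >= 0.5)

# Toy bags: two clusters, one per class.
bags = [np.zeros((3, 2)), np.full((2, 2), 0.1),
        np.full((3, 2), 5.0), np.full((2, 2), 5.1)]
labels = np.array([0, 0, 1, 1])
```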
Affiliation(s)
- Caiwen Xu: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, People's Republic of China
|
34
|
Transfer learning for informative-frame selection in laryngoscopic videos through learned features. Med Biol Eng Comput 2020; 58:1225-1238. [DOI: 10.1007/s11517-020-02127-7] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2019] [Accepted: 01/07/2020] [Indexed: 02/06/2023]
|
35
|
Wang M, Zhang D, Huang J, Yap PT, Shen D, Liu M. Identifying Autism Spectrum Disorder With Multi-Site fMRI via Low-Rank Domain Adaptation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:644-655. [PMID: 31395542 PMCID: PMC7169995 DOI: 10.1109/tmi.2019.2933160] [Citation(s) in RCA: 68] [Impact Index Per Article: 13.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/06/2023]
Abstract
Autism spectrum disorder (ASD) is a neurodevelopmental disorder that is characterized by a wide range of symptoms. Identifying biomarkers for accurate diagnosis is crucial for early intervention in ASD. While multi-site data increase sample size and statistical power, they suffer from inter-site heterogeneity. To address this issue, we propose a multi-site adaptation framework via low-rank representation decomposition (maLRR) for ASD identification based on functional MRI (fMRI). The main idea is to determine a common low-rank representation for data from the multiple sites, aiming to reduce differences in data distributions. Treating one site as a target domain and the remaining sites as source domains, data from these domains are transformed (i.e., adapted) to a common space using low-rank representation. To reduce data heterogeneity between the target and source domains, data from the source domains are linearly represented in the common space by those from the target domain. We evaluated the proposed method on both synthetic and real multi-site fMRI data for ASD identification. The results suggest that our method yields superior performance over several state-of-the-art domain adaptation methods.
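The maLRR objective itself is not reproduced in this abstract, but the core primitive of low-rank representation solvers is singular value thresholding, the proximal operator of the nuclear norm that such methods apply each iteration. A sketch of that building block alone (the "sites" data here are synthetic):

```python
import numpy as np

def svt(x, tau):
    """Singular value thresholding: shrink singular values by tau and
    zero the small ones, yielding the nearest low-rank-biased matrix."""
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return u @ np.diag(s_shrunk) @ vt

rng = np.random.default_rng(3)
# Rank-2 signal (shared structure across "sites") plus small noise.
base = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 20))
noisy = base + 0.05 * rng.normal(size=(30, 20))
recovered = svt(noisy, tau=1.0)
```

With the threshold above the noise level, the noise directions are zeroed and the shared low-rank structure survives, which is the mechanism maLRR relies on to align sites.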
Affiliation(s)
- Mingliang Wang: College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing 211106, China
- Daoqiang Zhang: College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing 211106, China
- Jiashuang Huang: College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing 211106, China
- Pew-Thian Yap: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599, USA
- Dinggang Shen: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Mingxia Liu: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599, USA
|
36
|
Huang S, Lee F, Miao R, Si Q, Lu C, Chen Q. A deep convolutional neural network architecture for interstitial lung disease pattern classification. Med Biol Eng Comput 2020; 58:725-737. [DOI: 10.1007/s11517-019-02111-w] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2019] [Accepted: 12/21/2019] [Indexed: 01/22/2023]
|
37
|
Peng L, Lin L, Hu H, Zhang Y, Li H, Iwamoto Y, Han XH, Chen YW. Semi-Supervised Learning for Semantic Segmentation of Emphysema With Partial Annotations. IEEE J Biomed Health Inform 2019; 24:2327-2336. [PMID: 31902784 DOI: 10.1109/jbhi.2019.2963195] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Segmentation and quantification of each subtype of emphysema is helpful for monitoring chronic obstructive pulmonary disease. Due to the diffuse nature of emphysema, it is very difficult for experts to assign semantic labels to every pixel in CT images. In practice, partial annotation is a better choice for radiologists to reduce their workload. In this paper, we propose a new end-to-end trainable semi-supervised framework for semantic segmentation of emphysema with partial annotations, in which a segmentation network is trained from both annotated and unannotated areas. In addition, we present a new loss function, referred to as the Fisher loss, to enhance the discriminative power of the model, and successfully integrate it into our proposed framework. Our experimental results show that the proposed methods have superior performance over the baseline supervised approach (trained with only annotated areas) and outperform the state-of-the-art methods for emphysema segmentation.
|
38
|
Gene expression microarray public dataset reanalysis in chronic obstructive pulmonary disease. PLoS One 2019; 14:e0224750. [PMID: 31730674 PMCID: PMC6857915 DOI: 10.1371/journal.pone.0224750] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2019] [Accepted: 10/21/2019] [Indexed: 12/20/2022] Open
Abstract
Chronic obstructive pulmonary disease (COPD) was classified by the Centers for Disease Control and Prevention in 2014 as the 3rd leading cause of death in the United States (US). The main cause of COPD is exposure to tobacco smoke and air pollutants. Problems associated with COPD include under-diagnosis of the disease and an increase in the number of smokers worldwide. The goal of our study is to identify disease variability in the gene expression profiles of COPD subjects compared to controls, by reanalyzing pre-existing, publicly available microarray expression datasets. Our inclusion criteria required that microarray datasets report the smoking status, age and sex of blood donors. Our datasets used Affymetrix and Agilent microarray platforms (7 datasets, 1,262 samples). We re-analyzed the curated raw microarray expression data using R packages, and used Box-Cox power transformations to normalize the datasets. To identify significantly differentially expressed genes, we used generalized least squares models with disease state, age, sex, smoking status and study as effects, also including binary interactions, followed by likelihood ratio tests (LRT). We found 3,315 statistically significant (Storey-adjusted q-value <0.05) differentially expressed genes with respect to disease state (COPD or control). We further filtered these genes for biological effect (using LRT q-value <0.05 and the model estimates’ 10% two-tailed quantiles of mean differences between COPD and control) to identify 679 genes. Through analysis of disease, sex, age, smoking status, and disease interactions, we identified differentially expressed genes involved in a variety of immune responses and cell processes in COPD. We also trained a logistic regression model using the common array genes as features, which enabled prediction of disease status with 81.7% accuracy. 
Our results show potential for improving the diagnosis of COPD through blood and highlight novel gene expression disease signatures.
|
39
|
Peng L, Chen YW, Lin L, Hu H, Li H, Chen Q, Ling X, Wang D, Han X, Iwamoto Y. Classification and Quantification of Emphysema Using a Multi-Scale Residual Network. IEEE J Biomed Health Inform 2019; 23:2526-2536. [DOI: 10.1109/jbhi.2018.2890045] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
|
40
|
Cheplygina V, de Bruijne M, Pluim JPW. Not-so-supervised: A survey of semi-supervised, multi-instance, and transfer learning in medical image analysis. Med Image Anal 2019; 54:280-296. [PMID: 30959445 DOI: 10.1016/j.media.2019.03.009] [Citation(s) in RCA: 361] [Impact Index Per Article: 60.2] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2018] [Revised: 12/20/2018] [Accepted: 03/25/2019] [Indexed: 02/07/2023]
Abstract
Machine learning (ML) algorithms have made a tremendous impact in the field of medical imaging. While medical imaging datasets have been growing in size, a frequently mentioned challenge for supervised ML algorithms is the lack of annotated data. As a result, various methods that can learn with less or other types of supervision have been proposed. We give an overview of semi-supervised, multiple instance, and transfer learning in medical imaging, in both diagnosis and segmentation tasks. We also discuss connections between these learning scenarios, and opportunities for future research. A dataset with the details of the surveyed papers is available via https://figshare.com/articles/Database_of_surveyed_literature_in_Not-so-supervised_a_survey_of_semi-supervised_multi-instance_and_transfer_learning_in_medical_image_analysis_/7479416.
Affiliation(s)
- Veronika Cheplygina: Medical Image Analysis, Department Biomedical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands
- Marleen de Bruijne: Biomedical Imaging Group Rotterdam, Departments Radiology and Medical Informatics, Erasmus Medical Center, Rotterdam, the Netherlands; The Image Section, Department Computer Science, University of Copenhagen, Copenhagen, Denmark
- Josien P W Pluim: Medical Image Analysis, Department Biomedical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands; Image Sciences Institute, University Medical Center Utrecht, Utrecht, the Netherlands
|
41
|
Therrien R, Doyle S. Role of training data variability on classifier performance and generalizability. MEDICAL IMAGING 2018: DIGITAL PATHOLOGY 2018:5. [DOI: 10.1117/12.2293919] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/05/2025]
|