1. Chang C, Shi W, Wang Y, Zhang Z, Huang X, Jiao Y. The path from task-specific to general purpose artificial intelligence for medical diagnostics: A bibliometric analysis. Comput Biol Med 2024; 172:108258. [PMID: 38467093] [DOI: 10.1016/j.compbiomed.2024.108258]
Abstract
Artificial intelligence (AI) has revolutionized many fields, and its potential in healthcare has been increasingly recognized. Based on diverse data sources such as imaging, laboratory tests, medical records, and electrophysiological data, diagnostic AI has witnessed rapid development in recent years. A comprehensive understanding of the development status, contributing factors, and their relationships in the application of AI to medical diagnostics is essential to further promote its use in clinical practice. In this study, we conducted a bibliometric analysis to explore the evolution of task-specific to general-purpose AI for medical diagnostics. We used the Web of Science database to search for relevant articles published between 2010 and 2023, and applied VOSviewer, the R package Bibliometrix, and CiteSpace to analyze collaborative networks and keywords. Our analysis revealed that the field of AI in medical diagnostics has experienced rapid growth in recent years, with a focus on tasks such as image analysis, disease prediction, and decision support. Collaborative networks were observed among researchers and institutions, indicating a trend of global cooperation in this field. Additionally, we identified several key factors contributing to the development of AI in medical diagnostics, including data quality, algorithm design, and computational power. Challenges to progress in the field include model explainability, robustness, and equality, which will require multi-stakeholder, interdisciplinary collaboration to tackle. Our study provides a holistic understanding of the path from task-specific, mono-modal AI toward general-purpose, multimodal AI for medical diagnostics. With the continuous improvement of AI technology and the accumulation of medical data, we believe that AI will play a greater role in medical diagnostics in the future.
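The keyword analysis described above rests on co-occurrence counting: how often two keywords appear in the same article. A minimal sketch of that raw computation, using hypothetical keyword lists rather than the authors' Web of Science data:

```python
from collections import Counter
from itertools import combinations

# Hypothetical per-article keyword lists standing in for Web of Science records.
articles = [
    ["deep learning", "medical imaging", "diagnosis"],
    ["deep learning", "electronic health records"],
    ["medical imaging", "diagnosis", "segmentation"],
]

# Count how often each keyword pair appears in the same article -- the raw
# input behind co-occurrence maps in tools such as VOSviewer.
cooccurrence = Counter()
for keywords in articles:
    for pair in combinations(sorted(set(keywords)), 2):
        cooccurrence[pair] += 1

print(cooccurrence[("diagnosis", "medical imaging")])  # 2: pair shared by two articles
```

Tools like VOSviewer and Bibliometrix build their clustering and network layouts on top of exactly this kind of pair-count matrix.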
Affiliation(s)
- Chuheng Chang
- Department of General Practice (General Internal Medicine), Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China; 4+4 Medical Doctor Program, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China.
- Wen Shi
- Department of Gastroenterology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China.
- Youyang Wang
- Department of General Practice (General Internal Medicine), Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China.
- Zhan Zhang
- Department of Computer Science and Technology, Tsinghua University, Beijing, China.
- Xiaoming Huang
- Department of General Practice (General Internal Medicine), Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China.
- Yang Jiao
- Department of General Practice (General Internal Medicine), Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China.
2. Uzundurukan A, Poncet S, Boffito DC, Micheau P. CT-FEM of the human thorax: Frequency response function and 3D harmonic analysis at resonance. Comput Methods Programs Biomed 2024; 246:108062. [PMID: 38359553] [DOI: 10.1016/j.cmpb.2024.108062]
Abstract
BACKGROUND AND OBJECTIVE High-frequency chest wall compression (HFCC) therapy by airway clearance devices (ACDs) acts on the rheological properties of bronchial mucus to assist in clearing pulmonary secretions. Investigating low-frequency vibrations of the human thorax through numerical simulations is critical to ensure consistency and repeatability of studies by reducing the extreme variability in body measurements across individuals. This study presents a numerical investigation of the harmonic acoustic excitation of ACDs on the human chest as a gentle and effective HFCC therapy. METHODS Four software programs were used sequentially to visualize medical images, decrease the number of surfaces, generate and repair meshes, and conduct numerical analysis. The developed methodology enabled validation of the effect of HFCC through computed tomography-based finite element analysis (CT-FEM) of a human thorax. To illustrate the vibroacoustic characteristics of the HFCC therapy device, a 146-decibel sound pressure level (dBSPL) was applied on the back-chest surface of the model. The frequency response function (FRF) across 5-100 Hz was analyzed to characterize the behaviour of the human thorax with a state-space model. RESULTS We found that the accelerance FRF equals 0.138 m/s²/N at the peak frequency of 28 Hz, which is consistent with two independent experimental airway clearance studies reported in the literature. The state-space model identified two apparent resonance frequencies for the human thorax, at 28 Hz and 41 Hz. The total displacement, kinetic energy density, and elastic strain energy density at resonance were furthermore quantified at 1 µm, 5.2 µJ/m³, and 140.7 µJ/m³, respectively. To deepen understanding of the impact on internal organs, the model was simulated in both the time and frequency domains for a comprehensive analysis.
CONCLUSION Overall, the present study determined and validated the FRF of the human thorax, ruling out inter-individual inconsistencies and contributing to patient health by identifying gentle but effective HFCC therapy conditions with ACDs. This finding also provides greater clarity and a tangible understanding of the subject by simulating the responses of the CT-FEM human thorax model and its internal organs at resonance.
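For orientation, the accelerance FRF quoted above is the standard ratio of acceleration response to excitation force (notation ours, not the authors'):

```latex
A(\omega) = \frac{a(\omega)}{F(\omega)},
\qquad
\left| A \right|_{f = 28\,\mathrm{Hz}} \approx 0.138\ \mathrm{m\,s^{-2}/N}
```

Resonances appear as peaks of |A(ω)|; the state-space fit places them at 28 Hz and 41 Hz.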
Affiliation(s)
- Arife Uzundurukan
- Centre de Recherche Acoustique-Signal-Humain, Université de Sherbrooke, 2500 Bd de l'Université, Sherbrooke, QC J1K 2R1, Canada.
- Sébastien Poncet
- Centre de Recherche Acoustique-Signal-Humain, Université de Sherbrooke, 2500 Bd de l'Université, Sherbrooke, QC J1K 2R1, Canada
- Daria Camilla Boffito
- Department of Chemical Engineering, École Polytechnique de Montréal, 2500 Chem. de Polytechnique, Montréal, QC H3T 1J4, Canada
- Philippe Micheau
- Centre de Recherche Acoustique-Signal-Humain, Université de Sherbrooke, 2500 Bd de l'Université, Sherbrooke, QC J1K 2R1, Canada
3. Jawad BN, Shaker SM, Altintas I, Eugen-Olsen J, Nehlin JO, Andersen O, Kallemose T. Development and validation of prognostic machine learning models for short- and long-term mortality among acutely admitted patients based on blood tests. Sci Rep 2024; 14:5942. [PMID: 38467752] [PMCID: PMC10928126] [DOI: 10.1038/s41598-024-56638-6]
Abstract
Several scores for predicting mortality at the emergency department have been developed, but all have shortcomings: they are either simple and clinically applicable but perform poorly, or advanced and high-performing but difficult to implement in the clinic. This study aimed to explore whether machine learning algorithms could predict all-cause short- and long-term mortality based on routine blood tests collected at admission. METHODS We analyzed data from a retrospective cohort study including patients > 18 years admitted to the Emergency Department (ED) of Copenhagen University Hospital Hvidovre, Denmark, between November 2013 and March 2017. The primary outcomes were 3-, 10-, 30-, and 365-day mortality after admission. PyCaret, an automated machine learning library, was used to evaluate the predictive performance of fifteen machine learning algorithms using the area under the receiver operating characteristic curve (AUC). RESULTS Data from 48,841 admissions were analyzed; of these, 34,190 (70%) were randomly assigned to training data and 14,651 (30%) to test data. Eight machine learning algorithms achieved very good to excellent AUCs on test data, in the range 0.85-0.93. In the prediction of short-term mortality, lactate dehydrogenase (LDH), leukocyte counts and differentials, blood urea nitrogen (BUN) and mean corpuscular hemoglobin concentration (MCHC) were the best predictors, whereas long-term mortality prediction was favored by age, LDH, soluble urokinase plasminogen activator receptor (suPAR), albumin, and BUN. CONCLUSION The findings suggest that biomarkers measured from a single blood sample at admission to the ED can identify patients at high risk of short- and long-term mortality following emergency admissions.
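The evaluation pipeline here, a 70/30 split with several classifiers ranked by test-set AUC, can be sketched with scikit-learn standing in for PyCaret's `compare_models`. The data below is synthetic, not the study's cohort:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for routine blood-test features and a mortality label.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# 70/30 train/test split, mirroring the study's partition.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Rank algorithms by test-set AUC, as PyCaret's compare_models does internally.
models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
}
aucs = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    aucs[name] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(aucs)
```

In PyCaret itself, `setup()` followed by `compare_models()` runs this loop across its full model zoo and reports AUC alongside other metrics.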
Affiliation(s)
- Baker Nawfal Jawad
- Department of Clinical Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre, Denmark.
- Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark.
- Izzet Altintas
- Department of Clinical Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre, Denmark
- Emergency Department, Copenhagen University Hospital Amager and Hvidovre, Hvidovre, Denmark
- Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark
- Jesper Eugen-Olsen
- Department of Clinical Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre, Denmark
- Jan O Nehlin
- Department of Clinical Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre, Denmark
- Ove Andersen
- Department of Clinical Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre, Denmark
- Emergency Department, Copenhagen University Hospital Amager and Hvidovre, Hvidovre, Denmark
- Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark
- Thomas Kallemose
- Department of Clinical Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre, Denmark
4. Wong CYT, O'Byrne C, Taribagil P, Liu T, Antaki F, Keane PA. Comparing code-free and bespoke deep learning approaches in ophthalmology. Graefes Arch Clin Exp Ophthalmol 2024:10.1007/s00417-024-06432-x. [PMID: 38446200] [DOI: 10.1007/s00417-024-06432-x]
Abstract
AIM Code-free deep learning (CFDL) allows clinicians without coding expertise to build high-quality artificial intelligence (AI) models without writing code. In this review, we comprehensively examine the advantages that CFDL offers over bespoke, expert-designed deep learning (DL). As exemplars, we use the following tasks: (1) diabetic retinopathy screening, (2) retinal multi-disease classification, (3) surgical video classification, (4) oculomics and (5) resource management. METHODS We performed a search for studies reporting CFDL applications in ophthalmology in MEDLINE (through PubMed) from inception to June 25, 2023, using the keywords 'autoML' AND 'ophthalmology'. After identifying 5 CFDL studies addressing our target tasks, we performed a subsequent search to find corresponding bespoke DL studies focused on the same tasks. Only English-language articles with full text available were included. Reviews, editorials, protocols and case reports or case series were excluded. We identified ten relevant studies for this review. RESULTS Overall, studies were optimistic about CFDL's advantages over bespoke DL in the five ophthalmological tasks. However, much of this discussion was one-dimensional, with wide applicability gaps. A high-quality assessment of whether CFDL is preferable to bespoke DL warrants a context-specific, weighted assessment of clinician intent, patient acceptance and cost-effectiveness. We conclude that CFDL and bespoke DL each have unique strengths and cannot replace each other; their benefits are valued differently on a case-by-case basis. Future studies are warranted to perform a multidimensional analysis of both techniques and to address the limitations of suboptimal dataset quality, poorly characterized applicability and non-regulated study designs. CONCLUSION For clinicians without DL expertise and easy access to AI experts, CFDL allows the prototyping of novel clinical AI systems.
CFDL models complement bespoke models, depending on the task at hand. A multidimensional, weighted evaluation of the factors involved in implementing these models for a designated task is warranted.
Affiliation(s)
- Carolyn Yu Tung Wong
- Institute of Ophthalmology, University College London, 11-43 Bath St, London, EC1V 9EL, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong SAR, China
- Ciara O'Byrne
- Institute of Ophthalmology, University College London, 11-43 Bath St, London, EC1V 9EL, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Priyal Taribagil
- Institute of Ophthalmology, University College London, 11-43 Bath St, London, EC1V 9EL, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Timing Liu
- Institute of Ophthalmology, University College London, 11-43 Bath St, London, EC1V 9EL, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Fares Antaki
- Institute of Ophthalmology, University College London, 11-43 Bath St, London, EC1V 9EL, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- The CHUM School of Artificial Intelligence in Healthcare, Montreal, QC, Canada
- Pearse Andrew Keane
- Institute of Ophthalmology, University College London, 11-43 Bath St, London, EC1V 9EL, UK.
- Moorfields Eye Hospital NHS Foundation Trust, London, UK.
- NIHR Moorfields Biomedical Research Centre, London, UK.
5. Liang S, Xu X, Yang Z, Du Q, Zhou L, Shao J, Guo J, Ying B, Li W, Wang C. Deep learning for precise diagnosis and subtype triage of drug-resistant tuberculosis on chest computed tomography. MedComm (Beijing) 2024; 5:e487. [PMID: 38469547] [PMCID: PMC10925488] [DOI: 10.1002/mco2.487]
Abstract
Deep learning, transforming input data into target predictions through intricate network structures, has inspired novel exploration of automated diagnosis based on medical images. The distinct morphological characteristics of chest abnormalities between drug-resistant tuberculosis (DR-TB) and drug-sensitive tuberculosis (DS-TB) on chest computed tomography (CT) are of potential value in differential diagnosis, which is challenging in the clinic. Hence, based on 1176 chest CT volumes from an equal number of patients with tuberculosis (TB), we presented a deep learning-based system for TB drug resistance identification and subtype classification (DeepTB), which could automatically diagnose DR-TB and classify crucial subtypes, including rifampicin-resistant tuberculosis, multidrug-resistant tuberculosis, and extensively drug-resistant tuberculosis. Moreover, chest lesions were manually annotated to endow the model with robust power to assist radiologists in image interpretation, and a Circos plot revealed the relationship between chest abnormalities and specific types of DR-TB. Finally, DeepTB achieved an area under the curve (AUC) of up to 0.930 for thoracic abnormality detection and 0.943 for DR-TB diagnosis. Notably, the system demonstrated instructive value in DR-TB subtype classification, with AUCs ranging from 0.880 to 0.928. Meanwhile, class activation maps were generated to express a human-understandable visual concept. Together, showing prominent performance, DeepTB would be impactful in clinical decision-making for DR-TB.
Affiliation(s)
- Shufan Liang
- Department of Pulmonary and Critical Care Medicine, State Key Laboratory of Respiratory Health and Multimorbidity, Targeted Tracer Research and Development Laboratory, Med‐X Center for Manufacturing, Frontiers Science Center for Disease‐related Molecular Network, West China Hospital, West China School of Medicine, Sichuan University, Chengdu, China
- Xiuyuan Xu
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Zhe Yang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Qiuyu Du
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Lingyu Zhou
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Jun Shao
- Department of Pulmonary and Critical Care Medicine, State Key Laboratory of Respiratory Health and Multimorbidity, Targeted Tracer Research and Development Laboratory, Med‐X Center for Manufacturing, Frontiers Science Center for Disease‐related Molecular Network, West China Hospital, West China School of Medicine, Sichuan University, Chengdu, China
- Jixiang Guo
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Binwu Ying
- Department of Laboratory Medicine, West China Hospital, Sichuan University, Chengdu, China
- Weimin Li
- Department of Pulmonary and Critical Care Medicine, State Key Laboratory of Respiratory Health and Multimorbidity, Targeted Tracer Research and Development Laboratory, Med‐X Center for Manufacturing, Frontiers Science Center for Disease‐related Molecular Network, West China Hospital, West China School of Medicine, Sichuan University, Chengdu, China
- Chengdi Wang
- Department of Pulmonary and Critical Care Medicine, State Key Laboratory of Respiratory Health and Multimorbidity, Targeted Tracer Research and Development Laboratory, Med‐X Center for Manufacturing, Frontiers Science Center for Disease‐related Molecular Network, West China Hospital, West China School of Medicine, Sichuan University, Chengdu, China
6. Bougourzi F, Distante C, Dornaika F, Taleb-Ahmed A, Hadid A, Chaudhary S, Yang W, Qiang Y, Anwar T, Breaban ME, Hsu CC, Tai SC, Chen SN, Tricarico D, Chaudhry HAH, Fiandrotti A, Grangetto M, Spatafora MAN, Ortis A, Battiato S. COVID-19 Infection Percentage Estimation from Computed Tomography Scans: Results and Insights from the International Per-COVID-19 Challenge. Sensors (Basel) 2024; 24:1557. [PMID: 38475092] [PMCID: PMC10934842] [DOI: 10.3390/s24051557]
Abstract
COVID-19 analysis from medical imaging is an important task that has been intensively studied in recent years due to the spread of the COVID-19 pandemic. In fact, medical imaging has often been used as a complementary or main tool to identify infected persons. Moreover, medical imaging can provide more details about COVID-19 infection, including its severity and spread, which makes it possible to evaluate the infection and follow up on the patient's state. CT scans are the most informative tool for COVID-19 infection, where the evaluation is usually performed through infection segmentation. However, segmentation is a tedious task that requires much effort and time from expert radiologists. To deal with this limitation, an efficient framework for estimating COVID-19 infection as a regression task is proposed. The goal of the Per-COVID-19 challenge is to test the efficiency of modern deep learning methods on COVID-19 infection percentage estimation (CIPE) from CT scans. Participants had to develop an efficient deep learning approach that can learn from noisy data. In addition, participants had to cope with many challenges, including those related to COVID-19 infection complexity and cross-dataset scenarios. This paper provides an overview of the COVID-19 infection percentage estimation challenge (Per-COVID-19) held at MIA-COVID-2022. Details of the competition data, challenges, and evaluation metrics are presented. The best-performing approaches and their results are described and discussed.
Affiliation(s)
- Fares Bougourzi
- Institute of Applied Sciences and Intelligent Systems, National Research Council of Italy, 73100 Lecce, Italy
- Laboratoire LISSI, University Paris-Est Creteil, Vitry sur Seine, 94400 Paris, France
- Cosimo Distante
- Institute of Applied Sciences and Intelligent Systems, National Research Council of Italy, 73100 Lecce, Italy
- Fadi Dornaika
- Department of Computer Science and Artificial Intelligence, University of the Basque Country UPV/EHU, Manuel Lardizabal, 1, 20018 San Sebastian, Spain
- IKERBASQUE, Basque Foundation for Science, 48011 Bilbao, Spain
- Abdelmalik Taleb-Ahmed
- Institut d'Electronique de Microélectronique et de Nanotechnologie (IEMN), UMR 8520, Universite Polytechnique Hauts-de-France, Université de Lille, CNRS, 59313 Valenciennes, France
- Abdenour Hadid
- Sorbonne Center for Artificial Intelligence, Sorbonne University of Abu Dhabi, Abu Dhabi P.O. Box 38044, United Arab Emirates
- Suman Chaudhary
- College of Computer Science and Technology, Taiyuan University of Technology, Taiyuan 030024, China
- Wanting Yang
- College of Computer Science and Technology, Taiyuan University of Technology, Taiyuan 030024, China
- Yan Qiang
- College of Computer Science and Technology, Taiyuan University of Technology, Taiyuan 030024, China
- Talha Anwar
- School of Computing, National University of Computer and Emerging Sciences, Islamabad 44000, Pakistan
- Chih-Chung Hsu
- Institute of Data Science, National Cheng Kung University, No. 1, University Rd., East Dist., Tainan City 701, Taiwan
- Shen-Chieh Tai
- Institute of Data Science, National Cheng Kung University, No. 1, University Rd., East Dist., Tainan City 701, Taiwan
- Shao-Ning Chen
- Institute of Data Science, National Cheng Kung University, No. 1, University Rd., East Dist., Tainan City 701, Taiwan
- Davide Tricarico
- Dipartimento di Informatica, Universita degli Studi di Torino, Corso Svizzera 185, 10149 Torino, Italy
- Hafiza Ayesha Hoor Chaudhry
- Dipartimento di Informatica, Universita degli Studi di Torino, Corso Svizzera 185, 10149 Torino, Italy
- Attilio Fiandrotti
- Dipartimento di Informatica, Universita degli Studi di Torino, Corso Svizzera 185, 10149 Torino, Italy
- Marco Grangetto
- Dipartimento di Informatica, Universita degli Studi di Torino, Corso Svizzera 185, 10149 Torino, Italy
- Alessandro Ortis
- Department of Mathematics and Computer Science, University of Catania, 95125 Catania, Italy
- Sebastiano Battiato
- Department of Mathematics and Computer Science, University of Catania, 95125 Catania, Italy
7. Fan W, Yang Y, Qi J, Zhang Q, Liao C, Wen L, Wang S, Wang G, Xia Y, Wu Q, Fan X, Chen X, He M, Xiao J, Yang L, Liu Y, Chen J, Wang B, Zhang L, Yang L, Gan H, Zhang S, Liu G, Ge X, Cai Y, Zhao G, Zhang X, Xie M, Xu H, Zhang Y, Chen J, Li J, Han S, Mu K, Xiao S, Xiong T, Nian Y, Zhang D. A deep-learning-based framework for identifying and localizing multiple abnormalities and assessing cardiomegaly in chest X-ray. Nat Commun 2024; 15:1347. [PMID: 38355644] [PMCID: PMC10867134] [DOI: 10.1038/s41467-024-45599-z]
Abstract
Accurate identification and localization of multiple abnormalities are crucial steps in the interpretation of chest X-rays (CXRs); however, the lack of a large CXR dataset with bounding boxes severely constrains accurate localization research based on deep learning. We created a large CXR dataset named CXR-AL14, containing 165,988 CXRs and 253,844 bounding boxes. On the basis of this dataset, a deep-learning-based framework was developed to identify and localize 14 common abnormalities and calculate the cardiothoracic ratio (CTR) simultaneously. The mean average precision values obtained by the model for the 14 abnormalities reached 0.572-0.631 at an intersection-over-union threshold of 0.5, and the intraclass correlation coefficient of the CTR algorithm exceeded 0.95 on the held-out, multicentre and prospective test datasets. This framework shows excellent performance, good generalization ability and strong clinical applicability; it is superior to senior radiologists and suitable for routine clinical settings.
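Two quantities in this abstract have simple closed forms: the intersection-over-union used to decide whether a predicted box counts as a hit, and the cardiothoracic ratio computed from bounding-box widths. A minimal sketch (our own illustration, not the authors' implementation; box coordinates are hypothetical):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def cardiothoracic_ratio(heart_box, thorax_box):
    """CTR = maximal horizontal cardiac width / maximal internal thoracic width."""
    return (heart_box[2] - heart_box[0]) / (thorax_box[2] - thorax_box[0])

# A prediction counts as correct at the paper's threshold when IoU >= 0.5.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 1/3: misses the threshold
print(cardiothoracic_ratio((30, 40, 80, 90), (10, 20, 110, 120)))  # 0.5
```

Mean average precision then aggregates precision-recall over all predictions scored by this IoU rule, per abnormality class.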
Affiliation(s)
- Weijie Fan
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Yi Yang
- Department of Digital Medicine, School of Biomedical Engineering and Imaging Medicine, Army Medical University, Chongqing, 400038, P. R. China
- Jing Qi
- Department of Digital Medicine, School of Biomedical Engineering and Imaging Medicine, Army Medical University, Chongqing, 400038, P. R. China
- Qichuan Zhang
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Cuiwei Liao
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Li Wen
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Shuang Wang
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Guangxian Wang
- Department of Radiology, People's Hospital of Banan, Chongqing Medical University, Chongqing, 401320, P. R. China
- Yu Xia
- Department of Radiology, Xishui hospital of Traditional Chinese Medicine, Zunyi of Guizhou province, 564600, P. R. China
- Qihua Wu
- Department of Radiology, People's Hospital of Nanchuan, Chongqing, 408400, P. R. China
- Xiaotao Fan
- Department of Radiology, Fengdu People's Hospital, Chongqing, 408200, P. R. China
- Xingcai Chen
- Department of Digital Medicine, School of Biomedical Engineering and Imaging Medicine, Army Medical University, Chongqing, 400038, P. R. China
- Mi He
- Department of Digital Medicine, School of Biomedical Engineering and Imaging Medicine, Army Medical University, Chongqing, 400038, P. R. China
- JingJing Xiao
- Department of Medical Engineering, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Liu Yang
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Yun Liu
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Jia Chen
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Bing Wang
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Lei Zhang
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Liuqing Yang
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Hui Gan
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Shushu Zhang
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Guofang Liu
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Xiaodong Ge
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Yuanqing Cai
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Gang Zhao
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Xi Zhang
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Mingxun Xie
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Huilin Xu
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Yi Zhang
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Jiao Chen
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Jun Li
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Shuang Han
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Ke Mu
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Shilin Xiao
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Tingwei Xiong
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Yongjian Nian
- Department of Digital Medicine, School of Biomedical Engineering and Imaging Medicine, Army Medical University, Chongqing, 400038, P. R. China.
- Dong Zhang
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China.
8. Han PL, Jiang L, Cheng JL, Shi K, Huang S, Jiang Y, Jiang L, Xia Q, Li YY, Zhu M, Li K, Yang ZG. Artificial intelligence-assisted diagnosis of congenital heart disease and associated pulmonary arterial hypertension from chest radiographs: A multi-reader multi-case study. Eur J Radiol 2024; 171:111277. [PMID: 38160541] [DOI: 10.1016/j.ejrad.2023.111277]
Abstract
OBJECTIVES To explore the feasibility of automatic diagnosis of congenital heart disease (CHD) and pulmonary arterial hypertension associated with CHD (PAH-CHD) from chest radiographs using artificial intelligence (AI), and to evaluate whether AI assistance improves clinical diagnostic accuracy. MATERIALS AND METHODS A total of 3255 frontal preoperative chest radiographs (1174 CHD of any type and 2081 non-CHD) were retrospectively obtained. ResNet18, pretrained on the ImageNet database, was adopted to establish the diagnostic models. Radiologists diagnosed CHD/PAH-CHD from 330/165 chest radiographs in two sessions: in the first session, 50% of the images were accompanied by AI-based classification; after a month, the remaining 50% were accompanied by AI-based classification. Diagnostic results were compared between the radiologists and the AI models, and between radiologists with and without AI assistance. RESULTS The AI model achieved an average area under the receiver operating characteristic curve (AUC) of 0.948 (sensitivity: 0.970, specificity: 0.982) for CHD diagnosis and an AUC of 0.778 (sensitivity: 0.632, specificity: 0.925) for identifying PAH-CHD. On the balanced testing set of 330 radiographs (165 CHD and 165 non-CHD), AI achieved higher AUCs than all five radiologists for identifying CHD (radiologists: 0.670-0.858) and PAH-CHD (0.610-0.688). With AI assistance, the mean AUC (± standard error) of the radiologists improved significantly for both CHD (ΔAUC +0.096, 95% CI: 0.001-0.190; P = 0.048) and PAH-CHD (ΔAUC +0.066, 95% CI: 0.010-0.122; P = 0.031) diagnosis. CONCLUSION Chest radiograph-based AI models can detect CHD and PAH-CHD automatically. AI assistance improved radiologists' diagnostic accuracy, which may facilitate a timely initial diagnosis of CHD and PAH-CHD.
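The evaluation metrics reported above (AUC, sensitivity, specificity) are straightforward to compute from classifier scores; a minimal sketch using the Mann-Whitney formulation of the AUC (illustrative only, not the authors' code):

```python
import numpy as np

def auc_mann_whitney(labels, scores):
    """AUC as the probability that a random positive outscores a random negative."""
    labels = np.asarray(labels, dtype=bool)
    pos, neg = np.asarray(scores)[labels], np.asarray(scores)[~labels]
    # Count wins over all positive/negative pairs; ties count half.
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def sens_spec(labels, scores, threshold):
    """Sensitivity and specificity at a fixed decision threshold."""
    labels = np.asarray(labels, dtype=bool)
    pred = np.asarray(scores) >= threshold
    sensitivity = (pred & labels).sum() / labels.sum()
    specificity = (~pred & ~labels).sum() / (~labels).sum()
    return sensitivity, specificity

y = [1, 0, 1, 0]
s = [0.9, 0.8, 0.7, 0.6]
print(auc_mann_whitney(y, s))   # 0.75
print(sens_spec(y, s, 0.75))    # (0.5, 0.5)
```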
Affiliation(s)
- Pei-Lun Han
- Department of Radiology and West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, China
- Lei Jiang
- College of Computer Science, Sichuan University, Chengdu, China
- Jun-Long Cheng
- College of Computer Science, Sichuan University, Chengdu, China
- Ke Shi
- Department of Radiology and West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, China
- Shan Huang
- Department of Radiology and West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, China
- Yu Jiang
- Department of Radiology and West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, China
- Li Jiang
- Department of Radiology and West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, China
- Qing Xia
- SenseTime Research, Beijing, China
- Yi-Yue Li
- Department of Radiology and West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, China
- Min Zhu
- College of Computer Science, Sichuan University, Chengdu, China
- Kang Li
- Department of Radiology and West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, China; Med-X Center for Informatics, Sichuan University, Chengdu, China; Shanghai Artificial Intelligence Laboratory, Shanghai, China
- Zhi-Gang Yang
- Department of Radiology and West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, China.
9
Alves VM, dos Santos Cardoso J, Gama J. Classification of Pulmonary Nodules in 2-[18F]FDG PET/CT Images with a 3D Convolutional Neural Network. Nucl Med Mol Imaging 2024; 58:9-24. [PMID: 38261899] [PMCID: PMC10796312] [DOI: 10.1007/s13139-023-00821-6]
Abstract
Purpose 2-[18F]FDG PET/CT plays an important role in the management of pulmonary nodules. Convolutional neural networks (CNNs) automatically learn features from images and have the potential to improve the discrimination between malignant and benign pulmonary nodules. The purpose of this study was to develop and validate a CNN model for classification of pulmonary nodules from 2-[18F]FDG PET images. Methods One hundred thirteen participants, each contributing one nodule, were retrospectively selected. The 2-[18F]FDG PET images were preprocessed and annotated with the reference standard. The deep learning experiment entailed randomly splitting the data into five sets. A test set was held out for evaluation of the final model. Four-fold cross-validation was performed on the remaining sets to train and evaluate a set of candidate models and to select the final model. Three types of 3D CNN architectures were trained from random weight initialization (Stacked 3D CNN, VGG-like, and Inception-v2-like models) on both the original and augmented datasets. Transfer learning from ImageNet with ResNet-50 was also used. Results The final model (Stacked 3D CNN) obtained an area under the ROC curve of 0.8385 (95% CI: 0.6455-1.0000) on the test set. At an optimized decision threshold that assigns a higher cost to false negatives, the model had a sensitivity of 80.00%, a specificity of 69.23%, and an accuracy of 73.91% on the test set. Conclusion A 3D CNN model was effective at distinguishing benign from malignant pulmonary nodules in 2-[18F]FDG PET images. Supplementary Information The online version contains supplementary material available at 10.1007/s13139-023-00821-6.
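The 3D CNNs compared here all build on the same core operation: sliding a learned kernel through the PET volume. A naive single-channel 3D convolution (no framework, purely illustrative) makes that operation concrete:

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Naive single-channel 3D convolution (valid padding, stride 1)."""
    D, H, W = volume.shape
    d, h, w = kernel.shape
    out = np.zeros((D - d + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # Dot product of the kernel with the local 3D neighbourhood.
                out[i, j, k] = np.sum(volume[i:i+d, j:j+h, k:k+w] * kernel)
    return out

vol = np.ones((8, 8, 8))          # stand-in for a small PET sub-volume
kern = np.ones((3, 3, 3)) / 27.0  # averaging kernel
feat = conv3d_valid(vol, kern)
print(feat.shape)                 # (6, 6, 6)
```

In a real 3D CNN this operation runs over many channels with learned kernels and is heavily optimized; the loop above only shows the arithmetic.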
Affiliation(s)
- Victor Manuel Alves
- Faculty of Economics, University of Porto, Rua Dr. Roberto Frias, Porto, 4200-464 Porto, Portugal
- Department of Nuclear Medicine, University Hospital Center of São João, Alameda Prof. Hernâni Monteiro, 4200-319 Porto, Portugal
- Jaime dos Santos Cardoso
- Faculty of Engineering, University of Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal
- Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), Rua Dr. Roberto Frias, 4200-465 Porto, Portugal
- João Gama
- Faculty of Economics, University of Porto, Rua Dr. Roberto Frias, Porto, 4200-464 Porto, Portugal
- Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), Rua Dr. Roberto Frias, 4200-465 Porto, Portugal
10
Islam MT, Zhou Z, Ren H, Khuzani MB, Kapp D, Zou J, Tian L, Liao JC, Xing L. Revealing hidden patterns in deep neural network feature space continuum via manifold learning. Nat Commun 2023; 14:8506. [PMID: 38129376] [PMCID: PMC10739971] [DOI: 10.1038/s41467-023-43958-w]
Abstract
Deep neural networks (DNNs) extract thousands to millions of task-specific features during model training for inference and decision-making. While visualizing these features is critical for comprehending the learning process and improving the performance of the DNNs, existing visualization techniques work only for classification tasks. For regressions, the feature points lie on a high dimensional continuum having an inherently complex shape, making a meaningful visualization of the features intractable. Given that the majority of deep learning applications are regression-oriented, developing a conceptual framework and computational method to reliably visualize the regression features is of great significance. Here, we introduce a manifold discovery and analysis (MDA) method for DNN feature visualization, which involves learning the manifold topology associated with the output and target labels of a DNN. MDA leverages the acquired topological information to preserve the local geometry of the feature space manifold and provides insightful visualizations of the DNN features, highlighting the appropriateness, generalizability, and adversarial robustness of a DNN. The performance and advantages of the MDA approach compared to the existing methods are demonstrated in different deep learning applications.
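MDA itself is the authors' method and is not reproduced here; as a generic stand-in for manifold-based feature visualization, classical multidimensional scaling (MDS) likewise embeds high-dimensional feature points into a low-dimensional space while preserving their pairwise geometry:

```python
import numpy as np

def classical_mds(X, dim=2):
    """Embed rows of X into `dim` dimensions, preserving pairwise Euclidean distances."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared distance matrix
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n                  # centering matrix
    B = -0.5 * J @ D2 @ J                                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dim]                 # top-`dim` eigenpairs
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

# Regression-like features lying on a 1-D continuum embed onto (close to) a line.
X = np.linspace(0, 1, 20)[:, None] * np.ones((1, 50))   # 50-D points on a line
Y = classical_mds(X, dim=2)
print(Y.shape)   # (20, 2)
```

MDA additionally exploits the topology of the network's outputs and target labels, which this generic sketch does not attempt.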
Affiliation(s)
- Md Tauhidul Islam
- Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, USA
- Zixia Zhou
- Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, USA
- Hongyi Ren
- Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, USA
- Daniel Kapp
- Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, USA
- James Zou
- Department of Biomedical Data Science, Stanford University, Stanford, CA, 94305, USA
- Lu Tian
- Department of Biomedical Data Science, Stanford University, Stanford, CA, 94305, USA
- Joseph C Liao
- Department of Urology, Stanford University, Stanford, CA, 94305, USA.
- Lei Xing
- Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, USA.
11
Pei Y, E L, Dai C, Han J, Wang H, Liang H. Combining deep learning and intelligent biometry to extract ultrasound standard planes and assess early gestational weeks. Eur Radiol 2023; 33:9390-9400. [PMID: 37392231] [DOI: 10.1007/s00330-023-09808-5]
Abstract
OBJECTIVES To develop and validate a fully automated AI system to extract standard planes and assess early gestational weeks, and to compare the performance of the developed system with that of sonographers. METHODS In this three-center retrospective study, 214 consecutive pregnant women who underwent transvaginal ultrasound between January and December 2018 were selected. Their ultrasound videos were automatically split into 38,941 frames using a dedicated program. First, an optimal deep-learning classifier was selected to extract the standard planes containing key anatomical structures from the ultrasound frames. Second, an optimal segmentation model was selected to outline the gestational sacs. Third, novel biometry was used to measure and select the largest gestational sac in each video and to assess gestational weeks automatically. Finally, an independent test set was used to compare the performance of the system with that of sonographers. Outcomes were analyzed using the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and mean Dice similarity (mDice). RESULTS The standard planes were extracted with an AUC of 0.975, a sensitivity of 0.961, and a specificity of 0.979. The gestational sacs' contours were segmented with an mDice of 0.974 (error less than 2 pixels). The tool's relative error in assessing gestational weeks was 12.44% and 6.92% lower than that of the intermediate and senior sonographers, respectively, and it was faster (0.17 min vs. 16.6 and 12.63 min). CONCLUSIONS This end-to-end tool allows automatic assessment of gestational weeks in early pregnancy and may reduce manual analysis time and measurement errors. CLINICAL RELEVANCE STATEMENT The fully automated tool achieved high accuracy, showing its potential to optimize the increasingly scarce resources of sonographers. Explainable predictions can strengthen sonographers' confidence in assessing gestational weeks and provide a reliable basis for managing early-pregnancy cases. KEY POINTS • The end-to-end pipeline enabled automatic identification of the standard plane containing the gestational sac in an ultrasound video, segmentation of the sac contour, automatic multi-angle measurements, and selection of the sac with the largest mean internal diameter to calculate the early gestational week. • This fully automated tool combining deep learning and intelligent biometry may assist the sonographer in assessing the early gestational week, increasing accuracy and reducing analysis time, thereby reducing observer dependence.
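The paper's biometry itself is not public; as a hedged illustration of the multi-angle measurement idea mentioned in the key points, the sketch below measures chords through a segmentation mask's centroid at several angles and averages them (the synthetic disk "sac", the step size, and the angle count are all hypothetical choices):

```python
import numpy as np

def chord_length(mask, angle):
    """Length (in pixels) of the chord through the mask centroid at `angle`."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    dy, dx = np.sin(angle), np.cos(angle)
    length = 0.0
    for sign in (+1, -1):                      # walk both directions from the centroid
        t = 0.0
        while True:
            y = int(round(cy + sign * t * dy))
            x = int(round(cx + sign * t * dx))
            if not (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]) or not mask[y, x]:
                break
            t += 0.5                           # half-pixel steps
        length += t
    return length

def mean_internal_diameter(mask, n_angles=8):
    """Average chord length over several angles through the centroid."""
    angles = np.linspace(0, np.pi, n_angles, endpoint=False)
    return float(np.mean([chord_length(mask, a) for a in angles]))

# Synthetic "gestational sac": a disk of radius 10 -> diameter ~20 px.
yy, xx = np.ogrid[:64, :64]
disk = (yy - 32) ** 2 + (xx - 32) ** 2 <= 10 ** 2
print(mean_internal_diameter(disk))
```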
Affiliation(s)
- Yuanyuan Pei
- Clinical Data Center, Guangdong Provincial Clinical Research Center for Child Health, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, 510623, China
- Longjiang E
- Clinical Data Center, Guangdong Provincial Clinical Research Center for Child Health, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, 510623, China
- Changping Dai
- Department of Ultrasonography, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, 510623, China
- Jin Han
- Prenatal Diagnosis Center of Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, 510623, China
- Haiyu Wang
- Department of Ultrasonography, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, 510623, China.
- Huiying Liang
- Clinical Data Center, Guangdong Provincial Clinical Research Center for Child Health, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, 510623, China.
- Medical Big Data Research Center, Guangdong Provincial People's Hospital/Guangdong Academy of Medical Sciences, Guangzhou, 510080, China.
12
Gong Z, Li X, Shi M, Cai G, Chen S, Ye Z, Gan X, Yang R, Wang R, Chen Z. Measuring the binary thickness of buccal bone of anterior maxilla in low-resolution cone-beam computed tomography via a bilinear convolutional neural network. Quant Imaging Med Surg 2023; 13:8053-8066. [PMID: 38106266] [PMCID: PMC10722026] [DOI: 10.21037/qims-23-744]
Abstract
Background The thickness of the buccal bone of the anterior maxilla is an important aesthetics-determining factor for dental implants and is classified as thick (≥1 mm) or thin (<1 mm). However, because this micro-scale structure is evaluated on low-resolution cone-beam computed tomography (CBCT), its thickness measurement is error-prone, particularly given large patient volumes and relatively inexperienced primary dentists. Furthermore, deep learning-based analysis of binary buccal bone thickness must contend with substantial real-world variance caused by pixel error, the need to extract fine-grained features, and burdensome annotation. Methods This study built a bilinear convolutional neural network (BCNN) with two convolutional neural network (CNN) backbones and a bilinear pooling module to predict the binary thickness of the buccal bone (thick or thin) of the anterior maxilla in an end-to-end manner. Five-fold cross-validation and model ensembling were adopted at the training and testing stages. The visualization methods Gradient-weighted Class Activation Mapping (Grad-CAM), Guided Grad-CAM, and layer-wise relevance propagation (LRP) were used to reveal the important features on which the model focused. Performance metrics and efficacy were compared between the BCNN, dentists of different levels of clinical experience (dental student, junior dentist, and senior dentist), and the fusion of BCNN and dentists to investigate the clinical feasibility of the BCNN. Results Based on a dataset of 4,000 CBCT images from 1,000 patients (aged 36.15±13.09 years), the BCNN with a visual geometry group (VGG)-16 backbone achieved an accuracy of 0.870 (95% confidence interval [CI]: 0.838-0.902) and an area under the receiver operating characteristic (ROC) curve (AUC) of 0.924 (95% CI: 0.896-0.948). Compared with conventional CNNs, the BCNN precisely located the buccal bone wall rather than irrelevant regions. The BCNN generally outperformed the expert-level dentists, and the dentists' clinical diagnostic performance improved with BCNN assistance. Conclusions Applying the BCNN to the quantitative analysis of binary buccal bone thickness validated the model's ability to extract subtle features and achieved expert-level performance. This work signals the potential of fine-grained image recognition networks for precise quantitative analysis of micro-scale structures.
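Bilinear pooling, the module named in the Methods, combines two backbone feature maps through a location-averaged outer product; a minimal numpy sketch of the standard B-CNN pooling steps (not the authors' implementation, and with arbitrary example shapes):

```python
import numpy as np

def bilinear_pool(fa, fb):
    """fa: (c1, h, w) and fb: (c2, h, w) feature maps from two CNN backbones.
    Returns the (c1*c2,) bilinear descriptor: the outer product averaged over
    spatial locations, followed by signed square-root and L2 normalization."""
    c1, h, w = fa.shape
    c2 = fb.shape[0]
    A = fa.reshape(c1, h * w)
    B = fb.reshape(c2, h * w)
    phi = (A @ B.T) / (h * w)                  # (c1, c2) pooled outer products
    phi = phi.flatten()
    phi = np.sign(phi) * np.sqrt(np.abs(phi))  # signed square-root
    norm = np.linalg.norm(phi)
    return phi / norm if norm > 0 else phi

rng = np.random.default_rng(0)
fa = rng.standard_normal((64, 7, 7))           # hypothetical backbone-1 features
fb = rng.standard_normal((32, 7, 7))           # hypothetical backbone-2 features
desc = bilinear_pool(fa, fb)
print(desc.shape)   # (2048,)
```

The descriptor captures pairwise interactions between the two backbones' channels, which is what makes bilinear models well suited to fine-grained distinctions.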
Affiliation(s)
- Zhuohong Gong
- Hospital of Stomatology, Guanghua School of Stomatology, Guangdong Provincial Key Laboratory of Stomatology, Sun Yat-sen University, Guangzhou, China
- Xiaohui Li
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
- Mengru Shi
- Hospital of Stomatology, Guanghua School of Stomatology, Guangdong Provincial Key Laboratory of Stomatology, Sun Yat-sen University, Guangzhou, China
- Gengbin Cai
- Hospital of Stomatology, Guanghua School of Stomatology, Guangdong Provincial Key Laboratory of Stomatology, Sun Yat-sen University, Guangzhou, China
- Shijie Chen
- Hospital of Stomatology, Guanghua School of Stomatology, Guangdong Provincial Key Laboratory of Stomatology, Sun Yat-sen University, Guangzhou, China
- Zejun Ye
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
- Xuejing Gan
- Hospital of Stomatology, Guanghua School of Stomatology, Guangdong Provincial Key Laboratory of Stomatology, Sun Yat-sen University, Guangzhou, China
- Ruihan Yang
- Hospital of Stomatology, Guanghua School of Stomatology, Guangdong Provincial Key Laboratory of Stomatology, Sun Yat-sen University, Guangzhou, China
- Ruixuan Wang
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
- Zetao Chen
- Hospital of Stomatology, Guanghua School of Stomatology, Guangdong Provincial Key Laboratory of Stomatology, Sun Yat-sen University, Guangzhou, China
13
Li J, Zhou Y, Ma J, Zhang Q, Shao J, Liang S, Yu Y, Li W, Wang C. The long-term health outcomes, pathophysiological mechanisms and multidisciplinary management of long COVID. Signal Transduct Target Ther 2023; 8:416. [PMID: 37907497] [PMCID: PMC10618229] [DOI: 10.1038/s41392-023-01640-z]
Abstract
There have been hundreds of millions of cases of coronavirus disease 2019 (COVID-19), which is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). With the growing population of recovered patients, it is crucial to understand the long-term consequences of the disease and management strategies. Although COVID-19 was initially considered an acute respiratory illness, recent evidence suggests that manifestations including but not limited to those of the cardiovascular, respiratory, neuropsychiatric, gastrointestinal, reproductive, and musculoskeletal systems may persist long after the acute phase. These persistent manifestations, also referred to as long COVID, could impact all patients with COVID-19 across the full spectrum of illness severity. Herein, we comprehensively review the current literature on long COVID, highlighting its epidemiological understanding, the impact of vaccinations, organ-specific sequelae, pathophysiological mechanisms, and multidisciplinary management strategies. In addition, the impact of psychological and psychosomatic factors is also underscored. Despite these crucial findings on long COVID, the current diagnostic and therapeutic strategies based on previous experience and pilot studies remain inadequate, and well-designed clinical trials should be prioritized to validate existing hypotheses. Thus, we propose the primary challenges concerning biological knowledge gaps and efficient remedies as well as discuss the corresponding recommendations.
Affiliation(s)
- Jingwei Li
- Department of Pulmonary and Critical Care Medicine, State Key Laboratory of Respiratory Health and Multimorbidity, Med-X Center for Manufacturing, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University, Chengdu, China
- Yun Zhou
- Department of Pulmonary and Critical Care Medicine, State Key Laboratory of Respiratory Health and Multimorbidity, Med-X Center for Manufacturing, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University, Chengdu, China
- Jiechao Ma
- AI Lab, Deepwise Healthcare, Beijing, China
- Qin Zhang
- Department of Pulmonary and Critical Care Medicine, State Key Laboratory of Respiratory Health and Multimorbidity, Med-X Center for Manufacturing, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University, Chengdu, China
- Department of Postgraduate Student, West China Hospital, West China School of Medicine, Sichuan University, Chengdu, China
- Jun Shao
- Department of Pulmonary and Critical Care Medicine, State Key Laboratory of Respiratory Health and Multimorbidity, Med-X Center for Manufacturing, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University, Chengdu, China
- Shufan Liang
- Department of Pulmonary and Critical Care Medicine, State Key Laboratory of Respiratory Health and Multimorbidity, Med-X Center for Manufacturing, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University, Chengdu, China
- Yizhou Yu
- Department of Computer Science, The University of Hong Kong, Hong Kong, China.
- Weimin Li
- Department of Pulmonary and Critical Care Medicine, State Key Laboratory of Respiratory Health and Multimorbidity, Med-X Center for Manufacturing, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University, Chengdu, China.
- Chengdi Wang
- Department of Pulmonary and Critical Care Medicine, State Key Laboratory of Respiratory Health and Multimorbidity, Med-X Center for Manufacturing, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University, Chengdu, China.
14
Schaudt D, von Schwerin R, Hafner A, Riedel P, Reichert M, von Schwerin M, Beer M, Kloth C. Augmentation strategies for an imbalanced learning problem on a novel COVID-19 severity dataset. Sci Rep 2023; 13:18299. [PMID: 37880333] [PMCID: PMC10600145] [DOI: 10.1038/s41598-023-45532-2]
Abstract
Since the beginning of the COVID-19 pandemic, many different machine learning models have been developed to detect and verify COVID-19 pneumonia based on chest X-ray images. Although promising, binary models have only limited implications for medical treatment, whereas the prediction of disease severity suggests more suitable and specific treatment options. In this study, we publish severity scores for the 2358 COVID-19 positive images in the COVIDx8B dataset, creating one of the largest collections of publicly available COVID-19 severity data. Furthermore, we train and evaluate deep learning models on the newly created dataset to provide a first benchmark for the severity classification task. One of the main challenges of this dataset is the skewed class distribution, resulting in undesirable model performance for the most severe cases. We therefore propose and examine different augmentation strategies, specifically targeting majority and minority classes. Our augmentation strategies show significant improvements in precision and recall values for the rare and most severe cases. While the models might not yet fulfill medical requirements, they serve as an appropriate starting point for further research with the proposed dataset to optimize clinical resource allocation and treatment.
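One simple family of minority-targeted augmentation strategies oversamples rare classes with label-preserving transforms; a hedged sketch using horizontal flips and hypothetical class counts (the paper's actual strategies are more elaborate):

```python
import numpy as np

def balance_with_flips(images, labels, rng):
    """Oversample minority classes to the majority count using horizontal flips."""
    labels = np.asarray(labels)
    counts = {c: int((labels == c).sum()) for c in np.unique(labels)}
    target = max(counts.values())
    out_imgs, out_labels = list(images), list(labels)
    for c, n in counts.items():
        idx = np.flatnonzero(labels == c)
        for _ in range(target - n):                  # top up the rare class only
            img = images[rng.choice(idx)]
            out_imgs.append(img[:, ::-1].copy())     # horizontal flip (label-preserving)
            out_labels.append(c)
    return np.stack(out_imgs), np.array(out_labels)

rng = np.random.default_rng(0)
imgs = rng.standard_normal((10, 32, 32))             # stand-in chest X-ray crops
labs = np.array([0] * 8 + [1] * 2)                   # skewed: 8 vs. 2
bal_imgs, bal_labs = balance_with_flips(imgs, labs, rng)
print(np.bincount(bal_labs))   # [8 8]
```

Targeting majority classes instead (e.g., strong augmentation without oversampling) follows the same pattern with the roles of the classes reversed.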
Affiliation(s)
- Daniel Schaudt
- Department of Computer Science, Ulm University of Applied Science, Albert-Einstein-Allee 55, 89081, Ulm, Baden-Wurttemberg, Germany.
- Reinhold von Schwerin
- Department of Computer Science, Ulm University of Applied Science, Albert-Einstein-Allee 55, 89081, Ulm, Baden-Wurttemberg, Germany
- Alexander Hafner
- Department of Computer Science, Ulm University of Applied Science, Albert-Einstein-Allee 55, 89081, Ulm, Baden-Wurttemberg, Germany
- Pascal Riedel
- Department of Computer Science, Ulm University of Applied Science, Albert-Einstein-Allee 55, 89081, Ulm, Baden-Wurttemberg, Germany
- Manfred Reichert
- Institute of Databases and Information Systems, Ulm University, James-Franck-Ring, 89081, Ulm, Baden-Wurttemberg, Germany
- Marianne von Schwerin
- Department of Computer Science, Ulm University of Applied Science, Albert-Einstein-Allee 55, 89081, Ulm, Baden-Wurttemberg, Germany
- Meinrad Beer
- Department of Radiology, University Hospital of Ulm, Albert-Einstein-Allee 23, 89081, Ulm, Baden-Wurttemberg, Germany
- Christopher Kloth
- Department of Radiology, University Hospital of Ulm, Albert-Einstein-Allee 23, 89081, Ulm, Baden-Wurttemberg, Germany
15
Wang G, Liu X, Ying Z, Yang G, Chen Z, Liu Z, Zhang M, Yan H, Lu Y, Gao Y, Xue K, Li X, Chen Y. Optimized glycemic control of type 2 diabetes with reinforcement learning: a proof-of-concept trial. Nat Med 2023; 29:2633-2642. [PMID: 37710000] [PMCID: PMC10579102] [DOI: 10.1038/s41591-023-02552-9]
Abstract
The personalized titration and optimization of insulin regimens for treatment of type 2 diabetes (T2D) are resource-demanding healthcare tasks. Here we propose a model-based reinforcement learning (RL) framework (called RL-DITR), which learns the optimal insulin regimen by analyzing glycemic state rewards through patient-model interactions. When evaluated during the development phase for managing hospitalized patients with T2D, RL-DITR achieved superior insulin titration optimization (mean absolute error (MAE) of 1.10 ± 0.03 U) compared to other deep learning models and standard clinical methods. We performed a stepwise clinical validation of the artificial intelligence system from simulation to deployment, demonstrating better performance in glycemic control in inpatients compared to junior and intermediate-level physicians through quantitative (MAE of 1.18 ± 0.09 U) and qualitative metrics from a blinded review. Additionally, we conducted a single-arm, patient-blinded, proof-of-concept feasibility trial in 16 patients with T2D. The primary outcome was the difference in mean daily capillary blood glucose during the trial, which decreased from 11.1 (±3.6) to 8.6 (±2.4) mmol/L (P < 0.01), meeting the pre-specified endpoint. No episodes of severe hypoglycemia or hyperglycemia with ketosis occurred. These preliminary results warrant further investigation in larger, more diverse clinical studies. ClinicalTrials.gov registration: NCT05409391.
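RL-DITR is a model-based deep RL system trained on patient trajectories; the sketch below is only a toy tabular Q-learning loop on a deterministic one-dimensional "glucose" state, with entirely hypothetical dynamics, to illustrate how reward signals can drive a titration policy:

```python
import numpy as np

N_STATES, TARGET = 11, 5           # discretised "glucose" levels 0..10, target level 5
ACTIONS = (-1, 0, +1)              # dose change; in this toy, raising the dose lowers glucose

def step(state, a_idx):
    """Deterministic toy dynamics: next state, and reward = closeness to target."""
    nxt = int(np.clip(state - ACTIONS[a_idx], 0, N_STATES - 1))
    return nxt, -abs(nxt - TARGET)

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, len(ACTIONS)))
for _ in range(3000):                              # training episodes
    s = int(rng.integers(N_STATES))
    for _ in range(20):                            # steps per episode
        # Epsilon-greedy action selection.
        a = int(rng.integers(3)) if rng.random() < 0.2 else int(Q[s].argmax())
        s2, r = step(s, a)
        Q[s, a] += 0.5 * (r + 0.9 * Q[s2].max() - Q[s, a])  # Q-learning update
        s = s2

policy = Q.argmax(axis=1)
print(ACTIONS[policy[8]], ACTIONS[policy[2]])      # raise dose when high, lower when low
```

The real system replaces the table with a deep network, the toy dynamics with a learned patient model, and the scalar reward with clinically designed glycemic rewards.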
Affiliation(s)
- Guangyu Wang
- Ministry of Education Key Laboratory of Metabolism and Molecular Medicine, Department of Endocrinology and Metabolism, Zhongshan Hospital, Fudan University, Shanghai, China.
- State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China.
- Xiaohong Liu
- State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China
- Zhen Ying
- Ministry of Education Key Laboratory of Metabolism and Molecular Medicine, Department of Endocrinology and Metabolism, Zhongshan Hospital, Fudan University, Shanghai, China
- Guoxing Yang
- State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China
- Zhiwei Chen
- Big Data and Artificial Intelligence Center, Zhongshan Hospital, Fudan University, Shanghai, China
- Zhiwen Liu
- Department of Endocrinology, XuHui Central Hospital of Shanghai, Shanghai, China
- Min Zhang
- Department of Endocrinology and Metabolism, Qingpu Branch of Zhongshan Hospital affiliated to Fudan University, Shanghai, China
- Hongmei Yan
- Ministry of Education Key Laboratory of Metabolism and Molecular Medicine, Department of Endocrinology and Metabolism, Zhongshan Hospital, Fudan University, Shanghai, China
- Yuxing Lu
- Big Data and Biomedical AI Laboratory, College of Future Technology, Peking University, Beijing, China
- Yuanxu Gao
- Big Data and Biomedical AI Laboratory, College of Future Technology, Peking University, Beijing, China
- Kanmin Xue
- Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- Xiaoying Li
- Ministry of Education Key Laboratory of Metabolism and Molecular Medicine, Department of Endocrinology and Metabolism, Zhongshan Hospital, Fudan University, Shanghai, China.
- Shanghai Key Laboratory of Metabolic Remodeling and Health, Institute of Metabolism and Integrative Biology, Fudan University, Shanghai, China.
- Ying Chen
- Ministry of Education Key Laboratory of Metabolism and Molecular Medicine, Department of Endocrinology and Metabolism, Zhongshan Hospital, Fudan University, Shanghai, China.
16
Pei Y, Wang G, Cao H, Jiang S, Wang D, Wang H, Wang H, Yu H. A deep-learning pipeline to diagnose pediatric intussusception and assess severity during ultrasound scanning: a multicenter retrospective-prospective study. NPJ Digit Med 2023; 6:182. [PMID: 37775624] [PMCID: PMC10541898] [DOI: 10.1038/s41746-023-00930-8]
Abstract
Ileocolic intussusception is among the most common acute abdominal emergencies in children and is first diagnosed urgently using ultrasound. Manual diagnosis requires extensive experience and skill, and identifying surgical indications when assessing disease severity is even more challenging. We aimed to develop a real-time, lesion-visualizing deep-learning pipeline to address this problem. This multicenter retrospective-prospective study used 14,085 images from 8736 consecutive patients (median age, eight months) with ileocolic intussusception who underwent ultrasound at six hospitals to train, validate, and test the deep-learning pipeline. The algorithm was then validated on an internal image test set and an external video dataset. Furthermore, the performance of junior, intermediate, and senior sonographers, and of junior sonographers with AI assistance, was prospectively compared in 242 volunteers using the DeLong test. The tool recognized 1,086 images with three ileocolic intussusception signs with an average area under the receiver operating characteristic curve (average-AUC) of 0.972. It classified 184 ultrasound videos from 184 patients as no intussusception, nonsurgical intussusception, or surgical intussusception with an average-AUC of 0.956. In the prospective pilot study of 242 volunteers, junior sonographers' performance improved significantly with AI assistance (average-AUC: 0.966 vs. 0.857, P < 0.001; median scanning time: 9.46 min vs. 3.66 min, P < 0.001) and became comparable to that of senior sonographers (average-AUC: 0.966 vs. 0.973, P = 0.600). Thus, this deep-learning pipeline, which localizes lesions in real time and is interpretable during ultrasound scanning, could assist sonographers in improving the accuracy and efficiency of diagnosing intussusception and identifying surgical indications.
Affiliation(s)
- Yuanyuan Pei
- Provincial Key Laboratory of Research in Structure Birth Defect Disease and Department of Pediatric Surgery, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangdong Provincial Clinical Research Center for Child Health, Guangzhou, China
- Guijuan Wang
- School of Computer Science, South China Normal University, Guangzhou, China
- Haiwei Cao
- Ultrasonic Department, Kaifeng Children's Hospital, Kaifeng, China
- Shuanglan Jiang
- Ultrasonic Department, Dongguan Children's Hospital, Dongguan, China
- Dan Wang
- Ultrasonic Department, Children's Hospital Affiliated to Zhengzhou University, Zhengzhou, China
- Haiyu Wang
- Department of Ultrasonography, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China
- Hongying Wang
- Department of Ultrasonography, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China
- Hongkui Yu
- Department of Ultrasonography, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China
- Department of Ultrasonography, Shenzhen Baoan Women's and Children's Hospital, Jinan University, Shenzhen, China
17
Guo S, Zhang J, Li H, Zhang J, Cheng CK. A multi-branch network to detect post-operative complications following hip arthroplasty on X-ray images. Front Bioeng Biotechnol 2023; 11:1239637. [PMID: 37840662] [PMCID: PMC10569301] [DOI: 10.3389/fbioe.2023.1239637]
Abstract
Background: Postoperative complications following total hip arthroplasty (THA) often require revision surgery. X-rays are usually used to detect such complications, but manually identifying the location of the problem and making an accurate assessment can be subjective and time-consuming. Therefore, in this study, we propose a multi-branch network to automatically detect postoperative complications on X-ray images. Methods: We developed a multi-branch network using ResNet as the backbone and two additional branches with a global feature stream and a channel feature stream for extracting features of interest. Additionally, inspired by our domain knowledge, we designed a multi-coefficient class-specific residual attention block to learn the correlations between different complications to improve the performance of the system. Results: Our proposed method achieved state-of-the-art (SOTA) performance in detecting multiple complications, with mean average precision (mAP) and F1 scores of 0.346 and 0.429, respectively. The network also showed excellent performance at identifying aseptic loosening, with recall and precision rates of 0.929 and 0.897, respectively. Ablation experiments were conducted on detecting multiple complications and single complications, as well as internal and external datasets, demonstrating the effectiveness of our proposed modules. Conclusion: Our deep learning method provides an accurate end-to-end solution for detecting postoperative complications following THA.
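The recall, precision, and F1 figures quoted above derive from true/false positive and negative counts. A minimal sketch with made-up counts (not the study's data), chosen so the output lands near the reported aseptic-loosening numbers:

```python
# Hedged illustration (not the authors' code): precision, recall, and F1
# from confusion-matrix counts, as used to report detection performance.
def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical example: 26 loosening cases found out of 28, with 3 false alarms.
p, r, f = precision_recall_f1(tp=26, fp=3, fn=2)
print(round(p, 3), round(r, 3), round(f, 3))
```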
Affiliation(s)
- Sijia Guo
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Engineering Research Center for Digital Medicine of the Ministry of Education, Shanghai Jiao Tong University, Shanghai, China
- Jiping Zhang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Engineering Research Center for Digital Medicine of the Ministry of Education, Shanghai Jiao Tong University, Shanghai, China
- Huiwu Li
- Department of Orthopaedics, Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Jingwei Zhang
- Department of Orthopaedics, Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Cheng-Kung Cheng
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Engineering Research Center for Digital Medicine of the Ministry of Education, Shanghai Jiao Tong University, Shanghai, China
18
Chen Y, Wan Y, Pan F. Enhancing Multi-disease Diagnosis of Chest X-rays with Advanced Deep-learning Networks in Real-world Data. J Digit Imaging 2023; 36:1332-1347. [PMID: 36988837] [PMCID: PMC10054207] [DOI: 10.1007/s10278-023-00801-4]
Abstract
Current artificial intelligence (AI) models remain insufficient for multi-disease diagnosis on real-world data, which typically follow a long-tail distribution. To tackle this issue, a long-tail public dataset, "ChestX-ray14", comprising 14 disease labels, was randomly divided into training, validation, and test sets in a 0.7/0.1/0.2 ratio. Two pretrained state-of-the-art networks, EfficientNet-b5 and CoAtNet-0-rw, were chosen as backbones. After the fully connected layer, a final layer of 14 sigmoid activation units was added to output each disease's diagnosis. To achieve better adaptive learning, a novel loss (Lours) was designed that combines reweighting with a focus on tail samples. For comparison, a pretrained ResNet50 network with weighted binary cross-entropy loss (LWBCE) was used as the baseline, having shown the best performance in a previous study. The overall and per-label areas under the receiver operating characteristic curve (AUROC) were evaluated and compared across models. Group-score-weighted class activation mapping (Group-CAM) was applied for visual interpretation. The pretrained CoAtNet-0-rw + Lours achieved the best overall AUROC of 0.842, significantly higher than ResNet50 + LWBCE (AUROC: 0.811, p = 0.037). Group-CAM showed that the model attended appropriately to lesions for most disease labels (e.g., atelectasis, edema, effusion) but misdirected its attention for others, such as pneumothorax; mislabeling within the dataset was also found. Overall, this study presents an advanced AI diagnostic model that achieves a significant improvement in multi-disease diagnosis of chest X-rays, particularly on real-world data with challenging long-tail distributions.
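The baseline loss LWBCE mentioned above is a generic weighted binary cross-entropy over the 14 sigmoid outputs; up-weighting positive examples of rare labels is one common way to counter the long tail. A minimal sketch (the per-label weights here are illustrative, not the paper's):

```python
import math

# Sketch of a weighted binary cross-entropy (the LWBCE baseline idea) for a
# multi-label sigmoid output. Weights would normally come from class
# frequencies; the values used below are illustrative only.
def weighted_bce(y_true, y_pred, pos_weight):
    """Mean weighted BCE over labels; up-weights rare positive labels."""
    eps = 1e-7
    total = 0.0
    for y, p, w in zip(y_true, y_pred, pos_weight):
        p = min(max(p, eps), 1 - eps)  # clamp to avoid log(0)
        total += -(w * y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

# One sample, 3 of 14 labels shown: the rare positive label is weighted 5x.
print(weighted_bce([1, 0, 0], [0.9, 0.2, 0.1], [5.0, 1.0, 1.0]))
```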
Affiliation(s)
- Yiliang Wan
- Neusoft Medical Systems Co., Ltd, Shenyang, China
- Feng Pan
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
19
Sukegawa S, Ono S, Tanaka F, Inoue Y, Hara T, Yoshii K, Nakano K, Takabatake K, Kawai H, Katsumitsu S, Nakai F, Nakai Y, Miyazaki R, Murakami S, Nagatsuka H, Miyake M. Effectiveness of deep learning classifiers in histopathological diagnosis of oral squamous cell carcinoma by pathologists. Sci Rep 2023; 13:11676. [PMID: 37468501] [DOI: 10.1038/s41598-023-38343-y]
Abstract
This study aimed to build histological classifiers from histopathological images of oral squamous cell carcinoma using convolutional neural network (CNN) deep learning models and to show how the results can improve diagnosis. Histopathological samples of oral squamous cell carcinoma were prepared by oral pathologists. Images were divided into tiles on a virtual slide, and labels (squamous cell carcinoma, normal, and others) were applied. VGG16 and ResNet50 were trained with the optimizers stochastic gradient descent with momentum and sharpness-aware minimization (SAM), with and without a learning rate scheduler. The conditions for good CNN performance were identified by examining performance metrics. We used the area under the receiver operating characteristic curve (ROC-AUC) to statistically evaluate the improvement in the diagnostic performance of six oral pathologists using the results from the selected CNN model for assisted diagnosis. VGG16 with SAM showed the best performance, with accuracy = 0.8622 and AUC = 0.9602. The pathologists' diagnostic performance improved to a statistically significant degree when the deep learning model's results were used as supplementary diagnoses (p = 0.031). By taking the classifiers' outputs into account, pathologists can improve their diagnostic accuracy. This study contributes to the application of highly reliable deep learning models in oral pathological diagnosis.
Affiliation(s)
- Shintaro Sukegawa
- Department of Oral and Maxillofacial Surgery, Kagawa University Faculty of Medicine, 1750-1 Ikenobe, Miki, Kagawa, 761-0793, Japan
- Department of Oral and Maxillofacial Surgery, Kagawa Prefectural Central Hospital, 1-2-1, Asahi-Machi, Takamatsu, Kagawa, 760-8557, Japan
- Department of Oral Pathology and Medicine, Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama, 700-8558, Japan
- Sawako Ono
- Department of Pathology, Kagawa Prefectural Central Hospital, 1-2-1, Asahi-Machi, Takamatsu, Kagawa, 760-8557, Japan
- Futa Tanaka
- Department of Electrical, Electronic and Computer Engineering, Faculty of Engineering, Gifu University, 1-1 Yanagido, Gifu, Gifu, 501-1193, Japan
- Yuta Inoue
- Department of Electrical, Electronic and Computer Engineering, Faculty of Engineering, Gifu University, 1-1 Yanagido, Gifu, Gifu, 501-1193, Japan
- Takeshi Hara
- Department of Electrical, Electronic and Computer Engineering, Faculty of Engineering, Gifu University, 1-1 Yanagido, Gifu, Gifu, 501-1193, Japan
- Center for Healthcare Information Technology, Tokai National Higher Education and Research System, 1-1 Yanagido, Gifu, Gifu, 501-1193, Japan
- Kazumasa Yoshii
- Department of Electrical, Electronic and Computer Engineering, Faculty of Engineering, Gifu University, 1-1 Yanagido, Gifu, Gifu, 501-1193, Japan
- Keisuke Nakano
- Department of Oral Pathology and Medicine, Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama, 700-8558, Japan
- Kiyofumi Takabatake
- Department of Oral Pathology and Medicine, Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama, 700-8558, Japan
- Hotaka Kawai
- Department of Oral Pathology and Medicine, Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama, 700-8558, Japan
- Shimada Katsumitsu
- Department of Oral Pathology, Graduate School of Oral Medicine, Matsumoto Dental University, 1780 Hirooka-Gobara, Shiojiri, Nagano, 399-0781, Japan
- Fumi Nakai
- Department of Oral and Maxillofacial Surgery, Kagawa University Faculty of Medicine, 1750-1 Ikenobe, Miki, Kagawa, 761-0793, Japan
- Yasuhiro Nakai
- Department of Oral and Maxillofacial Surgery, Kagawa University Faculty of Medicine, 1750-1 Ikenobe, Miki, Kagawa, 761-0793, Japan
- Ryo Miyazaki
- Department of Oral and Maxillofacial Surgery, Kagawa University Faculty of Medicine, 1750-1 Ikenobe, Miki, Kagawa, 761-0793, Japan
- Satoshi Murakami
- Department of Oral Pathology, Graduate School of Oral Medicine, Matsumoto Dental University, 1780 Hirooka-Gobara, Shiojiri, Nagano, 399-0781, Japan
- Hitoshi Nagatsuka
- Department of Oral Pathology and Medicine, Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama, 700-8558, Japan
- Minoru Miyake
- Department of Oral and Maxillofacial Surgery, Kagawa University Faculty of Medicine, 1750-1 Ikenobe, Miki, Kagawa, 761-0793, Japan
20
Shin HJ, Kim MH, Son NH, Han K, Kim EK, Kim YC, Park YS, Lee EH, Kyong T. Clinical Implication and Prognostic Value of Artificial-Intelligence-Based Results of Chest Radiographs for Assessing Clinical Outcomes of COVID-19 Patients. Diagnostics (Basel) 2023; 13:2090. [PMID: 37370985] [DOI: 10.3390/diagnostics13122090]
Abstract
This study aimed to investigate the clinical implications and prognostic value of artificial intelligence (AI)-based chest radiograph (CXR) results in coronavirus disease 2019 (COVID-19) patients. Patients admitted for COVID-19 from September 2021 to March 2022 were retrospectively included. Commercial AI-based software was used to score CXRs for consolidation and pleural effusion. Clinical data, including laboratory results, were analyzed for possible prognostic factors. Total O2 supply period, the last SpO2 result, and deterioration were evaluated as prognostic indicators of treatment outcome. Generalized linear mixed models and regression tests were used to examine the prognostic value of the CXR results. Among 228 patients (mean age, 59.9 ± 18.8 years), consolidation scores were significantly associated with changes in erythrocyte sedimentation rate and C-reactive protein, and initial consolidation scores were associated with the last SpO2 result (estimate -0.018, p = 0.024). All consolidation scores during admission were significantly associated with the total O2 supply period and the last SpO2 result. The early rate of change in consolidation score was associated with deterioration (odds ratio 1.017, 95% confidence interval 1.005-1.03). In conclusion, AI-based CXR consolidation scores have potential prognostic value for predicting treatment outcomes in COVID-19 patients.
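The reported odds ratio of 1.017 per unit change in consolidation score corresponds to exp(beta) for the underlying logistic-regression coefficient. A small illustrative sketch of that relationship (not the study's analysis):

```python
import math

# Illustration (not the study's code): an odds ratio is exp(beta * delta),
# the multiplicative change in odds for a `delta` increase in the predictor.
def odds_ratio(beta, delta=1.0):
    """Odds ratio for a `delta`-unit increase given logistic coefficient `beta`."""
    return math.exp(beta * delta)

beta = math.log(1.017)                 # coefficient implied by OR = 1.017
print(round(odds_ratio(beta), 3))      # per 1-point change in score
print(round(odds_ratio(beta, 10), 3))  # compounded over a 10-point change
```

Note that a small per-unit odds ratio compounds over larger score changes, which is why even 1.017 can be clinically meaningful.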
Affiliation(s)
- Hyun Joo Shin
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si 16995, Republic of Korea
- Center for Digital Health, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si 16995, Republic of Korea
- Min Hyung Kim
- Division of Infectious Diseases, Department of Internal Medicine, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si 16995, Republic of Korea
- Nak-Hoon Son
- Department of Statistics, Keimyung University, Daegu 42601, Republic of Korea
- Kyunghwa Han
- Department of Radiology, Severance Hospital, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, Seoul 03722, Republic of Korea
- Eun-Kyung Kim
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si 16995, Republic of Korea
- Yong Chan Kim
- Division of Infectious Diseases, Department of Internal Medicine, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si 16995, Republic of Korea
- Yoon Soo Park
- Division of Infectious Diseases, Department of Internal Medicine, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si 16995, Republic of Korea
- Eun Hye Lee
- Division of Pulmonology, Allergy and Critical Care Medicine, Department of Internal Medicine, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si 16995, Republic of Korea
- Taeyoung Kyong
- Department of Hospital Medicine, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si 16995, Republic of Korea
21
Zhou HY, Yu Y, Wang C, Zhang S, Gao Y, Pan J, Shao J, Lu G, Zhang K, Li W. A transformer-based representation-learning model with unified processing of multimodal input for clinical diagnostics. Nat Biomed Eng 2023. [PMID: 37308585] [DOI: 10.1038/s41551-023-01045-x]
Abstract
During the diagnostic process, clinicians leverage multimodal information, such as the chief complaint, medical images and laboratory test results. Deep-learning models for aiding diagnosis have yet to meet this requirement of leveraging multimodal information. Here we report a transformer-based representation-learning model as a clinical diagnostic aid that processes multimodal input in a unified manner. Rather than learning modality-specific features, the model leverages embedding layers to convert images and unstructured and structured text into visual tokens and text tokens, and uses bidirectional blocks with intramodal and intermodal attention to learn holistic representations of radiographs, the unstructured chief complaint and clinical history, and structured clinical information such as laboratory test results and patient demographic information. The unified model outperformed an image-only model and non-unified multimodal diagnosis models in the identification of pulmonary disease (by 12% and 9%, respectively) and in the prediction of adverse clinical outcomes in patients with COVID-19 (by 29% and 7%, respectively). Unified multimodal transformer-based models may help streamline the triaging of patients and facilitate the clinical decision-making process.
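The unified tokenization idea described above can be sketched in toy form: image patches and text token ids are projected into one shared embedding space and concatenated into a single sequence for joint attention. The patch size, dimensions, and random projections below are illustrative stand-ins for learned layers, not the paper's architecture:

```python
import numpy as np

# Toy sketch (not the paper's model): map image patches and text token ids
# into one shared embedding space and concatenate them, so a single
# transformer could apply intramodal and intermodal attention over both.
rng = np.random.default_rng(0)
d_model = 32

def image_tokens(image, patch=8):
    """Split an HxW image into flattened patches and project to d_model."""
    h, w = image.shape
    patches = [image[i:i + patch, j:j + patch].ravel()
               for i in range(0, h, patch) for j in range(0, w, patch)]
    proj = rng.normal(size=(patch * patch, d_model))  # stand-in for a learned layer
    return np.stack(patches) @ proj

def text_tokens(ids, vocab=100):
    """Look up text token ids in a stand-in embedding table."""
    table = rng.normal(size=(vocab, d_model))
    return table[ids]

img = rng.normal(size=(16, 16))  # e.g. a radiograph crop
txt = np.array([5, 17, 42])      # e.g. a tokenized chief complaint
sequence = np.concatenate([image_tokens(img), text_tokens(txt)])
print(sequence.shape)  # 4 image tokens + 3 text tokens, each of size d_model
```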
Affiliation(s)
- Hong-Yu Zhou
- Department of Computer Science, The University of Hong Kong, Pokfulam, China
- Yizhou Yu
- Department of Computer Science, The University of Hong Kong, Pokfulam, China
- Chengdi Wang
- Department of Pulmonary and Critical Care Medicine, Med-X Center for Manufacturing, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University, Chengdu, China
- Shu Zhang
- AI Lab, Deepwise Healthcare, Beijing, China
- Jia Pan
- Department of Computer Science, The University of Hong Kong, Pokfulam, China
- Jun Shao
- Department of Pulmonary and Critical Care Medicine, Med-X Center for Manufacturing, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University, Chengdu, China
- Guangming Lu
- Department of Medical Imaging, Jinling Hospital, Nanjing University School of Medicine, Nanjing, China
- Kang Zhang
- Zhuhai International Eye Center and Provincial Key Laboratory of Tumor Interventional Diagnosis and Treatment, Zhuhai People's Hospital and the First Affiliated Hospital of Faculty of Medicine, Macau University of Science and Technology and University Hospital, Guangdong, China
- Department of Big Data and Biomedical Artificial Intelligence, National Biomedical Imaging Center, College of Future Technology, Peking University, Beijing, China
- Clinical Translational Research Center, West China Hospital, Sichuan University, Chengdu, China
- Weimin Li
- Department of Pulmonary and Critical Care Medicine, Med-X Center for Manufacturing, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University, Chengdu, China
22
Schaudt D, von Schwerin R, Hafner A, Riedel P, Späte C, Reichert M, Hinteregger A, Beer M, Kloth C. Leveraging human expert image annotations to improve pneumonia differentiation through human knowledge distillation. Sci Rep 2023; 13:9203. [PMID: 37280219] [DOI: 10.1038/s41598-023-36148-7]
Abstract
In medical imaging, deep learning models can be a critical tool for shortening time-to-diagnosis and supporting specialized medical staff in clinical decision making. Successfully training deep learning models usually requires large amounts of quality data, which are often unavailable for many medical imaging tasks. In this work we train a deep learning model on university hospital chest X-ray data containing 1082 images. The data were reviewed, differentiated into four causes of pneumonia, and annotated by an expert radiologist. To successfully train a model on this small amount of complex image data, we propose a special knowledge distillation process, which we call Human Knowledge Distillation. This process enables deep learning models to utilize annotated regions in the images during training. This form of guidance by a human expert improves model convergence and performance. We evaluate the proposed process on our study data for multiple model types, all of which show improved results. The best model of this study, called PneuKnowNet, improves overall accuracy by +2.3 percentage points over a baseline model and also yields more meaningful decision regions. Exploiting this implicit quality-quantity trade-off could be a promising approach for many data-scarce domains beyond medical imaging.
Affiliation(s)
- Daniel Schaudt
- Department of Computer Science, Ulm University of Applied Science, Albert-Einstein-Allee 55, 89081, Ulm, Baden-Württemberg, Germany
- Reinhold von Schwerin
- Department of Computer Science, Ulm University of Applied Science, Albert-Einstein-Allee 55, 89081, Ulm, Baden-Württemberg, Germany
- Alexander Hafner
- Department of Computer Science, Ulm University of Applied Science, Albert-Einstein-Allee 55, 89081, Ulm, Baden-Württemberg, Germany
- Pascal Riedel
- Department of Computer Science, Ulm University of Applied Science, Albert-Einstein-Allee 55, 89081, Ulm, Baden-Württemberg, Germany
- Christian Späte
- Department of Computer Science, Ulm University of Applied Science, Albert-Einstein-Allee 55, 89081, Ulm, Baden-Württemberg, Germany
- Manfred Reichert
- Institute of Databases and Information Systems, Ulm University, James-Franck-Ring, 89081, Ulm, Baden-Württemberg, Germany
- Andreas Hinteregger
- Department of Radiology, University Hospital of Ulm, Albert-Einstein-Allee 23, 89081, Ulm, Baden-Württemberg, Germany
- Meinrad Beer
- Department of Radiology, University Hospital of Ulm, Albert-Einstein-Allee 23, 89081, Ulm, Baden-Württemberg, Germany
- Christopher Kloth
- Department of Radiology, University Hospital of Ulm, Albert-Einstein-Allee 23, 89081, Ulm, Baden-Württemberg, Germany
23
Wu J, Xia Y, Wang X, Wei Y, Liu A, Innanje A, Zheng M, Chen L, Shi J, Wang L, Zhan Y, Zhou XS, Xue Z, Shi F, Shen D. uRP: An integrated research platform for one-stop analysis of medical images. Front Radiol 2023; 3:1153784. [PMID: 37492386] [PMCID: PMC10365282] [DOI: 10.3389/fradi.2023.1153784]
Abstract
Introduction: Medical image analysis is of tremendous importance for clinical diagnosis, treatment planning, and prognosis assessment. However, the image analysis process usually involves multiple modality-specific software packages and relies on rigorous manual operations, which is time-consuming and poorly reproducible. Methods: We present an integrated platform, the uAI Research Portal (uRP), for one-stop analysis of multimodal images such as CT, MRI, and PET in clinical research applications. The uRP adopts a modularized architecture to be multifunctional, extensible, and customizable. Results and Discussion: The uRP offers three advantages: (1) it spans a wealth of image-processing algorithms, including semi-automatic delineation, automatic segmentation, registration, classification, quantitative analysis, and image visualization, realizing a one-stop analytic pipeline; (2) it integrates a variety of functional modules that can be directly applied, combined, or customized for specific application domains, such as brain, pneumonia, and knee-joint analyses; and (3) it enables full-stack analysis of a single disease, covering diagnosis, treatment planning, and prognosis assessment, as well as full-spectrum coverage of multiple disease applications. With the continuous development and inclusion of advanced algorithms, we expect this platform to greatly simplify the clinical research process and promote more and better discoveries.
Affiliation(s)
- Jiaojiao Wu
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Yuwei Xia
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Xuechun Wang
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Ying Wei
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Aie Liu
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Arun Innanje
- Department of Research and Development, United Imaging Intelligence Co., Ltd., Cambridge, MA, United States
- Meng Zheng
- Department of Research and Development, United Imaging Intelligence Co., Ltd., Cambridge, MA, United States
- Lei Chen
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Jing Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Liye Wang
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Yiqiang Zhan
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Xiang Sean Zhou
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Zhong Xue
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Dinggang Shen
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Shanghai Clinical Research and Trial Center, Shanghai, China
24
Liu D, Lu S, Zhang L, Liu Y. Anomaly Detection in Chest X-rays Based on Dual-Attention Mechanism and Multi-Scale Feature Fusion. Symmetry (Basel) 2023; 15:668. [DOI: 10.3390/sym15030668]
Abstract
Efficient, automatic detection of chest abnormalities is vital for the auxiliary diagnosis of medical images. Many studies apply computer vision and deep learning approaches involving symmetry and asymmetry concepts to detect chest abnormalities, with promising results. However, accurate instance-level, multi-label detection of abnormalities in chest X-rays remains a significant challenge. Here, a novel anomaly detection method for symmetric chest X-rays using dual attention and multi-scale feature fusion is proposed. Our method differs from previous approaches in three respects. First, we improved the deep neural network with channel-dimensional and spatial-dimensional attention to capture rich contextual features. Second, we used an optimized multi-scale learning framework for feature fusion to adapt to scale variation in the abnormalities. Third, considering the influence of data imbalance and other factors, we introduced a seesaw loss function to flexibly adjust sample weights and enhance learning efficiency. A rigorous experimental evaluation on a public chest X-ray dataset with fourteen types of abnormalities demonstrates that our model achieves a mean average precision of 0.362 and outperforms existing methods.
25
Shao J, Ma J, Zhang Q, Li W, Wang C. Predicting gene mutation status via artificial intelligence technologies based on multimodal integration (MMI) to advance precision oncology. Semin Cancer Biol 2023; 91:1-15. [PMID: 36801447] [DOI: 10.1016/j.semcancer.2023.02.006]
Abstract
Personalized cancer treatment strategies frequently rely on the detection of genetic alterations determined by molecular biology assays. Historically, these processes required single-gene sequencing, next-generation sequencing, or visual inspection of histopathology slides by experienced pathologists in a clinical context. Over the past decade, advances in artificial intelligence (AI) technologies have demonstrated remarkable potential for assisting physicians with accurate diagnosis in oncology image-recognition tasks. Meanwhile, AI techniques make it possible to integrate multimodal data such as radiology, histology, and genomics, providing critical guidance for patient stratification in precision therapy. Because mutation detection is unaffordable and time-consuming for a considerable number of patients, predicting gene mutations from routine clinical radiological scans or whole-slide tissue images with AI-based methods has become a topic of intense interest in clinical practice. In this review, we synthesize the general framework of multimodal integration (MMI) for molecular intelligent diagnostics beyond standard techniques. We then summarize the emerging applications of AI in predicting the mutational and molecular profiles of common cancers (lung, brain, breast, and other tumor types) from radiology and histology imaging. Furthermore, we discuss the challenges facing the real-world application of AI in medicine, including data curation, feature fusion, model interpretability, and practice regulations. Despite these challenges, we anticipate the clinical implementation of AI as a promising decision-support tool to aid oncologists in future cancer treatment management.
26
Zhang S, Mu W, Dong D, Wei J, Fang M, Shao L, Zhou Y, He B, Zhang S, Liu Z, Liu J, Tian J. The Applications of Artificial Intelligence in Digestive System Neoplasms: A Review. Health Data Sci 2023; 3:0005. [PMID: 38487199] [PMCID: PMC10877701] [DOI: 10.34133/hds.0005]
Abstract
Importance: Digestive system neoplasms (DSNs) are the leading cause of cancer-related mortality, with a 5-year survival rate of less than 20%. Subjective evaluation of medical images, including endoscopic images, whole-slide images, computed tomography images, and magnetic resonance images, plays a vital role in the clinical practice of DSNs, but its performance is limited and it increases the workload of radiologists and pathologists. The application of artificial intelligence (AI) in medical image analysis holds promise to augment the visual interpretation of medical images: it could not only automate the complicated evaluation process but also convert medical images into quantitative imaging features associated with tumor heterogeneity. Highlights: We briefly introduce the methodology of AI for medical image analysis and then review its clinical applications, including auxiliary diagnosis, assessment of treatment response, and prognosis prediction, in 4 typical DSNs: esophageal cancer, gastric cancer, colorectal cancer, and hepatocellular carcinoma. Conclusion: AI technology has great potential to support clinical diagnosis and treatment decision-making for DSNs. Several technical issues should be overcome before it can be applied in the clinical practice of DSNs.
Affiliation(s)
- Shuaitong Zhang: School of Engineering Medicine, Beihang University, Beijing, China; Key Laboratory of Big Data-Based Precision Medicine, Beihang University, Ministry of Industry and Information Technology, Beijing, China
- Wei Mu: School of Engineering Medicine, Beihang University, Beijing, China; Key Laboratory of Big Data-Based Precision Medicine, Beihang University, Ministry of Industry and Information Technology, Beijing, China
- Di Dong: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Jingwei Wei: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Mengjie Fang: School of Engineering Medicine, Beihang University, Beijing, China; Key Laboratory of Big Data-Based Precision Medicine, Beihang University, Ministry of Industry and Information Technology, Beijing, China
- Lizhi Shao: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Yu Zhou: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Bingxi He: School of Engineering Medicine, Beihang University, Beijing, China; Key Laboratory of Big Data-Based Precision Medicine, Beihang University, Ministry of Industry and Information Technology, Beijing, China
- Song Zhang: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Zhenyu Liu: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Jianhua Liu: Department of Oncology, Guangdong Provincial People's Hospital/Second Clinical Medical College of Southern Medical University/Guangdong Academy of Medical Sciences, Guangzhou, Guangdong, China
- Jie Tian: School of Engineering Medicine, Beihang University, Beijing, China; Key Laboratory of Big Data-Based Precision Medicine, Beihang University, Ministry of Industry and Information Technology, Beijing, China; CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China
|
27
|
Zhao B, Zhai H, Shao H, Bi K, Zhu L. Potential of vibrational spectroscopy coupled with machine learning as a non-invasive diagnostic method for COVID-19. Comput Methods Programs Biomed 2023; 229:107295. [PMID: 36706562] [PMCID: PMC9711896] [DOI: 10.1016/j.cmpb.2022.107295]
Abstract
BACKGROUND AND OBJECTIVE: Efforts to alleviate the ongoing coronavirus disease 2019 (COVID-19) crisis have shown that rapid, sensitive, and large-scale screening is critical for controlling the current infection and ongoing pandemics. METHODS: Here, we explored the potential of vibrational spectroscopy coupled with machine learning to screen COVID-19 patients in the initial stage of the disease. We present a hybrid classification model called grey wolf optimized support vector machine (GWO-SVM). The proposed model was tested and comprehensively compared with other machine learning models on vibrational spectroscopic fingerprints, including a saliva FTIR spectra dataset and a serum Raman scattering spectra dataset. RESULTS: For unknown vibrational spectra, the presented GWO-SVM model provided an accuracy, specificity, and F1 score of 0.9825, 0.9714, and 0.9778, respectively, on the saliva FTIR spectra dataset, and an overall accuracy, specificity, and F1 score of 0.9085, 0.9552, and 0.9036, respectively, on the serum Raman scattering spectra dataset, outperforming state-of-the-art models and thereby suggesting the suitability of the GWO-SVM model for adoption in a clinical setting for initial screening of COVID-19 patients. CONCLUSIONS: Prospectively, the presented vibrational spectroscopy based GWO-SVM model can facilitate the screening of COVID-19 patients and alleviate the medical service burden. These proof-of-concept results show that vibrational spectroscopy coupled with the GWO-SVM model can support COVID-19 diagnosis and could be further applied to early screening of other infectious diseases.
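As context for the GWO-SVM abstract above: the grey wolf optimizer is a population metaheuristic that can tune an SVM's hyperparameters by minimizing cross-validation error. A minimal, self-contained sketch of the optimization loop; the toy objective, bounds, and parameter values are illustrative assumptions, not the authors' implementation:

```python
import random

def gwo_minimize(objective, bounds, n_wolves=12, n_iters=80, seed=0):
    """Minimize `objective` over box `bounds` with grey wolf optimization.

    The three best wolves (alpha, beta, delta) guide every wolf's next step;
    the coefficient `a` decays from 2 to 0, shifting the pack from
    exploration toward exploitation.
    """
    rng = random.Random(seed)
    wolves = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_wolves)]
    best = list(min(wolves, key=objective))  # elitism: keep the best-ever solution
    for it in range(n_iters):
        ranked = sorted(wolves, key=objective)
        if objective(ranked[0]) < objective(best):
            best = list(ranked[0])
        alpha, beta, delta = ranked[0], ranked[1], ranked[2]
        a = 2.0 * (1.0 - it / n_iters)
        for i, w in enumerate(wolves):
            new_w = []
            for d, (lo, hi) in enumerate(bounds):
                pulls = []
                for leader in (alpha, beta, delta):
                    A = 2.0 * a * rng.random() - a  # |A|>1 explores, |A|<1 attacks
                    C = 2.0 * rng.random()
                    pulls.append(leader[d] - A * abs(C * leader[d] - w[d]))
                new_w.append(min(hi, max(lo, sum(pulls) / 3.0)))
            wolves[i] = new_w
    final = min(wolves, key=objective)
    return final if objective(final) < objective(best) else best

# Stand-in for SVM hyperparameter search: a toy "CV error" surface over
# (log10 C, log10 gamma) with its optimum at C = 10, gamma = 1e-2.
cv_error = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
best_params = gwo_minimize(cv_error, bounds=[(-3.0, 3.0), (-3.0, 3.0)])
```

In a real GWO-SVM pipeline the objective would train an SVM at each candidate (C, gamma) and return the cross-validated error, which is far more expensive but structurally identical.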
Affiliation(s)
- Bingqiang Zhao: College of Chemistry & Chemical Engineering, Lanzhou University, South Tianshui Road 222, Lanzhou, Gansu 730000, PR China
- Honglin Zhai: College of Chemistry & Chemical Engineering, Lanzhou University, South Tianshui Road 222, Lanzhou, Gansu 730000, PR China
- Haiping Shao: College of Chemistry & Chemical Engineering, Lanzhou University, South Tianshui Road 222, Lanzhou, Gansu 730000, PR China
- Kexin Bi: College of Chemistry & Chemical Engineering, Lanzhou University, South Tianshui Road 222, Lanzhou, Gansu 730000, PR China
- Ling Zhu: College of Chemistry & Chemical Engineering, Lanzhou University, South Tianshui Road 222, Lanzhou, Gansu 730000, PR China
|
28
|
Lee Y, Kim YS, Lee DI, Jeong S, Kang GH, Jang YS, Kim W, Choi HY, Kim JG. Comparison of the Diagnostic Performance of Deep Learning Algorithms for Reducing the Time Required for COVID-19 RT-PCR Testing. Viruses 2023; 15. [PMID: 36851519] [DOI: 10.3390/v15020304]
Abstract
(1) Background: Rapid and accurate negative discrimination enables efficient management of scarce isolation-bed resources and adequate patient accommodation in the majority of areas experiencing an explosion of confirmed cases due to the Omicron variant. Until now, artificial intelligence and deep learning methods proposed to replace time-consuming RT-PCR have relied on CXR, chest CT, blood test results, or clinical information. (2) Methods: We proposed and compared five deep learning algorithms (RNN, LSTM, Bi-LSTM, GRU, and transformer) for reducing the time required for RT-PCR diagnosis by learning the change in fluorescence value over time during the RT-PCR process. (3) Results: Among the five deep learning algorithms capable of training on time-series data, Bi-LSTM and GRU were shown to decrease the time required for RT-PCR diagnosis by half or by 25% without significantly impairing the diagnostic performance of the COVID-19 RT-PCR test. (4) Conclusions: The diagnostic performance of the model developed in this study, relative to the standard 40-cycle RT-PCR diagnosis, shows the possibility of nearly halving the time required for RT-PCR diagnosis.
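The core idea above, calling a result from only a prefix of the 40-cycle fluorescence curve, can be illustrated without a neural network. A hypothetical sketch with simulated sigmoid amplification curves and a simple rise-above-baseline rule standing in for the paper's RNN-family models; all curve parameters and thresholds here are invented for illustration:

```python
import math
import random

def amplification_curve(positive, cycles=40, ct=16.0, noise=0.02, rng=None):
    """Simulated per-cycle RT-PCR fluorescence readings: positives follow a
    sigmoid centred on the threshold cycle (Ct); negatives stay at baseline."""
    rng = rng or random.Random(0)
    curve = []
    for c in range(1, cycles + 1):
        signal = 1.0 / (1.0 + math.exp(-(c - ct) / 2.0)) if positive else 0.0
        curve.append(signal + rng.gauss(0.0, noise))
    return curve

def call_early(curve, used_cycles=20, rise_threshold=0.15):
    """Early call from the first half of the run: declare positive if
    fluorescence clearly rises above the early-cycle baseline."""
    prefix = curve[:used_cycles]
    baseline = sum(prefix[:5]) / 5.0
    return max(prefix) - baseline > rise_threshold

rng = random.Random(1)
positive_call = call_early(amplification_curve(True, rng=rng))   # uses 20 of 40 cycles
negative_call = call_early(amplification_curve(False, rng=rng))
```

The paper's models replace this fixed threshold rule with sequence models (Bi-LSTM, GRU) trained on real fluorescence trajectories, but the payoff is the same: a call after half the cycles.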
|
29
|
Vardhan A, Makhnevich A, Omprakash P, Hirschorn D, Barish M, Cohen SL, Zanos TP. A radiographic, deep transfer learning framework, adapted to estimate lung opacities from chest x-rays. Bioelectron Med 2023; 9:1. [PMID: 36597113] [PMCID: PMC9809517] [DOI: 10.1186/s42234-022-00103-0]
Abstract
Chest radiographs (CXRs) are the most widely available radiographic imaging modality used to detect respiratory diseases that result in lung opacities. CXR reports often use non-standardized language, resulting in subjective, qualitative, and non-reproducible opacity estimates. Our goal was to develop a robust deep transfer learning framework and adapt it to estimate the degree of lung opacity from CXRs. Following CXR data selection based on exclusion criteria, segmentation schemes were used for region-of-interest (ROI) extraction, and all combinations of segmentation, data balancing, and classification methods were tested to pick the top-performing models. Multi-fold cross-validation was used to determine the best of the initially selected top models, based on appropriate performance metrics as well as a novel Macro-Averaged Heatmap Concordance Score (MA-HCS). Performance of the best model was compared against that of expert physician annotators, and heatmaps were produced. Finally, a sensitivity analysis of model performance across patient populations of interest was performed. The proposed framework was adapted to the specific use case of estimating the degree of CXR lung opacity using ordinal multiclass classification. 38,365 prospectively annotated CXRs from 17,418 patients, acquired between March 24, 2020, and May 22, 2020, were used. We tested three neural network architectures (ResNet-50, VGG-16, and CheXNet), three segmentation schemes (no segmentation, lung segmentation, and lateral segmentation based on spine detection), and three data balancing strategies (undersampling, double-stage sampling, and synthetic minority oversampling), using 38,079 CXR images for training and 286 images as the out-of-the-box validation dataset that underwent expert radiologist adjudication.
Based on the results of these experiments, the ResNet-50 model with undersampling and no ROI segmentation is recommended for lung opacity classification, based on optimal values of the MAE metric and the Heatmap Concordance Score (HCS). The agreement between the opacity scores predicted by this model and the two sets of radiologist scores (OR, Original Reader; OOBTR, Out-Of-Box Reader) is superior to the inter-radiologist opacity score agreement.
Affiliation(s)
- Avantika Vardhan: Institute of Health System Science, Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY 11030, USA; Institute of Bioelectronic Medicine, Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY 11030, USA
- Alex Makhnevich: Institute of Health System Science, Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY 11030, USA; Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Northwell Health, Hempstead, NY 11549, USA
- Pravan Omprakash: Institute of Bioelectronic Medicine, Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY 11030, USA
- David Hirschorn: Institute of Health System Science, Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY 11030, USA; Department of Information Services, Northwell Health, New Hyde Park, NY 11042, USA
- Matthew Barish: Institute of Health System Science, Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY 11030, USA; Department of Information Services, Northwell Health, New Hyde Park, NY 11042, USA
- Stuart L. Cohen: Institute of Health System Science, Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY 11030, USA; Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Northwell Health, Hempstead, NY 11549, USA
- Theodoros P. Zanos: Institute of Health System Science, Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY 11030, USA; Institute of Bioelectronic Medicine, Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY 11030, USA; Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Northwell Health, Hempstead, NY 11549, USA
|
30
|
Zhang A, Xing L, Zou J, Wu JC. Shifting machine learning for healthcare from development to deployment and from models to data. Nat Biomed Eng 2022; 6:1330-1345. [PMID: 35788685] [DOI: 10.1038/s41551-022-00898-y]
Abstract
In the past decade, the application of machine learning (ML) to healthcare has helped drive the automation of physician tasks as well as enhancements in clinical capabilities and access to care. This progress has emphasized that, from model development to model deployment, data play central roles. In this Review, we provide a data-centric view of the innovations and challenges that are defining ML for healthcare. We discuss deep generative models and federated learning as strategies to augment datasets for improved model performance, as well as the use of the more recent transformer models for handling larger datasets and enhancing the modelling of clinical text. We also discuss data-focused problems in the deployment of ML, emphasizing the need to efficiently deliver data to ML models for timely clinical predictions and to account for natural data shifts that can deteriorate model performance.
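One deployment problem the review highlights, natural data shift degrading model performance, is commonly monitored with simple distribution-distance tests on incoming features. A sketch using the two-sample Kolmogorov-Smirnov statistic on a single feature; this is a standard drift check, not a method prescribed by the review, and the simulated data are illustrative:

```python
import random

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between the
    two empirical CDFs, usable as a per-feature drift score."""
    a, b = sorted(sample_a), sorted(sample_b)
    d = 0.0
    ia = ib = 0
    for v in sorted(set(a) | set(b)):
        while ia < len(a) and a[ia] <= v:  # ia = count of a-values <= v
            ia += 1
        while ib < len(b) and b[ib] <= v:  # ib = count of b-values <= v
            ib += 1
        d = max(d, abs(ia / len(a) - ib / len(b)))
    return d

rng = random.Random(0)
train_feature = [rng.gauss(0.0, 1.0) for _ in range(500)]  # training distribution
same_dist = [rng.gauss(0.0, 1.0) for _ in range(500)]      # no drift
shifted = [rng.gauss(1.0, 1.0) for _ in range(500)]        # mean shift of one sd

drift_ok = ks_statistic(train_feature, same_dist)
drift_bad = ks_statistic(train_feature, shifted)
```

In practice the statistic (or its p-value) is tracked per feature over time, and an alert fires when it crosses a threshold, prompting model revalidation or retraining.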
Affiliation(s)
- Angela Zhang: Stanford Cardiovascular Institute, School of Medicine, Stanford University, Stanford, CA, USA; Department of Genetics, School of Medicine, Stanford University, Stanford, CA, USA; Greenstone Biosciences, Palo Alto, CA, USA; Department of Computer Science, Stanford University, Stanford, CA, USA
- Lei Xing: Department of Radiation Oncology, School of Medicine, Stanford University, Stanford, CA, USA
- James Zou: Department of Computer Science, Stanford University, Stanford, CA, USA; Department of Biomedical Informatics, School of Medicine, Stanford University, Stanford, CA, USA
- Joseph C Wu: Stanford Cardiovascular Institute, School of Medicine, Stanford University, Stanford, CA, USA; Greenstone Biosciences, Palo Alto, CA, USA; Department of Medicine, Division of Cardiovascular Medicine, Stanford University, Stanford, CA, USA; Department of Radiology, School of Medicine, Stanford University, Stanford, CA, USA
|
31
|
Lasker A, Obaidullah SM, Chakraborty C, Roy K. Application of Machine Learning and Deep Learning Techniques for COVID-19 Screening Using Radiological Imaging: A Comprehensive Review. SN Comput Sci 2022; 4:65. [PMID: 36467853] [PMCID: PMC9702883] [DOI: 10.1007/s42979-022-01464-8]
Abstract
The lung, one of the most important organs in the human body, is often affected by various SARS-type diseases, among which COVID-19 has been found to be the most fatal in recent times. Indeed, SARS-CoV-2 caused a pandemic that spread rapidly through communities, causing respiratory problems. In this situation, radiological imaging-based screening (mostly chest X-ray and computed tomography (CT) modalities) has been performed for rapid screening of the disease, as it is a non-invasive approach. Due to the scarcity of physicians and chest specialists, technology-enabled disease screening techniques have been developed by several researchers with the help of artificial intelligence and machine learning (AI/ML). Researchers have introduced several AI/ML/deep learning (DL) algorithms for computer-assisted detection of COVID-19 using chest X-ray and CT images. In this paper, a comprehensive review has been conducted to summarize work on applications of AI/ML/DL for diagnostic prediction of COVID-19, mainly using X-ray and CT images. Following the PRISMA guidelines, a total of 265 articles were selected from 1715 articles published up to the third quarter of 2021. Furthermore, this review summarizes and compares a variety of ML/DL techniques, various datasets, and their results using X-ray and CT imaging. A detailed discussion is provided on the novelty of the published works, along with their advantages and limitations.
Affiliation(s)
- Asifuzzaman Lasker: Department of Computer Science & Engineering, Aliah University, Kolkata, India
- Sk Md Obaidullah: Department of Computer Science & Engineering, Aliah University, Kolkata, India
- Chandan Chakraborty: Department of Computer Science & Engineering, National Institute of Technical Teachers’ Training & Research Kolkata, Kolkata, India
- Kaushik Roy: Department of Computer Science, West Bengal State University, Barasat, India
|
32
|
Wang W, Liu S, Xu H, Deng L. COVIDX-LwNet: A Lightweight Network Ensemble Model for the Detection of COVID-19 Based on Chest X-ray Images. Sensors (Basel) 2022; 22:8578. [PMID: 36366277] [PMCID: PMC9655773] [DOI: 10.3390/s22218578]
Abstract
Recently, the COVID-19 coronavirus pandemic has put considerable pressure on health systems around the world. One of the most common ways to detect COVID-19 is to use chest X-ray images, which have the advantage of being cheap and fast. However, in the early days of the COVID-19 outbreak, most studies applied pretrained convolutional neural network (CNN) models, and the features produced by the last convolutional layer were passed directly into the classification head. In this study, the proposed ensemble model consists of three lightweight networks, Xception, MobileNetV2, and NasNetMobile, as the original feature extractors; three base classifiers are then obtained by adding a coordinate attention module, an LSTM, and a new classification head to the original feature extractors. The classification results of the three base classifiers are then fused by a confidence fusion method. Three publicly available chest X-ray datasets for COVID-19 testing were considered: ternary (COVID-19, normal, and other pneumonia) and quaternary (COVID-19, normal, bacterial pneumonia, and viral pneumonia) classification was performed on the first two datasets, achieving high accuracy rates of 95.56% and 91.20%, respectively. The third dataset was used to compare the performance of the model against other models and to assess its generalization ability on different datasets. We performed a thorough ablation study on the first dataset to understand the impact of each proposed component. Finally, we also produced saliency-map visualizations, which not only explain key prediction decisions of the model but also help radiologists locate areas of infection. Through extensive experiments, the results obtained by the proposed method were found to be comparable to state-of-the-art methods.
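The "confidence fusion" step can be pictured as a weighted average of the base classifiers' class probabilities, with each model weighted by how confident it is. This is a hypothetical fusion rule for illustration; the paper's exact formula may differ:

```python
def fuse_by_confidence(prob_vectors):
    """Fuse per-model class-probability vectors, weighting each model by its
    confidence (its maximum class probability). Illustrative rule only."""
    weights = [max(p) for p in prob_vectors]
    total = sum(weights)
    n_classes = len(prob_vectors[0])
    fused = [
        sum(w * p[c] for w, p in zip(weights, prob_vectors)) / total
        for c in range(n_classes)
    ]
    return fused, fused.index(max(fused))  # fused distribution and argmax label

# Three base classifiers scoring one CXR over (COVID-19, normal, other pneumonia):
preds = [
    [0.70, 0.20, 0.10],  # confident model, contributes most
    [0.40, 0.35, 0.25],  # uncertain model, contributes least
    [0.60, 0.25, 0.15],
]
fused, label = fuse_by_confidence(preds)
```

Because each input vector sums to 1 and the weights are normalized, the fused vector is again a valid probability distribution.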
|
33
|
Abstract
Thoracic imaging has been revolutionized through advances in technology and research around the world, and China is no exception. Thoracic imaging in China has progressed from anatomic observation to quantitative and functional evaluation, and from traditional approaches to artificial intelligence. This article reviews the past, present, and future of thoracic imaging in China, in an attempt to establish new accepted strategies moving forward.
Affiliation(s)
- Li Fan: Second Affiliated Hospital, Naval Medical University
- Wenjie Yang: Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Wenting Tu: Second Affiliated Hospital, Naval Medical University
- Xiuxiu Zhou: Second Affiliated Hospital, Naval Medical University
- Qin Zou: Second Affiliated Hospital, Naval Medical University
- Hanxiao Zhang: Second Affiliated Hospital, Naval Medical University
- Yan Feng: Second Affiliated Hospital, Naval Medical University
- Shiyuan Liu: Second Affiliated Hospital, Naval Medical University
|
34
|
Sharma A, Mishra PK. Covid-MANet: Multi-task attention network for explainable diagnosis and severity assessment of COVID-19 from CXR images. Pattern Recognit 2022; 131:108826. [PMID: 35698723] [PMCID: PMC9170279] [DOI: 10.1016/j.patcog.2022.108826]
Abstract
The devastating outbreak of Coronavirus Disease (COVID-19) cases in early 2020 led the world to face a health crisis. The exponential reproduction rate of COVID-19 can only be reduced by the early and correct diagnosis of infection cases. Initial research findings reported that radiological examinations using CT and CXR modalities successfully reduced the false negatives of the RT-PCR test. This research study aims to develop an explainable diagnosis system for the detection and infection-region quantification of COVID-19 disease. Existing research studies have successfully explored deep learning approaches with high performance measures but lacked generalization and interpretability for COVID-19 diagnosis. In this study, we address these issues with the Covid-MANet network, an automated end-to-end multi-task attention network that works on 5 classes in three stages for COVID-19 infection screening. The first stage of the Covid-MANet network localizes the attention of the model to the relevant lung region for disease recognition. The second stage differentiates COVID-19 cases from bacterial pneumonia, viral pneumonia, normal, and tuberculosis cases. To improve interpretation and explainability, three experiments were conducted to find the most coherent and appropriate classification approach. Moreover, the multi-scale attention model MA-DenseNet201 is proposed for the classification of COVID-19 cases. The final stage of the Covid-MANet network quantifies the proportion of infection and the severity of COVID-19 in the lungs. The COVID-19 cases are graded into severity levels of mild, moderate, severe, and critical as per the score assigned by the RALE scoring system. The MA-DenseNet201 classification model outperforms eight state-of-the-art CNN models in terms of sensitivity and interpretation with the lung localization network.
The COVID-19 infection segmentation by UNet with a DenseNet121 encoder achieves a Dice score of 86.15%, outperforming UNet, UNet++, Attention UNet, and R2UNet with VGG16, ResNet50, and DenseNet201 encoders. The proposed network not only classifies images based on the predicted label but also highlights the infection by segmentation/localization of model-focused regions to support explainable decisions. The MA-DenseNet201 model with a segmentation-based cropping approach achieves a maximum interpretation of 96% with a COVID-19 sensitivity of 97.75%. Finally, based on class-varied sensitivity analysis, the Covid-MANet ensemble network of MA-DenseNet201, ResNet50, and MobileNet achieves 95.05% accuracy and 98.75% COVID-19 sensitivity. The proposed model is externally validated on an unseen dataset, yielding 98.17% COVID-19 sensitivity.
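For reference, the Dice score reported for the segmentation stage measures overlap between the predicted and ground-truth infection masks: 2|A intersect B| / (|A| + |B|). A minimal implementation over flattened binary masks; the example masks are invented:

```python
def dice_score(pred_mask, true_mask):
    """Dice similarity coefficient between two binary masks given as
    flattened 0/1 sequences of equal length."""
    intersection = sum(p and t for p, t in zip(pred_mask, true_mask))
    size_sum = sum(pred_mask) + sum(true_mask)
    # Convention: two empty masks overlap perfectly.
    return 1.0 if size_sum == 0 else 2.0 * intersection / size_sum

# Toy 8-pixel masks: 3 pixels agree as positive, each mask has 4 positives.
pred = [1, 1, 1, 0, 0, 0, 1, 0]
true = [1, 1, 0, 0, 0, 1, 1, 0]
```

Dice weights the intersection twice, so it is more forgiving of small masks than IoU; the two are related by Dice = 2*IoU / (1 + IoU).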
Affiliation(s)
- Ajay Sharma: Department of Computer Science, Institute of Science, Banaras Hindu University, Varanasi 221005, India
- Pramod Kumar Mishra: Department of Computer Science, Institute of Science, Banaras Hindu University, Varanasi 221005, India
|
35
|
Li F, Zhang X, Comellas AP, Hoffman EA, Yang T, Lin CL. Contrastive learning and subtyping of post-COVID-19 lung computed tomography images. Front Physiol 2022; 13:999263. [PMID: 36304574] [PMCID: PMC9593072] [DOI: 10.3389/fphys.2022.999263]
Abstract
Patients who recovered from the novel coronavirus disease 2019 (COVID-19) may experience a range of long-term symptoms. Since the lung is the most common site of the infection, pulmonary sequelae may present persistently in COVID-19 survivors. To better understand the symptoms associated with impaired lung function in patients with post-COVID-19, we aimed to build a deep learning model which conducts two tasks: to differentiate post-COVID-19 from healthy subjects and to identify post-COVID-19 subtypes, based on the latent representations of lung computed tomography (CT) scans. CT scans of 140 post-COVID-19 subjects and 105 healthy controls were analyzed. A novel contrastive learning model was developed by introducing a lung volume transform to learn latent features of disease phenotypes from CT scans at inspiration and expiration of the same subjects. The model achieved 90% accuracy for the differentiation of the post-COVID-19 subjects from the healthy controls. Two clusters (C1 and C2) with distinct characteristics were identified among the post-COVID-19 subjects. C1 exhibited increased air-trapping caused by small airways disease (4.10%, p = 0.008) and diffusing capacity for carbon monoxide %predicted (DLCO %predicted, 101.95%, p < 0.001), while C2 had decreased lung volume (4.40L, p < 0.001) and increased ground glass opacity (GGO%, 15.85%, p < 0.001). The contrastive learning model is able to capture the latent features of two post-COVID-19 subtypes characterized by air-trapping due to small airways disease and airway-associated interstitial fibrotic-like patterns, respectively. The discovery of post-COVID-19 subtypes suggests the need for different managements and treatments of long-term sequelae of patients with post-COVID-19.
Affiliation(s)
- Frank Li: Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, IA, United States; IIHR-Hydroscience and Engineering, University of Iowa, Iowa City, IA, United States
- Xuan Zhang: IIHR-Hydroscience and Engineering, University of Iowa, Iowa City, IA, United States; Department of Mechanical Engineering, University of Iowa, Iowa City, IA, United States
- Eric A. Hoffman: Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, IA, United States; Department of Radiology, University of Iowa, Iowa City, IA, United States
- Tianbao Yang: Department of Computer Science, University of Iowa, Iowa City, IA, United States
- Ching-Long Lin: Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, IA, United States; IIHR-Hydroscience and Engineering, University of Iowa, Iowa City, IA, United States; Department of Mechanical Engineering, University of Iowa, Iowa City, IA, United States; Department of Radiology, University of Iowa, Iowa City, IA, United States
- Correspondence: Ching-Long Lin
|
36
|
Shao J, Ma J, Zhang S, Li J, Dai H, Liang S, Yu Y, Li W, Wang C. Radiogenomic System for Non-Invasive Identification of Multiple Actionable Mutations and PD-L1 Expression in Non-Small Cell Lung Cancer Based on CT Images. Cancers (Basel) 2022; 14:4823. [PMID: 36230746] [DOI: 10.3390/cancers14194823]
Abstract
PURPOSE: Personalized treatments such as targeted therapy and immunotherapy have revolutionized the predominantly therapeutic paradigm for non-small cell lung cancer (NSCLC). However, these treatment decisions require the determination of targetable genomic and molecular alterations through invasive genetic or immunohistochemistry (IHC) tests. Numerous previous studies have demonstrated that artificial intelligence can accurately predict the single-gene status of tumors from radiologic imaging, but few studies have achieved the simultaneous evaluation of multiple genes to reflect more realistic clinical scenarios. METHODS: We proposed a multi-label multi-task deep learning (MMDL) system for non-invasively predicting actionable NSCLC mutations and PD-L1 expression from routinely acquired computed tomography (CT) images. This radiogenomic system integrated transformer-based deep learning features and radiomic features of CT volumes from 1096 NSCLC patients based on next-generation sequencing (NGS) and IHC tests. RESULTS: For each task cohort, we randomly split the corresponding dataset into training (80%), validation (10%), and testing (10%) subsets. The areas under the receiver operating characteristic curve (AUCs) of the MMDL system reached 0.862 (95% confidence interval (CI), 0.758-0.969) for discrimination of a panel of 8 mutated genes (EGFR, ALK, ERBB2, BRAF, MET, ROS1, RET, and KRAS); 0.856 (95% CI, 0.663-0.948) for identification of a 10-molecular-status panel (the previous 8 genes plus TP53 and PD-L1); and 0.868 (95% CI, 0.641-0.972) for classifying the EGFR/PD-L1 subtype. CONCLUSIONS: To the best of our knowledge, this is the first deep learning system to simultaneously analyze 10 molecular expressions, and it might be utilized as an assistive tool, in conjunction with or in lieu of ancillary testing, to support precision treatment options.
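The AUCs quoted above are areas under the receiver operating characteristic curve. As background, AUC equals the probability that a randomly chosen positive case is scored above a randomly chosen negative case, which gives a direct rank-based computation (standard formula; the labels and scores below are illustrative, not the paper's data):

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the rank (Mann-Whitney U) formulation:
    the fraction of positive/negative pairs where the positive is scored
    higher, counting ties as one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy binary task: 3 positives, 3 negatives, one mis-ranked positive.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
auc = roc_auc(labels, scores)  # 8 of 9 pairs correctly ordered
```

The O(P*N) pair loop is fine for small examples; production code sorts once and uses ranks for O(n log n).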
|
37
|
Morajkar RV, Kumar AS, Kunkalekar RK, Vernekar AA. Advances in nanotechnology application in biosafety materials: A crucial response to COVID-19 pandemic. Biosaf Health 2022; 4:347-363. [PMID: 35765656] [PMCID: PMC9225943] [DOI: 10.1016/j.bsheal.2022.06.001]
Abstract
The outbreak of coronavirus disease 2019 (COVID-19) has adversely affected the public domain causing unprecedented cases and high mortality across the globe. This has brought back the concept of biosafety into the spotlight to solve biosafety problems in developing diagnostics and therapeutics to treat COVID-19. The advances in nanotechnology and material science in combination with medicinal chemistry have provided a new perspective to overcome this crisis. Herein, we discuss the efforts of researchers in the field of material science in developing personal protective equipment (PPE), detection devices, vaccines, drug delivery systems, and medical equipment. Such a synergistic approach of disciplines can strengthen the research to develop biosafety products in solving biosafety problems.
Affiliation(s)
- Rasmi V. Morajkar
- Inorganic and Physical Chemistry Laboratory, Council of Scientific and Industrial Research (CSIR)-Central Leather Research Institute (CLRI), Adyar, Chennai 600020, Tamil Nadu, India
- Akhil S. Kumar
- Inorganic and Physical Chemistry Laboratory, Council of Scientific and Industrial Research (CSIR)-Central Leather Research Institute (CLRI), Adyar, Chennai 600020, Tamil Nadu, India
- Rohan K. Kunkalekar
- School of Chemical Sciences, Goa University, Taleigao Plateau 403206, Goa, India (corresponding author)
- Amit A. Vernekar
- Inorganic and Physical Chemistry Laboratory, Council of Scientific and Industrial Research (CSIR)-Central Leather Research Institute (CLRI), Adyar, Chennai 600020, Tamil Nadu, India (corresponding author)
38
Costa YMG, Silva SA, Teixeira LO, Pereira RM, Bertolini D, Britto AS, Oliveira LS, Cavalcanti GDC. COVID-19 Detection on Chest X-ray and CT Scan: A Review of the Top-100 Most Cited Papers. Sensors (Basel) 2022; 22:7303. [PMID: 36236402 PMCID: PMC9570662 DOI: 10.3390/s22197303]
Abstract
Since the beginning of the COVID-19 pandemic, many works have been published proposing solutions to the problems that arose in this scenario. In this vein, one of the topics that attracted the most attention is the development of computer-based strategies to detect COVID-19 from thoracic medical imaging, such as chest X-ray (CXR) and computerized tomography scan (CT scan). By searching for works already published on this theme, we can easily find thousands of them. This is partly explained by the fact that the most severe worldwide pandemic emerged amid the technological advances recently achieved, and also considering the technical facilities to deal with the large amount of data produced in this context. Even though several of these works describe important advances, we cannot overlook the fact that others only use well-known methods and techniques without a more relevant and critical contribution. Hence, differentiating the works with the most relevant contributions is not a trivial task. The number of citations obtained by a paper is probably the most straightforward and intuitive way to verify its impact on the research community. Aiming to help researchers in this scenario, we present a review of the top-100 most cited papers in this field of investigation according to the Google Scholar search engine. We evaluate the distribution of the top-100 papers taking into account some important aspects, such as the type of medical imaging explored, learning settings, segmentation strategy, explainable artificial intelligence (XAI), and finally, the dataset and code availability.
Affiliation(s)
- Yandre M. G. Costa
- Departamento de Informática, Universidade Estadual de Maringá, Maringá 87020-900, Brazil
- Sergio A. Silva
- Departamento de Informática, Universidade Estadual de Maringá, Maringá 87020-900, Brazil
- Lucas O. Teixeira
- Departamento de Informática, Universidade Estadual de Maringá, Maringá 87020-900, Brazil
- Diego Bertolini
- Departamento Acadêmico de Ciência da Computação, Universidade Tecnológica Federal do Paraná, Campo Mourão 87301-899, Brazil
- Alceu S. Britto
- Departamento de Ciência da Computação, Pontifícia Universidade Católica do Paraná, Curitiba 80215-901, Brazil
- Luiz S. Oliveira
- Departamento de Informática, Universidade Federal do Paraná, Curitiba 81531-980, Brazil
39
La Salvia M, Torti E, Leon R, Fabelo H, Ortega S, Balea-Fernandez F, Martinez-Vega B, Castaño I, Almeida P, Carretero G, Hernandez JA, Callico GM, Leporati F. Neural Networks-Based On-Site Dermatologic Diagnosis through Hyperspectral Epidermal Images. Sensors (Basel) 2022; 22:7139. [PMID: 36236240 PMCID: PMC9571453 DOI: 10.3390/s22197139]
Abstract
Cancer originates from the uncontrolled growth of healthy cells into a mass. Chromophores, such as hemoglobin and melanin, characterize skin spectral properties, allowing the classification of lesions into different etiologies. Hyperspectral imaging systems gather skin-reflected and transmitted light into several wavelength ranges of the electromagnetic spectrum, enabling potential skin-lesion differentiation through machine learning algorithms. Challenged by data availability and tiny inter and intra-tumoral variability, here we introduce a pipeline based on deep neural networks to diagnose hyperspectral skin cancer images, targeting a handheld device equipped with a low-power graphical processing unit for routine clinical testing. Enhanced by data augmentation, transfer learning, and hyperparameter tuning, the proposed architectures aim to meet and improve the well-known dermatologist-level detection performances concerning both benign-malignant and multiclass classification tasks, being able to diagnose hyperspectral data considering real-time constraints. Experiments show 87% sensitivity and 88% specificity for benign-malignant classification and specificity above 80% for the multiclass scenario. AUC measurements suggest classification performance improvement above 90% with adequate thresholding. Concerning binary segmentation, we measured skin DICE and IOU higher than 90%. We estimated 1.21 s, at most, consuming 5 Watts to segment the epidermal lesions with the U-Net++ architecture, meeting the imposed time limit. Hence, we can diagnose hyperspectral epidermal data assuming real-time constraints.
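The segmentation figures quoted above (DICE and IOU higher than 90%) are overlap ratios between a predicted mask and a reference mask. A minimal self-contained sketch for flat binary masks follows; the toy masks are invented for illustration and are not data from the paper:

```python
def dice_iou(pred, target):
    """Compute the DICE coefficient and IoU for two binary masks (flat 0/1 lists)."""
    inter = sum(p & t for p, t in zip(pred, target))
    p_sum, t_sum = sum(pred), sum(target)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

# Toy 6-pixel masks: prediction vs. ground truth.
pred   = [1, 1, 1, 0, 0, 1]
target = [1, 1, 0, 0, 1, 1]
dice, iou = dice_iou(pred, target)
print(round(dice, 3), round(iou, 3))  # 0.75 0.6
```

Note that DICE is always at least as large as IoU for the same pair of masks, which is why both are usually reported together.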
Affiliation(s)
- Marco La Salvia
- Department of Electrical, Computer and Biomedical Engineering, University of Pavia, 27100 Pavia, Italy
- Emanuele Torti
- Department of Electrical, Computer and Biomedical Engineering, University of Pavia, 27100 Pavia, Italy
- Raquel Leon
- Research Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), 35001 Las Palmas de Gran Canaria, Spain
- Himar Fabelo
- Research Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), 35001 Las Palmas de Gran Canaria, Spain
- Samuel Ortega
- Research Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), 35001 Las Palmas de Gran Canaria, Spain
- Norwegian Institute of Food, Fisheries and Aquaculture Research (Nofima), 6122 Tromsø, Norway
- Francisco Balea-Fernandez
- Department of Psychology, Sociology and Social Work, University of Las Palmas de Gran Canaria, 35001 Las Palmas de Gran Canaria, Spain
- Beatriz Martinez-Vega
- Research Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), 35001 Las Palmas de Gran Canaria, Spain
- Irene Castaño
- Department of Dermatology, Hospital Universitario de Gran Canaria Doctor Negrín, Barranco de la Ballena, s/n, 35010 Las Palmas de Gran Canaria, Spain
- Pablo Almeida
- Department of Dermatology, Complejo Hospitalario Universitario Insular-Materno Infantil, Avenida Maritima del Sur, s/n, 35016 Las Palmas de Gran Canaria, Spain
- Gregorio Carretero
- Department of Dermatology, Hospital Universitario de Gran Canaria Doctor Negrín, Barranco de la Ballena, s/n, 35010 Las Palmas de Gran Canaria, Spain
- Javier A. Hernandez
- Department of Dermatology, Complejo Hospitalario Universitario Insular-Materno Infantil, Avenida Maritima del Sur, s/n, 35016 Las Palmas de Gran Canaria, Spain
- Gustavo M. Callico
- Research Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), 35001 Las Palmas de Gran Canaria, Spain
- Francesco Leporati
- Department of Electrical, Computer and Biomedical Engineering, University of Pavia, 27100 Pavia, Italy
40
Wang C, Ma J, Zhang S, Shao J, Wang Y, Zhou HY, Song L, Zheng J, Yu Y, Li W. Development and validation of an abnormality-derived deep-learning diagnostic system for major respiratory diseases. NPJ Digit Med 2022; 5:124. [PMID: 35999467 DOI: 10.1038/s41746-022-00648-z]
Abstract
Respiratory diseases impose a tremendous global health burden on large patient populations. In this study, we aimed to develop DeepMRDTR, a deep learning-based medical image interpretation system for the diagnosis of major respiratory diseases based on the automated identification of a wide range of radiological abnormalities through computed tomography (CT) and chest X-ray (CXR) from real-world, large-scale datasets. DeepMRDTR comprises four networks (two CT-Nets and two CXR-Nets) that exploit contrastive learning to generate pre-training parameters that are fine-tuned on the retrospective dataset collected from a single institution. The performance of DeepMRDTR was evaluated for abnormality identification and disease diagnosis on data from two different institutions: one was an internal testing dataset from the same institution as the training data and the second was collected from an external institution to evaluate the model generalizability and robustness to an unrelated population dataset. In such a difficult multi-class diagnosis task, our system achieved the average area under the receiver operating characteristic curve (AUC) of 0.856 (95% confidence interval (CI):0.843–0.868) and 0.841 (95%CI:0.832–0.887) for abnormality identification, and 0.900 (95%CI:0.872–0.958) and 0.866 (95%CI:0.832–0.887) for major respiratory diseases’ diagnosis on CT and CXR datasets, respectively. Furthermore, to achieve a clinically actionable diagnosis, we deployed a preliminary version of DeepMRDTR into the clinical workflow, which was performed on par with senior experts in disease diagnosis, with an AUC of 0.890 and a Cohen’s k of 0.746–0.877 at a reasonable timescale; these findings demonstrate the potential to accelerate the medical workflow to facilitate early diagnosis as a triage tool for respiratory diseases which supports improved clinical diagnoses and decision-making.
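The Cohen's kappa values reported above measure chance-corrected agreement between the system and the senior experts. A small self-contained computation follows; the two rating sequences are invented toy data, not from the study:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa between two raters' label sequences of equal length."""
    assert len(a) == len(b)
    n = len(a)
    # Observed agreement: fraction of items where the raters match.
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement under independence, from each rater's label marginals.
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy binary labels: model output vs. an expert's reading on 10 cases.
model  = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
expert = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
print(round(cohens_kappa(model, expert), 3))  # 0.583
```

Kappa discounts the agreement two raters would reach by chance alone, which is why it is preferred over raw accuracy for rater-vs-model comparisons like the one above.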
41
Mahaboob Basha S, Lira Neto AV, Alshathri S, Elaziz MA, Hashmitha Mohisin S, De Albuquerque VHC, Javed AR. Multithreshold Segmentation and Machine Learning Based Approach to Differentiate COVID-19 from Viral Pneumonia. Comput Intell Neurosci 2022; 2022:1-12. [PMID: 36039344 PMCID: PMC9420061 DOI: 10.1155/2022/2728866]
Abstract
Coronavirus disease (COVID-19) has created unprecedented devastation and the loss of millions of lives globally. Its contagious nature and fatalities invariably pose challenges to physicians and healthcare support systems. Clinical diagnostic evaluation using reverse transcription-polymerase chain reaction and other approaches is currently in use. Chest X-ray (CXR) and CT images have been effectively utilized for screening purposes and can provide relevant data on the localized regions affected by the infection. A step towards automated screening and diagnosis using CXR and CT could be of considerable importance in these turbulent times. The main objective is to probe a simple threshold-based segmentation approach to identify possible infection regions in CXR images and to investigate intensity-based, wavelet transform (WT)-based, and Laws-based texture features with statistical measures. A feature selection strategy using Random Forest (RF) was then applied, and the selected features were used to build Machine Learning (ML) models with a Support Vector Machine (SVM) and an RF classifier to differentiate COVID-19 from viral pneumonia (VP). The results obtained clearly indicate that the intensity- and WT-based features vary across the two pathologies and are better differentiated when the combined features are trained with the SVM and RF classifiers. Classifier performance measures such as an Area Under the Curve (AUC) of 0.97 and an overall classification accuracy of 0.9 using the RF model clearly indicate that the implemented methodology is useful in characterizing COVID-19 and viral pneumonia.
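At its core, a threshold-based segmentation like the one probed above amounts to binning pixel intensities into regions. The sketch below uses two hand-picked thresholds on a toy intensity patch; an actual pipeline would derive the thresholds from the image histogram, and all names and values here are illustrative:

```python
def multithreshold(image, thresholds):
    """Label each pixel with the index of the intensity band it falls in.

    `thresholds` must be sorted ascending; k thresholds yield k + 1 classes.
    """
    def band(v):
        for i, t in enumerate(thresholds):
            if v <= t:
                return i
        return len(thresholds)
    return [[band(v) for v in row] for row in image]

# Toy 3x4 intensity patch (0-255 scale) standing in for a CXR region.
patch = [[ 10,  40, 120, 200],
         [ 30,  90, 160, 250],
         [ 20,  70, 130, 180]]
labels = multithreshold(patch, thresholds=[50, 150])
print(labels)  # [[0, 0, 1, 2], [0, 1, 2, 2], [0, 1, 1, 2]]
```

Texture features would then be computed per labeled region and fed to the feature-selection and classification stages.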
42
Liang S, Ma J, Wang G, Shao J, Li J, Deng H, Wang C, Li W. The Application of Artificial Intelligence in the Diagnosis and Drug Resistance Prediction of Pulmonary Tuberculosis. Front Med (Lausanne) 2022; 9:935080. [PMID: 35966878 PMCID: PMC9366014 DOI: 10.3389/fmed.2022.935080]
Abstract
With the increasing incidence and mortality of pulmonary tuberculosis, in addition to tough and controversial disease management, conventional approaches to the diagnosis and differential diagnosis of tuberculosis remain time-consuming and resource-limited, especially in countries with a high tuberculosis burden and limited infrastructure. In the meantime, the climbing proportion of drug-resistant tuberculosis poses a significant hazard to public health. Thus, auxiliary diagnostic tools with higher efficiency and accuracy are urgently required. Artificial intelligence (AI), which is not new but has recently grown in popularity, provides researchers with opportunities and technical underpinnings to develop novel, precise, rapid, and automated implements for pulmonary tuberculosis care, including but not limited to tuberculosis detection. In this review, we introduce representative AI methods, focusing on deep learning and radiomics, followed by descriptions of the state-of-the-art AI models developed using medical images and genetic data to detect pulmonary tuberculosis, distinguish the infection from other pulmonary diseases, and identify drug resistance of tuberculosis, with the purpose of assisting physicians in deciding the appropriate therapeutic schedule in the early stage of the disease. We also enumerate challenges to maximizing the impact of AI in this field, such as the generalization and clinical utility of deep learning models.
Affiliation(s)
- Shufan Liang
- Department of Respiratory and Critical Care Medicine, Med-X Center for Manufacturing, Frontiers Science Center for Disease-Related Molecular Network, West China School of Medicine, West China Hospital, Sichuan University, Chengdu, China
- Precision Medicine Key Laboratory of Sichuan Province, Precision Medicine Research Center, West China Hospital, Sichuan University, Chengdu, China
- Jiechao Ma
- AI Lab, Deepwise Healthcare, Beijing, China
- Gang Wang
- Precision Medicine Key Laboratory of Sichuan Province, Precision Medicine Research Center, West China Hospital, Sichuan University, Chengdu, China
- Jun Shao
- Department of Respiratory and Critical Care Medicine, Med-X Center for Manufacturing, Frontiers Science Center for Disease-Related Molecular Network, West China School of Medicine, West China Hospital, Sichuan University, Chengdu, China
- Jingwei Li
- Department of Respiratory and Critical Care Medicine, Med-X Center for Manufacturing, Frontiers Science Center for Disease-Related Molecular Network, West China School of Medicine, West China Hospital, Sichuan University, Chengdu, China
- Hui Deng
- Department of Respiratory and Critical Care Medicine, Med-X Center for Manufacturing, Frontiers Science Center for Disease-Related Molecular Network, West China School of Medicine, West China Hospital, Sichuan University, Chengdu, China
- Precision Medicine Key Laboratory of Sichuan Province, Precision Medicine Research Center, West China Hospital, Sichuan University, Chengdu, China
- Correspondence: Hui Deng
- Chengdi Wang
- Department of Respiratory and Critical Care Medicine, Med-X Center for Manufacturing, Frontiers Science Center for Disease-Related Molecular Network, West China School of Medicine, West China Hospital, Sichuan University, Chengdu, China
- Correspondence: Chengdi Wang
- Weimin Li
- Department of Respiratory and Critical Care Medicine, Med-X Center for Manufacturing, Frontiers Science Center for Disease-Related Molecular Network, West China School of Medicine, West China Hospital, Sichuan University, Chengdu, China
- Correspondence: Weimin Li
43
Zhu Z, Li J, Huang J, Li Z, Zhang H, Chen S, Zhong Q, Xie Y, Hu S, Wang Y, Wang D, Yu G. An intelligent prediagnosis system for disease prediction and examination recommendation based on electronic medical record and a medical-semantic-aware convolution neural network (MSCNN) for pediatric chronic cough. Transl Pediatr 2022; 11:1216-1233. [PMID: 35958012 PMCID: PMC9360821 DOI: 10.21037/tp-22-275]
Abstract
BACKGROUND Due to the phenotypic similarities among different pediatric respiratory diseases with chronic cough, misdiagnosis by primary doctors and misuse of examinations are prevalent. In the pre-diagnosis stage, the patient's chief complaints and other information in the electronic medical record (EMR) provide a powerful reference for respiratory experts to make a preliminary disease judgment and examination plan. In this paper, we propose an intelligent prediagnosis system to predict disease diagnoses and recommend examinations based on EMR text. METHODS We examined the clinical notes of 178,293 children with chronic cough symptoms from retrospective EMR data. The dataset was split 7:3 into training and testing sets; from the testing set, we also extracted 5% of samples for validation. We proposed a medical-semantic-aware convolution neural network (MSCNN) framework that can accomplish two downstream tasks from the same medical language model through transfer learning. First, a medical language model based on the word2vec algorithm was built to generate embeddings for the text data. Then, a text convolutional neural network (TextCNN) was used to build models for disease prediction and examination recommendation. RESULTS We implemented five algorithms for disease prediction. In the disease prediction task, our algorithm outperformed the baseline methods on all metrics, with a top-1 accuracy (AC) of 0.68 and a top-3 AC of 0.923 on the testing set; with data enhancement, the top-3 AC reached 0.926. In the examination recommendation task, the overall AC on the testing set was 0.93 and the macro-average (MA) F1-score was 0.88. The average area under the curve (AUC) was 0.97 on the training set and 0.86 on the testing set. CONCLUSIONS We constructed an intelligent prediagnosis system with an MSCNN framework that can predict diseases and make examination recommendations based on EMR data. Our approach achieved good results on a retrospective clinical dataset and thus has great potential for automated diagnosis assistance in clinical practice during the pre-diagnosis stage, which will help primary-level doctors and doctors in basic-level hospitals. Due to the generality of the proposed framework, it can be straightforwardly extended to prediagnosis for other diseases.
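The top-1 and top-3 accuracies reported above count a prediction as correct when the true label appears among the model's k highest-scoring classes. A minimal sketch with invented toy scores (not the paper's outputs):

```python
def top_k_accuracy(scores, labels, k):
    """Fraction of samples whose true label is among the k highest-scoring classes."""
    hits = 0
    for row, y in zip(scores, labels):
        # Rank class indices by descending score and check the top k.
        ranked = sorted(range(len(row)), key=lambda c: row[c], reverse=True)
        hits += y in ranked[:k]
    return hits / len(labels)

# Toy per-sample class scores (4 classes) with true labels.
scores = [[0.1, 0.6, 0.2, 0.1],
          [0.5, 0.1, 0.3, 0.1],
          [0.2, 0.3, 0.4, 0.1]]
labels = [1, 2, 3]
print(round(top_k_accuracy(scores, labels, 1), 3))  # 0.333
print(round(top_k_accuracy(scores, labels, 3), 3))  # 0.667
```

Top-3 accuracy is the more forgiving metric, which is why it is the natural choice when the system's role is to shortlist candidate diagnoses for a clinician rather than commit to one.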
Affiliation(s)
- Zhu Zhu
- Department of Data and Information, The Children's Hospital, Zhejiang University School of Medicine, Hangzhou, China
- National Clinical Research Center for Child Health, Hangzhou, China
- Jing Li
- Department of Data and Information, The Children's Hospital, Zhejiang University School of Medicine, Hangzhou, China
- National Clinical Research Center for Child Health, Hangzhou, China
- Jian Huang
- Department of Data and Information, The Children's Hospital, Zhejiang University School of Medicine, Hangzhou, China
- National Clinical Research Center for Child Health, Hangzhou, China
- Zheming Li
- Department of Data and Information, The Children's Hospital, Zhejiang University School of Medicine, Hangzhou, China
- National Clinical Research Center for Child Health, Hangzhou, China
- Siyu Chen
- Avain (Hangzhou) Technology Co., Ltd., Hangzhou, China
- Qianhui Zhong
- Avain (Hangzhou) Technology Co., Ltd., Hangzhou, China
- Yulan Xie
- Avain (Hangzhou) Technology Co., Ltd., Hangzhou, China
- Shasha Hu
- Department of Data and Information, The Children's Hospital, Zhejiang University School of Medicine, Hangzhou, China
- National Clinical Research Center for Child Health, Hangzhou, China
- Yinshuo Wang
- National Clinical Research Center for Child Health, Hangzhou, China
- Department of Pulmonology, The Children's Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Dejian Wang
- Department of R&D, Hangzhou Healink Technology, Hangzhou, China
- Gang Yu
- Department of Data and Information, The Children's Hospital, Zhejiang University School of Medicine, Hangzhou, China
- National Clinical Research Center for Child Health, Hangzhou, China
- Polytechnic Institute, Zhejiang University, Hangzhou, China
44
He H, Zhang X, Du L, Ye M, Lu Y, Xue J, Wu J, Shuai X. Molecular imaging nanoprobes for theranostic applications. Adv Drug Deliv Rev 2022; 186:114320. [PMID: 35526664 DOI: 10.1016/j.addr.2022.114320]
Abstract
As a non-invasive imaging monitoring method, molecular imaging can provide the location and expression level of disease-signature biomolecules in vivo, leading to early diagnosis of relevant diseases, improved treatment strategies, and accurate assessment of treatment efficacy. In recent years, a variety of nanosized imaging probes have been developed and intensively investigated in fundamental/translational research and clinical practice. As an interdisciplinary field, it combines chemistry, medicine, biology, radiology, and materials science, among other subjects. Successful molecular imaging requires not only advanced imaging equipment but also the synthesis of efficient imaging probes. However, recent advances in nanoprobes have rarely been summarized. In this paper, we summarize recent progress on the three main types of nanosized molecular imaging probes: ultrasound (US) imaging nanoprobes, magnetic resonance imaging (MRI) nanoprobes, and computed tomography (CT) imaging nanoprobes. The applications of molecular imaging nanoprobes are discussed in detail. Finally, we provide an outlook on the development of next-generation molecular imaging nanoprobes.
Affiliation(s)
- Haozhe He
- Nanomedicine Research Center, The Third Affiliated Hospital, Sun Yat-sen University, Guangzhou 510630, China; Department of Pediatrics, The Seventh Affiliated Hospital, Sun Yat-sen University, Shenzhen 518107, China
- Xindan Zhang
- Beijing Laboratory of Biomedical Materials, Beijing University of Chemical Technology, Beijing 100029, China
- Lihua Du
- PCFM Lab of Ministry of Education, School of Materials Science and Engineering, Sun Yat-Sen University, Guangzhou 510260, China
- Minwen Ye
- Beijing Laboratory of Biomedical Materials, Beijing University of Chemical Technology, Beijing 100029, China
- Yonglai Lu
- Beijing Laboratory of Biomedical Materials, Beijing University of Chemical Technology, Beijing 100029, China
- Jiajia Xue
- Beijing Laboratory of Biomedical Materials, Beijing University of Chemical Technology, Beijing 100029, China
- Jun Wu
- PCFM Lab of Ministry of Education, School of Biomedical Engineering, Sun Yat-sen University, Shenzhen 518107, China
- Xintao Shuai
- Nanomedicine Research Center, The Third Affiliated Hospital, Sun Yat-sen University, Guangzhou 510630, China; PCFM Lab of Ministry of Education, School of Materials Science and Engineering, Sun Yat-Sen University, Guangzhou 510260, China
45
Abstract
Machine learning refers to a series of processes in which a computer finds rules from a vast amount of data. With recent advances in computer technology and the availability of a wide variety of health data, machine learning has rapidly developed and been applied in medical research. Currently, there are three types of machine learning: supervised, unsupervised, and reinforcement learning. In medical research, supervised learning is commonly used for diagnoses and prognoses, while unsupervised learning is used for phenotyping a disease, and reinforcement learning for maximizing favorable results, such as optimization of total patients' waiting time in the emergency department. The present article focuses on the concept and application of supervised learning in medicine, the most commonly used machine learning approach in medicine, and provides a brief explanation of four algorithms widely used for prediction (random forests, gradient-boosted decision tree, support vector machine, and neural network). Among these algorithms, the neural network has further developed into deep learning algorithms to solve more complex tasks. Along with simple classification problems, deep learning is commonly used to process medical imaging, such as retinal fundus photographs for diabetic retinopathy diagnosis. Although machine learning can bring new insights into medicine by processing a vast amount of data that are often beyond human capacity, algorithms can also fail when domain knowledge is neglected. The combination of algorithms and human cognitive ability is a key to the successful application of machine learning in medicine.
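As a concrete instance of the supervised learning described above, the sketch below trains a tiny logistic-regression classifier (the simplest relative of the neural network mentioned in the abstract) by gradient descent on a toy labeled dataset. All names, data, and hyperparameters are illustrative and not from the article:

```python
import math

def train_logreg(X, y, lr=0.5, epochs=2000):
    """Fit logistic regression by stochastic gradient descent on labeled data."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Predicted probability via the logistic (sigmoid) function.
            p = 1 / (1 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            err = p - yi  # gradient of the log-loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    """Classify as positive when the predicted probability reaches 0.5."""
    return 1 / (1 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b))) >= 0.5

# Toy supervised task: learn logical AND from four labeled examples.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 0, 1]
w, b = train_logreg(X, y)
print([predict(w, b, xi) for xi in X])  # [False, False, False, True]
```

The four algorithms named in the abstract differ in how they fit this same labeled-examples-to-prediction mapping; the train/predict loop itself is common to all of them.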
Affiliation(s)
- Sachiko Ono
- Department of Eat-loss Medicine, Graduate School of Medicine, The University of Tokyo
- Tadahiro Goto
- Department of Clinical Epidemiology and Health Economics, The University of Tokyo
- TXP Medical Co. Ltd
46
Ramírez Varela A, Moreno López S, Contreras-Arrieta S, Tamayo-Cabeza G, Restrepo-Restrepo S, Sarmiento-Barbieri I, Caballero-Díaz Y, Jorge Hernandez-Florez L, Mario González J, Salas-Zapata L, Laajaj R, Buitrago-Gutierrez G, de la Hoz-Restrepo F, Vives Florez M, Osorio E, Sofía Ríos-Oliveros D, Behrentz E. Prediction of SARS-CoV-2 infection with a Symptoms-Based model to aid public health decision making in Latin America and other low and middle income settings. Prev Med Rep 2022; 27:101798. [PMID: 35469291 PMCID: PMC9020649 DOI: 10.1016/j.pmedr.2022.101798]
Abstract
- Early non-pharmacological interventions are necessary to limit the spread of COVID-19.
- In low- to middle-income countries, there are limited resources to face the pandemic.
- Symptoms (e.g., anosmia) can be used to apply early strategies in suspicious cases.
- Logistic regression provides interpretability in prediction analysis.
- Machine-learning analysis aids prediction because of its capacity for data synthesis.
Symptoms-based models for predicting SARS-CoV-2 infection may improve clinical decision-making and be an alternative for resource allocation in under-resourced settings. In this study we aimed to test a symptoms-based model to predict a positive test result for SARS-CoV-2 infection during the COVID-19 pandemic, using logistic regression and a machine-learning approach, in Bogotá, Colombia. Participants from the CoVIDA project were included. A logistic regression model was chosen based on biological plausibility and the Akaike Information Criterion. We also performed a machine-learning analysis with random forest, support vector machine, and extreme gradient boosting. The study included 58,577 participants with a positivity rate of 5.7%. The logistic regression showed that anosmia (aOR = 7.76, 95% CI [6.19, 9.73]), fever (aOR = 4.29, 95% CI [3.07, 6.02]), headache (aOR = 3.29, 95% CI [1.78, 6.07]), dry cough (aOR = 2.96, 95% CI [2.44, 3.58]), and fatigue (aOR = 1.93, 95% CI [1.57, 2.93]) were independently associated with SARS-CoV-2 infection. Our final model had an area under the curve of 0.73. The symptoms-based model correctly identified over 85% of participants. This model can be used to prioritize resource allocation related to COVID-19 diagnosis, and to decide on early isolation and contact-tracing strategies in individuals with a high probability of infection before receiving a confirmatory test result. This strategy has public health and clinical decision-making significance in low- and middle-income settings such as Latin America.
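To illustrate how adjusted odds ratios like those above combine into a predicted probability: each present symptom contributes its log-odds, the contributions are summed with a baseline intercept, and the result is passed through the logistic function. The intercept below is a hypothetical value chosen from the reported 5.7% positivity (the paper does not publish its intercept), so the numbers are purely illustrative:

```python
import math

# Adjusted odds ratios reported in the abstract.
AOR = {"anosmia": 7.76, "fever": 4.29, "headache": 3.29,
       "dry_cough": 2.96, "fatigue": 1.93}

# Hypothetical baseline log-odds, assuming ~5% positivity with no symptoms.
INTERCEPT = math.log(0.05 / 0.95)

def predicted_probability(symptoms):
    """Sum symptom log-odds with the intercept, then apply the logistic function."""
    logit = INTERCEPT + sum(math.log(AOR[s]) for s in symptoms)
    return 1 / (1 + math.exp(-logit))

print(round(predicted_probability(["anosmia", "fever"]), 3))  # 0.637
print(round(predicted_probability([]), 3))                    # 0.05
```

This kind of additive log-odds scoring is what makes logistic models attractive for triage: the contribution of each symptom is transparent, unlike the machine-learning ensembles the study compares against.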
47
Zhang R, Yang F, Luo Y, Liu J, Wang C. Learning Invariant Representation for Unsupervised Domain Adaptive Thorax Disease Classification. Pattern Recognit Lett 2022. [DOI: 10.1016/j.patrec.2022.06.015]
48
Huang T, Yang R, Shen L, Feng A, Li L, He N, Li S, Huang L, Lyu J. Deep transfer learning to quantify pleural effusion severity in chest X-rays. BMC Med Imaging 2022; 22:100. [PMID: 35624426 PMCID: PMC9137166 DOI: 10.1186/s12880-022-00827-0]
Abstract
Purpose The detection of pleural effusion on chest radiographs is crucial for timely treatment decisions in patients with chronic obstructive pulmonary disease (COPD). We used the MIMIC-CXR database to develop a deep learning model that quantifies pleural effusion severity on chest radiographs. Methods The Medical Information Mart for Intensive Care Chest X-ray (MIMIC-CXR) dataset was divided into patients with and without COPD. Pleural effusion severity labels were extracted from the COPD radiology reports and classified into four categories: no effusion, small effusion, moderate effusion, and large effusion. A random sample of 200 items was manually checked to determine whether the labels were correct, and a professional doctor re-labeled these items as a verification cohort without knowledge of the previous labels. The learning models included eight common network architectures, including ResNet, DenseNet, and GoogLeNet. Three data processing methods (no sampling, downsampling, and upsampling) and two loss functions (focal loss and cross-entropy loss) were used to handle the unbalanced data. The Neural Network Intelligence tool was applied to train the models. Receiver operating characteristic curves, area under the curve (AUC), and confusion matrices were used to evaluate the model results, and Grad-CAM was used for model interpretation. Results Among the 8533 patients, 15,620 chest X-rays with clearly marked pleural effusion severity were obtained (no effusion, 5685; small effusion, 4877; moderate effusion, 3657; and large effusion, 1401). The error rate of the manually checked labels was 6.5%, and the error rate of the doctor's re-labeling was 11.0%. The highest accuracy of the optimized model was 73.07%. The micro-average AUCs of the testing and validation cohorts were 0.89 and 0.90, respectively, and their macro-average AUCs were 0.86 and 0.89, respectively. The AUCs for distinguishing each class from the other three, in the testing and validation cohorts respectively, were 0.95 and 0.94, 0.76 and 0.83, 0.85 and 0.83, and 0.87 and 0.93. Conclusion The deep transfer learning model can grade the severity of pleural effusion.
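One of the imbalance-handling choices named above is focal loss, which down-weights the contribution of already well-classified examples relative to cross-entropy. A minimal NumPy sketch of the multi-class form (the four-class setup and the probability values below are illustrative, not taken from the paper):

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0):
    """Multi-class focal loss: cross-entropy scaled by (1 - p_t)^gamma.

    probs:   (n, k) softmax probabilities
    targets: (n,) integer class labels (e.g. 0..3 effusion severities)
    """
    p_t = probs[np.arange(len(targets)), targets]
    # Standard cross-entropy is -log(p_t); the modulating factor
    # (1 - p_t)^gamma shrinks the loss for confident correct predictions.
    return np.mean(-((1 - p_t) ** gamma) * np.log(p_t))

# A confident prediction (0.9) contributes far less than an uncertain one (0.4).
probs = np.array([[0.90, 0.05, 0.03, 0.02],
                  [0.30, 0.40, 0.20, 0.10]])
targets = np.array([0, 1])
print(focal_loss(probs, targets))
```

Setting `gamma=0` recovers plain cross-entropy, which is why the two losses can be compared directly as in the study's experiments.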
Collapse
Affiliation(s)
- Tao Huang
- Department of Clinical Research, The First Affiliated Hospital of Jinan University, Guangzhou, 510630, China
- Rui Yang
- Department of Clinical Research, The First Affiliated Hospital of Jinan University, Guangzhou, 510630, China
- Longbin Shen
- Department of Rehabilitation Medicine, The First Affiliated Hospital of Jinan University, Guangzhou, 510630, China
- Aozi Feng
- Department of Clinical Research, The First Affiliated Hospital of Jinan University, Guangzhou, 510630, China
- Li Li
- Department of Clinical Research, The First Affiliated Hospital of Jinan University, Guangzhou, 510630, China
- Ningxia He
- Department of Clinical Research, The First Affiliated Hospital of Jinan University, Guangzhou, 510630, China
- Shuna Li
- Department of Clinical Research, The First Affiliated Hospital of Jinan University, Guangzhou, 510630, China
- Liying Huang
- Department of Clinical Research, The First Affiliated Hospital of Jinan University, Guangzhou, 510630, China
- Jun Lyu
- Department of Clinical Research, The First Affiliated Hospital of Jinan University, Guangzhou, 510630, China; Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Informatization, Guangzhou, Guangdong, China
49
Kong L, Cheng J. Classification and Detection of COVID-19 X-Ray Images based on DenseNet and VGG16 Feature Fusion. Biomed Signal Process Control 2022; 77:103772. [PMID: 35573817 PMCID: PMC9080057 DOI: 10.1016/j.bspc.2022.103772]
Abstract
Since December 2019, the novel coronavirus disease (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has spread widely around the world and has become a serious global public health problem. For this rapidly spreading infectious disease, chest X-ray examination plays a key diagnostic role. In this study, we propose a chest X-ray image classification method based on feature fusion of a dense convolutional network (DenseNet) and a visual geometry group network (VGG16). An attention mechanism (a global attention machine block and a category attention block) is added to the model to extract deep features, and a residual network (ResNet) is used to segment effective image information so that accurate classification is achieved quickly. The average accuracy of our model is 98.0% for binary classification and 97.3% for three-category classification. The experimental results show that the proposed model performs well on this task; deep learning and feature fusion for chest X-ray image classification can therefore serve as an auxiliary tool for clinicians and radiologists.
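The core idea of feature fusion is to concatenate feature vectors produced by two backbones and feed the joint vector to a classifier head. A minimal sketch on synthetic data (the feature dimensions, signal strengths, and the logistic-regression head are all illustrative assumptions, not the paper's actual DenseNet/VGG16 pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
y = rng.integers(0, 2, size=n)
# Stand-ins for pooled deep features from two backbones: each carries a
# partial, complementary signal about the class label.
feat_densenet = rng.normal(size=(n, 16)) + y[:, None] * 0.5
feat_vgg = rng.normal(size=(n, 8)) + (1 - y)[:, None] * 0.5

# Feature fusion by concatenation, then a simple classifier head.
fused = np.concatenate([feat_densenet, feat_vgg], axis=1)
Xtr, Xte, ytr, yte = train_test_split(fused, y, random_state=0)
acc = LogisticRegression(max_iter=1000).fit(Xtr, ytr).score(Xte, yte)
print(f"fused-feature accuracy: {acc:.2f}")
```

Because each backbone captures different evidence, the concatenated representation lets one classifier exploit both, which is the rationale for fusing DenseNet and VGG16 features.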
Affiliation(s)
- Lingzhi Kong
- School of Computer Science and Technology, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250000, China
- Jinyong Cheng
- School of Computer Science and Technology, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250000, China
50
Furtado A, Andrade L, Frias D, Maia T, Badaró R, Nascimento EGS. Deep Learning Applied to Chest Radiograph Classification—A COVID-19 Pneumonia Experience. Applied Sciences 2022; 12:3712. [DOI: 10.3390/app12083712]
Abstract
Due to the recent COVID-19 pandemic, a large number of reports present deep learning algorithms that support the detection of pneumonia caused by COVID-19 in chest radiographs. Few studies have provided the complete source code, limiting testing and reproducibility on different datasets. This work presents Cimatec_XCOV19, a novel deep learning system inspired by the Inception-V3 architecture that is able to (i) support the identification of abnormal chest radiographs and (ii) classify the abnormal radiographs as suggestive of COVID-19. The training dataset comprises 44,031 images with 2917 COVID-19 cases, one of the largest datasets in the recent literature. We organized and published an external validation dataset of 1158 chest radiographs from a Brazilian hospital, which two experienced radiologists evaluated independently. The Cimatec_XCOV19 algorithm obtained a sensitivity of 0.85, a specificity of 0.82, and an AUC ROC of 0.93. We compared the AUC ROC of our algorithm with that of a well-known public solution and found no statistically significant difference between the two. We provide full access to the code and the test dataset, enabling this work to be used as a tool for supporting the fast screening of COVID-19 on chest X-ray exams, serving as a reference for educators, and supporting further algorithm enhancements.
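The sensitivity and specificity figures reported above are straightforward to compute from a confusion matrix. A minimal sketch with toy labels (the data below are illustrative, not the Cimatec_XCOV19 outputs):

```python
import numpy as np

def sens_spec(y_true, y_pred):
    """Sensitivity (recall on positives) and specificity (recall on negatives)."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

# Toy ground truth and binary predictions.
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 0])
sens, spec = sens_spec(y_true, y_pred)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```

These two rates depend on the decision threshold, whereas the AUC ROC summarizes performance across all thresholds, which is why the paper reports all three.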