1
Saibro G, Keeza Y, Sauer B, Marescaux J, Diana M, Hostettler A, Collins T. Automatic diagnosis of abdominal pathologies in untrimmed ultrasound videos. Int J Comput Assist Radiol Surg 2025;20:923-933. [PMID: 40069481] [DOI: 10.1007/s11548-025-03334-z]
Abstract
PURPOSE Despite major advances in Computer Assisted Diagnosis (CAD), the need for carefully labeled training data remains an important barrier to clinical translation. This work aims to overcome this barrier for ultrasound video-based CAD, using video-level classification labels combined with a novel training strategy to improve the generalization performance of state-of-the-art (SOTA) video classifiers. METHODS SOTA video classifiers were trained and evaluated on a novel ultrasound video dataset of liver and kidney pathologies, and they all struggled to generalize, especially for kidney pathologies. A new training strategy is presented, wherein a frame relevance assessor is trained to score each frame of a video by diagnostic relevance. This is used to automatically generate diagnostically relevant video clips (DR-Clips), which guide a video classifier during training and inference. RESULTS Using DR-Clips with a Video Swin Transformer, we achieved a 0.92 ROC-AUC for kidney pathology detection in videos, compared to a 0.72 ROC-AUC with a Swin Transformer and standard video clips. For liver steatosis detection, due to the diffuse nature of the pathology, the Video Swin Transformer and other video classifiers performed similarly well, generally exceeding a 0.92 ROC-AUC. CONCLUSION In theory, video classifiers, such as video transformers, should be able to solve ultrasound CAD tasks with video labels. However, in practice, video labels provide weaker supervision than image labels, resulting in worse generalization, as demonstrated. The additional frame guidance provided by DR-Clips improves performance significantly. The results highlight current limits of, and opportunities to improve, frame guidance.
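The clip-selection step behind DR-Clips — score every frame for diagnostic relevance, then keep the most relevant window — can be sketched roughly as follows (a minimal illustration; the contiguous-window rule, clip length, and function name are assumptions, not the authors' implementation):

```python
import numpy as np

def select_dr_clip(relevance, clip_len=16):
    """Return the start index of the contiguous window of `clip_len`
    frames whose mean relevance score is highest."""
    if relevance.size < clip_len:
        raise ValueError("video shorter than the requested clip length")
    # Prefix sums let us score every candidate window in O(n).
    csum = np.concatenate([[0.0], np.cumsum(relevance)])
    window_means = (csum[clip_len:] - csum[:-clip_len]) / clip_len
    return int(np.argmax(window_means))

scores = np.array([0.1, 0.2, 0.9, 0.95, 0.85, 0.3, 0.1, 0.05])
start = select_dr_clip(scores, clip_len=3)
print(start)  # → 2, the window covering the high-relevance frames
```

In the paper's pipeline the relevance scores would come from the trained frame relevance assessor; here they are toy values.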
Affiliation(s)
- Güinther Saibro
- Ircad Africa, Kigali, Rwanda
- Ircad France, Strasbourg, France
- Icube - Photonics Instrumentation for Health, Université de Strasbourg, Illkirch, France
- Benoît Sauer
- MIM Groupe d'Imagerie Médicale, Strasbourg, France
- Michele Diana
- Icube - Photonics Instrumentation for Health, Université de Strasbourg, Illkirch, France
- Department of Surgery, University Hospital of Geneva, Geneva, Switzerland
- Medical Faculty, University of Geneva, Geneva, Switzerland
- Toby Collins
- Ircad Africa, Kigali, Rwanda
- Ircad France, Strasbourg, France
2
Berikol GB, Kanbakan A, Ilhan B, Doğanay F. Mapping artificial intelligence models in emergency medicine: A scoping review on artificial intelligence performance in emergency care and education. Turk J Emerg Med 2025;25:67-91. [PMID: 40248473] [PMCID: PMC12002153] [DOI: 10.4103/tjem.tjem_45_25]
Abstract
Artificial intelligence (AI) is increasingly improving processes such as emergency patient care and emergency medicine education. This scoping review aims to map the use and performance of AI models in emergency medicine with respect to core AI concepts. The findings show that AI-based medical imaging systems detect disease with 85%-90% accuracy in imaging techniques such as X-ray and computed tomography scans. In addition, AI-supported triage systems were found to correctly classify low- and high-urgency patients. In education, large language models have achieved high accuracy rates on emergency medicine exams. However, challenges remain in integrating AI into clinical workflows and in model generalization capacity. These findings demonstrate the potential of updated AI models, but larger-scale studies are still needed.
Affiliation(s)
- Altuğ Kanbakan
- Department of Emergency Medicine, Ufuk University School of Medicine, Ankara, Türkiye
- Buğra Ilhan
- Department of Emergency Medicine, Kırıkkale University School of Medicine, Kırıkkale, Türkiye
- Fatih Doğanay
- Department of Emergency Medicine, University of Health Sciences School of Medicine, İstanbul, Türkiye
3
Malainho B, Freitas J, Rodrigues C, Tonelli AC, Santanchè A, Carvalho-Filho MA, Fonseca JC, Queirós S. Semi-supervised Ensemble Learning for Automatic Interpretation of Lung Ultrasound Videos. J Imaging Inform Med 2024:10.1007/s10278-024-01344-y. [PMID: 39673011] [DOI: 10.1007/s10278-024-01344-y]
Abstract
Point-of-care ultrasound (POCUS) stands as a safe, portable, and cost-effective imaging modality for swift bedside patient examinations. Specifically, lung ultrasonography (LUS) has proven useful in evaluating both acute and chronic pulmonary conditions. Despite its clinical value, automatic LUS interpretation remains relatively unexplored, particularly in multi-label contexts. This work proposes a novel deep learning (DL) framework tailored for interpreting lung POCUS videos, whose outputs are the finding(s) present in these videos (such as A-lines, B-lines, or consolidations). The pipeline, based on a residual (2+1)D architecture, starts with a pre-processing routine for video masking and standardisation, and employs a semi-supervised approach to harness available unlabeled data. Additionally, we introduce an ensemble modeling strategy that aggregates outputs from models trained to predict distinct label sets, thereby leveraging the hierarchical nature of LUS findings. The proposed framework and its building blocks were evaluated through extensive experiments with both multi-class and multi-label models, highlighting its versatility. On a held-out test set, the categorical proposal, suited for expedited triage, achieved an average F1-score of 92.4%, while the multi-label proposal, helpful for patient management and referral, achieved an average F1-score of 70.5% across five relevant LUS findings. Overall, the semi-supervised methodology contributed significantly to improved performance, while the proposed hierarchy-aware ensemble provided moderate additional gains.
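One standard way a semi-supervised pipeline can harness unlabeled data is confidence-thresholded pseudo-labeling; the sketch below shows only the generic idea (the threshold value and function names are assumptions, not necessarily this paper's exact method):

```python
import numpy as np

def pseudo_label(unlabeled_probs, threshold=0.9):
    """Keep unlabeled samples whose top class probability exceeds
    `threshold` and assign them that class as a hard pseudo-label."""
    confidence = unlabeled_probs.max(axis=1)
    keep = np.where(confidence >= threshold)[0]
    return keep, unlabeled_probs[keep].argmax(axis=1)

probs = np.array([[0.95, 0.05],   # confident: pseudo-label as class 0
                  [0.60, 0.40],   # ambiguous: left unlabeled
                  [0.08, 0.92]])  # confident: pseudo-label as class 1
idx, labels = pseudo_label(probs, threshold=0.9)
print(idx.tolist(), labels.tolist())  # → [0, 2] [0, 1]
```

The selected samples would then be added to the labeled pool for the next training round.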
Affiliation(s)
- Bárbara Malainho
- Life and Health Sciences Research Institute, School of Medicine, University of Minho, Braga, Portugal
- ICVS/3B's - PT Government Associate Laboratory, Braga/Guimarães, Portugal
- Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal
- João Freitas
- Life and Health Sciences Research Institute, School of Medicine, University of Minho, Braga, Portugal
- ICVS/3B's - PT Government Associate Laboratory, Braga/Guimarães, Portugal
- Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal
- Catarina Rodrigues
- Life and Health Sciences Research Institute, School of Medicine, University of Minho, Braga, Portugal
- ICVS/3B's - PT Government Associate Laboratory, Braga/Guimarães, Portugal
- Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal
- Ana Claudia Tonelli
- Department of Internal Medicine, Hospital Clínicas de Porto Alegre, Porto Alegre, Brazil
- André Santanchè
- Institute of Computing, University of Campinas, São Paulo, Brazil
- Marco A Carvalho-Filho
- Wenckebach Institute, Research program LEARN, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Jaime C Fonseca
- Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal
- Sandro Queirós
- Life and Health Sciences Research Institute, School of Medicine, University of Minho, Braga, Portugal
- ICVS/3B's - PT Government Associate Laboratory, Braga/Guimarães, Portugal
4
Khan U, Thompson R, Li J, Etter LP, Camelo I, Pieciak RC, Castro-Aragon I, Setty B, Gill CC, Demi L, Betke M. FLUEnT: Transformer for detecting lung consolidations in videos using fused lung ultrasound encodings. Comput Biol Med 2024;180:109014. [PMID: 39163826] [DOI: 10.1016/j.compbiomed.2024.109014]
Abstract
Pneumonia is the leading cause of death among children around the world. According to the WHO, a total of 740,180 lives under the age of five were lost to pneumonia in 2019. Lung ultrasound (LUS) has been shown to be particularly useful for supporting the diagnosis of pneumonia in children and reducing mortality in resource-limited settings. The wide application of point-of-care ultrasound at the bedside is limited mainly by a lack of training for data acquisition and interpretation. Artificial intelligence can serve as a potential tool to automate and improve the LUS data interpretation process, which mainly involves analysis of hyper-echoic horizontal and vertical artifacts and hypo-echoic small to large consolidations. This paper presents the Fused Lung Ultrasound Encoding-based Transformer (FLUEnT), a novel pediatric LUS video scoring framework for detecting lung consolidations using fused LUS encodings. Frame-level embeddings from a variational autoencoder, features from a spatially attentive ResNet-18, and encoded patient information as metadata together form the fused encodings. These encodings are then passed to the transformer for binary classification of the presence or absence of consolidations in the video. The video-level analysis using fused encodings resulted in a mean balanced accuracy of 89.3%, an average improvement of 4.7 percentage points over using these encodings individually. In conclusion, outperforming state-of-the-art models by an average margin of 8 percentage points, our proposed FLUEnT framework serves as a benchmark for detecting lung consolidations in LUS videos from pediatric pneumonia patients.
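The encoding-fusion step — per-frame VAE embeddings, per-frame ResNet features, and patient metadata combined into one token per frame — reduces to a concatenation along the feature axis; a minimal sketch with toy dimensions (all sizes and names here are assumptions, not the paper's code):

```python
import numpy as np

def fuse_encodings(vae_emb, resnet_feat, metadata):
    """Concatenate per-frame embeddings from two encoders with patient
    metadata broadcast to every frame, giving one fused token per frame."""
    n_frames = vae_emb.shape[0]
    meta = np.tile(metadata, (n_frames, 1))   # repeat metadata for each frame
    return np.concatenate([vae_emb, resnet_feat, meta], axis=1)

vae = np.zeros((8, 32))        # 8 frames, 32-dim VAE embedding (toy size)
res = np.zeros((8, 64))        # 8 frames, 64-dim attentive-ResNet feature
meta = np.array([0.5, 1.0])    # encoded patient information (2-dim, toy)
tokens = fuse_encodings(vae, res, meta)
print(tokens.shape)  # → (8, 98)
```

The resulting per-frame tokens are what a transformer would consume as its input sequence.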
Affiliation(s)
- Umair Khan
- Department of Information Engineering and Computer Science, University of Trento, Trento, Italy
- Jason Li
- Department of Computer Science, Boston University, Boston, MA, USA
- Ingrid Camelo
- Augusta University, Pediatric Infectious Disease, Augusta, GA, USA
- Rachel C Pieciak
- Department of Global Health, Boston University School of Public Health, Boston, MA, USA
- Bindu Setty
- Department of Radiology, Boston Medical Center, Boston, MA, USA
- Christopher C Gill
- Department of Global Health, Boston University School of Public Health, Boston, MA, USA
- Libertario Demi
- Department of Information Engineering and Computer Science, University of Trento, Trento, Italy
- Margrit Betke
- Department of Computer Science, Boston University, Boston, MA, USA
5
Mao M, Va H, Hong M. Video Classification of Cloth Simulations: Deep Learning and Position-Based Dynamics for Stiffness Prediction. Sensors (Basel) 2024;24:549. [PMID: 38257643] [PMCID: PMC10820360] [DOI: 10.3390/s24020549]
Abstract
In virtual reality, augmented reality, and animation, the goal is to represent the movement of real-world deformable objects as faithfully as possible in the virtual world. This paper therefore proposes a method to automatically extract cloth stiffness values from video scenes, which are then applied as material properties for virtual cloth simulation. We propose the use of deep learning (DL) models to tackle this issue. The Transformer model, in combination with pre-trained architectures such as DenseNet121, ResNet50, VGG16, and VGG19, stands as a leading choice for video classification tasks. Position-Based Dynamics (PBD) is a computational framework widely used in computer graphics and physics-based simulations for deformable entities, notably cloth. It provides an inherently stable and efficient way to replicate complex dynamic behaviors, such as folding, stretching, and collision interactions. Our proposed model characterizes virtual cloth with softness-to-stiffness labels and accurately categorizes videos using this labeling. The cloth movement dataset used in this research is derived from a meticulously designed stiffness-oriented cloth simulation. Our experimental assessment encompasses an extensive dataset of 3840 videos, contributing to a multi-label video classification dataset. Our results demonstrate that our proposed model achieves an impressive average accuracy of 99.50%, significantly outperforming alternative models such as RNN, GRU, LSTM, and Transformer.
Affiliation(s)
- Makara Mao
- Department of Software Convergence, Soonchunhyang University, Asan 31538, Republic of Korea
- Hongly Va
- Department of Software Convergence, Soonchunhyang University, Asan 31538, Republic of Korea
- Min Hong
- Department of Computer Software Engineering, Soonchunhyang University, Asan 31538, Republic of Korea
6
Zhang P, Swaminathan A, Uddin AA. Pulmonary disease detection and classification in patient respiratory audio files using long short-term memory neural networks. Front Med (Lausanne) 2023;10:1269784. [PMID: 38020156] [PMCID: PMC10656606] [DOI: 10.3389/fmed.2023.1269784]
Abstract
INTRODUCTION To improve the diagnostic accuracy of respiratory illnesses, our research introduces a novel methodology to precisely diagnose a subset of lung diseases using patient respiratory audio recordings. These lung diseases include Chronic Obstructive Pulmonary Disease (COPD), Upper Respiratory Tract Infections (URTI), Bronchiectasis, Pneumonia, and Bronchiolitis. METHODS Our proposed methodology trains four deep learning algorithms on an input dataset consisting of 920 patient respiratory audio files. These audio files were recorded using digital stethoscopes and comprise the Respiratory Sound Database. The four deployed models are Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), CNN ensembled with unidirectional LSTM (CNN-LSTM), and CNN ensembled with bidirectional LSTM (CNN-BLSTM). RESULTS The models are evaluated using metrics such as accuracy, precision, recall, and F1-score. The best performing algorithm, LSTM, has an overall accuracy of 98.82% and an F1-score of 0.97. DISCUSSION The LSTM algorithm's extremely high predictive accuracy can be attributed to its strength in capturing sequential patterns in time-series audio data. In summary, this algorithm is able to ingest patient audio recordings and make precise lung disease predictions in real time.
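The sequential modeling that gives the LSTM its edge on audio comes from its gated recurrence; a toy single-cell forward pass over a feature sequence (random weights and made-up dimensions, not the paper's trained network) looks like this:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: input, forget, and output gates plus a candidate
    state are computed from the current input x and previous hidden h."""
    n = h.size
    z = W @ x + U @ h + b                          # stacked pre-activations, shape (4n,)
    i, f, o = sigmoid(z[:n]), sigmoid(z[n:2*n]), sigmoid(z[2*n:3*n])
    g = np.tanh(z[3*n:])                           # candidate cell update
    c = f * c + i * g                              # new cell state
    h = o * np.tanh(c)                             # new hidden state
    return h, c

rng = np.random.default_rng(0)
d_in, d_hid, T = 13, 8, 20                         # e.g. 13 audio features per frame (toy)
W = rng.normal(size=(4 * d_hid, d_in))
U = rng.normal(size=(4 * d_hid, d_hid))
b = np.zeros(4 * d_hid)
h, c = np.zeros(d_hid), np.zeros(d_hid)
for x in rng.normal(size=(T, d_in)):               # run over an audio feature sequence
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # final hidden state summarizes the whole sequence
```

In a classifier, that final hidden state would feed a dense softmax layer over the disease classes.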
Affiliation(s)
- Pinzhi Zhang
- College of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA, United States
- Ahmed Abrar Uddin
- College of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA, United States
7
Hasan MM, Hossain MM, Rahman MM, Azad A, Alyami SA, Moni MA. FP-CNN: Fuzzy pooling-based convolutional neural network for lung ultrasound image classification with explainable AI. Comput Biol Med 2023;165:107407. [PMID: 37678140] [DOI: 10.1016/j.compbiomed.2023.107407]
Abstract
The COVID-19 pandemic wreaked havoc on healthcare systems all across the world. In pandemic scenarios like COVID-19, the applicability of diagnostic modalities is crucial in medical diagnosis, where non-invasive ultrasound imaging has the potential to be a useful biomarker. This research develops a computer-assisted intelligent methodology for ultrasound lung image classification by utilizing a fuzzy pooling-based convolutional neural network (FP-CNN), with underlying evidence for particular decisions. The fuzzy-pooling method finds better representative features for ultrasound image classification. The FP-CNN model categorizes ultrasound images into one of three classes: COVID-19, disease-free (normal), and pneumonia. Explanations of diagnostic decisions are crucial to ensure the fairness of an intelligent system. This research has used Shapley Additive Explanation (SHAP) to explain the predictions of the FP-CNN models; the prediction of the black-box model is illustrated using SHAP explanations of its intermediate layers. To determine the most effective model, we tested different state-of-the-art convolutional neural network architectures with various training strategies, including fine-tuned models, single-layer fuzzy pooling models, and fuzzy pooling at all pooling layers. Among the different architectures, the Xception model with fuzzy pooling at all pooling layers achieves the best classification result of 97.2% accuracy. We hope our proposed method will be helpful for the clinical diagnosis of COVID-19 from lung ultrasound (LUS) images.
Affiliation(s)
- Md Mahmodul Hasan
- Department of Computer Science and Engineering, Mawlana Bhashani Science and Technology University, Tangail, 1902, Dhaka, Bangladesh.
- Muhammad Minoar Hossain
- Department of Computer Science and Engineering, Mawlana Bhashani Science and Technology University, Tangail, 1902, Dhaka, Bangladesh
- Department of Computer Science and Engineering, Bangladesh University, Mohammadpur, Dhaka, 1207, Bangladesh
- Mohammad Motiur Rahman
- Department of Computer Science and Engineering, Mawlana Bhashani Science and Technology University, Tangail, 1902, Dhaka, Bangladesh
- Akm Azad
- Department of Mathematics and Statistics, Faculty of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, 13318, Saudi Arabia
- Salem A Alyami
- Department of Mathematics and Statistics, Faculty of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, 13318, Saudi Arabia
- Mohammad Ali Moni
- Artificial Intelligence & Data Science, School of Health and Rehabilitation Sciences, The University of Queensland, Brisbane, QLD 4072, Australia
- Artificial Intelligence and Cyber Futures Institute, Charles Sturt University, Bathurst, NSW 2795, Australia
8
Sagreiya H, Jacobs MA, Akhbardeh A. Automated Lung Ultrasound Pulmonary Disease Quantification Using an Unsupervised Machine Learning Technique for COVID-19. Diagnostics (Basel) 2023;13:2692. [PMID: 37627951] [PMCID: PMC10453777] [DOI: 10.3390/diagnostics13162692]
Abstract
COVID-19 is an ongoing global health pandemic. Although COVID-19 can be diagnosed with various tests such as PCR, these tests do not establish pulmonary disease burden. Point-of-care lung ultrasound (POCUS), in contrast, can directly assess the severity of characteristic pulmonary findings of COVID-19, and ultrasound has the advantage of being inexpensive, portable, and widely available for use in many clinical settings. For automated assessment of pulmonary findings, we have developed an unsupervised learning technique termed the calculated lung ultrasound (CLU) index. The CLU can quantify various types of lung findings, such as A- or B-lines, consolidations, and pleural effusions, and it uses these findings to calculate a CLU index score, a quantitative measure of pulmonary disease burden. This is accomplished using an unsupervised, patient-specific approach that does not require training on a large dataset. The CLU was tested on 52 lung ultrasound examinations from several institutions and demonstrated excellent concordance with radiologist findings in different pulmonary disease states. Given the global nature of COVID-19, the CLU would be useful for sonographers and physicians in resource-strapped areas with limited ultrasound training and diagnostic capacity, enabling more accurate assessment of pulmonary status.
Affiliation(s)
- Hersh Sagreiya
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Michael A. Jacobs
- The Russell H. Morgan Department of Radiology and Radiological Science, The Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Department of Diagnostic and Interventional Imaging, The University of Texas Health Science Center, Houston, TX 77030, USA
- Alireza Akhbardeh
- Department of Diagnostic and Interventional Imaging, The University of Texas Health Science Center, Houston, TX 77030, USA
- Ambient Digital LLC, Daly City, CA 94014, USA
9
Chang M, Ku Y. LSTM model for predicting the daily number of asthma patients in Seoul, South Korea, using meteorological and air pollution data. Environ Sci Pollut Res Int 2023;30:37440-37448. [PMID: 36574119] [DOI: 10.1007/s11356-022-24956-9]
Abstract
Asthma is a common respiratory disease that is affected by air pollutants and meteorological factors. In this study, we developed models that predict the daily number of patients receiving treatment for asthma using air pollution and meteorological data. A neural network with long short-term memory (LSTM) and fully connected (FC) layers was used. The daily number of asthma patients in the city of Seoul, the capital of South Korea, was collected from the National Health Insurance Service. Data from 2015 to 2018 were used as the training and validation datasets for model development, and unseen data from 2019 were used for testing. The daily number of asthma patients per 100,000 inhabitants was predicted. The LSTM-FC neural network model achieved a Pearson correlation coefficient of 0.984 (P < 0.001) and a root mean square error of 3.472 between the predicted and original values on the unseen testing dataset. The factors that impacted the prediction were the number of asthma patients in the time steps preceding the predicted date, the type of day (regular day or day after a holiday), minimum temperature, SO2, daily changes in cloud cover, and daily changes in the diurnal temperature range. We successfully developed a neural network that predicts the onset and exacerbation of asthma, and we identified the crucial influencing air pollutants and meteorological factors. This study will help establish appropriate measures according to the daily predicted number of asthma patients and reduce the daily onset and exacerbation of asthma in the susceptible population.
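Forecasting a daily count from its recent history plus same-day covariates typically starts with a sliding-window supervised dataset; a minimal sketch (window length and feature layout are assumptions, not the paper's exact setup):

```python
import numpy as np

def make_windows(series, exog, lookback):
    """Build (X, y) pairs: each sample is `lookback` days of past patient
    counts plus the prediction day's exogenous features (e.g. weather and
    air pollution); the target is that day's count."""
    X, y = [], []
    for t in range(lookback, len(series)):
        X.append(np.concatenate([series[t - lookback:t], exog[t]]))
        y.append(series[t])
    return np.array(X), np.array(y)

counts = np.arange(10, dtype=float)   # toy daily patient counts
weather = np.zeros((10, 3))           # toy meteorological/pollution features
X, y = make_windows(counts, weather, lookback=7)
print(X.shape, y.shape)  # → (3, 10) (3,)
```

Each row of X would then be fed to the LSTM-FC network as one training sample.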
Affiliation(s)
- Munyoung Chang
- Department of Otorhinolaryngology-Head and Neck Surgery, Chung-Ang University College of Medicine, 84 Heukseok-Ro, Dongjak-Gu, 06974, Seoul, South Korea
- Department of Electrical and Computer Engineering, Seoul National University, 1 Gwanak-Ro, Gwanak-Gu, 08826, Seoul, South Korea
- Yunseo Ku
- Department of Biomedical Engineering, Chungnam National University College of Medicine, 99 Daehak-Ro, Yuseong-Gu, 34134, Daejeon, South Korea
10
Hussein SA, Bayoumi AERS, Soliman AM. Automated detection of human mental disorder. J Electr Syst Inf Technol 2023;10:9. [DOI: 10.1186/s43067-023-00076-3]
Abstract
The pressures of daily life result in a proliferation of terms such as stress, anxiety, and mood swings. These feelings may develop into depression and more complicated mental problems. Unfortunately, mood and emotional changes are difficult to notice, and the underlying condition is often not treated until late; late diagnosis can appear as suicidal intentions and harmful behaviors. In this work, the main observable human facial behaviors are detected and classified by a model developed to assess a person's mental health. A Haar feature-based cascade is used to extract features from faces detected in the FER+ dataset. A VGG model classifies whether the user is normal or abnormal; in the abnormal case, the model predicts whether the user has depression, anxiety, or another disorder according to the detected facial expression. With this prediction, the required assistance and support can be provided in a timely manner. The system achieved an overall prediction accuracy of 95%.
11
Ur Rehman A, Naseer A, Karim S, Tamoor M, Naz S. Deep learning classifiers for computer-aided diagnosis of multiple lungs disease. J Xray Sci Technol 2023;31:1125-1143. [PMID: 37522236] [DOI: 10.3233/xst-230113]
Abstract
BACKGROUND Computer-aided diagnosis has gained momentum in the recent past. Advances in deep learning and the availability of huge volumes of data, along with increased computational capabilities, have reshaped diagnosis and prognosis procedures. OBJECTIVE These methods are proven to be relatively less expensive and safer alternatives to traditional approaches. This study is focused on efficient diagnosis of three very common diseases, lung cancer, pneumonia, and COVID-19, using X-ray images. METHODS Three different deep learning models are designed and developed to perform 4-way classification, using Inception V3, Convolutional Neural Networks (CNN), and Long Short-Term Memory (LSTM) models as building blocks. The performance of these models is evaluated using three publicly available datasets: the first contains images for lung cancer, the second contains images for COVID-19, and the third contains images for pneumonia and normal subjects. Combining the three datasets creates a class imbalance problem, which is resolved using pre-processing and data augmentation techniques. After data augmentation, 1386 subjects are randomly chosen for each class. RESULTS It is observed that CNN combined with LSTM (CNN-LSTM) produces significantly improved results (accuracy of 94.5%), better than CNN and InceptionV3-LSTM. Three-, five-, and ten-fold cross-validation is performed to verify all results calculated using the three different classifiers. CONCLUSIONS This research concludes that a single computer-aided diagnosis system can be developed for diagnosing multiple diseases.
Affiliation(s)
- Aziz Ur Rehman
- National University of Computer and Emerging Sciences, Faisal Town, Lahore, Pakistan
- Asma Naseer
- National University of Computer and Emerging Sciences, Faisal Town, Lahore, Pakistan
- Saira Karim
- National University of Computer and Emerging Sciences, Faisal Town, Lahore, Pakistan
- Maria Tamoor
- Forman Christian College University, Zahoor Ilahi Road, Lahore, Pakistan
- Samina Naz
- Muhammad Nawaz Sharif University of Engineering and Technology, Multan, Pakistan
12
Yu M, Zheng H, Xu D, Shuai Y, Tian S, Cao T, Zhou M, Zhu Y, Zhao S, Li X. Non-contact detection method of pregnant sows backfat thickness based on two-dimensional images. Anim Genet 2022;53:769-781. [PMID: 35989407] [DOI: 10.1111/age.13248]
Abstract
Since sow backfat thickness (BFT) is highly correlated with a sow's service life and reproductive effectiveness, dynamic monitoring of BFT is a critical component of large-scale sow farm productivity. Existing contact-based measurements of sow BFT have several problems, including high measurement intensity, stress reactions in sows, low biological safety, and difficulty meeting the requirements of repeated measurements. This article presents a two-dimensional (2D) image-based approach for determining the BFT of pregnant sows, combined with the backfat growth rate (BGR). The 2D image features of sows extracted by convolutional neural networks (CNN), and artificially defined phenotypic features such as hip width, hip height, body length, hip height-width ratio, length-width ratio, and waist-hip ratio, were each combined with BGR to construct prediction models for sow BFT using support vector regression (SVR). Testing and comparison showed that using a CNN to extract features from images could effectively replace artificially defined features, and that BGR contributed to improving the model's accuracy. The CNN-BGR-SVR model performed best, with an R2 of 0.72, a mean absolute error of 1.21 mm, a root mean square error of 1.50 mm, and a mean absolute percentage error of 7.57%. The results demonstrate that the CNN-BGR-SVR model based on 2D images is capable of detecting sow BFT, establishing a new reference for non-contact sow BFT detection technology.
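Combining CNN-extracted image features with the scalar BGR before the SVR amounts to appending one column to the feature matrix; a minimal sketch (array shapes and names are assumptions, not the authors' code):

```python
import numpy as np

def assemble_features(cnn_feat, bgr):
    """Append each sow's backfat growth rate (BGR) scalar to its
    CNN-extracted image feature vector, ready for a regressor."""
    return np.concatenate([cnn_feat, bgr[:, None]], axis=1)

feats = np.zeros((5, 128))   # 5 sows, 128-dim CNN features (toy size)
bgr = np.zeros(5)            # per-sow backfat growth rate (toy values)
X = assemble_features(feats, bgr)
print(X.shape)  # → (5, 129)
```

The resulting matrix X is what an SVR (or any other regressor) would be fit on against measured BFT values.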
Affiliation(s)
- Mengyuan Yu
- Key Laboratory of Smart Farming for Agricultural Animals, Huazhong Agricultural University, Wuhan, Hubei, China
- Shenzhen Institute of Nutrition and Health, Huazhong Agricultural University, Wuhan, Hubei, China
- Hongya Zheng
- Key Laboratory of Smart Farming for Agricultural Animals, Huazhong Agricultural University, Wuhan, Hubei, China
- Shenzhen Institute of Nutrition and Health, Huazhong Agricultural University, Wuhan, Hubei, China
- Dihong Xu
- Key Laboratory of Smart Farming for Agricultural Animals, Huazhong Agricultural University, Wuhan, Hubei, China
- Shenzhen Institute of Nutrition and Health, Huazhong Agricultural University, Wuhan, Hubei, China
- Yonghui Shuai
- Key Laboratory of Smart Farming for Agricultural Animals, Huazhong Agricultural University, Wuhan, Hubei, China
- Shenzhen Institute of Nutrition and Health, Huazhong Agricultural University, Wuhan, Hubei, China
- Shanfeng Tian
- Key Laboratory of Smart Farming for Agricultural Animals, Huazhong Agricultural University, Wuhan, Hubei, China
- Shenzhen Institute of Nutrition and Health, Huazhong Agricultural University, Wuhan, Hubei, China
- Tingjin Cao
- Key Laboratory of Smart Farming for Agricultural Animals, Huazhong Agricultural University, Wuhan, Hubei, China
- Shenzhen Institute of Nutrition and Health, Huazhong Agricultural University, Wuhan, Hubei, China
- Mingyan Zhou
- Key Laboratory of Smart Farming for Agricultural Animals, Huazhong Agricultural University, Wuhan, Hubei, China
- Shenzhen Institute of Nutrition and Health, Huazhong Agricultural University, Wuhan, Hubei, China
- Yuhua Zhu
- Shenzhen Institute of Nutrition and Health, Huazhong Agricultural University, Wuhan, Hubei, China
- Shenzhen Branch, Guangdong Laboratory for Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, Shenzhen, China
- Shuhong Zhao
- College of Animal Science and Technology, Huazhong Agricultural University, Wuhan, Hubei, China
- Xuan Li
- Key Laboratory of Smart Farming for Agricultural Animals, Huazhong Agricultural University, Wuhan, Hubei, China
- Shenzhen Institute of Nutrition and Health, Huazhong Agricultural University, Wuhan, Hubei, China
- Shenzhen Branch, Guangdong Laboratory for Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, Shenzhen, China
| |
13
Coleman S, Kerr D, Zhang Y. Image Sensing and Processing with Convolutional Neural Networks. Sensors (Basel, Switzerland) 2022; 22:3612. [PMID: 35632021] [PMCID: PMC9146735] [DOI: 10.3390/s22103612] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 04/11/2022] [Accepted: 05/06/2022] [Indexed: 06/15/2023]
Abstract
Convolutional neural networks are a class of deep neural networks that leverage spatial information, and they are therefore well suited to classifying images for a range of applications [...].
Affiliation(s)
- Sonya Coleman
- School of Computing, Engineering and Intelligent Systems, Ulster University, Londonderry BT48 7JL, UK
- Dermot Kerr
- School of Computing, Engineering and Intelligent Systems, Ulster University, Londonderry BT48 7JL, UK
- Yunzhou Zhang
- College of Information Science and Engineering, Northeastern University, Shenyang 110819, China
14
De Rosa L, L'Abbate S, Kusmic C, Faita F. Applications of artificial intelligence in lung ultrasound: Review of deep learning methods for COVID-19 fighting. Artif Intell Med Imaging 2022; 3:42-54. [DOI: 10.35711/aimi.v3.i2.42] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 12/19/2021] [Revised: 02/22/2022] [Accepted: 04/26/2022] [Indexed: 02/06/2023]
Abstract
BACKGROUND The pandemic outbreak of the novel coronavirus disease (COVID-19) highlighted the need for rapid, non-invasive, and widely accessible techniques that minimize the risk of patient cross-infection while enabling successful early detection and surveillance of the disease. In this regard, the lung ultrasound (LUS) technique has proved invaluable in both the differential diagnosis and the follow-up of COVID-19 patients, and its potential continues to grow as LUS is increasingly empowered by automated image processing techniques.
AIM To provide a systematic review of the application of artificial intelligence (AI) technology to medical LUS analysis of COVID-19 patients, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines.
METHODS A literature search was performed for relevant studies published from March 2020 (the outbreak of the pandemic) to 30 September 2021. Seventeen articles were included in the result synthesis of this paper.
RESULTS We present the main characteristics of the AI techniques, in particular deep learning (DL), adopted in the selected articles. The survey covers the types of architecture used, the availability of source code, network weights, and open-access datasets, the use of data augmentation and transfer learning, the types of input data and training/test datasets, and explainability.
CONCLUSION This review highlights the existing challenges, including the lack of large datasets of reliable COVID-19-based LUS images for testing the effectiveness of DL methods, and the ethical and regulatory issues associated with adopting automated systems in real clinical scenarios.
Affiliation(s)
- Laura De Rosa
- Institute of Clinical Physiology, Consiglio Nazionale delle Ricerche, Pisa 56124, Italy
- Serena L'Abbate
- Institute of Clinical Physiology, Consiglio Nazionale delle Ricerche, Pisa 56124, Italy; Institute of Life Sciences, Scuola Superiore Sant’Anna, Pisa 56124, Italy
- Claudia Kusmic
- Institute of Clinical Physiology, Consiglio Nazionale delle Ricerche, Pisa 56124, Italy
- Francesco Faita
- Institute of Clinical Physiology, Consiglio Nazionale delle Ricerche, Pisa 56124, Italy
15
Wang J, Yang X, Zhou B, Sohn JJ, Zhou J, Jacob JT, Higgins KA, Bradley JD, Liu T. Review of Machine Learning in Lung Ultrasound in COVID-19 Pandemic. J Imaging 2022; 8:65. [PMID: 35324620] [PMCID: PMC8952297] [DOI: 10.3390/jimaging8030065] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Received: 02/12/2022] [Revised: 03/01/2022] [Accepted: 03/02/2022] [Indexed: 12/25/2022]
Abstract
Ultrasound imaging of the lung has played an important role in managing patients with COVID-19-associated pneumonia and acute respiratory distress syndrome (ARDS). During the COVID-19 pandemic, lung ultrasound (LUS), or point-of-care ultrasound (POCUS), has been a popular diagnostic tool due to its unique imaging capability and logistical advantages over chest X-ray and CT. Pneumonia/ARDS is associated with the sonographic appearances of pleural line irregularities and B-line artefacts, which are caused by interstitial thickening and inflammation and increase in number with severity. Artificial intelligence (AI), particularly machine learning, is increasingly used as a critical tool to assist clinicians in LUS image reading and COVID-19 decision making. We conducted a systematic review of state-of-the-art machine learning technologies for LUS images in COVID-19 diagnosis, drawing on academic databases (PubMed and Google Scholar) and preprints on arXiv and TechRxiv. Openly accessible LUS datasets are listed. Various machine learning architectures have been employed to evaluate LUS and have shown high performance. This paper summarizes the current development of AI for COVID-19 management and the outlook for emerging trends combining AI-based LUS with robotics, telehealth, and other techniques.
Affiliation(s)
- Jing Wang
- Department of Radiation Oncology, Emory University, Atlanta, GA 30322, USA
- Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, GA 30322, USA
- Boran Zhou
- Department of Radiation Oncology, Emory University, Atlanta, GA 30322, USA
- James J. Sohn
- Department of Radiation Oncology, Virginia Commonwealth University, Richmond, VA 23219, USA
- Jun Zhou
- Department of Radiation Oncology, Emory University, Atlanta, GA 30322, USA
- Jesse T. Jacob
- Division of Infectious Diseases, Department of Medicine, Emory University, Atlanta, GA 30322, USA
- Kristin A. Higgins
- Department of Radiation Oncology, Emory University, Atlanta, GA 30322, USA
- Jeffrey D. Bradley
- Department of Radiation Oncology, Emory University, Atlanta, GA 30322, USA
- Tian Liu
- Department of Radiation Oncology, Emory University, Atlanta, GA 30322, USA
16
Zhao L, Lediju Bell MA. A Review of Deep Learning Applications in Lung Ultrasound Imaging of COVID-19 Patients. BME Frontiers 2022; 2022:9780173. [PMID: 36714302] [PMCID: PMC9880989] [DOI: 10.34133/2022/9780173] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Indexed: 02/02/2023]
Abstract
The massive and continuous spread of COVID-19 has motivated researchers around the world to intensely explore, understand, and develop new techniques for diagnosis and treatment. Although lung ultrasound imaging is a less established approach when compared to other medical imaging modalities such as X-ray and CT, multiple studies have demonstrated its promise to diagnose COVID-19 patients. At the same time, many deep learning models have been built to improve the diagnostic efficiency of medical imaging. The integration of these initially parallel efforts has led multiple researchers to report deep learning applications in medical imaging of COVID-19 patients, most of which demonstrate the outstanding potential of deep learning to aid in the diagnosis of COVID-19. This invited review is focused on deep learning applications in lung ultrasound imaging of COVID-19 and provides a comprehensive overview of ultrasound systems utilized for data acquisition, associated datasets, deep learning models, and comparative performance.
Affiliation(s)
- Lingyi Zhao
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, USA
- Muyinatu A. Lediju Bell
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, USA; Department of Computer Science, Johns Hopkins University, Baltimore, USA; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, USA