1
Kucerenko A, Buddenkotte T, Apostolova I, Klutmann S, Ledig C, Buchert R. Incorporating label uncertainty during the training of convolutional neural networks improves performance for the discrimination between certain and inconclusive cases in dopamine transporter SPECT. Eur J Nucl Med Mol Imaging 2025; 52:1535-1548. [PMID: 39592475] [PMCID: PMC11839851] [DOI: 10.1007/s00259-024-06988-0]
Abstract
PURPOSE Deep convolutional neural networks (CNN) hold promise for assisting the interpretation of dopamine transporter (DAT)-SPECT. For improved communication of uncertainty to the user, it is crucial to reliably discriminate certain from inconclusive cases that might be misclassified by strict application of a predefined decision threshold on the CNN output. This study tested two methods of incorporating existing label uncertainty during training to improve the utility of the CNN sigmoid output for this task. METHODS Three datasets were used retrospectively: a "development" dataset (n = 1740) for CNN training, validation and testing, and two independent out-of-distribution datasets (n = 640, 645) for testing only. In the development dataset, binary classification based on visual inspection was performed carefully by three well-trained readers. A ResNet-18 architecture was trained for binary classification of DAT-SPECT using as reference standard either a randomly selected vote ("random vote training", RVT), the proportion of "reduced" votes ("average vote training", AVT), or the majority vote (MVT) across the three readers. Balanced accuracy was computed separately for "inconclusive" sigmoid outputs (within a predefined interval around the 0.5 decision threshold) and for "certain" (non-inconclusive) sigmoid outputs. RESULTS The proportion of "inconclusive" test cases that had to be accepted to achieve a given balanced accuracy in the "certain" test cases was lower with RVT and AVT than with MVT in all datasets (e.g., 1.9% and 1.2% versus 2.8% for 98% balanced accuracy in "certain" test cases from the development dataset). In addition, RVT and AVT resulted in slightly higher balanced accuracy across all test cases independent of their certainty (97.3% and 97.5% versus 97.0% in the development dataset). CONCLUSION Making the between-readers discrepancy known to the CNN during training improves the utility of its sigmoid output for discriminating certain from inconclusive cases that might be misclassified when the predefined decision threshold is strictly applied, without compromising overall accuracy.
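A minimal sketch of how the three label schemes and the inconclusive band could be implemented, assuming binary votes (0 = normal, 1 = reduced) from three readers; the width of the inconclusive interval is an assumption, since the paper only states that it is predefined:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_targets(votes: np.ndarray, scheme: str) -> np.ndarray:
    """Derive training targets from per-case reader votes (0 = normal, 1 = reduced).

    votes: (n_cases, n_readers) binary array; scheme: 'MVT', 'AVT' or 'RVT'.
    """
    if scheme == "MVT":            # majority vote: hard 0/1 label
        return (votes.mean(axis=1) > 0.5).astype(float)
    if scheme == "AVT":            # average vote: soft label, e.g. 2/3 readers -> 0.667
        return votes.mean(axis=1)
    if scheme == "RVT":            # random vote: one randomly drawn reader per case
        idx = rng.integers(0, votes.shape[1], size=len(votes))
        return votes[np.arange(len(votes)), idx].astype(float)
    raise ValueError(f"unknown scheme: {scheme}")

def is_inconclusive(sigmoid_out: np.ndarray, band: float = 0.1) -> np.ndarray:
    """Flag outputs within +/-band of the 0.5 decision threshold as inconclusive."""
    return np.abs(sigmoid_out - 0.5) < band
```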
Affiliation(s)
- Aleksej Kucerenko
- xAILab Bamberg, Chair of Explainable Machine Learning, Faculty of Information Systems and Applied Computer Sciences, Otto-Friedrich-University, Bamberg, Germany
- Thomas Buddenkotte
- Department of Diagnostic and Interventional Radiology and Nuclear Medicine, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246, Hamburg, Germany
- Ivayla Apostolova
- Department of Diagnostic and Interventional Radiology and Nuclear Medicine, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246, Hamburg, Germany
- Susanne Klutmann
- Department of Diagnostic and Interventional Radiology and Nuclear Medicine, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246, Hamburg, Germany
- Christian Ledig
- xAILab Bamberg, Chair of Explainable Machine Learning, Faculty of Information Systems and Applied Computer Sciences, Otto-Friedrich-University, Bamberg, Germany
- Ralph Buchert
- Department of Diagnostic and Interventional Radiology and Nuclear Medicine, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246, Hamburg, Germany
2
Huang L, Ruan S, Xing Y, Feng M. A review of uncertainty quantification in medical image analysis: Probabilistic and non-probabilistic methods. Med Image Anal 2024; 97:103223. [PMID: 38861770] [DOI: 10.1016/j.media.2024.103223]
Abstract
The comprehensive integration of machine learning healthcare models within clinical practice remains suboptimal, despite the proliferation of high-performing solutions reported in the literature. A predominant factor hindering widespread adoption is insufficient evidence of the reliability of these models. Recently, uncertainty quantification methods have been proposed as a potential solution to quantify the reliability of machine learning models and thus increase the interpretability and acceptability of the results. In this review, we offer a comprehensive overview of the prevailing methods proposed to quantify the uncertainty inherent in machine learning models developed for various medical image tasks. In contrast to earlier reviews that focused exclusively on probabilistic methods, this review also explores non-probabilistic approaches, thereby furnishing a more holistic survey of research on uncertainty quantification for machine learning models. We summarize and discuss medical applications and the corresponding uncertainty evaluation protocols, focusing on the specific challenges of uncertainty in medical image analysis, and highlight potential directions for future research. Overall, this review aims to allow researchers from both clinical and technical backgrounds to gain a quick yet in-depth understanding of the research in uncertainty quantification for medical image analysis machine learning models.
Affiliation(s)
- Ling Huang
- Saw Swee Hock School of Public Health, National University of Singapore, Singapore
- Su Ruan
- Quantif, LITIS, University of Rouen Normandy, France
- Yucheng Xing
- Saw Swee Hock School of Public Health, National University of Singapore, Singapore
- Mengling Feng
- Saw Swee Hock School of Public Health, National University of Singapore, Singapore; Institute of Data Science, National University of Singapore, Singapore
3
Li X, Zhang H, Yue J, Yin L, Li W, Ding G, Peng B, Xie S. A multi-task deep learning approach for real-time view classification and quality assessment of echocardiographic images. Sci Rep 2024; 14:20484. [PMID: 39227373] [PMCID: PMC11372079] [DOI: 10.1038/s41598-024-71530-z]
Abstract
High-quality standard views in two-dimensional echocardiography are essential for accurate cardiovascular disease diagnosis and treatment decisions. However, the quality of echocardiographic images is highly dependent on the practitioner's experience, and ensuring timely quality control of echocardiographic images in the clinical setting remains a significant challenge. In this study, we aimed to propose new quality assessment criteria and develop a multi-task deep learning model for real-time multi-view classification (six standard views and "others") and image quality assessment. A total of 170,311 echocardiographic images collected between 2015 and 2022 were utilized to develop and evaluate the model. On the test set, the model achieved an overall classification accuracy of 97.8% (95% CI 97.7-98.0) and a mean absolute error of 6.54 (95% CI 6.43-6.66) for quality scoring. A single-frame inference time of 2.8 ms was achieved, meeting real-time requirements. We also analyzed pre-stored images from three distinct groups of echocardiographers (junior, senior, and expert) to evaluate the clinical feasibility of the model. Our multi-task model can provide objective, reproducible, and clinically significant view quality assessment results for echocardiographic images, potentially optimizing the clinical image acquisition process and improving AI-assisted diagnosis accuracy.
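One plausible shape for such a multi-task model is a shared backbone with a view-classification head and a quality-regression head; the backbone choice (ResNet-18), head sizes, and loss weighting below are illustrative assumptions, not the authors' published architecture:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class MultiTaskEchoNet(nn.Module):
    """Shared CNN backbone with two heads: view classification and quality regression."""

    def __init__(self, n_views: int = 7):          # six standard views + "others"
        super().__init__()
        base = resnet18(weights=None)
        self.backbone = nn.Sequential(*list(base.children())[:-1])  # drop final fc
        self.view_head = nn.Linear(512, n_views)    # view-classification logits
        self.quality_head = nn.Linear(512, 1)       # scalar quality score

    def forward(self, x):
        feat = self.backbone(x).flatten(1)           # (N, 512)
        return self.view_head(feat), self.quality_head(feat).squeeze(1)

def multitask_loss(view_logits, q_pred, view_y, q_y, alpha: float = 1.0):
    # cross-entropy for the view label, L1 (reported as MAE) for the quality score
    return nn.functional.cross_entropy(view_logits, view_y) + \
        alpha * nn.functional.l1_loss(q_pred, q_y)
```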
Affiliation(s)
- Xinyu Li
- School of Computer Science and Software Engineering, Southwest Petroleum University, Chengdu, 610500, China
- Hongmei Zhang
- Ultrasound in Cardiac Electrophysiology and Biomechanics Key Laboratory of Sichuan Province, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, 32# W. Sec 2, 1st Ring Rd., Chengdu, 610072, China
- Department of Cardiovascular Ultrasound & Noninvasive Cardiology, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, 32# W. Sec 2, 1st Ring Rd., Chengdu, 610072, China
- Jing Yue
- School of Computer Science and Software Engineering, Southwest Petroleum University, Chengdu, 610500, China
- Lixue Yin
- Ultrasound in Cardiac Electrophysiology and Biomechanics Key Laboratory of Sichuan Province, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, 32# W. Sec 2, 1st Ring Rd., Chengdu, 610072, China
- Department of Cardiovascular Ultrasound & Noninvasive Cardiology, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, 32# W. Sec 2, 1st Ring Rd., Chengdu, 610072, China
- Wenhua Li
- Ultrasound in Cardiac Electrophysiology and Biomechanics Key Laboratory of Sichuan Province, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, 32# W. Sec 2, 1st Ring Rd., Chengdu, 610072, China
- Department of Cardiovascular Ultrasound & Noninvasive Cardiology, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, 32# W. Sec 2, 1st Ring Rd., Chengdu, 610072, China
- Geqi Ding
- Ultrasound in Cardiac Electrophysiology and Biomechanics Key Laboratory of Sichuan Province, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, 32# W. Sec 2, 1st Ring Rd., Chengdu, 610072, China
- Department of Cardiovascular Ultrasound & Noninvasive Cardiology, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, 32# W. Sec 2, 1st Ring Rd., Chengdu, 610072, China
- Bo Peng
- School of Computer Science and Software Engineering, Southwest Petroleum University, Chengdu, 610500, China
- Shenghua Xie
- Ultrasound in Cardiac Electrophysiology and Biomechanics Key Laboratory of Sichuan Province, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, 32# W. Sec 2, 1st Ring Rd., Chengdu, 610072, China
- Department of Cardiovascular Ultrasound & Noninvasive Cardiology, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, 32# W. Sec 2, 1st Ring Rd., Chengdu, 610072, China
4
Zhao H, Zheng Q, Teng C, Yasrab R, Drukker L, Papageorghiou AT, Noble JA. Memory-based unsupervised video clinical quality assessment with multi-modality data in fetal ultrasound. Med Image Anal 2023; 90:102977. [PMID: 37778101] [DOI: 10.1016/j.media.2023.102977]
Abstract
In obstetric sonography, the quality of ultrasound video acquisition is crucial for accurate (manual or automated) biometric measurement and fetal health assessment. However, fetal ultrasound involves free-hand probe manipulation, which can make it challenging to capture high-quality videos for fetal biometry, especially for less-experienced sonographers. Manually checking the quality of acquired videos is time-consuming and subjective, and requires a comprehensive understanding of fetal anatomy. It would therefore be advantageous to develop an automatic quality assessment method to support video standardization and improve the diagnostic accuracy of video-based analysis. In this paper, we propose a general and purely data-driven video-based quality assessment framework that directly learns a distinguishable feature representation from high-quality ultrasound videos alone, without anatomical annotations. Our solution effectively utilizes both spatial and temporal information of ultrasound videos. The spatio-temporal representation is learned by a bi-directional reconstruction between the video space and the feature space, enhanced by a key-query memory module in the feature space. To further improve performance, two additional modalities are introduced during training: sonographer gaze and optical flow derived from the video. Two clinical quality assessment tasks in fetal ultrasound are considered in our experiments, i.e., measurement of the fetal head circumference and cerebellar diameter; in both, low-quality videos are detected by a large reconstruction error in the feature space. Extensive experimental evaluation demonstrates the merits of our approach.
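The detection rule the abstract describes, flagging clips whose reconstruction error is large, can be sketched as follows; `encoder` and `decoder` stand in for the paper's memory-enhanced bi-directional reconstruction model, and the threshold is an assumption that would in practice be calibrated on high-quality training videos:

```python
import torch

@torch.no_grad()
def flag_low_quality(videos, encoder, decoder, threshold: float):
    """Score each clip by reconstruction error; a large error suggests low quality.

    videos: iterable of (T, C, H, W) tensors. Returns a boolean low-quality
    mask plus the raw per-clip error scores.
    """
    errors = []
    for clip in videos:
        z = encoder(clip)                     # project clip into the feature space
        recon = decoder(z)                    # reconstruct the clip from features
        errors.append(torch.mean((recon - clip) ** 2).item())
    errors = torch.tensor(errors)
    return errors > threshold, errors
```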
Affiliation(s)
- He Zhao
- Institute of Biomedical Engineering, University of Oxford, United Kingdom
- Qingqing Zheng
- Guangdong Provincial Key Laboratory of Computer Vision and Virtual Reality, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, China
- Clare Teng
- Institute of Biomedical Engineering, University of Oxford, United Kingdom
- Robail Yasrab
- Institute of Biomedical Engineering, University of Oxford, United Kingdom
- Lior Drukker
- Nuffield Department of Women's and Reproductive Health, University of Oxford, United Kingdom; Department of Obstetrics and Gynecology, Tel-Aviv University, Israel
- Aris T Papageorghiou
- Nuffield Department of Women's and Reproductive Health, University of Oxford, United Kingdom
- J Alison Noble
- Institute of Biomedical Engineering, University of Oxford, United Kingdom
5
Guo Y, Hu M, Min X, Wang Y, Dai M, Zhai G, Zhang XP, Yang X. Blind Image Quality Assessment for Pathological Microscopic Image Under Screen and Immersion Scenarios. IEEE Trans Med Imaging 2023; 42:3295-3306. [PMID: 37267133] [DOI: 10.1109/tmi.2023.3282387]
Abstract
High-quality pathological microscopic images are essential for physicians and pathologists to make a correct diagnosis. Image quality assessment (IQA) can quantify the degree of visual distortion in images and guide the imaging system to improve image quality, thus raising the quality of pathological microscopic images. Current IQA methods are not ideal for pathological microscopic images because of the specific characteristics of these images. In this paper, we present a deep learning-based blind IQA model with a saliency block and a patch block for pathological microscopic images. The saliency block and patch block handle local and global distortions, respectively. To better capture the areas pathologists attend to when viewing pathological images, the saliency block is fine-tuned with pathologists' eye-movement data. The patch block captures global information strongly related to image quality via interactions between image patches from different positions. The performance of the developed model is validated on the purpose-built Pathological Microscopic Image Quality Database under Screen and Immersion Scenarios (PMIQD-SIS) and cross-validated on five public datasets. Ablation experiments demonstrate the contribution of the added blocks. The dataset and the corresponding code are publicly available at: https://github.com/mikugyf/PMIQD-SIS.
6
Zhang Y, Zhu H, Cheng J, Wang J, Gu X, Han J, Zhang Y, Zhao Y, He Y, Zhang H. Improving the Quality of Fetal Heart Ultrasound Imaging With Multihead Enhanced Self-Attention and Contrastive Learning. IEEE J Biomed Health Inform 2023; 27:5518-5529. [PMID: 37556337] [DOI: 10.1109/jbhi.2023.3303573]
Abstract
Fetal congenital heart disease (FCHD) is a common, serious birth defect affecting ∼1% of newborns annually. Fetal echocardiography is the most effective and important technique for prenatal FCHD diagnosis. The prerequisites for accurate ultrasound FCHD diagnosis are accurate view recognition and high-quality diagnostic view extraction. However, these manual clinical procedures have drawbacks such as varying technical capabilities and inefficiency. Automatic identification of high-quality multiview fetal heart scan images is therefore highly desirable to improve the efficiency and accuracy of prenatal FCHD diagnosis. Here, we present a framework for multiview fetal heart ultrasound image recognition and quality assessment that comprises two parts: a multiview classification and localization network (MCLN) and an improved contrastive learning network (ICLN). In the MCLN, a multihead enhanced self-attention mechanism is applied to construct the classification network and identify six accurate and interpretable views of the fetal heart. In the ICLN, both anatomical structure standardization and image clarity are considered. With contrastive learning, the absolute loss, feature relative loss and predicted-value relative loss are combined to achieve favorable quality assessment results. Experiments show that the MCLN outperforms other state-of-the-art networks by 1.52-13.61% in F1 score across six standard view recognition tasks, and the ICLN is comparable to expert cardiologists in the quality assessment of fetal heart ultrasound images, with 97% of test-set scores within 2 points for the four-chamber view task. Thus, our architecture offers great potential for helping cardiologists improve quality control of fetal echocardiographic images in clinical practice.
7
Seoni S, Jahmunah V, Salvi M, Barua PD, Molinari F, Acharya UR. Application of uncertainty quantification to artificial intelligence in healthcare: A review of last decade (2013-2023). Comput Biol Med 2023; 165:107441. [PMID: 37683529] [DOI: 10.1016/j.compbiomed.2023.107441]
Abstract
Uncertainty estimation in healthcare involves quantifying and understanding the inherent uncertainty or variability associated with medical predictions, diagnoses, and treatment outcomes. In this era of Artificial Intelligence (AI) models, uncertainty estimation becomes vital to ensure safe decision-making in the medical field. Therefore, this review focuses on the application of uncertainty techniques to machine and deep learning models in healthcare. A systematic literature review was conducted using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Our analysis revealed that Bayesian methods were the predominant technique for uncertainty quantification in machine learning models, with Fuzzy systems being the second most used approach. Regarding deep learning models, Bayesian methods emerged as the most prevalent approach, finding application in nearly all aspects of medical imaging. Most of the studies reported in this paper focused on medical images, highlighting the prevalent application of uncertainty quantification techniques using deep learning models compared to machine learning models. Interestingly, we observed a scarcity of studies applying uncertainty quantification to physiological signals. Thus, future research on uncertainty quantification should prioritize investigating the application of these techniques to physiological signals. Overall, our review highlights the significance of integrating uncertainty techniques in healthcare applications of machine learning and deep learning models. This can provide valuable insights and practical solutions to manage uncertainty in real-world medical data, ultimately improving the accuracy and reliability of medical diagnoses and treatment recommendations.
Affiliation(s)
- Silvia Seoni
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Massimo Salvi
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Prabal Datta Barua
- School of Business (Information System), University of Southern Queensland, Toowoomba, QLD, 4350, Australia; Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW, 2007, Australia
- Filippo Molinari
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- U Rajendra Acharya
- School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Australia
8
Ferraz S, Coimbra M, Pedrosa J. Assisted probe guidance in cardiac ultrasound: A review. Front Cardiovasc Med 2023; 10:1056055. [PMID: 36865885] [PMCID: PMC9971589] [DOI: 10.3389/fcvm.2023.1056055]
Abstract
Echocardiography is the most frequently used imaging modality in cardiology. However, its acquisition is affected by inter-observer variability and largely dependent on the operator's experience. In this context, artificial intelligence techniques could reduce these variabilities and provide a user-independent system. In recent years, machine learning (ML) algorithms have been used in echocardiography to automate echocardiographic acquisition. This review focuses on state-of-the-art studies that use ML to automate tasks related to the acquisition of echocardiograms, including quality assessment (QA), recognition of cardiac views and assisted probe guidance during the scanning process. The results indicate that the performance of automated acquisition was generally good, but most studies lack variability in their datasets. From our comprehensive review, we believe automated acquisition has the potential not only to improve the accuracy of diagnosis, but also to help novice operators build expertise and to facilitate point-of-care healthcare in medically underserved areas.
Affiliation(s)
- Sofia Ferraz
- Institute for Systems and Computer Engineering, Technology and Science INESC TEC, Porto, Portugal
- Faculty of Engineering of the University of Porto (FEUP), Porto, Portugal
- Miguel Coimbra
- Institute for Systems and Computer Engineering, Technology and Science INESC TEC, Porto, Portugal
- Faculty of Sciences of the University of Porto (FCUP), Porto, Portugal
- João Pedrosa
- Institute for Systems and Computer Engineering, Technology and Science INESC TEC, Porto, Portugal
- Faculty of Engineering of the University of Porto (FEUP), Porto, Portugal
9
Zamzmi G, Rajaraman S, Hsu LY, Sachdev V, Antani S. Real-time echocardiography image analysis and quantification of cardiac indices. Med Image Anal 2022; 80:102438. [PMID: 35868819] [PMCID: PMC9310146] [DOI: 10.1016/j.media.2022.102438]
Abstract
Deep learning has huge potential to transform echocardiography in clinical practice and point-of-care ultrasound testing by providing real-time analysis of cardiac structure and function. Automated echocardiography analysis benefits from the use of machine learning for tasks such as image quality assessment, view classification, cardiac region segmentation, and quantification of diagnostic indices. Taking advantage of high-performing deep neural networks, we propose a novel and efficient real-time system for echocardiography analysis and quantification. Our system uses a self-supervised, modality-specific representation trained on a publicly available large-scale dataset. The trained representation is used to enhance the learning of target echo tasks with relatively small datasets. We also present a novel Trilateral Attention Network (TaNet) for real-time cardiac region segmentation. The proposed network uses a module for region localization and three lightweight pathways for encoding rich low-level, textural, and high-level features. Feature embeddings from these individual pathways are then aggregated for cardiac region segmentation. The network is fine-tuned using a joint loss function and training strategy. We extensively evaluate the proposed system and its components, which are echo view retrieval, cardiac segmentation, and quantification, using four echocardiography datasets. Our experimental results show a consistent improvement in the performance of echocardiography analysis tasks with enhanced computational efficiency, charting a path toward adoption in clinical practice. Specifically, our results show superior real-time performance in retrieving good-quality echoes for individual cardiac views, segmenting cardiac chambers with complex overlaps, and extracting cardiac indices that agree closely with experts' values. The source code of our implementation can be found on the project's GitHub page.
Affiliation(s)
- Ghada Zamzmi
- Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Sivaramakrishnan Rajaraman
- Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Li-Yueh Hsu
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, MD, USA
- Vandana Sachdev
- Echocardiography Laboratory, National Heart, Lung, and Blood Institute, National Institutes of Health, Bethesda, MD, USA
- Sameer Antani
- Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
10
Saeed SU, Fu Y, Stavrinides V, Baum ZMC, Yang Q, Rusu M, Fan RE, Sonn GA, Noble JA, Barratt DC, Hu Y. Image quality assessment for machine learning tasks using meta-reinforcement learning. Med Image Anal 2022; 78:102427. [PMID: 35344824] [DOI: 10.1016/j.media.2022.102427]
Abstract
In this paper, we consider image quality assessment (IQA) as a measure of how images are amenable with respect to a given downstream task, or task amenability. When the task is performed using machine learning algorithms, such as a neural-network-based task predictor for image classification or segmentation, the performance of the task predictor provides an objective estimate of task amenability. In this work, we use an IQA controller to predict the task amenability which, itself being parameterised by neural networks, can be trained simultaneously with the task predictor. We further develop a meta-reinforcement learning framework to improve the adaptability for both IQA controllers and task predictors, such that they can be fine-tuned efficiently on new datasets or meta-tasks. We demonstrate the efficacy of the proposed task-specific, adaptable IQA approach, using two clinical applications for ultrasound-guided prostate intervention and pneumonia detection on X-ray images.
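A highly simplified, single-step sketch of the task-amenability idea under stated assumptions: a controller network scores each image, the task predictor trains with those scores as sample weights, and the controller is reinforced by the task predictor's validation performance. The network shapes, the reward definition, and the REINFORCE-style update are illustrative, not the published meta-RL algorithm:

```python
import torch
import torch.nn as nn

controller = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 1), nn.Sigmoid())
task_net = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))
ctrl_opt = torch.optim.Adam(controller.parameters(), lr=1e-4)
task_opt = torch.optim.Adam(task_net.parameters(), lr=1e-3)

def train_step(x, y, x_val, y_val):
    # Controller assigns a per-image amenability score in (0, 1).
    w = controller(x).squeeze(1)

    # Task predictor trains on a score-weighted loss (scores detached).
    task_loss = (w.detach() * nn.functional.cross_entropy(
        task_net(x), y, reduction="none")).mean()
    task_opt.zero_grad(); task_loss.backward(); task_opt.step()

    # Reward the controller with the task predictor's validation accuracy.
    with torch.no_grad():
        reward = (task_net(x_val).argmax(1) == y_val).float().mean()
    ctrl_loss = -(torch.log(w + 1e-8) * reward).mean()   # REINFORCE-style update
    ctrl_opt.zero_grad(); ctrl_loss.backward(); ctrl_opt.step()
```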
Affiliation(s)
- Shaheer U Saeed
- Centre for Medical Image Computing, Wellcome/EPSRC Centre for Interventional & Surgical Sciences, and Department of Medical Physics & Biomedical Engineering, University College London, London, UK
- Yunguan Fu
- Centre for Medical Image Computing, Wellcome/EPSRC Centre for Interventional & Surgical Sciences, and Department of Medical Physics & Biomedical Engineering, University College London, London, UK; InstaDeep, London, UK
- Vasilis Stavrinides
- Division of Surgery & Interventional Science, University College London, London, UK; Department of Urology, University College Hospital NHS Foundation Trust, London, UK
- Zachary M C Baum
- Centre for Medical Image Computing, Wellcome/EPSRC Centre for Interventional & Surgical Sciences, and Department of Medical Physics & Biomedical Engineering, University College London, London, UK
- Qianye Yang
- Centre for Medical Image Computing, Wellcome/EPSRC Centre for Interventional & Surgical Sciences, and Department of Medical Physics & Biomedical Engineering, University College London, London, UK
- Mirabela Rusu
- Department of Radiology, Stanford University, Stanford, California, USA
- Richard E Fan
- Department of Urology, Stanford University, Stanford, California, USA
- Geoffrey A Sonn
- Department of Radiology, Stanford University, Stanford, California, USA; Department of Urology, Stanford University, Stanford, California, USA
- J Alison Noble
- Department of Engineering Science, University of Oxford, Oxford, UK
- Dean C Barratt
- Centre for Medical Image Computing, Wellcome/EPSRC Centre for Interventional & Surgical Sciences, and Department of Medical Physics & Biomedical Engineering, University College London, London, UK
- Yipeng Hu
- Centre for Medical Image Computing, Wellcome/EPSRC Centre for Interventional & Surgical Sciences, and Department of Medical Physics & Biomedical Engineering, University College London, London, UK; Department of Engineering Science, University of Oxford, Oxford, UK
11
Towards targeted ultrasound-guided prostate biopsy by incorporating model and label uncertainty in cancer detection. Int J Comput Assist Radiol Surg 2021; 17:121-128. [PMID: 34783976] [DOI: 10.1007/s11548-021-02485-z]
Abstract
PURPOSE Systematic prostate biopsy is widely used for cancer diagnosis. The procedure is blind to the underlying prostate tissue micro-structure; hence, it can lead to a high rate of false negatives. Development of a machine-learning model that can reliably identify suspicious cancer regions is highly desirable. However, the models proposed to date do not consider the uncertainty present in their output or the data to benefit clinical decision making for targeting biopsy. METHODS We propose a deep network for improved detection of prostate cancer in systematic biopsy that considers both label and model uncertainty. The architecture of our model is based on U-Net, trained with temporal enhanced ultrasound (TeUS) data. We estimate cancer detection uncertainty using test-time augmentation and test-time dropout. We then use uncertainty metrics to report the cancer probability for regions with high confidence, to support clinical decision making during the biopsy procedure. RESULTS Experiments for prostate cancer classification include data from 183 prostate biopsy cores of 41 patients. We achieve an area under the curve, sensitivity, specificity and balanced accuracy of 0.79, 0.78, 0.71 and 0.75, respectively. CONCLUSION Our key contribution is to automatically estimate model and label uncertainty towards enabling targeted ultrasound-guided prostate biopsy. We anticipate that such uncertainty information can decrease the number of unnecessary biopsies while achieving a higher rate of cancer yield.
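Test-time dropout, one of the two uncertainty estimators the abstract names, can be sketched as repeated stochastic forward passes; the sample count and the idea of thresholding the variance to keep only high-confidence regions are assumptions:

```python
import torch

def mc_predict(model, x, n_samples: int = 20):
    """Test-time dropout: keep dropout active at inference and sample repeatedly.

    Returns the mean prediction and the per-output variance, which can be
    thresholded so that only high-confidence cancer probabilities are reported.
    """
    model.train()   # enables dropout layers; use with care if the model has batchnorm
    with torch.no_grad():
        samples = torch.stack([torch.sigmoid(model(x)) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.var(dim=0)
```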
12
de Siqueira VS, Borges MM, Furtado RG, Dourado CN, da Costa RM. Artificial intelligence applied to support medical decisions for the automatic analysis of echocardiogram images: A systematic review. Artif Intell Med 2021; 120:102165. [PMID: 34629153] [DOI: 10.1016/j.artmed.2021.102165]
Abstract
The echocardiogram is widely used in the diagnosis of heart disease. However, its analysis is largely dependent on the physician's experience. In this regard, artificial intelligence has become an essential technology to assist physicians. This study is a Systematic Literature Review (SLR) of primary state-of-the-art studies that used Artificial Intelligence (AI) techniques to automate echocardiogram analyses. Searches on the leading scientific article indexing platforms using a search string returned approximately 1400 articles. After applying the inclusion and exclusion criteria, 118 articles were selected for the detailed SLR. This SLR presents a thorough investigation of AI applied to support medical decisions for the main types of echocardiogram (transthoracic, transesophageal, Doppler, stress, and fetal). The data extraction indicated that the primary research interest of the studies comprised four groups: 1) improvement of image quality; 2) identification of the cardiac window vision plane; 3) quantification and analysis of cardiac functions; and 4) detection and classification of cardiac diseases. The articles were categorized and grouped to show the main contributions of the literature to each type of echocardiogram. The results indicate that Deep Learning (DL) methods presented the best results for the detection and segmentation of the heart walls, right and left atrium and ventricles, and classification of heart diseases using images/videos obtained by echocardiography. Models that used Convolutional Neural Networks (CNN) and their variations showed the best results for all groups. The evidence produced by these results indicates that DL has contributed significantly to advances in automated echocardiogram analysis. Although several solutions for automated analysis of echocardiograms were presented, this area of research still has great potential for further studies to improve the accuracy of results already known in the literature.
Affiliation(s)
- Vilson Soares de Siqueira
- Federal Institute of Tocantins, Av. Bernado Sayão, S/N, Santa Maria, Colinas do Tocantins, TO, Brazil; Federal University of Goias, Alameda Palmeiras, Quadra D, Câmpus Samambaia, Goiânia, GO, Brazil
- Moisés Marcos Borges
- Diagnostic Imaging Center - CDI, Av. Portugal, 1155, St. Marista, Goiânia, GO, Brazil
- Rogério Gomes Furtado
- Diagnostic Imaging Center - CDI, Av. Portugal, 1155, St. Marista, Goiânia, GO, Brazil
- Colandy Nunes Dourado
- Diagnostic Imaging Center - CDI, Av. Portugal, 1155, St. Marista, Goiânia, GO, Brazil. http://www.cdigoias.com.br
- Ronaldo Martins da Costa
- Federal University of Goias, Alameda Palmeiras, Quadra D, Câmpus Samambaia, Goiânia, GO, Brazil
13
Cao X, Chen H, Li Y, Peng Y, Wang S, Cheng L. Dilated densely connected U-Net with uncertainty focus loss for 3D ABUS mass segmentation. Comput Methods Programs Biomed 2021; 209:106313. [PMID: 34364182] [DOI: 10.1016/j.cmpb.2021.106313]
Abstract
BACKGROUND AND OBJECTIVE Accurate segmentation of breast masses in 3D automated breast ultrasound (ABUS) images plays an important role in qualitative and quantitative ABUS image analysis. Yet this task is challenging due to the low signal-to-noise ratio and serious artifacts in ABUS images, the large variation in shape and size of breast masses, and the small training dataset compared with natural images. The purpose of this study is to address these difficulties by designing a dilated densely connected U-Net (D2U-Net) together with an uncertainty focus loss. METHODS A lightweight yet effective densely connected segmentation network is constructed to extensively explore feature representations in the small ABUS dataset. To deal with the high variation in shape and size of breast masses, a set of hybrid dilated convolutions is integrated into the dense blocks of the D2U-Net. We further propose an uncertainty focus loss that puts more attention on unreliable network predictions, especially ambiguous mass boundaries caused by the low signal-to-noise ratio and artifacts. The segmentation algorithm is evaluated on an ABUS dataset of 170 volumes from 107 patients. Ablation analysis and comparison with existing methods are conducted to verify the effectiveness of the proposed method. RESULTS Experimental results demonstrate that the proposed algorithm outperforms existing methods on 3D ABUS mass segmentation, with a Dice similarity coefficient, Jaccard index and 95% Hausdorff distance of 69.02%, 56.61% and 4.92 mm, respectively. CONCLUSIONS The proposed method is effective in segmenting breast masses in our small ABUS dataset, especially breast masses with large shape and size variations.
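The exact form of the uncertainty focus loss is not given in the abstract; one way such a loss can be realized is by weighting the per-voxel cross-entropy with a term that peaks where the prediction is most ambiguous (p ≈ 0.5). The functional form and gamma below are assumptions for illustration:

```python
import torch

def uncertainty_focus_loss(logits, target, gamma: float = 2.0):
    """Up-weight ambiguous predictions in a binary segmentation loss.

    The weight 4*p*(1-p) lies in [0, 1] and is maximal at p = 0.5, so training
    focuses on unreliable voxels such as ambiguous mass boundaries.
    """
    p = torch.sigmoid(logits)
    bce = torch.nn.functional.binary_cross_entropy_with_logits(
        logits, target, reduction="none")
    weight = (4.0 * p * (1.0 - p)) ** gamma
    return (weight * bce).mean()
```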
Affiliation(s)
- Xuyang Cao
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
- Houjin Chen
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
- Yanfeng Li
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
- Yahui Peng
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
- Shu Wang
- Peking University People's Hospital, Beijing 100044, China
- Lin Cheng
- Peking University People's Hospital, Beijing 100044, China
14
Gao Y, Zhu Y, Liu B, Hu Y, Yu G, Guo Y. Automated Recognition of Ultrasound Cardiac Views Based on Deep Learning with Graph Constraint. Diagnostics (Basel) 2021; 11:1177. [PMID: 34209538] [PMCID: PMC8303427] [DOI: 10.3390/diagnostics11071177]
Abstract
In transthoracic echocardiographic (TTE) examination, it is essential to identify the cardiac views accurately. Computer-aided recognition is expected to improve the accuracy of cardiac view identification in TTE examinations, particularly when images are obtained by non-trained providers. A new method for automatic recognition of cardiac views is proposed, consisting of three processes. First, a spatial transformer network is applied to learn cardiac shape changes during a cardiac cycle, which reduces intra-class variability. Second, a channel attention mechanism is introduced to adaptively recalibrate channel-wise feature responses. Finally, structured signals derived from the similarities among cardiac views are transformed into a graph-based image embedding, which acts as an unsupervised regularization constraint to improve generalization accuracy. The proposed method is trained and tested on 171,792 cardiac images from 584 subjects. The overall accuracy of the proposed method for cardiac image classification is 99.10%, and the mean AUC is 99.36%, better than known methods. Moreover, the overall accuracy is 97.73% and the mean AUC is 98.59% on an independent test set of 37,883 images from 100 subjects. The proposed automated recognition model achieved accuracy comparable to the true cardiac views and can thus be applied clinically to help find standard cardiac views.
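The channel attention step, "adaptively recalibrating channel-wise feature responses", is commonly realized as a squeeze-and-excitation block; the sketch below shows that standard form, with the reduction ratio as an assumption rather than the authors' setting:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel recalibration."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                        # x: (N, C, H, W)
        s = x.mean(dim=(2, 3))                   # squeeze: global average pool -> (N, C)
        w = self.fc(s).unsqueeze(-1).unsqueeze(-1)
        return x * w                             # excite: rescale each channel
```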
Affiliation(s)
- Yanhua Gao
- Department of Medical Imaging, The First Affiliated Hospital of Xi’an Jiaotong University, #277 West Yanta Road, Xi’an 710061, China
- Department of Ultrasound, Shaanxi Provincial People’s Hospital, #256 West Youyi Road, Xi’an 710068, China
- Yuan Zhu
- Department of Ultrasound, Shaanxi Provincial People’s Hospital, #256 West Youyi Road, Xi’an 710068, China
- Bo Liu
- Department of Ultrasound, Shaanxi Provincial People’s Hospital, #256 West Youyi Road, Xi’an 710068, China
- Yue Hu
- Department of Biomedical Engineering, School of Basic Medical Science, Central South University, #172 Tongzipo Road, Changsha 410013, China
- Gang Yu
- Department of Biomedical Engineering, School of Basic Medical Science, Central South University, #172 Tongzipo Road, Changsha 410013, China
- Correspondence: Tel./Fax: +0731-8265-0001 (G.Y.); +029-8532-3112 (Y.G.)
- Youmin Guo
- Department of Medical Imaging, The First Affiliated Hospital of Xi’an Jiaotong University, #277 West Yanta Road, Xi’an 710061, China
- Correspondence: Tel./Fax: +0731-8265-0001 (G.Y.); +029-8532-3112 (Y.G.)
15
Komatsu M, Sakai A, Dozen A, Shozu K, Yasutomi S, Machino H, Asada K, Kaneko S, Hamamoto R. Towards Clinical Application of Artificial Intelligence in Ultrasound Imaging. Biomedicines 2021; 9:720. [PMID: 34201827] [PMCID: PMC8301304] [DOI: 10.3390/biomedicines9070720]
Abstract
Artificial intelligence (AI) is being increasingly adopted in medical research and applications. Medical AI devices have continuously been approved by the Food and Drug Administration in the United States and by the responsible institutions of other countries. Ultrasound (US) imaging is commonly used in an extensive range of medical fields. However, AI-based US imaging analysis and its clinical implementation have not progressed as steadily as in other medical imaging modalities. Characteristic issues of US imaging, owing to its manual operation and acoustic shadows, make image quality control difficult. In this review, we introduce the global trends of medical AI research in US imaging from both clinical and basic perspectives. We also discuss US image preprocessing, algorithms suited to US imaging analysis, AI explainability for obtaining informed consent, the approval process for medical AI devices, and future perspectives towards the clinical application of AI-based US diagnostic support technologies.
Affiliation(s)
- Masaaki Komatsu
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Akira Sakai
- Artificial Intelligence Laboratory, Research Unit, Fujitsu Research, Fujitsu Ltd., 4-1-1 Kamikodanaka, Nakahara-ku, Kawasaki, Kanagawa 211-8588, Japan
- RIKEN AIP—Fujitsu Collaboration Center, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Biomedical Science and Engineering Track, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Ai Dozen
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Kanto Shozu
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Suguru Yasutomi
- Artificial Intelligence Laboratory, Research Unit, Fujitsu Research, Fujitsu Ltd., 4-1-1 Kamikodanaka, Nakahara-ku, Kawasaki, Kanagawa 211-8588, Japan
- RIKEN AIP—Fujitsu Collaboration Center, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Hidenori Machino
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Ken Asada
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Syuzo Kaneko
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Ryuji Hamamoto
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Biomedical Science and Engineering Track, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
16
Ulloa Cerna AE, Jing L, Good CW, vanMaanen DP, Raghunath S, Suever JD, Nevius CD, Wehner GJ, Hartzel DN, Leader JB, Alsaid A, Patel AA, Kirchner HL, Pfeifer JM, Carry BJ, Pattichis MS, Haggerty CM, Fornwalt BK. Deep-learning-assisted analysis of echocardiographic videos improves predictions of all-cause mortality. Nat Biomed Eng 2021; 5:546-554. [PMID: 33558735] [DOI: 10.1038/s41551-020-00667-9]
Abstract
Machine learning promises to assist physicians with predictions of mortality and of other future clinical events by learning complex patterns from historical data, such as longitudinal electronic health records. Here we show that a convolutional neural network trained on raw pixel data in 812,278 echocardiographic videos from 34,362 individuals provides superior predictions of one-year all-cause mortality. The model's predictions outperformed the widely used pooled cohort equations, the Seattle Heart Failure score (measured in an independent dataset of 2,404 patients with heart failure who underwent 3,384 echocardiograms), and a machine learning model involving 58 human-derived variables from echocardiograms and 100 clinical variables derived from electronic health records. We also show that cardiologists assisted by the model substantially improved the sensitivity of their predictions of one-year all-cause mortality by 13% while maintaining prediction specificity. Large unstructured datasets may enable deep learning to improve a wide range of clinical prediction models.
Affiliation(s)
- Alvaro E Ulloa Cerna
- Department of Translational Data Science and Informatics, Geisinger, Danville, PA, USA
- Electrical and Computer Engineering Department, University of New Mexico, Albuquerque, NM, USA
- Linyuan Jing
- Department of Translational Data Science and Informatics, Geisinger, Danville, PA, USA
- David P vanMaanen
- Department of Translational Data Science and Informatics, Geisinger, Danville, PA, USA
- Sushravya Raghunath
- Department of Translational Data Science and Informatics, Geisinger, Danville, PA, USA
- Jonathan D Suever
- Department of Translational Data Science and Informatics, Geisinger, Danville, PA, USA
- Christopher D Nevius
- Department of Translational Data Science and Informatics, Geisinger, Danville, PA, USA
- Gregory J Wehner
- Department of Biomedical Engineering, University of Kentucky, Lexington, KY, USA
- Dustin N Hartzel
- Phenomic Analytics and Clinical Data Core, Geisinger, Danville, PA, USA
- Joseph B Leader
- Phenomic Analytics and Clinical Data Core, Geisinger, Danville, PA, USA
- Amro Alsaid
- Heart Institute, Geisinger, Danville, PA, USA
- H Lester Kirchner
- Department of Population Health Sciences, Geisinger, Danville, PA, USA
- John M Pfeifer
- Department of Translational Data Science and Informatics, Geisinger, Danville, PA, USA
- Heart and Vascular Center, Evangelical Hospital, Lewisburg, PA, USA
- Marios S Pattichis
- Electrical and Computer Engineering Department, University of New Mexico, Albuquerque, NM, USA
- Christopher M Haggerty
- Department of Translational Data Science and Informatics, Geisinger, Danville, PA, USA
- Heart Institute, Geisinger, Danville, PA, USA
- Brandon K Fornwalt
- Department of Translational Data Science and Informatics, Geisinger, Danville, PA, USA
- Heart Institute, Geisinger, Danville, PA, USA
- Department of Radiology, Geisinger, Danville, PA, USA
17
Steps to use artificial intelligence in echocardiography. J Echocardiogr 2020; 19:21-27. [PMID: 33044715] [PMCID: PMC7549428] [DOI: 10.1007/s12574-020-00496-4]
Abstract
Artificial intelligence (AI) has influenced every field of cardiovascular imaging, in all phases from acquisition to reporting. Compared with computed tomography and magnetic resonance imaging, echocardiography suffers from high observer variation in interpretation. AI can therefore help minimize observer variation and provide accurate diagnosis in the field of echocardiography. In this review, we summarize the need for automated diagnosis in echocardiography and discuss the results of applying AI to echocardiography as well as future perspectives. Currently, AI has two roles in cardiovascular imaging. One is the automation of tasks performed by humans, such as image segmentation and the measurement of cardiac structural and functional parameters. The other is the discovery of clinically important insights. Most reported applications have focused on task automation, and algorithms that can obtain cardiac measurements are also being reported. In the next stage, AI can be expected to expand and enrich existing knowledge. With the continual evolution of technology, cardiologists should become well versed in this new knowledge and be able to harness AI as a tool. AI can be incorporated into everyday clinical practice and become a valuable aid for many healthcare professionals dealing with cardiovascular diseases.
18
Blaivas M, Adhikari S, Savitsky EA, Blaivas LN, Liu YT. Artificial intelligence versus expert: a comparison of rapid visual inferior vena cava collapsibility assessment between POCUS experts and a deep learning algorithm. J Am Coll Emerg Physicians Open 2020; 1:857-864. [PMID: 33145532] [PMCID: PMC7593461] [DOI: 10.1002/emp2.12206]
Abstract
OBJECTIVES We sought to create a deep learning algorithm to determine the degree of inferior vena cava (IVC) collapsibility in critically ill patients, to assist novice point-of-care ultrasound (POCUS) providers. METHODS We used a publicly available long short-term memory (LSTM) deep learning architecture, which can track temporal changes and relationships in real-time video, to create an algorithm for ultrasound video analysis. The algorithm was trained on public-domain IVC ultrasound videos to improve its ability to recognize changes in varied ultrasound video. A total of 220 IVC videos were used; 10% of the data were randomly selected for cross-validation during training. Data were augmented through video rotation and manipulation to multiply the effective quantity of training data. After training, the algorithm was tested on 50 new IVC ultrasound videos obtained from public-domain sources that were not part of the dataset used in training or cross-validation. Fleiss' κ was calculated to compare the level of agreement among the 3 POCUS experts and between the deep learning algorithm and the POCUS experts. RESULTS There was substantial agreement among the 3 POCUS experts, with κ = 0.65 (95% CI = 0.49-0.81). Agreement between the experts and the algorithm was moderate, with κ = 0.45 (95% CI = 0.33-0.56). CONCLUSIONS Our algorithm showed good agreement with POCUS experts in visually estimating the degree of IVC collapsibility, which previously published studies have shown can differentiate fluid-responsive from fluid-unresponsive septic shock patients. Such an algorithm could be adapted to run in real time on any ultrasound machine with a video output, easing the burden on novice POCUS users by limiting their task to obtaining and maintaining a sagittal proximal IVC view and allowing the artificial intelligence to make real-time determinations.
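For reference, Fleiss' κ for more than two raters can be computed with statsmodels; the ratings below are synthetic stand-ins for the study's 50 test videos rated by three experts plus the algorithm, not the actual study data:

```python
import numpy as np
from statsmodels.stats.inter_rater import fleiss_kappa, aggregate_raters

# Hypothetical ratings: 50 videos x 4 raters (3 experts + algorithm), each
# assigning one of three collapsibility categories.
rng = np.random.default_rng(1)
ratings = rng.integers(0, 3, size=(50, 4))

table, _ = aggregate_raters(ratings)   # per-video counts for each category
print(f"Fleiss' kappa: {fleiss_kappa(table):.2f}")
```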
Affiliation(s)
- Michael Blaivas
- Department of Emergency Medicine, St. Francis Hospital, School of Medicine, University of South Carolina, Columbus, South Carolina, USA
- Srikar Adhikari
- Department of Emergency Medicine, School of Medicine, University of Arizona, Tucson, Arizona, USA
- Eric A. Savitsky
- Department of Emergency Medicine, UCLA David Geffen School of Medicine, UCLA Ronald Reagan Medical Center, Los Angeles, California, USA
- Laura N. Blaivas
- Department of Emergency Medicine, Harbor-UCLA Medical Center, David Geffen School of Medicine, UCLA, Los Angeles, California, USA
- Yiju T. Liu
- Michigan State University, East Lansing, Michigan, USA
19
Abstract
PURPOSE OF REVIEW Recent developments in artificial intelligence (AI) for cardiovascular imaging analysis, involving deep learning, mark the start of a new phase in the research field. We review the current state of AI in the cardiovascular field and discuss its potential to improve clinical workflows and the accuracy of diagnosis. RECENT FINDINGS In the AI cardiovascular imaging field, there are many applications involving efficient image reconstruction, patient triage, and support for clinical decisions. These tools can support repetitive clinical tasks. Although they will be powerful in some situations, these applications may show their full potential in the hands of echocardiologists, assisting but not replacing the human observer. We believe AI has the potential to improve the quality of echocardiography. Someday AI may be incorporated into the daily clinical setting as an instrumental tool for cardiologists dealing with cardiovascular diseases.
Affiliation(s)
- Kenya Kusunose
- Department of Cardiovascular Medicine, Tokushima University Hospital, 2-50-1 Kuramoto, Tokushima, Japan.