1
Chen H, Cohen E, Alfred M. Examining the development, effectiveness, and limitations of computer-aided diagnosis systems for retained surgical items detection: a systematic review. Ergonomics 2025:1-16. PMID: 40208001. DOI: 10.1080/00140139.2025.2487558.
Abstract
Retained surgical items (RSIs) can lead to severe complications and infections, with morbidity rates of up to 84.32%. Computer-aided detection (CAD) systems offer a potential advancement in enhancing the detection of RSIs. This systematic review aims to summarise the characteristics of CAD systems developed for the detection of RSIs, evaluate their development, effectiveness, and limitations, and propose opportunities for enhancement. The systematic review adheres to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines. Studies that developed and evaluated CAD systems for identifying RSIs were eligible for inclusion. Five electronic databases were searched from inception to March 2023, and eleven studies were found eligible. The sensitivity of the CAD systems ranged from 0.61 to 1, and specificity varied between 0.73 and 1. Most studies utilised synthesised RSI radiographs for developing CAD systems, which raises generalisability concerns. Moreover, the deep learning-based CAD systems did not incorporate explainable artificial intelligence techniques to ensure decision transparency.
Affiliation(s)
- Hongbo Chen: Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, Canada
- Eldan Cohen: Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, Canada
- Myrtede Alfred: Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, Canada
2
Hafeez Y, Memon K, AL-Quraishi MS, Yahya N, Elferik S, Ali SSA. Explainable AI in Diagnostic Radiology for Neurological Disorders: A Systematic Review, and What Doctors Think About It. Diagnostics (Basel) 2025; 15:168. PMID: 39857052. PMCID: PMC11764244. DOI: 10.3390/diagnostics15020168.
Abstract
Background: Artificial intelligence (AI) has recently made unprecedented contributions in every walk of life, but it has not been able to work its way into diagnostic medicine and standard clinical practice yet. Although data scientists, researchers, and medical experts have been working in the direction of designing and developing computer-aided diagnosis (CAD) tools to serve as assistants to doctors, their large-scale adoption and integration into the healthcare system still seems far-fetched. Diagnostic radiology is no exception. Imaging techniques like magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET) scans have been widely and very effectively employed by radiologists and neurologists for the differential diagnoses of neurological disorders for decades, yet no AI-powered systems to analyze such scans have been incorporated into the standard operating procedures of healthcare systems. Why? It is absolutely understandable that in diagnostic medicine, precious human lives are on the line, and hence there is no room even for the tiniest of mistakes. Nevertheless, with the advent of explainable artificial intelligence (XAI), the old-school black boxes of deep learning (DL) systems have been unraveled. Would XAI be the turning point for medical experts to finally embrace AI in diagnostic radiology? This review is a humble endeavor to find the answers to these questions. Methods: In this review, we present the journey and contributions of AI in developing systems to recognize, preprocess, and analyze brain MRI scans for differential diagnoses of various neurological disorders, with special emphasis on CAD systems embedded with explainability. A comprehensive review of the literature from 2017 to 2024 was conducted using host databases. We also present medical domain experts' opinions and summarize the challenges up ahead that need to be addressed in order to fully exploit the tremendous potential of XAI in its application to medical diagnostics and serve humanity. Results: Forty-seven studies were summarized and tabulated with information about the XAI technology and datasets employed, along with performance accuracies. The strengths and weaknesses of the studies have also been discussed. In addition, the opinions of seven medical experts from around the world have been presented to guide engineers and data scientists in developing such CAD tools. Conclusions: Current CAD research was observed to be focused on the enhancement of the performance accuracies of the DL regimens, with less attention being paid to the authenticity and usefulness of explanations. A shortage of ground truth data for explainability was also observed. Visual explanation methods were found to dominate; however, they might not be enough, and more thorough and human professor-like explanations would be required to build the trust of healthcare professionals. Special attention to these factors along with the legal, ethical, safety, and security issues can bridge the current gap between XAI and routine clinical practice.
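To make the "visual explanation" family of methods discussed in this abstract concrete, the sketch below shows plain input-gradient saliency in PyTorch. It is a hedged illustration of the general technique only, not code from any reviewed study; the stand-in model, image size, and class index are hypothetical.

```python
# Minimal sketch of a gradient-based visual explanation (saliency map).
# `model` is assumed to be a trained classifier taking a (1, 1, H, W) scan.
import torch

def gradient_saliency(model: torch.nn.Module, image: torch.Tensor, target_class: int) -> torch.Tensor:
    """Return |d score / d input| as a per-pixel relevance map."""
    model.eval()
    image = image.clone().requires_grad_(True)   # track gradients w.r.t. the input
    score = model(image)[0, target_class]        # class score for the target class
    score.backward()                             # backpropagate to the input pixels
    return image.grad.abs().squeeze()

# Toy usage with a stand-in model (hypothetical):
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64, 3))
saliency = gradient_saliency(model, torch.rand(1, 1, 64, 64), target_class=1)
print(saliency.shape)  # torch.Size([64, 64])
```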
Affiliation(s)
- Yasir Hafeez: Faculty of Science and Engineering, University of Nottingham, Jalan Broga, Semenyih 43500, Selangor Darul Ehsan, Malaysia
- Khuhed Memon: Centre for Intelligent Signal and Imaging Research, Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, Seri Iskandar 32610, Perak Darul Ridzuan, Malaysia
- Maged S. AL-Quraishi: Interdisciplinary Research Center for Smart Mobility and Logistics, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia
- Norashikin Yahya: Centre for Intelligent Signal and Imaging Research, Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, Seri Iskandar 32610, Perak Darul Ridzuan, Malaysia
- Sami Elferik: Interdisciplinary Research Center for Smart Mobility and Logistics, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia
- Syed Saad Azhar Ali: Aerospace Engineering Department, Interdisciplinary Research Center for Smart Mobility and Logistics, and Interdisciplinary Research Center Aviation and Space Exploration, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia
3
C Pereira S, Mendonça AM, Campilho A, Sousa P, Teixeira Lopes C. Automated image label extraction from radiology reports - A review. Artif Intell Med 2024; 149:102814. PMID: 38462277. DOI: 10.1016/j.artmed.2024.102814.
Abstract
Machine Learning models need large amounts of annotated data for training. In the field of medical imaging, labeled data is especially difficult to obtain because the annotations have to be performed by qualified physicians. Natural Language Processing (NLP) tools can be applied to radiology reports to extract labels for medical images automatically. Compared to manual labeling, this approach requires smaller annotation efforts and can therefore facilitate the creation of labeled medical image data sets. In this article, we summarize the literature on this topic spanning from 2013 to 2023, starting with a meta-analysis of the included articles, followed by a qualitative and quantitative systematization of the results. Overall, we found four types of studies on the extraction of labels from radiology reports: those describing systems based on symbolic NLP, statistical NLP, neural NLP, and those describing systems combining or comparing two or more of the latter. Despite the large variety of existing approaches, there is still room for further improvement. This work can contribute to the development of new techniques or the improvement of existing ones.
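As a concrete illustration of the symbolic (rule-based) end of the spectrum this review covers, the sketch below applies keyword-and-negation matching over report sentences to produce image labels. The label set, patterns, and negation cues are hypothetical examples, not taken from any reviewed system.

```python
# Minimal sketch of rule-based label extraction from free-text radiology reports.
import re

LABEL_PATTERNS = {
    "pneumonia": re.compile(r"\b(pneumonia|consolidation)\b", re.I),
    "pleural_effusion": re.compile(r"\bpleural effusion\b", re.I),
}
NEGATION = re.compile(r"\b(no|without|negative for)\b", re.I)

def extract_labels(report: str) -> dict:
    """Return {finding: True/False/None}, decided sentence by sentence."""
    labels = {name: None for name in LABEL_PATTERNS}
    for sentence in re.split(r"[.\n]", report):
        for name, pattern in LABEL_PATTERNS.items():
            if pattern.search(sentence):
                labels[name] = not NEGATION.search(sentence)  # negated mention -> False
    return labels

print(extract_labels("No pleural effusion. Right lower lobe consolidation is present."))
# {'pneumonia': True, 'pleural_effusion': False}
```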
Affiliation(s)
- Sofia C Pereira: Institute for Systems and Computer Engineering, Technology and Science (INESC-TEC), Portugal; Faculty of Engineering of the University of Porto, Portugal
- Ana Maria Mendonça: Institute for Systems and Computer Engineering, Technology and Science (INESC-TEC), Portugal; Faculty of Engineering of the University of Porto, Portugal
- Aurélio Campilho: Institute for Systems and Computer Engineering, Technology and Science (INESC-TEC), Portugal; Faculty of Engineering of the University of Porto, Portugal
- Pedro Sousa: Hospital Center of Vila Nova de Gaia/Espinho, Portugal
- Carla Teixeira Lopes: Institute for Systems and Computer Engineering, Technology and Science (INESC-TEC), Portugal; Faculty of Engineering of the University of Porto, Portugal
4
Hamdan M, Badr Z, Bjork J, Saxe R, Malensek F, Miller C, Shah R, Han S, Mohammad-Rahimi H. Detection of dental restorations using no-code artificial intelligence. J Dent 2023; 139:104768. PMID: 39492546. DOI: 10.1016/j.jdent.2023.104768.
Abstract
OBJECTIVES: The purpose of this study was to utilize a no-code computer vision platform to develop, train, and evaluate a model specifically designed for segmenting dental restorations on panoramic radiographs. METHODS: One hundred anonymized panoramic radiographs were selected for this study. Accurate labeling of dental restorations was performed by calibrated dental faculty and students, with subsequent final review by an oral radiologist. The radiographs were automatically split within the platform into training (70%), development (20%), and testing (10%) subgroups. The model was trained for 40 epochs using a medium model size. Data augmentation techniques available within the platform, namely horizontal and vertical flip, were utilized on the training set to improve the model's predictions. Post-training, the model was tested for independent predictions. The model's diagnostic validity was assessed through the calculation of sensitivity, specificity, accuracy, precision, and F1-score, both by pixel and by tooth, as well as ROC-AUC. RESULTS: A total of 1,108 restorations were labeled on 960 teeth. At a confidence threshold of 0.95, the model achieved 86.64% sensitivity, 99.78% specificity, 99.63% accuracy, 82.4% precision, and an F1-score of 0.844 by pixel. The model achieved 98.34% sensitivity, 98.13% specificity, 98.21% accuracy, 98.85% precision, and an F1-score of 0.98 by tooth. The ROC curve showed high performance, with an AUC of 0.978. CONCLUSIONS: The no-code computer vision platform used in this study accurately detected dental restorations on panoramic radiographs. However, further research and validation are required to evaluate the performance of no-code platforms on larger and more diverse datasets, as well as for other detection and segmentation tasks. CLINICAL SIGNIFICANCE: The advent of no-code computer vision holds significant promise in dentistry and dental research by eliminating the requirement for coding skills, democratizing access to artificial intelligence tools, and potentially revolutionizing dental diagnostics.
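For readers unfamiliar with the pixel-level metrics reported above, the following is a small illustrative sketch (not the platform's code) of how sensitivity, specificity, accuracy, precision, and F1 can be computed from binary segmentation masks; the toy masks are hypothetical.

```python
# Minimal sketch: pixel-level diagnostic metrics for a binary segmentation,
# assuming `pred` and `truth` are boolean NumPy arrays of the same shape.
import numpy as np

def pixel_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    tp = np.sum(pred & truth)      # predicted restoration pixels that are correct
    tn = np.sum(~pred & ~truth)    # background pixels correctly left unlabeled
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return dict(sensitivity=sensitivity, specificity=specificity,
                accuracy=accuracy, precision=precision, f1=f1)

# Toy example (hypothetical 2x3 masks):
pred  = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
truth = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(pixel_metrics(pred, truth))
```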
Affiliation(s)
- Manal Hamdan: Department of General Dental Sciences, Marquette University School of Dentistry, Milwaukee, WI 53233, USA
- Zaid Badr: Technological Innovation Center, Department of General Dental Sciences, Marquette University School of Dentistry, Milwaukee, WI 53233, USA
- Jennifer Bjork: Department of General Dental Sciences, Marquette University School of Dentistry, Milwaukee, WI 53233, USA
- Reagan Saxe: Department of General Dental Sciences, Marquette University School of Dentistry, Milwaukee, WI 53233, USA
- Caroline Miller: Marquette University School of Dentistry, Milwaukee, WI 53233, USA
- Rakhi Shah: Marquette University School of Dentistry, Milwaukee, WI 53233, USA
- Shengtong Han: Deans Office, Marquette University School of Dentistry, Milwaukee, WI 53233, USA
- Hossein Mohammad-Rahimi: Division of Artificial Intelligence Imaging Research, University of Maryland School of Dentistry, Baltimore, MD 21201, USA
5
Rivas-Villar D, Motschi AR, Pircher M, Hitzenberger CK, Schranz M, Roberts PK, Schmidt-Erfurth U, Bogunović H. Automated inter-device 3D OCT image registration using deep learning and retinal layer segmentation. Biomed Opt Express 2023; 14:3726-3747. PMID: 37497506. PMCID: PMC10368062. DOI: 10.1364/boe.493047.
Abstract
Optical coherence tomography (OCT) is the most widely used imaging modality in ophthalmology. There are multiple variations of OCT imaging capable of producing complementary information. Thus, registering these complementary volumes is desirable in order to combine their information. In this work, we propose a novel automated pipeline to register OCT images produced by different devices. This pipeline is based on two steps: a multi-modal 2D en-face registration based on deep learning, and a Z-axis (axial axis) registration based on the retinal layer segmentation. We evaluate our method using data from a Heidelberg Spectralis and an experimental PS-OCT device. The empirical results demonstrated high-quality registrations, with mean errors of approximately 46 µm for the 2D registration and 9.59 µm for the Z-axis registration. These registrations may help in multiple clinical applications such as the validation of layer segmentations among others.
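The Z-axis step can be illustrated with a much-simplified sketch: once the same retinal boundary has been segmented in both volumes, an axial offset can be estimated from the difference in boundary depths and applied to one volume. This illustrates the idea only, not the authors' pipeline; the per-volume offset, the choice of boundary, and the arrays below are assumptions.

```python
# Much-simplified sketch of axial (Z-axis) alignment from layer segmentations.
import numpy as np

def axial_shift(boundary_fixed: np.ndarray, boundary_moving: np.ndarray) -> int:
    """Robust axial offset (in pixels) from two depth maps of the same boundary,
    each shaped (n_bscans, n_ascans)."""
    return int(np.round(np.median(boundary_fixed - boundary_moving)))

def apply_axial_shift(volume: np.ndarray, shift: int) -> np.ndarray:
    """Shift a (z, y, x) OCT volume along the axial (z) axis, zero-padding the rest."""
    out = np.zeros_like(volume)
    if shift >= 0:
        out[shift:] = volume[:volume.shape[0] - shift]
    else:
        out[:shift] = volume[-shift:]
    return out

# Toy usage with stand-in data (hypothetical):
ilm_a = np.full((4, 8), 120.0)   # boundary depth map from device A
ilm_b = np.full((4, 8), 111.0)   # same boundary from device B
vol_b = np.random.rand(256, 4, 8)
vol_b_aligned = apply_axial_shift(vol_b, axial_shift(ilm_a, ilm_b))
```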
Affiliation(s)
- David Rivas-Villar: Centro de investigacion CITIC, Universidade da Coruña, 15071 A Coruña, Spain; Grupo VARPA, Instituto de Investigacion Biomédica de A Coruña (INIBIC), Universidade da Coruña, 15006 A Coruña, Spain
- Alice R Motschi: Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
- Michael Pircher: Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
- Christoph K Hitzenberger: Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
- Markus Schranz: Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
- Philipp K Roberts: Medical University of Vienna, Department of Ophthalmology and Optometry, Vienna, Austria
- Ursula Schmidt-Erfurth: Medical University of Vienna, Department of Ophthalmology and Optometry, Vienna, Austria
- Hrvoje Bogunović: Medical University of Vienna, Department of Ophthalmology and Optometry, Christian Doppler Lab for Artificial Intelligence in Retina, Vienna, Austria
6
Suttels V, Guedes Da Costa S, Garcia E, Brahier T, Hartley MA, Agodokpessi G, Wachinou P, Fasseur F, Boillat-Blanco N. Barriers and facilitators to implementation of point-of-care lung ultrasonography in a tertiary centre in Benin: a qualitative study among general physicians and pneumologists. BMJ Open 2023; 13:e070765. PMID: 37369423. DOI: 10.1136/bmjopen-2022-070765.
Abstract
OBJECTIVES: Owing to its ease of use and excellent diagnostic performance for the assessment of respiratory symptoms, point-of-care lung ultrasound (POC-LUS) has emerged as an attractive skill in resource-low settings, where limited access to specialist care and inconsistent radiology services erode health equity. To narrow the research-to-practice gap, this study aims to gain in-depth insights into the perceptions of POC-LUS and computer-assisted POC-LUS for the diagnosis of lower respiratory tract infections (LRTIs) in a low-income and middle-income country (LMIC) of sub-Saharan Africa. DESIGN AND SETTING: Qualitative study using face-to-face semi-structured interviews with three pneumologists and five general physicians in a tertiary centre for pneumology and tuberculosis in Benin, West Africa. The centre hosts a prospective cohort study on the diagnostic performance of POC-LUS for LRTI; in this context, all participants started a POC-LUS training programme 6 months before the current study. Transcripts were coded by the interviewer, checked for intercoder reliability by an independent psychologist, compared, and thematically summarised according to grounded theory methods. RESULTS: Various barriers (-) and facilitators (+) to POC-LUS implementation were identified, related to four principal categories: (1) hospital setting (e.g., lack of resources for device renewal or maintenance (-); need for POC tests (+)), (2) physicians' perceptions (e.g., lack of opportunity to practise (-); willingness to appropriate the technique (+)), (3) tool characteristics (e.g., unclear lifespan (-); expedited diagnosis (+)) and (4) patients' experience (e.g., no analogous image to keep (-); reduction in costs (+)). Furthermore, all interviewees had positive attitudes towards computer-assisted POC-LUS. CONCLUSIONS: There is a clear need for affordable POC lung imaging techniques in LMICs, and physicians are willing to implement POC-LUS to optimise the diagnostic approach to LRTI with an affordable tool. Successful integration of POC-LUS into clinical routine will require adequate responses to local challenges related to the lack of maintenance resources and the limited opportunity for supervised practice for physicians.
Affiliation(s)
- Sofia Guedes Da Costa: Research Center for Psychology of Health, Aging and Sport Examination (PHASE), University of Lausanne, Lausanne, Switzerland
- Elena Garcia: Emergency Department, CHUV, Lausanne, Switzerland
- Mary-Anne Hartley: Digital Global Health Department, University of Lausanne, Lausanne, Switzerland; Intelligent Global Health Research Group, Swiss Institute of Technology (EPFL), Lausanne, Switzerland
- Gildas Agodokpessi: National Hospital Center of Pneumology, University of Abomey-Calavi, Cotonou, Benin
- Prudence Wachinou: National Hospital Center of Pneumology, University of Abomey-Calavi, Cotonou, Benin
- Fabienne Fasseur: Research Center for Psychology of Health, Aging and Sport Examination (PHASE), University of Lausanne, Lausanne, Switzerland
7
Kadhim YA, Khan MU, Mishra A. Deep Learning-Based Computer-Aided Diagnosis (CAD): Applications for Medical Image Datasets. Sensors (Basel) 2022; 22:8999. PMID: 36433595. PMCID: PMC9692938. DOI: 10.3390/s22228999.
Abstract
Computer-aided diagnosis (CAD) has proved to be an effective and accurate method for diagnostic prediction over the years. This article focuses on the development of an automated CAD system with the intent to perform diagnosis as accurately as possible. Deep learning methods have been able to produce impressive results on medical image datasets. This study employs deep learning methods in conjunction with meta-heuristic algorithms and supervised machine-learning algorithms to perform an accurate diagnosis. Pre-trained convolutional neural networks (CNNs) or auto-encoders are used for feature extraction, whereas feature selection is performed using an ant colony optimization (ACO) algorithm, which searches for the optimal features while reducing the amount of data. Lastly, diagnosis prediction (classification) is achieved using learnable classifiers. The novel framework for the extraction and selection of features is based on deep learning, auto-encoders, and ACO. The performance of the proposed approach is evaluated using two medical image datasets: chest X-ray (CXR) and magnetic resonance imaging (MRI), for the prediction of the existence of COVID-19 and brain tumors, respectively. Accuracy is used as the main measure to compare the performance of the proposed approach with existing state-of-the-art methods. The proposed system achieves an average accuracy of 99.61% and 99.18%, outperforming all other methods in diagnosing the presence of COVID-19 and brain tumors, respectively. Based on the achieved results, it can be claimed that physicians or radiologists can confidently utilize the proposed approach for diagnosing COVID-19 patients and patients with specific brain tumors.
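The overall pipeline structure described above (deep features, then feature selection, then a learnable classifier) can be illustrated with the hedged sketch below. The CNN/auto-encoder features are assumed to be precomputed into a matrix, and a univariate selector stands in for the ACO step; none of this is the authors' code.

```python
# Highly simplified sketch of a features -> selection -> classifier pipeline.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 512))      # stand-in for precomputed CNN embeddings
y = rng.integers(0, 2, size=200)     # stand-in labels (e.g., COVID-19 vs. normal)

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=64),   # placeholder for the ACO selection step
    SVC(kernel="rbf"),                        # learnable classifier
)
print(cross_val_score(clf, X, y, cv=5).mean())
```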
Affiliation(s)
- Yezi Ali Kadhim: Department of Modeling and Design of Engineering Systems (MODES), Atilim University, Ankara 06830, Turkey; Department of Electrical and Electronics Engineering, Atilim University, Incek, Ankara 06830, Turkey
- Muhammad Umer Khan: Department of Mechatronics Engineering, Atilim University, Incek, Ankara 06830, Turkey
- Alok Mishra: Department of Software Engineering, Atilim University, Incek, Ankara 06830, Turkey; Informatics and Digitalization Group, Molde University College—Specialized University in Logistics, 6410 Molde, Norway
8
Integrating patient symptoms, clinical readings, and radiologist feedback with computer-aided diagnosis system for detection of infectious pulmonary disease: a feasibility study. Med Biol Eng Comput 2022; 60:2549-2565. DOI: 10.1007/s11517-022-02611-2.
9
Chandra TB, Singh BK, Jain D. Disease Localization and Severity Assessment in Chest X-Ray Images using Multi-Stage Superpixels Classification. Comput Methods Programs Biomed 2022; 222:106947. PMID: 35749885. PMCID: PMC9403875. DOI: 10.1016/j.cmpb.2022.106947.
Abstract
BACKGROUND AND OBJECTIVES: Chest X-ray (CXR) is a non-invasive imaging modality used in the prognosis and management of chronic lung disorders like tuberculosis (TB), pneumonia, coronavirus disease (COVID-19), etc. The radiomic features associated with different disease manifestations assist in detecting, localizing, and grading the severity of infected lung regions. The majority of existing computer-aided diagnosis (CAD) systems use these features for the classification task, and only a few works have been dedicated to disease localization and severity scoring. Moreover, existing deep learning approaches use class activation maps and saliency maps, which generate only a rough localization. This study aims to generate a compact disease boundary, an infection map, and an infection severity grade using the proposed multistage superpixel classification-based disease localization and severity assessment framework. METHODS: The proposed method uses the simple linear iterative clustering (SLIC) technique to subdivide the lung field into small superpixels. First, different radiomic texture features and the proposed shape features are extracted and combined to train different benchmark classifiers in a multistage framework. Subsequently, the predicted class labels are used to generate an infection map, mark the disease boundary, and grade the infection severity. The performance is evaluated using the publicly available Montgomery dataset and validated using Friedman average ranking and Holm and Nemenyi post-hoc procedures. RESULTS: The proposed multistage classification approach achieved accuracy (ACC) = 95.52%, F-measure (FM) = 95.48%, and area under the curve (AUC) = 0.955 for Stage-I and ACC = 85.35%, FM = 85.20%, and AUC = 0.853 for Stage-II using the calibration dataset, and ACC = 93.41%, FM = 95.32%, and AUC = 0.936 for Stage-I and ACC = 84.02%, FM = 71.01%, and AUC = 0.795 for Stage-II using the validation dataset. The model also demonstrated an average Jaccard index (JI) of 0.82 and a Pearson correlation coefficient (r) of 0.9589. CONCLUSIONS: The classification results obtained on the calibration and validation datasets confirm the promising performance of the proposed framework. The average JI shows promising potential to localize the disease, and the agreement between the radiologist score and the predicted severity score (r) confirms the robustness of the method. Finally, the statistical tests justified the significance of the obtained results.
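A hedged sketch of the superpixel stage is shown below: SLIC subdivides the image, simple per-superpixel features are computed, and a previously trained classifier labels each superpixel to form an infection map. The feature set, parameters, and classifier interface are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: SLIC superpixels + per-superpixel classification into an infection map.
import numpy as np
from skimage.segmentation import slic

def superpixel_infection_map(image: np.ndarray, classifier, n_segments: int = 300) -> np.ndarray:
    """image: 2D grayscale CXR with values in [0, 1]; returns a per-pixel label map."""
    # Subdivide the image into roughly uniform superpixels.
    segments = slic(image, n_segments=n_segments, compactness=0.1, channel_axis=None)
    infection_map = np.zeros_like(segments)
    for label in np.unique(segments):
        mask = segments == label
        # Toy features per superpixel (mean and std intensity); real systems use
        # richer radiomic texture and shape descriptors.
        feats = np.array([[image[mask].mean(), image[mask].std()]])
        infection_map[mask] = classifier.predict(feats)[0]  # assumed fitted classifier
    return infection_map
```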
Affiliation(s)
- Tej Bahadur Chandra: Department of Computer Applications, National Institute of Technology Raipur, Chhattisgarh, India
- Bikesh Kumar Singh: Department of Biomedical Engineering, National Institute of Technology Raipur, Chhattisgarh, India
- Deepak Jain: Department of Radiodiagnosis, Pt. Jawahar Lal Nehru Memorial Medical College, Raipur, Chhattisgarh, India
10
Abstract
Robotics has important applications in the field of disaster medical rescue. The deployment of urban rescue robots at an earthquake site can help shorten response time, improve rescue efficiency, and keep rescue personnel away from danger. This discussion introduces the performance of some robots in actual rescue scenarios, focuses on the current research status of robots that can provide medical assistance, and analyzes the merits and shortcomings of each system. Based on existing studies, the limitations and development directions of urban rescue robots are also discussed.
11
Branch F, Williams KM, Santana IN, Hegdé J. How well do practicing radiologists interpret the results of CAD technology? A quantitative characterization. Cogn Res Princ Implic 2022; 7:52. PMID: 35723763. PMCID: PMC9209598. DOI: 10.1186/s41235-022-00375-9.
Abstract
Many studies have shown that using a computer-aided detection (CAD) system does not significantly improve diagnostic accuracy in radiology, possibly because radiologists fail to interpret the CAD results properly. We tested this possibility using screening mammography as an illustrative example. We carried out two experiments, one using 28 practicing radiologists, and a second one using 25 non-professional subjects. During each trial, subjects were shown the following four pieces of information necessary for evaluating the actual probability of cancer in a given unseen mammogram: the binary decision of the CAD system as to whether the mammogram was positive for cancer, the true-positive and false-positive rates of the system, and the prevalence of breast cancer in the relevant patient population. Based only on this information, the subjects had to estimate the probability that the unseen mammogram in question was positive for cancer. Additionally, the non-professional subjects also had to decide, based on the same information, whether to recall the patients for additional testing. Both groups of subjects similarly (and significantly) overestimated the cancer probability regardless of the categorical CAD decision, suggesting that this effect is not peculiar to either group. The misestimations were not fully attributable to causes well-known in other contexts, such as base rate neglect or inverse fallacy. Non-professional subjects tended to recall the patients at high rates, even when the actual probability of cancer was at or near zero. Moreover, the recall rates closely reflected the subjects' estimations of cancer probability. Together, our results show that subjects interpret CAD system output poorly when only the probabilistic information about the underlying decision parameters is available to them. Our results also highlight the need for making the output of CAD systems more readily interpretable, and for providing training and assistance to radiologists in evaluating the output.
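The quantity subjects were asked to estimate follows from Bayes' rule applied to the four pieces of information listed above. A minimal sketch of that computation (illustrative only, with hypothetical screening-like numbers) is shown below.

```python
# Posterior probability of cancer given the CAD output, from the true-positive
# rate (sensitivity), false-positive rate, and disease prevalence.

def posterior_given_cad(cad_positive: bool, tpr: float, fpr: float, prevalence: float) -> float:
    """P(cancer | CAD output) via Bayes' rule."""
    if cad_positive:
        p_pos = tpr * prevalence + fpr * (1 - prevalence)              # P(CAD+)
        return (tpr * prevalence) / p_pos
    p_neg = (1 - tpr) * prevalence + (1 - fpr) * (1 - prevalence)      # P(CAD-)
    return ((1 - tpr) * prevalence) / p_neg

# With low prevalence, even a fairly accurate CAD flag implies only a modest
# post-test probability, which is the quantity subjects tended to overestimate.
print(posterior_given_cad(True,  tpr=0.90, fpr=0.10, prevalence=0.005))  # ~0.043
print(posterior_given_cad(False, tpr=0.90, fpr=0.10, prevalence=0.005))  # ~0.00056
```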
Affiliation(s)
- Fallon Branch: Department of Neuroscience and Regenerative Medicine, Medical College of Georgia, Augusta University, DNRM, CA-2003, 1469 Laney Walker Blvd, Augusta, GA 30912-2697, USA
- K Matthew Williams: Department of Psychological Sciences, Augusta University, Augusta, GA, USA
- Isabella Noel Santana: Department of Neuroscience and Regenerative Medicine, Medical College of Georgia, Augusta University, DNRM, CA-2003, 1469 Laney Walker Blvd, Augusta, GA 30912-2697, USA
- Jay Hegdé: Department of Neuroscience and Regenerative Medicine, Medical College of Georgia, Augusta University, DNRM, CA-2003, 1469 Laney Walker Blvd, Augusta, GA 30912-2697, USA; Department of Ophthalmology, Medical College of Georgia, Augusta University, Augusta, GA, USA; James and Jean Culver Vision Discovery Institute, Augusta University, Augusta, GA, USA; The Graduate School, Augusta University, Augusta, GA, USA
12
Chharia A, Upadhyay R, Kumar V, Cheng C, Zhang J, Wang T, Xu M. Deep-Precognitive Diagnosis: Preventing Future Pandemics by Novel Disease Detection With Biologically-Inspired Conv-Fuzzy Network. IEEE Access 2022; 10:23167-23185. PMID: 35360503. PMCID: PMC8967064. DOI: 10.1109/access.2022.3153059.
Abstract
Deep learning-based Computer-Aided Diagnosis has gained immense attention in recent years due to its capability to enhance diagnostic performance and elucidate complex clinical tasks. However, conventional supervised deep learning models are incapable of recognizing novel diseases that do not exist in the training dataset. Automated early-stage detection of novel infectious diseases can be vital in controlling their rapid spread. Moreover, the development of a conventional CAD model is only possible after disease outbreaks and datasets become available for training (viz. COVID-19 outbreak). Since novel diseases are unknown and cannot be included in training data, it is challenging to recognize them through existing supervised deep learning models. Even after data becomes available, recognizing new classes with conventional models requires a complete extensive re-training. The present study is the first to report this problem and propose a novel solution to it. In this study, we propose a new class of CAD models, i.e., Deep-Precognitive Diagnosis, wherein artificial agents are enabled to identify unknown diseases that have the potential to cause a pandemic in the future. A de novo biologically-inspired Conv-Fuzzy network is developed. Experimental results show that the model trained to classify Chest X-Ray (CXR) scans into normal and bacterial pneumonia detected a novel disease during testing, unseen by it in the training sample and confirmed to be COVID-19 later. The model is also tested on SARS-CoV-1 and MERS-CoV samples as unseen diseases and achieved state-of-the-art accuracy. The proposed model eliminates the need for model re-training by creating a new class in real-time for the detected novel disease, thus classifying it on all subsequent occurrences. Second, the model addresses the challenge of limited labeled data availability, which renders most supervised learning techniques ineffective and establishes that modified fuzzy classifiers can achieve high accuracy on image classification tasks.
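The open-set decision rule described above (flag a scan as a potential novel disease when no known class is matched with sufficient confidence, then register a new class for subsequent occurrences) can be illustrated with the hedged sketch below. This is not the Conv-Fuzzy network itself; the confidence threshold, class names, and scores are assumptions for illustration.

```python
# Illustrative open-set decision rule: create a new class when no known class matches.
import numpy as np

class OpenSetClassifier:
    def __init__(self, known_classes, threshold: float = 0.75):
        self.classes = list(known_classes)
        self.threshold = threshold

    def decide(self, class_probabilities: np.ndarray) -> str:
        """class_probabilities: scores over the currently known classes."""
        best = int(np.argmax(class_probabilities))
        if class_probabilities[best] >= self.threshold:
            return self.classes[best]
        new_name = f"novel_class_{len(self.classes)}"   # register a new class in real time
        self.classes.append(new_name)
        return new_name

osc = OpenSetClassifier(["normal", "bacterial_pneumonia"])
print(osc.decide(np.array([0.9, 0.1])))   # -> 'normal'
print(osc.decide(np.array([0.5, 0.5])))   # -> 'novel_class_2' (flagged as unseen)
```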
Affiliation(s)
- Aviral Chharia: Mechanical Engineering Department, Thapar Institute of Engineering and Technology, Patiala, Punjab 147004, India
- Rahul Upadhyay: Electronics and Communication Engineering Department, Thapar Institute of Engineering and Technology, Patiala, Punjab 147004, India
- Vinay Kumar: Electronics and Communication Engineering Department, Thapar Institute of Engineering and Technology, Patiala, Punjab 147004, India
- Chao Cheng: Department of Medicine, Baylor College of Medicine, Houston, TX 77030, USA
- Jing Zhang: Department of Computer Science, University of California at Irvine, Irvine, CA 92697, USA
- Tianyang Wang: Department of Computer Science and Information Technology, Austin Peay State University, Clarksville, TN 37044, USA
- Min Xu: Computational Biology Department, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Computer Vision Department, Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates
13
A Comprehensive Review on Seismocardiogram: Current Advancements on Acquisition, Annotation, and Applications. Mathematics 2021; 9:2243. DOI: 10.3390/math9182243.
Abstract
In recent years, cardiovascular diseases have been on the rise, and they entail enormous health burdens on global economies. Cardiac vibrations yield a wide and rich spectrum of essential information regarding the functioning of the heart, and thus it is necessary to take advantage of these data to better monitor cardiac health by way of prevention in early stages. Specifically, seismocardiography (SCG) is a noninvasive technique that can record cardiac vibrations using new cutting-edge devices such as accelerometers. Therefore, providing new and reliable data regarding advancements in the field of SCG, i.e., new devices and tools, is necessary to advance the current understanding of the state of the art (SoTA). This paper reviews the SoTA on SCG and concentrates on three critical aspects of the SCG approach: acquisition, annotation, and current applications. Moreover, this comprehensive overview also presents a detailed summary of recent advancements in SCG, such as the adoption of new techniques based on the artificial intelligence field, e.g., machine learning, deep learning, artificial neural networks, and fuzzy logic. Finally, a discussion on the open issues and future investigations regarding the topic is included.
14
Multitask Classification Method Based on Label Correction for Breast Tumor Ultrasound Images. Neural Process Lett 2021. DOI: 10.1007/s11063-021-10455-4.
15
Kaur B, Goyal B, Daniel E. A survey on Machine learning based Medical Assistive systems in Current Oncological Sciences. Curr Med Imaging 2021; 18:445-459. PMID: 33596810. DOI: 10.2174/1573405617666210217154446.
Abstract
BACKGROUND: Cancer is one of the life-threatening diseases affecting a large share of the population worldwide. Cancer cells multiply inside the body without showing many symptoms on the surface of the skin, making it difficult to predict and detect the disease at its onset. Many organizations are working towards automating the process of cancer detection with minimal false detection rates. INTRODUCTION: Machine learning algorithms are a promising alternative for supporting healthcare practitioners in ruling out the disease and predicting its growth with various imaging and statistical analysis tools. Medical practitioners utilize the output of these algorithms to diagnose and design the course of treatment. These algorithms can determine a patient's risk level and can help reduce cancer-related mortality. METHOD: This article presents the existing state-of-the-art techniques for identifying cancers affecting human organs based on machine learning models. The supported set of imaging operations is also elaborated for each type of cancer. CONCLUSION: CAD tools serve as an aid for diagnostic radiologists in preliminary investigations and in determining the nature of tumor cells.
Affiliation(s)
- Ebenezer Daniel: City of Hope National Medical Center, California, United States
16
Chugh G, Kumar S, Singh N. Survey on Machine Learning and Deep Learning Applications in Breast Cancer Diagnosis. Cognit Comput 2021. DOI: 10.1007/s12559-020-09813-6.
17
The seven key challenges for life-critical shared decision making systems. Int J Med Inform 2021; 148:104377. PMID: 33517102. DOI: 10.1016/j.ijmedinf.2021.104377.
Abstract
BACKGROUND Shared decision making (SDM) for life-critical diseases or conditions is a crucial type of SDM. This type of SDM is still greatly underdeveloped and it faces a number of key challenges. The main goal of this study is to identify the challenges that impede the development and use of life-critical SDM. METHODS This is a hybrid research and systematic / narrative review paper. Its results were derived by analyzing reviews already conducted by the authors when they were working on six recently published papers. These papers had collectively required two systematic reviews and four narrative reviews. The topics covered in the six published papers were related to computer-aided diagnosis (CAD) in medicine, the analysis of health state utilities, and the selection of the best treatment for life-critical diseases / conditions. A new narrative review was also executed to explore some new issues. RESULTS The key challenges for life-critical SDM relate to the following aspects: The mathematical models used to make the decisions, the data used to feed these models, the role the patient plays within the SDM framework, and finally, the role healthcare professionals play along with the pertinent rules and regulations that guide the use of this type of SDM today. CONCLUSIONS Life-critical SDM is the most important type of SDM. However, some challenges impede its successful development and use. A number of developments and enhancements need to be made urgently for this type of SDM to become widely acceptable and useful. The seven key challenges identified in this study and the suggested directions for future research offer a compelling path towards elevating life-critical SDM to the next level and do so both effectively and efficiently.
18
Danos AM, Krysiak K, Barnell EK, Coffman AC, McMichael JF, Kiwala S, Spies NC, Sheta LM, Pema SP, Kujan L, Clark KA, Wollam AZ, Rao S, Ritter DI, Sonkin D, Raca G, Lin WH, Grisdale CJ, Kim RH, Wagner AH, Madhavan S, Griffith M, Griffith OL. Standard operating procedure for curation and clinical interpretation of variants in cancer. Genome Med 2019; 11:76. PMID: 31779674. PMCID: PMC6883603. DOI: 10.1186/s13073-019-0687-x.
Abstract
Manually curated variant knowledgebases and their associated knowledge models are serving an increasingly important role in distributing and interpreting variants in cancer. These knowledgebases vary in their level of public accessibility, and the complexity of the models used to capture clinical knowledge. CIViC (Clinical Interpretation of Variants in Cancer - www.civicdb.org) is a fully open, free-to-use cancer variant interpretation knowledgebase that incorporates highly detailed curation of evidence obtained from peer-reviewed publications and meeting abstracts, and currently holds over 6300 Evidence Items for over 2300 variants derived from over 400 genes. CIViC has seen increased adoption by, and also undertaken collaboration with, a wide range of users and organizations involved in research. To enhance CIViC’s clinical value, regular submission to the ClinVar database and pursuit of other regulatory approvals is necessary. For this reason, a formal peer reviewed curation guideline and discussion of the underlying principles of curation is needed. We present here the CIViC knowledge model, standard operating procedures (SOP) for variant curation, and detailed examples to support community-driven curation of cancer variants.
Affiliation(s)
- Arpad M Danos: McDonnell Genome Institute, Washington University School of Medicine, St. Louis, MO, USA
- Kilannin Krysiak: McDonnell Genome Institute, Washington University School of Medicine, St. Louis, MO, USA; Department of Pathology and Immunology, Washington University School of Medicine, St. Louis, MO, USA
- Erica K Barnell: McDonnell Genome Institute, Washington University School of Medicine, St. Louis, MO, USA; Department of Genetics, Washington University School of Medicine, St. Louis, MO, USA
- Adam C Coffman: McDonnell Genome Institute, Washington University School of Medicine, St. Louis, MO, USA
- Joshua F McMichael: McDonnell Genome Institute, Washington University School of Medicine, St. Louis, MO, USA
- Susanna Kiwala: McDonnell Genome Institute, Washington University School of Medicine, St. Louis, MO, USA
- Nicholas C Spies: McDonnell Genome Institute, Washington University School of Medicine, St. Louis, MO, USA
- Lana M Sheta: McDonnell Genome Institute, Washington University School of Medicine, St. Louis, MO, USA
- Shahil P Pema: McDonnell Genome Institute, Washington University School of Medicine, St. Louis, MO, USA
- Lynzey Kujan: McDonnell Genome Institute, Washington University School of Medicine, St. Louis, MO, USA
- Kaitlin A Clark: McDonnell Genome Institute, Washington University School of Medicine, St. Louis, MO, USA
- Amber Z Wollam: McDonnell Genome Institute, Washington University School of Medicine, St. Louis, MO, USA
- Shruti Rao: Innovation Center for Biomedical Informatics, Georgetown University, Washington DC, USA
- Deborah I Ritter: Department of Pediatrics, Texas Children's Hospital, Baylor College of Medicine, Houston, TX, USA
- Dmitriy Sonkin: Biometric Research Program, Division of Cancer Treatment and Diagnosis, National Cancer Institute, Rockville, MD, USA
- Gordana Raca: Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Wan-Hsin Lin: Department of Cancer Biology, Mayo Clinic, Jacksonville, Florida, USA
- Cameron J Grisdale: Canada's Michael Smith Genome Sciences Centre, British Columbia Cancer Agency, Vancouver, BC, Canada
- Raymond H Kim: Fred A. Litwin Family Center in Genetic Medicine, University Health Network, Toronto, ON, Canada
- Alex H Wagner: McDonnell Genome Institute, Washington University School of Medicine, St. Louis, MO, USA; Department of Genetics, Washington University School of Medicine, St. Louis, MO, USA
- Subha Madhavan: Innovation Center for Biomedical Informatics, Georgetown University, Washington DC, USA; Georgetown Lombardi Comprehensive Cancer Center, Washington DC, USA
- Malachi Griffith: McDonnell Genome Institute, Department of Genetics, Siteman Cancer Center, and Department of Medicine, Washington University School of Medicine, St. Louis, MO, USA
- Obi L Griffith: McDonnell Genome Institute, Department of Genetics, Siteman Cancer Center, and Department of Medicine, Washington University School of Medicine, St. Louis, MO, USA