1
D'hondt L, Kellens PJ, Torfs K, Bosmans H, Bacher K, Snoeckx A. Absolute ground truth-based validation of computer-aided nodule detection and volumetry in low-dose CT imaging. Phys Med 2024; 121:103344. [PMID: 38593627] [DOI: 10.1016/j.ejmp.2024.103344]
Abstract
PURPOSE To validate the performance of computer-aided detection (CAD) and volumetry software using an anthropomorphic phantom with a ground truth (GT) set of 3D-printed nodules. METHODS The Kyoto Kagaku Lungman phantom, containing 3D-printed solid nodules in six diameters (4 to 9 mm) and three morphologies (smooth, lobulated, spiculated), was scanned at varying CTDIvol levels (6.04, 1.54 and 0.20 mGy). Combinations of reconstruction algorithms (iterative and deep learning image reconstruction) and kernels (soft and hard) were applied. Detection, volumetry and density results recorded by a commercially available AI-based algorithm (AVIEW LCS+) were compared to the absolute GT, which was determined through µCT scanning at 50 µm resolution. The associations between image acquisition parameters or nodule characteristics and the accuracy of nodule detection and characterization were analyzed with chi-square tests and multiple linear regression. RESULTS High levels of detection sensitivity and precision (minimum 83% and 91%, respectively) were observed across all acquisitions. Neither reconstruction algorithm nor radiation dose showed significant associations with detection. Nodule diameter, however, showed a highly significant association with detection (p < 0.0001). Volumetric measurements for nodules > 6 mm were accurate within a 10% absolute range of volumeGT, regardless of dose and reconstruction. Nodule diameter and morphology were major determinants of volumetric accuracy (p < 0.001). Density assignment was not significantly influenced by any parameter. CONCLUSIONS Our study confirms the software's accurate performance in nodule volumetry, detection and density characterization, with robustness to variations in CT imaging protocols. This study suggests the incorporation of similar phantom setups in the quality assurance of CAD tools.
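As an illustrative aside (not part of the study), the accuracy measures reported in this abstract, detection sensitivity/precision and per-nodule volumetric error against a ground truth, can be sketched as follows. The counts and volumes below are invented, and the functions are not from the AVIEW LCS+ software.

```python
# Hedged sketch of the reported accuracy measures, with made-up inputs.

def detection_metrics(n_true_positive, n_false_negative, n_false_positive):
    """Sensitivity = TP / (TP + FN); precision = TP / (TP + FP)."""
    sensitivity = n_true_positive / (n_true_positive + n_false_negative)
    precision = n_true_positive / (n_true_positive + n_false_positive)
    return sensitivity, precision

def volume_percent_error(measured_mm3, gt_mm3):
    """Signed volumetric error relative to the micro-CT ground-truth volume."""
    return 100.0 * (measured_mm3 - gt_mm3) / gt_mm3

# Invented example: 15 of 18 GT nodules found, 1 false positive.
sens, prec = detection_metrics(15, 3, 1)
# Invented measured/GT volume pairs (mm^3); check the "within 10%" criterion.
errors = [volume_percent_error(m, g) for m, g in [(110.0, 100.0), (95.0, 100.0)]]
within_10pct = all(abs(e) <= 10.0 for e in errors)
```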
Affiliation(s)
- Louise D'hondt
- Department of Human Structure and Repair, Faculty of Medicine and Health Sciences, Ghent University, Proeftuinstraat 86, Ghent, Belgium; Faculty of Medicine, University of Antwerp, Universiteitsplein 1, Wilrijk, Belgium.
- Pieter-Jan Kellens
- Department of Human Structure and Repair, Faculty of Medicine and Health Sciences, Ghent University, Proeftuinstraat 86, Ghent, Belgium
- Kwinten Torfs
- Leuven University Center of Medical Physics in Radiology, University Hospitals Leuven, Herestraat 49, Leuven, Belgium
- Hilde Bosmans
- Leuven University Center of Medical Physics in Radiology, University Hospitals Leuven, Herestraat 49, Leuven, Belgium
- Klaus Bacher
- Department of Human Structure and Repair, Faculty of Medicine and Health Sciences, Ghent University, Proeftuinstraat 86, Ghent, Belgium
- Annemiek Snoeckx
- Faculty of Medicine, University of Antwerp, Universiteitsplein 1, Wilrijk, Belgium; Department of Radiology, Antwerp University Hospital, Drie Eikenstraat 655, Edegem, Belgium
2
Assis Y, Liao L, Pierre F, Anxionnat R, Kerrien E. Intracranial aneurysm detection: an object detection perspective. Int J Comput Assist Radiol Surg 2024:10.1007/s11548-024-03132-z. [PMID: 38632166] [DOI: 10.1007/s11548-024-03132-z]
Abstract
PURPOSE Intracranial aneurysm detection from 3D Time-Of-Flight Magnetic Resonance Angiography images is a problem of increasing clinical importance. Recently, a series of methods have shown promising performance by using segmentation neural networks. However, these methods may be less relevant in clinical settings, where diagnostic decisions rely on detecting objects rather than segmenting them. METHODS We introduce a 3D single-stage object detection method tailored for small objects such as aneurysms. Our anchor-free method incorporates fast data annotation, adapted data sampling and generation to address the class imbalance problem, and spherical representations for improved detection. RESULTS A comprehensive evaluation was conducted, comparing our method with the state-of-the-art SCPM-Net, nnDetection and nnUNet baselines, using two datasets comprising 402 subjects. The evaluation used adapted object detection metrics. Our method exhibited comparable or superior performance, with an average precision of 78.96%, a sensitivity of 86.78%, and 0.53 false positives per case. CONCLUSION Our method significantly reduces detection complexity compared to existing methods and highlights the advantages of object detection over segmentation-based approaches for aneurysm detection. It also holds potential for application to other small object detection problems.
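For readers unfamiliar with the reported metrics, a minimal sketch of average precision over ranked detections follows. The detection list and matching outcomes are hypothetical, not the authors' evaluation code, and missed ground-truth objects contribute zero precision (one common convention).

```python
# Hedged sketch of average precision for object detection, with invented data.

def average_precision(detections, n_gt):
    """detections: list of (confidence_score, is_true_positive) pairs.
    AP = mean over ground-truth objects of the precision at each true positive,
    with missed ground-truth objects counting as zero."""
    ranked = sorted(detections, key=lambda d: d[0], reverse=True)
    tp = fp = 0
    precisions = []
    for _, is_tp in ranked:
        if is_tp:
            tp += 1
            precisions.append(tp / (tp + fp))
        else:
            fp += 1
    return sum(precisions) / n_gt

# Invented ranked detections: hit, false alarm, hit; two GT aneurysms total.
ap = average_precision([(0.9, True), (0.8, False), (0.7, True)], n_gt=2)
```

With precisions 1.0 and 2/3 at the two hits, the AP here is 5/6.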
Affiliation(s)
- Youssef Assis
- Université de Lorraine, CNRS, Inria, LORIA, 54000, Nancy, France.
- Liang Liao
- Université de Lorraine, CNRS, Inria, LORIA, 54000, Nancy, France
- Department of Diagnostic and Therapeutic Interventional Neuroradiology, Université de Lorraine, CHRU-Nancy, 54000, Nancy, France
- Université de Lorraine, Inserm, IADI, 54000, Nancy, France
- Fabien Pierre
- Université de Lorraine, CNRS, Inria, LORIA, 54000, Nancy, France
- René Anxionnat
- Department of Diagnostic and Therapeutic Interventional Neuroradiology, Université de Lorraine, CHRU-Nancy, 54000, Nancy, France
- Université de Lorraine, Inserm, IADI, 54000, Nancy, France
- Erwan Kerrien
- Université de Lorraine, CNRS, Inria, LORIA, 54000, Nancy, France
3
Campion JR, O'Connor DB, Lahiff C. Human-artificial intelligence interaction in gastrointestinal endoscopy. World J Gastrointest Endosc 2024; 16:126-135. [PMID: 38577646] [PMCID: PMC10989254] [DOI: 10.4253/wjge.v16.i3.126]
Abstract
The number and variety of applications of artificial intelligence (AI) in gastrointestinal (GI) endoscopy are growing rapidly. New technologies based on machine learning (ML) and convolutional neural networks (CNNs) are at various stages of development and deployment to assist patients and endoscopists in preparing for endoscopic procedures, in detection, diagnosis and classification of pathology during endoscopy, and in confirmation of key performance indicators. Platforms based on ML and CNNs require regulatory approval as medical devices. Interactions between humans and the technologies we use are complex and are influenced by design, behavioural and psychological elements. Because AI differs substantially from prior technologies, important differences may be expected in how we interact with its advice. Human–AI interaction (HAII) may be optimised by developing AI algorithms to minimise false positives and by designing platform interfaces to maximise usability. Human factors influencing HAII may include automation bias, alarm fatigue, algorithm aversion, learning effect and deskilling. Each of these areas merits further study in the specific setting of AI applications in GI endoscopy, and professional societies should engage to ensure that sufficient emphasis is placed on human-centred design in the development of new AI technologies.
Affiliation(s)
- John R Campion
- Department of Gastroenterology, Mater Misericordiae University Hospital, Dublin D07 AX57, Ireland
- School of Medicine, University College Dublin, Dublin D04 C7X2, Ireland
- Donal B O'Connor
- Department of Surgery, Trinity College Dublin, Dublin D02 R590, Ireland
- Conor Lahiff
- Department of Gastroenterology, Mater Misericordiae University Hospital, Dublin D07 AX57, Ireland
- School of Medicine, University College Dublin, Dublin D04 C7X2, Ireland
4
Hayum AA, Jaya J, Sivakumar R, Paulchamy B. An efficient breast cancer classification model using bilateral filtering and fuzzy convolutional neural network. Sci Rep 2024; 14:6290. [PMID: 38491186] [PMCID: PMC10943067] [DOI: 10.1038/s41598-024-56698-8]
Abstract
BC (breast cancer) is the second most common cause of cancer death in women. Recent work introduced a model for BC classification in which input breast images were pre-processed using median filters to reduce noise. Weighted KMC (K-Means clustering) is used to segment the ROI (Region of Interest) after the input image has been cleaned of noise. Block-based CDF (Centre Distance Function) and CDTM (Diagonal Texture Matrix)-based texture and shape descriptors are utilized for feature extraction. The extracted features are reduced in number using KPCA (Kernel Principal Component Analysis). The appropriate feature selection is computed using ICSO (Improved Cuckoo Search Optimization). The MRNN (Modified Recurrent Neural Network) values are then improved through optimization before being utilized to classify BC into benign and malignant types. However, ICSO has disadvantages, such as slow search speed and low convergence accuracy, and training an MRNN is a difficult task. To avoid those problems, in this work preprocessing is done by bilateral filtering to remove noise from the input image; the bilateral filter uses a linear Gaussian kernel for smoothing. Contrast stretching is applied to improve image quality. ROI segmentation is calculated based on MFCM (modified fuzzy C-means) clustering. CDTM-based and CDF-based color histogram and shape description methods are applied for feature extraction; the color histogram summarizes two important pieces of information about an object: the colors present in the image and the relative proportion of each color. After the features are extracted, KPCA is used to reduce their dimensionality. Feature selection was performed using MCSO (Mutational Chicken Flock Optimization). Finally, BC detection and classification were performed using an FCNN (Fuzzy Convolutional Neural Network), whose parameters were also optimized using MCSO. The proposed model is evaluated for accuracy, precision, recall and f-measure, and experimental results show higher accuracy than existing models.
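The bilateral-filtering preprocessing step described in this abstract can be illustrated with a minimal NumPy sketch: each pixel becomes a weighted mean of its neighbours, weighted by both spatial distance and intensity difference, which smooths noise while preserving edges. The window radius and sigma values here are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of a bilateral filter (Gaussian spatial and range kernels).
import numpy as np

def bilateral_filter(img, radius=2, sigma_spatial=1.0, sigma_range=0.1):
    """Edge-preserving smoothing on a 2D float image in [0, 1]."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_spatial**2))
    padded = np.pad(img, radius, mode="edge").astype(float)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel: down-weight neighbours with different intensity.
            rng = np.exp(-(patch - img[i, j])**2 / (2 * sigma_range**2))
            weights = spatial * rng
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out

# A sharp step edge: the filter should smooth within, not across, the edge.
noisy = np.array([[0.0, 0.0, 1.0, 1.0]] * 4)
smoothed = bilateral_filter(noisy)
```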
Affiliation(s)
- A Abdul Hayum
- Electronics and Communication Engineering, Hindusthan Institute of Technology, Coimbatore, 641032, India.
- J Jaya
- Electronics and Communication Engineering, Hindusthan College of Engineering and Technology, Coimbatore, 641032, India
- R Sivakumar
- School of Electronics Engineering, Vellore Institute of Technology, Vellore, India
- B Paulchamy
- Electronics and Communication Engineering, Hindusthan Institute of Technology, Coimbatore, 641032, India
5
Lakshmi Priya CV, Biju VG, Vinod BR, Ramachandran S. Deep learning approaches for breast cancer detection in histopathology images: A review. Cancer Biomark 2024:CBM230251. [PMID: 38517775] [DOI: 10.3233/cbm-230251]
Abstract
BACKGROUND Breast cancer is one of the leading causes of death in women worldwide. Histopathology analysis of breast tissue is an essential tool for diagnosing and staging breast cancer. In recent years, there has been a significant increase in research exploring the use of deep-learning approaches for breast cancer detection from histopathology images. OBJECTIVE To provide an overview of the current state-of-the-art technologies in automated breast cancer detection in histopathology images using deep learning techniques. METHODS This review focuses on the use of deep learning algorithms for the detection and classification of breast cancer from histopathology images. We provide an overview of publicly available histopathology image datasets for breast cancer detection. We also highlight the strengths and weaknesses of these architectures and their performance on different histopathology image datasets. Finally, we discuss the challenges associated with using deep learning techniques for breast cancer detection, including the need for large and diverse datasets and the interpretability of deep learning models. RESULTS Deep learning techniques have shown great promise in accurately detecting and classifying breast cancer from histopathology images. Although the accuracy levels vary depending on the specific dataset, image pre-processing techniques, and deep learning architecture used, these results highlight the potential of deep learning algorithms in improving the accuracy and efficiency of breast cancer detection from histopathology images. CONCLUSION This review has presented a thorough account of the current state-of-the-art techniques for detecting breast cancer using histopathology images. The integration of machine learning and deep learning algorithms has demonstrated promising results in accurately identifying breast cancer from histopathology images. The insights gathered from this review can act as a valuable reference for researchers in this field who are developing diagnostic strategies using histopathology images. Overall, the objective of this review is to spark interest among scholars in this complex field and acquaint them with cutting-edge technologies in breast cancer detection using histopathology images.
Affiliation(s)
- Lakshmi Priya C V
- Department of Electronics and Communication Engineering, College of Engineering Trivandrum, Kerala, India
- Biju V G
- Department of Electronics and Communication Engineering, College of Engineering Munnar, Kerala, India
- Vinod B R
- Department of Electronics and Communication Engineering, College of Engineering Trivandrum, Kerala, India
- Sivakumar Ramachandran
- Department of Electronics and Communication Engineering, Government Engineering College Wayanad, Kerala, India
6
Fairchild A, Salama JK, Godfrey D, Wiggins WF, Ackerson BG, Oyekunle T, Niedzwiecki D, Fecci PE, Kirkpatrick JP, Floyd SR. Incidence and imaging characteristics of difficult to detect retrospectively identified brain metastases in patients receiving repeat courses of stereotactic radiosurgery. J Neurooncol 2024:10.1007/s11060-024-04594-6. [PMID: 38340295] [DOI: 10.1007/s11060-024-04594-6]
Abstract
PURPOSE During stereotactic radiosurgery (SRS) planning for brain metastases (BM), brain MRIs are reviewed to select appropriate targets based on radiographic characteristics. Some BM are difficult to detect and/or definitively identify and may go untreated initially, only to become apparent on future imaging. We hypothesized that in patients receiving multiple courses of SRS, reviewing the initial planning MRI would reveal early evidence of lesions that developed into metastases requiring SRS. METHODS Patients undergoing two or more courses of SRS to BM within 6 months between 2016 and 2018 were included in this single-institution, retrospective study. Brain MRIs from the initial course were reviewed for lesions at the same location as subsequently treated metastases; if present, such a lesion was classified as a "retrospectively identified metastasis" (RIM). RIMs were subcategorized as meeting or not meeting diagnostic imaging criteria for BM (+DC or -DC, respectively). RESULTS Among 683 patients undergoing 923 SRS courses, 98 patients met inclusion criteria. There were 115 repeat courses of SRS, with 345 treated metastases in the subsequent course, 128 of which were associated with RIMs found on a prior MRI. Of these RIMs, 58% were +DC. Seventeen (15%) of the subsequent courses consisted solely of metastases associated with +DC RIMs. CONCLUSION Radiographic evidence of brain metastases requiring future treatment was occasionally present on brain MRIs from prior SRS treatments. Most RIMs were +DC, and some subsequent SRS courses treated only +DC RIMs. These findings suggest that enhanced BM detection might enable earlier treatment and reduce the need for additional SRS.
Affiliation(s)
- Andrew Fairchild
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA.
- Piedmont Radiation Oncology, 3333 Silas Creek Parkway, Winston Salem, NC, 27103, USA.
- Joseph K Salama
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Radiation Oncology Service, Durham VA Medical Center, Durham, NC, USA
- Devon Godfrey
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Walter F Wiggins
- Department of Radiology, Duke University Medical Center, Durham, NC, USA
- Bradley G Ackerson
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Taofik Oyekunle
- Department of Biostatistics and Bioinformatics, Duke University Medical Center, Durham, NC, USA
- Donna Niedzwiecki
- Department of Biostatistics and Bioinformatics, Duke University Medical Center, Durham, NC, USA
- Peter E Fecci
- Department of Neurosurgery, Duke University Medical Center, Durham, NC, USA
- John P Kirkpatrick
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Department of Neurosurgery, Duke University Medical Center, Durham, NC, USA
- Scott R Floyd
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
7
Lee J, Nishikawa RM. Improving lesion detection in mammograms by leveraging a Cycle-GAN-based lesion remover. Breast Cancer Res 2024; 26:21. [PMID: 38303004] [PMCID: PMC10832219] [DOI: 10.1186/s13058-024-01777-x]
Abstract
BACKGROUND The wide heterogeneity in the appearance of breast lesions and normal breast structures can confuse computerized detection algorithms. Our purpose was therefore to develop a Lesion Highlighter (LH) that can improve the performance of computer-aided detection algorithms for detecting breast cancer on screening mammograms. METHODS We hypothesized that a Cycle-GAN based Lesion Remover (LR) could act as an LH, which can improve the performance of lesion detection algorithms. We used 10,310 screening mammograms from 4,832 women that included 4,942 recalled lesions (BI-RADS 0) and 5,368 normal results (BI-RADS 1). We divided the dataset into Train:Validate:Test folds with the ratios of 0.64:0.16:0.2. We segmented image patches (400 × 400 pixels) from either lesions marked by MQSA radiologists or normal tissue in mammograms. We trained a Cycle-GAN to develop two GANs, where each GAN transferred the style of one image to another. We refer to the GAN transferring the style of a lesion to normal breast tissue as the LR. We then highlighted the lesion by color-fusing the mammogram after applying the LR to its original. Using ResNet18, DenseNet201, EfficientNetV2, and Vision Transformer as backbone architectures, we trained three deep networks for each architecture, one trained on lesion highlighted mammograms (Highlighted), another trained on the original mammograms (Baseline), and Highlighted and Baseline combined (Combined). We conducted ROC analysis for the three versions of each deep network on the test set. RESULTS The Combined version of all networks achieved AUCs ranging from 0.963 to 0.974 for identifying the image with a recalled lesion from a normal breast tissue image, which was statistically improved (p-value < 0.001) over their Baseline versions with AUCs that ranged from 0.914 to 0.967. CONCLUSIONS Our results showed that a Cycle-GAN based LR is effective for enhancing lesion conspicuity and this can improve the performance of a detection algorithm.
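The colour-fusion highlighting step described above can be sketched as follows. The "lesion-removed" array here is a stand-in for the Cycle-GAN output, and the particular channel assignment (original in red, removed version in green/blue) is an assumption for illustration: where the remover changed nothing the fused image stays grey, and where it erased a lesion the original signal shows up tinted.

```python
# Hedged sketch of lesion highlighting by colour-fusing two greyscale images.
import numpy as np

def highlight_lesion(original, lesion_removed):
    """Fuse two greyscale images into RGB: red = original, green/blue = removed.
    Regions where the two images agree render grey; differences render tinted."""
    rgb = np.stack([original, lesion_removed, lesion_removed], axis=-1)
    return np.clip(rgb, 0.0, 1.0)

orig = np.zeros((4, 4))
orig[1:3, 1:3] = 0.9              # a bright stand-in "lesion"
removed = np.zeros((4, 4))        # pretend the remover erased it
fused = highlight_lesion(orig, removed)
```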
Affiliation(s)
- Juhun Lee
- Department of Radiology, The University of Pittsburgh, 200 Lothrop Street, Pittsburgh, PA, 15237, USA.
- Department of Bioengineering, The University of Pittsburgh, 302 Benedum Hall, Pittsburgh, PA, 15237, USA.
- Robert M Nishikawa
- Department of Radiology, The University of Pittsburgh, 200 Lothrop Street, Pittsburgh, PA, 15237, USA
8
Kwon MR, Youn I, Lee MY, Lee HA. Diagnostic Performance of Artificial Intelligence-Based Computer-Aided Detection Software for Automated Breast Ultrasound. Acad Radiol 2024; 31:480-491. [PMID: 37813703] [DOI: 10.1016/j.acra.2023.09.013]
Abstract
RATIONALE AND OBJECTIVES This study aimed to evaluate the diagnostic performance of radiologists using artificial intelligence (AI)-based computer-aided detection software (CAD) to detect suspicious lesions on automated breast ultrasound (ABUS). MATERIALS AND METHODS A total of 262 ABUS-detected breast lesions with histopathological verification (January 2020 to December 2022) were included. Two radiologists reviewed the images and assigned a Breast Imaging Reporting and Data System (BI-RADS) category. ABUS images were classified as positive or negative using AI-CAD. The BI-RADS category was readjusted in four ways: the radiologists modified the BI-RADS category using the AI results (AI-aided 1), upgraded or downgraded based on AI results (AI-aided 2), only upgraded for positive results (AI-aided 3), or only downgraded for negative results (AI-aided 4). The AI-aided diagnostic performances were compared to those of the radiologists alone. The characteristics of AI-CAD-positive and AI-CAD-negative cancers were compared. RESULTS For 262 lesions (145 malignant and 117 benign) in 231 women (mean age, 52.2 years), the area under the receiver operating characteristic curve (AUC) of the radiologists was 0.870 (95% confidence interval [CI], 0.832-0.908). The AUC significantly improved to 0.919 (95% CI, 0.890-0.947; P = 0.001) using AI-aided 1, whereas it improved without significance to 0.884 (95% CI, 0.844-0.923), 0.890 (95% CI, 0.852-0.929), and 0.890 (95% CI, 0.853-0.928) using AI-aided 2, 3, and 4, respectively. AI-CAD-negative cancers were smaller, less frequently exhibited the retraction phenomenon, and had a lower BI-RADS category. Among nonmass lesions, AI-CAD-negative cancers showed no posterior shadowing. CONCLUSION AI-CAD implementation significantly improved the radiologists' diagnostic performance and may serve as a valuable diagnostic tool.
Affiliation(s)
- Mi-Ri Kwon
- Department of Radiology, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, 29 Saemunan-ro, Jongno-gu, Seoul, 03181, Republic of Korea (M.K., I.Y., H.-A.L.)
- Inyoung Youn
- Department of Radiology, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, 29 Saemunan-ro, Jongno-gu, Seoul, 03181, Republic of Korea
- Mi Yeon Lee
- Division of Biostatistics, Department of R&D Management, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Hyun-Ah Lee
- Department of Radiology, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, 29 Saemunan-ro, Jongno-gu, Seoul, 03181, Republic of Korea
9
Okada K, Yamada N, Takayanagi K, Hiasa Y, Kitamura Y, Hoshino Y, Hirao S, Yoshiyama T, Onozaki I, Kato S. Applicability of artificial intelligence-based computer-aided detection (AI-CAD) for pulmonary tuberculosis to community-based active case finding. Trop Med Health 2024; 52:2. [PMID: 38163868] [PMCID: PMC10759734] [DOI: 10.1186/s41182-023-00560-6]
Abstract
BACKGROUND Artificial intelligence-based computer-aided detection (AI-CAD) for tuberculosis (TB) has become commercially available, and several studies have evaluated its performance for pulmonary TB in clinical settings. However, little is known about its applicability to community-based active case-finding (ACF) for TB. METHODS We analysed an anonymized data set obtained from a community-based ACF in Cambodia, targeting persons aged 55 years or over, persons with any TB symptoms, such as chronic cough, and persons at risk of TB, including household contacts. All participants in the ACF were screened by chest radiography (CXR) read by Cambodian doctors, followed by an Xpert test when they were eligible for sputum examination. Interpretation by an experienced chest physician and abnormality scoring by a newly developed AI-CAD were retrospectively conducted for the CXR images. With Xpert-positive TB or human interpretation as the reference, receiver operating characteristic (ROC) curves were drawn to evaluate AI-CAD performance by the area under the ROC curve (AUROC). In addition, its applicability to community-based ACFs in Cambodia was examined. RESULTS TB scores of the AI-CAD were significantly associated with the CXR classifications indicating the severity of TB disease, and its AUROC against the bacteriological reference was 0.86 (95% confidence interval 0.83-0.89). Using a threshold for triage purposes, the human reading and bacteriological examination needed fell to 21% and 15%, respectively, while detecting 95% of Xpert-positive TB in the ACF. For screening purposes, 98% of Xpert-positive TB cases could be detected. CONCLUSIONS AI-CAD is applicable to community-based ACF in high-TB-burden settings, where experienced human readers of CXR images are scarce. Its use in developing countries has the potential to expand CXR screening in community-based ACFs, with a substantial decrease in the workload on human readers and laboratory labour. Further studies are needed to generalize the results to other countries by increasing the sample size and comparing AI-CAD performance with that of more human readers.
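The triage idea in this abstract, picking an AI score cut-off that keeps roughly 95% of Xpert-positive cases while shrinking the human-reading workload, can be sketched with synthetic scores. Both the data and the threshold rule below are invented for illustration; they are not the study's scoring model.

```python
# Hedged sketch of triage-threshold selection on synthetic AI scores.
import numpy as np

def triage_threshold(scores, labels, target_sensitivity=0.95):
    """Lowest score cut-off that keeps at least target_sensitivity of the
    positive cases at or above the threshold."""
    pos = np.sort(scores[labels == 1])
    k = int(np.floor((1.0 - target_sensitivity) * len(pos)))
    return pos[k]

rng = np.random.default_rng(0)
scores = np.concatenate([rng.uniform(0.5, 1.0, 100),   # synthetic TB-positive
                         rng.uniform(0.0, 0.6, 900)])  # synthetic TB-negative
labels = np.concatenate([np.ones(100, int), np.zeros(900, int)])

thr = triage_threshold(scores, labels)
referred = (scores >= thr).mean()              # fraction still needing human reading
sens = (scores[labels == 1] >= thr).mean()     # sensitivity among positives
```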
Affiliation(s)
- Kosuke Okada
- The Research Institute of Tuberculosis (RIT), Japan Anti-Tuberculosis Association (JATA), Tokyo, Japan.
- Department of International Programme, Japan Anti-Tuberculosis Association (JATA), Tokyo, Japan.
- Norio Yamada
- The Research Institute of Tuberculosis (RIT), Japan Anti-Tuberculosis Association (JATA), Tokyo, Japan
- Kiyoko Takayanagi
- Fukujuji Hospital, Japan Anti-Tuberculosis Association (JATA), Tokyo, Japan
- Yuta Hiasa
- Imaging Technology Center, ICT Strategy Division, Fujifilm Corporation, Tokyo, Japan
- Yoshiro Kitamura
- Imaging Technology Center, ICT Strategy Division, Fujifilm Corporation, Tokyo, Japan
- Yutaka Hoshino
- The Research Institute of Tuberculosis (RIT), Japan Anti-Tuberculosis Association (JATA), Tokyo, Japan
- Susumu Hirao
- The Research Institute of Tuberculosis (RIT), Japan Anti-Tuberculosis Association (JATA), Tokyo, Japan
- Takashi Yoshiyama
- The Research Institute of Tuberculosis (RIT), Japan Anti-Tuberculosis Association (JATA), Tokyo, Japan
- Fukujuji Hospital, Japan Anti-Tuberculosis Association (JATA), Tokyo, Japan
- Ikushi Onozaki
- The Research Institute of Tuberculosis (RIT), Japan Anti-Tuberculosis Association (JATA), Tokyo, Japan
- Department of International Programme, Japan Anti-Tuberculosis Association (JATA), Tokyo, Japan
- Seiya Kato
- The Research Institute of Tuberculosis (RIT), Japan Anti-Tuberculosis Association (JATA), Tokyo, Japan
10
Fruchtman Brot H, Mango VL. Artificial intelligence in breast ultrasound: application in clinical practice. Ultrasonography 2024; 43:3-14. [PMID: 38109894] [PMCID: PMC10766882] [DOI: 10.14366/usg.23116]
Abstract
Ultrasound (US) is a widely accessible and extensively used tool for breast imaging. It is commonly used as an additional screening tool, especially for women with dense breast tissue. Advances in artificial intelligence (AI) have led to the development of various AI systems that assist radiologists in identifying and diagnosing breast lesions using US. This article provides an overview of the background and supporting evidence for the use of AI in handheld breast US. It discusses the impact of AI on clinical workflow, covering breast cancer detection, diagnosis, prediction of molecular subtypes, evaluation of axillary lymph node status, and response to neoadjuvant chemotherapy. Additionally, the article highlights the potential significance of AI in breast US for low- and middle-income countries.
11
Lou S, Du F, Song W, Xia Y, Yue X, Yang D, Cui B, Liu Y, Han P. Artificial intelligence for colorectal neoplasia detection during colonoscopy: a systematic review and meta-analysis of randomized clinical trials. EClinicalMedicine 2023; 66:102341. [PMID: 38078195] [PMCID: PMC10698672] [DOI: 10.1016/j.eclinm.2023.102341]
Abstract
BACKGROUND The use of artificial intelligence (AI) in detecting colorectal neoplasia during colonoscopy holds the potential to enhance adenoma detection rates (ADRs) and reduce adenoma miss rates (AMRs). However, varied outcomes have been observed across studies. Thus, this study aimed to evaluate the potential advantages and disadvantages of employing AI-aided systems during colonoscopy. METHODS Using Medical Subject Headings (MeSH) terms and keywords, a comprehensive electronic literature search was performed of the Embase, Medline, and the Cochrane Library databases from the inception of each database until October 04, 2023, in order to identify randomized controlled trials (RCTs) comparing AI-assisted with standard colonoscopy for detecting colorectal neoplasia. Primary outcomes included AMR, ADR, and adenomas detected per colonoscopy (APC). Secondary outcomes comprised the poly missed detection rate (PMR), poly detection rate (PDR), and poly detected per colonoscopy (PPC). We utilized random-effects meta-analyses with Hartung-Knapp adjustment to consolidate results. The prediction interval (PI) and I2 statistics were utilized to quantify between-study heterogeneity. Moreover, meta-regression and subgroup analyses were performed to investigate the potential sources of heterogeneity. This systematic review and meta-analysis is registered with PROSPERO (CRD42023428658). FINDINGS This study encompassed 33 trials involving 27,404 patients. Those undergoing AI-aided colonoscopy experienced a significant decrease in PMR (RR, 0.475; 95% CI, 0.294-0.768; I2 = 87.49%) and AMR (RR, 0.495; 95% CI, 0.390-0.627; I2 = 48.76%). Additionally, a significant increase in PDR (RR, 1.238; 95% CI, 1.158-1.323; I2 = 81.67%) and ADR (RR, 1.242; 95% CI, 1.159-1.332; I2 = 78.87%), along with a significant increase in the rates of PPC (IRR, 1.388; 95% CI, 1.270-1.517; I2 = 91.99%) and APC (IRR, 1.390; 95% CI, 1.277-1.513; I2 = 86.24%), was observed. 
This resulted in 0.271 more PPCs (95% CI, 0.144-0.259; I2 = 65.61%) and 0.202 more APCs (95% CI, 0.144-0.259; I2 = 68.15%). INTERPRETATION AI-aided colonoscopy significantly enhanced the detection of colorectal neoplasia, likely by reducing the miss rate. However, future studies should focus on evaluating the cost-effectiveness and long-term benefits of AI-aided colonoscopy in reducing cancer incidence. FUNDING This work was supported by the Heilongjiang Provincial Natural Science Foundation of China (LH2023H096), the Postdoctoral research project in Heilongjiang Province (LBH-Z22210), the National Natural Science Foundation of China's General Program (82072640) and the Outstanding Youth Project of Heilongjiang Natural Science Foundation (YQ2021H023).
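The pooled ratios above come from a random-effects meta-analysis with the Hartung-Knapp adjustment. The core mechanics can be sketched in a few lines; this is a simplified illustration using DerSimonian-Laird between-study variance, not the authors' analysis code, which also covers prediction intervals, meta-regression, and subgroup analyses:

```python
import math

def pool_risk_ratios(rrs, ci_los, ci_his):
    """DerSimonian-Laird random-effects pooling of risk ratios, with the
    Hartung-Knapp variance adjustment and the I^2 heterogeneity statistic."""
    y = [math.log(r) for r in rrs]                        # log risk ratios
    se = [(math.log(h) - math.log(l)) / 3.92              # SE backed out of a 95% CI
          for l, h in zip(ci_los, ci_his)]
    w = [1.0 / s**2 for s in se]                          # fixed-effect weights
    k, sw = len(y), sum(w)
    y_fe = sum(wi * yi for wi, yi in zip(w, y)) / sw
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    c = sw - sum(wi**2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)                    # between-study variance
    w_re = [1.0 / (s**2 + tau2) for s in se]
    mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    # Hartung-Knapp: variance from weighted residuals, tested against t(k-1)
    var_hk = sum(wi * (yi - mu) ** 2 for wi, yi in zip(w_re, y)) / ((k - 1) * sum(w_re))
    i2 = 100.0 * max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
    return math.exp(mu), math.sqrt(var_hk), i2
```

Pooling two hypothetical risk ratios that disagree yields a pooled RR between them, a widened Hartung-Knapp standard error, and an I² reflecting their disagreement.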
Affiliation(s)
- Shenghan Lou, Fenqi Du, Wenjie Song, Yixiu Xia, Xinyu Yue, Da Yang, Binbin Cui, Yanlong Liu: Department of Colorectal Surgery, Harbin Medical University Cancer Hospital, No.150 Haping Road, Harbin, Heilongjiang, 150081, China
- Peng Han: Department of Colorectal Surgery, Harbin Medical University Cancer Hospital, No.150 Haping Road, Harbin, Heilongjiang, 150081, China; Key Laboratory of Tumor Immunology in Heilongjiang, No.150 Haping Road, Harbin, Heilongjiang, 150081, China

12
Wang J, Sun H, Jiang K, Cao W, Chen S, Zhu J, Yang X, Zheng J. CAPNet: Context attention pyramid network for computer-aided detection of microcalcification clusters in digital breast tomosynthesis. Comput Methods Programs Biomed 2023; 242:107831. [PMID: 37783114 DOI: 10.1016/j.cmpb.2023.107831] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/26/2022] [Revised: 12/25/2022] [Accepted: 09/23/2023] [Indexed: 10/04/2023]
Abstract
BACKGROUND AND OBJECTIVE Computer-aided detection (CADe) of microcalcification clusters (MCs) in digital breast tomosynthesis (DBT) is crucial in the early diagnosis of breast cancer. Although convolutional neural network (CNN)-based detection models have achieved excellent performance in medical lesion detection, they are subject to some limitations in MC detection: 1) Most existing models employ the feature pyramid network (FPN) for multi-scale object detection; however, the rough feature sharing between adjacent layers in the FPN may limit the detection ability for small and low-contrast MCs; and 2) the MCs region only accounts for a small part of the annotation box, so the features extracted indiscriminately within the whole box may easily be affected by the background. In this paper, we develop a novel CNN-based CADe method to alleviate the impacts of the above limitations for the accurate and rapid detection of MCs in DBT. METHODS The proposed method has two parts: a novel context attention pyramid network (CAPNet) for intra-layer MC detection in two-dimensional (2D) slices and a three-dimensional (3D) aggregation procedure for aggregating 2D intra-layer MCs into a 3D result according to their connectivity in 3D space. The proposed CAPNet is based on an anchor-free and one-stage detection architecture and contains a context feature selection fusion (CFSF) module and a microcalcification response (MCR) branch. The CFSF module can efficiently enrich shallow layers' features by the complementary selection of local context features, aiming to reduce the missed detection of small and low-contrast MCs. The MCR branch is a one-layer branch parallel to the classification branch, which can alleviate the influence of the background region within the annotation box on feature extraction and enhance the ability of the model to distinguish MCs from normal breast tissue. 
RESULTS We performed a comparison experiment on an in-house clinical dataset with 648 DBT volumes, and the proposed method achieved impressive performance with a sensitivity of 91.56 % at 1 false positive per DBT volume (FPs/volume) and 93.51 % at 2 FPs/volume, outperforming other representative detection models. CONCLUSIONS The experimental results indicate that the proposed method is effective in the detection of MCs in DBT. This method can provide objective, accurate, and quick diagnostic suggestions for radiologists, presenting potential clinical value for early breast cancer screening.
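Operating points such as "91.56% sensitivity at 1 FP/volume" are read off a free-response ROC (FROC) curve. A minimal sketch of how one such point is computed from scored detections (hypothetical data, not the paper's method code; assumes each detection has already been matched to at most one ground-truth MC):

```python
def sensitivity_at_fp_rate(detections, n_gt, n_volumes, max_fp_per_volume):
    """FROC-style operating point: highest sensitivity reachable while the
    false-positive rate stays at or below max_fp_per_volume.

    detections: (confidence_score, is_true_positive) pairs pooled over all
    volumes; n_gt: total ground-truth lesions; n_volumes: volumes scanned."""
    tp = fp = 0
    best_sensitivity = 0.0
    # sweep the confidence threshold from high to low
    for score, is_tp in sorted(detections, key=lambda d: -d[0]):
        if is_tp:
            tp += 1
        else:
            fp += 1
        if fp / n_volumes <= max_fp_per_volume:
            best_sensitivity = tp / n_gt
    return best_sensitivity
```

Tightening the FP budget lowers the achievable sensitivity, which is why the paper reports both the 1 and 2 FPs/volume operating points.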
Affiliation(s)
- Jingkun Wang, Haotian Sun, Weiwei Cao, Xiaodong Yang, Jian Zheng: School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China; Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Ke Jiang, Shuangqing Chen: Gusu School, Nanjing Medical University, Suzhou 215006, China; Department of Radiology, the Affiliated Suzhou Hospital of Nanjing Medical University, Suzhou 215000, China
- Jianbing Zhu: Suzhou Science & Technology Town Hospital, Gusu School, Nanjing Medical University, Suzhou 215153, China

13
Fujioka T, Kubota K, Hsu JF, Chang RF, Sawada T, Ide Y, Taruno K, Hankyo M, Kurita T, Nakamura S, Tateishi U, Takei H. Examining the effectiveness of a deep learning-based computer-aided breast cancer detection system for breast ultrasound. J Med Ultrason (2001) 2023; 50:511-520. [PMID: 37400724 PMCID: PMC10556122 DOI: 10.1007/s10396-023-01332-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 02/13/2023] [Accepted: 05/03/2023] [Indexed: 07/05/2023]
Abstract
PURPOSE This study aimed to evaluate the clinical usefulness of a deep learning-based computer-aided detection (CADe) system for breast ultrasound. METHODS The set of 88 training images was expanded to 14,000 positive images and 50,000 negative images. The CADe system was trained to detect lesions in real-time using deep learning with an improved model of YOLOv3-tiny. Eighteen readers evaluated 52 test image sets with and without CADe. Jackknife alternative free-response receiver operating characteristic analysis was used to estimate the effectiveness of this system in improving lesion detection. RESULTS The area under the curve (AUC) for image sets was 0.7726 with CADe and 0.6304 without CADe, a difference of 0.1422, indicating that the AUC with CADe was significantly higher than that without CADe (p < 0.0001). The sensitivity per case was higher with CADe (95.4%) than without CADe (83.7%). The specificity of suspected breast cancer cases with CADe (86.6%) was higher than that without CADe (65.7%). The number of false positives per case (FPC) was lower with CADe (0.22) than without CADe (0.43). CONCLUSION The use of a deep learning-based CADe system for breast ultrasound by readers significantly improved their reading ability. This system is expected to contribute to highly accurate breast cancer screening and diagnosis.
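The per-case figures quoted (sensitivity, specificity, and false positives per case) reduce to simple counts over the reader study. A small sketch with hypothetical counts, not the study's data:

```python
def reader_metrics(tp, fp, tn, fn, n_cases):
    """Per-case reader-study summaries: sensitivity, specificity, and
    false positives per case (FPC)."""
    return {
        "sensitivity": tp / (tp + fn),   # detected cancers / all cancers
        "specificity": tn / (tn + fp),   # correct negatives / all negatives
        "fpc": fp / n_cases,             # false positives averaged per case
    }
```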
Affiliation(s)
- Tomoyuki Fujioka, Ukihide Tateishi: Department of Diagnostic Radiology, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo, 113-8510, Japan
- Kazunori Kubota: Department of Diagnostic Radiology, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo, 113-8510, Japan; Department of Radiology, Dokkyo Medical University Saitama Medical Center, 2-1-50 Minami-Koshigaya, Koshigaya, Saitama, 343-8555, Japan
- Jen Feng Hsu, Ruey Feng Chang: Department of Computer Science and Information Engineering, National Taiwan University, No. 1, Sec. 4, Roosevelt Rd, Taipei, 10617, Taiwan, ROC
- Terumasa Sawada: Department of Breast Surgery, NTT Medical Center Tokyo, 5-9-22 Higashi-Gotanda, Shinagawa-ku, Tokyo, 141-8625, Japan; Department of Breast Surgical Oncology, Department of Surgery, Showa University School of Medicine, 1-5-8 Hatanodai, Shinagawa-ku, Tokyo, 142-8666, Japan
- Yoshimi Ide: Department of Breast Surgical Oncology, Department of Surgery, Showa University School of Medicine, 1-5-8 Hatanodai, Shinagawa-ku, Tokyo, 142-8666, Japan; Department of Breast Oncology, Kikuna Memorial Hospital, 4-4-27 Kikuna, Kohoku-ku, Yokohama, 222-0011, Japan
- Kanae Taruno, Seigo Nakamura: Department of Breast Surgical Oncology, Department of Surgery, Showa University School of Medicine, 1-5-8 Hatanodai, Shinagawa-ku, Tokyo, 142-8666, Japan
- Meishi Hankyo, Tomoko Kurita, Hiroyuki Takei: Department of Breast Surgical Oncology, Nippon Medical School, 1-1-5 Sendagi, Bunkyo-ku, Tokyo, 113-8602, Japan

14
Bhure U, Cieciera M, Lehnick D, Del Sol Pérez Lago M, Grünig H, Lima T, Roos JE, Strobel K. Incorporation of CAD (computer-aided detection) with thin-slice lung CT in routine 18F-FDG PET/CT imaging read-out protocol for detection of lung nodules. Eur J Hybrid Imaging 2023; 7:17. [PMID: 37718372 PMCID: PMC10505603 DOI: 10.1186/s41824-023-00177-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Received: 08/15/2023] [Accepted: 08/29/2023] [Indexed: 09/19/2023] Open
Abstract
OBJECTIVE To evaluate the detection rate and performance of 18F-FDG PET alone (PET), the combination of PET and low-dose thick-slice CT (PET/lCT), PET and diagnostic thin-slice CT (PET/dCT), and additional computer-aided detection (PET/dCT/CAD) for lung nodules (LN)/metastases in tumor patients, along with assessment of inter-reader agreement and the time required by each technique. METHODS In 100 tumor patients (56 male, 44 female; age range: 22-93 years, mean age: 60 years), 18F-FDG PET images, low-dose CT with shallow breathing (5 mm slice thickness), and diagnostic thin-slice CT (1 mm slice thickness) in full inspiration were retrospectively evaluated by three readers with variable experience (junior, mid-level, and senior) for the presence of lung nodules/metastases and additionally analyzed with CAD. The time taken for each analysis and the number of nodules detected were assessed. Sensitivity, specificity, positive and negative predictive value, accuracy, and receiver operating characteristic (ROC) analysis were calculated for each technique. Histopathology and/or imaging follow-up served as the reference standard for the diagnosis of metastases. RESULTS The three readers, on average, detected 40 LN in 17 patients with PET only, 121 LN in 37 patients using lCT, 283 LN in 60 patients with dCT, and 282 LN in 53 patients with CAD. On average, CAD detected 49 extra LN missed by the three readers without CAD, whereas CAD overall missed 53 LN. There was very good inter-reader agreement regarding the diagnosis of metastases for all four techniques (kappa: 0.84-0.93). The average time required for the evaluation of LN in PET, lCT, dCT, and CAD was 25, 31, 60, and 40 s, respectively; CAD assistance led to an average 33% reduction in the time required to evaluate lung nodules compared to dCT. The time-saving effect was highest in the least experienced reader.
Regarding the diagnosis of metastases, combined sensitivity and specificity across all readers were 47.8%/96.2% for PET, 80.0%/81.9% for PET/lCT, 100%/56.7% for PET/dCT, and 95.6%/64.3% for PET/CAD. No significant difference was observed in ROC AUC (area under the curve) between the imaging methods. CONCLUSION Implementation of CAD for the detection of lung nodules/metastases in routine 18F-FDG PET/CT read-out is feasible. The combination of diagnostic thin-slice CT and CAD significantly increases the detection rate of lung nodules in tumor patients compared to the standard PET/CT read-out. PET combined with low-dose CT showed the best balance between sensitivity and specificity regarding the diagnosis of metastases per patient. CAD reduces the time required for lung nodule/metastasis detection, especially for less experienced readers.
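The ROC AUC comparisons above rest on the fact that AUC equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the Mann-Whitney statistic). A brute-force sketch for small samples, illustrative only; reader studies typically use dedicated software for significance testing:

```python
def roc_auc(pos_scores, neg_scores):
    """ROC AUC via the Mann-Whitney U statistic: the fraction of
    positive/negative pairs ranked correctly, with ties counted as half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```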
Affiliation(s)
- Ujwal Bhure, Matthäus Cieciera, Hannes Grünig, Thiago Lima, Justus E Roos: Department of Nuclear Medicine and Radiology, Cantonal Hospital Lucerne, Lucerne, Switzerland
- Dirk Lehnick: Faculty of Health Sciences and Medicine, University of Lucerne, Frohburgstrasse 3, 6002, Lucerne, Switzerland; Clinical Trial Unit Central Switzerland, University of Lucerne, 6002, Lucerne, Switzerland
- Klaus Strobel: Department of Nuclear Medicine and Radiology, Cantonal Hospital Lucerne, Lucerne, Switzerland; Division of Nuclear Medicine, Department of Nuclear Medicine and Radiology, Cantonal Hospital Lucerne, 6000, Lucerne 16, Switzerland

15
Hong W, Hwang EJ, Park CM, Goo JM. Effects of Implementing Artificial Intelligence-Based Computer-Aided Detection for Chest Radiographs in Daily Practice on the Rate of Referral to Chest Computed Tomography in Pulmonology Outpatient Clinic. Korean J Radiol 2023; 24:890-902. [PMID: 37634643 PMCID: PMC10462895 DOI: 10.3348/kjr.2023.0255] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 01/08/2023] [Revised: 03/19/2023] [Accepted: 06/26/2023] [Indexed: 08/29/2023] Open
Abstract
OBJECTIVE The clinical impact of artificial intelligence-based computer-aided detection (AI-CAD) beyond diagnostic accuracy remains uncertain. We aimed to investigate the influence of the clinical implementation of AI-CAD for chest radiograph (CR) interpretation in daily practice on the rate of referral for chest computed tomography (CT). MATERIALS AND METHODS AI-CAD was implemented in clinical practice at the Seoul National University Hospital. CRs obtained from patients who visited the pulmonology outpatient clinics before (January-December 2019) and after (January-December 2020) implementation were included in this study. After implementation, the referring pulmonologist requested CRs with or without AI-CAD analysis. We conducted multivariable logistic regression analyses to evaluate the associations between using AI-CAD and the following study outcomes: the rate of chest CT referral, defined as request and actual acquisition of chest CT within 30 days after CR acquisition, and the CT referral rates separately for subsequent positive and negative CT results. Multivariable analyses included various covariates such as patient age and sex, time of CR acquisition (before versus after AI-CAD implementation), referring pulmonologist, nature of the CR examination (baseline versus follow-up examination), and presence of radiology reports at the time of the pulmonology visit. RESULTS A total of 28,546 CRs from 14,565 patients (mean age: 67 years; 7130 males) and 25,888 CRs from 12,929 patients (mean age: 67 years; 6435 males) before and after AI-CAD implementation were included. The use of AI-CAD was independently associated with increased chest CT referrals (odds ratio [OR], 1.33; P = 0.008) and referrals with subsequent negative chest CT results (OR, 1.46; P = 0.005). Meanwhile, referrals with positive chest CT results were not significantly associated with AI-CAD use (OR, 1.08; P = 0.647).
CONCLUSION The use of AI-CAD for CR interpretation in pulmonology outpatients was independently associated with an increased frequency of overall referrals for chest CT scans and referrals with subsequent negative results.
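The odds ratios above are adjusted estimates from multivariable logistic regression. The unadjusted analogue from a 2×2 table, with a Woolf-method 95% CI, is easy to sketch (hypothetical counts, not the study's data; adjusting for covariates requires fitting the full regression):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with Woolf 95% CI for a 2x2 table:
    a/b = exposed with/without outcome, c/d = unexposed with/without."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi
```

For example, if 20 of 100 AI-read CRs and 10 of 100 standard CRs led to a CT referral (made-up numbers), the unadjusted OR is 2.25.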
Affiliation(s)
- Wonju Hong: Department of Radiology, Hallym University Sacred Heart Hospital, Anyang, Republic of Korea
- Eui Jin Hwang: Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Republic of Korea
- Chang Min Park, Jin Mo Goo: Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Republic of Korea; Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Republic of Korea; Cancer Research Institute, Seoul National University, Seoul, Republic of Korea

16
Ngosa D, Moonga G, Shanaube K, Jacobs C, Ruperez M, Kasese N, Klinkenberg E, Schaap A, Mureithi L, Floyd S, Fidler S, Sichizya V, Maleya A, Ayles H. Assessment of non-tuberculosis abnormalities on digital chest x-rays with high CAD4TB scores from a tuberculosis prevalence survey in Zambia and South Africa. BMC Infect Dis 2023; 23:518. [PMID: 37553658 PMCID: PMC10408069 DOI: 10.1186/s12879-023-08460-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/20/2023] [Accepted: 07/14/2023] [Indexed: 08/10/2023] Open
Abstract
BACKGROUND Chest X-rays (CXRs) have traditionally been used to aid the diagnosis of TB-suggestive abnormalities. Using Computer-Aided Detection (CAD) algorithms, TB risk is quantified to assist with diagnostics. However, CXRs also capture other structural abnormalities. The non-TB abnormalities present in individuals whose CXRs have high CAD scores but who do not have bacteriologically confirmed TB are unknown. This represents a missed opportunity to extend novel CAD systems' potential to provide information on other non-TB abnormalities alongside TB. This study aimed to characterize and estimate the prevalence of non-TB abnormalities on digital CXRs with high CAD4TB scores from a TB prevalence survey in Zambia and South Africa. METHODOLOGY This was a cross-sectional analysis of clinical data of participants from the TREATS TB prevalence survey conducted in 21 communities in Zambia and South Africa. The study included individuals aged ≥ 15 years who had high CAD4TB scores (score ≥ 70) but had no bacteriologically confirmed TB in any of the samples submitted, were not on TB treatment, and had no history of TB. Two consultant radiologists reviewed the images for non-TB abnormalities. RESULTS Of the 525 CXRs reviewed, 46.7% (245/525) were reported to have non-TB abnormalities. Of these, 11.43% (28/245) had multiple non-TB abnormalities, while 88.57% (217/245) had a single non-TB abnormality. The readers had fair inter-rater agreement (r = 0.40). By anatomical location, non-TB abnormalities of the lung parenchyma (19%) were the most prevalent, followed by the pleura (15.4%) and the heart and great vessels (6.1%). Pleural effusion/thickening/calcification (8.8%) and cardiomegaly (5%) were the most prevalent individual non-TB abnormalities. Pneumonia not typical of pulmonary TB (2.7%) and benign/malignant masses/nodules (2.1%) were also reported.
CONCLUSION A wide range of non-TB abnormalities can be identified on digital CXRs among individuals who have high CAD4TB scores but do not have bacteriologically confirmed TB. Adapting AI systems such as CAD4TB to simultaneously identify other causes of abnormal CXRs alongside TB could be useful in non-facility-based screening programs to better link cases to appropriate care.
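Inter-rater agreement figures like the 0.40 above are commonly computed as Cohen's kappa, which corrects raw agreement for chance. A minimal sketch (illustrative only; the abstract does not specify the exact agreement statistic beyond the reported value):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from each rater's label frequencies."""
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    categories = set(rater_a) | set(rater_b)
    p_exp = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)
```

Kappa runs from -1 (systematic disagreement) through 0 (chance-level) to 1 (perfect agreement); values around 0.21-0.40 are conventionally labeled "fair".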
Affiliation(s)
- Dennis Ngosa, Given Moonga, Choolwe Jacobs: Department of Epidemiology and Biostatistics, School of Public Health, The University of Zambia, Lusaka, Zambia
- Kwame Shanaube, Nkatya Kasese, Ab Schaap: Zambia Aids Related Tuberculosis (ZAMBART), Lusaka, Zambia
- Maria Ruperez, Sian Floyd: London School of Hygiene and Tropical Medicine, London, UK
- Eveline Klinkenberg: London School of Hygiene and Tropical Medicine, London, UK; Department of Global Health, Amsterdam University Medical Centers, Amsterdam, the Netherlands
- Sarah Fidler: Department of Infectious Disease, Faculty of Medicine, Imperial College London, London, UK
- Helen Ayles: Zambia Aids Related Tuberculosis (ZAMBART), Lusaka, Zambia; London School of Hygiene and Tropical Medicine, London, UK

17
Sun R, Wei C, Jiang Z, Huang G, Xie Y, Nie S. Weakly Supervised Breast Lesion Detection in Dynamic Contrast-Enhanced MRI. J Digit Imaging 2023; 36:1553-1564. [PMID: 37253896 PMCID: PMC10406986 DOI: 10.1007/s10278-023-00846-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/13/2022] [Revised: 05/05/2023] [Accepted: 05/08/2023] [Indexed: 06/01/2023] Open
Abstract
Currently, obtaining accurate medical annotations requires substantial labor and time, which largely limits the development of supervised learning-based tumor detection tasks. In this work, we investigated a weakly supervised learning model for detecting breast lesions in dynamic contrast-enhanced MRI (DCE-MRI) with only image-level labels. Two hundred fifty-four normal and 398 abnormal cases with pathologically confirmed lesions were retrospectively enrolled into the breast dataset, which was divided into the training set (80%), validation set (10%), and testing set (10%) at the patient level. First, the second image series S2 after the injection of a contrast agent was acquired from the 3.0-T, T1-weighted dynamic enhanced MR imaging sequences. Second, a feature pyramid network (FPN) with a convolutional block attention module (CBAM) was proposed to extract multi-scale feature maps from the modified classification network VGG16. Then, initial location information was obtained from the heatmaps generated using the layer class activation mapping algorithm (Layer-CAM). Finally, the detection results for breast lesions were refined by a conditional random field (CRF). Accuracy, sensitivity, specificity, and area under the receiver operating characteristic (ROC) curve (AUC) were utilized for evaluation of image-level classification. Average precision (AP) was estimated for breast lesion localization. DeLong's test was used to compare the AUCs of different models for significance. The proposed model was effective, with accuracy of 95.2%, sensitivity of 91.6%, specificity of 99.2%, and AUC of 0.986. The AP for breast lesion detection was 84.1% using weakly supervised learning. Weakly supervised learning based on FPN combined with Layer-CAM facilitated automatic detection of breast lesions.
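The reported average precision (AP) for lesion localization summarizes the precision obtained at each correctly detected lesion down the score-ranked detection list. A compact sketch with hypothetical detections (VOC-style, without interpolation; not the authors' evaluation code):

```python
def average_precision(scores, is_tp, n_gt):
    """AP: precision accumulated at every true-positive hit, averaged over
    all n_gt ground-truth lesions (missed lesions contribute zero)."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    ap = 0.0
    for i in order:
        if is_tp[i]:
            tp += 1
            ap += tp / (tp + fp)   # precision at this recall point
        else:
            fp += 1
    return ap / n_gt
```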
Affiliation(s)
- Rong Sun, Chuanling Wei, Zhuoyun Jiang, Shengdong Nie: School of Health Science and Engineering, University of Shanghai for Science and Technology, No. 516 Jun-Gong Road, Shanghai, 200093, China
- Gang Huang: Shanghai University of Medicine & Health Sciences, Shanghai, China
- Yuanzhong Xie: Medical Imaging Center, Tai'an Central Hospital, No. 29 Long-Tan Road, Shandong, 271099, China

18
Agrawal A, Khatri GD, Khurana B, Sodickson AD, Liang Y, Dreizin D. A survey of ASER members on artificial intelligence in emergency radiology: trends, perceptions, and expectations. Emerg Radiol 2023; 30:267-277. [PMID: 36913061 PMCID: PMC10362990 DOI: 10.1007/s10140-023-02121-0] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Received: 01/28/2023] [Accepted: 02/28/2023] [Indexed: 03/14/2023]
Abstract
PURPOSE There is a growing body of diagnostic performance studies for emergency radiology-related artificial intelligence/machine learning (AI/ML) tools; however, little is known about user preferences, concerns, experiences, expectations, and the degree of penetration of AI tools in emergency radiology. Our aim is to conduct a survey of the current trends, perceptions, and expectations regarding AI among American Society of Emergency Radiology (ASER) members. METHODS An anonymous and voluntary online survey questionnaire was e-mailed to all ASER members, followed by two reminder e-mails. A descriptive analysis of the data was conducted, and results summarized. RESULTS A total of 113 members responded (response rate 12%). The majority were attending radiologists (90%) with greater than 10 years' experience (80%) and from an academic practice (65%). Most (55%) reported use of commercial AI CAD tools in their practice. Workflow prioritization based on pathology detection, injury or disease severity grading and classification, quantitative visualization, and auto-population of structured reports were identified as high-value tasks. Respondents overwhelmingly indicated a need for explainable and verifiable tools (87%) and the need for transparency in the development process (80%). Most respondents did not feel that AI would reduce the need for emergency radiologists in the next two decades (72%) or diminish interest in fellowship programs (58%). Negative perceptions pertained to potential for automation bias (23%), over-diagnosis (16%), poor generalizability (15%), negative impact on training (11%), and impediments to workflow (10%). CONCLUSION ASER member respondents are in general optimistic about the impact of AI in the practice of emergency radiology and its impact on the popularity of emergency radiology as a subspecialty. The majority expect to see transparent and explainable AI models with the radiologist as the decision-maker.
Affiliation(s)
- Anjali Agrawal: New Delhi operations, Teleradiology Solutions, Delhi, India
- Garvit D Khatri: Nuclear Medicine, Department of Radiology, University of Washington School of Medicine, Seattle, WA, USA
- Bharti Khurana, Aaron D Sodickson: Emergency Radiology, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Yuanyuan Liang: Epidemiology & Public Health, University of Maryland School of Medicine, Baltimore, MD, USA
- David Dreizin: Trauma and Emergency Radiology, Department of Diagnostic Radiology and Nuclear Medicine, R Adams Cowley Shock Trauma Center, University of Maryland School of Medicine, Baltimore, MD, USA

19
Mansour NM. Artificial Intelligence in Colonoscopy. Curr Gastroenterol Rep 2023; 25:122-129. [PMID: 37129831 DOI: 10.1007/s11894-023-00872-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Accepted: 04/03/2023] [Indexed: 05/03/2023]
Abstract
PURPOSE OF REVIEW Artificial intelligence (AI) is a rapidly growing field in gastrointestinal endoscopy, and its potential applications are virtually endless, with studies demonstrating use of AI for early gastric cancer, inflammatory bowel disease, Barrett's esophagus, capsule endoscopy, as well as other areas in gastroenterology. Much of the early studies and applications of AI in gastroenterology have revolved around colonoscopy, particularly with regards to real-time polyp detection and characterization. This review will cover much of the existing data on computer-aided detection (CADe), computer-aided diagnosis (CADx), and briefly discuss some other interesting applications of AI for colonoscopy, while also considering some of the challenges and limitations that exist around the use of AI for colonoscopy. RECENT FINDINGS Multiple randomized controlled trials have now been published which show a statistically significant improvement when using AI to improve adenoma detection and reduce adenoma miss rates during colonoscopy. There is also a growing pool of literature showing that AI can be helpful for characterizing/diagnosing colorectal polyps in real time. AI has also shown promise in other areas of colonoscopy, including polyp sizing and automated measurement and monitoring of quality metrics during colonoscopy. AI is a promising tool that has the ability to shape the future of gastrointestinal endoscopy, with much of the early data showing significant benefits to use of AI during colonoscopy. However, there remain several challenges that may delay or hamper the widespread use of AI in the field.
Affiliation(s)
- Nabil M Mansour
- Section of Gastroenterology and Hepatology, Baylor College of Medicine, 7200 Cambridge St., Suite 8B, Houston, TX, 77030, USA.

20
Dreizin D, Staziaki PV, Khatri GD, Beckmann NM, Feng Z, Liang Y, Delproposto ZS, Klug M, Spann JS, Sarkar N, Fu Y. Artificial intelligence CAD tools in trauma imaging: a scoping review from the American Society of Emergency Radiology (ASER) AI/ML Expert Panel. Emerg Radiol 2023; 30:251-265. [PMID: 36917287] [PMCID: PMC10640925] [DOI: 10.1007/s10140-023-02120-1]
Abstract
BACKGROUND AI/ML CAD tools can potentially improve outcomes in the high-stakes, high-volume model of trauma radiology. No prior scoping review has been undertaken to comprehensively assess tools in this subspecialty. PURPOSE To map the evolution and current state of trauma radiology CAD tools along key dimensions of technology readiness. METHODS Following a search of databases, abstract screening, and full-text document review, CAD tool maturity was charted using elements of data curation, performance validation, outcomes research, explainability, user acceptance, and funding patterns. Descriptive statistics were used to illustrate key trends. RESULTS A total of 4052 records were screened, and 233 full-text articles were selected for content analysis. Twenty-one papers described FDA-approved commercial tools, and 212 reported algorithm prototypes. Works ranged from foundational research to multi-reader multi-case trials with heterogeneous external data. Scalable convolutional neural network-based implementations increased steeply after 2016 and were used in all commercial products; however, options for explainability were narrow. Of FDA-approved tools, 9/10 performed detection tasks. Dataset sizes ranged from < 100 to > 500,000 patients, and commercialization coincided with public dataset availability. Cross-sectional torso datasets were uniformly small. Data curation methods with ground truth labeling by independent readers were uncommon. No papers assessed user acceptance, and no method included human-computer interaction. The USA and China had the highest research output and frequency of research funding. CONCLUSIONS Trauma imaging CAD tools are likely to improve patient care but are currently in an early stage of maturity, with few FDA-approved products for a limited number of uses. The scarcity of high-quality annotated data remains a major barrier.
Affiliation(s)
- David Dreizin
- Department of Diagnostic Radiology and Nuclear Medicine, R Adams Cowley Shock Trauma Center, University of Maryland School of Medicine, Baltimore, MD, USA.
- Pedro V Staziaki
- Cardiothoracic Imaging, Department of Radiology, Larner College of Medicine, University of Vermont, Burlington, VT, USA
- Garvit D Khatri
- Department of Radiology, University of Washington School of Medicine, Seattle, WA, USA
- Nicholas M Beckmann
- Memorial Hermann Orthopedic & Spine Hospital, McGovern Medical School at UTHealth, Houston, TX, USA
- Zhaoyong Feng
- Epidemiology & Public Health, University of Maryland School of Medicine, Baltimore, MD, USA
- Yuanyuan Liang
- Epidemiology & Public Health, University of Maryland School of Medicine, Baltimore, MD, USA
- Zachary S Delproposto
- Division of Emergency Radiology, Department of Radiology, University of Michigan, Ann Arbor, MI, USA
- J Stephen Spann
- Department of Radiology, University of Alabama at Birmingham Heersink School of Medicine, Birmingham, AL, USA
- Nathan Sarkar
- University of Maryland School of Medicine, Baltimore, MD, USA
- Yunting Fu
- Health Sciences and Human Services Library, University of Maryland, Baltimore, Baltimore, MD, USA

21
Nasser AA, Akhloufi MA. Deep Learning Methods for Chest Disease Detection Using Radiography Images. SN Comput Sci 2023; 4:388. [PMID: 37200562] [PMCID: PMC10173935] [DOI: 10.1007/s42979-023-01818-w]
Abstract
X-ray images are the most widely used medical imaging modality. They are affordable, safe, accessible, and can be used to identify many different diseases. Multiple computer-aided detection (CAD) systems using deep learning (DL) algorithms have recently been proposed to support radiologists in identifying diseases on medical images. In this paper, we propose a novel two-step approach for chest disease classification. The first step is multi-class classification of X-ray images by the affected organ into three classes (normal, lung disease, and heart disease). The second step is binary classification for each of seven specific lung and heart diseases. We use a consolidated dataset of 26,316 chest X-ray (CXR) images. Two deep learning methods are proposed in this paper. The first, DC-ChestNet, is based on ensembling deep convolutional neural network (DCNN) models. The second, VT-ChestNet, is based on a modified transformer model. VT-ChestNet achieved the best performance, outperforming DC-ChestNet and state-of-the-art models (DenseNet121, DenseNet201, EfficientNetB5, and Xception). VT-ChestNet obtained an area under the curve (AUC) of 95.13% for the first step. For the second step, it obtained an average AUC of 99.26% for heart diseases and an average AUC of 99.57% for lung diseases.
Affiliation(s)
- Adnane Ait Nasser
- Perception, Robotics, and Intelligent Machines (PRIME), Université de Moncton, Moncton, NB E1C 3E9, Canada
- Moulay A. Akhloufi
- Perception, Robotics, and Intelligent Machines (PRIME), Université de Moncton, Moncton, NB E1C 3E9, Canada

22
Kao EF, Hsieh YJ, Ke CC, Lin WC, Ou Yang FY. Automated identification of single and clustered chromosomes for metaphase image analysis. Heliyon 2023; 9:e16408. [PMID: 37251870] [PMCID: PMC10220244] [DOI: 10.1016/j.heliyon.2023.e16408]
Abstract
Background Chromosome analysis is laborious and time-consuming, and automated methods can significantly increase its efficiency. For automated analysis of chromosome images, single and clustered chromosomes must first be identified. Herein, we propose a feature-based method for distinguishing between single chromosomes and chromosome clusters. Method The proposed method comprises three main steps. First, chromosome objects are segmented from metaphase chromosome images. Second, seven features are extracted from each segmented object: the normalized area, area/boundary ratio, side branch index, exhaustive thresholding index, normalized minimum width, minimum concave angle, and maximum boundary shift. Finally, the segmented objects are classified as single chromosomes or chromosome clusters using a combination of the seven features. Results In total, 43,391 segmented objects, including 39,892 single chromosomes and 3,499 chromosome clusters, were used to evaluate the proposed method. The results show that the method achieves an accuracy of 98.92% by combining the seven features using a support vector machine. Conclusions The proposed method is highly effective in distinguishing between single and clustered chromosomes and can be used as a preprocessing step in automated chromosome image analysis.
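The final step described above combines the seven shape features with a support vector machine. As a rough, self-contained illustration (not the authors' code), the sketch below trains a linear SVM by Pegasos-style subgradient descent on synthetic stand-ins for the seven features; the feature values, class separation, and hyperparameters are all invented for the example.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=50, seed=0):
    """Pegasos-style subgradient descent on the hinge loss; labels y in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b, t = np.zeros(d), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            if y[i] * (X[i] @ w + b) < 1:      # margin violated: move toward the point
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:                               # margin satisfied: shrink only
                w = (1 - eta * lam) * w
    return w, b

# Synthetic stand-ins for the seven shape features: "clusters" are shifted
# in feature space relative to "single chromosomes" (purely illustrative).
rng = np.random.default_rng(1)
n = 200
singles = rng.normal(0.0, 1.0, size=(n, 7))
clusters = rng.normal(1.5, 1.0, size=(n, 7))
X = np.vstack([singles, clusters])
y = np.array([-1] * n + [1] * n)   # -1 = single chromosome, +1 = cluster

w, b = train_linear_svm(X, y)
acc = np.mean(np.sign(X @ w + b) == y)
```

On well-separated synthetic data like this, the combined rule classifies nearly all objects correctly; the paper's 98.92% accuracy is, of course, on real segmented objects with its specific features.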
Affiliation(s)
- E-Fong Kao
- Department of Medical Imaging and Radiological Sciences, College of Health Sciences, Kaohsiung Medical University, Kaohsiung, Taiwan
- Ya-Ju Hsieh
- Department of Medical Imaging and Radiological Sciences, College of Health Sciences, Kaohsiung Medical University, Kaohsiung, Taiwan
- Chien-Chih Ke
- Department of Medical Imaging and Radiological Sciences, College of Health Sciences, Kaohsiung Medical University, Kaohsiung, Taiwan
- Wan-Chi Lin
- Isotope Application Division, Institute of Nuclear Energy Research, Taoyuan, Taiwan
- Fang-Yu Ou Yang
- Isotope Application Division, Institute of Nuclear Energy Research, Taoyuan, Taiwan

23
Dikici E, Nguyen XV, Takacs N, Prevedello LM. Prediction of model generalizability for unseen data: Methodology and case study in brain metastases detection in T1-weighted contrast-enhanced 3D MRI. Comput Biol Med 2023; 159:106901. [PMID: 37068317] [DOI: 10.1016/j.compbiomed.2023.106901]
Abstract
BACKGROUND AND PURPOSE A medical AI system's generalizability describes the continuity of its performance across varying geographic, historical, and methodologic settings. Previous literature on this topic has mostly focused on "how" to achieve high generalizability (e.g., via larger datasets, transfer learning, data augmentation, model regularization schemes), with limited success. Instead, we aim to understand "when" generalizability is achieved: our study presents a medical AI system that can estimate its generalizability status for unseen data on the fly. MATERIALS AND METHODS We introduce a latent space mapping (LSM) approach utilizing a Fréchet distance loss to force the underlying training data distribution into a multivariate normal distribution. During deployment, the LSM distribution of a given test set is processed to detect its deviation from the forced distribution; hence, the AI system can predict its generalizability status for any previously unseen data set. If low model generalizability is detected, the user is informed by a warning message integrated into a sample deployment workflow. While the approach is applicable to most classification deep neural networks (DNNs), we demonstrate its application to a brain metastases (BM) detector for T1-weighted contrast-enhanced (T1c) 3D MRI. The BM detection model was trained using 175 T1c studies acquired internally (from the authors' institution) and tested using (1) 42 internally acquired exams and (2) 72 externally acquired exams from the publicly distributed Brain Mets dataset provided by the Stanford University School of Medicine. Generalizability scores, false positive (FP) rates, and sensitivities of the BM detector were computed for the test datasets.
RESULTS AND CONCLUSION The model predicted its generalizability to be low for 31% of the testing data (i.e., two of the internally and 33 of the externally acquired exams), producing (1) ∼13.5 false positives (FPs) at 76.1% BM detection sensitivity for the low-generalizability group and (2) ∼10.5 FPs at 89.2% BM detection sensitivity for the high-generalizability group. These results suggest that the proposed formulation enables a model to predict its generalizability for unseen data.
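The LSM approach above hinges on measuring how far a latent distribution drifts from the forced multivariate normal. A minimal numpy sketch of the underlying quantity, the Fréchet (2-Wasserstein) distance between two Gaussians, is shown below; it illustrates the metric only, is not the authors' implementation, and the example inputs are arbitrary.

```python
import numpy as np

def _sqrtm_psd(a):
    # Square root of a symmetric positive semi-definite matrix via eigendecomposition.
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

def frechet_distance(mu1, cov1, mu2, cov2):
    """Frechet (2-Wasserstein) distance between N(mu1, cov1) and N(mu2, cov2):
    d^2 = |mu1 - mu2|^2 + Tr(cov1 + cov2 - 2 (cov1^1/2 cov2 cov1^1/2)^1/2)."""
    s1 = _sqrtm_psd(cov1)
    cross = _sqrtm_psd(s1 @ cov2 @ s1)
    d2 = np.sum((mu1 - mu2) ** 2) + np.trace(cov1 + cov2 - 2.0 * cross)
    return float(np.sqrt(max(d2, 0.0)))

# Two toy Gaussians: identical unit covariances, means (0,0) and (3,4).
d = frechet_distance(np.zeros(2), np.eye(2), np.array([3.0, 4.0]), np.eye(2))
```

With equal covariances the covariance term vanishes and the distance reduces to the Euclidean distance between the means (here 5.0); a deployed check like the one described would threshold this value to flag out-of-distribution test sets.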
Affiliation(s)
- Engin Dikici
- The Ohio State University, College of Medicine, Department of Radiology, Columbus, OH, 43210, USA.
- Xuan V Nguyen
- The Ohio State University, College of Medicine, Department of Radiology, Columbus, OH, 43210, USA
- Noah Takacs
- The Ohio State University, College of Medicine, Department of Radiology, Columbus, OH, 43210, USA
- Luciano M Prevedello
- The Ohio State University, College of Medicine, Department of Radiology, Columbus, OH, 43210, USA

24
Hwang EJ, Goo JM, Nam JG, Park CM, Hong KJ, Kim KH. Conventional Versus Artificial Intelligence-Assisted Interpretation of Chest Radiographs in Patients With Acute Respiratory Symptoms in Emergency Department: A Pragmatic Randomized Clinical Trial. Korean J Radiol 2023; 24:259-270. [PMID: 36788769] [PMCID: PMC9971841] [DOI: 10.3348/kjr.2022.0651]
Abstract
OBJECTIVE It is unknown whether artificial intelligence-based computer-aided detection (AI-CAD) can enhance the accuracy of chest radiograph (CR) interpretation in real-world clinical practice. We aimed to compare the accuracy of CR interpretation assisted by AI-CAD with that of conventional interpretation in patients who presented to the emergency department (ED) with acute respiratory symptoms, using a pragmatic randomized controlled trial. MATERIALS AND METHODS Patients who underwent CRs for acute respiratory symptoms at the ED of a tertiary referral institution were randomly assigned to the intervention group (with assistance from AI-CAD for CR interpretation) or the control group (without AI assistance). A commercial AI-CAD system (Lunit INSIGHT CXR, version 2.0.2.0; Lunit Inc.) was used in the intervention group. Other clinical practices were consistent with standard procedures. The sensitivity and false-positive rate of CR interpretation by duty trainee radiologists for identifying acute thoracic diseases were the primary and secondary outcomes, respectively. The reference standard for acute thoracic disease was established based on a review of the patient's medical record at least 30 days after the ED visit. RESULTS We randomly assigned 3576 participants to either the intervention group (1761 participants; mean age ± standard deviation, 65 ± 17 years; 978 males; acute thoracic disease in 472 participants) or the control group (1815 participants; 64 ± 17 years; 988 males; acute thoracic disease in 491 participants). The sensitivity (67.2% [317/472] in the intervention group vs. 66.0% [324/491] in the control group; odds ratio, 1.02 [95% confidence interval, 0.70-1.49]; P = 0.917) and false-positive rate (19.3% [249/1289] vs. 18.5% [245/1324]; odds ratio, 1.00 [95% confidence interval, 0.79-1.26]; P = 0.985) of CR interpretation by duty radiologists were not associated with the use of AI-CAD.
CONCLUSION AI-CAD did not improve the sensitivity and false-positive rate of CR interpretation for diagnosing acute thoracic disease in patients with acute respiratory symptoms who presented to the ED.
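For readers who want to sanity-check the headline comparison, a crude (unadjusted) odds ratio can be recomputed from the reported counts. The sketch below is a generic 2x2 calculation with a Wald confidence interval; for sensitivity it yields ≈1.05 (0.81-1.38), slightly different from the reported 1.02 (0.70-1.49), which presumably comes from the trial's prespecified (possibly adjusted or clustered) model.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio for a 2x2 table [[a, b], [c, d]] with a Wald 95% CI."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Detected vs. missed acute thoracic disease:
# 317/472 detected with AI-CAD, 324/491 detected without.
or_, lo, hi = odds_ratio_ci(317, 472 - 317, 324, 491 - 324)
```

The mismatch between the crude and reported intervals is expected whenever a trial analyzes outcomes with a regression model rather than a raw 2x2 table.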
Affiliation(s)
- Eui Jin Hwang
- Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
- Jin Mo Goo
- Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
- Cancer Research Institute, Seoul National University, Seoul, Korea
- Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Korea.
- Ju Gang Nam
- Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
- Chang Min Park
- Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
- Cancer Research Institute, Seoul National University, Seoul, Korea
- Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Korea
- Ki Jeong Hong
- Department of Emergency Medicine, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
- Ki Hong Kim
- Department of Emergency Medicine, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea

25
Loizidou K, Elia R, Pitris C. Computer-aided breast cancer detection and classification in mammography: A comprehensive review. Comput Biol Med 2023; 153:106554. [PMID: 36646021] [DOI: 10.1016/j.compbiomed.2023.106554]
Abstract
Cancer is the second leading cause of mortality worldwide. Breast cancer accounts for ∼20% of all new cancer cases worldwide, making it a major cause of morbidity and mortality. Mammography is an effective screening tool for the early detection and management of breast cancer. However, the identification and interpretation of breast lesions is challenging even for expert radiologists. For that reason, several Computer-Aided Diagnosis (CAD) systems are being developed to assist radiologists in accurately detecting and/or classifying breast cancer. This review examines the recent literature on the automatic detection and/or classification of breast cancer in mammograms, using both conventional feature-based machine learning and deep learning algorithms. The review begins with a comparison of algorithms developed specifically for the detection and/or classification of two types of breast abnormalities, micro-calcifications and masses, followed by the use of sequential mammograms for improving algorithm performance. The available Food and Drug Administration (FDA)-approved CAD systems for triage and diagnosis of breast cancer in mammograms are subsequently presented. Finally, the open-access mammography datasets are described and potential opportunities for future work in this field are highlighted. This comprehensive review can serve as a thorough introduction to the field while also providing indicative directions to guide future applications.
Affiliation(s)
- Kosmia Loizidou
- KIOS Research and Innovation Center of Excellence, Department of Electrical and Computer Engineering, University of Cyprus, Nicosia, Cyprus.
- Rafaella Elia
- KIOS Research and Innovation Center of Excellence, Department of Electrical and Computer Engineering, University of Cyprus, Nicosia, Cyprus.
- Costas Pitris
- KIOS Research and Innovation Center of Excellence, Department of Electrical and Computer Engineering, University of Cyprus, Nicosia, Cyprus.

26
Chang D, Chen PT, Wang P, Wu T, Yeh AY, Lee PC, Sung YH, Liu KL, Wu MS, Yang D, Roth H, Liao WC, Wang W. Detection of pancreatic cancer with two- and three-dimensional radiomic analysis in a nationwide population-based real-world dataset. BMC Cancer 2023; 23:58. [PMID: 36650440] [PMCID: PMC9843893] [DOI: 10.1186/s12885-023-10536-8]
Abstract
BACKGROUND CT is the major detection tool for pancreatic cancer (PC). However, approximately 40% of PCs < 2 cm are missed on CT, underscoring a pressing need for tools to supplement radiologist interpretation. METHODS Contrast-enhanced CT studies of 546 patients with pancreatic adenocarcinoma diagnosed by histology/cytology between January 2005 and December 2019, and 733 CT studies of controls with a normal pancreas obtained during the same period at a tertiary referral center, were retrospectively collected to develop an automatic end-to-end computer-aided detection (CAD) tool for PC using two-dimensional (2D) and three-dimensional (3D) radiomic analysis with machine learning. The CAD tool was tested on a nationwide dataset comprising 1,477 CT studies (671 PCs, 806 controls) obtained from institutions throughout Taiwan. RESULTS The CAD tool achieved 0.918 (95% CI, 0.895-0.938) sensitivity and 0.822 (95% CI, 0.794-0.848) specificity in differentiating between studies with and without PC (area under curve 0.947, 95% CI, 0.936-0.958), with 0.707 (95% CI, 0.602-0.797) sensitivity for tumors < 2 cm. The positive and negative likelihood ratios for PC were 5.17 (95% CI, 4.45-6.01) and 0.10 (95% CI, 0.08-0.13), respectively. Where high specificity is needed, using the 2D and 3D analyses in series yielded 0.952 (95% CI, 0.934-0.965) specificity with a sensitivity of 0.742 (95% CI, 0.707-0.775), whereas using the 2D and 3D analyses in parallel to maximize sensitivity yielded 0.915 (95% CI, 0.891-0.935) sensitivity at a specificity of 0.791 (95% CI, 0.762-0.819). CONCLUSIONS The high accuracy and robustness of the CAD tool supported its potential for enhancing the detection of PC.
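The series/parallel trade-off in these results follows the textbook rules for combining two tests, and the likelihood ratios follow directly from sensitivity and specificity. The sketch below shows both calculations; the independence assumption in the combination formulas is a simplification (the paper reports empirical combined performance, not these formulas), and recomputing LR+ from the rounded 0.918/0.822 gives ≈5.16 versus the reported 5.17.

```python
def combine_series(sens1, spec1, sens2, spec2):
    # Positive only if BOTH tests are positive (favors specificity).
    # Assumes the two tests err independently, which rarely holds exactly.
    return sens1 * sens2, 1 - (1 - spec1) * (1 - spec2)

def combine_parallel(sens1, spec1, sens2, spec2):
    # Positive if EITHER test is positive (favors sensitivity).
    return 1 - (1 - sens1) * (1 - sens2), spec1 * spec2

def likelihood_ratios(sens, spec):
    # LR+ = sens / (1 - spec);  LR- = (1 - sens) / spec
    return sens / (1 - spec), (1 - sens) / spec

# Overall operating point reported for the combined CAD tool:
lr_pos, lr_neg = likelihood_ratios(0.918, 0.822)
```

An LR+ above 5 and an LR- near 0.1 mean a positive result substantially raises, and a negative result substantially lowers, the post-test odds of PC.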
Affiliation(s)
- Dawei Chang
- Data Science Degree Program, National Taiwan University and Academia Sinica, Taipei, Taiwan
- Po-Ting Chen
- Department of Medical Imaging, National Taiwan University Hospital, National Taiwan University College of Medicine, Taipei, Taiwan
- Pochuan Wang
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
- Tinghui Wu
- Institute of Applied Mathematical Sciences, National Taiwan University, No. 1, Section 4, Roosevelt Road, Taipei, 10617, Taiwan
- Andre Yanchen Yeh
- School of Medicine, National Taiwan University, Taipei, Taiwan
- Po-Chang Lee
- National Health Insurance Administration, Ministry of Health and Welfare, Taipei, Taiwan
- Yi-Hui Sung
- National Health Insurance Administration, Ministry of Health and Welfare, Taipei, Taiwan
- Kao-Lang Liu
- Department of Medical Imaging, National Taiwan University Hospital, National Taiwan University College of Medicine, Taipei, Taiwan
- Department of Medical Imaging, National Taiwan University Cancer Center, National Taiwan University College of Medicine, Taipei, Taiwan
- Ming-Shiang Wu
- Department of Internal Medicine, Division of Gastroenterology and Hepatology, National Taiwan University Hospital, National Taiwan University College of Medicine, Taipei, Taiwan
- Internal Medicine, National Taiwan University College of Medicine, No. 7, Chung-Shan South Road, Taipei, 10002, Taiwan
- Dong Yang
- NVIDIA, Bethesda, MD 20814, USA
- Holger Roth
- NVIDIA, Bethesda, MD 20814, USA
- Wei-Chih Liao
- Department of Internal Medicine, Division of Gastroenterology and Hepatology, National Taiwan University Hospital, National Taiwan University College of Medicine, Taipei, Taiwan
- Internal Medicine, National Taiwan University College of Medicine, No. 7, Chung-Shan South Road, Taipei, 10002, Taiwan
- Weichung Wang
- Institute of Applied Mathematical Sciences, National Taiwan University, No. 1, Section 4, Roosevelt Road, Taipei, 10617, Taiwan

27
Nakashima H, Kitazawa N, Fukuyama C, Kawachi H, Kawahira H, Momma K, Sakaki N. Clinical Evaluation of Computer-Aided Colorectal Neoplasia Detection Using a Novel Endoscopic Artificial Intelligence: A Single-Center Randomized Controlled Trial. Digestion 2023:1-9. [PMID: 36599306] [DOI: 10.1159/000528085]
Abstract
INTRODUCTION Computer-aided diagnostic systems are emerging in the field of gastrointestinal endoscopy. In this study, we assessed the clinical performance of computer-aided detection (CADe) of colonic adenomas using a new endoscopic artificial intelligence system. METHODS This was a single-center prospective randomized study including 415 participants allocated to a CADe group (n = 207) and a control group (n = 208). All endoscopic examinations were performed by experienced endoscopists. The performance of the CADe was assessed using the adenoma detection rate (ADR). Additionally, we compared the adenoma miss rate for the rectosigmoid colon (AMRrs) between the groups. RESULTS The basic demographic and procedural characteristics of the CADe and control groups were as follows: mean age, 54.9 and 55.9 years; male sex, 73.9% and 69.7% of participants; and mean withdrawal time, 411.8 and 399.0 s, respectively. The ADR was 59.4% in the CADe group and 47.6% in the control group (p = 0.018). The AMRrs was 11.9% in the CADe group and 26.0% in the control group (p = 0.037). CONCLUSION Colonoscopy with the CADe system yielded an ADR 11.8 percentage points higher than that achieved by experienced endoscopists alone, with no need to extend the examination time or request the assistance of additional medical staff. We believe that the novel CADe system can lead to considerable advances in colorectal cancer diagnosis.
Affiliation(s)
- Hirotaka Nakashima
- Department of Gastroenterology, Foundation for Detection of Early Gastric Carcinoma, Tokyo, Japan
- Naoko Kitazawa
- Department of Gastroenterology, Foundation for Detection of Early Gastric Carcinoma, Tokyo, Japan
- Chika Fukuyama
- Department of Gastroenterology, Cancer Institute Hospital, Japanese Foundation for Cancer Research, Tokyo, Japan
- Hiroshi Kawachi
- Department of Pathology, Cancer Institute Hospital, Japanese Foundation for Cancer Research, Tokyo, Japan
- Hiroshi Kawahira
- Medical Simulation Center, Jichi Medical University, Tochigi, Japan
- Kumiko Momma
- Department of Gastroenterology, Foundation for Detection of Early Gastric Carcinoma, Tokyo, Japan
- Nobuhiro Sakaki
- Department of Gastroenterology, Foundation for Detection of Early Gastric Carcinoma, Tokyo, Japan

28
Toda N, Hashimoto M, Iwabuchi Y, Nagasaka M, Takeshita R, Yamada M, Yamada Y, Jinzaki M. Validation of deep learning-based computer-aided detection software use for interpretation of pulmonary abnormalities on chest radiographs and examination of factors that influence readers' performance and final diagnosis. Jpn J Radiol 2023; 41:38-44. [PMID: 36121622] [DOI: 10.1007/s11604-022-01330-w]
Abstract
PURPOSE To evaluate the performance of deep learning-based computer-aided detection (CAD) software for detecting pulmonary nodules, masses, and consolidation on chest radiographs (CRs), and to examine the effect of readers' experience and data characteristics on sensitivity and final diagnosis. MATERIALS AND METHODS The CRs of 453 patients were retrospectively selected from two institutions. Among these CRs, 60 images with abnormal findings (pulmonary nodules, masses, and consolidation) and 140 without abnormal findings were randomly selected for sequential observer-performance testing. In the test, 12 readers (three radiologists, three pulmonologists, three non-pulmonology physicians, and three junior residents) interpreted the 200 images with and without CAD, and the findings were compared. The weighted alternative free-response receiver operating characteristic (wAFROC) figure of merit (FOM) was used to analyze observer performance. Lesions that readers initially missed but CAD detected were stratified by anatomic location and degree of subtlety, and the adoption rate was calculated. Fisher's exact test was used for comparison. RESULTS The mean wAFROC FOM score of the 12 readers significantly improved from 0.746 to 0.810 with software assistance (P = 0.007). In the reader group with < 6 years of experience, the mean FOM score significantly improved from 0.680 to 0.779 (P = 0.011), while that in the reader group with ≥ 6 years of experience increased from 0.811 to 0.841 (P = 0.12). The sensitivity of the CAD software and the adoption rate for subtlety level 2 or 3 (obscure) lesions were significantly lower than those for level 4 or 5 (distinct) lesions (50% vs. 93%, P < 0.001; and 55% vs. 74%, P = 0.04, respectively).
CONCLUSION CAD software use improved doctors' performance in detecting nodules/masses and consolidation on CRs, particularly for non-expert doctors, by preventing doctors from missing distinct lesions rather than helping them to detect obscure lesions.
Affiliation(s)
- Naoki Toda
- Department of Radiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- Masahiro Hashimoto
- Department of Radiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan.
- Yu Iwabuchi
- Department of Radiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- Misa Nagasaka
- Department of Radiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- Ryo Takeshita
- Department of Radiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- Minoru Yamada
- Department of Radiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- Yoshitake Yamada
- Department of Radiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- Masahiro Jinzaki
- Department of Radiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan

29
Dovbysh A, Shelehov I, Romaniuk A, Moskalenko R, Savchenko T. Decision-making support system for diagnosis of oncopathologies by histological images. J Pathol Inform 2023; 14:100193. [PMID: 36873571] [PMCID: PMC9975312] [DOI: 10.1016/j.jpi.2023.100193]
Abstract
The aim of the study is to increase the functional efficiency of a machine-learning decision support system (DSS) for diagnosing oncopathology on the basis of tissue morphology. A method of hierarchical information-extreme machine learning for the diagnostic DSS is proposed. The method is developed within the framework of a functional approach to modeling the cognitive processes of natural intelligence in forming and making classification decisions. In contrast to neural structures, this approach allows the diagnostic DSS to adapt to arbitrary conditions of histological imaging and to be retrained flexibly by expanding the alphabet of recognition classes that characterize different structures of tissue morphology. In addition, the decision rules built within the geometric approach are practically invariant to the dimensionality of the diagnostic feature space. The developed method makes it possible to create the information, algorithmic, and software components of an automated histologist workstation for diagnosing oncopathologies of different genesis. The machine-learning method is demonstrated on the example of diagnosing breast cancer.
Affiliation(s)
- Anatoliy Dovbysh, Department of Computer Science, Sumy State University, 2 Rymskogo-Korsakova Street, Sumy, Sumy Region, Ukraine
- Ihor Shelehov, Department of Computer Science, Sumy State University, 2 Rymskogo-Korsakova Street, Sumy, Sumy Region, Ukraine
- Anatolii Romaniuk, Department of Pathology, Sumy State University, 31 Pryvokzalna Street, Sumy, Sumy Region, Ukraine
- Roman Moskalenko, Department of Pathology, Sumy State University, 31 Pryvokzalna Street, Sumy, Sumy Region, Ukraine
- Taras Savchenko, Department of Computer Science, Sumy State University, 2 Rymskogo-Korsakova Street, Sumy, Sumy Region, Ukraine
30
Galati JS, Duve RJ, O'Mara M, Gross SA. Artificial intelligence in gastroenterology: A narrative review. Artif Intell Gastroenterol 2022; 3:117-141. [DOI: 10.35712/aig.v3.i5.117]
Abstract
Artificial intelligence (AI) is a complex concept, broadly defined in medicine as the development of computer systems to perform tasks that require human intelligence. It has the capacity to revolutionize medicine by increasing efficiency, expediting data and image analysis and identifying patterns, trends and associations in large datasets. Within gastroenterology, recent research efforts have focused on using AI in esophagogastroduodenoscopy, wireless capsule endoscopy (WCE) and colonoscopy to assist in diagnosis, disease monitoring, lesion detection and therapeutic intervention. The main objective of this narrative review is to provide a comprehensive overview of the research being performed within gastroenterology on AI in esophagogastroduodenoscopy, WCE and colonoscopy.
Affiliation(s)
- Jonathan S Galati, Department of Medicine, NYU Langone Health, New York, NY 10016, United States
- Robert J Duve, Department of Internal Medicine, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, Buffalo, NY 14203, United States
- Matthew O'Mara, Division of Gastroenterology, NYU Langone Health, New York, NY 10016, United States
- Seth A Gross, Division of Gastroenterology, NYU Langone Health, New York, NY 10016, United States
31
Quek TC, Takahashi K, Kang HG, Thakur S, Deshmukh M, Tseng RMWW, Nguyen H, Tham YC, Rim TH, Kim SS, Yanagi Y, Liew G, Cheng CY. Predictive, preventive, and personalized management of retinal fluid via computer-aided detection app for optical coherence tomography scans. EPMA J 2022; 13:547-560. [PMID: 36505893] [PMCID: PMC9727042] [DOI: 10.1007/s13167-022-00301-5]
Abstract
Aims Computer-aided detection systems for retinal fluid could benefit disease monitoring and management in patients with chronic age-related macular degeneration (AMD) and diabetic retinopathy (DR), assisting in disease prevention through early detection before progression to "wet AMD" pathology or diabetic macular edema (DME), which require treatment. We propose a proof-of-concept AI-based app to help predict fluid via a "fluid score", prevent fluid progression, and provide personalized serial monitoring, in the context of predictive, preventive, and personalized medicine (PPPM) for patients at risk of retinal fluid complications. Methods The app comprises a convolutional neural network-Vision Transformer (CNN-ViT)-based segmentation deep learning (DL) network, trained on a small dataset of 100 training images (augmented to 992 images) from the Singapore Epidemiology of Eye Diseases (SEED) study, together with a CNN-based classification network, trained on 8497 images, that distinguishes fluid from non-fluid optical coherence tomography (OCT) scans. Both networks were validated on external datasets. Results Internal testing of our segmentation network produced an IoU score of 83.0% (95% CI = 76.7-89.3%) and a DICE score of 90.4% (86.3-94.4%); on external testing, we obtained an IoU score of 66.7% (63.5-70.0%) and a DICE score of 78.7% (76.0-81.4%). Internal testing of our classification network produced an area under the receiver operating characteristic curve (AUC) of 99.18% and a Youden index threshold of 0.3806; on external testing, we obtained an AUC of 94.55%, with an accuracy of 94.98% and an F1 score of 85.73% at the Youden index threshold. Conclusion We have developed an AI-based app with an alternative transformer-based segmentation algorithm that could potentially be applied in the clinic with a PPPM approach for serial monitoring, and could allow the generation of retrospective data for research into the varied use of treatments for AMD and DR. The modular design of the app can be scaled to add more iterative features based on user feedback for more efficient monitoring. Further study and scaling up of the algorithm's dataset could boost its usability in a real-world clinical setting. Supplementary information The online version contains supplementary material available at 10.1007/s13167-022-00301-5.
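The IoU and DICE figures quoted above are the standard overlap metrics for binary segmentation masks. As an illustrative sketch (not the authors' code), they can be computed directly from predicted and ground-truth masks:

```python
import numpy as np

def iou_and_dice(pred, truth):
    """IoU and Dice overlap scores for two binary masks of equal shape."""
    pred, truth = np.asarray(pred, dtype=bool), np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    total = pred.sum() + truth.sum()
    iou = inter / union if union else 1.0    # two empty masks agree perfectly
    dice = 2 * inter / total if total else 1.0
    return float(iou), float(dice)

# Toy 1-D masks: 3 overlapping pixels, 4 predicted, 4 true
pred  = [1, 1, 1, 1, 0, 0]
truth = [0, 1, 1, 1, 1, 0]
iou, dice = iou_and_dice(pred, truth)  # IoU = 3/5 = 0.6, Dice = 6/8 = 0.75
```

Note that Dice weights the overlap more generously than IoU (Dice = 2·IoU/(1+IoU)), which is why the reported DICE scores sit above the IoU scores for the same masks.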
Affiliation(s)
- Ten Cheer Quek, Singapore Eye Research Institute, Singapore National Eye Centre, The Academia, 20 College Rd, Level 6 Discovery Tower, Singapore 169856, Singapore
- Hyun Goo Kang, Department of Ophthalmology, Severance Eye Hospital, Yonsei University College of Medicine, Seoul, South Korea
- Sahil Thakur, Singapore Eye Research Institute, Singapore National Eye Centre, The Academia, 20 College Rd, Level 6 Discovery Tower, Singapore 169856, Singapore
- Mihir Deshmukh, Singapore Eye Research Institute, Singapore National Eye Centre, The Academia, 20 College Rd, Level 6 Discovery Tower, Singapore 169856, Singapore
- Rachel Marjorie Wei Wen Tseng, Singapore Eye Research Institute, Singapore National Eye Centre, The Academia, 20 College Rd, Level 6 Discovery Tower, Singapore 169856, Singapore
- Helen Nguyen, Department of Ophthalmology, Centre for Vision Research, Westmead Institute for Medical Research, University of Sydney, Sydney, Australia; School of Optometry and Vision Science, Faculty of Science, The University of New South Wales, Sydney, NSW, Australia
- Yih-Chung Tham, Singapore Eye Research Institute, Singapore National Eye Centre, The Academia, 20 College Rd, Level 6 Discovery Tower, Singapore 169856, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore; Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore and National University Health System, Singapore
- Tyler Hyungtaek Rim, Singapore Eye Research Institute, Singapore National Eye Centre, The Academia, 20 College Rd, Level 6 Discovery Tower, Singapore 169856, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore; Medi Whale Inc, Seoul, South Korea
- Sung Soo Kim, Department of Ophthalmology, Severance Eye Hospital, Yonsei University College of Medicine, Seoul, South Korea
- Yasuo Yanagi, Singapore Eye Research Institute, Singapore National Eye Centre, The Academia, 20 College Rd, Level 6 Discovery Tower, Singapore 169856, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore; Department of Ophthalmology and Microtechnology, Yokohama City University, Yokohama, Japan
- Gerald Liew, Department of Ophthalmology, Centre for Vision Research, Westmead Institute for Medical Research, University of Sydney, Sydney, Australia
- Ching-Yu Cheng, Singapore Eye Research Institute, Singapore National Eye Centre, The Academia, 20 College Rd, Level 6 Discovery Tower, Singapore 169856, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore; Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore and National University Health System, Singapore
32
Brand M, Troya J, Krenzer A, De Maria C, Mehlhase N, Götze S, Walter B, Meining A, Hann A. Frame-by-Frame Analysis of a Commercially Available Artificial Intelligence Polyp Detection System in Full-Length Colonoscopies. Digestion 2022; 103:378-385. [PMID: 35767938] [DOI: 10.1159/000525345]
Abstract
INTRODUCTION Computer-aided detection (CADe) helps increase colonoscopic polyp detection. However, little is known about other performance metrics, such as the number and duration of false-positive (FP) activations, or how stable the detection of a polyp is. METHODS 111 colonoscopy videos with a total of 1,793,371 frames were analyzed on a frame-by-frame basis using a commercially available CADe system (GI-Genius, Medtronic Inc.). The primary endpoint was the number and duration of FP activations per colonoscopy. Additionally, we analyzed other CADe performance parameters, including per-polyp sensitivity, per-frame sensitivity, and first detection time of a polyp. We also investigated whether a threshold for withholding CADe activations can be set to suppress short FP activations, and how this threshold alters the CADe performance parameters. RESULTS A mean of 101 ± 88 FPs per colonoscopy was found. Most FPs consisted of fewer than three frames, with a maximum duration of 66 ms. The CADe system detected all 118 polyps and achieved a mean per-frame sensitivity of 46.6 ± 26.6%, with the lowest value for flat polyps (37.6 ± 24.8%). Withholding CADe detections of up to 6 frames in length would reduce the number of FPs by 87.97% (p < 0.001) without a significant impact on other CADe performance metrics. CONCLUSIONS The CADe system works reliably but generates many FPs as a side effect. Since most FPs are very short, withholding short-lived CADe activations could substantially reduce the number of FPs without affecting other performance metrics. Clinical practice would benefit from the implementation of customizable CADe thresholds.
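The withholding strategy studied here amounts to a run-length filter over per-frame detector flags: any activation shorter than the frame threshold is suppressed. The sketch below is an illustration of that idea with made-up flags, not the GI-Genius implementation:

```python
from itertools import groupby

def suppress_short_activations(frames, min_frames):
    """Drop runs of consecutive detector-active frames shorter than min_frames."""
    out = []
    for active, run in groupby(frames):
        run = list(run)
        # Inactive runs stay False; active runs survive only if long enough
        out.extend([active and len(run) >= min_frames] * len(run))
    return out

# One 1-frame FP, one 2-frame FP, and a stable 7-frame true detection
flags = [True] + [False] * 3 + [True, True] + [False] * 2 + [True] * 7
filtered = suppress_short_activations(flags, min_frames=6)  # only the 7-frame run remains
```

At 25 frames per second, a 6-frame threshold corresponds to withholding activations shorter than roughly a quarter of a second, which is why such filtering removes most of the very short FPs reported above.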
Affiliation(s)
- Markus Brand, Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Würzburg, Germany
- Joel Troya, Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Würzburg, Germany
- Adrian Krenzer, Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Würzburg, Germany; Artificial Intelligence and Knowledge Systems, Institute for Computer Science, Julius-Maximilians-Universität, Würzburg, Germany
- Costanza De Maria, Department of Gastroenterology and Hepatology, Ente Ospedaliero Cantonale (EOC), Bellinzona, Switzerland; Department of Biomedical Science, University of Italian Switzerland (USI), Lugano, Switzerland
- Niklas Mehlhase, Department of Internal Medicine I, University Hospital Ulm, Ulm, Germany
- Sebastian Götze, Department of Internal Medicine I, University Hospital Ulm, Ulm, Germany
- Benjamin Walter, Department of Internal Medicine I, University Hospital Ulm, Ulm, Germany
- Alexander Meining, Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Würzburg, Germany
- Alexander Hann, Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Würzburg, Germany
33
Sakamoto T, Nakashima H, Nakamura K, Nagahama R, Saito Y. Performance of Computer-Aided Detection and Diagnosis of Colorectal Polyps Compares to That of Experienced Endoscopists. Dig Dis Sci 2022; 67:3976-3983. [PMID: 34403031] [DOI: 10.1007/s10620-021-07217-6]
Abstract
BACKGROUND Differential diagnosis of neoplasms and non-neoplasms is crucial in ensuring appropriate and proper medical management for patients undergoing colonoscopy. Diagnostic ability can vary, depending on the colonoscopist's experience. To overcome this issue, artificial intelligence (AI) may be effective. AIMS To assess the performance of a computer-aided detection (CADe) and a computer-aided diagnosis (CADx) system for the detection and characterization of colorectal polyps by comparing their data with those of experienced endoscopists. METHODS This retrospective, still image-based validation study was conducted at three Japanese medical centers. A total of 579 white-light images (WLIs) and 605 linked color images (LCIs) were used for testing the CADe and 308 WLIs and 296 blue laser/light images (BLIs) for testing the CADx. The performances of the CADe and CADx systems were assessed and compared with the correct answers provided by three experienced endoscopists. RESULTS CADe in WLI demonstrated a sensitivity of 94.5% (95% confidence interval (CI), 92.0-96.9%) and a specificity of 87.2% (84.5-89.9%). CADe in LCI demonstrated a sensitivity of 96.0% (93.9-98.1%) and a specificity of 85.1% (82.3-87.9%). CADx in WLI demonstrated a sensitivity of 95.5% (92.9-98.1%) and a specificity of 84.4% (73.4-91.5%), resulting in an accuracy of 93.2% (90.4-96.0%). CADx in BLI showed a sensitivity of 96.3% (93.9-98.7%) and a specificity of 88.7% (77.1-95.1%), resulting in an accuracy of 94.9% (92.4-97.4%). CONCLUSIONS CADe and CADx demonstrated sufficient diagnostic performance to support the use of an AI system.
34
Pramanik R, Biswas M, Sen S, Souza Júnior LAD, Papa JP, Sarkar R. A fuzzy distance-based ensemble of deep models for cervical cancer detection. Comput Methods Programs Biomed 2022; 219:106776. [PMID: 35398621] [DOI: 10.1016/j.cmpb.2022.106776]
Abstract
BACKGROUND AND OBJECTIVE Cervical cancer is one of the leading causes of death among women. As with any other disease, early detection of cervical cancer and treatment under the best possible medical advice are the paramount steps to minimize its after-effects. Pap smear images are one of the most effective ways to detect the presence of this type of cancer. This article proposes a fuzzy distance-based ensemble of deep learning models for cervical cancer detection in Pap smear images. METHODS We employ three transfer learning models for this task: Inception V3, MobileNet V2, and Inception ResNet V2, with additional layers to learn data-specific features. To aggregate the outcomes of these models, we propose a novel ensemble method based on minimizing the error between the observed values and the ground truth. For samples with multiple predictions, we first take three distance measures, i.e., Euclidean, Manhattan (city block), and cosine, for each class from its corresponding best possible solution. We then defuzzify these distance measures using the product rule to calculate the final predictions. RESULTS In our experiments, Inception V3, MobileNet V2, and Inception ResNet V2 achieved 95.30%, 93.92%, and 96.44%, respectively, when run individually. After applying the proposed ensemble technique, the performance reaches 96.96%, which is higher than that of the individual models. CONCLUSION Experimental outcomes on three publicly available datasets show that the proposed model presents competitive results compared with state-of-the-art methods. The proposed approach provides an end-to-end classification technique to detect cervical cancer from Pap smear images, which may help medical professionals deliver better treatment and thus increase the overall efficiency of the testing process. The source code of the proposed work can be found at github.com/rishavpramanik/CervicalFuzzyDistanceEnsemble.
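The abstract does not give the exact formulation, so the following is one plausible reading of a distance-based, product-rule fusion (the authors' repository above is authoritative): for each candidate class, measure how far each model's probability vector lies from that class's one-hot "best possible solution" under the three distance measures, combine everything with a product rule, and pick the class with the smallest fused score. All names and details here are illustrative:

```python
import numpy as np

def fuse_predictions(prob_vectors):
    """Fuse per-model class probabilities via distances to one-hot targets.
    prob_vectors: list of 1-D probability arrays, one per ensemble member."""
    n_classes = len(prob_vectors[0])
    scores = np.ones(n_classes)
    for c in range(n_classes):
        target = np.zeros(n_classes)
        target[c] = 1.0  # ideal ("best possible") output for class c
        for p in prob_vectors:
            euclidean = np.linalg.norm(p - target)
            manhattan = np.abs(p - target).sum()
            cosine = 1.0 - (p @ target) / (np.linalg.norm(p) * np.linalg.norm(target))
            scores[c] *= euclidean * manhattan * cosine  # product-rule combination
    return int(np.argmin(scores))  # class whose ideal output is nearest overall

models = [np.array([0.7, 0.2, 0.1]),
          np.array([0.6, 0.3, 0.1]),
          np.array([0.4, 0.5, 0.1])]
winner = fuse_predictions(models)
```

Even though the third model favors class 1, both confident votes for class 0 pull the fused distances strongly toward it, which is the kind of disagreement resolution the ensemble targets.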
Affiliation(s)
- Rishav Pramanik, Department of Computer Science and Engineering, Jadavpur University, 188 Raja S C Mallick Rd, Kolkata 700032, West Bengal, India
- Momojit Biswas, Department of Metallurgical and Material Engineering, Jadavpur University, 188 Raja S C Mallick Rd, Kolkata 700032, West Bengal, India
- Shibaprasad Sen, Department of Computer Science and Technology, University of Engineering and Management, Kolkata 700160, West Bengal, India
- Luis Antonio de Souza Júnior, Department of Computing, São Carlos Federal University-UFScar, São Carlos, São Paulo, Brazil; Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Regensburg, Bavaria, Germany
- João Paulo Papa, Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Regensburg, Bavaria, Germany; Department of Computing, São Paulo State University, Av. Eng. Luiz Edmundo Carrijo Coube, 14-01, Bauru, São Paulo, Brazil
- Ram Sarkar, Department of Computer Science and Engineering, Jadavpur University, 188 Raja S C Mallick Rd, Kolkata 700032, West Bengal, India
35
Huang YS, Chou PR, Chen HM, Chang YC, Chang RF. One-stage pulmonary nodule detection using 3-D DCNN with feature fusion and attention mechanism in CT image. Comput Methods Programs Biomed 2022; 220:106786. [PMID: 35398579] [DOI: 10.1016/j.cmpb.2022.106786]
Abstract
BACKGROUND AND OBJECTIVE Lung cancer is the most common cause of cancer-related death in the world. Low-dose computed tomography (LDCT) is a widely used modality in lung cancer detection. A nodule is abnormal tissue that may evolve into lung cancer, so it is crucial to detect nodules at an early stage. However, reviewing LDCT scans for suspicious nodules is time-consuming. Recently, computer-aided detection (CADe) systems built on convolutional neural network (CNN) architectures have proven helpful to radiologists. Hence, in this study, a 3-D YOLO-based CADe system, 3-D OSAF-YOLOv3, is proposed for nodule detection in LDCT images. METHODS The proposed CADe system consists of data preprocessing, nodule detection, and a non-maximum suppression (NMS) algorithm. First, data preprocessing, including background elimination, spacing normalization, and volume-of-interest (VOI) extraction, is conducted to remove non-lung regions, normalize the image spacing, and divide each LDCT image into numerous VOIs. The VOIs are then fed into the 3-D OSAF-YOLOv3 model to detect suspicious nodules. The model is constructed by integrating 3-D YOLOv3 with a one-shot aggregation (OSA) module, a receptive field block (RFB), and a feature fusion scheme (FFS). Finally, the NMS algorithm is performed to eliminate duplicate detections generated by the model. RESULTS The LUNA16 dataset, comprising 1186 nodules from 888 LDCT scans, and the competition performance metric (CPM) are used to evaluate our CADe system. In the experimental results, the proposed system achieves a sensitivity of 0.962 at a rate of 8 false positives per scan and a CPM value of 0.905. Moreover, the ablation study shows that the OSA module, RFB, and FFS each improve detection performance. Compared with other state-of-the-art (SOTA) models, our detection system also achieves higher performance. CONCLUSIONS A YOLO-based CADe system integrating additional modules and a feature fusion scheme is proposed for nodule detection in LDCT images. The results indicate that the proposed modifications significantly improve detection performance.
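The NMS step named in the pipeline is a standard greedy procedure extended to three dimensions: keep the highest-scoring box, discard overlapping lower-scoring boxes, and repeat. A minimal sketch over axis-aligned 3-D boxes (illustrative, not the authors' implementation):

```python
import numpy as np

def nms_3d(boxes, scores, iou_thr=0.3):
    """Greedy 3-D non-maximum suppression.
    boxes: (N, 6) array of [z1, y1, x1, z2, y2, x2]; returns kept indices."""
    boxes = np.asarray(boxes, dtype=float)
    order = np.argsort(scores)[::-1]  # highest confidence first
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection volume between box i and all remaining boxes
        lo = np.maximum(boxes[i, :3], boxes[rest, :3])
        hi = np.minimum(boxes[i, 3:], boxes[rest, 3:])
        inter = np.prod(np.clip(hi - lo, 0, None), axis=1)
        vol_i = np.prod(boxes[i, 3:] - boxes[i, :3])
        vol_rest = np.prod(boxes[rest, 3:] - boxes[rest, :3], axis=1)
        iou = inter / (vol_i + vol_rest - inter)
        order = rest[iou <= iou_thr]  # suppress heavy overlaps with box i
    return keep

boxes = np.array([[0, 0, 0, 10, 10, 10],
                  [1, 1, 1, 11, 11, 11],    # heavy overlap with box 0
                  [20, 20, 20, 30, 30, 30]])
scores = np.array([0.9, 0.8, 0.7])
kept = nms_3d(boxes, scores)  # box 1 is suppressed
```

Because adjacent VOIs are processed independently, the same nodule can be detected in more than one VOI; this per-volume suppression is what collapses those duplicates into a single detection.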
Affiliation(s)
- Yao-Sian Huang, Department of Computer Science and Information Engineering, National Changhua University of Education, Changhua, Taiwan
- Ping-Ru Chou, Department of Computer Science and Information Engineering, National Taiwan University, Taipei 10617, Taiwan
- Hsin-Ming Chen, Department of Medical Imaging, National Taiwan University Hospital Hsin-Chu Branch, Hsin-Chu, Taiwan
- Yeun-Chung Chang, Department of Medical Imaging, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 10617, Taiwan
- Ruey-Feng Chang, Department of Computer Science and Information Engineering, National Taiwan University, Taipei 10617, Taiwan; Graduate Institute of Network and Multimedia, National Taiwan University, Taipei, Taiwan; Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan; MOST Joint Research Center for AI Technology and All Vista Healthcare, Taipei, Taiwan
36
Tavaziva G, Majidulla A, Nazish A, Saeed S, Benedetti A, Khan AJ, Ahmad Khan F. Diagnostic accuracy of a commercially available, deep learning-based chest X-ray interpretation software for detecting culture-confirmed pulmonary tuberculosis. Int J Infect Dis 2022; 122:15-20. [PMID: 35597555] [DOI: 10.1016/j.ijid.2022.05.037]
Abstract
BACKGROUND Few evaluations of computer-aided detection (CAD) software for analyzing chest radiographs for tuberculosis have used mycobacterial culture as the reference standard. METHODS Using data from a prospective study of symptomatic adults and household contacts of persons with tuberculosis who were seeking care in Karachi, we evaluated the accuracy of LUNIT INSIGHT version 3.1.0.0 (LUNIT, South Korea) for detecting pulmonary tuberculosis in the triage use case. The reference standard was liquid culture. We estimated diagnostic accuracy at three developer-recommended threshold scores for tuberculosis: 15, 30, and 45. RESULTS A total of 269 of 2190 (12%) participants had culture-confirmed pulmonary tuberculosis. LUNIT-reported abnormalities of nodule, consolidation, fibrosis, and pleural effusion were more common with culture-confirmed tuberculosis. At the tuberculosis threshold score of 30, sensitivity and specificity were 87.7% [95% CI: 83.2-91.4%] and 64.3% [62.1-66.4%], respectively. Sensitivity was similar at scores of 15 (88.1% [83.6-91.7%]) and 45 (86.6% [82.0-90.5%]), while specificity was 57.9% [55.7-60.2%] and 69.9% [67.8-71.9%], respectively. Sensitivity was lower for smear-negative disease, and specificity was lower with increasing age, previous tuberculosis, and decreasing body mass index. Diabetes and tobacco smoking did not modify accuracy. CONCLUSION In a population where most tuberculosis was smear-positive, LUNIT-reported radiographic abnormalities were associated with culture-confirmed disease. Manufacturer-recommended threshold scores had limited sensitivity.
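The trade-off reported above, where raising the threshold score buys specificity at the cost of sensitivity, follows directly from how the two metrics are computed at a cutoff. A small sketch with made-up scores (not the study data):

```python
def sens_spec_at_threshold(scores, labels, threshold):
    """Sensitivity and specificity when score >= threshold flags tuberculosis."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and not y)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and not y)
    return tp / (tp + fn), tn / (tn + fp)

# Five culture-negative then five culture-positive participants (hypothetical)
scores = [10, 20, 35, 50, 80, 5, 25, 40, 60, 90]
labels = [False] * 5 + [True] * 5
low = sens_spec_at_threshold(scores, labels, 30)   # (sensitivity, specificity)
high = sens_spec_at_threshold(scores, labels, 45)  # higher cutoff: sens down, spec up
```

Sweeping the threshold over all observed scores traces the full ROC curve, which is why a single vendor-recommended cutoff can look reasonable on one axis and limited on the other.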
Affiliation(s)
- Gamuchirai Tavaziva, McGill International TB Centre, Centre for Outcomes Research & Evaluation, Research Institute of the McGill University Health Centre, Montreal, Canada
- Arman Majidulla, Department of International Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA
- Ahsana Nazish, Indus Hospital and Health Network, Karachi, Pakistan
- Saima Saeed, Indus Hospital and Health Network, Karachi, Pakistan
- Andrea Benedetti, McGill International TB Centre, Centre for Outcomes Research & Evaluation, Research Institute of the McGill University Health Centre, Montreal, Canada; Departments of Medicine & Epidemiology, Biostatistics & Occupational Health, McGill University, Montreal, Canada
- Faiz Ahmad Khan, McGill International TB Centre, Centre for Outcomes Research & Evaluation, Research Institute of the McGill University Health Centre, Montreal, Canada; Departments of Medicine & Epidemiology, Biostatistics & Occupational Health, McGill University, Montreal, Canada
37
Minchenberg SB, Walradt T, Glissen Brown JR. Scoping out the future: The application of artificial intelligence to gastrointestinal endoscopy. World J Gastrointest Oncol 2022; 14:989-1001. [PMID: 35646286] [PMCID: PMC9124983] [DOI: 10.4251/wjgo.v14.i5.989]
Abstract
Artificial intelligence (AI) is a quickly expanding field in gastrointestinal endoscopy. Although there is a myriad of applications of AI, ranging from identification of bleeding to predicting outcomes in patients with inflammatory bowel disease, a great deal of research has focused on the identification and classification of gastrointestinal malignancies. Several of the initial randomized, prospective trials utilizing AI in clinical medicine have centered on polyp detection during screening colonoscopy. In addition to work focused on colorectal cancer, AI systems have also been applied to gastric, esophageal, pancreatic, and liver cancers. Despite promising results in initial studies, the generalizability of most of these AI systems has not yet been evaluated. In this article we review recent developments in the field of AI applied to gastrointestinal oncology.
Affiliation(s)
- Scott B Minchenberg, Department of Internal Medicine, Beth Israel Deaconess Medical Center, Boston, MA 02130, United States
- Trent Walradt, Department of Internal Medicine, Beth Israel Deaconess Medical Center, Boston, MA 02130, United States
- Jeremy R Glissen Brown, Division of Gastroenterology, Beth Israel Deaconess Medical Center, Boston, MA 02130, United States
38
Sivananthan A, Nazarian S, Ayaru L, Patel K, Ashrafian H, Darzi A, Patel N. Does computer-aided diagnostic endoscopy improve the detection of commonly missed polyps? A meta-analysis. Clin Endosc 2022; 55:355-364. [PMID: 35545215] [PMCID: PMC9178131] [DOI: 10.5946/ce.2021.228]
Abstract
Background/Aims Colonoscopy is the gold standard diagnostic method for colorectal neoplasia, allowing detection and resection of adenomatous polyps; however, significant proportions of adenomas are missed. Computer-aided detection (CADe) systems in endoscopy are currently available to help identify lesions. Diminutive (≤5 mm) and nonpedunculated polyps are most commonly missed. This meta-analysis aimed to assess whether CADe systems can improve the real-time detection of these commonly missed lesions.
Methods A comprehensive literature search was performed. Randomized controlled trials evaluating CADe systems categorized by morphology and lesion size were included. The mean number of polyps and adenomas per patient was derived. Independent proportions and their differences were calculated using DerSimonian and Laird random-effects modeling.
Results Seven studies, including 2,595 CADe-assisted colonoscopies and 2,622 conventional colonoscopies, were analyzed. CADe-assisted colonoscopy demonstrated an 80% increase in the mean number of diminutive adenomas detected per patient compared with conventional colonoscopy (0.31 vs. 0.17; effect size, 0.13; 95% confidence interval [CI], 0.09–0.18); it also demonstrated a 91.7% increase in the mean number of nonpedunculated adenomas detected per patient (0.32 vs. 0.19; effect size, 0.05; 95% CI, 0.02–0.07).
Conclusions CADe-assisted endoscopy significantly improved the detection of most commonly missed adenomas. Although this method is a potentially exciting technology, limitations still apply to current data, prompting the need for further real-time studies.
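The DerSimonian-Laird random-effects model used for the pooling above down-weights studies by both their within-study variance and an estimated between-study variance tau². A textbook sketch of the estimator (not the authors' analysis code; effect sizes and variances below are made up):

```python
import math

def dersimonian_laird(effects, variances):
    """DerSimonian-Laird random-effects pooling of per-study effect estimates."""
    w = [1.0 / v for v in variances]                       # inverse-variance weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)          # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2

# Hypothetical per-study differences in mean adenomas per patient, with variances
pooled, se, tau2 = dersimonian_laird([0.05, 0.20, 0.35], [0.001, 0.001, 0.001])
```

When Cochran's Q exceeds its degrees of freedom, tau² is positive and the confidence interval widens to reflect heterogeneity between trials, which is the property that makes the random-effects model the conventional choice for pooled endoscopy outcomes.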
Affiliation(s)
- Arun Sivananthan, Institute of Global Health Innovation, Imperial College, London, UK; Department of Surgery and Cancer, Imperial College NHS Healthcare Trust, London, UK
- Scarlet Nazarian, Institute of Global Health Innovation, Imperial College, London, UK
- Lakshmana Ayaru, Department of Surgery and Cancer, Imperial College NHS Healthcare Trust, London, UK
- Kinesh Patel, Department of Gastroenterology, Chelsea and Westminster NHS Healthcare Trust, London, UK
- Hutan Ashrafian, Institute of Global Health Innovation, Imperial College, London, UK; Department of Surgery and Cancer, Imperial College NHS Healthcare Trust, London, UK
- Ara Darzi, Institute of Global Health Innovation, Imperial College, London, UK; Department of Surgery and Cancer, Imperial College NHS Healthcare Trust, London, UK
- Nisha Patel, Institute of Global Health Innovation, Imperial College, London, UK; Department of Surgery and Cancer, Imperial College NHS Healthcare Trust, London, UK
39
Chung M, Kong ST, Park B, Chung Y, Jung KH, Seo JB. Utilizing Synthetic Nodules for Improving Nodule Detection in Chest Radiographs. J Digit Imaging 2022. [PMID: 35304676] [DOI: 10.1007/s10278-022-00608-9]
Abstract
Algorithms that automatically identify nodular patterns in chest X-ray (CXR) images could benefit radiologists by reducing reading time and improving accuracy. A promising approach is to use deep learning, where a deep neural network (DNN) is trained to classify and localize nodular patterns (including masses) in CXR images. Such algorithms, however, require enough abnormal cases to learn representations of nodular patterns arising in practical clinical settings. Obtaining large amounts of high-quality data is impractical in medical imaging because (1) acquiring labeled images is extremely expensive, (2) annotations are subject to inaccuracies due to the inherent difficulty of interpreting images, and (3) normal cases occur far more frequently than abnormal cases. In this work, we devise a framework to generate realistic nodules and demonstrate how they can be used to train a DNN to identify and localize nodular patterns in CXR images. While most previous research applying generative models to medical imaging is limited to generating visually plausible abnormalities and using these patterns for augmentation, we go a step further and show how the training algorithm can be adjusted to benefit maximally from synthetic abnormal patterns. A high-precision detection model was first developed and tested on internal and external datasets, and the proposed method was shown to enhance the model's recall while retaining a low level of false positives.
|
40
|
Li Y, Yuan W, Fan M, Zheng B, Li L. Prediction of Short-Term Breast Cancer Risk with Fusion of CC- and MLO-Based Risk Models in Four-View Mammograms. J Digit Imaging 2022. [PMID: 35262841] [DOI: 10.1007/s10278-019-00266-4]
Abstract
This study developed and assessed a novel method to improve the accuracy of short-term breast cancer risk prediction by using information from craniocaudal (CC) and mediolateral-oblique (MLO) views of both breasts. An age-matched dataset of 556 patients with at least two sequential full-field digital mammography examinations was applied. In the second examination, 278 cases were diagnosed and pathologically verified as cancer, and 278 were negative, while all cases in the first examination were negative (not recalled). Two generalized linear-model-based risk prediction models were first established with global- and local-based bilateral asymmetry features for the CC and MLO views. Then, a new fusion risk model was developed by fusing the prediction results of the CC- and MLO-based risk models with an adaptive alpha-integration-based fusion method. The AUC of the fusion risk model was 0.72 ± 0.02, which was significantly higher than the AUC of the CC- or MLO-based risk model (P < 0.05). The maximum odds ratios for the CC- and MLO-based risk models were 8.09 and 5.25, respectively, and increased to 11.99 for the fusion risk model. For subgroups of patients aged 37-49 years, 50-65 years, and 66-87 years, the fusion risk model's AUCs of 0.73, 0.71, and 0.75 were higher than those of the CC- and MLO-based risk models. For the BIRADS 2 and 3 subgroups, the AUC values for the fusion risk model were 0.72 and 0.71, respectively, again higher than those of the CC- and MLO-based risk models. This study demonstrated that the fusion risk model could effectively derive and integrate supplementary and useful information extracted from both CC and MLO view images and adaptively fuse them to increase the predictive power of the short-term breast cancer risk assessment model.
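The abstract does not spell out the adaptive alpha-integration used to fuse the per-view scores; as a minimal sketch under that caveat, a convex weighted combination of hypothetical CC and MLO risk scores could look like:

```python
def fuse_risk(p_cc: float, p_mlo: float, alpha: float = 0.5) -> float:
    """Fuse CC- and MLO-view risk scores into a single risk estimate.

    A simplified stand-in for the paper's adaptive alpha-integration:
    `alpha` weights the CC-view model and (1 - alpha) the MLO-view model.
    In the paper, alpha would be tuned adaptively rather than fixed.
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return alpha * p_cc + (1.0 - alpha) * p_mlo
```

With alpha = 0.5 this reduces to averaging the two views; the reported AUC gain comes from choosing alpha per case, which is not reproduced here.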
|
41
|
Razavi S, Khameneh FD, Nouri H, Androutsos D, Done SJ, Khademi A. MiNuGAN: Dual Segmentation of Mitoses and Nuclei Using Conditional GANs on Multi-center Breast H&E Images. J Pathol Inform 2022; 13:100002. [PMID: 35242442] [DOI: 10.1016/j.jpi.2022.100002]
Abstract
Breast cancer is the second most commonly diagnosed type of cancer among women as of 2021. Grading of histopathological images is used to guide breast cancer treatment decisions and a critical component of this is a mitotic score, which is related to tumor aggressiveness. Manual mitosis counting is an extremely tedious manual task, but automated approaches can be used to overcome inefficiency and subjectivity. In this paper, we propose an automatic mitosis and nuclear segmentation method for a diverse set of H&E breast cancer pathology images. The method is based on a conditional generative adversarial network to segment both mitoses and nuclei at the same time. Architecture optimizations are investigated, including hyper parameters and the addition of a focal loss. The accuracy of the proposed method is investigated using images from multiple centers and scanners, including TUPAC16, ICPR14 and ICPR12 datasets. In TUPAC16, we use 618 carefully annotated images of size 256×256 scanned at 40×. TUPAC16 is used to train the model, and segmentation performance is measured on the test set for both nuclei and mitoses. Results on 200 held-out testing images from the TUPAC16 dataset were mean DSC = 0.784 and 0.721 for nuclear and mitosis, respectively. On 202 ICPR12 images, mitosis segmentation accuracy had a mean DSC = 0.782, indicating the model generalizes well to unseen datasets. For datasets that had mitosis centroid annotations, which included 200 TUPAC16, 202 ICPR12 and 524 ICPR14, a mean F1-score of 0.854 was found indicating high mitosis detection accuracy.
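The DSC figures above compare predicted and reference masks. For readers unfamiliar with the metric, a minimal Dice computation over flat binary masks (an illustration, not the paper's code) is:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient (DSC) between two equal-length binary
    masks given as flat sequences of 0/1 values.

    DSC = 2 * |A ∩ B| / (|A| + |B|); by convention two empty masks
    score 1.0 (perfect agreement on "nothing to segment").
    """
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0
```

In practice the masks would be flattened 2D segmentation outputs; the per-image DSCs are then averaged to obtain figures like the 0.784/0.721 reported above.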
|
42
|
Lancaster HL, Zheng S, Aleshina OO, Yu D, Yu Chernina V, Heuvelmans MA, de Bock GH, Dorrius MD, Gratama JW, Morozov SP, Gombolevskiy VA, Silva M, Yi J, Oudkerk M. Outstanding negative prediction performance of solid pulmonary nodule volume AI for ultra-LDCT baseline lung cancer screening risk stratification. Lung Cancer 2022; 165:133-140. [PMID: 35123156] [DOI: 10.1016/j.lungcan.2022.01.002]
Abstract
OBJECTIVE To evaluate the performance of AI as a standalone reader in ultra-low-dose CT lung cancer baseline screening, and compare it to that of experienced radiologists. METHODS 283 participants who underwent a baseline ultra-LDCT scan in the Moscow Lung Cancer Screening between February 2017 and 2018, and had at least one solid lung nodule, were included. Volumetric nodule measurements were performed by five experienced blinded radiologists, and independently assessed using an AI lung cancer screening prototype (AVIEW LCS, v1.0.34, Coreline Soft Co., Ltd., Seoul, Korea) that automatically detects, measures, and classifies solid nodules. Discrepancies were stratified into two groups: positive misclassification (PM), a nodule classified by the reader as a NELSON-plus/EUPS-indeterminate/positive nodule that at the reference consensus read was < 100 mm3; and negative misclassification (NM), a nodule classified as a NELSON-plus/EUPS-negative nodule that at consensus read was ≥ 100 mm3. RESULTS 1149 nodules with a solid component were detected, of which 878 were classified as solid nodules. For the largest solid nodule per participant (n = 283), 61 [21.6 %; 53 PM, 8 NM] discrepancies were reported for AI as a standalone reader, compared to 43 [15.1 %; 22 PM, 21 NM], 36 [12.7 %; 25 PM, 11 NM], 29 [10.2 %; 25 PM, 4 NM], 28 [9.9 %; 6 PM, 22 NM], and 50 [17.7 %; 15 PM, 35 NM] discrepancies for readers 1, 2, 3, 4, and 5, respectively. CONCLUSION Our results suggest that with AI as an impartial reader in baseline lung cancer screening, negative-misclassification performance could exceed that of four out of five experienced radiologists, and radiologists' workload could be drastically diminished, by up to 86.7%.
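The abstract only states the 100 mm3 negative/indeterminate boundary of the NELSON-plus/EUPS criteria; a volume-based stratification of that kind can be sketched as below, where the 300 mm3 positive cutoff is an assumption borrowed from the NELSON protocol rather than something stated in the abstract:

```python
def stratify_nodule(volume_mm3: float,
                    negative_cutoff: float = 100.0,
                    positive_cutoff: float = 300.0) -> str:
    """Volume-based risk class for a solid nodule.

    The abstract specifies only that < 100 mm3 counts as negative and
    >= 100 mm3 as indeterminate/positive; the 300 mm3 boundary between
    "indeterminate" and "positive" is an illustrative assumption.
    """
    if volume_mm3 < negative_cutoff:
        return "negative"
    if volume_mm3 < positive_cutoff:
        return "indeterminate"
    return "positive"
```

A positive or negative misclassification in the study corresponds to the reader and the consensus read landing on opposite sides of the 100 mm3 boundary.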
Affiliation(s)
- Harriet L Lancaster
- Department of Epidemiology, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands; Institute for Diagnostic Accuracy, Groningen, Netherlands
- Sunyi Zheng
- Department of Radiotherapy, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands; Institute for Diagnostic Accuracy, Groningen, Netherlands
- Olga O Aleshina
- State Budget-Funded Health Care Institution of the City of Moscow "Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department", Moscow, Russian Federation
- Valeria Yu Chernina
- State Budget-Funded Health Care Institution of the City of Moscow "Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department", Moscow, Russian Federation
- Marjolein A Heuvelmans
- Department of Epidemiology, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands; Institute for Diagnostic Accuracy, Groningen, Netherlands
- Geertruida H de Bock
- Department of Epidemiology, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
- Monique D Dorrius
- Department of Epidemiology, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands; Department of Radiology, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
- Sergey P Morozov
- State Budget-Funded Health Care Institution of the City of Moscow "Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department", Moscow, Russian Federation
- Victor A Gombolevskiy
- State Budget-Funded Health Care Institution of the City of Moscow "Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department", Moscow, Russian Federation; AIRI, Moscow, Russian Federation
- Mario Silva
- Scienze Radiologiche, Department of Medicine and Surgery (DiMeC), University of Parma, Parma, Italy
- Matthijs Oudkerk
- Institute for Diagnostic Accuracy, Groningen, Netherlands; Faculty of Medical Sciences, University of Groningen, Groningen, Netherlands.
|
43
|
Takao H, Amemiya S, Kato S, Yamashita H, Sakamoto N, Abe O. Deep-learning 2.5-dimensional single-shot detector improves the performance of automated detection of brain metastases on contrast-enhanced CT. Neuroradiology 2022. [PMID: 35064786] [DOI: 10.1007/s00234-022-02902-3]
Abstract
PURPOSE This study aims to develop a 2.5-dimensional (2.5D) deep-learning object detection model for the automated detection of brain metastases, into which three consecutive slices are fed as the input for the prediction on the central slice, and to compare its performance with that of an ordinary 2-dimensional (2D) model. METHODS We analyzed 696 brain metastases on 127 contrast-enhanced computed tomography (CT) scans from 127 patients with brain metastases. The scans were randomly divided into training (n = 79), validation (n = 18), and test (n = 30) datasets. Single-shot detector (SSD) models with a feature fusion module were constructed, trained, and compared using the lesion-based sensitivity, positive predictive value (PPV), and the number of false positives per patient at a confidence threshold of 50%. RESULTS The 2.5D SSD model had a significantly higher PPV (t test, p < 0.001) and a significantly smaller number of false positives (t test, p < 0.001). The sensitivities of the 2D and 2.5D models were 88.1% (95% confidence interval [CI], 86.6-89.6%) and 88.7% (95% CI, 87.3-90.1%), respectively. The corresponding PPVs were 39.0% (95% CI, 36.5-41.4%) and 58.9% (95% CI, 55.2-62.7%), respectively. The numbers of false positives per patient were 11.9 (95% CI, 10.7-13.2) and 4.9 (95% CI, 4.2-5.7), respectively. CONCLUSION Our results indicate that 2.5D deep-learning object detection models, which use information about the continuity between adjacent slices, may reduce false positives and improve the performance of automated detection of brain metastases compared with ordinary 2D models.
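The 2.5D input scheme feeds each central slice together with its two neighbors. A minimal sketch of assembling such triples from a slice stack is shown below; the edge-replication at the volume boundaries is a common convention, not something the abstract specifies:

```python
def make_25d_inputs(volume):
    """Build 2.5D inputs from an ordered stack of 2D slices.

    Each input is the (previous, current, next) triple for one central
    slice, e.g. stacked as three channels before being fed to a detector.
    Edge slices are replicated at the volume boundaries (an assumed
    convention; the paper does not describe its boundary handling).
    """
    n = len(volume)
    inputs = []
    for i in range(n):
        prev_slice = volume[max(i - 1, 0)]
        next_slice = volume[min(i + 1, n - 1)]
        inputs.append((prev_slice, volume[i], next_slice))
    return inputs
```

In a real pipeline each element of `volume` would be a 2D pixel array and the triple would be stacked along the channel axis; strings are used here only to make the slice ordering visible.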
|
44
|
J L G, Abraham B, M S S, Nair MS. A computer-aided diagnosis system for the classification of COVID-19 and non-COVID-19 pneumonia on chest X-ray images by integrating CNN with sparse autoencoder and feed forward neural network. Comput Biol Med 2021; 141:105134. [PMID: 34971978] [PMCID: PMC8668604] [DOI: 10.1016/j.compbiomed.2021.105134]
Abstract
Several infectious diseases have affected the lives of many people and caused great crises all over the world. COVID-19, caused by a newly discovered virus named Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), was declared a pandemic by the World Health Organization in 2020. RT-PCR is considered the gold standard for COVID-19 detection. Due to limited RT-PCR resources, early diagnosis of the disease has become a challenge. Radiographic images such as ultrasound, CT scans, and X-rays can be used for the detection of the deadly disease. Developing deep learning models using radiographic images for detecting COVID-19 can assist in countering the outbreak of the virus. This paper presents a computer-aided detection model utilizing chest X-ray images for combating the pandemic. Several pre-trained networks and their combinations have been used for developing the model. The method uses features extracted from pre-trained networks along with a sparse autoencoder for dimensionality reduction and a feed forward neural network (FFNN) for the detection of COVID-19. Two publicly available chest X-ray image datasets, consisting of 504 COVID-19 images and 542 non-COVID-19 images, were combined to train the model. The method achieved an accuracy of 0.9578 and an AUC of 0.9821 using the combination of InceptionResnetV2 and Xception. Experiments showed that the accuracy of the model improves with the use of a sparse autoencoder as the dimensionality reduction technique.
Affiliation(s)
- Gayathri J L
- Department of Computer Science and Engineering, College of Engineering Perumon, Kollam, 691 601, Kerala, India.
- Bejoy Abraham
- Department of Computer Science and Engineering, College of Engineering Perumon, Kollam, 691 601, Kerala, India.
- Sujarani M S
- Department of Computer Science and Engineering, College of Engineering Perumon, Kollam, 691 601, Kerala, India
- Madhu S Nair
- Artificial Intelligence & Computer Vision Lab, Department of Computer Science, Cochin University of Science and Technology, Kochi, 682 022, Kerala, India
|
45
|
Takao H, Amemiya S, Kato S, Yamashita H, Sakamoto N, Abe O. Deep-learning single-shot detector for automatic detection of brain metastases with the combined use of contrast-enhanced and non-enhanced computed tomography images. Eur J Radiol 2021; 144:110015. [PMID: 34742108] [DOI: 10.1016/j.ejrad.2021.110015]
Abstract
PURPOSE To develop a deep-learning object detection model for automatic detection of brain metastases that simultaneously uses contrast-enhanced and non-enhanced images as inputs, and to compare its performance with that of a model that uses only contrast-enhanced images. METHOD A total of 116 computed tomography (CT) scans of 116 patients with brain metastases were included in this study. They showed a total of 659 metastases, 428 of which were used for training and validation (mean size, 11.3 ± 9.9 mm) and 231 were used for testing (mean size, 9.0 ± 7.0 mm). Single-shot detector (SSD) models were constructed with a feature fusion module, and their results were compared per lesion at a confidence threshold of 50%. RESULTS The sensitivity was 88.7% for the model that used both contrast-enhanced and non-enhanced CT images (the CE + NECT model) and 87.6% for the model that used only contrast-enhanced CT images (the CECT model). The positive predictive value (PPV) was 44.0% for the CE + NECT model and 37.2% for the CECT model. The number of false positives per patient was 9.9 for the CE + NECT model and 13.6 for the CECT model. The CE + NECT model had a significantly higher PPV (t test, p < 0.001), significantly fewer false positives (t test, p < 0.001), and a tendency to be more sensitive (t test, p = 0.14). CONCLUSIONS The results indicate that the information on true contrast enhancement obtained by comparing the contrast-enhanced and non-enhanced images may prevent the detection of pseudolesions, suppress false positives, and improve the performance of deep-learning object detection models.
Affiliation(s)
- Hidemasa Takao
- Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan.
- Shiori Amemiya
- Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Shimpei Kato
- Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Hiroshi Yamashita
- Department of Radiology, Teikyo University Hospital, Mizonokuchi, 5-1-1 Futago, Takatsu-ku, Kawasaki, Kanagawa 213-8507, Japan
- Naoya Sakamoto
- Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Osamu Abe
- Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
|
46
|
Kundu R, Singh PK, Mirjalili S, Sarkar R. COVID-19 detection from lung CT-Scans using a fuzzy integral-based CNN ensemble. Comput Biol Med 2021; 138:104895. [PMID: 34649147] [PMCID: PMC8483997] [DOI: 10.1016/j.compbiomed.2021.104895]
Abstract
The COVID-19 pandemic has overwhelmed public healthcare systems, along with severely damaging the economy of the world. The SARS-CoV-2 virus, also known as the coronavirus, led to community spread, causing the death of more than a million people worldwide. The primary reason for the uncontrolled spread of the virus is the lack of provision for population-wide screening. The apparatus for RT-PCR-based COVID-19 detection is scarce and the testing process takes 6-9 h. The test is also not satisfactorily sensitive (only 71% sensitive). Hence, computer-aided detection techniques based on deep learning methods can be used in such a scenario with other modalities, like chest CT-scan images, for more accurate and sensitive screening. In this paper, we propose a method that uses a Sugeno fuzzy integral ensemble of four pre-trained deep learning models, namely VGG-11, GoogLeNet, SqueezeNet v1.1 and Wide ResNet-50-2, for classification of chest CT-scan images into COVID and Non-COVID categories. The proposed framework has been tested on a publicly available dataset, on which it achieves 98.93% accuracy and 98.93% sensitivity. The model outperforms state-of-the-art methods on the same dataset and proves to be a reliable COVID-19 detector. The relevant source codes for the proposed approach can be found at: https://github.com/Rohit-Kundu/Fuzzy-Integral-Covid-Detection.
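The Sugeno fuzzy integral aggregates the four classifiers' confidence scores nonlinearly rather than by simple averaging. A minimal sketch is shown below; the fuzzy measure defaults to the normalized cardinality g(A) = |A|/n for illustration, whereas the paper sets its own per-classifier measure, which is not reproduced here:

```python
def sugeno_integral(scores, measure=None):
    """Sugeno fuzzy integral of classifier confidence scores in [0, 1].

    Scores are sorted in descending order; the integral is
    max over k of min(k-th largest score, g(top-k set)).
    `measure(k)` returns the fuzzy measure of the set of the k most
    confident classifiers; the default g(A) = |A|/n is an assumption
    standing in for the paper's chosen measure.
    """
    n = len(scores)
    if measure is None:
        measure = lambda k: k / n
    ordered = sorted(scores, reverse=True)
    return max(min(ordered[k - 1], measure(k)) for k in range(1, n + 1))
```

For an ensemble, the integral is computed per class over the members' softmax scores and the class with the larger integral wins; a single confident outlier is damped because its high score is clipped by the small measure of a one-element set.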
Affiliation(s)
- Rohit Kundu
- Department of Electrical Engineering, Jadavpur University, 188, Raja S. C. Mallick Road, Kolkata-700032, West Bengal, India
- Pawan Kumar Singh
- Department of Information Technology, Jadavpur University, Jadavpur University Second Campus, Plot No. 8, Salt Lake Bypass, LB Block, Sector III, Salt Lake City, Kolkata-700106, West Bengal, India
- Seyedali Mirjalili
- Centre for Artificial Intelligence Research and Optimization, Torrens University, Australia; Yonsei Frontier Lab, Yonsei University, South Korea (corresponding author)
- Ram Sarkar
- Department of Computer Science & Engineering, Jadavpur University, 188, Raja S. C. Mallick Road, Kolkata-700032, West Bengal, India
|
47
|
Deliwala SS, Hamid K, Barbarawi M, Lakshman H, Zayed Y, Kandel P, Malladi S, Singh A, Bachuwa G, Gurvits GE, Chawla S. Artificial intelligence (AI) real-time detection vs. routine colonoscopy for colorectal neoplasia: a meta-analysis and trial sequential analysis. Int J Colorectal Dis 2021; 36:2291-2303. [PMID: 33934173] [DOI: 10.1007/s00384-021-03929-3]
Abstract
GOALS AND BACKGROUND Studies analyzing artificial intelligence (AI) in colonoscopies have reported improvements in detecting colorectal cancer (CRC) lesions; however, its utility in the real world remains limited. In this systematic review and meta-analysis, we evaluate the efficacy of AI-assisted colonoscopies against routine colonoscopy (RC). STUDY We performed an extensive search of major databases (through January 2021) for randomized controlled trials (RCTs) reporting adenoma and polyp detection rates. Odds ratios (OR) and standardized mean differences (SMD) with 95% confidence intervals (CIs) were reported. Additionally, trial sequential analysis (TSA) was performed to guard against errors. RESULTS Six RCTs were included (4996 participants). The mean age (SD) was 51.99 (4.43) years, and 49% were female. Detection rates favored AI over RC for adenomas (OR 1.77; 95% CI: 1.57-2.08) and polyps (OR 1.91; 95% CI: 1.68-2.16). Secondary outcomes, including the mean number of adenomas (SMD 0.23; 95% CI: 0.18-0.29) and polyps (SMD 0.23; 95% CI: 0.17-0.29) detected per procedure, also favored AI. However, RC outperformed AI in detecting pedunculated polyps. Withdrawal times (WTs) favored AI when biopsies were included, while WTs without biopsies, cecal intubation times, and bowel preparation adequacy were similar. CONCLUSIONS Colonoscopies equipped with AI detection algorithms could significantly detect previously missed adenomas and polyps while retaining the ability to self-assess and improve periodically. More effective clearance of diminutive adenomas may allow lengthening of surveillance intervals, reducing the burden of surveillance colonoscopies and increasing their accessibility to those at higher risk. TSA ruled out the risk of false-positive results and confirmed a sufficient sample size to detect the observed effect. Currently, these findings suggest that AI-assisted colonoscopy can serve as a useful proxy to address critical gaps in CRC identification.
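The pooled ORs above come from 2x2 event counts per trial arm. As a generic illustration of how a single study's OR and 95% CI are computed (with hypothetical counts, not the meta-analysis data), the standard log-odds-ratio method is:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI for one 2x2 table.

    a = detections in the AI arm, b = non-detections in the AI arm,
    c = detections in the control arm, d = non-detections in the control arm.
    The CI is built on the log scale using the standard error
    sqrt(1/a + 1/b + 1/c + 1/d); z = 1.96 gives the 95% level.
    Assumes all cells are non-zero (no continuity correction applied).
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper
```

A meta-analysis would then combine the per-trial log-ORs with inverse-variance (fixed- or random-effects) weights to obtain pooled figures like OR 1.77 (1.57-2.08).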
Affiliation(s)
- Smit S Deliwala
- Department of Internal Medicine, Michigan State University at Hurley Medical Center, Two Hurley Plaza, Ste 212, Flint, MI, 48503, USA.
- Kewan Hamid
- Department of Internal Medicine/Pediatrics, Michigan State University at Hurley Medical Center, Flint, MI, USA
- Mahmoud Barbarawi
- Department of Internal Medicine, Michigan State University at Hurley Medical Center, Two Hurley Plaza, Ste 212, Flint, MI, 48503, USA
- Harini Lakshman
- Department of Internal Medicine, Michigan State University at Hurley Medical Center, Two Hurley Plaza, Ste 212, Flint, MI, 48503, USA
- Yazan Zayed
- Department of Internal Medicine, Michigan State University at Hurley Medical Center, Two Hurley Plaza, Ste 212, Flint, MI, 48503, USA
- Pujan Kandel
- Department of Internal Medicine, Michigan State University at Hurley Medical Center, Two Hurley Plaza, Ste 212, Flint, MI, 48503, USA
- Srikanth Malladi
- Department of Internal Medicine/Pediatrics, Michigan State University at Hurley Medical Center, Flint, MI, USA
- Adiraj Singh
- Department of Internal Medicine/Pediatrics, Michigan State University at Hurley Medical Center, Flint, MI, USA
- Ghassan Bachuwa
- Department of Internal Medicine, Michigan State University at Hurley Medical Center, Two Hurley Plaza, Ste 212, Flint, MI, 48503, USA
- Grigoriy E Gurvits
- Department of Internal Medicine - Division of Gastroenterology, New York University/Langone Medical Center, New York, NY, USA
- Saurabh Chawla
- Department of Internal Medicine - Division of Gastroenterology, Emory University, Atlanta, GA, USA
|
48
|
Fischer G, De Silvestro A, Müller M, Frauenfelder T, Martini K. Computer-Aided Detection of Seven Chest Pathologies on Standard Posteroanterior Chest X-Rays Compared to Radiologists Reading Dual-Energy Subtracted Radiographs. Acad Radiol 2021; 29:e139-e148. [PMID: 34706849] [DOI: 10.1016/j.acra.2021.09.016]
Abstract
RATIONALE AND OBJECTIVES Retrospective performance evaluation of a computer-aided detection (CAD) system on standard posteroanterior (PA) chest radiographs (PA-CXR) in the detection of pulmonary nodules, infectious consolidation, pneumothorax, pleural effusion, aortic calcification, cardiomegaly and rib fractures, compared to radiologists analyzing PA-CXR including dual-energy subtraction radiography (hereafter DESR). MATERIALS AND METHODS PA-CXR/DESR images of 197 patients were included. All patients underwent chest CT (gold standard) within a short interval (mean 28 hours). All images were evaluated by three blinded readers for the presence of pulmonary nodules, infectious consolidation, pneumothorax, pleural effusion, aortic calcification, cardiomegaly, and rib fractures. In parallel, the PA-CXR images were analyzed by a CAD software. CAD results were compared to the majority result of the three readers. Sensitivity and specificity were calculated. McNemar's test was applied to test for significant differences. Interobserver agreement was assessed using Cohen's kappa (κ). RESULTS The sensitivity of the CAD software was significantly higher for the detection of infectious consolidation and pulmonary nodules (67.9% vs 26.8% and 54% vs 35.6%, respectively; p < 0.001) compared to radiologists analyzing DESR images. For the remaining evaluated pathologies, no statistically significant differences were found. Overall, mean interobserver agreement between the three radiologists was moderate (κ = 0.534). The best interobserver agreement was reached for pneumothorax (κ = 0.708) and pleural effusion (κ = 0.699), while the worst was obtained for rib fractures (κ = 0.412). CONCLUSION The CAD system has the potential to improve the detection of infectious consolidation and pulmonary nodules on CXR images.
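The comparison above rests on sensitivity and McNemar's test for paired readings of the same images. As an illustrative sketch (all counts below are hypothetical, not the study's data):

```python
def sensitivity(tp: int, fn: int) -> float:
    """Fraction of true positives among all actual positives."""
    return tp / (tp + fn)

def mcnemar_statistic(b: int, c: int) -> float:
    """McNemar chi-square statistic (no continuity correction) from the
    two discordant cells of a paired 2x2 table:
    b = CAD positive / reader negative, c = CAD negative / reader positive.

    Under the null hypothesis of equal marginal rates the statistic
    follows a chi-square distribution with 1 degree of freedom; values
    above 3.84 correspond to p < 0.05.
    """
    return (b - c) ** 2 / (b + c)
```

Only the discordant pairs enter the statistic; cases where CAD and the readers agree (both positive or both negative) carry no information about which reader is more sensitive.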
|
49
|
Abstract
Over the past decade, artificial intelligence (AI) has been broadly applied to many aspects of human life, with recent groundbreaking successes in facial recognition, natural language processing, autonomous driving, and medical imaging. Gastroenterology has applied AI to a vast array of clinical problems, and some of the earliest prospective trials examining AI in medicine have been in computer vision applied to endoscopy. Evidence is mounting for 2 broad areas of AI as applied to gastroenterology: computer-aided detection and computer-aided diagnosis.
Affiliation(s)
- Jeremy R Glissen Brown
- Center for Advanced Endoscopy, Division of Gastroenterology and Hepatology, Beth Israel Deaconess Medical Center and Harvard Medical School, 330 Brookline Avenue, Boston, MA 02130, USA.
- Tyler M Berzin
- Center for Advanced Endoscopy, Division of Gastroenterology and Hepatology, Beth Israel Deaconess Medical Center and Harvard Medical School, 330 Brookline Avenue, Boston, MA 02130, USA
|
50
|
Hwang EJ, Goo JM, Yoon SH, Beck KS, Seo JB, Choi BW, Chung MJ, Park CM, Jin KN, Lee SM. Use of Artificial Intelligence-Based Software as Medical Devices for Chest Radiography: A Position Paper from the Korean Society of Thoracic Radiology. Korean J Radiol 2021; 22:1743-1748. [PMID: 34564966] [PMCID: PMC8546139] [DOI: 10.3348/kjr.2021.0544]
Affiliation(s)
- Eui Jin Hwang
- Department of Radiology, Seoul National University Hospital, Seoul, Korea; Department of Radiology and Institution of Radiation Medicine, Seoul National University College of Medicine, Seoul, Korea
- Jin Mo Goo
- Department of Radiology, Seoul National University Hospital, Seoul, Korea; Department of Radiology and Institution of Radiation Medicine, Seoul National University College of Medicine, Seoul, Korea; Cancer Research Institute, Seoul National University, Seoul, Korea.
- Soon Ho Yoon
- Department of Radiology, Seoul National University Hospital, Seoul, Korea; Department of Radiology and Institution of Radiation Medicine, Seoul National University College of Medicine, Seoul, Korea; Department of Radiology, UMass Memorial Medical Center, Worcester, MA, USA
- Kyongmin Sarah Beck
- Department of Radiology, Seoul St Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Joon Beom Seo
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Byoung Wook Choi
- Department of Radiology, Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
- Myung Jin Chung
- Department of Radiology and Medical AI Research Center, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
- Chang Min Park
- Department of Radiology, Seoul National University Hospital, Seoul, Korea; Department of Radiology and Institution of Radiation Medicine, Seoul National University College of Medicine, Seoul, Korea
- Kwang Nam Jin
- Department of Radiology, Seoul Metropolitan Government-Seoul National University Boramae Medical Center, Seoul, Korea
- Sang Min Lee
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
|