1. Maki JH, Patel NU, Ulrich EJ, Dhaouadi J, Jones RW. Part II: Effect of different evaluation methods to the application of a computer-aided prostate MRI detection/diagnosis (CADe/CADx) device on reader performance. Curr Probl Diagn Radiol 2024:S0363-0188(24)00073-2. PMID: 38702282. DOI: 10.1067/j.cpradiol.2024.04.003.
Abstract
INTRODUCTION The construction and results of a multiple-reader multiple-case prostate MRI study are described and reported to illustrate recommendations for how to standardize artificial intelligence (AI) prostate studies per the review constituting Part I. METHODS Our previously reported approach was applied to review and report an IRB-approved, HIPAA-compliant multiple-reader multiple-case clinical study of 150 bi-parametric prostate MRI studies across 9 readers, measuring physician performance both with and without the use of the recently FDA-cleared CADe/CADx software ProstatID. RESULTS Unassisted reader AUC values ranged from 0.418 to 0.759, with AI-assisted AUC values ranging from 0.507 to 0.787. This represented a statistically significant AUC improvement of 0.045 (α = 0.05). A free-response ROC (FROC) analysis similarly demonstrated a statistically significant increase in θ from 0.405 to 0.453 (α = 0.05). The standalone performance of ProstatID across all prostate tissues demonstrated an AUC of 0.929, while its standalone lesion-level performance at all biopsied locations achieved an AUC of 0.710. CONCLUSION This study applies and illustrates suggested reporting and standardization methods for prostate AI studies that will make it easier to understand, evaluate, and compare AI studies. Providing radiologists with the ProstatID CADe/CADx software significantly increased diagnostic performance as assessed by both ROC and free-response ROC metrics. Such algorithms have the potential to improve radiologist performance in the detection and localization of clinically significant prostate cancer.
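The per-reader AUC values quoted above come from ROC analysis; the study itself used formal multi-reader multi-case methodology, but as a purely illustrative sketch (with invented suspicion scores and ground truth, not the study's data), a single reader's AUC can be computed from ordinal scores via the Mann-Whitney statistic:

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney statistic: the probability that a
    randomly chosen positive case scores higher than a randomly
    chosen negative case (ties count as 0.5)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical reader suspicion scores (1-5) against biopsy ground truth.
truth  = [1, 1, 1, 0, 0, 0, 0, 1]
scores = [5, 4, 2, 1, 3, 2, 1, 4]
print(round(roc_auc(truth, scores), 3))  # → 0.906
```

This rank-based estimate is what a single point "AUC of 0.929" summarizes; comparing assisted vs. unassisted readers additionally requires MRMC variance methods not sketched here.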
Affiliation(s)
- Jeffrey H Maki
- Department of Radiology, University of Colorado Anschutz Medical Center, 12401 E 17th Ave (MS L954), Aurora, CO 80045, USA.
- Nayana U Patel
- University of New Mexico Department of Radiology, Albuquerque, NM, USA
2. Maki JH, Patel NU, Ulrich EJ, Dhaouadi J, Jones RW. Part I: prostate cancer detection, artificial intelligence for prostate cancer and how we measure diagnostic performance: a comprehensive review. Curr Probl Diagn Radiol 2024:S0363-0188(24)00072-0. PMID: 38658286. DOI: 10.1067/j.cpradiol.2024.04.002.
Abstract
MRI has firmly established itself as a mainstay for the detection, staging, and surveillance of prostate cancer. Despite this success, prostate MRI continues to suffer from poor inter-reader variability and a low positive predictive value. The recent emergence of Artificial Intelligence (AI) shows great potential to improve diagnostic performance. Understanding and interpreting the AI landscape and its ever-increasing research literature, however, is difficult, in part because of widely varying study design and reporting techniques. This paper aims to address this difficulty by first outlining the different types of AI used for the detection and diagnosis of prostate cancer, then deciphering how data collection methods, statistical analysis metrics (such as ROC and FROC analysis), and end points/outcomes (lesion detection vs. case diagnosis) affect performance and limit the ability to compare between studies. Finally, this work explores the need for appropriately enriched investigational datasets and proper ground truth, and provides guidance on how to best conduct AI prostate MRI studies. Published in parallel, a clinical study applying this suggested study design reviewed and reported a multiple-reader multiple-case clinical study of 150 bi-parametric prostate MRI studies across nine readers, measuring physician performance both with and without the use of a recently FDA-cleared Artificial Intelligence software.
Affiliation(s)
- Jeffrey H Maki
- University of Colorado Anschutz Medical Center, Department of Radiology, 12401 E 17th Ave (MS L954), Aurora, Colorado, USA.
- Nayana U Patel
- University of New Mexico Department of Radiology, Albuquerque, NM, USA
3. Djinbachian R, Haumesser C, Taghiakbari M, Pohl H, Barkun A, Sidani S, Liu Chen Kiow J, Panzini B, Bouchard S, Deslandres E, Alj A, von Renteln D. Autonomous Artificial Intelligence vs Artificial Intelligence-Assisted Human Optical Diagnosis of Colorectal Polyps: A Randomized Controlled Trial. Gastroenterology 2024:S0016-5085(24)00131-8. PMID: 38331204. DOI: 10.1053/j.gastro.2024.01.044.
Abstract
BACKGROUND & AIMS Artificial intelligence (AI)-based optical diagnosis systems (CADx) have been developed to allow pathology prediction of colorectal polyps during colonoscopies. However, CADx systems have not yet been validated for autonomous performance. Therefore, we conducted a trial comparing autonomous AI to AI-assisted human (AI-H) optical diagnosis. METHODS We performed a randomized noninferiority trial of patients undergoing elective colonoscopies at 1 academic institution. Patients were randomized into (1) autonomous AI-based CADx optical diagnosis of diminutive polyps without human input or (2) diagnosis by endoscopists who performed optical diagnosis of diminutive polyps after seeing the real-time CADx diagnosis. The primary outcome was accuracy in optical diagnosis in both arms using pathology as the gold standard. Secondary outcomes included agreement with pathology for surveillance intervals. RESULTS A total of 467 patients were randomized (238 patients/158 polyps in the autonomous AI group and 229 patients/179 polyps in the AI-H group). Accuracy for optical diagnosis was 77.2% (95% confidence interval [CI], 69.7-84.7) in the autonomous AI group and 72.1% (95% CI, 65.5-78.6) in the AI-H group (P = .86). For high-confidence diagnoses, accuracy for optical diagnosis was 77.2% (95% CI, 69.7-84.7) in the autonomous AI group and 75.5% (95% CI, 67.9-82.0) in the AI-H group. Autonomous AI had statistically significantly higher agreement with pathology-based surveillance intervals compared to AI-H (91.5% [95% CI, 86.9-96.1] vs 82.1% [95% CI, 76.5-87.7]; P = .016). CONCLUSIONS Autonomous AI-based optical diagnosis exhibits noninferior accuracy to endoscopist-based diagnosis. Both autonomous AI and AI-H exhibited relatively low accuracy for optical diagnosis; however, autonomous AI achieved higher agreement with pathology-based surveillance intervals. (ClinicalTrials.gov, Number NCT05236790).
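The accuracy figures with 95% confidence intervals above are binomial proportion estimates. As an illustrative sketch with hypothetical counts (not the trial's data or its actual statistical code; the trial's intervals also reflect its own methodology), a simple normal-approximation interval looks like:

```python
import math

def wald_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a binomial proportion."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical: 122 correct optical diagnoses out of 158 polyps.
p, lo, hi = wald_ci(122, 158)
print(f"{p:.1%} (95% CI {lo:.1%}-{hi:.1%})")  # → 77.2% (95% CI 70.7%-83.8%)
```

For small samples or proportions near 0 or 1, a Wilson or exact interval is preferable to this Wald sketch.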
Affiliation(s)
- Roupen Djinbachian
- Montreal University Hospital Research Center, Montreal, Quebec, Canada; Division of Gastroenterology, Montreal University Hospital Center, Montreal, Quebec, Canada
- Claire Haumesser
- Montreal University Hospital Research Center, Montreal, Quebec, Canada
- Mahsa Taghiakbari
- Montreal University Hospital Research Center, Montreal, Quebec, Canada; Division of Gastroenterology, Montreal University Hospital Center, Montreal, Quebec, Canada
- Heiko Pohl
- Section of Gastroenterology, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire; Department of Gastroenterology, Veterans Affairs White River Junction, Vermont
- Alan Barkun
- Division of Gastroenterology, McGill University and McGill University Health Center, Montreal, Quebec, Canada
- Sacha Sidani
- Division of Gastroenterology, Montreal University Hospital Center, Montreal, Quebec, Canada
- Jeremy Liu Chen Kiow
- Division of Gastroenterology, Montreal University Hospital Center, Montreal, Quebec, Canada
- Benoit Panzini
- Division of Gastroenterology, Montreal University Hospital Center, Montreal, Quebec, Canada
- Simon Bouchard
- Division of Gastroenterology, Montreal University Hospital Center, Montreal, Quebec, Canada
- Erik Deslandres
- Division of Gastroenterology, Montreal University Hospital Center, Montreal, Quebec, Canada
- Abla Alj
- Division of Internal Medicine, Montreal University Hospital Center, Montreal, Quebec, Canada
- Daniel von Renteln
- Montreal University Hospital Research Center, Montreal, Quebec, Canada; Division of Gastroenterology, Montreal University Hospital Center, Montreal, Quebec, Canada
4. Samarasena J, Yang D, Berzin TM. AGA Clinical Practice Update on the Role of Artificial Intelligence in Colon Polyp Diagnosis and Management: Commentary. Gastroenterology 2023;165:1568-1573. PMID: 37855759. DOI: 10.1053/j.gastro.2023.07.010.
Abstract
DESCRIPTION The purpose of this American Gastroenterological Association (AGA) Institute Clinical Practice Update (CPU) is to review the available evidence and provide expert commentary on the current landscape of artificial intelligence in the evaluation and management of colorectal polyps. METHODS This CPU was commissioned and approved by the AGA Institute Clinical Practice Updates Committee (CPUC) and the AGA Governing Board to provide timely guidance on a topic of high clinical importance to the AGA membership and underwent internal peer review by the CPUC and external peer review through standard procedures of Gastroenterology. This Expert Commentary incorporates important as well as recently published studies in this field, and it reflects the experiences of the authors who are experienced endoscopists with expertise in the field of artificial intelligence and colorectal polyps.
Affiliation(s)
- Jason Samarasena
- Division of Gastroenterology, University of California Irvine, Orange, California
- Dennis Yang
- Center for Interventional Endoscopy, AdventHealth, Orlando, Florida
- Tyler M Berzin
- Center for Advanced Endoscopy, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts
5. Ding M, Yan J, Chao G, Zhang S. Application of artificial intelligence in colorectal cancer screening by colonoscopy: Future prospects (Review). Oncol Rep 2023;50:199. PMID: 37772392. DOI: 10.3892/or.2023.8636.
Abstract
Colorectal cancer (CRC) has become a severe global health concern, with the third-highest incidence and second-highest mortality rate of all cancers. The global burden of CRC is expected to increase by 60% by 2030. Fortunately, effective early evidence-based screening could significantly reduce the incidence and mortality of CRC. Colonoscopy is the core screening method for CRC, with high popularity and accuracy. Yet the accuracy of colonoscopy in CRC screening depends on the experience and state of the operating physician, and it is challenging to maintain a consistently high CRC diagnostic rate. Artificial intelligence (AI)-assisted colonoscopy can compensate for these shortcomings and improve the accuracy, efficiency, and quality of colonoscopy screening. The unique advantages of AI, such as continuously advancing high-performance computing capabilities and innovative deep-learning architectures, highlight its role in colonoscopy screening and its potential impact on controlling CRC morbidity and mortality.
Affiliation(s)
- Menglu Ding
- The Second Affiliated Hospital of Zhejiang Chinese Medical University (The Xin Hua Hospital of Zhejiang Province), Hangzhou, Zhejiang 310000, P.R. China
- Junbin Yan
- The Second Affiliated Hospital of Zhejiang Chinese Medical University (The Xin Hua Hospital of Zhejiang Province), Hangzhou, Zhejiang 310000, P.R. China
- Guanqun Chao
- Department of General Practice, Sir Run Run Shaw Hospital, Zhejiang University, Hangzhou, Zhejiang 310000, P.R. China
- Shuo Zhang
- The Second Affiliated Hospital of Zhejiang Chinese Medical University (The Xin Hua Hospital of Zhejiang Province), Hangzhou, Zhejiang 310000, P.R. China
6. Garg G. Computer-Aided Diagnosis Systems for Prostate Cancer: A Comprehensive Study. Curr Med Imaging 2023:CMIR-EPUB-131994. PMID: 37218186. DOI: 10.2174/1573405620666230522151406.
Abstract
The American Cancer Society (ACS) reported in its Cancer Facts and Figures 2021 that prostate cancer (PCa) is the second leading cause of cancer death among American men, with an average age at diagnosis of 66 years. The disease predominantly affects older men and poses a significant challenge for radiologists, urologists, and oncologists when it comes to diagnosing and treating it accurately and in a timely manner. Detecting PCa with precision and on time is crucial for proper treatment planning and for reducing the rising mortality rate. This paper focuses on Computer-Aided Diagnosis (CADx) systems, discussing in detail the different phases specific to PCa. Each phase of CADx is comprehensively analyzed and evaluated against recent state-of-the-art techniques in both quantitative and qualitative terms. This study outlines significant research gaps and findings for every phase of CADx, providing valuable insights to biomedical engineers and researchers.
Affiliation(s)
- Gaurav Garg
- Department of Computer Science and Engineering, Chitkara School of Engineering and Technology, Chitkara University, Baddi, Himachal Pradesh, India
7. Gimeno-García AZ, Hernández-Pérez A, Nicolás-Pérez D, Hernández-Guerra M. Artificial Intelligence Applied to Colonoscopy: Is It Time to Take a Step Forward? Cancers (Basel) 2023;15:2193. PMID: 37190122. DOI: 10.3390/cancers15082193.
Abstract
Growing evidence indicates that artificial intelligence (AI) applied to medicine is here to stay. In gastroenterology, AI computer vision applications have been identified as a research priority. The two main AI system categories are computer-aided polyp detection (CADe) and computer-assisted diagnosis (CADx). However, other fields of expansion relate to colonoscopy quality, such as methods to objectively assess colon cleansing during the colonoscopy, devices to automatically predict and improve bowel cleansing before the examination, prediction of deep submucosal invasion, reliable measurement of colorectal polyps, and accurate localization of colorectal lesions in the colon. Although growing evidence indicates that AI systems could improve some of these quality metrics, there are concerns regarding cost-effectiveness, and large multicenter randomized studies with strong outcomes, such as post-colonoscopy colorectal cancer incidence and mortality, are lacking. The integration of all these tasks into one quality-improvement device could facilitate the incorporation of AI systems into clinical practice. In this manuscript, the current status of the role of AI in colonoscopy is reviewed, along with its current applications, drawbacks, and areas for improvement.
Affiliation(s)
- Antonio Z Gimeno-García
- Gastroenterology Department, Hospital Universitario de Canarias, 38200 San Cristóbal de La Laguna, Tenerife, Spain
- Instituto Universitario de Tecnologías Biomédicas (ITB) & Centro de Investigación Biomédica de Canarias (CIBICAN), Internal Medicine Department, Universidad de La Laguna, 38200 San Cristóbal de La Laguna, Tenerife, Spain
- Anjara Hernández-Pérez
- Gastroenterology Department, Hospital Universitario de Canarias, 38200 San Cristóbal de La Laguna, Tenerife, Spain
- Instituto Universitario de Tecnologías Biomédicas (ITB) & Centro de Investigación Biomédica de Canarias (CIBICAN), Internal Medicine Department, Universidad de La Laguna, 38200 San Cristóbal de La Laguna, Tenerife, Spain
- David Nicolás-Pérez
- Gastroenterology Department, Hospital Universitario de Canarias, 38200 San Cristóbal de La Laguna, Tenerife, Spain
- Instituto Universitario de Tecnologías Biomédicas (ITB) & Centro de Investigación Biomédica de Canarias (CIBICAN), Internal Medicine Department, Universidad de La Laguna, 38200 San Cristóbal de La Laguna, Tenerife, Spain
- Manuel Hernández-Guerra
- Gastroenterology Department, Hospital Universitario de Canarias, 38200 San Cristóbal de La Laguna, Tenerife, Spain
- Instituto Universitario de Tecnologías Biomédicas (ITB) & Centro de Investigación Biomédica de Canarias (CIBICAN), Internal Medicine Department, Universidad de La Laguna, 38200 San Cristóbal de La Laguna, Tenerife, Spain
8. Dhaliwal J, Walsh CM. Artificial Intelligence in Pediatric Endoscopy: Current Status and Future Applications. Gastrointest Endosc Clin N Am 2023;33:291-308. PMID: 36948747. DOI: 10.1016/j.giec.2022.12.001.
Abstract
The application of artificial intelligence (AI) has great promise for improving pediatric endoscopy. The majority of preclinical studies have been undertaken in adults, with the greatest progress being made in the context of colorectal cancer screening and surveillance. This development has only been possible with advances in deep learning, like the convolutional neural network model, which has enabled real-time detection of pathology. Comparatively, the majority of deep learning systems developed in inflammatory bowel disease have focused on predicting disease severity and were developed using still images rather than videos. The application of AI to pediatric endoscopy is in its infancy, thus providing an opportunity to develop clinically meaningful and fair systems that do not perpetuate societal biases. In this review, we provide an overview of AI, summarize the advances of AI in endoscopy, and describe its potential application to pediatric endoscopic practice and education.
Affiliation(s)
- Jasbir Dhaliwal
- Division of Pediatric Gastroenterology, Hepatology and Nutrition, Cincinnati Children's Hospital Medical Center, University of Cincinnati, OH, USA
- Catharine M Walsh
- Division of Gastroenterology, Hepatology, and Nutrition, and the SickKids Research and Learning Institutes, The Hospital for Sick Children, Toronto, ON, Canada; Department of Paediatrics and The Wilson Centre, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
9. Göreke V. A Novel Deep-Learning-Based CADx Architecture for Classification of Thyroid Nodules Using Ultrasound Images. Interdiscip Sci 2023. PMID: 36976511. PMCID: PMC10043860. DOI: 10.1007/s12539-023-00560-4.
Abstract
Thyroid cancer nodules occur in the cells of the thyroid as benign or malignant types, and thyroid sonographic images are most often used for diagnosis of thyroid cancer. The aim of this study is to introduce a computer-aided diagnosis system that can classify thyroid nodules with high accuracy using data gathered from ultrasound images. Acquisition and labeling of sub-images were performed by a specialist physician, and the number of sub-images was then increased using data augmentation methods. Deep features were obtained from the images using a pre-trained deep neural network; the dimensionality of these features was reduced and the features were improved. The improved features were combined with morphological and texture features, and this feature group was rated by a similarity coefficient value obtained from a similarity coefficient generator module. The nodules were classified as benign or malignant using a multi-layer deep neural network with a pre-weighting layer designed with a novel approach. In this study, a novel multi-layer computer-aided diagnosis system was proposed for thyroid cancer detection. In its first layer, a novel feature extraction method based on the class similarity of images was developed; in its second layer, a novel pre-weighting layer was proposed by modifying the genetic algorithm. The proposed system showed superior performance on different metrics compared to the literature.
Affiliation(s)
- Volkan Göreke
- Department of Computer Technologies, Sivas Vocational School of Technical Sciences, Sivas Cumhuriyet University, 58140, Sivas, Türkiye.
10. Cherubini A, Dinh NN. A Review of the Technology, Training, and Assessment Methods for the First Real-Time AI-Enhanced Medical Device for Endoscopy. Bioengineering (Basel) 2023;10:404. PMID: 37106592. PMCID: PMC10136070. DOI: 10.3390/bioengineering10040404.
Abstract
Artificial intelligence (AI) has the potential to assist in endoscopy and improve decision making, particularly in situations where humans may make inconsistent judgments. The performance assessment of the medical devices operating in this context is a complex combination of bench tests, randomized controlled trials, and studies on the interaction between physicians and AI. We review the scientific evidence published about GI Genius, the first AI-powered medical device for colonoscopy to enter the market, and the device that is most widely tested by the scientific community. We provide an overview of its technical architecture, AI training and testing strategies, and regulatory path. In addition, we discuss the strengths and limitations of the current platform and its potential impact on clinical practice. The details of the algorithm architecture and the data that were used to train the AI device have been disclosed to the scientific community in the pursuit of a transparent AI. Overall, the first AI-enabled medical device for real-time video analysis represents a significant advancement in the use of AI for endoscopies and has the potential to improve the accuracy and efficiency of colonoscopy procedures.
Affiliation(s)
- Andrea Cherubini
- Cosmo Intelligent Medical Devices, D02KV60 Dublin, Ireland
- Milan Center for Neuroscience, University of Milano-Bicocca, 20126 Milano, Italy
- Nhan Ngo Dinh
- Cosmo Intelligent Medical Devices, D02KV60 Dublin, Ireland
11. Fonollà R, van der Zander QEW, Schreuder RM, Subramaniam S, Bhandari P, Masclee AAM, Schoon EJ, van der Sommen F, de With PHN. Automatic image and text-based description for colorectal polyps using BASIC classification. Artif Intell Med 2021;121:102178. PMID: 34763800. DOI: 10.1016/j.artmed.2021.102178.
Abstract
Colorectal polyps (CRP) are precursor lesions of colorectal cancer (CRC). Correct identification of CRPs during in-vivo colonoscopy is supported by the endoscopist's expertise and medical classification models. A recently developed classification model is the Blue light imaging Adenoma Serrated International Classification (BASIC), which describes the differences between non-neoplastic and neoplastic lesions acquired with blue light imaging (BLI). Computer-aided detection (CADe) and diagnosis (CADx) systems are efficient at visually assisting with medical decisions but fall short of translating decisions into relevant clinical information. Communication between machine and medical expert is of crucial importance to improving the diagnosis of CRP during in-vivo procedures. In this work, the combination of a polyp image classification model and a language model is proposed to develop a CADx system that automatically generates text comparable to the human language employed by endoscopists. The developed system generates sentences equivalent to the human reference and describes CRP images acquired with white light (WL), blue light imaging (BLI), and linked color imaging (LCI). An image feature encoder and a BERT module are employed to build the AI model, and an external test set is used to evaluate the results and compute the linguistic metrics. The experimental results show the construction of complete sentences with established metric scores of BLEU-1 = 0.67, ROUGE-L = 0.83, and METEOR = 0.50. The developed CADx system for automatic CRP image captioning facilitates future advances towards automatic reporting and may help reduce time-consuming histology assessment.
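The BLEU-1 score reported above is essentially clipped unigram precision with a brevity penalty. As a minimal sketch on toy sentences (not the paper's data or its actual evaluation code):

```python
import math
from collections import Counter

def bleu1(candidate, reference):
    """Sentence-level BLEU-1: clipped unigram precision times a
    brevity penalty applied when the candidate is shorter than
    the reference."""
    cand, ref = candidate.split(), reference.split()
    # Counter intersection clips each candidate token's count to the reference.
    overlap = sum((Counter(cand) & Counter(ref)).values())
    precision = overlap / len(cand)
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

ref = "a small sessile polyp with regular surface pattern"
cand = "a small polyp with regular pattern"
print(round(bleu1(cand, ref), 2))  # → 0.72
```

ROUGE-L (longest common subsequence recall) and METEOR (stem/synonym-aware alignment) follow the same spirit but different matching rules.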
Affiliation(s)
- Roger Fonollà
- Department of Electrical Engineering, Video Coding and Architectures (VCA), Eindhoven University of Technology, Eindhoven, Noord-Brabant, the Netherlands.
- Quirine E W van der Zander
- Division of Gastroenterology and Hepatology, Maastricht University Medical Center, Maastricht, the Netherlands; GROW, School for Oncology and Developmental Biology, Maastricht University, Maastricht, the Netherlands
- Ramon M Schreuder
- Department of Gastroenterology and Hepatology, Catharina Hospital, Eindhoven, Noord-Brabant, the Netherlands
- Sharmila Subramaniam
- Department of Gastroenterology, Portsmouth Hospitals University NHS Trust, Portsmouth, United Kingdom
- Pradeep Bhandari
- Department of Gastroenterology, Portsmouth Hospitals University NHS Trust, Portsmouth, United Kingdom
- Ad A M Masclee
- Division of Gastroenterology and Hepatology, Maastricht University Medical Center, Maastricht, the Netherlands; NUTRIM, School of Nutrition & Translational Research in Metabolism, Maastricht University, Maastricht, the Netherlands
- Erik J Schoon
- Department of Gastroenterology and Hepatology, Catharina Hospital, Eindhoven, Noord-Brabant, the Netherlands
- Fons van der Sommen
- Department of Electrical Engineering, Video Coding and Architectures (VCA), Eindhoven University of Technology, Eindhoven, Noord-Brabant, the Netherlands
- Peter H N de With
- Department of Electrical Engineering, Video Coding and Architectures (VCA), Eindhoven University of Technology, Eindhoven, Noord-Brabant, the Netherlands
12. Deliwala SS, Hamid K, Barbarawi M, Lakshman H, Zayed Y, Kandel P, Malladi S, Singh A, Bachuwa G, Gurvits GE, Chawla S. Artificial intelligence (AI) real-time detection vs. routine colonoscopy for colorectal neoplasia: a meta-analysis and trial sequential analysis. Int J Colorectal Dis 2021;36:2291-2303. PMID: 33934173. DOI: 10.1007/s00384-021-03929-3.
Abstract
GOALS AND BACKGROUND Studies analyzing artificial intelligence (AI) in colonoscopies have reported improvements in detecting colorectal cancer (CRC) lesions; however, its utility in the real world remains limited. In this systematic review and meta-analysis, we evaluate the efficacy of AI-assisted colonoscopies against routine colonoscopy (RC). STUDY We performed an extensive search of major databases (through January 2021) for randomized controlled trials (RCTs) reporting adenoma and polyp detection rates. Odds ratios (OR) and standardized mean differences (SMD) with 95% confidence intervals (CIs) were reported. Additionally, trial sequential analysis (TSA) was performed to guard against errors. RESULTS Six RCTs were included (4996 participants). The mean age (SD) was 51.99 (4.43) years, and 49% were female. Detection rates favored AI over RC for adenomas (OR 1.77; 95% CI: 1.57-2.08) and polyps (OR 1.91; 95% CI: 1.68-2.16). Secondary outcomes, including the mean number of adenomas (SMD 0.23; 95% CI: 0.18-0.29) and polyps (SMD 0.23; 95% CI: 0.17-0.29) detected per procedure, favored AI. However, RC outperformed AI in detecting pedunculated polyps. Withdrawal times (WTs) favored AI when biopsies were included, while WTs without biopsies, cecal intubation times, and bowel preparation adequacy were similar. CONCLUSIONS Colonoscopies equipped with AI detection algorithms could detect previously missed adenomas and polyps while retaining the ability to self-assess and improve periodically. More effective clearance of diminutive adenomas may allow lengthening of surveillance intervals, reducing the burden of surveillance colonoscopies and increasing accessibility for those at higher risk. TSA ruled out the risk of false-positive results and confirmed a sufficient sample size to detect the observed effect. These findings suggest that AI-assisted colonoscopy can serve as a useful proxy to address critical gaps in CRC identification.
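The pooled odds ratios above rest on the standard 2x2 computation with a log-scale standard error. As an illustrative sketch for a single hypothetical trial (invented counts, not the meta-analysis data, and without the inverse-variance pooling a real meta-analysis adds):

```python
import math

def odds_ratio(a, b, c, d, z=1.96):
    """OR and 95% CI for a 2x2 table:
    a = intervention arm with finding, b = intervention arm without,
    c = control arm with finding,      d = control arm without.
    Log-scale standard error: sqrt(1/a + 1/b + 1/c + 1/d)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical: adenoma detected in 120/300 AI patients vs 80/300 controls.
print(odds_ratio(120, 180, 80, 220))
```

A meta-analytic estimate would combine such per-trial log-ORs weighted by their inverse variances (fixed- or random-effects).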
Affiliation(s)
- Smit S Deliwala
- Department of Internal Medicine, Michigan State University at Hurley Medical Center, Two Hurley Plaza, Ste 212, Flint, MI, 48503, USA.
| | - Kewan Hamid
- Department of Internal Medicine/Pediatrics, Michigan State University at Hurley Medical Center, Flint, MI, USA
| | - Mahmoud Barbarawi
- Department of Internal Medicine, Michigan State University at Hurley Medical Center, Two Hurley Plaza, Ste 212, Flint, MI, 48503, USA
| | - Harini Lakshman
- Department of Internal Medicine, Michigan State University at Hurley Medical Center, Two Hurley Plaza, Ste 212, Flint, MI, 48503, USA
| | - Yazan Zayed
- Department of Internal Medicine, Michigan State University at Hurley Medical Center, Two Hurley Plaza, Ste 212, Flint, MI, 48503, USA
| | - Pujan Kandel
- Department of Internal Medicine, Michigan State University at Hurley Medical Center, Two Hurley Plaza, Ste 212, Flint, MI, 48503, USA
| | - Srikanth Malladi
- Department of Internal Medicine/Pediatrics, Michigan State University at Hurley Medical Center, Flint, MI, USA
| | - Adiraj Singh
- Department of Internal Medicine/Pediatrics, Michigan State University at Hurley Medical Center, Flint, MI, USA
| | - Ghassan Bachuwa
- Department of Internal Medicine, Michigan State University at Hurley Medical Center, Two Hurley Plaza, Ste 212, Flint, MI, 48503, USA
| | - Grigoriy E Gurvits
- Department of Internal Medicine - Division of Gastroenterology, New York University/Langone Medical Center, New York, NY, USA
| | - Saurabh Chawla
- Department of Internal Medicine - Division of Gastroenterology, Emory University, Atlanta, GA, USA
| |
13
Yoshida N, Inoue K, Tomita Y, Kobayashi R, Hashimoto H, Sugino S, Hirose R, Dohi O, Yasuda H, Morinaga Y, Inada Y, Murakami T, Zhu X, Itoh Y. An analysis about the function of a new artificial intelligence, CAD EYE with the lesion recognition and diagnosis for colorectal polyps in clinical practice. Int J Colorectal Dis 2021; 36:2237-2245. [PMID: 34406437] [DOI: 10.1007/s00384-021-04006-5]
Abstract
OBJECTIVES Recently, CAD EYE (Fujifilm, Tokyo, Japan), an artificial intelligence system for lesion recognition (CADe) and optical diagnosis (CADx) of colorectal polyps, was released. We evaluated the CADe and CADx functions of CAD EYE. METHODS In this single-center retrospective study, we examined consecutive polyps ≤ 10 mm detected from March to April 2021 to determine whether CAD EYE could recognize them live, with both normal- and high-speed observation, using white-light imaging (WLI) and linked-color imaging (LCI). We then assessed live whether the polyps were neoplastic or hyperplastic with magnified or non-magnified blue-laser imaging (BLI-LASER) or blue-light imaging (BLI-LED) under CAD EYE, and compared these results with retrospective evaluations of still images by 5 experts and 5 trainees. All polyps were histopathologically examined. RESULTS We analyzed 100 polyps (mean size 3.9 ± 2.6 mm; 55 neoplastic and 45 hyperplastic lesions) in 25 patients. Regarding CADe, the detection rates of CAD EYE with normal- and high-speed observation were 85.0% and 67.0% for WLI (p = 0.002) and 89.0% and 75.0% for LCI (p = 0.009), respectively. Regarding CADx for differentiating neoplastic from hyperplastic lesions, the diagnostic accuracies of CAD EYE with non-magnified and magnified BLI-LASER/LED were 88.8% and 87.8%, respectively. With magnified BLI-LASER/LED, the diagnostic accuracy of CAD EYE was not significantly different from that of the experts (92.0%, p = 0.17) but was significantly higher than that of the trainees (79.0%, p = 0.04). We found no significant differences in CADe or CADx between LED (53 lesions) and LASER (47 lesions) systems. CONCLUSIONS CAD EYE was a helpful tool for CADe and CADx in clinical practice.
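Significance tests for detection-rate differences like those quoted above can be approximated with a pooled two-proportion z-test. A minimal sketch, under the simplifying assumption of independent proportions (the study compared paired observations of the same polyps, so this is only illustrative, not the paper's exact test):

```python
import math

def two_proportion_p(x1, n1, x2, n2):
    """Two-sided p-value for a difference between two detection rates
    (pooled two-proportion z-test, normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# The WLI rates above (85/100 vs 67/100), treated as independent samples:
print(two_proportion_p(85, 100, 67, 100))
```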
Affiliation(s)
- Naohisa Yoshida: Department of Molecular Gastroenterology and Hepatology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, 465 Kajii-cho, Kawaramachi-Hirokoji, Kamigyo-ku, Kyoto, 602-8566, Japan
- Ken Inoue: Department of Molecular Gastroenterology and Hepatology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Yuri Tomita: Department of Molecular Gastroenterology and Hepatology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Reo Kobayashi: Department of Molecular Gastroenterology and Hepatology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Hikaru Hashimoto: Department of Molecular Gastroenterology and Hepatology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Satoshi Sugino: Department of Molecular Gastroenterology and Hepatology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Ryohei Hirose: Department of Molecular Gastroenterology and Hepatology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Osamu Dohi: Department of Molecular Gastroenterology and Hepatology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Hiroaki Yasuda: Department of Molecular Gastroenterology and Hepatology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Yukiko Morinaga: Department of Surgical Pathology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Yutaka Inada: Department of Gastroenterology, Kyoto First Red Cross Hospital, Kyoto, Japan
- Takaaki Murakami: Department of Gastroenterology, Aiseikai Yamashina Hospital, Kyoto, Japan
- Xin Zhu: Biomedical Information Engineering Lab, The University of Aizu, Fukushima, Japan
- Yoshito Itoh: Department of Molecular Gastroenterology and Hepatology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
14
Vivar G, Kazi A, Burwinkel H, Zwergal A, Navab N, Ahmadi SA. Simultaneous imputation and classification using Multigraph Geometric Matrix Completion (MGMC): Application to neurodegenerative disease classification. Artif Intell Med 2021; 117:102097. [PMID: 34127236] [DOI: 10.1016/j.artmed.2021.102097]
Abstract
Large-scale population-based studies in medicine are a key resource towards better diagnosis, monitoring, and treatment of diseases. They also serve as enablers of clinical decision support systems, in particular computer-aided diagnosis (CADx) using machine learning (ML). Numerous ML approaches for CADx have been proposed in the literature. However, these approaches assume feature-complete data, which is often not the case in clinical practice. To account for missing data, incomplete samples are either removed or imputed, which can introduce bias and may degrade classification performance. As a solution, we propose end-to-end learning of imputation and disease prediction on incomplete medical datasets via Multi-graph Geometric Matrix Completion (MGMC). MGMC uses multiple recurrent graph convolutional networks, where each graph represents an independent population model based on a key clinical meta-feature such as age, sex, or cognitive function. Graph signal aggregation from local patient neighborhoods, combined with multi-graph signal fusion via self-attention, has a regularizing effect on both matrix reconstruction and classification performance. The proposed approach imputes class-relevant features and performs accurate, robust classification on two publicly available medical datasets, and we empirically show its superiority in classification and imputation performance compared with state-of-the-art methods. MGMC enables disease prediction in multimodal and incomplete medical datasets; these findings could serve as a baseline for future CADx approaches that utilize incomplete datasets.
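For context, the simplest alternative to learned imputation is column-wise mean imputation, the naive baseline that graph-based approaches like MGMC aim to improve on. A minimal sketch (this is not the paper's method, only the baseline it is contrasted against):

```python
def mean_impute(rows):
    """Replace each missing value (None) with its column mean,
    computed over the observed entries of that column."""
    cols = list(zip(*rows))
    means = [
        sum(v for v in col if v is not None) / sum(v is not None for v in col)
        for col in cols
    ]
    return [
        [means[j] if v is None else v for j, v in enumerate(row)]
        for row in rows
    ]

print(mean_impute([[1.0, None], [3.0, 4.0]]))
```

Mean imputation ignores relationships between patients; MGMC's premise is that borrowing signal from similar patients (neighbors in the population graphs) recovers missing features more faithfully.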
Affiliation(s)
- Gerome Vivar: Department of Computer Aided Medical Procedures (CAMP), Technical University of Munich (TUM), Boltzmannstr. 3, 85748 Garching, Germany; German Center for Vertigo and Balance Disorders (DSGZ), Ludwig-Maximilians University (LMU), Fraunhoferstr. 20, 82152, Planegg, Germany
- Anees Kazi: Department of Computer Aided Medical Procedures (CAMP), Technical University of Munich (TUM), Boltzmannstr. 3, 85748 Garching, Germany
- Hendrik Burwinkel: Department of Computer Aided Medical Procedures (CAMP), Technical University of Munich (TUM), Boltzmannstr. 3, 85748 Garching, Germany
- Andreas Zwergal: German Center for Vertigo and Balance Disorders (DSGZ), Ludwig-Maximilians University (LMU), Fraunhoferstr. 20, 82152, Planegg, Germany
- Nassir Navab: Department of Computer Aided Medical Procedures (CAMP), Technical University of Munich (TUM), Boltzmannstr. 3, 85748 Garching, Germany
- Seyed-Ahmad Ahmadi: Department of Computer Aided Medical Procedures (CAMP), Technical University of Munich (TUM), Boltzmannstr. 3, 85748 Garching, Germany; German Center for Vertigo and Balance Disorders (DSGZ), Ludwig-Maximilians University (LMU), Fraunhoferstr. 20, 82152, Planegg, Germany
15
Calheiros JLL, de Amorim LBV, de Lima LL, de Lima Filho AF, Ferreira Júnior JR, de Oliveira MC. The Effects of Perinodular Features on Solid Lung Nodule Classification. J Digit Imaging 2021; 34:798-810. [PMID: 33791910] [DOI: 10.1007/s10278-021-00453-2]
Abstract
Lung cancer is the most lethal malignant neoplasm worldwide, with an estimated 1.8 million deaths annually. Computed tomography is widely used to detect and diagnose lung cancer, but diagnosis remains intricate and challenging, even for experienced radiologists. Computer-aided diagnosis and radiomics tools support the radiologist's decision, acting as a second opinion. These tools have focused mainly on the intranodular zone; nevertheless, recent work indicates that the interaction between the nodule and its surroundings (the perinodular zone) can be relevant to diagnosis. However, only a few studies have investigated specific attributes of the perinodular zone and shown their importance in the classification of lung nodules. In this context, the purpose of this work is to evaluate the impact of using the perinodular zone in the characterization of lung lesions. Motivated by reproducible research, we used a large public dataset of solid lung nodule images and extracted fine-tuned radiomic attributes from the perinodular and intranodular zones. Our best-evaluated model obtained an average AUC of 0.916, an accuracy of 84.26%, a sensitivity of 84.45%, and a specificity of 83.84%. Combining attributes from the perinodular and intranodular zones improved every metric analyzed relative to intranodular-only characterization. These results highlight the importance of the perinodular zone in classifying solid pulmonary nodules.
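The reported accuracy, sensitivity, and specificity all derive from the same confusion-matrix counts. A minimal sketch with invented counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # fraction of malignant nodules caught
    specificity = tn / (tn + fp)   # fraction of benign nodules cleared
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical: 84 of 100 malignant and 84 of 100 benign classified correctly.
print(diagnostic_metrics(tp=84, fp=16, tn=84, fn=16))
```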
Affiliation(s)
- Lucas Lins de Lima: Computing Institute, Federal University of Alagoas (UFAL), Maceió, AL, Brazil
16
Sumiyama K, Futakuchi T, Kamba S, Matsui H, Tamai N. Artificial intelligence in endoscopy: Present and future perspectives. Dig Endosc 2021; 33:218-230. [PMID: 32935376] [DOI: 10.1111/den.13837]
Abstract
Artificial intelligence (AI) has been attracting considerable attention as an important scientific topic in medicine. Deep-learning (DL) technologies have been applied more widely than other traditional machine-learning methods. They have demonstrated an excellent capability to extract visual features of objects, even ones unnoticeable to humans, and to analyze huge amounts of information within short periods. The amount of research applying DL-based models to real-time computer-aided diagnosis (CAD) systems has been increasing steadily in the GI endoscopy field. An array of published data has already demonstrated the advantages of DL-based CAD models in the detection and characterization of various neoplastic lesions at every level of the GI tract. Although diagnostic performances and study designs vary widely, owing to a lack of academic standards for fairly assessing the capability of AI for GI endoscopic diagnosis, the superiority of CAD models has been demonstrated for almost all applications studied so far. Most of the challenges associated with AI in endoscopy are general problems for AI models deployed in the real world outside of medicine; solutions are being explored seriously, and some have already been tested in the endoscopy field. Given that AI has become the basic technology for making machines react to their environment, it could represent a major technological paradigm shift, for not only diagnosis but also treatment. In the near future, autonomous endoscopic diagnosis may no longer be just a dream, much as we are witnessing with the advent of autonomously driven electric vehicles.
Affiliation(s)
- Kazuki Sumiyama: Department of Endoscopy, The Jikei University School of Medicine, Tokyo, Japan
- Toshiki Futakuchi: Department of Endoscopy, The Jikei University School of Medicine, Tokyo, Japan
- Shunsuke Kamba: Department of Endoscopy, The Jikei University School of Medicine, Tokyo, Japan
- Hiroaki Matsui: Department of Endoscopy, The Jikei University School of Medicine, Tokyo, Japan
- Naoto Tamai: Department of Endoscopy, The Jikei University School of Medicine, Tokyo, Japan
17
Arzehgar A, Khalilzadeh MM, Varshoei F. Assessment and Classification of Mass Lesions Based on Expert Knowledge Using Mammographic Analysis. Curr Med Imaging 2020; 15:199-208. [PMID: 31975666] [DOI: 10.2174/1573405614666171213161559]
Abstract
BACKGROUND Masses are among the most important indicators of breast cancer in mammograms, and their classification as benign or malignant is highly necessary. Computer-aided diagnosis (CADx) helps radiologists enhance the accuracy of their decisions; hence, such a system should support, and be assessed through, interaction with the radiologist as an expert. METHODS In this research, classification of breast masses on mammography in the two main views, MLO and CC, is evaluated with respect to shape, texture, and asymmetry. Additionally, a method is proposed for classifying breast tissue density based on a decision tree. DISCUSSION This study therefore aims to provide a method based on the human decision-making model that helps in designing a practical tool for radiologists, avoiding computational complexity and costly procedures while reducing diagnostic error. CONCLUSION Results show that the proposed system achieved 100, 99, 99, and 98% true malignant rates for entirely fatty, scattered fibroglandular, heterogeneously dense, and extremely dense breasts, respectively, under a cross-validation procedure.
Affiliation(s)
- Afrooz Arzehgar: Department of Biomedical Engineering, Islamic Azad University, Mashhad Branch, Mashhad, Iran
18
Lee J, Nishikawa RM, Reiser I, Boone JM. Neutrosophic segmentation of breast lesions for dedicated breast computed tomography. J Med Imaging (Bellingham) 2018; 5:014505. [PMID: 29541650] [PMCID: PMC5839418] [DOI: 10.1117/1.jmi.5.1.014505]
Abstract
We propose a neutrosophic approach for segmenting breast lesions in dedicated breast computed tomography (bCT) images. The neutrosophic set considers the nature and properties of neutrality (or indeterminacy): we treated image noise as the indeterminate component, and the breast lesion and other breast areas as the true and false components. We iteratively smoothed and contrast-enhanced the image to reduce the noise level of the true set, then applied an existing algorithm for bCT images, RGI segmentation, on the resulting noise-reduced image to segment the breast lesions. We compared the segmentation performance of the proposed method (termed NS-RGI) to that of the regular RGI segmentation on 122 breast lesions (44 benign and 78 malignant) from 111 non-contrast-enhanced bCT cases, measuring performance with the Dice coefficient. The average Dice values of the NS-RGI and RGI were 0.82 and 0.80, respectively, and their difference was statistically significant ([Formula: see text]). A subsequent feature analysis on the resulting segmentations showed that classifier performance for the NS-RGI ([Formula: see text]) improved over that of the RGI ([Formula: see text], [Formula: see text]).
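The Dice coefficient used to score the segmentations can be computed directly from two sets of segmented pixel coordinates. A minimal sketch with toy masks:

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two segmentations, each given as a set
    of pixel coordinates: 2|A ∩ B| / (|A| + |B|); 1.0 for identical masks."""
    intersection = len(mask_a & mask_b)
    return 2 * intersection / (len(mask_a) + len(mask_b))

# Two 4-pixel toy masks that overlap in 2 pixels:
print(dice({(0, 0), (0, 1), (1, 0), (1, 1)},
           {(1, 0), (1, 1), (2, 0), (2, 1)}))
```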
Affiliation(s)
- Juhun Lee: University of Pittsburgh, Department of Radiology, Pittsburgh, Pennsylvania, United States
- Robert M. Nishikawa: University of Pittsburgh, Department of Radiology, Pittsburgh, Pennsylvania, United States
- Ingrid Reiser: University of Chicago, Department of Radiology, Chicago, Illinois, United States
- John M. Boone: University of California Davis Medical Center, Department of Radiology, Sacramento, California, United States
19
Alilou M, Beig N, Orooji M, Rajiah P, Velcheti V, Rakshit S, Reddy N, Yang M, Jacono F, Gilkeson RC, Linden P, Madabhushi A. An integrated segmentation and shape-based classification scheme for distinguishing adenocarcinomas from granulomas on lung CT. Med Phys 2017; 44:3556-3569. [PMID: 28295386] [DOI: 10.1002/mp.12208]
Abstract
PURPOSE Distinguishing between benign granulomas and adenocarcinomas is confounded by their similar visual appearance on routine CT scans. Owing to the inability to discriminate these lesions radiographically, many patients with benign granulomas are subjected to unnecessary surgical wedge resections and biopsies for pathologic confirmation of cancer presence or absence. This suggests the need for improved computerized characterization of these nodules on CT scans. While there has been substantial interest in textural analysis for radiomic characterization of lung nodules, relatively little work has addressed shape-based characterization, particularly for granulomas and adenocarcinomas. The primary goal of this study is to evaluate the role of 3D shape features for discriminating benign granulomas from malignant adenocarcinomas on lung CT images. Towards this end, we present an integrated framework for segmentation, feature characterization, and classification of these nodules on CT. METHODS The nodule segmentation method starts with separation of the lung regions from the surrounding anatomy. Next, the lung CT scans are projected into and represented in a three-dimensional spectral embedding (SE) space, allowing for better determination of nodule boundaries and enabling the application of a gradient vector flow active contour (SEGvAC) model for boundary extraction. A set of 24 shape features from both 2D slices and the 3D surface of the segmented nodules is extracted, including features pertaining to angularity, spiculation, elongation, and nodule compactness. A feature selection scheme, PCA-VIP, is employed to identify the most discriminating features for distinguishing granulomas from adenocarcinomas within a learning set of 82 patients. The features thus identified were combined with a support vector machine classifier and independently validated on a distinct test set of 67 patients. Classifier performance for both cohorts was evaluated by the area under the receiver operating characteristic (ROC) curve. RESULTS We used 82 and 67 studies from two different institutions for training and independent validation, respectively. The Dice coefficient between nodules segmented automatically by SEGvAC and manual delineations by expert radiologists (readers) was 0.84 ± 0.04, whereas inter-reader segmentation agreement was 0.79 ± 0.12. We also identified a set of features (roughness, convexity, and sphericity) that were strongly correlated across both manual and automated nodule segmentations (R > 0.80, p < 0.0001) and that capture the marginal smoothness and 3D compactness of the nodules. On the independent validation set of 67 studies, our classifier yielded ROC AUCs of 0.72 and 0.64 for manually and automatically segmented nodules, respectively. On a subset of 20 studies, the AUCs for two expert radiologists and one pulmonologist were 0.82, 0.68, and 0.58, respectively. CONCLUSIONS The major finding of this study is that certain shape features are differentially expressed between granulomas and adenocarcinomas, so computer-extracted shape cues could be used to distinguish these radiographically similar pathologies.
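Of the shape cues named above, sphericity has a standard closed form, π^(1/3)·(6V)^(2/3)/A, which equals 1 for a perfect sphere and drops for spiculated shapes. A minimal sketch (illustrative; not necessarily the exact definition used in the paper):

```python
import math

def sphericity(volume, surface_area):
    """3D compactness of a nodule: 1.0 for a perfect sphere,
    lower for spiculated or irregular shapes."""
    return (math.pi ** (1 / 3)) * ((6 * volume) ** (2 / 3)) / surface_area

# Sanity check on a unit sphere (V = 4π/3, A = 4π) -> exactly 1:
print(sphericity(4 * math.pi / 3, 4 * math.pi))
```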
Affiliation(s)
- Mehdi Alilou: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, 44106, USA
- Niha Beig: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, 44106, USA
- Mahdi Orooji: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, 44106, USA
- Prabhakar Rajiah: Department of Radiology, University of Texas Southwestern Medical Centre, Dallas, TX, 75390, USA
- Sagar Rakshit: Taussig Cancer Institute, Cleveland Clinic, Cleveland, OH, 44195, USA
- Niyoti Reddy: Taussig Cancer Institute, Cleveland Clinic, Cleveland, OH, 44195, USA
- Michael Yang: Department of Pathology, University Hospital Cleveland Medical Center, Cleveland, OH, 44106, USA
- Frank Jacono: Division of Pulmonology and Critical Care, Louis Stokes Cleveland VA Medical Center, Cleveland, OH, 44106, USA
- Robert C Gilkeson: Department of Radiology, University Hospital Cleveland Medical Center, Cleveland, OH, 44106, USA
- Philip Linden: Division of Thoracic and Esophageal Surgery, University Hospital Cleveland Medical Center, Cleveland, OH, 44106, USA
- Anant Madabhushi: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, 44106, USA
20
Lee J, Nishikawa RM, Reiser I, Boone JM. Optimal reconstruction and quantitative image features for computer-aided diagnosis tools for breast CT. Med Phys 2017; 44:1846-1856. [PMID: 28295405] [DOI: 10.1002/mp.12214]
Abstract
PURPOSE The purpose of this study is to determine the optimal representative reconstruction and quantitative image feature set for a computer-aided diagnosis (CADx) scheme for dedicated breast computed tomography (bCT). METHODS We used 93 bCT scans containing 102 breast lesions (62 malignant, 40 benign). Using an iterative image reconstruction (IIR) algorithm, we created 37 reconstructions with different image appearances for each case and added a clinical reconstruction for comparison. We used image sharpness, determined by the gradient of gray values in a parenchymal portion of the reconstructed breast, as a surrogate measure of image quality/appearance for the 38 reconstructions. After segmentation of the breast lesion, we extracted 23 quantitative image features. Using leave-one-out cross-validation (LOOCV), we conducted feature selection, classifier training, and testing with a linear discriminant analysis classifier, and then selected the representative reconstruction and feature set yielding the best diagnostic performance across all reconstructions and feature sets. Finally, we conducted an observer study with six radiologists on a subset of breast lesions (N = 50), comparing the diagnostic performance of the trained classifier to that of the radiologists using 1000 bootstrap samples. RESULTS The diagnostic performance of the trained classifier increased as the image sharpness of a given reconstruction increased. Among all combinations of reconstructions and quantitative image feature sets, we selected one of the sharp reconstructions and the three feature sets with the highest diagnostic performance under LOOCV as the representative reconstruction and feature set. On this reconstruction and feature set, the classifier achieved better diagnostic performance, with an area under the ROC curve (AUC) of 0.94 (95% CI = [0.81, 0.98]), than the radiologists, whose maximum AUC was 0.78 (95% CI = [0.63, 0.90]). Moreover, the partial AUC of the classifier at 90% sensitivity or higher (pAUC = 0.085, 95% CI = [0.063, 0.094]) was statistically better (P-value < 0.0001) than those of the radiologists (maximum pAUC = 0.009, 95% CI = [0.003, 0.024]). CONCLUSION Image sharpness can be a good candidate measure for estimating the diagnostic performance of a given CADx algorithm, and there exists a reconstruction (i.e., a sharp reconstruction) and a feature set that maximize the diagnostic performance of a CADx algorithm. On this optimal representative reconstruction and feature set, the CADx algorithm outperformed radiologists.
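The AUCs compared above have a direct probabilistic reading: the chance that a randomly chosen malignant lesion scores higher than a randomly chosen benign one. A minimal sketch of that rank-based (Mann-Whitney) computation, with toy classifier outputs:

```python
def auc(malignant_scores, benign_scores):
    """Empirical AUC via the Mann-Whitney statistic: the fraction of
    (malignant, benign) score pairs ranked correctly, counting ties as half."""
    wins = 0.0
    for m in malignant_scores:
        for b in benign_scores:
            if m > b:
                wins += 1.0
            elif m == b:
                wins += 0.5
    return wins / (len(malignant_scores) * len(benign_scores))

# Toy classifier outputs for 3 malignant and 3 benign lesions:
print(auc([0.9, 0.8, 0.4], [0.5, 0.3, 0.2]))
```

Partial AUC restricts the same integral to a clinically relevant operating region (here, sensitivity ≥ 90%), which is why the pAUC values above are so much smaller than the full AUCs.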
Affiliation(s)
- Juhun Lee: Department of Radiology, University of Pittsburgh, Pittsburgh, PA, 15213, USA
- Robert M Nishikawa: Department of Radiology, University of Pittsburgh, Pittsburgh, PA, 15213, USA
- Ingrid Reiser: Department of Radiology, The University of Chicago, Chicago, IL, 60637, USA
- John M Boone: Department of Radiology, University of California Davis Medical Center, Sacramento, CA, 95817, USA
21
Zhou Y, Teomete U, Dandin O, Osman O, Dandinoglu T, Bagci U, Zhao W. Computer-Aided Detection (CADx) for Plastic Deformation Fractures in Pediatric Forearm. Comput Biol Med 2016; 78:120-125. [PMID: 27684324] [DOI: 10.1016/j.compbiomed.2016.09.013]
Abstract
Bowing fractures are incomplete fractures of tubular long bones, often observed in pediatric patients, for whom plain radiography is the non-invasive imaging modality of choice in routine radiological workflow. Because of the weak association between a bent bone and a distinct cortex disruption, bowing fractures may not be diagnosed properly when reading plain radiographs. Missed fractures and dislocations are common in accident and emergency practice, particularly in children, and can result in more complicated treatment or even long-term disability. The most common reason for missed fractures is that junior radiologists or physicians lack expertise in diagnosing pediatric skeletal injury. In the case of misdiagnosis, additional radiation exposure is inevitable, and other consequences include prolonged patient discomfort and possibly unnecessary surgical procedures. Therefore, a computerized image analysis system, secondary to the radiologists' interpretations, could reduce adverse effects and improve diagnostic rates for bowing fracture detection and quantification; such a system would be highly desirable and particularly useful in emergency rooms. To address this need, we developed a new computer-aided detection (CADx) system for pediatric bowing fractures and tested it on 226 cases of pediatric forearms with bowing fractures and normal controls. Receiver operating characteristic (ROC) curves show that the sensitivity and specificity of the developed CADx system are satisfactory and promising. A clinically feasible graphical user interface (GUI) was developed to serve practical needs in the emergency room as a diagnostic reference. The developed CADx system also has strong potential for training radiology residents to diagnose pediatric forearm bowing fractures.
Affiliation(s)
- Yuwei Zhou: Department of Biomedical Engineering, University of Miami, Coral Gables, FL 33146, USA
- Uygar Teomete: Department of Radiology, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Ozgur Dandin: Department of General Surgery, Bursa Military Hospital, Bursa 16800, Turkey
- Onur Osman: Department of Electrical and Electronics Engineering, Istanbul Arel University, Istanbul 34500, Turkey
- Taner Dandinoglu: Department of Physical Medicine and Rehabilitation, Bursa Military Hospital, Bursa 16800, Turkey
- Ulas Bagci: Center for Research in Computer Vision, University of Central Florida, Orlando, FL 32816, USA
- Weizhao Zhao: Department of Biomedical Engineering, University of Miami, Coral Gables, FL 33146, USA; Department of Radiology, University of Miami Miller School of Medicine, Miami, FL 33136, USA