151
Hameed BMZ, Prerepa G, Patil V, Shekhar P, Zahid Raza S, Karimi H, Paul R, Naik N, Modi S, Vigneswaran G, Prasad Rai B, Chłosta P, Somani BK. Engineering and clinical use of artificial intelligence (AI) with machine learning and data science advancements: radiology leading the way for future. Ther Adv Urol 2021; 13:17562872211044880. [PMID: 34567272] [PMCID: PMC8458681] [DOI: 10.1177/17562872211044880]
Abstract
Over the years, many clinical and engineering methods have been adapted for testing and screening for the presence of disease. The most commonly used imaging methods for diagnosis and analysis are computed tomography (CT) and X-ray imaging. Manual interpretation of these images is the current gold standard, but it is tedious, time-consuming, and subject to human error. Incorporating machine learning (ML) and deep learning (DL) algorithms could expedite the process and improve efficiency and productivity. This article reviews the role of artificial intelligence (AI), its contribution to data science, and the various learning algorithms used in radiology. We analyze and explore potential applications of AI in image interpretation and other radiological advances. Furthermore, we discuss how these concepts are used and implemented, their future in radiology, and their limitations and challenges.
Affiliation(s)
- B M Zeeshan Hameed: Department of Urology, Father Muller Medical College, Mangalore, Karnataka, India
- Gayathri Prerepa: Department of Electronics and Communication, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, India
- Vathsala Patil: Department of Oral Medicine and Radiology, Manipal College of Dental Sciences, Manipal, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India
- Pranav Shekhar: Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, India
- Syed Zahid Raza: Department of Urology, Dr. B.R. Ambedkar Medical College, Bengaluru, Karnataka, India
- Hadis Karimi: Manipal College of Pharmaceutical Sciences, Manipal Academy of Higher Education, Manipal, Karnataka, India
- Rahul Paul: Department of Radiation Oncology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Nithesh Naik: International Training and Research in Uro-oncology and Endourology (iTRUE) Group, Manipal, India
- Sachin Modi: Department of Interventional Radiology, University Hospital Southampton NHS Foundation Trust, Southampton, UK
- Ganesh Vigneswaran: Department of Interventional Radiology, University Hospital Southampton NHS Foundation Trust, Southampton, UK
- Bhavan Prasad Rai: International Training and Research in Uro-oncology and Endourology (iTRUE) Group, Manipal, India
- Piotr Chłosta: Department of Urology, Jagiellonian University in Kraków, Kraków, Poland
- Bhaskar K Somani: International Training and Research in Uro-oncology and Endourology (iTRUE) Group, Manipal, India
152
Alzubaidi L, Duan Y, Al-Dujaili A, Ibraheem IK, Alkenani AH, Santamaría J, Fadhel MA, Al-Shamma O, Zhang J. Deepening into the suitability of using pre-trained models of ImageNet against a lightweight convolutional neural network in medical imaging: an experimental study. PeerJ Comput Sci 2021; 7:e715. [PMID: 34722871] [PMCID: PMC8530098] [DOI: 10.7717/peerj-cs.715]
Abstract
Transfer learning (TL) has been widely utilized to address the lack of training data for deep learning models. Specifically, one of the most popular uses of TL has been pre-trained models of the ImageNet dataset. Nevertheless, although these pre-trained models have performed effectively in several domains of application, they may not offer significant benefits in all medical imaging scenarios. Such models were designed to classify a thousand classes of natural images, and there are fundamental differences between the features they learn and those relevant to medical imaging tasks. Most medical imaging applications involve only two to ten classes, where we suspect that deeper models may not be necessary. This paper investigates this hypothesis through an experimental study. A lightweight convolutional neural network (CNN) model and the pre-trained models were evaluated on three different medical imaging datasets, each trained under two scenarios: once with a small number of images and once with a large number of images. Surprisingly, the lightweight model trained from scratch achieved a more competitive performance than the pre-trained models. More importantly, the lightweight CNN model can be successfully trained and tested using basic computational tools and still provide high-quality results, specifically when using medical imaging datasets.
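To make the paper's premise concrete, here is a minimal PyTorch sketch of the kind of lightweight CNN trained from scratch that the study advocates; the layer widths, input resolution, and 3-class head are illustrative assumptions, not the authors' exact architecture.

```python
# A minimal sketch (assumed layer sizes): a small CNN for a few-class
# medical imaging task, trained from scratch rather than fine-tuned.
import torch
import torch.nn as nn

class LightweightCNN(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global average pooling keeps the head tiny
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

model = LightweightCNN(num_classes=3)
logits = model(torch.randn(8, 3, 224, 224))   # batch of 8 stand-in images
print(logits.shape)                            # torch.Size([8, 3])
```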
Affiliation(s)
- Laith Alzubaidi: School of Computer Science, Queensland University of Technology, Brisbane, Queensland, Australia; AlNidhal Campus, University of Information Technology & Communications, Baghdad, Iraq
- Ye Duan: Faculty of Electrical Engineering & Computer Science, University of Missouri - Columbia, Columbia, Missouri, United States
- Ayad Al-Dujaili: Electrical Engineering Technical College, Middle Technical University, Baghdad, Iraq
- Ibraheem Kasim Ibraheem: Department of Electrical Engineering, College of Engineering, University of Baghdad, Baghdad, Iraq
- Ahmed H. Alkenani: School of Computer Science, Queensland University of Technology, Brisbane, Queensland, Australia; The Australian E-Health Research Centre, CSIRO, Brisbane, Queensland, Australia
- Jose Santamaría: Department of Computer Science, University of Jaén, Jaén, Spain
- Mohammed A. Fadhel: College of Computer Science and Information Technology, University of Sumer, Rafia, Thi Qar, Iraq
- Omran Al-Shamma: AlNidhal Campus, University of Information Technology & Communications, Baghdad, Iraq
- Jinglan Zhang: School of Computer Science, Queensland University of Technology, Brisbane, Queensland, Australia
153
Stollmayer R, Budai BK, Tóth A, Kalina I, Hartmann E, Szoldán P, Bérczi V, Maurovich-Horvat P, Kaposi PN. Diagnosis of focal liver lesions with deep learning-based multi-channel analysis of hepatocyte-specific contrast-enhanced magnetic resonance imaging. World J Gastroenterol 2021; 27:5978-5988. [PMID: 34629814] [PMCID: PMC8475009] [DOI: 10.3748/wjg.v27.i35.5978]
Abstract
BACKGROUND The nature of input data is an essential factor when training neural networks. Research concerning magnetic resonance imaging (MRI)-based diagnosis of liver tumors using deep learning has been advancing rapidly, yet evidence to support the utilization of multi-dimensional and multi-parametric image data is lacking. Due to its higher information content, three-dimensional input should presumably result in higher classification precision. Moreover, differentiation between focal liver lesions (FLLs) is only plausible with simultaneous analysis of multi-sequence MRI images.
AIM To compare the diagnostic efficiency of two-dimensional (2D) and three-dimensional (3D) densely connected convolutional neural networks (DenseNets) for FLLs on multi-sequence MRI.
METHODS We retrospectively collected T2-weighted, gadoxetate disodium-enhanced arterial phase, portal venous phase, and hepatobiliary phase MRI scans from patients with focal nodular hyperplasia (FNH), hepatocellular carcinoma (HCC), or liver metastases (MET). Our search identified 71 FNH, 69 HCC, and 76 MET lesions. After volume registration, the same three most representative axial slices from all sequences were combined into four-channel images to train the 2D-DenseNet264 network. Identical bounding boxes were selected on all scans and stacked into 4D volumes to train the 3D-DenseNet264 model. The test set consisted of 10 tumors from each class. The performance of the models was compared using the area under the receiver operating characteristic curve (AUROC), specificity, sensitivity, positive predictive value (PPV), negative predictive value (NPV), and F1 scores.
RESULTS The average AUROC of the 2D model (0.98) was slightly higher than that of the 3D model (0.94). The mean PPV, sensitivity, NPV, specificity, and F1 scores of the 2D model (0.94, 0.93, 0.97, 0.97, and 0.93) were also superior to those of the 3D model (0.84, 0.83, 0.92, 0.92, and 0.83). The classification metrics for FNH were 0.91, 1.00, 1.00, 0.95, and 0.95 with the 2D model and 0.90, 0.90, 0.95, 0.95, and 0.90 with the 3D model. The 2D and 3D networks' performance in the diagnosis of HCC was 1.00, 0.80, 0.91, 1.00, and 0.89 and 0.88, 0.70, 0.86, 0.95, and 0.78, respectively, while the evaluation of MET lesions yielded 0.91, 1.00, 1.00, 0.95, and 0.95 and 0.75, 0.90, 0.94, 0.85, and 0.82 with the 2D and 3D networks, respectively.
CONCLUSION Both 2D and 3D DenseNets can differentiate FNH, HCC, and MET with good accuracy when trained on hepatocyte-specific contrast-enhanced multi-sequence MRI volumes.
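The paper's multi-channel idea can be illustrated with a short sketch: co-registered slices from the four MRI sequences are stacked as the input channels of a single 2D network. Torchvision's DenseNet-121 stands in here for the DenseNet-264 used in the study, and the tensor shapes are illustrative assumptions.

```python
# A sketch of the 4-channel input construction; DenseNet-121 is a stand-in
# for the paper's DenseNet-264, and the 3 classes follow FNH / HCC / MET.
import torch
import torch.nn as nn
from torchvision.models import densenet121

t2w, art, pv, hbp = (torch.randn(1, 224, 224) for _ in range(4))  # co-registered slices
x = torch.stack([t2w, art, pv, hbp], dim=1)    # -> (1, 4, 224, 224)

model = densenet121(num_classes=3)
# widen the stem so it accepts 4 input channels instead of the usual 3
model.features.conv0 = nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3, bias=False)

print(model(x).shape)                           # torch.Size([1, 3])
```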
Affiliation(s)
- Róbert Stollmayer: Department of Radiology, Medical Imaging Centre, Faculty of Medicine, Semmelweis University, Budapest 1083, Hungary
- Bettina K Budai: Department of Radiology, Medical Imaging Centre, Faculty of Medicine, Semmelweis University, Budapest 1083, Hungary
- Ambrus Tóth: Department of Radiology, Medical Imaging Centre, Faculty of Medicine, Semmelweis University, Budapest 1083, Hungary
- Ildikó Kalina: Department of Radiology, Medical Imaging Centre, Faculty of Medicine, Semmelweis University, Budapest 1083, Hungary
- Erika Hartmann: Department of Transplantation and Surgery, Faculty of Medicine, Semmelweis University, Budapest 1082, Hungary
- Péter Szoldán: MedInnoScan Research and Development Ltd., Budapest 1112, Hungary
- Viktor Bérczi: Department of Radiology, Medical Imaging Centre, Faculty of Medicine, Semmelweis University, Budapest 1083, Hungary
- Pál Maurovich-Horvat: Department of Radiology, Medical Imaging Centre, Faculty of Medicine, Semmelweis University, Budapest 1083, Hungary
- Pál N Kaposi: Department of Radiology, Medical Imaging Centre, Faculty of Medicine, Semmelweis University, Budapest 1083, Hungary
154
Czajkowska J, Badura P, Korzekwa S, Płatkowska-Szczerek A, Słowińska M. Deep Learning-Based High-Frequency Ultrasound Skin Image Classification with Multicriteria Model Evaluation. Sensors (Basel) 2021; 21:5846. [PMID: 34502735] [PMCID: PMC8434172] [DOI: 10.3390/s21175846]
Abstract
This study presents the first application of convolutional neural networks to high-frequency ultrasound skin image classification. This type of imaging opens up new opportunities in dermatology, revealing inflammatory diseases such as atopic dermatitis and psoriasis, as well as skin lesions. We collected a database of 631 images of healthy skin and different skin pathologies to train and assess all stages of the methodology. The proposed framework starts with the segmentation of the epidermal layer using a DeepLab v3+ model with a pre-trained Xception backbone. We employ transfer learning to train the segmentation model for two purposes: to extract the region of interest for classification and to prepare the skin layer map for classification confidence estimation. For classification, we train five models in different input data modes and data augmentation setups. We also introduce a classification confidence level to evaluate the deep model's reliability. The measure combines our skin layer map with the heatmap produced by the Grad-CAM technique, which indicates the image regions used by the deep model to make a classification decision. Moreover, we propose a multicriteria model evaluation measure to select the optimal model in terms of classification accuracy, confidence, and test dataset size. The experiments described in the paper show that the DenseNet-201 model fed with the extracted region of interest produces the most reliable and accurate results.
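The confidence measure combines the segmented skin-layer map with a Grad-CAM heatmap. A minimal sketch of one plausible form of that combination, the fraction of heatmap mass falling inside the skin-layer mask, is shown below; the arrays are placeholders and the paper's exact weighting may differ.

```python
# A sketch under stated assumptions: confidence as the share of Grad-CAM
# activation lying inside the segmented skin-layer map.
import numpy as np

def classification_confidence(heatmap: np.ndarray, skin_mask: np.ndarray) -> float:
    """Fraction of total Grad-CAM mass inside the skin-layer mask."""
    heatmap = np.clip(heatmap, 0, None)        # Grad-CAM is non-negative by construction
    total = heatmap.sum()
    return float((heatmap * skin_mask).sum() / total) if total > 0 else 0.0

heatmap = np.random.rand(256, 256)             # stand-in Grad-CAM output
skin_mask = np.zeros((256, 256))
skin_mask[80:120, :] = 1                        # stand-in epidermis map
print(round(classification_confidence(heatmap, skin_mask), 3))
```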
Affiliation(s)
- Joanna Czajkowska: Faculty of Biomedical Engineering, Silesian University of Technology, 41-800 Zabrze, Poland (corresponding author; Tel.: +48-322-774-67)
- Pawel Badura: Faculty of Biomedical Engineering, Silesian University of Technology, 41-800 Zabrze, Poland
- Szymon Korzekwa: Department of Temporomandibular Disorders, Division of Prosthodontics, Poznan University of Medical Sciences, 60-512 Poznań, Poland
- Anna Płatkowska-Szczerek
- Monika Słowińska: Department of Dermatology, Military Institute of Medicine, 01-755 Warszawa, Poland
155
An artificial intelligent framework for prediction of wildlife vehicle collision hotspots based on geographic information systems and multispectral imagery. Ecol Inform 2021. [DOI: 10.1016/j.ecoinf.2021.101291]
156
Grais EM, Wang X, Wang J, Zhao F, Jiang W, Cai Y, Zhang L, Lin Q, Yang H. Analysing wideband absorbance immittance in normal and ears with otitis media with effusion using machine learning. Sci Rep 2021; 11:10643. [PMID: 34017019] [PMCID: PMC8137706] [DOI: 10.1038/s41598-021-89588-4]
Abstract
Wideband Absorbance Immittance (WAI) has been available for more than a decade; however, its clinical use still faces the challenges of limited understanding and poor interpretation of WAI results. This study aimed to develop machine learning (ML) tools to identify the WAI absorbance characteristics across different frequency-pressure regions in the normal middle ear and in ears with otitis media with effusion (OME), so that middle ear conditions can be diagnosed automatically. Data analysis included pre-processing of the WAI data, statistical analysis, classification model development, and extraction of key regions from the 2D frequency-pressure WAI images. The experimental results show that ML tools hold great potential for the automated diagnosis of middle ear diseases from WAI data. The identified key regions in the WAI provide guidance to practitioners for better understanding and interpretation of WAI data and offer the prospect of quick and accurate diagnostic decisions.
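As an illustration only (the paper does not publish its pipeline here), a WAI recording can be treated as a 2D frequency-pressure absorbance matrix, flattened, and fed to a standard classifier whose weight magnitudes then hint at the key regions; the data below are synthetic and the classifier choice is an assumption.

```python
# A sketch under stated assumptions: a flattened frequency x pressure
# absorbance matrix per ear, classified normal vs OME with logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_ears, n_freq, n_press = 200, 16, 11           # assumed grid: 16 frequencies x 11 pressures
X = rng.random((n_ears, n_freq, n_press))        # absorbance values in [0, 1] (synthetic)
y = rng.integers(0, 2, n_ears)                   # 0 = normal, 1 = OME (synthetic labels)

X_tr, X_te, y_tr, y_te = train_test_split(X.reshape(n_ears, -1), y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
# |coefficients| reshaped to the 2D grid hint at key frequency-pressure regions
key_regions = np.abs(clf.coef_).reshape(n_freq, n_press)
print(clf.score(X_te, y_te), key_regions.shape)
```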
Affiliation(s)
- Emad M Grais: Centre for Speech and Language Therapy and Hearing Science, School of Sport and Health Sciences, Cardiff Metropolitan University, Cardiff, CF5 2YB, UK
- Xiaoya Wang: Department of Otolaryngology, Guangzhou Women and Children's Medical Centre, Guangzhou City, Guangdong Province, 510623, China
- Jie Wang: Department of Otolaryngology Head and Neck Surgery, Beijing Tongren Hospital, Beijing, 100730, China; Key Laboratory of Otolaryngology Head and Neck Surgery, Ministry of Education, Beijing Engineering Research Centre of Hearing Technology, Beijing, 100730, China
- Fei Zhao: Centre for Speech and Language Therapy and Hearing Science, School of Sport and Health Sciences, Cardiff Metropolitan University, Cardiff, CF5 2YB, UK
- Wen Jiang: Department of Hearing and Speech Sciences, Xuzhou Medical University, Xuzhou City, Jiangsu Province, 221000, China
- Yuexin Cai: Sun Yat-sen Memorial Hospital, Department of Otolaryngology, Sun Yat-sen University, Guangzhou City, Guangdong Province, 510120, China; Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou City, Guangdong Province, 510120, China
- Lifang Zhang: Department of Otolaryngology Head and Neck Surgery, Beijing Tongren Hospital, Beijing, 100730, China; Key Laboratory of Otolaryngology Head and Neck Surgery, Ministry of Education, Beijing Engineering Research Centre of Hearing Technology, Beijing, 100730, China
- Qingwen Lin: Department of Otolaryngology, Guangzhou Women and Children's Medical Centre, Guangzhou City, Guangdong Province, 510623, China
- Haidi Yang: Sun Yat-sen Memorial Hospital, Department of Otolaryngology, Sun Yat-sen University, Guangzhou City, Guangdong Province, 510120, China; Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou City, Guangdong Province, 510120, China
157
Budd S, Robinson EC, Kainz B. A survey on active learning and human-in-the-loop deep learning for medical image analysis. Med Image Anal 2021; 71:102062. [PMID: 33901992] [DOI: 10.1016/j.media.2021.102062]
Abstract
Fully automatic deep learning has become the state-of-the-art technique for many tasks, including image acquisition, analysis and interpretation, and the extraction of clinically useful information for computer-aided detection, diagnosis, treatment planning, intervention, and therapy. However, the unique challenges posed by medical image analysis suggest that retaining a human end user in any deep learning enabled system will be beneficial. In this review we investigate the role that humans might play in the development and deployment of deep learning enabled diagnostic applications, focusing on techniques that retain significant input from a human end user. Human-in-the-loop computing is an area we see as increasingly important in future research due to the safety-critical nature of working in the medical domain. We evaluate four key areas that we consider vital for deep learning in clinical practice: (1) active learning, to choose the best data to annotate for optimal model performance; (2) interaction with model outputs, using iterative feedback to steer models towards optima for a given prediction and offering meaningful ways to interpret and respond to predictions; (3) practical considerations, covering full-scale applications and the key decisions that need to be made before deployment; (4) future prospects and unanswered questions, i.e., knowledge gaps and related research fields that will benefit human-in-the-loop computing as they evolve. We offer our opinions on the most promising directions of research and how various aspects of each area might be unified towards common goals.
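The survey's first theme, active learning, reduces to a short loop: rank the unlabelled pool by the model's predictive uncertainty and send the most uncertain samples to the human annotator. A minimal sketch follows; the model and pool are toy placeholders, and entropy is just one common acquisition function.

```python
# A sketch of uncertainty-based active learning: annotate the pool samples
# the current model is least sure about (highest predictive entropy).
import torch

def select_for_annotation(model, pool: torch.Tensor, k: int = 8) -> torch.Tensor:
    """Return indices of the k pool samples with the highest predictive entropy."""
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(pool), dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return entropy.topk(k).indices              # most uncertain -> annotate first

# toy usage: a linear "model" over 100 unlabelled 32-dim samples
model = torch.nn.Linear(32, 4)
pool = torch.randn(100, 32)
print(select_for_annotation(model, pool, k=5))
```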
Affiliation(s)
- Samuel Budd: Department of Computing, Imperial College London, UK
- Emma C Robinson
- Bernhard Kainz
158
Borjali A, Chen AF, Bedair HS, Melnic CM, Muratoglu OK, Morid MA, Varadarajan KM. Comparing the performance of a deep convolutional neural network with orthopedic surgeons on the identification of total hip prosthesis design from plain radiographs. Med Phys 2021; 48:2327-2336. [PMID: 33411949] [DOI: 10.1002/mp.14705]
Abstract
PURPOSE A crucial step in preoperative planning for revision total hip replacement (THR) surgery is the accurate identification of the failed implant design, especially if one or more well-fixed/functioning components are to be retained. Manual identification of the implant design from preoperative radiographic images can be time-consuming and inaccurate, which can ultimately lead to increased operating room time, more complex surgery, and increased healthcare costs. METHOD In this study, we present a novel approach to identifying THR femoral implant designs from plain radiographs using a convolutional neural network (CNN). We evaluated a total of 402 radiographs of nine different THR implant designs, including Accolade II (130 radiographs), Corail (89 radiographs), M/L Taper (31 radiographs), Summit (31 radiographs), Anthology (26 radiographs), Versys (26 radiographs), S-ROM (24 radiographs), Taperloc Standard Offset (24 radiographs), and Taperloc High Offset (21 radiographs). We implemented a transfer learning approach and adopted a DenseNet-201 CNN architecture, replacing the final classifier with nine fully connected neurons. Furthermore, we used saliency maps to explain the CNN's decision-making process by visualizing the pixels in a given radiograph that weighed most heavily on the CNN's outcome. We also compared the CNN's performance with that of three board-certified, fellowship-trained orthopedic surgeons. RESULTS The CNN achieved the same or higher performance than at least one of the surgeons in identifying eight of the nine THR implant designs and underperformed all of the surgeons in identifying one design (Anthology). Overall, the CNN achieved a lower Cohen's kappa (0.78) than surgeon 1 (1.00), the same Cohen's kappa as surgeon 2 (0.78), and a slightly higher Cohen's kappa than surgeon 3 (0.76) in identifying all nine THR implant designs. Furthermore, the saliency maps showed that the CNN generally focused on each implant's unique design features to make a decision. The CNN accomplished the identification task in about 0.06 s per radiograph. The surgeons' identification time varied with the method they used: when relying on personal experience, they spent negligible time, but the identification time increased to an average of 8.4 min (standard deviation 6.1 min) per radiograph when they used another method (online search, consulting the orthopedic company representative, or using an image atlas), which occurred in about 17% of cases in the test subset (40 radiographs). CONCLUSIONS CNNs such as the one developed in this study can automatically identify the design of a failed THR femoral implant preoperatively in a fraction of a second, saving time and in some cases improving identification accuracy.
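The transfer-learning setup described in the abstract maps directly to a few lines of PyTorch: load an ImageNet pre-trained DenseNet-201 and replace its final classifier with nine output neurons. The weight enum, the stand-in input tensor, and the vanilla-gradient saliency shown here are assumptions; the authors' saliency formulation and training details may differ.

```python
# A sketch: DenseNet-201 with a 9-neuron head (one per THR implant design),
# plus a simple input-gradient saliency map for explanation.
import torch
import torch.nn as nn
from torchvision.models import densenet201, DenseNet201_Weights

model = densenet201(weights=DenseNet201_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, 9)  # 9 implant designs
model.eval()

x = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in radiograph tensor
model(x).max().backward()                             # gradient of the top logit
saliency = x.grad.abs().max(dim=1).values             # per-pixel importance map
print(saliency.shape)                                  # torch.Size([1, 224, 224])
```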
Affiliation(s)
- Alireza Borjali: Department of Orthopaedic Surgery, Harris Orthopaedics Laboratory, Massachusetts General Hospital, Boston, MA, USA; Department of Orthopaedic Surgery, Harvard Medical School, Boston, MA, USA
- Antonia F Chen: Department of Orthopaedic Surgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Hany S Bedair: Department of Orthopaedics, Massachusetts General Hospital, Boston, MA, USA; Kaplan Joint Center, Department of Orthopedics, Newton-Wellesley Hospital, Newton, MA, USA
- Christopher M Melnic: Department of Orthopaedic Surgery, Harvard Medical School, Boston, MA, USA; Department of Orthopaedics, Massachusetts General Hospital, Boston, MA, USA; Kaplan Joint Center, Department of Orthopedics, Newton-Wellesley Hospital, Newton, MA, USA
- Orhun K Muratoglu: Department of Orthopaedic Surgery, Harris Orthopaedics Laboratory, Massachusetts General Hospital, Boston, MA, USA; Department of Orthopaedic Surgery, Harvard Medical School, Boston, MA, USA
- Mohammad A Morid: Department of Information Systems and Analytics, Santa Clara University Leavey School of Business, Santa Clara, CA, USA
- Kartik M Varadarajan: Department of Orthopaedic Surgery, Harris Orthopaedics Laboratory, Massachusetts General Hospital, Boston, MA, USA; Department of Orthopaedic Surgery, Harvard Medical School, Boston, MA, USA
159
Barragán-Montero A, Javaid U, Valdés G, Nguyen D, Desbordes P, Macq B, Willems S, Vandewinckele L, Holmström M, Löfman F, Michiels S, Souris K, Sterpin E, Lee JA. Artificial intelligence and machine learning for medical imaging: A technology review. Phys Med 2021; 83:242-256. [PMID: 33979715] [PMCID: PMC8184621] [DOI: 10.1016/j.ejmp.2021.04.016]
Abstract
Artificial intelligence (AI) has recently become a very popular buzzword as a consequence of disruptive technical advances and impressive experimental results, notably in the field of image analysis and processing. In medicine, specialties where images are central, such as radiology, pathology, and oncology, have seized the opportunity, and considerable research and development effort has been deployed to transfer the potential of AI to clinical applications. With AI becoming a more mainstream tool for typical medical imaging analysis tasks such as diagnosis, segmentation, and classification, the key to safe and efficient use of clinical AI applications relies, in part, on informed practitioners. The aim of this review is to present the basic technological pillars of AI, together with state-of-the-art machine learning methods and their application to medical imaging, and to discuss new trends and future research directions. This will help the reader understand how AI methods are becoming a ubiquitous tool in any medical image analysis workflow and pave the way for the clinical implementation of AI-based solutions.
Affiliation(s)
- Ana Barragán-Montero: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
- Umair Javaid: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
- Gilmer Valdés: Department of Radiation Oncology, Department of Epidemiology and Biostatistics, University of California, San Francisco, USA
- Dan Nguyen: Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, UT Southwestern Medical Center, USA
- Paul Desbordes: Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), UCLouvain, Belgium
- Benoit Macq: Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), UCLouvain, Belgium
- Siri Willems: ESAT/PSI, KU Leuven, Belgium & MIRC, UZ Leuven, Belgium
- Liesbeth Vandewinckele
- Mats Holmström
- Fredrik Löfman
- Steven Michiels: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
- Kevin Souris: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
- Edmond Sterpin: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium; KU Leuven, Department of Oncology, Laboratory of Experimental Radiotherapy, Belgium
- John A Lee: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
160
Ayana G, Dese K, Choe SW. Transfer Learning in Breast Cancer Diagnoses via Ultrasound Imaging. Cancers (Basel) 2021; 13:738. [PMID: 33578891] [PMCID: PMC7916666] [DOI: 10.3390/cancers13040738]
Abstract
Transfer learning is a machine learning approach that reuses a model developed for one task as the starting point for a model on a target task. The goal of transfer learning is to improve the performance of target learners by transferring the knowledge contained in other, related source domains; as a result, fewer target-domain data are needed to construct target learners. Because of this property, transfer learning techniques are frequently used in ultrasound breast cancer image analyses. In this review, we focus on transfer learning methods applied to ultrasound breast image classification and detection from the perspective of transfer learning approaches, pre-processing, pre-training models, and convolutional neural network (CNN) models. Finally, different works are compared, and challenges as well as outlooks are discussed.
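A minimal sketch of the feature-extraction flavour of transfer learning the review covers: freeze an ImageNet backbone and train only a new classification head for a benign/malignant ultrasound task. The ResNet-50 backbone and 2-class head are illustrative assumptions, not a specific model from the review.

```python
# A sketch under stated assumptions: frozen ImageNet features, trainable head.
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

backbone = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
for p in backbone.parameters():
    p.requires_grad = False                          # keep source-domain features fixed

backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # fresh head, trainable by default
print([n for n, p in backbone.named_parameters() if p.requires_grad])
# -> ['fc.weight', 'fc.bias']
```

Unfreezing deeper layers later (fine-tuning) is the usual next step when the target dataset is large enough, which is one of the trade-offs the review surveys.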
Affiliation(s)
- Gelan Ayana: Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
- Kokeb Dese: School of Biomedical Engineering, Jimma University, Jimma 378, Ethiopia
- Se-woon Choe: Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea; Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
161
Borjali A, Magnéli M, Shin D, Malchau H, Muratoglu OK, Varadarajan KM. Natural language processing with deep learning for medical adverse event detection from free-text medical narratives: A case study of detecting total hip replacement dislocation. Comput Biol Med 2020; 129:104140. [PMID: 33278631] [DOI: 10.1016/j.compbiomed.2020.104140]
Abstract
BACKGROUND Accurate and timely detection of medical adverse events (AEs) from free-text medical narratives can be challenging. Natural language processing (NLP) with deep learning has already shown great potential for analyzing free-text data, but its application to medical AE detection has been limited. METHOD In this study, we developed deep learning based NLP (DL-NLP) models for efficient and accurate hip dislocation AE detection following primary total hip replacement from standard (radiology notes) and non-standard (follow-up telephone notes) free-text medical narratives. We benchmarked these models against traditional machine learning based NLP (ML-NLP) models and also assessed the accuracy of International Classification of Diseases (ICD) and Current Procedural Terminology (CPT) codes in capturing these hip dislocation AEs in a multi-center orthopaedic registry. RESULTS All DL-NLP models outperformed all ML-NLP models, with a convolutional neural network (CNN) model achieving the best overall performance (kappa = 0.97 for radiology notes and kappa = 1.00 for follow-up telephone notes). In contrast, the ICD/CPT codes of patients who sustained a hip dislocation AE were only 75.24% accurate. CONCLUSIONS We demonstrated that a DL-NLP model can be used in large-scale orthopaedic registries for accurate and efficient detection of hip dislocation AEs. The NLP model in this study was developed with data from Epic, the most frequently used electronic medical record (EMR) system in the U.S., and could potentially be implemented in other Epic-based EMR systems to improve AE detection and, consequently, quality of care and patient outcomes.
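A minimal sketch, not the authors' model, of the 1D-convolutional text classifier family the paper benchmarks for AE detection; the vocabulary size, embedding width, and filter sizes are illustrative assumptions.

```python
# A sketch: a text CNN that classifies a tokenized clinical note as
# "dislocation AE" vs "no AE". All hyperparameters are assumed.
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, 32, kernel_size=k) for k in (3, 4, 5)
        )
        self.fc = nn.Linear(32 * 3, num_classes)   # dislocation AE: yes / no

    def forward(self, tokens):                     # tokens: (batch, seq_len)
        x = self.embed(tokens).transpose(1, 2)     # -> (batch, embed_dim, seq_len)
        pooled = [c(x).relu().max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1))

notes = torch.randint(0, 5000, (4, 120))           # 4 stand-in tokenized notes
print(TextCNN()(notes).shape)                      # torch.Size([4, 2])
```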
Affiliation(s)
- Alireza Borjali: Department of Orthopaedic Surgery, Harris Orthopaedics Laboratory, Massachusetts General Hospital, Boston, MA, USA; Department of Orthopaedic Surgery, Harvard Medical School, Boston, MA, USA
- Martin Magnéli: Department of Orthopaedic Surgery, Harris Orthopaedics Laboratory, Massachusetts General Hospital, Boston, MA, USA; Department of Orthopaedic Surgery, Harvard Medical School, Boston, MA, USA; Karolinska Institutet, Department of Clinical Sciences, Danderyd Hospital, Stockholm, Sweden
- David Shin: Department of Orthopaedic Surgery, Harris Orthopaedics Laboratory, Massachusetts General Hospital, Boston, MA, USA
- Henrik Malchau: Department of Orthopaedic Surgery, Harris Orthopaedics Laboratory, Massachusetts General Hospital, Boston, MA, USA; Department of Orthopaedic Surgery, Sahlgrenska University Hospital, Sweden
- Orhun K Muratoglu: Department of Orthopaedic Surgery, Harris Orthopaedics Laboratory, Massachusetts General Hospital, Boston, MA, USA; Department of Orthopaedic Surgery, Harvard Medical School, Boston, MA, USA
- Kartik M Varadarajan: Department of Orthopaedic Surgery, Harris Orthopaedics Laboratory, Massachusetts General Hospital, Boston, MA, USA; Department of Orthopaedic Surgery, Harvard Medical School, Boston, MA, USA
162
Esaki T, Kawashima T. [Supplementing a Web-based Exposure Estimation System with Deep Learning for Automatic Classification of CT Images to Increase the Efficiency of Effective Dose Estimation]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2020; 76:1107-1117. [PMID: 33229840] [DOI: 10.6009/jjrt.2020_jsrt_76.11.1107]
Abstract
PURPOSE Web-based exposure estimation systems are advantageous for estimating exposure doses for computed tomography (CT) scans. However, such systems depend on the imaging conditions of specific slices, and considerable time and effort are needed to select those slices and extract their imaging conditions from the CT volume data. In this study, we used a convolutional neural network (CNN) to automatically classify specific slices from available CT volume data for use by a Web-based exposure estimation system, and we proposed a method to automatically obtain the imaging conditions of these classified slices. The objective was to improve the efficiency of effective dose estimation. METHOD We automatically classified specific slices from CT volume data using two different CNN architectures, VGG16 and Xception. We organized the dataset into 5 categories corresponding to the contents of the specific slices, and also tested a 9-category version in which the slices were supplemented with their adjacent slices. We then automatically obtained the imaging conditions from the DICOM tags of the specific slices classified by the CNN and estimated the effective dose with the Web-based exposure estimation system. RESULTS With the 5-category approach, the error in the estimated effective dose was 13% for VGG16 and 6% for Xception; with the 9-category approach, it was 0.8% for VGG16 and 0.6% for Xception. For both architectures, classifying the specific slices and extracting their imaging conditions took less than 5 minutes, with VGG16 requiring the shorter processing time. CONCLUSION By supplementing a Web-based exposure estimation system with a CNN and adopting our proposed method, we were able to improve the efficiency of effective dose estimation.
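The tag-extraction step maps naturally onto pydicom, although the paper does not name its tooling; a minimal sketch, with a placeholder file path and a representative set of standard CT acquisition tags:

```python
# A sketch under stated assumptions: read acquisition parameters from the
# DICOM tags of a slice the CNN has classified. The path is a placeholder;
# the tag keywords are standard DICOM keywords.
from pydicom import dcmread

ds = dcmread("classified_slice.dcm")                # slice selected by the CNN
conditions = {
    "kVp": ds.get("KVP"),
    "tube_current_mA": ds.get("XRayTubeCurrent"),
    "exposure_time_ms": ds.get("ExposureTime"),
    "slice_thickness_mm": ds.get("SliceThickness"),
    "ctdi_vol": ds.get("CTDIvol"),
}
print(conditions)                                    # inputs for the dose estimator
```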
Affiliation(s)
- Toru Esaki: Department of Radiologic Technology, Jichi Medical University Hospital
- Tomoaki Kawashima: Department of Radiologic Technology, Jichi Medical University Hospital