151
Burgos N, Bottani S, Faouzi J, Thibeau-Sutre E, Colliot O. Deep learning for brain disorders: from data processing to disease treatment. Brief Bioinform 2020; 22:1560-1576. [PMID: 33316030] [DOI: 10.1093/bib/bbaa310]
Abstract
To reach precision medicine and improve patients' quality of life, machine learning is increasingly used in medicine. Brain disorders are often complex and heterogeneous, and several modalities, such as demographic, clinical, imaging, genetic and environmental data, have been studied to improve their understanding. Deep learning, a subfield of machine learning, provides complex algorithms that can learn from such varied data. It has become state of the art in numerous fields, including computer vision and natural language processing, and is increasingly applied in medicine. In this article, we review the use of deep learning for brain disorders. More specifically, we identify the main applications, the disorders concerned, and the types of architectures and data used. Finally, we provide guidelines to bridge the gap between research studies and clinical routine.
152
A deep learning system that generates quantitative CT reports for diagnosing pulmonary tuberculosis. Appl Intell 2020. [DOI: 10.1007/s10489-020-02051-1]
Abstract
The purpose of this study was to establish and validate a new deep learning system that generates quantitative computed tomography (CT) reports for the diagnosis of pulmonary tuberculosis (PTB) in the clinic. 501 CT imaging datasets were collected from 223 patients with active PTB, while another 501 datasets, which served as negative samples, were collected from a healthy population. All the PTB datasets were labeled and classified manually by professional radiologists. Then, four state-of-the-art 3D convolutional neural network (CNN) models were trained and evaluated on the PTB CT images. The best model was selected to annotate the spatial location of lesions and classify them into miliary, infiltrative, caseous, tuberculoma, and cavitary types. The Noisy-Or Bayesian function was used to generate an overall infection probability for each case. The results showed that the recall and precision rates of detection, from the perspective of a single lesion region of PTB, were 85.9% and 89.2%, respectively. The overall recall and precision rates of detection, from the perspective of one PTB case, were 98.7% and 93.7%, respectively. Moreover, the precision rate of type classification of the PTB lesions was 90.9%. Finally, a quantitative diagnostic report of PTB was generated, including the infection probability and the locations and types of the lesions. This new method might serve as an effective reference for clinical decision making.
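The Noisy-Or aggregation of per-lesion probabilities into a case-level infection probability can be sketched as follows (a minimal illustration; the function name is ours, and the paper's exact parameterization may differ):

```python
def noisy_or(lesion_probs):
    """Noisy-Or combination: the case is flagged as infected unless every
    detected lesion independently fails to indicate infection.
    P(case) = 1 - prod_i (1 - p_i)."""
    complement = 1.0
    for p in lesion_probs:
        complement *= (1.0 - p)
    return 1.0 - complement
```

For example, three lesions with probabilities 0.5, 0.4 and 0.2 yield an overall probability of 1 - 0.5 × 0.6 × 0.8 = 0.76, and any single near-certain lesion dominates the result.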
153
Image Annotation by Eye Tracking: Accuracy and Precision of Centerlines of Obstructed Small-Bowel Segments Placed Using Eye Trackers. J Digit Imaging 2020; 32:855-864. [PMID: 31144146] [DOI: 10.1007/s10278-018-0169-5]
Abstract
Small-bowel obstruction (SBO) is a common and important disease, for which machine learning tools have yet to be developed. Image annotation is a critical first step for development of such tools. This study assesses whether image annotation by eye tracking is sufficiently accurate and precise to serve as a first step in the development of machine learning tools for detection of SBO on CT. Seven subjects diagnosed with SBO by CT were included in the study. For each subject, an obstructed segment of bowel was chosen. Three observers annotated the centerline of the segment by manual fiducial placement and by visual fiducial placement using a Tobii 4c eye tracker. Each annotation was repeated three times. The distance between centerlines was calculated after alignment using dynamic time warping (DTW) and statistically compared to clinical thresholds for diagnosis of SBO. Intra-observer DTW distance between manual and visual centerlines was calculated as a measure of accuracy. These distances were 1.1 ± 0.2, 1.3 ± 0.4, and 1.8 ± 0.2 cm for the three observers and were less than 1.5 cm for two of three observers (P < 0.01). Intra- and inter-observer DTW distances between centerlines placed with each method were calculated as measures of precision. These distances were 0.6 ± 0.1 and 0.8 ± 0.2 cm for manual centerlines, 1.1 ± 0.4 and 1.9 ± 0.6 cm for visual centerlines, and were less than 3.0 cm in all cases (P < 0.01). Results suggest that eye tracking-based annotation is sufficiently accurate and precise for small-bowel centerline annotation for use in machine learning-based applications.
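The dynamic-time-warping (DTW) alignment used above to compare centerlines can be sketched with the standard library (an illustration only; normalizing the alignment cost by the longer sequence length is our assumption, and the study's exact distance definition may differ):

```python
import math

def dtw_distance(a, b):
    """DTW alignment between two centerlines, each a sequence of (x, y, z)
    points; returns the mean per-step Euclidean distance along the
    optimal warping path."""
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])
            # Each step may advance one curve, the other, or both.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m] / max(n, m)
```

Identical centerlines give a distance of zero, and DTW tolerates the two sequences having different numbers of fiducials, which is why alignment is needed before comparing manual and eye-tracked annotations.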
154
Ouyang N, Wang W, Ma L, Wang Y, Chen Q, Yang S, Xie J, Su S, Cheng Y, Cheng Q, Zheng L, Yuan Y. Diagnosing acute promyelocytic leukemia by using convolutional neural network. Clin Chim Acta 2020; 512:1-6. [PMID: 33159948] [DOI: 10.1016/j.cca.2020.10.039]
Abstract
PURPOSE To evaluate the efficacy of diagnosis systems based upon instance segmentation with convolutional neural networks (CNNs) for diagnosing acute promyelocytic leukemia (APL) in bone marrow smear images. MATERIALS AND METHODS A self-established dataset, exempted from review by the institutional review board, was used in this study; it consisted of 13,504 bone marrow smear images. One subset of 12,215 labeled images was split into training (80%) and validation (20%) sets; another subset of 1289 labeled images was used for testing, with each test entry consisting of about 130 images. An instance segmentation method, Mask R-CNN, was used to detect and classify the nucleated cells. One network was trained from scratch; for comparison, another was pre-trained on MS COCO (Common Objects in Context, an image-recognition dataset provided by Microsoft, divided into training, validation and test sets) and fine-tuned on our dataset. Both were trained with the same data augmentation scheme. Diagnosis systems based on the trained models and the FAB classification (the French-American-British classification system, a series of diagnostic criteria for acute leukemia first proposed in 1976) were developed to diagnose each test entry as APL or non-APL. Average precision (AP) and average recall (AR) were used to evaluate model performance. RESULTS The best-performing model, the augmented pre-trained Mask R-CNN, had an average precision of 62.5% and an average recall of 84.1%. The average precision of the pre-trained model was greater than that of the model trained from scratch (P < 0.05). Augmenting the dataset further increased accuracy (P < 0.03). CONCLUSION Deep learning technology such as instance segmentation with Mask R-CNN may accurately diagnose APL in bone marrow smear images, with an average precision of 62.5% at an IoU threshold of 0.5. Data augmentation and a pre-trained approach further improved accuracy.
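A detection is typically counted as a true positive when its overlap with the ground truth, measured by intersection over union (IoU), reaches the 0.5 threshold mentioned in the conclusion. A minimal sketch for axis-aligned boxes follows (Mask R-CNN evaluation actually scores mask overlap, so this is a simplification, and the function name is ours):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

With the match rule fixed at IoU ≥ 0.5, precision is then true positives over all detections and recall is true positives over all ground-truth cells.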
Affiliation(s)
- Nengliang Ouyang
- Department of Laboratory Medicine, Zhongshan Hospital, Sun Yat-sen University, Zhongshan, PR China; Department of Laboratory Medicine, Nanfang Hospital, Southern Medical University, Guangzhou, PR China
- Weijia Wang
- Department of Laboratory Medicine, Zhongshan Hospital, Sun Yat-sen University, Zhongshan, PR China
- Li Ma
- Zhongshan Yangshi Technology Co., Ltd, Zhongshan, PR China
- Yanfang Wang
- Zhongshan Yangshi Technology Co., Ltd, Zhongshan, PR China
- Qingwu Chen
- Zhongshan Yangshi Technology Co., Ltd, Zhongshan, PR China
- Shanhong Yang
- Department of Laboratory Medicine, Zhongshan Hospital, Sun Yat-sen University, Zhongshan, PR China
- Jinye Xie
- Department of Laboratory Medicine, Zhongshan Hospital, Sun Yat-sen University, Zhongshan, PR China
- Shaoshen Su
- Department of Laboratory Medicine, Zhongshan Hospital, Sun Yat-sen University, Zhongshan, PR China
- Yin Cheng
- Department of Laboratory Medicine, Zhongshan Hospital, Sun Yat-sen University, Zhongshan, PR China
- Qiong Cheng
- Department of Laboratory Medicine, Zhongshan Hospital, Sun Yat-sen University, Zhongshan, PR China
- Lei Zheng
- Department of Laboratory Medicine, Nanfang Hospital, Southern Medical University, Guangzhou, PR China
- Yong Yuan
- Department of Laboratory Medicine, Zhongshan Hospital, Sun Yat-sen University, Zhongshan, PR China
155
Venegas P, Pérez N, Zapata S, Mosquera JD, Augot D, Rojo-Álvarez JL, Benítez D. An approach to automatic classification of Culicoides species by learning the wing morphology. PLoS One 2020; 15:e0241798. [PMID: 33147271] [PMCID: PMC7641368] [DOI: 10.1371/journal.pone.0241798]
Abstract
Fast and accurate identification of biting midges is crucial in the study of Culicoides-borne diseases. In this work, we propose a two-stage method for automatically analyzing Culicoides (Diptera: Ceratopogonidae) species. First, an image preprocessing task composed of median and Wiener filters followed by equalization and morphological operations is used to improve the quality of the wing image and allow an adequate segmentation of particles of interest. Then, the zones of interest inside the biting midge wing are segmented using the watershed transform. The proposed method is able to produce optimal feature vectors that help to identify Culicoides species. A database containing wing images of C. obsoletus, C. pusillus, C. foxi, and C. insignis species was used to test its performance. Feature relevance analysis indicated that the mean of the hydraulic radius and the eccentricity were relevant for the decision boundary between the C. obsoletus and C. pusillus species. In contrast, the number of particles and the mean of the hydraulic radius were relevant for deciding between the C. foxi and C. insignis species. Meanwhile, for distinguishing among all four species, the number of particles and zones and the mean of the circularity were the most relevant features. The linear discriminant analysis classifier was the best model for the three experimental classification scenarios described above, achieving averaged areas under the receiver operating characteristic curve of 0.98, 0.90, and 0.96, respectively.
Affiliation(s)
- Pablo Venegas
- Colegio de Ciencias e Ingenierías “El Politécnico”, Universidad San Francisco de Quito USFQ, Quito, Ecuador
- Noel Pérez
- Colegio de Ciencias e Ingenierías “El Politécnico”, Universidad San Francisco de Quito USFQ, Quito, Ecuador
- Sonia Zapata
- Instituto de Microbiología, Colegio de Ciencias Biológicas y Ambientales “COCIBA”, Universidad San Francisco de Quito USFQ, Quito, Ecuador
- Juan Daniel Mosquera
- Instituto de Microbiología, Colegio de Ciencias Biológicas y Ambientales “COCIBA”, Universidad San Francisco de Quito USFQ, Quito, Ecuador
- Denis Augot
- Usc Vecpar, ANSES LSA, EA7510, Université de Reims Champagne-Ardenne, Reims, France
- José Luis Rojo-Álvarez
- Department of Signal Theory and Communications, Rey Juan Carlos University, Fuenlabrada, Spain
- Diego Benítez
- Colegio de Ciencias e Ingenierías “El Politécnico”, Universidad San Francisco de Quito USFQ, Quito, Ecuador
156
Peña-Solórzano CA, Albrecht DW, Bassed RB, Burke MD, Dimmock MR. Findings from machine learning in clinical medical imaging applications - Lessons for translation to the forensic setting. Forensic Sci Int 2020; 316:110538. [PMID: 33120319] [PMCID: PMC7568766] [DOI: 10.1016/j.forsciint.2020.110538]
Abstract
Machine learning (ML) techniques are increasingly being used in clinical medical imaging to automate distinct processing tasks. In post-mortem forensic radiology, the use of these algorithms presents significant challenges due to variability in organ position, structural changes from decomposition, inconsistent body placement in the scanner, and the presence of foreign bodies. Existing ML approaches in clinical imaging can likely be transferred to the forensic setting with careful consideration of the increased variability and temporal factors that affect the data used to train these algorithms. Additional steps are required to deal with these issues, either by incorporating the possible variability into the training data through data augmentation, or by using atlases as a pre-processing step to account for death-related factors. A key application of ML would then be to highlight anatomical and gross pathological features of interest, or to present information that helps optimally determine the cause of death. In this review, we highlight the results and limitations of ML applications in clinical medical imaging in order to determine the key implications for their application in the forensic setting.
Affiliation(s)
- Carlos A Peña-Solórzano
- Department of Medical Imaging and Radiation Sciences, Monash University, Wellington Rd, Clayton, Melbourne, VIC 3800, Australia
- David W Albrecht
- Clayton School of Information Technology, Monash University, Wellington Rd, Clayton, Melbourne, VIC 3800, Australia
- Richard B Bassed
- Victorian Institute of Forensic Medicine, 57-83 Kavanagh St., Southbank, Melbourne, VIC 3006, Australia; Department of Forensic Medicine, Monash University, Wellington Rd, Clayton, Melbourne, VIC 3800, Australia
- Michael D Burke
- Victorian Institute of Forensic Medicine, 57-83 Kavanagh St., Southbank, Melbourne, VIC 3006, Australia; Department of Forensic Medicine, Monash University, Wellington Rd, Clayton, Melbourne, VIC 3800, Australia
- Matthew R Dimmock
- Department of Medical Imaging and Radiation Sciences, Monash University, Wellington Rd, Clayton, Melbourne, VIC 3800, Australia
157
Nadeem MW, Goh HG, Ali A, Hussain M, Khan MA, Ponnusamy VA. Bone Age Assessment Empowered with Deep Learning: A Survey, Open Research Challenges and Future Directions. Diagnostics (Basel) 2020; 10:E781. [PMID: 33022947] [PMCID: PMC7601134] [DOI: 10.3390/diagnostics10100781]
Abstract
Deep learning is a useful and rapidly proliferating machine learning technique. Various applications, such as medical image analysis, medical image processing, text understanding, and speech recognition, have been using deep learning with promising results. Both supervised and unsupervised approaches are being used to extract and learn features, as well as for the multi-level representation of pattern recognition and classification. Hence, prediction, recognition, and diagnosis in various domains of healthcare, including the abdomen, lung cancer, brain tumor, skeletal bone age assessment, and so on, have been transformed and improved significantly by deep learning. Considering this wide range of applications, the main aim of this paper is to present a detailed survey of emerging research on deep-learning models for bone age assessment (e.g., segmentation, prediction, and classification). A large number of scientific publications related to bone age assessment using deep learning are explored, studied, and presented in this survey. Furthermore, the emerging trends of this research domain are analyzed and discussed. Finally, a critical discussion of the limitations of deep-learning models is presented, along with open research challenges and future directions in this promising area.
Affiliation(s)
- Muhammad Waqas Nadeem
- Faculty of Information and Communication Technology (FICT), Universiti Tunku Abdul Rahman (UTAR), 31900 Kampar, Perak, Malaysia
- Department of Computer Science, Lahore Garrison University, Lahore 54000, Pakistan
- Hock Guan Goh
- Faculty of Information and Communication Technology (FICT), Universiti Tunku Abdul Rahman (UTAR), 31900 Kampar, Perak, Malaysia
- Abid Ali
- Department of Computer Science, Lahore Garrison University, Lahore 54000, Pakistan
- Muzammil Hussain
- Department of Computer Science, School of Systems and Technology, University of Management and Technology, Lahore 54000, Pakistan
- Muhammad Adnan Khan
- Department of Computer Science, Lahore Garrison University, Lahore 54000, Pakistan
- Vasaki a/p Ponnusamy
- Faculty of Information and Communication Technology (FICT), Universiti Tunku Abdul Rahman (UTAR), 31900 Kampar, Perak, Malaysia
158
A novel approach to classify urinary stones using dual-energy kidney, ureter and bladder (DEKUB) X-ray imaging. Appl Radiat Isot 2020; 164:109267. [DOI: 10.1016/j.apradiso.2020.109267]
159
Kocak B, Kus EA, Kilickesmez O. How to read and review papers on machine learning and artificial intelligence in radiology: a survival guide to key methodological concepts. Eur Radiol 2020; 31:1819-1830. [PMID: 33006018] [DOI: 10.1007/s00330-020-07324-4]
Abstract
In recent years, there has been a dramatic increase in research papers about machine learning (ML) and artificial intelligence in radiology. With so many papers around, it is of paramount importance to make a proper scientific quality assessment as to their validity, reliability, effectiveness, and clinical applicability. Due to methodological complexity, papers on ML in radiology are often hard to evaluate, requiring a good understanding of key methodological issues. In this review, we aimed to guide the radiology community on key methodological aspects of ML to improve their academic reading and peer-review experience. Key aspects of the ML pipeline were presented within four broad categories: study design, data handling, modelling, and reporting. Sixteen key methodological items and related common pitfalls were reviewed with a fresh perspective: database size, robustness of reference standard, information leakage, feature scaling, reliability of features, high dimensionality, perturbations in feature selection, class balance, bias-variance trade-off, hyperparameter tuning, performance metrics, generalisability, clinical utility, comparison with traditional tools, data sharing, and transparent reporting.
Key Points:
- Machine learning is new and rather complex for the radiology community.
- Validity, reliability, effectiveness, and clinical applicability of studies on machine learning can be evaluated with a proper understanding of key methodological concepts about study design, data handling, modelling, and reporting.
- Understanding key methodological concepts will provide a better academic reading and peer-review experience for the radiology community.
Affiliation(s)
- Burak Kocak
- Department of Radiology, Basaksehir Cam and Sakura City Hospital, Basaksehir, 34480, Istanbul, Turkey
- Ece Ates Kus
- Department of Radiology, Istanbul Training and Research Hospital, Samatya, 34098, Istanbul, Turkey
- Ozgur Kilickesmez
- Department of Radiology, Basaksehir Cam and Sakura City Hospital, Basaksehir, 34480, Istanbul, Turkey
160
Porcu M, Solinas C, Mannelli L, Micheletti G, Lambertini M, Willard-Gallo K, Neri E, Flanders AE, Saba L. Radiomics and "radi-…omics" in cancer immunotherapy: a guide for clinicians. Crit Rev Oncol Hematol 2020; 154:103068. [PMID: 32805498] [DOI: 10.1016/j.critrevonc.2020.103068]
Abstract
In recent years, the concept of precision medicine has become a popular topic, particularly in medical oncology. Besides the identification of new molecular prognostic and predictive biomarkers and the development of new targeted and immunotherapeutic drugs, imaging has started to play a central role in this new era. Terms such as "radiomics", "radiogenomics" or "radi…-omics" are becoming increasingly common in the literature, and soon they will represent an integral part of clinical practice. Artificial intelligence, imaging and "-omics" data can be used to develop models able to predict, for example, the features of the tumor immune microenvironment through imaging, and to monitor the therapeutic response beyond the standard radiological criteria. The aims of this narrative review are to provide clinicians with a simplified guide to these concepts and to summarize the existing evidence on radiomics and "radi…-omics" in cancer immunotherapy.
Affiliation(s)
- Michele Porcu
- Department of Radiology, AOU of Cagliari, University of Cagliari, Italy
- Cinzia Solinas
- Medical Oncology, Azienda Tutela Salute Sardegna, Hospital Antonio Segni, Ozieri, SS, Italy
- Giulio Micheletti
- Department of Radiology, AOU of Cagliari, University of Cagliari, Italy
- Matteo Lambertini
- Department of Medical Oncology, U.O.C. Clinica di Oncologia Medica, IRCCS Ospedale Policlinico San Martino, Genova, Italy; Department of Internal Medicine and Medical Specialties (DiMI), School of Medicine, University of Genova, Genova, Italy
- Adam E Flanders
- Department of Radiology, Division of Neuroradiology, Thomas Jefferson University Hospital, Philadelphia, PA, USA
- Luca Saba
- Department of Radiology, AOU of Cagliari, University of Cagliari, Italy
161
Dikici E, Ryu JL, Demirer M, Bigelow M, White RD, Slone W, Erdal BS, Prevedello LM. Automated Brain Metastases Detection Framework for T1-Weighted Contrast-Enhanced 3D MRI. IEEE J Biomed Health Inform 2020; 24:2883-2893. [DOI: 10.1109/jbhi.2020.2982103]
162
Ahmad A, Ibrahim Z, Sakr G, El-Bizri A, Masri L, Elhajj IH, El-Hachem N, Isma'eel H. A comparison of artificial intelligence-based algorithms for the identification of patients with depressed right ventricular function from 2-dimensional echocardiography parameters and clinical features. Cardiovasc Diagn Ther 2020; 10:859-868. [PMID: 32968641] [DOI: 10.21037/cdt-20-471]
Abstract
Background Recognizing low right ventricular (RV) function from 2-dimensional echocardiography (2D-ECHO) is challenging when parameters are contradictory. We aim to develop a model to predict low RV function integrating the various 2D-ECHO parameters in reference to cardiac magnetic resonance (CMR), the gold standard. Methods We retrospectively identified patients who underwent a 2D-ECHO and a CMR within 3 months of each other at our institution (American University of Beirut Medical Center). We extracted three parameters (TAPSE, S' and FACRV) that are classically used to assess RV function. We assessed the ability of 2D-ECHO-derived parameters and clinical features to predict RV function measured by the gold standard, CMR. We compared outcomes from four machine learning algorithms widely used in the biomedical community to solve classification problems. Results One hundred fifty-five patients were identified and included in our study. Average age was 43±17.1 years, and 52/156 (33.3%) were female. According to CMR, 21 patients were identified as having RV dysfunction, with an RVEF of 34.7%±6.4%, as opposed to 54.7%±6.7% in the normal-RV population (P<0.0001). The random forest model was able to detect low RV function with an AUC of 0.80, while general linear regression performed poorly in our population, with an AUC of 0.62. Conclusions In this study, we trained and validated an ML-based algorithm that could detect low RV function from clinical and 2D-ECHO parameters. The algorithm has two advantages: first, it performed better than general linear regression, and second, it integrated the various 2D-ECHO parameters.
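The AUC figures quoted above can be computed from raw classifier scores via the rank-sum (Mann-Whitney) identity: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal stdlib sketch (function name ours, not from the paper):

```python
def auc_score(labels, scores):
    """AUC as P(score of random positive > score of random negative),
    counting ties as half a win. labels are 0/1."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.80 therefore means the random forest ranks a dysfunctional-RV patient above a normal one 80% of the time, regardless of any decision threshold.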
Affiliation(s)
- Ali Ahmad
- Vascular Medicine Program, Division of Cardiology, American University of Beirut, Beirut, Lebanon; Department of Cardiovascular Medicine, Mayo Clinic, Rochester, MN, USA
- Zahi Ibrahim
- Vascular Medicine Program, Division of Cardiology, American University of Beirut, Beirut, Lebanon
- Georges Sakr
- Department of Computer Engineering, St Joseph University of Beirut, Beirut, Lebanon
- Abdallah El-Bizri
- Department of Internal Medicine, American University of Beirut, Beirut, Lebanon
- Lara Masri
- Vascular Medicine Program, Division of Cardiology, American University of Beirut, Beirut, Lebanon
- Imad H Elhajj
- Department of Electrical and Computer Engineering, American University of Beirut, Beirut, Lebanon
- Nehme El-Hachem
- Department of Electrical and Computer Engineering, American University of Beirut, Beirut, Lebanon
- Hussain Isma'eel
- Vascular Medicine Program, Division of Cardiology, American University of Beirut, Beirut, Lebanon; Department of Internal Medicine, American University of Beirut, Beirut, Lebanon
163
Gokulnath BV, Usha Devi G. Boosted-DEPICT: an effective maize disease categorization framework using deep clustering. Neural Comput Appl 2020. [DOI: 10.1007/s00521-020-05303-w]
164
Feasibility of Automatic Seed Generation Applied to Cardiac MRI Image Analysis. Mathematics 2020. [DOI: 10.3390/math8091511]
Abstract
We present a method of using interactive image segmentation algorithms to reduce specific image segmentation problems to the task of finding small sets of pixels identifying the regions of interest. To this end, we empirically show the feasibility of automatically generating seeds for GrowCut, a popular interactive image segmentation algorithm. The principal contribution of our paper is the proposal of a method for automating the seed generation method for the task of whole-heart segmentation of MRI scans, which achieves competitive unsupervised results (0.76 Dice on the MMWHS dataset). Moreover, we show that segmentation performance is robust to seeds with imperfect precision, suggesting that GrowCut-like algorithms can be applied to medical imaging tasks with little modeling effort.
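The Dice score used to report the 0.76 whole-heart result is straightforward to compute from binary segmentation masks; a minimal sketch (the authors' exact evaluation pipeline is not specified here, and the function name is ours):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two equal-length binary masks:
    2 * |A ∩ B| / (|A| + |B|); defined as 1.0 when both masks are empty."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0
```

Dice rewards overlap relative to the combined region sizes, which is why it is the standard figure of merit for whole-heart segmentation benchmarks such as MMWHS.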
165
Diagnostic accuracy of deep learning in orthopaedic fractures: a systematic review and meta-analysis. Clin Radiol 2020; 75:713.e17-713.e28. [DOI: 10.1016/j.crad.2020.05.021]
166
Tran WT, Sadeghi-Naini A, Lu FI, Gandhi S, Meti N, Brackstone M, Rakovitch E, Curpen B. Computational Radiology in Breast Cancer Screening and Diagnosis Using Artificial Intelligence. Can Assoc Radiol J 2020; 72:98-108. [DOI: 10.1177/0846537120949974]
Abstract
Breast cancer screening has been shown to significantly reduce mortality in women. The increased utilization of screening examinations has led to growing demands for rapid and accurate diagnostic reporting. In modern breast imaging centers, full-field digital mammography (FFDM) has replaced traditional analog mammography, and this has opened new opportunities for developing computational frameworks to automate detection and diagnosis. Artificial intelligence (AI), and its subdomain of deep learning, is showing promising results and improvements in diagnostic accuracy compared to previous computer-based methods, known as computer-aided detection and diagnosis. In this commentary, we review the current status of computational radiology, with a focus on deep neural networks used in breast cancer screening and diagnosis. Recent studies are developing a new generation of computer-aided detection and diagnosis systems, as well as leveraging AI-driven tools to efficiently interpret digital mammograms and breast tomosynthesis imaging. The use of AI in computational radiology necessitates transparency and rigorous testing. However, the overall impact of AI on radiology workflows will potentially yield more efficient and standardized processes as well as an improved level of care for patients, with high diagnostic accuracy.
Affiliation(s)
- William T. Tran
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, Canada
- Ali Sadeghi-Naini
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, Canada
- Department of Electrical Engineering and Computer Science, Lassonde School of Engineering, York University, Toronto, Canada
- Fang-I Lu
- Department of Laboratory Medicine and Molecular Diagnostics, Sunnybrook Health Sciences Centre, Toronto, Canada
- Sonal Gandhi
- Division of Medical Oncology, Sunnybrook Health Sciences Centre, Toronto, Canada
- Department of Medicine, University of Toronto, Toronto, Canada
- Nicholas Meti
- Division of Medical Oncology, Sunnybrook Health Sciences Centre, Toronto, Canada
- Muriel Brackstone
- Department of Surgical Oncology, London Health Sciences Centre, London, Ontario
- Eileen Rakovitch
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, Canada
- Belinda Curpen
- Division of Breast Imaging, Sunnybrook Health Sciences Centre, Toronto, Canada
- Department of Medical Imaging, University of Toronto, Toronto, Canada
Collapse
|
167
|
Meyer-Bäse A, Morra L, Meyer-Bäse U, Pinker K. Current Status and Future Perspectives of Artificial Intelligence in Magnetic Resonance Breast Imaging. CONTRAST MEDIA & MOLECULAR IMAGING 2020; 2020:6805710. [PMID: 32934610 PMCID: PMC7474774 DOI: 10.1155/2020/6805710] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/09/2020] [Revised: 04/17/2020] [Accepted: 05/28/2020] [Indexed: 12/12/2022]
Abstract
Recent advances in artificial intelligence (AI) and deep learning (DL) have impacted many scientific fields including biomedical imaging. Magnetic resonance imaging (MRI) is a well-established method in breast imaging with several indications including screening, staging, and therapy monitoring. The rapid development and subsequent implementation of AI into clinical breast MRI has the potential to affect clinical decision-making, guide treatment selection, and improve patient outcomes. The goal of this review is to provide a comprehensive picture of the current status and future perspectives of AI in breast MRI. We will review DL applications and compare them to standard data-driven techniques. We will emphasize the important aspect of developing quantitative imaging biomarkers for precision medicine and the potential of breast MRI and DL in this context. Finally, we will discuss future challenges of DL applications for breast MRI and an AI-augmented clinical decision strategy.
Affiliation(s)
- Anke Meyer-Bäse: Department of Scientific Computing, Florida State University, Tallahassee, Florida 32310-4120, USA
- Lia Morra: Dipartimento di Automatica e Informatica, Politecnico di Torino, Torino, Italy
- Uwe Meyer-Bäse: Department of Electrical and Computer Engineering, Florida A&M University and Florida State University, Tallahassee, Florida 32310-4120, USA
- Katja Pinker: Department of Biomedical Imaging and Image-Guided Therapy, Division of Molecular and Gender Imaging, Medical University of Vienna, Vienna, Austria; Department of Radiology, Memorial Sloan-Kettering Cancer Center, New York, New York 10065, USA
|
168
|
Guerriero E, Ugga L, Cuocolo R. Artificial intelligence and pituitary adenomas: A review. Artif Intell Med Imaging 2020; 1:70-77. [DOI: 10.35711/aimi.v1.i2.70] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/25/2020] [Revised: 07/15/2020] [Accepted: 08/21/2020] [Indexed: 02/06/2023] Open
Abstract
The aim of this review was to provide an overview of the main concepts in machine learning (ML) and to analyze ML applications in the imaging of pituitary adenomas. After describing the clinical, pathological and imaging features of pituitary tumors, we define the difference between ML and classical rule-based algorithms, illustrate the fundamental ML techniques (supervised, unsupervised and reinforcement learning) and explain the characteristics of deep learning, an ML approach employing networks inspired by the brain's structure. Pre-treatment assessment and neurosurgical outcome prediction are the main potential ML applications using magnetic resonance imaging. Regarding pre-treatment assessment, ML methods have been used to obtain information about tumor consistency, predict cavernous sinus invasion and a high proliferative index, discriminate null cell adenomas, which respond to neo-adjuvant radiotherapy, from other subtypes, predict response to somatostatin analogues, and predict visual pathway injury. Regarding neurosurgical outcome prediction, the following applications are discussed: prediction of gross total resection, evaluation of Cushing disease recurrence after transsphenoidal surgery, and prediction of cerebrospinal fluid fistula formation after surgery. Although clinical applicability requires greater replicability, generalizability and validation, the results are promising, and ML software has the potential to facilitate better clinical decision making in patients with pituitary tumors.
Affiliation(s)
- Elvira Guerriero: Department of Advanced Biomedical Sciences, University of Naples “Federico II”, Naples 80131, Italy
- Lorenzo Ugga: Department of Advanced Biomedical Sciences, University of Naples “Federico II”, Naples 80131, Italy
- Renato Cuocolo: Department of Advanced Biomedical Sciences, University of Naples “Federico II”, Naples 80131, Italy
|
169
|
Martín Noguerol T, Paulano-Godino F, Martín-Valdivia MT, Menias CO, Luna A. Strengths, Weaknesses, Opportunities, and Threats Analysis of Artificial Intelligence and Machine Learning Applications in Radiology. J Am Coll Radiol 2020; 16:1239-1247. [PMID: 31492401 DOI: 10.1016/j.jacr.2019.05.047] [Citation(s) in RCA: 90] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2019] [Revised: 05/26/2019] [Accepted: 05/29/2019] [Indexed: 12/13/2022]
Abstract
Currently, the use of artificial intelligence (AI) in radiology, particularly machine learning (ML), has become a reality in clinical practice. Since the end of the last century, several ML algorithms have been introduced for a wide range of common imaging tasks, not only for diagnostic purposes but also for image acquisition and postprocessing. AI is now recognized as a driving initiative in every aspect of radiology, and there is growing evidence of its advantages, whether in creating seamless imaging workflows for radiologists or even in replacing radiologists. However, most current AI methods have internal and external disadvantages that impede their ultimate implementation in the clinical arena; in that sense, AI can be regarded as one more product seeking entry into the health care market. For this reason, this review analyzes the current status of AI, and specifically ML, applied to radiology through the lens of a strengths, weaknesses, opportunities, and threats (SWOT) analysis.
Affiliation(s)
- María Teresa Martín-Valdivia: SINAI Research Group, Computer Science Department, Advanced Studies Center in ICT (CEATIC), Universidad de Jaén, Jaén, Spain
- Antonio Luna: MRI Unit, Radiology Department, Health Time, Jaén, Spain
|
170
|
Egert M, Steward JE, Sundaram CP. Machine Learning and Artificial Intelligence in Surgical Fields. Indian J Surg Oncol 2020; 11:573-577. [PMID: 33299275 DOI: 10.1007/s13193-020-01166-8] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2019] [Accepted: 07/07/2020] [Indexed: 12/17/2022] Open
Abstract
Artificial intelligence (AI) and machine learning (ML) have the potential to improve multiple facets of medical practice, including diagnosis of disease, surgical training, clinical outcomes, and access to healthcare, and the technology has found various applications in surgical fields. AI and ML have been used to evaluate a surgeon's technical skill: these technologies can detect instrument motion, recognize patterns in video recordings, and track the physical motion, eye movements, and cognitive function of the surgeon. These modalities also aid the advancement of robotic surgical training; for example, a recording and playback system developed for the da Vinci Standard Surgical System helps trainees receive tactile feedback and operate with greater precision. ML has shown promise in recognizing and classifying complex patterns in diagnostic images and in pathologic tissue analysis, allowing more accurate and efficient diagnosis and treatment. Artificial neural networks are able to analyze sets of symptoms in conjunction with labs, imaging, and exam findings to determine the likelihood of a diagnosis or outcome. Telemedicine is another use of ML and AI, employing technology such as voice recognition to deliver health care remotely. Limitations include the need for large data sets to train the algorithms, and the potential for misclassification of data points that do not follow the typical patterns learned by the machine. As more applications of AI and ML are developed for the surgical field, further studies are needed to determine feasibility, efficacy, and cost.
Affiliation(s)
- Melissa Egert: Department of Urology, Indiana University School of Medicine, 535 N Barnhill Drive, Suite 150, Indianapolis, IN 46202 USA
- James E Steward: Department of Urology, Indiana University School of Medicine, 535 N Barnhill Drive, Suite 150, Indianapolis, IN 46202 USA
- Chandru P Sundaram: Department of Urology, Indiana University School of Medicine, 535 N Barnhill Drive, Suite 150, Indianapolis, IN 46202 USA
|
171
|
An Intelligent Diagnosis Method of Brain MRI Tumor Segmentation Using Deep Convolutional Neural Network and SVM Algorithm. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2020; 2020:6789306. [PMID: 32733596 PMCID: PMC7376410 DOI: 10.1155/2020/6789306] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/01/2020] [Accepted: 07/01/2020] [Indexed: 12/30/2022]
Abstract
Among currently proposed brain tumor segmentation methods, those based on traditional image processing and classical machine learning do not perform well enough, so deep learning-based methods are widely used. Within deep learning approaches, convolutional network models achieve good segmentation results, but deep convolutional network models suffer from a large number of parameters and substantial loss of information during encoding and decoding. This paper proposes a deep convolutional neural network fused with a support vector machine (DCNN-F-SVM). The proposed brain tumor segmentation model operates in three stages. In the first stage, a deep convolutional neural network is trained to learn the mapping from image space to tumor-marker space. In the second stage, the predicted labels obtained from this network are fed, together with the test images, into an integrated support vector machine classifier. In the third stage, the deep convolutional neural network and the integrated support vector machine are connected in series to train a deep classifier. Each model was run on the BraTS dataset and a self-made dataset to segment brain tumors. The segmentation results show that the proposed model performs significantly better than either the deep convolutional neural network or the integrated SVM classifier alone.
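The serial wiring described above can be sketched schematically. In this toy sketch the "CNN" and "SVM" are replaced by trivial hand-set intensity scorers (illustrative stand-ins, not the paper's models), since the point is only the staged structure: the stage-1 predicted label becomes an extra input feature for the stage-2 classifier, and stage 3 runs the two in series.

```python
def cnn_predict(patch):
    """Stage 1 stand-in: map a patch to a tumor (1) / background (0) label."""
    mean = sum(sum(row) for row in patch) / (len(patch) * len(patch[0]))
    return 1 if mean > 50 else 0

def svm_predict(patch, cnn_label):
    """Stage 2 stand-in: classify from a raw intensity feature plus the
    stage-1 predicted label, mirroring how DCNN-F-SVM feeds CNN outputs
    into the integrated SVM alongside the image."""
    mean = sum(sum(row) for row in patch) / (len(patch) * len(patch[0]))
    return 1 if 0.02 * mean + 0.5 * cnn_label > 1.0 else 0

def dcnn_f_svm(patch):
    """Stage 3: the two models connected in series."""
    return svm_predict(patch, cnn_predict(patch))

tumor   = [[80, 90], [85, 95]]   # bright patch
healthy = [[10, 12], [11, 9]]    # dark patch
```

In the real model both stages are learned and the output is a per-pixel label map; the stand-ins here only make the data flow concrete.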
|
172
|
From CT to artificial intelligence for complex assessment of plaque-associated risk. Int J Cardiovasc Imaging 2020; 36:2403-2427. [PMID: 32617720 DOI: 10.1007/s10554-020-01926-1] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/11/2020] [Accepted: 06/25/2020] [Indexed: 02/07/2023]
Abstract
The recent technological developments in the field of cardiac imaging have established coronary computed tomography angiography (CCTA) as a first-line diagnostic tool in patients with suspected coronary artery disease (CAD). CCTA offers robust information on the overall coronary circulation and luminal stenosis, also providing the ability to assess the composition, morphology, and vulnerability of atherosclerotic plaques. In addition, the perivascular adipose tissue (PVAT) has recently emerged as a marker of increased cardiovascular risk. The addition of PVAT quantification to standard CCTA imaging may provide the ability to extract information on local inflammation, for an individualized approach in coronary risk stratification. The development of image post-processing tools over the past several years allowed CCTA to provide a significant amount of data that can be incorporated into machine learning (ML) applications. ML algorithms that use radiomic features extracted from CCTA are still at an early stage. However, the recent development of artificial intelligence will probably bring major changes in the way we integrate clinical, biological, and imaging information, for a complex risk stratification and individualized therapeutic decision making in patients with CAD. This review aims to present the current evidence on the complex role of CCTA in the detection and quantification of vulnerable plaques and the associated coronary inflammation, also describing the most recent developments in the radiomics-based machine learning approach for complex assessment of plaque-associated risk.
|
173
|
Jayakumar P, Bozic KJ. Advanced decision-making using patient-reported outcome measures in total joint replacement. J Orthop Res 2020; 38:1414-1422. [PMID: 31994752 DOI: 10.1002/jor.24614] [Citation(s) in RCA: 41] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/13/2019] [Accepted: 01/21/2020] [Indexed: 02/04/2023]
Abstract
Up to one-third of total joint replacement (TJR) procedures may be performed inappropriately in a subset of patients who remain dissatisfied with their outcomes, stressing the importance of shared decision-making. Patient-reported outcome measures capture physical, emotional, and social aspects of health and wellbeing from the patient's perspective. Powerful computer systems capable of performing highly sophisticated analysis using different types of data, including patient-derived data, such as patient-reported outcomes, may eliminate guess work, generating impactful metrics to better inform the decision-making process. We have created a shared decision-making tool which generates personalized predictions of risks and benefits from TJR based on patient-reported outcomes as well as clinical and demographic data. We present the protocol for a randomized controlled trial designed to assess the impact of this tool on decision quality, level of shared decision-making, and patient and process outcomes. We also discuss current concepts in this field and highlight opportunities leveraging patient-reported data and artificial intelligence for decision support across the care continuum.
Affiliation(s)
- Prakash Jayakumar: Department of Surgery and Perioperative Care, Dell Medical School, University of Texas at Austin, Austin, Texas
- Kevin J Bozic: Department of Surgery and Perioperative Care, Dell Medical School, University of Texas at Austin, Austin, Texas
|
174
|
Siddique S, Chow JC. Artificial intelligence in radiotherapy. Rep Pract Oncol Radiother 2020; 25:656-666. [PMID: 32617080 PMCID: PMC7321818 DOI: 10.1016/j.rpor.2020.03.015] [Citation(s) in RCA: 54] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2019] [Revised: 01/06/2020] [Accepted: 03/27/2020] [Indexed: 02/06/2023] Open
Abstract
Artificial intelligence (AI) has been widely implemented in the medical field in recent years. This paper first reviews the background of AI in radiotherapy. It then explores the basic concepts of the AI algorithms and machine learning methods available today, such as neural networks, and how they are being implemented in radiotherapy and diagnostic processes, such as medical imaging, treatment planning, patient simulation, quality assurance and radiation dose delivery. It also explores ongoing research on AI methods to be implemented in radiotherapy in the future. The review shows very promising progress and a promising future for AI across many areas of radiotherapy. However, given various concerns, such as the availability and security of big data, and the further work needed to polish and test AI algorithms, we may not yet be ready to rely primarily on AI in radiotherapy.
Affiliation(s)
- Sarkar Siddique: Department of Physics, Ryerson University, Toronto, ON M5B 2K3, Canada
- James C.L. Chow: Radiation Medicine Program, Princess Margaret Cancer Centre, University Health Network, Toronto, ON M5G 1X6, Canada; Department of Radiation Oncology, University of Toronto, Toronto, ON M5T 1P5, Canada
|
175
|
Gaidano V, Tenace V, Santoro N, Varvello S, Cignetti A, Prato G, Saglio G, De Rosa G, Geuna M. A Clinically Applicable Approach to the Classification of B-Cell Non-Hodgkin Lymphomas with Flow Cytometry and Machine Learning. Cancers (Basel) 2020; 12:cancers12061684. [PMID: 32599959 PMCID: PMC7352227 DOI: 10.3390/cancers12061684] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2020] [Revised: 06/15/2020] [Accepted: 06/19/2020] [Indexed: 12/12/2022] Open
Abstract
The immunophenotype is a key element in classifying B-cell Non-Hodgkin Lymphomas (B-NHL); while it is routinely obtained through immunohistochemistry, the use of flow cytometry (FC) could offer several advantages. However, few FC laboratories can rely on long-standing practical experience, and the supporting literature is still limited; as a result, the use of FC is generally restricted to the analysis of lymphomas with bone marrow or peripheral blood involvement. In this work, we applied machine learning to our database of 1465 B-NHL samples from different sources, building four artificial predictive systems that can classify B-NHL into up to nine of the most common clinico-pathological entities. Our best model shows an overall accuracy of 92.68%, a mean sensitivity of 88.54% and a mean specificity of 98.77%. Beyond the clinical applicability, our models demonstrate (i) the strong discriminatory power of MIB1 and Bcl2, whose integration into the predictive model significantly increased the performance of the algorithm; (ii) the potential usefulness of some non-canonical markers in categorizing B-NHL; and (iii) that FC markers should not be described as strictly positive or negative according to fixed thresholds, but rather correlate with different B-NHL entities depending on their level of expression.
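The reported figures (overall accuracy, mean sensitivity, mean specificity) are macro-averages over the classes of a multiclass problem. As a reminder of how such numbers are derived, here is a minimal pure-Python sketch computed from a toy 3-class confusion matrix (illustrative data, not the study's):

```python
def per_class_metrics(conf):
    """conf[i][j] = number of samples of true class i predicted as class j.
    Returns (overall accuracy, mean sensitivity, mean specificity),
    where sensitivity/specificity are computed one-vs-rest per class
    and then averaged (macro-averaging)."""
    k = len(conf)
    total = sum(sum(row) for row in conf)
    correct = sum(conf[i][i] for i in range(k))
    sens, spec = [], []
    for c in range(k):
        tp = conf[c][c]
        fn = sum(conf[c]) - tp                       # missed class-c cases
        fp = sum(conf[i][c] for i in range(k)) - tp  # others called class c
        tn = total - tp - fn - fp
        sens.append(tp / (tp + fn))
        spec.append(tn / (tn + fp))
    return correct / total, sum(sens) / k, sum(spec) / k

# Toy confusion matrix (rows = true class, cols = predicted class):
conf = [[45,  3,  2],
        [ 4, 40,  6],
        [ 1,  2, 47]]
acc, mean_sens, mean_spec = per_class_metrics(conf)
```

With nine lymphoma entities the matrix is simply 9×9; the macro-averaged formulas are unchanged.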
Affiliation(s)
- Valentina Gaidano: Department of Clinical and Biological Sciences, University of Turin, 10043 Orbassano, Italy; Division of Hematology, A.O. SS Antonio e Biagio e Cesare Arrigo, 15121 Alessandria, Italy
- Valerio Tenace (corresponding author): Department of Electrical and Computer Engineering, University of Utah, Salt Lake City, UT 84112, USA
- Nathalie Santoro: Laboratory of Immunopathology, Division of Pathology, A.O. Ordine Mauriziano, 10128 Turin, Italy
- Silvia Varvello: University Division of Hematology and Cell Therapy, A.O. Ordine Mauriziano, 10128 Turin, Italy
- Alessandro Cignetti: University Division of Hematology and Cell Therapy, A.O. Ordine Mauriziano, 10128 Turin, Italy
- Giuseppina Prato: Division of Pathology, San Lazzaro Hospital, ASL CN2, 12051 Alba, Italy
- Giuseppe Saglio: Department of Clinical and Biological Sciences, University of Turin, 10043 Orbassano, Italy; University Division of Hematology and Cell Therapy, A.O. Ordine Mauriziano, 10128 Turin, Italy
- Giovanni De Rosa: Laboratory of Immunopathology, Division of Pathology, A.O. Ordine Mauriziano, 10128 Turin, Italy
- Massimo Geuna (corresponding author): Laboratory of Immunopathology, Division of Pathology, A.O. Ordine Mauriziano, 10128 Turin, Italy
|
176
|
Abdalla-Aslan R, Yeshua T, Kabla D, Leichter I, Nadler C. An artificial intelligence system using machine-learning for automatic detection and classification of dental restorations in panoramic radiography. Oral Surg Oral Med Oral Pathol Oral Radiol 2020; 130:593-602. [PMID: 32646672 DOI: 10.1016/j.oooo.2020.05.012] [Citation(s) in RCA: 43] [Impact Index Per Article: 8.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2020] [Revised: 04/28/2020] [Accepted: 05/22/2020] [Indexed: 12/19/2022]
Abstract
OBJECTIVES: The aim of this study was to develop a computer vision algorithm, based on artificial intelligence, designed to automatically detect and classify various dental restorations on panoramic radiographs.
STUDY DESIGN: A total of 738 dental restorations in 83 anonymized panoramic images were analyzed. Images were automatically cropped to obtain the region of interest containing the maxillary and mandibular alveolar ridges. Subsequently, the restorations were segmented using a local adaptive threshold. The segmented restorations were classified into 11 categories, and the algorithm was trained to classify them. Numerical features based on the shape and distribution of gray level values extracted by the algorithm were used to classify the restorations into the different categories. Finally, a Cubic Support Vector Machine algorithm with Error-Correcting Output Codes was used with a cross-validation approach for the multiclass classification of the restorations according to these features.
RESULTS: The algorithm detected 94.6% of the restorations. Classification eliminated all erroneous marks, and ultimately 90.5% of the restorations were marked on the image. The overall accuracy of the classification stage in discriminating between the true restoration categories was 93.6%.
CONCLUSIONS: This machine-learning algorithm demonstrated excellent performance in detecting and classifying dental restorations on panoramic images.
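The segmentation step relies on local adaptive thresholding: each pixel is compared against the mean intensity of its own neighborhood rather than a single global cutoff, so bright restorations stand out even when overall exposure varies across the radiograph. A minimal pure-Python sketch of the idea (window size and offset are illustrative choices, not the paper's parameters):

```python
def adaptive_threshold(img, win=1, offset=0.0):
    """Binarize an image: a pixel becomes foreground (1) when it exceeds
    the mean of its (2*win+1) x (2*win+1) neighborhood by more than
    `offset`; windows are clipped at the image border."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - win), min(h, y + win + 1))
                    for xx in range(max(0, x - win), min(w, x + win + 1))]
            local_mean = sum(vals) / len(vals)
            out[y][x] = 1 if img[y][x] > local_mean + offset else 0
    return out

# A bright "restoration" patch on a darker background:
img = [[10, 10, 10, 10],
       [10, 80, 90, 10],
       [10, 85, 95, 10],
       [10, 10, 10, 10]]
seg = adaptive_threshold(img, win=1, offset=5.0)
```

Production implementations (e.g. integral-image or Gaussian-weighted variants) compute the local mean far more efficiently, but the decision rule is the same.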
Affiliation(s)
- Ragda Abdalla-Aslan: Researcher, Attending Physician, Department of Oral and Maxillofacial Surgery, Rambam Health Care Campus, Haifa, Israel
- Talia Yeshua: Lecturer, Department of Applied Physics/Electro-optics Engineering, The Jerusalem College of Technology, Jerusalem, Israel
- Daniel Kabla: Department of Electrical and Electronics Engineering, The Jerusalem College of Technology, Jerusalem, Israel
- Isaac Leichter: Professor Emeritus, Department of Applied Physics/Electro-optics Engineering, The Jerusalem College of Technology, Jerusalem, Israel
- Chen Nadler: Lecturer, Oral Maxillofacial Imaging Unit, Oral Medicine Department, the Hebrew University, Hadassah School of Dental Medicine, Ein Kerem, Hadassah Medical Center, Jerusalem, Israel
|
177
|
Tougui I, Jilbab A, El Mhamdi J. Heart disease classification using data mining tools and machine learning techniques. HEALTH AND TECHNOLOGY 2020. [DOI: 10.1007/s12553-020-00438-1] [Citation(s) in RCA: 41] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
|
178
|
Mendoza J, Pedrini H. Detection and classification of lung nodules in chest X‐ray images using deep convolutional neural networks. Comput Intell 2020. [DOI: 10.1111/coin.12241] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Affiliation(s)
- Julio Mendoza: Institute of Computing, University of Campinas, Campinas-SP, Brazil
- Helio Pedrini: Institute of Computing, University of Campinas, Campinas-SP, Brazil
|
179
|
Emami N, Pakchin PS, Ferdousi R. Computational predictive approaches for interaction and structure of aptamers. J Theor Biol 2020; 497:110268. [PMID: 32311376 DOI: 10.1016/j.jtbi.2020.110268] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2020] [Revised: 03/27/2020] [Accepted: 04/02/2020] [Indexed: 02/07/2023]
Abstract
Aptamers are short single-stranded sequences that can bind to their specific targets with high affinity and specificity. Usually, aptamers are selected experimentally via systematic evolution of ligands by exponential enrichment (SELEX), an evolutionary process consisting of multiple cycles of selection and amplification. The SELEX process is expensive and time-consuming, and its success rate is relatively low. To overcome these difficulties, several computational techniques that bring together different disciplines and branches of technology have been developed in aptamer science in recent years. This paper organizes a complementary review of computational predictive approaches for aptamers. Generally, these approaches fall into two main categories: interaction-based prediction and structure-based prediction. The available software packages and toolkits in this scope are also reviewed. The aim of describing computational methods and tools in aptamer science is that aptamer scientists might take advantage of these techniques to develop more accurate and more sensitive aptamers.
Affiliation(s)
- Neda Emami: Department of Health Information Technology, School of Management and Medical Informatics, Tabriz University of Medical Sciences, Tabriz, Iran
- Parvin Samadi Pakchin: Research Center for Pharmaceutical Nanotechnology, Biomedicine Institute, Tabriz University of Medical Sciences, Tabriz, Iran
- Reza Ferdousi: Department of Health Information Technology, School of Management and Medical Informatics, Tabriz University of Medical Sciences, Tabriz, Iran; Research Center for Pharmaceutical Nanotechnology, Biomedicine Institute, Tabriz University of Medical Sciences, Tabriz, Iran
|
180
|
Watson J, Hutyra CA, Clancy SM, Chandiramani A, Bedoya A, Ilangovan K, Nderitu N, Poon EG. Overcoming barriers to the adoption and implementation of predictive modeling and machine learning in clinical care: what can we learn from US academic medical centers? JAMIA Open 2020; 3:167-172. [PMID: 32734155 PMCID: PMC7382631 DOI: 10.1093/jamiaopen/ooz046] [Citation(s) in RCA: 51] [Impact Index Per Article: 10.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2019] [Accepted: 10/09/2019] [Indexed: 12/17/2022] Open
Abstract
Little is known about how academic medical centers (AMCs) in the US develop, implement, and maintain predictive modeling and machine learning (PM and ML) models. We conducted semi-structured interviews with leaders from AMCs to assess their use of PM and ML in clinical care, understand the associated challenges, and determine recommended best practices. Each transcribed interview was iteratively coded and reconciled by a minimum of 2 investigators to identify key barriers to and facilitators of PM and ML adoption and implementation in clinical care. Interviews were conducted with 33 individuals from 19 AMCs nationally. AMCs varied greatly in their use of PM and ML within clinical care, from some just beginning to explore their utility to others with multiple models integrated into clinical care. Informants identified 5 key barriers to the adoption and implementation of PM and ML in clinical care: (1) culture and personnel, (2) clinical utility of the PM and ML tool, (3) financing, (4) technology, and (5) data. Recommendations to the informatics community for overcoming these barriers included: (1) development of robust evaluation methodologies, (2) partnership with vendors, and (3) development and dissemination of best practices. Institutions developing clinical PM and ML applications were advised to: (1) develop appropriate governance, (2) strengthen data access, integrity, and provenance, and (3) adhere to the 5 rights of clinical decision support. This article highlights the key challenges of implementing PM and ML in clinical care at AMCs and suggests best practices for their development, implementation, and maintenance at these institutions.
Affiliation(s)
- Joshua Watson: Department of Surgery, Duke University School of Medicine, Durham, North Carolina, USA
- Carolyn A Hutyra: Department of Orthopedic Surgery, Duke University School of Medicine, Durham, North Carolina, USA
- Shayna M Clancy: Duke Cancer Institute, Duke University School of Medicine, Durham, North Carolina, USA
- Anisha Chandiramani: Division of General Internal Medicine, Department of Medicine, Duke University School of Medicine, Durham, North Carolina, USA; Duke Health Technology Solutions, Duke University Health System, Durham, North Carolina, USA
- Armando Bedoya: Duke Health Technology Solutions, Duke University Health System, Durham, North Carolina, USA; Division of Pulmonary, Allergy and Critical Care Medicine, Department of Medicine, Duke University School of Medicine, Durham, North Carolina, USA
- Kumar Ilangovan: Division of General Internal Medicine, Department of Medicine, Duke University School of Medicine, Durham, North Carolina, USA; Duke Health Technology Solutions, Duke University Health System, Durham, North Carolina, USA
- Nancy Nderitu: Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, North Carolina, USA
- Eric G Poon: Division of General Internal Medicine, Department of Medicine, Duke University School of Medicine, Durham, North Carolina, USA; Duke Health Technology Solutions, Duke University Health System, Durham, North Carolina, USA
|
181
|
Park HJ, Park B, Lee SS. Radiomics and Deep Learning: Hepatic Applications. Korean J Radiol 2020; 21:387-401. [PMID: 32193887 PMCID: PMC7082656 DOI: 10.3348/kjr.2019.0752] [Citation(s) in RCA: 93] [Impact Index Per Article: 18.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2019] [Accepted: 01/05/2020] [Indexed: 12/12/2022] Open
Abstract
Radiomics and deep learning have recently gained attention in the imaging assessment of various liver diseases. Recent research has demonstrated the potential utility of radiomics and deep learning in staging liver fibrosis, detecting portal hypertension, characterizing focal hepatic lesions, prognosticating malignant hepatic tumors, and segmenting the liver and liver tumors. In this review, we outline the basic technical aspects of radiomics and deep learning and summarize recent investigations of the application of these techniques in liver disease.
Affiliation(s)
- Hyo Jung Park: Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Bumwoo Park: Health Innovation Big Data Center, Asan Institute for Life Sciences, Asan Medical Center, Seoul, Korea
- Seung Soo Lee: Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
|
182
|
Pickhardt PJ, Graffy PM, Zea R, Lee SJ, Liu J, Sandfort V, Summers RM. Automated CT biomarkers for opportunistic prediction of future cardiovascular events and mortality in an asymptomatic screening population: a retrospective cohort study. LANCET DIGITAL HEALTH 2020; 2:e192-e200. [PMID: 32864598 DOI: 10.1016/s2589-7500(20)30025-x] [Citation(s) in RCA: 132] [Impact Index Per Article: 26.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
Background Body CT scans are frequently performed for a wide variety of clinical indications, but potentially valuable biometric information typically goes unused. We investigated the prognostic ability of automated CT-based body composition biomarkers derived from previously-developed deep-learning and feature-based algorithms for predicting major cardiovascular events and overall survival in an adult screening cohort, compared with clinical parameters. Methods Mature and fully-automated CT-based algorithms with pre-defined metrics for quantifying aortic calcification, muscle density, visceral/subcutaneous fat, liver fat, and bone mineral density (BMD) were applied to a generally-healthy asymptomatic outpatient cohort of 9223 adults (mean age, 57.1 years; 5152 women) undergoing abdominal CT for routine colorectal cancer screening. Longitudinal clinical follow-up (median, 8.8 years; IQR, 5.1-11.6 years) documented subsequent major cardiovascular events or death in 19.7% (n=1831). Predictive ability of CT-based biomarkers was compared against the Framingham Risk Score (FRS) and body mass index (BMI). Findings Significant differences were observed for all five automated CT-based body composition measures according to adverse events (p<0.001). Univariate 5-year AUROC (with 95% CI) for automated CT-based aortic calcification, muscle density, visceral/subcutaneous fat ratio, liver density, and vertebral density for predicting death were 0.743(0.705-0.780)/0.721(0.683-0.759)/0.661(0.625-0.697)/0.619 (0.582-0.656)/0.646(0.603-0.688), respectively, compared with 0.499(0.454-0.544) for BMI and 0.688(0.650-0.727) for FRS (p<0.05 for aortic calcification vs. FRS and BMI); all trends were similar for 2-year and 10-year ROC analyses. 
Univariate hazard ratios (with 95% CIs) for highest-risk quartile versus others for these same CT measures were 4.53(3.82-5.37) /3.58(3.02-4.23)/2.28(1.92-2.71)/1.82(1.52-2.17)/2.73(2.31-3.23), compared with 1.36(1.13-1.64) and 2.82(2.36-3.37) for BMI and FRS, respectively. Similar significant trends were observed for cardiovascular events. Multivariate combinations of CT biomarkers further improved prediction over clinical parameters (p<0.05 for AUROCs). For example, by combining aortic calcification, muscle density, and liver density, the 2-year AUROC for predicting overall survival was 0.811 (0.761-0.860). Interpretation Fully-automated quantitative tissue biomarkers derived from CT scans can outperform established clinical parameters for pre-symptomatic risk stratification for future serious adverse events, and add opportunistic value to CT scans performed for other indications.
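The univariate AUROCs reported above have a simple probabilistic reading: the AUC equals the probability that a randomly chosen positive case (e.g. a patient who later died) receives a higher biomarker score than a randomly chosen negative case. A minimal sketch of that rank-based computation, with hypothetical scores that are not study data:

```python
def auroc(pos_scores, neg_scores):
    """AUC as the probability that a positive outranks a negative (ties count half)."""
    pairs = len(pos_scores) * len(neg_scores)
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / pairs

# Hypothetical aortic-calcification scores: higher in patients with events
deaths = [9.1, 7.4, 6.8]
survivors = [2.0, 6.8, 1.3]
auc = auroc(deaths, survivors)  # -> 8.5/9, about 0.944
```

For larger cohorts the same quantity is usually computed from ranks in a single pass, but the pairwise form makes the interpretation of values such as 0.743 vs 0.499 immediate: BMI at 0.499 is no better than chance.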
Collapse
Affiliation(s)
- Perry J Pickhardt
- The University of Wisconsin School of Medicine & Public Health, Madison, WI
| | - Peter M Graffy
- The University of Wisconsin School of Medicine & Public Health, Madison, WI
| | - Ryan Zea
- The University of Wisconsin School of Medicine & Public Health, Madison, WI
| | - Scott J Lee
- The University of Wisconsin School of Medicine & Public Health, Madison, WI
| | - Jiamin Liu
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD
| | - Veit Sandfort
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD
| | - Ronald M Summers
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD
| |
Collapse
|
183
|
Zerka F, Barakat S, Walsh S, Bogowicz M, Leijenaar RTH, Jochems A, Miraglio B, Townend D, Lambin P. Systematic Review of Privacy-Preserving Distributed Machine Learning From Federated Databases in Health Care. JCO Clin Cancer Inform 2020; 4:184-200. [PMID: 32134684 PMCID: PMC7113079 DOI: 10.1200/cci.19.00047] [Citation(s) in RCA: 54] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/16/2020] [Indexed: 02/06/2023] Open
Abstract
Big data for health care is one of the potential solutions to deal with the numerous challenges of health care, such as rising cost, aging population, precision medicine, universal health coverage, and the increase of noncommunicable diseases. However, data centralization for big data raises privacy and regulatory concerns. Covered topics include (1) an introduction to privacy of patient data and distributed learning as a potential solution to preserving these data, a description of the legal context for patient data research, and a definition of machine/deep learning concepts; (2) a presentation of the adopted review protocol; (3) a presentation of the search results; and (4) a discussion of the findings, limitations of the review, and future perspectives. Distributed learning from federated databases makes data centralization unnecessary. Distributed algorithms iteratively analyze separate databases, essentially sharing research questions and answers between databases instead of sharing the data. In other words, one can learn from separate and isolated datasets without patient data ever leaving the individual clinical institutes. Distributed learning promises great potential to facilitate big data for medical application, in particular for international consortiums. Our purpose is to review the major implementations of distributed learning in health care.
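The core idea of sharing answers rather than data can be illustrated with the simplest possible distributed statistic: each institute returns only an aggregate summary, and a coordinating site combines the summaries. A toy sketch with hypothetical values, not drawn from any cited implementation:

```python
def site_summary(values):
    # Runs locally at each institute: only aggregates leave the site
    return sum(values), len(values)

def federated_mean(summaries):
    # Runs at the coordinating site: combines answers, never raw records
    total = sum(s[0] for s in summaries)
    count = sum(s[1] for s in summaries)
    return total / count

# Hypothetical per-hospital lab values that never leave their sites
hospital_a = [5.1, 4.8, 5.4]
hospital_b = [4.9, 5.2]
pooled = federated_mean([site_summary(hospital_a), site_summary(hospital_b)])
```

Iterative distributed learning algorithms work in the same spirit, except the "answers" exchanged each round are model updates (e.g. gradients or coefficients) rather than sums and counts.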
Collapse
Affiliation(s)
- Fadila Zerka
- The D-Lab, Department of Precision Medicine, GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre, Maastricht, The Netherlands
- Oncoradiomics, Liège, Belgium
| | - Samir Barakat
- The D-Lab, Department of Precision Medicine, GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre, Maastricht, The Netherlands
- Oncoradiomics, Liège, Belgium
| | - Sean Walsh
- The D-Lab, Department of Precision Medicine, GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre, Maastricht, The Netherlands
- Oncoradiomics, Liège, Belgium
| | - Marta Bogowicz
- The D-Lab, Department of Precision Medicine, GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre, Maastricht, The Netherlands
- Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich, Switzerland
| | - Ralph T. H. Leijenaar
- The D-Lab, Department of Precision Medicine, GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre, Maastricht, The Netherlands
- Oncoradiomics, Liège, Belgium
| | - Arthur Jochems
- The D-Lab, Department of Precision Medicine, GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre, Maastricht, The Netherlands
| | | | - David Townend
- Department of Health, Ethics, and Society, CAPHRI (Care and Public Health Research Institute), Maastricht University, Maastricht, The Netherlands
| | - Philippe Lambin
- The D-Lab, Department of Precision Medicine, GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre, Maastricht, The Netherlands
| |
Collapse
|
184
|
Lee CS, Lee AY. How Artificial Intelligence Can Transform Randomized Controlled Trials. Transl Vis Sci Technol 2020; 9:9. [PMID: 32704415 PMCID: PMC7346875 DOI: 10.1167/tvst.9.2.9] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022] Open
Affiliation(s)
- Cecilia S Lee
- Department of Ophthalmology, University of Washington School of Medicine, Seattle, WA, USA
| | - Aaron Y Lee
- Department of Ophthalmology, University of Washington School of Medicine, Seattle, WA, USA.,eScience Institute, University of Washington, Seattle, WA, USA
| |
Collapse
|
185
|
Abstract
PURPOSE OF REVIEW Machine learning (ML) is increasingly being studied for the screening, diagnosis, and management of diabetes and its complications. Although various models of ML have been developed, most have not led to practical solutions for real-world problems. There has been a disconnect between ML developers, regulatory bodies, health services researchers, clinicians, and patients in their efforts. Our aim is to review the current status of ML in various aspects of diabetes care and identify key challenges that must be overcome to leverage ML to its full potential. RECENT FINDINGS ML has led to impressive progress in development of automated insulin delivery systems and diabetic retinopathy screening tools. Compared with these, use of ML in other aspects of diabetes is still at an early stage. The Food & Drug Administration (FDA) is adopting some innovative models to help bring technologies to the market in an expeditious and safe manner. ML has great potential in managing diabetes and the future is in furthering the partnership of regulatory bodies with health service researchers, clinicians, developers, and patients to improve the outcomes of populations and individual patients with diabetes.
Collapse
Affiliation(s)
- David T Broome
- Department of Endocrinology, Diabetes & Metabolism, Cleveland Clinic Foundation, F-20 9500 Euclid Avenue, Cleveland, OH, 44195, USA
| | - C Beau Hilton
- Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, 9500 Euclid Ave, Cleveland, OH, 44195, USA
| | - Neil Mehta
- Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, EC-40 9500 Euclid Ave, Cleveland, OH, 44195, USA.
| |
Collapse
|
186
|
Sun J, Tárnok A, Su X. Deep Learning-Based Single-Cell Optical Image Studies. Cytometry A 2020; 97:226-240. [PMID: 31981309 DOI: 10.1002/cyto.a.23973] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2019] [Revised: 01/03/2020] [Accepted: 01/10/2020] [Indexed: 12/17/2022]
Abstract
Optical imaging technology, which has the advantages of high sensitivity and cost-effectiveness, greatly promotes the progress of nondestructive single-cell studies. Complex cellular image analysis tasks such as three-dimensional reconstruction call for machine-learning technology in cell optical image research. With the rapid development of high-throughput imaging flow cytometry, large volumes of cell optical image data are obtained that may require machine learning for data analysis. In recent years, deep learning has been prevalent in the field of machine learning for large-scale image processing and analysis, bringing a new dawn for single-cell optical image studies with an explosive growth of data availability. Popular deep learning techniques offer new ideas for multimodal and multitask single-cell optical image research. This article provides an overview of the basic knowledge of deep learning and its applications in single-cell optical image studies. We explore the feasibility of applying deep learning techniques to single-cell optical image analysis, where popular techniques such as transfer learning, multimodal learning, multitask learning, and end-to-end learning have been reviewed. Image preprocessing and deep learning model training methods are then summarized. Applications based on deep learning techniques in the field of single-cell optical image studies are reviewed, including image segmentation, super-resolution image reconstruction, cell tracking, cell counting, cross-modal image reconstruction, and design and control of cell imaging systems. In addition, deep learning in popular single-cell optical imaging techniques such as label-free cell optical imaging, high-content screening, and high-throughput optical imaging cytometry is also mentioned. Finally, the perspectives of deep learning technology for single-cell optical image analysis are discussed. © 2020 International Society for Advancement of Cytometry.
Collapse
Affiliation(s)
- Jing Sun
- Institute of Biomedical Engineering, School of Control Science and Engineering, Shandong University, Jinan, 250061, China
| | - Attila Tárnok
- Department of Therapy Validation, Fraunhofer Institute for Cell Therapy and Immunology (IZI), Leipzig, Germany.,Institute for Medical Informatics, Statistics and Epidemiology (IMISE), University of Leipzig, Leipzig, Germany
| | - Xuantao Su
- Institute of Biomedical Engineering, School of Control Science and Engineering, Shandong University, Jinan, 250061, China
| |
Collapse
|
187
|
Almeida SD, Santinha J, Oliveira FPM, Ip J, Lisitskaya M, Lourenço J, Uysal A, Matos C, João C, Papanikolaou N. Quantification of tumor burden in multiple myeloma by atlas-based semi-automatic segmentation of WB-DWI. Cancer Imaging 2020; 20:6. [PMID: 31931880 PMCID: PMC6958755 DOI: 10.1186/s40644-020-0286-5] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2019] [Accepted: 01/06/2020] [Indexed: 12/31/2022] Open
Abstract
Background Whole-body diffusion weighted imaging (WB-DWI) has proven value to detect multiple myeloma (MM) lesions. However, the large volume of imaging data and the presence of numerous lesions make the reading process challenging. The aim of the current study was to develop a semi-automatic lesion segmentation algorithm for WB-DWI images in MM patients and to evaluate this smart-algorithm (SA) performance by comparing it to the manual segmentations performed by radiologists. Methods An atlas-based segmentation was developed to remove the high-signal intensity normal tissues on WB-DWI and to restrict the lesion area to the skeleton. Then, an outlier threshold-based segmentation was applied to WB-DWI images, and the segmented area's signal intensity was compared to the average signal intensity of a low-fat muscle on T1-weighted images. This method was validated in 22 whole-body DWI images of patients diagnosed with MM. Dice similarity coefficient (DSC), sensitivity and positive predictive value (PPV) were computed to evaluate the SA performance against the gold standard (GS) and to compare with the radiologists. A non-parametric Wilcoxon test was also performed. Apparent diffusion coefficient (ADC) histogram metrics and lesion volume were extracted for the GS segmentation and for the correctly identified lesions by SA and their correlation was assessed. Results The mean inter-radiologist DSC was 0.323 ± 0.268. The SA vs GS achieved a DSC of 0.274 ± 0.227, sensitivity of 0.764 ± 0.276 and PPV of 0.217 ± 0.207. Its distribution was not significantly different from the mean DSC of inter-radiologist segmentation (p = 0.108, Wilcoxon test). The intraclass correlation coefficient (ICC) between the GS and the lesions correctly identified by the SA was 0.996 for median ADC and 0.894 for lesion volume (p < 0.001). The duration of the lesion volume segmentation by the SA was, on average, 10.22 ± 0.86 min per patient.
Conclusions The SA provides equally reproducible segmentation results when compared to the manual segmentation of radiologists. Thus, the proposed method offers robust and efficient segmentation of MM lesions on WB-DWI. This method may aid accurate assessment of tumor burden and therefore provide insights to treatment response assessment.
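For reference, the three overlap metrics used above are simple functions of the voxel sets labelled as lesion by the algorithm and by the gold standard. A minimal sketch, with toy voxel sets that are illustrative only:

```python
def overlap_metrics(pred, gs):
    """pred, gs: sets of voxel coordinates labelled as lesion."""
    tp = len(pred & gs)                    # true-positive voxels
    dsc = 2 * tp / (len(pred) + len(gs))   # Dice similarity coefficient
    sensitivity = tp / len(gs)             # fraction of GS lesion recovered
    ppv = tp / len(pred)                   # fraction of prediction that is lesion
    return dsc, sensitivity, ppv

# Toy 2D voxel sets standing in for 3D lesion masks
pred = {(1, 1), (1, 2), (2, 2), (3, 3)}
gs = {(1, 2), (2, 2), (2, 3)}
dsc, sens, ppv = overlap_metrics(pred, gs)
```

Note that DSC penalizes both missed lesion voxels and over-segmentation, which is why the SA can pair a high sensitivity (0.764) with a low DSC (0.274) when it labels many extra voxels.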
Collapse
Affiliation(s)
- Sílvia D Almeida
- Computational Clinical Imaging Group, Champalimaud Foundation, Centre for the Unknown, Av. Brasília, Doca de Pedrouços, 1400-038, Lisbon, Portugal
| | - João Santinha
- Computational Clinical Imaging Group, Champalimaud Foundation, Centre for the Unknown, Av. Brasília, Doca de Pedrouços, 1400-038, Lisbon, Portugal
| | - Francisco P M Oliveira
- Radiopharmacology, Champalimaud Centre for the Unknown, Av. Brasília, 1400-038, Lisbon, Portugal
| | - Joana Ip
- Radiology Department, Champalimaud Centre for the Unknown, Av. Brasília, 1400-038, Lisbon, Portugal
| | - Maria Lisitskaya
- Radiology Department, Champalimaud Centre for the Unknown, Av. Brasília, 1400-038, Lisbon, Portugal
| | - João Lourenço
- Radiology Department, Champalimaud Centre for the Unknown, Av. Brasília, 1400-038, Lisbon, Portugal
| | - Aycan Uysal
- Radiology Department, Champalimaud Centre for the Unknown, Av. Brasília, 1400-038, Lisbon, Portugal
| | - Celso Matos
- Radiology Department, Champalimaud Centre for the Unknown, Av. Brasília, 1400-038, Lisbon, Portugal
| | - Cristina João
- Hematology Department, Champalimaud Centre for the Unknown, Av. Brasília, 1400-038, Lisbon, Portugal.,Immunology Department, Nova Medical School, Nova University of Lisbon, 1169-056, Lisbon, Portugal
| | - Nikolaos Papanikolaou
- Computational Clinical Imaging Group, Champalimaud Foundation, Centre for the Unknown, Av. Brasília, Doca de Pedrouços, 1400-038, Lisbon, Portugal.
| |
Collapse
|
188
|
Deep convolutional neural network-based detection of meniscus tears: comparison with radiologists and surgery as standard of reference. Skeletal Radiol 2020; 49:1207-1217. [PMID: 32170334 PMCID: PMC7299917 DOI: 10.1007/s00256-020-03410-2] [Citation(s) in RCA: 45] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/28/2019] [Revised: 02/11/2020] [Accepted: 03/01/2020] [Indexed: 02/02/2023]
Abstract
OBJECTIVE To clinically validate a fully automated deep convolutional neural network (DCNN) for detection of surgically proven meniscus tears. MATERIALS AND METHODS One hundred consecutive patients were retrospectively included, who underwent knee MRI and knee arthroscopy in our institution. All MRI were evaluated for medial and lateral meniscus tears by two musculoskeletal radiologists independently and by DCNN. Included patients were not part of the training set of the DCNN. Surgical reports served as the standard of reference. Statistics included sensitivity, specificity, accuracy, ROC curve analysis, and kappa statistics. RESULTS Fifty-seven percent (57/100) of patients had a tear of the medial and 24% (24/100) of the lateral meniscus, including 12% (12/100) with a tear of both menisci. For medial meniscus tear detection, sensitivity, specificity, and accuracy were for reader 1: 93%, 91%, and 92%, for reader 2: 96%, 86%, and 92%, and for the DCNN: 84%, 88%, and 86%. For lateral meniscus tear detection, sensitivity, specificity, and accuracy were for reader 1: 71%, 95%, and 89%, for reader 2: 67%, 99%, and 91%, and for the DCNN: 58%, 92%, and 84%. Sensitivity for medial meniscus tears was significantly different between reader 2 and the DCNN (p = 0.039), and no significant differences existed for all other comparisons (all p ≥ 0.092). The AUC-ROC of the DCNN was 0.882, 0.781, and 0.961 for detection of medial, lateral, and overall meniscus tear. Inter-reader agreement was very good for the medial (kappa = 0.876) and good for the lateral meniscus (kappa = 0.741). CONCLUSION DCNN-based meniscus tear detection can be performed in a fully automated manner with a similar specificity but a lower sensitivity in comparison with musculoskeletal radiologists.
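The inter-reader agreement reported above is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A small self-contained sketch; the reader labels below are made up for illustration, not the study's data:

```python
def cohens_kappa(r1, r2):
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n  # observed agreement
    labels = set(r1) | set(r2)
    # Chance agreement from each reader's marginal label frequencies
    p_exp = sum((r1.count(c) / n) * (r2.count(c) / n) for c in labels)
    return (p_obs - p_exp) / (1 - p_exp)  # undefined if both readers always agree by chance

# Hypothetical tear/no-tear calls (1 = tear) from two readers on 8 knees
reader1 = [1, 1, 1, 0, 0, 0, 1, 0]
reader2 = [1, 1, 0, 0, 0, 0, 1, 0]
kappa = cohens_kappa(reader1, reader2)  # -> 0.75
```

Values around 0.876 (medial) and 0.741 (lateral) thus indicate agreement well beyond chance, conventionally read as very good and good respectively.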
Collapse
|
189
|
Machine learning based quantification of ejection and filling parameters by fully automated dynamic measurement of left ventricular volumes from cardiac magnetic resonance images. Magn Reson Imaging 2019; 67:28-32. [PMID: 31838116 DOI: 10.1016/j.mri.2019.12.004] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2019] [Revised: 11/13/2019] [Accepted: 12/07/2019] [Indexed: 12/19/2022]
Abstract
BACKGROUND Although analysis of cardiac magnetic resonance (CMR) images provides accurate and reproducible measurements of left ventricular (LV) volumes, these measurements are usually not performed throughout the cardiac cycle because of lack of tools that would allow such analysis within a reasonable timeframe. A fully-automated machine-learning (ML) algorithm was recently developed to automatically generate LV volume-time curves. Our aim was to validate ejection and filling parameters calculated from these curves using conventional analysis as a reference. METHODS We studied 21 patients undergoing clinical CMR examinations. LV volume-time curves were obtained using the ML-based algorithm (Neosoft), and independently using slice-by-slice, frame-by-frame manual tracing of the endocardial boundaries. Ejection and filling parameters derived from these curves were compared between the two techniques. For each parameter, Bland-Altman bias and limits of agreement (LOA) were expressed in percent of the mean measured value. RESULTS Time-volume curves were generated using the automated ML analysis within 2.5 ± 0.5 min, considerably faster than the manual analysis (43 ± 14 min per patient, including ~10 slices with 25-32 frames per slice). Time-volume curves were similar between the two techniques in magnitude and shape. Size and function parameters extracted from these curves showed no significant inter-technique differences, reflected by high correlations, small biases (<10%) and mostly reasonably narrow LOA. CONCLUSION ML software for dynamic LV volume measurement allows fast and accurate, fully automated analysis of ejection and filling parameters, compared to manual tracing based analysis. The ability to quickly evaluate time-volume curves is important for a more comprehensive evaluation of the patient's cardiac function.
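The Bland-Altman bias and limits of agreement, expressed as a percent of the mean measured value as in this study, can be sketched in a few lines. The paired volumes below are hypothetical, not patient data:

```python
from statistics import mean, stdev

def bland_altman_pct(x, y):
    """Bias and 95% limits of agreement of x - y, in percent of the mean measured value."""
    diffs = [a - b for a, b in zip(x, y)]
    scale = mean(x + y)              # mean measured value across both methods
    bias = mean(diffs)
    spread = 1.96 * stdev(diffs)     # sample SD of the paired differences
    return (100 * bias / scale,
            100 * (bias - spread) / scale,
            100 * (bias + spread) / scale)

# Hypothetical LV end-diastolic volumes (ml): ML-based vs manual tracing
ml_vols = [150.0, 162.0, 140.0, 155.0]
manual_vols = [148.0, 165.0, 138.0, 154.0]
bias_pct, loa_lo, loa_hi = bland_altman_pct(ml_vols, manual_vols)
```

A bias under 10% with narrow limits of agreement, as reported here, means the automated curves can substitute for manual tracing without a systematic offset.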
Collapse
|
190
|
Mostapha M, Styner M. Role of deep learning in infant brain MRI analysis. Magn Reson Imaging 2019; 64:171-189. [PMID: 31229667 PMCID: PMC6874895 DOI: 10.1016/j.mri.2019.06.009] [Citation(s) in RCA: 31] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2019] [Revised: 06/06/2019] [Accepted: 06/08/2019] [Indexed: 12/17/2022]
Abstract
Deep learning algorithms, and in particular convolutional networks, have shown tremendous success in medical image analysis applications, though relatively few methods have been applied to infant MRI data due to numerous inherent challenges such as inhomogeneous tissue appearance across the image, considerable image intensity variability across the first year of life, and a low signal-to-noise setting. This paper presents methods addressing these challenges in two selected applications, specifically infant brain tissue segmentation at the isointense stage and presymptomatic disease prediction in neurodevelopmental disorders. Corresponding methods are reviewed and compared, and open issues are identified, namely low data size restrictions, class imbalance problems, and lack of interpretability of the resulting deep learning solutions. We discuss how existing solutions can be adapted to approach these issues as well as how generative models seem to be a particularly strong contender to address them.
Collapse
Affiliation(s)
- Mahmoud Mostapha
- Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, United States of America.
| | - Martin Styner
- Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, United States of America; Neuro Image Research and Analysis Lab, Department of Psychiatry, University of North Carolina at Chapel Hill, NC 27599, United States of America.
| |
Collapse
|
191
|
Tran WT, Jerzak K, Lu FI, Klein J, Tabbarah S, Lagree A, Wu T, Rosado-Mendez I, Law E, Saednia K, Sadeghi-Naini A. Personalized Breast Cancer Treatments Using Artificial Intelligence in Radiomics and Pathomics. J Med Imaging Radiat Sci 2019; 50:S32-S41. [DOI: 10.1016/j.jmir.2019.07.010] [Citation(s) in RCA: 30] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2019] [Accepted: 07/22/2019] [Indexed: 12/19/2022]
|
192
|
Machine learning can accurately predict pre-admission baseline hemoglobin and creatinine in intensive care patients. NPJ Digit Med 2019; 2:116. [PMID: 31815192 PMCID: PMC6884624 DOI: 10.1038/s41746-019-0192-z] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2019] [Accepted: 10/16/2019] [Indexed: 02/07/2023] Open
Abstract
Patients admitted to the intensive care unit frequently have anemia and impaired renal function, but often lack historical blood results to contextualize the acuteness of these findings. Using data available within two hours of ICU admission, we developed machine learning models that accurately (AUC 0.86-0.89) classify an individual patient's baseline hemoglobin and creatinine levels. Compared to assuming the baseline to be the same as the admission lab value, machine learning performed significantly better at classifying acute kidney injury regardless of initial creatinine value, and significantly better at predicting baseline hemoglobin value in patients with admission hemoglobin of <10 g/dl.
Collapse
|
193
|
Salvador R, Canales-Rodríguez E, Guerrero-Pedraza A, Sarró S, Tordesillas-Gutiérrez D, Maristany T, Crespo-Facorro B, McKenna P, Pomarol-Clotet E. Multimodal Integration of Brain Images for MRI-Based Diagnosis in Schizophrenia. Front Neurosci 2019; 13:1203. [PMID: 31787874 PMCID: PMC6855131 DOI: 10.3389/fnins.2019.01203] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2019] [Accepted: 10/23/2019] [Indexed: 12/12/2022] Open
Abstract
Magnetic resonance imaging (MRI) has been proposed as a source of information for automatic prediction of individual diagnosis in schizophrenia. Optimal integration of data from different MRI modalities is an active area of research aimed at increasing diagnostic accuracy. Based on a sample of 96 patients with schizophrenia and a matched sample of 115 healthy controls that had undergone a single multimodal MRI session, we generated individual brain maps of gray matter volume (VBM), 1-back and 2-back levels of activation (n-back fMRI), maps of amplitude of low-frequency fluctuations (resting-state fMRI), and maps of weighted global brain connectivity (resting-state fMRI). Four unimodal classifiers (Ridge, Lasso, Random Forests, and Gradient boosting) were applied to these maps to evaluate their classification accuracies. Based on the assignments made by the algorithms on test individuals, we quantified the amount of predictive information shared between maps (what we call redundancy analysis). Finally, we explored the added accuracy provided by a set of multimodal strategies that included post-classification integration based on probabilities, two-step sequential integration, and voxel-level multimodal integration through one-dimensional convolutional neural networks (1D-CNNs). All four unimodal classifiers showed the highest test accuracies with the 2-back maps (80% on average) achieving a maximum of 84% with the Lasso. Redundancy levels between brain maps were generally low (overall mean redundancy score of 0.14 in a 0–1 range), indicating that each brain map contained differential predictive information. The highest multimodal accuracy was delivered by the two-step Ridge classifier (87%) followed by the Ridge maximum and mean probability classifiers (both with 85% accuracy) and by the 1D-CNN, which achieved the same accuracy as the best unimodal classifier (84%).
From these results, we conclude that, of all the MRI modalities evaluated, task-based fMRI may be the best unimodal diagnostic option in schizophrenia. Low redundancy values point to ample potential for accuracy improvements through multimodal integration, with the two-step Ridge emerging as a suitable strategy.
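Post-classification integration of the kind evaluated here reduces to fusing the per-modality probabilities that each unimodal classifier assigns to a subject. A minimal sketch of the mean- and maximum-probability rules; the probabilities are invented for illustration:

```python
def fuse(probs, rule="mean"):
    """Combine per-modality patient probabilities into one diagnostic call."""
    fused = sum(probs) / len(probs) if rule == "mean" else max(probs)
    return fused, fused >= 0.5          # fused probability and hard label

# Hypothetical per-modality probabilities for one subject
# (VBM, 1-back, 2-back, ALFF, global connectivity)
p = [0.35, 0.60, 0.80, 0.45, 0.55]
mean_p, mean_call = fuse(p, "mean")     # ~0.55 -> classified as patient
max_p, max_call = fuse(p, "max")        # 0.80 -> classified as patient
```

The two-step sequential strategy differs in that the per-modality outputs are fed as features to a second-stage classifier rather than combined by a fixed rule.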
Collapse
Affiliation(s)
- Raymond Salvador
- FIDMAG Hermanas Hospitalarias Research Foundation, Barcelona, Spain.,Centro de Investigación Biomédica en Red de Salud Mental, Madrid, Spain
| | - Erick Canales-Rodríguez
- FIDMAG Hermanas Hospitalarias Research Foundation, Barcelona, Spain.,Centro de Investigación Biomédica en Red de Salud Mental, Madrid, Spain
| | | | - Salvador Sarró
- FIDMAG Hermanas Hospitalarias Research Foundation, Barcelona, Spain.,Centro de Investigación Biomédica en Red de Salud Mental, Madrid, Spain
| | - Diana Tordesillas-Gutiérrez
- Centro de Investigación Biomédica en Red de Salud Mental, Madrid, Spain.,Hospital Universitario Marqués de Valdecilla, Universidad de Cantabria, Santander, Spain
| | | | - Benedicto Crespo-Facorro
- Centro de Investigación Biomédica en Red de Salud Mental, Madrid, Spain.,Hospital Universitario Marqués de Valdecilla, Universidad de Cantabria, Santander, Spain
| | - Peter McKenna
- FIDMAG Hermanas Hospitalarias Research Foundation, Barcelona, Spain.,Centro de Investigación Biomédica en Red de Salud Mental, Madrid, Spain
| | - Edith Pomarol-Clotet
- FIDMAG Hermanas Hospitalarias Research Foundation, Barcelona, Spain.,Centro de Investigación Biomédica en Red de Salud Mental, Madrid, Spain
| |
Collapse
|
194
|
Paliwal N, Jaiswal P, Tutino VM, Shallwani H, Davies JM, Siddiqui AH, Rai R, Meng H. Outcome prediction of intracranial aneurysm treatment by flow diverters using machine learning. Neurosurg Focus 2019; 45:E7. [PMID: 30453461 DOI: 10.3171/2018.8.focus18332] [Citation(s) in RCA: 60] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2018] [Accepted: 08/21/2018] [Indexed: 12/12/2022]
Abstract
OBJECTIVE Flow diverters (FDs) are designed to occlude intracranial aneurysms (IAs) while preserving flow to essential arteries. Incomplete occlusion exposes patients to risks of thromboembolic complications and rupture. A priori assessment of FD treatment outcome could enable treatment optimization leading to better outcomes. To that end, the authors applied image-based computational analysis to clinically FD-treated aneurysms to extract information regarding morphology, pre- and post-treatment hemodynamics, and FD-device characteristics and then used these parameters to train machine learning algorithms to predict 6-month clinical outcomes after FD treatment. METHODS Data were retrospectively collected for 84 FD-treated sidewall aneurysms in 80 patients. Based on 6-month angiographic outcomes, IAs were classified as occluded (n = 63) or residual (incomplete occlusion, n = 21). For each case, the authors modeled FD deployment using a fast virtual stenting algorithm and hemodynamics using image-based computational fluid dynamics. Sixteen morphological, hemodynamic, and FD-based parameters were calculated for each aneurysm. Aneurysms were randomly assigned to a training or testing cohort in approximately a 3:1 ratio. The Student t-test and Mann-Whitney U-test were performed on data from the training cohort to identify significant parameters distinguishing the occluded from residual groups. Predictive models were trained using 4 types of supervised machine learning algorithms: logistic regression (LR), support vector machine (SVM; linear and Gaussian kernels), K-nearest neighbor, and neural network (NN). In the testing cohort, the authors compared outcome prediction by each model trained using all parameters versus only the significant parameters. RESULTS The training cohort (n = 64) consisted of 48 occluded and 16 residual aneurysms and the testing cohort (n = 20) consisted of 15 occluded and 5 residual aneurysms.
Significance tests yielded 2 morphological (ostium ratio and neck ratio) and 3 hemodynamic (pre-treatment inflow rate, post-treatment inflow rate, and post-treatment aneurysm-averaged velocity) discriminants between the occluded (good-outcome) and residual (bad-outcome) groups. In both training and testing, all models trained using all 16 parameters performed better than those trained using only the 5 significant parameters. Among the all-parameter models, NN (AUC = 0.967) performed best during training, followed by LR and linear SVM (AUC = 0.941 and 0.914, respectively). During testing, the NN and Gaussian-SVM models had the highest accuracy (90%) in predicting occlusion outcome. CONCLUSIONS NN and Gaussian-SVM models incorporating all 16 morphological, hemodynamic, and FD-related parameters predicted the 6-month occlusion outcome of FD treatment with 90% accuracy. More robust models using this computational workflow and machine learning could be trained on larger patient databases toward clinical use in patient-specific treatment planning and optimization.
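The model comparison in this abstract is driven by ROC-AUC. As an illustrative sketch (not the authors' code), the AUC of a binary classifier can be computed directly from predicted scores as the probability that a randomly chosen positive case (e.g., occluded) outranks a randomly chosen negative case (residual); the example labels and scores below are invented:

```python
def roc_auc(labels, scores):
    # AUC as the Mann-Whitney statistic: the fraction of positive/negative
    # pairs in which the positive sample receives the higher score
    # (ties count as half a win).
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical scores for 2 residual (0) and 2 occluded (1) test aneurysms
print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

A value of 1.0 means perfect ranking and 0.5 is chance level, so the training AUCs of 0.91-0.97 reported above indicate strong separation between the occluded and residual groups.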
Affiliation(s)
- Nikhil Paliwal
- Departments of Mechanical & Aerospace Engineering and Canon Stroke and Vascular Research Center, University at Buffalo, the State University of New York, Buffalo, New York
- Vincent M Tutino
- Canon Stroke and Vascular Research Center and Department of Biomedical Engineering, University at Buffalo, the State University of New York, Buffalo, New York
- Adnan H Siddiqui
- Canon Stroke and Vascular Research Center and Department of Neurosurgery, University at Buffalo, the State University of New York, Buffalo, New York
- Rahul Rai
- Department of Mechanical & Aerospace Engineering, University at Buffalo, the State University of New York, Buffalo, New York
- Hui Meng
- Departments of Mechanical & Aerospace Engineering and Biomedical Engineering, University at Buffalo, the State University of New York, Buffalo, New York
|
195
|
Lee LIT, Kanthasamy S, Ayyalaraju RS, Ganatra R. The Current State of Artificial Intelligence in Medical Imaging and Nuclear Medicine. BJR Open 2019; 1:20190037. [PMID: 33178956 PMCID: PMC7592467 DOI: 10.1259/bjro.20190037] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2019] [Revised: 09/08/2019] [Accepted: 09/25/2019] [Indexed: 12/31/2022] Open
Abstract
The last decade has seen a huge surge in interest surrounding artificial intelligence (AI). AI has existed since the 1950s, although technological limitations in the early days meant its performance was initially inferior to that of humans. With rapid progress in algorithm design, the growth of vast digital datasets and the development of powerful computing hardware, AI now has the capability to outperform humans. Consequently, the integration of AI into the modern world is accelerating. This review article gives an overview of the use of AI in the modern world and discusses current and potential uses in healthcare, with a particular focus on its applications and likely impact in medical imaging. We also discuss the consequences and challenges of AI integration into healthcare.
Affiliation(s)
- Louise I T Lee
- Department of Radiology, University Hospitals of Leicester, Leicester Royal Infirmary, England, UK
- Senthooran Kanthasamy
- Department of Trauma and Orthopaedics, Cambridge University Hospitals, Addenbrooke's Hospital, England, UK
- Rakesh Ganatra
- Department of Nuclear Medicine, University Hospitals of Leicester, Glenfield Hospital, England, UK
|
196
|
Hoodbhoy Z, Noman M, Shafique A, Nasim A, Chowdhury D, Hasan B. Use of Machine Learning Algorithms for Prediction of Fetal Risk using Cardiotocographic Data. Int J Appl Basic Med Res 2019; 9:226-230. [PMID: 31681548 PMCID: PMC6822315 DOI: 10.4103/ijabmr.ijabmr_370_18] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2018] [Revised: 03/29/2019] [Accepted: 08/01/2019] [Indexed: 12/13/2022] Open
Abstract
Background A major contributor to under-five mortality is the death of children in the 1st month of life. Intrapartum complications are one of the major causes of perinatal mortality. Fetal cardiotocographs (CTGs) can be used as a monitoring tool to identify high-risk women during labor. Aim The objective of this study was to assess the precision of machine learning techniques applied to CTG data in identifying high-risk fetuses. Methods CTG data of 2126 pregnant women were obtained from the University of California Irvine Machine Learning Repository. Ten different machine learning classification models were trained using the CTG data. Sensitivity, precision, and F1 score for each class and overall accuracy of each model were obtained to predict normal, suspect, and pathological fetal states. The model with the best performance on the specified metrics was then identified. Results Using obstetricians' interpretation of the CTGs as the gold standard, 70% were normal, 20% were suspect, and 10% indicated a pathological fetal state. On training data, the classification models generated by XGBoost, decision tree, and random forest had high precision (>96%) in predicting the suspect and pathological states of the fetus based on the CTG tracings. However, on testing data, the XGBoost model had the highest precision in predicting a pathological fetal state (>92%). Conclusion The classification model developed using the XGBoost technique had the highest prediction accuracy for an adverse fetal outcome. Lay health-care workers in low- and middle-income countries can use this model to triage pregnant women in remote areas for early referral and further management.
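The study above reports sensitivity (recall), precision, and F1 per fetal state. A minimal stdlib sketch of those per-class metrics (illustrative only; the short class labels below are placeholders, not the dataset's encoding):

```python
from collections import Counter

def per_class_metrics(y_true, y_pred):
    # Precision, recall, and F1 for each class label, as reported per
    # fetal state (normal / suspect / pathological) in the study.
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1  # predicted p, but wasn't p
            fn[t] += 1  # was t, but missed
    out = {}
    for c in set(y_true) | set(y_pred):
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        out[c] = (prec, rec, f1)
    return out
```

With labels "n"/"s"/"p" for normal/suspect/pathological, a single misclassified normal trace lowers that class's recall while lowering the precision of the class it was mistaken for.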
Affiliation(s)
- Zahra Hoodbhoy
- Department of Paediatrics and Child Health, The Aga Khan University, Karachi, Pakistan
- Mohammad Noman
- Department of Artificial Intelligence, Ephlux Pvt Ltd., Karachi, Pakistan
- Ayesha Shafique
- Department of Artificial Intelligence, Ephlux Pvt Ltd., Karachi, Pakistan
- Ali Nasim
- Department of Artificial Intelligence, Ephlux Pvt Ltd., Karachi, Pakistan
- Babar Hasan
- Department of Paediatrics and Child Health, The Aga Khan University, Karachi, Pakistan
|
197
|
Wildeboer RR, Mannaerts CK, van Sloun RJG, Budäus L, Tilki D, Wijkstra H, Salomon G, Mischi M. Automated multiparametric localization of prostate cancer based on B-mode, shear-wave elastography, and contrast-enhanced ultrasound radiomics. Eur Radiol 2019; 30:806-815. [PMID: 31602512 PMCID: PMC6957554 DOI: 10.1007/s00330-019-06436-w] [Citation(s) in RCA: 57] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2019] [Accepted: 08/27/2019] [Indexed: 12/17/2022]
Abstract
OBJECTIVES The aim of this study was to assess the potential of machine learning based on B-mode, shear-wave elastography (SWE), and dynamic contrast-enhanced ultrasound (DCE-US) radiomics for the localization of prostate cancer (PCa) lesions using transrectal ultrasound. METHODS This study was approved by the institutional review board and comprised 50 men with biopsy-confirmed PCa who were referred for radical prostatectomy. Prior to surgery, patients received transrectal ultrasound (TRUS), SWE, and DCE-US for three imaging planes. The images were automatically segmented and registered. First, model-based features related to contrast perfusion and dispersion were extracted from the DCE-US videos. Subsequently, radiomics were retrieved from all modalities. Machine learning was applied through a random forest classification algorithm, using the co-registered histopathology from the radical prostatectomy specimens as a reference to draw benign and malignant regions of interest. To avoid overfitting, the performance of the multiparametric classifier was assessed through leave-one-patient-out cross-validation. RESULTS The multiparametric classifier reached a region-wise area under the receiver operating characteristic curve (ROC-AUC) of 0.75 and 0.90 for PCa and Gleason > 3 + 4 significant PCa, respectively, thereby outperforming the best-performing single parameter (i.e., contrast velocity), which yielded ROC-AUCs of 0.69 and 0.76, respectively. Machine learning revealed that combinations of perfusion-, dispersion-, and elasticity-related features were favored. CONCLUSIONS In this paper, the technical feasibility of multiparametric machine learning to improve upon single US modalities for the localization of PCa has been demonstrated. Extended datasets for training and testing may establish the clinical value of automatic multiparametric US classification in the early diagnosis of PCa.
KEY POINTS • Combination of B-mode ultrasound, shear-wave elastography, and contrast ultrasound radiomics through machine learning is technically feasible. • Multiparametric ultrasound demonstrated a higher prostate cancer localization ability than single ultrasound modalities. • Computer-aided multiparametric ultrasound could help clinicians in biopsy targeting.
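Leave-one-patient-out cross-validation, used above to avoid overfitting, holds out every region of interest from one patient per fold so no patient contributes to both training and testing. A minimal sketch of the index splitting (an illustration, not the authors' pipeline):

```python
def leave_one_patient_out(patient_ids):
    # Yield (train_idx, test_idx) pairs, holding out all samples from one
    # patient at a time; patient_ids[i] is the patient owning sample i.
    for held_out in sorted(set(patient_ids)):
        test = [i for i, p in enumerate(patient_ids) if p == held_out]
        train = [i for i, p in enumerate(patient_ids) if p != held_out]
        yield train, test
```

Grouping splits by patient rather than by region matters because regions from the same prostate are correlated; a per-region split would leak patient-specific texture into the training set and inflate the ROC-AUC.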
Affiliation(s)
- Rogier R Wildeboer
- Lab of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, De Rondom 70, 5612 AP, Eindhoven, The Netherlands.
- Christophe K Mannaerts
- Department of Urology, Amsterdam University Medical Centers, University of Amsterdam, Meibergdreef 9, 1105 AZ, Amsterdam, The Netherlands
- Ruud J G van Sloun
- Lab of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, De Rondom 70, 5612 AP, Eindhoven, The Netherlands
- Lars Budäus
- Martini-Clinic - Prostate Cancer Center, University Hospital Hamburg Eppendorf, Martinistraße 52, 20246, Hamburg, Germany
- Derya Tilki
- Martini-Clinic - Prostate Cancer Center, University Hospital Hamburg Eppendorf, Martinistraße 52, 20246, Hamburg, Germany, and Department of Urology, University Hospital Hamburg-Eppendorf, Martinistraße 52, 20251, Hamburg, Germany
- Hessel Wijkstra
- Lab of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, De Rondom 70, 5612 AP, Eindhoven, The Netherlands, and Department of Urology, Amsterdam University Medical Centers, University of Amsterdam, Meibergdreef 9, 1105 AZ, Amsterdam, The Netherlands
- Georg Salomon
- Martini-Clinic - Prostate Cancer Center, University Hospital Hamburg Eppendorf, Martinistraße 52, 20246, Hamburg, Germany
- Massimo Mischi
- Lab of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, De Rondom 70, 5612 AP, Eindhoven, The Netherlands
|
198
|
Ramkumar PN, Haeberle HS, Ramanathan D, Cantrell WA, Navarro SM, Mont MA, Bloomfield M, Patterson BM. Remote Patient Monitoring Using Mobile Health for Total Knee Arthroplasty: Validation of a Wearable and Machine Learning-Based Surveillance Platform. J Arthroplasty 2019; 34:2253-2259. [PMID: 31128890 DOI: 10.1016/j.arth.2019.05.021] [Citation(s) in RCA: 94] [Impact Index Per Article: 15.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/28/2019] [Revised: 05/07/2019] [Accepted: 05/10/2019] [Indexed: 02/01/2023] Open
Abstract
BACKGROUND Recent technologic advances capable of measuring outcomes after total knee arthroplasty (TKA) are critical in quantifying value-based care. Outcome measurement has traditionally been accomplished through office assessments and surveys with variable follow-up, a strategy that lacks continuous and complete data. The primary objective of this study was to validate the feasibility of a remote patient monitoring (RPM) system in terms of the frequency of data interruptions and patient acceptance. Second, we report pilot data for (1) mobility; (2) knee range of motion; (3) patient-reported outcome measures (PROMs); (4) opioid use; and (5) home exercise program (HEP) compliance. METHODS A pilot cohort of 25 patients undergoing primary TKA for osteoarthritis was enrolled. Patients downloaded the RPM mobile application preoperatively to collect baseline activity and PROMs data, and the wearable knee sleeve was paired to the smartphone during admission. The following data were collected up to 3 months postoperatively: mobility (step count), range of motion, PROMs, opioid consumption, and HEP compliance. Validation was determined by acquisition of continuous data and patient tolerance at semistructured interviews 3 months after the operation. RESULTS Of the 25 enrolled patients, 100% had uninterrupted passive data collection. All 22 patients available for follow-up interviews found the system motivating and engaging. Mean mobility returned to baseline within 6 weeks and exceeded the preoperative baseline by 30% at 3 months. Mean knee flexion achieved was 119°, which did not differ from clinic measurements (P = .31). Mean KOOS improvement was 39.3 after 3 months (range: 3-60). Opioid use typically stopped by postoperative day 5. HEP compliance was 62% (range: 0%-99%). CONCLUSIONS In this pilot study, we established the ability to remotely acquire continuous data for patients undergoing TKA, who found the application engaging.
RPM offers the newfound ability to more completely evaluate patients undergoing TKA in terms of mobility and rehabilitation compliance. A larger study is required to establish clinical significance.
Affiliation(s)
- Prem N Ramkumar
- Machine Learning Arthroplasty Lab, Cleveland Clinic, Cleveland, OH
- Heather S Haeberle
- Department of Orthopaedic Surgery, Baylor College of Medicine, Houston, TX
- Michael A Mont
- Lenox Hill Department of Orthopaedic Surgery, New York, NY
|
199
|
Bini SA, Shah RF, Bendich I, Patterson JT, Hwang KM, Zaid MB. Machine Learning Algorithms Can Use Wearable Sensor Data to Accurately Predict Six-Week Patient-Reported Outcome Scores Following Joint Replacement in a Prospective Trial. J Arthroplasty 2019; 34:2242-2247. [PMID: 31439405 DOI: 10.1016/j.arth.2019.07.024] [Citation(s) in RCA: 48] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/20/2019] [Revised: 07/16/2019] [Accepted: 07/18/2019] [Indexed: 02/01/2023] Open
Abstract
BACKGROUND Tracking patient-generated health data (PGHD) following total joint arthroplasty (TJA) may enable data-driven early intervention to improve clinical results. We aim to demonstrate the feasibility of combining machine learning (ML) with PGHD in TJA to predict patient-reported outcome measures (PROMs). METHODS Twenty-two TJA patients were recruited for this pilot study. Three activity trackers collected 35 features from 4 weeks before to 6 weeks following surgery. PROMs were collected at both endpoints (Hip and Knee Disability and Osteoarthritis Outcome Score, Knee Osteoarthritis Outcome Score, and Veterans RAND 12-Item Health Survey Physical Component Score). We used ML to identify the features with the highest correlation with PROMs. The algorithm was trained on a subset of patients and used 3 feature sets (A, B, and C) to assign the remaining patients to one of 3 PROM clusters. RESULTS Fifteen patients completed the study, yielding 3 million data points. The three sets of features with the highest R2 values relative to PROMs were selected (A, B, and C). Data collected through the 11th day had the highest predictive value. The ML algorithm grouped patients into 3 clusters predictive of 6-week PROM results, yielding total sum of squares values ranging from 3.86 (A) to 1.86 (C). CONCLUSION This small but critical proof-of-concept study demonstrates that ML can be used in combination with PGHD to predict 6-week PROM data as early as 11 days following TJA surgery. Further study is needed to confirm these findings and their clinical value.
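Feature selection in this study ranked activity-tracker features by their R2 against PROM scores. A stdlib sketch of the univariate R2 computation (illustrative; it assumes non-constant inputs, and the feature and outcome values in the test are made up):

```python
def r_squared(x, y):
    # Coefficient of determination of a univariate least-squares fit,
    # usable to rank sensor features (x) by association with a PROM
    # score (y). Assumes x and y are non-constant sequences.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)
```

Ranking features by this value and keeping the top few per PROM is one simple way to arrive at compact feature sets like the study's A, B, and C.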
Affiliation(s)
- Stefano A Bini
- Department of Orthopaedic Surgery, University of California, San Francisco, San Francisco, CA
- Romil F Shah
- Department of Orthopaedic Surgery, University of California, San Francisco, San Francisco, CA
- Ilya Bendich
- Department of Orthopaedic Surgery, University of California, San Francisco, San Francisco, CA
- Joseph T Patterson
- Department of Orthopaedic Surgery, University of California, San Francisco, San Francisco, CA
- Kevin M Hwang
- Department of Orthopaedic Surgery, University of California, San Francisco, San Francisco, CA
- Musa B Zaid
- Department of Orthopaedic Surgery, University of California, San Francisco, San Francisco, CA
|
200
|
Shah RF, Zaid MB, Bendich I, Hwang KM, Patterson JT, Bini SA. Optimal Sampling Frequency for Wearable Sensor Data in Arthroplasty Outcomes Research. A Prospective Observational Cohort Trial. J Arthroplasty 2019; 34:2248-2252. [PMID: 31445866 DOI: 10.1016/j.arth.2019.08.001] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/25/2019] [Revised: 07/31/2019] [Accepted: 08/01/2019] [Indexed: 02/01/2023] Open
Abstract
BACKGROUND Wearable sensors can track patient activity after surgery. The optimal data sampling frequency for identifying an association between patient-reported outcome measures (PROMs) and sensor data is unknown. Most commercial-grade sensors report 24-hour average data. We hypothesized that increasing the frequency of data collection may improve the correlation with PROM data. METHODS Twenty-two total joint arthroplasty (TJA) patients were prospectively recruited and provided wearable sensors. Second-by-second (Raw) and 24-hour average (24Hr) data were collected on 7 gait metrics on the 1st, 7th, 14th, 21st, and 42nd days postoperatively. The average of each metric, as well as the slope of a linear regression of the 24Hr data (24HrLR), was calculated. R2 associations with individual PROM results at 6 weeks were calculated using machine learning algorithms. The resulting R2 values were defined as having a mild, moderate, or strong fit (R2 ≥ 0.2, ≥0.3, and ≥0.6, respectively) with PROM results. The difference in frequency of fit was analyzed with McNemar's test. RESULTS The frequency of at least a mild fit (R2 ≥ 0.2) for any data point at any time frame relative to either of the PROMs measured was higher for Raw data (42%) than for 24Hr data (32%; P = .041). There was no difference in frequency of fit between 24HrLR data (32%) and 24Hr data (32%; P > .05). Longer data collection improved the frequency of fit. CONCLUSION In this prospective trial, increasing the sampling frequency above the standard 24-hour average provided by consumer-grade activity sensors improved the ability of machine learning algorithms to predict 6-week PROMs in our total joint arthroplasty cohort.
Affiliation(s)
- Romil F Shah
- Department of Orthopedic Surgery, University of California, San Francisco, San Francisco, CA
- Musa B Zaid
- Department of Orthopedic Surgery, University of California, San Francisco, San Francisco, CA
- Ilya Bendich
- Department of Orthopedic Surgery, University of California, San Francisco, San Francisco, CA
- Kevin M Hwang
- Department of Orthopedic Surgery, University of California, San Francisco, San Francisco, CA
- Joseph T Patterson
- Department of Orthopedic Surgery, University of California, San Francisco, San Francisco, CA
- Stefano A Bini
- Department of Orthopedic Surgery, University of California, San Francisco, San Francisco, CA
|