1
Wilson SB, Ward J, Munjal V, Lam CSA, Patel M, Zhang P, Xu DS, Chakravarthy VB. Machine Learning in Spine Oncology: A Narrative Review. Global Spine J 2024:21925682241261342. PMID: 38860699. DOI: 10.1177/21925682241261342.
Abstract
STUDY DESIGN Narrative review. OBJECTIVE Machine learning (ML) is one of the latest advancements in artificial intelligence applied in medicine and surgery, with the potential to significantly change how physicians diagnose, prognosticate, and treat spine tumors. In spine oncology, ML is used to analyze and interpret medical imaging and to classify tumors with high accuracy. The authors present a narrative review specifically addressing the use of machine learning in spine oncology. METHODS This study was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology. A systematic search of the PubMed, EMBASE, Web of Science, Scopus, and Cochrane Library databases from inception was performed to identify all clinical studies matching the search terms '[[Machine Learning] OR [Artificial Intelligence]] AND [[Spine Oncology] OR [Spine Cancer]]'. Data extracted from included studies comprised the algorithms used, training and test set sizes, and reported outcomes. Studies were grouped by the tumor type investigated with the machine learning algorithms: primary, metastatic, both, or intradural. A minimum of 2 independent reviewers conducted the study appraisal, data abstraction, and quality assessment. RESULTS Forty-five studies met inclusion criteria out of 480 references screened from the initial search results. Studies were grouped by metastatic, primary, and intradural tumors. The majority of ML studies relevant to spine oncology focused on using a mixture of clinical and imaging features to risk-stratify mortality and frailty. Overall, these studies showed that ML is a helpful tool for tumor detection, differentiation, and segmentation, and for predicting survival and readmission rates in patients with primary, metastatic, or intradural spine tumors.
CONCLUSION Specialized neural networks and deep learning algorithms have been shown to be highly effective at predicting malignant probability and aiding diagnosis. ML algorithms can predict the risk of tumor recurrence or progression based on imaging and clinical features. Additionally, ML can optimize treatment planning, for example by predicting radiotherapy dose distribution to the tumor and surrounding normal tissue or by informing surgical resection planning. It has the potential to significantly enhance the accuracy and efficiency of health care delivery, leading to improved patient outcomes.
Affiliation(s)
- Seth B Wilson
- Department of Neurosurgery, The Ohio State University, Columbus, OH, USA
- Jacob Ward
- Department of Neurosurgery, The Ohio State University, Columbus, OH, USA
- Vikas Munjal
- Department of Neurosurgery, The Ohio State University, Columbus, OH, USA
- Mayur Patel
- Department of Neurosurgery, The Ohio State University, Columbus, OH, USA
- Ping Zhang
- Department of Computer Science and Engineering, The Ohio State University College of Engineering, Columbus, OH, USA
- Department of Biomedical Informatics, The Ohio State University College of Medicine, Columbus, OH, USA
- David S Xu
- Department of Neurosurgery, The Ohio State University, Columbus, OH, USA
2
Koochaki F, Najafizadeh L. A Siamese Convolutional Neural Network for Identifying Mild Traumatic Brain Injury and Predicting Recovery. IEEE Trans Neural Syst Rehabil Eng 2024; 32:1779-1786. PMID: 38635385. DOI: 10.1109/tnsre.2024.3391067.
Abstract
Timely diagnosis of mild traumatic brain injury (mTBI) remains challenging because acute symptoms resolve rapidly and static neuroimaging scans show no evidence of injury. Furthermore, while longitudinal tracking of mTBI is essential to understanding how the disease progresses or regresses over time and to enhancing personalized patient care, no standardized approach for this purpose is yet available. Recent functional neuroimaging studies have provided evidence of brain function alterations following mTBI, suggesting that mTBI-detection models can be built on these changes. Most such models, however, rely on manual feature engineering, and the optimal set of features for detecting mTBI may be unknown. Data-driven approaches, on the other hand, can uncover hidden relationships in an automated manner, making them suitable for mTBI detection. This paper presents a data-driven framework based on a Siamese Convolutional Neural Network (SCNN) to detect mTBI and to monitor the recovery state over time. The proposed framework is tested on cortical images of Thy1-GCaMP6s mice, obtained via widefield calcium imaging in a longitudinal study. Results show that the proposed model achieves a classification accuracy of 96.5%. To track the state of the injured brain over time, a reference distance map is constructed and, together with the SCNN model, is employed to assess the recovery state in subsequent sessions after injury, revealing that recovery progress varies among subjects. These promising results suggest that a similar approach could potentially be applicable to monitoring recovery from mTBI in humans.
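The reference-distance idea in this abstract can be sketched in miniature: a session's embedding is compared against reference embeddings, and the smaller distance decides the state. This is a hedged illustration only; the function names and vectors below are invented for the example and are not taken from the paper.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recovery_state(session_emb, baseline_emb, injured_emb):
    """Label a session by whichever reference embedding it lies closer to."""
    d_base = euclidean(session_emb, baseline_emb)
    d_inj = euclidean(session_emb, injured_emb)
    return ("recovered" if d_base < d_inj else "injured"), d_base, d_inj

# A session embedding near the pre-injury baseline is labeled "recovered"
state, d_base, d_inj = recovery_state([0.2, 0.1], [0.0, 0.0], [1.0, 1.0])
```

In the paper the embeddings come from the trained SCNN; here they are hand-picked 2-D points purely to exercise the distance comparison.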
3
Kim DH, Seo J, Lee JH, Jeon ET, Jeong D, Chae HD, Lee E, Kang JH, Choi YH, Kim HJ, Chai JW. Automated Detection and Segmentation of Bone Metastases on Spine MRI Using U-Net: A Multicenter Study. Korean J Radiol 2024; 25:363-373. PMID: 38528694. PMCID: PMC10973735. DOI: 10.3348/kjr.2023.0671.
Abstract
OBJECTIVE To develop and evaluate a deep learning model for automated segmentation and detection of bone metastasis on spinal MRI. MATERIALS AND METHODS We included whole spine MRI scans of adult patients with bone metastasis: 662 MRI series from 302 patients (63.5 ± 11.5 years; male:female, 151:151) from three study centers obtained between January 2015 and August 2021 for training and internal testing (random split into 536 and 126 series, respectively) and 49 MRI series from 20 patients (65.9 ± 11.5 years; male:female, 11:9) from another center obtained between January 2018 and August 2020 for external testing. Three sagittal MRI sequences, including non-contrast T1-weighted image (T1), contrast-enhanced T1-weighted Dixon fat-only image (FO), and contrast-enhanced fat-suppressed T1-weighted image (CE), were used. Seven models trained using the 2D and 3D U-Nets were developed with different combinations (T1, FO, CE, T1 + FO, T1 + CE, FO + CE, and T1 + FO + CE). The segmentation performance was evaluated using Dice coefficient, pixel-wise recall, and pixel-wise precision. The detection performance was analyzed using per-lesion sensitivity and a free-response receiver operating characteristic curve. The performance of the model was compared with that of five radiologists using the external test set. RESULTS The 2D U-Net T1 + CE model exhibited superior segmentation performance in the external test compared to the other models, with a Dice coefficient of 0.699 and pixel-wise recall of 0.653. The T1 + CE model achieved per-lesion sensitivities of 0.828 (497/600) and 0.857 (150/175) for metastases in the internal and external tests, respectively. The radiologists demonstrated a mean per-lesion sensitivity of 0.746 and a mean per-lesion positive predictive value of 0.701 in the external test. CONCLUSION The deep learning models proposed for automated segmentation and detection of bone metastases on spinal MRI demonstrated high diagnostic performance.
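The segmentation metrics used in this study (Dice coefficient, pixel-wise recall, pixel-wise precision) reduce to simple overlap counts between predicted and ground-truth masks. The following is a minimal sketch on toy 1-D masks, not the authors' code:

```python
def segmentation_metrics(pred, truth):
    """Dice, pixel-wise recall, and pixel-wise precision for binary masks."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)  # overlapping pixels
    n_pred, n_truth = sum(pred), sum(truth)
    dice = 2 * tp / (n_pred + n_truth)  # overlap relative to mean mask size
    recall = tp / n_truth               # fraction of lesion pixels recovered
    precision = tp / n_pred             # fraction of predicted pixels correct
    return dice, recall, precision

# Toy example: 4 predicted pixels, 5 true pixels, 3 overlapping
pred  = [1, 1, 1, 1, 0, 0, 0]
truth = [0, 1, 1, 1, 1, 1, 0]
dice, recall, precision = segmentation_metrics(pred, truth)
```

On real MRI the masks are 2-D or 3-D arrays, but the formulas are identical once the masks are flattened.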
Affiliation(s)
- Dong Hyun Kim
- Department of Radiology, SMG-SNU Boramae Medical Center, Seoul, Republic of Korea
- College of Medicine, Seoul National University, Seoul, Republic of Korea
- Jiwoon Seo
- Department of Radiology, SMG-SNU Boramae Medical Center, Seoul, Republic of Korea
- College of Medicine, Seoul National University, Seoul, Republic of Korea
- Ji Hyun Lee
- Department of Radiology, SMG-SNU Boramae Medical Center, Seoul, Republic of Korea
- Eun-Tae Jeon
- Department of Radiology, SMG-SNU Boramae Medical Center, Seoul, Republic of Korea
- Hee Dong Chae
- College of Medicine, Seoul National University, Seoul, Republic of Korea
- Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Eugene Lee
- College of Medicine, Seoul National University, Seoul, Republic of Korea
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Ji Hee Kang
- Department of Radiology, Konkuk University Medical Center, Seoul, Republic of Korea
- Yoon-Hee Choi
- Department of Physical Medicine and Rehabilitation, Soonchunhyang University Seoul Hospital, Seoul, Republic of Korea
- Hyo Jin Kim
- Department of Radiology, SMG-SNU Boramae Medical Center, Seoul, Republic of Korea
- College of Medicine, Seoul National University, Seoul, Republic of Korea
- Jee Won Chai
- Department of Radiology, SMG-SNU Boramae Medical Center, Seoul, Republic of Korea
- College of Medicine, Seoul National University, Seoul, Republic of Korea
4
Johns WL, Martinazzi BJ, Miltenberg B, Nam HH, Hammoud S. ChatGPT Provides Unsatisfactory Responses to Frequently Asked Questions Regarding Anterior Cruciate Ligament Reconstruction. Arthroscopy 2024:S0749-8063(24)00061-6. PMID: 38311261. DOI: 10.1016/j.arthro.2024.01.017.
Abstract
PURPOSE To determine whether the free online artificial intelligence platform ChatGPT could accurately, adequately, and appropriately answer questions regarding anterior cruciate ligament (ACL) reconstruction surgery. METHODS A list of 10 questions about ACL surgery was created based on a review of frequently asked questions appearing on the websites of various orthopaedic institutions. Each question was separately entered into ChatGPT (version 3.5), and responses were recorded, scored, and graded independently by 3 authors. The reading level of each ChatGPT response was calculated using the WordCalc software package, and readability was assessed using the Flesch-Kincaid grade level, Simple Measure of Gobbledygook index, Coleman-Liau index, Gunning fog index, and automated readability index. RESULTS Of the 10 frequently asked questions entered into ChatGPT, 6 were deemed unsatisfactory and requiring substantial clarification; 1, adequate and requiring moderate clarification; 1, adequate and requiring minor clarification; and 2, satisfactory and requiring minimal clarification. The mean DISCERN score was 41 (inter-rater reliability, 0.721), indicating that the responses were average. According to the readability assessments, a full understanding of the ChatGPT responses required 13.4 years of education, corresponding to the reading level of a college sophomore. CONCLUSIONS Most of the ChatGPT-generated responses were outdated and failed to provide an adequate foundation for patients' understanding of their injury and treatment options. The reading level required to understand the responses was too advanced for some patients, creating potential for misunderstanding and misinterpretation of information. ChatGPT lacks the ability to differentiate and prioritize the information presented to patients.
CLINICAL RELEVANCE Recognizing the shortcomings in artificial intelligence platforms may equip surgeons to better set expectations and provide support for patients considering and preparing for ACL reconstruction.
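Of the readability measures named above, the Flesch-Kincaid grade level has a simple closed form: 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59. The sketch below is a hedged illustration using a crude vowel-group syllable heuristic; real tools such as WordCalc count syllables more carefully.

```python
import re

def fk_grade(text):
    """Flesch-Kincaid grade level; syllables approximated by vowel groups."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                    for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

grade = fk_grade("The cat sat. The dog ran.")  # short monosyllabic text
```

A score of 13.4, as reported in the study, corresponds to a reader with roughly 13.4 years of schooling; the toy sentence above scores far lower because its words are short and its sentences are only three words long.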
Affiliation(s)
- William L Johns
- Rothman Orthopaedic Institute, Thomas Jefferson University Hospital, Philadelphia, Pennsylvania, U.S.A.
- Brandon J Martinazzi
- Rothman Orthopaedic Institute, Thomas Jefferson University Hospital, Philadelphia, Pennsylvania, U.S.A.
- Benjamin Miltenberg
- Rothman Orthopaedic Institute, Thomas Jefferson University Hospital, Philadelphia, Pennsylvania, U.S.A.
- Hannah H Nam
- Penn State College of Medicine, Hershey, Pennsylvania, U.S.A.
- Sommer Hammoud
- Rothman Orthopaedic Institute, Thomas Jefferson University Hospital, Philadelphia, Pennsylvania, U.S.A.
5
Ataei A, Eggermont F, Verdonschot N, Lessmann N, Tanck E. The effect of deep learning-based lesion segmentation on failure load calculations of metastatic femurs using finite element analysis. Bone 2024; 179:116987. PMID: 38061504. DOI: 10.1016/j.bone.2023.116987.
Abstract
Bone ranks as the third most frequent site of cancer metastases, after the lung and liver. Bone metastases are often painful and may result in pathological fracture, a major cause of morbidity and mortality in cancer patients. Finite element (FE) analysis has been shown to be a promising tool for quantifying fracture risk, but metastatic lesions are typically not specifically segmented, so their mechanical properties may not be represented adequately. Deep learning methods potentially provide the opportunity to segment these lesions automatically and assign their mechanical properties more adequately. In this study, our primary focus was to gain insight into the performance of an automatic deep learning segmentation algorithm for femoral metastatic lesions and its subsequent effects on FE outcomes. The aims were to determine the similarity between manual and automatic segmentation; the differences in predicted failure load between FE models with automatically segmented osteolytic and mixed lesions and models with CT-based lesion values (the gold standard); and the effect on the BOne Strength (BOS) score (failure load adjusted for body weight) and subsequent fracture risk assessments. From two patient cohorts, a total of 50 femurs with osteolytic and mixed metastatic lesions were included. The femurs were segmented from CT images and converted into FE meshes. The material behavior was implemented as non-linear isotropic. These FE models were considered the gold standard (Finite Element no Segmented Lesion: FE-no-SL), whereby the local calcium equivalent density of both femur and metastatic lesion was extracted from CT values. Lesions in the femur were manually segmented by two biomechanical experts, and the final lesion segmentation for each femur was obtained by consensus between the two observers.
Subsequently, nnU-Net, a self-configuring variant of the popular deep learning model U-Net, was used to automatically segment metastatic lesions within the femur. For these models with segmented lesions (Finite Element with Segmented Lesion: FE-with-SL), the calcium equivalent density within the metastatic lesions was set to zero after segmentation by the neural network, simulating the absence of load-bearing capacity in these lesions. The models (with or without automatically segmented lesions) were loaded incrementally in the axial direction until failure was simulated. The Dice coefficient was used to evaluate the similarity of the manual and automatic segmentations. Mean calcium equivalent density values within the automatically segmented lesions were calculated. Failure loads and patterns were determined. Furthermore, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated for both groups by comparing the predictions with the occurrence or absence of actual fracture within the patient cohorts. The automatic segmentation algorithm performed in a non-robust manner. Dice coefficients describing the similarity between consented manual and automatic segmentations were relatively low (mean 0.45 ± standard deviation 0.33, median 0.54). The failure load difference between the FE-no-SL and FE-with-SL groups varied from 0% to 48% (mean 6.6%). Correlation analysis of failure loads between the two groups showed a strong relationship (R2 > 0.9). Of the 50 cases, four showed clear deviations, with the models with automatic lesion segmentation (FE-with-SL) yielding considerably lower failure loads. In the whole database including osteolytic and mixed lesions, sensitivity and NPV remained the same, but specificity decreased from 94% to 83% and PPV from 78% to 54% from FE-no-SL to FE-with-SL.
This study indicates that nnU-Net yielded non-robust outcomes in femoral lesion segmentation and that other segmentation algorithms should be considered. However, the differences in failure pattern and failure load between FE models with and without automatically segmented osteolytic and mixed lesions were relatively small in most cases, with a few exceptions. On the other hand, the accuracy of fracture risk assessment using the BOS score was lower than with FE-no-SL. In conclusion, automatic lesion segmentation remains an unsolved problem, and quantifying lesion characteristics and their subsequent effect on fracture risk using deep learning will therefore remain challenging.
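The four diagnostic measures reported above follow directly from confusion-matrix counts. Here is a minimal sketch with illustrative counts, not the study's actual data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, and NPV from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # actual fractures correctly predicted
    specificity = tn / (tn + fp)  # non-fractures correctly cleared
    ppv = tp / (tp + fp)          # positive predictions that were fractures
    npv = tn / (tn + fn)          # negative predictions with no fracture
    return sensitivity, specificity, ppv, npv

# Hypothetical counts chosen only to exercise the formulas
sens, spec, ppv, npv = diagnostic_metrics(tp=9, fp=3, tn=33, fn=1)
```

The pattern the study reports, where sensitivity and NPV hold steady while specificity and PPV drop, corresponds to the model gaining false positives (fp grows) without losing true positives.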
Affiliation(s)
- Ali Ataei
- Orthopaedic Research Lab, Radboud university medical center, P.O. Box 9101, 6500 HB Nijmegen, the Netherlands
- Florieke Eggermont
- Orthopaedic Research Lab, Radboud university medical center, P.O. Box 9101, 6500 HB Nijmegen, the Netherlands
- Nico Verdonschot
- Orthopaedic Research Lab, Radboud university medical center, P.O. Box 9101, 6500 HB Nijmegen, the Netherlands; Laboratory for Biomechanical Engineering, University of Twente, Enschede, the Netherlands
- Nikolas Lessmann
- Diagnostic Image Analysis Group, Department of Medical Imaging, Radboud university medical center, Nijmegen, the Netherlands
- Esther Tanck
- Orthopaedic Research Lab, Radboud university medical center, P.O. Box 9101, 6500 HB Nijmegen, the Netherlands
6
Haim O, Agur A, Gabay S, Azolai L, Shutan I, Chitayat M, Katirai M, Sadon S, Artzi M, Lidar Z. Differentiating spinal pathologies by deep learning approach. Spine J 2024; 24:297-303. PMID: 37797840. DOI: 10.1016/j.spinee.2023.09.019.
Abstract
BACKGROUND CONTEXT Spinal pathologies are diverse in nature and, excluding trauma and degenerative disease, include infectious, neoplastic (either extradural or intradural), and inflammatory conditions. The preoperative diagnosis is made with clinical judgment incorporating laboratory findings and radiological studies. When the diagnosis is uncertain, a biopsy is almost always mandatory, since treatment is dictated by the type of pathology. This is an invasive, time-consuming, and costly process. PURPOSE The aim of this study was to develop a deep learning (DL) algorithm, based on preoperative MRI and postoperative pathological results, to differentiate between leading spinal pathologies. STUDY DESIGN We retrospectively collected and analyzed clinical, radiological, and pathological data from patients who underwent spinal surgery or biopsy for various spinal pathologies between 2008 and 2022 at a tertiary center. Patients were stratified according to their pathological reports (the threshold for inclusion was set at 25 patients per diagnosis). METHODS Preoperative MRI, clinical data, and pathological results were processed by a deep learning model built on the Fast.ai framework on top of the PyTorch environment. RESULTS A total of 231 patients, diagnosed with carcinoma (80), infection (57), meningioma (52), or schwannoma (42), were included in our model. The mean overall accuracy was 0.78±0.06 for the validation dataset and 0.93±0.03 for the test dataset. CONCLUSION A deep learning algorithm for differentiating between the aforementioned spinal pathologies, based solely on clinical MRI, proves to be a feasible primary diagnostic modality. Larger studies should be performed to validate and improve this algorithm for clinical use. CLINICAL SIGNIFICANCE This study provides a proof of concept for predicting spinal pathologies solely from MRI using DL technology, allowing for a rapid, targeted, and cost-effective work-up and subsequent treatment.
Affiliation(s)
- Oz Haim
- Department of Neurosurgery, Tel Aviv Medical Center, Tel-Aviv, Israel; Department of Neurosurgery, Baylor College of Medicine, Houston, TX, USA
- Ariel Agur
- Department of Neurosurgery, Tel Aviv Medical Center, Tel-Aviv, Israel
- Segev Gabay
- Department of Neurosurgery, Tel Aviv Medical Center, Tel-Aviv, Israel
- Lee Azolai
- Department of Neurosurgery, Tel Aviv Medical Center, Tel-Aviv, Israel
- Itay Shutan
- Department of Neurosurgery, Tel Aviv Medical Center, Tel-Aviv, Israel
- May Chitayat
- The Iby and Aladar Fleischman Faculty of Engineering, Tel Aviv University, Tel-Aviv 6997801, Israel; Sagol Brain Institute, Tel Aviv Medical Center, Faculty of Medicine and Sagol School of Neuroscience, Tel Aviv University, 6 Weizman St, Tel-Aviv, Israel
- Michal Katirai
- The Iby and Aladar Fleischman Faculty of Engineering, Tel Aviv University, Tel-Aviv 6997801, Israel; Sagol Brain Institute, Tel Aviv Medical Center, Faculty of Medicine and Sagol School of Neuroscience, Tel Aviv University, 6 Weizman St, Tel-Aviv, Israel
- Sapir Sadon
- Department of Cardiology, Tel Aviv Medical Center, Tel-Aviv, Israel
- Moran Artzi
- Sagol Brain Institute, Tel Aviv Medical Center, Faculty of Medicine and Sagol School of Neuroscience, Tel Aviv University, 6 Weizman St, Tel-Aviv, Israel
- Zvi Lidar
- Department of Neurosurgery, Tel Aviv Medical Center, Tel-Aviv, Israel
7
Aslan S, Al-Smadi MW, Kozma I, Viola Á. Enhanced Precision and Safety in Thermal Ablation: O-Arm Cone Beam Computed Tomography with Magnetic Resonance Imaging Fusion for Spinal Column Tumor Targeting. Cancers (Basel) 2023; 15:5744. PMID: 38136290. PMCID: PMC10741908. DOI: 10.3390/cancers15245744.
Abstract
Spinal metastatic tumors are common and often cause debilitating symptoms. Image-guided percutaneous thermal ablation (IPTA) has gained significant recognition in the management of spinal column tumors due to its precision and effectiveness. Conventional guidance modalities, including computed tomography, fluoroscopy, and ultrasound, have been important in targeting spinal column tumors while minimizing harm to adjacent critical structures. This study presents a novel approach using fusion of cone beam computed tomography (CBCT) with magnetic resonance imaging (MRI) to guide percutaneous thermal ablation in four patients with secondary spinal column tumors. Procedure effectiveness was evaluated with the visual analog scale (VAS) over an 18-month follow-up. Percutaneous vertebroplasty was performed in two cases, and a thermostat was used during all procedures. Imaging was performed using the Stealth Station Spine 8 navigation system (SSS8) and a 1.5T MRI machine. The fusion of CBCT with MRI allowed precise tumor localization and guidance for thermal ablation. Initial results indicate successful tumor ablation and symptom reduction, emphasizing the potential of CBCT-MRI fusion in spinal column tumor management. This innovative approach is promising for optimizing therapy for secondary spinal column tumors. Further studies are necessary to validate its efficacy and applicability.
Affiliation(s)
- Siran Aslan
- Department of Neurotraumatology, Semmelweis University, 1081 Budapest, Hungary
- Department of Neurosurgery and Neurotraumatology, Dr. Manninger Jenő National Traumatology Institute, 1081 Budapest, Hungary
- Doctoral School of Clinical Medicine, Semmelweis University, 1083 Budapest, Hungary
- Mohammad Walid Al-Smadi
- Department of Neurosurgery and Neurotraumatology, Dr. Manninger Jenő National Traumatology Institute, 1081 Budapest, Hungary
- Department of Operative Techniques and Surgical Research, Faculty of Medicine, University of Debrecen, 4032 Debrecen, Hungary
- Department of Neurosurgery, Andras Josa Teaching Hospital, 4400 Nyiregyhaza, Hungary
- István Kozma
- Department of Neurosurgery and Neurotraumatology, Dr. Manninger Jenő National Traumatology Institute, 1081 Budapest, Hungary
- Árpad Viola
- Department of Neurotraumatology, Semmelweis University, 1081 Budapest, Hungary
- Department of Neurosurgery and Neurotraumatology, Dr. Manninger Jenő National Traumatology Institute, 1081 Budapest, Hungary
8
Fayed AM, Mansur NSB, de Carvalho KA, Behrens A, D'Hooghe P, de Cesar Netto C. Artificial intelligence and ChatGPT in Orthopaedics and sports medicine. J Exp Orthop 2023; 10:74. PMID: 37493985. PMCID: PMC10371934. DOI: 10.1186/s40634-023-00642-8.
Abstract
Artificial intelligence (AI) is widely regarded as a potential major catalyst of the fourth industrial revolution. In the last decade, AI use in orthopaedics increased approximately tenfold. Artificial intelligence helps with tracking activities, evaluating diagnostic images, predicting injury risk, and several other tasks. Chat Generative Pre-trained Transformer (ChatGPT), an AI chatbot, represents an extremely controversial topic in the academic community. The aim of this review article is to simplify the concept of AI and examine the extent of AI use in the orthopaedics and sports medicine literature. Additionally, the article evaluates the role of ChatGPT in scientific research and publications. Level of evidence: Level V, letter to review.
Affiliation(s)
- Aly M Fayed
- Department of Orthopaedics and Rehabilitation, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Kepler Alencar de Carvalho
- Department of Orthopaedics and Rehabilitation, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Andrew Behrens
- Department of Orthopaedics and Rehabilitation, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Pieter D'Hooghe
- Aspetar Orthopedic and Sports Medicine Hospital, Doha, Qatar
9
Yao Y, Zhu X, Zhang N, Wang P, Liu Z, Chen Y, Xu C, Ouyang T, Meng W. Microwave ablation versus radiofrequency ablation for treating spinal metastases. Medicine (Baltimore) 2023; 102:e34092. PMID: 37352076. PMCID: PMC10289525. DOI: 10.1097/md.0000000000034092.
Abstract
BACKGROUND This study aimed to compare the clinical efficacy and safety of microwave ablation (MWA) and radiofrequency ablation (RFA) for the treatment of spinal metastases. METHODS A literature search was performed using the PubMed, Web of Science, and Cochrane Library databases according to the PRISMA statement (as of September 20, 2022). Two independent investigators screened articles based on the inclusion and exclusion criteria and included studies with primary outcomes of pain relief, tumor control, and complications. Article quality was assessed using the Risk Of Bias In Non-randomized Studies of Interventions tool. RESULTS Sixteen articles were finally included in this study, including 630 patients with spinal metastases, with ages ranging from 51.4 to 71.3 years. Of these, 393 (62.4%) underwent MWA and 237 (37.6%) underwent RFA. After MWA and RFA treatment, visual analog scale scores significantly decreased, and the local tumor control rates were all above 80%. Complications were reported in 27.4% of patients treated with MWA compared with 10.9% of patients treated with RFA. CONCLUSION The results of this systematic review suggest that MWA alone or in combination with surgery and RFA in combination with other modalities may improve pain caused by primary tumor metastasis to the spine, and MWA alone or in combination with surgery may have better local tumor control. However, MWA appears to result in more major complications than RFA in combination with other treatment modalities.
Affiliation(s)
- Yuming Yao
- Department of Neurosurgery, the First Affiliated Hospital of Nanchang University, Nanchang, China
- The First Clinical Medical College of Nanchang University, Nanchang, China
- Xiang Zhu
- Jiangxi Provincial Key Laboratory of Preventive Medicine, School of Public Health, Nanchang University, Nanchang, China
- Na Zhang
- Department of Neurology, the First Affiliated Hospital of Nanchang University, Nanchang, China
- Ping Wang
- Department of Neurosurgery, the First Affiliated Hospital of Nanchang University, Nanchang, China
- Zhizheng Liu
- Department of Neurosurgery, the First Affiliated Hospital of Nanchang University, Nanchang, China
- Yun Chen
- Department of Neurosurgery, the First Affiliated Hospital of Nanchang University, Nanchang, China
- Cong Xu
- Department of Neurosurgery, the First Affiliated Hospital of Nanchang University, Nanchang, China
- Taohui Ouyang
- Department of Neurosurgery, the First Affiliated Hospital of Nanchang University, Nanchang, China
- Wei Meng
- Department of Neurosurgery, the First Affiliated Hospital of Nanchang University, Nanchang, China
10
Visheratina A, Visheratin A, Kumar P, Veksler M, Kotov NA. Chirality Analysis of Complex Microparticles using Deep Learning on Realistic Sets of Microscopy Images. ACS Nano 2023; 17:7431-7442. PMID: 37058327. DOI: 10.1021/acsnano.2c12056.
Abstract
Nanoscale chirality is an actively growing research field spurred by the giant chiroptical activity, enantioselective biological activity, and asymmetric catalytic activity of chiral nanostructures. Compared to chiral molecules, the handedness of chiral nano- and microstructures can be directly established via electron microscopy, which can be utilized for the automatic analysis of chiral nanostructures and prediction of their properties. However, chirality in complex materials may have multiple geometric forms and scales. Computational identification of chirality from electron microscopy images rather than optical measurements is convenient but is fundamentally challenging, too, because (1) image features differentiating left- and right-handed particles can be ambiguous and (2) three-dimensional structure essential for chirality is 'flattened' into two-dimensional projections. Here, we show that deep learning algorithms can identify twisted bowtie-shaped microparticles with nearly 100% accuracy and classify them as left- and right-handed with as high as 99% accuracy. Importantly, such accuracy was achieved with as few as 30 original electron microscopy images of bowties. Furthermore, after training on bowtie particles with complex nanostructured features, the model can recognize other chiral shapes with different geometries without retraining for their specific chiral geometry with 93% accuracy, indicating the true learning abilities of the employed neural networks. These findings indicate that our algorithm trained on a practically feasible set of experimental data enables automated analysis of microscopy data for the accelerated discovery of chiral particles and their complex systems for multiple applications.
11
Martín-Noguerol T, Oñate Miranda M, Amrhein TJ, Paulano-Godino F, Xiberta P, Vilanova JC, Luna A. The role of artificial intelligence in the assessment of the spine and spinal cord. Eur J Radiol 2023; 161:110726. [PMID: 36758280] [DOI: 10.1016/j.ejrad.2023.110726]
Abstract
Artificial intelligence (AI) application development is underway in all areas of radiology, with many promising tools focused on the spine and spinal cord. In the past decade, multiple spine AI algorithms have been created based on radiographs, computed tomography, and magnetic resonance imaging. These algorithms have wide-ranging purposes, including automatic labeling of vertebral levels, automated description of disc degenerative changes, detection and classification of spine trauma, identification of osseous lesions, and assessment of cord pathology. The overarching goals of these algorithms include improved patient throughput, reduced radiologist workload, and improved diagnostic accuracy. Several prerequisite tasks are required to achieve these goals, such as automatic image segmentation and facilitation of image acquisition and postprocessing. In this narrative review, we discuss some of the important imaging AI solutions that have been developed for the assessment of the spine and spinal cord. We focus on their practical applications and briefly discuss some key requirements for the successful integration of these tools into practice. The potential impact of AI in the imaging assessment of the spine and cord is vast and promises to provide broad-reaching improvements for clinicians, radiologists, and patients alike.
12
Tummala S, Suresh AK. Few-shot learning using explainable Siamese twin network for the automated classification of blood cells. Med Biol Eng Comput 2023; 61:1549-1563. [PMID: 36800155] [DOI: 10.1007/s11517-023-02804-3]
Abstract
Automated classification of blood cells from microscopic images is an interesting research area owing to advancements in efficient neural network models. Existing deep learning methods rely on large datasets for network training, and generating such data can be time-consuming. Further, explainability is required via class activation mapping for a better understanding of model predictions. Therefore, we developed a Siamese twin network (STN) model based on contrastive learning that trains on relatively few images for the classification of healthy peripheral blood cells, using EfficientNet-B3 as the base model. In this study, a total of 17,092 publicly accessible cell histology images were analyzed, of which 6% were used for STN training, 6% for few-shot validation, and the remaining 88% for few-shot testing. The proposed architecture demonstrates accuracies of 97.00%, 98.78%, 94.59%, 95.70%, 98.86%, 97.09%, 99.71%, and 96.30% during 8-way 5-shot testing for the classification of basophils, eosinophils, immature granulocytes, erythroblasts, lymphocytes, monocytes, platelets, and neutrophils, respectively. Further, we propose a novel class activation mapping scheme that highlights the important regions in the test image for STN model interpretability. Overall, the proposed framework could be used for fully automated, self-exploratory classification of healthy peripheral blood cells.
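The contrastive objective underlying Siamese twin networks such as the one in this abstract can be sketched as follows. This is an illustrative sketch only, not the authors' implementation; the function names, the margin value, and the use of plain lists in place of EfficientNet-B3 feature vectors are all assumptions:

```python
import math

def euclidean(a, b):
    # Euclidean distance between two embedding vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def contrastive_loss(emb1, emb2, same_class, margin=1.0):
    # Same-class pairs are pulled together (penalized by squared distance);
    # different-class pairs are pushed apart until at least `margin` away.
    d = euclidean(emb1, emb2)
    if same_class:
        return d ** 2
    return max(0.0, margin - d) ** 2
```

In a full system the two embeddings would come from the weight-sharing sub-networks applied to a pair of cell images; the loss drives embeddings of the same cell type together and embeddings of different cell types apart.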
13
Hajamohideen F, Shaffi N, Mahmud M, Subramanian K, Al Sariri A, Vimbi V, Abdesselam A. Four-way classification of Alzheimer's disease using deep Siamese convolutional neural network with triplet-loss function. Brain Inform 2023; 10:5. [PMID: 36806042] [PMCID: PMC9937523] [DOI: 10.1186/s40708-023-00184-w]
Abstract
Alzheimer's disease (AD) is a neurodegenerative disease that causes irreversible damage to several brain regions, including the hippocampus, causing impairment in cognition, function, and behaviour. Early diagnosis of the disease will reduce the suffering of patients and their family members. Towards this aim, we propose a Siamese Convolutional Neural Network (SCNN) architecture that employs the triplet-loss function to represent input MRI images as k-dimensional embeddings. We used both pretrained and non-pretrained CNNs to transform images into the embedding space. These embeddings are subsequently used for the 4-way classification of Alzheimer's disease. The model's efficacy was tested on the ADNI and OASIS datasets, producing accuracies of 91.83% and 93.85%, respectively. Furthermore, the obtained results are compared with similar methods proposed in the literature.
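The triplet-loss function named in this abstract can be sketched as a minimal illustration (not the authors' code; the function names and margin value are assumed here):

```python
import math

def euclidean(a, b):
    # Euclidean distance between two k-dimensional embeddings.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Zero once the negative embedding is farther from the anchor than the
    # positive embedding by at least `margin`; positive (a penalty) otherwise.
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)
```

Training on (anchor, positive, negative) MRI triplets with this objective shapes the embedding space so that scans of the same AD stage cluster together, which is what makes the subsequent 4-way classification tractable.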
14
Baldi PF, Abdelkarim S, Liu J, To JK, Ibarra MD, Browne AW. Vitreoretinal Surgical Instrument Tracking in Three Dimensions Using Deep Learning. Transl Vis Sci Technol 2023; 12:20. [PMID: 36648414] [PMCID: PMC9851279] [DOI: 10.1167/tvst.12.1.20]
Abstract
Purpose To evaluate the potential for artificial intelligence-based video analysis to determine surgical instrument characteristics when moving in the three-dimensional vitreous space. Methods We designed and manufactured a model eye in which we recorded choreographed videos of many surgical instruments moving throughout the eye. We labeled each frame of the videos to describe the surgical tool characteristics: tool type, location, depth, and insertional laterality. We trained two different deep learning models to predict each of the tool characteristics and evaluated model performances on a subset of images. Results The accuracy of the classification model on the training set is 84% for the x-y region, 97% for depth, 100% for instrument type, and 100% for laterality of insertion. The accuracy of the classification model on the validation dataset is 83% for the x-y region, 96% for depth, 100% for instrument type, and 100% for laterality of insertion. The close-up detection model performs at 67 frames per second, with precision for most instruments higher than 75%, achieving a mean average precision of 79.3%. Conclusions We demonstrated that trained models can track surgical instrument movement in three-dimensional space and determine instrument depth, tip location, instrument insertional laterality, and instrument type. Model performance is nearly instantaneous and justifies further investigation into application to real-world surgical videos. Translational Relevance Deep learning offers the potential for software-based safety feedback mechanisms during surgery or the ability to extract metrics of surgical technique that can direct research to optimize surgical outcomes.
15
Ahuja S, Panigrahi BK, Dey N, Taneja A, Gandhi TK. McS-Net: Multi-class Siamese network for severity of COVID-19 infection classification from lung CT scan slices. Appl Soft Comput 2022; 131:109683. [PMID: 36277300] [PMCID: PMC9573862] [DOI: 10.1016/j.asoc.2022.109683]
Abstract
Worldwide, COVID-19 is a highly infectious and rapidly spreading disease across almost all age groups. Computed Tomography (CT) scans of the lungs are found to be accurate for the timely diagnosis of COVID-19 infection. In the proposed work, a deep learning-based P-shot N-ways Siamese network along with prototypical nearest neighbor classifiers is implemented for the classification of COVID-19 infection from lung CT scan slices. For this, a Siamese network with identical sub-networks (weight sharing) is used for image classification with a limited dataset for each class. The feature vectors are obtained from the pre-trained, weight-sharing sub-networks. The performance of the proposed methodology is evaluated on the benchmark MosMed dataset, which has category zero (healthy control) and multiple COVID-19 infection severity categories. The proposed methodology is evaluated on (a) chest CT scans of 1110 patients provided by medical hospitals in Moscow, Russia, and (b) a case study of low-dose CT scans of 42 patients provided by Avtaran healthcare in India. The deep learning-based Siamese network (15-shot 5-ways) obtained an accuracy of 98.07%, sensitivity of 95.66%, specificity of 98.83%, and F1-score of 95.10%. The proposed work outperforms existing approaches for COVID-19 infection severity classification when only limited scans are available for the numerous infection categories.
16
Kumar V, Patel S, Baburaj V, Vardhan A, Singh PK, Vaishya R. Current understanding on artificial intelligence and machine learning in orthopaedics - A scoping review. J Orthop 2022; 34:201-206. [PMID: 36104993] [PMCID: PMC9465367] [DOI: 10.1016/j.jor.2022.08.020]
Abstract
Background Artificial Intelligence (AI) has improved the way of looking at technological challenges. Today, we can afford to see many problems as simply input-output systems rather than solving them from first principles. The field of orthopaedics is not spared from this rapidly expanding technology. The recent surge in the use of AI can be attributed mainly to advancements in deep learning methodologies and computing resources. This review was conducted to outline the role of AI in orthopaedics. Methods We developed a search strategy and looked for articles on PubMed, Scopus, and EMBASE. A total of 40 articles were selected for this study, covering tools for medical aid such as imaging solutions, implant management, and robotic surgery, as well as work addressing scientific questions. Results A total of 40 studies have been included in this review. The role of AI in various subspecialties such as arthroplasty, trauma, orthopaedic oncology, and foot and ankle surgery is discussed in detail. Conclusion AI has touched most aspects of orthopaedics. The increase in technological literacy, data management plans, and hardware systems, amalgamated with access to hand-held devices like mobiles and electronic pads, augurs well for the exciting times ahead in this field. We discuss various technological breakthroughs that AI has achieved in orthopaedics, as well as the limitations and the problem with the black-box approach of modern AI algorithms. We advocate for more interpretable algorithms, which can help patients and surgeons alike.
17
Tummala S, Kadry S, Bukhari SAC, Rauf HT. Classification of Brain Tumor from Magnetic Resonance Imaging Using Vision Transformers Ensembling. Curr Oncol 2022; 29:7498-7511. [PMID: 36290867] [PMCID: PMC9600395] [DOI: 10.3390/curroncol29100590]
Abstract
The automated classification of brain tumors plays an important role in supporting radiologists in decision making. Recently, vision transformer (ViT)-based deep neural network architectures have gained attention in the computer vision research domain owing to the tremendous success of transformer models in natural language processing. Hence, in this study, the ability of an ensemble of standard ViT models for the diagnosis of brain tumors from T1-weighted (T1w) magnetic resonance imaging (MRI) is investigated. Pretrained and finetuned ViT models (B/16, B/32, L/16, and L/32) on ImageNet were adopted for the classification task. A brain tumor dataset from figshare, consisting of 3064 T1w contrast-enhanced (CE) MRI slices with meningiomas, gliomas, and pituitary tumors, was used for the cross-validation and testing of the ensemble ViT model's ability to perform a three-class classification task. The best individual model was L/32, with an overall test accuracy of 98.2% at 384 × 384 resolution. The ensemble of all four ViT models demonstrated an overall testing accuracy of 98.7% at the same resolution, outperforming individual model's ability at both resolutions and their ensembling at 224 × 224 resolution. In conclusion, an ensemble of ViT models could be deployed for the computer-aided diagnosis of brain tumors based on T1w CE MRI, leading to radiologist relief.
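One common way to ensemble classifiers as described in this abstract is soft voting: average each model's class probabilities, then take the argmax. This is a schematic sketch under that assumption; whether the authors used soft or hard voting over the four ViT variants is not stated here:

```python
def ensemble_predict(per_model_probs):
    # per_model_probs: one probability vector per model, all over the same
    # classes (e.g. meningioma, glioma, pituitary tumor).
    # Average the probabilities class-by-class, then pick the top class.
    n_models = len(per_model_probs)
    n_classes = len(per_model_probs[0])
    avg = [sum(p[c] for p in per_model_probs) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)
```

Averaging softmax outputs lets a confident, correct model outvote a marginally wrong one, which is one plausible mechanism behind the reported ensemble gain over the best individual model.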
18
Cui Y, Zhu J, Duan Z, Liao Z, Wang S, Liu W. Artificial Intelligence in Spinal Imaging: Current Status and Future Directions. Int J Environ Res Public Health 2022; 19:11708. [PMID: 36141981] [PMCID: PMC9517575] [DOI: 10.3390/ijerph191811708]
Abstract
Spinal maladies are among the most common causes of pain and disability worldwide. Imaging represents an important diagnostic procedure in spinal care. Imaging investigations can provide information and insights that are not visible through ordinary visual inspection. Multiscale in vivo interrogation has the potential to improve the assessment and monitoring of pathologies thanks to the convergence of imaging, artificial intelligence (AI), and radiomic techniques. AI is revolutionizing computer vision, autonomous driving, natural language processing, and speech recognition. These revolutionary technologies are already impacting radiology, diagnostics, and other fields, where automated solutions can increase precision and reproducibility. In the first section of this narrative review, we provide a brief explanation of the many approaches currently being developed, with a particular emphasis on those employed in spinal imaging studies. The previously documented uses of AI for challenges involving spinal imaging, including imaging appropriateness and protocoling, image acquisition and reconstruction, image presentation, image interpretation, and quantitative image analysis, are then detailed. Finally, the future applications of AI to imaging of the spine are discussed. AI has the potential to significantly affect every step in spinal imaging. AI can make images of the spine more useful to patients and doctors by improving image quality, imaging efficiency, and diagnostic accuracy.
19
Ong W, Zhu L, Zhang W, Kuah T, Lim DSW, Low XZ, Thian YL, Teo EC, Tan JH, Kumar N, Vellayappan BA, Ooi BC, Quek ST, Makmur A, Hallinan JTPD. Application of Artificial Intelligence Methods for Imaging of Spinal Metastasis. Cancers (Basel) 2022; 14:4025. [PMID: 36011018] [PMCID: PMC9406500] [DOI: 10.3390/cancers14164025]
Abstract
Spinal metastasis is the most common malignant disease of the spine. Recently, major advances in machine learning and artificial intelligence technology have led to their increased use in oncological imaging. The purpose of this study is to review and summarise the present evidence for artificial intelligence applications in the detection, classification and management of spinal metastasis, along with their potential integration into clinical practice. A systematic, detailed search of the main electronic medical databases was undertaken in concordance with the PRISMA guidelines. A total of 30 articles were retrieved from the database and reviewed. Key findings of current AI applications were compiled and summarised. The main clinical applications of AI techniques include image processing, diagnosis, decision support, treatment assistance and prognostic outcomes. In the realm of spinal oncology, artificial intelligence technologies have achieved relatively good performance and hold immense potential to aid clinicians, including enhancing work efficiency and reducing adverse events. Further research is required to validate the clinical performance of the AI tools and facilitate their integration into routine clinical practice.
20
Zhou X, Wang H, Feng C, Xu R, He Y, Li L, Tu C. Emerging Applications of Deep Learning in Bone Tumors: Current Advances and Challenges. Front Oncol 2022; 12:908873. [PMID: 35928860] [PMCID: PMC9345628] [DOI: 10.3389/fonc.2022.908873]
Abstract
Deep learning is a subfield of state-of-the-art artificial intelligence (AI) technology, and multiple deep learning-based AI models have been applied to musculoskeletal diseases. Deep learning has shown the capability to assist clinical diagnosis and prognosis prediction in a spectrum of musculoskeletal disorders, including fracture detection, cartilage and spinal lesions identification, and osteoarthritis severity assessment. Meanwhile, deep learning has also been extensively explored in diverse tumors such as prostate, breast, and lung cancers. Recently, the application of deep learning emerges in bone tumors. A growing number of deep learning models have demonstrated good performance in detection, segmentation, classification, volume calculation, grading, and assessment of tumor necrosis rate in primary and metastatic bone tumors based on both radiological (such as X-ray, CT, MRI, SPECT) and pathological images, implicating a potential for diagnosis assistance and prognosis prediction of deep learning in bone tumors. In this review, we first summarized the workflows of deep learning methods in medical images and the current applications of deep learning-based AI for diagnosis and prognosis prediction in bone tumors. Moreover, the current challenges in the implementation of the deep learning method and future perspectives in this field were extensively discussed.
Collapse
|
21
|
Kuah T, Vellayappan BA, Makmur A, Nair S, Song J, Tan JH, Kumar N, Quek ST, Hallinan JTPD. State-of-the-Art Imaging Techniques in Metastatic Spinal Cord Compression. Cancers (Basel) 2022;14:3289. PMID: 35805059; PMCID: PMC9265325; DOI: 10.3390/cancers14133289.
Abstract
Metastatic Spinal Cord Compression (MSCC) is a debilitating complication in oncology patients. This narrative review discusses the strengths and limitations of various imaging modalities in diagnosing MSCC, the role of imaging in stereotactic body radiotherapy (SBRT) for MSCC treatment, and recent advances in deep learning (DL) tools for MSCC diagnosis. PubMed and Google Scholar databases were searched using targeted keywords. Studies were reviewed in consensus among the co-authors for their suitability before inclusion. MRI is the gold standard of imaging for diagnosing MSCC, with a reported sensitivity and specificity of 93% and 97%, respectively. CT myelogram appears to have comparable sensitivity and specificity to contrast-enhanced MRI. Conventional CT has lower diagnostic accuracy than MRI for MSCC diagnosis but is helpful in emergent situations with limited access to MRI. Metal artifact reduction techniques for MRI and CT are continually being researched for patients with spinal implants. Imaging is crucial for SBRT treatment planning and three-dimensional positional verification of the treatment isocentre prior to SBRT delivery. Structural and functional MRI may be helpful in post-treatment surveillance. DL tools may improve detection of vertebral metastasis and reduce time to MSCC diagnosis, enabling earlier institution of definitive therapy for better outcomes.
22
Tian Y, Zhao X, Huang W. Meta-learning approaches for learning-to-learn in deep learning: A survey. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.04.078.
23
Deep Learning Model for Grading Metastatic Epidural Spinal Cord Compression on Staging CT. Cancers (Basel) 2022;14:3219. PMID: 35804990; PMCID: PMC9264856; DOI: 10.3390/cancers14133219.
Abstract
Background: Metastatic epidural spinal cord compression (MESCC) is a disastrous complication of advanced malignancy. Deep learning (DL) models for automatic MESCC classification on staging CT were developed to aid earlier diagnosis. Methods: This retrospective study included 444 CT staging studies from 185 patients with suspected MESCC who underwent MRI spine studies within 60 days of the CT studies. The DL model training/validation dataset consisted of 316/358 (88%) CT studies and the test set of 42/358 (12%). Training/validation and test datasets were labeled in consensus by two subspecialized radiologists (6 and 11 years of experience) using the MRI studies as the reference standard. Test sets were labeled by the developed DL models and four radiologists (2-7 years of experience) for comparison. Results: DL models showed almost perfect interobserver agreement for classification of CT spine images into normal, low-, and high-grade MESCC, with kappas ranging from 0.873 to 0.911 (p < 0.001). The DL models (lowest κ = 0.873, 95% CI 0.858-0.887) also showed superior interobserver agreement compared to two of the four radiologists for three-class classification, including a specialist (κ = 0.820, 95% CI 0.803-0.837) and a general radiologist (κ = 0.726, 95% CI 0.706-0.747), both p < 0.001. Conclusion: DL models for MESCC classification on CT showed interobserver agreement comparable or superior to radiologists and could be used to aid earlier diagnosis.
24
Ogawa T, Yoshii T, Oyama J, Sugimura N, Akada T, Sugino T, Hashimoto M, Morishita S, Takahashi T, Motoyoshi T, Oyaizu T, Yamada T, Onuma H, Hirai T, Inose H, Nakajima Y, Okawa A. Detecting ossification of the posterior longitudinal ligament on plain radiographs using a deep convolutional neural network: a pilot study. Spine J 2022;22:934-940. PMID: 35017056; DOI: 10.1016/j.spinee.2022.01.004.
Abstract
BACKGROUND CONTEXT The rarity of cervical ossification of the posterior longitudinal ligament (OPLL) and its subtle radiological changes often make diagnosis on plain radiographs difficult. However, OPLL progression may lead to trauma-induced spinal cord injury, resulting in severe paralysis. To address these diagnostic difficulties, a deep learning approach using a convolutional neural network (CNN) was applied. PURPOSE The aim of our research was to evaluate the performance of a CNN model for diagnosing cervical OPLL. STUDY DESIGN AND SETTING Diagnostic image study. PATIENT SAMPLE This study included plain radiographs from 50 patients with cervical OPLL and 50 control patients. OUTCOME MEASURES For the CNN model performance evaluation, we calculated the area under the receiver operating characteristic curve (AUC). We also compared the sensitivity, specificity, and accuracy of the CNN's diagnoses with those of general orthopedic surgeons and spine specialists. METHODS Computed tomography was used as the gold standard for diagnosis. Radiographs of the cervical spine in neutral, flexion, and extension positions were used for training and validation of the CNN model. We used the PyTorch deep learning framework to construct the CNN architecture. RESULTS The accuracy of the CNN model was 90% (18/20), with a sensitivity and specificity of 80% and 100%, respectively. In contrast, the mean accuracy of orthopedic surgeons was 70%, with a sensitivity and specificity of 73% (SD: 0.12) and 67% (SD: 0.17), respectively. The mean accuracy of the spine surgeons was 75%, with a sensitivity and specificity of 80% (SD: 0.08) and 70% (SD: 0.08), respectively. The AUC of the CNN model based on the radiographs was 0.924. CONCLUSIONS The CNN model achieved high diagnostic accuracy and sufficient specificity in the diagnosis of OPLL.
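The sensitivity, specificity, and accuracy figures reported in studies like the one above follow directly from confusion-matrix counts. As an illustrative sketch (not the authors' code; the example counts are one hypothetical split of true/false positives and negatives consistent with the reported 18/20 result):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Compute sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)            # true-positive rate
    specificity = tn / (tn + fp)            # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical split for a 20-image test set (10 OPLL-positive, 10 controls)
sens, spec, acc = diagnostic_metrics(tp=8, fp=0, tn=10, fn=2)
print(sens, spec, acc)  # 0.8 1.0 0.9
```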
25
Qu B, Cao J, Qian C, Wu J, Lin J, Wang L, Ou-Yang L, Chen Y, Yan L, Hong Q, Zheng G, Qu X. Current development and prospects of deep learning in spine image analysis: a literature review. Quant Imaging Med Surg 2022;12:3454-3479. PMID: 35655825; PMCID: PMC9131328; DOI: 10.21037/qims-21-939.
Abstract
BACKGROUND AND OBJECTIVE As the spine is pivotal in the support and protection of the human body, much attention is given to the understanding of spinal diseases. Quick, accurate, and automatic analysis of spine images greatly enhances the efficiency with which spine conditions can be diagnosed. Deep learning (DL) is a representative artificial intelligence technology that has made encouraging progress in the last 6 years. However, it is still difficult for clinicians and technicians to fully understand this rapidly evolving field due to the diversity of applications, network structures, and evaluation criteria. This study aimed to provide clinicians and technicians with a comprehensive understanding of the development and prospects of DL spine image analysis by reviewing the published literature. METHODS A systematic literature search was conducted in the PubMed and Web of Science databases using the keywords "deep learning" and "spine", covering 1 January 2015 to 20 March 2021. A total of 79 English articles were reviewed. KEY CONTENT AND FINDINGS DL has been applied extensively to the segmentation, detection, diagnosis, and quantitative evaluation of spine images, using static or dynamic image information as well as local or non-local information. Its accuracy is comparable to that achieved manually by doctors. However, further exploration is needed in terms of data sharing, functional information, and network interpretability. CONCLUSIONS DL is a powerful method for spine image analysis. We believe that, with the joint efforts of researchers and clinicians, intelligent, interpretable, and reliable DL spine analysis methods will be widely applied in clinical practice in the future.
26
Hallinan JTPD, Zhu L, Zhang W, Lim DSW, Baskar S, Low XZ, Yeong KY, Teo EC, Kumarakulasinghe NB, Yap QV, Chan YH, Lin S, Tan JH, Kumar N, Vellayappan BA, Ooi BC, Quek ST, Makmur A. Deep Learning Model for Classifying Metastatic Epidural Spinal Cord Compression on MRI. Front Oncol 2022;12:849447. PMID: 35600347; PMCID: PMC9114468; DOI: 10.3389/fonc.2022.849447.
Abstract
Background: Metastatic epidural spinal cord compression (MESCC) is a devastating complication of advanced cancer. A deep learning (DL) model for automated MESCC classification on MRI could aid earlier diagnosis and referral. Purpose: To develop a DL model for automated classification of MESCC on MRI. Materials and Methods: Patients with known MESCC diagnosed on MRI between September 2007 and September 2017 were eligible. MRI studies with instrumentation, suboptimal image quality, or non-thoracic regions were excluded. Axial T2-weighted images were utilized. The internal dataset was split 82%/18% into training/validation and test sets, respectively; external testing was also performed. Internal training/validation data were labeled using the Bilsky MESCC classification by a musculoskeletal radiologist (10 years of experience) and a neuroradiologist (5 years of experience). These labels were used to train a DL model based on a prototypical convolutional neural network. Internal and external test sets were labeled by the musculoskeletal radiologist as the reference standard. To assess DL model performance and interobserver variability, test sets were labeled independently by the neuroradiologist, a spine surgeon (5 years of experience), and a radiation oncologist (11 years of experience). Inter-rater agreement (Gwet's kappa) and sensitivity/specificity were calculated. Results: Overall, 215 MRI spine studies were analyzed (164 patients; mean age 62 ± 12 [SD] years), with 177 (82%) used for training/validation and 38 (18%) for internal testing. On internal testing, the DL model and specialists all showed almost perfect agreement (kappas = 0.92-0.98, p < 0.001) for dichotomous Bilsky classification (low versus high grade) compared to the reference standard. Similar performance was seen on external testing of 32 MRI spine studies, with the DL model and specialists again showing almost perfect agreement (kappas = 0.94-0.95, p < 0.001) compared to the reference standard. Conclusion: The DL model showed agreement comparable to a subspecialist radiologist and clinical specialists for the classification of metastatic epidural spinal cord compression and could support earlier diagnosis and surgical referral.
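Gwet's kappa, used in the study above, is an agreement coefficient that is more stable than Cohen's kappa when category prevalences are skewed. A minimal sketch of Gwet's AC1 statistic for two raters follows; the three-class grades in the example are hypothetical, not data from the study:

```python
from collections import Counter

def gwet_ac1(ratings_a, ratings_b):
    """Gwet's AC1 agreement coefficient for two raters on categorical labels."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = sorted(set(ratings_a) | set(ratings_b))
    k = len(categories)
    # Observed agreement: fraction of items the raters label identically
    pa = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: based on mean category proportions across both raters
    counts = Counter(ratings_a) + Counter(ratings_b)
    pi = {c: counts[c] / (2 * n) for c in categories}
    pe = sum(p * (1 - p) for p in pi.values()) / (k - 1)
    return (pa - pe) / (1 - pe)

# Hypothetical three-class grades (e.g., normal / low / high) from two raters
a = [0, 0, 1, 1, 2]
b = [0, 0, 1, 2, 2]
print(round(gwet_ac1(a, b), 3))  # 0.701
```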
27
Liu H, Jiao M, Yuan Y, Ouyang H, Liu J, Li Y, Wang C, Lang N, Qian Y, Jiang L, Yuan H, Wang X. Benign and malignant diagnosis of spinal tumors based on deep learning and weighted fusion framework on MRI. Insights Imaging 2022;13:87. PMID: 35536493; PMCID: PMC9091071; DOI: 10.1186/s13244-022-01227-2.
Abstract
BACKGROUND The application of deep learning has allowed significant progress in medical imaging. However, few studies have focused on the diagnosis of benign versus malignant spinal tumors using medical imaging and age information at the patient level. This study proposes a multi-model weighted fusion framework (WFF) for benign and malignant diagnosis of spinal tumors based on magnetic resonance imaging (MRI) and age information. METHODS The proposed WFF comprised a tumor detection model, a sequence classification model, and an age information statistics module, built on sagittal MRI sequences from 585 patients with spinal tumors (270 benign, 315 malignant) treated at a collaborating hospital between January 2006 and December 2019. The results of the WFF were compared with those of one radiologist (D1) and two spine surgeons (D2 and D3). RESULTS With age information, the accuracy (ACC) of the WFF (0.821) was higher than that of the three doctors (D1: 0.686; D2: 0.736; D3: 0.636). Without age information, the ACC of the WFF (0.800) was likewise higher (D1: 0.750; D2: 0.664; D3: 0.614). CONCLUSIONS The proposed WFF is effective for the diagnosis of benign and malignant spinal tumors with complex histological types on MRI.
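The abstract above does not specify the exact fusion scheme, so as a hedged sketch, a weighted fusion of per-model malignancy probabilities might look like the following; the model outputs and weights are invented for illustration:

```python
def weighted_fusion(prob_list, weights):
    """Fuse per-model malignancy probabilities by a weighted average,
    then threshold at 0.5 for a benign (0) / malignant (1) call."""
    total = sum(weights)
    w = [x / total for x in weights]            # normalize weights to sum to 1
    n_cases = len(prob_list[0])
    fused = [sum(w[m] * prob_list[m][i] for m in range(len(prob_list)))
             for i in range(n_cases)]           # weighted average per case
    labels = [int(p >= 0.5) for p in fused]
    return fused, labels

# Hypothetical outputs from a detection model and a sequence-classification model
detect = [0.9, 0.2, 0.6]
seq    = [0.7, 0.4, 0.3]
fused, labels = weighted_fusion([detect, seq], weights=[0.6, 0.4])
print(labels)  # [1, 0, 0]
```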
28
Henschel L, Kügler D, Reuter M. FastSurferVINN: Building resolution-independence into deep learning segmentation methods-A solution for HighRes brain MRI. Neuroimage 2022;251:118933. PMID: 35122967; PMCID: PMC9801435; DOI: 10.1016/j.neuroimage.2022.118933.
Abstract
Leading neuroimaging studies have pushed 3T MRI acquisition resolutions below 1.0 mm for improved structure definition and morphometry. Yet only a few time-intensive automated image analysis pipelines have been validated for high-resolution (HiRes) settings. Efficient deep learning approaches, on the other hand, rarely support more than one fixed resolution (usually 1.0 mm). Furthermore, the lack of a standard submillimeter resolution, as well as the limited availability of diverse HiRes data with sufficient coverage of scanner, age, disease, or genetic variance, poses additional, unsolved challenges for training HiRes networks. Incorporating resolution-independence into deep learning-based segmentation, i.e., the ability to segment images at their native resolution across a range of voxel sizes, promises to overcome these challenges, yet no such approach currently exists. We fill this gap by introducing a Voxel-size Independent Neural Network (VINN) for resolution-independent segmentation tasks and present FastSurferVINN, which (i) establishes and implements resolution-independence for deep learning as the first method simultaneously supporting 0.7-1.0 mm whole-brain segmentation, (ii) significantly outperforms state-of-the-art methods across resolutions, and (iii) mitigates the data imbalance problem present in HiRes datasets. Overall, internal resolution-independence mutually benefits both HiRes and 1.0 mm MRI segmentation. With our rigorously validated FastSurferVINN, we distribute a rapid tool for morphometric neuroimage analysis. The VINN architecture, furthermore, represents an efficient resolution-independent segmentation method for wider application.
29
Chen S, Urban G, Baldi P. Weakly Supervised Polyp Segmentation in Colonoscopy Images Using Deep Neural Networks. J Imaging 2022;8:121. PMID: 35621885; PMCID: PMC9144698; DOI: 10.3390/jimaging8050121.
Abstract
Colorectal cancer (CRC) is a leading cause of mortality worldwide, and preventive screening modalities such as colonoscopy have been shown to noticeably decrease CRC incidence and mortality. Improving colonoscopy quality remains a challenging task due to limiting factors including the training levels of colonoscopists and the variability in polyp sizes, morphologies, and locations. Deep learning methods have led to state-of-the-art systems for the identification of polyps in colonoscopy videos. In this study, we show that deep learning can also be applied to the segmentation of polyps in real time, and that the underlying models can be trained using mostly weakly labeled data in the form of bounding-box annotations that do not contain precise contour information. A novel dataset, Polyp-Box-Seg, of 4070 colonoscopy images with polyps from over 2000 patients was collected, and a subset of 1300 images was manually annotated with segmentation masks. A series of models was trained to evaluate various strategies that utilize bounding-box annotations for segmentation tasks. A model trained on the 1300 polyp images with segmentation masks achieves a Dice coefficient of 81.52%, which improves significantly to 85.53% when using a weakly supervised strategy leveraging bounding-box images. The Polyp-Box-Seg dataset, together with a real-time video demonstration of the segmentation system, is publicly available.
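The Dice coefficient reported above measures the overlap between a predicted and a reference segmentation mask. A minimal sketch for flat binary masks (the example masks are hypothetical, not drawn from the Polyp-Box-Seg dataset):

```python
def dice_coefficient(pred, target):
    """Dice similarity between two binary masks given as flat 0/1 sequences."""
    intersection = sum(p and t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    if total == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2 * intersection / total

pred   = [1, 1, 0, 1, 0, 0]
target = [1, 0, 0, 1, 1, 0]
print(dice_coefficient(pred, target))  # 2*2 / (3+3) ≈ 0.667
```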
30
Browne AW, Deyneka E, Ceccarelli F, To JK, Chen S, Tang J, Vu AN, Baldi PF. Deep learning to enable color vision in the dark. PLoS One 2022;17:e0265185. PMID: 35385502; PMCID: PMC8985995; DOI: 10.1371/journal.pone.0265185.
Abstract
Humans perceive light in the visible spectrum (400-700 nm). Some night vision systems use infrared light that is not perceptible to humans and the images rendered are transposed to a digital display presenting a monochromatic image in the visible spectrum. We sought to develop an imaging algorithm powered by optimized deep learning architectures whereby infrared spectral illumination of a scene could be used to predict a visible spectrum rendering of the scene as if it were perceived by a human with visible spectrum light. This would make it possible to digitally render a visible spectrum scene to humans when they are otherwise in complete “darkness” and only illuminated with infrared light. To achieve this goal, we used a monochromatic camera sensitive to visible and near infrared light to acquire an image dataset of printed images of faces under multispectral illumination spanning standard visible red (604 nm), green (529 nm) and blue (447 nm) as well as infrared wavelengths (718, 777, and 807 nm). We then optimized a convolutional neural network with a U-Net-like architecture to predict visible spectrum images from only near-infrared images. This study serves as a first step towards predicting human visible spectrum scenes from imperceptible near-infrared illumination. Further work can profoundly contribute to a variety of applications including night vision and studies of biological samples sensitive to visible light.
Affiliation(s)
- Andrew W. Browne (corresponding author)
  - Gavin Herbert Eye Institute, Center for Translational Vision Research, Department of Ophthalmology, University of California-Irvine, Irvine, CA, United States of America
  - Institute for Clinical and Translational Sciences, University of California-Irvine, Irvine, CA, United States of America
  - Department of Biomedical Engineering, University of California-Irvine, Irvine, CA, United States of America
- Ekaterina Deyneka
  - Department of Computer Science, University of California, Irvine, CA, United States of America
  - Institute for Genomics and Bioinformatics, University of California, Irvine, CA, United States of America
- Francesco Ceccarelli
  - Department of Computer Science, University of California, Irvine, CA, United States of America
  - Institute for Genomics and Bioinformatics, University of California, Irvine, CA, United States of America
- Josiah K. To
  - Gavin Herbert Eye Institute, Center for Translational Vision Research, Department of Ophthalmology, University of California-Irvine, Irvine, CA, United States of America
- Siwei Chen
  - Department of Computer Science, University of California, Irvine, CA, United States of America
  - Institute for Genomics and Bioinformatics, University of California, Irvine, CA, United States of America
- Jianing Tang
  - Department of Biomedical Engineering, University of California-Irvine, Irvine, CA, United States of America
- Anderson N. Vu
  - Gavin Herbert Eye Institute, Center for Translational Vision Research, Department of Ophthalmology, University of California-Irvine, Irvine, CA, United States of America
- Pierre F. Baldi (corresponding author)
  - Department of Computer Science, University of California, Irvine, CA, United States of America
  - Institute for Genomics and Bioinformatics, University of California, Irvine, CA, United States of America
|
31
|
Zhao S, Chen B, Chang H, Chen B, Li S. Reasoning discriminative dictionary-embedded network for fully automatic vertebrae tumor diagnosis. Med Image Anal 2022; 79:102456. [DOI: 10.1016/j.media.2022.102456] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2021] [Revised: 04/01/2022] [Accepted: 04/08/2022] [Indexed: 11/24/2022]
|
32
|
Ouyang H, Meng F, Liu J, Song X, Li Y, Yuan Y, Wang C, Lang N, Tian S, Yao M, Liu X, Yuan H, Jiang S, Jiang L. Evaluation of Deep Learning-Based Automated Detection of Primary Spine Tumors on MRI Using the Turing Test. Front Oncol 2022; 12:814667. [PMID: 35359400 PMCID: PMC8962659 DOI: 10.3389/fonc.2022.814667] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2021] [Accepted: 02/16/2022] [Indexed: 01/04/2023] Open
Abstract
BACKGROUND Recently, the Turing test has been used to investigate whether machines have intelligence similar to humans. Our study aimed to assess the ability of an artificial intelligence (AI) system for spine tumor detection using the Turing test. METHODS Our retrospective study data included 12179 images from 321 patients for developing AI detection systems and 6635 images from 187 patients for the Turing test. We utilized a deep learning-based tumor detection system with a Faster R-CNN architecture, which generates region proposals with a Region Proposal Network in the first stage and corrects the position and size of the bounding box of the lesion area in the second stage. Each choice question featured four bounding boxes enclosing an identical tumor. Three were detected by the proposed deep learning model, whereas the other was annotated by a doctor; the results were shown to six doctors as respondents. If a respondent did not correctly identify the image annotated by a human, the answer was considered a misclassification. If all misclassification rates were >30%, the respondents were considered unable to distinguish the AI-detected tumor from the human-annotated one, which indicated that the AI system passed the Turing test. RESULTS The average misclassification rates in the Turing test were 51.2% (95% CI: 45.7%–57.5%) in the axial view (maximum of 62%, minimum of 44%) and 44.5% (95% CI: 38.2%–51.8%) in the sagittal view (maximum of 59%, minimum of 36%). The misclassification rates of all six respondents were >30%; therefore, our AI system passed the Turing test. CONCLUSION Our proposed intelligent spine tumor detection system has a detection ability similar to that of annotating doctors and may be an efficient tool to assist radiologists or orthopedists in primary spine tumor detection.
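The pass criterion described in this abstract (every respondent's misclassification rate must exceed 30%) is simple enough to sketch; the function names below are illustrative, not from the study:

```python
def misclassification_rate(chosen, human_annotated):
    """Fraction of questions where a respondent failed to pick out the
    human-annotated bounding box among the candidates."""
    wrong = sum(1 for c, h in zip(chosen, human_annotated) if c != h)
    return wrong / len(chosen)

def ai_passes_turing_test(rates, threshold=0.30):
    """The AI passes if every respondent's misclassification rate exceeds
    the threshold, i.e. no respondent can reliably spot the human annotation."""
    return all(r > threshold for r in rates)

# Illustrative rates in the reported 36%-62% range for six respondents:
print(ai_passes_turing_test([0.62, 0.44, 0.51, 0.59, 0.36, 0.45]))  # True
```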
Affiliation(s)
- Hanqiang Ouyang
  - Department of Orthopaedics, Peking University Third Hospital, Beijing, China
  - Engineering Research Center of Bone and Joint Precision Medicine, Beijing, China
  - Beijing Key Laboratory of Spinal Disease Research, Beijing, China
- Fanyu Meng
  - Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
  - University of Chinese Academy of Sciences, Beijing, China
- Jianfang Liu
  - Department of Radiology, Peking University Third Hospital, Beijing, China
- Xinhang Song
  - Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- Yuan Li
  - Department of Radiology, Peking University Third Hospital, Beijing, China
- Yuan Yuan
  - Department of Radiology, Peking University Third Hospital, Beijing, China
- Chunjie Wang
  - Department of Radiology, Peking University Third Hospital, Beijing, China
- Ning Lang
  - Department of Radiology, Peking University Third Hospital, Beijing, China
- Shuai Tian
  - Department of Radiology, Peking University Third Hospital, Beijing, China
- Meiyi Yao
  - Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
  - University of Chinese Academy of Sciences, Beijing, China
- Xiaoguang Liu
  - Department of Orthopaedics, Peking University Third Hospital, Beijing, China
  - Engineering Research Center of Bone and Joint Precision Medicine, Beijing, China
  - Beijing Key Laboratory of Spinal Disease Research, Beijing, China
- Huishu Yuan (corresponding author)
  - Department of Radiology, Peking University Third Hospital, Beijing, China
- Shuqiang Jiang (corresponding author)
  - Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- Liang Jiang (corresponding author)
  - Department of Orthopaedics, Peking University Third Hospital, Beijing, China
  - Engineering Research Center of Bone and Joint Precision Medicine, Beijing, China
  - Beijing Key Laboratory of Spinal Disease Research, Beijing, China
|
33
|
Deep Learning for Orthopedic Disease Based on Medical Image Analysis: Present and Future. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12020681] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Since its development, deep learning has been quickly incorporated into the field of medicine and has had a profound impact. Since 2017, many studies applying deep learning-based diagnostics in the field of orthopedics have demonstrated outstanding performance. However, most published papers have focused on disease detection or classification, while areas such as segmentation and prediction remain comparatively underexplored. This review introduces research published in the field of orthopedics, classified by disease from the perspective of orthopedic surgeons, and discusses areas of future research. It offers orthopedic surgeons an overall understanding of artificial intelligence-based image analysis, along with the caveat that medical data should be handled with minimal bias, while providing developers and researchers with insight into the real-world context in which clinicians are embracing medical artificial intelligence.
|
34
|
Liu S, Feng M, Qiao T, Cai H, Xu K, Yu X, Jiang W, Lv Z, Wang Y, Li D. Deep Learning for the Automatic Diagnosis and Analysis of Bone Metastasis on Bone Scintigrams. Cancer Manag Res 2022; 14:51-65. [PMID: 35018121 PMCID: PMC8740774 DOI: 10.2147/cmar.s340114] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2021] [Accepted: 12/19/2021] [Indexed: 11/23/2022] Open
Abstract
OBJECTIVE To develop an approach for automatically analyzing bone metastases (BMs) on bone scintigrams based on deep learning technology. METHODS This research included a bone scan classification model, a regional segmentation model, an assessment model for tumor burden, and a diagnostic report generation model. Two hundred eighty patients with BMs and 341 patients without BMs were involved. Eighty percent of cases were randomly extracted from the two groups as the training set; the remaining cases served as the testing set. A deep residual convolutional neural network with different structures was used to determine whether metastatic bone lesions existed, and lesion regions were automatically segmented. A bone scan tumor burden index (BSTBI) was calculated; finally, a diagnostic report could be automatically generated. The sensitivity, specificity, and accuracy of the classification model were compared with those of three physicians with different levels of clinical experience. The Dice coefficient was used to evaluate the segmentation model and to compare it with an nnU-Net model. The correlation between BSTBI and blood alkaline phosphatase (ALP) level was analyzed to verify the validity of BSTBI. The performance of the report generation model was evaluated by the accuracy of the interpretation of the report. RESULTS In the testing set, the sensitivity, specificity, and accuracy of the classification model were 92.59%, 85.51%, and 88.62%, respectively. Its accuracy showed no statistically significant difference from that of the moderately experienced and experienced physicians and clearly outperformed that of the inexperienced physician. The Dice coefficient of the BM area was 0.7387 in the segmentation stage. Within the whole model framework, our segmentation model outperformed nnU-Net. The BSTBI value changed as the BMs changed, and there was a positive correlation between BSTBI and ALP level. The accuracy of the report generation model was 78.05%.
CONCLUSION Deep learning-based automatic analysis frameworks for BMs can accurately identify BMs and preliminarily realize a fully automatic analysis pipeline from raw data to report generation. BSTBI can be used as a quantitative indicator to assess the effect of therapy on BMs in different patients, or in the same patient before and after treatment.
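The reported positive correlation between BSTBI and ALP level is a plain Pearson coefficient; a generic sketch (not the authors' analysis code) of how such a correlation is computed:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples,
    e.g. per-patient BSTBI values against blood ALP levels."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A value near +1 indicates that the imaging-derived burden index rises with the blood marker, which is what the study used to support BSTBI's validity.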
Affiliation(s)
- Simin Liu
  - Department of Nuclear Medicine, Shanghai Tenth People’s Hospital, Tongji University School of Medicine, Shanghai, People’s Republic of China
- Ming Feng
  - School of Electronic and Information Engineering, Tongji University, Shanghai, People’s Republic of China
- Tingting Qiao
  - Department of Nuclear Medicine, Shanghai Tenth People’s Hospital, Tongji University School of Medicine, Shanghai, People’s Republic of China
- Haidong Cai
  - Department of Nuclear Medicine, Shanghai Tenth People’s Hospital, Tongji University School of Medicine, Shanghai, People’s Republic of China
- Kele Xu
  - National Key Laboratory of Parallel and Distributed Processing, National University of Defense Technology, Changsha, People’s Republic of China
- Xiaqing Yu
  - Department of Nuclear Medicine, Shanghai Tenth People’s Hospital, Tongji University School of Medicine, Shanghai, People’s Republic of China
- Wen Jiang
  - Department of Nuclear Medicine, Shanghai Tenth People’s Hospital, Tongji University School of Medicine, Shanghai, People’s Republic of China
- Zhongwei Lv
  - Department of Nuclear Medicine, Shanghai Tenth People’s Hospital, Tongji University School of Medicine, Shanghai, People’s Republic of China
- Yin Wang
  - School of Electronic and Information Engineering, Tongji University, Shanghai, People’s Republic of China
- Dan Li
  - Department of Nuclear Medicine, Shanghai Tenth People’s Hospital, Tongji University School of Medicine, Shanghai, People’s Republic of China
|
35
|
Moore MM, Iyer RS, Sarwani NI, Sze RW. Artificial intelligence development in pediatric body magnetic resonance imaging: best ideas to adapt from adults. Pediatr Radiol 2022; 52:367-373. [PMID: 33851261 PMCID: PMC8043435 DOI: 10.1007/s00247-021-05072-1] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/25/2020] [Revised: 02/09/2021] [Accepted: 03/22/2021] [Indexed: 12/22/2022]
Abstract
Emerging manifestations of artificial intelligence (AI) have featured prominently in virtually all industries and facets of our lives. Within the radiology literature, AI has shown great promise in improving and augmenting radiologist workflow. In pediatric imaging, while the greatest AI inroads have been made in musculoskeletal radiographs, there are certainly opportunities within thoracoabdominal MRI for AI to add significant value. In this paper, we briefly review non-interpretive and interpretive data science, with emphasis on potential avenues for advancement in pediatric body MRI based on similar work in adults. The discussion focuses on MRI image optimization, abdominal organ segmentation, and osseous lesion detection encountered during body MRI in children.
Affiliation(s)
- Michael M Moore
  - Department of Radiology, Penn State Children's Hospital, Penn State Health, 500 University Drive, H066, Hershey, PA, 17033, USA
- Ramesh S Iyer
  - Seattle Children's Hospital, University of Washington, Seattle, WA, USA
- Raymond W Sze
  - Children's Hospital of Philadelphia, University of Pennsylvania, Philadelphia, PA, USA
|
36
|
Zhang K, Qi S, Cai J, Zhao D, Yu T, Yue Y, Yao Y, Qian W. Content-based image retrieval with a Convolutional Siamese Neural Network: Distinguishing lung cancer and tuberculosis in CT images. Comput Biol Med 2022; 140:105096. [PMID: 34872010 DOI: 10.1016/j.compbiomed.2021.105096] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2021] [Revised: 11/17/2021] [Accepted: 11/27/2021] [Indexed: 12/21/2022]
Abstract
BACKGROUND CT findings of lung cancer and tuberculosis are sometimes similar, potentially leading to misdiagnosis. This study aims to combine deep learning and content-based image retrieval (CBIR) to distinguish lung cancer (LC) from nodular/mass atypical tuberculosis (NMTB) in CT images. METHODS This study proposes CBIR with a convolutional Siamese neural network (CBIR-CSNN). First, the lesion patches are cropped out to compose LC and NMTB datasets and the pairs of two arbitrary patches form a patch-pair dataset. Second, this patch-pair dataset is utilized to train a CSNN. Third, a test patch is treated as a query. The distance between this query and 20 patches in both datasets is calculated using the trained CSNN. The patches closest to the query are used to give the final prediction by majority voting. One dataset of 719 patients is used to train and test the CBIR-CSNN. Another external dataset with 30 patients is employed to verify CBIR-CSNN. RESULTS The CBIR-CSNN achieves excellent performance at the patch level with an mAP (Mean Average Precision) of 0.953, an accuracy of 0.947, and an area under the curve (AUC) of 0.970. At the patient level, the CBIR-CSNN correctly predicted all labels. In the external dataset, the CBIR-CSNN has an accuracy of 0.802 and AUC of 0.858 at the patch level, and 0.833 and 0.902 at the patient level. CONCLUSIONS This CBIR-CSNN can accurately and automatically distinguish LC from NMTB using CT images. CBIR-CSNN has excellent representation capability, compatibility with few-shot learning, and visual explainability.
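The retrieval-and-vote step described in this abstract (rank gallery patches by their Siamese-network distance to the query, then take a majority vote among the nearest) can be sketched in a few lines; the distances here would come from the trained network, and the function name is illustrative:

```python
def cbir_majority_vote(distances, labels, k=5):
    """Classify a query patch by majority vote among its k nearest gallery
    patches, ranked by ascending distance to the query."""
    nearest = sorted(zip(distances, labels))[:k]
    votes = {}
    for _, label in nearest:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Three of the five nearest patches are lung cancer (LC), so LC wins the vote:
gallery_labels = ["LC", "NMTB", "LC", "NMTB", "LC", "NMTB"]
dists = [0.10, 0.90, 0.15, 0.80, 0.20, 0.85]
print(cbir_majority_vote(dists, gallery_labels, k=5))  # LC
```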
Affiliation(s)
- Kai Zhang
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, China
  - Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, 110169, China
- Shouliang Qi
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, China
  - Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, 110169, China
- Jiumei Cai
  - Department of Health Medicine, General Hospital of Northern Theater Command, Shenyang, 110003, China
  - Department of Medical Imaging, Liaoning Cancer Hospital & Institute, Cancer Hospital of China Medical University, Shenyang, 110042, China
- Dan Zhao
  - Department of Medical Imaging, Liaoning Cancer Hospital & Institute, Cancer Hospital of China Medical University, Shenyang, 110042, China
- Tao Yu
  - Department of Medical Imaging, Liaoning Cancer Hospital & Institute, Cancer Hospital of China Medical University, Shenyang, 110042, China
- Yong Yue
  - Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, 110004, China
- Yudong Yao
  - Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ, 07030, USA
- Wei Qian
  - Department of Electrical and Computer Engineering, University of Texas at El Paso, El Paso, TX, 79968, USA
|
37
|
Zhang Y, Chan S, Park VY, Chang KT, Mehta S, Kim MJ, Combs FJ, Chang P, Chow D, Parajuli R, Mehta RS, Lin CY, Chien SH, Chen JH, Su MY. Automatic Detection and Segmentation of Breast Cancer on MRI Using Mask R-CNN Trained on Non-Fat-Sat Images and Tested on Fat-Sat Images. Acad Radiol 2022; 29 Suppl 1:S135-S144. [PMID: 33317911 PMCID: PMC8192591 DOI: 10.1016/j.acra.2020.12.001] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2020] [Revised: 12/02/2020] [Accepted: 12/03/2020] [Indexed: 01/03/2023]
Abstract
RATIONALE AND OBJECTIVES Computer-aided methods have been widely applied to diagnose lesions on breast magnetic resonance imaging (MRI); the first step is to identify abnormal areas. A deep learning Mask Regional Convolutional Neural Network (R-CNN) was implemented to search the entire set of images and detect suspicious lesions. MATERIALS AND METHODS Two DCE-MRI datasets were used: 241 patients acquired using a non-fat-sat sequence for training, and 98 patients acquired using a fat-sat sequence for testing. All patients had confirmed unilateral mass cancers. The tumor was segmented using the fuzzy c-means clustering algorithm to serve as the ground truth. Mask R-CNN was implemented with ResNet-101 as the backbone. The neural network output the bounding boxes and the segmented tumor for evaluation using the Dice Similarity Coefficient (DSC). The detection performance, and the trade-off between sensitivity and specificity, was analyzed using free-response receiver operating characteristic (FROC) analysis. RESULTS When the precontrast and subtraction images of both breasts were used as input, false positives from the heart and normal parenchymal enhancements could be minimized. The training set had 1469 positive slices (containing lesions) and 9135 negative slices. In 10-fold cross-validation, the mean accuracy was 0.86 and the DSC was 0.82. The testing dataset had 1568 positive and 7264 negative slices, with an accuracy of 0.75 and a DSC of 0.79. When the per-slice results were combined, 240 of 241 (99.5%) lesions in the training dataset and 98 of 98 (100%) lesions in the testing dataset were identified. CONCLUSION Deep learning using Mask R-CNN provided a feasible method to search breast MRI and to localize and segment lesions. This may be integrated with other artificial intelligence algorithms to develop a fully automatic breast MRI diagnostic system.
Affiliation(s)
- Yang Zhang
  - Department of Radiological Sciences, University of California, Irvine, CA, United States
- Siwa Chan
  - Department of Medical Imaging, Taichung Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Taichung, Taiwan
  - School of Medicine, Tzu Chi University, Hualien, Taiwan
- Vivian Youngjean Park
  - Department of Radiology and Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Kai-Ting Chang
  - Department of Radiological Sciences, University of California, Irvine, CA, United States
- Siddharth Mehta
  - Department of Radiological Sciences, University of California, Irvine, CA, United States
- Min Jung Kim
  - Department of Radiology and Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Freddie J. Combs
  - Department of Radiological Sciences, University of California, Irvine, CA, United States
- Peter Chang
  - Department of Radiological Sciences, University of California, Irvine, CA, United States
- Daniel Chow
  - Department of Radiological Sciences, University of California, Irvine, CA, United States
- Ritesh Parajuli
  - Department of Medicine, University of California, Irvine, CA, United States
- Rita S. Mehta
  - Department of Medicine, University of California, Irvine, CA, United States
- Chin-Yao Lin
  - Department of Medical Imaging, Taichung Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Taichung, Taiwan
  - School of Medicine, Tzu Chi University, Hualien, Taiwan
- Sou-Hsin Chien
  - Department of Medical Imaging, Taichung Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Taichung, Taiwan
  - School of Medicine, Tzu Chi University, Hualien, Taiwan
- Jeon-Hor Chen
  - Department of Radiological Sciences, University of California, Irvine, CA, United States
  - Department of Radiology, E-Da Hospital and I-Shou University, Kaohsiung, Taiwan
- Min-Ying Su (corresponding author; Tel: +1 (949) 824-4925; Fax: +1 (949) 824-3481)
  - Department of Radiological Sciences, University of California, Irvine, CA, United States
  - John Tu and Thomas Yuen Center for Functional Onco-Imaging, 164 Irvine Hall, University of California, Irvine, CA 92697-5020, USA
|
38
|
Katsuura Y, Colón LF, Perez AA, Albert TJ, Qureshi SA. A Primer on the Use of Artificial Intelligence in Spine Surgery. Clin Spine Surg 2021; 34:316-321. [PMID: 34050043 DOI: 10.1097/bsd.0000000000001211] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/23/2020] [Accepted: 04/14/2021] [Indexed: 11/26/2022]
Abstract
DESIGN This was a narrative review. PURPOSE To summarize artificial intelligence (AI) fundamentals as well as current and potential future uses in spine surgery. SUMMARY OF BACKGROUND DATA Although considered futuristic, the field of AI has already had a profound impact on many industries, including health care. Its ability to recognize patterns and self-correct to improve over time mimics human cognitive function, but on a much larger scale. METHODS Review of the literature on AI fundamentals and uses in spine pathology. RESULTS Machine learning (ML), a subset of AI, increases in complexity from classic ML to unsupervised ML to deep learning, which makes natural language processing and computer vision possible. AI-based tools have been developed to segment spinal structures, acquire basic spinal measurements, and even identify pathology such as tumor or degeneration. AI algorithms could be used to guide clinical management through treatment selection and patient-specific prognostication, and they even have the potential to power neuroprosthetic devices after spinal cord injury. CONCLUSION While the use of AI has pitfalls and should be adopted with caution, its future use is promising in the field of spine surgery and in medicine as a whole. LEVEL OF EVIDENCE Level IV.
Affiliation(s)
- Luis F Colón
  - Department of Orthopedic Surgery, University of Tennessee College of Medicine in Chattanooga, Chattanooga, TN
- Alberto A Perez
  - School of Medicine and Public Health, University of Wisconsin, Madison, WI
- Todd J Albert
  - Hospital for Special Surgery
  - Weill Cornell Medical College, New York, NY
- Sheeraz A Qureshi
  - Hospital for Special Surgery
  - Weill Cornell Medical College, New York, NY
|
39
|
McAleer S, Fast A, Xue Y, Seiler MJ, Tang WC, Balu M, Baldi P, Browne AW. Deep Learning-Assisted Multiphoton Microscopy to Reduce Light Exposure and Expedite Imaging in Tissues With High and Low Light Sensitivity. Transl Vis Sci Technol 2021; 10:30. [PMID: 34668935 PMCID: PMC8543395 DOI: 10.1167/tvst.10.12.30] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/14/2023] Open
Abstract
PURPOSE Two-photon excitation fluorescence (2PEF) reveals information about tissue function. Concerns about phototoxicity demand lower light exposure during imaging, but reducing excitation light limits fluorescence emission and degrades image quality. We applied deep learning (DL) super-resolution techniques to images acquired under low light exposure to yield high-resolution images of retinal and skin tissues. METHODS We analyzed two methods, a method based on U-Net and a patch-based regression method, using paired low- and high-resolution images of skin (550) and retina (1200). The retina dataset was acquired at low and high laser powers from retinal organoids, and the skin dataset was obtained by averaging 7 to 15 frames or 70 frames. Mean squared error (MSE) and the structural similarity index measure (SSIM) were the outcome measures for DL algorithm performance. RESULTS For the skin dataset, the patch-based method achieved a lower MSE (3.768 vs. 4.032) and a higher SSIM (0.824 vs. 0.783) than U-Net. For the retinal dataset, the patch-based method achieved an average MSE of 27,611 compared with 146,855 for U-Net, and an average SSIM of 0.636 compared with 0.607. The patch-based method was slower (303 seconds) than the U-Net method (<1 second). CONCLUSIONS DL can reduce excitation light exposure in 2PEF imaging while preserving image quality metrics. TRANSLATIONAL RELEVANCE DL methods will aid in translating 2PEF imaging from benchtop systems to in vivo imaging of light-sensitive tissues such as the retina.
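The two outcome measures used in this study can be illustrated with a minimal sketch; note that published SSIM values are normally averaged over sliding local windows, whereas the single-window simplification below treats the whole image as one window:

```python
import math

def mse(x, y):
    """Mean squared error between two images given as flat intensity lists."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

def ssim_global(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Single-window SSIM over whole 8-bit images; real implementations
    average this quantity over local windows."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))
```

Identical images give MSE 0 and SSIM 1; lower MSE and higher SSIM indicate a better reconstruction, which is the direction of the improvements reported above.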
Affiliation(s)
- Stephen McAleer
  - Department of Computer Science, University of California, Irvine, Irvine, CA, USA
  - Institute for Genomics and Bioinformatics, University of California, Irvine, Irvine, CA, USA
- Alexander Fast
  - Beckman Laser Institute and Medical Clinic, University of California, Irvine, Irvine, CA, USA
  - InfraDerm, LLC, Irvine, CA
- Yuntian Xue
  - Department of Biomedical Engineering, University of California, Irvine, Irvine, CA, USA
- Magdalene J Seiler
  - Department of Physical Medicine & Rehabilitation, University of California, Irvine, Irvine, CA, USA
  - Sue and Bill Gross Stem Cell Research Center, University of California, Irvine, Irvine, CA, USA
  - Gavin Herbert Eye Institute, Department of Ophthalmology, University of California, Irvine, Irvine, CA, USA
- William C Tang
  - Department of Biomedical Engineering, University of California, Irvine, Irvine, CA, USA
- Mihaela Balu
  - Beckman Laser Institute and Medical Clinic, University of California, Irvine, Irvine, CA, USA
- Pierre Baldi
  - Department of Computer Science, University of California, Irvine, Irvine, CA, USA
  - Institute for Genomics and Bioinformatics, University of California, Irvine, Irvine, CA, USA
- Andrew W Browne
  - Department of Biomedical Engineering, University of California, Irvine, Irvine, CA, USA
  - Gavin Herbert Eye Institute, Department of Ophthalmology, University of California, Irvine, Irvine, CA, USA
  - Institute for Clinical and Translational Science, University of California, Irvine, Irvine, CA, USA
|
40
|
Feng S, Liu B, Zhang Y, Zhang X, Li Y. Two-Stream Compare and Contrast Network for Vertebral Compression Fracture Diagnosis. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2496-2506. [PMID: 33999815 DOI: 10.1109/tmi.2021.3080991] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Differentiating Vertebral Compression Fractures (VCFs) associated with trauma and osteoporosis (benign VCFs) from those caused by metastatic cancer (malignant VCFs) is critically important for treatment decisions. So far, automatic VCF diagnosis has been solved in a two-step manner, i.e., first identifying VCFs and then classifying them as benign or malignant. In this paper, we instead model VCF diagnosis as a three-class classification problem, i.e., normal vertebrae, benign VCFs, and malignant VCFs. However, VCF recognition and classification require very different features, and both tasks are characterized by high intra-class variation and high inter-class similarity. Moreover, the dataset is extremely class-imbalanced. To address these challenges, we propose a novel Two-Stream Compare and Contrast Network (TSCCN) for VCF diagnosis. The network consists of two streams: a recognition stream, which learns to identify VCFs by comparing and contrasting adjacent vertebrae, and a classification stream, which compares and contrasts intra-class and inter-class examples to learn features for fine-grained classification. The two streams are integrated via a learnable weight control module that adaptively sets their contribution. TSCCN is evaluated on a dataset of 239 VCF patients and achieves an average sensitivity and specificity of 92.56% and 96.29%, respectively.
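For reference, the two summary metrics reported here derive directly from confusion-matrix counts; a generic sketch (per-class counts in a multi-class setting are obtained one-vs-rest):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts, not data from the study:
sens, spec = sensitivity_specificity(tp=90, fn=10, tn=95, fp=5)
print(sens, spec)  # 0.9 0.95
```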
41
Detecting pulmonary Coccidioidomycosis with deep convolutional neural networks. Machine Learning with Applications 2021. [DOI: 10.1016/j.mlwa.2021.100040]
42
Molder C, Lowe B, Zhan J. Learning Medical Materials From Radiography Images. Front Artif Intell 2021; 4:638299. [PMID: 34337390] [PMCID: PMC8320745] [DOI: 10.3389/frai.2021.638299]
Abstract
Deep learning models have been shown to be effective for material analysis, a subfield of computer vision, on natural images. In medicine, deep learning systems have been shown to more accurately analyze radiography images than algorithmic approaches and even experts. However, one major roadblock to applying deep learning-based material analysis on radiography images is a lack of material annotations accompanying image sets. To solve this, we first introduce an automated procedure to augment annotated radiography images into a set of material samples. Next, using a novel Siamese neural network that compares material sample pairs, called D-CNN, we demonstrate how to learn a perceptual distance metric between material categories. This system replicates the actions of human annotators by discovering attributes that encode traits that distinguish materials in radiography images. Finally, we update and apply MAC-CNN, a material recognition neural network, to demonstrate this system on a dataset of knee X-rays and brain MRIs with tumors. Experiments show that this system has strong predictive power on these radiography images, achieving 92.8% accuracy at predicting the material present in a local region of an image. Our system also draws interesting parallels between human perception of natural materials and materials in radiography images.
Affiliation(s)
- Carson Molder
- Data Science and Artificial Intelligence Lab, Department of Computer Science and Computer Engineering, College of Engineering, University of Arkansas, Fayetteville, AR, United States
- Benjamin Lowe
- Data Science and Artificial Intelligence Lab, Department of Computer Science and Computer Engineering, College of Engineering, University of Arkansas, Fayetteville, AR, United States
- Justin Zhan
- Data Science and Artificial Intelligence Lab, Department of Computer Science and Computer Engineering, College of Engineering, University of Arkansas, Fayetteville, AR, United States
43
Merali ZA, Colak E, Wilson JR. Applications of Machine Learning to Imaging of Spinal Disorders: Current Status and Future Directions. Global Spine J 2021; 11:23S-29S. [PMID: 33890805] [PMCID: PMC8076811] [DOI: 10.1177/2192568220961353]
Abstract
STUDY DESIGN Narrative review. OBJECTIVES We aim to describe current progress in the application of artificial intelligence and machine learning technology to automated analysis of imaging in patients with spinal disorders. METHODS A literature search of the PubMed database was performed. Relevant studies from all evidence levels were included. RESULTS Within spine surgery, artificial intelligence and machine learning technologies have achieved near-human performance on narrow image classification tasks on specific datasets in spinal degenerative disease, spinal deformity, spine trauma, and spine oncology. CONCLUSION Although substantial challenges remain to be overcome, it is clear that artificial intelligence and machine learning technology will influence the practice of spine surgery in the future.
Affiliation(s)
- Zamir A. Merali
- Department of Surgery, University of Toronto, Toronto, Ontario, Canada
- Errol Colak
- Department of Medical Imaging, University of Toronto, St. Michael’s Hospital, 30 Bond St, Toronto, ON, M5B 1W8, Canada
- Jefferson R. Wilson
- Department of Surgery, University of Toronto, Toronto, Ontario, Canada
- Department of Neurosurgery, St. Michael’s Hospital, Toronto, Ontario, Canada
44
Zhang W, Cai Q, Wei G. Comparison of Differential Diagnosis of Lung Cancer by Diffuse Weighted Imaging and Sagittal Imaging with Short Inversion Recovery Sequence. J Med Imaging Health Inform 2021. [DOI: 10.1166/jmihi.2021.3356]
Abstract
The differential diagnosis of advanced lung cancer is difficult in clinical practice. Our study aims to compare the value of diffusion-weighted imaging (DWI) and short tau inversion recovery (STIR) sagittal imaging in the differential diagnosis of lung cancer. 149 patients with non-small cell lung carcinoma (NSCLC) were enrolled and underwent DWI and STIR sagittal imaging. To quantify cancer types, we evaluated the apparent diffusion coefficient (ADC) value on DWI and the contrast ratio (CR) on sagittal imaging. The ADC values of NSCLC subclasses were significantly higher than those of small cell lung carcinoma (SCLC) (p < 0.01). The mean CRs were 1.59 for SCLC and 1.30 for NSCLC, a significant difference (p < 0.01). On ADC, large cell carcinomas (LCC) and adenocarcinomas differed significantly from small cell carcinomas (SCC), with no difference for squamous cell carcinomas (p > 0.05); on CRs, squamous cell carcinomas and adenocarcinomas differed significantly from SCC, with no difference for LCC (p > 0.05). Evaluation of feasible thresholds yielded 0.98 × 10⁻³ mm²/s for DWI and 1.37 for STIR. The specificity and accuracy were 78.5% and 85.3% for DWI, significantly higher than for STIR (56.3% and 61.0%). The combination of DWI and STIR sequences was superior to DWI alone, with an accuracy of 94.3%. DWI is more helpful than STIR in differentiating SCLC and NSCLC, and their combined use can significantly improve diagnostic accuracy.
Affiliation(s)
- Wei Zhang
- Department of Radiology, Affiliated Hospital of Integrated Traditional Chinese and Western Medicine, Nanjing University of Chinese Medicine, Nanjing, Jiangsu, 210028, China; Department of Oncology, Affiliated Hospital of Integrated Traditional Chinese and Western Medicine, Nanjing University of Chinese Medicine, Nanjing, Jiangsu, 210028, China
- Qingyu Cai
- Department of Radiology, Zhoupu Hospital, Shanghai, 201318, China
- Guoli Wei
- Department of Radiology, Affiliated Hospital of Integrated Traditional Chinese and Western Medicine, Nanjing University of Chinese Medicine, Nanjing, Jiangsu, 210028, China; Department of Oncology, Affiliated Hospital of Integrated Traditional Chinese and Western Medicine, Nanjing University of Chinese Medicine, Nanjing, Jiangsu, 210028, China
45
Zheng Q, Yang L, Zeng B, Li J, Guo K, Liang Y, Liao G. Artificial intelligence performance in detecting tumor metastasis from medical radiology imaging: A systematic review and meta-analysis. EClinicalMedicine 2021; 31:100669. [PMID: 33392486] [PMCID: PMC7773591] [DOI: 10.1016/j.eclinm.2020.100669]
Abstract
BACKGROUND Early diagnosis of tumor metastasis is crucial for clinical treatment. Artificial intelligence (AI) has shown great promise in the field of medicine. We therefore aimed to evaluate the diagnostic accuracy of AI algorithms in detecting tumor metastasis using medical radiology imaging. METHODS We searched PubMed and Web of Science for studies published from January 1, 1997, to January 30, 2020. Studies evaluating an AI model for the diagnosis of tumor metastasis from medical images were included. We excluded studies that used histopathology images or medical wave-form data and those focused on region-of-interest segmentation. Studies providing enough information to construct contingency tables were included in a meta-analysis. FINDINGS We identified 2620 studies, of which 69 were included. Among them, 34 studies were included in a meta-analysis with a pooled sensitivity of 82% (95% CI 79-84%), specificity of 84% (82-87%) and AUC of 0.90 (0.87-0.92). Analysis by AI algorithm showed a pooled sensitivity of 87% (83-90%) for machine learning and 86% (82-89%) for deep learning, and a pooled specificity of 89% (82-93%) for machine learning and 87% (82-91%) for deep learning. INTERPRETATION AI algorithms may be used for the diagnosis of tumor metastasis from medical radiology imaging with sensitivity and specificity equivalent or even superior to those of health-care professionals. At the same time, rigorous reporting standards with external validation and comparison to health-care professionals are urgently needed for AI applications in the medical field. FUNDING College students' innovative entrepreneurial training plan program.
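The per-study and pooled sensitivity/specificity figures above come from 2x2 contingency tables. A minimal sketch of that arithmetic, using made-up counts (real meta-analyses like this one fit bivariate random-effects models rather than naively summing tables):

```python
# Per-study sensitivity/specificity from 2x2 tables, plus a naive pooled
# estimate obtained by summing the tables. Counts are hypothetical.
studies = [
    # (TP, FP, FN, TN) per hypothetical study
    (80, 10, 15, 95),
    (45, 8, 5, 60),
    (120, 20, 30, 150),
]

def sens_spec(tp, fp, fn, tn):
    # sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)
    return tp / (tp + fn), tn / (tn + fp)

for s in studies:
    se, sp = sens_spec(*s)
    print(f"sensitivity={se:.2f} specificity={sp:.2f}")

# Naive pooling: sum the tables, then recompute the two rates.
tp, fp, fn, tn = map(sum, zip(*studies))
pooled_se, pooled_sp = sens_spec(tp, fp, fn, tn)
print(f"pooled sensitivity={pooled_se:.2f}, pooled specificity={pooled_sp:.2f}")
```

This illustrates why contingency-table data was an inclusion criterion for the meta-analysis: without the four counts, neither rate nor any pooled estimate can be reconstructed.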
Affiliation(s)
- Qiuhan Zheng
- Department of Oral and Maxillofacial Surgery, Guanghua School of Stomatology, Hospital of Stomatology, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, China
- Le Yang
- Department of Oral and Maxillofacial Surgery, Guanghua School of Stomatology, Hospital of Stomatology, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, China
- Bin Zeng
- Department of Oral and Maxillofacial Surgery, Guanghua School of Stomatology, Hospital of Stomatology, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, China
- Jiahao Li
- Department of Oral and Maxillofacial Surgery, Guanghua School of Stomatology, Hospital of Stomatology, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, China
- Kaixin Guo
- Department of Oral and Maxillofacial Surgery, Guanghua School of Stomatology, Hospital of Stomatology, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, China
- Yujie Liang
- Department of Oral and Maxillofacial Surgery, Guanghua School of Stomatology, Hospital of Stomatology, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, China
- Guiqing Liao
- Department of Oral and Maxillofacial Surgery, Guanghua School of Stomatology, Hospital of Stomatology, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, China
46
Abstract
Similarity has always been a key aspect in computer science and statistics. Any time two element vectors are compared, many different similarity approaches can be used, depending on the final goal of the comparison (Euclidean distance, Pearson correlation coefficient, Spearman's rank correlation coefficient, and others). But if the comparison has to be applied to more complex data samples, with features of different dimensionality and types that might need compression before processing, these measures would be unsuitable. In these cases, a siamese neural network may be the best choice: it consists of two identical artificial neural networks, each capable of learning the hidden representation of an input vector. The two neural networks are both feedforward perceptrons and employ error back-propagation during training; they work in tandem on the two inputs and compare their outputs at the end, usually through a cosine distance. The output generated by a siamese neural network execution can be considered the semantic similarity between the projected representations of the two input vectors. In this overview we first describe the siamese neural network architecture, and then outline its main applications in a number of computational fields since its appearance in 1994. Additionally, we list the programming languages, software packages, tutorials, and guides that readers can use to implement this powerful machine learning model.
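The architecture this overview describes, two identical networks sharing one set of weights whose outputs are compared by cosine similarity, can be sketched in a few lines. The encoder below is a toy two-layer perceptron with random, untrained weights; dimensions and weights are illustrative only.

```python
# Minimal siamese comparison: one shared encoder applied to both inputs,
# outputs compared by cosine similarity. Weights are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 8))   # shared first-layer weights
W2 = rng.normal(size=(8, 4))    # shared second-layer weights

def encode(x):
    """The shared 'twin': two dense layers with tanh activations."""
    h = np.tanh(x @ W1)
    return np.tanh(h @ W2)

def siamese_similarity(a, b):
    """Encode both inputs with the SAME weights, compare by cosine similarity."""
    za, zb = encode(a), encode(b)
    return float(za @ zb / (np.linalg.norm(za) * np.linalg.norm(zb)))

x = rng.normal(size=16)
print(siamese_similarity(x, x))   # identical inputs give similarity 1.0
print(siamese_similarity(x, rng.normal(size=16)))
```

In a trained siamese network the shared weights are learned with back-propagation so that similar pairs score near 1 and dissimilar pairs score lower; weight sharing is what guarantees the two branches project inputs into the same embedding space.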
47
AI applications in robotics, diagnostic image analysis and precision medicine: Current limitations, future trends, guidelines on CAD systems for medicine. Informatics in Medicine Unlocked 2021. [DOI: 10.1016/j.imu.2021.100596]
48
Kijowski R, Liu F, Caliva F, Pedoia V. Deep Learning for Lesion Detection, Progression, and Prediction of Musculoskeletal Disease. J Magn Reson Imaging 2020; 52:1607-1619. [PMID: 31763739] [PMCID: PMC7251925] [DOI: 10.1002/jmri.27001]
Abstract
Deep learning is one of the most exciting new areas in medical imaging. This review article provides a summary of the current clinical applications of deep learning for lesion detection, progression, and prediction of musculoskeletal disease on radiographs, computed tomography (CT), magnetic resonance imaging (MRI), and nuclear medicine. Deep-learning methods have shown success for estimating pediatric bone age, detecting fractures, and assessing the severity of osteoarthritis on radiographs. In particular, the high diagnostic performance of deep-learning approaches for estimating pediatric bone age and detecting fractures suggests that the new technology may soon become available for use in clinical practice. Recent studies have also documented the feasibility of using deep-learning methods for identifying a wide variety of pathologic abnormalities on CT and MRI including internal derangement, metastatic disease, infection, fractures, and joint degeneration. However, the detection of musculoskeletal disease on CT and especially MRI is challenging, as it often requires analyzing complex abnormalities on multiple slices of image datasets with different tissue contrasts. Thus, additional technical development is needed to create deep-learning methods for reliable and repeatable interpretation of musculoskeletal CT and MRI examinations. Furthermore, the diagnostic performance of all deep-learning methods for detecting and characterizing musculoskeletal disease must be evaluated in prospective studies using large image datasets acquired at different institutions with different imaging parameters and different imaging hardware before they can be implemented in clinical practice. Level of Evidence: 5. Technical Efficacy Stage: 2.
Affiliation(s)
- Richard Kijowski
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Fang Liu
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Francesco Caliva
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Valentina Pedoia
- Department of Radiology, University of California at San Francisco School of Medicine, San Francisco, California, USA
49
Vogrin M, Trojner T, Kelc R. Artificial intelligence in musculoskeletal oncological radiology. Radiol Oncol 2020; 55:1-6. [PMID: 33885240] [PMCID: PMC7877260] [DOI: 10.2478/raon-2020-0068]
Abstract
BACKGROUND Due to the rarity of primary bone tumors, precise radiologic diagnosis often requires an experienced musculoskeletal radiologist. In order to make the diagnosis more precise and to prevent the overlooking of potentially dangerous conditions, artificial intelligence has been continuously incorporated into medical practice in recent decades. This paper reviews some of the most promising systems developed, including those for diagnosis of primary and secondary bone tumors, breast, lung and colon neoplasms. CONCLUSIONS Although there is still a shortage of long-term studies confirming its benefits, there is probably a considerable potential for further development of computer-based expert systems aiming at a more efficient diagnosis of bone and soft tissue tumors.
Affiliation(s)
- Matjaz Vogrin
- Department of Orthopaedic Surgery, University Medical Center Maribor, Maribor, Slovenia
- Faculty of Medicine, University of Maribor, Maribor, Slovenia
- Teodor Trojner
- Department of Orthopaedic Surgery, University Medical Center Maribor, Maribor, Slovenia
- Robi Kelc
- Department of Orthopaedic Surgery, University Medical Center Maribor, Maribor, Slovenia
- Faculty of Medicine, University of Maribor, Maribor, Slovenia
50
Fu J, Li W, Du J, Xiao B. Multimodal medical image fusion via Laplacian pyramid and convolutional neural network reconstruction with local gradient energy strategy. Comput Biol Med 2020; 126:104048. [PMID: 33068809] [DOI: 10.1016/j.compbiomed.2020.104048]
Abstract
BACKGROUND In recent years, numerous fusion algorithms have been proposed for multimodal medical images. The Laplacian pyramid is one type of multiscale fusion method. Although pyramid-based fusion algorithms can fuse images well, they suffer from edge degradation, detail loss and image smoothing as the number of decomposition layers increases, which is harmful for medical diagnosis and analysis. METHOD This paper proposes a medical image fusion algorithm based on the Laplacian pyramid and convolutional neural network reconstruction with a local gradient energy strategy, which can greatly improve edge quality. First, multimodal medical images are reconstructed through a convolutional neural network. Then, the Laplacian pyramid is applied in the decomposition and fusion process. The optimal number of decomposition layers is determined by experiments. In addition, a local gradient energy fusion strategy is used to fuse the coefficients in each layer. Finally, the fused image is produced through the inverse Laplacian transform. RESULTS Compared with existing algorithms, our fusion results show better visual quality. Furthermore, our algorithm is considerably superior to the compared algorithms on objective indicators. In addition, in our fusion results for Alzheimer's disease and glioma, the disease details are much clearer than in the compared algorithms, which can provide a reliable basis for doctors to analyze disease and make pathological diagnoses.
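The decompose-fuse-reconstruct pipeline in this abstract can be sketched with plain NumPy. This is a simplified stand-in, not the paper's method: average pooling and nearest-neighbour upsampling replace the CNN reconstruction step, and the layer count, window, and toy images are illustrative.

```python
# Hedged sketch of Laplacian-pyramid fusion with a local-gradient-energy rule.
import numpy as np

def down(img):   # 2x downsample by average pooling (dims must be even)
    return img.reshape(img.shape[0]//2, 2, img.shape[1]//2, 2).mean(axis=(1, 3))

def up(img):     # 2x nearest-neighbour upsample
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels=3):
    pyr = []
    for _ in range(levels):
        low = down(img)
        pyr.append(img - up(low))   # band-pass residual at this scale
        img = low
    pyr.append(img)                 # coarsest low-pass level
    return pyr

def gradient_energy(img):
    gy, gx = np.gradient(img)
    return gx**2 + gy**2            # pointwise local gradient energy

def fuse(a, b, levels=3):
    pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    # Per level, keep whichever coefficient carries more gradient energy.
    fused = [np.where(gradient_energy(la) >= gradient_energy(lb), la, lb)
             for la, lb in zip(pa[:-1], pb[:-1])]
    fused.append((pa[-1] + pb[-1]) / 2)    # average the low-pass bases
    out = fused[-1]
    for lap in reversed(fused[:-1]):       # inverse transform: upsample and add
        out = up(out) + lap
    return out

a = np.zeros((16, 16)); a[4:12, 4:12] = 1.0   # toy "modality" images
b = np.zeros((16, 16)); b[0:8, 0:8] = 1.0
f = fuse(a, b)
print(f.shape)
```

Because each Laplacian level stores exactly the residual lost by downsampling, reconstruction is lossless: fusing an image with itself returns the original, and edge-rich coefficients from either input survive into the fused result.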
Affiliation(s)
- Jun Fu
- Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China
- Weisheng Li
- Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China
- Jiao Du
- School of Computer Science and Educational Software, Guangzhou University, Guangzhou, 510006, China
- Bin Xiao
- Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China