1. Dong X, Chen G, Zhu Y, Ma B, Ban X, Wu N, Ming Y. Artificial intelligence in skeletal metastasis imaging. Comput Struct Biotechnol J 2024;23:157-164. [PMID: 38144945; PMCID: PMC10749216; DOI: 10.1016/j.csbj.2023.11.007]
Abstract
In the field of metastatic skeletal oncology imaging, the role of artificial intelligence (AI) is becoming more prominent. Bone metastasis typically indicates the terminal stage of various malignant neoplasms. Once identified, it necessitates a comprehensive revision of the initial treatment regimen, and palliative care is often the only resort. Given the gravity of the condition, the diagnosis of bone metastasis should be approached with utmost caution. AI techniques are being evaluated for their efficacy in a range of medical imaging tasks, including object detection, disease classification, region segmentation, and prognosis prediction. These methods offer a standardized solution to the frequently subjective challenge of image interpretation; such standardization is especially desirable in bone metastasis imaging. This review describes the basic imaging modalities used in bone metastasis imaging, along with recent developments and current applications of AI in the respective imaging studies. These concrete examples emphasize the importance of using computer-aided systems in the clinical setting. The review culminates with an examination of the current limitations and prospects of AI in the realm of bone metastasis imaging. To establish the credibility of AI in this domain, further research efforts are required to enhance reproducibility and attain a robust level of empirical support.
Affiliation(s)
- Xiying Dong
- Department of Orthopedic Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing 100730, China
- Key Laboratory of Big Data for Spinal Deformities, Chinese Academy of Medical Sciences, Beijing 100730, China
- Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing, China
- Department of Urology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Guilin Chen
- Department of Orthopedic Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing 100730, China
- Key Laboratory of Big Data for Spinal Deformities, Chinese Academy of Medical Sciences, Beijing 100730, China
- Graduate School of Peking Union Medical College, Beijing 100730, China
- Yuanpeng Zhu
- Department of Orthopedic Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing 100730, China
- Key Laboratory of Big Data for Spinal Deformities, Chinese Academy of Medical Sciences, Beijing 100730, China
- Graduate School of Peking Union Medical College, Beijing 100730, China
- Boyuan Ma
- School of Intelligence Science and Technology, University of Science and Technology Beijing, Beijing, China
- Xiaojuan Ban
- School of Intelligence Science and Technology, University of Science and Technology Beijing, Beijing, China
- Nan Wu
- Department of Orthopedic Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing 100730, China
- Key Laboratory of Big Data for Spinal Deformities, Chinese Academy of Medical Sciences, Beijing 100730, China
- Beijing Key Laboratory for Genetic Research of Skeletal Deformity, Beijing 100730, China
- Yue Ming
- Department of Nuclear Medicine (PET-CT Center), National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
2. Motohashi M, Funauchi Y, Adachi T, Fujioka T, Otaka N, Kamiko Y, Okada T, Tateishi U, Okawa A, Yoshii T, Sato S. A New Deep Learning Algorithm for Detecting Spinal Metastases on Computed Tomography Images. Spine (Phila Pa 1976) 2024;49:390-397. [PMID: 38084012; PMCID: PMC10898548; DOI: 10.1097/brs.0000000000004889]
Abstract
STUDY DESIGN Retrospective diagnostic study. OBJECTIVE To automatically detect osteolytic bone metastasis lesions in the thoracolumbar region on conventional computed tomography (CT) scans, we developed a new deep learning (DL)-based computer-aided detection model. SUMMARY OF BACKGROUND DATA Radiographic detection of bone metastasis is often difficult, even for orthopaedic surgeons and diagnostic radiologists, with a consequent risk of pathologic fracture or spinal cord injury. Improved detection rates could prevent deterioration in patients' quality of life at the end stage of cancer. MATERIALS AND METHODS This study included CT scans acquired at Tokyo Medical and Dental University (TMDU) Hospital between 2016 and 2022. A total of 263 positive CT scans containing at least one osteolytic bone metastasis lesion in the thoracolumbar spine and 172 negative CT scans without bone metastasis were collected to train and validate the DL algorithm. A test dataset of 20 positive and 20 negative CT scans was collected separately from the training and validation datasets. To evaluate the performance of the established artificial intelligence (AI) model, sensitivity, precision, F1-score, and specificity were calculated. The clinical utility of the AI model was also evaluated through observer studies involving six orthopaedic surgeons and six radiologists. RESULTS The AI model showed a sensitivity, precision, and F1-score of 0.78, 0.68, and 0.72 per slice and 0.75, 0.36, and 0.48 per lesion, respectively. The observer studies revealed that the AI model had sensitivity comparable to that of orthopaedic or radiology experts and improved the sensitivity and F1-score of residents. CONCLUSION We developed a novel DL-based AI model for detecting osteolytic bone metastases in the thoracolumbar spine. Although further improvement in accuracy is needed, the current AI model may be applicable to clinical practice.
LEVEL OF EVIDENCE Level III.
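The per-slice and per-lesion figures above follow the standard definitions of sensitivity (recall), precision, and F1-score. A minimal sketch of how such metrics are computed from detection counts (the counts below are hypothetical, not the study's data):

```python
# Standard detection metrics computed from true-positive (tp),
# false-positive (fp), and false-negative (fn) counts.
def detection_metrics(tp: int, fp: int, fn: int) -> dict:
    sensitivity = tp / (tp + fn)   # recall: fraction of true lesions detected
    precision = tp / (tp + fp)     # fraction of detections that are true lesions
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "precision": precision, "f1": f1}

# Hypothetical per-lesion counts, for illustration only.
m = detection_metrics(tp=75, fp=133, fn=25)
print(m)
```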
Affiliation(s)
- Masataka Motohashi
- Department of Orthopaedic Surgery, Tokyo Medical and Dental University (TMDU), Tokyo, Japan
- Yuki Funauchi
- Department of Orthopaedic Surgery, Tokyo Medical and Dental University (TMDU), Tokyo, Japan
- Takuya Adachi
- Department of Diagnostic Radiology and Nuclear Medicine, Tokyo Medical and Dental University (TMDU), Graduate School of Medical and Dental Sciences, Tokyo, Japan
- Tomoyuki Fujioka
- Department of Artificial Intelligence Radiology, Tokyo Medical and Dental University (TMDU), Graduate School of Medical and Dental Sciences, Tokyo, Japan
- Naoya Otaka
- Research and Development Headquarters, NTT DATA Group Corporation, Tokyo, Japan
- Yuka Kamiko
- Research and Development Headquarters, NTT DATA Group Corporation, Tokyo, Japan
- Takashi Okada
- Research and Development Headquarters, NTT DATA Group Corporation, Tokyo, Japan
- Ukihide Tateishi
- Department of Diagnostic Radiology and Nuclear Medicine, Tokyo Medical and Dental University (TMDU), Graduate School of Medical and Dental Sciences, Tokyo, Japan
- Atsushi Okawa
- Department of Orthopaedic Surgery, Tokyo Medical and Dental University (TMDU), Tokyo, Japan
- Toshitaka Yoshii
- Department of Orthopaedic Surgery, Tokyo Medical and Dental University (TMDU), Tokyo, Japan
- Shingo Sato
- Department of Orthopaedic Surgery, Tokyo Medical and Dental University (TMDU), Tokyo, Japan
- Center for Innovative Cancer Treatment, Tokyo Medical and Dental University (TMDU), Tokyo, Japan
3. Lopez-Melia M, Magnin V, Marchand-Maillet S, Grabherr S. Deep learning for acute rib fracture detection in CT data: a systematic review and meta-analysis. Br J Radiol 2024;97:535-543. [PMID: 38323515; PMCID: PMC11027249; DOI: 10.1093/bjr/tqae014]
Abstract
OBJECTIVES To review studies on deep learning (DL) models for classification, detection, and segmentation of rib fractures in CT data, to determine their risk of bias (ROB), and to analyse the performance of acute rib fracture detection models. METHODS Research articles written in English were retrieved from PubMed, Embase, and Web of Science in April 2023. A study was included only if a DL model was used to classify, detect, or segment rib fractures, and only if the model was trained with CT data from humans. For the ROB assessment, the Quality Assessment of Diagnostic Accuracy Studies tool was used. The performance of acute rib fracture detection models was meta-analysed with forest plots. RESULTS A total of 27 studies were selected. About 75% of the studies showed ROB, due to not reporting the patient selection criteria, including control patients, or using 5-mm slice thickness CT scans. The sensitivity, precision, and F1-score of the subgroup of low-ROB studies were 89.60% (95% CI, 86.31%-92.90%), 84.89% (95% CI, 81.59%-88.18%), and 86.66% (95% CI, 84.62%-88.71%), respectively. The ROB subgroup differences test for the F1-score yielded a p-value below 0.1. CONCLUSION ROB in these studies mostly stems from inappropriate patient and data selection. Studies with low ROB achieved better F1-scores in acute rib fracture detection using DL models. ADVANCES IN KNOWLEDGE This systematic review provides a reference taxonomy of the current status of rib fracture detection with DL models; upcoming studies will benefit from our data extraction, ROB assessment, and meta-analysis.
Affiliation(s)
- Manel Lopez-Melia
- University Centre of Legal Medicine Lausanne-Geneva, Geneva 1206, Switzerland
- University Hospital and University of Geneva, Geneva 1205, Switzerland
- Virginie Magnin
- University Centre of Legal Medicine Lausanne-Geneva, Geneva 1206, Switzerland
- University Hospital and University of Geneva, Geneva 1205, Switzerland
- University Hospital and University of Lausanne, Lausanne 1005, Switzerland
- Silke Grabherr
- University Centre of Legal Medicine Lausanne-Geneva, Geneva 1206, Switzerland
- University Hospital and University of Geneva, Geneva 1205, Switzerland
- University Hospital and University of Lausanne, Lausanne 1005, Switzerland
4. Shao J, Lin H, Ding L, Li B, Xu D, Sun Y, Guan T, Dai H, Liu R, Deng D, Huang B, Feng S, Diao X, Gao Z. Deep learning for differentiation of osteolytic osteosarcoma and giant cell tumor around the knee joint on radiographs: a multicenter study. Insights Imaging 2024;15:35. [PMID: 38321327; PMCID: PMC10847082; DOI: 10.1186/s13244-024-01610-1]
Abstract
OBJECTIVES To develop a deep learning (DL) model for differentiating between osteolytic osteosarcoma (OS) and giant cell tumor (GCT) on radiographs. METHODS Patients with osteolytic OS and GCT proven by postoperative pathology were retrospectively recruited from four centers (center A: training and internal testing; centers B, C, and D: external testing). Sixteen radiologists with differing experience in musculoskeletal imaging diagnosis were divided into three groups and participated with or without the DL model's assistance. The DL model was built on the EfficientNet-B6 architecture, and a clinical model was trained using clinical variables. The performance of the various models was compared using McNemar's test. RESULTS Three hundred thirty-three patients were included (mean age, 27 years ± 12 [SD]; 186 men). Compared with the clinical model, the DL model achieved a higher area under the curve (AUC) in both the internal (0.97 vs. 0.77, p = 0.008) and external test sets (0.97 vs. 0.64, p < 0.001). In the total test set (internal plus external), the DL model achieved higher accuracy than the junior expert committee (93.1% vs. 72.4%; p < 0.001) and was comparable to the intermediate and senior expert committees (93.1% vs. 88.8%, p = 0.25, and vs. 87.1%, p = 0.35). With the DL model's assistance, the accuracy of the junior expert committee improved from 72.4% to 91.4% (p = 0.051). CONCLUSION The DL model accurately distinguished osteolytic OS and GCT with better performance than the junior radiologists, whose own diagnostic performance improved with the aid of the model, indicating its potential for the differential diagnosis of the two bone tumors on radiographs. CRITICAL RELEVANCE STATEMENT The deep learning model can accurately distinguish osteolytic osteosarcoma and giant cell tumor on radiographs, which may help radiologists improve diagnostic accuracy for the two types of tumors.
KEY POINTS • The DL model shows robust performance in distinguishing osteolytic osteosarcoma and giant cell tumor. • The diagnostic performance of the DL model is better than that of junior radiologists. • The DL model shows potential for differentiating osteolytic osteosarcoma and giant cell tumor.
Affiliation(s)
- Jingjing Shao
- Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Hongxin Lin
- Medical AI Lab, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, Guangdong, China
- Lei Ding
- Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Bing Li
- Medical AI Lab, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, Guangdong, China
- Danyang Xu
- Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Yang Sun
- Department of Radiology, Foshan Hospital of Traditional Chinese Medicine, Foshan, Guangdong, China
- Tianming Guan
- Department of Radiology, Hui Ya Hospital of The First Affiliated Hospital, Sun Yat-Sen University, Huizhou, Guangdong, China
- Haiyang Dai
- Department of Radiology, People's Hospital of Huizhou City Center, Huizhou, Guangdong, China
- Ruihao Liu
- Medical AI Lab, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, Guangdong, China
- Demao Deng
- Department of Radiology, The People's Hospital of Guangxi Zhuang Autonomous Region, Guanxi Academy of Medical Science, Nanning, Guangxi, China
- Bingsheng Huang
- Medical AI Lab, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, Guangdong, China
- Shiting Feng
- Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Xianfen Diao
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Medicine, Shenzhen University, Shenzhen, Guangdong, China
- Zhenhua Gao
- Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Department of Radiology, Hui Ya Hospital of The First Affiliated Hospital, Sun Yat-Sen University, Huizhou, Guangdong, China
5. Sampath K, Rajagopal S, Chintanpalli A. A comparative analysis of CNN-based deep learning architectures for early diagnosis of bone cancer using CT images. Sci Rep 2024;14:2144. [PMID: 38273131; PMCID: PMC10811327; DOI: 10.1038/s41598-024-52719-8]
Abstract
Bone cancer is a rare disease in which cells in the bone grow out of control, destroying normal bone tissue. A benign bone tumor is harmless and does not spread to other body parts, whereas a malignant one can spread and be harmful. According to Cancer Research UK (2021), the survival rate for patients with bone cancer is 40%, and early detection can increase the chances of survival by enabling treatment at the initial stages. Prior detection of these lumps or masses can reduce the risk of death and allow bone cancer to be treated early. The goal of the current study was to utilize image processing techniques and a deep learning-based convolutional neural network (CNN) to classify normal and cancerous bone images. Medical image processing techniques, such as pre-processing (e.g., median filtering), K-means clustering segmentation, and Canny edge detection, were used to detect the cancer region in computed tomography (CT) images for the parosteal osteosarcoma, enchondroma, and osteochondroma types of bone cancer. After segmentation, the normal and cancer-affected images were classified using various existing CNN-based models. The results revealed that the AlexNet model performed best, with a training accuracy of 98%, validation accuracy of 98%, and testing accuracy of 100%.
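The segmentation step described above clusters pixel intensities before edge detection. A toy, dependency-free sketch of 1-D K-means on grayscale values (illustrative only, not the authors' implementation):

```python
# Toy 1-D K-means on pixel intensities (k=2: background vs. bright region).
def kmeans_1d(values, k=2, iters=20):
    # Initialize centroids spread across the observed intensity range.
    lo, hi = min(values), max(values)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        # Assign each value to its nearest centroid, then recompute means.
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# Bimodal toy intensities: dark background vs. bright bone region.
pixels = [10, 12, 11, 14, 200, 205, 198, 210, 13, 202]
print(sorted(kmeans_1d(pixels)))
```

A real pipeline would run this over all pixels of a median-filtered CT slice and keep the cluster containing the suspected lesion.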
Affiliation(s)
- Kanimozhi Sampath
- Department of Sensor and Biomedical Technology, School of Electronics Engineering, Vellore Institute of Technology, Vellore, 632014, India
- Sivakumar Rajagopal
- Department of Sensor and Biomedical Technology, School of Electronics Engineering, Vellore Institute of Technology, Vellore, 632014, India
- Ananthakrishna Chintanpalli
- Department of Communication Engineering, School of Electronics Engineering, Vellore Institute of Technology, Vellore, 632014, India
6. Khomduean P, Phuaudomcharoen P, Boonchu T, Taetragool U, Chamchoy K, Wimolsiri N, Jarrusrojwuttikul T, Chuajak A, Techavipoo U, Tweeatsani N. Segmentation of lung lobes and lesions in chest CT for the classification of COVID-19 severity. Sci Rep 2023;13:20899. [PMID: 38017029; PMCID: PMC10684885; DOI: 10.1038/s41598-023-47743-z]
Abstract
To precisely determine the severity of COVID-19-related pneumonia, computed tomography (CT) is a beneficial imaging modality for patient monitoring and therapy planning. We therefore aimed to develop a deep learning-based image segmentation model to automatically assess lung lesions related to COVID-19 infection and calculate the total severity score (TSS). The dataset consisted of 124 COVID-19 patients from Chulabhorn Hospital, divided into 28 cases without lung lesions and 96 cases with lung lesions, whose severity was categorized by radiologists in terms of TSS. The model combined a 3D-UNet with pretrained DenseNet and ResNet models to segment the lung lobes and determine the percentage of lung involvement due to COVID-19 infection; segmentation quality was measured with the Dice similarity coefficient (DSC). Our final model, a 3D-UNet integrated with DenseNet169, segmented lung lobes and lesions with Dice similarity coefficients of 91.52% and 76.89%, respectively. The calculated TSS values were similar to those assigned by radiologists, with an R2 of 0.842. The correlation between the ground-truth TSS and the model's prediction (0.890) was greater than that of the radiologist (0.709).
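The Dice similarity coefficient used above measures overlap between a predicted and a ground-truth binary mask, DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch (the masks here are flattened toy examples, not the study's data):

```python
# Dice similarity coefficient for two binary masks of equal length.
def dice(pred, truth):
    assert len(pred) == len(truth)
    intersection = sum(p * t for p, t in zip(pred, truth))  # |A ∩ B|
    total = sum(pred) + sum(truth)                          # |A| + |B|
    return 2 * intersection / total if total else 1.0

# Toy flattened binary masks for illustration.
pred  = [0, 1, 1, 1, 0, 0, 1, 0]
truth = [0, 1, 1, 0, 0, 1, 1, 0]
print(dice(pred, truth))  # 0.75
```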
Affiliation(s)
- Prachaya Khomduean
- Centre of Learning and Research in Celebration of HRH Princess Chulabhorn's 60th Birthday Anniversary, Chulabhorn Royal Academy, Bangkok, Thailand
- Department of Computer Engineering, Faculty of Engineering, King Mongkut's University of Technology Thonburi, Bangkok, Thailand
- Princess Srisavangavadhana College of Medicine, Chulabhorn Royal Academy, Bangkok, Thailand
- Pongpat Phuaudomcharoen
- Chulabhorn Hospital, Chulabhorn Royal Academy, Bangkok, Thailand
- Department of Computer Engineering, Faculty of Engineering, King Mongkut's University of Technology Thonburi, Bangkok, Thailand
- Princess Srisavangavadhana College of Medicine, Chulabhorn Royal Academy, Bangkok, Thailand
- Totsaporn Boonchu
- Chulabhorn Hospital, Chulabhorn Royal Academy, Bangkok, Thailand
- Department of Computer Engineering, Faculty of Engineering, King Mongkut's University of Technology Thonburi, Bangkok, Thailand
- Princess Srisavangavadhana College of Medicine, Chulabhorn Royal Academy, Bangkok, Thailand
- Unchalisa Taetragool
- Department of Computer Engineering, Faculty of Engineering, King Mongkut's University of Technology Thonburi, Bangkok, Thailand
- Kamonwan Chamchoy
- Princess Srisavangavadhana College of Medicine, Chulabhorn Royal Academy, Bangkok, Thailand
- Nat Wimolsiri
- Chulabhorn Hospital, Chulabhorn Royal Academy, Bangkok, Thailand
- Tanadul Jarrusrojwuttikul
- Queen Savang Vadhana Memorial Hospital, Chonburi, Thailand
- Faculty of Health Science Technology, HRH Princess Chulabhorn College of Medical Science, Chulabhorn Royal Academy, Bangkok, Thailand
- Ammarut Chuajak
- Queen Savang Vadhana Memorial Hospital, Chonburi, Thailand
- Faculty of Health Science Technology, HRH Princess Chulabhorn College of Medical Science, Chulabhorn Royal Academy, Bangkok, Thailand
- Udomchai Techavipoo
- Faculty of Health Science Technology, HRH Princess Chulabhorn College of Medical Science, Chulabhorn Royal Academy, Bangkok, Thailand
- Numfon Tweeatsani
- Faculty of Health Science Technology, HRH Princess Chulabhorn College of Medical Science, Chulabhorn Royal Academy, Bangkok, Thailand
7. Jin L, Sun T, Liu X, Cao Z, Liu Y, Chen H, Ma Y, Zhang J, Zou Y, Liu Y, Shi F, Shen D, Wu J. A multi-center performance assessment for automated histopathological classification and grading of glioma using whole slide images. iScience 2023;26:108041. [PMID: 37876818; PMCID: PMC10590813; DOI: 10.1016/j.isci.2023.108041]
Abstract
Accurate pathological classification and grading of gliomas are crucial in clinical diagnosis and treatment. The application of deep learning techniques holds promise for automated histological pathology diagnosis. In this study, we collected 733 whole slide images from four medical centers, of which 456 were used for model training, 150 for internal validation, and 127 for multi-center testing. The study covers five common glioma types. A subtask-guided multi-instance learning image-to-label training pipeline was employed, leveraging "patch prompting" so that the model converged at reasonable computational cost. Experiments showed an overall accuracy of 0.79 on the internal validation dataset and 0.73 on the multi-center testing dataset. The findings suggest a minor yet acceptable performance decrease on multi-center data, demonstrating the model's strong generalizability and establishing a robust foundation for future clinical applications.
Affiliation(s)
- Lei Jin
- Glioma Surgery Division, Neurologic Surgery Department, Huashan Hospital Fudan University, Shanghai 200040, China
- National Center for Neurological Disorders, Huashan Hospital Fudan University, Shanghai 200040, China
- Tianyang Sun
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd, Shanghai 200030, China
- Xi Liu
- Glioma Surgery Division, Neurologic Surgery Department, Huashan Hospital Fudan University, Shanghai 200040, China
- National Center for Neurological Disorders, Huashan Hospital Fudan University, Shanghai 200040, China
- Zehong Cao
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd, Shanghai 200030, China
- Yan Liu
- Glioma Surgery Division, Neurologic Surgery Department, Huashan Hospital Fudan University, Shanghai 200040, China
- National Center for Neurological Disorders, Huashan Hospital Fudan University, Shanghai 200040, China
- Hong Chen
- National Center for Neurological Disorders, Huashan Hospital Fudan University, Shanghai 200040, China
- Department of Pathology, Huashan Hospital Fudan University, Shanghai 200040, China
- Yixin Ma
- Glioma Surgery Division, Neurologic Surgery Department, Huashan Hospital Fudan University, Shanghai 200040, China
- National Center for Neurological Disorders, Huashan Hospital Fudan University, Shanghai 200040, China
- Jun Zhang
- Wuhan Zhongji Biotechnology Co., Ltd, Wuhan 430206, China
- Yaping Zou
- Wuhan Zhongji Biotechnology Co., Ltd, Wuhan 430206, China
- Yingchao Liu
- Department of Neurosurgery, The Provincial Hospital Affiliated to Shandong First Medical University, Shandong 250021, China
- Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd, Shanghai 200030, China
- Dinggang Shen
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd, Shanghai 200030, China
- School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China
- Shanghai Clinical Research and Trial Center, Shanghai 201210, China
- Jinsong Wu
- Glioma Surgery Division, Neurologic Surgery Department, Huashan Hospital Fudan University, Shanghai 200040, China
- National Center for Neurological Disorders, Huashan Hospital Fudan University, Shanghai 200040, China
8. Koike Y, Yui M, Nakamura S, Yoshida A, Takegawa H, Anetai Y, Hirota K, Tanigawa N. Artificial intelligence-aided lytic spinal bone metastasis classification on CT scans. Int J Comput Assist Radiol Surg 2023;18:1867-1874. [PMID: 36991276; DOI: 10.1007/s11548-023-02880-8]
Abstract
PURPOSE Spinal bone metastases directly affect quality of life, and patients with lytic-dominant lesions are at high risk of neurological symptoms and fractures. To detect and classify lytic spinal bone metastases on routine computed tomography (CT) scans, we developed a deep learning (DL)-based computer-aided detection (CAD) system. METHODS We retrospectively analyzed 2125 diagnostic and radiotherapeutic CT images of 79 patients. Images annotated as tumor (positive) or not (negative) were randomized into training (1782 images) and test (343 images) datasets. The YOLOv5m architecture was used to detect vertebrae on whole CT scans. The InceptionV3 architecture, with transfer learning, was used to classify the presence or absence of lytic lesions on CT images containing vertebrae. The DL models were evaluated via fivefold cross-validation. For vertebra detection, bounding-box accuracy was estimated using intersection over union (IoU). The area under the receiver operating characteristic curve (AUC) was used to evaluate lesion classification, and the accuracy, precision, recall, and F1-score were determined. The gradient-weighted class activation mapping (Grad-CAM) technique was used for visual interpretation. RESULTS The computation time was 0.44 s per image. The average IoU of the predicted vertebrae was 0.923 ± 0.052 (range, 0.684-1.000) for the test datasets. In the binary classification task, the accuracy, precision, recall, F1-score, and AUC for the test datasets were 0.872, 0.948, 0.741, 0.832, and 0.941, respectively. Heat maps constructed using the Grad-CAM technique were consistent with the locations of lytic lesions. CONCLUSION Our artificial intelligence-aided CAD system using two DL models could rapidly identify vertebrae on whole CT images and detect lytic spinal bone metastases, although further evaluation of diagnostic accuracy with a larger sample size is required.
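The IoU used above to score predicted vertebra bounding boxes is the ratio of overlap area to union area. A minimal sketch with hypothetical axis-aligned boxes given as (x1, y1, x2, y2):

```python
# Intersection over union for two axis-aligned boxes (x1, y1, x2, y2).
def iou(a, b):
    # Coordinates of the intersection rectangle (may be empty).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Hypothetical predicted vs. ground-truth boxes.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```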
Affiliation(s)
- Yuhei Koike
- Department of Radiology, Kansai Medical University, 2-5-1 Shinmachi, Hirakata, Osaka, 573-1010, Japan
- Midori Yui
- Department of Radiology, Kansai Medical University, 2-5-1 Shinmachi, Hirakata, Osaka, 573-1010, Japan
- Satoaki Nakamura
- Department of Radiology, Kansai Medical University, 2-5-1 Shinmachi, Hirakata, Osaka, 573-1010, Japan
- Asami Yoshida
- Department of Radiology, Kansai Medical University, 2-5-1 Shinmachi, Hirakata, Osaka, 573-1010, Japan
- Hideki Takegawa
- Department of Radiology, Kansai Medical University, 2-5-1 Shinmachi, Hirakata, Osaka, 573-1010, Japan
- Yusuke Anetai
- Department of Radiology, Kansai Medical University, 2-5-1 Shinmachi, Hirakata, Osaka, 573-1010, Japan
- Kazuki Hirota
- Department of Radiology, Kansai Medical University, 2-5-1 Shinmachi, Hirakata, Osaka, 573-1010, Japan
- Noboru Tanigawa
- Department of Radiology, Kansai Medical University, 2-5-1 Shinmachi, Hirakata, Osaka, 573-1010, Japan
9. Meng Y, Yang Y, Hu M, Zhang Z, Zhou X. Artificial intelligence-based radiomics in bone tumors: technical advances and clinical application. Semin Cancer Biol 2023;95:75-87. [PMID: 37499847; DOI: 10.1016/j.semcancer.2023.07.003]
Abstract
Radiomics is the extraction of predefined mathematical features from medical images for predicting variables of clinical interest. Recent research has demonstrated that radiomics features can be processed by artificial intelligence algorithms to reveal complex patterns and trends for diagnosis and for prediction of prognosis and response to treatment in various types of cancer. Artificial intelligence tools can utilize radiological images to solve next-generation issues in clinical decision making. Bone tumors can be classified as primary or secondary (metastatic); osteosarcoma, Ewing sarcoma, and chondrosarcoma are the dominant primary bone tumors. The development of bone tumor model systems, relevant research, and the assessment of novel treatment methods are ongoing to improve clinical outcomes, notably for patients with metastases. Artificial intelligence and radiomics have been utilized across almost the full spectrum of clinical care of bone tumors. Radiomics models have achieved excellent performance in the diagnosis and grading of bone tumors and enable prediction of overall survival, metastases, and recurrence. Radiomics features have also shown promise in assisting therapeutic planning and evaluation, especially of neoadjuvant chemotherapy. This review provides an overview of the evolution of, and opportunities for, artificial intelligence in imaging, with a focus on hand-crafted and deep learning-based radiomics approaches. We summarize the current applications of artificial intelligence-based radiomics in both primary and metastatic bone tumors, and discuss the limitations and future opportunities in this field. In the era of personalized medicine, an in-depth understanding of emerging artificial intelligence-based radiomics approaches will bring innovative solutions to the management of bone tumors and advance their clinical application.
Collapse
Affiliation(s)
- Yichen Meng
- Department of Orthopedics, Second Affiliated Hospital of Naval Medical University, Shanghai 200003, PR China
| | - Yue Yang
- Department of Orthopedics, Second Affiliated Hospital of Naval Medical University, Shanghai 200003, PR China
| | - Miao Hu
- Department of Orthopedics, Second Affiliated Hospital of Naval Medical University, Shanghai 200003, PR China
| | - Zheng Zhang
- Department of Orthopedics, Second Affiliated Hospital of Naval Medical University, Shanghai 200003, PR China.
| | - Xuhui Zhou
- Department of Orthopedics, Second Affiliated Hospital of Naval Medical University, Shanghai 200003, PR China.
| |
Collapse
|
10
|
Xiong Y, Guo W, Liang Z, Wu L, Ye G, Liang YY, Wen C, Yang F, Chen S, Zeng XW, Xu F. Deep learning-based diagnosis of osteoblastic bone metastases and bone islands in computed tomography images: a multicenter diagnostic study. Eur Radiol 2023; 33:6359-6368. [PMID: 37060446 PMCID: PMC10415522 DOI: 10.1007/s00330-023-09573-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2023] [Revised: 03/08/2023] [Accepted: 03/20/2023] [Indexed: 04/16/2023]
Abstract
OBJECTIVE To develop and validate a deep learning (DL) model based on CT for differentiating bone islands from osteoblastic bone metastases. MATERIALS AND METHODS Patients with sclerosing bone lesions (SBLs) were retrospectively included from three hospitals. The images from site 1 were randomly assigned to the training (70%) and intrinsic verification (10%) datasets for developing the two-dimensional (2D) DL model (single-slice input) and the "2.5-dimensional" (2.5D) DL model (three-slice input), and to the internal validation dataset (20%) for evaluating the performance of both models. The diagnostic performance was evaluated using the internal validation set from site 1 and additional external validation datasets from site 2 and site 3, and the performance of the 2D and 2.5D DL models was statistically compared. RESULTS In total, 1918 SBLs in 728 patients at site 1, 122 SBLs in 71 patients at site 2, and 71 SBLs in 47 patients at site 3 were used to develop and test the 2D and 2.5D DL models. The best performance was obtained with the 2.5D DL model, which achieved AUCs of 0.996 (95% confidence interval [CI], 0.995-0.996), 0.958 (95% CI, 0.958-0.960), and 0.952 (95% CI, 0.951-0.953) and accuracies of 0.950, 0.902, and 0.863 for the internal validation set and the external validation sets from site 2 and site 3, respectively. CONCLUSION A DL model based on a three-slice CT image input (2.5D DL model) can improve the prediction of osteoblastic bone metastases, which can facilitate clinical decision-making. KEY POINTS • This study investigated the value of deep learning models in distinguishing bone islands from osteoblastic bone metastases. • A three-slice CT image input (2.5D DL model) outperformed the 2D model in the classification of sclerosing bone lesions. • The 2.5D deep learning model showed excellent performance on the internal (AUC, 0.996) and two external (AUC, 0.958; AUC, 0.952) validation sets.
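The "2.5D" input described above, i.e. a lesion slice plus its two neighbours stacked as channels, is a common way to give a 2D network some through-plane context. A minimal sketch of how such an input could be assembled from a CT volume is shown below; the function name, axis order, and edge handling are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def make_25d_input(volume: np.ndarray, center_idx: int) -> np.ndarray:
    """Build a '2.5D' network input by stacking the slice containing the
    lesion together with its two neighbouring slices as a 3-channel image.
    Indices at the volume edges are clamped so the shape stays constant.

    volume: (n_slices, H, W) CT volume; returns (3, H, W)."""
    n_slices = volume.shape[0]
    idxs = [max(center_idx - 1, 0), center_idx, min(center_idx + 1, n_slices - 1)]
    return np.stack([volume[i] for i in idxs], axis=0)
```

The three-channel result can then be fed to an ordinary 2D convolutional network, which is far cheaper than a full 3D model while still exposing adjacent-slice context.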
Collapse
Affiliation(s)
- Yuchao Xiong
- Department of Radiology, Guangzhou Red Cross Hospital (Guangzhou Red Cross Hospital, Medical College of Jinan University), 396 Tongfu Road, Guangzhou, 510220, Guangdong Province, China
| | - Wei Guo
- Department of Radiology, Wuhan Third Hospital, Tongren Hospital of Wuhan University, 241 Liuyang Road, Wuhan, 430063, Hubei Province, China
| | - Zhiping Liang
- Department of Radiology, Guangzhou Red Cross Hospital (Guangzhou Red Cross Hospital, Medical College of Jinan University), 396 Tongfu Road, Guangzhou, 510220, Guangdong Province, China
| | - Li Wu
- Department of Radiology, Guangzhou Red Cross Hospital (Guangzhou Red Cross Hospital, Medical College of Jinan University), 396 Tongfu Road, Guangzhou, 510220, Guangdong Province, China
| | - Guoxi Ye
- Department of Radiology, Guangzhou Red Cross Hospital (Guangzhou Red Cross Hospital, Medical College of Jinan University), 396 Tongfu Road, Guangzhou, 510220, Guangdong Province, China
| | - Ying-Ying Liang
- Department of Radiology, Guangzhou First People's Hospital, School of Medicine, South China University of Technology, 1 Panfu Road, Guangzhou, 510180, Guangdong Province, China
| | - Chao Wen
- Department of Radiology, Guangzhou Red Cross Hospital (Guangzhou Red Cross Hospital, Medical College of Jinan University), 396 Tongfu Road, Guangzhou, 510220, Guangdong Province, China
| | - Feng Yang
- Department of Radiology, Guangzhou Red Cross Hospital (Guangzhou Red Cross Hospital, Medical College of Jinan University), 396 Tongfu Road, Guangzhou, 510220, Guangdong Province, China
| | - Song Chen
- Department of Radiology, Guangzhou Red Cross Hospital (Guangzhou Red Cross Hospital, Medical College of Jinan University), 396 Tongfu Road, Guangzhou, 510220, Guangdong Province, China
| | - Xu-Wen Zeng
- Department of Radiology, Guangzhou Red Cross Hospital (Guangzhou Red Cross Hospital, Medical College of Jinan University), 396 Tongfu Road, Guangzhou, 510220, Guangdong Province, China.
| | - Fan Xu
- Department of Radiology, Guangzhou Red Cross Hospital (Guangzhou Red Cross Hospital, Medical College of Jinan University), 396 Tongfu Road, Guangzhou, 510220, Guangdong Province, China.
| |
Collapse
|
11
|
Nishio M, Matsuo H, Kurata Y, Sugiyama O, Fujimoto K. Label Distribution Learning for Automatic Cancer Grading of Histopathological Images of Prostate Cancer. Cancers (Basel) 2023; 15:cancers15051535. [PMID: 36900325 PMCID: PMC10000939 DOI: 10.3390/cancers15051535] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2023] [Revised: 02/25/2023] [Accepted: 02/26/2023] [Indexed: 03/05/2023] Open
Abstract
We aimed to develop and evaluate an automatic prediction system for grading histopathological images of prostate cancer. A total of 10,616 whole slide images (WSIs) of prostate tissue were used in this study. The WSIs from one institution (5160 WSIs) were used as the development set, while those from the other institution (5456 WSIs) were used as the unseen test set. Label distribution learning (LDL) was used to address a difference in label characteristics between the development and test sets. A combination of EfficientNet (a deep learning model) and LDL was utilized to develop an automatic prediction system. Quadratic weighted kappa (QWK) and accuracy in the test set were used as the evaluation metrics. The QWK and accuracy were compared between systems with and without LDL to evaluate the usefulness of LDL in system development. The QWK and accuracy were 0.364 and 0.407 in the systems with LDL and 0.240 and 0.247 in those without LDL, respectively. Thus, LDL improved the diagnostic performance of the automatic prediction system for the grading of histopathological images of cancer. By handling the difference in label characteristics using LDL, the diagnostic performance of the automatic prediction system could be improved for prostate cancer grading.
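Quadratic weighted kappa, the evaluation metric above, scores agreement between ordinal labels while penalising large grade discrepancies more heavily than near-misses. A self-contained sketch of the standard QWK formula follows; this reproduces the metric's textbook definition, not the authors' evaluation code.

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes: int) -> float:
    """Quadratic weighted kappa between two integer label vectors.

    kappa = 1 - sum(W * O) / sum(W * E), where O is the observed confusion
    matrix, E the expected matrix under independent marginals, and
    W[i, j] = (i - j)^2 / (n_classes - 1)^2 the quadratic penalty."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    observed = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        observed[t, p] += 1
    idx = np.arange(n_classes)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / observed.sum()
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()
```

Perfect agreement yields 1.0, chance-level agreement 0.0, and systematically opposite predictions can reach -1.0, which is why QWK is preferred over plain accuracy for ordinal grading tasks such as Gleason scoring.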
Collapse
Affiliation(s)
- Mizuho Nishio
- Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunoki-cho, Chuo-ku, Kobe 650-0017, Japan
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto 606-8507, Japan
- Correspondence: ; Tel.: +81-78-382-6104; Fax: +81-78-382-6129
| | - Hidetoshi Matsuo
- Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunoki-cho, Chuo-ku, Kobe 650-0017, Japan
| | - Yasuhisa Kurata
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto 606-8507, Japan
| | - Osamu Sugiyama
- Department of Informatics, Kindai University, 3-4-1 Kowakae, Higashiosaka City 577-8502, Japan
| | - Koji Fujimoto
- Department of Real World Data Research and Development, Kyoto University Graduate School of Medicine, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto 606-8507, Japan
| |
Collapse
|
12
|
Huo T, Xie Y, Fang Y, Wang Z, Liu P, Duan Y, Zhang J, Wang H, Xue M, Liu S, Ye Z. Deep learning-based algorithm improves radiologists' performance in lung cancer bone metastases detection on computed tomography. Front Oncol 2023; 13:1125637. [PMID: 36845701 PMCID: PMC9946454 DOI: 10.3389/fonc.2023.1125637] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2022] [Accepted: 01/13/2023] [Indexed: 02/10/2023] Open
Abstract
Purpose To develop and assess a deep convolutional neural network (DCNN) model for the automatic detection of bone metastases from lung cancer on computed tomography (CT). Methods In this retrospective study, CT scans acquired at a single institution from June 2012 to May 2022 were included. In total, 126 patients were assigned to a training cohort (n = 76), a validation cohort (n = 12), and a testing cohort (n = 38). We trained and developed a DCNN model based on positive scans with bone metastases and negative scans without bone metastases to detect and segment lung cancer bone metastases on CT. We evaluated the clinical efficacy of the DCNN model in an observer study with five board-certified radiologists and three junior radiologists. The receiver operating characteristic curve was used to assess the sensitivity and false positives of the detection performance; the intersection-over-union and Dice coefficient were used to evaluate the segmentation performance for predicted lung cancer bone metastases. Results The DCNN model achieved a detection sensitivity of 0.894, with 5.24 average false positives per case, and a segmentation Dice coefficient of 0.856 in the testing cohort. Through collaboration with the DCNN model, the detection accuracy of the three junior radiologists improved from 0.617 to 0.879 and their sensitivity from 0.680 to 0.902. Furthermore, the mean interpretation time per case of the junior radiologists was reduced by 228 s (p = 0.045). Conclusions The proposed DCNN model for automatic detection of lung cancer bone metastases can improve diagnostic efficiency and reduce the diagnosis time and workload of junior radiologists.
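The segmentation metrics reported above, intersection-over-union and the Dice coefficient, have simple closed-form definitions on binary masks. A minimal sketch of both follows, reproducing the standard formulas rather than the study's evaluation code; the small epsilon guarding against empty masks is an implementation choice assumed here.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection-over-union (Jaccard index): |A ∩ B| / |A ∪ B|."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return intersection / (union + eps)
```

The two metrics are monotonically related (Dice = 2·IoU / (1 + IoU)), so Dice always reads somewhat higher than IoU on the same prediction, which is worth remembering when comparing results across papers.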
Collapse
Affiliation(s)
- Tongtong Huo
- Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Research Institute of Imaging, National Key Laboratory of Multi-Spectral Information Processing Technology, Huazhong University of Science and Technology, Wuhan, China
| | - Yi Xie
- Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Ying Fang
- Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Ziyi Wang
- Research Institute of Imaging, National Key Laboratory of Multi-Spectral Information Processing Technology, Huazhong University of Science and Technology, Wuhan, China
| | - Pengran Liu
- Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Yuyu Duan
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Jiayao Zhang
- Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Honglin Wang
- Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Mingdi Xue
- Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Songxiang Liu
- Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; *Correspondence: Songxiang Liu, ; Zhewei Ye,
| | - Zhewei Ye
- Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; *Correspondence: Songxiang Liu, ; Zhewei Ye,
| |
Collapse
|
13
|
Lacroix M, Aouad T, Feydy J, Biau D, Larousserie F, Fournier L, Feydy A. Artificial intelligence in musculoskeletal oncology imaging: A critical review of current applications. Diagn Interv Imaging 2023; 104:18-23. [PMID: 36270953 DOI: 10.1016/j.diii.2022.10.004] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2022] [Accepted: 10/05/2022] [Indexed: 01/10/2023]
Abstract
Artificial intelligence (AI) is increasingly being studied in musculoskeletal oncology imaging. AI has been applied to both primary and secondary bone tumors and assessed for various predictive tasks that include detection, segmentation, classification, and prognosis. Still, in the field of clinical research, further efforts are needed to improve AI reproducibility and reach an acceptable level of evidence in musculoskeletal oncology. This review describes the basic principles of the most common AI techniques, including machine learning, deep learning and radiomics. Then, recent developments and current results of AI in the field of musculoskeletal oncology are presented. Finally, limitations and future perspectives of AI in this field are discussed.
Collapse
Affiliation(s)
- Maxime Lacroix
- Department of Radiology, Hôpital Européen Georges Pompidou, Assistance Publique-Hôpitaux de Paris, Paris, 75015, France; Université Paris Cité, Faculté de Médecine, Paris, 75006, France; PARCC UMRS 970, INSERM, Paris 75015, France
| | - Theodore Aouad
- Université Paris-Saclay, CentraleSupélec, Inria, Centre for Visual Computing, 91190, Gif-sur-Yvette, France
| | - Jean Feydy
- Université Paris Cité, HeKA team, Inria Paris, Inserm, 75006, Paris, France
| | - David Biau
- Université Paris Cité, Faculté de Médecine, Paris, 75006, France; Department of Orthopedic Surgery, Hôpital Cochin, Assistance Publique-Hôpitaux de Paris, Paris, 75014, France
| | - Frédérique Larousserie
- Université Paris Cité, Faculté de Médecine, Paris, 75006, France; Department of Pathology, Hôpital Cochin, Assistance Publique-Hôpitaux de Paris, Paris, 75014, France
| | - Laure Fournier
- Department of Radiology, Hôpital Européen Georges Pompidou, Assistance Publique-Hôpitaux de Paris, Paris, 75015, France; Université Paris Cité, Faculté de Médecine, Paris, 75006, France; PARCC UMRS 970, INSERM, Paris 75015, France
| | - Antoine Feydy
- Université Paris Cité, Faculté de Médecine, Paris, 75006, France; Department of Radiology, Hôpital Cochin, Assistance Publique-Hôpitaux de Paris, Paris, 75014, France
| |
Collapse
|