1. Lv C, Shu XJ, Chang H, Qiu J, Peng S, Yu K, Chen SB, Rao H. Classification of high-grade glioblastoma and single brain metastases using a new SCAT-inception model trained with MRI images. Front Neurosci 2024; 18:1349781. PMID: 38560048; PMCID: PMC10979639; DOI: 10.3389/fnins.2024.1349781.
Abstract
Background and objectives: Glioblastoma (GBM) and brain metastasis (MET) are the two most common intracranial tumors. However, their different pathogenesis leads to completely different treatment options. On magnetic resonance imaging (MRI), GBM and MET appear extremely similar, which makes differentiation by imaging alone challenging. This study therefore explores an improved deep learning algorithm to assist in differentiating GBM from MET. Materials and methods: Axial contrast-enhanced T1-weighted (ceT1W) MRI images from 321 cases of high-grade glioma and solitary brain metastasis were collected. Among these, 251 out of 270 cases were selected for the experimental dataset (127 glioblastomas and 124 metastases); 207 cases were chosen as the training dataset and 44 cases as the testing dataset. We designed a new deep learning algorithm called SCAT-inception (Spatial Convolutional Attention inception) and used five-fold cross-validation to verify the results. Results: The SCAT-inception model predicted glioblastoma versus brain metastasis with an accuracy of 92.3%, a sensitivity of 93.5%, and a specificity of 91.1%. On the external testing dataset, the model achieved an accuracy of 91.5%, surpassing models such as VGG, UNet, and GoogLeNet. Conclusion: This study demonstrated that the SCAT-inception architecture could extract more subtle features from ceT1W images, provide state-of-the-art performance in differentiating GBM from MET, and surpass most existing approaches.
Affiliations
- Cheng Lv: School of Mathematics and Computer Sciences, Nanchang University, Nanchang, Jiangxi Province, China
- Xu-Jun Shu: Department of Neurosurgery, Nanjing Jinling Hospital, Nanjing, Jiangsu Province, China
- Hui Chang: Department of Computer and Information Engineering, Henan University, Kaifeng, China
- Jun Qiu: Department of Critical Care Medicine, The Second People’s Hospital of Yibin, Yibin, Sichuan Province, China
- Shuo Peng: Department of Computer Science, Jinggangshan University, Ji’an, China
- Keping Yu: School of Science and Engineering, Hosei University, Tokyo, Japan
- Sheng-Bo Chen: School of Mathematics and Computer Sciences, Nanchang University, Nanchang, Jiangxi Province, China
- Hong Rao: Department of Neurosurgery, Nanjing Jinling Hospital, Nanjing, Jiangsu Province, China

2. Wang TW, Hsu MS, Lee WK, Pan HC, Yang HC, Lee CC, Wu YT. Brain metastasis tumor segmentation and detection using deep learning algorithms: A systematic review and meta-analysis. Radiother Oncol 2024; 190:110007. PMID: 37967585; DOI: 10.1016/j.radonc.2023.110007.
Abstract
BACKGROUND Manual detection of brain metastases is both laborious and inconsistent, driving the need for more efficient solutions. Accordingly, our systematic review and meta-analysis assessed the efficacy of deep learning algorithms in detecting and segmenting brain metastases from various primary origins in MRI images. METHODS We conducted a comprehensive search of PubMed, Embase, and Web of Science up to May 24, 2023, which yielded 42 relevant studies for our analysis. We assessed the quality of these studies using the QUADAS-2 and CLAIM tools. Using a random-effects model, we calculated the pooled lesion-wise Dice score as well as patient-wise and lesion-wise sensitivity. We performed subgroup analyses to investigate the influence of factors such as publication year, study design, training center of the model, validation methods, slice thickness, model input dimensions, MRI sequences fed to the model, and the specific deep learning algorithms employed. Additionally, meta-regression analyses were carried out considering the number of patients in the studies, the count of MRI manufacturers, the count of MRI models, training sample size, and lesion number. RESULTS Deep learning models, particularly the U-Net and its variants, demonstrated superior segmentation accuracy. Enhanced detection sensitivity was observed with increased diversity in MRI hardware, in terms of both manufacturer and model variety. Furthermore, slice thickness was identified as a significant factor influencing lesion-wise detection sensitivity. Overall, the pooled results indicated a lesion-wise Dice score of 79%, with patient-wise and lesion-wise sensitivities of 86% and 87%, respectively. CONCLUSIONS The study underscores the potential of deep learning in improving brain metastasis diagnostics and treatment planning. Still, more extensive cohorts and larger meta-analyses are needed for more practical and generalizable algorithms; future research should prioritize these areas. This study was funded by the Gen. & Mrs. M.C. Peng Fellowship and registered under PROSPERO (CRD42023427776).
Affiliations
- Ting-Wei Wang: Institute of Biophotonics, National Yang Ming Chiao Tung University, 155, Sec. 2, Li-Nong St., Beitou Dist., Taipei 112304, Taiwan; School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Ming-Sheng Hsu: School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Wei-Kai Lee: Institute of Biophotonics, National Yang Ming Chiao Tung University, 155, Sec. 2, Li-Nong St., Beitou Dist., Taipei 112304, Taiwan
- Hung-Chuan Pan: Department of Neurosurgery, Taichung Veterans General Hospital, Taichung, Taiwan; Department of Medical Research, Taichung Veterans General Hospital, Taichung, Taiwan
- Huai-Che Yang: School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan
- Cheng-Chia Lee: School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan
- Yu-Te Wu: Institute of Biophotonics, National Yang Ming Chiao Tung University, 155, Sec. 2, Li-Nong St., Beitou Dist., Taipei 112304, Taiwan; Brain Research Center, National Yang Ming Chiao Tung University, Taiwan; College Medical Device Innovation and Translation Center, National Yang Ming Chiao Tung University, Taiwan

3. Chen J, Meng L, Bu C, Zhang C, Wu P. Feature pyramid network-based computer-aided detection and monitoring treatment response of brain metastases on contrast-enhanced MRI. Clin Radiol 2023; 78:e808-e814. PMID: 37573242; DOI: 10.1016/j.crad.2023.07.009.
Abstract
AIM To investigate the value of feature pyramid network (FPN)-based computer-aided detection (CAD) of brain metastases (BMs) before and after non-surgical treatment, and to evaluate its performance in monitoring treatment response of BMs on contrast-enhanced (CE) magnetic resonance imaging (MRI). MATERIALS AND METHODS Eighty-five cancer patients newly diagnosed with BM who had undergone initial and follow-up three-dimensional (3D) CE MRI at Liaocheng People's Hospital were included retrospectively in this study. Manual detection (MD) was performed by reviewer 1; computer-aided detection (CAD) was performed by reviewer 2 using uAI Discover-BMs software. Treatment response was assessed by the two reviewers separately for each patient. A paired chi-square test was used to compare differences in BM detection between MD and CAD, and agreement between MD and CAD in monitoring treatment response was assessed with the kappa statistic. RESULTS The sensitivities of MD and CAD on initial 3D CE MRI were 78.65% and 99.13%, respectively; on follow-up 3D CE MRI, they were 76.32% and 98.24%, respectively. There was very good agreement between reviewer 1 and reviewer 2 in evaluating the treatment response of BMs. CONCLUSION Compared with MD, FPN-based CAD showed higher sensitivity (close to 100%) and fewer false negatives for BM detection. Although CAD had a few shortcomings in reflecting changes in BMs after treatment, it performed well in monitoring treatment response of BMs on CE MRI.
Affiliations
- J Chen: Department of MR, Liaocheng People's Hospital, Liaocheng, Shandong Province, 252000, China
- L Meng: Department of Radiotherapy, Liaocheng People's Hospital, Liaocheng, Shandong Province, 252000, China
- C Bu: Department of MR, Liaocheng People's Hospital, Liaocheng, Shandong Province, 252000, China
- C Zhang: Department of MR, Liaocheng People's Hospital, Liaocheng, Shandong Province, 252000, China
- P Wu: Philips Healthcare, Shanghai, 200072, China

4. Zhou Z, Qiu Q, Liu H, Ge X, Li T, Xing L, Yang R, Yin Y. Automatic Detection of Brain Metastases in T1-Weighted Construct-Enhanced MRI Using Deep Learning Model. Cancers (Basel) 2023; 15:4443. PMID: 37760413; PMCID: PMC10526374; DOI: 10.3390/cancers15184443.
Abstract
As a complication of malignant tumors, brain metastasis (BM) seriously threatens patients' survival and quality of life. Accurate detection of BM before determining a radiation therapy plan is a paramount task. Because BMs are small and heterogeneous in number, manual diagnosis faces enormous challenges; MRI-based, artificial intelligence-assisted BM diagnosis is therefore significant. Most existing deep learning (DL) methods for automatic BM detection try to ensure a good trade-off between precision and recall. However, higher recall is often accompanied by a higher number of false positives, and in real clinical auxiliary diagnosis radiation oncologists must spend considerable effort reviewing these false positives. To reduce false positives while retaining high accuracy, a modified YOLOv5 algorithm is proposed in this paper. First, to focus on the important channels of the feature map, a convolutional block attention module is added to the neck structure. Furthermore, an additional prediction head is introduced for detecting small BMs. Finally, to distinguish cerebral vessels from small BMs, a Swin transformer block is embedded into the smallest prediction head. With the F2-score used to determine the most appropriate confidence threshold, the proposed method achieves a precision of 0.612 and a recall of 0.904. Compared with existing methods, the proposed method shows superior performance with fewer false positives, and it is anticipated that it could reduce the workload of radiation oncologists in real clinical auxiliary diagnosis.
Affiliations
- Zichun Zhou: School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai 264209, China
- Qingtao Qiu: Department of Radiation Oncology and Physics, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan 250117, China; Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing 210096, China
- Huiling Liu: Department of Oncology, Binzhou People’s Hospital, Binzhou 256610, China; Third Clinical Medical College, Xinjiang Medical University, Urumqi 830011, China
- Xuanchu Ge: Department of Radiation Oncology and Physics, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan 250117, China
- Tengxiang Li: Department of Radiation Oncology and Physics, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan 250117, China
- Ligang Xing: Department of Radiation Oncology and Physics, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan 250117, China
- Runtao Yang: School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai 264209, China
- Yong Yin: Department of Radiation Oncology and Physics, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan 250117, China

5. Luo X, Yang Y, Yin S, Li H, Zhang W, Xu G, Fan W, Zheng D, Li J, Shen D, Gao Y, Shao Y, Ban X, Li J, Lian S, Zhang C, Ma L, Lin C, Luo Y, Zhou F, Wang S, Sun Y, Zhang R, Xie C. False-negative and false-positive outcomes of computer-aided detection on brain metastasis: Secondary analysis of a multicenter, multireader study. Neuro Oncol 2023; 25:544-556. PMID: 35943350; PMCID: PMC10013637; DOI: 10.1093/neuonc/noac192.
Abstract
BACKGROUND Errors have seldom been evaluated in computer-aided detection of brain metastases. This study aimed to analyze false negatives (FNs) and false positives (FPs) generated by a brain metastasis detection system (BMDS) and by readers. METHODS A deep learning-based BMDS was developed and prospectively validated in a multicenter, multireader study. This ad hoc secondary analysis was restricted to the prospective participants (148 patients with 1,066 brain metastases and 152 normal controls). Three trainees and three experienced radiologists read the MRI images without and with the BMDS. The number of FNs and FPs per patient, the jackknife alternative free-response receiver operating characteristic figure of merit (FOM), and lesion features associated with FNs were analyzed for the BMDS and readers using binary logistic regression. RESULTS The FNs, FPs, and FOM of the stand-alone BMDS were 0.49, 0.38, and 0.97, respectively. Compared with independent reading, BMDS-assisted reading generated 79% fewer FNs (1.98 vs 0.42, P < .001); 41% more FPs overall (0.17 vs 0.24, P < .001) and 125% more FPs for trainees (P < .001); and a higher FOM (0.87 vs 0.98, P < .001). Small size, greater lesion number, irregular shape, lower signal intensity, and nonbrain-surface location were associated with FNs for readers. Small, irregular, and necrotic lesions were more frequently found among FNs for the BMDS. FPs mainly resulted from small blood vessels for both the BMDS and the readers. CONCLUSIONS Despite the improvement in detection performance, radiologists, especially less-experienced ones, should pay attention to FPs and to small lesions with lower enhancement.
Affiliations
- Xiao Luo, Yadi Yang, Shaohan Yin, Hui Li, Weijing Zhang, Guixiao Xu, Xiaohua Ban, Jing Li, Shanshan Lian, Cheng Zhang, Lidi Ma, Cuiping Lin, Yingwei Luo, Fan Zhou, Shiyuan Wang, Rong Zhang, Chuanmiao Xie: State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-sen University Cancer Center, Guangzhou, China
- Ying Sun: State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Weixiong Fan: Department of Radiology, Meizhou People's Hospital, Meizhou, China
- Dechun Zheng: Department of Radiology, Fujian Cancer Hospital, Fujian Medical University Cancer Hospital, Fuzhou, Fujian Province, China
- Jianpeng Li: Department of Radiology, Affiliated Dongguan Hospital, Southern Medical University, Guangzhou, China
- Dinggang Shen: R&D Department, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Yaozong Gao: R&D Department, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Ying Shao: R&D Department, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Department of Radiation Oncology, Sun Yat-sen University Cancer Center, Guangzhou, China

6. Ozkara BB, Chen MM, Federau C, Karabacak M, Briere TM, Li J, Wintermark M. Deep Learning for Detecting Brain Metastases on MRI: A Systematic Review and Meta-Analysis. Cancers (Basel) 2023; 15. PMID: 36672286; DOI: 10.3390/cancers15020334.
Abstract
Since manual detection of brain metastases (BMs) is time-consuming, studies have been conducted to automate this process using deep learning. The purpose of this study was to conduct a systematic review and meta-analysis of the performance of deep learning models that use magnetic resonance imaging (MRI) to detect BMs in cancer patients. A systematic search of MEDLINE, EMBASE, and Web of Science was conducted up to 30 September 2022. Inclusion criteria were: patients with BMs; deep learning applied to MRI images to detect BMs; sufficient data on detection performance; and original research articles. Exclusion criteria were: reviews, letters, guidelines, editorials, or errata; case reports or series with fewer than 20 patients; studies with overlapping cohorts; insufficient data on detection performance; use of conventional machine learning rather than deep learning to detect BMs; and articles not written in English. The Quality Assessment of Diagnostic Accuracy Studies-2 and the Checklist for Artificial Intelligence in Medical Imaging were used to assess quality. Finally, 24 eligible studies were identified for the quantitative analysis. The pooled proportion of patient-wise and lesion-wise detectability was 89%. Articles should adhere to the checklists more strictly. Deep learning algorithms effectively detect BMs; however, a pooled analysis of false positive rates could not be estimated due to reporting differences.

7. Moridian P, Ghassemi N, Jafari M, Salloum-Asfar S, Sadeghi D, Khodatars M, Shoeibi A, Khosravi A, Ling SH, Subasi A, Alizadehsani R, Gorriz JM, Abdulla SA, Acharya UR. Automatic autism spectrum disorder detection using artificial intelligence methods with MRI neuroimaging: A review. Front Mol Neurosci 2022; 15:999605. PMID: 36267703; PMCID: PMC9577321; DOI: 10.3389/fnmol.2022.999605.
Abstract
Autism spectrum disorder (ASD) is a brain condition characterized by diverse signs and symptoms that appear in early childhood; it is also associated with communication deficits and repetitive behavior in affected individuals. Various ASD detection methods have been developed, including neuroimaging modalities and psychological tests. Among these, magnetic resonance imaging (MRI) modalities are of paramount importance, and clinicians rely on them to diagnose ASD accurately. MRI is non-invasive and comprises functional (fMRI) and structural (sMRI) neuroimaging methods. However, diagnosing ASD with fMRI and sMRI is often laborious and time-consuming for specialists; therefore, several computer-aided diagnosis systems (CADS) based on artificial intelligence (AI) have been developed to assist specialist physicians. Conventional machine learning (ML) and deep learning (DL) are the most popular AI schemes used for diagnosing ASD. This study reviews the automated detection of ASD using AI. We review several CADS developed using ML techniques for the automated diagnosis of ASD from MRI modalities. There has been very limited work on the use of DL techniques to develop automated diagnostic models for ASD; a summary of the studies developed using DL is provided in the Supplementary Appendix. The challenges encountered during the automated diagnosis of ASD using MRI and AI techniques are then described in detail, and a graphical comparison of studies using ML and DL to diagnose ASD automatically is presented. We suggest future approaches to detecting ASD using AI techniques and MRI neuroimaging.
Affiliations
- Parisa Moridian: Faculty of Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Navid Ghassemi: Department of Computer Engineering, Ferdowsi University of Mashhad, Mashhad, Iran
- Mahboobeh Jafari: Faculty of Electrical and Computer Engineering, Semnan University, Semnan, Iran
- Salam Salloum-Asfar: Neurological Disorders Research Center, Qatar Biomedical Research Institute, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
- Delaram Sadeghi: Department of Medical Engineering, Mashhad Branch, Islamic Azad University, Mashhad, Iran
- Marjane Khodatars: Department of Medical Engineering, Mashhad Branch, Islamic Azad University, Mashhad, Iran
- Afshin Shoeibi: Data Science and Computational Intelligence Institute, University of Granada, Granada, Spain
- Abbas Khosravi: Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, Geelong, VIC, Australia
- Sai Ho Ling: Faculty of Engineering and IT, University of Technology Sydney (UTS), Ultimo, NSW, Australia
- Abdulhamit Subasi: Faculty of Medicine, Institute of Biomedicine, University of Turku, Turku, Finland; Department of Computer Science, College of Engineering, Effat University, Jeddah, Saudi Arabia
- Roohallah Alizadehsani: Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, Geelong, VIC, Australia
- Juan M. Gorriz: Data Science and Computational Intelligence Institute, University of Granada, Granada, Spain
- Sara A. Abdulla: Neurological Disorders Research Center, Qatar Biomedical Research Institute, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
- U. Rajendra Acharya: Ngee Ann Polytechnic, Singapore, Singapore; Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan; Department of Biomedical Engineering, School of Science and Technology, Singapore University of Social Sciences, Singapore, Singapore

8. Sakamoto N, Amemiya S, Takao H, Kato S, Yamashita H, Fujimoto K, Nakaya M, Kanemaru N, Miyo R, Hosoi R, Mizuki M, Abe O. The Usefulness of Computer-Aided Detection of Brain Metastases on Contrast-Enhanced Computed Tomography Using Single-Shot Multibox Detector: Observer Performance Study. J Comput Assist Tomogr 2022. PMID: 35819922; DOI: 10.1097/RCT.0000000000001339.
Abstract
OBJECTIVE This study aimed to test the usefulness of computer-aided detection (CAD) for the detection of brain metastases (BMs) on contrast-enhanced computed tomography. METHODS The test dataset included whole-brain axial contrast-enhanced computed tomography images of 25 cases with 62 BMs and 5 cases without BMs. Six radiologists from 3 institutions with 2 to 4 years of experience independently reviewed the cases, both with and without CAD assistance. Sensitivity, positive predictive value, number of false positives, and reading time were compared between the conditions using paired t tests; subanalysis was also performed for groups of lesions divided according to size. A P value <0.05 was considered statistically significant. RESULTS With CAD, sensitivity significantly increased from 80.4% to 83.9% (P = 0.04), whereas positive predictive value significantly decreased from 88.7% to 84.8% (P = 0.03). Reading time with and without CAD was 112 and 107 seconds, respectively (P = 0.38), and the number of false positives was 10.5 with CAD and 7.0 without CAD (P = 0.053). Sensitivity significantly improved for 6- to 12-mm lesions, from 71.2% without CAD to 80.3% with CAD (P = 0.02). The sensitivity of the CAD system (95.2%) was significantly higher than that of any reader (with CAD: P = 0.01; without CAD: P = 0.005). CONCLUSIONS Computer-aided detection significantly improved BM detection sensitivity without prolonging reading time, while marginally increasing false positives.

9. Takao H, Amemiya S, Kato S, Yamashita H, Sakamoto N, Abe O. Deep-learning 2.5-dimensional single-shot detector improves the performance of automated detection of brain metastases on contrast-enhanced CT. Neuroradiology 2022. PMID: 35064786; DOI: 10.1007/s00234-022-02902-3.
Abstract
PURPOSE This study aimed to develop a 2.5-dimensional (2.5D) deep-learning object detection model for the automated detection of brain metastases, in which three consecutive slices are fed as input for prediction on the central slice, and to compare its performance with that of an ordinary 2-dimensional (2D) model. METHODS We analyzed 696 brain metastases on 127 contrast-enhanced computed tomography (CT) scans from 127 patients with brain metastases. The scans were randomly divided into training (n = 79), validation (n = 18), and test (n = 30) datasets. Single-shot detector (SSD) models with a feature fusion module were constructed, trained, and compared using lesion-based sensitivity, positive predictive value (PPV), and the number of false positives per patient at a confidence threshold of 50%. RESULTS The 2.5D SSD model had a significantly higher PPV (t test, p < 0.001) and significantly fewer false positives (t test, p < 0.001). The sensitivities of the 2D and 2.5D models were 88.1% (95% confidence interval [CI], 86.6-89.6%) and 88.7% (95% CI, 87.3-90.1%), respectively. The corresponding PPVs were 39.0% (95% CI, 36.5-41.4%) and 58.9% (95% CI, 55.2-62.7%), and the numbers of false positives per patient were 11.9 (95% CI, 10.7-13.2) and 4.9 (95% CI, 4.2-5.7), respectively. CONCLUSION Our results indicate that 2.5D deep-learning object detection models, which use information about the continuity between adjacent slices, may reduce false positives and improve the automated detection of brain metastases compared with ordinary 2D models.

10. Takao H, Amemiya S, Kato S, Yamashita H, Sakamoto N, Abe O. Deep-learning single-shot detector for automatic detection of brain metastases with the combined use of contrast-enhanced and non-enhanced computed tomography images. Eur J Radiol 2021; 144:110015. PMID: 34742108; DOI: 10.1016/j.ejrad.2021.110015.
Abstract
PURPOSE To develop a deep-learning object detection model for automatic detection of brain metastases that simultaneously uses contrast-enhanced and non-enhanced images as inputs, and to compare its performance with that of a model that uses only contrast-enhanced images. METHOD A total of 116 computed tomography (CT) scans of 116 patients with brain metastases were included in this study. They showed a total of 659 metastases, 428 of which were used for training and validation (mean size, 11.3 ± 9.9 mm) and 231 were used for testing (mean size, 9.0 ± 7.0 mm). Single-shot detector (SSD) models were constructed with a feature fusion module, and their results were compared per lesion at a confidence threshold of 50%. RESULTS The sensitivity was 88.7% for the model that used both contrast-enhanced and non-enhanced CT images (the CE + NECT model) and 87.6% for the model that used only contrast-enhanced CT images (the CECT model). The positive predictive value (PPV) was 44.0% for the CE + NECT model and 37.2% for the CECT model. The number of false positives per patient was 9.9 for the CE + NECT model and 13.6 for the CECT model. The CE + NECT model had a significantly higher PPV (t test, p < 0.001), significantly fewer false positives (t test, p < 0.001), and a tendency to be more sensitive (t test, p = 0.14). CONCLUSIONS The results indicate that the information on true contrast enhancement obtained by comparing the contrast-enhanced and non-enhanced images may prevent the detection of pseudolesions, suppress false positives, and improve the performance of deep-learning object detection models.
Affiliations
- Hidemasa Takao: Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Shiori Amemiya: Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Shimpei Kato: Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Hiroshi Yamashita: Department of Radiology, Teikyo University Hospital, Mizonokuchi, 5-1-1 Futago, Takatsu-ku, Kawasaki, Kanagawa 213-8507, Japan
- Naoya Sakamoto: Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Osamu Abe: Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan