1
Molinski NS, Kenda M, Leithner C, Nee J, Storm C, Scheel M, Meddeb A. Deep learning-enabled detection of hypoxic-ischemic encephalopathy after cardiac arrest in CT scans: a comparative study of 2D and 3D approaches. Front Neurosci 2024; 18:1245791. PMID: 38419661; PMCID: PMC10899383; DOI: 10.3389/fnins.2024.1245791. Received 06/23/2023; accepted 01/31/2024.
Abstract
Objective: To establish a deep learning model for the detection of hypoxic-ischemic encephalopathy (HIE) features on CT scans and to compare various networks to determine the best input data format. Methods: 168 head CT scans of patients after cardiac arrest were retrospectively identified and classified into two categories: 88 (52.4%) with radiological evidence of severe HIE and 80 (47.6%) without signs of HIE. These images were randomly divided into a training and a test set, and five deep learning models based on Densely Connected Convolutional Networks (DenseNet121) were trained and validated using different image input formats (2D and 3D). Results: All optimized stacked-2D and 3D networks could detect signs of HIE. The networks using stacked 2D image data provided the best results (S100: AUC 94%, accuracy 79%; S50: AUC 93%, accuracy 79%). We provide visual explanations of the model's decision making using Gradient-weighted Class Activation Mapping. Conclusion: Our proof-of-concept deep learning model can accurately identify signs of HIE on CT images. Among the 2D- and 3D-based approaches compared, the 2D image stack models achieved the most promising results. After further clinical validation, a deep learning model for HIE detection on CT images could be implemented in clinical routine, aiding clinicians in characterizing imaging data and predicting outcome.
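The stacked-2D input format compared in this study can be illustrated with a minimal sketch; the slice count, array shapes, and function name are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def make_2d_stack(volume: np.ndarray, n_slices: int = 16) -> np.ndarray:
    """Select n_slices evenly spaced axial slices from a 3D CT volume
    (depth, height, width) and stack them as channels of one 2D input.
    Illustrative sketch only; not the authors' preprocessing code."""
    depth = volume.shape[0]
    idx = np.linspace(0, depth - 1, n_slices).astype(int)
    return volume[idx]  # shape: (n_slices, height, width)

# Toy example: a 100-slice volume reduced to a 16-channel 2D stack.
vol = np.random.rand(100, 64, 64).astype(np.float32)
stack = make_2d_stack(vol, n_slices=16)
print(stack.shape)  # (16, 64, 64)
```

A 2D CNN such as DenseNet121 can then treat the selected slices as input channels, which is one way a "2D image stack" differs from feeding the full volume to a 3D network.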
Affiliation(s)
- Noah S. Molinski
  - Department for Neuroradiology, Charité – Universitätsmedizin Berlin, Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
- Martin Kenda
  - Department of Neurology with Experimental Neurology, Charité – Universitätsmedizin Berlin, Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
  - Berlin Institute of Health at Charité – Universitätsmedizin Berlin, BIH Biomedical Innovation Academy, Berlin, Germany
- Christoph Leithner
  - Department of Neurology with Experimental Neurology, Charité – Universitätsmedizin Berlin, Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
- Jens Nee
  - Department of Nephrology and Medical Intensive Care, Charité – Universitätsmedizin Berlin, Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
- Christian Storm
  - Department of Nephrology and Medical Intensive Care, Charité – Universitätsmedizin Berlin, Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
- Michael Scheel
  - Department for Neuroradiology, Charité – Universitätsmedizin Berlin, Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
- Aymen Meddeb
  - Department for Neuroradiology, Charité – Universitätsmedizin Berlin, Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
  - Berlin Institute of Health at Charité – Universitätsmedizin Berlin, BIH Biomedical Innovation Academy, Berlin, Germany
2
Liu X, Liu J. Aided Diagnosis Model Based on Deep Learning for Glioblastoma, Solitary Brain Metastases, and Primary Central Nervous System Lymphoma with Multi-Modal MRI. Biology (Basel) 2024; 13:99. PMID: 38392317; PMCID: PMC10887006; DOI: 10.3390/biology13020099. Received 12/11/2023; revised 01/26/2024; accepted 01/27/2024.
Abstract
(1) Background: Diagnosis of glioblastoma (GBM), solitary brain metastases (SBM), and primary central nervous system lymphoma (PCNSL) plays a decisive role in the development of personalized treatment plans, so constructing a deep learning network to classify GBM, SBM, and PCNSL from multi-modal MRI is important and necessary. (2) Subjects: 1,225 subjects (average age 53 years; 671 males) with histopathologically confirmed GBM, SBM, or PCNSL underwent multi-modal MRI, including 3.0 T T2 fluid-attenuated inversion recovery (T2-FLAIR) and contrast-enhanced T1-weighted imaging (CE-T1WI). (3) Methods: This paper introduces MFFC-Net, a classification model based on the fusion of multi-modal MRIs, for the classification of GBM, SBM, and PCNSL. The network architecture consists of parallel encoders using DenseBlocks to extract features from the different MRI modalities. An L1-norm feature fusion module is then applied to enhance the interrelationships among tumor tissues, followed by a spatial-channel self-attention weighting operation. Finally, the classification results are obtained using a fully connected (FC) layer and Softmax. (4) Results: The accuracy (ACC) of MFFC-Net based on feature fusion was 0.920, better than the radiomics model (ACC of 0.829). There was no significant difference in ACC compared to the expert radiologist (0.920 vs. 0.924, p = 0.774). (5) Conclusions: Our MFFC-Net model could distinguish GBM, SBM, and PCNSL preoperatively based on multi-modal MRI, with higher performance than the radiomics model and performance comparable to radiologists.
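The L1-norm feature fusion step is described only at a high level; one minimal numpy interpretation, with hypothetical shapes and a hypothetical function name, weights each modality's feature map by its normalized L1 norm before summing:

```python
import numpy as np

def l1_fuse(features):
    """Fuse per-modality feature maps (each of shape (C, H, W)) by
    weighting each map with its normalized L1 norm, then summing.
    A minimal interpretation for illustration; MFFC-Net's exact
    fusion module is not specified in this abstract."""
    norms = np.array([np.abs(f).sum() for f in features])
    weights = norms / norms.sum()  # L1-norm-proportional weights
    return sum(w * f for w, f in zip(weights, features))

# Toy feature maps from two parallel encoders (e.g., FLAIR and CE-T1WI).
flair_feat = np.random.rand(8, 16, 16)
t1ce_feat = np.random.rand(8, 16, 16)
fused = l1_fuse([flair_feat, t1ce_feat])
print(fused.shape)  # (8, 16, 16)
```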
Affiliation(s)
- Xiao Liu
  - School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100044, China
- Jie Liu
  - School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100044, China
3
Pitarch C, Ungan G, Julià-Sapé M, Vellido A. Advances in the Use of Deep Learning for the Analysis of Magnetic Resonance Image in Neuro-Oncology. Cancers (Basel) 2024; 16:300. PMID: 38254790; PMCID: PMC10814384; DOI: 10.3390/cancers16020300. Received 11/09/2023; revised 12/28/2023; accepted 01/08/2024.
Abstract
Machine Learning is entering a phase of maturity, but its medical applications still lag behind in terms of practical use. The field of oncological radiology (and neuro-oncology in particular) is at the forefront of these developments, now boosted by the success of Deep-Learning methods for the analysis of medical images. This paper reviews in detail some of the most recent advances in the use of Deep Learning in this field, from the broader topic of the development of Machine-Learning-based analytical pipelines to specific instantiations of the use of Deep Learning in neuro-oncology; the latter including its use in the groundbreaking field of ultra-low field magnetic resonance imaging.
Affiliation(s)
- Carla Pitarch
  - Department of Computer Science, Universitat Politècnica de Catalunya (UPC BarcelonaTech) and Intelligent Data Science and Artificial Intelligence (IDEAI-UPC) Research Center, 08034 Barcelona, Spain
  - Eurecat, Digital Health Unit, Technology Centre of Catalonia, 08005 Barcelona, Spain
- Gulnur Ungan
  - Departament de Bioquímica i Biologia Molecular and Institut de Biotecnologia i Biomedicina (IBB), Universitat Autònoma de Barcelona (UAB), 08193 Barcelona, Spain
  - Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
- Margarida Julià-Sapé
  - Departament de Bioquímica i Biologia Molecular and Institut de Biotecnologia i Biomedicina (IBB), Universitat Autònoma de Barcelona (UAB), 08193 Barcelona, Spain
  - Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
- Alfredo Vellido
  - Department of Computer Science, Universitat Politècnica de Catalunya (UPC BarcelonaTech) and Intelligent Data Science and Artificial Intelligence (IDEAI-UPC) Research Center, 08034 Barcelona, Spain
  - Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
4
Pan I, Huang RY. Artificial intelligence in neuroimaging of brain tumors: reality or still promise? Curr Opin Neurol 2023; 36:549-556. PMID: 37973024; DOI: 10.1097/wco.0000000000001213.
Abstract
PURPOSE OF REVIEW To provide an updated overview of artificial intelligence (AI) applications in neuro-oncologic imaging and discuss current barriers to wider clinical adoption. RECENT FINDINGS A wide variety of AI applications in neuro-oncologic imaging have been developed and researched, spanning tasks from pretreatment brain tumor classification and segmentation, preoperative planning, radiogenomics, prognostication and survival prediction, and posttreatment surveillance, to differentiating between pseudoprogression and true disease progression. While earlier studies were largely based on data from a single institution, more recent studies have demonstrated that these algorithms also perform well on external data from other institutions. Nevertheless, most of these algorithms have yet to see widespread clinical adoption, given the lack of prospective studies demonstrating their efficacy and the logistical difficulties of clinical implementation. SUMMARY While there has been significant progress in AI for neuro-oncologic imaging, clinical utility remains to be demonstrated. The next wave of progress in this area will be driven by prospective studies that measure outcomes relevant to clinical practice and go beyond retrospective studies, which primarily aim to demonstrate high performance.
Affiliation(s)
- Ian Pan
  - Department of Radiology, Brigham and Women's Hospital, Harvard Medical School
5
Kim H, Kim HG, Oh JH, Lee KM. Deep-learning model for diagnostic clue: detecting the dural tail sign for meningiomas on contrast-enhanced T1 weighted images. Quant Imaging Med Surg 2023; 13:8132-8143. PMID: 38106283; PMCID: PMC10722041; DOI: 10.21037/qims-23-114. Received 01/30/2023; accepted 09/06/2023.
Abstract
Background: Meningiomas are the most common primary central nervous system tumors, and magnetic resonance imaging (MRI), especially contrast-enhanced T1-weighted imaging (CE T1WI), is the fundamental imaging modality for their detection and analysis. In this study, we propose an automated deep-learning model for meningioma detection using the dural tail sign. Methods: The dataset included 123 patients with 3,824 dural tail signs on sagittal CE T1WI and was split at a specific time point into a training dataset of 78 patients and a test dataset of 45 patients. To compensate for the small training sample, 39 additional patients with 69 dural tail signs from an open dataset were appended to the training dataset. A You Only Look Once (YOLO) v4 network was trained on sagittal CE T1WI to detect dural tail signs. A normal-group dataset of 51 patients with no abnormal findings on MRI was used to evaluate the specificity of the trained model. Results: In the test dataset, the sensitivity was 82.22% and the false-positive average was 29.73. In the normal dataset, the specificity was 17.65% and the false-positive average was 3.16. Most false positives in the test dataset were enhancing vessels misinterpreted as dural thickening. Conclusions: The proposed model demonstrates automated detection of the dural tail sign for identifying meningioma in general screening MRI. It can support radiologists' reading process by flagging possible incidental dural masses based on dural tail sign detection.
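Detection models of this kind are commonly summarized by lesion-level sensitivity and an average false-positive count per case; a generic sketch of that bookkeeping (the paper's exact denominator for the false-positive average is not stated, so "per case" here is an assumption, and the data are toy values):

```python
def detection_summary(per_case):
    """per_case: list of (true_positives, false_negatives, false_positives)
    tuples, one per case. Returns lesion-level sensitivity and the mean
    number of false positives per case. Illustrative convention only."""
    tp = sum(c[0] for c in per_case)
    fn = sum(c[1] for c in per_case)
    fp = sum(c[2] for c in per_case)
    sensitivity = tp / (tp + fn)      # fraction of true lesions detected
    fp_avg = fp / len(per_case)       # mean false positives per case
    return sensitivity, fp_avg

# Toy example with two cases.
sens, fp_avg = detection_summary([(9, 1, 3), (5, 1, 1)])
print(round(sens, 3), fp_avg)  # 0.875 2.0
```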
Affiliation(s)
- Hyunmin Kim
  - Department of Radiology, Kyung Hee University Hospital, Kyung Hee University College of Medicine, Seoul, Republic of Korea
- Hyug-Gi Kim
  - Department of Radiology, Kyung Hee University Hospital, Kyung Hee University College of Medicine, Seoul, Republic of Korea
6
Chung KM, Yu H, Kim JH, Lee JJ, Sohn JH, Lee SH, Sung JH, Han SW, Yang JS, Kim C. Deep Learning-Based Knee MRI Classification for Common Peroneal Nerve Palsy with Foot Drop. Biomedicines 2023; 11:3171. PMID: 38137392; PMCID: PMC10741167; DOI: 10.3390/biomedicines11123171. Received 10/17/2023; revised 11/21/2023; accepted 11/26/2023.
Abstract
Foot drop can have a variety of causes, including common peroneal nerve (CPN) injury, and is often difficult to diagnose. We aimed to develop a deep learning-based algorithm that can classify foot drop with CPN injury using only axial knee MRI images. In this retrospective study, we included 945 MR images from 42 foot-drop patients with CPN injury confirmed by electrophysiologic tests, and 1,341 MR images from 107 patients with non-traumatic knee pain. Data were split into training, validation, and test datasets using an 8:1:1 ratio. We used convolutional neural network-based algorithms (EfficientNet-B5, ResNet152, VGG19) for the classification between the CPN injury group and the others. The performance of each classification algorithm was assessed using the area under the receiver operating characteristic curve (AUC). In classifying CPN versus non-CPN MR images, EfficientNet-B5 had the highest performance (AUC = 0.946), followed by ResNet152 and VGG19. On other performance metrics, including precision, recall, accuracy, and F1 score, EfficientNet-B5 was also the best of the three algorithms. In a saliency map, the EfficientNet-B5 algorithm focused on the nerve area to detect CPN injury. In conclusion, deep learning-based analysis of knee MR images can successfully differentiate CPN injury from other etiologies in patients with foot drop.
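The 8:1:1 train/validation/test split used above can be sketched generically; the seed, function name, and use of a shuffled list are illustrative assumptions, not the authors' code:

```python
import random

def split_811(items, seed=42):
    """Shuffle items and split them into train/validation/test sets
    with an 8:1:1 ratio. Generic sketch of the split described."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train, val, test = split_811(range(100))
print(len(train), len(val), len(test))  # 80 10 10
```

In imaging studies the split is usually done at the patient level rather than the image level, so that slices from one patient never appear in both training and test sets.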
Affiliation(s)
- Kyung Min Chung
  - Department of Neurosurgery, Hallym University College of Medicine, Chuncheon 24252, Republic of Korea
- Hyunjae Yu
  - Division of Big Data and Artificial Intelligence, Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon 24252, Republic of Korea
- Jong-Ho Kim
  - Department of Anesthesiology, Hallym University College of Medicine, Chuncheon 24252, Republic of Korea
- Jae Jun Lee
  - Department of Anesthesiology, Hallym University College of Medicine, Chuncheon 24252, Republic of Korea
- Jong-Hee Sohn
  - Department of Neurology, Hallym University College of Medicine, Chuncheon 24252, Republic of Korea
- Sang-Hwa Lee
  - Department of Neurology, Hallym University College of Medicine, Chuncheon 24252, Republic of Korea
- Joo Hye Sung
  - Department of Neurology, Hallym University College of Medicine, Chuncheon 24252, Republic of Korea
- Sang-Won Han
  - Division of Big Data and Artificial Intelligence, Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon 24252, Republic of Korea
  - Department of Neurology, Hallym University College of Medicine, Chuncheon 24252, Republic of Korea
- Jin Seo Yang
  - Department of Neurosurgery, Hallym University College of Medicine, Chuncheon 24252, Republic of Korea
- Chulho Kim
  - Division of Big Data and Artificial Intelligence, Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon 24252, Republic of Korea
  - Department of Neurology, Hallym University College of Medicine, Chuncheon 24252, Republic of Korea
7
Zhang C, Xu J, Tang R, Yang J, Wang W, Yu X, Shi S. Novel research and future prospects of artificial intelligence in cancer diagnosis and treatment. J Hematol Oncol 2023; 16:114. PMID: 38012673; PMCID: PMC10680201; DOI: 10.1186/s13045-023-01514-5. Received 10/07/2023; accepted 11/20/2023.
Abstract
Research into the potential benefits of artificial intelligence for comprehending the intricate biology of cancer has grown as a result of the widespread use of deep learning and machine learning in the healthcare sector and the availability of highly specialized cancer datasets. Here, we review new artificial intelligence approaches and how they are being used in oncology. We describe how artificial intelligence might be used in the detection, prognosis, and administration of cancer treatments and introduce the use of the latest large language models such as ChatGPT in oncology clinics. We highlight artificial intelligence applications for omics data types, and we offer perspectives on how the various data types might be combined to create decision-support tools. We also evaluate the present constraints and challenges to applying artificial intelligence in precision oncology. Finally, we discuss how current challenges may be surmounted to make artificial intelligence useful in clinical settings in the future.
Affiliation(s)
- Chaoyi Zhang, Jin Xu, Rong Tang, Jianhui Yang, Wei Wang, Xianjun Yu, Si Shi (all authors share the following affiliations)
  - Department of Pancreatic Surgery, Fudan University Shanghai Cancer Center, No. 270 Dong'An Road, Shanghai, 200032, People's Republic of China
  - Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
  - Shanghai Pancreatic Cancer Institute, No. 399 Lingling Road, Shanghai, 200032, People's Republic of China
  - Pancreatic Cancer Institute, Fudan University, Shanghai, 200032, People's Republic of China
8
Lee S, Jeon U, Lee JH, Kang S, Kim H, Lee J, Chung MJ, Cha HS. Artificial intelligence for the detection of sacroiliitis on magnetic resonance imaging in patients with axial spondyloarthritis. Front Immunol 2023; 14:1278247. PMID: 38022576; PMCID: PMC10676202; DOI: 10.3389/fimmu.2023.1278247. Received 08/16/2023; accepted 10/25/2023.
Abstract
Background: Magnetic resonance imaging (MRI) is important for the early detection of axial spondyloarthritis (axSpA). We developed an artificial intelligence (AI) model for detecting sacroiliitis in patients with axSpA using MRI. Methods: This study included MRI examinations of patients who underwent semi-coronal MRI scans of the sacroiliac joints with short tau inversion recovery (STIR) sequences owing to chronic back pain between January 2010 and December 2021. Sacroiliitis was defined as a positive MRI finding according to the ASAS classification criteria for axSpA. We developed a two-stage framework. First, a Faster R-CNN network extracted regions of interest (ROIs) to localize the sacroiliac joints. Maximum intensity projection (MIP) of three consecutive slices was used to mimic the reading of two adjacent slices. Second, a VGG-19 network determined the presence of sacroiliitis in the localized ROIs. We augmented the positive dataset six-fold. Sacroiliitis classification performance was measured using sensitivity, specificity, and the area under the receiver operating characteristic curve (AUROC). The prediction models were evaluated using three-round three-fold cross-validation. Results: A total of 296 participants with 4,746 MRI slices were included in the study. Sacroiliitis was identified in 864 MRI slices of 119 participants. The mean sensitivity, specificity, and AUROC for the detection of sacroiliitis were 0.725 (95% CI, 0.705-0.745), 0.936 (95% CI, 0.924-0.947), and 0.830 (95% CI, 0.792-0.868), respectively, at the image level, and 0.947 (95% CI, 0.912-0.982), 0.691 (95% CI, 0.603-0.779), and 0.816 (95% CI, 0.776-0.856), respectively, at the patient level. In the original model, without MIP and dataset augmentation, the mean sensitivity, specificity, and AUROC were 0.517 (95% CI, 0.493-0.780), 0.944 (95% CI, 0.933-0.955), and 0.731 (95% CI, 0.681-0.780), respectively, at the image level, and 0.806 (95% CI, 0.729-0.883), 0.617 (95% CI, 0.523-0.711), and 0.711 (95% CI, 0.660-0.763), respectively, at the patient level. Performance was thus improved by the MIP technique and data augmentation. Conclusion: An AI model was developed for the detection of sacroiliitis using MRI, compatible with the ASAS criteria for axSpA, with the potential to aid MRI application in a wider clinical setting.
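The three-slice MIP step described in this abstract can be sketched as follows; the array shapes and function name are illustrative, not the authors' code:

```python
import numpy as np

def mip3(volume: np.ndarray) -> np.ndarray:
    """Maximum intensity projection over each window of three consecutive
    slices, mimicking a reader viewing adjacent slices together.
    Input (n_slices, H, W) -> output (n_slices - 2, H, W)."""
    return np.stack([volume[i:i + 3].max(axis=0)
                     for i in range(volume.shape[0] - 2)])

# Toy example: 10 STIR slices yield 8 three-slice projections.
vol = np.random.rand(10, 32, 32)
proj = mip3(vol)
print(proj.shape)  # (8, 32, 32)
```

Each projected slice keeps the brightest voxel across its window, so a lesion visible on any one of the three slices remains visible in the projection.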
Affiliation(s)
- Seulkee Lee
  - Department of Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Uju Jeon
  - Medical AI Research Center, Samsung Medical Center, Seoul, Republic of Korea
- Ji Hyun Lee
  - Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Seonyoung Kang
  - Department of Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Hyungjin Kim
  - Department of Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Jaejoon Lee
  - Department of Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Myung Jin Chung
  - Medical AI Research Center, Samsung Medical Center, Seoul, Republic of Korea
  - Department of Data Convergence and Future Medicine, Sungkyunkwan University School of Medicine, Suwon, Republic of Korea
- Hoon-Suk Cha
  - Department of Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
9
Li Q, Qin Y. AI in medical education: medical student perception, curriculum recommendations and design suggestions. BMC Med Educ 2023; 23:852. PMID: 37946176; PMCID: PMC10637014; DOI: 10.1186/s12909-023-04700-8. Received 05/26/2023; accepted 09/19/2023.
Abstract
Medical AI has transformed modern medicine and created a new environment for future doctors. However, medical education has failed to keep pace with these advances, and it is essential to provide systematic education on medical AI to current undergraduate and postgraduate medical students. To address this issue, our study used the Unified Theory of Acceptance and Use of Technology model to identify key factors that influence the acceptance of and intention to use medical AI. We collected data from 1,243 undergraduate and postgraduate students from 13 universities and 33 hospitals; 54.3% reported prior experience using medical AI. Our findings indicated that postgraduate medical students have a higher level of awareness of medical AI use than undergraduate students. The intention to use medical AI is positively associated with factors such as performance expectancy, habit, hedonic motivation, and trust. Future medical education should therefore prioritize promoting students' performance in training, and courses should be designed to be both easy to learn and engaging, ensuring that students are equipped with the necessary skills for their future medical careers.
Affiliation(s)
- Qianying Li
  - Antai College of Economics and Management, Shanghai Jiao Tong University, Shanghai, China
- Yunhao Qin
  - Department of Orthopedics, Shanghai Sixth People's Hospital, Shanghai Jiao Tong University, Shanghai, China
10
Gao Z, Yu Z, Zhang X, Chen C, Pan Z, Chen X, Lin W, Chen J, Zhuge Q, Shen X. Development of a deep learning model for early gastric cancer diagnosis using preoperative computed tomography images. Front Oncol 2023; 13:1265366. PMID: 37869090; PMCID: PMC10587601; DOI: 10.3389/fonc.2023.1265366. Received 07/22/2023; accepted 09/15/2023.
Abstract
Background: Gastric cancer is a highly prevalent and fatal disease, and accurate differentiation between early gastric cancer (EGC) and advanced gastric cancer (AGC) is essential for personalized treatment. Currently, the diagnostic accuracy of computed tomography (CT) for gastric cancer staging is insufficient to meet clinical requirements, and many studies rely on manual marking of lesion areas, which is not suitable for clinical diagnosis. Methods: In this study, we retrospectively collected data from 341 patients with gastric cancer at the First Affiliated Hospital of Wenzhou Medical University. The dataset was randomly divided into a training set (n=273) and a validation set (n=68) using an 8:2 ratio. We developed a two-stage deep learning model that enables fully automated EGC screening based on CT images. In the first stage, an unsupervised domain-adaptive segmentation model automatically segments the stomach on unlabeled portal-phase CT images. Based on the segmentation results, the stomach area is cropped from the image and scaled to a uniform size, and the EGC and AGC classification models are then built on these cropped images. Segmentation accuracy was evaluated using the Dice index, while classification performance was assessed using the area under the receiver operating characteristic (ROC) curve (AUC), accuracy, sensitivity, specificity, and F1 score. Results: The segmentation model achieved an average Dice accuracy of 0.94 on the hand-segmented validation set. The EGC screening model achieved an AUC, accuracy, sensitivity, specificity, and F1 score of 0.98, 0.93, 0.92, 0.92, and 0.93, respectively, on the training set, and 0.96, 0.92, 0.90, 0.89, and 0.93, respectively, on the validation set. After three rounds of data regrouping, the model consistently achieved an AUC above 0.9 on the validation set. Conclusion: These results demonstrate that the proposed method can effectively screen for EGC on portal venous phase CT images. Furthermore, the model is stable and holds promise for future clinical application.
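The Dice index used to evaluate the segmentation model is defined as 2|A∩B| / (|A| + |B|) for two binary masks; a minimal sketch with toy masks:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * inter / total if total else 1.0  # 1.0 for two empty masks

# Toy masks: prediction covers 4 pixels, ground truth 6, overlap 4.
a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1
print(dice(a, b))  # 0.8
```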
Collapse
Affiliation(s)
- Zhihong Gao
- Zhejiang Engineering Research Center of Intelligent Medicine, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Zhuo Yu
- School of Information and Safety Engineering, Zhongnan University of Economics and Law, Wuhan, China
- Xiang Zhang
- Wenzhou Data Management and Development Group Co., Ltd., Wenzhou, Zhejiang, China
- Chun Chen
- School of Public Health and Management, Wenzhou Medical University, Wenzhou, Zhejiang, China
- Zhifang Pan
- Zhejiang Engineering Research Center of Intelligent Medicine, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Xiaodong Chen
- Department of Gastrointestinal Surgery, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Weihong Lin
- Zhejiang Engineering Research Center of Intelligent Medicine, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Jun Chen
- Zhejiang Engineering Research Center of Intelligent Medicine, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Qichuan Zhuge
- Zhejiang Provincial Key Laboratory of Aging and Neurological Disorder Research, Department of Neurosurgery, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, Zhejiang, China
- Xian Shen
- Department of Gastrointestinal Surgery, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
11
Bathla G, Dhruba DD, Soni N, Liu Y, Larson NB, Kassmeyer BA, Mohan S, Roberts-Wolfe D, Rathore S, Le NH, Zhang H, Sonka M, Priya S. AI-based classification of three common malignant tumors in neuro-oncology: A multi-institutional comparison of machine learning and deep learning methods. J Neuroradiol 2023:S0150-9861(23)00237-7. [PMID: 37652263 DOI: 10.1016/j.neurad.2023.08.007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2023] [Revised: 08/23/2023] [Accepted: 08/29/2023] [Indexed: 09/02/2023]
Abstract
PURPOSE To determine whether machine learning (ML) or deep learning (DL) pipelines perform better in AI-based three-class classification of glioblastoma (GBM), intracranial metastatic disease (IMD), and primary CNS lymphoma (PCNSL). METHODOLOGY This retrospective analysis included 502 cases for training (208 GBM, 67 PCNSL, and 227 IMD), with external validation on 86 cases (27:27:32). Multiparametric MRI images (T1W, T2W, FLAIR, DWI, and T1-CE) were co-registered, resampled, denoised, and intensity normalized, followed by semiautomatic 3D segmentation of the enhancing tumor (ET) and peritumoral region (PTR). Model performance was assessed for several ML pipelines and 3D convolutional neural networks (3D-CNN) using sequence-specific masks as well as combinations of masks. All pipelines were trained and evaluated with 5-fold nested cross-validation on internal data, followed by external validation using multi-class AUC. RESULTS Two ML models achieved similar performance on the test set: one using T2-ET and T2-PTR masks (AUC: 0.885, 95% CI: [0.816, 0.935]) and another using T1-CE-ET and FLAIR-PTR masks (AUC: 0.878, CI: [0.804, 0.930]). The best-performing DL model achieved an AUC of 0.854 (CI: [0.774, 0.914]) on external data using T1-CE-ET and T2-PTR masks, followed by a model derived from T1-CE-ET, ADC-ET, and FLAIR-PTR masks (AUC: 0.851, CI: [0.772, 0.909]). CONCLUSION Both ML- and DL-derived pipelines achieved similar performance. The T1-CE mask was used in three of the top four overall models. Additionally, all four models included some mask derived from the PTR, on either T2WI or FLAIR.
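A common way to evaluate a three-class classifier with a multi-class AUC, as in the study above, is a macro-averaged one-vs-rest AUC over the predicted class probabilities. The sketch below computes the binary AUC via the Mann-Whitney rank statistic; note the paper does not specify its exact multi-class AUC definition (one-vs-rest vs. one-vs-one), so this is an illustrative assumption.

```python
import numpy as np

def binary_auc(labels, scores):
    """AUC for binary labels via the Mann-Whitney U statistic:
    the probability that a positive case outranks a negative one,
    counting ties as 0.5."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def macro_ovr_auc(y_true, y_prob):
    """Macro-averaged one-vs-rest AUC for an (n_samples, n_classes)
    matrix of predicted class probabilities."""
    aucs = [binary_auc((y_true == k).astype(int), y_prob[:, k])
            for k in range(y_prob.shape[1])]
    return float(np.mean(aucs))
```

A perfect classifier scores 1.0 and a constant (uninformative) one scores 0.5 under this metric, matching the usual binary-AUC intuition per class.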
Affiliation(s)
- Girish Bathla
- Department of Radiology, University of Iowa Hospitals and Clinics, 200 Hawkins Drive, Iowa City, IA 52242, USA; Department of Radiology, Mayo Clinic, 200 1st Street SW, Rochester, MN 55902, USA
- Durjoy Deb Dhruba
- Electrical and Computer Engineering, University of Iowa, 4016 Seamans Center for the Engineering Arts and Sciences, Iowa City, IA 52242, USA
- Neetu Soni
- Department of Radiology, University of Iowa Hospitals and Clinics, 200 Hawkins Drive, Iowa City, IA 52242, USA; Department of Imaging Sciences, University of Rochester Medical Center, 601 Elmwood Ave, Box 648, Rochester, NY 14642, USA
- Yanan Liu
- Advanced Pulmonary Physiomic Imaging Laboratory (APPIL), University of Iowa, 200 Hawkins Drive, Iowa City, IA 52242, USA
- Nicholas B Larson
- Division of Clinical Trials and Biostatistics, Department of Quantitative Health Sciences, Mayo Clinic, 200 1st Street SW, Rochester, MN 55902, USA
- Blake A Kassmeyer
- Division of Clinical Trials and Biostatistics, Department of Quantitative Health Sciences, Mayo Clinic, 200 1st Street SW, Rochester, MN 55902, USA
- Suyash Mohan
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104, USA
- Douglas Roberts-Wolfe
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104, USA
- Saima Rathore
- Senior research scientist, Avid Radiopharmaceuticals, 3711 Market Street, Philadelphia, PA 19104, USA
- Nam H Le
- Electrical and Computer Engineering, University of Iowa, 4016 Seamans Center for the Engineering Arts and Sciences, Iowa City, IA 52242, USA
- Honghai Zhang
- Electrical and Computer Engineering, University of Iowa, 4016 Seamans Center for the Engineering Arts and Sciences, Iowa City, IA 52242, USA
- Milan Sonka
- Electrical and Computer Engineering, University of Iowa, 4016 Seamans Center for the Engineering Arts and Sciences, Iowa City, IA 52242, USA
- Sarv Priya
- Department of Radiology, University of Iowa Hospitals and Clinics, 200 Hawkins Drive, Iowa City, IA 52242, USA
12
Hagiwara A, Fujita S, Kurokawa R, Andica C, Kamagata K, Aoki S. Multiparametric MRI: From Simultaneous Rapid Acquisition Methods and Analysis Techniques Using Scoring, Machine Learning, Radiomics, and Deep Learning to the Generation of Novel Metrics. Invest Radiol 2023; 58:548-560. [PMID: 36822661 PMCID: PMC10332659 DOI: 10.1097/rli.0000000000000962] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2022] [Revised: 01/10/2023] [Indexed: 02/25/2023]
Abstract
With the recent advancements in rapid imaging methods, higher numbers of contrasts and quantitative parameters can be acquired in less and less time. Some acquisition models simultaneously obtain multiparametric images and quantitative maps to reduce scan times and avoid potential issues associated with the registration of different images. Multiparametric magnetic resonance imaging (MRI) has the potential to provide complementary information on a target lesion and thus overcome the limitations of individual techniques. In this review, we introduce methods to acquire multiparametric MRI data in a clinically feasible scan time, with a particular focus on simultaneous acquisition techniques, and we discuss how multiparametric MRI data can be analyzed as a whole rather than each parameter separately. Such data analysis approaches include clinical scoring systems, machine learning, radiomics, and deep learning. Other techniques combine multiple images to create new quantitative maps associated with meaningful aspects of human biology. These include the magnetic resonance g-ratio, the ratio of the inner to the outer diameter of a nerve fiber, and the aerobic glycolytic index, which captures the metabolic status of tumor tissues.
Affiliation(s)
- Akifumi Hagiwara
- Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan
- Shohei Fujita
- Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Ryo Kurokawa
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Division of Neuroradiology, Department of Radiology, University of Michigan, Ann Arbor, Michigan
- Christina Andica
- Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan
- Koji Kamagata
- Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan
- Shigeki Aoki
- Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan
13
Martucci M, Russo R, Schimperna F, D’Apolito G, Panfili M, Grimaldi A, Perna A, Ferranti AM, Varcasia G, Giordano C, Gaudino S. Magnetic Resonance Imaging of Primary Adult Brain Tumors: State of the Art and Future Perspectives. Biomedicines 2023; 11:biomedicines11020364. [PMID: 36830900 PMCID: PMC9953338 DOI: 10.3390/biomedicines11020364] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2022] [Revised: 01/20/2023] [Accepted: 01/22/2023] [Indexed: 01/28/2023] Open
Abstract
MRI is undoubtedly the cornerstone of brain tumor imaging, playing a key role in all phases of patient management, from diagnosis, through therapy planning, to the assessment of treatment response and/or recurrence. Currently, neuroimaging can describe morphologic and non-morphologic (functional, hemodynamic, metabolic, cellular, microstructural, and sometimes even genetic) characteristics of brain tumors, greatly contributing to diagnosis and follow-up. Knowing the technical aspects, strengths, and limits of each MR technique is crucial to correctly interpret MR brain studies and to guide clinicians toward the best treatment strategy. This article aims to provide an overview of neuroimaging in the assessment of adult primary brain tumors. We start from the foundational role of conventional/morphological MR sequences, then analyze, one by one, the non-morphological techniques, and finally highlight future perspectives, such as radiomics and artificial intelligence.
Affiliation(s)
- Matia Martucci
- Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico “A. Gemelli” IRCCS, 00168 Rome, Italy
- Rosellina Russo
- Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico “A. Gemelli” IRCCS, 00168 Rome, Italy
- Gabriella D’Apolito
- Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico “A. Gemelli” IRCCS, 00168 Rome, Italy
- Marco Panfili
- Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico “A. Gemelli” IRCCS, 00168 Rome, Italy
- Alessandro Grimaldi
- Istituto di Radiologia, Università Cattolica del Sacro Cuore, 00168 Rome, Italy
- Alessandro Perna
- Istituto di Radiologia, Università Cattolica del Sacro Cuore, 00168 Rome, Italy
- Giuseppe Varcasia
- Istituto di Radiologia, Università Cattolica del Sacro Cuore, 00168 Rome, Italy
- Carolina Giordano
- Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico “A. Gemelli” IRCCS, 00168 Rome, Italy
- Simona Gaudino
- Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico “A. Gemelli” IRCCS, 00168 Rome, Italy
- Istituto di Radiologia, Università Cattolica del Sacro Cuore, 00168 Rome, Italy