1. Yasaka K, Kawamura M, Sonoda Y, Kubo T, Kiryu S, Abe O. Large multimodality model fine-tuned for detecting breast and esophageal carcinomas on CT: a preliminary study. Jpn J Radiol 2025; 43:779-786. PMID: 39668277; PMCID: PMC12052878; DOI: 10.1007/s11604-024-01718-w.
Abstract
PURPOSE This study aimed to develop a large multimodality model (LMM) that can detect breast and esophageal carcinomas on chest contrast-enhanced CT. MATERIALS AND METHODS In this retrospective study, CT images of 401 (age, 62.9 ± 12.9 years; 169 males), 51 (age, 65.5 ± 11.6 years; 23 males), and 120 (age, 64.6 ± 14.2 years; 60 males) patients were used in the training, validation, and test phases, respectively. The numbers of CT images with breast carcinoma, esophageal carcinoma, and no lesion were 927, 2180, and 2087; 80, 233, and 270; and 184, 246, and 6919 for the training, validation, and test datasets, respectively. The LMM was fine-tuned using CT images as input and text data ("suspicious of breast carcinoma"/"suspicious of esophageal carcinoma"/"no lesion") as reference data on a desktop computer equipped with a single graphics processing unit. Because of the random nature of the training process, supervised learning was performed 10 times. The performance of the best-performing model on the validation dataset was further tested using the time-independent test dataset. Detection performance was evaluated by calculating the area under the receiver operating characteristic curve (AUC). RESULTS The sensitivities of the fine-tuned LMM for detecting breast and esophageal carcinomas in the test dataset were 0.929 and 0.951, respectively. The diagnostic performance of the fine-tuned LMM for detecting breast and esophageal carcinomas was high, with AUCs of 0.890 (95% CI 0.871-0.909) and 0.880 (95% CI 0.865-0.894), respectively. CONCLUSIONS The fine-tuned LMM could detect both breast and esophageal carcinomas on chest contrast-enhanced CT with high diagnostic performance. The usefulness of large multimodality models in chest cancer imaging had not previously been assessed; the fine-tuned model detected both carcinomas with high diagnostic performance (AUCs of 0.890 and 0.880, respectively).
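A minimal sketch of the AUC evaluation step described in this abstract, not the authors' implementation: given per-image class probabilities from a three-class model, a one-vs-rest AUC can be computed for each carcinoma class with scikit-learn. The arrays here are randomly generated stand-ins.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical per-image outputs: y_true holds the reference label
# (0 = no lesion, 1 = breast carcinoma, 2 = esophageal carcinoma);
# y_prob holds the model's softmax probabilities for the three classes.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 3, size=500)
y_prob = rng.dirichlet(np.ones(3), size=500)

for cls, name in [(1, "breast carcinoma"), (2, "esophageal carcinoma")]:
    auc = roc_auc_score((y_true == cls).astype(int), y_prob[:, cls])
    print(f"One-vs-rest AUC for {name}: {auc:.3f}")
```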
Affiliation(s)
- Koichiro Yasaka: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Motohide Kawamura: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Yuki Sonoda: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Takatoshi Kubo: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Shigeru Kiryu: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba 286-0124, Japan
- Osamu Abe: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
2. Chindanuruks T, Jindanil T, Cumpim C, Sinpitaksakul P, Arunjaroensuk S, Mattheos N, Pimkhaokham A. Development and validation of a deep learning algorithm for the classification of the level of surgical difficulty in impacted mandibular third molar surgery. Int J Oral Maxillofac Surg 2025; 54:452-460. PMID: 39632213; DOI: 10.1016/j.ijom.2024.11.008.
Abstract
The aim of this study was to develop and validate a convolutional neural network (CNN) algorithm for the detection of impacted mandibular third molars in panoramic radiographs and the classification of the surgical extraction difficulty level. A dataset of 1730 panoramic radiographs was collected; 1300 images were allocated to training and 430 to testing. The performance of the model was evaluated using the confusion matrix for multiclass classification, and the model's scores were compared to those of two human experts. The area under the precision-recall curve of the YOLOv5 model ranged from 72% to 89% across the variables in the surgical difficulty index. The area under the receiver operating characteristic curve showed promising results for the YOLOv5 model in classifying third molars into three surgical difficulty levels (micro-average AUC 87%). Furthermore, the algorithm's scores demonstrated good agreement with those of the human experts. In conclusion, the YOLOv5 model has the potential to accurately detect and classify the position of mandibular third molars in radiographic images, with high performance for every criterion. The proposed model could serve as an aid in improving clinician performance and could be integrated into a screening system.
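A brief sketch of how a micro-average AUC over three difficulty levels could be computed with scikit-learn, assuming access to the classifier's per-level probabilities; the abstract does not publish code, and all values below are synthetic.

```python
import numpy as np
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_auc_score

# Hypothetical outputs for a 3-level surgical difficulty classifier:
# y_true holds the reference difficulty level, y_score the predicted
# class probabilities (rows sum to 1).
rng = np.random.default_rng(0)
y_true = rng.integers(0, 3, size=430)
y_score = rng.dirichlet(np.ones(3), size=430)

# Micro-averaging pools all class/sample pairs into one binary problem.
y_bin = label_binarize(y_true, classes=[0, 1, 2])
micro_auc = roc_auc_score(y_bin.ravel(), y_score.ravel())
print(f"Micro-average AUC: {micro_auc:.2f}")
```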
Affiliation(s)
- T Chindanuruks: Department of Oral and Maxillofacial Surgery, Faculty of Dentistry, Chulalongkorn University, Bangkok, Thailand; Oral and Maxillofacial Surgery and Digital Implant Surgery Research Unit, Faculty of Dentistry, Chulalongkorn University, Bangkok, Thailand
- T Jindanil: Department of Radiology, Faculty of Dentistry, Chulalongkorn University, Bangkok, Thailand
- C Cumpim: Department of Computer Engineering, Faculty of Engineering, Rajamangala University of Technology Rattanakosin, Nakhon Pathom, Thailand
- P Sinpitaksakul: Department of Radiology, Faculty of Dentistry, Chulalongkorn University, Bangkok, Thailand
- S Arunjaroensuk: Department of Oral and Maxillofacial Surgery, Faculty of Dentistry, Chulalongkorn University, Bangkok, Thailand; Oral and Maxillofacial Surgery and Digital Implant Surgery Research Unit, Faculty of Dentistry, Chulalongkorn University, Bangkok, Thailand
- N Mattheos: Department of Oral and Maxillofacial Surgery, Faculty of Dentistry, Chulalongkorn University, Bangkok, Thailand; Oral and Maxillofacial Surgery and Digital Implant Surgery Research Unit, Faculty of Dentistry, Chulalongkorn University, Bangkok, Thailand; Department of Dental Medicine, Karolinska Institute, Stockholm, Sweden
- A Pimkhaokham: Department of Oral and Maxillofacial Surgery, Faculty of Dentistry, Chulalongkorn University, Bangkok, Thailand; Oral and Maxillofacial Surgery and Digital Implant Surgery Research Unit, Faculty of Dentistry, Chulalongkorn University, Bangkok, Thailand
3. Yasaka K, Asari Y, Morita Y, Kurokawa M, Tajima T, Akai H, Yoshioka N, Akahane M, Ohtomo K, Abe O, Kiryu S. Super-resolution deep learning reconstruction to evaluate lumbar spinal stenosis status on magnetic resonance myelography. Jpn J Radiol 2025. PMID: 40266548; DOI: 10.1007/s11604-025-01787-5.
Abstract
PURPOSE To investigate whether super-resolution deep learning reconstruction (SR-DLR) of MR myelography aids evaluations of lumbar spinal stenosis. MATERIALS AND METHODS In this retrospective study, lumbar MR myelography examinations of 40 patients (16 males and 24 females; mean age, 59.4 ± 31.8 years) were analyzed. Using the MR imaging data, MR myelography was separately reconstructed via SR-DLR, deep learning reconstruction (DLR), and conventional zero-filling interpolation (ZIP). Three radiologists, blinded to patient background data and MR reconstruction information, independently evaluated the image sets in terms of the following items: the number of levels affected by lumbar spinal stenosis; and cauda equina depiction, sharpness, noise, artifacts, and overall image quality. RESULTS The median interobserver agreement regarding the number of lumbar spinal stenosis levels was 0.819, 0.735, and 0.729 for SR-DLR, DLR, and ZIP images, respectively. Depiction of the cauda equina and image sharpness, noise, and overall quality were rated significantly better on SR-DLR images than on DLR and ZIP images by all readers (p < 0.001, Wilcoxon signed-rank test). No significant differences were observed for artifacts between SR-DLR and either DLR or ZIP. CONCLUSIONS SR-DLR improved the image quality of lumbar MR myelography compared to DLR and ZIP and was associated with better interobserver agreement in assessments of lumbar spinal stenosis status.
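A small sketch of the paired, non-parametric reader-score comparison named in this abstract (the Wilcoxon signed-rank test); the scores below are simulated, not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired 5-point quality scores from one reader for the
# same 40 exams reconstructed with SR-DLR and with conventional DLR.
rng = np.random.default_rng(0)
dlr_scores = rng.integers(2, 5, size=40)
srdlr_scores = np.clip(dlr_scores + rng.integers(0, 2, size=40), 1, 5)

# Paired, non-parametric comparison suited to ordinal reader scores.
stat, p = wilcoxon(srdlr_scores, dlr_scores)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.4f}")
```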
Affiliation(s)
- Koichiro Yasaka: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan; Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba 286-0124, Japan
- Yusuke Asari: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Yuichi Morita: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Mariko Kurokawa: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Taku Tajima: Department of Radiology, International University of Health and Welfare Mita Hospital, 1-4-3 Mita, Minato-ku, Tokyo 108-8329, Japan
- Hiroyuki Akai: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba 286-0124, Japan; Department of Radiology, The Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo 108-8639, Japan
- Naoki Yoshioka: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba 286-0124, Japan
- Masaaki Akahane: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba 286-0124, Japan
- Kuni Ohtomo: International University of Health and Welfare, 2600-1 Kitakanemaru, Ohtawara, Tochigi 324-8501, Japan
- Osamu Abe: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Shigeru Kiryu: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba 286-0124, Japan
4. Yang X, Yang R, Liu X, Chen Z, Zheng Q. Recent Advances in Artificial Intelligence for Precision Diagnosis and Treatment of Bladder Cancer: A Review. Ann Surg Oncol 2025. PMID: 40221553; DOI: 10.1245/s10434-025-17228-6.
Abstract
BACKGROUND Bladder cancer is one of the top ten cancers globally, with its incidence steadily rising in China. Early detection and prognosis risk assessment play a crucial role in guiding subsequent treatment decisions for bladder cancer. However, traditional diagnostic methods such as bladder endoscopy, imaging, or pathology examinations heavily rely on the clinical expertise and experience of clinicians, exhibiting subjectivity and poor reproducibility. MATERIALS AND METHODS With the rise of artificial intelligence, novel approaches, particularly those employing deep learning technology, have shown significant advancements in clinical tasks related to bladder cancer, including tumor detection, molecular subtyping identification, tumor staging and grading, prognosis prediction, and recurrence assessment. RESULTS Artificial intelligence, with its robust data mining capabilities, enhances diagnostic efficiency and reproducibility when assisting clinicians in decision-making, thereby reducing the risks of misdiagnosis and underdiagnosis. This not only helps alleviate the current challenges of talent shortages and uneven distribution of medical resources but also fosters the development of precision medicine. CONCLUSIONS This study provides a comprehensive review of the latest research advances and prospects of artificial intelligence technology in the precise diagnosis and treatment of bladder cancer.
Affiliation(s)
- Xiangxiang Yang: Department of Urology, Renmin Hospital of Wuhan University, Wuhan, Hubei, People's Republic of China; Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan, Hubei, People's Republic of China
- Rui Yang: Department of Urology, Renmin Hospital of Wuhan University, Wuhan, Hubei, People's Republic of China; Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan, Hubei, People's Republic of China
- Xiuheng Liu: Department of Urology, Renmin Hospital of Wuhan University, Wuhan, Hubei, People's Republic of China; Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan, Hubei, People's Republic of China
- Zhiyuan Chen: Department of Urology, Renmin Hospital of Wuhan University, Wuhan, Hubei, People's Republic of China; Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan, Hubei, People's Republic of China
- Qingyuan Zheng: Department of Urology, Renmin Hospital of Wuhan University, Wuhan, Hubei, People's Republic of China; Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan, Hubei, People's Republic of China
5. Browning SD, Costello JM, Dunn HP, Fraser CL. The Use of Fundus Photography in the Emergency Room: A Review. Curr Neurol Neurosci Rep 2025; 25:30. PMID: 40214922; PMCID: PMC11991934; DOI: 10.1007/s11910-025-01417-7.
Abstract
PURPOSE OF REVIEW The ocular fundus reveals a wealth of pathophysiological findings that should change patient management in the emergency room (ER). Traditional fundoscopy has been technically challenging and diagnostically inaccurate, but technological advances in non-mydriatic fundus photography (NMFP) have facilitated clinically meaningful fundoscopy. This review presents an update on the literature regarding NMFP and its application to the ER, highlighting pivotal publications and recent advances within this field. RECENT FINDINGS NMFP's application in the ER is demonstrably feasible and integrates seamlessly into emergency physicians' (EPs') diagnostic workflows in a clinically meaningful and time-efficient manner. The images of the ocular fundus (OF) generated by NMFP are consistently of high quality, allowing greater diagnostic accuracy for EP and ophthalmology interpreters alike. Digital NMFP images facilitate effective ophthalmology input via telemedicine for reviewing images taken in the ER. NMFP has been shown to change management decisions in the ER, improving patient and departmental outcomes. Interpretation of fundus images remains a medical education challenge, and early research highlights the potential for artificial intelligence (AI) image systems to augment NMFP image interpretation in the ER. NMFP can change the ER approach to OF assessment; however, the factors limiting its routine implementation need further consideration. There is potential for AI to contribute to NMFP image screening systems to augment EPs' diagnostic accuracy.
Affiliation(s)
- Samuel D Browning: Faculty of Medicine, The University of New South Wales, New South Wales, Australia
- Julia M Costello: Port Macquarie Eye Centre, Port Macquarie, New South Wales, Australia; Port Macquarie Base Hospital, Port Macquarie, New South Wales, Australia
- Hamish P Dunn: Faculty of Medicine and Health, The University of Sydney, New South Wales, Australia; Faculty of Medicine, The University of New South Wales, New South Wales, Australia; Port Macquarie Eye Centre, Port Macquarie, New South Wales, Australia
- Clare L Fraser: Save Sight Institute, The University of Sydney, New South Wales, Australia
6. Yasaka K, Abe O. A New Step Forward in the Extraction of Appropriate Radiology Reports. Radiology 2025; 315:e250867. PMID: 40232144; DOI: 10.1148/radiol.250867.
Affiliation(s)
- Koichiro Yasaka: Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Osamu Abe: Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
7. Kucukkara Z, Ozkan IA, Tasdemir S, Ceylan O. Classification of chicken Eimeria species through deep transfer learning models: A comparative study on model efficacy. Vet Parasitol 2025; 334:110400. PMID: 39855058; DOI: 10.1016/j.vetpar.2025.110400.
Abstract
Eimeria is a protozoan parasite that causes coccidiosis in various animal species, especially in chickens, resulting in infections characterized by intestinal damage, hemorrhagic diarrhea, lethargy, and high mortality rates in the absence of effective control measures. The rapid spread of these parasites through ingestion of contaminated food and drinking water can seriously endanger animal health and productivity, leading to significant economic losses in the chicken industry. Chicken Eimeria species are difficult to identify by conventional microscopy due to similarities in oocyst morphology. In addition, species identification, which is important in epidemiological studies, is a time-consuming process involving the sporulation stage and various measurements, requiring labor and expertise. Therefore, the objective of this study was to develop an automated system to classify digital micrographic images of sporulated Eimeria oocysts belonging to seven pathogenic species obtained from domestic chickens, using deep transfer learning (DTL) models. This study is the first to utilize both feature extraction and fine-tuning methods with DTL models for this classification task. Seventeen pre-trained DTL models were evaluated. The Xception model achieved the highest classification performance, with an accuracy of 96.4%, outperforming all other models. These results highlight the efficacy of the Xception model and show that DTL models have significant potential for classifying Eimeria species. The DTL models applied in this study may reduce the workload of researchers in the future and could be incorporated into diagnostic tools and adapted for other practical uses in parasitology and other scientific fields.
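The two-phase transfer-learning recipe described here (feature extraction followed by fine-tuning) is a standard Keras pattern; a minimal sketch follows, with the input size, learning rates, and dataset objects as assumptions rather than the authors' settings.

```python
import tensorflow as tf

NUM_SPECIES = 7  # seven pathogenic Eimeria species

# Phase 1, feature extraction: freeze the ImageNet-pretrained Xception
# backbone and train only a new classification head.
base = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(299, 299, 3))
base.trainable = False
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(NUM_SPECIES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # hypothetical datasets

# Phase 2, fine-tuning: unfreeze the backbone and continue training
# with a much smaller learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)
```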
Affiliation(s)
- Zeki Kucukkara: Selcuk University, Faculty of Technology, Department of Computer Engineering, Konya, Türkiye
- Ilker Ali Ozkan: Selcuk University, Faculty of Technology, Department of Computer Engineering, Konya, Türkiye
- Sakir Tasdemir: Selcuk University, Faculty of Technology, Department of Computer Engineering, Konya, Türkiye
- Onur Ceylan: Selcuk University, Faculty of Veterinary Medicine, Department of Veterinary Parasitology, Konya, Türkiye
8. Yasaka K, Kanzawa J, Kanemaru N, Koshino S, Abe O. Fine-Tuned Large Language Model for Extracting Patients on Pretreatment for Lung Cancer from a Picture Archiving and Communication System Based on Radiological Reports. J Imaging Inform Med 2025; 38:327-334. PMID: 38955964; PMCID: PMC11811339; DOI: 10.1007/s10278-024-01186-8.
Abstract
This study aimed to investigate the performance of a fine-tuned large language model (LLM) in extracting patients on pretreatment for lung cancer from picture archiving and communication systems (PACS) and to compare it with that of radiologists. Patients whose radiological reports contained the term lung cancer (3111 for training, 124 for validation, and 288 for testing) were included in this retrospective study. Based on the clinical indication and diagnosis sections of the radiological report (used as input data), they were classified into four groups (used as reference data): group 0 (no lung cancer), group 1 (pretreatment lung cancer present), group 2 (after treatment for lung cancer), and group 3 (planning radiation therapy). Using the training and validation datasets, fine-tuning of the pretrained LLM was conducted ten times. Due to group imbalance, group 2 data were undersampled during training. The performance of the best-performing model in the validation dataset was assessed on the independent test dataset. For testing purposes, two other radiologists (readers 1 and 2) were also involved in classifying the radiological reports. The overall accuracy of the fine-tuned LLM, reader 1, and reader 2 was 0.983, 0.969, and 0.969, respectively. The sensitivity for differentiating groups 0/1/2/3 by the LLM, reader 1, and reader 2 was 1.000/0.948/0.991/1.000, 0.750/0.879/0.996/1.000, and 1.000/0.931/0.978/1.000, respectively. The time required for classification by the LLM, reader 1, and reader 2 was 46 s, 2539 s, and 1538 s, respectively. The fine-tuned LLM extracted patients on pretreatment for lung cancer from PACS with performance comparable to that of radiologists in a much shorter time.
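A compact sketch of the undersampling step mentioned in this abstract, using pandas; the group sizes, column names, and capping rule below are illustrative assumptions, not the study's actual data handling.

```python
import pandas as pd

# Hypothetical report table: 'text' is the report section used as input,
# 'group' is the reference label (0-3); group 2 dominates the data.
df = pd.DataFrame({
    "text": [f"report {i}" for i in range(1000)],
    "group": [2] * 700 + [0] * 100 + [1] * 100 + [3] * 100,
})

# Undersample the majority class so each group contributes at most
# as many reports as the largest minority class.
cap = df[df.group != 2].group.value_counts().max()
balanced = pd.concat([
    g.sample(n=min(len(g), cap), random_state=42)
    for _, g in df.groupby("group")
]).sample(frac=1, random_state=42)  # shuffle the rows
print(balanced.group.value_counts())
```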
Affiliation(s)
- Koichiro Yasaka: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Jun Kanzawa: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Noriko Kanemaru: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Saori Koshino: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Osamu Abe: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
9. Aldhyani T, Ahmed ZAT, Alsharbi BM, Ahmad S, Al-Adhaileh MH, Kamal AH, Almaiah M, Nazeer J. Diagnosis and detection of bone fracture in radiographic images using deep learning approaches. Front Med (Lausanne) 2025; 11:1506686. PMID: 39927268; PMCID: PMC11803505; DOI: 10.3389/fmed.2024.1506686.
Abstract
Introduction Bones are a fundamental component of human anatomy, enabling movement and support. Bone fractures are prevalent, and their accurate diagnosis is crucial in medical practice. In response to this challenge, researchers have turned to deep-learning (DL) algorithms, and recent advancements in DL methodologies have helped overcome existing issues in fracture detection. Methods This study develops an automated approach for identifying fractures using the multi-region X-ray dataset from Kaggle, which contains a comprehensive collection of 10,580 radiographic images. DL techniques, including VGG16, ResNet152V2, and DenseNet201, were applied for the detection and diagnosis of bone fractures. Results The experimental findings demonstrate that the proposed approach accurately identifies and classifies various types of fractures. The system, incorporating DenseNet201 and VGG16, achieved an accuracy of 97% during the validation phase. This work addresses the limitations of existing methods for fracture detection and diagnosis and proposes a system that improves accuracy. Conclusion The findings lay the foundation for future improvements to radiographic systems used in bone fracture diagnosis.
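A short sketch of the validation-phase evaluation implied by this abstract, using scikit-learn's classification report on hypothetical fracture/no-fracture predictions; this is not the authors' code, and the labels are synthetic.

```python
import numpy as np
from sklearn.metrics import classification_report

# Hypothetical validation outputs of a fracture classifier
# (0 = no fracture, 1 = fracture), correct ~97% of the time.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
y_pred = np.where(rng.random(200) < 0.97, y_true, 1 - y_true)

print(classification_report(y_true, y_pred,
                            target_names=["no fracture", "fracture"]))
```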
Affiliation(s)
- Zeyad A. T. Ahmed: Department of Computer Science, Dr. Babasaheb Ambedkar Marathwada University, Aurangabad, India
- Bayan M. Alsharbi: Department of Information Technology, College of Computers and Information Technology, Taif University, Taif, Saudi Arabia
- Sultan Ahmad: Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Alkharj, Saudi Arabia; School of Computer Science and Engineering, Lovely Professional University, Phagwara, Punjab, India
- Mosleh Hmoud Al-Adhaileh: Deanship of E-Learning and Distance Education and Information Technology, King Faisal University, Al-Ahsa, Saudi Arabia
- Ahmed Hassan Kamal: Department of Orthopedic and Trauma, College of Medicine, King Faisal University, Al-Ahsa, Saudi Arabia
- Mohammed Almaiah: King Abdullah the II IT School, The University of Jordan, Amman, Jordan
- Jabeen Nazeer: Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Alkharj, Saudi Arabia
10. Mustafa A, Wei C, Grovu R, Basman C, Kodra A, Maniatis G, Rutkin B, Weinberg M, Kliger C. Using novel machine learning tools to predict optimal discharge following transcatheter aortic valve replacement. Arch Cardiovasc Dis 2025; 118:26-34. PMID: 39424448; DOI: 10.1016/j.acvd.2024.08.008.
Abstract
BACKGROUND Although transcatheter aortic valve replacement has emerged as an alternative to surgical aortic valve replacement, it requires extensive healthcare resources, and optimal length of hospital stay has become increasingly important. This study was conducted to assess the potential of novel machine learning models (artificial neural network and eXtreme Gradient Boost) in predicting optimal hospital discharge following transcatheter aortic valve replacement. AIM To determine whether artificial neural network and eXtreme Gradient Boost models can be used to accurately predict optimal discharge following transcatheter aortic valve replacement. METHODS Data were collected from the 2016-2018 National Inpatient Sample database using International Classification of Diseases, Tenth Revision codes. Patients were divided into two cohorts based on length of hospital stay: optimal discharge (length of hospital stay 0-3 days) and late discharge (length of hospital stay 4-9 days). χ2 and t tests were performed to compare the characteristics of patients with optimal discharge and prolonged discharge. Logistic regression, artificial neural network, and eXtreme Gradient Boost models were used to predict optimal discharge. Model performance was determined using the area under the curve and the F1 score. An area under the curve ≥0.80 and an F1 score ≥0.70 were considered to indicate strong predictive accuracy. RESULTS A total of 25,874 patients who underwent transcatheter aortic valve replacement were analysed. Predictability of optimal discharge was similar amongst the models (area under the curve 0.80 in all models). In all models, patient disposition and elective procedure were the most important predictive factors. Coagulation disorder was the strongest comorbidity predictor of whether a patient had an optimal discharge. CONCLUSIONS The artificial neural network and eXtreme Gradient Boost models had satisfactory performance, demonstrating accuracy similar to binary logistic regression in predicting optimal discharge following transcatheter aortic valve replacement. Further validation and refinement of these models may lead to broader clinical adoption.
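A minimal sketch of the eXtreme Gradient Boost arm of this study using the xgboost package, with synthetic tabular features standing in for the National Inpatient Sample variables; the hyperparameters are assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, f1_score
from xgboost import XGBClassifier

# Hypothetical tabular features (comorbidities, disposition, elective
# status, ...) and a binary label: 1 = optimal discharge (LOS 0-3 days).
rng = np.random.default_rng(0)
X = rng.random((25874, 20))
y = (X[:, 0] + 0.3 * rng.standard_normal(25874) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                      eval_metric="logloss")
model.fit(X_tr, y_tr)

prob = model.predict_proba(X_te)[:, 1]
print(f"AUC: {roc_auc_score(y_te, prob):.2f}")                 # target >= 0.80
print(f"F1:  {f1_score(y_te, (prob > 0.5).astype(int)):.2f}")  # target >= 0.70
```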
Affiliation(s)
- Ahmad Mustafa: Department of Cardiology, Northwell Health, 2000 Marcus Avenue, Suite 300, New Hyde Park, NY 11042-1069, USA
- Chapman Wei: Department of Cardiology, Northwell Health, 2000 Marcus Avenue, Suite 300, New Hyde Park, NY 11042-1069, USA
- Radu Grovu: Department of Cardiology, Northwell Health, 2000 Marcus Avenue, Suite 300, New Hyde Park, NY 11042-1069, USA
- Craig Basman: Department of Cardiology, Northwell Health, 2000 Marcus Avenue, Suite 300, New Hyde Park, NY 11042-1069, USA
- Arber Kodra: Department of Cardiology, Northwell Health, 2000 Marcus Avenue, Suite 300, New Hyde Park, NY 11042-1069, USA
- Gregory Maniatis: Department of Cardiology, Northwell Health, 2000 Marcus Avenue, Suite 300, New Hyde Park, NY 11042-1069, USA
- Bruce Rutkin: Department of Cardiology, Northwell Health, 2000 Marcus Avenue, Suite 300, New Hyde Park, NY 11042-1069, USA
- Mitchell Weinberg: Department of Cardiology, Northwell Health, 2000 Marcus Avenue, Suite 300, New Hyde Park, NY 11042-1069, USA
- Chad Kliger: Department of Cardiology, Northwell Health, 2000 Marcus Avenue, Suite 300, New Hyde Park, NY 11042-1069, USA
11. Ohya A, Miyamoto T, Ichinohe F, Kobara H, Fujinaga Y, Shiozawa T. Problems of magnetic resonance diagnosis for gastric-type mucin-positive cervical lesions of the uterus and its solutions using artificial intelligence. PLoS One 2024; 19:e0315862. PMID: 39775578; PMCID: PMC11684648; DOI: 10.1371/journal.pone.0315862.
Abstract
PURPOSE To reveal the problems of magnetic resonance imaging (MRI) for diagnosing gastric-type mucin-positive lesions (GMPLs) and gastric-type mucin-negative lesions (GMNLs) of the uterine cervix. METHODS We selected 172 patients suspected to have lobular endocervical glandular hyperplasia; their pelvic MR images were categorised into training (n = 132) and validation (n = 40) groups. The images of the validation group were read twice by three pairs of six readers to determine the accuracy, area under the curve (AUC), and intraclass correlation coefficient (ICC). The readers evaluated three images (sagittal T2-weighted image [T2WI], axial T2WI, and axial T1-weighted image [T1WI]) for every patient. A pre-trained convolutional neural network (CNN) was used to differentiate between GMPLs and GMNLs, with four-fold cross-validation performed using cases in the training group. The accuracy and AUC were obtained using the MR images in the validation group. For each case, three images (sagittal T2WI and axial T2WI/T1WI) were entered into the CNN. Calculations were performed twice independently. The ICC (2,1) between the first and second CNN runs was evaluated, and these results were compared with those of the readers. RESULTS The highest accuracy of the readers was 77.50%. The highest ICC (1,1) between a pair of readers was 0.750. All ICC (2,1) values were <0.7, indicating poor agreement. The highest accuracy of the CNN was 82.50%. The AUC did not differ significantly between the CNN and the readers. The ICC (2,1) of the CNN was 0.965. CONCLUSIONS Variation in inter-reader and intra-reader accuracy limits MRI-based differentiation between GMPLs and GMNLs. The CNN is nearly as accurate as human readers but markedly improves the reproducibility of diagnosis.
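One way to compute the ICC(2,1) between two CNN runs, as reported in this abstract, is the pingouin package; the long-format table below is simulated, not the study's data.

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Hypothetical repeated predictions: the same 40 validation cases
# scored twice by the CNN (score = probability of GMPL).
rng = np.random.default_rng(0)
base = rng.random(40)
long = pd.DataFrame({
    "case": np.tile(np.arange(40), 2),
    "run": np.repeat(["first", "second"], 40),
    "score": np.concatenate([base, base + 0.02 * rng.standard_normal(40)]),
})

icc = pg.intraclass_corr(data=long, targets="case", raters="run",
                         ratings="score")
# ICC2 in pingouin's output corresponds to ICC(2,1): single rater,
# absolute agreement, two-way random effects.
print(icc.set_index("Type").loc["ICC2", "ICC"])
```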
Affiliation(s)
- Ayumi Ohya: Department of Radiology, Shinshu University School of Medicine, Matsumoto, Japan
- Tsutomu Miyamoto: Department of Obstetrics and Gynecology, Shinshu University School of Medicine, Matsumoto, Japan
- Fumihito Ichinohe: Department of Radiology, Shinshu University School of Medicine, Matsumoto, Japan
- Hisanori Kobara: Department of Obstetrics and Gynecology, Shinshu University School of Medicine, Matsumoto, Japan
- Yasunari Fujinaga: Department of Radiology, Shinshu University School of Medicine, Matsumoto, Japan
- Tanri Shiozawa: Department of Obstetrics and Gynecology, Shinshu University School of Medicine, Matsumoto, Japan
12. Yasaka K, Nomura T, Kamohara J, Hirakawa H, Kubo T, Kiryu S, Abe O. Classification of Interventional Radiology Reports into Technique Categories with a Fine-Tuned Large Language Model. J Imaging Inform Med 2024. PMID: 39673010; DOI: 10.1007/s10278-024-01370-w.
Abstract
The aim of this study was to develop a fine-tuned large language model that classifies interventional radiology reports into technique categories and to compare its performance with that of human readers. This retrospective study included 3198 patients (1758 males and 1440 females; age, 62.8 ± 16.8 years) who underwent interventional radiology procedures from January 2018 to July 2024. The training, validation, and test datasets involved 2292, 250, and 656 patients, respectively. Input data involved text in the clinical indication, imaging diagnosis, and image-finding sections of interventional radiology reports. Manually classified technique categories (15 categories in total) were utilized as reference data. Fine-tuning of the Bidirectional Encoder Representations from Transformers (BERT) model was performed using the training and validation datasets. This process was repeated 15 times due to the randomness of the learning process. The best-performing model, which showed the highest accuracy among the 15 trials, was selected for further evaluation on the independent test dataset. The report classification involved one radiologist (reader 1) and two radiology residents (readers 2 and 3). The accuracy and macrosensitivity (average of each category's sensitivity) of the best-performing model in the validation dataset were 0.996 and 0.994, respectively. For the test dataset, the accuracy/macrosensitivity were 0.988/0.980, 0.986/0.977, 0.989/0.979, and 0.988/0.980 for the best model, reader 1, reader 2, and reader 3, respectively. The model required 0.178 s per patient for classification, which was 17.5-19.9 times faster than the readers. In conclusion, the fine-tuned large language model classified interventional radiology reports into technique categories with high accuracy, similar to human readers, in a remarkably shorter time.
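Macrosensitivity as defined in this abstract is the unweighted mean of per-category sensitivities, i.e., macro-averaged recall; a small scikit-learn sketch with synthetic labels follows.

```python
import numpy as np
from sklearn.metrics import accuracy_score, recall_score

# Hypothetical test-set outputs for a 15-category technique classifier.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 15, size=656)
y_pred = np.where(rng.random(656) < 0.99, y_true, (y_true + 1) % 15)

accuracy = accuracy_score(y_true, y_pred)
# Macrosensitivity = macro-averaged recall: each category's sensitivity
# is computed separately, then averaged without class weighting.
macro_sens = recall_score(y_true, y_pred, average="macro")
print(f"accuracy={accuracy:.3f}, macrosensitivity={macro_sens:.3f}")
```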
Affiliation(s)
- Koichiro Yasaka: Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Takuto Nomura: Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Jun Kamohara: Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Hiroshi Hirakawa: Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Takatoshi Kubo: Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Shigeru Kiryu: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba 286-0124, Japan
- Osamu Abe: Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
13. Yasaka K, Kanzawa J, Nakaya M, Kurokawa R, Tajima T, Akai H, Yoshioka N, Akahane M, Ohtomo K, Abe O, Kiryu S. Super-resolution Deep Learning Reconstruction for 3D Brain MR Imaging: Improvement of Cranial Nerve Depiction and Interobserver Agreement in Evaluations of Neurovascular Conflict. Acad Radiol 2024; 31:5118-5127. PMID: 38897913; DOI: 10.1016/j.acra.2024.06.010.
Abstract
RATIONALE AND OBJECTIVES To determine if super-resolution deep learning reconstruction (SR-DLR) improves the depiction of cranial nerves and interobserver agreement when assessing neurovascular conflict in 3D fast asymmetric spin echo (3D FASE) brain MR images, as compared to deep learning reconstruction (DLR). MATERIALS AND METHODS This retrospective study involved reconstructing 3D FASE MR images of the brain for 37 patients using SR-DLR and DLR. Three blinded readers conducted qualitative image analyses, evaluating the degree of neurovascular conflict, structure depiction, sharpness, noise, and diagnostic acceptability. Quantitative analyses included measuring edge rise distance (ERD), edge rise slope (ERS), and full width at half maximum (FWHM) using the signal intensity profile along a linear region of interest across the center of the basilar artery. RESULTS Interobserver agreement on the degree of neurovascular conflict of the facial nerve was generally higher with SR-DLR (0.429-0.923) than with DLR (0.175-0.689). SR-DLR exhibited increased subjective image noise compared to DLR (p ≥ 0.008). However, all three readers found SR-DLR significantly superior in terms of sharpness (p < 0.001); cranial nerve depiction, particularly of the facial and acoustic nerves, as well as the osseous spiral lamina (p < 0.001); and diagnostic acceptability (p ≤ 0.002). The FWHM (mm)/ERD (mm)/ERS (mm⁻¹) for SR-DLR and DLR was 3.1-4.3/0.9-1.1/8795.5-10,703.5 and 3.3-4.8/1.4-2.1/5157.9-7705.8, respectively, with SR-DLR's image sharpness being significantly superior (p ≤ 0.001). CONCLUSION SR-DLR enhances image sharpness, leading to improved cranial nerve depiction and a tendency toward greater interobserver agreement regarding facial nerve neurovascular conflict.
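A sketch of how FWHM might be measured from a 1-D signal-intensity profile such as the one described across the basilar artery; the interpolation scheme and pixel size are assumptions, and the profile below is synthetic.

```python
import numpy as np

def fwhm(profile: np.ndarray, pixel_mm: float) -> float:
    """Full width at half maximum of a 1-D intensity profile, with
    linear interpolation at the half-maximum crossings. Assumes the
    peak lies in the interior of the profile."""
    prof = profile - profile.min()
    half = prof.max() / 2.0
    above = np.where(prof >= half)[0]
    left, right = above[0], above[-1]
    # Interpolate the fractional crossing position on each side.
    l = left - 1 + (half - prof[left - 1]) / (prof[left] - prof[left - 1])
    r = right + (prof[right] - half) / (prof[right] - prof[right + 1])
    return (r - l) * pixel_mm

# Hypothetical profile across the artery: a Gaussian-like peak
# (sigma = 1.5 units, so the true FWHM is 2.355 * 1.5 ~= 3.53).
x = np.linspace(-5, 5, 101)
profile = np.exp(-x**2 / (2 * 1.5**2))
print(f"FWHM = {fwhm(profile, pixel_mm=0.1):.2f} mm")
```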
Affiliation(s)
- Koichiro Yasaka: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan; Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba 286-0124, Japan
- Jun Kanzawa: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Moto Nakaya: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Ryo Kurokawa: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Taku Tajima: Department of Radiology, International University of Health and Welfare Mita Hospital, 1-4-3 Mita, Minato-ku, Tokyo 108-8329, Japan
- Hiroyuki Akai: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba 286-0124, Japan; Department of Radiology, The Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo 108-8639, Japan
- Naoki Yoshioka: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba 286-0124, Japan
- Masaaki Akahane: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba 286-0124, Japan
- Kuni Ohtomo: International University of Health and Welfare, 2600-1 Kitakanemaru, Ohtawara, Tochigi 324-8501, Japan
- Osamu Abe: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Shigeru Kiryu: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba 286-0124, Japan
14. Yasaka K, Akai H, Kato S, Tajima T, Yoshioka N, Furuta T, Kageyama H, Toda Y, Akahane M, Ohtomo K, Abe O, Kiryu S. Iterative Motion Correction Technique with Deep Learning Reconstruction for Brain MRI: A Volunteer and Patient Study. J Imaging Inform Med 2024; 37:3070-3076. PMID: 38942939; PMCID: PMC11612051; DOI: 10.1007/s10278-024-01184-w.
Abstract
The aim of this study was to investigate the effect of iterative motion correction (IMC) on reducing artifacts in brain magnetic resonance imaging (MRI) with deep learning reconstruction (DLR). The study included 10 volunteers (between September 2023 and December 2023) and 30 patients (between June 2022 and July 2022) for quantitative and qualitative analyses, respectively. Volunteers were instructed to remain still during the first MRI with a fluid-attenuated inversion recovery (FLAIR) sequence and to move during the second scan. IMCoff DLR images were reconstructed from the raw data of the former acquisition; IMCon and IMCoff DLR images were reconstructed from the latter acquisition. After registration of the motion images, the structural similarity index measure (SSIM) was calculated using the motionless images as reference. For the qualitative analyses, IMCon and IMCoff FLAIR DLR images of the patients were reconstructed and evaluated by three blinded readers in terms of motion artifacts, noise, and overall quality. The SSIM for IMCon images was 0.952, higher than that for IMCoff images (0.949) (p < 0.001). In the qualitative analyses, although noise in IMCon images was rated as increased by two of the three readers (both p < 0.001), all readers agreed that motion artifacts and overall quality were significantly better in IMCon images than in IMCoff images (all p < 0.001). In conclusion, IMC reduced motion artifacts in brain FLAIR DLR images while maintaining similarity to motionless images.
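The SSIM computation described here maps directly onto scikit-image; a minimal sketch with synthetic registered images follows (the actual study used reconstructed FLAIR volumes).

```python
import numpy as np
from skimage.metrics import structural_similarity

# Hypothetical registered image pair: a motion-corrected reconstruction
# compared against the motionless reference acquisition.
rng = np.random.default_rng(0)
reference = rng.random((256, 256))
corrected = np.clip(reference + 0.02 * rng.standard_normal((256, 256)), 0, 1)

# data_range must match the intensity scale of the inputs (here [0, 1]).
ssim = structural_similarity(reference, corrected, data_range=1.0)
print(f"SSIM vs. motionless reference: {ssim:.3f}")
```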
Affiliation(s)
- Koichiro Yasaka: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan; Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba 286-0124, Japan
- Hiroyuki Akai: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba 286-0124, Japan; Department of Radiology, The Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo 108-8639, Japan
- Shimpei Kato: Department of Radiology, The Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo 108-8639, Japan
- Taku Tajima: Department of Radiology, International University of Health and Welfare Mita Hospital, 1-4-3 Mita, Minato-ku, Tokyo 108-8329, Japan
- Naoki Yoshioka: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba 286-0124, Japan
- Toshihiro Furuta: Department of Radiology, The Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo 108-8639, Japan
- Hajime Kageyama: Department of Radiology, The Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo 108-8639, Japan
- Yui Toda: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba 286-0124, Japan
- Masaaki Akahane: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba 286-0124, Japan
- Kuni Ohtomo: International University of Health and Welfare, 2600-1 Kitakanemaru, Ohtawara, Tochigi 324-8501, Japan
- Osamu Abe: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Shigeru Kiryu: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba 286-0124, Japan
15. Mao J, Du Y, Xue J, Hu J, Mai Q, Zhou T, Zhou Z. Automated detection and classification of mandibular fractures on multislice spiral computed tomography using modified convolutional neural networks. Oral Surg Oral Med Oral Pathol Oral Radiol 2024; 138:803-812. PMID: 39384413; DOI: 10.1016/j.oooo.2024.07.010.
Abstract
OBJECTIVE To evaluate the performance of convolutional neural networks (CNNs) for the automated detection and classification of mandibular fractures on multislice spiral computed tomography (MSCT). STUDY DESIGN MSCT data from 361 patients with mandibular fractures were retrospectively collected. Two experienced maxillofacial surgeons annotated the images as ground truth. Fractures were detected utilizing the following models: YOLOv3, YOLOv4, Faster R-CNN, CenterNet, and YOLOv5-TRS. Fracture sites were classified by the following models: AlexNet, GoogLeNet, ResNet50, original DenseNet-121, and modified DenseNet-121. The performance was evaluated for accuracy, sensitivity, specificity, and area under the curve (AUC). AUC values were compared using the Z-test and P values <.05 were considered to be statistically significant. RESULTS Of all of the detection models, YOLOv5-TRS obtained the greatest mean accuracy (96.68%). Among all of the fracture subregions, body fractures were the most reliably detected (with accuracies of 88.59%-99.01%). For classification models, the AUCs for body fractures were higher than those of condyle and angle fractures, and they were all above 0.75, with the highest AUC at 0.903. Modified DenseNet-121 had the best overall classification performance with a mean AUC of 0.814. CONCLUSIONS The modified CNN-based models demonstrated high reliability for the diagnosis of mandibular fractures on MSCT.
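The abstract compares AUCs with a Z-test; one common variant (an assumption here, since the exact test is not specified) uses the Hanley-McNeil standard error for AUCs from independent samples, sketched below with illustrative numbers.

```python
import math
from scipy.stats import norm

def hanley_mcneil_se(auc: float, n_pos: int, n_neg: int) -> float:
    """Standard error of an AUC (Hanley & McNeil, 1982)."""
    q1 = auc / (2 - auc)
    q2 = 2 * auc**2 / (1 + auc)
    var = (auc * (1 - auc)
           + (n_pos - 1) * (q1 - auc**2)
           + (n_neg - 1) * (q2 - auc**2)) / (n_pos * n_neg)
    return math.sqrt(var)

# Hypothetical AUCs for two fracture subregions on independent samples.
auc_body, auc_condyle = 0.903, 0.780
se1 = hanley_mcneil_se(auc_body, n_pos=120, n_neg=240)
se2 = hanley_mcneil_se(auc_condyle, n_pos=80, n_neg=240)

z = (auc_body - auc_condyle) / math.sqrt(se1**2 + se2**2)
p = 2 * norm.sf(abs(z))  # two-sided p-value
print(f"Z = {z:.2f}, p = {p:.4f}")
```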
Affiliation(s)
- Jingjing Mao: Ningxia Medical University, Ningxia Key Laboratory of Oral Disease Research, Yinchuan, P.R. China
- Yuhu Du: College of Computer Science and Engineering, North Minzu University, Yinchuan, P.R. China
- Jiawen Xue: Ningxia Medical University, Ningxia Key Laboratory of Oral Disease Research, Yinchuan, P.R. China
- Jingjing Hu: Department of Oral and Maxillofacial Surgery, Guyuan People's Hospital, Guyuan, P.R. China
- Qian Mai: Department of Stomatology, The First People's Hospital of Yinchuan, Yinchuan, P.R. China
- Tao Zhou: College of Computer Science and Engineering, North Minzu University, Yinchuan, P.R. China
- Zhongwei Zhou: Department of Oral and Maxillofacial Surgery, General Hospital of Ningxia Medical University, Yinchuan, P.R. China; Institution of Medical Sciences, General Hospital of Ningxia Medical University, Yinchuan, P.R. China
16. Kuo CH, Liu GT, Lee CE, Wu J, Casimo K, Weaver KE, Lo YC, Chen YY, Huang WC, Ojemann JG. Decoding micro-electrocorticographic signals by using explainable 3D convolutional neural network to predict finger movements. J Neurosci Methods 2024; 411:110251. PMID: 39151656; DOI: 10.1016/j.jneumeth.2024.110251.
Abstract
BACKGROUND Electroencephalography (EEG) and electrocorticography (ECoG) recordings have been used to decode finger movements by analyzing brain activity. Traditional methods focused on single bandpass power changes for movement decoding, utilizing machine learning models requiring manual feature extraction. NEW METHOD This study introduces a 3D convolutional neural network (3D-CNN) model to decode finger movements using ECoG data. The model employs adaptive, explainable AI (xAI) techniques to interpret the physiological relevance of brain signals. ECoG signals from epilepsy patients during awake craniotomy were processed to extract power spectral density across multiple frequency bands. These data formed a 3D matrix used to train the 3D-CNN to predict finger trajectories. RESULTS The 3D-CNN model showed significant accuracy in predicting finger movements, with root-mean-square error (RMSE) values of 0.26-0.38 for single finger movements and 0.20-0.24 for combined movements. Explainable AI techniques, Grad-CAM and SHAP, identified the high gamma (HG) band as crucial for movement prediction, showing specific cortical regions involved in different finger movements. These findings highlighted the physiological significance of the HG band in motor control. COMPARISON WITH EXISTING METHODS The 3D-CNN model outperformed traditional machine learning approaches by effectively capturing spatial and temporal patterns in ECoG data. The use of xAI techniques provided clearer insights into the model's decision-making process, unlike the "black box" nature of standard deep learning models. CONCLUSIONS The proposed 3D-CNN model, combined with xAI methods, enhances the decoding accuracy of finger movements from ECoG data. This approach offers a more efficient and interpretable solution for brain-computer interface (BCI) applications, emphasizing the HG band's role in motor control.
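A brief sketch of the band-wise power spectral density extraction described in this abstract, using scipy's Welch estimator; the sampling rate, band edges, and channel count are assumptions, and the signal is random noise.

```python
import numpy as np
from scipy.signal import welch

FS = 1000  # hypothetical sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30),
         "low gamma": (30, 70), "high gamma": (70, 170)}

# Hypothetical ECoG segment: 32 channels x 1 s of samples.
rng = np.random.default_rng(0)
ecog = rng.standard_normal((32, FS))

freqs, psd = welch(ecog, fs=FS, nperseg=256, axis=-1)

# Average PSD within each band to get a (channels x bands) slab;
# stacking such slabs over time windows yields a 3D input like the
# one described for the 3D-CNN.
features = np.stack([
    psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1)
    for lo, hi in BANDS.values()
], axis=-1)
print(features.shape)  # (32, 5)
```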
Affiliation(s)
- Chao-Hung Kuo: Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan; School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Neurological Surgery, University of Washington, Seattle, WA, USA; The Ph.D. Program for Neural Regenerative Medicine, College of Medical Science and Technology, Taipei Medical University, Taipei, Taiwan
- Guan-Tze Liu: Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan; School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Chi-En Lee: Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Jing Wu: Department of Bioengineering, University of Washington, Seattle, WA, USA; Center for Neurotechnology, University of Washington, Seattle, WA, USA
- Kaitlyn Casimo: Center for Neurotechnology, University of Washington, Seattle, WA, USA
- Kurt E Weaver: Center for Neurotechnology, University of Washington, Seattle, WA, USA; Department of Radiology, and Integrated Brain Imaging Center, University of Washington, Seattle, WA, USA
- Yu-Chun Lo: The Ph.D. Program for Neural Regenerative Medicine, College of Medical Science and Technology, Taipei Medical University, Taipei, Taiwan
- You-Yin Chen: Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan; The Ph.D. Program for Neural Regenerative Medicine, College of Medical Science and Technology, Taipei Medical University, Taipei, Taiwan
- Wen-Cheng Huang: Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan; School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Jeffrey G Ojemann: Department of Neurological Surgery, University of Washington, Seattle, WA, USA; Center for Neurotechnology, University of Washington, Seattle, WA, USA; Departments of Surgery, Seattle Children's Hospital, Seattle, WA, USA
17. Lo Iacono F, Maragna R, Pontone G, Corino VDA. A Novel Data Augmentation Method for Radiomics Analysis Using Image Perturbations. J Imaging Inform Med 2024; 37:2401-2414. PMID: 38710969; PMCID: PMC11522260; DOI: 10.1007/s10278-024-01013-0.
Abstract
Radiomics extracts hundreds of features from medical images to quantitatively characterize a region of interest (ROI). When applying radiomics, imbalanced or small dataset issues are commonly addressed using under- or over-sampling, the latter being applied directly to the extracted features. The aim of this study was to propose a novel balancing and data augmentation technique that applies perturbations (erosion, dilation, contour randomization) to the ROI in cardiac computed tomography images. From the perturbed ROIs, radiomic features are extracted, thus creating additional samples. This approach was tested on the clinical problem of distinguishing cardiac amyloidosis (CA) from aortic stenosis (AS) and hypertrophic cardiomyopathy (HCM). Twenty-one CA, thirty-two AS, and twenty-one HCM patients were included in the study. From each original and perturbed ROI, 107 radiomic features were extracted. The CA-AS dataset was balanced using the perturbation-based method along with random over-sampling, adaptive synthetic sampling (ADASYN), and the synthetic minority oversampling technique (SMOTE). The same methods were tested to perform data augmentation for CA and HCM. Features were submitted to robustness, redundancy, and relevance analysis by testing five feature selection methods (p-value, least absolute shrinkage and selection operator (LASSO), semi-supervised LASSO, principal component analysis (PCA), semi-supervised PCA). A support vector machine performed the classification tasks, and its performance was evaluated by means of 10-fold cross-validation. The perturbation-based approach provided the best performance in terms of F1 score and balanced accuracy in both the CA-AS (F1 score: 80%, AUC: 0.91) and CA-HCM (F1 score: 86%, AUC: 0.92) classifications. These results suggest that ROI perturbations represent a powerful approach to address both data balancing and augmentation issues.
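A compact sketch of the mask-perturbation idea (erosion, dilation, contour randomization) using scipy.ndimage; the perturbation magnitudes and the contour-flip rule are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

def perturb_roi(mask: np.ndarray, mode: str) -> np.ndarray:
    """Return a perturbed copy of a binary ROI mask. Each perturbed
    mask yields a new radiomic feature vector, augmenting the dataset."""
    if mode == "erosion":
        return ndimage.binary_erosion(mask, iterations=1)
    if mode == "dilation":
        return ndimage.binary_dilation(mask, iterations=1)
    if mode == "contour":
        # Randomly flip a fraction of boundary pixels.
        boundary = mask ^ ndimage.binary_erosion(mask)
        flips = boundary & (rng.random(mask.shape) < 0.5)
        return mask ^ flips
    raise ValueError(mode)

# Hypothetical circular ROI on a 64x64 CT slice.
yy, xx = np.mgrid[:64, :64]
roi = (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2
augmented = [perturb_roi(roi, m) for m in ("erosion", "dilation", "contour")]
print([int(m.sum()) for m in augmented])  # perturbed ROI areas
```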
Affiliation(s)
- F Lo Iacono: Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- R Maragna: Department of Perioperative Cardiology and Cardiovascular Imaging, Centro Cardiologico Monzino IRCCS, Milan, Italy
- G Pontone: Department of Perioperative Cardiology and Cardiovascular Imaging, Centro Cardiologico Monzino IRCCS, Milan, Italy; Department of Biomedical, Surgical and Dental Sciences, University of Milan, Milan, Italy
- V D A Corino: Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy; Department of Perioperative Cardiology and Cardiovascular Imaging, Centro Cardiologico Monzino IRCCS, Milan, Italy
18. Yasaka K, Uehara S, Kato S, Watanabe Y, Tajima T, Akai H, Yoshioka N, Akahane M, Ohtomo K, Abe O, Kiryu S. Super-resolution Deep Learning Reconstruction Cervical Spine 1.5T MRI: Improved Interobserver Agreement in Evaluations of Neuroforaminal Stenosis Compared to Conventional Deep Learning Reconstruction. J Imaging Inform Med 2024; 37:2466-2473. PMID: 38671337; PMCID: PMC11522216; DOI: 10.1007/s10278-024-01112-y.
Abstract
The aim of this study was to investigate whether super-resolution deep learning reconstruction (SR-DLR) is superior to conventional deep learning reconstruction (DLR) with respect to interobserver agreement in the evaluation of neuroforaminal stenosis on 1.5T cervical spine MRI. This retrospective study included 39 patients who underwent 1.5T cervical spine MRI. T2-weighted sagittal images were reconstructed with SR-DLR and DLR. Three blinded radiologists independently evaluated the images in terms of the degree of neuroforaminal stenosis; depiction of the vertebrae, spinal cord, and neural foramina; sharpness; noise; artifacts; and diagnostic acceptability. In the quantitative image analyses, a fourth radiologist evaluated the signal-to-noise ratio (SNR) by placing a circular or ovoid region of interest on the spinal cord, and the edge slope based on a linear region of interest placed across the surface of the spinal cord. Interobserver agreement in the evaluation of neuroforaminal stenosis was 0.422-0.571 with SR-DLR and 0.410-0.542 with DLR; the kappa values for reader 1 vs. reader 2 and for reader 2 vs. reader 3 differed significantly between the two reconstructions. Two of the three readers rated depiction of the spinal cord, sharpness, and diagnostic acceptability as significantly better with SR-DLR than with DLR. Both SNR and edge slope (/mm) were also significantly better with SR-DLR (12.9 and 6031, respectively) than with DLR (11.5 and 3741, respectively) (p < 0.001 for both). In conclusion, compared to DLR, SR-DLR improved interobserver agreement in the evaluation of neuroforaminal stenosis on 1.5T cervical spine MRI.
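A minimal sketch of the study's two quantitative ingredients, with fabricated grades and pixel values: Cohen's kappa for interobserver agreement (the linear weighting is an assumption, since the abstract does not state the scheme) and ROI-based SNR as mean over standard deviation.

```python
# Hedged sketch: interobserver agreement via Cohen's kappa on ordinal
# stenosis grades, and SNR from a region of interest on the spinal cord.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Stenosis grades (e.g., 0-3) assigned by two readers to the same foramina.
reader1 = [0, 1, 2, 2, 3, 1, 0, 2]
reader2 = [0, 1, 1, 2, 3, 2, 0, 2]
kappa = cohen_kappa_score(reader1, reader2, weights="linear")  # weighting assumed
print(f"weighted kappa: {kappa:.3f}")

# SNR: mean signal divided by the standard deviation within the same ROI.
roi = np.random.default_rng(0).normal(500, 40, size=(15, 15))
print(f"SNR: {roi.mean() / roi.std():.1f}")
```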
Affiliation(s)
- Koichiro Yasaka
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
| | - Shunichi Uehara
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Shimpei Kato
- Department of Radiology, The Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo, 108-8639, Japan
| | - Yusuke Watanabe
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Taku Tajima
- Department of Radiology, International University of Health and Welfare Mita Hospital, 1-4-3 Mita, Minato-ku, Tokyo, 108-8329, Japan
| | - Hiroyuki Akai
- Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
- Department of Radiology, The Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo, 108-8639, Japan
| | - Naoki Yoshioka
- Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
| | - Masaaki Akahane
- Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
| | - Kuni Ohtomo
- International University of Health and Welfare, 2600-1 Kitakanemaru, Ohtawara, Tochigi, 324-8501, Japan
| | - Osamu Abe
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Shigeru Kiryu
- Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan.
| |
|
19
|
Yamada A, Hanaoka S, Takenaga T, Miki S, Yoshikawa T, Nomura Y. Investigation of distributed learning for automated lesion detection in head MR images. Radiol Phys Technol 2024; 17:725-738. [PMID: 39048847 PMCID: PMC11341643 DOI: 10.1007/s12194-024-00827-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2024] [Revised: 06/11/2024] [Accepted: 07/14/2024] [Indexed: 07/27/2024]
Abstract
In this study, we investigated the application of distributed learning, including federated learning and cyclical weight transfer, in the development of computer-aided detection (CADe) software for (1) cerebral aneurysm detection in magnetic resonance (MR) angiography images and (2) brain metastasis detection in brain contrast-enhanced MR images. We used datasets collected from various institutions, scanner vendors, and magnetic field strengths for each target CADe software. We compared the performance of multiple strategies, including a centralized strategy, in which software development is conducted at a development institution after collecting de-identified data from multiple institutions. Our results showed that the performance of CADe software trained through distributed learning was equal to or better than that trained through the centralized strategy. However, the distributed learning strategy that achieved the highest performance depended on the target CADe software. Hence, distributed learning can become one of the strategies for CADe software development using data collected from multiple institutions.
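The two distributed strategies named here can be sketched in a few lines of PyTorch. The fedavg and cyclical_weight_transfer helpers below are illustrative stand-ins, not the study's implementation; the institution loaders and local training routine are placeholders.

```python
# Hedged sketch of federated averaging and cyclical weight transfer over
# per-institution models, expressed on PyTorch state_dicts.
import copy
import torch

def fedavg(state_dicts):
    """Federated averaging: element-wise mean of institution model weights."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(0)
    return avg

def cyclical_weight_transfer(model, institution_loaders, cycles, train_one):
    """Pass one model through each institution in turn for several cycles."""
    for _ in range(cycles):
        for loader in institution_loaders:
            train_one(model, loader)  # local training at one institution
    return model

# Smoke test of the averaging step with three toy "institution" models.
local_models = [torch.nn.Linear(4, 2) for _ in range(3)]
global_weights = fedavg([m.state_dict() for m in local_models])
print(sorted(global_weights))  # ['bias', 'weight']
```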
Affiliation(s)
- Aiki Yamada
- Department of Medical Engineering, Graduate School of Science and Engineering, Chiba University, 1-33 Yayoi-Cho, Inage-Ku, Chiba, 263-8522, Japan.
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan.
| | - Shouhei Hanaoka
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
| | - Tomomi Takenaga
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
| | - Soichiro Miki
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
| | - Takeharu Yoshikawa
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
| | - Yukihiro Nomura
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Center for Frontier Medical Engineering, Chiba University, 1-33 Yayoi-Cho, Inage-Ku, Chiba, 263-8522, Japan
| |
|
20
|
Okimoto N, Yasaka K, Cho S, Koshino S, Kanzawa J, Asari Y, Fujita N, Kubo T, Suzuki Y, Abe O. New liver window width in detecting hepatocellular carcinoma on dynamic contrast-enhanced computed tomography with deep learning reconstruction. Radiol Phys Technol 2024; 17:658-665. [PMID: 38837119 PMCID: PMC11341740 DOI: 10.1007/s12194-024-00817-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2024] [Revised: 05/12/2024] [Accepted: 05/30/2024] [Indexed: 06/06/2024]
Abstract
Changing the window width (WW) alters the appearance of noise and contrast in CT images. The aim of this study was to investigate the impact of an adjusted WW on the detection of hepatocellular carcinomas (HCCs) on CT with deep learning reconstruction (DLR). This retrospective study included thirty-five patients who underwent abdominal dynamic contrast-enhanced CT. DLR was used to reconstruct arterial, portal, and delayed phase images. Two blinded readers investigated the optimal WW. Five other blinded readers then independently read the image sets for detection of HCCs and evaluation of image quality with the optimal or conventional liver WW. The optimal WW for detection of HCC was 119 HU (rounded to 120 HU in the subsequent analyses), the average of the adjusted WWs in the arterial, portal, and delayed phases. The readers' average figures of merit in the jackknife alternative free-response receiver operating characteristic analysis for detecting HCC were 0.809 (readers 1/2/3/4/5, 0.765/0.798/0.892/0.764/0.827) with the optimal WW (120 HU) and 0.765 (readers 1/2/3/4/5, 0.707/0.769/0.838/0.720/0.791) with the conventional WW (150 HU), a statistically significant difference (p < 0.001). Image quality with the optimal WW was superior to that with the conventional WW, with significant differences for some readers (p < 0.041). The optimal WW for detection of HCC was narrower than the conventional WW on dynamic contrast-enhanced CT with DLR. Compared with the conventional liver WW, the optimal liver WW significantly improved the detection performance for HCC.
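For readers unfamiliar with windowing, the sketch below shows how a WW/WL pair maps CT attenuation to display gray levels. The window level of 50 HU is an assumption for illustration only; the abstract reports the widths (120 vs. 150 HU) but not the level.

```python
# Hedged sketch of CT display windowing: HU values outside the window are
# clipped, and values inside are scaled linearly to 8-bit gray levels.
import numpy as np

def apply_window(hu, width, level):
    """Clip HU to [level - width/2, level + width/2], then scale to 0-255."""
    lo, hi = level - width / 2, level + width / 2
    return np.clip((hu - lo) / (hi - lo), 0, 1) * 255

hu = np.array([-100.0, 30.0, 60.0, 90.0, 200.0])
print(apply_window(hu, width=150, level=50))  # conventional liver window
print(apply_window(hu, width=120, level=50))  # narrower "optimal" window
```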
Affiliation(s)
- Naomasa Okimoto
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Koichiro Yasaka
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan.
| | - Shinichi Cho
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Saori Koshino
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Jun Kanzawa
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Yusuke Asari
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Nana Fujita
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Takatoshi Kubo
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Yuichi Suzuki
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Osamu Abe
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| |
|
21
|
Yasaka K, Abe O. Impact of rapid iodine contrast agent infusion on tracheal diameter and lung volume in CT pulmonary angiography measured with deep learning-based algorithm. Jpn J Radiol 2024; 42:1003-1011. [PMID: 38733470 PMCID: PMC11364558 DOI: 10.1007/s11604-024-01591-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2024] [Accepted: 05/04/2024] [Indexed: 05/13/2024]
Abstract
PURPOSE To compare computed tomography (CT) pulmonary angiography and unenhanced CT to determine the effect of rapid iodine contrast agent infusion on tracheal diameter and lung volume. MATERIAL AND METHODS This retrospective study included 101 patients who underwent CT pulmonary angiography and unenhanced CT within 365 days of each other. CT pulmonary angiography was scanned 20 s after the start of the contrast agent injection, at the end-inspiratory level. Commercial software developed on a deep learning technique was used to segment the lungs, and their volume was evaluated automatically. The tracheal diameter at the thoracic inlet level was also measured. The ratios of CT pulmonary angiography to unenhanced CT were then calculated for the tracheal diameter (TD_PAU) and both lung volumes (BLV_PAU). RESULTS Tracheal diameter and both lung volumes were significantly smaller on CT pulmonary angiography (17.2 ± 2.6 mm and 3668 ± 1068 ml, respectively) than on unenhanced CT (17.7 ± 2.5 mm and 3887 ± 1086 ml, respectively) (p < 0.001 for both). A statistically significant correlation was found between TD_PAU and BLV_PAU, with a correlation coefficient of 0.451 (95% confidence interval, 0.280-0.594) (p < 0.001). No factor showed a significant association with TD_PAU. The type of contrast agent showed a significant association with BLV_PAU (p = 0.042). CONCLUSIONS Rapid infusion of iodine contrast agent reduced the tracheal diameter and both lung volumes on CT pulmonary angiography, scanned at the end-inspiratory level, compared with unenhanced CT.
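A minimal sketch of the ratio and correlation analysis with fabricated measurements; scipy's pearsonr supplies the 95% confidence interval directly (SciPy >= 1.9).

```python
# Hedged sketch: per-patient CTPA/unenhanced ratios of tracheal diameter
# (TD_PAU) and both-lung volume (BLV_PAU), then a Pearson correlation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)  # fabricated measurements for illustration
td_ctpa, td_unenh = rng.normal(17.2, 2.6, 101), rng.normal(17.7, 2.5, 101)
blv_ctpa, blv_unenh = rng.normal(3668, 1068, 101), rng.normal(3887, 1086, 101)

td_pau = td_ctpa / td_unenh      # tracheal-diameter ratio per patient
blv_pau = blv_ctpa / blv_unenh   # both-lung-volume ratio per patient

res = stats.pearsonr(td_pau, blv_pau)
ci = res.confidence_interval()
print(f"r = {res.statistic:.3f}, 95% CI [{ci.low:.3f}, {ci.high:.3f}]")
```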
Affiliation(s)
- Koichiro Yasaka
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan.
| | - Osamu Abe
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| |
|
22
|
Kavak N, Kavak RP, Güngörer B, Turhan B, Kaymak SD, Duman E, Çelik S. Detecting pediatric appendicular fractures using artificial intelligence. Rev Assoc Med Bras (1992) 2024; 70:e20240523. [PMID: 39230068 PMCID: PMC11371126 DOI: 10.1590/1806-9282.20240523] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/02/2024] [Accepted: 06/05/2024] [Indexed: 09/05/2024]
Abstract
OBJECTIVE The primary objective was to assess the diagnostic accuracy of a deep learning-based artificial intelligence model for the detection of acute appendicular fractures in pediatric patients presenting to the emergency department with a recent history of trauma. The secondary goal was to examine the effect of assistive support on the emergency doctor's ability to detect fractures. METHODS The dataset comprised 5,150 radiographs, of which 850 showed fractures and 4,300 did not. The training phase used 4,532 (88%) radiographs, including both fractured and non-fractured cases; 412 (8%) radiographs were appraised during validation, and 206 (4%) were set apart for the testing phase. In a second test, the emergency doctor reviewed another set of 2,000 radiographs (400 with fractures and 600 without, for each reading) for labeling, with and without artificial intelligence assistance. RESULTS The artificial intelligence model showed a mean average precision (mAP@50) of 89%, a specificity of 92%, a sensitivity of 90%, and an F1 score of 90%. The confusion matrix analysis showed that the model achieved accuracies of 93% and 95% in detecting fractures. Artificial intelligence assistance improved the reading sensitivity from 93.7% (without assistance) to 97.0% (with assistance) and the reading accuracy from 88% (without assistance) to 94.9% (with assistance). CONCLUSION A deep learning-based artificial intelligence model proved highly effective in detecting fractures in pediatric patients, enhancing the diagnostic capabilities of emergency doctors through assistive support.
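The reported reader metrics all derive from a 2x2 confusion matrix; the sketch below reconstructs them from fabricated counts chosen only to illustrate the arithmetic.

```python
# Hedged sketch: sensitivity, specificity, accuracy, and F1 from fabricated
# confusion-matrix counts (not the study's actual counts).
def reader_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)                 # recall on fracture cases
    specificity = tn / (tn + fp)                 # recall on normal cases
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, accuracy, f1

sens, spec, acc, f1 = reader_metrics(tp=388, fp=48, fn=12, tn=552)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}, "
      f"accuracy {acc:.1%}, F1 {f1:.1%}")
```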
Affiliation(s)
- Nezih Kavak
- Etlik City Hospital, Department of Emergency – Ankara, Turkey
| | | | - Bülent Güngörer
- Etlik City Hospital, Department of Emergency – Ankara, Turkey
| | - Berna Turhan
- Etlik City Hospital, Department of Radiology – Ankara, Turkey
| | | | - Evrim Duman
- Etlik City Hospital, Department of Emergency – Ankara, Turkey
- Etlik City Hospital, Department of Orthopedics and Traumatology – Ankara, Turkey
| | - Serdar Çelik
- Ostim Technical University, Department of Management Information Systems – Ankara, Turkey
| |
|
23
|
Kutbi M. Artificial Intelligence-Based Applications for Bone Fracture Detection Using Medical Images: A Systematic Review. Diagnostics (Basel) 2024; 14:1879. [PMID: 39272664 PMCID: PMC11394268 DOI: 10.3390/diagnostics14171879] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2024] [Revised: 08/19/2024] [Accepted: 08/26/2024] [Indexed: 09/15/2024] Open
Abstract
Artificial intelligence (AI) is making notable advancements in the medical field, particularly in bone fracture detection. This systematic review compiles and assesses existing research on AI applications aimed at identifying bone fractures through medical imaging, encompassing studies from 2010 to 2023. It evaluates the performance of various AI models, such as convolutional neural networks (CNNs), in diagnosing bone fractures, highlighting their superior accuracy, sensitivity, and specificity compared to traditional diagnostic methods. Furthermore, the review explores the integration of advanced imaging techniques like 3D CT and MRI with AI algorithms, which has led to enhanced diagnostic accuracy and improved patient outcomes. The potential of Generative AI and Large Language Models (LLMs), such as OpenAI's GPT, to enhance diagnostic processes through synthetic data generation, comprehensive report creation, and clinical scenario simulation is also discussed. The review underscores the transformative impact of AI on diagnostic workflows and patient care, while also identifying research gaps and suggesting future research directions to enhance data quality, model robustness, and ethical considerations.
Affiliation(s)
- Mohammed Kutbi
- College of Computing and Informatics, Saudi Electronic University, Riyadh 13316, Saudi Arabia
| |
|
24
|
Nomura Y, Hanaoka S, Hayashi N, Yoshikawa T, Koshino S, Sato C, Tatsuta M, Tanaka Y, Kano S, Nakaya M, Inui S, Kusakabe M, Nakao T, Miki S, Watadani T, Nakaoka R, Shimizu A, Abe O. Performance changes due to differences among annotating radiologists for training data in computerized lesion detection. Int J Comput Assist Radiol Surg 2024; 19:1527-1536. [PMID: 38625446 PMCID: PMC11329536 DOI: 10.1007/s11548-024-03136-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2024] [Accepted: 03/28/2024] [Indexed: 04/17/2024]
Abstract
PURPOSE The quality and bias of annotations by annotators (e.g., radiologists) affect the performance of computer-aided detection (CAD) software based on machine learning. We hypothesized that differences in the years of experience in image interpretation among radiologists contribute to annotation variability. In this study, we focused on how the performance of CAD software changes when it is retrained with cases annotated by radiologists of varying experience. METHODS We used two types of CAD software: lung nodule detection in chest computed tomography images and cerebral aneurysm detection in magnetic resonance angiography images. Twelve radiologists with different years of experience independently annotated the lesions, and the performance changes were investigated by retraining the CAD software twice, each time adding cases annotated by each radiologist. Additionally, we investigated the effects of retraining using integrated annotations from multiple radiologists. RESULTS The performance of the CAD software after retraining differed among annotating radiologists. In some cases, performance was degraded compared with that of the initial software. Retraining with integrated annotations showed different performance trends depending on the target CAD software; notably, in cerebral aneurysm detection, performance decreased compared with using annotations from a single radiologist. CONCLUSIONS Although the performance of the CAD software after retraining varied among the annotating radiologists, no direct correlation with their experience was found. The performance trends differed according to the type of CAD software when integrated annotations from multiple radiologists were used.
Affiliation(s)
- Yukihiro Nomura
- Center for Frontier Medical Engineering, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba, 263-8522, Japan.
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, Tokyo, Japan.
| | - Shouhei Hanaoka
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Division of Radiology and Biomedical Engineering, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
| | - Naoto Hayashi
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, Tokyo, Japan
| | - Takeharu Yoshikawa
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, Tokyo, Japan
| | - Saori Koshino
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
| | - Chiaki Sato
- Department of Radiology, Tokyo Metropolitan Bokutoh Hospital, Tokyo, Japan
| | - Momoko Tatsuta
- Department of Diagnostic Radiology, Kitasato University Hospital, Sagamihara, Kanagawa, Japan
| | - Yuya Tanaka
- Division of Radiology and Biomedical Engineering, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
| | - Shintaro Kano
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
| | - Moto Nakaya
- Division of Radiology and Biomedical Engineering, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
| | - Shohei Inui
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
| | | | - Takahiro Nakao
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, Tokyo, Japan
| | - Soichiro Miki
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, Tokyo, Japan
| | - Takeyuki Watadani
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Division of Radiology and Biomedical Engineering, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
| | - Ryusuke Nakaoka
- Division of Medical Devices, National Institute of Health Sciences, Kawasaki, Kanagawa, Japan
| | - Akinobu Shimizu
- Institute of Engineering, Tokyo University of Agriculture and Technology, Tokyo, Japan
| | - Osamu Abe
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Division of Radiology and Biomedical Engineering, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
| |
|
25
|
Yasaka K, Saigusa H, Abe O. Effects of Intravenous Infusion of Iodine Contrast Media on the Tracheal Diameter and Lung Volume Measured with Deep Learning-Based Algorithm. J Imaging Inform Med 2024; 37:1609-1617. [PMID: 38448759 PMCID: PMC11300755 DOI: 10.1007/s10278-024-01071-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/18/2023] [Revised: 02/06/2024] [Accepted: 02/22/2024] [Indexed: 03/08/2024]
Abstract
This study aimed to investigate the effects of intravenous injection of iodine contrast agent on tracheal diameter and lung volume. This retrospective study included a total of 221 patients (71.1 ± 12.4 years; 174 males) who underwent a vascular dynamic CT examination covering the chest. Unenhanced, arterial phase, and delayed-phase images were scanned. The tracheal luminal diameter at the level of the thoracic inlet and the volumes of both lungs were evaluated by a radiologist using commercial software that allows automatic airway and lung segmentation. The tracheal diameter and lung volumes were compared between the unenhanced and the arterial and delayed phases using paired t-tests, with Bonferroni correction for multiple group comparisons. The tracheal diameter in the arterial phase (18.6 ± 2.4 mm) was significantly smaller than on unenhanced CT (19.1 ± 2.5 mm) (p < 0.001), whereas no statistically significant difference was found between the delayed phase (19.0 ± 2.4 mm) and unenhanced CT (p = 0.077). The lung volume in the arterial phase (4131 ± 1051 mL) was significantly smaller than on unenhanced CT (4332 ± 1076 mL) (p < 0.001), with no statistically significant difference between the delayed phase (4284 ± 1054 mL) and unenhanced CT (p = 0.068). In conclusion, intravenous infusion of iodine contrast agent transiently decreased the tracheal diameter and the volume of both lungs.
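A minimal sketch of the statistical comparison with fabricated diameters: paired t-tests of each contrast phase against unenhanced CT, Bonferroni-corrected for the two comparisons.

```python
# Hedged sketch of paired t-tests with Bonferroni correction; the per-patient
# measurements are fabricated stand-ins for the reported values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
unenh = rng.normal(19.1, 2.5, 221)            # tracheal diameter, mm
arterial = unenh - rng.normal(0.5, 0.3, 221)  # transient narrowing
delayed = unenh - rng.normal(0.1, 0.3, 221)

n_comparisons = 2
for name, phase in [("arterial", arterial), ("delayed", delayed)]:
    t, p = stats.ttest_rel(unenh, phase)
    p_adj = min(p * n_comparisons, 1.0)       # Bonferroni correction
    print(f"{name}: t={t:.2f}, adjusted p={p_adj:.4f}")
```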
Affiliation(s)
- Koichiro Yasaka
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan.
| | - Hiroyuki Saigusa
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Osamu Abe
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| |
|
26
|
Hamada A, Yasaka K, Hatano S, Kurokawa M, Inui S, Kubo T, Watanabe Y, Abe O. Deep-Learning Reconstruction of High-Resolution CT Improves Interobserver Agreement for the Evaluation of Pulmonary Fibrosis. Can Assoc Radiol J 2024; 75:542-548. [PMID: 38293802 DOI: 10.1177/08465371241228468] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/01/2024] Open
Abstract
Objective: This study aimed to investigate whether deep-learning reconstruction (DLR) improves interobserver agreement in the evaluation of honeycombing on high-resolution computed tomography (CT) in patients with interstitial lung disease (ILD), compared with hybrid iterative reconstruction (HIR). Methods: In this retrospective study, 35 consecutive patients suspected of ILD who underwent CT including the chest region were included. High-resolution CT images of the unilateral lung with DLR and HIR were reconstructed for the right and left lungs. A radiologist placed regions of interest on the lung and measured the standard deviation of CT attenuation (i.e., quantitative image noise). In the qualitative image analyses, 5 blinded readers assessed the presence of honeycombing and reticulation, qualitative image noise, artifacts, and overall image quality using a 5-point scale (except for artifacts, which were evaluated on a 3-point scale). Results: Quantitative and qualitative image noise was markedly reduced with DLR compared to HIR (P < .001). Artifacts and overall image quality with DLR were significantly improved compared to HIR (P < .001 for 4 of 5 readers). Interobserver agreement in the evaluations of honeycombing and reticulation was higher for DLR (0.557 [0.450-0.693] and 0.525 [0.470-0.541], respectively) than for HIR (0.321 [0.211-0.520] and 0.470 [0.354-0.533], respectively); the difference was statistically significant for honeycombing (P = .014). Conclusions: DLR improved interobserver agreement in the evaluation of honeycombing on CT in patients with ILD compared to HIR.
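For the paired reader scores, a nonparametric comparison such as the Wilcoxon signed-rank test is the usual choice; a small sketch with fabricated 5-point ratings from one reader follows.

```python
# Hedged sketch: Wilcoxon signed-rank test on matched DLR and HIR ratings
# from one reader. The 5-point scores below are fabricated.
from scipy import stats

dlr_scores = [4, 5, 4, 4, 5, 3, 4, 5, 4, 4]
hir_scores = [3, 4, 4, 2, 4, 3, 3, 4, 4, 1]
# Normal approximation chosen because tied differences preclude an exact test.
stat, p = stats.wilcoxon(dlr_scores, hir_scores, method="approx")
print(f"W={stat}, p={p:.4f}")
```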
Affiliation(s)
- Akiyoshi Hamada
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
| | - Koichiro Yasaka
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
| | - Sosuke Hatano
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
| | - Mariko Kurokawa
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
| | - Shohei Inui
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
| | - Takatoshi Kubo
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
| | - Yusuke Watanabe
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
| | - Osamu Abe
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
| |
|
27
|
Liu J, Du H, Huang L, Xie W, Liu K, Zhang X, Chen S, Zhang Y, Li D, Pan H. AI-Powered Microfluidics: Shaping the Future of Phenotypic Drug Discovery. ACS Appl Mater Interfaces 2024; 16:38832-38851. [PMID: 39016521 DOI: 10.1021/acsami.4c07665] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/18/2024]
Abstract
Phenotypic drug discovery (PDD), which involves harnessing biological systems directly to uncover effective drugs, has undergone a resurgence in recent years. The rapid advancement of artificial intelligence (AI) over the past few years presents numerous opportunities for augmenting phenotypic drug screening on microfluidic platforms, leveraging its predictive capabilities, data analysis, and efficient data processing. Microfluidics coupled with AI is poised to revolutionize the landscape of phenotypic drug discovery. By integrating advanced microfluidic platforms with AI algorithms, researchers can rapidly screen large libraries of compounds, identify novel drug candidates, and elucidate complex biological pathways with unprecedented speed and efficiency. This review provides an overview of recent advances and challenges in AI-based microfluidics and their applications in drug discovery. We discuss the synergistic combination of microfluidic systems for high-throughput screening and AI-driven analysis for phenotype characterization, drug-target interactions, and predictive modeling. In addition, we highlight the potential of AI-powered microfluidics to achieve an automated drug screening system. Overall, AI-powered microfluidics represents a promising approach to shaping the future of phenotypic drug discovery by enabling rapid, cost-effective, and accurate identification of therapeutically relevant compounds.
Affiliation(s)
- Junchi Liu
- Department of Anesthesiology, The First Hospital of Jilin University, 71 Xinmin Street, Changchun 130012, China
| | - Hanze Du
- Department of Endocrinology, Key Laboratory of Endocrinology of National Health Commission, Translation Medicine Centre, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing 100730, China
- Key Laboratory of Endocrinology of National Health Commission, Department of Endocrinology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing 100730, China
| | - Lei Huang
- Jilin Provincial Key Laboratory of Tooth Development and Bone Remodeling, School and Hospital of Stomatology, Jilin University, 1500 Qinghua Road, Changchun 130012, China
| | - Wangni Xie
- Jilin Provincial Key Laboratory of Tooth Development and Bone Remodeling, School and Hospital of Stomatology, Jilin University, 1500 Qinghua Road, Changchun 130012, China
| | - Kexuan Liu
- Jilin Provincial Key Laboratory of Tooth Development and Bone Remodeling, School and Hospital of Stomatology, Jilin University, 1500 Qinghua Road, Changchun 130012, China
| | - Xue Zhang
- Jilin Provincial Key Laboratory of Tooth Development and Bone Remodeling, School and Hospital of Stomatology, Jilin University, 1500 Qinghua Road, Changchun 130012, China
| | - Shi Chen
- Department of Endocrinology, Key Laboratory of Endocrinology of National Health Commission, Translation Medicine Centre, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing 100730, China
- Key Laboratory of Endocrinology of National Health Commission, Department of Endocrinology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing 100730, China
| | - Yuan Zhang
- Department of Anesthesiology, The First Hospital of Jilin University, 71 Xinmin Street, Changchun 130012, China
| | - Daowei Li
- Jilin Provincial Key Laboratory of Tooth Development and Bone Remodeling, School and Hospital of Stomatology, Jilin University, 1500 Qinghua Road, Changchun 130012, China
| | - Hui Pan
- Department of Endocrinology, Key Laboratory of Endocrinology of National Health Commission, Translation Medicine Centre, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing 100730, China
- Key Laboratory of Endocrinology of National Health Commission, Department of Endocrinology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing 100730, China
| |
|
28
|
Wang R, Huang S, Wang P, Shi X, Li S, Ye Y, Zhang W, Shi L, Zhou X, Tang X. Bibliometric analysis of the application of deep learning in cancer from 2015 to 2023. Cancer Imaging 2024; 24:85. [PMID: 38965599 PMCID: PMC11223420 DOI: 10.1186/s40644-024-00737-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2024] [Accepted: 06/27/2024] [Indexed: 07/06/2024] Open
Abstract
BACKGROUND Recently, the application of deep learning (DL) has made great progress in various fields, especially in cancer research. However, to date, bibliometric analysis of the application of DL in cancer is scarce. Therefore, this study aimed to explore the research status and hotspots of the application of DL in cancer. METHODS We retrieved all articles on the application of DL in cancer from the Web of Science Core Collection database. Biblioshiny, VOSviewer, and CiteSpace were used to perform the bibliometric analysis, examining publication numbers, citations, countries, institutions, authors, journals, references, and keywords. RESULTS We found 6,016 original articles on the application of DL in cancer. The numbers of annual publications and total citations showed a general upward trend. China published the greatest number of articles, the USA had the highest total citations, and Saudi Arabia had the highest centrality. The Chinese Academy of Sciences was the most productive institution. Tian Jie published the greatest number of articles, while He Kaiming was the most co-cited author. IEEE Access was the most popular journal. The analysis of references and keywords showed that DL was mainly used for the prediction, detection, classification, and diagnosis of breast cancer, lung cancer, and skin cancer. CONCLUSIONS Overall, the number of articles on the application of DL in cancer is gradually increasing. In the future, further expanding and improving the application scope and accuracy of DL, and integrating DL with protein prediction, genomics, and cancer research, may be the research trends.
Affiliation(s)
- Ruiyu Wang
- Department of Gastroenterology, The Affiliated Hospital of Southwest Medical University, Street Taiping No.25, Region Jiangyang, Luzhou, Sichuan Province, 646099, China
- Nuclear Medicine and Molecular Imaging Key Laboratory of Sichuan Province, Luzhou, China
| | - Shu Huang
- Department of Gastroenterology, Lianshui County People's Hospital, Huaian, China
- Department of Gastroenterology, Lianshui People's Hospital of Kangda College Affiliated to Nanjing Medical University, Huaian, China
| | - Ping Wang
- Department of Gastroenterology, The Affiliated Hospital of Southwest Medical University, Street Taiping No.25, Region Jiangyang, Luzhou, Sichuan Province, 646099, China
- Nuclear Medicine and Molecular Imaging Key Laboratory of Sichuan Province, Luzhou, China
| | - Xiaomin Shi
- Department of Gastroenterology, The Affiliated Hospital of Southwest Medical University, Street Taiping No.25, Region Jiangyang, Luzhou, Sichuan Province, 646099, China
- Nuclear Medicine and Molecular Imaging Key Laboratory of Sichuan Province, Luzhou, China
| | - Shiqi Li
- Department of Gastroenterology, The Affiliated Hospital of Southwest Medical University, Street Taiping No.25, Region Jiangyang, Luzhou, Sichuan Province, 646099, China
- Nuclear Medicine and Molecular Imaging Key Laboratory of Sichuan Province, Luzhou, China
| | - Yusong Ye
- Department of Gastroenterology, The Affiliated Hospital of Southwest Medical University, Street Taiping No.25, Region Jiangyang, Luzhou, Sichuan Province, 646099, China
- Nuclear Medicine and Molecular Imaging Key Laboratory of Sichuan Province, Luzhou, China
| | - Wei Zhang
- Department of Gastroenterology, The Affiliated Hospital of Southwest Medical University, Street Taiping No.25, Region Jiangyang, Luzhou, Sichuan Province, 646099, China
- Nuclear Medicine and Molecular Imaging Key Laboratory of Sichuan Province, Luzhou, China
| | - Lei Shi
- Department of Gastroenterology, The Affiliated Hospital of Southwest Medical University, Street Taiping No.25, Region Jiangyang, Luzhou, Sichuan Province, 646099, China
- Nuclear Medicine and Molecular Imaging Key Laboratory of Sichuan Province, Luzhou, China
| | - Xian Zhou
- Department of Gastroenterology, The Affiliated Hospital of Southwest Medical University, Street Taiping No.25, Region Jiangyang, Luzhou, Sichuan Province, 646099, China.
- Nuclear Medicine and Molecular Imaging Key Laboratory of Sichuan Province, Luzhou, China.
| | - Xiaowei Tang
- Department of Gastroenterology, The Affiliated Hospital of Southwest Medical University, Street Taiping No.25, Region Jiangyang, Luzhou, Sichuan Province, 646099, China.
- Nuclear Medicine and Molecular Imaging Key Laboratory of Sichuan Province, Luzhou, China.
| |
|
29
|
Fujita N, Yasaka K, Hatano S, Sakamoto N, Kurokawa R, Abe O. Deep learning reconstruction for high-resolution computed tomography images of the temporal bone: comparison with hybrid iterative reconstruction. Neuroradiology 2024; 66:1105-1112. [PMID: 38514472 DOI: 10.1007/s00234-024-03330-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2023] [Accepted: 03/04/2024] [Indexed: 03/23/2024]
Abstract
PURPOSE We investigated whether the quality of high-resolution computed tomography (CT) images of the temporal bone improves with deep learning reconstruction (DLR) compared with hybrid iterative reconstruction (HIR). METHODS This retrospective study enrolled 36 patients (15 men, 21 women; age, 53.9 ± 19.5 years) who had undergone high-resolution CT of the temporal bone. Axial and coronal images were reconstructed using DLR, HIR, and filtered back projection (FBP). In qualitative image analyses, two radiologists independently compared the DLR and HIR images with FBP in terms of depiction of structures, image noise, and overall quality, using a 5-point scale (5 = better than FBP, 1 = poorer than FBP) to evaluate image quality. The other two radiologists placed regions of interest on the tympanic cavity and measured the standard deviation of CT attenuation (i.e., quantitative image noise). Scores from the qualitative and quantitative analyses of the DLR and HIR images were compared using, respectively, the Wilcoxon signed-rank test and the paired t-test. RESULTS Qualitative and quantitative image noise was significantly reduced in DLR images compared with HIR images (all comparisons, p ≤ 0.016). Depiction of the otic capsule, auditory ossicles, and tympanic membrane was significantly improved in DLR images compared with HIR images (both readers, p ≤ 0.003). Overall image quality was significantly superior in DLR images compared with HIR images (both readers, p < 0.001). CONCLUSION Compared with HIR, DLR provided significantly better-quality high-resolution CT images of the temporal bone.
Affiliation(s)
- Nana Fujita
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
| | - Koichiro Yasaka
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan.
| | - Sosuke Hatano
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
| | - Naoya Sakamoto
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
| | - Ryo Kurokawa
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
| | - Osamu Abe
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
| |
Collapse
|
30
|
Alrashed S, Dutra V, Chu TMG, Yang CC, Lin WS. Influence of exposure protocol, voxel size, and artifact removal algorithm on the trueness of segmentation utilizing an artificial-intelligence-based system. J Prosthodont 2024; 33:574-583. [PMID: 38305665 DOI: 10.1111/jopr.13827] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2023] [Accepted: 01/09/2024] [Indexed: 02/03/2024] Open
Abstract
PURPOSE To evaluate the effects of exposure protocol, voxel sizes, and artifact removal algorithms on the trueness of segmentation in various mandible regions using an artificial intelligence (AI)-based system. MATERIALS AND METHODS Eleven dry human mandibles were scanned using a cone beam computed tomography (CBCT) scanner under differing exposure protocols (standard and ultra-low), voxel sizes (0.15 mm, 0.3 mm, and 0.45 mm), and with or without artifact removal algorithm. The resulting datasets were segmented using an AI-based system, exported as 3D models, and compared to reference files derived from a white-light laboratory scanner. Deviation measurement was performed using a computer-aided design (CAD) program and recorded as root mean square (RMS). The RMS values were used as a representation of the trueness of the AI-segmented 3D models. A 4-way ANOVA was used to assess the impact of voxel size, exposure protocol, artifact removal algorithm, and location on RMS values (α = 0.05). RESULTS Significant effects were found with voxel size (p < 0.001) and location (p < 0.001), but not with exposure protocol (p = 0.259) or artifact removal algorithm (p = 0.752). Standard exposure groups had significantly lower RMS values than the ultra-low exposure groups in the mandible body with 0.3 mm (p = 0.014) or 0.45 mm (p < 0.001) voxel sizes, the symphysis with a 0.45 mm voxel size (p = 0.011), and the whole mandible with a 0.45 mm voxel size (p = 0.001). Exposure protocol did not affect RMS values at teeth and alveolar bone (p = 0.544), mandible angles (p = 0.380), condyles (p = 0.114), and coronoids (p = 0.806) locations. CONCLUSION This study informs optimal exposure protocol and voxel size choices in CBCT imaging for true AI-based automatic segmentation with minimal radiation. The artifact removal algorithm did not influence the trueness of AI segmentation. When using an ultra-low exposure protocol to minimize patient radiation exposure in AI segmentations, a voxel size of 0.15 mm is recommended, while a voxel size of 0.45 mm should be avoided.
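The trueness metric here is an RMS of surface deviations computed in a CAD program; a simplified stand-in, reducing the comparison to nearest-neighbour distances between two point clouds sampled from the meshes, might look like this.

```python
# Hedged sketch of the RMS trueness metric: root mean square of closest-point
# distances from an AI-segmented surface to the reference-scan surface.
import numpy as np
from scipy.spatial import cKDTree

def rms_deviation(test_points, reference_points):
    """RMS of closest-point distances from test surface to reference surface."""
    distances, _ = cKDTree(reference_points).query(test_points)
    return np.sqrt(np.mean(distances ** 2))

rng = np.random.default_rng(0)
reference = rng.random((5000, 3))                  # reference mesh vertices
test = reference + rng.normal(0, 0.01, (5000, 3))  # segmented mesh vertices
print(f"RMS = {rms_deviation(test, reference):.4f}")
```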
Affiliation(s)
- Safa Alrashed
- Oral Biology PhD program in the College of Dentistry, Division of Restorative and Prosthetic Dentistry, The Ohio State University, Columbus, Ohio, USA
| | - Vinicius Dutra
- Department of Oral Pathology, Medicine, and Radiology, Indiana University School of Dentistry, Indianapolis, Indiana, USA
| | - Tien-Min G Chu
- Department of Biomedical Sciences and Comprehensive Care, Indiana University School of Dentistry, Indianapolis, Indiana, USA
| | - Chao-Chieh Yang
- Department of Prosthodontics, Indiana University School of Dentistry, Indianapolis, Indiana, USA
- Advanced Education Program in Prosthodontics, Department of Prosthodontics, Indiana University School of Dentistry, Indianapolis, Indiana, USA
| | - Wei-Shao Lin
- Department of Prosthodontics, Indiana University School of Dentistry, Indianapolis, Indiana, USA
- Advanced Education Program in Prosthodontics, Department of Prosthodontics, Indiana University School of Dentistry, Indianapolis, Indiana, USA
| |
|
31
|
Liu J, Huang J, Song Y, He Q, Fang W, Wang T, Zheng Z, Liu W. Differentiating Gastrointestinal Stromal Tumors From Leiomyomas of Upper Digestive Tract Using Convolutional Neural Network Model by Endoscopic Ultrasonography. J Clin Gastroenterol 2024; 58:574-579. [PMID: 37646533 DOI: 10.1097/mcg.0000000000001907] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/12/2023] [Accepted: 07/16/2023] [Indexed: 09/01/2023]
Abstract
BACKGROUND Gastrointestinal stromal tumors (GISTs) and leiomyomas are the most common submucosal tumors of the upper digestive tract, and diagnosing the tumor type is essential for treatment and prognosis. However, the ability of endoscopic ultrasonography (EUS) to correctly identify the tumor type is limited and depends closely on the knowledge, operational skill, and experience of the endoscopist. A convolutional neural network (CNN) was therefore used to assist endoscopists in distinguishing GISTs from leiomyomas on EUS. MATERIALS AND METHODS A CNN model was constructed according to the GoogLeNet architecture to distinguish GISTs from leiomyomas. All EUS images collected for this study were randomly sampled and divided into a training set (n=411) and a testing set (n=103) at a ratio of 4:1. The CNN model was trained on EUS images from the training set, and the testing set was used to evaluate its performance. The model was also compared with endoscopists. RESULTS The CNN model identified leiomyomas with a sensitivity of 95.92% and a specificity of 94.44%, identified GISTs with a sensitivity of 94.44% and a specificity of 95.92%, and achieved an overall accuracy of 95.15%. This indicates that the diagnostic accuracy of the CNN model is equivalent to, or even higher than, that of skilled endoscopists. CONCLUSION The CNN model performed robustly in identifying GISTs and leiomyomas, highlighting its promising role in supporting less-experienced endoscopists and reducing interobserver variability.
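A hedged sketch of the classifier described here: torchvision's GoogLeNet fine-tuned for the binary GIST-vs-leiomyoma task. The data pipeline is omitted, the random batch is a placeholder for EUS images, and ImageNet-pretrained initialization is an assumption; the authors' training details are not reproduced.

```python
# Hedged sketch: GoogLeNet-architecture CNN with a two-class head.
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained weights assumed; torchvision drops the auxiliary
# classifiers by default when loading pretrained GoogLeNet.
model = models.googlenet(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)  # GIST vs. leiomyoma

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One supervised update on a batch of EUS images (N, 3, 224, 224)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with a random batch standing in for EUS images.
print(train_step(torch.randn(4, 3, 224, 224), torch.tensor([0, 1, 0, 1])))
```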
Affiliation(s)
- Jing Liu
- Department of Gastroenterology and Hepatology, Tianjin Medical University General Hospital
| | - Jia Huang
- Department of Gastroenterology and Hepatology, Tianjin Medical University General Hospital
| | - Yan Song
- Department of Gastroenterology and Hepatology, Tianjin Medical University General Hospital
| | - Qi He
- Key Laboratory of Mechanism Theory and Equipment Design of Ministry of Education, Tianjin University, Tianjin, China
| | - Weili Fang
- Department of Gastroenterology and Hepatology, Tianjin Medical University General Hospital
| | - Tao Wang
- Department of Gastroenterology and Hepatology, Tianjin Medical University General Hospital
| | - Zhongqing Zheng
- Department of Gastroenterology and Hepatology, Tianjin Medical University General Hospital
| | - Wentian Liu
- Department of Gastroenterology and Hepatology, Tianjin Medical University General Hospital
| |
|
32
|
Yang M, Yang M, Yang L, Wang Z, Ye P, Chen C, Fu L, Xu S. Deep learning for MRI lesion segmentation in rectal cancer. Front Med (Lausanne) 2024; 11:1394262. [PMID: 38983364 PMCID: PMC11231084 DOI: 10.3389/fmed.2024.1394262] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2024] [Accepted: 06/14/2024] [Indexed: 07/11/2024] Open
Abstract
Rectal cancer (RC) is a globally prevalent malignant tumor that presents significant challenges in management and treatment. Magnetic resonance imaging (MRI) offers superior soft-tissue contrast without ionizing radiation, making it the most widely used and effective detection method for RC patients. In early screening, radiologists rely on the patient's radiological characteristics and their own extensive clinical experience for diagnosis. However, diagnostic accuracy may be hindered by factors such as limited expertise, visual fatigue, and image clarity issues, resulting in misdiagnosis or missed diagnosis. Moreover, the organs surrounding the rectum are extensively distributed, and some have shapes similar to the tumor but unclear boundaries; these complexities greatly impede accurate diagnosis of RC. With recent advancements in artificial intelligence, machine learning techniques such as deep learning (DL) have demonstrated immense potential and broad prospects in medical image analysis. This approach has significantly enhanced research in medical image classification, detection, and segmentation, with particular emphasis on segmentation. This review discusses the development of DL segmentation algorithms and their application to lesion segmentation in MRI images of RC, to provide theoretical guidance and support for further advancements in this field.
Affiliation(s)
- Mingwei Yang
- Department of General Surgery, Nanfang Hospital Zengcheng Campus, Guangzhou, Guangdong, China
| | - Miyang Yang
- Department of Radiology, Fuzong Teaching Hospital, Fujian University of Traditional Chinese Medicine, Fuzhou, Fujian, China
- Department of Radiology, 900th Hospital of Joint Logistics Support Force, Fuzhou, Fujian, China
| | - Lanlan Yang
- Department of Radiology, Fuzong Teaching Hospital, Fujian University of Traditional Chinese Medicine, Fuzhou, Fujian, China
| | - Zhaochu Wang
- Department of Radiology, Fuzong Teaching Hospital, Fujian University of Traditional Chinese Medicine, Fuzhou, Fujian, China
| | - Peiyun Ye
- Department of Radiology, Fuzong Teaching Hospital, Fujian University of Traditional Chinese Medicine, Fuzhou, Fujian, China
- Department of Radiology, 900th Hospital of Joint Logistics Support Force, Fuzhou, Fujian, China
| | - Chujie Chen
- Department of Radiology, Fuzong Teaching Hospital, Fujian University of Traditional Chinese Medicine, Fuzhou, Fujian, China
- Department of Radiology, 900th Hospital of Joint Logistics Support Force, Fuzhou, Fujian, China
| | - Liyuan Fu
- Department of Radiology, 900th Hospital of Joint Logistics Support Force, Fuzhou, Fujian, China
| | - Shangwen Xu
- Department of Radiology, 900th Hospital of Joint Logistics Support Force, Fuzhou, Fujian, China
| |
|
33
|
Nguyen TTT, Greene LA, Mnatsakanyan H, Badr CE. Revolutionizing Brain Tumor Care: Emerging Technologies and Strategies. Biomedicines 2024; 12:1376. [PMID: 38927583 PMCID: PMC11202201 DOI: 10.3390/biomedicines12061376] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2024] [Revised: 06/16/2024] [Accepted: 06/17/2024] [Indexed: 06/28/2024] Open
Abstract
Glioblastoma multiforme (GBM) is one of the most aggressive forms of brain tumor, characterized by a daunting prognosis with a life expectancy hovering around 12-16 months. Despite a century of relentless research, only a select few drugs have received approval for brain tumor treatment, largely due to the formidable barrier posed by the blood-brain barrier. The current standard of care involves a multifaceted approach combining surgery, irradiation, and chemotherapy. However, recurrence often occurs within months despite these interventions. The formidable challenges of drug delivery to the brain and overcoming therapeutic resistance have become focal points in the treatment of brain tumors and are deemed essential to overcoming tumor recurrence. In recent years, a promising wave of advanced treatments has emerged, offering a glimpse of hope to overcome the limitations of existing therapies. This review aims to highlight cutting-edge technologies in the current and ongoing stages of development, providing patients with valuable insights to guide their choices in brain tumor treatment.
Affiliation(s)
- Trang T. T. Nguyen
- Ronald O. Perelman Department of Dermatology, Perlmutter Cancer Center, NYU Grossman School of Medicine, NYU Langone Health, New York, NY 10016, USA
| | - Lloyd A. Greene
- Department of Pathology and Cell Biology, Columbia University Medical Center, New York, NY 10032, USA;
| | - Hayk Mnatsakanyan
- Department of Neurology, Massachusetts General Hospital, Neuroscience Program, Harvard Medical School, Boston, MA 02129, USA; (H.M.); (C.E.B.)
| | - Christian E. Badr
- Department of Neurology, Massachusetts General Hospital, Neuroscience Program, Harvard Medical School, Boston, MA 02129, USA; (H.M.); (C.E.B.)
| |
|
34
|
Xie Y, Zaccagna F, Rundo L, Testa C, Zhu R, Tonon C, Lodi R, Manners DN. IMPA-Net: Interpretable Multi-Part Attention Network for Trustworthy Brain Tumor Classification from MRI. Diagnostics (Basel) 2024; 14:997. [PMID: 38786294 PMCID: PMC11119919 DOI: 10.3390/diagnostics14100997] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2024] [Revised: 05/08/2024] [Accepted: 05/09/2024] [Indexed: 05/25/2024] Open
Abstract
Deep learning (DL) networks have shown attractive performance in medical image processing tasks such as brain tumor classification. However, they are often criticized as mysterious "black boxes": the opaqueness of the model and its reasoning process makes it difficult for health workers to decide whether to trust the prediction outcomes. In this study, we develop an interpretable multi-part attention network (IMPA-Net) for brain tumor classification to enhance the interpretability and trustworthiness of classification outcomes. The proposed model not only predicts the tumor grade but also provides a global explanation of the model's learned features and a local explanation as justification for each prediction. The global explanation is represented as a group of feature patterns that the model learns to distinguish between high-grade glioma (HGG) and low-grade glioma (LGG) classes. The local explanation interprets the reasoning process of an individual prediction by calculating the similarity between prototypical parts of the image and a group of pre-learned task-related features. Experiments conducted on the BraTS2017 dataset demonstrate that IMPA-Net is a verifiable model for the classification task. Eighty-six percent of the feature patterns were assessed by two radiologists as valid representations of task-relevant medical features. The model shows a classification accuracy of 92.12%, of which 81.17% were evaluated as trustworthy based on the local explanations. Our interpretable model is a trustworthy model that can be used as a decision aid for glioma classification. Compared with black-box CNNs, it allows health workers and patients to understand the reasoning process and trust the prediction outcomes.
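The local-explanation mechanism, scoring an image against pre-learned prototypical features, can be sketched as a similarity lookup; the random embedding and prototypes below are stand-ins, not IMPA-Net's learned parameters.

```python
# Hedged sketch of prototype-based local explanation: cosine similarity
# between an image embedding and a bank of prototypical feature vectors.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
prototypes = F.normalize(torch.randn(6, 128), dim=1)  # 6 learned patterns
embedding = F.normalize(torch.randn(128), dim=0)      # one image's features

similarity = prototypes @ embedding                   # cosine similarities
best = similarity.argmax().item()
print(f"most similar prototype: {best}, score {similarity[best]:.3f}")
```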
Affiliation(s)
- Yuting Xie
- Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy; (Y.X.); (C.T.); (R.L.)
- Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, Bellaria Hospital, 40139 Bologna, Italy
| | - Fulvio Zaccagna
- Department of Imaging, Cambridge University Hospitals NHS Foundation Trust, Cambridge Biomedical Campus, Cambridge CB2 0SL, UK;
- Department of Radiology, University of Cambridge, Cambridge CB2 0QQ, UK
| | - Leonardo Rundo
- Department of Information and Electrical Engineering and Applied Mathematics, University of Salerno, 84084 Fisciano, Italy;
| | - Claudia Testa
- INFN Bologna Division, Viale C. Berti Pichat, 6/2, 40127 Bologna, Italy
- Department of Physics and Astronomy, University of Bologna, 40127 Bologna, Italy
| | - Ruifeng Zhu
- Department of Engineering “Enzo Ferrari”, University of Modena and Reggio Emilia, 41125 Modena, Italy;
| | - Caterina Tonon
- Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy; (Y.X.); (C.T.); (R.L.)
- Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, Bellaria Hospital, 40139 Bologna, Italy
| | - Raffaele Lodi
- Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy; (Y.X.); (C.T.); (R.L.)
- Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, Bellaria Hospital, 40139 Bologna, Italy
| | - David Neil Manners
- Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, Bellaria Hospital, 40139 Bologna, Italy
- Department for Life Quality Studies, University of Bologna, 40126 Bologna, Italy
| |
Collapse
|
35
Yan T, Tang G, Zhang H, Liang L, Ma J, Gao Y, Zhou C, Li S. Multiscale and multiperception feature learning for pancreatic lesion detection based on noncontrast CT. Phys Med Biol 2024; 69:105014. [PMID: 38588676 DOI: 10.1088/1361-6560/ad3c0c]
Abstract
Background. Pancreatic cancer is one of the most malignant tumours, with a poor prognosis and nearly identical mortality and morbidity rates, mainly because early diagnosis and timely treatment are difficult at localized stages. Objective. To develop a noncontrast CT (NCCT)-based pancreatic lesion detection model that could serve as an intelligent tool for diagnosing pancreatic cancer early, overcoming the challenges associated with the low contrast intensities and complex anatomical structures present in NCCT images. Approach. We design a multiscale and multiperception (MSMP) feature learning network with ResNet50 coupled with a feature pyramid network as the backbone for strengthening feature expressions. We added multiscale atrous convolutions to expand different receptive fields, contextual attention to perceive contextual information, and channel and spatial attention to focus on important channels and spatial regions, respectively. The MSMP network then acts as a feature extractor for an NCCT-based pancreatic lesion detection model whose input is image patches covering the pancreas; Faster R-CNN is employed as the detection method for accurately detecting pancreatic lesions. Main results. By using the new MSMP network as a feature extractor, our model outperforms conventional object detection algorithms in terms of recall (75.40% and 90.95%), precision (40.84% and 68.21%), F1 score (52.98% and 77.96%), F2 score (64.48% and 85.26%) and the AP50 metric (53.53% and 70.14%) at the image and patient levels, respectively. Significance. The good performance of our new model implies that MSMP can mine NCCT imaging features for detecting pancreatic lesions from complex backgrounds well. The proposed detection model is expected to be further developed as an intelligent method for the early detection of pancreatic cancer.
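The core architectural idea, parallel atrous convolutions covering multiple receptive fields followed by channel attention, can be sketched as a small PyTorch block. The dilation rates, channel sizes, and squeeze-and-excitation-style attention below are illustrative assumptions, not the paper's exact MSMP design.

```python
import torch
import torch.nn as nn

class MultiScaleAtrousBlock(nn.Module):
    """Parallel atrous convolutions plus channel attention: a minimal
    sketch of the multiscale/multiperception idea under assumed sizes."""
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
        super().__init__()
        # Same spatial size, increasingly large receptive fields.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        ])
        fused = out_ch * len(rates)
        # Squeeze-and-excitation style channel attention over the fused maps.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(fused, fused // 4, 1), nn.ReLU(),
            nn.Conv2d(fused // 4, fused, 1), nn.Sigmoid(),
        )
        self.project = nn.Conv2d(fused, out_ch, 1)

    def forward(self, x):
        y = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.project(y * self.attn(y))

block = MultiScaleAtrousBlock(64, 64)
print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```

A block like this would slot into the backbone as a drop-in feature extractor ahead of a detector such as Faster R-CNN.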
Affiliation(s)
- Tian Yan: School of Biomedical Engineering, Southern Medical University, Guangzhou, People's Republic of China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, People's Republic of China; General Surgery Center, Department of Hepatobiliary Surgery II, Guangdong Provincial Research Center for Artificial Organ and Tissue Engineering, Guangzhou Clinical Research and Transformation Center for Artificial Liver, Institute of Regenerative Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, People's Republic of China
- Geye Tang: School of Biomedical Engineering, Southern Medical University, Guangzhou, People's Republic of China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, People's Republic of China
- Haojie Zhang: Department of General Surgery, Seventh People's Hospital of Nanhai District, Foshan, People's Republic of China
- Lidu Liang: Equipment and Materials Department, Xinchang Hospital of Traditional Chinese Medicine, Zhejiang, People's Republic of China
- Jianhua Ma: School of Biomedical Engineering, Southern Medical University, Guangzhou, People's Republic of China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, People's Republic of China
- Yi Gao: General Surgery Center, Department of Hepatobiliary Surgery II, Guangdong Provincial Research Center for Artificial Organ and Tissue Engineering, Guangzhou Clinical Research and Transformation Center for Artificial Liver, Institute of Regenerative Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, People's Republic of China; State Key Laboratory of Organ Failure Research, Southern Medical University, Guangzhou, People's Republic of China
- Chenjie Zhou: Department of Pancreatic Surgery, Fudan University Shanghai Cancer Center, Shanghai, People's Republic of China
- Shulong Li: School of Biomedical Engineering, Southern Medical University, Guangzhou, People's Republic of China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, People's Republic of China
36
Satheesh Kumar J, Vinoth Kumar V, Mahesh TR, Alqahtani MS, Prabhavathy P, Manikandan K, Guluwadi S. Detection of Marchiafava Bignami disease using distinct deep learning techniques in medical diagnostics. BMC Med Imaging 2024; 24:100. [PMID: 38684964 PMCID: PMC11059769 DOI: 10.1186/s12880-024-01283-8]
Abstract
PURPOSE To detect Marchiafava-Bignami disease (MBD) using a distinct deep learning technique. BACKGROUND Advanced deep learning methods are becoming more crucial in contemporary medical diagnostics, particularly for detecting intricate and uncommon neurological illnesses such as MBD. This rare neurodegenerative disorder, sometimes associated with persistent alcoholism, is characterized by the loss of myelin or tissue death in the corpus callosum. It poses significant diagnostic difficulties owing to its infrequency and the subtle signs it exhibits in its early stages, both clinically and on radiological scans. METHODS A novel method combining variational autoencoders (VAEs) with attention mechanisms is used to identify MBD accurately. VAEs are well known for their proficiency in unsupervised learning and anomaly detection, and they excel at analyzing extensive brain imaging datasets to uncover subtle patterns and abnormalities that traditional diagnostic approaches may overlook. Attention mechanisms enhance this technique by enabling the model to concentrate on the most crucial elements of the imaging data, similar to the discerning observation of a skilled radiologist. We therefore utilized a VAE with attention mechanisms in this study to detect MBD. This combination enables the prompt identification of MBD and assists in formulating more customized and efficient treatment strategies. RESULTS A significant breakthrough in this field is the creation of a VAE equipped with attention mechanisms, which has shown outstanding performance, achieving accuracy rates of over 90% in differentiating MBD from other neurodegenerative disorders. CONCLUSION This model, which was trained on a diverse range of MRI images, showed a notable level of sensitivity and specificity, significantly minimizing false positive results and strengthening the confidence and dependability of these sophisticated automated diagnostic tools.
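A compact sketch of the central mechanism, a VAE whose input is gated by a learned attention map and which is trained with the standard ELBO loss, is given below. All sizes and the placement of the attention gate are assumptions for illustration; this is not the authors' model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveVAE(nn.Module):
    """Minimal VAE with a spatial-attention gate on the input
    (assumed 64x64 single-channel slices, assumed latent size)."""
    def __init__(self, hw=64, latent=32):
        super().__init__()
        self.attn = nn.Sequential(nn.Conv2d(1, 1, 7, padding=3), nn.Sigmoid())
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(hw * hw, 256), nn.ReLU())
        self.mu, self.logvar = nn.Linear(256, latent), nn.Linear(256, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(),
                                 nn.Linear(256, hw * hw), nn.Sigmoid())

    def forward(self, x):
        x = x * self.attn(x)                    # focus on salient regions
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.dec(z).view_as(x), mu, logvar

def elbo_loss(recon, x, mu, logvar):
    # Reconstruction error plus KL divergence to the unit Gaussian prior;
    # a high reconstruction error at test time flags anomalous scans.
    bce = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld

x = torch.rand(4, 1, 64, 64)                    # stand-in image batch in [0, 1]
recon, mu, logvar = AttentiveVAE()(x)
print(elbo_loss(recon, x, mu, logvar).item())
```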
Affiliation(s)
- J Satheesh Kumar: Department of Electronics and Instrumentation Engineering, Dayananda Sagar College of Engineering, Bangalore, India
- V Vinoth Kumar: School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, Vellore, India
- T R Mahesh: Department of Computer Science and Engineering, JAIN (Deemed-to-Be University), Bengaluru, 562112, India
- Mohammed S Alqahtani: Radiological Sciences Department, College of Applied Medical Sciences, King Khalid University, 61421, Abha, Saudi Arabia
- P Prabhavathy: School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, Vellore, India
- K Manikandan: School of Computer Science and Engineering (SCOPE), Vellore Institute of Technology (VIT), Vellore, India
- Suresh Guluwadi: Adama Science and Technology University, 302120, Adama, Ethiopia
37
Ito K, Hirahara N, Muraoka H, Sawada E, Tokunaga S, Komatsu T, Kaneda T. Graphical user interface-based convolutional neural network models for detecting nasopalatine duct cysts using panoramic radiography. Sci Rep 2024; 14:7699. [PMID: 38565866 PMCID: PMC10987649 DOI: 10.1038/s41598-024-57632-8]
Abstract
Nasopalatine duct cysts are difficult to detect on panoramic radiographs due to obstructive shadows and are often overlooked. Therefore, sensitive detection using panoramic radiography is clinically important. This study aimed to create a trained model to detect nasopalatine duct cysts from panoramic radiographs in a graphical user interface-based environment. The study was conducted on panoramic radiographs and CT images of 115 patients with nasopalatine duct cysts. As controls, 230 age- and sex-matched patients without cysts were selected from the same database. The 345 pre-processed panoramic radiographs were divided into 216 training data sets, 54 validation data sets, and 75 test data sets. Deep learning was performed for 400 epochs using pretrained LeNet and VGG16 as the convolutional neural networks to classify the cysts. The deep learning system's accuracy, sensitivity, and specificity using LeNet and VGG16 were calculated. LeNet and VGG16 showed accuracy rates of 85.3% and 88.0%, respectively. A simple deep learning method using a graphical user interface-based Windows machine was able to create a trained model to detect nasopalatine duct cysts from panoramic radiographs, and may be used to prevent such cysts from being overlooked during imaging.
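The underlying transfer-learning step, replacing the 1000-way ImageNet head of a pretrained VGG16 with a two-class head for cyst versus control, can be sketched as follows; the optimizer settings and batch contents are illustrative assumptions, not the authors' GUI-based configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load ImageNet-pretrained VGG16 (weights are downloaded on first use)
# and swap the final 1000-class layer for a 2-class head.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 2)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

images = torch.randn(2, 3, 224, 224)   # stand-in for preprocessed radiographs
labels = torch.tensor([0, 1])          # 0 = control, 1 = cyst
loss = criterion(model(images), labels)
loss.backward()                        # one illustrative training step
optimizer.step()
```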
Affiliation(s)
- Kotaro Ito: Department of Radiology, Nihon University School of Dentistry at Matsudo, 2-870-1 Sakaecho-Nishi, Matsudo, Chiba, 271-8587, Japan
- Naohisa Hirahara: Department of Radiology, Nihon University School of Dentistry at Matsudo, 2-870-1 Sakaecho-Nishi, Matsudo, Chiba, 271-8587, Japan
- Hirotaka Muraoka: Department of Radiology, Nihon University School of Dentistry at Matsudo, 2-870-1 Sakaecho-Nishi, Matsudo, Chiba, 271-8587, Japan
- Eri Sawada: Department of Radiology, Nihon University School of Dentistry at Matsudo, 2-870-1 Sakaecho-Nishi, Matsudo, Chiba, 271-8587, Japan
- Satoshi Tokunaga: Department of Radiology, Nihon University School of Dentistry at Matsudo, 2-870-1 Sakaecho-Nishi, Matsudo, Chiba, 271-8587, Japan
- Tomohiro Komatsu: Department of Radiology, Nihon University School of Dentistry at Matsudo, 2-870-1 Sakaecho-Nishi, Matsudo, Chiba, 271-8587, Japan
- Takashi Kaneda: Department of Radiology, Nihon University School of Dentistry at Matsudo, 2-870-1 Sakaecho-Nishi, Matsudo, Chiba, 271-8587, Japan
38
Saluja S, Trivedi MC, Saha A. Deep CNNs for glioma grading on conventional MRIs: Performance analysis, challenges, and future directions. Math Biosci Eng 2024; 21:5250-5282. [PMID: 38872535 DOI: 10.3934/mbe.2024232]
Abstract
The increasing global incidence of glioma tumors has raised significant healthcare concerns due to their high mortality rates. Traditionally, tumor diagnosis relies on visual analysis of medical imaging and invasive biopsies for precise grading. As an alternative, computer-assisted methods, particularly deep convolutional neural networks (DCNNs), have gained traction. This paper reviews recent advancements in DCNNs for glioma grading using brain magnetic resonance images (MRIs) from 2015 to 2023. The study evaluated various DCNN architectures and their performance, revealing remarkable results, with models such as hybrid and ensemble-based DCNNs achieving accuracy levels of up to 98.91%. However, challenges persist in the form of limited datasets, a lack of external validation, and variations in grading formulations across diverse literature sources. Addressing these challenges by expanding datasets, conducting external validation, and standardizing grading formulations can enhance the performance and reliability of DCNNs in glioma grading, thereby advancing brain tumor classification and extending its applications to other neurological disorders.
Affiliation(s)
- Sonam Saluja: Department of Computer Science and Engineering, National Institute of Technology Agartala, Tripura 799046, India
- Munesh Chandra Trivedi: Department of Computer Science and Engineering, National Institute of Technology Agartala, Tripura 799046, India
- Ashim Saha: Department of Computer Science and Engineering, National Institute of Technology Agartala, Tripura 799046, India
39
Ayhan B, Ayan E, Bayraktar Y. A novel deep learning-based perspective for tooth numbering and caries detection. Clin Oral Investig 2024; 28:178. [PMID: 38411726 PMCID: PMC10899376 DOI: 10.1007/s00784-024-05566-w]
Abstract
OBJECTIVES The aim of this study was to automatically detect and number teeth in digital bitewing radiographs obtained from patients, and to evaluate the diagnostic efficiency for decayed teeth in real time, using deep learning algorithms. METHODS The dataset consisted of 1170 anonymized digital bitewing radiographs randomly obtained from faculty archives. After the image evaluation and labeling process, the dataset was split into training and test datasets. This study proposed an end-to-end pipeline architecture consisting of three stages for matching tooth numbers and caries lesions to enhance treatment outcomes and prevent potential issues. Initially, a pre-trained convolutional neural network (CNN) was utilized to determine the side of the bitewing images. Then, an improved YOLOv7 CNN model was proposed for tooth numbering and caries detection. In the final stage, our developed algorithm assessed which teeth have caries by comparing the numbered teeth with the detected caries, using the intersection over union value for the matching process. RESULTS According to the test results, the recall, precision, and F1-score values were 0.994, 0.987 and 0.99 for tooth detection; 0.974, 0.985 and 0.979 for tooth numbering; and 0.833, 0.866 and 0.822 for caries detection, respectively. For the tooth numbering and caries detection matching performance, the accuracy, recall, specificity, precision and F1-score values were 0.934, 0.834, 0.961, 0.851 and 0.842, respectively. CONCLUSIONS The proposed model exhibited good achievement, highlighting the potential use of CNNs for tooth detection, numbering, and caries detection concurrently. CLINICAL SIGNIFICANCE CNNs can provide valuable support to clinicians by automating the detection and numbering of teeth, as well as the detection of caries, on bitewing radiographs. By enhancing overall performance, these algorithms can efficiently save time and play a significant role in the assessment process.
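The final matching stage reduces to computing the intersection over union (IoU) between each numbered tooth box and each detected caries box. The sketch below uses an assumed IoU threshold, which the abstract does not specify.

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def match_caries_to_teeth(teeth, caries, thr=0.1):
    """teeth: {tooth_number: box}; caries: list of boxes.
    Returns tooth numbers whose box overlaps a caries detection."""
    return sorted({n for n, tbox in teeth.items()
                   for cbox in caries if iou(tbox, cbox) >= thr})

teeth = {36: (10, 10, 60, 80), 37: (65, 10, 115, 80)}
caries = [(40, 30, 70, 60)]
print(match_caries_to_teeth(teeth, caries))  # [36]
```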
Affiliation(s)
- Baturalp Ayhan: Department of Restorative Dentistry, Faculty of Dentistry, Kırıkkale University, Kırıkkale, Turkey
- Enes Ayan: Department of Computer Engineering, Faculty of Engineering and Architecture, Kırıkkale University, Kırıkkale, Turkey
- Yusuf Bayraktar: Department of Restorative Dentistry, Faculty of Dentistry, Kırıkkale University, Kırıkkale, Turkey
40
Saluja S, Trivedi MC, Sarangdevot SS. Advancing glioma diagnosis: Integrating custom U-Net and VGG-16 for improved grading in MR imaging. Math Biosci Eng 2024; 21:4328-4350. [PMID: 38549330 DOI: 10.3934/mbe.2024191]
Abstract
In the realm of medical imaging, the precise segmentation and classification of gliomas represent fundamental challenges with profound clinical implications. Leveraging the BraTS 2018 dataset as a standard benchmark, this study delves into the potential of advanced deep learning models for addressing these challenges. We propose a novel approach that integrates a customized U-Net for segmentation and VGG-16 for classification. The U-Net, with its tailored encoder-decoder pathways, accurately identifies glioma regions, thus improving tumor localization. The fine-tuned VGG-16, featuring a customized output layer, precisely differentiates between low-grade and high-grade gliomas. To ensure consistency in data pre-processing, a standardized methodology involving gamma correction, data augmentation, and normalization is introduced. This novel integration surpasses existing methods, offering significantly improved glioma diagnosis, validated by high segmentation dice scores (WT: 0.96, TC: 0.92, ET: 0.89), and a remarkable overall classification accuracy of 97.89%. The experimental findings underscore the potential of integrating deep learning-based methodologies for tumor segmentation and classification in enhancing glioma diagnosis and formulating subsequent treatment strategies.
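The reported Dice scores follow the standard overlap definition, which can be computed for binary segmentation masks as below; this is a generic implementation, not the authors' code.

```python
import numpy as np

def dice_score(pred, truth):
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

pred = np.zeros((8, 8), dtype=int); pred[2:6, 2:6] = 1    # predicted tumor mask
truth = np.zeros((8, 8), dtype=int); truth[3:7, 3:7] = 1  # ground-truth mask
print(round(dice_score(pred, truth), 3))  # 0.562
```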
Affiliation(s)
- Sonam Saluja: Department of Computer Science and Engineering, National Institute of Technology Agartala, Tripura, 799046, India
- Munesh Chandra Trivedi: Department of Computer Science and Engineering, National Institute of Technology Agartala, Tripura, 799046, India
41
Versnjak J, Yevtushenko P, Kuehne T, Bruening J, Goubergrits L. Deep learning based assessment of hemodynamics in the coarctation of the aorta: comparison of bidirectional recurrent and convolutional neural networks. Front Physiol 2024; 15:1288339. [PMID: 38449784 PMCID: PMC10916009 DOI: 10.3389/fphys.2024.1288339]
Abstract
The utilization of numerical methods, such as computational fluid dynamics (CFD), has been widely established for modeling patient-specific hemodynamics based on medical imaging data. Hemodynamics assessment plays a crucial role in treatment decisions for the coarctation of the aorta (CoA), a congenital heart disease, with the pressure drop (PD) being a crucial biomarker for CoA treatment decisions. However, implementing CFD methods in the clinical environment remains challenging due to their computational cost and the requirement for expert knowledge. This study proposes a deep learning approach to mitigate the computational need and produce fast results. Building upon a previous proof-of-concept study, we compared the effects of two different artificial neural network (ANN) architectures trained on data with different dimensionalities, both capable of predicting hemodynamic parameters in CoA patients: a one-dimensional bidirectional recurrent neural network (1D BRNN) and a three-dimensional convolutional neural network (3D CNN). The performance was evaluated by median point-wise root mean square error (RMSE) for pressures along the centerline in 18 test cases, which were not included in a training cohort. We found that the 3D CNN (median RMSE of 3.23 mmHg) outperforms the 1D BRNN (median RMSE of 4.25 mmHg). In contrast, the 1D BRNN is more precise in PD prediction, with a lower standard deviation of the error (±7.03 mmHg) compared to the 3D CNN (±8.91 mmHg). The differences between both ANNs are not statistically significant, suggesting that compressing the 3D aorta hemodynamics into a 1D centerline representation does not result in the loss of valuable information when training ANN models. Additionally, we evaluated the utility of the synthetic geometries of the aortas with CoA generated by using a statistical shape model (SSM), as well as the impact of aortic arch geometry (gothic arch shape) on the model's training. The results show that incorporating a synthetic cohort obtained through the SSM of the clinical cohort does not significantly increase the model's accuracy, indicating that the synthetic cohort generation might be oversimplified. Furthermore, our study reveals that selecting training cases based on aortic arch shape (gothic versus non-gothic) does not improve ANN performance for test cases sharing the same shape.
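The evaluation metric, point-wise RMSE of predicted versus reference pressure along each centerline summarized by the median across test cases, can be reproduced generically as below (the values are illustrative, not study data).

```python
import numpy as np

def pointwise_rmse(pred, ref):
    """RMSE between predicted and reference pressure, point by point
    along one centerline (both 1-D arrays, e.g. in mmHg)."""
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(ref)) ** 2)))

# One RMSE per test case, summarized by the median across cases.
cases = [(np.array([80, 78, 60, 55]), np.array([82, 75, 64, 55])),
         (np.array([90, 85, 70]),     np.array([88, 86, 66]))]
rmses = [pointwise_rmse(pred, ref) for pred, ref in cases]
print(f"median point-wise RMSE: {np.median(rmses):.2f} mmHg")
```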
Affiliation(s)
- Leonid Goubergrits: Institute of Computer-assisted Cardiovascular Medicine, Deutsches Herzzentrum der Charité, Berlin, Germany
42
Mizuki M, Yasaka K, Miyo R, Ohtake Y, Hamada A, Hosoi R, Abe O. Deep Learning Reconstruction Plus Single-Energy Metal Artifact Reduction for Supra Hyoid Neck CT in Patients With Dental Metals. Can Assoc Radiol J 2024; 75:74-81. [PMID: 37387607 DOI: 10.1177/08465371231182904]
Abstract
Purpose: We investigated the effect of deep learning reconstruction (DLR) plus single-energy metal artifact reduction (SEMAR) on neck CT in patients with dental metals, comparing it with DLR and with hybrid iterative reconstruction (Hybrid IR)-SEMAR. Methods: In this retrospective study, 32 patients (25 men, 7 women; mean age: 63 ± 15 years) with dental metals underwent contrast-enhanced CT of the oral and oropharyngeal regions. Axial images were reconstructed using DLR, Hybrid IR-SEMAR, and DLR-SEMAR. In quantitative analyses, degrees of image noise and artifacts were evaluated. In one-by-one qualitative analyses, 2 radiologists evaluated metal artifacts, the depiction of structures, and noise on five-point scales. In side-by-side qualitative analyses, artifacts and overall image quality were evaluated by comparing Hybrid IR-SEMAR with DLR-SEMAR. Results: Artifacts were significantly less with DLR-SEMAR than with DLR in quantitative (P < .001) and one-by-one qualitative (P < .001) analyses, which resulted in significantly better depiction of most structures (P < .004). Artifacts in side-by-side analysis and image noise in quantitative and one-by-one qualitative analyses (P < .001) were significantly less with DLR-SEMAR than with Hybrid IR-SEMAR, resulting in significantly better overall quality of DLR-SEMAR. Conclusions: Compared with DLR and Hybrid IR-SEMAR, DLR-SEMAR provided significantly better supra hyoid neck CT images in patients with dental metals.
Affiliation(s)
- Masumi Mizuki: Department of Radiology, The University of Tokyo Hospital, Bunkyo-ku, Tokyo, Japan
- Koichiro Yasaka: Department of Radiology, The University of Tokyo Hospital, Bunkyo-ku, Tokyo, Japan
- Rintaro Miyo: Department of Radiology, The University of Tokyo Hospital, Bunkyo-ku, Tokyo, Japan
- Yuta Ohtake: Department of Radiology, The University of Tokyo Hospital, Bunkyo-ku, Tokyo, Japan
- Akiyoshi Hamada: Department of Radiology, The University of Tokyo Hospital, Bunkyo-ku, Tokyo, Japan
- Reina Hosoi: Department of Radiology, The University of Tokyo Hospital, Bunkyo-ku, Tokyo, Japan
- Osamu Abe: Department of Radiology, The University of Tokyo Hospital, Bunkyo-ku, Tokyo, Japan
43
Han SH, Lim J, Kim JS, Cho JH, Hong M, Kim M, Kim SJ, Kim YJ, Kim YH, Lim SH, Sung SJ, Kang KH, Baek SH, Choi SK, Kim N. Accuracy of posteroanterior cephalogram landmarks and measurements identification using a cascaded convolutional neural network algorithm: A multicenter study. Korean J Orthod 2024; 54:48-58. [PMID: 38072448 PMCID: PMC10811357 DOI: 10.4041/kjod23.075]
Abstract
Objective: To quantify the effects of midline-related landmark identification on midline deviation measurements in posteroanterior (PA) cephalograms using a cascaded convolutional neural network (CNN). Methods: A total of 2,903 PA cephalogram images obtained from 9 university hospitals were divided into training, internal validation, and test sets (n = 2,150, 376, and 377). As the gold standard, 2 orthodontic professors marked the bilateral landmarks, including the frontozygomatic suture point and latero-orbitale (LO), and the midline landmarks, including the crista galli, anterior nasal spine (ANS), upper dental midpoint (UDM), lower dental midpoint (LDM), and menton (Me). For the test, Examiner-1 and Examiner-2 (3-year and 1-year orthodontic residents) and the cascaded-CNN models marked the landmarks. After the point-to-point errors of landmark identification were computed, the successful detection rate (SDR) and the distance and direction of the midline landmark deviation from the midsagittal line (ANS-mid, UDM-mid, LDM-mid, and Me-mid) were measured, and statistical analysis was performed. Results: The cascaded-CNN algorithm showed a clinically acceptable level of point-to-point error (1.26 mm vs. 1.57 mm for Examiner-1 and 1.75 mm for Examiner-2). The average SDR within the 2 mm range was 83.2%, with high accuracy at the LO (right, 96.9%; left, 97.1%) and UDM (96.9%). The absolute measurement errors were less than 1 mm for ANS-mid, UDM-mid, and LDM-mid compared with the gold standard. Conclusions: The cascaded-CNN model may be considered an effective tool for the auto-identification of midline landmarks and quantification of midline deviation in PA cephalograms of adult patients, regardless of variations in the image acquisition method.
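The headline metrics, point-to-point error and the successful detection rate (SDR) within a 2 mm radius, can be computed generically as follows; the pixel-to-millimetre calibration factor below is an assumed value for illustration.

```python
import numpy as np

def landmark_errors(pred_pts, true_pts, radius_mm=2.0, mm_per_px=0.1):
    """Point-to-point error (mm) and SDR within a radius.

    pred_pts, true_pts: (N, 2) arrays of landmark coordinates in pixels;
    mm_per_px is an assumed calibration factor.
    """
    errors = np.linalg.norm((pred_pts - true_pts) * mm_per_px, axis=1)
    return errors.mean(), (errors <= radius_mm).mean()

pred = np.array([[100.0, 200.0], [310.0, 415.0], [520.0, 640.0]])
true = np.array([[105.0, 203.0], [312.0, 418.0], [560.0, 660.0]])
mean_err, sdr = landmark_errors(pred, true)
print(f"mean error {mean_err:.2f} mm, SDR(2 mm) {sdr:.0%}")
```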
Affiliation(s)
- Sung-Hoon Han: Department of Orthodontics, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Jisup Lim: Department of Convergence Medicine, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Jun-Sik Kim: Department of Convergence Medicine, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Jin-Hyoung Cho: Department of Orthodontics, School of Dentistry, Chonnam National University, Gwangju, Korea
- Mihee Hong: Department of Orthodontics, School of Dentistry, Kyungpook National University, Daegu, Korea
- Minji Kim: Department of Orthodontics, College of Medicine, Ewha Womans University, Seoul, Korea
- Su-Jung Kim: Department of Orthodontics, Kyung Hee University School of Dentistry, Seoul, Korea
- Yoon-Ji Kim: Department of Orthodontics, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Young Ho Kim: Department of Orthodontics, Institute of Oral Health Science, Ajou University School of Medicine, Suwon, Korea
- Sung-Hoon Lim: Department of Orthodontics, College of Dentistry, Chosun University, Gwangju, Korea
- Sang Jin Sung: Department of Orthodontics, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Kyung-Hwa Kang: Department of Orthodontics, School of Dentistry, Wonkwang University, Iksan, Korea
- Seung-Hak Baek: Department of Orthodontics, School of Dentistry, Dental Research Institute, Seoul National University, Seoul, Korea
- Sung-Kwon Choi: Department of Orthodontics, School of Dentistry, Wonkwang University, Iksan, Korea
- Namkug Kim: Department of Convergence Medicine, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
44
Jeon SM, Kim S, Lee KC. Deep Learning-based Assessment of Facial Asymmetry Using U-Net Deep Convolutional Neural Network Algorithm. J Craniofac Surg 2024; 35:133-136. [PMID: 37973054 DOI: 10.1097/scs.0000000000009862]
Abstract
OBJECTIVES This study aimed to evaluate the diagnostic performance of a deep convolutional neural network (DCNN)-based computer-assisted diagnosis (CAD) system to detect facial asymmetry on posteroanterior (PA) cephalograms and compare the results of the DCNN with those made by the orthodontist. MATERIALS AND METHODS PA cephalograms of 1020 orthodontic patients were used to train the DCNN-based CAD systems for auto-assessment of facial asymmetry, the degree of menton deviation, and the coordinates of its related landmarks. Twenty-five PA cephalograms were used to test the performance of the DCNN in analyzing facial asymmetry. The diagnostic performance of the DCNN-based CAD system was assessed using independent t-tests and Bland-Altman plots. RESULTS Comparison between the DCNN-based CAD system and conventional analysis confirmed no significant differences. Bland-Altman plots showed good agreement for all measurements. CONCLUSIONS The DCNN-based CAD system might offer a clinically acceptable diagnostic evaluation of facial asymmetry on PA cephalograms.
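The Bland-Altman analysis reduces to the bias and 95% limits of agreement of the paired differences between the CAD system and conventional analysis; below is a generic sketch with illustrative values, not study data.

```python
import numpy as np

def bland_altman_limits(a, b):
    """Bias and 95% limits of agreement between two raters/methods
    (e.g., CAD system vs. orthodontist measurements, in mm)."""
    diff = np.asarray(a) - np.asarray(b)
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)   # half-width of the limits of agreement
    return bias, bias - loa, bias + loa

cad    = np.array([1.2, 0.8, 2.5, 3.1, 0.4])   # illustrative menton deviations
manual = np.array([1.0, 1.1, 2.3, 3.4, 0.2])
bias, lower, upper = bland_altman_limits(cad, manual)
print(f"bias {bias:.2f} mm, limits of agreement [{lower:.2f}, {upper:.2f}] mm")
```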
Affiliation(s)
- Seojeong Kim: Korea Electronics Technology Institute, Seongnam, Korea
- Kyungmin Clara Lee: Department of Orthodontics, School of Dentistry, Chonnam National University, Gwangju, Korea
45
Okimoto N, Yasaka K, Fujita N, Watanabe Y, Kanzawa J, Abe O. Deep learning reconstruction for improving the visualization of acute brain infarct on computed tomography. Neuroradiology 2024; 66:63-71. [PMID: 37991522 PMCID: PMC10761512 DOI: 10.1007/s00234-023-03251-5]
Abstract
PURPOSE This study aimed to investigate the impact of deep learning reconstruction (DLR) on acute infarct depiction compared with hybrid iterative reconstruction (Hybrid IR). METHODS This retrospective study included 29 (75.8 ± 13.2 years, 20 males) and 26 (64.4 ± 12.4 years, 18 males) patients with and without acute infarction, respectively. Unenhanced head CT images were reconstructed with DLR and Hybrid IR. In qualitative analyses, three readers evaluated the conspicuity of lesions based on five regions and image quality. A radiologist placed regions of interest on the lateral ventricle, putamen, and white matter in quantitative analyses, and the standard deviation of CT attenuation (i.e., quantitative image noise) was recorded. RESULTS Conspicuity of acute infarct in DLR was superior to that in Hybrid IR, and a statistically significant difference was observed for two readers (p ≤ 0.038). Conspicuity of acute infarct with time from onset to CT imaging at < 24 h in DLR was significantly improved compared with Hybrid IR for all readers (p ≤ 0.020). Image noise in DLR was significantly reduced compared with Hybrid IR with both the qualitative and quantitative analyses (p < 0.001 for all). CONCLUSION DLR in head CT helped improve acute infarct depiction, especially those with time from onset to CT imaging at < 24 h.
Affiliation(s)
- Naomasa Okimoto: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan; Department of Radiology, Tokyo Metropolitan Bokutoh Hospital, 4-23-15 Kotobashi, Sumida-Ku, Tokyo, 130-8575, Japan
- Koichiro Yasaka: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Nana Fujita: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Yusuke Watanabe: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Jun Kanzawa: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Osamu Abe: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
46
Yasaka K, Sato C, Hirakawa H, Fujita N, Kurokawa M, Watanabe Y, Kubo T, Abe O. Impact of deep learning on radiologists and radiology residents in detecting breast cancer on CT: a cross-vendor test study. Clin Radiol 2024; 79:e41-e47. [PMID: 37872026 DOI: 10.1016/j.crad.2023.09.022]
Abstract
AIM To investigate the effect of deep learning on the diagnostic performance of radiologists and radiology residents in detecting breast cancers on computed tomography (CT). MATERIALS AND METHODS In this retrospective study, patients undergoing contrast-enhanced chest CT between January 2010 and December 2020 using equipment from two vendors were included. Patients with confirmed breast cancer were categorised into training (n=201), validation (n=26), and test (n=30) groups, using processed CT images from either vendor. The trained deep-learning model was applied to test group patients with (30 females; mean age = 59.2 ± 15.8 years) and without (19 males, 21 females; mean age = 64 ± 15.9 years) breast cancer. The image-based diagnostic performance of the deep-learning model was evaluated with the area under the receiver operating characteristic curve (AUC). Two radiologists and three radiology residents were asked to detect malignant lesions by recording a four-point diagnostic confidence score before and after referring to the result from the deep-learning model, and their diagnostic performance was evaluated using jackknife alternative free-response receiver operating characteristic analysis by calculating the figure of merit (FOM). RESULTS The AUCs of the trained deep-learning model on the validation and test data were 0.976 and 0.967, respectively. After referencing the result of the deep-learning model, the FOMs of the readers significantly improved (readers 1/2/3/4/5: from 0.933/0.962/0.883/0.944/0.867 to 0.958/0.968/0.917/0.947/0.900; p=0.038). CONCLUSION Deep learning can help radiologists and radiology residents detect breast cancer on CT.
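The image-based AUC can be computed directly from per-image malignancy scores with scikit-learn; the labels and scores below are schematic stand-ins, not study data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

y_true  = np.array([1, 1, 1, 0, 0, 0, 0, 1])   # 1 = breast cancer present
y_score = np.array([0.91, 0.78, 0.45, 0.12, 0.30, 0.05, 0.52, 0.88])
print(f"AUC = {roc_auc_score(y_true, y_score):.3f}")
```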
Affiliation(s)
- K Yasaka: Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- C Sato: Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- H Hirakawa: Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- N Fujita: Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- M Kurokawa: Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Y Watanabe: Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- T Kubo: Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- O Abe: Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
47
Ueda D, Kakinuma T, Fujita S, Kamagata K, Fushimi Y, Ito R, Matsui Y, Nozaki T, Nakaura T, Fujima N, Tatsugami F, Yanagawa M, Hirata K, Yamada A, Tsuboyama T, Kawamura M, Fujioka T, Naganawa S. Fairness of artificial intelligence in healthcare: review and recommendations. Jpn J Radiol 2024; 42:3-15. [PMID: 37540463 PMCID: PMC10764412 DOI: 10.1007/s11604-023-01474-3]
Abstract
In this review, we address the issue of fairness in the clinical integration of artificial intelligence (AI) in the medical field. As the clinical adoption of deep learning algorithms, a subfield of AI, progresses, concerns have arisen regarding the impact of AI biases and discrimination on patient health. This review aims to provide a comprehensive overview of concerns associated with AI fairness; discuss strategies to mitigate AI biases; and emphasize the need for cooperation among physicians, AI researchers, AI developers, policymakers, and patients to ensure equitable AI integration. First, we define and introduce the concept of fairness in AI applications in healthcare and radiology, emphasizing the benefits and challenges of incorporating AI into clinical practice. Next, we delve into concerns regarding fairness in healthcare, addressing the various causes of biases in AI and potential concerns such as misdiagnosis, unequal access to treatment, and ethical considerations. We then outline strategies for addressing fairness, such as the importance of diverse and representative data and algorithm audits. Additionally, we discuss ethical and legal considerations such as data privacy, responsibility, accountability, transparency, and explainability in AI. Finally, we present the Fairness of Artificial Intelligence Recommendations in healthcare (FAIR) statement to offer best practices. Through these efforts, we aim to provide a foundation for discussing the responsible and equitable implementation and deployment of AI in healthcare.
Affiliation(s)
- Daiju Ueda: Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3 Asahi-Machi, Abeno-ku, Osaka, 545-8585, Japan
- Shohei Fujita: Department of Radiology, University of Tokyo, Bunkyo-ku, Tokyo, Japan
- Koji Kamagata: Department of Radiology, Juntendo University Graduate School of Medicine, Bunkyo-ku, Tokyo, Japan
- Yasutaka Fushimi: Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, Sakyoku, Kyoto, Japan
- Rintaro Ito: Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
- Yusuke Matsui: Department of Radiology, Faculty of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Kita-ku, Okayama, Japan
- Taiki Nozaki: Department of Radiology, Keio University School of Medicine, Shinjuku-ku, Tokyo, Japan
- Takeshi Nakaura: Department of Diagnostic Radiology, Kumamoto University Graduate School of Medicine, Chuo-ku, Kumamoto, Japan
- Noriyuki Fujima: Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Sapporo, Japan
- Fuminari Tatsugami: Department of Diagnostic Radiology, Hiroshima University, Minami-ku, Hiroshima, Japan
- Masahiro Yanagawa: Department of Radiology, Osaka University Graduate School of Medicine, Suita City, Osaka, Japan
- Kenji Hirata: Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, Kita-ku, Sapporo, Hokkaido, Japan
- Akira Yamada: Department of Radiology, Shinshu University School of Medicine, Matsumoto, Nagano, Japan
- Takahiro Tsuboyama: Department of Radiology, Osaka University Graduate School of Medicine, Suita City, Osaka, Japan
- Mariko Kawamura: Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
- Tomoyuki Fujioka: Department of Diagnostic Radiology, Tokyo Medical and Dental University, Bunkyo-ku, Tokyo, Japan
- Shinji Naganawa: Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
48
Takayama Y, Sato K, Tanaka S, Murayama R, Goto N, Yoshimitsu K. Deep learning-based magnetic resonance imaging reconstruction for improving the image quality of reduced-field-of-view diffusion-weighted imaging of the pancreas. World J Radiol 2023; 15:338-349. [PMID: 38179202 PMCID: PMC10762521 DOI: 10.4329/wjr.v15.i12.338]
Abstract
BACKGROUND It has been reported that deep learning-based reconstruction (DLR) can reduce image noise and artifacts, thereby improving the signal-to-noise ratio and image sharpness. However, no previous studies have evaluated the efficacy of DLR in improving image quality in reduced-field-of-view (reduced-FOV) diffusion-weighted imaging (DWI) [field-of-view optimized and constrained undistorted single-shot (FOCUS)] of the pancreas. We hypothesized that a combination of these techniques would improve DWI image quality without prolonging the scan time but would influence the apparent diffusion coefficient calculation. AIM To evaluate the efficacy of DLR for image quality improvement of FOCUS of the pancreas. METHODS This retrospective study evaluated 37 patients with pancreatic cystic lesions who underwent magnetic resonance imaging between August 2021 and October 2021. We evaluated three types of FOCUS examinations: FOCUS with DLR (FOCUS-DLR+), FOCUS without DLR (FOCUS-DLR-), and conventional FOCUS (FOCUS-conv). The three types of FOCUS and their apparent diffusion coefficient (ADC) maps were compared qualitatively and quantitatively. RESULTS FOCUS-DLR+ (3.62, average score of two radiologists) showed significantly better qualitative scores for image noise than FOCUS-DLR- (2.62) and FOCUS-conv (2.88) (P < 0.05). Furthermore, FOCUS-DLR+ showed the highest contrast ratio (CR) between the pancreatic parenchyma and adjacent fat tissue for b-values of 0 and 600 s/mm2 (0.72 ± 0.08 and 0.68 ± 0.08), and FOCUS-DLR- showed the highest CR between cystic lesions and the pancreatic parenchyma for b-values of 0 and 600 s/mm2 (0.62 ± 0.21 and 0.62 ± 0.21) (P < 0.05). FOCUS-DLR+ provided significantly higher ADCs of the pancreas and lesion (1.44 ± 0.24 and 3.00 ± 0.66) compared with FOCUS-DLR- (1.39 ± 0.22 and 2.86 ± 0.61) and significantly lower ADCs compared with FOCUS-conv (1.84 ± 0.45 and 3.32 ± 0.70) (P < 0.05). CONCLUSION This study evaluated the efficacy of DLR for image quality improvement in reduced-FOV DWI of the pancreas. DLR can significantly denoise images without prolonging the scan time or decreasing the spatial resolution, and the denoising level of DWI can be controlled to make the images appear more natural to the human eye. However, this study revealed that DLR did not ameliorate pancreatic distortion. Additionally, physicians should pay attention to the interpretation of ADCs after DLR application because ADCs are significantly changed by DLR.
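The contrast ratio (CR) between two tissues is presumably derived from mean ROI signals; the sketch below assumes the common definition CR = |Sa - Sb| / (Sa + Sb), which the abstract does not spell out.

```python
import numpy as np

def contrast_ratio(signal_a, signal_b):
    """Contrast ratio between two tissues from mean ROI signals,
    assuming CR = |Sa - Sb| / (Sa + Sb)."""
    return abs(signal_a - signal_b) / (signal_a + signal_b)

pancreas_roi = np.array([310.0, 325.0, 298.0])   # illustrative DWI signals
fat_roi      = np.array([52.0, 49.0, 55.0])
print(round(contrast_ratio(pancreas_roi.mean(), fat_roi.mean()), 2))
```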
Affiliation(s)
- Yukihisa Takayama: Department of Radiology, Faculty of Medicine, Fukuoka University, Fukuoka 8140180, Japan
- Keisuke Sato: Department of Radiology, Faculty of Medicine, Fukuoka University, Fukuoka 8140180, Japan
- Shinji Tanaka: Department of Radiology, Faculty of Medicine, Fukuoka University, Fukuoka 8140180, Japan
- Ryo Murayama: Department of Radiology, Faculty of Medicine, Fukuoka University, Fukuoka 8140180, Japan
- Nahoko Goto: Department of Radiology, Faculty of Medicine, Fukuoka University, Fukuoka 8140180, Japan
- Kengo Yoshimitsu: Department of Radiology, Faculty of Medicine, Fukuoka University, Fukuoka 8140180, Japan
49
Chen W, Ayoub M, Liao M, Shi R, Zhang M, Su F, Huang Z, Li Y, Wang Y, Wong KK. A fusion of VGG-16 and ViT models for improving bone tumor classification in computed tomography. J Bone Oncol 2023; 43:100508. [PMID: 38021075 PMCID: PMC10654018 DOI: 10.1016/j.jbo.2023.100508]
Abstract
Background and Objective: Bone tumors present significant challenges in orthopedic medicine due to variations in clinical treatment approaches for different tumor types, which include benign, malignant, and intermediate cases. Convolutional neural networks (CNNs) have emerged as prominent models for tumor classification, but their limited perception ability hinders the acquisition of global structural information, potentially affecting classification accuracy. To address this limitation, we propose an optimized deep learning algorithm for the precise classification of diverse bone tumors. Materials and Methods: Our dataset comprises 786 computed tomography (CT) images of bone tumors, featuring sections from two distinct bones, namely the tibia and femur. Sourced from The Second Affiliated Hospital of Fujian Medical University, the dataset was meticulously preprocessed with noise reduction techniques. We introduce a novel fusion model, VGG16-ViT, leveraging the advantages of the VGG-16 network and the Vision Transformer (ViT) model. Specifically, we select 27 features from the third layer of VGG-16 and input them into the Vision Transformer encoder for comprehensive training. Furthermore, we evaluate the impact of secondary migration using CT images from Xiangya Hospital for validation. Results: The proposed fusion model demonstrates notable improvements in classification performance. It effectively reduces the training time while achieving an impressive classification accuracy of 97.6%, marking a significant enhancement of 8% in sensitivity and specificity. Furthermore, the investigation into the effects of secondary migration on experimental outcomes across the three models reveals its potential to enhance system performance. Conclusion: Our novel VGG-16 and Vision Transformer joint network exhibits robust classification performance on bone tumor datasets. The integration of these models enables precise and efficient classification that accommodates the diverse characteristics of different bone tumor types. This advancement holds great significance for the early detection and prognosis of bone tumor patients in the future.
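The fusion idea, treating intermediate CNN feature maps as a token sequence for a Transformer encoder, can be sketched in PyTorch as below. The cut point in VGG-16 and all sizes are assumptions for illustration; the paper's exact configuration of 27 features from the third layer is not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision import models

class VGGViTFusion(nn.Module):
    """Sketch of a VGG-16/Transformer fusion: convolutional feature maps
    become a token sequence for a Transformer encoder, followed by a
    3-class head (benign / intermediate / malignant)."""
    def __init__(self, n_classes=3, d_model=256, depth=4):
        super().__init__()
        vgg = models.vgg16(weights=None)       # pretrained weights optional
        self.cnn = vgg.features[:17]           # end of third conv stage
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):
        f = self.cnn(x)                        # (B, 256, 28, 28) for 224 input
        tokens = f.flatten(2).transpose(1, 2)  # (B, 784, 256): one token per cell
        return self.head(self.encoder(tokens).mean(dim=1))

model = VGGViTFusion()
print(model(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 3])
```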
Affiliation(s)
- Weimin Chen: School of Information and Electronics, Hunan City University, Yiyang 413000, China
- Muhammad Ayoub: School of Computer Science and Engineering, Central South University, Changsha 410083, Hunan, China
- Mengyun Liao: School of Computer Science and Engineering, Central South University, Changsha 410083, Hunan, China
- Ruizheng Shi: National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha 410008, Hunan, China
- Mu Zhang: Department of Emergency, Xiangya Hospital, Central South University, Changsha 410008, Hunan, China
- Feng Su: Department of Emergency, Xiangya Hospital, Central South University, Changsha 410008, Hunan, China
- Zhiguo Huang: Department of Emergency, Xiangya Hospital, Central South University, Changsha 410008, Hunan, China
- Yuanzhe Li: Department of CT/MRI, The Second Affiliated Hospital of Fujian Medical University, Quanzhou 362000, China
- Yi Wang: Department of CT/MRI, The Second Affiliated Hospital of Fujian Medical University, Quanzhou 362000, China
- Kevin K.L. Wong: School of Information and Electronics, Hunan City University, Yiyang 413000, China; Department of Mechanical Engineering, College of Engineering, University of Saskatchewan, Saskatoon, SK S7N 5A9, Canada
50
Wada T, Takahashi M, Matsunaga H, Kawai G, Kaneshima R, Machida M, Fujita N, Matsuoka Y. An automated screening model for aortic emergencies using convolutional neural networks and cropped computed tomography angiography images of the aorta. Int J Comput Assist Radiol Surg 2023; 18:2253-2260. [PMID: 37326817 DOI: 10.1007/s11548-023-02979-y]
Abstract
PURPOSE Patients with aortic emergencies, such as aortic dissection and rupture, are at risk of rapid deterioration, necessitating prompt diagnosis. This study introduces a novel automated screening model for computed tomography angiography (CTA) of patients with aortic emergencies, utilizing deep convolutional neural network (DCNN) algorithms. METHODS Our model (Model A) initially predicted the positions of the aorta in the original axial CTA images and extracted the sections containing the aorta from these images. Subsequently, it predicted whether the cropped images showed aortic lesions. To compare the predictive performance of Model A in identifying aortic emergencies, we also developed Model B, which directly predicted the presence or absence of aortic lesions in the original images. Ultimately, these models categorized patients based on the presence or absence of aortic emergencies, as determined by the number of consecutive images expected to show the lesion. RESULTS The models were trained with 216 CTA scans and tested with 220 CTA scans. Model A demonstrated a higher area under the curve (AUC) for patient-level classification of aortic emergencies than Model B (0.995; 95% confidence interval [CI], 0.990-1.000 vs. 0.972; 95% CI, 0.950-0.994, respectively; p = 0.013). Among patients with aortic emergencies, the AUC of Model A for patient-level classification of aortic emergencies involving the ascending aorta was 0.971 (95% CI, 0.931-1.000). CONCLUSION The model utilizing DCNNs and cropped CTA images of the aorta effectively screened CTA scans of patients with aortic emergencies. This study would help develop a computer-aided triage system for CT scans, prioritizing the reading for patients requiring urgent care and ultimately promoting rapid responses to patients with aortic emergencies.
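The patient-level rule, flagging a scan when a minimum number of consecutive images are predicted to show a lesion, can be sketched as follows; the probability threshold and run length are illustrative assumptions, as the abstract does not state the exact values.

```python
def patient_positive(slice_probs, thr=0.5, min_consecutive=3):
    """Patient-level triage rule: flag the scan when at least
    `min_consecutive` consecutive slices are predicted positive."""
    run = 0
    for p in slice_probs:               # per-slice lesion probabilities
        run = run + 1 if p >= thr else 0
        if run >= min_consecutive:
            return True
    return False

print(patient_positive([0.2, 0.7, 0.8, 0.9, 0.3]))  # True (3 in a row)
print(patient_positive([0.9, 0.2, 0.8, 0.1, 0.7]))  # False (no run of 3)
```

Requiring a run of consecutive positive slices suppresses isolated false positives on single images, which matters for a screening tool meant to prioritize reading order.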
Affiliation(s)
- Tomoki Wada: Department of Radiology, Tokyo Metropolitan Bokutoh Hospital, 4-23-15 Kotobashi, Sumida-ku, Tokyo, Japan
- Masamichi Takahashi: Department of Radiology, Tokyo Metropolitan Bokutoh Hospital, 4-23-15 Kotobashi, Sumida-ku, Tokyo, Japan
- Hiroki Matsunaga: Tertiary Emergency Medical Center, Tokyo Metropolitan Bokutoh Hospital, 4-23-15 Kotobashi, Sumida-ku, Tokyo, Japan
- Go Kawai: Department of Radiology, Tokyo Metropolitan Bokutoh Hospital, 4-23-15 Kotobashi, Sumida-ku, Tokyo, Japan
- Risa Kaneshima: Department of Radiology, Toranomon Hospital, 2-2-2 Toranomon, Minato-ku, Tokyo, Japan
- Munetaka Machida: Department of Radiology, Tokyo Metropolitan Bokutoh Hospital, 4-23-15 Kotobashi, Sumida-ku, Tokyo, Japan
- Nana Fujita: Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
- Yujiro Matsuoka: Department of Radiology, Tokyo Metropolitan Bokutoh Hospital, 4-23-15 Kotobashi, Sumida-ku, Tokyo, Japan