1
Huang L, Feng B, Yang Z, Feng ST, Liu Y, Xue H, Shi J, Chen Q, Zhou T, Chen X, Wan C, Chen X, Long W. A Transfer Learning Radiomics Nomogram to Predict the Postoperative Recurrence of Advanced Gastric Cancer. J Gastroenterol Hepatol 2025; 40:844-854. [PMID: 39730209 DOI: 10.1111/jgh.16863] [Received: 06/10/2024] [Revised: 10/15/2024] [Accepted: 12/10/2024] [Indexed: 12/29/2024]
Abstract
BACKGROUND AND AIM In this study, a transfer learning (TL) algorithm was used to predict postoperative recurrence of advanced gastric cancer (AGC) and to evaluate its value in a small-sample clinical study. METHODS A total of 431 cases of AGC from three centers were included in this retrospective study. First, TL signatures (TLSs) were constructed from different source domains, including whole-slide images (TLS-WSI) and natural images (TLS-ImageNet). A clinical model and a non-TL signature (non-TLS) based on CT images were constructed simultaneously. Second, a TL radiomics model (TLRM) was constructed by combining the optimal TLS with clinical factors. Finally, model performance was evaluated by ROC analysis, and clinical utility was assessed using the integrated discrimination improvement (IDI) and decision curve analysis (DCA). RESULTS TLS-WSI significantly outperformed TLS-ImageNet, non-TLS, and the clinical model (p < 0.05). The AUC of TLS-WSI was 0.9459 (95% CI: 0.9054, 0.9863) in the training cohort and ranged from 0.8050 (95% CI: 0.7130, 0.8969) to 0.8984 (95% CI: 0.8420, 0.9547) in the validation cohorts. TLS-WSI and a nodular or irregular outer layer of the gastric wall were selected to construct the TLRM. The AUC of the TLRM was 0.9643 (95% CI: 0.9349, 0.9936) in the training cohort and ranged from 0.8561 (95% CI: 0.7571, 0.9552) to 0.9195 (95% CI: 0.8670, 0.9721) in the validation cohorts. The IDI and DCA showed that the TLRM outperformed the other models. CONCLUSION TLS-WSI can be used to predict postoperative recurrence in AGC, and the TLRM is more effective still. TL can effectively improve the performance of clinical research models with small sample sizes.
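The model comparisons above rest on two standard discrimination metrics, the AUC and the integrated discrimination improvement (IDI). As a minimal illustration of how both fall out of predicted probabilities and recurrence labels (a generic sketch, not the authors' code; all variable names and data are hypothetical):

```python
def auc(scores, labels):
    # AUC via the Mann-Whitney statistic: the probability that a randomly
    # chosen recurrence case scores higher than a randomly chosen
    # non-recurrence case (ties count half).
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def idi(p_new, p_old, labels):
    # Integrated discrimination improvement: the gain in mean predicted
    # risk among events minus the gain among non-events.
    ev = [i for i, y in enumerate(labels) if y == 1]
    ne = [i for i, y in enumerate(labels) if y == 0]
    gain_ev = sum(p_new[i] - p_old[i] for i in ev) / len(ev)
    gain_ne = sum(p_new[i] - p_old[i] for i in ne) / len(ne)
    return gain_ev - gain_ne
```

A positive IDI for the TLRM against a baseline model would indicate that the nomogram moves predicted risks toward 1 for patients who recur and toward 0 for those who do not.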
Affiliation(s)
- Liebin Huang
- Department of Medical Imaging Center, The First Affiliated Hospital of Jinan University, Guangzhou, China
- Department of Radiology, Jiangmen Central Hospital, Jiangmen, China
- Bao Feng
- Department of Radiology, Jiangmen Central Hospital, Jiangmen, China
- Laboratory of Intelligent Detection and Information Processing, Guilin University of Aerospace Technology, Guilin, China
- Zhiqi Yang
- Department of Radiology, Meizhou People's Hospital, Meizhou, China
- Shi-Ting Feng
- Department of Radiology, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Yu Liu
- Laboratory of Intelligent Detection and Information Processing, Guilin University of Aerospace Technology, Guilin, China
- Huimin Xue
- Department of Radiology, Jiangmen Central Hospital, Jiangmen, China
- Jiangfeng Shi
- Laboratory of Intelligent Detection and Information Processing, Guilin University of Aerospace Technology, Guilin, China
- Qinxian Chen
- Department of Radiology, Jiangmen Central Hospital, Jiangmen, China
- Tao Zhou
- Department of Radiology, Jiangmen Central Hospital, Jiangmen, China
- Xiangguang Chen
- Department of Radiology, Meizhou People's Hospital, Meizhou, China
- Cuixia Wan
- Department of Radiology, Meizhou People's Hospital, Meizhou, China
- Xiaofeng Chen
- Department of Radiology, Meizhou People's Hospital, Meizhou, China
- Wansheng Long
- Department of Medical Imaging Center, The First Affiliated Hospital of Jinan University, Guangzhou, China
- Department of Radiology, Jiangmen Central Hospital, Jiangmen, China
2
Curto-Vilalta A, Schlossmacher B, Valle C, Gersing A, Neumann J, von Eisenhart-Rothe R, Rueckert D, Hinterwimmer F. Semi-supervised Label Generation for 3D Multi-modal MRI Bone Tumor Segmentation. J Imaging Inform Med 2025:10.1007/s10278-025-01448-z. [PMID: 39979760 DOI: 10.1007/s10278-025-01448-z] [Received: 12/17/2024] [Revised: 01/17/2025] [Accepted: 02/10/2025] [Indexed: 02/22/2025]
Abstract
Medical image segmentation is challenging because it requires expert annotations, and these manually created labels vary between annotators. Previous methods tackling label variability focus on 2D segmentation and single modalities, but reliable 3D multi-modal approaches are necessary for clinical applications such as oncology. In this paper, we propose a framework for generating reliable and unbiased labels with minimal radiologist input for supervised 3D segmentation, reducing radiologists' effort and the variability of manual labeling. Our framework generates AI-assisted labels through a two-step process: 3D multi-modal unsupervised segmentation based on feature clustering, followed by semi-supervised refinement. These labels are then compared against traditional expert-generated labels in a downstream 3D multi-modal bone tumor segmentation task. Two 3D U-Net models are trained, one with manually created expert labels and the other with AI-assisted labels, and a blind evaluation of the two models' segmentations is performed to assess the reliability of the training labels. The framework effectively generated accurate segmentation labels with minimal expert input, achieving state-of-the-art performance. The model trained with AI-assisted labels outperformed the baseline model in 61.67% of blind evaluations, indicating improved segmentation quality and demonstrating the potential of AI-assisted labeling to reduce radiologists' workload and improve label reliability for 3D multi-modal bone tumor segmentation. The code is available at https://github.com/acurtovilalta/3D_LabelGeneration.
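The first, unsupervised step above proposes candidate labels by clustering per-voxel features. A toy sketch of that idea, using plain k-means over multi-modal intensity vectors (one vector per voxel, one component per MRI sequence); the function, the choice of k = 2, and the data are illustrative assumptions, not the paper's pipeline:

```python
import random

def kmeans(points, k=2, iters=20, seed=0):
    # Cluster feature vectors into k classes (e.g. tumor vs. background)
    # by alternating nearest-center assignment and center recomputation.
    rng = random.Random(seed)
    centers = rng.sample(points, k)

    def nearest(p):
        return min(range(k),
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))

    labels = [0] * len(points)
    for _ in range(iters):
        labels = [nearest(p) for p in points]
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:  # keep the old center if a cluster empties out
                centers[c] = tuple(sum(vals) / len(members)
                                   for vals in zip(*members))
    return centers, labels
```

In the paper's setting the cluster assignments would then be refined semi-supervisedly before serving as training labels; here the output is just a hard partition of the voxels.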
Affiliation(s)
- Anna Curto-Vilalta
- Department of Orthopedics and Sports Orthopedics, Klinikum Rechts Der Isar, Technical University of Munich, Ismaninger Strasse 22, 81675, Munich, Germany
- Institute for AI and Informatics in Medicine, Technical University of Munich, Einsteinstrasse 25, 81675, Munich, Germany
- Benjamin Schlossmacher
- Department of Orthopedics and Sports Orthopedics, Klinikum Rechts Der Isar, Technical University of Munich, Ismaninger Strasse 22, 81675, Munich, Germany
- Christina Valle
- Department of Orthopedics and Sports Orthopedics, Klinikum Rechts Der Isar, Technical University of Munich, Ismaninger Strasse 22, 81675, Munich, Germany
- Alexandra Gersing
- Musculoskeletal Radiology Section, Klinikum Rechts Der Isar, Technical University of Munich, Ismaninger Strasse 22, 81675, Munich, Germany
- Jan Neumann
- Musculoskeletal Radiology Section, Klinikum Rechts Der Isar, Technical University of Munich, Ismaninger Strasse 22, 81675, Munich, Germany
- Kantonsspital Graubünden, KSGR, Loëstrasse 170, 7000, Chur, Switzerland
- Ruediger von Eisenhart-Rothe
- Department of Orthopedics and Sports Orthopedics, Klinikum Rechts Der Isar, Technical University of Munich, Ismaninger Strasse 22, 81675, Munich, Germany
- Daniel Rueckert
- Institute for AI and Informatics in Medicine, Technical University of Munich, Einsteinstrasse 25, 81675, Munich, Germany
- Florian Hinterwimmer
- Department of Orthopedics and Sports Orthopedics, Klinikum Rechts Der Isar, Technical University of Munich, Ismaninger Strasse 22, 81675, Munich, Germany
- Institute for AI and Informatics in Medicine, Technical University of Munich, Einsteinstrasse 25, 81675, Munich, Germany
3
Boers TGW, Fockens KN, van der Putten JA, Jaspers TJM, Kusters CHJ, Jukema JB, Jong MR, Struyvenberg MR, de Groof J, Bergman JJ, de With PHN, van der Sommen F. Foundation models in gastrointestinal endoscopic AI: Impact of architecture, pre-training approach and data efficiency. Med Image Anal 2024; 98:103298. [PMID: 39173410 DOI: 10.1016/j.media.2024.103298] [Received: 06/05/2023] [Revised: 07/18/2024] [Accepted: 08/06/2024] [Indexed: 08/24/2024]
Abstract
Pre-training deep learning models on large datasets of natural images, such as ImageNet, has become the standard for endoscopic image analysis. This approach is generally superior to training from scratch, given the scarcity of high-quality medical imagery and labels. However, it remains unknown whether features learned on natural imagery provide an optimal starting point for downstream medical endoscopic imaging tasks. Intuitively, pre-training with imagery closer to the target domain could yield better-suited feature representations. This study evaluates whether in-domain pre-training for gastrointestinal endoscopic image analysis offers benefits over pre-training on natural images. To this end, we present a dataset comprising 5,014,174 gastrointestinal endoscopic images from eight medical centers (GastroNet-5M), and exploit self-supervised learning with SimCLRv2, MoCov2, and DINO to learn relevant features for in-domain downstream tasks. The learned features are compared to features learned on natural images with multiple methods and variable amounts of data and/or labels (e.g., billion-scale semi-weakly supervised learning and supervised learning on ImageNet-21k). The evaluation is performed on five downstream datasets designed for a variety of gastrointestinal tasks, for example GIANA for angiodysplasia detection and Kvasir-SEG for polyp segmentation. The findings indicate that self-supervised domain-specific pre-training, specifically with the DINO framework, yields better-performing models than any supervised pre-training on natural images. On the ResNet50 and Vision-Transformer-small architectures, self-supervised in-domain pre-training with DINO leads to an average performance boost of 1.63% and 4.62%, respectively, on the downstream datasets, measured against the best performance achieved through pre-training on natural images within any of the evaluated frameworks. Moreover, the in-domain pre-trained models are also more robust to distortion perturbations (noise, contrast, blur, etc.): the in-domain pre-trained ResNet50 and Vision-Transformer-small with DINO scored on average 1.28% and 3.55% higher on the performance metrics than the best models pre-trained on natural images. Overall, this study highlights the importance of in-domain pre-training for improving the generalizability, scalability, and performance of deep learning for medical image analysis. The GastroNet-5M pre-trained weights are publicly available in our repository: huggingface.co/tgwboers/GastroNet-5M_Pretrained_Weights.
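DINO, the best-performing pre-training framework in this study, trains a student network to match a momentum teacher through a cross-entropy self-distillation loss, with the teacher output centered and sharpened by a low temperature. A stripped-down sketch of that loss for a single pair of projected views (pure Python; the temperature values are common DINO defaults and the vectors are made up, not this paper's settings):

```python
import math

def softmax(logits, temp):
    # Numerically stable softmax at a given temperature.
    m = max(l / temp for l in logits)
    exps = [math.exp(l / temp - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def dino_loss(student_logits, teacher_logits, center,
              t_student=0.1, t_teacher=0.04):
    # Cross-entropy between the (centered, sharpened) teacher
    # distribution and the student distribution. The center term
    # discourages collapse to a single output dimension.
    t = softmax([l - c for l, c in zip(teacher_logits, center)], t_teacher)
    s = softmax(student_logits, t_student)
    return -sum(ti * math.log(si) for ti, si in zip(t, s))
```

In the full framework this loss is averaged over multiple augmented crops, the teacher is an exponential moving average of the student, and the center is updated online; none of that machinery is shown here.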
Affiliation(s)
- Tim G W Boers
- Eindhoven University of Technology, Groene Loper 3, 5612 AE Eindhoven, The Netherlands
- Kiki N Fockens
- Amsterdam UMC, Location VUmc, De Boelelaan 1117, 1081 HV Amsterdam, The Netherlands
- Tim J M Jaspers
- Eindhoven University of Technology, Groene Loper 3, 5612 AE Eindhoven, The Netherlands
- Carolus H J Kusters
- Eindhoven University of Technology, Groene Loper 3, 5612 AE Eindhoven, The Netherlands
- Jelmer B Jukema
- Amsterdam UMC, Location VUmc, De Boelelaan 1117, 1081 HV Amsterdam, The Netherlands
- Martijn R Jong
- Amsterdam UMC, Location VUmc, De Boelelaan 1117, 1081 HV Amsterdam, The Netherlands
- Jeroen de Groof
- Amsterdam UMC, Location VUmc, De Boelelaan 1117, 1081 HV Amsterdam, The Netherlands
- Jacques J Bergman
- Amsterdam UMC, Location VUmc, De Boelelaan 1117, 1081 HV Amsterdam, The Netherlands
- Peter H N de With
- Eindhoven University of Technology, Groene Loper 3, 5612 AE Eindhoven, The Netherlands
- Fons van der Sommen
- Eindhoven University of Technology, Groene Loper 3, 5612 AE Eindhoven, The Netherlands
4
Chang Q, Bai Y, Wang S, Wang F, Wang Y, Zuo F, Xie X. Automatic soft-tissue analysis on orthodontic frontal and lateral facial photographs based on deep learning. Orthod Craniofac Res 2024; 27:893-902. [PMID: 38967085 DOI: 10.1111/ocr.12830] [Accepted: 06/18/2024] [Indexed: 07/06/2024]
Abstract
BACKGROUND To establish an automatic soft-tissue analysis model based on deep learning that performs landmark detection and measurement calculations on orthodontic facial photographs, achieving a more comprehensive quantitative evaluation of soft tissues. METHODS A total of 578 frontal photographs and 450 lateral photographs of orthodontic patients were collected to construct the datasets. All images were manually annotated by two orthodontists with 43 frontal-image landmarks and 17 lateral-image landmarks. Automatic landmark detection models were established, consisting of a high-resolution network, a feature fusion module based on depthwise separable convolution, and a prediction module based on pixel shuffle. Ten measurements for frontal images and eight for lateral images were defined. Separate test sets were used to evaluate the performance of each model. The mean radial error of the landmarks and the measurement errors were calculated and statistically analysed to evaluate reliability. RESULTS The mean radial error was 14.44 ± 17.20 pixels for landmarks in the frontal images and 13.48 ± 17.12 pixels for landmarks in the lateral images. There was no statistically significant difference between model predictions and manual annotation measurements except for the midfacial-lower facial height index, and 14 measurements showed high consistency. CONCLUSION Based on deep learning, we established automatic soft-tissue analysis models for orthodontic facial photographs that can automatically detect 43 frontal-image landmarks and 17 lateral-image landmarks while performing comprehensive soft-tissue measurements. The models can assist orthodontists in efficient and accurate quantitative soft-tissue evaluation in clinical practice.
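The landmark accuracy reported above is the mean radial error, i.e. the average Euclidean distance, in pixels, between predicted and manually annotated landmark coordinates. A minimal sketch of the metric (the coordinates are hypothetical; this is the standard formula, not the authors' evaluation code):

```python
import math

def mean_radial_error(pred, truth):
    # Mean radial error: average Euclidean distance (in pixels) between
    # predicted landmark coordinates and the reference annotations.
    dists = [math.dist(p, t) for p, t in zip(pred, truth)]
    return sum(dists) / len(dists)
```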
Affiliation(s)
- Qiao Chang
- Department of Orthodontics, School of Stomatology, Capital Medical University, Beijing, China
- Yuxing Bai
- Department of Orthodontics, School of Stomatology, Capital Medical University, Beijing, China
- Shaofeng Wang
- Department of Orthodontics, School of Stomatology, Capital Medical University, Beijing, China
- Fan Wang
- Department of Orthodontics, School of Stomatology, Capital Medical University, Beijing, China
- Yajie Wang
- Department of Engineering Physics, Tsinghua University, Beijing, China
- LargeV Instrument Corporation Limited, Beijing, China
- Feifei Zuo
- LargeV Instrument Corporation Limited, Beijing, China
- Xianju Xie
- Department of Orthodontics, School of Stomatology, Capital Medical University, Beijing, China
5
Liu Z, Zhang H, Zhang M, Qu C, Li L, Sun Y, Ma X. Compare three deep learning-based artificial intelligence models for classification of calcified lumbar disc herniation: a multicenter diagnostic study. Front Surg 2024; 11:1458569. [PMID: 39569028 PMCID: PMC11576459 DOI: 10.3389/fsurg.2024.1458569] [Received: 07/02/2024] [Accepted: 10/21/2024] [Indexed: 11/22/2024]
Abstract
Objective To develop and validate an artificial intelligence diagnostic model for identifying calcified lumbar disc herniation based on lateral lumbar magnetic resonance imaging (MRI). Methods Patients meeting the inclusion criteria were collected from January 2019 to March 2024. All patients had undergone both lumbar spine MRI and computed tomography (CT) examinations, with regions of interest (ROI) clearly marked on the sagittal lumbar MRI images. The participants were then divided into training, test, and external validation sets. We developed a deep learning model based on the ResNet-34 architecture and evaluated its diagnostic efficacy. Results A total of 1,224 eligible patients were included in this study, consisting of 610 males and 614 females, with an average age of 53.34 ± 10.61 years. The model achieved a classification accuracy of 91.67% on the test dataset and 88.76% on the external validation dataset. On the test dataset, the ResNet-34 model outperformed the other models, yielding the highest area under the curve (AUC) of 0.96 (95% CI: 0.93, 0.99); it also performed best on the external validation dataset, with an AUC of 0.88 (95% CI: 0.80, 0.93). Conclusion We established a deep learning model with excellent performance in identifying calcified intervertebral discs, offering a valuable and efficient diagnostic tool for clinical surgeons.
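The reported accuracies summarize binary predictions against the CT-confirmed reference standard. As a small sketch of how such cohort-level metrics fall out of a confusion matrix (the labels and predictions below are made up for illustration; 1 = calcified herniation, 0 = non-calcified):

```python
def classification_metrics(preds, labels):
    # Accuracy, sensitivity, and specificity for a binary classifier,
    # computed from the four confusion-matrix cells.
    tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
    tn = sum(p == 0 and y == 0 for p, y in zip(preds, labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))
    fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))
    return {
        "accuracy": (tp + tn) / len(labels),
        "sensitivity": tp / (tp + fn),  # recall on calcified discs
        "specificity": tn / (tn + fp),  # recall on non-calcified discs
    }
```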
Affiliation(s)
- Zhiming Liu
- Department of Spine Surgery, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
- Hao Zhang
- Department of Spine Surgery, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
- Min Zhang
- Department of Neonatology, The Second Affiliated Hospital and Yuying Children's Hospital of Wenzhou Medical University, Wenzhou, Zhejiang, China
- Changpeng Qu
- Department of Spine Surgery, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
- Lei Li
- Department of Spine Surgery, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
- Yihao Sun
- Department of Spine Surgery, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
- Xuexiao Ma
- Department of Spine Surgery, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
6
Bareja R, Ismail M, Martin D, Nayate A, Yadav I, Labbad M, Dullur P, Garg S, Tamrazi B, Salloum R, Margol A, Judkins A, Iyer S, de Blank P, Tiwari P. nnU-Net-based Segmentation of Tumor Subcompartments in Pediatric Medulloblastoma Using Multiparametric MRI: A Multi-institutional Study. Radiol Artif Intell 2024; 6:e230115. [PMID: 39166971 PMCID: PMC11427926 DOI: 10.1148/ryai.230115] [Received: 04/10/2023] [Revised: 06/21/2024] [Accepted: 07/30/2024] [Indexed: 08/23/2024]
Abstract
Purpose To evaluate nnU-Net-based segmentation models for automated delineation of medulloblastoma tumors on multi-institutional MRI scans. Materials and Methods This retrospective study included 78 pediatric patients (52 male, 26 female), aged 2-18 years, with medulloblastoma from three sites (28 from hospital A, 18 from hospital B, and 32 from hospital C), each with data from three clinical MRI protocols (gadolinium-enhanced T1-weighted, T2-weighted, and fluid-attenuated inversion recovery). The scans were retrospectively collected from 2000 until May 2019. Reference standard annotations of the tumor habitat, comprising the enhancing tumor, edema, and cystic core plus nonenhancing tumor subcompartments, were performed by two experienced neuroradiologists. Preprocessing included registration to age-appropriate atlases, skull stripping, bias correction, and intensity matching. Two models were trained: (a) a transfer learning nnU-Net, pretrained on an adult glioma cohort (n = 484) and fine-tuned on the medulloblastoma studies using Models Genesis, and (b) a direct deep learning nnU-Net, trained directly on the medulloblastoma datasets with fivefold cross-validation. Model robustness was evaluated across different combinations of training and test sets, with data from two sites at a time used for training and data from the third site used for testing. Results Analysis on the three test sites yielded Dice scores of 0.81, 0.86, and 0.86 versus 0.80, 0.86, and 0.85 for the tumor habitat; 0.68, 0.84, and 0.77 versus 0.67, 0.83, and 0.76 for enhancing tumor; 0.56, 0.71, and 0.69 versus 0.56, 0.71, and 0.70 for edema; and 0.32, 0.48, and 0.43 versus 0.29, 0.44, and 0.41 for cystic core plus nonenhancing tumor, for the transfer learning and direct nnU-Net models, respectively. The models were largely robust to site-specific variations. Conclusion nnU-Net segmentation models hold promise for accurate, robust automated delineation of medulloblastoma tumor subcompartments, potentially leading to more effective radiation therapy planning in pediatric medulloblastoma. Keywords: Pediatrics, MR Imaging, Segmentation, Transfer Learning, Medulloblastoma, nnU-Net, MRI. Supplemental material is available for this article. © RSNA, 2024. See also the commentary by Rudie and Correia de Verdier in this issue.
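All of the results above are Dice similarity coefficients between a predicted and a reference segmentation mask. For reference, a minimal Dice implementation over flattened binary voxel lists (the standard formula 2|A∩B| / (|A| + |B|), not the study's evaluation code):

```python
def dice(pred, truth):
    # Dice similarity coefficient between two binary masks given as
    # flattened 0/1 voxel lists; defined as 1.0 when both masks are empty.
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0
```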
Affiliation(s)
- Rohan Bareja
- From the Department of Radiology, University of Wisconsin-Madison, Madison, Wis (R.B., M.I., I.Y.); University Hospitals, Cleveland, Ohio (D.M., A.N.); Departments of Biomedical Engineering (M.L., S.G., S.I.) and Neurosciences (P.D.), Case Western Reserve University, Cleveland, Ohio; Department of Radiology, Children’s Hospital Los Angeles, Los Angeles, Calif (B.T.); Division of Hematology, Oncology & Bone Marrow Transplant, Nationwide Children’s Hospital, Columbus, Ohio (R.S.); Department of Pediatrics, Keck School of Medicine of University of Southern California, Children’s Hospital Los Angeles, Los Angeles, Calif (A.M.); Department of Pathology, Children’s Hospital Los Angeles, Los Angeles, Calif (A.J.); Division of Oncology, Cincinnati Children’s Hospital Medical Center, Cincinnati, Ohio (P.d.B.); William S. Middleton Memorial Veterans Affairs (VA) Healthcare, Madison, Wis (P.T.); and Department of Radiology and Biomedical Engineering, University of Wisconsin-Madison, 750 Highland Ave, Madison, WI 53726 (P.T.)
| | - Marwa Ismail
- From the Department of Radiology, University of Wisconsin-Madison, Madison, Wis (R.B., M.I., I.Y.); University Hospitals, Cleveland, Ohio (D.M., A.N.); Departments of Biomedical Engineering (M.L., S.G., S.I.) and Neurosciences (P.D.), Case Western Reserve University, Cleveland, Ohio; Department of Radiology, Children’s Hospital Los Angeles, Los Angeles, Calif (B.T.); Division of Hematology, Oncology & Bone Marrow Transplant, Nationwide Children’s Hospital, Columbus, Ohio (R.S.); Department of Pediatrics, Keck School of Medicine of University of Southern California, Children’s Hospital Los Angeles, Los Angeles, Calif (A.M.); Department of Pathology, Children’s Hospital Los Angeles, Los Angeles, Calif (A.J.); Division of Oncology, Cincinnati Children’s Hospital Medical Center, Cincinnati, Ohio (P.d.B.); William S. Middleton Memorial Veterans Affairs (VA) Healthcare, Madison, Wis (P.T.); and Department of Radiology and Biomedical Engineering, University of Wisconsin-Madison, 750 Highland Ave, Madison, WI 53726 (P.T.)
| | - Douglas Martin
- From the Department of Radiology, University of Wisconsin-Madison, Madison, Wis (R.B., M.I., I.Y.); University Hospitals, Cleveland, Ohio (D.M., A.N.); Departments of Biomedical Engineering (M.L., S.G., S.I.) and Neurosciences (P.D.), Case Western Reserve University, Cleveland, Ohio; Department of Radiology, Children’s Hospital Los Angeles, Los Angeles, Calif (B.T.); Division of Hematology, Oncology & Bone Marrow Transplant, Nationwide Children’s Hospital, Columbus, Ohio (R.S.); Department of Pediatrics, Keck School of Medicine of University of Southern California, Children’s Hospital Los Angeles, Los Angeles, Calif (A.M.); Department of Pathology, Children’s Hospital Los Angeles, Los Angeles, Calif (A.J.); Division of Oncology, Cincinnati Children’s Hospital Medical Center, Cincinnati, Ohio (P.d.B.); William S. Middleton Memorial Veterans Affairs (VA) Healthcare, Madison, Wis (P.T.); and Department of Radiology and Biomedical Engineering, University of Wisconsin-Madison, 750 Highland Ave, Madison, WI 53726 (P.T.)
| | - Ameya Nayate
- From the Department of Radiology, University of Wisconsin-Madison, Madison, Wis (R.B., M.I., I.Y.); University Hospitals, Cleveland, Ohio (D.M., A.N.); Departments of Biomedical Engineering (M.L., S.G., S.I.) and Neurosciences (P.D.), Case Western Reserve University, Cleveland, Ohio; Department of Radiology, Children’s Hospital Los Angeles, Los Angeles, Calif (B.T.); Division of Hematology, Oncology & Bone Marrow Transplant, Nationwide Children’s Hospital, Columbus, Ohio (R.S.); Department of Pediatrics, Keck School of Medicine of University of Southern California, Children’s Hospital Los Angeles, Los Angeles, Calif (A.M.); Department of Pathology, Children’s Hospital Los Angeles, Los Angeles, Calif (A.J.); Division of Oncology, Cincinnati Children’s Hospital Medical Center, Cincinnati, Ohio (P.d.B.); William S. Middleton Memorial Veterans Affairs (VA) Healthcare, Madison, Wis (P.T.); and Department of Radiology and Biomedical Engineering, University of Wisconsin-Madison, 750 Highland Ave, Madison, WI 53726 (P.T.)
| | - Ipsa Yadav
- From the Department of Radiology, University of Wisconsin-Madison, Madison, Wis (R.B., M.I., I.Y.); University Hospitals, Cleveland, Ohio (D.M., A.N.); Departments of Biomedical Engineering (M.L., S.G., S.I.) and Neurosciences (P.D.), Case Western Reserve University, Cleveland, Ohio; Department of Radiology, Children’s Hospital Los Angeles, Los Angeles, Calif (B.T.); Division of Hematology, Oncology & Bone Marrow Transplant, Nationwide Children’s Hospital, Columbus, Ohio (R.S.); Department of Pediatrics, Keck School of Medicine of University of Southern California, Children’s Hospital Los Angeles, Los Angeles, Calif (A.M.); Department of Pathology, Children’s Hospital Los Angeles, Los Angeles, Calif (A.J.); Division of Oncology, Cincinnati Children’s Hospital Medical Center, Cincinnati, Ohio (P.d.B.); William S. Middleton Memorial Veterans Affairs (VA) Healthcare, Madison, Wis (P.T.); and Department of Radiology and Biomedical Engineering, University of Wisconsin-Madison, 750 Highland Ave, Madison, WI 53726 (P.T.)
| | - Murad Labbad
- From the Department of Radiology, University of Wisconsin-Madison, Madison, Wis (R.B., M.I., I.Y.); University Hospitals, Cleveland, Ohio (D.M., A.N.); Departments of Biomedical Engineering (M.L., S.G., S.I.) and Neurosciences (P.D.), Case Western Reserve University, Cleveland, Ohio; Department of Radiology, Children’s Hospital Los Angeles, Los Angeles, Calif (B.T.); Division of Hematology, Oncology & Bone Marrow Transplant, Nationwide Children’s Hospital, Columbus, Ohio (R.S.); Department of Pediatrics, Keck School of Medicine of University of Southern California, Children’s Hospital Los Angeles, Los Angeles, Calif (A.M.); Department of Pathology, Children’s Hospital Los Angeles, Los Angeles, Calif (A.J.); Division of Oncology, Cincinnati Children’s Hospital Medical Center, Cincinnati, Ohio (P.d.B.); William S. Middleton Memorial Veterans Affairs (VA) Healthcare, Madison, Wis (P.T.); and Department of Radiology and Biomedical Engineering, University of Wisconsin-Madison, 750 Highland Ave, Madison, WI 53726 (P.T.)
| | - Prateek Dullur
- From the Department of Radiology, University of Wisconsin-Madison, Madison, Wis (R.B., M.I., I.Y.); University Hospitals, Cleveland, Ohio (D.M., A.N.); Departments of Biomedical Engineering (M.L., S.G., S.I.) and Neurosciences (P.D.), Case Western Reserve University, Cleveland, Ohio; Department of Radiology, Children’s Hospital Los Angeles, Los Angeles, Calif (B.T.); Division of Hematology, Oncology & Bone Marrow Transplant, Nationwide Children’s Hospital, Columbus, Ohio (R.S.); Department of Pediatrics, Keck School of Medicine of University of Southern California, Children’s Hospital Los Angeles, Los Angeles, Calif (A.M.); Department of Pathology, Children’s Hospital Los Angeles, Los Angeles, Calif (A.J.); Division of Oncology, Cincinnati Children’s Hospital Medical Center, Cincinnati, Ohio (P.d.B.); William S. Middleton Memorial Veterans Affairs (VA) Healthcare, Madison, Wis (P.T.); and Department of Radiology and Biomedical Engineering, University of Wisconsin-Madison, 750 Highland Ave, Madison, WI 53726 (P.T.)
| | - Sanya Garg
- From the Department of Radiology, University of Wisconsin-Madison, Madison, Wis (R.B., M.I., I.Y.); University Hospitals, Cleveland, Ohio (D.M., A.N.); Departments of Biomedical Engineering (M.L., S.G., S.I.) and Neurosciences (P.D.), Case Western Reserve University, Cleveland, Ohio; Department of Radiology, Children’s Hospital Los Angeles, Los Angeles, Calif (B.T.); Division of Hematology, Oncology & Bone Marrow Transplant, Nationwide Children’s Hospital, Columbus, Ohio (R.S.); Department of Pediatrics, Keck School of Medicine of University of Southern California, Children’s Hospital Los Angeles, Los Angeles, Calif (A.M.); Department of Pathology, Children’s Hospital Los Angeles, Los Angeles, Calif (A.J.); Division of Oncology, Cincinnati Children’s Hospital Medical Center, Cincinnati, Ohio (P.d.B.); William S. Middleton Memorial Veterans Affairs (VA) Healthcare, Madison, Wis (P.T.); and Department of Radiology and Biomedical Engineering, University of Wisconsin-Madison, 750 Highland Ave, Madison, WI 53726 (P.T.)
| | - Benita Tamrazi
- From the Department of Radiology, University of Wisconsin-Madison, Madison, Wis (R.B., M.I., I.Y.); University Hospitals, Cleveland, Ohio (D.M., A.N.); Departments of Biomedical Engineering (M.L., S.G., S.I.) and Neurosciences (P.D.), Case Western Reserve University, Cleveland, Ohio; Department of Radiology, Children’s Hospital Los Angeles, Los Angeles, Calif (B.T.); Division of Hematology, Oncology & Bone Marrow Transplant, Nationwide Children’s Hospital, Columbus, Ohio (R.S.); Department of Pediatrics, Keck School of Medicine of University of Southern California, Children’s Hospital Los Angeles, Los Angeles, Calif (A.M.); Department of Pathology, Children’s Hospital Los Angeles, Los Angeles, Calif (A.J.); Division of Oncology, Cincinnati Children’s Hospital Medical Center, Cincinnati, Ohio (P.d.B.); William S. Middleton Memorial Veterans Affairs (VA) Healthcare, Madison, Wis (P.T.); and Department of Radiology and Biomedical Engineering, University of Wisconsin-Madison, 750 Highland Ave, Madison, WI 53726 (P.T.)
- Ralph Salloum
- Ashley Margol
- Alexander Judkins
- Sukanya Iyer
- Peter de Blank
- Pallavi Tiwari
7
Paverd H, Zormpas-Petridis K, Clayton H, Burge S, Crispin-Ortuzar M. Radiology and multi-scale data integration for precision oncology. NPJ Precis Oncol 2024; 8:158. [PMID: 39060351] [PMCID: PMC11282284] [DOI: 10.1038/s41698-024-00656-0] [Received: 01/19/2024] [Accepted: 07/15/2024] [Indexed: 07/28/2024]
Abstract
In this Perspective paper we explore the potential of integrating radiological imaging with other data types, a critical yet underdeveloped area in comparison to the fusion of other multi-omic data. Radiological images provide a comprehensive, three-dimensional view of cancer, capturing features that would be missed by biopsies or other data modalities. This paper explores the complexities and challenges of incorporating medical imaging into data integration models, in the context of precision oncology. We present the different categories of imaging-omics integration and discuss recent progress, highlighting the opportunities that arise from bringing together spatial data on different scales.
Affiliation(s)
- Hania Paverd
- Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK
- Department of Oncology, University of Cambridge, Cambridge, UK
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, UK
- Hannah Clayton
- Department of Oncology, University of Cambridge, Cambridge, UK
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, UK
- Sarah Burge
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, UK
- Mireia Crispin-Ortuzar
- Department of Oncology, University of Cambridge, Cambridge, UK
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, UK
8
Ding X, Huang Y, Zhao Y, Tian X, Feng G, Gao Z. Transfer learning for anatomical structure segmentation in otorhinolaryngology microsurgery. Int J Med Robot 2024; 20:e2634. [PMID: 38767083] [DOI: 10.1002/rcs.2634] [Received: 10/18/2023] [Revised: 04/16/2024] [Accepted: 04/18/2024] [Indexed: 05/22/2024]
Abstract
BACKGROUND Reducing the annotation burden is an active and meaningful area of artificial intelligence (AI) research. METHODS Multiple datasets for the segmentation of two landmarks were constructed based on 41 257 labelled images and 6 different microsurgical scenarios. These datasets were trained using a multi-stage transfer learning (TL) methodology. RESULTS Multi-stage TL enhanced segmentation performance over the baseline (mIoU 0.8869 vs. 0.6892). Moreover, the convolutional neural networks (CNNs) maintained robust performance (mIoU 0.8917 vs. 0.8603) even when the training dataset was reduced from 90% (30 078 images) to 10% (3342 images) of the data. When the weights from one surgical scenario were applied directly, without further training, to recognise the same target in images from other scenarios, the CNNs still achieved a best mIoU of 0.6190 ± 0.0789. CONCLUSIONS Model performance can be improved with TL on datasets of reduced size and increased complexity, and data-based domain adaptation among different microsurgical fields is feasible.
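For context on the metric quoted above: mean intersection-over-union (mIoU) averages, over the classes, the overlap between predicted and ground-truth label masks divided by their union. A minimal plain-Python sketch (the flattened toy masks and class labels below are invented for illustration, not taken from the study):

```python
def mean_iou(pred, truth, classes):
    """Average per-class intersection-over-union of two flattened label masks."""
    ious = []
    for c in classes:
        p = {i for i, v in enumerate(pred) if v == c}
        t = {i for i, v in enumerate(truth) if v == c}
        union = p | t
        if union:  # skip classes absent from both masks
            ious.append(len(p & t) / len(union))
    return sum(ious) / len(ious)

# Toy 6-pixel masks with classes 0 (background) and 1 (landmark)
pred = [0, 1, 1, 0, 1, 0]
truth = [0, 1, 0, 0, 1, 1]
print(mean_iou(pred, truth, classes=[0, 1]))  # 0.5
```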
Affiliation(s)
- Xin Ding
- Department of Otorhinolaryngology Head and Neck Surgery, The Peking Union Medical College Hospital, Beijing, China
- Yu Huang
- Department of Otorhinolaryngology Head and Neck Surgery, The Peking Union Medical College Hospital, Beijing, China
- Yang Zhao
- Department of Otorhinolaryngology Head and Neck Surgery, The Peking Union Medical College Hospital, Beijing, China
- Xu Tian
- Department of Otorhinolaryngology Head and Neck Surgery, The Peking Union Medical College Hospital, Beijing, China
- Guodong Feng
- Department of Otorhinolaryngology Head and Neck Surgery, The Peking Union Medical College Hospital, Beijing, China
- Zhiqiang Gao
- Department of Otorhinolaryngology Head and Neck Surgery, The Peking Union Medical College Hospital, Beijing, China
9
Alzubaidi L, Salhi A, A Fadhel M, Bai J, Hollman F, Italia K, Pareyon R, Albahri AS, Ouyang C, Santamaría J, Cutbush K, Gupta A, Abbosh A, Gu Y. Trustworthy deep learning framework for the detection of abnormalities in X-ray shoulder images. PLoS One 2024; 19:e0299545. [PMID: 38466693] [PMCID: PMC10927121] [DOI: 10.1371/journal.pone.0299545] [Received: 08/24/2023] [Accepted: 02/12/2024] [Indexed: 03/13/2024]
Abstract
Musculoskeletal conditions affect an estimated 1.7 billion people worldwide, causing intense pain and disability. These conditions lead to 30 million emergency room visits yearly, and the numbers are only increasing. However, diagnosing musculoskeletal issues can be challenging, especially in emergencies where quick decisions are necessary. Deep learning (DL) has shown promise in various medical applications. However, previous methods performed poorly and lacked transparency in detecting shoulder abnormalities on X-ray images, owing to limited training data and inadequate feature representation; this often resulted in overfitting, poor generalisation, and potential bias in decision-making. To address these issues, a new trustworthy DL framework has been proposed to detect shoulder abnormalities (such as fractures, deformities, and arthritis) using X-ray images. The framework consists of two parts: same-domain transfer learning (TL) to mitigate the mismatch with ImageNet, and feature fusion to reduce error rates and improve trust in the final result. Same-domain TL involves pre-training models on a large number of labelled X-ray images from various body parts and then fine-tuning them on the target dataset of shoulder X-ray images. Feature fusion combines the features extracted by seven DL models to train several machine learning classifiers. The proposed framework achieved an excellent accuracy of 99.2%, an F1 score of 99.2%, and a Cohen's kappa of 98.5%. Furthermore, the results were validated using three visualisation tools: gradient-based class activation heat maps (Grad-CAM), activation visualisation, and local interpretable model-agnostic explanations (LIME). The proposed framework outperformed previous DL methods as well as three orthopaedic surgeons invited to classify the test set, who obtained an average accuracy of 79.1%. The proposed framework has proven effective and robust, improving generalisation and increasing trust in the final results.
Affiliation(s)
- Laith Alzubaidi
- School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD, Australia
- Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
- Centre for Data Science, Queensland University of Technology, Brisbane, QLD, Australia
- Akunah Medical Technology Pty Ltd Company, Brisbane, QLD, Australia
- Asma Salhi
- Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
- Akunah Medical Technology Pty Ltd Company, Brisbane, QLD, Australia
- Jinshuai Bai
- School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD, Australia
- Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
- Freek Hollman
- Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
- Kristine Italia
- Akunah Medical Technology Pty Ltd Company, Brisbane, QLD, Australia
- Roberto Pareyon
- Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
- A. S. Albahri
- Technical College, Imam Ja’afar Al-Sadiq University, Baghdad, Iraq
- Chun Ouyang
- School of Information Systems, Queensland University of Technology, Brisbane, QLD, Australia
- Jose Santamaría
- Department of Computer Science, University of Jaén, Jaén, Spain
- Kenneth Cutbush
- Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
- School of Medicine, The University of Queensland, Brisbane, QLD, Australia
- Ashish Gupta
- Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
- Akunah Medical Technology Pty Ltd Company, Brisbane, QLD, Australia
- Greenslopes Private Hospital, Brisbane, QLD, Australia
- Amin Abbosh
- School of Information Technology and Electrical Engineering, Brisbane, QLD, Australia
- Yuantong Gu
- School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD, Australia
- Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
10
Farkhani S, Demnitz N, Boraxbekk CJ, Lundell H, Siebner HR, Petersen ET, Madsen KH. End-to-end volumetric segmentation of white matter hyperintensities using deep learning. Comput Methods Programs Biomed 2024; 245:108008. [PMID: 38290291] [DOI: 10.1016/j.cmpb.2024.108008] [Received: 07/12/2023] [Revised: 12/08/2023] [Accepted: 01/03/2024] [Indexed: 02/01/2024]
Abstract
BACKGROUND AND OBJECTIVES Reliable detection of white matter hyperintensities (WMH) is crucial for studying the impact of diffuse white-matter pathology on brain health and for monitoring changes in WMH load over time. However, manual annotation of 3D high-dimensional neuroimages is laborious and can be prone to biases and errors. In this study, we evaluate the performance of deep learning (DL) segmentation tools and propose a novel volumetric segmentation model incorporating self-attention via a transformer-based architecture. Ultimately, we aim to evaluate the diverse factors that influence WMH segmentation, providing a comprehensive analysis of state-of-the-art algorithms in a broader context. METHODS We trained state-of-the-art DL algorithms, incorporating advanced attention mechanisms, on structural fluid-attenuated inversion recovery (FLAIR) image acquisitions. The anatomical MRI data used for model training were obtained from healthy individuals aged 62-70 years in the Live active Successful Aging (LISA) project. Given the potential sparsity of lesion volume among healthy aging individuals, we explored the impact of incorporating a weighted loss function and ensemble models. To assess the generalizability of the studied DL models, we applied the trained algorithms to an independent subset of data sourced from the MICCAI WMH challenge (MWSC); notably, this subset had vastly different acquisition parameters from the LISA dataset used for training. RESULTS DL approaches consistently exhibited commendable segmentation performance, achieving inter-rater agreement comparable to expert performance and ensuring high-quality segmentation outcomes. On the out-of-sample dataset, the ensemble models performed best. CONCLUSIONS DL methods generally surpassed conventional approaches in our study. While all DL methods performed comparably, incorporating attention mechanisms could prove advantageous in future applications with wider availability of training data. As expected, our experiments indicate that ensemble-based models enable superior generalization in out-of-distribution settings. We believe that introducing DL methods into the WMH annotation workflow in healthy aging cohorts is promising, not only for reducing the annotation time required, but also for eventually improving accuracy and robustness by incorporating the automatic segmentations into the evaluation procedure.
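The ensembling credited above with the best out-of-distribution performance can be illustrated by a per-voxel majority vote over binary lesion masks from several trained models (the actual combination rule used in the study may differ; the mask vectors below are hypothetical):

```python
def majority_vote(masks):
    """Label a voxel as lesion when more than half of the models agree."""
    n = len(masks)
    return [1 if 2 * sum(col) > n else 0 for col in zip(*masks)]

# Binary WMH predictions from three hypothetical models over six voxels
m1 = [1, 1, 0, 0, 1, 0]
m2 = [1, 0, 0, 1, 1, 0]
m3 = [1, 1, 0, 0, 0, 0]
print(majority_vote([m1, m2, m3]))  # [1, 1, 0, 0, 1, 0]
```

Averaging per-model probability maps before thresholding is a common soft variant of the same idea.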
Affiliation(s)
- Sadaf Farkhani
- Danish Research Center for Magnetic Resonance, Center for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital-Amager and Hvidovre, Kattegaard Alle 30, Hvidovre, Denmark
- Naiara Demnitz
- Danish Research Center for Magnetic Resonance, Center for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital-Amager and Hvidovre, Kattegaard Alle 30, Hvidovre, Denmark
- Carl-Johan Boraxbekk
- Danish Research Center for Magnetic Resonance, Center for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital-Amager and Hvidovre, Kattegaard Alle 30, Hvidovre, Denmark; Institute for Clinical Medicine, Faculty of Medical and Health Sciences, University of Copenhagen, Denmark; Department of Neurology, Copenhagen University Hospital Bispebjerg and Frederiksberg, Copenhagen, Denmark; Institute of Sports Medicine Copenhagen (ISMC), Copenhagen University Hospital Bispebjerg and Frederiksberg, Copenhagen, Denmark
- Henrik Lundell
- Danish Research Center for Magnetic Resonance, Center for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital-Amager and Hvidovre, Kattegaard Alle 30, Hvidovre, Denmark; Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
- Hartwig Roman Siebner
- Danish Research Center for Magnetic Resonance, Center for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital-Amager and Hvidovre, Kattegaard Alle 30, Hvidovre, Denmark; Institute for Clinical Medicine, Faculty of Medical and Health Sciences, University of Copenhagen, Denmark; Department of Neurology, Copenhagen University Hospital Bispebjerg and Frederiksberg, Copenhagen, Denmark
- Esben Thade Petersen
- Danish Research Center for Magnetic Resonance, Center for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital-Amager and Hvidovre, Kattegaard Alle 30, Hvidovre, Denmark; Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
- Kristoffer Hougaard Madsen
- Danish Research Center for Magnetic Resonance, Center for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital-Amager and Hvidovre, Kattegaard Alle 30, Hvidovre, Denmark; Department of Applied Mathematics and Computer Science, Technical University of Denmark, Lyngby, Denmark
11
Priya S, Dhruba DD, Perry SS, Aher PY, Gupta A, Nagpal P, Jacob M. Optimizing Deep Learning for Cardiac MRI Segmentation: The Impact of Automated Slice Range Classification. Acad Radiol 2024; 31:503-513. [PMID: 37541826] [DOI: 10.1016/j.acra.2023.07.008] [Received: 04/10/2023] [Revised: 07/07/2023] [Accepted: 07/09/2023] [Indexed: 08/06/2023]
Abstract
RATIONALE AND OBJECTIVES Cardiac magnetic resonance imaging is crucial for diagnosing cardiovascular diseases, but lengthy postprocessing and manual segmentation can lead to observer bias. Deep learning (DL) has been proposed for automated cardiac segmentation; however, its effectiveness is limited by the selection of the slice range from base to apex. MATERIALS AND METHODS In this study, we integrated an automated slice range classification step to identify basal-to-apical short-axis slices before DL-based segmentation. We employed the publicly available Multi-Disease, Multi-View & Multi-Center Right Ventricular Segmentation in Cardiac MRI dataset, with short-axis cine data from 160 training, 40 validation, and 160 testing cases. Three classification and seven segmentation DL models were studied. The top-performing segmentation model was assessed with and without the classification model. Model validation comparing automated and manual segmentation was performed using Dice score, Hausdorff distance, and clinical indices (correlation score and Bland-Altman plots). RESULTS The combined classification (CBAM-integrated 2D-CNN) and segmentation model (2D-UNet with dilated convolution block) demonstrated superior performance, achieving Dice scores of 0.952 for the left ventricle (LV), 0.933 for the right ventricle (RV), and 0.875 for the myocardium, compared to the stand-alone segmentation model (0.949 for LV, 0.925 for RV, and 0.867 for myocardium). The combined classification and segmentation model showed high correlation (0.92-0.99) with manual segmentation for biventricular volumes, ejection fraction, and myocardial mass. The mean absolute difference (2.8-8.3 mL) in clinical parameters between automated and manual segmentation was within the interobserver variability range, indicating performance comparable to manual annotation. CONCLUSION Integrating an initial automated slice range classification step into the segmentation process improves the performance of DL-based cardiac chamber segmentation.
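The Dice score used for validation above is twice the overlap between two masks divided by their total size; a sketch over voxel index sets (the example sets are invented, not study data):

```python
def dice(pred, truth):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    a, b = set(pred), set(truth)
    if not a and not b:
        return 1.0  # two empty masks agree perfectly
    return 2 * len(a & b) / (len(a) + len(b))

# Voxel indices labelled as left ventricle by the model vs. the expert
model = {1, 2, 3, 4}
expert = {2, 3, 4, 5}
print(dice(model, expert))  # 2*3 / (4+4) = 0.75
```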
Affiliation(s)
- Sarv Priya
- Department of Radiology, University of Iowa Carver College of Medicine, Iowa City, Iowa (S.P.)
- Durjoy D Dhruba
- Department of Electrical and Computer Engineering, University of Iowa, Iowa City, Iowa (D.D.D., M.J.)
- Sarah S Perry
- Department of Biostatistics, University of Iowa, Iowa City, Iowa (S.S.P.)
- Pritish Y Aher
- Department of Radiology, University of Miami, Miller School of Medicine, Miami, Florida (P.Y.A.)
- Amit Gupta
- Department of Radiology, University Hospital Cleveland Medical Center, Cleveland, Ohio (A.G.)
- Prashant Nagpal
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin (P.N.)
- Mathews Jacob
- Department of Electrical and Computer Engineering, University of Iowa, Iowa City, Iowa (D.D.D., M.J.)
12
Hussain D, Al-Masni MA, Aslam M, Sadeghi-Niaraki A, Hussain J, Gu YH, Naqvi RA. Revolutionizing tumor detection and classification in multimodality imaging based on deep learning approaches: Methods, applications and limitations. J Xray Sci Technol 2024; 32:857-911. [PMID: 38701131] [DOI: 10.3233/xst-230429] [Indexed: 05/05/2024]
Abstract
BACKGROUND The emergence of deep learning (DL) techniques has revolutionized tumor detection and classification in medical imaging, with multimodal medical imaging (MMI) gaining recognition for its precision in diagnosis, treatment, and progression tracking. OBJECTIVE This review comprehensively examines DL methods in transforming tumor detection and classification across MMI modalities, aiming to provide insights into advancements, limitations, and key challenges for further progress. METHODS Systematic literature analysis identifies DL studies for tumor detection and classification, outlining methodologies including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and their variants. Integration of multimodality imaging enhances accuracy and robustness. RESULTS Recent advancements in DL-based MMI evaluation methods are surveyed, focusing on tumor detection and classification tasks. Various DL approaches, including CNNs, YOLO, Siamese Networks, Fusion-Based Models, Attention-Based Models, and Generative Adversarial Networks, are discussed with emphasis on PET-MRI, PET-CT, and SPECT-CT. FUTURE DIRECTIONS The review outlines emerging trends and future directions in DL-based tumor analysis, aiming to guide researchers and clinicians toward more effective diagnosis and prognosis. Continued innovation and collaboration are stressed in this rapidly evolving domain. CONCLUSION Conclusions drawn from literature analysis underscore the efficacy of DL approaches in tumor detection and classification, highlighting their potential to address challenges in MMI analysis and their implications for clinical practice.
Affiliation(s)
- Dildar Hussain
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Korea
- Mohammed A Al-Masni
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Korea
- Muhammad Aslam
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Korea
- Abolghasem Sadeghi-Niaraki
- Department of Computer Science & Engineering and Convergence Engineering for Intelligent Drone, XR Research Center, Sejong University, Seoul, Korea
- Jamil Hussain
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Korea
- Yeong Hyeon Gu
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Korea
- Rizwan Ali Naqvi
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul, Korea
13
Wanjiku RN, Nderu L, Kimwele M. Improved transfer learning using textural features conflation and dynamically fine-tuned layers. PeerJ Comput Sci 2023; 9:e1601. [PMID: 37810335] [PMCID: PMC10557498] [DOI: 10.7717/peerj-cs.1601] [Received: 02/01/2023] [Accepted: 08/29/2023] [Indexed: 10/10/2023]
Abstract
Transfer learning involves using previously learnt knowledge of a model task in addressing another task. However, this process works well when the tasks are closely related. It is, therefore, important to select data points that are closely relevant to the previous task and fine-tune the suitable pre-trained model's layers for effective transfer. This work utilises the least divergent textural features of the target datasets and pre-trained model's layers, minimising the lost knowledge during the transfer learning process. This study extends previous works on selecting data points with good textural features and dynamically selected layers using divergence measures by combining them into one model pipeline. Five pre-trained models are used: ResNet50, DenseNet169, InceptionV3, VGG16 and MobileNetV2 on nine datasets: CIFAR-10, CIFAR-100, MNIST, Fashion-MNIST, Stanford Dogs, Caltech 256, ISIC 2016, ChestX-ray8 and MIT Indoor Scenes. Experimental results show that data points with lower textural feature divergence and layers with more positive weights give better accuracy than other data points and layers. The data points with lower divergence give an average improvement of 3.54% to 6.75%, while the layers improve by 2.42% to 13.04% for the CIFAR-100 dataset. Combining the two methods gives an extra accuracy improvement of 1.56%. This combined approach shows that data points with lower divergence from the source dataset samples can lead to a better adaptation for the target task. The results also demonstrate that selecting layers with more positive weights reduces instances of trial and error in selecting fine-tuning layers for pre-trained models.
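The divergence measures mentioned above quantify how far a candidate data point's feature distribution sits from the source domain; the paper's exact measure is not reproduced here, but the Kullback-Leibler (KL) divergence between discrete feature histograms illustrates the selection principle (the histograms below are hypothetical):

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(P||Q) for two discrete distributions given as equal-length lists."""
    return sum(pi * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q) if pi > 0)

# Histogram of one textural feature in the source vs. two candidate targets
source = [0.5, 0.3, 0.2]
near = [0.45, 0.35, 0.2]   # close to source -> low divergence -> prefer
far = [0.1, 0.1, 0.8]      # far from source -> high divergence
print(kl_divergence(source, near) < kl_divergence(source, far))  # True
```

Ranking candidate data points (or layers) by such a score and keeping the least divergent ones mirrors the selection strategy the abstract describes.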
Affiliation(s)
- Lawrence Nderu
- Computing, Jomo Kenyatta University of Agriculture and Technology, Nairobi, Kenya
- Michael Kimwele
- Computing, Jomo Kenyatta University of Agriculture and Technology, Nairobi, Kenya
14
GadAllah MT, Mohamed AENA, Hefnawy AA, Zidan HE, El-Banby GM, Mohamed Badawy S. Convolutional Neural Networks Based Classification of Segmented Breast Ultrasound Images – A Comparative Preliminary Study. 2023 Intelligent Methods, Systems, and Applications (IMSA); 2023. [DOI: 10.1109/imsa58542.2023.10217585] [Indexed: 09/02/2023]
Affiliation(s)
- Abd El-Naser A. Mohamed
- Menoufia University, Faculty of Electronic Engineering, Electronics and Electrical Communications Engineering Department, Menoufia, Egypt
- Alaa A. Hefnawy
- Electronics Research Institute (ERI), Computers and Systems Department, Cairo, Egypt
- Hassan E. Zidan
- Electronics Research Institute (ERI), Computers and Systems Department, Cairo, Egypt
- Ghada M. El-Banby
- Menoufia University, Faculty of Electronic Engineering, Industrial Electronics and Control Engineering Department, Menoufia, Egypt
- Samir Mohamed Badawy
- Menoufia University, Faculty of Electronic Engineering, Industrial Electronics and Control Engineering Department, Menoufia, Egypt
15
Ramos López D, Pugliese GMI, Iaselli G, Amoroso N, Gong C, Pascali V, Altieri S, Protti N. Study of Alternative Imaging Methods for In Vivo Boron Neutron Capture Therapy. Cancers (Basel) 2023; 15:3582. [PMID: 37509243] [PMCID: PMC10377696] [DOI: 10.3390/cancers15143582] [Received: 05/23/2023] [Revised: 07/01/2023] [Accepted: 07/10/2023] [Indexed: 07/30/2023]
Abstract
Boron Neutron Capture Therapy (BNCT) is an innovative and highly selective treatment against cancer. In vivo boron dosimetry is an important requirement for carrying out such therapy in clinical environments. In this work, different imaging methods based on a Compton camera detector were tested for dosimetry and tumor monitoring in BNCT. A dedicated dataset was generated with Monte Carlo tools to study the imaging capabilities. We first applied the iterative Maximum Likelihood Expectation Maximization (MLEM) method to study dosimetry tomography. In addition, two methods for tumor monitoring were studied, based respectively on morphological filtering and on deep learning with Convolutional Neural Networks (CNNs). Furthermore, clinical aspects such as the dependence of image reconstruction on the boron concentration ratio and the stretching effect along the detector position axis were analyzed. A simulated spherical gamma source was studied under several conditions (different detector distances and boron concentration ratios) using MLEM. This approach demonstrated the feasibility of monitoring the boron dose. Tumor monitoring using the CNN method shows promising results that could be enhanced by increasing the size of the training dataset.
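For orientation, the MLEM update applied above multiplies each voxel estimate by the sensitivity-normalized back-projection of the ratio between measured and currently predicted counts. A deliberately tiny plain-Python sketch with an invented 2-voxel, 2-detector system matrix (a realistic Compton-camera model is far larger and its data noisy):

```python
def mlem(counts, sysmat, n_iter=50):
    """Iterate x_j <- (x_j / s_j) * sum_i a_ij * y_i / (A x)_i, with s_j = sum_i a_ij."""
    n_pix = len(sysmat[0])
    x = [1.0] * n_pix                                   # flat initial estimate
    sens = [sum(row[j] for row in sysmat) for j in range(n_pix)]
    for _ in range(n_iter):
        proj = [sum(a * v for a, v in zip(row, x)) for row in sysmat]  # forward project
        for j in range(n_pix):
            back = sum(row[j] * y / p
                       for row, y, p in zip(sysmat, counts, proj) if p > 0)
            x[j] *= back / sens[j]
    return x

# Invented system: detector i sees voxel j with weight A[i][j]
A = [[1.0, 0.2],
     [0.2, 1.0]]
true_x = [4.0, 1.0]                                     # "true" boron activity
y = [sum(a * t for a, t in zip(row, true_x)) for row in A]  # noiseless counts
est = mlem(y, A)                                        # converges toward true_x
```

The multiplicative form keeps the estimate non-negative, which suits count data; with noisy measurements, iterations are typically stopped early or regularized.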
Affiliation(s)
- Dayron Ramos López
- Dipartimento Interateneo di Fisica, Università degli Studi di Bari Aldo Moro, 70125 Bari, Italy
- Istituto Nazionale di Fisica Nucleare, Sezione di Bari, 70125 Bari, Italy
- Gabriella Maria Incoronata Pugliese
- Dipartimento Interateneo di Fisica, Università degli Studi di Bari Aldo Moro, 70125 Bari, Italy
- Istituto Nazionale di Fisica Nucleare, Sezione di Bari, 70125 Bari, Italy
- Giuseppe Iaselli
- Dipartimento Interateneo di Fisica, Università degli Studi di Bari Aldo Moro, 70125 Bari, Italy
- Istituto Nazionale di Fisica Nucleare, Sezione di Bari, 70125 Bari, Italy
- Nicola Amoroso
- Istituto Nazionale di Fisica Nucleare, Sezione di Bari, 70125 Bari, Italy
- Dipartimento di Farmacia-Scienze del Farmaco, Università degli Studi di Bari Aldo Moro, 70125 Bari, Italy
- Chunhui Gong
- School of Environmental and Biological Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
- Istituto Nazionale di Fisica Nucleare, Sezione di Pavia, 27100 Pavia, Italy
- Valeria Pascali
- Istituto Nazionale di Fisica Nucleare, Sezione di Pavia, 27100 Pavia, Italy
- Dipartimento di Fisica, Università degli Studi di Pavia, 27100 Pavia, Italy
- Saverio Altieri
- Istituto Nazionale di Fisica Nucleare, Sezione di Pavia, 27100 Pavia, Italy
- Dipartimento di Fisica, Università degli Studi di Pavia, 27100 Pavia, Italy
- Nicoletta Protti
- Istituto Nazionale di Fisica Nucleare, Sezione di Pavia, 27100 Pavia, Italy
- Dipartimento di Fisica, Università degli Studi di Pavia, 27100 Pavia, Italy
16
Qureshi A, Lim S, Suh SY, Mutawak B, Chitnis PV, Demer JL, Wei Q. Deep-Learning-Based Segmentation of Extraocular Muscles from Magnetic Resonance Images. Bioengineering (Basel) 2023; 10:699. [PMID: 37370630] [PMCID: PMC10295225] [DOI: 10.3390/bioengineering10060699]
Abstract
In this study, we investigated the performance of four deep learning frameworks, U-Net, U-NeXt, DeepLabV3+, and ConResNet, in multi-class pixel-based segmentation of the extraocular muscles (EOMs) from coronal MRI. The four models were evaluated and compared with the standard overlap metrics of intersection over union (IoU) and Dice, on which U-Net achieved the highest overall scores of 0.77 and 0.85, respectively. The centroid distance offset between predicted and ground-truth EOM centroids was also measured; U-Net and DeepLabV3+ achieved low offsets (p > 0.05) of 0.33 mm and 0.35 mm, respectively. Our results also demonstrated that segmentation accuracy varies across spatially different image planes. This study systematically compared factors that affect the variability of segmentation and morphometric accuracy of deep learning models applied to segmenting EOMs from MRI.
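The two overlap metrics reported above follow directly from a confusion count on binary masks. The sketch below uses flat toy masks rather than multi-class MRI segmentations.

```python
# IoU = TP / (TP + FP + FN); Dice = 2*TP / (2*TP + FP + FN).
# Dice is the F1 score of the pixel-wise classification, and the two are
# related by Dice = 2*IoU / (1 + IoU).

def iou_and_dice(pred, truth):
    """Overlap metrics for two flat binary masks of equal length."""
    tp = sum(p and t for p, t in zip(pred, truth))          # both foreground
    fp = sum(p and not t for p, t in zip(pred, truth))      # predicted only
    fn = sum(not p and t for p, t in zip(pred, truth))      # missed pixels
    iou = tp / (tp + fp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)
    return iou, dice

pred  = [1, 1, 1, 0, 0, 1]
truth = [1, 1, 0, 0, 1, 1]
iou, dice = iou_and_dice(pred, truth)
# tp=3, fp=1, fn=1 -> IoU = 3/5 = 0.6, Dice = 6/8 = 0.75
```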
Affiliation(s)
- Amad Qureshi
- Department of Bioengineering, George Mason University, Fairfax, VA 22030, USA
- Seongjin Lim
- Department of Ophthalmology, Neurology and Bioengineering, Jules Stein Eye Institute, University of California, Los Angeles, CA 90095, USA
- Soh Youn Suh
- Department of Ophthalmology, Neurology and Bioengineering, Jules Stein Eye Institute, University of California, Los Angeles, CA 90095, USA
- Bassam Mutawak
- Department of Bioengineering, George Mason University, Fairfax, VA 22030, USA
- Parag V. Chitnis
- Department of Bioengineering, George Mason University, Fairfax, VA 22030, USA
- Joseph L. Demer
- Department of Ophthalmology, Neurology and Bioengineering, Jules Stein Eye Institute, University of California, Los Angeles, CA 90095, USA
- Qi Wei
- Department of Bioengineering, George Mason University, Fairfax, VA 22030, USA
17
Kung PC, Heydari M, Tsou NT, Tai BL. A neural network framework for immediate temperature prediction of surgical hand-held drilling. Comput Methods Programs Biomed 2023; 235:107524. [PMID: 37060686] [DOI: 10.1016/j.cmpb.2023.107524]
Abstract
BACKGROUND AND OBJECTIVE Heat generation and the associated temperature rise in surgical drilling can cause irreversible tissue damage. It is nearly impossible to provide immediate temperature prediction for a hand-held drilling process, since both the feed rate and the motion vary with time. The objective of this study is to present and test a framework for immediate bone-drilling temperature visualization based on a neural network (NN) model and a linear time-invariant (LTI) model. METHODS A finite element analysis (FEA) model is used as the ground truth. The NN model predicts the location-dependent thermal responses of the FEA, while the LTI model superimposes these responses based on the location history of the heat source. Using LTI superposition eliminates the need to enumerate the unlimited possibilities in the time domain. To test the framework, two three-dimensional drilling cases are studied: one with a constant drilling feed and a straight path, and the other with a varying feed and a varying path. RESULTS The NN model, using a U-net architecture, achieved a prediction correlation of over 97% with only 1% of the total number of data points. Using the framework with U-net and LTI, both case studies show good agreement with the ground truth in temporal and spatial temperature distributions. The average error near the drilling path is less than 10%. Discrepancies are mainly found near the heat source and in the regions near the removed material. CONCLUSIONS An FEA surrogate model for rapid and accurate prediction of 3D temperature during arbitrary bone drilling was successfully developed, with an overall average error of less than 5% in the two case studies. Future improvements include strategies for training data selection and data formatting.
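The LTI superposition step described in METHODS can be sketched in one dimension: the temperature rise at a probe point is the sum of time-shifted responses, one per time step of the moving heat source. The exponential kernel below is an assumed stand-in for the NN-predicted, location-dependent thermal response, not the paper's model.

```python
import math

def impulse_response(distance, t):
    """Toy decaying response at a probe point to a unit heat pulse
    deposited `distance` away, `t` time steps ago."""
    if t < 0:
        return 0.0
    return math.exp(-t / 5.0) / (1.0 + distance)

def temperature(probe, path, t):
    """LTI superposition: sum the time-shifted responses of every
    source position visited up to time t."""
    return sum(
        impulse_response(abs(probe - pos), t - k)
        for k, pos in enumerate(path)
        if k <= t
    )

path = [0, 1, 2, 3, 4]              # drill position at each time step
T = temperature(probe=2, path=path, t=4)
```

Because the system is treated as linear and time-invariant, only one response per source location needs to be precomputed (by the NN in the paper's framework); arbitrary feeds and paths are then handled by this summation alone.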
Affiliation(s)
- Pei-Ching Kung
- Texas A&M University, USA; National Yang-Ming Chiao-Tung University, USA
- Nien-Ti Tsou
- Texas A&M University, USA; National Yang-Ming Chiao-Tung University, USA
18
Albeaik S, Wu T, Vurimi G, Chou F, Lu X, Bayen AM. Longitudinal deep truck: Deep longitudinal model with application to sim2real deep reinforcement learning for heavy-duty truck control in the field. J Field Robot 2022. [DOI: 10.1002/rob.22131]
Affiliation(s)
- Saleh Albeaik
- Department of Civil and Environmental Engineering, University of California, Berkeley, Berkeley, California, USA
- Trevor Wu
- Department of Civil and Environmental Engineering, University of California, Berkeley, Berkeley, California, USA
- Ganeshnikhil Vurimi
- Department of Mechanical Engineering, University of California, Berkeley, Berkeley, California, USA
- Fang-Chieh Chou
- Department of Mechanical Engineering, University of California, Berkeley, Berkeley, California, USA
- Xiao-Yun Lu
- California PATH, University of California, Berkeley, Richmond, California, USA
- Alexandre M. Bayen
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, California, USA
19
Zhou W, Deng Z, Liu Y, Shen H, Deng H, Xiao H. Global Research Trends of Artificial Intelligence on Histopathological Images: A 20-Year Bibliometric Analysis. Int J Environ Res Public Health 2022; 19:11597. [PMID: 36141871] [PMCID: PMC9517580] [DOI: 10.3390/ijerph191811597]
Abstract
Cancer has become a major threat to global health care. With the development of computer science, artificial intelligence (AI) has been widely applied to histopathological image (HI) analysis. This study analyzed the publications on AI in HI from 2001 to 2021 by bibliometrics, exploring the state of the research and potential popular directions for the future. A total of 2844 publications from the Web of Science Core Collection were included in the bibliometric analysis. The country/region, institution, author, journal, keyword, and references were analyzed using VOSviewer and CiteSpace. The results showed that the number of publications has grown rapidly in the last five years. The USA is the most productive and influential country, with 937 publications and 23,010 citations, and most of the authors and institutions with higher numbers of publications and citations are from the USA. Keyword analysis showed that breast cancer, prostate cancer, colorectal cancer, and lung cancer are the tumor types of greatest concern. Co-citation analysis showed that classification and nucleus segmentation are the main research directions of AI-based HI studies. Transfer learning and self-supervised learning in HI are on the rise. This study performed the first bibliometric analysis of AI in HI across multiple indicators, providing insights for researchers to identify key cancer types and understand the research trends of AI application in HI.
Affiliation(s)
- Wentong Zhou
- Center for System Biology, Data Sciences, and Reproductive Health, School of Basic Medical Science, Central South University, Changsha 410031, China
- Ziheng Deng
- Center for System Biology, Data Sciences, and Reproductive Health, School of Basic Medical Science, Central South University, Changsha 410031, China
- Yong Liu
- Center for System Biology, Data Sciences, and Reproductive Health, School of Basic Medical Science, Central South University, Changsha 410031, China
- Hui Shen
- Tulane Center of Biomedical Informatics and Genomics, Deming Department of Medicine, Tulane University School of Medicine, New Orleans, LA 70112, USA
- Hongwen Deng
- Tulane Center of Biomedical Informatics and Genomics, Deming Department of Medicine, Tulane University School of Medicine, New Orleans, LA 70112, USA
- Hongmei Xiao
- Center for System Biology, Data Sciences, and Reproductive Health, School of Basic Medical Science, Central South University, Changsha 410031, China
20
Li G, Wu X, Ma X. Artificial intelligence in radiotherapy. Semin Cancer Biol 2022; 86:160-171. [PMID: 35998809] [DOI: 10.1016/j.semcancer.2022.08.005]
Abstract
Radiotherapy is a discipline closely integrated with computer science. Artificial intelligence (AI) has developed rapidly over the past few years. With the explosive growth of medical big data, AI promises to revolutionize the field of radiotherapy through highly automated workflows, enhanced quality assurance, improved regional balance of expert experience, and individualized treatment guided by multi-omics. Beyond the efforts of independent researchers, the growing number of large databases, biobanks, and open challenges has significantly facilitated AI studies in radiation oncology. This article reviews the latest research, clinical applications, and challenges of AI in each part of radiotherapy, including image processing, contouring, planning, quality assurance, motion management, and outcome prediction. By summarizing cutting-edge findings and challenges, we aim to inspire researchers to explore more future possibilities and accelerate the arrival of AI radiotherapy.
Affiliation(s)
- Guangqi Li
- Division of Biotherapy, Cancer Center, West China Hospital and State Key Laboratory of Biotherapy, Sichuan University, No. 37 GuoXue Alley, Chengdu 610041, China
- Xin Wu
- Head & Neck Oncology Ward, Division of Radiotherapy Oncology, Cancer Center, West China Hospital, Sichuan University, No. 37 GuoXue Alley, Chengdu 610041, China
- Xuelei Ma
- Division of Biotherapy, Cancer Center, West China Hospital and State Key Laboratory of Biotherapy, Sichuan University, No. 37 GuoXue Alley, Chengdu 610041, China
21
Tavanapong W, Oh J, Riegler MA, Khaleel M, Mittal B, de Groen PC. Artificial Intelligence for Colonoscopy: Past, Present, and Future. IEEE J Biomed Health Inform 2022; 26:3950-3965. [PMID: 35316197] [PMCID: PMC9478992] [DOI: 10.1109/jbhi.2022.3160098]
Abstract
During the past decades, many automated image analysis methods have been developed for colonoscopy. Real-time implementation of the most promising methods during colonoscopy has been tested in clinical trials, including several recent multi-center studies. All trials have shown results that may contribute to prevention of colorectal cancer. We summarize the past and present development of colonoscopy video analysis methods, focusing on two categories of artificial intelligence (AI) technologies used in clinical trials. These are (1) analysis and feedback for improving colonoscopy quality and (2) detection of abnormalities. Our survey includes methods that use traditional machine learning algorithms on carefully designed hand-crafted features as well as recent deep-learning methods. Lastly, we present the gap between current state-of-the-art technology and desirable clinical features and conclude with future directions of endoscopic AI technology development that will bridge the current gap.
22
Medical Image Classification Using Transfer Learning and Chaos Game Optimization on the Internet of Medical Things. Comput Intell Neurosci 2022; 2022:9112634. [PMID: 35875781] [PMCID: PMC9300353] [DOI: 10.1155/2022/9112634]
Abstract
The Internet of Medical Things (IoMT) has dramatically benefited medical care by making professionals accessible to patients and physicians from all regions. Although the automatic detection and prediction of diseases such as melanoma and leukemia is still being investigated in IoMT, existing approaches have not achieved a high degree of efficiency. An approach that provides better results would let patients access adequate treatment earlier and reduce the death rate. This paper therefore introduces an IoMT proposal for medical image classification that may be used anywhere, i.e., a ubiquitous approach. It was designed in two stages: first, a transfer learning (TL)-based method, built on MobileNetV3, is employed for feature extraction; second, chaos game optimization (CGO) is used for feature selection, with the aim of excluding unnecessary features and improving performance, which is key in IoMT. Our methodology was evaluated on the ISIC-2016, PH2, and Blood-Cell datasets. The experimental results indicate that the proposed approach achieved an accuracy of 88.39% on ISIC-2016, 97.52% on PH2, and 88.79% on Blood-Cell. Moreover, our approach outperformed other existing methods on the metrics employed.
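The two-stage pipeline described above (a frozen deep extractor, then metaheuristic feature selection) can be illustrated with a toy wrapper objective. Everything here is a hypothetical stand-in: exhaustive search over binary masks replaces the CGO metaheuristic, a leave-one-out nearest-centroid classifier replaces the paper's evaluator, and hand-made 3-feature vectors replace MobileNetV3 embeddings.

```python
# Wrapper-style feature selection: score a binary feature mask by the
# leave-one-out accuracy of a simple classifier on the masked features.
from itertools import product

X = [[0.9, 0.1, 0.5], [0.8, 0.2, 0.4],   # class 0: features 0 and 1 informative
     [0.1, 0.9, 0.5], [0.2, 0.8, 0.6]]   # class 1: feature 2 is noise
y = [0, 0, 1, 1]

def accuracy(mask):
    """Leave-one-out nearest-centroid accuracy using only masked features."""
    if not any(mask):
        return 0.0
    correct = 0
    for i in range(len(X)):
        # Class centroids computed without the held-out sample i.
        centroids = {}
        for c in set(y):
            rows = [X[j] for j in range(len(X)) if y[j] == c and j != i]
            centroids[c] = [sum(r[f] for r in rows) / len(rows) for f in range(3)]
        pred = min(centroids, key=lambda c: sum(
            mask[f] * (X[i][f] - centroids[c][f]) ** 2 for f in range(3)))
        correct += pred == y[i]
    return correct / len(X)

# Exhaustive search over all masks stands in for the CGO metaheuristic,
# whose role is to find a high-scoring mask without enumerating them all.
best = max(product([0, 1], repeat=3), key=accuracy)
```

A metaheuristic earns its keep when the feature count (e.g. hundreds of MobileNetV3 features) makes the 2^n enumeration above infeasible.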
23
Impact of Penetration and Image Analysis in Optical Coherence Tomography on the Measurement of Choroidal Vascularity Parameters. Retina 2022; 42:1965-1974. [DOI: 10.1097/iae.0000000000003547]