1. Nguyen D, Balagopal A, Bai T, Dohopolski M, Lin MH, Jiang S. Prior guided deep difference meta-learner for fast adaptation to stylized segmentation. Mach Learn Sci Technol 2025; 6:025016. PMID: 40247921; PMCID: PMC12001319; DOI: 10.1088/2632-2153/adc970.
Abstract
Radiotherapy treatment planning requires segmenting anatomical structures in various styles, influenced by guidelines, protocols, clinician preferences, or dose-planning needs. Deep learning-based auto-segmentation models, trained on anatomical definitions, may not match local clinicians' styles at new institutions, and adapting these models can be challenging without sufficient resources. We hypothesize that consistent differences between segmentation styles and anatomical definitions can be learned from initial patients and applied to pre-trained models for more precise segmentation. We propose a Prior-guided deep difference meta-learner (DDL) to learn and adapt these differences. We collected data from 440 patients for model development and 30 for testing. The dataset includes contours of the prostate clinical target volume (CTV), parotid, and rectum. We developed a deep learning framework that segments new images in a matching style, using example styles as a prior, without model retraining. The pre-trained segmentation models were adapted to three different clinician styles for post-operative prostate CTV, parotid gland, and rectum segmentation. We tested the model's ability to learn unseen styles and compared its performance with transfer learning, using varying amounts of prior patient style data (0-10 patients). Performance was quantitatively evaluated using the Dice similarity coefficient (DSC) and Hausdorff distance. After exposure to only three patients, the average DSC (%) improved from 78.6, 71.9, 63.0, 69.6, 52.2, and 46.3 to 84.4, 77.8, 73.0, 77.8, 70.5, and 68.1 for CTV_style1, CTV_style2, CTV_style3, Parotid_superficial, Rectum_superior, and Rectum_posterior, respectively. The proposed Prior-guided DDL is a fast and effortless network for adapting a structure to new styles. The improved segmentation accuracy may reduce contour-editing time, providing a more efficient and streamlined clinical workflow.
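As a point of reference for the two evaluation metrics named above, the sketch below computes the DSC and a symmetric Hausdorff distance for binary NumPy masks; it is a generic illustration under the usual definitions, not the authors' evaluation code.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_similarity(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def hausdorff(pred: np.ndarray, truth: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the foreground point sets."""
    p, t = np.argwhere(pred), np.argwhere(truth)
    return max(directed_hausdorff(p, t)[0], directed_hausdorff(t, p)[0])

# toy example: two overlapping squares
a = np.zeros((64, 64), dtype=bool); a[10:40, 10:40] = True
b = np.zeros((64, 64), dtype=bool); b[15:45, 15:45] = True
print(dice_similarity(a, b), hausdorff(a, b))
```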
Affiliation(s)
- Dan Nguyen, Anjali Balagopal, Ti Bai, Michael Dohopolski, Mu-Han Lin, Steve Jiang: Medical Artificial Intelligence and Automation (MAIA) Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
2. Fayemiwo M, Gardiner B, Harkin J, McDaid L, Prakash P, Dennedy M. A Novel Pipeline for Adrenal Gland Segmentation: Integration of a Hybrid Post-Processing Technique with Deep Learning. J Imaging Inform Med 2025. PMID: 40038136; DOI: 10.1007/s10278-025-01449-y.
Abstract
Accurate segmentation of adrenal glands from CT images is essential for enhancing computer-aided diagnosis and surgical planning. However, the small size, irregular shape, and proximity to surrounding tissues make this task highly challenging. This study introduces a novel pipeline that significantly improves the segmentation of left and right adrenal glands by integrating advanced pre-processing techniques and a robust post-processing framework. Utilising a 2D UNet architecture with various backbones (VGG16, ResNet34, InceptionV3), the pipeline leverages test-time augmentation (TTA) and targeted removal of unconnected regions to enhance accuracy and robustness. Our results demonstrate a substantial improvement, with a 38% increase in the Dice similarity coefficient for the left adrenal gland and an 11% increase for the right adrenal gland on the AMOS dataset, achieved by the InceptionV3 model. Additionally, the pipeline significantly reduces false positives, underscoring its potential for clinical applications and its superiority over existing methods. These advancements make our approach a crucial contribution to the field of medical image segmentation.
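To make the two post-processing ideas above concrete, here is a minimal sketch of test-time augmentation by flip averaging and removal of unconnected regions by keeping the largest connected component; `model` is a hypothetical segmentation callable, and the authors' actual augmentations and region-removal criteria may differ.

```python
import numpy as np
from scipy import ndimage

def tta_predict(model, image: np.ndarray) -> np.ndarray:
    """Average the model's probability maps over horizontal/vertical flips."""
    preds = []
    for axis in (None, 0, 1):
        x = image if axis is None else np.flip(image, axis=axis)
        p = model(x)  # placeholder: returns a probability map shaped like x
        preds.append(p if axis is None else np.flip(p, axis=axis))
    return np.mean(preds, axis=0)

def keep_largest_component(mask: np.ndarray) -> np.ndarray:
    """Drop unconnected regions, keeping only the largest foreground blob."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)

# toy demonstration with an identity "model"
prob = tta_predict(lambda x: x, np.random.rand(128, 128))
clean_mask = keep_largest_component(prob > 0.5)
```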
Affiliation(s)
- Michael Fayemiwo, Bryan Gardiner, Jim Harkin, Liam McDaid: School of Computing, Engineering, and Intelligent Systems, Ulster University, Londonderry, Northern Ireland, UK
- Punit Prakash: Mike Wiegers Department of Electrical and Computer Engineering, Kansas State University, Manhattan, KS, USA
- Michael Dennedy: School of Medicine, National University of Ireland, Galway, Ireland
3. Ilan Y. The Constrained Disorder Principle Overcomes the Challenges of Methods for Assessing Uncertainty in Biological Systems. J Pers Med 2024; 15:10. PMID: 39852203; PMCID: PMC11767140; DOI: 10.3390/jpm15010010.
Abstract
Different disciplines are developing various methods for determining and dealing with uncertainties in complex systems. The constrained disorder principle (CDP) accounts for the randomness, variability, and uncertainty that characterize biological systems and are essential for their proper function. Per the CDP, intrinsic unpredictability is mandatory for the dynamicity of biological systems under continuously changing internal and external perturbations. The present paper describes some of the parameters and challenges associated with uncertainty and randomness in biological systems and presents methods for quantifying them. Modeling biological systems necessitates accounting for their randomness, variability, and underlying uncertainty in health and disease. The CDP provides a scheme for dealing with uncertainty in biological systems and sets the basis for making use of these uncertainties. This paper presents the CDP-based second-generation artificial intelligence system, which incorporates variability to improve the effectiveness of medical interventions, and describes the digital pill, which comprises algorithm-based personalized treatment regimens regulated by closed-loop systems based on personalized signatures of variability. The CDP provides a method for using uncertainties in complex systems in an outcome-based manner.
Affiliation(s)
- Yaron Ilan: Department of Medicine, Hadassah Medical Center, Faculty of Medicine, Hebrew University, Jerusalem 9112102, Israel
4. Wang L, Sun R, Wei X, Chen J, Jia S, Wu G, Nie S. Enhancing prostate cancer segmentation on multiparametric magnetic resonance imaging with background information and gland masks. Med Phys 2024; 51:8179-8191. PMID: 39134025; DOI: 10.1002/mp.17346.
Abstract
BACKGROUND The landscape of prostate cancer (PCa) segmentation within multiparametric magnetic resonance imaging (MP-MRI) is fragmented, with a noticeable lack of consensus on incorporating background details, culminating in inconsistent segmentation outputs. Given the complex and heterogeneous nature of PCa, conventional imaging segmentation algorithms frequently fall short, prompting the need for specialized research and refinement. PURPOSE This study sought to dissect and compare various segmentation methods, emphasizing the role of background information and gland masks in achieving superior PCa segmentation. The goal was to systematically refine segmentation networks to ascertain the most efficacious approach. METHODS A cohort of 232 patients (aged 61-73 years, prostate-specific antigen: 3.4-45.6 ng/mL) who had undergone MP-MRI followed by prostate biopsies was analyzed. An advanced segmentation model, Attention-Unet, which combines U-Net with attention gates, was employed for training and validation. The model was further enhanced through a multiscale module and a composite loss function, culminating in the development of Matt-Unet. Performance metrics included the Dice similarity coefficient (DSC) and accuracy (ACC). RESULTS The Matt-Unet model, which integrated background information and gland masks, outperformed the baseline U-Net model using raw images, yielding significant gains (DSC: 0.7215 vs. 0.6592; ACC: 0.8899 vs. 0.8601, p < 0.001). CONCLUSION A targeted and practical PCa segmentation method was designed that significantly improves PCa segmentation on MP-MRI by combining background information and gland masks. The Matt-Unet model showcased promising capabilities for effectively delineating PCa, enhancing the precision of MP-MRI analysis.
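The composite loss mentioned above is not spelled out in the abstract; a common choice consistent with the description is a weighted sum of binary cross-entropy and soft Dice loss, sketched here in PyTorch as an assumption rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def composite_loss(logits: torch.Tensor, target: torch.Tensor,
                   alpha: float = 0.5, smooth: float = 1.0) -> torch.Tensor:
    """Weighted sum of binary cross-entropy and soft Dice loss."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    probs = torch.sigmoid(logits)
    dice = (2 * (probs * target).sum() + smooth) / (probs.sum() + target.sum() + smooth)
    return alpha * bce + (1 - alpha) * (1 - dice)

# toy check on a random batch of two single-channel masks
loss = composite_loss(torch.randn(2, 1, 64, 64),
                      torch.randint(0, 2, (2, 1, 64, 64)).float())
```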
Affiliation(s)
- Lei Wang, Rong Sun, Shengdong Nie: School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Xiaobin Wei: Department of Urology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Jie Chen: Department of Radiology, Huangpu Branch, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shouqiang Jia: Jinan People's Hospital Affiliated to Shandong First Medical University, Shandong, China
- Guangyu Wu: Department of Radiology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
5. Ramacciotti LS, Hershenhouse JS, Mokhtar D, Paralkar D, Kaneko M, Eppler M, Gill K, Mogoulianitis V, Duddalwar V, Abreu AL, Gill I, Cacciamani GE. Comprehensive Assessment of MRI-based Artificial Intelligence Frameworks Performance in the Detection, Segmentation, and Classification of Prostate Lesions Using Open-Source Databases. Urol Clin North Am 2024; 51:131-161. PMID: 37945098; DOI: 10.1016/j.ucl.2023.08.003.
Abstract
Numerous MRI-based artificial intelligence (AI) frameworks have been designed for prostate cancer lesion detection, segmentation, and classification, motivated by the intrareader and interreader variability inherent to traditional interpretation. Open-source datasets have been released with the intention of providing freely available MRIs for testing diverse AI frameworks on automated or semiautomated tasks. Here, an in-depth assessment of the performance of MRI-based AI frameworks for detecting, segmenting, and classifying prostate lesions using open-source databases was performed. Among 17 datasets, 12 were specific to prostate cancer detection/classification, with 52 studies meeting the inclusion criteria.
Affiliation(s)
- Lorenzo Storino Ramacciotti, Jacob S Hershenhouse, Daniel Mokhtar, Divyangi Paralkar, Masatomo Kaneko, Michael Eppler, Karanvir Gill, Andre L Abreu, Inderbir Gill, Giovanni E Cacciamani: USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Masatomo Kaneko: also at the Department of Urology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Vasileios Mogoulianitis: Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
- Vinay Duddalwar: Department of Radiology, University of Southern California, Los Angeles, CA, USA
- Andre L Abreu, Giovanni E Cacciamani: also at the Department of Radiology, University of Southern California, Los Angeles, CA, USA
6. Sharma P, Nayak DR, Balabantaray BK, Tanveer M, Nayak R. A survey on cancer detection via convolutional neural networks: Current challenges and future directions. Neural Netw 2024; 169:637-659. PMID: 37972509; DOI: 10.1016/j.neunet.2023.11.006.
Abstract
Cancer is a condition in which abnormal cells divide uncontrollably and damage body tissues; hence, detecting cancer at an early stage is essential. Currently, medical images play an indispensable role in detecting various cancers; however, manual interpretation of these images by radiologists is observer-dependent, time-consuming, and tedious. An automatic decision-making process is thus essential for cancer detection and diagnosis. This paper presents a comprehensive survey on automated cancer detection in various human body organs, namely the breast, lung, liver, prostate, brain, skin, and colon, using convolutional neural networks (CNNs) and medical imaging techniques. It also includes a brief discussion of state-of-the-art deep learning-based cancer detection methods, their outcomes, and the medical imaging data used. Finally, the datasets used for cancer detection, the limitations of existing solutions, and future trends and challenges in this domain are discussed. The goal of this paper is to provide comprehensive and insightful information to researchers interested in developing CNN-based models for cancer detection.
Affiliation(s)
- Pallabi Sharma: School of Computer Science, UPES, Dehradun, 248007, Uttarakhand, India
- Deepak Ranjan Nayak: Department of Computer Science and Engineering, Malaviya National Institute of Technology, Jaipur, 302017, Rajasthan, India
- Bunil Kumar Balabantaray: Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, 793003, Meghalaya, India
- M Tanveer: Department of Mathematics, Indian Institute of Technology Indore, Simrol, 453552, Indore, India
- Rajashree Nayak: School of Applied Sciences, Birla Global University, Bhubaneswar, 751029, Odisha, India
7. Jeganathan T, Salgues E, Schick U, Tissot V, Fournier G, Valéri A, Nguyen TA, Bourbonne V. Inter-Rater Variability of Prostate Lesion Segmentation on Multiparametric Prostate MRI. Biomedicines 2023; 11:3309. PMID: 38137530; PMCID: PMC10741937; DOI: 10.3390/biomedicines11123309.
Abstract
INTRODUCTION External radiotherapy is a major treatment for localized prostate cancer (PCa). Dose escalation to the whole prostate gland increases biochemical relapse-free survival but also acute and late toxicities. Dose escalation to the dominant index lesion (DIL) only is of growing interest, but it requires a robust delineation of the DIL. In this context, we aimed to evaluate the inter-observer variability of DIL delineation. MATERIAL AND METHODS Two junior radiologists and a senior radiation oncologist delineated DILs on 64 mpMRIs of patients with histologically confirmed PCa. For each mpMRI and each reader, eight individual DIL segmentations were produced, blinded from one another: one for each of the T2, apparent diffusion coefficient (ADC), b2000, and dynamic contrast-enhanced (DCE) sequences analyzed individually, and one for each of the combined analyses (T2 + ADC, T2 + ADC + b2000, T2 + ADC + DCE, and T2 + ADC + b2000 + DCE). Delineation variability was assessed using the DICE coefficient, Jaccard index, Hausdorff distance, and mean distance to agreement. RESULTS The T2, ADC, T2 + ADC, b2000, T2 + ADC + b2000, T2 + ADC + DCE, and T2 + ADC + b2000 + DCE analyses obtained DICE coefficients of 0.51, 0.50, 0.54, 0.52, 0.54, 0.55, and 0.53, respectively, all significantly higher than the perfusion (DCE) sequence alone (0.35, p < 0.001). Analysis of the other similarity metrics led to similar results. Tumor volume and PI-RADS classification were positively correlated with the DICE scores. CONCLUSION Our study showed that the contours of prostatic lesions were more reproducible on certain sequences, but confirmed the great variability of prostatic contours, with a maximum DICE coefficient of 0.55 (joint analysis of the T2, ADC, and perfusion sequences).
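For readers unfamiliar with the agreement metrics used here, the sketch below computes the Jaccard index for one reader pair and the mean pairwise DICE over all readers of a lesion; the function names are illustrative, and the study's own tooling is not described in the abstract.

```python
import numpy as np
from itertools import combinations

def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    """Jaccard index (intersection over union) of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def mean_pairwise_dice(masks: list) -> float:
    """Average DICE over every pair of readers' masks for one lesion."""
    scores = []
    for a, b in combinations(masks, 2):
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        scores.append(2 * np.logical_and(a, b).sum() / denom if denom else 1.0)
    return float(np.mean(scores))

readers = [np.random.rand(64, 64) > 0.5 for _ in range(3)]
print(jaccard(readers[0], readers[1]), mean_pairwise_dice(readers))
```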
Affiliation(s)
- Thibaut Jeganathan, Emile Salgues, Valentin Tissot: Radiology Department, University Hospital, 29200 Brest, France
- Ulrike Schick, Vincent Bourbonne: Radiation Oncology Department, University Hospital, 29200 Brest, France; INSERM, LaTIM UMR 1101, University of Western Brittany, 29238 Brest, France
- Georges Fournier, Antoine Valéri, Truong-An Nguyen: INSERM, LaTIM UMR 1101, University of Western Brittany, 29238 Brest, France; Urology Department, University Hospital, 29200 Brest, France
8. Obuchowicz R, Nurzynska K, Pierzchala M, Piorkowski A, Strzelecki M. Texture Analysis for the Bone Age Assessment from MRI Images of Adolescent Wrists in Boys. J Clin Med 2023; 12:2762. PMID: 37109098; PMCID: PMC10141677; DOI: 10.3390/jcm12082762.
Abstract
Currently, bone age is assessed from X-rays. It enables evaluation of a child's development and is an important diagnostic factor. However, it is not sufficient on its own to diagnose a specific disease, because diagnoses and prognoses depend on how much a given case deviates from bone-age norms. BACKGROUND Using magnetic resonance imaging (MRI) to assess patient age would extend diagnostic possibilities, and the bone age test could then become a routine screening test. Changing the method of determining bone age would also spare the patient a dose of ionizing radiation, making the test less invasive. METHODS Regions of interest containing the wrist area and the epiphyses of the radius were marked on MRI scans of the non-dominant hand of boys aged 9 to 17 years. Textural features were computed for these regions, on the assumption that the texture of the wrist image contains information about bone age. RESULTS The regression analysis revealed a high correlation between a patient's bone age and the MRI-derived textural features. For DICOM T1-weighted data, the best scores reached R2 = 0.94, RMSE = 0.46, MSE = 0.21, and MAE = 0.33. CONCLUSIONS The experiments showed that MRI gives reliable results in the assessment of bone age while not exposing the patient to ionizing radiation.
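The abstract does not list the specific textural features used; a typical texture-analysis pipeline of this kind extracts grey-level co-occurrence matrix (GLCM) statistics from each region of interest and regresses age on them, as in the sketch below (the feature choices and regression model are assumptions for illustration, not the paper's exact method).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.linear_model import LinearRegression

def glcm_features(roi: np.ndarray) -> np.ndarray:
    """Contrast, homogeneity, energy, and correlation from a GLCM."""
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# stand-in wrist ROIs and ages; real inputs would be MRI crops
rois = [np.random.randint(0, 256, (32, 32), dtype=np.uint8) for _ in range(20)]
ages = np.random.uniform(9, 17, size=20)
X = np.vstack([glcm_features(r) for r in rois])
model = LinearRegression().fit(X, ages)
print(model.score(X, ages))  # in-sample R^2, analogous to the reported metric
```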
Affiliation(s)
- Rafal Obuchowicz: Department of Diagnostic Imaging, Jagiellonian University Medical College, 31-008 Krakow, Poland
- Karolina Nurzynska: Department of Algorithmics and Software, Silesian University of Technology, 44-100 Gliwice, Poland
- Adam Piorkowski: Department of Biocybernetics and Biomedical Engineering, AGH University of Science and Technology, 30-059 Krakow, Poland
- Michal Strzelecki: Institute of Electronics, Lodz University of Technology, 93-590 Lodz, Poland
9. Ameen YA, Badary DM, Abonnoor AEI, Hussain KF, Sewisy AA. Which data subset should be augmented for deep learning? A simulation study using urothelial cell carcinoma histopathology images. BMC Bioinformatics 2023; 24:75. PMID: 36869300; PMCID: PMC9983182; DOI: 10.1186/s12859-023-05199-y.
Abstract
BACKGROUND Applying deep learning to digital histopathology is hindered by the scarcity of manually annotated datasets. While data augmentation can ameliorate this obstacle, its methods are far from standardized. Our aim was to systematically explore the effects of skipping data augmentation; applying data augmentation to different subsets of the whole dataset (training set, validation set, test set, two of them, or all of them); and applying data augmentation at different time points (before, during, or after dividing the dataset into three subsets). Different combinations of these possibilities resulted in 11 ways to apply augmentation. The literature contains no comprehensive systematic comparison of these augmentation approaches. RESULTS Non-overlapping photographs of all tissues on 90 hematoxylin-and-eosin-stained urinary bladder slides were obtained. They were then manually classified as inflammation (5948 images), urothelial cell carcinoma (5811 images), or invalid (3132 images; excluded). When applied, augmentation was eight-fold, by flipping and rotation. Four convolutional neural networks (Inception-v3, ResNet-101, GoogLeNet, and SqueezeNet), pre-trained on the ImageNet dataset, were fine-tuned to binary-classify images of our dataset; this task was the benchmark for our experiments. Model testing performance was evaluated using accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve. Model validation accuracy was also estimated. The best testing performance was achieved when augmentation was applied to the remaining data after test-set separation, but before division into training and validation sets. This leaked information between the training and validation sets, as evidenced by the optimistic validation accuracy, but the leakage did not cause the validation set to malfunction. Augmentation before test-set separation led to optimistic results, while test-set augmentation yielded more accurate evaluation metrics with less uncertainty. Inception-v3 had the best overall testing performance. CONCLUSIONS In digital histopathology, augmentation should include both the test set (after its allocation) and the remaining combined training/validation set (before being split into separate training and validation sets). Future research should try to generalize our results.
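The ordering that the conclusions recommend is easy to get wrong in code; the sketch below shows it explicitly, allocating the test set first, augmenting it separately, and augmenting the remaining data before the train/validation split (toy data and an eight-fold flip/rotation augmenter stand in for the histopathology pipeline).

```python
import numpy as np
from sklearn.model_selection import train_test_split

def augment(images, labels):
    """Eight-fold augmentation: four 90-degree rotations, each also flipped."""
    out_x, out_y = [], []
    for img, lab in zip(images, labels):
        for k in range(4):
            r = np.rot90(img, k)
            out_x += [r, np.fliplr(r)]
            out_y += [lab, lab]
    return np.array(out_x), np.array(out_y)

rng = np.random.default_rng(0)
X, y = rng.random((100, 32, 32)), rng.integers(0, 2, 100)

# 1) allocate the test set first, then augment it on its own
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, stratify=y)
X_test_aug, y_test_aug = augment(X_test, y_test)

# 2) augment the remaining data BEFORE the train/validation split,
#    reproducing the best-performing ordering reported above
X_rest_aug, y_rest_aug = augment(X_rest, y_rest)
X_train, X_val, y_train, y_val = train_test_split(X_rest_aug, y_rest_aug,
                                                  test_size=0.25)
```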
Affiliation(s)
- Yusra A Ameen, Khaled F Hussain, Adel A Sewisy: Department of Computer Science, Faculty of Computers and Information, Assiut University, Asyut, Egypt
- Dalia M Badary: Department of Pathology, Faculty of Medicine, Assiut University, Asyut, Egypt
10. Shamrat FJM, Azam S, Karim A, Ahmed K, Bui FM, De Boer F. High-precision multiclass classification of lung disease through customized MobileNetV2 from chest X-ray images. Comput Biol Med 2023; 155:106646. PMID: 36805218; DOI: 10.1016/j.compbiomed.2023.106646.
Abstract
In this study, multiple lung diseases are diagnosed using a neural network approach. Specifically, Emphysema, Infiltration, Mass, Pleural Thickening, Pneumonia, Pneumothorax, Atelectasis, Edema, Effusion, Hernia, Cardiomegaly, Pulmonary Fibrosis, Nodule, and Consolidation are studied using the ChestX-ray14 dataset. A proposed fine-tuned MobileLungNetV2 model is employed for the analysis. Initially, the X-ray images are pre-processed with CLAHE to increase image contrast; additionally, a Gaussian filter is applied to denoise the images, and data augmentation methods are used. The pre-processed images are fed into several transfer learning models, such as InceptionV3, AlexNet, DenseNet121, VGG19, and MobileNetV2. Among these models, MobileNetV2 achieved the highest overall accuracy, 91.6%, in classifying lesions on chest X-ray images. This model was then fine-tuned to produce the optimised MobileLungNetV2 model. On the pre-processed data, the fine-tuned MobileLungNetV2 achieves a classification accuracy of 96.97%. From a confusion matrix over all classes, the model attains overall precision, recall, and specificity of 96.71%, 96.83%, and 99.78%, respectively. The study uses Grad-CAM outputs to produce heatmaps of the regions driving disease detection. The proposed model shows promising results in classifying multiple lesions on chest X-ray images.
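The CLAHE-plus-Gaussian pre-processing step described above maps directly onto standard OpenCV calls; the clip limit, tile grid, and kernel size below are illustrative defaults, not the values used in the study.

```python
import cv2
import numpy as np

def preprocess_cxr(img: np.ndarray) -> np.ndarray:
    """CLAHE to boost local contrast, then a Gaussian filter to denoise."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(img)  # img: single-channel uint8 chest X-ray
    return cv2.GaussianBlur(enhanced, ksize=(5, 5), sigmaX=0)

x = np.random.randint(0, 256, (224, 224), dtype=np.uint8)  # stand-in X-ray
x_pre = preprocess_cxr(x)
```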
Affiliation(s)
- FM Javed Mehedi Shamrat: Department of Software Engineering, Daffodil International University, Birulia, 1216, Dhaka, Bangladesh
- Sami Azam, Asif Karim, Friso De Boer: Faculty of Science and Technology, Charles Darwin University, Casuarina, NT 0909, Australia
- Kawsar Ahmed: Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK S7N 5A9, Canada; Group of Bio-photomatiχ, Department of Information and Communication Technology, Mawlana Bhashani Science and Technology University, Tangail, 1902, Bangladesh
- Francis M Bui: Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK S7N 5A9, Canada
11. Bratchenko IA, Bratchenko LA, Khristoforova YA, Moryatov AA, Kozlov SV, Zakharov VP. Classification of skin cancer using convolutional neural networks analysis of Raman spectra. Comput Methods Programs Biomed 2022; 219:106755. PMID: 35349907; DOI: 10.1016/j.cmpb.2022.106755.
Abstract
BACKGROUND AND OBJECTIVE Skin cancer is the most common malignancy in whites, accounting for about one-third of all cancers diagnosed per year. Portable Raman spectroscopy setups for skin cancer "optical biopsy" detect tumors based on spectral features caused by the comparative presence of different chemical components. However, the low signal-to-noise ratio in such systems may prevent accurate tumor classification, so developing methods for efficient skin tumor classification remains a challenge. METHODS We compare the performance of convolutional neural networks and projection on latent structures with discriminant analysis for discriminating skin cancer by analyzing Raman spectra with a high autofluorescence background stimulated by a 785 nm laser. We registered the spectra of 617 skin neoplasm cases (615 patients; 70 melanomas, 122 basal cell carcinomas, 12 squamous cell carcinomas, and 413 benign tumors) in vivo with a portable Raman setup and created classification models for both the convolutional neural network and the projection on latent structures approaches. To check the stability of the classification models, 10-fold cross-validation was performed for all created models. To avoid overfitting, the data were divided into a training set (80% of the spectral dataset) and a test set (20% of the spectral dataset). RESULTS The results for different classification tasks demonstrate that the convolutional neural networks significantly (p < 0.01) outperform the projection on latent structures. For the convolutional neural network implementation, we obtained ROC AUCs of 0.96 (95% CI 0.94-0.97), 0.90 (95% CI 0.85-0.94), and 0.92 (95% CI 0.87-0.97) for classifying (a) malignant vs. benign tumors, (b) melanomas vs. pigmented tumors, and (c) melanomas vs. seborrheic keratosis, respectively. CONCLUSIONS The performance of convolutional neural network classification of skin tumors based on Raman spectra is higher than or comparable to the accuracy of trained dermatologists. The increased accuracy with the convolutional neural network implementation is due to more precise accounting of low-intensity Raman bands in the intense autofluorescence background. The high performance achieved in skin tumor classification with convolutional neural network analysis opens the possibility of wide implementation of Raman setups in clinical settings.
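Raman spectra are one-dimensional signals, so the convolutional models compared above are typically 1D CNNs; a minimal PyTorch sketch of such a classifier follows (the layer sizes are illustrative, not the architecture used in the paper).

```python
import torch
import torch.nn as nn

class SpectraCNN(nn.Module):
    """Small 1D CNN for binary classification of Raman spectra."""
    def __init__(self, width: int = 16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, width, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(width, 2 * width, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(2 * width, 2)

    def forward(self, x):  # x: (batch, 1, n_wavenumbers)
        return self.classifier(self.features(x).squeeze(-1))

logits = SpectraCNN()(torch.randn(8, 1, 1024))  # 8 spectra, 1024 points each
```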
Affiliation(s)
- Ivan A Bratchenko, Lyudmila A Bratchenko, Yulia A Khristoforova, Valery P Zakharov: Department of Laser and Biotechnical Systems, Samara University, 34 Moskovskoe Shosse, Samara, 443086, Russian Federation
- Alexander A Moryatov, Sergey V Kozlov: Department of Oncology, Samara State Medical University, 159 Tashkentskaya Street, Samara, 443095, Russian Federation; Department of Visual Localization Tumors, Samara Regional Clinical Oncology Dispensary, 50 Solnechnaya Street, Samara, 443095, Russian Federation
12. Shamrat FMJM, Azam S, Karim A, Islam R, Tasnim Z, Ghosh P, De Boer F. LungNet22: A Fine-Tuned Model for Multiclass Classification and Prediction of Lung Disease Using X-ray Images. J Pers Med 2022; 12:680. PMID: 35629103; PMCID: PMC9143659; DOI: 10.3390/jpm12050680.
Abstract
In recent years, lung disease has increased manyfold, causing millions of deaths annually. To combat the crisis, an efficient, reliable, and affordable lung disease diagnosis technique has become indispensable. In this study, a multiclass classification of lung disease from frontal chest X-ray imaging using a fine-tuned CNN model is proposed. The classification covers 10 classes: COVID-19, Effusion, Tuberculosis, Pneumonia, Lung Opacity, Mass, Nodule, Pneumothorax, and Pulmonary Fibrosis, along with the Normal class. The dataset is a collective dataset gathered from multiple sources. After pre-processing and balancing the dataset with eight augmentation techniques, a total of 80,000 X-ray images were fed to the model for classification. Initially, eight pre-trained CNN models (AlexNet, GoogLeNet, InceptionV3, MobileNetV2, VGG16, ResNet50, DenseNet121, and EfficientNetB7) were employed on the dataset; among these, VGG16 achieved the highest accuracy, at 92.95%. To further improve the classification accuracy, LungNet22 was constructed upon the primary structure of the VGG16 model, with an ablation study used to determine the hyper-parameters. Using the Adam optimizer, the proposed model achieved an accuracy of 98.89%. To verify the performance of the model, several performance metrics, including the ROC curve and AUC values, were also computed.
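LungNet22 builds on VGG16 via transfer learning; the generic fine-tuning pattern it follows can be sketched in Keras as below (the classification head, frozen backbone, and learning rate are illustrative assumptions, not the published LungNet22 configuration).

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# ImageNet-pretrained VGG16 backbone with the original classifier removed
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # freeze convolutional weights for initial training

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(10, activation="softmax"),  # 10 classes incl. Normal
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```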
Affiliation(s)
- F. M. Javed Mehedi Shamrat, Zarrin Tasnim: Department of Software Engineering, Daffodil International University, Dhaka 1207, Bangladesh
- Sami Azam, Asif Karim, Friso De Boer: College of Engineering, IT and Environment, Charles Darwin University, Casuarina, NT 0909, Australia
- Rakibul Islam: Department of Computer Science and Engineering, Daffodil International University, Dhaka 1207, Bangladesh
- Pronab Ghosh: Department of Computer Science (CS), Lakehead University, 955 Oliver Rd, Thunder Bay, ON P7B 5E1, Canada
13. Li H, Lee CH, Chia D, Lin Z, Huang W, Tan CH. Machine Learning in Prostate MRI for Prostate Cancer: Current Status and Future Opportunities. Diagnostics (Basel) 2022; 12:289. PMID: 35204380; PMCID: PMC8870978; DOI: 10.3390/diagnostics12020289.
Abstract
Advances in our understanding of the role of magnetic resonance imaging (MRI) in the detection of prostate cancer have enabled its integration into clinical routines over the past two decades. The Prostate Imaging Reporting and Data System (PI-RADS) is an established imaging-based scoring system that scores the probability of clinically significant prostate cancer on MRI to guide management. Image fusion technology combines the superior soft-tissue contrast resolution of MRI with real-time anatomical depiction using ultrasound or computed tomography, allowing accurate mapping of prostate cancer for targeted biopsy and treatment. Machine learning provides vast opportunities for automated organ and lesion depiction that could increase the reproducibility of PI-RADS categorisation and improve co-registration across imaging modalities, enhancing diagnostic and treatment methods that can then be individualised based on clinical risk of malignancy. In this article, we provide a comprehensive and contemporary review of advancements and share insights into new opportunities in this field.
Affiliation(s)
- Huanye Li, Zhiping Lin: School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Chau Hung Lee: Department of Diagnostic Radiology, Tan Tock Seng Hospital, Singapore 308433, Singapore
- David Chia: Department of Radiation Oncology, National University Cancer Institute (NUH), Singapore 119074, Singapore
- Weimin Huang: Institute for Infocomm Research, A*Star, Singapore 138632, Singapore
- Cher Heng Tan: Department of Diagnostic Radiology, Tan Tock Seng Hospital, Singapore 308433, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 639798, Singapore