1. Villegas F, Dal Bello R, Alvarez-Andres E, Dhont J, Janssen T, Milan L, Robert C, Salagean GAM, Tejedor N, Trnková P, Fusella M, Placidi L, Cusumano D. Challenges and opportunities in the development and clinical implementation of artificial intelligence based synthetic computed tomography for magnetic resonance only radiotherapy. Radiother Oncol 2024; 198:110387. PMID: 38885905. DOI: 10.1016/j.radonc.2024.110387.
Abstract
Synthetic computed tomography (sCT) generated from magnetic resonance imaging (MRI) can serve as a substitute for planning CT in radiation therapy (RT), thereby removing the registration uncertainties associated with multi-modality image pairing and reducing costs and patient radiation exposure. CE/FDA-approved sCT solutions are now available for the pelvis, brain, and head and neck, while more complex deep learning (DL) algorithms are under investigation for other anatomical sites. The main challenge in achieving widespread clinical implementation of sCT lies in the absence of consensus on sCT commissioning and quality assurance (QA), resulting in variation of sCT approaches across hospitals. To address this issue, a group of experts gathered at the ESTRO Physics Workshop 2022 to discuss the integration of sCT solutions into the clinic; this position paper reports the process and its outcomes, focusing on aspects of sCT development and commissioning and outlining the key elements crucial for the safe implementation of an MRI-only RT workflow.
Affiliation(s)
- Fernanda Villegas: Department of Oncology-Pathology, Karolinska Institute, Solna, Sweden; Radiotherapy Physics and Engineering, Medical Radiation Physics and Nuclear Medicine, Karolinska University Hospital, Solna, Sweden
- Riccardo Dal Bello: Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich, Switzerland
- Emilie Alvarez-Andres: OncoRay - National Center for Radiation Research in Oncology, Medical Faculty and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Helmholtz-Zentrum Dresden-Rossendorf, Dresden, Germany; Faculty of Medicine Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany
- Jennifer Dhont: Université libre de Bruxelles (ULB), Hôpital Universitaire de Bruxelles (H.U.B), Institut Jules Bordet, Department of Medical Physics, Brussels, Belgium; Université libre de Bruxelles (ULB), Radiophysics and MRI Physics Laboratory, Brussels, Belgium
- Tomas Janssen: Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Lisa Milan: Medical Physics Unit, Imaging Institute of Southern Switzerland (IIMSI), Ente Ospedaliero Cantonale, Bellinzona, Switzerland
- Charlotte Robert: UMR 1030 Molecular Radiotherapy and Therapeutic Innovations, ImmunoRadAI, Paris-Saclay University, Institut Gustave Roussy, Inserm, Villejuif, France; Department of Radiation Oncology, Gustave Roussy, Villejuif, France
- Ghizela-Ana-Maria Salagean: Faculty of Physics, Babes-Bolyai University, Cluj-Napoca, Romania; Department of Radiation Oncology, TopMed Medical Centre, Targu Mures, Romania
- Natalia Tejedor: Department of Medical Physics and Radiation Protection, Hospital de la Santa Creu i Sant Pau, Barcelona, Spain
- Petra Trnková: Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria
- Marco Fusella: Department of Radiation Oncology, Abano Terme Hospital, Italy
- Lorenzo Placidi: Fondazione Policlinico Universitario Agostino Gemelli, IRCCS, Department of Diagnostic Imaging, Oncological Radiotherapy and Hematology, Rome, Italy
- Davide Cusumano: Mater Olbia Hospital, Strada Statale Orientale Sarda 125, Olbia, Sassari, Italy
2. Sherwani MK, Gopalakrishnan S. A systematic literature review: deep learning techniques for synthetic medical image generation and their applications in radiotherapy. Front Radiol 2024; 4:1385742. PMID: 38601888. PMCID: PMC11004271. DOI: 10.3389/fradi.2024.1385742.
Abstract
The aim of this systematic review is to determine whether deep learning (DL) algorithms can provide a clinically feasible alternative to classic algorithms for synthetic computed tomography (sCT). The following categories are covered: MR-based treatment planning and synthetic CT generation techniques; generation of synthetic CT images from cone-beam CT images; low-dose CT to high-dose CT generation; and attenuation correction for PET images. To perform appropriate database searches, we reviewed journal articles published between January 2018 and June 2023. We analyzed current methodologies, study designs, and results with relevant clinical applications while outlining the state of the art of DL-based approaches to inter-modality and intra-modality image synthesis, contrasting these methodologies with traditional research approaches. The key contributions of each category were highlighted, specific challenges were identified, and accomplishments were summarized. Finally, we analyzed the statistics of all the cited works from various aspects, which revealed that DL-based sCT has achieved considerable popularity and demonstrated the potential of this technology. To assess the clinical readiness of the presented methods, we examined the current status of DL-based sCT generation.
Affiliation(s)
- Moiz Khan Sherwani: Section for Evolutionary Hologenomics, Globe Institute, University of Copenhagen, Copenhagen, Denmark
3. Dayarathna S, Islam KT, Uribe S, Yang G, Hayat M, Chen Z. Deep learning based synthesis of MRI, CT and PET: Review and analysis. Med Image Anal 2024; 92:103046. PMID: 38052145. DOI: 10.1016/j.media.2023.103046.
Abstract
Medical image synthesis represents a critical area of research in clinical decision-making, aiming to overcome the challenges associated with acquiring multiple image modalities for an accurate clinical workflow. This approach proves beneficial in estimating an image of a desired modality from a given source modality among the most common medical imaging contrasts, such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron Emission Tomography (PET). However, translating between two image modalities presents difficulties due to the complex and non-linear domain mappings. Deep learning-based generative modelling has exhibited superior performance in synthetic image contrast applications compared to conventional image synthesis methods. This survey comprehensively reviews deep learning-based medical image translation from 2018 to 2023 for pseudo-CT, synthetic MR, and synthetic PET. We provide an overview of synthetic contrasts in medical imaging and the most frequently employed deep learning networks for medical image synthesis. Additionally, we conduct a detailed analysis of each synthesis method, focusing on their diverse model designs based on input domains and network architectures, and we examine novel architectures ranging from conventional CNNs to recent Transformer and Diffusion models. This analysis includes comparisons of loss functions, available datasets, anatomical regions, image quality assessments, and performance in downstream tasks. Finally, we discuss the challenges and identify solutions within the literature, suggesting possible future directions. We hope that the insights offered in this survey paper will serve as a valuable roadmap for researchers in the field of medical image synthesis.
Affiliation(s)
- Sanuwani Dayarathna: Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia
- Sergio Uribe: Department of Medical Imaging and Radiation Sciences, Faculty of Medicine, Monash University, Clayton VIC 3800, Australia
- Guang Yang: Bioengineering Department and Imperial-X, Imperial College London, W12 7SL, United Kingdom
- Munawar Hayat: Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia
- Zhaolin Chen: Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia; Monash Biomedical Imaging, Clayton VIC 3800, Australia
4. Schonfeld E, Mordekai N, Berg A, Johnstone T, Shah A, Shah V, Haider G, Marianayagam NJ, Veeravagu A. Machine Learning in Neurosurgery: Toward Complex Inputs, Actionable Predictions, and Generalizable Translations. Cureus 2024; 16:e51963. PMID: 38333513. PMCID: PMC10851045. DOI: 10.7759/cureus.51963.
Abstract
Machine learning can predict neurosurgical diagnoses and outcomes, power imaging analysis, and perform robotic navigation and tumor labeling. State-of-the-art models can reconstruct and generate images, predict surgical events from video, and assist in intraoperative decision-making. In this review, we detail the neurosurgical applications of machine learning, ranging from simple to advanced models, and their potential to transform patient care. As machine learning techniques, outputs, and methods become increasingly complex, their performance is often more impactful yet increasingly difficult to evaluate. We aim to introduce these advancements to the neurosurgical audience while highlighting major potential roadblocks to their safe and effective translation. Unlike with the previous generation of machine learning in neurosurgery, the safe translation of recent advancements will be contingent on neurosurgeons' involvement in model development and validation.
Affiliation(s)
- Ethan Schonfeld: Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Alex Berg: Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Thomas Johnstone: Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Aaryan Shah: School of Humanities and Sciences, Stanford University, Stanford, USA
- Vaibhavi Shah: Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Ghani Haider: Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Anand Veeravagu: Neurosurgery, Stanford University School of Medicine, Stanford, USA
5. McNaughton J, Fernandez J, Holdsworth S, Chong B, Shim V, Wang A. Machine Learning for Medical Image Translation: A Systematic Review. Bioengineering (Basel) 2023; 10:1078. PMID: 37760180. PMCID: PMC10525905. DOI: 10.3390/bioengineering10091078.
Abstract
BACKGROUND CT is often the first and only form of brain imaging performed to inform treatment plans for neurological patients, owing to its time- and cost-effectiveness. However, MR images give a more detailed picture of tissue structure and characteristics and are more likely to pick up abnormalities and lesions. The purpose of this paper is to review studies that use deep learning methods to generate synthetic medical images of modalities such as MRI and CT. METHODS A literature search was performed in March 2023, and relevant articles were selected and analyzed. The year of publication, dataset size, input modality, synthesized modality, deep learning architecture, motivations, and evaluation methods were analyzed. RESULTS A total of 103 studies were included in this review, all published since 2017. Of these, 74% investigated MRI to CT synthesis, and the remainder investigated CT to MRI, cross-MRI, PET to CT, and MRI to PET synthesis. Additionally, 58% of studies were motivated by synthesizing CT scans from MRI to perform MRI-only radiation therapy. Other motivations included synthesizing scans to aid diagnosis and completing datasets by synthesizing missing scans. CONCLUSIONS Considerably more research has been carried out on MRI to CT synthesis, despite CT to MRI synthesis yielding specific benefits. Medical image synthesis is limited by the size and availability of medical datasets, especially paired datasets of different modalities; it is therefore recommended that a global consortium be developed to obtain and share more datasets. Finally, it is recommended that work be carried out to establish all uses of synthesized medical scans in clinical practice and to identify which evaluation methods are suitable for assessing synthesized images for these needs.
Affiliation(s)
- Jake McNaughton: Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Justin Fernandez: Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand; Department of Engineering Science and Biomedical Engineering, University of Auckland, 3/70 Symonds Street, Auckland 1010, New Zealand
- Samantha Holdsworth: Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand; Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand; Mātai Medical Research Institute, 400 Childers Road, Tairāwhiti Gisborne 4010, New Zealand
- Benjamin Chong: Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand; Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand; Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Vickie Shim: Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand; Mātai Medical Research Institute, 400 Childers Road, Tairāwhiti Gisborne 4010, New Zealand
- Alan Wang: Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand; Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand; Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
6. Hooshangnejad H, Chen Q, Feng X, Zhang R, Ding K. deepPERFECT: Novel Deep Learning CT Synthesis Method for Expeditious Pancreatic Cancer Radiotherapy. Cancers (Basel) 2023; 15:3061. PMID: 37297023. PMCID: PMC10252954. DOI: 10.3390/cancers15113061.
Abstract
Major sources of delay in the standard-of-care RT workflow are the need for multiple appointments and separate image acquisitions. In this work, we addressed the question of how the workflow can be expedited by synthesizing planning CT from diagnostic CT. This idea is based on the premise that diagnostic CT could be used for RT planning, but in practice, separate planning CT is required because of differences in patient setup and acquisition techniques. We developed a generative deep learning model, deepPERFECT, trained to capture these differences and to generate deformation vector fields that transform diagnostic CT into preliminary planning CT. We performed detailed analysis from both an image quality and a dosimetric point of view, and showed that deepPERFECT enables the resulting preliminary plans to be used for early dosimetric assessment and evaluation.
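The core operation described above is warping a diagnostic CT with a predicted deformation vector field (DVF). The following is a minimal sketch of that warping step in isolation, assuming a DVF given in voxel units with one displacement component per axis; the function name, array shapes, and random stand-in data are illustrative assumptions, not the authors' implementation.

```python
# Sketch: apply a predicted DVF to a diagnostic CT volume (pull-back warp,
# i.e. sample the moving image at the displaced grid coordinates).
import numpy as np
from scipy.ndimage import map_coordinates

def warp_ct(ct: np.ndarray, dvf: np.ndarray) -> np.ndarray:
    """Warp a 3D CT volume by a DVF of shape (3, Z, Y, X), in voxel units."""
    grid = np.meshgrid(*[np.arange(s) for s in ct.shape], indexing="ij")
    coords = [g + d for g, d in zip(grid, dvf)]  # displaced sampling points
    return map_coordinates(ct, coords, order=1, mode="nearest")

# Random data standing in for a diagnostic CT (HU) and a predicted DVF.
ct = np.random.uniform(-1000.0, 1500.0, size=(32, 64, 64))
dvf = np.random.normal(0.0, 0.5, size=(3, 32, 64, 64))
planning_ct = warp_ct(ct, dvf)
```

Linear interpolation (order=1) is a common choice for CT intensities; a network such as deepPERFECT would supply `dvf` instead of the random array used here.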
Affiliation(s)
- Hamed Hooshangnejad: Department of Biomedical Engineering, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA; Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA; Carnegie Center of Surgical Innovation, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Quan Chen: City of Hope Comprehensive Cancer Center, Duarte, CA 91010, USA
- Xue Feng: Carina Medical LLC, Lexington, KY 40513, USA
- Rui Zhang: Division of Computational Health Sciences, Department of Surgery, University of Minnesota, Minneapolis, MN 55455, USA
- Kai Ding: Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA; Carnegie Center of Surgical Innovation, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
7. Gu X, Zhang Y, Zeng W, Zhong S, Wang H, Liang D, Li Z, Hu Z. Cross-modality image translation: CT image synthesis of MR brain images using multi generative network with perceptual supervision. Comput Methods Programs Biomed 2023; 237:107571. PMID: 37156020. DOI: 10.1016/j.cmpb.2023.107571.
Abstract
BACKGROUND Computed tomography (CT) and magnetic resonance imaging (MRI) are the mainstream imaging technologies in clinical practice. CT imaging reveals high-quality anatomical and physiopathological structures, especially bone tissue, for clinical diagnosis, while MRI provides high resolution in soft tissue and is sensitive to lesions. CT combined with MRI has become a routine part of image-guided radiation treatment planning. METHODS In this paper, to reduce the radiation dose of CT examinations and to overcome the limitations of traditional virtual imaging technologies, we propose a generative MRI-to-CT transformation method with structural perceptual supervision. Even though the registered MRI-CT training pairs are structurally misaligned, the proposed method can better align the structural information of synthetic CT (sCT) images with the input MRI images while reproducing the CT modality in the MRI-to-CT cross-modality transformation. RESULTS We retrieved a total of 3416 paired brain MRI-CT images as the train/test dataset, comprising 1366 training images from 10 patients and 2050 test images from 15 patients. Several methods (the baseline methods and the proposed method) were evaluated using the HU difference map, the HU distribution, and various similarity metrics, including mean absolute error (MAE), structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), and normalized cross-correlation (NCC). In our quantitative experiments, the proposed method achieves the lowest mean MAE (0.147), the highest mean PSNR (19.27), and the highest mean NCC (0.431) on the overall CT test dataset. CONCLUSIONS Both qualitative and quantitative results for the synthetic CT validate that the proposed method preserves the structural information of bone tissue in the target CT with higher fidelity than the baseline methods. Furthermore, the proposed method provides better HU intensity reconstruction when simulating the distribution of the CT modality. These results indicate that the proposed method is worth further investigation.
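For readers wanting to reproduce the evaluation protocol reported above, the following is a minimal sketch of the four similarity metrics (MAE, SSIM, PSNR, NCC) computed with NumPy and scikit-image. The data_range value and the zero-mean NCC definition are common choices assumed here; the paper's exact settings are not specified in the abstract.

```python
# Sketch: sCT-vs-reference-CT similarity metrics on a single 2D slice.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Zero-mean normalized cross-correlation of two images."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def evaluate_sct(sct: np.ndarray, ct: np.ndarray, data_range: float) -> dict:
    return {
        "MAE": float(np.mean(np.abs(sct - ct))),
        "SSIM": structural_similarity(ct, sct, data_range=data_range),
        "PSNR": peak_signal_noise_ratio(ct, sct, data_range=data_range),
        "NCC": ncc(sct, ct),
    }

# Random stand-in slices scaled to a typical HU window.
ct = np.random.uniform(-1000.0, 1500.0, size=(256, 256))
sct = ct + np.random.normal(0.0, 25.0, size=ct.shape)
print(evaluate_sct(sct, ct, data_range=2500.0))
```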
Affiliation(s)
- Xianfan Gu: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Yu Zhang: Department of Radiology, West China Hospital, Sichuan University, Chengdu, 610041, China
- Wen Zeng: Department of Radiology, West China Hospital, Sichuan University, Chengdu, 610041, China
- Sihua Zhong: Central Research Institute, United Imaging Healthcare Group, Shanghai, 201807, China
- Haining Wang: Shenzhen United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, 518045, China
- Dong Liang: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Zhenlin Li: Department of Radiology, West China Hospital, Sichuan University, Chengdu, 610041, China
- Zhanli Hu: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
8. Douglass M, Gorayski P, Patel S, Santos A. Synthetic cranial MRI from 3D optical surface scans using deep learning for radiation therapy treatment planning. Phys Eng Sci Med 2023; 46:367-375. PMID: 36752996. PMCID: PMC10030422. DOI: 10.1007/s13246-023-01229-4.
Abstract
BACKGROUND Optical scanning technologies are increasingly being utilised to supplement treatment workflows in radiation oncology, such as surface-guided radiotherapy or 3D printing custom bolus. One limitation of optical scanning devices is the absence of internal anatomical information for the patient being scanned. As a result, conventional radiation therapy treatment planning using this imaging modality alone is not feasible. Deep learning is useful for automating various manual tasks in radiation oncology, most notably organ segmentation and treatment planning. Deep learning models have also been used to transform MRI datasets into synthetic CT datasets, facilitating the development of MRI-only radiation therapy planning. AIMS To train a pix2pix generative adversarial network to transform 3D optical scan data into estimated MRI datasets for a given patient, providing additional anatomical data for selected radiation therapy treatment sites. The proposed network may provide useful anatomical information for treatment planning of surface mould brachytherapy, total body irradiation, and total skin electron therapy, for example, without delivering any imaging dose. METHODS A 2D pix2pix GAN was trained on 15,000 axial MRI slices of healthy adult brains paired with corresponding external mask slices. The model was validated on a further 5000 previously unseen external mask slices. The predictions were compared with the ground-truth MRI slices using the multi-scale structural similarity index (MSSI) metric. A certified neuro-radiologist was subsequently consulted to provide an independent review of the model's performance in terms of anatomical accuracy and consistency. The network was then applied to a 3D photogrammetry scan of a test subject to demonstrate the feasibility of this novel technique. RESULTS The trained pix2pix network predicted MRI slices with a mean MSSI of 0.831 ± 0.057 on the 5000 validation images, indicating that a significant proportion of a patient's gross cranial anatomy can be estimated from the exterior contour. When independently reviewed by a certified neuro-radiologist, the model's performance was described as "quite amazing, but there are limitations in the regions where there is wide variation within the normal population." When the trained network was applied to a 3D model of a human subject acquired using optical photogrammetry, it could estimate the corresponding MRI volume with good qualitative accuracy, although no ground-truth MRI was available for quantitative comparison. CONCLUSIONS A deep learning model was developed to transform 3D optical scan data of a patient into an estimated MRI volume, potentially increasing the usefulness of optical scanning in radiation therapy planning. This work demonstrates that much of the human cranial anatomy can be predicted from the external shape of the head and may provide an additional source of valuable imaging data. Further research is required to investigate the feasibility of this approach in a clinical setting and to further improve the model's accuracy.
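Since the method above is a standard 2D pix2pix GAN conditioned on external mask slices, one training step of that general form can be sketched as follows. The tiny convolutional stacks, loss weighting, and random stand-in batch are placeholder assumptions; the authors' actual U-Net/PatchGAN configuration is not reproduced here.

```python
# Sketch: one pix2pix-style training step (external head mask -> MRI slice).
import torch
import torch.nn as nn

G = nn.Sequential(  # stand-in generator: mask (1 ch) -> MRI slice (1 ch)
    nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 3, padding=1), nn.Tanh(),
)
D = nn.Sequential(  # stand-in patch discriminator on (mask, MRI) pairs
    nn.Conv2d(2, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, stride=2, padding=1),
)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
adv, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

mask = torch.rand(8, 1, 64, 64)  # external-contour masks (stand-in batch)
mri = torch.rand(8, 1, 64, 64)   # paired ground-truth MRI slices

# Discriminator step: real pairs vs. generated pairs.
fake = G(mask)
d_real = D(torch.cat([mask, mri], dim=1))
d_fake = D(torch.cat([mask, fake.detach()], dim=1))
loss_d = adv(d_real, torch.ones_like(d_real)) + adv(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool D, plus L1 reconstruction (lambda=100 as in pix2pix).
d_fake = D(torch.cat([mask, fake], dim=1))
loss_g = adv(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, mri)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```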
Affiliation(s)
- Michael Douglass: Department of Radiation Oncology, Royal Adelaide Hospital, Adelaide, SA, 5000, Australia; Australian Bragg Centre for Proton Therapy and Research, SAHMRI, Adelaide, SA, 5000, Australia; School of Physical Sciences, University of Adelaide, Adelaide, SA, 5005, Australia
- Peter Gorayski: Department of Radiation Oncology, Royal Adelaide Hospital, Adelaide, SA, 5000, Australia; Australian Bragg Centre for Proton Therapy and Research, SAHMRI, Adelaide, SA, 5000, Australia; University of South Australia, Allied Health & Human Performance, Adelaide, SA, 5000, Australia
- Sandy Patel: Department of Radiology, Royal Adelaide Hospital, Adelaide, SA, 5000, Australia
- Alexandre Santos: Department of Radiation Oncology, Royal Adelaide Hospital, Adelaide, SA, 5000, Australia; Australian Bragg Centre for Proton Therapy and Research, SAHMRI, Adelaide, SA, 5000, Australia; School of Physical Sciences, University of Adelaide, Adelaide, SA, 5005, Australia
9. Deb SD, Jha RK, Kumar R, Tripathi PS, Talera Y, Kumar M. CoVSeverity-Net: an efficient deep learning model for COVID-19 severity estimation from Chest X-Ray images. Res Biomed Eng 2023. PMCID: PMC9901380. DOI: 10.1007/s42600-022-00254-8.
Abstract
Purpose COVID-19 is not going anywhere and is slowly becoming a part of our lives. The World Health Organization declared it a pandemic in 2020, and it has affected all of us in many ways. Several deep learning techniques have been developed to detect COVID-19 from chest X-ray images. COVID-19 infection severity scoring can aid in establishing the optimum course of treatment and care for a positive patient, as not all COVID-19-positive patients require special medical attention. Still, very few works report estimating the severity of the disease from chest X-ray images; the unavailability of large-scale datasets might be one reason. Methods We propose CoVSeverity-Net, a deep learning-based architecture for predicting the severity of COVID-19 from chest X-ray images. CoVSeverity-Net is trained on a public COVID-19 dataset curated by experienced radiologists for severity estimation: a large publicly available dataset was collected and divided into three levels of severity, namely mild, moderate, and severe. Results An accuracy of 85.71% is reported. With 5-fold cross-validation, we obtained an accuracy of 87.82 ± 6.25%; with 10-fold cross-validation, we obtained an accuracy of 91.26 ± 3.42%. These results compare favorably with other state-of-the-art architectures. Conclusion We strongly believe that this study has a high chance of reducing the workload of overworked front-line radiologists, speeding up patient diagnosis and treatment, and easing pandemic control. Future work will train a novel deep learning-based architecture on a larger dataset for severity estimation.
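The k-fold accuracies above (mean ± standard deviation over 5 and 10 folds) follow the standard cross-validation protocol sketched below; the placeholder classifier and random features are assumptions standing in for CoVSeverity-Net, whose architecture is not detailed in the abstract.

```python
# Sketch: k-fold cross-validated accuracy, reported as mean +/- std.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

X = np.random.rand(300, 128)           # stand-in image features
y = np.random.randint(0, 3, size=300)  # severity: 0=mild, 1=moderate, 2=severe

for k in (5, 10):
    cv = StratifiedKFold(n_splits=k, shuffle=True, random_state=0)
    scores = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                             cv=cv, scoring="accuracy")
    print(f"{k}-fold accuracy: {scores.mean():.4f} +/- {scores.std():.4f}")
```

Stratified folds keep the mild/moderate/severe proportions roughly constant across folds, which matters for class-imbalanced severity data.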
Affiliation(s)
- Sagar Deep Deb: Department of Electrical Engineering, Indian Institute of Technology Patna, Patna 801103, India
- Rajib Kumar Jha: Department of Electrical Engineering, Indian Institute of Technology Patna, Patna 801103, India
- Rajnish Kumar: Department of Paediatrics, Netaji Subhas Medical College & Hospital, Patna 801106, India
- Prem S. Tripathi: Department of Radiodiagnosis, Mahatma Gandhi Memorial Government Medical College, Indore 452001, India
- Yash Talera: Department of Radiodiagnosis, Mahatma Gandhi Memorial Government Medical College, Indore 452001, India
- Manish Kumar: Patna Medical College and Hospital, Bihar 800001, India
10. Mecheter I, Abbod M, Zaidi H, Amira A. Brain MR images segmentation using 3D CNN with features recalibration mechanism for segmented CT generation. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.03.039.