1. Moreau J, Mechtouff L, Rousseau D, Eker OF, Berthezene Y, Cho TH, Frindel C. Contrast quality control for segmentation task based on deep learning models-Application to stroke lesion in CT imaging. Front Neurol 2025; 16:1434334. PMID: 39995787; PMCID: PMC11849432; DOI: 10.3389/fneur.2025.1434334.
Abstract
Introduction: Medical imaging plays a crucial role in stroke management, and machine learning (ML) is increasingly used in this field, particularly for lesion segmentation. Despite advances in acquisition technology and segmentation architectures, image contrast remains one of the main challenges for subacute stroke lesion segmentation in computed tomography (CT) imaging.
Methods: To address this issue, we propose a method to assess the contrast quality of an image dataset with an ML model trained for segmentation. This method identifies the critical contrast level below which the model fails to learn meaningful content from images. Contrast is measured with Fisher's ratio, which estimates how well the stroke lesion is contrasted against the background. The critical contrast is found using three analyses: performance, graphical, and clustering. Defining this threshold improves dataset design and accelerates training by excluding low-contrast images.
Results: Applying this method to brain lesion segmentation in CT imaging yields a Fisher's ratio threshold of 0.05. A new model trained without the below-threshold images achieved similar results with only 60% of the training data, reducing initial training time by almost 30%. Moreover, the model trained without the low-contrast images performed as well as the model trained on all images when tested on another database.
Discussion: This study opens a discussion with clinicians concerning limitations, areas for improvement, and strategies for enhancing datasets and training models. While the methodology was applied only to stroke lesion segmentation in CT images, it could be adapted to other tasks.
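The contrast measure described in this abstract can be sketched in a few lines. The following is an illustrative implementation of Fisher's ratio between lesion and background intensities on synthetic data; the image size, mask position, and intensity offset are invented for the example, not taken from the paper.

```python
import numpy as np

def fishers_ratio(image, lesion_mask):
    """Fisher's ratio between lesion and background intensities:
    (mu_lesion - mu_background)^2 / (var_lesion + var_background)."""
    lesion = image[lesion_mask]
    background = image[~lesion_mask]
    num = (lesion.mean() - background.mean()) ** 2
    den = lesion.var() + background.var()
    return float(num / den)

# Synthetic 2D "slice": unit-variance noise background with a
# well-contrasted square lesion (intensity offset +1.0).
rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (64, 64))
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True
img[mask] += 1.0

fr_high = fishers_ratio(img, mask)                            # well above 0.05
fr_low = fishers_ratio(rng.normal(0.0, 1.0, (64, 64)), mask)  # no lesion contrast
```

With these synthetic settings the contrasted lesion scores well above the 0.05 threshold reported in the paper, while the zero-contrast image scores near zero.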
Affiliation(s)
- Juliette Moreau
- CarMeN, INSERM U1060, INRAe U1397, Université Lyon 1, INSA de Lyon, Pierre-Bénite, France
- CREATIS, Universite Claude Bernard Lyon 1, INSA Lyon, UMR CNRS 5220, Inserm U1294, Villeurbanne, France
- Laura Mechtouff
- CarMeN, INSERM U1060, INRAe U1397, Université Lyon 1, INSA de Lyon, Pierre-Bénite, France
- Department of Neurology, Hospices Civils de Lyon, Bron, France
- David Rousseau
- LARIS, UMR IRHS INRAe, Universite d'Angers, Angers, France
- Omer Faruk Eker
- CREATIS, Universite Claude Bernard Lyon 1, INSA Lyon, UMR CNRS 5220, Inserm U1294, Villeurbanne, France
- Department of Neurology, Hospices Civils de Lyon, Bron, France
- Yves Berthezene
- CREATIS, Universite Claude Bernard Lyon 1, INSA Lyon, UMR CNRS 5220, Inserm U1294, Villeurbanne, France
- Department of Neurology, Hospices Civils de Lyon, Bron, France
- Tae-Hee Cho
- CarMeN, INSERM U1060, INRAe U1397, Université Lyon 1, INSA de Lyon, Pierre-Bénite, France
- Department of Neurology, Hospices Civils de Lyon, Bron, France
- Carole Frindel
- CREATIS, Universite Claude Bernard Lyon 1, INSA Lyon, UMR CNRS 5220, Inserm U1294, Villeurbanne, France
- Institut Universitaire de France (IUF), Paris, France
2. Bosma LS, Hussein M, Jameson MG, Asghar S, Brock KK, McClelland JR, Poeta S, Yuen J, Zachiu C, Yeo AU. Tools and recommendations for commissioning and quality assurance of deformable image registration in radiotherapy. Phys Imaging Radiat Oncol 2024; 32:100647. PMID: 39328928; PMCID: PMC11424976; DOI: 10.1016/j.phro.2024.100647.
Abstract
Multiple tools are available for commissioning and quality assurance of deformable image registration (DIR), each with its own advantages and disadvantages in the context of radiotherapy. The selection of appropriate tools should depend on the DIR application, with its corresponding available input, desired output, and time requirement. Discussions were hosted by the ESTRO Physics Workshop 2021 on Commissioning and Quality Assurance for DIR in Radiotherapy. A consensus was reached on the requirements for commissioning and quality assurance for different applications, and on the combination of tools associated with each. For commissioning, we recommend the target registration error of manually annotated anatomical landmarks or the distance-to-agreement of manually delineated contours to evaluate alignment. These should be supplemented by the distance to discordance and/or biomechanical criteria to evaluate consistency and plausibility. Digital phantoms can be useful to evaluate DIR for dose accumulation but are currently available only for a limited range of anatomies, image modalities, and types of deformation. For quality assurance of DIR for contour propagation, we recommend at least a visual inspection of the registered image and contour. For quality assurance of DIR for warping quantitative information such as dose, Hounsfield units, or positron emission tomography data, we recommend visual inspection of the registered image together with image similarity to evaluate alignment, supplemented by inspection of the Jacobian determinant or bending energy to evaluate plausibility, and by the dose (gradient) to evaluate relevance. We acknowledge that some of these metrics are still missing from currently available commercial solutions.
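Two of the metrics recommended in this abstract are straightforward to compute numerically: the target registration error (TRE) of annotated landmarks and the Jacobian determinant of the displacement field. A minimal sketch follows; the synthetic landmarks and the zero displacement field are illustrative assumptions, not data from the paper.

```python
import numpy as np

def target_registration_error(fixed_pts, warped_pts):
    """Mean Euclidean distance (e.g. in mm) between corresponding landmarks."""
    return float(np.linalg.norm(fixed_pts - warped_pts, axis=1).mean())

def jacobian_determinant_2d(dvf):
    """Jacobian determinant of a 2D displacement field dvf[y, x, c]
    (c=0: dy, c=1: dx). Values <= 0 flag folding, i.e. implausible DIR."""
    dy_dy, dy_dx = np.gradient(dvf[..., 0])  # derivatives along y then x
    dx_dy, dx_dx = np.gradient(dvf[..., 1])
    return (1.0 + dy_dy) * (1.0 + dx_dx) - dy_dx * dx_dy

# Each landmark displaced by exactly 1 unit -> TRE = 1.0
fixed = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
warped = np.array([[0.0, 0.0, 1.0], [10.0, 1.0, 0.0]])
tre = target_registration_error(fixed, warped)

# Identity transform (zero displacement) -> Jacobian determinant = 1 everywhere
jac = jacobian_determinant_2d(np.zeros((8, 8, 2)))
```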
Affiliation(s)
- Lando S Bosma
- Department of Radiotherapy, University Medical Center Utrecht, Utrecht, the Netherlands
- Mohammad Hussein
- Metrology for Medical Physics Centre, National Physical Laboratory, Teddington, UK
- Michael G Jameson
- GenesisCare, Sydney, Australia
- School of Clinical Medicine, Medicine and Health, University of New South Wales, Sydney, Australia
- Kristy K Brock
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Jamie R McClelland
- Centre for Medical Image Computing and the Wellcome/EPSRC Centre for Interventional and Surgical Sciences, Dept. Medical Physics and Biomedical Engineering, University College London, London, UK
- Sara Poeta
- Medical Physics Department, Institut Jules Bordet - Université Libre de Bruxelles, Belgium
- Johnson Yuen
- School of Clinical Medicine, Medicine and Health, University of New South Wales, Sydney, Australia
- St. George Hospital Cancer Care Centre, Sydney NSW2217, Australia
- Ingham Institute for Applied Medical Research, Sydney, Australia
- Cornel Zachiu
- Department of Radiotherapy, University Medical Center Utrecht, Utrecht, the Netherlands
- Adam U Yeo
- Peter MacCallum Cancer Centre, Melbourne, VIC, Australia
- The Sir Peter MacCallum Department of Oncology, the University of Melbourne, Melbourne, VIC, Australia
3. Thadikemalla VSG, Focke NK, Tummala S. A 3D Sparse Autoencoder for Fully Automated Quality Control of Affine Registrations in Big Data Brain MRI Studies. J Imaging Inform Med 2024; 37:412-427. PMID: 38343221; PMCID: PMC10976877; DOI: 10.1007/s10278-023-00933-7.
Abstract
This paper presents a fully automated pipeline using a sparse convolutional autoencoder for quality control (QC) of affine registrations in large-scale T1-weighted (T1w) and T2-weighted (T2w) magnetic resonance imaging (MRI) studies. A customized 3D convolutional encoder-decoder (autoencoder) framework is proposed, and the network is trained in a fully unsupervised manner. For cross-validating the proposed model, we used 1000 correctly aligned MRI images from the Human Connectome Project Young Adult (HCP-YA) dataset. We propose that the quality of a registration is reflected in the reconstruction error of the autoencoder. To make the method applicable to unseen datasets, we further propose a dataset-specific optimal threshold calculation (using the reconstruction error) from ROC analysis, which requires a subset of correctly aligned images and artificially generated misalignments specific to that dataset. The calculated optimal threshold is then used to test the quality of the remaining affine registrations from the corresponding dataset. The framework was tested on four unseen datasets: Autism Brain Imaging Data Exchange (ABIDE I, 215 subjects), Information eXtraction from Images (IXI, 577 subjects), Open Access Series of Imaging Studies (OASIS4, 646 subjects), and the "Food and Brain" study (77 subjects). It achieved excellent performance for T1w and T2w affine registrations, with an accuracy of 100% on HCP-YA. On the four unseen datasets, it obtained accuracies of 81.81% for ABIDE I (T1w only), 93.45% (T1w) and 81.75% (T2w) for OASIS4, 92.59% for the "Food and Brain" study (T1w only), and 88-97% for IXI (both T1w and T2w, stratified by scanner vendor and magnetic field strength). Moreover, real failures from the "Food and Brain" and OASIS4 datasets were detected with sensitivities of 100% and 80% for T1w and T2w, respectively. In addition, AUCs above 0.88 were obtained in all scenarios during threshold calculation on the four test sets.
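The dataset-specific threshold selection described above can be sketched as a search over reconstruction-error cutoffs. The criterion below (maximising Youden's J over candidate thresholds) and the synthetic error distributions are illustrative assumptions; the paper specifies ROC analysis but not this exact selection rule.

```python
import numpy as np

def optimal_threshold(errors_ok, errors_fail):
    """Pick the reconstruction-error cutoff maximising Youden's J
    (sensitivity + specificity - 1) over all observed error values."""
    candidates = np.sort(np.concatenate([errors_ok, errors_fail]))
    best_t, best_j = float(candidates[0]), -1.0
    for t in candidates:
        sens = float(np.mean(errors_fail > t))   # misalignments flagged
        spec = float(np.mean(errors_ok <= t))    # correct alignments accepted
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_t = j, float(t)
    return best_t, best_j

# Well-separated synthetic reconstruction errors: QC should find a clean cutoff.
rng = np.random.default_rng(1)
ok_errors = rng.normal(0.10, 0.02, 200)    # correctly aligned images
fail_errors = rng.normal(0.40, 0.05, 200)  # artificially misaligned images
threshold, youden_j = optimal_threshold(ok_errors, fail_errors)
```

Registrations whose reconstruction error exceeds `threshold` would then be flagged for review.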
Affiliation(s)
- Venkata Sainath Gupta Thadikemalla
- Department of Electronics and Communication Engineering, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, Andhra Pradesh, India
- Niels K Focke
- Clinic for Neurology, University Medical Center, Göttingen, Germany
- Sudhakar Tummala
- Department of Electronics and Communication Engineering, School of Engineering and Sciences, SRM University-AP, Andhra Pradesh, India
4. Nenoff L, Amstutz F, Murr M, Archibald-Heeren B, Fusella M, Hussein M, Lechner W, Zhang Y, Sharp G, Vasquez Osorio E. Review and recommendations on deformable image registration uncertainties for radiotherapy applications. Phys Med Biol 2023; 68:24TR01. PMID: 37972540; PMCID: PMC10725576; DOI: 10.1088/1361-6560/ad0d8a.
Abstract
Deformable image registration (DIR) is a versatile tool used in many applications in radiotherapy (RT). DIR algorithms have been implemented in many commercial treatment planning systems providing accessible and easy-to-use solutions. However, the geometric uncertainty of DIR can be large and difficult to quantify, resulting in barriers to clinical practice. Currently, there is no agreement in the RT community on how to quantify these uncertainties and determine thresholds that distinguish a good DIR result from a poor one. This review summarises the current literature on sources of DIR uncertainties and their impact on RT applications. Recommendations are provided on how to handle these uncertainties for patient-specific use, commissioning, and research. Recommendations are also provided for developers and vendors to help users to understand DIR uncertainties and make the application of DIR in RT safer and more reliable.
Affiliation(s)
- Lena Nenoff
- Department of Radiation Oncology, Massachusetts General Hospital, Boston, MA, United States of America
- Harvard Medical School, Boston, MA, United States of America
- OncoRay—National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Helmholtz-Zentrum Dresden—Rossendorf, Dresden, Germany
- Helmholtz-Zentrum Dresden—Rossendorf, Institute of Radiooncology—OncoRay, Dresden, Germany
- Florian Amstutz
- Department of Physics, ETH Zurich, Switzerland
- Center for Proton Therapy, Paul Scherrer Institute, Villigen PSI, Switzerland
- Division of Medical Radiation Physics and Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Bern, Switzerland
- Martina Murr
- Section for Biomedical Physics, Department of Radiation Oncology, University of Tübingen, Germany
- Marco Fusella
- Department of Radiation Oncology, Abano Terme Hospital, Italy
- Mohammad Hussein
- Metrology for Medical Physics, National Physical Laboratory, Teddington, United Kingdom
- Wolfgang Lechner
- Department of Radiation Oncology, Medical University of Vienna, Austria
- Ye Zhang
- Center for Proton Therapy, Paul Scherrer Institute, Villigen PSI, Switzerland
- Greg Sharp
- Department of Radiation Oncology, Massachusetts General Hospital, Boston, MA, United States of America
- Harvard Medical School, Boston, MA, United States of America
- Eliana Vasquez Osorio
- Division of Cancer Sciences, The University of Manchester, Manchester, United Kingdom
5. Ma X, Cui H, Li S, Yang Y, Xia Y. Deformable medical image registration with global-local transformation network and region similarity constraint. Comput Med Imaging Graph 2023; 108:102263. PMID: 37487363; DOI: 10.1016/j.compmedimag.2023.102263.
Abstract
Deformable medical image registration achieves fast and accurate alignment between two images, enabling medical professionals to analyze images of different subjects in a unified anatomical space; as such, it plays an important role in many medical image studies. Current deep learning (DL)-based approaches for image registration learn the spatial transformation from one image to another directly, relying on a convolutional neural network and ground truth or similarity metrics. However, these methods use only a global similarity energy function to evaluate the similarity of a pair of images, ignoring the similarity of regions of interest (ROIs) within the images. This can limit the accuracy of the registration and affect the analysis of specific ROIs. Additionally, DL-based methods often estimate global spatial transformations directly, without considering the local spatial transformations of ROIs. To address these issues, we propose a novel global-local transformation network with a region similarity constraint that maximizes the similarity of ROIs within the images and estimates global and local spatial transformations simultaneously. Experiments on four public 3D MRI datasets demonstrate that the proposed method achieves the highest registration accuracy and generalization among state-of-the-art methods.
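The idea of a region similarity constraint, combining whole-image similarity with ROI-restricted similarity, can be sketched with normalized cross-correlation (NCC). The equal weighting and the NCC choice below are illustrative assumptions, not the paper's exact objective.

```python
import numpy as np

def ncc(a, b, eps=1e-8):
    """Normalized cross-correlation between two intensity arrays."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def global_local_similarity(fixed, warped, roi_mask, w=0.5):
    """Weighted combination of whole-image NCC and ROI-only NCC, so a
    registration is rewarded for aligning the ROI, not just the background."""
    return w * ncc(fixed, warped) + (1.0 - w) * ncc(fixed[roi_mask], warped[roi_mask])

rng = np.random.default_rng(0)
fixed = rng.normal(size=(32, 32))
roi = np.zeros((32, 32), dtype=bool)
roi[8:24, 8:24] = True

perfect = global_local_similarity(fixed, fixed.copy(), roi)               # near 1
degraded = global_local_similarity(fixed, fixed + rng.normal(size=(32, 32)), roi)
```

In a DL registration network, the negative of such a combined similarity would serve as (part of) the training loss.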
Affiliation(s)
- Xinke Ma
- National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an 710072, China
- Hengfei Cui
- National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an 710072, China
- Shuoyan Li
- National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an 710072, China
- Yibo Yang
- King Abdullah University of Science and Technology (KAUST), Thuwal 23955, Saudi Arabia
- Yong Xia
- National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an 710072, China
6. Tummala S, Kadry S, Nadeem A, Rauf HT, Gul N. An Explainable Classification Method Based on Complex Scaling in Histopathology Images for Lung and Colon Cancer. Diagnostics (Basel) 2023; 13:1594. PMID: 37174985; PMCID: PMC10178684; DOI: 10.3390/diagnostics13091594.
Abstract
Lung and colon cancers are among the leading causes of human mortality and morbidity. Early diagnostic workup of these diseases includes radiography, ultrasound, magnetic resonance imaging, and computed tomography; certain blood tumor markers for lung and colon carcinoma also aid in diagnosis. Despite laboratory tests and diagnostic imaging, histopathology remains the gold standard, providing cell-level images of the tissue under examination. Reading these images is time-consuming for histopathologists, and conventional diagnostic methods also require high-end equipment. This limits the number of patients receiving a final diagnosis and early treatment, and inter-observer error remains a possibility. In recent years, deep learning has shown promising results in the medical field, aiding early diagnosis and treatment according to disease severity. We propose an automated method for detecting lung (lung adenocarcinoma, lung benign, and lung squamous cell carcinoma) and colon (colon adenocarcinoma and colon benign) cancer subtypes from LC25000 histopathology images using EfficientNetV2 models, cross-validated and tested fivefold. EfficientNetV2 is a state-of-the-art deep learning architecture based on the principles of compound scaling and progressive learning; we evaluated its large, medium, and small variants. For the 5-class classification of lung and colon cancers, the EfficientNetV2-L model achieved an accuracy of 99.97%, AUC of 99.99%, F1-score of 99.97%, balanced accuracy of 99.97%, and Matthews correlation coefficient of 99.96% on the test set, outperforming existing methods. Using Grad-CAM, we created visual saliency maps to precisely locate the regions in the test-set histopathology images where the models attended most during cancer subtype prediction. These saliency maps may assist pathologists in designing better treatment strategies. The proposed pipeline could therefore be used in clinical settings for fully automated, explainable lung and colon cancer detection from histopathology images.
Affiliation(s)
- Sudhakar Tummala
- Department of Electronics and Communication Engineering, School of Engineering and Sciences, SRM University-AP, Amaravati 522240, Andhra Pradesh, India
- Seifedine Kadry
- Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway
- Artificial Intelligence Research Center (AIRC), Ajman University, Ajman 346, United Arab Emirates
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos P.O. Box 36, Lebanon
- Ahmed Nadeem
- Department of Pharmacology & Toxicology, College of Pharmacy, King Saud University, P.O. Box 2455, Riyadh 11451, Saudi Arabia
- Hafiz Tayyab Rauf
- Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, Stoke-on-Trent ST4 2DE, UK
- Nadia Gul
- Wah Medical College affiliated with POF Hospital, Wah Cantt 47040, Pakistan
7. A novel edge gradient distance metric for automated evaluation of deformable image registration quality. Phys Med 2022; 103:26-36. DOI: 10.1016/j.ejmp.2022.09.010.
8. Bierbrier J, Gueziri HE, Collins DL. Estimating medical image registration error and confidence: A taxonomy and scoping review. Med Image Anal 2022; 81:102531. PMID: 35858506; DOI: 10.1016/j.media.2022.102531.
Abstract
Given that image registration is a fundamental and ubiquitous task in both the clinical and research domains of medicine, errors in registration can have serious consequences. Since such errors can mislead clinicians during image-guided therapies or bias the results of downstream analyses, methods to estimate registration error are becoming more popular. To give structure to this new heterogeneous field, we developed a taxonomy and performed a scoping review of methods that quantitatively and automatically provide a dense estimation of registration error. The taxonomy breaks error estimation methods down into Approach (Image- or Transformation-based), Framework (Machine Learning or Direct), and Measurement (error or confidence) components. Following the PRISMA guidelines for scoping reviews, the 570 records found were reduced to twenty studies that met the inclusion criteria, which were then reviewed according to the proposed taxonomy. Trends in the field, advantages and disadvantages of the methods, and potential sources of bias are also discussed. We provide suggestions for best practices and identify areas of future research.
Affiliation(s)
- Joshua Bierbrier
- Department of Biomedical Engineering, McGill University, Montreal, QC, Canada; McConnell Brain Imaging Center, Montreal Neurological Institute and Hospital, Montreal, QC, Canada
- Houssem-Eddine Gueziri
- McConnell Brain Imaging Center, Montreal Neurological Institute and Hospital, Montreal, QC, Canada
- D Louis Collins
- Department of Biomedical Engineering, McGill University, Montreal, QC, Canada; McConnell Brain Imaging Center, Montreal Neurological Institute and Hospital, Montreal, QC, Canada; Department of Neurology and Neurosurgery, McGill University, Montreal, QC, Canada
9. Claessens M, Oria CS, Brouwer CL, Ziemer BP, Scholey JE, Lin H, Witztum A, Morin O, Naqa IE, Van Elmpt W, Verellen D. Quality Assurance for AI-Based Applications in Radiation Therapy. Semin Radiat Oncol 2022; 32:421-431. DOI: 10.1016/j.semradonc.2022.06.011.
10. Astley JR, Wild JM, Tahir BA. Deep learning in structural and functional lung image analysis. Br J Radiol 2022; 95:20201107. PMID: 33877878; PMCID: PMC9153705; DOI: 10.1259/bjr.20201107.
Abstract
The recent resurgence of deep learning (DL) has dramatically influenced the medical imaging field. Medical image analysis applications have been at the forefront of DL research efforts applied to multiple diseases and organs, including those of the lungs. The aims of this review are twofold: (i) to briefly overview DL theory as it relates to lung image analysis; (ii) to systematically review the DL research literature on the lung image analysis applications of segmentation, reconstruction, registration, and synthesis. The review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. 479 studies were initially identified from the literature search, of which 82 met the eligibility criteria. Segmentation was the most common lung image analysis DL application (65.9% of papers reviewed). DL has shown impressive results when applied to segmentation of the whole lung and other pulmonary structures, and great potential for applications in image registration, reconstruction, and synthesis. However, the majority of published studies have been limited to structural lung imaging, with only 12.9% of reviewed studies employing functional lung imaging modalities, highlighting significant opportunities for further research in this field. Although the field of DL in lung image analysis is rapidly expanding, concerns over inconsistent validation and evaluation strategies, intersite generalisability, transparency of methodological detail, and interpretability need to be addressed before widespread adoption in the clinical lung imaging workflow.
Affiliation(s)
- Jim M Wild
- Department of Oncology and Metabolism, The University of Sheffield, Sheffield, United Kingdom
11. Yang B, Chen X, Li J, Zhu J, Men K, Dai J. A feasible method to evaluate deformable image registration with deep learning-based segmentation. Phys Med 2022; 95:50-56. DOI: 10.1016/j.ejmp.2022.01.006.
12. A comparative study of machine learning methods for automated identification of radioisotopes using NaI gamma-ray spectra. Nucl Eng Technol 2021. DOI: 10.1016/j.net.2021.06.020.
13. Fully automated quality control of rigid and affine registrations of T1w and T2w MRI in big data using machine learning. Comput Biol Med 2021; 139:104997. PMID: 34753079; DOI: 10.1016/j.compbiomed.2021.104997.
Abstract
Background: Magnetic resonance imaging (MRI)-based morphometry and relaxometry are proven methods for the structural assessment of the human brain in several neurological disorders. These procedures are generally based on T1-weighted (T1w) and/or T2-weighted (T2w) MRI scans, and rigid and affine registrations to a standard template are essential steps in such studies. A fully automatic quality control (QC) of these registrations is therefore necessary in big-data scenarios to ensure that they are suitable for subsequent processing.
Method: A supervised machine learning (ML) framework is proposed in which similarity metrics such as normalized cross-correlation, normalized mutual information, and correlation ratio are computed locally and used as candidate features for cross-validation and testing of different ML classifiers. For 5-fold repeated stratified grid-search cross-validation, 400 correctly aligned and 2000 randomly generated misaligned images from the Human Connectome Project Young Adult (HCP-YA) dataset were used. The cross-validated models were tested on the Autism Brain Imaging Data Exchange (ABIDE I) and Information eXtraction from Images (IXI) datasets.
Results: The ensemble classifiers random forest and AdaBoost yielded the best performance, with F1-scores, balanced accuracies, and Matthews correlation coefficients in the range 0.95-1.00 during cross-validation. Predictive accuracies reached 0.99 on Test set #1 (ABIDE I), and 0.99 without and 0.96 with noise on Test set #2 (IXI, stratified with respect to scanner vendor and field strength).
Conclusions: The cross-validated and tested ML models can be used for QC of both T1w and T2w rigid and affine registrations in large-scale MRI studies.
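Computing similarity metrics locally (per patch) rather than over the whole image, as this abstract describes, yields a feature vector a QC classifier can consume. A minimal sketch using per-patch NCC follows; the patch size and the identical-versus-shifted demo images are illustrative choices, not the paper's settings.

```python
import numpy as np

def local_ncc_features(img_a, img_b, patch=16):
    """Per-patch normalized cross-correlation over non-overlapping patches
    of an image pair; returns one feature per patch for an ML classifier."""
    feats = []
    h, w = img_a.shape
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            pa = img_a[y:y + patch, x:x + patch].ravel()
            pb = img_b[y:y + patch, x:x + patch].ravel()
            pa = pa - pa.mean()
            pb = pb - pb.mean()
            denom = np.linalg.norm(pa) * np.linalg.norm(pb)
            feats.append(float((pa * pb).sum() / denom) if denom > 0 else 0.0)
    return np.array(feats)

rng = np.random.default_rng(0)
template = rng.normal(size=(64, 64))
aligned_feats = local_ncc_features(template, template.copy())
misaligned_feats = local_ncc_features(template, np.roll(template, 5, axis=1))
```

A well-aligned pair yields per-patch NCC near 1 everywhere, while a misaligned pair drops the values in the affected patches, which is what lets a classifier separate the two cases.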
14. Connolly L, Jamzad A, Kaufmann M, Farquharson CE, Ren K, Rudan JF, Fichtinger G, Mousavi P. Combined Mass Spectrometry and Histopathology Imaging for Perioperative Tissue Assessment in Cancer Surgery. J Imaging 2021; 7:203. PMID: 34677289; PMCID: PMC8539093; DOI: 10.3390/jimaging7100203.
Abstract
Mass spectrometry is an effective imaging tool for evaluating biological tissue to detect cancer. With the assistance of deep learning, this technology can be used as a perioperative tissue assessment tool that will facilitate informed surgical decisions. To achieve such a system requires the development of a database of mass spectrometry signals and their corresponding pathology labels. Assigning correct labels, in turn, necessitates precise spatial registration of histopathology and mass spectrometry data. This is a challenging task due to the domain differences and noisy nature of images. In this study, we create a registration framework for mass spectrometry and pathology images as a contribution to the development of perioperative tissue assessment. In doing so, we explore two opportunities in deep learning for medical image registration, namely, unsupervised, multi-modal deformable image registration and evaluation of the registration. We test this system on prostate needle biopsy cores that were imaged with desorption electrospray ionization mass spectrometry (DESI) and show that we can successfully register DESI and histology images to achieve accurate alignment and, consequently, labelling for future training. This automation is expected to improve the efficiency and development of a deep learning architecture that will benefit the use of mass spectrometry imaging for cancer diagnosis.
Affiliation(s)
- Laura Connolly
- School of Computing, Queen’s University, Kingston, ON K7L 3N6, Canada
- Amoon Jamzad
- School of Computing, Queen’s University, Kingston, ON K7L 3N6, Canada
- Martin Kaufmann
- Department of Surgery, Queen’s University, Kingston, ON K7L 3N6, Canada
- Catriona E. Farquharson
- School of Computing, Queen’s University, Kingston, ON K7L 3N6, Canada
- Kevin Ren
- Department of Pathology and Molecular Medicine, Queen’s University, Kingston, ON K7L 3N6, Canada
- John F. Rudan
- Department of Surgery, Queen’s University, Kingston, ON K7L 3N6, Canada
- Gabor Fichtinger
- School of Computing, Queen’s University, Kingston, ON K7L 3N6, Canada
- Parvin Mousavi
- School of Computing, Queen’s University, Kingston, ON K7L 3N6, Canada
15. Field M, Hardcastle N, Jameson M, Aherne N, Holloway L. Machine learning applications in radiation oncology. Phys Imaging Radiat Oncol 2021; 19:13-24. PMID: 34307915; PMCID: PMC8295850; DOI: 10.1016/j.phro.2021.05.007.
Abstract
Machine learning technology has a growing impact on radiation oncology, with an increasing presence in research and industry. The prevalence of diverse data, including 3D imaging and 3D radiation dose delivery, presents potential for future automation and scope for treatment improvements for cancer patients. Harnessing this potential requires standardization of tools and data, and focused collaboration between fields of expertise. The rapid advancement of radiation oncology treatment technologies presents opportunities for machine learning integration, with investments targeted towards data quality, data extraction, software, and engagement with clinical expertise. In this review, we provide an overview of machine learning concepts before reviewing advances in applying machine learning to radiation oncology and integrating these techniques into radiation oncology workflows. Several key areas of the radiation oncology workflow are outlined where machine learning has been applied and where it can have a significant impact in terms of efficiency, consistency in treatment, and overall treatment outcomes. This review highlights that machine learning has key early applications in radiation oncology due to the repetitive nature of many tasks that also currently undergo human review. Standardized data management of routinely collected imaging and radiation dose data is also highlighted as enabling engagement in machine learning research and the integration of these technologies into clinical workflows to benefit patients. Physicists need to be part of the conversation to facilitate this technical integration.
Affiliation(s)
- Matthew Field
- South Western Sydney Clinical School, Faculty of Medicine, University of New South Wales, Sydney, NSW, Australia; Ingham Institute for Applied Medical Research, Sydney, NSW, Australia
- Nicholas Hardcastle
- Physical Sciences, Peter MacCallum Cancer Centre, Melbourne, VIC, Australia; Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW, Australia
- Michael Jameson
- GenesisCare, Alexandria, NSW, Australia; St Vincent's Clinical School, Faculty of Medicine, University of New South Wales, Australia
- Noel Aherne
- Mid North Coast Cancer Institute, NSW, Australia; Rural Clinical School, Faculty of Medicine, University of New South Wales, Sydney, NSW, Australia
- Lois Holloway
- South Western Sydney Clinical School, Faculty of Medicine, University of New South Wales, Sydney, NSW, Australia; Ingham Institute for Applied Medical Research, Sydney, NSW, Australia; Cancer Therapy Centre, Liverpool Hospital, Sydney, NSW, Australia; Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW, Australia
|
16
|
Abstract
This paper presents a review of deep learning (DL)-based medical image registration methods. We summarize the latest developments and applications of DL-based registration methods in the medical field. These methods are classified into seven categories according to their approach, function, and popularity. A detailed review of each category is presented, highlighting important contributions and identifying specific challenges, followed by a short assessment summarizing the category's achievements and future potential. We provide a comprehensive comparison among DL-based methods for lung and brain registration using benchmark datasets. Lastly, we analyze the statistics of all the cited works from various aspects, revealing the popularity and future trends of DL-based medical image registration.
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
|