1. Zhou Z, Yin P, Liu Y, Hu J, Qian X, Chen G, Hu C, Dai Y. Uncertain prediction of deformable image registration on lung CT using multi-category features and supervised learning. Med Biol Eng Comput 2024; 62:2669-2686. PMID: 38658497. DOI: 10.1007/s11517-024-03092-1.
Abstract
Assessing deformable registration uncertainty is important for the safety and reliability of registration methods in clinical applications, but it is typically done through a manual, time-consuming procedure. We propose a novel automatic method to predict registration uncertainty based on multi-category features and supervised learning. Three types of features, including deformation field statistical features, deformation field physiologically realistic features, and image similarity features, are introduced and calculated to train a random forest regressor for local registration uncertainty prediction. Deformation field statistical features represent the numerical stability of registration optimization and are correlated with the uncertainty of deformation fields; deformation field physiologically realistic features represent the biomechanical properties of organ motion and mathematically reflect the physiological reality of deformation; image similarity features reflect the similarity between the warped image and the fixed image. Together, the multi-category features comprehensively reflect registration uncertainty. A spatially adaptive random-perturbation strategy is also introduced to accurately simulate the spatial distribution of registration uncertainty, which makes the deformation field statistical features more discriminative. Experiments were conducted on three publicly available thoracic CT image datasets: seventeen randomly selected image pairs were used to train the random forest model, and nine image pairs were used to evaluate the prediction model. Quantitative experiments on lung CT images show that the proposed method outperforms the baseline method for uncertainty prediction of both classical iterative optimization-based registration and deep learning-based registration across different registration qualities.
The proposed method achieves good performance for registration uncertainty prediction and has great potential to improve the accuracy of uncertainty prediction.
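As a rough illustration of the supervised-learning setup this abstract describes (multi-category local features feeding a random forest regressor that predicts local registration error), here is a minimal sketch using scikit-learn. The feature layout, the synthetic data, and the train/test split are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of supervised registration-uncertainty prediction:
# per-location features from three assumed categories (deformation-field
# statistics, physiological realism, image similarity) train a random
# forest regressor against known local registration errors.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

n_samples = 500
# Assumed layout: 3 statistical + 3 physiological + 3 similarity features
X = rng.normal(size=(n_samples, 9))
# Simulated "ground-truth" local registration error (e.g. in mm)
y = np.abs(X[:, 0]) + 0.5 * np.abs(X[:, 4]) + 0.1 * rng.normal(size=n_samples)

X_train, X_test = X[:400], X[400:]
y_train, y_test = y[:400], y[400:]

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)  # predicted local uncertainty per location
mae = float(np.mean(np.abs(pred - y_test)))
print(f"held-out MAE: {mae:.3f}")
```

In practice the regressor would be trained on features computed from real deformation fields and image pairs, with reference errors derived from landmarks or simulated perturbations.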
Affiliation(s)
- Zhiyong Zhou
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Pengfei Yin
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Yuhang Liu
- School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China
- Jisu Hu
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Xusheng Qian
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Guangqiang Chen
- The Second Affiliated Hospital of Soochow University, Suzhou, 215163, China
- Chunhong Hu
- The First Affiliated Hospital of Soochow University, Suzhou, 215163, China
- Yakang Dai
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
2. Ren X, Song H, Zhang Z, Yang T. MSRA-Net: multi-channel semantic-aware and residual attention mechanism network for unsupervised 3D image registration. Phys Med Biol 2024; 69:165011. PMID: 39047770. DOI: 10.1088/1361-6560/ad6741.
Abstract
Objective. Convolutional neural networks (CNNs) are developing rapidly in the field of medical image registration, and the U-Net architecture has further improved registration precision. However, U-Net may discard important information during the encoding and decoding steps, leading to a decline in accuracy. To solve this problem, a multi-channel semantic-aware and residual attention mechanism network (MSRA-Net) is proposed in this paper. Approach. The proposed network achieves efficient information aggregation by extracting the features of different channels. First, a context-aware module (CAM) is designed to extract valuable contextual information, with depth-wise separable convolution employed in the CAM to alleviate the computational burden. Then, a new multi-channel semantic-aware module (MCSAM) is designed for more comprehensive fusion of up-sampled features. Additionally, a residual attention module is introduced in the up-sampling process to extract more semantic information and minimize information loss. Main results. This study uses the Dice score, average symmetric surface distance, and negative Jacobian determinant as evaluation metrics. The experimental results demonstrate that the proposed MSRA-Net achieves the highest accuracy compared to several state-of-the-art methods, as well as the highest Dice score across multiple datasets, indicating the superior generalization capability of the model. Significance. The proposed MSRA-Net offers a novel approach to improving medical image registration accuracy, with implications for various clinical applications. The implementation is available at https://github.com/shy922/MSRA-Net.
Affiliation(s)
- Xiaozhen Ren
- Key Laboratory of Grain Information Processing and Control, Henan University of Technology, Ministry of Education, Zhengzhou 450001, People's Republic of China
- Henan Key Laboratory of Grain Photoelectric Detection and Control, Henan University of Technology, Zhengzhou 450001, People's Republic of China
- School of Artificial Intelligence and Big Data, Henan University of Technology, Zhengzhou 450001, People's Republic of China
- Haoyuan Song
- Key Laboratory of Grain Information Processing and Control, Henan University of Technology, Ministry of Education, Zhengzhou 450001, People's Republic of China
- Henan Key Laboratory of Grain Photoelectric Detection and Control, Henan University of Technology, Zhengzhou 450001, People's Republic of China
- School of Information Science and Engineering, Henan University of Technology, Zhengzhou 450001, People's Republic of China
- Zihao Zhang
- Key Laboratory of Grain Information Processing and Control, Henan University of Technology, Ministry of Education, Zhengzhou 450001, People's Republic of China
- Henan Key Laboratory of Grain Photoelectric Detection and Control, Henan University of Technology, Zhengzhou 450001, People's Republic of China
- School of Artificial Intelligence and Big Data, Henan University of Technology, Zhengzhou 450001, People's Republic of China
- Tiejun Yang
- Key Laboratory of Grain Information Processing and Control, Henan University of Technology, Ministry of Education, Zhengzhou 450001, People's Republic of China
- Henan Key Laboratory of Grain Photoelectric Detection and Control, Henan University of Technology, Zhengzhou 450001, People's Republic of China
- School of Artificial Intelligence and Big Data, Henan University of Technology, Zhengzhou 450001, People's Republic of China
3. Bierbrier J, Gueziri HE, Collins DL. Estimating medical image registration error and confidence: A taxonomy and scoping review. Med Image Anal 2022; 81:102531. PMID: 35858506. DOI: 10.1016/j.media.2022.102531.
Abstract
Given that image registration is a fundamental and ubiquitous task in both clinical and research domains of the medical field, errors in registration can have serious consequences. Since such errors can mislead clinicians during image-guided therapies or bias the results of a downstream analysis, methods to estimate registration error are becoming more popular. To give structure to this new heterogeneous field, we developed a taxonomy and performed a scoping review of methods that quantitatively and automatically provide a dense estimation of registration error. The taxonomy breaks down error estimation methods into Approach (Image- or Transformation-based), Framework (Machine Learning or Direct), and Measurement (error or confidence) components. Following the PRISMA guidelines for scoping reviews, the 570 records found were reduced to twenty studies that met the inclusion criteria, which were then reviewed according to the proposed taxonomy. Trends in the field, advantages and disadvantages of the methods, and potential sources of bias are also discussed. We provide suggestions for best practices and identify areas of future research.
Affiliation(s)
- Joshua Bierbrier
- Department of Biomedical Engineering, McGill University, Montreal, QC, Canada; McConnell Brain Imaging Center, Montreal Neurological Institute and Hospital, Montreal, QC, Canada
- Houssem-Eddine Gueziri
- McConnell Brain Imaging Center, Montreal Neurological Institute and Hospital, Montreal, QC, Canada
- D Louis Collins
- Department of Biomedical Engineering, McGill University, Montreal, QC, Canada; McConnell Brain Imaging Center, Montreal Neurological Institute and Hospital, Montreal, QC, Canada; Department of Neurology and Neurosurgery, McGill University, Montreal, QC, Canada
4. Lee EJ, Plishker W, Hata N, Shyn PB, Silverman SG, Bhattacharyya SS, Shekhar R. Rapid Quality Assessment of Nonrigid Image Registration Based on Supervised Learning. J Digit Imaging 2021; 34:1376-1386. PMID: 34647199. PMCID: PMC8669090. DOI: 10.1007/s10278-021-00523-5.
Abstract
When preprocedural images are overlaid on intraprocedural images, interventional procedures benefit because more structures are revealed than intraprocedural imaging alone can show. However, image artifacts, respiratory motion, and challenging scenarios can limit the accuracy of the multimodality image registration required before image overlay. Ensuring the accuracy of registration during interventional procedures is therefore critically important. The goal of this study was to develop a novel framework that can assess the quality (i.e., accuracy) of nonrigid multimodality image registration accurately in near real time. We constructed a solution using registration quality metrics that can be computed rapidly and combined to form a single binary assessment of image registration quality as either successful or poor. Using expert-generated quality metrics as ground truth, we trained and tested this system on existing clinical data with a supervised learning method. Using the trained quality classifier, the proposed framework identified successful image registration cases with an accuracy of 81.5%. The current implementation produced the classification result in 5.5 s, fast enough for typical interventional radiology procedures. Using supervised learning, we have shown that the described framework could enable a clinician to obtain confirmation or caution about registration results during clinical procedures.
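The framework this abstract describes, combining rapidly computable quality metrics into a single binary "successful vs. poor" assessment via supervised learning, can be sketched as follows. The specific metrics, the classifier choice, and the synthetic data are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a binary registration-quality classifier: several quick
# quality metrics (e.g. intensity similarity, deformation smoothness)
# are combined by a supervised classifier into successful (1) / poor (0).
# Metric semantics, classifier, and data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

n_cases = 300
# Assumed metric vector per registration case, e.g.
# [similarity score, Jacobian-based smoothness, inverse-consistency, ...]
metrics = rng.normal(size=(n_cases, 5))
# Expert-derived ground-truth labels (1 = successful, 0 = poor)
labels = (metrics[:, 0] + 0.5 * metrics[:, 1] > 0).astype(int)

clf = RandomForestClassifier(n_estimators=50, random_state=1)
clf.fit(metrics[:240], labels[:240])

verdict = clf.predict(metrics[240:])  # binary assessment per held-out case
accuracy = float(np.mean(verdict == labels[240:]))
print(f"held-out accuracy: {accuracy:.2f}")
```

The appeal of this design is that each metric is cheap to compute, so the whole assessment can run within the seconds-scale budget of an interventional procedure.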
Affiliation(s)
- Eung-Joo Lee
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, USA
- William Plishker
- Institute for Advanced Computer Studies, University of Maryland, College Park, MD, USA
- Shuvra S. Bhattacharyya
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, USA
- Department of Radiology, Brigham and Women’s Hospital and Harvard Medical School, Boston, MA, USA
- Raj Shekhar
- Institute for Advanced Computer Studies, University of Maryland, College Park, MD, USA
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Hospital, Washington, DC, USA
5. Grimwood A, Rivaz H, Zhou H, McNair HA, Jakubowski K, Bamber JC, Tree AC, Harris EJ. Improving 3D ultrasound prostate localisation in radiotherapy through increased automation of interfraction matching. Radiother Oncol 2020; 149:134-141. PMID: 32387546. PMCID: PMC7456791. DOI: 10.1016/j.radonc.2020.04.044.
Abstract
BACKGROUND AND PURPOSE: Daily image guidance is standard care for prostate radiotherapy. Innovations which improve the accuracy and efficiency of ultrasound guidance are needed, particularly with respect to reducing interobserver variation. This study explores automation tools for this purpose, demonstrated on the Elekta Clarity Autoscan®. The study was conducted as part of the Clarity-Pro trial (NCT02388308).
MATERIALS AND METHODS: Ultrasound scan volumes were collected from 32 patients. Prostate matches were performed using two proposed workflows and the results compared with Clarity's proprietary software. Gold-standard matches derived from manually localised landmarks provided a reference. The two workflows incorporated a custom 3D image registration algorithm, which was benchmarked against a third-party application (Elastix).
RESULTS: Significant reductions in match errors were reported for both workflows compared to the standard protocol. Median (IQR) absolute errors in the left-right, anteroposterior and craniocaudal axes were lowest for the manually initiated workflow: 0.7 (1.0) mm, 0.7 (0.9) mm, 0.6 (0.9) mm, compared to 1.0 (1.7) mm, 0.9 (1.4) mm, 0.9 (1.2) mm for Clarity. Median interobserver variation was <0.01 mm in all axes for both workflows, compared to 2.2 mm, 1.7 mm, 1.5 mm for Clarity in the left-right, anteroposterior and craniocaudal axes. Mean matching time was also reduced to 43 s, from 152 s for Clarity. Inexperienced users of the proposed workflows attained better match precision than experienced users on Clarity.
CONCLUSION: Automated image registration with effective input and verification steps should increase the efficacy of interfraction ultrasound guidance compared to the current commercially available tools.
Affiliation(s)
- Alexander Grimwood
- Division of Radiotherapy and Imaging, The Institute of Cancer Research and Royal Marsden Hospital Trust, Sutton, UK
- Hassan Rivaz
- Department of Electrical and Computer Engineering, Concordia University, Montreal, Canada
- Hang Zhou
- Department of Electrical and Computer Engineering, Concordia University, Montreal, Canada
- Helen A McNair
- Division of Radiotherapy and Imaging, The Institute of Cancer Research and Royal Marsden Hospital Trust, Sutton, UK
- Jeffrey C Bamber
- Division of Radiotherapy and Imaging, The Institute of Cancer Research and Royal Marsden Hospital Trust, Sutton, UK
- Alison C Tree
- Division of Radiotherapy and Imaging, The Institute of Cancer Research and Royal Marsden Hospital Trust, Sutton, UK
- Emma J Harris
- Division of Radiotherapy and Imaging, The Institute of Cancer Research and Royal Marsden Hospital Trust, Sutton, UK